Tag Archives: cognitive research

Children playing musical instruments in Scotland. Photograph: Murdo Macleod/Murdo Macleod

Learning to play a musical instrument doesn’t make you smarter, study finds

Children playing musical instruments in Scotland. Photograph: Murdo Macleod/Murdo Macleod

There seems to be a general belief, especially among parents, that if you send children to music lessons the experience will make them smarter. However, a group of researchers at the University of Toronto, intrigued by this oft-repeated yet never proven link, conducted a study to see if the belief genuinely holds. Their findings suggest, in the authors’ own words, that having a child take music lessons purely for the presumed educational benefit is a “complete waste of time.”

Prof Glenn Schellenberg, a psychologist at the University of Toronto, led the study, in which a group of 130 children aged 10 to 12 were assessed for a link between musical training and a presumed increase in intelligence. The researchers concentrated on two key personality traits, conscientiousness and openness to new experiences, which they believe are essential to mental processes like memory, learning and reasoning.

“We were motivated by the fact that kids who take music lessons are particularly good students, in school they actually do better than you would predict from their IQ, so obviously something else is going on and we thought that personality might be the thing,” Prof Schellenberg explained.

The psychologists correlated these personality traits with each child’s school grades and IQ scores, arriving at an equation. Once the likely contribution of each child’s personality was subtracted from the equation, the link between musical training and intelligence – or, better said, achievement – disappeared.
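
To make the “subtract personality” step concrete, here is a minimal sketch of a partial-correlation analysis of this kind in Python. The data, coefficients and variable names are all invented for illustration; this is not the study’s actual model.

```python
# Minimal sketch, on made-up data: personality drives BOTH music
# lessons and grades, so the raw lessons-grades correlation is large
# but shrinks toward zero once personality is regressed out.
import numpy as np

rng = np.random.default_rng(0)
n = 130  # same sample size as the study; data entirely simulated

conscientiousness = rng.normal(size=n)
openness = rng.normal(size=n)
music_years = 0.7 * conscientiousness + 0.5 * openness + rng.normal(size=n)
grades = 0.8 * conscientiousness + 0.4 * openness + rng.normal(size=n)

def residuals(y, predictors):
    """Residuals of y after least-squares regression on the predictors."""
    X = np.column_stack([np.ones(n), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

personality = [conscientiousness, openness]
raw = np.corrcoef(music_years, grades)[0, 1]
partial = np.corrcoef(residuals(music_years, personality),
                      residuals(grades, personality))[0, 1]
print(f"raw correlation:     {raw:.2f}")      # sizeable
print(f"partial correlation: {partial:.2f}")  # near zero
```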

The psychologists explain that musically trained children perform well in school for social reasons, not cognitive ones. Typically, these children grow up in homes where the parents are well educated and earn above-average incomes, offering them a better upbringing, in general, than that of a typical household. A more privileged background, then, is the actual difference-maker, according to the researchers.

To emphasize their point further, the researchers were even able to estimate rather accurately how long a child had been taking music lessons based on their answers to a personality questionnaire alone. Previous studies had pointed to conclusions the other way around – that musical training does in fact boost cognitive capabilities.

“What this means is that kids who take music lessons have different personalities, and many or virtually all of the findings that have shown links between music and cognition may be an artifact of individual differences in personality,” he said.

“You can explain almost all of the data that are out there by saying that high-functioning kids take music lessons.”

Prof Daniel Levitin, a psychologist from McGill University in Montreal, said this did not mean music lessons were of no value, however.

“There are benefits to having a society where more people are engaged with the arts, so even if music instruction doesn’t make you a better mathematician or a better athlete, even if it only gives you the enjoyment of music, I think that is a good end in and of itself,” he said.

Now, in my humble opinion, I agree with Schellenberg’s conclusions in one respect, but disagree in another. Yes, it is very likely that personality traits help children achieve at school. On the other hand, isn’t musical training an important part of nurturing those very traits and values? Playing the piano, for instance, puts a lot of strain and incentive on memory, perspective and, most of all, perseverance – a highly important personality trait. So, while the researchers may have accurately signaled the root of child achievement, their conclusion that musical training adds no weight to it may be flawed.

ZME readers, discuss.

The 3D multiple object tracking test. Eight spheres are presented; after a while, four of them are highlighted, then turn back to their normal color. All the spheres then change position, and the participant needs to correctly identify the original four. The spheres’ velocity is decreased if you get it wrong, or increased if you respond correctly.

Professional athletes learn faster than University students

There’s a common stereotype that depicts athletes as grunts – all brawn and no brain. In reality, nothing could be further from the truth. Athletes, the good ones at least, seemingly possess above-average intelligence, and a recent study by cognitive scientists at the University of Montreal adds further weight to this statement. In the study, professional and amateur athletes bested university students in cognitive tests. The findings may help unravel the mechanisms that allow some athletes to be so much better than everyone else at what they do.

“What we found is spectacular, the difference,” said University of Montreal researcher Jocelyn Faubert in an interview Thursday for CBC News. “It’s not just little.”

Manchester United’s top star, Wayne Rooney, might not be the sharpest knife in the drawer off the pitch, but on it he is simply formidable.

Intelligence isn’t about being good at math, science or art, although its underlying mechanisms help an intelligent person perform well in these fields. It’s about being able to learn quickly, improve quickly and be creative at what you do.

“We all have stereotypes about athletes: ‘They can’t even say two words straight, they’re not very good at expressing themselves,’ and so on,” Faubert said. “But their brain is busy doing something else.”

The researchers asked study participants – professional NHL hockey players and English Premier League soccer players; elite amateur athletes involved in team or combat sports, drawn from the NCAA American university sports program and a European Olympic training centre; and, lastly, non-athlete University of Montreal students – to engage in a cognitive test that involved paying attention to and tracking fast-moving objects, which the researchers describe as calling on skills similar to those used when driving or crossing a busy street.

[ALSO Read] Evolution dictated by brawn instead of brain

Mens sana in corpore sano

None of the study participants had ever seen this kind of test before, so everybody started from a clean slate in learning the skills required to score high. Their performance was recorded over 15 learning sessions, at the end of which the results came in – nothing less than staggering.

Professional athletes both started better and improved more quickly than the other two groups. The amateur athletes started off performing at a similar level to the university students, but improved more quickly. There were no recorded performance differences between males and females. So the university students, despite being enrolled in a higher-education establishment, scored lower on a cognitive test than athletes, who supposedly use their brains less.

The 3D multiple object tracking test. Eight spheres are presented; after a while, four of them are highlighted, then turn back to their normal color. All the spheres then change position, and the participant needs to correctly identify the original four. The spheres’ velocity is decreased if you get it wrong, or increased if you respond correctly.
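
The speed rule described in the caption is a classic adaptive staircase. Here is a toy sketch of the idea, with a simple one-up/one-down rule and invented parameters; the study’s actual task is of course far more sophisticated.

```python
# Toy staircase: sphere speed rises after a correct trial and falls
# after an error, homing in on the participant's speed threshold.
import random

def run_staircase(threshold, n_trials=15, speed=1.0, step=1.25):
    """Simulate a one-up/one-down speed staircase; returns final speed."""
    for _ in range(n_trials):
        # Stand-in for the real tracking task: success becomes less
        # likely as speed climbs past the (hidden) threshold.
        correct = random.random() < threshold / (threshold + speed)
        speed = speed * step if correct else speed / step
    return speed

random.seed(1)
print(f"estimated speed threshold: {run_staircase(threshold=3.0):.2f}")
```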

Although the test might look like it was centered on athletes, since it involved tracking moving objects, the researchers claim it’s just as good at assessing academic potential, since it involves one key factor in learning – attention. Athletes’ superior ability to focus and pay attention means “if they concentrated on something else, they’d probably be good at that, too,” Faubert believes.

Faubert and his colleagues now intend to see whether the three-dimensional multiple-object tracking task can improve people’s ability to pay attention and learn – something people with a low attention span, such as those suffering from ADHD, or the elderly, could benefit from.

Findings were reported in the journal Scientific Reports.

Boredom (c) Paul Popper/Popperfoto/Getty Images

Understanding boredom and whether or not it can be cured

(c) Paul Popper/Popperfoto/Getty Images

Boredom seems to be a dominant “affliction” of the 21st century. That’s not to say it’s solely a modern problem – people have been bored since the dawn of mankind, and some of the world’s greatest advancements actually surfaced from the need to battle boredom. Understanding, on an empirical level, what boredom is and what causes it – and, in turn, how to defeat it systematically – is a matter that has eluded philosophers, free thinkers and psychologists alike.

Recently, a team of scientists at York University in Canada compiled as many as 100 studies, published from the turn of the last century to the present day, in order to find a common denominator and form a unified theory of boredom. Their findings are interesting, to say the least – they suggest that boredom is the product of a conflict between attention and environmental factors: either we focus too much or too little attention on a particular task.

“Boredom is a neglected topic in psychology,” noted Timothy Wilson, a leading social psychologist at the University of Virginia who is undertaking boredom studies of his own. He calls the new review a “good, solid paper,” adding, “There is a lot of research on attention and mind wandering, but [until now], no attempt to bring it together under the topic of boredom per se.”

There has been little effort directed towards analyzing the cognitive processes that underlie boredom, mostly because it is usually viewed as an effect or a consequence rather than a standalone condition. Don’t get me wrong – I’m not implying boredom is like a disease, or even comparable; rather, it is a distinct state of mind that isn’t necessarily linked with known conditions, like depression. Exactly what defines boredom is what cognitive psychologist John Eastwood and his team at York University have been researching.

Boredom is innocent most of the time. Falling asleep in class is one thing; losing sight of important maneuvers while piloting an aircraft or driving a ten-tonne truck is a whole different matter. Boredom can lead to excessive reliance on automatic behavior, which sidetracks you from focusing on an important task. It’s also a factor that might push people towards alcohol or drug abuse, gambling, or obsessive-compulsive disorder.

“Boredom has at its core the desiring of satisfying engagement but not being able to achieve that,” Eastwood said. “And attention is the cognitive process whereby we interface with both the external world and our internal thoughts and feelings. So it falls logically that attention must be at the core of the definition.”

“I’m Bored”

Among the myriad of studies the researchers reviewed was a 1989 experiment at Clark University. Back then, scientists asked study participants to read and remember a moderately engaging magazine article while a TV set played in a room next door. When the TV was too loud, participants reported feelings of frustration, but not boredom. When the television noise was more subtle, however, participants reported that the experience was boring. A distraction was present in both cases, but boredom surfaced only in the latter.

Another experiment comes from Bond University and looked at how people reacted to ongoing background conversations as they completed one of three tasks: an assembly task that didn’t need much attention, an uninteresting proofreading task that required monitoring, and a management task that required sustained attention but was also quite interesting. While completing the task that required the least attention, the conversations actually entertained the participants and decreased boredom. The second task is where things become truly interesting: while performing the dull task, which nevertheless required focused attention, the background conversations triggered feelings of boredom. In the last task, the background conversation was of little consequence, since the task was so engaging that participants simply tuned the noise out.

“Putting attention at the center of the experience…allowed us to explain the subjective experience of boredom: time passing slowly, difficulty focusing, disordered arousal, disrupted agency, negative affect,” said Eastwood.

“When we are in a stimulation-intense environment,” he continues, “we are more likely to experience things as unsatisfying because our attention is being pulled in different directions.”

The conclusion is that boredom is triggered by a combination of inner focus and environment – meaning that, in order to tackle boredom, you need to change one of these two parameters.

Specifically, we’re bored when:

  • We have difficulty paying attention to the internal information (e.g., thoughts or feelings) or external information (e.g., environmental stimuli) required for participating in satisfying activity
  • We’re aware of the fact that we’re having difficulty paying attention
  • We believe that the environment is responsible for our aversive state (e.g., “this task is boring,” “there is nothing to do”).

Changing one’s mood and deliberately shifting focus for hours at a time isn’t something most of us can do with any task, but suffice it to say that boredom can be largely within your control. If enough focus is directed at a task, you might even find it interesting to the point of being entertaining. Changing the environment also works.

  “Ambient movement is a known way to help people stay attentionally engaged,” Eastwood said. “Just sitting at a desk is a terrible idea.” Wilson agreed, adding that even small environmental changes can make a big difference. When airports moved baggage claims further from arrival gates, Wilson observed, flyers’ satisfaction increased. “They didn’t mind walking so much as they minded waiting.”

The researchers advise that the more distractions we allow ourselves to be subjected to, the more easily we become bored. “It’s like quicksand,” Eastwood said. “If we thrash around, we end up making it much, much worse.” In a world where both the media and society constantly bombard us for attention, it seems things will only get more boring in the future.

Findings were detailed in the journal Perspectives on Psychological Science.

[via Boston Globe]

The tiny neurosynaptic core produced by IBM. (c) IBM

Cognitive computing milestone: IBM simulates 530 billion neurons and 100 trillion synapses

First initiated in 2008, IBM’s Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program has the final goal of developing a new cognitive computer architecture based on the human brain. Recently, IBM announced it has reached an important milestone for the program, after the company successfully simulated 530 billion neurons and over 100 trillion synapses on one of the world’s most powerful supercomputers.

It’s worth noting, however, before you get too excited, that the IBM researchers have not built a biologically realistic simulation of the complete human brain – that is still a goal many years away. Instead, the scientists devised a cognitive computing architecture called TrueNorth, with 10^10 neurons (10 billion) and 10^14 synapses (100 trillion), inspired by the scale of the human brain; it is modular, scalable, non-von Neumann and ultra-low power. The researchers hope that in the future this essential step might allow them to build an electronic neuromorphic machine technology that scales to biological levels.

“Computation (‘neurons’), memory (‘synapses’), and communication (‘axons,’ ‘dendrites’) are mathematically abstracted away from biological detail toward engineering goals of maximizing function (utility, applications) and minimizing cost (power, area, delay) and design complexity of hardware implementation,” reads the abstract of the Supercomputing 2012 (SC12) paper.

Steps towards mimicking the full-power of the human brain

Authors of the IBM paper (left to right): Theodore M. Wong, Pallab Datta, Steven K. Esser, Robert Preissl, Myron D. Flickner, Rathinakumar Appuswamy, William P. Risk, Horst D. Simon, Emmett McQuinn, Dharmendra S. Modha. (Photo credit: Hita Bambhania-Modha)

IBM simulated the TrueNorth system on one of the world’s fastest supercomputers, the Lawrence Livermore National Lab (LLNL) Blue Gene/Q Sequoia, using 96 racks (1,572,864 processor cores, 1.5 PB of memory, 98,304 MPI processes, and 6,291,456 threads).

IBM and LLNL achieved an unprecedented scale of 2.084 billion neurosynaptic cores, containing 53×10^10 (530 billion) neurons and 1.37×10^14 (137 trillion) synapses, running only 1,542 times slower than real time.
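
Those figures follow directly from the core count, assuming each neurosynaptic core holds 256 neurons and a 256×256 synapse crossbar, the layout IBM has described for this architecture. A quick sanity check:

```python
# Back-of-the-envelope check on the simulation's scale (assumes
# 256 neurons and 256 x 256 = 65,536 synapses per core).
cores = 2.084e9
neurons = cores * 256          # ~5.3e11, i.e. ~530 billion
synapses = cores * 256 * 256   # ~1.37e14, i.e. ~137 trillion
print(f"neurons:  {neurons:.3g}")
print(f"synapses: {synapses:.3g}")
```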

The tiny neurosynaptic core produced by IBM. (c) IBM

“Previously, we have demonstrated a neurosynaptic core and some of its applications,” continues the abstract. “We have also compiled the largest long-distance wiring diagram of the monkey brain. Now, imagine a network with over 2 billion of these neurosynaptic cores that are divided into 77 brain-inspired regions with probabilistic intra-region (“gray matter”) connectivity and monkey-brain-inspired inter-region (“white matter”) connectivity.

“This fulfills a core vision of the DARPA SyNAPSE project to bring together nanotechnology, neuroscience, and supercomputing to lay the foundation of a novel cognitive computing architecture that complements today’s von Neumann machines.”

According to Dr. Dharmendra S. Modha, IBM’s cognitive computing manager, his team’s goal is to mimic the processes of the human brain. While IBM’s competitors focus on computing systems that mimic the left side of the brain, processing information sequentially, Modha is working on replicating functions of the right side of the human brain, where information is processed in parallel and where incredibly complex brain functions lie. To this end, the researchers combine neuroscience and supercomputing.

Consider that the room-sized, cutting-edge, billion-dollar technology used by IBM to scratch the surface of artificial human cognition still doesn’t come near our brain’s capabilities – an organ that occupies a volume comparable to a 2L bottle of water and needs less power than a light bulb to work. The video below features Dr. Modha explaining his project in an easy-to-understand manner, and it’s only 5 minutes long.


source: KurzweilAI

 

"Hey, guys! Now back to work!"

Viewing photos of cute animals at work boosts productivity, Japanese study says

Interestingly enough, a group of cognitive psychologists at Japan’s Hiroshima University found that browsing through cute photos, such as those of baby animals like kittens, serves as a productivity booster. Although the lolcats peak is long gone, there’s still a significant wave of viral enthusiasm for sharing and collecting photos of cute animals – a practice often associated with procrastination.

"Hey, guys! Now back to work!"

“Hey, guys! Now back to work!”

To test their hypothesis, the researchers separated 48 volunteer students into two groups and asked them to play a game similar to Milton Bradley’s “Operation.” After the first session of the game, students from the first group were shown various photos of cute baby animals, while the second group was shown photos of adult animals. The first group fared better.

In a second test, the researchers split the volunteers into three groups. This time they were tasked with remembering and stating the number of times a given number appeared in a sequence. Again, one group was shown photos of baby animals, another photos of adult animals, and the third photos of “pleasant foods,” including sushi and steak. Once more, the volunteers who were shown photos of cute baby animals significantly outperformed participants from the other two study groups.

In a third and final test, 36 right-handed university students who had not participated in the previous experiments were selected. Again, they were divided into three groups – shown photos of baby animals, adult animals and neutral images (food), respectively – and asked to perform a reaction-time (RT) task. Participants had to indicate whether a stimulus presented on a cathode-ray-tube screen contained the letter H or the letter T by pressing the left or right key on a response pad as quickly and as accurately as possible. Curiously, here the cute-animal-viewing group scored the lowest. The researchers write that “the narrowed attention may be beneficial to performance on tasks that require carefulness in the motor and perceptual domains, such as the tasks used in the first two experiments.”

The scientists theorize that this may be because caring for baby animals (nurturance) requires very tender treatment of the animals, as well as “careful attention to the targets’ physical and mental states as well as vigilance against possible threats to the targets.”

In other words, browsing through cute photos of kittens might boost productivity in a limited array of fields – though turning it into a habit most likely leads to procrastination, and we all know how well that goes hand in hand with productivity…

Still, I for one found the study very interesting, and you can read it in much greater detail in the online version of the journal PLoS One, where it was published.

artist graph

Computer analyzes fine art like an expert would. Art only for humans?

Where fine art is concerned – or visual arts in general, for that matter – complex cognitive functions are at play as the viewer analyzes it. As you go from painting to painting, especially between different artists, discrepancies in style can be recognized, and trained art historians can catch even the most subtle of brush strokes and identify a certain artist or period based solely on them. For a computer, this kind of analysis can be extremely difficult to undertake. However, computer scientists at Lawrence Technological University in Michigan saw it as an exciting challenge and eventually developed software that can accurately analyze a painting – without any kind of human intervention, based solely on visual cues – much in the same manner an expert would.

The program was fed 1,000 paintings from 34 well-known artists and was tasked with grouping the artists by artistic movement and providing a map of similarities and influential links. At first, the program separated the artists into two main, distinct groups: modern (16 painters) and classical (18 painters).

For each painting, the software analyzed 4,027 numerical image-content descriptors – quantitative measures of image content such as texture, color and shape. Using pattern-recognition algorithms and statistical computations, the software was able to group artists into styles based on their similarities and dissimilarities, and then quantify those similarities.
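
As a rough illustration of that pipeline, here is a hedged Python sketch in which random vectors stand in for the 4,027 real descriptors and artists are grouped by hierarchical clustering on feature similarity. The artist list and every number below are purely illustrative.

```python
# Sketch: one feature profile per artist, then cluster artists by
# similarity. Real descriptors are replaced with simulated vectors.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(42)
artists = ["Raphael", "Da Vinci", "Michelangelo", "Dali", "Ernst"]

# Simulated profiles: the first three artists share one underlying
# "style", the last two share another.
profiles = np.vstack(
    [rng.normal(loc=2.0 * (i // 3), size=4027) for i in range(len(artists))]
)

tree = linkage(pdist(profiles), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
for artist, label in zip(artists, labels):
    print(f"{artist:14s} -> group {label}")
```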

artist graph

From these two broad groups, the software sub-categorized even further. For instance, the computer automatically placed the High Renaissance artists Raphael, Leonardo Da Vinci, and Michelangelo very close to each other. The software also branched sub-groups by similarity, so artists like Gauguin and Cézanne, both considered post-impressionists, were identified by the algorithm as being similar in style to Salvador Dali, Max Ernst, and Giorgio de Chirico, who are all considered by art historians to be part of the surrealist school of art.

[RELATED] Computer recognizes attractiveness in women

The researchers conclude their “results demonstrate that machine vision and pattern recognition algorithms are able to mimic the complex cognitive task of the human perception of visual art, and can be used to measure and quantify visual similarities between paintings, painters, and schools of art.”

While a computer can analyze art and reach conclusions similar to those of an expert human art historian, a serious question arises: is a computer, in this case, able to understand art? And if so, will a computer ever be able to feel art?

The findings were reported in the Journal on Computing and Cultural Heritage (JOCCH).

[via KurzweilAi]

 

 

Cramming for tests

Pulling all-nighters before tests is counter-productive – it does more harm than good

The findings of new research at UCLA suggest that cramming all night before a big test – something we’ve all gone through at some point in our lives, with mixed results – is generally counter-productive, as sleep deprivation takes its toll on cognitive performance.

Whether we’re talking about high school or university – especially the latter – we’ve all experienced situations where studying for a mid-term or homework was postponed until the last minute. Coffee soon became a much-needed beacon of light as night turned to day, and you crammed in much-needed extra information for your exam. This will help you, or so you thought. According to the researchers, however, regardless of how much a student generally studies each day, sacrificing a night’s sleep for extra studying hours does more harm than good.

“These results are consistent with emerging research suggesting that sleep deprivation impedes learning,” says Andrew J. Fuligni, a UCLA professor of psychiatry.

“The biologically needed hours of sleep remain constant through their high school years, even as the average amount of sleep students get declines,” he continues.

The scientists, based on their findings, advise that a consistent study schedule is best for learning, for most people at least.

Other research has shown that in 9th grade the average adolescent sleeps 7.6 hours per night, declining to 7.3 hours in 10th grade, 7.0 hours in 11th grade, and 6.9 hours in 12th grade. “So kids start high school getting less sleep than they need, and this lack of sleep gets worse over the course of high school.”

The findings were published in the journal Child Development.

via KurzweilAI

A cap worn by subjects in a Michigan State University study is fitted with electrodes which pick up EEG signals at the scalp; the signals are then transmitted via optical cable to a computer where brain activity is analyzed and stored. (c) G.L. Kohuth

We make mistakes more often and learn harder when rules change

Someone who’s been driving for 20 years in, let’s say, the United States and somehow ends up driving a car in the UK will be in a lot of trouble. Going from right-side driving to left-side driving, or vice versa, will bewilder just about anyone, and if you’ve gone through such an experience you may relate to the fact that, although you realize the rules of the game have changed, you’ll still be prone to mistakes like signaling the wrong way. Sure, adapting takes time, but it takes longer and comes at a more frustrating energy cost than learning from scratch – a fact attested by a recent study by Michigan State University psychology researchers.

A cap worn by subjects in a Michigan State University study is fitted with electrodes which pick up EEG signals at the scalp; the signals are then transmitted via optical cable to a computer where brain activity is analyzed and stored. (c) G.L. Kohuth

“There’s so much conflict in your brain,” said Schroder, continuing the same foreign-lane driving analogy, “that when you make a mistake like forgetting to turn on your blinker you don’t even realize it and make the same mistake again. What you learned initially is hard to overcome when rules change.”

To test the theory, the scientists invited study participants to perform a computerized task which involved recognizing the middle letter in either “NNMNN” or “MMNMM.” For “M,” participants simply had to press a button on the left; for “N,” one on the right. After 50 trials or so, the commands were reversed. The scientists found that the volunteers made repeated mistakes and, moreover, didn’t learn from them. In addition, a cap measuring brain activity showed they were less aware of their errors. And when the participants did respond correctly, their brain activity showed intense connections, suggesting the brain was being put to harder work and consuming more energy.
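
Here is a minimal Python sketch of that task’s logic. The letter strings and the reversal after 50 trials come from the article; the trial count and everything else are illustrative.

```python
# Sketch of the letter task: report the middle letter of the string,
# with the left/right response mapping reversed halfway through.
import random

def response_rule(trial):
    """Left button for 'M', right for 'N' -- reversed from trial 50 on."""
    if trial < 50:
        return {"M": "left", "N": "right"}
    return {"M": "right", "N": "left"}

random.seed(0)
for trial in range(100):
    stimulus = random.choice(["NNMNN", "MMNMM"])
    target = stimulus[2]  # the middle letter is the one that counts
    correct_key = response_rule(trial)[target]
    # A real experiment would record the key press, accuracy and EEG;
    # here we just show the expected response around the rule switch.
    if trial in (0, 49, 50, 99):
        print(f"trial {trial:3d}: {stimulus} -> press {correct_key}")
```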

“We expected they were going to get better at the task over time,” said Schroder, a graduate student in MSU’s Department of Psychology. “But after the rules changed they were slower and less accurate throughout the task and couldn’t seem to get the hang of it.”

“These findings and our past research suggest that when you have multiple things to juggle in your mind – essentially, when you are multitasking – you are more likely to mess up,” said Jason Moser, assistant professor of psychology and director of MSU’s Clinical Psychophysiology Lab. “It takes effort and practice for you to be more aware of the mistakes you are missing and stay focused.”

The findings were reported in the journal Cognitive, Affective & Behavioral Neuroscience.

Participants saw a fully clothed person from head to knee. After a brief pause, they then saw two new images on their screen: One that was unmodified and contained the original image, the other a slightly modified version of the original image with a sexual body part changed. Participants then quickly indicated which of the two images they had previously seen. They made decisions about entire bodies in some trials and body parts in other trials. (c) University of Nebraska-Lincoln

Human brain perceives men as persons and women as parts, study finds

When you first see the magnificent painting “Imagine” by Ukrainian painter Oleg Shuplyak, your brain perceives the portrait of the famous Beatles frontman John Lennon. On a closer look, however, one will immediately notice that the portrait is actually made out of a sum of parts – a table and a troubadour make up the mouth, two men dressed in long clothes make up the eyes and cheeks, a building’s double archway makes up the eyebrows, and so on. There are two mental functions which help us organize shapes and geometry into objects: global processing, when the sum is put ahead of the parts, and local processing, when the parts are put ahead of the sum.

In a study which tackles a highly controversial subject, psychologists at the University of Nebraska-Lincoln have found that people, men and women alike, perceive images of men as wholes (global cognition), while images of women are processed as assemblages of their various parts (local cognition). The researchers believe this is a deeply rooted feature of human cognition, one which may provide clues as to why women are often the targets of sexual objectification. I told you this was controversial.

“Local processing underlies the way we think about objects: houses, cars and so on. But global processing should prevent us from that when it comes to people,” said Sarah Gervais, assistant professor of psychology at the University of Nebraska-Lincoln and the study’s lead author. “We don’t break people down to their parts – except when it comes to women, which is really striking. Women were perceived in the same ways that objects are viewed.”

The participants were presented, in random order, with dozens of images of fully clothed, average-looking men and women. Each person was shown from head to knee, standing, with eyes focused on the camera. After each image, the system paused, then presented the participant with two images, slightly different from one another, from which the participant had to quickly choose the one he or she thought was the original. One of the two was a slightly modified version of the original in which a sexual body part had been altered.

Participants saw a fully clothed person from head to knee. After a brief pause, they then saw two new images on their screen: One that was unmodified and contained the original image, the other a slightly modified version of the original image with a sexual body part changed. Participants then quickly indicated which of the two images they had previously seen. They made decisions about entire bodies in some trials and body parts in other trials. (c) University of Nebraska-Lincoln

Women’s sexual body parts were more easily recognized when presented in isolation than when they were presented in the context of their entire bodies. But men’s sexual body parts were recognized better when presented in the context of their entire bodies than they were in isolation.

“We always hear that women are reduced to their sexual body parts; you hear about examples in the media all the time. This research takes it a step further and finds that this perception spills over to everyday women, too,” Gervais said. “The subjects in the study’s images were everyday, ordinary men and women … the fact that people are looking at ordinary men and women and remembering women’s body parts better than their entire bodies was very interesting.”

If you thought it was solely men who perceived women in such a manner, you’d be wrong: both men and women came out with more or less the same results.

“It’s both men and women doing this to women,” Gervais said. “So don’t blame the men here.”

In a second experiment, the same participants were presented with images of letters made up of other, tiny letters – an H made up of hundreds of little Ts, for example. Some were asked to identify the bigger, whole letters, testing their global cognition, while others were asked to pick out the smaller letters that made up the big letter, testing their local cognition. Participants primed by the global-processing part of the test were less likely to objectify women, the researchers found: they were no longer better at recognizing a woman’s parts than her whole body.
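
Stimuli like these – a big letter built from small ones – are known as Navon figures, and they are easy to picture with a few lines of Python:

```python
# Print a global "H" whose local elements are all "T"s.
rows = [
    "T...T",
    "T...T",
    "TTTTT",
    "T...T",
    "T...T",
]
for row in rows:
    print(row.replace(".", " "))
```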

The second test suggests, according to the scientists involved in the study, that objectifying women is a habit that can be overcome.

“Our findings suggest people fundamentally process women and men differently, but we are also showing that a very simple manipulation counteracts this effect, and perceivers can be prompted to see women globally, just as they do men,” Gervais said. “Based on these findings, there are several new avenues to explore.”

Findings were published in the European Journal of Social Psychology.

source

foreign language

Humans think more rationally in a foreign language, study finds

“Would you make the same decisions in a foreign language as you would in your native tongue?” asks Boaz Keysar, a psychologist at the University of Chicago, who recently published a study discussing this highly interesting question. The scientists involved in the study found that, counter to popular belief, thinking in a foreign language actually makes you take more rational decisions.

“It may be intuitive that people would make the same choices regardless of the language they are using, or that the difficulty of using a foreign language would make decisions less systematic. We discovered, however, that the opposite is true: Using a foreign language reduces decision-making biases,” wrote Keysar’s team.

To test this hypothesis, the researchers administered a series of tests to students, based on the work of the eminent psychologist Daniel Kahneman, Nobel Prize laureate in 2002 for his work on prospect theory. Kahneman’s theory describes how people perceive risks and how this affects their decision-making – according to his findings, humans inherently choose the safe option when a choice is framed as a gain, and the risky option when it is framed as a loss.

With this in mind, the psychologists devised several experiments. For the first, they asked 121 American students who were studying Japanese to make a hypothetical choice: to fight a disease threatening the lives of 600,000 people, doctors could either make a cure that would surely save only 200,000, or a medicine with a 33.33% chance of curing all 600,000 and a 66.66% chance of not saving a single life.

When the question was framed in English as “saving lives,” 80% of the students chose the safe option; when it was framed as “losing lives,” only 47% did. When asked in Japanese, however, the students chose the safe option about 40% of the time, no matter how the question was phrased. “Using a foreign language diminishes the framing effect,” wrote Keysar’s team.
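
It is worth spelling out why this counts as a bias: the two options save exactly the same number of lives in expectation, so only the wording differs. A one-line check:

```python
sure_option = 200_000       # lives saved for certain
risky_option = 600_000 / 3  # 1/3 chance of saving all 600,000 people
print(sure_option, risky_option)  # 200000 200000.0 -- identical in expectation
```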

A similar experiment was carried out with South Korean students studying English, and native English speakers studying French. They were presented with a low-loss, high-gain betting game: each student was offered a number of one-dollar bills to bet on coin tosses. If they lost a bet, they’d lose the $1 bill; if they won, they would keep the dollar and win another $1.50. In the long run, choosing to place multiple bets would be profitable, yet only 54 percent of students took the bets when the proceedings were conducted in their native language. When they were conducted in the foreign language, 71% of the students took the leap.
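
The “profitable in the long run” claim is simple expected value: each $1 bet wins $1.50 half the time and loses $1 the other half.

```python
# Expected profit per $1 coin-flip bet in the experiment's payoff scheme.
expected_profit = 0.5 * 1.50 + 0.5 * (-1.00)
print(f"expected profit per bet: ${expected_profit:+.2f}")  # +$0.25
```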

“They take more bets in a foreign language because they expect to gain in the long run, and are less affected by the typically exaggerated aversion to losses,” wrote Keysar and colleagues.

What the researchers conclude is that instinctive decisions, made with thought in a native language, are less rational and more emotionally driven than those made in a second language.

“Given that more and more people use a foreign language on a daily basis, our discovery could have far-reaching implications,” they wrote, suggesting that people who speak a second language might use it when considering financial decisions. “Over a long time horizon, this might very well be beneficial.”

source: wired

The findings were reported in the journal Psychological Science.

Chewing gum

Chewing gum makes you smarter

Chewing without actually eating seems pretty weird if you think about it, yet it’s a highly popular habit, best illustrated by the billion-dollar chewing gum industry. If you’re one of the regular chewers, here’s something to lighten your mood for the day: chewing gum increases your cognitive abilities, albeit for a short burst of time, as researchers from St. Lawrence University concluded in a recently published study. Wish you’d known that before your teacher kicked you out of class in the seventh grade, right?

The scientists asked 159 volunteers, divided into two groups (chewers and non-chewers), to solve a series of tests, including difficult logic problems and repeating numbers backwards. The researchers found that people who were chewing gum during testing outperformed non-chewers, as a whole, in five out of six tests. The only exception was the verbal fluency test, in which subjects were asked to name as many words as they could from various lexical families.

When I first read the paper’s headline I thought it all had something to do with a sugar rush, but apparently this is not the case: the chewers were given both sugared and sugar-free gum, with no significant discrepancies in their results. The performance boost induced by chewing gum is short-lived, however, with improvements lasting only through the first 20 minutes of testing.

If it doesn’t have anything to do with sugar or glucose, how does chewing gum improve your mental abilities? It all comes down to the act of chewing itself – chewing anything, not just gum – through a process called “mastication-induced arousal.” This acts as a boost to the brain, waking us up and allowing better focus on the task at hand. As for why the boost fades, the researchers suggest that the lack of improvement when gum is chewed throughout testing may be due to interference effects from cognitive and masticatory processes sharing resources.

Short-term memory

Expand short-term memory through exercises

The average brain can only hold about five to seven pieces of information at a time, and only for about 30 seconds – this is called working memory. What people usually do to get past the 30-second interval is re-expose themselves to the information; for instance, if you want to remember a 7-digit phone number (seven pieces of information), you’ll have to constantly replay the sequence inside your head. Through repetition, you’ll be able to move it beyond your working memory, to some extent.

But how can you increase your short-term memory capacity in general? What if you could go from remembering the names of the last 5 people you just met to 10 names? Would short-term memory improvement have any effect on other cognitive faculties? These questions, and more or less satisfying answers, can be found in a recently published study by Jason Chein at Temple University in Philadelphia, Pennsylvania.

Past attempts to expand short-term memory relied on specific strategies, such as rehearsing long strings of numbers; these often improved performance on the particular task at hand, but with no visible long-term effects on memory. Chein’s training technique is different and, most importantly, yields results: a program asks people to answer questions about a string of successive sentences while simultaneously remembering the last word of each sentence. It is very difficult to develop conscious shortcuts for dealing with the two conflicting sources of information, so the brain is forced to make more long-lasting changes.
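
To make the dual demand concrete, here is a toy sketch of a single block of such a task. The sentences and structure are invented for illustration; the actual training software is naturally more involved.

```python
# One block of a "complex span" style trial: judge each sentence
# (processing component) while holding its final word in memory
# (storage component), then recall the words in order.
sentences = [
    "The cat chased the red ball.",
    "The toaster sang a quiet mountain.",  # judged as nonsense
    "She mailed the letter on Monday.",
]

memory_load = []
for sentence in sentences:
    # The participant would answer a question about the sentence here...
    # ...while also storing its last word for the recall phase.
    memory_load.append(sentence.rstrip(".").split()[-1])

print("recall, in order:", memory_load)  # ['ball', 'mountain', 'Monday']
```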

The technique reportedly works amazingly well, with a whopping 15 per cent improvement over a five-week training course – meaning your working memory expands from seven to eight items. While it’s evident that short-term memory improvement is possible, scientists argue over whether it has any implications for other cognitive areas – some say there is no connection, while others stress that cognitive abilities from logical reasoning and arithmetic to verbal skills and reading comprehension are directly linked to working memory.

As published in Psychonomic Bulletin & Review.


brain

For some reason, people are a bit reluctant to believe the things neuroscientists tell them. Bar graphs and raw data just do not appeal to them, especially when the researchers are trying to dig inside their brains.

So scientists tried to find a way to make people believe them, and, believe it or not, the solution was pictures – colorful brain images, to be exact. Scientists and journalists have recently suggested that brain images have a persuasive influence on the public’s perception of research on cognition. This was tested by David McCabe, an assistant professor in the Department of Psychology at Colorado State, and his colleague Alan Castel, an assistant professor at the University of California-Los Angeles.

“We found the use of brain images to represent the level of brain activity associated with cognitive processes clearly influenced ratings of scientific merit,” McCabe said.

In three experiments, undergraduate students were asked to read articles making unsubstantiated claims, such as that watching television increases math ability, alongside articles describing realistic research, such as how brain imaging can be used as a lie detector. When asked to rate the articles’ scientific merit, they consistently rated the ones accompanied by brain images higher – regardless of whether the article described a fictitious, implausible finding or realistic research.

“Cognitive neuroscience studies which appear in mainstream media are often oversimplified and conclusions can be misrepresented,” McCabe said. “We hope that our findings get people thinking more before making sensational claims based on brain imaging data, such as when they claim there is a ‘God spot’ in the brain.”

So people, seriously, at least when it comes to the brain, leave the conclusions to the neuroscientists, and trust them – they’re a good bunch.