What is the highest IQ in the world (and should you actually care?)

Credit: Pixabay.

IQ stands for ‘Intelligence Quotient’, a numerical score based on standardized tests which attempt to measure general intelligence. However, an IQ test does not measure intelligence the way a ruler measures a person’s height. Instead, IQ scores are always relative to the median score (typically set at 100), which reflects the general intelligence of the population.

Modern IQ tests measure a person’s ability to reason and use information to solve problems through questions and puzzles. The things an IQ test typically measures include short-term and long-term memory, as well as how well, and how quickly, a person can solve puzzles.

Measuring intelligence

People have always been aware that some are better at mental tasks than others, but it wasn’t until the work of French psychologist Alfred Binet that a quantitative lens was cast on the diversity of human intelligence. In 1905, together with his colleague Théodore Simon, Binet devised the Binet-Simon test, which focused on verbal abilities and was designed to gauge ‘mental retardation’ among school children.

These tests, which in time also included questions gauging attention, memory, and problem-solving skills, quickly showed that some young children were better able to answer complex questions than older children. Based on this observation, Binet concluded that there is such a thing as ‘mental age’, which can be higher or lower than a person’s chronological age.

In 1916, psychologist Lewis Terman at Stanford University translated and standardized the test using a sample of American students. Known as the Stanford-Binet Intelligence Scale, this test would go on to be used for decades to quantify the mental abilities of millions of people around the world.

The Stanford-Binet intelligence test used a single number, known as the intelligence quotient (or IQ), to represent an individual’s score on the test. This score was computed by dividing a person’s mental age, as revealed by the test, by their chronological age and then multiplying the result by 100. For instance, a child whose chronological age is 12 but whose mental age is 15 would have an IQ of 125 (15/12 x 100).
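Purely as an illustration, here is that ratio arithmetic as a minimal Python sketch (the numbers are the hypothetical child from the example above):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Classic Stanford-Binet 'ratio IQ': mental age divided by
    chronological age, multiplied by 100."""
    return mental_age / chronological_age * 100

print(ratio_iq(mental_age=15, chronological_age=12))  # 125.0
```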

The Stanford-Binet Intelligence Scale – Fifth Edition measures five content areas: fluid reasoning, knowledge, quantitative reasoning, visual-spatial processing, and working memory.

A reasoning question typical of IQ tests. The participant has to figure out what shape should come next in the pattern. Credit: Wikimedia Commons.

Building upon the Stanford-Binet test, psychologist David Wechsler developed a new IQ test that better captured a person’s distinct mental abilities. The first test, known as the Wechsler Adult Intelligence Scale (WAIS), was released in 1955. Wechsler also developed two tests for children: the Wechsler Intelligence Scale for Children (WISC) for school-age children, and the Wechsler Preschool and Primary Scale of Intelligence (WPPSI) for younger ones. The modern adult version of the test is known as the WAIS-IV and has gone through numerous revisions to accommodate recent research.

The WAIS-IV is made up of 10 core subtests and 5 supplemental subtests, which score an individual in four major areas of intelligence: the Verbal Comprehension Scale, the Perceptual Reasoning Scale, the Working Memory Scale, and the Processing Speed Scale. These four index scores are combined into the Full-Scale IQ score (what people generally recognize as ‘the IQ score’). There’s also the General Ability Index, based on six subtest scores, which can help identify learning disabilities. For instance, scoring low in some areas of the General Ability Index while scoring well in others may indicate a specific learning difficulty, perhaps warranting specialized attention.

[panel style=”panel-default” title=”How IQ is scored” footer=””]A person’s overall IQ score is calculated from their aggregate performance on all of these various subtests, by ranking the person’s score on each subtest against the scores of other people who have taken it.

[/panel]

The modern WAIS test does not score IQ based on chronological and mental age but rather on the scores of other people in the same age group. The average score is fixed at 100, with two-thirds of the population scoring between 85 and 115, while at the extremes roughly 2.5% of the population scores above 130 and roughly 2.5% scores below 70. Basically, the IQ score moves 15 points in either direction with each standard deviation.
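As a rough sketch of how this deviation scoring plays out, assuming an idealized normal distribution with mean 100 and standard deviation 15 (Python’s standard library is enough here):

```python
from statistics import NormalDist

# Deviation IQ: the population mean is fixed at 100 and one
# standard deviation corresponds to 15 points.
iq = NormalDist(mu=100, sigma=15)

print(f"Between 85 and 115: {iq.cdf(115) - iq.cdf(85):.1%}")  # ~68.3%
print(f"Above 130:          {1 - iq.cdf(130):.1%}")           # ~2.3%
print(f"Below 70:           {iq.cdf(70):.1%}")                # ~2.3%
```

(The often-quoted 2.5% figures are a rounding of these tail shares.)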

Some IQ tests measure both crystallized and fluid intelligence. Crystallized intelligence refers to knowledge and skill gained through life, meaning it’s based on facts and grows with age. Tasks that draw on crystallized intelligence include reading comprehension and vocabulary exams. For instance, a test might ask “What’s the difference between weather and climate?” or “Who was the first president of the United States?”. These sorts of questions test a person’s knowledge of things that are valued in a particular culture (a person from India might not know the answers to many IQ test questions given in the US, but that doesn’t make them any less intelligent).

Fluid intelligence, on the other hand, is the ability to reason, solve problems, and make sense of abstract concepts. This ability is considered independent of learning, experience, and education. For example, participants in an IQ test might have to figure out what a shape would look like if it were rotated.

What’s the highest IQ score?

When IQ scores are plotted on a graph, they follow what’s known in statistics as a ‘bell curve’. The peak of the “bell” lies at the mean, where the majority of IQ scores lie. The bell then slopes down to each side; one side represents scores that are lower than the average, and the other side represents scores that are above the average. As the slope of the bell trails off, you’ll find the extremely high (gifted) and extremely low (disabled) IQ scores. Most people have average intelligence.

IQ scores follow a bell curve distribution.

IQ scores can be interpreted in brackets, as follows:

  • 1-70: low;
  • 71-84: below average;
  • 85-115: average;
  • 116-144: above average;
  • 145-159: high;
  • 160+: genius.

The problem is that IQ tests get really fuzzy in the uppermost bracket, the reason being that the higher the IQ, the smaller the population available for norming the score. For instance, only about 0.003% of the population has an IQ of 160 or above — that’s just 3 out of every 100,000 people. So although there is no known upper IQ limit, this scarcity imposes practical limitations on evaluating the IQ of super-gifted individuals.
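Reusing the bell-curve sketch from above, you can check the rarity claim yourself, with the caveat that the idealized normal model is exactly what breaks down at these extremes:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
tail = 1 - iq.cdf(160)  # IQ 160 sits 4 standard deviations above the mean

print(f"Share at 160 or above: {tail:.4%}")   # ~0.0032%, roughly 3 in 100,000
print(f"That is about 1 in {1 / tail:,.0f}")  # ~1 in 31,600
```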

William James Sidis. Credit: Wikimedia Commons.

This brings us to the question: who’s the person with the highest IQ ever? According to some, that would be William James Sidis (1898-1944), with an IQ estimated between 250 and 300. A true child prodigy, Sidis could read English by the time he was two and could write in French by age four. At age five, the young Sidis devised a formula whereby he could name the day of the week for any given historical date. When he was eight, he produced a new table of logarithms in base 12. At age 12, Sidis was admitted to Harvard, where he wrote theories on “Fourth Dimensional Bodies” and graduated cum laude before his sixteenth birthday. By this age, Sidis could already speak and read fluently in French, German, Russian, Greek, Latin, Armenian, and Turkish.

Young Sidis’ achievements did not fly under the radar, with the foremost newspapers of the time following his academic record and reporting outlandish stories. They also constantly harassed the young Sidis, who came to loathe the press and the “genius” label. The celebrity and pressure might have gotten to him in the end. After a brief stint teaching at Rice University in Texas in 1918, Sidis drifted through various clerk jobs. Reclusive by nature, all Sidis wanted in life was a job that paid his most basic expenses and made no further demands of him. Sidis died poor and with little to show in terms of academic achievement (while he was still attending the university, Harvard professors would say of the young Sidis that he would grow up to be the greatest mathematician in the world). His only published work is a three-hundred-page treatise on collecting streetcar transfers. According to American Heritage:

“The book, Notes on the Collection of Transfers, contains densely printed arcana about various interconnecting lines, scraps of verse about streetcars, and some simple, foolish streetcar jokes that the author might have enjoyed in his childhood, had he had one. Sidis published it under the unlovely pseudonym of Frank Folupa, but reporters managed to ascribe the book to him, tracked him down, and again he fled.”

Sidis’ IQ is said to have been tested by a psychologist, and his score was allegedly the highest ever recorded. William Sidis took general intelligence tests for Civil Service positions in New York and Boston, achieving phenomenal results which are the stuff of legend. This information could not be verified to date, and perhaps never will be.

Terence Tao. Credit: YouTube.

The most reliable record-high IQ score belongs to Terence Tao, with a confirmed IQ of 230. Tao is an Australian-American mathematician born in 1975, who showed a formidable aptitude for mathematics from a very young age. He entered high school at the age of 7, where he began taking calculus classes. He earned his bachelor’s degree at 16 and his Ph.D. degree at 21.

Tao, who reportedly had a normal social life while growing up and is now married with children, has made the most of his talent. Over the years, Tao has garnered a bevy of prestigious awards for his work, including the Fields Medal (often described as the Nobel Prize of math) and a MacArthur Foundation grant (often referred to as the “genius grant”). At the moment, Tao is a professor of mathematics and holder of the James and Carol Collins Chair at the University of California, Los Angeles (UCLA).

In an interview with National Geographic, Tao rejected lofty notions of genius, claiming that what really matters is “hard work, directed by intuition, literature, and a bit of luck.”

Christopher Hirata. Credit: Breakthrough Prize.

The second highest confirmed IQ belongs to Christopher Hirata, with an IQ of 225. He was only 13 years old when he won a gold medal at the 1996 International Physics Olympiad. From age 14 to 18, Hirata studied physics at Caltech, graduating with a bachelor’s degree in 2001; while there, he did research for NASA on the colonization of Mars. He received his Ph.D. in astrophysics from Princeton University in 2005. The 36-year-old works for NASA, where he supervises the design of the next generation of space telescopes. His theoretical research deals with the Cosmic Microwave Background (CMB), dark energy and the accelerated expansion of the universe, galaxy clusters, and the large-scale structure of the universe. In 2018, Hirata was awarded the prestigious New Horizons in Physics Breakthrough Prize for fundamental contributions to understanding the formation of the first galaxies and for sharpening and applying the most powerful tools of precision cosmology.

Terence Tao and Christopher Hirata have both taken actual IQ tests, but on the internet you’ll find so-called “top 10 smartest people” lists which include many individuals who have never been tested. For instance, some websites list people such as Garry Kasparov (IQ 180), Johann Goethe (IQ 225), Albert Einstein (IQ 160), and even Leonardo da Vinci (IQ 160) or Isaac Newton (IQ 190). These scores are estimated from the individuals’ biographies, so they shouldn’t be trusted — which doesn’t mean such famous personalities weren’t highly intelligent individuals; after all, the magnitude of their success speaks for itself.

How much does an IQ score matter?

IQ scores can predict how many children or how much money a person can hope to have throughout life. Credit: Wikimedia Commons.

According to the scientific literature, a person’s IQ is highly correlated with measures of longevity, health, and prosperity. One study involving one million Swedes found that a high IQ is also associated with a lower risk of premature death — so much so that there was a three-fold difference in mortality risk between the highest and the lowest IQ bands.

IQ is also positively correlated with career success, which is unsurprising: more intelligent people tend to make better employees (see graph below). The correlation is not perfect, though. Correlations run from -1 to 1, and a correlation of 1 would mean, in this case, that IQ alone perfectly determines career success. So there’s plenty of room for other individual factors not measured by standard intelligence tests.
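To make the correlation scale concrete, here is a minimal sketch with made-up numbers (hypothetical data, not from the study):

```python
import numpy as np

# Hypothetical data: IQ scores and a career-success index for six people.
iq      = np.array([ 85,  95, 100, 110, 120, 130])
success = np.array([3.2, 4.8, 4.1, 5.9, 5.4, 7.0])

# Pearson's r ranges from -1 to 1: 1 would mean IQ alone perfectly
# predicts success, 0 would mean no linear relationship at all.
r = np.corrcoef(iq, success)[0, 1]
print(f"Pearson r = {r:.2f}")
```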

Credit: All That Matters.

That being said, there’s a lot of leeway as to what makes a person successful or helps him or her master a craft. Luck certainly plays a role (a terminal illness at one extreme, a loving, wealthy family while growing up at the other). But then there’s a far more important and, at the same time, controllable variable: grit.

Angela Duckworth, a psychologist at the University of Pennsylvania in Philadelphia, interviewed people from all walks of life, attempting to determine what characteristics made some of them successful in life.  She found grit was the one trait that stood out among the people who had ‘made it’. Grit, Duckworth told Science News, has two parts: passion and perseverance. In one of her studies, Duckworth found that students with higher grades in university tended to have more grit (unsurprisingly). However, students with higher university entrance exam scores tended to be less gritty than those who scored lower. In other words, by the end of university, grit is a better predictor of success (graduation score) than intelligence (as measured by entrance-exam scores).

Let’s talk a bit about the higher end of achievement, or what’s traditionally considered the domain of geniuses. In the early 20th century, the same Lewis Terman who standardized the Stanford-Binet evaluated a large sample of children who scored at the top end of the IQ scale and followed them as they aged, to see if they would become veritable geniuses in adulthood. By the end of his selection, the researcher wound up with 1,528 extremely bright boys and girls who averaged around 11 years old. Their average IQ was 151, with 77 children scoring between 177 and 200 — that’s on the extremely gifted scale.

Until they reached middle age, the original study participants (affectionately called “Termites”) were periodically tested, and the results were collected in the five-volume work Genetic Studies of Genius. No one among the study’s participants went on to achieve what society truly deems genius — say, making an outstanding contribution to a field of study. Many became more or less successful lawyers, engineers, doctors, scientists, and other respectable professionals. And although we should bear in mind that many of the participants grew up between the two world wars, it’s perhaps surprising to learn that quite a few never graduated from college or attained professional or graduate degrees.

When the IQs of the most successful Termites were compared to those of the least successful ones, researchers found little difference, suggesting intelligence is not a good predictor of high achievement. As chance would have it, this fact is nowhere better illustrated than in the cases of Luis Walter Alvarez and William Shockley, Nautilus wrote. When they were little boys, the two were tested by Terman but didn’t make the cut. Both, however, were monumentally successful. Alvarez went on to become one of the most brilliant and productive experimental physicists of the 20th century, earning the 1968 Nobel Prize in Physics. Shockley earned his Ph.D. from MIT and filed his first patent at age 28. In 1956, he shared the Nobel Prize in Physics with two colleagues for inventing a device without which our rich digital lives would be all but impossible — the transistor. No Termite ever won a Nobel Prize.

Yes, having a high IQ score is a good predictor of achieving success and living a better life than the mean — it’s a nice head start, but it’s not enough in and of itself. Unless you’re disabled, you can make up for a lack of special aptitude (as reflected in an IQ score) through grit, resilience, and working on something you truly love to do.

Did you ever take an IQ test? Share your results and opinion on the matter in the comment section below. 

[UP NEXT] Can you raise your IQ score?


A trip to Mars might incur permanent brain damage from cosmic rays

Researchers at the University of California, Irvine exposed mice to radiation similar to the cosmic rays that permeate space and found the animals experienced declines in cognition, along with changes in the structure and integrity of brain nerve cells and of the synapses where nerve impulses are sent and received. The mice became easily confused and lost their tendency to explore new environments. Similar cognitive impairments are likely to be felt by astronauts traveling to Mars, according to the researchers. Even with shielding, the effects of cosmic-ray exposure are sure to be noticed, considering the journey to Mars lasts six to eight months — and that’s without counting the time spent on the red planet and the journey back home.


“Astronauts may incur cognitive impairments that lead to performance decrements, confusion, increased anxiety and longer-term problems with cognitive health,” said University of California, Irvine radiation oncology professor Charles Limoli, whose study appears in the journal Science Advances.

In August 1912, Austrian physicist Victor Hess made a historic balloon flight that opened a new window on matter in the universe. As he ascended to 5,300 metres, he measured the rate of ionization in the atmosphere and found that it increased to some three times that at sea level. He concluded that penetrating radiation was entering the atmosphere from above. He had discovered cosmic rays. Galactic cosmic rays can be found everywhere in the universe. They’re the remnants of supernovas – exploding stars that release huge amounts of energy. Luckily, the Earth’s atmosphere acts like a shield against them, but in space they are free to roam and wreak havoc on life.

To study what kind of effects such exposure would have, the UC Irvine researchers genetically modified mice to have green fluorescent neurons, which aided structural analysis. The mice were exposed to the cosmic-ray-like radiation at the NASA Space Radiation Laboratory at Brookhaven National Laboratory in New York and then analyzed six weeks later.

Besides the clear structural damage observed, the researchers noted that the mice performed poorly on learning and memory tests. They were also sluggish and less curious.

“Previous studies show synaptic impairment or loss of synapses is an early and invariant feature of Alzheimer’s disease, and there is a strong correlation between the extent of synapse loss and the severity of dementia,” said University of California, Irvine neuroscientist Vipan Kumar Parihar.


This definitely sounds like bad news for interplanetary missions, where long-term exposure to cosmic rays is certain. Of course, there are ways to shield against cosmic rays. One creative solution actually involves lining a spacecraft’s walls with human feces to protect against radiation. The brain-dulling particles would still get on board, though. “There is really no escaping them,” Limoli said.


When following goals, people pay attention to progress more than they do to setbacks

Hopes are high this time of year, but before you make your New Year’s resolution you might want to consider an important cognitive bias: when following goals, progress is given a lot more consideration than setbacks. Say your resolution is to lose weight, so next year you’ll be on a diet. Chances are, according to a study from the University of Colorado Boulder, you’ll feel that refraining from eating ice cream (goal-consistent behavior) helps your resolution more than eating the ice cream hurts it. In doing so, you overestimate movement toward your target versus movement away from it. More generally, this bias makes most people believe good behaviors are more beneficial in reaching goals than bad behaviors are in obstructing them. It’s an innocent bias, but one that might make you lose focus or fail without even knowing what happened.

It’s all about thinking in net gain


Credit: iStock

“Basically what our research shows is that people tend to accentuate the positive and downplay the negative when considering how they’re doing in terms of goal pursuit,” said Margaret C. Campbell, lead author of the paper — published online in the Journal of Consumer Research — and professor of marketing at CU-Boulder’s Leeds School of Business.

 

There’s an upside to it, though. When you accentuate the progress you’ve made and minimize the setbacks, you’ll feel more motivated, which will help in reaching your goal, be it eating healthier, saving money, or learning a new foreign language. A lapse away from the goal, known as goal-inconsistent behavior, thus becomes less damaging in perception, so people feel these lapses can be redeemed later on. Successes in working toward a goal, known as goal-consistent behaviors, then feel like big accomplishments.

The big downside is that there’s a considerable risk of engaging in too many goal-inconsistent behaviors and too few goal-consistent ones, all while the goal pursuer feels they’re making progress when in fact they’re making none.

 “So our moral for the season is monitor, monitor, monitor,” said Campbell. “For example, dieters need to pay close attention to calories in and out — both aspects — during this tempting time to keep from falling prey to the bias.”

The researchers found that even when the goal-consistent and goal-inconsistent behaviors are the same size, like saving $90 or spending $90, the bias tends to be present.

What’s interesting is that a lack of confidence in reaching a goal can lessen the bias, the researchers found. You could say that being realistic makes you more attentive to both progress and setbacks. Of course, this can become dangerous to reaching your goal when realism turns into pessimism, which tends to hinder motivation.

Long-term shift work deteriorates the brain

Long-term shift work has a lasting negative effect on the brain, damaging cognitive ability and memory, a new study has revealed.

Working shifts has significant negative effects on the brain, a new study has shown. Image via Medic Cast.

The study found clear links between shift work and impairments in memory and thinking (cognition). People who worked in shifts for 10 years or more showed, on average, an extra 6.5 years’ worth of decline in memory and thinking skills. Scientists are not sure if it’s the shift work itself that’s causing the decline, or rather the stress of having to constantly change and adapt one’s schedule.

Jean-Claude Marquié, research director at the National Center for Scientific Research at the University of Toulouse, France, says that while the effects linger even after shift work has ended, moving on to a regular schedule still allows workers to recover. The study found that after working 10 years or more in shifts, recovery typically takes around five years.

“Shift work chronically impairs cognition, with potentially important safety consequences not only for the individuals concerned, but also for society,” say the authors, led by Marquié.

Many reports have suggested that shift work may have negative effects on the brain, but studies on the issue have been relatively few. In 2001, Scott Davies, a PhD student at the time, showed a correlation between night shifts and the risk of breast cancer. In 2002, Torbjörn Åkerstedt revealed something which surprised no one: shifts can cause major disturbances in sleep patterns and affect circadian rhythms. Another worrying study, also published in 2001, observed over 27,000 people and showed that obesity and high levels of triglycerides cluster together more often in shift workers than in day workers, suggesting a connection between metabolic syndrome and shift work.

This new study points in the same direction, showing that shift work has potent negative effects and, furthermore, that the effects are chronic. Marquié and colleagues tracked the mental abilities of over 3,000 people from various regions of France who worked in a broad range of fields; about 1,500 of them worked in shifts. Tests gauging memory, processing speed, and thinking ability were conducted.

The results clearly showed that shift workers had lower overall memory and thinking-ability scores compared with those who didn’t work shifts.

 


Taking a walk encourages creativity more than sitting

Photo: flickr

If you’ve ever read the biographies of some of the world’s greatest thinkers, you may have noticed that one of their favorite pastimes was taking long and relaxing walks. For instance, Charles Darwin kept a fixed schedule that demanded he begin his morning rituals with a walk upon waking at 7:00, and only afterward take breakfast. Aldous Huxley, Winston Churchill, and Immanuel Kant are just a few more examples. These were all great men who excelled in creativity and problem solving, and though each may have left his mark on posterity in a different manner, they all shared a common trait – no day went by without taking a walk.

Now, I’m not saying walking in parks all day is going to make you a champion, but according to a recent study  published by the American Psychological Association, when the task at hand requires some imagination, taking a walk may lead to more creative thinking than plain ol’ sitting.

“Many people anecdotally claim they do their best thinking when walking,” said Marily Oppezzo, PhD, of Santa Clara University. “With this study, we finally may be taking a step or two toward discovering why.”

The power of a simple walk in the park

Previous studies showed that regular aerobic exercise may protect cognitive abilities; however, Oppezzo and colleagues showed that even mild physical activity can have significant positive effects on cognition, and on creativity in particular. Multiple experiments were conducted involving 176 participants, who were divided into walkers and sitters.

They found that those who walked instead of sitting or being pushed in a wheelchair consistently gave more creative responses on tests commonly used to measure creative thinking, such as thinking of alternate uses for common objects and coming up with original analogies to capture complex ideas. When asked to solve problems with a single answer, however, the walkers fell slightly behind those who responded while sitting.

[ALSO READ] Walking through doorways makes you forget things, study finds

What’s remarkable is how much more creative the walkers were. Of the students tested for creativity while walking, 100 percent came up with more creative ideas in one experiment, while 95 percent, 88 percent, and 81 percent of the walker groups in the other experiments had more creative responses compared with when they were sitting. Of course, stating a wacky idea didn’t earn points – though the questions called for originality, all answers had to be feasible and respect certain imposed constraints.

The experiments were designed so that the participants’ creativity was engaged. For one experiment, the researchers put each of the 48 participants alone in a small room facing a blank wall – this ensured minimal external stimuli that might interfere with the creative process. They were then asked to think of as many alternative uses as they could for a common object. For example, for the word “button,” a person might say “as a doorknob on a dollhouse.”

With a different group of 48 students, some sat for two different sets of the tests, some walked during two sets, and some walked and then sat for the tests.

“This confirmed that the effect of walking during the second test set was not due to practice,” Oppezzo said. “Participants came up with fewer novel ideas when they sat for the second test set after walking during the first. However, they did perform better than the participants who sat for both sets of tests, so there was a residual effect of walking on creativity when people sat down afterward. Walking before a meeting that requires innovation may still be nearly as useful as walking during the meeting.”

A novel idea was defined as one that hadn’t come up in a response from any of the participants, regardless of group. Students who walked in another experiment doubled their number of novel responses compared with when they were sitting.

[RELATED] Outdoor activities enhance creativity and problem solving abilities

But is it exposure to nature or simply being outside that causes these cognitive benefits? To see if walking in itself, no matter the environment, leads to the observed benefits, the researchers devised another experiment with 40 participants and compared the responses of students walking outside or inside on a treadmill with the responses of students being pushed in a wheelchair outside or sitting inside. Again, the students who walked, whether indoors or outside, came up with more creative responses than those either sitting inside or being pushed in a wheelchair outdoors. “While being outdoors has many cognitive benefits, walking appears to have a very specific benefit of improving creativity,” said Oppezzo.

There you have it. Tomorrow, maybe you’d like to have your coffee with you outside.

The study was published in APA’s Journal of Experimental Psychology: Learning, Memory and Cognition. 


Virtual reality for rats shows how different brain functions cooperate during navigation

Some people are better navigators than others (men, on average, are often said to outperform women at such tasks). But whether you can make your way effortlessly through the woods to reach a safe house or get hopelessly lost walking home from a different bus stop, it doesn’t make that much of a difference at a sensory level. Navigation is often taken for granted, but the truth is it’s one of the most complex neurological functions of the brain, one which requires a great deal of energy and coordination. This fundamental skill is paramount to avoiding threats and locating food (the reward), and through the mechanisms of evolution, which promote survival traits, it has steadily improved generation after generation.

Rat in virtual reality. (c) UCLA

The connection between spatial reasoning and reward (anticipating and actually locating food) has been very difficult to measure, mainly because existing technology didn’t permit studying both simultaneously while an animal was moving. A team of researchers at UCLA has, however, devised an ingenious multisensory virtual world for rats in order to understand how the brain processes the environmental cues available to it, and whether various regions of the brain cooperate in the task.

Rats are placed inside a sort of cube, with displays on each side, and are trained to navigate an environment, which changes each time, via a trackball in order to reach their reward (sugar water). Since the animal moves on the trackball, it is actually stationary, but it is given the illusion of movement aided by visual and auditory cues.

Previously, the same team of UCLA researchers, led by neurophysicist Mayank Mehta, discovered how individual brain cells compute the distance the subjects traveled. All animals, including humans, need to know where they’re located at any given moment in order to compare it to their reference frame and navigate: which way is left, right, up, down, and so on. How reward anticipation and reward seeking or navigation are connected had escaped scientists for some time.

“Look at any animal’s behavior,” Mehta said, “and at a fundamental level, they learn to both anticipate and seek out certain rewards like food and water. But until now, these two worlds — of reward anticipation and navigation — have remained separate because scientists couldn’t measure both at the same time when subjects are walking.”

Navigation requires the animal to form a spatial map of its environment so it can walk from point to point. Anticipating a reward requires the animal to learn how to predict when it is going to get the reward and how to consume it. Mehta and colleagues, using their virtual rat environment, have now found a way to correlate the two.

The rat MATRIX

While the rats were navigating their environment in search of the reward (food), visual and auditory cues were played. When both sound and visuals were present, the rats used both their legs and tongue in harmony to navigate and easily locate the feed tube. Yum! This confirmed a long-held expectation that different behaviors are synchronized. When the visual cues were shut off and only the sound remained, the rats’ legs seemed ‘lost’ as the rodents walked about randomly, but their tongues showed a clear map of space, as if the tongue knew where the food was.

“They demonstrated this by licking more in the vicinity of the reward. But their legs showed no sign of where the reward was, as the rats kept walking randomly without stopping near the reward,” he said. “So for the first time, we showed how multisensory stimuli, such as lights and sounds, influence multimodal behavior, such as generating a mental map of space to navigate, and reward anticipation, in different ways. These are some of the most basic behaviors all animals engage in, but they had never been measured together.”

Previously, Mehta said, it was thought that all stimuli would influence all behaviors more or less similarly.

“But to our great surprise, the legs sometimes do not seem to know what the tongue is doing,” he said. “We see this as a fundamental and fascinating new insight about basic behaviors, walking and eating, and lends further insight toward understanding the brain mechanisms of learning and memory, and reward consumption.”
The study results were reported in the journal PLOS ONE.

How language affects the way toddlers learn to count

(c) UC San Diego Language and Development Lab

A new study by a team of international scientists found that English-speaking toddlers learn the concept of the number “one” faster than Japanese- and Chinese-speaking kids, while Arabic- and Slovenian-speaking kids learn to grasp the idea of the number “two” faster than their English-speaking counterparts. The study provides a new set of evidence supporting the already entrenched idea that language affects the way we grasp numbers at an early age.

Some languages have completely different clauses and noun agreements, which apparently influence cognition. American linguist Benjamin Whorf famously argued that Eskimos have 200 words for snow, indicating that they think differently about the substance than do, say, English speakers. Other scientists have disputed that the word count is that high, or that it really reflects different ways of thinking. Take music, for instance: Japanese speakers sense rhythm differently than Westerners do, something that’s been attributed to both culture and language.

Examples could go on, but it’s interesting to see how children who are just beginning to speak differ as a function of their mother tongue. The researchers tested dozens of two- to four-year-old English, Arabic, and Slovenian speakers, asking them to perform tasks like “Put two buttons in the box” and answer questions like “What’s on this card?”.

[RELATED] Humans think more rationally in a foreign language, study finds

Regardless of age, far more Arabic and Slovenian speakers knew the concept of “two” than English speakers. The difference is quite staggering: 42 percent of the Slovenian two-year-olds knew “two,” while only four percent of English two-year-olds did. Also, Slovenian and Arabic toddlers were more likely to grasp the number two than Russian, Japanese, and Chinese toddlers. Why? Slovenian, for instance, has distinct noun forms for singular nouns, nouns in twos (a dual form), and nouns in threes or greater: one button is a gumb, two buttons are gumba, and three or more buttons are gumbi. This noun agreement, unlike English’s bare singular and plural, likely helps the toddlers understand small numbers better.

But the lead doesn’t last long. As they get older, English-speaking kids actually outperform the Slovenian kids on higher numbers. What the study ultimately shows is that language plays an important role in acquiring low numbers.

The findings were reported in the journal Proceedings of the National Academy of Sciences. [via Pop Sci]

[NOW READ] Babies can tell two languages apart as early as seven months old


Chimps also ‘think about thinking’, much like humans


Our close primate relatives, chimpanzees, have constantly amazed us with their incredible cognitive abilities and personality traits that are so similar to our own. If you believe much of what you undertake today is limited to human cognition only, think again. Chimps do it too – thinking about thinking, that is – as the findings of recent research by scientists at Georgia State University and the University at Buffalo show.

Chimps are our closest relatives, sharing 98% of the human genome, which might explain a bit why their social, cognitive and even emotional display is remarkably similar to that found in humans. Chimps have been shown to be self-aware (possess consciousness), have a sense of fairness, solve puzzles just for fun and even hold elections!

That’s remarkably human-like, but this recent demonstration of chimp metacognition – “thinking about thinking” or “knowing about knowing” – puts things into a whole new perspective. Though the term gets thrown around loosely in educational psychology discussions, and can be off-putting since not a lot of people know what it means, metacognition in itself is nothing new; it highlights an important hallmark of intelligence. Metacognition is believed to play a fundamental role in learning, since it’s only when you begin to examine your own train of thought that you can begin to control what goes on in your environment and turn your thoughts into actions. Activities such as planning how to approach a given learning task, monitoring comprehension, and evaluating progress toward the completion of a task are metacognitive in nature.

Existential chimps

In order to assess a chimp’s ability to recognize its own cognitive states, the researchers devised an experiment to query the animals about their states of knowing or not knowing. Chimps at Georgia State’s LRC, like some in other labs, have been trained to use a language-like system of symbols to name things, which came in really handy later on for communicating their thoughts and ideas.

The chimps were tasked with naming which food was hidden in a particular location by typing the symbol for the respective type of food. For instance, if a banana was hidden, the chimp would report this fact by pressing the banana symbol on a keyboard of symbols, gaining the food in the process.

But then, the researchers provided chimpanzees either with complete or incomplete information about the identity of the food rewards.

In some cases, the chimpanzees had already seen what item was available in the hidden location and could immediately name it by touching the correct symbol without going to look at the item in the hidden location to see what it was.

In other cases, the chimpanzees could not know what food item was in the hidden location, because either they had not seen any food yet on that trial, or because even if they had seen a food item, it may not have been the one moved to the hidden location.

So, basically, the chimps named the food items outright when they knew these were there, and went to look first, to be sure, when they needed more information before naming them.

“This pattern of behavior reflects a controlled information-seeking capacity that serves to support intelligent responding, and it strongly suggests that our closest living relative has metacognitive abilities closely related to those of humans,” the researchers note.

The findings are important not just because they prove that yet another important cognitive trait, thought to be reserved solely for humans, is present in non-human primates as well, but also because they may help shed light on the emergence of the self-reflective mind during humans’ cognitive evolution. The paper was published in Psychological Science, a journal of the Association for Psychological Science.

Dogs understand humans’ point of view – are much more likely to steal food at night, when they can’t be seen

A study conducted by Dr. Juliane Kaminski of the University of Portsmouth’s Department of Psychology concluded that when humans forbid dogs from eating food, dogs are four times more likely to steal the food in the dark, when they think humans can’t spot them.


It’s interesting to see how dogs actually adapt to what they believe humans can see, basically understanding our point of view.

“That’s incredible because it implies dogs understand the human can’t see them, meaning they might understand the human perspective,” Dr. Kaminski said in a press statement.

Dogs based their stealing strategies not on whether they themselves could see in the darkness, but on whether they believed humans could see in low-light conditions. According to Dr. Kaminski, humans readily attribute qualities and emotions to other living things – and this study is quite a slap on the wrist in that regard.

The study involved 42 female and 42 male domestic dogs aged one year or older. Kaminski made sure the selected dogs were comfortable sitting in a dark room without their owners. The report states that the tests were complex and involved many variables, to rule out that the dogs were basing their decisions on other, simpler associations – for example, a Pavlovian reflex implying that darkness means food. The research concluded:

“The results of these tests suggest that dogs are deciding it’s safer to steal the food when the room is dark because they understand something of the human’s perspective.”

The research was published in Animal Cognition.