Tag Archives: cognitive ability


Genetic variant explains why women are more prone to Alzheimer’s


Photo: e-manonline.com

Like a nail that refuses to budge, Alzheimer’s has been irritating neuroscientists for decades. After so many years and billions of dollars’ worth of research, the underlying causes and mechanisms of the gruesome neurodegenerative disease have yet to be identified – never mind a cure – though hints suggest genetics have a major role to play. Clearly, Alzheimer’s is a formidable opponent, and while we’ve yet to fully understand it, scientists are doing their best, and every year another piece is added that might one day complete the puzzle.

For instance, a team of researchers at Stanford confirmed earlier findings suggesting that a genetic variant makes women more prone to the disease than men. This is evidence that the disease affects the sexes unequally and suggests that future treatments may need to be prescribed in a gender-specific manner.

It’s those genes

In 1993, researchers found that elderly people who inherit a gene variant called apolipoprotein E4 (APOE4) are more prone to the common form of Alzheimer’s that strikes late in life. Other variants have also been identified and linked with Alzheimer’s: APOE3, the risk-neutral variant, and the much rarer APOE2, which actually decreases a person’s risk of developing the disease. A bit later, in 1997, researchers combed through more than 40 studies, analyzing data on 5,930 Alzheimer’s patients and 8,607 dementia-free elderly people, and found that women with the APOE4 variant were four times more likely to have Alzheimer’s than people with the more common, neutral form of the gene.


Photo: triumf.ca

That’s a really big difference, but for some reason the findings never became widely known. Michael Greicius, a neurologist at Stanford University Medical Center in California, rediscovered them in 2008 and decided the question was worth a fresh investigation. He and his team first performed neuroimaging on patients and found from the brain scans that women with the APOE4 variant had poor connectivity in brain networks typically afflicted by Alzheimer’s, even though they showed no symptoms of the disease. Something was clearly off.

A more comprehensive view

Greicius and colleagues decided a longitudinal study was needed to see the full extent of this genetic variance, so they pulled data from 2,588 people with mild cognitive impairment and 5,496 healthy elderly people who visited national Alzheimer’s centers between 2005 and 2013. Every participant was logged by genotype (APOE4, APOE2, or neither) and gender. Most importantly, each participant was followed up to see whether the mild impairments had progressed into full-blown Alzheimer’s.

Confirming that APOE4 is indeed a risk gene, male and female participants with mild cognitive impairment who carried the variant progressed to Alzheimer’s disease more readily than those without it, and at roughly equal rates. Among healthy seniors, however, women who inherited the APOE4 variant were twice as likely as noncarriers to develop mild cognitive impairment or Alzheimer’s disease, whereas APOE4 males fared only slightly worse than men without the variant. This is a full step ahead of the 1997 study because it tells us more about how the gene variant potentially leads to Alzheimer’s, especially in women.

The findings will most likely have significant implications for how Alzheimer’s is treated. Interestingly, according to the researchers, some previous studies have shown side effects when treating patients who carry the APOE4 variant, but those studies were not broken down by gender. Moreover, it’s possible that some treatments are more effective at treating symptoms in men than in women, and this is definitely worth taking into account.



Musical training doesn’t make you smarter, but that doesn’t mean it’s not important

Photograph: Christopher Furlong/Getty Images


Playing an instrument comes with a wide range of benefits, especially for children. It teaches them discipline and how to focus on the task at hand. It also fuels creativity. There’s a well-entrenched myth, however, that playing an instrument makes you smarter – that it somehow improves your cognitive abilities. The idea is so widespread that nearly 80% of American adults agree with it. Where did this notion come from, and is it true in the first place? Researchers at Harvard University performed a thorough analysis of the currently published literature on the matter and, after conducting a study of their own, concluded that there are no significant cognitive benefits from music lessons.

The “Mozart Effect”

It all started with a study published in 1993 in the journal Nature, which concluded that listening to music improves temporal and spatial reasoning. The findings – which became known as the “Mozart effect” – were featured in the press all over the world as confirmation of something everybody felt they already knew. Follow-up studies later debunked the 1993 study’s methodology, but somehow people hung on to the notion. Nevertheless, other researchers became interested in taking the question further and studying whether taking music lessons can improve cognitive skills.

So far, dozens of studies have explored whether and how music and cognitive skills might be connected. Samuel Mehr, a Harvard Graduate School of Education (HGSE) doctoral student, looked at most of the scientific literature on the subject but could find only five studies that used randomized trials – without which apparent causal effects on cognition can easily be skewed. Of the five, only one showed an unambiguously positive effect, and it was so small – just a 2.7-point increase in IQ after a year of music lessons – that it was barely enough to be statistically significant.

“The experimental work on this question is very much in its infancy, but the few published studies on the topic show little evidence for ‘music makes you smarter,’” Mehr said.

Playing for the love of music, not for the love of brains

Mehr and colleagues decided to conduct their own study on the subject and recruited 29 parents and their 4-year-old children from the Cambridge area. Before starting, the children’s vocabulary skills and the parents’ musical aptitudes were evaluated. Then each parent-child pair was assigned to one of two classes: either music lessons or visual arts lessons.

“We wanted to test the effects of the type of music education that actually happens in the real world, and we wanted to study the effect in young children, so we implemented a parent-child music enrichment program with preschoolers,” Mehr said. “The goal is to encourage musical play between parents and children in a classroom environment, which gives parents a strong repertoire of musical activities they can continue to use at home with their kids.”

The researchers also wanted to look more closely at any effects music lessons might have on cognition, so they tested for improvements in specific areas of cognition rather than relying on a standard IQ score alone.

“Instead of using something general, like an IQ test, we tested four specific domains of cognition,” Mehr said. “If there really is an effect of music training on children’s cognition, we should be able to better detect it here than in previous studies, because these tests are more sensitive than tests of general intelligence.”

The assessments showed that children who received music training performed slightly better at one spatial task, while those who received visual arts training performed better at the other. Still, with only 29 children involved and effects this slight, the differences were not statistically significant. So the study was replicated with 45 parents and children; this time half received music training, while the other half received no lessons at all – not even visual arts.

Just as in the first study, Mehr said, there was no evidence that music training offered any cognitive benefit. Even when the results of both studies were pooled to allow researchers to compare the effect of music training, visual arts training, and no training, there was no sign that any group outperformed the others.

“There were slight differences in performance between the groups, but none were large enough to be statistically significant,” Mehr said. “Even when we used the finest-grained statistical analyses available to us, the effects just weren’t there.”

Music doesn’t make you smarter – but it’s no less important!

Parents thinking of sending their kids to music lessons just to make them smarter should think again. If that is their only goal, they’re wasting good money and time. But listening to or playing music isn’t about getting smarter. There’s much more to it – there are clear benefits to learning an instrument: it builds self-confidence, social cohesion, and discipline, and it nurtures the soul.

“There’s a compelling case to be made for teaching music that has nothing to do with extrinsic benefits,” he said. “We don’t teach kids Shakespeare because we think it will help them do better on the SATs. We do it because we believe Shakespeare is important.

“Music is an ancient, uniquely human activity. The oldest flutes that have been dug up are 40,000 years old, and human song long preceded that,” he said. “Every single culture in the world has music, including music for children. Music says something about what it means to be human, and it would be crazy not to teach this to our children.”

The findings were reported in a paper published in the journal PLoS One.



Cockatoos exhibit remarkable self-control akin to humans

You might be used to seeing birds peck at grains as soon as you throw food in front of them, so it’s no wonder you might find this surprising. Researchers at the University of Vienna devised a cognitive experiment centered on one of the most intelligent types of bird – the cockatoo – and found that the birds are capable of self-control, restraining themselves from immediately eating food placed at their disposal despite being highly tempted to do so. The findings suggest that yet another trait typically believed to be found only in humans and primates is present in other animals as well.

The experiment itself was inspired by a famous psychological study from the 1970s that examined self-control in young children, in order to see how early this highly valuable cognitive trait develops. In the ‘Stanford Marshmallow Experiment’, children were asked to refrain from eating the marshmallow placed right in front of them and were promised a second one if they waited. This is a textbook example of economic decision making, and for many years the ability to wait for a delayed reward was thought to be found in humans only.

Now, simply waiting might not seem like much to you, but the truth is that it demonstrates an important cognitive ability believed to be present only in large-brained animals. It’s not just about controlling one’s instincts and impulses, but about foresight – the ability to assess present conditions and establish whether acting now or staying passive will yield a greater reward in the future.

The cockatoo experiment.

For their experiment, the Austrian researchers chose to study an Indonesian cockatoo species – the Goffin’s cockatoo. The birds were given pecan nuts and had to return them to the researchers after a time delay. If they returned the food without nibbling on it, they then received cashew nuts – an even greater treat.

“If the initial food item had not been nibbled, the bird received another reward of an even more preferred food type or of a larger quantity than the initial food,” explained researcher Isabelle Laumer (pictured, top of page). “We picked pecan nuts as an initial reward as they are highly liked by the birds and would under normal circumstances be consumed straight away, [but] we found that all 14 of the birds waited for food of higher quality – such as a cashew nut – for up to 80 seconds.”

Lead researcher Alice Auersperg was particularly impressed by the cockatoos’ ability to assess economic advantages, likening them to human economic agents flexibly trading off between immediate and future benefits.

“They did so, relative not only to the length of delay, but also to the difference in trade value between the ‘currency’ and the ‘merchandise,’ tending to trade their initial items more often for their most preferred food, than for one of intermediate preference value,” she noted.

What’s perhaps more impressive is the extent of their self-control. In the marshmallow experiment, children were faced with the choice of eating a marshmallow placed in front of them. How many of them would have been able to resist the temptation if the food had been placed right in their mouths, as in the case of the cockatoos, which have no way to carry the food other than in their beaks?

Cockatoos are parrots and only distantly related to the corvids – the family of highly intelligent birds that includes ravens, crows and rooks.

“Until recently, birds were considered to lack any self-control. When we found that corvids could wait for delayed food, we speculated which socio-ecological conditions could favor the evolution of such skills. To test our ideas we needed clever birds that are distantly related to corvids. Parrots were the obvious choice and the results on Goffins show that we are on the right track,” said Thomas Bugnyar, one of the study authors.

The study’s findings were reported in a paper published in the journal Biology Letters.


Cognitive computing milestone: IBM simulates 530 billion neurons and 100 trillion synapses

First initiated in 2008, IBM’s work under the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program has as its final goal the development of a new cognitive computer architecture based on the human brain. Recently, IBM announced it has reached an important milestone for the program, after the company successfully simulated 530 billion neurons and over 100 trillion synapses on one of the world’s most powerful supercomputers.

It’s worth noting, however, before you get too excited, that the IBM researchers have not built a biologically realistic simulation of the complete human brain – that goal is still many years away. Instead, the scientists devised a cognitive computing architecture called TrueNorth, with 10^10 (10 billion) neurons and 10^14 (100 trillion) synapses, inspired by the number of neurons and synapses in the human brain; the design is modular, scalable, non-von Neumann, and ultra-low power. The researchers hope that in the future this essential step will allow them to build an electronic neuromorphic machine technology that scales to biological levels.

 “Computation (‘neurons’), memory (‘synapses’), and communication (‘axons,’ ‘dendrites’) are mathematically abstracted away from biological detail toward engineering goals of maximizing function (utility, applications) and minimizing cost (power, area, delay) and design complexity of hardware implementation,” reads the abstract for the Supercomputing 2012 (SC12) paper (full paper link).
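To make that abstraction concrete, here is a minimal, purely illustrative Python sketch of a leaky integrate-and-fire network along those lines: neurons reduced to a membrane value (“computation”), synapses to a weight matrix (“memory”), and spikes to the messages passed between cells (“communication”). This is a generic textbook model with arbitrary parameters, not IBM’s TrueNorth or Compass code.

import numpy as np

def simulate(weights, external_input, steps=100, leak=0.9, threshold=1.0):
    """weights[i, j] is the synaptic strength from neuron j to neuron i."""
    n = weights.shape[0]
    potential = np.zeros(n)            # membrane potential of each neuron ("computation")
    spikes = np.zeros(n, dtype=bool)   # which neurons fired on the last step ("communication")
    history = []
    for _ in range(steps):
        # integrate incoming spikes plus external drive, with a leak toward zero
        potential = leak * potential + weights @ spikes.astype(float) + external_input
        spikes = potential >= threshold
        potential[spikes] = 0.0        # reset neurons that just fired
        history.append(spikes.copy())
    return np.array(history)           # steps x neurons spike raster

# Example: a random 1,000-neuron network with weak constant external drive
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=(1000, 1000))   # "memory": the synaptic weights
raster = simulate(weights, np.full(1000, 0.2))
print(raster.sum(), "spikes fired in total")

Scaling a loop like this from a thousand neurons to hundreds of billions, and distributing it across more than a million cores, is exactly the kind of job that calls for a full-scale supercomputer.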

Steps towards mimicking the full-power of the human brain


Authors of the IBM paper (left to right): Theodore M. Wong, Pallab Datta, Steven K. Esser, Robert Preissl, Myron D. Flickner, Rathinakumar Appuswamy, William P. Risk, Horst D. Simon, Emmett McQuinn, Dharmendra S. Modha. (Photo Credit: Hita Bambhania-Modha)

IBM simulated the TrueNorth system running on the world’s fastest operating supercomputer, the Lawrence Livermore National Laboratory (LLNL) Blue Gene/Q Sequoia, using 96 racks (1,572,864 processor cores, 1.5 PB of memory, 98,304 MPI processes, and 6,291,456 threads).

IBM and LLNL achieved an unprecedented scale of 2.084 billion neurosynaptic cores containing 53×10^10 (530 billion) neurons and 1.37×10^14 (137 trillion) synapses, running only 1,542 times slower than real time.


The tiny neurosynaptic core produced by IBM. (c) IBM

“Previously, we have demonstrated a neurosynaptic core and some of its applications,” continues the abstract. “We have also compiled the largest long-distance wiring diagram of the monkey brain. Now, imagine a network with over 2 billion of these neurosynaptic cores that are divided into 77 brain-inspired regions with probabilistic intra-region (“gray matter”) connectivity and monkey-brain-inspired inter-region (“white matter”) connectivity.

“This fulfills a core vision of the DARPA SyNAPSE project to bring together nanotechnology, neuroscience, and supercomputing to lay the foundation of a novel cognitive computing architecture that complements today’s von Neumann machines.”

According to Dr. Dharmendra S. Modha, IBM’s cognitive computing manager, his team’s goal is to mimic the processes of the human brain. While IBM’s competitors focus on computing systems that mimic the left side of the brain, processing information sequentially, Modha is working on replicating functions of the right side of the brain, where information is processed in parallel and where incredibly complex brain functions lie. To this end, the researchers combine neuroscience and supercomputing.

Consider that the room-sized, cutting-edge, billion-dollar technology IBM uses to scratch the surface of artificial cognition still doesn’t come near the capabilities of our brain, which occupies a volume comparable to a 2L bottle of water and needs less power than a light bulb. The video below features Dr. Modha explaining his project in an easy-to-understand manner, and it’s only five minutes long.


source: KurzweilAI



Computer analyzes fine art like an expert would. Is art only for humans?

Where fine art is concerned – or the visual arts in general, for that matter – complex cognitive functions are at play as the viewer analyzes a work. As you go from painting to painting, especially between different artists, discrepancies in style can be recognized, and trained art historians can pick up even the subtlest brush strokes and identify a particular artist or period on that basis alone. For a computer, this kind of analysis is extremely difficult to undertake. Computer scientists at Lawrence Technological University in Michigan, however, saw this as an exciting challenge and eventually developed software that can accurately analyze a painting, without any human intervention, based solely on visual cues – much as an expert would.

The program was fed 1,000 paintings from 34 well-known artists and was tasked with grouping the artists by artistic movement and providing a map of similarities and influential links. At the broadest level, the program separated the artists into two main, distinct groups: modern (16 painters) and classical (18 painters).

For each painting, the software analyzed 4,027 numerical image content descriptors – quantitative measures of the image’s texture, color, and shapes. Using pattern recognition algorithms and statistical computations, it grouped artists into styles based on their similarities and dissimilarities, and then quantified those similarities.

The artist similarity graph produced by the algorithm.
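As a rough illustration of this kind of pipeline – and emphatically not the researchers’ actual descriptor set, which numbered in the thousands – here is a short Python sketch that computes a handful of crude color and texture features per painting and then clusters artists hierarchically by the similarity of their average feature vectors. The feature choices, the dictionary of image paths, and the Ward-linkage clustering are all assumptions made for the sake of the example.

import numpy as np
from PIL import Image
from scipy.cluster.hierarchy import linkage

def crude_descriptors(path):
    """A tiny stand-in for the study's 4,027 descriptors: per-channel color
    mean and spread, plus a rough edge-intensity measure of texture."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    color_mean = img.mean(axis=(0, 1))                     # average color
    color_std = img.std(axis=(0, 1))                       # color spread
    gray = img.mean(axis=2)
    edge_strength = np.abs(np.diff(gray, axis=0)).mean()   # crude texture proxy
    return np.concatenate([color_mean, color_std, [edge_strength]])

def cluster_artists(paintings_by_artist):
    """paintings_by_artist maps an artist's name to a list of image file paths
    (hypothetical data). Returns the artist names and a hierarchical linkage
    built from each artist's average descriptor vector."""
    names = list(paintings_by_artist)
    profiles = np.array([
        np.mean([crude_descriptors(p) for p in paths], axis=0)
        for paths in paintings_by_artist.values()
    ])
    return names, linkage(profiles, method="ward")         # group similar "styles"

With richer descriptors and many paintings per artist, a dendrogram built from such a linkage is the sort of structure that can place stylistically related painters on neighboring branches.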

From these two broad groups, the software sub-categorized even further. For instance, it automatically placed the High Renaissance artists Raphael, Leonardo Da Vinci, and Michelangelo very close to one another. The software also branched sub-groups by similarity, so artists like Gauguin and Cézanne, both considered post-impressionists, were identified by the algorithm as similar in style to Salvador Dalí, Max Ernst, and Giorgio de Chirico, who are all considered by art historians to be part of the surrealist school.

[RELATED] Computer recognizes attractiveness in women

The researchers conclude their “results demonstrate that machine vision and pattern recognition algorithms are able to mimic the complex cognitive task of the human perception of visual art, and can be used to measure and quantify visual similarities between paintings, painters, and schools of art.”

While a computer can analyze art and reach conclusions similar to those of an expert human art historian, a serious question arises: is a computer, in this case, able to understand art? And if so, will a computer ever be able to feel art?

The findings were reported in the Journal on Computing and Cultural Heritage (JOCCH).

[via KurzweilAi]



Pulling all-nighters before tests is counter-productive – does more harm than good

The findings of new research at UCLA suggest that cramming all night before a big test – something most of us have gone through at least once, with mixed results – is generally counterproductive, as sleep deprivation takes its toll on cognitive performance.

Whether we’re talking about high school or university – especially the latter – we’ve all experienced situations where studying for a mid-term or an assignment got postponed until the last minute. Coffee soon became a much-needed beacon of light as night turned to day, and you crammed in whatever extra information you could before the exam. This will help, or so you thought. According to the researchers, however, regardless of how much a student generally studies each day, sacrificing a night’s sleep for extra studying hours does more harm than good.

“These results are consistent with emerging research suggesting that sleep deprivation impedes learning,” says Andrew J. Fuligni, a UCLA professor of psychiatry.

“The biologically needed hours of sleep remain constant through their high school years, even as the average amount of sleep students get declines,” he continues.

Based on their findings, the scientists advise that a consistent study schedule is best for learning – for most people, at least.

Other research has shown that the average adolescent sleeps 7.6 hours per night in 9th grade, a figure that declines to 7.3 hours in 10th grade, 7.0 hours in 11th grade, and 6.9 hours in 12th grade. “So kids start high school getting less sleep than they need, and this lack of sleep gets worse over the course of high school.”

The findings were published in the journal Child Development.

via KurzweilAI. Image credit


Chewing gum makes you smarter

Chewing without actually eating seems pretty weird if you think about it, yet it’s a hugely popular habit, as the billion-dollar chewing gum industry attests. If you’re one of the regular chewers, here’s something to lighten your mood for the day: chewing gum increases your cognitive abilities, albeit for a short burst of time, as researchers from St. Lawrence University concluded in a recently published study. Wish you’d known that before your teacher kicked you out of class in the seventh grade, right?

The scientists asked 159 volunteers, whom they divided into two groups (chewers and non-chewers), to solve a series of tests, including difficult logic problems and repeating numbers backwards. The researchers found that people who were chewing gum during testing outperformed non-chewers, as a whole, in five of the six tests. The only exception was the verbal fluency test, in which subjects were asked to name as many words as they could from various lexical families.

When I first read the paper’s byline, I thought it all had something to do with a sugar rush, but apparently this is not the case. The chewers were given both sugared and sugar-free gum, with no significant discrepancies in their results. The performance boost from chewing gum, however, is short-lived, with improvements lasting only through the first 20 minutes of testing.

If it doesn’t have anything to do with sugar or glucose, how does chewing gum improve your mental abilities? It all comes down to the act of chewing itself – anything, not just gum – through a process called “mastication-induced arousal”. This acts as a boost to the brain, waking us up and allowing better focus on the task at hand. After the boost phase is over, the lack of improvement in cognitive function when gum is chewed throughout testing may be due to interference effects caused by cognitive and masticatory processes sharing resources, the researchers suggest.

Mimicry in the wild: what it is and how it works

A technique I call ‘advanced mimicry’ is the main driver of humanity’s cognition – or at least it seems that way to this seventeen-year-old student.

How many toads do you see in this picture?

Mimicry is defined as “the act, practice, or art of mimicking”. This definition is almost self-referencing, so I feel the inclusion of the definition of mimicking, “apt at or given to imitating; imitative; simulative,” necessary. The term ‘advanced’ is widely understood, but to avoid confusion and to develop continuity I am including the dictionary.com definition of the particular meaning the word has in this specific context. The definition I’m using here is “ahead, far or further along in progress, complexity, knowledge, skill, etc.”
Combining these terms in “spread-out” definition form accurately describes the technique I wish to discuss here. In its most basic sense, ‘advanced mimicry’ describes imitating at a level that is more complex than other forms of imitation.

This description does not do the idea justice. When I say that humanity’s cognition is due to the use of advanced mimicry, I do not mean that humanity is intelligent because it is better at copying than other species. That’s ridiculous. The idea has to do with early psychological development in children, the resultant psychological condition of the adults and linguistics.

To my understanding, the developmental process of cognitive ability works like this:
A child is conceived already sharing genetic characteristics of both parents. All input from this point onwards contributes to the psyche and physical characteristics of this person: the variations in all the nutrients, minerals and toxins the child takes in while growing inside the mother, the sounds that penetrate the womb, the day-night cycles of the mother’s specific geographical location. The development is far more complex than I understand, but in the most basic of descriptions I can confidently state that literally everything about an environment works to shape us in fundamental ways.

Further along the timeline is the development of language. The child is subjected to the language native to the environment since conception. In the first 1-2 months, the child will begin making unintelligible noises, practicing replication of the sounds that it has been hearing throughout its existence. After somewhere between 18 months and 2 years the child will begin to speak almost coherently, attempting to form full words. These attempts are widely regarded as the results of underdeveloped physical ability and mental abilities including emotional understanding, but I personally feel strongly that these sounds are the basis of developing the mimicry of language into what we call cognitive ability.

Further contributing to my confidence in this particular idea is the speculation that a person’s native language shapes their thought processes at a fundamental level. The idea was referenced in George Orwell’s novel 1984, which describes the development of the fictional language ‘Newspeak’ as a ploy to remove specific thoughts from the minds of the populace by removing the words that relate to those specific, undesirable thoughts. The artificial languages Lojban and Esperanto are both exercises in understanding exactly how languages affect the development of cognition, and they simultaneously exist as efforts to work against the cultural biases associated with language.

While these artificial languages are of little interest to the majority of people, their spreading use implies a certain level of recognition in the fields of social psychology and linguistics.

Compounding these ideas points to a phenomenon that accurately describes people’s development: the language you learn – essentially by trying to reproduce the sounds you hear around you all the time – fundamentally affects the way you understand the world and create ideas.

This probably relates to the tribal nature of humanity, a throwback characteristic from our history. Evolutionarily, we would have had an advantage in understanding the world in the same basic way as our close family, protectors, friends, mates and dependants. Additionally, this skill allows people to quickly “adapt” to situations: a polar bear needs hundreds of generations to evolve its fur coat, while a human child needs only one or two instances of watching a parent kill a polar bear and take the fur to know how to do it. According to the neurologist V.S. Ramachandran, we have brain cells called “mirror neurons” that respond when we visually register other people being touched, effectively exhibiting a kind of mechanical empathy. This indicates a natural means for humans to mimic the actions of other humans, above and beyond most other animals’ methods of ancestral memory.

This idea, however original it seems to me, may have been documented already under a different name – or maybe even the same name – and I may simply have forgotten an article discussing it. I have read that most cases of professional plagiarism come from authors reading some obscure or rare book, pamphlet, or even decrepit flyer; the ideas presented in those forsaken pieces of literature lose their association with the original author, and even with the piece itself, and the reader comes to feel that the idea is a personal creation.

Ironically – or, to be more accurate, idiosyncratically – I have forgotten the title and author of the source for that particular idea. However, this makes for an excellent illustrative example of the idea I’m discussing: my idea is similar, but not identical, to something whose source I have forgotten. Moreover, I have broadened the phenomenon from simple mimicry, regardless of source, to the claim that this skill is responsible for the whole of humanity’s creation of what we call original thought.

As artist Pablo Picasso once said, “Good artists copy, Great artists steal.” Or maybe it was graffiti artist Banksy. Regardless of the source, the quote remains essential to my argument: humans are not capable of creation. Instead, we excel at mimicry.


Picture source: 1 2 3