Tag Archives: brain activity

After scanning canine brains: “Dogs are people, too,” says neuroscientist

Credit: Pixabay.

Most dog owners will tell you that their pets are awesome. They love their dogs, and their dogs seem to love them back. But do dogs genuinely feel love, or any kind of positive emotion, the way a human does? This is a question that has eluded scientists for a long time.

It’s easy to dismiss a dog’s tremendous enthusiasm whenever its owner comes home as mere attachment: the dog may simply view the human as a walking, breathing food dispenser and nothing more.

But breakthrough research might change the way people view dogs forever. According to neuroscientist Gregory Berns, “dogs are people, too.”

He reached this conclusion after performing MRI scans on over a dozen dogs, finding that the same brain region responsible for positive emotions in humans is activated in dogs as well.

[Also Read: Do dogs dream?]

To infer animal sentience and other neurological traits, scientists rely on animal behaviorism. You can’t ask a dog how it feels, or what it’s thinking. As such, it’s been considered an extremely challenging area of research. By using brain scans, however, one can bypass having to directly ‘speak’ to an animal. Instead, you let the brain do all the talking.

But this doesn’t mean performing MRI on animals is straightforward. The machines are noisy, claustrophobic, and generally unpleasant even for humans, and for the scans to work you have to stay completely still. You can imagine how difficult it is to get a hyper labrador to stay put while all kinds of machinery are diverting its attention. Typically, veterinarians anesthetize dogs that need brain scans, but anesthesia renders any kind of emotion monitoring useless.

Dog emotions, not too different from ours

Berns tackled this issue by training dogs, through painstaking reward exercises, to stay still inside a running MRI scanner, and in doing so he performed the first awake-dog MRIs, as reported in PLOS ONE. Inside the scanner, the dogs’ brain activity was measured in response to a two-hand signal (which they had learned to associate with food), as well as to the scents of familiar and unfamiliar dogs and humans.

Both the human and dog brains are strikingly similar in function and structure in one key region: the caudate nucleus. Located between the brainstem and the cortex, the dopamine-rich caudate plays a key role in the anticipation of things we enjoy, like food, love, and money — things that are associated with positive emotions.

“Many of the same things that activate the human caudate [part of the brain], which are associated with positive emotions, also activate the dog caudate. Neuroscientists call this a functional homology, and it may be an indication of canine emotions,” Berns wrote in an article for the NY Times.

Berns with one of the dogs from his research. Credit: Gregory Berns.

In response to hand signals indicating food, as well as smells of familiar humans, the canine caudate activity increased. And in preliminary tests, it also activated at the return of an owner who had momentarily stepped out of view.

“The ability to experience positive emotions, like love and attachment, would mean that dogs have a level of sentience comparable to that of a human child. And this ability suggests a rethinking of how we treat dogs,” Berns said.

“Dogs have long been considered property. Though the Animal Welfare Act of 1966 and state laws raised the bar for the treatment of animals, they solidified the view that animals are things — objects that can be disposed of as long as reasonable care is taken to minimize their suffering.”

“But now, by using the M.R.I. to push away the limitations of behaviorism, we can no longer hide from the evidence. Dogs, and probably many other animals (especially our closest primate relatives), seem to have emotions just like us. And this means we must reconsider their treatment as property.”

So, do dogs truly love us? We can’t be sure, but the next time you see your dog wag its tail, you can be reasonably confident it’s happy, brain scans and all.

[NOW READ] Study proves humans can read a dog’s emotions just by looking at its face

Gregory Berns is a professor of neuroeconomics at Emory University and the author of “How Dogs Love Us: A Neuroscientist and His Adopted Dog Decode the Canine Brain.”


Helmet-like wearable scanner can record brain activity in moving subjects

The new generation of brain scanner can be worn like a helmet, allowing patients to move naturally whilst being scanned. Credit: University of Nottingham.


It looks like something out of a sci-fi blockbuster, but this one-of-a-kind helmet might revolutionize human brain imaging. Using this lightweight magnetoencephalography (MEG) system, scientists can measure the brain activity of participants while they perform natural movements like nodding, stretching, interacting with other people, or playing sports. This used to be impossible with traditional MEG scanners, which are fixed and cumbersome and require participants to stay completely still while brain activity is measured. As such, this wacky-looking helmet could reveal insights into the brain’s neural pathways and mechanisms that would otherwise have been impossible to ascertain.

Neurons communicate with each other using chemicals called neurotransmitters. However, to transmit the actual message from the receiving neuron’s dendrites to its own axon terminals, a different medium is used: electricity. When a neurotransmitter such as dopamine triggers the receiving neuron to fire, it sends an electrical “action potential” along its length, similarly to how an electrical pulse flows down a metal wire. Instead of electrons moving through a circuit, an action potential in a neuron occurs because ions move across the neuronal membrane.
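The threshold-and-fire behavior described above can be sketched with a toy leaky integrate-and-fire model. This is a textbook simplification, not code from any study covered here, and all parameters are purely illustrative:

```python
# Toy leaky integrate-and-fire neuron (illustrative only). Input drives
# the membrane potential up, a leak pulls it down, and crossing the
# threshold produces an action potential followed by a reset.

def simulate_lif(input_current, threshold=1.0, leak=0.9, steps=100):
    """Return the time steps at which the toy neuron fires."""
    v = 0.0          # membrane potential, arbitrary units
    spikes = []
    for t in range(steps):
        v = v * leak + input_current   # leak plus constant drive
        if v >= threshold:             # depolarization reaches threshold
            spikes.append(t)           # the neuron fires...
            v = 0.0                    # ...and the potential resets
    return spikes

# Weak input never reaches threshold; stronger input fires repeatedly.
print(len(simulate_lif(0.05)), len(simulate_lif(0.2)))  # 0 14
```

The all-or-nothing character of the spike is what matters here: below threshold the potential just decays, while above it the neuron fires a full action potential every time.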

This current generates weak magnetic fields which can be detected right outside the scalp. A MEG measures these magnetic fields, allowing scientists to see which parts of the brain are engaged when undertaking certain tasks. Doctors often rely on MEG to plot a roadmap of the brain that is useful for preoperative and treatment planning for individuals with epilepsy, as well as for patients undergoing surgery to remove a brain tumor or other lesions. In research, MEG has proven indispensable for scientists who are looking to understand human brain function, as well as neurological and psychiatric disorders.

A typical MEG setup. The difference is striking. Credit: University of Nottingham.


Now, British researchers at the University of Nottingham and University College London have come up with a revolutionary new setup: a wearable MEG. Unlike the MEGs in use today, which are incredibly large and weigh half a tonne, the new system is basically a helmet that can be worn while the user moves, something that would normally cause serious imaging problems in a conventional setup.

“The scanner measures electrophysiological brain function – it allows us to pinpoint, with spatial accuracy of a few millimetres and temporal accuracy of a few milliseconds, which parts of the brain are involved when we undertake specific tasks. It can do this in an environment where subjects are free to move around. This is a step change for neuroscientific research, with neuroscientists able to study the brain in a whole new way,” Dr. Matt Brookes, who leads the MEG work in Nottingham, told ZME Science.

The crux of the innovation lies in the new ‘quantum’ sensors that are mounted in a 3-D printed prototype. These sensors are lightweight and can work at room temperature, whereas the sensors employed by a typical MEG have to be kept very cold (-269°C), hence the bulky configuration. The closer these quantum sensors are to the scalp, the better the brain activity signal they can pick up.

However, the sensors are so sensitive that they pick up interference from Earth’s magnetic field. The team of researchers solved this problem by developing special electromagnetic coils, which reduce Earth’s magnetic field around the scanner by a factor of 50,000.

“One of the biggest challenges was that, in order to develop the system such that subjects could move their head, we had to null the Earth’s magnetic field in a region surrounding the head. Our system is housed in a magnetically screened room, which reduces the earth’s field by approximately a factor of 2,000, but that wasn’t good enough – we needed a factor of ~50,000. To do this we were able to design and build a novel set of electromagnetic coils. These coils had to be designed to sit on planes either side of the subject, so as not to enclose the person being scanned, or make them claustrophobic. The coils that we designed and built were able to almost completely remove Earth’s field, thus enabling the sensors to operate and imaging data to be captured whilst the subject moved their head,” Brookes explained.
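For a rough sense of the numbers Brookes quotes, a back-of-the-envelope check is easy. The ~50 microtesla value for Earth's field is a textbook approximation, not a figure from the article:

```python
# Back-of-the-envelope check on the shielding figures quoted above.
earth_field_nT = 50_000      # Earth's magnetic field, in nanotesla (~50 uT)
room_factor = 2_000          # attenuation provided by the shielded room
required_factor = 50_000     # total attenuation the quantum sensors need

residual_in_room_nT = earth_field_nT / room_factor  # field left inside the room
coil_factor = required_factor / room_factor         # extra factor the coils supply

print(residual_in_room_nT, coil_factor)  # 25.0 25.0
```

In other words, the screened room still leaves roughly 25 nT around the head, and the coils must supply a further factor of about 25 to hit the ~50,000 total.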


Credit: Wellcome Trust Centre for Human Neuroimaging at UCL.

The new wearable MEG will likely prove revolutionary for research and could enable scanning for patients who traditionally couldn’t use an MEG scanner, such as young children with epilepsy or patients suffering from neurodegenerative disorders like Parkinson’s disease. Tests so far suggest that the MEG helmet works just as well as its fixed counterpart, although it will require further tweaking in order to capture higher-frequency signals.

“The only minor limitation compared to traditional systems is bandwidth. The overwhelming majority of brain activity that MEG measures is in the 1-100Hz band – well within the scope of our sensors. However, there are effects in the brain at a higher frequency (e.g. 300Hz). This would be a challenge for our system – however future developments in quantum sensing may make this possible,” Brookes wrote in an email.

Besides improving bandwidth, the researchers will further refine their prototype so it is more “patient-friendly.” Hopefully, clinical trials will follow soon after.

“This is very much a prototype system – a one of a kind proof of principle machine. Our next step is to make it more patient (and in particular more child) friendly. To do that we intend to make the helmets less intimidating – constructing them from a flexible material so they become more like a ‘scrum-cap’ worn by rugby players. In that way we hope to construct more generic helmets that fit anyone, rather than bespoke helmets that only fit one person,” Brookes said.

The findings appeared in the journal Nature.




What a worm’s brain looks like fired up

These aren’t Christmas lights, but the actual neural activity of Caenorhabditis elegans, a tiny nematode. The brain imaging was done by researchers at Princeton University, and no worm had to be cut open. Instead, the researchers used a special protein that fluoresces in response to calcium.

C. elegans nematode brain activity

When scientists tap the brain, they’re looking for one prime indicator: electrical activity. When a neuron is active, it fires an action potential, a wave of depolarization that travels down its axon toward the neurons it signals to. Traditionally, neuroscientists use a technique called electrophysiology to study these patterns of electrical activity. It’s precise, yet the analysis is limited to a handful of neurons at a time. A more scalable method exploits the fact that when a neuron is active (again, depolarized), calcium flows into it. Using special fluorescent proteins whose brightness depends on whether they bind calcium, scientists can monitor these calcium dynamics and, in turn, the underlying depolarization.
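As a concrete illustration (a generic sketch, not the Princeton team's actual pipeline), the standard way to turn a raw fluorescence trace into an activity signal is the dF/F measure:

```python
# Illustrative dF/F calculation for calcium imaging. Baseline fluorescence
# F0 is estimated from the quiet start of the trace; fractional rises
# above it indicate calcium influx, and hence neural activity.

def delta_f_over_f(trace, baseline_frames=5):
    """Normalize a fluorescence trace to its resting baseline."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# Toy trace: flat baseline, then a calcium transient, then decay.
trace = [10.0, 10.1, 9.9, 10.0, 10.0, 14.0, 18.0, 13.0, 11.0, 10.2]
activity = delta_f_over_f(trace)
print(round(max(activity), 2))  # 0.8 -- the transient as a fraction of baseline
```

Expressing activity as a fraction of baseline is what lets researchers compare brightness changes across neurons whose resting fluorescence differs.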

That’s exactly what the Princeton researchers achieved, allowing them to monitor 77 of the nematode’s 302 neurons in real time as they light up. The results were shared in an amazing video, split into four frames. In the upper left, we see the location of the neurons, while the upper right shows the calcium signaling, which mirrors the neurons’ electrical activity. The lower two panels zoom out: the worm itself (left) and the location of the brain (right).

Using this data, the researchers would like to devise a mathematical model that will allow them to simulate and control the worm’s brain. Previously, other efforts showed how C. elegans can sense magnetic fields, while a more ambitious team from Harvard targeted laser pulses at the worm’s neurons and directed it to move in any direction they wanted, even tricking the worm into thinking there was food nearby.


Humiliation may be the most intense of human emotions

humiliated cat

Photo: worldwidewhiskers.wordpress.com

If you look back, you’ll find that some of your most treasured memories are linked to powerful emotions, be they positive or negative. Somehow, negative emotions seem to linger longer in our lives, long after the events that triggered them have passed. Now, research has produced tantalizing evidence suggesting that the most intense of human emotions is humiliation.

The rainbow of feelings

Love, hate, happiness, anger, dismay, relief. Our whole lives are influenced and governed by a whole spectrum of emotions – it’s what makes us human, after all. Gift and curse, feelings make life worth living, even though at times they can cause terrible pain. Still, some feelings are more intense than others. Is there a master emotion dominating all the rest in magnitude, or is everything kept in a delicate balance of negative and positive, action and reaction, yin and yang? If there is such a thing, the feeling of being humiliated might take the emotional crown.

Marte Otten and Kai Jonas, both psychologists, decided to investigate claims that humiliation is a particularly intense, even unique, human emotion with great personal and social consequences. Humiliating scenes can haunt people all their lives and leave dents in personalities that are hard to mend; in extreme cases, humiliation may even fuel war and strife. Otten and Jonas knew, like most of us, that humiliation feels intense, but they set out to turn this intuition into an objective analysis.

Dissecting humiliation

The researchers performed two separate studies. In the first, participants, both male and female, read short stories involving different emotions and imagined how they’d feel in the described scenarios. The first study compared humiliation (e.g. your internet date takes one look at you and walks out), anger (e.g. your roommate has a party and wrecks the room while you’re away), and happiness (e.g. you find out a person you fancy likes you). The second study compared humiliation with anger and shame (e.g. you said some harsh words to your mother and she cried).

Throughout the reading and imagining, all participants wore EEG electrodes on their scalps to record brain activity. Two measures particularly interested the researchers: a large positive spike known as the “late positive potential” (LPP), and evidence of “event-related desynchronization,” a marker of reduced activity in the alpha frequency range. Both measures are signs of greater cognitive processing and cortical activation.
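To make "reduced activity in the alpha range" concrete, here is a minimal, purely synthetic illustration of how power in a frequency band is read off an EEG-like signal. This is not the authors' analysis code; the signals and numbers are invented:

```python
import math

# "Event-related desynchronization" shows up as a drop in alpha-band
# (~8-12 Hz) power. Power at one frequency can be read off a discrete
# Fourier transform of the signal.

def power_at(signal, freq, fs):
    """Spectral power of `signal` (sampled at `fs` Hz) at `freq` Hz."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / n

fs = 256                                  # sampling rate, Hz
t = [i / fs for i in range(fs)]           # one second of samples

resting = [math.sin(2 * math.pi * 10 * x) for x in t]        # strong 10 Hz alpha
engaged = [0.3 * math.sin(2 * math.pi * 10 * x) for x in t]  # alpha suppressed

# Desynchronization: alpha power drops when the cortex is engaged by a task.
print(power_at(resting, 10, fs) > power_at(engaged, 10, fs))  # True
```

The same comparison, run on real EEG epochs time-locked to each story, is essentially what a desynchronization analysis quantifies.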

Imagining being humiliated resulted in higher LPPs and more event-related desynchronizations than any other emotion.

“This supports the idea that humiliation is a particularly intense and cognitively demanding negative emotional experience that has far-reaching consequences for individuals and groups alike,” they concluded.

The study tells us that humiliation strains the brain’s resources and mobilizes more brain power, but it doesn’t tell us why this happens; the researchers have yet to identify the mechanism behind this neural build-up. The study setting itself also wasn’t ideal for this kind of evaluation: imagining being humiliated or falling in love doesn’t come close to the real thing (and you can’t ethically induce genuine humiliation in a study either). At best, the study lends credence to the idea that humiliation is the most intense emotion, but it’s far from settled. Where’s all the love?

The findings appeared in the journal Social Neuroscience.


Never before seen brain activity in deep coma detected

Coma patients, whether their coma was caused by trauma or induced by doctors to preserve bodily functions, have their brain activity regularly monitored using electroencephalography (EEG). In a deep coma, brain activity shows a flat-line signal – basically minimal to no response – one of the limits that nearly prompts establishing brain death. A group of physicians at the University of Montreal, however, have discovered a never-before-seen type of brain activity that kicks in after a patient’s EEG goes isoelectric (“flat line”).

The discovery was first spurred by the observations of Dr. Bogdan Florea, who was caring for a patient in an extremely deep hypoxic (oxygen-deprived) coma under powerful anti-epileptic medication, typically used to control seizures. Instead of just a flat line, though, Florea also observed some unusual signals – at this point, anything that wasn’t flat was basically weird. So Florea contacted the University of Montreal team and explained his peculiar situation.

Flat line and Nu-complex signals (credit: Daniel Kroeger et al./PLoS ONE)


After analyzing the patient’s records, the Montreal researchers found “that there was cerebral activity, unknown until now, in the patient’s brain,” said Dr. Florin Amzica. To test whether this was a measuring glitch of some sort, Amzica and his team performed an experiment: they recreated the patient’s coma state in cats (a common model animal for neurological studies) by administering a higher-than-normal dose of the anesthetic isoflurane. This effectively placed the cats in a deep coma, and the EEG showed the expected flat (isoelectric) line. So far, so normal. After a while, however, strong oscillations were observed.

When pinpointing their origin, the researchers found the signal’s origin was in the hippocampus, the part of the brain responsible for memory and learning processes. The researchers concluded that the observed EEG waves, or what they called “Nu-complexes,” were the same as those observed in the human patient.

Besides its peculiar nature, the finding might prove extremely important. Doctors often intentionally induce coma in certain patients to protect their bodies and brains, but keeping the brain at a flat line may be doing harm. Based on the cat experiment, an even deeper coma – one that preserves this hippocampal activity – might be better suited.

“Indeed, an organ or muscle that remains inactive for a long time eventually atrophies. It is plausible that the same applies to a brain kept for an extended period in a state corresponding to a flat EEG,” says Professor Amzica.

“An inactive brain coming out of a prolonged coma may be in worse shape than a brain that has had minimal activity. Research on the effects of extreme deep coma during which the hippocampus is active is absolutely vital for the benefit of patients.”

“As these functions fade at the onset of unconsciousness, the orchestrating powers are relinquished to more basic structures such as the thalamus (in the case of sleep) or the limbic system [per the current data in the experiment],” the researchers said in the paper. “When these structures are released from neocortical influence, they begin to pursue activity patterns on their own and proceed to impose these patterns on other brain regions including the neocortex.”

Findings were reported in the journal PLoS ONE.

[NOW READ] How long can a person remain conscious after being decapitated


Poverty might cause changes to the brain

It’s rather clear that socioeconomic factors play a huge part in the development of an individual, but discussions typically focus on education, something that can be more or less improved at any time, albeit with varying degrees of difficulty. How do socioeconomic factors affect the brain itself, though? Martha Farah, founding director of Penn’s Center for Neuroscience and Society, is currently conducting research in this direction, and so far her preliminary results suggest that the brain’s response to the circumstances of social class should not be taken lightly. For instance, there seems to be a direct link between poverty and the stunting of brain development in children.

Throughout her career, Farah has mainly specialized in the neuroscience of vision and memory, but she has always been intrigued by how social class affects brain development. As the developed world faces ever greater segregation between classes, the topic is worthy of consideration.

“I actually became pretty obsessed with social class, this major dimension of variation in the human race and certainly in American society,” Farah said.

“We’re so segregated by class, we don’t even realize we’re segregated because we don’t even know what life is like just two miles north of here,” she said.

The stress of poverty on the brain

This image, compiled using data from multiple researchers, shows how cortical thickness varies across species. In humans, on the right, there are noticeable changes as a person ages.


As sociological studies have corroborated, it seemed to Farah that child-rearing and children’s early experiences were very different depending on social class.

Poor children don’t get as much exposure to language as their wealthier counterparts, research has found, and they tend to get more negative feedback. What they do hear is not as grammatically complex, with a narrower range of vocabulary. There is less understanding of how children develop and what they need for cognitive development, Farah said.

Stress also seems to play a major role. Low-income parents are more likely to expose their children to a stressful environment, since they themselves are stressed in turn by the uncertainties of making ends meet, bad neighborhoods, crowding, and so on. Stressed parents are less patient and affectionate, further stressing their children, according to Farah.

A recent study published in the journal PLoS One is very revealing in this respect. It involved a group of African American adolescents from households of low socioeconomic status. When the participants were age 4, their parents’ responsivity (warmth and supportiveness) was evaluated; then, some 11 to 14 years later, the now-adolescents were subjected to a stress test: they had to give a talk in front of an unfriendly audience.

After the test, the participants’ saliva was sampled to measure cortisol, the stress hormone. The researchers found that cortisol reactivity was related to parental responsivity: the less responsive the parents had been, the less normal the volunteers’ stress response.

“You might say, ‘Well, of course life is more stressful in lower socioeconomic strata,’ ” she said. “But the degree of magnitude of the stress that they live with is just unbelievable.”

This supports the idea that stress impairs brain development, but it also begs an even bigger question: is the damage permanent? It is unknown whether the stunting can be reversed, but you shouldn’t assume it’s unchangeable, Farah said.

“If you’re interested in child policy and stuff, the important bottom line is: You never want to say, ‘Oh, damaged goods, so there’s nothing we can do now,’ ” she said.

The study

Along with Brian Avants, assistant professor of radiology, Farah followed 53 children of low socioeconomic status from birth through adolescence and performed brain imaging. The researchers evaluated the children’s environment on two scales: environmental stimulation (“child has toys that teach color” at age 4, “child has access to at least 10 appropriate books” at age 8, etc.) and parental stimulation (“parent holds child close 10-15 minutes per day” at age 4).

The researchers wanted to see whether they could predict cortical thickness based on these two major social factors. Greater cortical thickness in childhood is associated with poor outcomes such as autism, Avants explained. Later, in adolescence, relatively reduced cortical thickness is linked to higher IQ and other mental capacities.

From this study, Farah and colleagues suggested that environmental stimulation at age 4 predicts cortical thickness in the late teenage years, but parental nurturing did not appear to be linked.

Their work has yet to be published, and the final conclusions will be very interesting to follow when the time comes. Farah and colleagues call for awareness, in the meantime. There are far fewer children with autism than there are poor children in the United States, for example, but autism as a condition gets more attention from the science community than the neurological implications of poverty, the authors write.

Farah presented the study at the Society for Neuroscience meeting in November.



Brain-computer interfaces are as easy to learn and use as waving a hand

Mind-controlled devices, also known as brain-computer interfaces, have evolved a lot in the past few years alone, offering countless people who are paralyzed or missing limbs the chance of living a normal life. Non-afflicted individuals may benefit from brain-computer interfaces as well, whether for fun (why bother with gamepads or even Kinect when you can control everything in a video game just by thinking about it?) or for other applications (we wrote a while ago about a military project in which soldiers could control an avatar robot with their thoughts; crazy stuff!). Just how easy are these interfaces to use, though?

This image shows the changes that took place in the brain for all patients participating in the study using a brain-computer interface. Changes in activity were distributed widely throughout the brain. (c) Jeremiah Wander, UW


A study recently conducted by scientists at the University of Washington analyzed the brain activity of seven people with severe epilepsy who had thin sheets of electrodes surgically fitted directly onto their brains through openings in their skulls. This served a double purpose: physicians could scan for epilepsy signals, while a bioengineering team studied which parts of the brain were used as the patients learned to move a cursor using their thoughts alone. The findings took everyone by surprise.

Electrodes on the patients’ brains picked up the signals directing the cursor to move, sending them to an amplifier and then to a laptop for analysis. Within 40 milliseconds, the computer calculated the intention carried by the signal and updated the cursor’s movement on the screen. Apparently, when the brain uses interface technology, it behaves much as it does when completing simple motor skills such as kicking a ball, typing, or waving a hand. What’s most impressive is that this automation kicks in after only 10 minutes: the researchers observed activity shift from the prefrontal cortex, which is associated with learning new skills, to areas engaged during more automatic functions.
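The read-decode-update loop described above can be caricatured in a few lines. This is a deliberately toy decoder (a simple amplitude threshold on short signal windows, with invented numbers), not the UW team's actual algorithm:

```python
# Toy brain-computer interface loop: read a window of neural signal,
# decode an intended direction, update the cursor. Illustrative only.

def decode(window, threshold=0.5):
    """Map a window of signal samples to a cursor velocity (-1, 0 or +1)."""
    mean = sum(window) / len(window)
    if mean > threshold:
        return 1
    if mean < -threshold:
        return -1
    return 0

def run_session(signal, window_size=4):
    """Accumulate cursor position over consecutive signal windows."""
    cursor = 0
    for start in range(0, len(signal) - window_size + 1, window_size):
        cursor += decode(signal[start:start + window_size])
    return cursor

# Sustained positive activity drives the cursor steadily to the right.
print(run_session([0.8, 0.9, 0.7, 0.8] * 3))  # 3
```

Real decoders replace the threshold with statistical models trained on each user's signals, but the window-by-window structure of the loop is the same.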

“What we’re seeing is that practice makes perfect with these tasks,” Rajesh Rao, a UW professor of computer science and engineering and a senior researcher involved in the study, said in a school news release. “There’s a lot of engagement of the brain’s cognitive resources at the very beginning, but as you get better at the task, those resources aren’t needed anymore and the brain is freed up.”

This is the first study that clearly maps the neurological signals throughout the brain.

“We now have a larger-scale view of what’s happening in the brain of a subject as he or she is learning a task,” Rao said. “The surprising result is that even though only a very localized population of cells is used in the brain-computer interface, the brain recruits many other areas that aren’t directly involved to get the job done.”

Brain-computer interfaces are definitely gaining traction, but let’s face it, the technological limitations are still cumbersome. For one, the procedure is extremely invasive: you need your skull opened and electrodes implanted for the interface to be effective. Sure, we’ve seen some setups perform reasonably well using only electrodes temporarily attached to the scalp, but the signal is too weak for the interface to be reliable, especially for those in need of a smart prosthetic. Now, armed with a better understanding of how the brain adapts, scientists might come up with solutions that don’t entail drilling holes through skulls.

“This is one push as to how we can improve the devices and make them more useful to people,” said Jeremiah Wander, a doctoral student in bioengineering. “If we have an understanding of how someone learns to use these devices, we can build them to respond accordingly.”

The findings were reported in a paper published in the journal PNAS.

Your brain detects grammar errors even when you’re not aware of them

A rather debatable theory in psychology says the brain detects grammar errors even when we don’t consciously pay attention to them, as if working on autopilot. Now, researchers at the University of Oregon have come up with tangible evidence supporting this idea after performing a brain-scan study.

The team of psychologists, led by postdoctoral researcher Laura Batterink, invited native English-speaking people, ages 18-30, to read various sentences, some of which contained grammatical errors, and to signal whether each was correct or not. Throughout the task, the participants’ brain activity was recorded using electroencephalography, from which the researchers focused on a signal known as the event-related potential (ERP).


Subjects were given 280 experimental sentences, including some that were syntactically (grammatically) correct and others containing grammatical errors, such as “We drank Lisa’s brandy by the fire in the lobby” versus “We drank Lisa’s by brandy the fire in the lobby.” To create a distraction and make participants less aware, a 50-millisecond audio tone was also played at some point in each sentence, appearing either before or after a grammatical faux pas. The auditory distraction also appeared in grammatically correct sentences.

"Participants had to respond to the tone as quickly as they could, indicating if its pitch was low, medium or high," Batterink said. "The grammatical violations were fully visible to participants, but because they had to complete this extra task, they were often not consciously aware of the violations. They would read the sentence and have to indicate if it was correct or incorrect. If the tone was played immediately before the grammatical violation, they were more likely to say the sentence was correct even if it wasn't."

Your brain: a grammar nazi

The researchers found that when the tones appeared after grammatical errors, subjects detected 89 percent of the errors, but when the tones appeared before the errors, subjects detected only 51 percent of them. Clearly, the tone disrupted the participants' attention. Even so, while participants weren't consciously aware of the grammar errors, their brains still picked up on them, generating an early negative ERP response. These undetected errors also delayed participants' reaction times to the tones.
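The logic behind an ERP measurement is simple enough to sketch in code: raw EEG is noisy, but averaging many time-locked trials cancels the noise and leaves the stimulus-evoked response. The toy simulation below uses synthetic data with hypothetical amplitudes and latencies (not the study's actual recordings) to show how an early negativity emerges from error trials only after averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 100, 300          # 300 samples ~ 600 ms at 500 Hz
t_ms = np.arange(n_samples) * 2         # time axis in milliseconds

def simulate_epochs(has_error):
    """Simulate EEG epochs; error trials get an early negative deflection."""
    epochs = rng.normal(0, 5, (n_trials, n_samples))   # background noise, in microvolts
    if has_error:
        # hypothetical early negativity peaking around 200 ms after the violation
        epochs -= 4 * np.exp(-((t_ms - 200) ** 2) / (2 * 40 ** 2))
    return epochs

# Averaging across trials cancels the noise and leaves the event-locked ERP
erp_error = simulate_epochs(True).mean(axis=0)
erp_correct = simulate_epochs(False).mean(axis=0)

peak_idx = np.argmin(erp_error)
print(f"Error-trial ERP minimum: {erp_error[peak_idx]:.1f} uV at {t_ms[peak_idx]} ms")
```

A single trial is dominated by noise; only the average across many trials reveals the negative deflection that distinguishes error sentences from correct ones.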

[RELATED] Humans think more rationally in a foreign language

“Even when you don’t pick up on a syntactic error your brain is still picking up on it,” Batterink said. “There is a brain mechanism recognizing it and reacting to it, processing it unconsciously so you understand it properly.”

“While other aspects of language, such as semantics and phonology, can also be processed implicitly, the present data represent the first direct evidence that implicit mechanisms also play a role in the processing of syntax, the core computational component of language.”

These findings might warrant changes in the way adults learn new languages. Children, for instance, learn to speak a language, and implicitly pick up its grammatical structure, simply through routine daily interactions with parents or peers, hearing and processing new words and their usage before any formal instruction.

"Teach grammatical rules implicitly, without any semantics at all, like with jabberwocky. Get them to listen to jabberwocky, like a child does," said co-author Helen Neville, referring to "Jabberwocky," the nonsense poem introduced by writer Lewis Carroll in 1871 in "Through the Looking Glass," where Alice discovers a book in an unrecognizable language that turns out to be written inversely and readable in a mirror.

The findings were detailed in the Journal of Neuroscience.

Autism can be detected by analyzing brain activity

Researchers at Case Western Reserve University School of Medicine and the University of Toronto have demonstrated for the first time that it is possible to confirm or independently assess clinical diagnoses of autism in children simply by analyzing their brain activity. Reliable and efficient, their method might be employed at massive scale as a means of accurately detecting the condition much earlier, as well as determining the anatomical abnormalities associated with autism.

Roberto Fernández Galán, an assistant professor of neurosciences at Case Western Reserve and an electrophysiologist seasoned in theoretical physics, led the research team that used magnetoencephalography (MEG) to determine the brain's functional connectivity, that is, communication from one region to another. Autism has long been hypothesized to involve brain connectivity disorders: a team of researchers from the Netherlands found that children with autism showed a significantly increased normalized path length and reduced normalized clustering, suggesting a reduced global communication capacity already during early brain development. In addition, whole-brain connectivity was significantly reduced in these young patients, suggesting an overall under-connectivity of functional brain networks in autism. Those findings were reported in the journal Brain Connectivity.

Since the brain's neurons communicate with each other via electric currents, a magnetic field also arises. After placing 141 sensors on the scalps of 19 children, nine of whom had previously been diagnosed with autism spectrum disorder (ASD), the researchers were able to track the magnetic field generated in each child's cortex and map a dynamic pattern of brain activity. They found significantly stronger connections between rear and frontal areas of the brain in the ASD group: there was an asymmetrical flow of information toward the frontal region, but not vice versa.

“We asked the question, ‘Can you distinguish an autistic brain from a non-autistic brain simply by looking at the patterns of neural activity?’ and indeed, you can,” Galán said. “This discovery opens the door to quantitative tools that complement the existing diagnostic tools for autism based on behavioral tests.”

These latest insights are significant because, while other methods are also capable of inferring brain connectivity patterns, they do not indicate the direction of the interactions. Using their magnetoencephalography method, the researchers were able to identify ASD with 94 percent accuracy, suggesting the technique may be reliable enough to identify anatomical abnormalities in ASD brains.

“It is not just who is connected to whom, but rather who is driving whom,” Galán said.
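The directionality Galán refers to ("who is driving whom") is the kind of question that Granger-style analyses answer: region X "drives" region Y if X's past improves the prediction of Y beyond what Y's own past provides. Here is a minimal sketch of that general idea on simulated signals; it is an illustration of the principle, not the paper's actual estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Simulate two signals where region X drives region Y with a one-step lag
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.6 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

def prediction_error(target, source=None, lag=1):
    """One-step least-squares prediction error of `target`,
    optionally adding the lagged `source` as an extra regressor."""
    cols = [target[:-lag]]
    if source is not None:
        cols.append(source[:-lag])
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, target[lag:], rcond=None)
    resid = target[lag:] - A @ coef
    return resid.var()

# Granger-style asymmetry: x's past helps predict y, but not vice versa
gain_x_to_y = prediction_error(y) / prediction_error(y, x)
gain_y_to_x = prediction_error(x) / prediction_error(x, y)
print(gain_x_to_y, gain_y_to_x)
```

The prediction gain is clearly above 1 only in the x-to-y direction, recovering the driving relationship that was built into the simulation.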

Their approach also allows them to measure background noise, or the spontaneous input driving the brain’s activity while at rest. A spatial map of these inputs demonstrated there was more complexity and structure in the control group than the ASD group, which had less variety and intricacy.

Findings were reported in the journal PLOS One.


How the brain concentrates on one speaker at a time in noisy crowds

An illustration that represents how brain activity synchronizes to that of an attending speaker, while ignoring other speakers in background at a cocktail party. (c) Zion-Golumbic et al./Neuron


It's remarkable how adaptable the human brain is, especially in these extremely busy, crowded and, above all, noisy times. Focus is key, of course, and researchers have recently shown how the brain homes in on one speaker at a time when bombarded by multiple external stimuli, like other people jabbering away at a cocktail party.

There's no easy way of blocking out sound; you can't simply close your ears the way you can close your eyes. Luckily, our brains have filters that only process the sound information deemed important, which is fortunate, since otherwise we would all have gone insane.

At a sensory level, the brain picks up all sounds; so how does it prioritize which ones get encoded? Senior author Dr. Charles Schroeder, of Columbia University's Department of Psychiatry, along with colleagues, directly recorded brain patterns from surgical epilepsy patients who were listening to natural spoken sentences. In the auditory cortex (the part of the brain responsible for processing sound, such as speech), both attended and ignored speech was reflected in the brain signals, though the attended speech had a much greater signal amplitude.

However, in "higher-order processing" regions of the brain, those responsible for language processing and attention control, things were quite different. There, the attended speech signal was clear, while ignored speech was not detectable.
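One common way to quantify this kind of selective tracking is to correlate the recorded signal with the amplitude envelope of each speech stream; the attended stream should correlate far better. The sketch below uses synthetic envelopes and is illustrative only (real analyses work on filtered neural recordings with more sophisticated regression):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# Hypothetical slow amplitude envelopes of two simultaneous speakers
env_attended = np.abs(np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same"))
env_ignored = np.abs(np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same"))

# A higher-order area's signal tracks only the attended envelope (plus noise)
brain = env_attended + 0.05 * rng.normal(size=n)

def tracking(brain_sig, envelope):
    """Pearson correlation as a crude index of neural envelope tracking."""
    return np.corrcoef(brain_sig, envelope)[0, 1]

print(tracking(brain, env_attended), tracking(brain, env_ignored))
```

The attended speaker's envelope correlates strongly with the simulated signal, while the ignored speaker's envelope does not, mirroring the "exclusive representation" the authors describe in higher-order regions.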

“While confirming this, we also provide the first clear evidence that there may be brain locations in which there is exclusive representation of an attended speech segment, with ignored conversations apparently filtered out,” the authors write in their paper published in the journal Neuron.

The findings could help scientists develop solutions for people suffering from deficits such as those associated with attention deficit hyperactivity disorder, autism, and aging.



How the brain loses and gains consciousness

For more than two centuries, physicians have been using general anesthetics to perform surgeries, yet even now, in the 21st century, scientists know very little about what happens in the brain as a patient moves into and out of consciousness. This becomes even more important when you consider the very rare but frightening cases in which patients wake up from anesthesia during surgery. MIT scientists have now identified the process that moves the brain from conscious to unconscious and back, furthering our understanding. The work could also enable novel monitoring devices that accurately determine whether a patient is about to wake up.

“When anesthesiologists are taking care of someone in the operating room, they can use the information in this article to make sure that someone is unconscious, and they can have a specific idea of when the person may be regaining consciousness,” says senior author Emery Brown, an MIT professor of brain and cognitive sciences and health sciences and technology and an anesthesiologist at MGH.

In the new study, MIT scientists monitored the brain activity of volunteers who were under anesthesia for two hours at a time. Each participant had an array of 64 electrodes attached to the scalp. Propofol, the most common anesthetic, was steadily administered while the researchers monitored the participants' responses to sounds.

Every four seconds, a mechanical tone or the participant's name was played, and each time the participant had to push a button to signal that it was heard. All the while, the EEG recorded brain activity. Once the subjects became less responsive, distinct brain patterns surfaced. Early on, when the subjects were just beginning to lose consciousness, the researchers detected oscillations of brain activity in the low-frequency (0.1 to 1 hertz) and alpha-frequency (8 to 12 hertz) bands in the frontal cortex. They also found a specific relationship between the two bands: alpha oscillations peaked when the low-frequency waves were at their lowest point.

Consciousness: switch on, switch off

Later, they found that as the subjects became fully anesthetized, the alpha oscillations flipped, so that their highest points occurred when the low-frequency waves were also peaking. The resulting pattern blocked neural communication between various regions of the brain. For instance, the frontal cortex and the thalamus, which normally communicate across a very broad frequency band to relay sensory information and control attention, could no longer share information.
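This "trough-max" versus "peak-max" relationship can be quantified with a simple phase-amplitude analysis: bin the alpha amplitude by the phase of the slow wave and see where it peaks. The sketch below uses synthetic signals with illustrative parameter values, not data or methods from the study:

```python
import numpy as np

fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
slow_phase = 2 * np.pi * 0.5 * t          # a 0.5 Hz slow oscillation
slow = np.cos(slow_phase)

def alpha_with_coupling(peak_max):
    """10 Hz alpha whose amplitude is modulated by the slow wave's phase.
    peak_max=True: alpha strongest at slow-wave peaks (deep anesthesia);
    peak_max=False: alpha strongest at slow-wave troughs (light sedation)."""
    envelope = 1 + (slow if peak_max else -slow)   # max at peak or at trough
    return envelope * np.cos(2 * np.pi * 10 * t)

def phase_of_max_alpha(signal):
    """Bin alpha amplitude by slow-wave phase; return the phase of the largest bin."""
    amp = np.abs(signal)                            # crude amplitude estimate
    phase = np.angle(np.exp(1j * slow_phase))       # wrap phase to (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, 19)
    idx = np.digitize(phase, bins) - 1
    means = np.array([amp[idx == k].mean() for k in range(18)])
    centers = (bins[:-1] + bins[1:]) / 2
    return centers[np.argmax(means)]

print(phase_of_max_alpha(alpha_with_coupling(True)))   # near 0: slow-wave peak
print(phase_of_max_alpha(alpha_with_coupling(False)))  # near +/-pi: slow-wave trough
```

In the "peak-max" case the alpha amplitude is largest near slow-wave phase 0, and in the "trough-max" case near phase pi, which is the kind of signature an anesthesia monitor could watch for.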

This array of videos shows spectrographic data (representing brain wave frequencies) from each of 44 electrodes attached to the scalp of a healthy volunteer undergoing propofol anesthesia. The spectrograms are arranged according to their approximate position on the scalp, with the front of the head at the top of the screen, and the back of the head at the bottom of the screen. Activity moves from back to front with loss of consciousness (levels 1 to 5) and from front to back with return of consciousness (levels 6 to 8). Each video shows brain activity throughout a 140-minute period of the study. Video by Aylin Cimenser.

In a similar previous study, conducted by the same team with epileptic rather than healthy volunteers, the findings were much the same. There, the researchers found that during anesthesia, neurons within small, localized brain regions are active for a few hundred milliseconds, then shut off again for a few hundred milliseconds. It's this flickering pattern that creates the low-frequency oscillations that block communication between brain areas and pull us into unconsciousness.

“You were not supposed to wake up”

When the anesthetic dose was lowered, the participants began to regain consciousness, and a reversal of brain activity occurred. Yet again, the alpha frequencies flipped so that they were at their peak when the low-frequency waves were at their lowest point.

“That is the signature that would allow someone to determine if a patient is coming out of anesthesia too early, with this drug,” said Patrick Purdon, an instructor of anesthesia at MGH and Harvard Medical School.

Patients wake up from anesthesia in only about one in 10,000 operations, but that is enough to fuel widespread anxiety around surgery. Armed with this newfound knowledge, anesthesiologists might soon have monitoring tools, based on brain-wave patterns, that accurately signal whether or not a patient is fully unconscious.

The researchers now plan on monitoring brain signals for other anesthesia drugs as well. The findings were reported in the journal Proceedings of the National Academy of Sciences.