Tag Archives: neurology

Brain scans are saving convicted murderers from death row, but should they?

Over a decade ago, a brain-mapping technique known as quantitative electroencephalography (qEEG) was first used in a death penalty case, helping keep a convicted killer and serial child rapist off death row. It did so by persuading jurors that traumatic brain injury (TBI) had left him prone to impulsive violence.

In the years since, qEEG has remained in legal limbo, accepted inconsistently in a small number of US death penalty cases. In some trials, prosecutors fought it as junk science; in others, they raised no objections to the imaging, leaving the case law built on sand. Still, this handful of test cases could signal a new era in which science helps bring legal executions to an end.

Quantifying criminal behavior to prevent it

Science cannot quantify or explain every event or action in the universe, and courts have always had to work with incomplete evidence. DNA's special evidentiary status aside, isn't this exactly what happens in a criminal court case? So why is it so hard to integrate verified neuroimaging into legal proceedings? Of course, one could make a solid argument that it would be easier to simply do away with barbaric death penalties and concentrate on stopping these awful crimes from occurring in the first place, but that is a different debate.

The problem is more complex than it seems. Neuroimaging could be used not just to exempt the mentally ill from the death penalty but also to explain horrendous crimes to the victims or their families. And, just as crucially, could governments start implementing measures to prevent this type of criminal behavior, using electrotherapy or counseling to 'rectify' abnormal brain patterns? This could lead down some very slippery slopes.

And it's not just death row cases that raise questions about qEEG: nearly every injury lawsuit in the USA now includes a TBI claim. With magnetic resonance imaging (MRI) and computed tomography (CT) scans being generally expensive, lawyers are constantly seeking new ways to prove brain dysfunction. Readers should note that both of these neuroimaging techniques are viewed as more accurate than qEEG, but they can only provide a single, static image of the neurological condition, and thus offer no direct measurement of functional, ongoing brain activity.

In contrast, the cheaper and quicker qEEG purports to continuously monitor active brain activity to diagnose many neurological conditions, and could one day flag those more inclined to violence, enabling early interventional therapy and one-to-one help focused on preventing the problem.

But until society reaches that point, defense and human rights lawyers have been using brain mapping to slowly chip away at legal executions, arguing that it can explain why their convicted clients may have committed these crimes, and gradually shifting the conversation from the consequences of mental illness and disorders toward a deeper understanding of the conditions themselves.

The sad case of Nikolas Cruz

The questions surrounding this technology will soon be on trial again in the most high-profile death penalty case in decades: Florida v. Nikolas Cruz. On the afternoon of February 14, 2018, Cruz, then just 19 years old, opened fire on students and staff at Marjory Stoneman Douglas High School in Parkland. In what is now the deadliest school shooting in the country's history, the state charged the former Stoneman Douglas student with the premeditated murder of 17 students and staff and the attempted murder of 17 more.

With the sentencing expected in April 2022, Cruz’s defense lawyers have enlisted qEEG experts as part of their case to persuade jurors that brain defects should spare him the death penalty. The Broward State Attorney’s Office signaled in a court filing last month that it will challenge the technology and ask a judge to exclude the test results—not yet made public—from the case.

Cruz has already pleaded guilty to all charges, but a jury will now decide whether to hand down the death penalty or life in prison.

According to a court document filed recently, Cruz’s defense team intends to ask the jury to consider mitigating factors. These include his tumultuous family life, a long history of mental health disorders, brain damage caused by his mother’s drug addiction, and claims that a trusted peer sexually abused him—all expected to be verified using qEEG.

After reading the flurry of news reports on the upcoming case, one can't help but wonder why, even without qEEG evidence, a 19-year-old with a documented history of mental health problems should be on death row. And since authorities and medical professionals were aware of Cruz's problems, what preventative failings allowed him to murder 17 people? Have these even been addressed or corrected? Unlikely.

On a more positive note, prosecutors in several US counties have not opposed brain mapping testimony in recent years. According to Dr. David Ross, CEO of NeuroPAs Global and a qEEG expert, the reason is that a growing body of research has validated the test's reliability, helping the technique gain broader use in the diagnosis and treatment of cognitive disorders, even as courts continue to debate its admissibility. "It's hard to argue it's not a scientifically valid tool to explore brain function," Ross said in an interview with the Miami Herald.

What exactly is a quantitative electroencephalogram (qEEG)?

To explain what a qEEG is, you first need to know what an electroencephalogram (EEG) does. An EEG records the electrical potential difference between pairs of electrodes placed on the scalp, providing the raw data for computerized qEEG analysis. Multiple electrodes (generally more than 20) are connected in pairs to form various patterns called montages, resulting in a series of paired channels of EEG activity. The results appear as squiggly lines on paper: brain wave patterns that clinicians have used for decades to detect evidence of neurological problems.

More recently, trained professionals have computerized this analysis to create qEEG, translating raw EEG data with mathematical algorithms that break the signal down into its brainwave frequencies. Clinicians then compare this statistical analysis against a database of neurotypical brains to identify abnormal brain function that, in death row cases, may help explain criminal behavior.
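The pipeline described above (band decomposition, then comparison against a normative database) can be sketched in a few lines of Python. This is a hypothetical illustration only: the sampling rate, band limits, and normative values below are invented stand-ins, not the parameters of any clinical qEEG system.

```python
import numpy as np

np.random.seed(42)  # reproducible synthetic "recording"

FS = 256  # sampling rate in Hz (a typical clinical EEG value)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Average spectral power in each classical EEG frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def z_scores(powers, norm_mean, norm_std):
    """Compare a subject's band powers against a normative database."""
    return {b: (powers[b] - norm_mean[b]) / norm_std[b] for b in powers}

# Synthetic one-second signal: a strong 10 Hz (alpha-band) oscillation + noise.
t = np.arange(0, 1, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
powers = band_powers(eeg)

# Invented normative statistics, for demonstration only.
norm_mean = {"delta": 5.0, "theta": 4.0, "alpha": 6.0, "beta": 2.0}
norm_std = {"delta": 1.5, "theta": 1.2, "alpha": 2.0, "beta": 0.8}
z = z_scores(powers, norm_mean, norm_std)  # large |z| flags atypical bands
```

In a real qEEG workup, the normative database is age-matched and spans many electrodes and montages, but the core logic is the same: band power per channel, expressed as a deviation from the norm.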

While this can work, results can still go awry due to incorrect electrode placement, recording artifacts, inadequate band filtering, drowsiness, comparisons against inappropriate control databases, and the choice of timeframes. Furthermore, processing can yield a large amount of clinically irrelevant data. These are some of the reasons the usefulness of qEEG remains controversial despite the volume of published research. However, many of these discrepancies can be avoided simply by having trained medical professionals operate the apparatus and interpret the data.

Just one case is holding back the use of this novel technology

Yet, despite this easy correction, qEEG is not generally accepted by the relevant scientific community for diagnosing traumatic brain injuries, and is therefore inadmissible under Frye v. United States. That archaic 1923 case concerned a polygraph test; the trial came a mere 17 years after Cajal and Golgi won the Nobel Prize for producing slides and hand-drawn pictures of neurons in the brain.

Experts could also argue that a lie detector test (measuring blood pressure, pulse, respiration, and skin conductivity) is far removed from a machine monitoring brain activity. Furthermore, when the Court of Appeals of the District of Columbia decided on this lawsuit, qEEG didn’t exist. 

Applying the Frye standard, courts throughout the country have excluded qEEG evidence in the context of alleged brain trauma. For example, the Florida Supreme Court has formally noted that, for the purposes of Frye, the relevant scientific community holds that "qEEG is not a reliable method for determining brain damage and is not widely accepted by those who diagnose a neurologic disease or brain damage."

However, in a seminal paper covering the use of qEEG in cognitive disorders, the American Academy of Neurology (AAN) concluded that computer-assisted diagnosis using qEEG is an accurate, inexpensive, and easy-to-handle tool, a valuable aid for diagnosing, evaluating, following up, and predicting response to therapy, despite the Academy's past opposition to the technology. The paper also cites other neurological associations validating the use of this technology.

The introduction of qEEG on death row was not that long ago

Only recently introduced, the technology was first deemed admissible in court during the death-penalty prosecution of Grady Nelson in 2010. Nelson stabbed his wife 61 times, then raped and stabbed her 11-year-old intellectually disabled daughter and her 9-year-old son. The woman died; her children survived. Documents state that Nelson's wife had found out he had been sexually abusing both children for years and sought to keep them away from him.

Nelson’s defense argued that earlier brain damage had left him prone to impulsive behavior and violence. Prosecutors fought to strike the qEEG test from evidence, contending that the science was unproven and misused in this case.

“It was a lot of hocus pocus and bells and whistles, and it amounted to nothing,” the prosecutor on the case, Abbe Rifkin, stated. “When you look at the facts of the case, there was nothing impulsive about this murder.”

However, after hearing testimony from Dr. Robert W. Thatcher, a multi-award-winning pioneer in qEEG analysis appearing for the defense, Judge Hogan-Scola found that qEEG met the legal prerequisites for reliability. She based this on the Frye and Daubert standards, the two benchmark tests governing the admissibility of scientific evidence.

She allowed jurors to hear the qEEG report and even permitted Thatcher to present a computer slide show of Nelson's brain, with an explanation of the effects of frontal lobe damage, at the sentencing phase. He testified that Nelson exhibited "sharp waves" in this region, a pattern typically seen in people with epilepsy, explaining that Nelson does not have epilepsy but does have a history of at least three TBIs, which could account for the abnormality seen in the EEG.

Interpreting the data, Thatcher also told the court that the frontal lobes, located directly behind the forehead, regulate behavior. “When the frontal lobes are damaged, people have difficulty suppressing actions … and don’t understand the consequences of their actions,” Thatcher told ScienceInsider.

Jurors rejected the death penalty. Two jurors who agreed to be interviewed by a major national publication later categorically stated that the qEEG imaging and testimony influenced their decision.

“The moment this crime occurred, Grady had a broken brain,” his defense attorney, Terry Lenamon, said. “I think this is a huge step forward in explaining why people are broken—not excusing it. This is going to go a long way in mitigating death penalty sentences.”

On the other hand, Charles Epstein, a neurologist at Emory University in Atlanta who testified for the prosecution, contends that the qEEG data Thatcher presented relied on flawed statistical analysis and was riddled with artifacts not naturally present in EEG recordings. Epstein adds that the sharp waves Thatcher reported may have been blips caused by the contraction of muscles in the head. "I treat people with head trauma all the time," he says. "I never see this in people with head trauma."

You can see Epstein's point, as it's unclear whether these brain injuries occurred before or after Nelson brutally raped a 7-year-old girl in 1991, after which he was granted probation and trained as a social worker.

All of which raises two questions. First, do we really need qEEG to establish that this person's behavior is abnormal, or that the legal system failed to protect these children? And second, was the authorities' response in the 1991 case appropriate, let alone preventative?

As mass shootings and other forms of extreme violence remain at relatively high levels in the United States, committed by ever-younger perpetrators who are flagged as loners and fantasists by the state mental healthcare systems they disappear into, it is evident that sturdier preventative programs need to be implemented by governments worldwide. The worst has already occurred: our children are unprotected against dangerous predators, and unaided when affected by unstable and abusive environments, inappropriate social media, and TV.

A potential beacon of hope, qEEG is already beginning to highlight the country's broken socio-legal systems and the amount of work it will take to fix them. It attempts to humanize a fractured court system that still disposes of the products of trauma and abuse as if they were nothing but waste, forcing the authorities to answer for their failings, and any science that can do that can't be a bad thing.

The coronavirus pandemic could be fueling higher rates of delirium, brain inflammation, stroke, and nerve damage

A new study from the University College London (UCL) worryingly reports that COVID-19 can lead to complications such as delirium, brain inflammation, stroke, or nerve damage.

A 3D-printed human brain.
Image credits Flickr / NIH Image Gallery.

The study identified one rare but sometimes fatal inflammatory condition, acute disseminated encephalomyelitis (ADEM), whose prevalence has been increasing during the pandemic. While ADEM itself is quite rare and usually seen in children, the team found a high prevalence of the condition among COVID-19 patients suffering from neurological symptoms.

Not good for the brain

“We should be vigilant and look out for these complications in people who have had COVID-19. Whether we will see an epidemic on a large scale of brain damage linked to the pandemic—perhaps similar to the encephalitis lethargica outbreak in the 1920s and 1930s after the 1918 influenza pandemic—remains to be seen” says joint senior author Dr. Michael Zandi from the UCL Queen Square Institute of Neurology.

The team analyzed the cases of 43 people (aged 16-85) with confirmed COVID-19 who were being treated at the National Hospital for Neurology and Neurosurgery, UCLH. ADEM, they explain, had a surprisingly high prevalence among this group.

Some patients didn't experience any severe respiratory symptoms; their neurological disturbances were the first and main indication of COVID-19. The team identified 10 cases of transient encephalopathy (temporary brain dysfunction) with delirium, 12 cases of brain inflammation, 8 cases of stroke, and 8 patients with nerve damage, mainly Guillain-Barré syndrome.

The authors describe this as a “higher than expected number of people with neurological conditions” and explain that these did not always correlate with the intensity of their respiratory symptoms.

Out of the 12 cases of brain inflammation, 9 were diagnosed with ADEM. Typically, the team says, one adult patient with ADEM comes in every month, but since the start of the pandemic that has increased to at least one per week.

How it does this is still unclear

The coronavirus attacks the respiratory system first and foremost. This is a chest tomography scan of a 38-year-old male patient. The white "grid-like lobules" represent areas where the virus is attacking tissue.
Image via Wikimedia.

SARS-CoV-2 was not detected in samples of cerebrospinal fluid from any of the patients, however. This suggests that the virus isn’t directly responsible for the neurological symptoms (i.e. it doesn’t attack brain tissue itself). However, that also means we’re not sure, for now, exactly why these symptoms appear — more research is needed to find out. Preliminary data suggests that at least some patients are experiencing these symptoms due to their own immune response to the disease.

“Given that the disease has only been around for a matter of months, we might not yet know what long-term damage Covid-19 can cause,” says joint-first author Dr. Ross Paterson from the UCL Queen Square Institute of Neurology.

“Doctors need to be aware of possible neurological effects, as early diagnosis can improve patient outcomes. People recovering from the virus should seek professional health advice if they experience neurological symptoms.”

The findings align well with previous work into the neurological symptoms of SARS-CoV-2. And while the ever-growing list of symptoms this virus seems to cause is definitely worrying, the more we know about what it does the better we can fight those symptoms.

If these neurological effects are indeed caused by our own immune systems, immunosuppressants could be used to eliminate them entirely (along with a host of the virus’ damaging effects). For now, we need to better understand how these neurological symptoms arise, and how to safely combat them — and studies such as this one show us where to start.

The paper “The emerging spectrum of COVID-19 neurology: clinical, radiological and laboratory findings” has been published in the journal Brain.

New magnetic brain stimulation technique relieved depression in 90% of the participants in a small-scale study

Researchers at the Stanford University School of Medicine have developed a form of magnetic brain stimulation that could ‘rapidly’ relieve symptoms of severe depression in 90% of participants in a small study.

Image via Pxhere.

Although the findings are limited by the small sample size so far, the team is working on a larger, double-blind trial to test the approach; in this trial, half of the patients will receive the electromagnetic stimulation, while the other half will receive a sham (placebo) treatment. With this second trial, the team hopes to prove that their approach is effective in treating people whose conditions are resistant to medication, talk therapy, or other forms of electromagnetic stimulation.

The real positive vibes

“There’s never been a therapy for treatment-resistant depression that’s broken 55% remission rates in open-label testing,” said Nolan Williams, MD, assistant professor of psychiatry and behavioral sciences and a senior author of the study. “Electroconvulsive therapy is thought to be the gold standard, but it has only an average 48% remission rate in treatment-resistant depression. No one expected these kinds of results.”

The method was christened Stanford Accelerated Intelligent Neuromodulation Therapy, or SAINT. It is a form of transcranial magnetic stimulation, an approach currently approved by the Food and Drug Administration as a treatment for depression. Transcranial magnetic stimulation uses a magnetic coil placed on the scalp to excite a region of the brain — in this case, regions involved in depression.

Compared to other similar approaches, the SAINT method uses more magnetic pulses (1,800 pulses per session instead of the traditional 600), which helps speed up the pace of treatment, and focuses them according to each patient's particular neural architecture. Study participants underwent a program accelerated relative to similar treatments: 10 sessions per day of 10-minute treatments, with 50-minute breaks in between.

In their trial, the team worked with 21 participants with severe depression, as determined by several diagnostic tests, that had proved resistant to medication, FDA-approved transcranial magnetic stimulation, and electroconvulsive therapy. After receiving treatment, 19 of them scored within the nondepressed range, the team explains. All of the participants reported having suicidal thoughts before treatment, but none of them reported such thoughts afterward.

“There was a constant chattering in my brain: It was my own voice talking about depression, agony, hopelessness,” explains Deirdre Lehman, 60, one of the participants of the study. “I told my husband, ‘I’m going down and I’m heading toward suicide.’ There seemed to be no other option.”

There were some side effects of this treatment, but they were relatively minor: fatigue and some physical discomfort during treatment.

“By the third round, the chatter started to ease,” she said. “By lunch, I could look my husband in the eye. With each session, the chatter got less and less until it was completely quiet.”

“That was the most peace there’s been in my brain since I was 16 and started down the path to bipolar disorder.”

Although Lehman’s scores indicated that she was no longer depressed after a single day of therapy, it took up to five days for other participants to see the same results. Postdoctoral researcher Eleanor Cole, Ph.D., a lead author of the study, says that the “less treatment-resistant participants are, the longer the treatment lasts”.

The team evaluated each participant’s cognitive functions before and after treatment to ensure safety, and found no negative effects. One month after the therapy, 60% of participants were still in remission from depression. Follow-up studies are underway to determine the duration of the antidepressant effects, the team adds.

The researchers plan to study the effectiveness of SAINT on other conditions, such as obsessive-compulsive disorder, addiction, and autism spectrum disorders.

The paper “Stanford Accelerated Intelligent Neuromodulation Therapy for Treatment-Resistant Depression” has been published in the American Journal of Psychiatry.

We can’t grow new neurons in adulthood after all, new study says

Previous research has suggested that neurogenesis — the birth of new neurons — can take place in the adult human brain, but a controversial new study published in the journal Nature challenges this idea.

a. Toluidine-blue-counterstained semi-thin sections of the human Granule Cell Layer (GCL) from fetal to adult ages. Note that a discrete cellular layer does not form next to the GCL and the small dark cells characteristic of neural precursors are not present.

Scientists have been struggling to settle the matter of human neurogenesis for quite some time. The first study to challenge the old theory that humans cannot grow new neurons after birth was published in 1998, but scientists had been questioning this entrenched idea since the 1960s, when emerging techniques for labeling dividing cells revealed the birth of new neurons in rats. Another neurogenesis study, published in 2013, reinforced the validity of the 1998 results.

Arturo Alvarez-Buylla, a neuroscientist at the University of California, San Francisco, and his team tested the neurogenesis theory using immunohistochemistry, a process that applies fluorescent antibodies to brain samples. The antibodies signal whether young neurons and dividing cells are present. The researchers were shocked by the findings.

“We went into the hippocampus expecting to see many young neurons,” says senior author Arturo Alvarez-Buylla. “We were surprised when we couldn’t find them.”

In the new study, scientists analyzed brain samples from 59 patients of various ages, ranging from fetal stages to 77 years old. The samples came from people who had died, or from tissue removed during unrelated brain surgery. The scientists found new neurons forming in prenatal and neonatal samples, but no sustainable evidence of neurogenesis in humans older than 13. The research also indicates that the rate of neurogenesis drops 23-fold between the ages of one and seven.

But other scientists, not involved in the work, say the study left much room for error. The way the brain slices were handled, the deceased patients' psychiatric history, or whether they had brain inflammation could all explain why the researchers failed to confirm earlier findings.

The 1998 study was performed on brains of dead cancer patients who had received injections of a chemical called bromodeoxyuridine while they were still alive. The imaging molecule — which was used as a cancer treatment — became integrated into the DNA of actively dividing cells. Fred Gage, a neuroscientist involved in the 1998 study, says that this new paper does not really measure neurogenesis.

“Neurogenesis is a process, not an event. They just took dead tissue and looked at it at that moment in time,” he adds.

Gage also thinks that the authors used overly restrictive criteria for counting neural progenitor cells, thus lowering the chances of seeing them in adult humans.

But some neuroscientists agree with the findings. "I feel vindicated," Pasko Rakic, a longtime outspoken skeptic of neurogenesis in human adults, told Scientific American. He believes the lack of new neurons in adult primates and humans helps preserve complex neural circuits. If new neurons were constantly born throughout adulthood, they could interfere with precious preexisting circuits, causing chaos in the central nervous system.

“This paper not only shows very convincing evidence of a lack of neurogenesis in the adult human hippocampus but also shows that some of the evidence presented by other studies was not conclusive,” he says.

Dividing neural progenitors in the granule cell layer (GCL) are rare at 17 gestational weeks (orthogonal views, inset) but were abundant in the ganglionic eminence at the same age (data not shown). Dividing neural progenitors were absent in the GCL from 22 gestational weeks to 55 years.

Steven Goldman, a neurologist at the University of Rochester Medical Center and the University of Copenhagen, said, “It’s by far the best database that has ever been put together on cell turnover in the adult human hippocampus. The jury is still out about whether there are any new neurons being produced.” He added that if there is neurogenesis, “it’s just not at the levels that have been presumed by many.”

The debate still goes on. No one really seems to know the answer yet, but I think that’s a positive — the controversy will generate a new wave of research on the subject.

Scientists discover why ketamine works so fast against depression

Ketamine, a drug generally used for anesthesia but also taken recreationally, is now in the spotlight for its promising results in fighting depression. As previous research has shown, ketamine improves depression symptoms within hours, unlike conventional antidepressants, which can take weeks or even months to work. Scientists have now discovered exactly how ketamine soothes depression so rapidly.

Via Wikipedia

 “People have tried really hard to figure out why it’s working so fast, because understanding this could perhaps lead us to the core mechanism of depression,” says Hailan Hu, a neuroscientist at Zhejiang University School of Medicine in Hangzhou, China, and a senior author of the study.

The team believed that ketamine affected a small part of the brain, called the lateral habenula, also known as the “anti–reward center.”

If you are wondering where the habenula is, look for the yellow area in the center of the brain.
Via Wikipedia

Neurons in the lateral habenula are activated by stimuli associated with unpleasant events, such as the absence of an expected reward or the delivery of punishment, especially when these are unpredictable. An example helps: if a rat or mouse solves a maze, it expects some form of reward. If the rodent doesn't receive one, even though it successfully completed the task, neurons in the lateral habenula fire, inhibiting the activity of the brain's reward areas. Researchers believe these 'reward-negative' neurons are overactive in depression.

To test their hypothesis, the researchers infused the drug directly into the lateral habenula of rats with depression-like symptoms. They discovered that the pattern of neuronal activity in the lateral habenula, not its overall level, was the key factor in triggering depression: a fraction of the neurons fire several times in quick bursts, rather than firing once at regular intervals.

These bursts of activity, present in rats with symptoms of depression, are absent in healthy rodents. An analysis of brain slices from healthy rats showed that only about 7% of their lateral habenula neurons were of this bursting type, compared to almost 23% in depressed rodents.

Scientists found similar results when recording the brain activity of mice: animals that had endured stressful events had more bursting cells in the lateral habenula. And when the researchers used optogenetics — a technique that allows cells to be 'turned on or off' with light — to drive this bursting, the mice became more depressed, giving up quickly when made to swim in a container of water.

But after the mice and rats were given ketamine, the number of bursting neurons became similar to that found in healthy animals. Even when the scientists drove the neurons to fire in bursts, animals that had been given ketamine no longer exhibited symptoms of depression.

“Anything that can block the bursting … should be a potential target based on our model,” Hu says.

In an accompanying study published at the same time in the journal Nature, the team found that a protein synthesized by astrocytes (another type of brain cell that interacts closely with neurons) could be one of these targets. This molecule controls the flow of ions between a cell and its environment, and it is involved in resetting the nerve cell after an electrical signal, which requires regathering all the ions that flowed out of the cell during the signal.

The protein identified by the research team changes the amount of potassium available to the nerve cell, altering the cell’s ability to fire again soon. By increasing the amount of this protein, researchers were able to induce depression-like symptoms in mice.

The paper, published in the journal Nature, sheds light on ketamine's exceptional antidepressant mechanism and provides important insight into the pathology of depression.

Researchers find brain’s ‘physics engine’

There's a physics expert inside all of us, though buried deeper in some than in others. While you may not have aced the subject in high school, your brain is remarkably good at understanding how objects around you move and behave. A researcher from Johns Hopkins University believes he has pinpointed the area of the brain responsible for this physical intuition.

Here it is! The location of the ‘physics engine’ in the brain is highlighted in color in this illustration. Jason Fischer/JHU

When researchers started using fMRI, they got a unique glimpse into the human brain. In particular, they got to see what parts of the brain "light up" during specific actions. Strangely, they found that when people watch physical events unfold, it's not the brain's vision center that lights up, but an area responsible for planning actions. This suggests that the brain constantly runs physical simulations, enabling us to anticipate how the things around us will move and interact. Scientists have called this the brain's "physics engine," and we now know where it is.

“We run physics simulations all the time to prepare us for when we need to act in the world,” said lead author Jason Fischer, an assistant professor of psychological and brain sciences in the university’s Krieger School of Arts and Sciences. “It is among the most important aspects of cognition for survival. But there has been almost no work done to identify and study the brain regions involved in this capability.”

Fischer’s study seems to indicate that action planning and intuition are closely related.

“Our findings suggest that physical intuition and action planning are intimately linked in the brain,” Fischer said. “We believe this might be because infants learn physics models of the world as they hone their motor skills, handling objects to learn how they behave. Also, to reach out and grab something in the right place with the right amount of force, we need real-time physical understanding.”

The implications of this study can be far-reaching, but for now, there are two applications. Firstly, it could provide insight into conditions such as apraxia or damage to the motor areas. Apraxia is a motor disorder caused by damage to the brain in which someone has difficulty with the motor planning needed to perform tasks or movements – not because the muscles are weak, but because something is disrupting the brain’s control over the body.

The other benefit is that it might allow us to build better, nimbler robots. If we could better understand what makes us so proficient at anticipating movements, we could pass that ability on to them.

The research was published in Proceedings of the National Academy of Sciences.


Cyber synesthesia: computer turns image into sound, allowing the blind to ‘see’

photo: vimeo.com


Blind from birth, a man has now taken up a peculiar hobby: photography. Were it not for the efforts of a group of researchers who have devised a system that converts images into sequences of sound, this newfound pastime would have been impossible. Hobbies aside, the technology is particularly impressive, and judging from the data reported thus far, it could prove to be a marvelous system for everyday use, helping the blind navigate their surroundings, recognize people and even appreciate visual arts — all through sound.

It all began in 1992, when a Dutch engineer named Peter Meijer invented vOICe – an algorithm that converts simple grayscale, low-resolution images into sounds that, to the trained ear, resolve into a unique, discernible pattern. As the algorithm scans from left to right, each pixel or group of pixels has a corresponding frequency (higher positions in the image correspond to higher acoustic frequencies). A simple image, for instance one showing only a diagonal line stretching upward from left to right, becomes a series of ascending musical notes, while a more complicated image, say a man leaning on a chair, turns into a veritable screeching spectacle.
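The left-to-right scan described above can be sketched in a few lines of Python. This is an illustrative toy, not Meijer's actual implementation: the frequency range, column timing and the helper name `image_to_sound` are invented for the example.

```python
# Illustrative sketch of an image-to-sound mapping in the spirit of vOICe.
# The frequency range and scan timing are arbitrary choices for demonstration.

def image_to_sound(image, f_min=500.0, f_max=5000.0, col_duration=0.05):
    """Convert a grayscale image (list of rows, values 0..255, row 0 at the top)
    into a list of (time, frequency, amplitude) tuples, scanning left to right."""
    n_rows = len(image)
    n_cols = len(image[0])
    events = []
    for col in range(n_cols):                      # time axis: left -> right
        t = col * col_duration
        for row in range(n_rows):                  # pitch axis: higher = higher
            brightness = image[row][col] / 255.0   # loudness from brightness
            if brightness > 0:
                # higher position in the image -> higher acoustic frequency
                freq = f_min + (f_max - f_min) * (n_rows - 1 - row) / (n_rows - 1)
                events.append((t, freq, brightness))
    return events

# A diagonal line rising from left to right becomes ascending notes:
diag = [[0, 0, 0, 255],
        [0, 0, 255, 0],
        [0, 255, 0, 0],
        [255, 0, 0, 0]]
notes = image_to_sound(diag)   # frequencies climb from 500 Hz to 5000 Hz
```

Running this on the diagonal-line image yields one event per column, with frequencies rising over time, which is exactly the "series of ascending musical notes" described above.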

Amir Amedi and his colleagues at the Hebrew University of Jerusalem took things further and made vOICe portable, while also studying the participants’ brain activity for clues. They recruited people who had been blind from birth, and after just 70 hours of training – without any visual cues, obviously – the individuals went from “hearing” simple dots and lines to “seeing” whole images such as faces and street corners composed of 4,500 pixels. Mario on Nintendo only has 192 pixels, and he still felt freakishly realistic sometimes (was that just me as a kid, or what?).

Seeing with sound

Using head-mounted cameras that fed into the vOICe technology, the blind participants could then navigate their surroundings and even recognize human silhouettes. To prove they were perceiving the shapes accurately, the participants mimicked the silhouettes’ stances.

Things turned really interesting when the researchers analyzed the brain activity data. The traditional sensory-organized brain model says the brain is organized into regions, each devoted to a certain sense. For instance, the visual cortex is used for sight processing; in the blind, where these areas aren’t used conventionally, these brain regions are re-purposed to boost some other sense, like hearing. Amedi and colleagues found, however, that the area of the visual cortex responsible for recognizing body shapes in sighted people was signaling powerfully when the blind participants were interpreting the human silhouettes. Neuroscientist Ella Striem-Amit of Harvard University, who co-authored the paper, thinks it’s time for a new model. “The brain, it turns out, is a task machine, not a sensory machine,” she says. “You get areas that process body shapes with whatever input you give them—the visual cortex doesn’t just process visual information.”

“The idea that the organization of blind people’s brains is a direct analog to the organization of sighted people’s brains is an extreme one—it has an elegance you rarely actually see in practice,” says Ione Fine, a neuroscientist at the University of Washington, Seattle, who was not involved in the study. “If this hypothesis is true, and this is strong evidence that it is, it means we have a deep insight into the brain.” In an alternative task-oriented brain model, parts of the brain responsible for similar tasks—such as speech, reading, and language—would be closely linked together.

The team also devised a vOICe version that runs as a free iPhone app called EyeMusic. The researchers demonstrated that, using the app, blind participants could recognize drawn faces and distinguish colors. The video below showcases the app. The study was reported in the journal Current Biology.

source: scimag


Atlas of the human brain might help identify the mechanics of neural conditions

Neuroscientists at the Allen Institute for Brain Science in Seattle have created an atlas of the human brain that highlights the activity of genes across the entire organ. The brain map was created after years of hard labor, and might help scientists around the world identify factors that underlie neurological and psychiatric conditions.

3D rendering from the Allen Human Brain Atlas showing the expression of a single gene across the cortex of two human brains, revealing areas with higher (red) and lower (green) expression. Photograph: Allen Institute for Brain Science


“The human brain is the most complex structure known to mankind and one of the greatest challenges in modern biology is to understand how it is built and organised,” said Seth Grant, a professor of molecular neuroscience at Edinburgh University who worked on the map.

The scientists created high-resolution maps of genetic activity in the brain after studying two donated, intact adult brains and a hemisphere from a third donor. All of the tissue was healthy upon collection and came from males. With these maps, the researchers can now, for the first time, overlay the human genome onto the human brain, practically bridging the brain and the genome and offering clues as to which genes drive which functions in the brain. The result is the most complete, most accurate rendering of a human brain we have ever had.

Plotting a genetic map of the brain

One of the human brain slices. (c) Allen Institute for Brain Science


To reach their goal, the scientists first performed MRI scans on the brains to capture the organs in complete 3D anatomical detail. Then came the tricky part: the brains were chopped into many tiny slices, and genetic activity was chemically analyzed within about 900 precisely defined areas – a process that took nine months per brain. From more than 100 million measurements on brain pieces, some only a few cubic millimeters in size, the scientists found that 84% of all genes are turned on in some part of the organ.
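The headline figure – 84% of genes turned on somewhere in the brain – comes down to simple bookkeeping over the expression measurements. Here is a hypothetical sketch of that tally; the gene names, threshold and values below are invented for illustration and are not the Allen Institute's actual data or pipeline.

```python
# Hypothetical sketch: counting how many genes are "turned on" in at least
# one sampled brain region. All names, values and the threshold are invented.

def fraction_expressed(expression, threshold=1.0):
    """expression: dict mapping gene name -> list of expression levels,
    one per sampled brain region. A gene counts as 'on' if its level
    exceeds the threshold in at least one region."""
    expressed = sum(
        1 for levels in expression.values()
        if any(level > threshold for level in levels)
    )
    return expressed / len(expression)

# Toy data: three sampled regions per gene
toy = {
    "GENE_A": [0.2, 3.1, 0.0],   # on (exceeds threshold in region 2)
    "GENE_B": [0.1, 0.4, 0.9],   # off everywhere
    "GENE_C": [5.0, 4.2, 6.3],   # on
    "GENE_D": [0.0, 0.0, 2.2],   # on
}
frac = fraction_expressed(toy)   # 3 of 4 genes -> 0.75
```

The real atlas does the same kind of tally, only over tens of thousands of genes and roughly 900 regions per brain.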

Are just two and a half brains (what a TV show spoof that could’ve turned into) enough to generalize about the human cortex? Well, when probing the neocortex, the center of higher mental function in humans, the researchers found that gene activity was in principle the same. This extremely similar pattern of gene activity made the researchers suspect there may be a common blueprint for the expression of genes in the human brain – genes that are, sadly, still poorly understood.

“The fact that so many of the brain-expressed genes have not been well-characterized means that there are huge voids in our understanding of how genes relate to proper brain function,” said researcher Ed Lein, a neuroscientist at the Allen Institute for Brain Science in Seattle. “Many of these genes are used in highly selective ways — in particular structures and cell types — and this map we have created can provide functional predictions to catalyze a wave of new research in molecular brain research.”

Similar genetic maps had been constructed for rodents, but attempts to transfer those results to humans proved inconclusive, which is what prompted this project, just recently completed, four and a half years ago. Now armed with the new data, researchers hope to make a direct comparison and see whether brain research on lab animals does or does not reflect the human condition.

The atlas, which overlays the genetic results on to a 3D image of the brain, is freely available for researchers to use online, so if you’re planning on researching how a particular gene affects the brain, this might really come in handy. For instance, Clyde Francks at the Max Planck Institute for Psycholinguistics in the Netherlands is already using genetic data from the Allen Institute for Brain Science to pinpoint genes that give rise to brain asymmetries in a set of 1,300 Dutch students.

The scientists detail their findings in the Sept. 20 issue of the journal Nature.

Memory deficits of the elderly may be reversed

A team of researchers from Yale University has shown, at the cellular level, why we tend to be more forgetful as we age, and claims that the condition may be reversible.

It’s no secret that an elderly person has a much weaker memory than they did at 20 years of age, but the process leading to this degradation is still far from entirely understood. The researchers behind the study, recently published in the journal Nature, believe neural networks in the brains of the middle-aged and elderly have weaker connections and fire less robustly than youthful ones.

“Age-related cognitive deficits can have a serious impact on our lives in the Information Age, as people often need higher cognitive functions to meet even basic needs, such as paying bills or accessing medical care,” says Amy Arnsten, professor of neurobiology and psychology at Yale University. “These abilities are critical for maintaining demanding careers and being able to live independently as we grow older.”

In their experiments, they looked for age-related changes in the activity of neurons in the prefrontal cortex (PFC) – the area of the brain responsible for higher cognitive and executive functions – in animals of various ages. The PFC network of neurons is one of the brain’s most active regions while a person is awake, firing signals constantly; this is where all the “working memory” magic happens.

A sound working memory is essential for complex tasks such as reasoning, comprehension and learning. It also controls short-term memory, allowing you to remember simple things like where you parked the car or put your keys, while being constantly updated and refreshed.

Arnsten and colleagues analyzed the PFC signals in young, middle-aged and old animals. They observed that young animals could maintain a higher firing rate in the working memory network than much older ones. Looking for a reason, the scientists believe the PFC of older animals accumulates excessive levels of a signaling molecule called cAMP, which can open ion channels and weaken prefrontal neuronal firing.

By using agents that inhibit the generation of cAMP in the brain, they were able to restore more youthful firing patterns in the aged neurons – not exactly like those found in a young specimen, but the improvements were nonetheless dramatic. One such agent was guanfacine, a neural enhancer already prescribed to children with PFC deficiencies and for hypertension treatment.

A clinical trial headed by the Yale School of Medicine will shortly commence to see what kind of improvements guanfacine might bring to working memory in human subjects. Scientists doubt, however, that the substance could be used to treat Alzheimer’s or other forms of dementia.

Amazing Brain Art

The Brain-Art competition is an annual celebration of the beauty and creativity of artistic renderings emerging from the neuroimaging community. Last month concluded the first edition, in which artists from around the world submitted some incredible work for the competition’s galleries: the 3D-rendering gallery, connectome gallery, abstract gallery and humorous gallery.

Below are a few pieces I really enjoyed. Check out the gallery winners, as well as more illustrations, on the competition’s website.

Rebrain by Roberto Toro

Brain Mohawk by Johanna Bergmann & Erhan Genç

White Matter Fiber Tracts: Visualizations of fiber tract data from diffusion MR imaging by Betty Lee


White Matter Mohawk by Johanna Bergmann & Erhan Genç


Amygdala anatomy and associated symptomatology in autism by Isabel Dziobek & Michael Madore


Abstract Image by Karl Zilles, Markus Axer, David Graessl, Katrin Amunts


Andy Warhol for Neuroscientists I by Valerie van Mulukom


Fashion Show by Michel Thiebaut de Schotten & Bénédicte Batrancourt


Brainy Quote by Valerie van Mulukom


Introspective individuals find illusions harder to spot, study says

According to a new study by scientists from University College London, it seems that people who find it easier to solve optical illusions are less inclined to reflect on the process and understand how they reached their decision.

This conclusion came after further analysis of data from research conducted last year, which showed that people with more grey matter in the primary visual cortex were better at solving visual illusions. The team then looked for size differences elsewhere in the brain that correlate with variation in the visual cortex.

To get there in the first place, they analyzed the cortex of 30 participants with an MRI scanner while showing them images on a computer. Soon, the researchers had a structural image of the brain, in which they were surprised to find a relationship between the primary visual cortex and a region at the front of the brain called the anterior prefrontal cortex (aPFC).

“When people have a bigger anterior prefrontal cortex, they have a smaller visual cortex, and vice versa,” lead study author, Chen Song, says.
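The relationship Song describes is, at its core, a negative correlation between two regional volumes measured across participants. A minimal sketch of that analysis, with invented volume numbers purely for illustration:

```python
# A minimal sketch of the structural analysis described: testing whether two
# regional brain volumes are inversely correlated across participants.
# The volume numbers below are invented for illustration.

import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy volumes (arbitrary units) for five hypothetical participants:
apfc_vol = [10.0, 12.0, 9.0, 14.0, 11.0]
v1_vol   = [ 8.0,  6.5, 9.5,  5.0,  7.5]   # shrinks as aPFC grows
r = pearson_r(apfc_vol, v1_vol)             # strongly negative
```

A value of r near -1 is what "when people have a bigger anterior prefrontal cortex, they have a smaller visual cortex" looks like numerically; the published study, of course, used proper volumetric measurements and significance testing rather than this toy calculation.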

An interesting problem for you to solve. Although the large triangles are made up of the same smaller pieces, why is there a space left over in the lower one? The solution is available on request in the comments section.


Interestingly enough, previous research has shown that the size of the aPFC is linked to introspection – the more gray matter individuals have in their aPFC, the better they are able to assess whether or not they made the right decision. Accordingly, the study suggests that individuals with greater introspective ability have a harder time spotting illusions. The team now plans to carry out behavioral studies to find out whether this is truly the case.

Why is this important?

“Animal studies have shown that some genes involved in brain development are expressed at differing levels along the anterior-posterior axis of the brain,” says Song.

Those differences might be most stark when comparing structures at opposite ends of the cortex – such as the aPFC and primary visual cortex.

Bigger is not better, brain-wise

Elliot Freeman at City University in London agrees that the results are a surprise. “But bigger is not necessarily better in terms of brain power,” he says. Despite evidence that a large aPFC might be linked to better introspection, a small aPFC might be beneficial too, Freeman says. “It might be better to have fewer synaptic connections for more focused and coherent decision making.”

However, more neurons in the visual cortex might boost resolution in visual processing, Freeman adds. “A brain with more visual volume and less frontal volume might actually work better.”

The study was published in the Journal of Neuroscience.


How the human brain differs according to sex – male and female brains compared

I recently came across a very interesting piece in the NY Post citing a study that shows that, beyond the well-known overall difference in size between male and female brains, there is now evidence of significant differences in the size of certain structural parts of the brain, according to sex.

As such, researchers have found, for instance, that a female’s frontal lobe, responsible for problem-solving, is larger than a male’s. Meanwhile, a male’s amygdala, which regulates sexual behavior and the “fight or flight” reaction, is bigger.

Men have 9% bigger brains, even after correcting for body size. But men and women have the same number of neurons; they're just more densely packed in a woman's brain.

Surprisingly, for me at least, it seems men have six and a half times more gray matter than women do. Gray matter is partly responsible for information processing, which may explain why, in general, men tend to do better at math.

As for women, it seems human females have 10 times as much white matter — the part of the brain that’s partially responsible for connecting information-processing centers. This could help explain the stereotype that women are good multi-taskers.

Women are thought to have 10 times the amount of "white matter" as men. Some researchers believe it might play a role in why women often excel at language and verbal skills. But, like the gray matter hypothesis, these conclusions are controversial.

Of course, this doesn’t prove anything. It doesn’t mean men are smarter than women just because their brains are bigger, or that women will always be more detail-oriented than men, and so on. Quality is not proven by size, as in most aspects of life. Hormones, genetic differences and more add to the puzzle that composes the human brain, be it a man’s or a woman’s, but this particular research remains very interesting nonetheless.

You can read the hypothesis in greater detail here. I’d love to hear some thoughts on this very controversial piece.