Tag Archives: brain

Brain scans are saving convicted murderers from death row–but should they?

Over a decade ago, a brain-mapping technique known as a quantitative electroencephalogram (qEEG) was first used in a death penalty case, helping keep a convicted killer and serial child rapist off death row. It achieved this by persuading jurors that traumatic brain injury (TBI) had left him prone to impulsive violence.

In the years since, qEEG has remained in a strange stasis, inconsistently accepted in a small number of death penalty cases in the USA. In some trials, prosecutors fought it as junk science; in others, they raised no objections to the imaging, producing a case history built on sand. Still, this handful of test cases could signal a new era in which science helps outlaw the legal execution of humans.

Quantifying criminal behavior to prevent it

As it stands, if science cannot quantify or explain every event or action in the universe, then we remain in chaos, with the very fabric of life teetering on nothing but conjecture. But DNA’s evidentiary status aside, isn’t this what happens in a criminal court case? So why is it so hard to integrate verified neuroimaging into legal cases? Of course, one could make a solid argument that it would be easier to simply do away with barbaric death penalties and concentrate on stopping these awful crimes from occurring in the first place, but that is a different debate.

The problem is more complex than it seems. Neuroimaging could be used not just to exempt the mentally ill from the death penalty but also to explain horrendous crimes to the victims or their families. And, just as crucially, could governments start implementing measures to prevent this type of criminal behavior, using electrotherapy or counseling to ‘rectify’ abnormal brain patterns? This could lead down some very slippery slopes.

And it’s not just death row cases that are putting qEEG to the test — nearly every injury lawsuit in the USA now includes a TBI claim. With magnetic resonance imaging (MRI) and computed tomography (CT) scans being generally expensive, lawyers are constantly seeking new ways to prove brain dysfunction. Readers should note that both of these neuroimaging techniques are viewed as more accurate than qEEG but can only provide a single, static image of the neurological condition – and thus no direct measurement of functional, ongoing brain activity.

In contrast, the cheaper and quicker qEEG testing purports to continuously monitor ongoing brain activity to diagnose many neurological conditions, and could one day flag those more inclined to violence, enabling early interventional therapy sessions and one-to-one help focused on preventing the problem.

But until society reaches that point, defense and human rights lawyers have been attempting to slowly phase out legal executions by using brain mapping to explain why their convicted clients may have committed these crimes – gradually shifting the focus from the consequences of mental illness and disorders to a deeper understanding of the conditions themselves.

The sad case of Nikolas Cruz

But the questions surrounding this technology will soon be on trial again in the most high-profile death penalty case in decades: Florida vs. Nikolas Cruz. On the afternoon of February 14, 2018, Cruz opened fire on schoolchildren and staff at Marjory Stoneman Douglas High in Parkland when he was just 19 years of age. Now classed as the deadliest high school shooting in the country’s history, the attack led the state to charge the former Stoneman Douglas student with the premeditated murder of 17 schoolchildren and staff and the attempted murder of a further 17 people.

With the sentencing expected in April 2022, Cruz’s defense lawyers have enlisted qEEG experts as part of their case to persuade jurors that brain defects should spare him the death penalty. The Broward State Attorney’s Office signaled in a court filing last month that it will challenge the technology and ask a judge to exclude the test results—not yet made public—from the case.

Cruz has already pleaded guilty to all charges; a jury will now decide whether to hand down the death penalty or life in prison.

According to a court document filed recently, Cruz’s defense team intends to ask the jury to consider mitigating factors. These include his tumultuous family life, a long history of mental health disorders, brain damage caused by his mother’s drug addiction, and claims that a trusted peer sexually abused him—all expected to be verified using qEEG.

After reading the flurry of news reports on the upcoming case, one can’t help but wonder why, even without the use of qEEG, someone with a record of mental health issues at only 19 years old should be on death row. And since authorities and medical professionals were aware of Cruz’s problems, what failures of prevention led to him murdering 17 people? Have these even been addressed or corrected? Unlikely.

On a positive note, prosecutors in several US counties have not opposed brain mapping testimony in more recent years. According to Dr. David Ross, CEO of NeuroPAs Global and a qEEG expert, the reason is that a growing body of scientific papers and research has validated the test’s reliability over the years, helping the technique gain broader use in the diagnosis and treatment of cognitive disorders, even though courts are still debating its effectiveness. “It’s hard to argue it’s not a scientifically valid tool to explore brain function,” Ross stated in an interview with the Miami Herald.

What exactly is a quantitative electroencephalogram (qEEG)?

To explain what a qEEG is, you first need to know what an electroencephalogram, or EEG, does. An EEG records the electrical potential difference between pairs of electrodes placed on the outside of the scalp, providing the analog data on which computerized qEEG analysis is built. Multiple electrodes (generally more than 20) are connected in pairs to form various patterns called montages, resulting in a series of paired channels of EEG activity. The results appear as squiggly lines on paper—brain wave patterns that clinicians have used for decades to detect evidence of neurological problems.

More recently, trained professionals have computerized this data to create qEEG, translating raw EEG data through mathematical algorithms that analyze brainwave frequencies. Clinicians then compare this statistical analysis against a normative database of neurotypical brains to flag abnormal brain function, the kind that, in death row cases, is argued to underlie criminal behavior.
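
The quantitative step itself is conceptually simple and can be sketched in a few lines of code. The example below is a minimal illustration only: the normative means and deviations are invented (real qEEG databases are proprietary and age-stratified), and clinical pipelines involve far more preprocessing and artifact rejection.

```python
import numpy as np
from scipy.signal import welch

# Standard EEG frequency bands (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Estimate the power in each band for one EEG channel (Welch's method)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

def z_scores(powers, norm_mean, norm_sd):
    """Compare a subject's band powers against a normative database."""
    return {band: (powers[band] - norm_mean[band]) / norm_sd[band]
            for band in powers}

fs = 256                          # sampling rate (Hz)
eeg = np.random.randn(60 * fs)    # stand-in for 60 s of one EEG channel

# Hypothetical age-matched normative values, for illustration only
norm_mean = {band: 1.0 for band in BANDS}
norm_sd = {band: 0.5 for band in BANDS}

print(z_scores(band_powers(eeg, fs), norm_mean, norm_sd))
# In practice, a band/channel with |z| above roughly 2 is what gets flagged.
```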

Even so, results can still go awry due to incorrect electrode placement, imaging artifacts, inadequate band filtering, drowsiness, comparisons against the wrong control database, and the choice of timeframes. Furthermore, processing can yield a large amount of clinically irrelevant data. These are some of the reasons the usefulness of qEEG remains controversial despite the volume of published research. However, many of these discrepancies can be corrected simply by having trained medical professionals operate the apparatus and interpret the data.

Just one case is disrupting the use of this novel technology

Yet, despite this easy correction, qEEG is not generally accepted by the relevant scientific community for diagnosing traumatic brain injuries, and is therefore inadmissible under Frye v. United States, an archaic case from 1923 that turned on a polygraph test. That trial came a mere 17 years after Cajal and Golgi won a Nobel Prize for producing slides and hand-drawn pictures of neurons in the brain.

Experts could also argue that a lie detector test (measuring blood pressure, pulse, respiration, and skin conductivity) is far removed from a machine monitoring brain activity. Furthermore, when the Court of Appeals of the District of Columbia decided on this lawsuit, qEEG didn’t exist. 

Applying the Frye standard, courts throughout the country have excluded qEEG evidence in the context of alleged brain trauma. For example, the Florida Supreme Court has formally noted that, for the purposes of Frye, the relevant scientific community holds that “qEEG is not a reliable method for determining brain damage and is not widely accepted by those who diagnose a neurologic disease or brain damage.”

However, in a seminal paper covering the use of qEEG in cognitive disorders, the American Academy of Neurology (AAN) overall felt that computer-assisted diagnosis using qEEG is an accurate, inexpensive, easy-to-handle tool that represents a valuable aid for diagnosing, evaluating, following up, and predicting response to therapy — despite opposition to the technology in the press. The paper also features other neurological associations validating the use of this technology.

The introduction of qEEG on death row was not that long ago

Only recently introduced, the technology was first deemed admissible in court during the death-penalty prosecution of Grady Nelson in 2010. Nelson stabbed his wife 61 times with a knife, then raped and stabbed her 11-year-old intellectually disabled daughter and stabbed her 9-year-old son. The woman died, while her children survived. Documents state that Nelson’s wife had found out he had been sexually abusing both children for many years and sought to keep them away from him.

Nelson’s defense argued that earlier brain damage had left him prone to impulsive behavior and violence. Prosecutors fought to strike the qEEG test from evidence, contending that the science was unproven and misused in this case.

“It was a lot of hocus pocus and bells and whistles, and it amounted to nothing,” the prosecutor on the case, Abbe Rifkin, stated. “When you look at the facts of the case, there was nothing impulsive about this murder.”

However, after hearing the testimony of Dr. Robert W. Thatcher, a multi-award-winning pioneer in qEEG analysis appearing for the defense, Judge Hogan-Scola found that qEEG met the legal prerequisites for reliability. She based this on Frye and Daubert, two landmark standards governing the admissibility of scientific evidence.

She allowed jurors to hear the qEEG report and even permitted Thatcher to present a computer slide show of Nelson’s brain, with an explanation of the effects of frontal lobe damage, at the sentencing phase. He testified that Nelson exhibited “sharp waves” in this region, of a kind typically seen in people with epilepsy – explaining that Grady doesn’t have epilepsy but does have a history of at least three TBIs, which could explain the abnormality seen in the EEG.

Interpreting the data, Thatcher also told the court that the frontal lobes, located directly behind the forehead, regulate behavior. “When the frontal lobes are damaged, people have difficulty suppressing actions … and don’t understand the consequences of their actions,” Thatcher told ScienceInsider.

Jurors rejected the death penalty. Two jurors who agreed to be interviewed by a major national publication later categorically stated that the qEEG imaging and testimony influenced their decision.

“The moment this crime occurred, Grady had a broken brain,” his defense attorney, Terry Lenamon, said. “I think this is a huge step forward in explaining why people are broken—not excusing it. This is going to go a long way in mitigating death penalty sentences.”

On the other hand, Charles Epstein, a neurologist at Emory University in Atlanta who testified for the prosecution, says the qEEG data Thatcher presented reflected flawed statistical analysis, riddled with artifacts not naturally present in EEG imaging. Epstein adds that the sharp waves Thatcher reported may have been blips caused by the contraction of muscles in the head. “I treat people with head trauma all the time,” he says. “I never see this in people with head trauma.”

You can see Epstein’s point, as it’s unclear whether these brain injuries occurred before or after 1991, when Nelson brutally raped a 7-year-old girl, a crime for which he was granted probation and then trained as a social worker.

All of which invokes the following questions: first, do we need qEEG to state that this person’s behavior is abnormal, or that the legal system does not protect children? And second, was the reaction of authorities in the 1991 case appropriate, let alone preventative?

With mass shootings and other forms of extreme violence remaining at relatively high levels in the United States, committed by ever-younger perpetrators flagged as loners and fantasists by the state mental healthcare systems they disappear into, it’s evident that sturdier preventative programs need to be implemented by governments worldwide. The worst has already occurred; our children are unprotected against dangerous predators and unaided when affected by unstable and abusive environments, inappropriate social media, and TV.

A potential beacon of hope, qEEG is already beginning to highlight the country’s broken socio-legal systems and the amount of work it will take to fix them. It attempts to humanize a fractured court system that still disposes of the products of trauma and abuse as if they were nothing but waste, forcing the authorities to answer for their failings – and any science that can do this can’t be a bad thing.

Scientists find neurons in the human brain that only respond to singing

Credit: Pixabay.

Music and the human brain seem to be deeply intertwined, a bond that may have first appeared when an early hominin ancestor got up on her hind legs some 4.4 million years ago and walked. This bipedal rhythm may have made our lineage particularly sensitive to musicality, so much so that we now know the human brain has dedicated neural circuitry for processing and interpreting musical information.

In 2015, neuroscientists at MIT identified a population of neurons in the auditory cortex that responds specifically to music. In a new study that appeared today in the journal Current Biology, the same team of researchers, led by Sam Norman-Haignere, has identified specific neurons in the brain that light up only when we hear singing, not other types of music.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” said Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The singing brain

For their original 2015 work, the scientists used functional magnetic resonance imaging (fMRI) to scan the brains of participants as they listened to a collection of 165 sounds. These included everyday sounds like a dog barking or traffic in a busy city, as well as different types of speech and music.

After analyzing the brain patterns using a novel interpretation technique for fMRI data, the researchers identified neural populations that responded selectively to music and to speech.

However, fMRI — which detects the changes in blood oxygenation and flow that occur in response to neural activity while a person lies inside a machine equipped with very powerful magnets — has its limitations. A much more precise method for recording electrical activity in the brain is electrocorticography (ECoG), which directly measures patterns of activity using electrodes implanted inside the skull. The obvious drawback is that this is highly invasive. Let’s just say there aren’t too many keen volunteers who would gladly have their skulls drilled for science — unless you already don’t have much to lose.

Electrocorticography is becoming relatively widely used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. This allows doctors to pinpoint the exact location in the brain where a patient’s seizures are originating, which can be different from person to person.

Some of these patients agreed to participate, and MIT researchers were able to gather data from them over several years. Many of the 15 participants involved in the study didn’t have electrodes fitted in their auditory cortex, but some did — and the insight they provided proved valuable. Using a novel statistical approach, the researchers were able to identify the neural populations responsible for the electrical activity recorded by each electrode.
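
The team’s actual statistical method is more sophisticated, but the general flavor of this kind of component decomposition can be sketched with an off-the-shelf non-negative matrix factorization: the electrode-by-sound response matrix is factored into a small set of shared response profiles. Everything below uses random stand-in data, purely for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy stand-in data: response magnitudes of 40 electrodes to 165 sounds.
rng = np.random.default_rng(0)
responses = rng.random((40, 165))

# Model each electrode's response as a weighted sum of shared components.
model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(responses)   # (electrodes x components)
profiles = model.components_               # (components x sounds)

# A "song-selective" component would show high values in its profile for
# sung stimuli and low values for speech and instrumental music.
print(weights.shape, profiles.shape)
```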

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” he added.

When the ECoG data was combined with fMRI, the researchers were able to determine even more precisely the locations of the neural populations that responded specifically to singing, but not other kinds of music.

“The intracranial recordings in this study replicated our prior findings with fMRI and revealed a novel component of the auditory response that responded nearly exclusively to song,” Norman-Haignere told ZME Science.

These song-specific hotspots were found at the top of the temporal lobe, near regions that are selective for language and music. This suggests that song-specific populations of neurons likely respond to perceived pitch, so they might tell the difference between spoken words and musical vocalization, before sending this information to other parts of the brain for further processing.

These findings enrich our understanding of how the human brain responds to music. For instance, previous research showed music impacts brain function and human behavior, including reducing stress, pain and symptoms of depression, as well as improving cognitive and motor skills, spatial-temporal learning, and neurogenesis, which is the brain’s ability to produce neurons. 

But many mysteries still remain, which is why the MIT researchers plan to study infants’ neural response to music, in hopes of learning more about how brain regions tuned to music develop. 

“At present, we know very little about song-selective neural populations, in part because we just discovered them and in part because this type of data takes a long time to collect. Those are great questions that future research will hopefully shed some light on,” Norman-Haignere told ZME Science.

Researchers have just taught cyborg brains how to play Pong

An international research team has grown a brain-like organoid that is capable of playing the simple video game Pong. This is the first time such a structure (which the researchers called a “cyborg brain”) has performed a goal-directed task.

Pong is one of the simplest video games. You have a paddle and a ball (in the single-player version) or two paddles and a ball (in the two-player version), and you move the paddle to keep the ball in play and bounce it to the other side — much like a real ping-pong game. For most people familiar with computer games, it’s a simple and intuitive game. But for cells in a petri dish, it’s a bit of a tougher challenge.

Researchers at the biotech startup Cortical Labs took up the challenge. They created “mini-brains” (“we think it’s fair to call them cyborg brains,” the company’s chief scientific officer said in an interview) consisting of 800,000-1,000,000 living human brain cells. They then placed these cells on top of a microelectrode array that analyzes electrical changes and monitors the activity of the “brain.”

Electrical signals are also sent to the brain to tell it where the ball is located and how fast it is moving. The culture was taught to play the game much as humans learn: by playing repeatedly and receiving feedback (in this case, in the form of electrical signals delivered through the electrodes).
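
Stripped of all biology, the closed loop (encode the game state as stimulation, read out the culture’s response as a paddle command, deliver structured feedback) can be caricatured in code. The toy simulation below is invented from scratch for illustration; it is not Cortical Labs’ setup, just the shape of the feedback loop.

```python
import random

class DishSim:
    """Toy stand-in for a neural culture on an electrode array."""
    def __init__(self):
        self.noise = 1.0  # internal 'disorder' that feedback gradually shapes

    def respond(self, ball_y, paddle_y):
        # 'Sensory stimulation' encodes where the ball sits relative to the
        # paddle; the decoded 'motor' output is that signal plus internal noise.
        return (ball_y - paddle_y) + random.gauss(0, self.noise)

    def feedback(self, hit):
        # Orderly, 'predictable' feedback after a hit consolidates behavior;
        # in the real experiment, misses triggered unpredictable stimulation.
        if hit:
            self.noise *= 0.95

def play(rounds=2000):
    dish, hits = DishSim(), 0
    for _ in range(rounds):
        ball_y, paddle_y = random.uniform(0, 1), 0.5
        paddle_y = min(1.0, max(0.0, paddle_y + dish.respond(ball_y, paddle_y)))
        hit = abs(paddle_y - ball_y) < 0.1
        dish.feedback(hit)
        hits += hit
    return hits / rounds

print(f"hit rate after training: {play():.2f}")
```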

It took about five minutes to learn the game. While the cyborg brain wasn’t quite as skilled as a human would be, it learned the game faster than some AIs, the researchers say.

The fact that it was able to learn so quickly is a real stunner, but this is just the beginning. It’s the first time this type of brain-like structure was able to achieve something like this, and it could be a real step towards a true, advanced cyborg brain.

“Integrating neurons into digital systems to leverage their innate intelligence may enable performance infeasible with silicon alone, along with providing insight into the cellular origin of intelligence,” the researchers write in the study.

The researchers say their work could bring improvements in the design of such hybrid systems, or in therapies targeting the brain. For now, as exciting as this achievement is, it’s still hard to say what it will amount to.

The study was published as a pre-print and has not yet been peer-reviewed. Journal Reference: Brett J. Kagan et al, In vitro neurons learn and exhibit sentience when embodied in a simulated game-world, bioRxiv (2021). DOI: 10.1101/2021.12.02.471005.

Our brains may be naturally wired for multilingualism, being ‘blind’ to changes between languages

Our brains may be tailored for bilingualism, new research reports. According to the findings, the neural centers that are tasked with combining words together into larger sentences don’t ‘see’ different languages, instead treating them as if they belong to a single one.

Image credits Willi Heidelbach.

The same pathways that combine words from a single language also do the work of combining words from two different languages in the brain, according to the paper. In the brains of bilingual people, the authors report, this allows for a seamless transition in comprehending two or more languages. Our brains simply don’t register a switch between languages, they explain.

The findings are directly relevant to bilingual people, as they offer a glimpse into how and why bilinguals often mix and match words from different languages in the same sentence. However, they are also broadly relevant to people in general, as they help us better understand how our brains process words and meaning.

Speaking in tongues

“Our brains are capable of engaging in multiple languages,” explains Sarah Phillips, a New York University doctoral candidate and the lead author of the paper. “Languages may differ in what sounds they use and how they organize words to form sentences. However, all languages involve the process of combining words to express complex thoughts.”

“Bilinguals show a fascinating version of this process — their brains readily combine words from different languages together, much like when combining words from the same language,” adds Liina Pylkkänen, a professor in NYU’s Department of Linguistics and Department of Psychology and the senior author of the paper.

Bilingualism and multilingualism are widespread around the world. In the USA alone, according to data from the U.S. Census, roughly 60 million people (just under 1 in 5 people) speak two or more languages. Despite this, the neurological mechanisms that allow us to understand and use more than a single language are still poorly understood.

The specific habit bilinguals have of mixing words from their two languages into single sentences during conversation was of particular interest to the authors of this paper. To understand how the brain handles this, the duo set out to test whether bilinguals use the same neural pathways to understand mixed-language expressions as they do single-language expressions.

For the study, they worked with Korean/English bilinguals. The participants were asked to look at a series of word combinations and pictures on a computer screen. The words either formed a meaningful two-word sentence or a meaningless pairing of verbs, such as “jump melt”. Some of these pairings had two words from a single language, while others used one word from English and another from Korean. This was meant to simulate mixed-language conversations.

Participants then had to indicate whether the pictures matched the words that preceded them.

Their brain activity was measured during the experiment using magnetoencephalography (MEG), which records neural activity by measuring the magnetic fields generated in the brain when electrical currents are fired off from neurons.

The data showed that bilinguals used the same neural mechanisms to interpret mixed-language expressions as they did to interpret single-language expressions. More specifically, activity in their left anterior temporal lobe, a brain region known for playing a part in combining meaning from multiple words, didn’t show any differences when interpreting single- or mixed-language expressions. This was the region that actually combined the meanings of the two words participants were reading, as long as they did combine together into a meaningful whole.

All in all, the authors explain, these findings suggest that the mechanisms tasked with combining words in our brains are ‘blind’ to language. They function just as effectively, and in the same way, when putting together words from a single language or multiple ones.

“Earlier studies have examined how our brains can interpret an infinite number of expressions within a single language,” Phillips concludes. “This research shows that bilingual brains can, with striking ease, interpret complex expressions containing words from different languages.”

The research was necessarily carried out with bilingual people, for the obvious reason that non-bilinguals only understand a single language. While the findings should be broadly applicable, there is still a question of cause and effect here. Is the neural behavior described in this paper a mechanism that’s present in all of our brains? Or is it something that develops specifically because bilinguals have learned and become comfortable with using multiple languages? Further research will be needed to answer these questions.

The paper “Composition within and between Languages in the Bilingual Mind: MEG Evidence from Korean/English Bilinguals” has been published in the journal eNeuro.

New brain stimulation technique cured 80% of major depression cases during trial run

We might soon have a reliable treatment for severe depression. New research at the Stanford University School of Medicine reports that a new type of magnetic brain stimulation was successful in treating almost 80% of participants with this condition.

Image via Pixabay.

The treatment approach is known as Stanford Accelerated Intelligent Neuromodulation Therapy (SAINT), or Stanford neuromodulation therapy for short. It is an intensive, individualized transcranial magnetic stimulation therapy, and it shows great promise against severe depression — so far, in controlled trials. While effective, the treatment does carry some side effects: temporary fatigue and headaches.

All in all, the authors are confident that the benefits far outweigh the risks with SAINT, and they hope their work will pave the way towards new treatment options for many patients around the world.

A promising approach

“It works well, it works quickly and it’s noninvasive,” said Nolan Williams, MD, an assistant professor of psychiatry and behavioral sciences, and senior author of the study. “It could be a game changer.”

The study included 29 participants with treatment-resistant depression. They ranged in age from 22 to 80 and had suffered from depression for an average of nine years at the time of the study. All of these cases had proven resistant to medication. Participants who were on medication during the study maintained their regular dosage, but those who weren’t did not start any course during the treatment period.

They were split into two groups: one received the SAINT treatment, while the other received a placebo procedure that mimicked it. Five days into the treatment, 78.6% of the participants in the SAINT group no longer met the criteria for depression, as judged by several standard evaluation measures. The effects were sustained after the treatment had ceased, the authors note.

Current transcranial magnetic stimulation options that carry the approval of the Food and Drug Administration require six weeks of daily sessions, the authors explain. It’s effective in about half the patients who undergo such treatments, and only about a third show remission from depression following the treatment.

SAINT builds on these approaches by first targeting the pulses at areas tailored to each patient’s neurocircuitry, and by delivering a greater number of magnetic pulses at a higher frequency.

In order to determine the particularities of each patient’s dorsolateral prefrontal cortex – an area of the brain involved in regulating executive functions – the authors performed an MRI analysis on each participant before the start of the study. Their goal was to find the exact subregion in the brain that had the strongest functional link to the subgenual cingulate, a structure documented to exhibit heightened levels of activity in people experiencing depression. The goal of the magnetic stimulation treatment is to strengthen the link between the two areas in order to allow the dorsolateral prefrontal cortex to better control the activity in the subgenual cingulate.

The density of the pulses delivered in this trial was three times greater than that of currently approved treatments: 1,800 per session compared to the usual 600. Finally, instead of providing one treatment session per day, the team gave their participants ten 10-minute treatments, with 50-minute breaks in between. The control group underwent ‘treatment’ with a magnetic coil that mimicked the experience of the magnetic pulses.
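
Taken together, those figures imply a much larger daily dose of stimulation. A back-of-the-envelope comparison, using only the numbers above:

```latex
\begin{aligned}
\text{SAINT:} \quad & 1800~\text{pulses/session} \times 10~\text{sessions/day} = 18\,000~\text{pulses/day}\\
\text{standard protocol:} \quad & 600~\text{pulses/session} \times 1~\text{session/day} = 600~\text{pulses/day}
\end{aligned}
```

In other words, participants received roughly 30 times the daily pulse count of a conventional course, delivered over days rather than the typical six weeks.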

Both groups wore noise-canceling earphones and received a topical ointment to dull sensation before each session.

Four weeks after the trial, 12 of the 14 participants in the experimental group showed improvements in their symptoms, and 11 of them met the FDA’s criteria for remission from depression. In the control group, only 2 out of 15 patients met the criteria for remission.

The team is particularly interested in using SAINT to treat patients who are at a crisis point. Their study revealed that participants felt better and had attenuated symptoms within days of starting SAINT; this timeframe is much shorter than what is seen with medication, where improvements can take up to a month or more.

“We want to get this into emergency departments and psychiatric wards where we can treat people who are in a psychiatric emergency,” Williams said. “The period right after hospitalization is when there’s the highest risk of suicide.”

The paper “Stanford Neuromodulation Therapy (SNT): A Double-Blind Randomized Controlled Trial” has been published in the American Journal of Psychiatry.

Spanish researchers developed an “artificial retina” that beams sight directly into the brain of blind patients

A team of Spanish researchers is working to restore sight to blind people by directly stimulating their brains and bypassing the eyes entirely.

The 57-year-old participant of the study during testing of the device. Image credits Asociación RUVID.

Current efforts to address blindness generally revolve around the use of eye implants or procedures to restore (currently limited) functionality to the eye. However, a team of Spanish researchers is working on an alternative approach: bypassing the eyeball entirely.

Their work involves the use of an artificial retina, mounted on an ordinary pair of glasses, that feeds information directly into the users’ brains. The end result is that users can perceive images of what the retina can see. In essence, they’re working to create artificial eyes.

Eyeball 2.0

“The amount of electric current needed to induce visual perceptions with this type of microelectrode is much lower than the amount needed with electrodes placed on the surface of the brain, which means greater safety,” explains Fernández Jover, a cellular biology professor at Miguel Hernández University (UMH) in Spain, who led the research.

The device picks up on light from a visual field in front of the glasses and encodes it into electrical signals that the brain can understand. These are then transmitted to an array of 96 micro-electrodes implanted into a user’s brain.

The retina itself measures around 4 mm (0.15 inches) in width, and each electrode is 1.5 mm (0.05 inches) long. These electrodes come into direct contact with the visual cortex of the brain. Here, they both feed data to the neurons and monitor their activity.
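
In heavily simplified form, the encoding step amounts to downsampling what the camera sees onto the electrode grid and mapping brightness to stimulation intensity. The sketch below is an invented illustration: the grid mapping, levels, and frame size are all assumptions, and the real device’s encoder is far more sophisticated.

```python
import numpy as np

N_ELECTRODES = 96  # matches the 96-microelectrode array described above

def encode_frame(frame, levels=8):
    """Map a grayscale camera frame to per-electrode stimulation levels.

    Toy approach: average the image over a coarse grid (one cell per
    electrode), then quantize each cell's brightness into discrete levels.
    """
    side = int(np.ceil(np.sqrt(N_ELECTRODES)))  # ~10x10 grid of cells
    h, w = frame.shape
    cells = [frame[i * h // side:(i + 1) * h // side,
                   j * w // side:(j + 1) * w // side].mean()
             for i in range(side) for j in range(side)]
    amplitudes = np.digitize(cells, np.linspace(0, 255, levels))
    return amplitudes[:N_ELECTRODES]  # one stimulation level per electrode

frame = np.random.randint(0, 256, (120, 160))  # stand-in for a camera frame
print(encode_frame(frame))
```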

So far, we have encouraging data on the validity of such an approach. The authors successfully tested a 1,000-electrode version of their system on primates last year (although the animals weren’t blind). More recently, they worked with a 57-year-old woman who had been blind for over 16 years. After a training period — needed to teach her how to interpret the images produced by the device — she has successfully identified letters and the outlines of certain objects.

The device was removed 6 months after being implanted with no adverse effects. During this time, the authors worked with their participant to document exactly how her brain activity responds to the device, to analyze the learning process, and to check whether the use of this device would lead to any physical changes in the brain.

Although limited in what images it can produce so far, the good news is that the system doesn’t seem to negatively interfere with the workings of the visual cortex or the wider brain. The authors add that because the system requires lower levels of electrical energy to work than other systems which involve electrode stimulation of the brain, it should also be quite safe to use.

Such technology is still a long way away from being practical in a day-to-day setting, and likely even farther away from being commercially available. There are still many issues to solve before that can happen, and safely addressing these will take a lot of research time and careful tweaking. But the results so far are definitely promising and a sign that we’re going the right way. The current study was limited in scope and duration but, based on the results, the authors are confident that a longer training period with the artificial retina would allow users to more easily recognize what they’re seeing.

The team is now working on continuing their research by expanding their experiments to include many more blind participants. They’re also considering stimulating a greater number of neurons at the same time, which should allow the retina to produce much more complex images in the participants’ minds. During the course of this experiment, they also designed several video games to help their participant learn how to use the device. The experience gained during this study, as well as these video games, will help improve the experience of future users and give them the tools needed to enjoy and understand the experience more readily.

Apart from showcasing the validity of such an approach, the experiments also go a long way to proving that microdevices of this type can be safely implanted and explanted in living humans, and they can interact with our minds and brains in a safe and productive way. Direct electrode stimulation of the brain is a risky proposition, but the team showed that this can be performed using safe, low levels of electrical current and still yield results.

Professor Jover believes that neuroprosthetics such as the one used in this experiment are a necessity for the future. There simply aren’t any viable alternative treatments or aids for blind people right now. Although retina prostheses are being developed, many patients cannot benefit from them, such as people who have suffered damage to their optic nerves. The only way to work around such damage right now is to send visual information directly into the brain.

This study proves that it can be done. It also shows that our brains can still process visual information even after a prolonged period of total blindness, giving cause for hope for many people around the world who have lost their sight.

The paper “Visual percepts evoked with an Intracortical 96-channel microelectrode array inserted in human occipital cortex” has been published in The Journal of Clinical Investigation.

Pain impairs our ability to feel pleasure — and now we know why, and how

Researchers are homing in on the brain circuits that handle pain-induced anhedonia, the reduction in motivation associated with experiencing pain. The findings, currently only involving lab rats, might prove pivotal in our efforts to address depression and the rising issue of opioid addiction.

Pain is definitely not a sensation most of us are excited to experience. And although physical hurt is obviously unpleasant, it isn’t the only component of this sensation. Affective pain can be just as debilitating, and much more insidious. New research has identified the brain circuits that mediate this kind of pain, in a bid to counteract its long-term effects — which can contribute to the emergence of depression and make people vulnerable to addictions that take that pain away, such as opioid use disorder (OUD).

Show me where it hurts

“Chronic pain is experienced on many levels beyond just the physical, and this research demonstrates the biological basis of affective pain. It is a powerful reminder that psychological phenomena such as affective pain are the result of biological processes,” said National Institute on Drug Abuse (NIDA) Director Nora D. Volkow, M.D., who was not affiliated with this study.

“It is exciting to see the beginnings of a path forward that may pave the way for treatment interventions that address the motivational and emotional effects of pain.”

Pain, the authors explain, has two components: a sensory one (the part you can feel) and an affective, or emotional, component. Anhedonia — an inability to feel pleasure and a loss of motivation to pursue pleasurable activities — is one of the central consequences of affective pain. Considering the strong links between anhedonia, depression, and substance abuse, the NIDA has a keen interest in understanding how our brains produce and handle affective pain.

Previous studies found that rats in pain were more likely to consume higher doses of heroin compared to their peers. In addition to this, they lost a sizable chunk of their motivation to seek out other sources of reward (pleasure), such as sugar tablets.

The current paper built on these findings, and aimed to see exactly how this process takes place in the brain. The team measured the activity of dopamine-responding neurons in a part of the brain’s “reward pathway” known as the ventral tegmental area. This activity was measured while the rats used a lever with their front paw to receive a sugar tablet. In order to see what effect pain would have on the activity of these neurons, rats in the experimental group received an injection that produced local inflammation in their hind paw. Rats in the control group were injected with saline solution.

After 48 hours, the researchers noted that rats in the experimental group pressed the lever less than their peers, indicative of a loss of motivation. They also saw lower activity levels in their dopamine neurons. Further investigations revealed that these neurons were less active because the sensation of pain was activating cells from another region of the brain known as the rostromedial tegmental nucleus (RMTg). Neurons in the RMTg are, among other tasks, responsible for producing the neurotransmitter GABA, which inhibits the functions of dopamine neurons.

However, when the authors artificially restored the functionality of the dopamine neurons, the effects of pain on the reward pathway were completely reversed: the rats regained the motivation to push the lever and obtain their sugar tablet even with the sensation of pain.

In another round of lab experimentation, the team was able to achieve the same effect by blocking the activity of the neurons that produce GABA in response to pain. The rats in this round of testing were similarly motivated to pick a solution of water and sugar over plain water even when experiencing pain. This, the authors explain, shows that the rats were better able to feel pleasure despite also experiencing pain.

All in all, the findings are valuable in and of themselves, but the team says this is also the first time a link has been established between pain, increased activity in GABA-producing neurons, and the resulting inhibition of dopamine neurons in the reward system.

“Pain has primarily been studied at peripheral sites and not in the brain, with a goal of reducing or eliminating the sensory component of pain. Meanwhile, the emotional component of pain and associated comorbidities such as depression, anxiety, and lack of ability to feel pleasure that accompany pain has been largely ignored,” said study author Jose Morón-Concepcion, Ph.D., of Washington University in St. Louis.

“It is fulfilling to be able to show pain patients that their mental health and behavioral changes are as real as the physical sensations, and we may be able to treat these changes someday,” added study author Meaghan Creed, Ph.D., of Washington University in St. Louis.

The paper “Pain induces adaptations in ventral tegmental area dopamine neurons to drive anhedonia-like behavior” has been published in the journal Nature Neuroscience.

Our brains don’t pick the shortest route between two points — they pick ‘the pointiest’ one

Research from the Massachusetts Institute of Technology (MIT) seems to suggest that our brains aren’t the most effective navigation tools out there. According to the findings, people navigating cities tend not to follow as straight a trajectory as possible, which would be the shortest path, but tend to take the one that points most toward their destination — even if they end up walking a longer distance.

Image via Pixabay.

The team calls this the “pointiest path” approach. In technical terms, it is known as vector-based navigation. Animals, from the simplest to the most complex, have also shown in various experiments that they employ the same strategy. The authors believe that animal brains evolved to use vector-based navigation because, even though it isn’t the most effective approach, it is much easier to implement computationally — saving time and energy.

A general direction

“There appears to be a tradeoff that allows computational power in our brain to be used for other things—30,000 years ago, to avoid a lion, or now, to avoid a perilous SUV,” says Carlo Ratti, a professor of urban technologies in MIT’s Department of Urban Studies and Planning and director of the Senseable City Laboratory.

“Vector-based navigation does not produce the shortest path, but it’s close enough to the shortest path, and it’s very simple to compute it.”

The findings are based on a dataset comprising the routes of over 14,000 people going about their daily lives in a city environment. These records were anonymized GPS signals from pedestrians in Boston and Cambridge, Massachusetts, and San Francisco, California, over a period of one year. All in all, they include over 550,000 paths.

The overwhelming majority of people didn’t use the shortest routes, judging from their starting points and destinations. However, they did pick routes that minimized their angular deviation from the destination — they chose the routes that pointed most directly toward where they were going.

“Instead of calculating minimal distances, we found that the most predictive model was not one that found the shortest path, but instead one that tried to minimize angular displacement—pointing directly toward the destination as much as possible, even if traveling at larger angles would actually be more efficient,” says Paolo Santi, a principal research scientist in the Senseable City Lab and at the Italian National Research Council, and a corresponding author of the paper. “We have proposed to call this the pointiest path.”
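
The behavior itself is easy to model. Below is a minimal sketch of a greedy “pointiest path” walker on a toy street grid: at every intersection, it takes the street whose direction deviates least from the straight-line bearing to the destination. This illustrates the idea of vector-based navigation; it is not the authors’ statistical model.

```python
import math

def bearing(a, b):
    """Direction of point b as seen from point a, in radians."""
    return math.atan2(b[1] - a[1], b[0] - a[0])

def pointiest_path(graph, start, goal):
    """Greedily follow the edge that points most toward the goal."""
    path, cur = [start], start
    while cur != goal:
        goal_dir = bearing(cur, goal)

        def deviation(n):
            d = bearing(cur, n) - goal_dir
            return abs(math.atan2(math.sin(d), math.cos(d)))  # wrap to [-pi, pi]

        cur = min((n for n in graph[cur] if n not in path), key=deviation)
        path.append(cur)
    return path

# Toy street grid: intersections at integer coordinates, streets between
# orthogonal neighbors.
nodes = [(x, y) for x in range(4) for y in range(4)]
graph = {n: [m for m in nodes
             if abs(m[0] - n[0]) + abs(m[1] - n[1]) == 1] for n in nodes}

print(pointiest_path(graph, (0, 0), (3, 3)))
# On a regular grid the pointiest path is also a shortest path; on irregular
# street networks the two diverge, which is the gap the study measured.
```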

Pedestrians employed this navigation strategy both in Boston and Cambridge, which have a convoluted street layout, as well as in San Francisco, which has a highly organized, grid-style layout. In both cases, the team notes that pedestrians also tend to follow different routes when making a round trip between two points. Ratti explains that such an outcome would be expected if pedestrians made “decisions based on angle to destination” instead of judging distances only.

“You can’t have a detailed, distance-based map downloaded into the brain, so how else are you going to do it? The more natural thing might be useful information that’s more available to us from our experience,” says study co-author Joshua Tenenbaum, a professor of computational cognitive science at MIT. “Thinking in terms of points of reference, landmarks, and angles is a very natural way to build algorithms for mapping and navigating space based on what you learn from your own experience moving around in the world.”

While definitely fun, such findings may seem a bit inconsequential. The authors however believe that as we come to rely more heavily on computers such as our smartphones for everyday tasks, it is more important than ever to understand the way our own brains compute the world around us. This would allow us to design better software and improve our quality of life by tailoring our devices around the way our minds and brains work.

The paper “Vector-based pedestrian navigation in cities” has been published in the journal Nature Computational Science.

Pill for your thoughts: what are nootropics?

Nootropics are drugs that have a stimulating effect on our minds and brains. They’re meant to improve our cognitive abilities in various ways. On the face of it, that sounds awesome; who doesn’t want to get smarter by taking a pill? But many drugs touted with having a nootropic effect have no evidence to show for it. Some are complete swindles.

Image credits Lucio Alfonsi.

All of this doesn’t help give nootropics, which are a genuine category of drugs, a good name, despite the undeniable appeal of being referred to as ‘cognitive enhancers’.

Today, we’re going to take a look at what nootropics are, talk about a few that we know are genuine, their effects, and some of the controversy around this subject.

So what are they?

The term was coined in 1972 by Romanian-born chemist and psychologist Corneliu Giurgea. At the time, he stated that to qualify as a nootropic, a compound should do the following:

  • Improve learning and memory.
  • Make learned behaviors or memories more resilient in the face of factors or conditions that disrupt them, such as hypoxia.
  • Protect the brain against chemical or physical injuries.
  • Increase the efficacy of the tonic cortical/subcortical control mechanisms.
  • Have extremely low levels of toxicity, produce few (ideally no) side-effects, and not induce the same effects as other psychotropic drugs (i.e. not get you high).

All of these are very useful pointers. However, I’ve found that the best way to explain what a certain family of drugs is, is to point to examples people have direct experience with. We’re lucky, then, since virtually every one of us uses nootropics. Caffeine, nicotine, and the L-theanine in various types of tea are some of the most-used nootropics in the world. Caffeine is the single most widely used one. Besides coffee, caffeine is also naturally present in chocolate and tea. Many processed items such as food supplements, energy drinks, or sodas also contain caffeine.

All of these compounds influence our cognitive abilities in one form or another. Caffeine is notorious for helping pick us up when we’re feeling sleepy. But it also has a direct influence on the levels of various neurotransmitters in the brain. Past research has noted that this leads to improved short-term memory performance and learning ability, benefits that were not tied to caffeine’s stimulating effect but occurred alongside it. According to Stephanie M. Sherman et al., 2016:

“Participants who drank caffeinated coffee were significantly more awake by the end of the experiment, while participants who drank decaffeinated coffee did not experience the same increase in perceived wakefulness”, it notes, adding that caffeine also “increased explicit memory performance for college-aged adults during early morning hours. Young adults who drank caffeinated coffee showed a 30% benefit in cued recall performance compared to the decaffeinated coffee drinkers, and this effect was independent of the perceived positive effect of the caffeine.”

Nicotine, an active ingredient in tobacco plants, also seems to have nootropic potential. D M Warburton, 1992, reports on a range of effects nicotine has on the (healthy) brain, including improvements in attention “in a wide variety of tasks” and improvements in short- and long-term memory. It further explains that nicotine can help improve attention in “patients with probable Alzheimer’s Disease”. Some of these effects were attributed to the direct effect nicotine has on attention, while others “seem to be the result of improved consolidation as shown by post-trial dosing” — meaning the compound likely also helps strengthen memories after they are formed.

Please do keep in mind here that I do not, in any way, condone that you pick up smoking. There isn’t any scenario under which I’d estimate that the potential nootropic effect of nicotine outweighs the harm posed by smoking. There are other ways to introduce nicotine into your system if you’re really keen on it.

L-theanine is very similar in structure to the neurotransmitter glutamate — which has the distinction of being the most abundant neurotransmitter in the human brain. Glutamate is our main excitatory neurotransmitter, and a chemical precursor for our main inhibitory neurotransmitter, as well. To keep things short, glutamate is an important player in our brains.

Because of how similar the two are chemically, L-theanine can bind to the same sites as glutamate, although to a much lower extent. While we’re not entirely sure what effects L-theanine has on the brain, there is some evidence that it can reduce acute stress and anxiety in stressful situations by dampening activation in the sympathetic nervous system (Kenta Kimura et al., 2006).

How they work

Coffee and tea are some of the world’s most popular sources of natural nootropics. Image via Pixabay.

A wide range of chemically distinct substances can have nootropic effects. As such, it’s perhaps impossible to establish a single, clear mechanism through which they act. But in very broad lines, their end effect is that of boosting one or several mental functions such as memory, creativity, motivation, and attention.

The nootropic effects of caffeine come from it interacting with and boosting activity in brain areas involved in the processing and formation of short-term memories. It does this, as we’ve seen, by tweaking neurotransmitter levels in the brain. Others, like nicotine and L-theanine, also influence neurotransmitter levels, or bind to receptor sites themselves, thus influencing how our minds and brains function. Others still influence our mental capacity through more mechanical means. As noted by Noor Azuin Suliman et al., 2016:

“Nootropics act as a vasodilator against the small arteries and veins in the brain. Introduction of natural nootropics in the system will increase the blood circulation to the brain and at the same time provide the important nutrient and increase energy and oxygen flow to the brain”. Furthermore, “the effect of natural nootropics is also shown to reduce the inflammation occurrence in the brain […] will protect the brain from toxins and [minimize] the effects of brain aging. Effects of natural nootropics in improving brain function are also contributed through the stimulation of the new neuron cell. [Through this] the activity of the brain is increased, enhancing the thinking and memory abilities, thus increasing neuroplasticity”.

The brain is a very complicated mechanism, one whose inner workings we’re only beginning to truly understand. Since there are so many moving parts involved in its functions, there are many different ways to tweak its abilities. Way too many to go through them all in a single sitting. One thing to keep in mind here is that nootropics can be both natural and synthetic in nature. In general — and this is a hard ‘in general’ — we understand the working mechanisms of natural nootropics a bit more than those of synthetic nootropics.

Still, even with caffeine, we start seeing one of the main drawbacks — most of which remain poorly understood — of nootropics. The word ‘nootropic’ is a compound of two Ancient Greek root words meaning, roughly, ‘mind-turning’. But, just as tuning a guitar’s strings alters what chords it can play overall, nootropics affect our minds and brains in their entirety. They often act on multiple systems in the body at the same time to produce these effects.

We separate nootropics by their effects into three classes. The first is the eugeroics, which promote wakefulness and alertness. One prominent eugeroic is modafinil, currently used to treat narcolepsy, obstructive sleep apnea, and shift work sleep disorder. It’s also being investigated as a possible avenue for the treatment of stimulant drug withdrawal.

The second class is the ADHD medication family, which includes methylphenidate, lisdexamfetamine, and dexamfetamine. Ritalin is a drug in this category. It was originally used to treat chronic fatigue, depression, and depression-associated psychosis. Today, Ritalin is the most commonly prescribed medication for ADHD, as it addresses the restlessness, impulsive behaviour, and inattentiveness associated with the disorder.

Finally, we have nootropic supplements. These include certain B vitamins, fish oil, and herbal supplements such as extracts of Ginkgo biloba and Bacopa monnieri. Supplements tend to be more contested than the rest, with the plant extracts themselves being the most contested overall. One thing to keep in mind here is that the FDA doesn’t regulate nootropic supplements the same way it does prescription drugs, so buyer beware. Another is that there is little reliable evidence that these supplements actually help boost memory or cognitive performance beyond a placebo effect. A review of the literature on the efficacy of supplements (Scott C. Forbes et al., 2015) concludes that:

“Omega-3 fatty acids, B vitamins, and vitamin E supplementation did not affect cognition in non-demented middle-aged and older adults. Other nutritional interventions require further evaluation before their use can be advocated for the prevention of age-associated cognitive decline and dementia”.

One final point here is that the nutrients these supplements provide — if they work — shouldn’t produce meaningful effects unless you’ve been taking them for a while. Dr. David Hogan, co-author of that review and a professor of medicine at the University of Calgary in Canada, told Time.com that age also plays a factor, and that such nutrients may not be of much help if taken “beyond the crucial period” of brain development.

No side effects?

“Caffeine has been consumed since ancient times due to its beneficial effects on attention, psychomotor function, and memory,” notes Florian Koppelstaetter et al., 2010. “Caffeine exerts its action mainly through an antagonism of cerebral adenosine receptors, although there are important secondary effects on other neurotransmitter systems”.

Adenosine receptors in the brain play a part in a number of different processes, but a few that are important to our discussion right now are: regulating myocardial (heart) activity, controlling inflammation responses in the body, and keeping tabs on important neurotransmitters in the brain such as dopamine.

Caffeine helps make us more alert by impairing the function of these receptors; one of the things that happens when adenosine binds to these sites is that we start feeling drowsy, even sleepy. But our brains come equipped with these receptors for a very important reason — they keep us alive and healthy. Messing with their activity can lead to some very dangerous situations. Caffeine intake, for example, increases blood pressure and heart rate, at least in part by interfering with these adenosine receptors. Heavy caffeine intake has been linked to tachycardia (rapid heart contractions) in certain cases.

The risk posed by nootropics comes down to their very nature. By design, these are drugs meant to tweak the way our brains work. But our brains are so essential to keeping our bodies alive that any wrong tweak can lead to a lot of problems. There is some evidence that the use of certain nootropics comes at “a neuronal, as well as ethical, cost”. Revving our brains ever harder could mean they wear out more quickly.

“Altering glutamate function via the use of psychostimulants may impair behavioral flexibility, leading to the development and/or potentiation of addictive behaviors”, report Kimberly R. Urban and Wen-Jun Gao in a 2014 paper. “Healthy individuals run the risk of pushing themselves beyond optimal levels into hyperdopaminergic and hypernoradrenergic states, thus vitiating the very behaviors they are striving to improve. Finally, recent studies have begun to highlight potential damaging effects of stimulant exposure in healthy juveniles.”

“This review explains how the main classes of cognitive enhancing drugs affect the learning and memory circuits, and highlights the potential risks and concerns in healthy individuals, particularly juveniles and adolescents. We emphasize the performance enhancement at the potential cost of brain plasticity that is associated with the neural ramifications of nootropic drugs in the healthy developing brain”.

This leads us neatly to:

The controversy

The ethical implications of using nootropics in school

Although nootropics are still poorly understood, they have an undeniable allure. And there’s no shortage of people willing to capitalize on that demand.

There are valid uses for nootropics, and there is research to support these uses; ADHD medication is a prime example. But there is also a lot of false advertising, inflated claims, false labeling, and general snake-oilery going on in the field of nootropics.

We live in a world where cognitive ability and academic achievement have a large impact on our livelihoods and the quality of our lives. As such, there is a lot of incentive for us to boost these abilities, and nootropics seem to offer an easy way to do so. Naturally, there’s also a lot of incentive for people to try and sell them to you. There is a growing trend of nootropic use among students trying to make it through the curriculum — or to get an edge over their peers — in universities around the world. Factor in that we still have a poor understanding of nootropics, and a poorer understanding still of their side effects and long-term impact on our brains, and the trend becomes worrying.

The Food and Drug Administration and the Federal Trade Commission have sent multiple warnings to manufacturers and distributors of nootropic drugs and supplements over the years, over charges of misleading marketing, the manufacture and distribution of unapproved drugs with no proven safety or efficacy at the marketed doses, and even the use of illegal substances.

In closing, nootropics are a valid and real class of drugs. While there is still much we don’t understand about them, we know that they exist and that they can work the way we envision, as long as we use them responsibly. In many ways, however, they suffer from their own fame. Everybody wants a pill that would make them smarter, sharper, more focused. That in itself isn’t damnable. The trouble starts when we’re willing to overlook potential risks, or even willingly ignore known side effects, in chasing that goal.

Are male and female brains really different?

Early crude measurements in the 19th century showed that male brains are significantly larger (about 11% larger on average) than female brains, which is sometimes used as an argument that the average male is more intellectually equipped than the average female. However, this neurosexist viewpoint has been refuted by modern brain imaging and investigations that show little to no functional difference between male and female brains.

Men are from Mars, women are from Venus. Oh, really?

The arrival of magnetic resonance imaging (MRI) in clinics and research labs allowed scientists to produce highly detailed 2-D and 3-D images of the brain, unleashing a revolution in neuroscience by the early 1990s. Some researchers took advantage of this opportunity to look for differences between men’s and women’s brains, spurred by observable gender-specific differences in personality, as well as dimorphic traits between the sexes (hormone production, reproductive organs, chromosomes).

Over the years, a large body of studies on sex-linked brain differences has amassed in the scientific literature. Not all that surprisingly, these findings have proven extremely controversial, with conclusions ranging from ones that can be interpreted as “women are inferior” to “men and women’s brains are different, but complementary”.

Women’s brains are said to be wired better for empathy and intuition, whereas male brains are better equipped for reason and action. This would seem to explain gender stereotypes, such as the notions that women are more emotional and better at communicating, while men are more competitive.

But these pop neuroscience notions are based on very thin and shaky research, and forming worldviews on top of them can be outright damaging. James Damore, a former Google engineer, learned this the hard way. In 2017, Damore wrote a 10-page manifesto that essentially argued against workplace diversity efforts, claiming that “the distribution of preferences and abilities of men and women differ in part due to biological causes, and that these differences may explain why we don’t see equal representation of women in tech and leadership.”

The Google engineer, who holds a graduate degree in biology, linked to various scientific studies that support his claims, such as research suggesting that women care more about people than things, later concluding that “differences in distributions of traits between men and women may in part explain why we don’t have 50% representation of women in tech and leadership. Discrimination to reach equal representation is unfair, divisive, and bad for business.” Damore was later dismissed from Google following the leaking of the internal memo for violation of the company’s code of conduct.

Although it’s easy to see Damore’s sacking as unfair and political, the harsh truth may be that he was the victim of flawed, gender-biased thinking that is pervasive in all corners of society, academia included. Although Damore’s views have been supported by some noted psychologists such as Debra Soh and Jordan Peterson, the consensus is that the engineer gravely overstated what the literature actually supports. Gina Rippon, the chair of cognitive brain imaging at Aston University, noted that Damore “relied on data that was suspect, outdated, irrelevant, or otherwise flawed.”

The problem with many of these studies is twofold: their methodology can be flawed, because the human brain is inherently difficult to study and our understanding of it is still very much a work in progress; and they often address only a small subset of supposed sex differences in cognitive abilities and brain structure, which laypeople can easily take out of context. Take, for instance, one study from the UK that looked at MRI brain scans from 2,750 women and 2,466 men and examined the volumes of 68 regions within the brain. On average, scientists found that women tended to have significantly thicker cortices than men, while men had higher brain volume than women in subcortical regions. Thicker cortices are associated with higher scores on cognitive and general intelligence tests. Alright, but what does this mean for how the brain works? Very unclear.

Even so, some scientists believe that there have to be some sex-specific differences in the human brain in order to explain significant differences between men’s and women’s cognitive function. For instance, there are many instances where the male-female ratios are unbalanced for cognitive and neuropsychiatric disorders. Women are twice as likely to experience clinical depression and anxiety as men, whereas men are about three times as likely to suffer from autism and twice as likely to have ADHD as women. Boys’ dyslexia rate is perhaps 10 times that of girls and they’re 40% more likely to develop schizophrenia in adulthood.

Gender-binary societies can lead to gender-binary assumptions. Even in science.

Credit: Pixabay.

The fact that there could be biological differences between the sexes that may explain these significant gender differences is not only logical but also seductive.

But Lise Eliot, a professor of neuroscience at the Chicago Medical School, claims that anyone searching for innate differences between the sexes is on a futile journey. Although she acknowledges some slight differences between the male and female brains, Eliot believes the human brain is a unisex organ.

Eliot is the lead author of a 2021 study that conducted a mega-synthesis of hundreds of the largest and most highly-cited brain imaging studies addressing 13 distinct measures of alleged sex differences. The meta-analysis encompasses three decades’ worth of research.

For nearly every measure, Eliot and colleagues found virtually no differences that could be widely reproduced across studies. For instance, the volume or thickness of specific regions in the cerebral cortex is often cited to differ between men and women, as in the UK study mentioned above. However, the analysis showed that the specific regions identified vary widely from study to study.

Another red flag when it comes to drawing conclusions from sex-specific brain research is the poor replication across diverse populations. The analysis found wild variations in findings between Chinese and American populations, for instance, which indicates that we lack a universal marker for distinguishing men’s and women’s brains across the human species, if one even exists.

“Since the dawn of MRI, studies finding statistically significant sex differences have received outsized attention by scientists and the media,” said Dr. Eliot in a statement.

“The handful of features that do differ most reliably are quite small in magnitude,” Dr. Eliot said. “The volume of the amygdala, an olive-sized part of the temporal lobe that is important for social-emotional behaviors, is a mere 1% larger in men across studies.”

This study, titled “Dump the Dimorphism”, debunks the idea that the human brain is sexually dimorphic. This is science-speak for biological structures that come in two distinct forms in males and females, the way only male deer grow antlers or the way men’s and women’s genitalia differ. That’s not the case for the human brain, though, the authors claim.

Concerning brain size: when overall body size and mass are controlled for, no individual brain region varies by more than about 1% between men and women, and even these tiny differences are not reported consistently across geographically or ethnically diverse populations. Furthermore, the nominal difference in brain size between the sexes is actually smaller than that seen in other internal organs. For instance, the heart, lungs, and kidneys are between 17% and 25% larger in men.

A highly-cited 2014 study from the University of Pennsylvania found females’ brains show more coordinated activity between the left and right hemispheres, while males’ brain activity was more tightly coordinated at local brain regions.

However, the notion that men’s brains are more lateralized, whereas women’s two hemispheres are better connected and operate more in sync with each other, has been rebutted by Eliot et al. Other studies have found that the actual difference on both accounts is less than 1% across populations. If men’s and women’s brains were indeed wired significantly differently, we’d see far more disability in men following brain injuries such as stroke. Large-scale datasets show that there is no gender difference in aphasia (loss of language) following a stroke in the left hemisphere.

One 2018 study, which summarized the last 40 years of research, found “cognitive sex differences often emerge in the absence of sex differences in hemispheric asymmetry (and vice versa), implying the two phenomena are at least partly independent of each other.” So assumptions that explain sex differences in cognitive abilities such as mental rotation or verbal memory based on studies that reported sex-specific differences in brain asymmetry mistook correlation for causation.

Another point of contention is the supposed cognitive differences revealed by studies employing functional MRI, which shows which brain regions “light up” during language, spatial, and emotional tasks. Across the hundreds of studies compiled by the researchers, those that reported differences in activity between men and women also exhibited poor reproducibility.

Another explanation for the large number of contradictory studies in this field is a phenomenon known as publication bias. Smaller, early studies in the late ’90s and early 2000s that found sex differences in the brain were likelier to get published, whereas those that found no male-female brain difference were left unpublished. This file drawer effect is pervasive across all scientific fields, not just neuroscience, and as a result, many studies skew towards those that report “novel” findings or “discover” something. But without complementary studies that find negligible effects, we lack the proper context on how to frame novel findings, and science is left poorer and less reliable as a result.
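To make the file drawer effect concrete, here is a small simulation sketch in Python (the numbers are purely illustrative, and it assumes NumPy and SciPy are available): even when the true male-female difference is zero, publishing only the studies that happen to cross the significance threshold leaves the literature full of spurious, inflated “differences”.

```python
# Illustrative file-drawer simulation: 1,000 small "studies" with NO true
# group difference; only those reaching p < 0.05 get "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
published_effects = []

for _ in range(1000):
    men = rng.normal(0.0, 1.0, 30)     # 30 men, true mean 0
    women = rng.normal(0.0, 1.0, 30)   # 30 women, true mean 0
    _, p = stats.ttest_ind(men, women)
    if p < 0.05:                       # the file drawer swallows the rest
        published_effects.append(abs(men.mean() - women.mean()))

print(f"{len(published_effects)} of 1000 studies 'published' "
      f"(~50 expected by chance alone)")
print(f"mean 'published' difference: {np.mean(published_effects):.2f} SD, "
      f"despite a true difference of zero")
```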

“Sex comparisons are super easy for researchers to conduct after an experiment is already done. If they find something, it gets another publication. If not, it gets ignored,” Dr. Eliot said. Publication bias is common in sex-difference research, she added, because the topic garners high interest.

Sex-difference research is rife not just with publication bias, but also with flawed methodology (inadequate controls and weak statistical significance). This is why you see so many brain studies in the media purporting to expose differences between men and women; when peers later highlight the hyped extrapolation or an all-too-common design flaw, their rebuttals receive abysmal attention.

But if there are no inherent, hard-wired brain differences between the sexes at birth, what could explain the significant and sometimes obvious gender dissimilarities seen in everything from cognitive tests to personality traits? One possible explanation is that the human brain is extremely plastic, meaning its neural circuits morph as certain skills are practiced, so socialization and upbringing may play a greater role than we thought. Sex hormones also affect the brain, but the idea that these effects add up to create two distinct types of brain, male and female, has never been proven.

Another reason why there are so many inconclusive and controversial studies in this field may have to do with sheer individual variability. One 2015 study that compared the brains of 1,400 men and women, analyzing their volume, connections, and other physical structures, found the human brain is actually a tangled mix of both sex-congruent and sex-incongruent features. The left hippocampus, a region of the brain associated with memory, was found to be generally larger in males, but women with a large left hippocampus were common. Up to 53% of the individual brains included in this study had a mix of both “typically male” and “typically female” features, and only 8% had “very male” or “very female” brains.

These findings are corroborated by a similar analysis of personality traits, attitudes, interests, and behaviors of more than 5,500 individuals, which reveals that internal consistency is extremely rare.

“This extensive overlap undermines any attempt to distinguish between a ‘male’ and a ‘female’ form for specific brain features,” said Daphna Joel, a psychologist at Tel Aviv University and lead author. These findings have “important implications for social debates on long-standing issues such as the desirability of single-sex education and the meaning of sex/gender as a social category.”

“We separate girls and boys, men and women all the time,” Joel told New Scientist. “It’s wrong, not just politically, but scientifically – everyone is different.”

Whether male and female brains are any more different from each other than male and female hearts or livers will likely remain a point of contention for years to come, but modern research is showing that, if anything, early studies in this field greatly exaggerated the differences. Male and female brains are much more similar than they are different.

“Sex differences are sexy, but this false impression that there is such a thing as a ‘male brain’ and a ‘female brain’ has had wide impact on how we treat boys and girls, men and women,” Dr. Eliot said.

Samsung wants to “copy/paste” your brain into a 3D chip

The tech company is proposing a way to copy a brain’s neuron wiring map into a 3D neuromorphic chip. The approach, detailed in a new paper, relies on a nanoelectrode array developed by a group of researchers, which penetrates a large volume of neurons to record where connections are made and how strong those connections are. The researchers believe that with this approach, they will one day be able to download people’s brains onto 3D chips.

Image credit: Samsung.

Neuromorphism is a term we may want to become familiar with. It refers to something that takes the form of the brain. If you want to build something that follows the shape of the brain, you also have to understand the mechanisms of the brain — from why it remembers information to how many neurons are activated before a decision is made. A neuromorphic device could work in a way that’s mechanically analogous to our understanding of some part of the brain.

From Google to Microsoft, many organizations are working to develop neuromorphic chips. Researchers at MIT even demonstrated, last year, how to put thousands of artificial brain synapses on a very small chip. Billionaire Elon Musk has also recently taken some big steps with his Neuralink company, which is building a device to embed in a person’s brain.

Working with engineers from Harvard University, Samsung presented a way to create a memory chip with computing features of the brain that have so far been out of reach of current technology: autonomy, cognition, low power consumption, facile learning, and adaptation to the environment, all built around a brain-like design.

“The vision we present is highly ambitious,” Donhee Ham, Fellow of the Samsung Advanced Institute of Technology (SAIT) and Professor at Harvard University, said in a media statement from Samsung. “But working toward such a heroic goal will push the boundaries of machine intelligence, neuroscience, and semiconductor technology.”

Revisiting neuromorphism

The human brain contains roughly 86 billion neurons, and the way they are connected is even more complex. These connections are largely behind the functions of the brain and they’re what make it so special as an organ. For neuromorphic engineering, the goal has always been, at least since the field officially started in the 1980s, to mimic the structure and function of the neuronal network on a silicon chip. 

Nevertheless, this has proven more difficult than expected, as we still know relatively little about how neurons are linked together to create the brain’s higher functions. That’s why the original target of neuromorphic engineering was recently relaxed to designing chips “inspired” by the brain, instead of ones that strictly mimic it.

However, the researchers at Samsung are now suggesting a way to get back to the original goal of neuromorphics. The nanoelectrode array they developed would penetrate a large number of neurons and register their electrical signals with a high level of sensitivity. The recordings would then reveal where neurons connect with each other and the strength of those connections.

The neuronal wiring map could be copied based on those recordings and then pasted onto memories, either non-volatile ones, such as those commercially available in solid-state drives (SSD), or recently developed ones, such as resistive random-access memory (RRAM). Each memory cell would represent the strength of one neural connection in the map.
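As a rough mental model of this copy-and-paste step, here is a toy sketch in Python; it is a hypothetical illustration, not Samsung’s actual pipeline. A small matrix of measured connection strengths stands in for the recorded wiring map, and each strength is quantized to one of the discrete conductance levels a memory cell could store.

```python
# Toy "copy and paste" of a neuronal wiring map into a memory array.
# Purely illustrative: a real map would span billions of neurons.
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS = 8        # tiny stand-in for the recorded population
LEVELS = 16          # distinct conductance states one memory cell can hold

# "Copy": connection strengths inferred from the recordings, normalized
# to 0..1; zero means no connection was detected between that pair.
strengths = rng.random((N_NEURONS, N_NEURONS))
connected = rng.random((N_NEURONS, N_NEURONS)) > 0.7
wiring_map = strengths * connected

# "Paste": each strength becomes the nearest representable memory state.
memory_array = np.round(wiring_map * (LEVELS - 1)).astype(np.uint8)

print(f"{np.count_nonzero(wiring_map)} connections stored "
      f"across {memory_array.size} memory cells")
```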

In their paper, the researchers also suggest a way to paste the neuronal map to a memory network using specially-engineered non-volatile memories, which can learn and express the neuronal map when driven by intracellularly recorded signals. There’s a challenge, however: the human brain has on the order of 86 billion neurons, and a thousand times more synaptic connections.

This means the chip would require about 100 trillion or so memories, a formidable challenge for Samsung. While the researchers are optimistic that a 3D integration of memories could address this issue, it will probably take quite a bit of time for Samsung, or any other company working on neuromorphism, to implement the technology.

The study behind the technology was published in the journal Nature. 

White matter density in our brains at birth may influence how easily we learn to understand and use language

New research at Boston University found that the brain structure of babies can have an important effect on their language development within the first year of life. The findings show that, although nurture plays a vital role in the development of an infant’s language abilities, innate factors also matter.

Image via Pixabay.

The study followed dozens of newborns over the course of five years, looking to establish how brain structure during infancy relates to the ability to learn language during early life. While the results clearly show that innate factors influence this ability, they’re also encouraging — upbringing, or nurture, has a sizable influence on a child’s ability to develop their understanding and use of language.

For the study, the authors worked with 40 families to monitor the development of white matter in infants’ brains using magnetic resonance imaging (MRI). This was particularly difficult to pull off, they explain, as capturing quality data using an MRI relies on the patient keeping completely still.

Born for it

“[Performing this study] was such a fun process, and also one that calls for a lot of patience and perseverance,” says BU neuroscientist and licensed speech pathologist Jennifer Zuk, lead author of the study. “There are very few researchers in the world using this approach because the MRI itself involves a rather noisy background, and having infants in a naturally deep sleep is very helpful in accomplishing this pretty crazy feat.”

The fact that babies have an inborn affinity for absorbing and processing information about their environment and the adults around them isn’t really news. Anyone who’s interacted with an infant can hear the hints of developing language in their cries, giggles, and the myriad other sounds babies produce.

But we also like to talk to babies, thus helping them understand language better. The team wanted to determine how much of an infant’s ability to learn is due to their inborn traits, and how much of it comes down to the practice they get with the adults in their lives.

The new study reports that functional pathways in the brain play a large role in forming a child’s language-learning abilities during the first year of their life. These pathways are represented by white matter, the tissue that acts as a connector in the brain and links together areas of gray matter, where neurons reside and perform the actual heavy lifting in our brains. The team was interested in white matter in particular as it is the element that actually allows neurons to work together to perform tasks. The practice of any skill leads to the reinforcement of connections that underpin it, they explain, showcasing the importance of white matter in brain functionality.

“A helpful metaphor often used is: white matter pathways are the ‘highways,’ and gray matter areas are the ‘destinations’,” says Zuk.

Together with senior author Nadine Gaab from Boston Children’s Hospital, Zuk met with 40 families with infants to record the development of their white brain matter. In order to ensure the quality of the recorded data, they had to make sure that the babies were sound asleep before placing them in the MRI machine — no small challenge, given how loud these devices can get. This is the first time researchers have monitored the relationship between changes in brain structure over time and the development of language throughout the first few years of children’s lives.

One area they studied in particular is the arcuate fasciculus, a strip of white matter that connects two regions of the brain responsible for the understanding and use of language. MRI machines can determine the density of tissues (in this case, of white matter pathways) by measuring how water molecules diffuse through individual pieces of tissue.

Five years after first peering into the babies’ brains, the team met up with the families again in order to assess each child’s language abilities. They tested for vocabulary knowledge, the ability to identify sounds within individual words, and the ability to form words from individual sounds.

They report that children born with higher levels of white matter organization showed better language skills at the five-year mark, suggesting that biological factors do have an important role to play in the development of language skills. By themselves, however, these results are not enough to prove that biological factors outweigh nurture. They simply indicate that brain structure can predispose someone towards greater language abilities. The findings are meant to be a piece of a much larger image, not the whole.

“Perhaps the individual differences in white matter we observed in infancy might be shaped by some combination of a child’s genetics and their environment,” she says. “But it is intriguing to think about what specific factors might set children up with more effective white matter organization early on.”

Even if the foundation for language skills is established in infancy, the team explains, our upbringing and experiences are critical to build upon this natural predisposition and play a very important role in a child’s outcome. Judging from the findings, however, the first year of a child’s life is a very good time to expose them to language in order to promote the development of this skill in the long term.

The paper “White matter in infancy is prospectively associated with language outcomes in kindergarten” has been published in the journal Developmental Cognitive Neuroscience.

Mice can develop neural signs of depression when forced to watch other mice experiencing stress

Depression is a global problem, affecting an ever-growing number of individuals. In a bid to better understand its physiological underpinnings, a team from the Tokyo University of Science has explored how neural deterioration in areas of the brain such as the hippocampus, as well as physical and psychological stress, is tied to depression.

Image credits Tibor Janosi Mozes.

There are several theories regarding why and how depression emerges, drawing on both psychological and physiological factors. In regards to the latter, the “neurogenic hypothesis of depression” has garnered a lot of scientific interest. It states that depression can stem from physical degradation in areas of the brain such as the hippocampus, degradation that can be brought on by stress.

While the link between physical stress and depression has been investigated in the past, much less is known about the effects of psychological stress. A new study aims to give us a better understanding of this topic, using mice as a model organism.

A grinding toll

“The number of individuals suffering from depression has been on the rise the world over. However, the detailed pathophysiology of depression still remains to be elucidated. So, we decided to focus on the possible mechanism of psychological stress in adult hippocampal neurogenesis, to understand its role in depressive disorders,” says Prof. Akiyoshi Saitoh from Tokyo University of Science, co-lead author of the study.

“We have found out that chronic mental stress affects the neurogenesis of the hippocampal dentate gyrus. Also, we believe that this animal model will play an important role in elucidating the pathophysiology of depression, and in the development of corresponding novel drug.”

For the study, the team exposed mice to “repeated psychological stress” in order to test how this impacts hippocampus degeneration in their brains. The experiment relied on chronic social defeat stress (cSDS), an experimental paradigm in which stress is induced in a subject (the ‘naive’ mouse) through repeated defeat by ‘aggressor’ mice. The experimental mice in this research were not attacked themselves; instead, they were made to witness their peers, the naive mice, going through the stressful situation, which is itself a source of psychological stress for the animals, as mice are a highly social species.

After this exposure, the team analyzed the witnesses’ brains to measure the level of degradation the stress produced in key brain areas, and noted any changes in behavior.

First off, they report that the mice exposed to this repeated source of stress started exhibiting behavioral issues such as social withdrawal, indicative of depression. In their brains, more specifically the dentate gyrus area of the hippocampus, the team recorded a decreased survival rate of new-born neurons compared to those of controls. This area is heavily involved in memory and sensory perception.

Lower new-born neuron survival rates persisted for up to four weeks after the animals were exposed to the stress-inducing scenarios. Chronic treatment with the antidepressant fluoxetine was effective in restoring neuronal survival rates in these mice. Other characteristics, such as cell growth, differentiation, and maturation rates, were not impacted by stress in the experimental mice (as compared to controls), the team adds.

The authors link neural degradation in the hippocampus to the emergence of depression through the fact that avoidance behaviors in the experimental mice were “significantly enhanced” four weeks after the last stress-inducing exercise, compared to the first day after it. This behavior, they explain, is likely produced by degradation mounting in the neurons of the hippocampus following the experience.

Although these findings have not yet been validated in humans, the authors believe they can form an important part of understanding how depression emerges in our brains as well. Further work is needed to validate the results and see how well they translate to humans.

The paper “Chronic vicarious social defeat stress attenuates new-born neuronal cell survival in mouse hippocampus” has been published in the journal Behavioural Brain Research.

Machine learning tool 99% accurate at spotting early signs of Alzheimer’s in the lab

Researchers from universities in Kaunas, Lithuania, have developed an algorithm that can predict the risk of someone developing Alzheimer’s disease from brain images with over 99% accuracy.

Image credits Nevit Dilmen via Wikimedia.

Alzheimer’s is the world’s leading cause of dementia, according to the World Health Organization, causing or contributing to an estimated 70% of cases. As living standards improve and the average age of global populations increases, the number of dementia cases is very likely to rise greatly in the future, as the condition is highly correlated with age.

However, since the early stages of dementia have almost no clear, accepted symptoms, the condition is almost always identified in its later stages, when intervention options are limited. The team from Kaunas hopes that their work will help protect people from dementia by allowing doctors to identify those at risk much earlier.

Finding out early

“Medical professionals all over the world attempt to raise awareness of an early Alzheimer’s diagnosis, which provides the affected with a better chance of benefiting from treatment. This was one of the most important issues for choosing a topic for Modupe Odusami, a Ph.D. student from Nigeria,” says Rytis Maskeliūnas, a researcher at the Department of Multimedia Engineering, Faculty of Informatics, Kaunas University of Technology (KTU), Odusami’s Ph.D. supervisor.

One possible early sign of Alzheimer’s is mild cognitive impairment (MCI), a middle ground between the decline we could reasonably expect to see naturally as we age, and dementia. Previous research has shown that functional magnetic resonance imaging (fMRI) can identify areas of the brain where MCI is ongoing, although not all cases can be detected in this way. At the same time, finding physical features associated with MCI in the brain doesn’t necessarily prove illness, but is more of a strong indicator that something is not working well.

While it is possible to detect early-onset Alzheimer’s this way, the authors explain that manually identifying MCI in these images is extremely time-consuming and requires highly specific knowledge, meaning any implementation would be prohibitively expensive and could only handle a tiny number of cases.

“Modern signal processing allows delegating the image processing to the machine, which can complete it faster and accurately enough. Of course, we don’t dare to suggest that a medical professional should ever rely on any algorithm one-hundred-percent. Think of a machine as a robot capable of doing the most tedious task of sorting the data and searching for features. In this scenario, after the computer algorithm selects potentially affected cases, the specialist can look into them more closely, and at the end, everybody benefits as the diagnosis and the treatment reaches the patient much faster,” says Maskeliūnas, who supervised the team working on the model.

The model was trained on fMRI images from 138 subjects from The Alzheimer’s Disease Neuroimaging Initiative fMRI dataset. It was asked to separate these images into six categories, ranging across the spectrum from healthy through to full-onset Alzheimer’s. Several tens of thousands of images were selected for training and validation purposes. The authors report that it was able to correctly identify MCI features in this dataset, achieving accuracies between 99.95% and 99.99% for different subsets of the data.
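For readers curious about the nuts and bolts, the sketch below shows what fine-tuning a ResNet18 for six-way classification looks like in PyTorch. It is a minimal illustration rather than the authors’ actual code: the random tensors stand in for preprocessed fMRI-derived images, and the hyperparameters are placeholders.

```python
# Minimal six-way fine-tuning sketch with a ResNet18 backbone (PyTorch).
# NOT the study's code: dummy data stands in for real fMRI-derived images.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # healthy through full-onset Alzheimer's, per the paper

# Start from a pretrained ResNet18 and swap its final layer for our classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch: eight 224x224 RGB images with random labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```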

While this is not the first automated system meant to identify early onset of Alzheimer’s from this type of data, the accuracy of this system is nothing short of impressive. The team cautions that “such high numbers are not indicators of true real-life performance”, but the results are still encouraging, and they are working to improve their algorithm with more data.

Their end goal is to turn this algorithm into a portable, easy-to-use software — perhaps even an app.

“Technologies can make medicine more accessible and cheaper. Although they will never (or at least not soon) truly replace the medical professional, technologies can encourage seeking timely diagnosis and help,” says Maskeliūnas.

The paper “Analysis of Features of Alzheimer’s Disease: Detection of Early Stage from Functional Brain Changes in Magnetic Resonance Images Using a Finetuned ResNet18 Network” has been published in the journal Diagnostics.

Your brain is cleaning itself while you’re dreaming, new research suggests

The findings help us better understand why virtually all animals sleep, despite the fact that it leaves us helpless against predators and other threats.

Image via Pixabay.

The team, led by members from the University of Tsukuba, explains that a certain phase of sleep (rapid eye movement sleep, or REM) gives our brains the opportunity to perform necessary maintenance. This, in turn, ensures that they’re running at peak capacity the rest of the time. The research builds on previous measurements of blood flow in the brain during different phases of sleep and wakefulness, which had yielded conflicting results. In this study, the researchers used a technique to directly visualize how red blood cells move through the brain capillaries of sleeping and awake mice, while also measuring electrical activity in the brain.

Housekeeping

“We used a dye to make the brain blood vessels visible under fluorescent light, using a technique known as two-photon microscopy,” says senior author of the study Professor Yu Hayashi. “In this way, we could directly observe the red blood cells in capillaries of the neocortex in non-anesthetized mice.”

“We were surprised by the results. There was a massive flow of red blood cells through the brain capillaries during REM sleep, but no difference between non-REM sleep and the awake state, showing that REM sleep is a unique state.”

In order to help elucidate the conflicting previous findings on this topic, the authors monitored blood flow rates in different areas of the brain alongside electrical activity. The latter was used to distinguish between different states of awareness (non-REM sleep, REM sleep, full wakefulness). Since we know that the development of certain conditions such as Alzheimer’s — which involve the buildup of waste products in the brain — is associated with reduced blood flow in the brain, the former was used as a rough proxy for the maintenance and cleaning processes taking place in the mice’s brains.

The link between the two is that the removal of these waste products involves biochemical processes that eventually culminate in an increased blood flow (as the waste needs to be physically removed) during rest. Disposal of this material doesn’t take place, to the best of our knowledge, during wakefulness; or, at least, not to any extent that we’ve been able to pick up on.

After recording the differences between the three states, the team also disrupted the mice’s sleep. They report that this resulted in their brains engaging in “rebound” REM sleep later in the experiment. This state, which resembles a stronger REM sleep, was likely used to compensate for the earlier disruption, the team hypothesizes. This, by itself, suggests that REM sleep has an important role to play in brain functionality.

Later, the team repeated this sleep disruption experiment with mice whose brain A2a receptors were artificially blocked — these are the same receptors that get blocked after you have a cup of coffee, and doing so makes you feel more awake. In these conditions, they saw a much lower increase in blood flow during both REM and rebound-REM sleep. This is a strong indicator “that adenosine A2a receptors may be responsible for at least some of the changes in blood flow in the brain during REM sleep,” says Professor Hayashi.

Judging from these findings, the team says that there may be merit in investigating whether the heightened blood flow seen in brain capillaries during REM sleep facilitates waste removal from brain tissues. This could, in time, lead us towards treatments or preventive measures against conditions such as Alzheimer’s disease. They also point to adenosine A2a receptors as a prime candidate for such treatments, given the observed role of these receptors in modulating blood flow in the brain during REM sleep.

The paper “Cerebral capillary blood flow upsurge during REM sleep is mediated by A2a receptors” has been published in the journal Cell Reports.

Our brains fire up their ‘prediction engine’ when faced with uncertainty — at least with music

When listening to music, our brains don’t just sit back and relax. Instead, they get hard at work trying to predict the patterns of the song.

Image via Pixabay.

We know from past research that our brains are surprisingly active when we’re listening to music, much more so than would be the case if they were simply processing the sounds. New research shows that the human brain processes music by analyzing what we’ve already heard, and using that to try to predict what’s coming next.

Music to my ears

“The brain is constantly one step ahead and matches expectations to what is about to happen,” said Niels Chr. Hansen, a fellow at the Aarhus Institute of Advanced Studies and one of two lead authors on the paper. “This finding challenges previous assumptions that musical phrases feel finished only after the next phrase has begun.”

“We only know a little about how the brain determines when things start and end. Here, music provides a perfect domain to measure something that is otherwise difficult to measure — namely, uncertainty.”

The study focused on musical phrases, one of the most basic units of music — if notes are treated as equivalent to individual letters, musical phrases would be words that go together. Musical phrases are made up of a sequence of sounds that together form a distinct element within a larger melody. They’re coherent within themselves, meaning that although they are only a part of a larger melody, they do “make sense” so to speak even when played alone.

The team chose musical phrases as the basis for their research precisely because of this property. Being coherent by themselves means that our brains can perceive them as music, but they don’t offer any information about what comes after them, because they’re a full sequence in themselves and don’t necessarily shape the phrases that follow, although they can.

What the team wanted to determine was how our brains react to the uncertainty this creates. Our brains like to look for patterns in the world around us (an inclination they developed while trying to keep us alive in the wild). They worked with 38 participants who were asked to listen to Bach chorale melodies, note by note. They were able to pause and restart the music at will by pressing the spacebar on a computer keyboard and were told that they would be tested afterward to check how well they remembered the melodies. This allowed the researchers to use the time participants dwelled on each tone as an indirect measure of their understanding of musical phrasing.

In a second experiment, 31 participants listened to the same musical phrases and were then asked to rate how ‘complete’ they sounded. They rated melodies that ended on high-entropy tones (those with higher uncertainty) as more complete, and tended to listen to them longer on average.
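For the curious, ‘entropy’ here is the information-theoretic kind: a measure of how uncertain the next note is, given how likely each possible continuation is. The snippet below is a bare-bones illustration of the concept; the probabilities are made up, not taken from the study’s melody model.

```python
# Shannon entropy of a next-note distribution, in bits.
import math

def shannon_entropy(probs):
    """Entropy of a probability distribution over possible next notes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# One continuation dominates after a very predictable phrase ending:
predictable = [0.85, 0.05, 0.05, 0.05]
# Many continuations are equally plausible after an ambiguous one:
ambiguous = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(predictable))  # ~0.85 bits: low uncertainty
print(shannon_entropy(ambiguous))    # 2.0 bits: maximum uncertainty
```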

“We were able to show that people have a tendency to experience high-entropy tones as musical-phrase endings. This is basic research that makes us more aware of how the human brain acquires new knowledge not just from music, but also when it comes to language, movements, or other things that take place over time,” said Haley Kragness, a postdoctoral researcher at the University of Toronto Scarborough and the paper’s second lead author.

“This study shows that humans harness the statistical properties of the world around them not only to predict what is likely to happen next, but also to parse streams of complex, continuous input into smaller, more manageable segments of information,” adds Hansen.

While studying how our brains interpret music might seem trivial, it feeds into the much wider topic of the mechanisms that allow us to perceive and process the world around us. It might also be valuable for researchers seeking to understand the very foundations of communication between people, as this involves an exchange of information in various forms that our brains may, or may not, try to interpret and understand in the same way this study’s participants did.

The paper “Predictive Uncertainty Underlies Auditory Boundary Perception” has been published in the journal Psychological Science.

Potential new treatment for drug-resistant depression identified in mice — blocking histamines in the brain

Inflammation could have a direct impact on our mood through a molecule known as histamine, according to new research. Histamines are produced when white blood cells encounter a potential allergen, but their release in the blood also seems to interfere with serotonin, a mood-regulating neurotransmitter.

Histamine molecule. Image via Wikimedia.

Nobody likes allergies, but science has found a new reason to dislike them even more. Histamines, the molecules that mediate allergic responses, also seem to sour the mood of lab mice. New research from Imperial College London and the University of South Carolina reports that inflammation, and the release of histamines that accompanies it, interferes with serotonin in the brain. It also seems to affect how effective antidepressants can be at improving our mood, since these compounds also work by regulating serotonin production in the brain.

If these findings are replicated in humans, the team explains, we could open up new avenues of treatment for depression and treatment-resistant depression, which together form the most common mental health problem worldwide.

Unfortunate interaction

“Inflammation could play a huge role in depression, and there is already strong evidence that patients with both depression and severe inflammation are the ones most likely not to respond to antidepressants,” explains Dr. Parastoo Hashemi from Imperial’s Department of Bioengineering, lead author of the paper.

“Our work shines a spotlight on histamine as a potential key player in depression. This, and its interactions with the ‘feel-good molecule’ serotonin, may thus be a crucial new avenue in improving serotonin-based treatments for depression.”

While histamines are best known for the part they play in allergic reactions, they’re actually involved in basically every episode of inflammation in our bodies. Inflammation is an expansive term that refers to the process through which immune cells fight off pathogens and other threats. Swelling is one of the most obvious symptoms of inflammation, so the two terms are colloquially used to mean the same thing.

Inflammation is generally a response to infections, but can also be caused by stress, chronic diseases, obesity, neurodegenerative diseases, and allergic responses. Histamines mediate this process by increasing blood flow to affected areas and drawing immune cells to it.

Serotonin, colloquially known as the “happiness molecule”, is a key mood-regulating neurotransmitter, central to the systems that govern mood and feelings of well-being. It’s also one of the main targets for today’s antidepressants. One of the most commonly prescribed classes of antidepressants today, selective serotonin reuptake inhibitors (SSRIs), helps to alleviate depression by preventing our bodies from scrubbing serotonin from the brain — essentially, it doesn’t touch the happiness tap, but it does block the drain.

However, many patients are resistant to SSRIs. The team set out to determine whether specific interactions between serotonin and other neurotransmitters could explain this resistance. They applied serotonin-measuring microelectrodes to the brains of live mice, particularly in the hippocampus, an area known to play a part in regulating mood. The technique, known as fast-scan cyclic voltammetry (FSCV), allows for live measurements of serotonin levels without harming the brain.

After placing the microelectrodes, they injected half the mice with lipopolysaccharide (LPS), an inflammation-causing toxin found in some bacteria, and half the mice with a saline solution as a control.

Brain serotonin levels dropped sharply within minutes of the LPS injection, but remained constant in the control group. This shows how quickly inflammation can affect serotonin levels in the brain, the team explains, as LPS is unable to cross the blood-brain barrier — and therefore cannot be the cause of the drop.

Further investigation revealed that histamines released in response to the LPS inhibited the release of serotonin in the brain by attaching to inhibitory receptors on the serotonin neurons. Humans also have these inhibitory receptors, the team explains. SSRIs administered to these mice had very modest results in boosting their brain serotonin levels.

However, when administered alongside histamine-reducing drugs, the SSRIs countered the observed drop, and serotonin levels rose to the same levels as seen in the control group. According to the authors, these drugs lower histamine levels throughout the body and are distinct from antihistamines taken for allergies, which block histamine’s effects on neurons — so don’t try to self-medicate with anti-allergy pills for depression.

That being said, if these findings can be replicated in humans, we’d gain access to a new and powerful avenue of treatment, especially for cases that do not respond to our current options. However, because the current findings are based solely on work with lab animals, there’s no guarantee that they will be replicated in humans.

The paper “Inflammation-Induced Histamine Impairs the Capacity of Escitalopram to Increase Hippocampal Extracellular Serotonin” has been published in the Journal of Neuroscience.

Learning music changes how our brains process language, and vice-versa

Language and music seem to go hand-in-hand in the brain, according to new research. The team explains that music-related hobbies boost language skills by influencing how speech is processed in the brain. But flexing your language skills, by learning a new language, for example, also has an impact on how our brains process music, the authors explain.

Image credits Steve Buissinne.

The research, carried out at the University of Helsinki’s Faculty of Educational Sciences, in cooperation with researchers from the Beijing Normal University (BNU) and the University of Turku, shows that there is a strong neurological link between language acquisition and music processing in humans. Although the findings are somewhat limited due to the participant sample used, the authors are confident that further research will confirm their validity on a global scale.

Eins, Zwei, Polizei

“The results demonstrated that both the music and the language programme had an impact on the neural processing of auditory signals,” says lead author Mari Tervaniemi, a Research Director at the University of Helsinki’s Faculty of Educational Sciences.

“A possible explanation for the finding is the language background of the children, as understanding Chinese, which is a tonal language, is largely based on the perception of pitch, which potentially equipped the study subjects with the ability to utilise precisely that trait when learning new things. That’s why attending the language training programme facilitated the early neural auditory processes more than the musical training.”

The team worked with Chinese elementary school pupils aged 8-11, whom they monitored for the duration of one full school year. All of the participants were attending either music training courses or a similar programme to help them learn English. During this time, the authors measured and recorded the children’s brain responses to auditory stimuli, both before and after the conclusion of the school programmes. This was done using electroencephalogram (EEG) recordings; at the start, 120 children were investigated using EEG, with 80 of them being recorded again one year after the programme.

During the music training classes, pupils were taught to sing from both hand signs and sheet music and, naturally, practised singing quite a lot. Language training classes combined exercises for both spoken and written English, as English relies on a different orthography (writing system) than Chinese. Both programmes were carried out in one-hour sessions twice a week, either after school or during school hours, throughout the school year. Around 20 pupils and two teachers attended each session.

All in all, the team reports that pupils who underwent the English training programme showed enhanced processing of musical sounds in their brains, particularly in regards to pitch.

“To our surprise, the language program facilitated the children’s early auditory predictive brain processes significantly more than did the music program. This facilitation was most evident in pitch encoding when the experimental paradigm was musically relevant,” they explain.

The results support the hypothesis that music and language processing are closely related functions in the brain, at least as far as young brains are concerned. The authors explain that both music and language practice help modulate our brain’s ability to perceive sounds since they both rely heavily on sound — but that being said, we can’t yet say for sure whether these two have the exact same effect on the developing brain, or if they would influence it differently.

At the same time, the study used a relatively small sample size, and all participants belonged to the same cultural and linguistic background. Whether or not children who are native speakers of other languages would show the same effect is still debatable, and up for future research to determine.

The paper “Improved Auditory Function Caused by Music Versus Foreign Language Training at School Age: Is There a Difference?” has been published in the journal Cerebral Cortex.

Six cups of coffee a day is enough to start damaging your brain

A coffee each morning can work as a quick pick-me-up. But don’t go overboard, researchers from the University of South Australia warn, as it could negatively impact your brain’s health.

Image credits Karolina Grabowska.

One of the largest studies of its kind reports that high coffee consumption is associated with an increased risk of dementia and smaller total brain volumes. The study included data from 17,702 UK Biobank participants aged 37-73, finding that those who drank six or more cups of coffee per day had a 53% increased risk of dementia, and showed reduced volumes in their overall brains, white matter, gray matter, and their hippocampus.

Brain drain

“Coffee is among the most popular drinks in the world. Yet with global consumption being more than nine billion kilograms a year, it’s critical that we understand any potential health implications,” says Kitty Pham, lead researcher on the paper and a Ph.D. candidate at the University of South Australia (UniSA). “This is the most extensive investigation into the connections between coffee, brain volume measurements, the risks of dementia, and the risks of stroke—it’s also the largest study to consider volumetric brain imaging data and a wide range of confounding factors.

“Accounting for all possible permutations, we consistently found that higher coffee consumption was significantly associated with reduced brain volume—essentially, drinking more than six cups of coffee a day may be putting you at risk of brain diseases such as dementia and stroke.”

Although I personally know nobody who actually drinks six or more cups of coffee a day, there are certainly a few out there. As such, the findings could be quite important for public health, pointing to a source of preventable brain damage, including stroke and dementia.

Dementia affects about 50 million people worldwide, eroding an individual’s ability to think and remember, and impacting their behavior and their ability to perform even everyday tasks. It’s a degenerative brain condition and a sizeable cause of death worldwide.

Strokes involve the disruption of blood flow to the brain, usually through blood clots or the rupturing of blood vessels, and end up starving areas of the brain of oxygen. This, in turn, leads to (usually significant) brain damage and loss of function. They’re surprisingly common, affecting one in four adults over the age of 25 worldwide.

The team explains that the exact mechanism through which excessive caffeine impacts brain health is not yet known, but these results — along with previous research on the topic — make a strong argument that it does have such an effect. Still, this doesn’t mean you have to put your cup down for good. Moderation is the name of the game, the team explains.

“This research provides vital insights about heavy coffee consumption and brain health, but as with many things in life, moderation is the key,” says Professor Elina Hyppönen, senior investigator and Director of UniSA’s Australian Centre for Precision Health.

“Together with other genetic evidence and a randomized controlled trial, these data strongly suggest that high coffee consumption can adversely affect brain health. While the exact mechanisms are not known, one simple thing we can do is to keep hydrated and remember to drink a bit of water alongside that cup of coffee.”

People typically consume between one and two cups of coffee per day, the team adds, though a ‘cup’ is an imprecise measure, as cup sizes vary considerably. Still, such low levels of intake should be fine. As long as you’re not closing in on five or six cups a day, they conclude, you should be safe.
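As a rough, hypothetical illustration of just how fuzzy ‘cups’ is as a dose, the snippet below converts a six-cup day into approximate caffeine intake using commonly cited ballpark figures; none of these numbers come from the study itself.

```python
# Rough per-cup caffeine figures (mg); commonly cited ballpark values,
# not data from the study.
CAFFEINE_MG_PER_CUP = {
    "small instant": 60,
    "average drip": 95,
    "large strong brew": 150,
}

cups_per_day = 6
for kind, mg in CAFFEINE_MG_PER_CUP.items():
    print(f"{cups_per_day} x {kind}: ~{cups_per_day * mg} mg caffeine/day")
```

Depending on the cup, the same ‘six cups a day’ can mean anywhere from roughly 360 mg to 900 mg of caffeine.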

The paper “High coffee consumption, brain volume and risk of dementia and stroke” has been published in the journal Nutritional Neuroscience.

Your first memory is probably older than you think

What’s your earliest memory? Statistically speaking, it’s likely from when you were two-and-a-half years old, according to a new study.

Image credits: Ryan McGuire.

Up to now, it was believed that people generally form their earliest long-term memories around the age of three-and-a-half. This initial ‘childhood amnesia’ is, to the best of our knowledge, caused by an overload of the hippocampus in the infant brain, an area heavily involved in the formation and retention of long-term memory.

However, new research is pushing that timeline back by a whole year — it’s just that we don’t usually realize we have these memories.

There, but fuzzy

“When one’s earliest memory occurs, it is a moving target rather than being a single static memory,” explains lead author and childhood amnesia expert Dr. Carole Peterson, from the Memorial University of Newfoundland.

“Thus, what many people provide when asked for their earliest memory is not a boundary or watershed beginning, before which there are no memories. Rather, there seems to be a pool of potential memories from which both adults and children sample. And, we believe people remember a lot from age two that they don’t realize they do.”

Dr. Peterson explains that recalling early memories is like “priming a pump”: asking an individual for their earliest memory, and then asking them for more, generally allows them to recall even earlier events than the one initially offered, even things that happened a year before their ‘first’ memory. She adds that the team has also documented a tendency among people to “systematically misdate” their memories, typically by believing they were older during certain events than they really were.

For this study, she reviewed 10 of her research articles on childhood amnesia along with both published and unpublished data from her lab gathered since 1999. All in all, this included 992 participants, with the memories of 697 of them also being compared to the recollections of their parents. This dataset heavily suggests that people tend to overestimate how old they were at the time of their first memories — as confirmed by their parents.
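As a toy illustration of the comparison described above, the sketch below contrasts hypothetical participant-reported ages with parent-confirmed ones and computes the average misdating; every number in it is invented for demonstration.

```python
# Invented example: participant-reported vs. parent-confirmed memory ages.
reported_age = [3.5, 4.0, 2.8, 3.2, 3.9]   # years, as recalled by participants
confirmed_age = [2.5, 3.1, 2.0, 2.4, 2.8]  # years, as documented by parents

errors = [r - c for r, c in zip(reported_age, confirmed_age)]
mean_bias = sum(errors) / len(errors)
print(f"mean misdating: +{mean_bias:.2f} years")  # positive = recalled as older
```

A consistently positive bias of this kind is exactly the pattern the paper describes: people remember the event, but assign it to an older age than the documented one.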

This isn’t to say that our memories are unreliable. Peterson found evidence that, for example, children interviewed both two and eight years after their earliest memory could still recall the events reliably, but tended to date them to a later age with each subsequent interview. This, she believes, comes down to a phenomenon called ‘telescoping’.

“Eight years later many believed they were a full year older. So, the children, as they age, keep moving how old they thought they were at the time of those early memories,” says Dr. Peterson. “When you look at things that happened long ago, it’s like looking through a lens. The more remote a memory is, the telescoping effect makes you see it as closer. It turns out they move their earliest memory forward a year to about three and a half years of age. But we found that when the child or adult is remembering events from age four and up, this doesn’t happen.”

By comparing the information provided by participants with that provided by their parents, Dr. Peterson found that people likely remember much earlier into their childhood than they think they do. Those memories are also generally accessible with a little help. “When you look at one study, sometimes things don’t become clear, but when you start putting together study after study and they all come up with the same conclusions, it becomes pretty convincing,” she adds, while conceding that the field’s lack of hard, verifiable data remains a serious limitation on her work.

According to her, all research in this field suffers from the same shortage of hard, verifiable data. Going forward, she recommends that childhood amnesia research be grounded in verifiable evidence — either independently confirmed memories or documented external dates against which recollections can be checked — as this would guard against errors from both participants and their parents, improving the reliability of the results.

The paper “What is your earliest memory? It depends” has been published in the journal Memory.