Tag Archives: electroencephalogram

Brain scans are saving convicted murderers from death row–but should they?

Over a decade ago, a brain-mapping technique known as quantitative electroencephalography (qEEG) was used in a death penalty case for the first time, helping keep a convicted killer and serial child rapist off death row by convincing jurors that traumatic brain injury (TBI) had left him prone to impulsive violence.

In the years since, qEEG has remained in an odd stasis, accepted inconsistently in a small number of US death penalty cases. In some trials, prosecutors fought it as junk science; in others, they raised no objection to the imaging at all, leaving a case history built on sand. Still, this handful of test cases could signal a new era in which science helps push the legal execution of humans toward abolition.

Quantifying criminal behavior to prevent it

As it stands, science cannot quantify or explain every event or action in the universe; much of life rests on inference and conjecture. But, DNA evidence aside, isn't that exactly what happens in a criminal courtroom? So why is it so hard to integrate validated neuroimaging into legal cases? Of course, one could make a solid argument that it would be easier to simply do away with the barbaric death penalty and concentrate on stopping these awful crimes from happening in the first place, but that is a different debate.

The problem is more complex than it seems. Neuroimaging could be used not just to exempt the mentally ill from the death penalty but also to explain horrendous crimes to the victims or their families. Just as crucially, could governments start implementing measures to prevent this type of criminal behavior, using electrotherapy or counseling to 'rectify' abnormal brain patterns? This could lead down some very slippery slopes.

And it's not just death row cases that are putting qEEG to the test: nearly every injury lawsuit in the USA now includes a TBI claim. With magnetic resonance imaging (MRI) and computed tomography (CT) generally expensive, lawyers are constantly seeking new ways to prove brain dysfunction. Readers should note that both of these neuroimaging techniques are viewed as more accurate than qEEG, but they capture only a single, static image of the neurological condition and thus provide no direct measurement of functional, ongoing brain activity.

In contrast, the cheaper and quicker qEEG purports to monitor ongoing brain activity continuously in order to diagnose a range of neurological conditions, and could one day flag those more inclined to violence, enabling early intervention, therapy sessions, and one-to-one help aimed at preventing the problem.

But until society reaches that point, defense and human rights lawyers have been trying to slowly phase out legal executions by using brain mapping to explain why their convicted clients may have committed these crimes, gradually shifting the focus from the consequences of mental illness and disorders to a better understanding of the conditions themselves.

The sad case of Nikolas Cruz

But the questions surrounding this technology will soon be on trial again in the most high-profile death penalty case in decades: Florida v. Nikolas Cruz. On the afternoon of February 14, 2018, Cruz, then just 19 years old, opened fire on students and staff at Marjory Stoneman Douglas High School in Parkland. In what is now classed as the deadliest high school shooting in the country's history, the state charged the former Stoneman Douglas student with the premeditated murder of 17 students and staff and the attempted murder of a further 17 people.

With the sentencing expected in April 2022, Cruz’s defense lawyers have enlisted qEEG experts as part of their case to persuade jurors that brain defects should spare him the death penalty. The Broward State Attorney’s Office signaled in a court filing last month that it will challenge the technology and ask a judge to exclude the test results—not yet made public—from the case.

Cruz has already pleaded guilty to all charges, but a jury will now debate whether to hand down the death penalty or life in prison.

According to a court document filed recently, Cruz’s defense team intends to ask the jury to consider mitigating factors. These include his tumultuous family life, a long history of mental health disorders, brain damage caused by his mother’s drug addiction, and claims that a trusted peer sexually abused him—all expected to be verified using qEEG.

After reading the flurry of news reports on the upcoming case, one can't help but wonder why, even without the use of qEEG, someone with a record of mental health issues at only 19 years old should be on death row. And since authorities and medical professionals were aware of Cruz's problems, what failures of prevention allowed him to murder 17 people? Have these even been addressed or corrected? It seems unlikely.

On a positive note, prosecutors in several US counties have not opposed brain mapping testimony in more recent years. According to Dr. David Ross, CEO of NeuroPAs Global and a qEEG expert, the reason is that a growing body of scientific papers and research has validated the test's reliability, helping the technique gain broader use in the diagnosis and treatment of cognitive disorders, even though courts are still debating its effectiveness. "It's hard to argue it's not a scientifically valid tool to explore brain function," Ross stated in an interview with the Miami Herald.

What exactly is a quantitative electroencephalogram (qEEG)?

To explain what a qEEG is, you first need to know what an electroencephalogram, or EEG, does. An EEG records the electrical potential difference between pairs of electrodes placed on the outside of the scalp, and it supplies the raw data on which the computerized qEEG is built. Multiple electrodes (generally more than 20) are connected in pairs to form arrangements called montages, producing a series of paired channels of EEG activity. The results appear as squiggly lines on paper: brain wave patterns that clinicians have used for decades to detect evidence of neurological problems.

More recently, trained professionals have computerized this data to create the qEEG, translating raw EEG recordings with mathematical algorithms that break the signal down into its brainwave frequencies. Clinicians then compare this statistical analysis against a database of standard, neurotypical brains to identify abnormal brain function of the kind cited in death row cases.
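
To make that comparison step concrete, here is a minimal Python sketch of the kind of analysis a qEEG report rests on: band power is estimated from the raw signal with a Fourier transform and converted to a z-score against a normative database. The sampling rate, band limits and normative values below are illustrative assumptions, not figures from any real clinical database.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed for this example)

# Hypothetical normative database entry: mean and standard deviation of relative
# alpha power (8-12 Hz) for one electrode in an age-matched population.
NORM_MEAN, NORM_SD = 0.35, 0.08  # illustrative values only

def relative_band_power(signal, fs=FS, band=(8.0, 12.0)):
    """Fraction of total spectral power falling inside the given frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / power.sum()

def qeeg_z_score(signal):
    """Compare a subject's relative alpha power against the normative values."""
    return (relative_band_power(signal) - NORM_MEAN) / NORM_SD

# Example: 10 seconds of simulated EEG (noise plus a 10 Hz alpha rhythm).
t = np.arange(0, 10, 1.0 / FS)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
print(f"alpha z-score: {qeeg_z_score(eeg):+.2f}")  # |z| well above 2 would be flagged as atypical
```

In a real report this is repeated across many electrodes and frequency bands, which is also why the choice of normative database matters so much to the result.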

While this can be true, results can still go awry due to incorrect electrode placement, recording artifacts, inadequate band filtering, drowsiness, comparison against the wrong control database, and the choice of timeframes. Furthermore, processing can yield a large amount of clinically irrelevant data. These are some of the reasons the usefulness of qEEG remains controversial despite the volume of published research. Many of these problems, however, can be avoided simply by having trained medical professionals operate the apparatus and interpret the data.

Just one case is disrupting the use of this novel technology

Yet despite this straightforward fix, qEEG is not generally accepted by the relevant scientific community for diagnosing traumatic brain injuries and is therefore often ruled inadmissible under Frye v. United States, an archaic 1923 case concerning a polygraph test. That trial came a mere 17 years after Cajal and Golgi shared a Nobel Prize for producing slides and hand-drawn pictures of the brain's neurons.

Experts could also argue that a lie detector test (measuring blood pressure, pulse, respiration, and skin conductivity) is far removed from a machine monitoring brain activity. Furthermore, when the Court of Appeals of the District of Columbia decided on this lawsuit, qEEG didn’t exist. 

Applying the Frye standard, courts throughout the country have excluded qEEG evidence in the context of alleged brain trauma. For example, the Florida Supreme Court has formally noted that, for the purposes of Frye, the relevant scientific community holds that “qEEG is not a reliable method for determining brain damage and is not widely accepted by those who diagnose a neurologic disease or brain damage.”

However, in a seminal paper covering the use of qEEG in cognitive disorders, the American Academy of Neurology (AAN) concluded overall that computer-assisted diagnosis using qEEG is an accurate, inexpensive, easy-to-handle tool and a valuable aid for diagnosis, evaluation, follow-up and prediction of response to therapy, despite the Academy's reservations about the technology expressed elsewhere. The paper also cites other neurological associations that endorse the use of this technology.

The introduction of qEEG on death row was not that long ago

Only recently introduced, the technology was first deemed admissible in court during the death-penalty prosecution of Grady Nelson in 2010. Nelson stabbed his wife 61 times with a knife, then raped and stabbed her 11-year-old intellectually disabled daughter and stabbed her 9-year-old son. The woman died, while her children survived. Documents state that Nelson's wife had found out he had been sexually abusing both children for many years and sought to keep them away from him.

Nelson’s defense argued that earlier brain damage had left him prone to impulsive behavior and violence. Prosecutors fought to strike the qEEG test from evidence, contending that the science was unproven and misused in this case.

“It was a lot of hocus pocus and bells and whistles, and it amounted to nothing,” the prosecutor on the case, Abbe Rifkin, stated. “When you look at the facts of the case, there was nothing impulsive about this murder.”

However, after hearing testimony for the defense from Dr. Robert W. Thatcher, a multi-award-winning pioneer of qEEG analysis, Judge Hogan-Scola found that qEEG met the legal prerequisites for reliability. She based this on the Frye and Daubert standards, the two benchmark tests governing the admissibility of scientific evidence.

She allowed jurors to hear the qEEG report and even permitted Thatcher to present a computer slide show of Nelson’s brain with an explanation of the effects of frontal lobe damage at the sentencing phase. He testified that Nelson exhibited “sharp waves” in this region, typically seen in people with epilepsy – explaining that Grady doesn’t have epilepsy but does have a history of at least three TBIs, which could explain the abnormality seen in the EEG.  

Interpreting the data, Thatcher also told the court that the frontal lobes, located directly behind the forehead, regulate behavior. “When the frontal lobes are damaged, people have difficulty suppressing actions … and don’t understand the consequences of their actions,” Thatcher told ScienceInsider.

Jurors rejected the death penalty. Two jurors who agreed to be interviewed by a major national publication later categorically stated that the qEEG imaging and testimony influenced their decision.

“The moment this crime occurred, Grady had a broken brain,” his defense attorney, Terry Lenamon, said. “I think this is a huge step forward in explaining why people are broken—not excusing it. This is going to go a long way in mitigating death penalty sentences.”

On the other hand, Charles Epstein, a neurologist at Emory University in Atlanta who testified for the prosecution, says the qEEG data Thatcher presented rested on flawed statistical analysis and was riddled with artifacts not naturally present in EEG recordings. Epstein adds that the sharp waves Thatcher reported may have been blips caused by the contraction of muscles in the head. "I treat people with head trauma all the time," he says. "I never see this in people with head trauma."

You can see Epstein's point, as it's unclear whether these brain injuries occurred before or after Nelson brutally raped a 7-year-old girl in 1991, after which he was granted probation and trained as a social worker.

All of which raises two questions. First, do we need qEEG to tell us that this person's behavior is abnormal, or that the legal system does not protect children? And second, was the authorities' response in the 1991 case appropriate, let alone preventative?

With mass shootings and other forms of extreme violence remaining at relatively high levels in the United States, committed by ever younger perpetrators who are flagged as loners and fantasists by the state mental healthcare systems they then disappear into, it's evident that sturdier preventative programs need to be implemented by governments worldwide. The worst has already occurred: our children are unprotected against dangerous predators and unaided when harmed by unstable and abusive environments, inappropriate social media, and TV.

A potential beacon of hope, qEEG is already beginning to highlight the country's broken socio-legal systems and the amount of work it will take to fix them. It is attempting to humanize a fractured court system that still disposes of the products of trauma and abuse as though they were nothing but waste, and forcing the authorities to answer for their failings; any science that can do this can't be a bad thing.

Automated tasks are still processed while you sleep

Image: Washington Post

Despite an incredible body of work dedicated to researching what goes on inside the brain while we sleep, the consensus among neuroscientists is that we're just beginning to scratch the surface. For instance, we've yet to answer a fundamental question: why do we need sleep? We all agree that we need it (going without sleep for long periods of time can bring terrible consequences), but the mechanics that underlie it are far from understood. New research by a team of French and British scientists lends us further insight into the amazing world of the sleeping brain. The findings suggest that we are still capable of processing verbal instructions even while fast asleep, which might help explain why you wake up when someone calls your name in the background, but not when other sounds are about.

Pushing buttons in your sleep

Studies so far suggest there's a definite connection between sleep, memory and learning, but the present research (published in Cell) focused on how the brain responds to automatic tasks while sleeping. First, the researchers asked volunteers, while awake, to identify spoken words as either animals or objects by pushing a corresponding button: their right hand for animals, their left hand for objects. The participants did this until the task became automatic, and all the while their brain waves were recorded.

EEG (electroencephalography) readings showed where activity was taking place in the brain and which parts were being prepped for a response: when the word "elephant" is heard, one part of the brain recognizes the word while another processes it as an animal.
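
The study's exact analysis pipeline isn't described in this article, but the general idea of "seeing" which hand is being prepared is easy to sketch. The toy Python example below uses a generic marker of motor preparation, the left/right asymmetry in 8-30 Hz power over the two motor-cortex electrodes (C3 and C4); the sampling rate, threshold logic and simulated data are assumptions for illustration only.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def preparation_index(c3, c4):
    """
    Crude index of which hand is being prepared, from the two motor-cortex
    electrodes: the left motor cortex (C3) controls the right hand and vice versa.
    Positive values suggest right-hand preparation, negative values left-hand.
    """
    # Motor preparation shows up as relative suppression of 8-30 Hz power over
    # the contralateral motor cortex, so compare band power at C4 versus C3.
    def band_power(x, lo=8.0, hi=30.0):
        f = np.fft.rfftfreq(len(x), 1.0 / FS)
        p = np.abs(np.fft.rfft(x)) ** 2
        return p[(f >= lo) & (f <= hi)].sum()
    return np.log(band_power(c4) / band_power(c3))

# Example with simulated one-second epochs.
rng = np.random.default_rng(0)
c3 = rng.standard_normal(FS) * 0.7   # suppressed left motor cortex ...
c4 = rng.standard_normal(FS)         # ... relative to the right one
side = "right hand (animal)" if preparation_index(c3, c4) > 0 else "left hand (object)"
print("prepared response:", side)
```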

[ALSO READ] Newly discovered ‘sleep node’ in the brain puts you to sleep without sedatives

In the second part of the experiment, the researchers waited until the participants fell asleep in a comfortable reclining chair. While in a state between light sleep and the deeper sleep known as rapid eye movement (REM), the participants heard a new list of words. Of course, their hands couldn't move this time, but their brains showed the same sorting pattern as when they were awake.

“In a way what’s going on is that the rule they learn and practice still is getting applied,” Tristan Bekinschtein, one of the authors of the study, told Shots. The human brain continued, when triggered, to respond even through sleep.

The researchers weren't totally satisfied with these results, so they repeated the experiment, only this time, instead of animals and objects, they exposed participants to real or made-up words. Just as before, sleeping participants showed brain activity indicating they were processing the words and preparing to move the correct hand to signal whether a real or a fake word had been spoken.

[RELATED] Why some people need less sleep than others

“It’s pretty exciting that it’s happening during sleep when we have no idea,” Ken Paller, a cognitive neuroscientist at Northwestern University who is unaffiliated with the study, told Shots. “We knew that words could be processed during sleep.” But, Paller adds, “we didn’t know how much and so this takes it to say, the level of preparing an action.”

So, does this mean you can perform tasks while asleep? The findings suggest our brains are capable of processing instructions for automated tasks, but this doesn't mean you can use shuteye time to memorize verbs or learn a new language. It might be possible, though, that certain tasks begun while awake continue through early sleep, like crunching calculations.

“It’s a terrible thought, in the modern world,” says Bekinschtein, referring to the pride people take in forgoing sleep for work. “I think in a way, these experiments are going to empower people … that we can do things in sleep that are useful.”

Brain-to-brain communication demonstrated for the very first time

Image: CARLES GRAU ET AL., PLOS ONE, 2014

A group of neuroscientists have achieved what some might believe strictly belongs to the realm of science fiction: they've successfully transmitted a message from the brain of one person directly to another; no voice, no video, no sound, no text. The information was fed directly to the brain. If that wasn't enough, the message was transmitted over thousands of miles via the internet, from the brain of one person in India to not one but three people located in France.

What's truly remarkable about this achievement is that it wasn't performed using alien, state-of-the-art technology. The researchers simply made use of neurotechnology software and hardware developed by several labs in recent years, albeit in an extremely clever way.

Messaging at the speed of thought

“We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways,” said one of the team, Harvard’s Alvaro Pascual-Leone in a press release. “One such pathway is, of course, the Internet, so our question became, ‘Could we develop an experiment that would bypass the talking or typing part of internet and establish direct brain-to-brain communication between subjects located far away from each other in India and France?'”

Yes, they could. Here’s how it all works:

  • The 'emitter' wears a modified EEG (electroencephalogram) cap, an electrode-based brain-computer interface (BCI), that effectively interprets and translates her brain's electrical impulses into binary code, a language computers can read. In fact, the researchers used a compact form of binary code called Bacon's cipher.
  • It's important to note that the participant's thoughts aren't relayed directly to other brains through this setup; that would be some form of mental telepathy. Think of it as the emitter relaying the message in Morse code, only the mechanism is neural.
  • So, the emitter has to enter the message in binary string using her thoughts. She does this by using her thoughts to move the white circle on-screen to different corners of the screen. (Upper right corner for “1,” bottom right corner for “0.”) If you find this process familiar, ZME previously reported how this kind of technology has been used to help paralyzed individuals to control computer cursors or robotic arms.
  • The code is then transmitted via the internet to the other participant(s), the receiver, who is fitted with a reverse device, the computer-brain interface (CBI). The computer translates the binary code into electromagnetic pulses sent by a transcranial magnetic stimulation machine, which causes the wearer to see flashes of light in their peripheral vision that aren't actually there. These phantom flashes, called phosphenes, appear in one position for the 1s in the emitter's message and in another position for the 0s.
  • The press release isn't clear about how the receiver decodes the information, but jotting down the 1s and 0s with pen and paper would be effective enough (a toy version of the whole encode/decode loop is sketched after this list).
  • Using this technique, three people in France were able to correctly identify the message: “hola” and “ciao”.
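
The exact letter-to-bit mapping the team used isn't spelled out in the press release, so the Python sketch below is a hypothetical reconstruction of the pipeline using a simplified 26-letter Bacon-style cipher (the historical Bacon cipher merges I/J and U/V; this variant is an assumption for illustration). The emitter side expands the word into five bits per letter; the receiver side turns the recorded stream of phosphene positions back into letters.

```python
import string

# Simplified 26-letter Bacon-style cipher: 'a' -> 00000, 'b' -> 00001, ...
def encode(word):
    return "".join(format(string.ascii_lowercase.index(ch), "05b") for ch in word.lower())

def decode(bits):
    letters = [bits[i:i + 5] for i in range(0, len(bits), 5)]
    return "".join(string.ascii_lowercase[int(b, 2)] for b in letters)

message = encode("hola")   # emitter: the thought-controlled cursor enters these bits one by one
print(message)             # -> 00111011100101100000
# receiver: each bit arrives as a phosphene in one of two positions;
# writing the positions down as 1s and 0s and decoding recovers the word.
print(decode(message))     # -> "hola"
```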

A second, similar experiment was conducted between individuals in Spain and France, with an end result of a total error rate of just 15 percent: 11 percent on the decoding end and 5 percent on the initial coding side.

I mentioned telepathy earlier, and this present process doesn't constitute telepathy per se. It's one heck of a good start in that direction, however. It's not hard to imagine a refined setup where this sort of communication becomes fluid. The phosphenes would need to be relayed extremely precisely (a 15% error rate is too high even for a simple sentence, let alone a complicated message), and the receiver would have to interpret the flashes quickly enough to sustain a fluid conversation. Alternatively, the computer-to-brain interface might become advanced enough to stimulate precise neural networks in the brain and plant a particular thought (words) in the mind. Imagine initiating 'assisted telepathy' and having your mind flooded with someone else's thoughts. If such a thing ever becomes possible (remember, primitive first steps do not guarantee sophisticated goals; the challenges ahead become orders of magnitude harder, like classical mechanics versus quantum physics), it would have immense potential to transform the world. A whole new suite of ethics would need to be devised; the freedom of one's mind is considered the ultimate bastion, after all.

“By using advanced precision neuro-technologies including wireless EEG and robotized TMS, we were able to directly and noninvasively transmit a thought from one person to another, without them having to speak or write,” says Pascual-Leone. “This in itself is a remarkable step in human communication, but being able to do so across a distance of thousands of miles is a critically important proof-of-principle for the development of brain-to-brain communications. We believe these experiments represent an important first step in exploring the feasibility of complementing or bypassing traditional language-based or motor-based communication.”


Findings appeared in the journal PLOS ONE.

Humiliation may be the most intense of human emotions

Photo: worldwidewhiskers.wordpress.com

If you look back, you'll find that some of your most treasured memories are linked to powerful emotions, be they positive or negative. Somehow, negative emotions seem to linger longer in our lives, long after the event that triggered them has passed. Now, research offers tantalizing evidence that the most intense of human emotions may be humiliation.

The rainbow of feelings

Love, hate, happiness, anger, dismay, relief. Our whole lives are influenced and governed by a whole spectrum of emotions; it's what makes us human, after all. Gift and curse, feelings make life worth living, even though at times they can cause terrible pain that makes you wish you were never born. Such is life, yet some feelings are more intense than others. Is there a master emotion dominating all the rest by magnitude, or is everything kept in a delicate balance of negative and positive, action and reaction, yin and yang? If there were such a thing, the feeling of being humiliated might take the emotional crown.

Marte Otten and Kai Jonas, both psychologists, decided to investigate claims that humiliation is a particularly intense, even unique, human emotion with great personal and social consequences. Some humiliating scenes can haunt people all their lives and leave dents in personalities that are hard to mend. In extreme cases, humiliation may be responsible for war and strife. Otten and Jonas knew, like most of us, that humiliation is intense, but they set out to turn this intuition into an objective analysis.

Dissecting humiliation

The researchers performed two separate studies. In the first, they asked participants, both male and female, to read short stories involving different emotions and to imagine how they'd feel in the described scenarios. The first study compared humiliation (e.g. your internet date takes one look at you and walks out), anger (e.g. your roommate has a party and wrecks the room while you're away) and happiness (e.g. you find out a person you fancy likes you). The second study compared humiliation with anger and shame (e.g. you said some harsh words to your mother and she cried).

Throughout the reading and imagination process, all participants had EEG electrodes strapped to their scalps to record their brain activity. Two measures particularly interested the researchers: a larger positive spike (known as the "late positive potential", or LPP), and evidence of "event-related desynchronization", a marker of reduced activity in the alpha frequency range. Both measures are signs of greater cognitive processing and cortical activation.
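
For readers curious what "event-related desynchronization" means in practice, here is a minimal, illustrative Python sketch: alpha-band (8-12 Hz) power after the stimulus is compared with a pre-stimulus baseline, and a drop in that ratio is what gets reported as desynchronization. The sampling rate, window lengths and simulated data are assumptions for the example, not the study's actual parameters.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def alpha_power(segment, fs=FS, band=(8.0, 12.0)):
    """Mean spectral power in the alpha band for one EEG segment."""
    freqs = np.fft.rfftfreq(len(segment), 1.0 / fs)
    power = np.abs(np.fft.rfft(segment)) ** 2
    return power[(freqs >= band[0]) & (freqs <= band[1])].mean()

def event_related_desync(epoch, fs=FS, baseline_s=1.0):
    """Percent change in alpha power after stimulus onset versus the baseline window.
    Negative values indicate desynchronization, i.e. more cortical engagement."""
    split = int(baseline_s * fs)
    base, post = epoch[:split], epoch[split:]
    return 100.0 * (alpha_power(post) - alpha_power(base)) / alpha_power(base)

# Simulated 2-second epoch: strong alpha during baseline, suppressed afterwards.
t = np.arange(0, 1, 1.0 / FS)
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
post = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
print(f"ERD: {event_related_desync(np.concatenate([baseline, post])):.1f}%")
```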

Imagining being humiliated resulted in higher LPPs and more event-related desynchronizations than any other emotion.

“This supports the idea that humiliation is a particularly intense and cognitively demanding negative emotional experience that has far-reaching consequences for individuals and groups alike,” they concluded.

The study tells us that humiliation strains the brain's resources and mobilizes more brain power, but it doesn't tell us why this happens; it describes an effect, not its cause. The researchers have yet to identify the mechanism behind this neural build-up. The study setting itself also wasn't ideal for this kind of evaluation: imagining being humiliated or falling in love doesn't come close to the real thing (and you can't expect to induce genuine feelings of humiliation in a study either). At best, the study lends credence to the idea that humiliation is the most intense emotion, but it's far from settled. Where's all the love?

The findings appeared in the journal Social Neuroscience.

The ‘neurocam’ records your most precious moments – do we need it though?

(c) neuroware

With Google Glass, the search engine giant wants to take social networking and personal video editing a step further by offering the means to record, edit, augment reality and share your point of view in real time. It's very interesting, and I'm guessing Glass is where Dr. Yasue Mitsukura of Keio University, Japan, got the inspiration for her 'neurocam'.

This contraption is a combination of the Mind Wave Mobile headset and a customized brainwave sensor. Basically, the headset has a built-in camera, and the brainwave sensor is designed to read specific emotions, like falling in love or delight at seeing something special, yada, yada. When the particular brain pattern associated with these emotions is detected, the camera switches to record. Can you see the pattern? The device is there to record your most treasured emotions and, of course, memories.

We as individuals, as persons, are the sum of our recollections – no doubt about it. The past, riddled with suffering or joy alike, is what makes us who we are. There are bits and pieces that we forget though, especially with old age, and this is why people love to take pictures or record videos during important life celebration events. Watching these digital memoirs later not only triggers the memory of the event, but also elicits an emotional response.

Going back to the Japanese device: the user's interest is quantified on a scale of 0 to 100. The camera automatically records a five-second clip of the scene, with timestamp and location, whenever the interest value exceeds 60; the clips can be replayed later and shared on Facebook. It's a sort of automatic time capsule. With gear like this, one can only wonder why the heck we need a brain in the first place.
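
The trigger rule itself is simple enough to express in a few lines. The Python sketch below is a hypothetical reconstruction of the logic described above (interest scored 0-100, record a five-second clip with timestamp and location once the score passes 60); the function and field names are invented for illustration and are not the device's actual API.

```python
import time
from dataclasses import dataclass

INTEREST_THRESHOLD = 60  # 0-100 interest score, per the device's description
CLIP_SECONDS = 5

@dataclass
class Clip:
    started_at: float      # Unix timestamp
    duration: int          # seconds
    location: tuple        # (latitude, longitude)

def maybe_record(interest_score, location, camera_start):
    """Start a 5-second clip when the wearer's interest score exceeds the threshold.
    `camera_start` is a stand-in for whatever call actually starts the camera."""
    if interest_score > INTEREST_THRESHOLD:
        camera_start(seconds=CLIP_SECONDS)
        return Clip(started_at=time.time(), duration=CLIP_SECONDS, location=location)
    return None

# Example usage with a dummy camera callback.
clip = maybe_record(interest_score=72, location=(35.68, 139.76),
                    camera_start=lambda seconds: print(f"recording {seconds}s clip"))
print(clip)
```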

Seriously, folks, we've all been there, down the digital memoir lane. Be it at a concert, where thousands of flashing mobile phones are flung into the air to catch that riff, on a date, or even in the supermarket, people nowadays apparently feel the need to keep a digital record of their most important events, and some even of the trivial ones. Mitsukura's invention seems like a logical step if you've been following how technology and social networking have evolved side by side over the past decade. Will it work and catch on with the public? The inventors and investors will most likely be interested in that. Do we actually need it, and would such a device enrich our lives or the contrary? I'd like you, the ZME readers, to weigh in on this last question. Share your comments below, in the discussion section.

Never before seen brain activity in deep coma detected

Coma patients, whether the coma was inflicted by trauma or induced by doctors to preserve bodily functions, have their brain activity regularly monitored using electroencephalography (EEG). In a deep coma, brain activity shows up as a flat-pattern signal, basically minimal to no response, one of the markers that can prompt a determination of brain death. A group of physicians at the University of Montreal, however, have discovered a never-before-seen type of brain activity that kicks in after a patient's EEG goes isoelectric ("flat line").

The discovery was first spurred by the findings of Dr. Bogdan Florea, who was caring for a patient in an extreme, deep hypoxic (oxygen-deprived) coma under powerful anti-epileptic medication typically used to control seizures. Instead of just a flatline, though, Florea also observed some unusual signals; anything that wasn't flat was basically weird at this point. So Florea contacted the University of Montreal team and explained his peculiar situation.

Flat line and Nu-complex signals (credit: Daniel Kroeger et al./PLoS ONE)

After analyzing the patient's records, the Montreal researchers found "that there was cerebral activity, unknown until now, in the patient's brain," said Dr. Florin Amzica. To test whether this was a measuring glitch of some sort, Amzica and his team performed an experiment: they recreated the patient's coma state in cats (a model animal for neurological studies) by administering a higher-than-normal dose of the anesthetic isoflurane. This effectively placed the cats in a deep coma, and the EEG showed the expected flat (isoelectric) line. All seemed normal until, after a while, strong oscillations were observed.

When the researchers traced the signal, they found its origin in the hippocampus, the part of the brain responsible for memory and learning. They concluded that the observed EEG waves, which they called "Nu-complexes," were the same as those seen in the human patient.
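
As an illustration of what "flat" versus "not flat" means on an EEG trace, here is a toy Python sketch: a segment is classed as isoelectric if its peak-to-peak amplitude stays below a small threshold, and segments that exceed it in an otherwise flat recording would be the kind of transients the team labelled Nu-complexes. The threshold, window length and simulated trace are arbitrary choices for the example, not clinical criteria.

```python
import numpy as np

FS = 200                  # sampling rate in Hz (assumed)
FLAT_THRESHOLD_UV = 5.0   # peak-to-peak amplitude below this counts as "flat" (illustrative)

def segment_labels(eeg_uv, fs=FS, window_s=2.0):
    """Label each window of the recording as 'isoelectric' or 'oscillation'."""
    n = int(window_s * fs)
    labels = []
    for start in range(0, len(eeg_uv) - n + 1, n):
        seg = eeg_uv[start:start + n]
        ptp = seg.max() - seg.min()
        labels.append("isoelectric" if ptp < FLAT_THRESHOLD_UV else "oscillation")
    return labels

# Simulated trace: mostly flat, with one burst of slow oscillation in the middle.
t = np.arange(0, 10, 1.0 / FS)
eeg = np.random.randn(t.size) * 0.5
eeg[800:1200] += 20.0 * np.sin(2 * np.pi * 1.5 * t[800:1200])
print(segment_labels(eeg))  # the middle window stands out as 'oscillation'
```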

Besides its peculiar nature, the finding might prove extremely important. For one, there are many cases in which doctors intentionally induce a coma to protect a patient's body and brain. Based on the cat experiment, an even deeper coma, one in which this hippocampal activity persists, might be better suited to that purpose, since it preserves a certain level of brain activity.

“Indeed, an organ or muscle that remains inactive for a long time eventually atrophies. It is plausible that the same applies to a brain kept for an extended period in a state corresponding to a flat EEG,” says Professor Amzica.

“An inactive brain coming out of a prolonged coma may be in worse shape than a brain that has had minimal activity. Research on the effects of extreme deep coma during which the hippocampus is active is absolutely vital for the benefit of patients.”

“As these functions fade at the onset of unconsciousness, the orchestrating powers are relinquished to more basic structures such as the thalamus (in the case of sleep) or the limbic system [per the current data in the experiment],” the researchers said in the paper. “When these structures are released from neocortical influence, they begin to pursue activity patterns on their own and proceed to impose these patterns on other brain regions including the neocortex.”

Findings were reported in the journal PLoS ONE.

[NOW READ] How long can a person remain conscious after being decapitated

Your brain detects grammar errors even when you’re not aware of them

A rather debated theory in psychology says the brain detects grammar errors even when we don't consciously pay attention to them, as if working on autopilot. Now, researchers at the University of Oregon have come up with tangible evidence for this idea after performing a brain scan study.

The team of psychologists, led by postdoctoral researcher Laura Batterink, invited native English-speaking people aged 18-30 to read various sentences, some of which contained grammatical errors, and to signal whether each was correct or not. Throughout the task, the participants' brain activity was recorded using electroencephalography, from which the researchers focused on a signal known as the event-related potential (ERP).
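
For readers unfamiliar with ERPs, here is a minimal, illustrative Python sketch of how one is obtained: short EEG epochs are cut out around each stimulus and averaged, so that random background activity cancels out and the stereotyped, event-locked response remains. The sampling rate, window and simulated "violation" deflection below are assumptions for the example, not the study's parameters.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def erp(eeg, onsets, fs=FS, window_s=0.6):
    """Average EEG epochs time-locked to stimulus onsets (given in samples).
    Averaging cancels background activity and leaves the event-related potential."""
    n = int(window_s * fs)
    epochs = np.stack([eeg[i:i + n] for i in onsets if i + n <= len(eeg)])
    return epochs.mean(axis=0)

# Simulated recording: noise plus a small negativity ~200-300 ms after each "violation".
rng = np.random.default_rng(1)
eeg = rng.standard_normal(60 * FS)
onsets = np.arange(FS, 59 * FS, 2 * FS)       # one event every 2 seconds
for i in onsets:
    eeg[i + 50:i + 75] -= 2.0                 # 200-300 ms post-onset negativity
waveform = erp(eeg, onsets)
print("mean amplitude 200-300 ms:", round(waveform[50:75].mean(), 2))  # clearly negative
```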

Subjects were given 280 experimental sentences, including some that were syntactically (grammatically) correct and others containing grammatical errors, such as "We drank Lisa's brandy by the fire in the lobby," or "We drank Lisa's by brandy the fire in the lobby." In order to create a distraction and make participants less aware, a 50-millisecond audio tone was also played at some point in each sentence; a tone appeared either before or after a grammatical faux pas was presented. The auditory distraction also appeared in grammatically correct sentences.

“Participants had to respond to the tone as quickly as they could, indicating if its pitch was low, medium or high,” Batterink said. “The grammatical violations were fully visible to participants, but because they had to complete this extra task, they were often not consciously aware of the violations. They would read the sentence and have to indicate if it was correct or incorrect. If the tone was played immediately before the grammatical violation, they were more likely to say the sentence was correct even if it wasn’t.”

Your brain: a grammar nazi

The researchers found that when the tones appeared after grammatical errors, subjects detected 89 percent of the errors, but when the tones appeared before the grammatical errors, subjects detected only 51 percent of them. Clearly, the tone disrupted the participants' attention. Even so, while the participants weren't consciously aware of the grammar errors, their brains still picked them up, generating an early negative ERP response. These undetected errors also delayed participants' reaction times to the tones.

[RELATED] Humans think more rationally in a foreign language

“Even when you don’t pick up on a syntactic error your brain is still picking up on it,” Batterink said. “There is a brain mechanism recognizing it and reacting to it, processing it unconsciously so you understand it properly.”

“While other aspects of language, such as semantics and phonology, can also be processed implicitly, the present data represent the first direct evidence that implicit mechanisms also play a role in the processing of syntax, the core computational component of language.”

These findings might warrant changes in the way adults learn new languages. Children, for instance, learn to speak a language, and implicitly pick up its grammatical structure, simply through routine daily interactions with parents or peers, hearing and processing new words and their usage before any formal instruction.

“Teach grammatical rules implicitly, without any semantics at all, like with jabberwocky. Get them to listen to jabberwocky, like a child does,” said co-author Helen Neville, referring to “Jabberwocky,” the nonsense poem introduced by writer Lewis Carroll in 1871 in “Through the Looking Glass,” where Alice discovers a book in an unrecognizable language that turns out to be written inversely and readable in a mirror.

The findings were detailed in the Journal of Neuroscience.

Vegetative patients can now communicate with the outside world through fMRI and EEG

As amazing as it sounds, communicating with a person in a vegetative state is no longer something we see only in sci-fi movies; it is beginning to become a reality.

A vegetative state occurs when patients come out of a coma and wake up, but only with their bodies, not their minds. While they are able to breathe on their own and exhibit some reflexive behaviors, they are thought to have no awareness whatsoever: no thoughts or emotions. At least, that's what is currently believed to be true of people in this condition.

An fMRI machine. [Via singularityhub.com]

Recent studies using EEG or fMRI have led some scientists to conclude that awareness was detectable in some of the patients they studied. Even more amazing, the doctors succeeded in establishing a form of communication with these people, showing how they can answer yes-or-no questions.

Patient having an fMRI scan performed. (c) BBC.co.uk

Prof. Adrian Owen, a neuroscientist at the Brain and Mind Institute at the University of Western Ontario, used fMRI to read the brain activity of several patients in this condition. How did he manage to instruct them to communicate back?

Through technologies such as fMRI, scientists are able to distinguish between different types of thoughts. In these studies, they exploited the ability to tell spatial-navigation imagery apart from body-movement imagery. For the first type of activity, the patients were instructed to imagine travelling through the streets of a familiar city or through their home; for the second, they were told to imagine playing tennis and hitting the ball back to an instructor.

[RELATED] Science brings mind-reading tech a step closer 

By telling them to assign the first type of activity to a "no" answer and the second type to a "yes", the researchers enabled the patients to answer questions, establishing a basic form of communication. This is an incredible breakthrough for people in this condition and for their loved ones.
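
In software terms, the communication scheme reduces to a tiny decision rule: classify the imagery the scanner picks up, then map one imagery type to "yes" and the other to "no". The Python sketch below is a deliberately simplified, hypothetical illustration; the hard part, the fMRI classifier itself, is abstracted away as a function that merely reports which imagery pattern was detected.

```python
# Hypothetical mapping used for illustration: motor imagery ("playing tennis")
# means "yes"; spatial imagery ("walking through your home") means "no".
ANSWER_MAP = {"motor_imagery": "yes", "spatial_imagery": "no"}

def ask(question, classify_scan):
    """`classify_scan` stands in for an fMRI classifier that returns
    'motor_imagery', 'spatial_imagery', or None when neither pattern is detected."""
    imagery = classify_scan(question)
    return ANSWER_MAP.get(imagery, "no reliable answer")

# Example with a stubbed-out classifier that always detects motor imagery.
fake_classifier = lambda question: "motor_imagery"
print(ask("Are you in any pain?", fake_classifier))  # -> "yes"
```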

Besides fMRI, Dr. Owen managed to use the same communication strategy with EEG, a technology that is much cheaper and easier to use than fMRI, potentially enabling wide-scale use of this method.

Undergoing an EEG scan. (c) BBC.co.uk

It's true that this is not a miracle method yet; only 5 of the 54 patients who participated in this study were able to willfully modulate their brain activity, at least as far as the fMRI could detect, but it still opens up an amazing opportunity.

Professor Julian Savulescu, the director of the Oxford Centre for Neuroethics stated that “This important scientific study raises more ethical questions than it answers. People who are deeply unconscious don’t suffer.

“But are these patients suffering? How bad is their life? Do they want to continue in that state? If they could express a desire, should it be respected?

“The important ethical question is not: are they conscious? It is: in what way are they conscious? Ethically, we need answers to that.”

[Via singularityhub.com, and BBC News]

Dolphins can stay awake for 15 days straight

After staying awake for many hours or days at a time, humans and other mammals alike are forced to sleep, not because the body demands it, but because the brain inevitably calls for a shutdown of the conscious psyche in order to replenish itself and function properly when awake. Dolphins, however, have been found to have a remarkable resistance to sleep deprivation, as scientists discovered that they're capable of staying alert for 15 days straight. How is this possible? Dolphins sleep with only half their brain at a time, regularly switching sides.

“After being awake for many hours or days, humans and other animals are forced to stop all activity and sleep,” said researcher Brian Branstetter, a marine biologist at the National Marine Mammal Foundation in San Diego. “Dolphins do not have this restriction, and if they did, they would probably drown or become easy prey.”

Dolphins use a sort of built-in sonar to map their environment, navigating murky waters, finding their peers and identifying threats by emitting their famous "clicking" sounds and probing the returning echoes. Scientists tested this ability over a long stretch of time to see whether the dolphins would keep responding. They immersed an underwater sound projector and microphone that responded with sounds mimicking the echoes of dolphin clicks, essentially acting as phantom targets.

(c) Brian Branstetter

The biologists at the National Marine Mammal Foundation in San Diego trained two dolphins, a female, Say, and a male, Nay, to press a paddle that dispensed food whenever they detected the phantom targets; the dolphins often squealed in victory when they succeeded. The researchers found that the dolphins could use their echolocation with great accuracy, with no sign of deterioration, for up to 15 days. The period could very likely have been longer, since the biologists restricted their observations to this time frame.

The findings suggest that dolphins evolved this ability to avoid drowning, but most importantly to remain vigilant against predators such as sharks. Upcoming research plans include scanning dolphin brains for electrical activity via electroencephalogram (EEG) to better assess how long dolphins can stay alert.

“Research with freely moving humans who wear portable EEG equipment has been conducted; training a dolphin to wear a similar portable EEG backpack that is capable of withstanding and functioning in an ocean environment presents much greater challenges,” Branstetter said. “However, these hurdles are not insurmountable. Also, we are interested in investigating if dolphins can perform more complex cognitive tasks without rest, like problem-solving or understanding an artificial language.”

Findings were detailed in the journal PLoS One.

DARPA’s new threat detection system: one 120-megapixel camera + one supercomputer + one EEG strapped soldier

Boy, oh boy. Here's a run for your dollar: DARPA's latest threat detection system seems like it's been lifted from a bad war movie, but crazy as it may sound, it works, and very well, according to officials.

The system, called the Cognitive Technology Threat Warning System (CT2WS), consists of an extremely high-resolution, 120-megapixel camera that captures its surroundings. The images are fed to a supercomputer running cognitive visual-processing algorithms on the lookout for threats such as a sniper scope or a camouflaged tank nozzle. The output is then presented on a display to a soldier tasked with confirming these threats. The soldier, however, has an EEG (electroencephalogram) cap strapped to his scalp.

Not your ordinary video game. (c) DARPA

As the soldier's brain flags or rules out threats, the corresponding signals are registered by the EEG and then processed. With enough data to make it statistically viable, the system should eventually be able to detect threats accurately on its own. Spotting threats is tiresome work, but with such a system built into a scout helicopter or directly into the headset display of a foot soldier, it could all be interfaced terminator-style.

“DARPA set out to solve a common challenge for forward troops: how can you reliably detect potential threats and targets of interest without making it a resource drain?” said Gill Pratt, DARPA program manager.  “The prototype system has demonstrated an extremely low false alarm rate, a detection rate in the low nineties, all while reducing the load on the operator.”

The whole system works around the brain's P300 response, a signal triggered when your brain recognizes something important. This can be a face, a football or a threat; it doesn't really matter. Your brain is wired to pick out familiar features, especially when they're out of place in the scenery. No computer can recognize patterns, spatial ones especially, the way the human brain can, and by correlating the data gathered with the human in the loop, the system keeps learning, becoming smarter and smarter.
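
A rough way to picture the human-in-the-loop filtering is as a triage function: the computer-vision stage proposes candidate frames, each is flashed to the operator, and frames that evoke a P300-like positivity in the EEG roughly 300 ms after onset are promoted to likely threats. The Python sketch below is a schematic illustration with made-up thresholds and simulated signals, not DARPA's actual algorithm.

```python
import numpy as np

FS = 250                   # EEG sampling rate in Hz (assumed)
P300_WINDOW = (0.25, 0.5)  # seconds after image onset where the P300 is expected
P300_THRESHOLD_UV = 4.0    # mean amplitude above this counts as a "hit" (illustrative)

def is_threat(epoch_uv, fs=FS):
    """Return True if the post-stimulus EEG epoch shows a P300-like positivity."""
    lo, hi = int(P300_WINDOW[0] * fs), int(P300_WINDOW[1] * fs)
    return epoch_uv[lo:hi].mean() > P300_THRESHOLD_UV

def triage(candidate_frames, eeg_epochs):
    """Keep only the computer-vision candidates whose flash evoked a P300."""
    return [frame for frame, epoch in zip(candidate_frames, eeg_epochs) if is_threat(epoch)]

# Example: two candidate frames, only the first evokes a P300-like response.
t = np.arange(0, 0.8, 1.0 / FS)
hit = np.where((t > 0.25) & (t < 0.5), 8.0, 0.0) + np.random.randn(t.size)
miss = np.random.randn(t.size)
print(triage(["frame_017", "frame_042"], [hit, miss]))  # -> ['frame_017']
```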

In tests so far, the system generated 810 false alarms per hour. That may seem like a lot, but according to DARPA, the human operator can handle the 10 images per second fed to them by the CT2WS display. The overall accuracy of the system is 91%, and it is expected to improve as it moves past the prototype phase.
