Tag Archives: neuroscience

Gut bacteriophages associated with improved cognitive function and memory in both animals and humans

A growing body of evidence has implicated gut bacteria in regulating neurological processes such as neurodegeneration and cognition. Now, a study from Spanish researchers shows that viruses present in the gut microbiota are also associated with improved mental function in flies, mice, and humans.

Credit: CDC.

Viruses assimilate easily into their human hosts — 8% of our DNA consists of ancient viral sequences, and another 40% contains genetic code thought to be viral in origin. As it stands, the gut virome (the combined genome of all viruses housed within the intestines) is a crucial but commonly overlooked component of the gut microbiome.

But we’re not entirely sure what it does.

This viral community consists chiefly of bacteriophages, viruses that infect bacteria and can transfer genetic code to their bacterial hosts. Remarkably, the integration of bacteriophages, or phages, into their hosts is so stable that over 80% of all bacterial genomes on Earth now contain prophages, permanent phage DNA carried as part of the bacterium’s own genome — including the bacteria inside us humans. Now, researchers are inching closer to understanding the effects of this phenomenon.

Gut and brain

In their paper published in the journal Cell Host & Microbe, a multi-institutional team of scientists describes the impact of phages on executive function, a set of cognitive processes and skills that help an individual plan, monitor, and successfully execute their goals. These fundamental skills include adaptable thinking, planning, self-monitoring, self-control, working memory, time management, and organization, which are thought to be regulated, in part, by the gut microbiota.

The study focuses on the Caudovirales and Microviridae, the two bacteriophage groups that dominate the human gut virome, together containing over 2,800 species of phages.

“The complex bacteriophage communities represent one of the biggest gaps in our understanding of the human microbiome. In fact, most studies have focused on the dysbiotic process only in bacterial populations,” write the authors of the new study.

Specifically, the scientists showed that volunteers with increased Caudovirales levels in the gut microbiome performed better in executive processes and verbal memory. In comparison, the data showed that increased Microviridae levels impaired executive abilities. Simply put, there seems to be an association between this type of gut biome and higher cognitive functions.

Levels of these two prevalent bacteriophage groups track with human host cognition, the researchers write, and the phages may exert their effect by hijacking their bacterial hosts’ metabolism.

To reach this conclusion, the researchers first tested fecal samples from 114 volunteers and then validated the results in another 942 participants, measuring levels of both types of bacteriophage. They also gave each volunteer memory and cognitive tests to identify a possible correlation between the levels of each species present in the gut virome and skill levels.
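The analysis behind this is, at its core, a correlation between per-participant phage abundance and cognitive test scores. As a rough sketch of that study design (using entirely synthetic data and a hypothetical cohort, not the study’s own), a Spearman rank correlation can be computed with plain NumPy:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of rank-transformed data.

    No tie handling — fine for continuous synthetic data like this.
    """
    rx = np.argsort(np.argsort(x))  # ranks of x
    ry = np.argsort(np.argsort(y))  # ranks of y
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical cohort of 114 subjects: Caudovirales abundance is constructed
# to loosely track a memory test score, plus noise.
rng = np.random.default_rng(42)
abundance = rng.random(114)                          # relative phage abundance
memory_score = abundance + 0.3 * rng.standard_normal(114)

rho = spearman_rho(abundance, memory_score)
```

A positive `rho` here would mirror the reported association for Caudovirales; in the real study, such a correlation would of course need confounder control and a validation cohort, as the authors did with their second group of 942 participants.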

The researchers then studied which foods may transport these two kinds of phage into the human gut; the results indicated that the most common route appeared to be through dairy products.

They then transplanted fecal samples from the human volunteers into the guts of fruit flies and mice, after which they compared the animals’ executive function with control groups. As with the human participants, animals transplanted with high levels of Caudovirales tended to do better on the tests – showing increased scores in object recognition in mice and up-regulated memory-promoting genes in the prefrontal cortex. Improved memory scores and upregulation of memory-involved genes were also observed in fruit flies harboring higher levels of these phages.

Conversely, higher Microviridae levels (correlated with increased fat levels in humans) downregulated these memory-promoting genes in all animals, stunting their performance in the cognition tests. Therefore, the group surmised that bacteriophages warrant consideration as a novel dietary intervention in the microbiome-brain axis.

Regarding this intervention, Arthur C. Ouwehand, Technical Fellow, Health and Nutrition Sciences, DuPont, who was not involved in the study, told Metafact.io:

“Most dietary fibres are one way or another fermentable and provide an energy source for the intestinal microbiota,” he said, leading “to the formation of beneficial metabolites such as acetic, propionic and butyric acid.”

He goes on to add that “These so-called short-chain fatty acids may also lower the pH of the colonic content, which may contribute to an increased absorption of certain minerals such as calcium and magnesium from the colon. The fibre fermenting members of the colonic microbiota are in general considered beneficial while the protein fermenting members are considered potentially detrimental.”

It would certainly be interesting to identify which foods are acting on bacteriophages contained within our gut bacteria to influence cognition.

Still, the researchers acknowledge that their work does not conclusively prove that gut phages impact cognition; the test scores could instead have resulted from differing bacterial levels in the gut, although they suggest a phage effect seems likely. They close by stating that more work is required to prove the case.

Brain scans are saving convicted murderers from death row–but should they?

Over a decade ago, a brain-mapping technique known as quantitative electroencephalography (qEEG) was first used in a death penalty case, helping keep a convicted killer and serial child rapist off death row. It achieved this by convincing jurors that traumatic brain injury (TBI) had left him prone to impulsive violence.

In the years since, qEEG has remained in a strange stasis, inconsistently accepted in a small number of death penalty cases in the USA. In some trials, prosecutors fought it as junk science; in others, they raised no objections to the imaging, producing a case history built on sand. Still, this handful of test cases could signal a new era in which the legal execution of humans becomes outlawed through science.

Quantifying criminal behavior to prevent it

As it stands, if science cannot quantify or explain every event or action in the universe, then we remain in chaos with the very fabric of life teetering on nothing but conjecture. But DNA evidentiary status aside, isn’t this what happens in a criminal court case? So why is it so hard to integrate verified neuroimaging into legal cases? Of course, one could make a solid argument that it would be easier to simply do away with barbaric death penalties and concentrate on stopping these awful crimes from occurring in the first instance, but this is a different debate.

The problem is more complex than it seems. Neuroimaging could be used not just to exempt the mentally ill from the death penalty but also to explain horrendous crimes to the victims or their families. And just as crucial, could governments start implementing measures to prevent this type of criminal behavior using electrotherapy or counseling to ‘rectify’ abnormal brain patterns? This could lead down some very slippery slopes.

And it’s not just death row cases in which qEEG is being questioned — nearly every injury lawsuit in the USA now includes a TBI claim. With magnetic resonance imaging (MRI) and computed tomography (CT) scans generally expensive, lawyers are constantly seeking new ways to prove brain dysfunction. Readers should note that both of these neuroimaging techniques are viewed as more accurate than qEEG, but they can only provide a single, static image of the neurological condition – and thus no direct measurement of functional, ongoing brain activity.

In contrast, the cheaper and quicker qEEG testing purports to continuously monitor active brain function to diagnose many neurological conditions, and could one day flag those more inclined to violence, enabling early interventional therapy sessions and one-to-one help focused on preventing the problem.

But until we reach that sort of societal level, defense and human rights lawyers have been attempting to slowly phase out legal executions by using brain mapping to explain why their convicted clients may have committed these crimes – gradually moving from dealing with the consequences of mental illness and disorders toward understanding these conditions better.

The sad case of Nikolas Cruz

But the questions surrounding this technology will soon be on trial again in the most high-profile death penalty case in decades: Florida vs. Nikolas Cruz. On the afternoon of February 14, 2018, Cruz, then just 19 years of age, opened fire on school children and staff at Marjory Stoneman Douglas High in Parkland. In what is now classed as the deadliest school shooting in the country’s history, the state charged the former Stoneman Douglas student with the premeditated murder of 17 schoolchildren and staff and the attempted murder of 17 more.

With the sentencing expected in April 2022, Cruz’s defense lawyers have enlisted qEEG experts as part of their case to persuade jurors that brain defects should spare him the death penalty. The Broward State Attorney’s Office signaled in a court filing last month that it will challenge the technology and ask a judge to exclude the test results—not yet made public—from the case.

Cruz has already pleaded guilty to all charges, but a jury will now debate whether to hand down the death penalty or life in prison.

According to a court document filed recently, Cruz’s defense team intends to ask the jury to consider mitigating factors. These include his tumultuous family life, a long history of mental health disorders, brain damage caused by his mother’s drug addiction, and claims that a trusted peer sexually abused him—all expected to be verified using qEEG.

After reading the flurry of news reports on the upcoming case, one can’t help but wonder why, even without the use of qEEG, someone with a record of mental health issues at only 19 years old should be on death row. And since authorities and medical professionals were aware of Cruz’s problems, what were the preventative failings that led to him murdering seventeen individuals? Have these even been addressed or corrected? Unlikely.

On a positive note, prosecutors in several US counties have not opposed brain-mapping testimony in recent years. According to Dr. David Ross, CEO of NeuroPAs Global and a qEEG expert, the reason is that more scientific papers and research have validated the test’s reliability over the years, helping the technique gain broader use in the diagnosis and treatment of cognitive disorders, even though courts are still debating its effectiveness. “It’s hard to argue it’s not a scientifically valid tool to explore brain function,” Ross stated in an interview with the Miami Herald.

What exactly is a quantitative electroencephalogram (qEEG)?

To explain what a qEEG is, you must first know what an electroencephalogram (EEG) does. An EEG records the electrical potential difference between pairs of electrodes placed on the scalp, providing the analog data on which computerized qEEG analysis is built. Multiple electrodes (generally more than 20) are connected in pairs to form various patterns called montages, resulting in a series of paired channels of EEG activity. The results appear as squiggly lines on paper — brain wave patterns that clinicians have used for decades to detect evidence of neurological problems.

More recently, trained professionals have computerized this data to create qEEG, translating raw EEG data with mathematical algorithms to analyze brainwave frequencies. Clinicians then compare this statistical analysis against a database of neurotypical brains to discern abnormal brain function, which, in death row cases, is offered as an explanation for criminal behavior.
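To make that “mathematical algorithms” step concrete, here is a minimal, hypothetical sketch of the core qEEG computation: estimate the power in the classic EEG frequency bands from a raw signal, convert it to relative power, and z-score it against a normative database. The normative means and standard deviations below are invented placeholders, not values from any real clinical database.

```python
import numpy as np

def band_power(signal, fs, band):
    """Power of `signal` within a frequency `band` (Hz), via a simple FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()

def qeeg_zscores(signal, fs, norm_mean, norm_std):
    """Relative power per classic EEG band, z-scored against a normative database."""
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    powers = {name: band_power(signal, fs, b) for name, b in bands.items()}
    total = sum(powers.values())
    rel = {name: p / total for name, p in powers.items()}
    return {name: (rel[name] - norm_mean[name]) / norm_std[name] for name in bands}

# Synthetic one-second recording: a strong 10 Hz (alpha) rhythm plus a little noise.
fs = 256
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(fs)

# Placeholder normative database: equal expected share per band.
norm_mean = {"delta": 0.25, "theta": 0.25, "alpha": 0.25, "beta": 0.25}
norm_std = {b: 0.05 for b in norm_mean}
z = qeeg_zscores(signal, fs, norm_mean, norm_std)
```

In this toy example the synthetic alpha rhythm dominates, so its z-score comes out far above the normative mean; a real qEEG pipeline would also handle artifact rejection, montage referencing, and age-matched norms, which is exactly where the disputes described below arise.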

While this can work, results can still go awry due to incorrect electrode placement, recording artifacts, inadequate band filtering, drowsiness, comparisons against inappropriate control databases, and the choice of timeframes. Furthermore, processing can yield a large amount of clinically irrelevant data. These are some of the reasons the usefulness of qEEG remains controversial despite the volume of published research. However, many of these discrepancies can be corrected simply by having trained medical professionals operate the apparatus and interpret the data.

Just one case is disrupting the use of this novel technology

Yet, despite this easy correction, qEEG is not generally accepted by the relevant scientific community for diagnosing traumatic brain injuries and is therefore inadmissible under Frye v. United States, an archaic case from 1923 concerning a polygraph test. That trial came a mere 17 years after Cajal and Golgi won a Nobel Prize for producing slides and hand-drawn pictures of neurons in the brain.

Experts could also argue that a lie detector test (measuring blood pressure, pulse, respiration, and skin conductivity) is far removed from a machine monitoring brain activity. Furthermore, when the Court of Appeals of the District of Columbia decided on this lawsuit, qEEG didn’t exist. 

Applying the Frye standard, courts throughout the country have excluded qEEG evidence in the context of alleged brain trauma. For example, the Florida Supreme Court has formally noted that, for the purposes of Frye, the relevant scientific community holds that “qEEG is not a reliable method for determining brain damage and is not widely accepted by those who diagnose a neurologic disease or brain damage.”

However, in a seminal paper covering the use of qEEG in cognitive disorders, the American Academy of Neurology (AAN) concluded overall that computer-assisted diagnosis using qEEG is an accurate, inexpensive, easy-to-handle tool and a valuable aid for diagnosing, evaluating, following up, and predicting response to therapy — despite the organization’s opposition to the technology in court. The paper also features other neurological associations validating the use of this technology.

The introduction of qEEG on death row was not that long ago

The technology was first deemed admissible in court during the death-penalty prosecution of Grady Nelson in 2010. Nelson stabbed his wife 61 times, then raped and stabbed her 11-year-old intellectually disabled daughter and her 9-year-old son. The woman died, while her children survived. Documents state that Nelson’s wife had found out he had been sexually abusing both children for many years and sought to keep them away from him.

Nelson’s defense argued that earlier brain damage had left him prone to impulsive behavior and violence. Prosecutors fought to strike the qEEG test from evidence, contending that the science was unproven and misused in this case.

“It was a lot of hocus pocus and bells and whistles, and it amounted to nothing,” the prosecutor on the case, Abbe Rifkin, stated. “When you look at the facts of the case, there was nothing impulsive about this murder.”

However, after hearing the testimony of Dr. Robert W. Thatcher, a multi-award-winning pioneer in qEEG analysis testifying for the defense, Judge Hogan-Scola found that qEEG met the legal prerequisites for reliability. She based this on the Frye and Daubert standards, the two landmark tests governing the admissibility of scientific evidence.

She allowed jurors to hear the qEEG report and even permitted Thatcher to present a computer slide show of Nelson’s brain, with an explanation of the effects of frontal lobe damage, at the sentencing phase. He testified that Nelson exhibited “sharp waves” in this region, typically seen in people with epilepsy, explaining that while Nelson doesn’t have epilepsy, he does have a history of at least three TBIs, which could explain the abnormality seen in the EEG.

Interpreting the data, Thatcher also told the court that the frontal lobes, located directly behind the forehead, regulate behavior. “When the frontal lobes are damaged, people have difficulty suppressing actions … and don’t understand the consequences of their actions,” Thatcher told ScienceInsider.

Jurors rejected the death penalty. Two jurors who agreed to be interviewed by a major national publication later categorically stated that the qEEG imaging and testimony influenced their decision.

“The moment this crime occurred, Grady had a broken brain,” his defense attorney, Terry Lenamon, said. “I think this is a huge step forward in explaining why people are broken—not excusing it. This is going to go a long way in mitigating death penalty sentences.”

On the other hand, Charles Epstein, a neurologist at Emory University in Atlanta who testified for the prosecution, states that the qEEG data Thatcher presented relied on flawed statistical analysis and was riddled with artifacts not naturally present in EEG recordings. Epstein adds that the sharp waves Thatcher reported may have been blips caused by the contraction of muscles in the head. “I treat people with head trauma all the time,” he says. “I never see this in people with head trauma.”

You can see Epstein’s point, as it’s unclear whether these brain injuries occurred before or after Nelson brutally raped a 7-year-old girl in 1991, after which he was granted probation and trained as a social worker.

All of which invokes the following questions. Firstly, do we need qEEG to state that this person’s behavior is abnormal, or that the legal system does not protect children? And secondly, was the reaction of the authorities in the 1991 case appropriate, let alone preventative?

As mass shootings and other forms of extreme violence remain at relatively high levels in the United States – committed by ever-younger perpetrators flagged as loners and fantasists by the state mental healthcare systems they disappear into – it’s evident that sturdier preventative programs need to be implemented by governments worldwide. The worst has already occurred; our children are unprotected against dangerous predators and unaided when affected by unstable and abusive environments, inappropriate social media, and TV.

A potential beacon of hope, qEEG is already beginning to highlight the country’s broken socio-legal systems and the amount of work it will take to fix them. It attempts to humanize a fractured court system that still disposes of the products of trauma and abuse as if they were nothing but waste, forcing the authorities to answer for their failings – and any science that can do this can’t be a bad thing.

Demystifying nootropics – Is cognitive enhancement even a thing?

Whether you’re a college student hoping to improve your grades, a professional wanting to achieve more at work, or an older adult hoping to stave off dementia, the idea of popping a magic pill that boosts your brainpower can be tempting. So it’s no surprise that the use of nootropics or smart drugs is on the rise globally. But do they work? And more importantly, are they safe? In a sea of supplements and marketing blurb, what’s the real story behind these supposed cognitive enhancers? Let’s have a look at some of these questions.

Nootropics are prescription drugs, supplements, or natural substances that claim to boost cognitive functions such as memory, creativity, or motivation. Cognitive enhancement, meanwhile, refers to the use (or abuse) of such smart drugs by healthy people exhibiting no neurological deficiency. In other words, more often than not, ‘smart drugs’ are prescription medications used off-label for non-medical purposes. Despite this unsettling fact, their use continues to rise globally.

First proposed in the 1960s by Romanian chemist Corneliu E. Giurgea, the concept of nootropics comes with a list of criteria. A true nootropic should:

1. Aid improvement in working memory and learning.

2. Support brain function under hypoxic conditions or after electroconvulsive therapy.

3. Protect the brain from physical or chemical toxicity.

4. Enhance natural cognitive functions.

5. Be non-toxic to humans, without causing depression or stimulation of the brain.

The criteria above may suggest that cognitive enhancers are purely lab-made; however, they’re also present in everyday foodstuffs and beverages. Caffeine, for example, is a natural nootropic and the most widely consumed psychoactive substance worldwide. Found in coffee, cocoa, tea, and certain nuts, an intake of one or two cups of coffee a day has been shown in clinical trials to increase alertness and decrease reaction time, albeit very gently. And while caffeine was once considered risky, many experts now agree that the natural caffeine present in foodstuffs is more beneficial than harmful when consumed in moderation.

Due to the sheer volume of false advertising surrounding nootropics, the first thing to check is whether a cognitive enhancer is backed by science — the best way to do this is to see whether it has gone through clinical or human trials. A prime example is caffeine, whose cognitive effects have been thoroughly tested in humans by various academic institutions. To date, caffeine consumption has been shown to increase intracellular messengers, prolong adrenaline activity, and move calcium into cells. Collectively, these mechanisms provide neuroprotection and increase heart rate, vascular tone, blood pressure, and bronchodilation. Human trials have also indicated that caffeine improves vigilance and attention without affecting memory or mood.

Eggs are another proven brain food that has been through clinical trials: they are rich in choline, a substance key to the production of acetylcholine, which is instrumental in many bodily functions, from achieving deep sleep to retaining new memories. Frequent egg consumption is associated with higher cognitive performance, too, particularly among the elderly. However, as with synthetic nootropics, too much of these foods can have adverse consequences; higher doses of caffeine, for instance, cause jittery, anxious feelings. Nevertheless, you’ll be pleased to hear there is no official daily limit on the number of eggs a person can eat, as long as they don’t add saturated fat or too much salt to them.

Another well-trialed natural nootropic is the ancient herb Ginkgo biloba – both human and animal models have elucidated the herb’s neuroprotective effects. As a result, ginkgo has been studied repeatedly as a treatment for Alzheimer’s disease due to its antioxidant and antiapoptotic properties. Numerous studies have also cited its safety in humans with cognitive impairment, where the nootropic inhibited caspase-3 activation and amyloid-β aggregation in Alzheimer’s disease. The list of human studies reporting benefits of Ginkgo biloba in healthy volunteers is extensive, with no safety issues noted. However, as with other cognitive enhancers, contrasting studies contradict these positive findings, suggesting that future trials should employ neuroimaging.

The most salient factor to note here is that all of the above nootropics have been proven in human or clinical studies, something severely lacking for the majority of cognitive enhancers on the market today. A simple search of the PubMed database will tell you which nootropics have been trialed in humans and list any safety issues. Another excellent way to navigate the minefield of false advertising by some nootropics manufacturers is to stick to established brands.
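As a concrete illustration of such a search, NCBI’s public E-utilities API lets you query PubMed programmatically. The sketch below only builds the request URL (no network call is made), and the example search term is a hypothetical query, not one endorsed by any study:

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public, no key required for light use).
EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term, max_results=20):
    """Build an E-utilities `esearch` URL for a PubMed query."""
    params = {
        "db": "pubmed",          # search the PubMed database
        "term": term,            # query string, PubMed field tags allowed
        "retmax": max_results,   # number of IDs to return
        "retmode": "json",       # JSON response instead of XML
    }
    return f"{EUTILS_BASE}?{urlencode(params)}"

# e.g. human clinical trials of ginkgo and cognition ("[pt]" = publication type)
url = pubmed_search_url('ginkgo biloba AND cognition AND "clinical trial"[pt]')
```

Fetching that URL returns a JSON list of matching PubMed IDs, which can then be passed to the companion `efetch` endpoint for abstracts; for casual checking, pasting the same query into the PubMed website works just as well.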

Similarly, it’s crucial to check whether mixing nootropics with alcohol or other drugs is safe. Firstly, always consult a medical professional before mixing drugs or alcohol with prescription medicine. Secondly, over-the-counter (OTC) medication bought in pharmacies should come with safety leaflets advising whether it is safe to take with other medications, supplements, or alcohol. Unfortunately, not all OTC remedies contain safety information, as they are mostly unregulated. And while there are many papers on the use of caffeine with alcohol, most OTC nootropics haven’t been tested with other drugs. Experts advise that if you begin to mix or stack OTC medicines and start to feel ill, you should stop your drug regime and see a medical professional right away – this includes the stacking of nootropics.

I’m confused. Just how many types of nootropics are there?!

With a tsunami of potions and powders on the market, it can be challenging to take brain boosters responsibly. The first thing to know is that nootropics can be either synthetic, manufactured like prescription drugs, or natural, occurring in plants and food. Likewise, dietary supplements and OTC drugs can contain both natural and synthesized products, while prescription drugs are purely synthetic in structure.

Synthetic nootropics are composed of artificial chemicals rather than natural ingredients, typically synthesized to mimic natural neurotransmitters. Caffeine, for instance, is found naturally in coffee beans but is also synthesized for bulk manufacturing. The synthetic version, found in many energy drinks, is absorbed by the body more rapidly than its natural counterpart and causes significantly more side effects; in other words, natural caffeine is far gentler on the human body than the synthetic version.

Notably, the only nootropics proven to make an immediate, marked difference in cognition are prescription drugs. Specifically, drugs designed for Attention Deficit Hyperactivity Disorder (ADHD), such as Adderall and Ritalin, as well as the anti-narcoleptic modafinil, show demonstrable effects on concentration, attention, and alertness. Yet even though their value as cognitive enhancers for healthy people is questionable, their off-label use is still on the rise, despite numerous health risks including dependence, tolerance, and cardiovascular, neurologic, and psychological disorders.

Prescription nootropics primarily consist of stimulants such as methylphenidate, amphetamine, and dextroamphetamine, designed to counteract ADHD. And although these work well for many people with the condition, they aren’t proven safe for healthy people who want to improve their focus and attention. Many college students acquire this medication illicitly, and while it appears to help in the short term, there are dangerous risks.

Yet, modafinil, a novel stimulant FDA-approved to treat narcolepsy, sleep apnea, and shift work disorder, has several remarkable features distinguishing it from other medications. Unlike amphetamines, for example, modafinil is reported to have minimal side effects at the correct therapeutic doses. It also appears to have low abuse potential, with some studies suggesting that it may help with learning and memory in healthy people. 

Carrying on in the vein of synthetic nootropics, the biggest OTC nootropic in this class is the racetam family: alleged cognitive enhancers designed to improve memory and suppress anxiety, purportedly by modulating native brain-derived neurotrophic factor. Racetam products are mainly derivatives of pyrrolidinone, a colorless organic compound that supposedly enhances the learning process, diminishes impaired cognition, and protects against brain damage. Several such derivatives are commercially available, including piracetam, oxiracetam, aniracetam, noopept, and pramiracetam. In reality, however, research on their effectiveness in healthy adults is virtually non-existent.

In contrast, human studies categorically link naturally occurring nootropics with healthy brain function. Explicitly, past studies have shown that food-derived nutrients such as unsaturated fats, vitamins, caffeine, minerals, various proteins, glucosinolates, and antioxidants can boost brain function. Despite this, the evidence backing the psychological benefits of their dietary-supplement doppelgangers is weak, a fact that will shock many whose morning ritual involves supplements bought over the counter or online.

To compound this, a 2015 review of various dietary supplements found no convincing evidence of improvements in cognitive performance, even in unhealthy participants. Dr. David Hogan, the review’s lead author, feels nutritional supplements don’t provide the same benefits as food: “While plausible mechanisms link food-sourced nutrients to better brain function, data showed that supplements cannot replicate the complexity of natural food and provide all its potential benefits.” However, he concedes that “none of this rules out the potential for some OTC nootropics to improve cognition. Still, there isn’t much compelling evidence to support these claims.” There is, then, still much conjecture when it comes to dietary supplements as an aid to cognitive enhancement.

These findings make sense, as all the nutrients and fuel for our bodies come from our diet – and certain foods are proven to act as vasodilators on the small arteries and veins in the brain. When introduced into our system, these healthy foods increase blood circulation and the flow of vital nutrients, energy, and oxygen toward the brain. They also counteract inflammatory responses in the brain and modulate neurotransmitter concentrations. For this reason, experts will always state that a healthy, balanced diet is their preferred mode of treatment for healthy cognitive function – at least for now.

How do nootropics work?

Coffee — one of the most popular nootropics.

A recurring critical theme in many papers covering the subject is that unless you’re deficient in a nootropic chemical, taking more of it is unlikely to enhance your brain processes. Officially, cognitive enhancement works by strengthening the components of the memory/learning circuits — dopamine, glutamate, or norepinephrine — to improve brain function in healthy individuals beyond their baseline.

Most experts state that nearly all OTC and dietary supplements lose their potency and thus stop working over time. Moreover, scores of non-prescription drug effects (if present at all) seem to be temporary, lasting only until the compound is metabolized and eliminated. This means you may have to take more for any noticeable benefit, if there is one. The author’s general advice is to ensure that the brand is well established and trusted, and to avoid prescription drugs for non-medical purposes.

In an interview with Insider, David A. Merrill, MD, director of the Pacific Brain Health Center, states that nootropics likely won’t benefit you much if you’re not already experiencing symptoms such as trouble focusing or poor memory.

Indeed, with nootropic intake also rising among gamers, Dr. Migliore notes in an interview with PC Gamer that ingesting these compounds is unlikely to help you if your body isn’t deficient in any of them, adding: “If you spend 10-15 minutes outside every day and eat a balanced diet, your vitamin D levels are most likely normal.” She then goes on to ask: “Will taking a supplement of vitamin D do anything for you? Probably not. On the other hand, if you avoid the sunlight and don’t eat meat, your vitamin D levels may be low. For those people, a vitamin D supplement might lead to increased energy.”

Is Dr. Migliore, a licensed clinician and world-famous gamer, hinting that sun-deprived gamers may benefit from smart drugs? And how will I know when I’m deficient in a specific nutrient, when all I can go on is my own ‘deficient behavior’? Would it not, therefore, make sense to take cognitive enhancers where a nutritional inadequacy is suspected?

Despite how logical this sounds, all experts agree that a sensible diet, social interaction, and regular exercise help boost cognition, with many naturally occurring nootropics found in food shown to improve mental faculties.  

So should we use nootropics then?

There are numerous ethical arguments in the ongoing nootropics debate, with a slew of countries hurriedly adapting their laws to this ever-expanding field. Side effects and false advertising aside, there is no doubt that nootropics exist that work. And if some work, more potent smart drugs will soon be developed that work even better. This is where ethical problems arise: at what point does treating disorders become a form of enhancement, turning patients into super-humans? Should resources be spent trying to turn ordinary people into more brilliant, better-performing versions of themselves in the first place?

I mean, how should we classify, condone, or condemn a drug that improves human performance in the absence of pre-existing cognitive impairment, once proven efficacious? Are we in danger of producing 'synthetic' geniuses? And, even worse, will they be better than the real thing? For comparison, consider doping in competitive sports, where approximately 95% of elite athletes are estimated to have used performance-enhancing drugs. If brain doping becomes acceptable in working life and education, will the same go for sports? Will we see separate competitions for these synthetic geniuses to level the playing field? Governmental bodies must address these urgent issues.

And even though the use of nootropics has risen over the past years, with such drugs broadly perceived as improving academic and professional performance, not enough empirical evidence supports the assumption that these brain boosters produce cognitive enhancement in healthy users. Paired with a deluge of reports on the unwanted, and sometimes dangerous, side effects of these drugs, the case for their use is fragile.

For example, the non-medical use of prescription stimulants such as methylphenidate for cognitive enhancement has recently increased among teens and young adults in schools and on college campuses. Reflecting this demand, memory enhancement dominated the nootropics market with more than a 30% share in 2018. However, this enhancement likely comes at a neuronal, as well as ethical, cost.

In that respect, a 2017 study involving 898 undergraduates without an ADHD diagnosis reported that off-label prescription nootropics did not increase the grade point average of any healthy volunteers — further confirmation that research on nootropics remains inconclusive in clarifying how such drugs act as mind stimulants, even where proven medication is involved.

Just how safe are these nootropic ‘supplements’?

The problems relating to the safety of nootropics are linked directly to adverse event reporting systems. Concentrating on the United States, even the FDA, usually a benchmark for drug regulation globally, is uncharacteristically vague about smart drugs. Most nootropics are sold as OTC supplements, so there are no dedicated figures for side effects associated with OTC nootropics in the USA; adverse events are instead lumped in with dietary supplements in general and compiled as unprocessed data sets, with no analytics available. Historically, adverse events associated with dietary supplements are difficult to monitor in the USA because manufacturers don't register such products before sale. Thus, little information about their content and safety is available, with no way to know whether a supplement contains what producers claim, or to glean its long-term effects. All the more reason to use only well-known, trusted brands found at reputable pharmacies.

The official FDA system that records adverse events for dietary supplements, the CFSAN Adverse Event Reporting System (CAERS), covers foods, nutritional supplements, and cosmetics, and only provides raw data. The reported adverse events range from serious incidents, including death and hospitalization, to minor complaints about taste, coloring, or packaging. Unbelievably, even though CAERS includes severe medical incidents, the names attached to up to 35% of all adverse event reports in this database are redacted under Exemption 4, a regulation that exempts manufacturers from disclosing information that constitutes "trade secrets and commercial information obtained from a person which is confidential." Companies whose products have caused death are also allowed to purge their brand name and products from the FDA database using this privilege.

Hence, it's challenging to obtain statistics for the number of adverse events related to dietary supplements, making it unfeasible to track dangerous supplements that have used the Exemption 4 clause. Accordingly, most studies covering adverse events attributed to OTC supplements rely on predictive statistics, signs, or signals to roughly approximate the number of hospitalizations, doctor's visits, or deaths that may happen in a given year. Many studies rely on multiple sources to assess the number of adverse events related to dietary supplements; even then, it can prove impossible to track a single brand. In general, knowledge regarding the safety of OTC supplements is limited, with many studies finding that CAERS underrepresents adverse events associated with OTC drugs. To give readers an idea of the scale of the problem, among the 1,300 supplements labeled Exemption 4 in the CAERS database, more than one-third involved deaths or hospitalizations.
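To illustrate the kind of rough tallying researchers are left doing on this raw data, here is a minimal Python sketch that counts serious outcomes and Exemption 4 redactions in a hypothetical CAERS-style export. The column names and rows are invented for illustration; the real raw files differ.

```python
import csv
import io

# Hypothetical raw CAERS-style export (invented columns and rows).
raw = """report_id,product,outcome
1,Brand A,Hospitalization
2,REDACTED (Exemption 4),Death
3,Brand B,Off taste
4,REDACTED (Exemption 4),Hospitalization
5,Brand C,Packaging complaint
"""

SERIOUS = {"Death", "Hospitalization"}

rows = list(csv.DictReader(io.StringIO(raw)))
serious = [r for r in rows if r["outcome"] in SERIOUS]
redacted = [r for r in rows if r["product"].startswith("REDACTED")]
redacted_serious = [r for r in redacted if r["outcome"] in SERIOUS]

# Serious reports exist, but the brands behind some of them are hidden,
# which is exactly why tracking a single dangerous product is unfeasible.
print(len(serious), len(redacted), len(redacted_serious))  # 3 2 2
```

Even in this toy data set, every redacted report happens to be a serious one, mirroring the finding that over a third of Exemption 4 entries involve deaths or hospitalizations.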

Another emerging safety issue: OTC drugs can also cause hospitalization even after prescription drug regimes have ended, particularly in patients with a history of psychiatric illness. This poses the question of whether such episodes reflect a loss of plasticity, with these psychopharmaceuticals permanently rerouting and laying down brain circuitry and tracts. If so, we have a contradiction in terms: how can these prescription stimulants be viewed as nootropics, whose effects are temporary by their very nature?

In short, this suggests that healthcare providers, specifically those in the mental health and substance abuse fields, should keep in mind that nootropic use is an under-recognized and evolving problem that can cause severe episodes, particularly amongst those with pre-existing mental disorders or illnesses. 

Have other nootropics been elucidated in human trials?

Yes, numerous nootropics have been through human trials, with significantly more natural cognitive enhancers trialed than synthetic drugs. This makes sense, as these foodstuffs are already part of our everyday diets, fueling the whole body.

First on the list is Bacopa monnieri, a herb found in marshy areas throughout the Indian subcontinent and used for centuries in Ayurvedic medicine to improve brain function. Human studies reveal consistent cognitive enhancement from Bacopa monnieri administration across young, old, and impaired adult populations. Its most robust effects are on memory performance, including positive effects on learning and consolidation of target stimuli, delayed recall, visual retention of information, and working memory.

In adults aged 55 and over, Bacopa monnieri has shown improvements in executive functioning and mental control. Clinical studies have also revealed that it may boost brain function and alleviate anxiety and stress; a class of potent antioxidant compounds called bacosides, present in the herb, is thought to be responsible for these effects.

Surprisingly, despite its addiction liability and undesirable adverse effects, preclinical and clinical studies have demonstrated that nicotine has cognitive-enhancing effects. Functions like attention, working memory, fine motor skills, and episodic memory are all susceptible to nicotine's effects. There may also be a link between this nootropic and dementia, with altered nicotinic receptor activity observed in Alzheimer's disease patients. Despite this, experts agree that nicotine use is only justified as an aid to quit smoking and should therefore be avoided as a smart drug.

One of the most popular drugs for cognitive enhancement is methylphenidate, otherwise known as Ritalin – a commonly prescribed medication for treating ADHD. Users should note that a large proportion of literature on the safety and efficacy of this drug comes from studies performed on normal, healthy adult animals, as there is currently no sufficiently reliable animal model for ADHD.

Methylphenidate is a stimulant closely related to amphetamine and cocaine that works by increasing levels of dopamine and norepinephrine in the brain. In studies on healthy adult animals and human volunteers, higher doses increased movement and impaired attention and performance in prefrontal-cortex-dependent cognitive tasks, while lower doses improved mental performance and reduced locomotor activity. Nevertheless, long-term use of stimulants like Ritalin can lead to attention-related side effects, including hyperactivity, distractibility, and poor impulse control, also seen in patients who use the medication for ADHD.

Many reports discuss the role of Panax ginseng, a herb used in Chinese medicine, in improving the cognitive function of Alzheimer's disease patients, with its antioxidant properties claimed to suppress Alzheimer's disease pathology. Over the last decade, several studies have revealed that single doses of Panax ginseng can modulate aspects of brain activity, measured by electroencephalography, and peripheral blood glucose concentrations in healthy young volunteers. The same studies have also indicated that the herb enhances aspects of working memory, improves mental arithmetic performance, and speeds attentional processes.

Another natural nootropic, Rhodiola rosea, known as golden root, is a flowering plant reported to improve cognitive function. It's mainly known for its ability to counteract physical and mental fatigue, with numerous human studies conducted on the subject. Like Bacopa monnieri and Panax ginseng, it is considered an "adaptogen," a substance that enhances endurance and resistance, protecting against stressful situations. Human studies show that Rhodiola rosea may also protect the nervous system against oxidative damage, thus lowering the risk of Alzheimer's disease.

Research on nootropics indicates that the big hope appears to be modafinil. This prescription drug is considered first-line therapy for excessive daytime sleepiness associated with narcolepsy in adults. However, clinicians need to be cautious with younger users because of reports of side effects involving tachycardia, insomnia, agitation, dizziness, and anxiety. Nevertheless, modafinil is FDA-approved for use in children over age 16 years. 

The efficacy of modafinil in improving alertness and wakefulness in non-sleep-deprived, healthy individuals has led the military to trial the drug as a cognitive enhancer. Pointedly, a 2017 study found evidence that modafinil may enhance some aspects of brain connectivity, including alertness, energy, focus, and decision-making. In non-sleep-deprived adults, this also includes improvements in pattern recognition accuracy and performance in the reaction-based stop-signal task.

Furthermore, modafinil improved accuracy on an executive planning task and produced faster reaction times, with one study even reporting an increased digit span. Cognitive side effects also appear limited, with numerous functions remaining unaffected by modafinil, including trail making, mathematical processing, spatial working memory, logical memory, associative learning, and verbal fluency.

As can be seen, cognitive enhancement is genuine, with human studies available to verify the modes of action and mechanisms at work in this exciting field.

Recommendations for smart drug usage

Nootropics and smart drugs are on the rise in today’s society, but more research involving neuroimaging is needed to understand their benefits better. However, there is no doubt that nootropics fulfilling Giurgea’s original criteria exist, particularly in their natural form.

In addition to these considerations, it's always important to highlight that an active lifestyle, with regular mental and physical activity, social interaction, and high-quality nutrition, shows protective-preventive effects against various diseases and positively impacts brain health. Many experts are only willing to recommend these factors for cognitive enhancement. In particular, exercise increases dendrite length and the density of dendritic spines, promotes the expression of synaptic proteins, increases the availability of growth factors and neurogenesis in the hippocampus, and decreases beta-amyloid levels. No other nootropic has been so extensively studied or proven.

But the medical community cannot ignore the many contrasting views on natural and synthetic nootropics; there's growing evidence that some of these pills and powders can boost cognitive function, albeit temporarily. To date, Ginkgo biloba is the most studied and established herb for cognitive enhancement. In contrast, no prescription drug is officially recommended for non-medical use, despite the vast number of studies suggesting they may provide cognitive enhancement for healthy people.

As we have seen, smart drugs exist; the main point left to cover is safety. Experts recommend using only trusted brands and checking the CAERS database for every new supplement or drug you take. They also state that if you become ill when using any prescription drug, OTC drug, or dietary supplement, you should stop immediately and see a medical professional. Don't forget to check the PubMed database for human trials and safety data on any cognitive enhancers you're taking; it's also an excellent place to double-check the credibility of any brands you may want to try. If they're not involved in any studies, chances are their products haven't been put under serious scientific scrutiny.

Finally, an underground movement is happening in the nootropics field: a faction demanding to be better, demanding their forced evolution, desperate to be as good as the next person, terrified of being left behind. The next generation of smart drugs (and they are coming) will either advance humanity as a whole or divide us irrevocably. Will these synthetic geniuses, who feel so inferior they'll risk their health to win the race, show us the same kindness afforded them? The answer awaits us all.

Scientists find hidden brain patterns that predict which videos will go viral

What makes people go crazy over videos of some dude singing Chocolate Rain or an infant getting his finger bitten by Charlie — again? Ever since social networks were invented, marketers have tried to find the secret sauce, the magic recipe that they can apply to turn a video into a viral sensation that can gather millions of hits within days.

Part of the answer lies in identifying psychological triggers that prompt people to hit that share button like crazy. Another part of that secret sauce might be nestled deep within the human brain.

Brian Knutson is a neuroscientist and professor of psychology at Stanford University. One day, he decided to track his smartphone usage. He knew already that he was spending way too much time mindlessly using his smartphone, but, to his surprise, he actually found that his phone usage was twice what he expected.

“In many of our lives, every day, there is often a gap between what we actually do and what we intend to do,” said Knutson in a press release. “We want to understand how and why people’s choices lead to unintended consequences – like wasting money or even time – and also whether processes that generate individual choice can tell us something about choices made by large groups of people.”

Knutson and colleagues scanned the brains of 36 participants using fMRI while they selected and watched various videos, in order to see what goes on inside the brain when deciding whether to skip a video or watch it to the end. The participants were also interviewed about their behavior, such as what made them skip one video over another.

Since the neural response to video content can be complex and changes over time, the researchers specifically focused on brain activity at the start and end of videos, as well as the average patterns of brain responses for each video.

Longer video views were associated with activity in the reward-sensitive regions of the brain while shorter video views were linked with activity in regions known to be involved in arousal or punishment, the results suggest.

However, when it came to predicting the behavior of others, brain activity alone could forecast the popularity (views/day) of each video.

During just the first four seconds of watching each video, heightened activity in the reward anticipating region of the brain was associated with a greater chance of popularity, whereas activity in the region associated with anticipating punishment forecasted decreased popularity.

Using brain data recorded during the process of making a decision to forecast how larger groups of individuals will respond when faced with the same choices is known as “neuroforecasting”.
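As a rough sketch of what neuroforecasting looks like in practice, one could regress each video's population-level popularity on early activity in reward- and punishment-anticipating brain regions. The synthetic data below merely illustrates the sign pattern the study reports (positive for reward-region activity, negative for punishment-region activity); the region names, model, and numbers are illustrative assumptions, not the authors' actual data or code.

```python
import numpy as np

# Hypothetical per-video averages of early fMRI signal in a
# reward-anticipating region and a punishment-anticipating region,
# plus each video's observed popularity (log views/day).
rng = np.random.default_rng(0)
n_videos = 40
reward = rng.normal(size=n_videos)   # e.g. nucleus-accumbens-like signal
punish = rng.normal(size=n_videos)   # e.g. anterior-insula-like signal
log_popularity = 0.8 * reward - 0.5 * punish + rng.normal(scale=0.3, size=n_videos)

# Fit a simple linear "neuroforecasting" model:
#   popularity ~ b0 + b_reward * reward + b_punish * punish
X = np.column_stack([np.ones(n_videos), reward, punish])
coef, *_ = np.linalg.lstsq(X, log_popularity, rcond=None)
b0, b_reward, b_punish = coef
print(f"reward coef: {b_reward:+.2f}, punishment coef: {b_punish:+.2f}")
```

The fitted coefficients come out positive for the reward signal and negative for the punishment signal, echoing the finding that early reward-region activity forecasts popularity while punishment-region activity forecasts the opposite.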

“Here, we have a case where there is information contained in subjects’ brain activity that allows us to forecast the behavior of other, unrelated, people – but it’s not necessarily reflected in their self-reports or behavior,” explained Lester Tong, a graduate student in the Knutson lab. “One of the key takeaways here is that brain activity matters, and can even reveal hidden information.”

Previously, Knutson and colleagues investigated the neural mechanisms at play when people make emotional decisions, such as during online shopping and crowdfunding, as well as in the context of drug addiction.

These new findings suggest that brain data can be very important to uncover patterns that are otherwise hidden, complementing behavioral data.

“If we examine our subjects’ choices to watch the video or even their reported responses to the videos, they don’t tell us about the general response online. Only brain activity seems to forecast a video’s popularity on the internet,” explained Knutson.

The findings appeared in the Proceedings of the National Academy of Sciences.

My thoughts are my password, because my brain reactions are unique

A test subject entering a brain password. Credit: Wenyao Xu, et al., CC BY-ND.

Your brain is an inexhaustible source of secure passwords – but you might not have to remember anything. Passwords and PINs with letters and numbers are relatively easily hacked, hard to remember and generally insecure. Biometrics are starting to take their place, with fingerprints, facial recognition and retina scanning becoming common even in routine logins for computers, smartphones and other common devices.

They’re more secure because they’re harder to fake, but biometrics have a crucial vulnerability: A person only has one face, two retinas and 10 fingerprints. They represent passwords that can’t be reset if they’re compromised.

Like usernames and passwords, biometric credentials are vulnerable to data breaches. In 2015, for instance, the database containing the fingerprints of 5.6 million U.S. federal employees was breached. Those people shouldn’t use their fingerprints to secure any devices, whether for personal use or at work. The next breach might steal photographs or retina scan data, rendering those biometrics useless for security.

Our team has been working with collaborators at other institutions for years, and has invented a new type of biometric that is both uniquely tied to a single human being and can be reset if needed.

Inside the mind

When a person looks at a photograph or hears a piece of music, her brain responds in ways that researchers or medical professionals can measure with electrical sensors placed on her scalp. We have discovered that every person’s brain responds differently to an external stimulus, so even if two people look at the same photograph, readings of their brain activity will be different.

This process is automatic and unconscious, so a person can’t control what brain response happens. And every time a person sees a photo of a particular celebrity, their brain reacts the same way – though differently from everyone else’s.

We realized that this presents an opportunity for a unique combination that can serve as what we call a "brain password." It's not just a physical attribute of the body, like a fingerprint or the pattern of blood vessels in a retina. Instead, it's a mix of a person's unique biological brain structure and their involuntary memory that determines how they respond to a particular stimulus.

Making a brain password

A person’s brain password is a digital reading of their brain activity while looking at a series of images. Just as passwords are more secure if they include different kinds of characters – letters, numbers and punctuation – a brain password is more secure if it includes brain wave readings of a person looking at a collection of different kinds of pictures.

A range of visual stimuli generates the best brain password. Credit: Wenyao Xu, et al., CC BY-ND.

To set the password, the person would be authenticated some other way – such as coming to work with a passport or other identifying paperwork, or having their fingerprints or face checked against existing records. Then the person would put on a soft, comfortable hat or padded helmet with electrical sensors inside. A monitor would display, for example, a picture of a pig, Denzel Washington's face and the text "Call me Ishmael," the opening sentence of Herman Melville's classic "Moby-Dick."

The sensors would record the person’s brain waves. Just as when registering a fingerprint for an iPhone’s Touch ID, multiple readings would be needed to collect a complete initial record. Our research has confirmed that a combination of pictures like this would evoke brain wave readings that are unique to a particular person, and consistent from one login attempt to another.

Later, to login or gain access to a building or secure room, the person would put on the hat and watch the sequence of images. A computer system would compare their brain waves at that moment to what had been stored initially – and either grant access or deny it, depending on the results. It would take about five seconds, not much longer than entering a password or typing a PIN into a number keypad.
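Conceptually, that comparison step could look something like the sketch below, which matches a fresh reading against the enrolled template using a simple correlation threshold. The feature vectors, noise level, and 0.9 threshold are all illustrative assumptions, not the researchers' actual algorithm.

```python
import numpy as np

def matches_template(reading: np.ndarray, template: np.ndarray,
                     threshold: float = 0.9) -> bool:
    """Grant access if the new brainwave reading correlates strongly
    enough with the enrolled template (threshold is illustrative)."""
    r = np.corrcoef(reading, template)[0, 1]
    return bool(r >= threshold)

# Hypothetical enrolled template and two login attempts.
rng = np.random.default_rng(1)
template = rng.normal(size=256)                       # enrolled feature vector
genuine = template + rng.normal(scale=0.1, size=256)  # same user, small noise
imposter = rng.normal(size=256)                       # different user

print(matches_template(genuine, template))   # True
print(matches_template(imposter, template))  # False
```

A real system would extract richer features from the EEG signal and tune the decision rule against false-accept and false-reject rates, but the gatekeeping logic is the same: compare, then grant or deny.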

After a hack

Brain passwords’ real advantage comes into play after the almost inevitable hack of a login database. If a hacker breaks into the system storing the biometric templates or uses electronics to counterfeit a person’s brain signals, that information is no longer useful for security. A person can’t change their face or their fingerprints – but they can change their brain password.

It’s easy enough to authenticate a person’s identity another way, and have them set a new password by looking at three new images – maybe this time with a photo of a dog, a drawing of George Washington and a Gandhi quote. Because they’re different images from the initial password, the brainwave patterns would be different too. Our research has found that the new brain password would be very hard for attackers to figure out, even if they tried to use the old brainwave readings as an aid.

Brain passwords are endlessly resettable, because there are so many possible photos and a vast array of combinations that can be made from those images. There’s no way to run out of these biometric-enhanced security measures.

Secure – and safe

As researchers, we are aware that it could be worrying or even creepy for an employer or internet service to use authentication that reads people’s brain activity. Part of our research involved figuring out how to take only the minimum amount of readings to ensure reliable results – and proper security – without needing so many measurements that a person might feel violated or concerned that a computer was trying to read their mind.

We initially tried using 32 sensors all over a person’s head, and found the results were reliable. Then we progressively reduced the number of sensors to see how many were really needed – and found that we could get clear and secure results with just three properly located sensors.

Three electrodes high on the back of a user’s head are enough to detect a brain password. Credit: Wenyao Xu et al., CC BY-ND.

This means our sensor device is so small that it can fit invisibly inside a hat or a virtual-reality headset. That opens the door for many potential uses. A person wearing smart headwear, for example, could easily unlock doors or computers with brain passwords. Our method could also make cars harder to steal – before starting up, the driver would have to put on a hat and look at a few images displayed on a dashboard screen.

Other avenues are opening as new technologies emerge. The Chinese e-commerce giant Alibaba recently unveiled a system for using virtual reality to shop for items – including making purchases online right in the VR environment. If the payment information is stored in the VR headset, anyone who uses it, or steals it, will be able to buy anything that's available. A headset that reads its user's brainwaves would make purchases, logins or physical access to sensitive areas much more secure.

Wenyao Xu, Assistant Professor of Computer Science and Engineering, University at Buffalo, The State University of New York; Feng Lin, Assistant Professor of Computer Science and Engineering, University of Colorado Denver, and Zhanpeng Jin, Associate Professor of Computer Science and Engineering, University at Buffalo, The State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Mouse Mazes and Cutting Edge Science: A Discussion with Harvard’s Shuhan He

Shuhan He is a Resident Physician at Harvard Emergency Medicine. He’s a neuroscientist interested in neurocritical care and improving neurobehavioral testing in laboratory animals and humans. He also believes mouse mazes can be a key part of that. We sat down with him (through the wonders of the internet) and asked him a few questions about this challenging field.

Andrei Mihai, ZME Science: Thank you for taking the time to share your thoughts. I was curious what drew you to this intricate field of research? How did you start working with mice, and subsequently, with mazes?

Shuhan He.

Shuhan He: One of the things that pushed me into this field was the lack of an effective set of tools for carrying out more comprehensive experiments in the lab. While working in the lab, it became evident there wasn't a way to gather large amounts of data from the research and repeat the experiments based on that evidence. Most lab equipment companies are still failing to provide the devices needed for this, which is why it was a good time to step forward.

AM: As in all fields of science, replicability is vital. How much of an issue is this with mouse studies in general, and with mazes in particular? 

SH: It's not much of an issue as long as all the conditions are met and repeated in the same manner every time. Even though there are small factors that could influence the experiments, such as the ambient light or the fragrance you are wearing, all of these should be noted in advance.

Most mice experiments can be repeated and it’s even easier when you have automated devices that can do most of the job for you and decrease the differences between the studies to a minimum.

AM: We often read about mouse studies, but we rarely get a chance to see how they actually take place. Could you describe how such a study would take place, what are the stages, and how long does it take? Is there a great variability between different types of studies?

SH: Let’s take the Morris Water Maze task as an example [a behavioral procedure widely used in behavioral neuroscience]. This experiment is carried out with the objective of measuring mice’s memory and learning capabilities.

Schematic drawing of the Morris water navigation test for rats. Image credits: Samuel John.

First, the water pool needs to be prepared. There's a platform at the center of this pool which the rodent should reach each time it is put to the test. The test basically consists of placing the mouse inside the pool and teaching it to reach this safe platform instead of trying to escape the water.

Once the subject learns the route to safety, we change the position of this platform for each trial, and then measure how long the animal takes to reach the target, in addition to tracking its movements. The result of this particular test lets us know how fast the animal can learn a task, and it's also a good indicator of its ability to recall past experiences. The successful application of the Morris Water Maze is what led researchers to the fantastic claim that they could engineer smarter mice.
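The two headline measurements from such a trial, escape latency and path length, are straightforward to compute from tracking data. The following sketch uses invented (time, x, y) coordinates and a hypothetical 5 cm "reached" radius purely for illustration.

```python
import math

# Hypothetical tracking output for one trial: (time_s, x_cm, y_cm) samples.
track = [(0.0, 0, 0), (1.0, 10, 5), (2.0, 25, 10), (3.0, 40, 20)]
platform = (40, 20)   # platform centre, cm
radius = 5            # counted as "reached" within this distance, cm

def escape_latency(track, platform, radius):
    """First timestamp at which the animal is within `radius` of the
    platform, or None if it never reaches it."""
    px, py = platform
    for t, x, y in track:
        if math.hypot(x - px, y - py) <= radius:
            return t
    return None

def path_length(track):
    """Total distance swum, summed over consecutive tracking samples."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))

print(escape_latency(track, platform, radius))  # 3.0
print(round(path_length(track), 1))             # 45.0
```

Falling latency and shorter paths across trials are the usual signatures of learning in this task; automated tracking simply makes these numbers available for every trial without manual scoring.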

As for the different types of studies, there are different devices and mazes depending on the case. For example, the previously mentioned Morris Water Maze is usually used for memory and learning tests, while the Automated 8 Arm Radial Maze serves a similar purpose, with the difference that the latter is more focused on short-term memory. Another example would be the Treadmill, which can be used for resilience tests and sleep deprivation studies. It's also important to mention that many of these devices can be combined with others to create new trial environments.

AM: What’s the process you go through when designing a maze? Are there basic principles, do you make different types of mazes for different types of studies? Can you show us some of your coolest projects? 

SH: We have a different purpose in mind each time we design a maze or any other research device. Some of them are old mazes that we brought back to life by adding new technologies, features, and quality components not previously found on the market; good examples of this are the Automated T Maze and the MWM (Morris Water Maze).

If there's anything all of these devices have in common, it's that they are created with the highest-quality parts, yet no two of them are the same.

Our latest creation is the Labyrinth. This device acts as a housing system for rodents that is completely automated and can be customized for different purposes; it also includes the latest technologies available, and we are very proud to present it. Other mazes worth mentioning here are the Automated 8 Arm Radial Maze and the Automated T Maze.

Here is a demonstration of the Automated 8 Arm Radial Maze in action:


AM: How has the development of neuropsychiatric drugs changed in recent years, with the development of so many new technologies (i.e. 3D printing, more powerful computing)? How do you see the field evolving in the next few years? 

SH: This is a really exciting time for behavioral neuroscience. Modern computational tools are making it possible to take complex data sets of behavior and tease out subtle findings on intelligence, learning, motor function, and even social dynamics patterns. This would never have been possible with manual interventions.
The biggest advantage of behavioral work in rodents is that we can fully control their environment. Closed arenas where machines take live data and control the experiments are definitely the future. We can even see the brain firing as it functions right now, so imagine taking that data live and having the machine modify the environment to try entirely new categories of experiments. That's the exciting future in front of us.

AM: Lastly, what motivated you to start Maze Engineers? Where do you see this going in the future, and what is the role you hope to play? 

SH: Maze Engineers was created because mazes are an underappreciated technology. Behavior is fundamentally the product of all of the brain's components working together, and it can't be broken down into anything more basic. Mazes are the only tools that can really tell us how the brain as a whole is functioning.

We have a bright future ahead, especially with the inevitable application of artificial intelligence to automating processes and to making complex analyses far faster than any team of scientists could manage manually. I am certain AI is going to play a huge role in science in the years to come, and we hope to be at the forefront of this thrilling period.



As neuroscience advances, new human right laws are required to ensure our minds remain our own

Advances in neuroscience and neurotechnology could infringe on the "freedom of the mind" by prying information straight from our brains, a team of researchers reports. The team has identified four new human rights which it believes would protect our right to our own, unaltered, and private minds.


Chalked messages on the steps of UoE in celebration of human rights day.
Image credits University of Essex / Flickr.

Advances in neurotechnology are making some incredible things possible, such as sophisticated brain imaging and brain-computer interfaces. But there's always the risk of these technologies being used for applications that aren't, in the broadest sense of the word, 'good.'

The four rights are:

  1. The right to cognitive liberty.
  2. The right to mental privacy.
  3. The right to mental integrity.
  4. The right to psychological continuity.


“Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending,” explains Roberto Andorno, Associate Professor and Research Fellow at the Institute of Biomedical Ethics at the University of Zurich and co-author of the paper.

“Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom, which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology becomes more refined and commonplace, the risk of it being hacked, so that a third party can listen in on someone's mind, also increases. Even an unsuccessful attack on a brain-computer interface can cause considerable physical and psychological damage to its user. And even if there's no foul play, there are legal and ethical concerns regarding the protection of the data these devices generate.

Currently, international human rights law doesn't extend to neuroscience. But this wouldn't be the first time it has been extended to cover advances in a field of science: consider the laws concerning individuals' genetic data. The team says that recent advances in neuroscience will eventually have us sit down to redraw certain human rights laws and pen in new ones, just as lawmakers did in answer to the genetic revolution.

These four rights should emerge in the near future to protect what has always been our last refuge of personal freedom. Taken together, they would enable anyone to refuse coercive or invasive neurotechnology, give them a legal defense of the privacy of data collected through such methods, and protect both body and mind from damage caused by the misuse of neurotechnology.

The full paper “Towards new human rights in the age of neuroscience and neurotechnology” has been published in the journal Life Sciences, Society and Policy.

An old game console could challenge all we know about how the brain works

By applying data analysis techniques used by neuroscientists to a simple man-made computing system, an Atari 2600 running "Donkey Kong", a team of researchers found that these techniques may paint an incomplete picture of how our brains work.

Image credits digitalskennedy / Pixabay.

Neuroscientists today have tools at their disposal that the field could only dream of a few decades ago. They can record the activity of more neurons at a time, at better resolution, than ever before. But while we can record a huge volume of data, we have no way of testing the validity of the results, because we don't understand how even the simplest brain works.

So Eric Jonas of U.C. Berkeley and Konrad Kording of Northwestern University set out to put these algorithms to the test on a system we do understand: the 6502 microprocessor housed in the Atari 2600 console.

“Since humans designed this processor from the transistor all the way up to the software, we know how it works at every level, and we have an intuition for what it means to ‘understand’ the system,” Jonas says.

“Our goal was to highlight some of the deficiencies in ‘understanding’ that arise when applying contemporary analytic techniques to big-data datasets of computing systems.”

The duo applied standard neuroscience techniques to analyze the hardware's functions, to see how well they could recover known characteristics such as the chipset's architecture or the effect of destroying individual transistors. It didn't go very well: the techniques recovered far less information about the processor than a typical electrical engineering student is expected to possess.
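The "destroy one transistor and see what breaks" approach can be illustrated on a system far smaller than the 6502. The toy sketch below (my own illustration, not code from the paper) lesions each gate of a one-bit full adder and counts how many input patterns then fail. The counts tell you which gates matter, but not what the circuit actually computes, which is essentially the gap Jonas and Kording point to:

```python
# Toy 'lesion study' on a one-bit full adder built from logic gates,
# an illustrative stand-in for the paper's transistor-level analysis.

def full_adder(a, b, cin, broken=None):
    """Compute (sum, carry); `broken` forces the named gate's output to 0."""
    def gate(name, value):
        return 0 if name == broken else value
    x = gate("xor1", a ^ b)
    s = gate("xor2", x ^ cin)
    c = gate("or1", gate("and1", a & b) | gate("and2", x & cin))
    return s, c

# Lesion each gate in turn and count how many input patterns misbehave
for g in ["xor1", "xor2", "and1", "and2", "or1"]:
    errors = sum(
        full_adder(a, b, c, broken=g) != full_adder(a, b, c)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)
    )
    print(g, "breaks", errors, "of 8 input patterns")
```

Knowing that "xor2" breaks more patterns than "and1" still doesn't tell you the circuit adds numbers; the same is true of lesioning transistors, or neurons.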

“Without careful thought, current big-data approaches to neuroscience may not live up to their promise or succeed in advancing the field,” Jonas said.

“Progress requires better experiments, theories, and data analysis approaches,” Kording added.

There are, of course, some major limitations to the study. Jonas and Kording didn't apply every technique neuroscientists use and, the elephant in the room, microprocessors are really quite different from brains. Still, the findings do suggest there are limits to what modern neuroscience can reveal about the brain. The two researchers hope that by trying our hand at reverse-engineering synthetic systems first, we may gain a better understanding of how to do the same with the brain.

The full paper “Could a Neuroscientist Understand a Microprocessor?” has been published in the journal PLOS Computational Biology.

Detailed new map of human brain reveals almost 100 new regions

The human brain is one of the most complex phenomena known to man, and despite extensive research, scientists have yet to fully understand it. Although a complete grasp of its nature is still far off, a new study by researchers from the Washington University School of Medicine brings us closer to that goal in the form of a detailed new map of the outermost layer of the brain, revealing almost 100 new regions.


The detailed new map of the human brain’s cerebral cortex. Credit: Matthew Glasser and Eric Young

The outermost layer of the brain, referred to as the cerebral cortex, is a layer of neural tissue that encases the rest of the brain. It is the primary structure involved in sensory perception, attention, and numerous functions that are uniquely human, including language and abstract thinking.

In the new study, the team divided the cortex of the left and right cerebral hemispheres into 180 areas based on physical differences such as cortical thickness, functional differences and neural connectivity.

“The brain is not like a computer that can support any operating system and run any software,” said David Van Essen of the Washington University School of Medicine and senior author of the paper. “Instead, the software – how the brain works – is intimately correlated with the brain’s structure—its hardware, so to speak. If you want to find out what the brain can do, you have to understand how it is organized and wired.”

Matthew Glasser, lead author of the study, spearheaded the research after he realized that the current map of the human cortex – created by German neuroanatomist Korbinian Brodmann in the first decade of the 20th century – just wasn’t cutting it for modern research.

“My early work on language connectivity involved taking that 100-year-old map and trying to guess where Brodmann’s areas were in relation to the pathways underneath them,” Glasser said. “It quickly became obvious to me that we needed a better way to map the areas in the living brains that we were studying.”

Using data from 210 healthy young adults, both male and female, the team took measures of cortical thickness and neuronal cable insulation and combined them with magnetic resonance imaging (MRI) scans of the brain at rest as well as during simple tasks.

“We ended up with 180 areas in each hemisphere, but we don’t expect that to be the final number,” Glasser said. “In some cases, we identified a patch of cortex that probably could be subdivided, but we couldn’t confidently draw borders with our current data and techniques. In the future, researchers with better methods will subdivide that area. We focused on borders we are confident will stand the test of time.”
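At its heart, parcellation of this kind is a clustering problem: every point on the cortical surface gets a feature vector (thickness, myelin content, connectivity, task response), and borders are drawn where those features change together. The following is a deliberately simplified sketch, with synthetic data and plain two-cluster k-means standing in for the study's far more elaborate multi-modal machinery:

```python
import random

random.seed(42)

# Synthetic "cortical strip": 100 vertices, each with (thickness, myelin).
# Two regimes mimic two distinct areas, with a border at vertex 50.
strip = [(2.5 + random.gauss(0, 0.1), 1.2 + random.gauss(0, 0.05)) for _ in range(50)] \
      + [(3.4 + random.gauss(0, 0.1), 0.7 + random.gauss(0, 0.05)) for _ in range(50)]

def kmeans2(points, iters=20):
    """Minimal 2-cluster k-means; returns one label per point."""
    c = [points[0], points[-1]]  # initialize from the strip's two ends
    for _ in range(iters):
        labels = [0 if (p[0] - c[0][0]) ** 2 + (p[1] - c[0][1]) ** 2
                       <= (p[0] - c[1][0]) ** 2 + (p[1] - c[1][1]) ** 2 else 1
                  for p in points]
        for k in (0, 1):  # move each centroid to the mean of its members
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                c[k] = (sum(m[0] for m in members) / len(members),
                        sum(m[1] for m in members) / len(members))
    return labels

labels = kmeans2(strip)
print(labels.count(0), labels.count(1))  # the recovered border sits near vertex 50
```

The real study combines many more feature maps, aligns them across 210 brains, and validates borders against known areas, but the underlying idea is the same: areas are regions whose feature vectors hang together.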

In the future, such cortical maps could be created on an individual basis and help in the diagnosis and treatment of neurological or psychiatric illnesses such as dementia and schizophrenia.

Journal Reference: Matthew Glasser et al., "A multi-modal parcellation of human cerebral cortex," Nature, 20 July 2016. DOI: 10.1038/nature18933


Genetic variant explains why women are more prone to Alzheimer’s


Photo: e-manonline.com

Alzheimer's has been a thorn in neuroscientists' sides for decades. After so many years and billions of dollars' worth of research, the underlying causes and mechanisms of this gruesome neurodegenerative disease have yet to be identified, though hints suggest genetics have a major role to play; never mind a cure! Clearly, Alzheimer's is formidable, and while we've yet to fully understand it, scientists are doing their best, and every year a new piece is added that might one day complete the puzzle.

For instance, a team of researchers at Stanford confirmed earlier findings suggesting that a genetic variant makes women more prone to the disease than men. This is evidence that the disease affects the sexes unequally and suggests that future treatments should be gender-specific.

It’s those genes

In 1993, researchers found that elders who inherit a gene variant called apolipoprotein E4 (APOE4) are more prone to the common form of Alzheimer's that strikes in late life. Other variants have also been linked with Alzheimer's: APOE3, the risk-neutral variant, and the much rarer APOE2, which actually decreases a person's risk of developing the disease. A bit later, in 1997, researchers combed through more than 40 studies, analyzing data on 5,930 Alzheimer's patients and 8,607 dementia-free elderly, and found that females with the APOE4 variant were four times more likely to have Alzheimer's than people with the more common, neutral form of the gene.


Photo: triumf.ca

That's a really big difference, but for some reason the findings never became widely known. Michael Greicius, a neurologist at Stanford University Medical Center in California, rediscovered them in 2008 and decided they were worth a new investigation. He and his team first performed neuroimaging on patients and found from the brain scans that women with the APOE4 variant had poor connectivity in brain networks typically afflicted by Alzheimer's, even though no symptoms of Alzheimer's were present in the first place. This was fishy.

A more comprehensive view

Greicius and colleagues decided they would have to perform a longitudinal study to see the full extent of this genetic variance, so they pulled data from 2,588 people with mild cognitive impairment and 5,496 healthy elderly who visited national Alzheimer's centers between 2005 and 2013. Every participant was logged by genotype (did they carry APOE4 or APOE2?) and gender. Most importantly, each participant was surveyed in follow-up studies to see whether the mild impairments had grown into full-blown Alzheimer's.

Confirming that APOE4 is a risk gene, male and female participants with mild cognitive impairment who carried the variant progressed to Alzheimer's disease equally, and more readily than those without the gene. However, among healthy seniors, women who inherited the APOE4 variant were twice as likely as noncarriers to develop mild cognitive impairment or Alzheimer's disease, whereas APOE4 males fared only slightly worse than those without the variant. This is a full step ahead of the 1997 study because it tells us more about how the gene variant potentially leads to Alzheimer's, especially in women.
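The "twice as likely" figure is a simple ratio of conversion rates between carriers and noncarriers. With entirely made-up counts (not the study's data), the arithmetic looks like this:

```python
def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Risk ratio: incidence among carriers divided by incidence among noncarriers."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Purely illustrative numbers, NOT the study's data:
# 80 of 400 APOE4 carriers convert to Alzheimer's, vs. 100 of 1000 noncarriers.
print(relative_risk(80, 400, 100, 1000))  # 2.0, i.e. "twice as likely"
```

The study's actual analysis accounts for follow-up time and covariates, but the headline comparison has this basic shape.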

The findings will most likely have significant implications for how Alzheimer's is treated. Interestingly enough, according to the researchers, some previous studies have shown side effects when treating patients who carry the APOE4 variant, but those studies were not subdivided by gender. Moreover, it's possible that some treatments are more effective at treating symptoms in men than in women, and this is definitely worth taking into account.



Measuring creativity through spontaneous single spoken words

What is creativity? Although definitions vary, one might say that creativity, ultimately, is anything that has to do with ideas: generating them, building on them, transforming them into reality. There are a lot of tests that measure creativity, and chances are that if you've been to a job interview recently, you've been handed one. A Michigan State University neuroscientist may have found the quickest test yet of a person's creativity, developed while studying what sparks creativity and which brain processes are involved.

The researchers measured the effectiveness of the "noun-verb" test, an experiment virtually anyone can do. For their work, they asked 193 participants to quickly respond with a verb after a noun was shown. For the noun "chair," for example, instead of answering with the standard verb "sit," a participant might answer "stand," as in to stand on a chair to change a light bulb. The verb needs to be related to the noun; nonsensical replies were not counted. The test only lasts two minutes.

[RELATED] Brain scans of rappers offer valuable insight on creativity

After the test, each participant engaged in a more in-depth creative exercise, like story writing, drawing, or recalling their real-life creative achievements. Those who gave creative answers in the noun-verb test were indeed the most creative as measured by these more in-depth methods. This suggests the noun-verb test, or a future variation of it, could stand on its own as a measure of creativity.

“We want to understand what makes creativity tick, what the specific processes are in the brain,” said MSU neuroscientist Jeremy Gray. “Innovation doesn’t just come for free — nobody learns their ABCs in kindergarten and suddenly writes a great novel or poem, for example. People need to master their craft before they can start to be creative in interesting ways.”

Next, the researchers plan to repeat the experiment with the participants in an MRI scanner while brain activity is recorded, in the hope of identifying the brain regions and mechanisms that come into play during the creative process. Even as they stand, the findings could help anyone, from professors to students to entrepreneurs, enhance their creative flow through simple exercises like the noun-verb test. Better tests, like the creativity tests given at interviews, might also be devised based on the noun-verb experiment.

“Ultimately, this work could allow us to create better educational and training programs to help people foster their creativity,” Gray said.

Results were published in the journal Behavior Research Methods.


Is making cyborg cockroaches immoral?


(c) Backyard Brains

Through the halls of TEDxDetroit last week, participants were introduced to an unfamiliar and unlikely guest: a remote-controlled cyborg cockroach. RoboRoach #12, as it was called, can be directed to move either left or right by transmitting electrical signals, via a smartphone's Bluetooth, to electrodes attached to the insect's antennae. Scientists have been doing these sorts of experiments for years now in an attempt to better understand how the nervous system works and to demonstrate how it can be manipulated.

Greg Gage and Tim Marzullo, co-founders of an educational company called Backyard Brains and the keynote speakers at the TEDx event where the cyborg roach was shown, have something different in mind. They want to send RoboRoaches all over the U.S. to anyone willing to experiment with them. For $99, the company sends you a kit with instructions on how to convert your very own roach into a cyborg for educational purposes. In fact, it's intended for kids as young as ten, and the project's aim is to spark a neuroscience revolution. Post-TEDxDetroit, however, a lot of people, including prominent figures from the scientific community, were outraged and challenged the ethics of RoboRoaches.

“They encourage amateurs to operate invasively on living organisms” and “encourage thinking of complex living organisms as mere machines or tools,” says Michael Allen Fox, a professor of philosophy at Queen’s University in Kingston, Canada.

“It’s kind of weird to control via your smartphone a living organism,” says William Newman, a presenter at TEDx and managing principal at the Newport Consulting Group, who got to play with a RoboRoach at the conference.

How do RoboRoach #12 and its predecessors become slaves to a flick on an iPhone touchscreen? The instruction kit, which also ships with a live cockroach, guides students through the whole process. First, the student is instructed to anesthetize the insect by dousing it with ice water. Then a patch of the insect's head shell is sanded so that it becomes adhesive; otherwise, the superglue and electrodes won't stick. A ground wire is inserted into the insect's thorax. Next, students need to be extremely careful while trimming the insect's antennae before inserting silver electrodes into them. Finally, a circuit fixed to the cockroach's back relays electrical signals to the electrodes, as instructed via a smartphone's Bluetooth.

Gage says, however, that the cockroaches do not feel any pain throughout this process, though it is questionable how certain he can be of this claim. Many aren't convinced. For instance, animal behavior scientist Jonathan Balcombe of the Humane Society University in Washington, D.C. asks: "if it was discovered that a teacher was having students use magnifying glasses to burn ants and then look at their tissue, how would people react?"

That's an interesting question, but I can also see the project's educational benefits. It teaches students how essential the brain is and how it governs bodily functions through electrical signals. Scientists, unfortunately, rely heavily on model animals like mice, worms, and monkeys for their research. These animals certainly suffer, but until a surrogate model is found, the potential gains convince most policymakers that the practice needs to continue, despite the moral questions it poses. Of course, this kind of research is performed by adults, behind closed doors, in the lab – not by ten-year-old children. Also, what about frog dissections in biology classes? Some schools in California have banned the practice entirely; should other schools follow suit?

What happens to the roaches after they're 'used and abused'? Well, they go to a roach retirement home, of course. I'm not kidding. Gage says that all students learn they have to care for the roaches, treating wounds by "putting a little Vaseline" on them and minimizing suffering whenever possible. When no longer needed, the roaches are sent to a retirement tank the scientists call Shady Acres, where the disabled insects get on with their lives. "They do what they like to do: make babies, eat, and poop."

Gage acknowledges, however, that he has indeed received a ton of hate mail. “We get a lot of e-mails telling us we’re teaching kids to be psychopaths.”

It's worth noting that cyber roaches have been used in research for some time. Scientists in North Carolina, for instance, are trying to determine whether remote-controlled cockroaches will be the next step in emergency rescue. The researchers hope these roaches can be equipped with tiny microphones and navigate their way through cramped, dark spaces in an effort to find survivors in disaster situations.

So, ZME readers, what do you think? Should cyborg roaches find their way into classrooms?


Dwelling inside the gambler’s mind

Lady Luck

There’s a lot more to gambling than just luck, and whilst it’s impossible to predict an outcome or utilise a system effectively, the human brain and our emotions have a lot to do with the decisions we make during the gambling process.


spinning wheels and that feeling of hope

Say you slide a coin into a one-armed bandit, pull the lever and watch the reels spin. Whilst there's no way to predict which of the fruits will present themselves once the reels have stopped, we still feel that flicker of hope. These kinds of machines are programmed to return only about 90% of the money wagered, so the odds are that you will lose – but what makes us return for more?
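That 90% figure implies a straightforward expected-value calculation: each unit wagered loses 10% of its value on average, so the expected loss grows linearly with the number of spins. A quick sketch, with all figures illustrative:

```python
# Hypothetical slot machine with a 90% return-to-player (RTP) rate:
# on average each unit wagered pays back 0.90, so the expected loss
# per spin is the stake times (1 - RTP).

RTP = 0.90
STAKE = 1.00

def expected_bankroll(start, spins):
    """Average bankroll remaining after a number of spins at the given RTP."""
    return start - spins * STAKE * (1 - RTP)

print(expected_bankroll(100, 100))  # ~90 left on average after 100 spins
print(expected_bankroll(100, 500))  # ~50 left: half the bankroll gone, on average
```

Individual sessions swing wildly in both directions, which is exactly what keeps the experience exciting; the average, however, only goes down.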

The answer lies in the brain. The human brain is packed with dopamine neurons, which are used to predict future rewards. These neurons struggle to decipher patterns or determine a solid outcome when you are gambling. Because they can't get to grips with the machine's microchip-generated patterns, our dopamine neurons, instead of surrendering, become obsessed. And because dopamine, like serotonin, is a 'feel-good' chemical, whenever we pull the lever and win, we experience a rush of pleasure. The end result of our neurons continuously trying to work out a pattern is that the machine transfixes us and we keep playing.
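The "prediction" half of this story is often modeled as a temporal-difference error: the dopamine-like signal tracks the gap between the reward received and the reward expected. The minimal sketch below (a standard delta-rule learner, not a model from any particular study) shows why a predictable payout lets that error die away, while a slot-machine-style payout never lets it settle:

```python
import random

def mean_prediction_error(rewards, lr=0.1):
    """Delta-rule learner: after each reward, V += lr * (reward - V).
    Returns the mean absolute prediction error over the last 100 trials."""
    v = 0.0
    errors = []
    for r in rewards:
        err = r - v        # the 'dopamine-like' prediction error
        v += lr * err      # update the expectation toward the reward
        errors.append(abs(err))
    return sum(errors[-100:]) / 100

random.seed(0)
predictable = [1.0] * 500  # the same payout every single time
slot = [10.0 if random.random() < 0.09 else 0.0 for _ in range(500)]  # rare jackpots

print(mean_prediction_error(predictable))  # shrinks toward 0: fully expected
print(mean_prediction_error(slot))         # stays large: the error never settles
```

A reward that is perfectly predicted eventually produces no error signal at all; a random payout keeps producing surprises, which is one way to frame why the machine never becomes boring.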


Technically All You Enjoy

Serotonin and dopamine are both chemicals linked with feeling good, so when they are stimulated by winning and the thrill of the next win, they can influence the decisions you make. Whilst the pragmatic side of you may know you will lose, the feelings induced by winning may overpower any pragmatism.
Alcohol also affects the way you gamble. Mix alcohol with dopamine and serotonin and you create an urge that is almost impossible to resist should you start winning at the casino. Alcohol breeds confidence and gives you a false sense of security about future wins.


Some people are more susceptible to a dopamine rush than others, and as games of chance are designed to exploit this cellular pathway in the brain, some people are more likely than others to become addicted to gambling. Gamblers who experience massive rushes of pleasure from a win are blinded to the fact that they are losing money while chasing the next win. Casinos have figured out a way to make us almost want to lose money.

Psychological Trickery

casino chips

Although it's usually the gambler who cheats the casino, these establishments use a few psychological tricks on the brains of their visitors.

A casino's flashing lights and the sounds of clinking money broadcast winning and the joy it brings – and it's contagious. Because we possess the need to feel socially normal, experiencing the thrill of winning and being admired by those around you is extremely powerful.

Chips are used instead of real money, which alters the gambler's sense of money. Throwing down three £50 chips in the heat of an exciting gambling moment feels just like throwing down three £10 chips.
Casinos also offer complimentary alcoholic drinks and food to keep players gambling, and as I previously mentioned, alcohol can impair your judgement of the chances of winning. The money a casino makes from gamblers' losses massively outweighs the cost of the food and drink.


cheating at cards

Cards, a classic target for cheaters

I think it's safe to say everyone has thought about cheating at some point, especially after hearing about people who've cheated millions out of a casino. Unfortunately, this is nothing but a romantic image seen only in the movies. Cheating has serious consequences in real life, but that doesn't seem to dampen the interest in gambling cheats.

Past posting is a method of altering a bet after it has been settled. In roulette, for example, the player will distract the dealer so they can either switch their chips for higher-denomination chips or push their chips onto the winning number.

Collusion is a very popular form of cheating, one which needs two or more people. You often see it in the movies, but casinos can pick up on it. In poker, two partners signal to each other the values of their cards. Alternatively, a player may have a friend strategically placed to spy on the other players and signal their cards.

So the next time you are in the casino or playing online at Gaming Club online casino and find yourself faced with that tricky decision whether to stop or not, take a minute and ask yourself what your brain is really trying to tell you.


Schizophrenia symptoms canceled in mice after gene therapy

A group of international researchers may have reached a breakthrough moment after they successfully eliminated schizophrenia symptoms in mice after they targeted a specific gene and manipulated its expression. Their findings offer hope that similar results might be possible for humans as well.

Despite schizophrenia being well documented for many years and being a relatively prevalent mental disorder, it continues to confound both health professionals and the public. The hallucinations, peculiar ideas, and extremely odd behavior that people suffering from schizophrenia exhibit never cease to baffle those of us who like to consider ourselves normal. In the Middle Ages, people suffering from the disease were thought to be possessed by demons, and because of this they were quickly marginalized, tortured, exiled, or worse.

Currently, schizophrenia isn't curable, and those suffering from it are forced to live with it for the rest of their days. Treatment exists, of course, to battle one or more of the symptoms. However, while medication helps control the psychosis associated with schizophrenia (e.g., the delusions and hallucinations), it cannot help the person find a job, learn to be effective in social relationships, improve their coping skills, or learn to communicate and work well with others. This is one nasty mental disorder, make no mistake, and scientists have been trying to find means of effectively combating it for many years.


Roles of Neuregulin 1 in neural development. NRG1 is released from neurons to promote the formation and maintenance of radial glial cells. Tangential migration of γ-aminobutyric acid-ergic interneurons requires NRG1 in the cortical region. Myelination and ensheathment of peripheral nerves are controlled by the amounts of NRG1 produced in substrate axons. NRG1 from axons might regulate oligodendrocyte development and myelination of axons in the CNS. NRG1 is also necessary for the formation of neuromuscular junctions – NMJs. NRG1 stimulates CNS synapse formation (Lin Mei & Wen-Cheng Xiong, 2008)

One of the most recent such attempts shines with promise that schizophrenia might be dramatically alleviated: a team of international researchers reversed schizophrenia-like symptoms in adult mice by restoring normal expression of a gene called Neuregulin-1 (NRG1 for short). The protein it encodes is important for brain development; however, previous studies have shown a direct correlation between high levels of NRG1 and schizophrenia.

One might ask: how does one diagnose a mouse with schizophrenia? Does it think it's a cat or a dog? Does it start barking? Jokes aside, after the scientists engineered mice to express higher levels of NRG1, the animals exhibited uncanny schizophrenia-like characteristics: hyperactivity, poor short-term and long-term memory, and a poor ability to ignore distracting background or white noise. When the researchers returned NRG1 levels to normal in adult mice, the schizophrenia-like symptoms went away.

Like patients with schizophrenia, adult mice genetically engineered to have higher NRG1 levels showed reduced activity of the brain messenger chemicals glutamate and γ-aminobutyric acid (GABA). The mice also showed behaviors related to aspects of the human illness. To genetically alter the mice, the scientists put a copy of the NRG1 gene into mouse DNA and, to make sure they could control its expression, placed in front of it a binding site for a protein regulated by doxycycline, a stable analogue of the antibiotic tetracycline (infamous for staining the teeth of fetuses and babies). The mice are born expressing high levels of NRG1, and administering the antibiotic restores normal levels.

With this in mind, it might be possible to alleviate schizophrenia symptoms in humans as well by reducing the expression of the NRG1 gene. The findings were reported in the journal Neuron.


Neurobiologist can see in 3-D after being stuck in 2-D for 48 years [amazing brain adaptation]


Meet Susan Barry. She's an accomplished neurobiologist and a professor of biological studies at Mount Holyoke College. For 48 years of her life, however, Susan was visually stuck in a 2-D world. You see, she was born with her eyes crossed and could only see in two dimensions. Each of our eyes produces an image, and since they're very close to one another and sit in the same plane, unlike those of a horse, for instance, the two images cover more or less the same area but from slightly different angles. The separate images are then processed by the brain, which combines them by matching up similarities and adding up the slight differences. The combined image is more than the sum of its parts: it is a three-dimensional stereo picture.
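The geometry behind those "slight differences" is simple triangulation: depth equals focal length times the distance between the eyes, divided by the disparity (how far a feature shifts between the two images), so nearer objects shift more between views. A small sketch with purely illustrative numbers:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulate: Z = f * B / d. Larger disparity means a nearer object."""
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers only: ~65 mm between human eyes, and a focal length
# in pixels for a hypothetical camera standing in for the retina.
print(depth_from_disparity(1000, 65, 65))   # 1000.0 mm: about 1 m away
print(depth_from_disparity(1000, 65, 6.5))  # 10000.0 mm: a tenth of the shift, ten times farther
```

Without two usable viewpoints there is no disparity to triangulate from, which is exactly the signal Susan's brain was missing.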

Back to Susan, though. Susan had a really tough time growing up, as you might imagine. Because of her condition, she could never focus both her eyes on the same point at the same time – the key to stereoscopic 3-D vision – and was stuck in a flat world. Bottom line: she had a lot of trouble reading, gaining visual perspective, and living a normal life. For many years, physicians believed that stereo vision could only be developed during a critical period in infancy. Susan believed this as well; after all, she was told as much countless times by many different doctors.

As Susan got older, however, her vision worsened. She complained to her optometrist about her eyesight becoming “jittery” and began practicing a series of exercises designed to help stabilize her gaze. Along the way, however, she found that she could see in 3-D! Imagine her surprise – she was 48! Better late than never, I suppose. The feat earned her the moniker “Stereo Sue,” coined by neurologist Oliver Sacks.

What “Stereo Sue” achieved, though, transcends her personal experience, because it shows the brain is capable of much greater adaptation than we tend to credit it with. “If we’re stuck in a rut, it’s because we think we’re stuck in a rut,” says Sue. “We can get better at everything.” Her experience taught her that the brain could adapt – maybe even in some of the ways that we want it to adapt – even beyond the boundaries of the “critical period.”

Check out this 10-questions video with Susan, part of the “Secret Life of Scientists” show on PBS, and don’t miss the bonus beneath the video.



Let’s have some fun. Below is a stereogram, which is basically an image that, when viewed the right way, gives the impression of depth. If you’ve never experienced a stereogram before, you’ll definitely rejoice. I remember being simply astonished the first time I gazed at one. At first glance, the image might not seem like much – a bunch of cows layered up in a seemingly chaotic order. Focus!

Read this to learn how to look at a stereogram first.

animal stereogram


Rats’ brains connect to form an organic computer

In an incredible feat of neuroscience and communications, researchers at Duke University School of Medicine formed a link between pairs of rats by electronically connecting their brains. As such, the rats could exchange motor and tactile information with each other. In one particular case, the experiment showed that a pair of linked rats – one on one continent, the other on another – could still effectively communicate even though they were thousands of miles apart.

Brain to Brain interface

(c) Katie Zhuang, Nicolelis Labs, Duke University

The findings hint at the solid possibility of developing what the researchers call “organic computers”, in which animals share motor or tactile information to solve a problem. Just recently, we reported another breakthrough in the field from the same Duke University scientists, after a rat was granted a sixth sense. The rat in question had its brain adapted to accept input from devices outside the body and even learned how to process invisible infrared light generated by an artificial sensor. Naturally, a puzzling question confronted the researchers: if the brain can be trained to recognize information from an external sensory input, could it also process information from a foreign body?

Yes it can, according to their findings. The researchers first trained pairs of rats to solve a simple problem, in which they were tasked with pressing the right lever when an indicator light above the lever switched on. If the correct action was taken, the rats were rewarded with a sip of water. With this behavior trained, the researchers then connected the two rats’ brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

Here’s where the nifty part starts. One of the rats was designated as the encoder, tasked with pressing the right lever when the visual cue was on, just like in the first experiment. This time, however, an electrical signal encoding the brain activity registered during this behavior was sent directly into the brain of the second rat, the decoder. In its chamber, the decoder rat had the same levers but no visual cues, so it had to rely on the cues sent by the encoder rat. The decoder rat achieved a maximum success rate of about 70 percent, only slightly below the possible maximum of 78 percent theorized by the researchers.
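To see why roughly 70 percent matters: with two levers, a decoder guessing at random would succeed only about 50 percent of the time. A quick binomial tail calculation – a generic statistical illustration with a hypothetical trial count, not the study’s actual analysis – shows how unlikely that hit rate would be by luck alone:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting
    at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical session of 100 trials: the odds of a randomly guessing
# decoder (p = 0.5) scoring 70 or more correct.
print(binom_tail(100, 70))  # well below 0.001
```

The probability comes out on the order of one in tens of thousands, so a sustained 70 percent success rate is far beyond chance.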

It’s worth noting – to get a finer picture of just how solid the brain-to-brain interface between the two rats is – that neither rat would receive a reward if either of them failed to press the correct lever, proving that the communication is two-way.

“We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right,” said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine.

“The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward.”

In a second set of experiments, pairs of rats were again trained, this time to distinguish between a narrow and a wide opening using their whiskers and to signal their choice by nose-poking water ports corresponding to each opening. In this test, the decoder had a success rate of about 65 percent, significantly above expectations.

The two rats don’t even need to be near each other – far from it. To test just how far the brain-to-brain interface can stretch, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. The two rats could still communicate, even though they were on different continents.

“So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate,” said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. “This tells us that it could be possible to create a workable network of animal brains distributed in many different locations.”

Nicolelis added, “These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device. So basically, we are creating an organic computer that solves a puzzle.”

“But in this case, we are not inputting instructions, but rather only a signal that represents a decision made by the encoder, which is transmitted to the decoder’s brain which has to figure out how to solve the puzzle. So, we are creating a single central nervous system made up of two rat brains,” said Nicolelis. He pointed out that, in theory, such a system is not limited to a pair of brains, but instead could include a network of brains, or “brain-net.” Researchers at Duke and at the ELS-IINN are now working on experiments to link multiple animals cooperatively to solve more complex behavioral tasks.

“We cannot predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves,” continued Nicolelis. Such a connection might even mean that one animal would incorporate another’s sense of “self,” he said.

“In fact, our studies of the sensory cortex of the decoder rats in these experiments showed that the decoder’s brain began to represent in its tactile cortex not only its own whiskers, but the encoder rat’s whiskers, too. We detected cortical neurons that responded to both sets of whiskers, which means that the rat created a second representation of a second body on top of its own.” Basic studies of such adaptations could lead to a new field that Nicolelis calls the “neurophysiology of social interaction.”

Findings were published in the journal Scientific Reports.


Meet SPAUN – the most complex artificial human brain yet


A screen capture from a simulation movie of Spaun in action shows the input image on the right. The output is drawn on the surface below Spaun’s arm. Neuron activity is approximately mapped to relevant cortical areas and shown in color (red is high activity, blue is low). (c) Chris Eliasmith

Needless to say, the human brain is the most complex neural structure we have encountered so far. While a computer can outwork a human in many cognitive tasks, our brain can perform a variety of tasks whose surface no computing machine can even scratch. Just think a bit about imagination – how could a computer ever come close to generating a single, original, uninfluenced idea by itself? By becoming a human brain. Alas, technology is decades away from achieving such a feat, but efforts are constantly being made.

A milestone in this quest to build human-like artificial intelligence was recently reached when researchers at the University of Waterloo unveiled to the world SPAUN, or Semantic Pointer Architecture Unified Network – the largest and most complex working model of the human brain yet built.

SPAUN is not a robot, though; SPAUN is a simulation that lives inside a computer. There it resides in a simulated world, with simulated physics matching our own. It can think, remember, see using its 28×28 (784-pixel) camera, and even write using its mechanical arm. For example, show it the number “3” and it will write its own “3”, even mimicking the style of the numeral in the process.

“It has been interesting to see the reactions people have had to Spaun. Even seasoned academics have not seen brain models that actually perform so many tasks. Models are typically small, and focus on one function,” said Chris Eliasmith in a statement.

An artificial mind stuck in an artificial world

SPAUN isn’t that smart, though. In many respects, it’s actually less intelligent than a monkey, which can do more general recognition than this model does, Eliasmith said.

“It’s not as smart as monkeys when it comes to categorization, but it’s actually smarter than monkeys when it comes to recognizing syntactic patterns, structured patterns in the input, that monkeys won’t recognize,” Eliasmith said.

Ask a monkey to fill in the blank in a pattern like 1, 11, 111; 3, 33, 333; 4, 44, _____ and it won’t be able to do it. SPAUN, however, can figure this out, showcasing a hallmark of intelligence.


Example input and output from Spaun. a) Handwritten numbers used as input. b) Numbers drawn by Spaun using its arm.

SPAUN has 2.5 million artificial neurons, broken down into a number of simulated brain subsystems, including the prefrontal cortex, basal ganglia, and thalamus. Neurons are the individual building blocks that make up the brain, and these cells communicate by changing their voltages. The pattern of these voltage “spikes” is what transmits information from cell to cell, and SPAUN’s brain works in much the same manner. The typical human brain, however, has some 100 billion neurons – not quite close, and it kinda shows.
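The “voltage spike” idea can be captured in a few lines. Below is a toy leaky integrate-and-fire neuron – a deliberately simplified sketch in the same spirit as spiking models, not SPAUN’s actual implementation (which is built with Eliasmith’s Neural Engineering Framework):

```python
def lif_spikes(input_current, threshold=1.0, leak=0.9, steps=50):
    """Minimal leaky integrate-and-fire neuron: the membrane voltage
    accumulates input, leaks a little each step, and emits a spike
    (then resets) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for t in range(steps):
        v = leak * v + input_current
        if v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after spiking
    return spikes

print(lif_spikes(0.3))  # spikes at t = 3, 7, 11, ... (every 4 steps)
```

With a constant input, the voltage climbs, leaks, and fires at a regular rhythm; it is the timing pattern of such spikes, not the voltages themselves, that carries the information.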

For instance, SPAUN isn’t able to perform tasks in real time. Every second in the demonstration video below equates to 2.5 hours of processing time.
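Two quick back-of-the-envelope ratios put the article’s figures in perspective (numbers taken directly from the text):

```python
# ~100 billion neurons in a human brain vs. SPAUN's 2.5 million:
neuron_gap = 100e9 / 2.5e6
# One simulated second takes about 2.5 hours of wall-clock time:
slowdown = (2.5 * 3600) / 1
print(neuron_gap)  # 40000.0 — the brain is ~40,000x larger
print(slowdown)    # 9000.0  — SPAUN runs ~9,000x slower than real time
```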

Now, you’ve most likely heard about computer models of neurons before, but what makes SPAUN special is that its neurons actually interpret the patterns of the signals they’re firing. This is an important distinction from computing monsters like IBM’s Watson or brain simulations that rely on mathematical abstraction. You may think your iPhone’s Siri app is smarter than SPAUN, and in some respects it is, but the way it works has absolutely nothing in common with the human brain.

“The reason that the Spaun model is so compelling, is that it brings all of this work together,” Mr. Eliasmith said. “Human cognition isn’t interesting because we can recognize visual patterns […] move our arms in an integrated way […] solve a particular task or puzzle. It’s interesting because we can do all of this with the same brain, in any order, and at any time.”

If SPAUN is to become more like a human brain, it will need to learn new tasks by itself, and… make mistakes. The researchers are working on exactly this: they’re developing a way for SPAUN’s neurons to be capable of adaptive plasticity – the ability to rewire when performing tasks, essentially learning by doing. Currently, SPAUN is only capable of performing pre-programmed tasks.

Sitting at the forefront of engineering, computer science, biology, philosophy, psychology, and statistics, SPAUN is a most impressive system indeed, and a most useful one as well. By looking at how SPAUN performs simple tasks, neuroscientists can better zoom in on the processes that underlie them and better understand how the human brain evolved to its current complexity. It is also capable of offering insight into how the brain deals with problems such as stroke or Alzheimer’s.

“There are not only deep philosophical questions you can approach using this work — such as how the mind represents the world – but there are also very practical questions you can address about the diseased brain,” Mr. Eliasmith noted. “I believe that critical innovations are going to come from basic research like this. I can’t predict what specific industry or company is going to use this work or how — but I can list a lot that might.”

Will we ever witness a self-aware, dare I say conscious, artificial entity during our lifetimes? Eliasmith isn’t sure, since he and his team are still “miles away,” according to the scientist, but this is what they’re working on.

Findings were published in the journal Science.

[via PopSci]


Cognitive computing milestone: IBM simulates 530 billion neurons and 100 trillion synapses

First initiated by IBM in 2008, the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program has the final goal of developing a new cognitive computer architecture based on the human brain. Recently, IBM announced it has reached an important milestone for the program, after the company successfully simulated 530 billion neurons and over 100 trillion synapses on one of the world’s most powerful supercomputers.

It’s worth noting, however, before you get too excited, that the IBM researchers have not built a biologically realistic simulation of the complete human brain – that is still a goal many years away. Instead, the scientists devised a cognitive computing architecture called TrueNorth, with 10^10 neurons (10 billion) and 10^14 synapses (100 trillion), that is inspired by the scale of the human brain and is modular, scalable, non-von Neumann, and ultra-low power. The researchers hope that in the future this essential step might allow them to build an electronic neuromorphic machine technology that scales to biological levels.

 “Computation (‘neurons’), memory (‘synapses’), and communication (‘axons,’ ‘dendrites’) are mathematically abstracted away from biological detail toward engineering goals of maximizing function (utility, applications) and minimizing cost (power, area, delay) and design complexity of hardware implementation,” reads the abstract for the Supercomputing 2012 (SC12) paper (full paper link).

Steps towards mimicking the full-power of the human brain


Authors of the IBM paper(Left to Right) Theodore M. Wong, Pallab Datta, Steven K. Esser, Robert Preissl, Myron D. Flickner, Rathinakumar Appuswamy, William P. Risk, Horst D. Simon, Emmett McQuinn, Dharmendra S. Modha (Photo Credit: Hita Bambhania-Modha)

IBM simulated the TrueNorth system running on the world’s fastest operating supercomputer, the Lawrence Livermore National Lab (LLNL) Blue Gene/Q Sequoia, using 96 racks (1,572,864 processor cores, 1.5 PB memory, 98,304 MPI processes, and 6,291,456 threads).

IBM and LLNL achieved an unprecedented scale of 2.084 billion neurosynaptic cores containing 53×10^10 (530 billion) neurons and 1.37×10^14 (137 trillion) synapses, running only 1,542 times slower than real time.
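For a sense of scale, dividing the reported totals through gives the average load per core and per neuron (the totals are from the paper; the per-core and per-neuron averages are simple derived estimates):

```python
cores = 2.084e9      # neurosynaptic cores in the simulation
neurons = 53e10      # 530 billion neurons
synapses = 1.37e14   # 1.37 x 10^14 synapses
print(round(neurons / cores))     # 254 — neurons per core, on average
print(round(synapses / neurons))  # 258 — synapses per neuron, on average
```

For comparison, real cortical neurons average on the order of thousands of synapses each, so even this record-setting run remains far sparser than biology.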


The tiny neurosynaptic core produced by IBM. (c) IBM

“Previously, we have demonstrated a neurosynaptic core and some of its applications,” continues the abstract. “We have also compiled the largest long-distance wiring diagram of the monkey brain. Now, imagine a network with over 2 billion of these neurosynaptic cores that are divided into 77 brain-inspired regions with probabilistic intra-region (“gray matter”) connectivity and monkey-brain-inspired inter-region (“white matter”) connectivity.

“This fulfills a core vision of the DARPA SyNAPSE project to bring together nanotechnology, neuroscience, and supercomputing to lay the foundation of a novel cognitive computing architecture that complements today’s von Neumann machines.”

According to Dr. Dharmendra S. Modha, IBM’s cognitive computing manager, his team’s goal is to mimic the processes of the human brain. While IBM’s competitors focus on computing systems that mimic the left part of the brain, processing information sequentially, Modha is working on replicating functions of the right part of the brain, where information can be processed in parallel and where incredibly complex brain functions lie. To this end, the researchers combine neuroscience and supercomputing.

Consider that the room-sized, cutting-edge, billion-dollar technology used by IBM to scratch the surface of artificial human cognition still doesn’t come near the capabilities of our brain, which occupies a volume comparable to a 2L bottle of water and needs less power than a light bulb to work. The video below features Dr. Modha explaining his project in an easy-to-understand manner, and it’s only 5 minutes long.

source: KurzweilAI


Brain scans of rappers offer valuable insight on creativity

Freestyle rapping is perhaps the most prized skill in hip hop – the ability to make rhymes on the fly – and it’s usually how rappers “duel”: the one who delivers the better insults wins.

But neuroscientists Siyuan Liu and Allen Braun didn’t go to a rap show – they brought the rap show to the lab. They and their team had 12 rappers freestyle inside a functional magnetic resonance imaging (fMRI) machine. The artists were also asked to recite memorized lyrics chosen by the scientists. By comparing brain activity during recitation from memory with activity during improvisation, the team could see which areas of the brain are engaged in improvisation – and are linked to creativity.

This study complements one conducted by Braun and Charles Limb, a doctor and musician at Johns Hopkins University in Baltimore, Maryland, who did the same thing with jazz musicians while they improvised. Both sets of artists showed increased activity in a part of the frontal lobes called the medial prefrontal cortex. It can also be inferred that areas inactive during improvisation are less involved in the creative process.

“We think what we see is a relaxation of ‘executive functions’ to allow more natural de-focused attention and uncensored processes to occur that might be the hallmark of creativity,” says Braun.

Rex Jung, a clinical neuropsychologist at the University of New Mexico in Albuquerque, has also put a lot of effort into understanding the links between the brain and creativity, and he believes the highlighted areas are active in all creative processes, not only in music.

“Some of our results imply this downregulation of the frontal lobes in service of creative cognition. [The latest paper] really appears to pull it all together,” he says. “I’m excited about the findings.”

Michael Eagle, a study co-author who raps in his spare time and provided the inspiration for this study, believes the creative process happens somewhere outside of “conscious awareness”:

“That’s kind of the nature of that type of improvisation. Even as people who do it, we’re not 100% sure of where we’re getting improvisation from.”

The next step in the research, however, will require something other than freestyle rapping; the neuroscientists want to find out what happens after that first burst of creativity.

“We think that the creative process may be divided into two phases,” he says. “The first is the spontaneous improvisatory phase. In this phase you can generate novel ideas. We think there is a second phase, some kind of creative processing [in] revision.”


Humans are capable of short-term precognition, study finds


No, fortune telling isn’t real, but a recent study which examines various research from the past 30 years has found that humans possess a yet to be explained innate biological ability to anticipate events before they happen, despite the lack of obvious sensory cues.

How many times have you found yourself anticipating a certain event shortly before it happened? Whether you guessed someone was going to look your way before it happened, or you reached to catch a bottle just as it began to fall off a table, your subconscious seems to dictate actions while your conscious psyche remains puzzled as to what triggered them. Some call it intuition; researchers at Northwestern University call it “anomalous anticipatory activity”.

It’s rather common for humans to anticipate an impending storm just by looking at cloud formations and sensing the wetness in the air, but this can’t be classed as precognition per se, according to the researchers, since the conclusion is based on sensory cues. In a new meta-study analyzing the results of 26 studies published between 1978 and 2010, the scientists note that even without obvious sensory cues, the human body is able to react preemptively.

“Physiological measures of subconscious arousal, for instance, tend to show up before conscious awareness,” explained the review’s lead author Julia Mossbridge. “What hasn’t been clear is whether humans have the ability to predict future important events even without any clues as to what might happen.”

How can you explain instinct?

The studies compiled by the authors examined various events, such as presentations of arousing versus neutral stimuli, or guessing games with correct versus incorrect feedback. The results weren’t measured through the volunteers’ verbal or behavioral responses, but through physiological activity of the skin, heart, blood, eyes, and brain. The findings across most of the studies seem consistent with the idea that humans – like other animals, most likely – can subconsciously anticipate events, even though they can’t consciously articulate these future occurrences.
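Meta-analyses of this kind pool evidence across many small studies. One textbook way to do that – shown here as a generic illustration with made-up numbers, not the authors’ actual procedure – is Stouffer’s method, which combines each study’s z-score into a single overall z:

```python
from math import sqrt

def stouffer_z(z_scores):
    """Stouffer's method: combine independent studies' z-scores.
    The pooled z grows with the square root of the study count,
    so many weak results can add up to strong combined evidence."""
    return sum(z_scores) / sqrt(len(z_scores))

# Nine hypothetical studies, each individually unconvincing (z < 1.5):
print(stouffer_z([1.1, 0.8, 1.4, 0.9, 1.2, 1.0, 0.7, 1.3, 1.1]))
# ≈ 3.17, well past the conventional 1.96 significance threshold
```

This is why a consistent but small effect across 26 studies can look compelling in aggregate even when no single study is decisive.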

 “I like to call the phenomenon ‘anomalous anticipatory activity,'” Mossbridge said. “The phenomenon is anomalous, some scientists argue, because we can’t explain it using present-day understanding about how biology works; though explanations related to recent quantum biological findings could potentially make sense. It’s anticipatory because it seems to predict future physiological changes in response to an important event without any known clues, and it’s an activity because it consists of changes in the cardiopulmonary, skin and nervous systems.”

Mossbridge offers an example of one such study scenario, in which a man playing video games and wearing headphones at work shouldn’t be able to tell when a supervisor comes around the corner.

 “But our analysis suggests that if you were tuned into your body, you might be able to detect these anticipatory changes between two and 10 seconds beforehand and close your video game,” she explained. “You might even have a chance to open that spreadsheet you were supposed to be working on. And if you were lucky, you could do all this before your boss entered the room.”

The researchers are far from claiming humans can sense the future; however, they do conclude that the presentiment phenomenon is very much real, though still unexplained.

“If this seemingly anomalous anticipatory activity is real, it should be possible to replicate it in multiple independent laboratories,” she and her co-authors write. “The cause of this anticipatory activity, which undoubtedly lies within the realm of natural physical processes (as opposed to supernatural or paranormal ones), remains to be determined.”

Findings were published in the journal Frontiers in Perception Science.