
These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and real space. The field is built on evolutionary computing, in which robots carrying a virtual genome ‘mate’ to ‘reproduce’ improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats far beyond human abilities. But what happens when unstable situations or environments call for robots never seen on Earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion-dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there were a better alternative? What if there were a type of artificial intelligence that could take lessons from evolution to generate robots that can adapt to their environment? It sounds like something from a sci-fi novel — but it’s exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary theory of variation and selection, these robots can optimize their descendants depending on a set of activities over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” The approach offers a way to explore evolutionary principles by posing an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a family of algorithms inspired by biological evolution, in which candidate solutions are generated and iteratively “evolved”. Each new generation discards the least desirable solutions and introduces small adaptive changes, or mutations, producing a digital version of survival of the fittest. It’s a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
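To make the loop concrete, here is a minimal sketch of an evolutionary algorithm in Python. The genome encoding, fitness function, and parameters are illustrative assumptions for this article, not ARE’s actual implementation.

```python
import random

# Toy genome: a list of floats standing in for traits like limb length or
# sensor gain. ARE's real genomes encode whole bodies and brains; this only
# illustrates the evolutionary loop itself.
GENOME_LEN = 8
POP_SIZE = 20
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder objective standing in for a real task score,
    # e.g. how much radioactive waste a robot cleared.
    return sum(genome)

def crossover(a, b):
    # Uniform recombination: each gene is inherited from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome):
    # Occasional small perturbations play the role of mutation.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(50):
    # Selection: only the fitter half of the population reproduces.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(f"best fitness after 50 generations: {fitness(best):.2f}")
```

Swap in a fitness function scored by a physical trial, and this same loop becomes the backbone of evolving real machines.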

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two parent robots come together to mingle their virtual genomes, creating improved young that incorporate both parents’ genetic code.

The newly evolved offspring is built autonomously via a 3D printer, after which a mechanical assembly arm, reading the inherited virtual genome, selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the artificial system wires up a Raspberry Pi computer, acting as a brain, to the sensors and motors – software representing the evolved brain is then downloaded from both parents.

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo a period of brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants due to breeding between different species. For example, a parent with wheels might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But the inherited brain may struggle to control the new body, so as part of the learning stage an algorithm refines the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they can proceed to the next phase: testing.

2. Selection of the fittest: who can reproduce?

For testing, ARE uses a specially built inert nuclear reactor housing where young robots must identify and clear radioactive waste while avoiding various obstacles. After completing the task, the system scores each robot according to its performance, and uses that score to determine who will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.

Evolutionary roboticist and ARE researcher Guszti Eiben says this sped-up evolution has a key advantage: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

This parallel universe entails the creation of a digital version of every mechanical infant in a simulator once mating has occurred, which enables the ARE researchers to build and test new designs within seconds, identifying those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard.” She adds: “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may have more immediate uses. And as climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve even further as we step back and hand them the reins of their own virtual lives. How this will affect the human race remains to be seen.

The FDA finally approved a condom for anal sex. Here’s why it’s a good thing

Whether you’re in a committed relationship or prone to the throes of lust (or both, we’re not judging), you need to protect yourself and your partner — which usually means using a condom.

Still, as humans tend to be, we’re not always careful. We like to experiment, we sometimes falter — and pick up sexually transmitted diseases (STDs). Whatever the reason, condoms are a great way to stay safe, can be used by people of the appropriate age just about anywhere – and they can also be lots of fun. Now, there’s a new type of condom on the block.

A victory for all genders and denominations

There’s never been an approved condom specifically for anal intercourse. Until now, condoms on the market were only approved for vaginal intercourse, which omits a large section of our society.

Condoms for vaginal sex currently on the market are recommended for use during anal or oral intercourse by the Centers for Disease Control and Prevention (CDC) – meaning they’re legally backed by a drug agency for one activity and informally deemed effective for another in what is known as ‘off-label’ use. But the US Food and Drug Administration (USFDA) has finally approved the first condom for anal sex: the ONE Male Condom.

The approval is seen as a victory for sexual health and especially important for the LGBTQ community, who, until now, have not had a condom aimed specifically at them. Courtney Lias, director of the USFDA’s Office of GastroRenal, Obstetrics-Gynecological, General Hospital, and Urology Devices, says:

“The risk of STI transmission during anal intercourse is significantly higher than during vaginal intercourse. The FDA’s authorization of a condom that is specifically indicated, evaluated, and labeled for anal intercourse may improve the likelihood of condom use during anal intercourse.” 

What’s different about this condom

The newly approved condom is a natural rubber latex sheath that covers the penis. It’s available in three versions: standard, thin, and fitted. The fitted condoms, available in 54 different sizes, come with a paper template to find the best condom size for each user and minimize leakage. Global Protection Corp, which makes the condom, stresses that during anal intercourse users should pair the condom with a compatible lubricant – advice that applies to all other condom brands too.

“We want people to have lots of sex — but we also want them to be empowered and informed,” said Davin Wedel, president of Global Protection Corp.

Scientists studied the safety and efficacy of the condom in a clinical trial of 252 men who have sex with men and 252 men who have sex with women. All volunteers were between 18 and 54 years of age.

Results show the total condom failure rate was 0.68% for anal sex and 1.89% for vaginal intercourse. Researchers defined the condom failure rate as the number of slippage events, breakage events, or both, over the total number of sex acts recorded in a diary by participants.
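For clarity, that definition is a simple ratio. A minimal sketch with made-up counts (the article reports only the resulting percentages, not the underlying event numbers):

```python
# Hypothetical counts for illustration only; the trial's raw event counts
# are not given in this article.
failures = 5        # slippage and/or breakage events recorded in diaries
total_acts = 735    # total sex acts logged by participants

failure_rate = 100 * failures / total_acts
print(f"failure rate: {failure_rate:.2f}%")  # 5 / 735 -> 0.68%
```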

Disappointingly, the trial didn’t establish an STD baseline, as too many variables (such as not wearing a condom) could cause infection during the trial. In other words, the rate of STDs was not measured at the start of the study and compared with later data. The trial center did, however, allow participants to self-report any genital infections that could have resulted from the use of a different condom brand before or during the tests.

The researchers from Emory University who were behind the study said essential reasons for the trial’s success were that volunteers used lubricant, which prevents slippage and breakage, and that clear instructions were included.

Taken together, these findings suggest that health bodies should provide lubricant along with the billions of condoms distributed as part of HIV and STD prevention efforts to minimize failure. 

The USFDA will help get more condoms like these on the market

The USFDA is responsible for controlling and supervising food, tobacco, dietary supplements, prescription drugs, blood transfusions, medical devices, cosmetics, and animal & veterinary products. It does this by inspecting manufacturing premises and reviewing a product’s safety and effectiveness – established through extensive clinical trials that can last over a decade – before a business can sell it on the market.

The De Novo pathway is the more rigid classification: under its terms, the submitting company must prove that its product presents no more than a ‘medium risk’ to humans. In contrast, under a 510(k) submission, an organization only has to show its device presents no more risk to human health than an approved equivalent product – even where the marketed product has been deemed dangerous. De Novo submissions are also more expensive than 510(k) submissions.

Surprisingly, even though the ONE condom is already approved by the USFDA using the flexible 510(k) category for vaginal sex, the agency has cleared the new product for anal sex through the De Novo pathway. This fact certainly raises questions regarding the lack of equivalency between condoms used for vaginal sex and anal sex.

On a positive note, they have established special controls so that other devices can now show equivalence to the ONE condom using a 510(k) classification to receive quicker clearance without the need for clinical trials. 

In its press release, the USFDA said the green light could pave the way for more condom makers to apply for faster approval if they show equivalent results. They add that they expect authorization of the ONE Male Condom to help reduce the transmission of STDs, including HIV/AIDS in both anal and vaginal intercourse.

All approved condoms are an easy way to protect yourself

Experts remind all sexually-active couples that they can still use other approved condoms on the market during anal sex:

“This isn’t a groundbreaking advancement in my opinion. All condoms can (and should!) be used to make anal sex safer, so just because this one brand has FDA approval doesn’t make it any better than other condom brands on the market,” obstetrician-gynecologist and author Jennifer Lincoln, who wasn’t part of the trial, told PopSci. “Don’t let the ‘FDA approved’ label sway you when you are at the grocery store—the best condom to use for safe sex is the one you have access to and the one you will actually use.”

Still, this is a galvanizing moment for the LGBTQ movement.

“This authorization helps us accomplish our priority to advance health equity through the development of safe and effective products that meet the needs of diverse populations. This De Novo authorization will also allow subsequent devices of the same type and intended use to come to the market through the 510k pathway, which could enable the devices to get on the market faster,” Lias added in the USFDA statement.

It remains to be seen whether this will trigger a longer-term movement. In the meantime, stay safe.

Gut bacteriophages associated with improved cognitive function and memory in both animals and humans

A growing body of evidence has implicated gut bacteria in regulating neurological processes such as neurodegeneration and cognition. Now, a study from Spanish researchers shows that viruses present in the gut microbiota are also associated with improved mental function in flies, mice, and humans.

Credit: CDC.

Viruses assimilate easily into their human hosts: 8% of our DNA consists of ancient viruses, and another 40% contains genetic code thought to be viral in origin. As it stands, the gut virome (the combined genome of all viruses housed within the intestines) is a crucial but commonly overlooked component of the gut microbiome.

But we’re not entirely sure what it does.

This viral community consists chiefly of bacteriophages, viruses that infect bacteria and can transfer genetic code to their bacterial hosts. Remarkably, the integration of bacteriophages, or phages, into their hosts is so stable that over 80% of all bacterial genomes on Earth now contain prophages, permanent phage DNA carried as part of their own — including the bacteria inside us humans. Now, researchers are inching closer to understanding the effects of this phenomenon.

Gut and brain

In their paper published in the journal Cell Host & Microbe, a multi-institutional team of scientists describes the impact of phages on executive function, a set of cognitive processes and skills that help an individual plan, monitor, and successfully execute their goals. These fundamental skills include adaptable thinking, planning, self-monitoring, self-control, working memory, time management, and organization, the regulation of which is thought, in part, to be controlled by the gut microbiota.

The study focuses on the Caudovirales order and the Microviridae family of bacteriophages, which dominate the human gut virome and contain over 2,800 species of phages between them.

“The complex bacteriophage communities represent one of the biggest gaps in our understanding of the human microbiome. In fact, most studies have focused on the dysbiotic process only in bacterial populations,” write the authors of the new study.

Specifically, the scientists showed that volunteers with increased Caudovirales levels in the gut microbiome performed better in executive processes and verbal memory. In comparison, the data showed that increased Microviridae levels were linked to impaired executive abilities. Simply put, there seems to be an association between the makeup of the gut virome and higher cognitive function.

Levels of these two prevalent bacteriophage groups run parallel to human host cognition, the researchers write, and the phages may exert their effects by hijacking their bacterial hosts’ metabolism.

To reach this conclusion, the researchers first tested fecal samples from 114 volunteers and then validated the results in another 942 participants, measuring levels of both types of bacteriophage. They also gave each volunteer memory and cognitive tests to identify a possible correlation between the levels of each species present in the gut virome and skill levels.

The researchers then studied which foods may transport these two kinds of phage into the human gut; the results indicated that the most common route appeared to be through dairy products.

They then transplanted fecal samples from the human volunteers into the guts of fruit flies and mice, after which they compared the animals’ executive function with control groups. As with the human participants, animals transplanted with high levels of Caudovirales tended to do better on the tests – showing increased scores in object recognition in mice and up-regulated memory-promoting genes in the prefrontal cortex. Improved memory scores and upregulation of memory-involved genes were also observed in fruit flies harboring higher levels of these phages.

Conversely, higher Microviridae levels (correlated with increased fat levels in humans) downregulated these memory-promoting genes in all animals, stunting their performance in the cognition tests. Therefore, the group surmised that bacteriophages warrant consideration as a novel dietary intervention in the microbiome-brain axis.

Regarding this intervention, Arthur C. Ouwehand, Technical Fellow, Health and Nutrition Sciences, DuPont, who was not involved in the study, told Metafact.io:

“Most dietary fibres are one way or another fermentable and provide an energy source for the intestinal microbiota,” leading “to the formation of beneficial metabolites such as acetic, propionic and butyric acid.”

He goes on to add that “These so-called short-chain fatty acids may also lower the pH of the colonic content, which may contribute to an increased absorption of certain minerals such as calcium and magnesium from the colon. The fibre fermenting members of the colonic microbiota are in general considered beneficial while the protein fermenting members are considered potentially detrimental.”

It would certainly be interesting to identify which foods are acting on bacteriophages contained within our gut bacteria to influence cognition.

The researchers acknowledge that their work does not conclusively prove that phages in the gut can impact cognition: the test scores could have resulted from differing bacterial levels in the gut, although they suggest a phage effect seems likely. They close by stating that more work is required to prove the case.

Brain scans are saving convicted murderers from death row–but should they?

Over a decade ago, a brain-mapping technique known as a quantitative electroencephalogram (qEEG) was first used in a death penalty case, helping keep a convicted killer and serial child rapist off death row. It achieved this by swaying jurors that traumatic brain injury (TBI) had left him prone to impulsive violence.

In the years since, qEEG has remained in a weird stasis, inconsistently accepted in a small number of death penalty cases in the USA. In some trials, prosecutors fought it as junk science; in others, they raised no objections to the imaging: producing a case history built on sand. Still, this handful of test cases could signal a new era where the legal execution of humans becomes outlawed through science.

Quantifying criminal behavior to prevent it

As it stands, if science cannot quantify or explain every event or action in the universe, then we remain in chaos with the very fabric of life teetering on nothing but conjecture. But DNA evidentiary status aside, isn’t this what happens in a criminal court case? So why is it so hard to integrate verified neuroimaging into legal cases? Of course, one could make a solid argument that it would be easier to simply do away with barbaric death penalties and concentrate on stopping these awful crimes from occurring in the first instance, but this is a different debate.

The problem is more complex than it seems. Neuroimaging could be used not just to exempt the mentally ill from the death penalty but also to explain horrendous crimes to the victims or their families. And just as crucial, could governments start implementing measures to prevent this type of criminal behavior using electrotherapy or counseling to ‘rectify’ abnormal brain patterns? This could lead down some very slippery slopes.

And it’s not just death row cases that are raising questions about qEEG — nearly every injury lawsuit in the USA now includes a TBI claim. With magnetic resonance imaging (MRI) and computed tomography (CT) scans being generally expensive, lawyers are constantly seeking new ways to prove brain dysfunction. Readers should note that both of these neuroimaging techniques are viewed as more accurate than qEEG but can only provide a single, static image of the neurological condition – and thus provide no direct measurement of functional, ongoing brain activity.

In contrast, the cheaper and quicker qEEG testing purports to continuously monitor active brain activity to diagnose many neurological conditions, and could one day flag those more inclined to violence, enabling early interventional therapy sessions and one-to-one help focused on preventing the problem.

But until we reach that point as a society, defense and human rights lawyers have been attempting to slowly phase out legal executions by using brain mapping to explain why their convicted clients may have committed these crimes – gradually moving from addressing the consequences of mental illnesses and disorders to understanding these conditions better.

The sad case of Nikolas Cruz

But the questions surrounding this technology will soon be on trial again in the most high-profile death penalty case in decades: Florida vs. Nikolas Cruz. On the afternoon of February 14, 2018, Cruz opened fire on school children and staff at Marjory Stoneman Douglas High in Parkland when he was just 19 years of age. In what is now classed as the deadliest school shooting in the country’s history, the state charged the former Stoneman Douglas student with the premeditated murder of 17 school children and staff members and the attempted murder of a further 17 people.

With the sentencing expected in April 2022, Cruz’s defense lawyers have enlisted qEEG experts as part of their case to persuade jurors that brain defects should spare him the death penalty. The Broward State Attorney’s Office signaled in a court filing last month that it will challenge the technology and ask a judge to exclude the test results—not yet made public—from the case.

Cruz has already pleaded guilty to all charges, but a jury will now debate whether to hand down the death penalty or life in prison.

According to a court document filed recently, Cruz’s defense team intends to ask the jury to consider mitigating factors. These include his tumultuous family life, a long history of mental health disorders, brain damage caused by his mother’s drug addiction, and claims that a trusted peer sexually abused him—all expected to be verified using qEEG.

After reading the flurry of news reports on the upcoming case, one can’t help but wonder why, even without the use of qEEG, someone with a record of mental health issues at only 19 years old should be on death row. And as authorities and medical professionals were aware of Cruz’s problems, what were the preventative failings that led to him murdering seventeen individuals? Have these even been addressed or corrected? Unlikely.

On a positive note, prosecutors in several US counties have not opposed brain mapping testimony in more recent years. According to Dr. David Ross, CEO of NeuroPAs Global and a qEEG expert, the reason is that more scientific papers and research over the years have validated the test’s reliability, helping the technique gain broader use in the diagnosis and treatment of cognitive disorders, even though courts are still debating its effectiveness. “It’s hard to argue it’s not a scientifically valid tool to explore brain function,” Ross stated in an interview with the Miami Herald.

What exactly is a quantitative electroencephalogram (qEEG)?

To explain what a qEEG is, you must first know what an electroencephalogram, or EEG, does. An EEG records the electrical potential difference between pairs of electrodes placed on the outside of the scalp, providing the analog data on which computerized qEEG is based. Multiple electrodes (generally more than 20) are connected in pairs to form various patterns called montages, resulting in a series of paired channels of EEG activity. The results appear as squiggly lines on paper—brain wave patterns that clinicians have used for decades to detect evidence of neurological problems.

More recently, trained professionals have computerized this data to create qEEG – translating raw EEG data using mathematical algorithms to help analyze brainwave frequencies. Clinicians then compare this statistical analysis against a database of standard or neurotypical brain types to discern those with abnormal brain function that could cause criminal behavior in death row cases.
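In outline, the computation looks something like the sketch below: estimate the power in each clinical frequency band from the raw signal, then express it as a z-score against a normative database. The sampling rate, bands, and normative values here are illustrative assumptions; real clinical pipelines involve much more (artifact rejection, montage handling, validated age-matched databases).

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate, in Hz

# Simulated single-channel EEG standing in for a real recording.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 10e-6, FS * 60)  # 60 seconds of signal, in volts

# Classic clinical frequency bands, in Hz.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

# Invented normative mean/standard deviation of relative band power;
# real qEEG systems compare against large proprietary databases.
NORMS = {"delta": (0.25, 0.05), "theta": (0.20, 0.05),
         "alpha": (0.35, 0.07), "beta": (0.20, 0.05)}

# Estimate the power spectral density, then reduce it to band powers.
freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
total = np.trapz(psd, freqs)

for band, (lo, hi) in BANDS.items():
    mask = (freqs >= lo) & (freqs < hi)
    rel = np.trapz(psd[mask], freqs[mask]) / total
    mean, sd = NORMS[band]
    z = (rel - mean) / sd  # deviation from the normative database
    print(f"{band:>5}: relative power {rel:.2f}, z = {z:+.1f}")
```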

Such comparisons can be informative, but results can still go awry due to incorrect electrode placement, imaging artifacts, inadequate band filtering, drowsiness, comparisons against inappropriate control databases, and the choice of timeframes. Furthermore, processing can yield a large amount of clinically irrelevant data. These are some of the reasons the usefulness of qEEG remains controversial despite the volume of published research. However, many of these discrepancies can be corrected simply by using trained medical professionals to operate the apparatus and interpret the data.

Just one case is disrupting the use of this novel technology

Yet, despite this easy correction, qEEG is not generally accepted by the relevant scientific community to diagnose traumatic brain injuries and is therefore inadmissible under Frye v. United States. An archaic case from 1923 based on a polygraph test, the trial came a mere 17 years after Cajal and Golgi won a Nobel Prize for producing slides and hand-drawn pictures of neurons in the brain.

Experts could also argue that a lie detector test (measuring blood pressure, pulse, respiration, and skin conductivity) is far removed from a machine monitoring brain activity. Furthermore, when the Court of Appeals of the District of Columbia decided on this lawsuit, qEEG didn’t exist. 

Applying the Frye standard, courts throughout the country have excluded qEEG evidence in the context of alleged brain trauma. For example, the Florida Supreme Court has formally noted that, for the purposes of Frye, the relevant scientific community holds that “qEEG is not a reliable method for determining brain damage and is not widely accepted by those who diagnose a neurologic disease or brain damage.”

However, in a seminal paper covering the use of qEEG in cognitive disorders, the American Academy of Neurology (AAN) overall felt computer-assisted diagnosis using qEEG is an accurate, inexpensive, easy-to-handle tool that represents a valuable aid for diagnosing, evaluating, following up, and predicting response to therapy — despite its earlier opposition to the technology. The paper also features other neurological associations validating the use of this technology.

The introduction of qEEG on death row was not that long ago

Only recently introduced, the technology was first deemed admissible in court during the death-penalty prosecution of Grady Nelson in 2010. Nelson stabbed his wife 61 times with a knife, then raped and stabbed her 11-year-old intellectually disabled daughter and her 9-year-old son. The woman died, while her children survived. Documents state that Nelson’s wife found out he had been sexually abusing both children for many years and sought to keep them away from him.

Nelson’s defense argued that earlier brain damage had left him prone to impulsive behavior and violence. Prosecutors fought to strike the qEEG test from evidence, contending that the science was unproven and misused in this case.

“It was a lot of hocus pocus and bells and whistles, and it amounted to nothing,” the prosecutor on the case, Abbe Rifkin, stated. “When you look at the facts of the case, there was nothing impulsive about this murder.”

However, after hearing the testimony of Dr. Robert W. Thatcher, a multi-award-winning pioneer in qEEG analysis testifying for the defense, Judge Hogan-Scola found qEEG met the legal prerequisites for reliability. She based this on the Frye and Daubert standards, the two key tests governing the admissibility of scientific evidence.

She allowed jurors to hear the qEEG report and even permitted Thatcher to present a computer slide show of Nelson’s brain with an explanation of the effects of frontal lobe damage at the sentencing phase. He testified that Nelson exhibited “sharp waves” in this region, typically seen in people with epilepsy – explaining that Grady doesn’t have epilepsy but does have a history of at least three TBIs, which could explain the abnormality seen in the EEG.  

Interpreting the data, Thatcher also told the court that the frontal lobes, located directly behind the forehead, regulate behavior. “When the frontal lobes are damaged, people have difficulty suppressing actions … and don’t understand the consequences of their actions,” Thatcher told ScienceInsider.

Jurors rejected the death penalty. Two jurors who agreed to be interviewed by a major national publication later categorically stated that the qEEG imaging and testimony influenced their decision.

“The moment this crime occurred, Grady had a broken brain,” his defense attorney, Terry Lenamon, said. “I think this is a huge step forward in explaining why people are broken—not excusing it. This is going to go a long way in mitigating death penalty sentences.”

On the other hand, Charles Epstein, a neurologist at Emory University in Atlanta who testified for the prosecution, states that the qEEG data Thatcher presented relied on flawed statistical analysis riddled with artifacts not naturally present in EEG imaging. Epstein adds that the sharp waves Thatcher reported may have been blips caused by the contraction of muscles in the head. “I treat people with head trauma all the time,” he says. “I never see this in people with head trauma.”

You can see Epstein’s point, as it’s unclear whether these brain injuries occurred before or after Nelson brutally raped a 7-year-old girl in 1991, after which he was granted probation and trained as a social worker.

All of which invokes the following questions. Firstly, do we need qEEG to establish that this person’s behavior is abnormal, or that the legal system does not protect children? And secondly, was the reaction of the authorities in the 1991 case appropriate, let alone preventative?

As mass shootings and other forms of extreme violence remain at relatively high levels in the United States, committed by ever-younger perpetrators flagged as loners and fantasists by the state mental healthcare systems they disappear into, it’s evident that sturdier preventative programs need to be implemented by governments worldwide. The worst has already occurred; our children are unprotected against dangerous predators and unaided when affected by unstable and abusive environments, inappropriate social media, and TV.

A potential beacon of hope, qEEG is already beginning to highlight the country’s broken socio-legal systems and the amount of work it will take to fix them. It is attempting to humanize a fractured court system that still disposes of the products of trauma and abuse as if they were nothing but waste, forcing the authorities to answer for their failings – and any science that can do this can’t be a bad thing.

Your microbiota will be having non-stop sex this Valentine’s Day

Even if you’re alone this Valentine’s Day, there’s no need to worry: some parts of your body will be getting plenty of action. In fact, your body will host a veritable carnival of the sensual in your tummy, as your microbiota will engage in an orgy of sex and swinger’s parties — where they’ll be swapping genes instead of keys.

A medical illustration of drug-resistant Neisseria gonorrhoeae bacteria. Original image sourced from the Public Health Image Library, Centers for Disease Control and Prevention. Image in the public domain.

The salacious gene

Imagine you have a severe disease with a very unusual cure: you can treat it by making love with someone who then passes on the genes necessary to cure your ailment. It is, as they say, sexual healing. Using sex to protect or heal themselves is precisely what bacteria can do, and it’s a crucial defense mechanism.

In the past, the research community thought bacterial sex (or conjugation, as scientists call it) was purely a threat to humans, as this ancient process can spread DNA conferring antibiotic resistance between neighboring bacteria. Antibiotic resistance is one of the most pressing health challenges the world is facing, being projected to cause 10 million deaths a year by 2050.

But there’s more to this bacterial sex than meets the eye. Recently, scientists from the University of Illinois at Urbana-Champaign and the University of California Riverside witnessed gut microbes sharing the ability to acquire a life-saving nutrient with one another through bacterial sex. UCR microbiologist and study lead Patrick Degnan says:

“We’re excited about this study because it shows that this process isn’t only for antibiotic resistance. The horizontal gene exchange among microbes is likely used for anything that increases their ability to survive, including sharing vitamin B12.”

For well over 200 years, researchers have known that bacteria reproduce by fission, where one cell divides to produce two genetically identical daughter cells. However, in 1946, Joshua Lederberg and Edward Tatum discovered bacteria could exchange genes through conjugation, an entirely separate act from reproduction.

Conjugation occurs when a donor and a recipient bacterium sidle up to each other, upon which the donor creates a tube, called a pilus, that attaches to the recipient and pulls the two cells together. A small parcel of DNA is then passed from the donor to the recipient, providing new genetic information through horizontal transfer.

Ironically, it wasn’t until Lederberg met and fell in love with his wife, Esther Lederberg, that they made progress regarding bacterial sex.

Widely acknowledged as a pioneer of bacterial genetics, Esther still struggled for recognition despite identifying the horizontal transfer of antibiotic resistance and the bacteria-killing viruses known as bacteriophages. She discovered these phages after noticing small objects nibbling at the edges of her bacterial colonies. Tracing them back to their source, she found these viral interlopers hiding dormant amongst bacterial chromosomes after being transferred by microbes during sex.

Later work found that environmental stresses such as illness activated these viruses to replicate within their hosts and kill them. Still, scientists assumed that bacterial sex was purely a defense mechanism.

Esther Lederberg in her Stanford lab. Image credits: Esther Lederberg.

Promiscuity means longevity

The newly published study builds on Esther’s work. The study authors suspected this bacterial process extended beyond antibiotic resistance, so they started by investigating how vitamin B12 was getting into gut microbial cells that lack the machinery to extract the vitamin from their environment — puzzling because, without vitamin B12, most types of living cells cannot function. Many questions therefore remained about how these organisms survived without the means to extract this resource from the intestine.

The new study in Cell Reports uses Bacteroidetes, a group of bacteria that makes up as much as 80% of the human gut microbiome in the intestines, where its members break down complex carbohydrates for energy.

“The big, long molecules from sweet potatoes, beans, whole grains, and vegetables would pass through our bodies entirely without these bacteria. They break those down so we can get energy from them,” the team explained.

These bacteria were placed in lab dishes, mixing strains that could extract B12 from their surroundings with strains that couldn’t. The team then watched in awe as the bacteria formed their sex pili to transfer genes enabling the extraction of B12. After the experiment, researchers examined the total genetic material of the recipient microbe and found it had incorporated an extra band of DNA from the donor.

Something similar happens in living mice. When the group administered two different subgroups of Bacteroidetes to a mouse – one that possessed the genes for transferring B12 and another that didn’t – they found the genes had ‘jumped’ to the recipient strain after five to nine days.

“In a given organism, we can see bands of DNA that are like fingerprints. The recipients of the B12 transporters had an extra band showing the new DNA they got from a donor,” Degnan said.

Remarkably, the team also noted that different species of phages were transferred during conjugation, exhibiting bacterial subgroup specificity in some cases. These viruses also showed the capacity to alter the genomic sequence of their bacterial hosts, with the power, when activated, to promote or cut short the life of their microbial vessel.

Sexual activity in our intestines keeps us healthy

Interestingly, the authors note they could not observe conjugation in all subgroups of the Bacteroidetes species, suggesting this could be due to growth factors in the intestine or a possible subgroup barrier within this large species group slowing the process down.

Despite this, Degnan states, “We’re excited about this study because it shows that this process isn’t only for antibiotic resistance.” And that “The horizontal gene exchange among microbes is likely used for anything that increases their ability to survive, including sharing [genes for the transport of] vitamin B12.”

This means bacterial sex doesn’t just occur when microbes are under attack; it happens all the time. And it’s probably part of what keeps the microbiome and, by extension, ourselves fit and healthy.

Pap tests could one day tell women if they have breast or ovarian cancer

Experts have identified changes in a woman’s cervix that can help detect tumors elsewhere in the body. Pap tests involve scraping cells from the cervix to detect any abnormalities that could lead to cervical cancer. But researchers from Innsbruck University and the gynecological cancer research charity The Eve Appeal found the cells from this test can also give clues and alerts for other types of cancer. With development, they state, the method could one day predict the risk of developing ovarian, breast, womb, and cervical cancers from a straightforward pap smear test.

They developed their system using a process known as DNA methylation — epigenetic modifications that don’t alter the genetic sequence but do determine whether a gene is expressed or silenced: in this case, promoting or preventing cancer in the body. These modifications leave ‘methylation markers’, or signatures, on genomic regions that scientists can read to determine what has occurred within a person’s body throughout their lifetime. Akin to the rings of a tree, this method can provide chronological clues as to what has happened in our biological life.

Researchers created the test, dubbed WID (Women’s Risk Identification), to analyze markers left by cancerous activity in the DNA of cervical cells. By calculating a woman’s WID, they hope to identify those with a high risk of developing ovarian, breast, womb, or cervical cancers: providing an early-warning system for medical teams to increase treatment outcomes.
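The article doesn’t give the WID formula, but risk indices built on methylation data typically take the shape of a weighted sum over methylation fractions at informative CpG sites. A deliberately simplified sketch, with invented sites, weights, and threshold:

```python
# Invented CpG sites, weights, and threshold, for illustration only; the
# real WID index was derived from thousands of measured epigenetic changes.
WEIGHTS = {"cg_site_A": 1.8, "cg_site_B": -0.9, "cg_site_C": 2.4}
INTERCEPT = -1.2

def wid_style_score(methylation: dict) -> float:
    """Weighted sum over methylation fractions (0..1) at selected CpG sites."""
    return INTERCEPT + sum(w * methylation[site] for site, w in WEIGHTS.items())

sample = {"cg_site_A": 0.72, "cg_site_B": 0.31, "cg_site_C": 0.55}
score = wid_style_score(sample)
print(f"risk score {score:+.2f} -> {'elevated' if score > 0 else 'typical'}")
```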

The team was able to spot these modifications because they matched DNA markers found in diseased cervical, breast, ovarian, and womb biopsy tissue (a highly invasive procedure) to those found in the easier to access cells of the cervix — whose similar biological structures undergo the same hormonal changes as the tissues these cancers flourish in.

Finding cancer through the cervix

The first study examined cervical cell samples collected from 242 women with ovarian cancer and 869 healthy controls. To develop the WID risk scale, the scientists measured 14,000 epigenetic changes to identify ovarian cancer’s unique DNA signature, then used that signature to spot the presence of the disease in epithelial tissue scraped from the cervix.

They then validated the signature in an additional cohort of 47 women who had ovarian cancer and 227 healthy subjects. The test identified 71% of women under 50 and roughly 55% of the volunteers older than 50 who had previously tested positive for the disease (its sensitivity), with an overall specificity of 75%. A test’s specificity is its ability to correctly identify people without the disease; its sensitivity is its ability to catch people who have it.
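To keep the two figures straight, here is a small sketch with hypothetical counts chosen to roughly match the validation cohort (the study reports percentages, not this exact breakdown):

```python
# Hypothetical confusion-matrix counts for illustration only.
true_pos, false_neg = 33, 14    # women with ovarian cancer: caught vs missed
true_neg, false_pos = 170, 57   # healthy women: cleared vs wrongly flagged

sensitivity = true_pos / (true_pos + false_neg)  # disease correctly detected
specificity = true_neg / (true_neg + false_pos)  # healthy correctly cleared
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")  # ~70%, ~75%
```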

Professor Martin Widschwendter of the University of Innsbruck and UCL, heading up the research, said the findings suggest their WID index is picking up cancer predisposition, adding that the results were similar to a study on women with cancer of the womb. He is adamant their test cannot yet predict ovarian cancer, with more studies needed.

A possible screening method for an undetectable cancer 

In the second study, the same team analyzed epigenetic changes in cervical cell samples provided by 329 women with breast cancer against those from the same 869 healthy volunteers in the first study. Using the WID index, they were able to identify women with breast cancer based on a unique epigenetic signature. The group once again confirmed these markers in a smaller cohort of 113 breast cancer patients and 225 women without the condition.

The researchers also used the patterns to predict whether patients had breast cancer, but they didn’t say exactly how accurate the tests were. Instead, they stressed that further trials are needed, with the hope that clinicians could use the WID as a regular test for women in the future, specifically for those under fifty years of age who do not have access to screening for this disease.

“This research is incredibly exciting,” said Liz O’Riordan, a breast cancer surgeon who was also diagnosed with this disease. “At the moment, there is no screening test for breast cancer in women under the age of 50. If this test can help pick up women with a high risk of developing breast, ovarian, cervical, and uterine cancer at a younger age, it could be a game-changer.”

The team adds that these findings are also crucial for ovarian cancer, whose symptoms can be as benign as a bloated abdomen. The biggest killer among gynecological tumors, this disease is diagnosed late by clinicians in an alarming three out of four cases.

But for now, Widschwendter says, the findings suggest that the molecular signatures in cervical cells may detect the predisposition to other women-specific cancers rather than providing a solid prediction of the disease.

Because of the pandemic, women have stopped taking pap tests

A pap smear test detects abnormal cells on the cervix, which is the entrance to the uterus from the vagina. Removing these cells can prevent cervical cancer, which most commonly affects sexually active women aged between 30 and 45. In most cases, the human papillomavirus causes this cancer after being acquired through unprotected sex or skin-to-skin contact. To summarise, the whole point of these tests is to detect women at risk of developing cancer and encourage them to carry out further health checks, not to find those already displaying cancer symptoms.

Around the world, the number of women taking smear tests has dropped substantially during the pandemic. In England, for instance, one of the countries with the highest testing rates, just 7 out of 10 eligible women got a cervical check-up — and conditions are expected to worsen due to a new policy brought in by the UK government at the start of 2022, which saw all eligible women in Wales have their wait times increased from three to five years in between tests. The government expects to roll out the policy in England this year after the pandemic caused the delay of its initial release. Experts insisted the move was safe, but campaigners hit back at the plans, arguing it would cause preventable deaths by delaying the detection of cancer or pre-cancerous issues.

In a statement to the Guardian, the UK’s Secretary for Patient Safety and Primary Care said it’s “great to see how this new research could help alert women who are at higher risk to help prevent breast, ovarian, womb, and cervical cancer before it starts.” Until then, the secretary added, cancer screening remains vital, urging all women aged 25 and above to attend their appointments when invited. The secretary did not remark on the new government policy.

An ovarian cancer specialist urged caution in interpreting the data: They show a “moderate association” between the methylation signature and ovarian cancer, said Dr. Rebecca Stone, the Kelly Gynecologic Oncology Service director at Johns Hopkins Hospital. “They are not showing that it’s predictive or diagnostic,” Stone stressed. Clarifying that to see whether the cervical cell signature predicts cancer, a study would have to observe a large group of women over a long period.

Filling the gap in screening options for women

In contrast, Athena Lamnisos, CEO of the Eve Appeal, emphasizes the importance of a new screening tool:

“Creating a new screening tool for the four most prevalent cancers that affect women and people with gynae organs, particularly the ones which are currently most difficult to detect at an early stage, from a single test could be revolutionary.”

The Eve Appeal goes on to say that women could one day get separate risk scores for each of the four cancers, with medical teams offering those with high scores more active monitoring, regular mammograms, risk-reducing surgery, or therapeutics.

Ultimately, it’s better to prevent than to treat, and this method could offer women worldwide access to proper screening services that could save lives through the application of early intervention and preventative medicine.

Study on mice: Exercising later in life can keep your muscles young

Exercising can not only make you feel younger, it may actually keep you younger as well. A study on mice suggests that exercising, even later in life, can do wonders for your muscles. In addition to underscoring the importance of staying active, the study could also help us uncover some of the secrets of rejuvenation.

Even though some diseases are inherited, we can still improve our overall health through lifestyle choices such as diet and exercise. Still, whatever the reason, the genes related to some of these conditions must be expressed for them to develop. So how does this happen?

A new study has brought us closer to an answer by mapping the genetic changes involved in rejuvenating the muscle cells of elderly mice put on an exercise program.

Turning genes on and off

The analysis centers on DNA, the “blueprint” for our bodies. DNA consists of four bases: cytosine, guanine, adenine, and thymine. One process used to help manage these massive helixes involves a methyl group, a molecule composed of one carbon and three hydrogen atoms, which attaches to one of the four bases, cytosine, typically where a cytosine is followed by a guanine, forming what’s known as a CpG site.

When this occurs, the CpG site becomes methylated and the gene it regulates is typically switched off; lose that methyl group and the region becomes unmethylated, allowing the gene to switch back on. In this way, a process called DNA methylation can promote or inhibit the expression of specific genes — whether that means suppressing a tumor, preventing cancer, or activating genes responsible for causing wrinkles in old age. This process is constant, occurring countless times a second throughout the body, and we’re just starting to understand it.

DNA methylation is one of the many mechanisms of epigenetics, where inborn or acquired changes in DNA don’t touch the actual sequence – meaning a person can potentially reverse things like fat deposits through diet or exercise. More and more studies are starting to suggest that this is an unharnessed and robust process, linked to longevity and the regulation of lifespan in most organisms on earth.

The current study attempts to further this theory by using lifestyle interventions such as exercise to roll back genetic aging in skeletal muscle, measuring the animals’ ‘epigenetic clock’ for accuracy. This clock is read from methylation levels in the blood, which reflect exposures and disease risks independent of chronological age, providing an early-warning system and a truer representation of biological age.
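Published epigenetic clocks are, at their core, linear models: an intercept plus a weighted sum of methylation fractions at selected CpG sites, fitted to training data. The weights and values below are invented to show the shape of the calculation, not taken from any real clock:

```python
# Invented intercept and CpG weights; real clocks fit hundreds of sites
# with penalized regression on large training cohorts.
INTERCEPT = 30.0
WEIGHTS = {"cg0001": 25.0, "cg0002": -18.0, "cg0003": 40.0}

def epigenetic_age(methylation: dict) -> float:
    """Linear 'clock': intercept plus weighted methylation fractions (0..1)."""
    return INTERCEPT + sum(w * methylation[cg] for cg, w in WEIGHTS.items())

old_sedentary = {"cg0001": 0.80, "cg0002": 0.20, "cg0003": 0.55}
old_exercised = {"cg0001": 0.74, "cg0002": 0.27, "cg0003": 0.50}

print(f"sedentary mouse reads as {epigenetic_age(old_sedentary):.1f} age units")
print(f"exercised mouse reads as {epigenetic_age(old_exercised):.1f} age units")
```

In this toy setup, the exercised animal’s shifted methylation values make it read as ‘younger’ on the clock, which is the same logic behind the eight-week difference reported below.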

Kevin Murach, an assistant professor at the University of Arkansas, says, “DNA methylation changes in a lifespan tend to happen in a somewhat systematic fashion. To the point, you can look at someone’s DNA from a given tissue sample and with a fair degree of accuracy predict their chronological age.”

Using exercise to turn back the clock

The study design was relatively simple: mice nearing the end of their natural lifespan, at 22 months, were given access to a weighted exercise wheel to ensure they built muscle. They required no coercion to run on the wheel, with older mice running from six to eight kilometers a day, mostly in spurts, and younger mice running up to 10-12 kilometers.

Results from the elderly mice after two months of weighted wheel running suggested they were the epigenetic age of mice eight weeks younger, compared to sedentary mice of the same maturity.

The team also used the epigenetic clock to map a multitude of genes involved in the formation and function of muscles, including those affected by exercise. Blood work indicated that genes usually over-methylated (hypermethylated) in old age resumed normal methylation in the active aged mice, unlike those mapped in their sedentary counterparts.

For instance, the rbm10 gene is usually hypermethylated in old age, disrupting the production of proteins involved in motor neuron survival, muscle weight & function, and the growth of striated muscle. Here it was shown to undergo less methylation in older mice who exercised, improving its performance. Normal methylation levels also resumed across the Timm8a1 gene, keeping mitochondrial function and oxidant defense at workable levels – even where neighboring sites exhibited dysfunctional epigenetic alterations.

More work is needed to harness DNA methylation

Murach notes that when a lifespan is measured incrementally in months, as with this mouse strain, an extra eight weeks — roughly 10 percent of that lifespan — is a noteworthy gain, further commending the importance of exercise in later life.

He adds that although the connection between methylation and aging is clear, the link between methylation and muscle function is less so. Despite these sturdy results, Murach will not categorically state that the reversal of methylation with exercise is causative for improved muscle health. “That’s not what the study was set up to do,” he explained. However, he intends to pursue future studies to determine if “changes in methylation result in altered muscle function.”

And, “If so, what are the consequences of this?” he continued. “Do changes on these very specific methylation sites have an actual phenotype that emerges from that? Is it what’s causing aging or is it just associated with it? Is it just something that happens in concert with a variety of other things that are happening during the aging process? So that’s what we don’t know.”

He summarizes that once the medical community has mapped the mechanics of dynamic DNA methylation in muscle, their work could provide modifiable epigenetic markers to improve muscle health in the elderly. 

Scientists identify the specific gene that protects against severe COVID-19

Researchers from the Karolinska Institute have discovered a gene variant that reduces the risk of severe COVID-19 infection by 20%. In their paper, the scientists state that this helps explain why the disease’s symptoms are so variable, hitting some harder than others.

Why do some people fall severely ill from COVID-19 while others don’t? In addition to risk factors like age or obesity and plenty of other environmental factors, it also comes down to our varying genetic makeup. Therefore, researchers across the globe have begun the mammoth task of mapping the genes involved in making people more susceptible to catching SARS-CoV-2 (the virus that causes COVID-19) and developing a severe infection.

These large-scale efforts have thrown up more than a dozen genomic regions across the human genome containing large clusters of genes associated with severe COVID-19. However, the specific causal genes in these regions are yet to be identified, hampering our ability to understand COVID-19’s often selective pathology.

Now, scientists have built on these findings to pinpoint a gene that confers protection from critical illness.

Neanderthal DNA protects against severe COVID-19

The previous studies from 2020 concentrated on the genetic data of people of European ancestry recorded by multi-disciplinary teams all over the world for the 1000 Genomes Project. This monumental collaboration uncovered a specific segment of DNA known as the OAS1/2/3 cluster, which lowers the risk of developing an acute COVID-19 infection by 20%. Inherited from Neanderthals in roughly half of all people outside of Africa, this segment is responsible for encoding genes in the immune system.

The genetic array arose when modern humans migrated out of Africa about 70,000 years ago and interbred with Neanderthals, the mingled DNA persisting in their offspring’s haplotypes — sets of inheritable DNA variations that sit close together along a chromosome.

However, most human haplotypes outside Africa now include DNA from Neanderthals and Denisovans (an ancient human originating in Asia). Consequently, this ancient region of DNA is heaving with numerous genetic variants, making it challenging to distinguish the exact protective gene that could serve as a target for medical treatment against severe COVID-19 infection.

A possible way around this: people of primarily African descent largely lack these archaic genes in their haplotypes, making the region shorter and easier to decipher.

To test this theory, the researchers checked the 1000 Genomes Project database for individuals carrying only parts of this DNA segment – focusing on individuals with African ancestry, who lack Neanderthal heritage. Remarkably, they found that individuals of predominantly African ancestry carry the same protective gene cluster as those of European origin.

Genetic studies should be a multi-cultural affair

Once they had established this, the researchers compared 2,787 COVID-19 hospitalization cases against the genetic data of 130,997 individuals of African ancestry, zeroing in on the gene variant rs10774671 G, thought to confer protection against COVID-19 hospitalization. Their results correspond to a previous, more extensive study of individuals of European heritage, with the analysis suggesting it is likely the only causal variant behind the protective effect.
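For readers curious how such protection is quantified: association studies typically express it as an odds ratio, where a value below 1 indicates a protective effect. Below is a minimal Python sketch; all counts are invented for illustration and are not the study’s data.

    # Toy example: expressing a protective gene variant as an odds ratio.
    # All counts are invented for illustration; they are NOT the study's data.
    carriers_hospitalized, carriers_not = 400, 9_600
    noncarriers_hospitalized, noncarriers_not = 500, 9_500

    odds_carriers = carriers_hospitalized / carriers_not
    odds_noncarriers = noncarriers_hospitalized / noncarriers_not
    odds_ratio = odds_carriers / odds_noncarriers

    # ~0.79 here: carriers have roughly 20% lower odds of hospitalization.
    print(f"Odds ratio: {odds_ratio:.2f}")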

Surprisingly, this previously ‘useless’ ancient variant was found to be widespread, present in one out of every three people of white European ancestry and eight out of ten individuals of African descent.

In evolutionary terms, the researchers write that the variant exists today in both these gene pools “as a result of their inheritance from the ancestral population common to both modern humans and Neanderthals.” Accordingly, their data adds weight to the widely held theory that this common ancestral population originated in Africa, hundreds of thousands of years ago, before its descendants carried the DNA across the globe.

And while there’s much more to uncover regarding the newly identified variant, the researchers can already suggest that the protective gene variant (rs10774671 G) works by determining the length of the protein encoded by the gene OAS1. The longer version of the protein is more effective at breaking down the virus than the unaltered form, making a life-threatening infection less likely.

Using genetic risk factors to design new COVID-19 drugs

Despite their promising results, the team cautions that the 1000 Genomes Project does not provide a complete picture of this genomic region across different ancestries. Nevertheless, it’s clear that the Neanderthal haplotype is virtually absent among individuals of primarily African ancestry, underlining, in the team’s words, “how important it is to include individuals of different ancestries” in large-scale genetic studies.

Senior researcher Brent Richards from McGill University says that this is how “we are beginning to understand the genetic risk factors in detail,” an understanding that is key to developing new drugs against COVID-19.

If these results are anything to go by, we could be on the cusp of novel treatments that can harness the immune system to fight this disease.

China builds the world’s first artificial moon

Chinese scientists have built an ‘artificial moon’ possessing lunar-like gravity to help them prepare astronauts for future exploration missions. The structure uses a powerful magnetic field to produce the celestial landscape — an approach inspired by experiments once used to levitate a frog.

The key component is a vacuum chamber that houses an artificial moon measuring 60cm (about 2 feet) in diameter. Image credits: Li Ruilin, China University of Mining and Technology

Preparing to colonize the moon

Simulating low gravity on Earth is a complex process. Current techniques require either flying a plane that enters free fall and then climbs back up again or jumping off a drop tower — but both approaches last mere seconds or minutes. With the new invention, the magnetic field can be switched on or off as needed, producing zero gravity, lunar gravity, or Earth-level gravity instantly. It is also strong enough to magnetize and levitate other objects against gravity for as long as needed.

All of this means that scientists will be able to test equipment in the extreme simulated environment to prevent costly mistakes. This is beneficial as problems can arise in missions due to the lack of atmosphere on the moon, meaning the temperature changes quickly and dramatically. And in low gravity, rocks and dust may behave in a completely different way than on Earth – as they are more loosely bound to each other.

Engineers from the China University of Mining and Technology built the facility (which they plan to launch in the coming months) in the eastern city of Xuzhou, in Jiangsu province. At its heart, a vacuum chamber houses a mini “moon” measuring 60cm (about 2 feet) in diameter. The artificial landscape consists of rocks and dust as light as those found on the lunar surface, where gravity is about one-sixth as strong as Earth’s, kept aloft by powerful magnets that levitate them above the ground. The team plans to test a host of technologies whose primary purpose is to perform tasks and build structures on the surface of the Earth’s only natural satellite.

Group leader Li Ruilin from the China University of Mining and Technology says it’s the “first of its kind in the world,” taking lunar simulation to a whole new level, adding that their artificial moon makes gravity “disappear” for “as long as you want.”

In an interview with the South China Morning Post, the team explains that some experiments take just a few seconds, such as an impact test. Meanwhile, others like creep testing (where the amount a material deforms under stress is measured) can take several days.

Li said astronauts could also use the facility to determine whether structures could be 3D-printed on the lunar surface, rather than hauling heavy equipment that might prove unusable on the mission. He continues:

“Some experiments conducted in the simulated environment can also give us some important clues, such as where to look for water trapped under the surface.”

It could also help assess whether a permanent human settlement could be built there, including issues like how well the surface traps heat.

From amphibians to artificial celestial bodies

The group explains that the idea originates from experiments by the Russian-born, UK-based physicist Andre Geim, in which he levitated a frog with a magnet – a feat that earned him the satirical Ig Nobel Prize in 2000, which celebrates science that “first makes people laugh, and then think.” Geim went on to win a Nobel Prize in Physics in 2010 for his work on graphene.

The foundation of his work involves a phenomenon known as diamagnetic levitation, in which scientists apply an external magnetic field to a material. The field induces a weak repulsion between the object and the magnets, causing it to drift away from them and ‘float’ in midair.

For this to happen, the magnetic field must be strong enough to ‘magnetize’ the atoms that make up the material. Essentially, the atoms inside the object (or frog) act as tiny magnets, subject to the magnetic field around them. If the magnet is powerful enough, it will alter the motion of the electrons orbiting the atoms’ nuclei, causing them to produce a magnetic field of their own that repels the magnets.
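For the physically inclined, the standard textbook condition for diamagnetic levitation gives a feel for the field strengths involved (the numbers below are a back-of-the-envelope estimate, not figures from the Xuzhou facility). An object of density \(\rho\) and magnetic susceptibility \(\chi\) floats when the magnetic force balances gravity:

\[ \frac{|\chi|}{\mu_0}\, B \frac{dB}{dz} = \rho g \]

For water-rich material (\(\chi \approx -9\times10^{-6}\), \(\rho \approx 1000\ \mathrm{kg/m^3}\)), this requires \(B\,dB/dz\) of roughly 1,400 T²/m, which is why Geim’s frog needed a field of about 16 tesla, thousands of times stronger than a fridge magnet.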

Diamagnetic levitation of a tiny horse. Image credits: Pieter Kuiper / Wiki Commons.

Different substances on Earth have varying degrees of diamagnetism which affect their ability to levitate under a magnetic field; adding a vacuum, as was done here, allowed the researchers to produce an isolated chamber that mimics a microgravity environment.

However, simulating the harsh lunar environment was no easy task as the magnetic force needed is so strong it could tear apart components such as superconducting wires. It also affected the many metallic parts necessary for the vacuum chamber, which do not function properly near a powerful magnet.

To counteract this, the team came up with several technical innovations, including simulating lunar dust that could float a lot easier in the magnetic field and replacing steel with aluminum in many of the critical components.

The new space race

This breakthrough signals China’s intent to compete for first place in the international space race. That includes its lunar exploration program (named after the mythical moon goddess Chang’e), whose recent milestones include landing a rover on the far side of the moon in 2019 and a 2020 mission that brought rock samples back to Earth for the first time in over 40 years.

Next, China wants to establish a joint lunar research base with Russia, which could start as soon as 2027.  

The new simulator will help China better prepare for its future space missions. For instance, the Chang’e 5 mission returned with far fewer rock samples than planned in December 2020, as the drill hit unexpected resistance. Previous missions led by Russia and the US have also had related issues.

Experiments conducted on a smaller prototype simulator suggested drill resistance on the moon could be much higher than predicted by purely computational models, according to a study by the Xuzhou team published in the Journal of China University of Mining and Technology. The authors hope this paper will enable space engineers across the globe (and in the future, the moon) to alter their equipment before launching multi-billion dollar missions.

The team is adamant that the facility will be open to researchers worldwide, and that includes Geim. “We definitely welcome Professor Geim to come and share more great ideas with us,” Li said.

Electric knee implants could help millions of arthritis patients

An answer could be on the horizon for millions of people living with arthritis, after scientists found a way to repair joints using electrical implants. The implants work by producing a small current every time the person moves their joint, encouraging regrowth of the protective cartilage that covers the ends of bones.

Bioengineers from the University of Connecticut developed a biodegradable mesh implant, about half a millimeter thick, which generated tiny electrical signals to repair arthritic joints in rabbits. The study, published in Science Translational Medicine, saw the team successfully regrow cartilage in rabbits’ knees without using potentially toxic growth factors or stem cells. Crucially, the cartilage that grows back is mechanically robust, with further plans to trial the implant in larger animals and humans.

In their paper, the team states that although more work is needed to improve the scaffold, this study provides evidence that biodegradable implants that generate their own electricity can harness exercise to treat arthritis.

No cure for arthritis despite tens of millions of sufferers

According to the CDC, 58.5 million people currently have arthritis in the United States, which costs the American people $303.5 billion annually. While there are treatments, arthritis technically has no cure.

It is a widespread and painful disease caused by damage to joints formed between the body’s bones. One of the subtypes of this disease, called osteoarthritis, attacks the cartilage at the end of bones in the joint. As this buffer deteriorates, bones begin to rub against each other so that everyday activities like walking become agonizingly painful – making the growth of new cartilage highly desirable. 

Without surgical or pharmaceutical intervention, sufferers face years of pain, and even these treatments only slow the damage rather than repair the joint. One surgical option, cartilage grafting, involves taking healthy cartilage from the patient or a donor, and it comes with its own inconveniences and risks.

Therefore, regrowing healthy cartilage in the damaged joint itself would be very helpful. Some researchers have investigated chemical growth factors to induce the body to regrow it; other attempts rely on a bioengineered scaffold to promote tissue growth. But neither of these approaches works well, even in combination, with the regrown cartilage breaking down under the everyday stresses of the joint.

Your joints can generate electricity to heal you

The new breakthrough involves a tissue scaffold made of poly-L-lactic acid (PLLA) nanofibers, a material often used for surgical stitches that dissolve as the wound heals. The scaffold produces a little burst of electrical current when squeezed, a phenomenon known as piezoelectricity. In this case, the joint’s regular ‘squeezing’ is provided by walking, which generates a weak electrical field that encourages cells to colonize the implant and grow into cartilage.
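In the simplest textbook picture, the charge a piezoelectric film generates is proportional to the force squeezing it,

\[ Q = d\,F, \]

where \(d\) is the material’s piezoelectric coefficient. Literature values for PLLA are small, on the order of 10 pC/N (picocoulombs per newton), so the resulting fields are gentle, enough to nudge cells rather than shock them. (The coefficient here is a rough literature figure, not one reported in this study.)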

“Piezoelectricity is a phenomenon that also exists in the human body. Bone, cartilage, collagen, DNA, and various proteins have a piezoelectric response. Our approach to healing cartilage is highly clinically translational, and we will look into the related healing mechanism”, says Dr. Yang Liu, a postdoctoral fellow in Nguyen’s group and the lead author of the published work.

Nguyen’s group implanted their scaffold in the knees of injured rabbits. After a month of recovery, the rabbits walked for 20 minutes a day on a slow-moving treadmill to exercise their legs and generate the electric current. The charge encouraged the regrowth of fresh, mechanically robust cartilage, making the knee as solid and functional as before the injury. By contrast, rabbits treated with a non-piezoelectric scaffold and the same exercise regime still had a hole in this protective sheath and showed limited healing.

In an interview with New Scientist, Thanh Nguyen, an assistant professor in the department of mechanical engineering, says, “If used in people, the material used to make the implant would dissolve after about two months – although it could be tweaked to make it last longer.”

What next for this promising implant?

Nguyen states that the results are exciting but cautions that further tests need to be carried out on larger animals that bear more similarities to humans.

His lab now plans to observe the treated animals for 1-2 years to ensure the cartilage is durable and wants to test the PLLA scaffolds in older animals as arthritis usually affects the elderly. He concludes by saying that if the scaffolding helps older animals heal, it indeed could be a bioengineering breakthrough.

Masks coated with ostrich antibodies make COVID-19 glow in the dark

In the two years that SARS‑CoV‑2 has ravaged the globe, it has caused immeasurable human loss. But we as a species have been able to create monumental solutions amidst great adversity. The latest achievement involves a standard face mask that can detect COVID-19 in your breath, essentially making the pathogen visible.

A COVID-19 sample becomes apparent on a mask filter under ultraviolet light. Image credits: Kyoto Prefectural University.

Japanese researchers at Kyoto Prefectural University have created a mask that glows in the dark if COVID-19 is detected in a person’s breath or spit. They did this by coating mask filters with a mixture containing ostrich antibodies that react on contact with the SARS‑CoV‑2 virus. The filters are then removed from the masks and sprayed with a chemical that makes COVID-19 (if present) visible under a smartphone light or an ultraviolet ‘black light’. The experts hope that their discovery could provide a low-cost home test to detect the virus.

Yasuhiro Tsukamoto, veterinary professor and president of Kyoto Prefectural University, explains the benefits of such a technology: “It’s a much faster and direct form of initial testing than getting a PCR test.”

Tsukamoto notes that it could help those infected with the virus but who show no symptoms and are unlikely to get tested — and with a patent application and plans to commercialize inspection kits and sell them in Japan and overseas within the next year, the test appears to have a bright future. However, this all hinges on large-scale testing of the mask filters and government approval for mass production. 

Remarkably, this all came with a little help from ostriches.

The ostrich immune system is one of the most potent on Earth

To make each mask, the scientists injected inactive SARS‑CoV‑2 into female ostriches, in effect vaccinating them. Scientists then extracted antibodies from the eggs the ostriches produced, as the yolk transfers immunity to the offspring – the same way a vaccinated mother conveys disease resistance to her infant through the placenta. 

An ostrich egg yolk is perfect for this job, as it is nearly 24 times bigger than a chicken’s, allowing a far greater number of antibodies to form. Antibodies are also produced far more quickly in these birds, taking a mere six weeks as opposed to the twelve weeks it takes in chickens.

Ostriches have an extremely efficient immune system, thought to be the strongest of any animal on the planet, allowing them to rapidly produce antibodies against an enormous range of bacteria and viruses. A 2012 study in the Brazilian Journal of Microbiology showed ostrich antibodies could stop Staphylococcus aureus and E. coli in their tracks, and experts predict the bird will be instrumental in fending off epidemics in the future.

Tsukamoto himself has published numerous studies using ostrich immune cells harvested from eggs to help treat a host of health conditions, from swine flu to hair loss.

Your smartphone can image COVID-19 with this simple test

The researchers started by creating a mask filter coated with a solution of the ostrich-egg antibodies, which react with the COVID-19 spike protein. Once they had a working material, a small cohort of 32 volunteers wore the masks for eight hours before the team removed the filters and sprayed them with a chemical that causes COVID-19 to glow in the dark. The scientists repeated this for ten days. Masks worn by participants infected with the virus glowed around the nose and mouth when the team shone a black light on them.

In a promising turn, the researchers found they could also use a smartphone LED light to detect the virus, which would considerably widen the scope of testing across the globe due to its ease of use. Essentially, it means that the material could be used to the fullest in a day-to-day setting without any additional equipment.

“We also succeeded in visualizing the virus antigen on the ostrich antibody-carrying filter when using the LED ultraviolet black light and the LED light of the smartphone as the light source. This makes it easy to use on the mask even at home.”

To further illustrate the practicability of the test, Tsukamoto told the Kyodo news agency he discovered he was infected with the virus after he wore one of the diagnostic masks. The diagnosis was also confirmed using a laboratory test, after which authorities quarantined him at a hotel.

Next, the team aims to expand the trial to 150 participants and develop the masks to glow automatically without special lighting. Dr. Tsukamoto concludes: “We can mass-produce antibodies from ostriches at a low cost. In the future, I want to make this into an easy testing kit that anyone can use.”

The swarm is near: get ready for the flying microbots

Imagine a swarm of insect-sized robots capable of recording criminals for the authorities undetected or searching for survivors caught in the ruins of unstable buildings. Researchers worldwide have been quietly working toward this but have been unable to power these miniature machines — until now.

A 0.16 g microscale robot that is powered by a muscle-like soft actuator. Credit: Ren et al (2022).

Engineers from MIT have developed powerful micro-drones that can zip around with bug-like agility, which could eventually perform these tasks. Their paper in the journal Advanced Materials describes a new form of synthetic muscle (known as an actuator) that converts energy sources into motion to power these devices and enable them to move around. Their new fabrication technique produces artificial muscles, which dramatically extend the lifespan of the microbot while increasing its performance and the amount it can carry.  

In an interview with Tech Xplore, Dr. Kevin Chen, senior author of the paper, explained that they have big plans for this type of robot:

“Our group has a long-term vision of creating a swarm of insect-like robots that can perform complex tasks such as assisted pollination and collective search-and-rescue. Since three years ago, we have been working on developing aerial robots that are driven by muscle-like soft actuators.”

Soft artificial muscles contract like the real thing

Your run-of-the-mill drone relies on rigid actuators, which demand relatively high voltage and power to move; robots at this miniature scale simply couldn’t carry such a heavy power supply. So-called ‘soft’ actuators are a far better solution, as they’re far lighter than their rigid counterparts.

In their previous research, the team engineered microbots that could perform acrobatic movements mid-air and quickly recover after colliding with objects. But despite these promising results, the soft actuators underpinning these systems required more electricity than could be supplied, meaning an external power supply had to be used to propel the devices.

“To fly without wires, the soft actuator needs to operate at a lower voltage,” Chen explained. “Therefore, the main goal of our recent study was to reduce the operating voltage.”

In this case, the device would need a soft actuator with a large surface area to produce enough power. However, it would also need to be lightweight so a micromachine could lift it.

To achieve this, the group opted for soft dielectric elastomer actuators (DEAs), made from layers of a flexible, rubber-like solid known as an elastomer, whose polymer chains are held together by relatively weak bonds, permitting it to stretch under stress.

The DEAs used in the study consist of a long piece of elastomer only 10 micrometers thick (roughly the diameter of a red blood cell) sandwiched between a pair of electrodes. These, in turn, are wound into a 20-layered ‘tootsie roll’ to expand the surface area and create a ‘power-dense’ muscle that deforms when a current is applied, similar to how human and animal muscles contract. In this case, the contraction causes the microbot’s wings to flap rapidly.
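A quick worked relation shows why thin layers matter. The actuation pressure of a dielectric elastomer scales with the square of the electric field, i.e. the drive voltage divided by the layer thickness:

\[ p = \varepsilon_0 \varepsilon_r \left(\frac{V}{t}\right)^{2} \]

So halving the thickness \(t\) delivers the same squeezing pressure \(p\) at half the voltage \(V\), and stacking many thin layers recovers the force a single thick layer would have produced. (This is the standard dielectric-elastomer relation, not a formula quoted by the MIT team.)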

A microbot that acts and senses like an insect

A microscale soft robot lands on a flower. Credit: Ren et al (2022).

The result is an artificial muscle forming the compact body of a robust microrobot that can carry nearly three times its own weight, despite weighing less than one-quarter as much as a penny. Most notably, it operates at 75% lower voltage than previous versions while carrying 80% more payload.

They also demonstrated a 20-second hovering flight, which Chen says is the longest recorded by a sub-gram robot, with the actuator still working smoothly after 2 million cycles – far outpacing the lifespan of other models.

“This small actuator oscillates 400 times every second, and its motion drives a pair of flapping wings, which generate lift force and allow the robot to fly,” Chen said. “Compared to other small flying robots, our soft robot has the unique advantage of being robust and agile. It can collide with obstacles during flight and recover and it can make a 360 degree turn within 0.16 seconds.”
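To put those figures together: at 400 beats per second, the 2 million test cycles work out to 2,000,000 / 400 = 5,000 seconds, or roughly 83 minutes of continuous flapping.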

The DEA-based design introduced by the team could soon pave the way for microbots that fly untethered, powered by onboard batteries. For example, it could inspire the creation of functional robots that blend into our environment and everyday lives, including those that mimic dragonflies or hummingbirds.

The researchers add:

“We further demonstrated open-loop takeoff, passively stable ascending flight, and closed-loop hovering flights in these robots. Not only are they resilient against collisions with nearby obstacles, they can also sense these impact events. This work shows soft robots can be agile, robust, and controllable, which are important for developing next generation of soft robots for diverse applications such as environmental exploration and manipulation.”

And while they’re thrilled about producing workable flying microbots, they hope to reduce the DEA thickness to only 1 micrometer, which would open the door to many more applications for these insect-sized robots.

Source: MIT

Immune cells from the common cold offer protection against COVID-19, researchers find

If one in 10 cold infections is caused by a coronavirus, could the immunity produced by these illnesses offer some protection against COVID-19? A new study has just provided the answer, showing that immunity induced by common colds can indeed help fight off the far more dangerous novel coronavirus.

Image credits: Engin Akyurt.

A study from Imperial College London followed people exposed to SARS-CoV-2 and found that only half of the participants became infected, while the others tested negative. Researchers took blood samples from all volunteers within days of exposure to measure their levels of T cells, immune cells programmed by previous infections to attack specific invaders.

Results show that participants who didn’t test positive had significantly higher levels of these cells; in other words, those who evaded infection had higher levels of T cells that attack the virus internally to provide immunity — T cells that may have come from previous coronavirus infections other than SARS-CoV-2. These findings, published in the journal Nature Communications, may pave the way for a new type of vaccine to prevent infection from emerging variants, including Omicron.

Dr. Rhia Kundu, the first author of the paper from Imperial’s National Heart & Lung Institute, says: “Being exposed to the SARS-CoV-2 virus doesn’t always result in infection, and we’ve been keen to understand why. We found that high levels of pre-existing T cells, created by the body when infected with other human coronaviruses like the common cold, can protect against COVID-19 infection.” Despite this promising data, she warns: “While this is an important discovery, it is only one form of protection, and I would stress that no one should rely on this alone. Instead, the best way to protect yourself against COVID-19 is to be fully vaccinated, including getting your booster dose.”

The common cold’s role in protecting you against Covid

The study followed 52 unvaccinated people living with someone who had a laboratory-confirmed case of COVID-19. Participants were tested seven days after exposure to see if they had caught the disease from their housemates and to analyze their levels of pre-existing T cells. The 26 people who tested negative for COVID-19 had significantly higher levels of common-cold T cells than those who tested positive. Remarkably, these cells targeted internal proteins within the SARS-CoV-2 virus, rather than the spike protein on its surface, providing ‘cross-reactive’ immunity between a cold and COVID-19.

Professor Ajit Lalvani, senior author of the study and Director of the NIHR Respiratory Infections Health Protection Research Unit at Imperial, explained:

“Our study provides the clearest evidence to date that T cells induced by common cold coronaviruses play a protective role against SARS-CoV-2 infection. These T cells provide protection by attacking proteins within the virus, rather than the spike protein on its surface.”

However, experts not involved in the study caution against presuming that anyone who has previously had a coronavirus cold will be spared by the novel coronavirus. They note that although the study provides valuable data on how the immune system fights this virus, it is implausible that none of the 150,000 people who have died of COVID-19 in the UK to date had ever caught a coronavirus-caused cold.

Other studies uncovering a similar link have also warned cross-reactive protection gained from colds only lasts a short period.

The road to longer-lasting vaccines

Current SARS-CoV-2 vaccines work by teaching the immune system to recognize the spike protein on the virus’s outer shell, triggering a response that stops the virus from attaching to cells and infecting them. However, this response wanes over time as the virus continues to mutate. Luckily, the jabs also trigger T cell immunity, which lasts much longer and protects against severe disease, hospitalization, and death. But this immunity, too, is aimed at the spike protein – it would therefore be advantageous to have a vaccine that could attack other parts of the virus.

Professor Lalvani summarizes: “The spike protein is under intense immune pressure from vaccine-induced antibodies which drives the evolution of vaccine escape mutants. In contrast, the internal proteins targeted by the protective T cells we identified mutate much less. Consequently, they are highly conserved between the SARS-CoV-2 variants, including Omicron.” He ends, “New vaccines that include these conserved, internal proteins would therefore induce broadly protective T cell responses that should protect against current and future SARS-CoV-2 variants.”

Demystifying nootropics – Is cognitive enhancement even a thing?

Whether you’re a college student hoping to improve your grades, a professional wanting to achieve more at work, or an older adult hoping to stave off dementia, the idea of popping a magic pill that boosts your brainpower can be tempting. So it’s no surprise that the use of nootropics or smart drugs is on the rise globally. But do they work? And more importantly, are they safe? In a sea of supplements and marketing blurb, what’s the real story behind these supposed cognitive enhancers? Let’s have a look at some of these questions.

Nootropics are prescription drugs, supplements, or natural substances claimed to boost cognitive functions such as memory, creativity, or motivation. Cognitive enhancement, in turn, refers to the use (or abuse) of such smart drugs by healthy people with no neurological deficiency. This means that, more often than not, ‘smart drugs’ are prescription medications used off-label for non-medical purposes. Despite this unsettling fact, their use continues to grow worldwide.

Developed in 1964 by Romanian chemist Corneliu E. Giurgea, the concept of nootropics comes with a list of criteria. A nootropic should:

1. Aid improvement in working memory and learning.

2. Support brain function under hypoxic conditions or after electroconvulsive therapy.

3. Protect the brain from physical or chemical toxicity.

4. Enhance natural cognitive functions.

5. Be non-toxic to humans, without causing depression or stimulation of the brain.

The criteria above may suggest that cognitive enhancers are purely lab-made; however, they’re also present in everyday foodstuffs and beverages. For example, caffeine is a natural nootropic and the most widely consumed psychoactive substance worldwide. Found in coffee, cocoa, tea, and certain nuts, an intake of one or two cups of coffee a day has been shown in clinical trials to increase alertness and decrease reaction time, albeit modestly. And while caffeine was once considered risky, many experts now agree that the natural caffeine present in foodstuffs is more beneficial than harmful when consumed in moderation.

Due to the sheer volume of false advertising surrounding nootropics, the first thing to check is whether a cognitive enhancer is backed by science — the best way to do this is to see if it has gone through clinical or human trials. A prime example is caffeine, whose cognitive benefits have been thoroughly tested in humans by various academic institutions. To date, studies have shown that caffeine consumption increases intracellular messengers, prolongs adrenaline activity, and moves calcium into cells. Collectively, these mechanisms provide neuroprotection and increase heart rate, vascular tone, and blood pressure, as well as promoting bronchodilation. Human trials have also indicated that caffeine improves vigilance and attention without affecting memory or mood.

Eggs are another proven brain food that has been through clinical trials. They are rich in choline, a substance key to the production of acetylcholine, which is instrumental in many bodily functions, from achieving deep sleep to retaining new memories. Frequent egg consumption is associated with higher cognitive performance, particularly among the elderly. However, as with synthetic nootropics, too much of a good thing has adverse consequences; higher doses of caffeine, for instance, cause jittery, anxious feelings. Eggs fare better: there is no official daily limit on how many a person can eat, as long as they don’t add saturated fat or too much salt.

Another well-trialed natural nootropic is the ancient herb Ginkgo biloba – both human and animal models have elucidated the herb’s neuroprotective effects. As a result, Ginkgo has been studied repeatedly as a treatment for Alzheimer’s disease due to its antioxidant and antiapoptotic properties. Numerous studies have also cited its safety in humans with cognitive impairment, where the nootropic inhibited caspase-3 activation and amyloid-β aggregation in Alzheimer’s disease. The list of human studies of Ginkgo biloba in healthy volunteers is extensive, with no safety issues noted. However, as with other cognitive enhancers, contrasting studies contradict these positive findings, suggesting that future trials should employ neuroimaging.

The most salient factor to note here is that all of the above nootropics have been proven in human or clinical studies – something severely lacking for the majority of cognitive enhancers on the market today. A simple search on the PubMed database will tell you which nootropics have been trialed in humans and list any safety issues. Another excellent way to navigate the minefield of false advertising by some nootropics manufacturers is to stick to established brands.

Similarly, it’s crucial to check whether mixing nootropics with alcohol or other drugs is safe. Firstly, always consult a medical professional before mixing drugs or alcohol with prescription medicine. Secondly, over-the-counter (OTC) medication bought in pharmacies should come with safety leaflets advising whether it is safe to take with other medications, supplements, or alcohol. Unfortunately, not all OTC remedies contain safety information, as they are mostly unregulated. And while there are many papers on the use of caffeine with alcohol, most OTC nootropics haven’t been tested with other drugs. Experts advise that if you begin to mix or stack OTC medicines and start to feel ill, you should stop your drug regime and see a medical professional right away – this includes the stacking of nootropics.

I’m confused. Just how many types of nootropics are there?!

With a tsunami of potions and powders on the market, it can be challenging to take brain boosters responsibly. The first thing to know is that nootropics can be either synthetic, manufactured like prescription drugs, or natural, occurring in plants and food. Likewise, dietary supplements and OTC drugs can contain both natural and synthesized products, while prescription drugs are purely synthetic in structure.

Synthetic nootropics are composed of artificial chemicals designed to mimic natural neurotransmitters rather than natural ingredients. For instance, caffeine is found naturally in coffee beans but is also synthesized for bulk manufacturing. The synthetic version, found in many energy drinks, is absorbed into the body faster than its natural counterpart, causing significantly more side effects. In other words, natural caffeine is far gentler on the human body than its synthetic counterpart.

Notably, the only nootropics proven to make an immediate, marked difference in cognition are prescription drugs. Specifically, drugs designed for Attention Deficit Hyperactivity Disorder (ADHD), such as Adderall and Ritalin, as well as the anti-narcoleptic modafinil, show demonstrable effects on healthy people’s concentration, attention, and alertness. And even though their impact on cognitive enhancement in healthy people is questionable, their off-label use is still on the rise despite numerous health risks, including dependence, tolerance, and cardiovascular, neurologic, and psychological disorders.

Prescription nootropics primarily consist of stimulants such as methylphenidate, amphetamine, and dextroamphetamine, designed to treat ADHD. And although these work well for many people with the condition, they aren’t proven safe for healthy people who want to improve their focus and attention. Many college students acquire these medications illicitly, and while they appear to help in the short term, the risks are serious.

Yet, modafinil, a novel stimulant FDA-approved to treat narcolepsy, sleep apnea, and shift work disorder, has several remarkable features distinguishing it from other medications. Unlike amphetamines, for example, modafinil is reported to have minimal side effects at the correct therapeutic doses. It also appears to have low abuse potential, with some studies suggesting that it may help with learning and memory in healthy people. 

Carrying on in the vein of synthetic nootropics, the biggest OTC family in this class is the racetams: alleged cognitive enhancers claimed to improve memory and suppress anxiety, sometimes marketed as modulators of brain-derived neurotrophic factor. Racetam products are mainly derivatives of pyrrolidinone, a colorless organic compound that supposedly enhances learning, diminishes impaired cognition, and protects against brain damage. Several derivatives are commercially available, including piracetam, oxiracetam, aniracetam, noopept, and pramiracetam. In reality, however, research on their effectiveness in healthy adults is practically non-existent.

In contrast, human studies categorically link naturally occurring nootropics with healthy brain function. Specifically, past studies have shown that food-derived nutrients such as unsaturated fats, vitamins, caffeine, minerals, various proteins, glucosinolates, and antioxidants can boost brain function. Despite this, the evidence backing the cognitive benefits of their dietary-supplement doppelgangers is weak. That fact will shock many whose morning ritual involves supplements bought over the counter or online.

To compound this, a 2015 review of various dietary supplements found no convincing evidence of improvements in cognitive performance, even in unhealthy participants. Dr. David Hogan, the lead author of the review, feels nutritional supplements don’t provide the same benefits as food: “While plausible mechanisms link food-sourced nutrients to better brain function, data showed that supplements cannot replicate the complexity of natural food and provide all its potential benefits.” However, he concedes: “None of this rules out the potential for some OTC nootropics to improve cognition. Still, there isn’t much compelling evidence to support these claims.” In other words, there is still much conjecture when it comes to dietary supplements as an aid to cognitive enhancement.

These findings make sense, as all the nutrients and fuel for our bodies come from our diet, and many healthy foods have been shown to act as vasodilators on the small arteries and veins in the brain. When introduced into our system, they increase blood circulation and the flow of vital nutrients, energy, and oxygen towards the brain. They also counteract inflammatory responses in the brain and modulate neurotransmitter concentrations. For this reason, experts will always state that a healthy, balanced diet is their preferred mode of treatment for healthy cognitive function – at least for now.

How do nootropics work?

Coffee — one of the most popular nootropics.

A recurring critical theme in many papers covering the subject is that unless you’re deficient in a given chemical, taking more of it is unlikely to enhance your brain processes. Officially, cognitive enhancement works by strengthening components of the memory/learning circuits — dopamine, glutamate, or norepinephrine — to improve brain function in healthy individuals beyond their baseline.

Most experts state that nearly all OTC and dietary supplements lose their potency and thus stop working over time. Moreover, the effects of most non-prescription drugs (if present at all) seem to be temporary, lasting only until they are metabolized and eliminated, meaning you may have to take more for any noticeable benefit, if there is one. The author’s general advice is to ensure that the brand is well-established and trusted, and to avoid prescription drugs for non-medical purposes.

In an interview with Insider, David A. Merrill, MD, director of the Pacific Brain Health Center, states that nootropics likely won’t benefit you much if you’re not already experiencing symptoms such as trouble focusing or poor memory.

Indeed, as nootropic intake rises amongst gamers, Dr. Migliore makes a similar point in her interview with PC Gamer: ingesting these compounds is unlikely to help you if your body isn’t deficient in any of them. “If you spend 10-15 minutes outside every day and eat a balanced diet, your vitamin D levels are most likely normal,” she adds, before asking: “Will taking a supplement of vitamin D do anything for you? Probably not. On the other hand, if you avoid the sunlight and don’t eat meat, your vitamin D levels may be low. For those people, a vitamin D supplement might lead to increased energy.”

Is Dr. Migliore, a licensed clinician and world-famous gamer, hinting that sun-deprived gamers may benefit from smart drugs? Also, how will I know when I’m deficient in a specific nutrient? I can only glean my ‘deficient behavior.’ Would it not, therefore, make sense to take cognitive enhancers where a nutritional inadequacy is suspected?

Despite how logical this sounds, all experts agree that a sensible diet, social interaction, and regular exercise help boost cognition, with many naturally occurring nootropics found in food shown to improve mental faculties.  

So should we use nootropics then?

There are numerous ethical arguments concerning the ongoing nootropics debate, with a slew of countries hurriedly adapting their laws to this ever-expanding field. Side effects and false advertising aside, there is no doubt that nootropics exist that work. And if there are nootropics that work, more smart drugs will soon be developed that work even better with increased functionality. And this is where ethical problems arise concerning the point at which treating disorders becomes a form of enhancement, where patients become super-humans. Should resources be spent trying to turn ordinary people into more brilliant and better performing versions of themselves in the first place?

I mean, how should we classify, condone, or condemn a drug that improves human performance in the absence of pre-existing cognitive impairment, once proven efficacious? Are we in danger of producing ‘synthetic’ geniuses? And even worse, will they be better than the real thing? For comparison with doping in competitive sports, approximately 95% of elite athletes are estimated to have used performance-enhancing drugs. If brain doping becomes acceptable in working life and education, will the same go for sports? Will we see separate competitions for these synthetic geniuses to level the playing field? Governmental bodies must address these urgent issues.

And even though the use of nootropics has risen over the past years, with such drugs broadly perceived as improving academic and professional performance, there isn’t enough empirical evidence to support the assumption that these brain boosters produce cognitive enhancement in healthy users. Paired with a deluge of reports on their unwanted, and sometimes dangerous, side effects, the case for their use is fragile.

For example, the non-medical use of prescription stimulants such as methylphenidate for cognitive enhancement has recently increased among teens and young adults in schools and on college campuses. Meanwhile, memory enhancement dominated the nootropics market with a more than 30% share in 2018. However, this enhancement likely comes with a neuronal, as well as ethical, cost.

In that respect, a 2017 study involving 898 undergraduates without an ADHD diagnosis reported that off-label prescription nootropics did not increase the grade point average of any healthy volunteers. This is further confirmation that research on nootropics remains inconclusive in clarifying how such drugs act as mind stimulants, even where proven medication is involved.

Just how safe are these nootropic ‘supplements’?

The problems relating to the safety of nootropics are linked directly to adverse-event reporting systems. In the United States, even the FDA, usually a global benchmark for drug regulation, is uncharacteristically vague about smart drugs. Most nootropics are sold as OTC supplements, so there are no figures for side effects associated with OTC nootropics in the USA; adverse events linked to loosely defined dietary supplements are compiled only in unprocessed data sets, with no analytics available. Historically, adverse events associated with dietary supplements have been difficult to monitor in the USA because manufacturers don’t register such products before sale. Thus, little information about their content and safety is available, with no way to know whether a supplement contains what producers claim or to glean its long-term effects. All the more reason to use only well-known, trusted brands from reputable pharmacies.

To illustrate, the official FDA system that records adverse events for dietary supplements, the CFSAN Adverse Event Reporting System (CAERS), covers foods, nutritional supplements, and cosmetics, and only provides raw data. The reported adverse events range from serious events, including death and hospitalization, to minor ones involving taste, coloring, or packaging. Unbelievably, even though CAERS includes severe medical incidents, the names behind up to 35% of all side effects in this database are redacted under Exemption 4, a regulation that exempts manufacturers from disclosing information that constitutes “trade secrets and commercial information obtained from a person which is confidential.” Companies whose products have caused deaths are also allowed to purge their brand name and products from the FDA database using this privilege.

Hence, it is challenging to obtain statistics for the number of adverse events related to dietary supplements, making it unfeasible to track dangerous supplements that have used the Exemption 4 clause. Accordingly, most studies covering adverse events attributed to OTC supplements rely on predictive statistics, signs, or signals to roughly approximate the number of hospitalizations, doctor’s visits, or deaths that may occur in a given year, often drawing on multiple sources. Even then, it can prove impossible to track a single brand. In general, knowledge regarding the safety of OTC supplements is limited, with many studies finding that CAERS underrepresents adverse events associated with OTC drugs. To give readers an idea of the scale of the problem: among the 1,300 supplements labeled Exemption 4 in the CAERS database, more than one-third involved deaths or hospitalizations.
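Because CAERS is published only as raw downloadable tables, anyone wanting statistics must compute them from scratch. Below is a minimal Python sketch of what that looks like; the file name and column names are assumptions standing in for whatever extract the FDA currently serves, so adjust them to your download.

    import pandas as pd

    # Load a CAERS extract downloaded from fda.gov.
    # "caers_extract.csv" and the column names below are assumptions;
    # check the actual headers in the file you download.
    df = pd.read_csv("caers_extract.csv")

    # Keep reports whose coded outcome mentions a serious event.
    serious = df[df["outcomes"].str.contains(
        "death|hospitalization", case=False, na=False
    )]

    # Tally serious reports by product to spot recurring offenders.
    print(serious["product_name"].value_counts().head(10))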

Another emerging safety issue: OTC nootropics can also cause hospitalization even after prescription drug regimes have ended, particularly in patients with a history of psychiatric illness. This poses the question of whether such episodes reflect a loss of plasticity, as these psychopharmaceuticals permanently reroute and lay down brain circuitry and tracts. It also exposes a contradiction in terms: how can these prescription stimulants be viewed as nootropics, which are temporary by their very nature?

In short, this suggests that healthcare providers, specifically those in the mental health and substance abuse fields, should keep in mind that nootropic use is an under-recognized and evolving problem that can cause severe episodes, particularly amongst those with pre-existing mental disorders or illnesses. 

Have other nootropics been evaluated in human trials?

Yes, numerous nootropics have been through human trials, with significantly more natural cognitive enhancers trialed than synthetic drugs. This makes sense, as foodstuffs are part of our everyday diets, needed to fuel our whole body.

First on the list is Bacopa monnieri, a herb found throughout the Indian subcontinent in marshy areas, used for centuries in ayurvedic medicine to improve brain function.  Human studies reveal consistent cognitive enhancement resulting from Bacopa monnieri administration across young, old and impaired adult populations. The most robust effects of Bacopa monnieri are memory performance, including positive effects on learning and consolidation of target stimuli, delayed recall, visual retention of information, and working memory. 

In adults aged 55 and over, Bacopa monnieri has shown improvements in executive functioning and mental control. Clinical studies have also revealed that it may boost brain function and alleviate anxiety and stress, possessing numerous antioxidant properties – a class of potent compounds called bacosides present in the herb thought to be responsible for this. 

Surprisingly, despite its addiction liability and undesirable side effects, preclinical and clinical studies have demonstrated that nicotine has cognitive-enhancing effects. Functions like attention, working memory, fine motor skills, and episodic memory are all susceptible to nicotine’s effects. There may also be a link between this nootropic and dementia, with altered nicotinic receptor activity observed in Alzheimer’s disease patients. Despite this, experts agree that nicotine use is only justified as an aid to quit smoking, and it is therefore avoided as a smart drug.

One of the most popular drugs for cognitive enhancement is methylphenidate, otherwise known as Ritalin – a commonly prescribed medication for treating ADHD. Users should note that a large proportion of literature on the safety and efficacy of this drug comes from studies performed on normal, healthy adult animals, as there is currently no sufficiently reliable animal model for ADHD.

Methylphenidate is a stimulant closely related to amphetamine and cocaine that works by increasing levels of dopamine and norepinephrine in the brain. Most studies of its cognitive effects involved adult animals or healthy adult humans. In studies on healthy volunteers, higher doses increased movement and impaired attention and performance in prefrontal-cortex-dependent cognitive tasks, while lower doses improved mental performance and reduced locomotor activity. Nevertheless, long-term use of stimulants like Ritalin can lead to attention-based side effects, hyperactivity, distractibility, and poor impulse control – effects also seen in patients who use the medication for ADHD.

Many reports discuss the role of Panax ginseng, a herb used in Chinese medicine, in improving the cognition function of Alzheimer’s disease patients due to its antioxidant properties, claimed to suppress Alzheimer’s disease pathology. Over the last decade, several studies have revealed that single doses of Panax ginseng can modulate aspects of brain activity measured by electroencephalography and peripheral blood glucose concentrations in healthy young volunteers. The same studies have also indicated that the herb enhances aspects of working memory, improves mental arithmetic performance, and speeds attentional processes.

Another natural nootropic, Rhodiola rosea, known as golden root, is a flowering plant claimed to improve cognitive function. It is mainly known for its ability to counteract physical and mental fatigue, with numerous human studies conducted on the subject. Sharing this property with Bacopa monnieri and Panax ginseng, it is considered an “adaptogen,” a substance that enhances endurance and resistance and protects against stressful situations. Human studies suggest that Rhodiola rosea may also protect the nervous system against oxidative damage, thus lowering the risk of Alzheimer’s disease.

Research on nootropics indicates that the big hope appears to be modafinil. This prescription drug is considered first-line therapy for excessive daytime sleepiness associated with narcolepsy in adults. However, clinicians need to be cautious with younger users because of reports of side effects involving tachycardia, insomnia, agitation, dizziness, and anxiety. Nevertheless, modafinil is FDA-approved for use in children over age 16 years. 

The efficacy of modafinil in improving alertness and consciousness in non-sleep-deprived, healthy individuals has even led the military to trial the drug as a cognitive enhancer. Notably, a 2017 study found evidence that modafinil may enhance some aspects of brain connectivity, including alertness, energy, focus, and decision-making. In non-sleep-deprived adults, this also includes improvements in pattern-recognition accuracy and performance on the reaction-based stop-signal task.

Furthermore, modafinil improved the accuracy of an executive planning task and produced faster reaction times, with one study even noting an increased digit span. Side effects also appear limited, with numerous cognitive functions remaining unaffected by modafinil, including trail making, mathematical processing, spatial working memory, logical memory, associative learning, and verbal fluency.

As can be seen, cognitive enhancement is genuine, with human studies available to verify this exciting field’s mode of action and mechanisms.

Recommendations for smart drug usage

Nootropics and smart drugs are on the rise in today’s society, but more research involving neuroimaging is needed to understand their benefits better. However, there is no doubt that nootropics fulfilling Giurgea’s original criteria exist, particularly in their natural form.

In addition to these considerations, it’s always important to highlight that an active lifestyle with regular mental and physical activity, social interaction, and high-quality nutrition shows protective, preventive effects against various diseases and positively impacts brain health. Many experts are only willing to recommend these factors for cognitive enhancement. In particular, exercise increases dendrite length and the density of dendritic spines and promotes the expression of synaptic proteins. It also increases the availability of growth factors and neurogenesis in the hippocampus while decreasing beta-amyloid levels. No other nootropic has been so extensively studied or proven.

But the medical community cannot ignore the many contrasting views of natural and synthetic nootropics; there is growing evidence that some of these pills and powders can boost cognitive function, albeit temporarily. To date, Ginkgo biloba is the most studied and established herb for cognitive enhancement. In contrast, despite the vast number of studies on the subject, no prescription drug is officially recommended for non-medical use, despite evidence that some may provide cognitive enhancement for healthy people.

As we have seen, smart drugs exist; the main point to cover is safety. Experts recommend you only use trusted brands, checking the CAERS database for every new supplement or drug you take. They also state that if you become ill when using any prescription drug, OTC drug, or dietary supplement, you should stop using it immediately and see a medical professional. Don’t forget to check the PubMed database for human trials and safety data regarding any cognitive enhancers you’re taking; it’s also an excellent place to double-check the credibility of any brand you may want to try. If a brand isn’t involved in any studies, chances are its products have never been put to a rigorous scientific test.
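Checking PubMed can even be scripted. The sketch below uses NCBI’s public E-utilities API to count human clinical trials mentioning a given supplement; the search term is only an example, and the filters should be double-checked against NCBI’s documentation.

    import requests

    # Ask NCBI's E-utilities how many human clinical trials mention a
    # given nootropic. "clinical trial[pt]" and "humans[mh]" are standard
    # PubMed filters; the supplement named here is just an example.
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={
            "db": "pubmed",
            "term": "bacopa monnieri AND clinical trial[pt] AND humans[mh]",
            "retmode": "json",
            "retmax": 10,
        },
        timeout=30,
    )
    result = resp.json()["esearchresult"]
    print(f"{result['count']} trials found; sample PMIDs: {result['idlist']}")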

Finally, an underground movement is happening in the nootropics field: a faction demanding to be better, demanding their forced evolution, desperate to be as good as the next person, terrified of being left behind. The next generation of smart drugs (and they are coming) will either advance humanity as a whole or divide us irrevocably. Will these synthetic geniuses, who feel so inferior they’ll risk their health to win the race, show us the same kindness afforded them? The answer awaits us all.

The other replication crisis: research that’s less likely to be true is cited more

Although science publishing is by far the best option we have for advancing our understanding of the world, the publishing system itself is far from perfect.

In the early 2010s, a new term became very hot in some fields of research: the replication crisis. The problem researchers had discovered was that many scientific studies are difficult or impossible to replicate or reproduce. Because reproducibility is an essential pillar of science, this has grave consequences, and it forced us to reconsider many things we took for granted, especially in medicine and psychology.

There’s much to be said about the replication crisis, but one particular aspect was brought up in a new study: how un-replicable studies are cited.

Citations can make or break someone’s scientific career; the more citations one study or author has, the more it is regarded as important and influential. But according to a new study, research that is less likely to be replicable is also more likely to be cited.

The problem is not new; to some extent, researchers in various fields are already aware of it.

“We also know that experts can predict well which papers will be replicated,” write the authors Marta Serra-Garcia, assistant professor of economics and strategy at the Rady School, and Uri Gneezy, professor of behavioral economics also at the Rady School. “Given this prediction, we ask ‘why are non-replicable papers accepted for publication in the first place?'”

“Interesting” results

The problem, Serra-Garcia suspects, is that the review teams of academic journals face a trade-off. In order for a paper to get published, it first needs to be peer-reviewed, meaning it is scrutinized by experts in the field. When a study covers something well-known and established, and has useful results but lacks the 'wow' factor, it will likely be reviewed very harshly. But reviewers are more likely to be lenient when the results are more "out there."

The same thing happens in the media: studies that are striking or somehow more interesting are more likely to be picked up, even though their validity may be more questionable.

“Interesting or appealing findings are also covered more by media or shared on platforms like Twitter, generating a lot of attention, but that does not make them true,” Gneezy said.

Serra-Garcia and Gneezy analyzed data from three influential replication projects, which tried to systematically replicate the findings in top psychology, economics, and general science journals like Nature and Science. In economics, 61% of the 18 studies examined were successfully replicated; a similar figure was found for general science (62%). This is already less than ideal, but in psychology, things were far worse: just 39 of 100 experiments could be successfully replicated.

The disparity in citations is striking. Papers that could not be replicated gathered, on average, 153 more citations than those that replicated successfully. Even when the researchers took into account several characteristics of the studies (the number of authors, the proportion of male authors, the details of the experiment, and the field), the relation between citations and replicability remained unchanged.

The citation gap also grows over time. In other words, it's not just a fad where studies with unusual results get quoted at first and then researchers catch on: the effect persists, and, on average, papers that could not be replicated receive 16 more citations per year.

“Remarkably, only 12 percent of post-replication citations of non-replicable findings acknowledge the replication failure,” the authors write.

An impactful problem

To see just how big a problem this is, you need look no further than the vaccine-autism controversy. It all started with a study published by Andrew Wakefield in 1998. The study has long been retracted, and Wakefield's methods were shown to be not just fraudulent but also cruel to the participants; yet despite numerous studies disproving Wakefield's work, claims that autism is linked to vaccines continue to circulate.

The problem could be eased by improving the way scientific publishing works. Academics are under tremendous pressure to publish papers, especially groundbreaking papers that get a lot of citations. If unreliable papers are more likely to gather citations, then academics have an incentive to publish exactly this type of study. Modern science is generally built on small, incremental progress, not big breakthrough leaps, but small incremental progress isn't flashy.

The authors hope to draw attention to this problem and encourage researchers (and readers) to keep in mind that something interesting and appealing may not always be replicable.

“We hope our research encourages readers to be cautious if they read something that is interesting and appealing,” Serra-Garcia said. “Whenever researchers cite work that is more interesting or has been cited a lot, we hope they will check if replication data is available and what those findings suggest.”

The study is published in Science.

Meet Islam’s Da Vinci: Al-Biruni, father of geodesy, anthropology, and master of pharmacy

At the turn of the first millennium, in the 10th and 11th centuries, a gifted scholar by the name of Abu Rayhan al-Biruni sent ripples through the Arab world. It was the Islamic Golden Age, and al-Biruni was a first-class scientist — a polymath. A historian, astronomer, botanist, pharmacologist, geologist, poet, philosopher, mathematician, geographer, and humanist, he revolutionized several fields and made important contributions, writing 146 books.

Let’s take a small journey into his world.

A USSR stamp celebrating Al-Biruni, via Wikimedia Commons.

The making of a genius

At the turn of the first millennium, the eyes of the educated world were focused on Muslim lands. Muslim scholars defined the intellectual world at the time, and according to George Sarton, the founder of the History of Science discipline, al-Biruni was a scholar up there with the best of them — “one of the very greatest scientists of Islam, and, all considered, one of the greatest of all times,” Sarton noted.

Al-Biruni was born in central Asia, in the city of Kath, in today’s Uzbekistan, in 973, in a large oasis region bordered by the Aral Sea on one side, and deserts on the other. He was fortunate enough to receive a good education, including from the eminent Abu Nasr Mansur, a member of the family then ruling at Kath, and a famous teacher in astronomy and mathematics.

At the time, the local caliphs promoted research in mathematics and astronomy, alongside medicine and theology. Al-Biruni studied all these and more. He made a name for himself in Islamic jurisprudence and made several valuable astronomical observations.

From early on, his work was acknowledged by both fellow scholars and the monarchs of the time. Interestingly though, despite studying theology, it seems that he wasn’t a practicing Muslim (and was potentially agnostic). Still, his texts do sometimes make references to the divine, which more or less came with the territory of being a scholar in a religious medieval society.

Many Muslim scholars were actually inspired by their faith. They believed that by understanding more and more about the surrounding world (the creation), they could get closer to the Creator. They built opulent libraries, dedicated decades to deepening their studies, and were widely appreciated by local leaders, who kept them around the royal court as scholars and sometimes advisers. It was a period of intellectual enlightenment in the Muslim world, from Spain to India.

Al-Biruni's life was not without agitation. He supported a dynasty that was overthrown, but managed to make peace with the victors and was taken in at the local court. As successive conquerors took over the region, al-Biruni was always cherished and supported at the royal court. He even accompanied one ruler, Mahmud of Ghazni, on his conquests in India, making much scientific progress as he travelled. By all accounts, al-Biruni didn't seem like the type of scholar who wanted to stay in an ivory tower: he enjoyed going out and getting his hands dirty, sometimes even literally, when he built tools from scratch.

Illustration of different phases of the moon, from a manuscript of Al-Biruni

Al-Biruni himself was an outstanding and adaptable scholar. He spoke Turkish, Sanskrit, Persian, Syriac, Hebrew, and Arabic. His work spanned virtually all fields of science, from astronomy and mathematics to geology and history; he was possibly the most educated man of the Middle Ages, a true universal man. He explained natural springs through the principle of communicating vessels and weighed precious stones and metals, establishing their specific weight.

But perhaps his most striking contributions came from his physical observations.

The Earth and the Heavens — astronomy and geoscience

Al-Biruni calculated the radius of the Earth using trigonometry, arriving within 2% of the real figure. He did this by first measuring the height of a hill, then climbing the hill and measuring the dip of the horizon. He also calculated the longitudinal difference between Kath and Baghdad by observing a lunar eclipse and noting the difference in the local times at which the eclipse occurred in the two cities, one of the very few times this method has been applied in history. What's interesting about his astronomical calculations is that he always seemed to be looking for a practical application, some way to use the newly-acquired information. He also produced a complex method for calculating the qibla, the direction of Mecca from any place.

A diagram for the method that Al-Biruni used to calculate the radius of the Earth. Full description of the method here.
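For the curious, the geometry is simple enough to replay in a few lines of Python. The sketch below is a minimal reconstruction of the tangent-line argument; the hill height and dip angle are illustrative stand-ins, not al-Biruni's recorded measurements.

```python
import math

def earth_radius_from_dip(hill_height_m: float, dip_degrees: float) -> float:
    """Estimate Earth's radius from the height of a hill and the dip of
    the horizon seen from its summit (the geometry behind al-Biruni's
    measurement). The line of sight to the horizon is tangent to the
    sphere, so cos(dip) = R / (R + h), which rearranges to
    R = h * cos(dip) / (1 - cos(dip)).
    """
    dip = math.radians(dip_degrees)
    return hill_height_m * math.cos(dip) / (1 - math.cos(dip))

# Illustrative inputs (assumed for this demo): a hill ~322 m tall
# and a horizon dip of ~0.57 degrees.
radius_km = earth_radius_from_dip(322.0, 0.57) / 1000
print(f"Estimated Earth radius: {radius_km:,.0f} km")  # roughly 6,500 km
```

His longitude trick rests on similarly simple arithmetic: the Earth rotates 15° per hour, so the difference in the local times at which two cities see the same lunar eclipse converts directly into their difference in longitude.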

Since astronomy essentially consisted of observations and mathematical calculations, it's clear that Al-Biruni was well-versed in mathematics; he was likely one of the leading mathematicians of his time. What's also interesting is that he was the first to clearly mark the difference between astronomy and astrology.

We take this distinction for granted now, but disentangling the two was groundbreaking at the time. In his later life, he even wrote a refutation of astrology, which relies on pseudoscience, in opposition to astronomy, which uses empirical observations and calculations. He wrote:

“I have begun with Geometry and proceeded to Arithmetic and the Science of Numbers, then to the structure of the Universe and finally to Judicial Astrology [sic], for no one who is worthy of the style and title of Astrologer [sic] who is not thoroughly conversant with these four sciences.”

Al-Biruni is also considered the father of modern geodesy, thanks to his many observations on local and planetary geological features. He also made a very interesting postulation: he theorized the existence of a landmass in the ocean between Asia and Europe (you know, right where the Americas are). Of course, he had no way of knowing that the continents are there, but he suspected that the geological processes that gave rise to Eurasia must surely have given rise to another large landmass, and he claimed that some of this landmass would lie at latitudes that could be inhabited. He was speculating, and maybe got a bit lucky, but the assertion is remarkable nonetheless.

His creative output was also impressive. He wrote a number of landmark treatises and books, maintained correspondence with the scholars of his time, and produced manuals and instructions in various fields.

The Al-Biruni crater on the moon. Image credits: NASA.

History and chronology

Al-Biruni was also the first to divide the hour the same way we do today: into 60 minutes, and each minute into 60 seconds. He seemed to have been fascinated by time. Although much of his work hasn’t been preserved, there are accounts that Al-Biruni went to great lengths to establish an accurate historical chronology and assess the duration of various historical eras.

But al-Biruni truly shines in the study of the history of religions. To this day, his work is considered encyclopedic, which is all the more remarkable since he was one of the very first to dabble in the field of comparative religion.

He looked at different religious practices and beliefs, noting them down and comparing them. In general, he seemed to support the superiority of Islam:

“We have here given an account of these things in order that the reader may learn by the comparative treatment of the subject how much superior the institutions of Islam are, and how more plainly this contrast brings out all customs and usages, differing from those of Islam, in their essential foulness.”

Still, he wasn’t afraid to exhibit admiration for other cultures. He also seemed to harbor an interesting idea that all cultures are somewhat related to each other because they are all human constructs, and therefore, all humans on the globe are similar rather than different.

You might think that's enough for one man, but we should also mention that al-Biruni is considered the first anthropologist, and in many ways, he laid the foundation for this field of science. His empirical observations of other cultures are strikingly similar to modern practices. He would learn people's language, study their texts, and observe them with this gained knowledge, noting down his observations with objectivity. This is all the more remarkable since he wrote much of this about Indian culture, and India was his people's enemy, the two often clashing in war. Dr. Edward C. Sachau compares al-Biruni's writing to "a magic island of quiet, impartial research in the midst of a world of clashing swords, burning towns, and plundered temples."

Other inventions

Al-Biruni never addressed physics specifically, but his writings often touched on physical processes. For instance, he developed weighing systems to calculate the density of substances, using a novel combination of mathematical and mechanical methods. The Encyclopedia of the History of Arabic Science describes his approach thusly:

“The classical results of Archimedes in the theory of the centre of gravity were generalized and applied to three-dimensional bodies, the theory of ponderable lever was founded and the ‘science of gravity’ was created and later further developed in medieval Europe. The phenomena of statics were studied by using the dynamic approach so that two trends – statics and dynamics – turned out to be interrelated within a single science, mechanics. … Numerous fine experimental methods were developed for determining the specific weight, which were based, in particular, on the theory of balances and weighing. The classical works of al-Biruni and al-Khazini can by right be considered as the beginning of the application of experimental methods in medieval science.”

In essence, al-Biruni built a hydrostatic balance system, which hints at an advanced understanding of physics.
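To give a flavor of the idea, here is a minimal sketch (with hypothetical numbers) of the principle a hydrostatic balance exploits: by Archimedes' principle, the weight a sample appears to lose when submerged equals the weight of the water it displaces, so two weighings are enough to pin down its specific weight.

```python
def specific_gravity(weight_in_air: float, weight_in_water: float) -> float:
    """Specific weight (density relative to water) from two weighings.
    The apparent weight lost in water equals the weight of the displaced
    water, so this ratio gives the sample's density relative to water.
    """
    return weight_in_air / (weight_in_air - weight_in_water)

# Hypothetical weighings of a gold sample (any consistent unit works):
print(specific_gravity(19.3, 18.3))  # ~19.3, i.e. about 19.3x denser than water
```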

A mizan al-hikma, or ‘balance of wisdom’, which is in fact a hydrostatic balance, like that of “The Book of the Balance of Wisdom” by Al-Khāzini. From Sparavigna (2013).

Al-Biruni also wrote a pharmaceutical encyclopedia called “Kitab al-saydala fi al-tibb” (Book on the Pharmacopoeia of Medicine). It lists a number of pharmaceutical compounds believed to be effective at the time, as well as instructions to find and prepare them.

It's unclear why, but for centuries his work was not really discussed or built upon. Maybe it was because of political influences, maybe he wasn't well liked by other scholars, or maybe something else happened; either way, it wasn't until several centuries later that Western scholars rediscovered his work and it was truly appreciated again.

Al-Biruni's fame and legacy rival those of any scholar in history. He's truly a Da Vinci of Asia, a polymath whose knowledge, ability, and productivity extended to almost every domain imaginable.

Uncertainty can be reported without damaging trust — and we need that more than ever

The numbers on COVID-19 are often uncertain and based on imperfect assumptions. It’s an ever-changing situation that often involves uncertainty — but we’re better off communicating things that way.

Typing fonts. Image credits: Willi Heidelbach.

Communicating science is rarely an easy job. In addition to “translating” complex data and processes into a language that’s familiar and accessible to all, there’s also the problem of data itself, which is often not clear-cut.

Experts and journalists have long assumed that if science communication includes “noise” (things like margin of error, ranges, uncertainty), public trust in science will be diminished.

“Estimated numbers with major uncertainties get reported as absolutes,” said Dr. Anne Marthe van der Bles, who led the new study while at Cambridge’s Winton Centre for Risk and Evidence Communication.

“This can affect how the public views risk and human expertise, and it may produce negative sentiment if people end up feeling misled,” she said.

But this might not be the case, a new study concludes.

The researchers carried out a total of five experiments involving 5,780 participants, who were shown headlines with varying degrees of uncertainty. The participants were then queried on how much they trusted the news.

The researchers report that participants were more likely to trust the source that presented data in the most accurate format, where the results were flagged as an estimate, and accompanied by the numerical range from which it had been derived.

For example: “…the unemployment rate rose to an estimated 3.9% (between 3.7%–4.1%)”.
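As a rough illustration of how such a statement could be produced, here is a minimal Python sketch. The interval is computed with a standard normal approximation, which is an assumption for this demo; the study concerns the presentation format, not this particular calculation.

```python
import math

def format_estimate(p_hat: float, n: int) -> str:
    """Render a proportion in the 'estimated X% (between Y%-Z%)' format
    that participants trusted most. The 95% interval uses a plain normal
    approximation -- an assumed method for illustration only.
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
    return f"an estimated {p_hat:.1%} (between {low:.1%}-{high:.1%})"

# Hypothetical survey: 3.9% unemployment measured across 40,000 respondents.
print("...the unemployment rate rose to " + format_estimate(0.039, 40_000))
# -> ...the unemployment rate rose to an estimated 3.9% (between 3.7%-4.1%)
```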

The results of one of the five experiments: perceived uncertainty (A), trust in numbers (B), and trust in the source (C). Even as trust in the numbers was lower, trust in the source was slightly higher. Results differed slightly for some of the other experiments. Image credits: PNAS.

We've seen both before and during the COVID-19 pandemic how damaging scientific disinformation can be. Disinformation often presents things as certain and absolute, and science communicators have long worried that acknowledging more uncertainty would diminish trust in science.

If this study is any indication, addressing uncertainty head-on might actually be the better option. At a time when scientific information and expertise are more important than ever, the researchers encourage communicators to consider their results.

“We hope these results help to reassure all communicators of facts and science that they can be more open and transparent about the limits of human knowledge,” said co-author Prof Sir David Spiegelhalter, Chair of the Winton Centre at the University of Cambridge.

Speaking of uncertainty and assumptions: this study had a limited sample size, and all the participants were British. There could be a cultural component at play, and the results might not apply to a larger sample of people, or to people in other countries.

The results are intriguing nonetheless. Uncertainty cannot be avoided at this point in the COVID-19 outbreak, and we should become more comfortable in dealing with it.

Read the study in its entirety here.

Researchers want free, open-access science — science publishers, not so much

To most people, the world of scientific publishing is pretty opaque — and even if you try to explain it, it doesn’t seem to make much sense. Here’s the gist of it: the writers don’t get paid, the people checking and editing the text don’t get paid, and yet downloading a single paper (which is usually around 10-20 pages) costs around $40. To make matters even worse, it often takes months and months to get a paper through, a process which can be very stressful and time-consuming for all researchers involved.

In a new article, three researchers propose a new solution for that. They call it Plan U, for “Universal”.

Publishing a scientific paper roughly works like this: you work for months (or years) and have some noteworthy results. You gather them all up and look for a relevant scientific journal. You decide on the best one, tailor your text to the specific requirements of that journal, and wait. After a while, your paper will be peer-reviewed and you’ll get a reply. In the vast majority of cases, that reply includes some edits or changes that you need to make. Alternatively, the manuscript can be rejected altogether, but let’s work in an optimistic scenario. You make those changes, re-send the manuscript, it gets accepted and BOOM! You’re a published author.

Almost.

The waiting period for publication is usually a few months, but eventually, the article gets published. If you're publishing for the first time, you're probably excited and would like to share your work with friends and family, so you give them a link. Except they can't actually access it, even after the long wait, unless you've published in one of the few open-access journals or they pay a hefty subscription to the journal.

Understandably, this dampens some of the joy and excitement of a young researcher, but at the end of the day, it’s getting the science out in the world that matters, right?

Well, there are a few problems with that too. For starters, money. Top universities' journal subscriptions average $5 million, and even a medium-sized university might pay close to $1 million every year just to access scientific journals. That's a lot of money, but no campus can survive without access to journals. And even if your university does pay a hefty sum to access journals, it might not have access to the ones you want or need, or even to the ones you choose to publish in.

In the time it takes to publish the paper, someone working in the same field as you doesn't have access to your work, so they might be needlessly doing something similar or, at the very least, they aren't benefitting from your results. This can lead to a lot of wasted time, and this is where researchers believe Plan U can make a difference.

The key idea with Plan U is to call on the organizations that fund research (whether government agencies, charities, or other institutions) to require scientists to post drafts of their papers on so-called “preprint servers” before submitting them to academic journals. This isn't a novel idea: it has already been applied at a smaller scale with arXiv (pronounced “archive”), a repository of electronic preprints that aren't exactly peer-reviewed but are approved for posting by a team of moderators. ArXiv hosts papers in a number of different fields, particularly physics, mathematics, economics, and statistics: generally speaking, stuff that includes a lot of math. In some fields, like mathematics and physics, almost all papers are already self-archived on the arXiv repository. More recently, a similar server (bioRxiv) was launched for the biological sciences, and it's becoming more popular with each passing year.

Simply put, the approach has shown its worth, so why not try to expand it? This is what Plan U is all about.

In a new article published June 4 in the open-access journal PLOS Biology, Richard Sever and John Inglis from Cold Spring Harbor Laboratory and Mike Eisen from UC Berkeley describe this plan in great detail.

“Because preprint servers do not perform peer review (see below), they are able to operate at low per-paper costs that can be covered via central funding, making them free at the point of use to both authors and readers. With such low per-paper costs, the world’s entire research output could be accommodated on preprint servers relatively easily,” their paper reads.

“Plan U would establish preprint servers as the de facto means for disseminating all scientific research, which has long been the case in fields covered by arXiv.”

The idea is to reduce costs while also making science more accessible to the entire world.

Plan U would also be expected to speed up research itself, allowing experts to build on the work of their peers more quickly, and to improve reproducibility, which has become a major concern in some fields of the life sciences.

According to their analysis, everyone would win from this, although presumably the profits of journal publishers would dwindle substantially. There is also a precedent: since 2017, the Chan Zuckerberg Initiative (CZI) has required all its grantees to deposit preprints prior to or at submission for formal publication. In practice, this has usually meant depositing manuscripts on bioRxiv.

“Plan U therefore creates fertile ground for a dynamic new ecosystem, opening opportunities for experimentation with peer review rather than prescribing a particular process, endpoint, or business model. Such flexibility may be of particular benefit to scientific societies, nonprofit organizations, journals, and self-organizing groups of academics who wish to improve on existing approaches to peer review and/or explore alternative ways to evaluate academic output.”

So far, there haven't been many reactions to this idea, and even if reactions turn out to be positive (which is an optimistic stretch at this stage), science publishing is a $10B/year industry, and an industry that size won't change course in a year. However, the number of voices saying that a change should happen is growing. The current environment is unhealthy, and there are growing concerns that it is already leading to bad science, not to mention the life-long mental stress and strain which seem to go hand in hand with academia. The giants of the scientific publishing industry have made huge profits for decades, thriving in this environment. For the sake of scientists, students, and the whole of society, this needs to change.

Plan U isn’t a panacea for all that. It might not even be realistic or feasible. But it’s an interesting start, and it’s one which should, at the very least, be discussed.

Journal Reference: Sever R, Eisen M, Inglis J (2019) Plan U: Universal access to scientific and medical research via funder preprint mandates. PLoS Biol 17(6): e3000273. https://doi.org/10.1371/journal.pbio.3000273

The National Academies of Sciences, Engineering, and Medicine launches website about vaccine facts

The work of the National Academies spurs progress by connecting the understanding of science, engineering, and medicine to the advising of national policy and practice. The studies they conduct have lasting impacts, from guiding NASA's agenda for space exploration, to charting the course for improving the quality of health care, to proposing effective strategies to guard against cyberattacks. When faced with a complex question, the National Academies bring together experts from across disciplines to look at the evidence with fresh eyes and an openness to insights from other fields.

To counter misinformation and pseudoscience about vaccines that is fueling measles outbreaks in the United States and other countries, the National Academies last week launched a website that provides clear, concise, and evidence-based information on the most frequently asked vaccine safety questions.

In a joint statement, the three National Academies presidents said the evidence base includes a number of the group’s studies that examined vaccine access, safety, scheduling, and possible side effects.

“Our work has validated that the science is clear—vaccines are extremely safe,” they said. The presidents are Marcia McNutt, PhD, with the National Academy of Sciences, C.D. Mote, Jr, PhD, with the National Academy of Engineering, and Victor Dzau, MD, with the National Academy of Medicine.

“Given our shared congressional mandate to advise the nation, we are compelled to draw attention to these facts in order to inform better decision-making at a time when it is urgently needed to protect the health of communities in our country and around the world. We call on our professional colleagues everywhere to share these facts as widely as possible,” they wrote.

The National Academy of Sciences was established in 1863 by an Act of Congress, signed by President Lincoln, as a private, nongovernmental institution to advise the nation on issues related to science and technology. Members are elected by their peers for outstanding contributions to research.

The National Academy of Engineering was established in 1964 under the charter of the National Academy of Sciences to bring the practices of engineering to advising the nation. Members are elected by their peers for extraordinary contributions to engineering.

The National Academy of Medicine (formerly called the Institute of Medicine) was established in 1970 under the charter of the National Academy of Sciences to advise the nation on medical and health issues. Members are elected by their peers for distinguished contributions to medicine and health.

Academy members are among the world’s most distinguished scientists, engineers, physicians, and researchers; more than 300 members are Nobel laureates. The three Academies work together as the National Academies of Sciences, Engineering, and Medicine to provide independent, objective analysis and advice to the nation and conduct other activities to solve complex problems and inform public policy decisions.


Art-integrated science lessons make some students ‘learn at 105%’, new study finds

Mixing arts into science lessons can help students better retain information and be more creative in their learning process.

Planets collage. Image via Pixabay.

Is there a place for arts in science? We’ve tackled this idea before (read about it here and here) and, long story short, we feel the answer is a confident “yes”. A new study supports our view: the team, led by the vice dean of academic affairs for the School of Education at the Johns Hopkins University (JHU), reports that art isn’t only desirable in the classroom — it’s “absolutely needed”.

Rappin’, dancin’, drawin’ science

“Our study provides more evidence that the arts are absolutely needed in schools. I hope the findings can assuage concerns that arts-based lessons won’t be as effective in teaching essential skills,” says Mariale Hardiman, the study’s first author.

Past research has shown that dabbling in the arts helps improve students’ academic outcomes and memory capacity, the team writes. However, it was still unclear whether instructing students on art, incorporating it into lesson plans, general exposure to it, or a combination of these factors, was responsible for the observed benefits.

The team writes that one of the biggest hurdles teachers are facing is that “children forget much of what they learn” in class, so the content of the previous year has to be taught again. The efforts of the current study focused on improving students’ retention of information (specifically science content) through the integration of art in the curriculum.

They followed 350 fifth-grade students in 16 different classrooms across 6 schools in Baltimore, Maryland, throughout the 2013 school year. Each student was randomly assigned to one of two classroom pairs: astronomy and life science, or environmental science and chemistry. The experiment consisted of two sessions, each spanning three to four weeks.

In each session, students first took an arts-integrated class or a conventional class — and switched for the second session. Thus, the team ensured that all students experienced both types of classes and that all eleven teachers involved in the study taught both types of classes.

Art-integrated classes included activities such as rapping or sketching to support learning new terms and expanding their vocabulary. The students also designed collages to separate living and non-living things. In conventional classes, these activities were matched with your regular educational process: reading paragraphs of texts with vocabulary words aloud in a group and completing worksheets.

To estimate how well each approach worked, the team analyzed students' content retention before, right after, and 10 weeks after the study ended. Those at a basic reading level before the study began showed a quite surprising 105% long-term content retention on average. The authors themselves seem surprised by this result, explaining that:

“The value of 105% […] is an actual value. This value for Basic Readers in the Arts Integrated condition resulted from students demonstrating enhanced retained content on the followup testing beyond what was initially demonstrated on the posttest,” they write in the paper.
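To make the arithmetic concrete: the retention score compares the 10-week follow-up test against the immediate posttest, so values above 100% are possible. Here's a minimal sketch with invented scores (not from the paper):

```python
def retention_percent(posttest_score: float, followup_score: float) -> float:
    """Delayed retention as a percentage of what was initially learned.
    Values above 100% mean students scored higher at the 10-week
    follow-up than they did on the immediate posttest.
    """
    return 100 * followup_score / posttest_score

# Hypothetical scores: 20 items correct right after the unit,
# 21 correct ten weeks later.
print(retention_percent(20, 21))  # 105.0
```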

So not only did their art-infused approach help students remember the subjects being taught during the study, it helped them better retain content they were later exposed to. The team explains that students remembered more in the delayed post-testing because they kept singing songs they had learned during their art activities. Much like how a catchy tune gets stuck in your head the more you think of it or sing it aloud, these songs helped students hold onto educational content in the long term.

The study also found that students who took a conventional session first remembered more science in the second (art-integrated) session, while students who took the art-integrated session first maintained their performance in the second session. The differences between the two groups weren't statistically significant, the authors note, but they do suggest that students carry the creative problem-solving skills learned in arts-and-science classes over to conventional lessons, enhancing their ability to learn.

Looking forward, Hardiman hopes that educators and researchers will put their methods to use, which will serve to expand on their study and improve understanding of arts integration in schools. They also say that integrating arts into science lessons could be a very powerful tool for students who struggle the most with skills such as reading, because so much of the conventional curriculum relies on students reading material to learn — so if they cannot read very well, their ability to learn also suffers.

“Our data suggests that traditional instruction seems to perpetuate the achievement gap for students performing at the lower levels of academic achievement,” says Hardiman.

“We also found that students at advanced levels of achievement didn't lose any learning from incorporating arts into classrooms, but potentially gained benefits such as engagement in learning and enhanced thinking dispositions. For these reasons, we would encourage educators to adopt integrating the arts into content instruction.”

The paper “The effects of arts-integrated instruction on memory for science content” has been published in the journal Trends in Neuroscience and Education.