Author Archives: Tibi Puiu

About Tibi Puiu

Tibi is a science journalist and co-founder of ZME Science. He writes mainly about emerging tech, physics, climate, and space. In his spare time, Tibi likes to make weird music on his computer and groom felines.

Scrambled DNA of extinct rat suggests there’s no hope to resurrect the woolly mammoth

The Christmas Island rat. Credit: Public Domain.

In recent times, some high-profile geneticists gained a lot of publicity when they announced they were working to resurrect the woolly mammoth, an iconic megafauna species that went extinct during the last ice age, some 10,000 years ago. The whole thing gave off massive Jurassic Park vibes, and given its ambitious scope, the mission was widely picked up by the media. After all, is there anything that science can’t do?

The problem is that, in reality, this challenge could prove virtually impossible. Richard Feynman once said ‘science is imagination in a straitjacket,’ alluding to the fact that wild ideas, by themselves, are not enough to make a breakthrough. For imagination to become reality, it needs to be materialized within physical constraints — and a new study suggests there’s a hard floor when it comes to reconstructing the genetic material of long-extinct species.

Extinct species may be dead for good

Thomas Gilbert at the University of Copenhagen in Denmark set out to probe the limits of CRISPR — a powerful tool for editing genomes that allows researchers to easily alter DNA sequences and modify gene function.

Colossal, a bioscience company recently co-founded by Harvard University geneticist George Church, aims to leverage this technology to resurrect the woolly mammoth, or at least a creature very closely resembling one.

In a nutshell, the idea is to sequence DNA from samples of mammoth tusks, bones, and other materials. This genetic material would then be edited into Asian elephant stem cells, which would be used to create a fertilized egg carried to term in an artificial womb, breeding a mammoth-elephant hybrid.

To explore the feasibility of such a lofty goal, Gilbert’s team attempted to reconstruct the genome of the Christmas Island rat, also known as Maclear’s rat (Rattus macleari), a species of rodent that went extinct in the early 1900s.

The team was able to reassemble most of the extinct rodent’s genome thanks to bits of code gleaned from the genome of the closely related brown rat (Rattus norvegicus), also known as the Norway rat. Researchers were able to recover 95% of the Christmas Island rat’s genome, which sounds like a lot. Except it’s not.

The 5% of the genome they couldn’t reconstruct is actually the most crucial part, since it corresponds to the genes that differentiate the Christmas Island rat from its living relatives.

Some of the genes the researchers were able to recover include those related to traits such as the animal’s hair and ears. The Christmas Island rat had characteristically long black hair and round ears. However, many other genes were lost, their DNA sequences broken up into many tiny pieces that cannot be reassembled.

Lost genes include those involved in the rat’s immune system and sense of smell. Cutting and pasting genes from another rat species is not really a fix either: smell plays a crucial role in foraging for food, avoiding predators, and mating, so a modified animal might look and behave differently from the original extinct species.

Credit: Royal BC Museum Victoria.

Gilbert describes reassembling the genome of an extinct species as trying to piece together every page of a shredded book. If you have an intact copy of the original book, you should be able to reconstruct the original material perfectly. It might take you a while, but you’ll get there. But herein lies the problem: there are no more original copies for the genome of an extinct species.

Your next best bet is to compare your shredded pages to a similar book, but that means you’ll never be able to recover the missing pages that don’t match, even if you manage to deduce some of the content. The Christmas Island rat diverged from its brown rat cousin about 2.6 million years ago. Due to this evolutionary divergence, some of the genetic information sequenced from old Christmas Island rat biological samples is simply lost. And this divergence is pretty similar to that between the woolly mammoth and the Asian elephant.

Some of this missing data could be recovered using current solutions or some that will be developed in the future. But the sad reality may be that some data will never be recovered, which makes the perfect resurrection of an extinct species impossible.

That being said, it’s not impossible to breed an animal that looks and behaves very much like what you’d expect from an original mammoth or Tasmanian tiger. It’s just that these would be hybrids of sorts, with combined features from both extinct and living species.

Ultimately, these findings don’t change much about the scientific projects currently underway to resurrect extinct species. However, the study is still valuable because it helps clarify the limits of what’s actually possible. With a tighter straitjacket, maybe scientists’ imagination will be diverted to more useful avenues of research. For instance, some of these efforts may be better spent on saving vulnerable species from extinction. Just saying.

The findings appeared in the journal Current Biology.

Amazon indigenous people barely get dementia. Could a pre-industrial lifestyle protect against Alzheimer’s?

Nearly 1 in 10 Americans over the age of 65 have dementia, and as the U.S. struggles with an aging population, the proportion of elderly people with Alzheimer’s and other neurodegenerative diseases is bound to increase. But in the Amazon basin, where some indigenous people still lead a subsistence lifestyle, as they have for hundreds of years, largely isolated from industrialized society, the rate of dementia hovers at around just 1%. These findings, reported in a new study from the University of Southern California, suggest that the Western lifestyle may be seriously putting people at risk of dementia in old age.

“Something about the pre-industrial subsistence lifestyle appears to protect older Tsimane and Moseten from dementia,” said Margaret Gatz, the lead study author and professor of psychology, gerontology and preventive medicine at the University of Southern California.

The Tsimane have little or no access to health care but are extremely active and consume a high-fiber diet that includes vegetables, fish and lean meat. (Photo/Courtesy of the Tsimane Health and Life History Project Team)

Gatz and colleagues traveled to the Bolivian Amazon jungle, where they closely studied the elderly of the Tsimane’ and Mosetén tribes — two indigenous peoples that have remained largely isolated from urban life elsewhere in the country.

The Tsimane’ number about 16,000 people living in mostly riverbank villages scattered across about 3,000 square miles of the Amazon jungle. They are forager-farmers who fish, hunt, and cut down trees with machetes, which keeps everyone very physically active throughout their lifetimes.

The neighboring Mosetén, who number around 3,000 and have close cultural ties with the Tsimane’, also reside in rural villages and rely on subsistence agriculture. However, they live closer to towns and have schools, health posts, and access to roads and electricity. Within the last decade, the Mosetén have also received cell phone service and running water.

Researchers employed computed tomography (CT) brain scans, cognitive and neurological tests, and questionnaires to assess mental health among the Tsimane’ and Mosetén aged 60 and over.

The study found just 5 cases of dementia among 435 Tsimane’ and one case among 169 Mosetén, a much lower prevalence than in Western countries. Previous studies of indigenous populations in Australia, North America, Guam, and Brazil found dementia prevalence ranging from 0.5% to 20%. The authors note that the apparently higher rate of dementia among older adults from indigenous tribes elsewhere in the world could be due to greater contact with their industrialized neighbors and the subsequent adoption of more sedentary lifestyles.

In the same over-60 groups, the researchers also diagnosed about 8% of elderly Tsimane’ and 10% of Mosetén with mild cognitive impairment (MCI) — the stage between the expected cognitive decline of normal aging and the more serious decline of dementia. This condition is characterized by memory loss and a decline in cognitive abilities, such as language and spatial reasoning. The MCI rates were comparable to those encountered in high-income countries.

In high-income countries with high rates of dementia among older adults, the population generally does not engage in the recommended amount of physical activity and has a diet rich in sugars and fats. As a result, older adults are more susceptible to heart disease and brain aging. In contrast, the Tsimane’ people have unusually healthy hearts for their age. That’s not surprising considering they also have the lowest prevalence of coronary atherosclerosis of any population in the world.

Alzheimer’s has been previously associated with hypertension, diabetes, cardiovascular diseases, physical inactivity, and even air pollution. It’s no coincidence that these chronic diseases and health problems are staples of modern Western lifestyles.

In 2021, the same team from the University of Southern California found that the Tsimane indigenous people of the Bolivian Amazon experience less brain atrophy than their American and European peers. Their brain volume decreases with age at a rate roughly 70% lower than in Western populations.

“We’re in a race for solutions to the growing prevalence of Alzheimer’s disease and related dementias,” said Hillard Kaplan, a study co-author and professor of health economics and anthropology at Chapman University who has studied the Tsimane for two decades. “Looking at these diverse populations augments and accelerates our understanding of these diseases and generates new insights.”

If the Tsimane’ and Mosetén offer any indication, a pre-industrial lifestyle can offer significant protection against dementia. But that doesn’t mean we can all revert to foraging in the woods and living under the stars. In case someone is romanticizing life in the Amazon jungle, bear in mind that the Tsimane’ have an average of nine children per family and a life expectancy of just over 50 years, compared to the world average of 71.5 years. So while it may be true that indigenous Amazon people rarely suffer from dementia in old age, it is also true that fewer of them actually make it that far in the first place.

The findings were published in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.

New test maps acidity in the mouth to spot cavities before they form

The phrase ‘prevention is better than the cure’ is a fundamental principle of modern health, and your oral health should be no different. One of the best ways to prevent cavities is by brushing and flossing correctly. But most people already do this and still end up with some cavities eventually. Taking prevention to the next level, scientists at the University of Washington have now developed an optical method that can identify the most at-risk teeth by mapping high acidity in the dental plaque that covers them.

Shining light on teeth covered with a fluorescent dye solution can reveal where the enamel is most at risk from acidity. Credit: University of Washington/IEEE Xplore.

Dental plaque is a byproduct of the bacteria that live in our mouths as they consume sugars, starches, and other bits of food that haven’t been properly cleaned from the teeth. If plaque stays on the teeth for more than a few days, it hardens into a substance called tartar. In time, the microorganisms in the plaque release acids that wear down the tooth enamel, then the next layer, called dentin, before reaching the pulp. By the time acid attacks the pulp, you’ve officially got a new cavity.

But what if we could monitor this acidic activity and stop it before it crosses a point of no return that triggers the cavity formation? That’s exactly what researchers at the University of Washington set out to do. They’ve devised a system, which they call O-pH, that measures the pH levels, or acidity, of the plaque covering each tooth under inspection.

In order to map the acidity of the plaque, a person’s teeth are first covered in a non-toxic chemical dye that reacts with light to produce fluorescence. An optical probe then detects this fluorescent signal, which can reveal the exact acidity of the underlying dental plaque.

The proof of concept was demonstrated on a small sample of 30 patients, aged 10 to 18. Children and teenagers were selected because their enamel is much thinner than that of adults, which makes detecting any sign of erosion — and consequently a potential cavity — early on very important. The tooth acidity was read before and after sugar rinses, as well as pre- and post-professional dental cleaning.

In the future, this acidity test could become standard practice in dental clinics. Eric Seibel, senior author and research professor of mechanical engineering at the University of Washington, says that when a patient comes in for routine teeth cleaning, “a dentist would rinse them with the tasteless fluorescent dye solution and then get their teeth optically scanned to look for high acid production areas where the enamel is getting demineralized.” The dentist and patient could then form a treatment plan to reduce the acidity and avoid costly cavities.

“We do need more results to show how effective it is for diagnosis, but it can definitely help us understand some of your oral health quantitatively,” said Manuja Sharma, lead author and a doctoral student in the UW Department of Electrical and Computer Engineering.  “It can also help educate patients about the effects of sugar on the chemistry of plaque. We can show them, live, what happens, and that is an experience they’ll remember and say, OK, fine, I need to cut down on sugar!”

The O-pH system was described in IEEE Xplore.

Is information the fifth state of matter? Physicist says there’s one way to find out

Credit: Pixabay.

Einstein’s theory of relativity was revolutionary on many levels. One of its many groundbreaking consequences is that mass and energy are basically interchangeable. The immediate implication is that you can make mass — tangible matter — out of energy, thereby explaining how the universe as we know it came to be during the Big Bang, when a heck of a lot of energy turned into the first particles. But there may be much more to it.

In 2019, physicist Melvin Vopson of the University of Portsmouth proposed that information is equivalent to mass and energy, existing as a separate state of matter, a conjecture known as the mass-energy-information equivalence principle. This would mean that every bit of information has a finite and quantifiable mass. For instance, a hard drive full of information is heavier than the same drive empty.

That’s a bold claim, to say the least. Now, in a new study, Vopson is ready to put his money where his mouth is, proposing an experiment that can verify this conjecture.

“The main idea of the study is that information erasure can be achieved when matter particles annihilate their corresponding antimatter particles. This process essentially erases a matter particle from existence. The annihilation process converts all the [remaining] mass of the annihilating particles into energy, typically gamma photons. However, if the particles do contain information, then this also needs to be conserved upon annihilation, producing some lower-energy photons. In the present study, I predicted the exact energy of the infrared red photons resulting from this information erasure, and I gave a detailed protocol for the experimental testing involving the electron-positron annihilation process,” Vopson told ZME Science.

Information: just another form of matter and energy?

The mass-energy-information equivalence (M/E/I) principle combines Rolf Landauer’s application of the laws of thermodynamics to information — which treats information as another form of energy — with Claude Shannon’s information theory, which introduced the digital bit. This M/E/I principle, along with its main prediction that information has mass, is what Vopson calls the 1st information conjecture.

The 2nd conjecture is that all elementary particles store information content about themselves, similarly to how living things are encoded by DNA. In another recent study, Vopson used this 2nd conjecture to calculate the information storage capacity of all visible matter in the Universe. The physicist also calculated that — at a current 50% annual growth rate in the number of digital bits humans are producing — half of Earth’s mass would be converted to digital information mass within 150 years.

However, testing these conjectures is not trivial. For instance, a 1 terabyte hard drive filled with digital information would gain a mass of only 2.5 × 10⁻²⁵ kg compared to the same erased drive. Measuring such a tiny change in mass is impossible even with the most sensitive scale in the world.
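For context, that figure can be reproduced with a back-of-the-envelope calculation that combines the Landauer/M-E-I energy of a single bit with E = mc². The room-temperature value of roughly 300 K below is an assumption made here for illustration, not a number quoted from Vopson’s paper:

m_{\text{bit}} = \frac{k_B T \ln 2}{c^2} \approx \frac{(1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693}{(3 \times 10^{8}\,\mathrm{m/s})^2} \approx 3 \times 10^{-38}\,\mathrm{kg}

m_{\mathrm{1\,TB}} \approx 8 \times 10^{12}\,\mathrm{bits} \times 3 \times 10^{-38}\,\mathrm{kg/bit} \approx 2.5 \times 10^{-25}\,\mathrm{kg}

That is far below what any laboratory balance can resolve, which is why Vopson turned to particle physics for a test instead.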

Instead, Vopson has proposed an experiment that tests both conjectures using a particle-antiparticle collision. Since every particle is supposed to contain information, which supposedly has its own mass, then that information has to go somewhere when the particle is annihilated. In this case, the information should be converted into low-energy infrared photons.

The experiment

According to Vopson’s predictions, an electron-positron collision should produce two high-energy gamma rays, as well as two infrared photons with wavelengths around 50 micrometers. The physicist adds that altering the samples’ temperature wouldn’t influence the energy of the gamma rays, but would shift the wavelength of the infrared photons. This is important because it provides a control mechanism for the experiment that can rule out other physical processes.
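To put rough numbers on that prediction, the infrared photon energy follows from the Planck relation, while the information-erasure energy is expected to be of order k_B T ln 2 per bit and therefore scales with temperature. The arithmetic below is an illustrative conversion done here, not a figure quoted from Vopson’s protocol:

E_{\text{IR}} = \frac{hc}{\lambda} \approx \frac{(6.63 \times 10^{-34}\,\mathrm{J\,s}) \times (3 \times 10^{8}\,\mathrm{m/s})}{50 \times 10^{-6}\,\mathrm{m}} \approx 4 \times 10^{-21}\,\mathrm{J} \approx 0.025\,\mathrm{eV}

E_{\gamma} = m_e c^2 \approx 511\,\mathrm{keV} \quad \text{(fixed by the electron's rest mass, independent of temperature)}

The contrast explains why temperature makes a clean control knob: heating or cooling the sample should shift only the faint infrared signature, leaving the annihilation gamma rays untouched.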

Validating the mass-energy-information equivalence principle could have far-reaching implications for physics as we know it. In a previous interview with ZME Science, Vopson said that if his conjectures are correct, the universe would contain a stupendous amount of digital information. He speculated that — considering all these things — the elusive dark matter could be just information. Only 5% of the universe is made of baryonic matter (i.e. things we can see or measure), while the remaining 95% of the mass-energy content consists of dark matter and dark energy — fancy terms physicists use for things they cannot yet observe or explain.

Then there’s the black hole information loss paradox. According to Einstein’s general theory of relativity, the gravity of a black hole is so overwhelming that nothing can escape its clutches within its event horizon — not even light. But in the 1970s, Stephen Hawking and collaborators sought to finesse our understanding of black holes by using quantum theory; and one of the central tenets of quantum mechanics is that information can never be lost. One of Hawking’s major predictions is that black holes emit radiation, now called Hawking radiation. But with this prediction, the late British physicist had pitted the ultimate laws of physics — general relativity and quantum mechanics — against one another, hence the information loss paradox. The mass-energy-information equivalence principle may lend a helping hand in reconciling this paradox.

“It appears to be exactly the same thing that I am proposing in this latest article, but at very different scales. Looking closely into this problem will be the scope of a different study and for now, it is just an interesting idea that must be followed,” Vopson tells me.

Finally, the mass-energy-information equivalence could help settle a whimsical debate that has been gaining steam lately: the notion that we may all be living inside a computer simulation. The debate can be traced to a seminal paper published in 2003 by Nick Bostrom of the University of Oxford, which argued that a technologically adept civilization with immense computing power could simulate new realities with conscious beings in them. Bostrom argued that if such civilizations ever run many simulations of this kind, the probability that we are living in one is close to one.

While it’s easy to dismiss the computer simulation theory, once you think about it, you can’t disprove it either. But Vopson thinks the two conjectures could offer a way out of this dilemma.

“It is like saying, how a character in the most advanced computer game ever created, becoming self-aware, could prove that it is inside a computer game? What experiments could this entity design from within the game to prove its reality is indeed computational?  Similarly, if our world is indeed computational / simulation, then how could someone prove this? What experiments should one perform to demonstrate this?”

“From the information storage angle – a simulation requires information to run: the code itself, all the variables, etc… are bits of information stored somewhere.”

“My latest article offers a way of testing our reality from within the simulation, so a positive result would strongly suggest that the simulation hypothesis is probably real,” the physicist said.

8,000-year-old skeletons in Portugal could be world’s oldest mummies

After they revisited photos of ancient human skeletons first exhumed in Portugal’s Sado Valley in the 1960s, archaeologists now believe that the 8,000-year-old remains went through a mummification practice before their burial. This would make the remains the oldest evidence for Mesolithic mummification in Europe. In fact, it could very well be the earliest evidence of mummification in the world.

Researchers performed experiments to study how the human body decomposes in various conditions and positions. This illustration depicts three states of soft tissue decomposition, from the fully fleshed body on day one to body desiccation seven months later. Credit: European Journal of Archaeology.

The oldest evidence of deliberate mummification in Egypt, the most famous region in the world for mummies, is about 5,500 years old. However, researchers believe mummification may have been much more common during prehistoric times and could in fact be much older — it’s just that evidence is hard to come by due to the fragile nature of mummified tissue.

But using a clever technique, it may be possible to tell whether decomposed remains may have originally undergone mummification, significantly extending the timeline of such burial practices.

Excavations in the Sado Valley in southern Portugal, at the sites of Arapouco and Poças de S. Bento, between 1958 and 1964 recovered more than 100 skeletons dating between 8,000 and 7,000 years ago. Unfortunately, much of the original documentation for these extraordinary finds was lost, including photographs, site plans, and field drawings.

That was until João Luís Cardoso, an archaeologist at the Open University in Lisbon, came across three rolls of film while studying a local archive.

These verified photos depict 13 bodies exhumed in 1961 and 1962, which Cardoso and colleagues used to reconstruct their likely burial positions using an archaeothanatological analysis. Based on knowledge of natural decay processes, this method has made it possible to reconstruct in detail how humans have historically dealt with their dead.

An illustration comparing the burial of a fresh cadaver and a desiccated body that has undergone guided mummification. Credit: Uppsala University and Linnaeus University in Sweden and University of Lisbon in Portugal.

In addition to observations about the spatial distribution of the ancient bones from Sado Valley, forensic anthropologist Hayley Mickleburgh performed decomposition experiments to predict how human corpses in different burial positions would look if they had or had not been mummified.

Together, these observations suggest that some of these remains must have been mummified. Although there was no soft tissue left, the archaeologists reached this conclusion based on deductions from indirect evidence like the position of the bodies, with their knees bent and pressed against the chest, as well as the presence of sediment infill around the bones and the absence of disarticulation. An unprepared decomposing corpse will disarticulate at weak joints relatively quickly after its burial, but mummified bodies still preserve articulation.

The authors of the new study believe that before being buried, the desiccating bodies were gradually tightened with ropes, binding the limbs in place and compressing the remains into the desired position. This would explain some of the signs of mummification, which was likely performed to ease transport to the grave and to preserve the body’s lifelike shape after burial.

Overall, the Portuguese researchers strongly believe that prehistoric mummification may have been much more widespread across the world than previously thought, despite the lack of direct evidence of soft tissue. This is why follow-up observations of ancient archaeological sites using archaeothanatological analysis are paramount in order to uncover new robust evidence of pre-burial practices in prehistory. In other words, this may just be the beginning of a new exciting phase in mummy archaeology.

Whether or not the Sado Valley burials represent the oldest mummies in the world discovered thus far remains contested. The oldest confirmed mummies in the world are the 7,000-year-old Chinchorro mummies, found on Chile’s coast. But people likely mummified their dead much earlier than that, even in hunter-gatherer communities.

The findings appeared in the European Journal of Archaeology.

Geese may have been the first domesticated birds. It all started 7,000 years ago

Credit: Pixabay.

Although humans make up only a tiny fraction of all life on the planet, our impact on biodiversity and wildlife has been enormous. By some accounts, human activity is responsible for the loss of 80% of all wild mammals and about 50% of all plants. Much of this loss came from clearing land for farmed livestock raised for human consumption.

Just consider this fact: 70% of all birds on Earth are chickens and other poultry, whereas wild birds comprise a meager 30%. Were an alien archaeologist to visit our planet after humans went extinct, they would surely be staggered by the abundance of chicken fossils.

But before we became hooked on chicken eggs and hot wings, we most likely first started with geese.

Japanese archaeologists performing excavations at Tianluoshan, a Stone Age site dated between 7,000 and 5,500 years ago in China, found extensive evidence of goose domestication. They claim this is the earliest evidence of bird domestication reported thus far.

The team identified 232 goose bones, which paint a convincing picture that Tianluoshan may be the cradle of modern poultry.

First and foremost, the researchers performed radiocarbon dating on the bones themselves, rather than the sediments which covered the remains. This lends confidence that the goose bones are really as old as 7,000 years.

At least four bones belonged to juveniles no older than 16 weeks. This shows that they must have hatched at the site because it would have been impossible for them to fly in from somewhere else at their age. This is likely the case for the adult geese found there as well, given that wild geese don’t breed in the area today and probably didn’t 7,000 years ago either.

But, to be sure, the team led by Masaki Eda at the Hokkaido University Museum in Sapporo, Japan, thoroughly analyzed the chemical makeup of the ancient bones, showing that the water the birds drank was local. The strikingly uniform size of the geese is also highly indicative of captive breeding.

Although not by any means definitive, all of these lines of evidence converge on the same conclusion: geese were probably the first birds humans domesticated, and this happened more than 7,000 years ago in China.

New Scientist reports that other studies have claimed that chickens were the first domesticated birds, as early as 10,000 years ago, also in avian-loving northern China. But the evidence, in this case, has proven contentious. Genetic analysis suggests chickens were domesticated from wild birds called red junglefowl, but these birds do not live that far north. Furthermore, the chicken bones weren’t directly dated. The firmest evidence of chicken domestication only appeared 5,000 years ago.

While most domestication research has focused on dogs and cattle, it’s refreshing to see new perspectives on the evolutionary history of poultry, upon which our food security depends so much.

Scientists discover how genes from our parents may shape our behavior

Credit: Pixabay.

One major point of contention among psychologists has always been the nature versus nurture debate — the extent to which particular aspects of our behavior are a product of either inherited (i.e. genetic) or acquired (i.e. learned) influences. In a new study on mice, researchers at the University of Utah Health focused on the former, showing that genes inherited from each parent have their own impact on hormones and important neurotransmitters that regulate our mood and behavior.

Intriguingly, some of these genetic influences are sex-specific. For instance, the scientists found that genetics inherited from mom can shape the decisions and actions of sons, while genes from dad have biased control over daughters.

I got it from my Mom and Dad

Like chromosomes, genes also come in pairs. Both mom and dad each have two copies, or alleles, of each of their genes, but each parent only passes along one copy of each to the child. These genes determine many traits, such as hair and skin color.

But it’s not only our outward appearance that is influenced by genes. In a new study, researchers found that tyrosine hydroxylase and dopa decarboxylase — two genes that are heavily involved in the synthesis of hormones and neurotransmitters like dopamine, serotonin, norepinephrine, or epinephrine — are expressed differently from maternally versus paternally inherited gene copies. These chemicals play a crucial role in regulating an array of important functions from mood to movement.

The genes are also involved in the production of the hormone adrenaline by the adrenal gland, which triggers the “fight or flight” response when we encounter danger or stress. Together, these pathways form the brain-adrenal axis.

“The brain-adrenal axis controls decision making, stress responses, and the release of adrenaline, sometimes called the fight or flight response. Our study shows how mom’s and dad’s genes control this axis in their offspring and affect adrenaline release. Mom’s control the brain and dad’s control the adrenal gland,” Christopher Gregg, principal investigator and associate professor in the Department of Neurobiology at the University of Utah Health, told ZME Science.

In order to investigate how inherited gene copies introduce maternal or paternal biases in the brain-adrenal axis, the researchers genetically modified mice to attach a fluorescent tag to the dopa decarboxylase enzyme. Using a microscope, they could tell if a gene was inherited from the mother (colored red) or from the father (colored blue).

An investigation of the entire mouse brain revealed 11 regions containing groups of neurons that only use mom’s copy of the dopa decarboxylase gene. Conversely, in the adrenal gland, there were groups of cells that exclusively expressed the gene copy inherited from dad.

These findings immediately led to an existential question: could our behavior be influenced by these genetic biases? To answer, the researchers analyzed mice with mutations that switched off one parent’s copy in a select group of cells while the rodents were foraging for food.

The mice were left to explore freely so any external influence was kept to a minimum. Their behavior had to be as natural as possible as they encountered various obstacles, which prompted them to either take risks or retreat to safety, before resuming their quest for finding food.

These movements and behaviors look random and chaotic, but a machine-learning algorithm developed by the researchers was able to pick up subtle but significant patterns. When these foraging patterns were broken down into modules, the researchers were able to identify behavioral differences associated with each parent’s copy of the dopa decarboxylase gene.

“We have faced a lot of skepticism from the scientific community. The way we study decision-making by using machine learning to detect patterns was hard for scientists to understand. The community was surprised to find that such well-studied genes (Th and Ddc) express the Mum and Dad’s gene copies in different brain and adrenal cells. We had to do a lot of work to show how strong the evidence is for our discovery,” Gregg said.

Christopher Gregg pictured. Credit: Jen Pilgreen.

Gregg had been interested in how biological factors influence our decisions since he first came across Daniel Kahneman’s work in behavioral economics while he was still a postdoc. In the 1970s, Kahneman and Amos Tversky introduced the term ‘cognitive bias’ to describe our systematic but flawed patterns of responses to judgment and decision problems.

For instance, the gambler’s fallacy leads us to feel certain that if a coin has landed heads up five times in a row, it’s much more likely to land tails up the sixth time. The odds are, in fact, still 50-50. One of the most pervasive and damaging biases is confirmation bias, which leads us to look for evidence confirming what we already think or suspect. If you’re disgruntled by the current political divides across the world, where each side seems unable to allow that the other side might be right about some things, you can point the finger at confirmation bias in many cases. There are many other biases, though, with Wikipedia listing at least 185 entries.

Now, Gregg seems convinced that these cognitive biases and some decision processes are deeply rooted in our biology, as well as that of other mammals. And with more research, it may be possible to modify maladaptive behaviors in a clinical setting, with potential new treatments for conditions like anxiety or depression.

The main caveat, however, is that all of this work has been performed on mice. Gregg and colleagues now want to develop and apply a new artificial intelligence platform called Storyline Health to human decision-making and behavior. They expect to discover genetic factors that control our behavior and cognition in a similar way to rodents.

“I am very excited about this new area that emerges from our work and merges decision making, machine learning and genetics. We are going to discover a lot of important new things about the factors that shape our decisions,” he said.

The findings appeared in the journal Cell Reports.

Amazon rainforest approaching tipping point of turning into savannah

A combination of climate change, deforestation, and fires has put immense strain on the Amazon basin — home to the single largest remaining tropical rainforest in the world, housing at least 10% of the world’s known biodiversity — since the early 2000s. A new study suggests that over three-quarters of the Amazon region is showing signs that rainforests may be nearing a tipping point, where they could turn into a savannah.

“There is a lot of discussion about the future of the Amazon rainforest and its tipping point. This comes from model studies that originally showed a fast loss of the Amazon rainforest. Since then there has been a lot of uncertainty about its future based on models not agreeing with each other, different future scenarios of climate change, etc. This leads us to look at the real world Amazon to actually see what is going on, and why wouldn’t you if the data is there? We use well-established indicators to measure the changing resilience of the forest, finding that 75% of the forest is losing resilience,” Chris Boulton, Associate Research Fellow at the University of Exeter in the UK, told ZME Science.


AR(1) values at each location are measured over time and approximate how much memory the forest has (how similar the forest is to how it was previously). Higher values suggest more memory, meaning the forest is responding more slowly to weather events and has lower resilience to them. Over the years, the increasing AR(1) values at individual locations, as well as the average behaviour over the region (shown by the time series), show that there has been a loss of resilience in the Amazon rainforest, particularly over the last 20 years. Credit: Boulton, et al.; Nature Climate Change

Resilience, Boulton added, refers to an ecosystem’s ability to recover from strenuous events such as droughts. Monitoring ecosystem resilience is paramount because it can help determine the magnitude and timing of ecological interventions, such as environmental watering, as well as provide trajectories we can expect in highly disturbed ecosystems subject to ongoing change. And few regions across the world are under as much stress as the Amazon basin is currently experiencing.
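The indicator behind these resilience estimates is essentially lag-1 autocorrelation, AR(1): the more this month’s vegetation signal resembles last month’s, the longer the forest’s ‘memory’ and the slower its recovery from perturbations. As a purely illustrative sketch (synthetic numbers, not the study’s satellite data or its exact estimator), a rolling-window AR(1) calculation looks something like this:

import numpy as np

def lag1_autocorrelation(x: np.ndarray) -> float:
    # Lag-1 autocorrelation: how strongly each value depends on the previous one.
    x = x - x.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

def rolling_ar1(series: np.ndarray, window: int = 60) -> np.ndarray:
    # AR(1) in a sliding window; a rising trend hints at slowing recovery, i.e. lower resilience.
    out = np.full(series.size, np.nan)
    for end in range(window, series.size + 1):
        out[end - 1] = lag1_autocorrelation(series[end - window:end])
    return out

# Toy example: a hypothetical monthly vegetation-index anomaly whose 'memory' slowly increases.
rng = np.random.default_rng(0)
n = 300
signal = np.zeros(n)
for t in range(1, n):
    memory = 0.3 + 0.5 * t / n
    signal[t] = memory * signal[t - 1] + rng.normal(scale=0.1)

ar1 = rolling_ar1(signal, window=60)
print(f"early AR(1): {np.nanmean(ar1[59:119]):.2f}, late AR(1): {np.nanmean(ar1[-60:]):.2f}")

In the study, it is a rising AR(1) trend between 1991 and 2016 that flags the loss of resilience; the upward drift in this toy series plays the same role.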

Aggressive economic expansion into the region over the past decades has supplanted tropical foliage with roads, dams, cattle farms, and huge soy plantations. Adding insult to injury are the hundreds of wildfires that have set large chunks of the iconic rainforest ablaze. In 2020 alone, fires razed more than 19 million acres of the world’s largest tropical forest.

With the forest habitat shredded, many endemic species are under threat of extinction, their previous role being filled by often invasive animals. For instance, we’re seeing giant anteaters being replaced by rats and Brazil nut trees making way for weeds.

Using remote satellite sensing data, Boulton and colleagues modeled changes in the resilience of the Amazon rainforest between 1991 and 2016, coming to some stark conclusions. The analysis revealed that 75% of the Amazon has been steadily losing resilience since the early 2000s, which in simple terms means that the rainforests are finding it increasingly difficult to recover after a big drought or fire.

“I think the biggest challenge with this work was the amount of robustness checking that needed to be done. To have such a striking result, all of our coauthors had to be confident that what we were seeing stood up to various tests,” Boulton said.

These concerning developments suggest to the study’s authors that the Amazon may be approaching a critical threshold. Once crossed, key regions of the Amazon may irreversibly transition into a new state, from lush rainforest to savanna.

The loss of resilience is most pronounced in areas closer to human activity, as well as in regions that receive less rainfall. That was to be expected. But what was particularly surprising was that the loss of resilience did not necessarily overlap with loss of forest cover. That’s worrisome because it suggests that ecosystems which look to be doing well from above may actually be more vulnerable to shifting their mean state than previously thought.

“On the surface, the Amazon may appear comfortable (by looking at the state of the forest), but you need indicators like the ones we use to really see its health. There is a section in the new IPCC report regarding the ‘committed response’ of the Amazon; that in the future, the Amazon may appear stable but the climate it is experiencing may not be good enough for it to survive. Because the forest overall responds slowly to change, it may have passed a tipping point without being realized from the outside,” Boulton said.

The study did not attempt to offer a timeline for this possible transformation of the rainforest. When such a threshold could be crossed, if things continue business as usual, remains a big unknown at this stage. But these alarming findings suggest that, if ecosystem resilience is any indication, the Amazon basin is heading towards this critical point of no return. Furthermore, the level of uncertainty is compounded by the many interdependencies that characterize a complex ecosystem like the Amazon.

“Losing part of the forest will also affect rainfall in other areas, which could create losses of resilience in areas where we do not see it at the moment. As for when, I think this is tough to answer, I am surprised to see these signals now over such a large area, and if others are too then it could give people a wake-up call to do something about it,” Boulton said.

Otodus megalodon. Credit: Flickr, Elena Regina.

Cold oceans may have helped Megalodon reach gargantuan proportions

Megalodon was the uncontested marine predator of its time. About the size of a bus — twice as long as the second-largest shark in history — this fierce sea monster must have been a sight to behold. But according to a new study, the truly huge Megalodons of this extinct shark species, which lived from the early Miocene to the end of the Pliocene, from 23 to 2.6 million years ago, may only have been found at higher latitudes, where the water is much colder. Around the equator, Megalodons were less impressive in size, though they were still a force to be reckoned with.

Schematic showing general body size distribution of Otodus megalodon, with body size increasing towards cooler waters at higher latitudes. Credit: Kenshu Shimada.

Like all sharks, Megalodon’s skeleton was mostly made of cartilage. This means that only its teeth and vertebrae have survived in the fossil record. But this seemingly limited fossil evidence can reveal a wealth of information about the intimate lives of ancient sharks, such as what they ate, how they bred, and much more.

In early 2021, Kenshu Shimada — a professor of paleobiology at DePaul University in Chicago — analyzed growth bands in Megalodon specimens, showing the fierce sharks gave live birth to the largest babies in the shark world, measuring about 2 meters (6.6 feet) in length. This study also estimated that Megalodon had a life expectancy of between 88 and 100 years.

In a new study published today in the journal Historical Biology, Professor Shimada took a new look at Megalodon teeth, re-examining their geographical occurrence and corresponding estimated body sizes to see whether water temperature may have had any notable influence on ancient sharks’ development.

The idea for this new study originated while Shimada was out on a family fishing trip to the Florida Keys with Martin Becker, a professor of environmental science at William Paterson University in New Jersey.

“After my daughter’s ‘big catch,’ we started to discuss where large fish live, leading to a conversation about different populations of Megalodon,” Shimada told ZME Science.

The researchers employed previously published data, some of which offered possible Megalodon nursery areas judging from the presence of much smaller teeth relative to other locations. But there is another possible explanation — the new study found that these supposed nursery areas were concentrated close to the equator, and this could have had a major impact on the size of these prehistoric sharks.

The ocean receives most of its heat along the equator, where incoming solar radiation is about double that received at the poles. Hence, sea surfaces are much warmer along the equator than at the poles.

Larger animals tend to thrive in cooler climates — an empirical observation known as Bergmann’s rule — because their size helps them retain heat more efficiently. So given the data they had at their disposal, the researchers think that the smaller Megalodon teeth found close to the equator might not necessarily all come from juveniles. It could just be that Megalodons in this region attained a much smaller individual body size as a result of the warmer water — and the differences may have been quite striking.
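The usual physical intuition behind Bergmann’s rule is a surface-to-volume argument, a textbook scaling sketch rather than a calculation from the new study: heat is generated throughout an animal’s volume but lost through its surface.

V \propto L^3, \qquad A \propto L^2, \qquad \frac{A}{V} \propto \frac{1}{L} \quad \text{for a body of linear size } L

Doubling the linear size halves the relative surface area through which heat escapes, so bulkier bodies hold onto their heat more easily in cold water; in warm equatorial water that advantage largely disappears.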

“Generally speaking, the new study found that Megalodon populations towards the equator were small, roughly 6.5 meters (21 feet) on average, while those away from the equator measured 11 meters (36 feet) on average.  While individuals that exceeded 15 meters (50 feet) must have been uncommon, the new study suggests that those that could have reached 20 meters (65 feet) must have lived more commonly in cooler environments away from the equator,” Shimada said.

These findings suggest that the most menacing Megalodons were concentrated closer to polar regions, although their geographical distribution must have shifted dramatically over their nearly 20-million-year history. Whether or not Megalodon’s propensity for following Bergmann’s rule had any impact on its eventual demise some 2.5 million years ago is still an open question, but as climate change today pushes marine animals increasingly towards the poles, perhaps there’s a cautionary tale hidden in these prehistoric patterns that we ought to pay close attention to.

“To our knowledge, Bergmann’s rule has never been recognized for sharks previously, but our research team contends that the lack of modern examples should not be taken as evidence for our idea to be false, especially because of the fact that there is no comparable modern shark to Megalodon.”

“The cause for the extinction of Megalodon is still uncertain, but one hypothesis states that competition with the rising great white shark could be the reason.  Even if that is the case, where climate change may not have been the direct cause for the demise of Megalodon, such climatic changes could have affected the availability of food sources for Megalodon as well as the success of competitors that could have indirectly contributed to its extinction,” Professor Shimada concluded.

Iconic Tyrannosaurus may actually be three distinct species of dinosaur

By a wide margin, Tyrannosaurus rex is the most famous and most beloved dinosaur in the world. Named the “king of the tyrant lizards,” everything about this ferocious Cretaceous predator looks like it was built to rule, from its muscular body stretching as long as a school bus, snout to tail, to its stiff skull adorned with 60 serrated teeth, each designed to pierce and grip the flesh of gigantic prey. And like a true king, T. rex reigned single-handedly over his domain, being the only species in the genus Tyrannosaurus. Or was it?

A recent analysis of T. rex fossils suggests that the genus may actually comprise three distinct but closely related species of Tyrannosaurus, citing both anatomical and stratigraphic evidence from three dozen specimens unearthed across the world.

The main differences identified in the three proposed distinct species are quite subtle but significant. These include the shape of the femur and tooth configuration.

Sharing the throne

Telling closely related species apart can be very challenging, especially those that have been extinct for over 65 million years. Take, for instance, a group of South American finches known as “capuchino” seedeaters. Many of these birds look very similar, apart from some subtle hints. Male dark-throated seedeaters and marsh seedeaters look exactly the same in terms of shape and size, except for the color of their plumage. The former has a black throat, while the latter has a white throat. Their songs are also different; one species might have trills in different sections of the song, while another might be buzzy.

Yet plumage color or vocalization can’t be preserved, so very closely related species of dinosaurs can be easily mashed together into a single one. Adding to the challenge is individual variation owed to age and sex.

Previously reported variations in femur shape and specimens with either one or two slender incisor teeth on each side of the front jaw suggested the Tyrannosaurus genus is richer than at first glance. Gregory Paul and colleagues picked up from here and compared the robustness of the femur in 24 T. rex specimens. They also measured the diameter of the base of teeth to see if a specimen had one or two slender incisor teeth.

Some specimens had sturdier femurs — as calculated using the length and circumference of the thigh bone — while others had more gracile femurs. The researchers found that robust femurs were twice as abundant as the gracile variety. If these substantial differences were owed to sex, you’d expect a ratio closer to 50/50. Robust femurs were also found in T. rex juveniles while some adult dinosaurs had gracile femurs, which rules out differences owed to age and developmental stage.
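One simple way to express such a robustness score from those two measurements is a circumference-to-length ratio; this is only an illustrative formulation, and the paper’s exact metric may differ:

R = \frac{C_{\text{femur}}}{L_{\text{femur}}}

A thicker circumference C relative to length L gives a higher R for stocky, robust femurs and a lower R for slender, gracile ones.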

Concerning tooth variation, the scientists found that T. rex specimens with only one incisor tooth also tended to have more gracile femurs.

And, finally, the researchers analyzed each specimen’s stratigraphy, the layering of the sedimentary deposits in which it was found. When a fossil is found in lower layers of sediment, it is older than fossils found in layers further up. Of the 37 Tyrannosaurus specimens included in this study, 28 were found in the Lancian upper Maastrichtian formations in North America, which are estimated to date from between 67.5 and 66 million years ago.

Only robust Tyrannosaurus femurs were found in the lower layer of sediment, and the spread in their robustness is comparable to that seen in other theropod species, indicating that only one species of Tyrannosaurus existed during this era. Only one gracile Tyrannosaurus femur was identified in the middle layer, with five more gracile femurs in the upper layer alongside other robust femurs. This suggests that the Tyrannosaurus specimens from higher sediment layers had diverged into more distinct forms than those found at lower levels.

“We found that the changes in Tyrannosaurus femurs are likely not related to the sex or age of the specimen. We propose that the changes in the femur may have evolved over time from a common ancestor who displayed more robust femurs to become more gracile in later species. The differences in femur robustness across layers of sediment may be considered distinct enough that the specimens could potentially be considered separate species.”

The lizard emperor, king, and queen

Besides T. rex, the researchers have nominated two potential new species: Tyrannosaurus imperator (tyrant lizard emperor) and Tyrannosaurus regina (tyrant lizard queen), both aptly christened to preserve the royal nature of the lineage. The first relates to specimens found at lower and middle sediment layers, characterized by more robust femurs and two incisor teeth. The second, T. regina, is related to specimens from upper and possibly middle layers of sediment, having slender femurs and one incisor tooth. Meanwhile, good old T. rex is connected with the upper and possibly middle layer of sediment, with specimens displaying a more robust femur while having only one incisor tooth.

As a caveat, the number of specimens isn’t very large — at least not large enough to make a convincing case for speciation based on such subtle morphological differences. After all, the authors themselves cannot rule out the possibility that the differences they’ve highlighted simply reflect extreme individual variation. Furthermore, the exact location within the sediment layers is not known for some specimens.

Even so, the mere tangible possibility that T. rex isn’t alone is fascinating and enthralling. Perhaps we’ll learn more about this royal lineage once more evidence surfaces.

The findings appeared in the journal Evolutionary Biology.

What color is a mirror? It’s not a trick question

Credit: Pixabay.

When looking into a mirror, you can see yourself or the mirror’s surroundings in the reflection. But what is a mirror’s true color? It’s an intriguing question for sure since answering it requires us to delve into some fascinating optical physics.

If you answered ‘silver’ or ‘no color’ you’re wrong. The real color of a mirror is white with a faint green tint.

The discussion itself is more nuanced, though. After all, a t-shirt can also be white with a green tint, but that doesn’t mean you can use it as the mirror in a makeup kit.

The many faces of reflected light

We perceive the contour and color of objects due to light bouncing off them that hits our retina. The brain then reconstructs information from the retina — in the form of electrical signals — into an image, allowing us to see.

Objects are initially hit by white light, which is basically colorless daylight. This contains all the wavelengths of the visible spectrum at equal intensity. Some of these wavelengths are absorbed, while others are reflected. So it is these reflected visible-spectrum wavelengths that we ultimately perceive as color.

When an object absorbs all visible wavelengths, we perceive it as black, while an object that reflects all visible wavelengths appears white to our eyes. In practice, no object absorbs or reflects 100% of incoming light — a point that matters when discerning the true color of a mirror.

Why isn’t a mirror plain white?

Not all reflections are the same. The reflection of light and other forms of electromagnetic radiation can be categorized into two distinct types of reflection. Specular reflection is light reflected from a smooth surface at a definite angle, whereas diffuse reflection is produced by rough surfaces that reflect light in all directions.

Credit: Olympus Lifescience.

A simple example of both types can be seen in a pool of water. When the water is calm, incident light is reflected in an orderly manner, producing a clear image of the scenery surrounding the pool. But if the water is disturbed by a rock, waves disrupt the reflection by scattering the reflected light in all directions, erasing the image of the scenery.

Credit: Olympus Lifescience.

Mirrors employ specular reflection. When visible white light hits the surface of a mirror at an incident angle, it is reflected back into space at a reflected angle that is equal to the incident angle. The light that hits a mirror is not separated into its component colors because it is not being “bent” or refracted, so all wavelengths are being reflected at equal angles. The result is an image of the source of light. But because the order of light particles (photons) is reversed by the reflection process, the product is a mirror image.

However, mirrors aren’t perfectly white because the material they’re made from is imperfect itself. Modern mirrors are made by silvering, or spraying a thin layer of silver or aluminum onto the back of a sheet of glass. The silica glass substrate reflects a bit more green light than other wavelengths, giving the reflected mirror image a greenish hue.

This greenish tint is imperceptible but it is truly there. You can see it in action by placing two perfectly aligned mirrors facing each other so the reflected light constantly bounces off each other. This phenomenon is known as a “mirror tunnel” or “infinity mirror.” According to a study performed by physicists in 2004, “the color of objects becomes darker and greener the deeper we look into the mirror tunnel.” The physicists found that mirrors are biased at wavelengths between 495 and 570 nanometers, which corresponds to green.

So, in reality, mirrors are actually white with a tiny tint of green.

U.S. Army tests its first high-energy laser weapon

Artist’s illustration of a Stryker-mounted laser weapon taking out enemy airborne targets. Credit: Northrop Grumman.

The U.S. Army is just a few small steps away from fielding its first combat-ready, high-powered laser weapon. Over the summer, such a weapon was mounted on a Stryker military vehicle and used in tests at Fort Sill, Oklahoma, in a “combat shoot-off” against a series of possible combat scenarios. The first platoon of four laser-mounted Strykers is expected to join the ranks of the army in early 2022.

“This is the first combat application of lasers for a maneuver element in the Army,” said Lieutenant General L. Neil Thurgood in a statement to the press.

“The technology we have today is ready. This is a gateway to the future,” said Thurgood, who is the director for hypersonics, directed energy, space and rapid acquisition.

During the shoot-off, defense contractors Northrop Grumman and Raytheon each brought a 50-kilowatt laser weapon to the field in order to demonstrate short-range air defense (SHORAD) against a series of simulated threats and combat scenarios. These included drones, rockets, artillery, and mortar targets.

Laser-equipped Stryker on the field during tests. Credit: U.S. Army/Jim Kendell.

Once reserved for science fiction, laser weapons are now a reality — one that will hit hard once these lasers are deployed on the battlefield.

Lasers were first invented in the 1960s, but only recently have researchers been able to design a high-power laser system small enough to be deployed in a tactical environment without taking up an entire truck or airplane.

Designing a laser powerful enough to take out a mortar shell from a mile away is a huge engineering challenge. The way it is done is through a technique known as spectral beam combination, whereby the outputs of multiple fiber lasers are combined into a single high-power beam, rather than relying on a single individual fiber laser.

Lockheed’s ATHENA laser weapon punching a hole in a target vehicle. Credit: Lockheed Martin.

Think of a prism that breaks up a beam of white light into the colors of the rainbow. High-power lasers run this process in reverse, combining a bunch of beams at slightly different wavelengths into a single, far more powerful output beam.
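To make the “prism in reverse” idea concrete, here is a toy sketch of the power bookkeeping. The wavelengths, per-fiber powers, and combining efficiency are made-up illustrative values, not figures from Northrop Grumman or Raytheon:

```python
# Toy model of spectral beam combining: several fiber lasers, each at a slightly
# different wavelength, are overlapped into one co-propagating beam.
# All numbers below are illustrative assumptions.

fiber_lasers_kw = {          # wavelength (nm) -> output power (kW)
    1062: 5.1, 1064: 5.0, 1066: 5.2, 1068: 4.9, 1070: 5.0,
    1072: 5.1, 1074: 4.8, 1076: 5.2, 1078: 5.0, 1080: 4.9,
}
combining_efficiency = 0.90  # assumed fraction of power surviving the combining optics

combined_kw = combining_efficiency * sum(fiber_lasers_kw.values())
print(f"{len(fiber_lasers_kw)} fibers of ~5 kW each -> {combined_kw:.0f} kW in a single beam")
```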

Laser weapon development has ramped up in the past decade as a response to the rising threats of armed drones and short-range mortar or rocket barrages. These unguided projectiles can’t be put out of action with sophisticated countermeasures such as jamming or redirection, and the window between launch and impact is very short.

In this rapidly evolving threat landscape, laser weapons suddenly become appealing. For instance, the US Navy has an ongoing program called HELIOS (High Energy Laser with Integrated Optical-dazzler and Surveillance), which aims to install a laser weapon system on a DDG Arleigh Burke-class destroyer. The Air Force is currently testing the High Energy Laser Weapon System 2, made by Raytheon Space and Airborne Systems, with the primary goal of disabling enemy drones.

The US Army isn’t sitting idle either. These recent 50 kW trials represent a major step forward in the Army’s ambitions to deploy laser weapons on the battlefield of the future, where it currently faces a gap in short-range air defense.

Lockheed Indirect Fire Protection Capability-High Energy Laser (IFPC-HEL). Credit: Lockheed Martin.

“Offering lethality against unmanned aircraft systems (UAS) and rockets, artillery and mortars (RAM), laser weapons now increase Army air and missile defense capability while reducing total system lifecycle cost through reduced logistical demand,” the Army said in a statement.

According to Task & Purpose, the Army aims to assemble four battalions of laser-equipped Strykers by 2022. The Army is also working on a monstrously powerful 300 kW Indirect Fires Protection Capability – High Energy Laser (IFPC-HEL) truck-mounted laser by 2024. The IFPC-HEL truck, currently in development by Lockheed Martin, would be powerful enough to put cruise missiles out of action.

Can crypto help Russia evade sanctions?

Credit: Canva.

The most ardent proponents of cryptocurrencies claim that, among their many supposed advantages over fiat money, crypto is very challenging, if not impossible, for governments to regulate. That’s because Bitcoin and other currencies like it are transacted over a peer-to-peer network. This decentralized design means that a central authority like the US government cannot control transactions, and users are free to exchange their tokens with anyone on the network, no matter their geographical location, as long as they have an internet connection.

With the recent unprecedented financial sanctions that were imposed on Russia, chiefly by the United States and the European Union, many have wondered if Putin’s administration and his cronies could simply circumvent these rules using a clever laundering scheme involving cryptocurrencies.

Some of the harsh sanctions enacted upon Russia include banning a number of select banks from SWIFT, an international bank-to-bank transfer system, as well as freezing hundreds of billions in foreign currency held by Russia’s central bank overseas. These are the most important economic sanctions, and together they isolate Russia from the global financial system.

In the case of the US sanctions, it becomes illegal for American nationals and businesses to do business with Russia and with individuals connected to the Kremlin regime in ways the U.S. government considers material — and foreign individuals can face sanctions of their own if they don’t comply.

Key to enforcing these sanctions are the banks, which can see who is transferring money through their systems, where it’s coming from, and where it’s heading. To avoid heavy fines or being shut down, banks carefully monitor and block any transactions linked to blacklisted entities.

In this heavily restricted environment, cryptocurrency seems like a safe haven and the obvious choice for a sanctioned entity looking to keep running its business. After all, even before the crypto space ballooned a few years ago, Bitcoin was widely and successfully used by criminal groups to receive payments for their illegal activities. Why would it be any different for a petrostate like Russia?

While Russian criminal organizations, some of which are suspected to receive the Kremlin’s blessing, have made hundreds of millions using hacking techniques like ransomware, it seems unlikely that such tactics could help fill the very deep pockets that a huge state like Russia needs filled.

In a Twitter thread, Jake Chervinsky, head of policy at the Blockchain Association, makes his case clear: crypto won’t save Putin.

Chervinsky adds that many huge businesses across the world, not just in the US, are barred from dealing in any way with sanctioned Russian entities, and there’s little reason to believe crypto would let them do business without getting caught. While it may be technically possible to launder money and route it to Russia, the risks are huge. Besides, the sanctions Russia faced after it annexed Crimea in 2014 led to losses amounting to at least $50 billion. That’s peanuts compared to the debacle it has gotten itself into now, and that’s not counting the cash it is bleeding to sustain its war effort, estimated at €20 billion per day.

To support its shambling economy, Russia would need to launder crypto potentially in the hundreds of billions. That kind of liquidity is simply not available on any crypto market at the moment. Then there’s the trouble of somehow converting those crypto assets into fiat currency (i.e. US dollars, euros, yuan, etc.) in order to sustain day-to-day operations. Printing more roubles that nobody wants anyway is obviously not a solution, unless Putin wants to mirror another infamous petrostate, Venezuela.

Another important point is that, contrary to popular belief, crypto transactions are very difficult to mask, even for sophisticated state actors. All crypto transactions are recorded on a public digital ledger, which cannot be doctored or destroyed because the records are replicated across the P2P network. You might erase the data on a computer in London, but millions of other machines hold the original record. Crypto forensics has also gotten very good at spotting laundering techniques. For instance, the Justice Department seized $3.6 billion worth of stolen Bitcoin from hackers who had taken it more than six years earlier from Hong Kong-based Bitfinex, one of the world’s largest virtual currency exchanges.
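To see why the ledger is so hard to doctor, consider a toy hash chain. This is only an illustration of the tamper-evidence property, not Bitcoin’s actual block format:

```python
# Each block stores the hash of the previous one, so editing an old record
# changes every hash that follows it, and honest nodes reject the altered chain.
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

chain = []
prev = "0" * 64  # genesis placeholder
for tx in ["A pays B 1 BTC", "B pays C 0.5 BTC", "C pays D 0.2 BTC"]:
    prev = block_hash(prev, tx)
    chain.append((tx, prev))

# Tampering with the first transaction breaks every later hash:
forged_hash = block_hash("0" * 64, "A pays B 100 BTC")
print(forged_hash == chain[0][1])  # False -> the forgery is immediately detectable
```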

Crypto is likely part of Putin’s plan to evade sanctions and cushion some of the blow dealt by the West — but it’s not the main tool. The Kremlin will probably unleash unprecedented ransomware attacks in order to attract foreign capital, but they can hope for a few billion at most in revenue. That’s woefully insufficient to keep them afloat.

Instead, Russia will likely count on its foreign reserves held in China, as well as more trade with its eastern ally in the future. The price of oil is at its highest level in more than a decade, currently around $111 a barrel, which will help a lot. Russia also has a very low debt-to-GDP ratio of only 18% (the figure is 133% for the US) and a current account surplus, which could help keep the country stable even if it has to borrow money — for instance, from China — at exorbitant interest rates.

Although Putin seems unstable — he certainly looks that way to me — this invasion has most likely been planned for years. That gave the Kremlin plenty of time to prepare for sanctions, even if these turned out to be much more severe than it bargained for. I don’t know what Plan B looks like, but there almost certainly is one, and Russia looks prepared to wait this out for years if it has to. It’s just that crypto won’t play a major role in that plan.

People with ADHD are more likely to be hoarders

The living room of a compulsive hoarder. Credit: Wikimedia Commons.

Living with too much stuff inside a cramped apartment sounds like a staple of modern life, but some people do take it too far. Acquiring an excessive number of items and storing them in a chaotic way has a name: hoarding. It’s even recognized as a clinical mental health disorder and is generally associated with negative outcomes in terms of quality of life. But mental health disorders rarely occur in a complete vacuum and are often associated with other disorders. So it might not be surprising to learn that people diagnosed with attention-deficit/hyperactivity disorder (ADHD) are also more likely to be hoarders, according to a new study.

Pay attention to the clutter around you

Hoarding was formally recognized as a mental health condition fairly recently, in 2013, when it was added to the DSM-5 (the American Psychiatric Association’s primary handbook for diagnosing mental health conditions). It involves the compulsive need to keep objects, many of which can be described as mere trinkets or even trash, such as old newspapers; sometimes the hoarding of animals is involved. In the hoarder’s mind, one question comes up again and again whenever they encounter an object: what if I need it one day? But that rarely, if ever, happens. Instead, the hoarder’s home is turned into an unlivable warehouse, with barely enough room to move but always enough to spare for the next shiny thing.

Hoarders experience a great deal of anxiety when attempting to discard items and find it difficult to organize their possessions, which explains why some of their homes look like a claustrophobic tangled mess. This behavior can have serious deleterious effects for both the hoarder and their family members, including emotional distress, social isolation, financial problems, and even legal consequences — all depending upon the severity of the condition.

That’s because, just like many other psychiatric conditions, clinical hoarding is on a spectrum. Indeed, hoarding-like behavior is common among many healthy, well-adjusted individuals. And who here can say with a straight face they’ve never impulsively bought useless crap that is now just gathering dust somewhere in the house? We’re talking about extremes, though. At level 1, although the home is visibly cluttered, the doors, windows, and stairs are still accessible. By level 5, the most severe hoarding level, the clutter is extreme, blocking virtually all living quarters. Rotting food, insect infestations, and poor animal sanitation are common in such homes, raising serious health concerns for people and their pets.

Hoarding disorder is formally associated with obsessive-compulsive disorder (OCD), but researchers at Anglia Ruskin University were curious to see whether there was any connection with ADHD too. In the first leg of their study, the researchers asked patients from an adult ADHD clinic in the UK to fill in a series of questionnaires designed to gauge various traits and behaviors, including hoarding. A control group of people not diagnosed with ADHD, matched for age, gender, and education, answered the same questions.

This preliminary study found that about 20% of the ADHD participants reported significant hoarding symptoms compared to just 2% in the control group, which is close to the previously reported 2.5% prevalence of hoarding disorder in the general population. The patients with the most severe hoarding symptoms were also likely to suffer from anxiety and depression.

This is the first study to find an association between ADHD and hoarding disorder, so further research is warranted. The finding also matters from a therapy standpoint, since hoarding disorder is very challenging to address: people with the condition rarely recognize or accept that they may be suffering from a mental health disorder, or they simply downplay it.

For instance, one significant aspect of this study is that the average age of the participants with both ADHD and hoarding symptoms was 30, with both genders equally represented. This challenges the popular image of an elderly woman surrounded by a mountain of clutter and a dozen cats. Future interventions may be designed to address both ADHD and hoarding in younger individuals, before the effects compound with age.

The findings were reported in the Journal of Psychiatric Research.

Insects could replace both beef and toxic synthetic fertilizers

This diagram shows a circular food system fueled by insects-as-food-and-feed production and waste. Credit: Barragán-Fonseca et al.

Meat may be tasty, but it comes at a hefty price — and I don’t mean at the supermarket. From climate change due to the copious emissions it generates, to forest fires and human rights abuses, the global meat industry has a lot of problems. Plant-based alternatives and high-tech products like cloned meat have been hailed as possible solutions, but there’s an even better alternative that everyone seems to be ignoring at the moment.

It’s easy to see why once you learn what that alternative is: insects. Though Westerners find the notion of insects in their kitchen disturbing and quite disgusting, let alone on their dinner table, Marcel Dicke believes this prejudice needs to be cast aside.

Dicke, a researcher at Wageningen University in the Netherlands, just published an opinion paper today breaking down the benefits of insect farming for strengthening our global food security. In the article, the authors not only describe the value of insects as an excellent protein source, but also their role in fertilizing sustainable crops — something that isn’t talked about nearly enough.

“I have a long history in research on insect-plant interactions and microbe-plant-insect interactions, investigating how plants defend themselves against insects, for example by enlisting the enemies of their enemies or by receiving help from root-associated beneficial microbes. Moreover, I have carried out research on the use of insects as food and feed for a decade now,” Dicke told ZME Science.

Insect waste comes in two main forms. There’s the exuviae, or the exoskeletons left behind after molting. Then there’s frass, which basically refers to insect excrement and bits of unconsumed food.

Insect poop is rich in nitrogen, a crucial nutrient that plants require to grow but that is naturally in low abundance in the soil. Meanwhile, the exuviae are rich in chitin, which most organisms cannot digest, apart from a set of bacteria that have a symbiotic relationship with plants. When insects molt near plants, these bacteria increase in number, helping the plants repel pests.

“When a soil microbiology colleague mentioned that insect exuviae (molted skins) when amended to soil resulted in a stimulation of Bacilli bacteria, that are known to promote plant growth and resilience, we formed an interdisciplinary team, obtained funding, and started investigating the effects of insect-derived soil amendments on plant growth, plant resistance to pests and diseases, plant pollination,” Dicke said.

When combined and added to the soil, the exuviae and frass both promote plant growth and health, acting like a fertilizer-pesticide combo. This means that it could be possible to grow high-yield crops without the need to add artificial fertilizers and pesticides that can be toxic to both humans and the environment. While conventional pesticides are efficient at destroying pests, they do so indiscriminately, also hurting beneficial soil bacteria and critical pollinators like bees.

“The EU has banned several pesticides because of negative effects on the environment. Recent studies on recording pesticides in human bodies are very worrying,” Dicke said.

The researchers envision a closed-loop insect farming circuit whereby insects are grown like livestock, except much more efficiently. While it can take up to 25 kilograms of grass to produce one kilogram of beef, the same amount of grass can produce ten times as much edible insect protein. That’s owed to the fact that insects synthesize protein more efficiently, as well as the fact that 90% of their body mass is edible compared to just 40% for a cow.
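Put as back-of-the-envelope arithmetic, under one reading of those figures (the interpretation of the “ten times” multiplier as relative to edible beef is mine, not the paper’s):

```python
# Rough comparison of edible output from the same 25 kg of grass, using the
# figures cited above. Illustrative arithmetic only, not a life-cycle analysis.
# The ~90% edible fraction of insects vs ~40% for cattle is part of why the
# multiplier is so large.

feed_kg = 25.0
beef_kg_per_feed_kg = 1.0 / 25.0  # ~25 kg of grass per 1 kg of beef
beef_edible_fraction = 0.40       # ~40% of a cow is edible
insect_multiplier = 10.0          # "ten times as much edible insect protein" from the same feed

beef_edible_kg = feed_kg * beef_kg_per_feed_kg * beef_edible_fraction
insect_edible_kg = beef_edible_kg * insect_multiplier

print(f"{feed_kg:.0f} kg of grass -> {beef_edible_kg:.1f} kg of edible beef")
print(f"{feed_kg:.0f} kg of grass -> roughly {insect_edible_kg:.0f} kg of edible insect protein")
```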

Insects could be fed waste streams from conventional crop farming to produce more protein-rich food. The byproducts of insect production, such as the excrement and exoskeletons, can later be used to fertilize more crops, whose own waste streams can feed new insects, closing the loop.

Although insect food sounds repulsive to many, an estimated two billion people around the world already consume insects as part of their diet or as supplements, mostly in Africa and Asia. Insects can also be rich in copper, iron, magnesium, manganese, phosphorus, selenium, and zinc, and are a source of fiber. Insects are as abundant as they are nutritious.

Insect farming, however, isn’t widespread and in countries where insects are part of the daily diet, they’re mostly collected from forests or farmed in small, family-run establishments for niche markets.

That has to change, thinks Dicke, who has eaten his fair share of crickets, mealworms, and locusts. He and colleagues even published a cookbook with recipes containing insect ingredients. On this front, some forward-thinking companies are developing insect products that are not only nutritious but also tasty. Some are tofu-like, while others involve mixing insect meal into sauces or pasta.

“The use of insect residual streams to promote sustainable crop production is a new contribution to designing the agriculture of the future. Such agriculture will be developed with the inclusion of many novel methods/materials and doing so is urgent given the need to produce sufficient food for the growing human population without harming the environment. The production of insects for food or feed is a sustainable activity and by using the residual stream we increase that sustainability,” Dicke concluded.

The paper appeared in the journal Trends in Plant Science.

Stonehenge may be a giant solar calendar whose roots may extend all the way to ancient Egypt

Credit: Antiquity Journal.

Over the years, archaeologists have put forward a number of theories attempting to explain why Stonehenge was built. Now, new research posits that the Stonehenge circles served as a calendar that tracks the solar year of 365.25 days, calibrated by the alignment of the solstices.

However, if that is indeed the case, it’s an odd one, with 12 months of 30 days each, divided into 10-day weeks. Such calendar designs were previously seen in ancient Egypt, which could mean the Stonehenge timekeeping system had its roots elsewhere.

What was Stonehenge used for?

Stonehenge is the world’s most famous Neolithic site but also one of the most enigmatic ancient monuments, whose precise purpose is still a mystery. Much of that mystery comes down to the fact that writing didn’t exist in England until the Romans arrived, 2,500 years after the iconic circular stone pillars were raised. In this vacuum, various more or less evidence-based theories have been proposed.

Some believe Stonehenge is an astronomical calculator, a religious site, or an important community gathering place like a sort of town hall. But whenever there’s a good mystery, fringe communities and their outlandish theories aren’t too far behind.

In the 1960s and 1970s, Stonehenge was a hot spot for hippies and the New Age counterculture, with millions flocking to the Salisbury Plain, a site thought to be imbued with magical and mystical powers. One Canadian gynecologist proposed in 2003 in an essay published in a medical journal that Stonehenge is a metaphorical vulva — the opening by which Earth Mother gave birth to her plants and animals. The article employed side-by-side illustrations of Stonehenge seen from above and female genitalia. Others think that, like the Egyptian pyramids, Stonehenge couldn’t have been possibly built by prehistoric humans. Instead, it was obviously made by aliens who used the stone pillars as a landing pad for their spacecraft. Yeah…

But ancient alien-origin enthusiasts may have gotten one thing right: Stonehenge most likely had a strong connection to the cosmos and the stars, specifically the hot glowing giant ball of helium and hydrogen close by, the Sun.

Small-sized sarsen stone S21 (left) in the Sarsen Circle, with the normal-sized S22 to the right. View looking outwards from inside the circle. Credit: Antiquity Journal, Timothy Darvill.

In a new study published today in the journal Antiquity, Professor Timothy Darvill from Bournemouth University in England takes a fresh look at the most recent evidence from the Salisbury archaeological site, concluding Stonehenge’s sarsen elements represent a calendar based on a tropical solar year of 365.25 days. But this calendar is just a tool. Its grander purpose, according to the researchers, was to facilitate festivals and ceremonies.

“I’ve been working on Stonehenge for more than 30 years and in 2008 undertook excavations inside the stone settings at the centre of the site. That led us to start looking at the individual components of the monument and wondering how they all fitted together. Instead of seeing it as one big structure we now see it as several pieces that fit together, rather like in modern times one might see a church or temple as having different elements each of which is connected with different aspects of the working of the site,” Darvill told ZME Science.

The solar origin of one of the most mysterious places on Earth

Most archaeologists agree that the still-standing Stonehenge structure was preceded by an earthwork circle built on the same spot, which seems to have served as a cemetery for cremated remains. Some 500 years later, between 2600 and 2500 BC, Stonehenge as we know it was built once the site entered “Stage 2”, with the construction of the three sarsen structures: the Trilithons, the Sarsen Circle, and the Station Stone Rectangle. Sarsen stones refer to the vertical pillars, which were capped by horizontal lintels.

Building Stonehenge with Neolithic technology was a literally monumental task. Each sarsen weighs 25 tons on average and could have required at least 1,000 people to transport over a distance of 24 km (15 miles). As such, the project must have taken multiple generations to complete. But once in place, these components were never altered or moved again, a fact supported by analyses showing that most of the stones were quarried from a single source on the Marlborough Downs.

Summary of the way in which the numerology of sarsen elements at Stonehenge combine to create a perpetual solar calendar. Non-sarsen elements have been omitted for clarity. Drawing by V. Constant/Antiquity Journal.

It is as a unified group, Darvill argues, that the sarsen elements need to be understood. Viewed this way, their “numerical significance” opens up the possibility that they represent the building blocks of a calendar based on the 365.25 solar days in the mean tropical year. Each of the 30 stones in the sarsen circle represents a day within a month, and each month is divided into three 10-day weeks, with distinctive stones in the circle marking the start of each week.

“The recognition that the sarsen stone elements have an integrity because the stones are almost all from the same source and were put up at the site at the same time. Given these observations, it seems likely that they also have an integrity of structure. What makes it novel is that while many people have tried to find a calendar in the arrangement of stones, no-one has previously shown how one might actually work. The perpetual solar calendar is very easy to use,” Darvill said.

Under this logic, every stone has its place and purpose. The five Trilithons in the center of the site represent a five-day intercalary month, while the four Station Stones outside the Sarsen Circle serve as markers to count the four years up to a leap day. In this way, the ancient people of Stonehenge framed the winter and summer solstices with the same pair of stones every year. One of the trilithons also frames the winter solstice, perhaps marking the new year.
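Spelled out as arithmetic, the proposed numerology works out neatly (this simply restates the scheme described above):

```python
# Darvill's proposed perpetual solar calendar, as described in the study:
# 30 sarsen-circle stones = 30 days per month, 12 such months per year,
# a 5-day intercalary month marked by the five Trilithons, and a leap day
# every four years counted off against the four Station Stones.

days_per_month = 30
months_per_year = 12
intercalary_days = 5
leap_day_every_n_years = 4

mean_year_length = days_per_month * months_per_year + intercalary_days + 1 / leap_day_every_n_years
print(mean_year_length)  # 365.25 days, the tropical solar year the stones are said to track
```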

A 10-day week calendar seems odd, but the researchers claim such designs were more common during this period. A very similar solar calendar was developed in the eastern Mediterranean around 3000 BC and was adopted in ancient Egypt as the Civil Calendar, still widely used in the Old Kingdom at about 2600 BC. This raises the genuine possibility that the calendar was imported from the continent, with archaeological evidence supporting the existence of trade and cultural networks that could have facilitated such a knowledge transfer. The study mentions the “Amesbury Archer”, who was buried near Stonehenge around the same period Stage 2 was erected but was actually born thousands of miles away in the Alps and moved to Britain as a teenager.

However, there’s more to Stonehenge than an oversized time-keeping device. The huge efforts undertaken there hint that the ancient site served a very important purpose. The researchers believe the calendar helped local communities synchronize their conceptual cosmologies with the solar cycle, “so that the received narratives could be understood in ways that structured behaviors and relationships.” As such, the stone circles were essential for timing celebrations and other crucial rituals. Secondly, the calendar allowed elites to acquire and legitimize power, since they were the ones who controlled the timing of important communal events and the interpretation of cosmologies as signs and messages from the gods.

“Time-reckoning systems bring communities closer to their gods by ensuring that events occur at propitious moments. Astrology was an important, though controversial, tool in ancient medicine and healing rites. An accurate calendar was required to maximise effects that depended on people being in the right place at the right time,” the researchers wrote in their study.

Although plausible, not everyone is convinced by this conclusion. Speaking to ZME Science, archaeologist Mike Pitts described the new proposal as “ingenious” but adds that it would be equally possible to come up with other explanations.

“For example, there could have been a “Fellowship of the Ring” with five or 10 members. The largest and most prominent stones are the five trilithons (each two uprights and one lintel). These could represent five different societies that had formed an alliance, which they marked by building Stonehenge. If the trilithons were heads (a male and female leader, say, united by a lintel) each of the five could then be represented by six descendant families, symbolised by stones in the circle (giving 30) united by the ring of lintels that joins them,” said Pitts, who is the editor of the publication British Archaeology and the author of a number of notable books on Stonehenge.

“Entirely fanciful, but no more or less supported by evidence than a calendar.”

Pitts added that it would be odd for Stonehenge society not to use the lunar cycle, especially since the lunar month could be neatly represented by the 30 stones in the circle.

“Almost all recorded human societies, at any time or place, have used the sun and moon to mark time. That there are roughly 365 days in a year is a fact of living on Earth. The people who built Stonehenge almost certainly had a calendar and it is very likely it was based on observations of the sky. There is no need to invoke connections with societies on the other side of Europe to explain this,” Pitts said.

Whatever may be the case, the origin and meaning of Stonehenge are still far from settled.

“There is still so much more to know about Stonehenge,” Darvill concludes.

Europe’s much anticipated Mars rover won’t launch in 2022 because of war in Ukraine

The rover will be the first mission to combine the capability to move across the surface and to study Mars at depth. Credit: ESA.

This year was supposed to be another landmark one for Mars exploration with the launch of the European Space Agency (ESA)’s robot rover to the red planet. But now the mission has to be postponed as a result of the war in Ukraine and the heavy sanctions on Russia, which operates the Kazakhstan spaceport from which the rover was supposed to be launched.

The rover, known as Rosalind Franklin, named after the British chemist and DNA pioneer, is part of the ExoMars program, which also includes the Trace Gas Orbiter launched in 2016. Like NASA’s Curiosity and Perseverance rovers, the goal of the mission is to search for signs of past life on Mars, which is believed to have been a rich water world billions of years ago.

In order to achieve its goal, the ExoMars mission will do things differently than its American counterparts. The deepest anyone has dug on Mars is only six centimeters, and that is a problem if your goal is to look for signs of life, present or past. Scientists believe it is very unlikely to find such evidence in the top meter of Martian soil as millions of years of exposure to cosmic radiation, ultraviolet light, and powerfully oxidizing perchlorates likely destroyed any organic biosignature a long time ago.

“The recipe we have with ExoMars is we’re going to drill below all that crap,” to a depth of two meters, ExoMars project scientist Jorge Vago tells Inverse. “Our hypothesis is that if you go to the right place and drill deep enough, you may be able to get access to well preserved organic material from 4 billion years ago, when conditions on the surface of Mars were more like what we had on infant Earth.”

The astrobiology lab on six wheels is a joint venture between the ESA and Roscosmos, the Russian space agency. While Rosalind Franklin is operated by the ESA, Russia’s contribution includes the Kazachok lander vehicle, meant to land and safely release the rover on Mars’s Oxia Planum, a region thought to have once been the coastline of a very large northern hemisphere ocean. Additionally, Russia developed several important science instruments for the mission, as well as offered the launch platform. Only the International Space Station is more significant in terms of cooperation between the ESA and Russia.

Originally planned for 2020, the launch of the mission was postponed to 20 September 2022 from the Baikonur cosmodrome in Kazakhstan. But considering the dire situation in Ukraine and the heavy sanctions imposed by the United States and its European allies, the mission has now been postponed indefinitely.

“We are fully implementing sanctions imposed on Russia by our Member States,” the ESA announced in a press statement. “Regarding the ExoMars program continuation, the sanctions and the wider context make a launch in 2022 very unlikely.”

“We are giving absolute priority to taking proper decisions, not only for the sake of our workforce involved in the programmes, but in full respect of our European values, which have always fundamentally shaped our approach to international cooperation.”

The announcement comes on the heels of Roscosmos’s decision over the past week to suspend flights of its Soyuz rockets from the Kourou spaceport in French Guiana in retaliation for the Western sanctions. Roscosmos has even gone as far as questioning the viability of the International Space Station, of which it has been a founding partner since the station’s first modules were launched in 1998. That’s despite Washington having made clear that its stiff sanctions targeting the Russian economy and tech sector will continue to allow U.S.-Russian civil space cooperation.

It’s too early to say what might happen next, but it’s likely that Roscosmos cannot be counted on for ExoMars moving forward. This means the rover would have to be launched with a different partner and a new landing platform would need to be developed, which could amount to at least another two years of delay. That’s when the next favorable launch window opens: roughly every two years, the orbits of Mars and Earth line up in a way that allows a much shorter journey between the two planets.
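The “roughly every two years” figure is plain orbital mechanics: Earth and Mars return to the same relative positions once per synodic period, which can be estimated from their orbital periods. The quick calculation below is a standard textbook exercise, not something from ESA’s announcement:

```python
# Synodic period of Mars as seen from Earth: 1/S = 1/T_earth - 1/T_mars.
# Favorable low-energy launch windows to Mars recur roughly once per synodic period.

earth_year_days = 365.25
mars_year_days = 687.0

synodic_days = 1 / (1 / earth_year_days - 1 / mars_year_days)
print(f"{synodic_days:.0f} days, i.e. about {synodic_days / 30.44:.1f} months between launch windows")
```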

As the crisis in Ukraine drags on, it’s saddening to see how the war not only disrupts people’s lives across the world and causes unspeakable suffering, but also how its effects extend well beyond our borders — even to another planet.

A lot of plant genes actually come from bacteria. And this may explain the success of early land plants

The evolution of land plants (simplified). Around 500 million years ago land plants started to spread from water to land. Credit: IST Austria.

When we think of gene transfer, the first thing that pops into our mind is inheritance. We tend to physically resemble our parents, be it in terms of height, skin tone, eye color, or facial traits, because we inherited genes from each parent, who in turn got their genes from their parents, and so on. Some organisms, however, find sexual reproduction counterproductive for their needs and opt for cloning, creating perfect genetic copies of themselves in perpetuity, apart from the occasional mutated offspring that refuses to be another chip off the old block. But that’s not all there is to it.

Sometimes DNA jumps between completely different species, and the results can be so unpredictable that they can dramatically alter the course of the evolution of life on Earth. Case in point, a new study makes the bold claim that genes jumping from microbes to green algae many hundreds of millions of years ago shifted the tides and drove the evolution of land plants. Hundreds of genes found in plants and thought to be essential to their development may have originally appeared in ancient bacteria, fungi, and viruses and become integrated into plants via horizontal gene transfer.

Speaking to ZME Science, Jinling Huang, a biologist at East Carolina University and corresponding author of the new study, said there could have been two major episodes of horizontal gene transfer (HGT) in the early evolution of land plants.

“Many or most of the genes acquired during these two major episodes have been retained in major land plant groups and affect numerous aspects of plant physiology and development,” the researcher said.

Sharing (genes) is caring

Genome-swapping events are rather common in bacteria. In fact, HGT is one of the main reasons why antibiotic resistance is spreading rapidly among microbes. This exchange of genetic material can turn otherwise harmless bacteria into drug-resistant ‘superbugs’.

Until not too long ago, HGT was thought to occur only among prokaryotes like bacteria, but recent evidence suggests that it can also happen in plants and even some animals. For instance, a 2021 study made the bold claim that herrings and smelts, two groups of fish that commonly roam the northernmost reaches of the Atlantic and Pacific Oceans, share a gene that couldn’t have been transferred through normal sexual channels — in effect, the researchers claim that HGT took place between two vertebrates.

“In genetics classes, we learn that genes are transmitted from parents to offspring (as such, kids look similar to their parents). This is called vertical transmission. In horizontal gene transfer, genes are transmitted from one species to another species. Although the importance of HGT has been widely accepted in bacteria now, there are a lot of debates on HGT in eukaryotes, particularly plants and animals. The findings of this study show that HGT not only occurred in plants, but also played an important role in the evolution of land plants,” Huang told ZME Science.

In order to investigate the role of HGT in early plant evolution, Huang and colleagues from China analyzed the genomes of 31 plants, including mosses, ferns, and trees, as well as green algae related to modern terrestrial plants. The researchers suspected that quite a few genes had transferred over from bacteria, but the results were still surprising: nearly 600 gene families found in modern plants — far more than the researchers had expected — appear to have been transferred from entirely foreign organisms like bacteria and fungi.

Many of these genes are thought to be involved in important biological functions. For instance, the late embryogenesis abundant genes, which help plants adapt to drier environments, are bacterial in origin. The same is true for the ammonium transporter gene that’s essential for a plant’s ability to soak up nitrogen from the soil to grow. And if you just despise cutting tear-jerking onions, you have HGT to blame too. The researchers found that the genes responsible for the biosynthesis of ricin toxin and sulfine (the irritating substance released when we cut onions) are also derived from bacteria.

“We were a little surprised to find those genes,” Dr. Huang told me, adding that his team was able to reconstruct the phylogenies (the evolutionary histories) of the genes using independent lines of evidence to determine whether a gene was derived from bacteria rather than being the result of ordinary vertical inheritance.

“For instance, an ABC complex in plants consists of two subunits. Phylogenetic analyses show that both genes were acquired from bacteria. We also found that the two genes are positioned next to each other on the chromosomes of both bacteria and some plants, suggesting that the two genes might have been co-transferred from bacteria to plants,” the scientist added.

The establishment of plant life on land is one of the most significant evolutionary episodes in Earth history, with evidence gathered thus far indicating that land plants first appeared about 500 million years ago, during the Cambrian period, when the development of multicellular animal species took off.

This terrestrial colonization was made possible by a series of major innovations in plant anatomy and biochemistry. If these findings hold up, bacteria must have played a major role. Thanks to HGT, the earliest plants could have gained advantageous traits that made them better adapted to their novel terrestrial environment almost immediately, rather than waiting who knows how many thousands or even millions of years to evolve similar genetic machinery.

The findings appeared today in the journal Molecular Plant.

Why chocolate is really, really bad for dogs

The only good chocolate for dogs is a chocolate fur, like the one this majestic lab is rocking. Credit: Pixabay.

Unlike cats, which lack the ability to taste sweetness, dogs find chocolate just as appealing as humans. But while the dark treat can be a euphoric delight for us, it can be poisonous to canines.

That’s not to say that all dogs get poisoned by chocolate or that a candy bar is enough to necessarily kill your pet canine. The dose makes the poison. The weight of the dog also matters, so large canines should be able to handle a small amount of chocolate whereas smaller breeds might run into serious trouble.

Although you shouldn’t panic if your dog accidentally ingests chocolate, candy and other chocolate sweets should never be offered to dogs. Generally, you should treat chocolate as toxic to dogs and should make an effort to keep it away from them.

Why chocolate can be dangerous to dogs

Among the many chemical compounds found in dark chocolate and cocoa is theobromine. Formerly known as xantheose, theobromine is a bitter alkaloid compound that acts as a mild stimulant for the human body.

The consumption of theobromine is generally associated with positive effects in humans, such as reduced blood pressure, improved focus and concentration, and enhanced mood. In dogs, however, theobromine and caffeine raise the heart rate and can overstimulate the nervous system.

Because dogs can’t break down, or metabolize, theobromine as well as humans can, the compound becomes toxic to them above a certain threshold, which depends on their body weight.

Mild symptoms of chocolate toxicity occur when a canine consumes around 20 mg of theobromine per kilogram of body weight. Cardiac symptoms occur at around 40 to 50 mg/kg, and dangerous seizures occur at doses greater than 60 mg/kg.

This explains why a candy bar may cause a chihuahua (average weight 2 kg) to run in circles while a Great Dane (average weight 70 kg) might feel just fine.

Darker, purer varieties of chocolate tend to be the most dangerous because they contain the highest concentration of theobromine. According to the USDA nutrient database, various chocolate and cocoa products contain the following amounts of theobromine per 100 grams:

  • Unsweetened cocoa powder: 2634 mg;
  • Baking chocolate (unsweetened): 1297 mg;
  • Dark chocolate (70% cocoa): 802 mg;
  • Mars Twix (twin bar): 39.9 mg;
  • White chocolate: 0 mg;

As a rule of thumb, chocolate poisoning in dogs generally occurs after the ingestion of 3.5g of dark chocolate for every 1kg they weigh, or 14g of milk chocolate for every kilogram.
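The arithmetic behind these rules of thumb is simple enough to script. Here is a minimal sketch using the USDA figures and the mg/kg thresholds quoted above; it only illustrates the math, and if your dog has actually eaten chocolate you should call a veterinarian rather than run code:

```python
# Estimate a dog's theobromine dose and compare it with the toxicity thresholds
# quoted earlier in this article (20, 40-50, and 60 mg per kg of body weight).

THEOBROMINE_MG_PER_100G = {
    "unsweetened cocoa powder": 2634,
    "baking chocolate (unsweetened)": 1297,
    "dark chocolate (70% cocoa)": 802,
    "Mars Twix (twin bar)": 39.9,
    "white chocolate": 0,
}

def theobromine_dose_mg_per_kg(product: str, grams_eaten: float, dog_weight_kg: float) -> float:
    """Theobromine dose in mg per kilogram of the dog's body weight."""
    total_mg = THEOBROMINE_MG_PER_100G[product] * grams_eaten / 100
    return total_mg / dog_weight_kg

# Example: a 2 kg chihuahua steals 50 g of dark chocolate.
dose = theobromine_dose_mg_per_kg("dark chocolate (70% cocoa)", 50, 2)

if dose >= 60:
    verdict = "risk of seizures"
elif dose >= 40:
    verdict = "risk of cardiac symptoms"
elif dose >= 20:
    verdict = "mild toxicity likely"
else:
    verdict = "below the mild-toxicity threshold"

print(f"~{dose:.0f} mg/kg: {verdict}")  # ~200 mg/kg: risk of seizures
```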

Signs that your dog may be suffering from chocolate poisoning

Chocolate poisoning mainly affects the heart, central nervous system, and kidneys. The symptoms of theobromine toxicity usually appear within 6 to 12 hours after your dog eats too much chocolate and may last up to 72 hours. These include:

  • vomiting,
  • diarrhea,
  • restlessness,
  • increased urination,
  • tremors,
  • elevated or abnormal heart rate,
  • seizures,
  • and in extreme cases collapse and death.

Can chocolate kill dogs?

In short, yes. However, fatalities in dogs due to chocolate poisoning are very rare. According to the UK-based Veterinary Poisons Information Service, out of 1,000 dog chocolate toxicity cases recorded in its database, only five dogs died.

What to do if your dog eats chocolate

If you caught your dog eating chocolate, or you suspect this may have happened, it is best to call your veterinarian and ask for advice on how to proceed. Based on your dog’s size and the amount and kind of chocolate ingested, the veterinarian may recommend monitoring your dog for any symptoms of poisoning, or ask that you come to the clinic immediately.

If there are good reasons to believe potentially dangerous chocolate poisoning may be imminent, and as long as your pet consumed the chocolate less than two hours ago, the veterinarian may induce vomiting.

Sometimes, the dog may be given doses of activated charcoal, which binds toxins in the gut and helps flush them out of the body before they are absorbed into the bloodstream.

In very extreme cases of poisoning, the veterinarian might administer medications and/or intravenous fluids to provide additional treatment.

Keep chocolate away from dogs

There’s no reason to believe chocolate isn’t as tasty to dogs as it is to humans. Unfortunately, many dog owners are unaware that chocolate can poison their pets and offer chocolate snacks as a treat.

Usually, this isn’t a problem for very large breeds when they ingest small amounts of chocolate, but smaller dogs can suffer greatly and even die in extreme cases due to theobromine poisoning.

Now that you are aware chocolate can poison your pet, there’s no excuse for keeping sweets within reach. It is advisable to keep any chocolate items on a high shelf, preferably in a closed pantry. Guests and children should be kindly reminded that chocolate is bad for dogs and that they shouldn’t offer chocolate treats, regardless of how much the pet begs for them.

Most chocolate poisoning in dogs occurs around major holidays such as Christmas, Easter, or Valentine’s Day, so these are times when you should be extra careful. 

The smallest refrigerator in the world will keep your nanosoda cool

This electron microscope image shows the cooler’s two semiconductors — one flake of bismuth telluride and one of antimony-bismuth telluride — overlapping at the dark area in the middle, which is where most of the cooling occurs. The small “dots” are indium nanoparticles, which the team used as thermometers. Credit: UCLA.

By using the same physical principles that have been powering instruments aboard NASA’s Voyager spacecraft for the past 40 years, researchers at UCLA have devised the smallest refrigerator in the world. The thermoelectric cooler is only 100 nanometers thick — roughly 500 times thinner than the width of a strand of human hair — and could someday revolutionize how we keep microelectronics from overheating.

“We have made the world’s smallest refrigerator,” said Chris Regan, who is a UCLA physics professor and lead author of the new study published this week in the journal ACS Nano.

Unlike the vapor-compression system inside your kitchen refrigerator, the tiny device developed by Regan’s team is thermoelectric. When two different semiconductors are sandwiched between metal plates, two things can happen.

If heat is applied, one side becomes hot while the other remains cool, and this temperature difference can be harvested to generate electricity. Case in point: the Voyager spacecraft, launched in the 1970s to visit the outermost planets and now believed to have traveled beyond the limits of the solar system, is still powered to this day by thermoelectric devices that generate electricity from the heat released by the radioactive decay of plutonium.

This process also works in reverse. When electricity is applied, one semiconductor heats up, while the other stays cold. The cold side can thus function as a cooler or refrigerator.
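For the curious, the two effects just described have textbook names and compact forms: the Seebeck effect (temperature difference in, voltage out) and the Peltier effect (current in, heat pumping out). In standard notation, and not specific to the UCLA device:

```latex
% Seebeck effect: a temperature difference across the junction drives a voltage.
V = S\,\Delta T
% Peltier effect (the reverse): a current I pumps heat from one side to the other.
\dot{Q} = \Pi\, I, \qquad \Pi = S\,T
% S is the Seebeck coefficient of the semiconductor pair; T is the absolute temperature.
```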

What the UCLA physicists were able to do is scale down thermoelectric cooling by a factor of more than 10,000 compared to the previous smallest thermoelectric cooler.

They did so using two standard semiconductor materials: bismuth telluride and antimony-bismuth telluride. Although the materials are common, the combination of the two bismuth compounds in two-dimensional structures proved to be excellent.

Typically, the materials employed in thermoelectric coolers are good electrical conductors but poor thermal conductors. These properties are generally mutually exclusive — but not in the case of the atom-thick bismuth combo.

“Its small size makes it millions of times faster than a fridge that has a volume of a millimeter cubed, and that would already be millions of times faster than the fridge you have in your kitchen,” Regan said.

“Once we understand how thermoelectric coolers work at the atomic and near-atomic level,” he said, “we can scale up to the macroscale, where the big payoff is.”

One of the biggest challenges the researchers had to face was measuring the temperature at such a tiny scale. Your typical thermometer simply won’t do. Instead, the physicists employed a technique that they invented in 2015 called PEET, or plasmon energy expansion thermometry. The method determines temperature at the nanoscale by measuring changes in density with a transmission electron microscope.

In this specific case, the researchers placed nanoparticles of indium in the vicinity of the thermoelectric cooler. As the device cooled or heated, the indium correspondingly contracted or expanded. By measuring the density of indium, the temperature of the nano-cooler could be determined precisely.

“PEET has the spatial resolution to map thermal gradients at the few-nanometer scale—an almost unexplored regime for nanostructured thermoelectric materials,” said Regan.

The winning combination of semiconductors found by the UCLA physicists may one day be brought to the macro scale, enabling a new class of cooling devices with no moving parts that regulate temperature in telescopes, microelectronic devices, and other high-end devices.