Category Archives: Biology

A better potato: researchers sequence the tuber’s entire genome for the first time ever

Researchers at the Max Planck Institute for Plant Breeding Research have laid the groundwork for supercharging the potato by mapping out the tuber’s complete genome.

Image credits James Hills.

Fried, mashed, or thrown in a stew, the humble potato has a special place in our hearts and on our plates that nothing else seems able to fill. Researchers seem to love this tasty tuber as well, and have put significant effort into decoding its genetic secrets. This impressive work will allow us to create better varieties of potato much faster than traditional breeding methods allow, with implications for the quality of our meals, the enjoyment we derive from them, and global food security.

Super Tuber

“The potato is becoming more and more integral to diets worldwide including even Asian countries like China where rice is the traditional staple food. Building on this work, we can now implement genome-assisted breeding of new potato varieties that will be more productive and also resistant to climate change — this could have a huge impact on delivering food security in the decades to come.”

The potato has not changed very much in the last 100 years or so. The overwhelming majority of varieties available in shops today are the same ones that were brought to market over the last century and before. While these traditional cultivars are very popular, their dominance underlines the lack of variety in the potatoes being grown, cooked, and enjoyed around the world. Thus, it stands to reason that improvements can be made to the baseline potato to make it more palatable, more resilient, or more abundant.

That’s what the team at the Max Planck Institute for Plant Breeding Research hopes to achieve with the full sequencing of the plant’s genome. The work, led by geneticist Korbinian Schneeberger, represents the first complete assembly of the potato genome, giving researchers a much clearer view of the plant’s genetic intricacies, and thus much more accuracy when trying to breed new varieties.

Low genetic diversity within a species — and the potato is a good example of one such species — means that it can have difficulties thriving in certain contexts, and leaves it vulnerable to disease. The near-extinction of the Gros Michel banana due to the Panama disease is a great example of such a genetic vulnerability at work. In the case of the potato, the Irish famine of the 1840s stands testament to how completely potato crops can be wiped out by pathogens. During this tragic event, Europeans were growing a single variety of potatoes, which was vulnerable to blight; as such, potato crops failed across the continent.

The Green Revolution of the 1950s and 60s saw a great diversification of crop varieties in staples like rice or wheat, but not potatoes. Efforts to breed new varieties with higher yields or more disease resistance have, so far, remained largely unsuccessful.

Potatoes, the team explains, inherit two copies of each chromosome from each parent — unlike humans, who inherit just one copy from each parent. This makes the potato a ‘tetraploid’, a species with four copies of each chromosome, which is exceedingly difficult and slow to coax into generating new varieties with desirable combinations of traits.

The same tetraploid structure also makes it technically difficult to reconstruct the potato’s genome.

To work around this issue, the team sequenced the potato’s DNA not from mature plants, but from large numbers of individual pollen cells. Each pollen cell contains only two copies of each chromosome, which made it easier for the team to use established genetic methods to reconstruct the plant’s genome.

The results should give scientists and plant breeders a powerful new tool with which to identify desirable gene variants in the potato and work to establish new varieties that contain them. Essentially, it gives them a baseline against which they can reliably compare individual plants and establish exactly where their desirable properties originate — and then work to reproduce them.

The paper “Chromosome-scale and haplotype-resolved genome assembly of a tetraploid potato cultivar” has been published in the journal Nature Genetics.

Geese may be the first domesticated birds. It all started 7,000 years ago

Credit: Pixabay.

Although humans make up only a tiny fraction of all life on the planet, our impact upon diversity and wildlife has been enormous. By some accounts, human activity is responsible for the loss of 80% of all wild animals and about 50% of all plants. Much of this loss was necessary to make way for farmed livestock for human consumption.

Just consider this fact: 70% of all birds on Earth are chickens and other poultry, whereas wild birds comprise a meager 30%. Were an alien archaeologist to visit our planet after humans went extinct, they would surely be staggered by the abundance of chicken fossils.

But before we became hooked on chicken eggs and hot wings, we most likely first started with geese.

Japanese archaeologists performing excavations at Tianluoshan, a Stone Age site dated between 7,000 and 5,500 years ago in China, found extensive evidence of goose domestication. They claim this is the earliest evidence of bird domestication reported thus far.

The team identified 232 goose bones, which paint a convincing picture that Tianluoshan may be the cradle of modern poultry.

First and foremost, the researchers performed radiocarbon dating on the bones themselves, rather than the sediments which covered the remains. This lends confidence that the goose bones are really as old as 7,000 years.

At least four bones belonged to juveniles no older than 16 weeks. This shows that they must have hatched at the site because it would have been impossible for them to fly in from somewhere else at their age. This is likely the case for the adult geese found there as well, given that wild geese don’t breed in the area today and probably didn’t 7,000 years ago either.

But, to be sure, the team led by Masaki Eda at Hokkaido University Museum in Sapporo, Japan, thoroughly analyzed the chemical makeup of the ancient bones, showing that the water the geese drank was local. The strikingly uniform size of the geese is also highly indicative of captive breeding.

Although not by any means definitive, all of these lines of evidence converge on the same conclusion: geese were probably the first birds humans domesticated, and this happened more than 7,000 years ago in China.

New Scientist reports that other studies have claimed that chickens were the first domesticated birds, as early as 10,000 years ago, also in avian-loving northern China. But the evidence, in this case, has proven contentious. Genetic analysis suggests chickens were domesticated from wild birds called red junglefowl, but these birds do not live that far north. Furthermore, the chicken bones weren’t directly dated. The firmest evidence of chicken domestication only appeared 5,000 years ago.

While most domestication research has focused on dogs and cattle, it’s refreshing to see new perspectives on the evolutionary history of poultry, upon which our food security depends so much.

A lot of plant genes actually come from bacteria. And this may explain the success of early land plants

The evolution of land plants (simplified). Around 500 million years ago land plants started to spread from water to land. Credit: IST Austria.

When we think of gene transfer, the first thing that pops into our mind is inheritance. We tend to physically resemble our parents, be it in terms of height, skin tone, eye color, or facial traits, because we inherited genes from each parent, who in turn got their genes from their parents, and so on. Some organisms, however, find sexual reproduction counterproductive for their needs and opt for cloning, creating perfect genetic copies of themselves in perpetuity, apart from the occasional mutated offspring that refuses to be another chip off the old block. But that’s not all there is to it.

Sometimes DNA jumps between completely different species, and the results can be so unpredictable that they can dramatically alter the course of the evolution of life on Earth. Case in point: a new study makes the bold claim that genes jumping from microbes to green algae many hundreds of millions of years ago shifted the tides and drove the evolution of land plants. Hundreds of genes found in plants and thought to be essential to their development may have originally appeared in ancient bacteria, fungi, and viruses, and become integrated into plants via horizontal gene transfer.

Speaking to ZME Science, Jinling Huang, a biologist at East Carolina University and corresponding author of the new study, said there could have been two major episodes of horizontal gene transfer (HGT) in the early evolution of land plants.

“Many or most of the genes acquired during these two major episodes have been retained in major land plant groups and affect numerous aspects of plant physiology and development,” the researcher said.

Sharing (genes) is caring

Genome-swapping events are rather common in bacteria. In fact, HGT is one of the main reasons why antibiotic resistance is spreading rapidly among microbes. This exchange of genetic material can turn otherwise harmless bacteria into drug-resistant ‘superbugs’.

Until not too long ago, HGT was thought to occur only among prokaryotes like bacteria, but recent evidence suggests that it can also happen in plants and even some animals. For instance, a 2021 study made the bold claim that herrings and smelts, two groups of fish that commonly roam the northernmost reaches of the Atlantic and Pacific Oceans, share a gene that couldn’t have been transferred through normal sexual channels — in effect, the researchers claim that HGT took place between two vertebrates.

“In genetics classes, we learn that genes are transmitted from parents to offspring (as such, kids look similar to their parents). This is called vertical transmission. In horizontal gene transfer, genes are transmitted from one species to another species. Although the importance of HGT has been widely accepted in bacteria now, there are a lot of debates on HGT in eukaryotes, particularly plants and animals. The findings of this study show that HGT not only occurred in plants, but also played an important role in the evolution of land plants,” Huang told ZME Science.

In order to investigate the role of HGT in early plant evolution, Huang and colleagues from China analyzed the genomes of 31 plants, including mosses, ferns, and trees, as well as green algae related to modern terrestrial plants. The researchers suspected quite a few genes had transferred over from bacteria, but the results were still surprising. They suggest that nearly 600 gene families found in modern plants, far more than researchers had expected, were transferred from totally foreign organisms like bacteria and fungi.

Many of these genes are thought to be involved in important biological functions. For instance, the late embryogenesis abundant genes, which help plants adapt to drier environments, are bacterial in origin. The same is true for the ammonium transporter gene that’s essential for a plant’s ability to soak up nitrogen from the soil to grow. And if you just despise cutting tear-jerking onions, you have HGT to blame too. The researchers found that the genes responsible for the biosynthesis of ricin toxin and sulfine (the irritating substance released when we cut onions) are also derived from bacteria.

“We were a little surprised to find those genes,” Dr. Huang told me, adding that his team was able to reconstruct the phylogenies (the evolutionary histories) of the genes using independent lines of evidence to determine whether a gene was derived from bacteria or was simply the result of inherited mutation.

“For instance, an ABC complex in plants consists of two subunits. Phylogenetic analyses show that both genes were acquired from bacteria. We also found that the two genes are positioned next to each other on the chromosomes of both bacteria and some plants, suggesting that the two genes might have been co-transferred from bacteria to plants,” the scientist added.

The establishment of plant life on land is one of the most significant evolutionary episodes in Earth history, with evidence gathered thus far indicating that land plants first appeared about 500 million years ago, during the Cambrian period, when the development of multicellular animal species took off.

This terrestrial colonization was made possible by a series of major innovations in plant anatomy and biochemistry. If these findings hold, bacteria must have played a major role. Thanks to HGT, the earliest plants could have gained advantageous traits that made them better adapted to their novel terrestrial environment almost immediately, rather than having to wait for who knows how many thousands or even millions of years to evolve similar genetic machinery.

The findings appeared today in the journal Molecular Plant.

These African ticks survived for 8 years without food. Females laid eggs years after the last male had died

Argas brumpti. Credit: Jonathan Cohen.

The toughest animals on Earth are often not what you expect. A prime example is the eight-legged tardigrade, capable of surviving extreme heat, cold, and even the vacuum of space. But there’s another tough guy you should know about, especially since they often like to take on humans. Meet the East African tick, a blood-sucking arachnid that can go without food for at least 8 years and can live for more than 27 years. What’s more, females have been able to lay eggs even 4 years after the last male in their group had died.

The remarkable longevity and resilience of the East African tick (Argas brumpti) were just recently revealed by a rare study almost 60 years in the making, which could be a separate story in itself, illustrating the virtues of patience in science.

It all started in 1976, when Julian Shepherd, an associate professor of biological sciences at Binghamton University in New York, was given six adult females, four adult males, and three nymphs of A. brumpti collected from caves near Nairobi, Kenya. He decided to monitor them in his lab in a habitat with stable conditions, where they were fed periodically on mice, rabbits, or drawn rat blood.

For years, the captive ticks enjoyed their regular feasts until one day Shepherd simply stopped giving them blood, when his lab ran out of rabbits and mice for them to feed on. Little did the biologist realize at the time that, even starving, his original group of ticks would survive into the next century.

East African ticks have soft, leathery skin, unlike the hard shell sported by the common types of ticks that you’ll find in parks and the countryside. And unlike your run-of-the-mill tick, Argas brumpti is not reported to carry any diseases, although its bites can cause substantial, painful lesions with aftereffects sometimes persisting for many months and even years, something Shepherd knows from first-hand experience.

In their natural habitat, the ticks reside in shallow caves, rocky areas, or dust-bath areas used by their favorite prey, such as small to large mammals and lizards, notably the dust around termite mounds that large mammals rub against. This perennially dry environment, with few opportunities to encounter hosts, may explain A. brumpti’s extreme longevity, even within a taxon renowned for sustained survival without food or water.

“I am always enthralled by the adaptations of organisms to their environment—in this case, a dry environment with virtually no access to water for long periods of time and a lifestyle that must wait for very long intervals of no food between encounters with host animals,” Shepherd said in a statement.

Adaptations to its environment may explain another incredible feat. After the last male tick died, the females lived on for another four years. These hungry females were eventually fed, and much to Shepherd’s surprise, at least one of them laid a batch of eggs. This second generation of offspring is still alive and apparently healthy to this day, 26 years old and counting. The oldest tick from the original batch died after 27 years, during which it went without food for eight.

One explanation is that the female ticks are capable of parthenogenesis, also known as “virgin births” because embryos can grow and develop without fertilization by sperm. But Shepherd thinks this is extremely unlikely. Instead, the females are probably capable of long-term sperm storage until they have ample food, at which point the sperm moves up the reproductive tract and fertilizes eggs.

In any case, both this longevity and the long-term sperm storage are records for any species of tick — and these insights could prove useful beyond the remarkable feat of conducting a 60-year experiment. That’s something for other researchers to explore, though, as the ticks have been shipped to South Africa for further study, while Shepherd is now moving on to new research on moths and the physiology of their sperm.

“Research on how organisms master such challenges can inform understanding of how other organisms, including us, might manage similar challenges,” Shepherd said.

The findings appeared in the Journal of Medical Entomology.

Orangutans instinctively make and use basic stone tools

Loui (the juvenile male orangutan) using the core as an active element to vertically strike the concrete floor of the testing room during the Flake Trading condition of Experiment 2.

Orangutans are a crafty bunch. They can use a variety of tools in the wild and even make complex choices about these tools. So a team of researchers led by Alba Motes-Rodrigo at the University of Tübingen in Germany wanted to test their stone tool-making ability. The researchers tested their hypothesis on two orangutans at Kristiansand Zoo in Norway.

“We wanted to investigate what stone-related behaviors might have served as stepping stones for the development of lithic technologies in our lineage. Extant apes (and monkeys) can be used as living models to build hypotheses in this regard,” Motes-Rodrigo tells ZME Science.

“We decided to test orangutans because despite being proficient tool users and using a variety of raw materials as tools, they do not use stone tools in the wild. This absence of stone tool use behaviors in the wild orangutan repertoire supports the naivety of our study subjects before the start of the experiments. This naivety allowed us to investigate the learning process of stone-related skills from the beginning, excluding previous knowledge of the tasks.”

Each orangutan was provided with a concrete hammer, a specially prepared blunt stone core, and two baited puzzle boxes. In order to get into the boxes, the orangutans had to cut through a rope or a silicone skin — and if they could do it, they got a treat.

Initially, both orangutans hit the hammer against the walls and floor of their enclosure rather than striking the stone core directly. In the second experiment, they were also given a human-made sharp flint flake, which one orangutan used to cut the silicone skin, solving the puzzle.

It’s the first time cutting behavior has been observed in untrained, unenculturated orangutans. In a subsequent experiment, researchers demonstrated how to strike the core to detach a sharp flake to three female orangutans at another zoo (Twycross Zoo) in the UK. After being taught, one female went on to use the hammer to hit the core as demonstrated.

This suggests that two major prerequisites for creating stone tools (striking with stone hammers and recognizing that sharp edges can cut) may have existed in our common lineage with orangutans 13 million years ago. However, this is merely speculation at this point and we need more evidence before we can truly say whether this was the case or not.

“Our results have added a new piece to the puzzle of the technological origins of our species showing that an ape species that does not use stone tools in the wild and that diverged from our lineage 13 million years ago, spontaneously engages in stone-related behaviours crucial for stone tool making (lithic percussion) as well as has the ability to recognise and use sharp stones as cutting tools.”

“The lithic percussive behaviours that we observed seem to be relatively common among primates, with species such as macaques, capuchins and chimpanzees also expressing them in the wild and in some studies in captivity. The use of a sharp stone as a cutting tool had never been reported before in an untrained ape, but given that we only have one observation of this behaviour it would be premature to draw strong conclusions about its evolutionary history.”

Sharp-edged bits detached by the orangutan in the second experiment. Image credits: Motes-Rodrigo et al (2022).

The orangutans’ tool-making is remarkable, but they haven’t entered the Stone Age just yet, Motes-Rodrigo tells ZME Science. Essentially, their tools are not complex enough, and the behavior hasn’t been observed in a natural environment. They could be capable of it, but we haven’t seen them do it, so for the moment we can’t place them in the Stone Age.

“Even the most primitive human stone tools were far more advanced than what we have seen in orangutans and reflect advanced spatial and cognitive skills. In addition, these behaviors have only been observed in captivity under experimental conditions. Perhaps if in future we would make similar observations in the wild, we could make such claims, but at the moment we can’t.”

Journal Reference: Motes-Rodrigo A, McPherron SP, Archer W, Hernandez-Aguilar RA, Tennie C (2022) Experimental investigation of orangutans’ lithic percussive and sharp stone tool behaviours. PLoS ONE 17(2): e0263343.

So this one wasp species turned out to be 16 species

In 1843, researchers described a small parasitoid wasp species they called Ormyrus labotus. There didn’t seem to be anything special about it at the time: just a generalist parasite that lays its eggs in sixty-something host species. But a new study found that Ormyrus labotus isn’t one species at all — in fact, it’s 16 species that look similar but are genetically distinct.

This one species turned out to be 16 species. Image credits: Gallery image by Entomological Society of America; component images by Sofia Sheikh, Anna Ward, and Andrew Forbes, University of Iowa.

Many species on Earth have not been discovered yet, but some are hiding under our very noses, masquerading as other species. These so-called “cryptic” species may be pretty common — according to one estimate, up to 30% of all species could be cryptic.

But the advent of relatively cheap DNA testing is enabling researchers to discover these hidden species. In a recent study, researchers uncovered the secrets of one cryptic species: a wasp that actually turned out to be 16 different wasps.

“We know so much from ecology about how important even the smallest species can be to an ecosystem,” says Andrew Forbes, Ph.D., associate professor of biology at the University of Iowa and senior author of the study, “such that uncovering this hidden diversity—and, maybe more importantly, understanding the biology of each species—becomes a critical component of conservation and maintenance of ecosystem health.”

Super exciting

This story starts in 2015, when Sofia Sheikh and Anna Ward, then graduate students in Forbes’ lab, were working on a different project. They collected galls formed on oak trees and observed the insects that emerged. They noticed that a lot of the time, the galls looked different — but when the wasps came out, it was always Ormyrus labotus. This got them wondering.

“The Forbes lab is broadly interested in how parasitic insects interact with their hosts and how that relates to species diversification,” Sheikh tells ZME Science. “Species-rich systems, like oak gall wasps and their associated parasites, are useful for addressing that question, and to that end, we had been collecting oak galls from across the country and preserving the insects that emerged from them. We were surprised to find wasps that all morphologically looked like Ormyrus labotus emerging from a diverse set of these oak gall hosts.”

This was particularly curious because many parasitic insects tend to be host-specialized — but Ormyrus labotus seemed to have an exceptionally broad range.

“This expectation led us to ask whether these wasps that all physically look like O. labotus represent one generalist species, or if they constitute several lineages, each specializing on a smaller, less variable subset of hosts,” Sheikh adds.

The idea that the wasp species could, in fact, be multiple species is not far-fetched. In fact, researchers were expecting to find diversity hiding beneath this species — though they weren’t sure just how many species they would uncover.

“Even though it may not be surprising that cryptic diversity exists, its discovery is always super exciting, because making accurate predictions about how climate change will impact species, how we protect ecosystems, etc. relies on knowing what’s out there and how it exists,” Sheikh explains to ZME Science.

What happens now

For now, Ormyrus labotus will remain a “species complex” — while researchers have established the existence of different species, they haven’t formally described and named them — this falls “a bit outside our focus,” Sheikh tells me. There’s still a lot of work to do.

“We hope that studies like these can help us better understand insect diversity and, subsequently, its conservation – the naming of species and taxonomic revision is an incredibly important dimension to this. We’d be happy to send specimens, and help however else we can, to anyone who’d like to take on the taxonomic work!”

More than 40% of insect species are declining and a third are endangered, which is why it’s so important to understand the peculiarities of these individual species. There’s still plenty of work left to be done on discerning the biology of these wasps and their evolutionary relationship with their hosts.

For Forbes, this is a clear sign that we need to pay more attention (and offer more funding) to this type of study.

“one of the aspects of this kind of work that I find endlessly amazing is that there continues to be so much undiscovered diversity hidden even in the urban and suburban parks and backyards where we made many of these collections,” the researcher concluded in an email. “For some of the reasons Sofia mentioned, the US and the world should really invest more in discovery-based biology – there is a lot more left to find!”

The study was published in Insect Systematics and Diversity.

Researchers peer into the brain of birds as they’re singing their best song

Birds, just like artists or athletes, train and finesse their songs. When it’s crunch time, they’d better be ready to bust out their best song — or they may end up not having anyone to mate with. In a new study, researchers have zoomed in on the brains of birds as they practice and perform their songs.

Zebra finches. Image via Pixabay.

Zebra finches are common birds in Australia. They’re loud and boisterous singers, and they spend a lot of time working on perfecting their songs. Male zebra finches will often go about their days practicing their courtship melodies, producing variations and trialing different versions of the song. But when they spot an attractive female zebra finch, they stop screwing around.

Researchers have observed that when the game is on, they always sing a singular, perfected version of their song — no more variations or experiments. Essentially, they produce the best song they can.

Researchers wanted to figure out how the birds do this and what happens inside their brains when they do. Thanks to a novel approach that allows them to monitor up to a hundred bird neurons at a time, they succeeded.

“To figure out how to move, it needs to first try out many different movements, to try out different ways of accomplishing a goal of moving their body,” said Jonna Singh Alvarado, who led this project for his Ph.D. dissertation at Duke. “They need to learn, ‘If I think this, how am I about to move? How will that move my body?’ and it needs to do that in many variations.”

Credits: Alvarado et al. / Nature.

To the human ear, these differences are subtle and hard to detect, explains Richard Mooney, Alvarado’s thesis advisor. But female zebra finches are very receptive to these subtleties. They dislike practice songs, but a precise game-time song makes them intrigued and attentive.

When the males practice the not-serious song, neurons in an area of the brain called the basal ganglia (which is also responsible for controlling major movements) allow variation in the song. Various neuron circuitries are used, corresponding to different songs. But when it’s go-time, these alternative pathways are shut down by a squirt of the neurotransmitter noradrenaline in the basal ganglia.

“You’ve established this kind of brain-to-movement dictionary, where you’ve explored all these different ways that you can give commands and they can move your body,” said Alvarado, who is now a post-doctoral researcher at Harvard University. “And then, you can exploit the mapping you’ve created. ‘I’ve explored, I have this dictionary, let me grab the right words from this dictionary and perform exactly what I know I can perform, given what I know the female wants to hear.’ “

To keep this ‘best’ song in good shape, a lot of practice is required. Much like a human athlete or artist, birds practice a lot — and also just like in humans, practicing variations helps build a “dictionary” of workable notes that can then be used. The birds explore their vocal range and different musical combinations until they zoom in on the one they want to use. To Mooney, a self-described Jimi Hendrix fan, the males’ practice songs are a bit like Hendrix’s music.

“It kind of goes everywhere, there’s the kernel of one song, but then it sort of morphs. It’s like free jazz or something. And, you know, I think he was just really, really good at exploring when he was alone.”

Of course, tracking the neurons responsible for this is not an easy task. It took a lot of work from a lot of people working in different fields, Mooney explains.

“One of the things that’s been really hard in other animals is to figure out what the link is between the variability you’re producing, and the variability you want to produce,” said John Pearson, an assistant professor of biostatistics and bioinformatics at Duke, who led the statistical analysis of the neurons. “This is the first time that people have gotten a real sizable population of these cells, and we can begin to try to link the variability in vocal performance to the variability in neural activity.”

In addition to understanding how birds do things, this type of study could also be useful from a human perspective. The basal ganglia are present in all vertebrates, and in humans they are linked to conditions such as Parkinson’s and Huntington’s diseases, Tourette’s syndrome, and others, Mooney said. Understanding how basal ganglia neurons function normally and what happens when they malfunction is paramount to understanding how these conditions take shape — and how they can be fixed.

Journal Reference: Alvarado et al, Neural dynamics underlying birdsong practice and performance, Nature (2021). DOI: 10.1038/s41586-021-04004-1

Scientists make artificial fish powered by human heart cells

This hybrid fish-like robot swims using human heart cells. Credit: Michael Rosnach, Keel Yong Lee, Sung-Jin Park, Kevin Kit Parker.

Researchers have devised a biohybrid mechanical fish that can swim using a tailfin that flips from side to side, powered by human heart muscles grown in the lab. This strange-looking robot was able to swim to the beat of a heart for more than 100 days, and scientists hope to leverage these experiments to someday grow fully functioning human hearts for transplant.

Thousands of Americans are on the waiting list for a heart transplant, but only 55% receive one in time. And those who are lucky enough to get a transplant face the major risk of their bodies rejecting the new heart. Growing a new heart from scratch in the lab can solve both the shortage and biocompatibility problems since the transplanted organ would be grown from a patient’s own cells.

Easier said than done, though. Scientists across the world have devised all sorts of methods to grow lifelike heart models. Past studies have used a technique known as tissue engineering, in which heart cells are grown onto artificial support frames, akin to building a house out of brick and mortar. However, although these tiny heart models look promisingly like developing hearts, they fail to mimic the physiological response of a healthy human heart.

The researchers at Harvard and Emory University took up the challenge of growing tissue that not only looks like a heart, but beats like one too. They built upon previous research in which a team from the University of Illinois devised a soft robot shaped like a stingray whose swimming movements were powered by over 200,000 bioengineered rat heart cells known as cardiomyocytes, activated by light.

This time, the cardiomyocytes were derived from human stem cells, and the researchers layered them along each side of the biohybrid fish’s tail fin. Each time the muscle tissue on one side contracted or stretched, it opened up ion channels, triggering the opposite motion to follow. The mere act of physical bending is what triggers the muscles to activate and contract.

As such, the artificial fish is completely autonomous: it perpetuates its own movements independent of any external stimuli. That’s exactly how the human heart functions too, receiving minimal instructions from the brain.

“By leveraging cardiac mechano-electrical signaling between two layers of muscle, we recreated the cycle where each contraction results automatically as a response to the stretching on the opposite side,” said Harvard University bioengineer Keel Yong Lee.

“The results highlight the role of feedback mechanisms in muscular pumps such as the heart.”
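The self-perpetuating cycle Lee describes can be pictured with a tiny toy model (entirely illustrative, not from the study): one layer’s contraction stretches the opposite layer, and that stretch triggers the next contraction, so a single initial twitch keeps the tail flapping with no outside input.

```python
# Toy model of the fish's antagonistic muscle pair (illustrative only).
# Contracting one side stretches the other; the stretch opens ion
# channels that trigger that side's contraction in turn, so the motion
# self-perpetuates from a single initial twitch.

def simulate(beats):
    """Return which side contracts on each beat after the left fires first."""
    side = "left"  # assumed starting side
    history = []
    for _ in range(beats):
        history.append(side)
        side = "right" if side == "left" else "left"  # stretch-triggered flip
    return history

print(simulate(6))  # alternating left/right contractions, no external stimulus
```

In the real fish the “flip” is continuous mechano-electrical feedback rather than a discrete alternation, but the logic of the loop is the same.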

Over time, the tail wagging weakened, but after the researchers introduced a pacemaker that releases regular electrical pulses, the artificial fish kept swimming for more than 100 days.

“Because of the two internal pacing mechanisms, our fish can live longer, move faster, and swim more efficiently than previous work,” explains biophysics researcher Sung-Jin Park, co-first author of the study that appeared in the journal Science.

The ultimate goal is to take these lessons to make an artificial, self-pulsing muscle that could replace parts of a damaged human heart; for instance, to repair the organ after a heart attack. However, the technology is still a long way from making its way into the operating room. Growing hearts from scratch for transplants is an even more distant dream.

But although this kind of research is still in its infancy, the results so far are very promising.

“Our ultimate goal is to build an artificial heart to replace a malformed heart in a child,” said Kit Parker, senior author of the study. “Most of the work in building heart tissue or hearts, including some work we have done, is focused on replicating the anatomical features or replicating the simple beating of the heart in the engineered tissues. But here, we are drawing design inspiration from the biophysics of the heart, which is harder to do. Now, rather than using heart imaging as a blueprint, we are identifying the key biophysical principles that make the heart work, using them as design criteria, and replicating them in a system, a living, swimming fish, where it is much easier to see if we are successful.”


The human tongue can actually ‘smell’ things

New research shows that the senses of taste and smell are much more intertwined than we’ve previously thought.


Image via Pixabay.

A team of researchers from the Monell Center report finding functional olfactory receptors — the sensors that detect odors in the nose — in the taste cells of our tongues. The findings suggest that the interactions between smell and taste, both of which comprise flavor, may actually begin on the tongue and not in the brain.

Smelling strawberries

“Our research may help explain how odor molecules modulate taste perception,” said study senior author Mehmet Hakan Ozdener, MD, PhD, MPH, a cell biologist at Monell.

“This may lead to the development of odor-based taste modifiers that can help combat the excess salt, sugar, and fat intake associated with diet-related diseases such as obesity and diabetes.”

You know how much of a strawberry’s flavor disappears when you eat one while pinching your nose? That’s how much the sense of smell contributes to creating flavor.

The sense of taste handles sweet, salty, sour, bitter, and umami (savory) molecules on the tongue. It evolved as a quick way for our brains to figure out how nutritious something we’re chewing on is, and make sure it’s not toxic or poisonous. But smell, too, was an important part in detecting the next snack. A pear and an apple taste pretty much the same if you hold your nose while eating. What our brains do when we eat something is to combine taste and smell, alongside information from other senses, to create what we perceive as flavor.

Common wisdom held that information from taste and smell stays separate until reaching the brain. However, Ozdener realized that no one had previously checked this assumption. The thought occurred to him when his 12-year-old son asked him whether snakes extend their tongues so that they can smell. So, alongside colleagues at Monell, Ozdener set about culturing living human taste cells.

After developing the techniques that would allow them to maintain such a culture, the team probed the cells and found that they contained many of the key molecules present in human olfactory receptors. Next, they employed calcium imaging to show that these cells respond to odor molecules in a manner similar to olfactory receptor cells. Taken together, the data points to olfactory receptors playing a role in our taste systems — possibly by interacting with taste receptors on the tongue. Other experiments by the Monell scientists demonstrated that a single taste cell can contain both taste and olfactory receptors, which supports the present findings.

“The presence of olfactory receptors and taste receptors in the same cell will provide us with exciting opportunities to study interactions between odor and taste stimuli on the tongue,” said Ozdener.

The findings help us better understand how smell and taste interact. However, they could also better inform us about either of those senses individually. We still don’t know, for example, what compounds activate the vast majority of the 400 types of functional human olfactory receptors. The cells cultured by the team, which respond to odors, could be used to screen molecules that bind to such receptors.

The paper “Mammalian Taste Cells Express Functional Olfactory Receptors” has been published in the journal Chemical Senses.

Invasive hammerhead worms are starting to conquer Europe and Africa

Researchers have described two species of worms sporting a distinctive hammerhead look. The worms, discovered in parts of Europe and Africa, are likely invasive species and could wreak havoc on soil biodiversity.

Humbertium covidum, an invasive hammerhead worm found in Italy. Image credits: Pierre Gros.

As the world becomes increasingly globalized, species are being brought from one part of the world to another. These “alien” species have the potential to overrun the new ecosystems they’re brought to, and oftentimes, by the time you realize there’s a problem, there’s little you can do about it.

Oftentimes, you don’t even notice these invasive species unless you’re really paying attention — and this is exactly the case here.

An international team led by Professor Jean-Lou Justine from ISYEB (Muséum National d’Histoire Naturelle, Paris, France) described two new species of hammerhead flatworms. This is the first study of these species, although flatworms have been invading Europe for some time.

“We were surprised at first that some of the species which were invading Europe, a place where biodiversity is supposed to be well known, did not even have a name. That was the case of Obama nungara, a species described only in 2016,” Justine told ZME Science. Justine’s team later charted that species’ invasion in a 2020 paper with a charming title. The name Obama is formed from the Tupi words oba (leaf) and ma (animal), a reference to the worm’s body shape.

“This is also the case for the two new species described in this paper, they had no names and were never described in their countries of origin.”

Hammerhead worms are predatory creatures, much like their shark namesakes. They can track their prey (typically other worms or mollusks) and bear a distinctively shaped head region, which helps them as they creep over the soil substrate.

Diversibipalium mayottensis, an invasive species of hammerhead worm found in Mayotte.

A number of hammerhead worms have been described by scientists but, in many cases, the researchers don’t describe them in their land of origin, instead finding them in countries that they have already invaded. For instance, two previously described species (Bipalium pennsylvanicum and Bipalium adventitium) originate from Asia but were first reported from the US. The two newest species follow the same trend.

“I have been working on invasive land flatworms since 2013, when I discovered that gardens in France (and Europe) were invaded by bizarre worms and that almost no scientist was working on this problem. Leigh Winsor, the Australian member of our team, has been working on them since the 80’s,” Justine adds.

The first new species was named Humbertium covidum, as an homage to the victims of COVID-19, but also because much of the work was carried out during the COVID-19 lockdown.

“Due to the pandemic, during the lockdowns most of us were home, with our laboratory closed. No field expeditions were possible. I convinced my colleagues to gather all the information we had about these flatworms, do the computer analyses, and finally write this very long paper. We decided to name one of the species “covidum”, paying homage to the victims of the pandemic.” 

The worm was found in two gardens in the Pyrénées-Atlantiques (France) and also in Veneto (Italy). Although some hammerhead worms can reach up to one meter, this one is small (3 cm) and looks uniformly metallic black — an unusual color among hammerhead flatworms.

These creatures are not easy to characterize based on their morphology alone, so the researchers turned to mitochondrial genetic analysis, which can provide a lot of information about the origin of a species and which other species it is related to. Humbertium covidum appears to have originated in Asia and is potentially invasive. By analyzing its gut contents, the researchers also found that it eats snails.

The second species, Diversibipalium mayottensis was only found in Mayotte (a French island in the Mozambique Channel, Indian Ocean). The species is as small as the other one, but instead of a metallic black, it exhibits a spectacular green-blue iridescence. Based on genetic analysis, this species appears to belong to a “sister group” of all other hammerhead flatworms, which means it could help researchers understand how these creatures evolved. Its origin could be Madagascar, but it’s not entirely clear. Presumably, at some point in the past, people brought plants from Madagascar and unknowingly, also brought the worm.

“All land flatworms are generally transported with potted plants,” Justine says. “For the species in Europe, Humbertium covidum, it is likely that the species was transported in recent years, from Asia, with some imported plant. For the species in Mayotte, Diversibipalium mayottensis, it is likely that it comes from Madagascar, but the transport might have happened a long time ago, perhaps even centuries ago, by traditional exchanges between islands in this part of Africa.”

Although finding new species is generally good news, that's not necessarily the case here. These flatworms are probably bad news, especially outside their natural environment. For instance, one study found that a single flatworm species from New Zealand became invasive in the UK, and once it became established, earthworm biomass declined by 20%.

"All land flatworms are predators of the other animals of the soil fauna, and, as such, can threaten the biodiversity and ecological balance of species in a soil. However, there are only a very few papers in which their impact was thoroughly studied, because these studies are long and expensive," Justine explained in an email to ZME Science.

The study comes with a clear warning: invasive species are probably more prevalent than we realize. In the US alone, invasive species are estimated to cause damage of around $120 billion, and the figure is likely to increase as the world becomes more and more interconnected. Unfortunately, when it comes to dealing with invasive hammerhead worms, prevention is pretty much our only weapon.

"Basically, there is not much to be done once a land flatworm has invaded a country. Prevention is the key, we need to avoid importing new flatworms (that is true for Europe and US)," Justine concludes.

The study was published in the journal PeerJ.

Researchers successfully regrow limbs on frogs. They want to do the same thing with humans

Most animals have pretty good injury repair capabilities, but when it comes to lost limbs, only a select few can regrow them. The rest, including humans, can do little to repair such injuries. But as a new study shows, with the right treatments, our bodies may be hacked and “convinced” to regrow lost limbs. Although the work focused on frogs, which are obviously very different from humans, the proof of concept suggests that this approach could work in many animals, including humans.

The African clawed frog (Xenopus laevis). Image via Wiki Commons.

Limb regeneration is a new frontier in biomedical science. It’s something we’ve long considered outside the realm of possibility, restricted only to superheroes and myth, but research is bringing it closer and closer to reality.

While many things differentiate humans from frogs, neither we nor they are able to regenerate limbs. So researchers at Tufts University and Harvard University’s Wyss Institute used frogs (specifically, the African clawed frog or Xenopus laevis) as a proof of concept. X. laevis is often used in research as it is easy to handle, lays eggs throughout the year, and for a model organism, shares a close evolutionary relationship with humans.

The researchers triggered the regrowth of a lost leg using a five-drug cocktail that they applied in a wearable silicone bioreactor dome that sealed the drugs over the stump for just 24 hours. After the treatment was administered, the regenerative process was kickstarted, and over the course of an 18-month period, the frogs regrew an almost fully functional leg.

“It’s exciting to see that the drugs we selected were helping to create an almost complete limb,” said Nirosha Murugan, research affiliate at the Allen Discovery Center at Tufts and first author of the paper. “The fact that it required only a brief exposure to the drugs to set in motion a months-long regeneration process suggests that frogs and perhaps other animals may have dormant regenerative capabilities that can be triggered into action.”

The experiment was repeated on dozens of frogs, and while not all of them regrew limbs, most did — including bone tissue and even toe-like structures at the end of the limb (though these weren’t supported by bone). It’s not a magic elixir, and the treatment is not perfect, but the drug cocktail delivered through the wearable bioreactor really does seem capable of regrowing limbs.

Regrowth of soft tissue. The MDT group (bottom) represents the five-drug cocktail treatment. Image credits: Murugan et al (2022).

The researchers essentially hacked the biological pathways that enable the growth and organization of tissue — much like in an embryo. This is why the treatment was only applied once, over the course of a day; meanwhile, other approaches involve numerous interventions over the course of the process.

“The remarkable complexity of functional limbs suggests that the fastest path toward this goal may lie in triggering native, self-limiting modules of organogenesis, not continuous micromanagement of the lengthy process at the cell and molecular levels,” the researchers write in the study. “We implemented this via a short exposure of limb amputation wounds to a wearable bioreactor containing a payload of five select biochemical factors.”

The first stage is the formation of a mass of stem cells at the end of the stump, which is then used to gradually reconstruct the limb. It’s essential that this structure is covered with the dome as quickly as possible after amputation to ensure its protection and activation; ideally, the treatment would be applied right after the injury.

“Mammals and other regenerating animals will usually have their injuries exposed to air or making contact with the ground, and they can take days to weeks to close up with scar tissue,” said David Kaplan, Stern Family Professor of Engineering at Tufts and co-author of the study. “Using the BioDome cap in the first 24 hours helps mimic an amniotic-like environment which, along with the right drugs, allows the rebuilding process to proceed without the interference of scar tissue.”

At first, researchers tried using the protective dome with a single drug, progesterone, a steroid hormone involved in the menstrual cycle, pregnancy, and embryogenesis in humans and other species. This alone triggered some limb growth, but the resulting limb was essentially a non-functional spike. Each of the other four drugs fills a different role, ranging from reducing inflammation and preventing scar tissue formation to promoting the growth of new nerves, blood vessels, and muscles. It’s the combination of all of these together that leads to a near-functional limb.

Researchers note that while the regrown limbs weren’t 100% identical to “normal” limbs, they featured digits, webbing, and detailed skeletal and muscular features. Overall, the results demonstrate the successful “kickstarting” of dormant regenerative pathways.

The plan now is to move on to mammal research. Despite the differences between frogs and mammals, researchers say that the biggest difference lies in the “early events of wound healing” — if these early processes can be understood and replicated, then there’s no apparent reason why this couldn’t be applied to mammals, and ultimately humans as well.

“The goal of triggering latent tissue-building routines to regrow limbs in humans may be achieved by identifying and exploiting principles observed in highly regenerative organisms,” the researchers conclude.

The study was published in the journal Science Advances.

World’s earliest flower fossils might untangle Darwin’s ‘abominable mystery’

Left inset: flower bud; the fruiting body is shown in the top right while the bud is depicted in the bottom right. Credit: Wang Xin.

Paleontologists in China claim they may have found the earliest fossils of a flower bud to date. The findings suggest that flowering plants, or angiosperms, appeared tens of millions of years earlier than the fossil record previously suggested and help resolve one of the vexing problems plaguing Darwin’s theory of evolution by natural selection, which he described as an “abominable mystery”.

Flowering plants produce flowers and bear their seeds in fruits, unlike their gymnosperm peers that have unenclosed seeds and lack a flower. Examples of angiosperms include monocots like lilies, orchids, agaves, and grasses, as well as dicots like roses, peas, sunflowers, oaks, and maples. Gymnosperm examples include non-flowering evergreen plants such as pine, spruce, and fir trees.

Gymnosperms represent some of the oldest plant life in geological history. But once angiosperms appeared, they relatively quickly started replacing their older peers within the span of only a few tens of millions of years. To this day, angiosperms are by far the dominant form of plant life on Earth.

How did angiosperms spread so quickly? Why and when did they appear in the first place? How come they were already incredibly diverse by the time they were spotted in the earliest fossil records?

As Charles Darwin pondered these questions in the late 19th century, he couldn’t come up with a satisfying answer. The fact that flowering plants conquered the world so fast while all other species seem to have evolved gradually was a huge thorn in the side for the British naturalist, who proclaimed that natura non facit saltum, or nature makes no leap. But then came the angiosperms, which didn’t seem to get the memo.

In an 1879 letter to fellow botanist and explorer Dr. Joseph Hooker, Darwin wrote: “The rapid development as far as we can judge of all the higher plants within recent geological times is an abominable mystery.”

This puzzle was to biology what Fermat’s Last Theorem was to mathematics.

The mystery may be explained by the fact that angiosperms evolved much earlier than Darwin or his followers thought, according to a new study. Prof. Wang Xin from the Nanjing Institute of Geology and Palaeontology of the Chinese Academy of Sciences (NIGPAS), and colleagues have described the earliest fossil flower bud so far, which was found in pristine condition in Inner Mongolia, China.

The ancient flower bud, christened Florigerminis jurassica, is dated to more than 164 million years ago. It is without a doubt an angiosperm, judging from the presence of the flower bud, connected fruit, and leafy branch.

Previously, scientists had identified very ancient flower fossils dating back as far as 145 million years ago, in the case of Euanthus, or even 174 million years for Nanjinganthus, which was also found in China. But although these fossils seem to include seeds completely enclosed in an ovary, many experts were not convinced these were true angiosperms.

However, Florigerminis jurassica has a flower bud, fruit, and leafy branch — an unmistakable trio that cements the ancient fossil’s status as an undeniable angiosperm.

Flowers are notoriously difficult to fossilize, which partly explains Darwin’s mystery. The remarkably rare discovery of Florigerminis jurassica shows flowering plants were already well on their way to dominating the planet in the Jurassic, demanding a rethinking of the timeline of angiosperm evolution.

The findings appeared in the journal Geological Society, London, Special Publications.

Tasmanian Devils are picky eaters — and they may just have broken the laws of scavenging

Credit: Mathias Appel, Flickr.

As the largest carnivorous marsupial in the world, the Tasmanian devil is strictly carnivorous, hunting frogs, birds, fish, and insects. But most of its meals actually consist of carrion. Yet Tasmanian devils aren’t your typical scavengers that will devour anything they can get their teeth on. Much to everyone’s surprise, researchers in Australia found that the devils have very specific tastes and dietary preferences, which can furthermore vary from individual to individual. That’s rather unheard of for scavengers, but the rowdy devils are not ones to play by the rules.

“It’s a scavenger’s job to just be a generalist and take whatever it can find,” says Tracey Rogers, senior author of the study and a Professor at the School of Biology, Earth and Environmental Studies at the University of New South Wales.

“But we’ve found that most Tasmanian devils are actually picky and selective eaters—they’ve broken the laws of scavenging.”

Scavengers, also called carrion-feeders, are animals that feed partly or wholly on the bodies of dead animals. Vultures, crows, and hyenas are among the most famous scavengers in the animal kingdom, playing an important role in the food web by keeping the ecosystem free of carrion and recycling organic matter into ecosystems as nutrients.

One of the reasons scavengers have a place in the food web, somewhere between prey and predator, is that they are very flexible about what they eat. The American crow will eat mice, eggs, seeds, and nuts, for instance, making them highly adapted to virtually any environment, be it the wild or sprawling urban areas.

Part scavengers, Tasmanian devils have always been thought to eat just about anything — but it turns out they’re pickier than a toddler.

Anna Lewis, the lead author of the study and a Ph.D. candidate at UNSW Science, laid traps on the island of Tasmania for a week at a time, catching around 10 devils per day. In total, the team captured 71 individuals across seven different sites, removing small whisker samples before releasing the animals back into the wild. Each whisker is embedded with isotopes from the food the devils ate in the past, thus revealing their diets.

Only around one in ten devils had a generalist diet, consisting of whatever food was available in their habitats. Most devils, however, chose to eat their favorite foods, such as wallabies, possums, and rosellas, turning up their noses at unappealing carrion.
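A rough, hypothetical sketch of the whisker-isotope logic (all numbers invented for illustration): isotope readings taken along a whisker record meals over time, so a wide spread of values hints at a generalist that eats whatever is available, while a narrow spread hints at a specialist sticking to a favorite food.

```python
# Hypothetical illustration, not the study's actual analysis or data:
# the spread (standard deviation) of isotope values along a whisker
# separates varied diets from specialized ones.
from statistics import pstdev

whiskers = {
    "devil_A": [7.1, 7.2, 7.0, 7.3],  # narrow spread -> specialist
    "devil_B": [5.8, 8.9, 6.4, 9.7],  # wide spread -> generalist
}

THRESHOLD = 1.0  # arbitrary cut-off chosen for this illustration

for name, values in whiskers.items():
    label = "generalist" if pstdev(values) > THRESHOLD else "specialist"
    print(name, label)
```

The study’s real analysis is far more involved; this only conveys why whisker isotopes can distinguish the two feeding strategies.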

The heaviest devils also proved to be the pickiest eaters. This could mean either that size is a driving factor in their food choices or, alternatively, specializing in certain types of carrion helps them gain weight.

What’s more, there was a great deal of variation among individuals. Just like humans, individual devils have their favorite meals.

“We were surprised the devils didn’t want to all eat the same thing,” said Lewis in a statement.

“Most of them just decided, ‘No, this is my favorite food.'”

Lewis and colleagues add in their study, published in Ecology and Evolution, that this behavior seems to be devil-specific. Sure, there may be other non-generalist scavengers out there, but we’ve yet to find them.

Other scavengers can’t afford the luxury of saying ‘no thanks!’ to whatever carrion comes their way. Vultures in Africa, for instance, have to compete with myriad other predators and scavengers for food. Once they smell carrion, they’ll swoop right in, no questions asked. Check, please!

But in Tasmania, Tasmanian devils are virtually at the top of the food chain, with little competition for carcasses. “Their main competition is just with each other,” said Professor Rogers.

Arcturus, one of the devils from the study, named after one of the brightest stars in the sky, likes to eat pademelon and wallabies. But every once in a while, he decides to go for something different, indulging in a snake or two.

“Tasmanian devils are these really cool scavengers that are doing something completely different to every other scavenger in the world,” says Ms. Lewis.

“We’re lucky to have them here in Australia,” she added, hoping to keep it that way. The numbers of Tasmanian devils have plummeted since the 1990s due to a variety of reasons, chief among them a serious epidemic called Devil Facial Tumor Disease (DFTD).

It’s only one of three transmissible cancers known to science (the others being in dogs and shellfish), but also one of the most unforgiving, with an almost 100% fatality rate. Today, the population of the iconic Australian marsupial is down 90%, and many researchers fear the devil may be doomed unless something is done about it — and fast.

Until scientists develop a viable treatment or vaccine for DFTD, conservation groups have focused on minimizing interactions between populations, even opting for capturing some devils until it’s safe to release them back into the wild. Dietary studies such as these may help inform conservationists what kind of diets the devils respond best to in order to maximize their odds of survival in captivity.

“From a conservation perspective, the findings could help us work out if we’re feeding devils the appropriate thing in captivity,” says Ms. Lewis.

“At the moment, there’s a long list of foods that devils can eat, but it’s not specific in how often they eat all those foods or whether most only focus on a few different food types.”

Extinct species of fish reintroduced into its native habitat in Mexico

A little river in Mexico is the site of one of 2021’s most heartwarming tales — the reintroduction of a species that had gone extinct in the wild.

Tequila splitfin (Zoogoneticus tequila). Image via Wikimedia.

We often hear stories about animals going extinct, and they’re always heartbreaking. But, every so often, we get to hear of the reverse: a species that had gone extinct being reintroduced into the wild. The waters of the Teuchitlán, a river in Mexico that flows near a town bearing the same name, can now boast such a tale.

Efforts by local researchers, conservationists, and citizens, with international support, have successfully reintroduced the tequila splitfin (Zoogoneticus tequila), a tiny fish that only lived in the Teuchitlán river but had gone extinct during the 1990s, to the wild.


In the 1990s, populations of the tequila splitfin began to dwindle in the Teuchitlán river. Eventually, it vanished completely.

Omar Domínguez, one of the researchers behind the program that reintroduced the species and a co-author of the paper describing the process, was a university student at the time and worried about the fish’s future. Pollution, human activity, and invasive, non-native species were placing great pressure on the tequila splitfin.

Now a 47-year-old researcher at the University of Michoacán, he recounts that, by then, only the elderly in Teuchitlán remembered the fish — which they called gallito (“little rooster”) because of its brightly colored orange tail.

Conservation efforts started in 1998 when a team from the Chester Zoo in England, alongside members from other European institutions, arrived with several pairs of tequila splitfin from the aquariums of collectors and set up a lab to help preserve the species.

The first few years were spent breeding the fish in aquariums; reintroducing them to the river directly was deemed too risky. So Domínguez and his colleagues built an artificial pond on-site, in which the fish could breed in semi-captivity. Forty pairs of tequila splitfins were placed in this pond in 2012, and by 2014 they had multiplied to around 10,000 individuals.

By now, their results gave all the organizations involved in the effort (various zoos and wildlife conservation groups from Europe, the United States, and the United Arab Emirates) enough confidence to fund further experimentation. So the team set their sights on the river itself. Here, they studied the species’ interactions with local predators, parasites, microorganisms, and how they fit into the wider ecosystem of the area.

Then, they placed some of the tequila splitfins back into the river — inside floating cages. This step, too, was a marked success, and the fish multiplied quickly inside the cages. When their numbers grew large enough, around late 2017, the researchers marked the individual fish and set them free. In the next six months, their population increased by 55%, the authors report. The fish are still going strong, they add: in December 2021, they were seen inhabiting a new area of the river, from which they had previously disappeared completely.

It’s not just about giving a species a new lease on life, the team explains. Their larger goal was to restore the natural equilibrium of the river’s ecosystem. Although there is no hard data on environmental factors in the past to compare with, Domínguez is confident that the river’s overall health has improved. Its waters are cleaner, the number of invasive species has declined, and cattle are no longer allowed to drink directly from the river in some areas.

Local communities were instrumental in the conservation effort.

“When I started the environmental education program I thought they were going to turn a deaf ear to us — and at first that happened,” Domínguez said.

However, the conservationists made sustained efforts to educate the locals through puppet shows, games, educational materials, and presentations about Zoogoneticus tequila. Among other things, citizens were told about the ecological role of the species, and the part it plays in controlling dengue-spreading mosquitoes.

The tequila splitfin is currently listed as endangered on the IUCN’s red list.

The paper “Progress in the reintroduction program of the tequila splitfin in the springs of Teuchitlán, Jalisco, Mexico” has been published online by the IUCN CTSG (Conservation Translocation Specialist Group). An update on the project has been published in the magazine Amazonas.

Women can tell which men are only interested in one night stands just by looking at their faces, study finds

The left portrait shows the average face of a man with a low perceived sociosexual score. The right portrait corresponds to a high score. Credit: Evolution and Human Behavior.

Researchers found that women are really good at judging which men are only interested in short-term uncommitted relationships just by reading their faces. Apparently, men with longer faces, higher foreheads, longer noses, and larger eyes tend to be more open to casual sex, and women can pick up on this.

The team of researchers from Macquarie University in Australia embarked on this study starting from a debate about the mechanics of attraction in humans. According to evolutionary psychology, humans are attracted to healthy, fertile, and compatible potential mates. If such is the case, then humans must be able to read certain cues in our faces and bodies that reflect these desirable mating characteristics.

If humans can visually judge if someone is ‘hot’ or not, what other cues can they pick up that are evolutionarily important when selecting a mate? The Australian researchers hypothesized that being able to tell whether someone is interested in a monogamous relationship or casual sex could be the kind of information that humans may be able to extract from visual cues, such as facial traits.

With this in mind, they recruited 103 white individuals, both male and female, who had their portraits taken and completed a survey that assessed their level of sociosexuality (how open they are to casual, uncommitted sex). The researchers found that sociosexuality scores were associated with facial shape characteristics in men, but not in women, where no reliable association could be found.

In a subsequent study, these photos were shown to 65 heterosexual participants who were asked to assess the sociosexuality of the person of the opposite sex shown in the pictures. Interestingly, women’s perception of male sociosexuality matched the men’s self-reported sociosexuality scores, showing that women can predict some of men’s sexual desires and intentions from their faces. However, males were terrible at this task. Their perception of women’s sociosexuality did not match the women’s self-reported attitudes and behaviors towards casual sex.

Lastly, the researchers used the data gathered in the previous studies to make computer-generated pairs of portraits representing high- and low-sociosexuality faces. Participants correctly identified high-sociosexuality faces better than chance — yet again, just in men and not women.

It’s not clear what’s responsible for these observed effects. It may be that assessing whether a potential mate will stick around or not is much more important for women than men because sex (and pregnancy) is more costly.

The researchers suspect that the sociosexual orientation reflected in males’ faces may be due to testosterone, which leads to quite a bit more variation in facial features in men than seen in women. Men with higher testosterone tend to have more traditionally masculine faces (for example, a wider brow, a longer nose, and a wider distance between the eyes). High-testosterone men tend to be more attractive to women, but also tend to express more promiscuous tendencies.

However, this hypothesis cannot be tested because the study did not include testosterone measurements for the participants. As a caveat, the researchers caution that these findings shouldn’t be used as a license to make snap judgments about people’s personalities or intentions. Ultimately, the best test of a person’s character is getting to know them.

The findings appeared in the journal Evolution and Human Behavior.

This article originally appeared in July 2021.

Why your dog likes to eat grass

Credit: Pxhere.

Your beloved canine is obviously not a cow, but that doesn’t stop them from behaving like one sometimes. Many dog owners are baffled when they see a dog eat grass, perhaps because they’ve never imagined them as grazing animals. You shouldn’t fret, though. Dogs eating grass is a lot more common than you think.

This behavior of eating things that technically aren’t food, known as pica, has been observed before in wild dogs and wolves (plant material has been found in 2% to 74% of stomach content samples of wolves and cougars), so it may be completely natural. As to why exactly dogs engage in this strange behavior despite having access to an unlimited supply of scooby snacks, no one is really sure.

Some veterinarians believe that dogs eat grass because they are sick and need to vomit, although some studies we’ve found refute this idea. Alternatively, dogs may be experiencing a dietary deficiency, but if that’s the case, why do dogs on a balanced diet still partake in consuming plants sprouting off the sidewalk or on your neighbor’s lawn?

Whatever the case may be, veterinarians generally agree that this behavior is both common and safe. A survey of 49 owners found that 79% of their dogs had eaten plants at least once, with grass being the most commonly eaten plant.

Do dogs eat grass because they’re sick?

Credit: Pixabay.

One common belief is that plant-eating in dogs is a response to illness, and that the ingestion of grass and other plant material is followed within minutes by vomiting. In a 2008 study, 25 veterinary students who had pet dogs were asked about signs of sickness before grass consumption. All participants reported that their dogs ate grass, but none observed any signs of illness before their dogs ingested the plants. Only 8% said that their dogs regularly vomit afterward.

A survey of 47 dog owners produced similar results: only four dogs showed signs of illness before ingesting grass, and only six vomited afterward.

The researchers then extended their study by making the same inquiries in an online survey, which this time included 1,571 participants. The findings showed that 68% of the respondents said their dogs regularly ingest plants (on a daily or weekly basis), but only 8% said that their dogs showed some signs of illness before plant-eating. Around 22% of the respondents said that their pets regularly vomit afterward. Younger dogs were more likely to eat plants more frequently than older dogs and were also less likely to appear ill beforehand or to vomit afterward.

Each dog owner also supplied information about the diets of their pets, showing no indication that dogs who were primarily fed table scraps or raw food were any more prone to eating grass than those on a commercial ‘dog food’ diet.

One proposed reason why dogs eat grass is that the canines may receive less fiber in their diet, but the study found no connection between the two.

These results suggest that grass eating is a highly common behavior and is likely unrelated to illness or vomiting afterward. “Vomiting seems to be incidental to, rather than caused by, plant eating,” wrote the researchers.

Eating grass may help Fido’s digestion

According to Benjamin Hart, Professor Emeritus at the School of Veterinary Medicine at the University of California, Davis, plant-eating likely played a role in the ongoing purging of intestinal parasites (nematodes) in wild canids. When the plant material passes through the intestinal tract, it increases intestinal motility and wraps around worms, thereby purging the tract of nematodes. This behavior may have been preserved in domesticated dogs, the researcher said, as well as in felines who also engage in the same type of pica.

Then again, some believe that dogs eat grass simply because they like its taste and texture. That may well be so, but proving that dogs eat grass purely out of enjoyment is challenging, if not impossible. Likewise, others believe that dogs eat grass because they’re bored, which raises the question: why aren’t you playing enough with your dog? If you believe your dog is eating grass to draw your attention, it may be their way of communicating that they feel neglected and would like some more pets, thank you.

Is eating grass safe for my dog?

The bottom line is that your pet’s tendency to consume plants is nothing to worry about nor is it out of the ordinary. Eating grass doesn’t seem to be associated with any illness. Instead, it seems like it is a trait inherited from wild ancestors.

There’s a caveat though. Some lawns are sprayed with certain herbicides and pesticides that may be toxic, depending on the dog’s size. In these cases, it may be safer to not allow your pet to eat plant material that may be contaminated.

To steer your dog away from eating grass, the best course of action is to present a viable alternative. So be prepared with a treat next time you’re out on a walk with your favorite canine, and offer it when the dog complies and refrains from nibbling grass.

Invasive cannibalistic toads are evolving so fast they’re pushing the limits of evolution

For cane toads in Australia, the biggest enemy is often… other cane toads. Cannibalistic tadpoles often munch on hatchlings like it’s an eating contest, and they do it so much they’re pushing the hatchlings into developing quicker — but this comes at a cost.

Invasive species are known for their ability to achieve high densities within their introduced range, the researchers note. Image in public domain.

It had to be Australia

The first cane toads (about 100 or so) were brought to Australia in 1935, in an attempt to control the cane beetles that had been running rampant through the plantations. Not only did the toads not eliminate the beetles, but they became a problematic invasive species themselves, multiplying way beyond control.

It’s a sad story that Australia has gone through multiple times, with different animals. Because they are poisonous, cane toads (Rhinella marina) have no natural predators in Australia, and went on to grow and spread throughout large swaths of the country. To make things even worse, carnivorous marsupials there can mistake the toads for prey, falling victim to their toxin.

But although adult toads can be quite menacing (measuring up to 25 cm, or 10 inches, long), it’s their tadpoles that are the cannibals (at least most of the time).

It’s not that uncommon for tadpoles to become cannibalistic; many frog and toad species do it. Normally, they only get snippy and try to eat their relatives in the pond when resources are scarce. But in the case of the Australian cane toads, this seems to be happening a lot.

A single clutch can have thousands or even tens of thousands of eggs. The tadpoles that hatch earlier can then gobble up the unhatched eggs — and they do it like there’s no tomorrow. Researchers have documented cases where over 99% of the hatchlings in a clutch were consumed by just a few tadpoles.

Jayna DeVore, an invasive-species biologist at Tetiaroa Society, a non-profit organization in French Polynesia, wanted to see whether all cane toads do this or just the Australian invaders. Along with her colleagues, she carried out a few experiments.

In one such experiment, repeated 500 times with different individuals, the researchers placed a tadpole in a container with 10 hatchlings. They found that all tadpoles engage in some cannibalism but hatchlings were “2.6 times as likely to be cannibalized if that tadpole was from Australia.”

In another experiment, the researchers placed two traps: one empty, and one holding hatchlings. Tadpoles from the invasive Australian populations were 30 times more attracted to the hatchlings than non-Australian tadpoles were.

An arms race

Tadpoles from another species (Agalychnis callidryas). Image credits: B. Kimmel / Wiki Commons.

Of course, the hatchlings aren’t sitting still. Well, they are, in a pond, but they’re not standing still from an evolutionary perspective.

Hatchlings in Australia are developing at a much faster pace than the others. This comes at a cost — when they reach the tadpole and mature stages of their life, they will not be as well-developed as their non-Australian peers, but it beats being devoured by a tadpole.

Even more impressively, the hatchlings seem to speed up the pace of their development when they sense a chemical released by other tadpoles. Since it’s not worth developing quicker when there’s no risk of cannibalism, the hatchlings only do it when they sense a risk.

“Here, we find that toad tadpoles from invasive Australian populations have evolved both a strong behavioral attraction to the vulnerable hatchling stage and an increased propensity to cannibalize these younger conspecifics. In response, these toads have also evolved multiple strategies for reducing the duration of the vulnerable period, indicating an evolutionary arms race between the cannibalistic tadpole stage and the vulnerable egg and hatchling stages in invaded habitats,” the researchers note in the study.

Although cannibalism is generally a dangerous strategy, in the case of the cane toads, it could actually be helpful. Tadpoles that consume their relatives aren’t just getting a lot of nutrients — they’re eliminating competition for the pond resources, which are sometimes scarce. They develop to mature toad stage faster and tend to be larger.

But the good news is that at the very least, this works as a form of population control, limiting the spread of the invasive species.

It’s also a remarkable demonstration of how quickly evolution can trigger changes. The toads roaming Australia now are notably different from those that first set foot on the continent. Australian cane toads are a frightening bunch: not only are they cannibalistic invaders, but they’re also evolving at a very rapid pace.

The study was published in PNAS.

Wild microorganisms are evolving to eat plastic pollution

Microorganisms around the world are likely evolving to be able to degrade and consume plastic materials.

Image via Pixabay.

A new global assessment of microorganism genomes, the largest study of its kind, found that wild bacteria and microbes are evolving to be able to consume plastics. Overall, the authors report that an average of one in four of the organisms analyzed in the study carried at least one enzyme that could degrade plastic. Furthermore, the number and types of enzymes matched the amount and type of plastic pollution at the location where samples of different organisms were collected — suggesting that this is a natural, ongoing process, caused by the presence of plastic in the environment.

These results are evidence that plastic pollution is producing “a measurable effect” on the world’s microbes, the authors conclude.

Plastic bacteria

“We found multiple lines of evidence supporting the fact that the global microbiome’s plastic-degrading potential correlates strongly with measurements of environmental plastic pollution — a significant demonstration of how the environment is responding to the pressures we are placing on it,” said Prof Aleksej Zelezniak, at Chalmers University of Technology in Sweden.

Millions of tons of plastic are dumped into the oceans and landfills every year, and plastic pollution has become endemic everywhere on Earth. Addressing this issue will be one of the defining challenges of future generations, along with efforts to reduce our reliance on such materials and improve our ability to recycle and cleanly dispose of used plastic. However, plastics are hard to degrade; that hardiness is one of their selling points to begin with.

According to the findings, microbes in soils and oceans across the globe are also hard at work on the same project. The study analyzed over 200 million genes from DNA samples taken from environments all around the world and found 30,000 different enzymes that could degrade 10 different types of plastics. Such compounds could serve us well in our efforts to recycle plastics, breaking them down into their building blocks. Having more efficient recycling methods on hand would go a long way towards cutting our need to produce more plastics.

“We did not expect to find such a large number of enzymes across so many different microbes and environmental habitats. This is a surprising discovery that really illustrates the scale of the issue,” says Jan Zrimec, also at Chalmers University, first author of the study.

The team started with a dataset of 95 microbial enzymes already known to degrade plastic; these compounds were identified in species of bacteria found in dumps and similar places rife with plastic.

They then examined the genes that encode those enzymes and searched for similar genes in environmental DNA samples collected at 236 sites around the world. To rule out false positives, they compared the candidate enzymes with enzymes from the human gut — all of which are known to be unable to degrade plastic.
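The screening logic described above — keep candidates that resemble a known plastic-degrading enzyme, but discard those that look just as much like a known non-degrader — can be sketched in miniature. To be clear, this is a toy illustration only: the study used proper bioinformatics pipelines on real sequence data, and the sequences, similarity measure, and threshold below are all invented for demonstration:

```python
from difflib import SequenceMatcher

# Hypothetical, made-up sequences standing in for real protein data.
known_degraders = ["MKTAYIAKQR", "MSLLTEVETP"]  # known plastic-degrading enzymes
negative_controls = ["MKTAYIAKQQ"]               # gut enzymes (cannot degrade plastic)
candidates = ["MKTAYIAKPR", "MAAAAAAAAA", "MKTAYIAKQQ"]

def similarity(a, b):
    """Crude similarity score (stand-in for real sequence alignment)."""
    return SequenceMatcher(None, a, b).ratio()

hits = []
for cand in candidates:
    best_known = max(similarity(cand, k) for k in known_degraders)
    best_negative = max(similarity(cand, n) for n in negative_controls)
    # Keep candidates resembling a known degrader (arbitrary 0.8 threshold)...
    if best_known < 0.8:
        continue
    # ...but drop those at least as similar to a negative-control gut enzyme.
    if best_negative >= best_known:
        continue
    hits.append(cand)

print(hits)  # -> ['MKTAYIAKPR']
```

The third candidate is filtered out precisely because it matches a gut enzyme better than any known degrader, which is the point of using the gut proteins as a negative control.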

Roughly 12,000 new enzymes were identified from ocean samples. Higher levels of degrading enzymes were routinely found in samples taken at deeper points, which is consistent with how plastic pollution levels vary with depth. Some 18,000 suitable genes were identified in soil samples. Here, too, the researchers underscore the effect of environmental factors: soils tend to contain higher levels of plastics with phthalate additives than the ocean, and more enzymes that can attack these substances were identified in soil samples.

Overall, roughly 60% of the enzymes identified in this study did not fit into a previously-known class, suggesting that they act through chemical pathways that were previously unknown to science.

“The next step would be to test the most promising enzyme candidates in the lab to closely investigate their properties and the rate of plastic degradation they can achieve,” said Zelezniak. “From there you could engineer microbial communities with targeted degrading functions for specific polymer types.”

The paper “Plastic-Degrading Potential across the Global Microbiome Correlates with Recent Pollution Trends” has been published in the journal Microbial Ecology.

The difference between cardiac arrest and heart attack

While these two terms are often used interchangeably, they denote different medical events. A heart attack (myocardial infarction) is a circulation problem: blood flow is blocked from reaching part of the heart. Cardiac arrest, by contrast, is an electrical problem: the heart suddenly stops beating and must be restarted.

Image credits Peggy and Marco Lachmann-Anke.

We’ve all heard these terms at one point or another. Because they’re both serious conditions and quite similar in symptoms, we also tend to lump them together and treat them as synonyms. That being said, however, they are not the same thing, and they are not interchangeable.

So let’s dive into the differences between them.

Heart attacks

These occur when one of the coronary arteries supplying oxygenated blood to a section of the heart gets blocked. If this blockage isn’t cleared quickly, cells in the affected area of the heart start dying due to a lack of oxygen. This effect builds up over time, so the longer an individual goes without treatment, the more damage accumulates in the tissues of that part of the heart.

Blockages are typically caused by build-ups of fatty deposits, mostly cholesterol (that’s why your doctor is so insistent you lower it), along with a number of other substances.

While symptoms can definitely be immediate and intense (such as feelings of pressure, tightness, or intense pain in the chest), they can also build up gradually, starting up to weeks in advance of an actual heart attack. There is also quite a large degree of variation in symptoms between patients: women can have different symptoms than men, and some patients have no symptoms at all. Angina (recurrent chest pain or pressure) triggered by physical activity and relieved by rest is the most common and earliest warning sign of a heart attack.

That being said, it’s important to act quickly if you’re experiencing these symptoms or think you’re having a heart attack. Call emergency services even if you’re not sure, as every minute matters. Emergency services personnel can begin treatment the moment they arrive, while getting to the hospital by yourself would take much longer. They can also provide resuscitation in case a patient’s heart has stopped completely.

Cardiac arrest

Unlike a heart attack, cardiac arrest occurs suddenly and very often without warning. It involves an abrupt loss of heart function and can be extremely dangerous.

It is caused by an electrical malfunction in the heart that produces an arrhythmia (irregular heartbeat). Because of this malfunction, blood flow to the brain, lungs, and other organs is disrupted — and with it, the supply of oxygen. The interruption of oxygen to the brain can render a person unconscious in mere seconds, and victims of cardiac arrest can die within minutes without treatment.

Symptoms of cardiac arrest include dizziness, loss of consciousness, and shortness of breath. Cardiac arrest events can happen in individuals who may or may not have been diagnosed with heart disease. It may be reversed, however, if CPR is performed on the patient, and a defibrillator is used to restore a normal heart rhythm within a few minutes.

If someone near you is experiencing cardiac arrest, first call emergency services. Then, get an automated external defibrillator (AED) if one is available; if not, begin performing CPR on the patient. If two people are available, one should begin CPR immediately while the other handles the call and retrieves an AED. Whenever an AED is available, it should be used as quickly as possible.

You may need to perform CPR on the patient for a longer period of time. If that’s the case, don’t worry: hands-only CPR to the beat of “Stayin’ Alive” can double or even triple a victim’s chances of survival — hang in there!

What’s the difference between a raven and a crow?

Although the two terms are often used interchangeably, ravens and crows are not the same birds. The differences between them are subtle, but we can learn to tell the two apart. So let’s get to it!

Image via Pixabay.

Ravens and crows are closely related. They both belong to the Corvus genus of the Corvidae family of birds. Outwardly, they’re very similar — both are jet black and share several morphological features. Their natural ranges also have a lot of overlap, so they’re often seen (and mistaken for one another) in the same areas of the world.

Here is where the terms get a bit muddy, however. “Crow” is often used as a catch-all term for any bird in the genus Corvus. At the same time, people tend to refer to any larger bird from this genus as a “raven”. Taken together, it’s easy to see why very few people seem to be able to describe with any real detail what truly differentiates these species.

But — lucky you! — we’re about to go through them today.

Crow or raven?

One of the first indications that you’re seeing a crow rather than a raven is that the former generally travels in large groups, while the latter prefers to hang out in pairs. If we happen upon a solitary bird, however, such context clues won’t do us much good; so we’ll have to look at the characteristics of the individual.

Common ravens (Corvus corax) are, indeed, larger than your average crow. This is especially useful to know in rural areas, where size can be a pretty reliable indicator of which of these birds you’re dealing with. Ravens aren’t particularly fond of urban areas and their bustling crowds, however, so if you’re in a city, you’re probably more likely to be seeing a ‘really big crow’ than a raven. As a rule of thumb, crows are about the size of a pigeon and weigh on average 20 oz / 0.55 kg, while ravens are roughly as large as hawks, typically weighing 40 oz / 1.1 kg.

A stuffed common raven and carrion crow, side by side, at the Natural History Museum of Genoa. Image via Wikimedia.

Meanwhile, “crows” — typically the carrion crow (Corvus corone) in Europe and the American crow (Corvus brachyrhynchos) in the U.S. — are quite fond of cityscapes and generally not people-shy.

The two species also produce different sounds. Crows vocalize through ‘caw’s or ‘purr’s, while ravens use much lower, rougher croaks. Personally, I find the latter to sound much more ominous, and use this as a rough but reliable guideline when trying to identify ravens.

If vocalizations are not forthcoming, either, we can start looking at the physical features of the birds in question. As far as the plumage is concerned, both species sport jet-black feathers. Raven feathers are very glossy with green, blue, and purple iridescence; they can also have a wet or oily sheen. Crow feathers are iridescent blue and purple but are far less shiny than those of ravens (although they still do have a little bit of sheen to them).

Ravens have larger and curvier beaks than crows. Both sport bristles at the base of the beak, but for ravens, these are much more pronounced. Ravens tend to have ruffled feathers on the throat, whereas crows’ are swept neat and tidy.

On the ground, both birds behave similarly. One reliable way to tell a raven apart here, however, is by how it walks: ravens tend to mix little hops into their gait when moving more rapidly. At a slow pace, a raven’s walking pattern is the same as a crow’s.

If you happen to spot the birds mid-flight, a few more tell-tale differences become apparent. A raven’s wingspan is much greater than that of a crow (3.5-4 ft / 1-1.2 m versus 2.5 ft / 76 cm), and a raven’s wing beats make a distinctive swishing sound, while a crow’s are silent. In flight, the raven’s neck is also longer than a crow’s. Crows tend to actively flap their wings more often than ravens, which prefer soaring on rising masses of air (they are heavier, and this helps them save energy). If you see such a bird soaring — gliding along with outstretched wings — for more than a few seconds at a time, chances are it’s a raven.

Ravens like to do all sorts of fancy acrobatics during flight, including somersaults (loops) or even flying upside-down, possibly just for fun. Such behavior is a dead giveaway that you’re looking at a raven, but its absence doesn’t tell you much: they tend to only engage in such playful behavior on windy days, or those with powerful thermals (rising masses of hot air) to keep them aloft.

As far as the shape of their wings is concerned, ravens have pointed wings with long primary feathers near their tip. Crows, meanwhile, have blunter wingtips; although their primaries are splayed as well, they are shorter and less pronounced than a raven’s.

Perhaps the single most distinctive difference between the two is the shape of their tails. All the feathers in a crow’s tail are the same length; in flight, their extended tails look like fans, with a rounded outline. Ravens meanwhile have longer feathers in the middle of their tails, giving them a wedge-like outline while the birds are in flight.
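As a playful summary of the field marks above, here is a toy identification helper. The thresholds are simply the rough figures quoted in this article, not any ornithological standard, and the scoring weights are our own invention:

```python
def raven_or_crow(weight_kg=None, wingspan_m=None, tail_shape=None,
                  soars_for_seconds=False):
    """Toy scoring helper based on the rough field marks in this article.

    tail_shape: 'wedge' (raven) or 'fan' (crow), if seen in flight.
    Returns 'raven', 'crow', or 'unsure'.
    """
    score = 0  # positive leans raven, negative leans crow
    if weight_kg is not None:
        score += 1 if weight_kg > 0.8 else -1   # ~1.1 kg raven vs ~0.55 kg crow
    if wingspan_m is not None:
        score += 1 if wingspan_m > 0.9 else -1  # ~1-1.2 m raven vs ~0.76 m crow
    if tail_shape == "wedge":
        score += 2   # wedge tail is the most distinctive raven mark
    elif tail_shape == "fan":
        score -= 2
    if soars_for_seconds:
        score += 1   # ravens soar; crows flap more actively
    if score > 0:
        return "raven"
    if score < 0:
        return "crow"
    return "unsure"

print(raven_or_crow(weight_kg=1.1, tail_shape="wedge"))  # -> raven
print(raven_or_crow(weight_kg=0.5, wingspan_m=0.75))     # -> crow
```

The tail shape carries the most weight in the score, mirroring the article’s point that it is the single most distinctive difference between the two.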

The differences between these two species are subtle — as well they should be, they are closely related, after all! The Corvidae family is also very numerous, and each species that belongs to it has its own particularities, some of which may not fit with what we’ve discussed here today. In general, however, they’re distinctive enough to tell apart.

Crows and ravens are some of the most similar — and most often-confused — species in this family. Hopefully the tips here will help you better tell them apart, and impress your friends with your knowledge of Corvidae!