Scientists have been observing exoplanets for decades now, and among the easiest to spot are the so-called hot Jupiters.
Hot Jupiters are gas giants that resemble Jupiter but orbit very close to their star, which makes them hot. And just as the Moon is tidally locked to the Earth, hot Jupiters are generally tidally locked to their star: one side always faces the star while the other always faces away. Now, for the first time, researchers have observed the dark side of a hot Jupiter.
WASP-121 b is nearly twice the size of Jupiter and lies so close to its star that it takes just 30 hours to complete an orbit. The planet was first discovered in 2015, but now, thanks to fresh data from Hubble, astronomers can analyze it in more detail than ever before.
The planet is tidally locked, and there’s a huge temperature difference between the side facing the star and the side facing outer space. Because it’s so close to its star, even the ‘cold’ side is hot, just not nearly as hot as the star-facing side. The hot side reaches temperatures beyond 4,940°F (2,726°C), hot enough to break water molecules apart into hydrogen and oxygen. Meanwhile, the dark side has temperatures of ‘just’ 2,780°F (1,526°C), still very hot, but cold enough for water molecules to form again.
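As a quick sanity check on the quoted figures, the Fahrenheit and Celsius values are consistent; a minimal conversion sketch (the function name is just for illustration):

```python
def f_to_c(temp_f):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32) * 5 / 9

# The two temperatures quoted for WASP-121 b:
print(f_to_c(4940))  # day side: ~2726.7 °C
print(f_to_c(2780))  # night side: ~1526.7 °C
```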
The team calculated that the planet’s atmospheric circulation is driven by winds that whip around the planet at speeds of up to 5 kilometers per second (more than 11,000 miles per hour).
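The two wind-speed figures are the same number in different units; a quick conversion sketch (function name illustrative):

```python
def km_per_s_to_mph(speed_km_s):
    """Convert a speed from kilometers per second to miles per hour."""
    KM_PER_MILE = 1.609344
    return speed_km_s * 3600 / KM_PER_MILE

print(km_per_s_to_mph(5))  # ≈ 11,185 mph
```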
Because there’s such a big temperature difference between the two sides, strong winds rip from one side to the other, sweeping atoms around. There’s no way water clouds (let alone liquid water) can exist on such a planet, but Hubble data shows that temperatures are low enough for metal clouds to form on the nightside. Iron and corundum (the mineral that makes up rubies and sapphires) appear to be present on the planet, and these are likely the minerals that form clouds on WASP-121 b.
The researchers say this study marks the first time an exoplanet’s global atmosphere has been characterized in this way. It could help us understand how the entire class of hot Jupiters forms and what their existence implies for the formation of planetary systems.
“We’re now moving beyond taking isolated snapshots of specific regions of exoplanet atmospheres, to study them as the 3D systems they truly are,” says Thomas Mikal-Evans, who led the study as a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research.
Journal Reference: Mikal-Evans, T., Sing, D.K., Barstow, J.K. et al. Diurnal variations in the stratosphere of the ultrahot giant exoplanet WASP-121b. Nat Astron (2022). DOI: 10.1038/s41550-021-01592-w
Recently, some high-profile geneticists attracted a lot of publicity when they announced they were working to resurrect the woolly mammoth, an iconic megafauna species that went extinct during the last ice age, some 10,000 years ago. The whole thing gave off massive Jurassic Park vibes, and given its ambitious scope, the mission was widely picked up by the media. After all, is there anything science can’t do?
The problem is that, in reality, this challenge could prove virtually impossible. Richard Feynman once said ‘science is imagination in a straitjacket,’ alluding to the fact that wild ideas, by themselves, are not enough to make a breakthrough. For imagination to become reality, it needs to be materialized within physical constraints — and a new study suggests there’s a hard floor when it comes to reconstructing the genetic material of long-extinct species.
Extinct species may be dead for good
Thomas Gilbert at the University of Copenhagen in Denmark set out to probe the limits of CRISPR — a powerful tool for editing genomes that allows researchers to easily alter DNA sequences and modify gene function.
Colossal, a bioscience company recently co-founded by Harvard University geneticist George Church, aims to leverage this technology to resurrect the woolly mammoth, or at least a creature very closely resembling one.
In a nutshell, the idea is to sequence DNA from samples of mammoth tusks, bones, and other remains. This genetic material would then be edited into Asian elephant stem cells, which would be used to produce a fertilized egg carried in an artificial womb, breeding a mammoth-elephant hybrid.
To explore the feasibility of such a lofty goal, Gilbert’s team attempted to reconstruct the genome of the Christmas Island rat, also known as Maclear’s rat (Rattus macleari), a species of rodent that went extinct in the early 1900s.
The team was able to reassemble most of the extinct rodent’s genome thanks to bits of code gleaned from the genome of the closely related Norwegian brown rat (Rattus norvegicus). The researchers recovered 95% of the Christmas Island rat’s genome, which sounds like a lot. Except, it’s not.
The last 5% of the genome they couldn’t make sense of is actually the most crucial part since it corresponds to the genes that differentiate the Christmas Island rat from other living relatives.
Some of the genes the researchers were able to recover include those controlling traits like the animal’s hair and ears; the Christmas Island rat had characteristically long black hair and round ears. However, many other genes were lost, their DNA sequences broken up into tiny pieces that cannot be reassembled.
The lost genes include those involved in the rat’s immune system and sense of smell. Cutting and pasting the equivalent genes from another rat species is not really an option: smell plays a crucial role in foraging, avoiding predators, and mating, so a modified animal might look and behave quite differently from the original extinct species.
Gilbert describes reassembling the genome of an extinct species as trying to piece together every page of a shredded book. If you have an intact copy of the original book, you should be able to reconstruct the original material perfectly. It might take you a while, but you’ll get there. But herein lies the problem: there are no more original copies for the genome of an extinct species.
Your next best bet is to compare your shredded pages to a similar book, but that means you’ll never be able to recover the missing pages that don’t match, even if you manage to deduce some of the content. The Christmas Island rat diverged from its Norwegian brown rat cousin about 2.6 million years ago. Due to this evolutionary divergence, most of the genetic information sequenced from old Christmas Island rat biological samples is simply lost. And this divergence is pretty similar to that between the woolly mammoth and the Asian elephant.
Some of this missing data could be recovered using current solutions or some that will be developed in the future. But the sad reality may be that some data will never be recovered, which makes the perfect resurrection of an extinct species impossible.
That being said, it’s not impossible to breed an animal that looks and behaves very much like an original mammoth or Tasmanian tiger. It’s just that these would be hybrids, combining features from both extinct and living species.
Ultimately, these findings don’t change much about the scientific projects currently underway to resurrect extinct species. However, the study is still valuable because it helps clarify the limits of what’s actually possible. With a tighter straitjacket, scientists’ imagination may be diverted to more useful pathways of research. For instance, some of these efforts might be better spent saving vulnerable species from extinction. Just saying.
Congress’s new proposed spending bill would finally provide NASA with the cash it needs for projects that have been struggling these past few years. Among them are efforts to develop new commercial space stations in low Earth orbit and the development of a new crewed lunar lander.
If signed, the bill would assign NASA $24.041 billion for the fiscal year 2022. While this would be $800 million less than what President Joe Biden’s budget request called for in May, it would still mean a slight increase in the agency’s cash compared to fiscal year 2021, when it received $23.27 billion.
Although Congress does not seem to see eye to eye with the presidential administration on NASA’s budget requirements, there are several projects that lawmakers in the House and Senate are finally agreeing to fund. NASA’s human landing system will receive the full $1.195 billion requested for it. This lander is being developed for the Artemis program, which aims to send the first woman and first person of color to the Moon. For 2021, appropriators only provided $850 million of the requested $3.4 billion for the lander.
Due to constraints in the budget, several changes were made to the original plans for Artemis. These involved choosing at least two commercial companies to design and build landers for the mission — intended to spark competition and provide redundancy for the program. Lacking in cash, however, NASA only selected SpaceX to develop its Starship vehicle into a lander.
Should NASA receive the increased budget for this mission, Congress calls on it to “deliver a publicly available plan explaining how it will ensure safety, redundancy, sustainability, and competition” within 30 days of the bill’s signing. Congress is also asking for a detailed list of the resources NASA would need through 2026 to ensure the success of the program.
The windfall, should the bill be signed into force, will also inject much-needed cash into NASA’s efforts to develop a successor to the International Space Station. The ISS has funds to operate through 2024, although the Biden administration has announced that it is looking to extend its life through 2030. By that time, NASA hopes, the private space industry will have developed its own commercial space station or stations to take over in low Earth orbit. NASA requested $150 million for this project in each of fiscal years 2020 and 2021 but was only granted $15 million and $17 million, respectively. This year, however, Congress agreed to appropriate the full $101.1 million requested.
Funding for other NASA programs has remained relatively constant under the new proposed bill. The Orion crew capsule and the Space Launch System (SLS) are receiving the full requested amounts, with the SLS even getting a little extra. Science programs are being granted $7.614 billion, less than what NASA requested, but more than last year. The agency’s request for $653 million for its Mars sample return mission has been granted in full.
So, although the budget doesn’t look the way NASA would have wanted, it’s still a pretty good deal. There is, however, a catch. Projects can only receive 40% of their allotted amounts until NASA’s administrator submits a multi-year plan for Artemis and upcoming NASA Moon missions. This will need to include dates for major milestones, any proposed partnerships, and a whole host of other data alongside estimates of what funds are needed to touch upon these milestones.
The Kingdom of Thailand wants to seal its commitment to green energy with its new hybrid solar-hydropower generation facility that covers a water reservoir in the northeast of the country.
The installation covers an immense 720,000 square meters of the reservoir’s surface and produces clean electricity around the clock: solar power during the day, hydropower at night. Christened the Sirindhorn dam farm, this is the “world’s largest floating hydro-solar farm”, and the first of 15 such farms planned to be built by Thailand by 2037. They are a linchpin in the kingdom’s pledge for carbon neutrality by 2050.
Floating towards the future
“We can claim that through 45 megawatts combined with hydropower and energy management system for solar and hydro powers, this is the first and biggest project in the world,” Electricity Generating Authority of Thailand (EGAT) deputy governor Prasertsak Cherngchawano told AFP.
At the 2021 United Nations Climate Change Conference (COP26) last year, Thailand’s Prime Minister Prayut Chan-O-Cha officially announced his country’s goal of reaching carbon neutrality by 2050, and a net-zero greenhouse emissions target by 2065. Thailand also aims to produce 30% of its energy from renewables by 2037 as an interim goal.
The Sirindhorn dam farm project, which went into operation last October, is the cornerstone of that pledge. The farm contains over 144,000 solar cells and can output 45 MW of electricity. This is enough to reduce Thailand’s carbon dioxide emissions by an estimated 47,000 tons per year.
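Dividing the rated output by the panel count gives a rough per-panel figure, consistent with typical commercial solar modules; a back-of-the-envelope sketch (function name illustrative, not from the source):

```python
def watts_per_panel(total_megawatts, panel_count):
    """Average rated output per solar cell/panel, in watts."""
    return total_megawatts * 1e6 / panel_count

print(watts_per_panel(45, 144_000))  # ≈ 312.5 W per panel
```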
Thailand’s energy grids continue to rely heavily on fossil fuel; some 55% of the country’s power generation as of October last year was derived from such fuels, while only 11% came from renewable sources such as solar or hydropower, according to Thailand’s Energy Policy and Planning Office, a department of the ministry of energy. Still, projects such as Sirindhorn show that progress is being made.
The $35 million project took two years to build, with repeated delays caused by the pandemic, which saw technicians falling sick and deliveries of solar panels being repeatedly delayed. EGAT plans to install floating hydro-solar farms in 15 more dams across Thailand by 2037, which would total an estimated 2,725 MW of power.
Currently, power generated at Sirindhorn is being distributed mainly to domestic and commercial users in the lower northeastern region of the country.
Thailand is also betting that its floating solar farms will draw tourists. Sirindhorn comes with a 415-meter (1,360-foot) long “Nature Walkway” offering a breathtaking view of the reservoir and the solar cells floating across its surface. Locals are already flocking to see the solar farm, and time will tell if international travelers will be drawn here as well.
Local communities report that with the solar floats installed, catches of fish in the reservoir have decreased — but they seem to be positive about it. State authorities say that the project will not affect agriculture, fishing, or other community activities in the long term, and are committed to taking any steps necessary towards this goal.
“The number of fish caught has reduced, so we have less income,” village headman Thongphon Mobmai, 64, told AFP. “But locals have to accept this mandate for community development envisioned by the state.”
“We’ve used only 0.2 to 0.3 percent of the dam’s surface area. People can make use of lands for agriculture, residency, and other purposes,” said EGAT’s Prasertsak.
Nearly 1 in 10 Americans over the age of 65 has dementia, and as the U.S. grapples with an aging population, the proportion of elderly people with Alzheimer’s and other neurodegenerative diseases is bound to increase. But in the Amazon basin, where some indigenous people still maintain a subsistence lifestyle as they have for hundreds of years, isolated from industrialized society, the rate of dementia hovers at around just 1%. These findings, reported in a new study from the University of Southern California, suggest that the Western lifestyle may be seriously putting people at risk of dementia in old age.
“Something about the pre-industrial subsistence lifestyle appears to protect older Tsimane and Moseten from dementia,” said Margaret Gatz, the lead study author and professor of psychology, gerontology and preventive medicine at the University of Southern California.
Gatz and colleagues traveled to the Bolivian Amazon jungle, where they closely studied the elderly of the Tsimane’ and Mosetén tribes — two indigenous peoples that have remained largely isolated from urban life elsewhere in the country.
The Tsimane’ number about 16,000 people living in mostly riverbank villages scattered across about 3,000 square miles of the Amazon jungle. They are forager-farmers who fish, hunt, and cut down trees with machetes, which keeps everyone very physically active throughout their lifetimes.
The neighboring Mosetén, who number around 3,000 and have close cultural ties with the Tsimane’, also reside in rural villages and rely on subsistence agriculture. However, they live closer to towns and have schools, health posts, roads, and electricity. Within the last decade, the Mosetén have also received cell phone service and running water.
Researchers employed computer tomography (CT) brain scans, cognitive and neurological tests, and questionnaires to assess the mental health among the Tsimane’ and Mosetén aged 60 and over.
According to the results, there were just 5 cases of dementia among 435 Tsimane’ and one case among 169 Mosetén, a much lower incidence than in Western countries. Previous studies of indigenous populations in Australia, North America, Guam, and Brazil found dementia prevalence ranging from 0.5% to 20%. The authors note that the apparently higher rate of dementia among older adults from indigenous tribes elsewhere in the world could be due to their greater contact with industrialized neighbors, and the subsequent adoption of more sedentary lifestyles.
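Those case counts are where the roughly 1% prevalence figure comes from; a trivial sketch of the arithmetic (function name illustrative):

```python
def prevalence_pct(cases, cohort_size):
    """Prevalence as a percentage of the studied cohort."""
    return 100 * cases / cohort_size

print(prevalence_pct(5, 435))  # Tsimane': ~1.1%
print(prevalence_pct(1, 169))  # Mosetén: ~0.6%
```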
In the same over-60 groups, the researchers also diagnosed about 8% of elderly Tsimane’ and 10% of Mosetén with mild cognitive impairment (MCI) — the stage between the expected cognitive decline of normal aging and the more serious decline of dementia. This condition is characterized by memory loss and a decline in cognitive abilities, such as language and spatial reasoning. The MCI rates were comparable to those encountered in high-income countries.
In high-income countries with high rates of dementia among older adults, the population generally does not engage in the recommended amount of physical activity and has a diet rich in sugars and fats. As a result, older adults are more susceptible to heart disease and brain aging. In contrast, the Tsimane’ people have unusually healthy hearts for their age. That’s not surprising considering they also have the lowest prevalence of coronary atherosclerosis of any population in the world.
Alzheimer’s has been previously associated with hypertension, diabetes, cardiovascular diseases, physical inactivity, and even air pollution. It’s no coincidence that these chronic diseases and health problems are staples of modern Western lifestyles.
In 2021, the same team from the University of Southern California found that the Tsimane indigenous people of the Bolivian Amazon experience less brain atrophy than their American and European peers. Their decrease in brain volume happened at a rate that was 70% lower than in Western populations.
“We’re in a race for solutions to the growing prevalence of Alzheimer’s disease and related dementias,” said Hillard Kaplan, a study co-author and professor of health economics and anthropology at Chapman University who has studied the Tsimane for two decades. “Looking at these diverse populations augments and accelerates our understanding of these diseases and generates new insights.”
If the Tsimane’ and Mosetén offer any indication, a pre-industrial lifestyle can offer significant protection against dementia. But that doesn’t mean we can all revert to foraging in the woods and living under the stars. If anyone is romanticizing life in the Amazon jungle, bear in mind that Tsimane’ families have an average of nine children and a life expectancy of just over 50 years, compared to the world average of 71.5 years. So while it may be true that indigenous Amazonians rarely suffer from dementia in old age, what’s certain is that far fewer of them make it that far.
The findings were published in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.
We often think of climate science as something that started only recently. The truth is that, like almost all fields of science, it began a long time ago. Advancing science is often a slow and tedious process, and climate science is no exception. From the discovery of carbon dioxide to today’s most sophisticated climate models, it took a long time to get where we are.
Unfortunately, many scientists who played an important role in this climate journey are not given the credit they deserve. Take, for instance, Eunice Newton Foote.
Foote was born in 1819 in Connecticut, USA. She spent her childhood in New York and later attended classes at the Troy Female Seminary, a higher-education institution just for women. She married Elisha Foote in 1841, and the couple was active in the suffragist and abolitionist movements. They participated in the “Women’s Rights Convention” and signed the “Declaration of Sentiments” in 1848.
Eunice was also an inventor and an “amateur” scientist, a brave endeavor in a time when women were scarcely allowed to participate in science. However, one of her discoveries turned out to be instrumental in the field of climate science.
Why do we need jackets in the mountains?
In 1856, Eunice conducted an experiment to explain why air at low altitudes is warmer than air in the mountains. Back then, scientists were not sure why, so she decided to test it herself. She published her results in the American Journal of Science and Arts.
Foote placed two cylinders, each containing a thermometer, first under the Sun and later in the shade. She made sure both cylinders started at the same temperature, and after three minutes she measured the temperature in each situation.
She noticed that rarefied air didn’t heat up as much as dense air, which explains the difference between mountaintops and valleys. Later, she compared the influence of moisture using the same apparatus, adding calcium chloride to one cylinder to make sure it stayed dry. The result: the cylinder with moist air grew much warmer than the dry one. This was a first step toward explaining the processes of the atmosphere: water vapor is one of the greenhouse gases that sustain life on Earth.
But that wasn’t all. Foote went further and studied the effect of carbon dioxide, which heated the air even more strongly. Eunice didn’t frame it this way at the time, but in her measurements the warming effect of water vapor raised temperatures by about 6%, while the carbon dioxide cylinder ran about 9% warmer.
Surprisingly, Eunice’s concluding paragraphs came with a simple deduction on how the atmosphere would respond to an increase in CO2. She predicted that adding more gas would lead to an increase in the temperature — which is pretty much what we know to be true now. In addition, she talked about the effect of carbon dioxide in the geological past, as scientists were already uncovering evidence that Earth’s climate was different back then.
We now know that during different geologic periods of the Earth, the climate was significantly warmer or colder. In fact, between the Permian and Triassic periods, the CO2 concentration was nearly 5 times higher than today’s, causing a 6ºC (10.8ºF) temperature increase.
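A small point worth making explicit about the 6ºC (10.8ºF) figure: a temperature *difference* converts between Celsius and Fahrenheit with the 9/5 factor alone, without the +32 offset used for absolute temperatures. A one-line sketch:

```python
def delta_c_to_delta_f(delta_c):
    # A temperature difference scales by 9/5 only; the +32 offset
    # applies to absolute temperatures, not to differences.
    return delta_c * 9 / 5

print(delta_c_to_delta_f(6))  # 10.8 °F
```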
Eunice Foote’s discovery made it into Scientific American in 1856 and was presented by Joseph Henry at the Eighth Annual Meeting of the American Association for the Advancement of Science (AAAS). Henry also reported her findings in the New-York Daily Tribune, but stated they were not significant. Her study was mentioned in two European reports, and her name was then largely ignored for over 100 years, until she finally received credit for her observations in 2011.
Credit for the discovery used to go to John Tyndall, an Irish physicist who published his findings in 1861, explaining how strongly radiation (heat) was absorbed and identifying which kind of radiation was responsible: infrared. Tyndall was an “official” scientist: he had a doctorate and recognition from previous work, everything necessary to be respected.
But a few things draw the eye regarding Tyndall and Foote.
Dr Tyndall was part of the editorial team of a magazine that reprinted Foote’s work. It is possible he never actually read the paper, or that he ignored it because its author was American (a common practice among European scientists back then) or because of her gender. But it’s also possible that he drew some inspiration from it without citing it.
It should be said that Tyndall’s work was more advanced and precise. He had better resources and was closer to the newest discoveries in physics that could support his hypothesis. But the question of why Foote’s work took so long to be credited is hard to answer without invoking misogyny.
Today, whenever a finding is published, even one made with a low-budget apparatus, the scientist responsible for the next advance on the topic needs to cite their colleague. A good example involves another important discovery by another female scientist. Edwin Hubble used Henrietta Swan Leavitt’s discovery of the relationship between the brightness and period of Cepheid variables; her work underpinned the method for measuring galaxies’ velocities and distances that later proved the universe is expanding. Hubble said she deserved to share a Nobel Prize; unfortunately, she had already died by the time such recognition was discussed.
It’s unfortunate that researchers like Foote don’t receive the recognition they deserve, but it’s encouraging that the scientific community is starting to finally recognize some of these pioneers. There’s plenty of work still left to be done.
The phrase ‘prevention is better than the cure’ is a fundamental principle of modern health, and oral health is no different. One of the best ways to prevent cavities is by brushing and flossing correctly. But most people already do this, and many still end up with caries eventually. Taking prevention to the next level, scientists at the University of Washington have now developed an optical method that can identify the most at-risk teeth by mapping high acidity in the dental plaque that covers them.
Dental plaque is produced by bacteria that live in our mouths as they consume sugars, starches, and other bits of food that haven’t been properly cleaned from the teeth. If plaque stays on the teeth for more than a few days, it hardens into a substance called tartar. In time, the microorganisms in the plaque release acids that wear down the tooth enamel, then the next layer, called dentin, before reaching the pulp. By the time acid attacks the pulp, you’ve officially got a new cavity.
But what if we could monitor this acidic activity and stop it before it crosses a point of no return that triggers the cavity formation? That’s exactly what researchers at the University of Washington set out to do. They’ve devised a system, which they call O-pH, that measures the pH levels, or acidity, of the plaque covering each tooth under inspection.
In order to map the acidity of the plaque, a person’s teeth are first covered in a non-toxic, safe chemical dye that reacts with light to produce fluorescent reactions. An optical probe then detects these fluorescent reactions, whose signals can reveal the exact acidity of the underlying dental plaque.
The proof of concept was demonstrated on a small sample of 30 patients, aged 10 to 18. Children and teenagers were selected because their enamel is much thinner than that of adults, which makes detecting any sign of erosion — and consequently a potential cavity — early on very important. The tooth acidity was read before and after sugar rinses, as well as pre- and post-professional dental cleaning.
In the future, this acidity test could be standard practice in dental practices. Eric Seibel, senior author and research professor of mechanical engineering at the University of Washington, says that when a patient comes in for routine teeth cleaning, “a dentist would rinse them with the tasteless fluorescent dye solution and then get their teeth optically scanned to look for high acid production areas where the enamel is getting demineralized.” The dentist and patient can then form a treatment plan to reduce the acidity and avoid costly cavities.
“We do need more results to show how effective it is for diagnosis, but it can definitely help us understand some of your oral health quantitatively,” said Manuja Sharma, lead author and a doctoral student in the UW Department of Electrical and Computer Engineering. “It can also help educate patients about the effects of sugar on the chemistry of plaque. We can show them, live, what happens, and that is an experience they’ll remember and say, OK, fine, I need to cut down on sugar!”
The O-pH system was described in a paper available in the IEEE Xplore digital library.
Einstein’s theory of relativity was revolutionary on many levels. One of its many groundbreaking consequences is that mass and energy are basically interchangeable, as captured by E = mc². The immediate implication is that you can make mass, tangible matter, out of energy, which explains how the universe as we know it came to be during the Big Bang, when an enormous amount of energy turned into the first particles. But there may be much more to it.
In 2019, physicist Melvin Vopson of the University of Portsmouth proposed that information is equivalent to mass and energy, existing as a separate state of matter, a conjecture known as the mass-energy-information equivalence principle. This would mean that every bit of information has a finite and quantifiable mass. For instance, a hard drive full of information is heavier than the same drive empty.
That’s a bold claim, to say the least. Now, in a new study, Vopson is ready to put his money where his mouth is, proposing an experiment that can verify this conjecture.
“The main idea of the study is that information erasure can be achieved when matter particles annihilate their corresponding antimatter particles. This process essentially erases a matter particle from existence. The annihilation process converts all the [remaining] mass of the annihilating particles into energy, typically gamma photons. However, if the particles do contain information, then this also needs to be conserved upon annihilation, producing some lower-energy photons. In the present study, I predicted the exact energy of the infrared photons resulting from this information erasure, and I gave a detailed protocol for the experimental testing involving the electron-positron annihilation process,” Vopson told ZME Science.
Information: just another form of matter and energy?
The mass-energy-information equivalence (M/E/I) principle combines Rolf Landauer’s application of the laws of thermodynamics to information, which implies that erasing information dissipates a minimum amount of energy, with Claude Shannon’s information theory, which defined the digital bit. This M/E/I principle, along with its main prediction that information has mass, is what Vopson calls the 1st information conjecture.
However, testing these conjectures is not trivial. For instance, a 1-terabyte hard drive filled with digital information would gain a mass of only about 2.5 × 10⁻²⁵ kg compared to the same drive when erased. Measuring such a tiny change in mass is impossible even with the most sensitive scale in the world.
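That figure follows from applying Landauer’s bound per bit, m = k_B·T·ln 2 / c², assuming the drive is at roughly room temperature (~300 K); a sketch of the arithmetic (function name illustrative):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def info_mass_kg(n_bits, temp_k=300):
    """Mass attributed to stored bits under the M/E/I conjecture:
    each bit carries m = k_B * T * ln(2) / c**2."""
    return n_bits * K_B * temp_k * math.log(2) / C**2

one_terabyte_bits = 8e12  # 1 TB = 10^12 bytes = 8 x 10^12 bits
print(info_mass_kg(one_terabyte_bits))  # ~2.5e-25 kg
```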
Instead, Vopson has proposed an experiment that tests both conjectures using particle-antiparticle annihilation. If every particle contains information, and that information has its own mass, then the information has to go somewhere when the particle is annihilated. In this case, it should be converted into low-energy infrared photons.
According to Vopson’s predictions, an electron-positron collision should produce two high-energy gamma rays, as well as two infrared photons with wavelengths around 50 micrometers. The physicist adds that altering the samples’ temperature wouldn’t influence the energy of the gamma rays, but would shift the wavelength of the infrared photons. This is important because it provides a control mechanism for the experiment that can rule out other physical processes.
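For context on how small that energy is, the Planck relation E = hc/λ gives the energy carried by a 50-micrometer photon; a quick sketch (function name illustrative):

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Photon energy from the Planck relation E = h*c / wavelength."""
    return H * C / wavelength_m / EV

print(photon_energy_ev(50e-6))  # ~0.025 eV, far below the 511 keV gamma photons
```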
Validating the mass-energy-information equivalence principle could have far-reaching implications for physics as we know it. In a previous interview with ZME Science, Vopson said that if his conjectures are correct, the universe would contain a stupendous amount of digital information. He speculated that — considering all these things — the elusive dark matter could be just information. Only 5% of the universe is made of baryonic matter (i.e. things we can see or measure), while the remaining 95% of the mass-energy content is made of dark matter and dark energy — placeholder names physicists use for things they cannot yet observe or explain.
Then there’s the black hole information loss paradox. According to Einstein’s general theory of relativity, the gravity of a black hole is so overwhelming that nothing can escape its clutches within its event horizon — not even light. But in the 1970s, Stephen Hawking and collaborators sought to refine our understanding of black holes by using quantum theory; and one of the central tenets of quantum mechanics is that information can never be lost. One of Hawking’s major predictions is that black holes emit radiation, now called Hawking radiation. But with this prediction, the late British physicist had pitted the fundamental laws of physics — general relativity and quantum mechanics — against one another, hence the information loss paradox. The mass-energy-information equivalence principle may lend a helping hand in reconciling this paradox.
“It appears to be exactly the same thing that I am proposing in this latest article, but at very different scales. Looking closely into this problem will be the scope of a different study and for now, it is just an interesting idea that must be followed,” Vopson tells me.
Finally, the mass-energy-information equivalence could help settle a whimsical debate that has been gaining steam lately: the notion that we may all be living inside a computer simulation. The debate can be traced to a seminal paper published in 2003 by Nick Bostrom of the University of Oxford, which argued that a technologically adept civilization with immense computing power could simulate new realities with conscious beings in them. Bostrom’s argument implies that, if such civilizations arise and run many simulations, we are almost certainly living in one.
While it’s easy to dismiss the computer simulation theory, once you think about it, you can’t disprove it either. But Vopson thinks the two conjectures could offer a way out of this dilemma.
“It is like saying, how a character in the most advanced computer game ever created, becoming self-aware, could prove that it is inside a computer game? What experiments could this entity design from within the game to prove its reality is indeed computational? Similarly, if our world is indeed computational / simulation, then how could someone prove this? What experiments should one perform to demonstrate this?”
“From the information storage angle – a simulation requires information to run: the code itself, all the variables, etc… are bits of information stored somewhere.”
“My latest article offers a way of testing our reality from within the simulation, so a positive result would strongly suggest that the simulation hypothesis is probably real,” the physicist said.
A joint research venture between the University of Birmingham and private firms NitroPep Ltd and Pullman AC has produced air filters that are highly effective at killing bacteria, fungi, and viruses, including SARS-CoV-2, the infamous coronavirus.
The secret of these filters’ effectiveness is a chemical called chlorhexidine digluconate (CHDG). This is a potent biocide that can kill pathogens within seconds of coming into contact with them. Air filters coated in this substance can prove to be a powerful tool against airborne pathogens around the world, according to the researchers that designed them.
Removing the gunk
“The COVID-19 pandemic has brought to the forefront of public consciousness the real need for new ways to control the spread of airborne respiratory pathogens. In crowded spaces, from offices to large indoor venues, shopping malls, and on public transport, there is an incredibly high potential for transmission of COVID-19 and other viruses such as flu,” says Dr. Felicity de Cogan, Royal Academy of Engineering Industry Fellow at the University of Birmingham, and corresponding author of the paper.
“Most ventilation systems recycle air through the system, and the filters currently being used in these systems are not normally designed to prevent the spread of pathogens, only to block air particles. This means filters can actually act as a potential reservoir for harmful pathogens. We are excited that we have been able to develop a filter treatment which can kill bacteria, fungi and viruses—including SARS-CoV-2—in seconds. This addresses a global un-met need and could help clean the air in enclosed spaces, helping to prevent the spread of respiratory disease.”
The filters were tested in both laboratory and real-life conditions to determine how effective they were at removing air-borne pathogens, and the results are stellar.
In the lab, the filters were covered with viral particles of the Wuhan strain of SARS-CoV-2, alongside control filters. They were then checked periodically over a period of more than one hour to see how these pathogens fared. While much of the initial quantity of viral particles remained on the surface of the control filters for the experiment’s length, all SARS-CoV-2 particles were destroyed within 60 seconds on the treated filters.
Experiments involving bacteria and fungi that commonly cause illness in humans — such as E. coli, S. aureus, and C. albicans — yielded similar results. This showcases the wide applicability of the filters.
To determine how well these filters would perform in real-life situations, treated filters were installed in the heating, ventilation, and air conditioning systems of train carriages in the UK, alongside control filters in matched pairs on the same train line. These were left to operate for three months before being removed and sent to the lab for analysis — which involved the researchers counting any bacteria colonies that survived on the filters.
No pathogens were found on the treated filters, the team explains. Furthermore, this step showed that the treatment was durable enough to withstand three months of real-world use, with the filters maintaining their structure, filtration function, and anti-pathogen abilities.
“The technology we have developed can be applied to existing filters and can be used in existing heating, ventilation and air conditioning systems with no need for the cost or hassle of any modifications,” Dr. de Cogan explains. “This level of compatibility with existing systems removes many of the barriers encountered when new technologies are brought onto the market.”
NitroPep Ltd is now building on these findings in order to deliver a final marketable version of the coating.
The paper “Efficacy of antimicrobial and anti-viral coated air filters to prevent the spread of airborne pathogens” has been published in the journal Scientific Reports.
Few places are as exposed as the European Union (EU) to Russia’s oil and gas in the wake of its invasion of Ukraine. The EU gets about 40% of its gas from Russia, at a cost of over $110 million a day. Moving with surprising speed, the EU has now introduced a strategy to cut its reliance on this fuel source by two-thirds within a year — and this could mean a lot both economically and environmentally.
The REPowerEU plan aims to make Europe independent of Russian fossil fuels by 2030, focusing initial efforts on gas. The roadmap proposes finding alternative supplies of gas in the next few months, as well as increasing energy efficiency and doubling down on renewable energy sources in the medium to longer term.
“We simply cannot rely on a supplier who explicitly threatens us. We need to act now to mitigate the impact of rising energy prices, diversify our gas supply for next winter and accelerate the clean energy transition,” Commission President Ursula von der Leyen said in a statement. “We’ll work swiftly to implement these ideas.”
The road ahead
The new proposal will make it a legal requirement for EU countries to ensure they have a minimum level of gas storage. The objective is to have gas stocks at 90% capacity by autumn, up from about 30% now. Discussions are already taking place with existing gas suppliers such as Norway and Algeria to increase flows and compensate for the crackdown on Russian gas. Environmentally, this won’t make a substantial difference, as only the source of the gas will change.
The Commission pictures ending reliance on all fossil fuels from Russia “well before” 2030. In the short term, gas would be imported from the US and Africa and some countries might have to increase the use of coal in the months ahead. While this will mean higher carbon emissions, the longer-term goal is a shift to renewable energy — which will make a difference environmentally.
Another area of focus for the EU in the coming months will be higher imports of Liquefied Natural Gas (LNG) from suppliers including the US, Qatar, and Australia. Germany has already announced plans for two new LNG terminals to increase supplies, which has raised concerns among experts over a longer dependency on fossil fuels.
Executive Vice-President for the European Green Deal Frans Timmermans called for a “dash into renewable energy at a lightning speed,” as renewables are cheaper, cleaner, and a potentially endless source of energy. The Russian invasion shows the urgency of accelerating Europe’s transition to cleaner energy sources, Timmermans said.
As well as finding new gas supplies, the Commission argued the reliance on Russia will be eased because of new renewable energy projects that will soon come online. Countries should consider using the revenues they raised from the Emissions Trading Scheme, the world’s largest carbon market, to pay for further green energy sources, the Commission said. Solar energy will be a particular point of focus, with a 4-stage plan aimed at delivering 1TW by 2030:
Multiply rooftop PV development through mandatory solar on new buildings, bans on fossil-fuel boilers, and significant investment.
Facilitate utility-scale development by freezing grid connection fees, and mandating member states to identify suitable solar PV sites, aiming to fast-track developments.
Pave the way for smart solar and hybrid projects using dedicated funding.
Accelerate the deployment of EU solar PV manufacturing capacity with €1bn.
The proposal says renewable energy projects have to be fast-tracked, with a large potential in domestic rooftop solar power. Up to a quarter of the EU’s electricity consumption could be obtained from panels on buildings and farms, the Commission said – also calling for a large increase in the use of biogas, made from agricultural and food waste.
EU leaders will meet in Versailles, France, later this week to discuss the plan, which won’t be cheap and might lead to some dissenting voices. Meanwhile, campaigners are asking governments to ensure the poorest are protected. Europe is already facing an energy poverty crisis and no one should have to choose between heating and eating, the NGO Global Witness said in a statement.
Bees and other pollinators play a key role in ensuring a healthy ecosystem and are also critical to our food security. However, they are in decline in many parts of the world, hit hard by habitat loss and the widespread use of toxic pesticides.
In recent years, many of these pesticides have been banned due to pressure from researchers and environmental groups. But they can also come back.
A nasty comeback
Thiamethoxam is a pesticide belonging to the group known as neonicotinoids, which are widely used around the world. However, in 2018, the most toxic ones, including thiamethoxam, were banned from outdoor use in the EU and the UK amid a growing body of evidence of the harm they cause to bees and other pollinators.
When poisoned by these chemicals, bees experience paralysis of their flight muscles and a failure of the homing behavior of foragers — which means less food for the colony. A single exposure is already enough to cause significant damage, and thiamethoxam is increasingly regarded as a problematic pesticide that is best banned. Neonicotinoids in general can also cause environmental contamination, leaching into soil and water and affecting the entire ecosystem.
However, these pesticides continue to be used even where they are banned, as countries can grant an “emergency derogation” when there’s the danger of a virus that can’t be contained by any other “reasonable” means. The UK is the most recent example, allowing the use of thiamethoxam for sugar beet against the advice of its own government experts.
It’s not the first time something like this has happened. In January 2021, the UK also planned a special derogation for the pesticide to save sugar beet plants from the beet yellow virus. However, there were lower levels of disease than expected and it was announced that the conditions for emergency use had not been met. This time, things look to be different.
Environmental and health organizations grouped under The Pesticide Collaboration have launched a legal challenge. The UK government decision, even temporary, isn’t consistent with halting wildlife decline, they argue. Farmers should be supported to reduce the reliance on harmful chemicals, finding alternative solutions, they added.
The sugar beet crisis
Over half the sugar consumed in the UK comes from sugar beet grown in England. A large amount of land is put aside every year to satisfy the country’s sugar demand, but climate change is now causing problems for the crop. This has resulted in pressure from farming lobby groups for the government to allow the use of harmful pesticides.
Unfortunately, this winter has been much warmer than normal, and scientific modeling predicts a 68% level of virus incidence, which means the threshold for the use of the pesticide has been met, a government statement reads.
“The decision to approve an emergency authorization was not taken lightly and based on robust scientific assessment. We evaluate the risks very carefully and only grant temporary emergency authorizations for restricted pesticides in special circumstances when strict requirements are met and there are no alternatives,” a UK government spokesperson said in a statement.
There are about 3,000 farmers who grow sugar beet in the UK, according to the National Farmers Union (NFU). Farmers will be banned from growing flowering plants for 32 months after the sugar beet crop to minimize the risk to bees. The NFU said in a statement that growers are relieved by the decision amid severe pest pressure across the country.
Campaigners argue only 5% of the pesticide actually reaches the crop, with the rest accumulating in the soil and causing a higher level of contamination than in pollen and nectar. This can then be a route of exposure for many organisms, including bee species that nest underground. It’s also absorbed by the roots of many plants visited by bees, such as wildflowers.
“Allowing a bee-harming pesticide back into our fields is totally at odds with ministers’ so-called green ambitions, not to mention directly against the recommendation of their own scientists. This decision comes just two months after the government enshrined in law a target to halt species loss by 2030,” Sandra Bell, campaigner at Friends of the Earth said in a statement.
Situations like this are more likely to emerge as environmental regulations become tighter and climate change also puts additional pressure on agriculture. It remains to be seen what other countries will do in the UK’s position.
Washington, DC, has a rat problem. According to the Centers for Disease Control and Prevention (CDC), this has led to the emergence of the first two official cases of hantavirus in humans in the city.
Wildlife has its own share of viruses and pathogens to deal with, as do people. Sometimes, however, when these two groups live in close proximity, pathogens can evolve to cross from one to the other. When this takes place from wildlife or livestock to humans, this is known as zoonosis. The coronavirus pandemic started as a zoonosis.
One genus of viruses that can comfortably infect both rodents and humans is known as orthohantavirus, or simply hantaviruses. These are widespread through rodent populations such as city rats, where they cause asymptomatic infections. However, people can become infected with these viruses as well, most commonly through exposure to rat urine or feces, although saliva or bites can also transmit the virus.
Washington DC’s rat problem has led to the emergence of two known cases of hantavirus infection, the CDC reported Thursday. Transmission from one infected person to another is almost unheard-of with hantaviruses, so concerns about brewing epidemics are far from the CDC’s mind. The infections were recorded in 2018 and have been successfully treated.
Still, the situation poses a risk for the health of people in Washington DC, who should take steps to protect themselves from the rodents.
“Although extremely rare, the two SEOV cases presented in this report highlight the importance of physicians including hantavirus infection in their differential diagnoses in patients with compatible symptoms and history of animal exposure or travel and underscore the importance of reporting notifiable infectious disease cases to health departments for investigation and response,” the CDC’s report explains. “These cases also serve as a reminder to the public to minimize risk for infection by following recommended hygiene practices.”
Hantavirus infection in people can lead to a host of respiratory and hemorrhagic diseases which can easily become fatal. Fortunately for the cases recorded in DC, the strain identified in the two infected individuals is a milder “Old World” strain called the Seoul virus. Old World hantaviruses cause a disease called hemorrhagic fever with renal syndrome (HFRS). In contrast, “New World” hantaviruses, which are present in the Americas, cause hantavirus pulmonary syndrome (HPS), a much more severe and deadlier respiratory infection.
HFRS starts out as a generic infection with fever, chills, nausea, and headache, but can then progress to low blood pressure, acute shock, vascular leakage, and acute kidney failure, the CDC notes. The severity of HFRS varies by the strain of hantavirus that causes it, with fatality rates reaching up to 15%. In the case of the Seoul virus, fatality rates are around 1%. Both individuals reported on by the CDC made a full recovery.
HPS also begins as a generic infection with fever, chills, and aches, but quickly progresses to an acute, life-threatening phase after about a week. The patient’s lungs and heart are affected; the lungs fill with fluid, and patients require hospitalization and ventilation within 24 hours. HPS is fatal in about 38% of cases, according to the CDC. The deadliest such virus, the Sin Nombre virus, spread by the deer mouse, has a fatality rate of about 50%.
HPS and the Sin Nombre virus first came into the crosshairs of US health officials following a 1993 outbreak of deadly respiratory disease in the Four Corners region. In total, 48 cases were identified that year, 27 of which were fatal. The CDC finally tracked the virus down to rodents in the area, and it gained the moniker of Sin Nombre virus (the virus with no name) during this process.
The Seoul virus has a much lower prevalence in the US, and spreads from the common brown rat, which travelled to all corners of the world on European ships (hence, the virus is known as an “Old World” virus). The virus is present worldwide but was first described in Korea, near Seoul. It is considered a rare pathogen among humans.
This is what makes the two cases reported on by the CDC notable. Patient A was a healthy male, a 30-year-old maintenance worker who had “frequent rodent sightings at his workplace”. He contracted the disease in May 2018 and made a full recovery after receiving treatment. Patient B was an unrelated case: a 37-year-old man with chronic kidney disease who worked as a dishwasher and plumber’s assistant. He contracted the disease in November 2018; it is unclear from what source. The CDC notes that he did not own any pets, had not recently travelled outside of the US, and was unaware of exposure to rodents at any point in his daily life. He also made a full recovery after receiving treatment.
The CDC believes these two cases were caused by the city-wide rat problem in Washington DC, which they explain has been worsening for years now.
“Rodent overpopulation in DC is well documented by increased complaints via the Citywide Call Center to the Rodent Control Program, and the DC Department of Health has amplified efforts to address this public health threat,” the CDC explains.
The cases serve as a reminder of the dangers of rat infestation in our cities, and should motivate the public to follow recommended hygiene practices to insulate themselves from the risk. Meanwhile, doctors should keep in mind that the virus is active in the area and look out for signs of hantavirus infection in their patients.
After they revisited photos of ancient human skeletons first exhumed in Portugal’s Sado Valley in the 1960s, archaeologists now believe that the 8,000-year-old remains went through a mummification practice before their burial. This would make the remains the oldest evidence for Mesolithic mummification in Europe. In fact, it could very well be the earliest evidence of mummification in the world.
The oldest evidence of deliberate mummification in Egypt, the most famous region in the world for mummies, is about 5,500 years old. However, researchers believe mummification may have been much more common during prehistoric times and could in fact be much older — it’s just that evidence is hard to come by due to the fragile nature of mummified tissue.
But using a clever technique, it may be possible to tell whether decomposed remains may have originally undergone mummification, significantly extending the timeline of such burial practices.
Excavations in the Sado Valley in southern Portugal, at the sites of Arapouco and Poças de S. Bento, between 1958 and 1964 recovered more than 100 skeletons dating between 8,000 and 7,000 years ago. Unfortunately, much of the original documentation for these extraordinary finds was lost, including photographs, site plans, and field drawings.
That’s until João Luís Cardoso, an archaeologist at the Open University in Lisbon, came across three rolls of film while studying a local archive.
These verified photos depict 13 bodies exhumed in 1961 and 1962, which Cardoso and colleagues used to reconstruct their likely burial positions using an archaeothanatological analysis. Based on knowledge of natural decay processes, this method has made it possible to reconstruct in detail how humans have historically dealt with their dead.
In addition to observations about the spatial distribution of the ancient bones from Sado Valley, forensic anthropologist Hayley Mickleburgh performed decomposition experiments to predict how human corpses in different burial positions would look if they had or had not been mummified.
Together, these observations suggest that some of these remains were mummified before burial. Although there was no soft tissue left, the archaeologists reached this conclusion based on indirect evidence like the position of the bodies, with their knees bent and pressed against the chest, as well as the presence of sediment infill around the bones and the absence of disarticulation. An unprepared decomposing corpse will disarticulate at weak joints relatively quickly after its burial, but mummified bodies preserve articulation.
The authors of the new study believe that before being buried, the desiccating bodies were gradually tightened with ropes, binding the limbs in place and compressing the remains into the desired position. This would explain some of the signs of mummification, which was likely performed to ease transport to the grave and to preserve the body’s lifelike shape after burial.
Overall, the Portuguese researchers strongly believe that prehistoric mummification may have been much more widespread across the world than previously thought, despite the lack of direct evidence of soft tissue. This is why follow-up observations of ancient archaeological sites using archaeothanatological analysis are paramount in order to uncover new robust evidence of pre-burial practices in prehistory. In other words, this may just be the beginning of a new exciting phase in mummy archaeology.
Whether or not the Sado Valley burials represent the oldest mummies in the world discovered thus far remains contested. The oldest confirmed mummies in the world are the 7,000-year-old Chinchorro mummies, found on Chile’s coast. But people likely mummified their dead much earlier than that, even in hunter-gatherer communities.
The Endurance was finally uncovered, over a century after it sank in the Weddell Sea in Antarctica. The ship was part of a famous expedition led by Sir Ernest Shackleton but got trapped in pack ice, forcing the expedition members to camp for months in the Antarctic and make a heroic escape.
Despite lying under 3 km (10,000 feet) of frigid water for over a century, the ship seems to be in impeccable shape, almost frozen in time. The ship was discovered just several kilometers from where it was abandoned, after a search mounted by the Falklands Maritime Heritage Trust (FMHT) scoured the area for two weeks.
Using a South African icebreaker, Agulhas II, the search team deployed submersible units to comb the area. After coming across various interesting targets, they finally uncovered the wreck site on Saturday, spending the next few days documenting and photographing the site.
In a blog post announcing the find, Director of Exploration Mensun Bound couldn’t contain his excitement:
“Ladies and Gentlemen,
I don’t know how else to say this, so I am going to come straight to the point.
We have found the wreck of the Endurance!”
“In a long career of surveying and excavating historic shipwrecks, I have never seen one as bold and beautiful as this.”
The mission’s leader, the veteran polar geographer Dr. John Shears also told the BBC that this is an incredible achievement, describing the moment when they saw the ship as “jaw-dropping”. Shears also emphasized that this was “the world’s most difficult shipwreck search”, battling blizzards, bitterly cold temperatures, and constantly shifting sea-ice. “We have achieved what many people said was impossible,” Shears said.
The ship looks much like it did when it was last photographed by Shackleton’s filmmaker, Frank Hurley, in 1915. While some things have obviously broken down, you can still see the hull, the deck, and the porthole window from Shackleton’s cabin. The anchors are still around, as are some of the boots and crockery the crew abandoned with the ship.
“Most remarkable of all was her name – E N D U R A N C E – which arcs across her stern with perfect clarity. And below is the 5-pointed Polaris star. Just as in Hurley’s famous photographs,” Bound adds.
Some sea creatures (such as filter feeders) have colonized the wreck but there don’t seem to be any wood-eating worms that would degrade the ship structurally.
The wreck itself cannot be moved or disturbed in any way, as it is a designated monument under the international Antarctic Treaty. Therefore, researchers can’t bring anything to the surface, and all they have done for now is document the position and condition of the ship.
A legendary expedition
Sir Ernest Henry Shackleton led three expeditions into the Antarctic. The one that employed the Endurance was launched in 1914, and Endurance departed from South Georgia, British Overseas Territory in the southern Atlantic Ocean, for the Weddell Sea on 5 December. But the situation quickly took a turn for the worse, as the ship became trapped in an ice floe. The crew waited until February and then realized that the ship would be trapped until spring (in the southern hemisphere, spring starts in September).
Shackleton ordered the conversion of the ship to a winter station, and the crew managed to tough it out until September. But when the ice started to release, the crew’s hopes that the ship would be freed safely were dashed. The ice put extreme pressure on the ship’s hull, damaging it, and the ship began taking on water. In November, the crew abandoned the ship.
For the next two months, Shackleton and his crew camped on a large, flat ice floe (basically an ice island), hoping that it would drift towards Paulet Island, 250 miles (402 km) away, where some stores were cached. This too failed. Shackleton decided to set up a more permanent camp on a different floe, hoping to drift to a safe island. This too did not happen. The floe broke in two, and Shackleton’s crew was forced into lifeboats, heading towards the nearest island.
The exhausted men managed to land their three lifeboats at Elephant Island, 346 miles (557 km) from where the Endurance sank, after being adrift on ice for almost 500 days. Shackleton gave his mittens to photographer Frank Hurley (who had lost his) and suffered severe frostbite as a result. In a desperate last-ditch attempt, Shackleton decided to take one of the three lifeboats and head for whaling stations 720 nautical miles (1,334 km) away.
Shackleton packed minimal supplies and headed out with a handful of people, only to be met by a hurricane. They landed on an island, and Shackleton and two members braved a yet-untried land route over dangerous, uncharted mountainous terrain. Ultimately, they were able to reach a whaling station and, after several tries, rescue the surviving members of the expedition.
The fact that researchers now have such a connection to this expedition is a spectacular achievement. “We will pay our respects to ‘The Boss’,” said Dr. Shears, using the nickname the Endurance crew had for their leader.
Still, the current expedition hopes to uncover even more from the ship and will now embark on thorough scientific research of the vessel.
“You can even see the holes that Shackleton’s men cut in the decks to get through to the ‘tween decks to salvage supplies, etc, using boat hooks. In particular, there was the hole they cut through the deck in order to get into “The Billabong”, the cabin in “The Ritz” that had been used by Hurley, Leonard Hussey (meteorologist), James McIlroy (surgeon) and Alexander Macklin (surgeon), but which was used to store food supplies at the time the ship went down,” Bound concluded in an article for the BBC.
Researchers at the Max Planck Institute for Plant Breeding Research have set the groundwork for supercharging the potato, by mapping out the tuber’s complete genome.
Fried, mashed, or thrown in a stew, the humble potato has a special place in our hearts and our plates that nothing else seems to be able to fill. Researchers seem to love this tasty tuber as well, and have put significant effort into decoding its genetic secrets. This impressive work will allow us to create better varieties of potato much faster than traditional breeding methods allow for, with implications for the quality of our meals, the enjoyment we derive from it, and global food security.
“The potato is becoming more and more integral to diets worldwide including even Asian countries like China where rice is the traditional staple food. Building on this work, we can now implement genome-assisted breeding of new potato varieties that will be more productive and also resistant to climate change — this could have a huge impact on delivering food security in the decades to come.”
The potato has not changed very much in the last 100 years or so. The overwhelming majority of varieties that are available in shops today are the same ones that were put to market over the last century and before. While these traditional cultivars are very popular, they do underline that there is a lack of variety of potatoes being grown, cooked, and enjoyed around the world. Thus, it stands to reason that improvements can be made to the baseline potato in order to make it more palatable, more resilient, or more abundant.
That’s what the team at the Max Planck Institute for Plant Breeding Research hopes to achieve with the full sequencing of the plant’s genome. The work, led by geneticist Korbinian Schneeberger, represents the first full assembly of the potato genome in history, allowing researchers to work with a much better view of the plant’s genetic intricacies, and thus much more accuracy when trying to breed new varieties of the plant.
Low genetic diversity within a species — and the potato is a good example of one such species — means that it can have difficulties thriving in certain contexts, and leaves it vulnerable to disease. The near-extinction of the Gros Michel banana due to the Panama disease is a great example of such a genetic vulnerability at work. In the case of the potato, the Irish famine of the 1840s stands testament to how completely potato crops can be wiped out by pathogens. During this tragic event, Europeans were growing a single variety of potatoes, which was vulnerable to blight; as such, potato crops failed across the continent.
The Green Revolution of the 1950s and 60s saw a great diversification of crop varieties in staples like rice or wheat, but not potatoes. Efforts to breed new varieties with higher yields or more disease resistance have, so far, remained largely unsuccessful.
Potatoes, the team explains, inherit two copies of each chromosome from each parent, unlike humans, who inherit one copy of each chromosome per parent. This makes them a species with four copies of each chromosome, a ‘tetraploid’, which makes them exceedingly difficult and slow to coax into generating new varieties with desirable combinations of traits.
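To get a feel for why tetraploidy slows breeding down, consider a single gene with just two alleles, A and a. The following is a hypothetical sketch (my own illustration, not from the study) counting how many distinct genotypes a diploid versus a tetraploid plant can carry at one locus, and how quickly the combinations multiply across loci:

```python
from math import comb

def genotype_count(ploidy: int, alleles: int = 2) -> int:
    """Distinct unordered genotypes at one locus: the number of
    multisets of size `ploidy` drawn from `alleles` alleles."""
    return comb(ploidy + alleles - 1, ploidy)

# Diploid (e.g. humans): AA, Aa, aa
print(genotype_count(2))        # 3
# Tetraploid (potato): AAAA, AAAa, AAaa, Aaaa, aaaa
print(genotype_count(4))        # 5
# Across, say, 10 unlinked loci the search space explodes:
print(genotype_count(2) ** 10)  # 59049
print(genotype_count(4) ** 10)  # 9765625
```

With more possible genotypes per locus, a breeder has to sift through a vastly larger space of combinations to fix a desirable trait, which is the slowness the researchers describe.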
The same tetraploid structure also makes it technically difficult to reconstruct the potato’s genome.
To work around this issue, the team sequenced the DNA of potatoes working not with mature plants, but with large numbers of individual pollen cells. These contain only two copies of each parent chromosome, which made it easier for the team to use established genetic methods to reconstruct the plant’s genome.
The results should give scientists and plant breeders a powerful new tool with which to identify desirable gene variants in the potato and work to establish new varieties that contain them. Essentially, it gives them a baseline against which they can reliably compare individual plants and establish exactly where their desirable properties originate — and then work to reproduce them.
The paper “Chromosome-scale and haplotype-resolved genome assembly of a tetraploid potato cultivar” has been published in the journal Nature Genetics.
As the war (or if you’re in Russia, the “special operation“) continues to rage on, Russian authorities have banned the last semblance of independent journalism and are amplifying efforts to restrict domestic access to free information. But millions of Russians are not having it and are flocking to virtual private networks (or VPNs) to browse the free internet.
The demand for VPNs, which allow the user to browse the internet privately and without restriction, skyrocketed in Russia after the invasion. Between February 27 and March 3, demand surged by 668% — but after Russia blocked Facebook and Twitter on March 4, the demand for VPNs grew even more, peaking at 1,092% above the average before the invasion.
By March 5, the ten most downloaded apps in Russia were essentially all VPNs.
Overall, the Google Play Store saw 3.3 million VPN downloads, while the Apple App Store had 1.3 million. That’s 4.6 million VPN downloads since the invasion started (Russia has a population of around 144 million).
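A quick back-of-the-envelope check of those figures (the download counts and population are from the paragraph above; the per-capita share is my own arithmetic):

```python
google_play = 3_300_000   # VPN downloads, Google Play Store
app_store = 1_300_000     # VPN downloads, Apple App Store
total = google_play + app_store
print(f"{total:,}")       # 4,600,000 downloads since the invasion

population = 144_000_000  # approximate population of Russia
share = 100 * total / population
print(round(share, 1))    # ~3.2% of the population
```

Roughly one in thirty Russians downloaded a VPN app in the span of about a week, and that is counting app-store downloads only.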
Russian authorities have not yet blocked app stores, although they have the ability to do so. However, they are trying to block VPN traffic at the network level — drawing from China’s experience in censoring the internet. It’s a bit of an arms race: VPNs may be blocked, and then they have to find new ways of evading censorship (often by switching servers).
For users, this means they may be forced to change servers or even apps regularly if they want to access independent, foreign publishers and social media. Otherwise, they will have to contend with the warped, distorted reality typically present in Russian state-owned media.
Russia’s internet censorship is not as stringent as China’s, but it could be getting there very quickly. As Russia becomes more and more isolated, the Kremlin is trying to cast an online iron curtain to block its people from accessing the free internet. The Russian parliament also approved a law making the spreading of “false” news about the war in Ukraine a criminal offense punishable by up to 15 years in prison. Even the word “war” is banned in Russian media.
It’s not the first time we’re seeing something like this. In January, VPN demand in Kazakhstan also skyrocketed by over 3,400% following an internet blackout during anti-government protests. When China passed the Hong Kong national security law, VPN demand also surged (in a region where VPN usage is already common). Myanmar and Nigeria went through similar situations. However, VPN providers say the current increase in demand is unprecedented.
VPN demand in Ukraine has also climbed 609% higher than before the invasion, mostly spurred by fears that invading forces will also carry out cyberattacks.
From early on in the pandemic, there has been strong evidence that COVID-19 can take a toll on the brain and nervous system, with symptoms like the loss of smell and taste serving as hallmarks of early infection. Now, a new study further demonstrates the toll the virus takes on the mind: infection was linked with significant, lasting brain abnormalities even in mild cases.
Researchers found that COVID-19 seems to reduce the brain’s gray matter, mainly in areas linked with memory processing and smell. These changes were observed both in people who required hospitalization and in those who had a less severe infection. The damage went beyond the structural changes that normally happen with age and could not be explained by other factors.
The study looked at changes in the brains of 785 people aged 51-81, who previously contributed brain scans to the UK Biobank, a large-scale database of brain imaging data from over 45,000 UK residents. Out of the participants, 401 had a COVID-19 infection sometime between March 2020 and April 2021 – with 4% hospitalized for infections.
The remaining 384 participants didn’t have COVID-19 but matched the infected participants in age, sex, and COVID-19 risk factors, such as whether they had diabetes. They served as the control group as they had no record of confirmed or suspected COVID-19. Everyone in the study was subject to two brain scans to allow comparisons.
“Using the UK Biobank resource, we were in a unique position to look at changes that took place in the brain following mild—as opposed to more moderate or severe—SARS-CoV-2 infection,” Genaëlle Douaud, lead author on the study, said in a statement. “We saw a greater loss of gray matter volume in infected participants.”
COVID-19 and the brain
The team used magnetic resonance imaging (MRI) to look at the brains. MRI uses a magnetic field and radio waves to generate images of tissues in the body. The MRI scans showed clear shrinkage in the brains of the people who caught the disease. Participants of the study caught COVID-19 about 4.5 months before their second scan.
The infected group had larger tissue loss in specific regions of the cerebral cortex – the outer surface of the brain. Shrinkage was most pronounced in the orbitofrontal cortex (which plays an important role in sensation) and in the parahippocampal gyrus (which is important for encoding new memories).
At the same time, those infected with COVID-19 had a larger reduction in overall brain size than the control group without the virus, the study showed. The authors also found tissue damage in areas of the brain linked with the primary olfactory cortex – a structure that gets sensory information from scent-detecting neurons in the nose.
On average, those who had the virus showed 0.2% to 2% greater tissue loss and damage over the course of about three years, compared with the control group. Estimates suggest that adults lose between 0.2% to 0.3% of gray matter in regions related to memory each year, so the extra loss would be out of the ordinary.
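Putting those numbers side by side makes the scale of the effect clearer. This is a rough sketch using the figures from the paragraph above (the 0.2–0.3% annual baseline and the roughly three-year scan interval), with the comparison itself being my own arithmetic:

```python
years = 3                       # approximate time between the two scans
baseline_annual = (0.2, 0.3)    # % gray matter lost per year with normal aging
observed_extra = (0.2, 2.0)     # % additional loss seen in the infected group

# Loss expected from aging alone over the scan interval:
expected_normal = tuple(round(r * years, 1) for r in baseline_annual)
print(expected_normal)          # (0.6, 0.9)

# At the high end, the extra loss is more than twice the entire
# expected aging-related loss over the same period:
print(round(observed_extra[1] / expected_normal[1], 1))  # 2.2
```

In other words, the worst-affected infected participants lost, on top of normal aging, more than double what aging alone would be expected to take away over the whole interval.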
“It’s the only study in the world to be able to demonstrate before vs after changes in the brain associated with SARS-CoV-2 infection,” Naomi Allen, chief scientist at the Biobank, said in a statement. “Collecting a second set of scans has generated a unique resource to enable scientists to understand how the virus affects internal organs.”
The study stops short of explaining how impactful these changes are on the brain, and how long-lasting they are. However, problems associated with COVID-19 appear to be more pervasive than initially thought, and the specter of long COVID will likely continue for a long time to come.
Although humans make up only a tiny fraction of all life on the planet, our impact on biodiversity and wildlife has been enormous. By some accounts, human activity is responsible for the loss of 80% of all wild animals and about 50% of all plants. Much of this loss came from clearing land to make way for farmed livestock raised for human consumption.
Just consider this fact: 70% of all birds on Earth are chickens and other poultry, whereas wild birds comprise a meager 30%. Were an alien archaeologist to visit our planet after humans went extinct, they would surely be staggered by the abundance of chicken fossils.
But before we became hooked on chicken eggs and hot wings, we most likely first started with geese.
Japanese archaeologists performing excavations at Tianluoshan, a Stone Age site dated between 7,000 and 5,500 years ago in China, found extensive evidence of goose domestication. They claim this is the earliest evidence of bird domestication reported thus far.
The team identified 232 goose bones, which paint a convincing picture that Tianluoshan may be the cradle of modern poultry.
First and foremost, the researchers performed radiocarbon dating on the bones themselves, rather than the sediments which covered the remains. This lends confidence that the goose bones are really as old as 7,000 years.
At least four bones belonged to juveniles no older than 16 weeks. This shows that they must have hatched at the site because it would have been impossible for them to fly in from somewhere else at their age. This is likely the case for the adult geese found there as well, given that wild geese don’t breed in the area today and probably didn’t 7,000 years ago either.
But, to be sure, the team led by Masaki Eda at Hokkaido University Museum in Sapporo, Japan, analyzed the chemical makeup of the ancient bones, showing that the water the geese drank was local. The strikingly uniform size of the bred geese is also highly indicative of captive breeding.
Although not by any means definitive, all of these lines of evidence converge on the same conclusion: geese were probably the first birds humans domesticated, and this happened more than 7,000 years ago in China.
New Scientist reports that other studies have claimed that chickens were the first domesticated birds, as early as 10,000 years ago, also in avian-loving northern China. But the evidence, in this case, has proven contentious. Genetic analysis suggests chickens were domesticated from wild birds called red junglefowl, but these birds do not live that far north. Furthermore, the chicken bones weren’t directly dated. The firmest evidence of chicken domestication only appeared 5,000 years ago.
While most domestication research has focused on dogs and cattle, it’s refreshing to see new perspectives on the evolutionary history of poultry, upon which our food security depends so much.
For some time now, EU governments have been pushing for natural gas and nuclear energy as an essential part of the energy transition from carbon-intensive fossil fuels like coal and oil. But since Ukraine was invaded, Europe’s reliance on Russian gas has triggered a sudden push towards energy independence, mainly via renewables. It’s increasingly looking like Putin’s invasion may succeed in pushing Europe towards renewable energy.
In Germany, Chancellor Olaf Scholz said renewable energy is “crucial” for the EU’s energy security, and Finance Minister Christian Lindner called renewables “freedom energies.” Meanwhile, in France, Barbara Pompili, Minister for Ecological Transition, said that ending the dependency on fossil fuels, especially Russian ones, is essential.
In response, the Stand with Ukraine coalition, which groups hundreds of organizations including environmental groups like Greenpeace, said a ban on Russian energy imports would be step one on a path to ending fossil fuel production. They called for “bold steps” towards global decarbonization and for a transition to “clean and safe” renewables.
The EU imported 155 billion cubic meters of natural gas from Russia in 2021, almost half (45%) of its gas imports and nearly 40% of the total amount used, according to the International Energy Agency (IEA). But the war has largely disrupted this. Now, the European Commission is expected to present an updated energy strategy, which will likely give renewables a larger role.
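Those shares let us back out rough totals for the EU’s overall gas picture. The following sketch derives them from the figures above; the implied totals are my own arithmetic, not IEA statements:

```python
russian_imports = 155   # billion cubic meters (bcm) imported from Russia, 2021
share_of_imports = 0.45 # Russian gas as a share of all EU gas imports
share_of_use = 0.40     # Russian gas as a share of all EU gas consumed

total_imports = russian_imports / share_of_imports
total_use = russian_imports / share_of_use
print(round(total_imports))  # ~344 bcm of gas imported overall
print(round(total_use))      # ~388 bcm of gas used overall
```

The gap between the two totals (roughly 40 bcm) is consistent with the EU producing a modest amount of its own gas on top of imports.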
The race to end this Russian dependence will likely require boosting imports from countries like the US and Qatar in the short term, and will likely lead to more domestic fossil fuel production. However, this doesn’t have to be the path ahead, climate experts argue, suggesting energy independence via clean energy such as solar and wind. The most likely option is a mixture between the two.
No more illusions
Europe has pledged to cut its greenhouse gas emissions by at least 55% by 2030, reaching net zero emissions by 2050. According to preliminary data, EU emissions dropped 10% from 2019 to 2020 – strongly related to the Covid-19 pandemic. By comparison, EU emissions declined 4% from 2018 to 2019. Despite being one of the more ambitious climate pledges around, it’s still nowhere near what is necessary if we want to avoid the worst of climate change effects.
If Europe wants to rid itself of Russian fossil fuels, it will need some sources of oil and gas in the meantime, but focusing on renewables is the smart long-term bet, researchers emphasize.
The argument that Europe could limit its dependence on Russian gas by focusing on local fossil fuel sources and importing liquefied natural gas from the US is neither realistic nor cost-effective, according to the think tank Carbon Tracker. It would take decades to build new gas infrastructure and develop local deposits, meaning price pressures would not be relieved right away.
By contrast, solar and wind energy sources can be significantly scaled up as part of existing decarbonization policies. This would be more cost-effective because of the large drop in renewable energy prices. The think tank Wuppertal Institute released a study this week showing how heating in the EU could run completely on renewables by 2013 thanks to electric heat pumps.
Meanwhile, the IEA came up with a road map to help Europe in its energy transition. The plan would reduce the bloc’s dependence on Russian natural gas by one-third in just one year while delivering on the bloc’s climate pledges. It’s a collection of actions designed to diversify the energy supply, focused on renewables.
“Nobody is under any illusions anymore. Russia’s use of its natural gas resources as an economic and political weapon show Europe needs to act quickly to be ready to face considerable uncertainty over Russian gas supplies next winter,” IEA Executive Director Fatih Birol said in a written statement announcing the plan.
The recommendations include not renewing gas supply contracts with Russia that are due to expire at the end of the year, increasing biogas and biomethane supply, storing more gas as a security buffer, accelerating the deployment of renewables, protecting vulnerable customers, and improving the energy grid’s reliability and flexibility.
One major point of contention among psychologists has always been the nature versus nurture debate — the extent to which particular aspects of our behavior are a product of either inherited (i.e. genetic) or acquired (i.e. learned) influences. In a new study on mice, researchers at the University of Utah Health focused on the former, showing that genes inherited from each parent have their own impact on hormones and important neurotransmitters that regulate our mood and behavior.
Intriguingly, some of these genetic influences are sex-specific. For instance, the scientists found that genes inherited from mom can shape the decisions and actions of sons, while genes from dad have biased control over daughters.
I got it from my Mom and Dad
Like chromosomes, genes also come in pairs. Both mom and dad each have two copies, or alleles, of each of their genes, but each parent only passes along one copy of each to the child. These genes determine many traits, such as hair and skin color.
But it’s not only our outward appearance that is influenced by genes. In a new study, researchers found that tyrosine hydroxylase and dopa decarboxylase — two genes that are heavily involved in the synthesis of hormones and neurotransmitters like dopamine, serotonin, norepinephrine, or epinephrine — are expressed differently from maternally versus paternally inherited gene copies. These chemicals play a crucial role in regulating an array of important functions from mood to movement.
The genes are also involved in the production of the adrenaline hormone by the adrenal gland, which triggers the “fight or flight” response when we encounter danger or stress. Together, these pathways form the brain-adrenal axis.
“The brain-adrenal axis controls decision making, stress responses, and the release of adrenaline, sometimes called the fight or flight response. Our study shows how mom’s and dad’s genes control this axis in their offspring and affect adrenaline release. Mom’s control the brain and dad’s control the adrenal gland,” Christopher Gregg, principal investigator and associate professor in the Department of Neurobiology at the University of Utah Health, told ZME Science.
In order to investigate how inherited gene copies introduce maternal or paternal biases in the brain-adrenal axis, the researchers genetically modified mice to attach a fluorescent tag to the dopa decarboxylase enzyme. Using a microscope, they could tell if a gene was inherited from the mother (colored red) or from the father (colored blue).
An investigation of the entire mouse brain revealed 11 regions containing groups of neurons that only use mom’s copy of the dopa decarboxylase gene. Conversely, in the adrenal gland, there were groups of cells that exclusively expressed the gene copy inherited from the dad.
These findings immediately raised a deeper question: could our behavior be influenced by these genetic biases? To answer it, the researchers observed mice with mutations that switched off one parent’s copy of the gene in a select group of cells as the rodents foraged for food.
The mice were left to explore freely so any external influence was kept to a minimum. Their behavior had to be as natural as possible as they encountered various obstacles, which prompted them to either take risks or retreat to safety, before resuming their quest for finding food.
These movements and behaviors look random and chaotic, but a machine learning algorithm developed by the researchers was able to pick up subtle but significant patterns. When the foraging patterns were broken down into modules, the researchers were able to identify behavioral differences associated with each parent’s copy of the dopa decarboxylase gene.
“We have faced a lot of skepticism from the scientific community. The way we study decision-making by using machine learning to detect patterns was hard for scientists to understand. The community was surprised to find that such well-studied genes (Th and Ddc) express the Mum and Dad’s gene copies in different brain and adrenal cells. We had to do a lot of work to show how strong the evidence is for our discovery,” Gregg said.
Gregg had been interested in how biological factors influence our decisions since he first came across Daniel Kahneman’s work in behavioral economics while he was still a postdoc. In the 1970s, Kahneman and Amos Tversky introduced the term ‘cognitive bias’ to describe our systematic but flawed patterns of responses to judgment and decision problems.
For instance, the gambler’s fallacy makes us tend to be certain that if a coin has landed heads up five times in a row, then it’s much more likely to land tails up the sixth time. The odds are, in fact, still 50-50. One of the most pervasive and damaging biases is the confirmation bias, which leads us to look for evidence confirming what we already think or suspect. If you’re disgruntled by the current political divides across the world, where each side seems unable to allow that the other side might be right about some things, you can point the finger at confirmation bias in many cases. There are many other biases, though, with Wikipedia listing at least 185 entries.
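The gambler’s fallacy is easy to check numerically. Here is a quick simulation (my own illustration, not from the study): among random coin-flip sequences that happen to start with five heads, the sixth flip still comes up tails about half the time.

```python
import random

random.seed(42)
tails_after_streak = 0
trials = 0
while trials < 50_000:
    # Flip six fair coins; True = heads
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):          # keep only runs of five straight heads
        trials += 1
        if not flips[5]:        # did the sixth flip come up tails?
            tails_after_streak += 1

print(tails_after_streak / trials)  # hovers around 0.5, not "much more likely"
```

Past flips carry no information about the next one, so conditioning on the streak changes nothing; the simulation just makes that independence visible.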
Now, Gregg seems convinced that these cognitive biases and some decision processes are deeply rooted in our biology, as well as that of other mammals. And with more research, it may be possible to modify maladaptive behaviors in a clinical setting, with potential new treatments for conditions like anxiety or depression.
The main caveat, however, is that all of this work has been performed on mice. Gregg and colleagues now want to develop and apply a new artificial intelligence platform called Storyline Health to human decision-making and behavior. They expect to discover genetic factors that control our behavior and cognition in a similar way to rodents.
“I am very excited about this new area that emerges from our work and merges decision making, machine learning and genetics. We are going to discover a lot of important new things about the factors that shape our decisions,” he said.