There are very good reasons why Mars is such a desolate, barren landscape. With neither a thick atmosphere nor a magnetic field, the Red Planet’s surface is bombarded daily by radiation levels up to 900 times higher than those seen on Earth. However, some places are sheltered. New research has found that cave entrances are shielded from the harmful radiation that normally hits Mars. This may make them ideal as both sites for future settlements and robotic missions meant to scour for signs of alien life.
Despite amazing advances in space exploration in the last decade, if we’re going to take the idea of settling Mars sometime during this century seriously, there are many challenges that need to be overcome. That’s unless we’re content with one-way suicide missions.
There’s no shortage of environmental hazards out to kill any astronaut bold enough to dare set foot on Mars. For one, the planet only has 0.7% of Earth’s sea-level pressure, meaning any human on Mars must wear a full pressure suit or stay barricaded inside a pressure-controlled chamber, otherwise oxygen wouldn’t flow through the bloodstream and the body could swell and bleed out.
Then there’s the issue of radiation. Mars is farther away from the Sun than Earth, receiving roughly 60% of the power per square meter seen on a similar site on Earth. But since Mars doesn’t have a magnetic field to deflect energetic particles, coupled with the paper-thin atmosphere, its surface is exposed to much higher levels of radiation than Earth. Furthermore, besides regular exposure to cosmic rays and solar wind, it receives occasional, lethal radiation blasts due to strong solar flares.
Measurements performed by the Mars Odyssey probe suggest that ongoing radiation levels on Mars are at least 2.5 times higher than what astronauts experience on the International Space Station. That’s about 22 millirads per day, which works out to roughly 8,000 millirads (8 rads) per year. For comparison, people in the U.S. are exposed to roughly 0.62 rads per year on average.
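The arithmetic behind these figures is easy to check; here is a minimal sketch using only the numbers quoted above:

```python
# Sanity-check the dose figures quoted above (1 rad = 1000 millirads).
MRAD_PER_DAY_MARS = 22        # Mars Odyssey estimate, millirads/day
US_AVG_RAD_PER_YEAR = 0.62    # average US exposure, rads/year

mars_rad_per_year = MRAD_PER_DAY_MARS * 365 / 1000
ratio = mars_rad_per_year / US_AVG_RAD_PER_YEAR
print(f"Mars: {mars_rad_per_year:.2f} rad/yr, ~{ratio:.0f}x the US average")
# -> Mars: 8.03 rad/yr, ~13x the US average
```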
Any attempt to colonize the Red Planet will require measures to ensure radiation exposure is kept to a minimum. Some of the proposed ideas thus far involve habitats built directly into the ground or even above-ground habitats using inflatable modules encased in ceramics.
But a better idea may be to take advantage of the natural shelters already in place. Mars is dotted with deep pits, caves, and lava tube structures across its surface. According to a new study performed by researchers led by Daniel Viúdez-Moreiras at Spain’s National Institute for Aerospace Technology, many of these caverns could offer ample protection to human settlers.
“Caves and their entrances have been proposed as habitable environments and regions that could have preserved evidence of life, mostly due to their natural shielding from the damaging ionizing and non-ionizing radiation present on the surface. However, no studies to date have quantitatively determined the shielding offered by these voids on Mars,” the researchers wrote in the journal Icarus.
The researchers found that the levels of UV radiation inside Martian caverns were, in some cases, ~2% of those values found on the surface.
“Numerical simulations of cave entrances show a reduction even more than two orders of magnitude in UV radiation, both in the maximum instantaneous and cumulative doses, throughout the year and at any location of the planet,” the researchers found.
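The ~2% figure and the “more than two orders of magnitude” phrasing describe the same attenuation; a quick conversion shows how they line up:

```python
import math

# The fraction of surface UV reaching a cave interior, per the study (~2%).
surface_fraction = 0.02
orders_of_magnitude = -math.log10(surface_fraction)

print(f"{surface_fraction:.0%} transmitted = "
      f"{orders_of_magnitude:.1f} orders of magnitude attenuation")
# -> 2% transmitted = 1.7 orders of magnitude attenuation
```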
What’s more, the amount of photosynthetically active radiation reaching these cave entrances is still higher than the minimum required for Earth-like photosynthesis. In other words, cave entrances could shelter both humans and their plant food source. However, it’s unclear whether ionizing radiation — the kind of electromagnetic radiation associated with cancer — is blocked in the same way as UV radiation.
“Ionizing radiation doesn’t present exactly the same behavior as UV radiation,” Viúdez-Moreiras told New Scientist. “However, it is expected that ionizing radiation will also be strongly attenuated in pit craters and cave skylights.”
In 2009, researchers led by Dr. Armando Azua-Bustos, a scientist at the Department of Planetology and Habitability, Center of Astrobiology (CSIC-INTA) in Madrid, described a red alga of the genus Cyanidium growing in the Mars-like Atacama Desert. These microorganisms formed biofilms in seemingly inhospitable coastal caves where there is little light, yet apparently just enough to support life. If Martian caves are anything like those across the barren Atacama Desert, the driest place on Earth, life could find a way to thrive there as well, Azua-Bustos and colleagues proposed.
High-resolution surface imaging data recorded over the past couple of decades by instruments like the Mars Reconnaissance Orbiter Context Camera system (CTX), together with Mars Odyssey’s thermal emission imaging system (THEMIS), suggest that the Tharsis bulge may be the best region for cave candidates on Mars. More than 1,000 suitable caves have been identified in this region, which also contains three enormous shield volcanoes, Arsia Mons, Pavonis Mons, and Ascraeus Mons.
Tharsis city sounds like an awesome name for the first human settlement on Mars. Remember the name.
UPDATE (August 30, 2021): The article was updated to include the findings made by Azua-Bustos et al. in the Atacama Desert, which complement the radiation quantification in Martian caves.
In the wake of the 2011 Fukushima nuclear power plant disaster, Japanese authorities set up an exclusion zone which grew larger and larger as radiation leaked from the plant, forcing more than 150,000 people to evacuate from the area. A decade later, that zone remains in place and many residents have not returned, with entire towns left completely deserted, at least by humans.
After humans fled Fukushima prefecture, wildlife took over. According to Japanese researchers who analyzed DNA samples from the site, wild boars have bred with domestic pigs that were left behind during the hasty evacuation. As a result, wild boar-pig hybrids now roam the radioactive exclusion zone. However, the researchers add that the hybrids have suffered no mutations as a result of radiation exposure. In fact, they seem to thrive.
The researchers examined the DNA of wild boars and domestic livestock from Fukushima to see how the animals were affected by life in the radiation-contaminated area. To their surprise, they instead found evidence of hybridization, or cross-breeding, between the two.
There are now hundreds of wild boars roaming Fukushima, which registers levels of the radioactive element cesium-137 some 300 times higher than the safe threshold. With virtually no predators, the boars, originally from nearby mountains, have commenced a “biological invasion”.
Of the 338 wild boars whose genes were sequenced, at least 18 individuals displayed domestic pig genes. However, more domestic pig genes have been found in wild boars since the study was completed, highlighting the need for more genetic monitoring at Fukushima.
“Frequencies of this haplotype have remained stable since first detection in 2015. This result infers ongoing genetic pollution in wild boar populations from released domesticated pigs,” the scientists wrote in the journal Proceedings of the Royal Society B.
These changes are at a low frequency, and as the hybrids will breed with wild boars, the domestic pig genes will be diluted over time. The researchers believe there will be no changes in the wild boars’ behavior over time. Currently observed abnormal wild boar behavior is pinned to the absence of people rather than some genetic component.
Since 2018, people have started slowly moving back into previously abandoned areas close to Fukushima. It seems like it’s only a matter of time before the boars have to move back to the mountains. In the meantime, they seem to be enjoying themselves.
Perhaps some areas of Fukushima will remain deserted by humans for decades and may share the fate of Chernobyl. Today, the exclusion zone of Pripyat in Ukraine is a haven for wildlife, with European bison, boreal lynx, moose, brown bears, and wolves thriving in the radioactive town.
Astronomers have detected a new, potentially deadly emanation coming from Uranus: X-rays. While most of these are likely produced by the sun and then reflected by the blue planet, the team is excited about the possibility of a local source of X-rays adding to these emissions.
The seventh planet from the sun has the distinction of being our only neighbor that rotates on its side. But that’s not the only secret this blue, frigid dot in space seems to hide, according to new research. The planet also seems to be radioactive — after a fashion. This discovery currently leaves us with more questions than answers, but it could help us better understand Uranus in the long run.
Deep space rays
Since it’s so far away, we’ve had precious few opportunities to interact with the planet. In fact, the only human spacecraft to ever come near Uranus was Voyager 2, and that happened in 1986. So most of our data regarding the frozen giant comes from telescopes, such as NASA’s Chandra X-ray Observatory and the Hubble Space Telescope.
A new study analyzed snapshots of Uranus taken by Chandra in 2002 and 2017. These revealed the existence of X-rays in the 2002 data, and a possible burst of the same type of radiation in the second data set. The 2017 dataset was recorded when the planet was at approximately the same orientation relative to Earth as it was in 2002.
The team explains that the source of these X-rays, or at least the chief part of them, is likely the Sun. This wouldn’t be unprecedented: both Jupiter and Saturn are known to behave the same way, scattering light from the Sun (including X-rays) back into the void. Earth’s atmosphere, actually, behaves in a similar way.
But, while the team was expecting to observe X-rays coming off of Uranus due to these precedents, what really surprised them is the possibility that another source of radiation could be present. While still unconfirmed, such a source would have important implications for our understanding of the planet.
One possible source would be the rings of Uranus; we know from our observations of Saturn that planetary ring systems can emit X-rays, produced by collisions between them and charged particles around the planets. Uranus’ auroras are another contender, as we have registered emissions coming from them on other wavelengths. These auroras are also produced by interactions with charged particles, much like the northern lights on Earth. Auroras are also known to emit X-rays both on Earth and other planets.
The piece that’s missing in the aurora picture, however, is that researchers don’t understand what causes them on Uranus.
Its unique magnetic field and rapid rotation could create unusually complex auroras, the team explains, which further muddies our ability to interpret the current findings; there are too many unknown variables in this equation. Hopefully, however, the current findings will help point us towards the answers we need.
The paper “A Low Signal Detection of X‐Rays From Uranus” has been published in the Journal of Geophysical Research: Space Physics.
Researchers at the University of Rhode Island’s (URI) Graduate School of Oceanography report that a whole ecosystem of microbes below the sea dines not on sunlight, but on chemicals produced by the natural irradiation of water molecules.
Whole bacterial communities living beneath the sea floor rely on a very curious food source: hydrogen released by irradiated water. This process takes place due to water molecules being exposed to natural radiation, and feeds microbes living just a few meters below the bottom of the open ocean. Far from being a niche feeding strategy, however, the team notes that this radiation-fueled feeding supports one of our planet’s largest ecosystems by volume.
Cooking with radiation
“This work provides an important new perspective on the availability of resources that subsurface microbial communities can use to sustain themselves. This is fundamental to understand life on Earth and to constrain the habitability of other planetary bodies, such as Mars,” said Justine Sauvage, the study’s lead author and a postdoctoral fellow at the University of Gothenburg who conducted the research as a doctoral student at URI.
The process through which ionizing radiation (as opposed to say, visible light) splits the water molecule is known as radiolysis. It’s quite natural and takes place wherever there is water and enough radiation. The authors explain that the seafloor is a particular hotbed of radiolysis, most likely due to minerals in marine sediment acting as catalysts for the process.
Much like radiation in the form of sunlight helps feed plants, and through them most other life on Earth, ionizing radiation also helps feed a lot of mouths. Radiolysis produces elemental hydrogen and oxygen-compounds (oxidants), which serve as food for microbial communities living in the sediment. A few feet below the bottom of the ocean, the team adds, it becomes the primary source of food and energy for these bacteria according to Steven D’Hondt, URI professor of oceanography and a co-author of the study.
“The marine sediment actually amplifies the production of these usable chemicals,” he said. “If you have the same amount of irradiation in pure water and in wet sediment, you get a lot more hydrogen from wet sediment. The sediment makes the production of hydrogen much more effective.”
Exactly why this process seems to be more intense in wet sediment, we don’t yet know. It’s likely the case that some minerals in these deposits can act as semiconductors, “making the process more efficient,” according to D’Hondt.
The discovery was made after a series of experiments carried out at the Rhode Island Nuclear Science Center. The team worked with samples of wet sediment collected from various points in the Pacific and Atlantic Oceans by the Integrated Ocean Drilling Program and other U.S. research vessels. Sauvage put some in vials and then blasted these with radiation. In the end, she compared how much hydrogen was produced in vials with wet sediment to controls (irradiated vials of seawater and distilled water). The presence of sediment increased hydrogen production by as much as 30-fold, the paper explains.
“This study is a unique combination of sophisticated laboratory experiments integrated into a global biological context,” said co-author Arthur Spivack, URI professor of oceanography.
The implications of these findings are applicable both to Earth and other planets. For starters, it gives us a better understanding of where life can thrive and how — even without sunlight and in the presence of radiation. This not only helps us better understand the depths of the oceans, but also gives clues as to where alien life could be found hiding. For example, many of the minerals found on Earth are also present on Mars, so there’s a very high chance that radiolysis could occur on the red planet in areas where liquid water is present. If it takes place at the same rates it does on Earth’s seafloor, it “could potentially sustain life at the same levels that it’s sustained in marine sediment.”
With the Perseverance rover having just landed on Mars on a mission to retrieve samples of rocks and to keep an eye out for potentially-habitable environments, we may not have to wait long before we can check.
At the same time, the authors explain that their findings also have value for the nuclear industry, most notably in the storage of nuclear waste and the management of nuclear accidents.
“If you store nuclear waste in sediment or rock, it may generate hydrogen and oxidants faster than in pure water. That natural catalysis may make those storage systems more corrosive than is generally realized,” D’Hondt says.
Going forward, the team plans to examine how the process takes place in other environments, both on Earth and beyond, with oceanic crust, continental crust, and subsurface Mars being of particular interest to them. In addition to this, they also want to delve deeper into how the subsurface communities that rely on radiolysis for food live, interact, and evolve.
The paper “The contribution of water radiolysis to marine sedimentary life” has been published in the journal Nature Communications.
A fungus found on the ruins of the Chernobyl nuclear power plant could protect astronauts from cosmic radiation, the greatest hazard for humans on deep-space exploration missions.
Scientists have long been trying to find solutions to the radiation exposure incurred during long-duration deep-space missions. Several options have been on the table, including a Star Trek-like deflector shield and manufacturing radiation-shielding bricks made from the Martian regolith (soil).
The problem is starting to become urgent, as space agencies are getting serious about sending humans to the Moon by 2024 under the Artemis program and promises of crewed missions to Mars in the near future. A 360-day round trip to the red planet would expose unprotected astronauts to the equivalent of two-thirds of their allowable lifetime radiation exposure — simply put, it would be too much radiation for a safe journey.
But this could be prevented thanks to an extremophile fungus known as Cladosporium sphaerospermum. The organism was first described in 1886 and has since been found growing in radioactive environments, including the cooling pools of the Chernobyl nuclear plant.
The fungus, melanized and radiotrophic, is capable of converting radioactive energy into chemical energy, which it does using melanin pigments inside its cell walls. The process is analogous to photosynthesis, in which plants convert energy from visible light into useful chemical energy.
Considering the fungus’ appetite for radiation, Nils Averesch, a co-author of the study and a scientist at NASA Ames Research Center, created an experiment to establish how much radiation this organism might absorb while in space. He and his team also wanted to evaluate its suitability as a medium for a radiation shield.
The venue for the experiment was the International Space Station (ISS), which features a unique radiation environment not unlike the surface of Mars. The astronauts aboard the ISS divided a petri dish in half, one side seeded with the fungus and the other left empty as the negative control. The fungi grew for 30 days while the astronauts constantly monitored radiation levels.
The results showed that the fungi adapted quickly to the microgravity environment of low Earth orbit and were able to live off the incoming radiation. The researchers found that a 1.7-millimeter-thick layer of growth blocked between 1.82% and 5.04% of the radiation compared to the negative control group. Not only did the fungi survive, they thrived.
“In the experiment, we were able to prove that the fungus does not only thrive on ionizing radiation on Earth but also in space,” Averesch said in a press release. “In addition to not being destroyed by the radiation… the fungus does, in fact, reduce radiation of the measured spectrum.”
The researchers estimate that a fungal lawn measuring 8.2 inches (21 centimeters) thick “could largely negate the annual dose-equivalent of the radiation environment on the surface of Mars,” as they wrote in the study, ranking the fungus “among the most effective radiation attenuators.” It is a self-sustaining, self-replicating substrate capable of living off even the smallest doses of radiation and biomass, and it can be grown on many different carbon sources, such as organic waste.
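The leap from a 1.7-millimeter test layer to a 21-centimeter lawn amounts to scaling up the measured attenuation. Here is a back-of-the-envelope version, assuming simple exponential attenuation — an illustrative simplification, not the dose-transport model used in the study:

```python
import math

# Rough extrapolation from the ISS measurement, assuming exponential
# attenuation: transmitted = exp(-mu * thickness). Illustrative only.
measured_thickness_mm = 1.7
lawn_thickness_mm = 210            # the proposed 21 cm lawn

blocked_at_21cm = {}
for blocked in (0.0182, 0.0504):   # attenuation range measured on the ISS
    mu = -math.log(1 - blocked) / measured_thickness_mm   # per mm
    transmitted = math.exp(-mu * lawn_thickness_mm)
    blocked_at_21cm[blocked] = 1 - transmitted
    print(f"measured {blocked:.2%} blocked -> "
          f"{1 - transmitted:.1%} blocked at 21 cm")
# -> measured 1.82% blocked -> 89.7% blocked at 21 cm
# -> measured 5.04% blocked -> 99.8% blocked at 21 cm
```

Even the lower ISS measurement extrapolates to roughly 90% of radiation blocked at 21 cm under this toy model, consistent in spirit with the study’s “largely negate” conclusion.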
It’s a promising solution for astronauts in space, but more tests will be needed to confirm these results.
New research from North Carolina State University points to a polymer embedded with bismuth trioxide particles as a possible replacement for today’s toxic radiation shielding materials such as lead.
The material is lightweight, can be manufactured quickly, and is effective at blocking ionizing radiation such as gamma rays. Such properties make it an ideal material for a wide range of applications ranging from medicine to space exploration.
No radiation past this point
“Traditional radiation shielding materials, like lead, are often expensive, heavy and toxic to human health and the environment,” says Ge Yang, an assistant professor of nuclear engineering at NC State and corresponding author of a paper on the work.
“This proof-of-concept study shows that a bismuth trioxide compound could serve as effective radiation shielding, while mitigating the drawbacks associated with traditional shielding materials.”
In the paper, the team details how this material — a “poly (methyl methacrylate) (PMMA) / Bi2O3 composite” — can be produced using a curing method that relies on ultraviolet (UV) light instead of traditional, high-temperature approaches, which are expensive and can take “even days” to perform. The UV method, by contrast, can cure this material in “the order of minutes at room temperature,” Yang explains.
Through their method, the team constructed samples of this polymer that contained up to 44% bismuth trioxide by weight. PMMA itself, which is standard ‘acrylic plastic’, lends optical clarity, abrasion resistance, hardness, and stiffness to the mixture, while the bismuth compound does all of the radiation shielding. The bismuth trioxide also “improved the micro-hardness to nearly seven times that of the pure PMMA”, the team explains. Microhardness is the hardness of a material as tested with a force of less than one newton.
Lab tests showed that different concentrations of bismuth oxide provide varying levels of radiation shielding, with the one detailed here (44% weight) offering “excellent mechanical and shielding properties”.
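Gamma shielding performance is conventionally described by exponential (Beer-Lambert) attenuation, which is why thickness and composition matter so much. The sketch below uses an assumed, illustrative attenuation coefficient, not a value from the NC State paper:

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Beer-Lambert attenuation: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# mu here is a placeholder for illustration, NOT a measured value; real
# coefficients depend on photon energy and the composite's density.
mu = 0.5   # assumed linear attenuation coefficient, 1/cm
for x in (0.5, 1.0, 2.0):
    print(f"{x} cm shield passes {transmitted_fraction(mu, x):.1%} of gamma flux")
```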
“This is foundational work,” Yang says. “We have determined that the compound is effective at shielding gamma rays, is lightweight, and is strong. We are working to further optimize this technique to get the best performance from the material. We are excited about finding a novel radiation shielding material that works this well, is this light and can be manufactured this quickly.”
For the immediate future, the team wants to continue exploring the properties of the material, including its behavior under different heat levels.
The paper “Gamma radiation shielding properties of poly (methyl methacrylate) / Bi2O3 composites” has been published in the journal Nuclear Engineering and Technology.
As the COVID-19 pandemic continues, it is becoming abundantly clear that conspiracy theories and misinformation can spread through the public almost as readily as the virus itself. Thus far during this global crisis, misinformation has ranged from claims that “coronavirus is just the flu/no worse than the flu” to advice that drinking water every 15 minutes will “flush out” the virus, or that eating cloves of garlic protects against COVID-19. The latter claim led to a woman being hospitalised in China after eating 1.5 kg of raw garlic.
Whilst conspiracy theories may be slightly less dangerous than pure misinformation, they are no less insidious. Some ‘theories’ that have circulated thus far are that COVID-19 is a “bioweapon” that was “created in a lab” — either genetically engineered or incubated in bat test subjects, in the US or in China, depending on who you believe — to it being a “population control scheme” devised by Bill Gates of Microsoft.
What is very clear is that the “disease vector” responsible for the spread of misinformation and conspiracy is most certainly social media and, to a wider extent, the internet itself. It is perhaps ironic, then, that the most widespread conspiracy theory, and the one the most people seem to be lending credibility to, is that 5G — the next generation of mobile internet connection that promises faster upload/download speeds through the use of a wider radio spectrum — is either responsible for the illness being blamed on COVID-19 or is somehow facilitating its spread.
In fact, news reports this week indicate that some people are taking this fallacious connection so seriously that they are attacking 5G towers and workers. Just this morning Birmingham Live in the UK reported that a 5G mast had been set on fire, whilst a video circulates on Twitter of protesters in Hong Kong tearing down masts.
Whilst it would be easy, and perhaps convenient, to claim this as a new phenomenon, the adoption of COVID-19 as the proof of the “dangers” of 5G is just the latest step in a long smear campaign designed to induce fear about its introduction.
The trepidation around 5G can be traced much further back, beyond its inception, beyond the creation of the internet even. The fear of 5G arises from our fundamental and long-standing misunderstanding about radiation. More specifically about what electromagnetic radiation is, and the difference between ionizing and non-ionizing radiation.
But before tackling the long history of irrational radiation fear, we should take a look at some extant claims and demonstrate how easy they are to dismiss.
Tracking down Patient Zero
Whilst it would be pretty much impossible to track down the first person who connected 5G and COVID-19, it’s far more feasible to separate out some of the most common claims and analyse them. The first 5G/Coronavirus claim that I personally came across was the idea that there actually is “no virus” and that all the symptoms are a result of 5G networks, so let’s consider that claim to be our “Patient Zero.”
In the above screenshot, it’s clear that the roll-out of different forms of communication are being linked to the prevalence of certain viruses. It would be pretty easy to start any kind of debunking by pointing out that everything we know about the viral theory of disease transmission would have to be wrong to accommodate this conspiracy theory. Thus, before we even start, there’s a wealth of evidence — enough to build the foundation of our entire understanding of disease and medicine — to demonstrate this claim is nonsense on toast.
But, where’s the fun in that? Instead, let’s pick apart the claim bit by bit.
Firstly, the suggestion that radio waves were introduced in 1916 is laughable and clearly demonstrates that the people that are spreading this conspiracy have zero idea what electromagnetic radiation is.
Radio waves are simply low-frequency, long-wavelength, electromagnetic radiation — less energetic than infrared. In fact, they carry with them less energy than the visible light we use to see everything around us.
It should be clear then that if radio waves are responsible for a viral disease, the largest contributor to epidemics should be sunlight.
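The energy gap between radio and visible photons is easy to quantify with Planck’s relation E = hf. The specific frequencies below are generic examples (FM broadcast, green light), not figures from the article:

```python
# Photon energy E = h*f: a radio photon carries far less energy than a
# visible-light photon, which is the point being made above.
PLANCK_H = 6.626e-34       # Planck's constant, J*s
EV = 1.602e-19             # joules per electronvolt

def photon_energy_ev(freq_hz: float) -> float:
    return PLANCK_H * freq_hz / EV

radio = photon_energy_ev(100e6)      # 100 MHz FM broadcast
visible = photon_energy_ev(5.45e14)  # ~550 nm green light

print(f"radio: {radio:.2e} eV, visible: {visible:.2f} eV, "
      f"ratio ~{visible / radio:.0f}x")
```

A green photon carries several million times the energy of an FM radio photon, yet nobody blames sunlight for pandemics.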
Radio waves didn’t “emerge” in 1916; in fact, the static that you can hear on an untuned radio partly consists of radio waves that date back to shortly after the big bang — emerging from the Cosmic Microwave Background (CMB) that permeates the entire Universe. The Earth also receives a great deal of electromagnetic radiation from the Sun in the form of radio waves. Thus, any technological developments that utilised radio waves simply added to those natural sources.
Looking past that, there is also an issue with the dates offered in this widely circulated social media post. The first commercial radio transmitters and receivers were developed between 1895 and 1896 by Guglielmo Marconi, and radio was in wide use by 1900, well before the 1918 flu pandemic, which lasted until 1920.
The “evidence” put forward by the conspiracy theorists then takes a break of nearly a century until the supposed introduction of 3G in 2003, which is linked to the spread of SARS. The thing is, there were lots of pandemics in this intervening time — Asian flu in 1957 and Hong Kong flu in 1968 for example. These are ignored because they don’t fit the conspiracy theorists’ narrative.
As for 3G, well its rollout took a protracted period of time. Whilst it was indeed serving Europe in 2003, 3G wasn’t rolled out in Asia until 2006. It took until 2007 to get 3G operational in 40 countries, and it wasn’t introduced in Africa until 2012. The SARS pandemic was first identified in China in 2002 — four years before 3G was introduced. It was brought under control in July 2003. There was another smaller outbreak in 2004, again in China, still two years before the introduction of 3G there.
The roll-out of 4G was much tighter, taking roughly from 2008 to 2010 to implement. The Swine flu pandemic began in Mexico in 2009 and was over by August 2010. That means that for Swine flu there is some correlation, far more than can be claimed for 3G and SARS, which barely overlap at all.
We also have to ignore that coronaviruses such as SARS-CoV-2 (the virus behind COVID-19), SARS, and MERS are very different from influenza strains: they often cause radically different symptoms and most certainly have very different incubation periods. If these ailments had the same root cause, i.e. low-frequency radiation, we should expect them to be similar.
With all these cases, even if you discount the fact that every epidemiologist, doctor, and scientist who works in virology must be “in” on the conspiracy, there still lurks a problem that science attempts to avoid at every turn.
The strands of “evidence” presented to support this conspiracy are easy to dismiss based on a well-known logical fallacy that scientists are always at pains to avoid: a maxim that passed into infamy when a doctor ignored its principles and started a movement that has cost lives across the globe.
Correlation does not equal causation.
The mere fact that two events are correlated does not mean that they are causally linked. A causal link between events has to be established by evidence. To demonstrate this, one only has to see how easy it is to link events like these epidemics to something else unrelated, especially when you omit and distort data. For example, can we really be sure that the American thrash band Metallica aren’t responsible for the epidemics blamed on 5G and other radio wave-based systems?
Picture what follows as a deranged tweet:
“In 1986, Metallica released their masterpiece “Master of the Puppets.” In the same year, America suffered its largest flu epidemic since 1968!
2003, Metallica release the panned “St. Anger” album — SARS happens!
2008, they release “Death Magnetic” shortly after MERS strikes!
And in 2019 the band release “Helping Hands…Live & Acoustic at the Masonic” thus sparking the COVID-19 pandemic and simultaneously proving the Masons were behind this all along!”
What I did there was manufacture a correlation using the barndoor effect. Rather than aiming at a target painted on a barn door, I fired a few random shots into it and then painted a target around the bullet holes. Being a crack shot is easy when you cheat. And this is exactly what the people pushing this conspiracy are doing.
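The barndoor effect is easy to reproduce in code. The simulation below is entirely hypothetical: generate enough unrelated random series and one of them will always “line up” with your target:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)

def walk(n=50):
    """A random walk: pure noise with no causal meaning whatsoever."""
    pos, out = 0.0, []
    for _ in range(n):
        pos += random.gauss(0, 1)
        out.append(pos)
    return out

target = walk()                            # stand-in for "epidemic counts"
candidates = [walk() for _ in range(200)]  # 200 unrelated "causes"
best = max(abs(pearson_r(target, c)) for c in candidates)
print(f"best |r| among 200 random series: {best:.2f}")
```

With 200 candidate series, the best match is all but guaranteed to look impressively correlated, despite every series being pure noise. Pick the winner, hide the other 199, and you have a conspiracy theory.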
One of the key issues that still motivates the anti-vax movement is the rise in autism cases and how this seems to correlate to the introduction of the MMR vaccination. The connection was initially drawn by Andrew Wakefield in a 1998 paper published in The Lancet and later retracted. Wakefield himself was struck off for the unethical procedures he engaged in to obtain his results, but he has been embraced as a hero by the anti-vax — and some would say anti-science — movement.
All it takes to launch a conspiracy theory based on correlation is a willingness to distort and ignore data, and to bury the fact you have no actual causal evidence.
Let’s bring the viral theory of disease back into play, and look at a slightly toned-down suggestion, the idea that 5G could be weakening our immune systems.
Understanding Non-ionizing Radiation
Again, the main evidence that has been presented for 5G facilitating the spread of COVID-19 has been the correlation between its rollout, the areas of the world in which it is most used, and the timing and location of COVID-19 outbreaks. We can dismiss this by noting that correlation doesn’t equal causation. So what about the suggested mechanisms by which 5G is supposedly weakening our immune systems?
The idea that 5G weakens the immune system is very similar to claims of electromagnetic hypersensitivity (EHS) in which mild to severe symptoms are connected to exposure to electromagnetic fields (EMF). At the moment the World Health Organisation (WHO) does not consider the symptoms of EHS to be related to exposure to EMF. Likewise, there is no clinical evidence to suggest that 5G can cause harm or weaken the immune system.
Firstly, the human immune system cannot be “weakened” against COVID-19, for the simple reason that this strain of coronavirus is new: we have no immune response to it. That is what makes it so dangerous; none of our immune systems contain the antibodies for this virus yet.
Secondly, the radio waves that form the basis of 5G are non-ionizing. This essentially means that they don’t have the requisite energy to strip electrons from atoms. This is unlike high-frequency electromagnetic radiation like X-rays or gamma-rays which do have the energy to ionize atoms and thus, damage cells.
When electrons are stripped from atoms, these atoms become ionized. This can be a problem in our bodies because the surface — or valence — electrons of an atom determine how it bonds with other atoms. A change in this respect can change how proteins fold within the body. This might not sound too extreme, but the way a protein folds determines how it functions. Thus, exposure to ionizing radiation can lead to all sorts of nasty effects, including cancer and, yes, weakened immune systems.
Again, radio waves don’t have enough energy to do this, but you may well be asking: what if we’ve been exposed to a lot of radio waves? Surely then there will collectively be enough energy to cause ionization?
The simple answer to this is no. Fortunately, that isn’t how ionization works.
Imagine the valence electron as a rubber duck and the atom to which it is attached as a metal bucket. We start to fill the bucket by pouring water into it — analogous to bombarding our electron and atom with radio waves.
Now in the real world, the water lifts the duck off the bottom of the bucket, and eventually, it spills out. Ionization doesn’t work like this though. With ionization, the electron doesn’t spill out unless the photons that make up these radio waves individually contain enough energy to make it do so. It doesn’t matter how many photons there are.
Albert Einstein was the first to explain this phenomenon whilst investigating the photoelectric effect. When light hits the surface of a metal, electrons are given off, but Einstein found that lowering the frequency of the light below a threshold cut off the flow of electrons entirely. Yet altering the intensity of the light did not stop electrons from being released — it only changed how many were emitted.
So, for example, low-frequency light of high intensity shining on the surface of a metal will not cause electrons to flow. Yet high-frequency, low-intensity light will.
Re-running our bucket experiment, this is like saying the duck stays at the bottom of the bucket unless the water is of the correct temperature to make it rise. No matter how much water pours in, that duck ain’t budging. Bring the temperature of the water up, though, and the duck spills out regardless of the volume; it could take a drop of water to do it, or it could take a monsoon.
If it seems like this doesn’t make sense, well, yeah. It’s quantum physics. If it confused and terrified Einstein, why should it be comfortable and easy for us to understand?
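To put rough numbers on the energy argument (the specific frequency below is an illustrative millimeter-wave 5G band, not a figure from this article), a single photon carries energy E = hf, and for radio frequencies that falls enormously short of the few electron-volts needed to ionize an atom:

```python
# Back-of-the-envelope check: is a 5G photon anywhere near ionizing?
# 28 GHz is an illustrative millimeter-wave 5G frequency (an assumption here).

PLANCK_H = 6.626e-34      # Planck's constant, in joule-seconds
EV_IN_JOULES = 1.602e-19  # one electron-volt expressed in joules

def photon_energy_ev(frequency_hz: float) -> float:
    """Energy of a single photon (E = h * f), converted to electron-volts."""
    return PLANCK_H * frequency_hz / EV_IN_JOULES

five_g = photon_energy_ev(28e9)  # roughly 1.2e-4 eV per photon
ionization = 13.6                # eV needed to ionize hydrogen, the classic benchmark

print(f"5G photon:  {five_g:.2e} eV")
print(f"Ionization: {ionization} eV")
print(f"Shortfall:  ~{ionization / five_g:,.0f}x too weak")
```

No number of such photons closes that gap, which is exactly the point of the duck-and-bucket analogy: ionization depends on the energy of each individual photon, not on how many of them arrive.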
Finally, we come to the idea that COVID-19 can somehow utilise 5G signals as a method of transport or even communication.
The Daily Star — the UK’s number one purveyor of pseudo-scientific junk — this week ran an article that suggested “viruses can talk to each other” and thus make active decisions about who to infect. The implication is that 5G signals are being used to do this. Full Fact, the UK’s fact-checking website, links this bizarre claim to a 2011 paper which suggests bacteria can communicate via electromagnetic signals — an idea that is thoroughly disputed and, as you probably noticed, refers to bacteria, not viruses.
The Full Fact article also points out that COVID-19 is spreading in areas of the world with little to no 5G coverage. One of the worst-hit countries is Iran, a country with no 5G networks.
This element of the COVID-19/5G conspiracy really goes to the heart of why we need to step on this “theory” hard and fast. We know how COVID-19 spreads and limiting that spread is vital.
The novel coronavirus moves through contact with small droplets produced when those infected with the virus cough, sneeze or exhale. Smashing down 5G towers will achieve nothing to limit the spread. What will limit the spread is getting people to self-isolate, and to practice social distancing and good hygiene when they can’t. Wearing protective gear such as masks and gloves has also been shown to have some positive effects.
To get people to do that, we must show them that conspiracy theories like those listed — and, I hope, thoroughly debunked — above are nonsense. This, in turn, ensures that they are listening to good information and not outdated, irrational fears about “radiation.”
Wildlife is thriving in the human-free nuclear accident area in Fukushima, Japan.
A new study from the University of Georgia reports that populations of wild animals in the nuclear exclusion zone in Fukushima, Japan are booming. According to the findings, more than 20 species, including wild boar, Japanese hare, macaques, pheasant, fox, and the raccoon dog, make their home in various areas of the landscape.
No humans, more animals
“Our results represent the first evidence that numerous species of wildlife are now abundant throughout the Fukushima Evacuation Zone, despite the presence of radiological contamination,” said UGA associate professor James Beasley.
It’s been nearly a full decade since the nuclear accident at Fukushima. As in other nuclear accidents (such as that at Chernobyl), authorities established a no-go zone around the site of the accident to safeguard public health.
Animals, however, are free to come and go as they please, and both the public and scientific community are curious to see how life gets by in such areas — the answer seems to be ‘better than expected’.
In addition to the team’s past research at Chernobyl, the current paper suggests that quarantined areas can act as safe havens for wild animals, especially species that tend to come into conflict with humans, such as wild boars. These animals were predominantly seen in human-evacuated zones, according to Beasley.
“This suggests these species have increased in abundance following the evacuation of people,” he says.
For the study, the team worked with three zones of interest (established by the government in the Fukushima region after the 2011 accident) and gathered wildlife population figures by using 106 camera sites in these zones. Among the zones, one was completely off-limits for humans due to high levels of radiation contamination, one saw restricted access due to intermediate levels of contamination, and the last one was still open to human access and habitation due to low background levels of radiation.
The human-inhabited zone served as the control for the research. There is no previous data on wildlife populations in the evacuated areas from which to establish a baseline, but the three areas are in close proximity and have a similar landscape. Thus, the team explains, the human-inhabited area can act as a reliable control.
The cameras captured over 46,000 images of wild boar over 120 days. Around 26,000 were taken in the uninhabited area, approximately 13,000 in the restricted one, and only 7,000 in the inhabited zone. Other species seen in high numbers included raccoons, Japanese marten, and Japanese macaques, according to the team.
“This research makes an important contribution because it examines radiological impacts to populations of wildlife, whereas most previous studies have looked for effects to individual animals,” said study co-author Thomas Hinton.
The team looked at the impact of variables such as distance to road, time of activity (as captured by the cameras’ date-time stamps), vegetation type, and elevation on the wildlife population. They report that the behavioral patterns of most species align with their historically-recorded patterns. Raccoons, for example, a nocturnal species, were more active during the night; pheasants, which are diurnal, were more active during the day. In the meantime, wild boar in the uninhabited area were more active during the day, while boar in human-inhabited areas were more active during the night. The team says this suggests that these species are modifying their behavior in response to humans.
However, the team underscores that these findings refer to whole populations; the study doesn’t make any assessments as to the health of individual animals.
“The terrain varies from mountainous to coastal habitats, and we know these habitats support different types of species. To account for these factors, we incorporated habitat and landscape attributes such as elevation into our analysis,” Beasley said.
“Based on these analyses, our results show that level of human activity, elevation and habitat type were the primary factors influencing the abundance of the species evaluated, rather than radiation levels.”
One exception to the general pattern was the Japanese serow, a goat-like mammal, which was most often seen in rural, human-inhabited upland areas. The team believes this comes as a behavioral adjustment to avoid the growing numbers of boar in the evacuated areas.
The paper “Rewilding of Fukushima’s human evacuation zone” has been published in the journal Frontiers in Ecology and the Environment.
The ozone hole over the Antarctic registered its smallest annual peak on record (tracking began in 1982) according to an announcement by the National Oceanic and Atmospheric Administration (NOAA) and NASA on Monday.
Each year, an ozone hole forms during the Southern Hemisphere’s late winter as the solar rays power chemical reactions between the ozone molecules and man-made compounds of chlorine and bromine. Governments around the world are working together to cut down on the ozone-depleting chemicals that created this hole, and it definitely helps.
However, the two agencies warn that we’re still far from solving the problem for good. The small peak in the ozone hole’s surface likely comes from unusually mild temperatures in that layer of the atmosphere seen during this year, they add.
Good but not done
NASA and NOAA explain that the ozone hole consists of an area of heavily-depleted ozone in the upper reaches of the stratosphere. This hole is centered on Antarctica, between 7 and 25 miles (11 and 40 kilometers) above the surface. At its largest recorded size in 2019, the hole extended for 6.3 million square miles (September 8) and then shrank to less than 3.9 million square miles (during the rest of September and October). While that definitely sounds like and is a lot of surface, it’s better than it used to be.
“During years with normal weather conditions, the ozone hole typically grows to a maximum of about 8 million square miles,” the agencies said in a news release.
It’s the third time we’ve seen a similar phenomenon — weather systems slowing down stratospheric ozone loss — take place in the last 40 years. Below-average spikes in the size of the ozone hole were also recorded in 1988 and 2002.
The stratosphere’s ozone layer helps deflect ultraviolet (UV) radiation incoming from the sun. That’s very good news if you like being alive, as UV rays are highly energetic and will cause harm to the DNA of living organisms. UV exposure can lead to skin cancer or cataracts in animals and damages plant life.
A host of chemicals that used to be employed for refrigeration, including chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs), break down ozone molecules in the stratosphere — which exposes the surface to greater quantities of UV. These compounds can last for several decades in the atmosphere and are extremely damaging to ozone during that time, breaking it down in huge quantities.
Humanity banded together to control the production and release of such chemicals under the Montreal Protocol of 1987, which has drastically reduced CFC emissions worldwide. The ozone layer has been steadily recovering since then, but there’s still a long way to go.
“It’s a rare event that we’re still trying to understand,” Susan Strahan, an atmospheric scientist at the NASA’s Goddard Space Flight Center in Maryland, said in a news release. “If the warming hadn’t happened, we’d likely be looking at a much more typical ozone hole.”
The reactions that break down ozone take place most effectively on the surface of high-flying clouds, but milder-than-average temperatures above Antarctica this year inhibited cloud formation and made them dissipate faster, NASA explains. Since there were fewer clouds to sustain these reactions, a considerable amount of ozone made it unscathed. In a divergence from the norm, NOAA reports that there were no areas above the frozen continent this year that completely lacked ozone.
“Sudden stratospheric warming” events were unusually strong this year, NOAA adds. Temperatures in September were 29˚F (16˚C) warmer than usual (at 12 mi/19 km altitude) on average, “which was the warmest in the 40-year historical record for September by a wide margin,” according to NASA.
Warmer air weakened the Antarctic polar vortex, a current of high-speed air circling the South Pole that typically keeps the coldest air near or over the pole itself, which slowed significantly (from an average wind speed of 161 mph / 260 kmph to 67 mph / 107 kmph). The slowed-down vortex allowed air to sink lower in the stratosphere, where it warmed and inhibited cloud formation. It’s also likely that it allowed for ozone-rich air from other parts of the Southern Hemisphere to move in.
In 1974, the late Stephen Hawking argued in a now-famous study that besides mass and spin, black holes can be characterized by a unique temperature. He also claimed that black holes don’t just devour matter, but also emit radiation. Researchers at the Israel Institute of Technology have set out to test this theory by creating a black hole analog in the lab. The results agreed with Hawking’s predictions, giving more credence to the theory.
A black hole is defined as a region of spacetime whose extremely strong gravity prevents anything, including light, from escaping. This essentially means that we can’t see a black hole directly — although this year astronomers captured a picture of a black hole’s event horizon (the swirling, bright boundary of the black hole). Scientists are confident that black holes exist, judging from their theoretical calculations and observations of X-rays emitted by swirling disks of gas around black holes. The motions of nearby stars can also reveal the presence of a black hole. In fact, most galaxies — the Milky Way included — are thought to be held together by the gravity of supermassive black holes (with masses millions of times that of the sun), which lie at the galactic center.
But if everything gets sucked into a black hole, never to return, what happens to the information that these objects used to hold? According to the laws of quantum mechanics, matter cannot simply disappear without leaving behind information of its previous state. So, on the one hand, we have physics that says information is never truly lost, nor is it truly copied; on the other hand, we know that once an object gets too close and crosses the black hole’s event horizon, it can never escape.
This is known as the black hole information paradox — and Stephen Hawking had been trying to crack it for decades. His investigations eventually led him to develop the Hawking radiation theory, in which the physicist argued that not all matter falls into a black hole. In some cases, when entangled pairs of particles are attracted into a black hole, only one of them would fall in, while the other escaped. Hawking named these escaping particles Hawking radiation, theorizing that its nature should be thermal radiation whose temperature would depend on the size of the black hole.
Testing such a theory is virtually impossible because we currently lack the technology required to measure the radiation from a real black hole. This is why a team of researchers at the Israel Institute of Technology had to come up with a creative solution. For their new study, the authors made a Bose-Einstein condensate (BEC) — a fifth state of matter in which familiar physics fades away and quantum phenomena start to take over, even at a macroscopic scale. BEC matter almost stops behaving as particles and starts behaving more like waves. In a BEC you can observe “waves of atoms”, moving in sync with each other just like water drops in an ocean wave.
The experimental setup for the black hole analog. Credit: Jeff Steinhauer.
To make BEC, the Israeli researchers trapped 8,000 rubidium atoms in a focused laser beam, chilling matter to only a billionth of a degree above absolute zero. A second laser fired on one side of the BEC made it denser on that side. According to the researchers, this led to a transition that moves at a constant speed through the condensate from the denser area (outside of the black hole) to the less dense area (analogous to the inside of the black hole). This is where it gets a bit tricky: the researchers say that sound waves traveling through the denser region move faster than this transitional flow, allowing sound to move in either direction. However, in the less dense region, sound waves can only travel away from the sharp transition — in other words, further into the black hole analog.
Light can move away from a black hole or fall into it (never to escape); in this experiment, light was replaced by sound. The researchers forced one of a pair of phonons (like photons, but for sound) to fall into the flow of rubidium atoms, while the other was allowed to escape. When the researchers measured both phonons, they recorded an average temperature of 0.035 billionths of a kelvin, which agrees with Hawking’s predictions.
These findings in no way prove Hawking’s theory — that would require technology that doesn’t currently exist. However, the study published in Nature shows that Hawking was definitely on to something.
Just like yeast yields bread and beer, it could help lab workers track their daily radiation exposure quickly and effectively.
Workers in hospitals and nuclear facilities can wear disposable yeast badges to check their daily radiation exposure instantly. (Purdue University image/Kayla Wiles).
As I was preparing a homebrewed beer batch a few days ago, I couldn’t help but wonder at the marvel that is yeast. This tiny organism, so small and unassuming, makes so much of what we take for granted happen. Mankind has been using yeast for its selfish purposes for thousands of years, and yet we may only be tapping into a very small portion of what it can actually offer.
For instance, Purdue University researchers think it can be of great help when monitoring radiation exposure. They’ve designed a simple disposable badge which, in addition to yeast, only contains paper, aluminum, and tape. You’d take it, go about your work, and then simply activate the yeast with a drop of water. This will show radiation exposure as read by an electronic device. It’s simple, it’s elegant, and it’s pretty effective.
“You would use the badge when you’re in the lab and recycle it after you’ve checked your exposure by plugging it into a device,” said Manuel Ochoa, a postdoctoral researcher in Purdue’s School of Electrical and Computer Engineering.
The problem is, radiology workers are routinely exposed to low doses of radiation. If you have an X-ray once in a while, you wouldn’t worry about it at all — but if that’s your job, things start to change. While good design and protective gear largely keep workers within a safe range of radiation exposure, absorbing a little bit is almost unavoidable.
So these workers wear badges which help monitor their overall exposure. But the process is slow and cumbersome, researchers explain.
“Currently, radiology workers are required to wear badges, called dosimeters, on various parts of their bodies for monitoring their radiation exposure,” said Babak Ziaie, Purdue professor of electrical and computer engineering. “They wear the badges for a month or two, and then they send them to the company that made them. But it takes weeks for the company to read the data and send a report back to the hospital. Ours give an instant reading at much lower cost.”
This is where our faithful yeast enters the stage. Much like humans, yeast is vulnerable to radiation. So if you have a yeast badge, the more radiation you are exposed to, the more of the badge’s yeast cells are killed off. The phenomenon which turns this process into a quantifiable reading is actually quite neat: when you add water to the yeast, it starts a localized fermentation process — just like with bread or beer. This forms carbon dioxide bubbles at the surface, as well as some chemical ions. These ions increase the electrical conductivity of the badge, and this conductivity can be measured — giving an instant value of radiation exposure.
“We use the change in electrical properties of the yeast to tell us how much radiation damage it incurred. A slow decrease in electrical conductivity over time indicates more damage,” said Rahim Rahimi, Purdue postdoctoral researcher in electrical and computer engineering.
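That relationship lends itself to a very simple readout: map a measured conductivity value onto a dose via a calibration curve. The table and numbers below are entirely made up for illustration; the paper’s actual calibration data is not reproduced here.

```python
# Hypothetical readout sketch: interpolate an absorbed dose from a badge's
# conductivity reading, using a completely made-up calibration table.
# Lower conductivity means more yeast killed, hence a higher dose.

# Calibration points: (relative conductivity, dose in millirads) -- illustrative only.
CALIBRATION = [(1.00, 0.0), (0.90, 5.0), (0.75, 20.0), (0.50, 80.0)]

def dose_from_conductivity(rel_conductivity: float) -> float:
    """Linearly interpolate a dose between neighboring calibration points."""
    pts = sorted(CALIBRATION, reverse=True)  # from high conductivity to low
    for (c_hi, d_lo), (c_lo, d_hi) in zip(pts, pts[1:]):
        if c_lo <= rel_conductivity <= c_hi:
            frac = (c_hi - rel_conductivity) / (c_hi - c_lo)
            return d_lo + frac * (d_hi - d_lo)
    raise ValueError("reading outside calibration range")

print(f"{dose_from_conductivity(0.80):.1f} millirads")  # a reading between 5 and 20
```

A tablet or phone app, as the researchers suggest, would only need this kind of lookup plus the electronics to measure the badge.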
Rahimi and colleagues say that if the device goes commercial, the reading process is simple enough that it could be done with a tablet or smartphone.
They also mention that, genetically, yeast is surprisingly similar to human tissue — so by studying the effect that radiation has on yeast, we could ultimately better understand how it affects human cells.
“For yeast, it seems that radiation primarily affects the cell walls of the membrane and mitochondria,” Ochoa said. “Since biologists are already familiar with yeast, then we’re more likely to understand what’s causing the biological effects of radiation in organic matter.”
Chang Keun Yoon, Manuel Ochoa, Albert Kim, Rahim Rahimi, Jiawei Zhou, Babak Ziaie. Yeast Metabolic Response as an Indicator of Radiation Damage in Biological Tissue. Advanced Biosystems, 2018; 1800126 DOI: 10.1002/adbi.201800126
During the final stage of World War II, the United States detonated two nuclear weapons over the Japanese cities of Hiroshima and Nagasaki on August 6 and 9, 1945, respectively. The A-bomb nicknamed “Little Boy” that blew up over Hiroshima instantly killed 45,000 people and would go on to claim the lives of thousands more as a result of nuclear fallout. In novel research, scientists have now measured how much radiation was absorbed by the bones of one of the casualties, who was less than a mile away from where the bomb was set off.
A mushroom cloud billows into the sky about an hour after an atomic bomb was dropped on Hiroshima, Japan. US Army via Hiroshima Peace Memorial Museum.
Most of the research that studied the effects of A-bomb radiation on the human body focused on how nuclear fallout exposure affects the health of victims. We know, for instance, that approximately 1,900 people, or about 0.5% of the post-bombing population, are believed to have died from cancers attributable to Little Boy’s radiation release. The new study performed by Brazilian researchers at the University of São Paulo is different: it’s the first to measure direct blast radiation exposure, effectively using a victim’s jawbone as a dosimeter — a device used to measure an absorbed dose of ionizing radiation.
Little Boy held about 140 pounds of uranium, which underwent nuclear fission when it exploded as planned nearly 2,000 feet above the Japanese city. The blast released 16 kilotons of explosive force, causing unspeakable damage in the area. According to one estimate, at least 50,000 people were killed and an equal number were injured that day. Nearly 70% of the city’s buildings were destroyed, leaving many homeless.
One of the unfortunate victims was less than a mile away from the bomb’s hypocenter. Using a technique called electron spin resonance (ESR), the researchers estimate that the jawbone’s radiation dose was about 9.46 grays (Gy), the gray being the unit of absorbed dose of ionizing radiation, e.g. X-rays. It is defined as the absorption of one joule of radiation energy per kilogram (1 J/kg) of matter, e.g. human tissue.
For cancer patients, doctors often target tumors with a collimated beam, a radiotherapy which can involve up to 70 Gy in some cases, delivered locally and in many small fractions. A person whose whole body is exposed to 3-5 Gy, however, can expect to die within a couple of weeks. At 9.46 Gy, the dose recorded in the jawbone was well above that lethal threshold.
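In raw thermal terms, a gray is a deceptively small amount of energy, which underlines how disproportionately damaging ionizing radiation is. A quick sanity check (the body mass and specific heat below are illustrative assumptions, not figures from the study):

```python
# How much raw energy does a lethal whole-body dose actually deposit?
# One gray is defined as 1 joule absorbed per kilogram of matter.

DOSE_GY = 9.46          # dose measured in the jawbone, in grays
BODY_MASS_KG = 70.0     # illustrative adult body mass
SPECIFIC_HEAT = 3500.0  # approximate specific heat of soft tissue, J/(kg*K)

energy_j = DOSE_GY * BODY_MASS_KG     # total energy if absorbed across the whole body
temp_rise_c = DOSE_GY / SPECIFIC_HEAT # equivalent bulk heating per kilogram

print(f"Energy deposited: {energy_j:.0f} J")               # roughly 660 J
print(f"Temperature rise: {temp_rise_c * 1000:.1f} mK")    # a few thousandths of a degree
```

The same few hundred joules delivered as heat would be harmless; delivered as ionizing radiation, it is lethal because the damage is done molecule by molecule, not by bulk heating.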
The innovative method used by the Brazilian researchers was first demonstrated in the 1970s by Sérgio Mascarenhas, who was at the time teaching at the University of São Paulo’s São Carlos Physics Institute (IFSC-USP). The physicist wrote a widely-acclaimed paper which concluded that X-ray and gamma-ray irradiation makes human bones slightly magnetic, a phenomenon called paramagnetism. Bones contain a mineral called hydroxyapatite which, when irradiated, produces CO2⁻ free radicals whose levels can be traced inside the mineral. These free radicals can then be used to gauge the radiation dose absorbed by the bone.
The mandible studied by the researchers. Credit: Sergio Mascarenhas (IFSC-USP).
Initially, Mascarenhas’ technique was intended to be a new tool for dating bones from archeological sites in his country based on how much radiation they’d received from elements like thorium that occur naturally in the sand. One day, however, he was invited to test his technique on the remains of people from the Hiroshima blast. Unfortunately, his analysis was far too rudimentary at the time — due to the lack of advanced computers, the physicist was unable to separate the A-bomb signal from the background signal. He did get to keep the jawbone though.
Decades later, Angela Kinoshita of Universidade do Sagrado Coração in São Paulo State, along with colleagues, used modern equipment to finally make the method work. The dose distribution matched that found in different materials around Hiroshima, including wall bricks and roof tiles, suggesting that the method is accurate, although more experiments are still required.
“There were serious doubts about the feasibility of using this methodology to determine the radiation dose deposited in these samples,” Kinoshita said in a press release.
“The results confirm its feasibility and open up various possibilities for future research that may clarify details of the nuclear attack.”
There is a lot of interest in this methodology due to the risk of terrorist attacks in countries like the United States.
“Imagine someone in New York planting an ordinary bomb with a small amount of radioactive material stuck to the explosive,” said study co-author Oswaldo Baffa of the University of São Paulo’s Ribeirão Preto School of Philosophy, Science & Letters.
“Techniques like this can help identify who has been exposed to radioactive fallout and needs treatment.”
MIT researchers have devised a method that could allow states to prove they’re disposing of nuclear weapons without giving away any of their technical details — which are considered state secrets.
Open nose cavity US Mark 5 nuclear bomb showing the ‘pit’. Image credits Scott Carson / US Atomic Energy Commission.
Nuclear disarmament negotiations (particularly those between the U.S. and Russia) always hit a patch of rough ground when verification processes come up. The main point of these talks is to promote nuclear non-proliferation — the understanding that the fewer nukes there are in the world at any one time, and the fewer actors there are with access to them, the easier it will be for humanity not to blast itself back to an irradiated stone age. However, every reliable verification process that the two parties could agree on as trustworthy (i.e. visually identifying the warheads) would give away technical data pertaining to the weapons.
This would never fly. For starters, governments don’t like other people to know how their nukes work — especially the people they’re generally aiming said nukes at. Secondly, such measures would risk disseminating technical details to third-parties, thereby defeating the whole purpose of disarmament efforts. Visual confirmation, then, became a no-go.
To spot a warhead
Now, an MIT research team reports developing a novel method of confirmation that could help promote nuclear disarmament without disseminating any state secrets. The method, similar to a physics-based version of cryptographic encryption systems, can be applied in two different versions — just in case one is found to have drawbacks by any government. The findings were published in two different papers.
Lacking a reliable tool to identify nuclear weapons, and thus bereft of a way to enforce their destruction, past agreements have focused on the decommissioning of delivery systems. It makes sense, as it’s far harder to ‘fake’ a plane or a ballistic missile, whereas nuclear bombs are basically spheres of plutonium. Such measures have worked reasonably well up to now, but lead author Areg Danagoulian believes that they only skirted the real issue: to avoid such weapons falling into the hands of terrorists or rogue states, we need to dispose of the actual warheads — which means we need a reliable way to identify them or spot fakes, one to which governments will agree.
“How do you verify what’s in a black box without looking inside? People have tried many different concepts,” Danagoulian says. “But these efforts tend to suffer from the same problem: If they reveal enough information to be effective, they reveal too much to be politically acceptable.”
Their solution draws inspiration from digital data encryption methods, which alter data using a set of large numbers, which form the key. Without this key, the encrypted data is a hodge-podge of characters. However, while it may be illegible, it is still possible to tell if it is identical to another set of encrypted data — if they use the same key, the datasets would be the same hodge-podge. Danagoulian and his team applied the same principle for their warhead verification system — “not through computation but through physics,” he explains. “You can hack electronics, but you can’t hack physics.”
The method analyzes both of a warhead’s essential parts: the sphere of radioactive elements that supply its nuclear ‘gunpowder’, and the dimension of the hollow sphere called a pit that serves as a ‘detonator’ — details pertaining to both elements are considered state secrets. Because of this, they couldn’t simply probe the weapons’ internal characteristics, and they couldn’t tell a fake apart just by measuring emitted radiation.
So what the team did was to introduce a physical key, created from a mix of the same isotopes used in the weapon, but in a ratio unknown to the inspection crew. Similarly to a filter applied to a photo, the key will scramble information about the weapon itself. In keeping with that analogy, the physical key is like a complementary color filter (a picture’s negative) that will cancel out all of the weapon’s emissions when lined up properly. If the investigated object has a different emission pattern (i.e. it’s a fake), it will bleed through the filter, alerting the investigation crew.
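The one-time-pad flavor of this idea can be sketched in a few lines of code. The byte patterns below are arbitrary toy stand-ins, nothing to do with real warhead emission spectra, and the XOR scheme is only an analogy for the physical cancellation:

```python
# Toy sketch of the "cryptographic reciprocal" idea: a secret template that
# exactly cancels a genuine item's signature, so the inspector sees a blank
# output -- unless the item is a fake, in which case differences bleed through.
# The byte patterns here are arbitrary toy data, not real emission spectra.

def make_reciprocal(signature: bytes) -> bytes:
    """The owner builds a complement (a bitwise 'negative') of the true signature."""
    return bytes(b ^ 0xFF for b in signature)

def inspect(item_signature: bytes, reciprocal: bytes) -> bytes:
    """The inspector overlays item and reciprocal; genuine items cancel to zero."""
    return bytes((a ^ r ^ 0xFF) for a, r in zip(item_signature, reciprocal))

genuine = bytes([0x3A, 0x7C, 0x91, 0x05])  # the secret 'emission pattern'
fake    = bytes([0x3A, 0x7C, 0x90, 0x05])  # differs in a single place

reciprocal = make_reciprocal(genuine)

print(inspect(genuine, reciprocal).hex())  # 00000000 -> blank: passes
print(inspect(fake, reciprocal).hex())     # a nonzero byte 'bleeds through'
```

In the real system the cancellation happens in physics, in the neutron or photon transmission through the foil, rather than in software, which is what makes it hard to spoof.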
(Top) diagram showing the configuration that could be used to verify that a nuclear warhead is real. (Bottom left) measurement without the reciprocal. (Bottom right) measurement with the reciprocal. Image credits Areg Danagoulian.
This filter — called a cryptographic reciprocal or a cryptographic foil — will be produced by the same country that made the warheads, thereby keeping their secrets safe. The weapon can be hidden in a black box to prevent visual inspection, lined up with the foil, then get blasted with a beam of neutrons. A detector will then analyze the output and render it as a color image — if the warhead is genuine, the image will be blank. The second variant of this process substitutes a photon beam for the neutron one.
These tests meet the requirements of a zero-knowledge proof — the honest prover can demonstrate compliance without revealing anything more. The system also benefits from a built-in disincentive to lie. Because the template is the perfect complement of the weapon itself, when superimposed over a dummy it will actually reveal information about the warhead’s composition and configuration — the very things states don’t want others to know about.
It’s a neat concept; the only issue I have with it right now is that it only works if all parties involved are genuine, and do actually create the right reciprocals for their warheads. Still, if the system does someday get adopted and helps bring about significant reductions in the number of nuclear weapons in the world, Danagoulian says, “everyone will be better off.”
“There will be less of this waiting around, waiting to be stolen, accidentally dropped or smuggled somewhere. We hope this will make a dent in the problem.”
The papers “Experimental demonstration of an isotope-sensitive warhead verification technique using nuclear resonance fluorescence” and “Nuclear disarmament verification via resonant phenomena” have been published in the journals Proceedings of the National Academy of Sciences and Nature Communications, respectively.
MIT and Arizona State University researchers are hot on the heels of the Universe’s first stars: they’ve traced the faint signals of hydrogen gas energized by stellar radiation just 180 million years after the Big Bang.
Image via Pixabay.
Where does everything come from? That’s one of the questions people have been burning to answer since times immemorial. It’s a hugely complicated question, but science can offer some bits and pieces to start cobbling the answer together. A paper published today by MIT and Arizona State University researchers uncovered one such fundamental piece: the earliest evidence yet of hydrogen gas, and of the first stars igniting.
The gas from whence we came
Using EDGES (Experiment to Detect Global EoR Signature), a table-sized radio antenna plopped down in a remote part of Western Australia, the team managed to pick up trace signals generated by hydrogen gas just 180 million years after the Big Bang. This is the earliest evidence we’ve ever seen for the presence of hydrogen in the early universe — a very important find, considering hydrogen is the simplest, and thus first, atom out there. The team was also able to determine that by this time, the gas bore traces of the universe’s first stars.
“This is the first real signal that stars are starting to form, and starting to affect the medium around them,” says study co-author Alan Rogers, a scientist at MIT’s Haystack Observatory.
“What’s happening in this period is that some of the radiation from the very first stars is starting to allow hydrogen to be seen. It’s causing hydrogen to start absorbing the background radiation, so you start seeing it in silhouette, at particular radio frequencies.”
The EDGES instrument was designed to pick up radio signals generated during a time in the universe’s history known as the Epoch of Reionization, or EoR. It’s during this time that we think the first sources of light (such as stars) sprang up, from a sort of cosmic primordial soup made up mostly of hydrogen gas. Not much was happening before this time, mostly due to a lack of energy to change objects in the universe: hydrogen, for example, was virtually invisible, as its energy state made it indistinguishable from the surrounding cosmic background radiation.
The dark Horsehead Nebula in the constellation Orion. Hydrogen corresponds to red. Image credits Ken Crawford.
But the radiation from these newborn sources of light and energy changed the hydrogen gas’s energy state, allowing it to interact with background radiation at radio frequencies — and that’s exactly what EDGES was designed to pick up.
However, not all went according to plan. When the team looked at the frequency range the antenna was designed to pick up, between 100 and 200 megahertz, they hardly picked up any signal.
One explanation they came up with is that the theoretical models used to calculate what emissions this early hydrogen would give off had overestimated the gas’s temperature. So they re-crunched the numbers, this time assuming that the hydrogen and its environment were at about the same, lower temperature. They decided their best bet was to search the 50 to 100 megahertz frequency range, so they retuned their antenna and flipped the switch again.
“As soon as we switched our system to this lower range, we started seeing things that we felt might be a real signature,” Rogers says.
The device picked up a flattened absorption profile (i.e. a dip in the radio-wave spectrum) centered around 78 megahertz. Rogers adds that the frequency corresponds to “roughly 180 million years after the Big Bang”, adding that “this has got to be the earliest” detection of a signal from hydrogen we yet have. To put things into perspective, the universe is estimated to be about 13.8 billion years old.
The radio profile matches theoretical predictions of a star-hydrogen interaction. These early stellar bodies poured ultraviolet radiation out into the void, exciting any surrounding body of hydrogen and changing the spin state of its single electron. Ultimately, this change allowed the atoms to absorb background radiation at a characteristic wavelength of 21 centimeters, or a frequency of 1,420 megahertz — becoming, in effect, ‘visible’ in silhouette for the first time.
Redshift stretched these waves, so by the time they reached present-day Earth, their frequency had dropped below 100 megahertz.
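The numbers line up in a quick back-of-the-envelope check, a sketch using only the figures quoted in the text (a 1,420-megahertz rest frequency observed as a dip at 78 megahertz):

```python
# Back-of-the-envelope: how much was the 21-cm signal redshifted?
REST_FREQ_MHZ = 1420.0     # hydrogen spin-flip line (21 cm) at emission
OBSERVED_FREQ_MHZ = 78.0   # centre of the dip EDGES detected

# Cosmological redshift: 1 + z = f_emitted / f_observed
z = REST_FREQ_MHZ / OBSERVED_FREQ_MHZ - 1
print(f"redshift z ~ {z:.1f}")  # 17.2

# The 21 cm wavelength is stretched by the same factor:
wavelength_m = 0.21 * (1 + z)
print(f"observed wavelength ~ {wavelength_m:.1f} m")  # 3.8 m
```

A redshift of about 17 is consistent with light emitted when the universe was only a couple of hundred million years old.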
But the dip in the radio spectrum was also stronger and deeper than the models predicted — suggesting that the hydrogen was indeed colder than previously assumed. The team estimates that the hydrogen gas and the universe as a whole must have been twice as cold as previously estimated, at about 3 kelvins, or -270.15 degrees Celsius / -454 degrees Fahrenheit.
The edges of discovery
Image credits MIT / Haystack Observatory.
The research is likely the best window we’ll have into the universe’s early history for a long time to come, and it took an incredible scientific effort to obtain these results: years of hard work by engineers and scientists to design, re-design, and re-calibrate the EDGES instrument to even have a hope of picking up on these signals. Peter Kurczynski, program director for Advanced Technologies and Instrumentation in the Division of Astronomical Sciences at the National Science Foundation, which funded EDGES, compares the feat to “being in the middle of a hurricane and trying to hear the flap of a hummingbird’s wing.”
“Sources of noise can be a thousand times brighter than the signal they are looking for,” he explains.
It was built in the middle of Australia’s nowhere (which has to be at least nowhere squared) because it was the most remote place they could get to — and that limited interference from man-made radio signals, which would easily overpower any of the signals the antenna was designed to pick up on.
“It is unlikely that we’ll be able to see any earlier into the history of stars in our lifetimes,” lead author Judd Bowman of ASU says. “This project shows that a promising new technique can work and has paved the way for decades of new astrophysical discoveries.”
It’s also the first actual glimpse we get into this period of the universe, a particularly important one — these were, after all, the universe’s early days. The foundation of everything we see today was laid down during that Epoch.
The paper, “An absorption profile centred at 78 megahertz in the sky-averaged spectrum” has been published in the journal Nature.
The Sun’s closest neighboring star, Proxima Centauri, might not be as welcoming as we believed — a team of astronomers has detected a flare from the star so powerful that it throws the habitability of its system into serious doubt.
Artist’s impression of a flare from Proxima Centauri. Image credits Roberto Molar Candanosa / Carnegie Institution for Science, NASA/SDO, NASA/JPL.
A team of astronomers led by Meredith MacGregor from the Carnegie Institution for Science discovered the flare while reanalyzing recordings taken last year by the Atacama Large Millimeter/submillimeter Array (ALMA), an array of 66 radiotelescope antennas nestled in the Atacama desert.
Now, by their very nature, solar flares are some of the most violent and energetic events we know of. Think of them as magnetic short-circuits in a star. What happens during a flare is that ebbs and flows in a star’s magnetic field start accelerating electrons (negatively-charged particles) close to the speed of light. Enough of these build up that they start interacting with stellar plasma (highly electrically charged atoms), ripping it out of the star, causing it to erupt. This eruption can be seen across the electromagnetic spectrum.
And that’s where the bad news starts: even by solar-flare standards, what the team discovered was staggeringly violent. The flare they detected from Proxima Centauri was, at its peak, over 10 times brighter than the largest flares we’ve ever recorded from the Sun at similar wavelengths.
“March 24, 2017 was no ordinary day for Proxima Cen,” said MacGregor.
The flare increased Proxima Centauri’s brightness by a factor of 1,000 over 10 seconds. It was also preceded by a smaller flare. Taken together, the two events lasted under two minutes — which would explain why nobody noticed them at the time. For context, ALMA observed the star for over 10 hours between January and March of last year, when the flares erupted.
We knew from previous observations that Proxima Centauri was prone to regular bouts of flaring, although those flares were much smaller and emitted chiefly in the X-ray spectrum. The new findings, however, cast a lot of doubt on the habitability of the exoplanet Proxima b, which had up to now raised a lot of interest as a potentially habitable planet. Proxima b orbits its star around 20 times closer than the Earth orbits the Sun, so flares of this magnitude are a huge problem. The team estimates that a flare 10 times larger than a major solar flare would drench the planet with 4,000 times more radiation than Earth gets from a solar flare. That’s enough to raise literal hell on the planet, the team explains:
“It’s likely that Proxima b was blasted by high energy radiation during this flare,” says MacGregor.
“Over the billions of years since Proxima b formed, flares like this one could have evaporated any atmosphere or ocean and sterilized the surface, suggesting that habitability may involve more than just being the right distance from the host star to have liquid water.”
So it might be healthier to steer away from Proxima b until we find a way to accurately predict, and then successfully weather, these flares.
The findings also allowed the team to get a better image of the Proxima Centauri system, and to cast doubt on previous estimates that it contains large bands of dust and larger particles, similar to our asteroid belt.
The paper “Detection of a Millimeter Flare From Proxima Centauri” has been published in the journal Astrophysical Journal Letters.
The Van Allen belts are two radiation belts: zones of electrically charged particles that encompass the Earth far above its surface, held in place by the planet’s magnetic field. The first of the belts was discovered in early 1958 through data collected by Explorer I (the United States’ first space satellite) and by the Explorer III and Pioneer satellites, under James Alfred Van Allen and his team at the University of Iowa.
Similar radiation belts have since been found surrounding other planets, but the term ‘Van Allen belts’ refers only to the two belts (and sometimes other, transiently formed belts) that surround the Earth. They were named after the American physicist credited with their discovery.
Each of the two belts surrounds the Earth in a sort of doughnut-shaped formation. The inner belt reaches from approximately 600 to 3,000 miles above the Earth, and the outer belt from about 9,300 to 15,500 miles above the Earth. Astronomers have determined that the belts consist of many electrically charged particles, like protons and electrons. Earth’s magnetic field traps these particles, directing them to the magnetic poles.
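For readers more comfortable with metric units, the quoted altitudes convert as follows (a simple arithmetic sketch; the mile figures are the ones given above):

```python
# Convert the quoted Van Allen belt altitudes from miles to kilometres.
MILE_KM = 1.609344  # exact definition of the international mile

inner_belt_mi = (600, 3_000)
outer_belt_mi = (9_300, 15_500)

for name, (lo, hi) in [("inner", inner_belt_mi), ("outer", outer_belt_mi)]:
    print(f"{name} belt: ~{lo * MILE_KM:,.0f} to {hi * MILE_KM:,.0f} km")
# inner belt: ~966 to 4,828 km
# outer belt: ~14,967 to 24,945 km
```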
The particles move in spiral paths along a system of flux lines, curving from the north magnetic pole to the south magnetic pole. As the particles come nearer either pole, the converging flux lines reflect them toward the opposite pole. This effect keeps the particles of the Van Allen belts bouncing between the poles. The belts receive new particles from the solar wind, a continuous stream of charged particles emitted from our sun.
Chart Showing the Van Allen Belts in Proportion to Earth
The belts can also gain particles from solar flares and cosmic rays. Intense solar activity can disrupt the belts, leading to magnetic storms. Such disruptions also affect radio reception, cause surges in power lines, and produce auroras.
Ever since their discovery, the Van Allen belts have concerned and inspired people’s minds. Hollywood feature film and TV producer, writer, and director Irwin Allen came out with his science-fiction movie Voyage to the Bottom of the Sea in 1961, three years after the discovery of the first belt. The main plot, conceived by Allen and Charles Bennett, revolves around saving all life on Earth from the natural inferno created when a meteor shower pierces the Van Allen radiation belt, setting it ablaze.
Icebergs begin to melt in the Arctic, entire forests are engulfed in flames, and the crews of sea-going vessels traveling on the ocean’s surface are baked alive. Eventually, scientist Admiral Harriman Nelson proposes to shoot a nuclear missile from his submarine Seaview into the burning belt at a certain trajectory and time, which would, in theory, overwhelm and extinguish the skyfire, essentially “amputating” the belt from the Earth.
Scene from Irwin Allen’s 1961 Film Voyage to the Bottom of the Sea. Source: 20th Century Fox.
Even today, decades later, people are concerned about the radiation belts. A prominent group of physicists wants the belts eliminated altogether. One suggested plan would deploy long conducting tethers, charged to a high voltage, from satellites into the belts. Charged particles that came into contact with the tethers would have their pitch angles altered.
Over time, theoretically, this would dissolve the inner belts. The belts pose certain difficulties and dangers (mainly from radiation) whenever a satellite, telescope, or human is launched into outer space. Still, there is a genuine scientific debate over whether these belts provide anything useful, and whether we could do away with them without negative effects.
Some argue that without the belts trapping charged particles, cosmic-ray particles would be free to collide with our atmosphere in larger quantities, resulting in a higher background level of secondary “air shower” neutrons and higher doses of background radiation on the surface. If the Van Allen belts were gone, it would definitely impact human life.
Source: The World Book Encyclopedia, Vol. 20. World Book, Inc., 1987.
Concrete is responsible for a significant share of man-made greenhouse emissions. Credit: Pixabay.
MIT undergraduate students found that recycled plastic flakes can make concrete 15 percent stronger. Discarded plastic bottles could thus one day serve a new role, inside your walls for instance, instead of polluting the environment in landfills and oceans. The plastic is first blasted with gamma rays, a process that leaves the material completely harmless.
Blasting concrete pollution
Concrete is the second most widely used substance in the world, after water. Manufacturing and transporting concrete is responsible for 4.5 percent of all man-made carbon dioxide emissions.
While researching a student project, Carolyn Schaefer and Michael Ortega were amazed by just how many emissions the concrete industry is responsible for. If they could find a way to make concrete greener, even by a fraction, the two thought, it would be possible to lessen concrete’s strain on the environment.
“There is a huge amount of plastic that is landfilled every year,” Michael Short, an assistant professor in MIT’s Department of Nuclear Science and Engineering, told MIT News. “Our technology takes plastic out of the landfill, locks it up in concrete, and also uses less cement to make the concrete, which makes fewer carbon dioxide emissions. This has the potential to pull plastic landfill waste out of the landfill and into buildings, where it could actually help to make them stronger.”
The MIT undergrads scoured the literature and found out about previous efforts that mixed recycled plastic with Portland cement. The resulting concrete, however, was weakened. Going deeper into the rabbit hole, the two found out that exposing the plastic to gamma radiation alters the material’s crystalline structure to such a degree that the plastic turns stiffer, tougher, and stronger. So, they got the bright idea to first irradiate plastic and then mix it with cement and mineral additives (fly ash and silica fume) to manufacture a potentially stronger concrete.
The plastic in question was polyethylene terephthalate, recovered from a nearby recycling plant. The flakes were irradiated with a cobalt-60 irradiator housed in one of MIT’s basements. The irradiated flakes do not become radioactive themselves, so they can safely be used in cement without fear of jeopardizing human health.
After being poured into molds, allowed to cure, and demolded, the cylindrical concrete samples were subjected to a battery of compression tests. The results were then compared to those of concrete made with regular, non-irradiated plastic, as well as plain concrete with no plastic.
According to the MIT researchers, the presence of the gamma-ray irradiated plastic and fly ash enhanced the strength of the concrete by 15 percent, as reported in the journal Waste Management.
Using X-ray diffraction, backscattered electron microscopy, and X-ray microtomography, the researchers found that irradiated plastic, particularly at high doses, exhibited crystalline structures with more cross-linking, or molecular connections. This crystalline structure seems to block pores within the concrete, making it denser and stronger. Tests so far suggest that the higher the dose of gamma radiation, the stronger the concrete, though more work needs to be carried out to find the optimal mix of materials and radiation.
Next, Ortega and Schaefer plan on experimenting with different kinds of plastic and radiation doses. They say that replacing just 1.5 percent of concrete with plastic makes it stronger and could have a significant impact. By one calculation, putting 1.5 percent plastic in the world’s concrete would slash about 0.0675 percent of global carbon dioxide emissions.
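That 0.0675 percent figure follows directly from the two percentages quoted in the article, as a quick sanity check shows:

```python
# Where the 0.0675 % figure comes from: concrete accounts for ~4.5 % of
# man-made CO2 emissions, and replacing 1.5 % of it with plastic scales
# that share down proportionally.
CONCRETE_SHARE_OF_CO2 = 4.5  # percent of global man-made CO2 emissions
PLASTIC_REPLACEMENT = 1.5    # percent of concrete replaced by plastic

savings = CONCRETE_SHARE_OF_CO2 * PLASTIC_REPLACEMENT / 100
print(f"{savings:.4f} % of global CO2 emissions")  # 0.0675 %
```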
Between Sep. 12 and 13, the thin Martian atmosphere became 25 times brighter than usual, after an extremely powerful solar storm struck the planet. The event, which was recorded by NASA’s Maven spacecraft orbiting the red planet, triggered a global ultraviolet aurora but also doubled the amount of radiation that reached the planet’s surface.
“NASA’s distributed set of science missions is in the right place to detect activity on the Sun and examine the effects of such solar events at Mars as never possible before,” said Elsayed Talaat, program scientist at NASA Headquarters in Washington for NASA’s Mars Atmosphere and Volatile Evolution, or MAVEN, mission.
The solar storm lit up Mars like a light bulb
Even though this should be a quiet period in the Sun’s 11-year sunspot and storm-activity cycle, the coronal mass ejection (CME) which triggered the Martian aurora over the weekend was extremely powerful. A CME is a burst of charged particles, mostly electrons and protons, emanating from twisted magnetic field structures, or “flux ropes”, in the Sun’s corona. These solar storms can vary wildly in strength and are known to impact Earth’s magnetosphere, being responsible for geomagnetic storms and the mesmerizing auroras. This month’s event was powerful and broad enough, for instance, to be detected on Earth even though the sunspots from which the CME gushed were on the side of the Sun facing away from us.
“The current solar cycle has been an odd one, with less activity than usual during the peak, and now we have this large event as we’re approaching solar minimum,” said Sonal Jain of the University of Colorado Boulder’s Laboratory for Atmospheric and Space Physics, who is a member of MAVEN’s Imaging Ultraviolet Spectrograph instrument team.
“When a solar storm hits the Martian atmosphere, it can trigger auroras that light up the whole planet in ultraviolet light. The recent one lit up Mars like a light bulb. An aurora on Mars can envelope the entire planet because Mars has no strong magnetic field like Earth’s to concentrate the aurora near polar regions. The energetic particles from the Sun also can be absorbed by the upper atmosphere, increasing its temperature and causing it to swell up.”
This weekend’s CME doubled radiation levels on Mars’ surface. Credit: NASA/JPL-Caltech/Univ. of Colorado/SwRI-Boulder/UC Berkeley
While MAVEN was busy studying the pretty ultraviolet lights in Mars’ atmosphere, down on the surface NASA’s Curiosity rover was measuring radiation levels. Here on Earth, we’re protected from the Sun’s bursts of plasma by the magnetosphere that shrouds our planet, blocking most radiation. Mars lost its magnetic field ages ago, though, so it’s far more vulnerable to solar mood swings. In only one day, radiation levels on the Red Planet’s surface spiked to more than double anything previously measured by Curiosity’s Radiation Assessment Detector, or RAD, since the mission started in 2012.
“This is exactly the type of event both missions were designed to study, and it’s the biggest we’ve seen on the surface so far,” said RAD Principal Investigator Don Hassler of the Southwest Research Institute’s Boulder, Colorado, office. “It will improve our understanding of how such solar events affect the Martian environment, from the top of the atmosphere all the way down to the surface.”
However, the extreme radiation exposure raises troubling new concerns about the habitability of Mars, one of the main fields of study for the Curiosity mission — and this for a planet whose surface already sees alarmingly high radiation levels. On average, Mars sees 22 millirads per day, which works out to about 8,000 millirads (8 rads) per year. For comparison, human beings in developed nations are exposed to 0.62 rads per year. Studies suggest that the human body can withstand a dose of up to 200 rads without permanent damage; however, prolonged exposure to the levels detected on Mars significantly increases the risk of acute radiation sickness, cancer, genetic damage, and even death.
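As a quick sanity check on the quoted figures (values taken from the article; 1 rad = 1,000 millirads):

```python
# Putting the quoted radiation figures on a common footing.
MARS_DAILY_MRAD = 22  # average surface dose on Mars, millirads/day
US_ANNUAL_RAD = 0.62  # average annual background dose in developed nations, rads

mars_annual_rad = MARS_DAILY_MRAD * 365 / 1000
print(f"Mars: ~{mars_annual_rad:.0f} rads/year")  # 8 rads/year
print(f"vs. background on Earth: ~{mars_annual_rad / US_ANNUAL_RAD:.0f}x")  # 13x
```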
“If you were outdoors on a Mars walk and learned that an event like this was imminent, you would definitely want to take shelter, just as you would if you were on a space walk outside the International Space Station,” Hassler said. “To protect our astronauts on Mars in the future, we need to continue to provide this type of space weather monitoring there.”
NASA, and likely SpaceX as well, given its stated goal of building a Martian colony of one million inhabitants, will have to bear these intense radiation fluctuations in mind when designing habitats.
Humanity’s effects aren’t contained to the planet’s surface alone but go all the way out to space, a new paper reports.
While we’ve left a pretty obvious trace on the face of the planet, the one we’ve left on our near-space environment is less apparent — but there is a trace, scientists found. A certain channel of communication, known as very low frequency (VLF) radio, has been found to interact with particles in space and alter their movement patterns. These interactions can even create an impromptu shield around Earth, which protects against high-energy space radiation.
The finding comes as part of a more comprehensive paper on human-caused space weather phenomena.
VLF channels are transmitted from ground stations at huge power levels, and it’s easy to see why — the most common use of these channels is to maintain communications with submarines, so the signal needs to be strong enough to pass through a lot of water and in some cases, soil. But because of their energy, these radio signals also extend far beyond our atmosphere. And because they’re so widely used, these stations cover the Earth in a VLF bubble.
Previous research has discovered that, under the right conditions, radio signals in the VLF and nearby spectrum can interact with and affect the properties of the high-energy radiation fields around the planet. In fact, the effect is so pronounced that NASA’s Van Allen Probes, which study electrons and ions high above Earth’s surface, can reliably pick up on the bubble.
Which was fortunate, because it allowed the team to pick up on one curious fact: that the VLF bubble seems to extend outwards precisely to the inner edge of the Van Allen radiation belts. These belts are bodies of charged particles held at bay by Earth’s magnetic field — think of them like a purgatory for space radiation.
The team speculates that in the absence of VLF signals, the belt’s lower limit — the so-called “impenetrable barrier” — would likely stretch far closer to the planet’s surface. Comparing the extent of the belts recorded with the Van Allen Probes today with the same value recorded by satellites in the 1960s, when VLF transmissions were more limited, shows that the barrier has indeed been pushed outwards.
The discovery suggests a relatively cheap and simple way of managing excess radiation around a planet. Plans are also underway to test whether a VLF signal source in the upper atmosphere could be used to scrub the extra charged particles which build up in the belts during periods of intense space weather — such as when the Sun erupts with giant clouds of particles and energy.
The full paper “Anthropogenic Space Weather” has been published in the journal Space Science Reviews.
Morning glory (family Convolvulaceae) seeds can survive ridiculously high doses of UV radiation, a new study found, making them well suited for future colonies on high-UV planets such as Mars. They’re so good at it that these seeds might even survive the trip between planets unprotected — lending more confidence to the theory of panspermia.
Roughly a decade ago, astronauts onboard the ISS placed about 2,000 tobacco plant (genus Nicotiana) and arabidopsis (Arabidopsis thaliana) seeds on the outside of the station, then went about their business for 558 and 682 days. The plan was to see what effects long-term exposure to UV light, cosmic radiation, and the extreme temperature fluctuations out there would have on the tiny seeds. Since any one of these factors on its own is lethal to most life as we know it, the general expectation was that they would die off.
But at the end of the experiment in 2009, when the seeds were brought back down to Earth and planted, 20% of them germinated and grew into normal, healthy-looking plants. Which was surprising, to say the least. Now, 10 years after the experiment, an international team of researchers is trying to understand why.
“Seeds are ideally suited to storing life,” says David Tepfer, an emeritus plant biologist at the Palace of Versailles Research Center of the National Institute for Agronomic Research in France.
Together with Sydney Leach, an emeritus physicist at Paris-Meudon Observatory in France, Tepfer took a closer look at the DNA of some of these space-traveling seeds that didn’t make it to the germination trials. They were looking for a short section of genetic code which had been spliced into the seeds’ genome before their space journey. This bit of code was meant to act as an overall indicator of the exposed DNA’s level of damage, and the team found degradation both on it and the seeds’ genome. It’s possible that under the harsh conditions of space, distinct bits of the DNA were chemically fused like a stack of CDs melted together. The information stored in the DNA couldn’t be read afterward, inactivating the whole strand.
Still, one issue remained unaddressed. Given the inherent space constraints and transportation difficulties, the duo had to work with small seeds for the space tests “but small seeds are generally not capable of long-term survival in the soil,” the team writes. To see what the limitations of larger seeds were, the team performed a follow-up lab experiment with three types of seeds — tobacco and arabidopsis as a control sample and morning glory seeds “for their larger size, tougher seed coats, and longevity in the soil.” They then blasted these seeds with a huge amount of radiation — roughly 6 million times as much UV as is typically used to purge drinking water of any pathogens. The tobacco and arabidopsis seeds didn’t make it, but morning glory seeds germinated normally after the exposure.
Pack some sunscreen
The team writes that their survival likely comes down to a protective layer coating the morning glory seeds, which contains flavonoids (compounds commonly found in wine and tea that act as natural sunscreens) and insulates them from the brunt of UV radiation.
Barricaded behind these flavonoids, seeds could slumber their way from one planet to the next and, assuming they don’t burn up on reentry or land on a planet where everything is toxic and awful for them, take root and jumpstart life around the Universe — a process known as panspermia.
Feeding animals a high-flavonoid diet might confer resistance to UV light and make them better suited for interplanetary travel, Tepfer suggests. “They might become more ultraviolet-resistant,” he says. “Red wine or green tea, anyone?”
The paper “Survival and DNA Damage in Plant Seeds Exposed for 558 and 682 Days outside the International Space Station” has been published in the journal Astrobiology.