Is information the fifth state of matter? Physicist says there’s one way to find out

Credit: Pixabay.

Einstein’s theory of relativity was revolutionary on many levels. One of its many groundbreaking consequences is that mass and energy are interchangeable: a particle at rest carries an energy proportional to its mass. The immediate implication is that you can make mass — tangible matter — out of energy, which explains how the universe as we know it came to be during the Big Bang, when a heck of a lot of energy turned into the first particles. But there may be much more to it.

In 2019, physicist Melvin Vopson of the University of Portsmouth proposed that information is equivalent to mass and energy, existing as a separate state of matter, a conjecture known as the mass-energy-information equivalence principle. This would mean that every bit of information has a finite and quantifiable mass. For instance, a hard drive full of information is heavier than the same drive empty.

That’s a bold claim, to say the least. Now, in a new study, Vopson is ready to put his money where his mouth is, proposing an experiment that can verify this conjecture.

“The main idea of the study is that information erasure can be achieved when matter particles annihilate their corresponding antimatter particles. This process essentially erases a matter particle from existence. The annihilation process converts all the [remaining] mass of the annihilating particles into energy, typically gamma photons. However, if the particles do contain information, then this also needs to be conserved upon annihilation, producing some lower-energy photons. In the present study, I predicted the exact energy of the infrared photons resulting from this information erasure, and I gave a detailed protocol for the experimental testing involving the electron-positron annihilation process,” Vopson told ZME Science.

Information: just another form of matter and energy?

The mass-energy-information equivalence (M/E/I) principle combines Rolf Landauer’s principle — an application of the laws of thermodynamics that treats information as another form of energy — with Claude Shannon’s information theory, which defined the digital bit. This M/E/I principle, along with its main prediction that information has mass, is what Vopson calls the 1st information conjecture.

The 2nd conjecture is that all elementary particles store information content about themselves, similarly to how living things are encoded by DNA. In another recent study, Vopson used this 2nd conjecture to calculate the information storage capacity of all visible matter in the Universe. The physicist also calculated that — at a current 50% annual growth rate in the number of digital bits humans are producing — half of Earth’s mass would be converted to digital information mass within 150 years.

However, testing these conjectures is not trivial. For instance, a 1 terabyte hard drive filled with digital information would gain a mass of only 2.5 × 10⁻²⁵ kg compared to the same erased drive. Measuring such a tiny change in mass is impossible even with the most sensitive scale in the world.
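For readers who want to check the arithmetic: the figure follows from combining Landauer’s bound on the energy of one bit with E = mc². A quick back-of-the-envelope sketch in Python (the 300 K storage temperature is our assumption):

    # Back-of-the-envelope check of the ~2.5 x 10^-25 kg figure for a full 1 TB
    # drive, using Vopson's bit mass m = kB * T * ln(2) / c^2 (the Landauer
    # energy of one bit divided by c^2). The 300 K temperature is assumed.
    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    c = 2.99792458e8     # speed of light, m/s
    T = 300.0            # assumed storage temperature, K (room temperature)

    m_bit = k_B * T * math.log(2) / c**2   # mass equivalent of one bit
    bits_per_terabyte = 8e12               # 1 TB = 8 x 10^12 bits

    print(f"mass per bit:  {m_bit:.2e} kg")                      # ~3.2e-38 kg
    print(f"mass of 1 TB:  {m_bit * bits_per_terabyte:.2e} kg")  # ~2.5e-25 kg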

Instead, Vopson has proposed an experiment that tests both conjectures using a particle-antiparticle collision. If every particle indeed contains information, and that information has its own mass, then the information has to go somewhere when the particle is annihilated. In this case, it should be converted into low-energy infrared photons.

The experiment

According to Vopson’s predictions, an electron-positron collision should produce two high-energy gamma rays, as well as two infrared photons with wavelengths around 50 micrometers. The physicist adds that altering the samples’ temperature wouldn’t influence the energy of the gamma rays, but would shift the wavelength of the infrared photons. This is important because it provides a control mechanism for the experiment that can rule out other physical processes.
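To see why temperature acts as a control knob here, note that in the M/E/I picture the energy of one bit is the Landauer bound kB·T·ln 2, so the wavelength of a photon carrying that energy away scales inversely with temperature. A minimal sketch of that scaling (our illustration of the idea, not Vopson’s full protocol; the temperatures are assumed):

    # Wavelength of a photon carrying one bit's Landauer energy:
    #   E(T) = kB * T * ln(2)  ->  lambda(T) = h * c / E(T)
    # Warming or cooling the sample shifts these infrared photons, while the
    # 511 keV annihilation gamma rays stay put -- the control mechanism above.
    import math

    k_B = 1.380649e-23    # J/K
    h = 6.62607015e-34    # J*s
    c = 2.99792458e8      # m/s

    for T in (200.0, 300.0, 400.0):        # assumed sample temperatures, K
        wavelength = h * c / (k_B * T * math.log(2))
        print(f"T = {T:.0f} K -> lambda = {wavelength * 1e6:.0f} micrometers")
    # Prints ~104, ~69, and ~52 micrometers: tens of micrometers, in the
    # same ballpark as the ~50 micrometer prediction quoted above.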

Validating the mass-energy-information equivalence principle could have far-reaching implications for physics as we know it. In a previous interview with ZME Science, Vopson said that if his conjectures are correct, the universe would contain a stupendous amount of digital information. He speculated that, considering all these things, the elusive dark matter could be just information. Only 5% of the universe is made of baryonic matter (i.e. things we can see or measure), while the remaining 95% of the mass-energy content is dark matter and dark energy — fancy terms physicists use for things they cannot yet describe or directly observe.

Then there’s the black hole information loss paradox. According to Einstein’s general theory of relativity, the gravity of a black hole is so overwhelming that nothing can escape its clutches within its event horizon — not even light. But in the 1970s, Stephen Hawking and collaborators sought to refine our understanding of black holes using quantum theory, one of whose central tenets is that information can never be lost. One of Hawking’s major predictions is that black holes emit radiation, now called Hawking radiation. But with this prediction, the late British physicist had pitted the ultimate laws of physics — general relativity and quantum mechanics — against one another, hence the information loss paradox. The mass-energy-information equivalence principle may lend a helping hand in reconciling this paradox.

“It appears to be exactly the same thing that I am proposing in this latest article, but at very different scales. Looking closely into this problem will be the scope of a different study and for now, it is just an interesting idea that must be followed,” Vopson tells me.

Finally, the mass-energy-information equivalence could help settle a whimsical debate that has been gaining steam lately: the notion that we may all be living inside a computer simulation. The debate can be traced to a seminal paper published in 2003 by Nick Bostrom of the University of Oxford, which argued that a technologically adept civilization with immense computing power could simulate new realities with conscious beings in them. Bostrom’s argument holds that at least one of three propositions must be true, one of them being that we are almost certainly living in a simulation.

While it’s easy to dismiss the computer simulation theory, once you think about it, you can’t disprove it either. But Vopson thinks the two conjectures could offer a way out of this dilemma.

“It is like saying, how a character in the most advanced computer game ever created, becoming self-aware, could prove that it is inside a computer game? What experiments could this entity design from within the game to prove its reality is indeed computational?  Similarly, if our world is indeed computational / simulation, then how could someone prove this? What experiments should one perform to demonstrate this?”

“From the information storage angle – a simulation requires information to run: the code itself, all the variables, etc… are bits of information stored somewhere.”

“My latest article offers a way of testing our reality from within the simulation, so a positive result would strongly suggest that the simulation hypothesis is probably real,” the physicist said.

What color is a mirror? It’s not a trick question

Credit: Pixabay.

When looking into a mirror, you can see yourself or the mirror’s surroundings in the reflection. But what is a mirror’s true color? It’s an intriguing question for sure since answering it requires us to delve into some fascinating optical physics.

If you answered ‘silver’ or ‘no color’ you’re wrong. The real color of a mirror is white with a faint green tint.

The discussion itself is more nuanced, though. After all, a t-shirt can also be white with a green tint but that doesn’t mean you can use it in a makeup kit.

The many faces of reflected light

We perceive the contour and color of objects due to light bouncing off them that hits our retina. The brain then reconstructs information from the retina — in the form of electrical signals — into an image, allowing us to see.

Objects are initially hit by white light, which is basically colorless daylight. This contains all the wavelengths of the visible spectrum at equal intensity. Some of these wavelengths are absorbed, while others are reflected. So it is these reflected visible-spectrum wavelengths that we ultimately perceive as color.

When an object absorbs all visible wavelengths, we perceive it as black, while an object that reflects all visible wavelengths appears white to our eyes. In practice, there is no object that absorbs or reflects 100% of incoming light — this is important when discerning the true color of a mirror.

Why isn’t a mirror plain white?

Not all reflections are the same. The reflection of light and other forms of electromagnetic radiation can be categorized into two distinct types of reflection. Specular reflection is light reflected from a smooth surface at a definite angle, whereas diffuse reflection is produced by rough surfaces that reflect light in all directions.

Credit: Olympus Lifescience.

A simple example of both types is a pool of water. When the water is calm, incident light is reflected in an orderly manner, producing a clear image of the scenery surrounding the pool. But if the water is disturbed by a rock, waves disrupt the reflection by scattering the reflected light in all directions, erasing the image of the scenery.

Credit: Olympus Lifescience.

Mirrors employ specular reflection. When visible white light hits the surface of a mirror at an incident angle, it is reflected back into space at a reflected angle equal to the incident angle. The light that hits a mirror is not separated into its component colors because it is not being “bent” or refracted, so all wavelengths are reflected at equal angles. The result is an image of the source of light. And because reflection reverses the light’s direction of travel, the image comes out flipped: a mirror image.
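For the geometrically inclined, specular reflection has a compact vector form: the reflected direction equals the incoming direction minus twice its projection onto the surface normal. A small sketch of that rule (our own illustration):

    # Law of reflection in vector form: r = d - 2 * (d . n) * n,
    # where d is the incoming ray direction and n the unit surface normal.
    # The angle of incidence equals the angle of reflection by construction.
    import numpy as np

    def reflect(d, n):
        n = n / np.linalg.norm(n)      # ensure the normal is a unit vector
        return d - 2 * np.dot(d, n) * n

    d = np.array([1.0, -1.0, 0.0])     # ray heading down toward the mirror
    n = np.array([0.0, 1.0, 0.0])      # mirror lying flat in the x-z plane
    print(reflect(d, n))               # [1. 1. 0.] -- same angle, flipped up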

However, mirrors aren’t perfectly white because the material they’re made from is imperfect itself. Modern mirrors are made by silvering, or spraying a thin layer of silver or aluminum onto the back of a sheet of glass. The silica glass substrate reflects a bit more green light than other wavelengths, giving the reflected mirror image a greenish hue.

This greenish tint is imperceptible in everyday use, but it is truly there. You can see it in action by placing two perfectly aligned mirrors facing each other so the reflected light bounces back and forth between them. This setup is known as a “mirror tunnel” or “infinity mirror.” According to a study performed by physicists in 2004, “the color of objects becomes darker and greener the deeper we look into the mirror tunnel.” The physicists found that mirrors reflect best at wavelengths between 495 and 570 nanometers, which corresponds to green.
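The mirror tunnel is easy to model: if a mirror reflects a fraction R of the light at each wavelength, then after N bounces only R to the power N survives, so even a tiny green bias compounds dramatically. A toy sketch (the reflectance numbers below are invented for illustration, not measured values):

    # Toy mirror-tunnel model: after N reflections, intensity ~ R(lambda)**N.
    # Reflectance values are illustrative guesses, not data; the point is that
    # a ~1% green bias per bounce dominates after many bounces.
    reflectance = {"blue": 0.975, "green": 0.985, "red": 0.975}

    for n_bounces in (1, 10, 50):
        surviving = {color: R**n_bounces for color, R in reflectance.items()}
        print(n_bounces, "bounces:", {c: round(s, 2) for c, s in surviving.items()})
    # After 50 bounces blue and red drop to ~0.28 while green keeps ~0.47,
    # so the image deep in the tunnel looks darker and greener.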

So, in reality, mirrors are actually white with a tiny tint of green.

The smallest refrigerator in the world will keep your nanosoda cool

This electron microscope image shows the cooler’s two semiconductors — one flake of bismuth telluride and one of antimony-bismuth telluride — overlapping at the dark area in the middle, which is where most of the cooling occurs. The small “dots” are indium nanoparticles, which the team used as thermometers. Credit: UCLA.

By using the same physical principles that have been powering instruments aboard NASA’s Voyager spacecraft for the past 40 years, researchers at UCLA have devised the smallest refrigerator in the world. The thermoelectric cooler is only 100 nanometers thick — roughly 500 times thinner than the width of a strand of human hair — and could someday revolutionize how we keep microelectronics from overheating.

“We have made the world’s smallest refrigerator,” said Chris Regan, who is a UCLA physics professor and lead author of the new study published this week in the journal ACS Nano.

Unlike the vapor-compression system inside your kitchen refrigerator, the tiny device developed by Regan’s team is thermoelectric. When two different semiconductors are sandwiched between metal plates, two things can happen.

If heat is applied, one side becomes hot while the other remains cool, and this temperature difference can be harvested to generate electricity. Case in point: the Voyager spacecraft, which is believed to have traveled beyond the limits of the solar system after visiting the outermost planets, is still powered to this day by thermoelectric devices that generate electricity from the heat of decaying plutonium.

This process also works in reverse. When electricity is applied, one semiconductor heats up, while the other stays cold. The cold side can thus function as a cooler or refrigerator.
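The textbook way to model such a thermoelectric (Peltier) cooler is to tally the heat pumped from the cold side: a Peltier term proportional to current, minus half the Joule heating, minus the heat leaking back by conduction. A rough sketch with invented parameter values (not those of the UCLA device):

    # Lumped model of a thermoelectric cooler:
    #   Q_cold = S * T_cold * I - 0.5 * I**2 * R - K * (T_hot - T_cold)
    # Peltier cooling minus half the Joule heat minus back-conduction.
    # All parameter values are hypothetical, for illustration only.
    S = 0.05            # Seebeck coefficient, V/K
    R = 20.0            # electrical resistance, ohms
    K = 0.01            # thermal conductance, W/K
    T_cold, T_hot = 280.0, 300.0   # temperatures, K

    for I in (0.1, 0.5, 1.0, 2.0):  # drive current, A
        Q_cold = S * T_cold * I - 0.5 * I**2 * R - K * (T_hot - T_cold)
        print(f"I = {I:.1f} A -> net cooling power {Q_cold:+.1f} W")
    # Cooling peaks near I = S * T_cold / R; push the current much higher
    # and Joule heating overwhelms the Peltier effect (note the sign flip).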

What the UCLA physicists were able to do is scale down thermoelectric cooling by a factor of more than 10,000 compared to the previous smallest thermoelectric cooler.

They did so using two standard semiconductor materials: bismuth telluride and antimony-bismuth telluride. Although the materials are common, combining the two bismuth compounds in two-dimensional structures yielded excellent thermoelectric performance.

Typically, the materials employed in thermoelectric coolers are good electrical conductors but poor thermal conductors. These properties are generally mutually exclusive — but not in the case of the atom-thick bismuth combo.

“Its small size makes it millions of times faster than a fridge that has a volume of a millimeter cubed, and that would already be millions of times faster than the fridge you have in your kitchen,” Regan said.

“Once we understand how thermoelectric coolers work at the atomic and near-atomic level,” he said, “we can scale up to the macroscale, where the big payoff is.”

One of the biggest challenges the researchers had to face was measuring the temperature at such a tiny scale. Your typical thermometer simply won’t do. Instead, the physicists employed a technique that they invented in 2015 called PEET, or plasmon energy expansion thermometry. The method determines temperature at the nanoscale by measuring changes in density with a transmission electron microscope.

In this specific case, the researchers placed nanoparticles of indium in the vicinity of the thermoelectric cooler. As the device cooled or heated, the indium correspondingly contracted or expanded. By measuring the density of indium, the temperature of the nano-cooler could be determined precisely.
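The logic of that measurement can be sketched in a few lines: for small temperature changes a solid’s volume grows roughly linearly with temperature, so its density drops, and a measured density change can be inverted to give the temperature change. A simplified sketch (the expansion coefficient below is an approximate handbook value, and real PEET analysis is considerably more involved):

    # Simplified thermometry logic behind the indium "thermometers":
    #   V(T) ~ V0 * (1 + 3 * alpha * dT)  for small dT, with alpha the linear
    #   expansion coefficient, so density falls as temperature rises and
    #   dT ~ -(d_rho / rho0) / (3 * alpha).
    alpha_indium = 32e-6   # per K, approximate linear expansion of indium

    def delta_T_from_density(relative_density_change):
        return -relative_density_change / (3 * alpha_indium)

    # A measured 0.1% *drop* in indium density implies warming of roughly 10 K:
    print(f"{delta_T_from_density(-0.001):.1f} K")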

“PEET has the spatial resolution to map thermal gradients at the few-nanometer scale—an almost unexplored regime for nanostructured thermoelectric materials,” said Regan.

The winning combination of semiconductors found by the UCLA physicists may one day be brought to the macro scale, enabling a new class of cooling systems with no moving parts that regulate temperature in telescopes, microelectronics, and other high-end equipment.

Annie Jump Cannon: the legend behind stellar classification

It is striking that today we can not only discover but even classify stars that are light-years from Earth — sometimes even billions of light-years away. Stellar classification often uses the famous Hertzsprung–Russell diagram, which summarises the basics of stellar evolution. The luminosity and the temperature of stars can teach us a lot about their life journey as they burn their fuel and change chemical composition.

We know that some stars are made up mostly of ionised or neutral helium, that some are hotter than others, and that the Sun fits in as a not-so-impressive star compared to the giants. Part of that development came from Annie Jump Cannon’s contributions during her long career as an astronomer.

The Hertzsprung–Russell diagram, where the evolution of Sun-like stars is traced. Credits: ESO.

On the shoulders of giantesses

Cannon was born in 1863 in Dover, Delaware, US. When she was 17 years old, thanks to her father’s support, she traveled 369 miles from her hometown to attend classes at Wellesley College. It’s no big deal for teens today, but back then it was quite an adventure for a young lady. The institution offered education exclusively for women, an ideal environment to spark in Cannon an ambition to become a scientist. She graduated in 1884 and, in 1896, started her career at the Harvard Observatory.

At Wellesley, her astronomy professor was Sarah Whiting, who sparked Cannon’s interest in spectroscopy:

“… of all branches of physics and astronomy, she was most keen on the spectroscopic development. Even at her Observatory receptions, she always had the spectra of various elements on exhibition. So great was her interest in the subject that she infused into the mind of her pupil who is writing these lines, a desire to continue the investigation of spectra.”

Annie Jump Cannon, in Sarah Whiting’s obituary, 1927.

Cannon had an explorer’s spirit and travelled across Europe, publishing a photography book in 1893 called “In the Footsteps of Columbus”. It is believed that during her years at Wellesley, after the trip, she contracted scarlet fever. The infection affected her ears, leaving her with severe hearing loss, but that didn’t put an end to her social or scientific activities. Annie Jump Cannon was known for not missing a single American Astronomical Society meeting during her career.

OBAFGKM

At Radcliffe College, she began working more with spectroscopy. Her first work on the spectra of southern stars was published in 1901 in the Annals of the Harvard College Observatory. Edward C. Pickering, the director of the observatory, put Cannon in charge of observing the stars that would eventually form the Henry Draper Catalogue, named after the first person to photograph the spectrum of a star.

Annie Jump Cannon at her desk at the Harvard College Observatory. Image via Wiki Commons.

The job didn’t pay much. In fact, Harvard employed a number of women as “computers” to process astronomical data. The women computers at Harvard earned less than secretaries did, and this enabled researchers to hire more of them, as men would have needed to be paid more.

Her salary was only 25 cents an hour, a meager income for the difficult job of poring over the tiny details of the spectrographs, often possible only with magnifying glasses. She was known for being focused (a trait possibly reinforced by her deafness), but she was also known for being fast.

During her career, she managed to classify the spectra of 225,000 stars. At the time, Williamina Fleming, a Scottish astronomer, was the Harvard lady in charge of the women computers. She had previously observed 10,000 stars from the Draper Catalogue and classified them with letters from A to N. But Annie Jump Cannon saw the link between the classes and the stars’ temperatures, and rearranged Fleming’s scheme into the OBAFGKM system. The OBAFGKM system orders the stars from the hottest to the coldest, and astronomers created a popular mnemonic for it: “Oh Be A Fine Guy/Girl Kiss Me”.
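The ordering is by surface temperature. A small lookup table with the conventional (approximate) temperature ranges makes the system concrete:

    # The OBAFGKM sequence orders stars from hottest to coolest surfaces.
    # Ranges below are the conventional approximate boundaries, in Kelvin.
    spectral_classes = [
        ("O", "above 30,000 K"),
        ("B", "10,000-30,000 K"),
        ("A", "7,500-10,000 K"),
        ("F", "6,000-7,500 K"),
        ("G", "5,200-6,000 K (the Sun, at ~5,800 K, is a G star)"),
        ("K", "3,700-5,200 K"),
        ("M", "below 3,700 K"),
    ]
    for letter, temperature_range in spectral_classes:
        print(letter, "-", temperature_range)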

Legacy

“A bibliography of Miss Cannon’s scientific work would be exceedingly long, but it would be far easier to compile one than to presume to say how great has been the influence of her researches in astronomy. For there is scarcely a living astronomer who can remember the time when Miss Cannon was not an authoritative figure. It is nearly impossible for us to imagine the astronomical world without her. Of late years she has been not only a vital, living person; she has been an institution. Already in our school days she was a legend. The scientific world has lost something besides a great scientist.”

Cecilia Payne-Gaposchkin in Annie Jump Cannon’s obituary.
Annie Jump Cannon at Harvard University. Smithsonian Institution @ Flickr Commons.

Annie Jump Cannon was awarded many prizes: she received an honorary doctorate from Oxford University, was the first woman to receive the Henry Draper Medal in 1931, and was the first woman to become an officer of the American Astronomical Society.

Her work in stellar classification was continued by Cecilia Payne-Gaposchkin, another dame of stellar spectroscopy. Payne refined the system using quantum mechanics and described what stars are actually made of.

Very few scientists have had as competent and exemplary a career as Cannon. Payne continued the work Cannon left behind, and Henry Norris Russell later built on Payne’s results with minimal citation. From that lineage, we get today’s basic understanding of stellar classification. Cannon’s beautiful legacy has recently been brought back into the light by other female astronomers who know the importance of her life’s work.

This ‘everlasting bubble’ endured 465 days without popping

An iridescent soap bubble. Credit: Pixabay.

Soap bubbles can be dazzlingly beautiful and entertaining, especially if they’re wielded by a good bubbleologist — a person who makes soap bubbles professionally. The show never lasts long though, as the bubbles pop within seconds or minutes. But not all bubbles are so ephemeral and fragile.

In a new study, researchers in France described how they created a so-called ‘everlasting bubble’ out of water, glycerol, and plastic particles. True to its name, the most resilient bubble made by the scientists survived for a staggering 465 days, which might as well be forever compared to the usual fleeting lifetime of soap bubbles.

“Soap bubbles are by essence fragile and ephemeral,” the study’s authors wrote in their paper that appeared in the journal Physical Review Fluids. “We design bubbles made of a composite liquid film able to neutralize all these effects and keep their integrity for more than one year in a standard atmosphere.”

Soap bubbles naturally take the form of a sphere due to the phenomenon of surface tension, a property of a liquid surface that makes it act as if it were a stretched elastic membrane. In this case, the liquid is the combination of water and soap that traps air. The greater the surface area, the more energy is required to maintain a given shape, which is why bubbles settle into spheres — the 3-D shape with the lowest surface area for the volume it encloses.
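The pressure side of this balance is captured by the Young–Laplace relation: a soap bubble has two liquid surfaces (inner and outer), so the excess pressure inside it is 4γ/R, where γ is the surface tension and R the radius. A quick sketch with a typical soapy-water value for γ (assumed, for illustration):

    # Young-Laplace for a soap bubble (two surfaces): delta_P = 4 * gamma / R.
    # Smaller bubbles sit at higher internal pressure -- surface tension
    # squeezes them harder. gamma is a typical soapy-water value, assumed.
    gamma = 0.025   # surface tension, N/m

    for radius in (0.005, 0.05, 0.5):          # bubble radius, meters
        delta_P = 4 * gamma / radius           # excess pressure, pascals
        print(f"R = {radius * 100:4.1f} cm -> delta_P = {delta_P:.1f} Pa")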

However, the delicate equilibrium that enables surface tension is short-lived in soap bubbles. As the water in the soap film evaporates, the film becomes thinner and thinner until the surface tension is broken and the bubble inevitably pops. When smaller bubbles merge into larger ones, the resulting bubble is also more fragile.

You can make soap bubbles last a little longer by adding certain surfactants that strengthen the thin liquid soap film, but this will only delay the inevitable.

But not all bubbles are this ephemeral. In 2017, researchers demonstrated a new type of bubble known as a “gas marble”: a pocket of gas wrapped in a thin liquid shell armored with plastic microspheres. These bubbles were inspired by “liquid marbles”, which are droplets of liquid coated with microscopic, hydrophobic beads. Both types of structures are very stable and can sustain relatively large external forces.

For instance, liquid marbles can be rolled around a solid surface without breaking apart, and gas marbles are tenfold stronger than liquid marbles.

These interesting properties allow gas marbles to last a lot longer than soap bubbles before popping, but no one has ever tested just how long they can last — until now.

Everlasting bubbles made from plastic microspheres and glycerol. Credit: Physical Review Fluids.

Researchers at the University of Lille in France performed an experiment in which they blew three different types of bubbles: soap bubbles, water-based gas marbles, and water-glycerol-based gas marbles.  Glycerol is a compound found in soap that bonds well with water molecules.

The soap bubbles were predictably the most unstable, lasting less than a minute. The water-based marbles lasted much longer, collapsing after 6 to 60 minutes. But the water-glycerol marbles were by far the winners, remaining intact for much longer; the longest-lasting one survived for 465 days.

Everlasting bubbles owe their whopping longevity to the stabilizing effects of glycerol, which has a strong affinity with water and can soak up water from the air to make up for the quantity lost to evaporation. Meanwhile, the tiny plastic particles prevent water drainage from the shell. Together, both effects combine to protect the bubble from rupturing.

Okay, that’s pretty cool, but what good is it to have an ‘everlasting bubble’? The researchers didn’t detail any practical applications to this research in their study, but there could be some uses in areas where preventing evaporation is important. For instance, everlasting bubble coating could be used to shield certain medicinal aerosols and sprays so they last longer in the atmosphere.

The journey of galaxy clusters over billions of years

A new study modeled the dynamics and evolution of some of the largest known structures in the universe.

Extragalactic neighborhood. Credit: Wikimedia Commons.

Let’s take a moment to look at our position in the universe.

We live in a solar system orbiting the center of the Milky Way galaxy, which itself lies in the Local Group of galaxies, next to the Local Void, a vast region of space with far fewer galaxies than expected. Wait, we’re not done yet. These structures are part of a larger region that encompasses thousands of galaxies: a supercluster called the Laniakea Supercluster, which is around 520 million light-years across.

A group of researchers has now simulated the movement of galaxies in Laniakea and other clusters of galaxies, from when the universe was in its infancy (just 1.6 million years old) until today. They used observations from the Two Micron All-Sky Survey (2MASS) and the Cosmicflows-3 catalog as the starting point for their study. With these two tools, they looked at galaxies orbiting massive regions with velocities of up to 8,000 km/s — and made videos describing those orbits.

Because the universe is expanding, and that expansion influences the evolution of these superclusters, we first need to know how fast the universe is expanding — a number that has proven very difficult to pin down. So the team considered several plausible expansion scenarios when deriving the clusters’ motion.
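To get a feel for why the expansion rate matters at these scales: in a smooth universe, recession speed grows with distance as v = H0·d (Hubble’s law), and across a structure the size of Laniakea that flow is comparable to the peculiar velocities quoted above. A rough sketch (the value of H0 is assumed; pinning it down is exactly the hard part):

    # Hubble's law: recession velocity v = H0 * d. Across Laniakea (~520
    # million light-years) the expansion flow rivals the ~8,000 km/s motions
    # in the study, which is why the assumed expansion rate matters.
    H0 = 70.0              # km/s per megaparsec -- one commonly used value
    MLY_PER_MPC = 3.2616   # 1 megaparsec = 3.26 million light-years

    def recession_speed_km_s(distance_mly):
        return H0 * (distance_mly / MLY_PER_MPC)

    print(f"{recession_speed_km_s(520):.0f} km/s across Laniakea")  # ~11,000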

Besides Laniakea, the scientists report two other zones where galaxies appear to be flowing toward a common gravitational attractor: Perseus-Pisces (a supercluster spanning roughly 250 million light-years) and the Great Wall (a structure about 1.37 billion light-years across). In the Laniakea region, galaxies flow towards the Great Attractor, a very dense part of the supercluster. The other superclusters show similar patterns; the Perseus-Pisces galaxies, for instance, flow towards the spine of the cluster’s large filament.

The researchers even predicted the future of these galaxies, tracing their paths roughly 10 billion years ahead. Their videos make the big picture clear: the expansion of the universe dominates on the largest scales, while in smaller, denser regions gravitational attraction prevails — as with the future merger of the Milky Way and Andromeda (“Milkomeda”) in the Local Group.

The study has been accepted for publication in the Astrophysical Journal.

Liquid water on Mars? Actually, probably not

A view of Mars’ south pole. Research led by The University of Texas at Austin found that a 2018 discovery of liquid water under the Red Planet’s south polar cap is most likely just radar reflecting from volcanic rock. Credit: ESA/DLR/FU Berlin.

In 2018, an announcement got us all excited. A study, based on an interpretation of radar data, suggested that there may be liquid lakes under the ice cap at Mars’s south pole. “We interpret this feature as a stable body of liquid water on Mars,” the authors wrote in the study. Now, researchers led by The University of Texas at Austin have revisited that interpretation.

“Here, we aim to determine if Martian terrains today could produce strong basal echoes if they were covered by a planet-wide ice sheet,” the researchers write in their paper.

“We find that some existing volcanic-related terrains could produce a very strong basal signal analog to what is observed at the South polar cap. Our analysis strengthens the case against a unique hypothesis based solely on liquid water for the nature of the polar basal material,” they add.

Water signals

Researchers have put several scientific instruments on and around Mars. Among them is MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) — a low-frequency radar developed by researchers working in Italy. Radar waves from MARSIS can penetrate through ice (and to a lesser extent, through rocks), offering clues about the surface as well as the subsurface of the Red Planet.

When the radar waves encounter a boundary between different materials (for instance, when they pass from ice into rock), part of their energy is reflected back. Based on this type of data, certain deductions can be made about the layers through which the waves passed — but the results are not always clear.
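How strong that reflected echo is depends on the contrast in the materials’ electrical properties. At normal incidence, a standard first-pass estimate uses each material’s relative permittivity in the Fresnel formula. A simplified sketch (the permittivities are rough illustrative figures; the actual MARSIS analyses are far more sophisticated):

    # Normal-incidence Fresnel power reflectivity between two media:
    #   R = ((sqrt(e1) - sqrt(e2)) / (sqrt(e1) + sqrt(e2)))**2
    # Permittivity values are rough illustrative figures. Liquid water gives
    # a far brighter echo under ice than dry rock -- one reason the bright
    # patch was first read as water.
    import math

    def reflectivity(e1, e2):
        a, b = math.sqrt(e1), math.sqrt(e2)
        return ((a - b) / (a + b)) ** 2

    ICE = 3.1   # relative permittivity of water ice (approximate)
    for name, eps in (("liquid water", 80.0), ("hydrated clay", 40.0), ("dry basalt", 8.0)):
        print(f"ice over {name}: R = {reflectivity(ICE, eps):.2f}")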

For instance, the 2018 study concluded that a “shiny” patch (a patch that is very reflective to radar data) close to Mars’ frozen south pole could indicate a subsurface lake, 1.4 kilometers (0.87 miles) under the ice.

MARSIS data showing an area of high reflectivity (dark blue) that was thought to be water. Credit: USGS Astrogeology Science Center, Arizona.

If this were indeed the case, and Mars were to host a network of such lakes, it would be groundbreaking. Not only would this mean that life may still exist on Mars, but it could also be very helpful in the case of a human Martian colony. But right from the get-go, this interpretation had critics.

For instance, a 2019 lab study that simulated the conditions on Mars concluded that these areas are simply too cold to host liquid water, even salty liquid water (which stays liquid at colder temperatures). Instead, the authors of that study suggested that the reflective patch is a clay mass.

Now, a new study suggests that neither of those is true, and the patches are simply volcanic rocks.

A song of rocks and water

The new study, led by planetary scientist Cyril Grima of the University of Texas Institute for Geophysics, used a clever trick to probe the nature of the reflective patch. The researchers wondered what they’d see if they “covered” all of Mars with an ice sheet like the one at the south pole. In other words, if the conditions under which the reflective patch was discovered were replicated all over the planet, would other patches like it show up?

The answer turned out to be ‘yes’.

Mars as it might appear covered in ice. The red spots are reflective patches. Image credits: Cyril Grima.

Researchers found a bunch of reflective patches scattered across the Red Planet. The researchers overlaid these patches with a map of the Martian geology, noting that they neatly matched the outline of volcanic rocks.

This makes sense: just as water and clay are reflective to radar, so too are volcanic rocks. Mars also has a lot of volcanic rock, not all of which has been mapped yet; it’s possible that such an area lies around the south pole, unbeknownst to researchers.

“I think the beauty of Grima’s finding is that while it knocks down the idea there might be liquid water under the planet’s south pole today, it also gives us really precise places to go look for evidence of ancient lakes and riverbeds and test hypotheses about the wider drying out of Mars’ climate over billions of years,” says planetary scientist Isaac Smith of York University in Canada, who led the frozen clay study.

Relative reflective strength of Mars if the surface was entirely covered by a 1.4-km dirty ice sheet. Credits: Grima et al, 2022.

Ultimately, while this latest study suggests the reflective patches are volcanic rocks, the last word may not be in yet. The odds of them being liquid water seem quite slim at the moment, but whether they’re volcanic rocks, clay, or something else entirely is not at all clear. Smith, the York University Mars geophysicist, believes this is a good example of how science should work.

“Science isn’t foolproof on the first try,” said Smith, who is an alumnus of the Jackson School of Geosciences at UT Austin. “That’s especially true in planetary science where we’re looking at places no one’s ever visited and relying on instruments that sense everything remotely.”

Hopefully, future missions and studies will help shed more light on what lies under the Martian surface.

The study was published in Geophysical Research Letters.

China builds the world’s first artificial moon

Chinese scientists have built an ‘artificial moon’ that produces lunar-like gravity to help them prepare astronauts for future exploration missions. The facility uses a powerful magnetic field to mimic the moon’s weak pull — an approach inspired by experiments once used to levitate a frog.

The key component is a vacuum chamber that houses an artificial moon measuring 60cm (about 2 feet) in diameter. Image credits: Li Ruilin, China University of Mining and Technology

Preparing to colonize the moon

Simulating low gravity on Earth is a complex process. Current techniques require either flying a plane that enters a free fall and then climbs back up again, or using a drop tower — and both provide only brief spells of reduced gravity. With the new facility, the magnetic field can be switched on or off as needed, producing microgravity, lunar gravity, or Earth-level gravity on demand. It is also strong enough to magnetize and levitate objects against the gravitational force for as long as needed.

All of this means that scientists will be able to test equipment in an extreme simulated environment and prevent costly mistakes. This is valuable because problems can arise in missions due to the moon’s lack of atmosphere, which makes temperatures change quickly and dramatically, and because in low gravity rocks and dust behave in a completely different way than on Earth, being more loosely bound to each other.

Engineers from the China University of Mining and Technology built the facility (which they plan to launch in the coming months) in the eastern city of Xuzhou, in Jiangsu province. At its heart, a vacuum chamber houses a mini “moon” measuring 60 cm (about 2 feet) in diameter. The artificial landscape consists of rocks and dust as light as those found on the lunar surface — where gravity is about one-sixth as strong as Earth’s — held aloft by powerful magnets. The team plans to test a host of technologies whose primary purpose is to perform tasks and build structures on the surface of the Earth’s only natural satellite.

Group leader Li Ruilin of the China University of Mining and Technology says the facility is the “first of its kind in the world”, one that will take lunar simulation to a whole new level. Their artificial moon, he adds, makes gravity “disappear” — and for “as long as you want.”

In an interview with the South China Morning Post, the team explains that some experiments take just a few seconds, such as an impact test. Meanwhile, others like creep testing (where the amount a material deforms under stress is measured) can take several days.

Li said the facility could also be used to determine whether it is feasible to 3D-print structures on the lunar surface, rather than deploying heavy equipment that can’t be used on the mission. He continues:

“Some experiments conducted in the simulated environment can also give us some important clues, such as where to look for water trapped under the surface.”

It could also help assess whether a permanent human settlement could be built there, including issues like how well the surface traps heat.

From amphibians to artificial celestial bodies

The group explains that the idea originates from experiments by Russian-born, UK-based physicist Andre Geim, who levitated a frog with a magnet, a feat that earned him the satirical Ig Nobel Prize in 2000, which celebrates science that “first makes people laugh, and then think.” Geim also won the Nobel Prize in Physics in 2010 for his work on graphene.

The foundation of his work is a phenomenon known as diamagnetic levitation: when scientists apply an external magnetic field to a material, the field induces a weak repulsion between the object and the magnets, causing it to drift away from them and ‘float’ in midair.

For this to happen, the magnetic field must be strong enough to ‘magnetize’ the atoms that make up the material. Essentially, the atoms inside the object (or frog) act as tiny magnets, responding to the field around them. A sufficiently powerful field slightly alters the motion of the electrons orbiting the atoms’ nuclei, inducing a weak magnetic field of their own that opposes the applied one and pushes the object away from the magnets.
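Levitation happens when the magnetic force on that induced magnetization balances gravity. For a diamagnetic material of density rho and volume susceptibility chi, the standard balance condition is B·(dB/dz) = mu0·rho·g/|chi|. A quick sketch with textbook values for water, which is, roughly speaking, what a frog is made of:

    # Diamagnetic levitation condition: B * dB/dz = mu0 * rho * g / |chi|.
    # For water this works out to roughly 1,400 T^2/m -- achievable only in
    # very strong magnets, hence the famous levitating frog.
    import math

    mu0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
    g = 9.81               # gravitational acceleration, m/s^2
    rho_water = 1000.0     # density of water, kg/m^3
    chi_water = 9e-6       # |volume susceptibility| of water (approximate)

    required = mu0 * rho_water * g / chi_water
    print(f"required B * dB/dz ~ {required:.0f} T^2/m")   # ~1,370 T^2/m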

Diamagnetic levitation of a tiny horse. Image credits: Pieter Kuiper / Wiki Commons.

Different substances on Earth have varying degrees of diamagnetism which affect their ability to levitate under a magnetic field; adding a vacuum, as was done here, allowed the researchers to produce an isolated chamber that mimics a microgravity environment.

However, simulating the harsh lunar environment was no easy task as the magnetic force needed is so strong it could tear apart components such as superconducting wires. It also affected the many metallic parts necessary for the vacuum chamber, which do not function properly near a powerful magnet.

To counteract this, the team came up with several technical innovations, including a simulated lunar dust that floats more easily in the magnetic field and the replacement of steel with aluminum in many of the critical components.

The new space race

This breakthrough signals China’s intent to take first place in the international space race. That includes its lunar exploration program (named after the mythical moon goddess Chang’e), whose recent milestones include landing a rover on the far side of the moon in 2019 and, in 2020, bringing rock samples back to Earth for the first time in over 40 years.

Next, China wants to establish a joint lunar research base with Russia, which could start as soon as 2027.  

The new simulator will help China better prepare for its future space missions. For instance, the Chang’e 5 mission returned with far fewer rock samples than planned in December 2020, as the drill hit unexpected resistance. Previous missions led by Russia and the US have also had related issues.

Experiments conducted on a smaller prototype simulator suggested drill resistance on the moon could be much higher than predicted by purely computational models, according to a study by the Xuzhou team published in the Journal of China University of Mining and Technology. The authors hope the work will enable space engineers across the globe (and, in the future, on the moon) to adjust their equipment before launching multi-billion-dollar missions.

The team is adamant that the facility will be open to researchers worldwide, and that includes Geim. “We definitely welcome Professor Geim to come and share more great ideas with us,” Li said.

COVID lockdowns led to less lightning in the sky

Credit: Pixabay.

During the spring of 2020, when the coronavirus pandemic caught everyone with their pants down, governments scrambled to close their borders and impose strict lockdowns in order to curb the spread of a poorly understood virus. Human activity slowed to a crawl and, as a result, the air and water became cleaner. Fewer vehicles on the road meant urban spaces became safer for animals and much quieter. There were even viral reports of dolphins in the canals of Venice, Italy, and pumas in the streets of Santiago, Chile, prompting many to triumphantly claim ‘nature is healing’.

The ‘healing’ part is hyperbolic, but what’s clear is that nature went through significant changes as a result of our lockdowns — and it even showed in the sky. At the recent American Geophysical Union meeting in New Orleans, scientists at MIT reported that a drop in atmospheric aerosols due to shuttering of activity coincided with a drop in lightning.

According to a new study, reduced human activity lowered aerosol emissions — microscopic particles in the atmosphere, too small to see with the naked eye, that largely result from burning fossil fuels — affecting the electrical charge of clouds and their ability to form lightning.

Between March 2020 and May 2020, there were 19% fewer intracloud flashes (the most common type of lightning) compared to the same three-month period in 2018, 2019, and 2021.

Earle Williams, a physical meteorologist at the Massachusetts Institute of Technology, and colleagues used three different methods to measure lightning, all of which pointed to the same trend: diminished lightning activity associated with diminished aerosol concentration.

Water vapor condenses onto atmospheric aerosols, which helps form cloud droplets; without aerosols, we wouldn’t have clouds. When there are more aerosols in the atmosphere, the available water vapor is distributed across more droplets, making each one smaller and less likely to coalesce into raindrops. As a result, clouds grow larger but precipitation is suppressed.

Furthermore, clouds seeded with fewer aerosols contain fewer positively charged ice particles to interact with the negatively charged hail in the lower part of the cloud, which explains why there was less lightning striking the surface or discharging into the air.

For instance, lightning flashes are more frequent along shipping routes, where freighters emit particulates into the air, compared to the surrounding ocean. And the most intense thunderstorms in the tropics brew up over land, where aerosols are elevated by both natural sources and human activity.

Areas with the strongest reduction of aerosols also experienced the most dramatic drops in lightning events. These include Southeast Asia, Europe, and most of Africa. North and South America also experienced a reduction in lightning, but not as dramatic as in other places. Researchers believe that some of the drop in aerosol pollution due to human activity in the Americas was offset by the catastrophic wide-scale fires experienced in 2020.

Lightning is an important component of the weather system, which is why scientists are so interested in understanding it better. From an ecological perspective, lightning also interacts with air molecules to produce nitrogen oxides, a family of poisonous, highly reactive gases.

Ultracold atoms spun on a string form quantum tornadoes

Like weather patterns on Earth, the spinning of a fluid of quantum particles led to the formation of swirling ‘quantum crystals’. Credit: MIT.

The familiar universe around us behaves in largely predictable ways, physically speaking. It’s why we’re confident to board a flight, and why we aren’t surprised when a computer performs a task exactly as instructed. But when zooming into the world of the very small, at the atomic level, our assumptions about how the universe works — a model known as classical physics — start to break down. Case in point: MIT physicists have coaxed a bunch of ultra-chilled atoms into exhibiting a never-before-seen phenomenon, a crystal made of ‘quantum tornadoes’.

Welcome to the bizarre world of quantum physics. Get in, no time to explain!

Before delving into the specifics of this latest research, it’s worth taking a trip down memory lane, back to the 1980s. This was a fruitful decade in physics, with a flurry of particle physics activity that eventually led to the discovery of a new family of matter known as quantum Hall fluids, consisting of clouds of electrons suspended in magnetic fields.

Classical physics dictates that the electrons in the Hall ‘fluid’ should repel each other and arrange themselves in an orderly lattice, forming a crystal. Except that no. Just no. Instead, the particles always adjust their behavior to what their neighbors are doing, all in a correlated way.

“People discovered all kinds of amazing properties, and the reason was, in a magnetic field, electrons are (classically) frozen in place—all their kinetic energy is switched off, and what’s left is purely interactions,” says Richard Fletcher, assistant professor of physics at MIT. “So, this whole world emerged. But it was extremely hard to observe and understand.”

Fletcher and Martin Zwierlein, the Thomas A. Frank Professor of Physics at MIT, wondered if they could replicate this effect in an experiment that makes it easier to see. Electrons in a magnetic field move in very small increments, whereas the motion of atoms under rotation occurs over much larger length scales, so the researchers realized they could use ultracold atoms in lieu of electrons to put on a far more visible show.

For their study, they used lasers to trap around one million sodium atoms, cooling them to around 100 nanokelvins, a hair’s breadth away from absolute zero. A system of electromagnets both further confined the atoms and collectively spun them around like marbles in a bowl at about 100 rotations per second.

Using high-speed, high-precision optical cameras to observe what was happening at the atomic level, the physicists found that the atoms spun into a long thread at around the 100-millisecond mark. This was the threshold at which the atoms’ behavior crossed over from classical to quantum.

“In a classical fluid, like cigarette smoke, it would just keep getting thinner,” Zwierlein says. “But in the quantum world, a fluid reaches a limit to how thin it can get.”

“When we saw it had reached this limit, we had good reason to think we were knocking on the door of interesting, quantum physics,” adds Fletcher. “Then the question was, what would this needle-thin fluid do under the influence of purely rotation and interactions?”

As the atoms continued to spin, quantum instability kicked in. The thread wavered, then twisted into a corkscrew, before finally breaking into a string of rotating blobs resembling tornadoes. The authors called the resulting structure a ‘quantum crystal’, whose shape is purely the result of the interplay between the rotation of the fluid and the forces between the atoms.

“This crystallization is driven purely by interactions, and tells us we’re going from the classical world to the quantum world,” said Fletcher in a statement.

According to the researchers, the evolution of the spinning atoms broadly mimics how Earth’s rotation creates large-scale weather patterns. It is a fine example of how the very small and the very large are not as disconnected as the wackiness of quantum physics might lead us to believe.

“The Coriolis effect that explains Earth’s rotational effect is similar to the Lorentz force that explains how charged particles behave in a magnetic field,” Zwierlein notes. “Even in classical physics, this gives rise to intriguing pattern formation, like clouds wrapping around the Earth in beautiful spiral motions. And now we can study this in the quantum world.”

“This evolution connects to the idea of how a butterfly in China can create a storm here, due to instabilities that set off turbulence,” Zwierlein explains. “Here, we have quantum weather: The fluid, just from its quantum instabilities, fragments into this crystalline structure of smaller clouds and vortices. And it’s a breakthrough to be able to see these quantum effects directly.”

The findings appeared in the journal Nature.

Scientists image atoms with record resolution close to absolute physical limits

An electron ptychographic reconstruction of a praseodymium orthoscandate (PrScO3) crystal, zoomed in 100 million times. Credit: Cornell University.

Physicists at Cornell University have pushed the boundaries of atomic imaging by improving the resolution of an electron microscope by a factor of two. Many modern smartphones have high-resolution cameras that let you zoom in a long way, but they’re no match for this setup, which can reconstruct ultraprecise images with a precision of one-trillionth of a meter. You can see individual atoms and the chemical bonds in molecules.

The researchers, led by Professor David Muller, devised an electron microscope pixel array detector and state-of-the-art 3D reconstruction algorithms to take laser-precise images of atoms. The resolution is so sharp that the only blurred element is the thermal jiggling of the atoms themselves.

“This doesn’t just set a new record,” Muller said. “It’s reached a regime which is effectively going to be an ultimate limit for resolution. We basically can now figure out where the atoms are in a very easy way. This opens up a whole lot of new measurement possibilities of things we’ve wanted to do for a very long time,” Muller said.

The breakthrough hinges on a computer-algorithm-driven technique known as ptychography, which works by scanning overlapping scattering patterns from a sample and then looking for changes in the overlapping region.

“We’re chasing speckle patterns that look a lot like those laser-pointer patterns that cats are equally fascinated by,” Muller said. “By seeing how the pattern changes, we are able to compute the shape of the object that caused the pattern.”

The microscope’s beam is intentionally defocused ever so slightly. This way, the blurred beam can capture the widest range of data possible. The data is then used to reconstruct a sharp image of the sample via complex algorithms.

“With these new algorithms, we’re now able to correct for all the blurring of our microscope to the point that the largest blurring factor we have left is the fact that the atoms themselves are wobbling, because that’s what happens to atoms at finite temperature,” Muller said. “When we talk about temperature, what we’re actually measuring is the average speed of how much the atoms are jiggling.”
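For readers curious what “chasing speckle patterns” looks like algorithmically, here is a toy ptychographic reconstruction in the spirit of the widely used ePIE scheme: scan a known probe across an unknown object, record only diffraction intensities, then repeatedly nudge each overlapping patch of the object estimate toward consistency with the data. This is a pedagogical sketch, not the Cornell group’s actual code:

    # Toy ptychography (ePIE-style object update) with simulated data.
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 64, 16                                    # object and probe sizes
    true_object = np.exp(1j * rng.uniform(0, 1, (N, N)))   # unknown phase object
    probe = np.ones((P, P), dtype=complex)           # idealized known probe

    # Overlapping scan positions: a step of P//2 guarantees 50% overlap.
    positions = [(y, x) for y in range(0, N - P, P // 2)
                        for x in range(0, N - P, P // 2)]

    # "Measured" far-field diffraction intensities (phases are lost).
    measured = [np.abs(np.fft.fft2(true_object[y:y+P, x:x+P] * probe))**2
                for (y, x) in positions]

    obj = np.ones((N, N), dtype=complex)             # initial object guess
    alpha = 0.9                                      # update step size
    for _ in range(50):
        for (y, x), I in zip(positions, measured):
            patch = obj[y:y+P, x:x+P]
            exit_wave = patch * probe
            far = np.fft.fft2(exit_wave)
            far = np.sqrt(I) * np.exp(1j * np.angle(far))  # impose measured magnitudes
            revised = np.fft.ifft2(far)
            obj[y:y+P, x:x+P] = patch + alpha * np.conj(probe) \
                / np.max(np.abs(probe)**2) * (revised - exit_wave)
    # After the loop, obj approximates true_object over the scanned region,
    # up to an overall phase offset -- recovered from intensities alone.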

Because of this jiggling, the researchers say their achievement sits almost at the physical lower bound of atomic imaging. Theoretically, they could break their own record and achieve an even higher resolution by freezing the sample close to absolute zero. However, even near absolute zero there are still quantum fluctuations, and the improvement would be marginal at best.

Electron ptychography will allow scientists to pinpoint individual atoms in 3D space that may be obscured when using other imaging methods. Immediate applications include detecting impurities in samples, as well as imaging the atoms and their vibrations. For industry, this is particularly useful when assessing the quality of semiconductors, catalysts, and sensitive quantum materials meant for quantum computers.

“We want to apply this to everything we do,” said Muller. “Until now, we’ve all been wearing really bad glasses. And now we actually have a really good pair. Why wouldn’t you want to take off the old glasses, put on the new ones, and use them all the time?”

The findings appeared in the journal Science.

This article originally appeared in May 2021.

What is Plasma — the most common state of matter found in the universe

Although plasma is unstable in terrestrial conditions, it’s the most common state of matter in the universe, making up the bulk of every star. It’s also pretty weird, posing many questions that researchers are still working to unravel.

We’re taught early in school that the basic states of matter are solid, liquid, and gas. There are other exotic states that scientists discovered more recently (like a superfluid or a Bose-Einstein condensate, for instance), but those three are what we learn as the “main” states of matter. However, your primary school teacher may have missed another one: plasma.

Plasma has a lot to do with heat: add enough heat to a solid and it becomes a liquid; add more, and the liquid becomes a gas. Finally, with enough energy to ionize the atoms into a soup of free electrons and ions (electrically charged atoms), you have plasma.

Plasma has some unique characteristics that emerge as a result of the way particles interact with each other in this state. Let’s have a look at them.

Plasmon and Debye Shielding

Consider some electrically neutral plasma — one in which the positive charges of the free ions and the negative charges of the electrons cancel each other out. If we displace a few electrons, even just a few, the displacement upsets the electric equilibrium. Because opposite charges attract, the positive ions pull the displaced electrons back toward their original positions, and the electrons overshoot, oscillating back and forth around equilibrium. These collective oscillations are called Langmuir waves. Just like the energy of light is carried by photons, these plasma oscillations are carried by plasmons. Plasmons aren’t particles per se; they’re “quasiparticles” — a concept physicists use to treat collective excitations in matter as if they were particles.

If you add a charged particle to the plasma, the ions and electrons rearrange themselves to restore a nearly neutral state. Electrons gather around the added charge, creating a sort of electromagnetic shield that screens its field from the rest of the plasma. This is called Debye shielding.

The region needed to screen such a particle is called a Debye sheath — or an electrostatic sheath. Sheaths appear in plasma because the electrons tend to have a much higher temperature than the ions, which creates a layer with a greater density of positive ions, and hence an overall excess positive charge.

Positive ion sheaths around grid wires in a thermionic gas tube. Wikimedia Commons

Depending on the charge and the characteristics of the plasma, the sheath can be bigger or smaller. Scientists classify plasmas based on the size of this sheath — an “ideal” plasma has lots of particles per Debye sheath volume, while fewer particles make it harder for electrons to shield new particles.
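These scales have simple standard formulas: the plasma frequency sets how fast displaced electrons oscillate, the Debye length sets the size of the shielding region, and the number of particles inside a “Debye sphere” measures how ideal the plasma is. A quick sketch with assumed, laboratory-ish numbers:

    # Standard plasma scales for electron density n and temperature Te:
    #   plasma frequency: w_p   = sqrt(n * e^2 / (eps0 * m_e))
    #   Debye length:     lam_D = sqrt(eps0 * kB * Te / (n * e^2))
    #   Debye number:     N_D   = n * (4/3) * pi * lam_D^3   (>> 1 if "ideal")
    # The density and temperature below are assumed example values.
    import math

    e, m_e = 1.602176634e-19, 9.1093837015e-31
    eps0, k_B = 8.8541878128e-12, 1.380649e-23

    n = 1e18    # electron density, per m^3 (assumed)
    Te = 1e4    # electron temperature, K (assumed, about 1 eV)

    w_p = math.sqrt(n * e**2 / (eps0 * m_e))
    lam_D = math.sqrt(eps0 * k_B * Te / (n * e**2))
    N_D = n * (4 / 3) * math.pi * lam_D**3

    print(f"plasma frequency: {w_p:.2e} rad/s")   # ~5.6e10 rad/s
    print(f"Debye length:     {lam_D:.2e} m")     # ~7e-6 m
    print(f"Debye number:     {N_D:.0f}")         # ~1,400 particles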

The early universe was made of plasma

Plasma can get very weird — which is why it’s somewhat surprising that the entire universe was, at some point, plasma.

During its first 10 to 15 microseconds, the universe was filled with a super-hot soup of particles called gluons and quarks. Gluons are the “glue” that sticks quarks together to form protons, neutrons, and other larger particles. During that early period, the temperature was nearly 2 trillion Kelvin, far hotter than anything found in today’s universe.

The history of the universe by particle physics. Credits: Particle Data Group.

We can recreate such conditions in a particle collider. In these accelerators, scientists smash heavy gold or lead ions together to produce quark-gluon plasma (QGP), the stuff that filled the early plasma universe. From these collisions, they learned that this state of matter behaves like a perfect fluid, flowing with almost no viscosity rather than oozing like honey.

The two facilities best suited to forming QGP are the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN. The ions are accelerated to 99.995% of the speed of light, and the QGP exists for only a vanishingly small fraction of a second before it condenses back into heavier particles.
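To appreciate what 99.995% of the speed of light means: the Lorentz factor at that speed is about 100, so in the lab frame the colliding ions are flattened into pancakes a hundred times thinner than their rest shape. A one-line check:

    # Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2) at 99.995% of light speed.
    import math
    beta = 0.99995                           # v / c
    gamma = 1 / math.sqrt(1 - beta**2)
    print(f"gamma = {gamma:.0f}")            # ~100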

Scientists collect the data from the accelerator and try to make sense of the mess with quantum chromodynamics, a very complex theory that describes one of the fundamental interactions in nature: the strong interaction.

This image shows the end view of a collision of two 30-billion electron-volt gold beams in the STAR detector at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. The beams travel in opposite directions at nearly the speed of light before colliding. (Credit: Brookhaven National Laboratory).

Starstuff

You don’t need to go into the lab to find plasma — simply go outside. We have the sun (during the day) and stars (during the night). On some days, you can even see it in the sky, in the form of lightning.

Plasma is also found scattered across the universe. Between galaxies lies the warm-hot intergalactic medium (WHIM), a diffuse plasma at temperatures of roughly a hundred thousand to ten million Kelvin. Around galaxies, there is a reservoir of diffuse gas in the form of plasma called the circumgalactic medium (CGM). It is usually hard for scientists to study because of its very low density, but simulations are beginning to probe the role of the CGM in galaxy formation.

You may find plasma around planets as well, mostly in their magnetospheres, the regions where a planet’s magnetic field governs the motion of charged particles coming from space. Magnetospheres serve as protection from the solar wind, and in the case of the giant planets these regions can be larger than the Sun. Inside the dawn flank of Jupiter’s magnetopause, scientists have found protons and heavy ions.

This view visualizes only the inner part of the magnetosphere. The complete Jovian magnetosphere is an enormous, tadpole-shaped structure that balloons out to dozens of Jupiter widths around the planet. In the direction away from the sun, the magnetotail extends as far as the orbit of Saturn. Wikimedia Commons.

Plasma is a special state of matter, and it is found throughout the universe. Unlike the other states of matter, it comes with unique properties because it is made of charged ingredients. Thanks to plasma physics, we can understand both the earliest stages of the universe and a host of astrophysical objects.

Nano-magnifying glass converts infrared light into visible light

Artist impression of the prototype that converts infrared light to the visible spectrum using molecules that are sandwiched between gold particles and a mirror.

Our eyes cannot see infrared light, which is why we have infrared detectors that act as a kind of augmented sense. However, these aren’t very sensitive, since infrared light carries so little energy compared to ambient heat that measurements drown in noise. The best infrared detectors overcome this problem by operating at ultra-low temperatures, but this requires a lot of energy and can be extremely expensive.

An international team of researchers has streamlined infrared detection by developing a low-cost device that coaxes molecules to convert invisible infrared into visible light. The molecules absorb mid-infrared light in their vibrating chemical bonds, then transfer this extra energy to visible light they encounter. In the process, the infrared light is ‘upconverted’ to wavelengths closer to the blue end of the spectrum, which modern cameras can detect.

Converting light between frequencies is no trivial task because of constraints imposed by the law of energy conservation. But the researchers found a workaround: adding energy to the infrared light through a mediator, namely tiny vibrating molecules.
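In energy terms, the trick amounts to adding the infrared photon’s energy to the visible photon’s. Since photon energy scales as 1/wavelength, the wavelengths combine reciprocally. Here is a minimal sketch with assumed, illustrative values — the team’s actual pump and infrared wavelengths may differ:

```python
def upconverted_wavelength_nm(pump_nm, infrared_nm):
    """Sum-frequency output: 1/lambda_out = 1/lambda_pump + 1/lambda_IR."""
    return 1.0 / (1.0 / pump_nm + 1.0 / infrared_nm)

# Assumed example: a 633 nm visible pump combined with 3,000 nm mid-infrared light
print(f"{upconverted_wavelength_nm(633.0, 3000.0):.0f} nm")  # ~523 nm: visible green
```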

The main challenge lies in having the molecules intersect with the visible light quickly enough for the energy transfer to occur. To do so, the team led by researchers at the University of Cambridge devised a setup that traps light from a laser into crevices surrounded by a thin layer of gold. A single layer of molecules occupies the same tiny volume where light is squeezed through a space a billion times smaller than a human hair.

“Trapping these different colors of light at the same time was hard, but we wanted to find a way that wouldn’t be expensive and could easily produce practical devices,” said co-author Dr. Rohit Chikkaraddy from the Cavendish Laboratory, who devised the experiments based on his simulations of light in these building blocks.

“It’s like listening to slow-rippling earthquake waves by colliding them with a violin string to get a high whistle that’s easy to hear, and without breaking the violin,” said Professor Jeremy Baumberg, who led the research.

What truly makes this new infrared detector useful is that it can be integrated into existing visible light detecting technologies, such as ordinary cameras.

Low-cost infrared detectors have a wide range of applications, ranging from sensing contaminants and tracking cancers to observing galactic structures. And while this prototype is still in its early phases of development, the researchers are confident they can optimize its performance even further to turn it into a cheap sensor for industry and scientific applications.

“So far, however, the device’s light-conversion efficiency is still very low,” cautions Dr. Wen Chen, first author of the work. “We are now focusing our efforts in further improving it” – a key step toward commercial applications.

The findings appeared in the journal Science.

Acoustics and air bubbles could help researchers monitor how glaciers melt

Image credits: Hari Vishnu.

As temperatures continue to rise, glaciers continue to melt — it’s a no-brainer. However, tracking exactly how and when glaciers melt is another matter entirely, and it has been a challenging problem for years. But a team of researchers may have a solution.

Hari Vishnu, from the National University of Singapore, Grant Deane, from the Scripps Institution of Oceanography, and their research team found that melting glacial ice releases distinctive pressurized bubbles that can be detected acoustically.

“In tidewater glacial bays such as the ones we studied in Svalbard, the ice lost by the glacier is predominantly due to underwater melting and calving of the glacier,” Vishnu told ZME Science. “The underwater soundscape within a frequency band of 1-3 kHz is dominated by the sound of underwater melting of glacier ice and subsequent release of pressurized bubbles from within this ice.”

Air trapped inside glacier ice is squeezed under more and more pressure, forming bubbles that can reach up to 20 atmospheres; when the ice melts, these bubbles are released and produce detectable sounds.

The essential idea is that faster melting creates a more rapid release of bubbles, and therefore more sounds.

“So we are aiming to invert this sound to try and obtain information on the melt rate of the ice. Sound cues have previously been used to estimate the amount of ice lost due to calving too, and now we are focusing our efforts on the other component, which is underwater melting.”

Image credits: Hari Vishnu.

The acoustic intensity of this release depends on various parameters (including the geometry of the glacier/ocean interface and the temperature and salt composition of the water and the ice), but it also offers clues about how the ice is melting. Specifically, glaciers melt faster when exposed to warmer water, which pushes the bubbles out harder and faster.

“This finding is exciting and significant because it tells us that the sound measured in the bay contains cues on the rate of the underwater melting, and moving forward, there is potential for us to develop techniques based on listening to this sound for monitoring the melt rate at these glaciers,” Vishnu adds.

The researchers use a vertical array of sound sensors to profile the underwater sound of a bay. It’s like an underwater acoustic camera, Vishnu explains, but ‘imaging’ acoustically only in the vertical direction. The sound from the glacier melting arrives horizontally, so other sounds can be filtered out. Ultimately, after some signal processing, the researchers can remove unwanted noise and other effects and obtain insights about how ice is melting.
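The team’s real processing pipeline is far more sophisticated, but the core first step — isolating the 1–3 kHz melt band Vishnu describes and measuring its power — can be sketched in a few lines of Python. The filter design and the synthetic hydrophone data below are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def melt_band_power(signal, fs_hz, lo=1000.0, hi=3000.0):
    """Band-pass a hydrophone trace to the 1-3 kHz melt band; return its mean power."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs_hz, output="sos")
    filtered = sosfiltfilt(sos, signal)
    return np.mean(filtered**2)

# Example: one second of synthetic data sampled at 48 kHz --
# a 2 kHz 'bubble' tone buried in broadband noise
fs = 48_000
t = np.arange(fs) / fs
trace = np.sin(2 * np.pi * 2000 * t) + 0.5 * np.random.randn(fs)
print(melt_band_power(trace, fs))
```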

In the long run, the method could be used to study glaciers on a wide area, as sound can travel long distances underwater. It would enable long-term monitoring and is relatively easy to deploy and use, the researchers say.

“Our long-term goal is to establish long-term sound recording stations in glaciers around Greenland and Svalbard to monitor their ice-loss and stability. But there are challenges involved in getting to a stage where we can do this at such a large scale accurately and using a system that can operate autonomously, and the results we have reported are a first step towards that stage.”

If confirmed (the study has not yet been published in a peer-reviewed journal), this remote sensing approach could become an important tool not just for tracking climate change, but also for reducing the risk to boats moving through glacier areas.

“There is ice occasionally calving off the glacier terminus, which means there is the possibility of ice falling on your head or coming up from beneath your boat if you try to approach the area around the terminus. So a monitoring method that is able to gauge the ice loss remotely is required,” Vishnu concludes.

The results were presented at the 181st Meeting of the Acoustical Society of America, held Nov. 29–Dec. 3.

New ‘super jelly’ is soft, but strong enough to withstand the weight of a few cars

It’s not easy being soft and strong at the same time — unless you’re the new hydrogel developed at the University of Cambridge. This is the first soft material that has such a huge degree of resistance to compression, the authors report.

Image credits Zehuan Huang.

A new material developed by researchers at the University of Cambridge behaves like a squishy gel normally, but like ultra-hard, shatterproof glass when compressed — despite being 80% water. Its secret lies in the non-water portion of the material: a polymer network whose elements are held together by “reversible interactions”. As these interactions turn on and off, the properties of the material shift.

The so-dubbed ‘super jelly’ could be employed for a wide range of applications where both strength and softness are needed such as bioelectronics, cartilage replacement in medicine, or in flexible robots.

Hardy hydrogel

“In order to make materials with the mechanical properties we want, we use crosslinkers, where two molecules are joined through a chemical bond,” said Dr. Zehuan Huang from the Yusuf Hamied Department of Chemistry, the study’s first author.

“We use reversible crosslinkers to make soft and stretchy hydrogels, but making a hard and compressible hydrogel is difficult and designing a material with these properties is completely counterintuitive.”

The macroscopic properties of any substance arise from its microscopic properties — its molecular structure and the way its molecules interact. Because of the way hydrogels are structured, it’s exceedingly rare to see such a substance show both flexibility and strength.

The team’s secret lay in the use of molecules known as cucurbiturils. These are barrel-shaped molecules that the team used as ‘handcuffs’ to hold other polymers together (a practice known as ‘crosslinking’). Each cucurbituril holds two ‘guest molecules’ inside its cavity, and these guests were designed to preferentially reside there. Because the polymers are linked so tightly, the overall material has a very high resistance to compression — there isn’t much free space at the molecular level for compression to take place.

The alterations the team made to the guest molecules also slow down the internal dynamics of the material considerably, they report. This gives the hydrogel overall properties ranging between rubber-like and glass-like states. According to their experiments, the gel can withstand pressures of up to 100 MPa (14,503 pounds per square inch). An average car, for comparison, weighs 2,871 pounds — and as the quick calculation below shows, 100 MPa amounts to roughly one car’s weight pressing on every square centimeter.
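Here is that back-of-envelope calculation — pressure is force per unit area, so 100 MPa acting on a single square centimeter corresponds to about the weight of a small car:

```python
pressure_pa = 100e6   # 100 MPa, the compressive strength reported for the gel
area_m2 = 1e-4        # one square centimeter
g = 9.81              # gravitational acceleration, m/s^2

equivalent_mass_kg = pressure_pa * area_m2 / g
print(f"{equivalent_mass_kg:.0f} kg")  # ~1,019 kg -- roughly a small car per cm^2
```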

“The way the hydrogel can withstand compression was surprising, it wasn’t like anything we’ve seen in hydrogels,” said co-author Dr. Jade McCune, also from the Department of Chemistry. “We also found that the compressive strength could be easily controlled through simply changing the chemical structure of the guest molecule inside the handcuff.”

“People have spent years making rubber-like hydrogels, but that’s just half of the picture,” said Professor Oren Scherman, who led the research. “We’ve revisited traditional polymer physics and created a new class of materials that span the whole range of material properties from rubber-like to glass-like, completing the full picture.”

The authors say that, as far as they know, this is the first time a glass-like hydrogel has been developed. They tested the material by using it to build a real-time pressure sensor to monitor human motions.

They’re now working on further developing their glass-like hydrogel for various biomedical and bioelectronic applications.

The paper “Highly compressible glass-like supramolecular polymer networks” has been published in the journal Nature Materials.

One of Einstein’s manuscripts is going to auction and is expected to fetch millions of euros

This Tuesday, in Paris, a manuscript by Albert Einstein is going to auction.

Albert Einstein. Image in the public domain.

Christie’s Auctions and Private Sales will be putting the document up for auction on behalf of the Aguttes auction house later this week in Paris. This is probably one of the most valuable Einstein manuscripts ever to come to auction, and it is expected to garner a sum fit for its significance: between two and three million euros.

Preliminary work

“This is without a doubt the most valuable Einstein manuscript ever to come to auction,” Christie’s said in a statement, according to the AFP.

The 54-page manuscript was handwritten between 1913 and 1914 in Zurich, Switzerland by Einstein and the Swiss engineer Michele Besso, his colleague and friend. It contains preparatory groundwork for the general theory of relativity, arguably one of the most important contributions to physics of the 20th century. (Einstein’s 1921 Nobel Prize in Physics, it’s worth noting, was awarded not for relativity but for his discovery of the law of the photoelectric effect.)

Besso was instrumental in preserving the document, Christie’s adds, as Einstein himself was likely to have seen it as an unimportant working document.

The manuscript offers “a fascinating plunge into the mind of the 20th century’s greatest scientist”, according to the auction house. Einstein died aged 76 in 1955 and is widely considered to be one of the greatest physicists ever. He’s also something of a popular icon, and widely known today.

The ‘Tsar Bomba’: the most powerful nuclear weapon ever made

The Tsar Bomba in 1960. The footage was declassified in 2020. Credit: Rosatom.

On October 30, 1961, during a cloudy morning, a Soviet bomber dropped a thermonuclear bomb over Novaya Zemlya Island, deep in the Arctic Ocean, in the most extreme northeastern part of Europe. The bomb detonated with a staggering yield of 50 megatons (equivalent to 50 million tons of conventional explosives), producing a flash that could be seen from over 1,000 km away. The bomb, known as the Tsar Bomba (“King of Bombs”), remains the most powerful thermonuclear weapon ever detonated. No stronger bomb has ever been tested. This is the story of the pinnacle of nuclear weapons.

The bomb of all bombs

Ground-level view of detonation of Tsar Bomba. Credit: Wikimedia Commons.

In the late 1950s, the Soviets found themselves in a pickle. The Cold War was in full swing and the Americans were clearly winning. Although by that time the USSR had developed its own thermonuclear weapons to match the US arsenal, the Soviets had no effective means of delivering their nukes to US targets.

The post-WWII military doctrine was dramatically disrupted by the introduction of nuclear weapons. Once nukes came into the picture, the US and the Soviet Union, the only nuclear powers at the beginning of the Cold War, each adopted nuclear deterrence as their strategy. Nuclear deterrence represents the credible threat of retaliation to forestall an enemy attack. So if your threat of retaliation isn’t really a genuine threat, you may face total annihilation.

To level the playing field, the Soviets thought of the mother of all bluffs: a weapon so powerful it could level huge cities like New York or Paris in a single blow.

It was Soviet leader Nikita Khrushchev who ordered scientists to start work on the most powerful bomb in the world; development began in 1956. In its first phase, the Tsar Bomba went by the code name “product 202”, then from 1960 it was known as “item 602”. In this second phase, nuclear physicist Andrei Sakharov was key to the bomb’s development.

The nuclear scientists settled on a 50 Mt thermonuclear warhead design, equivalent to nearly 3,300 Hiroshima-era atom bombs (see the quick calculation below). Thermonuclear weapons, also known as hydrogen bombs, are a step above atomic bombs, classed as second-generation nuclear weapons. While atomic bombs employ nuclear fission to release copious amounts of energy from uranium or plutonium, hydrogen bombs add a second step in which the energy from the fission of heavy elements is used to fuse the hydrogen isotopes deuterium and tritium.
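The arithmetic behind that comparison, taking the commonly cited ~15-kiloton yield for the Hiroshima bomb:

```python
tsar_yield_kt = 50_000   # 50 megatons, expressed in kilotons of TNT
hiroshima_kt = 15        # commonly cited yield of the Hiroshima bomb
print(f"{tsar_yield_kt / hiroshima_kt:,.0f} Hiroshima-scale bombs")  # ~3,333
```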

How the Soviets built the world’s most powerful bomb ever

Total destruction radius, superimposed on Paris. Red circle = total destruction (radius 35 kilometers), yellow circle = fireball (radius 3.5 kilometers). Credit: Wikimedia Commons.

The design of hydrogen bombs is very clever, insofar as one can admire a weapon of mass destruction. To increase the yield of a conventional atom bomb, you basically have to add proportionately more uranium or plutonium, both highly scarce materials. But a hydrogen bomb only uses a tiny amount of uranium or plutonium — just enough to kick-start the fusion of heavy hydrogen isotopes.

After the fission of the primary stage, the temperature inside the thermonuclear device soars to 100 million Kelvin (over 17,000 times hotter than the surface of the Sun). Thermal X-rays from the first stage reach the secondary stage, which implodes under all that energy, setting off the sequence of events that ultimately ignites nuclear fusion.

The first full-scale thermonuclear test was carried out by the United States in 1952, but the Soviets took things to a whole new level. The Tsar Bomba actually had three stages: two fission reaction stages and a final fusion reaction.

The fission of uranium or plutonium generates tremendous heat and pressure that drives the second stage, where neutrons from the first stage split lithium-6 to breed tritium. The hydrogen isotopes deuterium and tritium then fuse under the extreme heat and pressure, causing the thermonuclear explosion. Around 97% of the Tsar Bomba’s total yield came from thermonuclear fusion alone, leading to minimal nuclear fallout relative to the incomprehensible destructive power of the warhead and making it one of the “cleanest” nuclear bombs ever made.

The final iteration of the Tsar Bomba measured 8 meters in length with a diameter of about 2 meters. Its weight was around 25 tons, which was far too much to be handled by any intercontinental ballistic missile developed at the time by either the Soviets or Americans. In fact, the Tsar Bomba was so big it couldn’t be carried by any plane fielded by the Soviet Union.

The Tsar Bomba was dropped from a modified Tu-95 bomber. Credit: Picryl.

Sakharov had to work closely with aviation engineers to modify a Tupolev Tu-95 plane. The carrier had its fuel tanks and bomb bay doors removed and its bomb-holder replaced by a new holder attached directly to the longitudinal weight-bearing beams.

In 1961, after a brief respite, political tensions between the United States and the Soviet Union were once again high. This was just a year before the Cuban Missile Crisis, after all. The Cold War thus resumed and so did the Tsar Bomba testing.

The day the Earth trembled before the Tsar Bomba

The Tsar Bomba’s fireball grew 8 km (5 miles) wide at its maximum. It didn’t touch the surface of the Earth due to the shock wave, but nearly reached 10.5 km (6.5 miles) in altitude — the same cruising altitude as the deploying bomber. Credit: Wikimedia Commons.

On October 17, 1961, Khrushchev announced the upcoming test of the 50 Mt mega-weapon. The Tu-95V aircraft, No. 5800302, armed with the warhead, took off from the Olenya airfield and flew to State Test Site No. 6 of the USSR Ministry of Defense, located on the deserted island of Novaya Zemlya. The crew numbered nine officers, led by Andrei Durnovtsev.

The bomb was released from a height of 10,500 meters (34,450 ft). Immediately, an 800-kilogram parachute deployed to give the carrier and the observer plane enough time to fly about 45 kilometers (28 miles) away from ground zero. The crew had a 50 percent chance of survival — and they all made it out alive.

Site of the detonation. Credit: Wikimedia Commons.

The Tsar Bomba exploded for the first and last time about 4,200 meters (13,780 ft) above the Mityushikha Bay nuclear testing range. All went according to plan — meaning all hell broke loose.

The 8-kilometre-wide (5.0 mi) fireball reached nearly as high as the altitude of the release plane and was visible from almost 1,000 km (620 mi) away. After the fireball subsided, it made way for a mushroom cloud of debris, smoke, and condensed water vapor that extended about 67 km (42 miles) high — roughly seven times taller than Mount Everest. The flare from the detonation was visible in Norway, Greenland, and Alaska.

The heat from the explosion could have caused third-degree burns 100 km (62 mi) away from ground zero. And although the warhead was detonated miles above ground, it generated a seismic wave that was felt with an estimated magnitude of 5.0-5.25.

One of the Soviet cameramen described the harrowing experience:

“The clouds beneath the aircraft and in the distance were lit up by the powerful flash. The sea of light spread under the hatch and even clouds began to glow and became transparent. At that moment, our aircraft emerged from between two cloud layers and down below in the gap a huge bright orange ball was emerging. The ball was powerful and arrogant like Jupiter. Slowly and silently it crept upwards…Having broken through the thick layer of clouds it kept growing. It seemed to suck the whole Earth into it. The spectacle was fantastic, unreal, supernatural.”

The mushroom cloud of Tsar Bomba seen from a distance of 161 km (100 mi). Credit: Wikimedia Commons.

There were no fatalities resulting from the Tsar Bomba’s test, but the explosion shattered windows in a village on Dikson Island, even though it was 780 km (480 mi) away from the testing site.

In 2020, Rosatom, the Russian nuclear energy agency, released a 30-minute documentary video that shows the preparation and detonation of the Tsar Bomba. The video was previously a state secret. You can now watch it below.

https://www.youtube.com/watch?v=nbC7BxXtOlo&feature=youtu.be

The bomb that blasted a new era of peace

Predictably, the Tsar Bomba test unleashed a wave of indignation in the United States. But behind closed doors, the White House and the Pentagon were not actually sure how to respond. A new study published in October, which is based on recently declassified documents, offers valuable insights into how President John F. Kennedy decided to act in these highly tense times.

The study that appeared in the Bulletin of the Atomic Scientists shows that the Soviets weren’t the only ones contemplating mega thermonuclear weapons. Lead author Alex Wellerstein, a nuclear historian at the Stevens Institute of Technology in Hoboken, found documents showing that Edward Teller, the mastermind of the hydrogen bomb, wanted to get the green light from the Atomic Energy Commission for two superbomb designs. One was for 1,000 megatons (20 times more powerful than the Tsar Bomba) and the other for 10,000 megatons (a staggering 200 times more powerful than the Soviet doom bringer). The proposal was made in 1954, before the Soviets thought about making the Tsar Bomba.

If you’re shocked by the idea of making a 10,000 megaton super nuclear weapon, congratulations! You’re actually an empathetic human being. Seriously though, we all need to bear in mind something about thermonuclear weapons: they have unlimited destructive power, meaning they can be scaled to blow up the entire planet if a large enough warhead is produced. The Tsar Bomba, for instance, was initially designed as a 100-megaton warhead, but the Soviets scaled it down by adding a lead sheath. In 1950s prices, the cost of increasing the yield of a thermonuclear bomb was just 60 cents per kiloton of TNT.

While many fellow nuclear scientists were indeed shocked by this audacious proposal, the military was all ears. But they too cooled off once they learned a 1,000-megaton warhead would be so powerful that the radioactivity would be impossible to keep confined within the borders of an enemy state.

After the Tsar Bomba was detonated, enthusiasm for an American super bomb reignited. According to Dr. Wellerstein, in 1962, the defense secretary, Robert S. McNamara, lobbied the Atomic Energy Commission to build the American equivalent of the Tsar Bomba.

Andrei Sakharov. Credit: Wikimedia Commons.

But President Kennedy, who was famous for his loathing of nuclear weapons, had other plans. By then, scientists figured out how to conduct nuclear tests underground in the Nevada desert. However, even if it was detonated deep underground, a super thermonuclear bomb would still break through the hard rock and release radiation into the atmosphere.

In the aftermath of the Cuban Missile Crisis, whose threat of total obliteration was too close for comfort, President Kennedy managed to convince the Soviets to limit nuclear testing to underground sites. On August 5, 1963, the United States, the United Kingdom, and the Soviet Union signed the Partial Nuclear Test Ban Treaty, which prohibited tests in the atmosphere, in outer space, and underwater. In doing so, these countries ensured that no one would detonate a Tsar Bomba-like weapon ever again.

A key role in the Partial Test Ban Treaty was held by Sakharov, one of the lead designers of the Tsar Bomba. Concerned with the moral and political implications of his work, Sakharov pushed his Moscow contacts to sign the treaty.

In 1968, Sakharov fell out of the Kremlin’s good graces after publishing an essay in which he described anti-ballistic missile defense as a major threat of nuclear war. In the Soviet nuclear scientist’s opinion, an arms race in the new technology would increase the likelihood of nuclear war. After publishing this manifesto, Sakharov was banned from conducting military-related research. In response, Sakharov assumed the role of an open dissident in Moscow and continued to write anti-nuclear weapon essays and support human rights movements.

In 1975, Sakharov was awarded the Nobel Peace Prize, with the Norwegian Nobel Committee calling him “a spokesman for the conscience of mankind,” adding that “in a convincing manner Sakharov has emphasized that Man’s inviolable rights provide the only safe foundation for genuine and enduring international cooperation.” Of course, Sakharov was not allowed to leave the Soviet Union in order to receive his prize.

The last straw was when Sakharov staged a protest in 1980 against the Soviet intervention in Afghanistan. He was arrested and exiled to the city of Gorky (now Nizhny Novgorod), which was completely off-limits to foreigners. Sakharov spent the rest of his days in an apartment under police surveillance until one day in 1986, when he got a call from Mikhail Gorbachev telling him that he and his wife could return to Moscow. Sakharov died in December 1989. The Tsar Bomba, his own brainchild, was dead long before that, thanks partly to him. 


Hoverboards are now real — and the science behind them is dope

What could be the coolest way of going to work you can imagine? Let me help you out. Flying cars — not here yet. Jetpacks — cool, but not enough pizzazz. No, there’s only one correct answer to this question: a hoverboard.

A whole generation of skateboarders and sci-fi enthusiasts (especially Back to the Future fans) have been waiting for a long time to see an actual levitating hoverboard. Well, the wait is over. The future is here. 

Franky Zapata flying on Flyboard Air. Image credits: Zapata/YouTube.

There were rumors in the ’90s claiming that hoverboards had been invented but were kept off the market because powerful parent groups were against the idea of flying skateboards being used by children. Well, there was little truth to those rumors — hoverboards weren’t truly developed until very recently. But they’re no longer a fictional piece of technology: levitating boards exist for real, and there is a lot of science working behind them.

A hoverboard is basically a skateboard without tires that can fly above the ground while carrying a person on it. As the name implies, it’s a board that hovers — crazy, I know.

The earliest mention of a hoverboard is found in Michael K. Joseph’s The Hole in the Zero, a sci-fi novel published in 1967. Even before Joseph, however, American aeronautical engineer Charles Zimmerman had come up with the idea of a flying platform that looked like a large hoverboard.

Zimmerman’s concept later became the inspiration for a small experimental aircraft called the Hiller VZ-1 Pawnee. This bizarre levitating platform was developed by Hiller Aircraft for the US military, and it flew successfully in 1955. However, only six such platforms were built, because the army found no use for them in military operations. Hoverboards were feasible, but still too difficult to build with the day’s technology.

Hoverboards fell out of favor and were largely forgotten for decades. Then came Back to the Future.

A page from the book Back to the Future: The Ultimate Visual History. Image credits: /Film

The hoverboard idea gained huge popularity after the release of Robert Zemeckis’s Back to the Future II in 1989. The film featured a chase sequence in which the lead character Marty McFly is seen flying a pink hoverboard while being followed by a gang of bullies. In the last two decades, many tech companies and experts have attempted to create a flying board that could function like the hoverboard shown in the film.

Funnily enough, Back to the Future II takes place in 2015, and hoverboards were common in the fictional movie. They’re not quite as popular yet, but they’re coming along.

The science behind hoverboards

Real levitating hoverboards work by cleverly exploiting quantum mechanics and magnetic fields. It starts with superconductors — materials that have zero electrical resistance and expel magnetic fields, a phenomenon known as the Meissner effect. Scientists are very excited about superconductors and have been using them in experiments like the Large Hadron Collider.

Because superconductors expel magnetic fields, something weird happens when they interact with magnets. A magnet’s north-south field lines cannot pass through a superconductor, so if you place a superconductor on a magnet, the expelled field lines push it out of the way, suspending it in the air.

A magnet levitating above a high-temperature superconductor, cooled with liquid nitrogen. Image credits: Mai Linh Doan.

However, there’s a catch: superconductors gain their “superpowers” only at extremely low temperatures, around -230 degrees Fahrenheit (-145 Celsius) or colder. So real-world hoverboards need to be topped up with supercooled liquid nitrogen roughly every 30 minutes to maintain their extremely low temperature.
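As a quick sanity check on those numbers — and on why liquid nitrogen, which boils at about 77 K (-196 °C), works as the coolant — here is the unit conversion:

```python
def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

def c_to_k(c):
    return c + 273.15

tc_c = f_to_c(-230.0)                          # the -230 F figure quoted above
print(f"{tc_c:.1f} C = {c_to_k(tc_c):.1f} K")  # ~ -145.6 C = ~127.6 K
# Liquid nitrogen boils at ~77 K, comfortably below this threshold,
# which is why periodic nitrogen top-ups keep the superconductor working.
```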

Superconducting hoverboards all rely on this approach. While there has been some progress in creating room-temperature superconductors, that technology is not yet ready to be deployed in the real world. But then again, 30 minutes of hovering is better than nothing.

Some promising hoverboards and the technology behind them

In 2014, inventor and entrepreneur Greg Henderson listed a hoverboard prototype, the Hendo hoverboard, on the crowdfunding platform Kickstarter. The Hendo could fly 2.5 cm above the ground while carrying 300 lb (140 kg) of weight, but just like maglev trains, it required a special track made of non-ferromagnetic, conductive metal to function.

The hoverboard relied on magnetic levitation, a principle that allows an object to overcome gravity and stay suspended in the air in the presence of a magnetic field. However, the hoverboard never went into mass production, because Henderson used the gadget mainly as a means to promote his company Arx Pax Labs.

A year later, another inventor, Cătălin Alexandru Duru, developed a drone-like, propeller-driven hoverboard prototype (registered under the name Omni hoverboard) and set a Guinness World Record for the farthest flight by hoverboard. During his flight, Duru covered a distance of about 276 meters and reached a height of 5 meters.

ARCA CEO Dumitru Popescu controlling his ArcaBoard through body movement. Image Credits: Dragos Muresan/Wikimedia Commons

In 2015, Japanese automaker Lexus also came up with a liquid-nitrogen-filled hoverboard that could levitate when placed on a special magnetic surface. The Lexus hoverboard contains yttrium barium copper oxide, a superconductor that, when cooled below its critical temperature, expels magnetic field lines. The board uses quantum levitation and quantum locking (flux pinning) to float stably over a magnetic surface.

In December of the same year, Romania-based ARCA Space Corporation introduced an electric hoverboard called the ArcaBoard. Able to fly over any terrain, including water, this rechargeable hoverboard was marketed as a new mode of personal transportation. The company website mentions that the ArcaBoard is powered by 36 built-in electric fans and can be controlled either from a smartphone or through the rider’s body movements.

Components in an ArcaBoard. Image Credits: ARCA

One of the craziest hoverboard designs is Franky Zapata’s Flyboard Air. This hoverboard came into the limelight in 2016, when Zapata broke Cătălin Alexandru Duru’s Guinness World Record by covering a distance of 2,252.4 meters on his Flyboard Air. This powerful hoverboard is capable of flying at a speed of 124 miles per hour (200 km/h) and can reach as high as 3,000 meters (9,842 feet) up in the sky.

The Flyboard Air comes equipped with five jet turbines that run on kerosene and has a maximum load capacity of 264.5 lbs (120 kg). At present, it can stay in the air for only 10 minutes, but Zapata and his team of engineers are working to improve the design further and make it more efficient. In 2018, his company Z-AIR received a grant worth $1.5 million from the French Armed Forces. The following year, Zapata crossed the English Channel on the Flyboard Air.

While the ArcaBoard did go on sale in 2016 at an initial price of $19,900, the Lexus Hoverboard and the Flyboard Air are still not available for public purchase. However, in a recent interview with DroneDJ, Cătălin Alexandru Duru revealed that he plans to launch a commercial version of his Omni hoverboard in the coming years.

What is a Faraday cage and how does it work?

There’s a good chance that a plane you’ve flown on has been hit by lightning. According to the Federal Aviation Administration, a plane is struck by lightning every 1,000 flight hours or so, and there are around 10,000 planes in the air at any given moment — so the odds of one of them being hit are pretty high. Yet lightning hasn’t caused a commercial airliner accident or fatality in decades — and we have physics to thank.
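Taken together, those two figures imply a striking global rate. A rough estimate, assuming the quoted numbers hold on average:

```python
planes_airborne = 10_000    # planes in the air at any given moment (figure quoted above)
hours_per_strike = 1_000    # one lightning strike per ~1,000 flight hours

strikes_per_hour = planes_airborne / hours_per_strike
print(f"~{strikes_per_hour:.0f} airliner lightning strikes per hour, worldwide")  # ~10
```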

A plane’s body is designed as a fully enclosing aluminum shell, which allows electrical current to flow solely through the outer skin of the plane and out through the tail, keeping the inside of the plane free of electrical charge. Essentially, an airplane is a giant Faraday cage.

What’s a Faraday cage?

A Faraday cage, also known as a Faraday shield, is a conducting enclosure that shields anything inside from electromagnetic fields by redistributing the electric charges at the surface of the conductor, which in turn cancels the field’s effect in the interior of the cage. The concept and underlying physical phenomenon were first demonstrated by English scientist Michael Faraday in 1836.

Faraday performed many experiments in the early 19th century that greatly contributed to our understanding of electromagnetism. The English physicist and chemist was the first to show that a changing magnetic field produces an electric current, discovered the effect of magnetism on light, and invented the first electric motor and dynamo.

During one of these experiments, Faraday noticed that an electrical conductor only carries an electrical charge on its surface, while the interior of the conductor is not affected at all.

Michael Faraday. Credit: Public Domain.

Faraday set out to investigate this phenomenon at a large scale. To this aim, he lined all the walls of a room with metal foil, then fired a high-voltage current from an electrostatic generator through the outside of the room. Using an electroscope, a device that detects electrical charge, Faraday showed that only the walls carried electrical charge while the interior of the room was completely devoid of charge.

Earlier, Benjamin Franklin — a major figure in his own right in the American Enlightenment and the history of physics — had electrified a silver pint (tankard) and lowered into it an uncharged cork ball attached to a silk thread. Although the ball was lowered until it touched the bottom of the metal enclosure, it wasn’t attracted to the charged interior sides of the pint. But when Franklin withdrew the cork and dangled it near the pint’s exterior, the ball was immediately drawn to the surface.

Decades later, Faraday replicated Franklin’s research with a twist in his now-famous ice pail experiment, during which he lowered a charged brass sphere into a metal cup. As expected, his results agreed with Franklin’s original work.

These experiments validated a fundamental tenet of electromagnetism: excess electrical charge on a conductor resides only on its outer surface, and the interior of the conductor is unaffected by external charge. Later developments in field theory refined the physics of these observations, showing that the charge on a conductor redistributes itself so that the net electrostatic field within the conductor is zero.
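The textbook way to state this result uses Gauss’s law. In electrostatic equilibrium the field inside a conductor must vanish (otherwise charges would still be moving), and applying Gauss’s law to a closed surface S drawn just beneath the conductor’s skin then forces all excess charge to the outside:

```latex
\oint_S \mathbf{E} \cdot d\mathbf{A} = \frac{Q_{\text{enc}}}{\varepsilon_0},
\qquad
\mathbf{E} = 0 \ \text{inside the conductor}
\;\Rightarrow\;
Q_{\text{enc}} = 0 .
```

Any net charge must therefore sit on the outer surface — exactly what Faraday’s electroscope measured.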

How a Faraday cage works

Your car is an example of a Faraday cage, which will protect you from getting killed by a lightning strike. Credit: Britannica.

This leads us to the Faraday cage, which can be viewed as a hollow conductor that shields anything inside from external electrical charge or radiation. Because the Faraday cage distributes the charge around the exterior of the shield, charges within the interior are canceled out. The shield also works against radio waves and microwaves.

A Faraday cage can be a continuous shell-like material like the hull of an airplane or a mesh. The size of the gaps in the screen or mesh alters the cage’s properties, which can be adjusted to exclude only certain frequencies and wavelengths of electromagnetic radiation.

The typical Faraday cage is made of grounded wire mesh or parallel wires. The wires have to be made of conductive materials, such as aluminum or copper, and the mesh has to form a continuous, closed surface around the space being protected.
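The rule of thumb is that a mesh blocks radiation whose wavelength is much larger than its openings. A quick sketch makes the point with the kitchen microwave oven discussed below (2.45 GHz is the standard operating frequency for consumer ovens):

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_m(frequency_hz):
    return C / frequency_hz

lam = wavelength_m(2.45e9)   # a microwave oven runs at about 2.45 GHz
print(f"{lam * 100:.1f} cm")  # ~12.2 cm
# The holes in an oven-door mesh are a few millimeters wide -- far smaller than
# 12.2 cm -- so the mesh behaves like a solid wall to the microwaves, while
# visible light (wavelength ~0.0005 mm) passes straight through.
```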

However, not all electromagnetic radiation and fields are blocked by a Faraday cage. Static or slowly varying magnetic fields, like Earth’s magnetic field, can penetrate the cage, which explains why a compass still works inside one. Near-field, high-power transmissions such as HF RFID, used in contactless payments, can also penetrate the shield.

The applications for Faraday cages are manifold. In fact, a lot of our modern electronic hardware wouldn’t be able to function properly — or at all — were it not for the Faraday shielding incorporated into its design.

Besides protection against external electromagnetic radiation and electrical discharge, Faraday cages also block electromagnetic noise that can hamper the performance of electronic devices. And since a cage keeps things out but also inside, a Faraday cage can also be useful when you want to prevent electromagnetic energy radiated from internal components from escaping the enclosure.

For instance, the best example of a Faraday cage is right inside your kitchen. Microwave ovens have a metal shell that prevents the microwaves inside the oven from leaking into the environment. Inside hospitals, Faraday cages help MRI machines to scan tissues inside the human body. An MRI room has to be shielded otherwise external electromagnetic fields could ruin the diagnostic images. The military also routinely incorporates Faraday cages into vehicles and bunkers to protect its assets from electromagnetic pulses. In science, this shielding reduces the noise in analytical chemistry tests for sensitive measurements. 

Fusion breakthrough brings us one step closer to solving key challenges

A worker doing maintenance work inside the reactor.

In fusion power, two atomic nuclei combine to form a heavier nucleus, releasing vast amounts of energy in the process. The process takes place in a fusion reactor and, at least in theory, this energy can be harnessed; but the practical aspects are extremely challenging.

An important problem for fusion reactors is keeping the plasma core extremely hot (hotter than the surface of the sun) while also safely containing the plasma — something fusion researchers refer to as “core-edge integration”. Researchers working at the DIII-D National Fusion Facility managed to make the fusion core even hotter, while also safely cooling the material that reaches the reactor wall.

Somewhat like a conventional combustion engine, a fusion reactor must also exhaust heat and particles. A key strategy to cool down the plasma exhaust is to inject impurities (particles heavier than the plasma) into the exhaust region — but these same impurities can travel into regions where fusion reactions are occurring, reducing overall performance.

Previously, these impurities were in the form of a gas. However, in a new study, researchers found that a particular chemical mixture in the form of a powder offers several advantages.

The powder consists of boron, boron nitride, and lithium, and it was trialed at the DIII-D tokamak. A tokamak is a type of fusion reactor that confines the plasma with magnetic fields in a donut shape; the DIII-D experiments ran in the high-confinement regime known as H-mode. The experiments showed that the powder was effective at cooling the plasma boundary while producing only a marginal decrease in fusion performance.

“Our work is important because it shows new ways to achieve high-pressure core plasmas (H-Mode and Super-H Mode) while keeping the boundary cold enough to avoid melting and damaging the reactor walls,” explains Florian Effenberg of Princeton Plasma Physics Laboratory, co-author of the new study. “The injection of boron, boron nitride, and lithium powders into boundary plasma dissipates the power before it can reach wall components. Thereby, we can achieve both at a time: a super hot core plasma that can produce fusion energy and a cold boundary that allows safe and long-pulse operation of the reactor.”

Although the DIII-D is a relatively small tokamak, the experimental results along with theoretical simulations suggest that the approach is also compatible with larger devices like ITER, the international tokamak under construction in France, and would facilitate a core-edge integration solution in future fusion power plants.

“This is an important step toward integrated solutions for safe heat exhaust and high-performance operation. Further assessments and optimization are, of course, necessary,” Effenberg concludes.

This approach could be instrumental in addressing the core-edge integration challenge in future fusion power plants. So what does this mean for fusion power in general? It’s hard to make any clear estimates, says Effenberg, but the positive signs are there. The development of fusion reactors resembles that of microprocessors in computers. Just as Moore’s law observes that the number of transistors in a microprocessor doubles every two years, there is a “Moore’s law for fusion”, in which the ‘triple product’ of density, temperature, and confinement time — a measure of the performance of a fusion plasma — has doubled every 1.8 years.
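For a sense of what that doubling time implies, here is the compound-growth arithmetic (a projection from the quoted trend, not a guarantee):

```python
def growth_factor(years, doubling_time=1.8):
    """Factor by which the fusion 'triple product' grows if it doubles every 1.8 years."""
    return 2.0 ** (years / doubling_time)

print(f"{growth_factor(10):.0f}x improvement over a decade")  # ~47x
```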

“Generally, we are on track and make progress according to “Moore’s law for fusion”, which shows that a demonstration fusion power plant is in reach within the next decade. The last meters are tough, but after a hunger period, there will finally be a wave of new and upgraded fusion machines coming online, flooding the last critical gaps in our knowledge with data,” Effenberg concludes.

Results will be presented at the 63rd Annual Meeting of the APS Division of Plasma Physics.