Tag Archives: light

LHC physicists make matter out of light

CERN researchers used the LHC to produce a pair of W bosons from two photons. Credit: CERN.

As another confirmation that we’re living in a quantum mechanical universe, physicists have used the Large Hadron Collider (LHC) in Switzerland to generate matter from energy. Specifically, photons were merged and transformed into W bosons, the particles that carry the weak force.

For centuries, scientists trained in classical physics lived by an immutable mantra: no matter what happens in the universe, mass is always conserved. What goes in must always come out. But then came Albert Einstein, whose theories of special and general relativity showed that different observers can disagree about the energy of a system, so mass cannot be the only conserved quantity.

Ultimately, this is how we wind up with the most famous equation in physics, E = mc².

I’ll leave it to Einstein himself to explain the equation:

“It followed from the special theory of relativity that mass and energy are both but different manifestations of the same thing — a somewhat unfamiliar conception for the average mind.”

From the soundtrack of the film, Atomic Physics. Credit: J. Arthur Rank Organization, Ltd., 1948.

This also means that mass can be converted into energy and vice-versa. At the LHC, scientists regularly smash particles accelerated close to the speed of light together, transforming the particles into energy and then back into different types of particles.

But you can skip a step. In a recent update, CERN researchers working with the ATLAS experiment describe how they were able to collide two photons together, which are massless particles of light. As a result, the interaction led to the formation of W bosons, particles that have both mass and charge, and which play a vital role in nuclear decay.

We’re literally bombarded with countless photons on a daily basis each time the sun rises, so why doesn’t this happen all the time? Einstein’s nifty equation yet again has the best explanation.

If you read the mass-energy equivalence equation from right to left, you’ll clearly see that a small amount of mass can produce a huge amount of energy due to the square of the speed of light (c²). That’s why relatively small hydrogen bombs can wreak devastation across thousands of square kilometers.

When the equation is read from left to right, the interpretation is that you need a boatload of energy to produce mass. One of the few places on Earth where that kind of energy can be generated is the LHC, the world’s largest and highest-energy particle collider, and the largest machine in the world for that matter.
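Both readings can be put into numbers. Here’s a back-of-the-envelope sketch (the one-gram example and the ~80.4 GeV W boson mass are standard reference values we’re assuming, not figures from the article):

```python
# Back-of-the-envelope E = mc^2, read in both directions.
c = 299_792_458.0          # speed of light, m/s

# Right to left: a little mass yields enormous energy.
mass_kg = 0.001            # one gram of matter
energy_j = mass_kg * c**2  # ~9e13 joules

# Left to right: making even one W boson (~80.4 GeV) takes a lot of energy.
GEV_TO_J = 1.602176634e-10           # 1 GeV in joules
w_boson_energy_j = 80.4 * GEV_TO_J
w_boson_mass_kg = w_boson_energy_j / c**2  # ~1.4e-25 kg

print(f"1 g of matter ~ {energy_j:.2e} J")
print(f"one W boson   ~ {w_boson_mass_kg:.2e} kg")
```

The asymmetry of c² is the whole story: a gram of matter holds roughly the energy of a large thermonuclear weapon, while a particle as heavy as the W boson weighs next to nothing in everyday units.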

“If you go back and look at Maxwell’s equations for classical electromagnetism, you’ll see that two colliding waves sum up to a bigger wave,” Simone Pagan Griso, a researcher at the US Department of Energy’s Lawrence Berkeley National Laboratory, told Symmetry Magazine. “We only see these two phenomena recently observed by ATLAS when we put together Maxwell’s equations with special relativity and quantum mechanics in the so-called theory of quantum electrodynamics.”

Along with Z bosons, W bosons are weak nuclear force carriers. The weak force is one of the four fundamental forces, alongside electromagnetism (which holds atoms together), strong nuclear force (which glues atomic nuclei together), and gravity.

According to the laws of classical electrodynamics, two intersecting light beams should pass straight through one another without deflecting, absorbing, or disrupting each other. You can prove this for yourself at home if you happen to have two lasers handy.

However, quantum electrodynamics allows for light and matter to interact. These new findings actually confirm one of the main predictions of electroweak theory, namely that force carriers like W bosons interact with themselves.

“This observation opens up a new facet of experimental exploration at the LHC using photons in the initial state”, said Karl Jakobs, Spokesperson of the ATLAS Collaboration. “It is unique as it only involves couplings among electroweak force carriers in the strong-interaction dominated environment of the LHC. With larger future datasets it can be used to probe in a clean way the electroweak gauge structure and possible contributions of new physics.”

Astronomers witness light produced by the merger of two black holes for first time

An artist’s impression of a supermassive black hole. Credit: R Hurt (IPAC)/Caltech.

It doesn’t get any blacker than a black hole, the densest objects in the universe. Their gravitational pull is so strong that nothing can escape their clutches, not even light.

However, it’s an ironic twist of fate that when two black holes merge in a cataclysmic event, they can also produce a flare of light as powerful as a trillion suns. Astronomers have now confirmed this phenomenon for the first time in a new study.

On May 21, 2019, scientists affiliated with the National Science Foundation’s Laser Interferometer Gravitational-wave Observatory (LIGO) and the European Virgo detected the gravitational waves generated by the merger of two black holes, in an event dubbed S190521g.

Gravitational waves are essentially ripples in the fabric of spacetime which are generated by interactions between very massive accelerating cosmic objects, such as neutron stars or black holes. Physicists liken gravitational waves to the ripples generated by a stone thrown into a pond.

Although gravitational waves were predicted by Einstein’s theory of general relativity, their existence was only directly confirmed in 2016 by LIGO, whose founders were awarded a much-deserved Nobel Prize in Physics one year later.

Since then, scientists have detected many more gravitational-wave signals, with many more to follow once more sensitive detectors come online.

This story isn’t about gravitational waves, though. While studying S190521g, physicists at Caltech’s Zwicky Transient Facility (ZTF) also spotted a flare of light emanating from the pair of merging black holes.

“This supermassive black hole was burbling along for years before this more abrupt flare,” Matthew Graham, a research professor of astronomy at Caltech and the project scientist for ZTF, said in a statement. “The flare occurred on the right timescale, and in the right location, to be coincident with the gravitational-wave event.”

Light-emitting black hole mergers aren’t exactly a new idea. They’ve been theorized before by physicists whose models suggested that merging black holes can plow into the hot gas, dust, and all the other jumbled mess of matter hovering around them, waiting to be gobbled up.

The huge momentum and sudden release of kinetic energy of the merged black hole can cause gas to react, generating a bright flare.

Now, the theory has been shown to also work in practice. The light from S190521g was visible for days, before it slowly faded into oblivion about a month later.

However, the researchers say they will keep an eye on this newly birthed supermassive black hole. They hope to catch another flare within a couple of years as it is expected to ram into the surrounding disk of gas once more.

“Supermassive black holes like this one have flares all the time. They are not quiet objects, but the timing, size, and location of this flare was spectacular,” Mansi Kasliwal, an assistant professor of astronomy at Caltech, and co-author of the study, said in a statement.

“The reason looking for flares like this is so important is that it helps enormously with astrophysics and cosmology questions. If we can do this again and detect light from the mergers of other black holes, then we can nail down the homes of these black holes and learn more about their origins.”

The findings appeared in the journal Physical Review Letters.

Complex glass objects 3D-printed using new take on old method

Researchers at ETH Zürich have developed the first 3D-printing method that can produce highly-complex, porous glass objects. The approach relies on a special resin that can be cured using ultraviolet (UV) light.

Several of the 3-D printed objects created by the team.
Image credits Group for Complex Materials / ETH Zurich.

Glass has long been a goal of 3D-printing enthusiasts; it has also proven to be the most elusive. The inherent problem with printable glass is that the material requires very high temperatures to process. The two approaches we’ve tried so far are to either ‘print’ molten glass, which requires expensive and specialized heat-resistant equipment, or to use ceramic powders as ink to sinter into glass, an approach that sacrifices precision and thus the complexity of the finished product.

In order to solve the issue, the team from ETH Zurich went back to the roots, and worked from stereolithography, one of the first 3-D printing techniques developed during the 1980s. They developed a resin which contains a plastic material and organic molecules tied to glass precursors that can be hardened by exposure to UV light.

A light touch

When blasted with UV light — the team says commercially available Digital Light Processing technology works just fine — photosensitive components in the resin bind together. The plastic in the ink forms into a maze-like polymer that provides the structural framework. Ceramic-bearing molecules link together in the empty areas created by the framework.

This allows an object to be built layer-by-layer, and by modifying the intensity of the UV light, the team can change various parameters in each layer. Weak light intensity results in large pores, for example, while intense illumination produces small pores.

“We discovered that by accident, but we can use this to directly influence the pore size of the printed object,” says Kunal Masania, a co-author of the study.

So where does the glass fit into this? The team explains that they can modify the microstructure of their (hardened) ink by mixing silica with borate or phosphate and adding it to the resin. Silica is the main component of glass, while borate and phosphate are added to specialized, heat-resistant and optical glass respectively. The team explains that their approach allows for single or multiple types of inks to be mixed into a single object, allowing for several kinds of glass to be produced in the end.

The final step involves using heat to actually turn the hardened ink into glass. The printed ‘blanks’ are fired at 600˚C, which burns away the polymer framework, and then at 1000˚C to transform the ceramic structure into glass. During the thermal treatment, the blanks shrink significantly, the authors report, while becoming as transparent and hard as window glass.

So far, the approach can only be used for small objects, about the size of a die. Larger objects such as bottles, drinking glasses, or window panes cannot be produced this way, but that wasn’t the goal here, Masania explains; the team wanted to prove that glass is a viable material for 3D-printing.

The team has applied for a patent on their technology and is negotiating with industry representatives to take the process to market.

The paper “Three-dimensional printing of multicomponent glasses using phase-separating resins” has been published in the journal Nature Materials.

Sunflower-like material follows beam of light

Scientists have devised a material that perfectly aligns with the direction of a light beam, much like sunflowers following the sun.

In 2016, a study published in the journal Science explained how young sunflower plants manage to track the sun — it all has to do with circadian rhythm. According to that study, a young flower faces east at dawn, then slowly turns west as the sun moves across the sky. During the night, it slowly turns back east to begin the cycle again.

The sunflower’s turning is actually a result of different sides of the stem elongating at different times of the day. For many years, scientists have attempted to mimic sun-tracking, known as heliotropism, in artificial materials. They’ve had little success until very recently.

In a new study published this week in Nature Nanotechnology, researchers describe a new material that follows a light source. The team at the University of California, Los Angeles, led by Ximin He, combined a photoresponsive nanomaterial (one that absorbs light and turns it into heat) with a thermoresponsive polymer that contracts when it encounters heat. The material was fashioned into small cylinders.

When a light beam hits the cylinders, the light is absorbed, heating the material, but only on the light-facing side. As the material contracts on the illuminated side, the cylinder bends towards the light beam. When the top of the cylinder aligns with the beam, the underside of the shaft cools down, expanding and stopping the motion. These cylinders respond in real-time to a light beam’s motion, continuously turning in a wide range of directions.
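This self-correcting behavior can be sketched as a simple feedback loop (an illustrative toy model of ours, not the authors’ actual photothermal physics; the gain and step count are arbitrary assumptions):

```python
# Toy feedback model of the light-tracking cylinder. Only the lit side
# heats and contracts, so the tip turns toward the beam at a rate
# proportional to its misalignment (gain is an arbitrary assumption).
def track(beam_angle, tip_angle=0.0, gain=0.3, steps=50):
    for _ in range(steps):
        tip_angle += gain * (beam_angle - tip_angle)  # contraction on lit side
    return tip_angle

print(f"tip settles at ~{track(45.0):.1f} degrees")  # aligns with the beam
```

Once the tip is aligned, the driving term vanishes, mirroring how the cooled, re-expanding underside stops the cylinder’s motion.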

The authors claim that the material could improve the efficiency of light-harvesting devices. For instance, it could be employed in solar cells that always face the sun without the need for any external power input.

NASA releases beautiful new animation of a black hole

A beautiful new animation produced by NASA helps visualize the relationship between gravity, time, and space.

Image credits NASA Goddard Space Flight Center / Jeremy Schnittman.

Researchers at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, have generated a new animation of a black hole and its surrounding matter disk. The animation is based on radio images of a black hole at the core of galaxy M87 taken by the Event Horizon Telescope.

Bendy time

“Simulations and movies like these really help us visualize what Einstein meant when he said that gravity warps the fabric of space and time,” says Jeremy Schnittman, Ph.D., the NASA astrophysicist who generated these gorgeous images using custom software.

Schnittman’s work helps to showcase how the immense gravity around a black hole distorts the way we perceive its surroundings. That halo-like structure is, in fact, a disk. This accretion disk is a relatively thin mass of gas falling into the black hole; we see it in the particular shape shown above because gravity bends the disk’s light around the black hole, much like bending a picture of the disk.

Gas in accretion disks is very hot (through a combination of friction and compression), so it radiates in different parts of the electromagnetic spectrum. Disks around the youngest stars glow in infrared, but the disk in this animation is energetic enough to glow in X-rays. It ripples and flows as magnetic fields move along its bulk, creating brighter and dimmer bands in the disk.

The gas also moves faster the closer it gets to the black hole, approaching the speed of light nearest to it. In the animation above, this makes the left side look brighter than the right due to relativistic Doppler beaming: light from gas moving toward us is brightened, while light from receding gas is dimmed.
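The size of that asymmetry can be sketched with the relativistic Doppler factor (illustrative numbers only; the β = 0.5 orbital speed and the cubed-brightness scaling are our assumptions, not taken from NASA’s simulation):

```python
import math

# Relativistic Doppler beaming: gas orbiting toward us is brightened,
# gas receding from us is dimmed. Illustrative, not NASA's model.
def doppler_factor(beta, cos_theta):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * cos_theta))

beta = 0.5  # gas moving at half the speed of light (assumed)
approaching = doppler_factor(beta, cos_theta=+1.0)
receding = doppler_factor(beta, cos_theta=-1.0)

# Observed brightness scales roughly with the cube of the Doppler factor.
print(f"brightness ratio ~ {(approaching / receding)**3:.0f}x")  # prints 27x
```

Even at half the speed of light, the approaching side can outshine the receding side by more than an order of magnitude, which is exactly the lopsided look of the animation.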

The thin line of light that seemingly outlines the black hole is its “photon ring”. You’re actually looking at the underside of the disk, its image bent back to us by the massive gravitational pull there. What we see as the photon ring is made up of several layers that grow progressively thinner and dimmer — this is light that’s been bent several times around the black hole before escaping for our telescopes to capture. Schnittman’s model uses a spherical black hole, so here the photon ring looks identical from every angle.

“Until very recently, these visualizations were limited to our imagination and computer programs,” Schnittman says. “I never thought that it would be possible to see a real black hole.”

What light pollution is and what we can do about it

Light isn’t the first thing that comes to mind when you think about pollution — but it can be just that.

Image via Pixabay.

It’s very hard to look at today’s towns and cities and imagine that, for the longest time in human history, if you wanted a night light your best bet was a full moon. But our enjoyment of lights has also given rise to an interesting, if surprising, type of pollution: light pollution.

The bad light

We hairless apes rely on sight quite a bit, but we have a very hard time seeing in the dark. As such, we’ve put considerable effort into keeping things bright after the sun retires for the day.

And there lies the rub. Availability of light definitely helps, but too much starts having a negative impact on humans and ecosystems alike by impacting natural activity patterns. For us humans, nighttime exposure to light has been linked to disruption of our circadian rhythms (our ‘body clocks’) and related possible conditions: obesity, depression, sleeping disorders, and cancer, among others.

Image credits Arek Socha.

The effects on wildlife are more varied, from relatively minor to outright deadly (such as for moths). These effects almost always extend to entire ecosystems. Light can impact the success (and thereby population numbers) of species that compete over the same ecological role or ‘niche’, and the waking hours of various species. The balance between predators and prey may also be affected.

Unlike chemical or radioactive pollution, light pollution isn’t directly toxic or destructive, but it still affects living organisms (which, in general, time their activity to natural light cycles). We still have an incomplete understanding of how light pollution affects us or the environment.

Types and sources of light pollution

Light pollution, at its simplest, is the addition of artificial light in environments where natural light may or may not be present. Sources include exterior and interior lighting on buildings, outdoor lights such as in parking lots, advertising, or streetlights. If you’re trying to sleep but a bright light outside is beaming right into your face, you’re experiencing the effects of light pollution.

Levels of light pollution at night.
Image credits New World Atlas of Artificial Sky Brightness / Nataliya Rybnikova via Geekwire / YouTube.

The intensity, timing, quantity, and quality of emitted light can each contribute to the adverse effects of light pollution.

Glare is an example of light becoming a pollutant through its intensity. A very bright light can cause instant (but temporary) loss of vision, and can lead to long-term vision deficiencies. Less extreme glare will still impair your vision by causing a loss of contrast in the eye (that’s why we squint on bright days) or be a nuisance by causing discomfort.

The timing of light is what interferes with our circadian rhythms. Artificial light is typically used when natural light isn’t available, so in essence, it makes daytime ‘last longer’ than it should. A 1985 study on photo pollution reports that “for many nocturnally active animals a natural light-field between sunset and sunrise is a requirement for survival,” and light pollution prevents those conditions from forming. Prolonging daytime also affects brain wave patterns, hormone production, cell regulation mechanisms, and other biological activities.

If you can’t spot any stars in the sky at night, you’re dealing with a light quantity issue. This aspect of light pollution is most evident at night as skyglow, and it’s very hard to find a spot free from it in developed countries. Skyglow is the diffuse brightening of the night sky produced when artificial light emitted at the surface scatters in the atmosphere. It’s why clouds appear bright over an empty night sky in the city, and black over a starry one in remote areas.

Image taken at the Paranal Observatory, Cerro Paranal, Chile during a total lunar eclipse.
Image credits Yuri Beletsky via Wikimedia.

Light clutter is another example of a quantity issue. Too many lights bunched up together draw your attention and can become confusing and tiring. Both light clutter and glare are major safety concerns for traffic, especially at night.

The quality of the light itself can also be a problem for species that pick up on light polarization — this is known as polarized light pollution — according to a 2009 study.

How to do something about it and why

Unlike other types of pollutants, light can be managed very easily while still allowing fixtures to perform their intended role. Some of the ways you can help reduce light pollution include:

  • If available, use LEDs and lights that emit a warm glow. Blue light (many LEDs emit a blueish light) has a shorter wavelength and scatters more readily in the atmosphere, so it contributes disproportionately to skyglow, and you don’t want that.
  • Shield outdoor fixtures with opaque covers to keep light from being emitted directly to the sky. You can either buy them with the caps or have a DIY day of making some.
  • Use exterior fixtures with cutoff angles. This helps reduce glare, skyglow, and improves visibility by focusing the light where it’s needed. Unshielded light fixtures emit between 30 and 50% of their light skyward or sideways, which is wasted light. The Illuminating Engineering Society of North America (IESNA) defines several outdoor cutoff classifications you can check out for inspiration.
  • Obviously, turning off lights when you don’t need or want them on and using sensors to turn them on only when needed is a great way of reducing light pollution.

Light pollution is pervasive throughout most of the world and, like dads everywhere like to remind us, light doesn’t come for free. Futurism reports that the United States alone wastes 22,000 gigawatt-hours of energy a year on light pollution. They say this translates to roughly “$2.2 billion a year – enough to fund a new mission to Mars […] 3.6 tons of coal or 12.9 million barrels of oil [annually].”
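The dollar figure is consistent with the energy figure if you assume an average electricity price of about $0.10 per kilowatt-hour (the rate is our assumption; Futurism doesn’t state which price it used):

```python
# Sanity-check of the wasted-light cost (assumed average electricity
# price; the $0.10/kWh rate is our assumption, not Futurism's).
wasted_gwh_per_year = 22_000
wasted_kwh = wasted_gwh_per_year * 1_000_000  # 1 GWh = 1e6 kWh
price_per_kwh = 0.10                          # assumed USD per kWh
cost = wasted_kwh * price_per_kwh
print(f"~${cost / 1e9:.1f} billion per year")  # ~$2.2 billion
```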

If money doesn’t convince you, think of all the uninterrupted sleep we could all be getting if only we’d cut down on light pollution.

MIT develops programmable, color-changing dyes that you can spray on basically anything

If you’ve ever been envious of chameleons, rejoice! New research is bringing their color-changing properties to a dye near you.

Image credits Yuhua Jin et. al, 2019.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have designed a new, reprogrammable ink that can change color when exposed to ultraviolet (UV) and visible light sources. Dubbed “PhotoChromeleon,” the dyes can be used on anything from phone cases to cars and are resilient enough to be used in natural environments without rubbing off.

Color, changed

“This special type of dye could enable a whole myriad of customization options that could improve manufacturing efficiency and reduce overall waste,” says CSAIL postdoc Yuhua Jin, the lead author of the new study.

“Users could personalize their belongings and appearance on a daily basis, without the need to buy the same object multiple times in different colors and styles.”

PhotoChromeleon dyes are a mix of photochromic dyes that can be sprayed or painted onto the surface of any object to change its color. The process is fully reversible and can be repeated infinitely, the team explains.

The dye comes as a further development of the team’s previous “ColorMod,” a process that uses 3D printers to fabricate items that can shift their color. PhotoChromeleon addresses several of the limitations of ColorMod that the authors weren’t very happy with, namely its limited color scheme and low-resolution results.

ColorMod relies on individually-printed 3-D pixels, and there’s a limit to how small they can be built. Overall, this makes the resolution of the finished product feel a bit grainy. As far as the colors are concerned, these pixels can only shift between two states, one showing its original color, and a transparent one. Again, cool, but not as cool as it could be.

The team’s work definitely paid off, however. They explain that PhotoChromeleon ink can be used to create anything from a landscape to zebra patterns to a flame pattern using a larger palette of colors.

PhotoChromeleon is made by mixing cyan, magenta, and yellow (CMY) photochromic dyes (i.e. dyes that change color when exposed to light) into a sprayable solution. By understanding how each dye interacts with different wavelengths, the team can control each color channel by activating and deactivating them with the corresponding light sources.

More specifically, they use three different types of light with different wavelengths to ‘eliminate’ each of the primary colors. For example, blasting the dyes with blue light inactivates the yellow dye (its chromatic opposite), leaving only magenta and cyan, which gives an overall blue appearance. Blast it with green light and it turns green, by inactivating the magenta dye and leaving cyan and yellow.
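Here’s a minimal sketch of that channel logic (our simplification, not the paper’s scheme; the clean one-light-deactivates-one-dye mapping is an idealized assumption):

```python
# Idealized CMY channel logic: each colored light 'switches off' its
# complementary dye, and the remaining dye pair determines the color.
def apply_light(dyes, light):
    complements = {"blue": "yellow", "green": "magenta", "red": "cyan"}
    return dyes - {complements[light]}

def appearance(dyes):
    # Subtractive mixing: the surviving dye pairs make the RGB primaries.
    looks = {
        frozenset({"cyan", "magenta"}): "blue",
        frozenset({"cyan", "yellow"}): "green",
        frozenset({"magenta", "yellow"}): "red",
    }
    return looks.get(frozenset(dyes), "other")

saturated = {"cyan", "magenta", "yellow"}       # all dyes active
print(appearance(apply_light(saturated, "blue")))   # prints "blue"
print(appearance(apply_light(saturated, "green")))  # prints "green"
```

The real system modulates each channel continuously rather than flipping it on or off, but the complementary-color bookkeeping is the same.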

“By giving users the autonomy to individualize their items, countless resources could be preserved, and the opportunities to creatively change your favorite possessions are boundless,” says MIT Professor Stefanie Mueller.

Using the dyes is as simple as spraying them over an object, then placing it in a box with a projector and UV light. The UV light first ‘saturates’ the dyes, turning them from transparent to visible, and the projector ‘desaturates’ them as needed (this produces the final color or color patterns). To reset the whole lot, all you have to do is blast it again with UV light and start over.

To help you get the exact finish you want, the team also developed a user interface to process designs and patterns for projection onto the desired items. Users can upload a blueprint and the program handles mapping it (i.e. bending and applying it) onto the object.

As proof-of-concept tests, the team used their dye on a car model, a phone case, a shoe, and (quite fittingly) a small toy chameleon. Depending on the shape and orientation of the object, the process took between 15 and 40 minutes; the patterns all had high resolutions and could be successfully erased when desired.

While PhotoChromeleon is definitely more capable than its predecessor, it’s not perfect. The team didn’t have access to photochromic dyes that perfectly match magenta or cyan, so they used close approximations. In the future, they plan to collaborate with materials scientists to improve their dyes.

The paper “Photo-Chromeleon: Re-Programmable Multi-Color Textures Using Photochromic Dyes” has been published in the Proceedings of UIST 2019.

New research finds the neurons that make mice itchy

New research is looking into how our bodies sense and transmit itchiness to the brain.

Light touches play an important role in our daily lives. Between cuddling, picking up fragile objects, and performing tasks that require precision, we use the sensation to guide many of our activities. It’s also an essential part of the body’s defense system, telling us, among other things, if we’re covered in biting insects such as ticks or mosquitoes, via that oh-so-pleasant sensation of itchiness.

Creepy crawlies

“The takeaway is that this mechanical itch sensation is distinct from other forms of touch and it has this specialized pathway within the spinal cord,” says Salk Institute Professor Martyn Goulding, senior author of the new study.

The team looked at how neurons in the spinal cord carry these itchy signals to the brain. They hope that the findings will help lead to new drugs to treat chronic itch, which occurs in such conditions as eczema, diabetes, and even some types of cancer.

Goulding and his colleagues had previously found a set of inhibitory neurons in the spinal cord that keep the itchiness pathway locked down most of the time. Inhibitory neurons act as brakes on neural circuits, dampening their activity. Without these neurons — which produce the neurotransmitter neuropeptide Y (NPY) — the pathway is constantly active, causing chronic itching.

What the team wanted to find out next was how the signal encoding this sensation is transmitted to the brain, making us feel the itch. One of the team’s hypotheses was that when NPY inhibitory neurons are missing, the nerve bundles in the spinal cord that transmit light touch get stuck on the “on” setting, creating a self-amplifying loop. The team identified a population of such (excitatory) neurons in the spinal cord that express the receptor for NPY, the so-called Y1 spinal neurons.

To test if these were indeed behind the self-amplifying loop of itchiness, the team selectively removed the NPY “brake” and Y1 “accelerator” neurons in mice to see the effects.

Without Y1 neurons, they report, the mice didn’t scratch, not even in response to light-touch stimuli that normally make them scratch. When the team gave them drugs to activate the Y1 neurons, the mice scratched spontaneously even in the absence of any touch stimuli. The team was then able to link NPY neurotransmitter levels to Y1 neuron excitability — showing that NPY controls our sensitivity to light touch. The findings are also supported by other research which found that people with psoriasis have lower than average levels of NPY.

While the study shows how itchy signals go through the spinal cord, more research is needed to understand the full pathway. There are other neurons that likely mediate its transmission and final response in the brain, the team explains.

“By working out mechanisms by which mechanical itch is signaled under normal circumstances, we might then be able to address what happens in chronic itch,” says David Acton, a postdoctoral fellow in the Goulding lab and the study’s first author.

The paper “Spinal Neuropeptide Y1 Receptor-Expressing Neurons Form an Essential Excitatory Pathway for Mechanical Itch” has been published in the journal Cell Reports.

The most energetic light recorded thus far hits Tibetan plateau

Crab Nebula as seen by Hubble and Herschel. Credit: Wikimedia Commons.

An experiment involving over 600 particle detectors stretched over 36,900 square meters has measured the most energetic light ever witnessed on this planet. The photons were part of gamma rays emanating from the famous Crab Nebula, the remains of a supernova first observed in 1054 AD, located approximately 6,500 light-years away. The photons carried tremendous energies exceeding 100 trillion electronvolts (TeV), with one measurement clocking in at 450 TeV, the highest ever recorded. Previously, photons measuring no more than tens of trillions of electronvolts had been recorded.

Physicists set up the Tibet Air Shower Gamma Collaboration, an observatory on the Tibetan Plateau some 4,300 meters above sea level, because the rarefied air at this altitude allows more secondary particles to reach the detectors. Secondary subatomic particles are created when cosmic rays and gamma rays interact with particles in the upper atmosphere.

By measuring and excluding muon particles — an elementary subatomic particle similar to the electron but 207 times heavier — physicists were able to backtrack the energy and origin of the incoming gamma rays that caused the showers. A total of 24 events caused by intense photons with energies higher than 100 trillion electronvolts were reported. To get a sense of the scale involved, regular photons that emanate from the sun — particles of visible light — have an energy of only a few electronvolts.
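To get a feel for just how extreme the record photon is, compare it against an ordinary visible-light photon (the ~2.5 eV figure is a typical visible-light value we’re assuming):

```python
# Scale comparison: the 450 TeV Crab Nebula photon vs. ordinary sunlight.
visible_ev = 2.5     # a typical visible-light photon, ~2-3 eV (assumed)
record_ev = 450e12   # the record 450 TeV gamma-ray photon
ratio = record_ev / visible_ev
print(f"~{ratio:.0e} times more energetic than a sunlight photon")
```

A single one of these gamma-ray photons carries roughly a hundred trillion times the energy of a photon your eye can see.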

Now that scientists have experimental confirmation that high-energy photons reach Earth, they can elaborate a more precise model for how such particles are created and whether or not there’s a limit to how much energy they can carry.

In this particular case, researchers think that the gamma rays were accelerated by a process known as inverse Compton scattering, during which super high-energy electrons bounce off lower-energy photons. Inside the Crab Nebula, electrons may have scattered off low-energy photons from the cosmic microwave background (photons created soon after the Big Bang).

The findings appeared in the journal Physical Review Letters.

Why is snow white?

Every time it snows, the world turns white, even for the briefest of moments. Today we’re taking a look at why that is.

Snow street.

Image via Pixabay.

You likely hear the song “White Christmas” played every time the winter holidays swing around. It goes to show just how deep cultural associations between snow and its color — that striking, pure, sparkling white — run. If you think about it, however, something doesn’t add up. Snow is basically made up of tiny crystals of water (ice) caked one on top of the other. Water isn’t white; nor is ice, for that matter.

Logic dictates that there must be another element coming into the mix to make snow, well, snow-white. There is. To whet your appetite, it’s basically the same process that makes polar bears appear white. So let’s see what it is.

Color me surprised

To get a clearer picture of why snow appears white, we need to take a look at what generates color in the first place.

Our eyes are basically sensors designed to pick up on a particular band of electromagnetic radiation — which, surprise, surprise, we call the ‘visible light’ spectrum. We perceive different wavelengths or intervals of this spectrum as different colors: longer waves look red to us, while shorter waves appear blue.

Light is pretty much like any other type of radiation. When it hits an object, it can pass through, be absorbed, or be reflected. Objects take on different colors because their individual building blocks (atoms or molecules) vibrate in response to different frequencies of energy (such as that carried by light). They absorb a particular band of energy to sustain this vibration — transforming it into heat. The light frequencies that don't get absorbed can keep going through the material (which makes it transparent or translucent) or get reflected (making the material opaque).

What you see as ‘color’ is the blend of all energy intervals or bands from the visible spectrum that a material doesn’t absorb. Think of white light as a sum of all the colors canceling each other out. To get a particular shade, then, you need to do one of two things. You can subtract its opposite, which we call its ‘complementary’ (here’s a handy color wheel), from the mix, leaving that particular color ‘uncanceled’. Alternatively, you can absorb all other wavelengths and reflect only the color you want.

As an example, leaves appear a fresh green because chlorophyll absorbs the wavelengths corresponding to red and blue light, while most of the green gets reflected — and what's reflected creates their color. It's particularly interesting to note that sunlight is heavy in the green wavelengths. Plants harvest the red and blue bands and let much of that intense green peak go; soaking up all of it on top of everything else would overwhelm the leaves' biochemical gears.

Don’t judge a snow by its color

If you put a chunk of ice next to a handful of snow, it’s pretty easy to tell that their colors do not match. One looks basically like solid water while the other is all glimmery, white, and definitely not transparent. So what gives?

Well, first off, a word to the wise: ice isn't transparent — it's translucent. The water molecules in ice are packed closely enough to alter light waves as they pass through. Think of it like the light having to squeeze between these molecules. It doesn't bother the light very much, but it does ‘bend’ its trajectory a little. Put your finger in a glass of water, and the submerged part will look skewed compared to the rest of your hand; it's the same process at work.
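That 'bending' is refraction, described by Snell's law. A minimal sketch of how much a ray deviates when it enters ice (the refractive index of ice, ~1.31, is a textbook value, not a figure from this article):

```python
import math

# Snell's law: n1 * sin(i) = n2 * sin(r), where i and r are the angles
# of incidence and refraction measured from the surface normal.
n_air, n_ice = 1.00, 1.31  # assumed textbook refractive indices

def refraction_angle(incidence_deg, n1, n2):
    """Angle (degrees) of the refracted ray inside the second medium."""
    sin_r = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(sin_r))

angle = refraction_angle(45.0, n_air, n_ice)
print(f"A ray hitting ice at 45° continues at ~{angle:.1f}° from the normal")
```

A modest kink per crystal — but snow stacks thousands of such crystals, and the kinks add up.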

Shape and size also make an appearance here. Snow is made up of many tiny ice crystals stacked together. When light encounters snow, it goes through the first layer of crystals and gets bent a little. From there, it passes to a new crystal, and the process repeats. Kind of like in a disco ball, the snow keeps refracting light until it's bent right out of the pile. Since ice is translucent (it doesn't appreciably absorb any visible wavelength), the color of this light isn't altered — so it's still white when it exits the pile of snow and hits your retina.

Powder snow.

Matte but glittery.
Image via Pixabay.

The small size of the ice crystals in snow also gives it that ‘matte but glittery’ look. Smooth objects reflect light specularly, like a mirror. Rough surfaces scatter the light they reflect instead, which is why we can perceive texture just by looking at an object. Each crystal in snow is smooth, so each reflects light specularly. From the right angle, you can see this as tiny, bright reflections on the ice. Clumped together, however, the crystals scatter light overall. And because the light falling on it helps create its color, snow can take on shades of blue, purple, or even pink in certain circumstances — when it's in shadow, for example.

As for the polar bears, they’re not really white. Their fur is actually pretty dark in color. Polar bears’ coats are made of two layers of hairs, one short and thick, the other a bit longer and more sparse. This second, longer coat is made up of transparent hairs with hollow interiors. Much like in the case of snow, light falling on these hairs scatters (thanks to light-scattering particles inside the hollow cores) and is reflected back out, giving the bears a white appearance. Salt particles in between the hairs left over from ocean water evaporating after a swim further enhance this effect.

Physicists propose a new way to levitate and propel spacecraft-sized objects with light

Artist concept of nano-patterned object reorienting itself to remain in a beam of light.

In the future, spacecraft could travel to other stars faster than anything currently available by using laser light sources that are millions of miles away. For the moment, this prospect has been explored only theoretically by physicists at Caltech. In their new study, the researchers propose levitating and propelling objects using a beam of light by etching the surface of those objects with specific nanoscale patterns.

A pattern that keeps objects afloat

For decades, researchers have been using so-called optical tweezers to move and manipulate microscopic objects (i.e. nanoparticles) using a focused laser beam. Nanoparticles can be suspended mid-air due to the light scattering and gradient forces resulting from the interaction of the particle with the light. Such devices have been used to trap small metal particles, but also viruses, bacteria, living cells, and even strands of DNA. For his contributions to developing optical tweezers, Arthur Ashkin was awarded the 2018 Nobel Prize in Physics.

However, optical tweezers are limited by distance and the size of the objects. Essentially, only very small objects can be manipulated with light in this fashion and only from close range.

“One can levitate a ping pong ball using a steady stream of air from a hair dryer. But it wouldn’t work if the ping pong ball were too big, or if it were too far away from the hair dryer, and so on,” Ognjen Ilic, a postdoc at Caltech and the study’s first author, said in a statement.

In their new study, Ilic and colleagues have proposed a radically new way to use light to trap or even propel objects. Theoretically, their method is not limited by an object's size or distance from the source, which means macroscopic objects such as spacecraft could be accelerated, perhaps even close to relativistic speeds, using the force of light alone.

For this to work, certain nanoscale patterns need to be etched on an object’s surface. When the concentrated laser beam hits this patterned surface, the object should begin to “self-stabilize” by generating torque to keep it in the light beam. The authors say that the patterning is designed in such a way as to encode the object’s stability.

This would work for objects of any size, from a grain of rice to a spaceship. The light source could also be millions of miles away, which would make this technology ideal for powering a light sail for space exploration.

“We have come up with a method that could levitate macroscopic objects,” said Harry Atwater, Professor of Applied Physics and Materials Science in Caltech’s Division of Engineering and Applied Science. “There is an audaciously interesting application to use this technique as a means for propulsion of a new generation of spacecraft. We’re a long way from actually doing that, but we are in the process of testing out the principles.”

The findings were reported in the journal Nature Photonics.
What causes Blood Moons? The same thing that makes skies blue

When the Moon turns bloody, it’s Earth at work.

Blood Moon.

Image via Pixabay.

Humanity has always kept an eye on the heavens. Societies lived and died by natural cycles, and these orbs in the sky seemed to dictate the rhythm of life — so they imposed themselves as central players in our mythoi. The imprint they left on our psyche is so deep that to this day, we still name heavenly bodies after gods.

But two players always commanded center stage: the Sun and the Moon. One interaction between the two is so particularly striking that virtually all cultures regarded it as a sign of great upheaval: the blood moon. Its perceived meaning ranges from the benign to the malevolent. Blood moons drip with cultural significance, and we’ll explore some of it because I’m a huge anthropology nerd.

But they’re also very interesting events from a scientific point of view, and we’ll start with that. What, exactly, turns the heavenly wheel of cheese into a bloody pool? Well, let me whet your appetite by saying that it’s the same process which produces clear blue skies. Ready? Ok, let’s go.

The background

Geometry of a lunar eclipse.

The geometry of a lunar eclipse.
Image credits Sagredo / Wikimedia.

For context’s sake, let’s start by establishing that the moon doesn’t shine by itself. It’s visible because it acts as a huge mirror, beaming reflected sunlight down at night. During a total lunar eclipse, the Earth passes between the Sun and the Moon, blocking sunlight from hitting its surface. Blood moons happen during such lunar eclipses. A sliver of light is refracted (bent) in the atmosphere, passing around the Earth and illuminating the Moon. This is what gives it that reddish colo

It all comes down to how light interacts with our planet's atmosphere, most notably through a process called Rayleigh scattering: the scattering of electromagnetic radiation by particles much smaller than its wavelength.

For context’s sake part deux, what our eyes perceive as white light is actually a mix of all the colors we can see. Each color is generated by a particular wavelength interval (more here).

Boiled down, different bits of light get scattered more or less depending on their wavelength. The effect is quite potent: roughly a quarter of the light incoming from the Sun gets scattered — the exact figure depends on fluctuating atmospheric properties, such as the amount of particles floating around in it — and some two-thirds of this scattered light still reaches the surface as diffuse sky radiation.

The Blood Moon

As a rule of thumb, our atmosphere is much better at scattering short wavelengths (violets and blues) than long ones (oranges and reds) — the strength of Rayleigh scattering grows steeply as wavelengths shrink, roughly with the inverse fourth power of wavelength. ‘Scattering’ basically means ‘spreading around’, and this is what makes the sky look blue for most of the day. The scattering itself sends light off in all directions, but its perceived effect depends on where you're looking from.

When the sun is high in the sky, light falls roughly vertically on our planet; as such, it passes through a relatively short span of the atmosphere. Let's denote this rough length with ‘a’.

The light of dawn and dusk arrives tangentially (horizontally) to the planet. It thus has to pass through a much longer span of the atmosphere than it does at noon. Blues get scattered away over the first stretch of length ‘a’, just like before — but then the light has to pass through yet more air, so the greens (the next-shortest wavelengths) get dispersed too. That's why the sky at dawn and dusk appears red or yellow (the remaining wavelengths).
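That inverse-fourth-power dependence is what makes the effect so lopsided. A quick sketch (the wavelengths are assumed representative values for blue, green, and red light):

```python
# Rayleigh scattering strength scales as 1/wavelength^4, so short (blue)
# wavelengths scatter far more strongly than long (red) ones.
blue_nm, green_nm, red_nm = 450.0, 550.0, 650.0  # assumed typical values

def relative_scattering(wavelength_nm, reference_nm=red_nm):
    """How strongly a wavelength scatters, relative to red light."""
    return (reference_nm / wavelength_nm) ** 4

print(f"Blue scatters ~{relative_scattering(blue_nm):.1f}x more than red")
print(f"Green scatters ~{relative_scattering(green_nm):.1f}x more than red")
```

Blue light scatters roughly four times more strongly than red over the same stretch of air — which is why blues are stripped out first, greens second, and reds survive the long tangential path of dawn and dusk.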

The same mechanism is at work during a blood moon. Light passing through the Earth’s atmosphere gets depleted in short wavelengths, making it look yellowy-red. This makes the Moon appear red as it reflects red light back to our eyes.

One cool effect of this dispersion is that blood moons sometimes exhibit a blue-turquoise band of color at the beginning and just before the end of the eclipse. It's produced by light that passes through the ozone layer in the upper atmosphere. Ozone absorbs primarily red light, leaving the blues mostly intact.

Cultural meanings

Many ancient civilizations looked upon the blood moon with concern: in their eyes, this was an omen that evil was stirring.

“The ancient Inca people interpreted the deep red colouring as a jaguar attacking and eating the moon,” Daniel Brown wrote for The Conversation. “They believed that the jaguar might then turn its attention to Earth, so the people would shout, shake their spears and make their dogs bark and howl, hoping to make enough noise to drive the jaguar away.”

Some Hindu traditions hold that the Moon turns red because of an epic clash between deities. The demon Swarbhanu tricks the Sun and Moon to get a sip of the elixir of immortality. As punishment, Vishnu (a primary god of Hinduism) cuts off the demon's head — which lives on as Rahu.

Understandably upset by the whole experience, Rahu chases the sun and moon to devour them. An eclipse only happens if Rahu manages to catch one of the two. Blood Moons form when Rahu swallows the moon and it falls out of his severed neck. Several things, such as eating or worshiping, are prohibited, as Hindu traditions hold that evil entities are about during an eclipse.

Other cultures took a more compassionate view of the eclipsed moon. The Native American Hupa and Luiseño tribes of California, Brown explains, thought the moon was wounded or fell ill during such an event. To help the moon's wives heal it, the Luiseño would sing and chant healing songs under the open sky.

My personal favorite, however, is the approach of the Batammaliba people, who live in the nations of Togo and Benin in Africa. Their traditions hold that a lunar eclipse is a conflict between the Sun and Moon — and that we little people must encourage them to bury the hatchet. Such events are thus seen as an opportunity to lay old animosities and feuds to rest.

I’m definitely going to try that during the next blood moon.

World’s fastest camera captures 10 trillion frames per second

Thought your iPhone’s camera can shoot sick slow-mos? Here’s something a bit more impressive.

Credit: INRS.

Researchers at Caltech and L’Institut national de la recherche scientifique (INRS) devised the world’s fastest camera, which can shoot an incredible 10 trillion frames per second. It’s so fast that it can capture the interactions between matter and light at the nanoscale. The new camera more than doubles the number of frames per second set by the previous record holder, a device developed by researchers in Sweden.

T-CUP, officially the world's fastest camera, is based on ‘compressed ultrafast photography’ technology. It works by combining a femtosecond streak camera with a static camera, and an image-reconstruction technique called the Radon transform rounds out the setup.

Such ultra-fast cameras will prove useful to physicists looking to probe the nature of light and how it travels. Other potential applications include medicine and engineering.

“We knew that by using only a femtosecond streak camera, the image quality would be limited. So to improve this, we added another camera that acquires a static image. Combined with the image acquired by the femtosecond streak camera, we can use what is called a Radon transformation to obtain high-quality images while recording ten trillion frames per second,” said Lihong Wang, the Bren Professor of Medical Engineering and Electrical Engineering at Caltech.

Real-time imaging of temporal focusing of a femtosecond laser pulse at 2.5 Tfps. Credit: Jinyang Liang, Liren Zhu & Lihong V. Wang.

During a test, T-CUP captured a femtosecond laser pulse (a femtosecond is a millionth of a billionth of a second), recording 25 images spaced 400 femtoseconds apart. The resolution and staggering timescale involved allowed the research team to record changes in the light pulse's shape, intensity, and angle of inclination.
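The arithmetic behind those figures is easy to check (a sketch using only the numbers reported above):

```python
# Frame-interval arithmetic for the reported T-CUP figures.
fps = 10e12                   # 10 trillion frames per second
frame_interval_s = 1.0 / fps  # time between consecutive frames

print(f"Frame interval: {frame_interval_s * 1e15:.0f} fs")  # 100 fs

# The test shot: 25 images spaced 400 fs apart span
span_fs = (25 - 1) * 400
print(f"Recorded span: {span_fs} fs = {span_fs * 1e-15:.2e} s")
```

At 10 trillion frames per second, one frame elapses every 100 femtoseconds — so the 400 fs spacing in the test shot corresponds to keeping every fourth frame.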

A femtosecond laser pulse passing through a beam splitter. Credit: INRS.

The level of precision obtained by the researchers is unprecedented — and they'd like to go even further. According to co-author Jinyang Liang, there are ways to increase the speed up to one quadrillion (10¹⁵) frames per second. Recording the behavior of light at such a scale is beyond our current technology, but once it becomes reality, entirely new fields of physics could open up.

T-CUP was described in the journal Light: Science and Applications. 

SETI project uses AI to track down mysterious light source

Credit: Breakthrough Listen.

Last year, astronomers tasked with hunting alien signals identified 21 repeating light pulses emanating from a dwarf galaxy located some 3 billion light-years away. The source could be a fast-rotating neutron star — or it could be alien technology, perhaps meant to propel a space-sailing craft. Now, the researchers have used artificial intelligence to pore through the dataset, discovering 72 new fast radio bursts generated by the mysterious light source.

Fast radio bursts (FRBs) are bright pulses of radio emission mere milliseconds in duration. The signals were acquired by the Green Bank Telescope in West Virginia and initially analyzed through traditional methods by Breakthrough Listen — a SETI project led by the University of California, Berkeley; the original 21 bursts all arrived within a single hour.

What sets the source in question — called FRB 121102 — apart from other on-off fast radio bursts is that the emitted bursts fired in a repeated pattern, alternating between periods of quiescence and frenzied activity.

Since the first readings made on August 26, 2017, the team of astronomers has devised a machine-learning algorithm that scoured through 400 terabytes of data recorded over a five-hour-long period.

The machine-learning algorithm, called a “convolutional neural network”, is of the kind often employed by tech companies to rank online search results or sort images. It found an additional 72 bursts that weren't detected originally, bringing the total number of bursts detected from FRB 121102 to around 300 since the source was discovered in 2012.

“This work is exciting not just because it helps us understand the dynamic behavior of fast radio bursts in more detail, but also because of the promise it shows for using machine learning to detect signals missed by classical algorithms,” said Andrew Siemion, director of the Berkeley SETI Research Center and principal investigator for Breakthrough Listen, the initiative to find signs of intelligent life in the universe.

The mystery still lingers, though. We still don’t know much about FRBs or what produced this sequence, but the new readings help put some new constraints on the periodicity of the pulses generated by FRB 121102. It seems like the pulses are not fired all that regularly after all, at least not if the pattern is longer than 10 milliseconds. More observations might one day help scientists figure out what is driving these enigmatic light sources, the authors of the new study wrote in The Astrophysical Journal.

“Whether or not FRBs themselves eventually turn out to be signatures of extraterrestrial technology, Breakthrough Listen is helping to push the frontiers of a new and rapidly growing area of our understanding of the Universe around us,” said UC Berkeley Ph.D. student Gerry Zhang.

Scientists have calculated the force of a photon hitting an object

An international team of researchers has finally been able to calculate the momentum of light.

Light night.

Image credits Felix Mittermeier.

Light exerts a minute pressure on the objects it interacts with. Finding the exact value of this pressure is a quest that scientists have pursued for nearly 150 years now. Today, a team of researchers has finally cracked it.

A light touch

Photons, although lacking in mass, do have momentum — so when they hit an object, they apply a force onto it.

This idea first surfaced in science in 1619, in a treatise by the German mathematician and astronomer Johannes Kepler, who believed that the pressure exerted by light was the reason a comet's tail always points away from the Sun. In 1873, the Scottish physicist James Clerk Maxwell proposed that light is a form of electromagnetic radiation — meaning it carries momentum and can thus exert pressure on matter.

Maxwell’s hypothesis turned out to be true. However, because the momentum of light is extremely tiny, the pressure it exerts is also exceedingly low — so measuring it directly is next to impossible.

“Until now, we hadn’t determined how this momentum is converted into force or movement,” explains coauthor and engineer Kenneth Chau of the University of British Columbia, Okanagan Campus, Canada.

“Because the amount of momentum carried by light is very small, we haven’t had equipment sensitive enough to solve this.”

We still don’t have any piece of equipment sensitive enough to measure this momentum — which makes the current findings all the more impressive. Chau’s team — which includes members from Slovenia and Brazil — found a way to work around this limitation, however.

The device they built is based around a mirror. The team fitted highly sensitive acoustic sensors to it, then encased the contraption in several layers of heat-shielding material to protect it from outside interference. The last step was to shoot laser pulses at the mirror.

As photons in the laser hit the mirror, they apply pressure which generates movement (elastic waves) across its surface. The acoustic sensors measured these waves, which the team later used to calculate the pressure generated by individual photons.

Surface displacement.

Surface displacements caused by the laser. “Displacement” is measured in femtometers (quadrillionths of a meter).
Image credits Tomaž Požar et al., 2018, Nature Communications.

“We were able to trace the features of those waves back to the momentum residing in the light pulse itself, which opens the door to finally defining and modeling how light momentum exists inside materials.”

The research provides the framework from which researchers can refine the value. An accurate value of radiation pressure could have wide-ranging applications, from better optical tweezers — scientific instruments that use highly focused laser beams to manipulate particles down to the scale of a single atom — to more efficient solar sails that will let us zip about the universe without the need for fuel.

“We’re not there yet,” Chau said, “but the discovery in this work is an important step and I’m excited to see where it takes us next.”

The paper “Isolated detection of elastic waves driven by the momentum of light” has been published in the journal Nature Communications.

We may have just witnessed a close-by star devour the remnants of a planet

A nearby star may have just consumed a planet, NASA reports.

RW Aur A.

Image credits Chandra X-ray Observatory / Harvard.

Some 450 light years away from Earth, the young star RW Aur A just finished chowing down on a planet — probably.

RW Aur A has captured astronomers' attention ever since 1937. Nestled in the Taurus-Auriga Dark Clouds, which host stellar nurseries containing thousands of infant stars, its light tends to dim “every few decades for about a month,” according to NASA. Needless to say, this has made researchers very curious ever since it was first noticed. Then, back in 2011, something happened to throw all this interest into high gear: the star began dimming far more often, and for longer periods of time.

A groundbreaking feast

To get to the bottom of things, a team of researchers pointed the Chandra X-ray Observatory towards RW Aur A over a five-year period. Chandra is a space telescope first launched in 1999, but which still boasts extremely sensitive X-ray sensors that can make sense of the radiation emitted even by young stars such as RW Aur A.

While young stars can be just as perky as any other, they’re typically shrouded in thick disks of gas, dust, and larger debris — which filter their radiation output and alter their intensity. While this makes less-sensitive instruments practically blind to the shrouded stars, instruments like Chandra can use the ‘filtered’ radiation to estimate what the disks are made of.

And that’s exactly what the team did in this case. According to the paper reporting the findings, Chandra detected surprisingly high levels of iron around RW Aur A. Since previous measurements didn’t record the same concentrations of iron (rather they picked up on much lower levels), the only possible explanation is that an event ejected a huge quantity of the element around the star.

They believe all this iron came from planets — or a few planetesimals — colliding with one another around the star. If any of these bodies were rich in iron, that would explain the high levels seen in the disks around RW Aur A. Chandra recordings from 2017 revealed strong emission from iron atoms, indicating that the disk contained at least 10 times more iron than it did in 2013, during a bright period.

The team speculates that this iron excess comes from a collision of two infant planetary bodies — including at least one object large enough to be a planet — in the space surrounding RW Aur A. Such an event would vaporize a large amount of material from the colliding bodies, including some iron. Furthermore, as the larger chunks of debris fall toward the star under its gravitational tug, they would release even more iron as the intense heat breaks them apart and stellar winds batter them. Taken together, this would explain the high levels of iron observed in the star's corona.

Better yet, it would also explain the dimming we see. As this debris falls into the star, it could be physically obscuring its light.

“If our interpretation of the data is correct, this would be the first time that we directly observe a young star devouring a planet or planets,” says Hans Guenther, who led the study out of MIT’s Kavli Institute for Astrophysics and Space Research.

With this in mind, an alternative explanation is also possible — if far less epic. RW Aur A is part of a binary star system, the sister of (you'll never guess it) RW Aur B. If small grains of iron-rich particles can become trapped in certain parts of a star's disk, and if that disk is perturbed by something massive (say, another star), the resulting interplay of tidal forces could stir up the iron-rich particles — making the disk seem richer in iron as all this dust falls into RW Aur A and obscures its light.

The team plans to continue their observations of the star over the next couple of years to see if iron levels stay constant. If they do, it would point to a massive source of iron (i.e. in favor of the collision scenario); if not, the tidal interaction between the two stars would seem like the more likely choice.

“Much effort currently goes into learning about exoplanets and how they form, so it is obviously very important to see how young planets could be destroyed in interactions with their host stars and other young planets, and what factors determine if they survive,” Guenther says.

Needless to say, I’m rooting for the collision scenario.

The paper “Optical Dimming of RW Aur Associated with an Iron-rich Corona and Exceptionally High Absorbing Column Density” has been published in the journal The Astronomical Journal.

External temperature also influences our circadian rhythms, study reports

It’s not only light that tells our biological clocks it’s time for bed — temperature plays a role, too.

Heat and light.

Image credits Leonardine36 / Pixabay.

Researchers from the University of Michigan report that even mild changes in ambient temperature can influence sleep-wake cycles. The neurons that regulate the body’s circadian clock use thermoreceptors to keep tabs on temperatures outside the body, and use the readings to determine when it’s time for a nap.

The findings help flesh out our understanding of how the mammalian brain regulates wake-sleep cycles; previously, only the influence of light on the circadian rhythm was known.

Chill down, nap on

“Decades of work from recent Nobel Prize winners and many other labs have worked out the details of how light is able to adjust the clock, but the details of how temperature was able to adjust the circadian clock were not well understood,” said Swathi Yadlapalli, first author of the study.

“Going forward, we can ask questions of how these two stimuli are processed and integrated into the clock system, and how this has effects on our sleep behavior and other physiological processes.”

The circadian rhythm, also sometimes referred to as the circadian clock, is a biochemical mechanism that allows living organisms to sync their sleep-wake cycle to the 24-hour cycle of a day. Essentially, it’s our daily rhythm. One of the key factors influencing the workings of this rhythm, perhaps unsurprisingly, are levels of ambient light.

However, temperature also seems to play a big part. Together with Chang Jiang, a postdoctoral researcher at the U-M Department of Mechanical Engineering, Yadlapalli developed an optical imaging and temperature control system. Using it, the duo looked into the neural activity in the circadian clock of fruit flies (Drosophila melanogaster) while they were exposed to heat and cold. Fruit flies were used for the study because the neurons that govern their circadian clocks are strikingly similar to those in humans.

The team reports that colder temperatures excite sleep-promoting neurons, a process which ties external temperature to sleep cycles. Finding such a process in fruit flies suggests these neurons could have similar functionality in humans.

“It looks like clock neurons are able to get the temperature information from external thermoreceptors, and that information is being used to time sleep in the fly in a way that’s fundamentally the same as it is in humans,” Shafer said.

“It’s precisely what happens to sleep in mammals when internal temperature drops.”

Shafer adds that the circadian system creates a daily rhythm in temperature which is an important cue for when nap time comes around. So, while you may think our bodies run at a steady 37°C (98.6°F), “in fact, it’s fluctuating.” As the clock ticks nearer to wakefulness, our circadian system warms the body up. When it’s close to bedtime, it lowers our internal temperature. This effect is independent of the temperature of the room you’re sleeping in.

The paper “Circadian clock neurons constantly monitor environmental temperature to set sleep timing” has been published in the journal Nature.

Light pollution from research ship makes Arctic zooplankton return to the deep

Image credits: Wikipedia.

Scientists have discovered that Arctic zooplankton are very sensitive to light pollution — even the light coming from research ships can send these small organisms sinking back into darkness. It was previously known that the light of a full moon or of the northern lights would make these creatures retreat to deeper waters, but whether ship-borne lights bothered them was still up for debate.

“We did have a suspicion that this was the case,” said Martin Ludvigsen, a professor in NTNU’s Department of Marine Technology and at the university’s Centre of Autonomous Marine Operations and Systems. “We were able to demonstrate this, and show the significance of the lights from the ship,” he added.

Run for the depths!

Zooplankton is the most widespread vertically migrating biomass on Earth (and armed to the teeth). Around the globe, these tiny animals rise to the surface during the night to feed and descend into deeper waters during the day to avoid predators. Research over the last decade shows that even weak moonlight or the northern lights causes zooplankton to retreat to darker waters.

Because of this photosensitivity, scientists have a hard time actually studying zooplankton: if light from their ships reaches the small animals, any count of the population in an area becomes unreliable.

To better understand the effects of light and light pollution on zooplankton, a team of researchers from NTNU, UiT (The Arctic University of Norway), the University of Delaware, and the Scottish Association for Marine Science modified a kayak, equipped it with sensors and a petrol engine, strapped it to a ship, and set out to sea. Once in open waters, the kayak, dubbed the Jetyak, was sent away from the research vessel and used to measure the depth reached by artificial light, as well as to record the thickness of the plankton layer via sonar.

The “Jetyak” and a part of the research team.
Photo: Geir Johnsen, NTNU/UNIS

The acoustic data collected by the autonomous Jetyak showed that the layer of zooplankton beneath it was far thicker and began closer to the surface than the layer near the research boat, where the plankton was hiding from the light. The effect reached depths of up to 80 meters.

“We were sort of surprised how pronounced this avoidance behavior was,” Ludvigsen said. “It was so clear and so fast. Even when we tried to reproduce this in a small boat and a headlamp, it was really easy to see in the echosounder.”

Photo from the deck of the research ship.
Photo: Benjamin Hell

“These findings tell us that zooplankton populations and behavior can be under- or overestimated because these marine organisms respond to light, either by swimming away from it, or sometimes towards it,” said Geir Johnsen, co-author and a marine biologist at NTNU.

The biologist believes that scientists have to undertake their studies under natural conditions if they want to discover what zooplankton is truly up to. This means developing autonomous vehicles equipped to sample the vast seas.

Arctic fauna — ranging from bowhead whales to marine birds, to cod — feeds on zooplankton, particularly those in the genus Calanus. Their high content of fatty acids is what makes them such a filling meal.

Calanus glacialis
Photo: Malin Daase, UiT — The Arctic University of Norway

“Light pollution may disturb zooplankton behavior with respect to feeding, predator-prey relationships and diurnal migration, in addition to their development from juveniles to adults,” Johnsen said.

Global warming also poses a serious threat to the Arctic’s tiny inhabitants. As sea ice grows thinner, or melts away completely over large areas, zooplankton is rapidly running out of the dark refuges it favors, Johnsen remarked. Considering how central zooplankton is on local menus, the Arctic ecosystem stands to suffer a great deal.

The paper was published in the journal Science Advances.

LED light savings backfire spectacularly as light pollution increases dramatically

LEDs promised to bring a revolution in outdoor lighting and in a way, they did. It just might not be what we were hoping for.

Light pollution is almost ubiquitously associated with urbanization. Image credits: Wonderlane / Flickr.

When solid-state lighting options such as LEDs, OLEDs, and PLEDs were introduced, everyone hoped that they would reduce the costs, energy usage, and environmental impact of outdoor lighting. After all, these new systems consume less power and often last longer than their conventional counterparts. But a new study has found that, thanks to these reduced costs, many municipalities are actually installing more and more lighting, creating a net effect that’s even worse than before. Not only are some places consuming more energy overall, but they are producing much more light, with potentially serious consequences for wildlife and our own health.

Better isn’t always better

Instead of pocketing the savings from lower costs, many municipalities took advantage of the new technology and installed more and more lighting posts, ending up using even more energy than before.

“As a result, the world has experienced widespread ‘loss of the night,’ with half of Europe and a quarter of North America experiencing substantially modified light-dark cycles,” write the researchers in the new study, which was published today in Science Advances.

This growth is tightly correlated with growth in Gross Domestic Product (GDP), and the fastest increase occurred in developing countries. However, developed areas also tended to show an increase in light production.

“What’s more, we actually see only part of the light increase,” says Christopher Kyba, whose research is done both at GFZ and the Leibniz Institute for Freshwater Ecology and Inland Fisheries IGB.

What Kyba is alluding to is the fact that researchers were expecting the measured light to drop dramatically: the Day-Night Band instrument used to gather the data doesn’t see wavelengths below 500 nanometers (the human visible range spans roughly 400 to 700 nm). LEDs output much more of their light below the 500 nm threshold than conventional lamps do, so even if light emission had remained the same, researchers expected to measure an overall decrease in luminosity because of this limitation. But they didn’t. If anything, they recorded more luminosity than ever in many parts of the world.
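The expected dimming effect can be sketched with a rough back-of-the-envelope calculation. The spectral fractions below are illustrative assumptions (not measured values); the point is only that if an LED emits a larger share of its output below the sensor’s 500 nm cutoff than a sodium lamp does, the satellite sees a smaller fraction of the same total light:

```python
# Rough illustration of why a switch to LEDs was expected to make cities
# look dimmer to a sensor that only sees wavelengths above ~500 nm.
# The spectral fractions below are illustrative assumptions, not measured values.

# Hypothetical fraction of each lamp's total output emitted below 500 nm
# (i.e., invisible to the Day-Night Band):
FRACTION_BELOW_CUTOFF = {
    "high_pressure_sodium": 0.05,  # assumed: mostly orange/red light
    "white_led": 0.30,             # assumed: strong blue peak near 450 nm
}

def dnb_visible_fraction(lamp: str) -> float:
    """Fraction of a lamp's output that the satellite sensor can see."""
    return 1.0 - FRACTION_BELOW_CUTOFF[lamp]

# Same total light output on the ground, but the satellite registers less
# of the LED's share of it:
sodium_seen = dnb_visible_fraction("high_pressure_sodium")  # 0.95
led_seen = dnb_visible_fraction("white_led")                # 0.70

# Apparent change from a one-for-one sodium-to-LED swap, as seen from orbit:
apparent_change = led_seen / sodium_seen - 1.0
print(f"Expected apparent change: {apparent_change:.0%}")  # about -26%
```

Under these assumed numbers, a city swapping sodium lamps for LEDs of equal output should have looked about a quarter dimmer from orbit; instead, the measured brightness held steady or rose.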

“For that reason I expected that wealthy countries would appear to be getting darker (even if that wasn’t truly the case). Instead, we observed wealthy countries staying constant, or in many cases increasing,” Kyba told Gizmodo. “That means that even though some cities are saving energy by switching to LEDs, other places are getting brighter by installing new or brighter lamps (that need new energy). So the data aren’t consistent with the hypothesis that on the global scale, LEDs are saving energy for outdoor lighting applications.”

More light, more problems

Photograph of Calgary, Alberta, Canada, taken from the International Space Station on Nov. 27, 2015. Many areas on the outskirts are newly lit compared to 2010, and many neighborhoods have switched from orange sodium lamps to white LED lamps. Credits: NASA’s Earth Observatory/Kyba, GFZ.

Artificial light is, of course, crucial to modern society. It allows us to work at any time or in naturally dark environments and it offers possibilities for recreation and sports — it allows us to do what we want when we want it, no longer depending on the Sun’s natural cycle. However, too much light can be a bad thing, with several studies accusing it of compromising health and disrupting ecosystems.

In 2007, “shift work that involves circadian disruption” was listed as a probable carcinogen by the World Health Organization’s International Agency for Research on Cancer. Another recent study by Professor Steven Lockley at Harvard Medical School found that artificial light, even dimmed, can have significant effects on sleep disruption and melatonin suppression. Light pollution also poses a serious threat to wildlife — particularly to nocturnal wildlife. Despite all these warning signs, not much action has been taken to limit light pollution, and as urbanization spreads more and more, so too does light pollution.

Still, there are reasons to be optimistic. Unlike other types of pollution, light doesn’t require any dramatic action to reverse its effects: switch it off, and it’s gone (unlike, say, oil pollution, where even if you stop the source, you still need to clean up the existing mess). Also, some places were less careless than others. Light emission per capita in Germany is three times lower than in the US, while the standard of living is comparable, if not higher. This gives hope that prosperity doesn’t have to mean going over the top with resource consumption, and that reasonable energy use (along with a reduction in light pollution) can be achieved, but only through responsible policies. Perhaps it’s time for such policies to emerge at local, national, and even international levels.

The study has been published in Science Advances.

Designer Oscar Lhermitte brings the moon to your fingertips

We love art that not only thrills your senses but also makes you think, and this project does just that. Oscar Lhermitte’s MOON brings the stunning beauty of the lunar globe to your desk — 100% topographically accurate.


There are few sights as captivating as the full moon in a clear night’s sky. There’s something very tranquil and beautiful in watching the white orb transit the sky. Probably driven by similar emotions, product designer Oscar Lhermitte took the feeling down from the sky and brought it to our fingertips — at a 1:20 million scale.
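A quick sanity check shows what that scale means in practice, using the Moon’s real mean diameter of about 3,474.8 km:

```python
# Sanity-check the 1:20,000,000 scale of the MOON globe.
MOON_DIAMETER_KM = 3474.8   # the Moon's mean diameter
SCALE = 20_000_000          # 1:20 million, per the designer

globe_diameter_m = MOON_DIAMETER_KM * 1000 / SCALE
print(f"Globe diameter: {globe_diameter_m * 100:.1f} cm")  # about 17.4 cm
```

So the finished globe is roughly the size of a basketball — small enough to sit on a desk.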

Teaming up with design studio Kudu, he spent 4 years constructing a topographically accurate lunar globe from data recorded by NASA’s Lunar Reconnaissance Orbiter.

In order to create the lunar globe, Oscar first reached out to the team at the Institute of Planetary Research. They gave him access to their database, which he used to design the MOON. The data are Digital Terrain Models (DTMs) constructed from stereo images.

The terrain data were then processed to the correct scale and wrapped into a sphere. One full Moon was 3D printed to serve as the MOON master — the model from which the molds are made.

All images provided by Oscar Lhermitte

The globe is dotted with all of the moon’s craters in precise detail, so you can get an exquisite feel of our planet’s favorite satellite.



A ring of LEDs, acting as the sun, revolves around the globe in real time, keeping the correct face of the moon constantly lit. You can either set the moon to the position you desire, watch all of its phases in 30 seconds in demo mode, or switch it to live mode to synchronize it with the current phase of the actual moon.

MOON has 3 modes of operation:

  1. Manual – allowing you to rotate the sun yourself, setting the lunar phase that you would like to see.
  2. Demo – letting you observe a synodic month in just 30 seconds.
  3. Live – synchronising itself with the current position of the real moon.

All MOONs are manufactured in London, England.
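To get a sense of how compressed the demo mode is, a quick calculation helps (29.53 days is the mean length of a synodic month, the full cycle of lunar phases):

```python
# How fast does demo mode run? A full synodic month squeezed into 30 seconds.
SYNODIC_MONTH_DAYS = 29.53
DEMO_SECONDS = 30

real_seconds = SYNODIC_MONTH_DAYS * 24 * 3600  # ~2.55 million seconds
speedup = real_seconds / DEMO_SECONDS
print(f"Demo mode runs about {speedup:,.0f}x faster than real time")
```

That works out to roughly 85,000 times faster than the real moon.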


Also, MOON’s system has the exact same memory capacity as the Apollo 11 computers that brought the first people to the moon. You can’t get any more lunar than this without leaving the planet.

MOON was available for £500 on Kickstarter, with a discounted price of £450 for early backers. Now, the retail price is £700. MOON was successfully launched on Kickstarter in May 2016 and raised more than £140K.

All image credits go to Oscar Lhermitte.