
The color purple is unlike all others, in a physical sense

Our ability to perceive color is nothing short of a technical miracle — biologically speaking. But there is one color we can see that isn’t quite like the rest. This color, purple, is known as a non-spectral color. Unlike all its peers, it doesn’t correspond to a single wavelength of electromagnetic radiation, and must always be born out of a mix of two others.

A violet rectangle over a purple background. Image credits Daily Rectangle / Flickr.

Most of you here probably know that our perception of color comes down to physics. Light is a type of radiation that our eyes can perceive, and it spans a certain range of the electromagnetic spectrum. Individual colors are like building blocks in white light: they are subdivisions of the visible spectrum. For us to perceive an object as being of a certain color, it needs to absorb some of the subdivisions in the light that falls on it (or all of them, for black). The parts it reflects (doesn’t absorb) are what give it its color.

But not so for purple, because it is a…

Non-spectral color

First off, purple is not the same as violet, even though people tend to treat them as interchangeable terms. This is quite understandable as, superficially, the two do look very similar. On closer inspection, purple is more ‘reddish’, while violet is more ‘blueish’ (as you can see in the image above), but that’s still not much to go on.

Why they’re actually two different things entirely only becomes apparent when we’re looking at the spectrum of visible light.

Image via Reddit.

Each color corresponds to light oscillating at a particular frequency, which determines its wavelength. Humans can typically see light ranging from roughly 350 to 750 nanometers (nm). Below that we have ultraviolet (UV) radiation, which we can’t see but is strong enough to cause radiation burns on the beach, DNA damage, and other frightful things. Above the visible spectrum, we have infrared (IR), a type of electromagnetic radiation that carries heat, and which armies and law enforcement use in fancy cameras; your remote and several other devices also use IR beams to carry information over short distances.

The exact numbers above aren’t all that important for our purposes here; they describe the exact colors used for flairs on a subreddit I follow, and the wavelengths noted there will shift slightly depending on the hue you’re dealing with. I left the numbers in, however, because they make it easier to showcase the relationship between light’s physical properties and our perception of it.

What we perceive as violet is, quite handily, the bit of the visible spectrum right next to that of UV rays. This sits on the left side of the chart above and is the most energetic part of light that our eyes can see (short wavelengths mean high frequencies, which mean higher energy levels). On the right-hand side, we have red, with long wavelengths / low energy levels.
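
If you want to play with these numbers yourself, here’s a minimal Python sketch. The band edges are the approximate ones used in this article rather than hard physical limits, and the energy comes from the textbook relation E = hc/λ:

```python
# Minimal sketch: classify a wavelength and compute its photon energy (E = h * c / wavelength).
# Band edges are the approximate ones used in this article, not hard physical limits.

PLANCK = 6.626e-34   # Planck's constant, in joule-seconds
LIGHT_SPEED = 3.0e8  # speed of light, in meters per second
EV = 1.602e-19       # one electronvolt, in joules

def describe(wavelength_nm):
    energy_ev = PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) / EV
    if wavelength_nm < 350:
        band = "ultraviolet (invisible)"
    elif wavelength_nm <= 750:
        band = "visible"
    else:
        band = "infrared (invisible)"
    return f"{wavelength_nm} nm -> {band}, ~{energy_ev:.2f} eV per photon"

for nm in (300, 400, 550, 700, 1000):
    print(describe(nm))
# 400 nm violet carries ~3.1 eV per photon, 700 nm red only ~1.8 eV:
# energy drops as you move right along the spectrum chart.
```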

Going through the spectrum above, you can find violet, but not purple. You may also notice that while we talk of ultraviolet radiation, we never mention ultrapurple rays — because that’s not a thing. Purple, for better or worse, doesn’t make an appearance on the spectrum. Unlike red or blue or green, there is no wavelength that, alone, will make you perceive the color purple. This is what being a ‘non-spectral’ color means, and why purple is so special among all the colors we can perceive.

More than the sum of its parts

If you look at orange, which is a combination of yellow and red, you can see that its wavelength is roughly the average of those of its constituent colors. It works with pretty much every color combination, such as blue-yellow (for green) or red-green (for more orange).

Now, the real kicker with purple, which we know we can get by mixing red with blue, is that by averaging the wavelengths of its two parent colors, you’d get something in the green-yellow transition area. Which is a decidedly not-purple color.
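
You can check this with back-of-the-envelope numbers. The wavelengths below are illustrative mid-band values, not exact hues:

```python
# Illustrative mid-band wavelengths in nanometers (approximate, chosen for the example).
red, yellow, blue = 650, 580, 450

# Orange really does sit near the red-yellow average:
print((red + yellow) / 2)  # 615.0 nm, squarely in the orange band

# But the red-blue average lands nowhere near purple:
print((red + blue) / 2)    # 550.0 nm, green territory rather than purple
```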

That’s all nice and good, but why are we able to perceive purple, then? Well, the short of it is “because brain”. Although purple isn’t a spectral color in the makeup of light, it is a color that can exist naturally and in the visible spectrum, so our brains evolved the ability to perceive it; that’s the ‘why’. Now let’s move on to ‘how’. It all starts with cells in our eyes called ‘cones’.

CIE colour matching functions (CMFs) Xbar (blue), Ybar (green) and Zbar (red). Image via Reddit.

The chart above is a very rough and imperfect approximation of how the cone cells on our retinas respond to different parts of the visible spectrum. There are three lines because there are three types of cone cells lining our retinas. While reality is a tad more complicated, for now, keep in mind that each type of cone cell responds to a certain color (red, green, or blue).

How high each line peaks shows how strong a signal it sends to our brain for individual wavelengths. Although we only come equipped with receptors for these three colors, our brain uses this raw data to mix hues together and produce the perception of other colors such as yellow, or white, and so on.

The more observant among you will have noticed that the cone cells which respond to the color red also produce a signal for parts of the visible spectrum corresponding to blue. And purple is a mix of red and blue. Coincidence? Obviously not.

The thing is, while every color you perceive looks real, they’re pretty much all just hallucinations of your brain. When light on the leftmost side of the spectrum (as seen in the chart above) hits your eye, signals are sent to your brain corresponding only to the color red. Move more towards the middle, however, and you see that both red and green are present. But the end perception is that of yellow, or green.

What happens is that your brain constantly runs a little algorithm that estimates what color the things you’re seeing are. If a single type of signal is received, you perceive the color corresponding to it. If a mix of signals is received, however, you perceive a different color or hue based on the ratio between the signals. If both green and red signals are received, but there’s more of the red than the green, our brains will tell us “it’s yellow”. If the signal for green is stronger than that for red, we see green (or shades of green). The same mechanism applies to every possible combination of these signals.
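
As a toy model only (real color vision involves opponent processing and is far messier than this), the gist of that little algorithm might look something like the sketch below. The thresholds are made up purely for illustration:

```python
# Toy model of hue-from-cone-ratios, for illustration only. Real color vision
# involves opponent processing and is considerably messier than this.

def perceived_hue(red_signal, green_signal, blue_signal):
    """Guess a hue label from the relative strengths of the three cone signals."""
    total = red_signal + green_signal + blue_signal
    if total == 0:
        return "black"  # no signal at all
    r, g, b = (s / total for s in (red_signal, green_signal, blue_signal))
    if r > 0.1 and b > 0.1 and g < 0.1:
        return "purple"   # red plus blue with little green: the non-spectral case
    if r > g > 0.1 and b < 0.1:
        return "yellow"   # red outweighing green, as described above
    if g > r > 0.1 and b < 0.1:
        return "green"
    # Otherwise, fall back to whichever single signal dominates.
    return max((("red", r), ("green", g), ("blue", b)), key=lambda p: p[1])[0]

print(perceived_hue(0.8, 0.05, 0.6))  # -> purple; no single wavelength produces this mix
print(perceived_hue(0.7, 0.4, 0.0))   # -> yellow
print(perceived_hue(0.3, 0.7, 0.0))   # -> green
```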

That bit to the right of the chart, where both red and blue signals are sent to the brain, is where the color purple is born. There’s no radiation wavelength that carries purple like there is with violet or orange. The sensation of purple is created by our brains, sure, but the reason why it needs to be created in the first place is this quirk of how the cone cells in our eyes work. From the chart above you can see that the cells responding to green also show some response in the area corresponding to purple, but for some reason, our brains simply don’t bother with it.

From my own hobbies (painting) I can tell you that mixing violet with green produces blue, but mixing purple with green results in brown. Pigments and colored light don’t necessarily work the same way, this is all anecdotal, and I have no clue whether that’s why green signals get ignored in purple — but I still found it an interesting tidbit. Make of it what you will.

In conclusion, what makes purple a non-spectral color is that there isn’t a single wavelength that ‘carries’ it — it is always the product of two colors of light interacting.

Are there any others like it?

Definitely! Black and white are prime examples. Since there’s no single wavelength for white (it’s a combination of all wavelengths) or black (no wavelengths), they are by definition non-spectral colors. The same goes for gray. These are usually known as non-colors, grayscale colors, or achromatic hues.

Furthermore, colors produced by mixing grayscale with another color are also considered non-spectral (since one component can’t be produced by a single wavelength, the final color can’t be produced by a single one either). Pink is most often given as an example, as is brown, since these can be produced using non-spectral colors (white and/or purple for pink, gray/black for brown).

Metallic paints also, technically, are non-spectral colors. A large part of the visual effect of metallic paints comes from how they interact with and scatter light. A single wavelength produces a single color; the shininess we perceive in metallic pigments can’t be reproduced by any one wavelength, as it comes from tiny variations in the surface reflecting light in different directions. The metal itself may well be a solid color, but our final perception of it is not. A gray line painted on canvas doesn’t look like a bar of steel any more than a yellowish one can pass for a bar of gold. As such, metallic colors are also non-spectral colors.


What causes Blood Moons? The same thing that makes skies blue

When the Moon turns bloody, it’s Earth at work.

Blood Moon. Image via Pixabay.

Humanity has always kept an eye on the heavens. Societies lived and died by natural cycles, and these orbs in the sky seemed to dictate the rhythm of life — so they imposed themselves as central players in our mythoi. The imprint they left on our psyche is so deep that to this day, we still name heavenly bodies after gods.

But two players always commanded center stage: the Sun and the Moon. One interaction between the two is so particularly striking that virtually all cultures regarded it as a sign of great upheaval: the blood moon. Its perceived meaning ranges from the benign to the malevolent. Blood moons drip with cultural significance, and we’ll explore some of it because I’m a huge anthropology nerd.

But they’re also very interesting events from a scientific point of view, and we’ll start with that. What, exactly, turns the heavenly wheel of cheese into a bloody pool? Well, let me whet your appetite by saying that it’s the same process which produces clear blue skies. Ready? Ok, let’s go.

The background


The geometry of a lunar eclipse.
Image credits Sagredo / Wikimedia.

For context’s sake, let’s start by establishing that the moon doesn’t shine by itself. It’s visible because it acts as a huge mirror, beaming reflected sunlight down at night. During a total lunar eclipse, the Earth passes between the Sun and the Moon, blocking sunlight from hitting its surface. Blood moons happen during such lunar eclipses: a sliver of light is refracted (bent) in the atmosphere, passing around the Earth and illuminating the Moon. This is what gives it that reddish color.

It all comes down to how light interacts with our planet’s atmosphere, most notably through a process called Rayleigh scattering: the scattering of electromagnetic radiation by particles much smaller than its wavelength.

For context’s sake part deux, what our eyes perceive as white light is actually a mix of all the colors we can see. Each color is generated by a particular wavelength interval.

Boiled down, different bits of light get more or less scattered depending on their wavelength. It’s quite potent: roughly a quarter of the light incoming from the Sun gets scattered — the exact figure depends on fluctuating atmospheric properties, such as the amount of particles floating around in it — and some two-thirds of this scattered light reaches the surface as diffuse sky radiation.

The Blood Moon

As a rule of thumb, our atmosphere is better at scattering short wavelengths (violets and blues) than long wavelengths (oranges and reds); Rayleigh scattering intensity scales roughly with the inverse fourth power of wavelength. ‘Scattering’ basically means ‘spreading around’, and this makes the sky look blue for most of the day. This scattering is not dependent on direction (or, in fancy-science-speak, it’s an isotropic property), but its perceived effect is.
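
A two-line calculation makes that inverse-fourth-power relationship vivid. The wavelengths below are approximate:

```python
# Rayleigh scattering intensity scales roughly as 1 / wavelength**4, so shorter
# (bluer) wavelengths are scattered far more strongly than longer (redder) ones.

red = 700  # approximate wavelength of red light, in nanometers

for name, nm in (("violet", 400), ("blue", 450), ("green", 530)):
    ratio = (red / nm) ** 4
    print(f"{name} ({nm} nm) scatters ~{ratio:.1f}x more strongly than red ({red} nm)")
# violet ~9.4x, blue ~5.9x, green ~3.0x: the sky's blue is scattered toward your
# eyes all day, while the reds survive the trip and dominate at dawn and dusk.
```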

When the sun is high in the sky, light falls roughly vertically on our planet; as such, it passes through a relatively short span of the atmosphere. Let’s denote this rough length with ‘a‘.

The light of dawn and dusk arrives tangentially (horizontally) to the planet. It thus has to pass through a much longer span of the atmosphere than it does at noon. Blues become scattered just like in the previous case as light traverses this distance a through the atmosphere. But it then has to pass through yet more air, so greens (the next-shortest wavelengths) also become dispersed. That’s why the sky at dawn and dusk appears red or yellow (the remaining wavelengths).
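
To put rough numbers on how much longer that path gets, here’s a small geometric sketch. It treats the atmosphere as a uniform 100 km shell around a spherical Earth, which is a simplification, but it gets the order of magnitude right:

```python
import math

# Rough sketch: length of the light path through the atmosphere at a given solar
# zenith angle (0 = Sun overhead, 90 = Sun on the horizon). Models the atmosphere
# as a uniform 100 km shell on a spherical Earth.

EARTH_RADIUS_KM = 6371.0
SHELL_KM = 100.0  # rough effective thickness of the atmosphere

def path_length_km(zenith_deg):
    cos_z = math.cos(math.radians(zenith_deg))
    # Distance from the surface to the edge of the shell along the line of sight.
    return (math.sqrt((EARTH_RADIUS_KM * cos_z) ** 2
                      + 2 * EARTH_RADIUS_KM * SHELL_KM + SHELL_KM ** 2)
            - EARTH_RADIUS_KM * cos_z)

a = path_length_km(0)          # the 'a' from above: Sun directly overhead
print(a)                       # 100.0 km
print(path_length_km(90) / a)  # ~11x longer when the Sun sits on the horizon
```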


The same mechanism is at work during a blood moon. Light passing through the Earth’s atmosphere gets depleted in short wavelengths, making it look yellowy-red. This makes the Moon appear red as it reflects red light back to our eyes.

One cool effect of this dispersion is that blood moons sometimes exhibit a blue-turquoise band of color at the beginning and just before the end of the eclipse. This is produced by the light that passes through the ozone layer in the upper atmosphere: ozone absorbs primarily red light, leaving blues mostly intact.

Cultural meanings

Many ancient civilizations looked upon the blood moon with concern: in their eyes, this was an omen that evil was stirring.

“The ancient Inca people interpreted the deep red colouring as a jaguar attacking and eating the moon,” Daniel Brown wrote for The Conversation. “They believed that the jaguar might then turn its attention to Earth, so the people would shout, shake their spears and make their dogs bark and howl, hoping to make enough noise to drive the jaguar away.”

Some Hindu traditions hold that the Moon turns red because of an epic clash between deities. The demon Swarbhanu tricks the Sun and Moon to get a sip of the elixir of immortality. As punishment, Vishnu (one of the principal gods of Hinduism) cuts off the demon’s head — which lives on as Rahu.

Understandably upset by the whole experience, Rahu chases the sun and moon to devour them. An eclipse only happens if Rahu manages to catch one of the two. Blood Moons form when Rahu swallows the moon and it falls out of his severed neck. Several things, such as eating or worshiping, are prohibited, as Hindu traditions hold that evil entities are about during an eclipse.

Other cultures took a more compassionate view of the eclipsed moon. The Native American Hupa and Luiseño tribes from California, Brown explains, thought it was wounded or had fallen ill during such an event. The Hupa believed the moon’s wives would come to its aid, while the Luiseño would sing and chant healing songs under an open sky to help the darkened moon recover.

My personal favorite, however, is the approach of the Batammaliba people, who live in the nations of Togo and Benin in Africa. Their traditions hold that the lunar eclipse is a conflict between sun and moon; we little people must encourage them to bury the hatchet! Such events are thus seen as an opportunity to lay old animosities and feuds to rest.

I’m definitely going to try that during the next blood moon.

New, revolutionary metalens focuses entire visible spectrum into a single point

The Harvard-produced lens could usher in a new age of cameras and augmented reality.

The next generation of cameras might be powered by nanotechnology.

From the gargantuan telescopes built to study the universe to the ever smaller cameras inside your smartphones, lenses have come a long way. They’ve reached incredibly high performance at lower and lower costs, but researchers want to take them to the next level. A team from Harvard has developed a metalens — a flat surface that uses nanostructures to focus light — capable of concentrating the entire visible spectrum onto a single spot.

Metalenses aren’t exactly a new thing. They’ve been around for quite a while, but until now, they’ve struggled to focus a broad spectrum of light, addressing only some of the light wavelengths. This is the first time researchers managed to focus the entire spectrum — and in high resolution. This raises exciting possibilities.

“Metalenses have advantages over traditional lenses,” says Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the research. “Metalenses are thin, easy to fabricate and cost effective. This breakthrough extends those advantages across the whole visible range of light. This is the next big step.”

In a way, creating such a lens is like building a maze for light. Inside a material, different wavelengths travel at different speeds: red moves the fastest, violet the slowest. This is the dispersion that lets a prism fan white light out into a rainbow, and in a lens it makes different colors come to a focus at slightly different points, producing so-called chromatic aberrations. Conventional optics correct for this by stacking multiple curved elements, but a flat metalens needs to take a different approach. This is where the innovation takes place. The team from the School of Engineering and Applied Sciences (SEAS) at Harvard used arrays of tiny titanium dioxide nanofins to fix the chromatic aberrations.
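
To get a feel for the problem, consider a generic flat diffractive lens rather than the team’s actual design: if its phase profile is fixed for one design wavelength, its focal length drifts with the wavelength it’s given, roughly as f(λ) = f₀·λ₀/λ. The numbers below are purely illustrative:

```python
# Rough sketch of chromatic focal shift in a generic flat diffractive lens
# (not the SEAS design). With a phase profile fixed at one design wavelength,
# the focal length drifts approximately as f = f0 * lambda0 / lambda.

f0_mm = 10.0       # illustrative focal length at the design wavelength
lambda0_nm = 550   # illustrative design wavelength (green)

for nm in (450, 550, 650):
    print(f"{nm} nm light focuses at ~{f0_mm * lambda0_nm / nm:.1f} mm")
# 450 nm -> ~12.2 mm, 650 nm -> ~8.5 mm: blue and red focus millimeters apart.
# The paired-nanofin trick is what lets the Harvard lens cancel this drift.
```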

An artist’s conception of incoming light being focused on a single point by a metalens. Image credits: Jared Sisler/Harvard SEAS.

Previous research had shown that this is possible in theory, but this is the first time a practical solution was designed, and it was no easy feat.

“One of the biggest challenges in designing an achromatic broadband lens is making sure that the outgoing wavelengths from all the different points of the metalens arrive at the focal point at the same time,” said Wei Ting Chen, a postdoctoral fellow at SEAS and first author of the paper.

“By combining two nanofins into one element, we can tune the speed of light in the nanostructured material, to ensure that all wavelengths in the visible are focused in the same spot, using a single metalens. This dramatically reduces thickness and design complexity compared to composite standard achromatic lenses.”

Through this approach, they were able to focus all the colors of the rainbow onto a single point — in other words, they were able to image “normal” white light, using a lens thousands of times thinner than what we’re used to.

“Using our achromatic lens, we are able to perform high quality, white light imaging. This brings us one step closer to the goal of incorporating them into common optical devices such as cameras,” said Alexander Zhu, co-author of the study.

The potential for practical applications is vast, not only in photography but also in emerging technologies such as virtual or augmented reality. But while this does bring researchers one step closer to developing smaller, better lenses for your camera or smartphone, there’s still a long way to go before the technology reaches consumers. The first step is achieving the same results in macro-scale lenses. Chen and Zhu say they plan on scaling the lens up to about 1 cm (0.39 in) in diameter, which would make it suitable for real-world applications. It will undoubtedly take them at least a few years to reach that goal, but if they can do it, we’re in for quite a treat.

Journal Reference: Wei Ting Chen et al. A broadband achromatic metalens for focusing and imaging in the visible. Nature Nanotechnology (2018). doi:10.1038/s41565-017-0034-6.


Dutch researchers demonstrate 42.8 Gbps connection using Li-Fi. It’s 100 times faster than the best Wi-Fi

Schematic of Li-Fi operating principle. Credit: Flickr.

WiFi can become extremely irritating, especially when too many connections get logged on to the same hotspot. That’s just an inherent fault of WiFi, whose protocols share the available bandwidth among all users even if that ultimately means no user gets a satisfying share. Try splitting an apple into 20 pieces — everyone will still go hungry. One solution would be to grow more apples, but in the case of WiFi that’s getting increasingly challenging because the technology is nearing its limits. That’s why some researchers are exploring new technologies, among them a form of wireless local area networking based on light waves, aptly called Li-Fi.

To get an idea of Li-Fi’s potential, it’s enough to look at the performance recently demonstrated by researchers at the Eindhoven University of Technology. Their device has an incredible capacity of 40 Gbit/s per ray over a distance of 2.5 meters. That’s roughly 100 times faster than the best WiFi routers, which clock in at around 300 Mbps.
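
The apple analogy cashes out in simple arithmetic. The capacities are the ones quoted above; the 20-user split is just illustrative:

```python
# Back-of-the-envelope comparison: shared WiFi capacity divides among users,
# while the demonstrated Li-Fi gives each connected device its own ray.
# Capacities are the figures quoted above; the user count is illustrative.

wifi_capacity_mbps = 300    # a top WiFi router, shared by everyone on it
lifi_per_ray_mbps = 40_000  # ~40 Gbit/s per ray in the Eindhoven demo
users = 20                  # echoing the 20-piece apple

print(wifi_capacity_mbps / users)  # 15.0 Mbps each: everyone stays hungry
print(lifi_per_ray_mbps)           # 40000 Mbps each: a whole apple per person
```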

The operating principle is rather simple, which is good news because it means the system can be scaled easily and cheaply by the industry. The wireless data is beamed from a few central antennas that very precisely direct rays of light supplied by an optical fiber. These antennas have no moving parts and require no maintenance or power, and can typically be fitted on the ceiling. Inside each antenna is a pair of passive diffraction gratings that radiate light of different wavelengths at different angles.
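
That steering trick rests on the standard grating equation, d·sin(θ) = m·λ: for a fixed groove spacing d, each wavelength leaves the grating at its own angle. The spacing below is chosen purely for illustration:

```python
import math

# The standard diffraction grating equation: d * sin(theta) = m * wavelength.
# With a fixed groove spacing d, each wavelength exits at its own angle, which
# is how a passive grating can steer different rays toward different devices.

d_nm = 3000  # illustrative groove spacing (3 micrometers)
m = 1        # first diffraction order

for wavelength_nm in (1500, 1550, 1600):  # the infrared band used by the Dutch system
    theta_deg = math.degrees(math.asin(m * wavelength_nm / d_nm))
    print(f"{wavelength_nm} nm exits at ~{theta_deg:.1f} degrees")
# Shifting the wavelength by 50 nm swings the beam by roughly a degree, with no
# moving parts: enough to address devices sitting at different spots in a room.
```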

Typical Li-Fi setup. Credit: Bloomberg.

The first Li-Fi systems, designed at the beginning of this decade, beamed light waves from LED lamps and overhead lights. These devices immediately blew everyone away with their high data transfer rates, up to 10 times faster than the WiFi state of the art, but their limitations made these early Li-Fis rather impractical. For one, just like WiFi, these systems used the same single ‘light bulb’ to connect to multiple devices, which means the same connection-sharing problems arise. Secondly, optical light waves can’t penetrate walls, unlike WiFi’s radio waves. This is a good thing if you want an extremely secure network, but for the average consumer, it’s a drag.

Wi-Fi signal can pass through walls but Li-Fi just bounces off. This is Li-Fi’s main limitation, unless you care a lot about privacy. Credit: Bloomberg

The Dutch researchers’ Li-Fi does away with the first limitation. Data is transferred using infrared light with wavelengths of 1,500 nanometers and longer, which is invisible and harmless, though infrared still can’t penetrate walls, as it gets absorbed by building materials. This doesn’t necessarily have to be an issue: as a user leaves a room and moves out of the range of one light antenna, another antenna strapped to the ceiling of the next room can take over.

The main innovation lies in the fact that every connected device gets its own ray of light which solves all those congestion issues with Wi-Fi.

By now you must be excited. Unfortunately, Ton Koonen, professor of broadband communication technology, says the technology is still five years away from reaching homes, so don’t throw away that annoying WiFi router just yet.