Tag Archives: refraction

Why typhoons and hurricanes make beautiful, pink (or violet) skies

Typhoon Hagibis struck Japan with massive winds and rain, becoming the strongest storm to hit the country in decades. But right before it made landfall, the skies looked like this:

Image credits: Twitter / Matthew S. Cuyugan

It’s not unusual for pre-storm skies to have intense, unusual colors. When a particularly strong storm hits (whether it’s a typhoon, cyclone, or hurricane), the skies can take pinkish or violet hues.

While it may look beautifully ominous, there is a scientific reason why this happens — and it involves physics.

Image credits: Twitter/ Desu_unknown

This eerie phenomenon, which often precedes or follows a major storm, is the result of “scattering”. To understand it, we first need to understand why the sky is blue at all.

The atmosphere is filled with tiny gas molecules, along with water droplets and dust. These particles scatter sunlight, splitting it into its different color wavelengths. Because blue light has one of the shortest wavelengths in the visible spectrum, it is scattered much more strongly than the other colors. This is why we see the sky as blue, at least most of the time.
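To put rough numbers on this: Rayleigh scattering strength scales with the inverse fourth power of wavelength, so a quick back-of-the-envelope sketch (using approximate wavelengths, purely for illustration) shows just how lopsided the effect is:

```python
# Rayleigh scattering strength scales as 1/wavelength^4, so shorter (bluer)
# wavelengths scatter far more strongly than longer (redder) ones.

def rayleigh_relative(wavelength_nm: float, reference_nm: float = 650.0) -> float:
    """Scattering strength relative to a reference wavelength (red, ~650 nm)."""
    return (reference_nm / wavelength_nm) ** 4

for name, wl in [("violet", 400), ("blue", 450), ("green", 550), ("red", 650)]:
    print(f"{name:>6} ({wl} nm): {rayleigh_relative(wl):.1f}x red")
# violet scatters ~7x more than red, blue ~4x more
```

Violet actually scatters even more strongly than blue, but our eyes are less sensitive to it, so the daytime sky reads as blue.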

Heavy storms push away the larger particles, which scatter all wavelengths more evenly. With those removed, the colors of the sky appear more vivid. But that’s only part of the story.

Image credits: Twitter / sengdayritt

When a massive storm approaches, warm air rises rapidly above the seawater, driving evaporation. This loads the air with salt particles and crystals, which scatter the shorter wavelengths more intensely; we see this scattering superimposed on the normal blue of the sky.

This is a different process from the one at sunrise or sunset, which can produce somewhat similar colors. When the sun is low on the horizon, sunlight passes through much more of the atmosphere than during the day, which means more molecules to scatter the violet and blue light.

“More atmosphere means more molecules to scatter the violet and blue light away from your eyes,” writes Steven Ackerman, Ph.D., a professor of meteorology at the University of Wisconsin–Madison. “If the path is long enough, all of the blue and violet light scatters out of your line of sight. The other colors continue on their way to your eyes. This is why sunsets are often yellow, orange, and red.”

With storms, by contrast, it’s the modification of the atmosphere itself that is responsible for the vivid sky colors. But there’s another trick we need to discuss.

Image credits: Twitter /メスゴリラ

Although the sky shows up as violet, it’s not really violet: what we’re seeing is pink superimposed on the regular blue color of the sky, particularly when it gets darker outside.

But wait a minute, a pink wavelength doesn’t exist

Good observation! Pink is not in the color spectrum, which has led some people to wrongly claim that pink is not a color. Obviously it is a color: we can see it (and sometimes in the sky, no less). However, pink is not a transmissive color; it’s a reflective color.

While Osaka was largely spared by the storm, it still experienced bright pink skies. Image credits: Twitter / ThePhotonauts

An object is colored because some wavelengths are reflected and others are absorbed. In other words, when something is blue (or red, or any other color), it absorbs all wavelengths except the ones we are seeing (those that give it its color). This explains why we see most colors, but things are different for pink (and a few other colors, such as cyan, brown, and magenta). We perceive it because of the way our brain translates light bouncing off different objects.

So even though pink is not in the spectrum, we can obtain it easily, for instance by mixing red and white pigment. But don’t look down on pink: all colors are interpretations made by our brain, so that doesn’t make pink any lesser a color.
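The same trick works with light rather than pigment: in additive RGB, blending red with white lands you on pink. A minimal sketch:

```python
# Pink isn't a single wavelength: in additive RGB it's what you get when red
# light is mixed with white (red at full strength plus some of everything else).

def mix(c1, c2, t=0.5):
    """Linear blend of two RGB colors; t is the fraction of c2 in the mix."""
    return tuple(round((1 - t) * a + t * b) for a, b in zip(c1, c2))

red, white = (255, 0, 0), (255, 255, 255)
print(mix(red, white))  # (255, 128, 128): a classic pink
```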

As for Japan’s skies before Hagibis, they were a sight to behold — although the events that followed were anything but beautiful.


A new method for filtering light coming from a specific direction

Using only material geometry and interference patterns, MIT researchers have devised a novel way of passing light of any colour only if it comes from a specific angle. Light coming from other directions is reflected, which can be desirable in certain applications. The fields that could benefit immediately from the findings include solar photovoltaics, detectors for telescopes and microscopes, and privacy filters for display screens.

In this photo of the angular-selective sample (the rectangular region), a beam of white light passes through as if the sample was transparent glass. The red beam, coming in at a different angle, is reflected away, as if the sample was a mirror. The other lines are reflections of the beams. (This setup is immersed in liquid filled with light-scattering particles to make the rays visible). (credit: Weishun Xu and Yuhao Zhang)


The researchers built a stack of 80 ultrathin layers out of two materials with different refractive indices (glass and tantalum oxide). At each interface, a small amount of light gets reflected, but by combining the layers in a specific fashion, only light coming in from a certain direction and at a specific polarization passes through; everything else is reflected.

“When you have two materials, then generally at the interface between them you will have some reflections,” the researchers explain.

But at these interfaces, “there is this magical angle called the Brewster angle, and when you come in at exactly that angle and the appropriate polarization, there is no reflection at all.”
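For context, the Brewster angle is set entirely by the two refractive indices at the interface. A quick sketch, using typical textbook values for glass and tantalum oxide (assumed here, not quoted from the paper):

```python
import math

def brewster_angle(n1: float, n2: float) -> float:
    """Brewster angle (in degrees) for light going from medium n1 into n2.
    At this incidence angle, p-polarized light is transmitted with no reflection."""
    return math.degrees(math.atan2(n2, n1))

# Typical refractive indices (illustrative assumptions, not from the paper):
n_glass, n_tantalum_oxide = 1.5, 2.1
print(f"{brewster_angle(n_glass, n_tantalum_oxide):.1f} degrees")  # ~54.5
```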

Previously, researchers had demonstrated methods that select light at one precise angle, but only for a narrow range of light frequencies (colours). The new system works for all colours of the visible spectrum at once, from a single direction. A video of the experimental set-up can be viewed below.

A thermophotovoltaic cell, which harnesses solar energy by heating a material, could employ such a system to radiate light of only a particular colour; a complementing photovoltaic cell would then use all of that light, limiting the heat and light lost to reflections and re-emissions and thus improving efficiency. Microscopes and telescopes could also benefit in scenarios where bright objects interfere with and block the view of an object of interest: a telescope that only accepts light from a certain angle could observe very faint targets masked by brighter ones. Display screens or phones could exploit this to show information only to a person directly in front of them, preventing peeping.

In principle, the angular selectivity can be made narrower simply by adding more layers to the stack, the researchers say. For the experiments performed so far, the angle of selectivity was about 10 degrees; roughly 90 percent of the light coming in within that angle was allowed to pass through.

Findings appeared in the journal Science.

 


Digital imaging of the future: artificial imaging and 3-D displays

The subtleties in these computer-generated images of translucent materials are important. Texture, color, contrast, and sharpness combine to create a realistic image. (Courtesy of Ioannis Gkioulekas and Shuang Zhao.)


Computer graphics and digital video have come an incredibly long way since their early days; however, people can still quite easily distinguish between what’s digitally rendered and what’s real footage. Three new papers recently presented by Harvard scientists at SIGGRAPH 2013 (the acronym stands for Special Interest Group on GRAPHics and Interactive Techniques), the 40th International Conference and Exhibition on Computer Graphics and Interactive Techniques, are the latest efforts toward perfecting digital imaging, and their findings are most interesting, to say the least.

One of the papers, led by Todd Zickler, computer science faculty at the Harvard School of Engineering and Applied Sciences (SEAS), tackles a difficult subject in digital imaging, namely how to mimic the appearance of a translucent object, such as a bar of soap.

“If I put a block of butter and a block of cheese in front of you, and they’re the same color, and you’re looking for something to put on your bread, you know which is which,” says Zickler. “The question is, how do you know that? What in the image is telling you something about the material?”

To answer this question, the researchers had to delve into how humans perceive objects and inherently tell certain properties apart. For instance, when you look at a familiar object, you can assess its mass and density without touching it, simply based on its appearance and texture. For a computer this is far more difficult, but if achieved, a device with a mounted camera could identify what material an object is made of and know how to properly handle it (how much it weighs, or how much pressure can safely be applied to it) the way humans do.

The researchers’ approach is based on translucent materials’ phase function, part of the mathematical description of how light refracts, reflects, and scatters inside an object. That is what we actually see, since our eyes perceive only the light that bounces off objects, not the objects themselves. The space of possible phase function shapes is vast and perceptually diverse to the human brain, which has made past attempts at modeling it extremely difficult.

Luckily, today scientists have access to a great deal of computing power. Zickler and his team first rendered thousands of computer-generated images of one object with different computer-simulated phase functions, so each image’s translucency was slightly different from the next. From there, a program compared each image’s pixel colors and brightness to those of the other images and decided how different each pair was. Through this process, the software created a map of the phase function space according to the relative differences of image pairs, making it easy for the researchers to identify a much smaller set of images and phase functions representative of the whole space. Finally, actual people were invited to browse through various images and judge how different they looked, providing insight into how the human brain tells materials like plastic or soap apart just by looking at them.
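As a simplified illustration of that mapping step (not the authors’ actual pipeline, and using a crude pixel-difference metric in place of a perceptual one), the idea looks roughly like this:

```python
# Sketch of building an "appearance space": compute pairwise differences
# between renderings of the same object under different phase functions,
# then embed the distance matrix in 2-D so similar translucencies sit close.
import numpy as np
from sklearn.manifold import MDS

def image_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Crude stand-in for a perceptual metric: RMS pixel difference."""
    return float(np.sqrt(np.mean((img_a - img_b) ** 2)))

# Placeholder data: in practice these would be thousands of renderings.
rng = np.random.default_rng(0)
renderings = [rng.random((64, 64, 3)) for _ in range(20)]

n = len(renderings)
dists = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dists[i, j] = dists[j, i] = image_distance(renderings[i], renderings[j])

# Multidimensional scaling turns pairwise distances into a 2-D map.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dists)
print(embedding.shape)  # (20, 2)
```

From such a map, picking a small set of well-spread points yields representative phase functions for the human study.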

“This study, aiming to understand the appearance space of phase functions, is the tip of the iceberg for building computer vision systems that can recognize materials,” says Zickler.

Looking at a display as if through a window

A second paper, also involving Zickler, is just as interesting. Think of an adaptive display, inherently flat and thus 2-D, that can change the objects it shows according to the angle you view it from and the environmental lighting, just like looking through a window.

The solution takes advantage of mathematical functions (called bidirectional reflectance distribution functions) that represent how light coming from a particular direction will reflect off a surface.
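To unpack that: a BRDF answers a simple question — for light arriving from one direction, how much reflects toward a given viewing direction? A minimal sketch using a textbook Lambertian-plus-Blinn-Phong model (a standard BRDF, not the one from the paper):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def brdf(light_dir, view_dir, normal, kd=0.8, ks=0.2, shininess=32):
    """Reflectance for a given incoming (light) and outgoing (view) direction:
    a constant Lambertian diffuse term plus a view-dependent specular lobe."""
    l, v, n = normalize(light_dir), normalize(view_dir), normalize(normal)
    h = normalize(tuple(a + b for a, b in zip(l, v)))   # half vector
    diffuse = kd / math.pi                               # Lambertian term
    specular = ks * max(dot(n, h), 0.0) ** shininess     # Blinn-Phong lobe
    return diffuse + specular

# Reflectance changes with viewing angle: this angle dependence is exactly
# what lets a flat display mimic a real surface seen from different directions.
print(brdf((0, 0, 1), (0, 0, 1), (0, 0, 1)))   # viewed head-on: bright highlight
print(brdf((0, 0, 1), (1, 0, 1), (0, 0, 1)))   # viewed off-axis: dimmer
```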


From the professional artist’s studio to the amateur’s bedroom

The third paper, led by Hanspeter Pfister, An Wang Professor of Computer Science, takes a look at how to optimize and manipulate vivid colors. At the moment, professional artists need to manually brush and edit, frame by frame, any video that needs a certain color palette imposed on it. Amateur filmmakers therefore cannot achieve the characteristically rich color palettes of professional films.

“The starting idea was to appeal to a broad audience, like the millions of people on YouTube,” says lead author Nicolas Bonneel, a postdoctoral researcher in Pfister’s group at SEAS.

Pfister says his team is working on software that will allow amateur video editors to choose from various templates, say, the color palettes of Amélie or Transformers. The editor simply marks what is foreground and what is background, and the software does the rest, interpolating the color transformations throughout the video. Bonneel estimates that the team’s new color grading method could be incorporated into commercially available editing software within the next few years.
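A much simpler relative of this idea, a standard baseline rather than the team’s actual method, is statistical color transfer: shift and scale each color channel of a frame so its statistics match those of a reference still. A rough per-channel sketch:

```python
import numpy as np

def transfer_color(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Shift and scale each color channel of `frame` so its mean and standard
    deviation match those of `reference` (a crude, global color grade)."""
    out = frame.astype(np.float64)
    ref = reference.astype(np.float64)
    for c in range(3):
        mu_f, std_f = out[..., c].mean(), out[..., c].std()
        mu_r, std_r = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (out[..., c] - mu_f) * (std_r / (std_f + 1e-8)) + mu_r
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage: grade every frame of an amateur clip toward a reference still.
# graded = transfer_color(frame, reference_still)
```

The published method goes much further, separating foreground from background and interpolating the transformation over time, but the per-channel version conveys the core idea.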

 

Rewriting the anatomy books – new layer of human cornea discovered

Scientists at The University of Nottingham have made what may be a monumental discovery, demonstrating for the first time a new layer of the human cornea. The layer, described in a paper in Ophthalmology, could help surgeons dramatically improve outcomes for patients with severe corneal conditions and for those undergoing surgery.


The new layer has been named Dua’s layer, after academic Professor Harminder Dua, who made the discovery.

“This is a major discovery that will mean that ophthalmology textbooks will literally need to be re-written. Having identified this new and distinct layer deep in the tissue of the cornea, we can now exploit its presence to make operations much safer and simpler for patients,” says Dua, a professor of ophthalmology and visual sciences. “From a clinical perspective, there are many diseases that affect the back of the cornea which clinicians across the world are already beginning to relate to the presence, absence or tear in this layer.”

The cornea is the transparent part at the front of the eye, covering the iris, pupil, and anterior chamber. Along with the anterior chamber and the lens, the cornea refracts and bends light to focus vision, and it alone is responsible for about two-thirds of the eye’s total optical power.

The newly discovered layer is just 15 microns thick, which may not seem like much, but against the cornea’s entire thickness of about 550 microns it is significant. Ophthalmologists proved the existence of this layer by simulating human corneal transplants and grafts on eyes donated for research purposes to eye banks located in Bristol and Manchester.

Their discovery has the potential to help hundreds of thousands of people, or even more, giving a better understanding of corneal problems and providing better solutions, both in terms of treatment and surgery.

Full paper here


New metamaterial focuses radio waves with extreme precision similar to Star Wars’ Death Star

Researchers at MIT have created a new metamaterial that they used to fashion a concave lens capable of focusing radio waves with extreme precision. The resulting lens is extremely lightweight compared to counterparts made from conventional materials, and could see promising applications in satellite telecommunications and the exploration of distant stars.

In many ways, metamaterials seem supernatural: by definition, they are materials artificially engineered to have properties that can never be encountered in nature. It’s an extremely exciting field, since you’re essentially building new, unique compounds and structures. The most interesting applications of metamaterials we’ve seen so far come in the form of invisibility cloaks and what are commonly referred to as “super lenses”: extremely potent lenses that focus light beyond the reach of optical microscopes to image objects at nanoscale detail.

Building the metamaterial lens

The latter is what the MIT scientists were going for with their negative-refraction concave lens, which bends electromagnetic waves (in this case, radio waves) in exactly the opposite sense to a normal concave lens. These properties arise from the structure of the metamaterial and how its individual cells are arranged. Here, the researchers built a blocky, S-shaped “unit cell”, only a few millimeters wide, whose shape refracts radio waves in particular directions; 4,000 of these were arranged to form the concave negative-refraction lens. Each cell bends radio waves only slightly, but together they focus the wave.
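Snell’s law itself captures what makes negative refraction strange: with a negative index, the refracted ray emerges on the same side of the normal as the incoming ray. A quick sketch with an illustrative index of -1.5 (not the lens’s actual effective index):

```python
import math

def refraction_angle(theta1_deg: float, n1: float, n2: float) -> float:
    """Refraction angle (degrees) going from medium n1 into n2, via Snell's law:
    n1 * sin(theta1) = n2 * sin(theta2)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

theta = 30.0
print(refraction_angle(theta, 1.0, 1.5))   # ordinary glass: ~19.5 (opposite side of normal)
print(refraction_angle(theta, 1.0, -1.5))  # negative index: ~-19.5 (same side of normal)
```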

The orientation of 4,000 S-shaped units forms a metamaterial lens that focuses radio waves with extreme precision, and very little energy lost. (c) Dylan Erb


Isaac Ehrenberg, an MIT graduate student in mechanical engineering, shaped the lens via 3-D printing layer by intricate layer from a polymer solution. He then washed away any residue with a high-pressure water jet and coated each layer with a fine mist of copper to give the lens a conductive surface.

“There’s no solid block of any material in the periodic table which will generate this effect,” Ehrenberg says. “This device refracts radio waves like no other material found in nature.”

In an experiment, the metamaterial lens was positioned between two radio antennas. The energy transmitted through it was found to pass through the lens almost in its entirety, with very little lost within the metamaterial, a significant improvement in energy efficiency compared with past negative-refraction designs. The team also found that the radio waves converged in front of the lens at a very specific point, creating a tight, focused beam.

Star Wars’ Death Star laser beam?

As an analogy, Ehrenberg sees the design and functionality of the lens much in the same way as the Death Star’s concave dish, which focuses a powerful laser beam to destroy nearby planets in the Star Wars movies. Once again, George Lucas’ saga offers invaluable inspiration to scientists.

Since it weighs less than a pound, the lens could be used to focus radio waves precisely on molecules to create the same high-resolution images currently produced by very heavy and bulky lenses. Mass is one of the main factors taken into account for space applications, and future space satellites would definitely benefit from this. In addition, Ehrenberg says its fabrication is simple and easily replicated, allowing other scientists to investigate 3-D metamaterial designs.

“You can really fully explore the space of metamaterials,” Ehrenberg says. “There’s a whole other dimension that now people will be able to look into.”

His findings were documented in the Journal of Applied Physics.

source

 


New artificial lenses mimic the natural qualities of the eye

Modern sight-correction procedures often involve surgically implanting an artificial lens. The patient’s sight is significantly improved; however, the quality of vision is far from that of a healthy pair of eyes. That’s because current artificial lenses function more or less like camera lenses (a bit more advanced, of course), while the eye is a lot more complicated. Recently, a team of researchers successfully constructed a lens that is closer to the human eye than any of its counterparts.

One of the GRIN lenses.


In high school optics, textbooks and teachers often use the human eye as an example of a natural light-bending lens, comparing it to a camera when discussing refraction: the bending of light in a particular direction as it travels into a new medium. The fact of the matter is, a camera’s lens is composed of only one or a few layers. As light passes through, it is bent only at the surfaces of the lens and travels in straight lines in between. This is why artificial lens implants, while still improving sight considerably, aren’t that effective.

The eye, however, bends light continuously. To create an artificial lens with features closer to the natural qualities of the eye, scientists at Case Western Reserve University, the Rose-Hulman Institute of Technology, the U.S. Naval Research Laboratory, and Case Western spin-off company PolymerPlus made a single lens from hundreds of thousands of layered and laminated nanoscale polymer films. The technology is known as GRIN (gradient refractive index) optics.

Each of these thousands of stacked films has slightly different optical properties, so light is bent incrementally, by many small amounts, as it passes through the lens.
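To see how a stack of films with a gradually changing index curves light, here is a toy ray trace through an illustrative index ramp (the values are made up for the sketch; the real lens uses hundreds of thousands of films):

```python
import math

def trace_through_stack(theta_deg: float, indices: list[float]) -> float:
    """Propagate a ray through successive thin layers, applying Snell's law
    at each interface; return the final propagation angle in degrees."""
    theta = math.radians(theta_deg)
    for n_prev, n_next in zip(indices, indices[1:]):
        theta = math.asin(n_prev * math.sin(theta) / n_next)
    return math.degrees(theta)

# 100 thin layers with index ramping from 1.33 to 1.42 (illustrative values):
# each interface bends the ray a little, so the path curves gradually
# instead of kinking once at a single surface.
stack = [1.33 + 0.09 * i / 99 for i in range(100)]
print(f"30.0 degrees in -> {trace_through_stack(30.0, stack):.1f} degrees out")
```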

“As light passes from the front of the human eye lens to the back, light rays are refracted by varying degrees,” said Michael Ponting, president of PolymerPlus. “It’s a very efficient means of controlling the pathway of light without relying on complicated optics, and one that we attempted to mimic.”

The lenses currently employed to treat sight-impairing conditions such as cataracts lack the ability to incrementally change the refraction of light, and thus fail to come close to the performance of the human eye.

“A copy of the human eye lens is a first step toward demonstrating the capabilities, eventual biocompatible and possibly deformable material systems necessary to improve the current technology used in optical implants,” says Ponting.

Since the technology also enables optical systems with fewer components, GRIN could be used not only as medical implants, but also in consumer and military products.

“Prototype and small batch fabrication facilities exist, and we’re working toward selecting early adoption applications for nanolayered GRIN technology in commercial devices,” says Ponting.

Findings were published in the journal Optics Express. The animation below describes the M-GRIN manufacturing process used to make the new lenses:

source