Tag Archives: lens

New, revolutionary metalens focuses entire visible spectrum into a single point

The Harvard-produced lens could usher in a new age of cameras and augmented reality.

The next generation of cameras might be powered by nanotechnology.

From the gargantuan telescopes built to study the universe to the ever smaller cameras inside your smartphones, lenses have come a long way. They’ve reached incredibly high performance at lower and lower costs, but researchers want to take them to the next level. A team from Harvard has developed a metalens — a flat surface that uses nanostructures to focus light — capable of concentrating the entire visible spectrum onto a single spot.

Metalenses aren’t exactly a new thing. They’ve been around for quite a while, but until now, they’ve struggled to focus a broad spectrum of light, addressing only some of the light wavelengths. This is the first time researchers managed to focus the entire spectrum — and in high resolution. This raises exciting possibilities.

“Metalenses have advantages over traditional lenses,” says Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and senior author of the research. “Metalenses are thin, easy to fabricate and cost effective. This breakthrough extends those advantages across the whole visible range of light. This is the next big step.”

In a way, creating such a lens is like building a maze for light. Inside a material such as glass, different wavelengths travel at different speeds: red moves the fastest and violet the slowest. This is why a prism spreads white light into a rainbow through dispersion, and it means the colors passing through a lens come to focus at slightly different points, producing so-called chromatic aberrations. Conventional optics correct for this by stacking several curved lens elements made of different glasses, but a flat metalens needs a different approach. This is where the innovation takes place. The team from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) used arrays of nano-sized titanium dioxide fins to correct the chromatic aberrations.
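To get a feel for the scale of the problem, here is a minimal Python sketch of the chromatic focal shift in a single conventional lens. It assumes the thin-lens lensmaker's equation and rough Cauchy dispersion coefficients for BK7 glass; all numbers are illustrative and are not from the paper:

```python
# Chromatic focal shift in a single glass lens: a minimal sketch.
# ASSUMPTIONS: Cauchy dispersion n = A + B / wavelength^2 with rough
# BK7 coefficients, plus the thin-lens lensmaker's equation.

A, B = 1.5046, 0.00420        # Cauchy coefficients (B in micrometres squared)
R1, R2 = 100.0, -100.0        # surface radii in mm for a biconvex lens

def refractive_index(wavelength_um):
    """Cauchy approximation: shorter wavelengths see a higher index."""
    return A + B / wavelength_um ** 2

def focal_length_mm(wavelength_um):
    """Lensmaker's equation for a thin lens: 1/f = (n - 1)(1/R1 - 1/R2)."""
    n = refractive_index(wavelength_um)
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

f_red = focal_length_mm(0.70)    # red light, 700 nm
f_blue = focal_length_mm(0.45)   # blue light, 450 nm
print(f"f(red) = {f_red:.2f} mm, f(blue) = {f_blue:.2f} mm")
print(f"focal shift: {f_red - f_blue:.2f} mm")  # blue focuses closer than red
```

Even this idealized single lens focuses blue light a couple of millimetres closer than red; classic achromatic doublets cancel that shift by stacking glasses, at the cost of thickness and weight, which is exactly what a metalens avoids.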

An artist’s conception of incoming light being focused on a single point by a metalens. Image credits: Jared Sisler/Harvard SEAS.

Previous research had shown that this is possible in theory, but this is the first time a practical solution was designed, and it was no easy feat.

“One of the biggest challenges in designing an achromatic broadband lens is making sure that the outgoing wavelengths from all the different points of the metalens arrive at the focal point at the same time,” said Wei Ting Chen, a postdoctoral fellow at SEAS and first author of the paper.

“By combining two nanofins into one element, we can tune the speed of light in the nanostructured material, to ensure that all wavelengths in the visible are focused in the same spot, using a single metalens. This dramatically reduces thickness and design complexity compared to composite standard achromatic lenses.”

Through this approach, they were able to focus all the colors of the rainbow onto a single point — in other words, they were able to image “normal” white light, using a lens thousands of times thinner than what we’re used to.

“Using our achromatic lens, we are able to perform high quality, white light imaging. This brings us one step closer to the goal of incorporating them into common optical devices such as cameras,” said Alexander Zhu, co-author of the study.

The potential applications are practically limitless, not only in photography but also in emerging technologies such as virtual or augmented reality. But while this does bring researchers one step closer to developing smaller, better lenses for your camera or smartphone, there’s still a long way to go before the technology will reach consumers. The first step is achieving the same results in macro-scale lenses. Chen and Zhu say that they plan on scaling up the lens to about 1 cm (0.39 in) in diameter, which would make it suitable for real-world applications. It will undoubtedly take them at least a few years to reach that goal, but if they can do it, we’re in for quite a treat.

Journal Reference: Wei Ting Chen et al. “A broadband achromatic metalens for focusing and imaging in the visible,” Nature Nanotechnology. doi:10.1038/s41565-017-0034-6.


Lensless camera designed to be paper-thin and do anything a traditional camera does

Caltech engineers have developed an ultra-thin, lens-less camera design which can do everything a traditional camera can while still fitting in your pocket.

Camera Lens.

Image credits Rudy and Peter Skitterians.

Traditional cameras can be designed to be pretty small — such as the ones in your webcam or telephone — but because of the way they’re designed, they can’t ever really be completely flat. These devices rely on arrays of lenses to bend and direct incoming light onto a film or optical sensor where it’s recorded, and the lenses have to be a certain size, shape, and distance away from their neighbors to work properly — so they need to be 3D.

This is a problem if you’re trying to design a high-fidelity camera that fits in your pocket. So a team of engineers at Caltech worked around the issue by doing away with the lens altogether, replacing it with an ultra-thin optical phased array (OPA).

Light bending is so last year

OPAs do the same job as a lens, but instead of using glass to bend light, they use processors to crunch data. They’re large arrays of light sensors, each of which can digitally apply a precise time delay (called a phase shift) to incoming light, allowing the camera to focus on different objects or look in different directions.

The OPA works like a phased array in reverse. Phased arrays are large emitter arrays, mostly used in wireless communication and radar, that send out the same signal through each emitter with carefully chosen delays. Because of the emitters’ positions relative to one another, the signals reinforce each other in one direction and cancel each other out everywhere else, essentially creating a signal ‘laser beam’. The OPA does this with incoming light instead, amplifying what arrives from one chosen direction while the contributions from every other direction largely cancel out across the array.
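The steering principle can be sketched numerically. The toy model below (illustrative numbers, not the Caltech design) applies a per-element phase shift to an 8-element array and shows the summed response peaking in the chosen direction while other directions largely cancel:

```python
import numpy as np

# Phased-array beam steering: a minimal sketch with made-up numbers.
# Each of N elements, spaced d apart, gets a phase shift so that light
# arriving from one chosen direction adds up in phase.

N = 8                        # number of elements (the paper used an 8x8 grid)
d = 0.5                      # element spacing, in wavelengths
steer_deg = 20.0             # direction we want the array to "look" toward
k = 2 * np.pi                # wavenumber, with d expressed in wavelengths

# Per-element phase shift compensating the path difference from steer_deg.
n = np.arange(N)
phase_shifts = -k * d * n * np.sin(np.radians(steer_deg))

# Array response vs. incoming angle: coherent sum over all elements.
angles = np.linspace(-90, 90, 3601)
path_phase = k * d * np.outer(np.sin(np.radians(angles)), n)
response = np.abs(np.exp(1j * (path_phase + phase_shifts)).sum(axis=1)) / N

peak_angle = angles[np.argmax(response)]
print(f"response peaks at {peak_angle:.1f} degrees")  # ~ steer_deg
```

Changing `phase_shifts` is all it takes to "point" the array somewhere else, which is why such a camera can scan a scene with no moving parts.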

“What the camera does is similar to looking through a thin straw and scanning it across the field of view. We can form an image at an incredibly fast speed by manipulating the light instead of moving a mechanical object,” says graduate student Reza Fatemi, lead author of the paper.

“Here, like most other things in life, timing is everything. With our new system, you can selectively look in a desired direction and at a very small part of the picture in front of you at any given time, by controlling the timing with femto-second—quadrillionth of a second—precision,” says principal investigator and Bren Professor of Electrical Engineering and Medical Engineering in the Division of Engineering and Applied Science at Caltech Ali Hajimiri.

The camera currently uses an array of just 64 light receivers (in an 8×8 grid), so the resulting image has a pretty low resolution. But it’s only intended as a proof of concept — and it works — meaning that it’s just an issue of scaling it up. The layer-thin camera primarily uses silicon photonics to emulate the lens and sensor of a digital camera, so it should be cheaper as well as thinner than its digital counterparts.

Photographers will be happy to hear that Caltech’s layer-camera can emulate anything a regular lens is capable of doing, only much faster — for example, Hajimiri says it can switch from a fish-eye to a telephoto lens instantaneously, just by tweaking the incoming light. Smartphone enthusiasts everywhere will be delighted that such cameras will allow devices to become thinner than ever before.

Moving from the very small to the very big, the 2D camera could allow massive but very light and flat telescopes to be built on the ground or in space, allowing far better control than today’s lensed telescopes and dramatically reducing their maintenance and running costs. Finally, the tech could change how we think about cameras from the ground up by enabling whole new classes of paper-thin, inexpensive devices, such as wallpaper cameras, even wearable ones.

The team now plans to scale up the camera by designing chips that enable much larger receivers with higher resolution and sensitivity.

The paper, “An 8X8 Heterodyne Lens-less OPA Camera,” was presented at the Optical Society of America’s (OSA) Conference on Lasers and Electro-Optics (CLEO) and published online by the OSA in the OSA Technical Digest in March 2017.


Ultra-thin flat lens leads to smaller, better, cheaper optical devices: from telescopes to VR goggles

To produce a clear image, high-power microscopes need to stack multiple lenses. But these instruments could be scaled down to a fraction of their size with a planar, or flat, lens. Such a lens was recently demonstrated by researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). The lens works with high efficiency and within the visible light spectrum.

small lens

(c) Harvard University

“This technology is potentially revolutionary because it works in the visible spectrum, which means it has the capacity to replace lenses in all kinds of devices, from microscopes to cameras, to displays and cell phones,” said Federico Capasso, Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering and senior author of the paper. “In the near future, metalenses will be manufactured on a large scale at a small fraction of the cost of conventional lenses, using the foundries that mass produce microprocessors and memory chips.”

The lens is essentially a layer of titanium dioxide, a material commonly used as a paint whitener, deposited on a piece of glass; the whole device measures only 2 mm across and is thinner than a human hair. The metamaterial — a material with properties that can’t be found in nature — is packed with millions of tiny pillars, each smaller than the wavelength of light. As such, the lens-light interaction remoulds the rays.


The tiny pillars guide light so the image is focused and sharp. Credit: Harvard University

This is how the researchers were able to replicate the focusing ability of conventional lenses with a much smaller, flat lens. Additionally, aberrations inherent in traditional optics are avoided, making the image 30% sharper in comparison with top-end scientific microscopes.

“The quality of our images is actually better than with a state-of-the-art objective lens. I think it is no exaggeration to say that this is potentially revolutionary,” said Prof. Capasso.

These lenses can be made with the same tools used to make computer chips. Capasso argues that this means his lab’s design can be easily mass-produced, which would mean not only more, cheaper top-end microscopes, but also much better smartphone cameras, for instance. Basically, any application that involves optics can benefit from the very first flat lens. Virtual reality goggles, which are pretty bulky and uncomfortable to wear for long periods, could be scaled down dramatically, too.

“The amazing field of metamaterials brought up lots of new ideas but few real-life applications have come so far,” said Vladimir M. Shalaev, professor of electrical and computer engineering at Purdue University, who was not involved in the research. “The Capasso group with their technology-driven approach is making a difference in that regard. This new breakthrough solves one of the most basic and important challenges, making a visible-range meta-lens that satisfies the demands for high numerical aperture and high efficiency simultaneously, which is normally hard to achieve.”

“Any good imaging system right now is heavy because the thick lenses have to be stacked on top of each other. No one wants to wear a heavy helmet for a couple of hours,” he said. “This technique reduces weight and volume and shrinks lenses thinner than a sheet of paper. Imagine the possibilities for wearable optics, flexible contact lenses or telescopes in space.”

 

 

How the eye works

 

Image via flickr. 

Doing some light reading

Touch interprets changes of pressure, texture and heat in the objects we come in contact with. Hearing picks up on pressure waves, and taste and smell read chemical markers. Sight is the only sense that allows us to make heads or tails of some of the electromagnetic waves zipping all around us — in other words, seeing requires light.

Apart from fire (and other incandescent materials), bioluminescent sources and man-made objects (such as the screen you’re reading this on), our environment generally doesn’t emit light for our eyes to pick up on. Instead, objects become visible when part of the light from other sources reflects off of them.

Let’s take an apple tree as an example. Light travels in a (relatively) straight line from the sun to the tree, where different wavelengths are absorbed by the leaves, bark and apples themselves. What isn’t absorbed bounces back, and some of it meets the first layer of our eyes: the thin film of tears that protects and lubricates the organ. Under it lies the cornea, a thin sheet of innervated, transparent cells.

Behind it lies a body of liquid named the aqueous humor. This clear fluid keeps constant pressure applied to the cornea so that it doesn’t wrinkle and maintains its shape. This is a pretty important role, as the cornea provides two-thirds of the eye’s optical power.

Anatomy of the eye.
Image via Flickr

The light is then directed through the pupil. No, there are no schoolkids in your eye; the pupil is the central, circular opening of the iris, the pretty-colored part of our eyes. The iris contracts or relaxes to let an optimal amount of light enter deeper into the eye. Without it regulating exposure, our eyes would be burned whenever it got bright and would struggle to see anything when it got dark.

The final part of our eye’s focusing mechanism is called the crystalline lens. It has only half the focusing power of the cornea, but its most important feature is that it can change how it focuses. The crystalline lens is attached at its equator to a ring of fibrous tissue that pulls on it to change its shape (a process known as accommodation), allowing the eye to focus on objects at various distances.

Fun fact: You can actually observe how the lens changes shape. Looking at your monitor, hold your hands up some 5-10 centimeters (2-4 inches) in front of your eyes and look at them till the count of ten. Then put them down; the blurry first few moments and the weird feeling in your eyes are the crystalline lens stretching to adapt to the new focal distance.
Science at its finest.
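Accommodation can be approximated with the thin-lens equation. The sketch below uses a deliberately simplified model — a single thin lens at a fixed distance from the retina, with illustrative rather than physiological numbers — to show why the lens must bulge (shorten its focal length) for near objects:

```python
# Accommodation as a thin-lens calculation: a rough, simplified model.
# ASSUMPTION: one thin lens a fixed ~17 mm from the retina; real eyes
# combine cornea and lens, so these numbers are only illustrative.

def required_focal_length_mm(object_distance_mm, image_distance_mm=17.0):
    """Thin-lens equation: 1/f = 1/d_object + 1/d_image."""
    return 1.0 / (1.0 / object_distance_mm + 1.0 / image_distance_mm)

f_far = required_focal_length_mm(1e9)      # object effectively at infinity
f_near = required_focal_length_mm(250.0)   # reading distance, ~25 cm

print(f"far focus:  f = {f_far:.2f} mm")
print(f"near focus: f = {f_near:.2f} mm")
# The focal length must shorten (the lens bulges) to focus nearby objects.
```

That one-to-two-millimetre change in focal length is the “stretching” you feel in the fun-fact experiment above.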

After going through the lens, light passes through a second (but more jello-like) body of fluid and falls on an area known as the retina. The retina lines the back of the eye and is the part that actually processes the light. A lot of different parts of the retina work together to keep our sight crystal clear, but three of them are key to understanding how we see.

  • First, the macula. This is the “bull’s eye” of the retina. At its center there’s a slight dip named the fovea centralis (fovea is Latin for pit). As it lies at the focal point of the eye, the fovea is jam-packed with light-sensitive nerve endings called photoreceptors.
  • Photoreceptors. These come in two categories: rods and cones. They’re structurally and functionally different, but both serve to encode light as electro-chemical signals.
  • Retinal pigment epithelium. The RPE is a layer of dark tissue whose cells absorb excess light to improve the accuracy of our photoreceptors’ readings. It also delivers nutrients to and clears waste from the retina’s cells.

So far you’ve learned about the internal structure of your eyes, how they capture electromagnetic light, focus it and translate it into electro-chemical signals. They’re wonderfully complex systems, and you have two of them. Enjoy!

There’s still something I have to tell you about seeing, however. Don’t be alarmed but….

The images are all in your head

While the eyes focus light and encode it into the electrical signals our nervous system uses to communicate, they don’t see per se. That information is carried by the optic nerves to the back of the brain for processing and interpretation. This all takes place in an area of our brain known as the visual cortex.

Brain shown from the side, facing left. Above: view from outside, below: cut through the middle. Orange = Brodmann area 17 (primary visual cortex)
Image via Wikipedia

Because they’re wedged in your skull a short distance apart from each other, each of your eyes feeds a slightly different picture to your brain. These little discrepancies are put to good use: by comparing the two views, the brain can tell how far away an object is. This is the mechanism that ‘magic eye’ or autostereogram pictures exploit to make 2D images appear three-dimensional. Other clues, like shadows, textures and prior knowledge, also help us judge depth and distance.
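The depth-from-disparity geometry can be sketched with a simple pinhole-camera model (illustrative numbers; the brain’s actual processing is far more elaborate):

```python
# Depth from binocular disparity: a minimal pinhole-camera sketch.
# ASSUMPTION: two parallel "eyes" a fixed baseline apart; all numbers
# are illustrative, not physiological measurements.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """A nearer object shifts more between the two images; depth is
    inversely proportional to that shift (the disparity)."""
    return focal_px * baseline_m / disparity_px

focal_px = 800.0      # focal length expressed in pixels
baseline_m = 0.065    # ~6.5 cm, a typical distance between human pupils

for disparity in (40.0, 20.0, 10.0):   # pixels of shift between the views
    z = depth_from_disparity(focal_px, baseline_m, disparity)
    print(f"disparity {disparity:5.1f} px -> depth {z:.2f} m")
```

Halving the disparity doubles the estimated distance, which is why depth judgments from stereo alone get less reliable for far-away objects.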


Neurons in the visual cortex work together to reconstruct the image from the raw information the eyes feed them. Many of these cells respond specifically to edges oriented in a certain direction, and from these responses the brain builds up the shape of an object. Information about color and shading is also used as further clues, compared against the data stored in our memory, to understand what we’re looking at. Objects are recognized mostly by their edges, and faces by their surface features.

Brain damage can lead to conditions, such as agnosia, that impair object recognition: an inability to recognize the objects one is seeing. A man suffering from agnosia was asked to look at a rose and described it as ‘about six inches in length, a convoluted red form with a linear green attachment’. He described a glove as ‘a continuous surface infolded on itself; it appears to have five outpouchings’. His brain had lost its ability to either name the objects he was seeing or recognize what they were used for, even though he knew what a rose or a glove was. Occasionally, agnosia is limited to a failure to recognize faces, or to an inability to comprehend spoken words despite intact hearing, speech production and reading ability.

The brain also handles recognition of movement in images. Akinetopsia, a movement-recognition impairing condition is caused by lesions in the posterior side of the visual cortex. People suffering from it stop seeing objects as moving, even though their sight is otherwise normal. One woman, who suffered such damage following a stroke, described that when she poured a cup of tea the liquid appeared frozen in mid-air, like ice. When walking down the street, she saw cars and trams change position, but not actually move.

Researchers create contact lenses with telescopic abilities

It’s the world of science fiction come alive – Swiss researchers have developed contact lenses which, when paired with special spectacles, bestow telescopic vision on their wearers.

Cool, and very useful

lens

The contact-lens-and-spectacles system can zoom in 2.8 times.

The device was not created for Bond-like purposes, but rather to help people suffering age-related visual impairment and blindness. Age-related macular degeneration (AMD) is a medical condition which usually affects older adults and results in a loss of vision in the center of the visual field because of damage to the retina. It is a very serious condition which dramatically hinders day to day activities, such as driving or even cooking.

However, the lenses might someday find their way into other areas, as the research was funded by DARPA, the US military’s research agency.

“They are not so concerned about macular degeneration,” said Dr Tremblay, one of the researchers. “They are concerned with super vision, which is a much harder problem. That’s because the standard is much higher if you are trying to improve vision rather than helping someone whose eyesight has deteriorated.”

How it works

The device consists of two parts: the central region lets light through for normal vision, and the telescopic part sits in a ring around this central area. Tiny aluminium mirrors scored with a specific pattern act as a magnifier, bouncing the light around four times within the ring before directing it towards the retina.

The magnifier is normally not in use, its light blocked by polarising filters, but users can switch it on by changing the filters on the spectacles so that the only light falling on the retina comes from the magnified stream. The prototype is no larger than 8 mm in diameter, 1 mm thick at its centre and 1.17 mm thick in its magnifying ring.

“The most difficult part of the project was making the lens breathable,” Dr Tremblay explained. “If you want to wear the lens for more than 30 minutes you need to make it breathable.”

This is very important because gases have to be able to pass through the lens to reach the eye, especially the cornea, which needs a constant oxygen supply to function. However, this requirement makes the fabrication process much more difficult.

“The fabrication tolerances are quite challenging because everything has to be so precise,” he said.