Tag Archives: trillion fps camera

Trillion fps camera shoots advancing light waves

How fast can your camera shoot? 60 frames per second, maybe 100? If you’ve got a good one, maybe 1,000, or maybe you’re super pro and you shoot 10,000 fps. Puh-lease! The new MIT camera shoots at 1 trillion fps – that’s 1,000,000,000,000 frames every second!

Think of it this way: 1 trillion seconds is over 31,688 years; so if you shot just one second and played it back at 30 fps, it would take over 1,000 years to watch! That would be some boring movie, no matter what it showed. At this frame rate, even light looks like it’s moving in slow motion.
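The playback arithmetic above is easy to check for yourself; here is a quick back-of-the-envelope calculation (the variable names are just illustrative):

```python
# How long would 1 second of trillion-fps footage take to watch at 30 fps?
CAPTURE_FPS = 1e12   # 1 trillion frames captured per second
PLAYBACK_FPS = 30    # ordinary playback rate

frames_in_one_second = CAPTURE_FPS * 1.0
playback_seconds = frames_in_one_second / PLAYBACK_FPS
playback_years = playback_seconds / (365.25 * 24 * 3600)

print(round(playback_years))  # a little over 1,000 years
```

So one second of footage stretches into roughly a millennium of viewing, which is why the "over 1,000 years" figure holds up.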

Of course, you can’t take this camera on vacation, but even if you could, no place on Earth offers the necessary lighting. To shoot, the team relied on “femtosecond laser illumination, picosecond-accurate detectors and mathematical reconstruction techniques”.

The result you see here is an actual moving ray of light, caught in the act.

“It’s very interesting work. I am very impressed,” says Nils Abramson, a professor of applied holography at Sweden’s Royal Institute of Technology. In the late 1970s, Abramson pioneered a technique called light-in-flight holography, which ultimately proved able to capture images of light waves at a rate of 100 billion frames per second.

The work, which was done in 2011, is still unsurpassed in terms of speed, and I’m surprised this field of research hasn’t grown more popular, especially considering its applications. Medical imaging and laser physics are just two that come to mind.

“I’m surprised that the method I’ve been using has not been more popular,” Abramson adds. “I’ve felt rather alone. I’m very glad that someone else is doing something similar. Because I think there are many interesting things to find when you can do this sort of study of the light itself.”

Ultra-speed camera developed at MIT can “see” around corners

MIT camera

Researchers at MIT have developed a revolutionary new technique: they re-purposed the trillion frames/second camera we told you about a while ago and used it to capture 3-D images of a wooden figurine and of foam cutouts outside the camera’s line of sight. Essentially, the camera could see around corners by transmitting light and then reading it back as it bounced off the walls.

The centerpiece of the scientists’ experimental rig is the femtosecond laser, a device capable of emitting bursts of light so short that their duration is measured in quadrillionths of a second. The system fires these femtosecond bursts at a wall facing the obscured object, in this case a wooden figurine. The light then reflects into the hidden room, bounces back and forth for a while, and eventually returns toward the camera, where it hits a detector. Basically, this works like a periscope, except that instead of mirrors, the device makes use of any kind of surface.

Since the bursts are so short, the device can compute how far they’ve traveled by measuring the time it takes them to reach the detector. The procedure is repeated several times, with light bounced off different points of the wall so that it enters the room at different angles – eventually, the room’s geometry is pieced together.
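The distance computation itself is just speed of light times travel time. A minimal sketch of the idea (the function name and example numbers are hypothetical, not from the MIT system):

```python
# Toy time-of-flight calculation: the basic principle behind the
# distance estimates described above.
C = 299_792_458.0  # speed of light in vacuum, m/s

def path_length(arrival_time_s: float) -> float:
    """Total distance a light pulse traveled, given its travel time."""
    return C * arrival_time_s

# A photon detected 20 picoseconds after emission has traveled about
# 6 millimetres, which is why picosecond-accurate detectors can
# resolve scene geometry at the millimetre scale.
print(path_length(20e-12))  # roughly 0.006 m
```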

Ramesh Raskar, head of the Camera Culture Research Group at the MIT Media Lab that conducted the study, said, “We are all familiar with sound echoes, but we can also exploit echoes of light.”

To interpret and knit multiple femtosecond-laser measurements into visual images, a complicated mathematical algorithm had to be developed. A particular challenge the researchers faced was how to understand information from photons that had traveled the same distance and hit the camera lens at the same position, after having bounced off different parts of the obscured scene.

“The computer overcomes this complication by comparing images generated from different laser positions, allowing likely positions for the object to be estimated,” the team said.
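To make the comparison-across-laser-positions idea concrete: each measurement (laser spot, detector, total path length) constrains the hidden point to an ellipse, and votes from several laser positions intersect at the likely object position. Below is a heavily simplified 2-D sketch of that backprojection idea; all names, coordinates, and the grid-voting scheme are illustrative assumptions, not the team’s actual algorithm:

```python
import math

def backproject(measurements, grid, tolerance=0.05):
    """Vote for grid cells consistent with each path-length measurement."""
    votes = {cell: 0 for cell in grid}
    for laser_spot, detector, path_len in measurements:
        for cell in grid:
            # Distance laser spot -> candidate cell -> detector.
            d = math.dist(laser_spot, cell) + math.dist(cell, detector)
            if abs(d - path_len) < tolerance:
                votes[cell] += 1
    # The cell consistent with the most measurements is the best guess.
    return max(votes, key=votes.get)

# Hidden point at (1.0, 2.0); simulate exact measurements from three
# laser spots on a wall along y = 0, with the detector at the origin.
hidden = (1.0, 2.0)
detector = (0.0, 0.0)
spots = [(-1.0, 0.0), (0.5, 0.0), (2.0, 0.0)]
meas = [(s, detector, math.dist(s, hidden) + math.dist(hidden, detector))
        for s in spots]

grid = [(x / 10, y / 10) for x in range(-20, 31) for y in range(0, 31)]
print(backproject(meas, grid))  # a cell at or very near (1.0, 2.0)
```

One measurement alone leaves a whole ellipse of candidates; it is only by combining spots at different wall positions that the ambiguity collapses, which is exactly the complication the researchers describe.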

The process currently takes several minutes to produce an image, though the scientists believe they will eventually get this down to a mere 10 seconds. They also hope to improve the quality of the images the system produces and to enable it to handle visual scenes with far more clutter. Applications include emergency-response imaging systems that can evaluate danger zones and save lives, or unmanned vehicles that can navigate around obstructed corners.

Their findings will be reported in a paper out this week in the journal Nature Communications. 

source: MIT