Tag Archives: camera

World’s largest camera takes 3,200-megapixel photos

A group of researchers at Stanford University have taken the first 3,200-megapixel digital photos, the largest ever taken in a single shot, using an extraordinary array of imaging sensors. These will be part of the world’s largest digital camera, which will be installed in a telescope in Chile once the camera is fully assembled.

The complete focal plane. Credit: SLAC

Researchers at the US Department of Energy’s SLAC National Accelerator Laboratory have been working since 2015 to manufacture the world’s largest and most powerful digital camera. The device will be the centerpiece of the Vera C. Rubin Observatory currently under construction in Chile, which will gather views of the night sky.

The project, known as the Legacy Survey of Space and Time (LSST), features 189 individual imaging sensors that record 16 megapixels each.

“This achievement is among the most significant of the entire Rubin Observatory Project,” SLAC’s Steven Kahn, director of the observatory, said in a statement. “The completion of the LSST Camera focal plane and its successful tests is a huge victory by the camera team that will enable Rubin Observatory to deliver next-generation astronomical science.”

While a full-frame consumer camera has an imaging sensor about 1.4 inches (3.5 centimeters) wide, the focal plane of this monster camera reaches more than two feet (61 centimeters) in width. That would allow it to spot astronomical objects or capture a portion of the sky in great detail, the researchers argued, highlighting its potential once the camera is fully assembled.

During tests, the team at SLAC placed the focal plane in a cryostat to cool the sensors down to their required operating temperature of -101.1°C (-150°F). Then they took pictures of broccoli, chosen for its intricate detail, as well as pictures of the team and of Vera C. Rubin, the scientist after whom the observatory is named.

The resolution of the focal plane is high enough to spot a golf ball from 15 miles away. Credit: SLAC

The images are actually so large that it would require 378 4K television screens to present one in full size, the researchers estimated, adding that the amazing resolution would allow spotting a golf ball from 24 kilometers (15 miles) away. The sensors will be able to spot objects 100 million times dimmer than those visible to the naked eye.
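
Both figures are easy to sanity-check with some back-of-the-envelope arithmetic (ours, not SLAC’s):

```python
import math

# 3,200 megapixels versus 4K UHD screens (3840 x 2160 ~ 8.3 MP each).
total_pixels = 3_200e6
uhd_pixels = 3840 * 2160
print(f"4K screens needed: {total_pixels / uhd_pixels:.0f}")  # ~386, near the quoted 378

# Angular size of a golf ball (~42.7 mm across) at 24 km, in arcseconds --
# the tiny angular scale the focal plane must resolve.
angle_rad = 0.0427 / 24_000
print(f"Golf ball subtends ~{math.degrees(angle_rad) * 3600:.2f} arcsec")
```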

Although the researchers have passed the most important phases of the project, they still have more challenging work ahead in order to assemble the rest of the camera. They have to insert the cryostat with the focal plane into the camera body, as well as add the lenses, a shutter and a filter exchange system. The final testing would start mid-2021, they estimate.

“Nearing completion of the camera is very exciting, and we’re proud of playing such a central role in building this key component of Rubin Observatory,” said JoAnne Hewett, SLAC’s chief research officer, in a statement. “It’s a milestone that brings us a big step closer to exploring fundamental questions about the universe in ways we haven’t been able to before.”

Researchers build the first wireless camera that fits on a beetle

It’s a good day to be a tech-loving beetle, as researchers at the University of Washington (UW) have developed a tiny, wireless camera that can be mounted on top of live insects such as beetles and robots of similar size.

Image credits Mark Stone / University of Washington.

The camera can stream video to a smartphone at 1 to 5 frames per second — which, admittedly, isn’t a lot. But that performance becomes much more impressive when you consider that it weighs just 250 milligrams (about 0.009 ounces) and can pivot 60 degrees (to get wide-angle panorama shots).


“We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” said senior author Shyam Gollakota, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering.

“Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”

The team mounted the cameras on top of live beetles and insect-sized robots to test their efficiency. The cameras themselves are lightweight but the batteries needed to power them would be much too large for the insects to bear, so the team used a different approach.

Vision is inherently energy-hungry. Flies, the authors note, use between 10% and 20% “of their resting energy just to power their brains, most of which is devoted to visual processing”. To reduce this strain, their eyes have a small central region of high acuity and a wider peripheral region of low acuity. To see something clearly, they need to turn their heads toward it; the periphery then helps them keep watch for predators, but doesn’t produce a high-quality image.

This setup also means that their brains have to use much less energy to process the incoming images.

To mimic this approach, the team installed a tiny, ultra-low-power black-and-white camera on a mechanical arm that can sweep across the field of view. The arm moves when a high voltage is applied, which bends the material. Unless more power is applied, the arm stays in place for about a minute, then relaxes back into its original position.

“One advantage to being able to move the camera is that you can get a wide-angle view of what’s happening without consuming a huge amount of power,” said co-lead author Vikram Iyer, a UW doctoral student in electrical and computer engineering.

“We can track a moving object without having to spend the energy to move a whole robot. These images are also at a higher resolution than if we used a wide-angle lens, which would create an image with the same number of pixels divided up over a much larger area.”

The whole setup can be controlled with a smartphone via Bluetooth from up to 120 meters away.

The beetles chosen to test this camera were a death-feigning beetle and a Pinacate beetle, as there was evidence they could bear weights of around half a gram. The team ensured the device didn’t impede the insects’ motions, and let them loose on gravel, on a slope, and on a tree. The beetles successfully navigated them all, even managing to climb the tree. The authors note that the beetles lived for at least a year after the experiment.

“We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said.

“If the camera is just continuously streaming without this accelerometer, we could record one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
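
At heart, the power-saving logic is simple event-driven capture: poll a motion sensor and only run the camera while the wearer is moving. Here’s a minimal sketch of that idea; the two driver functions are hypothetical placeholders, not the UW team’s actual firmware:

```python
import time

MOTION_THRESHOLD = 0.15  # in g; a hypothetical tuning value
FRAME_INTERVAL = 0.2     # seconds, i.e. 5 fps, the system's top rate

def read_accelerometer():
    """Placeholder: return total acceleration magnitude in g."""
    raise NotImplementedError  # would talk to the real sensor here

def capture_and_stream_frame():
    """Placeholder: grab one frame and send it over Bluetooth."""
    raise NotImplementedError  # would talk to the real camera here

def run():
    while True:
        # At rest the accelerometer reads ~1 g (gravity); deviations mean motion.
        if abs(read_accelerometer() - 1.0) > MOTION_THRESHOLD:
            capture_and_stream_frame()
            time.sleep(FRAME_INTERVAL)
        else:
            time.sleep(0.05)  # idle poll; the camera stays powered down
```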

The robot used in the tests is the smallest power-autonomous terrestrial robot with wireless vision, according to the paper. It uses vibrations to move (which makes it very energy-efficient). While the setup worked, the vibrations distorted the overall image, so the team had the robot make a short stop, take a picture, and resume moving. In this mode, the robot managed 2 to 3 centimeters per second and a camera battery life of around 90 minutes.

Applications for tiny cameras abound. It’s the first time we’ve been able to have direct footage from the back of an insect, and the camera’s diminutive size means it can go where no other similar device has in the past.

The team is, however, mindful of the privacy concerns such tiny cameras raise. They hope that by introducing the public to their creation, “people can start coming up with solutions to address them”.

The paper “Wireless steerable vision for live insects and insect-scale robots” has been published in the journal Science Robotics.

Smallest-yet image sensor for medical use wins Guinness World Record

A new, diminutive optical sensor has won a place in the Guinness Book of World Records for being so, so small (and still functional).

The newly-developed camera.
Image credits OmniVision.

OmniVision, a California-based developer of advanced digital imaging solutions, has announced the development of its OV6948 image sensor — a piece of gear that now holds the record for the smallest image sensor in the world.

Eagle-eyed, but small

The sensor will be used in a camera module, which the company has christened CameraCubeChip. OmniVision’s announcement (published on their website) of the new device all but earmarks it for medical use, stating that it’s meant to “address the market demand for decreased invasiveness and deeper anatomical access”. In the future, the company hopes to also expand the range of potential users to include veterinarians, dental practitioners, and the health industry at large.

And it’s easy to see why. The new sensor measures just 0.575 x 0.575 mm (1 mm = 0.03 in), while the wafer-shaped CameraCubeChip is only slightly larger: 0.65 x 0.65 x 1.158 mm, roughly the size of a grain of sand. Because of its very small size, the sensor and camera module can be fitted to disposable endoscopes and used to image the smallest parts of the body, from nerves and parts of the eye to the spine, heart, the inside of joints, or the urological system. Patients are bound to appreciate how small the devices are, considering that the alternatives available today are uncomfortable and can become quite painful.

The camera will also be much cooler (in terms of temperature) than traditional probes, which means it can be used for longer inside a patient’s body without posing any risk. This is due to its very modest power usage: just 25 mW (milliwatts) of power.

The new sensor has a 120-degree field of view, a focus range of 3 to 30 mm, captures images at a 200 x 200 pixel resolution, and can process video at 30 fps (frames per second). It can also transmit data in analog form over a maximum distance of 4 meters.

Another important advantage of the new sensor is that it can be affixed to disposable endoscopes. Patient cross-contamination caused by endoscope reuse is a growing public health concern, one which the camera can help fix, or at least reduce.

AI fail: Chinese driver gets fine for scratching his face

A driver in China got a fine for the smallest possible gesture: scratching his face.

A Chinese man had the misfortune of scratching his face as he was passing a monitoring camera, which landed him a fine and two points off his driver’s license. Image: Sina Weibo.

According to the Jilu Evening Post, the driver was only scratching his face — but his gesture looked like he was talking on the phone. An automated camera took a picture of him, and according to Chinese authorities “the traffic surveillance system automatically identifies a driver’s motion and then takes a photo”. Essentially, the AI operating the camera interpreted the gesture as the driver speaking on the phone, and fined him.

The driver, who has only been identified by his surname “Liu”, shared the photo on social media and quipped:

“I often see people online exposed for driving and touching [others’] legs,” he said on the popular Sina Weibo microblog, “but this morning, for touching my face, I was also snapped ‘breaking the rules’!”

After a struggle, he managed to have the fine cancelled, but the incident raises important concerns about privacy and AI errors, especially in an “all-seeing” state such as China. The country already has more than 170 million surveillance cameras, with plans to install a further 400 million by 2020. Many of these cameras come with facial recognition technology, and some even have AI capabilities, being able to assess a person’s age, ethnicity, and even gestures. Sometimes, though, they fail.

As the BBC points out, China’s social media was also buzzing with revolt regarding the state’s surveillance policies. China recently implemented a social credit system, intended to standardize the assessment of citizens’ behavior — and input from such cameras is key for the system.

“This is quite embarrassing,” one post commented, “that monitored people have no privacy.”

“Chinese people’s privacy — is that not an important issue?” another asks.

For now, this is indicative of a problem the whole world will have to deal with sooner or later: levels of both AI and surveillance are surging through our society, and we’re still not sure how to deal with them in a way that’s helpful but not intrusive.

World’s fastest camera captures 10 trillion frames per second

Thought your iPhone’s camera can shoot sick slow-mos? Here’s something a bit more impressive.

Credit: INRS.

Researchers at Caltech and L’Institut national de la recherche scientifique (INRS) devised the world’s fastest camera, which can shoot an incredible 10 trillion frames per second. It’s so fast that it can capture the interactions between matter and light at the nanoscale. The new camera more than doubles the number of frames per second set by the previous record holder, a device developed by researchers in Sweden.

T-CUP, officially the world’s fastest camera, is based on ‘compressed ultrafast photography’ technology. It combines a femtosecond streak camera with a static camera, and an image reconstruction technique called the Radon transform rounds out the setup.

Such ultra-fast cameras will prove useful to physicists looking to probe the nature of light and how it travels. Other potential applications include medicine and engineering.

“We knew that by using only a femtosecond streak camera, the image quality would be limited. So to improve this, we added another camera that acquires a static image. Combined with the image acquired by the femtosecond streak camera, we can use what is called a Radon transformation to obtain high-quality images while recording ten trillion frames per second,” said Lihong Wang, the Bren Professor of Medical Engineering and Electrical Engineering at Caltech.
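
The T-CUP reconstruction itself isn’t public code, but the Radon transform at its heart is a textbook tool. Here’s a toy illustration using scikit-image, just to show the projection-and-reconstruction idea (unrelated to the actual T-CUP pipeline):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# A standard test image: project it (Radon transform), then reconstruct it.
image = rescale(shepp_logan_phantom(), 0.5)
angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

sinogram = radon(image, theta=angles)            # forward projections
reconstruction = iradon(sinogram, theta=angles)  # filtered back-projection

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {error:.3f}")
```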

Real-time imaging of temporal focusing of a femtosecond laser pulse at 2.5 Tfps. Credit: Jinyang Liang, Liren Zhu & Lihong V. Wang.

During a test, T-CUP captured a femtosecond (a millionth of a billionth of a second) laser pulse, recording 25 images spaced 400 femtoseconds apart. The resolution and staggering timescale involved allowed the research team to record changes in the light beam’s shape, intensity, and angle of inclination.

A femtosecond laser pulse passing through a beam splitter. Credit: INRS.

The level of precision obtained by the researchers is unprecedented — and they’d like to do even more! According to co-author Jinyang Liang, there are ways to increase the speed up to one quadrillion (10¹⁵) frames per second. Recording the behavior of light at such a scale is beyond our current technology, but once it becomes a reality, entirely new fields of physics could open up.

T-CUP was described in the journal Light: Science and Applications. 


Shrimp-inspired camera leads to new underwater GPS


Credit: Pixabay.

To our eyes, life underwater looks a bit blander and less crisp than life at the surface. But that’s only because our vision adapted to surface life, shaped by millions of years of evolution. If you were a marine creature, you’d literally see things with a different eye. One such creature is the mantis shrimp, whose vision was modeled in a new camera by researchers at the University of Illinois.

The world through the eyes of a mantis shrimp

The new bio-inspired camera that mimics the eyes of the mantis shrimp can detect the polarization properties of underwater light. By reading how light refracts (or bends) when it passes through the surface of the water and bounces off particles and water molecules, the researchers were able to devise a novel GPS method.

“We collected underwater polarization data from all over the world in our work with marine biologists and noticed that the polarization patterns of the water were constantly changing,” said study leader Viktor Gruev, an Illinois professor of electrical and computer engineering and a professor of the Carle Illinois College of Medicine.

“This was in stark contrast to what biologists thought about underwater polarization information. They thought the patterns were a result of a camera malfunction, but we were pretty sure of our technology, so I knew this phenomenon warranted further investigation.”

Just earlier today, we wrote about another study that showed how the ancient Vikings could have used sunstones that polarize light as a compass. In a separate study published today in the journal Science Advances, Gruev and colleagues similarly discovered that the underwater polarization patterns captured by the shrimp-like camera are linked to the sun’s position relative to the location where the recording was made.

The team used this information to estimate the sun’s heading and elevation angle, allowing them to determine their GPS coordinates simply by knowing the date and time of the filming. During tests that coupled the bio-inspired camera with an electronic compass and tilt sensor, the researchers were able to locate their position anywhere on the planet within an accuracy of 61 km. That’s not exactly Google Maps material, but still impressive for a light-based, underwater GPS.
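
The geometry can be sketched in code: given the sun’s azimuth and elevation (recovered from polarization) plus a timestamp, search for the coordinates where a solar-position model predicts that same sun position. This toy version uses the pvlib library and a coarse grid search; it illustrates the idea, not the team’s actual algorithm:

```python
import numpy as np
import pandas as pd
from pvlib.solarposition import get_solarposition

def locate(timestamp, observed_azimuth, observed_elevation, step=2.0):
    """Grid-search the latitude/longitude whose predicted sun position best
    matches the azimuth/elevation recovered from polarization (degrees)."""
    times = pd.DatetimeIndex([timestamp])
    best, best_err = None, np.inf
    for lat in np.arange(-88.0, 89.0, step):
        for lon in np.arange(-180.0, 180.0, step):
            pos = get_solarposition(times, lat, lon)
            az = pos["azimuth"].iloc[0]
            el = pos["apparent_elevation"].iloc[0]
            # Crude angular error; azimuth wrap-around ignored for brevity.
            err = (el - observed_elevation) ** 2 + (az - observed_azimuth) ** 2
            if err < best_err:
                best, best_err = (lat, lon), err
    return best

# e.g. locate(pd.Timestamp("2018-04-04 17:00", tz="UTC"), 220.0, 35.0)
```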

This method could prove highly useful in various underwater applications such as locating missing aircraft or creating a detailed map of the seafloor by using a swarm of tiny robots.

University of Illinois electrical and computer engineering professor Viktor Gruev led a study demonstrating underwater global positioning made possible by a bio-inspired camera that mimics the eyes of a mantis shrimp. Credit: Viktor Gruev.

The research also offers valuable insights into the migratory behavior of many marine species.

“Animals like turtles and eels, for example, probably use a slew of sensors to navigate their annual migration routes that take them thousands of miles across oceans,” Gruev said. “Those sensors may include a combination of magnetic, olfactory and possibly – as our research suggests – visual cues based on polarization information.”

If polarization is this important to many marine species, how does pollution interfere with it? According to the researchers, it’s very likely that marine pollution, which has increased dramatically in the past few decades, affects underwater polarization patterns. This means that many marine animals might now sense their surroundings differently from what they originally learned. For instance, more and more whales are becoming stranded, some ending up close to the Californian shore, where they had never been observed before.


How bees might help smartphone cameras snap more natural-looking photos

Most cameras, whether they’re embedded in your phone or your drone, are crap when it comes to rendering colors as vibrantly as the human eye does. One team of researchers, however, argues that we would take far better vacation photos if cameras were built more like a bee’s eye.

Closeup of a bee’s amazing eyes. Credit: Flickr, USGS Bee Inventory.

The problem with modern commercial cameras is color constancy: the ability to identify and distinguish color under any variation of light. It’s what helps us humans identify objects even in dim light. For instance, we know that a banana sitting in a basket in the low light just before dawn is yellow. You look at the banana and know it’s yellow, but a picture of it taken in the same light is another matter.

“For a digital system like a camera or a robot the colour of objects often changes. Currently this problem is dealt with by assuming the world is, on average, grey,” said Adrian Dyer, an Associate Professor at RMIT.

“This means it’s difficult to identify the true colour of ripe fruit or mineral rich sands, limiting outdoor colour imaging solutions by drones, for example.”
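
The “assume the world is gray” approach Dyer refers to is the classic gray-world white-balance algorithm: scale each color channel so that the channel averages come out equal. A minimal version looks like this:

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: assume the average scene color is gray and
    scale each channel so the R, G, B means match. img is an HxWx3 array."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = means.mean() / means              # scale factors toward gray
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```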

It’s not just people that have good color constancy; bees do too. What’s more, they have five eyes, two of which are dedicated to sensing color, which is mighty useful when foraging for flowers. The other three eyes are not as specialized, but they can still sense the color of light through receptors called ocelli. These ocelli are always pointed to the sky.

Australian researchers at RMIT University, Melbourne, think it’s these ocelli that relay information about light to the parts of the brain responsible for processing color. This ensures the bee knows what it’s doing and approaches the right flower, unlike a camera, which can pick up the wrong colors.

For this to happen, information from the ocelli would have to be integrated with colors seen by the compound eyes. Indeed, this seems to happen after the researchers mapped the neural tracings from ocelli and showed neural projections fed into the processing areas of the bee brain. “It is rare that physics, biology, neuro-anatomy and ecology all fit together, but here we have it,” said Professor Andrew Greentree from the ARC Centre for Nanoscale BioPhotonics at RMIT, in a statement.

This discovery on color constancy could be implemented into imaging systems to enable accurate color interpretation. One day, we might all take better, sweeter pictures. And it’s thanks to bees.

The findings appeared in the Proceedings of the National Academy of Sciences of the United States of America. 

 


Lensless camera designed to be paper-thin and do anything a traditional camera does

Caltech engineers have developed an ultra-thin, lens-less camera design which can do everything a traditional camera can while still fitting in your pocket.

Camera Lens.

Image credits Rudy and Peter Skitterians.

Traditional cameras can be designed to be pretty small — such as the ones in your webcam or telephone — but because of the way they’re designed, they can’t ever really be completely flat. These devices rely on arrays of lenses to bend and direct incoming light onto a film or optical sensor where it’s recorded, and the lenses have to be a certain size, shape, and distance away from their neighbors to work properly — so they need to be 3D.

This is a problem if you’re trying to design a high-fidelity camera that fits in your pocket. So a team of engineers at Caltech has worked around the issue by doing away with the lens altogether and replacing it with an ultra-thin optical phased array (OPA).

Light bending is so last year

OPAs do the same job as a lens, but instead of using glass to bend light, they use processors to crunch data. They’re large arrays of light sensors, each of which can digitally apply a precise time delay (called a phase shift) to incoming light, allowing the camera to focus on different objects or look in different directions.

The OPA works like a phased array in reverse. Phased arrays are large emitter arrays, mostly used in wireless communication and radar, that send the same signal out through each emitter. Because of the emitters’ positions relative to one another, the signals amplify each other in one direction and cancel out everywhere else, essentially creating a signal ‘laser beam’. The OPA does the same with incoming light, amplifying the signal arriving from one direction while canceling out what the array receives from everywhere else.
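
The underlying math is standard phased-array beamforming: give each element a phase shift proportional to its position, and signals from one chosen direction add coherently while the rest cancel. A small numpy sketch of a linear array’s response (an illustration of the principle, not Caltech’s design):

```python
import numpy as np

def array_response(n=8, spacing=0.5, steer_deg=20.0):
    """Response of an n-element linear array (element spacing in wavelengths)
    with phase shifts chosen so that the steer_deg direction adds in phase."""
    look_deg = np.linspace(-90, 90, 721)
    k = 2 * np.pi * spacing
    idx = np.arange(n)
    # Per-element phase shifts: a linear ramp that steers the beam.
    weights = np.exp(-1j * k * idx * np.sin(np.radians(steer_deg)))
    # Relative phases of a plane wave arriving from each candidate direction.
    phases = np.exp(1j * k * np.outer(np.sin(np.radians(look_deg)), idx))
    return look_deg, np.abs(phases @ weights) / n

angles, gain = array_response()
print(f"Peak response at {angles[np.argmax(gain)]:.1f} degrees")  # ~20.0
```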

“What the camera does is similar to looking through a thin straw and scanning it across the field of view. We can form an image at an incredibly fast speed by manipulating the light instead of moving a mechanical object,” says graduate student Reza Fatemi, lead author of the paper.

“Here, like most other things in life, timing is everything. With our new system, you can selectively look in a desired direction and at a very small part of the picture in front of you at any given time, by controlling the timing with femto-second—quadrillionth of a second—precision,” says principal investigator and Bren Professor of Electrical Engineering and Medical Engineering in the Division of Engineering and Applied Science at Caltech Ali Hajimiri.

The camera currently uses an array of just 64 light receivers (in an 8×8 grid), so the resulting image has a pretty low resolution. But it’s only intended as a proof of concept — and it works — meaning that it’s just an issue of scaling it up. The layer-thin camera primarily uses silicon photonics to emulate the lens and sensor of a digital camera, so it should be cheaper as well as thinner than its digital counterparts.

Photographers will be happy to hear that Caltech’s layer-camera can emulate anything a regular lens is capable of, only much faster — for example, Hajimiri says it can switch from a fish-eye to a telephoto lens instantaneously, just by tweaking the incoming light. Smartphone enthusiasts everywhere will be delighted that such cameras will allow devices to become thinner than ever before.

Moving from the very small to the very big, the 2D camera could allow for massive but very light and flat telescopes to be built on the ground or in space, allowing far better control than today’s lensed telescopes and dramatically reducing their maintenance and running costs. Finally, the tech could change how we think about cameras from the ground up, by allowing whole new classes of paper-thin, inexpensive devices such as wallpaper cameras, even wearable ones.

The team now plans to work on scaling up the camera by designing chips that enable much larger receivers with higher resolution and sensitivity.

The paper, “An 8X8 Heterodyne Lens-less OPA Camera”, was presented at the Optical Society of America’s (OSA) Conference on Lasers and Electro-Optics (CLEO) and published online by the OSA in the OSA Technical Digest in March 2017.

This camera can see around corners in real time

The future is now – researchers at the Heriot-Watt University in Edinburgh, Scotland have developed a camera that can see around corners and track movements in real time.

(Photo: NPG Press | YouTube)

The camera uses an already-developed technique called echo mapping – more or less the same thing called “echolocation” in the natural world. Echolocating animals emit calls out to the environment and listen to the echoes of those calls that return from various objects near them, using these echoes to locate and identify the objects. In this case, the camera emits short pulses of laser light at the floor in front of a wall; the laser bounces off the walls, through the room, and ultimately back to the camera.

That laser, in fact, fires as many as 67 million times every second, offering a huge amount of information to the camera extremely quickly. It’s not the first time something like this has been developed, but it is the first time it works in real time, which makes it much more interesting.
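
The distance arithmetic behind all of this is plain time-of-flight: light covers the out-and-back path, so the distance to a reflector is c·t/2. For example:

```python
C = 299_792_458  # speed of light, m/s

def echo_distance(round_trip_seconds):
    """Distance to a reflecting surface, given the pulse's round-trip time."""
    return C * round_trip_seconds / 2

# A pulse returning after 10 nanoseconds bounced off something ~1.5 m away.
print(f"{echo_distance(10e-9):.2f} m")
```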

“This could be incredibly helpful for [computer assisted] vehicles to avoid collisions around sharp turns … or for emergency responders looking around blind corners in dangerous situations,” said Genevieve Gariepy, co-lead researcher on the project.

So far, the tests have been carried out successfully: the camera was able to detect one-foot-tall objects, detect multiple objects at the same time, pinpoint movement to within a centimeter or two, and even estimate objects’ speeds. Check out the video below to see it in action.

New camera for ultrafast photography shoots one hundred billion frames per second

High-speed photography is no longer a new thing… but then again, it depends on what you mean by high-speed photography; you likely don’t mean one hundred billion frames per second (100,000,000,000 fps) – but that’s exactly what Liang Gao, Assistant Professor at Stony Brook University, means. He and his team have developed the world’s fastest receive-only 2-D camera.

Reflection of a laser pulse. Credits: Liang et al., 2014. Note: ps stands for picosecond, one trillionth of a second.

Using the Washington University technique, called compressed ultrafast photography (CUP), Wang and his colleagues have made movies of things we could previously only imagine: laser pulse reflection, refraction, faster-than-light propagation of what is called non-information, and photon racing in two media. You can see all of these here.

As a matter of fact, the technology is so far ahead of its time that it brings problems of its own.

“For the first time, humans can see light pulses on the fly,” Wang says. “Because this technique advances the imaging frame rate by orders of magnitude, we now enter a new regime to open up new visions. Each new technique, especially one of a quantum leap forward, is always followed by a number of new discoveries. It’s our hope that CUP will enable new discoveries in science — ones that we can’t even anticipate yet.”

Refraction of laser pulse. Credits: Liang et al., 2014

Of course, the camera doesn’t look like your average Canon or Nikon – it’s actually a series of devices envisioned to work with high-powered microscopes and telescopes to capture dynamic natural and physical phenomena. The raw data is gathered, sent to a computer, and only there does the image form – this is called computational imaging.

“These ultrafast cameras have the potential to greatly enhance our understanding of very fast biological interactions and chemical processes and allow us to build better models of complex, dynamical systems,” said Richard Conroy, PhD, program director of optical imaging at the National Institute of Biomedical Imaging and Bioengineering.

Indeed, aside from being incredibly cool, this camera has many potential applications; the most obvious ones are in biomedicine, which is actually what the team had in mind. For example, scientists could detect extremely subtle changes in cellular environmental conditions like pH or oxygen pressure. The technique could also be applied in astronomy, where scientists could analyze the temporal activity of a supernova that occurred many light-years away, and in forensics, for bullet trajectory analysis.

Speed of a laser pulse in different media. Credits: Liang et al.

“Combine CUP imaging with the Hubble Telescope, and we will have both the sharpest spatial resolution of the Hubble and the highest temporal resolution with CUP,” Wang says. “That combination is bound to discover new science.”

Another special area of application could be fluorescence – the emission of light by a substance that has previously absorbed light; one of the movies researchers published shows a green excitation light pulsing toward fluorescent molecules on the right where the green converts to red, which is the fluorescence. Wang explains why this is important:

Fluorescence excitation and emission. Credits: Liang et al, 2014.

“Fluorescence is an important aspect of biological technologies,” he says. “We can use CUP to image the lifetimes of various fluorophores, including fluorescent proteins, at light speed.”

Journal Reference:

  1. Liang Gao, Jinyang Liang, Chiye Li, Lihong V. Wang. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature, 2014; 516 (7529): 74. DOI: 10.1038/nature14005

Software makes phone pics clearer and sharper without changing hardware

Software that corrects blur makes photos taken with the same smartphone a lot clearer and sharper. The technology allows lens designs to be less complex, smaller, lighter, and cheaper. Image: Algolux

There aren’t many people who would have imagined that in only a couple of years we’d see smartphones with 40-megapixel cameras. Amazing as that may sound, manufacturers are nearing a standstill as far as optics miniaturization is concerned, and even so, high-end camera phones don’t come near the quality of dedicated optical hardware. New software developed by a company called Algolux is set to recover some of the lost ground by correcting optical aberrations, in the process making photos taken by your smartphone or tablet a lot sharper and clearer without any hardware modifications.


Image: Algolux

 

Algolux Virtual Lens corrects optical aberrations through software, for sharper photos, while Algolux Virtual IS corrects the motion blur and shutter shake that creep in under low-light conditions. Virtual Lens takes care of image quality; Virtual IS takes care of image stabilization. All in all, the company has software and computational imaging techniques that correct for blurring, distortion, and other aberrations.

“We are currently focusing on smartphones and tablets, a fast-growing market where cameras and computational power are tightly intertwined. As smartphones attain a certain level of parity across vendors, camera quality and device design have become very strong differentiators.” said the team.

The algorithms will be especially useful for low-end phones; it’s enough to see the sample before-and-after pics to understand what I mean. For the smartphone industry this should be a fantastic addition, one that will allow manufacturers to keep lens design simple by substituting smart software for complex lens systems. This means cheaper, better, and smaller phone cameras.
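
Algolux hasn’t published its algorithms, but the broad family they belong to, deconvolution against a calibrated lens blur kernel, is standard fare in computational imaging. Here’s a toy example with scikit-image’s Wiener filter, assuming the lens’s point spread function is known:

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.restoration import wiener

image = img_as_float(data.camera())  # a standard grayscale test image

# Simulate lens blur with a known 5x5 point spread function (PSF), plus noise.
psf = np.ones((5, 5)) / 25
blurred = convolve2d(image, psf, mode="same", boundary="symm")
blurred += 0.01 * np.random.standard_normal(blurred.shape)

# Deconvolve: recover sharpness in software instead of with better glass.
deblurred = wiener(blurred, psf, balance=0.1)
```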


How IP cameras work: the basics of modern surveillance

In an ever more crowded and complex world, people are more aware of the need for security, both in business settings and at home.  The need for alarm systems, limited access, extra locks, and passwords are all common these days.  Surveillance via a system of digital cameras is also gaining popularity, and such systems can be surprisingly affordable for those on a tight budget.  Such surveillance commonly uses internet protocol cameras, also known as IP cameras, to effectively monitor important locations in a home or office.   While it is standard in many commercial and industrial settings, these cameras can easily be used for home surveillance too.

A Digital Generation of Cameras

Cutting-edge digital surveillance cameras have surpassed the old model of closed-circuit television, seen frequently in the corner of the room in minimarts and banks. These new digital cameras use a computer network to both send and receive information, so the network manager does not need to be in one place: the cameras can be accessed over the internet. If a building has wi-fi capability, then the system can be accessed anywhere, which helps efficiency. For example, it allows security officers to patrol the second floor of a store and still access a live feed of the first floor. The cameras are smaller, the visual resolution is better, and they even offer HD capability.
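
In practice, most IP cameras expose their feed over a standard network protocol such as RTSP, so generic software can display it. Here’s a minimal viewer using OpenCV; the URL, username, and password are placeholders you’d replace with your own camera’s values:

```python
import cv2

# Placeholder URL and credentials -- check your camera's manual for the real path.
STREAM_URL = "rtsp://user:password@192.168.1.64:554/stream1"

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("Could not connect to the camera stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break                       # stream dropped or ended
    cv2.imshow("IP camera", frame)  # live view on any machine on the network
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```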


Special Features

New digital surveillance cameras function like webcams, but they can do more, too.  For instance, they can be controlled remotely and repositioned for different usage.  This might come into play with different uses of a space.  For example, if a politician is giving a speech in the lobby of a hotel, the camera could be pointed at a podium rather than on the entry doors.  Another great feature is offered on cameras set up to respond to movement, sound, or heat.  When nothing is going on, there is no wasted recording, but as soon as there is action, the camera goes live.  If there is a questionable incident, this makes it much easier to locate the desired footage.  This saves time and thus money.   These cameras can also be used to safely communicate over distance, as when a gas station attendant in a security cage safely communicates with a customer having a problem at the pump.

Home Security Applications

IP cameras can be used at home for personal use, whether it’s monitoring the liquor cabinet or the front porch. Their uses are nearly endless. Since some cameras are only a few inches long, they are easy to place in many spots: on a shelf, beside a computer, on a counter, or even a windowsill. The latter may be especially appropriate if there has been a rash of thefts in the area. Parents with young children need to get out on the town once in a while, and they will often hire a nanny or babysitter. That’s a perfect time to use a small camera for home surveillance. If anything goes wrong, the parents can watch the footage and quickly get to the root of the problem. When a teenage girl starts dating a boy the family doesn’t know, her parents may have some concerns. While they may not want to embarrass either teenager with their presence, they may want to have a hidden camera in the living room or a bedroom, just in case.


Are you ready for the Gigapixel age? Researchers build 50 Gigapixel camera


If you feel very proud of your iPhone’s 8-megapixel camera or your high-resolution DSLR, you might want to consider what a camera capable of taking photos at gigapixel resolution implies. Researchers at Duke University and the University of Arizona thought this through and managed to devise a 50-gigapixel camera. Here’s Paris in 26 gigapixels, just so you can form an idea.

The concept behind the researchers’ incredible camera is extremely simple, quite Lego-like: stack 98 tiny cameras in a housing and sync them such that they form one giant camera. Of course, the scientists had to overcome a number of challenges, which, surprisingly, were related more to computing than optics.

“Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements,” Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona, commented in a release by the institution.

He continues: “A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations. Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
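
On the software side, merging overlapping microcamera tiles into one frame is an image-stitching problem. As a rough illustration of that step (OpenCV’s general-purpose stitcher, not the Duke/Arizona pipeline):

```python
import cv2

def stitch_tiles(image_paths):
    """Merge overlapping tile images into a single mosaic."""
    tiles = [cv2.imread(path) for path in image_paths]
    stitcher = cv2.Stitcher_create()
    status, mosaic = stitcher.stitch(tiles)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"Stitching failed with status {status}")
    return mosaic

# e.g. mosaic = stitch_tiles(["tile_00.jpg", "tile_01.jpg", "tile_02.jpg"])
```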


Yes, I know, you want one – unfortunately, the prototype is 2.5 feet square and 20 inches deep. Not your typical hiking camera gear; however, the scientists feel confident that if camera technology and electronics continue to miniaturize at their current pace, consumer-grade gigapixel cameras will hit the market soon enough.

via PopSci


Ultra-speed camera developed at MIT can “see” around corners


Researchers at MIT have developed a revolutionary new technique in which they re-purposed the trillion-frames-per-second camera we told you about a while ago and used it to capture 3-D images of a wooden figurine and of foam cutouts outside of the camera’s line of sight. Essentially, the camera could see around corners by transmitting light and then reading it back as it bounced off the walls.

The central piece of the scientists’ experimental rig is the femtosecond laser, a device capable of emitting bursts of light so short that their duration is measured in quadrillionths of a second. The system fires femtosecond bursts of light toward the wall facing the obscured object, in our case a wooden figurine. The light is reflected into the part of the room hidden from the camera, where it bounces back and forth for a while until it returns toward the camera and hits a detector. Basically, this works like a periscope, except instead of mirrors, the device makes use of any kind of surface.

Since the bursts are so short, the device can compute how far they’ve traveled by measuring the time it takes them to reach the detector. The procedure is repeated several times, while light is bounced on various different points of the wall such that it may enter the room at different angles – eventually the room geometry is pieced together.
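
Conceptually, each laser position turns a measured travel time into a distance constraint on the hidden object, and several such constraints pin down its position. Here’s a heavily simplified toy version, assuming each pulse returns via the same wall point it struck (the real MIT reconstruction is far more involved):

```python
import numpy as np
from scipy.optimize import least_squares

# Known points on the visible wall where the laser bounced (meters, 2-D).
wall_points = np.array([[0.0, 0.0], [0.4, 0.0], [0.8, 0.0], [1.2, 0.0]])

# Synthetic "measurements": total wall-object-wall path lengths for a hidden
# object at (0.6, 0.9), as if derived from the pulses' travel times.
true_pos = np.array([0.6, 0.9])
measured = 2 * np.linalg.norm(wall_points - true_pos, axis=1)

def residuals(guess):
    d = np.linalg.norm(wall_points - guess, axis=1)
    return 2 * d - measured  # zero when guess matches the hidden object

# The starting guess's positive y rules out the mirror solution below the wall.
fit = least_squares(residuals, x0=[0.5, 0.5])
print(f"Recovered position: {fit.x.round(3)}")  # ~[0.6, 0.9]
```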

Ramesh Raskar, head of the Camera Culture Research Group at the MIT Media Lab that conducted the study, said, “We are all familiar with sound echoes, but we can also exploit echoes of light.”

To interpret and knit multiple femtosecond-laser measurements into visual images, a complicated mathematical algorithm had to be developed. A particular challenge the researchers faced was how to understand information from photons that had traveled the same distance and hit the camera lens at the same position, after having bounced off different parts of the obscured scene.

“The computer overcomes this complication by comparing images generated from different laser positions, allowing likely positions for the object to be estimated,” the team said.

The process currently takes several minutes to produce an image, though the scientists believe they will eventually be able to get this down to a mere 10 seconds. They also hope to improve the quality of the images the system produces and to enable it to handle visual scenes with a lot more clutter. Applications include emergency response imaging systems that can evaluate danger zones and save lives, or unmanned vehicle navigation systems that steer around obstructed corners.


Their findings will be reported in a paper out this week in the journal Nature Communications. 

source: MIT

Shocking news: implanting a camera in your head is bad for your health

Artists suffer for their art – it’s a well-known fact. But for an artist at New York University, things escalated to a whole new level: the camera he had installed in his head was rejected by his body, causing some serious health issues.

Let’s rewind a little; back in November, Wafaa Bilal, an NYU photography professor tried to do something nobody else had done before (and it’s easy to understand why). He went to a tattoo shop in Los Angeles and had a titanium base inserted behind the skin on his head. He attached a camera to it that took pictures every minute, pictures that were available for everybody to see on his website.

The first question that pops up, of course, is why would somebody insert a camera in his head? According to the Iraqi artist, it’s the best way of keeping a record of his past, which was necessary since he became a refugee in 1991. However, he does have another, more profound reason:

“Most of the time, we don’t live in the places we live in,” he said. “We don’t exist in the city we exist in. Perhaps physically we exist, but mentally we are somewhere else.” Yet another explanation: The project points to the future—a future where, as Mr. Bilal sees it, communication devices will become part of our bodies.

But his innovative project hit a roadblock when his body started rejecting the camera; steroids and medication didn’t do him much good, so the camera had to be removed. He is determined to continue, however, with a camera tied to his head instead.

Tomorrow’s camera is flash free, regardless of light conditions

As any amateur photographer can tell you, taking a clear picture requires a good light source; in poor light conditions, the solution has been the intense flash. However, it has some obvious disadvantages.

Still, computer scientist Rob Fergus started wondering whether we actually need such an intense light source, or whether we could develop some sort of invisible flash that would solve the inconveniences that come with the traditional camera flash.

F is a multi-spectral flash shot, A uses ambient lighting, which is far lower than it should be, R is a combination of the two, and L is a reference long-exposure shot

So one year later, the end result was a camera that emits and records light outside the visible spectrum. Practically, the prototype emits a flash, you just don’t see it, and the photographs are as good as old-school flash ones. How does it work? Usually, cameras have a filter that blocks any light from the infrared spectrum; for this innovative camera, Fergus replaced that filter. The UV, however, was a little trickier: his camera could already detect UV, but sending it out was a real challenge. So he enlisted the help of hobbyists who use UV photography to reveal hidden patterns on flowers: landing strips for insects, pollinators, etc.
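
The fusion step can be approximated with simple detail transfer: take the colors from the noisy ambient exposure and the fine detail from the sharp invisible-flash frame. A crude sketch of that idea (Fergus’s actual pipeline is more sophisticated):

```python
import cv2
import numpy as np

def dark_flash_fuse(ambient_bgr, ir_gray, sigma=5):
    """Fuse a noisy ambient color photo with a sharp infrared 'dark flash'
    frame: low-frequency color from the ambient shot, fine detail from IR."""
    ambient = ambient_bgr.astype(np.float32)
    ir = ir_gray.astype(np.float32) + 1.0        # avoid division by zero

    ambient_base = cv2.GaussianBlur(ambient, (0, 0), sigma)  # smooths away noise
    ir_base = cv2.GaussianBlur(ir, (0, 0), sigma)

    detail = (ir / ir_base)[..., None]           # multiplicative fine detail
    return np.clip(ambient_base * detail, 0, 255).astype(np.uint8)
```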

So the camera is done, but is it any good? Well, it most definitely is, as you can see for yourself.

“Most pictures you take with a flash look quite crappy,” says Ankit Mohan, an expert in camera technology at the Massachusetts Institute of Technology. “They look kind of flat, you get the red-eye effect, and one part of the scene is always much brighter than another part. But the problem of capturing a picture with no flash is that you don’t get detail. By combining the two you get the best of both worlds.”

Beyond the comfort advantages it provides, this development could also prove quite useful in certain fields.

Cramer Gallimore, a professional photographer based in North Carolina, believes dark-flash photography has great potential. “You might be able to take high-quality photographs of wildlife without disturbing them,” Gallimore says, “and for forensic photography, it would be very useful to have technology like this that could switch between infrared technology and visible light photography to record certain traces of human activity at a crime scene.”

Source: Popular Mechanics