Tag Archives: mit

MIT develops new, cheap, fast Covid-19 test, awaits FDA approval

A new startup founded by members of the Massachusetts Institute of Technology (MIT) is preparing to submit a new, fast Covid-19 test to the FDA for “emergency use authorization”.

SARS-CoV-2 as seen under the transmission electron microscope.
Image credits NIH Image Gallery.

The new test is based on technology developed at MIT’s Institute for Medical Engineering and Science (IMES), reports MIT News. It can provide reliable diagnostics in about 20 minutes, which is extremely fast. The E25Bio startup plans to make the test — which works similarly to a pregnancy test — commercially available as soon as possible in order to help fight the current outbreak under the FDA’s “emergency use authorization” model.

Speed testing

“Our hope is that, similar to other tests that we’ve developed, this will be usable on the day that symptoms develop,” says Lee Gehrke, the Hermann L.F. von Helmholtz Professor at IMES, who led the development of the test.

“We don’t have to wait for antibodies to the virus to come up.”

The team behind the new test has years of experience working on similar diagnostic devices. Using lateral flow technology — the same approach that underpins pregnancy tests, but aimed at identifying viral proteins — they have created tests for Ebola, dengue fever, and Zika virus, among other infectious diseases.

The test itself consists of small strips of paper coated with antibodies that bind to a specific viral protein. A sample harvested from the patient is mixed into a solution of gold nanoparticles bound to a second antibody, and the strip is dipped into it. If the virus is present, its marker protein attaches both to the antibodies on the paper and to the nanoparticle-bound antibodies in the solution, producing a colored line on the strip. The whole process takes around 20 minutes, the team explains.

There are two types of Covid-19 tests available so far. One involves testing blood for antibodies against the virus — which can be unreliable, as antibodies only become detectable a few days after symptoms set in. The other checks for viral RNA in saliva or mucus samples; it is more reliable and can detect the virus earlier in the infection, but relies on the polymerase chain reaction (PCR), a technique that ‘amplifies’ traces of genetic material but takes several hours and specialized equipment to perform.

E25Bio is awaiting FDA approval of the test so that they may begin trials using patient samples. If that proves successful, the next step would involve using it for clinical diagnosis.

One advantage of the test, the team notes, is that it is simple and cheap to produce, making it ideal for quick manufacturing in large quantities.


MIT’s newest, diminutive robot can do backflips and outrun you in every single way

MIT’s newest robot is cute, tiny, modular, and could run rings around you.


*robotic cheetah noises*.
Image credits Bryce Vickmark.

Researchers at MIT have developed a ‘mini cheetah’ robot whose range of motion, they boast, rivals that of a champion gymnast. This four-legged robot (hardly more than a powerpack on legs) can move, bend, and swing its legs through a wide range of motion, which allows it to handle uneven terrain about twice as fast as a human walks, and even walk upside-down. The robot, its developers add, is also “virtually indestructible”, at least as far as falling or slamming into stuff is concerned.

Skynet’s newest pet

The robot weighs in at a paltry 20 pounds, but don’t let its diminutive stature fool you. The mini cheetah can pull off some really impressive tricks, including a 360-degree backflip from a standing position. If kicked to the ground, or if it falls flat, the robot can quickly recover with what MIT’s press release describes as a “swift, kung-fu-like swing of its elbows.” Apparently, nobody at MIT has ever seen Terminator.

But, the mini cheetah isn’t just about daredevil moves — it’s also designed to be highly modular and dirt cheap (for a robot). Each of its four limbs is powered by three identical electric motors (one for each axis) that the team developed solely from off-the-shelf parts. Each motor (as well as most other parts) can be easily replaced in case of damage.

“You could put these parts together, almost like Legos,” says lead developer Benjamin Katz, a technical associate in MIT’s Department of Mechanical Engineering.

“A big part of why we built this robot is that it makes it so easy to experiment and just try crazy things, because the robot is super robust and doesn’t break easily, and if it does break, it’s easy and not very expensive to fix.”

The mini cheetah draws heavily from its much larger predecessor, Cheetah 3. The team specifically aimed to make it smaller, easier to repair, more dynamic, and cheaper, creating a platform on which more researchers can test movement algorithms. The modular layout also makes it highly customizable. In Cheetah 3, Katz explains, you had to “do a ton of redesign” to change or install any parts, since “everything is super integrated”. In the mini cheetah, installing a new arm is as simple as adding some more motors.

“Eventually, I’m hoping we could have a robotic dog race through an obstacle course, where each team controls a mini cheetah with different algorithms, and we can see which strategy is more effective. That’s how you accelerate research.”

Each of the robot’s 12 motors is about the size of a Mason jar lid and comes with a gearbox providing a 6:1 gear reduction, letting the rotor deliver six times the torque it normally would — at one-sixth the speed. A sensor continuously measures the angle and orientation of each motor and its associated limb, allowing the robot to keep tabs on its own shape.
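
The gearbox trade-off is plain arithmetic: an ideal gear train multiplies torque by the same factor it divides speed. A two-line Python sketch (the numbers are illustrative, not the mini cheetah’s actual motor specs):

GEAR_RATIO = 6.0   # 6:1 reduction

def limb_torque(motor_torque_nm):
    return motor_torque_nm * GEAR_RATIO    # six times the torque...

def limb_speed(motor_speed_rpm):
    return motor_speed_rpm / GEAR_RATIO    # ...at one-sixth the speed

print(limb_torque(2.5), limb_speed(1800.0))   # 2.5 N*m -> 15.0 N*m; 1800 rpm -> 300 rpm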

It’s also freaking adorable:

This lightweight, high-torque, low-inertia design allows the robot to execute fast, dynamic maneuvers and make high-force impacts on the ground without breaking any gears or limbs. The team tested their cheetah through the hallways of MIT’s Pappalardo Lab and along the slightly uneven ground of Killian Court. In both cases, it managed to move at around 5 miles (8 km) per hour. Your average human, for context, walks at about 3 miles per hour.

“The rate at which it can change forces on the ground is really fast,” Katz says. “When it’s running, its feet are only on the ground for something like 150 milliseconds at a time, during which a computer tells it to increase the force on the foot, then change it to balance, and then decrease that force really fast to lift up. So it can do really dynamic stuff, like jump in the air with every step, or run with two feet on the ground at a time. Most robots aren’t capable of doing this, so they move much slower.”

They also wrote special code to direct the robot to twist and stretch, showcasing its range of motion and ability to rotate its limbs and joints while maintaining balance. The robot can also recover from unexpected impacts, and the team programmed it to automatically shut down when kicked to the ground. “It assumes something terrible has gone wrong,” Katz explains, “so it just turns off, and all the legs fly wherever they go.” When given a command to restart, the bot determines its orientation and performs a preprogrammed maneuver to pop itself back on all fours.

The team, funnily enough, also put a lot of effort into programming the bot to perform backflips.

“The first time we tried it, it miraculously worked,” Katz says.

“This is super exciting,” adds Sangbae Kim, the MIT mechanical engineering professor whose lab built the robot. “Imagine Cheetah 3 doing a backflip — it would crash and probably destroy the treadmill. We could do this with the mini cheetah on a desktop.”

The team is building about 10 more mini cheetahs, which they plan to loan to other research groups. They’re also looking into instilling a (fittingly) very cat-like ability in their mini cheetahs, as well:

“We’re working now on a landing controller, the idea being that I want to be able to pick up the robot and toss it, and just have it land on its feet,” Katz says. “Say you wanted to throw the robot into the window of a building and have it go explore inside the building. You could do that.”

I have to admit, the idea of casually launching a robot out the window (there’s a word for that, by the way: defenestration) with complete disregard, and having it come back a few minutes later with its task complete, is hilarious to me. And probably why they will, eventually, learn to hate us.

Still, doom at the hands of our own creations is a ways away, and not completely certain. Until then, the team will be presenting the mini cheetah’s design at the International Conference on Robotics and Automation, in May. No word on whether they’ll be giving these robots out at the conference, but if they are, I’m calling major dibs.


Passive sun-powered device turns water into superheated steam

The device mounted on the roof of an MIT building. Credit: Thomas Cooper et al.

MIT engineers have developed a convenient, lightweight device that uses energy from the sun to turn water into superheated steam (steam hotter than 100°C). The steam pumped out by the system can be used to sterilize medical equipment, or for cooking and cleaning in remote locations or poor regions with no access to electricity. A scaled-up version could also be useful in industrial settings, where the steam could be collected and condensed to produce desalinated, distilled drinking water.

The device is about the size and thickness of a small digital tablet or Kindle. Its structure is like a sandwich: a top layer made of a metal-ceramic composite that efficiently absorbs heat from the sun, and a bottom layer that emits that heat to the water below. Once the water starts boiling, the steam rises back into the device, where it is funneled through the middle layer — a foam-like material that heats the steam further above the boiling point — and finally pumped out through a single tube.

“It’s a completely passive system — you just leave it outside to absorb sunlight,” said Thomas Cooper, assistant professor of mechanical engineering at York University, who led the work as a postdoc at MIT. “You could scale this up to something that could be used in remote climates to generate enough drinking water for a family, or sterilize equipment for one operating room.”

Previously, the same researchers demonstrated an earlier version of their passive solar heater — a graphite-covered carbon foam that floats on water. Its main drawback, however, was that it would eventually become contaminated with salt and other impurities in water.

The MIT engineers solved this problem by suspending the device above the water and using more efficient heat-absorbing materials.

“It’s this clever engineering of different materials and how they’re arranged that allows us to achieve reasonably high efficiencies with this non-contact arrangement,” Cooper said.

The researchers first tested the passive water heater in the lab, using a solar simulator instead of natural sunlight. Under conditions mimicking a clear, sunny day, the water crossed the boiling point and produced superheated steam at 122°C. The device was also tested in real-life, ambient conditions on the roof of MIT’s Building 1. To increase the sun’s intensity, the team set up a simple solar concentrator — a curved mirror that collects and redirects sunlight onto the device. This setup produced steam at 146°C over the course of 3.5 hours.

Later, the researchers showed how their device produced steam from seawater and how this steam was collected to produce pure, distilled water.

“This design really solves the fouling problem and the steam collection problem,” said Gang Chen, the Carl Richard Soderberg Professor of Power Engineering at MIT. “Now we’re looking to make this more efficient and improve the system. There are different opportunities, and we’re looking at what are the best options to pursue.”

The findings were published in the journal Nature Communications.


New role-playing game engages people from all backgrounds with climate action

Climate change is no joke — but it can be a game.

Climate change protest sign.

Image via Maxpixel / Public Domain.

More specifically, it can be the subject of an MIT Sloan role-playing video game. Dubbed World Climate Simulation, the game puts players in the shoes of UN members partaking in climate talks. Its developers report that over four-fifths of participants who played the game showed an increased desire to combat climate change, regardless of their political beliefs.

Climate UN-change

“The big question for climate change communication is: how can we build the knowledge and emotions that drive informed action without real-life experience which, in the case of climate change, will only come too late?” asks Prof. Juliette Rooney Varga, lead researcher of the study and Director of the University of Massachusetts Lowell Climate Change Initiative.

The team’s approach revolved around three elements: “information grounded in solid science, an experience that helps people feel for themselves on their own terms and social interaction arising from conversation with their peers,” explains co-author Andrew Jones of Climate Interactive.

In the game, developed countries pledge money through the Green Climate Fund to help developing nations cut emissions and adapt to climate change. The game’s core mechanics are handled by a real-life climate policy computer model known as C-ROADS, which has been used to guide actual UN climate negotiations as a powerful simulator of expected outcomes. Players’ choices are run through C-ROADS, which gives immediate feedback on how each decision would ultimately affect the climate.
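
To get a feel for that pledges-to-warming feedback loop, here is a deliberately crude toy model in Python. It is not C-ROADS — just a linear rule of thumb (roughly 0.45°C of warming per 1,000 gigatonnes of cumulative CO2, on top of warming already locked in), with made-up pledge numbers:

def expected_warming_2100(avg_annual_emissions_gtco2, years=80,
                          warming_to_date=1.1, degc_per_1000_gtco2=0.45):
    # Warming scales roughly linearly with cumulative CO2 emissions.
    cumulative_gtco2 = avg_annual_emissions_gtco2 * years
    return warming_to_date + cumulative_gtco2 / 1000.0 * degc_per_1000_gtco2

print(round(expected_warming_2100(40.0), 1))   # lax pledges (~today's rate): ~2.5 degC
print(round(expected_warming_2100(15.0), 1))   # aggressive cuts: ~1.6 degC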

The group worked with 2,000 participants of various ages and socioeconomic backgrounds, recruited from “eight different countries across four continents”, explains an MIT Sloan press release. Through the game, the team tracked each player’s beliefs about climate change, their emotional responses to its effects, and their willingness to address its main drivers. By the end of the play trials, participants showed greater urgency in tackling the issue, the team reports.


Post-survey responses to questions regarding (A) how engaging the World Climate simulation was as a learning experience, (B) the effects the simulation had on motivation to address climate change and (C) desire to learn more about climate change science, solutions, politics, economics, and policies.
Image credits J.N. Rooney-Varga et al., 2018, PLOS One.

The idea behind the game was to bridge the huge divides that politics imparts on the climate discussion, the team explains. By putting people in charge of tackling the issue and letting them see how their own lives would be impacted, the game aims to engage those who aren’t very concerned about climate action.

Hands-on experience

The team reports that players go headlong into the first round of climate negotiations, usually being quite lax in the changes they call for. However, after C-ROADS shows them how the outcome of these talks would affect their health, prosperity, and welfare, they generally go into the following rounds with a much more aggressive approach to achieving emissions cuts.

“The first round of negotiations ends with a plenary session in which a representative from each delegation delivers a short speech describing their pledge and negotiating position, including concessions they seek from the other parties,” the paper explains.

“In our experience, the first round of pledges always falls short of the emissions reductions required to limit expected warming to 2 °C and are often qualitatively similar to the actual pledges that emerged from the Paris Agreement, leading to warming of approximately 3.3 °C by 2100.”

“Participants often express surprise that the impact of their pledges is not greater and ask many questions about the structure and dynamics of the climate system as they seek to understand why the simulation results differ from their expectations.”

Perhaps more importantly, participants also came away more hopeful about the eventual success of environmental action, and with a greater desire to understand climate science and the impacts of climate change. Urgency is key to actually making the societal, economic, and political changes required to combat climate change; the other two traits will help keep our eyes on the goal during difficult times and limit the effect of mumbo-jumbo à la ‘clean coal‘.

“It was this increased sense of urgency, not knowledge, that was key to sparking motivation to act,” said Rooney Varga.

In the end, the team hopes to push environmental talks to the forefront of national and international dialogue and policy-making and to take political interest out of climate action.

“Gains were just as strong among American participants who oppose government regulation of free markets – a political ideology that has been linked to climate change denial in the US – suggesting the simulation’s potential to reach across political divides,” the paper reads.

“Research shows that showing people research doesn’t work,” said John Sterman, co-author of the study and professor at MIT’s Sloan School of Management. “World Climate works because it enables people to express their own views, explore their own proposals and thus learn for themselves what the likely impacts will be.”

Schools in France, Germany, and South Korea have adopted World Climate Simulation as an official educational resource, the team adds.

The paper “Combining role-play with interactive simulation to motivate informed climate action: Evidence from the World Climate simulation” has been published in the journal PLOS One.


Fish-like robot might reveal the secret life of ocean wildlife

MIT has designed and built a soft-bodied robot that looks and swims exactly like a fish. It’s so realistic that even real fish in Fiji are falling for it.

This is SoFi, a robotic fish designed at MIT. Credit: MIT.

Designing an agile submersible robot that can mimic the natural motion of aquatic wildlife has always proven challenging. Where others have failed, however, SoFi — short for Soft Robotic Fish — works so well that even the fish swimming in coral reefs around the Pacific island nation of Fiji were duped into thinking it was one of their own.

The robot is 47 centimeters (18 in.) long and can handle depths of up to 18 meters (59 ft.). It uses smooth, undulating motions to swim at speeds of half a body length — roughly 23 centimeters — per second, controlled by a nearby diver using an acoustic communication modem, as described in a recent paper published in Science Robotics.

SoFi’s undulating tail motion is driven by a hydraulic pump that moves water. On the inside, electronics such as a Linux computer, along with a fisheye camera, are stored in watertight compartments in the robot’s head.

The biggest challenge was devising an adjustable buoyancy system to let the bot swim at different depths without floating to the surface or sinking to the seafloor. Communication between the diver’s controller and the robot’s computer was also tricky, since the radio frequencies typically used on land don’t work underwater. The MIT researchers, led by Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory, solved all of these conundrums. They even got creative and used a Nintendo controller to relay ultrasonic instructions to the robot from as far as 10 meters (33 ft.) away.

SoFi anatomy

Credit: Science Robotics.

Robotic fish like SoFi could prove to be essential to understanding and protecting marine life. Human activity and climate change are putting increasing strain on marine wildlife, and scientists often have to get their hands literally wet on site to assess the damage and resilience of ecosystems.

Credit: MIT.

ZME Science often publishes stories about ambitious missions in search of extraterrestrial life on far-off worlds like Mars or Jupiter’s moon Europa. The truth is, though, there’s a whole unexplored, alien-like world waiting to be discovered right in our backyard. Scientists estimate that there are more than 1 million marine species, but only about 250,000 have been formally described in the scientific literature over the centuries — and those figures exclude microbes, which the Census of Marine Life estimates at up to 1 billion kinds. SoFi and the next generation of aquatic bots that follow will help us finally peek into the secret life of underwater creatures.


Novel 3D printing method makes furniture in vats of gel within minutes

MIT and Steelcase researchers have teamed up to revamp 3D printing and throw in a Westworld-esque vibe in the bargain. The new technique injects material into a supportive gel and can print much larger objects than previously possible, in a matter of minutes.

RLP in progress.

Dubbed Rapid Liquid Printing, because MIT probably doesn’t have a naming division, the technique forgoes the layer-by-layer approach of traditional 3D printing and instead injects material directly into a vat of supportive gel. The injection head essentially ‘draws’ the object inside the vat, with the gel providing buoyancy and maintaining the shape of the object while it hardens.

Round printing.

Image via Youtube.

Altering the injection speed and the speed at which the head travels through the gel alters the thickness of the lines laid down by the device, allowing a huge range of shapes to be created.

Printing varying thickness line.

Image via Youtube.
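
That relationship is just volume conservation: the cross-section of the extruded line equals the flow rate divided by the head’s travel speed. A quick Python sketch under the idealized assumption of a circular bead (the numbers are illustrative, not actual process parameters):

import math

# Idealized bead-thickness estimate for gel-suspended extrusion.
# Volume conservation: bead cross-section area = flow rate / head speed.

def bead_diameter_mm(flow_mm3_per_s, head_speed_mm_per_s):
    area_mm2 = flow_mm3_per_s / head_speed_mm_per_s
    return 2.0 * math.sqrt(area_mm2 / math.pi)   # circular-bead assumption

print(round(bead_diameter_mm(100.0, 20.0), 2))   # slow head -> thick line (~2.52 mm)
print(round(bead_diameter_mm(100.0, 80.0), 2))   # fast head -> thin line (~1.26 mm)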

RLP allows much larger objects to be created much faster, using stronger materials than traditional printing methods. The developers — a mixed team of researchers from MIT’s Self-Assembly Lab and furniture manufacturer Steelcase — hope the technology will address what they perceive as the main limitations of traditional 3D printing: slow manufacturing speed compared to conventional processes such as milling or injection molding, the (usually) small scale of printed objects, and the narrow range of usable materials (which are also of comparatively lower quality than other industrial materials).

And it works, on all counts. This cool video the developers put together showcases how RLP can be used to print a whole piece of furniture in a matter of minutes. Check it out:

Unless otherwise specified, image credits go to MIT / Selfassemblylab.

 


Robot see, robot do: MIT software allows you to instruct a robot without having to code

Researchers have put together C-LEARN, a system that should allow anyone to teach their robot any task without having to code.

The robot chef from the Easy Living scene in Horizons at EPCOT Center.
Image credits Sam Howzit / Flickr.

Quasi-intelligent robots are already a part of our lives, and someday soon, their full-fledged robotic offspring will be too. But until (or rather, unless) they reach a level of intelligence where we can teach them verbally, as you would a child, instructing a robot will require you to know how to code. And since coding is complicated — more complicated than just doing the dishes yourself, anyway — it’s unlikely that regular people will have much use for robots.

Unless, of course, we could de-code the process of instructing robots. Which is exactly what roboticists at MIT have done. Called C-LEARN, the system should make the task of instructing your robot as easy as teaching a child. Which is a bit of good-news-bad-news, depending on how you feel about the rise of the machines: good, because we can now have robot friends without learning to code; bad, because technically the bots can use the system to teach one another.

How to train your bot

So, as I’ve said, there are two ways you can go about it. The first is to program the robot, which requires expertise in coding and takes a lot of time. The other is to show the bot what you want it to do — by tugging on its limbs or moving digital representations of them around, or by doing the task yourself and having it imitate you. For us muggles, the latter is the way to go, but it takes a lot of work to teach a machine even simple movements — and then it can only repeat them, not adapt them.

C-LEARN is meant to chart a middle road and address the shortcomings of both methods by arming robots with a knowledge base of simple steps that they can intelligently apply when learning a new task. A human user first helps build up this base by working with the robot. The paper describes how the researchers taught Optimus, a two-armed robot, by using software to simulate the motion of its limbs. Like so:

The researchers demonstrated movements such as grasping the top of a cylinder or the side of a block in different positions, repeating each motion seven times from each position. The motions varied slightly each time, so the robot could look for underlying patterns in them and integrate those patterns into its data bank. If, for example, the simulated grasper always ended up parallel to the object, the robot would note that this relationship is important to the process and constrain its future motions to maintain that parallelism.
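
That pattern-finding step can be pictured as a variance test across the seven demonstrations: any geometric feature that barely changes gets promoted to a constraint. A simplified Python sketch — not the authors’ actual code, and the feature names are hypothetical:

import statistics

# Features that stay nearly constant across demonstrations are treated
# as constraints on future motions; the rest are left free.

def infer_constraints(demos, tolerance=0.05):
    constraints = {}
    for feature in demos[0]:
        values = [demo[feature] for demo in demos]
        if statistics.pstdev(values) < tolerance:
            constraints[feature] = statistics.mean(values)
    return constraints

# Seven simulated grasps: the approach angle varies, parallelism doesn't.
demos = [{"approach_angle": a, "parallelism_error": e}
         for a, e in [(0.52, 0.01), (0.33, 0.02), (0.71, 0.00), (0.45, 0.01),
                      (0.60, 0.02), (0.28, 0.01), (0.39, 0.00)]]
print(infer_constraints(demos))   # -> {'parallelism_error': 0.01}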

By this point, the robot is very similar to a young child “that just knows how to reach for something and grasp it,” according to Claudia Pérez-D’Arpino, the MIT graduate student who developed the system. Starting from this database, though, the robot can learn new, complex tasks from a single demonstration. All you have to do is show it what you want done, then approve or correct its attempt.

Does it work?

Robot and human hands.

To test the system, the researchers taught Optimus four multistep tasks — to pick up a bottle and place it in a bucket, to grab and lift a horizontal tray using both hands, to open a box with one hand and use the other to press a button inside it, and finally to grasp a handled cube with one hand and pull a rod out of it with the other. Optimus was shown how to perform each task once, made 10 attempts at each, and succeeded 37 out of 40 times. Which is pretty good.

The team then went one step further and transferred Optimus’s knowledge base and its understanding of the four tasks to a simulation of Atlas, the famously bullied bot, which managed to complete all four tasks using the data. When the researchers corrupted the data bank by deleting some of the information (such as the constraint to keep the grasper parallel to the object), Atlas failed to perform the tasks. Such a system would allow us to transfer the motion models created by one bot — with thousands of hours of training and experience behind them — to any other robot, anywhere in the world, almost instantly.

D’Arpino is now testing whether having Optimus interact with people for the first time can refine its movement models. Afterward, the team wants to make the robots more flexible in how they apply the rules in their data banks, so that they can adjust their learned behavior to whatever situation they’re faced with.

The goal is to make robots that can perform complex, dangerous, or just plain boring tasks with high precision. Applications could include bomb defusal, disaster relief, high-precision manufacturing, and helping sick people with housework.

The findings will be presented later this month at the IEEE International Conference on Robotics and Automation in Singapore.

You can read the full paper “C-LEARN: Learning Geometric Constraints from Demonstrations for Multi-Step Manipulation in Shared Autonomy” here.

NASA’s morphing wing will make airplanes smoother, more efficient

A new shape-changing wing designed by MIT and NASA engineers could revolutionize the way we design flying vehicles. By twisting and morphing in flight, the “morphing wing” eliminates the need for flaps, ailerons, and winglets, making our planes more efficient and adaptable in the process.

Image credits NASA.

Birds’ wings have long been the envy of the aeronautical industry. While human-built planes may reach higher and fly faster than anything nature has produced, they rely on clunky mechanisms and inflexible wings to stay aloft and maneuver. This hurts their energy (and thus fuel) efficiency and limits both their range of motion and their maneuvering speed. Birds, on the other hand, can effect subtle or dramatic changes to their wings in flight, giving them huge versatility and mobility compared to fixed wings.

So wing shape plays a huge part in determining an aircraft’s flying capabilities, and rigid designs aren’t always the most efficient. NASA and MIT engineers have teamed up to bring some of the flexibility of birds’ wings to airplanes.

“The ability to morph, or change shape, is desirable for a number of reasons in nature or in engineering, such as responding to varying external conditions, improving interaction with other bodies, or maneuvering in various media such as water or air,” the team explains.

They ditched the conventional system and started from scratch, assembling the wing from “a system of tiny, lightweight subunits” that form a mobile frame. These are covered with overlapping, feather-like pieces that create the wing’s surface. The whole frame is built from only eight black, slightly squishy, carbon fiber elements — compared to the millions of plastic, composite, and metal parts that make up a regular wing — covered with the shiny orange surface. Here’s an experimental 5-foot (1.5-meter) model NASA put together:

Image credits NASA / MADCAT.

Each of these eight components has a different stiffness, and the specific way they are interconnected makes the wing tunably flexible. Two small engines are all that’s required to twist the wing, changing the way it cuts through the air.

“One of the things that we’ve been able to show is that this building block approach can actually achieve better strength and stiffness, at very low weights, than any other material that we build with,” says NASA’s Kenny Cheung, one of the leaders of the project.

When the team placed a mock-up with the new wings in the wind tunnel at NASA’s Langley Research Center, Virginia, the dummy plane showed some spectacular aerodynamics.

“We maxed out the wind tunnel’s capacity,” says Cheung.

Airplane wings rely on ailerons to change direction and on flaps to boost lift at take-off and shorten landing distance. But when extended or manipulated, these surfaces create gaps in the wing — disturbing airflow, reducing performance, and generating noise.

“They require complex hydraulic and other actuators that add weight, complexity, and things that can go wrong,” adds Mark Sensmeier, an aerospace engineer at Embry-Riddle Aeronautical University.

The full paper “Digital Morphing Wing: Active Wing Shaping Concept Using Composite Lattice-Based Cellular Structures” has been published in the journal Soft Robotics.

MIT machine makes videos out of still images to predict what happens next

Credit: MIT

When you see an action picture — say, a ball in mid-air or a car driving down a desert highway — your mind is very good at filling in the blanks. It’s a no-brainer that the ball will hit the ground, or that the car will keep driving in the direction it’s facing. For a machine, though, predicting what happens next can be very difficult. In fact, many experts in the field of artificial intelligence think this is one of the missing pieces of the puzzle which, when completed, might usher in the age of thinking machines. Not reactive, calculating machines like we have today — real thinking machines that in many ways are indistinguishable from us.

Researchers at MIT are helping bridge the gap in this field with a novel machine learning algorithm that can create videos out of still images.

“The basic idea behind the approach is to compete two deep networks against each other. One network (“the generator”) tries to generate a synthetic video, and another network (“the discriminator”) tries to discriminate synthetic versus real videos. The generator is trained to fool the discriminator,” the researchers wrote.
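
For the curious, the core of that adversarial setup fits in a few lines of PyTorch. The sketch below is a drastic simplification — tiny fully-connected networks and random tensors stand in for the paper’s spatio-temporal convolutional networks and its two million Flickr videos:

import torch
import torch.nn as nn

# Toy adversarial training loop in the spirit of the MIT video generator.
VIDEO_DIM, NOISE_DIM, BATCH = 256, 64, 32
G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, VIDEO_DIM))
D = nn.Sequential(nn.Linear(VIDEO_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(BATCH, VIDEO_DIM)       # stand-in for real video clips
    fake = G(torch.randn(BATCH, NOISE_DIM))    # "synthetic videos"

    # The discriminator learns to score real clips high, generated clips low.
    loss_d = bce(D(real), torch.ones(BATCH, 1)) + \
             bce(D(fake.detach()), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # The generator is trained to fool the discriminator.
    loss_g = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()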

The system, built around two artificial neural networks, was trained on 2 million videos downloaded from Flickr, sorted into four types of scenes: golf, beach, train, and baby. Based on what it learned from these videos, the machine could then complete a still picture by generating the frames that follow, essentially predicting what happens next (the GIF below). The same machine could also generate new videos that resemble the scene in a still picture (the first GIF in this article).

Credit: MIT

The feat, in itself, is terrifically impressive — after all, it’s all self-generated by a machine. But that’s not to say the neural net’s limitations don’t show. It’s enough to look closely at the generated graphics for a couple of seconds to spot all sorts of oddities, from deformed babies, to warping trains, to the worst golf swings in history. The MIT researchers themselves identified the following limitations:

  • The generations are usually distinguishable from real videos. They are also fairly low resolution: 64×64 for 32 frames.

  • Evaluation of generative models is hard. We used a psychophysical 2AFC test on Mechanical Turk asking workers “Which video is more realistic?” We think this evaluation is okay, but it is important for the community to settle on robust automatic evaluation metrics.

  • For better generations, we automatically filtered videos by scene category and trained a separate model per category. We used the PlacesCNN on the first few frames to get scene categories.

  • The future extrapolations do not always match the first frame very well, which may happen because the bottleneck is too strong.

We get the idea, though. Coupled with other developments, like another machine developed at one of MIT’s labs that can predict if a hug or high-five will happen, things seem to be shaping up pretty nicely.

via The Verge


Self-shading windows switch from transparent to opaque, no power required

Self-shading windows MIT

(c) MIT, Dinca

MIT researchers have creatively used electrochromic materials — which change color and transparency in response to an applied voltage — to design a new class of self-shading windows. When an electrical current is applied, the windows can swiftly change from transparent to opaque, or vice-versa. The power required to trigger the change is minimal and, once switched, no power at all is needed to hold a given state.

Curtains are so last century

Electrochromic windows aren’t exactly new. The Boeing 787 uses these materials for its cabin windows, to prevent glare from bright sunlight. When the voltage is turned on, however, it takes a good couple of minutes before the windows go dark.

This happens because producing the color change requires positive ions, not just electrons, to move through the material — and ions move far slower than electrons. Mircea Dincă, MIT professor of chemistry and lead researcher of the current paper, also notes that previous self-shading materials, like those in the 787, don’t change completely from transparent to black.

To make a self-shading window that switches quickly and completely between transparent and opaque, the MIT team used sponge-like materials called metal-organic frameworks (MOFs), which conduct both electrons and ions at high speeds. Dincă’s team had previously used MOFs to make windows turn from clear to shades of blue or green; their new material, made by mixing an organic compound with a metal salt, either completely blocks light or lets it pass through.

“It’s this combination of these two, of a relatively fast switching time and a nearly black color, that has really got people excited,” Dincă says.

Besides avoiding glare, the new material could prove very useful incorporated into residential or industrial windows. Just by flipping a switch, you can make the windows let less light through, which might save a lot of energy by offsetting air conditioning. Once the sun is ready to set, you can adjust the windows to let sunlight through again, so you don’t need to turn on the artificial lights.

What’s really interesting is that preliminary tests show only an initial voltage needs to be applied to change the opacity of the windows. No further power is required for the material to maintain its current state. Power is required only when the user wants to revert the material to its former state, whether transparent or opaque.

The results were published in the journal Chem.


Batteries made from carbon nanotubes are lit like a fuse to make power

Lithium, the stuff the batteries in your smartphone or notebook are made of, is a toxic substance and in short supply, so it’s pretty clear it’s not a sustainable solution to our mobile power generation needs. One alternative explored by researchers at MIT uses carbon nanotubes, which are non-toxic and non-metallic. The carbon nanotube battery also works in a fundamentally different way: instead of converting chemical energy into electricity, the system developed at MIT harnesses heat.

In this time-lapse series of photos, progressing from top to bottom, a coating of sucrose (ordinary sugar) over a wire made of carbon nanotubes is lit at the left end, and burns from one end to the other. Image: MIT

Michael Strano, a chemical engineering professor at MIT, and colleagues first discovered in 2010 that the tiny carbon cylinders can produce an electrical current through heat alone. They coated the tubes with a combustible material and let it burn progressively from one end, just like a fuse. The current produced then was minuscule, but the proof of concept got everyone pretty excited.

Five years later, Strano’s lab has dramatically upped the efficiency of the process — by nearly 10,000 percent.

The researchers now also have a better grip on the underlying mechanism of this phenomenon. The energy conversion occurs, Strano says, because pulses of heat push electrons through the bundle of carbon nanotubes, which are highly electrically conductive — the electrons are carried along the nanotube wire like a surfer rides a wave. This thermopower wave is divided into two separate components that may reinforce or counter one another, which is why heat sometimes produces a single voltage and sometimes two different voltage regions at the same time, as the MIT researchers witnessed.

A battery that’s on fire might not seem like a good idea for powering the same phone you keep in your pocket. This time around, though, the researchers used a benign fuel to drive the heat: sucrose. Most of us know it as table sugar.

So far, the device is about 1 percent efficient, and tests showed it can light LEDs or power smaller electronic devices. Pound for pound, though, the ‘fuse battery’ provides power in the same ballpark as today’s most efficient lithium-ion batteries.

Here are some other advantages:

  • Virtually unlimited shelf life, which would make the battery ideal for space probes that need to keep power reserves dormant until the time is nigh.
  • It’s completely scalable, unlike conventional batteries: the fuse battery can be as small as a toenail or as big as a house.
  • It runs on heat alone and doesn’t depend on any chemical formulation.
  • You can get quick, powerful bursts of power that aren’t possible with conventional batteries. Thermopower wave systems could be used to power long-distance transmission units in micro- and nano-telecommunication hubs, says Kourosh Kalantar-Zadeh, a professor of electrical and computer engineering at RMIT University in Australia, who was not involved in this research.

There’s also a lot of room to grow: it took 25 years for lithium-ion batteries to get to where they are today, Strano says. The professor hopes their research might inspire other groups to explore fuels beyond sucrose, for instance, and turn this into something even more efficient.

Findings appeared in the journal Energy & Environmental Science. 

MIT develops new solar cells, with 400 times the power-to-weight ratio and light enough to drape over a soap bubble

An MIT research team has developed a new technology that will allow for the creation of lighter and thinner solar cells than ever before. While the team says there is still work to be done before making them commercially available, the panels already proved their efficacy in laboratory settings. They hope that their work will power the next generation of portable electronic devices.

To demonstrate just how thin and lightweight the cells are, the researchers draped a working cell on top of a soap bubble, without popping it.
Image credits Joel Jean and Anna Osherov / MIT

The key to the new approach is to create the solar cell, the substrate that supports it, and the protective overcoating — all in one process, says Vladimir Bulović, MIT associate dean for innovation and Fariborz Maseeh Professor of Emerging Technology. Unlike conventional solar-cell manufacturing processes, which employ harsh chemicals and high temperatures, this method only calls for a carrier material in a solvent-free vacuum environment at room temperature.

“We put our carrier in a vacuum system, then we deposit everything else on top of it, and then peel the whole thing off,” explains research assistant Annie Wang.

“The innovative step is the realization that you can grow the substrate at the same time as you grow the device,” Bulović says.

Bulović says that like most new inventions, it all sounds very simple once it’s been done. But actually developing the techniques to make the process work required years of effort.

To test the new production method, the team used parylene, a common flexible polymer, as both the substrate and the overcoating, and an organic material known as DBP as the light-absorbing layer. The substrate and the cell itself are “grown” through vapor deposition techniques on a sheet of carrier material — in this case, glass. Because the substrate is built in place and doesn’t need to be handled during fabrication, it isn’t exposed to the dust and other contaminants that plague solar cells’ performance. After the construction process is complete, the parylene-DBP-parylene sandwich is lifted off the glass using a frame of flexible film.

While they used a glass carrier for their solar cells, co-author Joel Jean says “it could be something else. You could use almost any material,” since the processing takes place under such benign conditions. The substrate and solar cell could be deposited directly on fabric or paper, for example.

The end result is the thinnest and lightest complete solar cell ever made — just one-fiftieth of the thickness of a human hair, including the substrate and overcoating.

“If you breathe too hard, you might blow it away,” Jean says.

Showing off? Yeah, a bit. The cell in this demonstration is not especially efficient in absolute terms, owing to its low weight — but its power-to-weight ratio is among the highest ever achieved. Where typical glass-covered modules top out at around 15 watts of power per kilogram of weight, the new cells churn out 6 watts per gram — that’s 6,000 watts per kilogram, or 400 times as much power for a given weight. In applications where weight is a limiting factor, such as on spacecraft or at high altitude, this gives them an undeniable edge.

“It could be so light that you don’t even know it’s there, on your shirt or on your notebook,” Bulović says. “These cells could simply be an add-on to existing structures.”

But the researchers acknowledge that their demo cell may be a tad too thin to be practical. Luckily, they say that parylene films of up to 80 microns in thickness can be easily deposited using commercially available equipment, without sacrificing the benefits of the in-line substrate formation.

Taking the concept from laboratory-scale work to a full manufacturable product will take time, the team says. But the sheer versatility and affordability this process lends to solar cells is unquestionable.

“We have a proof-of-concept that works,” Bulović says.

“How many miracles does it take to make it scalable? We think it’s a lot of hard work ahead, but likely no miracles needed.”

And others are also excited to see the technology brought from the lab into the “wild.”

“This demonstration by the MIT team is almost an order of magnitude thinner and lighter” than the previous record holder, says Max Shtein, associate professor of materials science and engineering, chemical engineering and applied physics at the University of Michigan. He was not involved in this work.

“It has tremendous implications for maximizing power-to-weight (important for aerospace applications, for example), and for the ability to simply laminate photovoltaic cells onto existing structures.”

“This is very high quality work,” Shtein adds, with a “creative concept, careful experimental set-up, very well written paper, and lots of good contextual information. The overall recipe is simple enough that I could see scale-up as possible.”

The full paper, titled “In situ vapor-deposited parylene substrates for ultra-thin, lightweight organic solar cells”, has been published online by Elsevier and is available here.

Trillion fps camera shoots advancing light waves

How fast can your camera shoot? 60 frames per second, maybe 100? If you’ve got a good one, maybe 1,000 — or maybe you’re super pro and you shoot 10,000 fps. Puh-lease! The new MIT camera shoots at 1 trillion fps — that’s 1,000,000,000,000 frames every second!

Think of it this way: 1 trillion seconds is over 31,688 years, so if you shot just one second of footage and played it back at 30 fps, it would take over 1,000 years to watch. That would be some boring movie, no matter what you look like. At this frame rate, even light looks like it’s moving in slow motion.
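
The playback math is easy to sanity-check in Python:

# One second captured at 1 trillion fps, played back at 30 fps:
frames = 1_000_000_000_000
playback_seconds = frames / 30
print(playback_seconds / (3600 * 24 * 365.25))   # ~1,056 years of viewing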

Of course, you can’t take this camera on vacation — and even if you could, no place on Earth offers the necessary lighting. To shoot these scenes, the researchers used “femtosecond laser illumination, picosecond-accurate detectors and mathematical reconstruction techniques”.

The result you see here is an actual moving ray of light, caught in the act.

“It’s very interesting work. I am very impressed,” says Nils Abramson, a professor of applied holography at Sweden’s Royal Institute of Technology. In the late 1970s, Abramson pioneered a technique called light-in-flight holography, which ultimately proved able to capture images of light waves at a rate of 100 billion frames per second.

The MIT work, done in 2011, is still unsurpassed in terms of speed, and I’m surprised this field of research hasn’t grown more popular, especially considering its applications — medical imaging and laser physics are just two that spring to mind.

“I’m surprised that the method I’ve been using has not been more popular,” Abramson adds. “I’ve felt rather alone. I’m very glad that someone else is doing something similar. Because I think there are many interesting things to find when you can do this sort of study of the light itself.”

MIT polymer paves the way for solar-heated clothes

MIT scientists have developed a material that can absorb solar energy, then store it and release it on demand to produce heat. Made from a film of polymer, the material could be used to tailor cold-climate garments that charge up during the day and keep you pleasantly warm in the evening.

Image via inhabitat

The polymer weave absorbs energy from the sun’s rays and stores it through chemical reactions within a transparent film. The material contains molecules that switch into a “charged” position when exposed to sunlight.

Storing energy in chemical form is desirable because the compounds are stable enough to let the user draw on the reserves at their own discretion. The stored energy can be released with widely available catalysts: the heat in a solar-charged jacket, for example, can be released when it’s subjected to a powerful flash of light or exposed to an electrical current.

The team claims the polymer can heat up to 60 degrees Fahrenheit, and it can store solar energy for an indefinite amount of time.

If applied to clothing, the sun-storing material could benefit everyone from athletes and cold-weather workers to regular fashionistas living in chilly environments.

The researchers say the film is easy to produce in a two-step process. They are looking to apply the energy-harvesting film to materials and products like clothing, window glass, and industrial products.

MIT’s online courses can now lead to a degree

“Anyone who wants to be here now has a shot to be here,” MIT President L. Rafael Reif said. “They have a chance to prove in advance that they can do the work.”

Image via Wikipedia.

By now, you should know that MIT has posted many of its courses and materials for free on the internet. If you didn’t, well… now you do — you can access their open courseware here. But this story isn’t about that; it’s about taking things to the next level. Because now, with these courses, you can actually get a degree.

The good

“We produce 40 students a year, and they say that’s a drop in the bucket; we need thousands,” said Yossi Sheffi, director of the MIT Center for Transportation and Logistics.

In a pilot project announced Wednesday, students will be able to take a semester of free online courses in one of MIT’s graduate programs and then, after paying a “relatively modest” fee of around $1,500, earn a “MicroMasters” credential — if they pass the exam, that is.

You basically pay $150 for each of the five online classes, plus up to $800 to take the exam (5 × $150 + $800 comes to about $1,550) — but hey, you get a credential from MIT, right?

The bad

Well, you do get a credential from them, but it’s not a master’s degree. It is a credential granted by MITx for outstanding performance in graduate-level online coursework. MIT will definitely consider it, and many learners will be able to then move on to an actual degree at MIT. It makes a lot of sense for MIT to want to attract outstanding students to its conventional courses this way — if you’re good enough, you basically get to bypass the usual admissions system.

“That admission system works well for people who went to schools we know very well,” Reif said. “But for people from outside that familiar circle, it can be hard to break in.”

It’s also relatively cheap — but not really cheap, for most of the world at least. $1,500 is still a respectable sum, one that many students will definitely have a problem coming up with.

“We will give students the chance to prove they can achieve excellence in a master’s program before they have to apply for admission. This will level the playing field: Students from lesser-known universities globally will be able to prove their mettle as prospective MIT residential students,” the website reads.

… and the ugly

Unfortunately, the only MicroMasters available through this pilot is a one-year program in Supply Chain Management (SCM). The purpose is not to make money, but to attract students. If it goes well, it will definitely expand to other areas, but that will probably take a couple of years.

“Right now the main focus is quality, and hopefully the finances will work out later,” Reif said. “But this is not something in which we expect to make money. We want to break even.”

For more information, check out the MIT FAQ section.

MIT’s smart wound dressing is incredibly cool and I want one

Smartphones, smart TVs, smart cars — it’s a trend that’s picking up more and more, and for good reason: from making work easier to making entertainment more accessible and increasing safety, automation is the name of the game. The latest member to join club Smart is a bandage designed by MIT associate professor Xuanhe Zhao:

This sticky bandage is made out of a hydrogel matrix, a stretchy, rubbery material that’s actually mostly water. It can bond strongly to materials such as gold, titanium, aluminum, glass, silicone, or ceramic.

As seen in the picture above, the bandage can incorporate various sensors, LED lights, and other electronic equipment. Zhao also designed tiny, just-as-stretchy drug-delivering reservoirs and channels to be incorporated into the material.

“If you want to put electronics in close contact with the human body for applications such as health care monitoring and drug delivery, it is highly desirable to make the electronic devices soft and stretchable to fit the environment of the human body,” Zhao said.

This allows the “smart wound dressing” to be fitted to any area of the body where it’s needed, and to deliver medicine to the patient without the need for a human nurse or doctor. The prototype Zhao tested was fitted with heat sensors and LEDs, and programmed to administer the stored drug when the patient developed a fever.

However, the bandage’s uses are only limited by the electronics we can fit into it.

The LEDs can be programmed to light up if a drug is running low, or in response to changes in the patient — increased or lowered blood pressure, increases in temperatures, and so on.
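
In control terms, the prototype’s behavior boils down to a simple sense-decide-act loop. Here’s a hypothetical Python sketch — the hardware calls are stand-in stubs, not MIT’s actual firmware:

# Hypothetical smart-dressing controller: release a dose on fever,
# light an LED when the drug reservoir runs low.

FEVER_C = 38.0            # assumed trigger threshold
LOW_RESERVOIR_ML = 0.2

reservoir_ml = 1.0

def read_skin_temp_c():   # stub for the embedded heat sensor
    return 38.4

def release_dose():
    global reservoir_ml
    reservoir_ml -= 0.1   # push a dose through a hydrogel channel

def set_warning_led(on):
    print("LED", "on" if on else "off")

if read_skin_temp_c() >= FEVER_C:
    release_dose()
set_warning_led(reservoir_ml < LOW_RESERVOIR_ML)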

Zhao says that the electronics in the bandage aren’t limited to the surface of the patient’s skin either. The hydrogel can be used to fix sensors inside the body, such as implanted bio-compatible glucose sensors or even soft, compliant neural probes.

And you can even use it to bandage traditionally tricky, flexible areas such as elbows or knees — the gel stretches with the body and keeps the devices inside functional, intact and where they’re needed.

Finally, a bandage worthy of the tech-savvy!

The study was published in the journal Advanced Materials.

All image credits go to techweeklynews

MIT Wi-Fi technology can see you through walls

Researchers at MIT have developed a device that can track human silhouettes behind walls using Wi-Fi. The device, called RF-Capture, emits Wi-Fi signals, then tracks the reflections that come back to see whether, pieced together, they form a human figure.

This is what the RF-Capture “sees”. Note that it only detects some parts of the human body. Image credits: MIT.

Wi-Fi is a wireless local-area networking technology that allows electronic devices to connect to each other, generally using the 2.4 gigahertz (12 cm) UHF and 5 gigahertz (6 cm) SHF ISM radio bands. But Wi-Fi can do more than create networks and connect you to the internet — as the new study shows, it can send out a signal and reconstruct what’s “on the other side”, i.e. what reflected the signal back. In principle this can be done with any type of wave, and higher frequencies provide better resolution, but Wi-Fi is a cheap, widely available technology, which makes it more attractive to use.

Here’s how it works: RF-Capture is placed in a room and starts emitting signals. Part of the signal bounces back off the walls, but some of it passes through to the neighboring room. If someone is walking in that room, the signal is reflected by the human body and returns to the Wi-Fi device — though only some body parts create significant reflections. The technology could be used in the homes of elderly people or people with disabilities, to detect whether they have fallen or are injured and need help. It could also be used in smart homes, to detect the gestures that control household appliances, and it can identify individual people. Here’s a video showing how it works:

How it works. Image via MIT.
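
The ranging principle underneath is the same as radar’s: a reflection’s round-trip time pins down how far away the reflector is. A minimal Python sketch (the numbers are illustrative):

SPEED_OF_LIGHT = 3.0e8   # meters per second

def reflector_distance_m(round_trip_seconds):
    # The signal covers the distance twice: out to the body and back.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(reflector_distance_m(40e-9))   # a 40-nanosecond echo -> 6.0 m away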

The sensitivity is amazing! Being able to detect movements with the same accuracy as a Kinect camera placed right in front of the subject is absolutely spectacular.

The concept itself is not new – it has been used in geophysics for decades, for example in ground-penetrating radar, a technology that can detect buried near-surface objects. But the application is entirely different, and it holds a lot of potential, because Wi-Fi is basically ubiquitous in the developed world.

MIT tackling more serious science: they program beer-delivering robots

Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory is on the brink of revolutionizing relaxation with their recent breakthrough: they have programmed two robots that can deliver beverages.

What’s yer poison?
Image via wikimedia

The robots, called PR2, have coolers attached to them and are programmed to roam around separate rooms and ask people if they want a drink. Should the person say yes, the silicon-powered bartender wheels over to a larger robot that loads a beer into its cooler, then rolls back to deliver it to the customer.
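The delivery loop itself is conceptually simple. Here is a hypothetical sketch of it; the real PR2s run a full ROS stack with separate navigation, perception and manipulation components, so treat all the method names as invented placeholders.

```python
def delivery_round(robot, dispenser, people):
    """Hypothetical outline of one drink-delivery round."""
    for person in people:
        robot.navigate_to(person)                  # roam to the next person
        if robot.ask(person, "Want a drink?"):     # take the order
            robot.navigate_to(dispenser)           # visit the larger robot
            dispenser.load_beer(robot.cooler)      # it loads the cooler
            robot.navigate_to(person)              # head back
            robot.hand_over_drink(person)          # deliver
```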

While the task of drink-fetching may seem small and underwhelming for a robot, programming a unit that can successfully perform it is an impressive step forward in robotics. The study notes that one advantage of testing a robot on bartending is that the environment is well-defined, which lets the researchers develop the program that drives the little PR2s rapidly.

“As autonomous personal robots come of age, we expect certain applications to be executed with a high degree of repeatability and robustness. In order to explore these applications and their challenges, we need tools and strategies that allow us to develop them rapidly. Serving drinks (i.e., locating, fetching, and delivering), is one such application with well-defined environments for operation, requirements for human interfacing, and metrics for successful completion,” the study reads.

And while the applications that PR2 can currently be employed in are rather limited, the team behind them feels that specialization, rather than generalization of the tasks to be performed, is the way to go for robotic progress. As such, they advocate the creation of an "app store" of sorts: a database of specific, useful robotic behaviors that can be run to perform specific tasks. One app would allow the robot to butler, another to clean, or sew, or cook, and so on.

“This view of encapsulating particular functionality is gaining acceptance across the research community and is an important component for the near and long term visions of endowing personal robots with abilities that will better assist people.”
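A minimal sketch of what such a behavior "app store" could look like in code; all of the names here are invented for illustration:

```python
BEHAVIORS = {}  # the "store": behavior name -> callable

def behavior(name):
    """Decorator that registers a function as an installable behavior."""
    def register(fn):
        BEHAVIORS[name] = fn
        return fn
    return register

@behavior("butler")
def serve_drinks(robot):
    robot.run_task("locate-fetch-deliver")

@behavior("cleaner")
def tidy_up(robot):
    robot.run_task("sweep-floor")

def run_app(robot, name):
    """Look up a behavior by name and run it, app-store style."""
    BEHAVIORS[name](robot)
```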

It can also point astonishingly well.
Image via popsci

Even in the relatively well-constrained bounds of a specific “application”, endowing a personal robot with autonomous capability will require integrating many complex subsystems; most robots will need some facility in perception, motion planning, reasoning, navigation, and grasping. Each of these subsystems is well-studied and validated individually, but their seamless coordination has proven a tricky prize for roboticists up to now.

“Specific challenges integrators face include coping with multiple points of failure from complicated subsystems, computational constraints that may only be encountered when running large sets of components, and reduced performance from subsystems during interoperation.”
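One common way integrators cope with those failure points is to wrap the subsystems in a supervisor, so that one faulty component degrades the mission instead of crashing the robot. A toy sketch of the idea, not the MIT code:

```python
def run_pipeline(subsystems, observation):
    """Run perception -> planning -> control steps in order, isolating
    failures so one faulty subsystem can't take down the whole robot."""
    data = observation
    for name, step in subsystems:
        try:
            data = step(data)
        except Exception as err:
            print(f"[watchdog] {name} failed: {err}; holding safe state")
            return None   # fall back to a safe behavior instead of crashing
    return data

# Usage (with your own functions):
# run_pipeline([("perception", detect_people),
#               ("planning", plan_route),
#               ("control", execute_route)], camera_frame)
```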
There is also the issue of how robots integrate and coordinate with each other. I’ll let Ariel Anders, one of the MIT scientists working on PR2, explain in this video:

The MIT robots are considered groundbreaking (and thankfully not glass-shattering), and I personally feel this is a great leap forward; I can’t wait to have a robot butler of my own. The technology shows great promise, and the engineers hope to eventually use it as a basis for more crucial missions: the creators say they hope to one day use the robots at emergency shelters, taking orders for bottles and crackers.

You can read the full abstract here.

Spectrometer is small enough to fit in your smartphone

MIT engineers have demonstrated a working spectrometer that shrinks what used to be bulky lab gear into a portable piece of equipment small enough to fit in a smartphone. Spectrometers are essential to research nowadays, employed in everything from physics to biology to chemistry. To design the spectrometer, the MIT team made use of tiny semiconductor nanoparticles called quantum dots. Having a portable spectrometer could prove extremely practical: you could use it to remotely diagnose diseases, detect pollution or spot food poisoning.

In this illustration, the Quantum Dot (QD) spectrometer device is printing QD filters — a key fabrication step. The dots are made by printing droplets. Image: MIT

The basic function of a spectrometer is to take in light, break it into its spectral components and digitize the signal as a function of wavelength, so the information can be read by a computer and shown on a display. Raindrops split a beam of white sunlight into rays of colored light, bending the blueish ones more than the reddish ones to make the well-known arc in the sky; rain, then, is a brilliant method for separating sunlight. Indeed, the earliest spectrometers consisted of prisms that separate light into its constituent wavelengths, while current models use optical equipment such as diffraction gratings to achieve the same effect. Even so, this kind of equipment is huge. The spectrometer developed at MIT is about the size of a quarter!
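For reference, the physics behind those gratings fits on one line: a grating with groove spacing d sends light of wavelength λ in diffraction order m to a specific angle θ_m, per the grating equation below. Resolving nearby wavelengths means physically separating those angles, which is part of why conventional instruments end up so large.

```latex
d \sin\theta_m = m \lambda
```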

The researchers have quantum dots to thank for this achievement. Quantum dots are a type of light-absorbing nanocrystal. They are often called artificial atoms because, like real atoms, they confine electrons to quantized states with discrete energies. However, while real atoms of a given element are identical, most quantum dots comprise hundreds or thousands of atoms, with inevitable variations in size and shape and, consequently, unavoidable variability in their wavefunctions and energies. This is actually a good thing in this case. The quantum dots are made by mixing metals such as lead or cadmium with other elements including sulfur, selenium, or arsenic. By controlling the ratio between the materials, you get quantum dots with specific, unique absorption properties.

Nowadays, quantum dots are heavily researched for use in solar panels and TV displays, since they also fluoresce. While those applications are still quite challenging at this stage, quantum dot light absorption is very well studied, so any spectrometer that uses it can be expected to give stable results.

The MIT researchers printed hundreds of quantum dots – each absorbing a specific wavelength of light – into a thin film and placed it on top of a photodetector such as the charge-coupled devices (CCDs) found in cellphone cameras. An algorithm identifies the fraction of photons absorbed by each dot, then uses this info to compute the intensity and wavelength of the original beam of light. The more quantum dot materials there are, the more wavelengths can be covered and the higher the resolution that can be obtained. In this case, 200 quantum dots were deployed over a range of 300 nanometers. By adding even more dots, engineers could build a small spectrometer that covers the whole range of wavelengths.
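In essence, this is a linear inverse problem: each dot filter contributes one row of a calibration matrix, and the unknown spectrum is recovered from the detector readings. Below is a minimal sketch of that idea using non-negative least squares; it illustrates the general approach, not the team's published algorithm, and the calibration matrix here is random stand-in data.

```python
import numpy as np
from scipy.optimize import nnls

# Calibration matrix: one row per quantum-dot filter, one column per
# wavelength bin; entry (i, j) is dot i's response to wavelength j.
# 200 dots, with the 300 nm range discretized into 120 bins here.
rng = np.random.default_rng(0)
A = rng.random((200, 120))              # stand-in calibration data

# A made-up incident spectrum, and the counts the CCD would record.
bins = np.arange(120)
true_spectrum = np.exp(-0.5 * ((bins - 60) / 10) ** 2)
y = A @ true_spectrum

# Recover intensity vs. wavelength; non-negative least squares encodes
# the physical fact that light intensity can't be negative.
spectrum, residual = nnls(A, y)
```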

“Using quantum dots for spectrometers is such a straightforward application compared to everything else that we’ve tried to do, and I think that’s very appealing,” says Moungi Bawendi, the Lester Wolfe Professor of Chemistry at MIT and the paper‘s senior author.

Previously, another team from MIT unveiled a handheld mass spectrometer. Coupled with this latest news, one might imagine scientists, doctors or hazard control officers using both optical and mass spectrometers in the field quite easily and reliably.

Autonomous underwater gliders plan missions and coordinate by themselves

Researchers watch underwater footage taken by various AUVs exploring Australia’s Scott Reef. Image: MIT

Climate models and environmental monitoring missions are ever more reliant on autonomous underwater vehicles (AUVs) to scour the ocean depths and bring back valuable data like temperature, salinity, carbon levels and so on. Researchers at MIT have now upgraded the way AUVs perform their missions by adding an extra dimension to their autonomy. They demonstrated how a pack of AUVs, directed by a “captain” drone, is able to navigate obstacles and retrieve data with minimal human intervention. This dramatically enhances performance and might revolutionize the way scientists study the oceans.

Typically, these sorts of robots require predetermined instructions laid out very precisely by a programmer. The alternative is for a person to remotely control the underwater vehicle, but then it wouldn’t be autonomous anymore, defeating the purpose. The team at MIT had a different plan: infusing the bots with almost cognitive-like behavior. Aptly named “Enterprise”, the program uses a hierarchical decision-making system in which one AUV is tasked as the “captain” and the others follow its lead. The captain bases its decisions on data delivered by the “navigator”, another AUV that watches for obstacles and plans the route, as well as the “engineer”, an AUV that handles any real-time malfunctions or engineering problems. Together, the AUVs performed nicely in the waters off the western coast of Australia back in March.
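A toy sketch of that hierarchy is below. The class and method names are invented for illustration, since the article doesn't publish the Enterprise planner itself:

```python
class Navigator:
    """Watches for obstacles and proposes a route around them."""
    def propose_route(self, waypoints, obstacles):
        return [wp for wp in waypoints if wp not in obstacles]

class Engineer:
    """Monitors vehicle health in real time."""
    def vehicle_ok(self, telemetry):
        return telemetry["battery"] > 0.2 and not telemetry["leak"]

class Captain:
    """Top-level decision maker: acts only on routes the navigator
    clears, and only while the engineer reports the vehicle healthy."""
    def __init__(self, navigator, engineer):
        self.navigator, self.engineer = navigator, engineer

    def decide(self, waypoints, obstacles, telemetry):
        if not self.engineer.vehicle_ok(telemetry):
            return "abort-and-surface"
        route = self.navigator.propose_route(waypoints, obstacles)
        return route or "hold-position"

captain = Captain(Navigator(), Engineer())
print(captain.decide(["A", "B", "C"], {"B"}, {"battery": 0.8, "leak": False}))
# -> ['A', 'C']: the captain adopts the navigator's route around obstacle B
```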

“We wanted to show that these vehicles could plan their own missions, and execute, adapt, and re-plan them alone, without human support,” says Brian Williams, a professor of aeronautics and astronautics at MIT, and principal developer of the mission-planning system. “With this system, we were showing we could safely zigzag all the way around the reef, like an obstacle course.”

“We can give the system choices, like, ‘Go to either this or that science location and map it out,’ or ‘Communicate via an acoustic modem, or a satellite link,'” Williams says. “What the system does is, it makes those choices, but makes sure it satisfies all the timing constraints and doesn’t collide with anything along the way. So it has the ability to adapt to its environment.”

A Slocum glider, used by the MIT team, navigates underwater. Credit: MIT

A while ago, researchers deployed robot gliders equipped with sensors that track temperature, salinity and oxygen levels in the waters around the Antarctic. These showed that swirling ocean eddies, similar to atmospheric storms, play an important role in transporting warm water to the Antarctic coast. Using smarter, more agile gliders, scientists can now probe the oceans in places that were previously physically inaccessible. Who knows what they’ll find there. By giving robots control of higher-level decision-making, Williams says, such a system would free engineers to think about overall strategy while the AUVs determine a specific mission plan for themselves; it could also reduce the size of the operational team needed on research cruises.

“If you look at the ocean right now, we can use Earth-orbiting satellites, but they don’t penetrate much below the surface,” Williams said. “You could send sea vessels which send one autonomous vehicle, but that doesn’t show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting.”

Williams and colleagues will present their Enterprise findings at the International Conference on Automated Planning and Scheduling in Israel in June.