

Robot see, robot do: MIT software allows you to instruct a robot without having to code

Researchers have put together C-LEARN, a system that should allow anyone to teach their robot any task without having to code.

The robot chef from the Easy Living scene in Horizons at EPCOT Center.
Image credits Sam Howzit / Flickr.

Quasi-intelligent robots are already a part of our lives, and someday soon, their full-fledged robotic offspring will be too. But until (or rather, unless) they reach a level of intelligence where we can teach them verbally, as you would a child, instructing a robot will require you to know how to code. And since coding is complicated (more complicated than just doing the dishes yourself, anyway), it’s unlikely that regular people will have much use for robots.

Unless, of course, we could de-code the process of instructing robots. Which is exactly what roboticists at MIT have done. Called C-LEARN, the system should make the task of instructing your robot as easy as teaching a child. Which is a bit of good-news-bad-news, depending on how you feel about the rise of the machines: good, because we can now have robot friends without learning to code, and bad, because technically the bots can use the system to teach one another.

How to train your bot

So as I’ve said, there are two ways you can go about it. The first is to program the robot, which requires coding expertise and takes a lot of time. The other is to show the bot what you want it to do, either by tugging on its limbs or moving digital representations of them around, or by doing the task yourself and having it imitate you. For us muggles, the latter is the way to go, but it takes a lot of work to teach a machine even simple movements, and even then it can only repeat them, not adapt them.

C-LEARN is meant to offer a middle road that addresses the shortcomings of both methods by arming robots with a knowledge base of simple steps that they can intelligently apply when learning a new task. A human user first helps build up this base by working with the robot. The paper describes how the researchers taught Optimus, a two-armed robot, using software that simulates the motion of its limbs. Like so:

The researchers demonstrated movements such as grasping the top of a cylinder or the side of a block, in different positions, repeating each motion seven times from each position. Because the motions varied slightly each time, the robot could look for underlying patterns across them and integrate those patterns into its knowledge base. If, for example, the simulated gripper always ended up parallel to the object, the robot would note that this orientation matters and would constrain its future motions to preserve it.
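The paper’s actual machinery (keyframes plus learned geometric constraints, refined under shared autonomy) is more elaborate, but the core idea in the paragraph above, treating features that barely vary across repeated demonstrations as required constraints, can be sketched in a few lines of Python. All names and numbers here are hypothetical, not taken from the paper:

```python
def infer_constraints(demos, tol=1e-3):
    """Flag features that stay (nearly) constant across all
    demonstrations of the same motion as geometric constraints.

    demos: list of dicts mapping feature name -> observed value,
           one dict per demonstration.
    """
    constraints = {}
    for name in demos[0]:
        values = [d[name] for d in demos]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        if var < tol:  # feature barely varied: treat it as required
            constraints[name] = mean
    return constraints

# Seven simulated grasp demonstrations: the gripper's angle to the
# object's axis is always ~0 (parallel), while its approach height
# varies freely from demo to demo.
demos = [
    {"angle_to_axis": 0.000, "height": 0.10},
    {"angle_to_axis": 0.001, "height": 0.18},
    {"angle_to_axis": -0.001, "height": 0.06},
    {"angle_to_axis": 0.000, "height": 0.14},
    {"angle_to_axis": 0.002, "height": 0.07},
    {"angle_to_axis": -0.002, "height": 0.16},
    {"angle_to_axis": 0.001, "height": 0.12},
]

# Only the near-constant "parallel to the object" feature survives
# as a constraint; the noisy height does not.
print(infer_constraints(demos))
```

A real system would work over full 6-DOF poses and per-keyframe constraint sets rather than scalar features, but the variance test captures why seven slightly-varied demonstrations are more informative than one exact one.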

By this point, the robot is very similar to a young child, “that just knows how to reach for something and grasp it,” according to D’Arpino. But starting from this database the robot can learn new, complex tasks following a single demonstration. All you have to do is show it what you want done, then approve or correct its attempt.

Does it work?


To test the system, the researchers taught Optimus four multistep tasks — to pick up a bottle and place it in a bucket, to grab and lift a horizontal tray using both hands, to open a box with one hand and use the other to press a button inside it, and finally to grasp a handled cube with one hand and pull a rod out of it with the other. Optimus was shown how to perform each task once, made 10 attempts at each, and succeeded 37 out of 40 times. Which is pretty good.

The team then went one step further and transferred Optimus’s knowledge base and its understanding of the four tasks to a simulation of Atlas, the bullied bot. It managed to complete all four tasks using the data. When the researchers corrupted the data banks by deleting some of the information (such as the constraint to keep the gripper parallel to the object), Atlas failed to perform the tasks. Such a system would let us transfer the motion models one bot builds up over thousands of hours of training and experience to any other robot, anywhere in the world, almost instantly.

D’Arpino is now testing whether having Optimus interact with people for the first time can refine its movement models. Afterward, the team wants to make the robots more flexible in how they apply the rules in their data banks, so that they can adjust their learned behavior to whatever situation they’re faced with.

The goal is to build robots that can perform complex, dangerous, or just plain boring tasks with high precision. Applications could include bomb defusal, disaster relief, high-precision manufacturing, and helping sick people with housework.

The findings will be presented later this month at the IEEE International Conference on Robotics and Automation in Singapore.

You can read the full paper “C-LEARN: Learning Geometric Constraints from Demonstrations for Multi-Step Manipulation in Shared Autonomy” here.

Browse the brain one cell at a time in the most detailed atlas ever made

The new Allen Brain Atlas combines neuroimaging and detailed cell studies to create the most detailed map of the brain to date.

Image credits Ed S. Lein et al., 2016.


One of the biggest hurdles neuroscientists face today is the incredible complexity of the organ they work with. Because so many different parts come together to make it work (and because so many of those parts are so tiny) there isn’t an exact template of where each piece starts and where it ends. But now, after a five-year-long effort, Ed Lein and his colleagues from the Allen Institute for Brain Science in Seattle have put together a comprehensive, open-access digital atlas of the human brain — think of it as the Google map of the brain, complete with markers and street-view.

Where’s what

“Essentially what we were trying to do is to create a new reference standard for a very fine anatomical structural map of the complete human brain,” says Ed Lein, Ph.D., Lead Investigator at the Allen Institute for Brain Science.

“It may seem a little bit odd, but actually we are a bit lacking in types of basic reference materials for mapping the human brain that we have in other organisms like mouse or like monkey, and that is in large part because of the enormous size and complexity of the human brain.”

The project was based on a single healthy postmortem brain, that of a 34-year-old woman. The team started with full magnetic resonance and diffusion weighted imaging scans of the organ, capturing its overall structure and the way fibers connect inside the brain. Then it was time to look inside. The brain was sliced into 2,716 thin sections for cellular analysis. Parts of these sheets of brain were dyed with Nissl stain and their cell architecture examined. The team then used two other stains to selectively label certain aspects of the brain, such as structural elements of cells, fibers in the white matter, and specific types of neurons.

Based on the Nissl-stained slides, the team cataloged 862 distinct brain structures, finding some novel subregions of the thalamus and amygdala and two other structures that have previously only been described in non-human primates.

When the team combined the overall high-resolution imaging data with the detailed, cellular-level structure of each area, they annotated the atlas with the brain structures they had identified. Lein explains that the atlas is available online so people can “navigate it, and move from the macro level all the way right into the cellular level.”

He says that the atlas will become an invaluable tool for neuroscientists to use as common starting material — a set of well-defined areas on which they can later add more levels of annotation based on the criteria they need.

“To understand the human brain, we need to have a detailed description of its underlying structure,” says Lein.

One brain to map them all

Mapping the human brain has long been a major goal of neuroscientists trying to make heads or tails of how it works, what its parts are, and what those parts actually do. Last year, researchers from the Human Connectome Project released a detailed brain map based on multiple MRI measurements recorded from 210 healthy participants. Lein and his colleagues chose to concentrate their efforts on a single brain so they could go into much more detail.

“Because of the labor intensiveness of doing this, it always lives in the scale of a single brain,” Lein says, “and you really go to town in trying to understand everything you can about that one individual.”

But going in-depth on a single specimen also has its drawbacks. Human Connectome Project researcher Matthew Glasser thinks that the Allen Brain Atlas is “impressive,” particularly on a neuroanatomical level, but points out that it might be hard to generalize findings from one brain to the whole human race.

“The thing that’s a challenge is relating a single brain like this that’s very intensively studied to other brains,” Glasser says.

But the thing to remember is that before these two datasets became available, the best reference material we had was put together in 1909, when German anatomist Korbinian Brodmann used Nissl staining to create a cellular-scale brain map. Most brain-mapping efforts to date still build on Brodmann’s work; hopefully, the new Allen Brain Atlas will speed such efforts along.

“There simply hasn’t been a complete map of the human brain as a reference piece of material for anyone studying any part of the brain,” Lein says, “and this is a completely essential part of doing research.”

Browse the Allen Brain Atlas at http://brain-map.org/

The full paper “Comprehensive cellular-resolution atlas of the adult human brain” has been published in The Journal of Comparative Neurology.