Tag Archives: robotics

These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and space. The basis of this field, evolutionary computing, sees robots possessing a virtual genome ‘mate’ to ‘reproduce’ improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats way beyond human abilities. But what happens when unstable situations or environments call for robots never seen on earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there was a better alternative? What if there was a type of artificial intelligence that could take lessons from evolution to generate robots that can adapt to their environment? It sounds like something from a sci-fi novel — but it’s exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary theory of variation and selection, these robots can optimize their descendants depending on a set of activities over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” It offers a way to explore evolutionary principles and to set up an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a family of algorithms inspired by biological evolution, in which candidate solutions are generated and iteratively “evolved”. Each new generation removes the less fit solutions and introduces small adaptive changes, or mutations, producing a cyber version of survival of the fittest. It’s a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
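
To make the idea concrete, here’s a minimal sketch of such an evolutionary loop in Python. The fitness function and mutation scheme are invented placeholders, not ARE’s actual code:

```python
import random

def fitness(genome):
    # Placeholder score: how close the genome is to an arbitrary target.
    # A real system would instead measure a robot's performance on a task.
    return -sum((g - 1.0) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Small random tweaks stand in for genetic mutation.
    return [g + random.gauss(0, rate) for g in genome]

# Start with 20 random candidate 'genomes' of 10 numbers each.
population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(20)]

for generation in range(100):
    # Survival of the fittest: rank candidates and keep the top half...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...then refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(f"Best fitness after 100 generations: {fitness(best):.4f}")
```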

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two parent robots come together to mingle virtual genomes and create improved young incorporating both their genetic codes.

The newly evolved offspring is built autonomously via a 3D printer, after which a mechanical assembly arm translating the inherited virtual genomic code selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the artificial system wires up a Raspberry Pi computer acting as a brain to the sensors and motors – software is then downloaded from both parents to represent the evolved brain.

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants because they are bred across different species: a parent with wheels, for example, might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But the inherited brain may struggle to control the new body, so an algorithm is run as part of the learning stage to refine the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they can proceed to the next phase: testing.
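
That learning stage can be pictured as simple trial and error: nudge the controller’s parameters, keep the change if the body moves better. A hypothetical hill-climbing sketch, where the “brain” is just three motor gains rather than ARE’s real controller:

```python
import random

def trial_score(brain):
    # Stand-in for running the robot in a simplified environment and
    # measuring how well the inherited brain controls the new body.
    ideal_gains = [0.5, -0.2, 0.8]  # hypothetical best gains for this body
    return -sum((b, t) == () or (b - t) ** 2 for b, t in zip(brain, ideal_gains))

brain = [random.uniform(-1, 1) for _ in range(3)]  # inherited controller

for trial in range(200):
    candidate = [b + random.gauss(0, 0.05) for b in brain]
    if trial_score(candidate) > trial_score(brain):
        brain = candidate  # keep changes that improve motor control

print("Refined gains:", [round(b, 2) for b in brain])
```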

2. Selection of the fittest – who can reproduce?

For testing, ARE uses a specially built, inert mock-up of a nuclear reactor housing, where young robots must identify and clear radioactive waste while avoiding various obstacles. After the task, the system scores each robot according to its performance, and those scores determine which robots will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.
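
A hedged sketch of what that recombination step might look like; the genome layout (hardware genes plus controller weights) is invented for illustration and is not ARE’s actual encoding:

```python
import random

# Hypothetical genomes: hardware genes plus a few controller weights.
parent_a = {"locomotion": "wheels", "sensor": "camera", "weights": [0.3, 0.9, -0.4]}
parent_b = {"locomotion": "jointed_leg", "sensor": "infrared", "weights": [-0.1, 0.5, 0.7]}

def recombine(a, b):
    # Uniform crossover: each hardware gene is taken from a random parent.
    child = {gene: random.choice([a[gene], b[gene]]) for gene in ("locomotion", "sensor")}
    # Controller weights are blended, then mutated slightly.
    child["weights"] = [(wa + wb) / 2 + random.gauss(0, 0.05)
                        for wa, wb in zip(a["weights"], b["weights"])]
    return child

offspring = recombine(parent_a, parent_b)
print(offspring)  # e.g. wheels from one parent, an infrared sensor from the other
```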

Evolutionary roboticist and ARE researcher Guszti Eiben explains why this sped-up evolution works: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

This parallel universe entails the creation of a digital version of every mechanical infant in a simulator once mating has occurred, which enables the ARE researchers to build and test new designs within seconds, identifying those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard.” She adds: “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may have more immediate uses. And as climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve us even further as we step back and hand them the reins of their own virtual lives. How this will affect the human race remains to be seen.

The swarm is near: get ready for the flying microbots

Imagine a swarm of insect-sized robots capable of recording criminals for the authorities undetected or searching for survivors caught in the ruins of unstable buildings. Researchers worldwide have been quietly working toward this but have been unable to power these miniature machines — until now.

A 0.16 g microscale robot that is powered by a muscle-like soft actuator. Credit: Ren et al (2022).

Engineers from MIT have developed powerful micro-drones that can zip around with bug-like agility, which could eventually perform these tasks. Their paper in the journal Advanced Materials describes a new form of synthetic muscle (known as an actuator) that converts energy into motion to power these devices and enable them to move around. Their new fabrication technique produces artificial muscles that dramatically extend the microbot’s lifespan while increasing its performance and the payload it can carry.

In an interview with Tech Xplore, Dr. Kevin Chen, senior author of the paper, explained that they have big plans for this type of robot:

“Our group has a long-term vision of creating a swarm of insect-like robots that can perform complex tasks such as assisted pollination and collective search-and-rescue. Since three years ago, we have been working on developing aerial robots that are driven by muscle-like soft actuators.”

Soft artificial muscles contract like the real thing

Your run-of-the-mill drone flies using rigid actuators, which can be driven with high voltage and power to make them move, but robots on this miniature scale couldn’t carry such a heavy power supply. So-called ‘soft’ actuators are a far better solution, as they’re far lighter than their rigid counterparts.

In their previous research, the team engineered microbots that could perform acrobatic movements mid-air and quickly recover after colliding with objects. But despite these promising results, the soft actuators underpinning these systems required higher voltages than an onboard power source could supply, meaning an external power supply had to be used to propel the devices.

“To fly without wires, the soft actuator needs to operate at a lower voltage,” Chen explained. “Therefore, the main goal of our recent study was to reduce the operating voltage.”

In this case, the device would need a soft actuator with a large surface area to produce enough power. However, it would also need to be lightweight so a micromachine could lift it.

To achieve this, the group opted for soft dielectric elastomer actuators (DEAs) made from layers of a flexible, rubber-like solid known as an elastomer, whose polymer chains are held together by relatively weak bonds – permitting it to stretch under stress.

The DEAs used in the study consist of a long piece of elastomer only 10 micrometers thick (roughly the same diameter as a red blood cell) sandwiched between a pair of electrodes. These, in turn, are wound into a 20-layer ‘tootsie roll’ to expand the surface area and create a ‘power-dense’ muscle that deforms when a voltage is applied, similar to how human and animal muscles contract. In this case, the contraction causes the microbot’s wings to flap rapidly.
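
The push for thinner layers follows from the standard dielectric-elastomer relation: the effective actuation pressure is p = ε₀·εᵣ·(V/d)², so halving the layer thickness d quadruples the pressure at the same voltage. A quick illustration with made-up numbers, not the paper’s measured values:

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def dea_pressure(voltage, thickness, eps_r=3.0):
    # Standard DEA actuation (Maxwell) pressure: p = eps_0 * eps_r * (V/d)^2.
    # eps_r = 3.0 is a typical elastomer permittivity, assumed here.
    field = voltage / thickness            # electric field, V/m
    return EPSILON_0 * eps_r * field ** 2  # pressure, Pa

# A 10-micrometer layer driven at 500 V (illustrative numbers):
print(f"10 um layer: {dea_pressure(500, 10e-6) / 1e3:.0f} kPa")
# The same voltage across a 20-micrometer layer gives a quarter of the pressure:
print(f"20 um layer: {dea_pressure(500, 20e-6) / 1e3:.0f} kPa")
```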

A microbot that acts and senses like an insect

A microscale soft robot lands on a flower. Credit: Ren et al (2022).

The result is an artificial muscle that forms the compact body of a robust microrobot that can carry nearly three times its weight (despite weighing less than one-quarter of a penny). Most notably, it can operate with 75% lower voltage than other versions while carrying 80% more payload.

They also demonstrated a 20-second hovering flight, which Chen says is the longest recorded by a sub-gram robot, with the actuator still working smoothly after 2 million cycles – far outpacing the lifespan of other models.

“This small actuator oscillates 400 times every second, and its motion drives a pair of flapping wings, which generate lift force and allow the robot to fly,” Chen said. “Compared to other small flying robots, our soft robot has the unique advantage of being robust and agile. It can collide with obstacles during flight and recover, and it can make a 360-degree turn within 0.16 seconds.”

The DEA-based design introduced by the team could soon pave the way for microbots that work using untethered batteries. For example, it could inspire the creation of functional robots that blend into our environment and everyday lives, including those that mimic dragonflies or hummingbirds.

The researchers add:

“We further demonstrated open-loop takeoff, passively stable ascending flight, and closed-loop hovering flights in these robots. Not only are they resilient against collisions with nearby obstacles, they can also sense these impact events. This work shows soft robots can be agile, robust, and controllable, which are important for developing next generation of soft robots for diverse applications such as environmental exploration and manipulation.”

And while they’re thrilled about producing workable flying microbots, they hope to reduce the DEA thickness to only 1 micrometer, which would open the door to many more applications for these insect-sized robots.

Source: MIT

Steam Power Might Help in Space Exploration

WINE prototype. Credit: Honeybee Robotics Ltd.

A vast array of propellants has been used in the launching and transportation of spacecraft, liquid hydrogen and oxygen among them. Other spacecraft rely heavily on solar power to sustain their functionality once they have entered outer space. But now steam-powered vessels are being developed, and they are working efficiently as well.

People have been experimenting with this sort of technology since 1698, some decades before the American Revolution. Steam power has allowed humanity to run various modes of transportation such as steam locomotives and steamboats which were perfected and propagated in the early 1800s. In the century prior to the car and the plane, steam power revolutionized the way people traveled.

Now, in the 21st century, it is revolutionizing the way in which man, via probing instruments, explores the cosmos. The private company Honeybee Robotics, whose robots are employed in fields from medicine to the military, has developed WINE (World Is Not Enough). The project has received funding from NASA under its Small Business Technology Transfer program.

The spacecraft is intended to be capable of drilling into an asteroid’s surface, collecting water, and using it to generate steam to propel it toward its next destination. Late in 2018, WINE’s abilities were put to the test in a vacuum tank filled with simulated asteroid soil. The prototype mined water from the soil and used it to generate steam to propel it. Its drilling capabilities have also been proven in an artificial environment. To heat the water, WINE would use solar panels or a small radioisotopic decay unit.
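
The “refuel and hop” concept can be roughly bounded with the Tsiolkovsky rocket equation. The numbers below, including a modest specific impulse for a solar-heated steam thruster, are assumptions for illustration, not Honeybee’s figures:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s, wet_mass, dry_mass):
    # Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m_wet / m_dry)
    return isp_s * G0 * math.log(wet_mass / dry_mass)

# Hypothetical hopper: 20 kg dry, topped up with 5 kg of mined water,
# steam thruster Isp of ~70 s (an assumed, conservative value).
dv = delta_v(isp_s=70, wet_mass=25, dry_mass=20)
print(f"delta-v per tank of water: {dv:.0f} m/s")  # ~153 m/s, ample for hops in low gravity
```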

“We could potentially use this technology to hop on the moon, Ceres, Europa, Titan, Pluto, the poles of Mercury, asteroids — anywhere there is water and sufficiently low gravity,” stated Phil Metzger, a planetary researcher at the University of Central Florida.

Without having to carry a large amount of fuel, and with a presumably unlimited supply of energy, WINE and its future successors might be able to continue their missions indefinitely. Similar technology might even be employed in transporting human space travelers.

NASA Explores the Use of Robotic Bees on Mars

Graphic depiction of Marsbee – Swarm of Flapping Wing Flyers for Enhanced Mars Exploration. Credits: C. Kang.

Robot bees have been invented before, but Mars might be a place for them to serve a unique purpose. Earlier this year, it was revealed that Japanese chemist Eijiro Miyako led a team at the National Institute of Advanced Industrial Science and Technology (AIST) in developing robotic bees. So they’re not really bees; they’re drones. Miyako’s bee drones are actually capable of a form of pollination similar to real bees.

Bees have been the prime subject of many a sci-fi film, including The Savage Bees (1976), The Swarm (1978), and Terror Out of the Sky (1978). In the 21st century, bees have been upgraded: their robotic counterparts shall have an important role to play in future scientific exploration. And this role could very well be played out on the surface of Mars.

Now, NASA has begun to fund a project to create other AI-steered robotic bees for the future exploration of Mars. The main motivation for experimenting with such mini robots is the need for speed. The problem is this: the traditional rovers sent to Mars in the past move very slowly. NASA anticipates that an army of fliers would move significantly faster than their snail-like predecessors.

A number of researchers in Alabama are currently collaborating with a group based in Japan to design these mechanical drones. Size-wise, the drones are very similar to real bees; however, the wings are unnaturally large. The lengthened wingspan is a much-needed feature because the Red Planet’s atmosphere is far thinner than Earth’s. These small insect-like robots have been dubbed “Marsbees.”
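
The oversized wings drop out of the standard lift equation, L = ½·ρ·v²·S·C_L: Martian surface air is roughly 60 times thinner than Earth’s, so wing area (or flapping speed) has to grow dramatically to support the same weight, even with lower gravity helping. Illustrative numbers only, not the project’s design figures:

```python
def wing_area(weight_n, air_density, airspeed, lift_coeff=1.0):
    # Rearranged lift equation: S = L / (0.5 * rho * v^2 * C_L)
    return weight_n / (0.5 * air_density * airspeed ** 2 * lift_coeff)

EARTH_RHO = 1.225  # kg/m^3 at sea level
MARS_RHO = 0.020   # kg/m^3 near the Martian surface (approximate)

# A 1-gram flyer; surface gravity is ~9.81 m/s^2 on Earth, ~3.71 m/s^2 on Mars.
earth_area = wing_area(0.001 * 9.81, EARTH_RHO, airspeed=3.0)
mars_area = wing_area(0.001 * 3.71, MARS_RHO, airspeed=3.0)

print(f"Earth: {earth_area * 1e4:.0f} cm^2 of wing; Mars: {mars_area * 1e4:.0f} cm^2")
print(f"Mars needs ~{mars_area / earth_area:.0f}x the wing area at the same airspeed")
```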

If used, the Marsbees would travel in swarms and be able to return to some sort of a base, not unlike the way bees return to their hive. The base would likely be a rover providing a place for the Marsbees to be reenergized. But they would not have to come to this rover station to send out the information they’ve accumulated. Similar to satellites, they would be able to transmit their findings wirelessly. Marsbees would also likely be able to collect a variety of data. If their full development is feasible and economical, the future for Marsbees looks promising.

New “soft robots” are strong enough to lift heavy weights, delicate enough to pluck a raspberry

Scientists have already developed strong, capable robots. The next step is creating delicate, nimble robots — and that’s exactly what Colorado researchers have accomplished. They’ve outfitted robots with muscle-like features, offering them not only the power to manipulate heavy objects but also the gentleness to do so without damaging them.

Credits: Keplinger Research Group.

Already, robots are being used in a myriad of industries, mostly for repetitive tasks that require a lot of power. But they’re typically rigid and apply a fixed amount of force. Meanwhile, gentler robots, and robots that can apply varying force, can be useful in a number of tasks, from picking fruit to helping elderly or impaired people. So scientists from the University of Colorado in Boulder wanted to explore the softer side of these machines.

“We want to do the opposite,” said Christoph Keplinger, assistant professor in CU’s Department of Mechanical Engineering. “We want robots who will be our friends and help us.”

They sought inspiration from biology, focusing on two technologies:

Pneumatic actuators are powerful and relatively easy to fabricate, but they can be bulky and their movements tend to be rigid. Meanwhile, dielectric elastomer actuators are much faster and smoother, but they’re more prone to failure. Keplinger and his colleagues joined the two technologies in an innovative project — it’s like mixing the strength of an elephant with the delicacy of a hummingbird, they say.

“Think about a hummingbird and the high speed of its wings,” Keplinger said. “Then think about the power of the trunk of an elephant. At the same time, think about an octopus arm, which is extremely versatile and can squeeze through tiny spaces.”

Nick Kellaris, a materials science and engineering graduate student, left, and mechanical engineering graduate student Eric Acome look over liquified artificial “muscle” or soft robot material in the Keplinger Research Lab. Photo by Glenn Asakawa / University of Colorado Boulder.

This field of research is called “soft robotics” and their project is named “hydraulically amplified self-healing electrostatic” — or HASEL. HASEL eschews the traditional idea of a metallic droid, replacing it with a soft shell capable of mimicking the expansion and contraction of biological muscles. To make things even better, this robot can not only be built from cheap, readily available materials, but it’s also self-repairing.

Basically, the donut-shaped elastomer shell is filled with an insulating liquid (such as canola oil). It’s then hooked to a pair of electrodes and, when a voltage is applied, the liquid is displaced, shifting the shape of the soft shell. When the voltage is turned off or reduced, the grip is released. In terms of general physics, it works very much like a biological muscle. Through sensors, HASEL can also pick up environmental cues, making it even more lifelike.
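
Because the electrostatic force grows roughly with the square of the applied voltage, ramping the voltage up and down modulates how hard the shell squeezes. A toy grip-and-release profile with a made-up force constant, not the published actuator characterization:

```python
def grip_force(voltage, k=1.0e-8):
    # Toy model: actuation force grows ~quadratically with applied voltage.
    # k is a made-up lumped constant standing in for geometry and permittivity.
    return k * voltage ** 2  # newtons

# Ramp up to grip gently, hold, then cut the voltage to release.
profile_volts = [0, 2000, 4000, 6000, 6000, 6000, 0]
for step, volts in enumerate(profile_volts):
    print(f"step {step}: {volts:5d} V -> {grip_force(volts):.2f} N")
```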

“We draw our inspiration from the astonishing capabilities of biological muscle,” said Christoph Keplinger, senior author of both papers, an assistant professor in the Department of Mechanical Engineering and a fellow of the Materials Science and Engineering Program. “HASEL actuators synergize the strengths of soft fluidic and soft electrostatic actuators, and thus combine versatility and performance like no other artificial muscle before.”

Keplinger details his project in a pair of papers published in Science and Science Robotics. The team built three designs, and to show them off, they had the robots complete tasks that require both strength and tenderness. For instance, the robots were able to lift heavy weights, but they were also able to handle delicate objects such as a raspberry and a raw egg.

Eric Acome, lead author of the Science paper, and Nick Kellaris, lead author of the Science Robotics paper, say that they have high hopes for their design.

“We can make these devices for around 10 cents, even now,” said Nicholas Kellaris, also a doctoral student in the Keplinger group and the lead author of the Science Robotics study. “The materials are low-cost, scalable and compatible with current industrial manufacturing techniques.”

A muscle-like electrical actuator developed by researchers in the Keplinger lab. Photo by Glenn Asakawa/University of Colorado Boulder.

Robert Shepherd, a soft robotics expert at Cornell University, who was not involved in the study, told ScienceMag that this is a very big step forward, while Bobby Braun, dean of CU Boulder’s College of Engineering and Applied Science said that the research is “nothing short of astounding.”

Considering the performance, low price, and versatility of these robots, we could expect them to make a real impact on society in the near future.

“We’d like to do this as soon as possible to start making an impact on people’s lives,” Keplinger concludes.

Biology can help patch the flaws in our robots, metastudy reports

Cyborgs might still be a ways away, but “biohybrid” bots might be closer than you think, according to an international team of researchers.

Robot Brain.

Image via midnightinthedesert.

The term cyborg refers to any biomechanical entity that was born organic and later received mechanical augmentations, either to restore lost functionality or to enhance its abilities. It’s possible that cyborgs will become commonplace in the future, as people turn to robotic prosthetics to replace lost limbs, explore whole new senses through mechanical augmentation, or plug into a Neuralink-like artificial mind.

But there’s also a reverse side to the cyborg coin: the biohybrids — robots enhanced with living cells or tissues to make them more lifelike. Biological systems can bring a lot to the biohybrid table, such as muscle cell augmentations to help the bots perform subtle movements, or bacterial add-ons to help them navigate through living organisms — and unlike cyborgs, biohybrids are coming online today, according to a new metastudy.

Meatbots

The paper, penned by an international group of scientists and engineers, aims to get an accurate picture of the state of biohybrid robotics today. The field, they report, is entering a “deep revolution in both [the] design principles and constitutive elements” it employs.

“You can consider this the counterpart of cyborg-related concepts,” said lead author Leonardo Ricotti, of the BioRobotics Institute at the Sant’Anna School of Advanced Studies, in Pisa, Italy. “In this view, we exploit the functions of living cells in artificial robots to optimize their performances.”

In recent years we’ve seen robots of all shapes and sizes bringing increasing complexity to bear in both software and hardware. They’re on assembly lines moving and welding heavy metal pieces, and sub-millimeter robots are being developed to kill cancer cells or heal wounds from within the body.

One thing robots haven’t quite gotten right in all this time, however, is fine movement. Actuation, the coordination of movements, proved itself to be a persistent thorn in the side of robotics, the team writes. Robots can handle huge weights with impressive ease and fluidity. Alternatively, they can operate a laser cutter with perfect accuracy each and every time. But they have difficulty coordinating subtler actions, such as cracking an egg cleanly into a bowl, or caressing. Unlike animal movements, which start gently on a micro scale and lead up to large-scale motion, robots’ initial movements are jerky.

Another shortcoming, according to Ricotti, is that our bots are quite power hungry. They can’t hold a candle to the sheer energy efficiency of biological systems, refined by evolution almost to its limits over millions of years — a problem that’s particularly relevant in micro-robots, whose power banks are routinely larger than the robot itself.

Mixing living ‘parts’ into robots can solve these problems, he adds.

The team writes that muscles can provide the fine-accuracy actuation and steady movement that robots currently lack. For example, they showcase a group led by Barry Trimmer of Tufts University (Trimmer is also a co-author of the metastudy) that developed worm-like biohybrid robots powered by the contraction of insect muscle cells.

Co-author Sylvain Martel, of Polytechnique Montréal, is trying to solve the energy issue by outfitting his bots with bacterial treads. His work used magnetotactic bacteria, which move along magnetic field lines, to transport medicine to cancer cells. The method allows Martel’s team to guide the bacteria using external magnets, allowing them to target tumors or cells that have proven elusive in the face of traditional treatments.

Steel and sinew

Biohybrid robotics comes with its own set of drawbacks, however. Biological systems are notoriously more fragile than metal-borne robots, and they prove to be the weakest link in hybrid systems. Biohybrids can only operate in temperature ranges suitable for life (so no extreme heat or cold), are more vulnerable to chemical or physical damage, and so on. In general, if a living organism wouldn’t last too long in a certain place, neither would a biohybrid.

Finally, living cells need to be nourished, and that’s something we haven’t really learned how to do well in robots yet — so as of now, our biohybrids tend to be rather short-lived. But for all their shortcomings, biohybrid robots have a lot of promise. When talking about a manta-ray-like biobot developed by a team at Harvard last year, Adam Feinberg, a roboticist at Carnegie Mellon University, said that “by using living cells they were able to build this robot in a way that you just couldn’t replicate with any other material.”

“You shine a light, and it triggers the muscles to swim. You couldn’t replicate this movement with on-board electronics and actuators while keeping it lightweight and maneuverable.”

The paper Biohybrid actuators for robotics: A review of devices actuated by living cells has been published in the journal Science Robotics.

Burger-flipping robot will grill meat in 50 fast food restaurants

Scholars have warned that demand for low-skilled jobs will drop sharply following automation, and flipping burgers is definitely on the robo-overlord menu, as CaliBurger can attest. The fast-food restaurant chain, present all over the United States but also in countries like China, Sweden, Qatar, and Taiwan, said it will introduce a burger robot in fifty of its locations.

Flippy. Credit: Miso Robotics.

The kitchen assistant, known as ‘Flippy’, was designed by a startup called Miso Robotics which specializes in “technology that assists and empowers chefs to make food consistently and perfectly, at prices everyone can afford.”

It looks like a cart on wheels with only one arm and no legs. With six axes though, the arm has plenty of freedom of motion so the robot can perform a variety of tasks. In fact, Miso claims this robot is more akin to a self-driving car than an assembly line machine.

What they mean is that Flippy uses feedback loops that reinforce its good behavior, so it gets better with each flip of the burger. Unlike an assembly-line robot that needs to have everything positioned in an exact, ordered pattern, Flippy’s machine learning algorithms allow it to pick uncooked burgers from a stack or flip those already on the grill. Hardware like cameras helps Flippy see and navigate its surroundings, while sensors inform the robot when a burger is ready or still raw. Meanwhile, an integrated system that sends orders from the counter back to the kitchen tells Flippy just how many raw burgers it should be prepping.
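
That sensor-driven flow (camera finds the patty, thermal sensing decides when to act) can be sketched as a simple decision loop. The readings and thresholds below are invented for illustration; they are not Miso’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Patty:
    grill_slot: int        # where the camera located the patty
    surface_temp_c: float  # thermal-camera reading (invented values)
    flipped: bool = False

def next_action(patty: Patty) -> str:
    # Invented thresholds standing in for Flippy's learned policy.
    if not patty.flipped and patty.surface_temp_c >= 110:
        return "flip"
    if patty.flipped and patty.surface_temp_c >= 160:
        return "move to plate"
    return "wait"

grill = [Patty(1, 95.0), Patty(2, 118.0), Patty(3, 165.0, flipped=True)]
for patty in grill:
    print(f"slot {patty.grill_slot}: {next_action(patty)}")
```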

Miso engineers working on Flippy’s algorithms had to flip burgers themselves to get in the right mindset. The video below gives you an idea of how Flippy sees the grill.

‘Flippy cooks burgers perfectly — every time’

Flippy is supposed to be a kitchen assistant and can’t replace human workers entirely — not yet, at least. A human still has to finish the cooked burger, placing cheese on the grill or adding toppings like sauce. Momentum Machines, a company that has been working on its own burger bots for some years, is allegedly introducing these additional steps into its machines’ routine. It might not take too long before human presence in the burger-grilling kitchen is superseded.

Moving beyond burgers, it’s quite reasonable to imagine Flippy preparing other dishes like fish, chicken, and vegetables. Moreover, its compact size and adaptability mean that the machine can be installed in any restaurant kitchen as it is, with no additional hassle. Flexibility is the keyword here.

Over the next two years, Flippy bots will be installed in 50 CaliBurger restaurants around the world. One Pasadena restaurant is already enjoying the fruits of its labor. However, we can’t speak for the 2.3 million likely underpaid cooks currently employed in the United States. At least Miso’s CEO admits their product will put people out of jobs.

“Tasting food and creating recipes will always be the purview of a chef. And restaurants are gathering places where we go to interact with each other. Humans will always play a very critical role in the hospitality side of the business given the social aspects of food. We just don’t know what the new roles will be yet in the industry,” the company’s CEO and co-founder David Zito said. 

Just take a look at the following promo video and tell me that human worker’s face doesn’t spell ‘show-off!’.

On a more serious note, it’s clear nobody has any idea what will happen to all these displaced jobs. Tech startups are in a competition to be as disruptive as possible with not much regard for what happens next to the industry they’re affecting. Quite frankly, that might not be their responsibility. Nobody had any beef with Henry Ford when he flooded the market with millions of Model-Ts, pulling horse and buggies out of the streets. This time, however, it looks like a whole different ball game. Artificial intelligence and robotics are disrupting multiple industries at the same time. Vehicles, health, law, food. You name it.

The army’s amazing 1962 four-legged Pedipulator beat Star Wars to it by 15 years

The Star Wars franchise is one of the most amazing productions ever — the early movies, at least. It was so forward-thinking, so innovative, and so ahead of its time that it’s no surprise to see concepts from the movies come to life today. But sometimes, real life is stranger than fiction. Take General Electric’s 1962 four-legged, human-operated Pedipulator, which appeared 15 years before Star Wars’ AT-ST Walker.

Now on display at the US Army Transportation Museum at Fort Eustis, the GE quadruped, called the Pedipulator or “Walking Truck,” rests soundly. Developed in Pittsfield, Massachusetts, the vehicle was officially called a Cybernetic Anthropomorphous Machine (CAM), which GE developed on contract with the army to supply a vehicle able to push through dense vegetation, step over felled trees, and sidle around standing ones — all while nimbly carrying up to a half-ton of men and material.



But the same super-sensitive, hand-and-foot-controlled hydraulics that enabled the CAM to casually push aside a jeep, or gently paw a GE light bulb without breaking it, also made it impractical for prolonged battlefield use. Operators found the constant manipulation of the controls very fatiguing, leading the project to be mothballed.

Taken from GE “Walking Truck” brochure from 1968.

Eventually, the CAM’s sophisticated “force feedback” capability found reapplication undersea, where GE developed hydraulic arms for the world’s first aluminum submarine, the Aluminaut. Today, robotic arms appear on everything from hazmat vehicles to space shuttles.

 

Scientists develop new, adorable class of soft robots

Harvard researchers have revealed a cute, self-powered octopus-like robot. The robot is surprisingly resilient and can operate for up to eight minutes by itself, opening up new possibilities in robot design.

This image shows the Octobot, an entirely soft, autonomous robot. A pneumatic network (pink) is embedded within the Octobot’s body and hyperelastic actuator arms (light blue). Credit: Ryan Truby, Michael Wehner, and Lori Sanders, Harvard University.

Soft robots could revolutionize the industry. They’re more adaptable to many natural environments and are, ironically, often more resilient than their rigid counterparts because they can deform to fit their surroundings. However, there are some very big hurdles for soft robots – especially batteries.

“Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials,” researchers write in the study. “Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources.”


You can’t really fit conventional batteries on a soft robot… because batteries are hard. So, the big challenge is making these squishy bots fully autonomous — something Jennifer Lewis and her colleagues at Harvard University managed to overcome, creating the cute ‘octobot’ you see above.

“Creating a new class of fully soft, autonomous robots is a grand challenge, because it requires soft analogues of the control and power hardware currently used,” Lewis added in the study.

They used a combination of techniques to develop it, including 3D printing the pneumatic networks within the soft body. Octobot can operate autonomously for 4-8 minutes, but that run-time could be significantly improved by more sophisticated fuel management.

As with any nascent technology, there are no immediate applications in sight, but in the long run, soft robots could really be a game changer. Speaking with ZME Science, Ryan Truby, an author of the study and Ph.D. candidate at Harvard University’s Paulson School of Engineering and Applied Sciences, discussed potential applications for the technology:

“Soft robotics is definitely a field in its infancy,” he said. “The potential applications that are particularly exciting for soft robotic systems are those that sit at the human interface, such as wearable and biomedical technologies. Because these robotic systems are based on soft materials like silicone rubbers, they can be inherently safer than traditional robotic systems and possibly better suited for such applications. Additionally, we are finding that soft robots have potential application in environments where conventional robots might fail, such as underwater conditions. “

Michael Wehner, the co-lead author on the paper added:

“As Ryan points out, this is a new field so the “Killer App” is yet to be determined. Some early avenues to explore are in fields involving human-robot-interaction, a long-time focus area of mine.”

“As inherently soft, soft robots pose less risk to both humans and the robots themselves in unplanned interactions, which must be accounted for in unstructured environments.”

Another interesting point about the robot is its fuel. The Octobot system uses concentrated hydrogen peroxide, which is already a fairly eco-friendly option, the byproducts being oxygen, water, and heat. The hydrogen peroxide decomposes into oxygen gas and water vapor, as regulated by the microfluidic “soft controller,” powering actuation. This approach also opens up an intriguing possibility, as Truby explains:

“I think that in the future, it would be neat to see if a robot like the Octobot could possess the ability to produce hydrogen peroxide on-board using reagents from its environment. This could be done, for example, using a biochemical reaction that is regulated within the soft robot. However, this would be a tremendous challenge!”
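
For reference, the gas that inflates the actuators comes from the familiar, exothermic decomposition of hydrogen peroxide:

2 H₂O₂ → 2 H₂O (water vapor) + O₂ (oxygen gas) + heat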

Journal Reference: An integrated design and fabrication strategy for entirely soft, autonomous robots.

A college course has been using a robot as a teacher and no one even realized

When it comes to the list of things robots can’t do, social interaction is right at the top. But for the last term, a robot has been serving as a teaching assistant (TA) at the Georgia Institute of Technology… and no one even realized.

Monty, a telemanipulation prototype from Anybots. Photo by Jeff Keyzer.

A TA called Jill Watson helped with a course – a course on artificial intelligence, of course. Jill’s main tasks were responding to student emails and taking care of other mundane chores. She also got involved in forum discussions, posting short prompts and confirmations for questions.

“She was the person – well, the teaching assistant – who would remind us of due dates and post questions in the middle of the week to spark conversations,” student Jennifer Gavin told Melissa Korn at The Wall Street Journal. “It seemed very much like a normal conversation with a human being.”

Apparently, the students were quite satisfied with her activity, but they didn’t know she was a robot. Fellow student Shreyas Vidyarthi declared himself “flabbergasted” at the revelation, but not everyone was surprised. Their colleague Tyson Bailey said he wasn’t surprised when he learned about Jill’s identity, especially given the nature of the course.

Jill was “recruited” by Ashok Goel, a professor of computer science at Georgia Tech. He fed the algorithm 40,000 discussion forum posts to give it an idea of how a TA generally behaves on the class forum. The algorithm analyzes all the posts, and if it thinks it can respond to a query with more than 97 percent precision, it swoops in and answers – all powered by IBM’s Watson analytics system.
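
That decision rule (answer only when confidence clears a high bar, otherwise stay silent and leave the question to a human) is easy to sketch. The classifier below is a canned stand-in, not the actual Watson API:

```python
CONFIDENCE_THRESHOLD = 0.97  # only answer when very sure, as the article describes

def classify(question):
    # Stand-in for the model trained on ~40,000 past forum posts:
    # returns (best known answer, confidence).
    canned = {
        "when is assignment 3 due?": ("Assignment 3 is due Friday at 5 pm.", 0.99),
    }
    return canned.get(question.strip().lower(), ("", 0.10))

def respond(question):
    answer, confidence = classify(question)
    if confidence > CONFIDENCE_THRESHOLD:
        return answer  # swoop in and reply automatically
    return None        # stay silent; a human TA picks it up

print(respond("When is assignment 3 due?"))   # answered automatically
print(respond("Can you explain backprop?"))   # None: left to the humans
```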

While this was just a university course, it may have huge significance. Jill answering routine questions saved a lot of time for the course teachers, and the same could go for many other places. Plenty of forums, both educational and otherwise, could benefit from having an automated response machine – it would save time for the professors and admins, and the students and users could definitely use it. I, for one, welcome our new robot TAs.

Scientists are teaching robots to say ‘No’ to commands. Is that a good thing?

In the 1940s, when real robots, let alone artificial intelligence, weren’t around, famed sci-fi author Isaac Asimov set forth a set of laws known as the “Three Laws of Robotics”. These state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Above all else, it seems, a robot must obey a human at all times, except when such an order might harm a human. Researchers at Tufts University’s Human-Robot Interaction Lab, however, use their own set of rules – one where robots can reject a command. That sounds like the plot of a bad movie about human annihilation at the hands of artificial overlords.

If you think about it for a moment, though, it makes sense. Humans aren’t exactly perfectly rational, so we sometimes make, for lack of a better word, stupid decisions. Passing these decisions on to robots could potentially have drastic consequences. What the Tufts researchers are suggesting is applying to robots, in a similar fashion, the reasoning humans use to assess a command. According to IEEE, linguistics theory says humans assess a request by following so-called felicity conditions. These are:

  1. Knowledge: Do I know how to do X?
  2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
  3. Goal priority and timing: Am I able to do X right now?
  4. Social role and obligation: Am I obligated based on my social role to do X?
  5. Normative permissibility: Does it violate any normative principle to do X?

The first three are self-explanatory. The fourth condition basically asks, “Can I trust you? Who are you to tell me what to do?” The fifth asks, “OK, but if I do that, do I break any rules?” (civil and criminal laws for humans, and possibly an altered version of Asimov’s Laws of Robotics for robots). A toy version of these checks is sketched in code below, and in the videos that follow, Tufts researchers demonstrate their work.
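
This hypothetical sketch is much cruder than the Tufts system, but it mirrors the logic of the videos: refuse unknown or unsafe commands unless a trusted human explicitly takes responsibility:

```python
class Robot:
    SKILLS = {"walk forward", "turn", "sit down"}

    def violates_norm(self, command):
        # Stand-in check: here, walking forward means walking off the table.
        return command == "walk forward"

    def assess(self, command, requester_trusted=False, override_given=False):
        if command not in self.SKILLS:
            return "I don't know how to do that."          # 1. knowledge
        # (Conditions 2 and 3, capacity and timing, are assumed satisfied.)
        if self.violates_norm(command):
            if not requester_trusted:
                return "You lack the authority for that."  # 4. social role
            if not override_given:
                return "Sorry, that seems unsafe."         # 5. permissibility
        return "OK, doing it."

robot = Robot()
print(robot.assess("walk forward"))                          # refused: unsafe
print(robot.assess("walk forward", requester_trusted=True))  # still unsafe
print(robot.assess("walk forward", requester_trusted=True,
                   override_given=True))                     # "I will catch you" -> OK
```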

First, a robot set on a table is ordered to walk across it. The robot registers, however, that by doing so it would fall off and possibly damage itself, so it rejects the order. The researcher then changes the framework by telling the robot “I will catch you”, and the robot, amazingly, complies. It’s worth noting that the robot didn’t have these exact conditions preprogrammed; natural language processing lends the robot a general understanding of what the human means: “You will not violate Rule X because the circumstances that would cause damage are rendered void”.

Next, the robot is instructed to “move forward” through an obstacle, which the robot graciously disobeys because it violates a set of rules that say “obstacle? don’t budge any further”. So, the researcher asks the robot to disable its obstacle detection system. In this case, felicity condition #4 isn’t met, because the human doesn’t have the required privileges.

In the final video, the same situation is presented only now the human that makes the command has the required trust necessary to fulfill the command.

At Tufts, the researchers are also working on a project called Moral competence in Computational Architectures for Robots, which seeks to “identify the logical, cognitive, and social underpinnings of human moral competence, model those principles of competence in human-robot interactions, and demonstrate novel computational means by which robots can reason and act ethically in the face of complex, practical challenges.”

“Throughout history, technology has opened up exciting opportunities and daunting challenges for attempts to meet human needs. In recent history, needs that arise in households, hospitals, and on highways are met in part by technological advances in automation, digitization, and networking, but they also pose serious challenges to human values of autonomy, privacy, and equality. Robotics is one of the liveliest fields where growing sophistication could meet societal needs, such as when robotic devices provide social companionship or perform highly delicate surgery. As computational resources and engineering grow in complexity, subtlety, and range, it is important to anticipate, investigate, and as much as possible demonstrate how robots can best act in accordance with ethical norms in the face of complex, demanding situations.”

“The intricacy of ethics, whether within the human mind or in society at large, appears to some commentators to render the idea of robots as moral agents inconceivable, or at least inadvisable to pursue. Questions about what actions robots would have the leeway, direction, or authority to carry out have elicited both fears and fantasies. “

Could robots, in the future, become the ultimate ethical agents? Imagine all the virtues, morals, and sound ethics amassed through countless ages downloaded inside a computer. An android Buddha. That would be interesting, but in the meantime the Tufts researchers are right: there are situations when robots should disobey a command, simply because it might be stupid. At the same time, this sets a dangerous precedent. Earlier today, I wrote about a law passed by Congress that regulates mined resources from space. Maybe it’s time we see an international legal framework that compels developers not to implement certain rules in their robots’ programming, or conversely, to implement certain requirements. That would definitely be something everybody agrees is warranted and important – if only it didn’t interfere with the military.

RoboHow: the Wikipedia that teaches robots how to cook

Developing robots that behave like us is one of the holy grails of modern robotics. And although recent advances in AI technology, human mimicking, and automation have brought us closer than ever to that goal and given machines a better sense of how to navigate their surroundings, there is a lot to improve in the way they work and interact with humans.

D’aaaaaaaaaaaaaw!
Image via today

But fear not! A European initiative founded in 2012, dubbed RoboHow, takes up the challenge by creating systems that help robots learn and share information with each other (even by using actual language), mimicking human learning processes. The aim of the platform is to do away with pre-programming our machines to perform certain tasks, and instead to teach them how to put information together, use it, and remember it for the future – to “program” themselves.

The German robot PR2 – backed by the RoboHow team – is an example of a machine designed to take advantage of this new approach.

The PR2 is engineered to process written instructions from websites like WikiHow and then physically perform the associated tasks. After being tested in a bartending setting, the robot is now taking an interest in the kitchen – specifically, in pancakes. That may seem like a small task, but it requires an intricate framework of prior knowledge of micro-tasks that humans take for granted — such as the amount of pressure required to open a container.

We should have all known the Robot Revolution started when we saw this pancake.
Image via pancakeoftheweek

Ideally, the PR2 would gain that knowledge through experimentation, use it in its environment, and communicate what it has learned to an online database called OpenEase. This would create an open, easily accessible repository of growing knowledge for any robot to tap into and learn from.

Researchers told MIT Technology Review that they are also considering techniques that would let robots learn from observing humans at work performing certain tasks. One such approach would be studying virtual-reality data recorded after humans have performed those tasks wearing tracking gloves.

The ultimate goal is a set of robots that can adapt to changing environments and instructions and react in an appropriate manner. The biggest barrier is translating the meaning of language into algorithms. Bridging that gap would be a giant step forward in developing robots that learn and grow like humans.

MIT tackling more serious science: they program beer-delivering robots

The Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory is on the brink of revolutionizing relaxation with its recent breakthrough: it has programmed two robots that can deliver beverages.

What’s yer poison?
Image via wikimedia

The robots, called PR2, have coolers attached to them and are programmed to roam around separate rooms and ask people if they want a drink. Should a person say yes, the silicone-powered bartender wheels over to a larger robot that places a beer in the cooler, and then returns to deliver it to the customer.

While the task of drink-fetching may seem small and underwhelming for a robot, programming a unit that can successfully perform it is an incredible leap forward in robotics. The study remarks that one advantage of testing a robot on bartending is that this environment allows the researchers to develop the program that drives the little PR2s with ease.

“As autonomous personal robots come of age, we expect certain applications to be executed with a high degree of repeatability and robustness. In order to explore these applications and their challenges, we need tools and strategies that allow us to develop them rapidly. Serving drinks (i.e., locating, fetching, and delivering), is one such application with well-defined environments for operation, requirements for human interfacing, and metrics for successful completion,” the study reads.

And while the applications PR2 can currently be employed in are rather limited, the team behind them feels that specialization, rather than generalization, is the way to go for robotic progress. As such, they advocate the creation of an “app store” of sorts: a database of specific, useful robotic behaviors that can be run to perform specific tasks (a sketch of the idea follows the quote below). One app would allow the robot to butler, another to clean, or sew, or cook, and so on.

“This view of encapsulating particular functionality is gaining acceptance across the research community and is an important component for the near and long term visions of endowing personal robots with abilities that will better assist people.”
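
In code, such an “app store” boils down to a registry mapping task names to interchangeable behaviors. A hypothetical sketch (the registry and behaviors are invented, not MIT’s framework):

```python
ROBOT_APPS = {}  # app name -> behavior function

def app(name):
    # Decorator that registers a behavior under a given app name.
    def register(behavior):
        ROBOT_APPS[name] = behavior
        return behavior
    return register

@app("butler")
def serve_drinks(robot):
    return f"{robot} is locating, fetching, and delivering drinks"

@app("cleaner")
def tidy_room(robot):
    return f"{robot} is tidying the room"

def run_app(name, robot="PR2"):
    # Launch whichever specialized behavior the task calls for.
    return ROBOT_APPS[name](robot)

print(run_app("butler"))
print(run_app("cleaner"))
```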

It can also point astonishingly well.
Image via popsci

Even in the relatively well-constrained bounds of a specific “application”, endowing a personal robot with autonomous capability will require integrating many complex subsystems; most robots will need some facility in perception, motion planning, reasoning, navigation, and grasping. Each of these subsystems is well-studied and validated individually, but their seamless coordination has proven itself a tricky prize for roboticists, up to now.

“Specific challenges integrators face include coping with multiple points of failure from complicated subsystems, computational constraints that may only be encountered when running large sets of components, and reduced performance from subsystems during interoperation.”

There is also the issue of how robots integrate and coordinate with each other. I’ll let Ariel Anders, one of the MIT scientists working on PR2, explain in this video:

The MIT robots are considered groundbreaking (and thankfully not glass-shattering), and I personally feel this is a great leap forward; I can’t wait to have a robot butler of my own. The technology shows great promise, and engineers hope to eventually use it as a basis for more crucial missions. The creators said that they hope to one day use the robots at emergency shelters to take orders for bottles and crackers.

You can read the full abstract here.

 

 

Unmanned robots embark on epic voyage across the Pacific Ocean

Each Wave Glider will collect valuable data about the status of the Pacific Ocean's current health.

This weekend, four unmanned robot vehicles set out to cross the Pacific Ocean on the longest voyage of this kind so far attempted. During their 300-day trek, the Wave Glider craft will gather immense amounts of data on the composition and quality of seawater, providing researchers with invaluable information about the current status of the ocean’s health.

The robots, designed by Liquid Robotics, were launched from the St Francis Yacht Club on the edge of San Francisco harbour on 17 November. Initially, the four craft will travel together until they reach Hawaii, after which they’ll split into two pairs – one will cross the ocean towards Australia, while the other heads to Japan to support a dive on the Mariana Trench (the deepest part of the ocean). In total, 33,000 nautical miles (61,000 km) will be covered, and curious viewers can keep up to date with the robots’ live progress on Google Earth.

“Most of the ocean remains unexplored with less than 10 percent of it mapped out. This expedition creates an opportunity for students, marine researchers, and aspiring oceanographers to follow these brave Liquid Robotics ocean robots as they cross the Pacific virtually through the Ocean Showcase on the Google Earth website,” says Jenifer Austin Foulkes, Ocean in Google Earth manager.

Surprisingly, the construction of the robots seems quite fragile. Made out of two parts, the upper half of the Wave Glider is shaped like a stunted surfboard and is attached by a cable to a lower part fitted with a series of fins and a keel. Around 2.25 million data points will be gathered during the voyage as the unmanned craft sample never-before-surveyed waters with their sensors. To power the sensors, a solar panel was installed on the upper part of the craft, at the surface. To me, it’s quite remarkable that the Wave Glider will be able to withstand the currents and vicious waves of the Pacific, but obviously the engineers who made them know what they’re doing.
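
As a quick sanity check on that figure, 2.25 million points spread over four gliders and 300 days works out to a modest sampling cadence:

```python
data_points = 2_250_000  # total expected for the expedition
gliders = 4
days = 300

per_glider_per_day = data_points / (gliders * days)
print(f"{per_glider_per_day:.0f} points per glider per day")            # ~1875
print(f"{per_glider_per_day / (24 * 60):.1f} points per glider-minute") # ~1.3
```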

“At Virgin Oceanic, our mission is taking the next step in human exploration to the last frontier – the very bottom of our seas. I will be piloting to the bottom of the Mariana Trench to explore the deepest point of the Pacific Ocean,” says Chris Welsh, Virgin Oceanic co-founder and pilot.

“Wave Gliders are one of the most promising solutions for major, low cost, long-range ocean exploration. I look forward to seeing the results as their Wave Gliders cross over the Mariana Trench, which is our first major dive location.”

Foambot can build new robots on the spot [VIDEO]

Another one from the realm of James Cameron’s movies – a robot that can build other robots. While the technology is still not exactly Skynet material, the University of Pennsylvania Modlab’s foambot is an extremely interesting working concept, capable of deploying itself with modular robot components and practically assembling a new bot on the spot, depending on the task it needs to perform.

A bunch of robot parts, like actuators, are lined up in a particular order depending on the type of contraption you need to assemble, after which the foambot comes along and sprays foam, which hardens and locks the new robot into its moving shape. The foam is a simple, commercially available material able to expand to 20 times its own size. It might not look like much, but future variations might prove indispensable for certain applications.

Almost all of today’s robots are designed to serve one particular task; with this kind of tech, however, after a few tweaks to build a more practical variant, you could basically deploy bots catering to a bunch of different situations. The foambot would definitely be very welcome in applications with numerous unpredictable environmental parameters, like space exploration or search and rescue.

Check out the self-deploying robot, developed by a team led by Shai Revzen, in two separate instances – one as a narrow, snake-like robot, the other as a quadruped bot – in the videos right below.

Teaching a robot how to sword fight might support safety advances

If you think giving a robot a sword and teaching it how to use one is a bad idea, you may be just about half wrong. A young robotics PhD student at Georgia Tech has taught a robot how to sword fight – strictly in terms of defending itself against attacks – in order to simulate the sudden movements of humans through robotic environments and learn to avoid them as well.

“In order to deploy safe and flexible robots for service and automation, robots must act safely in close contact with humans,” said Tobias Kunz, the Georgia Tech researcher.

The basis of his idea is that, by programming a robot ninja (I couldn’t help myself), you teach it – as in any sword fight – how to predict human movement and how to react to it. In this case, a robot could retract an arm or circle around a human if the person comes within a certain proximity or moves according to some kind of predefined pattern.
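The actual paper frames this as strategic motion planning, which is far more sophisticated; purely to illustrate the predict-and-react loop described above, here is a minimal Python sketch with a made-up threshold and a naive constant-velocity predictor (neither is from Kunz’s work):

```python
import math

SAFE_DISTANCE = 0.5  # metres; placeholder threshold, not from the paper

def predict_human_position(pos, vel, dt=0.2):
    """Naive constant-velocity prediction of where the human will be."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def choose_action(robot_pos, human_pos, human_vel):
    """Retract the arm if the predicted human position comes too close."""
    future = predict_human_position(human_pos, human_vel)
    if math.dist(robot_pos, future) < SAFE_DISTANCE:
        return "retract_arm"
    return "continue_task"

# A human 0.6 m away, moving towards the robot at 1 m/s:
print(choose_action((0.0, 0.0), (0.6, 0.0), (-1.0, 0.0)))  # -> retract_arm
```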

It’s a very interesting idea, which could lead to impressive advances by making robots dynamically safe. So far, his model is only virtual – you can view it below – but as the footage shows, it could well turn out to be applicable in practice.

Kunz worked together with colleagues Peter Kingston, Mike Stilman, and Magnus Egerstedt; their paper, titled “Dynamic Chess: Strategic Planning for Robot Motion,” was presented this week at the IEEE International Conference on Robotics and Automation (ICRA).

[via IEEE Spectrum]

How would you respond to being touched by a robotic nurse?

Being touched by a careful nurse and feeling taken care of is very important, and often neglected; that sense of comfort and tranquility might be just what gives the patient an extra boost. Touching patients can lead to numerous responses, from calmness to discomfort, from intimacy even to aggression. But how would people react if they were touched by a robot? Now that’s an even more sensitive issue. Would they dislike it, or take it in stride? According to a new study from the Georgia Institute of Technology, people generally have a positive response towards the robot – but it all depends on what they think its intention is.

“What we found was that how people perceived the intent of the robot was really important to how they responded. So, even though the robot touched people in the same way, if people thought the robot was doing that to clean them, versus doing that to comfort them, it made a significant difference in the way they responded and whether they found that contact favorable or not,” said Charlie Kemp, assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.

The robotic nurse, named Cody, touched and wiped the “patients’” forearm; what was extremely interesting was that even though Cody touched them in exactly the same way, the subjects responded far better when they believed the robot intended to clean their arm than when they believed Cody intended to comfort them.

“There have been studies of nurses and they’ve looked at how people respond to physical contact with nurses,” said Kemp, who is also an adjunct professor in Georgia Tech’s College of Computing. “And they found that, in general, if people interpreted the touch of the nurse as being instrumental, as being important to the task, then people were OK with it. But if people interpreted the touch as being to provide comfort … people were not so comfortable with that.”

The study also had another goal: it tested whether people responded more favorably when the robot indicated that it was going to touch them, versus touching them without saying anything. The results were a little surprising, indicating that people liked it better when they were touched without warning.

“The results suggest that people preferred when the robot did not actually give them the warning,” said Tiffany Chen, doctoral student at Georgia Tech. “We think this might be because they were startled when the robot started speaking, but the results are generally inconclusive.”

With the robot industry constantly developing, it’s obvious by now that numerous tasks to be performed by robots require touching humans, so people’s response to this touch is extremely important, especially in healthcare. The results seem to indicate that people aren’t really that scared of robots, and don’t necessarily dislike being aided by them, but there’s still a long way to go before we can say that robot nurses, even if perfectly capable, can start taking care of people.

Kaspar the friendly robot – helping autistic children smile

Pictured on the left is Eden Sawczenko, an autistic four-year-old girl from Stevenage who has had a lot of trouble bonding with other children, unable to understand emotions and frowning upon them. Her best friend in the world is Kaspar, a very friendly, human-like, child-sized robot built by scientists at the University of Hertfordshire specifically to help autistic children lead a normal life.

The robot is brought to the pre-school for autistic children in Stevenage once a week, where the children can each play with Kaspar for about 10 minutes, alongside a scientist who remotely controls the robot.

“She’s a lot more affectionate with her friends now and will even initiate the embrace,” said Claire Sawczenko, Eden’s mother.

Kaspar is quite adorable, actually. If you look closely, the robot seems quite lovable with those shaggy locks, silly baseball cap and striped red socks. The eyes… well, OK, those freak me out too.

“Children with autism don’t react well to people because they don’t understand facial expressions,” said Ben Robins, a senior research fellow in computer science at the University of Hertfordshire who specializes in working with autistic children.

“Robots are much safer for them because there’s less for them to interpret and they are very predictable.”

The robot is programmed to interact with children in all sorts of ways, like smiling, frowning, laughing, blinking or waving his arms. Kaspar knows only a handful of tricks, like saying “Hello, my name is Kaspar. Let’s play together,” laughing when his sides or feet are touched, raising his arms up and down, or hiding his face with his hands and crying out “Ouch. This hurts,” when he’s slapped too hard. Even so, he’s considered the most advanced model in the world among similar autism-related projects in the US, Canada or Japan.
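The article doesn’t detail Kaspar’s software, but the behaviours listed above boil down to a simple event-to-response table; here’s a hypothetical Python sketch of that mapping (the event names and threshold are my own inventions, not the Hertfordshire team’s code):

```python
# Hypothetical event-to-response table for the behaviours described above;
# the real Kaspar software is not public, so names and values are invented.

SLAP_THRESHOLD = 5.0  # arbitrary force units, placeholder only

def respond(event: str, force: float = 0.0) -> str:
    if event in ("touch_sides", "touch_feet"):
        return "laugh"
    if event == "slap" and force > SLAP_THRESHOLD:
        return "hide face and cry out: Ouch. This hurts."
    if event == "greet":
        return "say: Hello, my name is Kaspar. Let's play together."
    return "blink"  # default idle behaviour

print(respond("slap", force=7.5))
# -> hide face and cry out: Ouch. This hurts.
```

Predictability is the whole point: a fixed, legible mapping from touch to reaction is exactly what makes the robot easier for the children to read than a human face.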

The first model was introduced in 2005, while the most recent one is covered in silicone patches that feel like skin – to help children become more comfortable with touching people – and can play Nintendo Wii with the children. So far, 300 autistic children have had the chance to meet and play with Kaspar, and researchers hope that the US$2,118 development cost could drop to just a few hundred dollars if mass production of the robot some day becomes reality.

Although no tangible data on the children’s progress is available, experts are confident in Kaspar’s ability to help autistic children improve their social skills and learn basic emotions. Nevertheless, it’s enough to hear Eden’s mother talk about her daughter’s progress to understand that no scientific statistics or polls are necessary.

“Before, Eden would make a smiley face no matter what emotion you asked her to show,” she said. “But now she is starting to put the right emotion with the right face. That’s really nice to see.”

Robot archer learns how to aim and fire

The future is here, baby! Robot archers, that’s what it’s all about! This little humanoid robot, nicknamed iCub, may only be shooting arrows tipped with suction cups, but hey – you have to start somewhere! Italian researchers have developed an algorithm that can teach the robot how to shoot arrows.

So, after being taught how to hold the bow and release the arrow, the robot learns to aim on its own, getting better with every shot. It took just eight shots to hit the bull’s eye.

“The learning algorithm, called ARCHER (Augmented Reward Chained Regression), was developed and optimized specifically for problems like the archery training, which have a smooth solution space and prior knowledge about the goal to be achieved. In the case of archery, we know that hitting the center corresponds to the maximum reward we can get. Using this prior information about the task, we can view the position of the arrow’s tip as an augmented reward,” says Dr. Petar Kormushev of the Italian Institute of Technology (IIT).

How does it work? Again, Dr. Kormushev:

“ARCHER uses a chained local regression process that iteratively estimates new policy parameters which have a greater probability of leading to the achievement of the goal of the task, based on the experience so far. An advantage of ARCHER over other learning algorithms is that it makes use of richer feedback information about the result of a rollout.”
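IIT hasn’t released ARCHER’s code here, but the two quotes suggest the flavour of the update: weight past rollouts by how close the arrow landed to the target, then regress new aiming parameters from that weighted experience. Below is a heavily simplified Python sketch with a toy one-to-one “simulator” standing in for iCub – entirely my illustration, not the authors’ implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.zeros(2)  # bull's-eye at the origin of the target plane

def shoot(params):
    """Toy stand-in for iCub: the arrow lands at the aim parameters plus noise."""
    return params + rng.normal(scale=0.05, size=2)

params = rng.normal(scale=1.0, size=2)  # initial (bad) aim parameters
rollouts = []                           # history of (parameters, landing point)

for shot in range(8):
    hit = shoot(params)
    rollouts.append((params.copy(), hit))
    # Augmented reward: use the arrow tip's full 2-D landing position,
    # not just a scalar hit/miss score.
    weights = np.array([np.exp(-10.0 * np.linalg.norm(h - TARGET) ** 2)
                        for _, h in rollouts])
    weights /= weights.sum()
    # Reward-weighted local regression: each rollout proposes corrected
    # parameters (its own params minus its observed error), and proposals
    # from better rollouts get more weight.
    params = sum(w * (p - (h - TARGET)) for w, (p, h) in zip(weights, rollouts))
    print(f"shot {shot + 1}: landed at {hit.round(3)}")
```

In this toy version the aim converges within a handful of shots because the simulator maps parameters to landing points one-to-one; the real robot’s mapping is far messier, which is exactly why the chained regression over accumulated rollouts matters.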


[via IIT]