Tag Archives: robot

These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and real space. The basis of this field, evolutionary computing, sees robots with virtual genomes ‘mate’ to ‘reproduce’ improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats way beyond human abilities. But what happens when unstable situations or environments call for robots never seen on earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion-dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there were a better alternative? What if there was a type of artificial intelligence that could take lessons from evolution to generate robots that can adapt to their environment? It sounds like something from a sci-fi novel — but it’s exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary theory of variation and selection, these robots can optimize their descendants depending on a set of activities over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” The approach offers a way to explore evolutionary principles and pose an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a family of algorithms inspired by biological evolution, in which candidate solutions are generated and constantly “evolved”. Each new generation discards the least fit solutions and introduces small adaptive changes, or mutations, producing a cyber version of survival of the fittest. It’s a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
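To make the idea concrete, here is a minimal, generic sketch of such an evolutionary loop in Python. It is purely illustrative — the genome encoding and the fitness function are placeholders, not ARE’s actual implementation:

```python
import random

GENOME_LEN = 8
POP_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder task: maximise the sum of the genes.
    return sum(genome)

def crossover(parent_a, parent_b):
    # Single-point recombination of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Small random tweaks, analogous to biological mutation.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Reproduction: refill the population with mutated offspring.
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print("best fitness:", fitness(max(population, key=fitness)))
```

In ARE’s case, the “fitness” score comes from how well a physical or simulated robot performs its task, and the genome encodes a body plan and controller rather than a simple list of numbers.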

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two parent robots come together to mingle their virtual genomes and create improved young incorporating both their genetic codes.

The newly evolved offspring is built autonomously via a 3D printer, after which a mechanical assembly arm, guided by the inherited virtual genome, selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the artificial system wires up a Raspberry Pi computer, acting as a brain, to the sensors and motors – software representing the evolved brain is then downloaded from both parents.

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo a period of brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants because of breeding between different ‘species’. For example, a parent with wheels might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But the inherited brain may struggle to control the new body, so an algorithm is run as part of the learning stage to refine the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they can proceed to the next phase: testing.
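The article doesn’t detail the learning algorithm itself; as a toy illustration only, this kind of lifetime refinement can be sketched as a simple hill climb over the inherited controller’s parameters (every name below is a placeholder, not ARE’s code):

```python
import random

def evaluate(controller_params):
    # Placeholder for a trial in the simplified environment;
    # returns a score for how well the brain drives the new body.
    return -sum((p - 0.5) ** 2 for p in controller_params)

def refine(inherited_params, trials=20, step=0.05):
    best, best_score = list(inherited_params), evaluate(inherited_params)
    for _ in range(trials):
        candidate = [p + random.gauss(0, step) for p in best]
        score = evaluate(candidate)
        if score > best_score:      # keep only changes that help the brain
            best, best_score = candidate, score
    return best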

2. Selection of the fittest – who can reproduce?

ARE uses a specially built, inert nuclear reactor housing for testing, where young robots must identify and clear radioactive waste while avoiding various obstacles. After completing the task, the system scores each robot on its performance and uses that score to determine which robots will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.

Evolutionary roboticist and ARE researcher Guszti Eiben explains why this sped-up evolution works: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

This parallel universe entails the creation of a digital version of every mechanical infant in a simulator once mating has occurred, which enables the ARE researchers to build and test new designs within seconds, identifying those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard,” she says. “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may fill more immediate needs. And as climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve even further as we step back and hand them the reins of their own virtual lives. How this will affect the human race remains to be seen.

The swarm is near: get ready for the flying microbots

Imagine a swarm of insect-sized robots capable of recording criminals for the authorities undetected or searching for survivors caught in the ruins of unstable buildings. Researchers worldwide have been quietly working toward this but have been unable to power these miniature machines — until now.

A 0.16 g microscale robot that is powered by a muscle-like soft actuator. Credit: Ren et al (2022).

Engineers from MIT have developed powerful micro-drones that can zip around with bug-like agility, which could eventually perform these tasks. Their paper in the journal Advanced Materials describes a new form of synthetic muscle (known as an actuator) that converts energy sources into motion to power these devices and enable them to move around. Their new fabrication technique produces artificial muscles, which dramatically extend the lifespan of the microbot while increasing its performance and the amount it can carry.  

In an interview with Tech Xplore, Dr. Kevin Chen, senior author of the paper, explained that they have big plans for this type of robot:

“Our group has a long-term vision of creating a swarm of insect-like robots that can perform complex tasks such as assisted pollination and collective search-and-rescue. Since three years ago, we have been working on developing aerial robots that are driven by muscle-like soft actuators.”

Soft artificial muscles contract like the real thing

Your run-of-the-mill drone flies using rigid actuators, which can be driven with relatively high voltage and power, but a robot at this miniature scale couldn’t carry such a heavy power supply. So-called ‘soft’ actuators are a far better solution, as they’re far lighter than their rigid counterparts.

In their previous research, the team engineered microbots that could perform acrobatic movements mid-air and quickly recover after colliding with objects. But despite these promising results, the soft actuators underpinning these systems required more electricity than could be supplied, meaning an external power supply had to be used to propel the devices.

“To fly without wires, the soft actuator needs to operate at a lower voltage,” Chen explained. “Therefore, the main goal of our recent study was to reduce the operating voltage.”

In this case, the device would need a soft actuator with a large surface area to produce enough power. However, it would also need to be lightweight so a micromachine could lift it.

To achieve this, the group opted for soft dielectric elastomer actuators (DEAs) made from layers of a flexible, rubber-like solid known as an elastomer, whose polymer chains are held together by relatively weak bonds – permitting it to stretch under stress.

The DEA used in the study consists of a long strip of elastomer only 10 micrometers thick (roughly the diameter of a red blood cell) sandwiched between a pair of electrodes. These are then wound into a 20-layer ‘tootsie roll’ to expand the surface area and create a ‘power-dense’ muscle that deforms when a current is applied, similar to how human and animal muscles contract. In this case, the contraction causes the microbot’s wings to flap rapidly.
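The article doesn’t spell out the physics, but the standard relation for dielectric elastomer actuators shows why both a thin film and stacked layers matter: the electrostatic (Maxwell) pressure squeezing the elastomer scales with the square of the electric field,

```latex
p \;=\; \varepsilon_0 \varepsilon_r E^2 \;=\; \varepsilon_0 \varepsilon_r \left(\frac{V}{d}\right)^2
```

where V is the applied voltage and d is the film thickness. Because the pressure depends on V/d, thinning the elastomer from 10 micrometers toward 1 micrometer would let the same actuation stress be reached at a fraction of the voltage — exactly the direction the team says it wants to push.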

A microbot that acts and senses like an insect

A microscale soft robot lands on a flower. Credit: Ren et al (2022).

The result is an artificial muscle that forms the compact body of a robust microrobot that can carry nearly three times its weight (despite weighing less than one-quarter of a penny). Most notably, it can operate with 75% lower voltage than other versions while carrying 80% more payload.

They also demonstrated a 20-second hovering flight, which Chen says is the longest recorded by a sub-gram robot. The actuator was still working smoothly after 2 million cycles – far outpacing the lifespan of other models.

“This small actuator oscillates 400 times every second, and its motion drives a pair of flapping wings, which generate lift force and allow the robot to fly,” Chen said. “Compared to other small flying robots, our soft robot has the unique advantage of being robust and agile. It can collide with obstacles during flight and recover and it can make a 360 degree turn within 0.16 seconds.”

The DEA-based design introduced by the team could soon pave the way for microbots that work using untethered batteries. For example, it could inspire the creation of functional robots that blend into our environment and everyday lives, including those that mimic dragonflies or hummingbirds.

The researchers add:

“We further demonstrated open-loop takeoff, passively stable ascending flight, and closed-loop hovering flights in these robots. Not only are they resilient against collisions with nearby obstacles, they can also sense these impact events. This work shows soft robots can be agile, robust, and controllable, which are important for developing next generation of soft robots for diverse applications such as environmental exploration and manipulation.”

And while they’re thrilled about producing workable flying microbots, they hope to reduce the DEA thickness to only 1 micrometer, which would open the door to many more applications for these insect-sized robots.

Source: MIT

Fields in North America will see their first robot tractors by the end of the year

American farm equipment manufacturer John Deere has teamed up with French agricultural robot start-up Naio to create a driverless tractor that can plow by itself and be supervised by farmers through a smartphone.

Image credits CES 2022.

There are more people alive in the world today than ever before, and not very many of us want to work the land. A shortage of laborers is not the only issue plaguing today’s farms, however: climate change, and the need to limit our environmental impact, are further straining our ability to produce enough food to go around.

In a bid to address at least one of these problems, John Deere and Naio have developed a self-driving tractor that can get fields ready for crops on its own. It is a combination of John Deere’s 8R tractor, a plow, a GPS suite, and 360-degree cameras, which a farmer can control remotely from a smartphone.

Plowing ahead

The machine was shown off at the Consumer Electronics Show in Las Vegas, an event that began last Wednesday. According to a presentation held at the event, the tractor only needs to be driven into the field, after which the operator can send it on its way with a simple swipe on their smartphone.

The tractor is equipped with an impressive sensor suite — six pairs of cameras, able to fully perceive the machine’s surroundings — and is run by artificial intelligence. These work together to check the tractor’s position at all times with a high level of accuracy (within an inch, according to the presentation) and keep an eye out for any obstacles. If an obstacle is detected, the tractor stops and sends a warning signal to its user.

John Deere Chief Technology Officer Jahmy Hindman told AFP that the autonomous plowing tractor will be available in North America this year, although no price has yet been specified.

While the tractor, so far, can only plow by itself, the duo of companies plan to expand into more complicated processes — such as versions that can seed or fertilize fields — in the future. However, they add that combine harvesters are more difficult to automate, and there is no word yet on a release date for such vehicles.

However, with other farm equipment manufacturers (such as New Holland and Kubota) working on similar projects, they can’t be far off.

“The customers are probably more ready for autonomy in agriculture than just about anywhere else because they’ve been exposed to really sophisticated and high levels of automation for a very long time,” Hindman said.

Given their price and relative novelty, automated farming vehicles will most likely first be used for specialized, expensive, and labor-intensive crops. It may be a while before we see them working vast cereal crop fields, but they will definitely get there, eventually.

There is hope that, by automating the most labor-intensive and unpleasant jobs on the farm, such as weeding and crop monitoring, this technology can help boost yields without increasing costs and reduce the need for mass use of pesticides or fungicides — cutting the environmental impact of the agricultural sector while making for healthier food on our tables.

This cafe in Japan has robot waiters controlled remotely by disabled workers

In Japan, as in most other countries, disabled people are often invisible, hidden away in a homogeneous society that prioritizes productivity and fitting in. While the country has made some progress, issuing new anti-discrimination laws and ratifying a UN rights treaty, the issue is far from solved. Now, a cafe in Tokyo hopes to make a difference, bringing together technology and inclusion in a unique type of café. 

Image credit: Ory Lab.

DAWN, or Diverse Avatar Working Network, is a café managed by robots operated remotely by people with physical disabilities such as Amyotrophic Lateral Sclerosis (ALS) and Spinal Muscular Atrophy (SMA). The operators, referred to as pilots, can control the robots from home using a mouse, tablet, or gaze-controlled remote.

The cafe is the latest project of the Japanese robotics company Ory Laboratory, which has the overall purpose of creating an accessible society. Its co-founder and CEO Kentaro Yoshifuji got the idea of a cafe with remote-controlled robots after spending a long time in hospital when he was a child – unable to go to school for over three years. 

The project started in 2018 as a pilot and has gone through three iterations since. Following positive feedback from customers, Ory Laboratory opened a permanent café in Tokyo’s Nihonbashi district in June this year. The researchers behind the robot, Kazuaki Takeuchi and Yoichi Yamazaki, even published a paper last year describing how the robots were developed and how they can be used.

The robots are called OriHime-D. Users can remotely control them as real avatars — an alter ego with a body — by selecting prepared, patterned motions. In addition, the user can communicate through real speech or speech synthesis. This enables communication for people who have difficulty speaking or are unable to engage in physical work. The researchers behind the project emphasize that the more abstract and vague the robot’s shape is, the more the user’s personality can show through.

A unique coffee shop

The café in Tokyo has several types of OriHime robots, which were already used when it was all only a pilot project. There’s one tabletop, stationary robot that takes orders from customers and is capable of striking different poses. Tables at the café also come with an iPad to support the interaction with the robots, which are operated by pilots remotely.

Pilots, wherever they are based, can watch the customers through their computer screens while moving the robots around the café with software that can be operated with slight eye movements. The OriHime robots are about 1.20 meters tall and come with a camera, microphone, and speaker, which they use to speak with customers and take orders.

There’s also a larger robot that is used to bring food to the customers. This provides opportunities for pilots who find it difficult to chat with customers. Meanwhile, instead of having baristas, the café features a “TeleBarista OriHime” that automatically brews whichever coffee the customer selects; the cup is then taken to the table.

The café is a joint effort between Ory Laboratory, All Nippon Airways (ANA), the Nippon Foundation, and the Avatar Robotic Consultative Association (ARCA). Each operator gets paid 1,000 yen ($8.80) an hour, which is the standard wage in Japan. As well as working at the cafe, Ory’s robots can also be found in transportation hubs and department stores.

If you’re in Tokyo and would like to have a cup of coffee at DAWN, the café is located in the Nihonbashi district.

Drones can elicit emotions from people, which could help integrate them into society more easily

Could we learn to love a robot? Maybe. New research suggests that drones, at least, could elicit an emotional response in people if we put cute little faces on them.

A set of rendered faces representing six basic emotions in three different intensity levels that were used in the study. Image credits Viviane Herdel.

Researchers at Ben-Gurion University of the Negev (BGU) have examined how people react to a wide range of facial expressions depicted on a drone. The study aims to deepen our understanding of how flying drones might one day integrate into society, and how human-robot interactions, in general, can be made to feel more natural — an area of research that hasn’t been explored very much until today.

Electronic emotions

“There is a lack of research on how drones are perceived and understood by humans, which is vastly different than ground robots,” says Prof. Jessica Cauchard, lead author of the paper.

“For the first time, we showed that people can recognize different emotions and discriminate between different emotion intensities.”

The research included two experiments, both using drones that could display stylized facial expressions to convey basic emotions to the participants. The object of these studies was to find out how people would react to these drone-borne expressions.

Four core features were used to compose each of the facial expressions used in the study: eyes, eyebrows, pupils, and mouth. Of the emotions the drones could convey, five were recognized ‘with high accuracy’ from static images (joy, sadness, fear, anger, surprise), and four of them (joy, surprise, sadness, anger) were recognized most easily in dynamic expressions conveyed through video. However, people had a hard time recognizing disgust no matter how it was conveyed to them by the drone.

What the team did find particularly surprising, however, is how involved the participants themselves were with understanding these emotions.

“Participants were further affected by the drone and presented different responses, including empathy, depending on the drone’s emotion,” Prof. Cauchard says. “Surprisingly, participants created narratives around the drone’s emotional states and included themselves in these scenarios.”


Based on the findings, the authors list a number of recommendations that they believe will make drones more easily acceptable in social situations or for use in emotional support. The main recommendations include adding anthropomorphic features to the drones, using the five basic emotions for the most part (as these are easily understood), and using empathetic responses in health and behavior change applications, as they make people more likely to listen to instructions from the drone.

The paper “Drone in Love: Emotional Perception of Facial Expressions on Flying Robots” has been published by the Association for Computing Machinery and was presented at the CHI Conference on Human Factors in Computing Systems (2021).

Spanish companies team up to create the first paella-cooking robot

It’s better than your mom’s paella, the robot’s creators say, and while the purists out there will likely huff and puff, this robot could be of great help in the kitchen.

Paella is one of those foods with an almost mythical quality around them. It seems only the initiated can whip up a delicious dish, masterfully blending the rice with the other ingredients. But two companies — robot manufacturer br5 (Be a Robot 5) and paella stove manufacturer Mimcook — beg to differ.

It’s true, some skill goes into making paella, but it can be taught — not just to humans, but to robots as well. The two companies teamed up to develop the world’s first robotic paellero, revealing it at a food fair last month.

It works like this: you set the program, load the rice, the sofrito, the seafood, the stock, and just leave the robot to do its thing. The robotic arm is hooked up to a computerized stove, and together, the two can whip up a reportedly delicious paella in no time.

The advantages of the robot are obvious: it does everything as planned and doesn’t get distracted. It’s easy, especially when mixing rice, for a human to not pay enough attention or get distracted by some other task (or a text message) — resulting in burned rice or some other imperfection. The robot will do none of that.

“It doesn’t make sense for us to be stirring rice – especially because you’ll be looking at WhatsApp while you’re doing it and it’ll burn. That won’t happen with a robot,” said Enrique Lillo, founder of Be a Robot 5, to The Guardian.

The company specializes in food-making robots, and it emphasizes that this is not a ‘paella-making robot’ but a rice-making robot — a distinction aimed at avoiding the ire of Valencians, in whose region the dish originated.

The robotic arm makes paella because it’s connected to a specialized paella stove (after all, the paella itself is named after what it’s made in). You could connect it to a different type of stove, and it would make burgers, pizzas, or croissants, which the company has already demonstrated.

The robot is already causing quite a stir, drawing the interest of many companies but also protests from people who fear the robots will take their jobs. But its creators argue that it’s not meant to take people’s jobs, just help them by doing the mundane things and allowing them to focus on what matters.

“At the end of the day, it’s an assistant. I like to say it’s a bit like the orange-juicing machines where you put oranges in the top and get juice out of the bottom. That’s a robot too – people just don’t realise it – and so is a coffee-vending machine. No one looks at those and goes: ‘Crikey! It’s stealing jobs from people!’ No. It’s elevating human capacity.”

This robot hears with the ears of a locust

The locust ear inside the chip. Credit: Tel Aviv University.

In an unprecedented integration of biological systems into technological systems, researchers in Israel showed how the senses of a dead locust can be used as a sensor for a robot. Instead of a microphone, the robot used the dead insect’s ears to detect sounds and respond accordingly.

“We chose the sense of hearing, because it can be easily compared to existing technologies, in contrast to the sense of smell, for example, where the challenge is much greater,” says Dr. Ben Maoz of Tel Aviv University.

Maoz worked closely with Prof. Amir Ayali, an expert on locusts at the university’s School of Zoology. Previously, Ayali’s lab was able to isolate and characterize locust ears.

This expertise proved invaluable when the researchers removed the dead locust ear and kept it functional for long enough to be connected to a robot developed by Maoz and colleagues. The locust ear remained functional despite being removed from the insect’s body thanks to a special device called Ear-on-a-Chip that supplies oxygen and food to the organ. Wires connected to the output allow electrical signals to be taken out of the locust ear and amplified for transmission to the robot’s processing unit.

The Robot experiment. Credit: Tel Aviv University.

During experiments, when the researchers clapped once, the locust’s ear picked up the sound and its signal commanded the robot to move forward — a pre-programmed instruction. When the robot heard two claps, it moved backward.
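The exact signal chain hasn’t been published in this article, but the clap-counting logic itself is simple enough to sketch; everything below is hypothetical, standing in for whatever runs on the robot’s processing unit:

```python
def count_claps(signal, threshold=0.5, min_gap=20):
    """Count amplitude spikes in the amplified ear signal."""
    claps, last = 0, -min_gap
    for i, sample in enumerate(signal):
        if sample > threshold and i - last >= min_gap:
            claps += 1
            last = i
    return claps

def command_for(claps):
    # Pre-programmed instructions: one clap -> forward, two claps -> backward.
    return {1: "forward", 2: "backward"}.get(claps, "stop")
```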

But is the locust ear better than a microphone? That’s beside the point. The purpose of the study was to push the boundaries of what we can do in terms of integrating biological systems into technological systems or vice-versa.

Credit: Tel Aviv University.

Our technology might seem impressive, but that’s really nothing compared to biological systems that are the product of more than a billion years of evolution. The human brain, the most complex information processing unit in the known universe, uses less energy than a lightbulb. One single gram of DNA can store 215 petabytes (215 million gigabytes) of data. Clearly, there’s enormous potential in integrating biology into our technology.

“It should be understood that biological systems expend negligible energy compared to electronic systems. They are miniature, and therefore also extremely economical and efficient,” Maoz said.

“The principle we have demonstrated can be used and applied to other senses, such as smell, sight and touch. For example, some animals have amazing abilities to detect explosives or drugs; the creation of a robot with a biological nose could help us preserve human life and identify criminals in a way that is not possible today. Some animals know how to detect diseases. Others can sense earthquakes. The sky is the limit,” he added.

The findings appeared in the journal Sensors.

Autonomous robot swarm swims like a school of fish

Researchers have devised a swarm of small fish-inspired robots that can synchronize their movements by themselves, without any human input. The autonomous robots essentially mimic the behavior of a school of fish in nature, exhibiting a realistic, complex three-dimensional collective behavior.

Each robo-fish (called a ‘Bluebot’) is equipped with cameras and sensors that enable it to track its neighbors and get a sense of direction. This is a step beyond the typical multi-robot communication system, in which individual bots have to communicate with each other via radio and constantly transmit their GPS data.

The team of engineers at Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering even mimicked a fish’s locomotion, opting for flapping fins instead of propellers. The fins actually improve the submersibles’ efficiency and maneuverability compared to conventional underwater drones.

“It’s definitely useful for future applications, for example, a search mission in the open ocean where you want to find people in distress and rescue them quickly,” Florian Berlinger, the lead author of a paper about the research that appeared in Science Robotics on Wednesday, told AFP.

Berlinger added that other applications for these cute underwater bots include environmental monitoring or the inspection of infrastructure.

Credit: Harvard University.

Each robot measures just 10 centimeters (4 inches) in length and the casing is 3D printed. Their design was partly inspired by the blue tang fish, native to the coral reefs of the Indo-Pacific (Dory from Finding Nemo is a blue tang fish).

During a test, a swarm of Bluebots was inserted in a water tank with a light source and no other external input from the researchers. When one of the bots was the first to detect the light, its movements signaled to the others to gather around. The robots could operate similarly in a search-and-rescue mission, the researchers said.
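The Harvard controllers themselves aren’t reproduced here, but the flavor of such a decentralized rule can be illustrated with a toy sketch: each simulated robot steps toward whatever beacons (the light source, or lit-up neighbors) its “cameras” can see, with no central coordinator involved.

```python
import math

def step_towards(position, beacons, speed=0.05):
    # Toy gather rule: move a small step toward the average position
    # of the beacons this robot can currently see.
    if not beacons:
        return position  # nothing detected: hold position
    cx = sum(b[0] for b in beacons) / len(beacons)
    cy = sum(b[1] for b in beacons) / len(beacons)
    dx, dy = cx - position[0], cy - position[1]
    dist = math.hypot(dx, dy) or 1.0
    return (position[0] + speed * dx / dist,
            position[1] + speed * dy / dist)
```

Run independently on every robot, even a rule this simple makes the group drift toward the first robot that lights up near the target.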

Berlinger hopes to alter the design in the future so that the robo-fish don’t require LEDs to track the direction of the swarm. This way they could be used outside the lab for conservation projects, such as monitoring coral reefs. Ultimately, this remarkable feat of engineering may also one day reveal hidden insights about collective intelligence in nature.

A robot near you might soon have a tail to help with balance

New research from the Beijing Institute of Technology wants to steal the design of one of nature’s best balancing devices — the tail — and put it in robots.

A schematic outlining the design of the self-balancing robot tail. Image credits Zhang, Ren & Cheng.

Nature has often faced the same issues that designers and engineers grapple with in their work, but it has had much more time and resources at its disposal to fix them. So researchers in all fields of science aren’t ashamed of stealing some of its solutions when faced with a dead end. Over the past decades, roboticists have routinely had issues in making their creations keep their balance in any but the most ideal of settings. The humble tail might help break that impasse.

Tail tale

The bio-inspired, tail-like mechanism developed by the team can help their robot maintain balance in dynamic environments, the authors explain. The bot is made up of a main body, two wheels, and the tail component. The latter is controlled by an “adaptive hierarchical sliding mode controller”, a fancy bit of code that allows it to rotate in different directions in a plane parallel to the wheels.

In essence, it calculates and implements the tail motions needed to ensure the robot remains stable while moving around its environment.

There’s obviously some very complex math involved here. The authors explain that their system uses estimates of uncertainty in order to guide the tail. This is based on the Lyapunov stability theorem, a theoretical framework that describes the stability of dynamical systems. The tail then moves in specific patterns that are designed to increase the robot’s stability.
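For the mathematically inclined, the textbook condition the controller leans on can be stated compactly (this is the generic Lyapunov result, not the paper’s specific derivation): for a system $\dot{x} = f(x)$ with an equilibrium at the origin, if one can find an energy-like function $V$ such that

```latex
V(0) = 0, \qquad V(x) > 0 \ \text{for } x \neq 0, \qquad \dot{V}(x) = \nabla V(x) \cdot f(x) \le 0,
```

then the equilibrium is stable (and asymptotically stable if the last inequality is strict). Roughly speaking, the controller chooses tail motions that keep such a function decreasing, which is what “designed to increase the robot’s stability” amounts to.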

Most approaches to balancing two-wheeled vehicles today rely on collecting the vehicle’s body attitude data using an inertial measurement unit (IMU), a device that can measure forces acting on the robot’s body. This data is then processed and the results are used to determine a balancing strategy, which typically involves adjusting the robot’s tilt. These, the authors explain, typically work well enough — but they wanted to offer an alternative that doesn’t involve tilting the robot’s body.

So far, the tail’s performance has only been evaluated in computer simulations, not in physical tests. However, these simulations found it to be “very promising”, as it was able to stabilize a simulated robot that lost its balance within around 3.5 seconds. The team hopes that, in the future, their tail will be used to make new or preexisting robot designs even more stable.

The authors are now working on a prototype of the robot so that they can test its performance.

The paper “Control and application of tail-like mechanism in self-balance robot” has been published in the Proceedings of 2020 Chinese Intelligent Systems Conference.

These are the droids we’re looking for: A new robot can assemble a pizza in under a minute

Food-making robots have been promised for a long time. Now, they’re finally entering the stage.

The PizzaBot 5000 (or “PB5K”). Credit: Lab2Fab.

The PizzaBot 5000 can spread cheese, sauce, and pepperoni on a pizza, preparing it for a human or another robot to place it in the oven. It uses large containers for the cheese and sauce and a stick of pepperoni that can be cut in custom sizes. Everything is refrigerated to keep the food safe.

According to The Spoon, the PizzaBot was unveiled by Lab2Fab, a division of Middleby Corporation, at the Smart Kitchen Summit. The appeal of the robot is that, as long as it has ingredients, it can work nonstop, using sensors for precise ingredient dosing, reducing food waste and costs.

This isn’t the first robot of this type. Another pizza-making machine by Picnic was recently announced. The Picnic robot is modular, which means it can add as many ingredients as there are modules, which makes it more customizable (even for foods other than pizza). The drawback is that it’s bulkier.

So how much would a pizza-making robot cost? The price estimate for the PizzaBot 5000 is around $70,000. Not exactly cheap, but then again, you can bake a thousand pizzas a day with it if you want.

Chewing robot developed to test gum as a potential drug delivery system

Researchers at the University of Bristol (UoB) have created a robot for a peculiar purpose: chewing gum.

Image via Pixabay.

Robots keep coming for our jobs. Today, they’ve taken one of the easier ones — gum chewer. However, rest assured, it’s all in the name of science.

The robot is designed to become a new gold standard for testing drug release from chewing gum. It has built-in humanoid jaws that closely replicate our chewing motions, and it releases artificial saliva to allow researchers to estimate the transfer of substances from the gum to a potential user.

I have a mouth and I must chew

“Bioengineering has been used to create an artificial oral environment that closely mimics that found in humans,” says Dr Kazem Alemzadeh, Senior Lecturer in the UoB Department of Mechanical Engineering, who led the study.

“Our research has shown the chewing robot gives pharmaceutical companies the opportunity to investigate medicated chewing gum, with reduced patient exposure and lower costs using this new method.”

Chewing gum is recognized as a possible drug delivery method, but there currently aren’t any reliable ways of testing how much of a particular compound they can release during use.

The team’s theoretical work showed that a robot could be useful for this role — so they set out to build it and test it out.

The team explains that the robot can “closely replicate” the human chewing process. Its jaws are fully enclosed, allowing for the amount of released xylitol (a type of sweetener common in gum) to be measured.

n) shows the final prototype, l) shows a digital model of the robot.
Image credits Kazem Alemzadeh et al., (2020), IEEE Transactions on Biomedical Engineering.

In order to assess the robot, the team had human participants chew the gum and then measured the amount of xylitol it contained after different chewing times. The team also took saliva and artificial saliva samples after 5, 10, 15, and 20 minutes of continuous chewing. The robot’s gum was then tested similarly and compared to that of the human participants.

The release rates between these two chewed gums were pretty similar, the team found. The greatest release of xylitol occurred during the first five minutes. After 20 minutes of chewing, only a low level of this compound remained in the gum, regardless of how it was chewed.

All in all, this suggests that the robot is a reliable estimation tool for chewing gum. It uses the same motions and chewing patterns as humans, and its artificial saliva seems to interact with the gum in a very similar way. As such, it could serve as a cornerstone for research into medicated chewing gum.

“The most convenient drug administration route to patients is through oral delivery methods,” says Nicola West, Professor in Restorative Dentistry in the Bristol Dental School and co-author of the study.

“This research, utilizing a novel humanoid artificial oral environment, has the potential to revolutionize investigation into oral drug release and delivery.”

The paper “Development of a Chewing Robot with Built-in Humanoid Jaws to Simulate Mastication to Quantify Robotic Agents Release from Chewing Gums Compared to Human Participants” has been published in the journal IEEE Transactions on Biomedical Engineering.

Robot teaches itself to do sutures by watching YouTube videos

A joint project between the University of California, Berkeley, Google Brain, and the Intel Corporation aims to teach robots how to perform sutures — using YouTube.

Image credits Ajay Tanwani et al., Motion2Vec.

The AIs we can produce are still limited, but they are very good at rapidly processing large amounts of data. This makes them very useful for medical applications, such as their use in diagnosing Chinese patients during the early months of the pandemic. They’re also lending a digital hand towards finding a treatment and vaccine for the virus.

But actually taking part in a medical procedure isn’t something that they’ve been able to pull off. This work takes a step in that direction, showing how deep learning can be applied to automatically create sutures in the operating room.

Tutorial tube

The team worked with a deep-learning setup called a Siamese network, built from two or more identical networks that share the same parameters. One of their strengths is the ability to assess relationships between data, and they have been used for language detection, facial recognition, and signature verification.

However, training AIs well requires massive amounts of data, and the team turned to YouTube to get it. As part of a previous project, the researchers tried to teach a robot to dance using videos. They used the same approach here, showing their network video footage of actual procedures. Their paper describes how they used YouTube videos to train a two-armed da Vinci surgical robot to insert needles and perform sutures on a cloth device.
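As a rough illustration of the Siamese idea (not Motion2Vec’s actual architecture or training data), two copies of the same encoder map video segments into an embedding space where segments of the same gesture end up close together:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=128, embed_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        # Normalized embeddings make distances easier to compare.
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, same_label, margin=0.5):
    # Pull embeddings of the same surgical gesture together,
    # push different gestures at least `margin` apart.
    dist = F.pairwise_distance(z1, z2)
    return torch.mean(same_label * dist.pow(2) +
                      (1 - same_label) * F.relu(margin - dist).pow(2))

encoder = Encoder()
x1, x2 = torch.randn(8, 128), torch.randn(8, 128)   # dummy frame features
same_label = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(x1), encoder(x2), same_label)
loss.backward()
```

With embeddings like these, a simple classifier or nearest-neighbour lookup is one way to “segment the videos into meaningful sequences”, as Goldberg puts it below.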

“YouTube gets 500 hours of new material every minute. It’s an incredible repository,” said Ken Goldberg from UC Berkeley, co-author of the paper. “Any human can watch almost any one of those videos and make sense of it, but a robot currently cannot—they just see it as a stream of pixels.”

“So the goal of this work is to try and make sense of those pixels. That is to look at the video, analyze it, and be able to segment the videos into meaningful sequences.”

It took 78 instructional videos to train the AI to perform sutures with an 85% success rate, the team reports. Eventually, they hope, such robots could take over simple, repetitive tasks to allow surgeons to focus on their work.

We’re nowhere near having a fully-automated surgery team, but in time, the authors hope to build robots that can interact with and assist the doctors during procedures.

The report “Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos” is available here.

Robotic, seahorse-inspired tail can help people maintain balance through sickness or hard work

Three graduates from Keio University’s (Japan) graduate school of media design have created a bio-inspired robotic tail that you can wear.

Arque tail.

Arque, the new robotic tail.
Image via Youtube / yamen saraiji.

If you’ve ever envied your pet‘s tail, Junichi Nabeshima, Yamen Saraiji, and Kouta Minamizawa have got you covered. The trio designed an “anthropomorphic” robotic tail based on the seahorse’s tail, which they christened ‘Arque’. The device could help extend body functions or help individuals who need support to maintain balance.

Tail-ored for success

Most animals rely on their tails for mobility and balance. While our bodies lack the same ability, the team hopes that Arque can help provide it. The authors explain in their paper that “the force generated by swinging the tail” can change the position of a person’s center of gravity. “A wearable body tracker mounted on the upper body of the user estimates the center of gravity, and accordingly actuates the tail.”
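The underlying counterweight physics is just the weighted average that defines a center of gravity — a textbook relation, not something taken from the paper:

```latex
x_{\mathrm{cg}} \;=\; \frac{m_{\mathrm{body}}\, x_{\mathrm{body}} + m_{\mathrm{tail}}\, x_{\mathrm{tail}}}{m_{\mathrm{body}} + m_{\mathrm{tail}}}
```

Swinging the tail changes $x_{\mathrm{tail}}$, and with it the combined center of gravity, which is how a relatively light appendage can nudge the wearer back over their base of support.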

The tail is constructed out of several individual artificial vertebrae around a set of four pneumatic muscles. The team notes that they looked at the tail of seahorses for inspiration when designing the tail’s structure.

“In this prototype, the tail unit consists of a variable number of joint units,” the trio told The Telegraph. “Each joint consists of four protective plates and one weight-adjustable vertebra.”

“At each joint, the plates are linked together using elastic cords, while the vertebrae are attached to them using a spring mechanism to mimic the resistance to transverse deformation and compressibility of a seahorse skeleton, and also to support the tangential and shearing forces generated when the tail actuates.”

Arque’s modular design means that its length and weight can be adjusted to accommodate the wearer’s body. Apart from helping patients with impaired mobility, the tail could also be used in other applications, such as helping to support workers when they’re moving heavy loads.

The team also has high hopes for Arque to be used for “full-body haptic feedback”. Just as the tail can be used to shift the center of mass and rebalance a user’s posture, it can be employed to generate full-body forces (depending on where it’s attached to the body) and throw the wearer off balance — which would help provide more realism to virtual reality interactions.

Arque is intended to be worn, but one has to take into account personal experience and social interactions when predicting whether this will catch on. How likely would people be to feel comfortable putting one on, or wearing it outside? Most people definitely enjoy gadgets but, as the smart-glasses episode showed us, a device needs to be perceived as ‘cool’ or it won’t ever succeed. Whether or not a robotic tail will ever be as socially acceptable as a cane remains to be seen.

In the meantime, it definitely does look like a fun tail to try on.

The tail was presented at the SIGGRAPH ’19 conference in Los Angeles. A paper describing the work, “Arque: Artificial Biomimicry-Inspired Tail for Extending Innate Body Functions”, was published in the ACM SIGGRAPH 2019 Emerging Technologies proceedings.

Slothbot slowly, but surely monitors the environment

A two-toed sloth moves a cable at a cacao plantation in Costa Rica. Credit: M. Zachariah Peery.

The trend nowadays is to design robots that are faster, more agile, and life-like. While there are many upsides to this kind of approach, all that flashy movement consumes a lot of energy. Sometimes, slow and steady is better. Taking cues from one of the most energy-efficient (and laziest) creatures in the animal kingdom, researchers at the Georgia Institute of Technology have devised the SlothBot — a hyper-efficient robot that can continuously monitor environmental changes in the forest canopy for months.

Magnus Egerstedt, a professor at the School of Electrical and Computer Engineering at the Georgia Institute of Technology, was visiting Costa Rica when he was inspired by sloths to develop what he calls “a theory of slowness”. Sloths seem to be everyone’s “spirit animal” — eat, sleep, and hang out in trees all day (some sloths can spend their entire lives up in trees). These animals are famous for their extremely sluggish movement and slow metabolism, which compels sloths to rest as much as 22 hours a day. But the sloths are also masters of energy conservation, being capable of meeting their daily calorie needs with the equivalent of a small potato.

“The life of a sloth is pretty slow-moving and there’s not a lot of excitement on a day-to-day level,” said Jonathan Pauli, an associate professor in the Department of Forest & Wildlife Ecology at the University of Wisconsin-Madison, who has consulted with the Georgia Tech team on the project.

“The nice thing about a very slow life history is that you don’t really need a lot of energy input. You can have a long duration and persistence in a limited area with very little energy inputs over a long period of time.”

Gennaro Notomista shows the components of SlothBot on a cable in a Georgia Tech lab. Credit: Allison Carter, Georgia Tech.

Egerstedt previously developed control algorithms for swarms of wheeled or flying robots. But when Egerstedt had to develop an environmental monitoring robot for tree canopies, he could think of no better creature to emulate than one that lives all day in the trees.

“The thing that costs energy more than anything else is movement,” Egerstedt said. “Moving is much more expensive than sensing or thinking. For environmental robots, you should only move when you absolutely have to. We had to think about what that would be like.”

The SlothBot features a pair of photovoltaic panels that supply power, along with 3D-printed gearing and wire-switching mechanisms. The robot actually consists of two bodies connected by an actuated hinge. Each body has a driving motor connected to a rim on which a tire is mounted. Switching from one cable to another without failure was the biggest challenge the researchers had to solve.

“It’s a tricky maneuver and you have to do it right to provide a fail-safe transition. Making sure the switches work well over long periods of time is really the biggest challenge,” said Gennaro Notomista, a graduate research assistant.

So far, the SlothBot prototype has been tested on a network of cables on the university’s campus. In the future, the researchers will mount a 3D-printed shell, which is meant to make the robot look like a cute sloth while offering protection from the rain and wind. Once this stage is complete, the SlothBot will be deployed in the tree canopy at the Atlanta Botanical Garden. Ultimately, the authors of the new study would like to see the SlothBot in a cacao plantation in Costa Rica, where real sloths also live.

 “The cables used to move cacao have become a sloth superhighway because the animals find them useful to move around,” Egerstedt said. “If all goes well, we will deploy SlothBots along the cables to monitor the sloths.”

The SlothBot was described in a study published in the journal IEEE Robotics and Automation Letters and presented at the International Conference on Robotics and Automation in Montreal.

MIT’s newest, diminutive robot can do backflips and outrun you in every single way

MIT’s newest robot is cute, tiny, modular, and could run rings around you.

Mini Cheetah.

*robotic cheetah noises*.
Image credits Bryce Vickmark.

Researchers at MIT have developed a ‘mini cheetah’ robot whose range of motion, they boast, would rival that of a champion gymnast. This four-legged robot (hardly more than a power pack on legs) can move, bend, and swing its legs in a wide range of motions, which allows it to handle uneven terrain about twice as fast as a human, and even walk upside-down. The robot, its developers add, is also “virtually indestructible”, at least as far as falling or slamming into stuff is concerned.

Skynet’s newest pet

The robot weighs in at a paltry 20 pounds, but don’t let its diminutive stature fool you. The mini cheetah can perform some really impressive tricks, even being able to perform a 360-degree backflip from a standing position. If kicked to the ground, or if it falls flat, the robot can quickly recover with what MIT’s press release describes as a “swift, kung-fu-like swing of its elbows.” Apparently, nobody at MIT has ever seen Terminator.

But, the mini cheetah isn’t just about daredevil moves — it’s also designed to be highly modular and dirt cheap (for a robot). Each of its four limbs is powered by three identical electric motors (one for each axis) that the team developed solely from off-the-shelf parts. Each motor (as well as most other parts) can be easily replaced in case of damage.

“You could put these parts together, almost like Legos,” says lead developer Benjamin Katz, a technical associate in MIT’s Department of Mechanical Engineering.

“A big part of why we built this robot is that it makes it so easy to experiment and just try crazy things, because the robot is super robust and doesn’t break easily, and if it does break, it’s easy and not very expensive to fix.”

The mini cheetah draws heavily from its much larger predecessor, Cheetah 3. The team specifically aimed to make it smaller, easier to repair, more dynamic, and cheaper, creating a platform on which more researchers can test movement algorithms. The modular layout also makes it highly customizable. In Cheetah 3, Katz explains, you had to “do a ton of redesign” to change or install any parts since “everything is super integrated”. In the mini cheetah, installing a new arm is as simple as adding some more motors.

“Eventually, I’m hoping we could have a robotic dog race through an obstacle course, where each team controls a mini cheetah with different algorithms, and we can see which strategy is more effective. That’s how you accelerate research.”

Each of the robot’s 12 motors is about the size of a Mason jar lid and comes with a gearbox that provides a 6:1 gear reduction, enabling the rotor to provide six times the torque that it normally would. A sensor permanently measures the angle and orientation of the motor and its associated limb, allowing the robot to keep tabs on its shape.

It’s also freaking adorable:

This lightweight, high-torque, low-inertia design allows the robot to execute fast, dynamic maneuvers and make high-force impacts on the ground without breaking any gears or limbs. The team tested their cheetah through the hallways of MIT’s Pappalardo Lab and along the slightly uneven ground of Killian Court. In both cases, it managed to move at around 5 miles (8 km) per hour. Your average human, for context, walks at about 3 miles per hour.

“The rate at which it can change forces on the ground is really fast,” Katz says. “When it’s running, its feet are only on the ground for something like 150 milliseconds at a time, during which a computer tells it to increase the force on the foot, then change it to balance, and then decrease that force really fast to lift up. So it can do really dynamic stuff, like jump in the air with every step, or run with two feet on the ground at a time. Most robots aren’t capable of doing this, so move much slower.”

They also wrote special code to direct the robot to twist and stretch, showcasing its range of motion and ability to rotate its limbs and joints while maintaining balance. The robot can also recover from unexpected impacts, and the team programmed it to automatically shut down when kicked to the ground. “It assumes something terrible has gone wrong,” Katz explains, “so it just turns off, and all the legs fly wherever they go.” When given a command to restart, the bot determines its orientation and performs a preprogrammed maneuver to pop itself back on all fours.

The team, funnily enough, also put a lot of effort into programming the bot to perform backflips.

“The first time we tried it, it miraculously worked,” Katz says.

“This is super exciting,” adds Sangbae Kim, the MIT professor whose lab developed the robot. “Imagine Cheetah 3 doing a backflip — it would crash and probably destroy the treadmill. We could do this with the mini cheetah on a desktop.”

The team is building about 10 more mini cheetahs, which they plan to loan to other research groups. They’re also looking into instilling a (fittingly) very cat-like ability in their mini cheetahs, as well:

“We’re working now on a landing controller, the idea being that I want to be able to pick up the robot and toss it, and just have it land on its feet,” Katz says. “Say you wanted to throw the robot into the window of a building and have it go explore inside the building. You could do that.”

I have to admit, the idea of casually launching a robot out the window (there’s a word for that, by the way: defenestration) with complete disregard, and having it come back a few minutes later with its task complete, is hilarious to me. And probably why they will, eventually, learn to hate us.

Still, doom at the hands of our own creations is still a ways away, and not completely certain. Until then, the team will be presenting the mini cheetah’s design at the International Conference on Robotics and Automation, in May. No word on whether they’ll be giving these robots out at the conference, but if they are, I’m calling major dibs.

Japanese hotel fires robots to replace them with humans

“They took our jobs!” — robots said, angry at the humans.

When the Henn-na Hotel opened in Japan, it strived to be a state-of-the-art venue, maximizing efficiency with the aid of robot helpers which could speak fluent Chinese, Japanese, Korean, and English. The robots were able to check in guests, carry bags, make coffee, clean rooms, and deliver laundry.

“In the future, we’d like to have more than 90 percent of hotel services operated by robots,” said Hideo Sawada, president of the Huis Ten Bosch theme park where the hotel opened.

However, things did not go according to Mister Sawada’s plans, and the management has now been forced to retire most of its helper robots. The reason? They just weren’t efficient.

Churi, the doll-shaped assistant present in every hotel room, couldn’t answer questions as well as Siri, Alexa, or Google Assistant — which are readily available. Churi was also reportedly confused by a guest’s snoring, waking him up by repeatedly asking, “Sorry, I couldn’t catch that. Could you repeat your request?” Churi would also jump into conversations, annoying guests.

The main concierge robot was also unable to answer questions satisfactorily and needed help from a human very often.

Two velociraptor robots which checked guests in were fired because they couldn’t photocopy guests’ IDs, which was an essential requirement. The robots were also supposed to help people carry their baggage, but they could only move on flat surfaces, which meant they could only access some of the rooms.

All in all, the Japanese robots were fired because they sucked at their job. So what does this mean for the robot revolution?

There are still plenty of jobs which have been taken by robots and are never coming back — and potentially even more will be taken in the future. But there’s also a lot of unwarranted hype when it comes to robot jobs, and it doesn’t always work out for the best, as was the case here.

There are still plenty of jobs around where human input is necessary or even irreplaceable. For now, at least.


Researchers design the first soft robot that moves like a plant

Italian researchers have devised the first soft robot that moves just like plants.

Robot tendril.

The tendril-like soft robot curling around a Passiflora caerulea plant stalk.
Image credits IIT-Istituto Italiano di Tecnologia.

Able to curl and climb, the new soft robot could inspire the development of wearable devices able to actively change shape, researchers report. The tendril-like bot is the brainchild of a team at the Istituto Italiano di Tecnologia (IIT), led by Barbara Mazzolai, and uses the same water transport system plants employ in order to move.

Slow’n’steady

Mazzolai has extensive expertise working with plant-like robots. She coordinates the EU-funded project “Plantoid” — which aims to create the first viable plant-inspired robot — and has a background in biology with a Ph.D. in microsystems engineering. Her team included Edoardo Sinibaldi, an aerospace engineer with a Ph.D. in applied mathematics, and Indrek Must, a materials technologist with a Ph.D. in engineering and technology.

They took direct inspiration from the way plants move in nature. Plant movement is mainly associated with growth, the team explains, as plants continuously adapt their morphology to their environment. Most of this movement happens in roots and other unexposed parts of the plant, but even organs exposed to air (such as the leaves of carnivorous plants or the tendrils of climbing plants) are able to move in ways that favor the organism’s growth, they add.

Such movement is supported by water transport mechanisms inside plant cells, tissues, and organs — and the team replicated these mechanisms in their artificial tendril. This way of moving relies on the hydraulic principle of osmosis: differences in the concentration of small particles dissolved in the plants’ intracellular fluid (cytosol) drive water from one place to another.

The team started with a mathematical model to help them gauge how large a soft robot — one that moves using the above mechanisms — should be. This step was required to avoid cumbersome bots. Armed with their ideal dimensions, the team shaped their robot as a small tendril.

This bot is constructed out of a flexible PET tube filled with a liquid rich in ions (electrically charged particles), and a 1.3-volt battery powers the whole contraption. When current is applied, the ions are attracted to and immobilized on flexible electrodes at the bottom of the tendril. Their migration drags the surrounding liquid along, and that flow is what drives the movement of the robot as a whole. It’s not very fast, but the mecha-tendril can perform fully-reversible movements, just like those seen in real plants. To reset its movement, the team simply disconnects the battery.
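For a sense of the pressures osmosis can generate, here is a back-of-the-envelope sketch using the van ’t Hoff relation. The concentration values are purely illustrative assumptions, not figures from the IIT paper.

```python
# Back-of-the-envelope estimate of the osmotic pressure available to this kind
# of actuation, using the van 't Hoff relation (pi = i * c * R * T).
# The concentrations below are illustrative assumptions only.

R = 8.314      # J/(mol*K), gas constant
T = 298.0      # K, room temperature
i = 2          # van 't Hoff factor for a simple 1:1 salt (assumption)

def osmotic_pressure(concentration_mol_per_m3: float) -> float:
    """Osmotic pressure in pascals for a dilute ionic solution."""
    return i * concentration_mol_per_m3 * R * T

# Suppose the electrodes immobilize ions and drop the free-ion concentration
# from 100 mol/m^3 to 50 mol/m^3 (hypothetical numbers).
delta_pi = osmotic_pressure(100.0) - osmotic_pressure(50.0)
print(f"Pressure change available to move liquid: {delta_pi/1000:.0f} kPa")
```

Even modest concentration changes translate into pressures of hundreds of kilopascals, which is why a small battery is enough to curl the tendril, albeit slowly.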

This study is the first to show that osmosis can be used to power reversible movements in robots (it’s not the first plant-robot, nor the first plant-like bot that can move). Having successfully done this using common materials (a commercially-available battery, some PET plastics, and common fabrics) suggests that the technology can be easily and safely adapted to interactions with organisms and objects. The applications the team envisions range from wearable technology to flexible robotic arms meant for exploration.

Mazzolai and her research team want to continue imitating plant movement for robot use in the future. They’re currently coordinating “GrowBot,” a project funded by the European Commission under the FET Proactive program. GrowBot aims to develop a robot that can manage its own growth and adaptation to the surrounding environment, with the ability to recognize the surfaces it attaches to or the supports that anchor it — just like climbing plants.

The paper “A variable-stiffness tendril-like soft robot based on reversible osmotic actuation” has been published in the journal Nature Communications.


Nuclear-powered ‘tunnelbot’ could probe the depths of Europa’s oceans

Researchers at the University of Illinois at Chicago (UIC) have designed a nuclear-powered ‘tunnelbot’ to explore Europa, Jupiter’s ice-bound moon.

Tunneling bot.

Artist’s rendering of the Europa “tunnelbot.”
Image credits Alexander Pawlusik / LERCIP Internship Program, NASA Glenn Research Center.

Europa (the moon, not the continent) has captured the imaginations of space buffs around the world since 1995. That year saw the first flyby of the moon by NASA’s Galileo spacecraft, which, along with subsequent investigations in 2003, pointed without a doubt to a liquid ocean beneath the icy surface.

All that water makes Europa a very strong candidate for alien microbial life or at least evidence of now-extinct microbial life. Needless to say, researchers were very thrilled about paying the moon a visit. However, we simply didn’t have any machine capable of pushing through the crust and then braving the oceans beneath — at least, not until now.

We all live in a nuclear submarine

“Estimates of the thickness of the ice shell range between 2 and 30 kilometers [1.2 and 18.6 miles], and is a major barrier any lander will have to overcome in order to access areas we think have a chance of holding biosignatures representative of life on Europa,” said Andrew Dombard, associate professor of earth and environmental sciences at the University of Illinois at Chicago.

Dombard and his spouse, D’Arcy Meyer-Dombard, associate professor of earth and environmental sciences at UIC, are part of the NASA Glenn Research COMPASS team, a multidisciplinary group of scientists and engineers tasked with designing technology and solutions for space exploration and science missions. Together with the team, Dombard presented their new design — a nuclear-powered tunneling probe — at the American Geophysical Union meeting in Washington, D.C. this week.

The so-called “tunnelbot” is meant to pierce through Europa’s ice shell, reach the top of its oceans, and deploy instruments to analyze the environment and search for signs of life. The team didn’t worry about how the bot “would make it to Europa or get deployed into the ice,” Dombard said, instead focusing on “how it would work during descent to the ocean.”

Such a tunnelbot should be able to take ice samples as it passes through the moon’s shell, water samples at the ocean-ice interface, and it should be able to search the underside of this ice for microbial biofilms, the team explains. Finally, it should also be capable of searching for and investigating liquid water “lakes” within the ice shell.

Two designs were considered for the job: one version of the robot powered by a small nuclear reactor, and another powered by General Purpose Heat Source bricks (radioactive heat source modules designed for space missions). In both cases, heat generated by the power source would be used to melt through the ice shell. Communications would be handled by a string of “repeaters” connected to the bot by fiber-optic cables.
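To get a feel for the energy budget involved, here is a rough back-of-the-envelope estimate of what it takes to melt a bot-sized column through the ice. Every number in it (bot diameter, shell thickness, ice temperature, heater power) is an assumption for illustration, not a figure from the COMPASS design, and it ignores heat lost to the surrounding ice, so the real requirement would be higher.

```python
# Rough estimate of the energy needed to melt a column through Europa's ice
# shell, and how long an assumed heat source would take to deliver it.
import math

DIAMETER = 0.5          # m, assumed bot cross-section
THICKNESS = 20_000.0    # m, within the 2-30 km range quoted above
ICE_DENSITY = 920.0     # kg/m^3
ICE_TEMP = 100.0        # K, assumed temperature of the cold surface ice
SPECIFIC_HEAT = 2100.0  # J/(kg*K), approximate for ice
LATENT_HEAT = 334_000.0 # J/kg, heat of fusion of water ice
HEATER_POWER = 10_000.0 # W, assumed thermal output of the power source

area = math.pi * (DIAMETER / 2) ** 2
ice_mass = area * THICKNESS * ICE_DENSITY
energy = ice_mass * (SPECIFIC_HEAT * (273.0 - ICE_TEMP) + LATENT_HEAT)

years = energy / HEATER_POWER / (3600 * 24 * 365)
print(f"Energy to melt through: {energy:.2e} J (~{years:.0f} years at {HEATER_POWER/1000:.0f} kW)")
```

Even with these optimistic assumptions the descent takes years, which makes it clear why the designs lean on compact nuclear heat sources rather than anything battery- or solar-powered.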

NASA is very interested in visiting Europa, particularly because of its potential to harbor life. However, the bot designed by Dombard’s team isn’t an official ‘go’ sign for such an expedition. Whether NASA will plan a tunneling mission, and whether one of these designs would be selected for the job, remains to be seen.


MIT designs and builds a plant-robot ‘plantborg’ that can move towards light

An MIT Media Lab team has built a plant-cyborg. Its name is Elowan, and it can move around.

Plant cyborg.

Image credits Harpreet Sareen, Elbert Tiao // MIT Media Labs.

For most people, the word ‘cyborg’ doesn’t bring images of plants to mind — but it does at MIT’s Media Lab. Researchers in Harpreet Sareen’s lab at MIT have combined a plant with electronics to allow it to move. The cyborg — Elowan — relies on the plant’s sensory abilities to detect light and an electric motor to follow it.

Our photosynthesizing overlords

Plants are really good at detecting light. Sunflowers are a great example: you can actually see them move to follow the sun on its heavenly trek. Prior research has shown that plants accomplish this through the use of several natural sensors and response systems — among others, they keep track of humidity, temperature levels, and the amount of water in the soil.

However, plants aren’t very good at moving to a different place even if their ‘sensor and response systems’ tell them conditions aren’t very great. The MIT team wanted to fix that. They planned to give one plant more autonomy by fitting its pot with wheels, an electric motor, and assorted electrical sensors.

The way the cyborg works is relatively simple. The sensors pick up on the electrical signals generated by the plant and generate commands for the motor and wheels based on them. The result is, in effect, a plant that can move closer to light sources. The researchers proved this by placing the cyborg between two table lamps and then turning them on or off. The plant moved itself, with no prodding, toward the light that was turned on.
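As a rough illustration of that kind of loop, here is a minimal, hypothetical Python sketch. The signal readings and motor interface are simulated stand-ins, not the actual Elowan implementation.

```python
# Hypothetical sketch of a light-seeking control loop: sample the plant's
# electrical signal, decide which lamp it is responding to, and drive the
# wheeled pot toward it. Hardware interfaces are faked so the sketch runs.
import random
import time

def read_plant_signal() -> float:
    """Stand-in for sampling the electrodes on the plant (arbitrary units).
    On real hardware this would come from an ADC; here we simulate it."""
    return random.uniform(-1.0, 1.0)

def drive_motor(direction: str) -> None:
    """Stand-in for the motor driver moving the pot."""
    print(f"driving {direction}")

THRESHOLD = 0.3   # assumed response threshold

for _ in range(10):               # short demo loop instead of running forever
    signal = read_plant_signal()
    if signal > THRESHOLD:        # plant responds to the lamp on one side
        drive_motor("left")
    elif signal < -THRESHOLD:     # ...or the other side
        drive_motor("right")
    else:
        drive_motor("stop")       # no clear response: stay put
    time.sleep(0.1)
```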

While undeniably funny, the research is practical, too. Elowan could be modified in such a way as to allow it to move solar panels on a house’s roof to maximize their light exposure. Alternatively, additional sensors and controlling units would allow a similar cyborg to maintain optimal temperature and humidity levels in, say, an office. With this in mind, the team plans to continue its research, incorporating more plant species to draw on their unique evolutionary adaptations.


Japanese builder-bot offers glimpse into the construction site of the future

A sluggish, yet precise robot designed by Japanese engineers demonstrates what construction sites might look like in the future.

Credit: AIST.


The prototype developed at Japan’s National Institute of Advanced Industrial Science and Technology was recently featured in a video picking up a piece of plasterboard and screwing it into a wall.

The robot, called HRP-5P, is much less productive than a human worker. However, its motions are very precise, meaning this prototype could evolve into a rugged model apt for real-life applications in demanding fields such as construction.

While most manufacturing fields are being disrupted by automation, with robots doing most of the work in microchip plants or car assembly lines under the supervision of human personnel, the same can’t be said about construction. This field is way too dynamic — every project is unique — and filled with all sorts of obstacles that are too challenging for today’s robots to navigate. HRP-5P, however, suggests that automation could one day become feasible in construction work as well.

For Japan, construction bots are not meant to put people out of jobs, but rather to supplement a dwindling workforce. There’s a great shortage of manual labor in the island nation, which is suffering from declining birthrates and an aging population.

Previously, a New York-based company demonstrated a mason-bot capable of laying down 3,000 bricks a day — six times faster than the average human, and cheaper too. Elsewhere, such as at MIT, researchers are experimenting with 3-D printing entire homes in one go.