Category Archives: Robotics

These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and real space. The basis of this field, evolutionary computing, sees robots carrying a virtual genome ‘mate’ to ‘reproduce’ improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats way beyond human abilities. But what happens when unstable situations or environments call for robots never seen on Earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there were a better alternative? What if there was a type of artificial intelligence that could take lessons from evolution to generate robots that can adapt to their environment? It sounds like something from a sci-fi novel, but it's exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary theory of variation and selection, these robots can optimize their descendants depending on a set of activities over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” It offers a way to explore evolutionary principles and to pose an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a family of algorithms inspired by biological evolution, in which candidate solutions are generated and iteratively “evolved”. Each new generation discards the less fit solutions and introduces small adaptive changes, or mutations, producing a cyber version of survival of the fittest. It’s a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
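To make that loop concrete, here is a minimal sketch of an evolutionary algorithm in Python. The genome (a plain list of numbers), the fitness function, and every parameter are invented for illustration only; none of this is taken from the ARE project's code.

```python
import random

# Minimal sketch of an evolutionary loop, purely for illustration.
GENOME_LENGTH = 8
POP_SIZE = 20

def fitness(genome):
    # Hypothetical score: higher is better. A real system would score
    # a robot's performance on an actual task instead.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    # Each 'gene' gets a small random nudge with some probability.
    return [g + random.gauss(0, 0.05) if random.random() < rate else g
            for g in genome]

def crossover(parent_a, parent_b):
    # Uniform crossover: each gene comes from one parent at random.
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

population = [[random.random() for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

for generation in range(50):
    # Keep the fitter half, discard the rest ("survival of the fittest").
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    # Refill the population with mutated offspring of random parent pairs.
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```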

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two parent robots come together to mingle their virtual genomes and create improved young that incorporate both their genetic codes.

Each newly evolved offspring is built autonomously: a 3D printer produces its skeleton, after which a mechanical assembly arm, following the inherited virtual genome, selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the system wires up a Raspberry Pi computer, acting as a brain, to the sensors and motors; software derived from both parents is then downloaded to it, representing the evolved brain.
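As a rough illustration of what a “virtual genome” might look like in software, the hypothetical sketch below encodes a body plan and translates it into assembly steps. The field names, the component bank, and the build_order function are all invented for this example; ARE's actual genome format is not described in that level of detail here.

```python
from dataclasses import dataclass, field

# Hypothetical component catalogue the assembly arm could draw from.
SENSOR_BANK = {"light", "infrared", "touch"}
LOCOMOTION_BANK = {"wheel", "jointed_leg", "track"}

@dataclass
class RobotGenome:
    skeleton_seed: int                                  # drives the 3D-printed skeleton shape
    sensors: list = field(default_factory=list)         # picked from SENSOR_BANK
    locomotion: list = field(default_factory=list)      # picked from LOCOMOTION_BANK
    brain_weights: list = field(default_factory=list)   # software loaded onto the Raspberry Pi

def build_order(genome: RobotGenome) -> list:
    """Translate a genome into assembly steps for the (hypothetical) arm."""
    steps = [f"print skeleton (seed {genome.skeleton_seed})"]
    steps += [f"attach sensor: {s}" for s in genome.sensors if s in SENSOR_BANK]
    steps += [f"attach locomotion: {m}" for m in genome.locomotion if m in LOCOMOTION_BANK]
    steps.append("wire Raspberry Pi and load brain weights")
    return steps

print(build_order(RobotGenome(42, ["light", "touch"], ["wheel", "jointed_leg"], [0.1, -0.3])))
```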

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants because of breeding between different ‘species’. For example, a parent with wheels might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But the inherited brain may struggle to control the new body, so an algorithm is run as part of the learning stage to refine the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they proceed to the next phase: testing.

2. Selection of the fittest: who can reproduce?

For testing, ARE uses a specially built, inert mock-up of a nuclear reactor housing, where the young robots must identify and clear radioactive waste while avoiding various obstacles. After each robot completes the task, the system scores its performance and uses that score to determine which robots will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.
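The scoring step above decides which robots earn the right to breed. One common way to turn scores into breeding odds is fitness-proportional (“roulette wheel”) selection, sketched below; whether ARE uses exactly this scheme is not stated, so treat the function and the sample scores as assumptions for illustration.

```python
import random

def pick_parents(population, scores, n_pairs=1):
    """Pick parent pairs with probability proportional to their task score."""
    pairs = []
    for _ in range(n_pairs):
        # Note: a real system would also guard against self-pairing.
        mother, father = random.choices(population, weights=scores, k=2)
        pairs.append((mother, father))
    return pairs

robots = ["bot_A", "bot_B", "bot_C", "bot_D"]
waste_cleared = [9.0, 3.5, 6.2, 1.1]   # hypothetical task scores

# Higher-scoring robots are proportionally more likely to be chosen,
# but low scorers still get an occasional chance, preserving diversity.
print(pick_parents(robots, waste_cleared, n_pairs=3))
```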

Evolutionary roboticist and ARE researcher Guszti Eiben explains why this sped-up evolution works: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

In this parallel universe, a digital version of every mechanical infant is created in a simulator once mating has occurred, enabling the ARE researchers to build and test new designs within seconds and identify those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard.” She adds: “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may have more immediate uses. As climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve even further on their own as we step back and hand them the reins of their virtual lives. How this will affect the human race remains to be seen.

New microbots can travel to the brain via the nose and deliver treatments

Scientists have successfully guided a microbot through the nasal pathways to the brain of a mouse. If the same approach can be replicated in humans, it could be a game-changer against neurodegenerative disease, enabling doctors to deliver therapies directly to the brain.

Image credits: DGIST.

A research team led by DGIST (the Daegu Gyeongbuk Institute of Science and Technology in South Korea) has created a magnetically propelled microrobot that can navigate the body. The study, published in the journal Advanced Materials, describes how the team manufactured the microrobot, dubbed a Cellbot, by magnetizing stem cells extracted from the human nasal cavity. The scientists then tested the Cellbot’s ability to move through the body’s confined vessels and passages to reach its target, which it completed with ease.

DGIST said in a statement that “This approach has the potential to effectively treat central nervous system disorders in a minimally invasive manner.”

Building an intranasal microrobot

Brain conditions affect tens of millions of people worldwide, with experts estimating that the number of Americans with Alzheimer’s alone could stand at 6.2 million. Unfortunately, many of these conditions have no available cure, and much of the research in this field now focuses on stem cell therapies.

These therapies use special cells that can develop into many different tissue types, making them ideal for regenerative medicine, as they can replace structures within the body damaged by disease or by harsh therapeutics such as chemotherapy. However, problems arise when delivering this type of therapy because the blood-brain barrier (the highly selective lining of the blood vessels that supply the central nervous system) tightly regulates which molecules go in and out of the brain. This neural boundary prevents most therapeutics from entering without the use of high-risk surgery.

The current study may have finally found a solution for this problem.

The Institute explains that their Cellbot consists of human stem cells scraped from structures known as turbinates in the nasal cavity, which are then soaked in a solution containing iron nanoparticles. The metallic particles, invisible to the naked eye, bind to the stem cells and magnetize them, which enables the Cellbots to be propelled by an external magnetic field. After measuring the magnetization of the microbots, the team put the Cellbots through a rigorous set of trials to test their mobility and regenerative properties.

A microbot obstacle course

In the first test, involving microfluidic channels, the scientists mapped a tortuous route for the biobots around tiny pillars, no wider than a human hair, placed in microscopic canals full of viscous liquid. In this way, they demonstrated that the Cellbots could traverse obstacles in confined spaces, as would be the case if they were injected into the nose.

They then tested whether the Cellbots were still safe to use as a therapy due to the presence of iron. Micro-brain organoids were grown in the lab, and the Cellbots successfully grafted onto them in the same fashion as stem cells. These results suggested that the Cellbots could differentiate into neuronal cells and help to regenerate damaged brain tissues just like their native counterparts.

Finally, a swarm of Cellbots was propelled by an external magnetic field to a target region in the mouse brain via the nasal pathway. The biobots were tagged using a fluorescent marker and guided by the scientists to traverse the blood-brain barrier and target the cortex of the frontal region of the animal’s brain – where the nervous system accepted and integrated them.

New hope for untreatable brain disease

In their paper, the researchers conclude that the collective results of their experiments demonstrate that the Cellbots can be successfully administered nasally and guided magnetically to the target brain region. The study represents a promising approach for otherwise untreatable central nervous system diseases. Professor Choi, the DGIST head researcher, concluded:

“This research overcomes the limitations in the delivery of a therapeutic agent into brain tissues owing to the blood-brain barrier.” He added, “It opens new possibilities for the treatment of various intractable neurological diseases, such as Alzheimer’s disease, Parkinson’s disease, and brain tumors, by enabling accurate and safe targeted delivery of stem cells through the movement of a magnetically powered microrobot via the intranasal pathway.”

New four-legged robots designed to work together to accomplish difficult tasks

Quantity has a quality all its own, and that seems to hold true in robotics as well. Researchers at the University of Notre Dame report having successfully designed and built multi-legged robots that can navigate difficult terrain and work together to perform various tasks.

Image credits University of Notre Dame / Yasemin Ozkan-Aydin.

Nature is no stranger to the concept of cooperation. We ourselves are a great example of it at work, but insects such as ants and bees showcase what can be done when even tiny actors join hands. Roboticists have long been keen to mimic such abilities in their creations, and to instill them especially in small frames.

New research places us squarely on the path towards such an objective.

Silicon swarm

“Legged robots can navigate challenging environments such as rough terrain and tight spaces, and the use of limbs offers effective body support, enables rapid maneuverability and facilitates obstacle crossing,” says Yasemin Ozkan-Aydin, an assistant professor of electrical engineering at the University of Notre Dame, who designed the robots.

“However, legged robots face unique mobility challenges in terrestrial environments, which results in reduced locomotor performance.”

The collective behavior of birds, ants, and other social insect species has been a great source of inspiration for Ozkan-Aydin. In particular, she was fascinated by their ability to work together to perform tasks that would be impossible for a single individual of the species to perform. She set out to try and instill the same capabilities in her own creations.

Although collective behaviors have been explored in flying and underwater robots, land-based robots must contend with particular challenges that the other two do not face. Traversing complex terrain, for example, is one such challenge.

Ozkan-Aydin started from the idea that a physical connection between individual bots could be used to enhance their overall mobility. The legged robots she designed will attempt to perform tasks such as moving a light object or navigating a smooth surface on their own but, if the task proves to be too great for them alone, several robots will physically connect to one another to form a larger, multi-legged system. Collectively, they will work to overcome the issue.

“When ants collect or transport objects, if one comes upon an obstacle, the group works collectively to overcome that obstacle. If there’s a gap in the path, for example, they will form a bridge so the other ants can travel across — and that is the inspiration for this study,” she said.

“Through robotics we’re able to gain a better understanding of the dynamics and collective behaviors of these biological systems and explore how we might be able to use this kind of technology in the future.”

Each individual bot measures around 15 to 20 centimeters (6 to 8 inches) in length, and they were built using a 3D printer. They carry their own lithium polymer battery, a microcontroller, and three sensors: a light sensor at the front and two magnetic touch sensors, one at the front and one at the back, which allow them to connect to one another. They move around on four flexible legs, a setup that Ozkan-Aydin says reduces their need for sensors and their overall complexity.

She designed and built the robots in early 2020 and, due to the pandemic, performed much of her experimentation at home or in her yard. During that time, the robots were tested over grass, mulch, leaves, and acorns. Their ability to cross flat and structured surfaces was tested over particle board, stairs made from insulation foam, a shaggy carpet, and particle board with rectangular wooden blocks glued on to simulate rough terrain.

During this time, Ozkan-Aydin programmed the robots so that when one of them became stuck, it would send a signal to the others to come and link up with it, helping the group traverse the obstacle together.

“You don’t need additional sensors to detect obstacles because the flexibility in the legs helps the robot to move right past them,” said Ozkan-Aydin. “They can test for gaps in a path, building a bridge with their bodies; move objects individually; or connect to move objects collectively in different types of environments, not dissimilar to ants.”
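The “call for help” behavior she describes can be pictured with a toy sketch like the one below. The class, method names, and the stuck-detection threshold are all hypothetical; they only illustrate the stuck-signal-then-link-up idea, not the robots' actual firmware.

```python
# Toy sketch: a robot that detects it is stuck asks its neighbours to attach,
# and the connected group then tries the obstacle together.

class SwarmBot:
    def __init__(self, name):
        self.name = name
        self.linked_to = []

    def is_stuck(self, progress_per_second, threshold=0.01):
        # e.g. legs cycling but almost no forward progress being made
        return progress_per_second < threshold

    def broadcast_help(self, neighbours):
        for other in neighbours:
            other.link_with(self)

    def link_with(self, stuck_bot):
        # In hardware this would be the magnetic connectors at front/back.
        self.linked_to.append(stuck_bot)
        stuck_bot.linked_to.append(self)

a, b, c = SwarmBot("a"), SwarmBot("b"), SwarmBot("c")
if a.is_stuck(progress_per_second=0.0):
    a.broadcast_help([b, c])
print([bot.name for bot in a.linked_to])   # ['b', 'c']
```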

There are still improvements that can be made to the design, she explains. However, the intention wasn’t to design the perfect robot; what she hopes for is that her findings will help spur further development of low-cost, cooperative robots that can perform real-world tasks such as search-and-rescue operations, collective transport of various objects, environmental monitoring, or even space exploration. In the future, she will be focusing on improving the control, sensing abilities, and power autonomy of the robots.

“For functional swarm systems, the battery technology needs to be improved,” she said. “We need small batteries that can provide more power, ideally lasting more than 10 hours. Otherwise, using this type of system in the real world isn’t sustainable.”

“You need to think about how the robots would function in the real world, so you need to think about how much power is required, the size of the battery you use. Everything is limited so you need to make decisions with every part of the machine.”

The paper “Self-reconfigurable multilegged robot swarms collectively accomplish challenging terradynamic tasks” has been published in the journal Science Robotics.

Submersible robots help us better understand ocean health and carbon flows

Floating robots could become indispensable in helping us monitor the health of ocean ecosystems and the flow of carbon between the atmosphere and oceans, according to a new study.

The microscopic marine plants and animals that make up plankton are the bedrock of ocean ecosystems. They’re essential for the well-being of everything that swims, but they’re also very important for our own comfort and well-being. Plankton is one of the largest single sources of oxygen on the planet, and it consumes a lot of CO2 in the process. This process is known as marine primary productivity.

Knowing how they’re faring, then, would be a great help. Floating robots can help us out in that regard, according to a new paper.

Floats my boats

“Based on imperfect computer models, we’ve predicted primary production by marine phytoplankton will decrease in a warmer ocean, but we didn’t have a way to make global-scale measurements to verify models. Now we do,” said Monterey Bay Aquarium Research Institute (MBARI) Senior Scientist Ken Johnson, first author of the paper.

Together with former MBARI postdoctoral fellow Mariana Bif, Johnson shows how a fleet of marine robots could completely change our understanding of primary productivity on a global scale. Data from these crafts would allow researchers to more accurately model the flow of carbon between the atmosphere and the ocean, thus improving our understanding of the global carbon cycle.

Furthermore, the duo explains, shifts in phytoplankton productivity can have significant effects on all life on Earth by changing how much carbon oceans absorb, and by altering oceanic food webs. The latter can easily impact human food security, as the oceans are a prime source of food for communities all over the world. In the context of our changing climate, it’s especially important to know with accuracy how much carbon plankton can scrub out of the atmosphere, and what factors influence this quantity.

Part of what makes the ocean such a good carbon sink is that dead organic matter sinks to the bottom. Phytoplankton grows by taking up carbon dioxide, and is in turn consumed by other organisms, such as fish. As these eventually die, they sink to the bottom of the sea, where they’re decomposed by bacteria, releasing carbon in the process. However, because this happens at great depths, the carbon is effectively prevented from returning to the atmosphere for very long periods of time. Generally, it seeps into deep-water sediments and stays there for millions of years or more.

That being said, this process is very sensitive to environmental factors such as changes in climate. While we understand that this happens, we’ve not been able to actually monitor how primary productivity is responding to climate change on a global scale, as most of it happens in the depths of the oceans.

“We might expect global primary productivity to change with a warming climate,” explained Johnson. “It might go up in some places, down in others, but we don’t have a good grip on how those will balance.”

“Satellites can be used to make global maps of primary productivity, but the values are based on models and aren’t direct measurements,” he added.

Autonomous robots could help us get the data we need, the study argues. For starters, it’s much easier to build robots that can withstand the humongous pressures of the deep ocean than it is to build equivalent manned submarines. Secondly, robots are mass-producible for relatively little cost. Human crews are expensive and slow to train — they’re also quite limited in availability. Finally, robots can operate for much longer periods of time than human crews, and nobody needs to risk their life in the process.

The authors point to the deployment of Biogeochemical-Argo (BGC-Argo) floats across the globe as a great example of how robots can help monitor primary productivity. These automated floats can measure temperature, salinity, oxygen, pH, chlorophyll, and nutrient content in marine environments, at depths of up to 2,000 meters (6,600 ft). A float can perform its monitoring tasks autonomously, shifting between different depths and supplying live data to researchers onshore. These robots have been deployed in increasing numbers over the past decade, providing reliable — but as of yet, still sparse — measurements of oxygen production across the globe.

Although the data they’ve been feeding us hasn’t told us anything new, this is the first time we’ve been able to measure primary productivity directly and quantitatively.

“Oxygen goes up in the day due to photosynthesis, down at night due to respiration—if you can get the daily cycle of oxygen, you have a measurement of primary productivity,” explained Johnson.
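Johnson's point can be turned into a back-of-the-envelope calculation: the daytime oxygen trend reflects photosynthesis minus respiration, while the nighttime trend reflects respiration alone, so combining the two yields a rough gross productivity estimate. The numbers below are made up, and real BGC-Argo processing involves many more corrections (air-sea gas exchange, mixing, and so on).

```python
# Hypothetical hourly oxygen trends from a float, mmol O2 per m^3 per hour.
day_rates = [0.8, 1.1, 0.9]     # daylight: photosynthesis minus respiration
night_rates = [-0.4, -0.5]      # night: respiration only

net_day = sum(day_rates) / len(day_rates)            # photosynthesis - respiration
respiration = -sum(night_rates) / len(night_rates)   # respiration alone

# Gross primary production = net daytime change + respiration it had to offset.
gross_primary_production = net_day + respiration
print(f"GPP estimate: {gross_primary_production:.2f} mmol O2 m^-3 h^-1")
```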

In order to confirm that these robots were actually performing their job reliably, the team compared primary productivity estimates computed from the BGC-Argo floats to ship-based sampling data in two regions: the Hawaii Ocean Time-series (HOT) Station and the Bermuda Atlantic Time-series Station (BATS). The data from these two sources matched over several years, proving the reliability of the system.

“We can’t yet say if there is change in ocean primary productivity because our time series is too short,” cautioned Bif. “But it establishes a current baseline from which we might detect future change. We hope that our estimates will be incorporated into models, including those used for satellites, to improve their performance.”

Seeing as we have drones flying about the atmosphere taking pictures of everything and anything, it only makes sense that we’d eventually have some doing the same underwater. I am personally very thrilled to see robots taking on the deepest depths. The ocean is a fascinating place, but I’m also terrified of drowning, so I’ll probably never work up the courage to actually explore it. Hopefully, our automated friends will do the work for us and help us understand what is still a very much unexplored frontier of Earth.

The paper “Constraint on net primary productivity of the global ocean by Argo oxygen measurements” has been published in the journal Nature.

Ingenuity has flown a full mile over Mars, and broken its altitude record

With its 10th flight on Mars completed just yesterday, NASA’s Ingenuity helicopter has now flown more than a mile through the skies of our red neighbor.

Illustration of the Ingenuity Mars Helicopter. Image credits GPA Photo Archive / Flickr.

In a Twitter post on Sunday, NASA confirmed that Ingenuity flew over the “Raised Ridges”, part of a fracture system inside Jezero Crater that researchers have been looking to investigate for some time now. These fractures can act as pathways for fluids underground, so if there is (or ever was) water on Mars, they would hold signs of its passing. This marked the 10th flight for the helicopter drone, and it took the craft’s total distance flown past one full mile.

Humble beginnings

“With the Mars Helicopter’s flight success today, we crossed its 1-mile total distance flown to date,” officials with NASA’s Jet Propulsion Laboratory in Pasadena, California wrote in an Instagram update late Saturday. JPL is home to the mission control for Perseverance and Ingenuity.

In an earlier Tweet, Ingenuity operations lead Teddy Tzanetos described the planned flight in a status update, calling it the most complex mission the drone has undergone so far, in terms of both navigation and performance. The helicopter was sent to investigate (fly over and photograph) 10 sites, with the mission estimated to last around 165 seconds.

Although the full details of the mission haven’t been published yet, Tzanetos explained on Friday that Ingenuity would take off from its sixth airfield and move south-by-southwest about 165 feet (50 meters). From there, it was scheduled to take two pictures of Raised Ridges from two different angles, both looking south, and then fly west and northwest to snap further images of the area. NASA will use these images to create stereo views of Raised Ridges.

What we do know is that during the flight, Ingenuity achieved a new record height: 40 feet (12 meters) above ground.

Ingenuity was meant to operate on Mars for around 30 days; it has now been hard at work for 107. It went well beyond its duties during this time, allowing ground control to test out several flight maneuvers and undergoing two software updates: one to improve its flight speed, the other to refine its camera’s color-capturing abilities. To date, it has flown for 14 minutes on Mars, a bit over 112% of the performance target used for tech demos back on Earth.

Still, it shows no signs of slowing down anytime soon. Since it’s running on solar panels, fuel isn’t a concern, and NASA has already extended its operations once (after Ingenuity completed its primary mission in April). We’re likely to see a similar extension in the future, as the craft is providing invaluable reconnaissance from the skies of Mars.

A drone-flying software outperforms human pilots for the first time

The rise of the machines won’t be as dramatic as the one in Terminator or The Animatrix if people can simply outrun the murderbots. And, currently, we can do that quite comfortably. Some robots can walk, some can run, but they tend to fall over pretty often, and most are not that fast. Autonomous flying drones have also had a very hard time keeping up with human-controlled ones.

Image credits Robotics and Perception Group, University of Zurich.

New research at the University of Zurich, however, might finally give robots the edge they need to catch up to their makers — or, at least, give flying drones that edge. The team developed a new algorithm that calculates optimal trajectories for each drone, taking into account their individual capabilities and limitations.

Speed boost

“Our drone beat the fastest lap of two world-class human pilots on an experimental race track,” says Davide Scaramuzza, who heads the Robotics and Perception Group at UZH and is the corresponding author of the paper. “The novelty of the algorithm is that it is the first to generate time-optimal trajectories that fully consider the drones’ limitations.”

“The key idea is, rather than assigning sections of the flight path to specific waypoints, that our algorithm just tells the drone to pass through all waypoints, but not how or when to do that,” adds Philipp Foehn, Ph.D. student and first author of the paper.

Battery life is one of the most stringent constraints drones face today, so they need to fly their routes quickly. The approach most drone software takes is to break the flight route down into a series of waypoints and then calculate the best trajectory, acceleration, and deceleration pattern for each segment.
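A sketch of that conventional, segment-by-segment planning might look like the following, where each leg between waypoints gets an accelerate-cruise-decelerate profile that starts and ends at rest. The speeds, accelerations, and distances are placeholders, and this is exactly the kind of simplification the UZH algorithm is designed to avoid.

```python
import math

def segment_time(distance, v_max, accel):
    """Time to cover one leg with a trapezoidal speed profile, starting and ending at rest."""
    d_ramp = v_max ** 2 / accel          # distance spent speeding up plus slowing down
    if distance <= d_ramp:
        # Triangle profile: the drone never reaches v_max on this leg.
        return 2 * math.sqrt(distance / accel)
    cruise = (distance - d_ramp) / v_max
    return 2 * (v_max / accel) + cruise

# Hypothetical course: metres between consecutive waypoints.
legs = [10.0, 25.0, 7.5]
total = sum(segment_time(d, v_max=15.0, accel=20.0) for d in legs)
print(f"estimated lap time: {total:.2f} s")
```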

Previous drone piloting software relied on various simplifications of the vehicle’s systems — such as the configuration of its rotors or flight path — in order to save on processing power and run more smoothly (which in turn saves on battery power). While practical, such an approach also produces suboptimal results, in the form of lower speeds, as the program works with approximations.

I won’t go into the details of the code here, mainly because I don’t understand code. But results-wise, the drone was pitted against two human pilots — all three navigating the same quadrotor drone — through a race circuit, and came in first place. The team set up cameras along the route to monitor the drones’ movements and to feed real-time information to the algorithm. The human pilots were allowed to train on the course before the race.

In the end, the algorithm was faster than the pilots on every lap, and its performance was more consistent between laps. The team explains that this isn’t very surprising, as once the algorithm identifies the best path to take, it can reproduce it accurately time and time again, unlike human pilots.

Although promising, the algorithm still needs some tweaking. For starters, it consumes a lot of processing power right now: it took the system one hour to calculate the optimal trajectory for the drone. Furthermore, it still relies on external cameras to keep track of the drone, and ideally, we’d want onboard cameras to handle this step.

The paper “Time-optimal planning for quadrotor waypoint flight” has been published in the journal Science Robotics.

New AI training approach could finally allow computers to have imaginations

Researchers at the University of Southern California (USC) are trying to teach a computer not how to love, but how to imagine.

Image credits Bruno Marie.

People generally don’t have any issue imagining things. We’re pretty good at starting from scratch, and we’re even better at using our experience to imagine completely new things. For example, all of you reading this could probably imagine the Great Wall of China but made from spaghetti and meatballs, or a cat in a pirate hat.

Computers, however, are notoriously bad at this. It’s not their fault; we’ve built them to be fast, accurate, and precise, not to waste their time daydreaming; that’s our job. But giving computers the ability to imagine, to envision an object with different attributes or to create concepts from scratch, could definitely be useful. And although machine learning experts have grappled with this issue for years, we’ve made precious little progress.

However, a new AI developed at USC mimics the same processes that our brains use to fuel our imagination, and it is able to create entirely new objects with a wide range of attributes.

Creative computing

“We were inspired by human visual generalization capabilities to try to simulate human imagination in machines,” said lead author Yunhao Ge, a computer science PhD student working under the supervision of Laurent Itti, a computer science professor.

“Humans can separate their learned knowledge by attributes — for instance, shape, pose, position, color — and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks.”

In other words, as humans, it’s easy to envision an object with different attributes. But, despite advances in deep neural networks that match or surpass human performance in certain tasks, computers still struggle with the very human skill of “imagination.”

One of the largest hurdles we’ve faced in teaching computers how to imagine is that, generally speaking, they’re quite limited in what they recognize.

Let’s say we want to make an AI that can design buildings. We train such systems today by feeding them a lot of data. In our case, this would be a bunch of pictures of buildings. By looking at them, the theory goes, the AI can understand what makes a building a building, and the proper way to design one. In other words, it understands its attributes, which can then be replicated or checked against. With these in hand, it should be able to extrapolate — create virtually endless examples of new buildings.

The issue is that our AIs are still, for the most part, trained to understand features, not attributes. That means things like particular patterns of pixels, or which words are most likely to follow a given word. A simple but imperfect way to describe this is that a properly trained AI today can recognize a building as a building, but it has no idea what a building actually is, what it’s used for, or how it’s used. It can check whether a picture looks like a picture of a wall, and that’s about it. For most practical purposes today, this type of training is sufficient.

Still, in order to push beyond this point, the team used a process called disentanglement. This is the sort of process used to create deepfakes, for example, by ‘disentangling’ or separating a person’s face movements from their identity. Using this process, one person’s appearance can be replaced with another’s, while maintaining the former’s movements and speech.

The team fed groups of sample images into the AI, instead of using one picture at a time as traditional training approaches do. They then tasked the program with identifying the similarities between them, a step called “controllable disentangled representation learning”. The information gleaned here was then recombined through “controllable novel image synthesis”, which is programmer-speak for ‘imagining things’.
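The “disentangle, then recombine” idea can be illustrated with a toy sketch: if a representation cleanly separates attributes such as shape, color, and background, new combinations can be produced by swapping those parts between examples. The dictionaries and attribute names below are stand-ins for learned latent vectors, not the paper's actual code.

```python
import random

def disentangle(sample):
    # Pretend an encoder has already factored the sample into attributes.
    return {"shape": sample["shape"],
            "color": sample["color"],
            "background": sample["background"]}

def recombine(sources):
    # Pick each attribute from a (possibly different) source example.
    return {attr: random.choice(sources)[attr]
            for attr in ("shape", "color", "background")}

sample_a = {"shape": "cube", "color": "red", "background": "grass"}
sample_b = {"shape": "sphere", "color": "blue", "background": "sand"}

novel = recombine([disentangle(sample_a), disentangle(sample_b)])
print(novel)   # e.g. a blue cube on sand -- a combination never seen in training
```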

It’s still much cruder than what we can do with our own brains but, as far as the underlying mechanisms go, the processes aren’t very different at all.

“For instance, take the Transformer movie as an example,” said Ge. “It can take the shape of Megatron car, the color and pose of a yellow Bumblebee car, and the background of New York’s Times Square. The result will be a Bumblebee-colored Megatron car driving in Times Square, even if this sample was not witnessed during the training session.”

The AI generated a dataset of 1.56 million images from the data used to train it, the team adds.

Artificial imagination would be a huge boon especially in research, for example in efforts to discover new drugs. We often get the idea from movies that once a computer becomes smart enough, it can take over the world and the human race effortlessly. Definitely thrilling stuff. But the fact of the matter remains that all the processing power in the world won’t be able to devise new medicine, for example, without the ability to first imagine something. The processing power can check (with the right code) how some molecules interact. But in order to do that, you have to first think of interacting those molecules — and that’s handled by imagination.

“Deep learning has already demonstrated unsurpassed performance and promise in many domains, but all too often this has happened through shallow mimicry, and without a deeper understanding of the separate attributes that make each object unique,” said Itti.

“This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in A.I. systems, bringing them closer to humans’ understanding of the world.”

The paper “Zero-shot Synthesis with Group-Supervised Learning” has been presented at the 2021 International Conference on Learning Representations and is available here.

Drones can elicit emotions from people, which could help integrate them into society more easily

Could we learn to love a robot? Maybe. New research suggests that drones, at least, could elicit an emotional response in people if we put cute little faces on them.

A set of rendered faces representing six basic emotions in three different intensity levels that were used in the study. Image credits Viviane Herdel.

Researchers at Ben-Gurion University of the Negev (BGU) have examined how people react to a wide range of facial expressions depicted on a drone. The study aims to deepen our understanding of how flying drones might one day integrate into society, and how human-robot interactions in general can be made to feel more natural, an area of research that hasn’t been explored much until now.

Electronic emotions

“There is a lack of research on how drones are perceived and understood by humans, which is vastly different than ground robots,” says Prof. Jessica Cauchard, lead author of the paper.

“For the first time, we showed that people can recognize different emotions and discriminate between different emotion intensities.”

The research consisted of two experiments, both using drones that could display stylized facial expressions to convey basic emotions to the participants. The goal was to find out how people would react to these drone-borne expressions.

Four core features were used to compose each of the facial expressions in the study: eyes, eyebrows, pupils, and mouth. Of the emotions the drones could convey, five (joy, sadness, fear, anger, surprise) were recognized with high accuracy from static images, and four of these (joy, surprise, sadness, anger) were recognized most easily in dynamic expressions conveyed through video. However, people had a hard time recognizing disgust no matter how the drone conveyed it.
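A hypothetical sketch of how such compositional faces might be parameterized is shown below; the feature values, presets, and scaling rule are all invented, since the study's renderings were hand-designed by the researchers rather than generated this way.

```python
def compose_expression(emotion, intensity=1.0):
    """Return invented parameters for the four facial features at a given intensity."""
    presets = {
        "joy":     {"eyebrows": +0.2, "eyes": 0.9, "pupils": 0.5, "mouth": +0.8},
        "sadness": {"eyebrows": -0.6, "eyes": 0.6, "pupils": 0.4, "mouth": -0.7},
        "anger":   {"eyebrows": -0.9, "eyes": 1.0, "pupils": 0.3, "mouth": -0.4},
    }
    base = presets[emotion]
    # Scale every feature by the intensity: a crude way to get milder or stronger versions.
    return {feature: value * intensity for feature, value in base.items()}

print(compose_expression("sadness", intensity=0.5))  # a mildly sad face
```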

What the team did find particularly surprising, however, is how involved the participants themselves were with understanding these emotions.

“Participants were further affected by the drone and presented different responses, including empathy, depending on the drone’s emotion,” Prof. Cauchard says. “Surprisingly, participants created narratives around the drone’s emotional states and included themselves in these scenarios.”


Based on the findings, the authors list a number of recommendations that they believe will make drones more easily acceptable in social situations or for use in emotional support. The main recommendations include adding anthropomorphic features to the drones, using the five basic emotions for the most part (as these are easily understood), and using empathetic responses in health and behavior change applications, as they make people more likely to listen to instructions from the drone.

The paper “Drone in Love: Emotional Perception of Facial Expressions on Flying Robots” has been published by the Association for Computing Machinery and was presented at the 2021 CHI Conference on Human Factors in Computing Systems.

Spanish companies team up to create the first paella-cooking robot

It’s better than your mom’s paella, the robot’s creators say, and while the purists out there will likely huff and puff, this robot could be of great help in the kitchen.

Paella is one of those foods with an almost mythical quality around them. Seemingly, only the initiated can whip up a delicious dish, masterfully blending the rice with the other ingredients. But two companies — robot manufacturer br5 (Be a Robot 5) and paella stove manufacturer Mimcook — beg to differ.

It’s true, making paella takes some skill, but that skill can be taught, not just to humans, but to robots as well. The two companies teamed up to develop the world’s first robotic paellero, revealing it at a food fair last month.

It works like this: you set the program, load the rice, the sofrito, the seafood, the stock, and just leave the robot to do its thing. The robotic arm is hooked up to a computerized stove, and together, the two can whip up a reportedly delicious paella in no time.

The advantages of the robot are obvious: it does everything as planned and doesn’t get distracted. When stirring rice, it’s easy for a human to not pay enough attention or to get distracted by some other task (or a text message), resulting in burned rice or some other imperfection. The robot will do none of that.

“It doesn’t make sense for us to be stirring rice – especially because you’ll be looking at WhatsApp while you’re doing it and it’ll burn. That won’t happen with a robot,” said Enrique Lillo, founder of Be a Robot 5, to The Guardian.

The company specializes in food-making robots, and it emphasizes that this is not a ‘paella-making robot’ but a rice-making robot, a distinction aimed at avoiding the ire of Valencia, where the dish originated.

The robotic arm makes paella because it’s connected to a specialized paella stove (after all, the paella is named after the pan it’s made in). Connect it to a different type of stove and it could make burgers, pizzas, or croissants, as the company has previously demonstrated.

The robot is already causing quite a stir, drawing the interest of many companies but also protests from people who fear the robots will take their jobs. But its creators argue that it’s not meant to take people’s jobs, just help them by doing the mundane things and allowing them to focus on what matters.

“At the end of the day, it’s an assistant. I like to say it’s a bit like the orange-juicing machines where you put oranges in the top and get juice out of the bottom. That’s a robot too – people just don’t realise it – and so is a coffee-vending machine. No one looks at those and goes: ‘Crikey! It’s stealing jobs from people!’ No. It’s elevating human capacity.”

Automated labs are poised to revamp research forever

You may have heard that artificial intelligence-based systems will eventually replace jobs involving routine tasks, from bank tellers and retail salespersons to truck drivers and couriers. But the disruption of the labor market will likely be much broader, and may even involve jobs that we generally wouldn’t consider replaceable by machines — even those of researchers in the lab.

Case in point, there are now numerous projects led by academics and major drug companies showcasing how robotics and artificial intelligence can come together to revolutionize research.

RoboRXN synthesis robot before it is loaded with reagents. In use, it is sealed and under vacuum. Credit: Michael Buholzer.

Reporting in the journal Science Advances, a team led by Alán Aspuru-Guzik, a professor of chemistry and computer science at the University of Toronto, described a fully automated research laboratory where an AI controls the synthesis of thin-film materials. The algorithms are completely in charge of identifying, synthesizing, and validating novel molecules. While doing so, the AI constantly produces and reprocesses data, which it then uses to refine its chemical synthesis process.

Another project, from the University of Liverpool, demonstrated an automated lab that performed 700 experiments by itself over 8 days, optimizing a photocatalytic process for generating hydrogen from water.

Similar to a factory or Amazon warehouse robot, this robo-researcher uses specialized grippers to handle samples and operate liquid and solid chemical dispensers. It can also run a gas chromatograph, as well as other lab instruments such as those from Excedr.

After adjusting the concentrations of 10 chemicals using artificial intelligence, this ingenious mechanical researcher arrived at a mixture that produced 6 times more hydrogen than the starting conditions did.
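The closed loop behind such a result looks roughly like the sketch below: propose a mixture, run the experiment, measure the hydrogen yield, and keep whatever improves on the best result so far. A simple random hill-climber and a placeholder measure_hydrogen() stand in here for the Liverpool team's real search algorithm and real experiments.

```python
import random

N_CHEMICALS = 10

def measure_hydrogen(concentrations):
    # Placeholder "experiment": pretend output peaks when every concentration is 0.7.
    return -sum((c - 0.7) ** 2 for c in concentrations)

def propose(best, step=0.05):
    # Nudge each concentration a little, clamped to the range [0, 1].
    return [min(1.0, max(0.0, c + random.uniform(-step, step))) for c in best]

best = [random.random() for _ in range(N_CHEMICALS)]
best_yield = measure_hydrogen(best)

for experiment in range(700):          # the Liverpool robot ran on the order of 700 experiments
    candidate = propose(best)
    y = measure_hydrogen(candidate)
    if y > best_yield:                 # keep the new mixture only if it did better
        best, best_yield = candidate, y

print(f"best simulated yield after 700 runs: {best_yield:.4f}")
```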

It took the robot 8 days to perform an optimization that could have taken a human a year of tedious bench work. The researchers at the University of Liverpool estimate that their robot is 1,000 times faster than a human researcher.

A robot moves around a lab 24/7 optimizing hydrogen-generating reactions. Credit:  Andrew I. Cooper/University of Liverpool.

At IBM Zurich, researchers launched RoboRXN, a fully automated synthesis system that builds upon IBM RXN for Chemistry, a free cloud-based software that predicts the result of chemical reactions. By combining computational power with hardware, RoboRXN is basically a robot that executes synthesis and other chemical processes typically reserved for lab technicians and qualified researchers.

In many ways, these automated labs are like self-driving cars in the sense that human control of a complex machine that has to undergo complicated maneuvers has been replaced by an AI.

Of course, these automated labs are by no means perfect. While they still need a lot of work, some may already be suitable for niche applications. One obvious example is material synthesis for rapid prototyping, be it for new photovoltaic cells or for the aerospace industry.

The lab automation market is likely to reach more than US$16 billion by 2022, with the U.S., followed by Europe and Japan, expected to be the largest markets.

That’s not to say that humans will be out of the picture — far from it. Every lab will still need a human researcher to set the big-picture goal. Imagination and creativity will remain in the human domain for a very long time to come, so if you plan on staying relevant in the future, your best bet is to double down on these qualities.

The best gadgets and robots of 2020

In the midst of all the crazy things that have happened in our pandemic year, it’s easy to lose track of other developments. But despite the hardship of the lockdowns and the pandemic itself, the world isn’t sitting still. We’ve seen some stunning advancements not related to the pandemic, including some very nifty gadgets. Here are just some of them.

The robot dog by the name of Spot

Remember those unsettling videos of robot dogs trying to go down stairs and open doors? Spot is their leader. The robot by Boston Dynamics has been in development for a few years now, but it went on sale in 2020 for the hefty sum of $74,500, and this was also the year that Spot was really put to good use.

Spot is agile, robust, and can navigate rugged terrain with unprecedented mobility. Its software is downloadable and upgradeable (available on GitHub) if you’re up for the task, and are willing to pay the price of a luxury car to get the robot itself.

https://youtu.be/tUYpvvzNanU

Spot isn’t exactly a companion (though he can play that part too, and he’s actually a pretty good dancer); he’s more of a utility dog. From patrolling hazardous sites and abandoned buildings to monitoring construction sites and offshore oil rigs, Spot can be sent where it would be too dangerous for humans. Different companies (and even governments) are already putting the robot dog to good use. For instance, Spot has been patrolling the parks of Singapore, warning people not to stand too close to each other.

A little bit dystopian? Maybe. Useful? Definitely.

Drones taking to the oceans

The Geneinno T1 drone. Image credits: Geneinno.

Drones are as cool and useful as ever (and they’re actually becoming more and more present in science and environmental monitoring), but they’re not exactly a new gadget. Well, at least air drones. Underwater drones, however, are pretty new and interesting.

An underwater drone is a submarine in the same sense a ‘regular’ drone is a helicopter. Biologists have been using ROVs (Remotely Operated underwater Vehicles) for a few years to study corals, fish, and explore the subsurface — now, you can get your own version. Several companies are already working in the field, but US-based Geneinno seems to be one of the pioneers in the field, and their ROVs (or underwater drones, which just sounds better) are now available to the public.

The Lego Bugatti

Image credits: LEGO

Nowadays, you can build anything and everything from Lego — but few things are as awesome as the company’s Technic line. You basically build your own realistic, fancy model cars, from the likes of a Ferrari or a Lamborghini to a Jeep Wrangler or even a race plane.

The cars have accurate real-life functions, such as a gearbox and a steering wheel, connected just like the real thing (there’s even a Lego engine). This is not for the inexperienced builder, nor for those without patience, but it can make for a stunning little home gadget. If you’re looking to build your own fancy Lego car, this is as good as it gets in 2020.

The Raspberry Pi Compute Module 4 — a card-sized computer

Image credits: Raspberry Pi foundation.

What you see here, just slightly bigger than a coin, is a full-on computer — and it goes for about $25. The Raspberry Pi Foundation is already well-known to those interested in the Internet of Things (IoT) and gadgets, as well as those looking for cheap computing alternatives.

Raspberry Pis are small, single-board computers that can function either stand-alone or as part of other applications (typically involving some form of sensors). The new mini version includes a 64-bit quad-core processor, graphics support, a hardware decoder, HDMI and USB ports, a PCI interface, camera interfaces, at least a gigabyte of memory, flash storage, clock and battery backup, a wireless option, and an Ethernet option. If you’d like to start diving into the world of IoT or just want to try some offbeat computing, this is definitely one of the best places to start — and it won’t break your budget either.

Futuristic AI fitness work-from-home mirrors

Image credits: Fuseproject.

Staying fit is never easy, especially in a year like this when we’ve had to deal with the pandemic and all the stress and uncertainty — while mostly staying home. But somehow, one feels that having a futuristic AI mirror assistant could help with that.

The new Forme by Fuseproject is a 43-inch screen with 4k resolution and stowable arms for resistance training. It’s your very own one-on-one personal assistant working out with you in the comfort of your home. You can do various types of resistance training, and the screen helps you see what your virtual trainer is doing and try to do the same thing (you can also see yourself and improve your form). You can opt for pre-recorded workouts or a specialized routine, but the machine’s AI also analyzes your workout schedule and progress and constantly tweaks and adapts for optimum performance.

The world’s first graphene headphones

Since its recent discovery, graphene has been touted as a wonder material with myriad applications ranging from renewable energy to spacesuits. While graphene has undoubtedly had an important impact on science, we, the profane consumers, are happy to see it make an impact on something more down to earth: music.

Ora headphones are the world’s first graphene headphones, supported by one of the very inventors of graphene, Nobel Laureate Konstantin Novoselov, and they’re one of the first graphene products to hit the shelves. The quality of the headphones shows in the sound quality, and the design is quite unique.

The Robot kitchen

Robots can already do many things, but if they can’t cook a good dinner, how good are they really? Well, luckily, that’s no longer a problem — at least if you can spare a six-figure sum for the fully automated Moley kitchen. The system features two robotic arms and an array of sensors and cameras that not only cook your meal but also wash everything up after they’re done.

For now, the system can produce 30 dishes (all developed by top chefs), but the digital menu will soon be expanded to over 5,000 choices. It’s truly one robot worth sinking your teeth into.

A wearable sensor that tells you what’s in your blood

Image credits: Robson Rosa da Silva.

This noninvasive skin-adherent sensor, printed on microbial nanocellulose, is essentially a 1.5 by 0.5 cm thin sheet that can detect a range of biomarkers, from sodium and potassium to lactic acid and glucose. It can even be used to track the level of atmospheric pollutants. In addition to medical uses, it could, for instance, be used when working out (to tell you when you should take it easy), or for detecting glucose and warning you when you should lay off the cake.

To make things even better, the material is breathable and doesn’t include plastic. The Brazilian researchers who developed it are now looking to see what products would offer the best integration.

The Smart Garden 6

Let me guess — you’re still using plastic pots to grow plants in? That’s so 2019. This small, chic automated plant grower sold by the Finnish Design Shop lets you grow your own herbs and salads with minimum hassle.

Not only does it pump its own water from time to time (you just need to fill the tank), but it also has 18 high-end LED lights which, according to the producer, “provide the best spectrums and intensity needed to create perfect germination and growth conditions for your greens.”

***

Disclaimer: obviously, this is meant to be a subjective list and does not reflect any endorsement. If you feel any gadgets should be added to this list, feel free to mention them in the comment section.

Robot workspace to get human touch remotely

It’s been fairly easy for some to adopt a remote working model during the pandemic, but manufacturing and warehouse workers have had it rougher — some tasks just need people to be physically present in the workplace.

But now, one team is working on a solution for the traditional factory floor that could allow more workers to carry out their labor from home.

The proposed human-in-the-loop assembly system. The robot workspace can be manipulated remotely. Image credits: Columbia Engineering.

Columbia Engineering announced that researchers have won a grant to develop the project titled “FMRG: Adaptable and Scalable Robot Teleoperation for Human-in-the-Loop Assembly.” The project’s raw ingredients include machine perception, human-computer interaction, human-robot interaction, and machine learning.

They have come up with a "physical-scene-understanding algorithm" that converts camera observations of a robot workspace into a virtual 3D-scene representation.

Handling 3D models

The system analyzes the robot worksite and can change it into a visual physical scene representation. Each object is represented by a 3D model that mimics its shape, size, and physical attributes. A human operator gets to specify the assembly goal by manipulating these virtual 3D models.

A reinforcement learning algorithm infers a planning policy given the task goals and the robot configuration. The algorithm can also estimate its own probability of success and use that estimate to decide when to request human assistance; otherwise, it carries out its work automatically.
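That confidence-gated handover might be sketched as follows. The plan_step function, the threshold, and the messages are all assumptions made for illustration; they are not the project's actual interfaces.

```python
import random

CONFIDENCE_THRESHOLD = 0.8   # assumed cut-off below which the robot asks for help

def plan_step(task_goal, robot_state):
    # Placeholder for the learned policy: returns an action and the policy's
    # own estimate of how likely that action is to succeed.
    action = f"place part for '{task_goal}'"
    predicted_success = random.uniform(0.5, 1.0)
    return action, predicted_success

def execute(task_goal, robot_state):
    action, confidence = plan_step(task_goal, robot_state)
    if confidence < CONFIDENCE_THRESHOLD:
        # Here the remote operator would adjust the virtual 3D scene or the goal.
        print(f"[robot] unsure ({confidence:.2f}) -> requesting operator input for: {action}")
    else:
        print(f"[robot] confident ({confidence:.2f}) -> executing: {action}")

execute("mount bracket", robot_state={})
```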

The project is led by Shuran Song, an assistant professor of computer science at Columbia University. She said the system they envision will allow workers who are not trained roboticists to operate the robots, a prospect she finds particularly exciting:

“I am excited to see how this research could eventually provide greater job access to workers regardless of their geographical location or physical ability.”

Automation for the future

The team received $3.7 million in funding from the National Science Foundation (NSF). The NSF stated that the award period runs from January 1 to an estimated end date of December 31, 2025. The award abstract highlights the positive impact such an effort could have on businesses and workers:

“The research will benefit both the manufacturing industry and the workforce by increasing access to manufacturing employment and improving working conditions and safety. By combining human-in-the-loop design with machine learning, this research can broaden the adoption of automation in manufacturing to new tasks. Beyond manufacturing, the research will also lower the entry barrier to using robotic systems for a wide range of real-world applications, such as assistive and service robots.”

The abstract said their team is collaborating with NYDesigns and LaGuardia Community College “to translate research results to industrial partners and develop training programs to educate and prepare the future manufacturing workforce.”

Song is directing the vision-based perception and machine learning algorithm designs for the physical-scene-understanding system. Steven Feiner, professor of computer science at Columbia University, is working on the 3D and VR user interface. Matei Ciocarlie, associate professor of mechanical engineering at Columbia University, is building the robot learning and control algorithms. Before joining the faculty, Ciocarlie was a scientist at Willow Garage and later at Google, and he contributed to the development of the open-source Robot Operating System.

A takeaway: news about robots often draws exasperated remarks about automation costing people their jobs. Here is a project that, once complete, has the potential to use robotics to complement human capabilities rather than replace them.

Nancy Cohen is a contributing author.

A robot near you might soon have a tail to help with balance

New research from the Beijing Institute of Technology wants to steal the design of one of nature’s best balancing devices — the tail — and put it in robots.

A schematic outlining the design of the self-balancing robot tail. Image credits Zhang, Ren & Cheng.

Nature has often faced the same issues that designers and engineers grapple with in their work, but it has had much more time and resources at its disposal to fix them. So researchers in all fields of science aren’t ashamed of stealing some of its solutions when faced with a dead end. Over the past decades, roboticists have routinely had issues in making their creations keep their balance in any but the most ideal of settings. The humble tail might help break that impasse.

Tail tale

The bio-inspired, tail-like mechanism developed by the team can help their robot maintain balance in dynamic environments, the authors explain. The bot is made up of the main body, two wheels, and the tail component. This latter one is controlled by an “adaptive hierarchical sliding mode controller”, a fancy bit of code that allows it to rotate in different directions in an area parallel to the wheels.

In essence, it calculates and implements the tail motions needed to ensure the robot remains stable while moving around its environment.

There’s obviously some very complex math involved here. The authors explain that their system uses estimates of uncertainty to guide the tail. This is based on the Lyapunov stability theorem, a theoretical framework that describes the stability of systems in motion. The tail then moves in specific patterns designed to increase the robot’s stability.
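For readers curious about the math, the textbook Lyapunov condition (stated generically here, not as the paper's specific derivation) says that a system with an equilibrium point is stable if one can find an energy-like function V that is positive away from the equilibrium and never increases along the system's trajectories:

```latex
% Generic (textbook) Lyapunov stability condition, not the paper's derivation.
% For a system \dot{x} = f(x) with equilibrium at x = 0:
V(0) = 0, \qquad V(x) > 0 \ \ \text{for } x \neq 0, \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) \le 0
```

If the derivative is strictly negative away from the equilibrium, the system is asymptotically stable. Roughly speaking, the controller's job is to pick tail motions that keep such a function decreasing despite the uncertainty the system estimates.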

Most approaches to balancing two-wheeled vehicles today rely on collecting the vehicle’s body attitude data using an inertial measurement unit (IMU), a device that can measure forces acting on the robot’s body. This data is then processed and the results are used to determine a balancing strategy, which typically involves adjusting the robot’s tilt. These approaches, the authors explain, typically work well enough — but they wanted to offer an alternative that doesn’t involve tilting the robot’s body.
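As a rough illustration of that conventional approach, here is a generic PD balance loop in Python, with made-up gains and toy dynamics; it is not the Beijing team's tail controller, which avoids tilting the body altogether.

```python
# Generic sketch of the conventional approach: read tilt from the IMU, then
# apply a corrective torque through the wheels. Gains and the toy dynamics
# are made-up illustrations, not the paper's tail-based controller.

KP, KD = 25.0, 3.0   # proportional / derivative gains (arbitrary)
DT = 0.01            # control period, seconds
GRAVITY = 9.81


def balance_torque(tilt_rad, tilt_rate):
    """PD law: push the body back toward upright (tilt = 0)."""
    return -(KP * tilt_rad + KD * tilt_rate)


def simulate(steps=500, tilt=0.2, tilt_rate=0.0):
    """Crude inverted-pendulum-like update, just to show the loop structure."""
    for _ in range(steps):
        torque = balance_torque(tilt, tilt_rate)   # in reality, tilt comes from the IMU
        tilt_accel = GRAVITY * tilt + torque        # toy dynamics, not a real model
        tilt_rate += tilt_accel * DT
        tilt += tilt_rate * DT
    return tilt


if __name__ == "__main__":
    print(f"tilt after 5 s of control: {simulate():.4f} rad")
```

The tail-based approach instead computes tail rotations that restore balance without commanding a body tilt.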

So far, the tail’s performance has only been evaluated in computer simulations, not in physical experiments. Still, these simulations found the approach to be “very promising”, as it was able to stabilize a simulated robot that had lost its balance within around 3.5 seconds. The team hopes that, in the future, their tail will be used to make new or preexisting robot designs even more stable.

The authors are now working on a prototype of the robot so that they can test its performance.

The paper “Control and application of tail-like mechanism in self-balance robot” has been published in the Proceedings of 2020 Chinese Intelligent Systems Conference.

New class of actuators gives nanobots legs (that work)

A new paper brings us one step closer to creating swarms of tiny, mobile robots.

Artist’s rendition of an array of microscopic robots.
Image credits Criss Hohmann.

Science fiction has long foretold sprawling masses of tiny robots performing tasks from manufacturing and medicine to combat — with the most extreme example being the Grey Goo. We’re nowhere near that point yet, but we’re making progress.

A new paper describes the development of a novel class of actuators (devices that can generate motion) that is compatible with current electronics. These actuators are tiny and bend when stimulated with a laser, making them ideal for powering extremely small robots. A lack of proper means of movement has been a severe limitation on our efforts to design very small robots so far, the team explains.

Finding their legs

“What this work shows is proof of concept: we can integrate electronics on a [tiny] robot. The next question is what electronics should you build. How can we make them programmable? What can they sense? How do we incorporate feedback or on-board timing?” lead author Marc Miskin, assistant professor of electrical and systems engineering at the University of Pennsylvania, told me in an email.

“The good news is semiconductor electronics gives us a lot of developed technology for free. We’re working now to put those pieces together to build our next generation of microscopic robots.”

Actuators are the rough equivalent of engines. Although they rarely use the same principles, both are meant to do physical work (motion that can be used to perform a certain task). The lack of an adequate actuator, in terms of both size and compatibility with current electronics, has hampered advances into teeny-tiny robots.

Marc and his team hope to finally offer a solution to this problem. The actuators they developed are small enough to power the legs of robots under 0.1 mm in size (about the width of a strand of human hair). The devices are compatible with silicon-based circuitry, so no special adaptations are needed to work with them in most settings.

These actuators bend in response to a laser pulse to create a walking motion; power, in this case, was supplied by onboard photovoltaics (solar panels). As for the sizes involved here: the team reports that they can fit over one million of their robots on a 4-inch wafer of silicon.

Given that the proof-of-concept robots are surprisingly robust, very resistant to acidity, and small enough to go through a hypodermic (syringe) needle, one particularly exciting possibility is to use them for medical applications or simple biomonitoring in human and animal patients — just like in the movies. I’ve asked Marc what other potential applications they’re excited for, and the possibilities do indeed seem endless:

“We’re thinking about applications in manufacturing (can you use them to form or shape materials at the microscale?), repairing materials (can you fix defects to increase material lifespan?), and using them as mobile sensors (can you send robots into say cracks in a rock or deep in a chemical reactor to make measurements and bring data back).”

However, he’s under no illusions that this will be an easy journey. “These are of course long term goals: right now all our robots can do is walk,” he notes.

Technology and know-how, however, have a way of compounding once released into ‘the wild’ of our economies. The advent of appropriate actuators might just be the nudge needed to walk us into a series of rapid improvements on nanomachines. And I, for one, couldn’t be more excited.

The paper “Electronically integrated, mass-manufactured, microscopic robots” has been published in the journal Nature.

Chewing robot developed to test gum as a potential drug delivery system

Researchers at the University of Bristol (UoB) have created a robot for a peculiar purpose: chewing gum.

Image via Pixabay.

Robots keep coming for our jobs. Today, they’ve taken one of the easier ones — gum chewer. However, rest assured, it’s all in the name of science.

The robot is intended to become a new gold standard for the testing of drug release from chewing gum. It has built-in humanoid jaws which closely replicate our chewing motions, and it releases artificial saliva to allow researchers to estimate the transfer of substances from the gum to a potential user.

I have a mouth and I must chew

“Bioengineering has been used to create an artificial oral environment that closely mimics that found in humans,” says Dr Kazem Alemzadeh, Senior Lecturer in the UoB Department of Mechanical Engineering, who led the study.

“Our research has shown the chewing robot gives pharmaceutical companies the opportunity to investigate medicated chewing gum, with reduced patient exposure and lower costs using this new method.”

Chewing gum is recognized as a possible drug delivery method, but there currently aren’t any reliable ways of testing how much of a particular compound they can release during use.

The team’s theoretical work showed that a robot could be useful for this role — so they set out to build it and test it out.

The team explains that the robot can “closely replicate” the human chewing process. Its jaws are fully enclosed, allowing for the amount of released xylitol (a type of sweetener common in gum) to be measured.

n) shows the final prototype, l) shows a digital model of the robot.
Image credits Kazem Alemzadeh et al., (2020), IEEE Transactions on Biomedical Engineering.

In order to assess the robot, the team had human participants chew the gum and then measured the amount of xylitol it contained after different chewing times. The team also took saliva and artificial saliva samples after 5, 10, 15, and 20 minutes of continuous chewing. The robot’s gum was then tested similarly and compared to that of the human participants.

The release rates between these two chewed gums were pretty similar, the team found. The greatest release of xylitol occurred during the first five minutes. After 20 minutes of chewing, only a low level of this compound remained in the gum, regardless of how it was chewed.
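As a rough illustration of how such a comparison works (using made-up placeholder numbers, not the study's measurements), the remaining xylitol at each time point can be converted into a cumulative release fraction and the two profiles lined up side by side:

```python
# Illustrative comparison of xylitol release profiles (robot vs. human chewing).
# The numbers below are made-up placeholders, NOT the study's data; the point
# is just how a release-versus-time comparison can be expressed.

INITIAL_XYLITOL_MG = 100.0

# milligrams of xylitol remaining in the gum at each sampling time (minutes)
remaining = {
    "human": {5: 40.0, 10: 25.0, 15: 15.0, 20: 8.0},
    "robot": {5: 42.0, 10: 27.0, 15: 16.0, 20: 9.0},
}


def release_fraction(remaining_mg):
    """Cumulative fraction of xylitol released at each time point."""
    return {t: 1 - mg / INITIAL_XYLITOL_MG for t, mg in sorted(remaining_mg.items())}


for chewer, data in remaining.items():
    profile = ", ".join(f"{t} min: {frac:.0%}" for t, frac in release_fraction(data).items())
    print(f"{chewer:>5}: {profile}")
```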

All in all, this suggests that the robot is a reliable estimation tool for chewing gum. It uses the same motions and chewing patterns as humans, and its artificial saliva seems to interact with the gum in a very similar way. As such, it could serve as a cornerstone for developing and testing medicated chewing gum.

“The most convenient drug administration route to patients is through oral delivery methods,” says Nicola West, Professor in Restorative Dentistry in the Bristol Dental School and co-author of the study.

“This research, utilizing a novel humanoid artificial oral environment, has the potential to revolutionize investigation into oral drug release and delivery.”

The paper “Development of a Chewing Robot with Built-in Humanoid Jaws to Simulate Mastication to Quantify Robotic Agents Release from Chewing Gums Compared to Human Participants” has been published in the journal IEEE Transactions on Biomedical Engineering.

Men like futuristic sex robots, and think women do too. But women don’t

The fantasy of sex robots has been around for a while, lurking on the periphery of the future. But as it inches closer to reality and we have to start considering it as a real possibility, are we really ready for something like this?

The ‘You look lonely’ scene from Blade Runner 2049. Credits: Columbia Pictures.

“Physical and emotional intimacy between humans and robots may become commonplace over the next decades,” reads the new study carried out by researchers in Norway.

The idea of robots that can interact with humans sexually and emotionally isn’t new, but it’s far more common in literature and movies than in real life. The likes of Blade Runner and Westworld explore potential romantic relationships between humans and robots and raise some intriguing psychological points, but what do real-life humans think about it?

In the study, 163 female and 114 male participants were asked to read a short story about a humanoid robot designed either for sex or for platonic love. They then completed a questionnaire about how they would react if their partner had such a robot, and how their partner would react if they used such a robot themselves. As it turns out, men and women see the situation quite differently.

Men were likely to agree with statements such as “I hope this type of robot is developed in the future” and “I look forward to the development and launch of this type of robot,” whereas women were more likely to answer “This kind of robot would evoke strong feelings of jealousy in me”.

Overall, men tend to have more positive attitudes towards sex robots, while women are more reluctant. But men wrongly assume that their female partners share their views. Funnily enough, women also assume that men share their views (of robot hesitancy) — which men don’t.

In other words, not only do men and women have different attitudes, but they’re in the dark about what the other gender is thinking. The results suggest that people project their own feelings about robots onto their partner, erroneously expecting their partner to share their views.

The study reads:

“Females have less positive views of robots, and especially of sex robots, compared to men. Contrary to the expectation rooted in evolutionary psychology, females expected to feel more jealousy if their partner got a sex robot, rather than a platonic love robot. The results further suggests that people project their own feelings about robots onto their partner, erroneously expecting their partner to react as they would to the thought of ones’ partner having a robot.”

It’s not really surprising that men tend to have more permissive attitudes towards sex robots, but the fact that neither men nor women guessed the attitudes of the opposite sex raises some interesting questions about how this technology would affect interpersonal relations.

It’s also worth noting that the study has a significant limitation: participants were recruited via Facebook and email, so there is a bias in how the participants were selected, and it’s possible that participants have a greater interest in robots than the average person.

But even so, the problems the study highlights are intriguing.

Robots have developed greatly in recent years, and while true humanoid robots are still far off, sex robots are quite close to becoming a reality. If these robots are indeed on their way, we need to start talking about them. Keeping them taboo just spells trouble down the road.

The study “Friends, Lovers or Nothing: Men and Women Differ in Their Perceptions of Sex Robots and Platonic Love Robots” has been published in Frontiers in Psychology.

Talkative robots make humans chat too — especially robots that show ‘vulnerability’

Robots admitting to making a mistake can, surprisingly, improve communication between humans — at least during games.

Image via Pixabay.

A new study led by researchers from Yale University found that in the context of a game with mixed human-and-robot teams, having the robot admit to making mistakes (when applicable) fosters better communication between the human players and helps improve their experience. A silent robot, or one that would only offer neutral statements such as reading the current score, didn’t result in the same effects.

Regret.exe

“We know that robots can influence the behavior of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” said Margaret L. Traeger, a Ph.D. candidate in sociology at the Yale Institute for Network Science (YINS) and the study’s lead author.

“Our study shows that robots can affect human-to-human interactions.”

Robots are increasingly becoming part of our lives, and there’s no reason to assume that this trend will stop; in fact, it’s overwhelmingly likely to accelerate. Because of this, understanding how robots impact and influence human behavior is becoming ever more important. The present study focused on how the presence of robots — and their behavior — influences communication between humans working as a team.

For the experiment, the team worked with 153 people divided into 51 groups — three humans and a robot each. They were then asked to play a tablet-based game in which the teams worked together to build the most efficient railroad routes they could over 30 rounds. The robot in each group was assigned one pattern of behavior: it would either remain silent, utter a neutral statement (such as the score or number of rounds completed), or express vulnerability through a joke, personal story, or by acknowledging a mistake. All of the robots occasionally lost a round, the team explains.

“Sorry, guys, I made the mistake this round,” the study’s robots would say. “I know it may be hard to believe, but robots make mistakes too.”

“In this case,” Traeger said, “we show that robots can help people communicate more effectively as a team.”

People teamed with robots that made vulnerable statements spent about twice as much time talking to each other during the game, and they reported enjoying the experience more than people in the other two kinds of groups, the study found. At the same time, participants in teams with either the vulnerable or the neutral robots communicated more than those in the groups with silent robots, suggesting that a robot engaging in any form of conversation helped spur its human teammates to do the same.

“Imagine a robot in a factory whose task is to distribute parts to workers on an assembly line,” said Sarah Strohkorb Sebo, a Ph.D. candidate in the Department of Computer Science at Yale and a co-author of the study. “If it hands all the pieces to one person, it can create an awkward social environment in which the other workers question whether the robot believes they’re inferior at the task.”

“Our findings can inform the design of robots that promote social engagement, balanced participation, and positive experiences for people working in teams.”

Soft robot hand can sweat to keep itself cool

Credit: Mishra et al, Science Robotics.

An experimental soft-bodied robotic hand maintains a stable temperature by releasing water through its tiny pores.

Although still a proof of concept, this bio-inspired approach could lead to a new class of robots that can operate for prolonged periods of time without overheating.

Sweaty robot palms

Robots and mechanical machines, in general, face important thermoregulation challenges, either because their components overheat or due to operating in hot environments like an assembly line or out in the field on a summer day. Cooling consumes a lot of energy, raising costs, while poor heat management can significantly impact the durability and performance of the machines.

Researchers at Cornell University, Facebook Reality Labs, and the Center for Micro-BioRobotics in Pisa addressed this challenge by looking to nature for a solution — and the cooling power of perspiration naturally stood out.

“We believe [this] is a basic building block of a general purpose, adaptive, and enduring robot,” said Robert Shepherd, associate professor in Cornell’s Sibley School of Mechanical and Aerospace Engineering and co-author of the research.

When our bodies heat up, the millions of glands across our skin produce sweat — mostly water with a little potassium, salt, and a few other minerals. Humans have the most efficient sweating system that we know of, and we’re something of an exception in relying on secreting water onto our skin to stay cool. Most furry mammals regulate their body temperature through panting, while other animals such as ectotherms — lizards, amphibians, and insects — have evolved other behaviors that help keep them cool.

Sweating enabled humans to march all day, even on hot summer days when most predators are out in the shade cooling off. So, in many ways, sweating has been a secret weapon that helped us survive and thrive across the world, in many different climates.

It makes sense to model some of our machines after this biological mechanism.

“It turns out that the ability to perspire is one of the most remarkable features of humans,” said Thomas Wallin, an engineer at Facebook Reality Labs and co-author of the new study. “We’re not the fastest animals, but early humans found success as persistent hunters. The combination of sweating, relative hairlessness, and an upright bipedal gait enabled us to physically exhaust our prey over prolonged chases.”

Credit: Science Robotics.

Wallin and colleagues designed a balloon-like robot fitted with pores that allow water to slowly ooze out — but only once the “body” temperature reaches a certain threshold. In order to make the hand-shaped robot respond to temperature, the researchers employed a hydrogel material called poly-N-isopropylacrylamide (PNIPAm). This material reacts to temperature passively, without the need for sensors or additional electronic components.

At 30 degrees Celsius (86 degrees Fahrenheit), the micropores in the soft robot’s top layer stay closed. Beyond this temperature, the pores expand, allowing pressurized fluid to leak — the robot sweats.
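A toy simulation helps show why this passive mechanism behaves like a thermostat; the numbers below are arbitrary assumptions for illustration, not measurements from the paper:

```python
# Toy model of the passive "sweating" described above: below the threshold the
# pores stay shut; above it they open and evaporative cooling kicks in.
# All numbers here are arbitrary assumptions for illustration.

PORE_OPEN_TEMP_C = 30.0   # threshold reported for the PNIPAm top layer
HEATING_RATE = 0.6        # °C gained per step from the actuator's workload (made up)
SWEAT_COOLING = 1.5       # °C removed per step while the pores are open (made up)


def step(temp_c):
    """One time step: constant heating, plus cooling whenever the pores dilate."""
    temp_c += HEATING_RATE
    pores_open = temp_c > PORE_OPEN_TEMP_C
    if pores_open:
        temp_c -= SWEAT_COOLING
    return temp_c, pores_open


temp = 25.0
for minute in range(10):
    temp, pores_open = step(temp)
    print(f"step {minute + 1:2d}: {temp:5.2f} °C, pores {'open' if pores_open else 'closed'}")
```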

Experiments in which the robot was exposed to wind from a fan showed that its cooling rate was six times better than that of non-sweating machines. In fact, its thermoregulatory performance was even better than that of humans and horses (one of the few other animals that rely on sweating to cool off, although they do so quite differently from us).

Such soft robots, however, aren’t well suited for all types of applications. The dripping solution makes the soft actuators slippery, making grasping challenging. The robot also runs out of water eventually and a refillable water tank isn’t always an option.

It’s still a very interesting proof of concept that shows you don’t need huge heat sinks and cooling fans to keep a robot’s temperature at optimal levels.

The findings appeared in the journal Science Robotics.

The world’s first ‘living machines’ can move, carry loads, and repair themselves

Researchers at the University of Vermont have repurposed living cells into entirely new life-forms — which they call “xenobots”.

The xenobot designs (top) and real-life counterparts (bottom).
Image credits Douglas Blackiston / Tufts University.

These “living machines” are built from frog embryo cells that have been repurposed, ‘welded’ together into body forms never seen in nature. The millimeter-wide xenobots are also fully functional: they can move, perform tasks such as carrying objects, and heal themselves after sustaining damage.

This is the first time anyone “designs completely biological machines from the ground up,” the team writes in their new study.

It’s alive!

“These are novel living machines,” says Joshua Bongard, a professor in UVM’s Department of Computer Science and Complex Systems Center and co-lead author of the study. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

“It’s a step toward using computer-designed organisms for intelligent drug delivery.”

The xenobots were designed with the Deep Green supercomputer cluster at UVM using an evolutionary algorithm to create thousands of candidate body forms. The researchers, led by doctoral student Sam Kriegman, the paper’s lead author, would assign the computer certain tasks for the design — such as achieving locomotion in one direction — and the computer would reassemble a few hundred simulated cells into different body shapes to achieve that goal. The software had a basic set of rules regarding what the cells could and couldn’t do and tested each design against these parameters. After a hundred runs of the algorithm, the team selected the most promising of the successful designs and set about building them.
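The loop the researchers describe is, in spirit, a classic evolutionary algorithm. Here is a bare-bones Python sketch of that idea; the cell grid, the stand-in fitness function, and the parameters are placeholders, not the Deep Green pipeline or its physics simulator:

```python
# Bare-bones sketch of the kind of evolutionary loop described above: propose
# candidate body layouts, score them with a stand-in "simulator", keep the
# best, and mutate. Placeholder code, not the team's Deep Green pipeline.
import random

GRID = 8          # candidate bodies are tiny grids of passive (0) / contractile (1) cells
POP, GENERATIONS, MUTATION_RATE = 50, 100, 0.05


def random_body():
    return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]


def fitness(body):
    """Placeholder for 'distance travelled in simulation': here we simply reward
    contractile cells concentrated toward one side, which biases asymmetry."""
    return sum(cell * col for row in body for col, cell in enumerate(row))


def mutate(body):
    return [[1 - c if random.random() < MUTATION_RATE else c for c in row] for row in body]


population = [random_body() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 4]                                   # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]

print("best placeholder fitness:", fitness(max(population, key=fitness)))
```

In the real pipeline, fitness would come from simulating each candidate body in a physics engine and measuring, for example, how far it manages to travel.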

The design of the xenobots.
Image credits Sam Kriegman, Douglas Blackiston, Michael Levin, Josh Bongard, (2020), PNAS.

This task was handled by a team of researchers at Tufts University led by co-lead author Michael Levin, who directs the Center for Regenerative and Developmental Biology at Tufts. First, they gathered and incubated stem cells from embryos of African frogs (Xenopus laevis, hence the name “xenobots”). These cells were then cut and joined together under a microscope into a close approximation of the computer-generated designs.

The team reports that the cells began working together after ‘assembly’. They developed a passive skin-like layer and synchronized the contractions of their (heart) muscle cells to achieve motion. The xenobots were able to move in a coherent fashion for days or weeks at a time, the team found, powered by embryonic energy stores.

Later tests showed that groups of xenobots would move around in circles, spontaneously and collectively pushing pellets into a central location. Some of the xenobots were designed with a hole through the center to reduce drag, but the team was able to repurpose it so that the bots could carry an object.

It’s still alive… but on its back?

A manufactured quadruped organism, 650-750 microns in diameter.
Image credits Douglas Blackiston / Tufts University.

One of the most fascinating parts of this already-fascinating work, for me, is the resilience of these xenobots.

“The downside of living tissue is that it’s weak and it degrades,” says Bongard. “That’s why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades. We slice [a xenobot] almost in half and it stitches itself back up and keeps going. This is something you can’t do with typical machines.”

“These xenobots are fully biodegradable,” he adds, “when they’re done with their job after seven days, they’re just dead skin cells.”

However, none of the team’s designs was able to turn itself over when flipped on its back. It’s an almost comical little Achilles’ Heel for such capable biomachines.

The manufacturing process of the xenobots.
Image credits Sam Kriegman, Douglas Blackiston, Michael Levin, Josh Bongard, (2020), PNAS.

Still, they have a lot to teach us about how cells communicate and connect, the team writes.

“The big question in biology is to understand the algorithms that determine form and function,” says Levin. “The genome encodes proteins, but transformative applications await our discovery of how that hardware enables cells to cooperate toward making functional anatomies under very different conditions.”

“[Living cells] run on DNA-specified hardware,” he adds, “and these processes are reconfigurable, enabling novel living forms.”

Levin says that being fearful of what complex biological manipulations can bring about is “not unreasonable”, and that they are very likely to result in at least some “unintended consequences”, but he explains that the current research aims to get a handle on such consequences. The findings are also applicable to other areas of science and technology where complex systems arise from simple units, he explains, such as the self-driving cars and autonomous systems that will increasingly shape the human experience.

“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” says Levin. “If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We’d have no idea.”

“I think it’s an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex. A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?”

The paper “A scalable pipeline for designing reconfigurable organisms” has been published in the journal PNAS.

Scientists devise tiny robot insects that can’t be crushed by a flyswatter

In the future, swarms of tiny soft robots could zip around us, performing various tasks such as monitoring the environment, carrying out remote repairs, perhaps even pollination. In Switzerland, engineers have recently demonstrated a new type of insect-like robot that may do just that. But don’t let their fragile appearance deceive you — these tiny bots are so sturdy they can resist being battered by a flyswatter.

The DEAnsect. Credit: EPFL.

Central to the proper functioning of this tiny soft robot, known as DEAnsect, are its artificial muscles. Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland fitted the thumbnail-sized robots with dielectric elastomer actuators (DEAs) — hair-thin artificial muscles — which propel the artificial insects at about 3 cm per second through vibrations.

Each DEA contains an elastomer membrane sandwiched between two soft electrodes. When a voltage is applied, the electrodes come together, compressing the membrane; once the voltage is switched off, the membrane returns to its original size. Each of the robot’s legs has three such muscles.

The vibrations caused by switching the artificial muscles on and off (up to 400 times a second) allow the DEAnsect to move with a high degree of accuracy, as demonstrated in experiments in which the robots followed a maze.
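To get a feel for the drive signal, here is a small Python sketch that generates phase-shifted on/off waveforms for the three muscles of one leg. The 400 Hz figure comes from the article, but the step resolution, phase offsets, and the simple on/off abstraction are assumptions of the sketch rather than EPFL's actual drive electronics:

```python
# Illustrative square-wave drive for the three artificial muscles of one leg.
# Frequency from the article; phase offsets, resolution, and the on/off
# abstraction are assumptions made for this sketch.

FREQ_HZ = 400            # switching rate, up to ~400 times per second
STEPS_PER_CYCLE = 8      # time resolution of the sketch
MUSCLES_PER_LEG = 3


def drive_states(cycle_step):
    """On/off state of each muscle at a given step, offset so they fire in sequence."""
    states = []
    for muscle in range(MUSCLES_PER_LEG):
        offset = muscle * STEPS_PER_CYCLE // MUSCLES_PER_LEG
        states.append(int(((cycle_step + offset) % STEPS_PER_CYCLE) < STEPS_PER_CYCLE // 2))
    return states


period_s = 1.0 / FREQ_HZ
for step_idx in range(STEPS_PER_CYCLE):
    t_us = step_idx * period_s / STEPS_PER_CYCLE * 1e6
    print(f"t={t_us:6.1f} µs  muscle voltages on/off: {drive_states(step_idx)}")
```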

These extremely thin artificial muscles allowed the entire design to be streamlined in a very compact frame. The power source only weighs 0.2 grams, while the entire robot, battery and other components included, weighs one gram.

“We’re currently working on an untethered and entirely soft version with Stanford University. In the longer term, we plan to fit new sensors and emitters to the insects so they can communicate directly with one another,” said Herbert Shea, one of the authors of the new study published in Science Robotics.