Tag Archives: robots

New four-legged robots designed to work together to accomplish difficult tasks

Quantity is a quality all of its own, and that seems to be true in robotics, as well. Researchers at the University of Notre Dame report having successfully designed and built multi-legged robots that can navigate difficult terrain and work together to perform various tasks.

Image credits University of Notre Dame / Yasemin Ozkan-Aydin.

Nature is no stranger to the concept of cooperation. We ourselves are a great example of it at work, but insects such as ants and bees showcase what can be done when even tiny actors band together. Roboticists have long been keen to mimic such abilities in their creations, especially in small frames.

New research places us squarely on the path towards such an objective.

Silicon swarm

“Legged robots can navigate challenging environments such as rough terrain and tight spaces, and the use of limbs offers effective body support, enables rapid maneuverability and facilitates obstacle crossing,” says Yasemin Ozkan-Aydin, an assistant professor of electrical engineering at the University of Notre Dame, who designed the robots.

“However, legged robots face unique mobility challenges in terrestrial environments, which results in reduced locomotor performance.”

The collective behavior of birds, ants, and other social insect species has been a great source of inspiration for Ozkan-Aydin. In particular, she was fascinated by their ability to work together to perform tasks that would be impossible for a single individual of the species to perform. She set out to try and instill the same capabilities in her own creations.

Although collective behaviors have been explored in flying and underwater robots, land-borne robots must contend with particular challenges that the other two do not. Traversing complex terrain, for example, is one such challenge.

Ozkan-Aydin started from the idea that a physical connection between individual bots could be used to enhance their overall mobility. The legged robots she designed will attempt to perform tasks such as moving a light object or navigating a smooth surface on their own but, if the task proves to be too great for them alone, several robots will physically connect to one another to form a larger, multi-legged system. Collectively, they will work to overcome the issue.

“When ants collect or transport objects, if one comes upon an obstacle, the group works collectively to overcome that obstacle. If there’s a gap in the path, for example, they will form a bridge so the other ants can travel across — and that is the inspiration for this study,” she said.

“Through robotics we’re able to gain a better understanding of the dynamics and collective behaviors of these biological systems and explore how we might be able to use this kind of technology in the future.”

Each individual bot measures around 15 to 20 centimeters (6 to 8 inches) in length, and they were built using a 3D printer. Each carries its own lithium polymer battery, three sensors — a light sensor at the front and two magnetic touch sensors, one at the front and one at the back — and a microcontroller. The magnetic sensors allow them to connect to one another. They move around on four flexible legs, a setup that Ozkan-Aydin says reduces their need for sensors and their overall complexity.

She designed and built the robots in early 2020 and, due to the pandemic, performed much of her experimentation at home or in her yard. During that time, the robots' abilities were tested over grass, mulch, leaves, and acorns, as well as over particle board, on stairs made from insulation foam, on a shaggy carpet, and on particle board with rectangular wooden blocks glued on to simulate rough terrain.

During this time, Ozkan-Aydin programmed the robots so that when one of them became stuck, it would send a signal to the others, which would come to link up with it and help it traverse the obstacle together.

“You don’t need additional sensors to detect obstacles because the flexibility in the legs helps the robot to move right past them,” said Ozkan-Aydin. “They can test for gaps in a path, building a bridge with their bodies; move objects individually; or connect to move objects collectively in different types of environments, not dissimilar to ants.”
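Ozkan-Aydin's stuck-and-signal rule — detect a lack of progress, broadcast for help, link magnetically, push through together — can be sketched in code. The following Python sketch is purely illustrative: the class, method names, and thresholds are all invented here, and the real controller runs on each bot's microcontroller using its magnetic touch sensors.

```python
# Hypothetical sketch of the stuck-and-signal cooperation rule described
# above. Names and thresholds are invented for illustration only.

class LeggedBot:
    def __init__(self, name):
        self.name = name
        self.linked_to = []          # bots currently connected via magnets
        self.progress = 1.0          # recent forward progress (0 = stuck)

    def is_stuck(self, threshold=0.1):
        return self.progress < threshold

    def link(self, other):
        """Magnetically couple two bots into one multi-legged system."""
        self.linked_to.append(other)
        other.linked_to.append(self)

def step(swarm):
    """One control tick: stuck bots request help, free bots respond."""
    for bot in swarm:
        if bot.is_stuck() and not bot.linked_to:
            helpers = [b for b in swarm if b is not bot and not b.is_stuck()]
            if helpers:
                bot.link(helpers[0])   # a free bot comes to assist
                # the linked pair now pushes through the obstacle together
                bot.progress = helpers[0].progress = 0.5

swarm = [LeggedBot(f"bot{i}") for i in range(3)]
swarm[0].progress = 0.0              # bot0 hits an obstacle
step(swarm)
print(swarm[0].linked_to[0].name)    # prints "bot1": a helper linked up
```

The key design choice mirrored here is that cooperation is reactive and local — a bot only requests help when its own progress stalls, rather than the swarm planning collectively in advance.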

There are still improvements that can be made to the design, she explains. However, the intention wasn’t to design the perfect robot; what she hopes for is that her findings will help spur further development of low-cost, cooperative robots that can perform real-world tasks such as search-and-rescue operations, collective transport of various objects, environmental monitoring, or even space exploration. In the future, she will be focusing on improving the control, sensing abilities, and power autonomy of the robots.

“For functional swarm systems, the battery technology needs to be improved,” she said. “We need small batteries that can provide more power, ideally lasting more than 10 hours. Otherwise, using this type of system in the real world isn’t sustainable.”

“You need to think about how the robots would function in the real world, so you need to think about how much power is required, the size of the battery you use. Everything is limited so you need to make decisions with every part of the machine.”

The paper “Self-reconfigurable multilegged robot swarms collectively accomplish challenging terradynamic tasks” has been published in the journal Science Robotics.

Submersible robots help us better understand ocean health and carbon flows

Floating robots could become indispensable in helping us monitor the health of ocean ecosystems and the flow of carbon between the atmosphere and oceans, according to a new study.

The microscopic marine plants and animals that make up plankton are the bedrock of ocean ecosystems. They're essential for the well-being of everything that swims, but they're also very important for our own comfort and well-being. Plankton is one of the largest single sources of oxygen on the planet, and it consumes a lot of CO2 to produce it. This process is known as marine primary productivity.

Knowing how they’re faring, then, would be a great help. Floating robots can help us out in that regard, according to a new paper.

Floats my boats

“Based on imperfect computer models, we’ve predicted primary production by marine phytoplankton will decrease in a warmer ocean, but we didn’t have a way to make global-scale measurements to verify models. Now we do,” said Monterey Bay Aquarium Research Institute (MBARI) Senior Scientist Ken Johnson, first author of the paper.

Together with former MBARI postdoctoral fellow Mariana Bif, Johnson shows how a fleet of marine robots could completely change our understanding of primary productivity on a global scale. Data from these crafts would allow researchers to more accurately model the flow of carbon between the atmosphere and the ocean, thus improving our understanding of the global carbon cycle.

Furthermore, the duo explains, shifts in phytoplankton productivity can have significant effects on all life on Earth by changing how much carbon oceans absorb, and by altering oceanic food webs. The latter can easily impact human food security, as the oceans are a prime source of food for communities all over the world. In the context of our changing climate, it’s especially important to know with accuracy how much carbon plankton can scrub out of the atmosphere, and what factors influence this quantity.

Part of what makes the ocean such a good carbon sink is that dead organic matter sinks to the bottom. Plankton grows by consuming atmospheric carbon dioxide, and is in turn consumed by other organisms, such as fish. As these eventually die, they sink to the bottom of the sea, where they're decomposed by bacteria, releasing carbon in the process. Because this happens at great depths, however, that carbon is effectively prevented from returning to the atmosphere for very long periods of time. Generally, it seeps into deep-water sediments and stays there for millions of years or more.

That being said, this process is very sensitive to environmental factors such as changes in climate. While we understand that this happens, we’ve not been able to actually monitor how primary productivity is responding to climate change on a global scale, as most of it happens in the depths of the oceans.

“We might expect global primary productivity to change with a warming climate,” explained Johnson. “It might go up in some places, down in others, but we don’t have a good grip on how those will balance.”

“Satellites can be used to make global maps of primary productivity, but the values are based on models and aren’t direct measurements,” he added.

Autonomous robots could help us get the data we need, the study argues. For starters, it’s much easier to build robots that can withstand the humongous pressures of the deep ocean than it is to build equivalent manned submarines. Secondly, robots are mass-producible for relatively little cost. Human crews are expensive and slow to train — they’re also quite limited in availability. Finally, robots can operate for much longer periods of time than human crews, and nobody needs to risk their life in the process.

The authors point to the deployment of Biogeochemical-Argo (BGC-Argo) floats across the globe as a great example of how robots can help monitor primary productivity. These automated floats can measure temperature, salinity, oxygen, pH, chlorophyll, and nutrient content in marine environments, at depths of up to 2,000 meters (6,600 ft). A float can perform its monitoring tasks autonomously, shifting between different depths and supplying live data to researchers onshore. These robots have been deployed in increasing numbers over the past decade, providing reliable — but as yet sparse — measurements of oxygen production across the globe.

Although the data they've been feeding us didn't reveal anything unexpected, this is the first time we've been able to measure primary productivity directly and quantitatively.

“Oxygen goes up in the day due to photosynthesis, down at night due to respiration—if you can get the daily cycle of oxygen, you have a measurement of primary productivity,” explained Johnson.
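Johnson's point lends itself to a toy calculation: if a float records dissolved oxygen every hour, the difference between daytime and nighttime levels gives a simple proxy for net daily production. The sketch below uses made-up numbers and is not MBARI's actual method — it only illustrates the day-up, night-down logic of the quote.

```python
# Toy illustration of the diel oxygen cycle: photosynthesis raises
# dissolved oxygen by day, respiration lowers it at night, so the
# day-night difference tracks primary productivity. All numbers are
# invented for illustration.
import math

hours = range(24)
# Simulated hourly dissolved-oxygen readings (micromol/kg): a baseline
# plus a day-night cycle that peaks in the early afternoon.
oxygen = [210 + 4 * math.sin(2 * math.pi * (h - 6) / 24) for h in hours]

daytime = [o for h, o in zip(hours, oxygen) if 6 <= h < 18]
night = [o for h, o in zip(hours, oxygen) if h < 6 or h >= 18]

# Mean day-night oxygen difference: a crude proxy for net production.
proxy = sum(daytime) / len(daytime) - sum(night) / len(night)
print(f"day-night oxygen difference: {proxy:.2f} umol/kg")
```

In a real analysis the oxygen signal would also need corrections for physical effects such as air-sea gas exchange and mixing, which is part of what makes the actual float-based estimates nontrivial.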

In order to confirm that these robots were actually performing their job reliably, the team compared primary productivity estimates computed from the BGC-Argo floats to ship-based sampling data in two regions: the Hawaii Ocean Time-series (HOT) Station and the Bermuda Atlantic Time-series Station (BATS). The data from these two sources matched over several years, proving the reliability of the system.

“We can’t yet say if there is change in ocean primary productivity because our time series is too short,” cautioned Bif. “But it establishes a current baseline from which we might detect future change. We hope that our estimates will be incorporated into models, including those used for satellites, to improve their performance.”

Seeing as we have drones flying about the atmosphere taking pictures of everything and anything, it only makes sense that we’d eventually have some doing the same underwater. I am personally very thrilled to see robots taking on the deepest depths. The ocean is a fascinating place, but I’m also terrified of drowning, so I’ll probably never work up the courage to actually explore it. Hopefully, our automated friends will do the work for us and help us understand what is still a very much unexplored frontier of Earth.

The paper “Constraint on net primary productivity of the global ocean by Argo oxygen measurements” has been published in the journal Nature.

Scientists observe nanobots coordinating inside a living host for the first time

Nanobots have the potential to revolutionize fields from material engineering to medicine. But first, we have to figure out how to build them and make them work. A new paper reports a confident step toward that goal: researchers have been able to observe the collective behavior of autonomous nanobots inside a living host.

A schematic of a molecular planetary gear, an example of nanomachinery. Image via Wikimedia.

The range of tasks that nanobots can potentially handle is, in theory, incredible. Needless to say, then, there’s a lot of interest in making such machines a reality. For now, however, they’re still in the research and development phase, with a particular interest in tailoring them for biomedical applications. Nanobots using our body’s own enzymes as fuel are some of the most promising systems in this regard currently, and a new paper is reporting on how they behave inside a living host.

March of the Machines

“The fact of having been able to see how nanorobots move together, like a swarm, and of following them within a living organism, is important, since millions of them are needed to treat specific pathologies such as, for example, cancer tumors,” says Samuel Sánchez, principal investigator at the Institute for Bioengineering of Catalonia (IBEC).

Nanobots are machines built at the nano-scale, where things are measured in millionths of a millimeter. They’re intended to be able to move and perform certain tasks by themselves, usually in groups. Being so small, however, actually seeing them go about their business — and thus, checking if they work as intended — isn’t very easy.

That’s why the IBEC team, together with members from the Radiochemistry & Nuclear Imaging Lab at the Center of Cooperative Investigation of Biomaterials (CIC biomaGUNE) in Spain, set out to observe these bots working inside the bladders of living mice using radioactive isotope labeling. This is the first time researchers have successfully tracked nanobots in vivo using Positron Emission Tomography (PET).

For the study, the team started with in vitro (in the lab) experiments, where they monitored the robots using both optical microscopy and PET. Both techniques allowed them to see how these nanoparticles interacted with different fluids and how they were able to collectively migrate following complex paths.

The next step involved injecting these bots into the bloodstream and, finally, the bladders of living mice. The machines were designed to be coated in urease, an enzyme that allows the bots to use urea from urine as fuel. The team reports that they were able to swim collectively, which induced currents in the fluid inside the animals’ bladders. These nanomachines were evenly distributed throughout the bladders, the team adds, indicating that they were coordinating as a group.

“Nanorobots show collective movements similar to those found in nature, such as birds flying in flocks, or the orderly patterns that schools of fish follow,” explains Samuel Sánchez, ICREA Research Professor at IBEC.

“We have seen that nanorobots that have urease on the surface move much faster than those that do not. It is, therefore, a proof of concept of the initial theory that nanorobots will be able to better reach a tumor and penetrate it,” says Jordi Llop, principal investigator at CIC biomaGUNE.

The findings showcase how nanomachines can come together and coordinate as a group, even one with millions of members, both in the lab and in living organisms. It might not sound like much, but checking that these machines can really interact as we want them to is a very important milestone in their development. It also goes a long way to prove that their activity can be monitored, even in living organisms, meaning that they can eventually be used to treat human patients.

“This is the first time that we are able to directly visualize the active diffusion of biocompatible nanorobots within biological fluids in vivo. The possibility to monitor their activity within the body and the fact that they display a more homogeneous distribution could revolutionize the way we understand nanoparticle-based drug delivery and diagnostic approaches,” says Tania Patiño, co-corresponding author of the paper.

One of the uses the team already envisions for similar nanobots is that of delivering drugs in tissues or organs where their diffusion would be hampered, either by a viscous substance (such as in the eye) or by poor vascularization (such as in the joints).

The paper “Swarming behavior and in vivo monitoring of enzymatic nanomotors within the bladder” has been published in the journal Science Robotics.

Researchers train robot swarm to serve as ‘real-life paintbrushes’

Creating art is an intensive and time-consuming process. It’s not just envisioning and designing the piece that’s challenging — the labor of painting also takes a lot of time. But what if robots could help with this, and maybe even expand an artist’s repertoire?

It may seem far-fetched, but in a new study, researchers paved the way for exactly this: they trained a swarm of robots to be used in producing art.

Image courtesy of María Santos.

María Santos was always fascinated by the intersection of engineering and arts. A musician herself, she loves to explore this overlap of seemingly different worlds, she tells ZME Science.

“During my PhD at the School of Electrical Engineering at Georgia Tech, I was given the opportunity of combining my research on control theory and multi-robot systems with different forms of art,” she says.

It all started in a previous study with her doctoral advisor, Professor Magnus Egerstedt. The two first studied the expressive capabilities of robot swarms to convey basic emotions, then moved on to look at the individual trajectories executed by the robots in the swarm.

Is there some artistic merit to this, or could this approach be applied in an artistic setting as a tool? Santos believes so.

“In this study we explore how the integration of such trajectories over time can lead to artistic paintings by making the robots leave physical trails as they move,” Santos explains in an email.

“We envisioned the multi-robot system as an extension of an artist’s creative palette. The presented painting swarm along with all its control knobs embody new means of interaction between artists and the piece of art, whereby artists can explore new creative directions, intuitively interacting with a robotic system while not having to concern themselves with aspects such as individual robot control or available paints to each robot.”

At first glance, using robots for art seems like a weird idea, but it makes sense once you look at it. Painting is typically labor-intensive, and despite the world around us becoming more and more automated, painting has remained exclusively a manual endeavor. The idea is not to have the robots create art, but rather for artists to use the robots as a tool to ease their workload or explore new artistic directions.

Image courtesy of María Santos.

The robots in the project move about a canvas leaving color trails, and the artist can select the areas of the canvas to be painted in a certain color — the robots will oblige in real time. It's a bit like applying digital techniques to the real-life analog world, and it can serve as an interesting tool for artists.

The way Santos envisions the approach, the artist would control the swarm behavior, but not necessarily every individual robot.

“In this approach, the robotic swarm can be thought of as an “active” brush for the human artist to paint with, where the individual robots (active bristles) move over the canvas according to the color specifications given by the human at each point in time. Thus, the artist can control the collective behavior of the swarm and potentially some other general parameters (how much paint to release, how sharp the trajectories of the robots may be), but not the individual movements of each robot.”

This leaves a wide array of parameters the artist can influence to produce the desired effect, and explore different variations. It’s akin to how a composer writes variations on a theme, Santos tells me.

A video highlighting the technique, courtesy of María Santos.

In the experiments, the researchers used a projector to simulate the colored paint trails with a digital input, although they will soon replace this with robots that handle actual paint. They found that even when a robot doesn't have access to the desired color, it is capable of collaborating with other robots to approximate it. This means the artist doesn't need to worry about whether the robots have access to all the possible colors.
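The color-approximation idea — robots blending the trails they can produce to match a color they lack — can be illustrated with a toy search over pairwise blends. This is a sketch of the concept only, not the controller from the paper; the function names and the palette below are invented.

```python
# Illustrative sketch: approximate a requested RGB color by blending
# pairs of available colors, in the spirit of the collaborative
# color-mixing described above. Not the actual algorithm from the paper.

def blend(c1, c2, w):
    """Mix two RGB colors with weight w on c1 (0 <= w <= 1)."""
    return tuple(w * a + (1 - w) * b for a, b in zip(c1, c2))

def closest_blend(target, palette, steps=100):
    """Grid-search pairwise blends of available colors for the best match."""
    best, best_err = None, float("inf")
    for i, c1 in enumerate(palette):
        for c2 in palette[i:]:
            for k in range(steps + 1):
                mix = blend(c1, c2, k / steps)
                err = sum((m - t) ** 2 for m, t in zip(mix, target))
                if err < best_err:
                    best, best_err = mix, err
    return best

palette = [(255, 0, 0), (0, 0, 255)]   # only red and blue on hand
target = (128, 0, 128)                 # the artist asks for purple
approx = closest_blend(target, palette)
print([round(v) for v in approx])      # a near-perfect purple from red + blue
```

The real system works in physical paint rather than RGB arithmetic, but the principle is the same: no single robot needs the exact color as long as the swarm can cover it collectively.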

Now, the researchers hope to collaborate with artists to see how this approach could be best tweaked to make it work in real life. The current pandemic, however, has proven to be quite a hurdle.

“We would love to get feedback from artists! In fact, when we started this project, our idea was to get artists to come to the lab and interact with the robotic swarm. This way we could see what they could come up with creatively in terms of generated paints, but also to get their input about which features would be most interesting to develop as the project progresses further.”

“However, due to COVID-19, this part was infeasible during the last months, so we focused on studying the characteristics of the paintings as a function of different parameters in the swarm.”

Ultimately, the team hopes to develop this into a full-scale artistic project and allow artists and the public to experiment with it.

“As of now, the artworks were created to evaluate the operation of the system, but we would love to exhibit them! Once we can get people back in the lab to try the system, we would love to see what people would come up with.”

Journal Reference: Interactive Multi-Robot Painting Through Colored Motion Trails, Frontiers in Robotics and AI (2020). DOI: 10.3389/frobt.2020.580415

Half of Twitter accounts discussing ‘reopening America’ are bots

Roughly half of the 200 million tweets related to the virus published since January were sent by accounts that appear to be bots. They seem to have a particular interest in the conversation about ‘reopening America’ and are dominating the discourse on this topic.

Sowing discord

Unfortunately, Twitter bots aren’t as cute as these ones. Image credits: Eric Krull.

Scrolling through your Twitter feed, you might ignore most of what's going on and focus only on what draws your eye. But even if you paid attention to every single story, you'd likely not be able to tell which were posted by a bot, and which by an actual person.

Researchers use a multitude of methods to tell whether posts come from humans or artificial accounts, and some of these methods rely on artificial intelligence. In general, however, researchers look at factors such as the number of followers, when an account was created, how often it tweets, and at what hours. Sometimes, things line up too perfectly: new accounts, with similar follower profiles, posting at similar times, about the same hashtags. Other times, the tells are even clearer.

“Tweeting more frequently than is humanly possible or appearing to be in one country and then another a few hours later is indicative of a bot,” said Kathleen Carley, a professor of computer science at Carnegie Mellon University. Carley is conducting a study into bot-generated coronavirus activity on Twitter that has yet to be published.
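The features described above — tweet rate, account age, posting hours — lend themselves to simple heuristics. As a purely illustrative sketch (not Carley's actual classifier, whose features and thresholds are far more sophisticated), a toy bot score might look like this:

```python
# Toy heuristic in the spirit of the bot-detection features described
# above. The thresholds are invented for illustration and are not from
# the study.

def bot_score(tweets_per_day, account_age_days, distinct_hours_active):
    """Return a rough 0-3 suspicion score from three simple features."""
    score = 0
    if tweets_per_day > 72:          # faster than is humanly plausible
        score += 1
    if account_age_days < 30:        # newly created account
        score += 1
    if distinct_hours_active >= 24:  # posts around the clock, every hour
        score += 1
    return score

# A round-the-clock firehose account vs. an ordinary user:
print(bot_score(tweets_per_day=500, account_age_days=10,
                distinct_hours_active=24))   # prints 3
print(bot_score(tweets_per_day=5, account_age_days=2000,
                distinct_hours_active=8))    # prints 0
```

Real detectors combine many more signals (follower networks, content similarity, apparent location jumps) and weigh them with machine learning rather than fixed cutoffs.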

Carley and colleagues collected more than 200 million tweets discussing the coronavirus or COVID-19 pandemic. They found that 82% of the top 50 influential retweeters on these topics are bots. Out of the top 1,000, 62% are bots.

These bots also seem to not be acting randomly. Instead, the stories they propagate seem to have the aim of polarizing public discourse.

“We do know that it looks like it’s a propaganda machine, and it definitely matches the Russian and Chinese playbooks, but it would take a tremendous amount of resources to substantiate that,” she adds.

Furthermore, bot activity seems to be two times more intense than what researchers would expect based on previous natural disasters, further supporting the idea that this is a deliberate campaign.

While finding a smoking gun will be extremely difficult, researchers are fairly confident that this is an active campaign and not just random bot activity.

That conspiracy theory you read? It could be fueled by a bot

The team found 100 types of inaccurate COVID-19 stories propagated by Twitter bots, ranging from unproven cures to conspiracy theories about hospitals being filled with mannequins, or 5G fearmongering.

These actions have already had tangible real-life consequences. For instance, several 5G towers in England have been destroyed by vandals as members of the public fell victim to conspiracy theories spread on social media.

But the larger stake is shifting public discourse and polarizing people. A good example of this is the ‘reopening America’ discussion.

Researchers found strong indicators that this discussion is orchestrated by bot activity. Accounts that are definitely bots generate 34% of all tweets about this topic, and accounts that seem to be either bots or humans with bot assistants produce over 60% of the tweets.

“When we see a whole bunch of tweets at the same time or back to back, it’s like they’re timed,” Carley said. “We also look for use of the same exact hashtag, or messaging that appears to be copied and pasted from one bot to the next.”

“Increased polarization will have a variety of real-world consequences, and play out in things like voting behavior and hostility towards ethnic groups,” Carley said.

What you can do

We are the gatekeepers of our social media. Credits: dole777.

While the researchers have not found any indication of who might be behind these bots, they say it’s important for all of us to be vigilant with what we read on social media — and especially what we share forward.

We are the gatekeepers of our own social media bubble, and it pays to double-check everything against a reliable source. Even if someone appeals to your bias and says exactly what you want to hear, don’t just buy into it. This has never been more important.

In addition, researchers say we should be particularly careful with accounts we don’t know personally. Most users have long surpassed the point where they are social media friends only with their real-life acquaintances and follow a variety of accounts and pages. Many might be malevolent.

“Even if someone appears to be from your community, if you don’t know them personally, take a closer look, and always go to authoritative or trusted sources for information,” Carley said. “Just be very vigilant.”

Talkative robots make humans chat too — especially robots that show ‘vulnerability’

Robots admitting to making a mistake can, surprisingly, improve communication between humans — at least during games.

Image via Pixabay.

A new study led by researchers from Yale University found that in the context of a game with mixed human-and-robot teams, having the robot admit to making mistakes (when applicable) fosters better communication between the human players and helps improve their experience. A silent robot, or one that would only offer neutral statements such as reading the current score, didn’t result in the same effects.

“We know that robots can influence the behavior of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” said Margaret L. Traeger, a Ph.D. candidate in sociology at the Yale Institute for Network Science (YINS) and the study’s lead author.

“Our study shows that robots can affect human-to-human interactions.”

Robots are increasingly making themselves part of our lives, and there’s no cause to assume that this trend will stop; in fact, it’s overwhelmingly likely that it will accelerate in the future. Because of this, understanding how robots impact and influence human behavior is a very good thing to know. The present study focused on how the presence of robots — and their behavior — influences communication between humans as a team.

For the experiment, the team worked with 153 people divided into 51 groups — three humans and a robot each. They were then asked to play a tablet-based game in which the teams worked together to build the most efficient railroad routes they could over 30 rounds. The robot in each group was assigned one pattern of behavior: it would either remain silent, utter a neutral statement (such as the score or number of rounds completed), or express vulnerability through a joke, a personal story, or by acknowledging a mistake. All of the robots occasionally lost a round, the team explains.

“Sorry, guys, I made the mistake this round,” the study’s robots would say. “I know it may be hard to believe, but robots make mistakes too.”

“In this case,” Traeger said, “we show that robots can help people communicate more effectively as a team.”

People teamed with robots that made vulnerable statements spent about twice as much time talking to each other during the game, and they reported enjoying the experience more than people in the other two kinds of groups, the study found. Participants in teams with either the vulnerable or the neutral robots also communicated more than those in the groups with silent robots, suggesting that a robot engaging in any form of conversation helped spur its human teammates to do the same.

“Imagine a robot in a factory whose task is to distribute parts to workers on an assembly line,” said Sarah Strohkorb Sebo, a Ph.D. candidate in the Department of Computer Science at Yale and a co-author of the study. “If it hands all the pieces to one person, it can create an awkward social environment in which the other workers question whether the robot believes they’re inferior at the task.”

“Our findings can inform the design of robots that promote social engagement, balanced participation, and positive experiences for people working in teams.”

Flying ‘Robotic pigeon’ brings us closer to bird-like drones

Strong and muscular fliers, pigeons are naturally suited to handle the blowy winds between buildings in large cities. That’s why engineers have now turned to them for inspiration, adding pigeon flight feathers to an airborne robot called PigeonBot.

Credit: Stanford University.

The robotic pigeon integrates true elements of traditional flying machines with elements of biology. David Lentink and colleagues at Stanford University didn’t try to build a machine to act like a bird, which would have been highly challenging. Instead, they closely studied biological mechanisms to learn how birds fly.

“I really wanted to understand how birds change the shape of their wings,” David Lentink, an assistant professor of mechanical engineering at Stanford and a co-author on the new study, published in the journal Science Robotics, told Popular Science.

Credit: Lentink et al.

Lentink and the team studied common pigeons, looking at their skeletons and feathers. They discovered that the birds control flight through about 40 feathers, using four “wrist” and “finger” joints to steer their movements. With that knowledge, they recreated the same mechanisms in a drone driven by propellers.

Image credits: Chang et al (2019) / Science Robotics.

The drone’s body is formed by a foam board frame, with an embedded GPS and a remote-control receiver. The maneuverable wings have actual feathers from pigeons attached. Previous prototypes had carbon and glass fiber but were much heavier, something now solved with the new wing design.

The PigeonBot’s flying capabilities are enabled by a propeller, a fuselage, and a tail. A pair of motors on each wing can adjust the artificial wing and its feathers at two different joints. The researchers can use a remote to move the wings, leading the robot to turn and bank, mimicking a real pigeon.

“We determined that birds can steer using their fingers,” Lentink said. Birds’ wings and human arms share basic structural similarities, he and his team argued: wings have humerus, radius, and ulna bones, and at each wingtip, birds have finger-like anatomy that can move 30 degrees.

Developing the PigeonBot came with its own challenges and lessons for the researchers. One discovery was that the robot works best when all the feathers come from the same bird. Incorporating them into the machine also required maintenance: the feathers had to be smoothed by hand.

There are parallels between the PigeonBot and actual planes, which is why Lentink believes the airplanes of the future will use morphing wings that incorporate lessons from pigeons and other birds. “You won’t see a feathered airplane but you’ll find smart materials in them,” he argued.

The world’s first ‘living machines’ can move, carry loads, and repair themselves

Researchers at the University of Vermont have repurposed living cells into entirely new life-forms — which they call “xenobots”.

The xenobot designs (top) and real-life counterparts (bottom).
Image credits Douglas Blackiston / Tufts University.

These “living machines” are built from frog embryo cells that have been repurposed, ‘welded’ together into body forms never seen in nature. The millimeter-wide xenobots are also fully functional: they can move, perform tasks such as carrying objects, and heal themselves after sustaining damage.

This is the first time anyone “designs completely biological machines from the ground up,” the team writes in their new study.

It’s alive!

“These are novel living machines,” says Joshua Bongard, a professor in UVM’s Department of Computer Science and Complex Systems Center and co-lead author of the study. “They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”

“It’s a step toward using computer-designed organisms for intelligent drug delivery.”

The xenobots were designed with the Deep Green supercomputer cluster at UVM using an evolutionary algorithm to create thousands of candidate body forms. The researchers, led by doctoral student Sam Kriegman, the paper’s lead author, would assign the computer certain tasks for the design — such as achieving locomotion in one direction — and the computer would reassemble a few hundred simulated cells into different body shapes to achieve that goal. The software had a basic set of rules regarding what the cells could and couldn’t do and tested each design against these parameters. After a hundred runs of the algorithm, the team selected the most promising of the successful designs and set about building them.
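The design loop described above (propose random cell arrangements, score them against the task, keep and mutate the best) is a classic evolutionary algorithm. Below is a minimal, hypothetical sketch in Python; the toy fitness function simply rewards asymmetric muscle placement, standing in for the full physics simulation the team actually used to score locomotion:

```python
import random

random.seed(0)

GRID = 4                    # toy design: a 4x4 sheet of cells
POP, GENS = 20, 100         # population size and number of generations

def random_design():
    # each cell is passive skin (0) or contractile muscle (1)
    return [random.randint(0, 1) for _ in range(GRID * GRID)]

def fitness(design):
    # stand-in for the physics simulation: reward designs whose muscle
    # cells cluster on one side, roughly favoring bodies that push in
    # a single direction
    left = sum(v for i, v in enumerate(design) if i % GRID < GRID // 2)
    right = sum(design) - left
    return abs(left - right)

def mutate(design):
    child = design[:]
    child[random.randrange(len(child))] ^= 1    # flip one cell's type
    return child

def evolve():
    pop = [random_design() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)     # score every design
        survivors = pop[: POP // 2]             # keep the best half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(POP - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

After enough generations, the surviving designs are strongly asymmetric, just as the team's algorithm converged on body shapes that could move in one direction.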

The design of the xenobots.
Image credits Sam Kriegman, Douglas Blackiston, Michael Levin, Josh Bongard, (2020), PNAS.

This task was handled by a team of researchers at Tufts University led by co-lead author Michael Levin, who directs the Center for Regenerative and Developmental Biology at Tufts. First, they gathered and incubated stem cells from embryos of African clawed frogs (Xenopus laevis, hence the name “xenobots”). Then, these cells were cut and joined together under a microscope in a close approximation of the computer-generated designs.

The team reports that the cells began working together after ‘assembly’. They developed a passive skin-like layer and synchronized the contractions of their (heart) muscle cells to achieve motion. The xenobots were able to move in a coherent fashion for days or weeks at a time, the team found, powered by embryonic energy stores.

Later tests showed that groups of xenobots would move around in circles, spontaneously and collectively pushing pellets into a central location. Some of the xenobots were designed with a hole through the center to reduce drag, but the team was able to repurpose it so that the bots could carry an object.

It’s still alive… but on its back?

A manufactured quadruped organism, 650-750 microns in diameter.
Image credits Douglas Blackiston / Tufts University.

One of the most fascinating parts of this already-fascinating work, for me, is the resilience of these xenobots.

“The downside of living tissue is that it’s weak and it degrades,” says Bongard. “That’s why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades. We slice [a xenobot] almost in half and it stitches itself back up and keeps going. This is something you can’t do with typical machines.”

“These xenobots are fully biodegradable,” he adds, “when they’re done with their job after seven days, they’re just dead skin cells.”

However, none of the team’s designs was able to turn itself over when flipped on its back. It’s an almost comical little Achilles’ Heel for such capable biomachines.

The manufacturing process of the xenobots.
Image credits Sam Kriegman, Douglas Blackiston, Michael Levin, Josh Bongard, (2020), PNAS.

Still, they have a lot to teach us about how cells communicate and connect, the team writes.

“The big question in biology is to understand the algorithms that determine form and function,” says Levin. “The genome encodes proteins, but transformative applications await our discovery of how that hardware enables cells to cooperate toward making functional anatomies under very different conditions.”

“[Living cells] run on DNA-specified hardware,” he adds, “and these processes are reconfigurable, enabling novel living forms.”

Levin says that being fearful of what complex biological manipulation can bring about is “not unreasonable”, and that such work is very likely to have at least some “unintended consequences”, but explains that the current research aims to get a handle on exactly those consequences. The findings are also applicable to other areas of science and technology where complex systems arise from simple units, he explains, such as the self-driving cars and autonomous systems that will increasingly shape the human experience.

“If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules,” says Levin. “If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We’d have no idea.”

“I think it’s an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex. A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?”

The paper “A scalable pipeline for designing reconfigurable organisms” has been published in the journal PNAS.


Autonomous killer drone aims to save the Great Barrier Reef

A syringe-wielding, toxin-injecting bot will defend the Great Barrier Reef against invading starfish.


It’s quite the looker, too!
Image via Youtube.

It’s definitely a bad time to be a coral. Climate change is stressing the life out of these tiny creatures (literally), causing more and more frequent bleaching events. While this is likely enough to turn most reefs into dry husks on its own, corals also have to contend with overfishing and pollution, which have sped up their decline.

And it seems that the stars are also conspiring against the Great Barrier Reef — or, more specifically, the crown-of-thorns starfish.

Unleash the robots!

Crown-of-thorns starfish (Acanthaster planci) feed on coral, which isn’t a problem in itself: at natural population levels, they help keep coral growth in check without damaging the reef. However, there are clues that human activity (most notably agricultural runoff and port activity) has driven up their numbers to such an extent that in 2012 they were responsible for 42% of the total losses of coral, reported Terry Hughes for The Conversation.

In a bid to protect the reef against this ravenous tide, Australia plans to unleash teams of killer bots on the starfish.

The idea first took root in 2015, when researchers at the Queensland University of Technology (QUT) showcased the Crown-of-thorns Starfish robot (COTSbot). The bot was capable of autonomously seeking out its targets — with 99% accuracy — and delivering a chemical cocktail to finish them off.

The same team has further refined their idea, resulting in the RangerBot. The new drone (sporting the same yellow garb) can kill starfish just as easily as its predecessor. In addition, RangerBot brings several new tools to bear — it can monitor the reef’s health indicators, map underwater areas, and it comes with an extended battery allowing it to function for eight hours straight — about three times as long as a human diver. RangerBot’s advanced design, low cost, and autonomous capability won it the 2016 Google Impact Challenge People’s Choice prize.

RangerBot can do the job much more cheaply and efficiently than human divers and can operate at any time, night or day. It’s the world’s first underwater robotic system designed specifically for coral reef environments that uses only robot vision for real-time navigation, obstacle avoidance, and complex science missions.

It is operated using a smart tablet. The researchers also made a concentrated effort to keep the bot as simple to use as possible:

“Weighing just 15kg and measuring 75cm, it takes just 15 minutes to learn how to operate RangerBot using a smart tablet,” said Professor Matthew Dunbabin, who led the team that designed RangerBot.

“We also spent a lot of time getting the user interface as simple to use as possible so that as many of our stakeholders (from researchers, management authorities and school children) could potentially operate it with a small amount of training.”

While virtually all reefs are struggling, the Great Barrier Reef — being a designated World Heritage Site — enjoys the lion’s share of the efforts and technology dedicated to coral protection and rehabilitation. Drones, cameras, artificial reefs, and computer simulations have all been brought to bear to prevent the reef from undergoing irreversible damage.

Hopefully, these efforts will be successful and other coral reefs around the world will benefit from the lessons learned here.

“Environmental robotics is a real passion of ours and we see so much potential for these advanced technologies to transform the way we protect the world’s coral reefs,” Dunbabin concludes.

The Twitter discussion around vapes is grand — and 70% filled with bots

Huh. I wonder who could possibly stand to benefit from this.

Robot human hand.

Image via Tumisu / Pixabay.

Social media discussions around e-cigarettes and their effects on human health may largely be driven by bots, a new paper reports. The study, led by researchers from San Diego State University (SDSU), dredged the depths of Twitter to study the use and perceptions of e-cigarettes in the United States. The team planned to gain a better understanding of the people talking about vaping but instead found that most such users aren’t even people.

Smoking gun

The study started with a random sample of almost 194,000 geocoded tweets from across the United States, posted between October 2015 and February 2016. Out of these, the team drew 973 random tweets and analyzed them for sentiment and source — i.e., whether they came from an individual or an organization. Of those, 887 tweets were identified as posted by individuals, a category that includes potential bots.

More than 66% of tweets from individuals used a supportive tone when talking about the use of e-cigarettes. About 59% of individuals also shared tweets about how they personally used e-cigarettes. The team was also able to identify adolescent Twitter users, and over 55% of their tweets related to e-cigarettes used a positive tone. Of the tweets that referenced the harmfulness of e-cigarettes, 54% held that e-cigarettes are not harmful, or that they are significantly less harmful than traditional cigarettes.

The study raises an important question, however. To what extent are these debates our own, and to what extent are they promoted as ‘mainstream’ and ‘widely accepted’ in order to spin public discourse and sell more products? Over 70% of the tweets the team looked at seem to be penned by bots, the researchers report. So there are more chipsets than brains participating in this conversation. To add insult to injury, these bots pose as real people in an attempt to promote products and sway public opinion on the topic of their health effects.

“We are not talking about accounts made to represent organizations, or a business or a cause. These accounts are made to look like regular people,” said Lourdes Martinez, paper co-author. “This raises the question: To what extent is the public health discourse online being driven by robot accounts?”

And the discovery came about by accident. The team set out to use Twitter data to study what actual people discuss on the topic of e-cigarettes. During their research, however, they realized they were in fact dealing with a lot of bot accounts.

Bots ahoy

Hello, fellow humans. I am also human. I like to vape with my lung.

After observing anomalies in the dataset, namely related to confusing and illogical posts about e-cigarettes and vaping, the team reviewed user types and decided to reclassify them. They specifically made an effort to identify accounts that appeared to be operated by robots.

“Robots are the biggest challenges and problems in social media analytics,” said Ming-Hsiang Tsou, founding director of SDSU’s Center for Human Dynamics in the Mobile Age and co-author on the study.

“Since most of them are ‘commercial-oriented’ or ‘political-oriented,’ they will skew the analysis results and provide wrong conclusions for the analysis.”

The findings come just one month after Twitter purged its user base of millions of suspicious and fake accounts. The platform also announced it will launch new mechanisms aimed at identifying and fighting spam and other types of abuse on its virtual lands.

Tsou appreciates the effort and says that “some robots can be easily removed based on their content and behaviors,” while others “look exactly like human beings and can be more difficult to detect.”

“This is a very hot topic now in social media analytics research,” he says.

“The lack of awareness and need to voice a public health position on e-cigarettes represents a vital opportunity to continue winning gains for tobacco control and prevention efforts through health communication interventions targeting e-cigarettes,” the team wrote in the paper.

Martinez thinks public health agencies and organizations must make an effort to become more aware of the conversations happening on social media if they hope to have a chance of keeping the general public informed in the face of all of these bots.

“We do not know the source, or if they are being paid by commercial interests,” Martinez said. “Are these robot accounts evading regulations? I do not know the answer to that. But that is something consumers deserve to know, and there are some very clear rules about tobacco marketing and the ways in which it is regulated.”

The paper “‘Okay, We Get It. You Vape’: An Analysis of Geocoded Content, Context, and Sentiment regarding E-Cigarettes on Twitter” has been published in the Journal of Health Communication.

Biology can help patch the flaws in our robots, metastudy reports

Cyborgs might still be a ways away, but “biohybrid” bots might be closer than you think, according to an international team of researchers.

Robot Brain.

Image via midnightinthedesert.

The term cyborg refers to any biomechanical entity that was born organic and later received mechanical augmentations, either to restore lost functionality or to enhance its abilities. It’s possible that cyborgs will become commonplace in the future, as people turn to robotic prosthetics to replace lost limbs, explore whole new senses through mechanical augmentation, or by plugging into a Neuralink-like artificial mind.

But there’s also a flip side to the cyborg coin: the biohybrids — robots enhanced with living cells or tissues to make them more lifelike. Biological systems can bring a lot to the biohybrid table, such as muscle-cell augmentations to help the bots perform subtle movements, or bacterial add-ons to help them navigate through living organisms — and unlike cyborgs, biohybrids are coming online today, according to a new metastudy.


The paper, penned by an international group of scientists and engineers, aims to get an accurate picture of the state of biohybrid robotics today. The field, they report, is entering a “deep revolution in both [the] design principles and constitutive elements” it employs.

“You can consider this the counterpart of cyborg-related concepts,” said lead author Leonardo Ricotti, of the BioRobotics Institute at the Sant’Anna School of Advanced Studies, in Pisa, Italy. “In this view, we exploit the functions of living cells in artificial robots to optimize their performances.”

In recent years we’ve seen robots of all shapes and sizes bringing increasing complexity to bear in both software and hardware. They’re on assembly lines moving and welding heavy metal pieces, and sub-millimeter robots are being developed to kill cancer cells or heal wounds from within the body.

One thing robots haven’t quite gotten right in all this time, however, is fine movement. Actuation, the coordination of movements, has proven to be a persistent thorn in the side of robotics, the team writes. Robots can handle huge weights with impressive ease and fluidity. Alternatively, they can operate a laser cutter with perfect accuracy each and every time. But they have difficulty coordinating subtler actions, such as cracking an egg cleanly into a bowl or performing a caress. Unlike animal movements, which start gently on a micro scale and build up to large-scale motion, robots’ initial movements are jerky.

Another shortcoming, according to Ricotti, is that our bots are quite power hungry. They can’t hold a candle to the sheer energy efficiency of biological systems, refined by evolution almost to its limits over millions of years — a problem that’s particularly relevant in micro-robots, whose power banks are routinely larger than the robot itself.

Mixing living ‘parts’ into robots can solve these problems, she adds.

The team writes that muscles can provide the fine-accuracy actuation and steady movement that robots currently lack. For example, they showcase a group led by Barry Trimmer of Tufts University (Trimmer is also a co-author of the metastudy) that developed worm-like biohybrid robots powered by the contraction of insect muscle cells.

Co-author Sylvain Martel, of Polytechnique Montréal, is trying to solve the energy issue by outfitting his bots with bacterial treads. His work used magnetotactic bacteria, which move along magnetic field lines, to transport medicine to cancer cells. The method allows Martel’s team to guide the bacteria using external magnets, allowing them to target tumors or cells that have proven elusive in the face of traditional treatments.

Steel and sinew

Biohybrid robotics comes with its own set of drawbacks, however. Biological systems are notoriously more fragile than metal-borne robots, and they prove to be the weakest link in hybrid systems. Biohybrids can only operate in temperature ranges suitable for life (so no extreme heat or cold), are more vulnerable to chemical or physical damage, and so on. In general, if a living organism wouldn’t last too long in a certain place, neither would a biohybrid.

Finally, living cells need to be nourished, and that’s something we haven’t really learned how to do well in robots yet — so as of now, our biohybrids tend to be rather short-lived. But for all their shortcomings, biohybrid robots have a lot of promise. When talking about a manta-ray-like biobot developed by a team at Harvard last year, Adam Feinberg, a roboticist at Carnegie Mellon University, said that “by using living cells they were able to build this robot in a way that you just couldn’t replicate with any other material.”

“You shine a light, and it triggers the muscles to swim. You couldn’t replicate this movement with on-board electronics and actuators while keeping it lightweight and maneuverable.”

The paper “Biohybrid actuators for robotics: A review of devices actuated by living cells” has been published in the journal Science Robotics.

Researchers quantify basic rules of ethics and morality, plan to copy them into smart cars, even AI

As self-driving cars roar (silently, on electric engines) towards wide-scale use, one team is trying to answer a very difficult question: when accidents inevitably happen, where should the computer look for morality and ethics?

Ethical banana.

Image credits We Are Neo / Flickr.

Car crashes are a tragic, but so far unavoidable side effect of modern transportation. We hope that autonomous cars, with their much faster reaction speed, virtually endless attention span, and boundless potential for connectivity, will dramatically reduce the incidence of such events. These systems, however, also come pre-packed with a fresh can of worms — pertaining to morality and ethics.

The short of it is this: while we do have laws in place to assign responsibility after a crash, we understand that as it unfolds, people may not make the ‘right’ choice. Under the shock of the event there isn’t enough time to ponder the best course of action, and a driver’s reaction will be a mix of instinctual response and whatever seems — with limited information — to limit the risks for those involved. In other words, we take context into account when judging their actions, and morality is highly dependent on context.

But computers follow programs, and these aren’t compiled during car crashes. A program is written months or years in advance in a lab and will, in certain situations, sentence someone to injury or death to save somebody else. And therein lies the moral conundrum: how do you go about it? Do you ensure the passengers survive, and everyone else be damned? Do you make sure there’s as little damage as possible overall, even if that means sacrificing the passengers for the greater good? It would be hard to market the latter, and just as hard to justify the former.

When dealing with something as tragic as car crashes, likely the only solution we’d all be happy with is for there to be none at all — which sadly doesn’t seem possible as of now. The best possible course, however, seems to be making these vehicles act like humans, or at least as humans would expect them to act: encoding human morality and ethics into 1’s and 0’s and downloading them onto a chip.

Which is exactly what a team of researchers is doing at The Institute of Cognitive Science at the University of Osnabrück in Germany.

Quantifying what’s ‘right’

The team has a heavy background in cognitive neuroscience and has put that experience to work teaching machines how humans do morality. They had participants take a simulated drive in immersive virtual reality through a typical suburban setting on a foggy day and resolve unavoidable moral dilemmas involving inanimate objects, animals, and humans — to see which they decided to spare, and why.

By pooling the results of all participants, the team created statistical models outlining a framework of rules on which moral and ethical decision-making rely. Underpinning it all, the team says, seems to be a single value of life that drivers facing an unavoidable traffic collision assign to every human, animal, or inanimate object involved in the event. How each participant made their choice could be accurately explained and modeled starting from this set of values.

That last bit is the most exciting finding — the existence of this set of values means that what we think of as the ‘right’ choice isn’t dependent only on context, but stems from quantifiable values. And what algorithms do very well is crunch values.

“Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object,” said Leon R. Sütfeld, PhD student, Assistant Researcher at the University, and first author of the paper.

The findings offer a different way to address ethics concerns regarding self-driving cars and their behavior in life-threatening situations. Up to now, we’ve considered that morality is somehow innately human and that it can’t be copied, as shown by efforts to ensure these vehicles conform to ethical demands — such as the German Federal Ministry of Transport and Digital Infrastructure’s (BMVI) 20 ethical principles.

Some of the key points of the report are as follows:

  • Automated and connected transportation (driving) is ethically required when these systems cause fewer accidents than human drivers.
  • Damage to property must be allowed before injury to persons: in situations of danger, the protection of human life takes highest priority.
  • In the event of unavoidable accidents, all classification of people based on their personal characteristics (age, gender, physical or mental condition) is prohibited.
  • In all driving situations, it must be clearly defined and recognizable who is responsible for the task of driving – the human or the computer. Who is driving must be documented and recorded (for purposes of potential questions of liability).
  • The driver must fundamentally be able to determine the sharing and use of his driving data (data sovereignty).


Another point the report dwells on heavily is how the data recorded by the car can be used, and how to balance drivers’ privacy concerns against the demands of traffic safety and economic interest in the user’s data. While this data needs to be recorded to ensure that everything went according to the 20 ethical principles, the BMVI also recognizes that there are huge commercial and state-security interests in it. Practices such as those “currently prevalent” with social media should especially be counteracted early on, the BMVI believes.

At first glance, rules such as the ones the BMVI set down seem quite reasonable. Of course you’d rather have a car damage a bit of property, or even risk the life of a pet, than that of a person. It’s common sense, right? If that’s the case, why would you need a car to ‘understand’ ethics when you can simply have one that ‘knows’ ethics? Well, after a few e-mails back and forth with Mr. Sütfeld, I came to see that ethics, much like quantum physics, sometimes doesn’t play by the books.

“Some [of the] categorical rules [set out in the report] can sometimes be quite unreasonable in reality, if interpreted strictly,” Mr Sütfeld told ZME Science. “For example, it says that a human’s well-being is always more important than an animal’s well-being.”

To which I wanted to say, “well, obviously.” But now consider the following situation: say a dog runs out in front of a human-driven car in such a way that it is absolutely certain to be hit and killed if the driver doesn’t swerve onto the opposite lane. There’s a good chance the driver will spot the dog and avoid the collision, but there’s also a very small chance, say one in twenty, that he won’t be paying attention and will hit the animal — with very little injury for the person driving, something along the lines of a sprained ankle.

“The categorical rule [i.e. human life is more important] could be interpreted such that you always have to run over the dog. If situations like this are repeated, over time 20 dogs will be killed for each prevented spraining of an ankle. For most people, this will sound quite unreasonable.”

“To make reasonable decisions in situations where the probabilities are involved, we thus need some system that can act in nuanced ways and adjust its judgement according to the probabilities at hand. Strictly interpreted categorical rules can often not fulfil the aspect of reasonableness.”
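The expected-harm arithmetic behind this argument can be made concrete with a toy calculation. The value weights in the Python sketch below are invented purely for illustration; the study derived such values empirically from participants' choices in VR:

```python
# Toy expected-harm model of the dog scenario above: swerving saves the
# dog 19 times out of 20; one time in 20 the driver isn't paying attention,
# hits the dog anyway, and sprains an ankle. A categorical "human first"
# rule runs the dog over every time. All value weights are hypothetical.
P_MISS = 1 / 20                                   # chance swerving fails

value = {"dog": 1.0, "sprained_ankle": 0.05}      # invented value-of-life weights

harm_swerve = P_MISS * (value["dog"] + value["sprained_ankle"])
harm_categorical = value["dog"]                   # the dog dies every time

decision = "swerve" if harm_swerve < harm_categorical else "run over the dog"
```

Repeated over twenty such events, the categorical rule kills twenty dogs to prevent a single sprained ankle, the disproportion Sütfeld describes; a value-based model weighs the probabilities instead.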


Miniature car.

Image via Pixabay.

So simply following the Ethics Handbook 101 to the letter might lead to some very disappointing results because, again, morality is also dependent on context. The team’s findings could form the foundation of ethical self-driving behavior by allowing the cars the flexibility to interpret the rules correctly in each situation. And, as a bonus, if the car’s computers understand what it means to act morally and make ethical choices, a large part of that data may not need to be recorded in the first place — nipping a whole new problem in the bud.

“We see this as the starting point for more methodological research that will show how to best assess and model human ethics for use in self-driving cars,” Mr Sütfeld added for ZME Science.

Overall, imbuing computers with morality may have heavy ramifications in how we think about and interact with autonomous vehicles and other machines, including AIs and self-aware robots. However, just because we now know it can be possible, doesn’t mean the issue is settled — far from it.

“We need to ask whether autonomous systems should adopt moral judgements,” says Prof. Gordon Pipa, senior author of the study. “If yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones and critically, if things go wrong who or what is at fault?”

As an example, he cites the new principles set out by the BMVI. Under this framework, a child who runs out on a busy road and causes a crash would be classified as being significantly involved in creating the risk, and less qualified to be saved in comparison with a person standing on the sidewalk who wasn’t involved in any way in creating the incident.

It’s an impossible decision for a human driver. The by-stander was innocent and possibly more likely to evade or survive the crash, but the child stands to lose more and is more likely to die. But any reaction a human driver would take would be both justifiable — in that it wasn’t premeditated — and blamable — in that maybe a better choice could have been taken. But a pre-programmed machine would be expected to both know exactly what it was doing, and make the right choice, every time.

I also asked Mr Sütfeld if reaching a consensus on what constitutes ethical behavior in such a car is actually possible and, if so, how we can go about incorporating each country’s views on morality and ethics (their “mean ethical values”, as I put it) into the team’s results.

“Some ethical considerations are deeply rooted in a society and in law, so that they cannot easily be allowed to be overridden. For example, the German Constitution strictly claims that all humans have the same value, and no distinction can be made based on sex, age, or other factors. Yet most people are likely to save a child over an elderly person if no other options exist,” he told me. “In such cases, the law could (and is likely to) overrule the results of an assessment.”

“Of course, to derive a representative set of values for the model, the assessment would have to be repeated with a large and representative sample of the population. This could also be done for every region (i.e., country or larger constructs such as the EU), and be repeated every few years in order to always correctly portray the current ‘mean ethical values’ of a given society.”

So the first step towards ethical cars, it seems, is to sit down and have a talk — first, we need to settle on what the “right” choice actually is.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” explains Prof. Peter König, a senior author of the paper.

“Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, whether machines should act just like humans.”

But that’s something society as a whole has to establish. In the meantime, the team has worked hard to provide us with some of the tools we’ll need to put our decisions into practice.

As robots and AIs become a larger part of our lives, computer morality will only grow in importance. By helping machines better understand and relate to us, ethical AI might also alleviate some of the concerns people have about their use in the first place. I was already pressing Mr Sütfeld deep into the ‘what-if’ realm, but he agrees autonomous car ethics are likely just the beginning.

“As technology evolves there will be more domains in which machine ethics come into play. They should then be studied carefully and it’s possible that it makes sense to then use what we already know about machine ethics,” he told ZME Science.

“So in essence, yes, this may have implications for other domains, but we’ll see about that when it comes up.”

The paper “Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure” has been published in the journal Frontiers in Behavioral Neuroscience.


Endowing AI with confidence and doubt will make it more useful, paper argues

Hard-wiring AIs with confidence and self-doubt could help them better perform their tasks while recognizing when they need help or supervision, a team of researchers believes.


Initial image credits Tero Vesalainen.

Confidence — that thing we all wish we had at parties but can thankfully be substituted with alcohol. Having confidence in one’s own abilities is generally considered a good thing although, as it turns out from a certain presidency, too much of it and you annoy the whole planet. This is an important point to discuss, given that we’re toying around with creating actual minds in the form of AI. So would confidence, and its mirror twin doubt, prove of any use to a thinking machine?

That’s the question a team of researchers led by Dylan Hadfield-Menell from the University of California, Berkeley, set out to answer. We already know part of the answer — we know what happens when machines get overconfident, he says. A perfect example is Facebook’s newsfeed algorithms. They were designed to suggest articles and posts matching people’s interests, based on what users click on or share. But by following these instructions to the letter, they ended up filling some feeds with nothing but fake news. A sprinkling of self-doubt would have been a great boon in this case.

“If Facebook had this thinking, we might not have had such a problem with fake news,” says Hadfield-Menell.

The team believes the answer lies in human oversight. Instead of showing every article or post the algorithm thinks a Facebook user wants to see, a more uncertain system would be prompted to defer to a human referee in case a link smells fishy.

But knowing that doubt can help make our machines better at what they do isn’t the same as knowing how, and how much of it, should be implemented. So the team set up an experiment to determine how a robot’s sense of its own usefulness could be used in the creation of artificial intelligence.

The off-switch

The team designed a mathematical model of a human-robot interaction they call the “off-switch game” to see how a machine’s confidence levels would impact its interaction with us. Less a game per se and more of a simulation, it basically consists of a robot with an off switch which is given a task to do. A human overseeing the bot can press this button at any time to stop it but, on the other hand, the robot can choose to disable the switch if it so desires.

Not very surprisingly, when the machine was given a high degree of confidence it would never allow the human player to switch it off. In contrast, a robot endowed with low confidence would always allow the player to shut it down, even if it was performing its task perfectly.
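The trade-off can be sketched in a few lines of Python. This is a minimal illustration with an invented in-between decision rule, not the paper’s actual game-theoretic model: a fully confident robot never defers, a fully insecure one always does, and anything in between weighs its expected usefulness against the chance the human knows better.

```python
# Minimal sketch of the off-switch trade-off (the middle rule is invented
# for illustration; the paper derives the optimal policy formally).
def robot_defers(mean_utility: float, confidence: float) -> bool:
    """confidence in [0, 1]: how sure the robot is of its own utility estimate."""
    if confidence >= 1.0:
        return False   # fully confident: never lets itself be switched off
    if confidence <= 0.0:
        return True    # fully insecure: always defers, even mid-task
    # In between: defer when the expected gain from acting doesn't
    # outweigh the chance the human overseer knows better.
    expected_gain = confidence * mean_utility
    return expected_gain < (1.0 - confidence)

print(robot_defers(mean_utility=2.0, confidence=0.9))  # -> False
```

The interesting regime is the middle one: the robot’s willingness to be shut down depends on both how useful it thinks its action is and how much it trusts that estimate.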

Hadfield-Menell believes this is a good indication that we shouldn’t make AIs too “insecure”. For example, if you task your autonomous car with taking the kids to school in the morning, it should never let a child take control. In this case, the AI should be confident that its own ability is greater than the children’s and refuse to relinquish control. But if you were in the car and told it to stop, it should comply. The best robots, he adds, will be those that can best balance these two extremes.

While the idea of a robot refusing a command to stop or shut down might seem a bit scary or far-fetched (and has been debated at large in the past), context is everything. Humans are fallible too, and you wouldn’t want a robotic firefighter to abandon a rescue or stop putting out a fire because it was mistakenly ordered to. Or a robotic nurse to stop treating a delirious patient who orders it to shut down. This confidence is a key part of AI operation and something we’ll have to consider before putting people and AIs side by side in the real world.

The issue is wider than simple confidence, however. As machines will be expected to make more and more decisions that directly impact human safety, it’s important that we put a solid ethical framework in place sooner rather than later, according to Hadfield-Menell. Next, he plans to see how a robot’s decision-making changes with access to more information regarding its own usefulness — for example, how a coffee-pot robot’s behavior might change in the morning if it knows that’s when it’s most useful. Ultimately, he wants his research to help create AIs that are more predictable and make decisions that are more intuitive to us humans.

The full paper “The Off-Switch Game” is available on the arXiv preprint server.


Knowing for the sake of knowing: algorithm developed to hardwire curiosity into robots

To better flesh out artificial intelligence (AI), computer scientists have put together an algorithm that makes machines curious to explore and learn simply for the sake of learning. In the long run, such programs could even take bots out of the factories and put them side by side with researchers.

Learning Graffiti.

Sage advice.
Image credits Gerd Altmann.

The concepts of intelligence and curiosity feel so deeply entwined to us that it’s almost impossible to imagine one going very far without the other. And yet even the most powerful machine brains we’ve built up to now have had to make do without any kind of curiosity — computing and returning an answer when instructed to, going to the screensaver in the absence of input.

It’s not like we’re only figuring this out now. Scientists have been working on various ways to imbue our silicon friends with curiosity for quite some time, but their efforts have always fallen far short of the benchmark set by our innate inquisitiveness. One important limitation, for example, is that most curiosity algorithms can’t determine whether something will be interesting or not — because, unlike us, they can’t assess the sum of the data the machine has in store to spot potential gaps in knowledge. By comparison, you could tell with fairly high confidence whether a book will be interesting without reading it first.

Judging books by their cover

But Todd Hester, a computer scientist currently working with Google DeepMind in London, thinks that robots should actually be able to go against this morsel of folk wisdom. To that end, he teamed up with Peter Stone, a computer scientist at the University of Texas at Austin, to create the Targeted Exploration with Variance-And-Novelty-Intrinsic-Rewards (TEXPLORE-VENIR) algorithm.

“I was looking for ways to make computers learn more intelligently, and explore as a human would,” he says. “Don’t explore everything, and don’t explore randomly, but try to do something a little smarter.”

The way they did so was to base TEXPLORE-VENIR on a technique called reinforcement learning. It’s one of the main ways humans learn, too, and works through small increments towards an end goal. Basically, the machine or human in question tries something and, if the outcome brings it closer to a certain goal (such as clearing the board in Minesweeper), it receives a reward (for us, it’s dopamine) to promote that action or behavior in the future.

Reinforcement learning works for us — by making stuff like eating feel good so we don’t forget to eat — and it works for machines, too — it’s reinforcement learning that allowed DeepMind to master Atari games and Go, for example. But that was achieved through random experimentation and, furthermore, the program was instructed to learn the game. TEXPLORE-VENIR, on the other hand, acts similarly to the reward circuits in our brains by giving the program an internal reward for understanding something new, even if the knowledge doesn’t get it closer to the ultimate goal.

Robot reading Mythical Man.

Image credits Troy Straszheim / Wikimedia.

As the machine learns about the world around it, TEXPLORE-VENIR rewards it for uncovering new information that’s unlike what it’s seen before — exploring a novel patch of forest, or finding a new way to perform a certain task. But it also rewards the machine for reducing uncertainty i.e. for getting a deeper understanding of things it already ‘knows’. So overall, the algorithm works more closely to what we understand as curiosity than previous programs.
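Conceptually, the reward signal works something like the sketch below. This is my illustration of the two intrinsic bonuses described above (novelty and uncertainty reduction); the weights and function names are invented, and the real algorithm derives these quantities from its learned model of the environment.

```python
# Illustrative combination of intrinsic curiosity bonuses with the
# task's extrinsic reward (weights are made-up, not the paper's).
def intrinsic_reward(novelty: float, model_variance: float,
                     w_novelty: float = 1.0, w_variance: float = 1.0) -> float:
    """
    novelty: how unlike previously seen states the new state is.
    model_variance: disagreement among the agent's predictive models
    (high variance = a poorly understood region worth revisiting).
    """
    return w_novelty * novelty + w_variance * model_variance

def total_reward(extrinsic: float, novelty: float, variance: float,
                 curiosity_weight: float = 0.5) -> float:
    # Too large a curiosity_weight and the agent ignores its task;
    # too small and it stops exploring.
    return extrinsic + curiosity_weight * intrinsic_reward(novelty, variance)
```

The `curiosity_weight` knob is exactly the balance the researchers describe later: crank it up and the bot gets distracted from its task; turn it down and it stops learning.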

“They’re fundamentally different types of learning and exploration,” says Konidaris. “Balancing them is really important. And I like that this paper did both of those.”

Testing points

The researchers put TEXPLORE-VENIR to the test in two different scenarios. First, the program was presented with a virtual maze of four rooms connected by locked doors. Its task was to find a key, pick it up, and use it to unlock a door. To score the algorithm’s efficiency, the simulated bot earned 10 points each time it passed a door, with a 3,000-step cap during which to achieve the highest score possible. The bot was first allowed a 1,000-step exploration phase to familiarize itself with the maze.

When this warm-up period was done under the direction of TEXPLORE-VENIR, the bot averaged 55 door points in the 3,000-step phase. Under other curiosity algorithms, it averaged anywhere between 0 and 35 points, with the exception of R-Max, a program which also scored 55 points. When the program had to explore and pass through doors simultaneously, TEXPLORE-VENIR averaged around 70 points, R-Max around 35, while the others clocked in at under 5 points, the researchers report.

The second round of testing was performed with a physical robot, the Nao. It included three separate tasks, during which the machine earned points for hitting a cymbal, for holding a piece of pink tape (fixed on its hand) in front of its eyes, and for pressing a button on its foot. For each task, it was allowed 200 steps to earn points but was given an initial 400-step period to explore — either randomly or using TEXPLORE-VENIR.

Each method of exploration was used 13 times. Overall, Nao found the pink tape on its hand much faster using TEXPLORE-VENIR than with the random approach. It pressed the button in 7 of the 13 trials after using TEXPLORE-VENIR, compared to zero times after exploring randomly. Lastly, it hit the cymbal in one of five trials after using TEXPLORE-VENIR, but not once after exploring randomly. TEXPLORE-VENIR gave the robot a better grasp of the basics of how its body, the environment, and the task at hand worked — so it was well prepared for the trials after the exploration period.

As the team notes, striking a balance between internal and external rewards is the most important thing when it comes to learning. Too much curiosity could actually impede the robot. If the intrinsic reward for learning something is too great, the robot may ignore extrinsic rewards (i.e. those from performing its given tasks) altogether. R-Max, for example, scored fewer points in the simultaneous exploration and door-unlocking phase because its curiosity distracted it from its task, which I guess you could chalk up as AI ADHD. Too little curiosity, on the other hand, can diminish the bot’s capacity for learning. We’ve probably all had that one test where the grade was more important than actually learning anything — so you memorize, take the test, and then your mind wipes everything clean.

Hester says the next step in their research is to better tailor the algorithm after our brain architecture and use deep neural networks to make bots “learn like a child would.”

The full paper “Intrinsically motivated model learning for developing curious robots” has been published in the journal Artificial Intelligence.


These tiny birds’ hopping could teach robots how to navigate rough environments

While usually graceful and smooth, birds’ flight likely started off as a short hop-and-flap that helped dinosaurs forage better. A paper from Stanford University analyzes the energy used by a type of small parrot as it hops from branch to branch while foraging, and reports that its movements optimize energy usage and could be similar to the way the birds’ ancestors learned to fly.

Parrotlet hopping.

Image credits Diana Chin, Lentink Lab.

If you’re trying to understand the origins of animal flight, parrotlets make for wonderful lab assistants. These diminutive parrots, which live from Mexico down to southern South America, are easy to train and care for and have a rather generic flight pattern, unlike more specialized species such as hummingbirds. They’re also extremely cute.

More to the point, a team of researchers from the Department of Mechanical Engineering at Stanford University reports that the tiny birds tend to conserve energy over short perch-to-perch distances by jumping or hopping most of the way. This behavior could offer a glimpse into the early days of flight, when feathered dinosaurs were just taking off the ground.

“Sometimes they were more cautious, they would literally just step between perches,” says lead author Diana Chin. “There was one bird that would basically do the splits.”

Flying by degrees

The team worked with four Pacific parrotlets, rewarding them with a seed each time they voluntarily jumped between force-sensitive perches inside an aerodynamic force platform. When the researchers widened the gap between perches, the parrotlets started to add some half-wingbeats in their jump. Birds use this kind of hop-and-flap to navigate tree branches with minimal effort (and so minimal energy expenditure) while foraging for food.

“[…] we discovered that parrotlets direct their leg impulse to minimize the mechanical energy needed to forage over different distances and inclinations,” the paper reads.

Less energy spent searching for food means the birds can save it for situations when they really need it — such as fighting off a predator or competing for a mate. It’s likely that the first feathered dinosaurs also used this hopping behavior to forage for food, as the team’s computer models revealed that a single such “proto-wingbeat” could increase a feathered dinosaur’s jump range.

Parrotlet hopping 2.

Image credits Diana Chin, Lentink Lab.

Using data observed from the parrotlets and data from her previous studies, Chin put together a computer model showing the optimal angle of takeoff, and calculating the energy costs involved in different movements — for example the proto-wingbeats.

At first, while dinosaurs were still large and their feathers relatively small compared to their bodies, this increase in mobility was negligible; but as dinosaurs got smaller and more specialized, the effect of the proto-wingbeat increased dramatically. Furthermore, the models revealed that these short jumps contain all the motions and tools needed to eventually develop into actual wing beats and flight.
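A crude ballistic sketch shows why even a weak proto-wingbeat helps. The numbers below are toy values of my own choosing, not Chin’s model, and the wingbeat is crudely treated as a small boost to takeoff speed:

```python
# Toy projectile model of a jump; dv_wingbeat crudely models the extra
# impulse from a mid-jump flap as added takeoff speed (my assumption).
import math

def jump_range(v0: float, angle_deg: float, dv_wingbeat: float = 0.0,
               g: float = 9.81) -> float:
    """Horizontal range of a ballistic jump, in meters."""
    v = v0 + dv_wingbeat
    a = math.radians(angle_deg)
    return v ** 2 * math.sin(2 * a) / g

legs_only = jump_range(2.0, 45)
with_flap = jump_range(2.0, 45, dv_wingbeat=0.3)
print(round(with_flap / legs_only, 2))  # -> 1.32
```

Because range scales with the square of takeoff speed, even a modest boost from feathered forelimbs pays off disproportionately — here, a 15% speed increase buys roughly a third more range.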

Looking back at the way birds and their dinosaur ancestors learned to hop around in trees (a pretty complex environment to navigate compared to a flat surface) could help us design robots that can handle very difficult or varied terrain.

Chin’s models could help design robots with both legs and wings. By conserving energy and using the most efficient motions to get around a cluttered area, a winged robot could significantly extend its operational range. The team now plans to look into how parrotlets stick the landing on a wide variety of surfaces, and to work on designing and building the winged robots.

The paper “How birds direct impulse to minimize the energetic cost of foraging flight” has been published in the journal Nature.

Your robot always dropping stuff? Try these gecko-inspired pads

Researchers have created a new family of grippers inspired by the gecko’s ridiculously adhesive toes. These pads could be used to improve object handling on the production line or allow robots to better interact with the world.

Gecko gripper.

The gripper holding a flask of orange juice.
Image credits Sukho Song.

It’s easy to take gripping for granted, but when you really think about it (or have a robot to compare yourself with) it’s an amazing and complex skill. Our brains and bodies make it look easy — after all, even a small child knows how to handle all kinds of objects. But the number of minute processes that go on in the background, even for the simplest of gripping motions, is staggering.

Handle with care

Without even thinking about it, you know how much force to apply to grip an object without breaking it, and how to calibrate your motions so you’ll bring the cup to your lips — not throw it at the ceiling. But if you want to make a robot do the same thing, you’ll have to go through a lot of programming and trial and error.

Looking for a simpler way to help our digital friends get a grip on life, researchers have done a bit of biomimicry and copied the working principles of the gecko‘s toes. The resulting pads should help address many of the problems machines today have in manipulating objects and should allow them to navigate a much wider range of shapes and materials — such as irregularly shaped walls or ceilings and slippery metal surfaces.

A gecko’s toes can stick to almost anything using Van der Waals forces. Long story short, because some atoms and molecules tend to be polarized (having a positively-charged side and an opposing negatively-charged side), they also tend to push or pull at each other like really tiny magnets. Individually these forces are minuscule, but with enough of them they stack up to an impressive effect. The gecko’s toes are covered with tiny hair-like strands which maximize the contact area with a surface, boosting the Van der Waals effect and allowing the lizard to walk upside down if it so desires.

Gecko Upside Down.

Rub my belly.
Image credits Wikimedia user Tolbunt5.

Previous research on this subject has produced synthetic microfiber arrays which replicate the gecko’s sticky toes, but imperfectly. The catch is that properly sticking these materials to a surface takes pressure, meaning they have to be mounted on a rigid backing. That rigid backing, however, prevents the arrays from adhering to curved surfaces.

The FAM family

The new paper details how this issue can be solved by placing the microfibers on a thin, stretchy membrane to create a family of materials the researchers call fibrillar adhesives on a membrane (FAM), and developing a new kind of backing for the membranes.

For their gripper, the team used a FAM to cover one end of a shallow rubber funnel some 18 millimeters across. The other (narrow) end of the funnel was connected to an air pump, and after the FAM came in contact with the surface-to-be-held, all the air was sucked out of the funnel to flatten it onto any shape.

Testing revealed that a gripper with a contact area of only 2.5 square centimeters (roughly the size of a dime) could lift more than 300 grams (slightly less than your average can of soda). It could grip a coffee cup from the outside (a convex shape), the inside (a concave shape), or the handle (a complex shape). It also has a light touch — the gripper could lift a cherry tomato without damaging it, and a plastic bag without ripping it. Inflating the gripper is all that’s needed to release the objects.
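As a quick sanity check on those figures, the implied adhesive stress is modest — holding 300 grams over 2.5 square centimeters works out to roughly 12 kilopascals:

```python
# Back-of-the-envelope check of the numbers reported above.
mass_kg = 0.300      # ~ a can of soda
area_m2 = 2.5e-4     # 2.5 cm^2, roughly dime-sized
g = 9.81             # gravitational acceleration, m/s^2

stress_pa = mass_kg * g / area_m2
print(round(stress_pa / 1000, 1))  # -> 11.8 (kPa)
```

That’s only about a tenth of atmospheric pressure, which is part of why scaling the pads up to heavier loads seems plausible.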

The technology could be used in manufacturing to shuttle delicate or complex-shaped components around, or in medicine to grip organs without damaging them. Alternatively, it could give robots enough grip to climb onto planes, ships, or reactors to perform maintenance and repairs.

But before we start seeing them used on a wide scale, researchers have to ensure that the grippers are durable enough to withstand hundreds of thousands of usage cycles, see how they scale up to grip heavier loads and make them economically viable in comparison to simple clamps or suction cups.

The team is now working on scaling the grippers up to a few tens of centimeters so they can lift heavier objects, and is still testing their durability.

The full paper “Controllable load sharing for soft adhesive interfaces on three-dimensional surfaces” has been published in the journal PNAS.


Robot see, robot do: MIT software allows you to instruct a robot without having to code

Researchers have put together C-LEARN, a system that should allow anyone to teach their robot any task without having to code.

The robot chef from the Easy Living scene in Horizons at EPCOT Center.
Image credits Sam Howzit / Flickr.

Quasi-intelligent robots are already a part of our lives, and someday soon, their full-fledged robotic offspring will be too. But until (or rather, unless) they reach a level of intelligence where we can teach them verbally, as you would a child, instructing a robot will require you to know how to code. Since coding is complicated, more complicated than just doing the dishes yourself, anyway, it’s unlikely that regular people will have much use for robots.

Unless, of course, we could de-code the process of instructing robots. Which is exactly what roboticists at the MIT have done. Called C-LEARN, the system should make the task of instructing your robot as easy as teaching a child. Which is a bit of good-news-bad-news, depending on how you feel about the rise of the machines: good, because we can now have robot friends without learning to code, and bad, because technically the bots can use the system to teach one another.

How to train your bot

So, as I’ve said, there are two ways you can go about it. The first is to program the robot, which requires coding expertise and takes a lot of time. The other is to show the bot what you want it to do by tugging on its limbs or moving digital representations of them around, or by doing the task yourself and having it imitate you. For us muggles, the latter is the way to go, but it takes a lot of work to teach a machine even simple movements — and afterward it can only repeat them, not adapt them.

C-LEARN is meant to forge a middle road and address the shortcomings of these two methods by arming robots with a knowledge base of simple steps that they can intelligently apply when learning a new task. A human user first helps build up this base by working with the robot. The paper describes how the researchers taught Optimus, a two-armed robot, by using software to simulate the motion of its limbs.

The researchers demonstrated movements such as grasping the top of a cylinder or the side of a block, in different positions, repeating each motion seven times from each position. The motions varied slightly each time, so the robot could look for underlying patterns in them and integrate those patterns into its data bank. If, for example, the simulated grasper always ended up parallel to the object, the robot would note that this position is important to the process and constrain its future motions to maintain that parallelism.
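The underlying trick can be sketched very simply. This is a deliberately simplified illustration with made-up feature names (C-LEARN learns far richer geometric constraints): features that barely vary across the demonstrations get promoted to constraints.

```python
# Simplified constraint inference: lock in any feature whose value
# is nearly constant across demonstrations (feature names invented).
import statistics

def infer_constraints(demos, tolerance=1e-2):
    """demos: one dict of feature -> value per demonstration."""
    constraints = {}
    for feature in demos[0]:
        values = [d[feature] for d in demos]
        if statistics.pvariance(values) < tolerance:
            constraints[feature] = statistics.mean(values)  # treat as fixed
    return constraints

# Seven demos: the gripper-object angle stays ~0, the approach height varies.
demos = [{"angle_to_object": 0.0, "approach_height": h}
         for h in (0.05, 0.40, 0.12, 0.35, 0.20, 0.45, 0.08)]
print(infer_constraints(demos))  # -> {'angle_to_object': 0.0}
```

The low-variance angle becomes a constraint the robot will enforce in future motions, while the freely varying height is left unconstrained.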

By this point, the robot is very similar to a young child “that just knows how to reach for something and grasp it,” according to lead researcher Claudia Pérez-D’Arpino. But starting from this database, the robot can learn new, complex tasks following a single demonstration. All you have to do is show it what you want done, then approve or correct its attempt.

Does it work?

Robot and human hands.

To test the system, the researchers taught Optimus four multistep tasks — to pick up a bottle and place it in a bucket, to grab and lift a horizontal tray using both hands, to open a box with one hand and use the other to press a button inside it, and finally to grasp a handled cube with one hand and pull a rod out of it with the other. Optimus was shown how to perform each task once, made 10 attempts at each, and succeeded 37 out of 40 times. Which is pretty good.

The team then went one step further and transferred Optimus’s knowledge base and its understanding of the four tasks to a simulation of Atlas, the bullied bot, which managed to complete all four tasks using the data. When researchers corrupted the data banks by deleting some of the information (such as the constraint to keep the grasper parallel to the object), Atlas failed to perform the tasks. Such a system would allow us to transfer the models of motion created by one bot, with its thousands of hours of training and experience, to any other robot — anywhere in the world, almost instantly.

D’Arpino is now testing whether having Optimus interact with people for the first time can refine its movement models. Afterward, the team wants to make the robots more flexible in how they apply the rules in their data banks, so that they can adjust their learned behavior to whatever situation they’re faced with.

The goal is to make robots that are able to perform complex, dangerous, or just plain boring tasks with high precision. Applications could include bomb defusal, disaster relief, high-precision manufacturing, and helping sick people with housework.

The findings will be presented later this month at the IEEE International Conference on Robotics and Automation in Singapore.

You can read the full paper “C-LEARN: Learning Geometric Constraints from Demonstrations for Multi-Step Manipulation in Shared Autonomy” here.

This robot works six times faster than humans — and it’s putting jobs at risk

The time when we could speak about ‘robots taking our jobs’ in the future tense has passed. Many factories around the world are already replacing their workforce with robots — and it’s working out for them. Now, a new robot called SAM might revolutionize bricklaying, but it might also leave lots of people without a job.

SAM, short for Semi-Automated Mason, was created by the New York-based Construction Robotics. It’s capable of laying down 3,000 bricks per day, which makes it six times faster than the average human — and cheaper, too. According to a report by Zero Hedge, with SAM you’d end up at a cost of about 4.5 cents per brick. With a human mason at a $15-per-hour wage rate, plus benefits, you end up with a cost of 32 cents per brick — seven times more than with SAM. The robot also doesn’t require any breaks or sleep, so you can basically use it round the clock, as long as there is a human supervisor around. Human assistance is still needed to load bricks and mortar into the system and to clean up excess mortar from the joints after the bricks have been laid.
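The arithmetic behind those figures is easy to reconstruct. The benefits loading below is my own assumption (the report’s exact inputs may differ), but it shows how you get from a $15 hourly wage to roughly 32 cents per brick:

```python
# Rough reconstruction of the per-brick cost comparison
# (benefits_multiplier is an assumed ~33% loading, not from the report).
sam_bricks_per_day = 3000
human_bricks_per_day = sam_bricks_per_day / 6   # "six times faster"

wage_per_hour = 15.0
benefits_multiplier = 1.33
hours_per_day = 8

human_cost_per_brick = (wage_per_hour * benefits_multiplier * hours_per_day
                        / human_bricks_per_day)
print(round(human_cost_per_brick, 2))  # -> 0.32 (dollars per brick)
```

Against SAM’s roughly 4.5 cents per brick, that’s about a sevenfold difference — which is exactly the gap the report highlights.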

SAM has already been deployed to several construction sites in the US and will make its UK debut later this year. Each SAM can be rented at a monthly cost of ~$3,300 but, even so, Construction Robotics claims the unit can halve the cost of bricklaying, which means major construction projects would save a lot of money.

Meet SAM. Image credits: Construction Robotics.

Of course, this doesn’t mean that if you’re a bricklayer you should start looking for a new job just yet. As mentioned above, SAM still needs a supervisor, and someone to finesse its work. There are just a few SAMs to go around and, when you consider all the construction sites in the world, the impact SAM has right now is negligible. These are still just the pioneering stages of such projects, but they are a telling sign of what’s to come. More and more such robots will be developed, and that’s alright — who doesn’t want the same job done better and cheaper? What isn’t alright is that a lot of people might be left without a job, and we as a society don’t have a backup plan yet. Automation is coming at us with a fury, whether we’re ready or not.


First deep-sea mining operation scheduled to start in 2019 — here are the bots that will do it

Canadian-based firm Nautilus Minerals Inc. plans to launch the world’s first deep-sea mining operation in early 2019. The company will deploy three remote-controlled mining robots off the coast of Papua New Guinea, on the floor of the Bismarck Sea, to mine rich metal deposits.

Each of the robots is the size of a small house and equipped with huge, teeth-riddled rock-crushing devices to chew through the ocean’s bottom. The smallest one weighs 200 tons, and they will be propelled from spot to spot on huge treads in their search for paydirt.

The auxiliary cutter.

The first bot, known as the auxiliary cutter, clears the way for the other two to operate.
Image credits Nautilus Minerals Inc.

“A lot of people don’t realize that there are more mineral resources on the seafloor than on land,” Michael Johnston, CEO of Nautilus, told Seeker. “Technology has allowed us to go there.”

Pressed by looming shortages on one hand and the prospect of lucrative exploitations on the other, companies and governing bodies have started joining hands to bring sea-bed mining into the picture. To date, over twenty exploration contracts have been issued by the International Seabed Authority (ISA), a part of the UN tasked with regulating areas of the seafloor that lie outside of any national jurisdiction.

“In the seabed, resources are incredibly rich,” said Michael Lodge, Secretary-General of the ISA. “These are virgin resources. They’re extremely high-grade. And they are super-abundant.”

We’ve recently talked about how current levels of mining exploration and exploitation just won’t be able to supply future demand. As populations grow and economies develop, current raw material exploitations will need new additions to satisfy that extra demand. There’s also the need to create a strong mining base to support the development of low-carbon economies — which rely on technology materials currently in short supply.

Seabed mining offers an attractive solution to this problem: untouched resources just waiting to be taken in the form of massive sulfide deposits of copper, nickel, cobalt, gold, and platinum.

“It’s no exaggeration to say that there are thousands of years’ supply of minerals in the seabed,” Secretary-General Lodge said. “There is just absolutely no shortage.”

The Auxiliary Cutter.

The Auxiliary Cutter removes rough terrain and creates benches for the other machines to work on.
Image credits Nautilus Minerals Inc.

Nautilus says that early tests in the Bismarck Sea site have shown the area to be over ten times as rich in copper as comparable land-based mines, with more than three times the gold concentration of the average land exploitation. These fantastic numbers generally come down to the fact that surface resources have been thoroughly explored and long exploited, meaning that the richest deposits on land aren’t around anymore — they’re now cars, or copper wires, or planes. By comparison, the deposits locked on the sea floor look like a cornucopia of resources just waiting to be harvested.

And I’m all for that. Considering the need, it may not be a question of ‘do we want to exploit the sea floor’ but rather one of ‘how are we going to make it if we don’t?’ That being said, we’ve had a lot of time and opportunities up here on dry land to see what rampant exploitation without care for the places being exploited leads to. As the idea of seabed mining comes closer to reality, we should really think about what the consequences of our actions would be — and how not to make a mess down there as we did topside. Some think that we’re better off just banning the practice altogether.

“There are too many unknowns for this industry to go ahead,” said Natalie Lowrey of the Australia-based Deep Sea Mining Campaign. “We’ve already desecrated a lot of our lands. We don’t need to be doing that in the deep sea.”

“There’s a serious concern that the toxicity from disturbing the deep sea can move up the food chain to the local communities [who live along the coast of Papua New Guinea].”

The Collecting Machine.

The Collecting Machine gathers cut material by drawing it in as seawater slurry with internal pumps and pushing it through a flexible pipe to the riser and lifting system.
Image credits Nautilus Minerals Inc.

One of her main concerns is that plumes of sediment stirred up during mining operations will travel along sea currents and interfere with ocean ecosystems. The clouds of silt could prove harmful to filter-feeders which often form the lower brackets of food chains — so a hit here would impact all other sea creatures.

Michael Johnston said that the company is taking the sediment plume issue seriously and has designed its equipment to minimize any undersea clouding generated by the collection procedure.

“When we’re cutting, we have suction turned on,” he said. “It’s not like we’re blowing stuff all over the place. We’re actually sucking it up. So the plume gets minimized through the mining process.”

“We go to great efforts to minimize the impact of the plumes. We’re quite confident that the impact from these activities will be significantly less than some of these people claim.”

Still, going forward we should primarily be concerned with not messing things up too badly — because as we’ve seen, there’s no such thing as a free lunch. We’ll have to wait and see how it all develops. In the meantime, one thing is certain.

“If Nautilus goes ahead, it’s going to open the gateway for this industry,” Lowrey concludes.

Emotional computers really freak people out — a new take on the uncanny valley

New research shows that AIs we perceive as too mentally human-like can unnerve us even if their appearance isn’t human, furthering our understanding of the ‘uncanny valley’ and potentially directing future work into human-computer interactions.

Image credits kuloser / Pixabay.

Back in the 1970s, Japanese roboticist Masahiro Mori advanced the concept of the ‘uncanny valley’ — the idea that humans will appreciate robots and animations more and more as they become more human-like in appearance, but find them unsettling as they become almost-but-not-quite-human. In other words, we know how a human should look, and a machine that ticks some of the criteria but not all is too close for comfort.

The uncanny valley of the mind

That’s all well and good for appearance — but what about the mind? To find out, Jan-Philipp Stein and Peter Ohler, psychologists at the Chemnitz University of Technology in Germany, had 92 participants observe a short conversation between two virtual avatars, one male and one female, in a virtual plaza. The characters talked about their exhaustion from the hot weather, after which the woman spoke of her frustration at having too little free time and her annoyance at waiting for a friend who was running late; the man then expressed his sympathy for her plight. Pretty straightforward small talk.

The trick was that while everyone witnessed the same scene and dialogue, the participants were given one of four context stories. Half were told that the avatars were controlled by computers, and the other half that they were human-controlled. Furthermore, half of the group was told that the dialogue was scripted and the others that it was spontaneous, in such a way that each context story was fed to one quarter of the group.
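The four context stories described above amount to a 2×2 between-subjects design: one factor for who controls the avatars (human vs. computer), one for whether the dialogue is scripted or spontaneous. A minimal sketch of how such an assignment could work (the function and variable names here are illustrative, not taken from the study's materials):

```python
import itertools
import random

# Two experimental factors, as described in the study
CONTROLLERS = ("human", "computer")
DIALOGUES = ("scripted", "spontaneous")

# The four context stories are all combinations of the two factors
CONDITIONS = list(itertools.product(CONTROLLERS, DIALOGUES))


def assign_conditions(participants, seed=0):
    """Randomly split participants evenly across the four conditions."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in CONDITIONS}
    for i, participant in enumerate(shuffled):
        groups[CONDITIONS[i % len(CONDITIONS)]].append(participant)
    return groups


# 92 participants, as in the study -> four groups of 23
groups = assign_conditions(range(92))
```

With 92 participants and four conditions, each quarter of the group (23 people) sees the same scene under a different cover story — which is what lets the researchers attribute any difference in eeriness ratings to the framing alone.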

Out of all the participants, those who were told that they’d be witnessing two computers interact on their own reported the scene as more eerie and unsettling than the other three groups did. People were okay with humans or script-driven computers exhibiting natural-looking social behavior, but when a computer showed frustration or sympathy on its own, it put people on edge, the team reports.

Given that the team managed to elicit this response in their participants only through the concept they presented, they call this phenomenon the ‘uncanny valley of the mind,’ to distinguish between the effects of a robot’s perceived appearance and personality on humans, noting that emotional behavior can seem uncanny on its own.

In our own image

Image credits skeeze / Pixabay.

The main takeaway from the study is that people may not be as comfortable with computers or robots displaying social skills as they think they are. It’s all fine and dandy if you ask Alexa about the CIA and she answers/shuts down, but expressing frustration that you keep asking her that question might be too human for comfort. And with social interactions, the effect may be even more pronounced than with appearance alone — because appearance is obvious, but you’re never quite sure how human-like the computer’s programming is.

Stein believes the volunteers who were told they were watching two spontaneous computers interact were unsettled because they felt their human uniqueness was under threat: if computers can emulate us, what’s stopping them from taking control of our own technology? In future research, he plans to test whether this uncanny valley of the mind can be mitigated when people feel they have control over the human-like agents’ behavior.

So are human-like bots destined to fail? Not necessarily — people may have felt the situation was creepy because they were only witnessing it. It’s like having a conversation with Cleverbot, only a cleverer one. A Clever2bot, if you will. It’s fun while you’re doing it, but once you close the conversation and mull it over, you just feel like something was off with the talk.

By interacting directly with the social bots, humans may actually find the experience pleasant, thus reducing its creepy factor.

The full paper “Feeling robots and human zombies: Mind perception and the uncanny valley” has been published in the journal Cognition.