Category Archives: Future

U.S. Army tests its first high-energy laser weapon

Artist illustration of a Stryker-mounted laser weapon taking out enemy airborne targets. Credit: Northrop Grumman.

The U.S. Army is just a few small steps away from fielding its first combat-ready, high-powered laser weapon. Over the summer, such a weapon was mounted on a Stryker military vehicle and used in tests at Fort Sill, Oklahoma, in a “combat shoot-off” against a series of possible combat scenarios. The first platoon of four laser-mounted Strykers is expected to join the ranks of the army in early 2022.

“This is the first combat application of lasers for a maneuver element in the Army,” said Lieutenant General L. Neil Thurgood in a statement to the press.

“The technology we have today is ready. This is a gateway to the future,” said Thurgood, who is the director for hypersonics, directed energy, space and rapid acquisition.

During the shoot-off, defense contractors Northrop Grumman and Raytheon each brought a 50-kilowatt laser weapon to the field in order to demonstrate short-range air defense (SHORAD) against a series of simulated threats and combat scenarios. These included drones, rockets, artillery, and mortar targets.

Laser-equipped Stryker on the field during tests. Credit: U.S. Army/Jim Kendell.

Once reserved for science fiction, laser weapons are now a reality — one that will hit hard once these lasers are deployed on the battlefield.

Lasers were first invented in the 1960s, but only recently have researchers been able to design a high-power laser system small enough to be deployed in a tactical environment without taking up an entire truck or airplane.

Designing a laser powerful enough to take out a mortar shell from a mile away is a huge engineering challenge. The way it is done is through a technique known as spectral beam combination, whereby the outputs of multiple fiber lasers are combined into a single high-power beam rather than relying on one individual fiber laser.

Lockheed’s ATHENA laser weapon punching a hole in a target vehicle. Credit: Lockheed Martin.

Think of a prism that breaks up a white light beam into the colors of the rainbow. High-power lasers run this process in reverse, combining a set of beams at different wavelengths of electromagnetic energy and outputting a single beam.
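For a rough sense of the scaling (all numbers below are illustrative assumptions, not any contractor's specs), the combined output grows with the number of fiber channels, minus losses in the combining optics:

```python
# Toy model of spectral beam combination: N fiber lasers, each emitting at its
# own wavelength, are overlapped by a grating into a single co-propagating beam.
# Channel count, per-channel power, and efficiency are assumptions, not specs.

num_channels = 10           # hypothetical number of fiber-laser channels
power_per_channel_kw = 5.5  # hypothetical per-fiber output, in kilowatts
combining_efficiency = 0.9  # hypothetical optical loss factor

wavelengths_nm = [1060 + 0.5 * i for i in range(num_channels)]  # distinct lines
combined_kw = num_channels * power_per_channel_kw * combining_efficiency

print(f"{num_channels} channels spanning {wavelengths_nm[0]}-{wavelengths_nm[-1]} nm")
print(f"combined output ≈ {combined_kw:.1f} kW")  # roughly the 50 kW class tested here
```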

Laser weapon development ramped up in the past decade in response to the rising threats of armed drones and short-range mortar or rocket barrages. These unguided projectiles can’t be put out of action with sophisticated countermeasures, such as jamming or redirection, and the window between launch and impact is very short.

In this rapidly evolving threat landscape, laser weapons suddenly become appealing. For instance, the US Navy has an ongoing program called HELIOS (High Energy Laser with Integrated Optical-dazzler and Surveillance), which aims to install a laser weapon system on an Arleigh Burke-class destroyer. The Air Force is currently testing the High Energy Laser Weapon System 2, made by Raytheon Space and Airborne Systems, with the primary goal of disabling enemy drones.

The US Army isn’t sitting idle either. These recent 50 kW trials represent a major step forward in the Army’s ambitions to deploy laser weapons on the battlefield of the future, where it currently faces a gap in short-range air defense.

Lockheed Indirect Fire Protection Capability-High Energy Laser (IFPC-HEL). Credit: Lockheed Martin.

“Offering lethality against unmanned aircraft systems (UAS) and rockets, artillery and mortars (RAM), laser weapons now increase Army air and missile defense capability while reducing total system lifecycle cost through reduced logistical demand,” the Army said in a statement.

According to Task & Purpose, the Army aims to field a first platoon of four laser-equipped Strykers by 2022. The Army is also working on a monstrously powerful 300 kW Indirect Fire Protection Capability-High Energy Laser (IFPC-HEL) truck-mounted laser, slated for 2024. The IFPC-HEL truck, currently in development by Lockheed Martin, would be powerful enough to put cruise missiles out of action.

These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and real space. The basis of this field, evolutionary computing, sees robots possessing a virtual genome ‘mate’ to ‘reproduce’ improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats way beyond human abilities. But what happens when unstable situations or environments call for robots never seen on Earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion-dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there was a better alternative? What if there was a type of artificial intelligence that could take lessons from evolution to generate robots that can adapt to their environment? It sounds like something from a sci-fi novel — but it’s exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary theory of variation and selection, these robots can optimize their descendants depending on a set of activities over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” It offers a way to explore evolutionary principles and set up an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a family of algorithms inspired by biological evolution, in which candidate solutions are generated and constantly “evolved”. Each new generation removes the less desirable solutions and introduces small adaptive changes, or mutations, to produce a cyber version of survival of the fittest. It’s a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
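Here is a minimal sketch of that loop in Python (illustrative only; this is not ARE’s codebase, and the genome and fitness function are stand-ins for a real robot’s task score):

```python
import random

# A minimal evolutionary loop in the spirit described above (illustrative only,
# not ARE's actual code). Each "genome" is a list of numbers; the fitness
# function is a stand-in for the task score a real robot would earn.

GENOME_LEN, POP_SIZE, GENERATIONS = 8, 20, 50

def fitness(genome):
    # Hypothetical score: higher is better (here, closeness to a target value).
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(a, b):
    # One-point recombination mixes two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.1):
    # Small random tweaks stand in for genetic mutation.
    return [g + random.gauss(0, 0.05) if random.random() < rate else g
            for g in genome]

population = [[random.random() for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]            # selection of the fittest
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children                 # the next generation

print(f"best fitness after evolution: {max(map(fitness, population)):.4f}")
```

Real systems score fitness by running physical or simulated robots rather than a formula, but the select-recombine-mutate cycle is the same.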

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two-parent robots come together to mingle virtual genomes to create improved young, incorporating both their genetic codes.

The newly evolved offspring is built autonomously via a 3D printer, after which a mechanical assembly arm, following the inherited virtual genome, selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the artificial system wires up a Raspberry Pi computer acting as a brain to the sensors and motors — software is then downloaded from both parents to represent the evolved brain.
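To make the genotype-to-phenotype step concrete, here is a toy decoder (the component banks and genome format are invented for illustration; ARE’s real encoding is far richer):

```python
# A toy genotype-to-phenotype decoder (the genome format and component banks
# are invented for illustration; ARE's real encoding is far richer).

SENSOR_BANK = ["camera", "infrared", "sonar"]
LOCOMOTION_BANK = ["wheel", "jointed_leg", "track"]

def decode(genome):
    """Translate a list of integer genes into a parts list for one robot."""
    n_sensors = 1 + genome[0] % 3                     # gene 0: how many sensors
    sensors = [SENSOR_BANK[g % len(SENSOR_BANK)]      # next genes: which ones
               for g in genome[1:1 + n_sensors]]
    locomotion = LOCOMOTION_BANK[genome[1 + n_sensors] % len(LOCOMOTION_BANK)]
    return {"sensors": sensors, "locomotion": locomotion, "brain": "raspberry_pi"}

print(decode([2, 0, 1, 2, 1]))
# {'sensors': ['camera', 'infrared', 'sonar'], 'locomotion': 'jointed_leg', ...}
```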

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants because of breeding between different ‘species’. For example, a parent with wheels might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But the inherited brain may struggle to control the new body, so an algorithm is run as part of the learning stage to refine the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they can proceed to the next phase: testing.
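In spirit, that refinement stage can be as simple as a trial-and-keep loop, sketched below as a generic hill climber with an invented scoring function (not ARE’s actual learning algorithm):

```python
import random

# A bare-bones version of the refinement stage (a generic hill climber with an
# invented scoring function, not ARE's actual learning algorithm): perturb the
# inherited controller and keep only the changes that score better in trials.

def trial_score(weights):
    # Stand-in for running the infant robot in the simplified environment.
    return -sum((w - 0.3) ** 2 for w in weights)

weights = [random.random() for _ in range(6)]   # controller inherited from parents
best = trial_score(weights)

for _ in range(200):                            # a few hundred refinement trials
    candidate = [w + random.gauss(0, 0.05) for w in weights]
    score = trial_score(candidate)
    if score > best:                            # keep only improvements
        weights, best = candidate, score

print(f"refined controller score: {best:.4f}")
```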

2. Selection of the fittest: who can reproduce?

For testing, ARE uses a specially built mock-up of an inert nuclear reactor housing, in which young robots must identify and clear radioactive waste while avoiding various obstacles. After a robot completes the task, the system scores it according to its performance, and those scores determine which robots will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.

Evolutionary roboticist and ARE researcher Guszti Eiben says this sped-up evolution has a key advantage: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

This parallel universe entails the creation of a digital version of every mechanical infant in a simulator once mating has occurred, which enables the ARE researchers to build and test new designs within seconds, identifying those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard.” Therefore: “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may fill a more immediate need. And as climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve even further as we step back and hand them the reins of their own virtual lives. How this will affect the human race remains to be seen.

These nanobots powered by magnets can successfully remove water pollutants

Surface water, including lakes, canals, rivers, and streams, is a key resource for agriculture, industries, and domestic households. It’s quite literally essential to human activity. However, it’s also very susceptible to pollution, and cleaning it up is rarely easy. But we may have a new ally in this fight: nanobots.

Image credit: Wikipedia Commons.

According to the UN, 90% of sewage in developing countries is dumped untreated into water bodies. Industries are also to blame, as they dispose of between 300 and 400 megatons of polluted water in water bodies every year. Nitrate, used extensively by agriculture, is the most common pollutant currently found in groundwater aquifers.

Once these pollutants enter surface water, they are very difficult and costly to remove through conventional methods, and hence they tend to remain in the water for a long time. Heavy metals have been detected in fish from rivers, posing risks to human health. Water pollution can also lead to massive disease outbreaks.

The use of nanotechnology in water treatment has recently gained wide attention and is being actively investigated. In water treatment, nanotechnology has three main applications: remediating and purifying polluted water, detecting pollution, and preventing it. This has created a big demand for nanorobots with high sensitivity.

However, there’s a technical challenge. Most nanorobots use catalytic motors, which pose problems in practice: they are easily oxidized, which can restrict the lifespan and efficiency of the nanorobots. This is where the new study comes in.

A new type of nanorobot

Martin Pumera, a researcher at the University of Chemistry and Technology in the Czech Republic, and his colleagues developed a new type of nanorobot, using a temperature-sensitive polymer material and iron oxide. The polymer acts like tiny hands that pick up and dispose of the pollutants, while the iron oxide makes the nanorobots magnetic.

The robots created by Pumera and his team are 200 nanometers wide (300 times thinner than a human hair) and are powered by magnetic fields, allowing the researchers to control their movement. Unlike many other nanorobots, they don’t need any fuel to function and can be used more than once. This makes them sustainable and cost-effective.

In the study, the researchers showed that the uptake and release of pollutants in surface water are regulated by temperature. At a low temperature of 5°C, the robots scattered in the water. But when the temperature was raised to 25°C, they aggregated and trapped any pollutants between them. They can then be removed with a magnet.

The nanorobots could eliminate about 65% of the arsenic in 100 minutes, based on the 10 tests done by the researchers for the study. Pumera told ZME Science that the technology is scalable, and his team is currently in conversations with wastewater treatment companies, hoping to move the system from the bench to proof-of-concept solutions.
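As a rough illustration of what that removal rate implies (assuming simple first-order kinetics, an assumption made here for the arithmetic rather than a claim from the paper), 65% removal in 100 minutes corresponds to a rate constant of roughly 0.01 per minute:

```python
import math

# Back-of-the-envelope only: if pollutant uptake followed first-order kinetics
# (an assumption for illustration, not a claim from the paper), C(t) = C0*e^(-kt).
removed_fraction = 0.65   # reported: ~65% of arsenic removed...
elapsed_min = 100         # ...in 100 minutes

k = -math.log(1 - removed_fraction) / elapsed_min
print(f"implied rate constant k ≈ {k:.4f} per minute")   # ≈ 0.0105

t90 = -math.log(1 - 0.90) / k                             # time to 90% removal
print(f"time to 90% removal ≈ {t90:.0f} minutes")         # ≈ 219
```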

The study was published in the journal Nature.

People find AI-generated faces to be more trustworthy than real faces — and it could be a problem

Not only are people unable to distinguish between real faces and AI-generated faces, but they also seem to trust AI-generated faces more. The findings from a relatively small study suggest that nefarious actors could be using AI to generate artificial faces to trick people.

The most (top row) and least (bottom row) accurately classified real (R) and synthetic (S) faces. Credit: DOI: 10.1073/pnas.2120481119

Worse than a coin flip

In recent years, artificial intelligence has come a long way. It’s not just used to analyze data; it can also create text, images, and even video. A particularly intriguing application is the creation of human faces.

In the past couple of years, algorithms have become strikingly good at creating human faces. This could be useful on one hand — it enables low-budget companies to produce ads, for instance, essentially democratizing access to valuable resources. But at the same time, AI-synthesized faces can be used for disinformation, fraud, propaganda, and even revenge pornography.

Human brains are generally pretty good at telling apart real from fake, but in this area, AIs are winning the race. In a new study, Dr. Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, conducted experiments to analyze whether participants could distinguish state-of-the-art AI-synthesized faces from real faces, and what level of trust the faces evoked.

 “Our evaluation of the photo realism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces,” the researchers note.

The researchers designed three experiments, recruiting volunteers from the Mechanical Turk platform. In the first one, 315 participants classified 128 faces taken from a set of 800 (either real or synthesized). Their accuracy was 48% — worse than a coin flip.

Representative faces used in the study. Could you tell apart the real from the synthetic faces? Participants in the study couldn’t. Image credits: DOI: 10.1073/pnas.2120481119.

More trustworthy

In the second experiment, 219 new participants were trained on how to analyze and give feedback on faces. They were then asked to classify and rate 128 faces, again from a set of 800. Their accuracy increased thanks to the training, but only to 59%.

Meanwhile, in the third experiment, 223 participants were asked to rate the trustworthiness of 128 faces (from the set of 800) on a scale from 1 to 7. Surprisingly, synthetic faces were ranked 7.7% more trustworthy.
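To unpack that figure, “7.7% more trustworthy” is a relative gap between the two groups’ mean ratings on the 1-to-7 scale. A quick illustration with placeholder means (not the study’s exact values):

```python
# "7.7% more trustworthy" is a relative gap between mean ratings on the 1-to-7
# scale. The means below are placeholders chosen to produce a similar gap; they
# are not the study's exact values.
real_mean = 4.48        # hypothetical mean trustworthiness of real faces
synthetic_mean = 4.82   # hypothetical mean for synthetic faces

relative_gap = (synthetic_mean - real_mean) / real_mean
print(f"synthetic faces rated {relative_gap:.1%} more trustworthy")  # ≈ 7.6%
```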

“Faces provide a rich source of information, with exposure of just milliseconds sufficient to make implicit inferences about individual traits such as trustworthiness. We wondered if synthetic faces activate the same judgements of trustworthiness. If not, then a perception of trustworthiness could help distinguish real from synthetic faces.”

“Perhaps most interestingly, we find that synthetically-generated faces are more trustworthy than real faces.”

There were also some interesting takeaways from the analysis. For instance, women were rated as significantly more trustworthy than men, and smiling faces were also rated as more trustworthy. Black faces were rated as more trustworthy than South Asian faces, but otherwise, race did not seem to affect trustworthiness.

“A smiling face is more likely to be rated as trustworthy, but 65.5% of the real faces and 58.8% of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,” the study notes.

The researchers offer a potential explanation as to why synthetic faces could be seen as more trustworthy: they tend to resemble average faces, and previous research has suggested that average faces tend to be considered more trustworthy.

Although the sample size is fairly small and the findings need to be replicated on a larger scale, the results are pretty concerning, especially considering how fast the technology has been progressing. The researchers say that if we want to protect the public from “deep fakes,” there should be some guidelines on how synthesized images are created and distributed.

“Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often-laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

“At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic-media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors.”

The study was published in PNAS.

An AI was just used to control plasma inside a nuclear fusion reactor

A groundbreaking technology has been used to improve another, as researchers have demonstrated how AI could be used to control the superheated plasma inside a tokamak-type fusion reactor.

“This is one of the most challenging applications of reinforcement learning to a real-world system,” says Martin Riedmiller, a researcher at DeepMind.

DeepMind produced a range of shapes whose properties are under study by plasma physicists. Image credits: DeepMind & SPC/EPFL.

Current nuclear plants use nuclear fission to harness energy, forcing larger atoms to split into smaller ones. Fusion is the opposite process: two or more atomic nuclei combine to form a heavier nucleus. It’s the process that powers stars, but harnessing this power and using it on Earth is extremely challenging.

If you’re essentially building a miniature star (hotter than the surface of the Sun) and then using it to harness its power, you need to be absolutely certain you can control it. Researchers use a lot of tricks to achieve this, like magnets, lasers, and clever designs, but it has still proven to be a gargantuan challenge.

This is where AI could enter the stage.

Researchers use several designs to try and contain this superheated plasma — one of these designs is called a tokamak. A tokamak uses magnetic fields in a donut-shaped containment area to keep the superheated atoms (as plasma) under control long enough that we can extract energy from it. The main idea is to use this magnetic cage to keep the plasma from touching the reactor walls, which would damage the reactor and cool the plasma.
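For a sense of how strong such a magnetic cage needs to be, here is a generic textbook estimate from Ampère’s law, with assumed coil numbers rather than TCV’s actual specifications:

```python
import math

# Generic textbook estimate of the toroidal field at a tokamak's major radius,
# from Ampere's law: B = mu0 * (total ampere-turns) / (2 * pi * R).
# All numbers are illustrative assumptions, not TCV's actual parameters.
mu0 = 4 * math.pi * 1e-7          # vacuum permeability, T*m/A
ampere_turns = 16 * 100 * 4_000   # e.g. 16 coils x 100 turns x 4 kA (assumed)
R = 0.88                          # assumed major radius, in meters

B = mu0 * ampere_turns / (2 * math.pi * R)
print(f"toroidal field on axis ≈ {B:.2f} T")   # ≈ 1.45 T with these numbers
```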

TCV plasma. Image credits: Curdin Wüthrich, SPC/EPFL

Controlling this plasma requires constant shifts in the magnetic field, and the researchers at DeepMind (the Google-owned company that built the AlphaGo and AlphaZero AIs that dominated Go and chess) felt like this would be a good task for an algorithm.

They trained an unnamed AI to control and change the shape of the plasma by changing the magnetic field using a technique called reinforcement learning. Reinforcement learning is one of the three main machine learning approaches (alongside supervised learning and unsupervised learning). In reinforcement learning, the AI takes certain actions to maximize the chance of earning a predefined reward.
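Here is a bare-bones illustration of the idea on a toy problem (nothing like the plasma controller in scale, but the same reward-driven learning loop):

```python
import random

# Toy reinforcement learning (Q-learning) on a six-cell track: the agent earns
# a reward only at the rightmost cell and learns to walk toward it. Purely
# illustrative; the plasma controller is a deep neural network trained on far
# richer observations and rewards.

N_STATES, ACTIONS = 6, (-1, +1)   # positions 0..5; step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for _ in range(500):                               # training episodes
    s = 0
    while s != N_STATES - 1:                       # episode ends at the goal
        explore = random.random() < epsilon
        a = random.choice(ACTIONS) if explore else max(ACTIONS, key=lambda x: q[(s, x)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Update: nudge Q toward the reward plus discounted future value.
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

policy = [max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_STATES - 1)]
print(policy)   # expected: [1, 1, 1, 1, 1], i.e. always step toward the reward
```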

After the algorithm was trained on a virtual reactor, it was given control of the magnets inside the Tokamak à Configuration Variable (TCV), an experimental tokamak reactor in Lausanne, Switzerland.

The AI controlled the plasma for only two seconds, but this is as long as the TCV can run without overheating — and it was a long enough period to assess the AI’s performance.

Every 0.0001 seconds, the AI took 90 different measurements describing the shape and location of the plasma, adjusting the magnetic field accordingly. To speed the process up, the AI was split into two different networks — a large network that learned via trial and error in the virtual stage, and a faster, smaller network that runs on the reactor itself.
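A minimal sketch of the small deployed half of that split (layer sizes and coil count are assumptions for illustration, and the weights are placeholders rather than trained values):

```python
import numpy as np

# Sketch of the small deployed network described above. Layer sizes and the
# coil count are assumptions, and the weights are random placeholders; in
# reality they come from training the large network in simulation and
# exporting a compact controller.

rng = np.random.default_rng(0)
N_MEASUREMENTS, HIDDEN, N_COILS = 90, 64, 19

W1 = rng.standard_normal((HIDDEN, N_MEASUREMENTS)) * 0.1
W2 = rng.standard_normal((N_COILS, HIDDEN)) * 0.1

def policy(measurements):
    """One forward pass: 90 sensor readings -> one command per magnetic coil."""
    h = np.tanh(W1 @ measurements)
    return np.tanh(W2 @ h)          # bounded commands for the coil supplies

# One tick of the control loop (every 0.0001 s in the article's description):
sensors = rng.standard_normal(N_MEASUREMENTS)
commands = policy(sensors)
print(commands.shape)               # (19,)
```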

“Our controller first shapes the plasma according to the requested shape, then shifts the plasma downward and detaches it from the walls, suspending it in the middle of the vessel on two legs. The plasma is held stationary, as would be needed to measure plasma properties. Then, finally the plasma is steered back to the top of the vessel and safely destroyed,” DeepMind explains in a blog post.

“We then created a range of plasma shapes being studied by plasma physicists for their usefulness in generating energy. For example, we made a “snowflake” shape with many “legs” that could help reduce the cost of cooling by spreading the exhaust energy to different contact points on the vessel walls. We also demonstrated a shape close to the proposal for ITER, the next-generation tokamak under construction, as EPFL was conducting experiments to predict the behavior of plasmas in ITER. We even did something that had never been done in TCV before by stabilizing a “droplet” where there are two plasmas inside the vessel simultaneously. Our single system was able to find controllers for all of these different conditions. We simply changed the goal we requested, and our algorithm autonomously found an appropriate controller.”

The controller trained with deep reinforcement learning steers the plasma through multiple phases of an experiment. On the left, an inside view of the tokamak during the experiment. On the right, the reconstructed plasma shape and the target points the researchers wanted to hit. Image credits: DeepMind & SPC/EPFL.

While this is still in its early stages, it’s a very promising achievement. DeepMind’s AIs seem ready to move on from complex games into the real world, and make a real difference — as they previously did with protein structure.

This doesn’t mean that we’ll have nuclear fusion tomorrow. Although we’ve seen spectacular breakthroughs in the past couple of years, and although AI seems to be a promising tool, we’re still a few steps away from realistic fusion energy. But the prospect of virtually limitless fusion energy, once thought to be technically impossible, now seems within our reach.

The study was published in Nature.

Cultured meat is coming. But will people eat it?

Cultured chicken salad. Image credits: UPSIDE.

The prospect of cultured meat is enticing for several reasons. For starters, it’s more ethical — you don’t need to kill billions of animals every year. It could also be better for the environment, producing lower emissions and requiring less land and water than “traditional” meat production, and would also reduce the risk of new outbreaks (potentially pandemics) emerging. To top it all off, you can also customize cultured meat with relative ease, creating products that perfectly fit consumers’ tastes.

But there are also big challenges. In addition to the technological challenges, there is the need to ensure meat culturing is not only feasible and scalable but also cheap. There’s also a more pragmatic problem: taste. There’s a lot to be said about why people enjoy eating meat, but much of it boils down to how good it tastes. Meanwhile, cultured meat has an undeniable “artificial” feel to it (at least for now). Despite being made from the exact same cells as “regular” meat, it seems unnatural and unfamiliar, so there are fears that consumers may reject it as unappealing.

Before you even try it

A recent study underlines just how big this taste challenge is — and how perception (in addition to the taste per se) could dissuade people from consuming cultured meat. According to the research, which gathered data from 1,587 volunteers, 35% of non-vegetarians and 55% of vegetarians find cultured meat too disgusting to eat.

“As a novel food that humans have never encountered before, cultured meat may evoke hesitation for seeming so unnatural and unfamiliar—and potentially so disgusting,” the researchers write in the study.

For vegetarians, the aversion towards cultured meat makes a lot of sense. For starters, even though it’s not meat from a slaughtered animal, it’s still meat, and therefore has the potential to elicit disgust.

“Animal-derived products may be common triggers of disgust because they traditionally carry higher risks of disease-causing microorganisms. Reminders of a food’s animal origin may evoke disgust particularly strongly among vegetarians,” the study continues.

For non-vegetarians, it’s quite the opposite: cultured meat can elicit disgust because it’s not natural enough. Many studies highlight that meat-eaters express resistance to trying cultured meat because of its perceived unnaturalness. So to make cultured meat more appealing to consumers, you’d have to approach things differently for vegetarians and non-vegetarians. For instance, perceiving cultured meat as resembling animal flesh predicted less disgust among meat-eaters but more disgust among vegetarians. But there were also similarities between the two groups: perceiving cultured meat as unnatural was strongly associated with disgust toward it among both vegetarians and meat-eaters. Combating beliefs about unnaturalness could go a long way towards convincing people to at least give cultured meat a shot.

A cultured rib-eye steak. Image credits: Aleph Farms / Technion — Israel Institute of Technology.

Even before people eat a single bite of cultured meat, their opinion may already be shaped. If we want to get people to consume this type of product, tackling predetermined disgust is a big first step. Different cultures could also have widely different preferences in this regard.

“Cultured meat offers promising environmental benefits over conventional meat, yet these potential benefits will go unrealized if consumers are too disgusted by cultured meat to eat it.”

Okay, but is cultured meat actually good?

Full disclosure: no one at ZME Science has tried cultured meat yet (but we’re working on it). Even if we had, our experience wouldn’t be necessarily representative of the greater public. Herein lies one problem: compared to how big the potential market is, only a handful of people have actually tasted this type of meat. We don’t yet have large-scale surveys or focus groups (or if companies have this type of data, they haven’t publicly released it from what we could find).

The expert reviews seem to be somewhat favorable. In a recent blind test, Israeli MasterChef judge Michal Ansky was unable to differentiate between “real” chicken and its cultured alternative. Ansky tasted the cultured chicken that was already approved for consumption in Singapore (the first place where cultured meat has been approved).

The remarkable progress that cultured meat has made with regard to taste was also highlighted by a recent study from the Netherlands, in which blind-tested participants preferred the taste of cultured meat.

“All participants tasted the ‘cultured’ hamburger and evaluated its taste to be better than the conventional one in spite of the absence of an objective difference,” the researchers write.

The study authors also seemed confident that cultured meat could become mainstream given its appealing taste and environmental advantages.

“This study confirms that cultured meat is acceptable to consumers if sufficient information is provided and the benefits are clear. This has also led to increased acceptance in recent years. The study also shows that consumers will eat cultured meat if they are served it,” said Professor Mark Post from Maastricht University, one of the study authors.

Researchers are also close to culturing expensive, gourmet types of meat, including the famous Wagyu beef, which normally sells for around $400 per kilogram. Researchers are already capable of culturing bits of this meat at a quarter of that price, and the cost is expected to keep going down. This would be a good place for cultured meat to start, making expensive types of meat more available to the masses.

Still, there are some differences between most types of cultured meat and meat coming from animals. For instance, one study that used an “electronic tongue” to analyze the chemical make-up of the meat found “significant” differences.

“There were significant differences in the taste characteristics assessed by an electronic tongue system, and the umami, bitterness, and sourness values of cultured muscle tissue were significantly lower than those of both chicken and cattle traditional meat,” the study reads. But the same study also suggests that understanding these differences could make cultured meat even more realistic and palatable.

The technology is also progressing very quickly, and every year cultured meat takes strides towards becoming more affordable and tasty. There are multiple companies pending approval to embark on mass production, using somewhat different technologies and products. There are multiple types of meat on the horizon, from chicken and beef to pork and even seafood, and for many of them, the taste data is only just coming in.

All in all, cultured meat promises to be one of the biggest food revolutions in the past decades. Whether it will actually deliver on this promise is a different problem that will hinge on several variables, including price, taste, and of course, environmental impact. If companies can deliver a product that truly tastes like traditional meat, they have a good chance. There’s still a long road before the technology becomes mainstream, but given how quickly things have progressed thus far, we may see cultured meat on the shelves sooner than we expect.

These spinal cord implants allow paralyzed patients to stand, walk, and even swim and cycle

Credit: EPFL.

In 2018, Swiss researchers Grégoire Courtine and Jocelyne Bloch made headlines with an implant they devised that sends electrical pulses to the spinal cord of paralyzed patients. The stimulation of the spinal nerves triggers plasticity in the cells, which seems to regenerate nerve connections, allowing test subjects paralyzed from the waist down to stand and walk — something doctors had told them they were unlikely to do again in their lifetimes. Now, the same team from the Swiss Federal Institute of Technology (EPFL) and Lausanne University Hospital has showcased an upgraded version of this spinal cord electrical stimulation — and the improvements speak for themselves.

The personalized spinal cord electrode implants were shown to restore motor movements within a few hours of the therapy’s onset in three paralyzed patients. The volunteers could not only stand and walk, but also perform motor movements that are an order of magnitude more complex, such as cycling, swimming, and canoeing.

Credit: EPFL.

Furthermore, the newly designed electrode paddle configuration can work with patients with more severe spinal cord injuries. For instance, the two 2018 patients who first tested the system retained some residual control over their legs following injury — too little for them to walk or even stand but just enough for the external electrical stimulation to allow them to regain motor function. In the upgraded version, the three patients who underwent spinal cord stimulation are completely paralyzed and were hence unable to voluntarily contract any of their leg muscles.

One of these patients is Michel Roccati, an Italian man who became completely paralyzed after a motorcycle accident four years prior to his enrollment in the EPFL spinal cord stimulation therapy. Bloch, a professor and neurosurgeon at Lausanne University Hospital, surgically implanted the new electrode lead in his spinal cord, and after recovery Roccati was ready to put the whole thing to the test.


During one particularly windy day in downtown Lausanne, the researchers and Roccati gathered outdoors with an array of hardware. The walker used by Roccati had been fitted with two small remote controls that connect wirelessly to a pacemaker in the patient’s abdomen, which in turn relays the signals to the spinal implants. The signal is converted into discrete electrical pulses that stimulate specific neurons, allowing Roccati to move his lower limbs.

For the entire duration of the test, Roccati was in full control. The patient grasped the walker and pressed the remote control buttons when he intended to move. For instance, he would press the right side button when he intended to move his left leg. Pressing the buttons almost magically caused his legs to spring forward. He was walking — and getting better and stronger with each therapy session.

“The first few steps were incredible – a dream come true!” he says. “I’ve been through some pretty intense training in the past few months, and I’ve set myself a series of goals. For instance, I can now go up and down stairs, and I hope to be able to walk one kilometer by this spring.”

The updated system employs more sophisticated electrode paddles that target the dorsal roots in the lumbosacral region of the spinal cord, as well as the lowest nerve root in the spine, which is responsible for trunk stability. The implants are controlled by an artificial intelligence system whose stimulation algorithms are designed to imitate nature, activating the spinal cord the way the brain normally would to allow us to stand, walk, swim, or ride a bike.

“Our therapy restored independent walking within a few hours after the onset of the therapy, in addition to many additional motor activities that are critical for rehabilitation and daily life,” Robin Demesmaeker, a researcher at EPFL and the Department of Clinical Neurosciences, University Hospital Lausanne, told ZME Science.

“Central to this remarkably more effective and ultrafast therapeutic efficacy was a series of disruptive technological innovations driven by our understanding of the mechanisms through which electrical spinal cord stimulation restores movement after paralysis,” added Demesmaeker, who is also the first author of the new study that appeared today in the journal Nature Medicine.

The two other patients who’ve tested the new system also made dramatic improvements in their quality of life. In each case, the therapy — both the electrode placement on the spinal cord and the activities involved — was personalized. After several months of intensive training, the three patients were able to regain muscle mass, move about more independently, and take part in social activities that were previously out of reach, like having a drink standing at a bar. Millions of other patients in similar conditions could stand to benefit from the same therapy.

“The main requirement is that the region of the spinal cord where the spinal implant is placed should still be intact and that the lesion should be higher,” Demesmaeker wrote in an email. “The major difficulties with more severe cervical injuries are owed to injury-induced blood-pressure instability that leads to severe orthostatic hypotension, making upright locomotor training impossible as well as highly impaired arm and hand function impeding the use of assistive devices such as crutches and a walker.”

The researchers in Switzerland are still in the middle of an ongoing clinical trial, in which they’re trying to find the optimal path towards enabling brain-controlled spinal cord stimulation in real time.

“We are also assessing the ability of spinal cord stimulation to alleviate other problems such as hemodynamic instability in patients with spinal cord injury and gait deficits in patients with Parkinson’s disease,” Demesmaeker said.

Half plane, half rocket, this Chinese supersonic jet could fly Beijing to New York in only an hour

Credit: Space Transportation.

Chinese company Space Transportation wants to take a stab at the growing space tourism market with a winged rocket capable of suborbital travel. The reusable space plane could take wealthy tourists to the edge of space, then land them on the other side of the world in no time. A trip from Beijing to New York would take only an hour.

Space Transportation was founded in 2018, and last August it managed to raise $46 million to develop its flagship supersonic spaceplane. Although details are still sparse, a video presentation on the company’s website shows passengers boarding a vertical plane attached to a glider wing with two boosters. Once it reaches a high altitude in the stratosphere, the airplane detaches from the wing and boosters, which land back on the launch pad on their own. The airplane, now in suborbital space, proceeds to its destination — either back at the launch site, after passengers experience a brief stint of weightlessness, or a different destination altogether, virtually anywhere in the world. Touchdown is done vertically on three legs deployed from the rear, according to Space.com.

Credit: Space Transportation.

The developers behind the project seem pretty serious about it. So far, they’ve conducted 10 flight tests of the self-landing booster rockets, the last of which was done in collaboration with a combustion research lab at Tsinghua University.

In many ways, Space Transportation sounds like the Chinese version of Virgin Galactic and, to a lesser degree, SpaceX. In the summer of 2021, Virgin Galactic founder Sir Richard Branson made headlines after he went on an 11-minute suborbital flight, reaching 55 miles (88 km) above the Earth’s surface. Just a week later, fellow billionaire Jeff Bezos made it past the Kármán line, the internationally recognized boundary of space, at nearly 62 miles (100 km) above Earth’s surface, aboard a capsule launched by Blue Origin’s New Shepard reusable rocket.

Credit: Space Transportation.

Global space tourism is projected to reach just $1.7 billion by 2027, according to a report published in 2021. Virgin Galactic has hundreds of reservations for tickets on future flights, sold at between $200,000 and $250,000 each. No reservation data has been made public by Blue Origin, but we can presume it will soon start making more commercial space tourism flights.

However, neither Virgin Galactic nor Blue Origin seems interested in point-to-point travel. In addition to potential space tourism flights, Space Transportation’s vehicle also doubles as a supersonic plane capable of traveling at more than 2,600 mph. SpaceX had plans for a similar concept when it announced its “Earth to Earth” project in 2017, which repurposes the “BFR” rocket originally meant to carry passengers to Mars. But Elon Musk’s company hasn’t released any details about this city-to-city passenger transport since then, which may mean the idea has been scrapped entirely.

Perhaps SpaceX found city-to-city supersonic travel financially unfeasible, but Space Transportation doesn’t seem deterred. It is planning ground tests by 2023, the first flight by 2024, and a crewed mission by 2025. Looking farther into the future, the Chinese startup dreams of testing an orbital crew space vehicle, the kind that SpaceX uses to ferry crew and cargo to the International Space Station, by 2030.

Chinese AI ‘nanny’ cares for mouse babies in artificial womb

Credit: Pixabay.

Researchers in China have developed an artificial intelligence (AI) system that monitors, in real time, developing mouse fetuses as they grow in an artificial womb. The robot constantly measures key embryo development indicators, such as carbon dioxide and nutrient levels, and adjusts them for optimized growth. Although the technology is being tested on mice, it’s within the realm of possibility that some humans may someday be birthed in much the same way via an artificial surrogate. But before this happens, many ethical concerns stand in the way.

Artificial wombs: abominable contraptions or a life-saving cradle?

Ex-utero gestation — that is, the gestation of an unborn infant outside the body, using an artificial womb — has been gaining a lot of attention in the past decade. For instance, in 2017, scientists in the United States devised a womb-like environment filled with a substance that mimics prenatal fluids, in which premature lambs matured healthily for four weeks. This ‘biobag’ has the potential to change the face of neonatal intensive care units, giving premature babies born earlier than 24 weeks a fighting chance at survival.

At the moment, babies younger than 22 weeks have no hope of survival. Even older premature neonates face overwhelming odds as their hearts and lungs are not yet fully developed to function outside the womb, even when helped by life-support systems in neonatal units. Even those who make it may experience complications that can lead to life-long disabilities. Ideally, such babies would be immediately transferred to an artificial womb once they’re delivered to continue their development until they’re healthy enough.

Beyond saving premature babies, artificial wombs are appealing because they allow women to have babies without the trauma of childbirth. Most women experience some level of injury during childbirth, including muscle tears, lifelong incontinence, organ damage, or fractures to pelvic bones. In extreme cases, some expecting mothers go through severe labor complications, including heart attacks, kidney failure, and aneurysms, which can be life-threatening for both mother and infant. Then there’s the psychological toll of childbirth, which has been associated with postnatal PTSD and depression.

The fact that there’s a growing demand for surrogacy tells us that artificial wombs may have a future. During gestational surrogacy, eggs from the biological mother are fertilized with the father’s or donor’s sperm and then the embryo is placed into the uterus of the surrogate, who carries the child to term and delivers it. In this case, the biological mother is still the woman whose eggs are used, while the surrogate is called the ‘birth mother’. However, surrogacy doesn’t solve the complications related to childbirth or prenatal babies — it just externalizes these risks to a third party.

The AI nanny

The artificial womb system. Credit: Suzhou Institute of Biomedical Engineering and Technology.

This is where artificial wombs may come in, and the technology — although not currently feasible (or legal) for delivering human babies — is advancing rapidly. In 2019, researchers from China grew a monkey fetus from the stage of a fertilized egg to the organ-forming stage inside a synthetic uterus, marking the first time a primate embryo had developed this far outside its mother’s body.

Now, researchers at the Suzhou Institute of Biomedical Engineering and Technology, also from China, just published a study in the Journal of Biomedical Engineering, in which they described the workings of an AI that monitors embryos as they develop into fetuses and adjusts key parameters for optimal growth. In this case, the artificial womb — which the researchers call a “long-term embryo culture device” in their study — grows multiple mouse embryos inside cube-shaped enclosures filled with all the nutritious fluids they need to develop.

This kind of ex-utero embryonic development requires careful observation because the needs of the embryo can differ depending on its growth stage. The development process then has to be manually adjusted, a task that is cumbersome and prone to human error. But the robotic system automatically monitors and adjusts the embryo development environment in real-time and around the clock. Even the slightest changes in embryo development are registered and fine-tuned for optimal development, according to the South China Morning Post.
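In spirit, the adjustment loop resembles simple closed-loop feedback control, as in this toy sketch (parameter names and numbers are invented; the study’s actual control system is more elaborate):

```python
# Toy closed-loop feedback of the kind described above (parameter names and
# numbers are invented for illustration; the actual system is more elaborate).
# Each tick, the controller nudges every culture parameter toward its setpoint.

setpoints = {"co2_percent": 5.0, "glucose_mmol": 5.5, "temp_c": 37.0}
readings  = {"co2_percent": 5.6, "glucose_mmol": 4.9, "temp_c": 36.8}
gain = 0.5   # proportional gain: how aggressively to correct deviations

for name, target in setpoints.items():
    error = target - readings[name]
    readings[name] += gain * error   # apply a proportional correction
    print(f"{name}: error {error:+.2f}, adjusted to {readings[name]:.2f}")
```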

Additionally, the robot takes ultra-sharp images at varying depths during key developmental moments. This kind of monitoring could, for instance, reveal important insights about the very earliest development phases of the human embryo, which are still shrouded in mystery. However, international guidelines and many national laws prohibit studies on human embryos beyond two weeks of development — anything more is deemed unethical, although this position may change if the societal benefits derived from such research heavily outweigh the downsides.

Overall, the AI and embryo development system do not amount to a true artificial womb, since the mouse fetuses aren’t grown into live pups, but it’s a step in the right direction — a proof of concept that may pique interest, especially in China. Although people associate China with seemingly never-ending population growth and draconian ‘one child’ policies meant to stave off overpopulation, that’s no longer true at all.

China’s demographic profile is catching up with that of other developed countries: people are marrying increasingly later and having fewer children. In 2021, China experienced its lowest net population growth in six decades, with only half as many babies born as in 2016. Artificial womb technology may help reverse this trend, which worries states across the world. But are we ready for this cyberpunk-like mode of reproduction?

Researchers successfully regrow limbs on frogs. They want to do the same thing with humans

Most animals have pretty good injury repair capabilities, but when it comes to lost limbs, only a select few can regrow them. The rest, including humans, can do little to repair such injuries. But as a new study shows, with the right treatments, our bodies may be hacked and “convinced” to regrow lost limbs. Although the study focused on frogs, which are obviously very different from humans, the proof-of-concept results suggest that this approach could work on many animals, including humans.

The African clawed frog (Xenopus laevis). Image via Wiki Commons.

Limb regeneration is a new frontier in biomedical science. It’s something we’ve long considered outside the realm of possibility, restricted only to superheroes and myth, but research is bringing it closer and closer to reality.

While many things differentiate humans from frogs, neither we nor they are able to regenerate limbs. So researchers at Tufts University and Harvard University’s Wyss Institute used frogs (specifically, the African clawed frog, Xenopus laevis) as a proof of concept. X. laevis is often used in research because it is easy to handle, lays eggs throughout the year, and, for a model organism, shares a close evolutionary relationship with humans.

The researchers triggered the regrowth of a lost leg using a five-drug cocktail that they applied in a wearable silicone bioreactor dome that sealed the drugs over the stump for just 24 hours. After the treatment was administered, the regenerative process was kickstarted, and over the course of an 18-month period, the frogs regrew an almost fully functional leg.

“It’s exciting to see that the drugs we selected were helping to create an almost complete limb,” said Nirosha Murugan, research affiliate at the Allen Discovery Center at Tufts and first author of the paper. “The fact that it required only a brief exposure to the drugs to set in motion a months-long regeneration process suggests that frogs and perhaps other animals may have dormant regenerative capabilities that can be triggered into action.”

The experiment was repeated on dozens of frogs, and while not all of them regrew limbs, most did — including bone tissue and even toe-like structures at the end of the limb (though these weren’t supported by bone). It’s not a magic elixir, and the treatment is not perfect, but the drug cocktail delivered through the wearable bioreactor really does seem capable of regrowing limbs.

Regrowth of soft tissue. The MDT group (bottom) represents the five-drug cocktail treatment. Image credits: Murugan et al (2022).

The researchers essentially hacked the biological pathways that enable the growth and organization of tissue — much like in an embryo. This is why the treatment was only applied once, over the course of a day; meanwhile, other approaches involve numerous interventions over the course of the process.

“The remarkable complexity of functional limbs suggests that the fastest path toward this goal may lie in triggering native, self-limiting modules of organogenesis, not continuous micromanagement of the lengthy process at the cell and molecular levels,” the researchers write in the study. “We implemented this via a short exposure of limb amputation wounds to a wearable bioreactor containing a payload of five select biochemical factors.”

The first stage is the formation of a mass of stem cells at the end of the stump, which is then used to gradually reconstruct the limb. It’s essential that this structure is covered with the dome as quickly as possible after amputation to ensure its protection and activation, which is why the treatment would ideally be applied right after amputation.

“Mammals and other regenerating animals will usually have their injuries exposed to air or making contact with the ground, and they can take days to weeks to close up with scar tissue,” said David Kaplan, Stern Family Professor of Engineering at Tufts and co-author of the study. “Using the BioDome cap in the first 24 hours helps mimic an amniotic-like environment which, along with the right drugs, allows the rebuilding process to proceed without the interference of scar tissue.”

At first, researchers tried using the protective dome with a single drug, progesterone. Progesterone is a steroid hormone involved in the menstrual cycle, pregnancy, and embryogenesis in humans and other species. This alone triggered some limb growth, but the resulting limb was essentially a non-functional spike. Each of the other four drugs fills a different role, ranging from reducing inflammation and stopping scar-tissue formation to promoting the growth of new nerves, blood vessels, and muscles. It’s the combination of all of these together that leads to a nigh-functional limb.

Researchers note that while the limbs weren’t 100% identical to “normal” limbs, they featured digits, webbing, and detailed skeletal and muscular features. Overall, the results show the successful “kickstarting” of regenerative pathways.

The plan now is to move on to mammal research. Despite the differences between frogs and mammals, researchers say that the biggest difference lies in the “early events of wound healing” — if these early processes can be understood and replicated, then there’s no apparent reason why this couldn’t be applied to mammals, and ultimately humans as well.

“The goal of triggering latent tissue-building routines to regrow limbs in humans may be achieved by identifying and exploiting principles observed in highly regenerative organisms,” the researchers conclude.

The study was published in the journal Science Advances.

Startup turns non-recyclable plastic into building blocks

Credit: ByFusion.

Although Americans do their part and dutifully put items into their recycling bins, much of that material doesn’t actually end up recycled. According to the EPA, of the 267.8 million tons of municipal solid waste generated by Americans in 2017, only 94.2 million tons were recycled or composted. Just 8% of plastics were recycled, the same report stated.

There are many reasons for this sad state of affairs. Until recently, the U.S. exported 16 million tons of plastic, paper, and metal waste to China, essentially outsourcing much of its waste processing and passing the responsibility to other countries. Some of this waste was incinerated by China to fuel its booming manufacturing sector, releasing toxic emissions in the process, while the rest ended up in the countryside and the ocean, contaminating the water, ruining crops, and affecting human health. But since 2018, China has banned the import of most plastics and other materials that do not meet very stringent purity standards. Without China’s market for plastic waste, the U.S. recycling industry has been caught with its pants down, woefully lacking in infrastructure.

Furthermore, even if the U.S. had a good recycling infrastructure and a coherent federal strategy — recycling decision-making is currently in the hands of 20,000 communities, all of which make their own choices about whether they recycle and what gets recycled — recycling plastic would remain a major challenge due to contamination. Items placed in the wrong bin or contaminated by food can prevent large batches of material from being recycled, and as a result, a large portion of the waste placed into recycling bins has to be incinerated or discarded into landfills.

ByFusion, a startup from Los Angeles, wants to turn this problem into an opportunity. The company builds huge machines called Blockers that squeeze mounds of plastic into standard building blocks called ByBlocks. Each ByBlock is 16x8x8 inches and comes in three variations: flat, molded with pegs so they can be interlocked, or a combination of the two. According to Fast Company, ByBlocks are about 10 pounds (4.5 kg) lighter than hollow cement blocks.

Credit: ByFusion.

The world loves to use plastic because it’s cheap and highly durable. The same appealing properties are a curse when plastic reaches the end of its lifecycle. But guess where else durability and low cost are prized? That’s right, the construction industry.

Virtually any kind of plastic, with the exception of Styrofoam, can be compressed into a ByBlock. “You [can] literally eat your lunch, throw in [the leftover plastic], make a block, then stick it in the wall,” Heidi Kujawa, who founded ByFusion in 2017, told Fast Company.

The only major drawback of ByBlocks is that they’re very susceptible to degradation due to sunlight, but this can be easily circumvented by coating their surface with paint or using another weather-resistant material. This was demonstrated in the city of Boise, Idaho, where residential plastic waste (grocery bags, bubble wrap, fast-food containers, etc.) was turned into building blocks used to erect a small building in a local park.

A small building made with ByBlocks. Credit: ByFusion.
The same building after it was treated with paint and decorations. Credit: ByFusion.

Since it began operation, ByFusion has recycled over 100 tons of plastic, with the lofty goal of scaling to 100 million tons by 2030. At the moment, there’s only one full production unit in L.A., which can process 450 tons of plastic a year, but the startup has partnered with Tucson and Boise, and plans to expand in the rest of the country. The aim is to have a Blocker machine in every city in the US, where they can be integrated with existing municipal waste processing facilities or even run by corporations that want to process their waste on-site.

That’s a commendable mission but with a price tag of $1.3 million for the largest Blocker machine, many willing stakeholders may simply not be able to afford this solution. On the other hand, plastic waste has its own, often hidden, costs, so doing nothing about it may actually prove more expensive as our plastic problem compounds over time. 

China builds the world’s first artificial moon

Chinese scientists have built an ‘artificial moon’ possessing lunar-like gravity to help them prepare astronauts for future exploration missions. The structure uses a powerful magnetic field to produce the celestial landscape — an approach inspired by experiments once used to levitate a frog.

The key component is a vacuum chamber that houses an artificial moon measuring 60cm (about 2 feet) in diameter. Image credits: Li Ruilin, China University of Mining and Technology

Preparing to colonize the moon

Simulating low gravity on Earth is a complex process. Current techniques involve either flying a plane through a free-fall arc before climbing back up again or dropping experiments down a drop tower, and both provide only brief windows of reduced gravity. With the new facility, the magnetic field can be switched on or off as needed, producing zero gravity, lunar gravity, or Earth-level gravity instantly. It is also strong enough to magnetize and levitate objects against the gravitational force for as long as needed.

All of this means that scientists will be able to test equipment in the extreme simulated environment and prevent costly mistakes. This matters because the moon’s lack of atmosphere makes temperatures change quickly and dramatically, and in low gravity, rocks and dust may behave in a completely different way than on Earth, as they are more loosely bound to each other.

Engineers from the China University of Mining and Technology built the facility (which they plan to launch in the coming months) in the eastern city of Xuzhou, in Jiangsu province. At its heart, an airless vacuum chamber houses a mini “moon” measuring 60 cm (about 2 feet) in diameter. The artificial landscape consists of rocks and dust as light as those found on the lunar surface, where gravity is about one-sixth as strong as Earth’s; powerful magnets levitate them above the ground. The team plans to test a host of technologies whose primary purpose is to perform tasks and build structures on the surface of Earth’s only natural satellite.

Group leader Li Ruilin from the China University of Mining and Technology says the facility is the “first of its kind in the world” and will take lunar simulation to a whole new level, adding that the artificial moon can make gravity “disappear” for “as long as you want.”

In an interview with the South China Morning Post, the team explains that some experiments take just a few seconds, such as an impact test. Meanwhile, others like creep testing (where the amount a material deforms under stress is measured) can take several days.

Li said the facility could also be used to determine whether structures can be 3D-printed on the lunar surface, rather than deploying heavy equipment that would have to be hauled there. He continues:

“Some experiments conducted in the simulated environment can also give us some important clues, such as where to look for water trapped under the surface.”

It could also help assess whether a permanent human settlement could be built there, including issues like how well the surface traps heat.

From amphibians to artificial celestial bodies

The group explains that the idea originates from experiments by Russian-born, UK-based physicist Andre Geim, who levitated a frog with a magnet, a feat that earned him the satirical Ig Nobel Prize in 2000, which celebrates science that “first makes people laugh, and then think.” Geim also won the Nobel Prize in Physics in 2010 for his work on graphene.

The foundation of his work is a phenomenon known as diamagnetic levitation, in which an external magnetic field applied to a material induces a weak repulsion between the object and the magnets, causing it to drift away from them and ‘float’ in midair.

For this to happen, the magnetic field must be strong enough to ‘magnetize’ the atoms that make up the material. Essentially, the atoms inside the object (or frog) act as tiny magnets. A sufficiently powerful external field slightly alters the motion of the electrons orbiting each atom’s nucleus, inducing a weak magnetic field that opposes the applied one and pushes the object away from the magnets.
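To get a feel for the magnets this requires, here is the standard balance condition for diamagnetic levitation, evaluated with textbook values for water (most living tissue is similar). These are illustrative numbers, not specifications of the Xuzhou facility:

```python
import math

# A diamagnetic object levitates when the magnetic force per unit volume,
# (|chi| / mu0) * B * dB/dz, balances its weight per unit volume, rho * g.
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
g = 9.81                  # gravitational acceleration, m/s^2
chi = -9.05e-6            # volume magnetic susceptibility of water (dimensionless)
rho = 1000.0              # density of water, kg/m^3

# Required field-gradient product to levitate the object:
required = mu0 * rho * g / abs(chi)
print(f"|B * dB/dz| ~ {required:.0f} T^2/m")
# ~1360 T^2/m: in practice this calls for superconducting magnets on the
# order of 10-16 tesla, like the one used in Geim's frog experiment.
```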

Diamagnetic levitation of a tiny horse. Image credits: Pieter Kuiper / Wiki Commons.

Different substances on Earth have varying degrees of diamagnetism which affect their ability to levitate under a magnetic field; adding a vacuum, as was done here, allowed the researchers to produce an isolated chamber that mimics a microgravity environment.

However, simulating the harsh lunar environment was no easy task: the magnetic force needed is so strong it could tear apart components such as superconducting wires. It also interferes with the many metallic parts necessary for the vacuum chamber, which do not function properly near a powerful magnet.

To counteract this, the team came up with several technical innovations, including a simulated lunar dust that floats more easily in the magnetic field and the replacement of steel with aluminum in many of the critical components.

The new space race

This breakthrough signals China’s intent to take first place in the international space race. That includes its lunar exploration program (named after the mythical moon goddess Chang’e), which landed a rover on the far side of the moon in 2019 and, in 2020, brought rock samples back to Earth for the first time in over 40 years.

Next, China wants to establish a joint lunar research base with Russia, which could start as soon as 2027.  

The new simulator will help China better prepare for its future space missions. For instance, the Chang’e 5 mission, which returned in December 2020, brought back far fewer rock samples than planned because its drill hit unexpected resistance. Previous missions led by Russia and the US have run into similar issues.

Experiments conducted on a smaller prototype simulator suggested drill resistance on the moon could be much higher than predicted by purely computational models, according to a study by the Xuzhou team published in the Journal of China University of Mining and Technology. The authors hope this paper will enable space engineers across the globe (and in the future, the moon) to alter their equipment before launching multi-billion dollar missions.

The team is adamant that the facility will be open to researchers worldwide, and that includes Geim. “We definitely welcome Professor Geim to come and share more great ideas with us,” Li said.

Device harvests power from your sweaty fingers even while you sleep

There’s an untapped fuel source right at your fingertips that you probably weren’t aware of, and this device intends to harvest it. The tiny device converts sweat from your fingertips into small but useful amounts of energy, enough to power some wearable devices. It can also harvest energy from pressing motions such as typing. It is, by far, the most efficient type of on-body energy harvester ever invented.

This isn’t the first sweat-based energy system. However, previous demonstrations were pitifully inefficient, requiring a lot of energy to be expended running, biking, or doing some other kind of strenuous physical work in order to generate a small amount of electricity (usually less than 1% of the energy consumed during the task).

“Normally, you want maximum return on investment in energy. You don’t want to expend a lot of energy through exercise to get only a little energy back,” says senior author Joseph Wang, a nanoengineering professor at the University of California San Diego. “But here, we wanted to create a device adapted to daily activity that requires almost no energy investment–you can completely forget about the device and go to sleep or do desk work like typing, yet still continue to generate energy. You can call it ‘power from doing nothing.'”

Your fingertips can now power small electronics and sensors.
This image shows a small hydrogel (right) collecting sweat from the fingertip for the vitamin-C sensor (left), then displaying the result on the electrochromic display. Credit: Lu Yin.

Rather than requiring a lot of physical work or sunlight to harvest useful energy, this novel device collects 300 millijoules’ worth of energy while the body is at rest, even while you sleep. Since there is no work involved, the return on investment essentially tends to infinity.

The tiny biofuel cell (BFC), made from a carbon nanotube material and a hydrogel, produces energy from lactate, a compound found in our sweat. The foam-like bioreactor is connected to a circuit with electrodes and attached to the pad of a finger. The cell strips electrons from the lactate; these flow through the circuit as an electric current and, at the other electrode, combine with oxygen to form water.

Although it may seem odd to target the fingertips when there are other body parts that are richer in sweat, such as the armpits, this is in fact an excellent choice. The fingertips have the highest concentration of sweat glands in the human body, up to three times more than in other body parts. We likely evolved this to help us better grip things.

The reason why other body parts feel sweatier is due to their poor ventilation. In contrast, our fingers are always exposed to the air, so the sweat evaporates as it comes out, usually immediately. Rather than letting this sweat evaporate, this device collects some of it to generate usable energy.

“The size of the device is about 1 centimeter squared. Its material is flexible as well, so you don’t need to worry about it being too rigid or feeling weird. You can comfortably wear it for an extended period of time,” said first co-author Lu Yin, a nanoengineering Ph.D. student working in Wang’s lab.

Complementing the biofuel cell, the researchers also attached a small piezoelectric generator that converts mechanical energy into electricity. When you pinch the finger or perform everyday motions like typing on a keyboard, the piezoelectric generator produces additional energy. A single press of a finger once per hour requires 0.5 millijoules of energy but can produce over 30 millijoules.
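Taken together, the figures quoted above make the “power from doing nothing” claim easy to quantify. The sketch below simply restates the article’s numbers (the split between the piezoelectric and biofuel contributions isn’t detailed):

```python
# Energy bookkeeping using the figures quoted in the article.
sleep_harvest_mj = 300.0  # mJ collected from fingertip sweat while at rest
press_cost_mj = 0.5       # mJ of mechanical work for one finger press per hour
press_yield_mj = 30.0     # mJ produced by that single press

print(f"Net gain per press: {press_yield_mj - press_cost_mj:.1f} mJ")
print(f"Energy returned per unit invested: {press_yield_mj / press_cost_mj:.0f}x")
```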

“We envision that this can be used in any daily activity involving touch, things that a person would normally do anyway while at work, at home, while watching TV or eating,” said Wang. “The goal is that this wearable will naturally work for you and you don’t even have to think about it.”

Although the harvested power is tiny, it’s still enough to power some health and wellness wearable electronics such as glucose meters for people with diabetes.

“We want to make this device more tightly integrated in wearable forms, like gloves. We’re also exploring the possibility of enabling wireless connection to mobile devices for extended continuous sensing,” Yin says.

“There’s a lot of exciting potential,” says Wang. “We have ten fingers to play with.”

The findings appeared in the journal Joule.

The swarm is near: get ready for the flying microbots

Imagine a swarm of insect-sized robots capable of recording criminals for the authorities undetected or searching for survivors caught in the ruins of unstable buildings. Researchers worldwide have been quietly working toward this but have been unable to power these miniature machines — until now.

A 0.16 g microscale robot that is powered by a muscle-like soft actuator. Credit: Ren et al (2022).

Engineers from MIT have developed powerful micro-drones that can zip around with bug-like agility, which could eventually perform these tasks. Their paper in the journal Advanced Materials describes a new form of synthetic muscle (known as an actuator) that converts energy sources into motion to power these devices and enable them to move around. Their new fabrication technique produces artificial muscles, which dramatically extend the lifespan of the microbot while increasing its performance and the amount it can carry.  

In an interview with Tech Xplore, Dr. Kevin Chen, senior author of the paper, explained that they have big plans for this type of robot:

“Our group has a long-term vision of creating a swarm of insect-like robots that can perform complex tasks such as assisted pollination and collective search-and-rescue. Since three years ago, we have been working on developing aerial robots that are driven by muscle-like soft actuators.”

Soft artificial muscles contract like the real thing

Your run-of-the-mill drone relies on rigid actuators, which demand high voltage and power to move, but a robot at this miniature scale could never carry such a heavy power supply. So-called ‘soft’ actuators are a far better solution, as they’re far lighter than their rigid counterparts.

In their previous research, the team engineered microbots that could perform acrobatic movements mid-air and quickly recover after colliding with objects. But despite these promising results, the soft actuators underpinning these systems required higher voltages than any onboard source could provide, meaning a wired, external power supply had to be used to propel the devices.

“To fly without wires, the soft actuator needs to operate at a lower voltage,” Chen explained. “Therefore, the main goal of our recent study was to reduce the operating voltage.”

In this case, the device would need a soft actuator with a large surface area to produce enough power. However, it would also need to be lightweight so a micromachine could lift it.

To achieve this, the group opted for soft dielectric elastomer actuators (DEAs) made from layers of a flexible, rubber-like solid known as an elastomer, whose polymer chains are held together by relatively weak bonds, permitting it to stretch under stress.

The DEAs used in the study consist of a long strip of elastomer only 10 micrometers thick (roughly the diameter of a red blood cell) sandwiched between a pair of electrodes. These, in turn, are wound into a 20-layer ‘tootsie roll’ to expand the surface area and create a power-dense muscle that deforms when a voltage is applied, similar to how human and animal muscles contract. In this case, the contraction causes the microbot’s wings to flap rapidly.
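The physics also explains why layer thickness matters so much here (and why the team wants even thinner layers, as noted at the end of this article): the electrostatic pressure squeezing a DEA layer grows with the square of the electric field, which is voltage divided by thickness. A rough sketch with illustrative numbers, not the MIT device’s actual parameters:

```python
# Maxwell pressure on a dielectric elastomer actuator (DEA) layer:
# p = eps_r * eps0 * (V / d)^2, so a thinner layer reaches the same
# actuation pressure at a proportionally lower voltage.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage_v, thickness_m, eps_r=3.0):
    # eps_r ~ 3 is a typical elastomer value, assumed here for illustration.
    e_field = voltage_v / thickness_m
    return eps_r * EPS0 * e_field ** 2

print(maxwell_pressure(500, 10e-6))  # 500 V across a 10-um layer: ~66 kPa
print(maxwell_pressure(50, 1e-6))    # 50 V across a 1-um layer: the same ~66 kPa
```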

A microbot that acts and senses like an insect

A microscale soft robot lands on a flower. Credit: Ren et al (2022).

The result is an artificial muscle that forms the compact body of a robust microrobot that can carry nearly three times its weight (despite weighing less than one-quarter of a penny). Most notably, it can operate with 75% lower voltage than other versions while carrying 80% more payload.

They also demonstrated a 20-second hovering flight, which Chen says is the longest recorded by a sub-gram robot. The actuator was still working smoothly after 2 million cycles, far outpacing the lifespan of other models.

“This small actuator oscillates 400 times every second, and its motion drives a pair of flapping wings, which generate lift force and allow the robot to fly,” Chen said. “Compared to other small flying robots, our soft robot has the unique advantage of being robust and agile. It can collide with obstacles during flight and recover and it can make a 360 degree turn within 0.16 seconds.”

The DEA-based design introduced by the team could soon pave the way for microbots that work using untethered batteries. For example, it could inspire the creation of functional robots that blend into our environment and everyday lives, including those that mimic dragonflies or hummingbirds.

The researchers add:

“We further demonstrated open-loop takeoff, passively stable ascending flight, and closed-loop hovering flights in these robots. Not only are they resilient against collisions with nearby obstacles, they can also sense these impact events. This work shows soft robots can be agile, robust, and controllable, which are important for developing next generation of soft robots for diverse applications such as environmental exploration and manipulation.”

And while they’re thrilled about producing workable flying microbots, they hope to reduce the DEA thickness to only 1 micrometer, which would open the door to many more applications for these insect-sized robots.

Source: MIT

World’s tiniest antenna is made from DNA

Illustration of the fluorescent-based DNA antennae. Credit: Caitlin Monney.

Chemists at the Université de Montréal have devised a nano-scale antenna using synthetic DNA to monitor structural changes in proteins in real-time. It receives light in one color and, depending on the interaction with the protein it senses, transmits light back in a different color, which can be detected. The technology could prove useful in drug discovery and the development of new nanotechnologies.

DNA contains all the instructions needed for an organism to develop, survive, and reproduce. The blueprint of life is also extremely versatile thanks to the self-assembly of DNA building blocks.

Using short, synthetic strands of DNA that work like interlocking Lego bricks, scientists can make all sorts of nano-structures for more sophisticated applications than ever possible before. These include “smart” medical devices that target drugs selectively to disease sites, programmable imaging probes, templates for precisely arranging inorganic materials in the manufacturing of next-generation computer circuits, and more.

Inspired by these properties, the Canadian researchers led by chemistry professor Alexis Vallée-Bélisle have devised a DNA-based fluorescent nanoantenna that can characterize the function of proteins.

“Like a two-way radio that can both receive and transmit radio waves, the fluorescent nanoantenna receives light in one color, or wavelength, and depending on the protein movement it senses, then transmits light back in another color, which we can detect,” said Professor Vallée-Bélisle.

The receiver of the nanoantenna reacts chemically with molecules on the surface of the target proteins. The 5-nanometer-long antenna produces a distinct signal when the protein is performing a certain biological function, which can be detected based on the light released by the DNA structure.

“For example, we were able to detect, in real time and for the first time, the function of the enzyme alkaline phosphatase with a variety of biological molecules and drugs,” said co-author Scott Harroun. “This enzyme has been implicated in many diseases, including various cancers and intestinal inflammation.”

These nanoantennas can be easily tweaked to optimize their function and size for a range of applications. For instance, it’s possible to attach a fluorescent molecule to the synthesized DNA and then attach the entire setup to an enzyme, allowing you to probe its biological function. Furthermore, these crafty DNA-based machines are ready to use for virtually any research lab across the world. Vallée-Bélisle is now working on setting up a startup to bring the product to market.

“Perhaps what we are most excited by is the realization that many labs around the world, equipped with a conventional spectrofluorometer, could readily employ these nanoantennas to study their favorite protein, such as to identify new drugs or to develop new nanotechnologies,” said Vallée-Bélisle.

The findings appeared in the journal Nature Methods.

This robotic thermal bore can cut through undrillable rock without making direct contact

Swifty blasting hard rock like a hot knife through butter. Credit: Petra.

Tunneling often proves a hard nut to crack. Civil engineers tasked with making new tunnels for a highway or subway station will often encounter rocks that break even the sturdiest drill bits. When this inevitably happens, the demolition squad is called in, since only dynamite can save the day. But Petra, a startup hailing from sunny San Francisco, claims to have a better solution.

The company has developed a semi-autonomous thermal drilling robot, aptly named “Swifty”, that can bore through the hardest geologies on Earth by pulverizing rock. Rather than using mechanical drills, the robot employs a hot, high-pressure head that displaces rock without any direct contact.

In a recent demonstration, Swifty made tunnels between 18 and 60 inches (45–152 cm) in diameter, blasting through all sorts of rock. This includes a 24-inch tunnel through 20 feet of Sioux Quartzite, widely considered one of the hardest rocks on Earth and previously workable only with dynamite. The robot advanced at a rate of about an inch a minute.

“No tunneling method has been able to tunnel through this kind of hard rock until now. Petra’s achievement is due to Swifty’s thermal drilling method which efficiently bores through rock without touching it,” Ian Wright, Petra CTO and a Tesla co-founder, said in a statement.

Petra’s robot uses machine vision, an AI technique that allows it to ‘see’ and make decisions based on the obstructions it encounters. When the robot is put to work, it blasts rock with a mixture of hot gases above 1,000 °C (about 1,830 °F), breaking the rock into smaller fragments. Once the rock is broken into bits, a powerful vacuum sucks up the fragments, clearing the way for more drilling.

“Petra is able to bore through the hardest geologies on earth, enabling customers to [install] underground utilities in difficult geographical regions most at-risk for wildfires and hurricanes. In addition, we can simplify urban utility projects in cities by allowing engineers to navigate below the maze of existing grid infrastructure,” according to a company statement.

The startup’s technology is inspired by experiments performed in the 1960s by scientists at Los Alamos National Laboratory, who imagined a nuclear-powered tunneling machine that could travel through Earth’s upper mantle or even the Moon’s crust. The rock-melting drill devised there never amounted to anything, but Petra picked up where others left off and conducted its first tests in an industrial park in Oakland, California in 2018. These initial tests used a plasma torch but it was soon abandoned in favor of gas and heat, which proved a less cumbersome setup.

Although Swifty could theoretically be used successfully in boring operations for tunnels serving transportation, there are already economically feasible solutions for this industry. Instead, Petra’s product aims to make tunneling through bedrock cheap enough to provide the right incentive for utilities to bury their electricity, broadband, and other lines underground.

According to Wired, burying power lines costs at least five times more than running them above ground while hard-rock installations can cost up to 20 times more. The advantage is that maintenance costs are much lower since the cables are sheltered from the elements, something that is particularly appealing in extreme weather-prone areas. Local citizens can enjoy a nice view of their city without having to see dangling spider webs overhead.

Petra claims that Swifty’s thermal drilling can cut the cost of tunneling through bedrock by 50% to 80%, but that remains to be proven before the industry climbs aboard the Swifty train. The startup is now testing its heat-based drilling method in a variety of settings and geologies, from granite to limestone, at sites ranging from California to the Appalachian Mountains.

Cyberpunk aesthetics and concepts.

What is cyberpunk — and are we already living in it?

In its simplest form, cyberpunk is a science fiction subgenre that combines advanced, futuristic technology with societal decay. Think of a society featuring advanced artificial intelligence, cybernetics, and massive skyscrapers, but with many people living in slums, controlled and lacking social freedom. Cyberpunk isn’t only a sci-fi subgenre, though; it’s also a cultural movement that influences entertainment, design, gaming, architecture, fashion, and technology. In fact, you could argue we’re already living in a cyberpunk world.

Image credits: Raasgendor/Pixabay

Cyberpunk pairs a flashy visual style with an underlying dystopian message. It depicts a world where technological development is at its peak: artificial intelligence co-exists with humans, and people have access to robotic brains and body implants. At the same time, the social order is heavily disturbed, corrupt multinational corporations (or machines) own and control everything, crime has become an integral part of society, and most of the population has a poor standard of living.

The “high tech, low life” concept of a cyberpunk world was popularized by comics, films, anime, and books of the genre. Writers like Philip K. Dick, William Gibson, Katsuhiro Otomo, Bruce Sterling, Rudy Rucker, and many others in the 70s and 80s introduced its defining characteristics. Neon city lights, electronic music, dark streets, cyborgs, holograms, rugged and vibrant clothing, drug syndicates, cramped apartments, illegal tech markets, and a broken society: these tell-tale signs of a cyberpunk world later became symbols of the genre. Cyberpunk protagonists are typically rebels, hackers, and reluctant heroes clinging to individuality in a world where invasive control is the norm. Unsurprisingly, many see cyberpunk as more than just an artistic current; it is also a social critique.

Cyberpunk elements in the real world 

Remarkably, many famous novels, anime, and movies in the cyberpunk style from the 80s and 90s that popularized the genre are set in our current time. Ridley Scott’s iconic sci-fi flick Blade Runner shows events from 2019; Software, a critically acclaimed cyberpunk novel by Rudy Rucker, is set in the year 2020; P.D. James’ highly popular dystopian novel Children of Men takes place in 2021 (its movie adaptation in 2027); and Bruce Sterling’s thrilling sci-fi book Islands in the Net tells a dark futuristic story from the year 2023.

But cyberpunk is still going strong; we’ve just pushed the date back by a few years.

Cyberpunk-type scenery from Tokyo. Image in public domain.

Learning from cyberpunk

“Science fiction is reality ahead of schedule,” Syd Mead, concept designer for Tron and Blade Runner, once famously said. So is cyberpunk a realistic expectation of what’s to come?

Researchers have suggested in the past that technology can fuel economic inequality. Big tech companies in particular are under scrutiny here: although technology as a whole is alleviating poverty, there are fears that it could drive rampant social inequality. In addition, while making us richer, technology can also be used to control people and impose dystopian measures, as we’re already starting to see in China, for instance.

In fact, what makes cyberpunk different from other sci-fi genres is its ability to give shape to the fears associated with high technology and the perils it could bring: over-capitalism, drug addiction, gadget dependency, media oversaturation, crime, and the loss of data privacy. So while cyberpunk is a literary and artistic current, we’re definitely starting to see some of its signature trademarks in the real world.

Cyberpunk in the real world

Aesthetically, cyberpunk is distinctive in its neon urban lights. Perhaps unsurprisingly, cyberpunk scenery is becoming more and more common, as some of its underlying aspects are also creeping into our world. If we look around carefully, it’s not hard to find various cyberpunk elements around us. Here are just a few examples.

  • In a cyberpunk world, powerful multinational corporations control much of society. In the real world, multinational tech corporations like Google, Facebook, and Amazon control the web and most of our digital assets. A normal internet user may never even know if their data is sold on the dark web or their privacy is compromised on some level. Moreover, from time to time, these trillion-dollar tech companies are accused of putting their profits above democratic principles. Recently, ex-Facebook (now Meta) employee Frances Haugen told CBS in an interview: “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimize for its own interests, like making more money.” And we’re just beginning to see their influence.
  • Places like Las Vegas, Chongqing in China, Japan’s major economic centers Tokyo and Osaka, various parts of Singapore (the Golden Mile Complex), and Hong Kong (Montane Mansion and the Monster Building) are loaded with cyberpunk-ish visual aesthetics: giant neon signboards, skyscrapers, stacked apartments, dark alleys, large advertisement screens, neon-lit commercial complexes, and crowded streets. In fact, Tokyo has been the inspiration for various fictional cyberpunk cities in video games and movies.
Akira, an anime, is one of the most influential cyberpunk works of all time, featuring many of the genre’s characteristic elements (both visual and philosophical).
  • Chatbots and voice assistants like Alexa and Siri that monitor our preferences using algorithms are an example of artificial intelligence co-existing with us in the real world. Similarly, the ability of social media and online advertisements to manipulate our emotions, thoughts, and decision-making indicates how deeply technology has entered our lives.
  • The popular video game Cyberpunk 2077 features an in-game personalized virtual world called Braindance. While nothing as advanced as Braindance exists yet, present-day VR devices already let us experience virtual reality. Games and applications like Fortnite, Decentraland, Second Life, and Facebook’s newly launched Horizon Worlds are examples of virtual worlds existing within our own.

Moreover, prosthetic body parts, augmented reality applications (like the game Pokemon GO), cyberpunk-themed clothing (such as cybergoth and futuristic gothic styles), brain chips (such as Neuralink), machine learning, smart weapons, humanoids (like Sophia and Ameca), and the Internet of Things (IoT) are all real-world developments that bear a striking resemblance to elements of cyberpunk works such as Terminator, Akira, Blade Runner, Alita: Battle Angel, and Ghost in the Shell.

Cyberpunk and transhumanism  

Although many people see cyberpunk as merely an aesthetic style, as we’ve already mentioned, it carries some hardcore social critique, largely because it is built on heavy philosophical concepts.

Transhumanism is believed to be the core philosophy behind the development of the cyberpunk genre. Transhumanism is a social, philosophical, and intellectual movement that favors the invention and use of advanced innovations that can enhance human ability. Basically, transhumanists want us to evolve past our human nature using technology. Any technology capable of improving intelligence, physical strength, health, cognitive ability, memory, and lifespan of humans is part of transhumanist progress. 

Image credits: Ben Sweer/Unsplash

Transhumanist thinkers anticipate emerging technologies and examine their possible positive and negative impacts on human society. Cyberpunk writers of the 70s and 80s are likewise believed to have analyzed the influence of the internet, terrorism, drugs, computers, cybersecurity, and the sexual revolution while developing their themes. You can also see this in the protagonists of such works, who are often transhuman themselves; Ghost in the Shell’s Motoko Kusanagi is a prime example.

However, owing to its dystopian nature, most fictional works in the cyberpunk genre reveal the negative side of the transhumanist approach. Novels and films like Do Androids Dream of Electric Sheep?, Alita: Battle Angel, Cowboy Bebop, and Terminator show how advanced technologies can promote corruption, greed, and destruction, ultimately leading to a chaotic world. According to Robert M. Geraci, a professor of religious studies at Manhattan College, “cyberpunk as a genre attempts to caution against transhumanism by exposing the problematic elements of the social economy that supports it.”

Nobody wants to live in a dystopian world (especially after the pandemic), but in the coming years it will be interesting to see whether popular cyberpunk technologies such as cyborgs, laser weapons, advanced VR devices, and flying cars become a reality.

MIT unveils the world’s longest flexible fiber battery. You can weave and wash it in fabrics

Imagine a ball of yarn that could power flexible electronic devices woven into your T-shirt. That’s exactly what engineers at MIT have done, creating a rechargeable lithium-ion battery in the form of very long fiber. According to the authors of the new study, the fiber might even be used to 3D print batteries in any shape.

A toy submarine powered by the fiber battery coiled around it. Credit: MIT.

The proof of concept is 140 meters long, making it the longest flexible fiber battery thus far. This length is arbitrary though and the researchers claim the battery fiber could still provide power at much longer lengths. “We could definitely do a kilometer-scale length,” Tural Khudiyev, formerly an MIT postdoc and now an assistant professor at the National University of Singapore, said in a statement.

This is not the first time scientists have made batteries in the form of fiber. However, all previous attempts placed the lithium and other key materials outside the fiber, whereas the new system embeds the battery inside the fiber. This protective outside coating is critical for functioning flexible power supplies, providing both stability and waterproofing.

The manufacturing process uses novel battery gels along with a standard fiber-drawing system. All the components of the battery are placed in a large cylinder that is slowly heated to just below its melting point. The material is then drawn through a narrow opening, which compresses the cylinder to a fraction of its original diameter while maintaining the original arrangement of the parts. The resulting fiber is only a few hundred microns thick, much thinner than any previous attempt at a fiber battery.

MIT engineers were inspired to undertake this challenge during their research on wearable electronics. They previously made fibers that embedded LEDs, photosensors, communication systems like WiFi, and other digital systems. These components were flexible enough to be worn by users and washable. However, they all relied on an external power source, which made the wearable products impractical. It was time to turn the battery into a fiber too.

“When we embed the active materials inside the fiber, that means sensitive battery components already have a good sealing,” Khudiyev says, “and all the active materials are very well-integrated, so they don’t change their position during the drawing process.”

The fiber battery continues to power an LED even after being partially cut, showing that it doesn’t leak electrolyte or short-circuit.

To demonstrate the functionality of this proof of concept, the researchers used the fiber battery to power a “Li-Fi” communications system, the kind that uses pulses of light to transmit data rather than radio waves. The Li-Fi includes a microphone, pre-amp, transistor, and diodes. They also demonstrated the integration of LED and Li-ion batteries inside a single fiber, but more than three or four devices could be combined in the same compact space in the future.

“The beauty of our approach is that we can embed multiple devices in an individual fiber, unlike other approaches which need integration of multiple fiber devices,” said MIT postdoc Jung Tae Lee. “When we integrate these fibers containing multi-devices, the aggregate will advance the realization of a compact fabric computer.”

The 140-meter-long battery fiber has a rated energy storage capacity of 123 milliamp-hours — just enough to power a smartwatch or phone. Battery fibers could be woven to produce two-dimensional fabrics like those used for clothing, but could also be used in 3D printing to create solid objects, such as casings.
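For context, converting that capacity into stored energy is straightforward if we assume a typical Li-ion nominal voltage of around 3.7 V (the article doesn’t state the cell’s actual voltage):

```python
capacity_mah = 123  # rated capacity of the 140 m fiber, from the article
voltage_v = 3.7     # assumed nominal Li-ion voltage, not stated in the article

energy_wh = capacity_mah / 1000 * voltage_v
print(f"~{energy_wh:.2f} Wh stored along the fiber")  # roughly 0.46 Wh
```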

In one demonstration, a toy submarine was wrapped with battery fiber and could be powered on. Now, imagine incorporating the power source into the structure of the submarine — that would lower the overall weight and improve the efficiency and range of the device.

“After printing, you do not need to add anything else, because everything is already inside the fiber, all the metals, all the active materials. It’s just a one-step printing. That’s a first,” said Khudiyev.

The findings were described in the journal Materials Today.

Researchers have just taught cyborg brains how to play Pong

An international research team has grown a brain-like organoid that is capable of playing the simple video game Pong. This is the first time such a structure (which the researchers called a “cyborg brain”) has performed a goal-directed task.

Pong is one of the simplest video games. You have a paddle and a ball (in the single-player version) or two paddles and a ball (in the two-player version), and you move the paddle to keep the ball in play and bounce it to the other side — much like a real ping-pong game. For most people familiar with computer games, it’s a simple and intuitive game. But for cells in a petri dish, it’s a bit of a tougher challenge.

Researchers at the biotech startup Cortical Labs took up the challenge. They created “mini-brains” (“we think it’s fair to call them cyborg brains,” the company’s chief scientific officer said in an interview) consisting of 800,000-1,000,000 living human brain cells. They then placed these cells on top of a microelectrode array that analyzes electrical changes and monitors the activity of the “brain.”

Electrical signals are also sent to the brain to tell it where the ball is located and how fast it is coming. It was taught to play the game just like humans: by playing the game repeatedly and by being offered feedback (in this case, in the form of electrical signals to electrodes).
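In outline, the training loop is a closed feedback cycle: stimulate the culture with the ball’s position, read out motor activity, move the paddle, then reward or punish. The toy simulation below is our own sketch of that idea, with hypothetical stand-ins for the culture and the game; it is not Cortical Labs’ actual protocol or code:

```python
import random

class CultureStub:
    """Stand-in for the neuron culture: stimulated with the ball's position,
    read out for paddle motion, and given feedback after every rally."""
    def __init__(self):
        self.gain = 0.0    # how strongly sensed input drives motor output
        self.sensed = 0.0
    def stimulate(self, ball_y, paddle_y):
        self.sensed = ball_y - paddle_y  # crude encoding of ball position
    def read_motor(self):
        return self.gain * self.sensed + random.uniform(-0.2, 0.2)
    def feedback(self, hit):
        # Reward hits, punish misses: feedback gradually shapes the readout.
        self.gain = min(1.0, self.gain + 0.05) if hit else max(0.0, self.gain - 0.01)

culture, paddle_y, hits = CultureStub(), 0.0, 0
for rally in range(2000):
    ball_y = random.uniform(-1, 1)
    culture.stimulate(ball_y, paddle_y)
    paddle_y = max(-1.0, min(1.0, paddle_y + culture.read_motor()))
    hit = abs(paddle_y - ball_y) < 0.5
    culture.feedback(hit)
    hits += hit
print(f"hit rate over 2000 rallies: {hits / 2000:.0%}")  # climbs as gain is learned
```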

It took about five minutes to learn the game. While the cyborg brain wasn’t quite as skilled as a human would be, it was able to learn the game faster than some AIs, the researchers say.

The fact that it was able to learn so quickly is a real stunner, but this is just the beginning. It’s the first time this type of brain-like structure was able to achieve something like this, and it could be a real step towards a true, advanced cyborg brain.

“Integrating neurons into digital systems to leverage their innate intelligence may enable performance infeasible with silicon alone, along with providing insight into the cellular origin of intelligence,” the researchers write in the study.

The researchers say their work could lead to improvements in the design of intelligent machines or in therapies targeting the brain. For now, as exciting as this achievement is, it’s still hard to say what it will amount to.

The study was published as a preprint and has not yet been peer-reviewed. Journal reference: Brett J. Kagan et al, In vitro neurons learn and exhibit sentience when embodied in a simulated game-world, bioRxiv (2021). DOI: 10.1101/2021.12.02.471005.

Better than Photoshop: AI synthesizes and edits complex images from a text description — and they’re mind-bogglingly good

Text-to-image synthesis generates images from natural language descriptions. You imagine some scenery or action, describe it through text, and then the AI generates the image for you from scratch. The image is unique and can be thought of as a window into machine ‘creativity’, if you can call it that. This field is still in its infancy and while previously such models were buggy and not all that impressive, the state of the art recently showcased by researchers at OpenAI is simply stunning. Frankly, it’s also a bit scary considering the abuse potential of deepfakes.

Imagine “a surrealist dream-like oil painting by Salvador Dali of a cat playing checkers”, “a futuristic city in synthwave style”, or “a corgi wearing a red bowtie and purple party hat”. What would these pictures look like? Perhaps if you were an artist, you could make them yourself. But the AI models developed at OpenAI, an AI research startup founded by Elon Musk and other prominent tech gurus, can generate photorealistic images almost immediately.

The images featured below speak for themselves.

“We observe that our model can produce photorealistic images with shadows and reflections, can compose multiple concepts in the correct way, and can produce artistic renderings of novel concepts,” the researchers wrote in a paper posted on the preprint server arXiv.

In order to achieve photorealism from free-form text prompts, the researchers applied guided diffusion models. Diffusion models work by corrupting the training data, progressively adding Gaussian noise until the details are wiped out and the data becomes pure noise, and then training a neural network to reverse this corruption process. Their advantage over other image synthesis models lies in their high sample quality: human judges find the generated images or audio hard to distinguish from the real thing.
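For readers who want the gist in code, here is a minimal DDPM-style training sketch of that corrupt-then-denoise idea (the standard formulation, not OpenAI’s exact implementation):

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal retention

def noisy_sample(x0, t):
    """Corrupt x0 to step t: x_t = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*noise."""
    eps = torch.randn_like(x0)
    a = alphas_bar[t]
    return a.sqrt() * x0 + (1 - a).sqrt() * eps, eps

def training_loss(model, x0):
    """The network is trained to predict the noise that was added."""
    t = int(torch.randint(0, T, (1,)))
    x_t, eps = noisy_sample(x0, t)
    return ((model(x_t, t) - eps) ** 2).mean()

# Smoke test with a dummy 'model' that always predicts zero noise:
print(training_loss(lambda x, t: torch.zeros_like(x), torch.randn(1, 3, 64, 64)))
```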

The computer scientists at OpenAI first trained a 3.5 billion parameter diffusion model that contains a text encoder to condition the image content on natural language descriptions. Next, they compared two distinct techniques for guiding diffusion models towards text prompts: CLIP guidance and classifier-free guidance. Using a combination of automated and human evaluations, the study found classifier-free guidance yields the highest-quality images.
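Classifier-free guidance, the technique that won out, is simple enough to state in a few lines: the model is evaluated twice per denoising step, once with the text prompt and once without, and the difference between the two predictions is amplified. A sketch of the generic formulation (the function and argument names here are ours):

```python
def classifier_free_guidance(model, x_t, t, text_emb, scale=3.0):
    """Amplify the prompt's effect: eps = eps_u + s * (eps_c - eps_u)."""
    eps_cond = model(x_t, t, text_emb)  # noise prediction given the prompt
    eps_uncond = model(x_t, t, None)    # noise prediction with an empty prompt
    return eps_uncond + scale * (eps_cond - eps_uncond)
```

Higher guidance scales push samples closer to the prompt at the cost of diversity, which is why the human evaluations mattered in choosing between the two guidance strategies.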

While these diffusion models are perfectly capable of synthesizing high-quality images from scratch, producing convincing images from very complex descriptions can be challenging. This is why the model was also equipped with editing capabilities in addition to “zero-shot generation”: given a text description and an existing image, the model edits and paints over it. Edits match the style and lighting of the surrounding content, so it all feels like an automated Photoshop. This hybrid system is known as GLIDE, or Guided Language to Image Diffusion for Generation and Editing.

For instance, inputting a text description like “a girl hugging a corgi on a pedestal” along with an existing image of a girl hugging a dog will prompt GLIDE to cut out the original dog and paint a corgi in its place.

Besides inpainting, the diffusion model is able to produce its own illustrations in various styles, such as the style of a particular artist, like Van Gogh, or the style of a specific painting. GLIDE can also compose concepts like a bowtie and birthday hat on a corgi, all while binding attributes, such as color or size, to these objects. Users can also make convincing edits to existing images with a simple text command.

Of course, GLIDE is not perfect. The examples above are success stories, but the study had its fair share of failures. Certain prompts that describe highly unusual objects or scenarios, such as a car with triangular wheels, will not produce satisfying results. Diffusion models are only as good as their training data, so imagination is still very much in the human domain — for now at least.

The code for GLIDE has been released on GitHub.