Tag Archives: nanobots

People who can’t form images in their mind have a surprising trait — they’re harder to spook with words

Are you easily spooked? Then you probably don’t have aphantasia, the inability to form images in one’s mind. New research suggests that people with aphantasia show a reduced response to scary stories, pointing to a much stronger link between emotions and imagery than we previously assumed.

Image credits Shah Zaman Khan via Pixabay.

A study that pitted people against (made-up) distressing scenarios found that participants with aphantasia didn’t have much of a physical fear response to these situations, whereas the other participants did. The team says this is “the biggest difference” we’ve yet found between people with aphantasia and those without.

Fantasy no-fly zone

“These two sets of results suggest that aphantasia isn’t linked to reduced emotion in general, but is specific to participants reading scary stories,” says Professor Joel Pearson, senior author on the paper and Director of UNSW Science’s Future Minds Lab. “The emotional fear response was present when participants actually saw the scary material play out in front of them.”

“[This] suggests that imagery is an emotional thought amplifier. We can think all kinds of things, but without imagery, the thoughts aren’t going to have that emotional ‘boom’.”

“Aphantasia is neural diversity,” he adds. “It’s an amazing example of how different our brain and minds can be.”

The team measured each participant’s fear response through the changes in conductivity levels of their skin. This is influenced by how much a person sweats, and sweating is a physical reaction to states of fear or stress. It’s a commonly-used method of gauging an individual’s emotional state in psychology.
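For the curious, a common way to analyze such a signal is to summarize each participant’s trace as a trend over time and then compare the groups. The sketch below uses made-up numbers and is only an illustration of that idea, not the study’s actual analysis pipeline:

```python
import numpy as np

def conductance_slope(trace_microsiemens, sampling_rate_hz=4.0):
    """Fit a straight line to a skin-conductance trace and return its slope
    (microsiemens per second), a crude summary of how much arousal rose."""
    t = np.arange(len(trace_microsiemens)) / sampling_rate_hz
    slope, _intercept = np.polyfit(t, trace_microsiemens, 1)
    return slope

# Hypothetical one-minute traces sampled at 4 Hz (240 samples each)
rng = np.random.default_rng(0)
visualizer = 2.0 + 0.01 * np.arange(240) + rng.normal(0, 0.05, 240)  # climbs as the story darkens
aphantasic = 2.0 + rng.normal(0, 0.05, 240)                          # stays roughly flat

print(f"visualizer slope: {conductance_slope(visualizer):.4f} µS/s")  # clearly positive
print(f"aphantasic slope: {conductance_slope(aphantasic):.4f} µS/s")  # near zero
```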

The study involved 46 participants, 22 of whom had aphantasia. Each participant was led to a darkened room, where they were seated and had electrodes applied to their skin. That’s already kind of spooky, but then the participants were left alone, the lights were turned off completely, and stories were displayed as text on a screen in front of them.

In the beginning these were quite mundane, starting with scenarios such as “you are at the beach, in the water” or “you’re on a plane, by the window”. As they progressed, however, suspense was slowly mixed in. The participants were told of “dark flashes in the distant waves”, of “people on the beach pointing”, or the aircraft’s “cabin lights dimming” as the vehicle started to shake.

“Skin conductivity levels quickly started to grow for people who were able to visualize the stories,” says Prof Pearson. “The more the stories went on, the more their skin reacted.”

“But for people with aphantasia, the skin conductivity levels pretty much flatlined.”

Later on, the team also performed a control round in which the text stories were replaced with a series of scary or disturbing images, like a photo of a cadaver or a snake baring its fangs. This was meant to check whether the differences in response seen in the study were caused by aphantasia rather than by each participant’s threshold for fear. This time, the authors note, all participants showed a roughly equal physical response to the images.

According to Prof. Pearson, this is “the strongest evidence yet that mental imagery plays a key role in linking thoughts and emotions”, and “by far the biggest difference we’ve found between people with aphantasia and the general population” to date.

Aphantasia affects an estimated 2-5% of the population, but it’s still very poorly understood. It seems to be associated with wide-ranging changes in other cognitive processes as well, most notably remembering, dreaming, and imagining. That’s not surprising, given that these activities often involve picturing events in your mind.

“This work may provide a potential new objective tool which could be used to help to confirm and diagnose aphantasia in the future,” says study co-author Dr Rebecca Keogh, a postdoctoral fellow formerly of UNSW and now based at Macquarie University. The work also “supports aphantasia as a unique, verifiable phenomenon”. The authors say they got the idea for this study after noticing that many members of aphantasia discussion boards mentioned they don’t enjoy reading fiction.

Still, the team underscores that their results are based on averages, and that not every individual with aphantasia will experience it the same way.

“Aphantasia comes in different shapes and sizes,” says Prof. Pearson. “Some people have no visual imagery, while other people have no imagery in one or all of their other senses. Some people dream while others don’t.”

“So don’t be concerned if you have aphantasia and don’t fit this mould. There are all kinds of variations to aphantasia that we’re only just discovering.”

The paper “The critical role of mental imagery in human emotion: insights from fear-based imagery and aphantasia” has been published in the journal Proceedings of the Royal Society B: Biological Sciences.

Scientists observe nanobots coordinating inside a living host for the first time

Nanobots have the potential to revolutionize fields from material engineering to medicine. But first, we have to figure out how to build them and make them work. A new paper reports a confident step toward that goal: for the first time, researchers have observed the collective behavior of autonomous nanobots inside a living host.

A schematic of a molecular planetary gear, an example of nanomachinery. Image via Wikimedia.

The range of tasks that nanobots could handle is, in theory, incredible. Needless to say, there’s a lot of interest in making such machines a reality. For now, however, they’re still in the research and development phase, with particular interest in tailoring them for biomedical applications. Nanobots that use our body’s own enzymes as fuel are currently among the most promising systems in this regard, and a new paper reports on how they behave inside a living host.

March of the Machines

“The fact of having been able to see how nanorobots move together, like a swarm, and of following them within a living organism, is important, since millions of them are needed to treat specific pathologies such as, for example, cancer tumors,” says Samuel Sánchez, principal investigator at the Institute for Bioengineering of Catalonia (IBEC).

Nanobots are machines built at the nano-scale, where things are measured in millionths of a millimeter. They’re intended to be able to move and perform certain tasks by themselves, usually in groups. Being so small, however, actually seeing them go about their business — and thus, checking if they work as intended — isn’t very easy.

That’s why the IBEC team, together with members from the Radiochemistry & Nuclear Imaging Lab at the Center of Cooperative Investigation of Biomaterials (CIC biomaGUNE) in Spain, set out to observe these bots working inside the bladders of living mice using radioactive isotope labeling. This is the first time researchers have successfully tracked nanobots in vivo using Positron Emission Tomography (PET).

For the study, the team started with in vitro (in the lab) experiments, where they monitored the robots using both optical microscopy and PET. Both techniques allowed them to see how these nanoparticles interacted with different fluids and how they were able to collectively migrate following complex paths.

The next step involved injecting these bots into the bloodstream and, finally, the bladders of living mice. The machines were coated in urease, an enzyme that lets them use the urea in urine as fuel. The team reports that the bots swam collectively, which induced currents in the fluid inside the animals’ bladders. The nanomachines also spread evenly throughout the bladders, the team adds, which indicates that they were coordinating as a group.

“Nanorobots show collective movements similar to those found in nature, such as birds flying in flocks, or the orderly patterns that schools of fish follow,” explains Samuel Sánchez, ICREA Research Professor at IBEC.

“We have seen that nanorobots that have urease on the surface move much faster than those that do not. It is, therefore, a proof of concept of the initial theory that nanorobots will be able to better reach a tumor and penetrate it,” says Jordi Llop, principal investigator at CIC biomaGUNE.

The findings showcase how nanomachines can come together and coordinate as a group, even one with millions of members, both in the lab and in living organisms. It might not sound like much, but confirming that these machines can interact the way we intend is a very important milestone in their development. It also goes a long way toward proving that their activity can be monitored, even in living organisms, which is a prerequisite for eventually using them to treat human patients.

“This is the first time that we are able to directly visualize the active diffusion of biocompatible nanorobots within biological fluids in vivo. The possibility to monitor their activity within the body and the fact that they display a more homogeneous distribution could revolutionize the way we understand nanoparticle-based drug delivery and diagnostic approaches,” says Tania Patiño, co-corresponding author of the paper.

One of the uses the team already envisions for similar nanobots is that of delivering drugs in tissues or organs where their diffusion would be hampered, either by a viscous substance (such as in the eye) or by poor vascularization (such as in the joints).

The paper “Swarming behavior and in vivo monitoring of enzymatic nanomotors within the bladder” has been published in the journal Science Robotics.

New class of actuators gives nanobots legs (that work)

A new paper brings us one step closer to creating swarms of tiny, mobile robots.

Artist’s rendition of an array of microscopic robots.
Image credits Criss Hohmann.

Science fiction has long envisioned sprawling masses of tiny robots performing tasks from manufacturing and medicine to combat, with the most extreme example being the Grey Goo. We’re nowhere near that point yet, but we’re making progress.

A new paper describes the development of a novel class of actuators (devices that can generate motion) that is compatible with current electronics. These actuators are tiny and bend when stimulated with a laser, making them ideal for powering extremely small robots. The lack of a proper means of movement has been a severe limitation on our efforts to design very small robots so far, the team explains.

Finding their legs

“What this work shows is proof of concept: we can integrate electronics on a [tiny] robot. The next question is what electronics should you build. How can we make them programmable? What can they sense? How do we incorporate feedback or on-board timing?” lead author Marc Miskin, assistant professor of electrical and systems engineering at the University of Pennsylvania, told me in an email.

“The good news is semiconductor electronics gives us a lot of developed technology for free. We’re working now to put those pieces together to build our next generation of microscopic robots.”

Actuators are the rough equivalent of engines. Although they rarely use the same principles, they’re both meant to do physical work (a motion that can be used to perform a certain task). The lack of an adequate actuator, both in regards to size and compatibility with our current electronics, has hampered advances into teeny-tiny robots.

Marc and his team hope to finally offer a solution to this problem. The actuators they developed are small enough to power the legs of robots under 0.1 mm in size (about the width of a strand of human hair). The devices are compatible with silicon-based circuitry, so no special adaptations are needed to work with them in most settings.

These actuators bend in response to a laser pulse to create a walking motion; power, in this case, was supplied by onboard photovoltaics (solar panels). As for the sizes involved here: the team reports that they can fit over one million of their robots on a 4-inch wafer of silicon.
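A quick back-of-the-envelope check makes that million-robot figure plausible. The per-robot footprint below is an assumption made purely for illustration, not a number from the paper:

```python
import math

wafer_diameter_mm = 4 * 25.4                 # a 4-inch wafer is roughly 100 mm across
wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2

robot_footprint_mm2 = 0.1 * 0.05             # assumed ~0.1 mm x 0.05 mm per robot (illustrative)

print(f"{wafer_area_mm2 / robot_footprint_mm2:,.0f} robots per wafer")
# prints roughly 1.6 million, consistent with the team's 'over one million' figure
```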

Given that the proof-of-concept robots are surprisingly robust, very resistant to acidity, and small enough to go through a hypodermic (syringe) needle, one particularly exciting possibility is to use them for medical applications or simple biomonitoring in human and animal patients — just like in the movies. I’ve asked Marc what other potential applications they’re excited for, and the possibilities do indeed seem endless:

“We’re thinking about applications in manufacturing (can you use them to form or shape materials at the microscale?), repairing materials (can you fix defects to increase material lifespan?), and using them as mobile sensors (can you send robots into say cracks in a rock or deep in a chemical reactor to make measurements and bring data back).”

However, he’s under no illusions that this will be an easy journey. “These are of course long term goals: right now all our robots can do is walk,” he notes.

Technology and know-how, however, have a way of compounding once released into ‘the wild’ of our economies. The advent of appropriate actuators might just be the nudge needed to walk us into a series of rapid improvements on nanomachines. And I, for one, couldn’t be more excited.

The paper “Electronically integrated, mass-manufactured, microscopic robots” has been published in the journal Nature.

DNA nanobots deliver drugs in living cockroaches – it’s a computer, inside a cockroach

The future is here. Nano-sized entities made of DNA that are able to perform the same kind of logic operations as a silicon-based computer have been introduced into a living animal.

Artistic depiction of nanobots. Via ProTV.

It’s every science fiction fan’s dream come true. The tiny DNA computers are called origami robots because they work by folding and unfolding strands of DNA; they travel around the insect’s body and interact with each other, as well as with the insect’s cells. When they unfold, just like a complex origami, they dispense the drugs they carry.

“DNA nanorobots could potentially carry out complex programs that could one day be used to diagnose or treat diseases with unprecedented sophistication,” says Daniel Levner, a bioengineer at the Wyss Institute at Harvard University.

DNA computing sounds like science fiction, but it’s not exactly a novelty: it has been researched and developed for over a decade now. DNA computing uses DNA, biochemistry, and molecular biology to perform computations, much like you would use a traditional silicon microprocessor. DNA also has a remarkable property that makes it even more useful for this kind of technique: it unravels into two complementary strands when it meets a certain protein, making it ideal for delivering substances inside a body. When the molecule opens up, it “delivers the package”.
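To make the “logic operations” idea a bit more tangible, here is a toy model in the spirit of strand-displacement logic, where a payload is released only when the right input strands are present. This is a deliberately simplified sketch of the general principle, with made-up sequences; it is not how the origami robots described in the paper are actually built.

```python
def complement(strand):
    """Return the Watson-Crick complement of a DNA sequence, read in reverse."""
    pairs = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(pairs[base] for base in reversed(strand))

def and_gate(input_a, input_b, gate_a="ATCG", gate_b="GGTA", payload="drug released"):
    """Toy AND gate: the payload comes out only if both inputs bind their gate strands,
    i.e. each input is the exact complement of its gate sequence."""
    if input_a == complement(gate_a) and input_b == complement(gate_b):
        return payload
    return "gate stays closed"

print(and_gate("CGAT", "TACC"))   # both inputs match  -> "drug released"
print(and_gate("CGAT", "AAAA"))   # one input is wrong -> "gate stays closed"
```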

DNA computing nanobots with the same computing power as an 80s computer injected into cockroaches. Simply mindblowing! Image credits: Daly and Newton/Getty Images.

Researchers injected different nanobots into cockroaches, labeling them with different fluorescent markers so they could follow and analyze how robot combinations affect where substances are delivered. The accuracy of this technique is similar to that of a computer system.

“This is the first time that biological therapy has been able to match how a computer processor works,” says co-author Ido Bachelet of the Institute of Nanotechnology and Advanced Materials at Bar Ilan University. “Unlike electronic devices, which are suitable for our watches, our cars or phones, we can use these robots in life domains, like a living cockroach,” says Ángel Goñi Moreno of the National Center for Biotechnology in Madrid, Spain. “This opens the door for environmental or health applications.”

DNA has already been used to store large amounts of information and to build circuits that amplify chemical signals, but those achievements pale next to the origami robots. The number of nanobots that were successfully used, and their impressive accuracy, are extremely promising.

 “The higher the number of robots present, the more complex the decisions and actions that can be achieved. If you reach a certain threshold of capability, you can perform any kind of computation. In this case, we have gone past that threshold,” he says.

The team believes they will soon be able to scale the computing power up to that of an 8-bit computer, the equivalent of a machine from the ’80s. That may not seem very impressive at first glance, but remember, this is a computer made from DNA that serves a very specific purpose, so it’s actually more than enough.

The most obvious application would be cancer treatment, since such treatments must be cell-specific, and the lack of cell-targeting is one of the main problems with current therapies. The main hurdle, however, is that any such treatment has to somehow overcome the immune response mounted by the host. Basically, your immune system will sense the nanobots as foreign bodies and try to fight them. But scientists believe they can overcome even that problem: Bachelet is confident that the team can enhance the robots’ stability so that they can survive in mammals.

“There is no reason why preliminary trials on humans can’t start within five years,” he says.

 

Photograph of nanobots killing off cancer

Image: nanobots (the black dots) attacking cancer cells.

Take a really good look at this picture; you may just be looking at the very thing that will defeat cancer. The black dots are nanobots, delivering a killing blow to the cancerous cells, and only to those cells. According to Mark Davis, head of the research team that created the nanobot anti-cancer army at the California Institute of Technology, the technology “sneaks in, evades the immune system, delivers the siRNA, and the disassembled components exit out”.

According to the study published in Nature, you can use as many of these nanobots as you wish, and they’ll keep on raiding and killing the cancerous cells and stopping tumours.

“The more [they] put in, the more ends up where they are supposed to be, in tumour cells.”

The technology has its roots in RNA interference (RNAi), a discovery that earned Andrew Fire and Craig Mello the Nobel Prize in Physiology or Medicine in 2006.

“RNAi is a new way to stop the production of proteins,” says Davis. What makes it such a potentially powerful tool, he adds, is the fact that its target is not a protein. The vulnerable areas of a protein may be hidden within its three-dimensional folds, making it difficult for many therapeutics to reach them. In contrast, RNA interference targets the messenger RNA (mRNA) that encodes the information needed to make a protein in the first place. “In principle,” says Davis, “that means every protein now is druggable because its inhibition is accomplished by destroying the mRNA. And we can go after mRNAs in a very designed way given all the genomic data that are and will become available.”

This is just the first demonstration Caltech has performed, but it went off without a hitch, and the results are quite promising. We’ll keep our fingers crossed.

Immortality – just 20 years away

Image: Ray Kurzweil.

Raymond Kurzweil is one of the most prolific inventors and futurists; he developed text-to-speech synthesis, music synthesizers, and even software that writes poetry, among other things. He has also predicted new technologies and directions our society would take, and he has often been right.

Now, the 61-year-old claims that, considering the rate at which our understanding of our genes and of nanotechnology is advancing, scientists should be able to replace many (if not all) of our vital organs roughly 20 years from now. When asked whether that isn’t a bit of wishful thinking, he replied that neural implants and artificial pancreases already exist, and the gap is not as big as it seems.

Here’s what he said:

“I and many other scientists now believe that in around 20 years we will have the means to reprogramme our bodies’ stone-age software so we can halt, then reverse, ageing. Then nanotechnology will let us live for ever. Ultimately, nanobots will replace blood cells and do their work thousands of times more effectively.

Within 25 years we will be able to do an Olympic sprint for 15 minutes without taking a breath, or go scuba-diving for four hours without oxygen. Heart-attack victims – who haven’t taken advantage of widely available bionic hearts – will calmly drive to the doctors for a minor operation as their blood bots keep them alive.

Nanotechnology will extend our mental capacities to such an extent we will be able to write books within minutes. If we want to go into virtual-reality mode, nanobots will shut down brain signals and take us wherever we want to go. Virtual sex will become commonplace. And in our daily lives, hologram-like figures will pop up in our brain to explain what is happening. So we can look forward to a world where humans become cyborgs, with artificial limbs and organs.”

Find some other predictions he made here.

Now, I am in no way the person to say whether these developments will or won’t take place, but let’s say they become reality. Is that all good, partially good, or partially bad? Overpopulation is a big problem as it is, so I really have a hard time figuring that out too. What do you think?

P.S. We had some technical problems and a partial article got published – sorry for that, and we appreciate your understanding.