Tag Archives: computing

These hard-bodied robots can reproduce, learn and evolve autonomously

Where biology and technology meet, evolutionary robotics is spawning automatons that evolve in real time and space. The field's foundation, evolutionary computing, sees robots possessing a virtual genome ‘mate’ to ‘reproduce’ improved offspring in response to complex, harsh environments.

Image credits: ARE.

Hard-bodied robots are now able to ‘give birth’

Robots have changed a lot over the past 30 years, already capable of replacing their human counterparts in some cases — in many ways, robots are already the backbone of commerce and industry. Performing a flurry of jobs and roles, they have been miniaturized, mounted, and molded into mammoth proportions to achieve feats way beyond human abilities. But what happens when unstable situations or environments call for robots never seen on earth before?

For instance, we may need robots to clean up a nuclear meltdown deemed unsafe for humans, explore an asteroid in orbit or terraform a distant planet. So how would we go about that?

Scientists could guess what the robot may need to do, running untold computer simulations based on realistic scenarios that the robot could be faced with. Then, armed with the results from the simulations, they can send the bots hurtling into uncharted darkness aboard a hundred-billion dollar machine, keeping their fingers crossed that their rigid designs will hold up for as long as needed.

But what if there were a better alternative? What if a type of artificial intelligence could take lessons from evolution to generate robots that adapt to their environment? It sounds like something from a sci-fi novel — but it’s exactly what a multi-institutional team in the UK is currently doing in a project called Autonomous Robot Evolution (ARE).

Remarkably, they’ve already created robots that can ‘mate’ and ‘reproduce’ progeny with no human input. What’s more, using the evolutionary theory of variation and selection, these robots can optimize their descendants depending on a set of activities over generations. If viable, this would be a way to produce robots that can autonomously adapt to unpredictable environments – their extended mechanical family changing along with their volatile surroundings.

“Robot evolution provides endless possibilities to tweak the system,” says evolutionary ecologist and ARE team member Jacintha Ellers. “We can come up with novel types of creatures and see how they perform under different selection pressures.” The approach offers a way to explore evolutionary principles by setting up an almost infinite number of “what if” questions.

What is evolutionary computation?

In computer science, evolutionary computation is a family of algorithms inspired by biological evolution, in which candidate solutions are generated and continually “evolved”. Each new generation removes the less desirable solutions and introduces small adaptive changes, or mutations, to produce a cyber version of survival of the fittest. It’s a way to mimic biological evolution, resulting in the best version of the robot for its current role and environment.
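
To make the idea concrete, here is a minimal sketch of such an evolutionary loop in Python: candidates are scored, the weakest are discarded, and mutated copies of the survivors fill the next generation. The bit-string genome, toy fitness function and parameters are illustrative stand-ins, not ARE's actual representation.

```python
# A minimal evolutionary loop: generate candidates, score them, keep the
# fittest, and mutate them to form the next generation. Illustrative only.
import random

GENOME_LEN = 16
POP_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.05

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Toy objective: count of 1s. A real system would score task performance.
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]        # selection: drop the weakest half
    children = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children              # next generation

print("best fitness:", fitness(max(population, key=fitness)))
```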

Virtual robot. Image credits: ARE.

Evolutionary robotics begins at ARE in a facility dubbed the EvoSphere, where newly assembled baby robots download an artificial genetic code that defines their bodies and brains. This is where two parent robots come together to mingle virtual genomes and create improved young that incorporate both their genetic codes.

The newly evolved offspring is built autonomously via a 3D printer, after which a mechanical assembly arm translating the inherited virtual genomic code selects and attaches the specified sensors and means of locomotion from a bank of pre-built components. Finally, the artificial system wires up a Raspberry Pi computer acting as a brain to the sensors and motors – software is then downloaded from both parents to represent the evolved brain.

1. Artificial intelligence teaches newborn robots how to control their bodies

In most animal species, newborns undergo brain development and learning to fine-tune their motor control. This process is even more intense for these robotic infants because of breeding between different species. For example, a parent with wheels might procreate with another possessing a jointed leg, resulting in offspring with both types of locomotion.

But the inherited brain may struggle to control the new body, so as part of the learning stage an algorithm refines the brain over a few trials in a simplified environment. If the synthetic babies can master their new bodies, they can proceed to the next phase: testing.
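
As a rough illustration of that learning stage, the sketch below nudges an inherited set of controller parameters at random and keeps only the changes that improve a dummy trial score; the controller, the trial() function and the step sizes are assumptions made for the example, not ARE's actual learning algorithm.

```python
# Trial-and-error refinement of an inherited controller: random tweaks are
# kept only if they improve performance in a (stand-in) trial environment.
import random

def trial(params):
    # Stand-in for running the robot in a simplified environment; a real
    # trial would measure locomotion or task success, not distance to a target.
    target = [0.3, -0.2, 0.8, 0.1]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def refine(inherited_params, trials=30, step=0.1):
    best, best_score = list(inherited_params), trial(inherited_params)
    for _ in range(trials):
        candidate = [p + random.uniform(-step, step) for p in best]
        score = trial(candidate)
        if score > best_score:            # keep the change only if it helps
            best, best_score = candidate, score
    return best, best_score

params, score = refine([0.0, 0.0, 0.0, 0.0])
print("refined score:", round(score, 3))
```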

2. Selection of the fittest – who can reproduce?

For testing, ARE uses a specially built, inert nuclear reactor housing, where young robots must identify and clear radioactive waste while avoiding various obstacles. After the task, the system scores each robot according to its performance, and those scores determine who will be permitted to reproduce.

Real robot. Image credits: ARE.

Software simulating reproduction then takes the virtual DNA of two parents and performs genetic recombination and mutation to generate a new robot, completing the ‘circuit of life.’ Parent robots can either remain in the population, have more children, or be recycled.
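
A bare-bones sketch of that recombination step might look like the following, with a flat list of numbers standing in for the virtual DNA; the single-point crossover and the mutation rate are illustrative choices, not details taken from the ARE system.

```python
# Two-parent recombination followed by mutation on a stand-in 'virtual genome'.
import random

def crossover(parent_a, parent_b):
    point = random.randint(1, len(parent_a) - 1)   # single-point crossover
    return parent_a[:point] + parent_b[point:]

def mutate(genome, rate=0.05):
    # with small probability, replace a gene with a fresh random value
    return [random.random() if random.random() < rate else g for g in genome]

parent_a = [random.random() for _ in range(12)]
parent_b = [random.random() for _ in range(12)]
child = mutate(crossover(parent_a, parent_b))
print(child)
```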

Evolutionary roboticist and ARE researcher Guszti Eiben says this sped-up evolution has a key advantage: “Robotic experiments can be conducted under controllable conditions and validated over many repetitions, something that is hard to achieve when working with biological organisms.”

3. Real-world robots can also mate in alternative cyberworlds

In her article for the New Scientist, Emma Hart, ARE member and professor of computational intelligence at Edinburgh Napier University, writes that by “working with real robots rather than simulations, we eliminate any reality gap. However, printing and assembling each new machine takes about 4 hours, depending on the complexity of its skeleton, so limits the speed at which a population can evolve. To address this drawback, we also study evolution in a parallel, virtual world.”

This parallel universe entails the creation of a digital version of every mechanical infant in a simulator once mating has occurred, which enables the ARE researchers to build and test new designs within seconds, identifying those that look workable.

Their cyber genomes can then be prioritized for fabrication into real-world robots, allowing virtual and physical robots to breed with each other, adding to the real-life gene pool created by the mating of two material automatons.

The dangers of self-evolving robots – how can we stay safe?

A robot fabricator. Image credits: ARE.

Even though this program is brimming with potential, Professor Hart cautions that progress is slow, and furthermore, there are long-term risks to the approach.

“In principle, the potential opportunities are great, but we also run the risk that things might get out of control, creating robots with unintended behaviors that could cause damage or even harm humans,” Hart says.

“We need to think about this now, while the technology is still being developed. Limiting the availability of materials from which to fabricate new robots provides one safeguard.” Therefore: “We could also anticipate unwanted behaviors by continually monitoring the evolved robots, then using that information to build analytical models to predict future problems. The most obvious and effective solution is to use a centralized reproduction system with a human overseer equipped with a kill switch.”

A world made better by robots evolving alongside us

Despite these concerns, she counters that even though some applications, such as interstellar travel, may seem years off, the ARE system may have more immediate uses. And as climate change reaches dangerous proportions, it is clear that robot manufacturers need to become greener. She proposes that they could reduce their ecological footprint by using the system to build novel robots from sustainable materials that operate at low energy levels and are easily repaired and recycled.

Hart concludes that these divergent progeny probably won’t look anything like the robots we see around us today, but that is where artificial evolution can help. Unrestrained by human cognition, computerized evolution can generate creative solutions we cannot even conceive of yet.

And it would appear these machines will now evolve even further as we step back and hand them the reins of their own virtual lives. How this will affect the human race remains to be seen.


Scientists just turned light-based information into readable soundwaves

Australian physicists at the University of Sydney converted information encoded in pulses of light into sound waves on a single computer chip. The process also worked in reverse. The research is considered a breakthrough for light-based computing, which uses photons instead of electrons to relay bits.


Credit: Pixabay.

Light-based electronics are very appealing to the industry since photons can theoretically enable data transmission rates an order of magnitude greater than electrons allow. A photonic computer could, for instance, be up to 20 times faster than the electron-based transistors inside your laptop. Li-Fi, a technology which uses light instead of radio waves in routers, can be up to 100 times faster than WiFi.

Right now, transistors are nearing the limit of miniaturization silicon can accommodate. Mass produced computer chips nowadays have embedded transistors that are only 14 nanometers across. That’s only 70 silicon atoms wide.

Light-based computers are thus one possible solution to the otherwise impending halt of “Moore’s Law” — an observation that electronic devices double in speed and capability about every two years. It hasn’t been proven wrong in the last 40 years, but it can’t remain viable forever.

If we can keep Moore’s Law kicking for another 40 years, though, the possibilities could be enormous.

A very light chip

There are challenges to building a photon chip, though. Ironically, photons are too fast to be read by microprocessors. And yes, fiber optic cables do use light waves to carry information, but that information is immediately slowed down and converted into electrons for computers to swallow.

Before we can achieve photon-computer status, we have to jump through some hoops. An important intermediate step was recently achieved by a team led by Dr Birgit Stiller, a research fellow at the University of Sydney.

Stiller and colleagues transferred information from the optical to the acoustic domain and back again inside a chip, as described in Nature Communications. 

“The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said Dr Stiller in a press release.

“It is like the difference between thunder and lightning,” she said.

This delay actually proves useful considering the state of the art right now. It gives the computer chip enough breathing room to store and manage the information for later processing, retrieval and further transmission as light waves. The video below gives you a glimpse of how all of this works.
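
A quick back-of-the-envelope calculation shows why the acoustic detour buys useful time. The velocities below are rough assumed values for light in a chip waveguide and for sound in a solid, not figures from the Sydney paper.

```python
# Over the same on-chip distance, an acoustic wave takes roughly five orders
# of magnitude longer to travel than a guided light pulse.
optical_velocity = 2.0e8      # m/s, light in a waveguide (~ c / refractive index), assumed
acoustic_velocity = 2.5e3     # m/s, typical sound velocity in a solid, assumed

distance = 1e-3               # 1 mm of on-chip travel
t_light = distance / optical_velocity
t_sound = distance / acoustic_velocity

print(f"light: {t_light*1e9:.3f} ns")
print(f"sound: {t_sound*1e9:.1f} ns")
print(f"ratio: {t_sound/t_light:.0f}x slower")   # roughly five orders of magnitude
```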

“This is an important step forward in the field of optical information processing as this concept fulfills all requirements for current and future generation optical communication systems,” said Professor Benjamin Eggleton, study co-author.

Operating system and a movie, among others, stored in DNA with no errors. The method can pack 215 petabytes of data in a single gram of DNA

Using an algorithm designed to stream videos on mobile phones, researchers showed how to maximize the data storage potential of DNA. Their method can encode 215 petabytes of data — twice as much as Google and Facebook combined hold in their servers — on a single gram of DNA. This is 100 times more than previously demonstrated. Moreover, the researchers encoded an operating system and a movie onto DNA and were able to successfully retrieve the data from the sequenced DNA without any errors.

Photo: Public Domain.

Every day, we create 2.5 quintillion bytes of data, and at an ever-increasing pace. IBM estimates 90% of the data in the world today has been created in the last two years alone. As more and more of our lives get transcribed in digital form, this trend will only be amplified. One problem is that hard drives and magnetic tapes will soon become inadequate for storing such vast amounts of ‘big data’ — which is where DNA comes in.

It’s often called the ‘blueprint of life’, for obvious reasons. Every cell in our bodies, every instinct, is coded in base sequences of A, G, C and T, DNA’s four nucleotide bases. Ever since Watson and Crick described DNA’s double-helix structure in the 1950s, scientists have realized that huge quantities of data could be stored at high density in just a few molecules. Additionally, DNA can remain stable for a very long time, as shown by a recent study that recovered DNA from a 430,000-year-old human ancestor found in a cave in Spain.

“DNA won’t degrade over time like cassette tapes and CDs, and it won’t become obsolete — if it does, we have bigger problems,” said study coauthor Yaniv Erlich, a computer science professor at Columbia Engineering and a member of Columbia’s Data Science Institute.

Erlich and colleagues at the New York Genome Center (NYGC) chose six files to write into DNA:  a full computer operating system, an 1895 French film, “Arrival of a train at La Ciotat,” a $50 Amazon gift card, a computer virus, a Pioneer plaque and a 1948 study by information theorist Claude Shannon.

All the files were compressed into a single master file, then split into short strings of binary code made up of 1s and 0s. The researchers used a technique called fountain codes, which Erlich remembered from graduate school, to make the reading and writing process more efficient. Using the algorithm, they packaged the strings randomly into ‘droplets’ and mapped the 1s and 0s in each onto DNA’s four nucleotide bases (A, G, C, T). The algorithm proved essential for storing and retrieving the encoded data, since it corrects for and screens out letter combinations known to provoke errors.
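
As a simplified illustration of the encoding idea, the sketch below maps pairs of bits onto single bases and back again. The real DNA Fountain method adds the fountain-code ‘droplet’ packaging and error screening described above, which is omitted here.

```python
# Simplified binary <-> DNA mapping: two bits per nucleotide. Illustrative
# only; not the actual DNA Fountain encoder.
BIT_PAIRS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BIT_PAIRS = {b: p for p, b in BIT_PAIRS_TO_BASE.items()}

def encode(bits: str) -> str:
    assert len(bits) % 2 == 0
    return "".join(BIT_PAIRS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(bases: str) -> str:
    return "".join(BASE_TO_BIT_PAIRS[b] for b in bases)

payload = "0110001011110100"
strand = encode(payload)            # 'CGAGTTCA'
assert decode(strand) == payload    # round trip recovers the original bits
print(payload, "->", strand)
```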

Once they finished, they ended up with a huge text file made of 72,000 DNA strands, each 200 bases long. The text file was sent to a San Francisco startup called Twist Bioscience, which turned all that digital data into biological data by synthesizing DNA. Two weeks later, Erlich received a vial containing the DNA that encoded the files.

Overview of the data encoding and decoding process in DNA. Credit: Harvard Uni.


To retrieve the files, the researchers used common DNA sequencing tools as well as special software that translates all the As, Gs, Cs, and Ts back into binary. The whole process worked flawlessly: the data was retrieved with no errors.

To demonstrate, Erlich installed on a virtual machine the operating system he had encoded in the DNA and played a game of Minesweeper to celebrate. Chapeau!

“We believe this is the highest-density data-storage device ever created,” said Erlich.

Erlich didn’t stop there. He and colleagues showed that you could copy the encoded data as many times as you wish. To copy the data, it’s just a matter of multiplying the DNA sample through polymerase chain reaction (PCR). The team showed that copies of copies of copies could have their data retrieved as error-free as the original sample, as reported in the journal Science.

Yaniv Erlich and Dina Zielinski describe a new coding technique for maximizing the data-storage capacity of DNA molecules. Credit: New York Genome Center


There are some caveats, however, which I should mention. It cost $7,000 to synthesize the DNA and another $2,000 to read it. But it’s also worth keeping in mind that sequencing DNA is getting exponentially cheaper. It cost $2.7 billion and 15 years of work to sequence the first human genome; since 2008, the cost has come down from $10 million to the couple-thousand-dollar mark. Sequencing DNA might become as cheap as running electricity through transistors at some point in the not-too-distant future.

Another thing we should mention is that DNA storage isn’t meant for mundane use. As it stands today, you can’t replace the HDD in a home computer with DNA, for instance, because the read-write time can take days. Instead, DNA might be our best solution for archiving the troves of data that are piling up in insane quantities with each passing day. And who knows, maybe someone will find a way to encode and decode data in molecules as fast as electrons zip through a transistor — but that seems highly unlikely, if not impossible.

By 2040 our computers will use more power than we can produce

The breathtaking speed at which our computers evolve is perfectly summarized in Moore’s Law — the idea that the number of transistors in an integrated circuit doubles every two years. But this kind of exponential growth in computing power also means that our chipsets need more and more power to function — and by 2040 they will gobble up more electricity than the world can produce, scientists predict.

Image via Pixabay.

The projection was originally contained in a report released last year by the Semiconductor Industry Association (SIA) but it has only recently made headlines as the group issued its final assessment on the semiconductor industry. The basic idea is that as computer chips become more powerful and incorporate more transistors, they’ll require more power to function unless efficiency can be improved.

Energy we may not have. The SIA predicts that unless we significantly change the design of our computers, by 2040 we won’t be able to power all of them. And there’s a limit to how much improvement current methods can deliver:

“Industry’s ability to follow Moore’s Law has led to smaller transistors but greater power density and associated thermal management issues,” the 2015 report explains.

“More transistors per chip mean more interconnects – leading-edge microprocessors can have several kilometres of total interconnect length. But as interconnects shrink they become more inefficient.”

So in the long run, SIA estimates that under current conditions “computing will not be sustainable by 2040, when the energy required for computing will exceed the estimated world’s energy production.”

Total energy used for computing. Image credits: SIA.

This graph shows the problem: the orange line is the energy required by today’s computing systems (the benchmark line) and the yellow one is the world’s total energy production. The point where they meet, predicted to fall somewhere around 2030 or 2040, is where the problems start. Today, chip engineers stack ever-smaller transistors in three dimensions in order to improve performance and keep pace with Moore’s Law, but the SIA says that approach won’t work forever, given how much energy will be lost in future, progressively denser chips.
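
The sketch below reproduces the shape of that argument with purely hypothetical numbers: an exponentially compounding energy demand eventually crosses a slowly growing supply. The starting values and growth rates are placeholders, not the SIA's figures.

```python
# Toy crossover calculation: exponential demand vs. slowly growing supply.
# All numbers are hypothetical placeholders for illustration only.
def crossover_year(start_year=2016, demand=1.0, supply=100.0,
                   demand_growth=0.25, supply_growth=0.01):
    year = start_year
    while demand < supply and year < 2100:
        demand *= 1 + demand_growth      # computing's energy use compounds quickly
        supply *= 1 + supply_growth      # energy production grows slowly
        year += 1
    return year

print("demand overtakes supply around:", crossover_year())
```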

“Conventional approaches are running into physical limits. Reducing the ‘energy cost’ of managing data on-chip requires coordinated research in new materials, devices, and architectures,” the SIA states.

“This new technology and architecture needs to be several orders of magnitude more energy efficient than best current estimates for mainstream digital semiconductor technology if energy consumption is to be prevented from following an explosive growth curve.”

The roadmap report also warns that beyond 2020, it will become economically unviable to keep improving performance with simple scaling methods. Future improvements in computing power must come from areas not related to transistor count.

“That wall really started to crumble in 2005, and since that time we’ve been getting more transistors but they’re really not all that much better,” computer engineer Thomas Conte of Georgia Tech told IEEE Spectrum.

“This isn’t saying this is the end of Moore’s Law. It’s stepping back and saying what really matters here – and what really matters here is computing.”

The new design uses a special material called carbon nanotubes, which allows memory and processor layers to be stacked in three dimensions. Image: Max Shulaker

3D stacked computer chips could make computers 1,000 times faster

Computer chips today have billions of tiny transistors just a few nanometers wide (a human hair, by comparison, is about 100,000 nanometers thick), all crammed onto a small surface. This huge density allows multiple complex operations to run billions of times per second. This has been going on since the ’60s, when Gordon Moore first predicted that the number of transistors on a given silicon chip would roughly double every two years. So far, so good – Moore is still right!

But for how long? There’s only so far you can scale down a computer chip. At some point, once you cross a certain threshold, you pass from the macroworld into the spooky domain of quantum physics. Past this point, quantum fluctuations might render the chips useless.

Moore might still be right, though. Or he could be wrong, but in a way that profits society: computer chips could increase in computing power at a far greater pace than Moore initially predicted (if you keep Moore’s law but replace transistor count with equivalent computing power). This doesn’t sound so crazy when you factor in quantum computers or, more practically, a 3D computer architecture demonstrated by a team at Stanford University that crams both CPU and memory into the same chip. This vastly reduces the “commuting time” electrons typically spend traveling through conventional circuits and makes them more efficient. Such a 3D design could make a chip 1,000 times faster than what we typically see today, according to the researchers.


The new design uses a special material called carbon nanotubes, which allows memory and processor layers to be stacked in three dimensions. Image: Max Shulaker

According to Max Shulaker, one of the designers of the chip, the breakthrough lies in interweaving memory (which stores data) and processors (which compute on that data) in the same space. That’s because, Shulaker says, the greatest barrier holding today’s processors back from higher computing speeds lies not with the transistors but with memory. As a computer shuttles vast amounts of data, it constantly dances electrons between storage media (hard drives and RAM) and the processors through data highways (the wires). “You’re wasting an enormous amount of power,” said Shulaker at the “Wait, What?” technology forum hosted by DARPA. Basically, a computer sits idle 96% of the time, waiting for information to be retrieved. So, what you do is put the RAM and the processor together.

Sounds simple enough, but in reality there are a number of challenges. One of them is that you can’t put the two on the same silicon wafer: during manufacturing, these wafers are heated to 1,000 degrees Celsius, far too much for the metal parts that make up hard drives or solid-state drives. The workaround was a novel material: carbon nanotubes, tubes of carbon atoms that have extraordinary mechanical, electrical, thermal, optical and chemical properties. Because their electrical properties are similar to silicon’s, many believe these could someday replace silicon as the de facto semiconductor building block of choice. In this particular case, what matters is that carbon nanotubes can be processed at low temperatures.

The challenge with nanotubes is that they are very difficult to ‘grow’ in a predictable, regular pattern. They’re like spaghetti, and in computing, even a couple of misaligned nanotubes spell disaster. Also, inherent manufacturing defects mean that while most CNTs will work as semiconductors (switching current on and off), some will work as plain conductors and fry the transistors. Luckily, the Stanford researchers found a nifty trick: turn off all the CNTs, then run a powerful current through the circuit to blow out the defective conductive CNTs, just like fuses.

The image on the left depicts today’s single-story electronic circuit cards, where logic and memory chips exist as separate structures, connected by wires. Like city streets, those wires can get jammed with digital traffic going back and forth between logic and memory. On the right, Stanford engineers envision building layers of logic and memory to create skyscraper chips. Data would move up and down on nanoscale “elevators” to avoid traffic jams. Credit: Wong/Mitra Lab, Stanford


 

Ultimately, the Stanford team built a system that stacks memory and processing power in the same unit, with tiny wires connecting the two. The architecture can produce lightning-fast computing speeds up to 1,000 times faster than would otherwise be possible. To demonstrate, they used this architecture to devise sensors that detect anything from infrared light to various chemicals.

So, when will we see stacked interfaces like these in our computers or smartphones? Nobody knows for sure due to one issue: cooling. We’ve yet to see a solution that works well for a 3D stacked CPU-memory unit.

A cellulose nanofibril (CNF) computer chip rests on a leaf. Photo: Yei Hwan Jung, Wisconsin Nano Engineering Device Laboratory

Scientists demonstrate full working biodegradable wooden chips. No, the electronic kind

Researchers at the University of Wisconsin-Madison designed an innovative and sustainable solution to the global electronic waste problem: make the substrate of computer chips out of cellulose nanofibril (CNF), a biodegradable material derived from wood. The team collaborated with the Madison-based U.S. Department of Agriculture Forest Products Laboratory (FPL) to build their device.

A cellulose nanofibril (CNF) computer chip rests on a leaf. Photo: Yei Hwan Jung, Wisconsin Nano Engineering Device Laboratory


On average, cell phones are used for less than 18 months and computers for some 3 years before being replaced. But very few end up getting recycled, partly because there’s little awareness of the issue and partly because of a lack of facilities. Not to mention how prohibitively expensive it can be to extract the toxic semiconductors that make up the devices, like gallium arsenide (GaAs). So most electronics end up in landfills, polluting the environment and expanding the demand for further landfill space.

Wood is plentiful and biodegradable, so why not chop some good ol’ chips? Trust me, I was as skeptical as you at this point, until I read the paper (published in Nature). According to the UW researchers, the active region of a typical computer chip is only a thin layer sitting atop a bulk support that accounts for more than 99% of the semiconductor material. This huge bottom layer carries no active components, so it can be replaced with just about any material, in this case a biodegradable one, granted chip operation isn’t affected.

“The majority of material in a chip is support. We only use less than a couple of micrometers for everything else,” says UW-Madison electrical and computer engineering professor Zhenqiang “Jack” Ma. “Now the chips are so safe you can put them in the forest and fungus will degrade it. They become as safe as fertilizer.”

How about that – electronic fertilizer. You definitely don’t hear that every day. Naturally, though, the researchers didn’t build a chip atop a wooden plank. First, they had to overcome two key barriers to using wood-derived materials in an electronics setting: surface smoothness and thermal expansion. In the end, they found an elegant fix: coating the CNF layers with epoxy.

“You don’t want it to expand or shrink too much. Wood is a natural hygroscopic material and could attract moisture from the air and expand,” says Zhiyong Cai, project leader for an engineering composite science research group at FPL. “With an epoxy coating on the surface of the CNF, we solved both the surface smoothness and the moisture barrier.”

With over a decade of experience working with CNF, the researchers are confident in their design. They say it can also be easily adapted to current manufacturing technology.

“The advantage of CNF over other polymers is that it’s a bio-based material and most other polymers are petroleum-based polymers. Bio-based materials are sustainable, bio-compatible and biodegradable,” Gong says. “And, compared to other polymers, CNF actually has a relatively low thermal expansion coefficient.”

Yei Hwan Jung, a graduate student in electrical and computer engineering and a co-author of the paper, says the new process greatly reduces the use of such expensive and potentially toxic material.

“I’ve made 1,500 gallium arsenide transistors in a 5-by-6 millimeter chip. Typically for a microwave chip that size, there are only eight to 40 transistors. The rest of the area is just wasted,” he says. “We take our design and put it on CNF using deterministic assembly technique, then we can put it wherever we want and make a completely functional circuit with performance comparable to existing chips.”

 

An artificial synaptic circuit. Image: Sonia Fernandez

Making computers ‘tick’ like the human brain: a breakthrough moment

Researchers at UC Santa Barbara made a simple neural circuit comprising 100 artificial synapses, which they used to classify three letters by their images, despite font changes and noise introduced into the images. The researchers claim the rudimentary yet effective circuit processes the text in much the same way as the human brain does. In other words, like you’re currently interpreting the text in this article: even if you change the font, print-screen this article and splash it with an airbrush in MS Paint, you’ll still be able to read at least portions of it, because the human brain is so good at scaling patterns and abstracting symbols. This kind of research will hopefully usher in a new age of more refined, energy-efficient computing.


An artificial synaptic circuit developed at UC Santa Barbara. Image: Sonia Fernandez

Don’t worry: while this is a big step for artificial intelligence, the circuit comes nowhere near the human brain, which has around 10^15 (one quadrillion) synaptic connections. Despite technology having come a long way, computers are still rather dumb. Yes, you can achieve marvelous things with them, but they’re only tools – not thinking machines. Any trace of “smartness” you might find in a computer or piece of software is actually human cleverness – you’re revering the designer’s intent! Considering this kind of complexity, scientists have long been trying to mimic the way the brain processes information; not necessarily to create a sentient artificial intelligence, but rather to increase machine computational speed by orders of magnitude. The adult human brain needs about 20 watts of power. A conventional machine that could simulate the entire human brain would need an entire river’s course bent just to cool it!

The team led by Dmitri Strukov, a professor of electrical and computer engineering at UC Santa Barbara, was looking to build a simple yet effective device that could perform some of the tasks human brains handle with split-second decision-making. Reading and interpreting visual symbols is one of them. They used their rudimentary artificial neural network to classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. The algorithm they used is akin to the way we pick our friends out from a crowd, or find the right key on a ring of similar keys. The findings were reported in the journal Nature.
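
For a feel of what such a small network does, here is a software sketch of a comparable setup: a single-layer network, one weight per ‘synapse’, trained with a simple delta rule to tell three noisy 3x3 letter patterns apart. The patterns, learning rate and training scheme are illustrative assumptions; the UCSB team implements the weights in memristor hardware rather than in code.

```python
# A tiny single-layer classifier as a software stand-in for the 100-synapse
# crossbar: one weight per input pixel (plus a bias) for each output class.
import random

LETTERS = {
    "z": [1,1,1, 0,1,0, 1,1,1],     # illustrative 3x3 stylizations
    "v": [1,0,1, 1,0,1, 0,1,0],
    "n": [1,0,1, 1,1,1, 1,0,1],
}
CLASSES = list(LETTERS)
weights = {c: [0.0] * 10 for c in CLASSES}

def score(c, pixels):
    x = pixels + [1]                # append bias input
    return sum(w * xi for w, xi in zip(weights[c], x))

def noisy(pixels, flips=1):
    p = pixels[:]
    for i in random.sample(range(9), flips):
        p[i] = 1 - p[i]             # flip a random pixel
    return p

# train with the delta rule on noisy examples
for _ in range(2000):
    label = random.choice(CLASSES)
    x = noisy(LETTERS[label]) + [1]
    for c in CLASSES:
        target = 1.0 if c == label else 0.0
        out = sum(w * xi for w, xi in zip(weights[c], x))
        weights[c] = [w + 0.01 * (target - out) * xi for w, xi in zip(weights[c], x)]

test = noisy(LETTERS["z"])
print(max(CLASSES, key=lambda c: score(c, test)))   # usually prints 'z'
```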

“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat, part of the team at UC Santa Barbara.

“And, as more solutions to the technological challenges are proposed the technology will be able to make it to the market sooner,” according to Gina Adam, another engineer on the team.

The artificial neural circuit can learn to recognize simple black-and-white patterns, thanks to devices called memristors located at each place the wires cross. Image: UCSB


To build their artificial neural network, the engineers used memristors instead of traditional semiconductor transistors – the kind your CPU or graphics card uses. A memristor is an electrical component that limits or regulates the flow of electrical current in a circuit and remembers the amount of charge that has previously flowed through it – in effect, a transistor with memory. Unlike traditional transistors, which rely on the drift and diffusion of electrons and holes through semiconducting material, memristor operation is based on ionic movement, similar to the way human neural cells generate electrical signals.
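
The sketch below captures that ‘memory of past current’ in an idealized software model, with the device's resistance drifting according to the charge that has flowed through it. The resistance bounds and drift rate are made-up values for illustration, not measurements of the UCSB devices.

```python
# An idealized memristor model: the internal state drifts with the charge
# that passes through, and the resistance interpolates between two bounds.
R_ON, R_OFF = 100.0, 16_000.0     # ohms, illustrative bounds
DRIFT = 2_000.0                   # state change per coulomb, illustrative

class Memristor:
    def __init__(self, w=0.5):
        self.w = w                # normalized internal state in [0, 1]

    def resistance(self):
        # high w -> low resistance, low w -> high resistance
        return self.w * R_ON + (1 - self.w) * R_OFF

    def step(self, voltage, dt):
        current = voltage / self.resistance()
        # the state drifts with the charge (current * dt) that flows through
        self.w = min(1.0, max(0.0, self.w + DRIFT * current * dt))
        return current

m = Memristor()
for _ in range(1000):             # a train of positive voltage pulses...
    m.step(voltage=1.0, dt=1e-3)
print(round(m.resistance()))      # ...lowers the resistance, and that state persists
```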

“The memory state is stored as a specific concentration profile of defects that can be moved back and forth within the memristor,” said Strukov.

The ionic mechanism has several advantages over pure electron transfer.

“For example, many different configurations of ionic profiles result in a continuum of memory states and hence analog memory functionality,” he said. “Ions are also much heavier than electrons and do not tunnel easily, which permits aggressive scaling of memristors without sacrificing analog properties.”

In other words, in this case at least, analog trumps digital, since the same human-brain functionality would be attained only by an enormous machine, if that machine were to use conventional transistor technology.

“Classical computers will always find an ineluctable limit to efficient brain-like computation in their very architecture,” said lead researcher Prezioso. “This memristor-based technology relies on a completely different way inspired by biological brain to carry on computation.”

This is merely the beginning. To scale up the number of artificial synapses and perform more complex tasks, many more memristors need to be added and interwoven. The team is also thinking of assembling a hybrid in which memristors and conventional transistor technology are merged, which would enable more complex demonstrations and allow this early artificial brain to do more complicated and nuanced things. Ideally, trillions of memristors would be stacked atop each other to perform computations far more efficiently than before. This, however, implies many more small steps before this kind of research is considered mature enough to be awarded billions of dollars.

Previously, researchers at Harvard School of Engineering and Applied Sciences (SEAS) built  a transistor that behaves like a neuron, in some respects at least. Last year, the K supercomputer, with over 700,000 processor cores and 1.4 million GB of RAM, was used to simulate one second of human neural activity in 40 minutes.

 

Great Principles of Computing book review

Book review: ‘Great Principles of Computing’

Great Principles of Computing book review

 

Great Principles of Computing
By Peter J. Denning, Craig H. Martell
MIT Press, 320pp | Buy on Amazon
Is computer science really a science, or just a tool for analyzing data, churning and crunching numbers? During its brief history, computer science has had a lot to endure, but it’s only recently been appreciated for its potential as an agent of discovery and thought. At first, computing looked like merely the applied technology of math, electrical engineering or science, depending on the observer. In fact, during its youth, computing was regarded as the mechanical steps one needs to follow to solve a mathematical function, and ‘computers’ were the people who did the computation. What you and I call a computer today is short for ‘automatic computer’, but along the way the distinction blurred.

Ultimately, computer science is a science of information processes, no different from biology in many respects, at least if we heed the words of Nobel laureate David Baltimore or cognitive scientist Douglas Hofstadter, who first proposed that biology had become an information science and that DNA translation is a natural information process. Following this line of reasoning, computer science studies both natural and artificial information processes. Like all sciences, computer science is guided by a framework of great principles, a framework that Denning and Martell set out to expose in their book, “Great Principles of Computing.”

Denning and Martell divide the great principles of computing into six categories: communication, computation, coordination, recollection, evaluation, and design. Each provides a perspective on computing, but they’re not mutually exclusive: the internet, for instance, can be seen at once as a communication system, a coordination system or a storage system. In each chapter, the authors explain what a principle means and how it relates to different areas: information, machines, programming, computation, memory, parallelism, queueing, and design. Of course, principles are fairly static, so their relations to one another are also discussed at length.

The great-principles framework reveals a rich set of rules on which all computation is based. These principles interact with the domains of the physical, life and social sciences, as well as with computing technology itself. As such, professionals in science and engineering might find this book particularly useful, yet that’s not to say laymen won’t have a lot to learn. But while the concepts or principles outlined in the book are very thoroughly explained, be warned at the same time that this is a technical book. With this out of the way, if you’re not afraid of a lot of schematics and a few equations here and there, “Great Principles of Computing” is definitely a winner.

 

Big Data

‘Data Smashing’ algorithm might help declutter Big Data noise without Human Intervention

There’s an immense well of information humanity is currently sitting on, and it’s only growing exponentially. To make sense of all the noise, whether we’re talking about speech recognition, identifying cosmic bodies or ranking search engine results, we need highly complex algorithms that use less processing power by hitting the bull’s eye, or getting as close to it as possible. In the future, such algorithms will be built on machine learning technology that gets smarter with every pass over the data, most likely employing quantum computing as well. Until then, we have to make do with conventional algorithms, and a most exciting paper detailing one such technique was recently published.

Smashing data – the bits and pieces that follow are the most important

Big Data

Credit: 33rd Square

Called ‘data smashing’, the algorithm tries to fix one major flaw in today’s information processing. Immense amounts of data are constantly being fed in, and while algorithms help us declutter, at the end of the day companies and governments still need experts to oversee the process and add a much-needed human touch. Basically, computers are still pretty bad at recognizing complex patterns. Sure, they’re awesome at crunching the numbers, but in the end humans need to compare the outputted scenarios and pick the most relevant answer. As more and more processes are monitored and fed into large data sets, however, this task is becoming ever more difficult, and human experts are in short supply.


The algorithm, developed by Hod Lipson, associate professor of mechanical engineering and of computing and information science, and Ishanu Chattopadhyay, a former postdoctoral associate with Lipson now at the University of Chicago, is nothing short of brilliant. It works by estimating the similarities between streams of arbitrary data without human intervention, and even without access to the data sources.

Basically, streams of data are ‘smashed’ against one another to tease out the unique information in each by measuring what remains after every ‘collision’. The more information survives, the less likely it is that the streams originated from the same source.
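
The snippet below gives a loose feel for comparing two raw streams with no model of what produced them. It uses a normalized compression distance as a stand-in, which is NOT the authors' anti-stream construction, just the same spirit: streams that share content compress well together, unrelated ones do not.

```python
# Model-free stream comparison via a normalized compression distance (NCD).
import random, zlib

def ncd(a: bytes, b: bytes) -> float:
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)      # ~0: related, ~1: unrelated

random.seed(0)
source = bytes(random.getrandbits(8) for _ in range(6000))
a = source[:4000]                                 # two overlapping 'recordings'
b = source[1000:5000]                             # of the same underlying signal
c = bytes(random.getrandbits(8) for _ in range(4000))   # an unrelated signal

print("same origin:     ", round(ncd(a, b), 2))   # noticeably lower
print("different origin:", round(ncd(a, c), 2))   # close to 1
```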

Data smashing could open the door to a new body of research. It’s not just about helping experts sort through data more easily; it might also identify anomalies that are impossible for humans to spot, by virtue of sheer computational brute force. For instance, the researchers demonstrated data smashing on data from real-world problems, including the detection of anomalous cardiac activity from heart recordings and the classification of astronomical objects from raw photometry. The results were on par with the accuracy of specialized algorithms and heuristics tweaked by experts.

Several prototypes of the synaptic transistor are visible on this silicon chip. (Photo by Eliza Grinnell, SEAS Communications.)

New transistor boasts neuron-like capabilities. It learns as it computes, hinting towards a new parallel computing future

The human brain is possibly the most complex entity in the Universe. It’s absolutely remarkable and beautiful to contemplate, and the things we are capable of because of our brains are outstanding. Even though most people might seem to use their brains trivially, the truth is that the brain is incredibly complex. Let’s look at the technicalities alone: the human brain contains some 100 billion nerve cells, which together form connections in tandem, each neuron simultaneously engaged with another 1,000 or so. In total, the brain performs some 20 million billion calculations per second.

The unmatched computational strength of the human brain

That’s quite impressive. Some people think that just because they can’t add, multiply or differentiate an equation in a heartbeat like a computer does, the computer is ‘smarter’ than them. That couldn’t be farther from the truth. The machine can only do that: compute. Ask your scientific hand calculator to make you breakfast, write a novel or dig a hole. You could design a super scientific calculator with grippable limbs and program it to grab a shovel and dig; it would probably succeed, but then it would hit yet another limitation, since that’s all it has been designed to do – it doesn’t ‘think’ for itself. Imagine this: if you were to combine the whole computing power on our planet – virtually pool all the CPUs in the world – only then would you be able to reach the computing speed of a single human brain. Building a machine similar to the human brain by today’s sequential computational standards would thus cost an enormous amount of money and energy. To cool such a machine you’d need to divert a whole river! In contrast, an adult human brain only consumes about 20 watts of energy!

Mimicking the computing power of the brain, the most complex computational ‘device’ in the Universe, is a priority for computer science and artificial intelligence enthusiasts. But we’re just beginning to learn how the brain works and what lies within our deepest recesses – the challenges are numerous.

A new step forward in this direction has been made by scientists at the Harvard School of Engineering and Applied Sciences (SEAS) who reportedly built  a transistor that behaves like a neuron, in some respects at least.

The brain is extremely plastic, as it creates a coherent interpretation of the external world based on input from its sensory system. It’s always changing and highly adaptable. In fact, some neurons or whole brain regions can switch functions when needed, a fact attested by various medical cases in which severe trauma was inflicted. One should remember a fellow named Phineas Gage. Gage worked as a construction worker during the railroad boom of the mid-19th century. A freak accident propelled a large iron rod directly through his skull, destroying his brain’s left frontal lobe. He survived for many years afterward, though his personality was severely altered – the prime example at the time that personality is deeply intertwined with the brain. What it also demonstrates, however, is that key brain functions were diverted to other parts of the brain.

A synaptic transistor

Several prototypes of the synaptic transistor are visible on this silicon chip. (Photo by Eliza Grinnell, SEAS Communications.)


So how do you mimic this amazing plasticity? Well, if you want a chip that behaves like a human brain, you first need its constituent elements to behave like the constituent elements of the brain – transistors standing in for neurons and synapses. A transistor in some ways already behaves like a synapse, acting as a signal gate. When two neurons are in connection (they’re never in direct contact!), electrochemical reactions mediated by neurotransmitters relay specific signals.

In a real synapse, calcium ions induce chemical signaling. The Harvard transistor instead uses oxygen ions, embedded in an 80-nanometer-thick layer of samarium nickelate crystal, which is the analog of the synapse channel. When a voltage is applied to the crystal, oxygen ions slip through, changing the conductive properties of the lattice and altering its signal-relaying capabilities.

The strength of the connection is based on the time delay of the electric signal fed into it, in much the same way that real synapses get stronger the more signals they relay. By exploiting unusual properties of modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer.
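
A minimal sketch of a timing-dependent plasticity rule of the kind the device emulates makes the idea concrete: the weight change depends on the delay between the ‘pre’ and ‘post’ signals, with shorter delays producing bigger changes. The time constants and amplitudes below are illustrative, not measured device values.

```python
# Toy spike-timing-dependent plasticity rule (illustrative constants).
import math

A_PLUS, A_MINUS = 0.05, 0.025     # max strengthening / weakening
TAU = 20.0                        # ms, how quickly the effect decays with delay

def stdp_update(weight, t_pre, t_post):
    dt = t_post - t_pre           # ms; positive if 'pre' fired before 'post'
    if dt > 0:
        weight += A_PLUS * math.exp(-dt / TAU)     # causal pairing: strengthen
    else:
        weight -= A_MINUS * math.exp(dt / TAU)     # acausal pairing: weaken
    return min(1.0, max(0.0, weight))

w = 0.5
for delay in (5, 5, 5):           # repeated, closely timed pairings...
    w = stdp_update(w, t_pre=0.0, t_post=delay)
print(round(w, 3))                # ...leave the connection stronger
```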

“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

So, it does in fact behave a bit like a neuron, in the sense that it adapts, strengthening and weakening connections according to external stimuli. Also, as opposed to traditional transistors, the Harvard creation isn’t restricted to the binary system of ones and zeros, and interestingly enough it runs on non-volatile memory, which means that even when power is interrupted, the device remembers its state. Still, it can’t form new connections like a human neuron can.

“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”

It does have a significant advantage over the human brain – these transistors can run at high temperatures exceeding 160 degrees Celsius. This kind of heat typically boils the brain, so kudos.

So, in principle at least, integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance. We’re still light years away from something like that, but it hints at a future of highly efficient and fast parallel computing. This is the very first baby step – a proof of concept.

“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”

“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”

The findings were reported in the journal Nature Communications.

 


The biological transistor is finally here, opening a new age of computing

At the advent of the transistor in the middle of the last century, computing simply boomed as a new era of technology was ushered in. Though it may not have the same humongous impact the traditional transistor had when it was introduced, the all-biological transistor recently unveiled by scientists at Stanford University will surely change the way technology and biology merge. Made out of genetic material (DNA and RNA), the biological transistor will allow computers to function inside living cells, something we’ve been waiting on for many years.

A transistor controls the flow of electricity through a device, acting like an on-off switch. Similarly, the biological transistor, called a “transcriptor”, controls the flow of an enzyme (RNA polymerase) as it travels along a strand of DNA. To achieve this, the researchers used a group of natural proteins, the workhorses of cells, to control how the enzyme zips across the DNA strand.

“The choice of enzymes is important,” says Jerome Bonnet, who worked on the project. “We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms.”

Again, just as a transistor’s main function is to amplify a signal, say turning a weak radio signal into an audible one, the transcriptor can amplify a very small change in the production of one enzyme into large changes in the production of other proteins. By combining a couple of transcriptors, the Stanford researchers created logic gates that perform Boolean operations, the foundation of basic computing, and it is through these gates that operations like AND, NAND, OR, XOR, NOR, and XNOR can be performed. The researchers chose enzymes that function in bacteria, fungi, plants and animals, so that biological computers might be made with a wide variety of organisms, Bonnet said.
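
For readers unfamiliar with the gate names, here is a plain software illustration of how the more complex gates can be composed from a few basic ones. This is ordinary code, not a model of the transcriptor biochemistry.

```python
# Composing Boolean gates from AND, OR and NOT.
def AND(a, b): return a and b
def OR(a, b):  return a or b
def NOT(a):    return not a

def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))
def XNOR(a, b): return NOT(XOR(a, b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "-> XOR:", int(XOR(a, b)))
```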

But transistors aren’t enough to build a computer: you also need memory to store information and some way (a bus) to link the memory with the transistors. Previous research, however, has demonstrated that it is indeed possible to store data directly in DNA, so the necessary building blocks for a working biological computer are here. Don’t expect one any time soon, though, let alone widely available applications, from monitoring what goes on inside cells to turning cellular processes on and off.

Nevertheless, to speed up the process, Stanford has made its biological logic gate design public, and anyone confident and skilled enough is more than invited to contribute.

The transcriptor has been described in a paper published in the journal Science.

Droplets

A computer made from water droplets

Droplets

If you thought the computer devised out of soldier crab swarms was cool, wait till you hear what scientists at Aalto University managed to make. In a recently published study, the researchers built a hydrophobic set-up through which they channeled water droplets, and in the process encoded information, practically building a computer.

The researchers used the term “superhydrophobic droplet logic” to describe the process through which they stored information. At its core, the Aalto water droplet technique is based on the billiard-ball computer model, a textbook model of computation. Interestingly enough, when two water droplets collide with each other on a highly water-repellent surface, they rebound like billiard balls.

Two water-repellent channels were devised, made out of a copper surface coated with silver and chemically modified with a fluorinated compound. Since the resulting system is predictable, the scientists were able to encode information with the water droplets, with drops on one track representing ones and drops on the other representing zeroes.

“I was surprised that such rebounding collisions between two droplets were never reported before, as it indeed is an easily accessible phenomenon,” says Henrikki Mertaniemi.

Concerning practical applications, while an immediate deployment is a bit far off, the researchers involved in the project foresee a use for water-droplet devices in areas where electricity is not available and autonomous, yet simple, computing devices are required. Also, were the water droplets to be replaced with reacting chemicals, the logic behind the water droplet computing device could be employed in a programmable chemical reactor.

Findings were published in the journal Advanced Materials.

A series of snapshots in OR gate of swarm balls (credit: Yukio-Pegio Gunji, Yuta Nishiyama, Andrew Adamatzky)

Scientists devise computer using swarms of soldier crabs

Computing using unconventional methods found in nature has become an important branch of computer science, one that might help scientists construct more robust and reliable devices. For instance, the ability of biological systems to assemble and grow on their own enables much higher interconnection densities, while swarm intelligence algorithms mimic behaviors like that of ant colonies finding optimal paths to food sources. But it’s one thing to be inspired by nature to build computing devices, and another to use nature itself as the main computing component.

A series of snapshots in OR gate of swarm balls (credit: Yukio-Pegio Gunji, Yuta Nishiyama, Andrew Adamatzky)


Previously, scientific groups have used all sorts of natural computation mechanisms, from fluids to DNA and bacteria. Now, a team of computer scientists led by Yukio-Pegio Gunji from Kobe University in Japan has successfully created a computer that exploits the swarming behaviour of soldier crabs. Yup, that’s not something you hear every day.

For their eccentric choice of computing agent, the researchers took inspiration from the billiard-ball computer model, a classic reversible mechanical computer, mainly used for didactic purposes, first proposed in 1982 by Edward Fredkin and Tommaso Toffoli.

The billiard-ball computer model can be used as a Boolean circuit, only instead of wires it uses the paths on which the balls travel; information is encoded by the presence or absence of a ball on a path (1 and 0), and its logic gates (AND/OR/NOT) are simulated by collisions of balls at points where their paths cross. Now, instead of billiard balls, think crabs!
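
Here is a toy abstraction of that collision-based logic, where a signal is simply the presence or absence of a ball (or a crab swarm) on a path. It is a sketch of the logic only, not a physics or crab-behaviour simulation.

```python
# Collision-based logic in miniature: outputs are just which paths end up
# carrying a ball after two input paths cross.
def collision_gate(a: bool, b: bool):
    """Two paths cross. Balls deflect each other only if both are present."""
    both = a and b                     # deflected paths carry A AND B
    a_straight = a and not b           # A continues straight only if B was absent
    b_straight = b and not a
    return {"AND": both, "A_and_notB": a_straight, "B_and_notA": b_straight}

def or_gate(a: bool, b: bool) -> bool:
    # Merge the output paths: a ball appears if either input carried one,
    # loosely mirroring how merging swarms gave the researchers their OR gate.
    outs = collision_gate(a, b)
    return outs["AND"] or outs["A_and_notB"] or outs["B_and_notA"]

for a in (False, True):
    for b in (False, True):
        outs = collision_gate(a, b)
        print(int(a), int(b), "AND:", int(outs["AND"]), "OR:", int(or_gate(a, b)))
```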

“These creatures seem to be uniquely suited for this form of information processing. They live under the sand in tidal lagoons and emerge at low tide in swarms of hundreds of thousands.

What’s interesting about the crabs is that they appear to demonstrate two distinct forms of behaviour. When in the middle of a swarm, they simply follow whoever is nearby. But when they find themselves on the edge of a swarm, they change.

Suddenly, they become aggressive leaders and charge off into the watery distance with their swarm in tow, until by some accident of turbulence they find themselves inside the swarm again.

This turns out to be hugely robust behaviour that can be easily controlled. When placed next to a wall, a leader will always follow the wall in a direction that can be controlled by shadowing the swarm from above to mimic the presence of the predatory birds that eat the crabs.” MIT tech report

Thus, the researchers were able to construct a computer that uses soldier crabs to transmit information. They were able to build a decent OR gate using the crabs, though their AND gates were a lot less reliable. A more crab-friendly environment would’ve rendered better results, the researchers believe.

The findings were published in the journal Emerging Technologies.