Tag Archives: computer

Computers can now read handwriting with 98% accuracy

New research in Tunisia is teaching computers how to read your handwriting.

Image via Pixabay.

Researchers at the University of Sfax in Tunisia have developed a new method for computers to recognize handwritten characters and symbols in online scripts. The technique has already achieved ‘remarkable performance’ on texts written in the Latin and Arabic alphabets.

iRead

“Our paper handles the problem of online handwritten script recognition based on an extraction features system and deep approach system for sequence classification,” the researchers wrote in their paper. “We used an existent method combined with new classifiers in order to attain a flexible system.”

Handwriting recognition systems are, unsurprisingly, computer tools designed to recognize characters and hand-written symbols in a similar way to our brains. They’re similar in form and function to the neural networks that we’ve designed for image classification, face recognition, and natural language processing (NLP).

As humans, we innately begin developing the ability to understand different types of handwriting in our youth. This ability revolves around the identification and understanding of specific characters, both individually and when grouped together, the team explains. Several attempts have been made to replicate this ability in a computer over the last decade in a bid to enable more advanced and automatic analyses of handwritten texts.

The new paper presents two systems based on deep neural networks: an online handwriting segmentation and recognition system that uses a long short-term memory network (OnHSR-LSTM) and an online handwriting recognition system composed of a convolutional long short-term memory network (OnHR-covLSTM).

The first is based on the theory that our own brains work to transform language from the graphical marks on a piece of paper into symbolic representations. This OnHSR-LSTM works by detecting common properties of symbols or characters and then arranging them according to specific perceptual laws, for instance, based on proximity, similarity, etc. Essentially, it breaks down the script into a series of strokes, which are then turned into code; that code is what the program actually ‘reads’.

“Finally, [the model] attempts to build a representation of the handwritten form based on the assumption that the perception of form is the identification of basic features that are arranged until we identify an object,” the researchers explained in their paper.

“Therefore, the representation of handwriting is a combination of primitive strokes. Handwriting is a sequence of basic codes that are grouped together to define a character or a shape.”

The second system, the convolutional long short-term memory network, is trained to predict both characters and words based on what it reads. It is particularly well suited to processing and classifying long sequences of characters and symbols.
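The paper itself doesn’t include code, but the general idea of classifying a sequence of pen strokes is easy to sketch. Below is a minimal, hypothetical PyTorch example; the layer sizes, class count, and the (x, y, pen-down) encoding are my own assumptions, not the authors’ OnHSR-LSTM:

```python
import torch
import torch.nn as nn

class StrokeClassifier(nn.Module):
    """Toy LSTM that labels a handwritten symbol from its pen-stroke sequence.
    Each time step is a point encoded as (x, y, pen_down), an assumed encoding."""
    def __init__(self, n_classes=62, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, strokes):           # strokes: (batch, seq_len, 3)
        _, (h_n, _) = self.lstm(strokes)  # final hidden state of the top layer
        return self.head(h_n[-1])         # logits over character classes

# Classify a dummy batch of 8 sequences, each 150 sampled pen points long.
model = StrokeClassifier()
logits = model(torch.randn(8, 150, 3))
print(logits.shape)  # torch.Size([8, 62])
```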

Both neural networks were trained then evaluated using five different databases of handwritten scripts in the Arabic and Latin alphabets. Both systems achieved recognition rates of over 98%, which is ‘remarkable’ according to the team. Both systems, they explained, performed similarly to human subjects at the task.

“We now plan to build on and test our proposed recognition systems on a large-scale database and other scripts,” the researchers wrote.

The paper “Neural architecture based on fuzzy perceptual representation for online multilingual handwriting recognition” has been published on the preprint server arXiv.

‘Self-aware’, predatory, digital slug mimics the behavior of the animal it was modeled on

Upgrade, or the seeds of a robot uprising? U.S. researchers report they’ve constructed an artificially intelligent ocean predator that behaves a lot like the organism it was modeled on.

Slug P. californica.

Image credits Tracy Clark.

This frightening, completely digital predator — dubbed “Cyberslug” — reacts to food, threats, and members of its own ‘species’ much like the living animal that formed its blueprint: the sea slug Pleurobranchaea californica.

Slug in the machine

Cyberslug owes this remarkable resemblance to its biological counterpart to one rare trait among AIs — it is, albeit to a limited extent, self-aware. According to University of Illinois (UoI) at Urbana-Champaign professor Rhanor Gillette, who led the research efforts, this means that the simulated slug knows when it’s hungry or threatened, for example. The program has also learned through trial and error which other kinds of virtual critters it can eat, and which will fight back, in the simulated world the researchers pitted it against.

“[Cyberslug] relates its motivation and memories to its perception of the external world, and it reacts to information on the basis of how that information makes it feel,” Gillette said.

While slugs admittedly aren’t the most terrifying of ocean dwellers, they do have one quality that made them ideal for the team — they’re quite simple beings. Gillette goes on to explain that in the wild, sea slugs typically handle every interaction with other creatures by going through a three-item checklist: “Do I eat it? Do I mate with it? Or do I flee?”

While biologically simple, this process is quite complicated to handle successfully inside a computer program. That’s because, in order to make the right choice, an organism must be able to sense its internal state (i.e. whether it is hungry or not), obtain and process information from the environment (does this creature look tasty or threatening?), and integrate past experience (i.e. ‘did this animal bite/sting me last time?’). In other words, making the right choice involves the animal being aware of and understanding its own state, that of the environment, and the interaction between the two — which is the basis of self-awareness.

Schematic of the approach-avoid behavior in the slug.
Image credits Jeffrey W. Brown et al., 2018, eNeuro.

Some of Gillette’s previous work focused on the brain circuits that allow sea slugs to make these choices in the wild, mapping their function “down to individual neurons”. The next step was to test the accuracy of their models — and the best way to do this was to recreate the circuits of the animals’ brains and let them loose inside computer simulations. One of the earliest such circuit boards to represent the sea slug’s brain, constructed by co-author Mikhail Voloshin, software engineer at the UoI, was housed in a plastic foam takeout container.

In the meantime, the duo have refined both their hardware and the code used to simulate the critters. Cyberslug’s decision-making is based on complex algorithms that estimate and weigh its individual goals, just like a real-life slug would.

“[P. californica‘s] default response is avoidance, but hunger, sensation and learning together form their ‘appetitive state,’ and if that is high enough the sea slug will attack,” Gillette explains. “When P. californica is super hungry, it will even attack a painful stimulus. And when the animal is not hungry, it usually will avoid even an appetitive stimulus. This is a cost-benefit decision.”

Cyberslug behaves the same way. The more it eats, for example, the more satiated it becomes and the less likely it will be to bother or attack something else (no matter its tastiness). Over time, it can also learn which critters to avoid, and which can be preyed upon with impunity. However, if hungry enough, Cyberslug will throw caution to the wind and even attack prey that’s adept at fighting back, if nothing less belligerent comes around for it to eat.
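The real Cyberslug runs on simulated neural circuitry, but the cost-benefit rule described above can be caricatured in a few lines of Python. This is only a toy sketch under my own assumptions (the variable names, ranges, and the 0.5 threshold are invented), not the researchers’ model:

```python
def cyberslug_decision(hunger, learned_value, learned_pain, threshold=0.5):
    """Toy approach-avoid rule inspired by Pleurobranchaea foraging.
    All inputs are assumed to lie in [0, 1]; the 'appetitive state' weighs
    how rewarding a target seems against how badly it hurt last time,
    scaled by how hungry the agent currently is."""
    appetite = hunger * (learned_value - learned_pain)
    return "attack" if appetite > threshold else "avoid"

# A well-fed slug avoids even appetizing prey...
print(cyberslug_decision(hunger=0.2, learned_value=0.9, learned_pain=0.1))  # avoid
# ...while a starving one attacks even something that stung it before.
print(cyberslug_decision(hunger=1.0, learned_value=0.9, learned_pain=0.3))  # attack
```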

“I think the sea slug is a good model of the core ancient circuitry that is still there in our brains that is supporting all the higher cognitive qualities,” Gillette said. “Now we have a model that’s probably very much like the primitive ancestral brain. The next step is to add more circuitry to get enhanced sociality and cognition.”

This isn’t the first time we’ve seen researchers ‘digitizing’ the brains of simpler creatures — and this process holds one particular implication that I find fascinating.

Brains are, when you boil everything down, biological computers. Most scientists are pretty confident that we’ll eventually develop artificial intelligence, and sooner rather than later. But it also seems to me that there’s an unspoken agreement that the crux falls on the “artificial” part; that such constructs would always be lesser, compared to ‘true’, biological intelligence.

However, when researchers can quite successfully take a brain’s functionality and print it on a computer chip, doesn’t that distinction between artificial and biological intelligence look more like one of terminology rather than one of nature? If the computer can become the brain, doesn’t that make artificial life every bit as ‘true’ as our own, as worthy of recognition and safeguarding as our own?

I’d love to hear your opinion on that in the comments below.

The paper “Implementing Goal-Directed Foraging Decisions of a Simpler Nervous System in Simulation” has been published in the journal eNeuro.

Duo of neural networks get within a pixel of reading our mind and re-creating what’s there

Machines are starting to peer into the brain, see what we’re thinking, and re-create it.

Mind.

Image credits Nathan Sawaya / PxHere.

Full disclosure here, but I’m not the hardest worker out there. I also have a frustrating habit of timing my bouts of inspiration to a few minutes after my head hits the pillow.

In other words, I wave most of those bouts goodbye on my way to dream town.

But the work of researchers from Japan’s Advanced Telecommunications Research Institute (ATR) and Kyoto University could finally let me sleep my bouts away and also make the most of them — at the same time. The team has created a first-of-its-kind algorithm that can interpret and accurately reproduce images seen or imagined by a person.

Despite still being “decades” away from practical use, the technology brings us one step closer to systems that can read and understand what’s going on in our minds.

Eyes on the mind

Trying to tame a computer to decode mental images isn’t a new idea. It’s actually been in the works for a few years now — researchers have been recreating movie clips, photos, and even dream imagery from brains since 2011. However, all previous systems have been limited in scope and ability. Some can only handle narrow domains like facial shape, while others can only rebuild images from preprogrammed images or categories (‘bird’, ‘cake’, ‘person’, so on). Until now, all technologies needed pre-existing data; they worked by matching a subject’s brain activity to that recorded earlier while the human was viewing images.

According to researchers, their new algorithm can generate new, recognizable images from scratch. It works even with shapes that aren’t seen but imagined.

It all starts with functional magnetic resonance imaging (fMRI), a technique that measures blood flow in the brain and uses that to gauge neural activity. The team mapped out 3 subjects’ visual processing areas down to a resolution of 2 millimeters. This scan was performed several times. During every scan, each of the three subjects was asked to look at over 1000 pictures. These included a fish, an airplane, and simple colored shapes.

A new algorithm uses brain activity to create reconstructions (bottom two rows) of observed photos (top row). Image credits: Kamitani Lab.

The team’s goal here was to understand the activity that comes as a response to seeing an image, and eventually have a computer program generate an image that would stir a similar response in the brain.

However, here’s where the team started flexing their muscles. Instead of showing their subjects image after image until the computer got it right, the researchers used a deep neural network (DNN) with several layers of simple processing elements.

“We believe that a deep neural network is good proxy for the brain’s hierarchical processing,” says Yukiyasu Kamitani, senior author of the study.

“By using a DNN we can extract information from different levels of the brain’s visual system [from simple light contrast up to more meaningful content such as faces]”.

Through the use of a “decoder”, the team created representations of the brain’s responses to the images in the DNN. From then on, they no longer needed the fMRI measurements and worked with the DNN translations alone as templates.

Software teaching software

Lego man.

“We are the humans now.”
Image credits Elisa Riva.

Next came an iterative process in which the system created images in an attempt to get the DNN to respond similarly to the desired templates — be they of an animal or stained-glass window. It was a trial and error process in which the program started with neutral images (think TV static) and slowly refined them over the course of 200 rounds. To get an idea of how close it was to the desired image, the system compared the difference between the template and the DNN’s response to the generated picture. Such calculations allowed it to improve, pixel by pixel, towards the desired image.
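The team’s code isn’t reproduced here, but the loop they describe is essentially gradient-based feature matching: start from noise and nudge the image so that a pretrained network’s activations move toward the decoded template. Here’s a rough sketch of that idea in PyTorch; the choice of VGG19 as the stand-in network and the random "template" are my own assumptions, not the study’s setup:

```python
import torch
import torchvision.models as models

# A pretrained network stands in for the DNN 'proxy' of the visual system.
net = models.vgg19(weights="IMAGENET1K_V1").features[:16].eval()

def reconstruct(target_features, steps=200, lr=0.05):
    """Refine an image, starting from noise, so that its DNN features
    move toward the target template (in the study, decoded from fMRI)."""
    img = torch.randn(1, 3, 224, 224, requires_grad=True)  # 'TV static'
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(net(img), target_features)
        loss.backward()
        opt.step()
    return img.detach()

# Stand-in template: features of a random image (the real one comes from fMRI decoding).
with torch.no_grad():
    template = net(torch.rand(1, 3, 224, 224))
reconstruction = reconstruct(template)
print(reconstruction.shape)  # torch.Size([1, 3, 224, 224])
```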

To increase the accuracy of the final images, the team included a “deep generator network” (DGN), an algorithm that had been pre-trained to create realistic images from raw input. The DGN was, in essence, the one that put the finishing details on the images to make them look more natural.

After the DGN touched up the pictures, a neutral human observer was asked to rate the work. He was presented with two images to choose from and asked which was meant to recreate a given picture. The authors report that the human observer was able to pick the system’s generated image 99% of the time.

The next step was to integrate all this work with the ‘mind-reading’ bit of the process. They asked three subjects to recall the images that had been previously displayed to them and scanned their brains as they did so. It got a bit tricky at this point, but the results are still exciting — the method didn’t work well for photos, but for the shapes, the generator created a recognizable image 83% of the time.

It’s important to note that the team’s work seems very tidy and carefully executed. It’s possible that their system actually works really well, and the bottleneck isn’t in the software but in our ability to measure brain activity. We’ll have to wait for better fMRI and other brain imaging techniques to come along before we can tell, however.

In the meantime, I get to enjoy my long-held dream of having a pen that can write or draw anything in my drowsy mind as I’m lying half-asleep in bed. And, to an equal extent, ponder the immense consequences such tech will have on humanity — both for good and evil.

The paper “Deep image reconstruction from human brain activity” has been published on the preprint server bioRxiv.

A history of how computers went from stealing your heart, to stealing your job

Life as we know it today couldn’t exist without computers.

Computer board.

Image credits Michael Schwarzenberger.

It’s hard to overstate the role computers play in our lives today. Our silicony friends have left their mark on every facet of life, changing everything from how we date, to deep space exploration. Computers keep our planes flying and make sure there’s always enough juice in the grid for your toaster to work in the morning and the TV when you come back home. Through them, the POTUS’ rant on Twitter can be read by millions of people mere seconds after it’s typed. They also helped propel women toward equal participation in the labor market and full membership in civic society.

So let’s take a look at these literal wonders of technology, the boxes of metal and plastic that allowed the human race to outsource tedious work of the mind at an unprecedented pace.

The Brain Mk.1 Computer

When somebody today says ‘computer’, we instantly think of a PC — a personal computer. In other words, a machine, a device we’ve built to do calculations. We can make them do a lot of pretty spectacular stuff, such as predicting asteroid orbits or running Skyrim, as long as we can describe it to them in terms of math.

But just 70 years ago, that wasn’t the case. While the first part of that abbreviation is pretty self-explanatory, the second one is a tip of the hat to the device’s heritage: the bygone profession of the computer. Human computers to be more exact, though of course, the distinction didn’t exist at that time. They were, in the broadest terms, people whose job was to perform all the mathematical computations society required by hand. And boy was it a lot of math.

Hello, I am abacus and I will be your guide.
Image via Pixabay.

For example, trigonometry tables. The one I’ve linked there is a pretty bare-bones version. It calculates 4 values (sine, cosine, tangent, and cotangent) for every degree up to 45 degrees (because trigonometry is funny and these values repeat, sometimes going negative). So, it required 46 times 4 = 184 calculations to put together.

Now, it’s not actually hard to calculate trigonometry values, but doing so is tedious and prone to mistakes because it involves fractions and a lot of decimals. Another issue was the size of these things. A per degree table works well for teaching high-schoolers about trigo. For top-notch science, however, tables working in 0.1 or even 0.01 degree steps were required — meaning a single table could need up to tens of thousands of calculations.
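For a sense of scale, here’s what that kind of table looks like when a machine does the grinding. A trivial Python snippet (the layout is my own; the math is just the standard library) prints in a blink what a human computer would have worked through by hand, value by value:

```python
import math

# A per-degree trigonometry table for 0 through 45 degrees, the kind of
# thing human computers once calculated (and double-checked) by hand.
print(f"{'deg':>3} {'sin':>8} {'cos':>8} {'tan':>8} {'cot':>8}")
for deg in range(0, 46):
    rad = math.radians(deg)
    tan = math.tan(rad)
    cot = 1 / tan if deg != 0 else float("inf")
    print(f"{deg:>3} {math.sin(rad):8.4f} {math.cos(rad):8.4f} {tan:8.4f} {cot:8.4f}")
```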

Then, you had stuff like artillery tables. These were meant to help soldiers in the field know exactly how high to point the barrel of a gun so that the shell would fall where the other guys were. They’d tell you how much a shell would likely deviate, at what angle it would hit, and how long it would take for it to get to a target. The fancier ones would even take weather into account, calculating how much you needed to tip the barrel in one direction or another to correct for wind.

I don’t even want to think how much work went into making these charts. It wasn’t the fun kind of work where you cheekily browse memes when the boss isn’t looking, either — it’s hours upon thousands of hours of repetitive, elbow-breaking math. When you were finally done, well guess who’s getting seconds (protip: it’s computer-you) because each chart had to be tailor-made for each type of gun, and each type of ammo. After that, you had to check every result to see if you messed up even by a few decimals. And then, then, if some guys crunched a table you used as reference wrong, you’d have to do it all over again.

O, M, G.
Gun range tables for the US 3-inch field gun, models 1902-1905, 15 lb shell.
Image credits William Westervelt, “Gunnery and explosives for field artillery officers,” via US Army.

It’s not just the narrow profession of computers we’re talking about, though. They were only the tip of the iceberg. Businesses needed accountants, designers and architects, people to deliver mail, people to type and copy stuff, organize files, keep inventory, and innumerable other tasks that PCs today do for us. Their job contracts didn’t read ‘computers’ but they performed a lot of the tasks we now turn to PCs for. It’s all this work of gathering, processing, and transmitting data that I’ll be referring to when I use the term “background computational cost”.

Lipstick computers

Engineers, being the smart people that we are, soon decided all this number crunching wasn’t going to work for us — that’s how the computer job was born. Overall, this division of labor went down pretty well. Let’s look at the National Advisory Committee for Aeronautics, or NACA, the precursor of the NASA we all know and love today. The committee’s role was “to supervise and direct the scientific study of the problems of flight with a view to their practical solution.” In other words, they were the rocket scientists of a world which didn’t yet have rockets. Their main research center was the Langley Memorial Aeronautical Laboratory (LMAL), which in 1935 employed five people in its “Computer Pool.”

Boeing engineers designing a wing in the 80s. Today, we use AutoCAD.
Image via Quora.

The basic research path at LMAL was an iterative process. Engineers would design a new wing shape, for example, and then send it to a wind tunnel for testing. The raw data results would then be sent to the computers to be processed into all sorts of useful information — how much lift it would produce, how much load it could take, potential flaws, any graphics that would be needed, and so on. They were so useful to the research efforts, and so appreciated by the engineers, that by 1946 the LMAL would employ 400 ‘girl’ computers.

“Engineers were free to devote their attention to other aspects of research projects, while the computers received praise for calculating […] more in a morning than an engineer alone could finish in a day,” NASA recounts of a Langley memo.

“The engineers admit themselves that the girl computers do their work more rapidly and accurately than they would,” Paul Ceruzzi from Air & Space writes, citing the same document.

Computing, in tandem with the restrictions and manpower drain of WW2, would prove to be the key to liberalizing the job market for both sexes — all of those computers LMAL employed were women. People of both sexes used to take up the computing mantle — women, in particular, did so because they could work from home, receiving and returning their tasks via mail. But in a world where they were expected to be mothers, educated only as far as housekeeping, raising children, and social etiquette was concerned, the women at LMAL were working in cutting edge science. This was a well-educated group of women on which the whole research process relied — and they showed they could pull their weight just as well as their male counterparts, if not better.

Large-scale tabulation.

Can they run Solitaire though?
Image via Computer History Museum.

In the 1940s, Langley also began recruiting African-American women with college degrees to work as computers. Picture that; in a world where racial segregation went as far as separating bathrooms, dining rooms, and seat rights in buses, their work kept planes in the air and would eventually help put a man on the Moon.

Human computers were a massive boon to industry and research at the time. They’d make some mistakes, sure, but they were pretty good at spotting and fixing them. They weren’t very fast, but they were as fast as they had to be for the needs of the day. Finally, the processing boost they offered institutions, both public and private, justified their expense for virtually every business that needed them — so human computers drove the economy forward, helping create jobs for everyone else.

But they’ve had one other monumental effect on society: they brought women and minorities into the job market. They showed that brainpower doesn’t care about sex or race, that anyone can turn their mind to bettering society.

Actual computers

But as science advanced, economies expanded and became more complicated, and the background computational cost increased exponentially. While this was happening, people started to figure out that brain power also doesn’t care about species. Or biology, for that matter.

PC fan.

“Sweating? How crude.”
Image credits Fifaliana Rakotoarison.

The simple fact is that I can type this sentence in 10 or so seconds. I can make as many mistakes as I want, ’cause I have a backspace button. I can even scratch the whole thing and tell you to go read about the oldest tree instead. I can put those blue hyperlinks in and you’ll find that article by literally moving your finger a little. It’s extremely easy for me. That’s because it has a huge background computational cost. To understand just how much of an absolute miracle this black box I’m working on is, let’s try to translate what it does in human-computer terms.

I could probably make do with one typewriter if I don’t plan to backspace anything and just leave these things behind instead. I backspace a lot, though, so I’d say about five people would be enough to let me write and re-write this article at a fraction of the speed I do now. Somewhere between 15 to 20 people would allow me to keep comparable speed if I really try to limit edits to a minimum.

So far, I’ve used about 4 main sources of inspiration, Wiki for those tasty link trees, and nibbled around 11ish secondary sources of information. Considering I know exactly what I want to read and where to find it (I never do) I’d need an army of people to substitute for Google, source and carry the papers, find the exact paragraphs I want, at least one person per publication to serve the role links do now, and so on. But let’s be conservative, let’s say I can make do with 60 people for this bit. When I’m done, I’ll click a button and you will be able to read this from the other side of the planet across the span of time. No printing press, no trucks and ships to shuttle the Daily ZME, no news stands needed.

That all adds up to what, between 66 people and ‘a small village’? I can do their work from home while petting a cat, or from the office while petting three cats. All because my computer, working with yours, and god knows how many others in between, substitutes the work these people would have to do as a background computational cost and then carries that with no extra effort on my part.

Hello, I’m Mr. Computer and I’ll be your replacement. (Help me I’m a slave!)

Robot hand.

“Will you not shake my hand, fleshling?”
Image credits Department of Defense.

The thing is that you, our readers, come to ZME Science for information. That’s our product. Well, information and a pleasing turn of phrase. We don’t need to produce prints to sell since that would mix what you’re here for with a lot of other things you don’t necessarily want, such as printing and transport, which translate to higher costs on your part. Instead, I can use the internet to make that information available to you at your discretion with no extra cost. It makes perfect economic sense both for me, since I know almost 90% of American households have a PC, and you too, since you get news when you want it without paying a dime.

But it also cuts out a lot of the middlemen. That, in short, is why computers are taking our jobs.

By their nature, industries dealing with information can easily substitute manpower with background computational cost, which is why tech companies have incredible revenue per employee — they make a lot of money, but they only employ a few people with a lot of computers to help.

This effect, however, is seeping into all three main areas of the economy: agriculture, industry, and services. Smart agriculture and robot use will increase yields, and if they don’t drive the number of jobs down, they’ll at least lower the overall density of jobs in the sector. Industry is investing heavily in robots from simple ones to man (robot?) assembly lines, or highly specialized ones to perform underwater repairs. Service jobs, such as retail, delivery, and transport are also enlisting more and more robotic help in the shape of drones, autonomous shops, and robot cooks. Not only can they substitute for us, but the bots are actually much better at doing those things than we are.

That’s a problem I feel will eventually hit our society, and hit it very hard. If all you’re looking at is the bottom line, replacing human workers with computers makes perfect sense. It’s synonymous with replacing paid human labor with slave computational load, which costs nothing in comparison. That’s very good for business, but ruinous for society at large: our economies, for better or for worse, are tailored around consumption. Without a large percentage of people having disposable income (in the form of wages) to spend on the things we produce, our whole way of doing things goes up in flames really fast. There’s glaring wealth inequality in the world even now, and a lot of people are concerned that robots will collapse the system — that the ultra rich of today will end up owning all the machines, all the money, all the goods.

But I’m an idealist. Collapse might not be that bad a thing, considering that our economies simply aren’t sustainable. The secret is changing for the better.

Robot suit office.

Not what I meant.
Image credits Ben Husmann.

Computers are so embedded into our lives today and have become so widespread because they do free thinking and free labor for us. Everywhere around you, right now, there are computers doing stuff you want and need. Stuff that you or someone else had to do for you, but not anymore. A continuous background computational cost that’s now, well, free. Dial somebody up, and a computer is handling that call for you — we don’t pay it anything. A processor is taking care that your clothes come out squeaky clean and not too wet from the washing machine at the laundromat. Another one handles your bank account. Press a button and you’ll get money out of a box at the instructions of a computer. We don’t pay them anything. We pay the people who ‘own’ them, however. They’re basically slaves. Maybe that has to change.

In a society where robots can do virtually infinite work for almost no cost, does work still have value? If we can make all the pots and pans we need without anyone actually putting in any effort to make them, should pots and pans still have a cost? We wouldn’t think it’s right to take somebody’s labor as our own, so why would we pay ‘someone’ for the work robots do for free? And if nobody has to work because we have all we need, what’s the role of money? Should we look towards a basic income model or are works of fiction, such as Iain Banks’ The Culture, which do away with money altogether, the best source of inspiration?

Oh, and we’re at a point where self-aware artificial intelligence is crossing the boundary between ‘fiction’ and ‘we’ll probably see this soon-ish’. That’s going to complicate matters even further and, I think, put a very thick blanket of moral and ethical concerns on top. So these aren’t easy questions, but they’re questions we’re going to have to sit down and debate sooner rather than later.

No matter where the future takes us, however, I have to say that I’m amazed at what we managed to achieve so far. Humanity, the species that managed to outsource thought, risks making labor obsolete.

Now that’s a headline I’m pining for.

Artificial synapse brings us one step closer to brain-like computers

Researchers have created a working artificial, organic synapse. The new device could allow computers to mimic some of the brain’s inner workings and improve their capacity to learn. Furthermore, a machine based on these synapses would be much more energy efficient than modern computers.

It may not look like much, but this device could revolutionize our computers forever.
Image credits Stanford University.

As far as processors go, the human brain is hands down the best we’ve ever seen. Its sheer processing power dwarfs anything humans have put together, for a fraction of the energy consumption, and it does it with elegance. If you allow me a car analogy, the human brain is a formula 1 race car that somehow uses almost no fuel and our best supercomputer… Well, it’s an old, beat-down Moskvich.

And it misfires.
Image credits Sludge G / Flickr.

So finding a way to emulate the brain’s hardware has understandably been high on the wishlist of computer engineers. A wish that may be granted sooner than they hoped. Researchers at Stanford University and Sandia National Laboratories have made a breakthrough that could allow computers to mimic one element of the brain — the synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper.

“It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

Copycat

The artificial synapse is made up of two thin, flexible films holding three embedded terminals connected by salty water. It works similarly to a transistor, with one of the terminals dictating how much electricity can flow between the other two. This behavior allowed the team to mimic the processes that go on inside the brain — as they zap information to one another, neurons create ‘pathways’ of sorts through which electrical impulses can travel faster. Every successful impulse requires less energy to pass through the synapse. For the most part, we believe that these pathways allow synapses to store information while they process it for comparatively little energy expenditure.

Because the artificial synapse mimics the way synapses in the brain respond to signals, it removes the need to separately store information after processing — just like in our brains, the processing creates the memory. These two tasks are fulfilled simultaneously for less energy than other versions of brain-like computing. The synapse could allow for a much more energy-efficient class of computers to be created, addressing a problem that’s becoming more and more pressing in today’s world.

Modern processors need huge fans because they use a lot of energy, giving off a lot of heat.

One application for the team’s synapses could be more brain-like computers that are especially well suited to tasks that involve visual or auditory signals — voice-controlled interfaces or driverless cars, for example. Previous neural networks and artificially intelligent algorithms used for these tasks are impressive but come nowhere near the processing power our brains hold in their tiny synapses. They also use a lot more energy.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper.

“Instead of simulating a neural network, our work is trying to make a neural network.”

The team will program these artificial synapses the same way our brain learns — they will progressively reinforce the pathways through repeated charge and discharge. They found that this method allows them to predict what voltage will be required to get a synapse to a specific electrical state and hold it with only 1% uncertainty. Unlike a traditional computer, where data has to be saved to the hard drive or be lost when the machine shuts down, the neural network can just pick up where it left off without the need for any data banks.

One of a kind

Right now, the team has only produced one such synapse. Sandia researchers have taken some 15,000 measurements during various tests of the device to simulate the activity of a whole array of them. This simulated network was able to identify handwritten digits (between 0-9) with 93 to 97% accuracy — which, if you’ve ever used the recognize handwriting feature, you’ll recognize as an incredible success rate.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper.

“We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

One of the reasons these synapses perform so well is the number of states they can hold. Digital transistors (such as the ones in your computer/smartphone) are binary — they can either be in state 1 or 0. The team has been able to successfully program 500 states in the synapse, and the higher the number the more powerful a neural network computational model becomes. Switching from one state to another required roughly a tenth of the energy modern computing systems drain to move data from processors to memory storage.
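To get a feel for why the number of states matters, here’s a toy NumPy sketch (the numbers and the quantization scheme are my own assumptions, not the Sandia simulation). It snaps a set of "weights" to a fixed number of evenly spaced conductance levels, the way an array of such synapses would have to store them; with 2 levels most of the information is lost, while with 500 levels the stored values track the originals closely:

```python
import numpy as np

def quantize(weights, n_states):
    """Snap each weight to the nearest of n_states evenly spaced levels,
    mimicking a synapse that can only hold a finite number of conductance states."""
    lo, hi = weights.min(), weights.max()
    levels = np.linspace(lo, hi, n_states)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)  # stand-in for a trained network's weights

for n in (2, 16, 500):
    err = np.mean((w - quantize(w, n)) ** 2)
    print(f"{n:>3} states -> mean squared storage error: {err:.5f}")
```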

Still, this means that the artificial synapse is currently 10,000 times less energy efficient than its biological counterpart. The team hopes they can tweak and improve the device after trials in working devices to bring this energy requirement down.

Another exciting possibility is the use of these synapses in-vivo. The devices are largely composed of organic elements such as hydrogen or carbon, and should be fully compatible with the brain’s chemistry. They’re soft and flexible, and use the same voltages as those of human neurons. All this raises the possibility of using the artificial synapse in concert with live neurons in improved brain-machine interfaces.

Before considering any biological applications, however, the team wants to test a full array of artificial synapses.

The full paper “A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing” has been published in the journal Nature Materials.

Scientists fed a game into players’ brains to pave the way for artificial senses

University of Washington researchers have hooked some people’s brains up to a computer and asked them to play a simple game — no monitor, speakers, or other stimulus included. And it worked. This is a vital first step in showing how humans can interact with virtual realities only through direct brain stimulation.

Test subjects demonstrating how humans can interact with virtual realities via direct brain stimulation.
Image credits University of Washington.

“The way virtual reality is done these days is through displays, headsets and goggles, but ultimately your brain is what creates your reality,” said UW professor of Computer Science & Engineering and senior author Rajesh Rao.

The paper describes the first case of humans playing a simple, 2D computer game only through input from direct brain stimulation. Five players were presented with 21 different mazes to navigate, with a choice to move forward or down. The game offered them information about obstacles in the form of phosphenes, perceived blobs or bars of light generated through transcranial magnetic stimulation — a technique that uses magnetic coils placed near the skull to stimulate specific areas of the brain.

“The fundamental question we wanted to answer was: Can the brain make use of artificial information that it’s never seen before that is delivered directly to the brain to navigate a virtual world or do useful tasks without other sensory input? And the answer is yes.”

The participants made the right move (avoided obstacles) 15% of the time when they didn’t receive any input. But under direct brain stimulation, they made the right move 92% of the time. They also got better at the game the more practice they got at detecting the artificial stimuli. This goes to show that new information — from artificial sensors or computers — can be successfully encoded and transmitted to the brain to solve tasks. The technology behind the experiment — transcranial magnetic stimulation — is usually employed to study how the brain works, but the team showed how it can be used to convey information to the brain instead.

“We’re essentially trying to give humans a sixth sense,” said lead author Darby Losey.

“So much effort in this field of neural engineering has focused on decoding information from the brain. We’re interested in how you can encode information into the brain.”

This trial was intended as a proof of concept and as such used a very simple binary system — whether a phosphene was present or not — as feedback for the players. But the experiment shows that in theory, the approach can be used to transmit information from any sensor, such as cameras or ultrasounds, to the brain. Even a binary system such as the one used for the game could be of real help to certain individuals, for instance by helping the blind navigate.
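To see why even one bit per decision helps so much, here’s a deliberately crude simulation of the idea (maze length, obstacle odds, and the detection rate are invented; the real experiment’s 15% and 92% figures come from its specific maze design, which this doesn’t reproduce). The only point is the gap between guessing blind and acting on a binary phosphene cue:

```python
import random

def play_maze(n_steps, stimulation, detect_rate=0.95):
    """Fraction of correct moves in one simulated maze run.
    With stimulation, a binary phosphene cue signals whether an obstacle
    lies ahead (detected with probability detect_rate); without it,
    the player can only guess."""
    correct = 0
    for _ in range(n_steps):
        obstacle_ahead = random.random() < 0.5
        if stimulation:
            cue = obstacle_ahead if random.random() < detect_rate else not obstacle_ahead
            move_down = cue                      # dodge when the cue says 'obstacle'
        else:
            move_down = random.random() < 0.5    # blind guess
        correct += (move_down == obstacle_ahead)
    return correct / n_steps

random.seed(1)
with_cue = [play_maze(20, stimulation=True) for _ in range(21)]
without = [play_maze(20, stimulation=False) for _ in range(21)]
print(f"with stimulation:    {sum(with_cue) / len(with_cue):.0%} correct moves")
print(f"without stimulation: {sum(without) / len(without):.0%} correct moves")
```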

“The technology is not there yet — the tool we use to stimulate the brain is a bulky piece of equipment that you wouldn’t carry around with you,” said UW assistant professor of psychology and co-author Andrea Stocco.

“But eventually we might be able to replace the hardware with something that’s amenable to real world applications.”

The team is currently investigating how to create more complex perceptions of various senses by modulating the intensity and location of stimulation in the brain.

“Over the long term, this could have profound implications for assisting people with sensory deficits while also paving the way for more realistic virtual reality experiences,” Rao concluded.

The full paper “Navigating a 2D Virtual World Using Direct Brain Stimulation” has been published in the journal Frontiers in Robotics and AI.

Artificial synapse brings us one step closer to building a brain-like computer

A new study describes a novel computing component which emulates the way neurons connect in the human brain. This “memristor” changes its electrical resistance depending on how much current has already flowed through it, mimicking the way neurons transmit signals through synapses, the team writes.

Image credits Pixabay / JarkkoManty.

This device could lead to significant advancements in brain-like computers, capable of handling perceptual and learning tasks much better than traditional computers while being much more energy efficient.

“In the past, people have used devices like transistors and capacitors to simulate synaptic dynamics, which can work, but those devices have very little resemblance to real biological systems,” said study leader Joshua Yang, a professor of electrical and computer engineering at the University of Massachusetts Amherst.

The human brain has somewhere between 86 and 100 billion neurons which connect in up to 1,000 trillion (that’s a one followed by 15 zeros) synapses — making your brain an estimated 1 trillion bit per second processor. Needless to say, computer scientists are dying to build something with even a fraction of this processing power, and a computer that mimics the brain’s structure — and thus its computing power and efficiency — would be ideal.

Building a brain

When an electrical signal hits a synapse in your brain it prompts calcium ions to flood into it, triggering the release of neurotransmitters. This is what actually transmits the information over the synapse, causing an impulse to form in the other neuron and so on. The “diffusive memristor” described in the paper is made up of silver nanoparticle clusters embedded in a silicon oxynitride film sandwiched between two electrodes.

The film is an insulator, but apply a voltage across the device and the clusters start to break apart through a combination of electrical forces and heat. The nanoparticles diffuse through the film to form a conductive filament, allowing current to flow from one electrode to the other. Cut the voltage, the temperature drops, and the clusters re-form — similar to how calcium ions behave in a synapse.

“With the synaptic dynamics provided by our device, we can emulate the synapse in a more natural way, more direct way and with more fidelity,” Yang told Live Science.

The device can thus mimic short-term plasticity in neurons, the researchers said. Trains of low-voltage, high-frequency pulses will gradually increase the device’s conductivity until a current can pass through. But, if the pulses continue, the conductivity will eventually decrease.

The team also combined their diffusion memristor with a drift memristor, which relies on electrical fields and is optimized for memory applications. This allowed them to demonstrate a form of long-term plasticity called spike-timing-dependent plasticity (STDP), adjusting connection strength between neurons based on the timing of impulses. Drift memristors have previously been used to approximate calcium dynamics. But, because they’re based on physical processes very different from the ones our brains employ, they have limited fidelity and variety in what functions they can simulate.
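STDP itself has a simple textbook form: a synapse is strengthened when the presynaptic neuron fires just before the postsynaptic one, weakened when the order is reversed, and the effect decays exponentially with the timing difference. Here’s a small sketch of that canonical learning rule (the parameter values are arbitrary; this is the rule the device demonstrates, not the memristor physics):

```python
import math

def stdp_weight_change(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classic exponential STDP rule.
    dt = t_post - t_pre in milliseconds: positive dt (pre fires before post)
    potentiates the synapse, negative dt depresses it, and the magnitude
    falls off exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

for dt in (-40, -10, -2, 2, 10, 40):
    print(f"t_post - t_pre = {dt:+4d} ms -> dw = {stdp_weight_change(dt):+.4f}")
```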

“You don’t just simulate one type of synaptic function, but [also] other important features and actually get multiple synaptic functions together,” Yang said.

“The diffusion memristor is helping the drift-type memristor behave similarly to a real synapse. Combining the two leads us to a natural demonstration of STDP, which is a very important long-term plasticity learning rule.”

Reproducing synaptic plasticity is essential to creating a brain-like computer. And we should do our best to create one, Yang said.

“The human brain is still the most efficient computer ever built,” he added.

The team uses fabrication processes similar to those being developed by computer memory companies to scale up memristor production. Silver doesn’t lend itself well to all these methods; however, copper nanoparticles could be used instead, Yang said. He added that the approach is definitely scalable, and single-unit systems should be comparable to biological synapses in size. But he added that in multiunit systems, the devices will likely need to be bigger due to practical considerations involved in making a larger system work.

The full paper “Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing” has been published in the journal Nature Materials.

Scientists make DNA analog circuit that can add and subtract

Duke scientists Song and Reif made an analog DNA circuit inside a test-tube. Credit: John Joyner

The blueprint molecule that codes every living thing on the planet was exploited by researchers to make something far less glamorous, but still very exciting: a simple calculator. The Duke University researchers toyed around with DNA and managed to make an analog circuit out of it that can do basic mathematical operations like addition and subtraction.

Previously, scientists made DNA computers that could do things like calculate square roots or even play tic-tac-toe. These projects were digital-only, though. The DNA circuit made at Duke University is all analog, meaning it doesn’t need additional hardware to convert the signal into 1s and 0s.

An analog computer is a form of computer that uses the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved. In our case, instead of measuring changes in voltage as most analog computers and devices do, the DNA circuit reads the concentrations of various DNA strands as signals.

DNA has this natural ability to zip and unzip, as its nucleotide bases can pair up and bind in predictable ways. This makes it an excellent material for logic gates.

First, Duke graduate student Tianqi Song and computer science professor John Reif synthesized short pieces of DNA, which were either single-stranded or double-stranded with single-stranded ends, then mixed them together. What happens next is that a single DNA strand will perfectly bind to the end of a partially double-stranded DNA. In the process, the previously bound strand detaches — like someone cutting in on a dancing couple.

The orphaned strand can then pair up with other complementary molecules in the same circuit, creating a domino effect.

Credit: ACS Synthetic Biology

As the reaction reaches equilibrium, the researchers measure concentrations of outgoing strands to solve math problems, based on input concentrations and the predictable nature of locking DNA. And if this sounds like it takes a lot of time, you’re right. Unlike a silicon-based computer which can perform a simple math operation in an instant, the DNA circuit takes hours.
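In the idealized picture, the arithmetic really is just bookkeeping on concentrations: an addition gate releases one output strand for every input strand it absorbs, so the output concentration settles near the sum of the inputs, while a subtraction gate uses a sink that consumes output strands in proportion to the value being subtracted. Here’s a deliberately naive sketch of that end-state accounting (no reaction kinetics, and not the actual gate designs from the paper):

```python
def dna_add(conc_a, conc_b):
    """Idealized addition gate: every input strand displaces one output
    strand, so the equilibrium output concentration is the sum (in nM)."""
    return conc_a + conc_b

def dna_subtract(conc_a, conc_b):
    """Idealized subtraction gate: a sink consumes output strands at the
    concentration of the subtrahend; concentrations can't go negative."""
    return max(conc_a - conc_b, 0.0)

# 30 nM + 20 nM of input strands -> ~50 nM of output strand, read out at equilibrium.
print(dna_add(30.0, 20.0))       # 50.0
print(dna_subtract(30.0, 20.0))  # 10.0
```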

So why are scientists even working on this? Well, it’s not just for the sake of experimentation, although this too is well worth it sometimes. I mean, a computer made from DNA? That just sounds very exciting to try, besides the practical applications which we might uncover by working in such an obscure field.

The other thing that makes this sort of research novel is that unlike previous attempts, the circuit is analog. The test-tube DNA circuit can also operate in wet environments and can be very tiny, unlike a conventional digital computer. Theoretically, analog DNA circuits can be used to make some very sophisticated operations such as logarithms and exponentials — this is next on the list for the Duke team. There’s also hope that at some point these circuits could be embedded in bodies and release DNA and RNA when a specific blood marker value lies outside a given range.

Findings appeared in the journal ACS Synthetic Biology.

 

Ancient Greeks used this 2,100-year-old analog computer for both astronomy and astrology

Since it was found in a shipwreck off the Greek island of Antikythera in 1901, the intricate Antikythera Mechanism has puzzled scientists. Way ahead of its time, this complex mechanism of revolving bronze gears and display was used by the ancient Greeks more than 2,000 years ago to calculate the positions of the stars and planets. Now, a decade-long investigation suggests the Greeks used the world’s oldest computer for astrological purposes, as well.

The Antikythera mechanism on display. Credit: Wikimedia Commons

The level of craftsmanship of the ancient computer, now housed at the National Archaeological Museum in Athens, is simply mind boggling when you factor in its age. Though obscured by corrosion after it was lost for thousands of years at the bottom of the sea, the Antikythera Mechanism still has visible gears with neat triangular teeth not all that different from those in modern clocks, and a ring divided into degrees.

The mechanism featured a handle on the side for winding the mechanism forward or backward. As an ancient Greek navigator turned the handle, trains of interlocking gear wheels drove at least seven hands at various speeds. Again, that’s very similar to how a clock works, only instead of hours, minutes, and seconds, the Antikythera Mechanism would show the position of the Sun, the moon, and each of the five planets visible to the naked eye (Mercury, Venus, Mars, Jupiter and Saturn). A rotating ball showed the current phase of the Moon and inscriptions explained which star rose and set on a particular given date. On the back of the case, two dials showed the calendar and the timing of lunar and solar eclipses.

Nothing as close to this level of sophistication would appear in over a thousand years.

A 2007 front panel recreation. Credit: Wikimedia Commons

Besides navigation, the Antikythera Mechanism was also likely used for more esoteric purposes, an international team of researchers reported at the Katerina Laskaridis Historical Foundation Library in Greece the other day.

For more than ten years, researchers have been trying to decipher the tiny engravings on the mechanism. The years of water and wildlife exposure have taken their toll, and corrosion hampered most of the researchers’ efforts. Thanks to cutting-edge imaging techniques like X-ray scanning, the team was able to see past the many layers of accumulated sediments and paint a picture of what the fragments belonging to the ancient computer must have looked like before they were spoiled by the hand of time.

Schematic showing the level of intricacy involved in turning the more than two dozen gears of the device. Credit: Wikimedia Commons

In total, 3,500 characters of finely inscribed text were uncovered. These were engraved on the inside covers and visible front and back sections of the mechanism.

“Now we have texts that you can actually read as ancient Greek, what we had before was like something on the radio with a lot of static,” said team member Alexander Jones, a professor of the history of ancient science at New York University.

“It’s a lot of detail for us because it comes from a period from which we know very little about Greek astronomy and essentially nothing about the technology, except what we gather from here,” he said. “So these very small texts are a very big thing for us.”

Inscriptions on the back cover of Fragment 19. These were previously obscured by sediments. Credit: Antikythera Mechanism Research Project

The text was not like a manual, but rather like a label that described what the whole mechanism does. “It’s not telling you how to use it, it says ‘what you see is such and such,’ rather than ‘turn this knob and it shows you something,'” Jones said. The user of the device was likely very educated.

Jones and colleagues confirmed that part of the mechanism’s function was to predict solar and lunar eclipses, events which the Ancient Greeks thought could impact human affairs. As such, besides its primarily astronomical purpose, the Antikythera Mechanism was also trusted as a tool for reading the future.

Another important finding researchers could infer from the newly uncovered text is that the Antikythera Mechanism was made on the island of Rhodes by at least two people, judging from how the engravings were etched. Archeologists are still searching for the other missing fragments at the site of the ancient shipwreck. If these are found, maybe the Antikythera Mechanism could turn out to be even more complex than we think it is.

The results were published in a special issue of the journal Almagest.

Your smartphone is millions of times more powerful than the Apollo 11 guidance computers

Image: NASA engineers operating IBM System/360 Model 75 mainframe computers.

In 1969, humans set foot on the moon for the very first time. It’s really difficult to imagine the technical challenges of landing on the moon more than five decades ago if you’re not a rocket scientist, but what’s certain is that computers played a fundamental role – even back then.

Despite the fact that NASA computers were pitiful by today’s standards, they were fast enough to guide humans across 356,000 km of space from the Earth to the Moon and return them safely. In fact, during the first Apollo missions, critical safety and propulsion mechanisms in spacecraft were controlled by software for the first time. These developments formed the basis for modern computing.

Apollo Guidance Computer, 0.043 MHz clock speed. Image: NASA

Essential to the lunar missions was a now ancient command module computer designed at MIT called the Apollo Guidance Computer (AGC). The computer used an operating system that allowed astronauts to type in nouns and verbs that were translated into instructions for their spaceship. To control the hardware, the AGC ran machine code routines; the guidance program used on the lunar module was called Luminary. Here’s what some of the code looked like when it was used for Apollo 13 and 14.

While it was handy, the AGC wasn’t particularly powerful, having 64 KB of memory and operating at 0.043 MHz. In fact, it was less equipped than a modern toaster!

A pocket calculator or even a USB-C charger has more computing power than the best computers used to send astronauts to the moon

Besides the AGC, thousands of flight technicians and computer engineers at the Goddard Space Flight Center employed IBM System/360 Model 75 mainframe computers in order to make independent computations and maintain communication between Earth and lunar landers.

These computers cost $3.5 million a piece and were the size of a car. Each could perform several hundred thousand addition operations per second, and their total memory capacity was in the megabyte range. Programs were developed for the 75s that monitored the spacecraft’s environmental data and astronauts’ health, which were at the time the most complex software ever developed.


Not bad for a computer that could barely run Mario Bros. Image: NASA

Today, however, even a simple USB stick or WiFi router is more powerful than these mainframes, let alone an iPhone. The iPhone 6 uses an Apple-designed 64-bit A8 chip built on the ARM architecture and composed of roughly two billion transistors. It runs at 1.4 GHz and can process approximately 1.2 instructions per cycle in each of its two cores, or about 3.36 billion instructions per second. Put simply, the iPhone 6's clock is about 32,600 times faster than the best Apollo-era computers, and it could carry out instructions 120,000,000 times faster. You wouldn't be wrong in saying an iPhone could be used to guide 120,000,000 Apollo-era spacecraft to the moon, all at the same time.

Computers are so ubiquitous nowadays that even a pocket calculator has far more processing power, RAM, and storage than the state of the art in computing during the Apollo era. For instance, the TI-84 calculator developed by Texas Instruments in 2004 is roughly 350 times faster than the Apollo computers, with 32 times more RAM and 14,500 times more ROM.

Even USB-C chargers are faster than Apollo computers. The Anker PowerPort Atom PD 2 runs at ~48 times the clock speed of the Apollo 11 Guidance Computer with 1.8x the program space.
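As a sanity check on the arithmetic above, here is a short Python sketch that reproduces the headline ratios from the figures quoted in this article. Treat the inputs as rough, popularized numbers rather than precise benchmarks.

```python
# Back-of-the-envelope comparison using the figures quoted above.

agc_clock_hz = 0.043e6          # Apollo Guidance Computer, ~0.043 MHz
iphone6_clock_hz = 1.4e9        # iPhone 6, ~1.4 GHz
iphone6_ipc = 1.2               # ~1.2 instructions per cycle per core
iphone6_cores = 2

# Clock-speed ratio: how many times faster the iPhone 6 ticks
clock_ratio = iphone6_clock_hz / agc_clock_hz
print(f"Clock ratio: ~{clock_ratio:,.0f}x")   # ~32,600x, as quoted above

# Instruction throughput of the iPhone 6
iphone6_ips = iphone6_clock_hz * iphone6_ipc * iphone6_cores
print(f"iPhone 6 throughput: ~{iphone6_ips / 1e9:.2f} billion instructions/s")  # ~3.36

# USB-C charger comparison quoted above (Anker PowerPort Atom PD 2)
charger_clock_hz = 48 * agc_clock_hz
print(f"Charger clock: ~{charger_clock_hz / 1e6:.1f} MHz (~48x the AGC)")
```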

These sorts of comparisons aren't quite fair, though. It's like making a side-by-side comparison between the first airplanes designed by the Wright brothers and an F-18 fighter jet. Sure, both can fly, but the two are, technologically speaking, worlds apart. After all, the iPhone handily beats even one of the most famous and far more recent supercomputers ever built: IBM's Deep Blue, which defeated Garry Kasparov in a historic 1997 chess showdown.

With this in mind, one can only marvel at the kind of computing power each of us holds at our fingertips. Never mind that we mostly use it for frivolous matters. Imagine what you'll be holding in your hand (or inside it) 20 years from now.


This computer clock uses water droplets, manipulating information and matter at the same time

Computers and water don’t mix well, but that didn’t stop Manu Prakash, a bioengineering assistant professor at Stanford, to think outside the box. Using magnetic fields and droplets of water infused with magnetic nanoparticles, Prakash demonstrated a computing system that performs logic and control functions by manipulating H2O instead of electrons. Because of its general nature, the water clock can perform any operations a conventional CPU clock can. But don’t expect this water-based computer to replace the CPU in your smartphone or notebook (electrons speed vs water droplet – not a chance). Instead, it might prove extremely useful in situations where logic operations and manipulation of matter need to be performed at the same time.



Iron tracks, each 1 mm long, arranged at right angles to each other. This is the first fluid-based computer that controls multiple droplets simultaneously. Image: Nature Physics

As you can imagine, making a computer clock based on a fluid is no easy task. Prakash realized that one way to manipulate the flow is through an external magnetic field. He designed a series of tiny T- and I-shaped pieces of iron and strategically placed them on a glass slide. Another glass slide was then placed on top, with a layer of oil sandwiched in between. Water droplets infused with magnetic nanoparticles were then carefully introduced into the system. Electromagnetic coils placed around the machine manipulate and direct the droplets, much like the way ferrofluid art installations work.

GIF: coils and droplets racing inside the grooves. YouTube


Depending on how the metal shapes were placed, the droplets travel along a distinct pattern. Once the magnetic field is turned on, each rotation of the field counts as one clock cycle, and with each cycle every drop marches exactly one step forward, as recorded in the video below.

The design of the iron tracks is essential, as Physics World reports:

“If the base was just a sheet of iron with no tracks, the droplets would travel around in circles, following the energy minima created by the field. However, by carefully designing the iron tracks and incorporating breaks at the right places, the researchers can create a “ratchet” effect whereby every complete rotation causes a droplet to move into an adjacent energy minimum. Therefore, instead of travelling in circles, a droplet moves in a specific direction through the circuit. Furthermore, by creating two tracks that are mirror images of each other, two droplets will rotate in opposite directions in response to the same field.”

Because of a combination of hydrodynamic and magnetic forces, the droplets repel each other. This is a good thing, since it keeps them separated and allows for the water-based equivalent of a digital transistor: if a droplet is present at a specific location, the value is "1"; if absent, "0". Basically, this is the basis for a droplet logic gate. Since the machine works with fluids, virtually any kind of fluid chemical can be introduced into the computer. This way, scientists can sort and mix chemicals on the fly while also performing computing operations. But the ultimate purpose isn't to supersede the digital processor. It's about much more than that: the "algorithmic manipulation of matter", which could enable us to "learn to manipulate matter faster… in a fundamentally new way." The findings appeared in Nature Physics.
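To picture the scheme, here is a heavily simplified Python sketch of the idea: droplets sit on tracks, every rotation of the field advances each droplet one step, and a logic value is read off as droplet-present (1) or droplet-absent (0) at a given position. The track layout and readout positions here are invented for illustration and are not the geometry used in the actual device.

```python
# Toy abstraction of the droplet clock: one field rotation = one clock cycle,
# and every droplet advances exactly one step along its track.
# (Illustrative only; not the real track geometry from the paper.)

def step(droplet_positions):
    """Advance every droplet one cell per clock cycle (field rotation)."""
    return [pos + 1 for pos in droplet_positions]

def read_bit(droplet_positions, readout_cell):
    """A cell reads 1 if some droplet occupies it, 0 otherwise."""
    return 1 if readout_cell in droplet_positions else 0

# Two input tracks: a droplet present at position 0 encodes a logical 1.
input_a = [0]      # droplet present -> 1
input_b = []       # no droplet      -> 0

# Run three clock cycles on each track.
for _ in range(3):
    input_a = step(input_a)
    input_b = step(input_b)

a = read_bit(input_a, readout_cell=3)
b = read_bit(input_b, readout_cell=3)
print("A =", a, "B =", b, "A OR B =", a | b, "A AND B =", a & b)
```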

“Imagine, when you run a set of computations wherein not only information is processed but also the physical matter is algorithmically manipulated. We have just made this possible at the mesoscale,” Prakash said.

Next, Prakash and colleagues are concentrating on scaling down the design.

 

 


This computer costs $9 and it's not as bad as you might think. No, seriously


That tiny thing is the CHIP, a $9 computer with extraordinary versatile capabilities. Image: Kickstarter

Next Thing Co, a fledgling company started by three budding hardware enthusiasts, has launched a Kickstarter campaign promising a nine-dollar computer. The computer, called CHIP, can do what 90% of people usually use computers for: office apps, web browsing, and games. The team hoped to raise $50,000 to supplement their own budget and start rolling orders off an assembly line in China. As I'm writing this, $1,040,006 has already been pledged, and the number is still swelling with 24 days to go. Are we finally seeing the fruits of democratized computing and economies of scale?


Everybody takes computers for granted today, and it's easy to see why, considering they're ubiquitous. There are billions of people, however, who've never had a PC, and the CHIP computer (nice word play) might be a perfect fit for them, even though its makers designed it as a hacker's playground.

“To sell C.H.I.P. for $9, we need to order tens of thousands of CHIPs. By using common, available, and volume-produced processor, memory, and wifi CHIPs, we are able to leverage the scales at which tablet manufacturers operate to get everyone the best price,” the hardware enthusiasts explain in their official KickStarter video.

The CHIP comes with a 1 GHz processor, 512 MB of RAM, and 4 GB of storage, along with WiFi and Bluetooth, all packed on a board the size of a matchbox. Any kind of display, old or new, can be hooked up through its adapters (VGA and HDMI). You can use it as a programming platform, as an office workstation, to surf the web, to play games, or even as a music player. There's a portable version called Pocket C.H.I.P. which gives C.H.I.P. a 4.3” touchscreen, QWERTY keyboard, and 5-hour battery, in a case small enough to fit in your back pocket. The Pocket C.H.I.P. is priced at $49, though.

A demo of a video game played on CHIP. Image: Kickstarter

“Save your documents to CHIP’s onboard storage. Surf the web…Play games with a Bluetooth controller. But wait. there’s more.” The camera shifts to a mannequin’s pocket, and the presenter says “This is PocketCHIP. It makes CHIP portable. Take CHIP, put it into PocketCHIP and you can use CHIP anywhere.”

CHIP is also fully open source.

“We built C.H.I.P. to make tiny powerful computers more accessible and easier to use. A huge part of making C.H.I.P. accessible is making sure that it can change to meet the needs of the community. That’s why both C.H.I.P. and PocketC.H.I.P. are both TOTALLY OPEN SOURCE. This means all hardware design files schematic, PCB layout and bill of materials are free for you the community to download, modify and use.”

The Pocket CHIP. Image: Kickstarter


Book review: ‘Ada’s Algorithm’


Ada’s Algorithm, How Lord Byron’s Daughter Ada Lovelace Launched the Digital Age
By James Essinger
Melville House, 270pp | Buy on Amazon

In the past few years, the person thought to have written the first computer program has received more and more attention: Augusta Ada King, Countess of Lovelace, or Ada Lovelace for short. But how is it possible that the daughter of a notorious writer and poet, Lord Byron, wrote the first computer program years before the computer as we know it was invented, in an era when women did not have much access to information and were not taken seriously by society?

Ada’s Algorithm, How Lord Byron’s Daughter Ada Lovelace Launched the Digital Age comes up with the key answers to this question and many more, such as: Who was Ada Lovelace? What did a computer program mean in the 1800s? How did Lord Byron influence all of this? If you want to read about Ada’s computer program itself, I am afraid this is not the book for you, even if the author provides links to more technical information (like this one, which provides a full explanation of the engines and the algorithms). The book’s aim is to offer a picture of Ada’s life and of those who influenced Lady Lovelace’s work.

It starts with a brief account of Lord Byron’s short and notorious life, up to the moment he met Ada’s mother, Anne Isabella “Annabella” Milbanke. At times the author provides more than I wished to know, such as the passages describing Lord Byron’s affairs, and more about his life in general than I expected, but one of the book’s purposes is to piece together Ada’s way of thinking and personality by studying her roots.

Because Ada’s mother, Annabella, had grown sick of Lord Byron’s behavior and did not want her child to become like her father, adventurous and prone to daydreaming, she did everything she could to prevent Ada from growing into a wild young woman. I think this is one of the key moments in Ada’s life: because of her mother’s fears, Ada was essentially forced to learn mathematics from a young age and to become a more rational and logical person.

The second important person in her life was the mathematician and inventor Charles Babbage, who is considered the “father of the modern computer”. Even though, as you will find out from the book, he never actually finished either his Difference Engine or his Analytical Engine, he still shaped the computer revolution.

And now comes the interesting part: how all of this led Ada to write the first computer program. She did it at a time when the modern idea of an algorithm had not yet taken shape, even though nowadays it seems like common sense to us. Ada helped Babbage with his Analytical Engine by putting down on paper, step by step, how the engine should carry out a mathematical calculation: “Note G is especially relevant to us today. It describes step by step, in detail, the operations through which the punched cards would proceed to weave an even longer sequence of Bernoulli numbers on the Analytical Engine. Note G is highly complex, juggling mathematics and technology. Most important of all, it is in effect a program containing instructions for a computer.” The author emphasizes that the duo never used the word computer in their letters or other writings in the sense we use it today, though they did use it several times to mean a clerk who carries out arithmetical and mathematical calculations.
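For a modern point of reference, the computation Note G describes, generating Bernoulli numbers, takes only a few lines in a present-day language. The sketch below uses the standard recurrence for Bernoulli numbers; it is not a transcription of Ada’s note, just an illustration of the kind of result her program was designed to produce.

```python
# Bernoulli numbers via the classic recurrence
#   sum_{k=0}^{m} C(m+1, k) * B_k = 0   for m >= 1, with B_0 = 1
# -- the kind of sequence Note G set out to compute on the Analytical Engine.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0 .. B_n as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-acc / (m + 1))
    return B

for i, b in enumerate(bernoulli(8)):
    print(f"B_{i} = {b}")
# B_0 = 1, B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, B_8 = -1/30 ...
```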

What I enjoyed most is that the book also contains some of Ada’s letters to her mother and even to Babbage; actually reading how Ada used to think and talk is amazing. I also liked that, even though the book was written and published recently, it still makes me feel like I am living in those times.

All in all, Ada’s Algorithm paints the whole picture of Lady Lovelace’s environment and background, starting from her father’s early life and covering almost every person who may have influenced her life and her work on the first algorithm, up until she died of cancer at the age of 36 (the same age as her father).

As I said at the beginning, the book’s purpose isn’t to explain her inventions, but to lay out the whole picture of the events and people that made possible the creation, in 1843, of what might be the first computer program. I would recommend this book to anyone looking for valuable insight into Ada Lovelace’s life and the emergence of modern computers.

Using cells as living calculators

MIT engineers have taken a step toward the realm of sci-fi gadgets, transforming bacterial cells into living calculators that can compute logarithms, divide, and take square roots using three or fewer genetic parts.

Using cells as analog circuits


Inspired by how analog electronic circuits function, the researchers created synthetic computation circuits by combining existing genetic “parts,” or engineered genes, in novel ways. They perform calculations by using biochemical functions already existing in the cell, instead of engineering new ones, thus making them more efficient than the digital circuits pursued by most synthetic biologists.

“In analog you compute on a continuous set of numbers, which means it’s not just black and white, it’s gray as well,” says Rahul Sarpeshkar, an associate professor of electrical engineering and computer science and the head of the Analog Circuits and Biological Systems group at MIT.

These kinds of “computer cells” could be very useful for designing cellular sensors for pathogens or other molecules, the researchers say. Furthermore, they could be combined with digital circuits to create cells that take a specific action when triggered by a specific stimulus, such as a certain protein crossing a threshold or the temperature rising above a set point.

“You could do a lot of upfront sensing with the analog circuits because they’re very rich and a relatively small amount of parts can give you a lot of complexity, and have that output go into a circuit that makes a decision — is this true or not?” says Lu, an assistant professor of electrical engineering and computer science and biological engineering.

In this study, Sarpeshkar and his colleagues mapped analog electronic circuits onto cells. These analog circuits are very efficient because they can take advantage of the continuous range of inputs that typically occurs in biological systems, and they can also exploit the natural continuous computing functions already present in cells.

Digital circuits

Digital circuits represent every value as 1s and 0s, black and white; this can be very useful for performing logic operations such as AND, NOT, and OR inside cells, which many synthetic biologists have done. Essentially, these functions tell you whether a certain element is present, but not how much of it.

They also require more parts, and a significant input of energy, which can drain the energy of the cell hosting them.

“If you build too many parts to make some function, the cell is not going to have the energy to keep making those proteins,” Sarpeshkar says.

Doing the math

To create an analog adding circuit that can calculate the total quantity of two or more compounds in a cell, the scientists combined two different circuits, each of which responds to a different input. In the first, a sugar called arabinose turns on a transcription factor that activates the gene coding for green fluorescent protein (GFP); in the second, a signaling molecule known as AHL also turns on a gene that produces GFP. By measuring the total amount of GFP, you can read off the sum of the two separate inputs.
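To make the adder idea concrete, here is a deliberately simplified Python model: each input drives its own GFP-producing branch, the two contributions add, and the total fluorescence approximates the sum of the inputs. The response functions, gains, and function names are invented for illustration; the real circuits use the cell’s own biochemistry, not these equations.

```python
# Idealized model of the analog adder: two independent branches each produce
# GFP in proportion to their input, and the measured output is their sum.
# (Linear responses and gains are illustrative stand-ins, not measured values.)

def gfp_from_arabinose(conc, gain=1.0):
    """Hypothetical branch 1: arabinose-activated GFP expression."""
    return gain * conc

def gfp_from_ahl(conc, gain=1.0):
    """Hypothetical branch 2: AHL-activated GFP expression."""
    return gain * conc

def total_fluorescence(arabinose, ahl):
    """The cell 'adds' by simply expressing both reporters at once."""
    return gfp_from_arabinose(arabinose) + gfp_from_ahl(ahl)

print(total_fluorescence(arabinose=2.0, ahl=3.0))  # ~5.0 -> the analog sum
```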

To subtract or divide, the process was a little different: they swapped one of the activator transcription factors for a repressor, which turns off the production of GFP when the input molecule is present. They also built an analog square-root circuit that requires just two parts (compared to its digital equivalent, which needs over 100).

“Analog computation is very efficient,” Sarpeshkar says. “To create digital circuits at a comparable level of precision would take many more genetic parts.”

Another circuit developed by the team can divide by computing the ratio between the concentrations of two different molecules. Cells often perform this kind of computation on their own, which is critical for monitoring the relative concentrations of certain molecules.

“That ratio is important for controlling a lot of cellular processes, and the cell naturally has enzymes that can recognize those ratios,” Lu says. “Cells can already do a lot of these things on their own, but for them to do it over a useful range requires extra engineering.”

Their work, however, extended the range of these calculations up to 10,000, which is much higher than what happens naturally.

“It’s nice to see that frameworks from electrical engineering can be concisely and elegantly mapped into synthetic biology,” says Eric Klavins, an associate professor of electrical engineering and adjunct associate professor of biological engineering at the University of Washington who was not part of the research team.

They are now trying to create similar analog circuits in non-bacterial cells, including mammalian cells.

“We have just scratched the surface of what sophisticated analog feedback circuits can do in living cells,” says Sarpeshkar, whose lab is working on building further new analog circuits in cells. He believes the new approach of what he terms “analog synthetic biology” will create a new set of fundamental and applied circuits that can dramatically improve the fine control of gene expression, molecular sensing, computation and actuation.

Via MIT.


Scientists create advanced biological transducer

Microprocessor with DNA (illustration). Scientists have developed and constructed an advanced biological transducer, a computing machine capable of manipulating genetic codes, and using the output as new input for subsequent computations (Credit: © Giovanni Cancemi / Fotolia)


Researchers at the Technion-Israel Institute of Technology have devised an advanced biological transducer capable of manipulating genetic information and using the output as new input for sequential computations. The findings mark a new step forward for efforts that might one day enable new biotech possibilities such as individualized gene therapy and cloning.

In a sense, all biological beings are walking, breathing computers: biomolecular computers. Each of the countless molecules that make up our bodies communicates with the others in a logical manner that can be described and predicted. The input is a molecule that undergoes specific, programmed changes, following a specific set of rules (the software), and the output of this chemical computation is another well-defined molecule.

Synthetic biomolecular computers are of great interest to scientists because they offer the possibility of actively manipulating biological systems and even living organisms. The fact that no interface is required makes them extremely appealing, since everything, including the “hardware”, “software”, and information (input and output), consists of molecules that interact with one another in a cascade of programmable chemical events.

“Our results show a novel, synthetic designed computing machine that computes iteratively and produces biologically relevant results,” says lead researcher Prof. Ehud Keinan of the Technion Schulich Faculty of Chemistry. “In addition to enhanced computation power, this DNA-based transducer offers multiple benefits, including the ability to read and transform genetic information, miniaturization to the molecular scale, and the aptitude to produce computational results that interact directly with living organisms.”

The transducer could be used on genetic material to evaluate and detect specific sequences, and to alter and algorithmically process genetic code. Similar devices, says Prof. Keinan, could be applied to other computational problems, and strides in this direction are already bearing fruit. In 2011, researchers from the Weizmann Institute of Science in Rehovot, Israel, developed a biomolecular computer that could autonomously sense many different types of molecules simultaneously. Just a few months ago, the first working biological transistor was unveiled by Stanford researchers, allowing computers to function inside living cells, something scientists have been working toward for many years.

“All biological systems, and even entire living organisms, are natural molecular computers. Every one of us is a biomolecular computer, that is, a machine in which all components are molecules “talking” to one another in a logical manner. The hardware and software are complex biological molecules that activate one another to carry out some predetermined chemical tasks. The input is a molecule that undergoes specific, programmed changes, following a specific set of rules (software) and the output of this chemical computation process is another well defined molecule.”

The Israeli researchers’ findings were reported in the journal Chemistry & Biology (Cell Press). [source]

Scientists Build Computer That Never Crashes

Scientists and researchers at the University College of London (UCL) have built a self-healing computer that may end computer crashes forever, according to the New Scientist.

Called a “systemic computer,” the machine, which is being developed by computer scientist Dr. Peter Bentley and UCL research engineer Christos Sakellariou, is now operating, apparently crash-free, at the UCL campus.

New Scientist reveals that the core design of the computer takes its cue from “the apparent randomness found in nature.”   The machine can instantly recover from crashes by repairing corrupted data.


Mission critical

Observers have noted that the UCL computer’s revolutionary design may one day prove useful for systems that perform mission-critical tasks, such as those in hospitals, aircraft, and emergency phone services in disaster-prone areas.

The new technology could have a similarly beneficial impact on business and banking systems, which rely on massive computer networks to manage their financial transactions. The Royal Bank of Scotland, for instance, had to set aside more than £100 million to compensate customers who lost money when the bank’s computer system crashed in June last year.

One at a time

Typically, computers work through instructions in sequential order. They execute one instruction at a time, fetching data from memory and executing the command before storing the result back in memory. The process is repeated again and again until the list of instructions is completed. Computers do this under the control of a program counter, which keeps track of the next instruction to be executed.
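A stripped-down fetch-execute loop makes the point: one instruction at a time, with a program counter deciding what comes next. This Python sketch is a generic illustration of the sequential model, not the UCL team’s code.

```python
# A minimal sequential machine: fetch, execute, store, repeat --
# all under the control of a program counter.

memory = {"x": 2, "y": 3, "result": 0}

program = [
    ("LOAD", "x"),        # fetch x into the accumulator
    ("ADD", "y"),         # add y to the accumulator
    ("STORE", "result"),  # write the accumulator back to memory
    ("HALT", None),
]

accumulator = 0
pc = 0                    # program counter: index of the next instruction

while True:
    op, arg = program[pc]  # fetch
    pc += 1                # advance the counter before executing
    if op == "LOAD":
        accumulator = memory[arg]
    elif op == "ADD":
        accumulator += memory[arg]
    elif op == "STORE":
        memory[arg] = accumulator
    elif op == "HALT":
        break

print(memory["result"])    # 5 -- one instruction at a time, strictly in order
```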

This sequential system is excellent for crunching numbers, but does not lend itself well to tasks that require simultaneous operations.  “Even when it feels like your computer is running all your software at the same time, it is just pretending to do that, flicking its attention very quickly between each program,” Dr. Bentley told the New Scientist. 

Nature isn’t like that

Dr. Bentley asserts that, because the typical computer operates on a sequential system, it is ill-suited to modelling natural processes, such as how neurons work or how bees swarm.  “Nature isn’t like that,” Dr. Bentley says.  “Its processes are distributed, decentralised, and probabilistic.  And they are fault tolerant, able to heal themselves.  A computer should be able to do that,” he adds.

Dr. Bentley and Sakellariou have designed the UCL computer so that data and instructions are combined.  The instructions are stored redundantly across the machine’s various systems.  Each system, in turn, is self-reliant and has memory containing context-sensitive data.  This means it can only interact with other, similar systems.

The New Scientist notes that, while other operating systems crash when they fail to access a bit of memory, the same is not true for the UCL systemic computer.  The UCL machine’s design precludes crashing.  When one system is damaged, the machine can instantly repair itself by accessing the necessary data and instructions from other systems in the pool.

Mimicking Mother Nature’s Randomness

Rather than relying on a program counter, the UCL computer uses a pseudo-random number generator which is designed to mimic nature’s randomness.  The systems that comprise the pool then carry out instructions simultaneously, with no single system taking precedence over others.

“The systems interact in parallel, and randomly, and the result of a computation simply emerges from those interactions,” Dr. Bentley explains.
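Here is a loose toy model of that idea in Python. It is not the UCL design itself, just an illustration under strong simplifications: a small pool of redundant “systems”, each bundling an instruction with its data, is executed in random order, and knocking one copy out still leaves a duplicate to get the job done.

```python
# Toy sketch of a 'systemic' style of execution (illustrative only, not the
# UCL machine): redundant instruction+data bundles run in random order, and
# losing one copy doesn't lose the computation.
import random

# Each system carries its own instruction and data; every one exists twice.
systems = [
    {"key": "a_squared", "compute": lambda: 7 ** 2},
    {"key": "a_squared", "compute": lambda: 7 ** 2},   # redundant copy
    {"key": "b_doubled", "compute": lambda: 21 * 2},
    {"key": "b_doubled", "compute": lambda: 21 * 2},   # redundant copy
]

# Simulate damage: one randomly chosen copy is corrupted and removed.
random.shuffle(systems)
systems.pop()

results = {}
while systems:
    # No program counter: pick any remaining system at random and run it.
    unit = systems.pop(random.randrange(len(systems)))
    results.setdefault(unit["key"], unit["compute"]())

print(results)   # both results still emerge despite the damaged copy
```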

The UCL computer doesn’t sound like it should work, the New Scientist observes, but it does, and it works much faster than expected. Dr. Bentley and Mr. Sakellariou are now working on teaching the computer to rewrite its own code in response to changes in its environment, through machine learning.


Computer analyses fine art like an expert would. Art only for humans?

Where fine art is concerned, or the visual arts in general for that matter, complex cognitive functions are at play as the viewer analyzes a work. As you go from painting to painting, especially between different artists, discrepancies in style can be recognized, and trained art historians can catch even the most subtle brush strokes and identify an artist or period based solely on them. For a computer, this kind of analysis is extremely difficult, but computer scientists at Lawrence Technological University in Michigan saw it as an exciting challenge and developed software that can accurately analyze a painting based solely on visual cues, without any human intervention, much the way an expert would.

The program was fed 1,000 paintings from 34 well-known artists and was tasked with grouping the artists by artistic movement and providing a map of similarities and influential links. At first, the program separated the artists into two main, distinct groups: modern (16 painters) and classical (18 painters).

For each painting, the software analyzed 4,027 numerical image-content descriptors, quantitative measures of features such as texture, color, and shape. Using pattern-recognition algorithms and statistical computations, it grouped artists into styles based on their similarities and dissimilarities, and then quantified those similarities.
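The grouping step is, at heart, a clustering problem over those feature vectors. The sketch below shows the general flavor using numpy and scikit-learn (assumed to be installed) with made-up feature vectors; it is not the researchers’ actual feature set or algorithm.

```python
# Generic flavor of the approach: represent each painter as a numerical
# feature vector and let a clustering algorithm group similar styles.
# (Random features and two clusters here are purely illustrative.)
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
painters = ["Raphael", "Da Vinci", "Michelangelo", "Dali", "Ernst", "de Chirico"]

# Stand-in for the 4,027 image descriptors per painter (texture, color, shape...)
features = np.vstack(
    [rng.normal(loc=0.0, scale=1.0, size=64) for _ in painters[:3]]
    + [rng.normal(loc=5.0, scale=1.0, size=64) for _ in painters[3:]]
)

labels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
for painter, label in zip(painters, labels):
    print(f"{painter}: group {label}")
```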


From these two broad groups, the software sub-categorized even further. For instance, it automatically placed the High Renaissance artists Raphael, Leonardo da Vinci, and Michelangelo very close to each other. The software also branched out sub-groups by similarity, so artists like Gauguin and Cézanne, both considered Post-Impressionists, were identified by the algorithm as being similar in style to Salvador Dalí, Max Ernst, and Giorgio de Chirico, who are all considered by art historians to be part of the Surrealist school.


The researchers conclude that their “results demonstrate that machine vision and pattern recognition algorithms are able to mimic the complex cognitive task of the human perception of visual art, and can be used to measure and quantify visual similarities between paintings, painters, and schools of art.”

While a computer can analyze art and reach conclusions similar to those of an expert human art historian, a serious question arises: is a computer, in this case, able to understand art? And if so, will a computer ever be able to feel art?

The findings were reported in the Journal on Computing and Cultural Heritage (JOCCH).

[via KurzweilAi]

 

 


Scientists devise computer using swarms of soldier crabs

Computing using unconventional methods found in nature has become an important branch of computer science, one that might help scientists construct more robust and reliable devices. For instance, the ability of biological systems to assemble and grow on their own enables much higher interconnection densities, while swarm-intelligence algorithms mimic the way ant colonies find optimal paths to food sources. But it’s one thing to be inspired by nature to build computing devices, and quite another to use nature itself as the main computing component.

A series of snapshots in OR gate of swarm balls (credit: Yukio-Pegio Gunji, Yuta Nishiyama, Andrew Adamatzky)


Previously, scientific groups have used all sorts of natural computation mechanisms, from fluids to DNA and bacteria. Now, a team of computer scientists led by Yukio-Pegio Gunji from Kobe University in Japan has created a computer that exploits the swarming behaviour of soldier crabs. Yup, that’s not something you hear every day.

For their eccentric choice of computing agent, the researchers drew inspiration from the billiard-ball computer model, a classic reversible mechanical computer used mainly for didactic purposes, first proposed in 1982 by Edward Fredkin and Tommaso Toffoli.

The billiard-ball computer model can be used as a Boolean circuit: instead of wires, it uses the paths on which the balls travel; information is encoded by the presence or absence of a ball on a path (1 and 0); and its logic gates (AND/OR/NOT) are simulated by collisions of balls at points where their paths cross. Now, instead of billiard balls, think crabs!
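A minimal sketch of that encoding, assuming the standard Fredkin-Toffoli “interaction gate”: each input is the presence (1) or absence (0) of a ball, or here a crab swarm, and the output paths carry the logic. The Python below is a truth-table illustration of the principle, not a simulation of the crab experiment.

```python
# Billiard-ball style logic: a bit is the presence (1) or absence (0) of a
# ball/swarm on a path, and gates come from collisions where paths cross.

def interaction_gate(a: int, b: int):
    """Fredkin-Toffoli interaction gate: collision products of two inputs
    (a AND b appears on two deflected paths; it is listed once here)."""
    return {
        "a AND b": a & b,            # both present -> they collide and deflect
        "a AND NOT b": a & (1 - b),  # a passes straight through, undisturbed
        "b AND NOT a": b & (1 - a),  # b passes straight through, undisturbed
    }

def or_gate(a: int, b: int):
    """OR can be read out by merging the gate's output paths."""
    outs = interaction_gate(a, b)
    return outs["a AND b"] | outs["a AND NOT b"] | outs["b AND NOT a"]

for a in (0, 1):
    for b in (0, 1):
        print(f"OR({a}, {b}) = {or_gate(a, b)}")
```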

“These creatures seem to be uniquely suited for this form of information processing . They live under the sand in tidal lagoons and emerge at low tide in swarms of hundreds of thousands.

What’s interesting about the crabs is that they appear to demonstrate two distinct forms of behaviour. When in the middle of a swarm, they simply follow whoever is nearby. But when they find themselves on the edge of a swarm, they change.

Suddenly, they become aggressive leaders and charge off into the watery distance with their swarm in tow, until by some accident of turbulence they find themselves inside the swarm again.

This turns out to be hugely robust behaviour that can be easily controlled. When placed next to a wall, a leader will always follow the wall in a direction that can be controlled by shadowing the swarm from above to mimic the presence of the predatory birds that eat the crabs.” (MIT tech report)

Thus, the researchers were able to construct a computer that uses soldier crabs to transmit information. They managed to build a fairly reliable OR gate using the crabs; their AND gates, however, were a lot less dependable. A more crab-friendly environment would have yielded better results, the researchers believe.

The findings were published in the journal Emerging Technologies.

Shorties: Young adults browse mobile more than desktop

This is one of those statistics that reminds you the 2000s are nearing their end: according to data published by Opera, young adults who use its browser browse the web more on mobile than on a traditional desktop.

“We have often said that the next generation will grow up knowing the Web mostly through their mobile phones,” said Jon von Tetzchner, Co-founder, Opera. “We see this trend already emerging in different regions around the world. The mobile Web will bring a profound change in how we connect with one another. I think the results from this survey already show that change taking place.”

Here are some more stats about the young folks from the US:

74% have browsed the mobile web on public transit
30% have posted to their blog from their phone
90% have shared photos via picture messaging
44% have asked someone on a date via text message
15% have read an online newspaper via mobile “frequently”
15% have “never” read an online newspaper via mobile
56% have uploaded content to video-sharing sites
48% are “very comfortable” with the idea of purchasing goods online
70% have online friends they’ve never met in real life
51% of respondents in the U.S. use Opera Mini to access the web more than a desktop or laptop computer. (Most other countries have much higher percentages.)

1962 invention could be worth billions

When this ultra-strong glass was invented more than 40 years ago, it was deemed interesting, but a manufacturing use for it was hard to find. The glass is about three times harder than regular glass, while also being thinner (about as thin as a dime).

Motion tablet with Gorilla Glass

The so-called Gorilla Glass will probably be worth billions once it is used to create TV or tablet screens that are hard to break, scratch, or bend. In the screen business, Gorilla Glass is the next big thing.

Now, if you ask me, the big question is why so much time passed before this invention got the recognition it deserves. Or, even better, how many more potentially very useful inventions are lurking in the depths of the past, waiting to be rediscovered?

This reminds me of another invention: optical fiber. When a chemist named Frank Hyde found a way to turn fused silica into optical fiber, it was considered little more than a fancy trinket. It wasn’t until the 1970s that it started to be used in communications. Not quite a get-rich-quick story, but still, better late than never, even when it comes to inventions.