Computer chip can mimic human neurons using only beams of light

Researchers at MIT have constructed a brain-mimicking chip that uses light instead of electricity, which could provide a significant boost in processing power and enable the wide-scale use of artificial neural networks.

Red Laser Diffraction.

Image via Wikimedia.

As far as processing power goes, nature’s designs still beat ours fair and square. Thankfully, we’re not above copying our betters, so designing a computer that uses the same architecture and functions similarly to the human brain has been a long-standing goal of the computer industry.

We have made some headway on these types of computers using algorithms known as artificial neural networks. They’ve proven themselves on tasks that would swamp traditional computers, such as detecting lies, recognizing faces, even predicting heart attacks. The catch is that such algorithms require solid processing power to work, and most computers can’t run them very well, if at all.

Follow the light

To address this shortcoming, one team of researchers has swapped the ubiquitous transistor for beams of light that mimic the activity of neurons on a chip. These devices can process information faster and use less energy than traditional chips, and could be used to put together “optical neural networks” that make deep learning applications many times faster and more efficient than they are today.

That’s because computers today rely on transistors, tiny devices that allow or cut off the flow of electricity through a circuit. They’re massively better than the vacuum tubes of yore, but still limited in what they can do. Scientists have figured out for some time now that light could speed up certain processes that computers have to perform since light waves can travel and interact in parallel so they can perform several functions at the same time. Another advantage is that once you generate light, it keeps going by itself, whereas transistors require a constant flow of energy to operate — meaning higher energy costs and the need for greater heat dispersal.

Still, one issue in particular stymied research into optical neural networks. The first photonic processors put together by scientists using optical equipment were massive, requiring tabletops full of mirrors and precision lenses to do the same job a modest computer processor could pull off. So for a long time, light processors were considered a nice idea but impractical for real applications.

But in the classic MIT fashion, a team of researchers from the Institute has managed to prove everyone wrong and condense all that equipment into a modest-sized computer chip just a few millimeters across.

Thinking with lasers

Artificial neural network.

Artificial neural networks layer neurons and have the first group do a preliminary analysis, pass their results on to the next layer, and so on until the data is fully crunched.

Image via Wikimedia.

The device is made of silicon and simulates a network of 16 neurons arranged in four layers of four neurons each. Information is fed into the device using a laser beam split into four smaller beams. Each beam’s brightness can be altered to encode a different number or piece of information, and the brightness of each exiting beam represents the problem’s solution (be it a number or another type of information).

Data processing is performed by crossing different light beams inside the chip, making them interact — either amplifying or canceling each other out. These crossing points simulate how a signal passing from one neuron to another can be intensified or dampened in the brain, depending on the strength of the connection between them. The beams also pass through simulated neurons that further adjust their intensities.
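Read this way, a layer of beam crossings behaves like a weighted sum: each crossing scales one beam’s contribution to an output, and the combined intensity then passes through a neuron-like response. Here is a minimal pure-Python sketch of that idea; the weights, input brightnesses, and clamping nonlinearity are all illustrative stand-ins, not the chip’s actual parameters:

```python
def forward_layer(inputs, weights):
    """One layer: each output beam is the interference (weighted sum)
    of all input beams, followed by a saturating 'neuron' response."""
    outputs = []
    for row in weights:  # one row of crossing strengths per output beam
        total = sum(w * x for w, x in zip(row, inputs))
        # Simulated neuron: clamp intensity into [0, 1] (illustrative nonlinearity)
        outputs.append(max(0.0, min(1.0, total)))
    return outputs

# Four input beams, with brightness encoding four numbers (made-up values)
beams = [0.2, 0.9, 0.5, 0.1]

# Hypothetical crossing strengths for one 4-by-4 layer; negative values stand in
# for beams canceling each other out, positive ones for amplification
weights = [
    [0.5, 0.1, -0.2, 0.3],
    [-0.1, 0.6, 0.2, 0.0],
    [0.2, -0.3, 0.4, 0.1],
    [0.0, 0.2, 0.1, 0.5],
]

# Chain four such layers, mirroring the chip's four-layers-of-four design
signal = beams
for _ in range(4):
    signal = forward_layer(signal, weights)
print(signal)  # brightness of the four exiting beams encodes the answer
```

The point of the sketch is only the shape of the computation: a mesh of pairwise interactions is mathematically a matrix-vector product, which is exactly the operation neural networks spend most of their time on.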

The team then tested the optical network against a traditional counterpart on vowel sound recognition. After training on recordings of 90 people making four vowel sounds, transistor-powered computers simulating a 16-neuron network got it right 92% of the time. The optical network had a success rate of just 77%, but performed the task much faster and with greater efficiency — and the team reckons it can close the accuracy gap once the teething problems are solved.

One of the best parts about the new network is that it relies on components made of silicon, which is already massively employed in making computer components. In other words, the optical chips could be produced at very low cost, since the manufacturing infrastructure is already in place. So once the team works out all the kinks and upgrades the device with more neurons, we may be poised to supply very fast, very energy-efficient neural networks for a wide variety of applications — from data centers and autonomous cars to national security services.

The study’s primary authors, Yichen Shen, a physicist, and Nicholas Harris, an electrical engineer, are starting a new company towards that end and hope to have a product ready in two years.

The paper “Neuromorphic Silicon Photonic Networks” has been published in the e-print archive arXiv.
