
Researchers build advanced microprocessor out of carbon nanotubes

A group of researchers at MIT has developed a modern microprocessor built from carbon nanotube transistors, which are widely seen as a faster, greener alternative to their traditional silicon counterparts.


A close-up of a modern microprocessor built from carbon nanotube field-effect transistors. Credit: MIT

The microprocessor can be built using traditional silicon-chip fabrication processes and represents a major step toward making carbon nanotube microprocessors more practical.

Silicon transistors have carried the computer industry for decades. Every couple of years, the industry has managed to shrink transistors and cram more of them onto chips to carry out increasingly complex computations. But experts now foresee a time when silicon transistors will stop shrinking.

Making carbon nanotube field-effect transistors (CNFETs) has become a major goal for building next-generation computers. Research indicates they have properties that promise around 10 times the energy efficiency and far greater speeds compared to silicon. But when fabricated at scale, the transistors often come with many defects.

Researchers at MIT have invented new techniques to dramatically limit defects and enable full functional control in fabricating CNFETs, using processes in traditional silicon chip foundries. They demonstrated a 16-bit microprocessor with more than 14,000 CNFETs that perform the same tasks as commercial microprocessors.

“This is by far the most advanced chip made from any emerging nanotechnology that is promising for high-performance and energy-efficient computing,” said co-author Max M. Shulaker. “There are limits to silicon. If we want to continue to have gains in computing, carbon nanotubes represent one of the most promising ways to overcome those limits.”

But the new carbon nanotube microprocessor isn’t ready to take over from silicon chips just yet. Each of its transistors is about a micrometer across, compared with current silicon transistors that are tens of nanometers across. Each carbon nanotube transistor in this prototype can flip on and off about a million times each second, whereas silicon transistors can flicker billions of times per second.

Shrinking the nanotube transistors would help electricity zip through them with less resistance, allowing the devices to switch on and off more quickly. At the same time, aligning the nanotubes in parallel, rather than using a randomly oriented mesh, could also increase the electric current through the transistors to boost processing speed.

The researchers have now begun transferring their manufacturing techniques to a silicon chip foundry through a program run by the Defense Advanced Research Projects Agency, which supported the research.

Although no one can say when chips made entirely from carbon nanotubes will hit the shelves, Shulaker says it could happen in fewer than five years.

Credit: Massachusetts Institute of Technology.

Tree-on-a-chip mimics passive pumping mechanism found in plants and trees

Inspired by the natural hydraulic pumps found in trees and plants, MIT engineers have devised a ‘tree-on-a-chip’ that mimics the process. The tiny chip can pump water out of a tank for days without moving parts or external pumps. Such a chip could prove useful in a wide range of applications that require only minimal energy input.

Credit: Massachusetts Institute of Technology.


The group, led by Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, was looking for an effective way to drive hydraulic actuators for small robots. The ultimate goal is to make a small robot that’s just as versatile as Boston Dynamics’ Big Dog, a four-legged, 240-pound robot that runs and jumps over almost any kind of terrain, no matter how rough.

Scaling down the hydraulic pumps and actuators found in Big Dog can be extremely challenging, however, not to mention expensive. Looking for the best way to generate passive pumping, the MIT researchers eventually found their solution in plain sight: trees.

“It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction,” Hosoi said.

Beneath the thick bark, every tree hides a complex plumbing system consisting of a vast network of conduits. This network consists of xylem and phloem tissues, which transport water and nutrients (sugars) much like our own vascular system does. These conducting tissues start in the roots, run up through the trunk, separate into the branches, and then branch even further into every leaf.

Propelled by surface tension, water travels up the xylem channels, then diffuses through a semipermeable membrane into the phloem channels that contain sugar and other nutrients. This way, vital water migrates from the roots to the crown, and sugars produced by the leaves travel back to the roots.
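The driving force here is essentially osmosis: sugar on the phloem side of the membrane pulls water across from the xylem side. As a rough back-of-the-envelope illustration (not a figure from the study), the van ’t Hoff relation gives the osmotic pressure generated by a modest 0.1 M sugar concentration difference at room temperature:

Π = cRT ≈ (100 mol/m³) × (8.314 J mol⁻¹ K⁻¹) × (298 K) ≈ 2.5 × 10⁵ Pa,

or roughly 2.4 atmospheres of pumping pressure, with no moving parts at all.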

Previously, other groups had tried to make microfluidic chips that emulate this delicate balance, but these fell short because pumping could typically be sustained for only a couple of minutes. Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering, found out what previous models were missing: the tree’s leaves.

While other tree-on-a-chip designs only emulated the xylem and phloem, Comtet helped devise a new model which also accounts for the sugar transport in the leaves.

First, two plastic slides were sandwiched together, and small channels representing the xylem and phloem were drilled into them. The xylem channel is filled with water, and the phloem channel with water and sugar. Between the two slides, a semipermeable material mimics the membrane separating xylem and phloem. The real innovation was a second membrane placed over the phloem channel, topped with a common sugar cube representing the additional sugar that diffuses in from the tree’s leaves.

The whole setup was hooked up to a tank filled with water on one end and a beaker at the other end where the water would flow. Tests showed that constant flow could be sustained for several days as opposed to mere minutes. In other words, this was a huge breakthrough.

 “As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Journal Reference: Passive phloem loading and long-distance transport in a synthetic tree-on-a-chip, Nature Plants, nature.com/articles/doi:10.1038/nplants.2017.32

 

By 2040 our computers will use more power than we can produce

The breathtaking speed at which our computers evolve is perfectly summarized in Moore’s Law — the idea that the number of transistors in an integrated circuit doubles every two years. But this kind of exponential growth in computing power also means that our chips need more and more power to function — and by 2040 they will gobble up more electricity than the world can produce, scientists predict.

Image via Pixabay

The projection was originally contained in a report released last year by the Semiconductor Industry Association (SIA) but it has only recently made headlines as the group issued its final assessment on the semiconductor industry. The basic idea is that as computer chips become more powerful and incorporate more transistors, they’ll require more power to function unless efficiency can be improved.

Energy which we may not have. The group predicted that unless we significantly change the design of our computers, by 2040 we won’t be able to power all of them. And there’s a limit to how much we can improve using current methods:

“Industry’s ability to follow Moore’s Law has led to smaller transistors but greater power density and associated thermal management issues,” the 2015 report explains.

“More transistors per chip mean more interconnects – leading-edge microprocessors can have several kilometres of total interconnect length. But as interconnects shrink they become more inefficient.”

So in the long run, SIA estimates that under current conditions “computing will not be sustainable by 2040, when the energy required for computing will exceed the estimated world’s energy production.”

Total energy used for computing.
Image credits SIA

This graph shows the problem. The energy required by today’s mainstream systems (the benchmark line) is shown in orange, and the world’s total energy production in yellow. The point where they meet, predicted to fall somewhere around 2030 or 2040, is where the problems start. Today, chip engineers stack ever-smaller transistors in three dimensions to improve performance and keep pace with Moore’s Law, but the SIA says that approach won’t work forever, given how much energy will be lost in future, progressively denser chips.
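To see why an exponentially growing demand curve inevitably crosses a roughly flat supply curve, here is a toy extrapolation in Python. The starting point, the growth rate, and the world production figure are all made-up illustrative values, not the SIA’s actual data; only the shape of the argument matters.

```python
# Toy model: exponentially growing computing energy vs. roughly flat world production.
# All numbers below are illustrative assumptions, NOT the SIA's dataset.

compute_energy = 1e19        # joules/year used for computing in the start year (assumed)
world_production = 6e20      # joules/year produced worldwide, assumed roughly constant
growth_per_year = 1.25       # assumed 25% annual growth in computing energy demand

year = 2020
while compute_energy < world_production:
    compute_energy *= growth_per_year
    year += 1

print(f"With these assumptions, demand overtakes supply around {year}.")  # late 2030s
```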

“Conventional approaches are running into physical limits. Reducing the ‘energy cost’ of managing data on-chip requires coordinated research in new materials, devices, and architectures,” the SIA states.

“This new technology and architecture needs to be several orders of magnitude more energy efficient than best current estimates for mainstream digital semiconductor technology if energy consumption is to be prevented from following an explosive growth curve.”

The roadmap report also warns that beyond 2020, it will become economically unviable to keep improving performance with simple scaling methods. Future improvements in computing power must come from areas not related to transistor count.

“That wall really started to crumble in 2005, and since that time we’ve been getting more transistors but they’re really not all that much better,” computer engineer Thomas Conte of Georgia Tech told IEEE Spectrum.

“This isn’t saying this is the end of Moore’s Law. It’s stepping back and saying what really matters here – and what really matters here is computing.”

This is the world’s first 1,000-processor chip

A microchip containing 1,000 independent programmable processors has been revealed by a team at the University of California, Davis, Department of Electrical and Computer Engineering.

By splitting programs across a large number of processor cores, the KiloCore chip designed at UC Davis can run at high clock speeds with high energy efficiency. Image credits: Andy Fell/UC Davis

The highly efficient array, called “KiloCore,” has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors. It was presented at the 2016 Symposium on VLSI Technology and Circuits in Honolulu on June 16.

“To the best of our knowledge, it is the world’s first 1,000-processor chip and it is the highest clock-rate processor ever designed in a university,” said Bevan Baas, professor of electrical and computer engineering, who led the team that designed the chip architecture.

This isn’t, by any means, the first multiple-processor chip ever created, but most such devices only go up to about 300 processors, according to an analysis by Baas’ team. They’re rarely available commercially, being used mostly for research. KiloCore is no different; it was fabricated by IBM using its 32 nm CMOS technology.

Each individual processor can run its own small program independently of the others, which is a fundamentally more flexible approach than the Single-Instruction-Multiple-Data approach used by processors such as GPUs. The idea is to split up the processing and let all processors run in parallel and independently, which makes processing not only faster, but also more energy efficient. Because each processor is individually clocked, it can shut down when it’s not needed.

Just so you get an idea of how efficient this many-core chip is: the 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 watts, low enough to be powered by a single AA battery. That’s about 100 times more efficient than your average laptop. Cores operate at an average maximum clock frequency of 1.78 GHz. Another remarkable feature is data transfer: cores transfer data directly to each other rather than through a pooled memory area, which can become a bottleneck.
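For a sense of scale, the quoted figures translate directly into instructions per joule. The sketch below just does that arithmetic; the AA battery capacity is a rough assumed value (around 10 kJ), not something from the UC Davis paper.

```python
# Back-of-the-envelope efficiency from the figures quoted above.
instructions_per_second = 115e9   # 115 billion instructions per second
power_watts = 0.7                 # dissipated power

print(instructions_per_second / power_watts)   # ~1.6e11 instructions per joule

# Rough runtime on one AA battery (~10 kJ is an assumed, typical capacity):
aa_battery_joules = 10_000
print(aa_battery_joules / power_watts / 3600)  # ~4 hours of continuous computation
```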

 

New silicon chip technology amplifies light using sound waves

A whole new world of signal processing may be just around the corner. Yale scientists have developed a method of boosting the intensity of light waves on a silicon microchip using only sound.

A Yale team has found a way to amplify the intensity of light waves on a silicon microchip using only sound.
Image credit: Yale University

The paper, published in the journal Nature Photonics, describes a novel waveguide system that has the ability to control the interaction between sound and light waves. The system could form the foundation of a host of powerful new signal-processing technologies starting from the humble (and widely-used) silicon chip.

And this is one of the most exciting selling points of this technology: silicon chips are ubiquitous in today’s technology.

“Silicon is the basis for practically all microchip technologies,” said Peter Rakich, assistant professor of applied physics and physics at Yale and lead author of the paper. “The ability to combine both light and sound in silicon permits us to control and process information in new ways that weren’t otherwise possible.”

“[The end result] is like giving a UPS driver an amphibious vehicle — you can find a much more efficient route for delivery when traveling by land or water.”

Numerous groups around the world have sought to integrate such a technology into silicon chips, but with little success. Previous attempts just weren’t efficient enough for practical applications. The Yale group’s breakthrough came from a new design that prevents light and sound from escaping the circuits.

“Figuring out how to shape this interaction without losing amplification was the real challenge,” said Eric Kittlaus, a graduate student in Rakich’s lab and the study’s first author. “With precise control over the light-sound interaction, we will be able to create devices with immediate practical uses, including new types of lasers.”

The system is part of a larger body of research the Rakich lab has conducted over the past five years, focused on designing new microchip technologies for light. Its commercial applications range over areas including communications and signal processing.

“We’re glad to help advance these new technologies, and are very excited to see what the future holds,” said Heedeuk Shin, a former member of the Rakich lab, now a professor at the Pohang University of Science and Technology in Korea and one of the study’s co-authors.


Breakthrough in computing: brain-like chip features 4096 cores, 1 million neurons, 5.4 billion transistors


Image: IBM

The brain of complex organisms, such as humans, other primates, or even mice, is very difficult to emulate with today’s technology. IBM has moved things further in this direction after announcing the whopping specifications of its new brain-like chip: one million programmable neurons and 256 million programmable synapses across 4,096 individual neurosynaptic cores, all made possible using 5.4 billion transistors. TrueNorth, as it’s been dubbed, is remarkable not just because of its raw computing power – after all, that kind of thing was possible before; you just had to build more muscle and put more cash and resources into the project – but also because of the tremendous leap in efficiency. The chip, possibly the most advanced of its kind, operates at max load using only 72 milliwatts. That’s 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches. Enter the world of neuroprogramming.


Main components of IBM’s TrueNorth (SyNAPSE) chip. Image: IBM


The culmination of a six-year-old IBM project, partially funded by DARPA, TrueNorth made its first baby steps in an earlier prototype. The 2011 version only had 256 neurons, but since then the developers have made some drastic improvements, such as switching to Samsung’s 28 nm transistor process. Each TrueNorth chip consists of 4,096 neurosynaptic cores arranged in a 64×64 grid. Like a small brain network that communicates with other networks, each core bundles 256 inputs (axons), 256 outputs (neurons), SRAM (neuron data storage), and a router that allows any neuron to transmit to any axon up to 255 cores away. In total, 256×256 means each core is capable of processing 65,536 synapses, and if that wasn’t impressive enough, IBM has already built a 16-chip TrueNorth system with 16 million neurons and 4 billion synapses.
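The headline figures follow directly from that core layout; a quick sanity check of the arithmetic:

```python
# Sanity-checking TrueNorth's headline numbers from the core layout described above.
cores = 64 * 64                      # 4,096 neurosynaptic cores in a 64x64 grid
neurons = cores * 256                # 1,048,576 -> "one million programmable neurons"
synapses = cores * 256 * 256         # 268,435,456 -> "256 million programmable synapses"
print(cores, neurons, synapses)

# The 16-chip system mentioned above:
print(16 * neurons, 16 * synapses)   # ~16.8 million neurons, ~4.3 billion synapses
```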


By now, some of you may be confused by all these technicalities. What do they mean? Why should you care? The ultimate goal is to reach a complete understanding of how the human brain works. We’re far from that goal, but we need to start somewhere. To run complex simulations of deep neural networks you need dedicated hardware that is up to the job, preferably hardware that closely matches the brain’s massively parallel computation. Then you need software, but that’s a story for another time.

Of course, there’s also a commercial interest. IBM is in with the big boys. It has been at the forefront of technology for decades, and the people managing IBM know that big-data interpretation is a huge slice of the global information pie. Watson, the supercomputer that proved it can win against Jeopardy’s top veterans, is just one of IBM’s big projects in this direction – semantic data retrieval. Watson’s nephews will be ubiquitous in every important institution, be it hospitals or banks. Expect TrueNorth to play a big part in all of this, running on the inside to help the world grow faster on the outside.

More details can be found in the paper published in the journal Science.

 

 

Bone marrow-on-a-chip could remove bone marrow animal testing

A new “organ on a chip” has been developed by Harvard researchers, reproducing the structure, functions, and cellular make-up of bone marrow, a complex tissue that, until now, could only be studied in living animals.

Microscopic view of the engineered bone with an opening exposing the internal trabecular bony network, overlaid with colored images of blood cells and a supportive vascular network that fill the open spaces in the bone marrow-on-a-chip (credit: James Weaver/Harvard’s Wyss Institute)

Bone marrow is one of the more complex and fragile parts of the human body – many drugs and toxic agents affect the bone marrow in ways that are hard to predict and hard to study. Until now, the only way to do this was to study it in living animals, something which, needless to say, is costly, unpleasant, and risky. But now, scientists from Harvard’s Wyss Institute for Biologically Inspired Engineering have developed what they call “bone marrow-on-a-chip”, a device which could be used to develop safe and effective strategies to prevent or treat radiation’s lethal effects on bone marrow without resorting to animal testing.

The main focus of such studies is cancer treatment – many such treatments such as radiation therapy or high-dose chemotherapy are hazardous for bone marrow. Animal testing is not really efficient for studying such matters, and it also raises some moral issues.

“Bone marrow is an incredibly complex organ that is responsible for producing all of the blood cell types of our body, and our bone marrow chips are able to recapitulate this complexity in its entirety and maintain it in a functional form in vitro,” said Don Ingber, M.D., Ph.D., Founding Director of the Wyss Institute, Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, Professor of Bioengineering at SEAS, and senior author of the paper.


Ingber leads a larger initiative to develop more “organs on a chip” – small microfluidic devices that mimic the physiology of living organs. So far, they’ve developed lung, heart, kidney, and gut chips that reproduce key aspects of organ function, and have more in the works. In order to build them, they combine multiple types of cells from an organ on a microfluidic chip, while steadily supplying nutrients, removing waste, and applying mechanical forces that the organ would naturally encounter in the human body.

The researchers report the development in the May 4, 2014 online issue of Nature Methods.

 

Creating virtually indestructible, self-healing circuits

Imagine if the chip in your phone or laptop could not only defend itself, but also repair itself on the fly, recovering from anything from simple scratches or battery issues to total transistor failure. It may sound like science fiction, but it is exactly what a team from Caltech has done.


A chip, after being “zapped”

The team, working at the High-Speed Integrated Circuits laboratory in Caltech’s Division of Engineering and Applied Science, has demonstrated this self-healing capability in tiny power amplifiers. The amplifiers are so tiny that 76 of them could fit on a single penny – along with everything they need to repair themselves. To put the system to the test, the researchers zapped the chips multiple times with a high-power laser, then watched as the chips automatically developed a work-around in a fraction of a second.

“It was incredible the first time the system kicked in and healed itself. It felt like we were witnessing the next step in the evolution of integrated circuits,” says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. “We had literally just blasted half of the amplifier and vaporized many of its components, such as transistors, and it was able to recover to nearly its ideal performance.”

The team’s results will appear in the March issue of IEEE Transactions on Microwave Theory and Techniques.


At the moment, chips are extremely vulnerable; a single mechanical or electrical fault can render them useless, so the Caltech engineers wanted to give integrated-circuit chips a healing ability akin to that of our own immune system – something that can discover and treat the fault as soon as possible. The power amplifier they devised employs a multitude of robust, on-chip sensors that monitor temperature, current, voltage, and power. The information from the sensors feeds into a custom-made application-specific integrated-circuit (ASIC) unit on the same chip, a central processor that acts as the “brain” of the system. The “brain” then analyzes the amplifier’s overall performance, figuring out where it has faulted and needs to be fixed. What’s interesting is that this mechanism does not rely on algorithms that know how to respond to every possible scenario, but rather draws conclusions based on the aggregate response of the sensors.

“You tell the chip the results you want and let it figure out how to produce those results,” says Steven Bowers, a graduate student in Hajimiri’s lab and lead author of the new paper. “The challenge is that there are more than 100,000 transistors on each chip. We don’t know all of the different things that might go wrong, and we don’t need to. We have designed the system in a general enough way that it finds the optimum state for all of the actuators in any situation without external intervention.”
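The article doesn’t spell out the ASIC’s exact decision logic, but the “tell the chip the result you want and let it find the settings” idea can be sketched as a simple sensor-driven search. Everything below is a conceptual illustration with made-up names and numbers, not Caltech’s actual on-chip algorithm.

```python
import random

def read_sensors(settings):
    """Hypothetical stand-in for the on-chip sensors: returns a single score for how
    far the amplifier currently is from its desired behaviour (lower is better)."""
    target = [0.5, 0.5, 0.5]                      # desired operating point (made up)
    return sum((s - t) ** 2 for s, t in zip(settings, target))

def heal(settings, iterations=1000, step=0.05):
    """Randomized hill-climbing over actuator settings: try a small random change and
    keep it whenever the aggregate sensor score improves. No per-fault rules needed."""
    best = read_sensors(settings)
    for _ in range(iterations):
        candidate = [s + random.uniform(-step, step) for s in settings]
        score = read_sensors(candidate)
        if score < best:
            settings, best = candidate, score
    return settings

# Even from a badly skewed starting point (e.g. after damage), the loop drifts back
# toward the target operating point without ever being told what went wrong.
print(heal([0.1, 0.9, 0.3]))
```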

The researchers describe four main categories of damage that chips suffer: static variation, a product of manufacturing differences across components; long-term ageing; short-term variations induced by environmental conditions such as changes in load, temperature, and supply voltage; and accidental or intentional mechanical damage that destroys parts of the circuit.

The implications of this project are absolutely huge.

“Bringing this type of electronic immune system to integrated-circuit chips opens up a world of possibilities,” says Hajimiri. “It is truly a shift in the way we view circuits and their ability to operate independently. They can now both diagnose and fix their own problems without any human intervention, moving one step closer to indestructible circuits.”

Picture perfect: quick, efficient chip eliminates common flaws in amateur photographs

Your smartphone amateur photos could be instantly converted into professional-looking pictures at the touch of a button, thanks to a chip developed by MIT researchers.

The chip, built by a team at MIT’s Microsystems Technology Laboratory, can perform a number of tasks, including creating a more realistic environment or enhanced lighting in a shot without destroying the scene’s ambience; the technology could easily be implemented not only in cameras, but also in smartphones and tablets, making it easier for everyone to take that great picture you’ve always wanted.


Usually, computational photography applications are implemented in software on cameras and smartphones; these systems consume a lot of processing power, take longer to run, and require a considerable amount of knowledge from the user. But the chip developed by Rahul Rithe, a graduate student in MIT’s Department of Electrical Engineering and Computer Science, takes an entirely different approach.

“We wanted to build a single chip that could perform multiple operations, consume significantly less power compared to doing the same job in software, and do it all in real time,” Rithe says.

Perhaps the most notable such task is High Dynamic Range (HDR) imaging. HDR is designed to compensate for limitations on the range of brightness that existing digital cameras can record, so that photos come out as vivid as the scene we see with our own eyes. To do this, the camera takes three “low dynamic range” pictures: a normal one, an overexposed one (too much light), and an underexposed one (too little light). It then merges them into a single photo that captures the entire range of brightness in the scene, Rithe says.
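As a rough idea of what the merging step involves, here is a minimal NumPy sketch of exposure fusion. It simply weights each exposure by how well-exposed each pixel is; this is a generic textbook-style approach, not the algorithm implemented on the MIT chip.

```python
import numpy as np

def merge_exposures(under, normal, over):
    """Naive HDR-style merge of three exposures (pixel values in [0, 1]).
    Weights peak at mid-grey and fall off near pure black or pure white, so each
    pixel is taken mostly from whichever exposure captured it best."""
    stack = np.stack([under, normal, over]).astype(np.float64)
    weights = 1.0 - 2.0 * np.abs(stack - 0.5) + 1e-6
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

# Tiny synthetic example: a dark, a normal, and a bright exposure of the same scene.
under  = np.array([[0.05, 0.10], [0.02, 0.40]])
normal = np.array([[0.30, 0.55], [0.10, 0.95]])
over   = np.array([[0.70, 0.98], [0.45, 1.00]])
print(merge_exposures(under, normal, over))
```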

Software systems typically take a few seconds to perform this operation, while the chip, even in its initial stage, can do it in much less than a second; this makes it fast enough to even apply it to video, something previously impossible, while also requiring much less CPU power.

“Typically when taking pictures in a low-light situation, if we don’t use flash on the camera we get images that are pretty dark and noisy, and if we do use the flash we get bright images but with harsh lighting, and the ambience created by the natural lighting in the room is lost,” Rithe says.

The chip also removes unwanted noise by blending each undesired pixel with its surrounding neighbors, so that it matches those around it. This can also be done with traditional filters, but they blur pixels at the edges of objects as well, resulting in a less detailed image. The power savings offered by the chip are particularly impressive, says Matt Uyttendaele of Microsoft Research:

“All in all [it is] a nicely crafted component that can bring computational photography applications onto more energy-starved devices,” he says.
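The edge-preserving smoothing described above behaves much like a classic bilateral filter, which weights neighbouring pixels both by spatial distance and by how similar their intensity is, so sharp edges survive. The sketch below is a slow, generic reference implementation for small grayscale images, not the circuitry on the MIT chip.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_space=2.0, sigma_range=0.1):
    """Replace each pixel with a weighted average of its neighbours, where weights
    drop with spatial distance AND with intensity difference, preserving edges."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_space ** 2))
                        wr = np.exp(-(img[ny, nx] - img[y, x]) ** 2 / (2 * sigma_range ** 2))
                        acc += img[ny, nx] * ws * wr
                        norm += ws * wr
            out[y, x] = acc / norm
    return out

noisy = np.clip(0.5 + 0.05 * np.random.randn(8, 8), 0, 1)  # flat grey patch with noise
print(bilateral_filter(noisy))
```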


Physicists create ‘super photons’ previously thought impossible

Velocity-distribution data of a gas of rubidium atoms, confirming the discovery of a new phase of matter, the Bose–Einstein condensate. Via Wikipedia

A team of physicists from the University of Bonn has developed a totally new type of light source: a Bose-Einstein condensate made of photons; the results will be published in the upcoming edition of Nature. A Bose-Einstein condensate is usually made by greatly cooling atoms (rubidium, for instance) and cramming them together until they become indistinguishable and behave like a single big particle; the Bonn researchers managed to pull off the same feat with particles of light.

Technically speaking, the Bose-Einstein condensate is a state of matter of a dilute gas, of weakly interacting bosons, cooled to a temperature very close to absolute zero (approximately -273 degrees Celsius). Under such conditions, a large fraction of the bosons occupy the lowest quantum state of the external potential, at which point quantum effects become apparent on a macroscopic scale. [Wikipedia]

An apparently unsolvable problem appears when it comes to light: when photons are cooled down, they simply disappear. However, Bonn physicists Jan Klärs, Julian Schmitt, Dr. Frank Vewinger, and Professor Dr. Martin Weitz succeeded where so many others had failed.

For a better understanding of the phenomenon, we should ask ourselves how warm light really is. For example, if you heat a tungsten filament (a standard light bulb filament), it starts glowing: first red, then yellow, and finally bluish. It would seem that every temperature has its own colour, but the problem is that different metals glow in different colours, so a common reference had to be found. To achieve this, physicists created a theoretical model, the so-called black body: an idealized object that absorbs all of the electromagnetic radiation directed at it. If you take this theoretical object and heat it, it provides a common ground for correlating light colour with temperature. But what happens when you cool it?
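How the colour shifts with temperature is captured by Wien’s displacement law, a standard black-body relation (not something specific to this study): the peak wavelength is

λ_max = b / T, with b ≈ 2.898 × 10⁻³ m·K.

At about 5,800 K (roughly the Sun’s surface) the peak lands near 500 nm, right in the visible range; at 1,000 K it has already slipped out to roughly 2.9 µm, and at room temperature (300 K) it sits near 10 µm, deep in the infrared and invisible to the eye.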

The creators of the "super-photon" are Julian Schmitt (left), Jan Klaers, Dr. Frank Vewinger and professor Dr. Martin Weitz (right). (Credit: © Volker Lannert / University of Bonn)

If you cool it, it will at some point stop radiating in the visible spectrum – it will only give out infrared photons, which are invisible to the human eye. Also, as you cool it, the radiation intensity decreases as the number of photons gets smaller (because photons disappear when cooled). The problem seems impossible – how do you lower the temperature of the photons without “killing” them?

The Bonn researchers used a really inventive system, basically bouncing a beam of light back and forth between two highly reflective mirrors, with a liquid dye solution filling the space between them. When the light crosses this fluid, the dye molecules absorb the photons and then spit them back out, and a whole number of interesting things happen during those collisions:

“During this process, the photons assumed the temperature of the fluid,” explained Professor Weitz. “They cooled each other off to room temperature this way, and they did it without getting lost in the process.”

This should especially please chip designers, because they use laser light to etch logic circuits into their semiconductor materials; just how small and fine these structures can be is limited by the wavelength of the light – the smaller, the better. A long wavelength is like writing on a piece of paper with a thick paintbrush. In time, this development could pave the way for higher-performance microchips, which will ultimately affect us all.