Tag Archives: silicon


Silicon-based life on Earth? Only artificially, so far — but maybe natural on other planets

Scientists have successfully nudged a strain of bacteria to create carbon-silicon bonds for the first time. Such research will help flesh out our understanding of silicon-based life — which doesn’t appear on Earth but could on other planets.

Pure Quartz.

Quartz, or rock crystal, the second most abundant mineral in the Earth’s crust, is a compound of silicon and oxygen. The most abundant mineral, feldspar, also contains plenty of silicon.
Image credits Stefan Schweihofer.

You may not think about it much, but silicon is actually really common here on good ole Earth — it’s the second most common element in our planet’s crust after oxygen. About nine-tenths of crustal rocks contain silicon in the form of silica or other silicates. The processor that’s allowing you to read ZME Science contains silicon — so do the glass panes in your windows. It’s so ubiquitous in our planet’s chemical makeup that a geologist can tell you which volcanoes will explode, and which will simply ‘flow’ on eruption, just by looking at how much silicon their magma contains. Sitting just one step below carbon in the periodic table, silicon shares most of the properties that make carbon ideally suited for organic chemistry.

Which makes the following point that much more curious: life as we know it simply isn’t that big on using silicon. It pops up here and there, in the tissues of certain plants or the shells of some marine organisms, but surprisingly little overall. Instead, Earthlings much prefer carbon.

Research from the California Institute of Technology, however, may put the element back on biology’s menu. The team successfully coaxed E. coli into producing a protein that can form carbon-silicon (C-Si) bonds. Their work sheds more light on why the latter element seems to be shunned by Earthly life, and where we might find organisms that don’t feel the same way.

Silly cons

The team started by engineering a strain of E. coli bacteria to produce a protein normally found in bacteria from Icelandic hot springs, one which can bind silicon to carbon. When the team first used their engineered strain to produce the protein in question, the enzyme proved to be very inefficient. Successive iterations and mutations, however, resulted in an enzyme that could forge organic silicon molecules some 15 times more efficiently than any chemical process the team could apply to the same goal. Using this molecule, the team produced twenty organic C-Si compounds, all of which proved to be stable.

So, by this step, they had proven that life can incorporate silicon — it’s just that it doesn’t particularly want to.

“You might argue, gosh, it’s so easy for a biological system to do it, how do you know it’s not being done out there?” says coauthor Frances Arnold. “We don’t really know, but it’s highly unlikely.”

Arnold was referring to other planets in this context, but the point she’s trying to make can also be applied to Earth. Do we know beyond a doubt that there isn’t silicon life on Earth? Well, no. But we have reasonable grounds to assume that there isn’t.

The issue here is one of availability. Silicon is much more common on Earth than carbon, but virtually all of it is extremely costly for life to access. Silicon is so prevalent in the crust because it’s a huge fan of oxygen, and it will bind with any available atom of the gas to form rocks. All of the silicon compounds that the team fed their bacteria to make these new compounds were man-made, and life would have no chance of finding them in the wild.

Carbon, on the other hand, is very stable chemically. Its relative lack of interest in hooking up with oxygen is especially useful for life, as it can use the atom to create huge molecules without much risk of them oxidizing and breaking apart. It also allows carbon to exist in a pure state (graphite, for example) in nature, while silicon can’t — this is a very important distinction, as in molecular ‘economy’ it means carbon can be acquired at a much, much lower price (energy expenditure) than silicon. Finally, when you burn carbon you get a gas that can then be re-used by life; silicon lacks this perk.
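That “gas that can be re-used” point boils down to what each element turns into when it oxidizes. Written out as textbook reactions (general chemistry, not figures from the paper itself): carbon yields a gas that life can cycle back into its chemistry, while silicon yields a solid.

```latex
\mathrm{C} + \mathrm{O_2} \rightarrow \mathrm{CO_2} \quad \text{(a gas, easily recycled by living things)}
\mathrm{Si} + \mathrm{O_2} \rightarrow \mathrm{SiO_2} \quad \text{(a solid, locked away as rock)}
```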

Silicon optics.

But it can be quite shiny, as these silicon optic pieces showcase.
Image credits Crystaltechno / Wikimedia.

Finally, silicon-based life couldn’t use water as carbon-based life does; the two simply don’t have chemistry. Instead, it would have to substitute another liquid, such as methane, for the job — but that’s not stable under normal conditions on Earth, either.

In the end, the heart of the matter isn’t that silicon can’t be a foundation for life — it’s just that, on Earth, carbon can do the job much more easily, at greater efficiency, and at a lower cost. The ‘job’ here being life.

However, that’s not to say silicon-carbon bonds aren’t useful. We produce such compounds in the lab all the time and use them in products ranging from electronics to pharmaceuticals. The team hopes that their bacteria can help produce these substances much faster, much more cheaply, and with a lesser environmental footprint. It could also open the way to whole new materials.

“An enzyme can do what chemists thought only they could do,” Arnold says. “The chemical bond could appear in thousands and thousands of different molecules, some of which could be useful.”

“They’re all completely new chemical entities that are easily available now just by asking bacteria to make them.”

Silly life

Beyond the immediate practical considerations, the research also raises the question: is Earth-based silicon life feasible? The results showed, at the very least, that silicon isn’t harmful to life as we know it. Perhaps, if life had ready access to the element, it would incorporate it more into its structures and processes, despite its limitations.

And that invites the question of whether life can be made to incorporate elements that we’ve never seen it use before.

“What happens when you incorporate other elements?” Arnold asks. “Can nature even do that?”

“Presumably we could make components of life that incorporate silicon—maybe silicon fat or silicon-containing proteins—and ask, what does life do with that? Is the cell blind to whether carbon is there or silicon is there? Does the cell just spit it out? Does the cell eat it? Does it provide new functions that life didn’t have before?”

“I’d like to see what fraction of things that chemists have figured out we could actually teach nature to do. Then we really could replace chemical factories with bacteria.”

One particularly neat way around silicon’s limitations on Earth is to move the context to another planet. Any seasoned lover of sci-fi, and I proudly count myself among their number, has run into the idea of silicon-based aliens at least once. For now, the “alien life” part remains in the domain of fantasy, but the chemistry behind that idea is very firmly lodged in the domain of science. For example, Titan, Saturn’s largest moon, sports a chilly average temperature of -179° Celsius (-290° Fahrenheit), very little oxygen (most of it locked in water ice), and an abundance of methane rivers and lakes.


In this context, silicon would be much better suited as a biochemical base for life than carbon. In what is perhaps a sprinkling of cosmic irony, however, Titan sports a lot of carbon (even more than Earth), but precious little silicon — and most of that is buried deep, near the moon’s core. Still, it goes to show that there are worlds out there where silicon, not carbon, would be the way to go. Overall, our chances of finding silicon-based life, or life that incorporates silicon, are pretty slim. And that, again, comes down to the fact that carbon is the more stable of the two. In the grand scheme of things, there could be silicon life out there — but it will probably be pretty rare.

Still, for now, research into C-Si bonds could usher in a new method of cheaply producing what, today, are relatively pricey compounds. And organic silicon compounds could have very valuable uses in medicine and other applications. So, while we look and pine for silicon-based life out in the universe, we stand to gain a lot from studying it on Earth.

The paper “Directed evolution of cytochrome c for carbon–silicon bond formation: Bringing silicon to life” has been published in the journal Science.

Scientists coax bacteria towards silicon-based life

Life — from what we know so far, it takes some carbon, some water, and a dash of other elements to make it happen. We’ve never seen it form from anything else, no matter where we’ve searched. And yet, there is one element found in abundance on Earth that biological life makes surprisingly little use of: silicon.


Image credits Gomez Santos / Pixabay.

Silicon is very similar in its chemical make-up to carbon, and shares its tetravalence — each atom can bond to four other atoms — meaning silicon could, in theory, form the basis for the complex molecules fundamental to life, such as proteins and DNA. Organic carbon-silicon bonds have been used by chemists for decades now in anything from paints to computer hardware. But these are produced artificially, and we’ve yet to see similar bonds pop up in nature. No silicon-based life has evolved on the planet, which is only stranger when you factor in that, after oxygen, silicon is the most bountiful element in the Earth’s crust.

This has left scientists with a dilemma for decades now: is silicon-based life possible, and if so, what would it look like?

To try and answer that question, a team from the California Institute of Technology, Pasadena, has managed to coax living cells into forming carbon-silicon bonds, showing for the first time how nature can incorporate this element into the basic building blocks of life.

“No living organism is known to put silicon-carbon bonds together, even though silicon is so abundant, all around us, in rocks and all over the beach,” says one of the researchers, Jennifer Kan from Caltech.

The team started by isolating a protein that occurs naturally in Rhodothermus marinus, a bacterium which inhabits Iceland’s hot springs. Known as a cytochrome c enzyme, the protein’s main role is to shuttle electrons through the cells. The team chose it because lab tests showed that it could help create the kind of bonds used to hook carbon and silicon atoms together.

After identifying the gene that codes for cytochrome c, they inserted it into a culture of E. coli bacteria to see if it would lead to the creation of those bonds. The first few tries didn’t result in much progress, but the team kept altering the protein’s gene within a specific area of the E. coli genome until they finally achieved their goal.

“After three rounds of mutations, the protein could bond silicon to carbon 15 times more efficiently than any synthetic catalyst,” Aviva Rutkin reports for New Scientist.
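For readers curious about what “rounds of mutations” means in practice, here is a minimal, purely illustrative Python sketch of the mutate-screen-select loop that directed evolution relies on. The mutation model, fitness numbers, and function names are all invented for illustration; they do not describe the actual cytochrome c protein or the Caltech team’s assay.

```python
import random

def directed_evolution(parent, measure_activity, rounds=3, library_size=100):
    """Toy mutate-screen-select loop (illustrative only).

    parent           -- starting enzyme variant (here: a list of mutation IDs)
    measure_activity -- stand-in for the lab assay scoring each variant
    """
    best = parent
    best_score = measure_activity(best)
    for generation in range(rounds):
        # 1. Mutagenesis: build a library of random variants of the current best.
        library = [best + [random.randint(0, 50)] for _ in range(library_size)]
        # 2. Screening: score every variant with the (simulated) activity assay.
        scored = [(measure_activity(variant), variant) for variant in library]
        # 3. Selection: carry the most active variant into the next round.
        top_score, top_variant = max(scored, key=lambda pair: pair[0])
        if top_score > best_score:
            best, best_score = top_variant, top_score
        print(f"round {generation + 1}: best activity = {best_score:.2f}")
    return best, best_score

# Hypothetical assay: activity grows with the number of 'beneficial' mutations found.
beneficial = {7, 23, 42}
toy_assay = lambda variant: 1.0 + 5.0 * len(beneficial & set(variant)) + random.random()

directed_evolution(parent=[], measure_activity=toy_assay)
```

In the real experiment, of course, the “assay” is wet-lab screening of cytochrome c variants, and only three rounds were needed to reach the reported 15-fold improvement over synthetic catalysts.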

This new method of tying the two elements together (with much greater efficiency than before) could change the way we think about producing the goods that require them, such as fuels, pharmaceuticals, or agricultural fertilizers. It also shows that life could (at least in part) be based on silicon.

“This study shows how quickly nature can adapt to new challenges,” one of the team, Frances Arnold, said in a press statement.

“The DNA-encoded catalytic machinery of the cell can rapidly learn to promote new chemical reactions when we provide new reagents and the appropriate incentive in the form of artificial selection. Nature could have done this herself if she cared to.”

Kan’s team had to really push the cells to create the bonds — this wasn’t something they were easily capable of doing on their own, or even very willing to do. Still, if the team continues to work with these kinds of bacteria, we could get an even better understanding of what life based on silicon might look like.

The full paper “Directed evolution of cytochrome c for carbon–silicon bond formation: Bringing silicon to life” has been published in the journal Science.

 


The two-in-one solar cell might harness energy cheaply and efficiently

A team at Stanford and MIT has devised a novel configuration that combines silicon – the leading solar cell semiconductor – and perovskite – a cheap mineral only recently exploited for converting solar energy – to form two different layers of sunlight-absorbing material, in order to harness energy across a wider spectrum. While performance at this stage is not impressive (it’s roughly on par with conventional single-layer silicon cells), the researchers believe they have methods at their disposal that could double efficiency. If that were to happen, then these could be the cheap but efficient solar cells we’ve all been waiting for.


Test sample of a monolithic perovskite-silicon multijunction solar cell produced by the MIT-Stanford University team.
Image: Felice Frankel

Perovskite is a calcium titanium oxide mineral composed of calcium titanate (CaTiO3); the name is also used for the whole family of crystals that share its structure. The material has received much attention in recent years as artificial perovskite crystals have increasingly been used in solar cells. Perovskite tech has seen a 400 percent growth in solar conversion efficiency in less than five years: if initially we heard about clumsy perovskite cells with a rated efficiency of only 3.8%, there are now reports of conversion efficiencies close to 19%. Definitely one to watch, this perovskite.

This is what the researchers thought as well. They added a semi-transparent perovskite layer on top of a silicon one. Since different materials absorb different light frequencies, combining the two theoretically helps you harvest more electricity. The team first did this last year in a tandem configuration in which the two layers were stacked but each had its own separate electrical connection. Now, the new configuration connects the two under the same circuit.

It’s much simpler to install and manufacture this way, but there are some important challenges to keep in mind. The two layers were initially wired separately for good reason: the current produced is limited by the capacity of the lesser of the two layers. MIT associate professor of mechanical engineering Tonio Buonassisi offers an analogy: imagine a flow of water passing through two non-identical pipes. At any point, the volume of water that can pass through the stacked pipes is limited by the narrowest one. A chain is only as strong as its weakest link, in other words.

This is why they currently report only a modest efficiency of 13.7 percent, but Buonassisi claims his team has identified low-cost methods to raise this to 30 percent. This would involve matching the two output currents as closely as possible, according to the paper published in the journal Applied Physics Letters. Since it’s the first time perovskite and silicon have been combined in this configuration, there’s reason to believe there’s much room for improvement.
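As a rough illustration of the current-matching constraint described above, here is a small Python sketch of a two-terminal (series-connected) tandem stack. The current and voltage figures are made-up placeholders, not measurements from the Stanford-MIT device.

```python
def tandem_power(current_top, current_bottom, voltage_top, voltage_bottom):
    """Rough model of a two-terminal (series-connected) tandem solar cell.

    In series, the same current must flow through both layers, so output
    current is capped by the weaker layer -- the 'narrow pipe' in the
    water analogy. Voltages, by contrast, add up.
    """
    current = min(current_top, current_bottom)   # limited by the lesser layer
    voltage = voltage_top + voltage_bottom       # series voltages add
    return current * voltage                     # power, in watts

# Hypothetical numbers (amps and volts) for a mismatched vs. a matched stack:
print(tandem_power(0.020, 0.035, 1.1, 0.6))   # mismatched: the 0.020 A layer limits output
print(tandem_power(0.027, 0.027, 1.1, 0.6))   # matched currents: more power from the same light
```

The second call delivers more power even though the total photocurrent generated is the same, which is why matching the two layers’ currents is the route the team says could push efficiency toward 30 percent.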

Curiosity Reveals Mars isn’t Red – it’s Greyish Blue

Mars – our planetary neighbor, the Red Planet… is not actually red. The first look at what’s under Mars’s dusty red surface has revealed a clearly greyish blue rocky layer.

The Red Planet might only be Red on the surface. Image: NASA/JPL-Caltech/MSSS

The Curiosity rover has begun drilling at a site called Telegraph Peak, the third drilling site in the outcrop at the base of Mount Sharp, where Curiosity has been working for the past five months. The decision to drill there was based on previous chemical measurements. The goal is to figure out how exactly Mars evolved from a wet, lush environment into the dry, arid one we see today.

“The Pahrump Hills campaign previously drilled at two other sites. The outcrop is an exposure of bedrock that forms the basal layer of Mount Sharp. Curiosity’s extended mission, which began last year after a two-year prime mission, is examining layers of this mountain that are expected to hold records of how ancient wet environments on Mars evolved into drier environments”, NASA wrote on their website.

We have known for quite a while that Mars is mostly made of silicon and oxygen (much like Earth), also containing significant quantities of iron, magnesium, aluminium, calcium, and potassium. But when researchers analyzed the bluish-grey samples extracted from Telegraph Peak using the Alpha Particle X-ray Spectrometer (APXS) on the rover’s arm and its internal Chemistry and Mineralogy (CheMin) instrument, they were surprised to see just how much silicon the sample had.

“When you graph the ratios of silica to magnesium and silica to aluminium, ‘Telegraph Peak’ is toward the end of the range we’ve seen,” Curiosity co-investigator Doug Ming, from the NASA Johnson Space Centre in the US, said in the press release. “It’s what you would expect if there has been some acidic leaching. We want to see what minerals are present where we found this chemistry.”

Credit: NASA/JPL-Caltech/MSSS

The first surprise came from the rocks not being red, but actually much darker.

“We’re sort of seeing a new colouration for Mars here, and it’s an exciting one to us,” Joel Hurowitz, sampling system scientist for Curiosity at NASA’s Jet Propulsion Laboratory (JPL), said in a statement.

So why are we seeing this different colouration in the rocks? The key answer here is likely “oxidation” – the reddish dust on the surface of Mars is subject to oxidation; in other words, it rusts and turns red. The grey powder that Curiosity collected was hidden and kept safe from that process, and may therefore preserve some indication of what the iron was doing without the complications of oxidation.
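For reference, the “rusting” the article alludes to is the familiar oxidation of iron to iron(III) oxide, the compound that gives Martian dust its red tint. Written as a textbook equation (a general chemistry illustration, not a claim about the exact oxidation pathway on Mars):

```latex
4\,\mathrm{Fe} + 3\,\mathrm{O_2} \rightarrow 2\,\mathrm{Fe_2O_3}
```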

The Curiosity rover will now be journeying away from Pahrump Hills, moving up Mount Sharp to see if the same thing is happening at a higher altitude.


Stacked “high-rise” computer chips add a new dimension to manufacturing

Moore’s law says that the number of transistors in an integrated circuit doubles every two years, roughly doubling computing power along with it. Since it was first formulated in 1965, this trend has held true, allowing computers to evolve at an exponential rate. To keep the law going, engineers tweak one or all of three main manufacturing parameters: chip size, speed, and price. Now, a new dimension might be factored in: height.
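As a quick back-of-the-envelope illustration of what “doubling every two years” means, here is a tiny Python sketch; the starting count and time span are arbitrary example numbers, not industry figures.

```python
def transistor_count(initial_count, years, doubling_period=2.0):
    """Moore's law as a simple exponential: the count doubles every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Example: a chip that starts at 1 million transistors, 20 years later:
print(f"{transistor_count(1_000_000, 20):,.0f}")  # ~1,024,000,000 -- a thousand-fold increase
```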

Making chips in 3-D

Researchers at Stanford University have shown how this might happen after revealing a novel manufacturing technique that can be used to make multi-story logic and memory chips. Of course, scaling chips vertically has been considered before, but it wasn’t until recently that the numerous challenges that come with it were overcome. The main issues are operating temperature (leaked electrons turn into heat, and the more circuits you pack together, the more heat is generated) and the so-called traffic jams that happen when computers get too busy.


This illustration represents the four-layer prototype high-rise chip built by Stanford engineers. The bottom and top layers are logic transistors. Sandwiched between them are two layers of memory. The vertical tubes are nanoscale electronic “elevators” that connect logic and memory, allowing them to work together to solve problems. Credit: Max Shulaker, Stanford

The approach ingeniously stacks both logic and memory chips atop one another, interconnecting them with thousands of nanoscale electronic “elevators” that move data quickly between the layers. This allows data to flow faster, and with less electricity, than through the traditional bottleneck-prone wires connecting single-story logic and memory chips today.

“This research is at an early stage, but our design and fabrication techniques are scalable,” said Subhasish Mitra, a Stanford professor of electrical engineering and computer science. “With further development this architecture could lead to computing performance that is much, much greater than anything available today.”

The de facto material used for making transistors and computer chips today is silicon. The material has proven to be a great fit for the electronics industry, but it’s now nearing the limits of its capabilities. One problem with silicon is heat. We all feel it when we hold a smartphone or put a hand over a computer. This heat is, in fact, electricity that leaks out of the silicon transistors. To solve this problem, the Stanford researchers turned to carbon nanotubes.

Carbon nanotubes (CNTs) and their compounds exhibit extraordinary electrical properties for organic materials, and have huge potential in electrical and electronic applications such as photovoltaics, sensors, semiconductor devices, displays, conductors, smart textiles and energy conversion devices (e.g., fuel cells, harvesters and batteries). They are so slender that nearly 2 billion CNTs could fit within a human hair. Because of their tiny diameter, CNTs are thought to leak fewer electrons, but packing enough of them together to be effective has proven difficult.

Mitra and colleagues employed a nifty trick. They started by growing CNTs the standard way, on round quartz wafers, then added a metal film that acts like a tape. Just like an adhesive, when a silicon wafer was placed on top, the CNTs came off the quartz growth medium. To make sure they ended up with a CNT layer of sufficient density, this lift-and-deposit technique was repeated 13 times. The researchers report they achieved some of the highest-density, highest-performance CNT layers ever made – an impressive feat considering they didn’t have the kind of sophisticated equipment available at a commercial plant.

The CNTs were only one part of the equation – the logic part, for the transistors. The team still had to figure out how to make memory for vertical chips. So, again, they devised a storage medium that isn’t based on silicon, unlike most of today’s RAM. Instead, the Stanford team fabricated memory using titanium nitride, hafnium oxide and platinum, forming a metal/oxide/metal sandwich. Applying electricity to this sandwich one way causes it to resist the flow of electricity; reversing the electric jolt causes the structure to conduct electricity again. Switching between resistance and conduction is how this new memory type encodes digital zeroes and ones, hence the name resistive random access memory, or RRAM.
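To make the “reverse the jolt to flip the bit” idea concrete, here is a toy Python model of a single resistive memory bit. The class name, the voltage values, and which polarity sets or resets the cell are invented for illustration and are not taken from the Stanford device.

```python
class ToyRRAMCell:
    """Toy model of a resistive-RAM bit (illustrative only).

    Here a positive voltage pulse 'sets' the cell into a low-resistance state
    (read as 1) and a negative pulse 'resets' it into a high-resistance state
    (read as 0). Real cells also care about pulse amplitude and duration, and
    which polarity does what depends on the device.
    """
    def __init__(self):
        self.low_resistance = False   # start in the high-resistance (0) state

    def apply_pulse(self, voltage):
        if voltage > 0:
            self.low_resistance = True    # set: conducts easily -> logical 1
        elif voltage < 0:
            self.low_resistance = False   # reset: resists current -> logical 0

    def read_bit(self):
        return 1 if self.low_resistance else 0

cell = ToyRRAMCell()
cell.apply_pulse(+1.5)   # hypothetical set pulse, in volts
print(cell.read_bit())   # 1
cell.apply_pulse(-1.5)   # hypothetical reset pulse
print(cell.read_bit())   # 0
```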

The authors write that RRAM uses less energy, which translates into prolonged battery life for smartphones or notebooks.


The image on the left depicts today’s single-story electronic circuit cards, where logic and memory chips exist as separate structures, connected by wires. Like city streets, those wires can get jammed with digital traffic going back and forth between logic and memory. On the right, Stanford engineers envision building layers of logic and memory to create skyscraper chips. Data would move up and down on nanoscale “elevators” to avoid traffic jams. Credit: Wong/Mitra Lab, Stanford

Ultimately, Max Shulaker and Tony Wu, Stanford graduate students in electrical engineering, unveiled a four-story high-rise chip at the IEEE International Electron Devices Meeting in San Francisco. There, they explained the key parameter that helped them achieve this feat: manufacturing temperature. Typically, silicon memory and transistors are manufactured at very high temperatures, around 1,000 degrees Celsius. The Stanford process for making RRAM and CNTs uses low temperatures, so the team could stack memory and logic layers atop one another without risking melting anything below.

Now, imagine not four but eight, sixteen or 512 of these layers stacked. You’d get a chip that’s 512 times more powerful than one occupying the same surface area today. Truly, this has the potential to change computing. Before you get your hopes up for a quantum computer of your own at home (better wait until someone actually builds one in the lab first), you might want to pay more attention to this sort of advance.

 


For the first time, physicists measure an electron as it jumps inside a semiconductor. Yes, it’s a big deal!

All our modern electronics are based on a class of wonder materials called semiconductors. What makes these so valuable is their ability to free electrons when subjected to a voltage or when hit by light; the freed electrons become mobile and can eventually be routed and switched through a transistor. It’s the very basis of our digital age, be it in solar cells or computers. Now, researchers at UC Berkeley have taken a real-time snapshot of electrons being stripped from silicon’s valence shell for the very first time.

A brief jump


In semiconductors like silicon, electrons attached to atoms in the crystal lattice can be mobilized into the conduction band by light or voltage. Berkeley scientists have taken snapshots of this very brief band-gap jump and timed it at 450 attoseconds. Image: Stephen Leone.

This jump happens so fast that even extremely fast femtosecond lasers are unable to measure it. So this time, scientists turned to even shorter flashes of light – attosecond pulses of soft X-ray light lasting only a few billionths of a billionth of a second. The experiments show that the time it takes for an electron to transit from the atom’s valence shell, across the band gap, and into the conduction band is 450 attoseconds, or 450 quintillionths of a second.

“Though this excitation step is too fast for traditional experiments, our novel technique allowed us to record individual snapshots that can be composed into a ‘movie’ revealing the timing sequence of the process,” explains Stephen Leone, UC Berkeley professor of chemistry and physics.

In the experiment, published in Science, Leone and colleagues zapped a silicon crystal with ultrashort pulses of visible light from a laser. Immediately after the laser was fired, a subsequent X-ray pulse lasting only a few tens of attoseconds (1 as = 10⁻¹⁸ s) was directed at the crystal to take snapshots of the evolution of the excitation process triggered by the laser pulses. The experimental data was then interpreted with a supercomputer simulation run at the University of Tsukuba and the Molecular Foundry. The simulation modeled not only the excitation of the electrons, but also the subsequent interaction of the X-ray pulses with the silicon crystal.
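The “individual snapshots composed into a movie” phrasing describes a pump-probe measurement: fire the visible pump pulse, wait a controlled delay, fire the attosecond X-ray probe, record a snapshot, and repeat while stepping the delay. Here is a schematic Python sketch of that delay scan; the delay values, the stand-in signal function, and all names are hypothetical placeholders, not the Berkeley group’s actual data pipeline.

```python
import math

def probe_absorption(delay_attoseconds):
    """Stand-in for the measured X-ray signal at a given pump-probe delay.
    Here: a smooth step centred near 450 as, mimicking the band-gap jump."""
    return 1.0 / (1.0 + math.exp(-(delay_attoseconds - 450.0) / 100.0))

def delay_scan(start=0, stop=1200, step=50):
    """Step the pump-probe delay and record one 'snapshot' per delay."""
    movie = []
    for delay in range(start, stop + 1, step):
        # In the real experiment: fire the visible pump, wait `delay`, fire the X-ray probe.
        movie.append((delay, probe_absorption(delay)))
    return movie

for delay, signal in delay_scan():
    print(f"{delay:5d} as  ->  signal {signal:.3f}")
```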

Physics has identified two distinct stages that occur when a semiconductor is “activated”. First, the electron absorbs energy and jumps to a higher state where it’s free to roam – it gets excited. Then the lattice, made up of individual atoms arranged in an orderly manner to form the crystal, rearranges itself in response to the redistribution of electrons. In this second stage, part of the energy used to excite the electron is transformed into heat carried by vibrational waves called phonons.

The present experiment confirms this once more, while offering a more refined look at what happens inside. The experiments show that, initially, only the electrons react to the energy from the laser. Then, about 60 femtoseconds after the laser pulse ended, they observed the onset of a collective movement of the atoms – that is, phonons. The researchers estimate the lattice spacing rebounded by about 6 picometers (10⁻¹² meters) as a result of the electron jump, consistent with other estimates.

“These results represent a clean example of attosecond science applied to a complex and fundamentally important system,” says co-author Daniel Neumark.

 


Sand-based batteries last three times longer than conventional ones


Photo: University of California, Riverside

Expect the price of sand to skyrocket! Researchers at the University of California, Riverside have devised a coin-sized battery that uses silicon at its anode (the negative side), instead of the over-used graphite, and lasts up to three times longer than conventional lithium-ion batteries. The key to the research is a silicon extraction method that uses quartz-rich sand as the feedstock and simple, non-energy-intensive chemical reactions. Previously, the nanoscale silicon used in batteries was deemed too difficult to manufacture.

While out surfing, Zachary Favors, a graduate student at UC Riverside, drew inspiration from the beach sand he was resting on. Sand is primarily made up of quartz, or silicon dioxide, but concentrations vary depending on the deposit. Favors found a quartz-rich site at the Cedar Creek Reservoir in Texas, drew some samples, and enlisted the help of engineering professors Cengiz and Mihri Ozkan.

The team milled the sand down to the nanometer scale, then put the tiny granules through a series of purification steps that left them looking similar to powdered sugar. The purified quartz was then mixed with ground salt and magnesium and heated. The salt acts as a heat absorber, while the magnesium strips the oxygen from the quartz, leaving pure silicon behind. Moreover, the resulting silicon is porous, which is ideal for use at the anode: the increased surface area allows lithium ions to travel through more quickly and close the circuit.
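The step in which “magnesium removes the oxygen from the quartz” corresponds to the classic magnesiothermic reduction, carried out under heat. As a textbook equation (the paper’s exact conditions and any side products may differ):

```latex
\mathrm{SiO_2} + 2\,\mathrm{Mg} \rightarrow \mathrm{Si} + 2\,\mathrm{MgO}
```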


Schematic process of silicon purification. Illustration: Scientific Reports

To test the resulting powder in an actual anode, the researchers built a coin-sized battery. Performance tests reveal the silicon-anode lithium-ion battery lasts up to three times longer than those using graphite. This means Li-ion batteries with silicon anodes might help your phone or car battery last three times longer, a massive improvement that could dramatically change how smartphones or electric cars are used. Next, Favors and colleagues will build a larger battery (the size of a phone battery) and run more extensive tests to assess life cycle and other parameters. The report was published in the journal Scientific Reports.


Breakthrough could usher out silicon and make way for graphene transistors


Graphene is a perfect 2D crystal of covalently bonded carbon atoms and forms the basis of all graphitic structures. (c) Photo: Costas Galiotis
FORTH/ ICE-HT and Dept. Materials Science, University of Patras

Time and time again we’ve hailed, here on ZME Science, the cultural and scientific advances graphene is poised to bring to humanity. It’s the strongest material known so far, while also being the lightest; it can be made magnetic; and – something of utmost importance to science – it’s the best electrical conductor that we know of. That last quality also comes with a curse, one that electrical engineers have been trying to dispel for years through arduous, sleepless nights. Building working graphene transistors is imperative for technology’s grand design of the future, but the thing with a graphene transistor is that you can’t turn it off. Now, if you know your basic electrical engineering, you know this means graphene is useless in this role, despite the numerous and tremendous benefits it could provide.

Yup, it’s that conductive! Scientists at the University of California, Riverside (UCR), however, may have finally developed a work-around: they published a paper in which they demonstrate a graphene transistor circuit that can switch on and off by exploiting a counter-intuitive effect called negative resistance – we’ll get to that in a minute.

“The obtained results present a conceptual change in graphene research and indicate an alternative route for graphene’s applications in information processing,” the team led by Guanxiong Liu writes.

The de facto industry standard for building just about any electronics today is silicon. But as faithful and reliable as silicon has proven to be over the decades, it is slowly becoming obsolete. Miniaturization has come a long way, but there’s only so much you can squeeze onto such a limited surface. Most industry pundits think that the downscaling of silicon chip technology cannot extend much beyond 2026.

Graphene: the transistor

Technically, from there on, graphene research should become reliable enough to enter mass manufacturing and replace silicon in the industry. Where transistors are concerned, however, graphene has one major flaw – it doesn’t have a bandgap. There is no energy range in the material where electron states can’t exist, yet for a transistor to function in practice it needs to switch current on and off.

To make this work, you’d need to coax graphene into behaving like a semiconductor – a material in which electrons cannot flow at low energy, so it behaves as an insulator. With this in mind, several attempts have been made to create an artificial band gap in graphene using methods such as applying electric fields, doping it with atoms, or stretching and squeezing the material. These techniques, however, have rendered only modest results. Practical digital circuits require a band gap on the order of 1 eV at room temperature, but the best efforts with graphene have produced only a few hundred meV.
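To see why “a few hundred meV” falls short of the roughly 1 eV that practical circuits want, it helps to compare the gap to the thermal energy of electrons at room temperature. The rough exponential suppression factor computed below is a standard textbook-style estimate, not a figure from the Riverside paper, and the exact exponent depends on the device model.

```python
import math

BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, in eV per kelvin

def thermal_energy_ev(temperature_kelvin=300.0):
    """kT at the given temperature, in electron-volts."""
    return BOLTZMANN_EV * temperature_kelvin

def leakage_suppression(band_gap_ev, temperature_kelvin=300.0):
    """Rough Boltzmann-style estimate of how strongly a gap suppresses
    thermally excited carriers (bigger number = cleaner 'off' state)."""
    return math.exp(band_gap_ev / thermal_energy_ev(temperature_kelvin))

print(f"kT at room temperature ~ {thermal_energy_ev():.3f} eV")          # ~0.026 eV
print(f"suppression for a 1.0 eV gap: {leakage_suppression(1.0):.1e}")   # ~6e16
print(f"suppression for a 0.3 eV gap: {leakage_suppression(0.3):.1e}")   # ~1e5
```

The many-orders-of-magnitude difference between the two numbers is, in rough terms, why a few-hundred-meV gap leaks too much current for reliable digital switching.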

Liu and colleagues, however, turned to an entirely different approach by exploiting what’s referred to as negative resistance, an effect in which a current entering a material causes the voltage across it to drop. Various groups, including this one at Riverside, have shown that graphene demonstrates negative resistance in certain circumstances.

Sleepless nights into sweet dreams for graphene scientists

If you exploit these voltage drops, you can perform logic and enable switching. Liu and co demonstrate the effectiveness of their approach by designing a graphene-based circuit that can match patterns, and show that it has several important advantages over silicon-based versions. For starters, logic gates built from these inverted transistors could be much denser, more efficient at some tasks, and operate at blistering speeds of over 400 GHz.

The only issue that remains now is maybe the most challenging: actually building an inverted graphene transistor circuit. For now, the researchers have demonstrated experimentally that negative resistance occurs in graphene; the rest remains to be seen. Still, their work is extremely promising and shows off an elegant, creative solution to a pestering problem that has been giving engineers headaches for nearly ten years, ever since graphene became truly hot.

Even so, graphene is great for electronics. Just a while ago, ZME Science wrote how graphene light sensors are 1,000 times more sensitive, how graphene can reduce CPU temperatures by 25%, and how it can multiply light.

Findings were reported in a paper published in the journal Mesoscale and Nanoscale Physics.


Silicone robot hops 30 times its own height using combustion

Researchers at Harvard University in Cambridge, Massachusetts, have developed a three-legged silicone robot that uses chemical reactions to help it leap up to 30 times its own height. Combustion is typically confined to hard systems, like internal combustion engines, that can withstand the heat generated by the reaction, but this latest demo proves that soft materials can cope with high working temperatures as well.


Rapid actuation of a soft robot (composed of silicone elastomers) was achieved using high-temperature chemical reactions. (c) Nature

The key to the robot’s leaping ability lies in a smart soft valve positioned at the end of a channel in each of the three legs. This valve lets in just the right mix of oxygen and methane – one part methane to two parts oxygen. Then, the same computer that regulates how much gas is let into the channels controls a high-voltage cable connected to electrodes in each leg. When it deems fit, the electrodes spark, igniting the gas mixture in a combustion reaction that forms CO2 and water while releasing a lot of energy.
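That “one part methane to two parts oxygen” mixture is the stoichiometric ratio for complete methane combustion, which is exactly the reaction described above, written here as a standard chemical equation:

```latex
\mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O} + \text{energy}
```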

This energy kick is what allows the silicone robot to hop up to 30 times its own height, but it would never have been possible without destroying the robot were it not, yet again, for the tiny valve. It closes in response to high pressure, making the pressure rise even higher, and then opens after the explosion to let the exhaust gases out.


Up until now, a similar effect had been achieved using compressed air only, as it was thought that the high heat of combustion would simply fry a soft robot. The Sand Flea, another leaping robot we reported on earlier, uses compressed air to fling itself past obstacles as high as 10 meters. Using a smart valve system and a cleverly balanced chemical reaction, the researchers proved that combustion can work in soft systems as well.

As for genuine applications for this leaping silicone robot, the researchers envision their device being used in search-and-rescue operations, leaping and cartwheeling its way over any obstacles that might block its path.

The robot was documented in a paper published in the journal Nature.