Tag Archives: storage

Hard-pressed by humans, rainforests have lost their ability to act as carbon sinks

Rainforests have become too degraded to act as carbon sinks any longer, a new paper reports. Averaged across the globe, rainforests now release more greenhouse gases than they absorb, prompting the authors to call for urgent conservation efforts that would allow them to re-don the mantle of carbon sinks.

Sunrise over the jungle in Indonesia.

Image via Pixabay.

The team, composed of scientists at the Woods Hole Research Center and Boston University, took a different approach to assessing the health of rainforests. Unlike previous research, which generally focused on deforestation (the complete removal of forests), they worked to account for more subtle changes in the form of disturbance and degradation, both natural and anthropogenic. These changes include small-scale tree mortality or removal, as well as forest gains through natural or human-assisted growth.

Sadly, they report that when taking such changes in forest density into account, tropical forests lose their ability to act as net carbon sinks, meaning they emit more carbon than they can capture.

Net producers

The study quantified changes in aboveground forest carbon across tropical America, Africa, and Asia. These areas were selected because the sheer scale of their rainforests provides the greatest capacity to act as carbon stores. They’re also the most biodiverse places on the planet, providing a wealth of ecosystem services such as food, fuel, and materials to millions of people — meaning they see a lot of human activity.

The team used 12 years’ worth of satellite imagery (taken between 2003-2014), laser remote sensing technology, and measurements taken in the field to calculate losses in forest carbon from flat-out deforestation as well as the more subtle and fine-grain degradation and disturbance processes, which have previously remained unaccounted-for over large swaths of rainforest. Their findings point to a worrying, death-by-a-thousand-cuts scenario playing out in Earth’s richest ecosystems.

Overall, tropical regions have become a net source of atmospheric carbon, they report. Forests gained roughly 437 teragrams of carbon annually through growth (expressed as ‘carbon gains’), but losses amounted to a whopping 862 teragrams — meaning rainforests release a net of roughly 425 teragrams of carbon into the atmosphere yearly. Each teragram is equivalent to one trillion grams, one million metric tons, or roughly 1.1 million short tons. To put that number into context, China and the US emitted some 10,600 and 5,100 teragrams of CO2, respectively, in 2015 (29.5% and 14.3% of world emissions) — though note those figures count CO2, which weighs about 3.67 times as much as the carbon it contains.
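
For readers who want to check the arithmetic, here’s a minimal sketch (in Python) of the budget math quoted above. The teragram-to-short-ton conversion is standard; the rest are the paper’s figures.

```python
# Carbon budget arithmetic from the figures quoted above.
GAINS_TG = 437    # annual carbon gains from forest growth, in teragrams
LOSSES_TG = 862   # annual losses from deforestation + degradation/disturbance

net_source_tg = LOSSES_TG - GAINS_TG
print(f"Net annual carbon source: {net_source_tg} Tg")  # 425 Tg

# 1 Tg = 1e12 g = 1 million metric tons ~= 1.1 million short tons.
short_tons = net_source_tg * 1e6 * 1.10231
print(f"That's about {short_tons:,.0f} short tons of carbon per year")

# Degradation/disturbance accounts for 68.9% of losses, per the paper:
print(f"Degradation/disturbance losses: {0.689 * LOSSES_TG:.0f} of {LOSSES_TG} Tg")
```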

“Gains result from forest growth; losses result from deforestation and from reductions in carbon density within standing forests (degradation/disturbance), with the latter accounting for 68.9% of overall losses,” the team writes.

“In many cases throughout the tropics you have selective logging, or smallholder farmers removing individual trees for fuel wood. These losses can be relatively small in any one place, but added up across large areas they become considerable,” said WHRC scientist Wayne Walker, one of the paper’s coauthors.

Losses and gains aren’t evenly distributed, however. On a by-continent basis, the majority of losses occurred in Latin America (some 60% of loss), in the Amazon forest. Some 24% of loss was seen in Africa, and Asia experienced the least share of total losses, with a little over 16%. Degradation and disturbance were responsible for the lion’s share of continental losses in both the Americas (70% of losses) and Africa (81%), but under half (46%) in Asia. Gains were also predominantly centered in the Americas with nearly 43% of total gains, followed by Africa with 30%, and lastly Asia with 26%.

Such results are worrying, especially at a time when governments around the world are scrambling to meet their commitments under the Paris Agreement and curb climate change. The authors note that ending deforestation, degradation, and disturbance in the tropics and allowing the ecosystems to regrow would cut at least 862 teragrams of carbon per year, some 8% of global emissions. The UN already has a program in place to help preserve natural carbon sinks — REDD+ (Reducing Emissions from Deforestation and Forest Degradation) — which offers countries incentives to keep forests intact. However, it depends on regular access to accurate measurements of incremental gains and losses in forest carbon density, and research such as this will give us a better understanding of how forests function.

“These findings provide the world with a wakeup call on forests,” said WHRC scientist Alessandro Baccini, the paper’s lead author. “If we’re to keep global temperatures from rising to dangerous levels, we need to drastically reduce emissions and greatly increase forests’ ability to absorb and store carbon.”

“Forests are the only carbon capture and storage ‘technology’ we have in our grasp that is safe, proven, inexpensive, immediately available at scale, and capable of providing beneficial ripple effects — from regulating rainfall patterns to providing livelihoods to indigenous communities.”

The paper “Tropical forests are a net carbon source based on aboveground measurements of gain and loss” has been published in the journal Science.

IBM achieves 330TB storage on magnetic tape, set to improve ‘cloud’ applications

IBM scientists just reported a breakthrough in storing data on magnetic tape. Their novel storage device can hold 330 terabytes of uncompressed data or enough to store 181,500 movies. The new record-breaking prototype has an areal density (how much information can be stored on a tape’s surface) 20 times greater than that typically seen in commercial tape drives.

IBM scientist Dr. Mark Lantz holds a single one-square-inch piece of tape, which alone can store 201 gigabits. Credit: IBM Research.

In a time when computers use solid-state drives (SSDs) to store and retrieve data, it might sound odd that some scientists are so interested in magnetic tape. The HDD already looks obsolete by comparison — so aren’t magnetic tapes the dinosaurs of computer storage? Not so fast.

Indeed, the first data was stored on magnetic tape back in 1951. The first tape device, called UNISERVO, had a transfer rate of 7,200 characters per second. Its tapes were metal, measured 1,200 feet (365 meters) long, and were therefore very heavy. The technology improved quickly, leading to smaller, better magnetic storage devices like the compact cassette.

The low transfer rate of magnetic tape made it impractical in the face of CDs and hard drives, but to this day many businesses, universities, and libraries depend on it. While tapes might be rather slow by today’s standards, their biggest benefits are reliability and data integrity over long periods. Today, magnetic tape is, so to speak, the first line of defense in every important backup system. For instance, when I visited the multi-petaflops supercomputing facility at ECMWF earlier this year — one of the global leaders in weather forecasting — the most impressive sight I was allowed to see wasn’t the cabinet-sized supercomputer racks but rather a black, room-sized enclosure. Inside, I could see hundreds of small magnetic tape cartridges, each neatly arranged in a designated place, while half a dozen robotic arms were constantly taking out cartridges and placing new ones for new read/write sessions. This facility was responsible for processing more data — data that is crucial to understanding both the climate and the weather — than any of us can really fathom. HDDs are too vulnerable to data corruption, and doing the same with SSDs would cost as much as the yearly health care budget of a small Eastern European country. Tape, which is 60-year-old tech, is cheap and reliable.

“Tape has traditionally been used for video archives, back-up files, replicas for disaster recovery and retention of information on premise, but the industry is also expanding to off-premise applications in the cloud,” said IBM fellow Evangelos Eleftheriou in a press statement. “While sputtered tape is expected to cost a little more to manufacture than current commercial tape, the potential for very high capacity will make the cost per terabyte very attractive, making this technology practical for cold storage in the cloud.”

Tape storage density has skyrocketed in the last 10 years. Credit: IBM Research.

IBM’s new prototype cartridge can store 201 gigabits per square inch, an unprecedented areal recording density and the product of a multi-year collaboration with Sony. According to Sony, some of the improvements include “advanced roll-to-roll technology for long sputtered tape fabrication and better lubricant technology, which stabilizes the functionality of the magnetic tape.”

The tape developed by IBM and Sony is made of multiple layers. Credit: IBM Research.

Right now, this fancy tape prototype is about half the physical size of the 60TB Seagate SSD, the world’s largest commercially available solid-state drive. The key enabler here is sputtering — a special technique that can produce magnetic tape with magnetic grains that are just a few nanometers across, rather than tens of nanometers.
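
As a quick plausibility check on how 201 gigabits per square inch relates to a 330 TB cartridge, here’s a back-of-the-envelope sketch in Python. The half-inch tape width is my own assumption (standard for enterprise cartridges), not a figure from IBM’s announcement:

```python
AREAL_DENSITY_GBIT = 201  # demonstrated density, gigabits per square inch
CAPACITY_TB = 330         # uncompressed cartridge capacity
TAPE_WIDTH_IN = 0.5       # assumed half-inch tape, as in LTO cartridges

capacity_bits = CAPACITY_TB * 1e12 * 8
area_in2 = capacity_bits / (AREAL_DENSITY_GBIT * 1e9)
length_m = area_in2 / TAPE_WIDTH_IN * 0.0254  # inches -> meters

print(f"Tape area needed:   {area_in2:,.0f} square inches")
print(f"Tape length needed: {length_m:,.0f} m")  # ~670 m, a plausible spool
```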

For more on this breakthrough, check out the paper published in IEEE Transactions on Magnetics.

Single-atom magnets used to create data storage one million times more dense than regular hard disks

A team of researchers has created the smallest and most efficient hard drive in existence using only two atoms. This technology is currently extremely limited in the amount of data it can store, but the technique could provide much better storage when scaled up.

Image credits Michael Schwarzenberger.

Hard drives store data as magnetic fields along a disk housed inside the drive. It’s split into tiny pieces and each acts like a bar magnet, with the field pointing either up or down (1 or 0) to store binary information. The smaller you can make these areas, the more data you can cram onto the disk — but you can’t make them too small, or you risk making them unstable so the 1’s and 0’s they store can and will switch around.

What if you used magnets that remained stable even when made really tiny? Well, those of you who remember physics 101 will know that cutting a magnet in two makes two smaller magnets. Cut those in half again and you get four, then eight, and so on — but the smaller they get, the less stable they become.

But a team of researchers has now created something which seems to defy all odds: stable magnets from single atoms. In a new paper, they describe how using these tiny things they created an atomic hard drive, with the same functionality as a traditional drive, but limited to 2 bits of data storage.

Current commercially-available technology allows for one bit of data to be stored in roughly one million atoms — although this number has been reduced to 1 in 12 in experimental settings. This single-atom approach allows for one bit of data to be stored in one single atom. A scaled-up version of this system will likely be less efficient, but could increase current storage density by a factor of 1,000, says Swiss Federal Institute of Technology (EPFL) physicist and first author Fabian Natterer.
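
To make the comparison concrete, here’s the atoms-per-bit arithmetic behind those numbers — a sketch only, since practical density also depends on bit spacing and read hardware, which is why the scaled-up estimate is a factor of 1,000 rather than a million:

```python
atoms_per_bit = {
    "commercial hard disk": 1_000_000,
    "experimental 12-atom bit": 12,
    "single-atom magnet": 1,
}

baseline = atoms_per_bit["commercial hard disk"]
for medium, atoms in atoms_per_bit.items():
    print(f"{medium}: {atoms:>9,} atoms/bit "
          f"({baseline / atoms:,.0f}x denser than commercial)")
```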

Holmium bits

Looks hairy.
Image source Images of Elements / Wikipedia.

Natterer and his team used atoms of holmium, a rare-earth metal, placed on a sheet of magnesium oxide and cooled to below 5 kelvin. Holmium was selected because it has many unpaired electrons (which create a strong magnetic field) sitting in a close orbit to the atom’s nucleus (so they’re relatively well protected from outside factors). These two properties taken together give holmium a strong and stable magnetic field, Natterer explains, but they also make the element frustratingly difficult to interact with.

The team used a pulse of electric current released from the magnetized tip of a scanning tunneling microscope to flip the atoms’ field orientation — essentially writing data into the atoms. Testing showed that these atomic magnets could retain their state for several hours, with no cases of spontaneous flipping. The same microscope was then used to read the bits stored in the atoms. To double-check that the data could be read reliably, the team also devised a second read-out method: they placed an iron atom close to the magnets and tuned it so that its electronic properties depended on the orientations of the 2-bit systems. This approach allowed the team to read out multiple bits at the same time, making for a faster and less invasive method than the microscope reading technique, Otte said.

It works, but the system is far from practical. Two bits is an extremely low amount of storage compared to every other method in use. Natterer says that he and his colleagues are working on ways to make large arrays of single-atom magnets to scale up the amount of data that can be encoded into the drives.

But the merits and possibilities of single-atom magnets shouldn’t be overlooked, either. In the future, Natterer plans to observe three mini-magnets that are oriented so their fields are in competition with each other, making each other continually flip.

“You can now play around with these single-atom magnets, using them like Legos, to build up magnetic structures from scratch,” he says.

Other physicists are sure to continue research into these magnets as well.

The full paper “Reading and writing single-atom magnets” has been published in the journal Nature.

Operating system and a movie, among others, stored in DNA with no errors. The method can pack 215 petabytes of data in a single gram of DNA

Using an algorithm designed to stream videos on mobile phones, researchers showed how to maximize the data storage potential of DNA. Their method can encode 215 petabytes of data — twice as much as Google and Facebook combined hold in their servers — on a single gram of DNA. This is 100 times more than previously demonstrated. Moreover, the researchers encoded an operating system and a movie onto the DNA and were able to successfully retrieve the data from the sequenced DNA without any errors.

Photo: Public Domain.

Every day, we create 2.5 quintillion bytes of data, and at an ever-increasing pace. IBM estimates 90% of the data in the world today has been created in the last two years alone. As more and more of our lives are transcribed in digital form, this trend will only be amplified. One problem is that hard drives and magnetic tapes will soon become inadequate for storing such vast amounts of ‘big data’ — which is where DNA comes in.

It’s often called the ‘blueprint of life’, for obvious reasons. Every cell in our bodies, every instinct, is coded in sequences of A, G, C, and T, DNA’s four nucleotide bases. Ever since James Watson and Francis Crick described DNA’s double-helix structure in 1953, scientists have realized huge quantities of data could be stored at high density in only a few molecules. Additionally, DNA can remain stable for a long time, as shown by a recent study that recovered DNA from a 430,000-year-old human ancestor found in a cave in Spain.

“DNA won’t degrade over time like cassette tapes and CDs, and it won’t become obsolete — if it does, we have bigger problems,” said study coauthor Yaniv Erlich, a computer science professor at Columbia Engineering and a member of Columbia’s Data Science Institute.

Erlich and colleagues at the New York Genome Center (NYGC) chose six files to write into DNA: a full computer operating system, the 1895 French film “Arrival of a Train at La Ciotat,” a $50 Amazon gift card, a computer virus, a Pioneer plaque, and a 1948 study by information theorist Claude Shannon.

All the files were compressed into a single master file, then split into short strings of binary code, all 1s and 0s. The researchers used a technique called fountain codes, which Erlich remembered from graduate school, to make the reading and writing process more efficient. Using the algorithm, they packaged the strings randomly into ‘droplets’ and mapped the binary data onto DNA’s four nucleotide bases (A, G, C, T), two bits per base. The algorithm proved essential for storing and retrieving the encoded data, since it screens out letter combinations known to provoke errors.
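
The real DNA Fountain pipeline also adds error-correcting codes and screens out strands with homopolymer runs or skewed GC content, but the two core ideas — XOR-combined ‘droplets’ and the two-bits-per-base mapping — can be sketched in a few lines of Python. This is an illustrative toy, not the authors’ code:

```python
import random

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}

def make_droplet(segments, seed):
    """Fountain-style droplet: XOR a pseudo-random subset of equal-length
    bit-string segments, keyed by a seed the decoder can reproduce."""
    rng = random.Random(seed)
    subset = [s for s in segments if rng.random() < 0.5] or [segments[0]]
    acc = 0
    for s in subset:
        acc ^= int(s, 2)
    return format(acc, f"0{len(segments[0])}b")

def bits_to_dna(bits):
    """Map binary to nucleotides, two bits per base."""
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

segments = ["00101101", "11100001", "01010111"]
droplet = make_droplet(segments, seed=42)
print(droplet, "->", bits_to_dna(droplet))
```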

Once they finished, they ended up with a huge digital file listing 72,000 DNA strands, each 200 bases long. The file was sent to Twist Bioscience, a San Francisco startup, which turned all that digital data into biological data by synthesizing the DNA. Two weeks later, Erlich received a vial containing DNA that encoded all of his previous work.

Overview of the data encoding and decoding process in DNA. Credit: Harvard Uni.

To retrieve the files, the researchers used common DNA sequencing tools, plus software that translates all the As, Gs, Cs, and Ts back into binary. The whole process worked flawlessly: the data was retrieved with no errors.

To demonstrate, Erlich installed the operating system he had encoded in the DNA on a virtual machine and played a game of Minesweeper to celebrate. Chapeau!

“We believe this is the highest-density data-storage device ever created,” said Erlich.

Erlich didn’t stop there. He and colleagues showed that you could copy the encoded data as many times as you wish. To copy the data, it’s just a matter of multiplying the DNA sample through polymerase chain reaction (PCR). The team showed that copies of copies of copies could have their data retrieved as error-free as the original sample, as reported in the journal Science.

Yaniv Erlich and Dina Zielinski describe a new coding technique for maximizing the data-storage capacity of DNA molecules. Credit: New York Genome Center

There are some caveats, however, which I should mention. It cost $7,000 to synthesize the DNA and another $2,000 to read it. But it’s also worth keeping in mind that sequencing DNA is getting exponentially cheaper. It cost us $2.7 billion and 15 years of work to sequence the first human genome, then starting from 2008 the cost came down from $10 million to the couple-thousand-dollar mark. Sequencing DNA might become as cheap as running electricity through transistors at some point in the not very distant future.

Another thing we should mention is that DNA storage isn’t meant for mundane use. As it stands today, you can’t replace the HDD in a home computer with DNA, for instance, because reads and writes can take days. Instead, DNA might be our best solution for archiving the ever-growing troves of data we produce each day. And who knows, maybe someone will find a way to encode and decode data in molecules as fast as electrons zip through a transistor — but that seems highly unlikely, if not impossible.

Non-toxic, non-corrosive flow battery could last for more than a decade

Credit: Pixabay

A new kind of flow battery can store energy in organic molecules dissolved in neutral-pH water, unlike previous models that required expensive and corrosive electrolytes. Strikingly, tests suggest this flow battery might have an incredible lifetime, losing only one percent of its capacity per 1,000 cycles. In contrast, a typical commercially available lithium-ion battery — the kind fitted inside your phone or notebook — doesn’t last for more than 1,000 cycles, or about two years of operation.

It could make stored wind or solar competitive with energy from conventional power plants

A flow battery is very similar in construction to a fuel cell — an electrochemical cell where a dissolved electrolyte solution reversibly converts chemical energy into electrical energy — with the key distinction that the ionic solution is stored outside of the cell and can be fed into the cell to generate electricity. This is of great interest to engineers since it decouples power (set by the size of the cell stack) from energy capacity (set by the size of the tanks), which are coupled in traditional batteries.

This solution is stored in a tank — the bigger the tank, the greater the battery’s capacity. Immediately, this sounds like a fantastic opportunity to solve the storage issue of large-scale renewable energy power. However, deployment is limited because flow batteries typically employ aggressive electrolytes that degrade the battery after a while and thus reduce storage performance. Periodic maintenance is required and replacing the electrolyte is not cost effective.
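
The decoupling is easy to see in a short sketch: stored energy scales with tank volume and electrolyte concentration alone, while the cell stack only sets how fast you can draw it. All values below are illustrative assumptions, not figures from the Harvard paper:

```python
F = 96485           # Faraday constant, coulombs per mole of electrons
n = 1               # electrons transferred per molecule (assumed)
conc_mol_L = 1.0    # electrolyte concentration, mol/L (assumed)
tank_L = 1000       # tank volume in liters (assumed)
cell_voltage = 1.2  # cell voltage in volts (assumed)

charge_C = n * F * conc_mol_L * tank_L
energy_kWh = charge_C * cell_voltage / 3.6e6  # joules -> kWh
print(f"~{energy_kWh:.0f} kWh per {tank_L} L tank")  # scales linearly with the tank
```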

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have devised a flow battery that can not only last a long time but requires minimal upkeep as well.

“Because we were able to dissolve the electrolytes in neutral water, this is a long-lasting battery that you could put in your basement,” said lead author Roy Gordon, the Thomas Dudley Cabot Professor of Chemistry and Professor of Materials Science.

“If it spilled on the floor, it wouldn’t eat the concrete and since the medium is noncorrosive, you can use cheaper materials to build the components of the batteries, like the tanks and pumps.”

Until now, flow batteries relied on chemicals that are expensive and difficult to maintain, like vanadium, and/or rare noble-metal electrocatalysts like platinum. But because its electrolytes are water-soluble, the flow battery developed at Harvard is able to work with cheaper, alternative chemicals.

First, the team had to identify why previous chemicals would degrade when left in a neutral solution. Through some molecular modeling and good ol’ fashioned trial and error, Eugene Beh, a postdoc and first author of the new paper published in ACS Energy Letters, was able to understand why the molecule viologen was degrading in the negative electrolyte. It was then a matter of modifying its molecular structure to make the molecule more resilient in the neutral-pH aqueous solution.

[ALSO SEE] What are batteries — everything you need to know

For the positive electrolyte, a similar approach was used. Thanks to its electrochemical properties, ferrocene can act as a fantastic positive electrolyte. The downside, however, is that it’s not natively soluble in water.

“Ferrocene is great for storing charge but is completely insoluble in water,” said Beh. “It has been used in other batteries with organic solvents, which are flammable and expensive.”

Just like with the viologen, the researchers were able to modify ferrocene’s structure to turn it into a highly water-soluble electrolyte that could also be cycled stably.

Besides reducing upkeep costs, the new design also makes expensive ion-selective membranes unnecessary. These are essential for separating the electrolytes, and because the membrane needs to withstand the aggressive chemistry inside the cell, it can be very expensive — up to one-third of the flow battery’s cost. Because the new design essentially uses salt water on both electrolytic sides, the expensive polymer membrane can be replaced with a cheap hydrocarbon variety.

These batteries aren’t meant to fit in your mobile, though. You’d literally need an external tank filled with liquids, and that simply wouldn’t work for a mobile phone. They’re intended for large-scale energy storage, backing up power plants and making the grid more resilient. Where they shine, however, is in their potential to change how we use renewable energy. One of the Department of Energy’s (DOE) goals is to have a battery that can store energy for less than $100 per kilowatt-hour. At this price tag, stored wind and solar energy become competitive on the market with energy produced by conventional baseload power plants.

“If you can get anywhere near this cost target then you change the world,” said Michael Aziz, the Gene and Tracy Sykes Professor of Materials and Energy Technologies, who co-led the research with Roy Gordon. “It becomes cost effective to put batteries in so many places. This research puts us one step closer to reaching that target.”

“This work on aqueous soluble organic electrolytes is of high significance in pointing the way towards future batteries with vastly improved cycle life and considerably lower cost,” said Imre Gyuk, Director of Energy Storage Research at the Office of Electricity of the DOE. “I expect that efficient, long duration flow batteries will become standard as part of the infrastructure of the electric grid.”

Running out of storage on your iPhone? Download a big app, Lifehacker says

Running out of space on your smartphone? If you own an Apple device, Thorin Klosowski at Lifehacker might have just the trick for you — leave your photos and music be, and just download one huge app.

Image credits Cheon Fong Liew / Flickr.

It seems counterintuitive (but then again, that’s probably why it qualifies as a “hack”), but downloading one huge app when you’re short on space on an iPhone can actually free up space for you to use.

The method relies on how iOS manages storage. When faced with a huge download and a lack of space, iOS starts clearing through the device’s memory to make room for the app — and the first things it goes for are unused app data and caches.

Head over to Settings > General > Storage & iCloud Usage > Available to see how your storage is being freed up. After the software makes enough room available, you can simply cancel the download or delete the app after it installs. Lifehacker suggests using the Hearthstone app for this, as it is quite large at 1.89 GB and will prompt the cleaning process, but is free to download.

“Go ahead and cancel the Hearthstone download after the space is freed up or delete it when it’s finished downloading,” said Thorin Klosowski.

“Doing this, I went from 700MB available to 2.29GB.”

New method developed to encode huge quantity of data in diamonds

A team from the City College of New York has developed a method to store data in diamonds by using microscopic defects in their crystal lattice.

Image credits George Hodan / Publicdomainpictures

I’ve grown up on sci-fi where advanced civilizations stored immense amounts of data in crystals (like Stargate SG-1. You’re welcome). Now a U.S. team could bring the technology to reality, as they report exploiting structural defects in diamonds to store information.

“We are the first group to demonstrate the possibility of using diamond as a platform for the superdense memory storage,” said study lead author Siddharth Dhomkar.

It works similarly to how CDs or DVDs encode data. Diamonds are made up of a cubic lattice of carbon atoms, but sometimes an atom just isn’t there, and the structure is left with a hole — a structural defect. When a nitrogen atom sits next to such a vacancy, the pair is known as a nitrogen-vacancy center.

These vacancies are negatively charged (as there are no protons to offset the electrons’ charge from neighboring atoms). But, the team found that by shining a laser on the defects — in essence neutralizing their electrical charge — they could alter how each vacancy behaved. Vacancies with a negative charge fluoresced brightly, while those with neutral charges stayed dark. The change is reversible, long-lasting, and stable under weak and medium levels of illumination, the team said.

So just as a laser can be used to encode data on a CD’s medium, it can be used to store data in diamond by changing these defects’ charges. In theory, this method could allow scientists to write, read, erase, and re-write the diamonds, the team added.

Dhomkar said that in principle, each bit of data can be encoded in a spot a few nanometers — a few billionths of a meter — wide. This is a much denser packing of information than in any similar data-storage device, so we could use diamonds to build the superdense computer memories of the future. But we have no way to read or write on such a small scale yet, so currently “the smallest bit size that we have achieved is comparable to a state-of-the-art DVD,” Dhomkar told Live Science.
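
For a rough sense of what shrinking the bit from DVD scale to a few nanometers would buy, here’s the scaling arithmetic with assumed feature sizes (the article gives no exact figures):

```python
dvd_bit_nm = 300  # assumed feature size of a DVD-class optical bit
nv_bit_nm = 5     # "a few nanometers" per bit, in principle

# Areal density scales with the inverse square of the bit size.
ratio = (dvd_bit_nm / nv_bit_nm) ** 2
print(f"~{ratio:,.0f}x more bits per 2D layer")  # ~3,600x, before any 3D stacking
```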

Here’s where the second “but” comes into the picture: we can’t yet fully use the diamonds’ capacity, but the team has shown they can encode data in 3D by stacking layers of 2D data stores.

“One can enhance storage capacity dramatically by utilizing the third dimension,” Dhomkar said.

By using this 3D approach, the technique could be used to store up to 100 times more data than a typical DVD. Dhomkar and his team are now looking into developing ways to read and write the diamond stores with greater density.

“The storage density of such an optimized diamond chip would then be far greater than a conventional hard disk drive,” he said.

The full paper “Long-term data storage in diamond” has been published in the journal Science Advances.

Keeping coffee in the fridge enhances its flavor, besides keeping it fresh

A new study found there are some added benefits to keeping coffee in the fridge, which not even the best baristas may know about. Namely, the colder storage temperature enhances flavor.

Credit: Wikimedia Commons

To see how storage affects the quality of coffee, researchers at the University of Bath collaborated with a local café and ground coffee beans stored at various temperatures. The beans were stored on the counter (room temperature), in a fridge, freezer and, to push to the extreme, in vats of liquid nitrogen (-196 degrees Celsius).

The flavor of coffee depends not only on the quality of the beans themselves, but also on how they are ground. Like anything in the kitchen, brewing coffee ultimately comes down to chemistry. The smaller the coffee particles, the better the flavor can be extracted by the hot water, because the particles expose more surface area. An even particle distribution also helps extract the flavor better.

“What you’re looking for is a grind that has the smallest difference between the smallest and largest particle,” noted Christopher Hendon, a chemistry PhD student at the University of Bath. “If you have small grinds you can push flavor extraction upwards. We found that chilling the beans tightens up this process and can give higher extractions with less variance in the flavor—so you would have to brew it for less time or could get more coffee from the same beans.”
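
The surface-area argument is simple geometry: for roughly spherical particles, surface area per unit volume scales as 1/r, so halving the particle size doubles the area exposed to the water. A quick illustration with made-up particle sizes:

```python
# Sphere: surface/volume = (4*pi*r^2) / (4/3*pi*r^3) = 3/r
for radius_um in (500, 250, 125):
    sa_to_vol = 3 / radius_um
    print(f"r = {radius_um:>3} um -> SA/V = {sa_to_vol:.4f} per um")
```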

Grinding coffee the proper way might not be at the top of your list where brewing is concerned, but such subtleties can make all the difference at some point — in this case, the difference between a stale and a rejuvenating brew. For the industry, though, these findings could help coffee companies with their turnover: uniform, cold beans can cut down on waste because more flavor is extracted.

Among baristas, though, the findings are sure to stir controversy. Many believe freezing coffee beans is bad because it cracks the beans, attracts water from the freezer, promotes accelerated degradation after the beans are thawed, or breaks down the flavor oils in the beans. None of these views is supported by science, and there are as many favored storage methods as there are professional baristas — each brewer thinks their method is right and the others are wrong.

As far as I can tell, this is the only study which actually investigates the relationship between freezing beans and flavor. So, at the end of the day you’re welcome to try any method you’d like, but if you’re serious about science-based coffee brewing, keep those beans in the freezer.

“The research suggests that temperature of bean needs to be more constant to help us achieve consistent grinds. It suggests that cooler temperatures will allow us to maximise surface area and utilise more of the coffee. All of this will impact on how we prepare coffee in the industry, I bet we will see the impact of this paper in coffee competitions around the globe, but also in the research and development of new grinding technology for the market place,” said Maxwell Colonna-Dashwood, the owner of the cafe which worked with the researchers.

Harvard team turns bacteria into living hard drives

A research team from Harvard University, led by Seth Shipman and Jeff Nivala, has developed a novel method of writing information into the genetic code of living bacterial cells. The cells pass the information on to their descendants, and it can later be read out by genotyping the bacteria.

Storing information in DNA isn’t a new idea — for starters, nature’s been doing it for a long, long time now. Researchers at the University of Washington have also shown that we can synthetically manufacture DNA in the lab and write any information we want into it — and to prove it, they encoded a whole book and some images into DNA strands. But combining the two methods into an efficient data storage process has proven beyond our grasp until now.

“Rather than synthesizing DNA and cutting it into a living cell, we wanted to know if we could use nature’s own methods to write directly onto the genome of a bacterial cell, so it gets copied and pasted into every subsequent generation,” says Shipman. “But working within a living cell is an entirely different story and challenge.”

The team exploited an immune response certain bacteria use to protect themselves from viral infection, called the CRISPR/Cas system. When the bacteria are attacked by viruses, they physically cut out a segment of the invaders’ DNA and paste it into a specific region of their own genome. This way, if that same virus attacks again, the bacteria can identify it and respond accordingly. Plus, the cell passes this information over to its progeny, transferring the viral immunity to future generations.

The geneticists found that if you introduce a piece of genetic data that looks like viral DNA into a colony of bacteria with the CRISPR/Cas system, they will incorporate it into their genetic code. So Shipman and Nivala flooded a colony of E. coli bacteria that has this system with loose segments of viral-looking DNA strands, and the cells gulped it all up — essentially becoming tiny, living hard drives.

The segments used were arbitrary strings of A, T, C, G nucleotides with chunks of viral DNA at the end. Shipman introduced one segment of information at a time and let the bacteria do the rest, storing away information like fastidious librarians.

Conveniently enough, the bacteria store new immune system entries sequentially, with earlier viral DNA recorded before that of more recent infections.

“That’s quite important,” Shipman says. “If the new information was just stored randomly, that wouldn’t be nearly as informative. You’d have to have tags on each piece of information to know when it was introduced into the cell. Here it’s ordered sequentially, like the way you write down the words in a sentence.”

Bugs with the bugs

One issue the team ran into is that not all of the bacteria record every strand of DNA introduced to the culture. So even if you introduce the information step by step, let’s say the numbers from 1 to 5, some bacteria would have “12345” but others may only have “12” or “245” and so on. But Shipman thinks that because you can rapidly genotype thousands or millions of bacteria in a colony and because the data is always stored sequentially, you’ll be able to clearly deduce the full message even with these errors.
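
The reconstruction Shipman describes amounts to merging many order-preserving fragments of the same message. Here’s a toy sketch of that idea using a topological sort over pairwise precedence (Python 3.9+); the real analysis works statistically over thousands of noisy genotyped cells:

```python
from itertools import combinations
from graphlib import TopologicalSorter  # available since Python 3.9

# Each 'read' is the partial, order-preserving record from one cell.
reads = ["12345", "12", "245", "135", "1345"]

# Collect pairwise precedence: in each read, earlier symbols precede later ones.
predecessors = {}
for read in reads:
    for a, b in combinations(read, 2):
        predecessors.setdefault(b, set()).add(a)

order = "".join(TopologicalSorter(predecessors).static_order())
print(order)  # "12345" -- the full message, recovered from partial records
```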

Shipman adds that the 100 bytes his team demonstrated are nowhere near the limit. Cells like the microorganism Sulfolobus tokodaii could potentially store more than 3,000 bytes of data. And with synthetic engineering, you could design specialized hard-drive bacteria with vastly expanded regions of their genetic code, able to rapidly upload vast amounts of data.

IBM ups storage for next-gen memories hundreds of times faster than your SSD

A team from IBM Research dramatically increased the storage capacity of an alternative memory structure called phase-change memory (PCM) to 3 bits of data per cell. Many specialists think that PCM is the future, in a way similar to how flash is replacing hard drives.

The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a 4- bank interleaved architecture. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip serving as a characterization vehicle in 90 nm CMOS baseline technology. Credit: IBM Research

To store digital information made up of 1s and 0s, a predictable physical change has to take place. Inside an HDD, a “head” moves over a spinning platter with a thin magnetic coating, writing 0s and 1s as tiny magnetic north and south poles. To read the data, the head hovers over the spots and recovers the 0s and 1s by sensing each spot’s orientation. Flash drives are storage devices like hard drives but with one key difference: instead of a head and platter, they use flash memory chips that are small, light, and have no moving parts.

To work with bits, basically, you need a state or phase change that says this thing is turned “on” or “off”, “up” or “down”, and so on. Phase-change memory (PCM) works by reading two stable states of a material that can be either “amorphous” (no clearly defined atomic structure) or “crystalline” (ordered atomic structure). A PCM device actually reads the resistance of the two states (high for amorphous, low for crystalline), which changes with an applied current. A high or medium voltage is applied to the material to “write” and a low voltage to “read”.

Like HDDs and flash drives, PCM is non-volatile, meaning the information remains as initially stored even after the electrical power is cut. Phase change memory has much lower latency than NAND, much faster read/write times and can endure at least 10 million write cycles. The average flash USB stick fails after 3,000 write cycles.
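
Three bits per cell means the read electronics must resolve 2^3 = 8 distinct resistance levels between the crystalline and amorphous extremes. A toy read function with made-up resistance values — IBM’s actual scheme also has to handle drift and noise:

```python
import math

N_BITS = 3
LEVELS = 2 ** N_BITS     # 8 resistance levels for 3 bits per cell
R_MIN, R_MAX = 1e3, 1e6  # assumed crystalline/amorphous resistances, ohms

def read_cell(resistance_ohm):
    """Quantize a measured resistance to a 3-bit level (log-spaced bins,
    since the two states differ by orders of magnitude)."""
    frac = (math.log10(resistance_ohm) - math.log10(R_MIN)) / (
        math.log10(R_MAX) - math.log10(R_MIN))
    return min(LEVELS - 1, max(0, round(frac * (LEVELS - 1))))

print(read_cell(5e4))  # a mid-range resistance -> level 4 of 0..7
```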

“Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry,” said Dr. Haris Pozidis, an author of the paper, which was published by the IEEE. “Reaching three bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash.”

“Combined these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling,” said Dr. Evangelos Eleftheriou, IBM Fellow.

Flash memory chip built from atom-thick components

If you still don’t know what graphene is, you’d better learn pretty soon – because it’s the stuff of the future.

Graphene-based flash drives could make this type of device obsolete.

Graphene is a substance composed of pure carbon, with atoms arranged in a regular hexagonal pattern similar to graphite, but in a one-atom-thick sheet. OK, so what’s so special about it? Because it is only one atom thick, it doesn’t really behave like a 3D material, which gives it some unique, very useful properties that could be applied in a myriad of fields (including headphones, apparently).

The thing is, graphene’s success has spurred work on other single-atom-thick materials. This time, researchers have used two of these materials — graphene and molybdenum disulfide — and put them together with some more traditional components to make a flash memory device. The device is still in its very early phases, with some of the components being manually assembled under a microscope, but it already shows some excellent properties, like the potential to store more than one bit per device (bear in mind, these components are one atom thick) for more than 10 years.

The device basically consists of two electrodes that feed current through a semiconductor within the device. To make it as compact as possible, researchers started with two sheets of graphene layered on some silicon, separated by a small gap. These sheets served as electrodes, and the next layer on top was the semiconductor formed by a single-molecule-thick sheet of molybdenum disulfide. After that, it was all insulation.

The only thing thicker than an atom was the insulation — even though atomically thin insulators do exist.

Via ArsTechnica

Using DNA as a storage device – 100 million hours of HD video in every cup

I remember years ago, when I got my first computer — it had a storage capacity of 40 MB. A few years after that, I got a 1 GB hard drive, and nowadays 1 TB is quite the standard — growth by a factor of about 250,000. The development of data storage capacity has slowed in the last couple of years, however, and researchers are still trying to find the next big thing. As a matter of fact, the next big thing could actually be biological (our DNA, to be more precise). Researchers have shown that a single cup of DNA can store 100 million hours of HD video — and these are just the first results.

Biological systems have been using DNA as an information storage molecule for billions of years — after all, it holds the information that makes you human, as opposed to, say, a badger. Vast amounts of data can be stored even in microscopic volumes, so it’s only natural to start looking here. Could this actually be the ultimate solution?
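
As a rough conversion of the ‘100 million hours’ claim above — with an assumed HD bitrate, since the article doesn’t give one:

```python
hours = 100e6    # "100 million hours of HD video"
gb_per_hour = 2  # assumed HD bitrate of ~2 GB per hour

total_pb = hours * gb_per_hour / 1e6  # 1 PB = 1e6 GB
print(f"~{total_pb:,.0f} PB of raw video in that cup")  # ~200 PB
```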

However, it’s very hard to “make” DNA carry the information you want, as researchers from the EMBL-European Bioinformatics Institute (EMBL-EBI) found out. In this week’s edition of Nature, they describe a new technique that stores, reads and writes data using DNA. The research was led by Nick Goldman and Ewan Birney.

“We already know that DNA is a robust way to store information because we can extract it from wooly mammoth bones, which date back tens of thousands of years, and make sense of it. It’s also incredibly small, dense and does not need any power for storage, so shipping and keeping it is easy,” Goldman said in a statement.

The method is complex, and to accomplish their goals, they enlisted the help of bio-analytics instrument maker Agilent Technologies, a Hewlett-Packard spin-off, to synthesize DNA from encoded digital information — in this case, an MP3 of Martin Luther King’s “I Have a Dream” speech. Quite a suitable tune.

“We knew we needed to make a code using only short strings of DNA, and to do it in such a way that creating a run of the same letter would be impossible,” Goldman explained. “So we figured, let’s break up the code into lots of overlapping fragments going in both directions, with indexing information showing where each fragment belongs in the overall code, and make a coding scheme that doesn’t allow repeats. That way, you would have to have the same error on four different fragments for it to fail—and that would be very rare.”
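
The ‘no repeats’ part of the scheme can be sketched compactly: write the data as base-3 digits, then map each digit onto one of the three bases that differ from the letter just written. This is a simplified illustration of the rotation idea only — the published scheme also Huffman-codes the input and adds the indexed, overlapping fragments Goldman describes:

```python
def trits_to_dna(trits, prev="A"):
    """Encode base-3 digits as DNA with no letter repeated twice in a row."""
    out = []
    for t in trits:
        # The three candidates always exclude the previous base, so
        # homopolymer runs (e.g. 'AA') can never occur.
        candidates = [b for b in "ACGT" if b != prev]
        prev = candidates[t]
        out.append(prev)
    return "".join(out)

print(trits_to_dna([0, 1, 2, 0, 0, 1]))  # "CGTACG" -- no repeated letters
```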

Another good sign was the sturdiness of the DNA storage system. According to Agilent’s Emily Leproust, who helped synthesize the data into DNA, the sample — which looked “like a tiny piece of dust” — can last for at least 10,000 years.

“We’ve created a code that’s error tolerant using a molecular form we know will last in the right conditions for 10,000 years, or possibly longer. As long as someone knows what the code is, you will be able to read it back if you have a machine that can read DNA,” Goldman said.

Though, technically speaking, the study involved less than a megabyte of data in total, this is already a scalable result, a few orders of magnitude better than previous studies. And the advantages of DNA over both printed text and traditional hard drives are numerous: it is stable for very long periods of time, it requires no power (which makes it easy to transport and maintain), and, most of all, it can carry larger amounts of data than the alternatives.

Via The Conversation

Self-assembling polymer increases HDD memory capacity by a factor of five

Data storage has reached great heights in the past two decades. A typical PC hard drive can now hold the equivalent of thousands of CDs and millions of floppy disks (who else remembers these?). However, magnetic hard drive developers have almost reached the physical limit of how densely they can cram in data. Researchers at the University of Texas at Austin used a novel technique that makes use of self-assembling polymers to create the smallest magnetic dots in the world. Their results show that hard disk storage can be increased by a factor of five.

The future's fingerprint: organised structures for denser hard drives. Image: University of Texas at Austin

Magnetic hard drives store information by inscribing zeros and ones as magnetic dots on a continuous metal surface. The closer you position these dots to one another, the more information you can cram in. However, there’s little room left to maneuver nowadays, as the maximum density of dots has almost been reached: any closer positioning would cause the dots to become unstable under their neighbors’ magnetic fields.
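
To get a feel for the scales involved, here’s the dot-pitch arithmetic for the industry’s current density (about a terabit per square inch, as Willson notes below) and for a five-fold jump, assuming a simple square grid (an illustration, not the paper’s geometry):

```python
import math

IN_TO_NM = 25.4e6  # one inch = 25.4 mm = 25.4 million nanometers

for density_tbit_in2 in (1, 5):
    bits_per_in2 = density_tbit_in2 * 1e12
    pitch_nm = math.sqrt(IN_TO_NM ** 2 / bits_per_in2)
    print(f"{density_tbit_in2} Tb/in^2 -> ~{pitch_nm:.0f} nm dot pitch")
# 1 Tb/in^2 -> ~25 nm; 5 Tb/in^2 -> ~11 nm, right where 9 nm dots come in.
```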

“The industry is now at about a terabit of information per square inch,” said Willson, who co-authored the Science paper with chemical engineering professor Christopher Ellison and a team of graduate and undergraduate students. “If we moved the dots much closer together with the current method, they would begin to flip spontaneously now and then, and the archival properties of hard disk drives would be lost. Then you’re in a world of trouble. Can you imagine if one day your bank account info just changed spontaneously?”

There’s a workaround, however. If you can isolate each individual dot from the others, then you can bypass the magnetic field issue and increase the dot density, and in turn the storage. This is where the scientists worked their magic, using directed self-assembly (DSA) — a method pioneered at the University of Wisconsin and MIT.

“I am kind of amazed that our students have been able to do what they’ve done,” said Willson. “When we started, for instance, I was hoping that we could get the processing time under 48 hours. We’re now down to about 30 seconds. I’m not even sure how it is possible to do it that fast. It doesn’t seem reasonable, but once in a while you get lucky.”

Previous attempts have rendered dot densities just high enough to double the storage density of disk drives. That’s pretty impressive, but Ellison and co. went way higher. They’ve synthesized block copolymers that self-assemble into the smallest dots in the world — 9 nanometers, or just about the size of a protein. These were attached to a guiding surface that had dots and lines etched onto it. While the polymers were self-assembling into position, a special top coat that goes over the block copolymers was introduced. This top coat allows the polymers to achieve the right orientation relative to the plane of the surface simply by heating.

“The patterns of super small dots can now self-assemble in vertical or perpendicular patterns at smaller dimensions than ever before,” said Thomas Albrecht, manager of patterned media technology at HGST. “That makes them easier to etch into the surface of a master plate for nanoimprinting, which is exactly what we need to make patterned media for higher capacity disk drives.”

Now Ellison and his team of graduate students are working together with HGST to see how this process can be implemented in the current manufacturing processes. Their findings were reported in the journal Science.

Is this the farthest we can go with magnetic storage? Well, I wouldn’t worry too much about it, if I were you. Solid-state drives are the next generation of storage media — though “next generation” might not be the best term here, since they’ve been commercially available for years and only recently started to catch on with the public.

Solid-state drives are made from silicon microchips and store data electronically instead of magnetically, as spinning hard disk drives or magnetic oxide tape do. They’re faster, lighter, and a lot more reliable than magnetic hard drives; the only impediment is that they’re currently roughly four times as expensive, but like all things in tech they’ll become reasonable enough for the general public in no time. Say goodbye to your HDD.

IBM develops smallest storage device: 12 atoms for a single bit!

Each little green bump is an atom of ferromagnetic material. Together, the 12 atoms captioned above form an array capable of storing one bit of information. (c) IBM

Moore’s law states that computing power should double roughly every two years, and so far the postulate has held for more than 50 years. A group of IBM scientists has now managed to develop a data storage technique which allows information to be stored with as few as 12 atoms per bit — tens of thousands of times fewer atoms than currently required. Gordon Moore, the Intel co-founder, would have been proud.

Storing a single bit of data on a disk drive requires about one million atoms of magnetized storage medium, and that figure applies only to the most advanced storage devices available today. The new research from IBM suggests that, in the not-so-distant future, storage devices could be developed at 1/83,000th the scale of today’s disk drives.
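
That 1/83,000 figure is just the ratio of atoms per bit, as a quick check shows:

```python
atoms_conventional = 1_000_000  # ~a million atoms per bit on advanced media
atoms_new = 12                  # IBM's antiferromagnetic bit

print(f"Scale factor: 1/{atoms_conventional // atoms_new:,}")  # 1/83,333
print(f"Atoms per byte: {8 * atoms_new}")  # the 96-atom byte described below
```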

“Magnetic materials are extremely useful and strategically important to many major economies, but there aren’t that many of them,” said Shan X. Wang, director of the Center for Magnetic Nanotechnology at Stanford University. “To make a brand new material is very intriguing and scientifically very important.”

Current magnetic storage devices — like the hard drive in your computer, which allows information like this website’s to be read and stored — are made out of ferromagnetic materials like iron or nickel. When these materials are exposed to a magnetic field, their magnetic poles line up in the same direction. This has worked very well so far for conventional hard drives and microchips, but where miniaturization is concerned, at a certain scale the bits start to interfere with each other. Antiferromagnetism works in the opposite direction, with one highly important distinction: the magnetic moments of unpaired electrons don’t align in the same direction. Thus, in manganese oxide, a material that works well for this, atoms align head to foot, such that the north magnetic pole of each atom seeks the south magnetic pole of its neighbor.

Using antiferromagnetism, the team of researchers from IBM’s Almaden Research Center, led by Andreas Heinrich, managed to create a swathe of material with a much denser magnetic palette than conventional ferromagnetic devices. The researchers used a scanning tunneling microscope — a device the size of a washing machine — not only to image matter at the atomic scale, but also to accurately position individual atoms, engineering an array of 12 antiferromagnetically coupled atoms. This is the smallest number of atoms with which one can create a magnetic bit in which it is possible to store information.

Heading towards a golden computing age

An atomically assembled array of 96 iron atoms containing one byte of magnetic information in antiferromagnetic states. (c) IBM Research-Almaden

To demonstrate the antiferromagnetic storage effect, the IBM researchers created a computer byte, the equivalent of one character, out of an individually placed array of 96 atoms. They then used the array to encode the IBM motto “Think” by repeatedly programming the memory block to store representations of its five letters. Also, as if the sheer scale of this technology wasn’t amazing enough, the IBM researchers observed that, in very small arrays, the atoms display some quantum mechanical characteristics — simultaneously existing in both “spin” states, in effect 1 and 0 at the same time. This could have remarkable implications for the development of quantum computing.

This technology might take many years for regular consumers to experience

Now, although this latest gem from IBM will allow for storage devices to be built at a fraction of the current size and power consumption, don’t expect it to become commercially available to the general public for a long while. The researchers were only able to hold on to a data bit for several hours, at a temperature close to absolute zero and under other conditions remarkably difficult to achieve. Manufacturing-wise, it will also probably take some time before an automated method of arranging, placing, and manipulating individual atoms into the proper arrays is developed.

“It took a room full of equipment worth about 1 million dollars and a whole lot of sweat” to get the 96-atom configuration to work, Heinrich said. “The atoms are in a very regular pattern because we put them there. Nobody knows how to make that cost effective in manufacturing… that’s the core issue of nanotechnology.”

via NYT

Japanese project aims to turn CO2 into natural gas

Mankind is screwing up. I’m sorry, that’s just the way it is. Not taking care of our natural resources, polluting and destroying habitats, it’s obvious that we, as a species, made some pretty big mistakes, the combined effects of which will come back to haunt us (and already are). But that’s not to say that we’re doomed or something – on the contrary. We can and have to stop these damaging processes and reverse them as much as possible, but that’s not so easy; it’s like U-turning when you’re running at full speed, hard as hell.

Finding a way to store or transform the CO2 is among the top priorities in this fight that we are in. If we can come even close to Al Gore’s challenge, we have to first come up with some innovative and efficient methods, and then apply them as quickly as possible.

One such project was presented by Japanese researchers from the Japan Agency for Marine-Earth Science and Technology. The team, led by Fumio Inagaki, announced that they intend to employ the help of bacteria to transform carbon dioxide into regular natural gas. He said that such bacteria exist ‘deep under the seabed off the northern tip of Japan’s main island’. However, the major difficulty here will be to find a way to ‘train’ the bacteria to become more and more effective, accelerating the process of creating methane gas.

They announced that a few years from now they will be able to shorten the transformation period to 100 years; this may not seem spectacular at all, but it really is! As far as I know, this is the first viable idea for not only disposing of unwanted CO2 but also transforming it into something useful, basically killing two birds with one stone. It won’t have immediate results and does not eliminate the need for CO2 storage, but rather suggests what we can do with the CO2 after it’s stored, making it a long-term solution.