Tag Archives: IBM

IBM releases the world’s first quantum-safe tape drive

Someday, quantum computers should help scientists solve problems that are intractable using classical computers. But this also means they'll be able to compute things we would typically like to remain unsolvable, such as encryption algorithms.

Already looking into the future, researchers at IBM Tape Development in Tucson and IBM Research Zurich have devised the first tape drive that can keep data secure against state-of-the-art quantum decryption techniques.

The new IBM quantum computing-safe tape drive prototype is based on a state-of-the-art IBM TS1160 tape drive. Credit: IBM Research Zurich.

Although magnetic tape was invented many decades ago, it’s still being used by many enterprises to store highly valuable data.

Floppy disks have long been obsolete, but when it comes to archiving data, there’s no better medium than tape.

While hard drives and SSDs are much more suited for accessing databases and reading small files, tape is ideal for storing large amounts of data over a long time. That’s because it’s incredibly cheap and dense. The current theoretical limit is about 29.5 billion bits per square inch, which would mean a magnetic tape the size of a traditional hard drive could store about 35 terabytes of information. 
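As a rough sanity check, the quoted capacity can be reproduced with back-of-the-envelope arithmetic. The tape length and width below are assumptions chosen for illustration, not figures from IBM; only the areal density comes from the article:

```python
# Back-of-the-envelope capacity estimate for a tape cartridge.
# Assumed: ~500 m of half-inch-wide tape per cartridge (typical
# enterprise cartridges hold several hundred metres of tape).
areal_density_bits = 29.5e9          # bits per square inch (from the article)
tape_length_in = 500 / 0.0254        # 500 metres expressed in inches
tape_width_in = 0.5                  # half-inch tape
area_sq_in = tape_length_in * tape_width_in
capacity_tb = areal_density_bits * area_sq_in / 8 / 1e12
print(round(capacity_tb, 1))         # prints 36.3
```

That lands in the mid-30s of terabytes, consistent with the ~35 TB figure quoted above.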

Another reason why tape is appealing for long-term storage is that it’s highly secure. Unlike disks, tape doesn’t need to be powered on to retain data, which means you can’t possibly access it remotely when it’s not in use. Tapes also use asymmetric encryption techniques, or public/private key encryption, to further boost security.

“Magnetic tape has a long history of leadership in storage security and is an essential technology for protecting and preserving data. For example, IBM tape drives were the first storage technology to provide built-in encryption starting with the TS1120 Enterprise Tape Drive. In addition, tape provides an additional layer of security via an air gap between the data stored on a cartridge and the outside world, i.e. data stored on a cartridge cannot be read or modified unless it is mounted in a tape drive,” Mark Lantz, who leads the Advanced Tape Technologies effort at IBM Research Zurich, told ZME Science.

When an asymmetric key pair is generated, the public key is typically used to encrypt, and the private key is typically used to decrypt. The key size (bit-length) of a public and private key pair decides how easily the key can be exploited with a brute force attack. Theoretically, a 2048-bit RSA key can’t be cracked even with today’s fastest computers — but it could be shattered with a quantum computer.
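The public/private split can be illustrated with the classic textbook RSA example. These are toy parameters that anyone could break by hand; real keys use moduli of 2048 bits or more:

```python
# Toy RSA with textbook-sized numbers -- illustrative only, trivially breakable.
p, q = 61, 53            # two tiny primes; real RSA uses primes of 1024+ bits
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: 2753 (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key
plaintext = pow(ciphertext, d, n)  # decrypt with the private key
assert plaintext == message
```

Brute-forcing the private exponent amounts to factoring n. With a 2048-bit n that is infeasible classically, but Shor's algorithm running on a sufficiently large quantum computer would factor it efficiently, which is exactly the threat the IBM team is designing against.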

IBM researchers believe that at the current rate of development, asymmetric encryption could become obsolete as early as ten years from now. This is why they’ve developed a new class of cryptographic algorithms that should solve the potential security concerns posed by quantum computers.

The new quantum-safe algorithms, including Kyber and Dilithium, are part of a cryptography suite called CRYSTALS, which is based on mathematical problems that have been studied since the 1980s and haven’t yet succumbed to algorithmic attacks, either classical or quantum. The quantum computing safe tape drive was implemented using an IBM TS1160 tape drive, the most recent enterprise-class tape drive from IBM. These algorithms are part of the firmware, which means existing tape drives could be upgraded with them.
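CRYSTALS is built on lattice problems such as Learning With Errors (LWE). The following is a minimal, insecure sketch of a Regev-style LWE bit encryption; the parameters are toy values chosen only to make the idea visible, and this is not how Kyber is actually parameterized:

```python
import random
random.seed(0)

q, n, m = 257, 8, 16   # toy modulus, secret dimension, public sample count

# Secret key, and a public key of m noisy samples (a_i, <a_i, s> + e_i mod q)
s = [random.randrange(q) for _ in range(n)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
errors = [random.choice([-1, 0, 1]) for _ in range(m)]
b = [(sum(x * y for x, y in zip(row, s)) + e) % q for row, e in zip(A, errors)]

def encrypt(bit):
    # Sum a random subset of the public samples, then hide the bit near q/2
    subset = [i for i in range(m) if random.random() < 0.5]
    c0 = [sum(A[i][j] for i in subset) % q for j in range(n)]
    c1 = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return c0, c1

def decrypt(c0, c1):
    d = (c1 - sum(x * y for x, y in zip(c0, s))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0   # is d nearer q/2 than 0?

assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1) for _ in range(20))
```

Without the small error terms, recovering s from (A, b) would be simple linear algebra; with them, it becomes the LWE problem, which has so far resisted both classical and quantum attacks, the property the article alludes to.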

Next, the IBM researchers plan on working closely with the National Institute of Standards and Technology (NIST) to standardize their algorithms, which have been made open source. The plan is to implement CRYSTALS to secure the entire IBM Cloud from quantum computer attacks by 2020.

“An important next step for the team is to investigate the feasibility of implementing quantum computing safe encryption technology in a state of the art mid-range LTO (Linear Tape Open) tape drive as well as investigating the feasibility of supporting quantum computing safe encryption in older generations of tape drives,” Lantz said.

IBM achieves 330TB storage on magnetic tape, set to improve ‘cloud’ applications

IBM scientists just reported a breakthrough in storing data on magnetic tape. Their novel storage device can hold 330 terabytes of uncompressed data or enough to store 181,500 movies. The new record-breaking prototype has an areal density (how much information can be stored on a tape’s surface) 20 times greater than that typically seen in commercial tape drives.

IBM scientist Dr. Mark Lantz holds a single one-square-inch piece of tape, which alone can store 201 gigabytes. Credit: IBM Research.



In a time when computers use solid-state drives (SSDs) to store and retrieve data, it might sound odd that some scientists are so interested in magnetic tape. HDDs already look obsolete, so aren't magnetic tapes the dinosaurs of computer storage? Not so fast.

Indeed, the first byte was stored on magnetic tape in 1951. The first tape device, called UNISERVO, had a transfer rate of 7,200 characters per second. The tapes were metal, measured 1,200 feet (365 meters) long, and were therefore very heavy. The technology steadily improved, leading to smaller, better magnetic storage devices like the compact cassette.

The low transfer rate of magnetic tape made it impractical in the face of CDs and hard drives, but to this day many businesses, universities, and libraries depend on it. While tape might be rather slow by today's standards, its biggest benefits are reliability and data integrity over long periods of time. Today, magnetic tape is the first line of defense, so to speak, in every important backup system.

For instance, when I visited the multi-petaflops supercomputing facility at ECMWF — one of the global leaders in weather forecasting — earlier this year, the most impressive sight I was allowed to see wasn't the drawer-sized supercomputers but rather a black, room-sized enclosure. Inside, I could see hundreds of small magnetic tape cartridges, each neatly arranged in a designated place, while half a dozen robotic arms constantly took out cartridges and loaded new ones for read/write sessions. This facility was responsible for processing more data — data that is crucial to understanding both the climate and the weather — than any of us can really fathom. HDDs are too vulnerable to data corruption, and doing the same with SSDs would cost as much as the yearly health care budget of a small Eastern European country. Tape, a 60-year-old technology, is cheap and reliable.

“Tape has traditionally been used for video archives, back-up files, replicas for disaster recovery and retention of information on premise, but the industry is also expanding to off-premise applications in the cloud,” said IBM fellow Evangelos Eleftheriou in a press statement. “While sputtered tape is expected to cost a little more to manufacture than current commercial tape, the potential for very high capacity will make the cost per terabyte very attractive, making this technology practical for cold storage in the cloud.”

Tape storage density has skyrocketed in the last 10 years. Credit: IBM Research.


The tiny new IBM cartridge can store 201 gigabits per square inch, an unprecedented areal recording density and the product of a multi-year collaboration with Sony. According to Sony, some of the improvements include “advanced roll-to-roll technology for long sputtered tape fabrication and better lubricant technology, which stabilizes the functionality of the magnetic tape.”

The tape developed by IBM and Sony is made of multiple layers. Credit: IBM Research.


Right now, this fancy tape prototype is half the physical size of the 60TB Seagate drive, the world's largest commercially available SSD. The key enabler here is sputtering — a special technique that can produce magnetic tape with magnetic grains that are just a few nanometers across, rather than tens of nanometers.

For more on this breakthrough, check out the paper published in IEEE Transactions on Magnetics.


IBM puts its technological might to work solving world problems under the Science for Social Good initiative

On Wednesday, tech giant IBM announced that it’s throwing its full expertise and technological might behind some of the world’s most challenging problems under its new Science for Social Good initiative.

IBM Logo.

Image credits Patrick / Flickr.

IBM’s researchers and technological prowess will team up with academia and nonprofits as part of the Science for Social Good initiative, which aims to apply “AI, cloud and deep science toward new societal challenges,” the company announced yesterday. Twelve different projects are planned for 2017, each tailored to one or more of the 17 Sustainable Development Goals singled out by the United Nations as being key to addressing the world’s most pressing issues by the year 2030.

Some of these issues include combating the opioid crisis (opioids, a widely abused and highly addictive class of drugs, cause the deaths of some 91 people each day in the US alone, according to the CDC), furthering the development of AI, reducing inequality, slashing our carbon footprint, and improving aid for communities in emergency situations. IBM hopes that its expertise will help address these problems from a novel angle by using machine learning, data science, and wide-scale analytics to develop efficient solutions.

“We are experiencing a time when our lives and everything that surrounds us is captured digitally: Internet activity, video, customer transactions, surveys, health records, news, literature, scientific publications, economic data, weather data, geospatial data, stock market returns, telecommunication records, and government records to name a few,” the project’s page reads.

“All of this data is at our fingertips, giving us an unprecedented opportunity to innovate and change the world for the better using science and technology.”

The Overcoming Illiteracy project will employ AI to ‘translate’ texts for illiterate and low-literate adults into a medium they can understand. The aim is to help people who haven’t had access to education “navigate the information-dense world with confidence” by decoding dense, complex texts (manuals or product descriptions, for example) into their basic message and then presenting it to the user as visual elements and spoken messages. While such a technology wouldn’t directly impact illiteracy levels, it would allow people to independently navigate our text-centric society while also giving them the means to educate themselves using books and manuals — an insurmountable task without any assistance.

Another one of the programs, called Emergency Food Best Practice: The Digital Experience, will see the company's Watson supercomputer compile a “cognitive supply chain model of emergency food operations” to be shared with nonprofits via an interactive digital platform. The nonprofit John's Bread & Life will help IBM develop this tool based on its own distribution model, which serves more than 2,500 meals in New York City each day.

But while work is underway on these and the other 2017 projects, IBM is on the lookout for next year’s great idea.

“If you are an NGO, or a social enterprise, we are currently scoping projects for our 2018 cycle. If you have an idea how we can help, drop us an email [good@us.ibm.com], and we will follow up,” the project page reads.

The Science for Social Good initiative draws its roots from six pilot projects conducted in 2016, which covered a broad range of subjects from health care to global innovation. One of these employed machine learning techniques to study the spread of the Zika virus, resulting in a predictive model that identified the primate species which can act as vectors for the virus. Following the findings, these species were recommended for Zika surveillance and management, and are now guiding new testing in the field to help prevent the spread of the disease.


IBM develops device which could power slums with used laptop batteries

An IBM team analyzed a sample of discarded laptop batteries and found that many of them can still do useful work. The researchers developed a device that uses reusable lithium-ion cells from discarded laptop battery packs to power low-energy DC devices. They found that 70% of used batteries could still store enough power to keep an LED light on for more than four hours a day for a year.

The UrJar uses lithium-ion cells from the old batteries to power low-energy DC devices.

Forty percent of the world's population doesn't have access to a stable electricity source, a problem that is especially acute in the poorer areas of the world. At the same time, particularly in richer countries, there is a growing problem with electronic waste — especially from lithium-ion batteries. As an IBM team in India has shown, the two problems can help solve each other.

They have developed UrJar – a play on the words for energy (urja in Hindi) and box (jar). UrJar has a rechargeable battery component built from cells salvaged from discarded laptop batteries, which can power an LED light bulb, a cell phone charger, or other similar appliances.

“The most costly component in these systems is often the battery,” Vikas Chandan, a research scientist with IBM who led the project, told MIT's Technology Review. “In this case, the most expensive part of your storage solution is coming from trash.”

As any smartphone or laptop user will tell you, the more you use and then recharge a battery, the less energy it can hold at maximum. This is called the charge capacity. When the charge capacity of a laptop battery pack falls below a satisfactory threshold, the user often simply discards it and replaces it with a new pack (or replaces the device altogether). The researchers tested 32 laptop battery packs discarded by a business division of a large multinational IT company in India and found that they still held significant residual capacity: the mean value was 64%. That corresponds to more than 50 Wh of capacity for the batteries tested, which is sufficient to power a 3 W LED light bulb, a 6 W DC fan, and a 3.5 W mobile phone charger simultaneously for around 4 hours.
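The 4-hour figure checks out with simple arithmetic on the numbers quoted above:

```python
# Residual pack energy vs. the combined load quoted in the study
capacity_wh = 50.0            # mean usable energy per recovered pack (Wh)
load_w = 3.0 + 6.0 + 3.5      # LED bulb + DC fan + phone charger (watts)
runtime_h = capacity_wh / load_w
print(runtime_h)              # prints 4.0
```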

This is definitely not groundbreaking technology – it won’t revolutionize the world as we know it, and it won’t help countries like India escape from poverty. But it’s a nice initiative to alleviate some of the issues areas without electricity are facing, as well as the growing problem of electronic waste.

“UrJar has the potential to channel e-waste towards the alleviation of energy poverty, thus simultaneously providing a sustainable solution for both problems”, the study reads.

Estimates show that the device could be produced for about $10, while a survey of prospective users showed they would be willing to pay up to $16 for it, so it's quite a good value. To make things even better, IBM announced that it doesn't want to make a business out of UrJar, and will instead make the blueprints available to developing countries.

Computer Aid, a UK-based charity that redistributes unwanted old technology, welcomed the initiative.

“We think that this is an excellent initiative as it is in line with our practice of reusing and refurbishing rather than recycling,” said Keith Sonnet, its chief executive. “Refurbishing definitely has a more positive impact on the environment and we should encourage more companies to adopt this practice.”

Original study.


Book review: ‘Smart Machines: IBM’s Watson and the Era of Cognitive Computing’

Smart Machines: IBM's Watson and the Era of Cognitive Computing (Columbia Business School Publishing)
“Smart Machines: IBM’s Watson and the Era of Cognitive Computing”
By John E. Kelly III, Steve Hamm 
Columbia University Press, 161pp | Buy on Amazon

In 1996, IBM's Deep Blue supercomputer became the first machine to win a chess game against a reigning world champion, a title held at the time by Garry Kasparov. Fast forward to 2011, and IBM had a new super toy playing a grown-ups' game – Jeopardy! The new supercomputer, called Watson, was introduced on prime-time television to tens of millions of people, the culmination of years of painstaking work and a showcase of the most advanced machine yet to combine artificial intelligence and natural-language processing. Emotions filled the studio, but Watson was not impressed. On a fateful February evening, Watson bested two past grand champions and set a new milestone in computing.

Elementary, my dear Watson

In the first rows, you could find John E. Kelly III, director of IBM Research, who, along with Steve Hamm, a writer at IBM, wrote ‘Smart Machines: IBM's Watson and the Era of Cognitive Computing’. More than just a broad backstory on Watson, the book explores the fascinating world of cognitive machines – truly intelligent computers, based on a novel architecture, that can learn by doing – and how this exciting technology might transform the world. First of all, it's important to note that Watson has yet to reach this status. Both Watson and its predecessor, Deep Blue, are computers based on the traditional von Neumann architecture, crunching zeros and ones. While these machines have demonstrated extremely impressive capabilities, both are equally limited in scope. Deep Blue had millions of playbooks mapped and could compute moves much faster than any human, yet chess is a finite game, as stressed by game theory. Watson shined through its encyclopedic knowledge and lightning-quick recall, yet, again, that is all it could do. Because their fundamental architecture is the same, these computers still work, in principle, the same way primitive versions did 50 years ago: they can only do what they're programmed to do.

Chess is not a game. Chess is a well-defined form of computation. You may not be able to work out the answers, but in theory there must be a solution, a right procedure in any position. Now real games… are not like that at all. Real life is not like that. Real life consists of bluffing, of little tactics of deception, of asking yourself what is the other man going to think I mean to do. And that is what games are about in my theory. — John von Neumann

Enter the world of machine learning and cognitive computing, which the book faithfully explores. The two authors take turns explaining how Watson's development will one day lead to veritable thinking machines that participate in dialogue with human beings, navigate vast quantities of information, and solve extremely complicated yet common problems. These systems will be able to learn from both structured and unstructured data, discover important correlations, create hypotheses for those correlations, and suggest actions that produce better outcomes. One important field where such smart machines will be employed is medicine. Kelly and Hamm describe, for instance, how Watson has already been fed some 600,000 pages of medical evidence, 2 million pages of text from 42 medical journals and clinical trials, and several thousand case histories. Now, researchers are working on the hard part: making Watson capable of inductive reasoning, so it doesn't just do what it's told, but also figures out what to do on its own. “The goal isn't to replicate human brains, though. . . . People will provide expertise, judgment, intuition, empathy, a moral compass, and human creativity,” the authors write.

For instance, Watson could be contacted through an interface by a physician who wants to learn more about a specific patient and how to apply a customized treatment. Watson would automatically retrieve personal information and medical records about the patient; then, on request, the machine would present the physician with a slew of treatment options, ordered by confidence of success. This is invaluable today, since there are so many specific cases where the general treatment doesn't work. Let's say a Japanese-American woman has lung cancer; an oncologist assisted by Watson would be informed that women from Japan who are nonsmokers are fairly likely to have a mutation in a gene called EGFR. If this is the case, a certain drug shrinks the cancer by 95%. Watson would suggest this test, along with other tests, again ordered by confidence and based on experience (the more cases Watson is exposed to, the more it learns and the more often it leans toward the right decision). Big data is ubiquitous nowadays, and more and more information is being gathered and processed every day. Smart machines will help us navigate through this sea of bits (and why not qubits) and transform the stock market, computing (see the TrueNorth neurosynaptic chip), the internet, and even the way citizens interact with cities.
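The “ordered by confidence, based on experience” idea can be sketched with a toy outcome-tracking model. The case histories and treatment names below are entirely made up for illustration; this is a crude stand-in for the statistical learning Watson performs at scale, not IBM's actual method:

```python
from collections import defaultdict

# Hypothetical case histories: (treatment, did it lead to a good outcome?)
cases = [("drug_A", True), ("drug_A", True), ("drug_A", False),
         ("drug_B", True), ("drug_B", False), ("drug_B", False),
         ("EGFR_test", True), ("EGFR_test", True)]

stats = defaultdict(lambda: [0, 0])   # treatment -> [successes, trials]
for treatment, success in cases:
    stats[treatment][0] += success
    stats[treatment][1] += 1

def confidence(treatment):
    wins, trials = stats[treatment]
    return (wins + 1) / (trials + 2)  # Laplace-smoothed success rate

ranked = sorted(stats, key=confidence, reverse=True)
print(ranked)  # prints ['EGFR_test', 'drug_A', 'drug_B']
```

Every new case updates the counts, so the ranking sharpens with exposure, mirroring the article's point that the more cases the system sees, the better its suggestions become.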

“The new era of computing is not just an opportunity for society; it’s also a necessity. Only with the help of smart machines will we be able to deal adequately with the exploding complexity of today’s world and successfully address interlocking problems like disease and poverty and stress on natural systems,” Kelly and Hamm write.

The book offers a highly entertaining look at how this technology might change society, but it doesn't necessarily explain how we're going to get there. The writing is geared toward laymen, and if you're looking for technical details on how cognitive computing works, this book isn't for you. At times, I felt a bit annoyed that the book dwells so much on IBM developments and so little on what's happening outside of IBM. We all know IBM is at the very forefront of this type of research, but the future of cognitive machines won't belong entirely to them, and they most certainly aren't the only ones working on cognitive systems in the present. Still, far from being an oversized IBM marketing brochure, ‘Smart Machines: IBM's Watson and the Era of Cognitive Computing’ does a good job of introducing cognitive machines to the uninitiated reader.


Breakthrough in computing: brain-like chip features 4096 cores, 1 million neurons, 5.4 billion transistors

true north

Image: IBM

The brain of complex organisms, whether humans, other primates, or even mice, is very difficult to emulate with today's technology. IBM is moving things further in this direction after announcing the whopping specs of its new brain-like chip: one million programmable neurons and 256 million programmable synapses across 4,096 individual neurosynaptic cores, all made possible using 5.4 billion transistors. TrueNorth, as it's been dubbed, looks amazing not just because of its raw computing power – after all, that kind of thing was possible before; you just had to build more muscle and put more cash and resources into the project – but also because of its tremendous leap in efficiency. The chip, possibly the most advanced of its kind, operates at max load using only 72 milliwatts. That's 176,000 times more efficient than a modern CPU running the same brain-like workload, or 769 times more efficient than other state-of-the-art neuromorphic approaches. Enter the world of neuroprogramming.

[ALSO READ] The most complex human artificial brain yet

Main components of IBM’s TrueNorth (SyNAPSE) chip. Image: IBM


The coronation of a six-year-old IBM project partially funded by DARPA, TrueNorth made its first baby steps in an earlier prototype. The 2011 version had only 256 neurons, but in the meantime the developers made some drastic improvements, like switching to Samsung's 28nm transistor process. Each TrueNorth chip consists of 4,096 neurosynaptic cores arranged in a 64×64 grid. Like a small brain network communicating with other networks, each core bundles 256 inputs (axons), 256 outputs (neurons), SRAM (neuron data storage), and a router that allows any neuron to transmit to any axon up to 255 cores away. In total, 256×256 means each core can process 65,536 synapses, and if that weren't impressive enough, IBM has already built a 16-chip TrueNorth system with 16 million neurons and 4 billion synapses.
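The headline numbers all follow from the per-core figures (note that “256 million synapses” is in binary millions, 256 × 2^20):

```python
cores = 4096
neurons_per_core = 256
synapses_per_core = 256 * 256        # every input can connect to every output

neurons = cores * neurons_per_core   # 1,048,576 (~1 million neurons)
synapses = cores * synapses_per_core # 268,435,456 (256 binary millions)

# The 16-chip system scales linearly:
chips = 16
print(chips * neurons)    # prints 16777216  -> "16 million neurons"
print(chips * synapses)   # prints 4294967296 -> "4 billion synapses"
```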

[ALSO] New circuitboard is 9,000 times more efficient at simulating the human brain than your PC

By now, some of you may be confused by all these technicalities. What do they mean, and why should you care? The ultimate goal is to come to an understanding, complete and absolute, of how the human brain works. We're far off from this goal, but we need to start somewhere. To run complex simulations of deep neural networks, you need dedicated hardware that is up to the job, preferably hardware that closely matches the brain's parallel computation. Then you need software, but that's a story for another time.

Of course, there's also a commercial interest. IBM is in with the big boys: it has been at the forefront of technology for decades, and the people managing IBM know that big-data interpretation is a huge slice of the global information pie. Watson, the supercomputer that won against Jeopardy!'s top veterans, is just one of IBM's big projects in this direction: semantic data retrieval. Watson's nephews will be ubiquitous in every important institution, be it hospitals or banks. Expect TrueNorth to play a big part in all of this, running on the inside to help the world grow faster on the outside.

More details can be found in the paper published in the journal Science.




Cognitive computing milestone: IBM simulates 530 billion neurons and 100 trillion synapses

First initiated in 2008 by IBM, the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program aims to develop a new cognitive computer architecture based on the human brain. Recently, IBM announced it has reached an important milestone for the program after successfully simulating 530 billion neurons and 100 trillion synapses on one of the world's most powerful supercomputers.

It's worth noting, however, before you get too excited, that the IBM researchers have not built a biologically realistic simulation of the complete human brain; that goal is still many years away. Instead, the scientists devised a cognitive computing architecture called TrueNorth, with 10^10 neurons (10 billion) and 10^14 synapses (100 trillion), that is inspired by the number of synapses in the human brain; it is modular, scalable, non-von Neumann, and ultra-low power. The researchers hope that in the future this essential step might allow them to build an electronic neuromorphic machine technology that scales to biological levels.

 “Computation (‘neurons’), memory (‘synapses’), and communication (‘axons,’ ‘dendrites’) are mathematically abstracted away from biological detail toward engineering goals of maximizing function (utility, applications) and minimizing cost (power, area, delay) and design complexity of hardware implementation,” reads the abstract for the Supercomputing 2012 (SC12) paper (full paper link).

Steps towards mimicking the full-power of the human brain

Authors of the IBM paper (left to right): Theodore M. Wong, Pallab Datta, Steven K. Esser, Robert Preissl, Myron D. Flickner, Rathinakumar Appuswamy, William P. Risk, Horst D. Simon, Emmett McQuinn, Dharmendra S. Modha. (Photo Credit: Hita Bambhania-Modha)


IBM simulated the TrueNorth system running on the world's fastest operating supercomputer, the Lawrence Livermore National Lab (LLNL) Blue Gene/Q Sequoia, using 96 racks (1,572,864 processor cores, 1.5 PB of memory, 98,304 MPI processes, and 6,291,456 threads).

IBM and LLNL achieved an unprecedented scale of 2.084 billion neurosynaptic cores containing 53×10^10 (530 billion) neurons and 1.37×10^14 (137 trillion) synapses, running only 1,542 times slower than real time.
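These totals are consistent with the per-core TrueNorth figures (256 neurons and 256×256 synapses per core):

```python
cores = 2.084e9               # neurosynaptic cores simulated on Sequoia
neurons = cores * 256         # ~5.3e11, i.e. ~530 billion neurons
synapses = cores * 256 * 256  # ~1.37e14 synapses
print(f"{neurons:.2e} neurons, {synapses:.2e} synapses")
```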

The tiny neurosynaptic core produced by IBM. (c) IBM


“Previously, we have demonstrated a neurosynaptic core and some of its applications,” continues the abstract. “We have also compiled the largest long-distance wiring diagram of the monkey brain. Now, imagine a network with over 2 billion of these neurosynaptic cores that are divided into 77 brain-inspired regions with probabilistic intra-region (“gray matter”) connectivity and monkey-brain-inspired inter-region (“white matter”) connectivity.

“This fulfills a core vision of the DARPA SyNAPSE project to bring together nanotechnology, neuroscience, and supercomputing to lay the foundation of a novel cognitive computing architecture that complements today’s von Neumann machines.”

According to Dr. Dharmendra S. Modha, IBM's cognitive computing manager, his team's goal is to mimic the processes of the human brain. While IBM's competitors focus on computing systems that mimic the left side of the brain, processing information sequentially, Modha is working on replicating functions of the right side of the brain, where information is processed in parallel and where incredibly complex brain functions lie. To this end, the researchers combine neuroscience and supercomputing to reach their goals.

Consider that the room-sized, cutting-edge, billion-dollar technology IBM uses to scratch the surface of artificial human cognition still doesn't come near our brain's capabilities; the brain occupies a volume comparable to a 2-liter bottle of water and needs less power than a light bulb to work. The video below features Dr. Modha explaining his project in an easy-to-understand manner, and it's only 5 minutes long.

source: KurzweilAI



Incredible molecular imaging shows individual chemical bonds for first time

Atomic-level imaging has come a long way in the past decade. After scientists first managed to image molecular structure and even electron clouds, a group of researchers at IBM Research Zurich has now visually mapped how the chemical bonds within an individual molecule differ from one another, using a technique called non-contact atomic force microscopy (AFM).

In the image below, one can clearly see the detailed chemical bonds between individual atoms of a C60 fullerene molecule. In 3-D, the molecule resembles a buckyball thanks to its football-like shape.

Atomic Bond

If you look closely, you can see that some C-C chemical bonds are more highlighted than others. This is because, in reality, the bonds between individual atoms differ slightly and subtly in length and strength, and for the first time we're now able to distinguish the different types of bonds from one another, visually. The bright and dark spots correspond to higher and lower densities of electrons.

“In the case of pentacene, we saw the bonds but we couldn't really differentiate them or see different properties of different bonds,” said lead author of the study Dr. Leo Gross.

“Now we can really prove that… we can see different physical properties of different bonds, and that’s really exciting.”


The nanographene molecule imaged through AFM versus the schematic of the molecule. (c) IBM Research Zurich


To create the images, the IBM researchers used an atomic force microscope whose tip was terminated with a single carbon monoxide (CO) molecule. The tip oscillates just above the sample, and by measuring the tiny shifts in this oscillation caused by intermolecular forces, the AFM can slowly build up a very detailed image. The technique made it possible to distinguish individual bonds that differ by only three picometers, roughly one-hundredth of an atom's diameter.

“We found two different contrast mechanisms to distinguish bonds. The first one is based on small differences in the force measured above the bonds. We expected this kind of contrast but it was a challenge to resolve,” said IBM scientist Leo Gross. “The second contrast mechanism really came as a surprise: Bonds appeared with different lengths in AFM measurements. With the help of ab initio calculations we found that the tilting of the carbon monoxide molecule at the tip apex is the cause of this contrast.”

The findings were reported in the journal Science.


Scientists synthesize and image 5-ring graphite molecule in tribute to Olympics symbol


The 2012 London summer Olympic games are just a few weeks away, and as millions are set to flock to the city and hundreds of millions more will watch on the web and TV, it's pretty clear this is one of the most anticipated events of the year: the world's grandest spectacle of athletic performance. Every four years, people all over the world offer their tribute to the competition, scientists included, of course.

“When doodling in a planning meeting, it occurred to me that a molecular structure with three hexagonal rings above two others would make for an interesting synthetic challenge,” says Professor Graham Richards, an RSC Council member.

“I wondered: could someone actually make it, and produce an image of the actual molecule?”

A joint collaborative effort by scientists at the Royal Society of Chemistry (RSC), the University of Warwick, and IBM Research Zurich has imaged the smallest possible five-ringed structure. The researchers employed synthetic organic chemistry to build the olympicene molecule, while scanning tunneling microscopy was used to reveal a first glimpse of the molecule's structure. To image the molecule, which is just 1.2 nanometres wide (about 100,000 times thinner than a human hair), at unprecedented resolution, as captured above, scientists at IBM Zurich used a complex technique known as non-contact atomic force microscopy.

“Alongside the scientific challenge involved in creating olympicene in a laboratory, there’s some serious practical reasons for working with molecules like this,” says Fox.

“The compound is related to single-layer graphite, also known as graphene, and is one of a number of related compounds which potentially have interesting electronic and optical properties.

“For example these types of molecules may offer great potential for the next generation of solar cells and high-tech lighting sources such as LEDs.”

source: Futurity


Study aims to lay ground for the first ‘green highway’

IBM has teamed up with Zapadoslovenska energetika (ZSE), the biggest electric company in Slovakia, for a feasibility study that aims to prepare Bratislava, the nation's capital, for plug-in electric vehicles. With this in mind, the companies will look at the best way to develop a "green highway" between Bratislava and the neighboring Austrian city of Vienna, which is about 49 miles away.

Electric vehicles are hailed everywhere as the future of automobile transportation, but they're still in their infancy, and a still highly limited infrastructure casts a shroud of unpopularity over them. At the turn of the 20th century, when the automobile boom took off, infrastructure grew along with it to meet the demand for more roads and gas stations. For EVs, infrastructure comes down to a simple yet at the same time complicated requirement: electric plug-in stations, dispersed in such a way that they compensate for the vehicles' rather limited range. Companies today, however, don't have the economic luxury of simply building charging stations as they go, and this is where the partnership comes in.

The two companies will thus work on a project tackling all the issues that might arise in building a public charging network between Bratislava and Vienna, one that can support a new generation of electric cars without stressing the existing power grid.

“Rising fuel prices and energy consumption are two major issues facing many cities around the world,” said Guido Bartels, general manager of IBM’s Global Energy and Utilities Industry. “These factors coupled with aging roads and infrastructures, can affect city planning, local economy, and overall community satisfaction.”

The IBM-ZSE project “tackles all of these issues,” Bartels continued. “It has the potential to introduce a modern, convenient and more intelligent way for consumers to commute, which in turn may encourage more to make the shift to an electric vehicle, while reducing stress on the energy grid.”

If the virtual outline of the charging network is deemed feasible, from more than one point of view, actual implementation might commence, and with it, hopefully, similar projects along other important routes in Europe and the rest of the world.


IBM to develop world’s most powerful computing system tasked with finding origins of Universe

Backed by an international consortium, ten years from now the world's largest and most sensitive radio telescope will be built: the Square Kilometer Array (SKA). The project will consist of thousands of antennas spread across thousands of miles, with a collecting area equivalent to one square kilometer (hence the name), which will hopefully help astronomers take a peek at the Universe's earliest moments after the Big Bang. Such a grand scientific effort, however, requires equally humongous computing power, one that only seven million of today's fastest computers combined could match. IBM has recently been granted the privilege of researching the exascale supercomputing system to be integrated with the SKA, after winning the $42 million contract to work with the Netherlands Institute for Radio Astronomy (ASTRON).

IBM has thus taken on the Herculean task of developing a solution to match SKA's need to read, store and process one exabyte of raw data per day. An exabyte is the equivalent of 1,000,000 terabytes, or roughly 12 million fully loaded latest-generation iPods. If that doesn't quite convey the scale involved, consider that one exabyte roughly equals two days' worth of global internet traffic. Massive!
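Those conversions are easy to sanity-check. A minimal sketch, assuming a top-of-the-line iPod of the era holds about 80 GB (that capacity is an assumption, not a figure from the article):

```python
EXABYTE = 10 ** 18    # bytes, decimal definition
TERABYTE = 10 ** 12
IPOD_BYTES = 80 * 10 ** 9  # assumed ~80 GB per iPod

print(EXABYTE // TERABYTE)    # 1,000,000 terabytes in one exabyte
print(EXABYTE // IPOD_BYTES)  # 12,500,000 iPods, the article's order of magnitude
```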

In Drenthe, Netherlands, ASTRON and IBM will look at energy-efficient exascale computing, data transport at light speed, storage processes and streaming analytics technology. “We have to decrease power consumption by a factor of 10 to 100 to be able to pay the power bill for such a machine,” said Andreas Wicenec, head of computing at the International Centre for Radio Astronomy Research in the state of Western Australia.

With this purpose in mind, the researchers are currently investigating advanced accelerators and 3-D stacked chips, architectures already proven highly energy-efficient in IBM's labs. They'll also look at how to optimize huge data transfers using novel optical interconnect technologies and nanophotonics. For the task at hand, a team of 50 people, along with astronomers from 20 countries, will spend the next five years working to build the most complex supercomputing system in the world.

Artist impression of the SKA radio telescope were it to be built in Australia. (c) SKA Program Development Office

“To detect the signals, you really need a good antenna,” said Ronald Luitjen, an IBM scientist and data motion architect on the project. “It would be the equivalent of 3 million TV antennae dishes. This will be a unique instrument. Nothing else can do this kind of science.”

Radio telescopes in operation today are very powerful, but SKA will be in a whole different league. It will provide a real-time all-sky radio survey, on the lookout for some of the Universe's strangest phenomena, unexplorable with today's technology. The telescope will be used to study evolving galaxies and dark matter, look for complex organic molecules in interstellar space, and examine data from the Big Bang, the primordial cosmic event which gave birth to all matter and antimatter in the Universe more than 13 billion years ago. All of this, you guessed it, requires a huge computing effort, which will hopefully be in place in the coming years, before the SKA's completion in 2024.

The $2 billion SKA will be located either in Australia/New Zealand or in South Africa, with the latter currently the favorite. These regions were selected for their low radio pollution. Nevertheless, the scientists involved in the project are looking at the bright side of the lengthy completion time. "It is really relying on the fact that technology is improving at a certain rate," said Wicenec. Well, how about quantum computing?

The SKA might hold the key to unlocking some of the Universe’s well kept secrets today, and, if anything, it will open a new era of computing, with ramifications in all spheres of science.

IBM makes significant breakthrough towards scalable quantum computers

During the past months we’ve been reporting several breakthroughs in the field of quantum computing, and now IBM seems ready to truly pave the way for quantum computers. Researchers announced they are now able to develop a superconducting qubit made from microfabricated silicon that maintains coherence long enough for practical computation. Whoa! That probably sounds like a lot to swallow, so let’s break it down.

Bits and Qubits

Information is measured in 'bits', and a bit can take one of two values (typically written 0 or 1). Quantum computers, however, don't use these bits; instead they use quantum bits, or 'qubits'. While a bit must be either a 0 or a 1, a qubit can be 0, 1, or a superposition of both. This difference might seem small and subtle, but in fact it is absolutely humongous: describing the state of just a few hundred qubits classically requires more numbers than there are atoms in the observable Universe.
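The scaling behind that claim can be sketched in a few lines. The 10^80 figure for atoms in the observable Universe is a common order-of-magnitude estimate, assumed here rather than taken from the article:

```python
# An n-qubit register needs 2**n complex amplitudes to describe classically.
def amplitudes(n_qubits: int) -> int:
    return 2 ** n_qubits

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80  # order-of-magnitude estimate

print(amplitudes(100))   # about 1.27e30: huge, but not yet universe-sized
print(amplitudes(300) > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
```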

Three superconducting qubits. Credits: IBM research

Needless to say, a computer running on qubits would be game-changing, in pretty much the same way microprocessors were in their day. But what makes quantum computing extremely difficult is a problem called 'decoherence'. In the quantum world, things don't happen as they do in the 'real world': when a qubit moves from the 0 state to the 1 state or into a superposition, interference from other parts of the computer can cause it to decohere back to the 0 state. Generally speaking, decoherence is the loss of ordering of the phase angles between the components of a superposition. So in order for quantum computers to be practical and scalable, the system has to remain coherent long enough for error-correction techniques to function properly.

“In 1999, coherence times were about 1 nanosecond,” said IBM scientist Matthias Steffen. “Last year, coherence times were achieved for as long as 1 to 4 microseconds. With these new techniques, we’ve achieved coherence times of 10 to 100 microseconds. We need to improve that by a factor of 10 to 100 before we’re at the threshold we want to be. But considering that in the past ten years we’ve increased coherence times by a factor of 10,000, I’m not scared.”
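Steffen's milestones translate into improvement factors that can be checked directly; a quick sketch using the figures quoted above:

```python
# Coherence-time milestones quoted in the article, in seconds.
t_1999 = 1e-9                          # about 1 nanosecond
t_last_year = 4e-6                     # up to 4 microseconds
t_new_low, t_new_high = 10e-6, 100e-6  # newly achieved 10-100 microsecond range

print(t_last_year / t_1999)  # 4,000x over the 1999 baseline
print(t_new_low / t_1999)    # 10,000x at the low end of the new range
print(t_new_high / t_1999)   # 100,000x at the high end
```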

Two different approaches, one breakthrough

IBM announced they took two different approaches, both of which played a significant part in the breakthrough they revealed. The first one was to build a 3-D qubit made from superconducting, microfabricated silicon. The main advantage here is that the equipment and know-how necessary to create this technology already exists, nothing new has to be invented, thanks to developments made by Yale researchers (for which Steffen expressed a deep admiration). Using this approach, they managed to maintain coherence for 95 microseconds – “But you could round that to 100 for the piece if you want,” Steffen joked.

The second idea involved a traditional 2-D qubit, which IBM's scientists used to build a "controlled NOT gate", or CNOT gate, a building block of quantum computing. A CNOT gate connects two qubits in such a way that the second (target) qubit flips its state if and only if the first (control) qubit is in state 1. The CNOT gate maintained coherence for 10 microseconds, long enough to demonstrate a 95% accuracy rate, a notable improvement over the 81% accuracy rate that was the highest achieved until now. Of course, the technology is still years away from store shelves, but the developments are very impressive.
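On classical basis states, the CNOT rule described above is just a conditional flip, which a toy sketch makes concrete (this models only the classical inputs; the real gate also acts linearly on superpositions):

```python
# Toy model of CNOT on classical basis states: the target bit flips
# exactly when the control bit is 1 (an XOR); the control is untouched.
def cnot(control: int, target: int) -> tuple[int, int]:
    return control, target ^ control

for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(state, "->", cnot(*state))
# (1, 0) -> (1, 1) and (1, 1) -> (1, 0); states with control 0 pass through
```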

From quantum to reality

Given the rapid progress that is being made in the field of quantum computing, one can only feel that a quantum computer is looking more and more like a real possibility. As error correction protocols become more accurate and coherence times grow longer, we are moving more and more towards accurate quantum computing – but you shouldn’t expect a quantum smartphone just yet.

“There’s a growing sense that a quantum computer can’t be a laptop or desktop,” said Steffen. “Quantum computers may well just be housed in a large building somewhere. It’s not going to be something that’s very portable. In terms of application, I don’t think that’s a huge detriment because they’ll be able to solve problems so much faster than traditional computers.”

The next steps are simple in principle but extremely hard in practice. The accuracy rate has to reach at least 99.99%, to the point where the system achieves what is called a 'logical qubit': one that, for practical purposes, doesn't suffer decoherence. From that point, the only thing left to do is develop the quantum computer architecture, and this will prove troublesome too, but the reward is definitely worth it.

“We are very excited about how the quantum computing field has progressed over the past ten years,” he told me. “Our team has grown significantly over past 3 years, and I look forward to seeing that team continue to grow and take quantum computing to the next level.”



IBM images electric charge distribution in a SINGLE molecule – world’s first!

Part of the recent slew of revolutionary technological and scientific novelties coming out of IBM's research and development labs, the company has just announced that it has successfully measured and imaged, for the first time, how charge is distributed within a single molecule. The achievement was made possible by a newly developed technique called Kelvin probe force microscopy (KPFM). Scientists involved in the project claim that the research opens up the possibility of imaging the charge distribution within functional molecular structures, which hold great promise for future applications such as solar photoconversion, energy storage, or molecular-scale computing devices. Until now, it had not been possible to image the charge distribution within a single molecule.

The team, comprised of scientists Fabian Mohn, Leo Gross, Nikolaj Moll and Gerhard Meyer of IBM Research Zurich, imaged the charge distribution within a single naphthalocyanine molecule using Kelvin probe force microscopy at low temperature and in ultrahigh vacuum. These conditions were imperative, as a high degree of thermal and mechanical stability and atomic precision was required of the instrument over the course of the experiment, which lasted several days.

Derived from the revolutionary atomic force microscopy (AFM), KPFM measures the potential difference between the scanning probe tip and a conductive sample, in this case the naphthalocyanine molecule, a cross-shaped symmetric organic molecule. KPFM therefore does not measure the electric charge in the molecule directly, but rather the electric field generated by that charge.

“This work demonstrates an important new capability of being able to directly measure how charge arranges itself within an individual molecule,” says Michael Crommie, professor of condensed matter physics at the University of California, Berkeley.

“Understanding this kind of charge distribution is critical for understanding how molecules work in different environments. I expect this technique to have an especially important future impact on the many areas where physics, chemistry, and biology intersect.”

The potential field is stronger above areas of the molecule that are charged, leading to a greater KPFM signal. Furthermore, oppositely charged areas yield a different contrast because the direction of the electric field is reversed. This leads to the light and dark areas in the micrograph (or red and blue areas in colored ones).

The new KPFM technique promises to offer complementary information about a studied molecule, providing valuable electric-charge data in addition to that rendered by scanning tunneling microscopy (STM) or atomic force microscopy (AFM). Since their introduction in the 1980s, STM, which images a molecule's electron orbitals, and AFM, which resolves molecular structure, have become instrumental to atomic- and molecular-scale research, practically opening the door to the nanotech age. Perhaps not that surprisingly, the STM was developed in the same IBM research center in Zurich, 30 years ago.

“The present work marks an important step in our long-term effort on controlling and exploring molecular systems at the atomic scale with scanning probe microscopy,” says Gerhard Meyer, a senior IBM scientist who leads the STM and AFM research activities at IBM Research – Zurich.

The findings were published in the journal Nature Nanotechnology. 

Source / image via IBM


IBM develops smallest storage device: 12 atoms for a single bit!

Each little green bump is an atom of ferromagnetic material. Together, the 12 atoms captioned above form an array capable of storing one bit of information. (c) IBM

Moore's law states that computing power should double every two years, and so far the postulate hasn't been wrong in more than 50 years. A group of IBM scientists has now developed a data-storage technique that allows information to be stored with as few as 12 atoms, thousands of times fewer than currently required for a single bit. Gordon Moore, the Intel co-founder, would have been proud.

Storing a single bit of data on a disk drive requires about one million atoms of magnetized storage medium, and that holds even for the most advanced storage devices available today. The new research from IBM suggests that, in the not-so-distant future, storage devices could be built at 1/83,000th the scale of today's disk drives.
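The "1/83,000th" figure follows directly from the two atom counts; a quick check:

```python
atoms_per_bit_today = 1_000_000  # most advanced conventional drives
atoms_per_bit_ibm = 12           # IBM's antiferromagnetic prototype

scale_factor = atoms_per_bit_today / atoms_per_bit_ibm
print(round(scale_factor))  # ~83,333, matching the article's rounded figure
```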

“Magnetic materials are extremely useful and strategically important to many major economies, but there aren’t that many of them,” said Shan X. Wang, director of the Center for Magnetic Nanotechnology at Stanford University. “To make a brand new material is very intriguing and scientifically very important.”

Current magnetic storage devices, like the hard drive in your computer that lets information like this website be read and stored, are made of ferromagnetic materials such as iron or nickel. When these materials are exposed to a magnetic field, their magnetic poles line up in the same direction. This has worked very well for conventional hard drives and microchips, but when it comes to miniaturization, at a certain scale neighboring bits start to interfere with each other. Antiferromagnetism works the opposite way, with one highly important distinction: the spins of unpaired electrons in neighboring atoms do not align in the same direction. In manganese oxide, a material well suited for this, atoms align head to foot, so that the north magnetic pole of each atom faces the south magnetic pole of its neighbor.

Using antiferromagnetism, the team of researchers from IBM's Almaden Research Center, led by Andreas Heinrich, managed to create a swathe of material with much denser magnetic storage than conventional ferromagnetic devices. The researchers used a scanning tunneling microscope, a device the size of a washing machine, not only to image at an atomic scale but also to accurately position individual atoms, engineering 12 antiferromagnetically coupled atoms. This is the smallest number of atoms with which one can create a magnetic bit capable of storing information.

Heading towards a golden computing age

An atomically assembled array of 96 iron atoms containing one byte of magnetic information in antiferromagnetic states. (c) IBM Research-Almaden

To demonstrate the antiferromagnetic storage effect, the IBM researchers created a computer byte, the equivalent of one character, out of an individually placed array of 96 atoms. They then used the array to encode the IBM motto "Think" by repeatedly programming the memory block to store representations of its five letters. And as if the sheer scale of this technology weren't amazing enough, the researchers observed that, at such small numbers of atoms, the array displays some quantum-mechanical characteristics, simultaneously existing in both "spin" states, in effect 1 and 0 at the same time. This could have remarkable implications for the development of quantum computing.

It may take many years for regular consumers to experience this technology

Now, although this latest gem from IBM will allow storage devices to be built at a fraction of their current size and power consumption, don't expect it to become commercially available for a long while. The researchers were able to hold on to a data bit for only several hours, at a temperature close to absolute zero and under other conditions remarkably difficult to achieve. Manufacturing-wise, it will also probably take some time before there is an automated method for arranging, placing and manipulating individual atoms into the proper arrays.

“It took a room full of equipment worth about 1 million dollars and a whole lot of sweat” to get the 96-atom configuration to work, Heinrich said. “The atoms are in a very regular pattern because we put them there. Nobody knows how to make that cost-effective in manufacturing… that’s the core issue of nanotechnology.”

via NYT

Shorties: IBM sets up supercomputer to fight climate change

IBM has recently developed a new 1.6-petaflop high-performance computer for the National Center for Atmospheric Research, giving the center new supercomputing capability to support its research into atmospheric science and climate change.

A petaflop is a unit of measure of a supercomputer's performance; it is basically the ability to perform a quadrillion floating-point operations per second (FLOPS).
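In numbers, the definition works out as follows (a trivial sketch; the machine name is just a label for the NCAR system described above):

```python
PETAFLOPS = 10 ** 15  # one quadrillion floating-point operations per second

ncar_machine = 1.6 * PETAFLOPS
print(ncar_machine)  # 1.6e+15 operations per second
```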


Dubuque, USA leads the way for the smart cities of the future

Dubuque, Iowa

The city of Dubuque, Iowa is a quiet and clean city, housing a population of 60,000 and making it the 8th largest city in the state. If you happened to pass through this tranquil ville, you might be fooled into thinking it's just another town like any other, but that is not the case: Dubuque is making the transition toward becoming the 'smartest' city in America. How so? By partnering with IBM, the city council has installed a complex array of sensors and launched a sustainable-energy campaign among its citizens, which will soon allow it to become one of the most efficient urban centers in the country.

“I’ve always been concerned about the energy and resource use in this country,” said Roy Buol, mayor of Dubuque since 2005. “It’s almost like people think we have an infinite supply of energy and resources, and we don’t. There are better ways of doing things, but as citizens of the world we don’t have the real-time information so we can make better decisions when we’re using energy and resources.”

During his first year as mayor, Buol got the council on board for a complete turnaround of Dubuque, whose citizens at the time complained about issues like public transit, green space, water quality, and recycling. It was around this time that the city partnered with IBM Research for Dubuque to become the first truly smart city in the nation, bringing sustainable urban infrastructure to a whole different level.

The first step was a pilot program that replaced the water meters in 300 households with smart water meters that interface with IBM's technology. Additionally, one thousand more households have been fitted with smart electricity meters for a smart-electricity pilot currently underway, along with another program in which 250 homes are monitored by smart natural gas meters. Together, these meters tabulate resource usage, feeding residents and the city real-time data about their consumption.

Citizens who enter the program can thus view exactly what energy goes in and out of their households, easily accessible through an online interface provided by IBM. This is where the genius of the project lies: the data is fed directly to the energy consumers, not just to the city council or to IBM to crunch the numbers. If you give people the means to investigate power loss, most of the time they'll take the necessary measures themselves.

“The plan is pretty simple: Give people what they need,” says David Lyons, project manager for Sustainable Dubuque, the larger umbrella initiative that includes the IBM Research partnership. “And they’ve defined that for us. They need information that is specific to their utilization of resources. Not just about how the community or the world uses resources, but ‘how do I use resources?’”

It's still very early for concrete results from this innovative experiment, but judging from preliminary data alone, the smart-city infrastructure program looks extremely promising for Dubuque, and in the future for other cities around the world. The smart water-metering pilot is already producing hard results: an overall 6.6 percent reduction in water use, driven at least in part by an eight-fold increase in the number of households that, by looking at their data, identified and fixed a leak.

Less water waste means less energy waste, which in turn compounds the efficiency gains. Data on the electricity and natural gas sides of the project will be unveiled next year, but the program is apparently already yielding results, since Dubuque's city council has announced it will expand the project (the water project alone is growing from just 300 households to 3,000 households and 1,000 small businesses).

If it turns out to be a complete success from all points of view, mainly sustainability and cost-effectiveness, then the Dubuque model could easily be transferred to any city in the world. You just need to replicate and scale it, and have the vision to understand that, in the long run, this is a mandatory solution to an ever-growing energy crisis.

“It gives me very strong reason for hope,” Buol says. “I think this creates a very compelling model for other cities to replicate. It won’t be the City of Dubuque or IBM trying to sell what we’re doing: People will be able to see it and they’ll want the same benefits for their citizens and their businesses.”

source: Popular Science

IBM names first female CEO

The company, founded just 100 years ago, has publicly announced the first female CEO in its history: Virginia "Ginni" Rometty, known for her conservative nature, will take over IBM from Sam Palmisano, the company announced on Tuesday.

The 54-year-old technology veteran takes over the corporation starting January 1, 2012, after being in charge of sales and marketing for a few years. The former CEO will remain as chairman.

After she takes over IBM, women will be in charge of two of the world's largest technology companies, Meg Whitman having been named CEO of Hewlett-Packard just last month. But while Whitman took over a company in disarray, with numerous serious problems to face, Rometty will inherit a finely tuned IBM whose focus on the high-margin businesses of technology services and software has helped it thrive.

“It is a good sign,” said Jean Bozman, an analyst with IDC. “It does create an environment in which more of these high-ranking women executives can see that’s within reach. The more that happens, the more normal that will be. I think this might be a great sign that we’ve turned a corner. Certainly the Baby Boomers have wanted this for a long time.”

Google buys over 1,000 IBM patents for lawsuit battle

Google bought 1,023 patents from IBM this August, according to records filed at the US Patent and Trademark Office's website. This is in addition to the 17,000 patents the Mountain View company gained with its recently sealed acquisition of Motorola, in the course of which Google bought the whole cell-phone manufacturer outright.

The play here is both technological and strategic in nature. First of all, among these patents there are bound to be important innovative concepts which Google deems fit to launch with its next killer product. Secondly, and part of a more pressing current issue, they'll be used to battle the tech giants who seem to have rallied against Google and have repeatedly filed patent-infringement lawsuits. More exactly, these actions have been described as a "hostile, organized campaign against Android by Microsoft, Oracle, Apple and other companies" by David Drummond, senior VP and chief legal officer, on the Google blog in early August.

Neither IBM nor Google has chosen to comment on this recent acquisition, probably with due reason. Silence in the industrial tech war is a commodity big corporations have always wished to keep.

“We’re looking at other ways to reduce the anti-competitive threats against Android by strengthening our own patent portfolio,” he said at the time, and it seems as if the Chocolate Factory really meant it.

Just recently, Google passed a few of its patents to HTC, a major Android-supporting manufacturer, for its oncoming patent-suit battle with Apple.



IBM is building the largest data array in the world – 120 petabytes of storage

Data Center

IBM recently announced its intention to develop what will be, upon completion, the world's largest data array: 200,000 conventional hard disk drives working together to provide 120 petabytes of available storage space. The massive array, 10 times bigger than any other data center in the world today, was ordered by an "unnamed client" whose intentions have yet to be disclosed. IBM says the huge storage space will be used for complex computations, like those used to model weather and climate.

To put things in perspective, 120 petabytes, or 120 million gigabytes, would hold 24 billion typical five-megabyte MP3 files, or 60 copies of the entire indexed web, which currently spans roughly 150 billion pages. And while 120 petabytes might sound outrageous by any sane standard today, at the rate technology is advancing, similarly sized data centers might become fairly common before long.

“This 120 petabyte system is on the lunatic fringe now, but in a few years it may be that all cloud computing systems are like it,” says Bruce Hillsberg, director of storage research at IBM. Just keeping track of the names, types, and other attributes of the files stored in the system will consume around two petabytes of its capacity.

I know some of you tech enthusiasts out there are already grinding your teeth at these fairly dubious numbers. I know I did – 120 petabytes divided by 200,000 equals 600 GB. Does this mean IBM is using only 600 GB hard drives? I’m willing to bet the choice isn’t about being cheap; that would be counter-productive in the first place. It’s worth pointing out that we’re not talking about your usual commercial hard drives. Most likely, the drives used will be 15K RPM Fibre Channel disks at the very least – which beat the heck out of the SATA drive currently powering your computer’s storage. These kinds of drives currently don’t hold as much as SATA ones, which might be part of the explanation. There’s also the issue of redundancy in data centers, which reduces the amount of usable storage – and the overhead grows as the array gets larger. So the drives used could actually be somewhere between 1.5 and 3 TB each, all running at cutting-edge data transfer speeds.
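The redundancy argument can be made concrete with a rough calculation. The 200,000-drive count and the 1.5–3 TB drive sizes come from the article; the resulting overhead factors are illustrative, not IBM’s actual figures:

```python
usable_pb = 120
drives = 200_000

# Naive capacity per drive if every byte were usable
naive_gb = usable_pb * 10**6 / drives  # 1 PB = 1,000,000 GB (decimal)
print(naive_gb)  # 600.0 GB per drive

# If the drives are actually 1.5-3 TB, the raw pool is much larger
# than the 120 PB of usable space -- the gap is redundancy overhead.
for drive_tb in (1.5, 2.0, 3.0):
    raw_pb = drives * drive_tb / 1000  # TB -> PB
    print(f"{drive_tb} TB drives: {raw_pb:.0f} PB raw, "
          f"{raw_pb / usable_pb:.1f}x the usable capacity")
```

With 1.5 TB drives the raw pool would be 300 PB, a 2.5x overhead – generous but not absurd once you account for replication, parity, spares, and filesystem metadata.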

Steve Conway, a vice president of research with the analyst firm IDC who specializes in high-performance computing (HPC), says IBM’s repository is significantly bigger than previous storage systems. “A 120-petabyte storage array would easily be the largest I’ve encountered,” he says.

To house this massive number of hard drives, IBM placed them horizontally in drawers, like in any other data center, but made the drawers wider in order to accommodate more disks within smaller confines. Engineers also implemented a new data backup mechanism, whereby information from dying disks is slowly reproduced on a replacement drive, allowing the system to continue running without any slowdown. Meanwhile, a file system called GPFS (General Parallel File System) spreads stored files over multiple disks, allowing the machine to read or write different parts of a given file at once, while indexing its entire collection at breakneck speed.
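The idea behind that kind of striping can be shown with a toy sketch – this is purely conceptual Python, not GPFS’s actual on-disk format. A file is chopped into fixed-size blocks dealt round-robin across disks, so separate parts of the same file can be read or written in parallel:

```python
def stripe(data: bytes, n_disks: int, block: int = 4) -> list[bytearray]:
    """Deal fixed-size blocks of `data` round-robin across n_disks."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), block):
        disks[(i // block) % n_disks].extend(data[i:i + block])
    return disks

def unstripe(disks: list[bytearray], total_len: int, block: int = 4) -> bytes:
    """Reassemble the original byte stream from the striped disks."""
    out = bytearray()
    offsets = [0] * len(disks)
    d = 0
    while len(out) < total_len:
        out.extend(disks[d][offsets[d]:offsets[d] + block])
        offsets[d] += block
        d = (d + 1) % len(disks)
    return bytes(out)

data = b"spread me across several disks, please"
disks = stripe(data, n_disks=3)
assert unstripe(disks, len(data)) == data  # round-trips intact
```

In the real system each "disk" is a separate spindle with its own head, so the round-robin layout turns one sequential read into several concurrent ones.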

Last month a team from IBM used GPFS to index 10 billion files in 43 minutes, effortlessly breaking the previous record of one billion files scanned in three hours. Now, that’s something!
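For a sense of scale, comparing the two records (both figures are from the article):

```python
new_rate = 10_000_000_000 / (43 * 60)  # 10 billion files in 43 minutes
old_rate = 1_000_000_000 / (3 * 3600)  # 1 billion files in 3 hours

print(f"{new_rate:,.0f} files/s vs {old_rate:,.0f} files/s")
print(f"~{new_rate / old_rate:.0f}x faster")  # roughly 42x
```

That’s nearly 3.9 million files scanned per second – a roughly 42-fold improvement over the previous record’s rate.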

Fast access to huge storage is crucial for supercomputers, which need humongous amounts of bytes for the complicated models they’re assigned, be it weather simulations or the decoding of the human genome. Of course, such systems can also be used – and most likely already are – to store identities and human biometric data. I’ll take this opportunity to remind you of a frightful fact we published a while ago: every six hours, the NSA collects data the size of the Library of Congress.

As quantum computing gains ground and the first practical quantum computers are eventually developed, these kinds of data centers will become far more common.

UPDATE: The facility did indeed open in 2012.

MIT Technology Review

Homemade supercomputer made with LEGO is highly energy efficient

Mike Schropp can be considered a geek-tinkerer: a person animated day by day by a passion for hacking, tweaking, and generally taking things apart. His most recent project is a pinnacle of that self-proclaimed label, combining his passion for building computers with LEGO (you’d be surprised how well they come together) to build a highly efficient supercomputer.

He was inspired to begin his work by IBM’s World Community Grid, a highly ambitious collective project in which volunteers donate their computers’ processing power while the machines sit idle, thereby speeding up complicated computations. Research conducted as part of the project includes the search for cures to various diseases, energy efficiency programs, and loads of other highly important humanitarian efforts. You can pitch in to the WCG project and offer a hand – or a bit – simply by downloading a small piece of software from the official website.

Anyway, Schropp wanted to build a PC that could sustain 100,000 crunching points per day (pretty impressive in grid-computing terms) while being highly energy efficient – all within a $2,000 budget, of course.

If you’re more of a computer aficionado, these specs might make you drool a bit. As Gizmag reports, “the final DIY PC consists of three complete systems working as one in a single box made of LEGO bricks. Schropp used three quad-core Intel Core i7 2600K CPUs, three Asus P8P67 Micro ATX motherboards, three SSDs, a DDR3 memory for each system, as well as three coolers from Thermaltake and eight Aerocool fans. The DIY PC is powered by just a single Antec 1200 HCP power supply, which proves that Schropp was entirely successful in terms of energy efficiency.”

Oh, yeah. Where did the LEGO parts fit into all of this?

“I’ve been addicted to Legos for longer than I can remember, so when the opportunity comes up to work on a new project of some sort the question that invariably arises is, ‘Can I use Legos?'”, said Schropp.

On his website, Schropp describes the whole process of assembling his platform, as well as his comparison tests, going through various setups to find the best performance-to-energy ratio before settling on the best fit.

However, for all his efficiency-focused efforts, the system still burns a lot of coal when it’s up and computing at its fullest – hundreds of watts. Still, its point was well served: it goes to show how far optimization, when done right, can go in keeping energy use tight. The LEGO build, of course, helped the story spread far and wide.

As Schropp notes, “In the end the most important thing to me though is that I feel like I’m doing more to help contribute to a good cause in humanitarian and medical research. I know it’s just one system, but every little bit counts in finding cures and solutions.”

All photos courtesy of Mike Schropp.