Tag Archives: computers

Squishy computers now enable the first fully soft robots

Credit: Harvard University.

Researchers at Harvard University have designed the first rubber computer that relies exclusively on soft logic. In one of the experiments, this unusual computer was used to program a soft robot that dived and surfaced inside a transparent water tank depending on the pressure it sensed. The whole setup contains no hard or electronic parts.

In the past decade, researchers have devoted a lot of interest to soft robotics, which involves designing machines that resemble biological systems like squids, caterpillars, starfish, human hands and more. Unlike their ‘hard’ counterparts, soft robots are mostly made of elastic and flexible materials which allow them to mold to the environment. Such machines can stretch, twist, scrunch and squish, change shape or size, wrap around objects, and perform tasks impossible by rigid robotics standards. Until now, even soft robots had some rigid components they couldn’t get rid of, such as electronics — but not anymore.

“We’re emulating the thought process of an electronic computer, using only soft materials and pneumatic signals, replacing electronics with pressurized air,” says Daniel J. Preston, first author on a paper published in PNAS.

The most basic components of electronic computers are logic gates. This circuitry receives input information, runs it through some programming, and then outputs a reaction, whether printing a document or moving a robotic arm on the Y-axis. Our biological circuitry isn’t all that different. For instance, when a doctor strikes the tendon below the kneecap with a soft hammer, the nervous system reacts by jerking the leg.

To make logic gates without any electronic components, Preston and colleagues used silicone tubing and pressurized air. All complex operations performed by computers can be built from only three logic gates: NOT, AND, and OR. By programming how the soft valves react to different air pressures, the researchers were able to replicate all three. The NOT gate, for instance, works like this: if the input valve has high pressure, the output will be low pressure. In the case of the fish-like robot the researchers experimented with in a water tank, the NOT logic gate uses an environmental pressure sensor. When the sensor registers low pressure at the top of the tank, the robot dives, and it surfaces again when it senses high pressure. The command can also be controlled through an external soft button, as you can see in the video below.
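To make the valve-as-logic idea concrete, here is a minimal Python sketch (not Harvard's actual control scheme) that treats high pressure as True and low pressure as False. The functions only mirror the gates' truth tables; the dive/surface rule follows the article's description of the fish-like robot, and everything else is an illustrative assumption.

```python
# A minimal sketch of the pneumatic logic described above (not Harvard's actual
# control scheme), treating high pressure as True and low pressure as False.
HIGH, LOW = True, False     # high vs. low air pressure

def soft_not(a):            # high-pressure input -> low-pressure output
    return not a

def soft_and(a, b):         # output goes high only if both inputs are high
    return a and b

def soft_or(a, b):          # output goes high if either input is high
    return a or b

def dive_controller(pressure_is_high):
    """Surface when the sensor reads high pressure (deep water), dive when it reads low."""
    return "surface" if pressure_is_high else "dive"

print(dive_controller(soft_not(HIGH)))   # inverted sensor reading -> "dive"
```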

Soft robots have their own set of advantages that make them uniquely suited for a range of applications. Modern assembly lines are packed with hard robots that perform all sorts of extremely precise and fast operations. However, if a human happens to get in the way, serious injury becomes a huge risk. With soft robotics, this is not a problem because they can only exert so much force.

Some of the properties that make soft robots so attractive are affordability, ease of manufacture, light weight, resistance to physical damage and corrosive substances, and durability. Soft robots also have the benefit of being able to operate where electronics struggle, such as in the presence of high levels of radiation or in outer space. These properties make soft robots particularly appealing for humanitarian and rescue operations, such as in the wake of a flood, hurricane, or nuclear power plant meltdown. “If it gets run over by a car, it just keeps going, which is something we don’t have with hard robots,” Preston says.

Soft robotics offers another interesting possibility. Because there are no electronics, a completely soft robot could be made from materials which match the refractive index of water. When completely submerged, the robot would appear transparent. Preston says he hopes to make an autonomous soft robot that is invisible to the naked eye and perhaps even to sonar.

In a sea of information overload, artificial intelligence, and all sorts of complex computing, it’s refreshing to see new concepts that actually hinge on simplicity.

“There’s a lot of capability there,” Preston says, “but it’s also good to take a step back and think about whether or not there’s a simpler way to do things that gives you the same result, especially if it’s not only simpler, it’s also cheaper.”

Thermal diode could allow computers to one day function on heat alone

A research team from the University of Nebraska-Lincoln College of Engineering has developed the first building block required for heat-fueled computers: the thermal diode.

Image credits Mahmoud Elzouka & Sidy Ndao, (2017), Scientific Reports.

Ever since humanity put together the first electronic computer, we’ve been locked in an endless battle to keep these things cool enough so they won’t fry and shut down. A struggle made all too personal as your phone cooks in your hand after a particularly lengthy call or game. Is this the price we have to pay for modern communication and computation? Are we doomed to a future chock-full of fans and thermal conductive paste, anxiously blowing into PC cases or wailing at the sight of the on-screen thermometer?

Well, maybe not.

Sidy Ndao, an assistant professor of mechanical and materials engineering, and Mahmoud Elzouka, a graduate student in mechanical and materials engineering, both at the University of Nebraska-Lincoln College of Engineering, developed a thermal diode that may allow computers to harness heat as an alternate energy source and keep functioning even at ultra-high temperatures. The duo says they got the idea of creating a nano-thermal-mechanical device, or thermal diode, after struggling with the question of how to better cool down computers. Instead of trying to dissipate heat (essentially wasting energy) as before, they decided to try to harness it for the computer’s own systems.

“If you think about it, whatever you do with electricity you should (also) be able to do with heat, because they are similar in many ways,” Ndao said. “In principle, they are both energy carriers. If you could control heat, you could use it to do computing and avoid the problem of overheating.”

(a) The rectifier (diode), made from a fixed terminal (top), a moving terminal (bottom), and a thermally-expandable structure (v-shaped bent beam). The variable-thickness arrow represents the decrease in thermal radiation intensity with distance from the heated surface. (b) False-color scanning electron micrograph of a quarter of the proof-of-concept microdevice, showing 6 pairs of terminals (24 pairs in total). (c) Scanning electron micrograph of the proof-of-concept microdevice. (d,e) Zoomed-in views showing the connection of the moving terminal to the folded-beam spring and the bent beam, respectively.
Image credits Mahmoud Elzouka & Sidy Ndao, (2017), Scientific Reports.

Their NanoThermoMechanical rectifier (NTMR) uses heat exclusively, meaning it could be powered by waste heat (which the authors note amounts to about 60% of total domestic energy consumption in the US), so it would “obviously cut down on waste and the cost of energy,” Ndao points out. The system is made up of two metallic plates, one fixed (upper) and one mobile (lower), called terminals, and can be constructed “using conventional microfabrication techniques”. Near-field thermal radiation carries the heat between the two plates, and its intensity decreases exponentially with the distance between the terminals. So, the wider the gap between them, the lower the heat transfer rate becomes (negative bias). When the expandable structure pushes the mobile terminal up, the distance closes, increasing the rate of heat transfer (positive bias).

The mobile terminal rests on a thermally-expandable structure, which activates when the lower plate is heated and pushes the two terminals closer together. If the upper plate is heated instead, the thermal radiation that carries heat between two close-by objects isn’t strong enough to heat the expandable structure, so there is no motion — allowing the system to act like a diode.
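Below is a toy numerical model of that rectification mechanism, just to show why closing the gap on one side produces diode-like behavior. The gap sizes, decay length, and temperatures are made-up illustrative values, not figures from the Nebraska-Lincoln device.

```python
# A toy numerical model of the rectification mechanism described above. The
# gap sizes, decay length and temperatures are illustrative values, not
# figures from the Nebraska-Lincoln device.
import math

GAP_OPEN = 1.0e-6       # metres, gap while the expandable structure is cold
GAP_CLOSED = 0.1e-6     # metres, gap after thermal expansion pushes the terminal up
DECAY_LENGTH = 0.2e-6   # metres, rough scale over which near-field radiation falls off

def heat_flow(t_fixed, t_moving):
    """Relative heat flow between the terminals for a given temperature pair."""
    # heating the MOVING terminal activates the expandable structure and closes the gap
    gap = GAP_CLOSED if t_moving > t_fixed else GAP_OPEN
    return math.exp(-gap / DECAY_LENGTH) * abs(t_moving - t_fixed)

forward = heat_flow(t_fixed=300.0, t_moving=600.0)   # moving side hot: gap closes
reverse = heat_flow(t_fixed=600.0, t_moving=300.0)   # fixed side hot: gap stays open
print(f"rectification ratio ~ {forward / reverse:.0f}")   # much greater than 1: diode-like
```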

Hot stuff

The prototype diodes.
Image credits Karl Vogel / University of Nebraska-Lincoln Engineering.

It’s not the first time scientists have toyed with heat-based systems for computers, but “the technologies proposed so far operate at cryogenic or room temperatures.” By contrast, the team’s prototype device still functions at around 630 degrees F (332 C), which is a lot more than my rig can take without melting and/or exploding. Ndao says that in the future, he expects to push the system’s limit to some ridiculous temperatures — even as high as 1,300 F (704 C).

The only major gripe I have with this system right now is that electro-mechanical computers tend to be really slow compared to pure electrical ones — that’s why we switched to electronic computers in the first place. Well, that and the fact that mechanical computers have a lot of parts in motion which tend to break. So we’ll have to wait and see just how fast this thing can be. But if they can pull it off, the thermal computer could comfortably compute in places where regular systems would boil to a halt.

Ndao said the team is trying to make the device more efficient and faster. Elzouka said that although they’ve filed for a patent already, there is still work to be done to improve the diode and its performance.

“If we can achieve high efficiency, show that we can do computations and run a logic system experimentally, then we can have a proof-of-concept,” Elzouka said. “(That) is when we can think about the future.”

“We want to create the world’s first thermal computer,” Ndao said. “Hopefully one day, it will be used to unlock the mysteries of outer space, explore and harvest our own planet’s deep-beneath-the-surface geology, and harness waste heat for more efficient-energy utilization.”

The paper “High Temperature Near-Field NanoThermoMechanical Rectification” has been published in the journal Scientific Reports.

Emotional computers really freak people out — a new take on the uncanny valley

New research shows that AIs we perceive as too mentally human-like can unnerve us even if their appearance isn’t human, furthering our understanding of the ‘uncanny valley’ and potentially directing future work into human-computer interactions.

Image credits kuloser / Pixabay.

Back in the 1970s, Japanese roboticist Masahiro Mori advanced the concept of the ‘uncanny valley’ — the idea that humans will appreciate robots and animations more and more as they become more human-like in appearance, but find them unsettling as they become almost-but-not-quite-human. In other words, we know how a human should look, and a machine that ticks some of the criteria but not all is too close for comfort.

The uncanny valley of the mind

That’s all well and good for appearance — but what about the mind? To find out, Jan-Philipp Stein and Peter Ohler, psychologists at the Chemnitz University of Technology in Germany, had 92 participants observe a short conversation between two virtual avatars, one male and one female, in a virtual plaza. These characters talked about their exhaustion from the hot weather, after which the woman spoke about her frustration at her lack of free time and her annoyance at waiting on a friend who was late, and the man expressed his sympathy for her plight. Pretty straightforward small talk.

The trick was that while everyone witnessed the same scene and dialogue, the participants were given one of four context stories. Half were told that the avatars were controlled by computers, and the other half that they were human-controlled. Furthermore, half of the group was told that the dialogue was scripted and the others that it was spontaneous, in such a way that each context story was fed to one quarter of the group.

Out of all the participants, those who were told that they’d be witnessing two computers interact on their own reported the scene as more eerie and unsettling than the other three groups. People were OK with humans or script-driven computers exhibiting natural-looking social behavior, but when a computer showed frustration or sympathy on its own it put people on edge, the team reports.

Given that the team managed to elicit this response in their participants only through the concept they presented, they call this phenomenon the ‘uncanny valley of the mind,’ to distinguish between the effects of a robot’s perceived appearance and personality on humans, noting that emotional behavior can seem uncanny on its own.

In our own image

Image credits skeeze / Pixabay.

The main takeaway from the study is that people may not be as comfortable with computers or robots displaying social skills as they think they are. It’s all fine and dandy if you ask Alexa about the CIA and she answers/shuts down, but expressing frustration that you keep asking her that question might be too human for comfort. And with social interactions, the effect may be even more pronounced than with appearance alone — because appearance is obvious, but you’re never sure exactly how human-like the computer’s programming is.

Stein believes the volunteers who were told they were watching two spontaneous computers interact were unsettled because they may have felt their human uniqueness was under threat. If computers can emulate us, what’s stopping them from taking control of our technology? In future research, he plans to test whether this uncanny-valley-of-the-mind effect can be mitigated when people feel they have control over the human-like agents’ behavior.

So are human-like bots destined to fail? Not necessarily — people may have felt the situation was creepy because they were only witnessing it. It’s like having a conversation with Cleverbot, only a cleverer one. A Clever2bot, if you will. It’s fun while you’re doing it, but once you close the conversation and mull it over, you just feel like something was off with the talk.

By interacting directly with the social bots, humans may actually find the experience pleasant, thus reducing its creepy factor.

The full paper “Feeling robots and human zombies: Mind perception and the uncanny valley” has been published in the journal Cognition.

 

AI can write new code by borrowing lines from other programs

DeepCoder, a system put together by researchers at Microsoft and the University of Cambridge, can now allow machines to write their own programs. Its scope is currently limited to short programs, such as those seen at programming competitions. The tool could make it much easier for people who don’t know how to write code to create simple programs.

Image credits: Pexels.

In a world run more and more via a screen, knowing how to code — and code fast — is a good skill to have. Still, it’s not a very common one. With this in mind, Microsoft researchers have teamed up with their UoC counterparts to produce a system that allows machines to build simple programs from a basic set of instructions.

“All of a sudden people could be so much more productive,” says Armando Solar-Lezama at the Massachusetts Institute of Technology, New Scientist reports.

“They could build systems that it [would be] impossible to build before.”

DeepCoder relies on a method called program synthesis, which allows the software to create programs by ‘stealing’ lines of code from existing programs — just like many human programmers do. Initially given a list of inputs and outputs for each fragment of code, DeepCoder learned which bits do what, and how they can fit together to reach the required result.

It uses machine learning to search databases of source code for building blocks, which it then sorts according to their probable usefulness. One advantage it has over humans is that DeepCoder’s AI can search for code much faster and more thoroughly than a programmer could. In the end, this can allow the system to make unexpected combinations of source code to solve various tasks.
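As a rough illustration of how this kind of program synthesis can work, here is a tiny Python sketch: a hand-coded “usefulness” ranking stands in for DeepCoder’s learned neural model, and the search simply enumerates compositions of a few building-block functions until one reproduces the given input/output examples. The DSL and scores are invented for the example; this is not the actual DeepCoder system.

```python
# A toy illustration of search-based program synthesis, loosely in the spirit of
# DeepCoder (this is NOT the Microsoft/Cambridge system). Given input/output
# examples, it enumerates compositions of small building-block functions,
# trying the blocks a hand-coded "usefulness" score ranks highest first.
# In DeepCoder that ranking comes from a learned neural model; here both the
# DSL and the scores are invented for illustration.
from itertools import product

BLOCKS = {                       # tiny DSL of reusable code fragments
    "reverse":  lambda xs: xs[::-1],
    "sort":     lambda xs: sorted(xs),
    "double":   lambda xs: [2 * x for x in xs],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
}
USEFULNESS = {"sort": 0.9, "double": 0.8, "reverse": 0.4, "drop_neg": 0.3}

def synthesize(examples, max_len=3):
    """Return the first pipeline of blocks consistent with all I/O examples."""
    names = sorted(BLOCKS, key=USEFULNESS.get, reverse=True)
    for length in range(1, max_len + 1):
        for combo in product(names, repeat=length):
            def run(xs, combo=combo):
                for name in combo:
                    xs = BLOCKS[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return combo
    return None

# find some pipeline that maps [3, -1, 2] to [4, 6]
print(synthesize([([3, -1, 2], [4, 6])]))
```

Even at this toy scale the ranking matters: blocks scored as more useful are tried first, which is the core idea behind guiding the search with machine learning.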

 

Ultimately, the researchers hope DeepCoder will give non-coders a tool which can start from a simple idea and build software around it, says Marc Brockschmidt, one of DeepCoder’s creators at Microsoft Research in Cambridge, UK.

“It could allow non-coders to simply describe an idea for a program and let the system build it,” he said.

Researchers have dabbled in automated code-writing software in the past, but nothing on the level DeepCoder can achieve. In 2015, for example, MIT researchers created a program which could automatically fix bugs in software by replacing faulty lines of code with material from other programs. DeepCoder, by contrast, doesn’t need a pre-written piece of code to work with; it builds its own.

It’s also much faster than previous programs. DeepCoder takes fractions of a second to create working programs where older systems needed several minutes of trial and error before reaching a workable solution. Because DeepCoder learns which combinations of source code work and which ones don’t as it goes along, it improves its speed every time it tackles a new problem.

At the moment, DeepCoder can only handle tasks that can be solved in around five lines of code — but in the right language, five lines are enough to make a pretty complex program, the team says. Brockschmidt hopes that future versions of DeepCoder will make it very easy to build basic programs that scrape information from websites for example, without a programmer having to devote time to the task.

“The potential for automation that this kind of technology offers could really signify an enormous [reduction] in the amount of effort it takes to develop code,” says Solar-Lezama.

Brockschmidt is positive that DeepCoder won’t put programmers out of a job, however. But with the program taking over some of the most tedious parts of the job, he says, coders will be free to handle more complex tasks.

 

NASA creates computers that can survive on Venus, 30 years after the last landings

NASA’s Glenn Research Center has developed a new class of computers that can withstand the hellscape of Venus. The devices are built from a different semiconductor than regular hardware, which can carry more voltage at much higher temperatures.

SiC transistor gate electroluminesces blue while cooked at more than 400°C.
Image credits NASA / Glenn RC.

Mars has been getting a lot of attention as humanity’s first planned colony. So it’s easy to forget that it’s neither the closest nor the most Earth-like terrestrial planet in the Solar System. Both those distinctions belong to Venus — so why aren’t we looking towards it for our otherworldly adventures?

The goddess of love and beauty

Well, the thing is that Venus is awful. It’s an objectively dreadful place, a scorching hot ball of rock covered in thick clouds of boiling acid. Ironic, right?

These conditions not only make it nigh-impossible for real-estate agents to put a positive spin on the planet, they also make it frustratingly hard to explore. Any mission to Venus has to work around one simple fact: your run-of-the-mill computer wouldn’t like it there. Normal silicon chips can still function up to around 240-250°C (482°F). After that, the chip turns from a semiconductor into a fully fledged conductor, electrons start jumping all over the place, and the system crashes.

The longest any human-made object has survived on Venus is 127 minutes, a record set in 1982 by the Soviet spacecraft Venera 13. It was designed to survive for only 32 minutes and used all kinds of tricks to make that happen — such as cooling its internal systems to -10°C (14°F) before entering the atmosphere, hermetically sealed internal chambers for instruments, and so on. Venera braved sulphuric acid rain, surface temperatures of 470°C (878°F), and an atmospheric pressure 90 times Earth’s long enough to capture the first color pictures of the planet’s surface.

The face of love.
Image credits Morbx / Reddit.

After the mission, the Soviets flew three more craft to Venus — Venera 14, Vega 1, and Vega 2 — making the last attempted landing on the planet in 1985.

Since that time, the transistor industry has developed alternative materials it can use for integrated systems. One of the most promising is silicon carbide (SiC). Its ability to support high voltages at huge temperatures has already drawn interest from the military and heavy industries, and makes it ideal for a mission to Venus.

NASA’s Glenn Research Center has developed two prototype SiC chips which can be used in future Venus missions. The researchers have also worked to overcome another vulnerability of traditional integrated circuits: they’ve developed interconnects — the wires that tie transistors to other hardware components — which can withstand the extreme conditions on the planet.

Five hundred hours of fire

SiC chip designed by NASA, before and after GEER tests.
Image credits NASA / Glenn RC.

To see if the technology lives up to expectations, the team put these SiC transistors and interconnects together and housed them in ceramic-packed chips. The chips were then placed in the GEER (Glenn Extreme Environments Rig) which can simulate the temperatures and pressures on Venus for hundreds of hours at a time.

One of the chips, housing a simple 3-stage oscillator, kept stable at 1.26MHz over 521 hours (over 21 days) before the GEER had to be shut down. The second chip fizzled out after 109 hours (4.5 days), but NASA determined that the failure was caused by a faulty setup, not the chip itself.

The results for the two chips. Image credits NASA / Glenn RC.

This performance is a far cry from that seen in the ’80s, especially considering that the chips didn’t benefit from any pressure vessels, cooling systems, or other types of protection. It’s the first system shown to withstand the conditions on Venus for weeks at a time.

“With further technology maturation, such SiC IC electronics could drastically improve Venus lander designs and mission concepts, fundamentally enabling long-duration enhanced missions to the surface of Venus,” the researchers conclude.

But it’s not only transistors we’ll need for a successful Venus rover. Drills, cameras, wheels — everything has to be adapted to work in a high-pressure, high-temperature, highly acidic environment. Materials science has come a long way since the last missions, so creating a mechanically-sound lander should be feasible. A full-fledged rover with multiple moving parts that can survive on Venus would be a lot harder to develop — NASA Glenn is working on such a machine, a land-sailing rover, which they estimate will be ready by 2033.

The full paper “Prolonged silicon carbide integrated circuit operation in Venus surface atmospheric conditions” has been published in the journal AIP Advances.

Scientists develop memory chips from egg shells

Eggshells might become the data storage of the future. A Chinese team showed that the material can be used to create greener RAM storage for our computers.

Image credits Steve Buissinne / Pixabay.

You’ve heard of eggplants, but what about eggcomputers? Seeking to bring the term about, a team from the Guizhou Institute of Technology hatched a cunning plan: they went to the market, bought a few random eggs, and ground their shells for three hours to make a homogeneous, nano-sized powder. After it was dry, the team mixed this powder into a solution and poured it onto a substrate.

They thus ended up with the part of a memory chip through which electricity actually flows — the electrolyte. But eggshells are not an item you tend to see in chip factories, so how could one function as RAM? Well, the team tested the egg-paste to see if it changes its electrical resistance when a voltage is applied across it. This property can be used to create memory chips of the ReRAM, or resistive random access memory, variety. There’s a lot of interest in ReRAM, as it could be used to create faster, denser, and more energy-efficient storage media than traditional RAM or flash memory.

And it worked. The team was able to encode 100 bits of binary information into the eggmemory before it failed. It doesn’t stack up to the billions of cycles regular materials can take, but as a proof of concept it’s incredible.
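To picture what “resistive memory” means in practice, here is a toy Python model of a single ReRAM cell: a write voltage above or below a threshold switches the cell between low- and high-resistance states, and a read simply reports which state it is in. The voltage and resistance values are illustrative placeholders, not measurements from the eggshell device.

```python
# A toy model of a single resistive-memory (ReRAM) cell, the role the eggshell
# electrolyte plays in the study above. The threshold voltages and resistance
# values are illustrative placeholders, not measurements from the paper.
class ReRAMCell:
    SET_V, RESET_V = 2.0, -2.0          # made-up write thresholds (volts)
    LOW_R, HIGH_R = 1e3, 1e6            # ohms: low resistance = 1, high = 0

    def __init__(self):
        self.resistance = self.HIGH_R   # start in the high-resistance (0) state

    def write(self, voltage):
        if voltage >= self.SET_V:
            self.resistance = self.LOW_R
        elif voltage <= self.RESET_V:
            self.resistance = self.HIGH_R
        # small read voltages in between leave the state untouched (nonvolatile)

    def read(self):
        return 1 if self.resistance == self.LOW_R else 0

cell = ReRAMCell()
cell.write(2.5);  print(cell.read())   # 1
cell.write(-2.5); print(cell.read())   # 0
```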

It’s ground eggshell. That can store binary data.

Still, we’re a long way off from seeing one of these devices on the market. But they do show promise for future applications and, with enough development, could provide a clean, sustainable, and very egg-y alternative to the electrolytes in use today.

The full paper “A larger nonvolatile bipolar resistive switching memory behaviour fabricated using eggshells” has been published in the journal Current Applied Physics.

Atomic-sandwich material could make computers 100 times more energy efficient

A new material could pave the way for an entirely new generation of computers — one that packs in a lot more processing power while consuming only a fraction of the energy.


A false-colored electron microscopy image shows alternating lutetium (yellow) and iron (blue) atomic planes.
Image credits Emily Ryan and Megan Holtz / Cornell.

Known as a magnetoelectric multiferroic, the new material is made out of distinct atom-thick layers sandwiched together, and it shows magnetic and electrical properties at room temperature. The thin film is magnetically polarized, and this polarization can be flipped — the two states encoding the 1’s and 0’s that underpin our digital systems.

The researchers started with a thin, atomically-precise film of hexagonal lutetium iron oxide, or LuFeO3 — a material known to be ferroelectric, but not particularly magnetic. It consists of alternating lutetium-oxide and iron-oxide layers. Then, through a technique known as molecular-beam epitaxy, they “spray-painted” one extra monolayer of iron oxide for every 10 atomic repeats of the alternating single-monolayer pattern.

“We were essentially spray painting individual atoms of iron, lutetium and oxygen to achieve a new atomic structure that exhibits stronger magnetic properties,” said Darrell Schlom, a materials science and engineering professor at Cornell and senior author of the study.

The result was a new material that combines a phenomenon in lutetium oxide called “planar rumpling” with the magnetic properties of iron oxide to achieve multiferroic properties at room temperature. Heron explains that lutetium shows displacements on an atomic level called rumples. These can be moved around using an electric field and can shift the magnetic field of the neighboring iron oxide layer from positive to negative. So in essence, the team developed a material whose magnetic properties can be altered accurately with electricity — a “magnetoelectric multiferroic”.

“Before this work, there was only one other room-temperature multiferroic whose magnetic properties could be controlled by electricity,” said John Heron, assistant professor in the Department of Materials Science and Engineering at the University of Michigan.

“That electrical control is what excites electronics makers, so this is a huge step forward.”

Room-temperature multiferroics require much less power to write and read than the semiconductor-based systems we use today. And, if you cut the power, the data remains encoded. Combine these two properties and you get computers that use only brief pulses of energy to function instead of the constant flow required by our current machines — as little as 100 times less energy. So, needless to say, electronics experts are always on the lookout for new room-temperature multiferroics.
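A toy model helps show why that combination is attractive: the sketch below stores a bit as a magnetization sign that only costs energy when an electric pulse actually flips it, and costs nothing to retain or read. The energy bookkeeping is purely illustrative, not based on measured values.

```python
# A toy model of the property described above (not the Cornell/Michigan
# device): a bit stored as a magnetization sign that costs energy only when an
# electric pulse actually flips it, and nothing to retain or read. The energy
# unit is an arbitrary placeholder.
class MultiferroicBit:
    WRITE_ENERGY = 1.0                       # arbitrary units per flip

    def __init__(self):
        self.magnetization = -1              # -1 encodes 0, +1 encodes 1
        self.energy_used = 0.0

    def write(self, bit):
        target = +1 if bit else -1
        if target != self.magnetization:     # a pulse is needed only to flip
            self.magnetization = target
            self.energy_used += self.WRITE_ENERGY

    def read(self):
        return 1 if self.magnetization > 0 else 0   # reading is essentially free here

b = MultiferroicBit()
b.write(1); b.write(1); b.write(0)
print(b.read(), b.energy_used)   # 0 2.0 -- the state persists with zero standby power
```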

“Electronics are the fastest-growing consumer of energy worldwide,” said Ramamoorthy Ramesh, associate laboratory director for energy technologies at Lawrence Berkeley National Laboratory.

“Today, about 5 percent of our total global energy consumption is spent on electronics, and that’s projected to grow to 40-50 percent by 2030 if we continue at the current pace and if there are no major advances in the field that lead to lower energy consumption.”

Heron thinks we’re still a few years away from a viable multiferroic device. But the team’s work brings the field of electronics closer to developing devices which can maintain high computing speeds while consuming less power. If the industry is to keep following Moore’s law — which predicts that the number of transistors on an integrated circuit, and with it computing power, will double roughly every two years — such advances will be vital. Moore has been proven right since the 1960s, but silicon-chip technology may be reaching its limits — and whatever happens, we may not be able to power it for very long.

The full paper “Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic” has been published in the journal Nature.

 

New method developed to encode huge quantity of data in diamonds

A team from the City College of New York has developed a method to store data in diamonds by using microscopic defects in their crystal lattice.

Image credits George Hodan / Publicdomainpictures

I’ve grown up on sci-fi where advanced civilizations stored immense amounts of data in crystals (like Stargate SG-1. You’re welcome). Now a U.S. team could bring the technology to reality, as they report exploiting structural defects in diamonds to store information.

“We are the first group to demonstrate the possibility of using diamond as a platform for the superdense memory storage,” said study lead author Siddharth Dhomkar.

It works similarly to how CDs or DVDs encode data. Diamonds are made up of a cubic lattice of carbon atoms, but sometimes an atom just isn’t there, and the structure is left with a hole — a structural defect. These are referred to as nitrogen-vacancy centers when a nitrogen atom sits next to the missing carbon.

These vacancies are negatively charged (as there are no protons to offset the electrons’ charge from neighboring atoms). But, the team found that by shining a laser on the defects — in essence neutralizing their electrical charge — they could alter how each vacancy behaved. Vacancies with a negative charge fluoresced brightly, while those with neutral charges stayed dark. The change is reversible, long-lasting, and stable under weak and medium levels of illumination, the team said.

So just as a laser can be used to encode data on a CD’s medium, it can be turned to storing data by changing these defects’ charges. In theory, this method could allow scientists to write, read, erase, and re-write the diamonds, the team added.
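The write/read scheme can be sketched as a toy model: each vacancy site is either negatively charged (bright under readout) or neutral (dark), a “laser pulse” toggles that charge, and reading just checks for fluorescence. The bright-equals-1 mapping and the site addressing below are illustrative assumptions, not details from the paper.

```python
# A toy sketch of the write/read scheme described above (not the CCNY team's
# actual protocol): each vacancy site is either negatively charged (bright
# under readout) or neutralized by a laser pulse (dark). The bright = 1
# mapping and the site addressing are illustrative assumptions.
class DiamondStore:
    def __init__(self, n_sites):
        self.charge = ["negative"] * n_sites   # NV sites start negative -> bright

    def write(self, site, bit):
        # a laser pulse neutralizes the charge (dark); restoring the negative
        # charge keeps the site bright
        self.charge[site] = "negative" if bit else "neutral"

    def read(self, site):
        fluoresces = self.charge[site] == "negative"
        return 1 if fluoresces else 0

d = DiamondStore(8)
for i, bit in enumerate([1, 0, 1, 1, 0, 0, 1, 0]):
    d.write(i, bit)
print([d.read(i) for i in range(8)])   # [1, 0, 1, 1, 0, 0, 1, 0]
```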

Dhomkar said that in principle, each bit of data can be encoded in a spot a few nanometers — a few billionths of a meter — wide. This is a much denser information packing than in any similar data-storage device, so we could use diamonds to build the superdense computer memories of the future. But we currently have no way to read or write on such a small scale, so for now “the smallest bit size that we have achieved is comparable to a state-of-the-art DVD,” Dhomkar told Live Science.

Here’s where the second ‘but’ comes into the picture. We can’t yet fully use the diamonds’ capacity, but the team has shown they can encode data in 3D by stacking layers of 2D data stores.

“One can enhance storage capacity dramatically by utilizing the third dimension,” Dhomkar said.

By using this 3D approach, the technique could be used to store up to 100 times more data than a typical DVD. Dhomkar and his team are now looking into developing ways to read and write the diamond stores with greater density.

“The storage density of such an optimized diamond chip would then be far greater than a conventional hard disk drive,” he said.

The full paper “Long-term data storage in diamond” has been published in the journal Science Advances.

Here’s why there was no Twitter on Friday — it’s way scarier than you think

You might have noticed something strange in your Internet adventures last Friday — the distressing absence of a large part of it. An official statement from internet infrastructure giant Dyn released Friday explains what happened, and why it might happen again.

Image credits Blondinrikard Fröberg / Flickr.

Large sections of the Internet became basically inaccessible last week, as three massive Distributed Denial of Service (DDoS) attacks hit a company called Dyn. This company provides Domain Name System (DNS) hosting for hundreds of websites including Twitter, Reddit, Amazon, Netflix, PayPal and so on. A DNS host basically “places” a website on the web, by connecting the domain name a user is trying to access, such as “ZMEScience.com”, to the IP address of the server hosting that site. Take the host out of the equation, and the other two can’t communicate — like cutting the cord between two landlines.
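If you want to see that DNS step for yourself, the short Python snippet below resolves a couple of domain names to IP addresses using the standard library. When a DNS provider like Dyn is knocked offline, it is exactly this lookup that fails, even though the web servers behind those names may be running fine.

```python
# A minimal look at the DNS step described above: before your browser can
# connect, the name you type has to be resolved to an IP address.
import socket

for domain in ("zmescience.com", "twitter.com"):
    try:
        print(domain, "->", socket.gethostbyname(domain))
    except socket.gaierror as err:
        print(domain, "-> lookup failed:", err)
```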

A DDoS attack consists of a large number of computers simultaneously issuing a massive number of fake requests to a server, basically flooding a website with connection requests, information requests — anything to keep the servers busy. Because the website host can’t tell which of the requests are valid and which are fake, it has to let them all through. The servers overload, buckle, and then nobody can access them anymore. Now, for the scary bit.

Welcome to the Internet of Things

DDoS attacks are one of the oldest tricks in the book. As such, hosting companies like Dyn have robust systems in place to deal with them, and they regularly test those systems against mock “stresser” services, which do the same thing. Hackers looking to launch a denial-of-service attack have to create specific software, then infect as many computers as possible (the botnet) and run shell programs off of them — the bigger the botnet, the more powerful the flood.

For the most part, PCs have (at least) decent firewalls and antivirus programs that defend them against this type of software. So it can be hard for hackers to gain the numbers to make a dent in servers such as the ones Dyn uses. Hosting companies just have to make sure their servers can handle more traffic than hackers can realistically throw towards them, and that’s that.

Friday’s attacks, however, used a new approach: the botnet wasn’t made up of computers like the one you’re reading this article on, but other kinds of digital devices connected to the web. Think gadgets such as smart TVs, security cameras, DVRs, webcams, even web-connected thermostats and coffee makers — collectively known as the Internet of Things (IoT). It’s a ridiculously huge entity, but these devices have lousy security for the most part. When’s the last time you changed the username and password on your fridge? Exactly.

Because users don’t update these devices’ software or change the factory-set accounts and passwords, and because the devices often ship with vulnerable code, they are easy to hack en masse. Dyn’s chief strategy officer Kyle York said the company recorded tens of millions of IP addresses in the attack — a huge botnet of IoT devices turned towards bringing down their DNS services.

We hope you’ll enjoy your stay.
Image credits Ian Kennedy / Flickr.

KrebsOnSecurity reported that a piece of malware called Mirai was involved in the attack. The program allows pretty much anyone to create a personal botnet army, after its source code was released on the Internet last month.

“Mirai scours the web for IoT devices protected by little more than factory-default usernames and passwords, and then enlists the devices in attacks that hurl junk traffic at an online target until it can no longer accommodate legitimate visitors or users,” Krebs, a US security blogger, explained.

Since then, Chinese electronics company XiongMai has recalled its products, after discovering that its surveillance cameras were used in the attack. This is a particularly disturbing problem, as many companies who sell security and web cameras buy their tech from XiongMai, put on a fresh coat of paint, and sell it under their own brand name. So yes, the webcam staring you down right now could very well be XiongMai tech.

 

“It’s remarkable that virtually an entire company’s product line has just been turned into a botnet that is now attacking the United States,” Flashpoint’s researcher Allison Nixon told Krebs. “Some people are theorising that there were multiple botnets involved here. What we can say is that we’ve seen a Mirai botnet participating in the attack.”

Dyn was ultimately able to restore hosting services on Friday, and with it, access to Twitter, Amazon, and all the other sites. But this attack could be just a preview. The complexity of botnet systems like Mirai and the vulnerability of IoT devices paint a pretty grim picture between them.

“[I]nsecure IoT devices are going to stick around like a bad rash – unless and until there is a major, global effort to recall and remove vulnerable systems from the internet,” explains Krebs. “In my humble opinion, this global clean-up effort should be funded mainly by the companies that are dumping these cheap, poorly-secured hardware devices onto the market in an apparent bid to own the market. Well, they should be made to own the cleanup efforts as well.”

Just in case you missed it, you can read Dyn’s statement here.

IBM scientists make phase-change artificial neurons to mimic the computing power of the human brain

An artistic rendering of a population of stochastic phase-change neurons which appears on the cover of Nature Nanotechnology, 3 August 2016. Credit: IBM Research

Scientists at IBM-Research Zürich and ETH Zürich claim they’ve made a huge leap in neuromimetic research, which ultimately aims to build a computing machine that closely mimics the human brain. The team imitated large populations of neurons for the very first time and used them to carry out complex computational tasks with remarkable efficiency.

Imitating the most complex biological entity in the universe — the human brain

In a confined space of merely two liters, the human brain is able to perform amazing computational feats requiring only 10 to 20 Watts of power. A supercomputer that would barely mimic the human brain’s computational power would be huge and would require diverting an entire river to keep it cool, were it designed using a classic von-Neumann architecture (the kind your laptop or smartphone uses). With such a great example of biological computing, we’re clearly doing awfully inefficient work right now.

The power of the human brain: the average human brain has about 100 billion neurons (or nerve cells) and many more neuroglia (or glial cells), which serve to support and protect the neurons.

Each neuron may be connected to up to 10,000 other neurons, passing signals to each other via as many as 1,000 trillion synaptic connections — equivalent, by some estimates, to a computer with a 1-trillion-bit-per-second processor. Estimates of the human brain’s memory capacity vary wildly, from 1 to 1,000 terabytes (for comparison, the 19 million volumes in the US Library of Congress represent about 10 terabytes of data).

Mimicking the computing power of the brain, the most complex computational ‘device’ in the universe, is a priority for computer science and artificial intelligence enthusiasts. But we’re just beginning to learn how the brain works and what lies within our deepest recesses – the challenges are numerous. But we’re tackling them one at a time.

It all starts with imitating biological neurons and their synapses. In a biological neuron, a thin lipid-bilayer membrane separates the electrical charge encased in the cell, allowing a membrane potential to be maintained. When the dendrites of the neuron are excited, this membrane potential is altered and the neuron, as a whole, “spikes” or “fires”. Emulating this sort of neural dynamics with conventional CMOS hardware, like subthreshold transistor circuits, is technically unfeasible.

A chip with large arrays of phase-change devices that store the state of artificial neuronal populations in their atomic configuration. In the photograph, individual devices are accessed by means of an array of probes to allow for precise characterization, modeling and interrogation. Credit: IBM Research

Instead, research nowadays follows the biomimetic route as closely as possible. For instance, the researchers from Zürich made a nanoscale electronic phase-change device from a chalcogenide alloy called germanium antimony telluride (Ge2Sb2Te5). This sort of material can quickly and reliably change between purely amorphous and purely crystalline states when subjected to an electrical or light stimulus. The same alloy is used in applications like Blu-ray discs to store digital information; in this particular instance, however, the electronic neurons are analog, just like the synapses and neurons in a biological brain.

Schematic of an artificial neuron that consists of the input (dendrites), the soma (which comprises the neuronal membrane and the spike event generation mechanism) and the output (axon). The dendrites may be connected to plastic synapses interfacing the neuron with other neurons in a network. The key computational element is the neuronal membrane, which stores the membrane potential in the phase configuration of a nanoscale phase-change device. Credit: Nature Nanotechnology.

This phase-change material emulates the biological lipid-bilayer membrane and enabled the researchers to devise artificial spiking neurons, each consisting of inputs (dendrites), the soma (which comprises the neuronal membrane and the spike-event generation mechanism) and the output (axon). These were assembled in 10×10 arrays. Five such arrays were connected to create a neural population of 500 artificial neurons — more than anyone has managed before.

“In a von-Neumann architecture, there is a physical separation between the processing unit and memory unit. This leads to significant inefficiency due to the need to shuttle data back and forth between the two units,” said Dr. Abu Sebastian, Research Staff Member Exploratory Memory and Cognitive Technologies, IBM Research – Zurich and co-author of the paper published in Nature Nanotechnology.

“This is particularly severe when the computation is more data-centric as in the case of cognitive computing. In neuromorphic computing, computation and memory are entwined. The neurons are connected to each other, and the strength of these connections, known as synapses, changes constantly as the brain learns. Due to the collocation of memory and processing units, neuromorphic computing could lead to significant power efficiency. Eventually, such neuromorphic computing technologies could be used to design powerful co-processors to function as accelerators for cognitive computing tasks,” Sebastian added for ZME Science.

Prof. David Wright, the head of the Nano-Engineering, Science and Technology Group at the University of Exeter, says that just one of these integrate-and-fire phase-change neurons can carry out tasks of surprising computational complexity.

“When applied to social media and search engine data, this leads to some remarkable possibilities, such as predicting the spread of infectious disease, trends in consumer spending and even the future state of the stock market,” said Wright, who was not involved in the present paper but is very familiar with the work.

When the GST alloy crystallizes to become conductive, it spikes. What’s amazing and different from previous work, however, is that this firing exhibits an inherently stochastic nature. Scientists use the term stochastic to refer to the randomness or noise biological neurons generate.
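A simple software caricature of such a device is a stochastic integrate-and-fire neuron: inputs accumulate like the progressive crystallization of the GST cell, and the firing threshold is jittered with noise to mimic the stochasticity described above. The Python sketch below uses made-up parameters and is only meant to illustrate the behavior, not the device physics.

```python
# A toy stochastic integrate-and-fire neuron in the spirit of the device above:
# inputs accumulate like the progressive crystallization of the GST cell, and
# the firing threshold is jittered with noise to mimic the stochasticity the
# authors exploit. All parameters are illustrative, not device physics.
import random

class StochasticNeuron:
    def __init__(self, threshold=1.0, noise=0.1):
        self.potential = 0.0
        self.threshold = threshold
        self.noise = noise

    def step(self, input_current):
        self.potential += input_current               # integrate
        jittered = self.threshold + random.gauss(0, self.noise)
        if self.potential >= jittered:                # fire and reset ("re-amorphize")
            self.potential = 0.0
            return 1
        return 0

random.seed(1)
neuron = StochasticNeuron()
print([neuron.step(0.3) for _ in range(20)])   # an irregular spike train from a constant input
```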

“Our main achievements are twofold. First, we have constructed an artificial integrate-and-fire neuron based on phase-change materials for the first time. Secondly, we have shown how such a neuron can be used for highly relevant computational tasks such as temporal correlation detection and population coding-based signal representation. For the latter application, we also exploited the inherent stochasticity or randomness of phase-change devices. With this we have developed a powerful experimental platform to realize several emerging neural information processing algorithms being developed within the framework of computational neuroscience,” Dr. Sebastian told ZME Science.

The same artificial neurons can sustain billions of switching cycles, signaling they can pass a key reliability test if they’re ever to become useful in the real world, such as embedded in Internet of Things devices or the next generation of parallel computing machines. Most importantly, the energy required for each neuron to operate is awfully low, minuscule even. One neuron update needs only five picojoules of energy to trigger it. In terms of power, it uses less than 120 microwatts or hundreds of thousands of times less than your typical light bulb.

“Populations of stochastic phase-change neurons, combined with other nanoscale computational elements such as artificial synapses, could be a key enabler for the creation of a new generation of extremely dense neuromorphic computing systems,” said Tomas Tuma, a co-author of the paper, in a statement.

The phase-change neurons are still far from capturing the full range of biological neuron traits, but the work is groundbreaking on many levels. Next on the agenda is to make these artificial neurons even more efficient by aggressively scaling down the size of the phase-change devices, Dr. Sebastian said.

By 2040 our computers will use more power than we can produce

The breathtaking speed at which our computers evolve is perfectly summarized in Moore’s Law — the observation that the number of transistors in an integrated circuit doubles roughly every two years. But this kind of exponential growth in computing power also means that our chipsets need more and more power to function — and by 2040 they will gobble up more electricity than the world can produce, scientists predict.

Image via Pixabay.

The projection was originally contained in a report released last year by the Semiconductor Industry Association (SIA) but it has only recently made headlines as the group issued its final assessment on the semiconductor industry. The basic idea is that as computer chips become more powerful and incorporate more transistors, they’ll require more power to function unless efficiency can be improved.

Energy which we may not have. The group predicts that unless we significantly change the design of our computers, by 2040 we won’t be able to power all of them. And there’s a limit to how much we can improve using current methods:

“Industry’s ability to follow Moore’s Law has led to smaller transistors but greater power density and associated thermal management issues,” the 2015 report explains.

“More transistors per chip mean more interconnects – leading-edge microprocessors can have several kilometres of total interconnect length. But as interconnects shrink they become more inefficient.”

So in the long run, SIA estimates that under current conditions “computing will not be sustainable by 2040, when the energy required for computing will exceed the estimated world’s energy production.”

Total energy used for computing.
Image credits SIA

This graph shows the problem. The power requirements of today’s systems (the “benchmark” line) are shown in orange, and total world energy production in yellow. The point where they meet, predicted to fall somewhere around 2030 or 2040, is where the problems start. Today, chip engineers stack ever-smaller transistors in three dimensions in order to improve performance and keep pace with Moore’s Law, but the SIA says that approach won’t work forever, given how much energy will be lost in future, progressively denser chips.
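The crossover logic in the graph can be reproduced with a back-of-the-envelope calculation like the one below. The starting values and growth rates are illustrative placeholders chosen only to show the shape of the argument; they are not the SIA’s actual figures.

```python
# A back-of-the-envelope version of the crossover shown in the SIA graph. The
# starting values and growth rates below are illustrative placeholders, not
# the SIA's figures.
computing_energy = 1.0          # arbitrary units in the starting year
world_production = 100.0
COMPUTE_GROWTH = 2 ** 0.5       # demand doubling roughly every two years
SUPPLY_GROWTH = 1.02            # supply growing ~2% per year

year = 2016
while computing_energy < world_production:
    computing_energy *= COMPUTE_GROWTH
    world_production *= SUPPLY_GROWTH
    year += 1
print("crossover around", year)   # lands around 2030 with these made-up numbers
```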

“Conventional approaches are running into physical limits. Reducing the ‘energy cost’ of managing data on-chip requires coordinated research in new materials, devices, and architectures,” the SIA states.

“This new technology and architecture needs to be several orders of magnitude more energy efficient than best current estimates for mainstream digital semiconductor technology if energy consumption is to be prevented from following an explosive growth curve.”

The roadmap report also warns that beyond 2020, it will become economically unviable to keep improving performance with simple scaling methods. Future improvements in computing power must come from areas not related to transistor count.

“That wall really started to crumble in 2005, and since that time we’ve been getting more transistors but they’re really not all that much better,” said computer engineer Thomas Conte from Georgia Tech for IEEE Spectrum.

“This isn’t saying this is the end of Moore’s Law. It’s stepping back and saying what really matters here – and what really matters here is computing.”

People pick up and use discarded USB drives they find almost half the time

Connectivity has never been more pervasive than today. In a span of just two hundred years western civilization has gone from the electric telegraph to satellite communication. Access to the internet, which just thirty years ago was limited to land-line dial-up connections, has become ubiquitous — only a screen swipe away. Portable data storage, such as USB drives, might not be quite as useful or sought after as they once were but they remain an undeniably handy method to carry your data around.

Image via Flickr user Custom USB.

So when you spot a USB drive lying abandoned on the floor or on the sidewalk, you’re faced with a very puzzling choice. Should you pick it up, or not? Surely a quick peek at the files it contains will help you return the drive to its rightful (and thankful) owner; it’s a civic duty, and who better than you to see it through to the end? Or maybe you’re more inclined to use it yourself — it’s finders keepers, after all! Moral conundrums aside, one thing is sure: USB drives discarded in public places won’t go unnoticed for long, a new study has found.

A University of Illinois Urbana-Champaign team left 297 USB memory sticks, dropped seemingly by accident, around the university grounds in places like parking lots, classrooms, cafeterias, libraries and hallways. Roughly 98% of them were removed from their original location, and almost half of them were snooped through.

The researchers wanted to know what people would do with the data on the drives after they found them, so they put HTML documents cunningly disguised with names such as “documents,” “math notes,” or “winter break pictures” on the USB sticks. If anyone tried to open these files on a computer connected to the internet, the researchers would receive a notification.

In the end, the team received 135 notifications of users opening the files, corresponding to 45% of the discarded drives. The actual number of accessed drives is most likely higher than this, as the researchers were only notified if the HTML files were opened (and even then, only if an internet connection was established at the time).
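The “phone home” trick is essentially a tracking pixel: the HTML file references a resource on a server the experimenters control, so opening the file while online triggers a logged request. The Python sketch below shows the general idea with a made-up hostname, port and file names; it is not the researchers’ actual instrumentation.

```python
# A minimal sketch of the "phone home" idea (not the researchers' actual
# instrumentation): an HTML file placed on the drive references an image on a
# server the experimenters control, so opening it with a live internet
# connection triggers a logged request. Hostname, port and file names are made up.
from http.server import BaseHTTPRequestHandler, HTTPServer

BEACON_HTML = """<html><body>
<h1>Winter break pictures</h1>
<!-- this tiny 'image' request is what notifies the logging server -->
<img src="http://logger.example.org:8000/beacon.gif?drive=42" width="1" height="1">
</body></html>"""

class BeaconLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # every hit means someone opened the file while online
        print("Drive opened! Request:", self.path, "from", self.client_address[0])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"GIF89a")   # placeholder payload

if __name__ == "__main__":
    with open("winter_break_pictures.html", "w") as f:
        f.write(BEACON_HTML)          # this file would go on the USB stick
    HTTPServer(("", 8000), BeaconLogger).serve_forever()
```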

The unknowing subjects were informed about the experiment when they opened the HTML files on the drive, and were invited to complete an anonymous survey to explain what had motivated them to pick up and use the drive in the first place. Only 43 percent of the participants chose to provide feedback. Most of them (68 percent) said that they were trying to return the drive to its owner. Some of the drives had been put on key rings with dummy house keys, and many of the participants listed this as one of the reasons behind their altruistic intentions. Another 18 percent reported that they were just curious to see what was in the files. Two very honest people admitted that they were simply planning on keeping the drive.

Ka-ching!
Image via Flickr user Custom USB.

Still, even those driven by good intentions snooped around the data, opening files like photos or texts on the drives. An argument could be made that they were trying to see what the owner looked like; but seeing as the drives had a “personal resume” file complete with contact details, I think it’s safe to say that they just let their curiosity get the better of them.

There’s nothing wrong with that. Curiosity can be a very powerful force; and when you combine it with the temptation of a USB drive containing data only you have access to, it can become downright irresistible. But it’s also a huge security risk.

More than two-thirds of respondents had taken no precautions before connecting the drive to their computer. “I trust my Macbook to be a good defence against viruses,” one respondent wrote. Others admitted opening the files on university computers to protect their own systems.

“This evidence is a reminder to the security community that less technical attacks remain a real-world threat and that we have yet to understand how to successfully defend against them,” the authors write. “We need to better understand the dynamics of social engineering attacks, develop better technical defences against them, and learn how to effectively teach end users about these risks.”

As amusing as these kinds of experiments may seem, the study shows that people aren’t cautious enough when it comes to opening unknown files on totally random drives.

“It’s easy to laugh at these attacks, but the scary thing is that they work,” said lead researcher Matt Tischer for Motherboard, “and that’s something that needs to be addressed.”

The findings, which are being presented next month at the 37th IEEE Symposium on Security and Privacy in California, also highlight just how unaware or unconcerned we can be about the potential security risks of opening unknown files on randomly found devices.

 

Researchers devise AI that allows machines to learn just as fast as humans

Computers can compute a lot faster than humans, but they’re pretty dumb when it comes to learning. In fact, machine learning itself is only beginning to take off, i.e. to show real results. A team from New York University and the Massachusetts Institute of Technology is now leveling the field, though. They’ve devised an algorithm that allows computers to recognize patterns a lot faster and with much less information at their disposal than previously possible.

Image shows 20 different people drawing a novel character (left) and the algorithm predicting how those images were drawn (right).

When you tag someone in photos on Facebook, you might have noticed that the social network can recognize faces and suggests who you should tag. That’s pretty creepy, but also effective. Impressive as it is, however, it took millions and millions of photos, trials and errors for Facebook’s DeepFace algorithm to take off. Humans, on the other hand, have no problem distinguishing faces. It’s hard-wired into us. See a face once and you’ll remember it a lifetime — that’s the level of pattern recognition and retrieval the researchers were after.

The framework the researchers presented in their paper is called Bayesian Program Learning (BPL). It can classify objects and generate concepts about them using a tiny amount of data, mirroring the way humans learn.

Humans and machines were given an image of a novel character (top) and asked to produce new versions. A machine generated the nine-character grid on the left. Image: Jose-Luis Olivares/MIT (figures courtesy of the researchers)

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” Ruslan Salakhutdinov, an assistant professor of computer science at the University of Toronto, said in a news release. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”

BPL was put to the test by being presented with 20 handwritten letters from 10 different alphabets. Humans also performed the test as a control. Both human and machine were asked to match each letter to the same character written by someone else. BPL scored 97%, about as well as the humans and far better than other algorithms. For comparison, a deep (convolutional) learning model scored about 77%.
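To get a feel for what this one-shot matching task looks like in code, here is a minimal, hypothetical sketch. It is emphatically not BPL — the real model infers a generative “program” of pen strokes for each character — but a naive pixel-distance matcher run on the same setup: one reference example per character, and a test image to classify.

import numpy as np

def one_shot_classify(test_image, reference_images):
    # Pick the reference character whose single example is closest to the test image.
    # A crude stand-in for BPL's far richer stroke-based generative model.
    distances = [np.linalg.norm(test_image - ref) for ref in reference_images]
    return int(np.argmin(distances))

def one_shot_accuracy(test_set, reference_images):
    # test_set: list of (image, true_class_index) pairs; one reference image per class.
    correct = sum(one_shot_classify(img, reference_images) == label
                  for img, label in test_set)
    return correct / len(test_set)

On 20-way tasks like the one described above, BPL gets roughly 97% of these decisions right; a raw pixel matcher like this sketch would fare far worse.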


[ALSO READ] Machine learning used to predict crimes before they happen

BPL also passed a visual form of the Turing test by drawing letters that most humans couldn’t distinguish from a human’s handwriting. The Turing test was first proposed by British scientist Alan Turing in 1950 as a way to test whether the output of an artificial intelligence or computer program can fool humans into believing it was made by a human.

“I think for the more creative tasks — where you ask somebody to draw something, or imagine something that they haven’t seen before, make something up — I don’t think that we have a better test,” Joshua Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines, told reporters on a conference call. “That’s partly why Turing proposed this. He wanted to test the more flexible, creative abilities of the human mind. [That’s] why people have long been drawn to some kind of Turing test.”

“We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts – even simple visual concepts such as handwritten characters – in ways that are hard to tell apart from humans,” Tenenbaum added in the release.

“What’s distinctive about the way our system looks at handwritten characters or the way a similar type of system looks at speech recognition or speech analysis is that it does see it, in a sense, as a sort of intentional action … When you see a handwritten character on the page what your mind does, we think, [and] what our program does, is imagine the plan of action that led to it, in a sense, see the movements, that led to the final result,” said Tenenbaum. “The idea that we could make some sort of computational model of theory of mind that could look at people’s actions and work backwards to figure out what were the most likely goals and plans that led to what [the subject] did, that’s actually an idea that is common with some of the applications that you’ve pointed to. … We’ve been able to study [that] in a much more simple and, therefore, more immediately practical and actionable way with the paper that we have here.”

The research was funded by the military to improve its ability to collect, analyze and act on image data. Like most major military applications, though, it will surely find civilian uses.


Hard to crack and easy to remember password? Try a poem

“Please enter a strong password” is now a ubiquitous greeting whenever we try to register online. Security experts advise we use long passwords, at least 12 characters in length, which should include numbers, symbols, capital letters, and lower-case letters. Most websites nowadays force you to enter a password under some or all of these conditions. Moreover, the password shouldn’t contain dictionary words or combinations of dictionary words. Common substitutions like “h0use” instead of “house” are also not recommended – these naive attempts will fool no automated hacking algorithm. So what we end up with is a very strong password, just like the website kindly asked (or forced) us to create. At the same time, it’s damn difficult, if not impossible, to remember. People end up endlessly hitting “recover password” or, far worse, writing down their passwords in emails or other notes on their computer, which can easily be recovered by any novice hacker.

A group of information security experts have found a workaround that makes passwords both strong and easy to remember: randomly generated poems. Marjan Ghazvininejad and Kevin Knight of the University of Southern California were, oddly enough, inspired by an internet comic written by the now famous and always witty Randall Munroe of xkcd.

horsey-troubadour

Credit: XKCD

The premise of the comic is that today’s passwords are easy for computers to guess and hard for humans to remember — which sounds ludicrous, but is spot on. Munroe proposed an alternative: four random common words, in this case “correct horse battery staple”, which sounds a lot more manageable. You could build a story around them, like Munroe did, or use a mnemonic technique like the memory palace to make things even easier. The catch, though, is that you don’t get to pick the words off the top of your head. Instead, you use a computer to generate a large random number, which is then broken into four pieces, with each piece acting as a code that corresponds to a word in the dictionary. The unintelligible password in the first panel of the comic contains about 28 bits of information; Munroe’s four-word passphrase packs roughly 44 bits, which is higher and thus better.
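Those bit counts are just entropy arithmetic: each word drawn uniformly at random from a dictionary of N words contributes log2(N) bits. A quick back-of-the-envelope sketch in Python (the word counts are the comic’s and the USC dictionary’s; everything else is illustrative):

from math import log2

def passphrase_entropy_bits(num_words, dictionary_size):
    # Each independently, uniformly chosen word contributes log2(dictionary_size) bits.
    return num_words * log2(dictionary_size)

print(passphrase_entropy_bits(4, 2048))   # ~44 bits: four words from a ~2,048-word list, as in the comic
print(log2(327868))                       # ~18.3 bits: the most a single uniformly chosen word from the
                                          # USC team's 327,868-word dictionary could contribute

Note that the rhyming poems described below don’t get the full 18.3 bits per word, since rhyme and meter constrain the choices; the total strength is set by the size of the random number the poem encodes.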

Ghazvininejad and Knight advanced this further. They analyzed several password generation techniques, including Munroe’s, and found that the safest, but also easiest to remember passwords are those made up of rhyming words. If you look back in history, this sounds like a no-brainer. In ancient times, society was mostly oral. A culture’s history, scientific knowledge and literature were all passed on to subsequent generations by word of mouth. Think of poems like Homer’s Odyssey or the Epic of Gilgamesh.

To create the poems, each of the 327,868 words in the dictionary is assigned a code. A random number is generated, broken into pieces, and then used to generate two rhyming phrases. Here are some examples:

“And many copycat supplies
offenders instrument surprise”

“The warnings nonetheless displayed
the legends undergo brocade”

“The homer ever celebrate
the Asia gator concentrate”

“Montero manages translates
the Dayton artist fluctuates”

“The market doesn’t escalate
or hiring purple tolerate”

“And Jenny licensed appetite
and civic fiscal oversight”

Some are pretty good, some are awful, but at least they’re hard to break. In their paper, the authors say these passwords could take up to 5 million years to crack. You can generate your own rhyming password using this online tool, but the authors caution that you shouldn’t actually use one of those, since a potential attacker can download the whole list. Instead, enter your email here and an automated program will send you a rhyming password, which is immediately deleted from the record thereafter.
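For illustration only, here is a heavily simplified sketch of the number-to-words idea behind such generators. It ignores the rhyme and meter constraints that Ghazvininejad and Knight’s method enforces, the wordlist path is hypothetical, and — as the authors themselves warn about published lists — nothing produced this way should be used as a real password.

import secrets

def random_passphrase(wordlist, num_words=4):
    # Draw each word with a cryptographically secure random choice,
    # i.e. map chunks of a large random number onto dictionary entries.
    # The USC method additionally constrains the draw so the result rhymes and scans.
    return " ".join(secrets.choice(wordlist) for _ in range(num_words))

# Hypothetical usage with a local dictionary file:
# words = open("wordlist.txt").read().split()
# print(random_passphrase(words))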

Today, however, you’ll find little use for this trick: most password policies require a number and/or special character, and these passwords are also a bit too long for many forms. And if the system ever became common, automated cracking tools would be tuned to guess poems much faster too. It’s a really interesting idea though, and a far more entertaining password than 2d1s0gus71ng!93.


Celebrating Ada Lovelace: the first computer programmer (XIXth century)

Image: MASHABLE COMPOSITE ALFRED EDWARD CHALON/SCIENCE & SOCIETY PICTURE LIBRARY


In 1843, at the tender age of 27, Ada Lovelace became the world’s first programmer, more than a hundred years before the first computer was actually built. To say she was ahead of her time is likely an understatement, and of course there’s much to learn from Lovelace’s story. Today, scientists all over the world celebrate her legacy by holding special events that seek to encourage women to pursue careers in STEM (science, technology, engineering and mathematics). While the gender discrepancy in STEM has somewhat leveled out, far too few women embark on this sort of career path. One can only imagine how society must have looked upon the likes of Lovelace, a full-blown mathematical genius of the XIXth century, who was prolific decades before Marie Curie – perhaps the most cited female science role model – was even born. Alas, she was ‘but’ a woman.

A genius ahead of her time

The daughter of none other than Lord Byron, the famous poet, Lovelace became acquainted, when she was only 18, with the inventor Charles Babbage, then 42. The two struck up a close friendship that would change Ada’s life forever. Babbage was working on a very early, calculator-like computer called the Difference Engine, which eventually grew into the Analytical Engine, a forerunner of the modern computer. In 1842, Ada translated a description of it by the Italian mathematician Luigi Menabrea. “As she understood [it] so well”, Babbage asked Ada to expand the article, which eventually grew into a 20,000-word work that included the first computer program: an algorithm that would teach the machine how to calculate a series of Bernoulli numbers.
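Lovelace laid out that calculation as a table of operations for the Analytical Engine; the snippet below is not a reconstruction of her program, just a modern, illustrative way to compute the same quantities in Python, using the Akiyama–Tanigawa algorithm.

from fractions import Fraction

def bernoulli(n):
    # Akiyama-Tanigawa algorithm; returns the n-th Bernoulli number
    # (with the convention B_1 = +1/2).
    a = [Fraction(0)] * (n + 1)
    for m in range(n + 1):
        a[m] = Fraction(1, m + 1)
        for j in range(m, 0, -1):
            a[j - 1] = j * (a[j - 1] - a[j])
    return a[0]

print([str(bernoulli(k)) for k in range(8)])
# ['1', '1/2', '1/6', '0', '-1/30', '0', '1/42', '0']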

“By understanding what the Analytical Engine could do — that it was far more than just a calculator — there’s no doubt whatsoever that Ada glimpsed the future of information technology,” said James Essinger, whose biography of Lovelace titled Ada’s Algorithm is published this week. According to Essinger, Ada expanded on Babbage’s ideas and envisioned the modern day computer. “What computers do, with literally billions of applications by billions of people, is exactly what Ada foresaw. In some ways, it is almost miraculously prophetic.”

Unfortunately, Babbage never created the machine and Ada was unable to test her theory before she died at the age of 36 of cancer.

Ada herself was an inspiration to many, including Michael Faraday. On 10 June 1840, Ada Lovelace sent a copy of her portrait to Faraday with a note saying:

‘Dear Mr. Faraday,

Mr Babbage tells me that you have expressed a wish to possess one of the engravings of me, by which I feel exceedingly flattered, & hope you will accept one that we still happen to have by us.

I am sorry that there is no proof left, to which I might have put my signature.

Believe me, yours very truly

Augusta Ada Lovelace

St James’ Square’

Faraday liked to collect images of people he met or was acquainted with, so this etching was gratefully received into his collection.

We can only imagine how Ada might have felt had she traveled to the future and seen what computers are capable of in our present day, and how ubiquitous they’ve become. Most people in the developed world nowadays carry a tiny computer in their pockets whose computing power is greater than the combined power of all the Apollo-era computers used to help man land on the Moon. Almost anyone today owns, or at least knows how to power up, a computer – more than four billion PCs, tablets and smartphones are currently in use. While most people have yet to realize how fortunate they are to live in such exciting times, we can only hope they will eventually become inspired. There’s so much we can learn, both men and women, from the brave and brilliant Ada Lovelace.

First ever optical chip to permanently store data developed

Materials scientists at Oxford University, collaborating with experts from Karlsruhe, Munster and Exeter, have developed the world’s first light-based memory banks that can store data permanently. The device is built from simple materials in use in CDs and DVDs today, and promises to dramatically improve the speed of modern computing.

A schematic of the device, showing its structure and the propagation of light through it.
Image courtesy of University of Oxford

The von Neumann bottleneck

Computing power has come a long way in a very short time, with the processors that brought Apollo 11 to the Moon some 50 years ago being outmatched by your average smartphone. But while processors have come so far, other areas of hardware have lagged behind, holding back our computers’ overall performance. The relatively slow flow of data between the processor and memory is the main limiting factor, as Professor Harish Bhaskaran, who led the research, explains.

“There’s no point using faster processors if the limiting factor is the shuttling of information to-and-from the memory — the so-called von-Neumann bottleneck,” he says. “But we think using light can significantly speed this up.”

However, simply basing the flow of information on light wouldn’t solve the problem.

Think of the processor as a busy downtown area, the memory banks as the residential areas, and the information bits as cars commuting between the two. Even if the two areas were connected by a highway with enough lanes and light-speed speed limits, the cars getting off it and driving through the neighborhoods at low speed to reach individual homes would still clog up the traffic. In the same way, the need to convert the information from photons back to electrical signals means the bottleneck isn’t removed, merely confined to that particular step.

What scientists need is to base the whole system — processing, flow and memory — on light. There have been previous attempts to create this kind of photonic memory storage, but they proved too volatile to be useful — they required power to retain data. To serve as computer disk drives, for example, they need to be able to store data indefinitely, with or without power.

An international team of researchers headed by Oxford University’s Department of Materials has now produced just that — the world’s first all-photonic nonvolatile memory chip.

A bright future for data storage

The device uses the phase-change material Ge2Sb2Te5 (GST) — the same as that used in rewritable CDs and DVDs — to store data. The material can assume an amorphous state (like glass) or a crystalline state (like a metal) when subjected to either an electrical or optical pulse.

To take advantage of this property, the team fused small sections of GST onto a silicon nitride ridge (known as a waveguide) that carries light to the chip, and showed that intense pulses sent through the waveguide can produce the desired changes in the material. An intense pulse causes the GST to momentarily melt and quickly cool, making it assume an amorphous structure; a slightly less intense pulse can put it into a crystalline state. This is how the data is stored.

Later, when the data is required, light of much lower intensity is sent through the waveguide. The two states of the GST dictate how much of that light can pass through the chip; the difference is read and interpreted as either a 1 or a 0.

“This is the first ever truly non-volatile integrated optical memory device to be created,” explains Clarendon Scholar and DPhil student Carlos Ríos, one of the two lead authors of the paper. “And we’ve achieved it using established materials that are known for their long-term data retention — GST remains in the state that it’s placed in for decades.”

And by sending different wavelengths of light through the waveguide simultaneously — a technique called wavelength multiplexing — they can use a single pulse to write and read data at the same time.

“In theory, that means we could read and write to thousands of bits at once, providing virtually unlimited bandwidth,” explains Professor Wolfram Pernice from the University of Munster.

The researchers also found that different intensities of strong pulses can accurately and repeatably create different mixtures of amorphous and crystalline structure within the GST. When lower-intensity pulses were sent through the waveguide to read the contents of the device, they were able to detect the subtle differences in transmitted light, allowing them to reliably write and read eight different levels of state composition — from entirely crystalline to completely amorphous. This multi-state capability could give memory units more than the usual binary choice of 0 and 1, allowing a single memory cell to store several bits, or even perform calculations itself instead of leaving them to the processor.
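In information terms, eight distinguishable levels means log2(8) = 3 bits per cell instead of one. A toy readout model — the transmission range and thresholds below are purely illustrative, not the paper’s calibration values — looks like this:

from math import log2

LEVELS = 8                          # distinct crystalline/amorphous mixtures reliably set and read
BITS_PER_CELL = int(log2(LEVELS))   # 3 bits per cell instead of 1

def read_level(transmission, t_min=0.2, t_max=0.9):
    # Map a measured transmission (illustrative range) onto the nearest
    # of 8 evenly spaced levels; real thresholds come from device calibration.
    frac = (transmission - t_min) / (t_max - t_min)
    return max(0, min(LEVELS - 1, round(frac * (LEVELS - 1))))

print(BITS_PER_CELL, read_level(0.55))   # -> 3 4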

“This is a completely new kind of functionality using proven existing materials,” explains Professor Bhaskaran. “These optical bits can be written with frequencies of up to one gigahertz and could provide huge bandwidths. This is the kind of ultra-fast data storage that modern computing needs.”

Now, the team is working on a number of projects that aim to make use of the new technology. They’re particularly interested in developing a new kind of electro-optical interconnect, which will allow the memory chips to directly interface with other components using light, rather than electrical signals.


Prism-like bar code pattern might help make computers that use light instead of wires

A breakthrough in optical communications has been reported by Stanford engineers, who used a complex algorithm to design a prism-like device that splits light into different colours (frequencies) and sends them off at right angles. This is the very first step towards building a circuit, and ultimately a computer, that uses light instead of wires to relay signals. Such machines could be far more compact and efficient than today’s.

No metal wires


This tiny slice of silicon etched with a bar-code pattern might one day lead to computers that use light instead of wires. Image: Vuckovic Lab

Professor Jelena Vuckovic and Alexander Piggott, a doctoral candidate in electrical engineering, initially developed an algorithm that automated the process of designing optical structures, enabling them to create previously unimaginable nanoscale structures for controlling light. Now, for the first time, the team has used this algorithm to build a pattern that resembles a bar code: alternating rows of silicon of varying thickness with air in between. The algorithm built the design automatically, with the researchers only having to specify the desired inputs and outputs for the system.

The alternating rows are essential, since this complex arrangement guides light in a predictable pattern based on the refractive indices it encounters along the way. Light doesn’t actually travel at ‘the speed of light’, not here on Earth at least; the speed of light you and I learned in school is measured in a vacuum. In reality, light is affected by the medium through which it travels, and both its speed and its angle are altered. When light passes from air into water, you see a dislocated image. That’s because the light that bounced back into your retina, allowing you to see the object you immersed in the water, has passed into a medium with a higher index of refraction. Air has an index of refraction of nearly 1 and water of about 1.3. Infrared light travels through silicon even more slowly: silicon has an index of refraction of 3.5.
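The slowdown follows directly from the refractive index: the speed in a medium is simply v = c/n. A quick illustrative calculation using the round figures quoted above:

C = 299_792_458  # speed of light in vacuum, m/s

for medium, n in [("air", 1.0), ("water", 1.3), ("silicon (infrared)", 3.5)]:
    print(f"{medium}: n = {n}, v = {C / n / 1e6:.0f} million m/s")

# air:                ~300 million m/s
# water:              ~231 million m/s
# silicon (infrared):  ~86 million m/s -- under a third of the vacuum speed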
Exploiting the different refractive indices of silicon and air, the team built an ‘optical link’ that acts like a prism, splitting two different wavelengths (colours) at right angles to the input, forming a T shape. Both 1300-nanometer and 1550-nanometer light — corresponding to the O-band and C-band wavelengths widely used in optical communications — were beamed at the device from above. When they hit the link, the two colours were sent off in opposite directions.

Let computers do the hard work

The most beautiful part is that the algorithm did all the hard work.

“We wanted to be able to let the software design the structure of a particular size given only the desired inputs and outputs for the device,” Vuckovic said.

Of course, it took a lot of work and hundreds of tweaks on the researchers’ part to make it work in the first place. What’s great about this approach, however, is that engineers can now build all kinds of patterns just by setting an input and an output in the software. If they need different kinds of light or other geometries, they simply leave it to the algorithm to calculate the optimal pattern, which takes only about 15 minutes to compute. Traditionally, optical devices were designed on a case-by-case basis, requiring a lot of trial and error. This saves hundreds of hours of hard work.

“For many years, nanophotonics researchers made structures using simple geometries and regular shapes,” Vuckovic said. “The structures you see produced by this algorithm are nothing like what anyone has done before.”

Besides the optical link, the team used the algorithm to design other devices as well, such as super-compact “Swiss cheese” structures that route light beams to different outputs based not on their colour but on their mode. These were used to split light modes, which is essential for transmitting information in optical communications.

Findings appeared in Scientific Reports.


First computer made out of carbon nanotubes spells silicon demise in electronics

Carbon_nanotube_working_computer

In an inspiring breakthrough, Stanford researchers have created the first ever working computer made entirely out of carbon nanotubes. The technology is still in its infancy, as the computer operates on just one bit of information and can only count to 32. Theoretically, however, it can be scaled up to perform billions of operations, given enough memory. With more refinement, computers such as these hint at a new digital age in which carbon nanotubes reign supreme and silicon models are obsolete.

The prototype was dubbed “Cedric” and was built as part of an extensive collaborative effort. Scientists have been trying to develop a working carbon nanotube (CNT) based machine for years, but past attempts have failed. The interest is huge because CNTs offer an array of intrinsic material properties far superior to silicon, the current industry standard in electronics. CNTs are basically rolled-up tubes of pure carbon only one atom thick. They’re fantastic electrical conductors and, thanks to their incredible thinness, they can serve as very efficient semiconductors, capable of switching the current flowing through them on and off very fast – a property indispensable for building working transistors.

Not that fast, not that stupid either

Wafer filled with CNT transistors.


A number of challenges have doomed previous attempts at building a working computer from CNTs. Transistors made out of CNTs have been around for some 15 years already; the biggest problem, however, is lining them up and connecting them. When CNTs are deposited on a wafer they aren’t perfectly aligned, and a machine built from them would produce errors. The Stanford scientists, however, used a method that builds chips with CNTs that are 99.5% aligned. What about the remaining 0.5%? Of course, the scientists didn’t ignore this offset – after all, it would have introduced significant errors into the resulting machine’s computations. Instead, they developed a neat algorithm that factors the misaligned tubes out of the computations.

Then, a second imperfection had to be overcome. Some CNTs have an inherent manufacturing flaw that makes them metallic instead of semiconducting, meaning they always conduct electricity – a big problem. The researchers employed an extremely simple yet ingenious trick to deal with it. Since they could switch the good, semiconducting CNTs on or off, they switched them off and pumped a lot of current into the circuit. The good CNTs weren’t touched at all, since they were switched off, while the flawed CNTs vaporized from all the energy passing through them – a perfect method for filtering out the faulty tubes. In the end, the team assembled their machine – Cedric – which works perfectly, with no errors, even if it can only count on two hands.

“People have been talking about a new era of carbon nanotube electronics, but there have been few demonstrations. Here is the proof,” said Prof Subhasish Mitra, lead author on the study.

“These are initial necessary steps in taking carbon nanotubes from the chemistry lab to a real environment,” said Supratik Guha, director of physical sciences for IBM’s Thomas J Watson Research Center.

Scaling Cedric to count billions

So far, Cedric is only a proof of concept, and it’s quite bulky even by silicon standards – its transistors are eight microns wide. Shrinking the transistors down will be the next obvious step, and it’s not a herculean task. Far from it – the technology necessary to scale Cedric up to, say, 64 bits with nanoscale transistors is already in place. It’s just a matter of trial and error before the scientists get there. In a matter of years, we may actually be able to type on a CNT computer. In fact, Cedric can already run any computing task – it just needs more memory!

“In terms of size, IBM has already demonstrated a nine-nanometre CNT transistor.

“And as for manufacturing, our design is compatible with current industry processes. We used the same tools as Intel, Samsung or whoever.

“So the billions of dollars invested into silicon has not been wasted, and can be applied for CNTs.”

Silicon, the most widely used material in electronics, has served mankind well. It’s cheap, durable and efficient; however, the material is reaching its absolute limits. It can only be shrunk so much, and for computing power to grow you need to cram as many transistors as possible onto the smallest possible surface. Many industry experts consider carbon nanotubes key to breaking past the silicon limit and keeping Moore’s law valid.