Tag Archives: computer science

Language skill may matter more for learning how to code than math

Programming is often perceived as a math-intensive field, which, let’s face it, can be intimidating for anyone contemplating it. A new study, however, suggests that language and problem-solving skills are more reliable predictors of how quickly a person learns a programming language than mathematical aptitude.

Language > math

Researchers at the University of Washington recruited 42 participants who joined a coding course through Codecademy. Each participant completed ten 45-minute lessons that introduced them to coding in Python.

Before they enrolled, the participants completed a series of tests designed to assess their math, working memory, problem solving, and second language learning abilities.

The test results were correlated with the course’s completion metrics, including how well a student understood the lessons and the rate at which they completed checkpoints.

By the end of the study, 36 participants had completed the course. By comparing the pre-course test scores against how participants fared in the Python course, the researchers could determine how much weight memory, problem-solving, language, and mathematical abilities carry in predicting successful learning.

While participants learned how to code in Python at different rates, the researchers found that problem solving and working memory were the most associated with how well students were able to program. Meanwhile, both general cognitive skills and language aptitude were associated with how quickly they learned to code.

In fact, aptitude for a second language accounted for almost 20% of the difference in how quickly the students learned Python, while math could account for just 2% of the variation.

This suggests that, despite folk wisdom, language skills matter more than numerical aptitude when it comes to learning how to code.
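To get a feel for what “accounting for 20% of the variance” means, here’s a toy calculation in Python on made-up numbers. This is only a sketch of the statistical idea, not the study’s data or analysis; the coefficients and sample size are invented for illustration.

```python
# Toy illustration of "variance explained" (R^2). The numbers below are
# invented for demonstration and are NOT the study's data.
import numpy as np

rng = np.random.default_rng(0)
n = 36  # roughly the number of participants who finished the course

language_aptitude = rng.normal(0, 1, n)
math_aptitude = rng.normal(0, 1, n)

# Pretend learning rate depends strongly on language skill, weakly on math.
learning_rate = 0.45 * language_aptitude + 0.14 * math_aptitude + rng.normal(0, 1, n)

def r_squared(x, y):
    """Fraction of the variance in y explained by a simple linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

print(f"language aptitude explains {r_squared(language_aptitude, learning_rate):.0%} of the variance")
print(f"math aptitude explains     {r_squared(math_aptitude, learning_rate):.0%} of the variance")
```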

What’s more, the researchers also measured the brain activity of the participants through electroencephalography (EEG) prior to the online learning tasks. The EEG recorded patterns of brain activity while the subjects were relaxed and doing nothing in particular.

Electrical activity at rest comes in various patterns, including rhythms known as beta oscillations. Researchers had previously shown that these oscillations in brain activity are linked to the ability to learn a second language. Participants who scored high on the Python course also tended to have higher levels of beta oscillations.

Taken together, these findings show that language abilities might be more important than mathematical skills when learning computer science.

Girls, who tend to have higher language skills than boys on average, typically avoid computer science because they may feel intimidated by stereotypes of a math-intensive environment.

However, this study suggests that girls ought to do just as well as boys at coding, if not better.

The findings were published in the journal Scientific Reports.

How neuro-symbolic AI might finally make machines reason like humans

If you want a machine to do something intelligent, you either have to program it or teach it to learn.

For decades, engineers have been programming machines to perform all sorts of tasks — from software that runs on your personal computer and smartphone to guidance control for space missions.

But although computers are generally much faster and more precise than the human brain at sequential tasks, such as adding numbers or calculating chess moves, such programs are very limited in their scope. Something as trivial as identifying a bicycle on a crowded pedestrian street, or picking up a hot cup of coffee from a desk and gently moving it to the mouth, can send a computer into convulsions, never mind conceptualizing or abstraction (such as designing a computer itself).

The gist is that humans were never programmed (not like a digital computer, at least) — humans have become intelligent through learning.

Intelligent machines

Do machine learning and deep learning ring a bell? They should. These are not merely buzz words — they’re techniques that have literally triggered a renaissance of artificial intelligence leading to phenomenal advances in self-driving cars, facial recognition, or real-time speech translations.

Although AI systems seem to have appeared out of nowhere in the previous decade, the first seeds were planted as early as 1956 by John McCarthy, Claude Shannon, Nathaniel Rochester, and Marvin Minsky at the Dartmouth Conference. Concepts like artificial neural networks, deep learning, and even neuro-symbolic AI are not new — scientists have been thinking about how to model computers after the human brain for a very long time. It’s only fairly recently that technology has provided the capacity to store huge amounts of data and the processing power to match, allowing AI systems to finally become practically useful.

But despite impressive advances, deep learning is still very far from replicating human intelligence. Sure, a machine capable of teaching itself to identify skin cancer better than doctors is great, don’t get me wrong, but there are also many flaws and limitations.

An amazing example of an image processing AI forming sentences that remarkably describe what’s going on in a picture. Credit: Karpathy and Li (2015).

One important limitation is that deep learning algorithms and other machine learning neural networks are too narrow.

When you have huge amounts of carefully curated data, you can achieve remarkable things with them, such as superhuman accuracy and speed. AIs have now beaten top human players at one high-profile game after another, from chess to Jeopardy! and StarCraft.

However, their utility breaks down once they’re prompted to adapt to a more general task. What’s more, these narrow-focused systems are prone to error. For instance, take a look at the following picture of a “Teddy Bear” — or at least in the interpretation of a sophisticated modern AI.

What’s furry and round? This pixel interpretation returns “Teddy Bear”, whereas any human can tell this is a gimmicky work of art.

Or this…

Lake, Ullman, Tenenbaum, Gershman (2016).

These are just a couple of examples that illustrate that today’s systems don’t truly understand what they’re looking at. And what’s more, artificial neural networks rely on enormous amounts of data in order to train them, which is a huge problem in the industry right now. At the rate at which computational demand is growing, there will come a time when even all the energy that hits the planet from the sun won’t be enough to satiate our computing machines. Even so, despite being fed millions of pictures of animals, a machine can still mistake a furry cup for a teddy bear.

Meanwhile, the human brain can recognize and label objects effortlessly and with minimal training — basically, we only need one picture. Show a child a picture of an elephant — the very first time they’ve ever seen one — and that child will instantly recognize a) that it is an animal and b) that it is an elephant the next time they come across one, whether in real life or in a picture.

This is why we need a middle ground — a broad AI that can multi-task and cover multiple domains, but which also can read data from a variety of sources (text, video, audio, etc), whether the data is structured or unstructured. Enter the world of neuro-symbolic AI.

David Cox is the head of the MIT-IBM Watson AI Lab, a collaboration between IBM and MIT that will invest $250 million over ten years to advance fundamental research in artificial intelligence. One important avenue of research is neuro-symbolic AI.

“A neuro-symbolic AI system combines neural networks/deep learning with ideas from symbolic AI. A neural network is a special kind of machine learning algorithm that maps from inputs (like an image of an apple) to outputs (like the label “apple”, in the case of a neural network that recognizes objects). Symbolic AI is different; for instance, it provides a way to express all the knowledge we have about apples: an apple has parts (a stem and a body), it has properties like its color, it has an origin (it comes from an apple tree), and so on,” Cox told ZME Science.

“Symbolic AI allows you to use logic to reason about entities and their properties and relationships. Neuro-symbolic systems combine these two kinds of AI, using neural networks to bridge from the messiness of the real world to the world of symbols, and the two kinds of AI in many ways complement each other’s strengths and weaknesses. I think that any meaningful step toward general AI will have to include symbols or symbol-like representations,” he added.

By combining the two approaches, you end up with a system that has neural pattern recognition allowing it to see, while the symbolic part allows the system to logically reason about symbols, objects, and the relationships between them. Taken together, neuro-symbolic AI goes beyond what current deep learning systems are capable of doing.

“One of the reasons why humans are able to work with so few examples of a new thing is that we are able to break down an object into its parts and properties and then to reason about them. Many of today’s neural networks try to go straight from inputs (e.g. images of elephants) to outputs (e.g. the label “elephant”), with a black box in between. We think it is important to step through an intermediate stage where we decompose the scene into a structured, symbolic representation of parts, properties, and relationships,” Cox told ZME Science.

Here are some examples of questions that are trivial to answer by a human child but which can be highly challenging for AI systems solely predicated on neural networks.

Credit: David Cox, Youtube.

Neural networks are trained to identify objects in a scene and interpret the natural language of various questions and answers (i.e. “What is the color of the sphere?”). The symbolic side recognizes concepts such as “objects,” “object attributes,” and “spatial relationship,” and uses this capability to answer questions about novel scenes that the AI had never encountered.

A neuro-symbolic system, therefore, applies logic and language processing to answer the question in a similar way to how a human would reason. An example of such a computer program is the neuro-symbolic concept learner (NS-CL), created at the MIT-IBM lab by a team led by Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds, and Machines.
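To make the division of labor concrete, here is a heavily simplified Python sketch of the idea. It is a toy, not NS-CL itself: the “neural” perception step is replaced with a hand-written stub that returns symbolic facts, and a tiny reasoner answers questions over those facts.

```python
# Minimal sketch of the neuro-symbolic idea (a toy, not NS-CL itself):
# a "perception" step turns raw input into symbols, and a symbolic
# reasoner answers questions over those symbols.

def perceive(image):
    """Stand-in for a neural network: in a real system this would be a
    learned model mapping pixels to a symbolic scene description."""
    return [
        {"shape": "sphere", "color": "red",  "x": 0.2},
        {"shape": "cube",   "color": "blue", "x": 0.7},
    ]

def answer(question, scene):
    """Tiny symbolic reasoner: runs hand-written filter/query steps
    against the symbolic scene description."""
    if question == "What is the color of the sphere?":
        spheres = [obj for obj in scene if obj["shape"] == "sphere"]
        return spheres[0]["color"] if spheres else "no sphere found"
    if question == "Is the cube left of the sphere?":
        cube = next(obj for obj in scene if obj["shape"] == "cube")
        sphere = next(obj for obj in scene if obj["shape"] == "sphere")
        return "yes" if cube["x"] < sphere["x"] else "no"
    return "question not understood"

scene = perceive(image=None)  # no real image in this toy example
print(answer("What is the color of the sphere?", scene))  # -> red
print(answer("Is the cube left of the sphere?", scene))   # -> no
```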

You could achieve a similar result to that of a neuro-symbolic system solely using neural networks, but the training data would have to be immense. Moreover, there’s always the risk that outlier cases, for which there is little or no training data, are answered poorly. In contrast, this hybrid approach boasts high data efficiency, in some instances requiring just 1% of the training data other methods need.

The next evolution in AI

Just like deep learning was waiting for data and computing to catch up with its ideas, so has symbolic AI been waiting for neural networks to mature. And now that the two complementary technologies are ready to be synced, the industry could be in for another disruption — and things are moving fast.

“We’ve got over 50 collaborative projects running with MIT, all tackling hard questions at the frontiers of AI. We think that neuro-symbolic AI methods are going to be applicable in many areas, including computer vision, robot control, cybersecurity, and a host of other areas. We have projects in all of these areas, and we’ll be excited to share them as they mature,” Cox said.

But not everyone is convinced that this is the fastest road to achieving general artificial intelligence.

“I think that symbolic style reasoning is definitely something that is important for AI to capture. But, many people (myself included) believe that human abilities with symbolic logic emerge as a result of training, and are not convinced that an explicitly hard-wiring in symbolic systems is the right approach. I am more inclined to think that we should try to design artificial neural networks (ANNs) that can learn how to do symbolic processing. The reason is this: it is hard to know what should be represented by a symbol, predicate, etc., and I think we have to be able to learn that, so hard-wiring the system in this way is maybe not a good idea,” Blake Richards, who is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University, told ZME Science.

Irina Rish, an Associate Professor in the Computer Science and Operations Research department at the Université de Montréal (UdeM), agrees that neuro-symbolic AI is worth pursuing, but believes that “growing” symbolic reasoning out of neural networks may be more effective in the long run.

“We all agree that deep learning in its current form has many limitations including the need for large datasets. However, this can be either viewed as criticism of deep learning or the plan for future expansion of today’s deep learning towards more capabilities,” Rish said.

Rish sees current limitations surrounding ANNs as a ‘to-do’ list rather than a hard ceiling. Their dependence on large datasets for training can be mitigated by meta- and transfer-learning, for instance. What’s more, the researcher argues that many assumptions in the community about how to model human learning are rather flawed, calling for more interdisciplinary research.

“A common argument about “babies learning from a few samples unlike deep networks” is fundamentally flawed since it is unfair to compare an artificial neural network trained from scratch (random initialization, some ad-hoc architectures) with a highly structured, far-from-randomly initialized neural networks in baby’s brains,  incorporating prior knowledge about the world, from millions of years of evolution in varying environments. Thus, more and more people in the deep learning community now believe that we must focus more on interdisciplinary research on the intersection of AI and other disciplines that have been studying brain and minds for centuries, including neuroscience, biology, cognitive psychology, philosophy, and related disciplines,” she said.

Rish points to exciting recent research that focuses on “developing next-generation network-communication based intelligent machines driven by the evolution of more complex behavior in networks of communicating units.” Rish believes that AI is naturally headed towards further automation of AI development, away from hard-coded models. In the future, AI systems will also be more bio-inspired and feature more dedicated hardware such as neuromorphic and quantum devices.

“The general trend in AI and in computing as a whole, towards further and further automation and replacing hard-coded approaches with automatically learned ones, seems to be the way to go,” she added.

For now, neuro-symbolic AI combines the best of both worlds in innovative ways by enabling systems to have both visual perception and logical reasoning. And, who knows, maybe this avenue of research might one day bring us closer to a form of intelligence that seems more like our own.

AI is outpacing Moore’s Law

In 1965, American engineer Gordon Moore predicted that the number of transistors packed onto a silicon chip would double roughly every two years. The prediction has held up remarkably well, letting software developers count on steadily faster hardware. However, the computational power behind artificial intelligence (AI) algorithms seems to have outpaced Moore’s Law.

Credit: Pixabay.

According to a new report produced by Stanford University, AI computational power is accelerating at a much higher rate than the development of processor chips.

“Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years,” the authors of the report wrote. “Post-2012, compute has been doubling every 3.4 months.”
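To appreciate what that shorter doubling period means, here is a quick back-of-the-envelope comparison over a two-year window, using only the doubling times quoted above:

```python
# Rough comparison of growth under the two doubling periods quoted above.
months = 24  # a two-year window

moores_law_factor = 2 ** (months / 24)    # doubling every ~2 years
post_2012_factor = 2 ** (months / 3.4)    # doubling every 3.4 months

print(f"Moore's Law growth over {months} months: ~{moores_law_factor:.0f}x")
print(f"Post-2012 AI compute growth:            ~{post_2012_factor:,.0f}x")
```

Under Moore’s Law, compute roughly doubles over those two years; at a 3.4-month doubling period, it grows by a factor of more than a hundred.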

Stanford’s AI Index 2019 annual report examined how AI algorithms have improved over time. In one chapter, the authors tracked the performance of image classification programs based on ImageNet, one of the most widely used training datasets for machine learning.

According to the authors, over a time span of 18 months, the time required to train a network for supervised image recognition fell from about three hours in late 2017 to about 88 seconds in July 2019.

This phenomenal jump in training time didn’t compromise accuracy. When the Stanford researchers analyzed the ResNet image classification model, they found the algorithm needed 13 days of training time to achieve 93% accuracy in 2017. The cost of training was estimated at $2,323. Only one year later, the same performance cost only $12.

The report also highlighted dramatic improvements in computer vision that can automatically recognize human actions and activities from videos.

These findings highlight the dramatic pace at which AI is advancing. They mean that, more often than not, a new algorithm running on an older computer will be better than an older algorithm on a newer computer.

Other key insights from the report include:

  • AI is the buzzword all over the news, but also in classrooms and labs across academia. In 2018, 21% of computer science Ph.D. candidates chose an AI field as their specialization.
  • From 1998 to 2018, peer-reviewed AI research grew by 300%.
  • In 2019, global private AI investment was over $70 billion, with startup investment $37 billion, mergers and acquisitions $34 billion, IPOs $5 billion, and minority stake $2 billion.
  • In terms of volume, China now publishes the most AI journal and conference papers, having surpassed Europe last year. It has been ahead of the US since 2006.
  • But that’s just volume; qualitatively speaking, researchers in North America lead the field — more than 40% of AI conference paper citations are attributed to authors from North America, and about 1 in 3 to authors from East Asia.
  • Singapore, Brazil, Australia, Canada, and India experienced the fastest growth in AI hiring from 2015 to 2019.
  • The vast majority of AI patents filed between 2014 and 2018 came from nations like the U.S. and Canada; overall, 94% of such patents were filed in wealthy nations.
  • Between 2010 and 2019, the total number of AI papers on arXiv increased 20 times.

A history of computer science, from punch cards to virtual reality

Do you know what the infrastructure of the future is? It’s not roads or rockets, it’s information. Everything from medicine to city planning to education will be based on data, which means we’ll become increasingly reliant on computers to make sense of it all — as if computers weren’t prevalent enough. But just because they’re ubiquitous doesn’t mean they’re any less amazing. We use them daily to keep in touch with friends, work on ambitious projects, or simply have fun. Yet, like every technological achievement (space flight, for instance), what we see today is the culmination of a long scientific journey that began a long time ago.

In the case of computer science, you could say the journey started when calculus was invented, but that would be stretching it. The dots really started to connect around the 18th century, when the first mentions of “digital” were made and the first tentative algorithms were published. The rest is binary history, and this beautiful infographic has the gist.


Infographic source: Computer Science Zone.


The Oscar-winning algorithm that makes smoke and explosions seem real

A computer-rendered explosion scene from the movie Super 8, which used the Wavelet Turbulence model.

The role of films is to immerse the viewer in another universe, where one forgets the day-to-day grind and becomes lost in a story. These stories are brought to life by actors, but the setting can be equally important, especially in a historical or science fiction movie.

This is where special effects come in, and these have certainly gone a long way.

Five decades ago, pyrotechnicians had to set up all sorts of explosives, some more dangerous than others, to achieve the desired effect. Not all of them looked that good or realistic. Nowadays, much of it is computer generated, but it took a whole community of scientists and engineers to take CGI to where it is today.

One of the most significant recent contributions to computer-generated special effects is the Wavelet Turbulence algorithm, which makes it easier for artists to control the final look of smoke clouds and fiery flames on screen. You’ll recognize the work instantly if you’ve seen movies like Avatar, Super 8 or the Superman reboot Man of Steel.

The algorithm blends art with physics and computer science. Specifically, it makes fluid dynamics simulations look more realistic by adding extra small-scale detail and handling phenomena like swirls and vortices much better.
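The broad idea can be sketched in a few lines of Python: run the simulation on a coarse grid, then inject synthetic small-scale detail when upsampling, concentrated where the flow is already energetic. The snippet below illustrates only that concept; it is not the authors’ actual algorithm, which uses wavelet noise and an energy estimate derived from the coarse simulation.

```python
# Toy illustration of the "coarse simulation + synthetic detail" concept
# behind wavelet-style turbulence synthesis. NOT the published algorithm.
import numpy as np

rng = np.random.default_rng(42)

coarse = rng.random((16, 16))            # stand-in for a coarse smoke density field
fine = np.kron(coarse, np.ones((4, 4)))  # naive 4x upsampling (blocky, no new detail)

# Synthesize high-frequency "detail" noise and scale it by the local gradient
# magnitude of the upsampled field, so detail appears where the flow is
# already energetic (a crude proxy for turbulent energy).
gy, gx = np.gradient(fine)
energy = np.sqrt(gx**2 + gy**2)
detail = rng.normal(0.0, 1.0, fine.shape)

enhanced = fine + 0.1 * energy * detail
print(enhanced.shape)  # (64, 64): coarse field with added small-scale variation
```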

Where an engineer once had to wait hours or even days for a simulation to render, the new algorithm cuts the process down to a matter of minutes. It’s groundbreaking work, and it was quickly recognized by the film industry: for their contribution, Theodore Kim, Nils Thuerey, Markus Gross and Doug James received the Academy Award for Technical Achievement in 2012.

The kind of complexity the algorithm allows.

“While this work is highly technical, its ultimate goal is an aesthetic one,” said Kim. “When many people think of math and science, the perception is often that it leaves no room for creativity or intuition. However, both played a tremendous role in the design and implementation of this software and in turn it aids others in their own creative work.”

Check out how the algorithm works, as explained by Kim, then see the Wavelet Turbulence paper and other video examples on Kim’s Cornell website.


What sorting algorithms look and sound like


Credit: Timo Bingmann

Sorting algorithms are fundamental to computer science for the same reason sorting is important in your day-to-day life: it’s a lot easier to find things when they’re in order, which saves time and energy. Depending on how you need an array sorted, there are many algorithms to choose from. Timo Bingmann made a program called Sound of Sorting which “both visualizes the algorithms internals and their operations, and generates sound effects from the values being compared.”

You can get a glimpse of how it works in this video produced by Bingmann.

The white bars represent the value at each array position along the x-axis. When an array item is set, its bar turns red. A swap shows up as two bars turning red at once, indicating that their values are being exchanged.

The sound’s frequency is calculated for each set of compared values. The sound wave is triangular and modulated to sound like an “8-bit game”, which is either very fitting or excruciatingly annoying.
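As a rough illustration of the idea (not Bingmann’s actual code), you could instrument a simple sort in Python to record every comparison and map the compared values to tone frequencies; the frequency range below is chosen arbitrarily for the sketch.

```python
# Toy sketch of sonifying a sort: record every comparison and map the
# compared values to frequencies. Not the Sound of Sorting source code.
import random

def frequency(value, n, low=120.0, high=1200.0):
    """Map an array value in [0, n) to a frequency in Hz (linear mapping)."""
    return low + (high - low) * value / max(n - 1, 1)

def bubble_sort_with_sound(arr):
    events = []  # (freq_a, freq_b) for each comparison
    n = len(arr)
    for i in range(n):
        for j in range(n - 1 - i):
            events.append((frequency(arr[j], n), frequency(arr[j + 1], n)))
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap
    return arr, events

data = list(range(20))
random.shuffle(data)
sorted_data, events = bubble_sort_with_sound(data)
print(f"{len(events)} comparisons, first few tones: {events[:3]}")
```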

You can download the software program here and then make your own sounds and videos.


Computer science breakthrough in random number generation

Random numbers are essential for cryptography and computer security. The problem is that algorithms don’t really generate truly random numbers; given the seed value, the output is in principle predictable, though some generators are far harder to predict than others. Academics at the University of Texas made a breakthrough in the field by showing how to generate high-quality random numbers by combining two low-quality sources.


Credit: Flickr

The work is still theoretical, but the two researchers, David Zuckerman, a computer science professor, and Eshan Chattopadhyay, a graduate student, say it could significantly improve cryptography, scientific polling, and even climate models. Already, some randomness extractors that create sequences of many more random numbers have been made using the University of Texas algorithms.

“We show that if you have two low-quality random sources—lower quality sources are much easier to come by—two sources that are independent and have no correlations between them, you can combine them in a way to produce a high-quality random number,” Zuckerman said. “People have been trying to do this for quite some time. Previous methods required the low-quality sources to be not that low, but more moderately high quality. We improved it dramatically,” he added.
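The new construction itself is far too involved for a blog post, but the flavor of “combining two weak sources” can be shown with a much older textbook trick, the inner-product two-source extractor. To be clear, this is not the University of Texas construction, and it requires much stronger assumptions about the sources; the biased coin flips below are just a stand-in for illustration.

```python
# A classic, much simpler two-source extractor (inner product mod 2),
# shown only to illustrate combining two imperfect sources. This is NOT
# the new University of Texas construction, which tolerates far weaker sources.
import random

def inner_product_extractor(x_bits, y_bits):
    """Return one output bit: the inner product of the two bit strings mod 2."""
    assert len(x_bits) == len(y_bits)
    return sum(a & b for a, b in zip(x_bits, y_bits)) % 2

def weak_source(n, bias=0.7):
    """An imperfect source for the demo: independent but biased coin flips."""
    return [1 if random.random() < bias else 0 for _ in range(n)]

x = weak_source(64)
y = weak_source(64)
print("extracted bit:", inner_product_extractor(x, y))
```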

Because computers just follow instructions, and random numbers are the opposite of following instructions, computer-generated random numbers are theoretically predictable, although some are easier to predict than others. Comic by XKCD

“You expect to see advances in steps, usually several intermediate phases,” Zuckerman said. “We sort of made several advances at once. That’s why people are excited.”

The new algorithm, detailed in a paper posted to ECCC (the Electronic Colloquium on Computational Complexity), should make hacking a lot more difficult, since higher-quality random numbers can be generated with less computational power.

“This is a problem I’ve come back to over and over again for more than 20 years,” said Zuckerman. “I’m thrilled to have solved it.”

 


Book review: ‘Great Principles of Computing’


 

Great Principles of Computing
By Peter J. Denning, Craig H. Martell
MIT Press, 320pp | Buy on Amazon
Is computer science really a science, or just a tool for analyzing data and crunching numbers? During its brief history, computer science has had a lot to endure, and it is only recently being appreciated for its potential as an agent of discovery and thought. At first, computing looked like merely the applied technology of math, electrical engineering, or science, depending on the observer. In fact, in its youth, computing was regarded as the mechanical steps one follows to solve a mathematical function, while “computers” were the people who did the computation. What you and I call a computer today is short for “automatic computer,” but along the way the distinction blurred.

Ultimately, computer science is a science of information processes, in many respects no different from biology. At least not if we heed the words of Nobel laureate David Baltimore or cognitive scientist Douglas Hofstadter, who proposed that biology had become an information science and that DNA translation is a natural information process. Following this line of reasoning, computer science studies both natural and artificial information processes. Like all sciences, it follows that computer science is guided by a framework of great principles – something that Denning and Martell set out to lay bare in their book, “Great Principles of Computing.”

Denning and Martell divide the great principles of computing into six categories: communication, computation, coordination, recollection, evaluation, and design. Each provides a perspective on computing, but they’re not mutually exclusive; the internet, for instance, can be seen at once as a communication system, a coordination system, or a storage system. In each chapter, the authors lay out what each principle means and how it relates to different areas: information, machines, programming, computation, memory, parallelism, queueing, and design. Of course, principles are fairly static, so their relations to one another are also discussed at length.

The great-principles framework reveals a rich set of rules on which all computation is based. These principles interact with the domains of the physical, life and social sciences, as well as with computing technology itself. As such, professionals in science and engineering might find this book particularly useful, but that’s not to say laymen won’t have a lot to learn. While the concepts and principles outlined in the book are thoroughly explained, be warned that this is a technical book. With that caveat out of the way, if you’re not afraid of a lot of schematics and a few equations here and there, “Great Principles of Computing” is definitely a winner.

 


Swarm of 1,000 robots self-assemble in complex shapes


All hail to the swarm! Photo: MICHAEL RUBINSTEIN/HARVARD UNIVERSITY

In a robotics breakthrough, researchers have programmed a swarm of a whopping 1,024 members that can assemble into programmable 2-D shapes. The demonstration might provide insights into how natural self-assembling swarms operate, like ants that join up to form bridges for the good of the colony. Future efforts might be upgraded to support 3-D shapes. Some researchers even envision tools made out of self-assembling robots (think Transformers!), though space applications seem like the most practical field for them.

My life for the swarm!

Each Kilobot, as they’ve been named, is the size of a coin, costs $20, and is programmed to follow a strict set of rules for assembly. To communicate with other members of the swarm, the robots send out and read infrared signals, but the transmission is limited to neighboring bots only – no bot is capable of seeing or understanding the greater whole or purpose. To assemble the swarm into geometric shapes, like a star or the letter “K”, the researchers assigned four of the bots to act as ‘seeds’. These are placed in a cluster next to the swarm, and the robots on the far side of the pack begin to inch around the edge of the formation towards the seeds, propelled by motors that make them vibrate like ringing mobile phones.


Thus, the seeds act as reference points, helping the other bots coordinate themselves around them. As you might have guessed, the process can be slow: it took 12 hours for the 1,000-strong swarm to assemble into a K-shaped formation. There are also slower bots that cause traffic jams, and the shapes tend to look warped owing to the Kilobots’ imprecise tracking and their tendency to bump against one another before stopping.
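One way a robot can use the seeds as reference points is to estimate its own position from distances to already-localized neighbors, which boils down to a small least-squares problem. The Python sketch below illustrates that kind of localization step only; it is an assumption-laden toy, not the Harvard team’s actual Kilobot firmware.

```python
# Toy sketch of a robot estimating its (x, y) position from distances to
# already-localized neighbors (initially the seed robots). An illustration,
# not the actual Kilobot controller.
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position estimate from >=3 anchor positions and measured
    distances, linearized by subtracting the first circle equation."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

seeds = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
true_pos = np.array([0.6, 0.4])
dists = [np.linalg.norm(true_pos - np.array(s)) for s in seeds]
print(trilaterate(seeds, dists))  # should be close to [0.6, 0.4]
```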

The demonstration itself remains powerful. This is the first time something of this scale has been achieved, and scientists are already thinking about how to use swarms of tiny bots such as the Kilobots to study natural self-assembling systems, like ants that join to form bridges and other structures. Other applications might seem futuristic, but they would be no less practical if the bots could be made cheap and durable. Think of thousands of tiny bots, even the size of a grain of sand, that assemble into a wrench, only to become some other tool when the occasion calls for it. That’s real-life Transformers. The concept isn’t new; a while ago I reported on similar developments at MIT, though their snake-like bots were much bigger.

Check out the video below for a complete demonstration:

A series of snapshots in OR gate of swarm balls (credit: Yukio-Pegio Gunji, Yuta Nishiyama, Andrew Adamatzky)

Scientists devise computer using swarms of soldier crabs

Computing with unconventional methods found in nature has become an important branch of computer science, one that might help scientists construct more robust and reliable devices. For instance, the ability of biological systems to assemble and grow on their own enables much higher interconnection densities, and swarm intelligence algorithms mimic ant colonies that find optimal paths to food sources. But it’s one thing to take inspiration from nature to build computing devices, and another to use nature itself as the main computing component.

A series of snapshots of an OR gate made of swarm “balls” (credit: Yukio-Pegio Gunji, Yuta Nishiyama, Andrew Adamatzky)

Previously, scientific groups have used all sorts of natural computation mechanisms, like fluids or even DNA and bacteria. Now, a team of computer scientists led by Yukio-Pegio Gunji from Kobe University in Japan has successfully created a computer that exploits the swarming behaviour of soldier crabs. Yup, that’s not something you hear every day.

For their eccentric choice of computing agent, the researchers drew inspiration from the billiard-ball computer model, a classic reversible mechanical computer used mainly for didactic purposes, first proposed in 1982 by Edward Fredkin and Tommaso Toffoli.

The billiard-ball computer model can act as a Boolean circuit, only instead of wires it uses the paths along which the balls travel: information is encoded by the presence or absence of a ball on a path (1 and 0), and logic gates (AND/OR/NOT) are simulated by collisions of balls at points where their paths cross. Now, instead of billiard balls, think crabs!
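As a toy model of that kind of collision-based logic (the billiard-ball abstraction, not a simulation of the actual crab swarms), you can move two “balls” along crossing paths in Python and read the output from whether anything reaches the far side of the arena; the geometry and speeds below are made up for the sketch.

```python
# Toy simulation of a collision-based OR gate in the spirit of the
# billiard-ball model. A "ball" is a point moving on a 2D plane; the
# presence of a ball on a path encodes 1, its absence encodes 0.
import numpy as np

def or_gate(input_a: bool, input_b: bool) -> bool:
    balls = []
    if input_a:
        balls.append({"pos": np.array([0.0, 0.0]), "vel": np.array([1.0, 1.0])})
    if input_b:
        balls.append({"pos": np.array([10.0, 0.0]), "vel": np.array([-1.0, 1.0])})

    for _ in range(20):  # advance the simulation step by step
        for ball in balls:
            ball["pos"] = ball["pos"] + ball["vel"]
        if len(balls) == 2 and np.linalg.norm(balls[0]["pos"] - balls[1]["pos"]) < 1e-9:
            # Collision: in the swarm version, the two groups merge and
            # keep moving together toward the output channel.
            balls = [{"pos": balls[0]["pos"], "vel": np.array([0.0, 1.0])}]
    # The output reads 1 if any ball reached the far side of the arena.
    return any(ball["pos"][1] >= 10.0 for ball in balls)

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} -> OR={int(or_gate(bool(a), bool(b)))}")
```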

“These creatures seem to be uniquely suited for this form of information processing. They live under the sand in tidal lagoons and emerge at low tide in swarms of hundreds of thousands.

What’s interesting about the crabs is that they appear to demonstrate two distinct forms of behaviour. When in the middle of a swarm, they simply follow whoever is nearby. But when they find themselves on the edge of a swarm, they change.

Suddenly, they become aggressive leaders and charge off into the watery distance with their swarm in tow, until by some accident of turbulence they find themselves inside the swarm again.

This turns out to be hugely robust behaviour that can be easily controlled. When placed next to a wall, a leader will always follow the wall in a direction that can be controlled by shadowing the swarm from above to mimic the presence of the predatory birds that eat the crabs.” (MIT tech report)

Thus, the researchers were able to construct a computer that uses soldier crabs to transmit information. They built a decent OR gate using the crabs; their AND gates, however, were a lot less reliable. A more crab-friendly environment would have yielded better results, the researchers believe.

The findings were published in the journal Emerging Technologies.

GPU upgrade makes Jaguar the fastest computer in the world again

No, not the sports car, nor the predatory feline, but Oak Ridge National Laboratory’s Jaguar – a supercomputer of immense computing capability set to top the ranks of the fastest computers in the world, for the second time, after a GPU (graphics processing unit) upgrade. Capable of simulating physical systems with heretofore unfeasible speed and accuracy – from the explosions of stars to the building blocks of matter – the upgraded Jaguar will be capable of reaching an incredible peak speed of 20 petaflops (20,000 trillion computations per second). The speedy computer will be renamed “Titan” after its overhaul.


This is the second time the ORNL supercomputer will top the TOP500 list of the world’s supercomputers, after it was surpassed by Japan’s K Computer and China’s Tianhe-1A last year. The title will be earned as a result of a deal inked between Cray Inc., the manufacturer of the XT5-HE system at the heart of Jaguar, and ORNL, which will overhaul the machine with thousands of graphics processors from NVIDIA as well as chips from Advanced Micro Devices.

“All areas of science can benefit from this substantial increase in computing power, opening the doors for new discoveries that so far have been out of reach,” said associate lab director for computing Jeff Nichols.

“Titan will be used for a variety of important research projects, including the development of more commercially viable biofuels, cleaner burning engines, safer nuclear energy and more efficient solar power.”

The multi-year contract, valued at more than $97 million, will make Titan at least twice as fast and three times as energy efficient as today’s fastest supercomputer, which is located in Japan.
