Tag Archives: DeepMind

AI makes stunning protein folding breakthrough — but not all researchers are convinced

Within every biological body, there are thousands of proteins, each twisted and folded into a unique shape. The formation of these shapes is crucial to their function, and researchers have struggled for decades to predict exactly how this folding will take place.

Now, AlphaFold, an AI built by DeepMind (the same lab whose systems mastered the games of chess and Go), seems to have cracked this problem, essentially paving the way for a new revolution in biology. But not everyone’s buying it.

An AlphaFold prediction against the real thing.

What the big deal is

Proteins are essential to life, supporting practically all its functions, a DeepMind blog post reads. The Google-owned British artificial intelligence (AI) research lab became famous in recent years as its algorithms became the best chess players on the planet and even surpassed humans at Go, a feat once thought impossible. After toying with a few more games, the DeepMind team set its eyes on a real-life task: protein folding.

In 2018, the team announced that the first version of AlphaFold had become quite good at predicting the 3D shapes of proteins, surpassing all other algorithms. Now, two years later, its successor, AlphaFold 2, seems to have refined the approach even further.

In a global competition called Critical Assessment of protein Structure Prediction, or CASP, AlphaFold 2 and other systems are given the amino acid strings for proteins and asked to predict their shape. The competition organizers already know the actual shape of the protein, but of course, they keep it secret. Then, the prediction is compared to real-world results. DeepMind CEO Demis Hassabis calls this the “Olympics of protein folding” in a video.

AlphaFold nailed it. Not all of its predictions were spot on, but all were very close; it was the closest any entry has come to perfection since CASP kicked off.

“AlphaFold’s astonishingly accurate models have allowed us to solve a protein structure we were stuck on for close to a decade,” Andrei Lupas, Director of the Max Planck Institute for Developmental Biology and a CASP assessor, said in the DeepMind blog.

CASP uses the “Global Distance Test (GDT)” metric, assessing accuracy from 0 to 100. AlphaFold 2 achieved a median score of 92.4 across all targets, which translates to an average error of approximately 1.6 Angstroms, or about the width of an atom.
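To make the metric concrete, here is a simplified sketch of a GDT-style score in Python. It assumes the predicted and experimental structures have already been superimposed, and simply averages, over distance cutoffs of 1, 2, 4, and 8 Angstroms, the fraction of residues falling within each cutoff. The real GDT calculation also searches over superpositions, which this toy version skips, and the distances below are invented for illustration.

```python
def gdt_ts(distances):
    """Simplified GDT_TS: for each cutoff, compute the fraction of
    residues whose predicted position lies within that cutoff of the
    experimental position, then average the four fractions (x100)."""
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    fractions = [
        sum(d <= c for d in distances) / len(distances) for c in cutoffs
    ]
    return 100.0 * sum(fractions) / len(cutoffs)

# Hypothetical per-residue errors (in Angstroms) for one prediction:
distances = [0.5, 0.9, 1.5, 2.2, 3.8, 7.5]
print(round(gdt_ts(distances), 1))  # -> 66.7
```

A perfect prediction (every residue within 1 Angstrom) scores 100; scores in the 90s, like AlphaFold 2’s median of 92.4, mean nearly every residue sits within the tightest cutoffs.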

Improvements have been slow in the protein folding competition. Image credits: DeepMind.

It’s not perfect. Even an error of one Angstrom can be too big, rendering a predicted structure useless or, worse, misleading. But the fact that the predictions are so close suggests that a solution is in sight. The problem had seemed unsolvable for so long that researchers were understandably excited.

Professor John Moult, who co-founded CASP, said:

“We have been stuck on this one problem – how do proteins fold up – for nearly 50 years. To see DeepMind produce a solution for this, having worked personally on this problem for so long and after so many stops and starts, wondering if we’d ever get there, is a very special moment.”

Why protein folding is so important

It can take years for a research team to identify the shape of individual proteins — and these shapes are crucial for biological research and drug development.

A protein’s shape is closely linked to the way it works. If you understand its shape, you also have a pretty good idea of how it works.

Having a method to predict protein shapes rapidly, without years of painstaking work, could usher in a revolution in biology. It’s not just the development of new drugs and treatments, though that would be motivation enough: developing enzymes that could break down plastic, producing biofuels, and even designing vaccines could all be dramatically sped up by protein structure prediction algorithms.

Essentially, protein folding has become a bottleneck for biological research, and it’s exactly the kind of field where AI could make a big difference, unlocking new possibilities that seemed impossible even a few years ago.

At a more foundational level, mastering protein folding can even get us closer to understanding the biological building blocks that make up the world. Lupas commented that:

“AlphaFold’s astonishingly accurate models have allowed us to solve a protein structure we were stuck on for close to a decade, relaunching our effort to understand how signals are transmitted across cell membranes.”

Why not everyone is convinced

The announcement of DeepMind’s achievement sent ripples through the science world, but not everyone was thrilled. A handful of researchers pointed out that success in the controlled CASP setting doesn’t necessarily mean the system will work in real life, where the possibilities are far more varied.

Speaking to Business Insider, Max Little, an associate professor of computer science at the University of Birmingham, expressed skepticism about the real-world applications. Professor Michael Thompson, an expert in structural biology at the University of California, took to Twitter to criticize what he sees as unwarranted hype, making the important point that the team at DeepMind hasn’t shared its code and hasn’t even published a scientific paper with the results. Thompson did say “the advance in prediction is impressive,” adding: “However, making a big step forward is not the same as ‘solving’ a decades-old problem in biology and chemical physics.”

Lior Pachter, a professor of computational biology at the California Institute of Technology, echoed these feelings. It’s an important step, he argued, but protein folding is not solved by any means.

Just how big this achievement is remains to be seen, but it’s an important one no matter how you look at it. Whether it’s a stepping stone or a true breakthrough isn’t entirely clear at the moment, but researchers will surely clear this up as quickly as possible.

In the meantime, if you want to have a deeper look at how AlphaFold was born and developed, here’s a video that’s bound to make you feel good:

DeepMind can now learn how to use its memories, apply knowledge to new tasks

DeepMind is one step closer to emulating the human mind. Google engineers claim their artificial neural network can now store and use data similarly to how humans access memory.

But we’re one step closer to giving it one.
Image credits Pierre-Olivier Carles / Flickr.

The AI developed by Alphabet, Google’s parent company, just received a new and powerful update. By pairing up the neural network’s ability to learn with the huge data stores of conventional computers, the programmers have created the first Differentiable Neural Computer, or DNC — allowing DeepMind to navigate and learn from the data on its own.

This brings AIs one step closer to working like a human brain: the neural network simulates the brain’s processing patterns, while external data banks supply vast amounts of information, just like our memory.

“These models… can learn from examples like neural networks, but they can also store complex data like computers,” write DeepMind researchers Alexander Graves and Greg Wayne in a blog post.

Traditional neural networks are really good at learning to do one task (sorting cucumbers, for example). But they all share a drawback when it comes to learning something new: in a failure aptly called “catastrophic forgetting”, such a network has to erase and re-write everything it knows before it can learn something else.
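What catastrophic forgetting looks like can be shown with a toy perceptron, which is far simpler than DeepMind’s actual networks; the two conflicting “tasks” and all numbers here are invented for illustration. Training on task B overwrites the very weights that encoded task A:

```python
# Task A: "is x positive?"  Task B: the opposite rule, on the same inputs.
inputs = [-2.0, -1.0, 1.0, 2.0]

def label_a(x):
    return 1 if x > 0 else 0

def label_b(x):
    return 1 if x < 0 else 0

def train(w, b, labeller, epochs=10, lr=0.1):
    """Classic perceptron updates: nudge weights toward the labels."""
    for _ in range(epochs):
        for x in inputs:
            pred = 1 if w * x + b > 0 else 0
            err = labeller(x) - pred
            w += lr * err * x
            b += lr * err
    return w, b

def accuracy(w, b, labeller):
    return sum((1 if w * x + b > 0 else 0) == labeller(x)
               for x in inputs) / len(inputs)

w, b = train(0.0, 0.0, label_a)
print("task A accuracy after learning A:", accuracy(w, b, label_a))  # 1.0
w, b = train(w, b, label_b)   # same weights, re-used for the new task
print("task A accuracy after learning B:", accuracy(w, b, label_a))  # 0.0
```

Because the model has only one set of weights and no separate memory, learning the new rule necessarily destroys the old one; the DNC’s external memory is meant to break exactly this coupling.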

Learn like a human, work like a robot

Our brains don’t have this problem because they can store past experience as memories. Your computer doesn’t have this problem either, as it can store data on external banks for future use. So Alphabet paired up the latter with a neural network to make it behave like a brain.

At the heart of the DNC is a controller that constantly optimizes the system’s responses, comparing its results with the desired or correct answers. Over time, this allows it to solve tasks more and more accurately while simultaneously learning how to use its memory data banks. The results are quite impressive.

After the team fed the London subway network into the system, the DNC was able to answer questions that require deductive reasoning — something computers are traditionally bad at.

For example here’s one question the DNC could answer: “Starting at Bond street, and taking the Central line in a direction one stop, the Circle line in a direction for four stops, and the Jubilee line in a direction for two stops, at what stop do you wind up?”

While that may not seem like much — a simple navigation app can tell you that in a few seconds — what’s groundbreaking here is that the DNC isn’t just executing lines of code: it’s working out the answers on its own, using the information in its memory banks.
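For contrast, here is what the explicitly programmed approach looks like: a lookup over a hand-coded, heavily simplified fragment of the tube map (real station names, but far from the full network, and the itinerary is invented). The DNC was never given traversal code like this; it learned to answer such queries from examples alone.

```python
# Each line is an ordered list of stations; riding N stops is just
# index arithmetic along that list.
lines = {
    "Central": ["Bond Street", "Oxford Circus", "Tottenham Court Road"],
    "Victoria": ["Oxford Circus", "Green Park", "Victoria"],
}

def ride(station, line, stops):
    """Move `stops` stations along `line` (negative = other direction)."""
    route = lines[line]
    return route[route.index(station) + stops]

# "Starting at Bond Street, take the Central line one stop, then the
#  Victoria line two stops -- where do you end up?"
station = "Bond Street"
for line, stops in [("Central", 1), ("Victoria", 2)]:
    station = ride(station, line, stops)
print(station)  # -> "Victoria"
```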

The cherry on top, the DeepMind team stated, is that DNCs are able to store learned facts and techniques, and then call upon them when needed. So once it learns how to deal with the London underground, it can very easily handle another transport network, say, the one in New York.

This is still early work, but it’s not hard to see how this could grow into something immensely powerful in the future — just imagine having a Siri that can look at and understand the data on the Internet just like you or me. This could very well prove to be the groundwork for producing AI that’s able to reason independently.

And I, for one, am excited to welcome our future digital overlords.

The team published a paper titled “Hybrid computing using a neural network with dynamic external memory” describing the research in the journal Nature.

Google used DeepMind to cut their electricity bill by a whopping 15%

Google is putting DeepMind’s machine learning to work on managing the energy usage of its sprawling data centers, and it’s performing like a boss — the company reports a 15% drop in consumption since the AI took over.

Image via brionv/flickr

Google is undeniably a huge part of western civilization. We don’t search for something on the Internet anymore, we google it. The company’s data servers pretty much handle all of my mail at this point, along with YouTube, social media platforms and much more. But even so, it’s easy to forget that the Google we know and interact with every day is just the tip of the iceberg; it relies on huge data servers to process, transfer and store information — and all this hardware needs a lot of power.

So much power, in fact, that the company decided to do something about it. On Wednesday, Google said it had proved it could cut the energy use of its data centers by 15% using machine learning from DeepMind, the AI company it bought in 2014. These centers use up significant power to cool and maintain an ideal working environment for the servers — requiring constant adjustments of air temperature, pressure, and humidity.

“It’s one of those perfect examples of a setting where humans have a really good intuition they’ve developed over time but the machine learning algorithm has so much more data that describes real-world conditions [five years in this case]” said Mustafa Suleyman, DeepMind’s co-founder.

“It’s much more than any human has ever been able to experience, and it’s able to learn from all sorts of niche little edge cases seen in the data that a human wouldn’t be able to identify. So it’s able to tune the settings much more subtly and much more accurately.”

Suleyman said that the reduction in power use was achieved through a combination of factors. On one hand, DeepMind is able to more accurately predict incoming computational load — in other words, it could estimate when people accessed more data-heavy content such as YouTube videos. The system also matched that prediction more quickly to the required cooling load than human operators.

“It’s about tweaking all of the knobs simultaneously,” he said.
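The “predict the load, then match the cooling” loop can be sketched in miniature. This is purely illustrative: DeepMind’s actual system is a neural network trained on years of sensor data, not a moving average, and every number and parameter below is invented.

```python
def predict_next_load(history, alpha=0.5):
    """Crude load forecast: exponential moving average of recent
    measurements (the real system learns this from far richer data)."""
    estimate = history[0]
    for load in history[1:]:
        estimate = alpha * load + (1 - alpha) * estimate
    return estimate

def cooling_setpoint(predicted_load, kw_cooling_per_kw_load=0.4):
    """Scale cooling power to the predicted compute load."""
    return predicted_load * kw_cooling_per_kw_load

loads = [100.0, 120.0, 110.0]          # recent server load, in kW
predicted = predict_next_load(loads)    # -> 110.0
print(cooling_setpoint(predicted))      # -> 44.0 kW of cooling
```

The advantage Suleyman describes is precisely that a learned model does this forecasting and matching faster and more finely than human operators turning the same knobs.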

Ok, so Google’s electricity bill just went down; good for them, but what does this have to do with us? Well, a lot, actually. Data centers gobble up a lot of energy, and that means a lot of greenhouse gas emissions — combined, data centers have emission levels similar to those of aviation. When Google first disclosed its carbon footprint in 2011, it was roughly equivalent to Laos’s annual emissions, but the company claims it has since upped its game, getting 3.5 times as much computational power for the same amount of energy.

Using machine learning is only the latest step in optimizing the system. Google began toying with the idea two years ago, and has since tested it on “more than 1%” of its servers, Suleyman said. It is now being used across a “double-digit percentage” of all Google’s data centers globally and will be applied across all of them by the end of the year. Google hasn’t released the exact amount of power its data centers use, but claims that in total its activity makes up 0.01% of global electricity use (and most of that probably goes towards the data centers).

But DeepMind is leaving a considerable mark on their energy efficiency. It cut energy expenditure for cooling by 40%, which reduced the company’s overall power consumption by 15%.
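Taken together, those two numbers allow a back-of-the-envelope estimate of how much of the data centers’ total power draw goes to cooling, assuming both percentages are measured against the same baseline:

```python
# If cutting cooling energy by 40% cuts total energy by 15%,
# then cooling's share of the total must be 15% / 40%.
cooling_cut = 0.40   # reported reduction in cooling energy
total_cut = 0.15     # reported reduction in total energy
cooling_share = total_cut / cooling_cut
print(f"implied cooling share of total energy: {cooling_share:.1%}")
# -> implied cooling share of total energy: 37.5%
```

That implied figure — cooling eating well over a third of a data center’s power — is why even small efficiency gains there move the overall bill so much.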

“I really think this is just the beginning. There are lots more opportunities to find efficiencies in data centre infrastructure,” Suleyman added.

“One of the most exciting things is the kind of algorithms we develop are inherently general … that means the same machine learning system should be able to perform well in a wide variety of environments [think power generation facilities or energy networks].”

Sophia Flucker, the director of Operational Intelligence, a UK-based consultancy that advises data centers on their energy use, said it was feasible that Google had achieved such a big reduction.

“I’ve worked with some award-winning data centres, which still had plenty of room for improvement,” she said.