
AI upscales iconic 1895 film to 4K 60fps and the results are amazing

L’arrivée d’un train en gare de La Ciotat.

The year is 1896, and a huge crowd is gathered in the back room of a Parisian café, where the famous Lumière brothers promised a spectacle of moving images. In effect, this was the world’s first movie theater, dazzling an audience that was still coming to grips with the very idea of moving pictures.

One of the earliest movies ever shot and screened by the Lumière brothers is the iconic The Arrival of a Train (L’arrivée d’un train en gare de La Ciotat). According to some accounts, the audience was so overwhelmed by the moving image of a life-sized train coming directly at them that some people screamed and ran to the back of the room. However, this seems to be more myth than an accurate account of what happened. Nevertheless, the film must have astonished many people unaccustomed to the illusion created by moving images.

The 1895 short black-and-white silent flick lasts only 45 seconds and features a train’s arrival at the station of the French town of La Ciotat. Though it might not look like much today, bear in mind that this was one of the first films ever produced, shot in a style pioneered by the two brothers and known as actualités, or ‘actualities’ — brief bites of film.

Cinématographe Lumière advertisement, 1895. Credit: Wikimedia Commons.

The short film was shot with a cinématographe created by the Lumière brothers, which was an all-in-one combination of a motion picture camera, printer, and projector.

Since then, camera technology has evolved tremendously, and novel AIs allow us to see what the film might have looked like if the French brothers had used modern filming equipment. Using several neural networks, Denis Shiryaev upscaled the iconic black-and-white film to 4K quality at 60 frames per second, and you can see the breathtaking results for yourself.

https://www.youtube.com/watch?time_continue=21&v=3RYNThid23g&feature=emb_title

And here’s the 1895 original for a side-by-side comparison.

To upscale the footage to 4K, Shiryaev used Gigapixel AI, while the extra frames needed for 60 fps were generated with the DAIN (Depth-Aware Video Frame Interpolation) neural network.
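To give a sense of what frame interpolation involves, here is a minimal sketch in Python using OpenCV. It simply blends adjacent frames to double the frame rate — a crude stand-in for the learned, motion-aware interpolation a model like DAIN performs — and the file names are placeholders, not Shiryaev’s actual pipeline.

```python
import cv2

# Naive frame interpolation: double the frame rate by inserting a 50/50 blend
# between consecutive frames. A learned model like DAIN estimates motion and
# depth instead of blending, but the overall pipeline looks similar.
cap = cv2.VideoCapture("train_arrival.mp4")        # placeholder input file
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("train_arrival_2x_fps.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps * 2, (w, h))

ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        out.write(prev)                            # write the final frame
        break
    out.write(prev)
    out.write(cv2.addWeighted(prev, 0.5, frame, 0.5, 0))  # blended in-between frame
    prev = frame

cap.release()
out.release()
```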

That’s not all. On top of all this, the YouTuber used the DeOldify neural network to colorize the film, which you can see below.

How neuro-symbolic AI might finally make machines reason like humans

If you want a machine to do something intelligent, you either have to program it explicitly or teach it to learn on its own.

For decades, engineers have been programming machines to perform all sorts of tasks — from software that runs on your personal computer and smartphone to guidance control for space missions.

But although computers are generally much faster and more precise than the human brain at sequential tasks, such as adding numbers or calculating chess moves, such programs are very limited in scope. Something as trivial as spotting a bicycle on a crowded pedestrian street, or picking up a hot cup of coffee from a desk and gently raising it to the mouth, can send a computer into convulsions — never mind conceptualizing or abstracting (such as designing a computer itself).

The gist is that humans were never programmed (not like a digital computer, at least) — humans have become intelligent through learning.

Intelligent machines

Do machine learning and deep learning ring a bell? They should. These are not mere buzzwords — they’re techniques that have triggered a renaissance of artificial intelligence, leading to phenomenal advances in self-driving cars, facial recognition, and real-time speech translation.

Although AI systems seem to have appeared out of nowhere in the previous decade, the first seeds were planted as early as 1956 by John McCarthy, Claude Shannon, Nathaniel Rochester, and Marvin Minsky at the Dartmouth Conference. Concepts like artificial neural networks, deep learning, and even neuro-symbolic AI are not new — scientists have been thinking about how to model computers after the human brain for a very long time. It’s only fairly recently that technology has delivered the data storage and processing power needed to make AI systems practically useful.

But despite impressive advances, deep learning is still very far from replicating human intelligence. Sure, a machine that can teach itself to identify skin cancer better than doctors is great, don’t get me wrong, but the approach also has many flaws and limitations.

An image-processing AI generating sentences that describe remarkably well what’s going on in each picture. Credit: Karpathy and Li (2015).

One important limitation is that deep learning algorithms and other neural-network-based machine learning systems are too narrow.

When you have huge amounts of carefully curated data, these systems can achieve remarkable things, such as superhuman accuracy and speed. In recent years, AIs have crushed top human players at one landmark game after another, from chess to Jeopardy! and StarCraft.

However, their usefulness breaks down once they’re asked to adapt to a more general task. What’s more, these narrowly focused systems are prone to error. For instance, take a look at the following picture of a “Teddy Bear” — or at least that’s how a sophisticated modern AI interprets it.

What’s furry and round? This pixel interpretation returns “Teddy Bear”, whereas any human can tell this is a gimmicky work of art.

Or this…

Lake, Ullman, Tenenbaum, Gershman (2016).

These are just a couple of examples illustrating that today’s systems don’t truly understand what they’re looking at. What’s more, artificial neural networks rely on enormous amounts of training data, which is a huge problem in the industry right now. At the rate at which computational demand is growing, there may come a time when even all the energy the sun delivers to the planet won’t be enough to satiate our computing machines. And even after being fed millions of pictures of animals, a machine can still mistake a furry cup for a teddy bear.

Meanwhile, the human brain can recognize and label objects effortlessly and with minimal training — often a single picture is enough. Show a child a picture of an elephant — the very first time they’ve ever seen one — and that child will instantly recognize a) that it is an animal and b) that it is an elephant, and will identify it the next time they come across one, whether in real life or in a picture.

This is why we need a middle ground — a broad AI that can multi-task and cover multiple domains, but which can also draw on data from a variety of sources (text, video, audio, etc.), whether that data is structured or unstructured. Enter the world of neuro-symbolic AI.

David Cox is the head of the MIT-IBM Watson AI Lab, a collaboration between IBM and MIT that will invest $250 million over ten years to advance fundamental research in artificial intelligence. One important avenue of research is neuro-symbolic AI.

“A neuro-symbolic AI system combines neural networks/deep learning with ideas from symbolic AI. A neural network is a special kind of machine learning algorithm that maps from inputs (like an image of an apple) to outputs (like the label “apple”, in the case of a neural network that recognizes objects). Symbolic AI is different; for instance, it provides a way to express all the knowledge we have about apples: an apple has parts (a stem and a body), it has properties like its color, it has an origin (it comes from an apple tree), and so on,” Cox told ZME Science.

“Symbolic AI allows you to use logic to reason about entities and their properties and relationships. Neuro-symbolic systems combine these two kinds of AI, using neural networks to bridge from the messiness of the real world to the world of symbols, and the two kinds of AI in many ways complement each other’s strengths and weaknesses. I think that any meaningful step toward general AI will have to include symbols or symbol-like representations,” he added.
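To make Cox’s apple example concrete, here is a toy sketch in Python of how the two pieces might fit together: a neural classifier (faked here) supplies the symbol “apple” from an image, and a small symbolic knowledge base then lets the system reason about that symbol’s parts, properties, and origin. The names and structure are invented for illustration — this is not the lab’s code.

```python
# Symbolic side: explicit, human-readable knowledge about entities.
KNOWLEDGE = {
    "apple": {
        "parts": ["stem", "body"],
        "properties": {"colors": ["red", "green"], "edible": True},
        "origin": "apple tree",
    },
}

def recognize(image):
    """Neural side (stand-in): a trained network would map pixels to a label."""
    # e.g. label = cnn(image).argmax(); here we simply pretend it returned "apple"
    return "apple"

def reason_about(label, question):
    """Symbolic side: answer questions by looking up explicit facts."""
    facts = KNOWLEDGE[label]
    if question == "what are its parts?":
        return facts["parts"]
    if question == "where does it come from?":
        return facts["origin"]
    return "unknown"

label = recognize(image=None)                            # pixels -> symbol
print(reason_about(label, "where does it come from?"))   # symbol -> reasoning
```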

By combining the two approaches, you end up with a system that has neural pattern recognition allowing it to see, while the symbolic part allows the system to logically reason about symbols, objects, and the relationships between them. Taken together, neuro-symbolic AI goes beyond what current deep learning systems are capable of doing.

“One of the reasons why humans are able to work with so few examples of a new thing is that we are able to break down an object into its parts and properties and then to reason about them. Many of today’s neural networks try to go straight from inputs (e.g. images of elephants) to outputs (e.g. the label “elephant”), with a black box in between. We think it is important to step through an intermediate stage where we decompose the scene into a structured, symbolic representation of parts, properties, and relationships,” Cox told ZME Science.

Here are some examples of questions that are trivial to answer by a human child but which can be highly challenging for AI systems solely predicated on neural networks.

Credit: David Cox / YouTube.

Neural networks are trained to identify objects in a scene and to interpret natural-language questions and answers (i.e. “What is the color of the sphere?”). The symbolic side handles concepts such as “objects,” “object attributes,” and “spatial relationships,” and uses this capability to answer questions about novel scenes that the AI has never encountered.

A neuro-symbolic system, therefore, applies logic and language processing to answer the question in a similar way to how a human would reason. An example of such a computer program is the neuro-symbolic concept learner (NS-CL), created at the MIT-IBM lab by a team led by Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds, and Machines.
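The snippet below is a simplified illustration of that division of labor, not the actual NS-CL code: in the real system, neural perception produces the symbolic scene description and a parser turns the question into a program, whereas here both are written by hand so only the symbolic execution step is shown.

```python
# A symbolic scene description, which NS-CL would derive from an image with neural perception.
scene = [
    {"shape": "sphere", "color": "red",  "size": "small"},
    {"shape": "cube",   "color": "blue", "size": "large"},
    {"shape": "sphere", "color": "blue", "size": "large"},
]

# "What is the color of the large sphere?" parsed into a program of symbolic steps.
program = [
    ("filter", "shape", "sphere"),
    ("filter", "size", "large"),
    ("query", "color"),
]

def execute(program, objects):
    """Run the symbolic program over the scene, one step at a time."""
    for op, *args in program:
        if op == "filter":
            attr, value = args
            objects = [o for o in objects if o[attr] == value]
        elif op == "query":
            (attr,) = args
            return [o[attr] for o in objects]
    return objects

print(execute(program, scene))  # -> ['blue']
```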

You could achieve a similar result to that of a neuro-symbolic system using neural networks alone, but the training data would have to be immense. Moreover, there’s always the risk that outlier cases, for which there is little or no training data, are answered poorly. In contrast, this hybrid approach boasts high data efficiency, in some instances requiring just 1% of the training data other methods need.

The next evolution in AI

Just as deep learning was waiting for data and computing power to catch up with its ideas, so has symbolic AI been waiting for neural networks to mature. And now that the two complementary technologies are ready to be synced, the industry could be in for another disruption — and things are moving fast.

“We’ve got over 50 collaborative projects running with MIT, all tackling hard questions at the frontiers of AI. We think that neuro-symbolic AI methods are going to be applicable in many areas, including computer vision, robot control, cybersecurity, and a host of other areas. We have projects in all of these areas, and we’ll be excited to share them as they mature,” Cox said.

But not everyone is convinced that this is the fastest road to achieving general artificial intelligence.

“I think that symbolic style reasoning is definitely something that is important for AI to capture. But, many people (myself included) believe that human abilities with symbolic logic emerge as a result of training, and are not convinced that an explicitly hard-wiring in symbolic systems is the right approach. I am more inclined to think that we should try to design artificial neural networks (ANNs) that can learn how to do symbolic processing. The reason is this: it is hard to know what should be represented by a symbol, predicate, etc., and I think we have to be able to learn that, so hard-wiring the system in this way is maybe not a good idea,” Blake Richards, who is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University, told ZME Science.

Irina Rish, an Associate Professor in the Computer Science and Operations Research department at the Université de Montréal (UdeM), agrees that neuro-symbolic AI is worth pursuing, but believes that “growing” symbolic reasoning out of neural networks may be more effective in the long run.

“We all agree that deep learning in its current form has many limitations including the need for large datasets. However, this can be either viewed as criticism of deep learning or the plan for future expansion of today’s deep learning towards more capabilities,” Rish said.

Rish sees current limitations surrounding ANNs as a ‘to-do’ list rather than a hard ceiling. Their dependence on large training datasets, for instance, can be mitigated by meta-learning and transfer learning. What’s more, the researcher argues that many of the community’s assumptions about how to model human learning are rather flawed, and calls for more interdisciplinary research.

“A common argument about “babies learning from a few samples unlike deep networks” is fundamentally flawed since it is unfair to compare an artificial neural network trained from scratch (random initialization, some ad-hoc architectures) with a highly structured, far-from-randomly initialized neural networks in baby’s brains,  incorporating prior knowledge about the world, from millions of years of evolution in varying environments. Thus, more and more people in the deep learning community now believe that we must focus more on interdisciplinary research on the intersection of AI and other disciplines that have been studying brain and minds for centuries, including neuroscience, biology, cognitive psychology, philosophy, and related disciplines,” she said.

Rish points to exciting recent research that focuses on “developing next-generation network-communication based intelligent machines driven by the evolution of more complex behavior in networks of communicating units.” Rish believes that AI is naturally headed towards further automation of AI development, away from hard-coded models. In the future, AI systems will also be more bio-inspired and feature more dedicated hardware such as neuromorphic and quantum devices.

“The general trend in AI and in computing as a whole, towards further and further automation and replacing hard-coded approaches with automatically learned ones, seems to be the way to go,” she added.

For now, neuro-symbolic AI combines the best of both worlds in innovative ways by enabling systems to have both visual perception and logical reasoning. And, who knows, maybe this avenue of research might one day bring us closer to a form of intelligence that seems more like our own.


A worm’s brain was uploaded to a hard drive and put to the test — without a single line of code

Researchers from the Vienna University of Technology (VUT) have put a brain on a circuit board — specifically, the brain of the nematode C. elegans. They are now training it to perform tasks without a single line of human-written code.


C. elegans worms.
Image credits ZEISS Microscopy / Flickr.

C. elegans isn’t much to look at. Growing to just under one millimeter in length, it’s not just tiny, it’s also a very, very simple organism. But in one respect, this little nematode is unique and uniquely valuable to science — it’s the only living being whose neural system has been fully analyzed and mapped. In other words, its brain can be recreated as a circuit — either on a physical circuit board or in software — without losing any of its function.

This has allowed researchers at the VUT to ‘copy-paste’ its brain into a computer, creating a virtual copy of the organism that reacts to stimuli the same way as the real thing. The researchers are now hard at work training this digi-worm to perform simple tasks, and it has already mastered the standard computer science trial of balancing a pole.

Worm in the software

So is your brain at risk of spontaneous copyfication? No. Researchers have been able to map C. elegans’ neural system precisely because it’s quite dumb — it can only draw on some 300 neurons’ worth of processing power. However, that’s enough gray matter to allow the worm to navigate its environment, catch bacteria for dinner, and react to certain external stimuli — such as a touch on its body, which triggers a reflexive squirming-away.

This behavior is encoded in the worm’s nerve cells, and governed by the strength of the connections between these neurons. When recreated on a computer, this simple reflex pathway works the same way as its biological counterpart — not because it’s been programmed to do so, but because this behavior arises from the structure itself.
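As a rough illustration of what “behavior arising from the structure itself” means, here is a toy three-neuron reflex arc in Python. The topology and weights are invented for the example — the real touch-withdrawal circuit in C. elegans is far richer — but the principle is the same: the response is determined by connection strengths, not by explicit if-then code about worms.

```python
# Toy reflex arc: sensory neuron -> interneuron -> motor neuron.
# The connection strengths (weights) fully determine the behavior.
W_SENSORY_TO_INTER = 1.2
W_INTER_TO_MOTOR = 0.9
MOTOR_THRESHOLD = 0.5

def relu(x):
    return max(0.0, x)

def reflex(touch_strength):
    sensory = touch_strength                      # stimulus on the body wall
    inter = relu(W_SENSORY_TO_INTER * sensory)    # interneuron activation
    motor = relu(W_INTER_TO_MOTOR * inter)        # motor neuron activation
    return "squirm away" if motor > MOTOR_THRESHOLD else "no response"

print(reflex(0.1))   # weak touch   -> no response
print(reflex(0.8))   # strong touch -> squirm away
```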

“This reflexive response of such a neural circuit is very similar to the reaction of a control agent balancing a pole,” says co-author Ramin Hasani.

Pole balancing is a classic control trial in computer science. It involves a pole attached at its lower end to a moving cart, which the controller has to keep in a vertical position. It does this by nudging the cart slightly whenever the pole starts tilting, in a bid to keep it from tipping over.


The worm’s natural behavior is very similar to that required in this test.
Image credits TU Wien.
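To make the trial concrete, here is a minimal pole-balancing sketch in Python using the standard CartPole-v1 environment from the Gymnasium package, with a hand-written rule (push the cart toward the side the pole is leaning) instead of any learned controller. This is just the generic benchmark, not the TU Wien setup.

```python
import gymnasium as gym  # pip install gymnasium

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)  # obs = [cart position, cart velocity, pole angle, pole angular velocity]

total_reward, done = 0.0, False
while not done:
    # Hand-coded rule: push the cart toward the side the pole is leaning.
    action = 1 if obs[2] > 0 else 0   # 1 = push right, 0 = push left
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated

print("steps survived:", total_reward)
```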

Standard controllers don’t have much trouble passing this test, and the trial is functionally similar to what the nematode’s neural system handles in the wild — move when a stimulus is registered. So the team wanted to see whether the worm’s network could solve the problem without any extra code or neurons, just by tuning the strength of the connections between cells. They chose this parameter because shifting synaptic strength is the characteristic feature of any natural learning process.

After some tweaking, the network managed to easily pass the pole trial.

“With the help of reinforcement learning, a method also known as ‘learning based on experiment and reward’, the artificial reflex network was trained and optimized on the computer,” explains first author Mathias Lechner.

“The result is a controller, which can solve a standard technology problem — stabilizing a pole, balanced on its tip. But no human being has written even one line of code for this controller, it just emerged by training a biological nerve system,” says co-author Radu Grosu.
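A rough sketch of that recipe — keep the network’s wiring fixed and search only over connection strengths, scoring each candidate by the reward it collects — could look like the following. It reuses the cart-pole benchmark from above with a single made-up weighted layer standing in for the worm’s reflex circuit, so it illustrates search-based reinforcement in general rather than the team’s actual method or code.

```python
import numpy as np
import gymnasium as gym

def average_return(env, weights, episodes=3):
    """Score one set of connection strengths by the reward it collects."""
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            # Fixed "wiring": a weighted sum of the observations drives the push direction.
            action = int(np.dot(weights, obs) > 0)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
    return total / episodes

env = gym.make("CartPole-v1")
rng = np.random.default_rng(0)
best_weights, best_score = None, -np.inf

# Search-based "learning": only the connection strengths change, never the structure.
for _ in range(200):
    candidate = rng.uniform(-1.0, 1.0, size=4)
    score = average_return(env, candidate)
    if score > best_score:
        best_weights, best_score = candidate, score

print("best average return:", best_score)
```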

After establishing that the method works, the team plans to probe the capabilities of similar circuits further. Still, the research raises some weighty questions — are machine learning and our brain’s processes fundamentally the same? If so, is silicon intelligence any less valuable, or any less ‘alive’, than biological intelligence?

For now, however, we simply don’t know — and C. elegans doesn’t know or care whether it lives as a worm in the ground or as a virtual collection of 1s and 0s on a computer in Vienna.

The paper “Worm-level Control through Search-based Reinforcement Learning” has been published on the preprint server arXiv.