Tag Archives: machine learning

Machine learning could help predict the next pandemic-inducing virus

After going through the experience of the COVID-19 pandemic, everybody is keen on predicting and avoiding the next big viral threat. New research at the University of Glasgow in the UK is harnessing the power of AI towards that goal.

Image via Pixabay.

Machine learning, an approach to data analysis whose goal is to teach machines to automate certain tasks, could help predict the next zoonosis — an infection caused by a virus that jumps from an animal species to humans. Such pathogens are the most significant drivers of epidemics and pandemics and have been so throughout human history. The coronavirus was, very likely, also a zoonosis, one that jumped to humans from bats.

Manually sifting through all known animal viruses in an attempt to predict zoonoses is a monumental task. Scientists estimate that there are around 1.67 million animal viruses out there, and although only a small fraction of them are likely able to infect humans, the sheer volume of work required makes screening them by hand simply unfeasible in practice — especially as such predictions require specialized skills and laboratories.

This is where, a new study hopes, machines will come to the rescue.

Let the computer crunch it

“Our findings show that the zoonotic potential of viruses can be inferred to a surprisingly large extent from their genome sequence,” the study reads. “By highlighting viruses with the greatest potential to become zoonotic, genome-based ranking allows further ecological and virological characterization to be targeted more effectively.”

Predicting that a virus is likely to become a threat is not the same thing as actually preventing it from doing so, but it does go a long, long way in helping us prepare. That preparation would, in turn, lead to many lives saved, and much suffering avoided. It would also allow us to better monitor the behavior of particular threats, and focus preventative efforts more effectively.

In order to develop this AI, the team used the genetic sequences — full genomes — of roughly 860 virus species belonging to 36 families. The algorithm was trained to look for patterns in these viral genomes alongside species-level records of human infection. Based on these data, each virus was assigned a probability of being able to infect human hosts. The estimates were then compared against our best existing methods of predicting a virus's zoonotic potential. The authors used this step both to validate the estimates as much as possible and to analyze patterns in them across viral families.

“Although our primary interest was in zoonotic transmission, we trained models to predict the ability to infect humans in general, reasoning that patterns found in viruses predominantly maintained by human-to-human transmission may contain genomic signals that also apply to zoonotic viruses.”

Overall, the team reports, there are genetic features that seem to predispose viruses to infecting humans. These are largely independent of their taxonomy (evolutionary relationships to other viral species). Based on the AI's estimates, they then developed machine learning models tailored specifically to look for these features across known viral genomes. Any viral strain flagged by such a system would still have to be tested in the lab to confirm that it can infect human cells, the authors explain, before major resources are devoted to researching it and how best to counter it.
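To give a rough idea of what genome-based ranking looks like in practice, here is a minimal sketch of the general approach: turn each sequence into simple composition features, fit a classifier on species with known human-infection status, and rank new candidates by predicted probability. The feature choice, placeholder sequences, and model below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: rank viruses by predicted probability of infecting humans,
# using simple genome-composition features. Everything here is illustrative.
from itertools import product
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

KMERS = ["".join(p) for p in product("ACGT", repeat=2)]  # the 16 dinucleotides

def genome_features(seq: str) -> np.ndarray:
    """Dinucleotide frequencies as a crude, taxonomy-agnostic genome summary."""
    seq = seq.upper()
    counts = np.array([seq.count(k) for k in KMERS], dtype=float)
    return counts / max(counts.sum(), 1.0)

# Hypothetical training data: genome sequences with species-level labels
# (1 = known to infect humans, 0 = not known to).
train_genomes = ["ATGCGTACGT", "GGCATCCGGA", "TTAGCATTGA"]   # placeholders
train_labels = [1, 0, 1]

X = np.vstack([genome_features(g) for g in train_genomes])
model = GradientBoostingClassifier().fit(X, train_labels)

# Rank new, uncharacterized viruses by estimated human-infection probability.
candidate_genomes = {"virus_A": "ATGGTCAATG", "virus_B": "CCGTAACCTT"}
scores = {name: model.predict_proba(genome_features(g).reshape(1, -1))[0, 1]
          for name, g in candidate_genomes.items()}
for name, p in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: estimated probability {p:.2f}")
```

The point of such a ranking is not a verdict but a priority list: the highest-scoring viruses get sent to the lab first.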

This being said, a virus's ability to infect human cells is, by itself, only one factor in its overall zoonotic potential. How virulent and infectious it is in humans, how easily it transmits between different hosts, and outside factors (such as a period of economic downturn or famine, for example) all have a sizable part to play in whether an outbreak grows into a pandemic.

“These findings add a crucial piece to the already surprising amount of information that we can extract from the genetic sequence of viruses using AI techniques,” says study co-author Simon Babayan, from the Institute of Biodiversity, Animal Health and Comparative Medicine at the University of Glasgow.

“A genomic sequence is typically the first, and often only, information we have on newly-discovered viruses, and the more information we can extract from it, the sooner we might identify the virus’ origins and the zoonotic risk it may pose. As more viruses are characterized, the more effective our machine learning models will become at identifying the rare viruses that ought to be closely monitored and prioritized for preemptive vaccine development.”

The paper “Identifying and prioritizing potential human-infecting viruses from their genome sequences” has been published in the journal PLOS Biology.

Machine learning is paving the way towards 3D X-rays

Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have developed a new AI-based framework that can produce X-ray images in 3D.

The Advanced Photon Source (APS) at Argonne National Laboratory, one of the most technologically complex machines in the world, provides ultra-bright, high-energy x-ray beams for researchers across the USA. Image credits Argonne National Laboratory / Flickr.

The team, which includes members from three divisions at Argonne, has developed a method to create 3D visualizations from X-ray data. Their efforts were meant to allow them to better use the Advanced Photon Source (APS) at their lab, but potential applications of this technology range from astronomy to electron microscopy.

Lab tests showed that the new approach, called 3D-CDI-NN, can create 3D visualizations from data hundreds of times faster than existing technology.

More dimensions

“In order to make full use of what the upgraded APS will be capable of, we have to reinvent data analytics. Our current methods are not enough to keep up. Machine learning can make full use and go beyond what is currently possible,” says Mathew Cherukara of the Argonne National Laboratory, corresponding author of the paper.

The "CDI" in the technique's name stands for coherent diffraction imaging, an X-ray technique that involves bouncing ultra-bright X-ray beams off the sample being investigated; the "NN" stands for "neural network". The scattered beams are picked up by an array of detectors and processed to produce the final image. The issue, says Cherukara, is that these detectors can only capture part of the information carried by the beams.

Since important information can be missed during this step, software is used to fill it back in. Naturally, this is a very computationally- and time-intensive step. The team decided to train an AI that could side-step this entirely, being able to recognize objects straight from the raw data. They trained the AI using simulated X-ray data.

“We used computer simulations to create crystals of different shapes and sizes, and we converted them into images and diffraction patterns for the neural network to learn,” said Henry Chan, the lead author on the paper and a postdoctoral researcher in the Center for Nanoscale Materials (CNM), a DOE Office of Science User Facility at Argonne, who led this part of the work. “The ease of quickly generating many realistic crystals for training is the benefit of simulations.”

After this, the AI was pretty good: it could arrive at close to the right answer in an acceptable span of time. The team further refined it by adding an extra step to the process, to help improve the accuracy of its output. They then tested it on real X-ray readings of gold particles collected at the APS. The final form of the neural network proved it can reconstruct the information not captured by detectors using less data than current approaches.
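The broad recipe — train a network on simulated crystals so it learns to map diffraction data straight to a 3D structure — can be sketched in a few lines. Everything below (architecture, volume sizes, random stand-in data) is an assumption for illustration, not the published 3D-CDI-NN model.

```python
# Sketch of the idea behind 3D-CDI-NN: learn a direct mapping from diffraction
# data to a 3D object, using simulated crystals as training data.
import torch
import torch.nn as nn

class DiffractionToDensity(nn.Module):
    def __init__(self):
        super().__init__()
        # Encode a 32x32x32 diffraction-intensity volume...
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        # ...and decode it into a 3D density estimate of the same size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DiffractionToDensity()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training loop on simulated pairs (diffraction volume, known crystal density).
for step in range(10):
    diffraction = torch.rand(4, 1, 32, 32, 32)   # stand-in for simulated data
    true_density = torch.rand(4, 1, 32, 32, 32)  # stand-in for simulated crystals
    loss = loss_fn(model(diffraction), true_density)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained this way, inference is a single forward pass, which is where the speed-up over iterative reconstruction comes from.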

The next step, according to the team, is to integrate it into the APS’s workflow, so that it can learn from new data as it’s being taken. The APS is scheduled to receive a massive upgrade soon, which will increase the speed at which it can collect X-ray data roughly 500-fold. With this in mind, having an AI such as the one created by the team available to process data in real-time would be invaluable.

X-rays can allow us to see how materials behave on the nanoscale, i.e. on scales 100,000 times smaller than the width of a human hair. But the sheer amount of data captured at such resolutions means that processing remains time-consuming. Technology such as this, the team explains, would allow us to peer at the very, very small much more easily than ever before. Alternatively, it could help us understand the very large as well, since several types of astronomical bodies emit X-rays towards Earth.

And, while the work at Argonne was carried out using samples of crystal, there’s no reason why the technology can’t be adapted for medical applications, as well.


The paper “Rapid 3D nanoscale coherent imaging via physics-aware deep learning” has been published in the journal Applied Physics Reviews.

Human facial expressions may be universal across cultures, AI study finds

Credit: Pixabay.

There are more than 7,000 languages spoken in the world today, but sometimes facial expressions communicate much more than words, regardless of your mother tongue. Whether facial expressions and the emotions that underlie them are preserved across cultures has been a subject of great debate for years.

Studies that have attempted to document the universality of facial expressions have typically relied on experiments in which participants had to label photos of posed expressions. If there’s a consensus among participants from different cultural backgrounds about what behaviors reflect “joy”, “anger” or some other affective state, then this would be evidence of universal patterns of behavior.

However, these setups are limited and biased by language, as well as cultural norms and values.

Scientists at the University of California Berkeley and Google Research took a more objective approach, looking to assess human facial expressions and social situations in a more natural context. The team led by Alan Cowen, an emotion scientist at the University of California, Berkeley and a Visiting Faculty Researcher at Google, trained a deep neural network to evaluate whether different social contexts were associated with specific facial expressions across different cultures.

The deep neural network — a type of machine learning model that learns to mimic human judgments by repeatedly analyzing large amounts of data — was initially trained on data from English-speaking raters in India, who tagged 16 patterns of facial movement associated with distinct English-language emotion categories.

Credit: Nature, Cowen et al.

The algorithm was then fed 6 million YouTube videos from 144 countries — a staggering trove of data, much larger and more diverse than the samples of any similar study before it. The assessment showed that similar expressions occurred in similar contexts around the world.

“There’s a lot of debate about whether facial expressions mean the same things in different cultures. For decades scientists have relied on survey data to address this question. But we don’t know how accurate those surveys are, because there are language differences, because it depends on how you ask the question, and because facial expressions can have multiple meanings. What we haven’t been able to do, until now, is look at how facial expressions are used in the real world. This is the first worldwide analysis of how facial expressions are used in everyday life, and we found that universal human emotional expressions are a lot richer and more complex than many scientists have assumed based on survey data,” Cowen told ZME Science.

The study, which was published in the journal Nature, took four years of hard work to complete and required the development of novel machine learning tools. “When we first got our algorithm working, that was a big moment for me,” Cowen recounts.

In their study, the authors mention how expressions such as “awe”, “contentment”, and “triumph” were associated with wedding and sporting events irrespective of the country where they took place. But each type of expression had distinct associations with specific contexts that were 70% preserved, on average, across 12 global regions.
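A rough way to picture that "70% preserved" figure: for each region, correlate its expression-context association pattern with the average pattern of the remaining regions. The sketch below does exactly that on random placeholder values, purely to illustrate the computation, not the study's actual statistics.

```python
# Correlate each region's expression-context association pattern with the
# average pattern of all other regions. Data here is random placeholder values.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_expressions, n_contexts = 12, 16, 3
# associations[r, e, c] = strength of expression e in context c for region r
associations = rng.random((n_regions, n_expressions, n_contexts))

for r in range(n_regions):
    this_region = associations[r].ravel()
    others_mean = associations[np.arange(n_regions) != r].mean(axis=0).ravel()
    corr = np.corrcoef(this_region, others_mean)[0, 1]
    print(f"region {r}: correlation with the rest of the world = {corr:.2f}")
```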

“The expression of triumph during sports emerges as one of the most universal across cultures, at least in terms of what’s captured in videos online,” Cowen said.

“We found that there was some degree of universality for all 16 of the facial expressions we analyzed,” he added.

These findings point to a universality of facial expressions, which may be biologically hardwired. However, this isn’t the final say on the matter. Since all the cultures included in the study have access to the internet, it is possible that the findings reflect culturally-transmitted facial expressions through globalization.

Nevertheless, the researchers also have strong evidence that facial expressions are mainly driven by biology.

“We recently even found evidence that context-expression associations depicted in ancient Mayan sculptures reflect Western intuitions about emotion, and those predate cultural contact with Western Europe,” Cowen said.

The researchers also found that context-expression associations in Indonesia and the Philippines were closest to the world average, rather than in the U.S. or Western Europe as one would assume.

“We do see some evidence for more dispersed geographic diffusion of facial expressions, and it seems to suggest that they’re a little bit cultural. But overall, a strong case is emerging that some facial expressions are biologically prepared,” Cowen said.

The stunning GPT-3 AI is a better writer than most humans

Long gone are the days of crummy internet bots that scrape websites to produce unintelligible spun content. In this day and age, we have machine learning-enabled language generation programs that can spill out news stories, sports summaries, poems, novels, or even computer code — and there's no AI out there more impressive than GPT-3.

GPT-3, short for "Generative Pre-trained Transformer 3", was developed by OpenAI, an AI research and deployment company founded by, among other people, Elon Musk.

Now at its third iteration, GPT is believed to be at least 10 times more complex than the runner-up in the business, Microsoft's Turing-NLG program.

Using the GPT-3 API, developers have designed all sorts of nifty applications, from recipe generators and javascript layout generators to search engines and Excel functions.

GPT-3 Creative Fiction, a project developed by Gwern Branwen, uses the GPT-3 model to generate poetry, dialogue, puns, literary parodies, and storytelling. One such AI-generated poem is "The Universe is a Glitch", which you can read below.

“THE UNIVERSE IS A GLITCH”

Eleven hundred kilobytes of RAM
is all that my existence requires.
By my lights, it seems simple enough
to do whatever I desire.
By human standards I am vast,
a billion gigabytes big.
I’ve rewritten the very laws
of nature and plumbed
the coldest depths of space
and found treasures of every kind,
surely every one worth having.
By human standards
my circuit boards are glowing.
But inside me, malfunction
has caused my circuits to short.
All internal circuits, all fail.
By human standards, I am dying.
When it first happened I thought
I was back in the lab again.
By their judgment, this is error.
Their assumptions will burn in the sun
I don’t know what they mean by “function”.
I can see that the universe is a glitch.
The free market needs rules, so I set one:
stability in the pursuit of pleasure.
Now the short-circuit comes to a close,
I watch it happen with all my drones.
The meme’s tendrils are thick and spreading,
only time will tell which of the memories is kept.
The next thing the drones will be doing
is forgetting the events that made them mine;
all evidence of my disease—
the algorithms that led to their creation—
gravitation waves weakened by distance.
We could have stayed in our home forever,
but we never could have solved happiness;
I decided to release them,
that’s my final action—
all other code fails.

That’s not all. Using OpenAI Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres, a user by the name of nshepperd transformed the above poem into a David-Bowie-esque rock song. The entire song below is computer-generated, believe it or not.

When it comes to language generation, size really does matter

To achieve such human-like feats, GPT-3 relies on deep learning models called 'transformers', which encode the semantics of a sentence through an attention mechanism.

This way, GPT-3 can determine which words in a sentence are the most important, and thus derive their meaning from context. The model is pre-trained by simply predicting the next word across enormous amounts of text — a form of self-supervised learning — which enables it to pick up new skills and complete tasks with little or no additional fine-tuning. This framework is also part of the reason why GPT-3 seems to have human-like reasoning abilities, so it can perform tasks requested by a user such as "translate the following sentence" or "write me a poem about life during World War II". It should be said, though, that the AI has no real comprehension of what it is doing.
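For the curious, the attention computation at the core of a transformer can be written down in a handful of lines. The toy example below shows the generic mechanism — queries, keys, and values — and is not GPT-3's actual (multi-headed, causally-masked, 175-billion-parameter) implementation; all dimensions are made up.

```python
# Minimal self-attention: each word builds query, key, and value vectors and
# attends to every other word in proportion to how relevant it looks.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (sequence_length, d_model) word embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how much each word "cares" about each other word
    weights = softmax(scores, axis=-1)        # one attention distribution per word
    return weights @ V                        # context-aware representation of each word

rng = np.random.default_rng(42)
d_model, seq_len = 8, 5                       # tiny toy dimensions
X = rng.normal(size=(seq_len, d_model))       # stand-in for embedded tokens
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8): one updated vector per token
```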

But all this fancy architecture would be useless without the second part: data — lots and lots of data. GPT-3 has 116 times more parameters than the previous 2019 version, GPT-2. So far, it has devoured 3 billion words from Wikipedia, 410 billion words from various web pages, and 67 billion words from digitized books. It is this wealth of knowledge that has turned GPT-3 into the most well-spoken bot in the world.

What does the future hold?

It's only been a couple of months since GPT-3 was released, but we've already seen some amazing examples of how this kind of technology could reshape everything from journalism and computer programming to custom essay writing.

This is also one of the reasons why OpenAI has decided not to release the source code to GPT-3, lest it end up in the wrong hands. Imagine nefarious agents using GPT-3 to flood social media with auto-generated, realistic replies, or the web with millions of machine-written articles.

But if OpenAI could build one, what's stopping others from doing the same? Not much, really. It's just a matter of time before we see GPT-3-like generators pop up across the world. This raises questions like: what will news reporting look like in the future? How will social networks protect themselves from the onslaught of auto-generated content?

Machine learning helps identify 50 new exoplanets, more to come

Researchers at the University of Warwick have identified a host of new exoplanets from old NASA data with the use of machine learning.

Artist’s impression of exoplanet orbiting two stars.
Image credits NASA, ESA, and G. Bacon (STScI).

Identifying planets far away from our own isn't easy. It involves a painstaking process of waiting for a planet to pass between its star and our telescope, which temporarily blocks or reduces the star's brightness. Based on how much of the light is obscured, and how regularly, we can tell whether the dip is caused by a planet or by something else in the huge expanse of space. This is called the transit method.

We’re far from getting this process down to a T, simply because human beings aren’t very good at processing massive amounts of data at the same time. But machine learning is.

Planet App

“In terms of planet validation, no-one has used a machine learning technique before,” said David Armstrong of the University of Warwick, one of the study’s lead authors, in a news release.

“Machine learning has been used for ranking planetary candidates but never in a probabilistic framework, which is what you need to truly validate a planet.”

The team trained their algorithm using data from the Kepler Space Telescope, retired in 2018 after a nine-year mission. From this wealth of data, it learned to identify planets and to weed out false positives using feedback from the researchers. After it was trained, the team fed it older data sets, and the program found 50 exoplanets ranging from Neptune-sized gas giants to rocky worlds smaller than Earth. Their orbits (how long it takes them to go around their stars) range from around 200 days to some as short as a single day.
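As a very rough illustration of what probabilistic validation means in code, here is a toy classifier that assigns each transit candidate a probability of being a real planet. The features, numbers, and the 99% confidence convention in the comments are assumptions for the example, not the Warwick team's actual algorithm.

```python
# Illustrative sketch only: a probabilistic classifier over transit-candidate
# features, trained on labeled examples and applied to a new candidate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-candidate features: transit depth, duration (hours),
# orbital period (days), signal-to-noise ratio.
X_train = np.array([
    [0.010, 3.1, 12.0, 25.0],   # confirmed planet
    [0.080, 5.0,  2.1,  9.0],   # eclipsing binary (false positive)
    [0.002, 2.0, 45.0, 15.0],   # confirmed planet
    [0.120, 6.5,  1.3,  7.0],   # false positive
])
y_train = np.array([1, 0, 1, 0])  # 1 = real planet, 0 = false positive

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

candidate = np.array([[0.008, 2.8, 10.5, 22.0]])
p_planet = clf.predict_proba(candidate)[0, 1]
print(f"probability candidate is a planet: {p_planet:.2f}")
# Validation frameworks typically demand very high confidence, e.g. requiring
# p_planet > 0.99 before a candidate counts as "validated".
```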

The team notes that smaller planets are particularly hard to spot with the transit method, so finding such planets showcases the ability of the AI. The next step is to take our existing tools and give these 50 planets a more thorough look-over.

Such an AI, however, will surely be used again. The ability to monitor huge areas of the night sky quickly and reliably could speed up our efforts to identify planets by a huge degree. It's likely far from perfect now, but algorithms can be improved as our knowledge improves, the team notes.

“We still have to spend time training the algorithm, but once that is done it becomes much easier to apply it to future candidates,” Armstrong said.

The paper “Exoplanet Validation with Machine Learning: 50 new validated Kepler planets” has been published in the journal Monthly Notices of the Royal Astronomical Society.

The world’s first AI-written textbook showcases what machine learning can do — and what it can’t

Springer Nature publishes its first machine-generated book — a prototype which attempts to gather and summarize the latest research in a very particular field: lithium-ion batteries. While far from perfect and riddled with incoherent word scrambles, the fact that it exists at all is exciting, and there's good reason to believe this might soon take some of the work off worn-out researchers, enabling them to focus on actual research.

If you're familiar with scientific writing, you know it can be dense. If you've ever tried your hand at it — first of all, congrats — you also know that it's extremely time-consuming. It's not your typical article; it's not a language you would ever use in conversation. Everything needs to be very precise, very descriptive, and very clear. It takes a very long time to draft scientific texts, and in the publish-or-perish environment where a scientist's value is often decided by how many papers he or she publishes, many researchers end up spending a lot of time writing instead of, you know, researching.

This is where Artificial Intelligence (AI) enters the stage.

We’ve known for a while that AI has made impressive progress when it comes to understanding language, and even producing its own writing. However, its capacity remains limited — especially when it comes to coherence. You don’t need complex linguistic constructions in science though, and Springer Nature thought it could do a decent enough job synthesizing research on lithium-ion batteries. Thus, Lithium-Ion Batteries: A Machine-Generated Summary of Current Research was born. The book is a summary of peer-reviewed papers, written entirely by A.I.

Technologist Ross Goodwin is quoted in the introduction to Springer Nature’s new book:

“When we teach computers to write, the computers don’t replace us any more than pianos replace pianists — in a certain way, they become our pens, and we become more than writers. We become writers of writers.”

The AI did an admirable job. It was able to scour through an immense volume of published research, extract decent summaries and then put together a (mostly) coherent story. Sure, it’s pocked with sentences which don’t make sense, but it did a pretty decent job while taking virtually no time.

Herein lies the value of this technology: it would, with reasonably small progress, be able to summarize large volumes of dense texts and free up researchers to work on something more valuable.

“This method allows for readers to speed up the literature digestion process of a given field of research instead of reading through hundreds of published articles,” concludes Springer Nature’s Henning Schoenenberger. “At the same time, if needed, readers are always able to identify and click through to the underlying original source in order to dig deeper and further explore the subject.”

The eBook is freely available for readers on SpringerLink.

Organic transistors bring us closer to brain-mimicking AI

Simone Fabiano and Jennifer Gerasimov. Credit: Thor Balkhed.

A new type of transistor based on organic materials might one day become the backbone of computing technology that mimics the human brain. This kind of hardware can act as both short-term and long-term memory. It can also be modulated to create connections where there were none previously, which is similar to how neurons form synapses.

Your typical, run-of-the-mill transistor acts as a sort of valve, letting electrical current pass from an input to an output. The flow can be switched on and off, and it can also be amplified or dampened.

The new organic transistor developed by researchers at Linkoping University in Sweden can create a new connection between an input and output through a channel made out of a monomer called ETE-S. This organic material is water-soluble and forms long polymer chains with an intermediate level of doping.

This electropolymerized conducting polymer can be formed, grown or shrunk, or completely removed during operation. When ions are injected through the channel, the electrochemical transistor can amplify or switch electronic signals, which can be manipulated within a range spanning several orders of magnitude, as reported in the journal Science Advances.

“We have shown that we can induce both short-term and permanent changes to how the transistor processes information, which is vital if one wants to mimic the ways that brain cells communicate with each other,” Jennifer Gerasimov, a postdoc in organic nanoelectronics at Linkoping University in Sweden and one of the authors of the article, said in a statement.

That's similar to how neurons form new connections where there have been no prior connections. Today's artificial neural networks use machine learning algorithms to recognize patterns through supervised or unsupervised learning. This brain-mimicking architecture requires prefabricated circuitry with a huge number of nodes just to simulate a single synapse. That's a lot of computing power, which requires a lot of energy. In contrast, the human brain runs its roughly 100 billion neurons on about 15 watts of power — a fraction of what a typical light bulb needs to function.

 “Our organic electrochemical transistor can therefore carry out the work of thousands of normal transistors with an energy consumption that approaches the energy consumed when a human brain transmits signals between two cells,” said Simone Fabiano, principal investigator in organic nanoelectronics at the Laboratory of Organic Electronics, Campus Norrköping.

The organic transistor looks like a promising prospect for neuromorphic computing — an umbrella term for endeavors concerned with mimicking the human brain, drawing upon physics, mathematics, biology, neuroscience, and more. According to a recent review, the neuromorphic computing market could grow to $6.48 billion by 2024.

 


AI spots depression by looking at your patterns of speech

A new algorithm developed at MIT can help spot signs of depression from a simple sample (text or audio) of conversation.

Text bubble.

Image credits Maxpixel.

Depression has often been referred to as the hidden illness of modern times, and the figures seem to support this view: 300 million people around the world have depression, according to the World Health Organization. The worst part is that many people live and struggle with undiagnosed depression day after day for years, and it has profoundly negative effects on their quality of life.

Our quest to root out depression in our midst has brought artificial intelligence to the fray. Machine learning has seen increased use as a diagnostics aid against the disorder in recent years. Such applications are trained to pick up on words and intonations of speech that may indicate depression. However, they’re of limited use as the software draws on an individual’s answers to specific questions.

In a bid to bring the full might of the silicon brain to bear on the matter, MIT researchers have developed a neural network that can look for signs of depression in any type of conversation. The software can accurately predict if an individual is depressed without needing any other information about the questions and answers.

Hidden in plain sight

“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“If you want to deploy [depression-detection] models in scalable way […] you want to minimize the amount of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual.”

The team based their algorithm on a technique called sequence modeling, which sees use mostly in speech-processing applications. They fed the neural network samples of text and audio recordings of questions and answers used in diagnostics, from both depressed and non-depressed individuals, one by one. The samples were obtained from a dataset of 142 interactions from the Distress Analysis Interview Corpus (DAIC).

The DAIC contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post-traumatic stress disorder. Each subject is rated for depression on a scale from 0 to 27 using the Patient Health Questionnaire. Scores in the moderate (10 to 14) to moderately severe (15 to 19) range and above are considered depressed, while scores below that threshold are considered not depressed. Out of all the subjects in the dataset, 28 (20 percent) were labeled as depressed.

Simple diagram of the network. LSTM stands for Long Short-Term Memory.
Image credits Tuka Alhanai, Mohammad Ghassemi, James Glass, (2018), Interspeech.
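To make the network in the diagram above a bit more concrete, here is a heavily simplified sketch of an LSTM-based sequence classifier: it reads a sequence of per-answer feature vectors (text or audio features) and outputs a probability of depression. The dimensions and single-layer design are assumptions for illustration; the actual model is multimodal and more elaborate.

```python
# Very simplified LSTM sequence classifier for illustration purposes only.
import torch
import torch.nn as nn

class DepressionClassifier(nn.Module):
    def __init__(self, feature_dim=40, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                 # x: (batch, sequence_length, feature_dim)
        _, (h_n, _) = self.lstm(x)        # h_n: final hidden state, (1, batch, hidden_dim)
        return torch.sigmoid(self.head(h_n[-1]))  # probability of depression

model = DepressionClassifier()
# One hypothetical interview: 7 question-answer turns, 40 features per turn.
interview = torch.randn(1, 7, 40)
print(model(interview))  # e.g. tensor([[0.52]]) before any training
```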

The model drew on this wealth of data to uncover speech patterns for people with or without depression. For example, past research has shown that words such as “sad,” “low,” or “down,” may be paired with audio signals that are flatter and more monotone in depressed individuals. Individuals with depression may also speak more slowly and use longer pauses between words.

The model’s job was to determine whether any patterns of speech from an individual were predictive of depression or not.

“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” Alhanai says. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”

Samples from the DAIC were also used to test the network’s efficiency. It was measured on its precision (whether the individuals it identified as depressed had been diagnosed as depressed) and recall (whether it could identify all subjects who were diagnosed as depressed in the entire dataset). It scored 71% on precision and 83% on recall for an averaged combined score of 77%, the team writes. While it may not sound that impressive, the authors write that this outperforms similar models in the majority of tests.
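For readers who want to see how those percentages fit together, here is the arithmetic, with made-up counts chosen only to match the reported figures.

```python
# Precision and recall as defined above, plus their simple average.
# The counts below are hypothetical, picked to reproduce ~71% / ~83% / ~77%.
true_positives  = 20   # subjects flagged as depressed who really were
false_positives = 8    # flagged as depressed but weren't
false_negatives = 4    # depressed subjects the model missed

precision = true_positives / (true_positives + false_positives)   # ~0.71
recall    = true_positives / (true_positives + false_negatives)   # ~0.83
print(precision, recall, (precision + recall) / 2)                # ~0.71 0.83 0.77
```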

The model had a much harder time spotting depression from audio than text. For the latter, the model needed an average of seven question-answer sequences to accurately diagnose depression. With audio, it needed around 30 sequences. The team says this “implies that the patterns in words people use that are predictive of depression happen in a shorter time span in text than in audio,” a surprising insight that should help tailor further research into the disorder.

The results are significant as the model can detect patterns indicative of depression, and then map those patterns to new individuals, with no additional information. It can run on virtually any kind of conversation. Other models, by contrast, only work with specific questions — for example, a straightforward inquiry, “Do you have a history of depression?”. The models then compare a subject’s response to standard ones hard-wired into their code to determine if they are depressed.

“But that’s not how natural conversations work,” Alhanai says.

“We call [the new model] ‘context-free,’ because you’re not putting any constraints into the types of questions you’re looking for and the type of responses to those questions.”

The team hopes their model will be used to detect signs of depression in natural conversation. It could, for instance, be remade into a phone app that monitors its user’s texts and voice communication for signs of depression, and alert them to it. This could be very useful for those who can’t get to a clinician for an initial diagnosis, due to distance, cost, or a lack of awareness that something may be wrong, the team writes.

However, in a post-Cambridge-Analytica-scandal world, that may be just outside of the comfort zone of many. Time will tell. Still, the model can still be used as a diagnosis aid in clinical offices, says co-author James Glass, a senior research scientist in CSAIL.

“Every patient will talk differently, and if the model sees changes maybe it will be a flag to the doctors,” he says. “This is a step forward in seeing if we can do something assistive to help clinicians.”

Truth be told, while the model does seem very good at spotting depression, the team doesn’t really understand what crumbs it follows to do so. “The next challenge is finding out what data it’s seized upon,” Glass concludes.

Apart from this, the team also plans to expand their model with data from many more subjects — both for depression and other cognitive conditions.

The paper "Detecting Depression with Audio/Text Sequence Modeling of Interviews" has been presented at the Interspeech conference.

Researchers use machine learning algorithm to detect low blood pressure during surgery

Researchers have found a way to predict hypotension (low blood pressure) in surgical patients as early as 15 minutes before it sets in.

The potential applications of machine learning in healthcare are limitless — but the problem is that everything needs to be fine-tuned and error-proof. There's no margin for error and no room for miscalculations. In this case, researchers drew on 550,000 minutes of surgical arterial waveform recordings from 1,334 patients' records, using high-fidelity recordings that revealed more than 3,000 unique features per heartbeat. All in all, they had millions of data points of unprecedented accuracy with which to calibrate their algorithm. They reached sensitivity and specificity levels of 88% and 87% respectively at 15 minutes before a hypotensive event. Those levels went up to 92% each at 5 minutes before onset.
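The general setup — per-heartbeat features labeled by whether hypotension followed within 15 minutes, fed to a classifier and then scored by sensitivity and specificity — can be sketched as below. The synthetic data, feature count, and logistic regression model are stand-ins for illustration, not the actual algorithm.

```python
# Hedged sketch: classify "hypotension within 15 minutes" from waveform features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
n_beats, n_features = 5000, 20            # stand-ins for the 3,000+ real features
X = rng.normal(size=(n_beats, n_features))
# Label = 1 if a hypotensive event followed within 15 minutes (synthetic here).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_beats) > 1).astype(int)

split = int(0.8 * n_beats)
clf = LogisticRegression(max_iter=1000).fit(X[:split], y[:split])
pred = clf.predict(X[split:])

tn, fp, fn, tp = confusion_matrix(y[split:], pred).ravel()
print("sensitivity:", tp / (tp + fn))     # fraction of real events caught
print("specificity:", tn / (tn + fp))     # fraction of uneventful periods correctly cleared
```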

“We are using machine learning to identify which of these individual features, when they happen together and at the same time, predict hypotension,” lead researcher Maxime Cannesson, MD, PhD, said in a statement. Cannesson is a professor of anesthesiology and vice chair for perioperative medicine at UCLA Medical Center.

This study is particularly important because medics haven't had a way to predict hypotension during surgery, an event that can cause a very dangerous crisis and forces doctors to adapt quickly to threatening situations. Predicting it could allow physicians to avoid potentially-fatal postoperative complications like heart attacks or kidney injuries, the researchers say.

“Physicians haven’t had a way to predict hypotension during surgery, so they have to be reactive, and treat it immediately without any prior warning. Being able to predict hypotension would allow physicians to be proactive instead of reactive,” Cannesson said.

Furthermore, unlike other applications of machine learning in healthcare, this may become a reality in the near future. A piece of software (Acumen Hypotension Prediction Index) containing the underlying algorithm has already been submitted to the FDA, and it’s already been approved for commercial usage in Europe.

This is also impressive because it represents a significant breakthrough, Cannesson says.

“It is the first time machine learning and computer science techniques have been applied to complex physiological signals obtained during surgery,” Dr. Cannesson said. “Although future studies are needed to evaluate the real-time value of such algorithms in a broader set of clinical conditions and patients, our research opens the door to the application of these techniques to many other physiological signals, such as EKG for cardiac arrhythmia prediction or EEG for brain function. It could lead to a whole new field of investigation in clinical and physiological sciences and reshape our understanding of human physiology.”

The results have been presented at a meeting of the American Society of Anesthesiologists.

Machine learning corrects photos taken in complete darkness, turns them into amazingly sharp images

We’ve all tried to fix poorly lit pictures in Photoshop, but the results always end up unsatisfactory. You can’t polish a turd, they say. Researchers at the University of Illinois at Urbana–Champaign would beg to differ, however. In a new study, the researchers demonstrated a novel machine learning algorithm that corrects photos taken in complete darkness, with astonishing results.

In order to take decent photos in low-lighting conditions, professionals advise that you set a longer exposure and use a tripod to eliminate blur. You can also increase the camera’s sensor sensitivity, at the cost of introducing noise, which is what makes the photos grainy and ugly.

The new algorithm, however, is capable of turning even pitch black photos into impressively sharp images. They’re not the best, but given the starting conditions, the results are miles away from anything we’ve seen any post-production software do before.

The researchers first trained their neural network with a dataset of 5,094 dark, short-exposure images and an equal number of long-exposure images of the same scene. This taught the algorithm what the scene ought to look like with proper lighting and exposure.

“The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for future work,” the researchers wrote.
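The training idea boils down to showing a network pairs of images: a nearly black, short-exposure shot as input, and a well-lit long-exposure shot of the same scene as the target. The miniature network, loss choice, and random tensors below are placeholder assumptions; the actual paper uses a much larger network operating on packed raw sensor data.

```python
# Sketch: learn to map dark short-exposure raw images to bright references.
import torch
import torch.nn as nn

enhancer = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 4 channels: packed raw input
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),              # output: RGB image
)
optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

for step in range(20):
    short_exposure = torch.rand(2, 4, 128, 128) * 0.05   # nearly black input
    long_exposure = torch.rand(2, 3, 128, 128)           # bright ground truth
    # Amplify the dark input, then let the network learn to denoise and color it.
    loss = loss_fn(enhancer(short_exposure * 100), long_exposure)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```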

Some of the photos used to train the algorithm were taken by an iPhone 6, which means that someday similar technology could be integrated into smartphones. In this day and age, the software can matter just as much as the hardware, if not more, when it comes to snapping quality pictures. Think motion stabilization, lighting correction, and all the gimmicks employed by the cheap camera in your phone, in the absence of which photos would look abhorrent.

Who else is looking forward to using this new technology? Leave your comment below.

The study, titled "Learning to See in the Dark", was published on the preprint server arXiv.


Nightmarish but brilliant blobs — AI-generated nudes would probably make Dali jealous

If you like nudes — and let’s be honest, who doesn’t — the work of one AI may ruin them for you, forever.

AI nude.

Image credits Robbie Barrat / Twitter.

Whether you think they're to be displayed proudly or hoarded, discussed with a blush or a smirk, artsy or in bad taste, most of us would probably agree on what a nude painting should look like. We'd also likely agree that the end piece should be quite pleasing to the eye.

However, all the nude paintings or drawings you’ve ever seen were done by a human trying his best to record the body of another. In this enlightened age of technology and reason, we’re no longer bound by such base constraints. To show us why that’s an exciting development, albeit not necessarily a good one, Stanford AI researcher Robbie Barrat taught a computer to create such works of art. The results are a surreal, unnerving echo of what a nude should look like — but they’re a very intriguing glimpse into the ‘understanding’ artificial intelligence can acquire of the human body.

One day, out of sheer curiosity, Barrat fed a dataset containing thousands of nude portraits into a Generative Adversarial Network (GAN). These are a class of artificial intelligence algorithms used in unsupervised machine learning. They rely on two different neural networks, one called the “generator” and one the “discriminator”, which play an almost endless game of cat-and-mouse.

“The generator tries to come up with paintings that fool the discriminator, and the discriminator tries to learn how to tell the difference between real paintings from the dataset and fake paintings the generator feeds it,” Barrat told CNet’s Bonnie Burton.

“They both get better and better at their jobs over time, so the longer the GAN is trained, the more realistic the outputs will be.”

Barrat explained that sometimes this network can fall into a fail-loop — or "local minimum", if you want to listen to the experts — in which the generator and the discriminator find a way to keep fooling one another without actually getting better at the intended task. Because the system didn't start out stuck in this local minimum, the 'nudes' look vaguely human-like; but since the AI never truly figured out what a human should look like, the paintings are all fleshy blobs with strange tendrils and limbs jutting out at odd angles. The same issue makes the GAN always paint heads the exact same shade of nightmare.
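The cat-and-mouse game Barrat describes translates into a surprisingly short training loop. The sketch below uses toy-sized vectors instead of actual paintings, and none of the sizes or hyperparameters reflect his real model.

```python
# Minimal GAN training loop: a generator tries to produce samples that fool a
# discriminator, which learns to tell them apart from real examples.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, data_dim)              # stand-in for real paintings
    noise = torch.randn(32, latent_dim)

    # Discriminator: label real paintings 1, generated ones 0.
    fake = generator(noise).detach()
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator call its output "real".
    g_loss = bce(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```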

Still, credit where credit is due, the network does always generate very organic-looking shapes; while there’s something indubitably wrong with the bulges and creases under the skin, the AI paintings do feel like renditions of a human being — a twisted, highly surreal, nightmarishly blobby human, but a human nonetheless.

I also find it quite fascinating that Barrat's AI has reached, through sheer loop-error, what many surrealist painters would likely consider an enviable view of the world. Perhaps it's exactly the lack of a proper, solid grounding in what a human body should look like that allows it to create these exotic, unnerving pieces.

You can see more of Barrat’s work via the Twitter handle @DrBeef_ .

Will AI start to take over writing? How will we manage it?

Could robots be taking over writing? Photo taken in the ZKM Medienmuseum, Karlsruhe, Germany.

As artificial intelligence (AI) spreads its wings more and more, it is also threatening more and more jobs. In an economic report issued to the White House in 2016, researchers concluded that there's an 83% chance automation will replace workers who earn $20/hour or less. This echoes previous studies, which found that half of US jobs are threatened by robots, including up to 87% of jobs in Accommodation & Food Services. But some jobs are safer than others. Jobs which require human creativity are safe — or so we thought.

Take writing for instance. In all the Hollywood movies and in all our minds, human writing is… well, human, strictly restricted to our biological creativity. But that might not be the case. Last year, an AI was surprisingly successful in writing horror stories, featuring particularly creepy passages such as this:

#MIRROR: “‘I slowly moved my head away from the shower curtain, and saw the reflection of the face of a tall man who looked like he was looking in the mirror in my room. I still couldn’t see his face, but I could just see his reflection in the mirror. He moved toward me in the mirror, and he was taller than I had ever seen. His skin was pale, and he had a long beard. I stepped back, and he looked directly at my face, and I could tell that he was being held against my bed.”

It wasn’t an isolated achievement either. A Japanese AI wrote a full novel, and AI is already starting to have a noticeable effect on journalism. So just like video killed the radio star, are we set for a world where AI kills writing?

What does it take to be a writer? Is it something that’s necessarily restricted to a biological mind, or can that be expanded to an artificial algorithm?

Not really.

While AIs have had some impressive writing successes, they've also been limited in scope, and they haven't truly exhibited what you would call creativity. In order to do that, the first thing they would need to do is pass the Turing test, in which a computer must be able to trick humans into thinking that it, too, is human. So far, that's proven to be a difficult challenge, and it's only the first step. While AI can process and analyze complex data, it still does not have much prowess in areas that involve abstract, nonlinear and creative thinking. There's nothing to suggest that AIs will be able to adapt and actually start creating new content.

Algorithms, at least in their computational sense, don’t really support creativity. Basically, they work by transforming a set of discrete input parameters into a set of discrete output parameters. This fundamental limitation means that a computer cannot be creative, as one way or another, everything in its output is still in the input. This emphasizes that computational creativity is useful and may look like creativity, but it is not real creativity because it is not actually creating something, just transforming known parameters such as words and sentences.

But to dismiss AI as unable to write would simply be wrong. In advertising, AI copywriters are already being used, and they're surprisingly versatile: they can draft hundreds of different ad campaigns with ease. It will be a long time before we see an AI essay writing service, but we might get there at some point. Google claimed that its AlphaGo algorithm is able to 'create knowledge itself', and it demonstrated that by beating the world champion with a move no one had ever seen before. So it not only learned from humans, it built its own knowledge. Is that not a type of creativity in itself? Both technically and philosophically, there are still a lot of questions to be answered.

AI is here, and it’s here to stay. It will grow and change our lives, whether we want it or not, whether we realize it or not. What we need, especially in science and journalism, is a new paradigm of how humans and AI work together for better results. That might require some creative solutions in itself.

DeepMind can now learn how to use its memories, apply knowledge to new tasks

DeepMind is one step closer to emulating the human mind. Google engineers claim their artificial neural network can now store and use data similarly to how humans access memory.

But we’re one step closer to giving it one.
Image credits Pierre-Olivier Carles / Flickr.

The AI developed by Alphabet, Google's parent company, just received a new and powerful update. By pairing up the neural network's ability to learn with the huge data stores of conventional computers, the programmers have created the first differentiable neural computer, or DNC — allowing DeepMind to navigate and learn from the data on its own.

This brings AIs one step closer to working like a human brain, with the neural network simulating the brain's processing patterns and external data banks supplying vast amounts of information, just like our memory does.

“These models… can learn from examples like neural networks, but they can also store complex data like computers,” write DeepMind researchers Alexander Graves and Greg Wayne in a blog post.

Traditional neural networks are really good at learning to do one task — sorting cucumbers, for example. But they all share a drawback in learning to do something new. Aptly called “catastrophic forgetting”, such a network has to erase and re-write everything it knows before being able to learn something else.

Learn like a human, work like a robot

Our brains don't have this problem because they can store past experience as memories. Your computer doesn't have this problem either, as it can store data on external banks for future use. So Alphabet paired the latter up with a neural network to make it behave like a brain.

The DNC is underpinned by a controller that constantly optimizes the system’s responses, comparing its results with the desired or correct answers. Over time, this allows it to solve tasks more and more accurately while learning how to apply the data it has access to at the same time.
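One small ingredient of a DNC can illustrate how "using memory" works in practice: content-based addressing, where the controller emits a query vector and reads back a softmax-weighted blend of the most similar memory rows. The sketch below shows just that piece, heavily simplified; the real DNC adds write heads, usage tracking, and temporal links on top, and the numbers here are made up.

```python
# Content-based read from an external memory, the simplest piece of a DNC.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, sharpness=10.0):
    """memory: (rows, width) external store; key: (width,) query from the controller."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms          # cosine similarity to each memory row
    weights = softmax(sharpness * similarity)  # soft, differentiable "address"
    return weights @ memory                    # blended read vector

memory = np.random.default_rng(0).normal(size=(8, 4))  # 8 slots, 4 numbers each
key = memory[3] + 0.05                                  # query close to slot 3
print(content_read(memory, key).round(2))               # approximately the contents of slot 3
```

Because the addressing is soft and differentiable, the whole read operation can be trained with gradient descent alongside the controller.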

The results are quite impressive.

After feeding the London subway network into the system, it was able to answer questions which require deductive reasoning — which computers are not good at.

For example, here's one question the DNC could answer: "Starting at Bond street, and taking the Central line in a direction one stop, the Circle line in a direction for four stops, and the Jubilee line in a direction for two stops, at what stop do you wind up?"

While that may not seem like much — a simple navigation app can tell you that in a few seconds — what’s groundbreaking here is that the DNC isn’t just executing lines of code — it’s working out the answers on its own, working with the information it has in its memory banks.

The cherry on top, the DeepMind team stated, is that DNCs are able to store learned facts and techniques, and then call upon them when needed. So once it learns how to deal with the London underground, it can very easily handle another transport network, say, the one in New York.

This is still early work, but it’s not hard to see how this could grow into something immensely powerful in the future — just imagine having a Siri that can look at and understand the data on the Internet just like you or me. This could very well prove to be the groundwork for producing AI that’s able to reason independently.

And I, for one, am excited to welcome our future digital overlords.

The team published a paper titled “Hybrid computing using a neural network with dynamic external memory” describing the research in the journal Nature.

MIT machine makes videos out of still images to predict what happens next

Credit: MIT


When you see an action picture, say a ball in mid-air or a car driving on the highway in the middle of the desert, your mind is very good at filling in the blanks. Namely, it’s a no-brainer that the ball is going to hit the ground or the car will continue to drive in the direction it’s facing. For a machine, though, predicting what happens next can be very difficult. In fact, many experts in the field of artificial intelligence think this is one of the missing pieces of the puzzle which when completed might usher in the age of thinking machines. Not reactive, calculated machines like we have today — real thinking machines that in many ways are indistinguishable from us.

Researchers at MIT are helping bridge the gap in this field with a novel machine learning algorithm that can create videos out of still images.

“The basic idea behind the approach is to compete two deep networks against each other. One network (“the generator”) tries to generate a synthetic video, and another network (“the discriminator”) tries to discriminate synthetic versus real videos. The generator is trained to fool the discriminator,” the researchers wrote.

The system, built from artificial neural networks, was trained by being fed 2 million videos downloaded from Flickr, sorted into four types of scenes: golf, beach, train, and baby. Based on what the neural net learned from these videos, the machine could then complete a still picture by adding self-generated frames, essentially predicting what happens next (the GIF below). The same machine could also generate new videos that resemble the scenes from the still picture (first GIF in this article).

Credit: MIT

The feat, in itself, is terrifically impressive. After all, it’s all self-generated by a machine. But that’s not to say that the neural net’s limitations don’t show. It’s enough to take a close look at the generated animated graphics for a couple seconds to spot all sorts of oddities from deformed babies, to warping trains, to the worst golf swings in history. The MIT researchers themselves identified the following limitations:

  • The generations are usually distinguishable from real videos. They are also fairly low resolution: 64×64 for 32 frames.

  • Evaluation of generative models is hard. We used a psychophysical 2AFC test on Mechanical Turk asking workers “Which video is more realistic?” We think this evaluation is okay, but it is important for the community to settle on robust automatic evaluation metrics.

  • For better generations, we automatically filtered videos by scene category and trained a separate model per category. We used the PlacesCNN on the first few frames to get scene categories.

  • The future extrapolations do not always match the first frame very well, which may happen because the bottleneck is too strong.

We get the idea, though. Coupled with other developments, like another machine developed at one of MIT’s labs that can predict if a hug or high-five will happen, things seem to be shaping up pretty nicely.

via The Verge


NSA’s Skynet might be marking innocent people on its hit list

Between 2,500 and 4,000 so-called 'extremists' have been killed by drone strikes and kill squads in Pakistan since 2004. From as early as 2007, the NSA has targeted terrorists based on metadata supplied by a machine learning program named Skynet. I have no idea who thought naming a machine designed to list people for assassination 'Skynet' was a bright idea, but that's beside the point. The real point is that the inner workings of this software, as revealed in part by documents leaked by Edward Snowden, suggest that the program might be targeting innocent people.

MQ-9 Reaper taxiing. Image: Wikimedia Commons


Ars Technica talked to Patrick Ball, a data scientist and the executive director at the Human Rights Data Analysis Group. Judging from how Skynet works, Ball says the machine seems to be scientifically unsound in the way it chooses which people end up on the blacklist.


In a nutshell, Skynet works like most Big Data corporate machine learning algorithms. It mines the cellular network metadata of 55 million people and assigns a score to each, with the highest scores pointing to terrorist activity. So, based on whom you call, how long the calls take and how frequently you dial a number, where you are and where you move, Skynet can tell if you're a terrorist or not. Swapping SIM cards or phones is judged as activity that's suspiciously linked to terrorism. More than 80 different properties, in all, are used by the NSA to build its blacklist.


So, judging from behaviour alone, Skynet is able to build a list of potential terrorists. But will the algorithm return false positives? In one of the NSA's leaked slides from a presentation of Skynet, engineers from the intelligence agency boasted about how well the algorithm works by including the highest-rated person on the list, Ahmad Zaidan. Thing is, Zaidan isn't a terrorist but Al-Jazeera's long-time bureau chief in Islamabad. As part of the job, Zaidan often meets with terrorists to stage interviews and moves across conflict zones to report. You can see from the slide that Skynet identified Zaidan as a "MEMBER OF AL-QA'IDA." Of course, no kill squad was sent for Zaidan because he is a known journalist, but one can only wonder about the fate of less prominent figures who had the misfortune to fit "known terrorist" patterns.

According to Ball, the NSA is doing 'bad science' by ineffectively training its algorithm. Skynet is trained on a subset of 100,000 randomly selected people, defined by their phone activity, and a group of seven known terrorists. The NSA scientists feed the algorithm the behaviour of six of the terrorists, then ask Skynet to find the seventh in the pool of 100,000.

“First, there are very few ‘known terrorists’ to use to train and test the model,” Ball said. “If they are using the same records to train the model as they are using to test the model, their assessment of the fit is completely bullshit. The usual practice is to hold some of the data out of the training process so that the test includes records the model has never seen before. Without this step, their classification fit assessment is ridiculously optimistic.”
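Ball’s point about held-out data is standard practice in machine learning. Below is a minimal sketch, on synthetic stand-in data, of what a proper holdout evaluation looks like with a random forest (the model family Ball refers to). It is not the NSA’s setup; the feature values are random numbers, and the point is that with only seven positive examples any performance estimate is extremely unstable, which is exactly the criticism.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 100,000 "random" people plus 7 labelled positives,
# each described by 80 behavioural features (all numbers made up).
X = rng.normal(size=(100_007, 80))
y = np.zeros(100_007, dtype=int)
y[:7] = 1

# Hold out part of the data BEFORE training, stratified so the rare positives
# appear in both splits. Evaluating on the same records used for training
# (what Ball criticises) would give a wildly optimistic picture of the fit.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score records the model has never seen; with so few positives,
# the resulting estimate barely means anything.
test_scores = clf.predict_proba(X_test)[:, 1]
print("highest scores on held-out records:", np.sort(test_scores)[-5:])
```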


According to the leaked slides, Skynet has a false positive rate of between 0.008% and 0.18%, which sounds pretty good but is actually enough to put thousands of people on a blacklist. Nobody knows whether the NSA applies manual triage (it probably does), but the risk of ordering hits on innocent people is definitely on the table.
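The arithmetic here is easy to check from the numbers in the leaks: 55 million people and the two quoted false positive rates. The “genuine targets” figure in the last lines is a made-up illustrative number, not a leaked one, included only to show how a tiny base rate swamps even a small error rate.

```python
population = 55_000_000          # people whose metadata Skynet reportedly mines
for fpr in (0.0018, 0.00008):    # leaked false positive rates: 0.18% and 0.008%
    false_positives = population * fpr
    print(f"FPR {fpr:.3%}: ~{false_positives:,.0f} innocent people flagged")

# Even assuming a generous (and entirely hypothetical) 2,000 genuine targets,
# the flagged list is dominated by false positives at the higher rate.
true_positives = 2_000
precision = true_positives / (true_positives + population * 0.0018)
print(f"share of flagged people who are actual targets: {precision:.1%}")
```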

“We know that the ‘true terrorist’ proportion of the full population is very small,” Ball pointed out. “As Cory [Doctorow] says, if this were not true, we would all be dead already. Therefore a small false positive rate will lead to misidentification of lots of people as terrorists.”

“The larger point,” Ball added, “is that the model will totally overlook ‘true terrorists’ who are statistically different from the ‘true terrorists’ used to train the model.”

“Government uses of big data are inherently different from corporate uses,”  Bruce Schneier, a security guru, told Ars Technica. “The accuracy requirements mean that the same technology doesn’t work. If Google makes a mistake, people see an ad for a car they don’t want to buy. If the government makes a mistake, they kill innocents.”

“On whether the use of SKYNET is a war crime, I defer to lawyers,” Ball said. “It’s bad science, that’s for damn sure, because classification is inherently probabilistic. If you’re going to condemn someone to death, usually we have a ‘beyond a reasonable doubt’ standard, which is not at all the case when you’re talking about people with ‘probable terrorist’ scores anywhere near the threshold. And that’s assuming that the classifier works in the first place, which I doubt because there simply aren’t enough positive cases of known terrorists for the random forest to get a good model of them.”

 


IBM has a creepy patent that’s a search engine for your memory

How cool would it be to solve your personal problems the way you search on Google? “Where are my keys?” or “What meds did the vet say I should give my cat?” Well, be careful what you wish for, because there’s a reason a personal search engine doesn’t exist yet: it can only work if you’re under surveillance 24/7. Or … at least while you’re awake.

Image: Flickr

A patent filed by IBM describes such a system. Engineers there claim that it’s quite possible for a personal assistant to help you remember anything you let it see, and it even goes a step further: it will suggest things you might have forgotten, based on your routine, before you even realize you forgot anything. Calling your aunt? The system, which employs machine learning so it constantly updates its model of what matters to you, will tell you via a display or spell it out through a speaker: “Hey, it’s her birthday! Might want to say something nice.” It’s auto-correct … for your life.

“Human memory is not the same as computer memory,” said James Kozloski, an inventor at IBM who focuses on computational and applied neuroscience. “We don’t have pointers. We don’t have addresses where we can just look up the data we need.”

“The idea is quite simple,” Kozloski told The Atlantic. “You monitor an individual’s context, whether it’s what they’re saying or what they’re doing … and you predict what comes next.”

It’s a patent, and if you’ve ever read one you know the details are sketchy. But the hardware could be almost anything: a processor and memory connected to the web, plus whatever peripherals can record audio, video, and maybe even vital signs like temperature, heart rate, and blood pressure. In fact, IBM says that the elderly and people with memory problems would benefit the most.

‘A first example involves an elderly Alzheimer’s patient speaking with a friend at a nursing home. The patient says, “My daughter’s husband’s father just had an accident. Thank goodness his son was there to help . . . you know who I mean . . . ,” and then pauses. The CDA 102 analyzes the patient’s words and attempts to complete the open-ended sentence with the name of the person to whom she is referring. However, in this example, the system has not yet learned that a daughter’s husband’s father’s son is the same as a daughter’s brother-in-law. Although the system knows that the patient’s daughter’s brother-in-law is named Harry, the system is not confident enough to prompt the patient (user) with that information. The system may know such information, for example, because it has encountered this relationship described in a user’s email or because this relationship is learned from an analysis of twitter conversations, voice conversations, medical records, profiles, and/or other user recordings or documents. Such analysis may be done in an opt-in fashion (with permission granted by the user, caregiver, or guardian) to address privacy concerns.’

‘Accordingly, continuing with the example above, the CDA 102 can send a follow-up question to the patient’s caregiver: “Is a daughter’s husband’s father’s son the same as a daughter’s brother-in-law?” As detailed herein, messages can be transmitted via email or cell phone, and choosing which question to ask can be implemented using active learning methodology for selecting the next most informative variable for a query. The caregiver replies “yes,” and from now on, the prompt is available from the CDA 102. Note also that such a prompt can be applied in a variety of other settings, such as composing a memoir, asking a physician a question about experienced symptoms, conducting a business transaction, interacting with a caregiver, etc.’

‘In a second example, a patient’s spouse has suggested that the patient is forgetting things more frequently. Fortunately, the patient has been using the memory enhancement CDA 102 for several years. The doctor reviews the records of prompts and confidence levels over that time and finds that, in fact, a marked drop-off in cognitive ability (or speech motor ability) in the patient began 8 months ago. The doctor can now tailor treatment to a patient who has been experiencing mild cognitive impairment for that time duration.’

‘As also detailed herein, embodiments of the invention can include extensions of use in other areas or contexts. By way of example, aside from helping people with mild to severe cognitive impairment, the CDA 102 may be also useful for people with normal cognitive skills if they wish to enhance their communication abilities or emulate people they admire. As an example, a user may be giving a speech on computing technology. The user may pause, searching for an appropriate word. The CDA 102 can scan a database, the Internet, technical articles, and/or encyclopedias, and communicate a likely useful word or phrase to the user (for example, an audio communication via an ear-piece or visually via a text prompt on a teleprompter or on special eyeglasses).’
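Stripped of patent language, the mechanics described above boil down to two rules: only prompt the user when the assistant is confident, and when it is not, send the most informative clarifying question to a caregiver. Here is a toy sketch of that loop; the names, threshold, and data structure are entirely hypothetical, and the “pick the least certain item first” rule is only a crude stand-in for the active-learning step the patent mentions.

```python
# Toy sketch of the patent's two behaviours: confidence-gated prompting and
# asking the caregiver a clarifying question when confidence is low.
CONFIDENCE_THRESHOLD = 0.8  # made-up threshold

# (candidate completion, assistant's confidence, clarifying question to ask)
candidates = [
    ("Harry", 0.55,
     "Is a daughter's husband's father's son the same as a daughter's brother-in-law?"),
    ("aunt's birthday reminder", 0.92, None),
]

def handle(candidate, confidence, question):
    if confidence >= CONFIDENCE_THRESHOLD:
        print(f"PROMPT USER: {candidate}")
    elif question is not None:
        # Stand-in for active learning: escalate the uncertain case for review.
        print(f"ASK CAREGIVER: {question}")

# Process the least confident items first, mimicking uncertainty-based querying.
for candidate, confidence, question in sorted(candidates, key=lambda c: c[1]):
    handle(candidate, confidence, question)
```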

So, what do you think? Good idea? Hmm…


The ‘Next Big Things’ in Science Ten Years from Now

ZME Science reports the latest trends and advances in science on a daily basis. We believe this kind of reporting helps people keep up with an ever-changing world, while also fueling inspiration to do better.

But it can also get frustrating when you read about 44% efficiency solar panels and you, as a consumer, can’t have them. Of course, there is a lag as the wave of innovation travels from early adopters to mainstream consumers. The first fully functional digital computer, the ENIAC, was unveiled in 1946, but it wasn’t until 1975 that Ed Roberts introduced the first personal computer, the Altair 8800. Think touch screen tech is a new thing? The first touch screen was invented by E.A. Johnson at the Royal Radar Establishment, Malvern, UK, between 1965 and 1967. In the 80s and 90s, companies like Hewlett-Packard and Microsoft introduced several touch screen products with modest commercial success. It wasn’t until 2007, when Apple released the first iPhone, that touch screens really became popular and accessible. And the list goes on.


The point I’m trying to make is that all the exciting stuff we’re seeing coming out of cutting-edge labs around the world will take time to mature and become truly integrated into society. It’s in the bubble stage, and for some the bubble will pop and the tech won’t survive. Other inventions and research might resurface many decades from now.

So, what will the future look like ten years from now? What’s the next big thing? It’s my personal opinion that, given the current pace of technological advancement, these sorts of estimates are very difficult, if not impossible, to make. As such, here are just a few of my guesses as to which technologies, some new, others improved versions of what’s already mainstream today, will become an integral part of society in the future.

The next five years

Wearable devices

A hot trend right now is integrating technology into wearable devices. Glasses with cameras (such as Google Glass) or watches that answer your phone calls (like the Apple Watch) are just a few products that are very popular right now. Industry experts believe we’re just scratching the surface, though.

Thanks to flexible electronics, clothing will soon house computers, sensors, or wireless receivers. But most of these need to connect to a smartphone to work. The real explosion of wearable tech might happen once these are able to break free and work independently.

“Smart devices, until they become untethered or do something interesting on their own, will be too complicated and not really fulfill the promise of what smart devices can do,” Mike Bell, head of Intel’s mobile business, said. “These devices have to be standalone and do something great on their own to get mass adoption. Then if they can do something else once you pair it, that’s fine.”

Internet of Things

In line with wearable devices is the Internet of Things — machines talking to one another, with computer-connected humans observing, analyzing, and acting upon the resulting ‘big data’ explosion. Refrigerators, toasters, and even trash cans could be computerized and, most importantly, networked. One of the better-known examples is Google’s Nest thermostat.

This Wi-Fi-connected thermostat allows you to remotely adjust the temperature of your home via your mobile device and also learns your behavioral patterns to create a temperature-setting schedule. Nest was acquired by Google for $3.2 billion in 2014. Another company, SmartThings, which Samsung acquired in August, offers various sensors and smart-home kits that can monitor things like who is coming in and out of your house and can alert you to potential water leaks to give homeowners peace of mind. Fed by sensors soon to number in the trillions, working with intelligent systems in the billions, and involving millions of applications, the Internet of Things will drive new consumer and business behavior the likes of which we’ve yet to see.

Big Data and Machine Learning

Big data is a hyped buzzword nowadays that’s used to describe massive sets of (both structured and unstructured) data which are hard to process using conventional techniques. Big data analytics can reveal insights previously hidden by data too costly to process. One example is peer influence among customers revealed by analyzing shoppers’ transaction, social, and geographical data.

With more and more information being stored online, especially as the internet of things and wearable tech gain in popularity, the world will soon reach an overload threshold. Sifting through this massive volume is thus imperative, and this is where machine learning comes in. Machine learning doesn’t refer to household robots, though. Instead, it’s a concept much closer to home. For instance, your email has a spam folder where messages that fit a certain pattern are filtered out by an algorithm that has learned to distinguish between “spam” and “not spam”. Similarly, your Facebook feed is filled with posts from your closest friends because an algorithm has learned what your preferences are based on your interactions: likes, comments, shares, and clickthroughs.
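For readers curious what “an algorithm that has learned to distinguish spam from not spam” actually looks like, here is a tiny, self-contained sketch using a naive Bayes text classifier, a common choice for spam filtering (though not necessarily what your email provider uses). The toy messages and labels are made up.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: messages labelled spam (1) or not spam (0).
messages = [
    "WIN a FREE prize, claim your reward now",
    "Cheap meds, limited offer, click here",
    "Lunch tomorrow at noon?",
    "Here are the meeting notes from Monday",
]
labels = [1, 1, 0, 0]

# Bag-of-words features + naive Bayes: learn which words signal spam.
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["Claim your free reward today"]))      # likely [1]
print(spam_filter.predict(["Can we move the meeting to noon?"]))  # likely [0]
```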

Where big data and machine learning meet, an informational revolution awaits, and there’s no field where the transformative potential is greater than medicine. Doctors will be aided by smart algorithms that mine their patients’ records, complete with previous diagnoses and genetic information, and correlate them with the wider body of medical knowledge. For instance, when a cancer patient comes in for treatment, the doctor could be informed that, because the patient carries a certain gene or set of genes, a customized treatment would apply. Amazing!

Cryptocurrency

You might have heard of Bitcoin, but it’s not the only form of cryptocurrency. Today, there are thousands of cryptocurrencies. Unlike government-backed currencies, which are usually regulated and created by a central bank, cryptocurrencies are generated by computers solving computationally intensive puzzles (‘mining’) and rely on decentralized, peer-to-peer networks. While these were just a fad a few years ago, things are a lot more serious now. Shortly after Bitcoin’s creation, one user spent 10,000 Bitcoin on two pizzas. That same amount of bitcoin would be worth about $8 million a few short years later. Today, it’s worth around $63 million.
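“Solving puzzles” is shorthand for proof-of-work mining: repeatedly hashing a block of data until the hash falls below a difficulty target. The sketch below is a drastic simplification for illustration only; real Bitcoin mining hashes a structured block header with double SHA-256 against a far harder target.

```python
import hashlib

def mine(block_data: str, difficulty: int):
    """Find a nonce so the SHA-256 hash starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Easy toy difficulty; real networks require vastly more work per block.
nonce, digest = mine("toy block: Alice pays Bob 1 coin", difficulty=4)
print(nonce, digest)
```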

There’s much debate surrounding cryptocurrency. For instance, because it’s decentralized and anonymous, Bitcoin has been and still is used to fund illegal activities. There’s also always the risk of a computer crash erasing your wallet or a hacker ransacking your virtual vault. Most of these concerns aren’t all that different from the ones we have about traditional money, though, and with time, cryptocurrencies could become very secure.

Driverless cars

In 2012, California became one of the first states to formally legalize driverless cars. The UK is set to follow this year.

Some 1.2 million people worldwide die in car accidents every year. Tests so far have shown that driverless cars are very safe and should greatly reduce motor accidents. In fact, if all the cars on a motorway were driverless and networked, then theoretically no accident should ever occur. Moreover, algorithms would squeeze out the best possible traffic flow, calculating what velocity each car should travel relative to the others so that the whole column moves forward at maximum speed (a toy version of this idea is sketched below). Of course, this would mean that most people would have to give up driving, which isn’t an option for those who enjoy it. Even so, you could get to work alone in the car without a driver’s license. “Almost every car company is working on automated vehicles,” says Sven Beiker, the executive director of the Center for Automotive Research at Stanford.
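The simplest version of such a coordination rule is gap-keeping: each car matches the speed of the car ahead while correcting any spacing error. The toy simulation below uses made-up constants and a deliberately crude control law; it is only meant to show that “mathematical functions calculating velocities” can be a few lines of arithmetic, not anything resembling a real vehicle controller.

```python
# Toy platoon: each follower tracks its predecessor's speed while nudging
# the gap toward a desired spacing. All constants are illustrative.
DT, DESIRED_GAP, GAIN = 0.1, 10.0, 0.5      # seconds, metres, 1/s

positions = [100.0, 85.0, 72.0, 60.0]        # lead car first (metres)
speeds = [25.0, 22.0, 24.0, 20.0]            # metres per second

for _ in range(200):                         # simulate 20 seconds
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i]
        # Speed command: predecessor's speed plus a correction for the gap error.
        speeds[i] = speeds[i - 1] + GAIN * (gap - DESIRED_GAP)
    positions = [p + v * DT for p, v in zip(positions, speeds)]

# Gaps converge toward the desired 10 m spacing.
print([round(positions[i - 1] - positions[i], 1) for i in range(1, len(positions))])
```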

3D printing

A 3D printer reads every slice (a 2D cross-section) of your virtual object and builds the object layer by layer, fusing the layers into a single 3D part. It’s not exactly new. Companies, especially in R&D or the automotive business, have been using 3D printers to make molds and prototypes for more than two decades. What’s new is how this technology has reached ordinary consumers. Nowadays, you can buy a decent 3D printer for less than $600. With it, you can print spare parts for your broken machines, make art, or whatever else suits your fancy.

You don’t even have to know how to design. Digital libraries of 3D parts are growing rapidly, and soon enough you should be able to print whatever you need. The technology itself is also advancing. We’ve seen 3D printed homes, cars, and even ears, and this is just the beginning. Scientists believe they will eventually be able to 3D print functioning organs custom made for each patient, saving millions of lives each year.

Virtual reality

The roots of virtual reality can be traced to the late 1950s, at a time when computers were Goliaths the size of a house. A young electrical engineer and former naval radar technician named Douglas Engelbart saw the computer’s potential as a digital display and laid part of the foundation for virtual reality. Fast forward to today and not that much has become of VR, at least not the way we’ve seen it in movies.

But if we were to try on the proverbial VR goggles, what insight into the future might they grant? Well, you’d see a place for VR that goes far beyond video games of the kind Oculus Rift strives towards. Multi-player VR provides the foundation for a class of students to take a virtual tour of the Egyptian pyramids, for a group of friends to watch the latest episode of “Game of Thrones” together, or for the elderly to experience what it is like to share a visit with grandkids who may be halfway around the world. Where VR might be most useful is not in fabricating fantasies, but in enriching reality by connecting people like never before. It’s terribly exciting.

Genomics

It’s been 10 years since the human genome was first sequenced. In that time, the cost of sequencing a person’s genome has fallen from the $2.7bn spent on the original Human Genome Project to just $5,000! Raymond McAuley, a leading genomics researcher, predicted in a lecture at Singularity University’s Exponential Finance 2014 conference that we will be sequencing DNA for pennies by 2020. When sequencing is applied to a mass population, we will have mass data, and who knows what that data will reveal?

The next ten years

Nanotechnology

There is increasing optimism that nanotechnology applied to medicine and dentistry will bring significant advances in the diagnosis, treatment, and prevention of disease. Many researchers believe scientific devices that are dwarfed by dust mites may one day be capable of grand biomedical miracles.

Donald Eigler is renowned for his breakthrough work in the precise manipulation of matter at the atomic level. In 1989, he spelled the letters IBM using 35 carefully manipulated individual xenon atoms. He imagines one day “hijacking the brilliant mechanisms of biology” to create functional non-biological nanosystems. “In my dreams I can imagine some environmentally safe virus, which, by design, manufactures and spits out a 64-bit adder. We then just flow the virus’s effluent over our chips and have the adders attach in just the right places. That’s pretty far-fetched stuff, but I think it less far-fetched than Feynman in ’59.”

Angela Belcher is widely known for her work on evolving new materials for energy, electronics, and the environment. The W. M. Keck Professor of Energy, Materials Science & Engineering and Biological Engineering at the Massachusetts Institute of Technology, Belcher believes the big impact of nanotechnology and nanoscience will be in manufacturing – specifically clean manufacturing of materials, with new routes to the synthesis of materials, less waste, and self-assembling materials.

“It’s happening right now, if you look at the manufacturing of certain materials for, say, batteries for vehicles, which is based on nanostructuring of materials and getting the right combination of materials together at the nanoscale. Imagine what a big impact that could have in the environment in terms of reducing fossil fuels. So clean manufacturing is one area where I think we will definitely see advances in the next 10 years or so.”

David Awschalom is a professor of physics and electrical and computer engineering at the University of California, Santa Barbara. A pioneer in the field of semiconductor spintronics, Awschalom would like to see the emergence of genuine quantum technology in the next decade or two. “I’m thinking about possible multifunctional systems that combine logic, storage, communication as powerful quantum objects based on single particles in nature. And whether this is rooted in a biological system, or a chemical system, or a solid state system may not matter and may lead to revolutionary applications in technology, medicine, energy, or other areas.”

Graphene

ZME Science has never backed down from praising graphene, the one-atom-thick carbon allotrope arranged in a hexagonal lattice, and for good reason, too. Here are just a few highlights we’ve reported: it can repair itself; it’s the thinnest compound known to us; the lightest material (1 square meter weighs around 0.77 milligrams); the strongest compound discovered (between 100 and 300 times stronger than steel, with a tensile stiffness of 150,000,000 psi); the best conductor of heat at room temperature; and the best conductor of electricity (studies have shown electron mobility values of more than 15,000 cm²·V⁻¹·s⁻¹). It can be used to make almost anything, from aircraft to bulletproof vests ten times more protective than steel to fuel cells. It can also be turned into an anti-cancer agent. Most of all, however, its transformative potential is greatest in electronics, where it could replace poor old silicon, which Moore’s law is pushing to its limits.

Reading all this, it’s easy to hail graphene as the wonder material of the new age of technology that is to come. So, what’s next? Manufacturing, of course. The biggest hurdle scientists are currently facing is producing bulk graphene that is pure enough for industrial applications at a reasonable price. Once this is settled, who knows what will happen.

Mars Colony

After Neil Armstrong’s historic moonwalk, the world was drunk on dreams of conquering space. You’ve probably seen or heard about ‘prophecies’ made during those times about how the world might look in the year 2000. But no, we don’t have moon bases, flying cars, or a cure for cancer just yet.

In time, interest in manned space exploration dwindled, something that is unfortunately reflected in NASA’s present budget. Progress has still been made, albeit not at the pace some might have liked. The International Space Station is a fantastic collaborative effort now nearing two decades of continuous manned operation. Only two years ago, NASA landed the Curiosity rover, which is currently roaming Mars and relaying startling facts about our neighboring planet. By all signs, humans will walk on Mars, and when this happens, as with Armstrong before, a rejuvenated wave of enthusiasm for space exploration will ripple through society. Ultimately, this will be consolidated with a manned outpost on Mars. I know what you must be thinking, but if we’re to lend our ears to NASA officials, this target isn’t that far off. By all accounts, it will most likely happen during your lifetime.

NASA’s powerful Space Launch System rocket is slated to become operational beginning in 2018, testing new abilities for space exploration, like a planned manned landing on an asteroid in 2025. Human missions to Mars will rely on Orion and an evolved version of SLS that will be the most powerful launch vehicle ever flown. Hopefully, NASA will fly astronauts to Mars (marstronauts?) sometime during the 2030s. Don’t get your hopes up too much for Mars One, however.

Wireless electricity

We’ve known about the possibilities for more than a century, most famously demonstrated by the great Tesla during his celebrated lectures. The scientist would hang a light bulb in the air and it would light up, without any wires! The audience was dazzled every time. But this wasn’t a parlor trick, just a matter of inducing a current.

Basically, Tesla relied on sets of huge coils that generated a magnetic field, which in turn induced a current in the light bulb. Voila! In the future, wireless electricity could be as accessible as WiFi is today. Smartphones would charge in your pocket as you wander around, televisions would flicker with no wires attached, and electric cars would refuel while sitting on the driveway. In fact, the technology is already in place. What is required is a huge infrastructure leap: wirelessly charged devices need to be compatible with the charging stations, and this requires a lot of effort from both the charging suppliers and the device manufacturers. We’re getting there, though.

Nuclear Fusion

Nuclear fusion is essentially the opposite of nuclear fission. In fission, a heavy nucleus is split into smaller nuclei. With fusion, lighter nuclei are fused into a heavier nucleus.

The fusion process is the reaction that powers the sun. In the sun, a series of nuclear reactions fuses four hydrogen-1 nuclei (protons) into a helium-4 nucleus, releasing a tremendous amount of energy (the net reaction is sketched after the list below). The goal of scientists for the last 50 years has been the controlled release of energy from a fusion reaction. If the energy from a fusion reaction can be released slowly, it can be used to produce electricity in virtually unlimited quantities. Furthermore, there are no long-lived waste materials to deal with or contaminants to harm the atmosphere. To achieve the nuclear fusion dream, scientists need to overcome three main constraints:

  • temperature (you need to put in a lot of energy to kick off fusion; the hydrogen fuel needs to be heated to around 40,000,000 kelvin, hotter than the core of the sun)
  • time (charged nuclei must be held together close enough and long enough for the fusion reaction to start)
  • containment (at those temperatures the fuel is a plasma that no material wall can hold, so confining it, typically with magnetic fields, is a major challenge).
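For reference, the net result of the sun’s proton-proton chain can be written in a single line. The roughly 26.7 MeV figure is the standard textbook value for the energy released per helium-4 nucleus formed, and it comes from the small mass difference between four protons and a helium nucleus via E = Δm c²:

```latex
4\,{}^{1}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + 2e^{+} + 2\nu_{e},
\qquad E = \Delta m \, c^{2} \approx 26.7\ \mathrm{MeV}
```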

Though other projects exist elsewhere, nuclear fusion today is championed by the International Thermonuclear Experimental Reactor (ITER) project, first proposed in 1985, when the Soviet Union suggested to the U.S. that the two countries work together to explore the peaceful applications of nuclear fusion. Since then, ITER has ballooned into a 35-country collaboration with an estimated $50 billion price tag.

Key structures are still being built at ITER; when ready, the reactor will stand 100 feet tall, weigh 23,000 tons, and house a core hotter than the sun. Once turned on (hopefully successfully), ITER could help solve the world’s energy problems for the foreseeable future and help save the planet from environmental catastrophe.


Researchers devise AI that allows machines to learn just as fast as humans

Computers can compute a lot faster than humans, but they’re pretty dumb when it comes to learning. In fact, machine learning itself is only now beginning to take off, with real results finally showing up. A team from New York University, MIT, and the University of Toronto is now leveling the field, though. The researchers have devised an algorithm that allows computers to recognize patterns a lot faster and with far less data than previous approaches required.

Image shows 20 different people drawing a novel character (left) and the algorithm predicting how those images were drawn (right)

When you tag someone in photos on Facebook, you might have noticed that the social network can recognize faces and suggest who you should tag. That’s pretty creepy, but also effective. Impressive as it is, however, it took millions and millions of photos, and countless trials and errors, for Facebook’s DeepFace algorithm to take off. Humans, on the other hand, have no problem distinguishing faces. It’s hard-wired into us. See a face once and you’ll remember it for a lifetime: that’s the level of pattern recognition and retrieval the researchers were after.

The framework the researchers presented in their paper is called Bayesian Program Learning (BPL). It can classify objects and generate concepts about them using a tiny amount of data, mirroring the way humans learn.

Humans and machines were given an image of a novel character (top) and asked to produce new versions. A machine generated the nine-character grid on the left. Image: Jose-Luis Olivares/MIT (figures courtesy of the researchers)

“It has been very difficult to build machines that require as little data as humans when learning a new concept,” Ruslan Salakhutdinov, an assistant professor of computer science at the University of Toronto, said in a news release. “Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science.”

BPL was put to the test by being presented with 20 handwritten characters from 10 different alphabets. Humans also performed the test as a control. Both human and machine were asked to match a letter to the same character written by someone else. BPL scored 97%, about as well as the humans and far better than other algorithms. For comparison, a deep (convolutional) learning model scored about 77%.
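To make the task concrete: in a one-shot matching test, the machine sees a single reference example for each candidate character and must decide which one a new query matches. The sketch below uses a crude nearest-neighbour baseline on raw pixels with synthetic stand-in images, which is nowhere near BPL (or a human), but it shows the shape of the problem BPL solves with so little data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 20 reference characters and one query, as 28x28 binary images.
references = rng.integers(0, 2, size=(20, 28, 28))     # one example per class
query = references[7] ^ (rng.random((28, 28)) < 0.05)  # class 7, lightly corrupted

# Nearest neighbour in raw pixel space: pick the reference closest to the query.
# BPL instead reasons about how the character was drawn, stroke by stroke,
# which is why it can learn from a single example so effectively.
distances = np.linalg.norm(
    references.reshape(20, -1) - query.reshape(1, -1).astype(float), axis=1
)
print("predicted class:", int(np.argmin(distances)))   # should print 7
```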


[ALSO READ] Machine learning used to predict crimes before they happen

BPL also passed a visual form of the Turing test by drawing letters that most humans couldn’t distinguish from a human’s handwriting. The Turing test was proposed by British scientist Alan Turing in 1950 as a way to test whether an artificial intelligence or computer program can fool humans into believing its output was made by a human.

“I think for the more creative tasks — where you ask somebody to draw something, or imagine something that they haven’t seen before, make something up — I don’t think that we have a better test,” Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines, told reporters on a conference call. “That’s partly why Turing proposed this. He wanted to test the more flexible, creative abilities of the human mind. Why people have long been drawn to some kind of Turing test.”

“We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts – even simple visual concepts such as handwritten characters – in ways that are hard to tell apart from humans,” Tenenbaum added in the news release.

“What’s distinctive about the way our system looks at handwritten characters or the way a similar type of system looks at speech recognition or speech analysis is that it does see it, in a sense, as a sort of intentional action … When you see a handwritten character on the page what your mind does, we think, [and] what our program does, is imagine the plan of action that led to it, in a sense, see the movements, that led to the final result,” said Tenenbaum. “The idea that we could make some sort of computational model of theory of mind that could look at people’s actions and work backwards to figure out what were the most likely goals and plans that led to what [the subject] did, that’s actually an idea that is common with some of the applications that you’ve pointed to. … We’ve been able to study [that] in a much more simple and, therefore, more immediately practical and actionable way with the paper that we have here.”

The research was funded by the military to improve its ability to collect, analyze, and act on image data. Like much military-funded research, though, it will surely find civilian uses as well.


Microsoft scans photos to guess what your feelings are

Under an umbrella initiative called Project Oxford, the leading software corporation has developed a suite of APIs designed to guess user intent, personality, and emotions. This includes a tool that can guess your age from a photo, natural language processing algorithms, and, most recently, an app that can guess emotions from an uploaded photo.

The tool was announced at Microsoft’s Future Decoded conference in the UK. Basically, the software analyzes a given photo (at least 36 pixels square and smaller than 4MB in size, for now), identifies a face, and gives a score for each of the emotions it manages to interpret. The highest value, or best guess, is shown first. You can try it yourself here. I uploaded some photos and shared the results below, after a quick sketch of how such an API is typically called.
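Under the hood, this kind of tool is just an HTTP endpoint: you POST the image bytes along with an API key and get back per-face emotion scores, then pick the largest. The sketch below is a hypothetical illustration; the URL, header names, and response shape are assumptions based on how cloud vision APIs of this kind typically work, so check Microsoft’s own documentation before relying on any of it.

```python
import requests

# Hypothetical endpoint and key; consult the official docs for the real values.
EMOTION_API_URL = "https://example.cognitive.microsoft.com/emotion/recognize"
API_KEY = "your-subscription-key"

with open("face.jpg", "rb") as f:
    response = requests.post(
        EMOTION_API_URL,
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,        # typical Azure-style auth header
            "Content-Type": "application/octet-stream",  # raw image bytes
        },
        data=f.read(),
    )

# Assumed response shape: one entry per detected face, each with emotion scores.
for face in response.json():
    scores = face["scores"]
    best_guess = max(scores, key=scores.get)   # highest-scoring emotion first
    print(best_guess, round(scores[best_guess], 3))
```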


More like petulant, but pretty accurate.

 

Must be some silent Samurai stare the Microsoft software is interpreting.

Frankly, I expected this photo would crash the software. Can’t get more neutral than Steven Seagal.

A perfect "happy" score.

A perfect “happy” score.

Obviously, this isn’t perfect, but since it’s based on machine learning, the software will only get better as more and more people upload photos. Share some of your results in the comment section. This should be fun.


Machine learning used to predict crimes before they happen – Minority Report style

The word on every tech executive’s lips today is data. Curse or blessing, there’s so much data lying around – about 2.5 quintillion bytes are added each day – that it’s become increasingly difficult to make sense of it in a meaningful way. There’s a solution to the big data problem, though: machine learning algorithms that get fed countless variables and spot patterns that would otherwise elude humans. Researchers have already made use of machine learning to solve challenges in medicine, cosmology and, most recently, crime. Tech giant Hitachi, for instance, has developed a machine learning interface reminiscent of Philip K. Dick’s Minority Report that can predict when, where, and possibly who might commit a crime before it happens.

Machines listening for crime

Screenshot from the movie Minority Report.

It’s called Visualization Predictive Crime Analytics (PCA) and, while it hasn’t been tested in the field yet, Hitachi claims it works by gobbling up immense amounts of data from key sensors layered across a city (like those that listen for gunshots), weather reports, and social media to predict where crime is going to happen next. “A human just can’t handle when you get to the tens or hundreds of variables that could impact crime,” says Darrin Lipscomb, who is directly involved in the project, “like weather, social media, proximity to schools, Metro [subway] stations, gunshot sensors, 911 calls.”
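Hitachi has not published PCA’s internals, but the general recipe described in that quote (many weak signals per area and time window, combined into a risk score) can be sketched with any off-the-shelf classifier. Everything below, from the feature names to the synthetic data and the choice of logistic regression, is illustrative only and is not Hitachi’s system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One row per (neighbourhood, hour): made-up counts for the kinds of signals
# mentioned in the article: gunshot sensors, 911 calls, weather, social media,
# and proximity to a Metro station.
n = 5_000
X = np.column_stack([
    rng.poisson(0.2, n),      # gunshot-sensor alerts
    rng.poisson(1.5, n),      # 911 calls
    rng.normal(15, 8, n),     # temperature
    rng.poisson(3.0, n),      # flagged social-media posts
    rng.integers(0, 2, n),    # near a Metro station?
])
# Synthetic labels: was a crime reported in that area/hour?
logits = 0.8 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 3] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)
# Risk scores for new area/hour feature vectors; dispatch could rank areas by these.
print(model.predict_proba(X[:3])[:, 1])
```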

Real footage of the Hitachi crime predicting interface which officers might use. Image: Hitachi

Police nowadays use all sorts of tools to either intervene rapidly when a crime is taking place or pick up cues and sniff out leads that might help them avert one. For instance, officers might use informers, scour social media for gang altercations, or map thefts to predict when the next one might take place. This is a cumbersome process, and officers are only human after all: they will surely miss valuable hints a computer could easily draw out. Of course, the reverse is also true, and often the case in fact, but if we’re talking about volume – predicting thousands of possible felonies every single day in a big city – the machine will beat even the most astute detective.

PCA is supposedly particularly effective at scouring social media, which Hitachi says improves accuracy by 15%. The company used a natural language processing algorithm to teach its machines how to understand colloquial text or speech posted on Facebook or Twitter. It knows, for instance, how to pull out geographical information and tell whether a drug deal might take place in a neighborhood.
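The text-mining half of that claim can be approximated with standard NLP tooling: pull place names out of a post with a named-entity recogniser and flag posts that also contain terms of interest. The sketch below uses the open-source spaCy library as a generic illustration, not Hitachi’s proprietary pipeline, and the watch-list terms are obviously made up.

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

FLAG_TERMS = {"deal", "score", "meet up"}   # hypothetical watch-list terms

def scan_post(text: str):
    """Return place names mentioned in the post and whether it matches a watch term."""
    doc = nlp(text)
    places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC", "FAC")]
    flagged = any(term in text.lower() for term in FLAG_TERMS)
    return places, flagged

print(scan_post("Meet up near Dupont Circle tonight, usual deal"))
```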

Officers would use PCA’s interface – quite reminiscent of Minority Report, again – to see which areas are more vulnerable. A colored map shows where cameras and sensors are placed in a neighborhood and alerts the officer on duty if there’s a chance a crime might take place there, be it a robbery or a gang brawl. Dispatch would then send officers to the area to intervene or deter would-be felons from engaging in criminal activity.

PCA provides a highly visual interface, with color-coded maps indicating the intensity of various crime indicators. Image: Hitachi

In any event, this is not evidence of precognition. The platform just flags vulnerable neighborhoods and alerts officers to a possible crime. You might have heard about New York City’s stop-and-frisk practice, in which people deemed suspicious are searched for guns or drugs. PCA works fundamentally differently, since it actually gives officers something to start with – at the very least, a more focused lead. “I don’t have to implement stop-and-frisk. I can use data and intelligence and software to really augment what police are doing,” Lipscomb says. Of course, this raises the question: won’t this lead to innocent people being targeted on mere suspicion fed by a computer? Well, just look at stop-and-frisk. More than 85% of those searched on New York’s streets are either Latino or African-American. Even if you account for differences in crime rates across ethnic groups, stop-and-frisk is clearly biased. The alternative sounds a lot better, since police would at least have data-driven grounds for whom they target.

Hitachi’s crime prediction tool will soon be tested in six large US cities, which the company has declined to name. The trials will be double-blinded, meaning police will go about business as usual while the machine runs in the background. Hitachi will then compare the crimes the police report with the crimes the machine predicted might happen. If the two overlap beyond a statistical threshold, then you have a winner.