Tag Archives: artificial intelligence

India’s first political deepfake during elections is deeply concerning

Deepfakes are AI-generated fake videos depicting individuals who, in reality, never appeared in the staged scene. This 21st-century “Photoshop” has the potential to greatly manipulate public opinion. This is evident in two party-approved deepfake videos featuring Bharatiya Janata Party (BJP) member Manoj Tiwari criticizing the incumbent Delhi government of Arvind Kejriwal. It’s the first time a deepfake video designed for political motives has been identified in India — and it won’t be the last.

Credit: YouTube.

Officials from BJP partnered with political communications firm The Ideaz Factory to employ deepfakes in order to reach different linguistic voter bases.

Although the official languages in India are Hindi and English, there are actually 22 major languages in India, written in 13 different scripts, with over 720 dialects. In a country of 1.3 billion people, politicians cannot ignore voters who exclusively speak another dialect.

In the original video, Manoj Tiwari made a brief political statement accusing the current Delhi leadership of making false promises to their electorate.

The original was then deepfaked in English and Haryanvi, a popular Hindi dialect spoken in Delhi.

According to Vice, the two deepfakes were shared across 5,800 WhatsApp groups in the Delhi region, reaching around 15 million people.

Deepfakes are the reason why you can see Obama calling Trump a “complete dipshit” or Mark Zuckerberg bragging about having “total control of billions of people’s stolen data”. These statements were never made in reality, but they show the tremendous power modern machine learning algorithms have to spread fake news. Imagine someone putting words in your mouth and making it all seem eerily genuine.

According to Deeptrace, an AI firm, there were over 15,000 deepfake videos online in September 2019, double the number from just nine months earlier. A staggering 96% of them are porn deepfakes that map the faces of female celebrities onto the bodies of porn stars. Then there are deepfakes made as spoofs or satire. And, of course, there are also deepfakes used for political reasons.

In this particular case, the deepfakes were approved by Manoj Tiwari’s party to serve as a sort of high-tech dubbing, in which the speaker’s lips and facial expressions are synced with the novel audio that utters words Tiwari had never spoken.

This, in itself, might sound somewhat innocent. However, where can you draw the line between what’s nefarious and what isn’t when weaponizing deepfake tech during political campaigns and elections?

As deepfakes become increasingly harder to spot, the danger they pose to democracy and journalism cannot be overstated. A lie can travel halfway around the world while the truth is putting on its shoes. By the time a deepfake is exposed as a ruse, many people will have already formed an opinion based on the fake.

Expect troubled times ahead, especially as the high-stakes US presidential election in November approaches. The solution? Social networks need to keep up and employ equally powerful AI to filter and flag potential deepfakes — but that’s easier said than done. What’s truly worrisome is that many of these deepfake algorithms are freely available online, and people who don’t even know how to code can easily use them to make their own fake videos.

Ultimately, people need to be aware that these things exist and should become more skeptical of what they come across online.

Why healthcare professionals need to understand AI

Artificial intelligence (AI) is becoming increasingly sophisticated at completing tasks that humans usually do, but more efficiently, more quickly, and at a lower cost. This offers huge potential across all industries. In healthcare, it holds particular value, as it impacts patient care and wellbeing as well as the bottom line.

The growing role of AI

Indeed, forecasts predict that medical uses of AI will be present in 90% of hospitals in the near future and could replace as much as 80% of what doctors currently do. Investor Tej Kohli expects AI applications in healthcare to contribute three to four times more to global output than the Internet, which currently accounts for $50 trillion of the global economy.

There is clear, untapped potential in using AI. But for it to be fully utilised, the people in charge and those implementing it must have a decent grasp of its opportunities and limitations. That means that doctors, nurses, and other healthcare professionals must get to grips with AI and its many subsets.

Many uses for AI

The uses of AI in healthcare are seemingly endless. They span the full spectrum of patient care and treatment, from drug discovery and repurposing to clinical trials, treatment adherence and remote monitoring. AI’s particular strength lies in repetitive, computerised work that can be easily automated. With AI doing the legwork, practitioners are freed up to focus on human tasks like speaking with patients.

Matching donors and patients

Some notable examples of AI’s potential include organ donation. Matching patients with donors can be a time-consuming and inaccurate process. With AI, more matches can be carried out in a short timeframe than when a human has to manually scour the donor and patient databases or search for a suitable family member. Patients can also find donors among a much wider pool of possible contacts, not just close family members, because AI can quickly link donors to patients based on a wide range of factors beyond blood type and relation.

Preventative care

Another huge benefit comes in preventative care. Consumer health applications and the Internet of Things (IoT) are helping people track their lifestyle and fitness activities. This encourages them toward healthier behaviour and proactive health management, while also putting them in control of their own health and wellbeing.

Better data

IoT devices like the Apple Watch can also, in theory, provide healthcare professionals with timely and accurate data. Blood pressure information, for example, can be tracked throughout the day without the potential of ‘white coat syndrome’ skewing the results. With this data in hand and AI to analyse it, professionals can provide more tailored care: better advice, feedback and guidance on treatments, and a clearer picture of which medicines are working.

Working together across disciplines

Of course, this is but a snapshot of what AI is achieving in medical science, and much more can be done when researchers, doctors, data scientists and other frontline health workers collaborate on problems and solutions. Ultimately, no data scientist can fully understand the unique environment of a hospital or doctors’ surgery; and, vice versa, healthcare professionals aren’t going to know all the ins and outs of algorithms and machine learning.

That’s not to say that a general understanding of AI isn’t important for healthcare professionals. To work effectively with data science teams, there must be a baseline understanding within the healthcare sector of the key concepts and trends in AI.

The benefits of understanding AI

There are additional benefits to knowing a bit about AI. First, healthcare leaders can make more informed decisions about AI investments and the infrastructure required. This can help projects align with the organisation’s wider goals and also ensure that costs don’t spiral.

If doctors understand the abilities of a particular AI tool, they can also use it effectively in making decisions, diagnoses and prioritising tasks. They can use a tool to identify patients at risk of developing a specific condition, for example.

Changing culture and steering the direction

Additionally, having more of a grasp of AI can change the culture around adopting such technology. Typically, the sector has lagged behind in accepting emerging technology, as was the case with electronic health records. But embracing it early can push innovation and progress further, shaping it in a way that suits healthcare professionals, patients and the sector as a whole.

As MIT economists Andrew McAfee and Erik Brynjolfsson state, “So we should ask not ‘What will technology do to us?’ but rather ‘What do we want to do with technology?’ More than ever before, what matters is thinking deeply about what we want. Having more power and more choices means that our values are more important than ever.”

Patient communication

It can also help to reassure patients. Machine learning tools are increasingly being used in clinical settings and having a doctor with an understanding of such tools will lead to more thorough discussions. Some patients may wish to know how an AI has come to a specific decision. Doctors will have to communicate the training a machine has undertaken, the data it has been trained with and the algorithms powering its decision-making.

In any case, most patients still prefer human-to-human interactions when talking about their symptoms, test results and prognosis. AI is still mistrusted by many people, partly because they don’t understand how it works or how accurate it is. They also feel that an AI doesn’t take in their ‘uniqueness’ and personal experience of a disease. With a well-informed doctor explaining these things, their fears can be put to rest and they can move on to their treatment and care.

As vital as medical knowledge

As AI becomes mainstream in the healthcare setting, the onus is on healthcare professionals to invest in their AI education. Failing to understand AI means falling short of patient expectations: people cannot be treated effectively if their physician doesn’t know how their AI-powered tools work. In the future, understanding AI will hold the same importance for practitioners as medical knowledge itself.

So it’s worth learning about AI now and keeping up with trends in the industry, for the good of your career as well as your patients.

AI upscales iconic 1895 film to 4K 60fps and the results are amazing

L’arrivée d’un train en gare de La Ciotat.

The year is 1896 and a huge crowd is gathered inside the back room of a Parisian café, where the famous Lumière brothers promised a spectacle of moving images. In effect, this was the world’s first movie theater, dazzling an audience that was still coming to grips with the idea of photography.

One of the earliest movies ever shot and screened by the Lumière brothers is the iconic The Arrival of a Train (L’arrivée d’un train en gare de La Ciotat). According to some accounts, the audience was so overwhelmed by the moving image of a life-sized train coming directly at them that some people screamed and ran to the back of the room. However, this seems to be more of a myth than an accurate account of what happened. Nevertheless, the film must have astonished many people unaccustomed to the illusion created by moving images.

The 1895 short black-and-white silent flick lasts only 45 seconds and features a train’s arrival at the station of the French town of La Ciotat. Though it might not look like much today, bear in mind that this was one of the first films ever produced, shot in a cinematic style pioneered by the two brothers and known as actualités, or ‘actualities’ — brief bites of film.

Cinematograph Lumiere advertisement 1895. Credit: Wikimedia Commons.

The short film was shot with a cinématographe created by the Lumière brothers, which was an all-in-one combination of a motion picture camera, printer, and projector.

Since then, camera gear technology has evolved tremendously. Novel AIs allow us to see what the film would have looked like if the French brothers had used modern filming equipment. Using several neural networks, Denis Shiryaev upscaled the iconic black-and-white film to 4K quality at 60 frames per second, and you can see the breathtaking results for yourself.

https://www.youtube.com/watch?time_continue=21&v=3RYNThid23g&feature=emb_title

And here’s the 1895 original for a side-by-side comparison.

To upscale the footage to 4K, Shiryaev used Gigapixel AI, while the boost to 60 frames per second was made possible by DAIN, a depth-aware frame interpolation neural network.
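For a sense of where the extra frames come from, here is a minimal sketch of the crudest possible frame interpolation: blending consecutive frames with OpenCV. DAIN uses a learned, motion-aware model rather than a simple cross-fade, and the input filename below is hypothetical.

```python
# Naive frame interpolation: double the frame rate by inserting a 50/50
# blend between every pair of consecutive frames. A stand-in illustration
# only; DAIN estimates motion and depth instead of blending pixels.
import cv2

cap = cv2.VideoCapture("train_arrival.mp4")   # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS)
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
out = cv2.VideoWriter("interpolated.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps * 2, size)

ok, prev = cap.read()
while ok:
    ok, nxt = cap.read()
    out.write(prev)
    if not ok:
        break
    out.write(cv2.addWeighted(prev, 0.5, nxt, 0.5, 0))  # the in-between frame
    prev = nxt

cap.release()
out.release()
```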

That’s not all. On top of all of this, the YouTuber used the DeOldify Neural Network to colorize the film, which you can see below.

AI: a tool both for detecting and enhancing student plagiarism

Credit: Wikimedia Commons.

Prior to the invention of the internet, plagiarism was a task almost as labor-intensive as producing original work. The would-be plagiarist had to scour libraries for relevant works and then carefully select, copy, and modify passages of interest. Now, students who barely know how to write an assignment can get passing grades with literally no work, simply by accessing the paid essay services that are so ubiquitous online. Schools and universities have adapted too, but they have struggled to keep up with ever more ingenious ways of cheating.

What to do? Of course, it’s time to use artificial intelligence — after all, that’s what everybody seems to be turning to now when faced with any problem.

And it seems like good business too — a $1.735 billion business, in fact. That’s how much Advance — a huge media and tech company that, among other things, owns Conde Nast — paid to purchase Turnitin.

Turnitin offers various AI-assisted tools in the edu-space, from automated grading to machine learning-enabled student feedback. One of its most important platforms, however, deals with plagiarism detection.

The platform, called Authorship Investigate, is meant to help high school and university educators spot at-risk students, check papers for plagiarism, and devise remediation plans. According to Turnitin, the company’s research is trying to replicate the ‘gut feeling’ a marker gets when they suspect a student is cheating.

In a recent study, Authorship Investigate was able to detect 59% of all cheating cases. The machine learning software used sentence complexity, sentence length, and other stylometric features, as well as document metadata such as the dates a file was created and last modified, to detect cheating. Without AI, markers were only able to detect 48% of cheating instances.
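Turnitin has not published its model, but the general idea behind stylometric features is easy to illustrate. The sketch below (the specific features are my own illustrative choices, not Turnitin's) extracts a few such signals from a piece of text; comparing these numbers across a student's past submissions is the kind of evidence a detector can flag.

```python
# A toy stylometric profile: average sentence length, its variability,
# average word length, and vocabulary richness.
import re
import statistics

def stylometric_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    sentence_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "avg_sentence_length": statistics.mean(sentence_lengths),
        "sentence_length_stdev": statistics.pstdev(sentence_lengths),
        "avg_word_length": statistics.mean(len(w) for w in words),
        "vocabulary_richness": len(set(w.lower() for w in words)) / len(words),
    }

essay = "The results were conclusive. However, further replication is warranted."
print(stylometric_features(essay))
```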

“Whilst Authorship Investigate was in early stages of development when this study was conducted, we’re pleased to see the value of the tool in the detection process, in bringing together all submissions made by a student and allowing rapid scanning of key points of evidence,” adds Turnitin principal product manager Mark Ricksen.

As in any arms race, however, AI can also be used to enhance cheating. For instance, there’s OpenAI’s GPT-2 algorithm, which is so powerful it can generate convincingly human text starting from a short prompt. In one study, GPT-2 generated articles that were almost as convincing as genuine New York Times articles (72% of respondents rated the GPT-2 samples as “credible”, compared to 83% for the New York Times).

So, like most other things, AI can be used by both sides of the playing field. Perhaps the most effective way to tackle cheating in schools is to have students understand that taking shortcuts in their education will ultimately hurt them in the long-term. But that’s easier said than done in our modern quick-fix society.

AI learns to play chess by studying game commentaries instead of practicing

Credit: Pixabay.

From Alan Turing writing the first computer program for chess in 1951 (entirely on paper) all the way to Garry Kasparov’s infamous loss at the proverbial hand of IBM’s Deep Blue supercomputer in 1997, chess has always been used as an indicator of progress for computers. Today, artificial intelligence systems are so advanced that humans barely stand a chance of beating them. Google’s AlphaZero is a prime example; it started out knowing only the rules of chess and nothing more — no opening and closing moves, no libraries, nada. In a matter of hours, it had already played more games against itself than have ever been recorded in human chess history.

In a new study, researchers in artificial intelligence at University College London have yet again turned to chess. Only this time, their machine learning program didn’t practice millions of games to master chess but rather analyzed the language of expert commentators. Someday, the researchers say that a similar approach could allow machines to decipher emotional language and acquire skills which would have otherwise been inaccessible through ‘brute force’.

First, the researchers went through 2,700 chess game commentaries, which were pruned so that ambiguous or uninteresting moves were removed. They then employed a recurrent neural network —  a type of neural network where the output from the previous step is fed as input to the current step — and a mathematical technique called word embeddings to parse the language of the commentators.
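The paper's code is not reproduced here, but the general shape of such a model, word embeddings feeding a recurrent network that scores the sentiment of a commentary snippet, can be sketched in a few lines of PyTorch. The vocabulary size, layer dimensions, and single "good move vs. bad move" output below are illustrative assumptions, not SentiMATE's actual configuration.

```python
# A minimal embedding + LSTM sentiment scorer for chess commentary.
import torch
import torch.nn as nn

class CommentarySentiment(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word embeddings
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)                # one sentiment logit

    def forward(self, token_ids):
        emb = self.embed(token_ids)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.rnn(emb)          # final hidden state of the LSTM
        return self.head(h_n[-1])            # score: was the move praised or panned?

model = CommentarySentiment()
tokens = torch.randint(0, 10000, (4, 12))    # 4 dummy tokenized commentary snippets
print(model(tokens).shape)                   # torch.Size([4, 1])
```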

The algorithm, called SentiMATE, worked out the basic rules of chess as well as several key strategies — including forking and castling — all by itself. On the flip side, it played quite poorly, at least compared to a grandmaster-level AI.

“We present SentiMATE, a novel end-to-end Deep Learning model for Chess, employing Natural Language Processing that aims to learn an effective evaluation function assessing move quality. This function is pre-trained on the sentiment of commentary associated with the training moves and is used to guide and optimize the agent’s game-playing decision making. The contributions of this research are three-fold: we build and put forward both a classifier which extracts commentary describing the quality of Chess moves in vast commentary datasets, and a Sentiment Analysis model trained on Chess commentary to accurately predict the quality of said moves, to then use those predictions to evaluate the optimal next move of a Chess agent,” the authors wrote.

High-level performance was not its objective, though. Where SentiMATE shines is in its ability to use language to acquire a skill instead of practicing it, thus requiring less data and computing power than conventional approaches. AlphaZero, for instance, requires thousands of “little brains” — specialized chips called Tensor Processing Units (TPUs) — and millions of practice games to master games such as chess, shogi, or Go.

In a world with millions of books, blogs, and studies, machines like SentiMATE could find many practical applications. Such a machine, for instance, could learn to predict financial activities or write better stories simply by tapping into the sum of human knowledge.

SentiMATE was described in a paper published on the preprint server arXiv.

‘Smart’ glass recognizes numbers without the need for sensors or even electrical power

A new type of glass can identify numbers all by itself by bending light in specific ways. Credit: Zongfu Yu.

Many phones can now be unlocked with face ID, a technology that represents the pinnacle of computer vision and artificial intelligence — but one which also uses significant computing resources and battery life. Imagine a future, however, where the same function could be achieved with a single piece of glass that can recognize your face or other imagery without using any sensors or any power at all. It sounds like science fiction, but a team of creative engineers at the University of Wisconsin-Madison has recently demonstrated such a “smart” glass.

In other words, researchers managed to embed artificial intelligence inside an inert object. The novel approach provides a low-tech alternative to traditional digital artificial vision.

The researchers led by Zongfu Yu, a professor of electrical and computer engineering, designed translucent glass with tiny bubbles and impurities embedded at strategic locations.

“We’re using optics to condense the normal setup of cameras, sensors and deep neural networks into a single piece of thin glass,” he said in a statement.

As a proof of concept, Yu and colleagues designed a glass that can identify handwritten numbers. Light reflected off an image of a number enters one end of the glass and then focuses onto one of nine spots on the other side, each corresponding to an individual digit. Even when a handwritten “3” was altered to become an “8”, the system was dynamic enough to recognize the new digit. How fast? As fast as the speed of light, the fastest thing there is.

“The fact that we were able to get this complex behavior with such a simple structure was really something,” says Erfan Khoram, a graduate student in Yu’s lab.

The system shines due to the fact that it is completely self-contained and all the “computational” machinery is embedded inside it. There is zero latency because there is no need to send information to the cloud for processing. There is also no need for electrical power.

“We could potentially use the glass as a biometric lock, tuned to recognize only one person’s face,” says Yu. “Once built, it would last forever without needing power or internet, meaning it could keep something safe for you even after thousands of years.”

Zongfu Yu (left), Ang Chen (center) and Erfan Khoram (right). Credit: Sam Million-Weaver.

According to the researchers, this is an example of analog artificial vision. Designing the glass was similar to machine-learning training processes used by artificial neural networks — except that the training was done on an analog material rather than digital information. The tweaking was performed by embedding air bubbles of different sizes and shapes, as well as light-absorbing materials like graphene, at specific locations inside the glass.

“We’re accustomed to digital computing, but this has broadened our view,” says Yu. “The wave dynamics of light propagation provide a new way to perform analog artificial neural computing.”

In the future, the researchers plan to test their approach for more complex image recognition, such as facial recognition.

“The true power of this technology lies in its ability to handle much more complex classification tasks instantly without any energy consumption,” says Ming Yuan, a collaborator on the research and professor of statistics at Columbia University. “These tasks are the key to create artificial intelligence: to teach driverless cars to recognize a traffic signal, to enable voice control in consumer devices, among numerous other examples.”

“We’re always thinking about how we provide vision for machines in the future, and imagining application specific, mission-driven technologies.” says Yu. “This changes almost everything about how we design machine vision.”


AI is better at diagnosing skin cancer than even some of the best human experts

Credit: Flickr, Many Wonderful Artists / Public Domain.


An international team of researchers has shown for the first time that artificial intelligence is better at diagnosing melanoma than human doctors. This particular form of machine learning, known as a deep learning convolutional neural network (CNN), was able to make more correct diagnoses and fewer misdiagnoses than some of the world’s most capable dermatologists.

Man vs machine

The CNN starts off as a blank slate. In order to teach the artificial neural network how to identify skin cancer, the researchers fed it a dataset of over 100,000 images of malignant melanomas and benign moles. With each iteration, it learned patterns of features characteristic of malignant and benign tumors, becoming increasingly better at differentiating between the two.
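The study relied on an established, much larger architecture, but the basic recipe of training a convolutional classifier on labelled lesion images can be sketched as below. The layer sizes and dummy data are illustrative assumptions, not the model used in the paper.

```python
# A toy binary lesion classifier: convolutional features + one logit.
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # one logit: malignant vs. benign

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LesionCNN()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()   # 1 = melanoma, 0 = benign mole
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```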

After this initial training round, the team of researchers led by Professor Holger Haenssle, senior managing physician at the University of Heidelberg, Germany, introduced the AI to two new sets of images sourced from the Heidelberg library. These dermoscopic images of various skin lesions were completely new to the CNN. One set of 300 images was meant to solely test the performance of the CNN. Another set of 100 images was comprised of some of the most difficult to diagnose lesions and was used to test both machine and real dermatologists.

Researchers were able to recruit 58 doctors from 17 countries. Among them, 17 (29%) indicated they had less than two years’ experience in dermoscopy, 11 (19%) said they had two to five years of experience, and 30 (52%) were experts with more than five years’ experience.

The volunteers were asked to make a decision about how to manage the condition — whether it was surgery, follow-up, or no action at all — based on two levels of information. At level I, the only information that the dermatologists had at their disposal was from dermoscopic images. Four weeks after making the level I assessment, each participant was asked to review their diagnosis at level II, where they were given far more information about the patient — including age, sex, and the location of the lesion, as well as magnified images of the same case.

At level I, the dermatologists accurately detected melanomas 86.6% of the time and correctly identified benign lesions 71.3% of the time, on average. The CNN, tuned to the same specificity as the doctors, detected 95% of melanomas. At level II, the dermatologists significantly improved their performance, as expected, correctly diagnosing 88.9% of malignant melanomas and 75.7% of benign lesions.

Even though the expert doctors were better at spotting melanoma than their less experienced counterparts, they were, on average, outperformed by the AI.

Around 232,000 new cases of melanoma are diagnosed worldwide every year, which result in 55,500 deaths annually. The cancer can be cured, but it typically requires an early diagnosis. This is why this CNN is so impressive — it would be able to identify more cancers early on, thereby saving lives.

“These findings show that deep learning convolutional neural networks are capable of out-performing dermatologists, including extensively trained experts, in the task of detecting melanomas,” Haenssle said.

Of course, all of this doesn’t mean that doctors will soon be scrapped. Far from it: the researchers say that the machine will augment the performance of doctors rather than replace them. Think of a second ‘expert’ opinion which doctors can instantly turn to.

“This CNN may serve physicians involved in skin cancer screening as an aid in their decision whether to biopsy a lesion or not. Most dermatologists already use digital dermoscopy systems to image and store lesions for documentation and follow-up. The CNN can then easily and rapidly evaluate the stored image for an ‘expert opinion’ on the probability of melanoma. We are currently planning prospective studies to assess the real-life impact of the CNN for physicians and patients,” according to Haenssle.

Concerning the study’s limitations, it’s important to note that the study’s participants made diagnoses in an artificial setting. Their decision-making process might look different in a ‘life or death’ situation, which might impact performance. The CNN also had some limitations of its own, such as poor performance with images of melanomas on certain sites such as the fingers, toes, and scalp. For this reason, there is still no substitute for a thorough clinical examination performed by a trained human physician.

That being said, these impressive results indicate that we’re about to experience a paradigm shift, not only in dermatology but in just about every medical field, thanks to developments in artificial intelligence.

The findings appeared in the journal Annals of Oncology.


AI can create convincing talking head from a single picture or painting


Three different source videos bring da Vinci’s Mona Lisa to life. Credit: Samsung.

Researchers used machine learning to build an AI that can generate eerie videos of people talking, starting from a single frame — a picture or even a painting. The ‘talking head’ in the videos follows the motions of a source face (a real person), whose facial landmarks are applied to the facial data of the target face. As you can see in the presentation video below, the target face mimics the facial expressions and verbal cues of the source. This is how the authors brought Einstein, Salvador Dalí, and even the Mona Lisa to life using only a single image.

This sort of application of machine learning isn’t new. For some years, researchers have been working on algorithms that generate videos which swap faces. However, this kind of software required a lot of training data in video form (at least a couple of minutes of content) in order to generate a realistic moving face for the source. Other efforts rendered 3D faces from a single picture, but could not generate motion pictures.

Credit: Samsung.

Computer engineers at Samsung’s AI Center in Moscow took it to the next level. Their artificial neural network is capable of generating a face that turns, speaks, and can make expressions starting from only a single image of a person’s face. The researchers call this technique “single-shot learning”. Of course, the end result looks plainly doctored, but the life-like quality increases dramatically when the algorithm is trained with more images or frames.

Credit: Samsung.

The authors also employed Generative Adversarial Networks (GANs) — deep neural net architectures composed of two networks pitted against each other. One network generates candidate images while the other tries to tell the generated images apart from real ones; each tries to outsmart the other, and this competition pushes the output toward a higher level of realism.
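The adversarial idea itself fits in a few lines. Below is a minimal sketch of a GAN training loop on toy two-dimensional data standing in for images; it illustrates the generator-versus-discriminator principle only and has nothing to do with Samsung's actual architecture.

```python
# Minimal GAN: a generator learns to mimic a simple 2-D data distribution
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # "real" data
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call its samples real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```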

If you pay close attention to the outputted faces, you’ll notice that they’re not perfect. There are artifacts and weird bugs that give away the fakery. That being said, this is surely some very impressive work. The next obvious step is making the Mona Lisa move her lower body as well. In the future, she might dance for the first time in hundreds of years — or her weird AI avatar might, at least.

The work was documented on the preprint server arXiv.

The world’s first AI-written textbook showcases what machine learning can do — and what it can’t

Springer Nature has published its first machine-generated book — a prototype which attempts to gather and summarize the latest research in a very particular field: lithium-ion batteries. While far from perfect and riddled with incoherent word scrambles, the fact that it exists at all is exciting, and there’s good reason to believe this approach might soon take some of the workload off worn-out researchers, enabling them to focus on actual research.

If you’re familiar with scientific writing, you know it can be dense. If you’ve ever tried your hand at it — first of all, congrats — you also know that it’s extremely time-consuming. It’s not your typical article, and it’s not language you would ever use in a conversation. Everything needs to be very precise, very descriptive, and very clear. It takes a very long time to draft scientific texts, and in a publish-or-perish environment where a scientist’s value is often decided by how many papers he or she publishes, many researchers end up spending a lot of time writing instead of, you know, researching.

This is where Artificial Intelligence (AI) enters the stage.

We’ve known for a while that AI has made impressive progress when it comes to understanding language, and even producing its own writing. However, its capacity remains limited — especially when it comes to coherence. You don’t need complex linguistic constructions in science though, and Springer Nature thought it could do a decent enough job synthesizing research on lithium-ion batteries. Thus, Lithium-Ion Batteries: A Machine-Generated Summary of Current Research was born. The book is a summary of peer-reviewed papers, written entirely by A.I.

Technologist Ross Goodwin is quoted in the introduction to Springer Nature’s new book:

“When we teach computers to write, the computers don’t replace us any more than pianos replace pianists — in a certain way, they become our pens, and we become more than writers. We become writers of writers.”

The AI did an admirable job. It was able to scour through an immense volume of published research, extract decent summaries and then put together a (mostly) coherent story. Sure, it’s pocked with sentences which don’t make sense, but it did a pretty decent job while taking virtually no time.

Herein lies the value of this technology: it would, with reasonably small progress, be able to summarize large volumes of dense texts and free up researchers to work on something more valuable.

“This method allows for readers to speed up the literature digestion process of a given field of research instead of reading through hundreds of published articles,” concludes Springer Nature’s Henning Schoenenberger. “At the same time, if needed, readers are always able to identify and click through to the underlying original source in order to dig deeper and further explore the subject.”

The eBook is freely available for readers on SpringerLink.


AI is so good at inventing stories that its creators had to shut it down to avoid ‘fake news’

Credit: Pixabay.


Researchers have designed an artificial intelligence algorithm that can effortlessly write plausible stories. It’s so good that the OpenAI Institute — which built the AI — has decided not to release the full model to the open source community, over fears that the technology could be used for nefarious purposes like spreading fake news.

Founded in 2015, OpenAI is a non-profit research organization that was created to develop an artificial general intelligence that is available to everyone. Several Silicon Valley heavyweights are behind the project, including LinkedIn founder Reid Hoffman and Tesla CEO Elon Musk.

For some time, OpenAI has been working on a natural language processing algorithm that can produce natural-sounding text. The latest version of the algorithm, called GPT-2, was trained on text from more than 8 million web pages that were shared on Reddit in posts with a “karma” score of 3 or higher. Starting from nothing but a headline, the algorithm is capable of creating a new story, making up attributions and quotes that are disturbingly compelling. It can be used for anything from writing news stories to essay help and other pieces of text.
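OpenAI did publish a smaller version of GPT-2 alongside the announcement, and the publicly available weights can be prompted in a few lines. The sketch below assumes the Hugging Face transformers library, tooling that is separate from OpenAI's own code, and the prompt is arbitrary.

```python
# Prompting the small, publicly released GPT-2 model for a continuation.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "A train carriage containing controlled nuclear materials was stolen today."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the text varied but plausible.
output_ids = model.generate(
    **inputs, max_length=120, do_sample=True, top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```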

Here are some examples of GPT-2 in action, which made up a whole story starting from an initial paragraph written by a human.

SYSTEM PROMPT (HUMAN-WRITTEN)

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.

SYSTEM PROMPT (HUMAN-WRITTEN)

Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)

The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.

“I take nothing,” said Aragorn. “But I give my word, at my peril and mine, that I will never forget this day of horror. None of us will forget. Ever!”

“I’ll never forget it!” cried Gimli, who had been in the thick of the battle but hadn’t taken part in it. One of the wounded orcs he had carried off, he was the only one of the survivors who remained uninjured. “We’ll keep the memory of that day of evil, and the war with it, alive as long as we live, my friends!”

“Then we’ll keep it alive as long as we live,” added Legolas. “And we won’t forget the first great battle of the night, even if we may have forgotten the final defeat.”

“I agree,” Gandalf said, “but we will all remember it as the last battle in Middle-earth, and the first great battle of the new day.”

Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.

The Two Rings were returned to Rivendell. Frodo and Sam woke up alone in their room, and Frodo found a note on his pillow. He opened it and read:

May the Power of the Ring be with you always, and may its light never fade. I am not sure if it matters which of the two rings we accept this day but, as you asked me, I have chosen mine. I am sorry to leave you, Frodo, but know that we are very close to the end, and that you are with us forever. May this letter find you safely in Rivendell; and if it does not, then I will accept the ring in your stead. If by any chance you find or give this letter to the enemy, may they learn the strength of the ring and may the Two Rings never be broken!

The generated text certainly has its flaws and is not entirely comprehensible, but it’s a very powerful demonstration nonetheless. So powerful that OpenAI decided to withhold the full model from the open source community.

“We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” said Jack Clark, policy director at OpenAI, speaking to the BBC.

Of course, a lot of people were not happy, to say the least. After all, the research institute is called OpenAI, not ClosedAI.

https://twitter.com/AnimaAnandkumar/status/1096209990916833280

OpenAI says that its research should be used to launch a debate about whether such algorithms should be allowed for news writing and other applications. Meanwhile, OpenAI is certainly not the only research group working on similar technology, which puts the effectiveness of OpenAI’s decision into question. After all, it’s only a matter of time — perhaps just months — before the same results are independently replicated elsewhere.

“We’re not at a stage yet where we’re saying, this is a danger,” OpenAI’s research director Dario Amodei said. “We’re trying to make people aware of these issues and start a conversation.”

“It’s not a matter of whether nefarious actors will utilise AI to create convincing fake news articles and deepfakes, they will,” Brandie Nonnecke, director of Berkeley’s CITRIS Policy Lab told the BBC.

“Platforms must recognise their role in mitigating its reach and impact. The era of platforms claiming immunity from liability over the distribution of content is over. Platforms must engage in evaluations of how their systems will be manipulated and build in transparent and accountable mechanisms for identifying and mitigating the spread of maliciously fake content.”


Artificial intelligence still has severe limitations in recognizing what it’s seeing

Artificial intelligence won’t take over the world any time soon, a new study suggests — it can’t even “see” properly. Yet.


Teapot with golf ball pattern used in the study.
Image credits: Nicholas Baker et al / PLOS Computational Biology.

Computer networks that draw on deep learning algorithms (often referred to as AI) have made huge strides in recent years. So much so that there is a lot of anxiety (or enthusiasm, depending on which side of the fence you find yourself on) that these networks will take over human jobs and other tasks that computers simply couldn’t perform until now.

Recent work at the University of California Los Angeles (UCLA), however, shows that such systems are still in their infancy. A team of UCLA cognitive psychologists showed that these networks identify objects in a fundamentally different manner from human brains — and that they are very easy to dupe.

Binary-tinted glasses

“The machines have severe limitations that we need to understand,” said Philip Kellman, a UCLA distinguished professor of psychology and a senior author of the study. “We’re saying, ‘Wait, not so fast.’”

The team explored how machine learning networks see the world in a series of five experiments. Keep in mind that the team wasn’t trying to fool the networks — they were working to understand how they identify objects, and if it’s similar to how the human brain does it.

For the first one, they worked with a deep learning network called VGG-19. It’s considered one of the (if not the) best networks currently developed for image analysis and recognition. The team showed VGG-19 altered, color images of animals and objects. One image showed the surface of a golf ball displayed on the contour of a teapot, for example. Others showed a camel with zebra stripes or the pattern of a blue and red argyle sock on an elephant. The network was asked what it thought the picture most likely showed in the form of a ranking (with the top choice being most likely, the second one less likely, and so on).


Examples of the images used during this step.
Image credits Nicholas Baker et al., 2018, PLOS Computational Biology.
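Reproducing the spirit of this setup takes only a few lines with torchvision's pretrained VGG-19: feed it a preprocessed image and read off its ranked guesses. The preprocessing values below are the standard ImageNet ones and the filename is hypothetical; this is a generic sketch, not the authors' exact pipeline.

```python
# Query a pretrained VGG-19 for its top-5 guesses about an image.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.vgg19(pretrained=True).eval()
img = preprocess(Image.open("golfball_teapot.jpg")).unsqueeze(0)  # hypothetical file

with torch.no_grad():
    probs = torch.softmax(model(img)[0], dim=0)

top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values, top5.indices):
    # Map idx to a human-readable label with the standard ImageNet class list.
    print(f"class {idx.item():4d}: {p.item():.2%}")
```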

VGG-19, the team reports, listed the correct item as its first choice for only 5 out of the 40 images it was shown during this experiment (12.5% success rate). It was also interesting to see just how well the team managed to deceive the network. VGG-19 listed a 0% chance that the argyled elephant was an elephant, for example, and only a 0.41% chance that the teapot was a teapot. Its first choice for the teapot image was a golf ball, the team reports.

Kellman says he isn’t surprised that the network suggested a golf ball — calling it “absolutely reasonable” — but was surprised to see that the teapot didn’t even make the list. Overall, the results of this step hinted that such networks draw on the texture of an object much more than its shape, says lead author Nicholas Baker, a UCLA psychology graduate student. The team decided to explore this idea further.

Missing the forest for the trees

For the second experiment, the team showed images of glass figurines to VGG-19 and a second deep learning network called AlexNet. Both networks were trained to recognize objects using a database called ImageNet. While VGG-19 performed better than AlexNet, they were still both pretty terrible. Neither network could correctly identify the figurines as their first choice: an elephant figurine, for example, was ranked with almost a 0% chance of being an elephant by both networks. On average, AlexNet ranked the correct answer 328th out of 1,000 choices.


Well, they’re definitely glass figurines to you and me. Not so obvious to AI.
Image credits Nicholas Baker et al / PLOS Computational Biology.

In this experiment, too, the networks’ first choices were pretty puzzling: VGG-19, for example, chose “website” for a goose figure and “can opener” for a polar bear.

“The machines make very different errors from humans,” said co-author Hongjing Lu, a UCLA professor of psychology. “Their learning mechanisms are much less sophisticated than the human mind.”

“We can fool these artificial systems pretty easily.”

For the third and fourth experiments, the team focused on contours. First, they showed the networks 40 drawings outlined in black, with the images in white. Again, the machines did a pretty poor job of identifying common items (such as bananas or butterflies). In the fourth experiment, the researchers showed both networks 40 images, this time in solid black. Here, the networks did somewhat better — they listed the correct object among their top five choices around 50% of the time. They identified some items with high confidence (a 99.99% chance for an abacus and a 61% chance for a cannon from VGG-19, for example), while they simply dropped the ball on others (both networks gave a white hammer outlined in black less than a 1% chance of being a hammer).

Still, it’s undeniable that both algorithms performed better during this step than any other before them. Kellman says this is likely because the images here lacked “internal contours” — edges that confuse the programs.

Throwing a wrench in

Now, in experiment five, the team actually tried to throw the machines off their game as much as possible. They worked with six images that VGG-19 had identified correctly in the previous steps, scrambling them to make them harder to recognize while preserving some pieces of the objects shown. They also employed a group of ten UCLA undergraduates as a control group.

The students were shown objects in black silhouettes — some scrambled to be difficult to recognize and some unscrambled, some objects for just one second, and some for as long as the students wanted to view them. Students correctly identified 92% of the unscrambled objects and 23% of the scrambled ones when allowed a single second to view them. When the students could see the silhouettes for as long as they wanted, they correctly identified 97% of the unscrambled objects and 37% of the scrambled objects.


Example of a silhouette (a) and scrambled image (b) of a bear.
Image credits Nicholas Baker et al / PLOS Computational Biology.

VGG-19 correctly identified five of these six images (and was quite close on the sixth, too, the team writes). The team says humans probably had more trouble identifying the images than the machine because we observe the entire object when trying to determine what we’re seeing. Artificial intelligence, in contrast, works by identifying fragments.

“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

The results suggest that right now, AI (as we know and program it) is simply too immature to actually face the real world. It’s easily duped, and it works differently than we do — so it’s hard to intuit how it will behave. Still, understanding how such networks ‘see’ the world around them would be very helpful as we move forward with them, the team explains. If we know their weaknesses, we know where we need to put in the most work to make meaningful strides.

The paper “Deep convolutional networks do not classify based on global object shape” has been published in the journal PLOS Computational Biology.

Look at all these faces. None of them are real — they were created by an AI

All these hyper-realistic faces were generated using NVidia’s new algorithm and it’s awesome — and a bit scary.

Credits: NVidia.

Women, children, different skin tones and complexions — it doesn’t matter: NVidia’s algorithm generates them all equally well. The algorithm separates coarse features (such as pose and identity) from finer details, producing faces in different positions and lighting. It can even throw in random details like blemishes or freckles.

To better illustrate this ability, the computer scientists behind the work show the same face generated with different amounts of noise. The results are truly impressive.

Effect of noise inputs at different layers of the generator. (a) Noise applied to all layers. (b) No noise. (c) Noise in fine layers only. (d) Noise in coarse layers only. The artificial omission of noise leads to a featureless, “painterly” look. Coarse noise causes large-scale curling of hair and the appearance of larger background features, while fine noise brings out the finer curls of hair, finer background detail, and skin pores.

“We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature,” a paper published on arXiv reads. “The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis.”

Neural style transfer, the technique the new architecture borrows from, has been used before to generate synthetic images — think of those algorithms that let you transform a photo into a particular style. Imagine a landscape image as if it were painted by Van Gogh, for instance. Neural style transfer typically requires three images: a content image, a style reference image, and the input image you want to style. In this case, NVidia taught its generative adversarial network (GAN) to generate a number of styles: faces with glasses, different hairstyles, different ages, and so on. As far as we can tell, there’s no particular weak point in the algorithm’s outputs.
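Classic neural style transfer (the literature the paper says it borrows from, not the StyleGAN architecture itself) boils down to two losses computed on the features of a pretrained network: a content loss that keeps the scene and a style loss that matches Gram matrices of feature maps. Here is a minimal sketch, with layer choices and weights as arbitrary assumptions.

```python
# Content/style losses at the heart of classic neural style transfer,
# using torchvision's pretrained VGG-19 as the feature extractor.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(pretrained=True).features.eval()

def features(img, layers=(1, 6, 11, 20)):
    """Collect activations from a few convolutional layers."""
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
        if i >= max(layers):
            break
    return feats

def gram(feat):
    """Gram matrix: channel-to-channel correlations that capture 'style'."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_content_loss(generated, content_img, style_img):
    gen_f = features(generated)
    with torch.no_grad():
        content_f = features(content_img)
        style_f = features(style_img)
    content_loss = F.mse_loss(gen_f[-1], content_f[-1])        # keep the scene
    style_loss = sum(F.mse_loss(gram(g), gram(s))              # match textures
                     for g, s in zip(gen_f, style_f))
    return content_loss + 1e3 * style_loss   # style weight is an arbitrary choice

# In a full loop you would repeatedly update `generated` (a copy of the
# content image with requires_grad=True) to minimise this loss.
```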

However, the network also tried generating cat faces — and while results are still impressive they’re not as good. Some are indeed indistinguishable from the real thing, but a few are quite bizarre. Can you spot them?

Image credits: NVidia.

“CATS continues to be a difficult dataset due to the high intrinsic variation in poses, zoom levels, and backgrounds,” the team explain.

However, they did much better on different types of datasets — with cars and bedrooms.

AI-generated cars and bedrooms. Image credits: NVidia.

So what does this mean? Well, for starters, we might never be able to trust anything we see on the internet. It’s a remarkable achievement for NVidia’s engineers and for AI progress in general, but it’s hard to envision all the ways in which this will be used. It’s amazingly realistic — maybe even too realistic.

Here’s a video detailing the face generation process:


Novel AI can master games like chess and Go by itself, no humans needed

UK researchers have improved upon a pre-existing AI, allowing it to teach itself how to play three difficult board games: chess, shogi, and Go.


Image via Pexels.

Can’t find a worthy opponent to face in your favorite board game? Fret not! Researchers at DeepMind and University College London, both in the UK, have created an AI system capable of teaching itself (and mastering) three such games. In a new paper, the group describes the AI and why they believe it represents an important step forward for the development of artificial intelligence.

Let’s play a game

“This work has, in effect, closed a multi-decade chapter in AI research,” Murray Campbell, a member of the team that designed IBM’s Deep Blue, writes in a commentary accompanying the study.

“AI researchers need to look to a new generation of games to provide the next set of challenges.”

Nothing puts the huge strides AI has made over the years into perspective quite like having one beat you at a game. Over two decades ago, in 1997, an AI known as Deep Blue managed such a feat in a chess match against world champion Garry Kasparov. Since then, the machines have also managed victories in shogi and Go (think of them as the Japanese and Chinese counterparts of chess).

While impressive, such achievements also showcased the shortcomings of these computer opponents. These programs were good at their respective game — but only at playing that one game. In the new paper, researchers showcase an AI that can learn and master multiple games on its own.

Christened AlphaZero, this AI is based closely on the AlphaGo Zero software and uses a similar reinforcement learning system. Much like a human would, it learns through trial and error by repeatedly playing a game and looking at the results of its actions. All we have to do is explain the basic rules of the game, and then the computer starts playing — against itself. Repeated matches let AlphaZero see which moves help bring about a win, and which simply don’t work.

Over time, all this experience lets the AI become quite adept at the game. AlphaZero has shown that given enough time to practice, it can come to defeat both human adversaries and other dedicated board game AIs — which is no small feat. The system also uses a search method known as the Monte Carlo tree search. Combining the two technologies allows the system to teach itself how to get better at playing a game.


Tournament evaluation of AlphaZero in chess, shogi, and Go. The results show games won, drawn, or lost (from AlphaZero’s perspective) in matches against Stockfish, Elmo, and AlphaGo Zero (AG0). AlphaZero was allowed three days for training in each game.
Image credits DeepMind Technologies Ltd
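The Monte Carlo tree search component mentioned above is compact enough to sketch on its own. The toy below plays a tiny game of Nim (take one to three stones; whoever takes the last stone wins) using plain UCT with random rollouts. AlphaZero replaces those random rollouts with evaluations from its neural network, so treat this purely as an illustration of the search, not of AlphaZero itself.

```python
# Plain UCT Monte Carlo tree search for a toy single-pile game of Nim.
import math
import random

TAKE = (1, 2, 3)  # legal moves: take 1, 2, or 3 stones

def legal_moves(stones):
    return [m for m in TAKE if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = legal_moves(stones)  # moves not yet expanded

    def best_child(self, c=1.4):
        # UCT: exploit the win rate, explore rarely visited children.
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones):
    # Random playout. Returns 1.0 if the player who just moved into this
    # position goes on to win (whoever takes the last stone wins).
    if stones == 0:
        return 1.0
    just_moved, to_move, winner = 0, 1, None
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            winner = to_move
        just_moved, to_move = to_move, just_moved
    return 1.0 if winner == 0 else 0.0

def mcts_move(root_stones, iterations=3000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes.
        while not node.untried and node.children:
            node = node.best_child()
        # 2. Expansion: try one new move from this node.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position.
        reward = rollout(node.stones)
        # 4. Backpropagation: flip the reward at each level up the tree.
        while node is not None:
            node.visits += 1
            node.wins += reward
            reward = 1.0 - reward
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts_move(10))  # optimal play takes 2 (leaving a multiple of 4); UCT usually finds it
```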

It certainly did help that the team ran the AI on a very beefy platform — the rig employed 5000 tensor processing units, which is on a par with the capabilities of large supercomputers.

Still, AlphaZero can only handle games that provide players with all the information relevant to decision-making. The new generation of games to which Campbell alluded earlier does not fit into this category. In games such as poker, for example, players can hold their cards close to their chests (and thus hide relevant information). Other examples include many multiplayer video games, such as StarCraft II or Dota 2. However, it likely won’t be long until AlphaZero can tackle such games as well.

“Those multiplayer games are harder than Go, but not that much higher,” Campbell tells IEEE Spectrum. “A group has already beaten the best players at Dota 2, though it was a restricted version of the game; Starcraft may be a little harder. I think both games are within 2 to 3 years of solution.”

The paper “Mastering board games” has been published in the journal Science.


New AI solves most Captcha codes, potentially causing a “huge security vulnerability”

The world’s most popular website security system may soon become obsolete.


Image credits intergalacticrobot.

Researchers at Lancaster University in the UK, and at Northwest University and Peking University (both in China), have developed a new AI that can defeat the majority of captcha systems in use today. The algorithm is not only very good at its job — it also requires minimal human effort or oversight to work.

The breakable code

“[The software] allows an adversary to launch an attack on services, such as Denial of Service attacks or sending spam or phishing messages, to steal personal data or even forge user identities,” says Mr Guixin Ye, the lead student author of the work. “Given the high success rate of our approach for most of the text captcha schemes, websites should be abandoning captchas.”

Text-based captchas (Completely Automated Public Turing test to tell Computers and Humans Apart) do pretty much what it says on the tin. They’re systems that typically use a hodge-podge of letters or numbers, which they run through additional security features such as occluding lines. The end goal is to generate images that a human can distinguish as being text while confusing a computer. The approach relies on our much stronger pattern recognition abilities to weed out machines. All in all, it’s considered pretty effective.


Because it’s drenched in security features that make it a really annoying read.
Image credits Guixin Ye et al., 2018, CCS ’18.

The team, however, plans to change this. Their AI draws on a technique known as a ‘Generative Adversarial Network’, or GAN. In short, this approach uses a large number of (software-generated) captchas to train a neural network (known as the ‘solver’). After going through boot camp, this neural network is then further refined and pitted against real captcha codes.
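The key trick is that the solver is bootstrapped on synthetic, automatically labelled captchas before it ever sees a real one. Generating that kind of training data is straightforward; below is a minimal Pillow-based sketch (my own illustrative generator, far simpler than the one described in the paper).

```python
# Generate a synthetic, labelled captcha-style image for solver training.
import random
import string
from PIL import Image, ImageDraw, ImageFont

def synth_captcha(text: str, size=(160, 60)) -> Image.Image:
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    # Draw each character at a slightly jittered position.
    for i, ch in enumerate(text):
        x = 15 + i * 30 + random.randint(-3, 3)
        y = 20 + random.randint(-6, 6)
        draw.text((x, y), ch, fill="black", font=font)
    # Add occluding lines and noise dots, mimicking common security features.
    for _ in range(4):
        draw.line([(random.randint(0, 160), random.randint(0, 60)) for _ in range(2)], fill="gray")
    for _ in range(200):
        draw.point((random.randint(0, 159), random.randint(0, 59)), fill="gray")
    return img

label = "".join(random.choices(string.ascii_uppercase + string.digits, k=4))
synth_captcha(label).save(f"synthetic_{label}.png")  # the filename doubles as the label
```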

In the end, what the team created is a solver that works much faster and with greater accuracy than any of its predecessors. The programme only needs about 0.05 seconds to crack a captcha when running on a desktop PC, the team reports. Furthermore, it has successfully attacked and cracked versions of captcha that were previously machine-proof.

The programme was tested on 33 captcha schemes, of which 11 are used by many of the world’s most popular websites, including eBay, Wikipedia, and Microsoft. The system was far more successful than previous attacks, although it did have some difficulty breaking through certain “strong security features” used by Google. Even in this case, however, the system saw a success rate of 3%, which sounds pitiful but “is still above the 1% threshold for which a captcha is considered to be ineffective,” the team writes.


Results with the base (only trained with synthetic images) and fine-tuned solver (also trained with real-life examples).
Image credits Guixin Ye et al., 2018, CCS ’18.

So the solver definitely delivers. But it’s also much easier to use than any of its competitors. Owing to the GAN approach, training the AI takes much less effort and time than manually deciphering, tagging, and feeding captcha examples to the network. The team says it takes only around 500 genuine captcha codes to adequately train their programme; doing it manually, without the GAN, would require millions of examples.

One further advantage of this approach is that it makes the AI system-independent (it can attack any variation of captcha out there). This comes in stark contrast to previous machine-learning captcha breakers. These manually-trained systems were both laborious to build and easily thrown off by minor changes in security features within the codes.

All in all, this software is very good at breaking codes; so good, in fact, that the team believes they can no longer be considered a meaningful security measure.

“This is the first time a GAN-based approach has been used to construct solvers,” says Dr Zheng Wang, Senior Lecturer at Lancaster University’s School of Computing and Communications and co-author of the research. “Our work shows that the security features employed by the current text-based captcha schemes are particularly vulnerable under deep learning methods.”

“We show for the first time that an adversary can quickly launch an attack on a new text-based captcha scheme with very low effort. This is scary because it means that this first security defence of many websites is no longer reliable. This means captcha opens up a huge security vulnerability which can be exploited by an attack in many ways.”

The paper “Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach” has been published in the proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS ’18).


AI outperforms top corporate lawyers in accuracy — and is 100 times faster

A legal startup challenged 20 expert lawyers to test their skills against its AI-powered algorithm. The lawyers generally performed more poorly than the machine. The AI was found to have a level of accuracy on par with the very best lawyers that participated in the challenge — however, it performed the job 100 times faster.

Credit: Pixabay.


Artificial intelligence (AI) is no longer a figment of our imagination — what’s more, it’s already pretty mainstream. A lot of people already use voice assistants like Google Home or Amazon Alexa, whose suggestions are powered by artificial intelligence algorithms that tap into vast amounts of data. In fact, if you’re using the internet, you’re already interacting with AI tech in one way or another. Search results, social news feeds, Netflix recommendations — these are all delivered by AI.

As AI becomes more prevalent, researchers estimate that millions of jobs will be displaced in the coming years. The more repetitive the task, the more likely a robot overlord will take it over. If you’re employed as a truck or taxi driver, teller, cashier, or even as a cook, you run the risk of being replaced by a machine. Some would think creative jobs like writing, painting, or composing music are exempt from such trends, on the impression that you need inherently human qualities to deliver — but that’s just wishful thinking. AIs can now write scripts for movies, pen novels, and even compose classical music like a human composer. Sure, the results might not be as good as work produced by humans, but these are all proofs of concept, foretelling grander things to come.

Compared to music or creative writing, legal tasks sound like a breeze for AI algorithms. While much legal work requires an actual person — such as appearing in court or briefing clients — attorneys and legal staff also spend much of their workday analyzing complicated ‘legalese’. And it’s precisely because this language is so rigorous and well defined that a machine might be better suited for some legal tasks.
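To make the idea concrete, here is a toy example of what ‘machine reading legalese’ can look like at its simplest: a handful of regular expressions that flag potentially risky NDA clauses. Real contract-review systems, including the one described below, rely on trained models rather than hand-written rules, so treat this only as a sketch of the concept; the risk patterns and sample text are made up.

```python
# Toy contract-review sketch: flag NDA clauses matching a few risk patterns.
import re

RISK_PATTERNS = {
    "unlimited liability": r"\bunlimited liability\b",
    "perpetual term": r"\b(perpetual|no expiration)\b",
    "broad confidentiality": r"\ball information\b.*\bconfidential\b",
    "unilateral termination": r"\bterminate\b.*\bsole discretion\b",
}

def review_nda(text):
    findings = []
    for clause in re.split(r"\n\s*\n", text):          # split on blank lines
        for risk, pattern in RISK_PATTERNS.items():
            if re.search(pattern, clause, flags=re.IGNORECASE):
                findings.append((risk, clause.strip()[:80]))
    return findings

sample = """The Receiving Party shall treat all information as confidential.

Either party may terminate this Agreement at its sole discretion."""

for risk, snippet in review_nda(sample):
    print(f"[{risk}] {snippet}")
```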

Case in point, when LawGeex pitted 20 corporate lawyers against an AI algorithm, the humans were outperformed. The task that both humans and the machine had to complete involved reviewing risks contained in five non-disclosure agreements (NDAs). The participants were experienced lawyers working for important companies such as Goldman Sachs, Cisco and Alston & Bird.

Credit: LawGeex.


The humans had an average accuracy score of 85%, with the top-performing lawyers achieving 94% and the worst performer scoring 67%. The AI matched the top-performing lawyers, recording 94% accuracy for the task. However, it took only 26 seconds to review all five documents, compared to 51 minutes for the speediest lawyer or 156 minutes for the slowest.

According to the World Economic Forum, about 23% of legal work can be safely automated, and LawGeex’s recent demonstration serves as a prime example. However, it’s anyone’s guess how far this can be taken. For instance, some have proposed that an AI might make for the fairest judge. In 2016, an artificial intelligence system correctly predicted the verdicts of cases heard at the European Court of Human Rights with 79% accuracy.

At the moment, researchers estimate that 71% of all hours spent on labor are performed by humans and 29% by machines. By 2025, the ratio is expected to flip to 48% human labor and 52% machine labor.


New model boils morality down to three elements, aims to impart them to AI

How should a computer go about telling right from wrong?


Image credits Mark Morgan / Flickr.

According to a team of US researchers, a lot of factors come into play — but most people go through the same steps when making snap moral judgments. Based on these observations, the team has created a framework model to help our AI friends tell right from wrong even in complex settings.

Lying is bad — usually

“At issue is intuitive moral judgment, which is the snap decision that people make about whether something is good or bad, moral or immoral,” says Veljko Dubljević, a neuroethics researcher at North Carolina State University and lead author of the study.

“There have been many attempts to understand how people make intuitive moral judgments, but they all had significant flaws. In 2014, we proposed a model of moral judgment, called the Agent Deed Consequence (ADC) model — and now we have the first experimental results that offer a strong empirical corroboration of the ADC model in both mundane and dramatic realistic situations.”

So what’s so special about the ADC model? Well, the team explains that it can be used to determine what counts as moral or immoral even in tricky situations. For example, most of us would agree that lying isn’t moral. However, we’d probably (hopefully) also agree that lying to Nazis about the location of a Jewish family is solidly moral. The same action — lying — can thus take on various shades of ‘moral’ depending on the context.

We humans tend to have an innate understanding of this mechanism and assess the morality of an action based on our life experience. In order to understand the rules of the game and later impart them to our computers, the team developed the ADC model.

Boiled down, the model posits that people look to three things when assessing morality: the agent (the person doing something), the deed (the action itself), and the consequence (the outcome of that action). Using this approach, the researchers say, one can explain why lying can be a moral action. On the flip side, the ADC model also shows that telling the truth can, in fact, be immoral (if it is “done maliciously and causes harm,” Dubljević says).

“This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not,” Dubljević says. “This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming.”

In order to test their model, the team pitted it against a series of scenarios. These situations were designed to be logical, realistic, and easily understood by professional philosophers and laypeople alike, the team explains. All scenarios were evaluated by a group of 141 philosophers with training in ethics before being used in the study.

In the first part of the trials, 528 participants from across the U.S. were asked to evaluate some of these scenarios in which the stakes were low — i.e. possible outcomes weren’t dire. During the second part, 786 participants were asked to evaluate more dire scenarios among the ones developed by the team — those that could result in severe harm, injury, or death.

When the stakes were low, the nature of the action itself was the strongest factor in determining the morality of a given situation. What mattered most in such situations, in other words, was whether a hypothetical individual was telling the truth or not — the outcome, be it good or bad, was secondary.

When the stakes were high, the outcome took center stage. It was more important, for example, to save a passenger from dying in a plane crash than the actions (be they good or bad) taken to reach this goal.

“For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions,” Dubljević says.

One of the key findings of the study was that philosophers and the general public assess morality in similar ways, suggesting that there is a common structure to moral intuition — one which we instinctively use, regardless of whether we’ve had any training in ethics. In other words, everyone makes snap moral judgments in a similar way.

“There are areas, such as AI and self-driving cars, where we need to incorporate decision making about what constitutes moral behavior,” Dubljević says. “Frameworks like the ADC model can be used as the underpinnings for the cognitive architecture we build for these technologies, and this is what I’m working on currently.”
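As a purely hypothetical illustration of how a framework like this might be wired into software — this is our own sketch, not the researchers’ implementation — one could score each of the three components on a common scale and weight them differently depending on the stakes, echoing the study’s findings:

```python
# Illustrative sketch of combining the three ADC components into one score.
# Each component is scored in [-1, 1]: negative = bad, positive = good.
# The weights are our own assumptions, loosely mirroring the study's findings.
from dataclasses import dataclass

@dataclass
class Situation:
    agent: float        # intentions of the person acting
    deed: float         # the action itself (e.g. lying scores negative)
    consequence: float  # the outcome of the action
    high_stakes: bool   # could the outcome involve severe harm or death?

def adc_judgment(s: Situation) -> float:
    if s.high_stakes:
        # Outcomes dominated when stakes were high in the study...
        weights = (0.2, 0.2, 0.6)
    else:
        # ...while the deed itself mattered most in low-stakes situations.
        weights = (0.2, 0.6, 0.2)
    w_agent, w_deed, w_consequence = weights
    return w_agent * s.agent + w_deed * s.deed + w_consequence * s.consequence

# Lying (negative deed) with good intentions to protect a family (very positive consequence):
print(adc_judgment(Situation(agent=0.8, deed=-0.7, consequence=1.0, high_stakes=True)))
# Positive overall: judged moral despite the lie.
```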

The paper “Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment” has been published in the journal PLOS ONE.

SETI project uses AI to track down mysterious light source

Credit: Breakthrough Listen.

Last year, astronomers tasked with hunting alien signals identified 21 repeating light pulses emanating from a dwarf galaxy located some 3 billion light-years away. The source could be a fast-rotating neutron star, or it could be alien technology, perhaps meant to propel a space-sailing craft. Now, the researchers have used artificial intelligence to pore through the dataset and have discovered 72 new fast radio bursts generated by the mysterious light source.

Fast radio bursts (FRBs) are bright pulses of radio emission lasting mere milliseconds. The signals were acquired by the Green Bank Telescope in West Virginia and initially analyzed with traditional methods by Breakthrough Listen — a SETI project led by the University of California, Berkeley; all 21 of the original bursts turned up within a single hour of observations.

What sets the source in question — called FRB 121102 — apart from other one-off fast radio bursts is that it emits bursts in a repeated pattern, alternating between periods of quiescence and frenzied activity.

Since the first readings were made on August 26, 2017, the team of astronomers has devised a machine-learning algorithm that scoured 400 terabytes of data recorded over a five-hour period.

The machine-learning algorithm, a so-called “convolutional neural network”, is of the kind often employed by tech companies to rank online search results or sort images. It found an additional 72 bursts not detected in the original analysis, bringing the total number of detected bursts from FRB 121102 to around 300 since the source was first discovered in 2012.
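For a sense of what such a network involves, the snippet below sketches a tiny convolutional classifier that labels chunks of time-frequency data as “burst” or “no burst”. It uses PyTorch, and the shapes and layers are placeholder assumptions on our part, not a description of the Breakthrough Listen pipeline.

```python
# Tiny convolutional classifier over spectrogram chunks (burst / no burst).
import torch
import torch.nn as nn

class BurstClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),    # two classes: burst / no burst
        )

    def forward(self, x):                  # x: (batch, 1, 64, 64) frequency-vs-time chunks
        return self.net(x)

model = BurstClassifier()
chunks = torch.randn(8, 1, 64, 64)         # dummy 64x64 spectrogram chunks
print(model(chunks).argmax(dim=1))         # predicted class for each chunk
```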

“This work is exciting not just because it helps us understand the dynamic behavior of fast radio bursts in more detail, but also because of the promise it shows for using machine learning to detect signals missed by classical algorithms,” said Andrew Siemion, director of the Berkeley SETI Research Center and principal investigator for Breakthrough Listen, the initiative to find signs of intelligent life in the universe.

The mystery still lingers, though. We still don’t know much about FRBs or what produced this sequence, but the new readings help place tighter constraints on the periodicity of the pulses generated by FRB 121102. It seems the pulses are not fired at regular intervals after all, at least not with a period longer than 10 milliseconds. More observations might one day help scientists figure out what drives these enigmatic light sources, the authors of the new study wrote in The Astrophysical Journal.

“Whether or not FRBs themselves eventually turn out to be signatures of extraterrestrial technology, Breakthrough Listen is helping to push the frontiers of a new and rapidly growing area of our understanding of the Universe around us,” said UC Berkeley Ph.D. student Gerry Zhang.


AI spots depression by looking at your patterns of speech

A new algorithm developed at MIT can help spot signs of depression from a simple sample of conversation, either text or audio.


Image credits Maxpixel.

Depression has often been called the hidden illness of modern times, and the figures seem to support this view: around 300 million people worldwide live with depression, according to the World Health Organization. The worst part is that many people struggle with undiagnosed depression for years, with profoundly negative effects on their quality of life.

Our quest to root out depression in our midst has brought artificial intelligence into the fray. Machine learning has seen increasing use as a diagnostic aid for the disorder in recent years. Such applications are trained to pick up on words and intonations of speech that may indicate depression. However, they’re of limited use, as the software draws on an individual’s answers to specific questions.

In a bid to bring the full might of the silicon brain to bear on the matter, MIT researchers have developed a neural network that can look for signs of depression in any type of conversation. The software can accurately predict if an individual is depressed without needing any other information about the questions and answers.

Hidden in plain sight

“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” says first author Tuka Alhanai, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“If you want to deploy [depression-detection] models in scalable way […] you want to minimize the amount of constraints you have on the data you’re using. You want to deploy it in any regular conversation and have the model pick up, from the natural interaction, the state of the individual.”

The team based their algorithm on a technique called sequence modeling, which sees use mostly in speech-processing applications. They fed the neural network samples of text and audio recordings of questions and answers used in diagnostics, from both depressed and non-depressed individuals, one by one. The samples were obtained from a dataset of 142 interactions from the Distress Analysis Interview Corpus (DAIC).

The DAIC contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post-traumatic stress disorder. Each subject is rated, in terms of depression, on a scale from 0 to 27 using the Personal Health Questionnaire. Scores between moderate (10 to 14) and moderately severe (15 to 19) are considered depressed, while all others below that threshold are considered not depressed. Out of all the subjects in the dataset, 28 (20 percent) were labeled as depressed.

Simple diagram of the network. LSTM stands for Long Short-Term Memory.
Image credits Tuka Alhanai, Mohammad Ghassemi, James Glass, (2018), Interspeech.
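The gist of sequence modeling can be sketched in a few lines: feed a recurrent network (an LSTM, as in the diagram above) a sequence of per-answer feature vectors and read a depressed/not-depressed prediction off its final state. The feature size, hidden size, and overall structure below are our own assumptions for illustration, not the MIT model itself.

```python
# Minimal sequence-modeling sketch: an LSTM over per-answer feature vectors
# followed by a binary classifier. Dimensions are illustrative placeholders.
import torch
import torch.nn as nn

class DepressionLSTM(nn.Module):
    def __init__(self, feature_dim=128, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)   # depressed / not depressed

    def forward(self, sequences):                    # (batch, n_answers, feature_dim)
        _, (hidden, _) = self.lstm(sequences)
        return self.classifier(hidden[-1])           # classify from the final hidden state

model = DepressionLSTM()
interviews = torch.randn(4, 7, 128)    # 4 subjects, 7 question-answer feature vectors each
print(model(interviews).softmax(dim=1))
```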

The model drew on this wealth of data to uncover speech patterns for people with or without depression. For example, past research has shown that words such as “sad,” “low,” or “down,” may be paired with audio signals that are flatter and more monotone in depressed individuals. Individuals with depression may also speak more slowly and use longer pauses between words.

The model’s job was to determine whether any patterns of speech from an individual were predictive of depression or not.

“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” Alhanai says. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”

Samples from the DAIC were also used to test the network’s efficiency. It was measured on its precision (whether the individuals it identified as depressed had been diagnosed as depressed) and recall (whether it could identify all subjects who were diagnosed as depressed in the entire dataset). It scored 71% on precision and 83% on recall for an averaged combined score of 77%, the team writes. While it may not sound that impressive, the authors write that this outperforms similar models in the majority of tests.
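For readers unfamiliar with the jargon, the snippet below shows how those numbers relate, using made-up counts chosen only so that the arithmetic reproduces the reported figures.

```python
# Illustrative counts (not the study's raw numbers) chosen so the arithmetic
# reproduces the reported precision, recall, and combined score.
true_positives = 10    # flagged as depressed and actually diagnosed as depressed
false_positives = 4    # flagged as depressed but not diagnosed as depressed
false_negatives = 2    # diagnosed as depressed but missed by the model

precision = true_positives / (true_positives + false_positives)   # ~0.71
recall = true_positives / (true_positives + false_negatives)      # ~0.83
print(precision, recall, (precision + recall) / 2)                # ~0.77 combined
```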

The model had a much harder time spotting depression from audio than text. For the latter, the model needed an average of seven question-answer sequences to accurately diagnose depression. With audio, it needed around 30 sequences. The team says this “implies that the patterns in words people use that are predictive of depression happen in a shorter time span in text than in audio,” a surprising insight that should help tailor further research into the disorder.

The results are significant as the model can detect patterns indicative of depression, and then map those patterns to new individuals, with no additional information. It can run on virtually any kind of conversation. Other models, by contrast, only work with specific questions — for example, a straightforward inquiry, “Do you have a history of depression?”. The models then compare a subject’s response to standard ones hard-wired into their code to determine if they are depressed.

“But that’s not how natural conversations work,” Alhanai says.

“We call [the new model] ‘context-free,’ because you’re not putting any constraints into the types of questions you’re looking for and the type of responses to those questions.”

The team hopes their model will be used to detect signs of depression in natural conversation. It could, for instance, be remade into a phone app that monitors its user’s texts and voice communication for signs of depression, and alert them to it. This could be very useful for those who can’t get to a clinician for an initial diagnosis, due to distance, cost, or a lack of awareness that something may be wrong, the team writes.

However, in a post-Cambridge-Analytica-scandal world, that may be just outside of the comfort zone of many. Time will tell. Still, the model can still be used as a diagnosis aid in clinical offices, says co-author James Glass, a senior research scientist in CSAIL.

“Every patient will talk differently, and if the model sees changes maybe it will be a flag to the doctors,” he says. “This is a step forward in seeing if we can do something assistive to help clinicians.”

Truth be told, while the model does seem very good at spotting depression, the team doesn’t really understand what crumbs it follows to do so. “The next challenge is finding out what data it’s seized upon,” Glass concludes.

Apart from this, the team also plans to expand their model with data from many more subjects — both for depression and other cognitive conditions.

The paper “Detecting Depression with Audio/Text Sequence Modeling of Interviews” has been presented at the Interspeech 2018 conference.

Google just let an Artificial Intelligence take care of cooling a data center

The future is here, and it’s weird: Google is now putting a self-taught algorithm in charge of a part of its infrastructure.

It should surprise no one that Google has been intensively working on artificial intelligence (AI). The company managed to develop an AI that beat the world champion at Go, an incredibly complex game, but that’s hardly been the only implementation. Google taught one of its AIs how to navigate the London subway, and more practically, it developed another algorithm to learn all about room cooling.

They had the AI learn how to adjust a cooling system in order to reduce power consumption and, based on the recommendations it made, they cut the energy used for cooling at one of their data centers by almost half.

“From smartphone assistants to image recognition and translation, machine learning already helps us in our everyday lives. But it can also help us to tackle some of the world’s most challenging physical problems — such as energy consumption,” Google said at the time.

“Major breakthroughs, however, are few and far between — which is why we are excited to share that by applying DeepMind’s machine learning to our own Google data centres, we’ve managed to reduce the amount of energy we use for cooling by up to 40 percent.”

The algorithm learns through a technique called reinforcement learning, which uses trial and error. As it learns, it starts to ask better questions and design better trials, which allows it to continue learning much faster. Essentially, it’s a self-taught method.
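The snippet below is a deliberately tiny illustration of that trial-and-error loop: a bandit-style learner that repeatedly tries cooling setpoints in a made-up simulator and gravitates toward the one with the best energy-versus-overheating trade-off. Everything in it, from the setpoints to the reward function, is an invented stand-in, not DeepMind’s system.

```python
# Toy trial-and-error (multi-armed bandit) learner for picking a cooling setpoint.
# The "environment" is a fake simulator; rewards are invented for illustration.
import random

SETPOINTS = [18, 20, 22, 24]              # candidate cold-aisle temperatures (degrees C)
q_values = {a: 0.0 for a in SETPOINTS}    # running estimate of each setpoint's reward

def simulate(setpoint):
    """Fake environment: reward = negative energy use, with a penalty
    if the setpoint is so warm that servers risk overheating."""
    energy = 100 - 3 * (setpoint - 18)            # warmer setpoint, less cooling energy
    overheating_penalty = 50 if setpoint > 23 else 0
    return -(energy + overheating_penalty) + random.gauss(0, 2)

alpha, epsilon = 0.1, 0.2
for step in range(5000):
    if random.random() < epsilon:                 # explore: try a random setting
        action = random.choice(SETPOINTS)
    else:                                         # exploit: use the best setting found so far
        action = max(q_values, key=q_values.get)
    reward = simulate(action)
    q_values[action] += alpha * (reward - q_values[action])

print(max(q_values, key=q_values.get))            # best setpoint found (22 C with this toy reward)
```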

In this particular case, the AI tried different cooling configurations and found ways that greatly reduced energy consumption, saving Google millions of dollars in the long run as well as lowering carbon emissions for the data center.

Now, Google has taken things one step further and handed full control of the cooling system over to the AI. Joe Kava, vice president of data centers for Google, says engineers already trusted the system, and there were few issues with the transition. A data center manager will still oversee the entire process, but if everything goes according to plan, the AI will manage it entirely on its own.

This is no trivial matter. Not only does it represent an exciting first (allowing an AI to manage an important infrastructure component), but it also may help reduce the energy used by data centers, which can be quite substantial. A recent report from researchers at the US Department of Energy’s Lawrence Berkeley National Laboratory concluded that US data centers accounted for about 1.8% of the overall national electricity use.

Efforts to reduce this consumption have been made, but true breakthroughs are few and far between. This is where machine learning could end up making a big difference. Who knows — perhaps the next energy revolution won’t be powered by human ingenuity, but rather by artificial intelligence.


How artificial intelligence is destined to revamp education

Credit: Pixabay.


In recent years, technology has shaped classrooms all over the world. Not too long ago, chalk and a blackboard were all you needed, but then computers, tablets, and the internet came along. More recently, education has been augmented and taken to the next level by virtual reality (VR) and artificial intelligence (AI). According to a recent Pearson report, AI is set to positively transform education in the coming years.

“The future offers the potential of even greater tools and supports. Imagine lifelong learning companions powered by AI that can accompany and support individual learners throughout their studies — in and beyond school — or new forms of assessment that measure learning while it is taking place, shaping the learning experience in real time,” the authors of the report wrote.

Smart classrooms beget smart students

Customized learning is one of the main areas of education where AI is set to have a significant impact. One-on-one tutoring for each and every student, in any subject, used to be unthinkable, but artificial intelligence now promises to deliver it. For instance, one US-based company, Content Technologies Inc., is leveraging deep learning to ‘publish’ customized books: decades-old textbooks automatically revamped into smart, relevant learning guides.

AIs also shine in their ability to analyze students’ abilities, interests, and potential through education profiles, classroom interactions, social media, and the like, and to find the best learning method (or even career path) for each of them.

Artificial intelligence will not only revamp the classroom, it will also change the face of the job market. Because technology tends to amplify existing inequalities, AI will inevitably exacerbate them: it will make many jobs obsolete and cut the number of employees required in other fields. However, AIs can also be used to train learners to respond to a job market reshaped by technology, helping them “achieve at higher levels, and in a wider set of skills, than any education system has managed to date,” according to Pearson.

When AI is combined with mixed and virtual reality tech like Microsoft’s HoloLens, the Oculus Rift VR headset, or Google Expeditions, there is virtually no limit to what can be achieved. These powerful technologies redefine, for instance, what experiential or hands-on learning means. Imagine a paleontology class where students are immersed in a simulated Jurassic environment, then dissect a virtual dinosaur or inspect a micro-CT scan of a fossil.

But none of this means that teachers will be superseded. Far from it: human teachers will still play a major role in the classroom, one adapted to 21st-century needs. For instance, teachers will have to help students develop non-cognitive skills, such as confidence and creativity, that are difficult if not impossible to pick up from a machine. Simply put, there’s no substitute for good mentors and guides.