Tag Archives: AI

AI detects childhood diseases with doctor-like accuracy

A new artificial intelligence (AI) model exhibited an accuracy comparable to that of experienced doctors.

Artificial intelligence has developed dramatically in recent years. In the medical industry, AI is intensely discussed, though more along the lines of image recognition and analysis than diagnosis. For instance, one such algorithm has been taught to assess a person's age and blood pressure just by looking at a photo of their eye, while another has been able to detect Alzheimer's from brain scans before doctors can. Now, a team of researchers has expanded the range of AI abilities, developing an algorithm that can diagnose common childhood diseases.

Diagnosis has long been thought of as a strictly human pursuit, especially in modern medicine, where the range of disease entities, diagnostic tests, and biomarkers has grown tremendously in recent years. Consequently, clinical decision-making has also become more complex and demanding, seemingly something to be left only in the hands of capable doctors.

However, in the current digital age, the electronic health record has grown into a massive repository of data, data which can be used to emulate how doctors think. Physicians typically use a logical approach to establish a diagnosis: they start from the chief complaint, ask targeted questions related to that complaint and other relevant aspects, check the patient's background, history, and any other useful bits of information, and then offer a diagnosis. Of course, experienced doctors do this almost intuitively, without mentally breaking down all the steps, but in a sense, the whole process is very logical.

So could this approach be emulated on a computer? These researchers think so.

Kang Zhang and colleagues developed an AI-based model that applies an automated natural language processing (NLP) system, using deep learning techniques to identify clinically relevant information in electronic health records. The model searches the records for mentions of symptoms and lab results, and weighs them against a library of best-practice guidelines.
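
The team's code is not reproduced here, but the first step, turning free-text notes into structured features a model can use, can be illustrated with a deliberately crude sketch. The symptom vocabulary and the sample note below are invented for illustration; the actual system relies on a trained deep NLP model rather than keyword matching.

```python
# Crude sketch of feature extraction from a free-text visit note.
# The vocabulary and note are invented; the real system uses a deep
# NLP model trained on millions of records, not keyword matching.
SYMPTOM_TERMS = ("fever", "cough", "vomiting", "rash", "wheezing")

def extract_features(note: str) -> dict:
    """Return a simple bag of symptom flags found in the note."""
    words = {w.strip(".,").lower() for w in note.split()}
    return {term: term in words for term in SYMPTOM_TERMS}

if __name__ == "__main__":
    note = "3-year-old presents with fever and cough, mild wheezing."
    print(extract_features(note))
```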

They trained and calibrated the model on 1.3 million patient visits to a major health center in Guangzhou, China, amounting to a total of 101.6 million data points.

After this, the AI was capable of identifying common childhood diseases with an accuracy comparable to that of a doctor. Furthermore, it was capable of splitting them into two categories: common (and less dangerous) conditions such as influenza and hand-foot-mouth disease, and dangerous or life-threatening conditions, such as acute asthma attacks and meningitis.

Researchers emphasize that the machine isn’t meant to replace the doctor’s diagnosis, but provide a tool to help streamline health practice. It could, for instance, triage patients by potential disease severity, and serve as a diagnosis aid in complicated cases.

“Although this impact may be most obvious in areas in which there are few healthcare providers relative to the population, such as China, healthcare resources are in high demand worldwide, and the benefits of such a system are likely to be universal.”

The study has been published in Nature Medicine.

Organic transistors bring us closer to brain-mimicking AI

Simone Fabiano and Jennifer Gerasimov. Credit: Thor Balkhed.

A new type of transistor based on organic materials might one day become the backbone of computing technology that mimics the human brain. This kind of hardware is able to act like both short-term and long-term memory. It can also be modulated to create connections where there were none previously, which is similar to how neurons form synapses.

Your typical run-of-the-mill transistor acts as a sort of valve, allowing electrical current from an input to pass through to an output. The current can be switched on and off, and the signal can also be amplified or dampened.

The new organic transistor developed by researchers at Linkoping University in Sweden can create a new connection between an input and output through a channel made out of a monomer called ETE-S. This organic material is water-soluble and forms long polymer chains with an intermediate level of doping.

This electropolymerized conducting polymer can be formed, grown or shrunk, or completely removed during operation. When ions are injected through the channel, the electrochemical transistor can amplify or switch electron signals, which can be manipulated within a range that spans several orders of magnitude, as reported in the journal Science Advances.

“We have shown that we can induce both short-term and permanent changes to how the transistor processes information, which is vital if one wants to mimic the ways that brain cells communicate with each other,” Jennifer Gerasimov, a postdoc in organic nanoelectronics at Linkoping University in Sweden and one of the authors of the article, said in a statement.

That's similar to how neurons form new connections where there have been no prior connections. Today's artificial neural networks use machine learning algorithms to recognize patterns through supervised or unsupervised learning. This brain-mimicking architecture requires prefabricated circuitry made of a huge number of nodes to simulate a single synapse. That's a lot of computing power, which requires a lot of energy. In contrast, the human brain runs its roughly 100 billion neurons on about 15 watts of power, a fraction of what a typical light bulb needs to function.

 “Our organic electrochemical transistor can therefore carry out the work of thousands of normal transistors with an energy consumption that approaches the energy consumed when a human brain transmits signals between two cells,” said Simone Fabiano, principal investigator in organic nanoelectronics at the Laboratory of Organic Electronics, Campus Norrköping.

The organic transistor looks like a promising prospect for neuromorphic computing — an umbrella term for endeavors concerned with mimicking the human brain, drawing upon physics, mathematics, biology, neuroscience, and more. According to a recent review, the neuromorphic computing market could grow to $6.48 billion by 2024.

 


Artificial intelligence still has severe limitations in recognizing what it’s seeing

Artificial intelligence won’t take over the world any time soon, a new study suggests — it can’t even “see” properly. Yet.

Teapot golfball.

Teapot with golf ball pattern used in the study.
Image credits: Nicholas Baker et al / PLOS Computational Biology.

Computer networks that draw on deep learning algorithms (often referred to as AI) have made huge strides in recent years. So much so that there is a lot of anxiety (or enthusiasm, depending on which side of the fence you find yourself on) that these networks will take over human jobs and other tasks that computers simply couldn't perform until now.

Recent work at the University of California Los Angeles (UCLA), however, shows that such systems are still in their infancy. A team of UCLA cognitive psychologists showed that these networks identify objects in a fundamentally different manner from human brains — and that they are very easy to dupe.

Binary-tinted glasses

“The machines have severe limitations that we need to understand,” said Philip Kellman, a UCLA distinguished professor of psychology and a senior author of the study. “We're saying, ‘Wait, not so fast.’”

The team explored how machine learning networks see the world in a series of five experiments. Keep in mind that the team wasn’t trying to fool the networks — they were working to understand how they identify objects, and if it’s similar to how the human brain does it.

For the first one, they worked with a deep learning network called VGG-19. It’s considered one of the (if not the) best networks currently developed for image analysis and recognition. The team showed VGG-19 altered, color images of animals and objects. One image showed the surface of a golf ball displayed on the contour of a teapot, for example. Others showed a camel with zebra stripes or the pattern of a blue and red argyle sock on an elephant. The network was asked what it thought the picture most likely showed in the form of a ranking (with the top choice being most likely, the second one less likely, and so on).

Combined images.

Examples of the images used during this step.
Image credits Nicholas Baker et al., 2018, PLOS Computational Biology.

VGG-19, the team reports, listed the correct item as its first choice for only 5 out of the 40 images it was shown during this experiment (12.5% success rate). It was also interesting to see just how well the team managed to deceive the network. VGG-19 listed a 0% chance that the argyled elephant was an elephant, for example, and only a 0.41% chance that the teapot was a teapot. Its first choice for the teapot image was a golf ball, the team reports.

Kellman says he isn’t surprised that the network suggested a golf ball — calling it “absolutely reasonable” — but was surprised to see that the teapot didn’t even make the list. Overall, the results of this step hinted that such networks draw on the texture of an object much more than its shape, says lead author Nicholas Baker, a UCLA psychology graduate student. The team decided to explore this idea further.

Missing the forest for the trees

For the second experiment, the team showed images of glass figurines to VGG-19 and a second deep learning network called AlexNet. Both networks were trained to recognize objects using a database called ImageNet. While VGG-19 performed better than AlexNet, they were still both pretty terrible. Neither network could correctly identify the figurines as their first choice: an elephant figurine, for example, was ranked with almost a 0% chance of being an elephant by both networks. On average, AlexNet ranked the correct answer 328th out of 1,000 choices.

Glass figurines.

Well, they’re definitely glass figurines to you and me. Not so obvious to AI.
Image credits Nicholas Baker et al / PLOS Computational Biology.

In this experiment, too, the networks’ first choices were pretty puzzling: VGG-19, for example, chose “website” for a goose figure and “can opener” for a polar bear.

“The machines make very different errors from humans,” said co-author Hongjing Lu, a UCLA professor of psychology. “Their learning mechanisms are much less sophisticated than the human mind.”

“We can fool these artificial systems pretty easily.”

For the third and fourth experiment, the team focused on contours. First, they showed the networks 40 drawings outlined in black, with the images in white. Again, the machine did a pretty poor job of identifying common items (such as bananas or butterflies). In the fourth experiment, the researchers showed both networks 40 images, this time in solid black. Here, the networks did somewhat better — they listed the correct object among their top five choices around 50% of the time. They identified some items with good confidence (99.99% chance for an abacus and 61% chance for a cannon from VGG-19, for example) while they simply dropped the ball on others (both networks listed a white hammer outlined in black for under 1% chance of being a hammer).

Still, it’s undeniable that both algorithms performed better during this step than any other before them. Kellman says this is likely because the images here lacked “internal contours” — edges that confuse the programs.

Throwing a wrench in

Now, in experiment five, the team actually tried to throw the machine off its game as much as possible. They worked with six images that VGG-19 had identified correctly in the previous steps, scrambling them to make them harder to recognize while preserving some pieces of the objects shown. They also employed a group of ten UCLA undergraduates as a control group.

The students were shown objects in black silhouettes — some scrambled to be difficult to recognize and some unscrambled, some objects for just one second, and some for as long as the students wanted to view them. Students correctly identified 92% of the unscrambled objects and 23% of the scrambled ones when allowed a single second to view them. When the students could see the silhouettes for as long as they wanted, they correctly identified 97% of the unscrambled objects and 37% of the scrambled objects.

Silhouette and scrambled bear.

Example of a silhouette (a) and scrambled image (b) of a bear.
Image credits Nicholas Baker et al / PLOS Computational Biology.

VGG-19 correctly identified five of these six images (and was quite close on the sixth, too, the team writes). The team says humans probably had more trouble identifying the images than the machine because we observe the entire object when trying to determine what we’re seeing. Artificial intelligence, in contrast, works by identifying fragments.

“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

The results suggest that right now, AI (as we know and program it) is simply too immature to actually face the real world. It's easily duped, and it works differently than we do, so it's hard to intuit how it will behave. Still, understanding how such networks 'see' the world around them would be very helpful as we move forward with them, the team explains. If we know their weaknesses, we know where we need to put in the most work to make meaningful strides.

The paper “Deep convolutional networks do not classify based on global object shape” has been published in the journal PLOS Computational Biology.


Novel AI can master games like chess and Go by itself, no humans needed

UK researchers have improved upon a pre-existing AI, allowing it to teach itself how to play three difficult board games: chess, shogi, and Go.

Chess.

Image via Pexels.

Can't find a worthy opponent to face in your favorite board game? Fret not! Researchers at DeepMind and University College London, both in the UK, have created an AI system capable of teaching itself (and mastering) three such games. In a new paper, the group describes the AI and why they believe it represents an important step forward for the development of artificial intelligence.

Let’s play a game

“This work has, in effect, closed a multi-decade chapter in AI research,” Murray Campbell, a member of the team that designed IBM’s Deep Blue, writes in a commentary accompanying the study.

“AI researchers need to look to a new generation of games to provide the next set of challenges.”

Nothing puts the huge strides AI has made over the years into perspective quite like having one beat you at a game. Over two decades ago, in 1997, an AI known as Deep Blue managed such a feat in a chess match against world champion Garry Kasparov. Since then, machines have also managed victories in shogi and Go (think of them as Japanese and Chinese versions of chess).

While impressive, such achievements also showcased the shortcomings of these computer opponents. These programs were good at their respective game — but only at playing that one game. In the new paper, researchers showcase an AI that can learn and master multiple games on its own.

Christened AlphaZero, this AI is based closely on the AlphaGo Zero software and uses a similar reinforcement learning system. Much like a human would, it learns through trial and error by repeatedly playing a game and looking at the results of its actions. All we have to do is explain the basic rules of the game, and then the computer starts playing — against itself. Repeated matches let AlphaZero see which moves help bring about a win, and which simply don’t work.

Over time, all this experience lets the AI become quite adept at the game. AlphaZero has shown that given enough time to practice, it can come to defeat both human adversaries and other dedicated board game AIs — which is no small feat. The system also uses a search method known as the Monte Carlo tree search. Combining the two technologies allows the system to teach itself how to get better at playing a game.
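
DeepMind has not released AlphaZero's code, and the real system couples a deep neural network with Monte Carlo tree search, but the core loop, playing against yourself and keeping track of which moves tend to win, can be shown on a toy game. In the sketch below, the game of Nim, the win-rate table, and the exploration rate are all stand-ins chosen for illustration.

```python
# Toy self-play learner in the spirit of AlphaZero's training loop, using Nim
# (take 1-3 sticks per turn; whoever takes the last stick wins). The real
# system uses a deep network plus Monte Carlo tree search; here a simple
# win-rate table plays both sides and learns from the outcomes.
import random
from collections import defaultdict

WINS = defaultdict(int)    # (sticks_left, move) -> games the mover went on to win
PLAYS = defaultdict(int)   # (sticks_left, move) -> times the move was tried

def choose(sticks, explore=0.2):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < explore:
        return random.choice(moves)   # occasionally try something new
    return max(moves, key=lambda m: WINS[(sticks, m)] / (PLAYS[(sticks, m)] or 1))

def self_play_game(sticks=10):
    history, player = [], 0
    while sticks > 0:
        move = choose(sticks)
        history.append((player, sticks, move))
        sticks -= move
        player ^= 1
    winner = player ^ 1                # whoever took the last stick
    for who, s, m in history:          # update the table from this game's result
        PLAYS[(s, m)] += 1
        WINS[(s, m)] += int(who == winner)

if __name__ == "__main__":
    for _ in range(20000):
        self_play_game()
    # With 3 sticks left, taking all 3 wins on the spot, so it should score best.
    print({m: round(WINS[(3, m)] / max(PLAYS[(3, m)], 1), 2) for m in (1, 2, 3)})
```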

AlphaZero results.

Tournament evaluation of AlphaZero in chess, shogi, and Go. The results show games won, drawn, or lost (from AlphaZero’s perspective) in matches against Stockfish, Elmo, and AlphaGo Zero (AG0). AlphaZero was allowed three days for training in each game.
Image credits DeepMind Technologies Ltd

It certainly did help that the team ran the AI on a very beefy platform — the rig employed 5000 tensor processing units, which is on a par with the capabilities of large supercomputers.

Still, AlphaZero can handle any game that provides all the information that's relevant to decision-making. The new generation of games to which Campbell alluded earlier does not fit into this category. In games such as poker, for example, players can hold their cards close to their chests (and thus obfuscate relevant information). Other examples include many multiplayer games, such as StarCraft II or Dota. However, it likely won't be long until AlphaZero can tackle such games as well.

“Those multiplayer games are harder than Go, but not that much higher,” Campbell tells IEEE Spectrum. “A group has already beaten the best players at Dota 2, though it was a restricted version of the game; Starcraft may be a little harder. I think both games are within 2 to 3 years of solution.”

The paper “Mastering board games” has been published in the journal Science.


New AI solves most Captcha codes, potentially causing a “huge security vulnerability”

The world’s most popular website security system may soon become obsolete.

Captcha.

Image credits intergalacticrobot.

Researchers at Lancaster University in the UK, together with colleagues at Northwest University and Peking University (both in China), have developed a new AI that can defeat the majority of captcha systems in use today. The algorithm is not only very good at its job — it also requires minimal human effort or oversight to work.

The breakable code

“[The software] allows an adversary to launch an attack on services, such as Denial of Service attacks or sending spam or phishing messages, to steal personal data or even forge user identities,” says Mr Guixin Ye, the lead student author of the work. “Given the high success rate of our approach for most of the text captcha schemes, websites should be abandoning captchas.”

Text-based captchas (Completely Automated Public Turing tests to tell Computers and Humans Apart) do pretty much what it says on the tin. They're systems that typically use a hodge-podge of letters or numbers, which they run through additional security features such as occluding lines. The end goal is to generate images that a human can recognize as text while confusing a computer. The system relies on our much stronger pattern recognition abilities to weed out machines. All in all, it's considered pretty effective.

Captcha.

Because it's drenched in security features that make it a really annoying read.
Image credits Guixin Ye et al., 2018, CCS ’18.

The team, however, plans to change this. Their AI draws on a technique known as a ‘Generative Adversarial Network’, or GAN. In short, this approach uses a large number of (software-generated) captchas to train a neural network (known as the ‘solver’). After going through boot camp, this neural network is then further refined and pitted against real captcha codes.
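
To make the generator/discriminator game concrete, here is a stripped-down version of that training loop in PyTorch. The image size, network shapes, learning rates, and the stand-in "real captcha" batch are all placeholder assumptions; the published attack additionally trains a separate solver network on the synthesized captchas and then fine-tunes it on real ones.

```python
# Minimal GAN training loop on toy data. Sizes, learning rates, and the
# stand-in captcha batches are placeholders for illustration only.
import torch
import torch.nn as nn

IMG = 64 * 64  # a flattened toy "captcha" image

generator = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_captcha_batch(n=16):
    # Stand-in for a batch of real captcha images scaled to [-1, 1].
    return torch.rand(n, IMG) * 2 - 1

for step in range(200):
    real = real_captcha_batch()
    fake = generator(torch.randn(real.size(0), 32))

    # Discriminator: learn to tell real captchas from generated ones.
    d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1))
              + bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: produce captchas the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```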

In the end, what the team created is a solver that works much faster and with greater accuracy than any of its predecessors. The programme only needs about 0.05 seconds to crack a captcha when running on a desktop PC, the team reports. Furthermore, it has successfully attacked and cracked versions of captcha that were previously machine-proof.

The programme was tested on 33 captcha schemes, of which 11 are used by many of the world's most popular websites — including eBay, Wikipedia, and Microsoft. The system had much more success than its counterparts, although it did have some difficulty breaking through certain “strong security features” used by Google. Still, even in this case, the system saw a success rate of 3%, which sounds pitiful but “is still above the 1% threshold for which a captcha is considered to be ineffective,” the team writes.

Test results.

Results with the base (only trained with synthetic images) and fine-tuned solver (also trained with real-life examples).
Image credits Guixin Ye et al., 2018, CCS ’18.

So the solver definitely delivers. But it’s also much easier to use than any of its competitors. Owing to the GAN-approach the team used, it takes much less effort and time to train the AI — which would involve manually deciphering, tagging, and feeding captcha examples to the network. The team says it only takes 500 or so genuine captcha codes to adequately train their programme. It would take millions of examples to manually train it without the GAN, they add.

One further advantage of this approach is that it makes the AI system-independent (it can attack any variation of captcha out there). This comes in stark contrast to previous machine-learning captcha breakers. These manually-trained systems were both laborious to build and easily thrown off by minor changes in security features within the codes.

All in all, this software is very good at breaking codes; so good, in fact, that the team believes they can no longer be considered a meaningful security measure.

“This is the first time a GAN-based approach has been used to construct solvers,” says Dr Zheng Wang, Senior Lecturer at Lancaster University’s School of Computing and Communications and co-author of the research. “Our work shows that the security features employed by the current text-based captcha schemes are particularly vulnerable under deep learning methods.”

“We show for the first time that an adversary can quickly launch an attack on a new text-based captcha scheme with very low effort. This is scary because it means that this first security defence of many websites is no longer reliable. This means captcha opens up a huge security vulnerability which can be exploited by an attack in many ways.”

The paper “Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach” has been published in the journal CCS ’18 Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security.


New model boils morality down to three elements, aims to impart them to AI

How should a computer go about telling right from wrong?

Ethics.

Image credits Mark Morgan / Flickr.

According to a team of US researchers, a lot of factors come into play — but most people go through the same steps when making snap moral judgments. Based on these observations, the team has created a framework model to help our AI friends tell right from wrong even in complex settings.

Lying is bad — usually

“At issue is intuitive moral judgment, which is the snap decision that people make about whether something is good or bad, moral or immoral,” says Veljko Dubljević, a neuroethics researcher at North Carolina State University and lead author of the study.

“There have been many attempts to understand how people make intuitive moral judgments, but they all had significant flaws. In 2014, we proposed a model of moral judgment, called the Agent Deed Consequence (ADC) model — and now we have the first experimental results that offer a strong empirical corroboration of the ADC model in both mundane and dramatic realistic situations.”

So what's so special about the ADC model? Well, the team explains that it can be used to determine what counts as moral or immoral even in tricky situations. For example, most of us would agree that lying isn't moral. However, we'd probably (hopefully) also agree that lying to Nazis about the location of a Jewish family is solidly moral. The action itself — lying — can thus take various shades of 'moral' depending on the context.

We humans tend to have an innate understanding of this mechanism and assess the morality of an action based on our life experience. In order to understand the rules of the game and later impart them to our computers, the team developed the ADC model.

Boiled down, the model posits that people look to three things when assessing morality: the agent (the person who is doing something), the action in question, and the consequence (or outcome) of the action. Using this approach, researchers say, one can explain why lying can be a moral action. On the flipside, the ADC model also shows that telling the truth can, in fact, be immoral (if it is “done maliciously and causes harm,” Dubljević says).
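
The study describes the model conceptually rather than as a formula, but the three-factor idea is easy to sketch in code: rate the agent's intentions, the deed, and the consequence separately, then combine them, letting outcomes weigh more heavily when the stakes are high. The scale, weights, and example values below are invented purely for illustration.

```python
# Hypothetical ADC-style judgment score. The [-1, 1] scale, the weights, and
# the high-stakes re-weighting are illustrative assumptions, not values taken
# from the study.
def adc_judgment(agent: float, deed: float, consequence: float, high_stakes: bool) -> float:
    """Each input is a rating in [-1, 1]; positive output reads as 'moral'."""
    if high_stakes:
        weights = (0.2, 0.2, 0.6)   # outcomes dominate when serious harm is possible
    else:
        weights = (0.3, 0.5, 0.2)   # the deed itself dominates in low-stakes cases
    return sum(w * x for w, x in zip(weights, (agent, deed, consequence)))

# Lying (a bad deed) to protect a family from harm (good agent, good outcome):
print(adc_judgment(agent=0.8, deed=-0.7, consequence=0.9, high_stakes=True))  # > 0
```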

“This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not,” Dubljević says. “This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming.”

In order to test their model, the team pitted it against a series of scenarios. These situations were designed to be logical, realistic, and easily understood by both professional philosophers and laymen, the team explains. All scenarios were evaluated by a group of 141 philosophers with training in ethics prior to their use in the study.

In the first part of the trials, 528 participants from across the U.S. were asked to evaluate some of these scenarios in which the stakes were low — i.e. possible outcomes weren’t dire. During the second part, 786 participants were asked to evaluate more dire scenarios among the ones developed by the team — those that could result in severe harm, injury, or death.

When the stakes were low, the nature of the action itself was the strongest factor in determining the morality of a given situation. What mattered most in such situations, in other words, was whether a hypothetical individual was telling the truth or not — the outcome, be it good or bad, was secondary.

When the stakes were high, the outcome took center stage. It was more important, for example, to save a passenger from dying in a plane crash than the actions (be they good or bad) one took to reach this goal.

“For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions,” Dubljević says.

One of the key findings of the study was that philosophers and the general public assess morality in similar ways, suggesting that there is a common structure to moral intuition — one which we instinctively use, regardless of whether we’ve had any training in ethics. In other words, everyone makes snap moral judgments in a similar way.

“There are areas, such as AI and self-driving cars, where we need to incorporate decision making about what constitutes moral behavior,” Dubljević says. “Frameworks like the ADC model can be used as the underpinnings for the cognitive architecture we build for these technologies, and this is what I’m working on currently.”

The paper “Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment” has been published in the journal PLOS ONE.


AI-controlled glider learns to ride air currents like an eagle

The albatross is one of nature's most interesting creatures. It seems to fly for hours and hours, yet it only flaps its wings on rare occasions. The bird achieves this marvelous feat, sometimes traveling 10,000 miles at a time, by exploiting dynamic soaring, essentially catching air currents. How exactly birds ride air currents is not perfectly understood — but we're getting there.

Researchers have reported in the journal Nature that they were able to design an artificial intelligence (AI) system that learned how to take advantage of a particular type of air current — rising columns of warm air known as thermals — in order to fly a glider.

Credit: Pixabay.


The AI used reinforcement learning, a type of machine learning where an agent learns how to behave in an environment by performing actions and seeing the results. In other words, the machine wasn't instructed on how to perform this task — it had to act in an optimal way, maximizing results based on a number of inputs. It's the same kind of algorithm used by Google's famous AlphaGo, which learned to play the board game Go all by itself and then beat professional players, a feat simply not possible with conventional programming techniques.

Here, the machine was relayed information such as the glider’s pitch, yaw, groundspeed, and airspeed — parameters which it had to constantly tweak in order to reach the highest climb rate possible.

Thermal updrafts are key to allowing a glider to stay airborne for as long as possible. In an updraft, the increase in vertical air movement can be enough to stop the glider falling and, if the vertical wind is strong enough, allow it to climb. Without an updraft, a glider will gradually fall toward the ground.
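
The paper's full controller is more sophisticated, but the shape of the learning problem can be sketched simply: the agent observes coarse features of its flight state, adjusts its bank angle, and is rewarded by its climb rate. The state discretization, action set, and temporal-difference update below are simplifying assumptions, not the authors' code.

```python
# Simplified sketch of the glider's learning signal: reward is the observed
# climb rate. States, actions, and the update rule are assumptions made for
# illustration, not the controller from the Nature paper.
import random
from collections import defaultdict

ACTIONS = (-5, 0, +5)          # hypothetical change in bank angle, degrees
Q = defaultdict(float)         # (state, action) -> estimated long-run climb
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def policy(state):
    if random.random() < EPS:
        return random.choice(ACTIONS)                 # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit best-known action

def update(state, action, climb_rate, next_state):
    """One temporal-difference step; climb_rate (m/s) acts as the reward."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (climb_rate + GAMMA * best_next - Q[(state, action)])

# Hypothetical discretized observation: (vertical wind bucket, bank angle).
update(("weak_lift", 10), +5, climb_rate=0.8, next_state=("strong_lift", 15))
```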

The researchers at the University of California, San Diego, and the Abdus Salam International Center for Theoretical Physics in Trieste, Italy, first trained the machine learning algorithm in a simulator, then got their hands dirty in the field. They performed roughly 240 flights above Poway, California, each of which lasted three minutes on average. During some flights, however, the glider stayed in the air for up to 45 minutes, flying at the same level as eagles. The glider even got attacked by an eagle during one particular flight.

During this whole time, the AI was completely in control, using thermals to climb into the sky.

This early work suggests that the autonomous aircraft of the future could exploit air currents in order to save energy. Such unmanned aircraft could one day fly alongside migratory birds the whole way, tracking their every behavior and offering important scientific insight.

Before this happens, though, more work needs to be done. Thermals are just one of many types of air currents that an aircraft encounters in real life. A thermal, or rising air current, forms when air masses of different temperatures meet. But there are also currents generated by air deflected over mountains or by colliding air flows at convergence zones. Nevertheless, the work is impressive and promising.

It's also fascinating to see such powerful demonstrations of just how flexible reinforcement learning can be — from controlling gliders that touch tips with eagles to beating world champions at Go.

Google just let an Artificial Intelligence take care of cooling a data center

The future is here, and it’s weird: Google is now putting a self-taught algorithm in charge of a part of its infrastructure.

It should surprise no one that Google has been intensively working on artificial intelligence (AI). The company managed to develop an AI that beat the world champion at Go, an incredibly complex game, but that’s hardly been the only implementation. Google taught one of its AIs how to navigate the London subway, and more practically, it developed another algorithm to learn all about room cooling.

They had the AI learn how to adjust a cooling system in order to reduce power consumption, and based on recommendations made by the AI, they almost halved energy consumption at one of their data centers.

“From smartphone assistants to image recognition and translation, machine learning already helps us in our everyday lives. But it can also help us to tackle some of the world’s most challenging physical problems — such as energy consumption,” Google said at the time.

“Major breakthroughs, however, are few and far between — which is why we are excited to share that by applying DeepMind’s machine learning to our own Google data centres, we’ve managed to reduce the amount of energy we use for cooling by up to 40 percent.”

The algorithm learns through a technique called reinforcement learning, which uses trial and error. As it learns, it starts to ask better questions and design better trials, which allows it to continue learning much faster. Essentially, it’s a self-taught method.

In this particular case, the AI tried different cooling configurations and found ways that greatly reduced energy consumption, saving Google millions of dollars in the long run as well as lowering carbon emissions for the data center.
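
Before handing over direct control, the system worked in a recommendation mode: a learned model predicted how much energy a candidate cooling configuration would use, and the best safe option was picked. The toy predictor, candidate setpoints, and safety limit below are invented placeholders that only illustrate that pick-the-cheapest-safe-option loop.

```python
# Illustrative sketch of "recommend a cooling configuration": a stand-in model
# predicts energy use for candidate setpoints and the cheapest safe one wins.
# The predictor, candidate list, and safety limit are invented placeholders.
def predicted_energy_kw(setpoint_c: float, it_load_kw: float) -> float:
    # Stand-in for a learned model: colder setpoints cost more chiller/fan power.
    return it_load_kw * (1.15 + 0.02 * (27.0 - setpoint_c))

def recommend_setpoint(it_load_kw: float,
                       candidates=(18.0, 20.0, 22.0, 24.0, 26.0),
                       max_safe_c: float = 26.0) -> float:
    safe = [c for c in candidates if c <= max_safe_c]
    return min(safe, key=lambda c: predicted_energy_kw(c, it_load_kw))

print(recommend_setpoint(it_load_kw=500))  # picks the warmest setpoint still deemed safe
```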

Now, Google has taken things one step further and has handed full control of the cooling system over to the AI. Joe Kava, vice president of data centers for Google, says engineers already trusted the system, and there were few issues with the transition. There's still a data manager who will oversee the entire process, but if everything goes according to plan, the AI will manage it on its own.

This is no trivial matter. Not only does it represent an exciting first (allowing an AI to manage an important infrastructure component), but it also may help reduce the energy used by data centers, which can be quite substantial. A recent report from researchers at the US Department of Energy’s Lawrence Berkeley National Laboratory concluded that US data centers accounted for about 1.8% of the overall national electricity use.

Efforts to reduce this consumption have been made, but true breakthroughs are few and far between. This is where machine learning could end up making a big difference. Who knows — perhaps the next energy revolution won’t be powered by human ingenuity, but rather by artificial intelligence.


German banking giant is using AI to write its earnings reports

Credit: Wikimedia Commons.


In light of new regulations that are forcing banks to cut research spending, Commerzbank — the second largest bank in Germany — is looking to artificial intelligence to write its earnings reports. Previously, the same technology was used to write quick reports on soccer matches or political events such as elections.

The German bank worth $525 billion is working on this project with Retresco, a content automation company in which Commerzbank invested two years ago through its fintech incubator.

Speaking to the Financial Times, Michael Spitz, who is head of Commerzbank's research and development unit Mainincubator, said that this kind of technology shows great promise because "equity research reports reviewing quarterly earnings are structured in similar ways." What's more, these kinds of documents are often prepared under common reporting standards, which are easily read by machine learning algorithms. In other words, there's a lot of routine, mechanical work that an AI might handle just as well as, if not better than, a human — and it would certainly be faster and more productive, capable of writing reports on the fly, for instance.

According to Spitz, this technology is "already advanced enough to provide around 75% of what a human equity analyst would when writing an immediate report on quarterly earnings." However, the AI is nowhere near good enough to produce content for clients — this kind of bespoke writing may take a lot more time and development to replicate. So, if you're working as a bank analyst, don't be too worried.

“If it is related to much more abstract cases, we feel that we are not there yet — that we can or maybe will ever replace the quality of a researcher,” Spitz added.

But that’s not to say that an AI can’t handle some of these so-called abstract cases. For instance, an AI developed by Japanese researchers wrote a novel that nearly won a literary award. Here’s an excerpt, from the book called The Day A Computer Writes A Novel.

“I writhed with joy, which I experienced for the first time, and kept writing with excitement.

“The day a computer wrote a novel. The computer, placing priority on the pursuit of its own joy, stopped working for humans.”

Many banks are eager to cut research spending following the implementation of European investor protections known as Markets In Financial Instruments Directive (MiFID II). The regulations that came into effect earlier this year are designed to increase transparency across the European Union’s financial markets and standardize the regulatory disclosures required for particular markets. Some of the MiFID measures include pre- and post- transparency requirements, as well as new standards for financial firms.

As a direct consequence of MiFID, investors are forced to pay for research explicitly instead of bundling its costs into trading commissions. For some firms, their research revenue has fallen by as much as 30 percent as a result. Commerzbank hopes that AI will help offset some of its losses.

AI is starting to beat us at our favorite games: Dota2

It started with chess. It moved on to Go. Now, AI is ruining computer games for us after beating humans in Dota2.

Screenshot from Dota2.

There's a distinct difference between games like chess and Go as compared to most strategy computer games: vision. In the former, the board is open and visible to everyone. But in computer games, you often have what is called the fog of war — enemy units, and often terrain, are hidden from the player unless directly explored. For Artificial Intelligence (AI), dealing with this type of uncertainty is incredibly problematic and difficult to manage. Chess and Go are also turn-based games, whereas in Dota2, the computer needs to react and adapt in real time.

Dota2 (originally Defense of the Ancients) is a free-to-play team game played 5v5. Each team occupies a base on one side of the map, and the purpose of the game is to destroy the other team's base. Each of the ten players independently controls a powerful character, known as a "hero", each with unique abilities and a different style of play. During a match, players collect experience points and items for their heroes in order to battle the opposing team's heroes in player versus player combat. A game typically lasts 30-60 minutes.

To teach AIs to play the game, OpenAI, a nonprofit AI research company co-founded by Elon Musk, used a technique called reinforcement learning. Essentially, the AI is given the basic capability to play the game and is then left to its own devices. It plays more and more, learning from its mistakes and improving iteration after iteration. The programmers set different reward criteria that the AI tries to optimize through this trial-and-error approach, but there's no shortcut — it needs to play a lot of games.

To reach its current level, the AI had to play 180 years' worth of games every day for 19 days. Then, it was pitted against very skilled amateurs (ranking in the top 1% of players) — and it beat them. Of course, there's still a way to go before the AI can square off against the best of the best, but beating skilled players is extremely impressive, particularly considering the sheer amount of chaos and hidden information in the game. It also works as a proof of concept, showing that the AI can improve in a reasonable amount of time, and there's no reason why it couldn't progressively improve until it masters the game and becomes unbeatable.

We’ll see if this is the case a bit later. The International 2018, Dota’s flagship tournament, is set to kick off in August, when an exhibition match will be held between leading pro players and these AIs. I can’t wait to see what happens.


An AI recreated the periodic table from scratch — in a couple of hours

A new artificial intelligence (AI) program developed at Stanford recreated the periodic table from scratch — and it only needed a couple of hours to do so.

Atom2Vec.

If you've ever wondered how machines learn, this is it — in picture form. (A) shows atom vectors of 34 main-group elements and their hierarchical clustering based on distance. The color in each cell stands for the value of the vector on that dimension.
Image credits Zhou et al., 2018, PNAS.

Running under the alluring name of Atom2Vec, the software learned to distinguish between different atoms starting from a database of chemical compounds. After it learned the basics, the researchers left Atom2Vec to its own devices. Using methods and processes related to those in the field of natural language processing — chief among them, the idea that the nature of a word can be understood by looking at the words around it — the AI successfully clustered the elements by their chemical properties.

It only took Atom2Vec a couple of hours to perform the feat; roughly speaking, it re-created the periodic table of elements, one of the greatest achievements in chemistry. It took us hairless apes nearly a century of trial-and-error to do the same.

I’m you, but better

The Periodic Table of elements was initially conceived by Dmitri Mendeleev in the mid-19th century, well before many of the elements we know today had been discovered, and certainly before there was even an inkling of quantum mechanics and relativity lurking beyond the boundaries of classical physics. Mendeleev recognized that certain elements fell into groups with similar chemical features, and this established a periodic pattern (hence the name) to the elements as they went from lightweight elements like hydrogen and helium, to progressively heavier ones. In fact, Mendeleev could predict the very specific properties and features of, as yet, undiscovered elements due to blank spaces in his unfinished table. Many of these predictions turned out to be correct when the elements filling the blank spots were finally discovered.

“We wanted to know whether an AI can be smart enough to discover the periodic table on its own, and our team showed that it can,” said study leader Shou-Cheng Zhang, the J. G. Jackson and C. J. Wood Professor of Physics at Stanford’s School of Humanities and Sciences.

Zhang’s team designed Atom2Vec starting from an AI platform (Word2Vec) that Google built to parse natural language. The software converts individual words into vectors (numerical codes). It then analyzes these vectors to estimate the probability of a particular word appearing in a text based on the presence of other words.

The word “king” for example is often accompanied by “queen”, and the words “man” and “woman” often appear together. Word2Vec works with these co-appearances and learns that, mathematically, “king = a queen minus a woman plus a man,” Zhang explains. Working along the same lines, the team fed Atom2Vec all known chemical compounds (such as NaCl, KCl, and so on) in lieu of text samples.
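
Zhang's "queen minus woman plus man" example is literally vector arithmetic, and it can be checked in a few lines once you have embeddings. The tiny 2-D vectors below are made up purely to show the mechanics; real Word2Vec or Atom2Vec embeddings have hundreds of dimensions learned from co-occurrence statistics.

```python
# Toy illustration of the vector arithmetic behind Word2Vec-style analogies.
# The 2-D vectors are invented; real embeddings are learned from data.
import math

vectors = {
    "king":  (0.9, 0.8),
    "queen": (0.9, 0.2),
    "man":   (0.1, 0.8),
    "woman": (0.1, 0.2),
}

def add(a, b): return tuple(x + y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "queen - woman + man" should land closest to the vector for "king".
target = add(sub(vectors["queen"], vectors["woman"]), vectors["man"])
print(max(vectors, key=lambda w: cosine(vectors[w], target)))  # -> king
```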

It worked surprisingly well. Even from this relatively tiny sample size, the program figured out that potassium (K) and sodium (Na) must be chemically-similar, as both bind to chlorine (Cl). Through a similar process, Atom2Vec established chemical relationships between all the species in the periodic table. It was so successful and fast in performing the task that Zhang hopes that in the future, researchers will use Atom2Vec to discover and design new materials.

Future plans

“For this project, the AI program was unsupervised, but you could imagine giving it a goal and directing it to find, for example, a material that is highly efficient at converting sunlight to energy,” he said.

As impressive as the achievement is, Zhang says it’s only the first step. The endgame is more ambitious — Zhang hopes to design a replacement for the Turing test, the golden standard for gauging machine intelligence. To pass the Turing test, a machine must be capable of responding to written questions in such a way that users won’t suspect they’re chatting with a machine; in other words, a machine will be considered as intelligent as a human if it seems human to us.

However, Zhang thinks the test is flawed, as it is too subjective.

“Humans are the product of evolution and our minds are cluttered with all sorts of irrationalities. For an AI to pass the Turing test, it would need to reproduce all of our human irrationalities,” he says. “That’s very difficult to do, and not a particularly good use of programmers’ time.”

He hopes to take the human factor out of the equation by having machine intelligence try to discover new laws of nature. Nobody's born educated, however, not even machines, so Zhang is first checking to see whether AIs can reproduce some of the most important discoveries we've made without help. By recreating the periodic table, Atom2Vec has achieved this goal.

The team is now working on the second version of the AI. This one will focus on cracking a frustratingly-complex problem in medical research: it will try to design antibodies to attack the antigens of cancer cells. Such a breakthrough would offer us a new and very powerful weapon against cancer. Currently, we treat the disease with immunotherapy, which relies on such antibodies already produced by the body; however, our bodies can produce over 10 million unique antibodies, Zhang says, by mixing and matching between some 50 separate genes.

“If we can map these building block genes onto a mathematical vector, then we can organize all antibodies into something similar to a periodic table,” Zhang says.

“Then, if you discover that one antibody is effective against an antigen but is toxic, you can look within the same family for another antibody that is just as effective but less toxic.”

The paper “Atom2Vec: Learning atoms for materials discovery,” has been published in the journal PNAS.

NASA Explores the Use of Robotic Bees on Mars


Graphic depiction of Marsbee – Swarm of Flapping Wing Flyers for Enhanced Mars Exploration. Credits: C. Kang.

Robot bees have been invented before, but Mars might be a place for them to serve a unique purpose. Earlier this year, it was revealed that the Japanese chemist Eijio Miyako led a team at the National Institute of Advanced Industrial Science and Technology (AIST) in developing robotic bees. So they’re not really bees; they’re drones. Miyako’s bee drones are actually capable of a form of pollination similar to real bees.

Bees have been the prime subject of many a sci-fi film, including The Savage Bees (1976), The Swarm (1978), and Terror Out of the Sky (1978). In the 21st century, bees have been upgraded: their robotic counterparts will have an important role to play in future scientific exploration. And this role could very well be played out on the surface of Mars.

Now, NASA has begun to fund a project to create other AI-steered robotic bees for the future exploration of Mars. The main reason for experimenting with such mini robots is the need for speed: the traditional rovers sent to Mars in the past move very slowly. NASA anticipates that an army of fliers would move significantly faster than their snail-like predecessors.

A number of researchers in Alabama are currently collaborating with a group based in Japan to design these mechanical drones. Size-wise, the drones are very similar to real bees; the wings, however, are unnaturally large. The lengthened wingspan is a much-needed feature since the Red Planet's atmosphere is much thinner than Earth's. These small insect-like robots have been dubbed "Marsbees."

If used, the Marsbees would travel in swarms and be able to return to some sort of a base, not unlike the way bees return to their hive. The base would likely be a rover providing a place for the Marsbees to be reenergized. But they would not have to come to this rover station to send out the information they’ve accumulated. Similar to satellites, they would be able to transmit their findings wirelessly. Marsbees would also likely be able to collect a variety of data. If their full development is feasible and economical, the future for Marsbees looks promising.


Nightmarish but brilliant blobs — AI-generated nudes would probably make Dali jealous

If you like nudes — and let’s be honest, who doesn’t — the work of one AI may ruin them for you, forever.

AI nude.

Image credits Robbie Barrat / Twitter.

Whether you think they're to be displayed proudly or hoarded, discussed with a blush or a smirk, artsy or in bad taste, most of us would probably agree on what a nude painting should look like. We'd also likely agree that the end piece should be quite pleasing to the eye.

However, all the nude paintings or drawings you’ve ever seen were done by a human trying his best to record the body of another. In this enlightened age of technology and reason, we’re no longer bound by such base constraints. To show us why that’s an exciting development, albeit not necessarily a good one, Stanford AI researcher Robbie Barrat taught a computer to create such works of art. The results are a surreal, unnerving echo of what a nude should look like — but they’re a very intriguing glimpse into the ‘understanding’ artificial intelligence can acquire of the human body.

One day, out of sheer curiosity, Barrat fed a dataset containing thousands of nude portraits into a Generative Adversarial Network (GAN). These are a class of artificial intelligence algorithms used in unsupervised machine learning. They rely on two different neural networks, one called the “generator” and one the “discriminator”, which play an almost endless game of cat-and-mouse.

“The generator tries to come up with paintings that fool the discriminator, and the discriminator tries to learn how to tell the difference between real paintings from the dataset and fake paintings the generator feeds it,” Barrat told CNet’s Bonnie Burton.

“They both get better and better at their jobs over time, so the longer the GAN is trained, the more realistic the outputs will be.”

Barrat explained that sometimes this network can fall into a fail-loop — a "local minimum", if you want to listen to the experts — in which the generator and the discriminator keep fooling one another without actually getting better at the intended task. Since the system didn't start out in that local-minimum trap, the 'nudes' do look vaguely human-like, but because the AI never truly figured out what a human should look like, the paintings are all fleshy blobs with strange tendrils and limbs jutting out at odd angles. The same issue makes the GAN always paint heads in the exact same shade of nightmare.

Still, credit where credit is due, the network does always generate very organic-looking shapes; while there’s something indubitably wrong with the bulges and creases under the skin, the AI paintings do feel like renditions of a human being — a twisted, highly surreal, nightmarishly blobby human, but a human nonetheless.

I also find it quite fascinating that Barrat's AI has reached, through sheer loop-error, what many surrealist painters would likely consider an enviable view of the world. Perhaps it's exactly the fact that it lacks a proper, solid grounding in what a human body should look like that allows it to create these exotic, unnerving pieces.

You can see more of Barrat’s work via the Twitter handle @DrBeef_ .

AI spots thousands of craters on the Moon — including over 6,000 previously undiscovered ones

Without an atmosphere to protect it, the Moon is under constant assault from meteorites and asteroids, hitting the satellite and leaving behind a horde of craters. Using a novel AI-based technique, a team of researchers has developed a new way to identify and count these craters.

An artificially colored mosaic constructed from a series of 53 images taken by the Galileo Spacecraft. Can you see the craters?

“When it comes to counting craters on the moon, it’s a pretty archaic method,” says Mohamad Ali-Dib, a postdoctoral fellow in the Centre for Planetary Sciences (CPS).

Indeed, while astronomy has benefitted from the automation of many processes, crater counting had lagged behind — but not anymore.

“Basically we need to manually look at an image, locate and count the craters and then calculate how large they are based off the size of the image. Here we’ve developed a technique from artificial intelligence that can automate this entire process that saves significant time and effort.”

Ali-Dib wasn’t the first to come up with this idea. Several projects have attempted to develop algorithms for the detection of lunar craters, but they performed rather poorly. However, the new algorithm, which was trained on a large dataset covering two-thirds of the moon, performed much better. It was so good at understanding the general shape and characteristics of a crater that it was even able to detect craters on other bodies, such as Mercury.

“It’s the first time we have an algorithm that can detect craters really well for not only parts of the moon, but also areas of Mercury,” says Ali-Dib, who developed the technique along with Ari Silburt, Chenchong Charles Zhu, and a group of researchers at CPS and the Canadian Institute for Theoretical Astrophysics (CITA).

They fed 90,000 images of the moon’s surface into an artificial neural network (ANN). ANNs mimic the vast network of neurons in a brain, simulating the biological learning process. After the learning process, the neural network was able to not only identify but also categorize craters larger than five kilometers. The team believes that with further “training” it will also be able to zoom in on smaller craters.
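
The published network is a segmentation-style model (see the arXiv reference below); purely to illustrate the training setup, here is a tiny PyTorch classifier that learns to flag whether an image patch contains a crater. The architecture, patch size, and random stand-in data are assumptions for illustration, not the authors' model.

```python
# Tiny illustrative crater-patch classifier. The architecture and the random
# stand-in tensors are placeholders; the real study trains a much larger
# segmentation network on 90,000 lunar images.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),               # 64x64 input -> 16x16 after two poolings
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

patches = torch.randn(8, 1, 64, 64)           # stand-in for elevation-map patches
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = patch contains a crater

loss = loss_fn(model(patches), labels)
opt.zero_grad()
loss.backward()
opt.step()
```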

Some lunar craters last for billions of years. Image credits: NASA.

Since the moon also doesn’t have tectonics or strong erosion, the craters can remain visible for extremely long periods of time, with Ali-Dib’s team finding craters as old as four billion years. However, this is also the main drawback of the algorithm: it requires an atmosphere-less body, without erosion, and clearly visible craters.

Journal Reference: Ari Silburt et al. Lunar Crater Identification via Deep Learning. arxiv.org/abs/1803.02192


‘Self-aware’, predatory, digital slug mimics the behavior of the animal it was modeled on

Upgrade, or the seeds of a robot uprising? U.S. researchers report they’ve constructed an artificially intelligent ocean predator that behaves a lot like the organism it was modeled on.

Slug P. californica.

Image credits Tracy Clark.

This frightening, completely digital predator — dubbed “Cyberslug” — reacts to food, threats, and members of its own ‘species’ much like the living animal that formed its blueprint: the sea slug Pleurobranchaea californica.

Slug in the machine

Cyberslug owes this remarkable resemblance to its biological counterpart to one rare trait among AIs — it is, albeit to a limited extent, self-aware. According to University of Illinois (UoI) at Urbana-Champaign professor Rhanor Gillette, who led the research efforts, this means that the simulated slug knows when it’s hungry or threatened, for example. The program has also learned through trial and error which other kinds of virtual critters it can eat, and which will fight back, in the simulated world the researchers pitted it against.

“[Cyberslug] relates its motivation and memories to its perception of the external world, and it reacts to information on the basis of how that information makes it feel,” Gillette said.

While slugs admittedly aren’t the most terrifying of ocean dwellers, they do have one quality that made them ideal for the team — they’re quite simple beings. Gillette goes on to explain that in the wild, sea slugs typically handle every interaction with other creatures by going through a three-item checklist: “Do I eat it? Do I mate with it? Or do I flee?”

Though biologically simple, this process is quite complicated to handle successfully inside a computer program. That's because, in order to make the right choice, an organism must be able to sense its internal state (i.e. whether it is hungry or not), obtain and process information from the environment (does this creature look tasty or threatening?), and integrate past experience (i.e. 'did this animal bite/sting me last time?'). In other words, picking the right choice involves the animal being aware of and understanding its own state, that of the environment, and the interaction between the two — which is the basis of self-awareness.

Behavior chart slug.

Schematic of the approach-avoid behavior in the slug.
Image credits Jeffrey W. Brown et al., 2018, eNeuro.

Some of Gillette’s previous work focused on the brain circuits that allow sea slugs to operate these choices in the wild, mapping their function “down to individual neurons”. The next step was to test the accuracy of their models — and the best way to do this was to recreate the circuits of the animals’ brains and let them loose inside computer simulations. One of the earliest such circuit boards to represent the sea slug‘s brain, constructed by co-author Mikhail Voloshin, software engineer at the UoI, was housed in a plastic foam takeout container.

In the meantime, the duo have refined both their hardware and the code used to simulate the critters. Cyberslug’s decision-making is based on complex algorithms that estimate and weigh its individual goals, just like a real-life slug would.

“[P. californica‘s] default response is avoidance, but hunger, sensation and learning together form their ‘appetitive state,’ and if that is high enough the sea slug will attack,” Gillette explains. “When P. californica is super hungry, it will even attack a painful stimulus. And when the animal is not hungry, it usually will avoid even an appetitive stimulus. This is a cost-benefit decision.”

Cyberslug behaves the same way. The more it eats, for example, the more satiated it becomes and the less likely it is to bother or attack something else, no matter how tasty. Over time, it can also learn which critters to avoid and which can be preyed upon with impunity. However, if it gets hungry enough, Cyberslug will throw caution to the wind and attack even prey that’s adept at fighting back, if nothing less belligerent comes around for it to eat.
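To make that cost-benefit logic concrete, here is a toy sketch in Python. It is not the published Cyberslug model, which is built from simulated neural circuitry; every name and number below is invented purely to illustrate how hunger, sensation, and memory could be summed into a single “appetitive state” that gates approach versus avoidance.

```python
# Toy illustration of the approach-avoid rule described above.
# Not the eNeuro circuit model; all values are made up.

class ToySlug:
    def __init__(self):
        self.hunger = 1.0          # rises over time, drops after a meal
        self.learned_value = {}    # prey type -> remembered payoff (+ good, - painful)

    def appetitive_state(self, prey, odor_strength):
        # hunger, sensation, and memory sum into one drive
        return self.hunger + odor_strength + self.learned_value.get(prey, 0.0)

    def decide(self, prey, odor_strength, threshold=1.5):
        # default response is avoidance; attack only if the drive is high enough
        return "approach" if self.appetitive_state(prey, odor_strength) > threshold else "avoid"

    def after_meal(self, prey, payoff):
        self.hunger = max(0.0, self.hunger - 0.5)              # eating brings satiation
        old = self.learned_value.get(prey, 0.0)
        self.learned_value[prey] = old + 0.3 * (payoff - old)  # simple reinforcement


slug = ToySlug()
print(slug.decide("flabellina", odor_strength=0.8))   # hungry, no bad memories -> "approach"
slug.after_meal("flabellina", payoff=-1.0)             # ate it, but it stung; its value drops
print(slug.decide("flabellina", odor_strength=0.8))   # sated and wary -> "avoid"
```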

“I think the sea slug is a good model of the core ancient circuitry that is still there in our brains that is supporting all the higher cognitive qualities,” Gillette said. “Now we have a model that’s probably very much like the primitive ancestral brain. The next step is to add more circuitry to get enhanced sociality and cognition.”

This isn’t the first time we’ve seen researchers ‘digitizing’ the brains of simpler creatures — and this process holds one particular implication that I find fascinating.

Brains are, when you boil everything down, biological computers. Most scientists are pretty confident that we’ll eventually develop artificial intelligence, and sooner rather than later. But there also seems to be an unspoken assumption that the weight falls on the “artificial” part; that such constructs will always be lesser compared to “true”, biological intelligence.

However, when researchers can quite successfully take a brain’s functionality and print it on a computer chip, doesn’t that distinction between artificial and biological intelligence look more like one of terminology rather than one of nature? If the computer can become the brain, doesn’t that make artificial life every bit as ‘true’ as our own, as worthy of recognition and safeguarding as our own?

I’d love to hear your opinion on that in the comments below.

The paper “Implementing Goal-Directed Foraging Decisions of a Simpler Nervous System in Simulation” has been published in the journal eNeuro.

Google AI can now look at your retina and predict the risk of heart disease

Google researchers are extremely intuitive: just by looking into people’s eyes they can see their problems — cardiovascular problems, to be precise. The scientists trained artificial intelligence (AI) to predict cardiovascular hazards, such as strokes, based on the analysis of retina shots.

The way the human eye sees the retina vs the way the AI sees it. The green traces are the pixels used to predict the risk factors. Photo Credit: UK Biobank/Google

After analyzing data from over a quarter million patients, the neural network can predict the patient’s age (within a 4-year range), gender, smoking status, blood pressure, body mass index, and risk of cardiovascular disease.

“Cardiovascular disease is the leading cause of death globally. There’s a strong body of research that helps us understand what puts people at risk: Daily behaviors including exercise and diet in combination with genetic factors, age, ethnicity, and biological sex all contribute. However, we don’t precisely know in a particular individual how these factors add up, so in some patients, we may perform sophisticated tests … to help better stratify an individual’s risk for having a cardiovascular event such as a heart attack or stroke”, declared study co-author Dr. Michael McConnell, a medical researcher at Verily.

Even though the number of patients the AI was trained on might sound large, AI networks typically work with much larger sample sizes; to make more accurate predictions, a neural network must analyze as much data as possible. For now, the study’s results show that the AI’s predictions cannot yet outperform specialized medical diagnostic methods, such as blood tests.

“The caveat to this is that it’s early, (and) we trained this on a small data set,” says Google’s Lily Peng, a doctor and lead researcher on the project. “We think that the accuracy of this prediction will go up a little bit more as we kind of get more comprehensive data. Discovering that we could do this is a good first step. But we need to validate.”

The deep learning applied to photos of the retina and medical data works like this: the network is shown a patient’s retinal image alongside medical data such as age and blood pressure. After seeing hundreds of thousands of such examples, the machine starts to pick up patterns that correlate with that medical data. If, for example, most patients with high blood pressure have enlarged retinal vessels, the network learns the pattern and can then apply it when given only the retinal image of a new patient. The algorithm correctly identified patients at high cardiovascular risk within a five-year window 70 percent of the time.
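As a rough illustration of that training loop, the sketch below fits a small convolutional network to predict a single risk factor (here, blood pressure) from retinal images. It is not Google’s architecture, and the data files, shapes, and training settings are hypothetical.

```python
# Minimal sketch: learn to regress one risk factor straight from retinal images.
# Hypothetical data files; not the model from the Nature Biomedical Engineering paper.
import numpy as np
from tensorflow import keras

retina_images = np.load("retina_images.npy")      # shape (n, 224, 224, 3), scaled to [0, 1]
blood_pressure = np.load("blood_pressure.npy")    # one systolic value per image

model = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1),                         # predicted blood pressure
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(retina_images, blood_pressure, validation_split=0.2, epochs=10)

# After training, the network estimates the risk factor from the image alone.
print(model.predict(retina_images[:1]))
```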

“In summary, we have provided evidence that deep learning may uncover additional signals in retinal images that will allow for better cardiovascular risk stratification. In particular, they could enable cardiovascular assessment at the population level by leveraging the existing infrastructure used to screen for diabetic eye disease. Our work also suggests avenues of future research into the source of these associations, and whether they can be used to better understand and prevent cardiovascular disease,” conclude the authors of the study.

The paper, published in the journal Nature Biomedical Engineering, is truly remarkable. In the future, doctors will be able to screen for the number one killer worldwide much more easily, and they will be doing it without causing us any physical discomfort. Imagine that!

Will AI start to take over writing? How will we manage it?

Could robots be taking over writing? Photo taken in the ZKM Medienmuseum, Karlsruhe, Germany.

As artificial intelligence (AI) spreads its wings more and more, it is also threatening more and more jobs. In an economic report issued to the White House in 2016, researchers concluded that there’s an 83% chance automation will replace workers who earn $20/hour or less. This echoes previous studies, which found that half of US jobs are threatened by robots, including up to 87% of jobs in Accommodation & Food Services. But some jobs are safer than others. Jobs which require human creativity are safe — or so we thought.

Take writing for instance. In all the Hollywood movies and in all our minds, human writing is… well, human, strictly restricted to our biological creativity. But that might not be the case. Last year, an AI was surprisingly successful in writing horror stories, featuring particularly creepy passages such as this:

#MIRROR: “‘I slowly moved my head away from the shower curtain, and saw the reflection of the face of a tall man who looked like he was looking in the mirror in my room. I still couldn’t see his face, but I could just see his reflection in the mirror. He moved toward me in the mirror, and he was taller than I had ever seen. His skin was pale, and he had a long beard. I stepped back, and he looked directly at my face, and I could tell that he was being held against my bed.”

It wasn’t an isolated achievement either. A Japanese AI wrote a full novel, and AI is already starting to have a noticeable effect on journalism. So just like video killed the radio star, are we set for a world where AI kills writing?

What does it take to be a writer? Is it something that’s necessarily restricted to a biological mind, or can that be expanded to an artificial algorithm?

Not really.

While AIs have had some impressive writing successes, they’ve also been limited in scope, and they haven’t truly exhibited what you would call creativity. A first hurdle would be passing the Turing test, in which a computer must trick humans into thinking that it, too, is human. So far, that’s proven to be a difficult challenge, and it’s only the first step. While AI can process and analyze complex data, it still has little prowess in areas that involve abstract, nonlinear, and creative thinking. There’s nothing to suggest that AIs will be able to adapt and actually start creating new content.

Algorithms, at least in the computational sense, don’t really support creativity. They work by transforming a set of discrete input parameters into a set of discrete output parameters; one way or another, everything in the output was already present in the input. By this argument, computational “creativity” may be useful and may look like creativity, but it isn’t the real thing: the program isn’t creating something new, just recombining known parameters such as words and sentences.

But to dismiss AI as unable to write would simply be wrong. In advertising, AI copywriters are already in use, and they’re surprisingly versatile: they can draft hundreds of different ad campaigns with ease. It will be a long time before we see an AI essay-writing service, but we might get there at some point. Google claimed that its AlphaGo algorithm is able to ‘create knowledge itself’, and it demonstrated as much by beating the world champion with a move no one had ever seen before. So it not only learned from humans, it built its own knowledge. Is that not a type of creativity in itself? Both technically and philosophically, there are still a lot of questions to be answered.

AI is here, and it’s here to stay. It will grow and change our lives, whether we want it or not, whether we realize it or not. What we need, especially in science and journalism, is a new paradigm of how humans and AI work together for better results. That might require some creative solutions in itself.

Chinese AI outperforms humans in language comprehension test — the first time a machine ever has

A new artificial intelligence (AI) developed by the Alibaba Group has humans beaten on their own turf — the software has outperformed humans in a global reading comprehension test.

Robot reading.

Image credits herval / Flickr.

China’s biggest online commerce company is making big strides in the field of artificial intelligence. The Alibaba Group has developed a machine-learning model which scored higher than humans on the Stanford Question Answering Dataset (SQuAD), a large-scale reading comprehension test with more than 100,000 questions. On January 11, the AI scored 82.44 on the test, compared to 82.304 scored by humans.

It’s the first time an AI has outperformed people on this task, and it did so in style — SQuAD is considered one of the world’s most authoritative machine-reading benchmarks.

Computer speak good now

Computers have shown they can gain the upper hand against human players in all sorts of complex tasks — most strikingly in games such as chess. However, all these tasks had one common feature: they were structured in such a way that a sharp memory and raw computing power represented huge assets.

Until now, however, language was always seen as a human field par excellence, so this win might be a bit more unsettling than those before it. Looking to the future, it has huge implications for society, especially in the customer service sector.

These jobs were traditionally insulated from the effects of automation, relying on armies of call-center employees even while factories swapped workers for robots. As someone who has had the distinct misfortune of working in a call center, I can only wish the robots good luck, endless patience, and a blanket apology. However, the advent of this AI points to profound shifts to come in the sector, and many people, unlike me, actually like/need those jobs — for them, this does not bode well.

It’s also nerve-wracking to see AIs make such huge strides since, just two months ago, another Chinese AI passed the medical exam.

The Alibaba Group has worked closely with Ali Xiaomi, a mobile customer service chatbot which can be customised by retailers on Alibaba’s online market platform to suit their needs. Si Luo, a chief scientist at Alibaba’s research arm, said that the result means simple questions, such as “why does it rain?”, can be answered with a high degree of accuracy by machines.

“We believe the underlying technology can be gradually applied to numerous applications such as customer service, museum tutorials, and online response to inquiries from patients, freeing up human efforts in an unprecedented way,” Si said.

Ali Xiaomi was designed to identify the questions raised by customers and then look for the most relevant answers from pre-prepared documents. This made it a suitable platform for the new AI, as the processes that Ali Xiaomi uses are, in broad lines, the same ones that underpin the Stanford test.
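In spirit, that retrieval step can be sketched in a few lines of Python: turn the incoming question and the pre-prepared answer documents into vectors, then return the closest match. This is a generic TF-IDF illustration with made-up documents, not Alibaba’s system.

```python
# Generic retrieval sketch: pick the pre-prepared answer closest to the question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Rain forms when water vapour in clouds condenses into drops heavy enough to fall.",
    "Refunds are issued within seven days of the returned item being received.",
    "Standard shipping takes three to five business days.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def best_answer(question):
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    return documents[scores.argmax()]

print(best_answer("Why does it rain?"))   # -> the rain explanation
```

If no document scores above a sensible threshold, a production system would fall back to a human agent, which fits the limitation described below.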

Still, despite its superhuman result, Alibaba researchers say that the system is still somewhat limited. It works best with questions that have clear-cut answers; if the language is too vague, or the expression too ungrammatical, the bot likely won’t work properly. Similarly, if there’s no prepared answer, the bot will likely malfunction.

Scientist trains AI to generate Halloween costume ideas, and some are pretty good

If you’re having problems deciding on Halloween costumes, you might find inspiration in an unexpected place: artificial intelligence (AI). Professor Panda, Strawberry shark, and Pirate firefighter are my favorites.

Image credits: Yasin Erdal.

Janelle Shane is a researcher who likes to explore the weirder side of AI. She felt that there’s often too little creativity when it comes to Halloween costumes, so she employed the help of neural networks (something she’s done several times in the past) to come up with some spooky-fresh ideas.

“I train neural networks, a type of machine learning algorithm, to write humor by giving them datasets that they have to teach themselves to mimic. They can sometimes do a surprisingly good job, coming up with a metal band called Chaosrug, a craft beer called Yamquak and another called The Fine Stranger (which now exists!), and a My Little Pony called Blue Cuss.”

However, it wasn’t an easy process. For starters, she didn’t have a large enough dataset to train the AI, so she crowdsourced one, asking readers to list awesome Halloween costumes and receiving over 4,500 suggestions. There were no big surprises in the dataset. The classics dominated the list — with 42 witches, 32 ghosts, 30 pirates, 22 Batmans, 21 cats (30 including sexy cats), 19 vampires, and 17 each of pumpkins and sexy nurses. Overall, some 300 costumes (around 6%) were “sexy.” This is a bit surprising to me, since going to Halloween parties you get the feeling that far more than 6% of them focus on sexiness.

The submissions were certainly creative, and it was clear that the AI would have a tough job surpassing its human counterparts. She used a version of AI which learns words from scratch, letter by letter, with no knowledge of their meaning. Early in the training, the AI made many missteps, but as it learned and learned, it became better and better at generating costume ideas. Janelle herself took to Twitter to present some of the results:

Credits: Twitter – @JanelleCShane, via BI.
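For the curious, here is a minimal sketch of the kind of character-level generator described above: it learns which letter tends to follow which, with no notion of what the words mean. It is an illustration rather than Shane’s exact setup, and the training file (one costume idea per line) is hypothetical.

```python
# Minimal character-level text generator, char-rnn style. Illustrative only.
import numpy as np
from tensorflow import keras

text = open("costumes.txt").read().lower()          # hypothetical crowdsourced list
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 40
X = np.array([[char_to_idx[c] for c in text[i:i + seq_len]] for i in range(len(text) - seq_len)])
y = np.array([char_to_idx[text[i + seq_len]] for i in range(len(text) - seq_len)])

model = keras.Sequential([
    keras.layers.Embedding(len(chars), 32),
    keras.layers.LSTM(128),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=20)

def generate(length=200, temperature=0.8):
    # seed with a random slice of the training text, then sample letter by letter
    start = np.random.randint(0, len(text) - seq_len)
    window, out = text[start:start + seq_len], ""
    for _ in range(length):
        probs = model.predict(np.array([[char_to_idx[c] for c in window]]), verbose=0)[0]
        probs = np.exp(np.log(probs + 1e-8) / temperature)
        probs /= probs.sum()
        next_char = chars[np.random.choice(len(chars), p=probs)]
        out += next_char
        window = window[1:] + next_char
    return out

print(generate())
```

Early in training, a model like this spits out gibberish, which mirrors the “missteps” Shane describes; the output only starts looking like costume ideas after many passes over the data.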

Some of the costume ideas were seriously awesome:

  • Punk Tree
  • Disco Monster
  • Spartan Gandalf
  • Starfleet Shark
  • A masked box
  • Martian Devil
  • Panda Clam
  • Potato man
  • Shark Cow
  • Space Batman
  • The shark knight
  • Snape Scarecrow
  • Gandalf the Good Witch
  • Professor Panda
  • Strawberry shark
  • Vampire big bird
  • Samurai Angel
  • Lady Garbage
  • Pirate firefighter
  • Fairy Batman

I’m telling you, shark knight, Space Batman, and Pirate firefighter are gonna be massive. Spartan Gandalf sounds like he’s just too powerful. There are many more ideas, go read them here.

The AI also came up with what could very well be Marvel’s next cast of superheroes (or spoofs).

  • The Bunnizer
  • Ladybog
  • Light man
  • Bearley Quinn
  • Glad woman
  • Robot Werewolf
  • Super Pun
  • Super of a bog
  • Space Pants
  • Barfer
  • Buster pirate
  • Skull Skywolk lady
  • Skynation the Goddess
  • Fred of Lizard

While this is still an easy-going use of AI, it raises an interesting question. After all, looking at some of the ideas on the list, you could easily mistake it for creativity. Will future, more refined AIs be… creative?

Elon Musk’s AI just beat the pros at Dota 2 — one of the most popular and complex computer games

After chess and Go, artificial intelligence is starting to beat us at more and more games. Even if only for one phase of the game, AI has proved superior to Dota 2 pros.

As the world’s biggest eSports event unfolded, fans in Seattle’s KeyArena were given a special treat. While 18 teams were fighting over a prize pool of more than $24,000,000, an unlikely contender entered the ring: a bot from the Elon Musk-backed start-up OpenAI. The bot made short work of Danylo “Dendi” Ishutin, one of the most respected pros in the scene, though only for the opening phase of the game (the laning phase).

Defense of the Ancients

Defense of the Ancients, or Dota, started as a mod of Warcraft III all the way back in 2003. Since then, the project developed into a standalone game — Dota 2. Year after year, the player base grew, the game developed, and the scene flourished.

In Dota, two teams of five players fight against each other, with the goal of destroying each other’s base. It’s a surprisingly complex game, with many facets. First, teams take turns picking and banning from a pool of over 100 heroes, each with their own strengths and weaknesses — as well as an arsenal of unique skills and spells. There’s a lot of strategy involved in this phase, as well as in the gameplay itself. In several ways, Dota is like a game of chess. Of course, being nimble with the mouse and keyboard, knowing when to attack and cast abilities is also crucial, but what really makes Dota the hit it is today is the team play. It’s a 5v5 game, with each player playing a distinct and very important role.

The AI didn’t go through all these phases — not yet, at least — so we can still enjoy human supremacy. But for the simpler, early stage of the game, that ship has now sailed.

1v1 gameplay.

Man vs Robot

More than two decades ago, in 1996, IBM’s Deep Blue chess algorithm shocked the world when it defeated the world champion, Garry Kasparov. In 2016, another AI beat the Go world champion, which is even more impressive considering that Go allows roughly 10^600 times more possible games than chess. That’s way, way, way more than there are atoms in the Universe. But Dota, like many other computer games, differs from chess and Go in a few fundamental ways.

For starters, you don’t see the whole map. In chess and Go, both players see the same thing: everything. Dota involves the so-called fog of war — players see only their own side of the map and are unaware of what is happening outside it. Making algorithmic decisions when you don’t have all the information is immensely difficult, yet after only two weeks of training, the AI managed to beat the pros in a 1v1 game.

In those two weeks, it amassed lifetimes of experience, which was easily visible in the matchup. Like the other players who faced it, Dendi expressed surprise that the bot beat him so easily, saying that it “feels a little like [a] human, but a little like something else.”

There’s still a lot of ground to cover before a bot team can take on a human team — the 1v1 matchup is only a small slice of the full 5v5 game — but it feels like AIs are stepping into a whole new world where, frankly, we’d rather they didn’t. Next year, the plan is to have a full bot team take on the humans.

Elon Musk, the mastermind behind Tesla Motors and SpaceX, founded OpenAI as a nonprofit venture to prevent AI from destroying the world, something he has been very vocal about. For now, the AI is limited to eSports parlor tricks, but even this seemed unbelievable just a few months ago. Who knows what will happen next?