Tag Archives: AI

AI-assisted test diagnoses prostate cancer from urine with almost 100% accuracy

Credit: Korea Institute of Science and Technology (KIST).

Scientists in South Korea have developed a non-invasive, lightning-fast test that diagnoses prostate cancer with a stunning accuracy of up to 100%. Unlike traditional methods that require a biopsy, this test, which employs a smart AI analysis method, only needs a urine sample.

Prostate cancer is one of the most dangerous types of cancer out there, especially for older men, with about 99% of cases occurring in those over the age of 50. It is the second-leading cause of cancer death for men in the United States, and about 1 in 35 men will die from it.

Patients are typically screened for prostate cancer through the detection of prostate-specific antigen (PSA), a cancer factor, in the blood. The problem is that the diagnostic accuracy of screening for this cancer factor is just 30%. To cover these blind spots, doctors often recommend additional invasive diagnostic methods, such as a biopsy. These measures, although potentially life-saving if the cancer is caught early, can be painful and cause bleeding.

Dr. Kwan Hyi Lee and Professor In Gab Jeong from the Korea Institute of Science and Technology (KIST) may have come up with a much better test.

Their test screens for prostate cancer by looking for four cancer factors in the urine of patients rather than in their blood. These cancer factors are picked up by an ultrasensitive, electrical-signal-based biosensor capable of detecting trace amounts of the selected molecules.

The team of researchers developed and trained an AI that identifies patients with prostate cancer from urine samples by analyzing the complex patterns of the detected signals. Across the 76 urine samples they used, the researchers report almost 100% accuracy.
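
To give a rough sense of what that last step involves (and only as an illustration; this is not the team's code, and the data below is a placeholder), training a small classifier on four biomarker signals might look something like this:

```python
# Minimal sketch of the classification step: four biosensor signal features per
# urine sample, labelled cancer / control. The random data below is a placeholder,
# not the study's actual measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples = 76                           # the study analyzed 76 urine samples
X = rng.normal(size=(n_samples, 4))      # four biomarker signal intensities (placeholder)
y = rng.integers(0, 2, size=n_samples)   # 1 = prostate cancer, 0 = control (placeholder)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small neural network learns the signal patterns that separate the two groups
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```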

“For patients who need surgery and/or treatments, cancer will be diagnosed with high accuracy by utilizing urine to minimize unnecessary biopsy and treatments, which can dramatically reduce medical costs and medical staff’s fatigue,” Professor Jeong said in a statement.

The findings were reported in the journal ACS Nano.

They took our jobs… but we’re okay with it? AI-related job growth linked to improved social welfare

It’s always a bit bittersweet when we talk about AI: on one hand, the promise of automating various processes and producing more value is always exciting, but on the other hand, there’s always the fear of job replacement.

With the constant erosion of the middle class, increased income inequality, and now, a global pandemic to put the world economy on hold, the idea of having AI coming for our jobs can be nothing short of terrifying. But according to a new study, it may not be all that bad. The study found that AI-related job growth correlates with economic growth and improved social welfare.

According to a CNBC/SurveyMonkey Workplace Happiness survey from October last year, 37% of workers between the ages of 18 and 24 are worried about AI eliminating their jobs. Across all demographics, 10% of people are afraid of AI taking their jobs, even though experts say it won’t happen anytime soon. Even so, demand for AI-related jobs has been growing steadily in recent years, and to many people in the workforce, it remains a thorny issue.

Two researchers affiliated with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) wanted to assess just how thorny this issue is. Christos Makridis and Saurabh Mishra analyzed the number of AI-related job listings in the United States between 2014 and 2018, using Stanford HAI’s AI Index, an open-source project that tracks and visualizes data on AI. The first thing they found was that cities with more AI-related job postings exhibit greater economic growth.

This relationship was neither clear-cut nor linear, and the causality is also murky. Cities that were able to leverage their industrial and educational capabilities were more successful in creating AI jobs. In other words, only cities with advanced infrastructure, educated workers, and high-tech services produced numerous AI jobs — and these were also the cities that experienced the most accelerated economic growth. Presumably, it’s not the AI jobs that cause economic growth or vice versa; it’s the underlying conditions that cause both.

But another correlation was even more interesting. When they compared the results with the Gallup U.S. Daily Poll (which measures five components of individual wellbeing: physical, social, career, community, and financial), they found that places with more AI jobs also report more well-being.

“The fact that we found this robust, positive association, even after we control for things like education, age, and other measures of industrial composition, I think is all very positive,” Makridis says.

The study can’t determine if there is any causality involved, but even so, the researchers say city leaders should take note and support smarter industrial policies, focusing on scientific and technological innovation. These policies (along with those that promote higher education) can not only encourage economic development, but also promote positive, transformational shifts among urban residents.

“Given that [cities] have an educated population set, a good internet connection, and residents with programming skills, they can drive economic growth,” Mishra concludes.

Text AI can produce images — and it’s very good at it

This AI was designed to work with text. Now, researchers have tweaked it to work with images, predicting pixels and filling out incomplete images.

GPT-2 is a text-generating algorithm. Trained on billions of words of web text, it’s capable of absorbing the structure of language and then writing texts of its own, starting from simple prompts. The algorithm is trained using unsupervised learning, which spares researchers the laborious work of labelling training data. The AI system was presented in February and proved capable of writing convincing passages of English.

Now, researchers have put GPT-2 up to a different task: working with images.

The algorithm itself is not well-suited to working with images, at least not in a conventional sense. It was designed to work with one-dimensional data (strings of letters), not 2D images.

To bypass this shortcoming, the researchers unfurled images into a single string of pixels, essentially treating pixels as if they were letters. The version of the algorithm trained this way was dubbed iGPT.
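
Here's a rough sketch of that 'unfurling' trick, assuming a small grayscale image and leaving out the color quantization and the transformer itself:

```python
# Sketch of turning a 2D image into a 1D "sentence" of pixel tokens, the trick
# that lets a text model be trained on images. The real iGPT also quantizes
# colors into a small vocabulary; here, 8-bit grayscale values serve as tokens.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)  # placeholder image

sequence = image.flatten()   # row-by-row scan: 1,024 tokens, like letters in a text
print(sequence[:10])

# Autoregressive training pairs: given the first k pixels, predict pixel k+1.
# Completing a half image is the same operation: condition on the top half,
# then sample the remaining tokens one at a time.
k = len(sequence) // 2
context, target = sequence[:k], sequence[k]
print(len(context), "context pixels -> next pixel:", int(target))
```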

They then fed halves of images and asked the AI to complete the picture. Here are some examples:

Image credits: OpenAI.

The results are already impressive. If you look at the lower half of the photos above, they’re all generated by the AI, pixel by pixel, and they look eerily realistic. The three birds, for instance, are shown standing on different surfaces, all of them believable. The droplets of water, too, are rendered in several plausible ways; all in all, it’s an amazing accomplishment from iGPT.

This also hints at one of the holy grails of machine learning: generalizable algorithms. Nowadays, AIs can be very good at a single task (whether it’s chess, text, or images), but it’s still only one task. Using one algorithm for multiple tasks is an encouraging sign for generalizable approaches.

The results are even more exciting when you consider that GPT-2 is already last year’s AI. Recently, the next generation, GPT-3, was presented by researchers and it’s already putting its predecessor to shame, by generating some stunningly realistic texts.

There’s no telling what GPT-3 will be capable of, both in terms of text generation and image generation. It’s exciting — and a little bit scary — to imagine the results.

The original paper can be read here.

Scientists urge ban on AIs designed to predict crime, Minority Report-style

A controversial study employing automated facial recognition algorithms to predict whether a person will commit a crime is due to be published in an upcoming book. But over 1,700 experts, researchers, and academics from the AI field have signed an open letter opposing such research, citing “grave concerns” over the study and urging Springer, the publisher of the book, to withdraw its offer.

Still from the movie Minority Report, starring Tom Cruise. Credit: DreamWorks.

The research, led by a team from Harrisburg University in the U.S., is proposing technology that can predict if someone will commit a crime, a scenario reminiscent of the science fiction book and movie Minority Report — only this time, it’s no fiction.

The researchers claim that would-be offenders can be identified solely by their face with “80% accuracy and with no racial bias” by exploiting huge police datasets of criminal records and biometrics. Layers of deep neural networks then make sense of this data to “produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” according to Nathaniel Ashby, a Harrisburg University professor and co-author of the study slated for publication in the upcoming book series “Springer Nature — Research Book Series: Transactions on Computational Science and Computational Intelligence.”

However, the research community at large begs to differ. Writing to the Springer editorial committee in a recent open letter, over a thousand experts argue that predictive policing software is anything but unbiased. They cite published research showing that facial recognition software is deeply flawed and often works poorly when identifying non-white faces.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” the authors wrote.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalize social hierarchies and legitimize violence against marginalized groups.”

Studies show that people of color are more likely to be treated harshly than white people at every stage of the legal system. Any software built on existing criminal legal frameworks will inevitably inherit these distortions in the data. In other words, the machine will repeat the same prejudices when it comes to determining if a person has the “face of a criminal”, which echoes the 19th-century pseudoscience of physiognomy — the practice of assessing a person’s character or personality from their outer appearance.

“Let’s be clear: there is no way to develop a system that can predict or identify “criminality” that is not racially biased — because the category of “criminality” itself is racially biased,” the authors said.

Lastly, the problem goes beyond how the AI is trained and the bias in its data: the science itself is shaky. The very idea that criminality can be predicted from a face is questionable at best.

Artificial intelligence can definitely be a force for good. Machine learning algorithms are radically transforming healthcare, for instance by allowing professionals to identify certain tumors with greater accuracy than seasoned oncologists. Investors like Tej Kohli and Andreessen Horowitz have bet billions on the next generation of AI-enabled robotics, such as robotic surgeons and bionic arms, to name a few.

But, as we see now, AI can also lead to nefarious outcomes, and it’s still an immature field. After all, such machines are no more ethical or unbiased than their human designers and the data they are fed.

Researchers around the world are rising up against algorithmically driven predictive law enforcement. Also this week, a group of American mathematicians wrote an open letter in the Notices of the American Mathematical Society in which they urge their peers not to work on such software.

The authors of this letter are against any kind of predictive law-enforcement software. Rather than identifying would-be criminals solely by their face, some of this software supposedly “predicts” crimes before they happen, signaling to law enforcement where to direct its resources.

“In light of the extrajudicial murders by police of George Floyd, Breonna Taylor, Tony McDade and numerous others before them, and the subsequent brutality of the police response to protests, we call on the mathematics community to boycott working with police departments,” the letter states.

“Given the structural racism and brutality in US policing, we do not believe that mathematicians should be collaborating with police departments in this manner,” the authors state. “It is simply too easy to create a ‘scientific’ veneer for racism. Please join us in committing to not collaborating with police. It is, at this moment, the very least we can do as a community.”

Can AI replace newsroom journalists?

It’s no secret that journalism is one of the most fragile industries in the world right now. After years where many publishers faced bankruptcy, layoffs, and downsizing, then came the coronavirus crisis — for many newsrooms, this was the final nail in the coffin.

Alas, even more problems are on the way for publishers.

Late last month, Microsoft fired around 50 journalists in the US and another 27 in the UK who were previously employed to curate content from outlets to spotlight on the MSN homepage. Their jobs were replaced by automated systems that can find interesting news, change headlines, and select pictures without human intervention.

“Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, redeployment in others. These decisions are not the result of the current pandemic,” an MSN spokesperson said in a statement.

While it can be demoralizing for anyone to feel obsolete, we shouldn’t call the coroner on journalism just yet.

Some of the sacked journalists warned that artificial intelligence may not be fully familiar with strict editorial guidelines. What’s more, it could end up letting through stories that might not be appropriate.

Lo and behold, this is exactly what happened with an MSN story this week, after the AI mixed up the photos of two mixed-race members of British pop group Little Mix.

The story was about Little Mix singer Jade Thirlwall’s experience with racism. However, the AI used a picture of Thirlwall’s bandmate Leigh-Anne Pinnock to illustrate it. It didn’t take long for Thirlwall to notice, posting on Instagram where she wrote:

“@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group.”

She added: “This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke … It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”

By the looks of it, Thirlwall was unaware that the confusion was due to a mistake made by an automated system. It’s possible the error was caused by mislabelled pictures provided by wire services, although there’s no way to tell for sure because not much detail has been offered by MSN, apart from a formal apology.

“As soon as we became aware of this issue, we immediately took action to resolve it and have replaced the incorrect image,” Microsoft told The Guardian.

Are we entering the age of robot journalism?

My fellow (human) colleagues might rejoice at this news, but really, this happens all the time in newsrooms — even the best of them. For instance, the BBC had to make a formal apology after one of its editors used photos of LeBron James to illustrate the death of Kobe Bryant.

And while some might believe that curating content is an entirely different matter from crafting content from scratch, think again. The Washington Post has invested considerably in AI content generation, producing a bot called Heliograf that writes stories about local news that the staff didn’t have the resources to cover.

The Associated Press has a similar AI that does the same. Such robots are based on Natural Language Generation software that processes information and transforms it into news copy by scanning data from selected sources, selecting an article template from a range of preprogrammed options, then adding specific details such as location, date, and people involved.

For instance, the following short news story that appeared in the Wolverhampton paper the Express and Star was written by AP’s robot.

The latest figures reveal that 56.5 per cent of the 3,476 babies born across the area in 2016 have parents who were not married or in a civil partnership when the birth was registered. That’s a slight increase on the previous year.

Marriage or a same-sex civil partnership is the family setting for 43.5 per cent of children.

The figures mean that parents in Wolverhampton are less likely to get married before having children than the average UK couple. Nationwide, 52.3 per cent of babies have parents in a legally recognised relationship.

The figures on births, released by the Office for National Statistics, show that in 2016, 34 per cent of babies were registered by parents who are listed as living together but not married or in a civil partnership.
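
The template-and-slot approach described above is simple enough to sketch in a few lines. The snippet below is only a toy illustration (not Heliograf or AP's actual software), reusing the figures from the excerpt above as its input data:

```python
# Toy illustration of template-based news generation: structured data in,
# canned sentence templates out. The figures are taken from the excerpt above.
story_data = {
    "area": "Wolverhampton",
    "year": 2016,
    "births": 3476,
    "pct_unmarried": 56.5,
}

template = (
    "The latest figures reveal that {pct_unmarried} per cent of the {births:,} babies "
    "born across the area in {year} have parents who were not married or in a civil "
    "partnership when the birth was registered."
)

comparison = (
    "The figures mean that parents in {area} are less likely to get married before "
    "having children than the average UK couple."
)

print(template.format(**story_data))
print(comparison.format(**story_data))
```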

Unlike a human, robots never tire and can produce thousands of such stories per day. There’s a silver lining though for us journalists — we may have a future yet.

While robots shine when reporting on simple linear stories such as football scores, medal tallies, company profits, and just about anything where the numbers alone tell the story, they are very poor with language and analysis. Can you imagine reading an opinion piece written by a robot? Would you ever trust a robot to write your essay, for that matter? Not really? I thought so, too.

A similar argument can be made for the education industry. Customized learning is one of the main fields of education where AI is set to have a significant impact. It used to be unthinkable to imagine one-on-one tutoring for each and every student out there, for any subject, but now artificial intelligence promises to deliver. For instance, one US-based company called Content Technologies Inc is leveraging deep learning to ‘publish’ customized books: decades-old books that are automatically revamped into smart, relevant learning guides.

But, that doesn’t mean that human teachers can be scrapped entirely. For instance, teachers will have to help students develop non-cognitive skills such as confidence and creativity that are difficult if not impossible to transfer from a machine. Simply put, there’s no substitute for good mentors and guides.

Humans are still much better than AIs at reasoning and storytelling, which are arguably the most important journalistic qualities.

Personally, I hope that ZME readers appreciate the fact that there are real humans who care and put great thought into crafting our stories. We’re not done just yet, so until our robot overlords are ready to take over, perhaps you can stand us a while longer.

How AI and data analytics do their part in the COVID-19 crisis

Qure.ai’s qXR system highlights the lung abnormalities in a chest X-ray scan and explains the logic behind its COVID-19 risk evaluation. Credit: QURE.AI

The coronavirus pandemic has led to an unprecedented medical crisis, whose impact on the economy and the social fabric may be felt for years to come. Seeking to keep things under control, world leaders, policymakers, and public health experts have had to think fast, sometimes making less informed decisions than they would have liked due to the urgency of the matter.

This is where artificial intelligence (AI) and data analytics come in handy, helping organizations and governments to make sense of a seemingly unmanageable world of big data.

Finding the needle in the viral haystack

Now that many countries, including the United States, are considering relaxing physical distancing measures, public health experts and leaders are turning to data analytics in order to mitigate the impact of the outbreak going forward.

In the COVID-19 crisis, healthcare analytics has been used for everything from tracking hospital capacity to identifying high-risk patients. In the future, such insight will become indispensable in order to manage the crisis and save as many lives as possible.

In fact, because everything is so new, there’s a never-before-seen bombardment of information. To make matters worse, there’s a great deal of uncertainty as to which groups are most vulnerable to infection, how long immunity lasts, and so on, which underscores the importance of data accuracy.

“Machine learning and data analytics are going to play a really important role in understanding the spread of disease, as well as understanding the effectiveness of our different responses to disease,” Joe Corkery, MD, director of product management at Google Cloud, told HealthITAnalytics.

“We’re going to see a lot of impact there. We’re seeing drug discovery at scale, and the effect of data analytics in real-time. This kind of research is highlighting the fact that there are a lot of new things that we can do to make data analytics more easily repeatable and specific to healthcare use cases.”

In order to fill in the gaps, research groups such as the Regenstrief Institute are conducting anonymous national surveys to gain a better sense of the underlying prevalence of the virus. During the survey, respondents can report their symptoms from their mobile device, providing researchers with data on disease spread and hotspots.

Elsewhere, researchers at CAS, a division of the American Chemical Society, are leveraging publicly available datasets. Using AI and data analytics, the group of researchers aims to mine big data in order to identify known or potential antiviral compounds that could be turned into treatments for COVID-19.

Likewise, Harvard Medical School and Dana Farber Cancer Institute recently partnered with Google Cloud to take advantage of advanced cloud and analytics technologies to accelerate the discovery of potential therapies.

Besides Harvard, Google is working with a number of academic institutions. One project involving Northeastern University and the University of Virginia aims to track and forecast the spread of COVID-19. 

“To help with these projects, we’ve expanded the coverage of our public data set program and launched a COVID-19-specific public data set program, so that people can query these datasets themselves and join them with other data,” Corkery said.

“We’re really excited to be able to partner with the different organizations that are coming through this program, and to help them understand how we can work together to improve analytics and models that they can apply to the industry.” 

COVID-19 and its legacy on healthcare analytics

The degree of collaboration among research groups across the world has been unprecedented in response to the COVID-19 crisis. After the pandemic is over, solidarity and open access ought to continue. Many changes that we’re seeing today in the academic space might be permanent, with important implications for all sectors of research — healthcare most of all.

In other words, the pandemic might be a catalyst for change for the greater good.

For instance, vaccine development could be transformed going forward. Typically, only a handful of pharma giants and a select few Ivy League institutions would be involved in the development of important vaccines. In the future, partnerships will be a lot more heterogeneous, involving a wider range of stakeholders.

Alibaba designs new AI tool to diagnose coronavirus; it’s 96% accurate

The world’s largest retailer and e-commerce company, the Chinese-based Alibaba Group, is throwing its technical know-how into the fight against the coronavirus outbreak.

The AI-based diagnosis algorithm can identify coronavirus infections from patients’ CT scans.
Image credits Alibaba / Damo Academy.

Alibaba’s research institute Damo Academy has developed a new, AI-driven diagnosis system that can detect coronavirus infections with an accuracy of up to 96% and at record speed, reported Sina Tech News (a local tech outlet, link in Chinese) on Saturday, according to state-run Xinhua News Agency. The system still requires computerized tomography (CT) scans of patients to form a diagnosis, but it is faster and more reliable than human doctors at its intended task.

The tool has already been introduced in the Qiboshan Hospital in Zhengzhou, Henan, and plans are underway to expand it to a further 100 hospitals.

Computer-assisted diagnostics

Researchers at Damo Academy told Sina Tech News that the AI tool can distinguish between patients infected with the COVID-19 coronavirus and those with ordinary viral pneumonia (the two have similar symptoms) with up to 96% accuracy by looking at a patient’s CT scan. The AI was trained using data from more than 5,000 confirmed COVID-19 cases, Alibaba explains, incorporates the latest treatment guidelines and published research on the virus, and takes only 20 seconds to issue a diagnosis.
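
Conceptually, this is an image classification task: a convolutional network trained to label CT images as COVID-19 or ordinary viral pneumonia. Here is a minimal PyTorch sketch of that general idea; random tensors stand in for real scans, and this is not Damo Academy's actual model or data:

```python
# Conceptual sketch only: a CNN trained to separate COVID-19 from ordinary viral
# pneumonia on CT images. Random tensors stand in for real scans and labels.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()                        # small off-the-shelf backbone for illustration
model.fc = nn.Linear(model.fc.in_features, 2)    # 2 classes: COVID-19 vs. viral pneumonia

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

ct_images = torch.randn(8, 3, 224, 224)          # a batch of CT images (placeholder)
labels = torch.randint(0, 2, (8,))               # placeholder labels

model.train()
optimizer.zero_grad()
logits = model(ct_images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print("one training step, loss =", float(loss))
```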

A human doctor, by comparison, can take between 5 and 15 minutes to establish a diagnosis.

The Qiboshan Hospital was built specifically to tackle cases of COVID-19. The hospital already has automated helpers on hand, such as robots that carry medicine for the staff and gadgets which monitor patients’ temperature around the clock. Alibaba says they’re working on introducing it to another 100 healthcare facilities in the provinces of Hubei, Guangdong, and Anhui.

It’s meant to free up medical personnel for other tasks by taking over the simple yet time-consuming task of establishing a diagnosis. CT scans were added as a criterion for the diagnosis of new COVID-19 cases early in February by the Chinese National Health Commission (in addition to the previous nucleic acid test method) in an effort to speed up the process and ensure patients would get treatment as soon as possible. While definitely faster than the alternative, it’s still a very time-consuming task: the CT scans of a single patient can include more than 300 images.

AI is outpacing Moore’s Law

In 1965, American engineer Gordon Moore made the prediction that the number of transistors integrated on a silicon chip doubles every two years or so. This has held roughly true to this day, allowing software developers to count on regular doublings in the performance of their applications. However, the computational power used by artificial intelligence (AI) algorithms seems to have outpaced Moore’s Law.

Credit: Pixabay.

According to a new report produced by Stanford University, AI computational power is accelerating at a much higher rate than the development of processor chips.

“Prior to 2012, AI results closely tracked Moore’s Law, with compute doubling every two years,” the authors of the report wrote. “Post-2012, compute has been doubling every 3.4 months.”
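
To get a feel for the gap, compare the two doubling rates over the same stretch of time. This is back-of-the-envelope arithmetic, not a figure from the report, and the six-year window is only illustrative:

```python
# Back-of-the-envelope comparison of the two growth rates over the same period.
years = 6                                  # e.g. roughly 2012 to 2018
months = years * 12

moores_law_growth = 2 ** (months / 24)     # doubling every ~2 years
ai_compute_growth = 2 ** (months / 3.4)    # doubling every 3.4 months

print(f"Moore's Law over {years} years:         ~{moores_law_growth:,.0f}x")
print(f"AI training compute over {years} years: ~{ai_compute_growth:,.0f}x")
# Roughly 8x versus a few million x.
```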

Stanford’s AI Index 2019 annual report examined how AI algorithms have improved over time. In one chapter, the authors tracked the performance of image classification programs based on ImageNet, one of the most widely used training datasets for machine learning.

According to the authors, over a time span of 18 months, the time required to train a network for supervised image recognition fell from about three hours in late 2017 to about 88 seconds in July 2019.

This phenomenal jump in training time didn’t compromise accuracy. When the Stanford researchers analyzed the ResNet image classification model, they found the algorithm needed 13 days of training time to achieve 93% accuracy in 2017. The cost of training was estimated at $2,323. Only one year later, the same performance cost only $12.

The report also highlighted dramatic improvements in computer vision that can automatically recognize human actions and activities from videos.

These findings highlight the dramatic pace at which AI is advancing. They mean that, more often than not, a new algorithm running on an older computer will be better than an older algorithm on a newer computer.

Other key insights from the report include:

  • AI is the buzzword all over the news, but also in classrooms and labs across academia: in 2018, 21% of computer science Ph.D. candidates chose an AI field as their specialization.
  • From 1998 to 2018, peer-reviewed AI research grew by 300%.
  • In 2019, global private AI investment was over $70 billion, with startup investment $37 billion, mergers and acquisitions $34 billion, IPOs $5 billion, and minority stake $2 billion.
  • In terms of volume, China now publishes the most journal and conference papers in AI, having surpassed Europe last year. It has been ahead of the US since 2006.
  • But that’s just volume; qualitatively speaking, researchers in North America lead the field — more than 40% of AI conference paper citations are attributed to authors from North America, and about 1 in 3 come from East Asia.
  • Singapore, Brazil, Australia, Canada, and India experienced the fastest growth in AI hiring from 2015 to 2019.
  • The vast majority of AI patents filed between 2014 and 2018 were filed in nations like the U.S. and Canada, and 94% of patents are filed in wealthy nations.
  • Between 2010 and 2019, the total number of AI papers on arXiv increased 20 times.

Researchers teach AI to design, say it did ‘quite good’ but won’t steal your job (yet)

A US-based research team has trained artificial intelligence (AI) in design, with pretty good results.

A roof supported by a wooden truss framework.
Image credits Achim Scholty.

Although we don’t generally think of AIs as good problem-solvers, a new study suggests they can learn how to be. The paper describes the process through which a framework of deep neural networks learned human creative processes and strategies and how to apply them to create new designs.

Just hit ‘design’

“We were trying to have the [AIs] create designs similar to how humans do it, imitating the process they use: how they look at the design, how they take the next action, and then create a new design, step by step,” says Ayush Raina, a Ph.D. candidate in mechanical engineering at Carnegie Mellon and a co-author of the study.

Design isn’t an exact science. While there are definite no-no’s and rules of thumb that lead to OK designs, good designs require creativity and exploratory decision-making. Humans excel at these skills.

Software as we know it today works wonders within a clearly defined set of rules, with clear inputs and known desired outcomes. That’s very handy when you need to crunch huge amounts of data, or to make split-second decisions to keep a jet stable in flight, for example. However, it’s an appalling skillset for someone trying their hand, or processors, at designing.

The team wanted to see if machines can learn the skills that make humans good designers and then apply them. For the study, they created an AI framework from several deep neural networks and fed it data pertaining to a human going about the process of design.

The study focused on trusses, which are complex but relatively common design challenges for engineers. Trusses are load-bearing structural elements composed of rods and beams; bridges and large buildings make good use of trusses, for example. Simple in theory, trusses are actually incredibly complex elements whose final shapes are a product of their function, material make-up, or other desired traits (such as flexibility-rigidity, resistance to compression-tension and so forth).

The framework itself was made up of several deep neural networks which worked together in a prediction-based process. It was shown five successive snapshots of the structures (the design modification sequence for a truss), and then asked to predict the next iteration of the design. This data was the same kind engineers use when approaching the problem: pixels on a screen. However, the AI wasn’t privy to any further information or context (such as the truss’ intended use). The researchers emphasized visualization in the process because vision is an integral part of how humans perceive the world and go about solving problems.

In essence, the researchers had their neural networks watch human designers throughout the whole design process, and then try to emulate them. Overall, the team reports, the way their AI approached the design process was similar to that employed by humans. Further testing on similar design problems showed that, on average, the AI can perform just as well as, if not better than, humans. However, the system still lacks many of the advantages a human user would have when problem-solving — namely, it worked without a specific goal in mind (a particular weight or shape, for example), and didn’t receive feedback on how successful it was on its task. In other words, while the program could design a good truss, it didn’t understand what it was doing, what the end goal of the process was, or how good it was at it. So while it’s good at designing, it’s still a lousy designer.
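
The paper describes the full architecture; as a bare-bones illustration of the prediction step described above, here is a hedged sketch of a network that takes five successive design snapshots (as images) and predicts the next one. The layer sizes and data are placeholders, not the study's model:

```python
# Bare-bones sketch of the prediction step: five successive design snapshots in,
# one predicted next snapshot out. Layer sizes and data are placeholders.
import torch
import torch.nn as nn

class NextDesignPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=3, padding=1),   # 5 input frames stacked as channels
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),   # 1 output frame: the predicted design
            nn.Sigmoid(),                                  # pixel intensities in [0, 1]
        )

    def forward(self, frames):
        return self.net(frames)

model = NextDesignPredictor()
snapshots = torch.rand(4, 5, 64, 64)      # batch of 4 sequences, 5 frames of 64x64 pixels each
predicted = model(snapshots)              # shape: (4, 1, 64, 64)
print(predicted.shape)
```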

All things considered, however, the AI was “quite good” at the task, says co-author Jonathan Cagan, professor of mechanical engineering and interim dean of Carnegie Mellon University’s College of Engineering.

“The AI is not just mimicking or regurgitating solutions that already exist,” Professor Cagan explains. “It’s learning how people solve a specific type of problem and creating new design solutions from scratch.”

“It’s tempting to think that this AI will replace engineers, but that’s simply not true,” said Chris McComb, an assistant professor of engineering design at the Pennsylvania State University and paper co-author.

“Instead, it can fundamentally change how engineers work. If we can offload boring, time-consuming tasks to an AI, like we did in the work, then we free engineers up to think big and solve problems creatively.”

The paper “Learning to Design From Humans: Imitating Human Designers Through Deep Learning” has been published in the Journal of Mechanical Design.

AI is beating almost all of mankind at Starcraft

A new algorithm called AlphaStar is beating all but the very best human players at Starcraft. This is not only a remarkable achievement in itself, but it could teach AIs how to solve complex problems in other applications.

A typical Protoss-Zerg combat. Credits: DeepMind.

The foray of AIs into strategy games is not exactly a new thing. Google’s ‘Alpha’ class of AIs, in particular, have taken the world by storm with their prowess. They’re revolutionizing chess and Go — once thought to be insurmountable for an algorithm. Researchers have also set their eyes on other games (DOTA and poker, for instance), with promising but limited results. The sheer complexity of these games, mixed with the fact that players don’t have all the information available to them (as opposed to Go and chess, where you see the entire board freely), raised serious challenges for AIs.

But fret not — our algorithm friends are slowly overcoming them. A new Alpha AI, aptly called AlphaStar, has now reached a remarkable level of prowess, ranking above 99.8% of all active Starcraft II players.

Starcraft is one of the most popular computer strategy games of all time. Its sequel, Starcraft II, features a very similar scenario. The players choose one of three races: the technologically advanced humans, the Protoss (masters of psionic energy), or the Zerg (quickly-evolving biological monsters). They then mine resources, build structures, an army, and try to destroy the opponent(s).

There are multiple viable strategies in Starcraft, and there’s no simple way to overcome your opponent. The infamous ‘fog of war’ also hides your opponent’s movements, so you also have to be prepared for whatever they are doing.

AlphaStar managed to reach Grandmaster Tier — a category reserved for only the best Starcraft players.

Credits: Deep Mind.

Having an AI that is this good at such a complex game would have been unimaginable a decade ago. The progress is so remarkable that one of the researchers at DeepMind, the company training and running these AIs, called it a ‘defining moment’ in his career.

“This is a dream come true,” said Oriol Vinyals, lead, AlphaStar project, DeepMind. “I was a pretty serious StarCraft player 20 years ago, and I’ve long been fascinated by the complexity of the game. AlphaStar achieved Grandmaster level solely with a neural network and general-purpose learning algorithms – which was unimaginable 10 years ago when I was researching StarCraft AI using rules-based systems.

AlphaStar advances our understanding of AI in several key ways: multi-agent training in a competitive league can lead to great performance in highly complex environments, and imitation learning alone can achieve better results than we’d previously supposed.

I’m excited to begin exploring ways we can apply these techniques to real-world challenges, such as helping improve the robustness of AI systems. I’m incredibly proud of the team for all their hard work to get us to this point. This has been the defining moment of my career so far.”

The AI didn’t play with ‘AI cheats’ — it had to face the same constraints as human players:

  • it could only see the map through a camera as a human would;
  • it had to play through a server, not directly;
  • it had a built-in reaction time;
  • it had to select a race and play with it.

Even with all these, the AI did remarkably well.

Every single combat has multiple aspects of strategy involved. Credits: DeepMind.

At every given moment, a Starcraft player (or algorithm) has to choose from up to 10^26 possible actions, all of which have potentially significant consequences. Therefore, researchers took a different approach than with Go or chess. In these ancient games, the AIs learned by playing millions and millions of games, practicing and learning alone. In the Starcraft algorithm, however, some initial information had to be input into the framework.

This is called imitation learning — the AI was essentially taught how to play the game by watching humans. Combined with neural network architectures, this alone made the AI better than most players. Further training in a competitive league of agents allowed it to surpass all but the very best players in the world. This enabled it to learn from existing strategies, but also to develop ideas of its own.
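
In its simplest form, imitation learning is just supervised learning on human games: game observations in, the human player's chosen action out. A stripped-down sketch (toy dimensions and random data, nothing StarCraft-specific):

```python
# Stripped-down behavioral cloning: learn to predict the human player's action
# from the current game observation. Dimensions and data are placeholders.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 128, 10          # toy sizes, not StarCraft's real action space

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_ACTIONS),        # logits over possible actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A "replay": pairs of (observation, action the human took), here random.
observations = torch.randn(512, OBS_DIM)
human_actions = torch.randint(0, N_ACTIONS, (512,))

for epoch in range(5):
    logits = policy(observations)
    loss = loss_fn(logits, human_actions)     # push the policy to match the human's choices
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("final imitation loss:", float(loss))
```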

“StarCraft has been a grand challenge for AI researchers for over 15 years, so it’s hugely exciting to see this work recognised in Nature. These impressive results mark an important step forward in our mission to create intelligent systems that will accelerate scientific discovery,” said Demis Hassabis, co-founder and CEO, DeepMind.

Professional Starcraft players were also impressed and thrilled to see the AI play out its game. As is the case with previous iterations of Alpha AIs, the algorithm came up with new and innovative tactics.

“AlphaStar is an intriguing and unorthodox player – one with the reflexes and speed of the best pros but strategies and a style that are entirely its own,” said Diego “Kelazhur” Schwimer, professional StarCraft II player for Panda Global. “The way AlphaStar was trained, with agents competing against each other in a league, has resulted in gameplay that’s unimaginably unusual; it really makes you question how much of StarCraft’s diverse possibilities pro players have really explored. Though some of AlphaStar’s strategies may at first seem strange, I can’t help but wonder if combining all the different play styles it demonstrated could actually be the best way to play the game.”

It’s an impressive milestone. It’s also one that could get us to think whether teaching AIs how to beat us in strategy war games is a good idea or not. But for now, at least, there’s no need to worry. AIs are very limited in their scope. They can get very good, but strictly at the task they are trained to do — they have no way of applying what they’ve learned in the computer game setting to a real-life war scenario, for instance.

Instead, this application could help researchers learn how to design better AIs for dealing with simple real-world scenarios, like maneuvering a robotic arm or operating efficient heating for smart homes.

The research was published in Nature.

AI enables mind-controlled handwriting in paralyzed person

Credit: Frank Willett.

Technology has greatly helped completely locked-in paralyzed patients to communicate with the outside world. Some of these patients, who previously could only communicate by blinking, have had electrodes implanted in their brains which allow them to move a cursor and select letters from a screen.

At this week’s meeting of the Society of Neuroscience, researchers reported a new experiment that greatly speeds up the process. Instead of typing with a cursor, which is capped at about 39 characters per minute, the patients imagine using a pen to write by hand.

A neural network interprets the command, tracing the intended trajectory of the imaginary pen to form letters and words.
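
Without going into the study's actual model, the general idea maps onto a standard sequence decoder: neural activity in, intended pen-tip velocities out, which are then summed into a trajectory. A hedged sketch with placeholder dimensions:

```python
# Sketch of the decoding idea: a recurrent network maps each time step of neural
# activity to an intended pen velocity (dx, dy), which is integrated into a
# trajectory. Dimensions and data are placeholders, not the study's recordings.
import torch
import torch.nn as nn

N_CHANNELS = 96                                      # electrode channels (placeholder)
gru = nn.GRU(input_size=N_CHANNELS, hidden_size=128, batch_first=True)
readout = nn.Linear(128, 2)                          # per-step pen velocity (dx, dy)

neural_activity = torch.randn(1, 200, N_CHANNELS)    # 200 time bins of firing rates
hidden, _ = gru(neural_activity)
velocities = readout(hidden)                         # shape: (1, 200, 2)
trajectory = torch.cumsum(velocities, dim=1)         # integrate velocity into pen position
print(trajectory.shape)
```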

Researchers report that the patients could complete sentences with 95% accuracy at a speed of about 66 characters per minute, and this could increase significantly with more practice.

What’s more, besides enabling patients who are paralyzed from the neck down to communicate with the outside world, this kind of research will also help scientists gain a better understanding of how the brain processes fine motor movements.

“Handwriting is a fine motor skill in which straight and curved pen strokes are strung together in rapid succession. Because handwriting demands fast, richly varying trajectories, it could be a useful tool for studying how the motor cortex generates complex movement patterns,” the researchers wrote.

Trippy AI writes interactive text adventure game on the fly

Which path will you take? Credit: Flickr, Katy Warner.

Some of the earliest computer games had no graphics at all and instead relied on a text-based user interface. To this day, one of the most popular genres is text adventure gaming, sometimes called interactive fiction, where worlds are described in the narrative and the player submits typically simple commands to interact with the worlds.

If you’re old enough (like I am), you might remember Infocom’s The Hitchhiker’s Guide to the Galaxy and Zork. The format has now been replicated by a machine learning algorithm that uses neural networks to create a text-based adventure game in real-time.

Futurism reports that the game was made by Northwestern University neuroscience graduate student Nathan Whitmore, who was inspired by the Mind Game from the science fiction novel Ender’s Game. The Mind Game adapted to the interests of each student in real-time, and was used by the Battle School staff to analyze the student’s personality and psychology.

The AI is based on the amazing (and, quite frankly, scary) GPT-2, the fake news-writing algorithm created by OpenAI. We covered GPT-2 in a previous story.
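
At its core, a game like this is a prompt-and-continue loop around a language model. Here's a minimal sketch using the publicly released GPT-2 model via the Hugging Face transformers library; it is not Whitmore's actual notebook code, just the general shape of it:

```python
# Minimal prompt-and-continue loop in the spirit of GPT Adventure, using the
# public GPT-2 model. In a real game you'd also trim the history so it fits
# GPT-2's context window.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

story = "You are standing in a dark forest. A narrow path leads north.\n"
print(story)
while True:
    command = input("> ")
    if command.lower() in {"quit", "exit"}:
        break
    prompt = story + "\n> " + command + "\n"
    result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
    continuation = result[0]["generated_text"][len(prompt):]
    print(continuation.strip())
    story = prompt + continuation
```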

If this all sounds exciting, you can have a look for yourself by playing the game. Follow the instructions on the page by first copying the code to your Google Drive account. Don’t be intimidated by the install process — it’s quite straightforward.

Personally, I had a lot of fun playing GPT Adventure, although the environment can get glitchy fast, making the game seem incoherent. I mean… just check out this exchange (the upper-case text is the AI).

Now, imagine the same format only, this time, with motion graphics as well. With a bit more coherence, such a game would look and feel like traveling through a dream. And at the current rate of development, it might not be long before we get the opportunity to play such a game.

So, if any of you had the chance to play GPT Adventure, paste some interactions in the comments section. This should be fun!

Is AI in danger of becoming too male?

Credit: Pixabay.

Juan Mateos-Garcia, Nesta and Joysy John, Nesta

Artificial Intelligence (AI) systems are becoming smarter every day, beating world champions in games like Go, identifying tumours in medical scans better than human radiologists, and increasing the efficiency of electricity-hungry data centres. Some economists are comparing the transformative potential of AI with other “general purpose technologies” such as the steam engine, electricity or the transistor.

But current AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations. They can be gamed, as we have seen with the controversies surrounding misinformation on social media, violent content posted on YouTube, or the famous case of Tay, the Microsoft chatbot, which was manipulated into making racist and sexist statements within hours.

So do we really want to turn these bias-prone, brittle technologies into the foundation stones of tomorrow’s economy?

Minimising risk

One way to minimise AI risks is to increase the diversity of the teams involved in their development. As research on collective decision-making and creativity suggests, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is a far cry from the situation in the community currently developing AI systems. And a lack of gender diversity is one important (although not the only) dimension of this.

A review published by the AI Now Institute earlier this year showed that less than 20% of the researchers applying to prestigious AI conferences are women, and that only a quarter of undergraduates studying AI at Stanford and the University of California at Berkeley are female.






The authors argued that this lack of gender diversity results in AI failures that uniquely affect women, such as an Amazon recruitment system that was shown to discriminate against job applicants with female names.

Our recent report, Gender Diversity in AI research, involved a “big data” analysis of 1.5m papers in arXiv, a pre-prints website widely used by the AI community to disseminate its work.

We analysed the text of abstracts to determine which apply AI techniques, inferred the gender of the authors from their names and studied the levels of gender diversity in AI and its evolution over time. We also compared the situation in different research fields and countries, and differences in language between papers with female co-authors and all-male papers.

Our analysis confirms the idea that there is a gender diversity crisis in AI research. Only 13.8% of AI authors in arXiv are women and, in relative terms, the proportion of AI papers co-authored by at least one woman has not improved since the 1990s.

There are significant differences between countries and research fields. We found a stronger representation of women in AI research in the Netherlands, Norway and Denmark, and a lower representation in Japan and Singapore. We also found that women working in physics, education, biology and social aspects of computing are more likely to publish work on AI compared with those working in computer science or mathematics.

In addition to measuring gender diversity in the AI research workforce, we also explored semantic differences between research papers with and without female participation. We tested the hypothesis that research teams with more gender diversity tend to increase the variety of issues and topics that are considered in AI research, potentially making their outputs more inclusive.

To do this, we measured the “semantic signature” of each paper using a machine learning technique called word embeddings, and compared these signatures between papers with at least one female author and papers without any women authors.
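
The gist of that comparison, sketched below with a tiny placeholder embedding table instead of trained word vectors, is to average word embeddings over each abstract and then compare the centroids of the two groups:

```python
# Gist of the "semantic signature" comparison: average word vectors over each
# abstract, then compare group centroids. The tiny vocabulary, embeddings, and
# abstracts below are placeholders, not the report's data.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["fairness", "gender", "health", "network", "optimization", "gradient"]
embeddings = {w: rng.normal(size=50) for w in vocab}   # stand-in for trained word vectors

def signature(abstract):
    vectors = [embeddings[w] for w in abstract.split() if w in embeddings]
    return np.mean(vectors, axis=0)

papers_with_women = ["fairness gender health network", "health fairness gradient"]
papers_all_male = ["optimization gradient network", "network gradient optimization"]

centroid_a = np.mean([signature(p) for p in papers_with_women], axis=0)
centroid_b = np.mean([signature(p) for p in papers_all_male], axis=0)

cosine = centroid_a @ centroid_b / (np.linalg.norm(centroid_a) * np.linalg.norm(centroid_b))
print("similarity between group signatures:", round(float(cosine), 3))
```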

This analysis, which focuses on the Machine Learning and Social Aspects of Computing field in the UK, showed significant differences between the groups. In particular, we found that papers with at least one female co-author tend to be more applied and socially aware, with terms such as “fairness”, “human mobility”, “mental”, “health”, “gender” and “personality” playing a key role. The difference between the two groups is consistent with the idea that cognitive diversity has an impact on the research produced, and suggests that it leads to increased engagement with social issues.

How to fix it

So what explains this persistent gender gap in AI research, and what can we do about it?

Research shows that the lack of gender diversity in the science, technology, engineering and mathematics (STEM) workforce is not caused by a single factor: gender stereotypes and discrimination, a lack of role models and mentors, insufficient attention to work-life balance, and “toxic” work environments in the technology industry come together to create a perfect storm against gender inclusion.

There is no easy fix to close the gender gap in AI research. System-wide changes aimed at creating safe and inclusive spaces that support and promote researchers from underrepresented groups, a shift in attitudes and cultures in research and industry, and better communication of the transformative potential of AI in many areas could all play a part.

Policy interventions, such as the £13.5m investment from government to boost diversity in AI roles through new conversion degree courses, will go some way towards improving the situation, but broader scale interventions are needed to create better links between arts, humanities and AI, changing the image of who can work in AI.

While there is no single reason why girls disproportionately stop taking STEM subjects as they progress through education, there is evidence that factors including pervasive stereotypes around gender and a teaching environment that impacts the confidence of girls more than boys play a part in the problem. We must also showcase those role models who are using AI to make a positive difference.

One tangible intervention looking to tackle these issues is the Longitude Explorer Prize, which encourages secondary school students to use AI to solve social challenges and work with role models in AI. We want young people, particularly girls, to realise AI’s potential for good and their role in driving change.

By building skills and confidence in young women, we can change the ratio of people who study and work in AI – and help to address AI’s potential biases.

Juan Mateos-Garcia, Director of Innovation Mapping, Nesta and Joysy John, Director of Education, Nesta

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI draws amazing caricatures from photos

Credit: Cao et al.


Artificial intelligence (AI) can handle not only mundane and repetitive tasks, but can also work very well for some creative endeavors. We’ve seen AIs capable of writing poems, novels, movie scripts, and even classical music. Visual arts are a particularly interesting area to apply AIs, with researchers demonstrating algorithms that create paintings in all sorts of styles and, more recently, caricatures that look remarkably similar to what a human artist would draw.

The caricature algorithms were developed by computer scientists at Microsoft and City University of Hong Kong. The engineers made two separate AI systems that form a type of neural net called a Generative Adversarial Network (GAN). One of the algorithms tries to make a realistic version of an input (i.e. someone’s portrait), while the other compares the output to real-world examples in order to evaluate the work.

For this particular study, one of the GANs was built to analyze and exaggerate certain facial features from uploaded photos. The other GAN added pen strokes and artistic styling common among caricatures. “In this way, a difficult cross-domain translation problem is decoupled into two easier tasks,” the authors wrote, referring to their custom neural networks which they called “CariGANs”.

Schematic of CariGANs.
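
At a very high level, the decoupling works as a two-stage pipeline: geometry first, style second. The sketch below shows only that structure, with placeholder networks rather than the CariGAN architectures from the paper (which are trained adversarially, each stage paired with its own discriminator):

```python
# Loose structural sketch of the two-stage idea: one network exaggerates facial
# geometry via predicted landmark displacements, a second restyles the warped
# photo with caricature-like strokes. Both networks here are placeholders.
import torch
import torch.nn as nn

N_LANDMARKS = 63                                    # facial landmarks mentioned in the paper

geometry_net = nn.Sequential(                       # photo -> landmark displacements
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
    nn.Linear(256, N_LANDMARKS * 2),                # (dx, dy) per landmark
)

style_net = nn.Sequential(                          # warped photo -> caricature-styled image
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
)

photo = torch.rand(1, 3, 64, 64)
displacements = geometry_net(photo).view(1, N_LANDMARKS, 2)
warped = photo          # stand-in: the real pipeline warps the photo using the landmarks
caricature = style_net(warped)
print(displacements.shape, caricature.shape)
```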

Interestingly, the algorithms can also work in reverse: converting caricatures into photo-like realistic renditions.

Caricatures to photos. Credit: Cao et al.


There are some limitations to what the CariGANs can achieve. The geometric exaggeration is more obvious in the overall face shape than in other facial features. What’s more, some geometric exaggerations of the ears, hair, wrinkles and such cannot be captured, because the algorithms only read 33 out of 63 landmarks lying on the face contour. This limitation can be solved by adding more landmarks, the researchers said.

It’s also possible to create video caricatures. The clip below shows a caricature of Donald Trump giving a speech, whose facial features have been exaggerated by the AI frame by frame.

The caricature algorithms will be officially presented at SIGGRAPH Asia 2018, which will be held in Tokyo in December. Meanwhile, the work has been published on the pre-print server arXiv.

AI can create convincing talking head from a single picture or painting

Three different source videos bring da Vinci’s Mona Lisa to life. Credit: Samsung.

Researchers used machine learning to create an amazing AI that can create eerie videos of people talking starting from a single frame — a picture or even a painting. The ‘talking head’ in the videos follows the motions of a source face (a real person), whose facial landmarks are applied to the facial data of the target face. As you can see in the presentation video below, the target face mimics the facial expressions and verbal cues of the source. This is how the authors brought Einstein, Salvador Dalí, and even Mona Lisa to life using only a photograph.

This sort of application of machine learning isn’t new. For some years, researchers have been working on algorithms that generate videos which swap faces. However, this kind of software required a lot of training data in video form (at least a couple of minutes of content) in order to generate a realistic moving face for the source. Other efforts rendered 3D faces from a single picture, but could not generate motion pictures.

Credit: Samsung.

Computer engineers at Samsung’s AI Center in Moscow took it to the next level. Their artificial neural network is capable of generating a face that turns, speaks, and can make expressions starting from only a single image of a person’s face. The researchers call this technique “single-shot learning”. Of course, the end result looks plainly doctored, but the life-like quality increases dramatically when the algorithm is trained with more images or frames.

Credit: Samsung.

The authors also employed Generative Adversarial Networks (GAN) — deep neural net architectures comprised of two nets, pitting one against the other. Basically, each model tries to outsmart the other by creating the appearance of something “real”. This competition promotes a higher level of realism.

If you pay close attention to the outputted faces, you’ll notice that they’re not perfect. There are artifacts and weird glitches that give away the fakeness. That being said, this is surely some very impressive work. The next obvious step is making Mona Lisa move her lower body as well. In the future, she might dance for the first time in hundreds of years — or her weird AI avatar might, at least.

The work was documented on the preprint server arXiv.

AI fail: Chinese driver gets fine for scratching his face

A driver in China got a fine for the smallest possible gesture: scratching his face.

A Chinese man had the misfortune of scratching his face as he was passing by a monitoring camera, which landed him a fine and 2 points off of his driver’s license. Image: Sina Weibo.

According to the Jilu Evening Post, the driver was only scratching his face — but his gesture looked like he was talking on the phone. An automated camera took a picture of him, and according to Chinese authorities “the traffic surveillance system automatically identifies a driver’s motion and then takes a photo”. Essentially, the AI operating the camera interpreted the gesture as the driver speaking on the phone, and fined him.

The driver, who has only been identified by his surname “Liu” shared the photo on social media, humorously quipping:

“I often see people online exposed for driving and touching [others’] legs,” he said on the popular Sina Weibo microblog, “but this morning, for touching my face, I was also snapped ‘breaking the rules’!”

After a struggle, he was able to cancel the fine, but this raises important concerns about privacy and AI errors, especially in an “all-seeing” state such as China. The country already has more than 170 million surveillance cameras, with plans to install a further 400 million by 2020. Many of these cameras come with facial recognition technology, and some even have AI capabilities, being able to assess a person’s age, ethnicity, and even gestures. Sometimes, though, they fail.

As the BBC points out, China’s social media was also buzzing with revolt regarding the state’s surveillance policies. China recently implemented a social credit system, intended to standardize the assessment of citizens’ behavior — and input from such cameras is key for the system.

“This is quite embarrassing,” one post commented, “that monitored people have no privacy.”

“Chinese people’s privacy — is that not an important issue?” another asked.

For now, this is indicative of a problem the whole world will have to deal with sooner or later: both AI and surveillance are becoming ever more pervasive in our society, and we’re still not sure how to use them in a way that’s helpful but not intrusive.

Scientists present device that transforms brain activity into speech

The future is here: scientists have unveiled a new decoder that synthesizes a person’s speech using brain signals associated with the movements of their jaw, larynx, lips, and tongue. This could be a game changer for people suffering from paralysis, speech loss, or other neurological impairments.

Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right) which then could be synthesized to reconstruct the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery.

Technology that can translate neural activity into speech would be a remarkable achievement in itself — but for people who are unable to communicate verbally, it would be absolutely transformative. Yet speaking, a process most of us take for granted in our day-to-day lives, is actually very complex, and very hard to digitize.

“It requires precise, dynamic coordination of muscles in the articulator structures of the vocal tract — the lips, tongue, larynx and jaw,” explain Chethan Pandarinath and Yahia Ali in a commentary on the new study.

Breaking speech up into its constituent parts doesn’t really work. Spelling, if you think about it, is a sequential concatenation of discrete letters, whereas speech is a highly efficient form of communication involving a fluid stream of overlapping, complex, multi-articulator vocal tract movements — and the brain patterns associated with these movements are equally complex.

Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF.

The first step was to record cortical activity from the brains of five participants. These volunteers had their brain activity recorded as they spoke several hundred sentences aloud, while the movements of their vocal tracts were also tracked. Then, the scientists reverse-engineered the process, producing speech from brain activity. In trials of 101 sentences, listeners could readily identify and transcribe the synthesized speech.

Several studies have used deep-learning methods to reconstruct audio signals directly from brain signals, but in this study, a team led by postdoctoral researcher Gopala Anumanchipalli tried a different approach. They split the process into two stages: one that decodes the vocal-tract movements associated with speech from brain activity, and another that synthesizes speech from those movements. The resulting speech was played to another group of people, who had no problem understanding it.
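As a rough illustration of that two-stage idea — not the authors’ actual architecture; the layer sizes, feature counts, and data below are hypothetical placeholders — a decoder could first map neural recordings to articulator kinematics and then map those kinematics to an acoustic representation:

```python
import torch
import torch.nn as nn

class ArticulationDecoder(nn.Module):
    """Stage 1: neural activity -> vocal-tract movement features (kinematics)."""
    def __init__(self, n_electrodes=256, n_kinematics=32):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, n_kinematics)

    def forward(self, neural):               # neural: (batch, time, n_electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                   # (batch, time, n_kinematics)

class SpeechSynthesizer(nn.Module):
    """Stage 2: kinematics -> acoustic features (e.g. a mel spectrogram)."""
    def __init__(self, n_kinematics=32, n_mels=80):
        super().__init__()
        self.rnn = nn.LSTM(n_kinematics, 128, batch_first=True, bidirectional=True)
        self.out = nn.Linear(256, n_mels)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)                   # (batch, time, n_mels)

# Toy forward pass with random "brain activity"; in practice a vocoder would
# turn the predicted spectrogram into an audible waveform.
brain_activity = torch.randn(1, 200, 256)    # 200 time steps, 256 electrodes
kinematics = ArticulationDecoder()(brain_activity)
spectrogram = SpeechSynthesizer()(kinematics)
print(spectrogram.shape)                     # torch.Size([1, 200, 80])
```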

In separate tests, researchers asked one participant to speak sentences and then mime speech (making the same movements as speaking, just without the sound). This test was also successful, with the authors concluding that it is possible to decode features of speech that are never audibly spoken.

The rate at which speech was produced was remarkable. Losing the ability to communicate due to a medical condition is devastating. Devices that use movements of the head and eyes to select letters one by one can help, but they produce a communication rate of about 10 words per minute — far slower than the roughly 150 words per minute of natural speech. The new technology approaches the natural speech rate, marking a dramatic improvement.

It’s important to note that this device doesn’t attempt to understand what someone is thinking — only to be able to produce speech. Edward Chang, one of the study authors, explains:

“The lab has never investigated whether it is possible to decode what a person is thinking from their brain activity. The lab’s work is solely focused on allowing patients with speech loss to regain the ability to communicate.”

While this is still a proof-of-concept and needs much more work before it can be practically implemented, the results are compelling. With continued progress, we can finally hope to empower individuals with speech impairments to regain the ability to speak their minds and reconnect with the world around them.

The study was published in Nature. https://doi.org/10.1038/s41586-019-1119-1

Researchers are looking into giving AI the power of reading soldiers’ minds — to help them in battle

The US Army is planning to equip its soldiers with an AI helper. A mind-reading, behavior-predicting AI helper that should make operational teams run more smoothly.

Soldier-AI integration.

The Army hopes that giving AI the ability to interpret the brain activity of soldiers will help it better respond to and support their activity in battle.
Image credits US Army.

We’re all painfully familiar with the autocomplete features on our smartphones or in Google search — but what if we could autocomplete our soldiers’ thoughts? That’s what the US Army hopes to achieve. To that end, researchers at the Army Research Laboratory (ARL), the Army’s corporate research laboratory, have been collaborating with researchers from the University of Buffalo.

A new study published as part of this collaboration looks at how soldiers’ brain activity can be monitored during specific tasks to allow better AI integration with the team’s activities.

Army men

“In military operations, Soldiers perform multiple tasks at once. They’re analyzing information from multiple sources, navigating environments while simultaneously assessing threats, sharing situational awareness, and communicating with a distributed team. This requires Soldiers to constantly switch among these tasks, which means that the brain is also rapidly shifting among the different brain regions needed for these different tasks,” said Dr. Jean Vettel, a senior neuroscientist at the Combat Capabilities Development Command at the ARL and co-author of this current paper.

“If we can use brain data in the moment to indicate what task they’re doing, AI could dynamically respond and adapt to assist the Soldier in completing the task.”

The Army envisions the battlefield of the future as a mesh between human soldiers and autonomous systems. One big part of such an approach’s success rests on these systems being able to intuit what each trooper is thinking, feeling, and planning on doing. As part of the ARL-University of Buffalo collaboration, the present study looks at the architecture of the human brain, its functionality, and how to dynamically coordinate or predict behaviors based on these two.

While the researchers have so far focused on a single person, the purpose is to apply such systems “for a teaming environment, both for teams with Soldiers as well as teams with Autonomy,” said Vettel.

The first step was to understand how the brain coordinates its various regions when executing a task. The team mapped how key regions connect to the rest of the brain (via bundles of white matter) in 30 people. Each individual has a specific connectivity pattern between brain regions, the team reports. So, they then used computer models to see whether activity levels can be used to predict behavior.

Each participant’s ‘brain map’ was converted into a computational model whose functioning was simulated by a computer. The team wanted to see what would happen when a single region of a person’s brain was stimulated. A mathematical framework the team developed was then used to measure how brain activity became synchronized across various cognitive systems in the simulations.
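The paper’s own framework is more involved, but purely as an illustration of the general approach, here is a toy simulation (with a made-up connectivity matrix) in which coupled phase oscillators stand in for brain regions and a simple order parameter tracks how synchronized they become:

```python
import numpy as np

rng = np.random.default_rng(0)

n_regions = 30                               # toy "brain regions"
# Made-up structural connectivity: symmetric, zero diagonal.
connectivity = rng.random((n_regions, n_regions))
connectivity = (connectivity + connectivity.T) / 2
np.fill_diagonal(connectivity, 0)

freqs = rng.normal(1.0, 0.1, n_regions)      # each region's natural frequency
phases = rng.uniform(0, 2 * np.pi, n_regions)
coupling, dt = 0.5, 0.01

def order_parameter(theta):
    """Global synchrony: 0 = incoherent, 1 = fully synchronized."""
    return np.abs(np.mean(np.exp(1j * theta)))

# Kuramoto-style dynamics: each region is pulled toward its neighbours' phases,
# weighted by the connectivity matrix.
for step in range(5000):
    phase_diff = phases[None, :] - phases[:, None]
    pull = (connectivity * np.sin(phase_diff)).sum(axis=1)
    phases += dt * (freqs + coupling / n_regions * pull)
    if step % 1000 == 0:
        print(f"step {step:4d}  synchrony = {order_parameter(phases):.2f}")
```

Stimulating one “region” in such a model amounts to perturbing its phase or frequency and watching how the disturbance spreads through the connectivity pattern.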

Sounds like Terminator

“The brain is very dynamic,” Dr. Kanika Bansal, lead author on the work, says. “Connections between different regions of the brain can change with learning or deteriorate with age or neurological disease.”

“Connectivity also varies between people. Our research helps us understand this variability and assess how small changes in the organization of the brain can affect large-scale patterns of brain activity related to various cognitive systems.”

Bansal says that this study looks into the foundational, very basic principles of brain coordination. However, with enough work and refinement, we may reach a point where these fundamentals can be extended outside of the brain — to create dynamic soldier-AI teams, for example.

“While the work has been deployed on individual brains of a finite brain structure, it would be very interesting to see if coordination of Soldiers and autonomous systems may also be described with this method, too,” Dr. Javier Garcia, ARL neuroscientist and study co-author points out.

“Much how the brain coordinates regions that carry out specific functions, you can think of how this method may describe coordinated teams of individuals and autonomous systems of varied skills work together to complete a mission.”

Do I think this is a good thing? Both yes and no. I think it’s a cool idea. But if I’ve learned anything during my years as a massive sci-fi geek, it’s that AI should not be weaponized. Using such systems to glue combat teams closer together and help them operate more efficiently isn’t weaponizing them per se — but it’s uncomfortably close. Time will tell what such systems will be used for, if we develop them at all.

Hopefully, it will be for something peaceful.

The paper “Cognitive chimera states in human brain networks” has been published in the journal Science Advances.

Humans and computers can be fooled by the same tricky images

Computers interpreted the images as an electric guitar, an African grey parrot, a strawberry, and a peacock (in this order). Credit: Johns Hopkins.

The ultimate goal of artificial intelligence (AI) research is to fully mimic the human brain. Right now, humans still have the upper hand, but AI is advancing at a phenomenal pace. Some argue that AIs built on artificial neural networks still have a long way to go, given that such systems can sometimes be easily fooled by certain cues, such as ambiguous images (e.g. television static). However, a new study suggests that humans aren’t necessarily any better: in some situations, people make the same wrong calls a machine would. We’re already not that different from the machines we built in our image, the researchers point out.

“Most of the time, research in our field is about getting computers to think like people,” says senior author Chaz Firestone, an assistant professor in Johns Hopkins’ Department of Psychological and Brain Sciences. “Our project does the opposite—we’re asking whether people can think like computers.”

Quick: what’s 19×926? I’ll save you the trouble — it’s 17,594. It took my computer a fraction of a fraction of a second to give me the right answer. While we all know computers are far better than humans at crunching raw numbers, they’re quite ill-equipped in other areas where humans perform effortlessly. Identifying objects is one of them: we can easily recognize that an object is a chair or a table, a task that AIs have only recently begun to perform decently.

AIs are what enable self-driving cars to scan their surroundings and read traffic lights or recognize pedestrians. Elsewhere, in medicine, AIs are now combing through millions of images, spotting cancer or other diseases from radiological scans. With each iteration, these machines ‘learn’ and are able to come up with a better result next time.

But despite considerable advances, AI pattern recognition can sometimes go horribly wrong. What’s more, researchers in the field worry that nefarious agents might exploit this fact to purposefully fool AIs. Just tweaking a few pixels can sometimes be enough to throw off an AI. In a security context, this can be troublesome.

Firestone and colleagues wanted to investigate how humans fare in situations where an AI cannot come to an unambiguous answer. The research team showed 1,800 people a series of images that had previously tricked computers and gave the participants the same kind of labeling options the machine had. The participants had to guess which of two options the computer had chosen — one being the computer’s actual decision, the other a random answer. The video below explains how all of this works.
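As a rough sketch of the kind of analysis this paradigm implies — the responses below are simulated, not the study’s data — you could tally how often participants pick the machine’s label and test whether that agreement beats the 50% expected from random guessing:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(42)

# Simulated two-alternative forced-choice data: on each trial, did the
# participant pick the label the computer had chosen (True) or the foil (False)?
n_trials = 1800
picked_machine_label = rng.random(n_trials) < 0.75   # pretend ~75% agreement

agreements = int(picked_machine_label.sum())
agreement_rate = agreements / n_trials

# Is agreement reliably above the 50% chance level?
result = binomtest(agreements, n_trials, p=0.5, alternative="greater")
print(f"agreement: {agreement_rate:.1%}, p = {result.pvalue:.2e}")
```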

“These machines seem to be misidentifying objects in ways humans never would,” Firestone says. “But surprisingly, nobody has really tested this. How do we know people can’t see what the computers did?”

Computers identified the following images as a digital clock, a crossword puzzle, a king penguin, and an assault rifle. Credit: Johns Hopkins.

The participants chose the same answer as the computers 75% of the time. Interestingly, when the game was changed to give people a choice between a computer’s first answer and its next-best guess (e.g. a bagel or a pretzel), humans validated the machine’s first choice 91% of the time. The findings suggest that the gap between human and machine isn’t as wide as some might think. As for whether the people in the study thought like a machine, I personally think the framing is a bit off. These machines were designed by humans, and their behavior is modeled on ours. If anything, these findings show that machines are behaving more and more like humans — and not the other way around.

“The neural network model we worked with is one that can mimic what humans do at a large scale, but the phenomenon we were investigating is considered to be a critical flaw of the model,” said lead author Zhenglong Zhou. “Our study was able to provide evidence that the flaw might not be as bad as people thought. It provides a new perspective, along with a new experimental paradigm that can be explored.”

The findings appeared in the journal Nature Communications.

AI is so good at inventing stories that its creators had to shut it down to avoid ‘fake news’

Credit: Pixabay.

Researchers have designed an artificial intelligence algorithm that can effortlessly write plausible stories. It’s so good that OpenAI — the research institute that built it — has now withheld the full model from the open source community over fears that the technology could be used for nefarious purposes like spreading fake news.

Founded in 2015, OpenAI is a non-profit research organization that was created to develop an artificial general intelligence that is available to everyone. Several Silicon Valley heavyweights are behind the project, including LinkedIn founder Reid Hoffman and Tesla CEO Elon Musk.

For some time, OpenAI has been working on a natural language processing algorithm that can produce natural-sounding text. The latest version of the algorithm, called GPT-2, was trained on more than 8 million web documents shared on Reddit in posts with a “karma” score of 3 or higher. Starting from nothing but a headline, the algorithm is capable of creating a new story, making up attributions and quotes that are disturbingly compelling. It can be used for anything from writing news stories to helping with essays and other pieces of text.
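For a sense of how such a model is prompted in practice, here is a minimal sketch using a publicly released GPT-2 checkpoint via the Hugging Face transformers library — illustrative tooling rather than OpenAI’s own code, with a made-up prompt:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A made-up headline as the prompt; the model continues it token by token.
prompt = "Scientists discover a new species of deep-sea fish that glows in the dark"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=120,            # total length of prompt plus continuation
    do_sample=True,            # sample instead of always taking the likeliest token
    top_k=50,                  # restrict sampling to the 50 most likely tokens
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```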

Here are some examples of GPT-2 in action, in which the algorithm made up a whole story starting from an initial paragraph written by a human.

SYSTEM PROMPT (HUMAN-WRITTEN)

A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)

The incident occurred on the downtown train line, which runs from Covington and Ashland stations.

In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

“The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,” said Tom Hicks, the U.S. Energy Secretary, in a statement. “Our top priority is to secure the theft and ensure it doesn’t happen again.”

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information.

According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

“The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,” Hicks said. “We will get to the bottom of this and make no excuses.

SYSTEM PROMPT (HUMAN-WRITTEN)

Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

MODEL COMPLETION (MACHINE-WRITTEN, FIRST TRY)

The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night. The battle lasted for hours until two of the largest Orcs attempted to overwhelm Aragorn. When they finally stopped, they lay defeated and lifeless for miles and miles.

“I take nothing,” said Aragorn. “But I give my word, at my peril and mine, that I will never forget this day of horror. None of us will forget. Ever!”

“I’ll never forget it!” cried Gimli, who had been in the thick of the battle but hadn’t taken part in it. One of the wounded orcs he had carried off, he was the only one of the survivors who remained uninjured. “We’ll keep the memory of that day of evil, and the war with it, alive as long as we live, my friends!”

“Then we’ll keep it alive as long as we live,” added Legolas. “And we won’t forget the first great battle of the night, even if we may have forgotten the final defeat.”

“I agree,” Gandalf said, “but we will all remember it as the last battle in Middle-earth, and the first great battle of the new day.”

Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.

The Two Rings were returned to Rivendell. Frodo and Sam woke up alone in their room, and Frodo found a note on his pillow. He opened it and read:

May the Power of the Ring be with you always, and may its light never fade. I am not sure if it matters which of the two rings we accept this day but, as you asked me, I have chosen mine. I am sorry to leave you, Frodo, but know that we are very close to the end, and that you are with us forever. May this letter find you safely in Rivendell; and if it does not, then I will accept the ring in your stead. If by any chance you find or give this letter to the enemy, may they learn the strength of the ring and may the Two Rings never be broken!

The generated text certainly has its flaws and is not always entirely coherent, but it’s a very powerful demonstration nonetheless. So powerful, in fact, that OpenAI decided to withhold the full model from the open source community.

“We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” said Jack Clark, policy director at OpenAI, speaking to the BBC.

Of course, a lot of people were not happy, to say the least. After all, the research institute is called OpenAI, not ClosedAI.

https://twitter.com/AnimaAnandkumar/status/1096209990916833280

OpenAI says that its research should be used to launch a debate about whether such algorithms should be allowed for news writing and other applications. Meanwhile, OpenAI is certainly not the only research group working on similar technology, which puts the effectiveness of OpenAI’s decision into question. After all, it’s only a matter of time — perhaps just months — before the same results are independently replicated elsewhere.

“We’re not at a stage yet where we’re saying, this is a danger,” OpenAI’s research director Dario Amodei said. “We’re trying to make people aware of these issues and start a conversation.”

“It’s not a matter of whether nefarious actors will utilise AI to create convincing fake news articles and deepfakes, they will,” Brandie Nonnecke, director of Berkeley’s CITRIS Policy Lab told the BBC.

“Platforms must recognise their role in mitigating its reach and impact. The era of platforms claiming immunity from liability over the distribution of content is over. Platforms must engage in evaluations of how their systems will be manipulated and build in transparent and accountable mechanisms for identifying and mitigating the spread of maliciously fake content.”