Scientists urge ban on AIs designed to predict crime, Minority Report-style

A controversial study employing automated facial recognition algorithms to predict whether a person will commit a crime is due to be published in an upcoming book. But over 1,700 experts, researchers, and academics in AI research have signed an open letter opposing such research, citing “grave concerns” over the study and urging Springer, the book’s publisher, to withdraw its offer.

Still from the movie Minority Report, starring Tom Cruise. Credit: DreamWorks.

The research, led by a team from Harrisburg University in the U.S., proposes technology that can supposedly predict whether someone will commit a crime, a scenario reminiscent of the science fiction story and movie Minority Report — only this time, it’s not fiction.

The researchers claim that would-be offenders can be identified solely by their face with “80% accuracy and with no racial bias” by exploiting huge police datasets of criminal records and biometrics. Layers of deep neural networks then make sense of this data to “produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” according to Nathaniel Ashby, a Harrisburg University professor and co-author of the study slated for publication in the upcoming book series “Springer Nature — Research Book Series: Transactions on Computational Science and Computational Intelligence.”

However, the research community at large begs to differ. Writing to the Springer editorial committee in a recent open letter, over a thousand experts argue that predictive policing software is anything but unbiased. They cite published research showing that facial recognition software is deeply flawed and often works poorly when identifying non-white faces.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” the authors wrote.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalize social hierarchies and legitimize violence against marginalized groups.”

Studies show that people of color are more likely to be treated harshly than white people at every stage of the legal system. Any software built on existing criminal legal frameworks will inevitably inherit these distortions in the data. In other words, the machine will repeat the same prejudices when it comes to determining if a person has the “face of a criminal”, which echoes the 19th-century pseudoscience of physiognomy — the practice of assessing a person’s character or personality from their outer appearance.
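How such distortions get baked in can be illustrated with a toy simulation (entirely hypothetical numbers, not from the study): two groups with an identical underlying offense rate, where one group is policed three times as heavily. Arrest records — the labels any such model trains on — then make the more heavily policed group appear roughly three times as crime-prone.

```python
# Toy sketch of data bias: same true behavior, different surveillance.
# All rates here are made up for illustration.
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05                 # identical for both groups
POLICING_RATE = {"A": 0.10, "B": 0.30}   # group B is watched 3x as much

def simulate_arrest_rate(group, n=100_000):
    """Fraction of a group that ends up with an arrest record."""
    arrests = 0
    for _ in range(n):
        offended = random.random() < TRUE_OFFENSE_RATE
        caught = random.random() < POLICING_RATE[group]
        if offended and caught:
            arrests += 1
    return arrests / n

rate_a = simulate_arrest_rate("A")
rate_b = simulate_arrest_rate("B")

# A model trained on these arrest records "learns" that group B is
# roughly 3x more criminal, although true offense rates are identical.
print(f"group A arrest rate: {rate_a:.4f}")
print(f"group B arrest rate: {rate_b:.4f}")
print(f"apparent risk ratio in the data: {rate_b / rate_a:.1f}x")
```

No facial-recognition model ever needs to see a face for this skew to appear: the labels themselves already encode the disparity in policing.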

“Let’s be clear: there is no way to develop a system that can predict or identify “criminality” that is not racially biased — because the category of “criminality” itself is racially biased,” the authors said.

Lastly, the problem goes beyond how the AI is trained and the biases in its data — the science itself is shaky at best. The very idea that criminality can be predicted at all is dubious.

Artificial intelligence can certainly be a force for good. Machine learning algorithms are radically transforming healthcare, for instance by allowing professionals to identify certain tumors with greater accuracy than seasoned oncologists. Investors like Tej Kohli and Andreessen Horowitz have bet billions on the next generation of AI-enabled robotics, such as robotic surgeons and bionic arms, to name a few.

But, as we see now, AI can also lead to nefarious outcomes, and it’s still an immature field. After all, such machines are no more ethical or unbiased than their human designers and the data they are fed.

Researchers around the world are pushing back against algorithmically predictive law enforcement. Also this week, a group of American mathematicians published an open letter in the Notices of the American Mathematical Society urging their peers not to work on such software.

The authors of this letter oppose any kind of predictive law-enforcement software. Rather than identifying would-be criminals solely by their face, some of this software supposedly “predicts” crimes before they happen, signaling to law enforcement where to direct their resources.

“In light of the extrajudicial murders by police of George Floyd, Breonna Taylor, Tony McDade and numerous others before them, and the subsequent brutality of the police response to protests, we call on the mathematics community to boycott working with police departments,” the letter states.

“Given the structural racism and brutality in US policing, we do not believe that mathematicians should be collaborating with police departments in this manner,” the authors state. “It is simply too easy to create a ‘scientific’ veneer for racism. Please join us in committing to not collaborating with police. It is, at this moment, the very least we can do as a community.”
