Tag Archives: skynet


Elon Musk reveals that Neuralink is aimed at stopping AIs from taking over

Musk plans to combat the rise of dangerous AI with Neuralink, a brain-computer interface which would allow us to keep tabs on the systems and prevent them from “becoming other”, he said.

Human and robot hand.

Image credits VISLOQ / Pixabay.

About one month ago, billionaire Elon Musk revealed his latest venture, Neuralink, in an interview with Wait But Why. In the short term, its aim will be to develop and market a device that can help people with severe brain injuries communicate and interact with the world around them through computers. In the long term, the company hopes the tech will enable people to communicate by “consensual telepathy” and effectively turn cloud-based AI into an extension of the human brain.

There’s a slew of reasons why we’d want this, primarily because telepathy is freaking cool. Then there’s the more boring stuff such as improved communication and connectivity, faster exchange of ideas, easier pooling of knowledge for research, understanding your fellow man, things like that. But!

Good ol’ Musk may have another, longer-term goal in mind for Neuralink. Responding to a tweet on Sunday, the entrepreneur revealed that “the aspiration” behind the new company is to protect humanity from homicidal AIs by putting the reins firmly in our brains.

On the off chance you don’t know what Skynet is, shame on you. It’s the name of a fictional, self-aware AI system in the “Terminator” movie series, which saw humanity as a threat and tried its best to wipe us out. In his interview with WBW, Musk said Neuralink’s goal is to build “micron-sized devices” to mediate human-machine interfaces at all times. Not only would this let us keep AIs under control, it should also allow us to communicate in what is essentially machine-powered telepathy, unshackling communication from the constraints of words and the act of talking.

“If I were to communicate a concept to you, you would essentially engage in consensual telepathy. You wouldn’t need to verbalize unless you want to add a little flair to the conversation or something,” he said. “[…] but the conversation would be conceptual interaction on a level that’s difficult to conceive of right now.”



NSA’s Skynet might be marking innocent people on its hit list

Between 2,500 and 4,000 so-called ‘extremists’ have been killed by drone strikes and kill squads in Pakistan since 2004. Since as early as 2007, the NSA has targeted terror suspects based on metadata supplied by a machine learning program named Skynet. Who thought it was a bright idea to name a machine designed to list people for assassination after Skynet is anyone’s guess, but that’s beside the point. The real point is that the program’s inner workings, revealed in part by Edward Snowden’s leaks, suggest it might be targeting innocent people.

MQ-9 Reaper taxiing. Image: Wikimedia Commons


Ars Technica talked to Patrick Ball, a data scientist and the executive director of the Human Rights Data Analysis Group. Judging from how Skynet works, Ball says the way the machine chooses who deserves to be on the blacklist appears scientifically unsound.


In a nutshell, Skynet works like most corporate Big Data machine learning algorithms: it mines the cellular network metadata of 55 million people and assigns each a score, with the highest scores pointing to terrorist activity. So, based on who you call, how long your calls last, how frequently you dial a number, where you are and where you move, Skynet can tell whether or not you’re a terrorist. Swapping SIM cards or phones is judged as activity suspiciously linked to terrorism. In all, the NSA uses more than 80 different properties to build its blacklist.
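A toy sketch of the scoring step described above might look like the following. Every feature name and weight here is an illustrative assumption; the actual ~80 properties, and how the NSA combines them, are not public.

```python
# Hypothetical metadata-based scoring, in the spirit the article describes.
# Feature names and weights are invented for illustration only.

def score_subscriber(metadata):
    """Assign a 'suspiciousness' score from call-record metadata."""
    score = 0.0
    # Travel behaviour: movement across many cell towers
    score += 0.3 * metadata.get("distinct_cell_towers", 0) / 100
    # Calling behaviour: fraction of very short calls
    score += 0.2 * metadata.get("short_calls_fraction", 0.0)
    # Handset behaviour: SIM or phone swaps read as evasion
    score += 0.5 * metadata.get("sim_swaps", 0)
    return score

traveller = {"distinct_cell_towers": 80, "short_calls_fraction": 0.6, "sim_swaps": 2}
print(score_subscriber(traveller))  # ~1.36 under these made-up weights
```

Note how behaviour that is perfectly innocent for a journalist or a frequent traveller (lots of movement, many short calls, a swapped SIM) drives the score up all the same, which is exactly the failure mode discussed below.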


So, judging from behaviour alone, Skynet builds a list of potential terrorists. But will the algorithm return false positives? In one of the NSA’s leaked slides from a presentation of Skynet, the agency’s engineers boasted how well the algorithm works by including the highest-rated person on the list, Ahmad Zaidan. Thing is, Zaidan isn’t a terrorist but Al Jazeera’s long-time bureau chief in Islamabad. As part of the job, Zaidan often meets with terrorists for interviews and moves across conflict zones to report. You can see from the slide that Skynet identified Zaidan as a “MEMBER OF AL-QA’IDA.” Of course, no kill squad was sent for Zaidan because he is a well-known journalist, but one can only wonder about the fate of more obscure figures who had the misfortune of fitting “known terrorist” patterns.

According to Ball, the NSA is doing ‘bad science’ by training its algorithm ineffectively. Skynet is trained on a subset of 100,000 randomly selected people, defined by their phone activity, plus a group of seven known terrorists. The NSA scientists fed the algorithm the behaviour of six of the terrorists, then asked Skynet to find the seventh in the pool of 100,000.
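That setup can be reproduced in miniature. Everything below is a stand-in: the six-dimensional feature vectors, the assumption that known positives cluster tightly in feature space, and the simple nearest-centroid ranking used in place of the NSA’s actual random forest.

```python
import random

random.seed(0)

# Toy reconstruction of the training setup described above: a pool of
# randomly selected subscribers plus seven known positives, six used for
# training and the seventh hidden in the pool. Feature vectors are
# invented stand-ins for the ~80 metadata properties.
DIMS = 6

def random_profile():
    return [random.random() for _ in range(DIMS)]

def positive_profile():
    # Assumption: known positives cluster tightly in feature space.
    return [0.95 + random.uniform(-0.005, 0.005) for _ in range(DIMS)]

pool = [random_profile() for _ in range(100_000)]
known_positives = [positive_profile() for _ in range(7)]

train, held_out = known_positives[:6], known_positives[6]
pool.append(held_out)  # hide the seventh positive among the 100,000

# Stand-in classifier: rank everyone by distance to the mean positive profile.
centroid = [sum(xs) / len(xs) for xs in zip(*train)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

ranked = sorted(pool, key=lambda p: distance(p, centroid))
print(ranked[0] is held_out)  # did the held-out positive rank first?
```

The toy succeeds only because its positives were generated to be nearly identical; real suspects need not resemble the six training examples at all, which is Ball’s larger point below.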

“First, there are very few ‘known terrorists’ to use to train and test the model,” Ball said. “If they are using the same records to train the model as they are using to test the model, their assessment of the fit is completely bullshit. The usual practice is to hold some of the data out of the training process so that the test includes records the model has never seen before. Without this step, their classification fit assessment is ridiculously optimistic.”
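Ball’s hold-out point can be demonstrated with any classifier that memorises its training data. The sketch below uses a 1-nearest-neighbour classifier on synthetic data (both choices are assumptions, made only to make the effect visible): accuracy measured on the training records is a perfect 1.00 no matter how poorly the model generalises.

```python
import random

random.seed(1)

# Ball's point in miniature: a memorising classifier (here, 1-nearest-
# neighbour) looks perfect when "tested" on its own training records,
# which says nothing about how it generalises. Synthetic data: the label
# follows one feature, with 20% label noise.
def make_record():
    x = random.random()
    noisy = random.random() < 0.2
    label = int((x > 0.5) != noisy)
    return (x, label)

data = [make_record() for _ in range(200)]
train, test = data[:150], data[150:]

def predict(train_set, x):
    nearest = min(train_set, key=lambda record: abs(record[0] - x))
    return nearest[1]

def accuracy(train_set, eval_set):
    hits = sum(predict(train_set, x) == y for x, y in eval_set)
    return hits / len(eval_set)

print(f"fit on training data: {accuracy(train, train):.2f}")  # always 1.00
print(f"fit on held-out data: {accuracy(train, test):.2f}")   # noticeably lower
```

On the training set every record’s nearest neighbour is itself, so the “assessment of fit” is flawless by construction; only the held-out records reveal the true error rate.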


According to the leaked slides, Skynet has a false positive rate of between 0.008% and 0.18%, which sounds pretty good but is actually enough to put thousands of people on a blacklist. Nobody knows whether the NSA applies manual triage (it probably does), but the risk of ordering hits on innocent people is definitely on the table.

“We know that the ‘true terrorist’ proportion of the full population is very small,” Ball pointed out. “As Cory [Doctorow] says, if this were not true, we would all be dead already. Therefore a small false positive rate will lead to misidentification of lots of people as terrorists.”
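The base-rate arithmetic is easy to check against the article’s own figures:

```python
# Back-of-the-envelope arithmetic on the leaked figures: even the best-case
# false positive rate, applied to all 55 million monitored subscribers,
# flags thousands of people.
population = 55_000_000
for label, fpr in (("best case, 0.008%", 0.00008), ("worst case, 0.18%", 0.0018)):
    print(f"{label}: ~{round(population * fpr):,} false positives")
```

At the best-case rate of 0.008% that is roughly 4,400 people flagged in error; at 0.18%, about 99,000. With only a handful of true positives in the population, almost everyone the classifier flags is innocent.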

“The larger point,” Ball added, “is that the model will totally overlook ‘true terrorists’ who are statistically different from the ‘true terrorists’ used to train the model.”

“Government uses of big data are inherently different from corporate uses,” Bruce Schneier, a security guru, told Ars Technica. “The accuracy requirements mean that the same technology doesn’t work. If Google makes a mistake, people see an ad for a car they don’t want to buy. If the government makes a mistake, they kill innocents.”

“On whether the use of SKYNET is a war crime, I defer to lawyers,” Ball said. “It’s bad science, that’s for damn sure, because classification is inherently probabilistic. If you’re going to condemn someone to death, usually we have a ‘beyond a reasonable doubt’ standard, which is not at all the case when you’re talking about people with ‘probable terrorist’ scores anywhere near the threshold. And that’s assuming that the classifier works in the first place, which I doubt because there simply aren’t enough positive cases of known terrorists for the random forest to get a good model of them.”