There's been a lot of attention paid recently to the existential risks of technologies loosely labeled as Artificial Intelligence.
Dire warnings of an AI-fueled Armageddon have been issued by luminaries such as Elon Musk and Stephen Hawking: predictions that conjure up images of a non-human intelligence intent on exterminating all of humanity.
Maybe they're right. Maybe someday we'll face a rise-of-the-robots. But for now the reality is that we can barely get robots to walk and climb properly, let alone navigate rubble-strewn hallways to rescue people in a disaster.
As frightening (and, sadly, entertaining) as these apocalyptic blockbuster predictions may be, they are probably far in the future, if they happen at all.
Wrong. Forget about Terminators rampaging through shopping malls! Artificial intelligence poses a different and more immediate threat: Predictive Data Analysis.
This is about to get very real because rapid advances in machine learning techniques will soon allow for sophisticated profiling and prediction of future behavior.
This is the imminent threat of AI.
We may not be far from the day when authorities pre-arrest people based on AI predictions of what they might do in the future, because it's safer (and cheaper) than waiting.
This is neither science fiction nor social fantasy. It's a terrifyingly real possibility, made plausible by some stunning recent advances, and it's not clear we appreciate the risks.
But how? I mean, is this really possible?
Although we aren't privy to the inner workings of government-sponsored AI software, we can draw some remarkable insights from what's already being done with anonymized aggregate data, as described in this article in The Guardian...
Scientist Seth Stephens-Davidowitz analyzed anonymized Google search data, uncovering disturbing truths about our desires, beliefs and prejudices.
Check it out... it offers remarkable insights into topics as wide-ranging as sex, terrorism, hate and prejudice. And then consider this...
If such stunning conclusions can be drawn from anonymized data, imagine what can be derived about someone when the system knows precisely who they are. Precisely who YOU are. And has access to even more types of data than what Google collects, including:
Begin by collecting all of this about someone who's been convicted of a criminal act. Then apply sophisticated AI techniques to discover the objectionable tendencies and behaviors in that person's history that preceded the commission of the crime.
The result will be a profile: a set of predictors.
You can see where this is going...
Now do the same thing for every criminal, every last one of them. Include all the mandatory upon-arrest DNA samples.
Unleash machine-learning algorithms on all of this.
The result will be a set of patterns potentially so detailed and so prescient that the system can know, to a high degree of probability, that a criminal will commit another crime even before they themselves know it.
Finally, perform the same analysis for everyone: the entire population. Include DNA samples where you can get them (remarkably often, you can), but even without them you can derive amazingly predictive results.
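The pipeline sketched above — features drawn from a person's history, labels drawn from later convictions, and a model fitted to connect the two — can be expressed in remarkably little code. The sketch below is purely illustrative: the feature names, the toy data, and the use of a simple logistic-regression "risk score" are all my assumptions, not anything drawn from a real government system.

```python
import math

def sigmoid(z):
    """Squash a raw score into a 0..1 probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train_risk_model(rows, labels, lr=0.1, epochs=2000):
    """Fit a logistic-regression 'risk profile' by stochastic gradient descent.
    rows: feature vectors from a person's history; labels: 1 = later offense, 0 = none."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    """Predicted probability that a person with features x will offend."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Invented toy features: [prior_incidents, flagged_searches, unstable_housing]
history = [
    ([3, 5, 1], 1), ([0, 0, 0], 0), ([2, 4, 1], 1),
    ([0, 1, 0], 0), ([4, 6, 1], 1), ([1, 0, 0], 0),
]
w, b = train_risk_model([x for x, _ in history], [y for _, y in history])

print(round(risk(w, b, [3, 5, 1]), 2))  # resembles past offenders: high score
print(round(risk(w, b, [0, 0, 0]), 2))  # resembles non-offenders: low score
```

The unsettling part is not the model — it's trivial — but the inputs: swap the toy features for the full data trail a state can assemble about a named individual, and this same handful of lines becomes a pre-crime oracle.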
No one would argue that it isn't a worthy goal to find ways to prevent horrors like 9/11 or the Sandy Hook school shooting in Newtown, Connecticut, USA (recap: in 2012, 20-year-old Adam Lanza killed 20 children and seven adults).
Mr. Lanza had no criminal record, but an extensive investigation after the shooting revealed some extremely worrying and predictive aspects of his life.
That analysis was performed after-the-fact and by manual methods, not digital ones, but very soon AI analysis of big data will be able to call attention to such threats long before they're imminent. Possibly even months or years before.
You need only see one picture of those blood-soaked little kids to realize how powerfully society might be motivated to act-before-fact.
Hobbled by understaffed departments and tight budgets, the temptation will be extremely strong for government to solve predicted future problems by using incarceration now, rather than expensive ongoing monitoring and supportive intervention.
"It's going to take enormous moral fiber to resist incarcerating people before they've done anything wrong."
Ah, but you say, the Constitution will protect us!
How big do the piles of collapsed buildings or dead kids have to get before the public demands an exception to the Constitution if a technology can predict such events, say, 84% of the time?
Big data could wind up getting us in big trouble.
Maybe someday malevolent machines such as Terminators or evil artificial intelligence systems like SkyNet will be a problem, but right now that's all just Hollywood.
The real threat from artificial intelligence is already almost upon us, and it's… us! We're in danger of creating the psychic "precogs" of the film Minority Report entirely in software, and then using these AI predictions to justify horribly dehumanizing actions.
How we choose to apply the results of artificial intelligence could be far more dangerous than AI itself.
Innocent until proven guilty, eh? For now, yes. But all you have to do is redefine "proven" to include predictions of future guilt by astonishingly prescient artificial intelligence, and you wind up in a dystopian nightmare.
Even now, AI systems are so intricate that scientists struggle to get them to explain how and why they reached the conclusions they did.
Imagine a machine decides you are going to be a criminal, and you are incarcerated, and no one knows how or why it concluded that. It just did. How could you even appeal?
Sounds too far-fetched?
Have a second look at all those dead kids before you answer.
And unlike Terminator robots, this is coming soon.
1. Although medical data is protected by HIPAA regulations in the USA and by a sophisticated set of laws in the EU, anti-terrorism laws allow certain exceptions in the interest of national security.