"

Bradley C. Love

Funded by

Fear Humans, Not Artificial Intelligence

Published on May 15, 2016, by Bradley C. Love

Are we on the cusp of creating super-intelligent machines? Would such a super-intelligence put humanity at existential risk? Certainly, leaders in academia and industry are convinced that the danger of our own creations turning on us is real. For example, Elon Musk, founder of Tesla Motors and SpaceX, has set up a billion-dollar non-profit, with contributions from tech titans such as Amazon, to prevent an evil Artificial Intelligence (AI) from bringing about the end of humanity. Universities such as MIT, Oxford, and Cambridge have established institutes to address the issue. Luminaries like Bill Joy, Bill Gates, and Stephen Hawking have all raised the alarm.

The end would appear nigh unless we act before it’s too late. Alternatively, perhaps science fiction and industry-fuelled hype have overcome better judgment. The cynic might say that this doomsday vision has taken on religious proportions. While previous generations dreamed of exploring the stars and interacting with alien species, the current technological and cultural zeitgeist is decidedly dystopian. This vision is buoyed in the tech world because it feeds egos — what conceit could be greater than believing one’s work could usher in such rapid innovation that history as we know it ends? No longer are tech figures cast as mere business leaders, but instead as gods who will determine the future of humanity and beyond. By the same token, frontline tech workers can reconceptualise their lives as involving something larger than properly pairing ads with cat videos using efficient algorithms. Who could blame them? For Judgment Day researchers, proclamations of an “existential threat” are not just a call to action, but a call to be funded generously and an opportunity to rub shoulders with the tech elite.

So, are smart machines more likely to kill us, save us, or simply drive us to work? To answer this question, it helps to step back and look at what is actually happening in AI. The basic technologies, such as those recently employed by Google’s DeepMind to defeat a human expert at the game Go, are simply refinements of technologies developed in the 1980s. There has been no qualitative breakthrough in approach. Instead, performance gains are attributable to larger training sets (also known as Big Data) and increased processing power. What is unchanged is that most machine systems work by maximising some kind of objective. In a game, the objective is simply to win, which is formally defined (e.g., checkmate the opponent’s king in chess). This is one reason why games (checkers, chess, Go) are AI mainstays — it’s easy to specify the objective function.
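
To make that concrete, a game objective can be written down in a few lines. The sketch below is a minimal illustration in Python with invented names, not code from any actual system: the entire goal collapses to a single number to be maximised, and all the difficulty lies in searching for the moves that achieve it.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class FinalState:
        """Outcome of a finished game; winner is None for a draw."""
        winner: Optional[str]

    def game_objective(state: FinalState, player: str) -> float:
        """The single number a game-playing system tries to maximise."""
        if state.winner == player:
            return 1.0    # win
        if state.winner is None:
            return 0.0    # draw
        return -1.0       # loss

    # The system "wants" whatever line of play makes this value as large as possible.
    print(game_objective(FinalState(winner="black"), "black"))  # prints 1.0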

In other cases, it may be harder to define the objective, and this is where AI could go wrong. However, AI is more likely to go wrong out of incompetence than malice. For example, imagine that the US nuclear arsenal during the Cold War was under the control of an AI designed to thwart a sneak attack by the Soviet Union. Through no action of the Soviet Union, a nuclear reactor meltdown occurs and the power grid temporarily collapses. The AI’s sensors detect the disruption and fallout, leading the system to infer that an attack is underway. The President instructs the system in a shaky voice to stand down, but the AI takes the troubled voice as evidence that the President is being coerced. Missiles released. End of humanity. The AI was simply following its programming, which led to a catastrophic error. This is exactly the kind of deadly mistake that humans almost made during the Cold War. Our destruction would be attributable to our own incompetence rather than an evil AI turning on us, no different from an autopilot malfunctioning on a jumbo jet and sending its unfortunate passengers to their doom. In contrast, human pilots have purposefully killed their passengers, so perhaps we should welcome self-driving cars.
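
The scenario can even be caricatured in a few lines of toy Python (every signal, threshold, and rule below is invented for illustration): each programmed rule looks defensible on its own, yet together they turn a reactor meltdown and a nervous voice into a launch order.

    def infer_attack(radiation_detected: bool, grid_down: bool) -> bool:
        # Programmed rule: fallout plus a grid collapse is read as an attack.
        return radiation_detected and grid_down

    def trust_stand_down(order_given: bool, voice_stress: float) -> bool:
        # Programmed rule: a stand-down order in a stressed voice is read as coerced.
        return order_given and voice_stress < 0.5

    def decide(radiation_detected: bool, grid_down: bool,
               order_given: bool, voice_stress: float) -> str:
        if infer_attack(radiation_detected, grid_down) and \
                not trust_stand_down(order_given, voice_stress):
            return "launch"   # the system does exactly what it was programmed to do
        return "hold"

    # A reactor meltdown (not an attack) plus a shaky-voiced stand-down order:
    print(decide(True, True, True, 0.9))  # prints "launch"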

Of course, humans could design AIs to kill, but again this is people killing each other, not some self-aware machine deciding on this course of action. Western governments have already released computer viruses, such as Stuxnet, to target critical industrial infrastructure. Future viruses could be more clever and deadly. However, none of this is new and essentially follows the arc of history where humans use available technologies to kill one another. The AI would simply be following its programming and objective function.

Apart from people using AI to kill one another (rather than the AI deciding to do it itself), there are real dangers from AI, but these dangers are economic and social in nature. Clever AI will create tremendous wealth for society, but it will leave many people without jobs. Unlike during the industrial revolution, there may be no jobs left for some segments of society, because machines may be better at every possible job. There will not be a flood of replacement “AI repair person” jobs to take up the slack. Already, high-tech companies with massive valuations, such as Facebook, employ relatively few people. For historical perspective, it is also important to keep in mind that the Luddites were not irrational — they really did lose their high-paying textile jobs to machines, and it took two generations before their descendants’ wages recovered to the levels they had earned.

There will be many losers as AI improves. Of course, these losses will be more than offset by gains in productivity, so the problem is essentially a political, social, and economic challenge of how to properly assist those (most of us?) who will be displaced by machines. Notice the danger here is not being killed by machines, but rather being outcompeted by people employing machines. Even that is not really the danger — the true danger is that people will not look after one another as machines permanently displace entire classes of labor.

In summary, we should focus on the very real challenges to our survival, such as climate change and weapons of mass destruction, not fanciful killer AI robots. For the foreseeable future, machines will remain a threat only when directed by humans, whether through malice or carelessness. The real challenges we face from AI are economic and social, but these are human problems that do not involve defending ourselves against sentient killer robots that have turned on us. If the machines ever come for us, they will likely be sent by other humans rather than acting of their own accord. Taxi and delivery drivers will be made redundant by machines, not mowed down. In almost all cases, the dangers of AI are really about humans behaving badly.
