5 Things Everyone Gets Wrong about Artificial Intelligence and Machine Learning
For some, the current advances in artificial intelligence are a curiosity; for others, an overstated storm in a teacup; for others still, an area of equal excitement and concern that we really ought to be paying more attention to. At one end of the spectrum, there are people like Stephen Hawking, warning that “the development of full artificial intelligence could spell the end of the human race.” At the other end are people like MIT robotics professor Rodney Brooks, who argues that current progress in machine learning is neither as frightening nor as promising as has been suggested.
What’s undoubtedly the case is that AI and machine learning are receiving more interest than at any time since the science fiction heyday of the 1960s – perhaps more interest than ever before. Yet a lot of writing about AI is ill-informed, misleading, or based more on what the writer has seen in the cinema than what’s happening in the real world. That leads to all kinds of misconceptions – here are the ones that come up most often.
1. The distinction between artificial intelligence and artificial general intelligence
One key mistake that’s made in talking about artificial intelligence is that the terms we use are seldom properly defined. So we use ‘artificial intelligence’ to mean anything from a basic chatbot, to AlphaGo (the computer that is one of the world’s leading Go players), to a Terminator-like robot that can outthink a human.
But these are very much not the same thing. The distinction is between artificial intelligence (also sometimes called artificial narrow intelligence, to make the difference clearer) and artificial general intelligence. For instance, a chess-playing computer might be able to beat grandmasters with ease, but not be able to beat a toddler at telling the difference between a photo of a duck and a photo of a cow. That computer has narrow, domain-specific intelligence, and this kind of narrow intelligence is an area in which progress is coming on in leaps and bounds.
By contrast, humans are generally intelligent. We can play chess, identify the differences between photos, read someone’s emotions in their expression, coordinate our limbs in precise movements, translate languages and all sorts of other things besides. This versatility of intelligence is something that no computer is capable of – yet. Indeed, while your mobile phone can complete more computations per second than any human in the world, there isn’t an artificial general intelligence that can outthink a puppy in general measures of intelligence, let alone a human.
This distinction is something that frequently leads to people talking past each other, when both are using “artificial intelligence”, but one is referring to narrow and one to general intelligence. You might find yourself asking how artificial intelligence can possibly be a danger when Amazon’s advertising algorithms, on noting that you have bought a mattress, believe that this is only the first step in an exciting collection of mattresses. But this represents a failure of narrow intelligence, not an attempt at general intelligence. And researchers disagree about whether the right way to build an artificial general intelligence is to focus on narrow intelligence and then widen its remit, or whether general intelligence requires a different approach altogether.
2. What artificial intelligence looks like in practice
Illustrating stories about artificial intelligence is hard work for newspaper editors. They typically have to resort to rows of green code, like a low-budget hacker movie, or they just give in and use a photo of the Terminator. But neither is accurate. Because the most intelligent things on Earth at the moment are humans, we have a tendency to imagine artificial intelligence as android intelligence (an android being a robot that looks like a human – think of the Terminator, or Data from Star Trek). You might well have seen creepy photos of modern androids that move clumsily and that no one could possibly mistake for human.
But there’s no reason at all to think that artificial intelligence might look like us. After all, what’s the need? You don’t need your laptop or your phone to have a face, least of all one that looks convincingly human, so why would you need a greater intelligence to look like you?
But this isn’t just about what artificial intelligence would physically look like. It’s also about its impact on the world and on our day-to-day lives. Think about the things that you take for granted that might have seemed bizarre just a few years ago. You might go home and ask Alexa to play some music for you, from your Spotify playlist where you know you’ll get a mixture of songs you already know and songs that it’s been predicted you might like. You’ll get automated marketing emails that process huge amounts of data about you to figure out exactly the deals that you might like. And you probably don’t think about the technological backdrop to any of this.
It’s possible that the growth of artificial general intelligence might be just the same. (This assumes a ‘slow takeoff’ model, where artificial general intelligence develops gradually; an alternative model assumes a ‘fast takeoff’, where an intelligence recursively improves itself at incredible speed, reaching superhuman intelligence in a matter of days or even hours.) It might be like having an extremely attentive personal assistant or butler, who anticipates your needs and sees to it that they’re addressed, then glides smoothly into the background when you don’t need them.
3. How artificial intelligence could affect the economy
Much has been written about the impact of artificial intelligence, and machine learning more generally, on the economy. One area in which we’re likely to see effects in the near term is the development of self-driving vehicles. About one in fifteen workers in the USA is employed in the trucking industry; about one in a hundred is a truck driver. Yet self-driving vehicles could eliminate many of these jobs at a stroke, with the new technology paying for itself through savings on salaries.
But the impact of artificial intelligence on the jobs market has the potential to be much greater than this. There are a huge number of jobs that may be, say, 80% automatable, even if it takes a long time for artificial intelligence to replace human skills completely. The work of solicitors is a good example here; there are some things that machines aren’t capable of doing (for instance, reading a client’s facial expressions to work out whether they are updating their will under duress), but much of a solicitor’s work, such as drafting documents and checking legal precedents, may be automatable sooner rather than later. That means that firms currently employing five solicitors might only need one. This wouldn’t require artificial general intelligence; just a continuation of the trajectory for automation and machine learning that we’re currently on.
In previous technological revolutions, new technologies have typically employed as many workers as were made redundant by technological advances (though note that they haven’t always been the same workers; sometimes advances have brought new groups into the workplace while leaving previous workers unemployed). Plenty of commentators are assuming that the same pattern will repeat itself: that new jobs will replace old ones, leaving the unemployment rate unchanged. The problem is that those new jobs have yet to appear.
There is an alternative to this doom-and-gloom scenario, though. Artificial intelligence should enable us to radically increase our productivity (think about one worker being able to produce five times what they did previously with AI help). That might mean that instead of mass unemployment, everyone just gets to work fewer hours to produce the same amount of goods and services. That has happened to a certain extent in previous technological revolutions – think about the amount of leisure time that a typical factory worker might enjoy today, compared with their Victorian equivalent – but it’s nearly impossible to say whether that’s the impact that AI would have.
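The back-of-envelope arithmetic behind that alternative scenario can be made explicit. This is only a toy calculation (the 40-hour week and fivefold multiplier are illustrative numbers, not forecasts):

```python
# Toy arithmetic for the productivity scenario above: if AI multiplies what
# one worker can produce, the same total output needs proportionally fewer
# hours. The figures below are purely illustrative assumptions.

hours_per_week = 40
productivity_multiplier = 5  # one worker produces five times as much with AI help

# Hours needed to produce the same output as before
hours_needed = hours_per_week / productivity_multiplier
print(hours_needed)  # 8.0
```

In other words, under these made-up numbers, a fivefold productivity gain could in principle shrink a 40-hour week to 8 hours at constant output – the open question in the paragraph above is whether the gains would actually be distributed that way.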
4. Why some people are frightened about artificial intelligence
Related to the misunderstanding of what artificial intelligence might look like in practice is a misunderstanding of what makes it potentially frightening. When the dangers of rogue artificial intelligence are raised, it’s easy to assume that this means something like HAL 9000 in 2001: A Space Odyssey; a computer that simply refuses to follow orders and starts killing humans wantonly and deliberately.
But that’s the stuff of science fiction; it’s not what researchers into AI safety, for instance, are concerned about. People concerned about artificial intelligence aren’t typically concerned about computers ceasing to follow orders – instead, they’re worried about what might happen from artificial intelligence following precisely the instructions it’s given. That could be from instructions that are in themselves ill-intentioned, or the dangers of instructions being taken too literally (for a basic and unlikely example, think of a self-driving car instructed to “get me to the airport as quickly as possible”, disregarding speed limits, other road users, and pedestrians who might be in the way). There are risks that relate to current dangers, exacerbated by artificial intelligence – for instance, imagine a terrorist cell with access to artificial intelligence, enabling them to hack into security systems.
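The airport example can be sketched as a toy optimisation problem. This is a deliberately simplified illustration (the routes and numbers are invented, and real planners are far more complex): an optimiser given only “minimise travel time” happily picks an option the instruction-giver never intended, because the unstated constraints aren’t part of its objective.

```python
# Toy illustration of an instruction taken too literally. All routes and
# numbers are made up for illustration.

routes = [
    {"name": "motorway, legal speed",   "minutes": 35, "breaks_speed_limit": False},
    {"name": "motorway, 140 mph",       "minutes": 18, "breaks_speed_limit": True},
    {"name": "back roads, legal speed", "minutes": 50, "breaks_speed_limit": False},
]

def fastest(routes):
    """A literal reading of 'get me to the airport as quickly as possible'."""
    return min(routes, key=lambda r: r["minutes"])

def fastest_legal(routes):
    """The same objective with the unstated constraint made explicit."""
    legal = [r for r in routes if not r["breaks_speed_limit"]]
    return min(legal, key=lambda r: r["minutes"])

print(fastest(routes)["name"])        # the literal optimiser chooses to speed
print(fastest_legal(routes)["name"])  # the constrained one stays legal
```

The point of the sketch is that nothing here is malfunctioning: the first optimiser does exactly what it was told, which is precisely the failure mode AI safety researchers worry about.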
Other risks relate to handing over too much control to a machine. Modern-day human drone operators face a particular set of challenges, from fatigue to PTSD. Artificial intelligence could take humans out of the loop altogether. But anyone who’s ever given a computer an instruction that was taken a bit too literally can see why applying this to a war situation could have horrific consequences, such as an AI instructed to pilot a drone to defend a convoy carrying out acts that no human operator would consider ethical.
But there are less obvious areas of danger too. Algorithmic trading on the stock markets contributed to the 2010 ‘Flash Crash’, in which the US stock markets lost over a trillion dollars in a matter of minutes. Different traders were using algorithms to maximise their own gains without concern for the integrity of the system as a whole. This was an egregious example, but flash crashes happen all the time; this is something that will only get worse as artificial intelligence is used to a greater extent and human oversight is reduced. As a rule of thumb, any mistake or immoral deed that can be carried out by a human can be carried out by an AI under human instruction, only the AI may be able to perform the task much more quickly and with less transparency.
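The mechanism behind a flash crash can be caricatured in a few lines. This is a made-up model, not how real markets work: each algorithm sells whenever the price has just fallen, and each sale pushes the price down further, so a small initial dip snowballs into a collapse with no human in the loop to stop it.

```python
# Toy feedback loop between two sell-on-a-dip trading algorithms.
# All parameters are illustrative assumptions.

def simulate(start_price, initial_dip, steps=10, sell_impact=0.05):
    """Each step: if the price just fell, both algorithms sell,
    and each sale knocks the price down by `sell_impact`."""
    prices = [start_price, start_price - initial_dip]
    for _ in range(steps):
        last, prev = prices[-1], prices[-2]
        if last < prev:  # both algorithms see a fall and sell into it
            last = last * (1 - 2 * sell_impact)  # two sellers, double impact
        prices.append(last)
    return prices

prices = simulate(start_price=100.0, initial_dip=1.0)
print(round(prices[-1], 2))  # a 1% dip has become a loss of most of the value
```

A human trader would step back and ask whether the selling makes sense; each algorithm just keeps executing its rule, which is the loss of oversight the paragraph above describes.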
5. Just how significant advances in AI and machine learning could prove to be
In 1993, the science fiction writer Vernor Vinge coined the phrase “technological singularity” to refer specifically to the development of superhuman artificial intelligence. He argued, “we are entering a regime as radically different from our human past as we humans are from the lower animals… It is a point where our old models must be discarded and a new reality rules.”
This point of view can sound like it could only come from a writer of science fiction. But it’s worth thinking about where Vinge is coming from here. There’s only one real reason that human beings are the dominant species on planet Earth today. Compared with other animals, there are only two things that we do outstandingly well: one is throwing things and the other is thinking – and it wasn’t the ability to throw things that made the difference. Humans are the most intelligent species on the planet, and we don’t know what it’s like to be around beings – even artificial ones without consciousness, under our control – that can out-think us.
What will that mean? Artificial general intelligence might be able to create a utopia, where our AI servants work for us at incredible rates of production, and no human has to go without anything that they might want. A super-productive AI might be given a badly-phrased instruction, carry it out, and wipe us all out. Or a particular group might use AI to achieve their goals to the destruction of all other options, whether that’s creating an unbreakable dictatorship or returning civilisation to the Stone Age.
This sounds far-fetched, and it could be; researchers differ on whether we’re 20 years from artificial general intelligence, or whether it’s an impossibility that will never be achieved, or anything in between. Even among computer scientists who believe artificial general intelligence is an inevitability, many believe it won’t happen for thousands of years. But if superhuman AI were invented, it’s hard to argue that it wouldn’t lead to a radically different world from the one we know today, for better or for worse. Artificial intelligence isn’t about evil robots coming to kill us all; it could be a lot stranger and a lot more frightening than that.