4 Challenges for Computer Scientists in the 21st Century


The 21st century offers huge opportunities for computer scientists.

In a generation, the internet has gone from being a novelty to a ubiquitous part of everyday life. Self-driving cars seem set to go from an implausible fantasy to a feature of our roads in even less time. Big data allows our habits, from our spending to our health, to be analysed at incredible speed. And in the background of it all, the rise of artificial intelligence promises changes to our lifestyles that could make the Industrial Revolution seem minor. Computer scientists will be at the heart of all of these changes.
But these opportunities won’t exist in a vacuum. The difference between a technologically driven future that is unimaginably utopian and an irreversible decline into dystopia may be smaller than we realise, and closing that gap will often fall to computer scientists as they push our progress forward in the coming century. In this article, we take a look at some of the key problems that technological advances will force computer scientists to confront, and at what might be done about them.

1. Algorithmic Bias

Susanne isn’t old enough to have a credit rating yet.

Imagine the situation of an entrepreneur in her late teens – let’s call her Susanne – trying to get a loan approved to launch an app that she’s developed. It’s a great idea and all of her friends are already using it, but most traditional banks turn her down, because they don’t want to take a risk on someone who’s barely out of secondary school. You might expect this to happen less often in the 21st century, not only because we’re well aware that teenagers can be tech superstars too, but because her suitability for a loan may not be assessed by a human at all. Instead, the decision may fall to an algorithm to which her age is meaningless, and which bases its assessment solely on the objective strength of her business plan. In that world, a teenager like Susanne would have no difficulty in getting a loan to launch her business.
Or at least, this is what you might assume would happen.
The reality can be rather more complicated, and significantly more depressing. As our use of big data expands, the algorithm will have an ever-growing number of variables to weigh when it makes its decision about Susanne’s loan. It won’t just look at her financial situation and business plan. It could draw on any aspect of the data about her; alongside her proposal for launching the app, it might consider the TV shows she’s liked on Facebook, or the keywords used in her blog posts. It may even have access to the spending history recorded by her loyalty cards. And from all of this, it will build up a complete picture of Susanne.
The algorithm won’t have been written to take Susanne’s age into account. But it’s highly likely that all of that data will nonetheless map onto a set of traits that are easily associated with being a teenager. Perhaps the algorithm learns that successful entrepreneurs usually read the Financial Times and the Economist, while Susanne’s Facebook likes include Teen Vogue (which might have been what inspired her) but don’t include any broadsheets. Perhaps her blog posts mention Zayn Malik more often than they discuss the FTSE 100. Perhaps all of this correlates with the traits of people who usually turn out to be less promising recipients of loans. If so, the ‘objective’ algorithm will turn Susanne down just as surely as a prejudiced human being who can’t see past her age. And all of this happens even if Susanne’s business plan would obviously be brilliant, given half a chance.
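To make the mechanism concrete, here is a minimal sketch of proxy bias, using synthetic data and Python’s scikit-learn library. Everything in it is invented for illustration: the feature names are hypothetical stand-ins for the kinds of signals described above, and it sketches the general phenomenon, not any real bank’s system.

```python
# A minimal sketch of proxy bias. The model is never shown the applicant's
# age, yet it penalises young applicants, because a feature it *is* shown
# acts as a proxy for age. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

age = rng.integers(18, 66, n)                       # hidden from the model
reads_broadsheets = (age > 30).astype(float)        # proxy: correlates with age
reads_broadsheets += rng.normal(0, 0.3, n)          # plus some noise
plan_quality = rng.normal(0, 1, n)                  # genuinely predictive feature

# Historical repayment outcomes happen to favour older applicants,
# so the bias is baked into the training labels themselves.
repaid = (plan_quality + 0.05 * (age - 40) + rng.normal(0, 1, n)) > 0

X = np.column_stack([plan_quality, reads_broadsheets])  # note: no age column
model = LogisticRegression().fit(X, repaid)

# Two applicants with identical, strong business plans:
teen = [[2.0, 0.0]]   # great plan, reads Teen Vogue, no broadsheets
older = [[2.0, 1.0]]  # the same plan, reads the broadsheets
print("teen: ", model.predict_proba(teen)[0, 1])
print("older:", model.predict_proba(older)[0, 1])
# The teen scores lower, despite age never appearing in the inputs.
```

Dropping the sensitive attribute from the inputs, in other words, doesn’t remove it from the model; it just hides it inside whatever correlates with it.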
This isn’t a far-future problem, either. Scandals resulting from algorithmic bias have already emerged, such as the research that demonstrated Google was more likely to show adverts for highly paid executive jobs to men than to women. Algorithms like this are being used in decision-making not just by banks, but by the criminal justice system, by employers and by insurers. Their bias can have an immediate impact on people’s lives, and perpetuate the very problems they were partly intended to solve. Right now, we don’t have many answers to this problem, beyond computer scientists being very careful not to bake their own biases into the programs they create, and monitoring those programs’ output regularly to spot and address problems as quickly as possible.

2. Security in the Internet of Things

Your fridge might know your every move.

If you think back over the predictions made in the past fifty years about what the future would hold, the internet of things has been a recurring theme. Despite the silly name, the idea is straightforward: that everyday items, from your fridge to your washing machine to the traffic lights outside your house, would be networked, and would work more intelligently as a result.
In practice, that might mean a network of traffic lights across a city that worked in tandem to minimise congestion by adapting their cycles to the flow of traffic at any given time. It might mean a fridge that could scan barcodes and identify which items were due to go off; perhaps it could even hook up with your online shopping order to put together a list of things that were running low, or give you recipe suggestions based on what you had available. Or it might mean a washing machine that you could load up and leave to run itself at whatever point electricity was cheapest (for instance, when the solar panels on your roof were at full strength), thereby saving you money. Anything in your home or out on the street could be networked for these kinds of technological possibilities.
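The washing machine example is, at heart, a tiny scheduling problem, and a sketch of it fits in a few lines of Python. The hourly prices below are invented for illustration; a real appliance would presumably pull them from a smart meter or a tariff feed.

```python
# Find the cheapest contiguous window in a day of hourly electricity
# prices for a wash cycle of a given length.
def cheapest_start(prices: list[float], cycle_hours: int) -> int:
    """Return the start hour whose window has the lowest total price."""
    windows = range(len(prices) - cycle_hours + 1)
    return min(windows, key=lambda h: sum(prices[h:h + cycle_hours]))

# One day of hourly prices in pence per kWh (invented numbers).
hourly_prices = [18, 16, 12, 9, 8, 8, 11, 15, 22, 25, 24, 21,
                 19, 18, 17, 18, 20, 26, 28, 27, 24, 21, 19, 18]

start = cheapest_start(hourly_prices, cycle_hours=2)
print(f"Cheapest two-hour wash starts at {start:02d}:00")  # the overnight trough
```

Real smart appliances would layer forecasting and user preferences on top, but the core decision really is this small.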
These ideas have gone from being the stuff of science fiction to the stuff of reality. You might already be able to control the lights, heating and sound system in your home remotely from your mobile phone, for instance. But there is – of course – a downside to all of this as well. As soon as an item becomes ‘intelligent’ in this way, it becomes open to being hacked. In 2015, a pair of hackers wirelessly took over the electronic systems of a Chrysler Jeep, gaining control of the dashboard functions, the steering, the transmission and – most frighteningly of all – the brakes. Chrysler were forced to recall 1.4 million vehicles in order to apply a software patch that would prevent that kind of attack. The Jeep hack was carried out by researchers who made their findings public precisely so that Chrysler could fix the problem. But it’s easy to imagine similar vulnerabilities in other systems being found and exploited by hackers who are not so civic-minded. Depending on what’s being hacked, such a vulnerability could be exploited for anything from theft to murder.
And the danger isn’t solely from hacking. This year, a man in Ohio was charged with arson on the basis of evidence from his pacemaker; its records of his heart rate demonstrated that he was lying about his actions at the time when a fire broke out at his home.
In this instance, you might well think that his fate – betrayed by his own use of technology – was well deserved, but it doesn’t take much imagination to think of scenarios in which our devices store information about us that isn’t evidence of criminal activity, but that we might nonetheless want to keep private. People struggle to keep passwords and virus protection up to date even on devices they know are vulnerable, like laptops; how much of a culture change will it take before people are happy with the idea of updating the antivirus software on their fridge? Computer scientists of the future will have to deal with these challenges, or a recall of 1.4 million cars is going to seem like a drop in the ocean compared with the commercial costs of getting these security issues wrong.

3. Encryption and Quantum Computing

Well, it looks alive, but you never can tell…

Quantum computing is hugely exciting. Explaining it also sounds a little bit like magic. In a standard computer, a bit is a piece of information that can exist in one of two binary states: 0 or 1. As every clichéd hacking sequence in a movie will tell you, all of our standard computing is, at bottom, made up of strings of ones and zeros.
But a quantum bit – called a qubit, just so this sounds even more like Harry Potter – can encode much more information. This is because subatomic particles can exist in more than one state at a time. You’re probably familiar with the metaphor of Schrödinger’s Cat, because you’ve almost certainly heard bad jokes about it: the cat that is in a box and is both dead and alive at once. The cat is a metaphor for a particle in a quantum superposition, which can be both one and zero, alive and dead, on and off.
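For anyone who wants the formal version, the standard notation makes the ‘both at once’ idea precise: a qubit’s state is a weighted combination of the two classical values,

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]

where α and β are complex numbers. Measuring the qubit is like opening the box: it settles on 0 with probability |α|² or on 1 with probability |β|².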
If this seems mind-bending, then don’t worry; that’s everyone’s reaction. The important consequence is that computing based on quantum states can, for certain kinds of problem, work vastly faster – in some cases millions of times faster – and use less energy than the computers we’re used to. That opens up amazing possibilities, but there is the inevitable downside.
We can currently send data and keep it private because we can encrypt it. Think of an encrypted file as a locked door that you can’t open unless you have the key. There are only so many different shapes that a key can have, so you could make a whole series of keys and try them all, but the chances are that this would take so long that someone would catch you at it long before you found the right one. Now imagine that this whole process of making and testing keys was millions of times faster. The locked door suddenly looks like much less of a barrier.
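The back-of-the-envelope arithmetic is worth seeing. One caveat: the more celebrated quantum threat is to public-key encryption via Shor’s algorithm, but the ‘try every key’ picture above maps most cleanly onto Grover’s quantum search, which cuts an exhaustive search of 2^n keys down to roughly 2^(n/2) steps. The sketch below, using an invented guessing rate, shows what that difference means:

```python
# Rough arithmetic for the 'try every key' attack. The guessing rate is an
# invented round number; the point is the scale, not the precise figure.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_search(effective_bits: int, keys_per_second: float) -> float:
    """Expected years to brute-force a key (on average, half the keyspace)."""
    return (2 ** effective_bits / 2) / keys_per_second / SECONDS_PER_YEAR

rate = 1e12  # a trillion guesses per second: generous for classical hardware

# A 128-bit key against a classical search: effectively forever.
print(f"Classical search: {years_to_search(128, rate):.1e} years")

# Grover's algorithm needs only about 2**64 steps for the same key,
# so the 128-bit lock offers roughly 64-bit resistance.
print(f"Grover-style search: {years_to_search(64, rate):.1e} years")
```

For symmetric encryption, the textbook fix is correspondingly simple in principle: double the key length. As the next paragraph suggests, the harder part is human.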
In many cases, we already know how to prepare for the moment when quantum computing takes off. The security industry has been aware of the problem for some time, and it has been working hard on quantum-resistant encryption. The problem lies less in having a technological solution available than in persuading people to take it seriously. When you remember that one of the world’s most commonly used passwords is still ‘password’, you might grasp the nature of what computer scientists will have to deal with. If you can’t persuade people to use passwords that aren’t guessable, how can you persuade them that super-fast magic computers powered by un-dead cats will be coming to steal all of their data?

4. Artificial Intelligence

What happens if they take matters into their own hands?

Has the rest of this article worried you a little? All of those problems are easily surmountable in comparison with this one. Concerns about artificial intelligence are grounded in the knowledge that computing power is growing all the time. There are plenty of individual domains in which computers are significantly smarter than any human who has ever lived. No one expects to be able to perform mathematical calculations faster than their computer, and areas where we used to believe a human touch was required are fast becoming the domain of machines, as we’ve seen through the success of Google DeepMind’s Go-playing computer, AlphaGo. The question then arises: how much longer before computers are more intelligent than humans across a wider range of domains? Computer scientists vary in their answers to this question. Some say never. Some say as little as ten years.
This matters because we have no idea how the world would work if humans weren’t the smartest things in it. Our global dominance – which, you should remember, is causing the mass extinction of less intelligent species – rests almost entirely on our ability to out-think everything else on the planet. We have never tried to control anything more intelligent than ourselves, and it’s very unclear how that might be done.
You might think that this concern anthropomorphises computers too much. You might argue that it doesn’t matter if a tool is more intelligent than you: it’s still a tool, it doesn’t have free will, and therefore it will do what you tell it to. But that is precisely one of the fears computer scientists have about artificial intelligence: that it will do what we tell it to, without any understanding of the possible consequences. What if a super-intelligence carries out an ill-thought-through command too quickly and too efficiently for the consequences to be averted? How can we ensure that an artificial intelligence does what we intended, and not merely what we asked?
Imagine a dog asking for a treat. You want to give it one, but you have no money, so you go out to work and earn some, then go to the shops, then buy the treat. The process of going out to work makes no sense to the dog, but because you’re more intelligent than it is, you understand why it’s a step in the process. We may find ourselves in the dog’s position, unable to tell whether an artificial intelligence’s actions seem incomprehensible because we aren’t intelligent enough to follow its reasoning, or because they are simply wrong. And what if we can’t spot the difference in time to stop a catastrophic mistake?