There is a lot of talk, even hype, about artificial intelligence (AI) and how it will take over the world. We are understandably concerned about how this technology might disrupt work or take jobs away. Some have warned that it poses an existential threat, including the late Stephen Hawking, who spoke openly about AI in a BBC interview in 2014, and Elon Musk, chief executive of SpaceX and co-founder of Tesla, who gave his opinion to The Independent in July 2017. We are still a long way from creating anything resembling human intelligence in a computer though.
The current technology described as AI is, in fact, machine learning. The exponential growth of computing power has allowed machines to develop and learn faster. Last year, a Google computer beat the world champion at the strategy game of Go. That is impressive, but it was largely a display of brute-force computing power. In an earlier experiment, Google asked a learning machine to try to understand videos on YouTube. Its first attempt at cognition was to define the concept of a cat: after being presented with images spanning 20,000 different items, it began to recognise pictures of cats using a 'deep learning' algorithm.
Machine learning is important and becoming more ubiquitous by the day. You are accessing it every time you search on Google, use its translation software or speak to Amazon's Alexa. And the better it gets, the greater its impact on our work. Task-specific computers will increasingly take on professional jobs in finance or law, where they have been found to outperform lawyers at reviewing documents. With the development of natural language processing, computers are also able to understand speech and written text. That has allowed some customer service functions, such as web chats, to be taken on by machines. Machines have also taken on some journalistic roles, writing financial or sports reports, and can even compose their own fake videos.
There is no doubt that machine learning will have an impact on most jobs. But that does not mean it will be taking them over just yet. Take an example from my own role as a university lecturer. Over my career I will grade a few thousand student reports. If I give a computer the reports on a specific subject, it can read them in a few minutes and, from that, do a decent job of marking them. But what if it has not seen that topic before? The computer cannot grade them; I can. I can bring my human, cognitive skills to a topic I have not graded previously. Computers can still help me though, for example by checking plagiarism or referencing. Machine learning allows me to work smarter. In the employee benefits sector, machine learning can quickly analyse vast amounts of information and, through that, help both staff and employers make better, more informed choices.
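The limitation described above can be sketched as a toy supervised classifier: trained on marked example reports about one topic, it can assign a grade to a new report on the same topic, but it has no basis for grading a topic it has never seen and should hand those back to a human marker. Everything here (the sample texts, the grade labels, the similarity threshold) is invented purely for illustration, not a description of any real grading system.

```python
from collections import Counter
import math

def vectorise(text):
    """Bag-of-words: count each lowercased word in the text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def grade(report, marked_examples, threshold=0.2):
    """Assign the grade of the most similar marked example,
    or return None when the report resembles nothing seen before."""
    vec = vectorise(report)
    best_grade, best_score = None, 0.0
    for example_text, example_grade in marked_examples:
        score = cosine(vec, vectorise(example_text))
        if score > best_score:
            best_grade, best_score = example_grade, score
    if best_score < threshold:
        return None  # unseen topic: a human marker is needed
    return best_grade

# Invented training data: reports on one topic, already marked.
examples = [
    ("the pension scheme offers defined benefits to employees", "A"),
    ("pension contributions are matched by the employer", "B"),
]

# A report on the familiar topic gets a grade...
print(grade("employees value matched pension contributions", examples))  # B
# ...but an unfamiliar topic is refused rather than guessed at.
print(grade("the history of renaissance painting in florence", examples))  # None
```

The refusal step is the point: the machine works well within the data it has seen, while the genuinely new cases still need human judgment.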
For the last few decades many jobs, even whole sectors, have been disrupted by technology. Think of how Airbnb, Uber and Facebook have each disrupted traditional sectors. There is no question that, with machine learning, that trend will continue. To maintain our relevance, it is important to focus on the humanness of what we do. Think of people-centred jobs such as elderly care, physiotherapy or perhaps hairdressing. Even if a robot could do these jobs, would we want it to?
My advice is to think about the uniquely human part of your job, because that is the future of work. Advanced computing can support you by helping you to work smarter. And what happens if the machines ever achieve something like true human intelligence? Personally, I am not convinced they will take over the world. Just as they did with Google, I think they will look around the internet and come to the conclusion that the cats are in charge.
Mark Brill is a senior lecturer in future media at Birmingham City University