What should my kids study at school?
A lecture given to Canford School ‘Festival of Ideas’
The question I get asked more than any other is ‘what should my children study?’ It reflects a wide social concern: that our economy is changing quickly and in unpredictable ways. Are children learning things that will be ill-suited or irrelevant in 10 or 20 years’ time?
It’s not an unreasonable fear. My ‘career adviser’ (a grand title for someone who simply read ‘bank manager’ and ‘supermarket manager’ from a pre-prepared list) offered me exactly zero useful advice. I am now doing work that didn’t exist when I left school. And if anyone had told me 20 years ago that by the mid-2010s most young people would spend their free time typing words on tiny computers, I would have laughed them out of the room.
The point is: it’s very hard to predict any of this accurately. That doesn’t seem to deter the ever-expanding industry of ‘futurism’ and prediction, so it won’t stop me either. Everything that follows comes with a very large caveat: I might be entirely wrong, although I think the broad contours are about right.
One technology above all menacingly stalks this question: artificial intelligence. The leap forward in AI — which has been gradual and then sudden — is behind the first genuine mass panic of the 21st Century: that we are entering a world in which robots will take every job currently done by humans, effectively putting us all out of work. Media outlets enjoy writing scary and misleading headlines about this.
We are extremely good at knowing exactly what jobs we will lose, since we can see them and name them. We are however very bad at imagining what new jobs might replace them, since they don’t exist yet.
That doesn’t mean there won’t be significant disruption to the workforce. The nature of AI gives a few clues as to what might happen. Although we have a childish obsession with marching machines, Skynet and humanoids, the real action is in ‘domain specific’ AI, often using a technique known as ‘machine learning’. A human feeds an algorithm with data and teaches it what each input means. From that it can detect patterns, and from those patterns it can mimic a particular human behaviour or perform a very specific task: driving on the motorway, predicting the weather, giving credit scores, measuring sentiment, reading number plates and the like.
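To make that description concrete, here is a toy sketch of the ‘feed it labelled examples, let it classify new inputs’ loop. This is my own illustrative example (a simple nearest-neighbour rule with invented credit-scoring data), not how any real bank or product works:

```python
# A toy illustration of 'machine learning': a human labels some examples,
# then the algorithm classifies new inputs by similarity to those examples.
# The data here is invented purely for illustration.

def nearest_neighbour(labelled, new_point):
    """Return the label of the training example closest to new_point."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labelled, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# The human 'teaching' step: (features, label) pairs.
# Here the features are (income, missed_payments) and the label is a decision.
training_data = [
    ((50, 0), "approve"),
    ((45, 1), "approve"),
    ((20, 5), "decline"),
    ((15, 4), "decline"),
]

print(nearest_neighbour(training_data, (48, 0)))  # prints "approve"
print(nearest_neighbour(training_data, (18, 6)))  # prints "decline"
```

Real systems use far more data and far more sophisticated algorithms, but the shape is the same: the machine generalises from examples a human has labelled, which is exactly why routine, well-documented tasks are the easiest to automate.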
It seems likely that jobs most at risk will be those involving routine tasks, as these can most easily be done by machines trained on previous examples. The safest, and most likely to be created, work is ‘non-routine’. Machine learning AIs are starting to outperform humans in a small but quietly growing number of narrow tasks. Over the last year alone inroads have been made into driving, brick-laying, fruit-picking, burger-flipping, banking, trading, automated stock-taking and more.
(We all have our own dystopias, and mine is a bifurcated labour market in which you either have a highly paid job at an inspirationally branded tech firm, or the opportunity to ride a bicycle on minimum wage delivering food to these busy and important people, with no chance of a stable job as a local journalist, paralegal, truck driver or tax accountant. But that is a different question).
The good news is that there are already exciting industries and areas of work — decent, well-paid work — which are either thriving, or will be fairly soon. A couple of examples should suffice to make the point. Take cyber security. In the few years I’ve attended IT security events they’ve gone from workshops in airless rooms to country-manor marquees with ‘have-a-try’ Ferraris out front. Fraud and cybercrime cost the UK around £11bn in 2017, and this will only grow. Within a decade your TV, dog, house, car, fridge, clothing and more will be part of the invisible ‘internet of things’, all chipped up and communicating online with each other. I’ve seen too many demonstrations of smart devices getting hacked with dongles to think it’s going to get better before it gets worse. It won’t be long before your smart coffee machine is hacked with ransomware — and you’ll be asked to pay some Bitcoin to a Ukrainian hacker just to get your morning caffeine.
Every day it gets a little easier to be a cyber-criminal. I’ve spent a lot of time in the criminal underworld, including for my first book, The Dark Net. Criminals are the smartest, most creative, most ambitious people you’ll ever meet, and they will keep good people in work for decades.
Stopping these people is a job with some moral purpose — but it’s also exciting, varied and well-paid. ‘Penetration testers’, for example, are paid to break into a company’s IT system and then explain how they did it. Can you imagine anything more fun? Any kids at school caught breaking into the IT system should get a wrist-slap, followed by a fast-track course on how to become a Certified Ethical Hacker.
Governments like to say that ‘coding’ and ‘programming’ are the jobs of the future. This is true in the short term, but to me they feel like the sort of routine tasks that can be automated. (Not to mention that programmers and coders will be competing very directly with workers from developing countries who will be able to do the exact same work far more cheaply).
I have generally found programming and coding skills to be most useful when combined with other skill sets — like statistics, research methods, writing, or data visualisation. In my job at Demos, I look for people who can supplement data science or coding with expertise in social sciences (or vice versa). The interaction of different skill sets often generates interesting insights, approaches and ideas. My ideal colleague is the social scientist who knows just enough about data science to know what opportunities it presents for our areas of work.
This multi-disciplinary tech-plus-non-tech expertise will be important in lots of areas. We will need people who can both understand technology and explain it to non-specialists. (This is a rarer skill than you might think). I can think of one immediate job right now. According to the new GDPR rules, citizens have the right to ask companies to explain if a decision about them is being taken by a machine. Who’s gonna do that? Only someone who can both write clear English and understand code.
This will become an entire profession, because as machines make ever more decisions about our lives, governments, and we ourselves, will want to understand why and how those decisions are taken. Whether it’s YouTube’s recommendation engine, AI-powered diagnosis tools or automated CV checkers, we do not want to live in a world run by powerful machines that no-one except a few tech wizards can understand. It is often said that ‘politicians don’t get tech’. That’s true — but so is the reverse: technologists often don’t get politics. There will be whole teams of professionals whose job it is to inspect machines and figure out whether their decisions are fair and lawful. Is there bias in the AI tool being used by judges to inform sentencing decisions? Are newsfeeds during an election accidentally giving an unfair advantage to one candidate? Is a cluster of IP addresses being offered different prices?
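One of the simplest checks such an auditor might run is comparing outcome rates between two groups. The sketch below uses invented decision data and a common rule of thumb (the ‘four-fifths’ threshold, borrowed from employment-selection guidance) — it is an illustration of the idea, not a complete fairness audit:

```python
# A sketch of one check an algorithmic auditor might run:
# do approval rates differ sharply between two groups of people?
# The decision lists below are invented for illustration.

def approval_rate(decisions):
    """Fraction of decisions in the list that were approvals."""
    return sum(1 for d in decisions if d == "approve") / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = ["approve", "approve", "approve", "decline"]  # 75% approved
group_b = ["approve", "decline", "decline", "decline"]  # 25% approved

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # prints 0.33
print(ratio < 0.8)      # below the four-fifths threshold: flag for human review
```

A single ratio like this proves nothing on its own — the disparity might have a lawful explanation — which is precisely why the job needs people who understand both the code and the legal and social context.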
As so often, science fiction is a handy guide. In the worlds dreamt up by Isaac Asimov, the great sci-fi pioneer who coined the word ‘robotics’, machines play a central role in economic and social life. But there are teams of highly educated humans whose job it is to figure out how robots think, interview them, and ensure they are built in conformity with societal norms. If you ditch the idea of a humanoid robot and replace it with software, this isn’t that far off. At some point in the next few years, we’ll have to program an autonomous vehicle to decide what to do when a pedestrian wanders directly in front of it. Should it:
– Slam on the brakes and risk killing both driver and passenger?
– Swerve and risk major injury to a large group of innocent passers-by?
– Keep going and almost certainly kill the pedestrian?
What if the passenger were a small child, and the pedestrian an adult? Answering such questions takes us into the realm of moral philosophy (examined in the famous ‘trolley problem’ thought experiments). Then of course there are questions of legal culpability and insurance. These are questions that must not be left solely to technologists — and must not be left unanswered. There is plenty to keep us occupied here: the technology itself is only half the challenge.
So, to return to the original question: given this dizzying combination of disruption and opportunity, what should your kids learn at school?
The most obvious advice is to learn what machines aren’t good at. As I wrote above, routine, repetitive tasks — whether physical or intellectual — are most at risk in the years ahead. (Ironically, the role of ‘futurist’ will surely be at risk too: machines will be far better at prediction than humans). So it makes sense to think about non-routine careers. Note that those I’ve described above — penetration tester, robot ethicist, technology explainer — are all non-routine. There will be no shortage of opportunities in this field. And better still if these jobs require motor skills. ‘Moravec’s Paradox’, which still just about holds, states that high-level reasoning often requires little computational power, but low-level sensorimotor skills require a great deal.
It would be too prescriptive to pick courses based on some projected future labour market you read about on Medium — not to mention tedious — so view this as very general advice. More broadly, it makes sense to focus on general-purpose skills: empathy, team-building, concentration. Whatever the job or task at hand, such abilities will always be important. You’ll note I didn’t mention ‘creativity’. I’m not convinced that humans have some mythical monopoly over creativity. Machines are already highly innovative in all sorts of areas, including music, art, engineering and design. In a world where machines have super-human abilities in certain specific tasks, knowing how to make the most of software and technology will be highly prized. I call this ‘problem framing’: the ability to think strategically, plan effectively, and identify opportunities and difficulties. In my experience, multi-disciplinary study is extremely useful for developing these skills — and often provokes interesting insights and angles on problems. So if you’re going to study coding, throw in some modules on history. If you’re planning to study English, throw in a bit of data visualisation and a foreign language. You get the picture.
Perhaps the most important ability of all (and here I agree with Yuval Noah Harari, who argues this in his recent book 21 Lessons for the 21st Century) will be the desire and willingness to learn and reinvent. Many people leaving school today will have two, three or more careers in their lives. Others will witness dizzying technological change and innovation in their profession (I’m thinking here of doctors, architects and engineers using AI and data analytics). Those willing to pick up and master new technology in their profession will thrive. For others it may even mean changing direction completely several times, as different industries come and go. In this world, the future belongs to those who learn how to learn — and ideally enjoy the process. But isn’t that what education is meant to be about anyway?