The Age Of Automation Is Now: Here's How To 'Futureproof' Yourself
Are robots coming for your job? New York Times tech columnist Kevin Roose says companies and governments are increasingly using automation and artificial intelligence to cut costs, transform workplaces and eliminate jobs — and more changes are coming.
"We need to prepare for the possibility that a lot of people are going to fall through the cracks of this technological transformation," Roose says. "It's happened during every technological transformation we've ever had, and it's going to happen this time. And in fact, it already is happening."
In his new book, Futureproof: 9 Rules for Humans in the Age of Automation, Roose examines the benefits and pitfalls of automation — and reflects on how we as a society could responsibly manage technological innovation.
Roose notes that automation could lead to tremendous scientific breakthroughs. "It could help us cure rare diseases. It could help us fix the climate crisis. It could do any number of amazing things that we really, really need," he says.
But Roose adds that he worries about the motivations of the humans in charge, particularly "the executives at large companies who are using automation to replace workers without transforming their companies, without developing new products."
"They're not trying to innovate and transform their businesses," he says. "They're purely trying to do the same amount of work with fewer people."
On the argument that as technology changes, new jobs emerge
I was very optimistic about this technology because of that argument ... that automation and artificial intelligence will destroy some jobs, but they will create other jobs — and those jobs will replace the lost ones. But as I started looking more into the present of AI and also the past of automation, I learned that it's not always that smooth.
During the Industrial Revolution, for example, there were people who didn't find work for a long time. Wages for workers didn't catch up to corporate profits for something like 50 years. So a lot of the people who went through those technological transformations ... didn't have a good time. They weren't necessarily happier, or living better lives, or wealthier as a result of this new technology.
But there's also a difference today, which is that artificial intelligence is not just replacing repetitive manual labor. It's also replacing repetitive cognitive labor. It's able to do higher-value tasks, not just moving data around on a spreadsheet or moving car parts around in a factory. It's able to do the work of white-collar workers in fields that generally require college educations and specialized training. That's one difference.
Then the other difference is there's been some new research out about the effect that automation has been having in the economy. And it's shown that while for much of the 20th century, automation was creating new jobs faster than it was destroying old jobs, for the last few decades, the opposite has been true: Old jobs have been disappearing faster than new jobs have been created.
On how "bureaucratic bots" and algorithms are used to determine some government assistance programs and criminal justice decisions
I don't think people fully appreciate the extent to which things like benefits, who qualifies for nutrition assistance, who qualifies for public housing, are determined by algorithms now. And sometimes that works fine, and some other times it doesn't work so great. There was a case a few years ago in Michigan where an algorithm that the state was using to determine benefits eligibility misfired, and it kicked a lot of people off their benefits wrongly and that affected people's lives in real, tangible ways.
There are other kinds of bots and automation being used by governments in the criminal justice system, for example, to predict whether a given defendant is likely to re-offend if you put them out on parole. These algorithms are generally not open and inspected by the public — they're sort of "black boxes," and we don't really know how they work and there's not a lot of accountability for them. And so as a result, we end up with these mysterious machines making these decisions that affect millions ... of people's lives and we don't really understand what they're doing.
On the power — and danger — of the YouTube algorithm
YouTube is owned by Google, and Google has the best AI research team in America. They produce the most award-winning papers. They have the best Ph.D.s. They're at the vanguard of artificial intelligence. And a lot of that research and expertise for the last decade has been going into honing this YouTube algorithm with these techniques that are brand new and that are making it much more effective. Something like 70% of all the time that people spend on YouTube is directly related to recommendations that come from this algorithm. ...
Maximizing watch time is the No. 1 goal of this algorithm. And so some of the ways that it's learned that it can keep people on YouTube for a long time are by introducing them to new ideas, maybe to conspiracy theories, maybe to more extreme versions of something that they already believe, things that will sort of lead them down these rabbit holes. And so this has had an effect on politics. This has had an effect on our culture. And it's resulted in some cases where people have been radicalized because the algorithm thought that radicalizing them would be a good way to keep them watching YouTube.
On the jobs that are relatively safe from automation
The more AI experts and computer scientists I talked to, the more sure I became that we have been preparing people for the future in exactly the wrong way: We've been telling them [to] develop these technical skills in fields like computer science and engineering. We've been telling people to become as productive as possible to optimize their lives, to squeeze out all the inefficiency and spend their time as effectively as possible, in essence, to become more like machines. And really, what we should be teaching people is to be more like humans, to do the things that machines can't do. ...
There are three categories of work that I think are unlikely to be automated in the near future. One is "surprising work." This is work that involves complex rules, changing environments, unexpected variables. AI and automation really like regularity. They like concrete rules, bounded environments and repetitive action. So this is why AI can beat a human in chess, but if you asked an algorithm to teach a kindergarten class, it would fail miserably, because that's a very irregular environment with lots of surprises going on. So those surprising jobs are the first jobs I think are relatively safe.
The second category is what I call "social jobs," jobs that involve making people feel things rather than making things. These would be the jobs in social services and health care — nurses, therapists, ministers — but also people who perform a sort of emotional labor as part of their jobs, people like flight attendants and baristas, people we don't typically think of as being "social" workers, but whose jobs do involve an element of making people feel things.
And the third category of work that I think is safe is what I call "scarce work." And this is work that involves sort of high-stakes situations, rare combinations of skills, or just people who are experts in their fields. This would include jobs that we have decided are unacceptable to automate. We could replace all of the human 911 operators with robots. That technology exists. But if you call 911 today, you will get a human because we want humans to be doing that job when we're in trouble. We want a human to pick up the phone and help us to deal with our problems.
Sam Briger and Kayla Lattimore produced and edited the audio of this interview. Bridget Bentz, Molly Seavy-Nesper and Meghan Sullivan adapted it for the Web.
Copyright 2021 Fresh Air. To see more, visit Fresh Air.