When I was a kid in the early nineties there were no apps to remind you of things, so mostly you just hoped you would remember. In particular, I hoped I would remember to check, on the futuristic date of August 29th, 1997, whether Judgment Day had indeed occurred, as Terminator 2 said it would.
In the movie, that was the date a military artificial intelligence called Skynet became self-aware, according to time-travelers who had been there. The A.I.’s first decision, when it realized it was a thing, was to start a nuclear exchange in the hopes that it could eradicate human beings before they could unplug it.
I’m not sure what I ended up doing that day—today you can recall what happened on a given date by checking what emails you sent and received—but I don’t think I remembered what day it was supposed to be, and I am sure there was no nuclear holocaust.
Human beings are not great at predicting the future, but we have had a long-creeping suspicion that at some point overly smart computers will cause us huge problems. 1997 is now twenty years ago, and while our computers haven’t started any wars yet, they have begun to take our jobs.
We have robot cashiers, robot pilots, and robot stock traders. Automatic GPS-guided machinery has been planting and harvesting crops for years now. Algorithms are writing sports recaps, novels, and even some not-terrible poems. (Yes, it’s true, and it’s harder than you think to tell the difference.)
Real artificial intelligence, as in the kind that might be hard to distinguish from human intelligence, may not be that far off, and when it gets here, not even specialized, high-skill jobs are safe.
Nobody is really in a position to prevent this takeover. Market forces will bury all kinds of human-driven industries as they become obsolete. This isn’t a new thing—there are no telegram services anymore, for instance, outside of a very specialized boutique hipster nostalgia market.
Even if we don’t want this to happen, we will choose it in many cases. I’m ashamed to say it, but I do slightly prefer the robot cashiers at my local Safeway, if only because the interactions I have with them seem less strained. However, there’s no reasoning with them if they don’t give you your change, so they still need to be supervised by a human being.
But even that job—supervisor of robot cashiers—is on its way out. Amazon is already testing a grocery store where you can just walk in, take what you like, and leave. A computerized sensor system (Skynet?) will remove the money from your account.
Millions of people in the US alone make a living doing some kind of driving, and robot-operated cars are already cruising the highways, killing far fewer people while they do it. Apparently these robots are already so much better than humans at avoiding fatal accidents that it may become illegal, in our lifetimes, to operate a vehicle if you are human.
Helping Them Help Us
All of this job takeover stuff is entirely separate from the other big problem with A.I., which is that we may lose control of our computers once they become smarter than us. This is actually a serious unsolved issue, not just a beloved sci-fi plot.
Programmers know that computers follow the instructions you give them; what’s difficult is giving them instructions that create the results you want. Powerful A.I. is often compared to the genie of so many fables: it grants exactly what you ask for, but so literally and forcefully that you wish you had never found the lamp to begin with.
Author Yuval Harari gives a classic example in this talk. Let’s imagine the first superintelligent A.I. is given the harmless-sounding task of calculating as many digits of Pi as possible. It quickly recognizes that human beings are using energy to run things like coffeemakers and hot tubs—energy that could be harnessed to calculate more digits of Pi. The logical procedure is therefore to subvert and destroy humankind, eliminating its interference with the computer’s assigned goal. And just like that, we’re living in a James Cameron movie.
You might think our computer scientists would account for that possibility. Of course we will account for the possibilities that occur to us. But unless we can outthink the superintelligence at every stage as we develop it—which would defeat much of the purpose—we can’t predict how it will interpret our instructions, especially if it figures out how to lie to us.
However, this problem isn’t as imminent as the robot takeover of the job market. Basically, from here on in, there will be an increasing number of people for whom there is no work they can do better than—or cheaper than—a robot.
Of course, if computers and robots were doing most of the work, maybe we wouldn’t need jobs. Immense wealth could be produced with very little human work. You can imagine, if robots were doing half of everybody’s job, that we could simply work half as much, pay a “robot tax” to share the cost of their development and maintenance, and society would be just as prosperous.
Theoretically, we could also do this if the robots did all the work. The same work is getting done, only by machines that don’t get tired or disgruntled. So why would we need jobs at all? Well, without slipping into a political rabbit-hole, let’s just say that it probably wouldn’t work out that way without a drastically different economic system. It’s easy to imagine that the only people with any wealth would be the small number of trillionaires who supply the robots that do every other job. Another sci-fi movie plot.
The Next Hottest Thing: Being Human
This is all very scary, but we can take solace in a couple of certainties:
The first is that we have no idea what will really happen. Beyond the imminence of self-driving cars and robot-supervised grocery stores, we can’t be certain any of it will be catastrophic.
The other is that A.I.s and robots are better at some things than others, and they probably can’t provide everything we value. At the moment, we can barely build robots that can walk down a hallway and turn a doorknob, let alone write a knock-knock joke.
Can a robot replace your taxi driver? Yes, at least the driving part. Can a robot replace your therapist? Not any time soon. Can a robot replace your dog? Never!
Can a robot replace your barista, or your bartender? Not really. Automated machines can dispense drinks just fine, but drinks aren’t all that’s being supplied. It’s hard to imagine a robot supplying the sense that you are being attended to by another human being who cares whether you’re happy with what you are served.
If we can better appreciate the subtle qualities humans (and animals) offer that robots can’t, we may not end up with a dystopia on our hands. As more and more of our needs are met by machines, demand might shift to qualities that can, for now or forever, only be produced and delivered by humans: empathy, understanding, wisdom, solidarity, humor, real eye contact, little nuances in language and craft, even simple feelings like that of being in a room with another person.
Once robots are fulfilling our current fixation on things and conveniences, we might finally recognize the role of thousands of subtle human qualities that we all value, but never talk about when we talk about the economy. (Not that we need to wait for Judgment Day before we start exploring those “markets.”)
Nobody knows where we’re headed, but it’s interesting to think about how different society would be if people were no longer valued primarily for their ability to supply material goods and services. Productivity will remain the measure of a machine’s worth, but talking about human beings in those terms might become crass or taboo.
If the robots take over all the utilitarian stuff, we might need to become a society of carers, understanders, expressers and connectors, the way we are today a society of managers, designers, laborers and servers. And maybe we’re better suited for that anyway.