Posted on April 10, 2015 at 2:36 PM
That was how one wag, a fellow undergrad at my college in the late '70s, rewrote "people making computers to help people," the tag line that IBM was using in its TV commercials at the time. It got a good laugh. Indeed, it sounded more accurate than the original.
Even more so now, I was reminded last week by an interview on the Fox Business Channel with Apple co-founder Steve Wozniak. The upshot: although he had been a serious skeptic of artificial intelligence, he is now sufficiently surprised and alarmed to reassess his view of what computers might achieve. The current reality and near-term prospects for computerized automation suggest that it will supplant an increasing number of heretofore uniquely human activities, with troubling implications for what will be left for people to do.
Consider: stock trades are done by computer, more efficiently than by people. The folks (mostly guys) are gone from the floor of the Chicago Mercantile Exchange. Self-driving cars. The prospect that goods ordered on the internet will be delivered by drone in, oh, say 10 minutes instead of by some unreliable delivery person. Industrial robots replacing human workers. Robot dogs, for cryin' out loud. You don't have to believe in so-called "strong AI," the notion that the soul, or something like it, will emerge from a future computer, kind of like Dr. Lanning thought in I, Robot and Ray Kurzweil thinks in real life. (Why, music from Kurzweil's synthesizer can be pretty darn indistinguishable from a "real" musical instrument, I understand.) You just have to think that computers can fool us into believing, for example, that we are talking to a real person from wherever (India?) rather than to a computer on a telephone call to an automated service line. And that is almost certainly coming. Think Siri. Think Her.
In the interview in question, Wozniak told Maria Bartiromo that it’s not like we set out to make machines that think, we just tried to see what things we could do with machines, and one thing led to another, and, well….
Heretofore, we human beings have simply moved on to what things humans should really be spending time on. And so, one might think, it will continue to go—unless we increasingly have trouble finding those uniquely human activities. As it is, we keep putting machines in between ourselves and other people, preferring the virtual to the "natural." Think of the last time you were around a dinner table with everyone looking at his or her cell phone. Why, my friend says the kids on his sons' Little League team spend all their time on their cell phones during a baseball game; i.e., on the bench or in the field, not in the bleachers.
So do we all need to "code," to write software, Bartiromo asked Wozniak? No, he said, hardware is even more important. Education MUST include robotics if future students are to be ready to live in the world. In fact, I hear that parents of means in the San Francisco Bay Area are scrambling precisely to get their kids involved in robotics from a very young age.
So if that happens, whither the “liberal arts,” or, better, whither classical education? And the fine arts? And so on. The implications run very, very deep.
And one implication is that if you have money, you will inhabit a world in which the computers are more important than people. And if that means that you make up, collectively, what The Economist calls the "global meritocracy," what happens to the rest of mankind? The people controlling the resources and large societal choices will not just be changing the basics of the culture—the water we swim in, so to speak—but quite possibly also limiting choices for future humans, rather than creating opportunities. That, to me, is reminiscent of C.S. Lewis's worry in The Abolition of Man. Or, more prosaically, it sounds like Anne Hathaway's character in The Dark Knight Rises: a few people "living so large and leaving so little for the rest of us."
I surprise myself in that I'm starting to sound like a real leftist here. But I am certainly not sure where all this ends. And contemplating it is, I think, a reminder of why justice is such a powerful consideration in ethical discussions of cutting-edge biotechnologies. It's not just the old consequentialist argument about distribution of whatever goods a technology may spawn. It is, as I take some of the contemporary European thinkers to be arguing, a question of whether the march of technology fundamentally destroys human equality—the dignity of all individual people, dignity in that sense of what is commonly precious about all humans solely by virtue of the sort of beings they are, including but not limited to that generally recognized common-core dignity, individual choice (autonomy).
One doesn’t need a genetically unique Homo to get there—just enough social momentum to establish a permanent overclass and underclass. And how can that be avoided, really, if there is no essential human nature?