Posted on March 14, 2017 at 8:57 PM
Mark McQuain, in his February 21st blog post, discussed an interesting article which proposed that ethical decisions be made by robots. Although the author’s specific arguments invite numerous responses, underneath these arguments lies the question: why does modern man spend such effort to use technology to rid himself of yet another intrinsic function of his existence?
It seems to me that this wish to pass off ethical decision making is a prime example of our drive to divest ourselves of difficult, painful, messy, and often guilt-inducing work in our moral and spiritual lives.
J. Budziszewski described this problem in his book What We Can’t Not Know when he wrote, “…two universals are in conflict: universal moral knowledge, and the universal desire to evade it.”
If we look closely, behind the artfully constructed arguments heavily refined in postmodern academia lies an unspoken motive of moral avoidance: the desire to distance oneself from the emotionally painful or otherwise costly consequences of man's existence. Technology has already been used quite well to help us avoid other discomforts; why not use it to help us avoid emotional discomfort as well?
For example, how many instances of discussion of physician-assisted suicide are really driven by the physician’s, family’s, and government’s sense that their lives would be so much easier if this person would just die before things got messy? How neatly this prevents emotional strain on the part of everyone besides the patient. The all-too-well-developed arguments invoking “compassion” and “dignity” are in fact contrived as a veneer to cover this motive.
And here abortion is the close cousin of assisted suicide: far better to promote "choice" than to deal with the unpleasant social consequences of unwanted pregnancies. Think of all the tough decisions we can avoid!
Of course, if there are ways to reduce or mitigate ethical dilemmas, it is reasonable to pursue those. But the article does not describe avoidable dilemmas, only a desire to avoid poor decision-making. The author states, "I don't want it to be human. I want it to be true to its code." But this is really avoidance of the difficult task of developing one's own code. It is avoidance of moral decision-making, and of the hard consequences across one's entire life once one does adopt a code. Suppose, for instance, one decides to become a Christian: there are quite a few lifestyle decisions to make, if one truly means it. How easily we might avoid such personal decisions if we rely on a computer code instead of a personal code. But the problem is not lessened; it is just passed to others. In this article's case, it is passed into the hands of programmers. Or, unless computer programmers have taken a larger interest in ethics than has been heretofore apparent, into the hands of their ethics consultants.
What drives man to such avoidance? We could find any number of man's base drives among the reasons: selfishness, greed, sloth. But among these must be fear. Fear of emotional inadequacy, fear of being wrong, fear of one's own mortality, fear of the tough personal consequences if one were to admit what in fact is the right thing to do. Too rarely in our society, or in man's existence itself, do we admit that we are just plain afraid.