In Defense of Intuition: Or, a Lesson for Empirical Bioethics


by J.S. Blumenthal-Barby, Ph.D.

On March 17, 2016, philosopher Peter Railton delivered the Ethics, Politics, and Society lecture at Rice University. Railton is the Gregory S. Kavka Distinguished University Professor, Arthur F. Thurnau Professor, and John Stephenson Perrin Professor at the University of Michigan. His talk was titled “Homo prospectus: A new perspective on the mind.”

Railton’s main aim was to counter a current trend in the social sciences: a distrust of our intuitive responses to the world. This skepticism has crept into moral philosophy as the empirical literature documenting the effects of framing on moral judgments grows. Philosophical ethics relies heavily on “thought experiments,” in which agents are presented with a moral dilemma and asked to decide what ought to be done. Consider, for example, the trolley problem, posed in part to help us evaluate consequentialism (i.e., a person faces a choice between doing nothing, allowing a train to travel along a track and kill 5 people, or flipping a switch to divert the train onto another track where 1 person will be killed). Neuroscience has shown that we prospect by making “mental models” during these judgments (predictive and evaluative models of the options). Of course, we are often not aware of the models themselves; what we are aware of instead is our intuition or “sense” that a certain choice is a good choice or the best option in the situation.

As mentioned, these intuitions are increasingly questioned. Return to the trolley dilemma just posed. Roughly 70% of people will say “flip” and 30% will say “don’t.” Now describe the scenario differently: imagine that you are on a footbridge above the tracks, and instead of flipping a switch to divert the train from the 5 to the 1, you must push another person off the bridge onto the track. Now roughly 30% will say “push” and 70% will say “don’t.” But what is the morally relevant difference between the two scenarios? Or consider this version of the dilemma: you are stepping off a bus, with another person stepping off in front of you, when you notice terrorists with bombs outside the bus, ready to rush in. You can push the man in front of you into the terrorists to detonate their bombs. Here, roughly 70% will say “push” and 30% will say “don’t.” Finally, consider Railton’s “beckon” version of the dilemma: you are near the track, another man is there, and you can “beckon” for him to come onto the track so that the train will hit him and be diverted from hitting the 5. Roughly 30% will say “beckon” and 70% will say “don’t beckon.”

What’s going on with these fluctuating intuitions about seemingly morally similar cases? As Railton pointed out, if we dig deeper we see a more unified structure (or model) behind these intuitions. What we find is that agents presented with these dilemmas are simulating what kind of person performs the action in question and then using that model to answer the question about the morality of the action. Railton discovered this by asking people to imagine that the agent in question was their friend, and then asking whether they would trust him more, less, or the same if he flipped the switch, pushed from the footbridge, pushed outside the bus, or beckoned. Trust stayed about the same for the switch flip, decreased for the footbridge push, increased for the bus push, and decreased for the beckon. Essentially, trustworthiness served as a consistent mediator and mental model across the seemingly inconsistent intuitions. And it is arguably a reliable model (the result of years of social and evolutionary experience, or “implicit knowledge”) rather than a nonsensical one. Railton would not have discovered this had he not dug deeper to explore the intuitions under study.

His lesson: we need to pay more attention to intuitions in morality, but we should not accept them blindly or use them for quick answers. Instead, we should probe them and figure out their structures and nuances. In other words, it is essentially a methodological point: we need to use thought experiments in moral philosophy and bioethics more creatively. Note that this would help us more accurately discover the descriptive facts about how we in fact make moral judgments, but it leaves the normative questions about how we ought to make moral judgments untouched. For example, if Railton’s experiments are read as pointing toward a descriptively virtue-based ethical theory (that we judge an action as wrong to the extent that it is something an “untrustworthy” person would do), then there is a further question about the value or merit of a virtue-based approach to ethics. If the normative evaluation points toward a more consequentialist or action/outcome-based approach, then there might be reason to develop interventions that steer humans away from their natural dispositions and toward different ones.
