A version of this article was printed in Humanism Ireland, May-June, Vol. 146 (2014)
The notion that our moral intuitions possess epistemic authority has been associated with a number of philosophers in the canon of Western thought. Roughly speaking, these thinkers have argued that our intuitions enjoy a unique perceptual authority that yields special access to a sphere of moral legitimacy. Others, however, have pointed out that our intuitions are remarkably diverse and often conflict with each other—for example, your intuition says assisted suicide is morally permissible and my intuition says it’s wrong. Yet the two contrasting intuitions cannot both be right. At the same time, most of us think our own moral intuitions are right: they do not seem inconsistent to us, and we have a strong inclination to believe them. Accordingly, they strike us as correct.
Undoubtedly, moral intuitions can be shaped by our
particular culture, environment or code of belief, and indeed they can be caused
by individual prejudice or self-interest. At other times we intuitively feel something
is wrong, but on further reflection, or with the benefit of hindsight, we change
our minds. These factors would seem to indicate that our moral intuitions can
sometimes be mistaken and are not exceptionally reliable.
The 18th century philosophers David Hume and Jean-Jacques Rousseau had the view that the origins of ethics are to be found in particular intuitive feelings or similar inclinations. Immanuel Kant, on the other hand, absolutely rejected the relationship between ethics and sentiments; instead he said that in order to fully understand the nature of morality we must discover the ‘pure moral law’, furnished by reason alone. For Kant, feelings and sentiments are to be suspended. The divergence of intuition and reason, and the extent both play in our moral evaluation, is something philosophers have disputed for the last three centuries at least.
Trolley problems
In the 1960s
and 70s, two philosophers—Philippa
Foot and Judith Jarvis Thomson—formulated a number of provocative thought
experiments to consider this problem more closely. They are known
in contemporary applied ethics as ‘trolley problems.’
In the standard switch dilemma, a
runaway trolley is hurtling out of control towards five people who will be
killed if it proceeds on its present course. The five can be saved,
though, by throwing a switch and redirecting the trolley onto another set of
tracks—but one person is standing on that track, and he will be killed if you
divert the trolley. When asked what you should do in these
circumstances, most people say you should redirect the trolley, thus saving
the five lives.
In another, even more extreme predicament—the footbridge dilemma—the trolley, once again, is
headed for five people. This time, though, you are standing next to a large man
on a footbridge spanning the tracks. You realise the only way to save the
five people is to push the stranger off the footbridge, thus halting the trolley.
(You consider jumping and sacrificing yourself, but at that point you realise
you are not large enough to stop the trolley.) Sadly, the large stranger will
be killed by the collision, but his body will be enough to halt the trolley, thus
saving the other five lives. When asked what you should do in these circumstances,
most people say you should not push
the stranger off the bridge.
Both dilemmas offer us the chance to sacrifice
one life to rescue five, so it seems they are morally comparable. Yet about 90 percent of subjects consider it acceptable in the switch
dilemma to throw the switch and redirect the trolley, saving five lives
at the cost of one, while a similar majority consider it unacceptable to sacrifice the stranger in the footbridge dilemma, even though doing so would also save five lives at the
expense of one. When pushed for a reason why they would sacrifice one life in
the switch dilemma but not in the footbridge dilemma, most people can’t offer
anything coherent. Over the past few decades, many philosophers have tried to
find ways around the problem, but haven’t come up with anything entirely
persuasive.
Further
thought experiments
Other thought
experiments were subsequently devised to help us understand more specific moral
decisions and behaviour. For example, the psychologist Jonathan Haidt asked
participants to respond to the following story:
Julie and her brother Mark are travelling in France on a summer
holiday from college. One night they decide it would be interesting and fun if
they tried making love. Julie was already taking birth-control pills, but Mark
uses a condom as well, just to be safe. They both enjoy the experience, but they
decide never to do it again. They keep that night as a special secret, which
makes them feel even closer to each other. Was it wrong for Julie and Mark to
have sex?
Haidt found that
about 80 percent of the subjects he questioned thought the siblings’ behaviour was morally wrong.
When asked to give a reason for their answer, respondents often alluded to the
dangers of inbreeding or to the risk that the siblings could be emotionally hurt; but we
know from the story that Julie and Mark used two forms of birth control and that they
were not hurt by the experience. After some time, people usually say
something like: “I don’t know why, I can’t explain it, I just know it’s wrong!”
Haidt refers to this phenomenon as ‘moral dumbfounding’—that is, where someone
arrives at a moral decision, but when asked to give a justification for it, he
is dumbfounded. This suggests it’s often not one’s reasons that drive one’s conclusion,
but one’s moral intuition.
Consider another
dilemma:
A pregnant woman is informed at the hospital that her
foetus is badly impaired in various ways, both mentally and physiologically. As
a result, the doctor advises her that abortion would be appropriate in light of
the circumstances. She wants to go home first and discuss it with her husband.
A few hours later, after thinking it over, they decide that abortion
is the right course. The couple don’t want to bring into the world an impaired infant
who would be in lingering pain and is expected to live for only a few
months.
Almost everybody
who thinks that abortion ought to be permissible would say it’s the correct
thing to do in this instance. But this is not where the story ends!
The night after the decision to go ahead with the abortion was
made, the woman unexpectedly went into labour and the baby was born
spontaneously. Now that the baby is born, abortion is not an option. As
predicted, the baby is severely impaired and in extensive pain. The doctors at
the hospital estimate that the infant will only live a few months at the very
most. The parents still maintain that in the interest of the child, it would be
better if his life was ended.
Should
infanticide be morally permissible in this case? I would guess people are a lot
more hesitant about the moral permissibility of infanticide than abortion here.
Those who believe abortion is acceptable, but not infanticide, might protest:
one act, they might say, is generally legal (in most western
democracies anyway) while the other is not; that it is wrong to kill a
newborn infant, but not an unborn foetus; or that the foetus is attached to the
woman’s body while the newborn is not, so abortion is
defensible but infanticide is not. But none of these arguments succeeds in
distinguishing the two cases: you remind them that laws are not always in
accordance with what is morally right; that the intrinsic moral status of a being
cannot be determined by something as arbitrary as whether or not it is inside
its mother’s body (nor by the 10-12 hours that separate the two decisions); and that the
couple’s original intention was to have an abortion mainly in the interest of the
infant anyway—something the couple still insist on.
From this we can
see that the reasons for supporting abortion here are equally applicable to infanticide.
Evidently, it seems it’s the
intuitive response that drives the judgement people reach, as almost everybody
is morally repulsed by the idea of infanticide. A clear example of the intuitive ‘yuck’ response comes from the
philosopher Stephen Mulhall, who once said that even to consider the possibility that infanticide
could be tolerable is to bear “evil thoughts.”
The clear solution, he argues, is to reject any view that comes to such
conclusions. But is Mulhall right? Assuming abortion is defensible in the above
case, can we simply reject infanticide merely because it feels intuitively
wrong?
As we can see from these various thought experiments,
people have a tendency to hold fundamentally conflicting appraisals. To
be sure, none of these paired dilemmas necessarily implies that it is impossible for
one to make a rational argument that happens to accord with her
intuitions. What they do seem to show, however, is that moral intuitions can
often be held in a set of circumstances without one really offering a well-founded
reason in their support.
Morality and
neuroscience
What’s going on here? Some have proposed that the
roots of our diverging moral judgements in parallel situations may lie in our
different emotional responses to particular situations. Over the last decade or so, insights into the
way we make moral judgements have come from experiments using functional Magnetic Resonance Imaging (fMRI), largely carried out by the cognitive neuroscientist
and philosopher Joshua Greene.
Greene designed experiments that sought to find points of conflict between
brain areas identified with emotion and areas devoted to cognition. He began
to scan subjects while presenting them with trolley problems (like the ones
described above).
His tests indicate that when people are asked to
make a moral judgement about ‘personal’ infringements, like pushing a stranger
off a footbridge, heightened activity lights up in the areas of the
brain correlated with emotion, compared with people asked to make
judgements about relatively ‘impersonal’ infringements, like throwing a
switch to redirect a trolley. In another experiment, Greene examined the brain
activity of the small number of subjects who said that it would be okay to push
the stranger off the footbridge; here he found more activity in brain areas
associated with cognition than in the brains of those who said it would
be wrong to push the stranger.
Although the field is still in its infancy, and the validity of these results is still debatable, the findings do suggest that our moral intuitions largely have an emotional impulse rather than a rational cost-benefit basis. If these findings are accurate, then perhaps David Hume was correct when he famously declared that reason is merely the “slave of the passions.” This sentiment is echoed by Jonathan Haidt, who claims our moral judgements are normally the outcome of fast, almost automatic, intuitive reactions, and that we tend to construct after-the-fact justifications through conscious reasoning to support our earlier intuitive judgement. On top of that, there’s some evidence to suggest that when it comes to emotionally charged social issues like capital punishment and abortion, the force of one’s moral intuitions can sometimes actually shape one’s beliefs about the science, evidence and facts of that particular matter, rather than working the other way around.
Morality and
natural selection
Greene then sought to explain why our initial
gut reactions often prevail over our impartial reflective
capacities. All this makes sense from an evolutionary point of view, he contends, because
“we have evolved mechanisms for making quick, emotion-based social judgements,
for ‘seeing’ rightness and wrongness, because our intensely social lives favour
such capacities, but there was little selective pressure on our ancestors to
know about the movements of distant stars.”
Humans have lived in small groups for most of our evolutionary past,
where violence could only be inflicted in a direct and personal way.
The idea of pushing a stranger (as in the footbridge dilemma) evokes these strong intuition-based
reactions. The idea of redirecting a train (as in the switch dilemma), on the other hand, has only
been feasible for the last couple of centuries—too short a time, in evolutionary
terms, to provoke an emotional reaction comparable to pushing
someone off a bridge. Greene’s hypothesis, it
seems, may also help explain our other dilemmas: our revulsion to incest serves a clear
biological function, since we are more likely to have a genetically abnormal child
with a close relative; and we probably also have
an evolved propensity to nurture and care for infants, so a strong
emotional reaction against infanticide serves as a useful evolutionary survival
mechanism.
Greene’s findings seem
to fit nicely into the broader evolutionary view of the origins of morality. Certainly
others before him maintained that its roots can largely be explained as a
natural phenomenon. In his Treatise of
Human Nature, David Hume comes close to an evolutionary understanding of
loyalty: “A man naturally loves his children better than his nephews, his
nephews better than his cousins, his cousins better than strangers.” Hume wrote
this over a century before Charles Darwin’s On
the Origin of Species, and he had
no theory of natural selection to explain why one generally
prefers one’s relatives over others. Darwin himself also believed
the origins of morality could be understood in naturalistic terms. In The Descent of Man he declared that “[a]ny animal whatever, endowed with well-marked social
instincts ... would inevitably acquire a moral sense or conscience as soon as
its intellectual powers had become as well developed, or nearly as well
developed, as in man.”
Furthermore, during the 1960s and 70s scientists like Robert Trivers and E.O. Wilson examined emotions such as sympathy, anger, gratitude and guilt to help explain the evolution of cooperation and reciprocal altruism. More recently, the psychologist Paul Bloom, author of Just Babies: The Origins of Good and Evil, carried out a number of experiments suggesting that three-month-old babies judge individuals by the way they treat others; he found that infants will reward the good guy and punish the bad guy. Bloom argues this evidence suggests humans possess a universal moral sense. The primatologist Frans de Waal has long claimed that non-human primates are also capable of genuine altruism. What’s more, reciprocal behaviour has been observed in other species, such as dolphins, dogs and rats.
Normative implications
Does this
explanation of our moral sense as a natural phenomenon—in the sense that it has
evolved and is part of our biological nature—prove our intuitions are morally valid
after all? Evolutionary theory, to be sure, can describe common morality, like devotion
to our family members and the duties associated with reciprocity. But despite its discernible
empirical validity, the argument is unsound at its core, since it commits a
form of the ‘naturalistic fallacy’: it draws moral conclusions from natural facts, yet there’s
no necessary relationship between the origin of morality and its justification. Some of our intuitive gut reactions may simply express the biological imperative to spread our genes, and it does not
follow that something is good because it is doing what it
evolved to do. These findings, in actual fact, should make us more sceptical about
placing reliance on our intuitions: as Greene’s work has shown, intuitions can
be oversensitive to things that don’t seem
morally relevant, and insensitive
to things that do seem relevant. Biology may have set the building blocks of morality, but as the biologist and philosopher Massimo Pigliucci put it, "it is a very long way from that to Aristotle's Nicomachean Ethics, or Jeremy Bentham and John Stuart Mill's utilitarianism."
If our intuitions are a product of natural selection and the wiring of our brains, how can we make a real distinction between right and wrong? If we cannot rely on our intuitions as valid guides, does this mean morality is merely an illusion? I don’t believe it necessarily does. It does indicate, however, that intuitions can cloud our rational capacities, and that we should be more self-probing before making moral evaluations. Nothing said here declares it impossible for us to sidestep our intuitions and attain impartiality and reflective equilibrium in moral analysis. We should not lose sight of the fact that people can often come to moral conclusions through discussion, communication and debate with others. We possess capacities to reason that do not seem to confer any evolutionary advantage. As the thought experiments indicate, despite our initial moral sentiments, we are capable of recognising, on reflection, that these dilemmas have similar outcomes; this seems to show there’s a part of our thinking that can recognise the discord.
Physicists, like everyone else, intuitively perceive the material world through their senses, but with analytic deliberation and measured testing they can overcome their common-sense perceptions and see that the world operates in very different ways from how it first appears. Similarly, the task of ethics is not simply to sit with our intuitions—our common-sense morality—since, as we have seen, they are often myopic and unreliable guides to what we should do. Rather, we need to ask critically what we should do in particular cases, and what the reasons for our conduct are.
Comment: Hmm... I have to reconsider myself as not such a rational being as I thought I was before reading the article. Interesting analogy at the end there (counterintuitive stuff in physics).
Reply: Thanks Nathalie! Yes, I included a small number of examples to show how our moral intuitions can be fickle and confused. Maybe I could have included others. In terms of concern for non-human animals, many people express outrage when they hear about fox-hunting or whale killing, for example, and denounce their proponents as barbaric and inhumane. Yet it seems to me that factory farming is much worse overall, and many people who condemn fox-hunting consume meat and even dairy. Why are they outraged by one and not the other? Here’s an amusing trolley dilemma in action (http://www.youtube.com/watch?v=NG4WhppBNCM)