by J.S. Blumenthal-Barby, Ph.D.

In a recent article in Ethics, “Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics,” Josh Greene argues that empirical research on moral judgment has significant relevance for normative ethics in that it (1) exposes the inner workings of our moral judgments, revealing that we should have less confidence in some of our judgments and in the ethical theories based on them, and (2) informs us of where we tend to rely on intuition or automatic processing (which is often heavily emotive) but ought to rely more on manual, controlled processing (such as consequentialist reasoning).

Problems with our (intuitive) moral judgments (and for deontology?)

Greene uses a camera analogy throughout the article: a camera has an automatic mode (which allows for efficiency) and a manual mode (which allows for flexibility and is sometimes important to use). Such is the case with moral reasoning. The problem is that automatic, intuitive moral judgment is susceptible to framing effects, reflects imperfect cognitive heuristics, and is often resistant to evidence that might change or undermine it (instead, we tend to engage in post-hoc rationalization or “intuition chasing”). Greene recounts a substantial amount of this evidence. And, here’s the kicker: deontological-type judgments (“rights,” “duties”) are linked more closely to the automatic, emotional response systems (the VMPFC region of the brain is activated), while consequentialist-type judgments (impartial cost-benefit reasoning) are linked more closely to the controlled, reasoned response systems (the DLPFC region of the brain is activated). Thus, according to Greene, we should have less confidence in deontological moral theories than in consequentialist ones.

When to rely on manual mode and why all is not lost

All is not lost, however, for we can identify areas where relying on automatic mode, or moral intuitions, is bound to be particularly problematic, and where we should instead use manual mode. Greene argues that it is fine to use automatic processes in situations where our judgments have been shaped by trial-and-error experience and in situations involving familiar moral problems. But in situations involving unfamiliar moral problems, or where there exists significant moral disagreement, it is better to use manual mode. Examples of “unfamiliar moral problems” include moral problems arising from recent cultural developments, such as the rise of technology or the ability to help people far away through easy Internet donation. Greene also holds that in most day-to-day morality relying on automatic mode is sufficient, but that at the level of rule and policy making we should switch on manual mode and aim for the best long-term consequences (rather than trying to figure out who has what rights and duties). Finally, Greene advocates using reflective equilibrium, whereby we test our intuitive judgments in particular cases against reasons and first principles and then readjust them accordingly, so that they become considered judgments rather than mere intuitions (incidentally, this is probably what earlier intuitionists in philosophy such as Sidgwick meant by intuitions—see the paper by David Brink in the same issue of Ethics). But more specifically, he argues that these exercises of reflective equilibrium need to be even more expansive: we need to consider our intuitions and principles in light of what we know about moral judgment from the empirical sciences.

Implications for bioethics

Bioethical judgments arguably rely on a fair amount of automatic processing (we often use cases as intuition pumps, and we often make these judgments in the moment, so to speak, at moments that are particularly emotional), and they arguably involve a good number of situations with unfamiliar moral problems (advances in technology) and a fair amount of disagreement among those involved about the right thing to do. Thus, one might argue that bioethics is especially vulnerable to the sorts of concerns that Greene outlines. One important task for bioethicists might be to identify situations involving unfamiliar moral problems or disagreement and then work with empirical scientists to understand the processes driving the judgments in those situations, so as to determine whether those processes are desirable/relevant or undesirable/irrelevant, and, if they are undesirable/irrelevant, to determine how we need to readjust our moral judgments in light of the processes that are causing us to arrive at them. To close, one useful example that Greene gives in his article is incest. Through empirical science, we learned that judgments about incest are likely based on an evolutionary drive to avoid producing offspring with genetic diseases. This response process is not relevant in cases where the couple will not reproduce; thus, we may want to readjust our moral judgments concerning the condemnation of incest in such cases.
