Discussing ethical altruism and consequentialism vs. deontology
with philosopher Richard Yetter Chappell
“Why should I want to be moral?” Dr. Fischelson asked. “What’s in it for me? Social scorn? Death by hemlock?”
Uri scoffed. “It’s not a matter of what’s in it for you.”
“Yes, it is,” Joshua objected. “You can’t just say, be moral. You have to say why.”
This excerpt from my novel is about as much as I’ve written on consequentialism vs. deontology…let’s just say I’ll take the hard problem of consciousness over such ethical conundrums any day of the week. I may not be brave enough to tackle the big questions in contemporary moral philosophy, but I’m pleased to introduce you to a philosopher who is.
“What is fundamentally worth caring about? What should we do about it?”
These are the questions Richard Yetter Chappell faces head on.
What question initially seems trivial but reveals unexpected depths when seriously examined?
RICHARD YETTER CHAPPELL: Should we want others to act rightly?
You might initially assume that of course we should. But this is only obvious if rightness is determined by what philosophers call "agent-neutral" reasons—reasons that serve impartial goals, like promoting the common good, that are the same for everyone.
Many deontologists instead believe that rightness is "agent-relative", giving the agent in the situation special reasons or goals (e.g. to keep their own hands clean) that others needn't share. (Maybe we should each care about our own clean hands, but not those of that other agent.) If ethics is agent-relative, then impartial bystanders should want others to act wrongly whenever that would better serve agent-neutral goals, such as the common good.
I call this "the curse of deontology", and I think it provides us with strong reasons to doubt that ethics is agent-relative. If instead, as consequentialists believe, it's right to do what best promotes the common good, then we can all reasonably hope that others successfully act rightly.
Do you hold views that contradict the mainstream in your field? What are they, and why is the mainstream wrong?
RYC: Most philosophers think that utilitarianism is a deeply "counterintuitive" view, and that common sense better supports some form of deontology. I believe the opposite. My explanation of where others go wrong is that (i) they focus on assessing superficial deontic verdicts about what agents ought to do in hypothetical situations, rather than deeper telic verdicts about what we ought to prefer or hope to see happen; and (ii) they don't understand that deontic verdicts are decomposable into telic and decision-theoretic components.
Utilitarianism's distinctive claims are telic in nature, and those claims are all entirely commonsensical: of course we should prefer better outcomes! (And as the curse of deontology brings out, other views sound bizarre when they deny this.) Counterintuitive deontic verdicts emerge when you combine utilitarianism with naive instrumentalism: the view that agents should make decisions by doing whatever seems superficially likely to achieve their goals (no matter how Machiavellian). But we should reject that view of instrumental rationality as clearly incompatible with what we know about human fallibility, bias, etc. As a result, I think the "mainstream" objections to utilitarianism aren't just unconvincing; they're fundamentally confused and targeting the wrong theory.
If you had to compress your worldview into a single provocative statement, what would it be?
RYC: Vibes are no substitute for systematic thought.
If I'm allowed a more substantive follow-up, I'd add: Moral reflection should proceed via two steps. First, think about what outcomes we morally ought to prefer. Then consider what norms and ways of thinking will best serve to help bring about a future that's more rather than less morally preferable. Afterwards, put these reflections into practice. (I like effective altruism as a serious explicit effort to implement that post-reflection step. But readers should, of course, use their own judgment.)
Are there any philosophical questions that will never be resolved? Why?
RYC: It depends what you mean by "resolved". I don't expect any of the really "big" questions to ever secure universal consensus, because there are multiple internally coherent philosophical "worldviews", and rational argument proceeds by way of identifying internal inconsistencies (premises you expect your interlocutor to accept that entail a conclusion that they currently deny). This means that once they've ironed out all their internal inconsistencies, there's no way to rationally persuade them to change worldviews.
That said, I expect any philosophical question is "resolvable" in the sense that some people can come to know the truth of the matter. They just won't be able to persuade everyone else. (I imagine most philosophers often feel themselves to be in this position! Alas, there's no externally valid test to determine whether the feeling is accurate or not in any given case…)
WHAT DO YOU THINK?
Is it right to do what best promotes the common good? Or do you have a duty to do what’s moral, regardless of the consequences? Or is there another moral framework you prefer?
Is utilitarianism a deeply "counterintuitive" view?
Do you agree that “vibes are no substitute for systematic thought”?
RICHARD YETTER CHAPPELL is an Associate Professor of Philosophy at the University of Miami. He is the author of three books including Parfit's Ethics (Cambridge University Press), and is currently working on a fourth, Beyond Right and Wrong. His Substack, Good Thoughts, tries to put his methodology for moral reflection into practice.
Thanks for supporting my niche literary endeavors and off-beat philosophical speculations. Cheers!
—Tina Lee Forsee
I want to thank Richard for doing this interview. It's refreshing to talk about something other than consciousness for a change!
"Is it right to do what best promotes the common good?"
Seems like it would depend on how we define "the common good". Years ago, in a discussion, someone pointed out to me that with moral philosophy you always eventually hit a wall of subjectivity. Over the years, I've never seen a convincing way to pierce that wall. Which has left me skeptical that any simple principle will work in every situation.
I'd say utilitarianism starts off intuitive enough, but it has counter-intuitive implications. (Kill one patient to save five?) But so does deontology, as Richard covered in his answers. There might be value in doing multiple checks for any particular question: a hedonistic utilitarian one, a preference utilitarian one, and a deontological one. But really, when they conflict we're likely to go with the answer we wanted anyway.
Which, to me, implies that morality is a social technology, one we all create together and continuously recreate. That's unsatisfying. We'd all prefer that there be universal unchanging principles, ideally ones that agree with our own preferences. Instead we have to figure out how to live together without anyone being able to prove their preferences are the one true ones that everyone must accept.