"Is it right to do what best promotes the common good?"
Seems like it would depend on how we define "the common good". Years ago in a discussion someone pointed out to me that with moral philosophy, you always eventually hit a wall of subjectivity. Over the years, I've never seen a convincing way to pierce that wall. Which has left me skeptical that any simple principle will work in every situation.
I'd say utilitarianism starts off intuitive enough, but it has counter-intuitive implications. (Kill one patient to save five?) But so does deontology, as Richard covered in his answers. There might be value in doing multiple checks for any particular question, a hedonistic utilitarian one, a preference utilitarian one, and a deontological one. But really, when they conflict we're likely to go with the answer we wanted anyway.
Which to me, implies that morality is a social technology, one we all create together, and continuously recreate. That's unsatisfying. We'd all prefer that there be universal unchanging principles, ideally ones that agree with our own preferences. Instead we have to figure out how to live together without anyone being able to prove their preferences are the one true ones that everyone must accept.
I'm skeptical of simple principles too, or grand laws that are meant to work in every situation. The common good is a tricky concept to articulate, but I'm also skeptical of attempts to avoid taking flourishing or happiness into account (a la Kant).
I guess to give a little defence of "vibes" based morality, I think it's probably fair to say that moral philosophers are not necessarily the most moral people, or the best people to go to for help with our real life ethical conundrums. I think in those cases, we are better off turning to wise elders with lots of good life experience and good judgment. That might be seen as a "vibes" based approach, but I think in practice it's the best one. I'm not sure if Richard would agree or disagree?
I personally prefer virtue ethics, but I think VE, consequentialism and deontology may ultimately be pretty much reconcilable. I'm arguing for that a little in the post I'm working on atm, on virtue ethics.
Yeah, I agree that there's an important place for vibes-based ethics in everyday life, and that most "real life ethical conundrums" benefit more from life experience than from philosophical reflection (because they really depend upon subtle empirical uncertainties about how we can expect our choices to affect others, and indeed ourselves). I'm more thinking about academic/intellectual contexts, where we hope to make moral progress by improving upon current conventional vibes. For related discussion, see:
Like you, Tina, I approach moral philosophy with reluctance. However, seeing as no one else has stepped up to comment on Professor Chappell's contribution, I'll share my thoughts. I have no prior commitment to deontology or utilitarianism, or for that matter virtue ethics or the so-called "Socratic ethics" recently proposed by Agnes Callard. But questions about morality need to be asked, and sometimes I poke that bear.
The distinction between what agents ought to do and what they ought to prefer seems a little cloudy to me. Our idea of what we should do is always based on what we prefer (or think we prefer). Deontology is simply an attempt to codify what we should do to realize what we prefer, using reason to discover fixed principles, rather than leaving it to the mood of the moment. One could even argue that utilitarianism is a special form of deontology, in which the rationally discovered rule is to maximize the common good.
What constitutes "the common good" is unfortunately left open. For example, euthanizing the old and the disabled, or the very stupid or the non-compliant, would arguably serve the common good, assuming these groups to be in the minority. Any utilitarian brave enough to espouse such a view would be taking for granted what we ought to do. They would be arguing instead about what we ought to prefer, or "hope to see happen," or about what constitutes a "better outcome." It's the idea of a "better outcome" that needs to be examined, and this is why I think utilitarianism just pushes the real argument down the road.
If I had an answer, I would offer it. I am skeptical, though, that systematic thought can wrestle it to the ground in the absence of "vibes," if by this we mean "affect." Setting aside our vibes about euthanizing the useless, there is no obstacle to such a programme that I can see. But for whatever reason, people won't set them aside. They seem somehow ethically important, despite their rational intractability, and so we are stuck with them. And if we are stuck with them, we will have to work with them.
My first thought is someone could argue it's psychically abhorrent and doesn't take into account the emotional and empathetic beings we are, but I don't know if this counts as consequentialism.
I have to admit, I largely rely on vibes. Whether that's right or wrong, I don't know, but systematic thought in ethics makes my head spin.
That's so true. I think in a real life trolley dilemma you'd see a bunch of bystanders doing nothing, although pulling out a phone to record the scene seems to be fairly instinctual for some these days.
I can tell you from personal experience that in extreme emergency situations where it's up to you to act, something else takes over your normal or rational mind, some irrationally heroic or instinctual self that bypasses your normal thinking self, sometimes to the point that you can't even remember what happened or what "you" did.
Each person's good is a part of the common good, so we should be happy for others to live happily, so long as they're not harming others (and can either finance themselves or have a caregiver who is happy to shoulder that responsibility). Further, as Tina points out in her reply, it would cause most people a lot of distress to see others badly mistreated! Such sympathetic impulses seem good for society and worth nurturing.
(I suspect that we spend a lot more on the elderly than is really justified, and society would do better to redirect more of those public funds to invest more in children and young adults instead. But that's very different from saying we should be killing them! And in any case, we surely spend a lot of money on worse priorities than taking care of each other. So it's not like this is at the top of the list of suboptimal spending.)
I could imagine a utilitarian arguing that capital punishment should be more widespread for repeat criminal offenders. But rehabilitation (to become a productive and trustworthy member of society again) would obviously be vastly preferable in any case where it's feasible.
But on your more general point, I agree that getting clearer on what constitutes "the good" is a very important open question.
Perhaps if we define our idea of the good in terms of materialistic satisfactions, we arrive at one definition of the common good, whereas if we emphasize non-materialistic satisfactions we arrive at another.
A separate aspect of Prof. Chappell's contribution that I wanted to take up is the idea that once a person has "ironed out all their internal inconsistencies, there's no way to rationally persuade them to change worldviews."
The idea that there are multiple nominally coherent worldviews makes sense to me, and since each worldview is internally consistent, there is no rational way out of it. My question is whether there are non-rational methods to change a worldview, and whether any of them would be considered legitimate. Of course, I would set aside violence, but would art or literature, or guided psychedelic experiences, or forms of social engineering such as educational programmes or "awareness" campaigns be the kinds of things we might consider as a way to change worldviews?
And behind this there is the question of whether we need to change someone's worldview at all. Is there a sense in which some worldviews are better than others?
Yeah, I imagine there are any number of experiences - including the everyday subconscious inclination to conform to views that your friends respect - that could lead to non-rational worldview revisions. It's an interesting question in general when we should be open to non-rational belief revisions, vs. when it would constitute objectionable manipulation. (I don't currently have a satisfying answer!)
Whether it matters: well, insofar as moral beliefs affect how we behave, how we vote, etc., they may end up having a significant impact on others' lives, for better or for worse.
Thanks. At the moment I'm collecting ideas about how to change "ways of seeing." I was thinking more on a one-to-one level, but your point about peer pressure is well-taken. It opens the question of whether one might attempt to influence a group, as opposed to an individual. This brings into play some questions on my mind about intersubjectivity, or the space between individuals we call "public."
The struggle I have is that this seems like a false dichotomy. Either you support the "common good" regardless of the means employed (hello, Thanos) or you take a "moral" stand whether or not it results in a repugnant outcome (bonjour, Jean Valjean).
Human life is messy and complicated and seldom lends itself to neat little boxes. What is the right thing to do? Most of the time, the best answer is "It depends."
"Is it right to do what best promotes the common good?"
Seems like it would depend on how we define "the common good". Years ago, in a discussion, someone pointed out to me that in moral philosophy you always eventually hit a wall of subjectivity. In the years since, I've never seen a convincing way to pierce that wall, which has left me skeptical that any simple principle will work in every situation.
I'd say utilitarianism starts off intuitive enough, but it has counter-intuitive implications. (Kill one patient to save five?) But so does deontology, as Richard covered in his answers. There might be value in running multiple checks on any particular question: a hedonistic utilitarian one, a preference utilitarian one, and a deontological one. But really, when they conflict, we're likely to go with the answer we wanted anyway.
Which, to me, implies that morality is a social technology, one we all create together and continuously recreate. That's unsatisfying. We'd all prefer that there be universal, unchanging principles, ideally ones that agree with our own preferences. Instead we have to figure out how to live together without anyone being able to prove their preferences are the one true ones that everyone must accept.
I'm skeptical of simple principles too, or grand laws that are meant to work in every situation. The common good is a tricky concept to articulate, but I'm also skeptical of attempts to avoid taking flourishing or happiness into account (à la Kant).
I guess to give a little defence of "vibes"-based morality, I think it's probably fair to say that moral philosophers are not necessarily the most moral people, or the best people to go to for help with our real-life ethical conundrums. In those cases, I think we are better off turning to wise elders with lots of good life experience and good judgment. That might be seen as a "vibes"-based approach, but I think in practice it's the best one. I'm not sure whether Richard would agree or disagree.
I personally prefer virtue ethics, but I think VE, consequentialism, and deontology may ultimately be pretty much reconcilable. I'm arguing for that a little in the post on virtue ethics I'm working on at the moment.
Looking forward to reading it!
Yeah, I agree that there's an important place for vibes-based ethics in everyday life, and that most "real life ethical conundrums" benefit more from life experience than from philosophical reflection (because they really depend upon subtle empirical uncertainties about how we can expect our choices to affect others, and indeed ourselves). I'm more thinking about academic/intellectual contexts, where we hope to make moral progress by improving upon current conventional vibes. For related discussion, see:
https://www.goodthoughts.blog/p/limiting-reason
(Also the post I'll be publishing tomorrow morning on 'Vibe bias'!)
Like you, Tina, I approach moral philosophy with reluctance. However, seeing as no one else has stepped up to comment on Professor Chappell's contribution, I'll share my thoughts. I have no prior commitment to deontology or utilitarianism, or for that matter virtue ethics or the so-called "Socratic ethics" recently proposed by Agnes Callard. But questions about morality need to be asked, and sometimes I poke that bear.
The distinction between what agents ought to do and what they ought to prefer seems a little cloudy to me. Our idea of what we should do is always based on what we prefer (or think we prefer). Deontology is simply an attempt to codify what we should do to realize what we prefer, using reason to discover fixed principles, rather than leaving it to the mood of the moment. One could even argue that utilitarianism is a special form of deontology, in which the rationally discovered rule is to maximize the common good.
What constitutes "the common good" is unfortunately left open. For example, euthanizing the old and the disabled, or the very stupid or the non-compliant, would arguably serve the common good, assuming these groups to be in the minority. Any utilitarian brave enough to espouse such a view would be taking for granted what we ought to do. They would be arguing instead about what we ought to prefer, or "hope to see happen," or about what constitutes a "better outcome." It's the idea of a "better outcome" that needs to be examined, and this is why I think utilitarianism just pushes the real argument down the road.
If I had an answer, I would offer it. I am skeptical, though, that systematic thought can wrestle it to the ground in the absence of "vibes," if by this we mean "affect." Setting aside our vibes about euthanizing the useless, there is no obstacle to such a programme that I can see. But for whatever reason, people won't set them aside. They seem somehow ethically important, despite their rational intractability, and so we are stuck with them. And if we are stuck with them, we will have to work with them.
My first thought is that someone could argue it's psychically abhorrent and fails to take into account the emotional, empathetic beings we are, but I don't know if this counts as consequentialism.
I have to admit, I largely rely on vibes. Whether that's right or wrong, I don't know, but systematic thought in ethics makes my head spin.
In practical cases there is often insufficient time for systematic thought.
That's so true. I think in a real life trolley dilemma you'd see a bunch of bystanders doing nothing, although pulling out a phone to record the scene seems to be fairly instinctual for some these days.
I can tell you from personal experience that in extreme emergency situations where it's up to you to act, something else takes over: some irrationally heroic or instinctual self that bypasses your normal, rational mind, sometimes to the point that you can't even remember what happened or what "you" did.
Each person's good is a part of the common good, so we should be happy for others to live happily, so long as they're not harming others (and can either finance themselves or have a caregiver who is happy to shoulder that responsibility). Further, as Tina points out in her reply, it would cause most people a lot of distress to see others badly mistreated! Such sympathetic impulses seem good for society and worth nurturing.
(I suspect that we spend a lot more on the elderly than is really justified, and society would do better to redirect more of those public funds to invest more in children and young adults instead. But that's very different from saying we should be killing them! And in any case, we surely spend a lot of money on worse priorities than taking care of each other. So it's not like this is at the top of the list of suboptimal spending.)
I could imagine a utilitarian arguing that capital punishment should be more widespread for repeat criminal offenders. But rehabilitation (to become a productive and trustworthy member of society again) would obviously be vastly preferable in any case where it's feasible.
But on your more general point, I agree that getting clearer on what constitutes "the good" is a very important open question.
Perhaps if we define our idea of the good in terms of materialistic satisfactions, we arrive at one definition of the common good, whereas if we emphasize non-materialistic satisfactions we arrive at another.
A separate aspect of Prof. Chappell's contribution that I wanted to take up is the idea that once a person has "ironed out all their internal inconsistencies, there's no way to rationally persuade them to change worldviews."
The idea that there are multiple nominally coherent worldviews makes sense to me, and since each worldview is internally consistent, there is no rational way out of any of them. My question is whether there are non-rational methods of changing a worldview, and whether any of them would be considered legitimate. Violence I would set aside, of course, but would art or literature, guided psychedelic experiences, or forms of social engineering such as educational programmes or "awareness" campaigns be the kinds of things we might consider as ways to change worldviews?
And behind this there is the question of whether we need to change someone's worldview at all. Is there a sense in which some worldviews are better than others?
Yeah, I imagine there are any number of experiences - including the everyday subconscious inclination to conform to views that your friends respect - that could lead to non-rational worldview revisions. It's an interesting question in general when we should be open to non-rational belief revisions, vs. when it would constitute objectionable manipulation. (I don't currently have a satisfying answer!)
Whether it matters: well, insofar as moral beliefs affect how we behave, how we vote, etc., they may end up having a significant impact on others' lives, for better or for worse.
Thanks. At the moment I'm collecting ideas about how to change "ways of seeing." I was thinking more on a one-to-one level, but your point about peer pressure is well-taken. It opens the question of whether one might attempt to influence a group, as opposed to an individual. This brings into play some questions on my mind about intersubjectivity, or the space between individuals we call "public."
I want to thank Richard for doing this interview. It's refreshing to talk about something other than consciousness for a change!
Hear, hear! I tend to wander among various vaguely related interests in a sort of random walk.
The struggle I have is that this seems like a false dichotomy: either you support the "common good" regardless of the means employed (hello, Thanos), or you take a "moral" stand whether or not it results in a repugnant outcome (bonjour, Jean Valjean).
Human life is messy and complicated and seldom lends itself to neat little boxes. What is the right thing to do? Most of the time, the best answer is "It depends."