
Utilitarians think you should always perform the action that produces the greatest total amount of benefit in the world. Usually, “benefit” is understood either in terms of pleasure/enjoyment (and the absence of pain), or in terms of desire-satisfaction (and absence of frustration). This sounds like a very nice view, and it is held by some nice and smart people (a surprising number, in fact, given the existence of the objections below).
What is wrong with this view?
1. Utilitarianism is counter-intuitive
When you first hear it, it sounds intuitive: How to decide what to do? Well, do the best thing. What’s best? The thing that produces the most benefit. What benefits us? Pleasure and/or desire-satisfaction.
But when you think about it more, it no longer seems so simple. Some famous examples in ethics:
a. Organ harvesting
Say you’re a surgeon. You have 5 patients who need organ transplants, plus 1 healthy patient who is compatible with the other 5. Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives?
b. Framing the innocent
You’re the sheriff in a town where people are upset about a recent crime. If no one is punished, there will be riots. You can’t find the real criminal. Should you frame an innocent person, causing him to be unjustly punished, thus preventing the greater harm that would be caused by the riots?
c. Deathbed promise
On his death-bed, your best friend (who didn’t make a will) got you to promise that you would make sure his fortune went to his son. You can do this by telling government officials that this was his dying wish. Should you lie and say that his dying wish was for his fortune to go to charity, since this will do more good?
d. Sports match
A sports match is being televised to a very large number of people. You’ve discovered that a person has somehow gotten caught in some machine used for broadcasting, which is torturing him. To release him requires interrupting the broadcast, which will decrease the entertainment of a very large number of people, thus overall decreasing the total pleasure in the universe. Should you leave the person there until the match is over?
e. Cookie
You have a tasty cookie that will produce harmless pleasure with no other effects. You can give it to either serial killer Ted Bundy, or the saintly Mother Teresa. Bundy enjoys cookies slightly more than Teresa. Should you therefore give it to Bundy?
f. Sadistic pleasure
There is a large number of Nazis who would enjoy seeing an innocent Jewish person tortured – so many that their total pleasure would be greater than the victim’s suffering. Should you torture an innocent Jewish person so you can give pleasure to all these Nazis?
g. The Professor and the Serial Killer

Consider two people, A and B. A is a professor who gives away 50% of his modest income to charity each year, thereby saving several lives each year. However, A is highly intelligent and could have chosen to be a rich lawyer (assume he would not have to do anything very bad to do this), in which case he could have donated an additional $100,000 to highly effective charities each year. According to GiveWell, this would save about another 50 lives a year.
B, on the other hand, is an incompetent, poor janitor who could not have earned any more money than he is earning. Due to his incompetence, he could not have given any more money to charity than he is giving. Also, B is a serial murderer who kills around 20 people every year for fun.
Which person is morally worse? According to utilitarianism, A is behaving vastly worse than B, because failing to save lives is just as wrong as actively killing, and B is only killing 20 people each year, while A is failing to save 50 people.
h. Excess altruism
John has a tasty cookie, which he can either eat or give to Sue. John knows that he likes cookies slightly more than Sue, so he would get slightly more pleasure out of it. Nevertheless, he altruistically gives the cookie to Sue. According to utilitarianism, this is immoral.
2. The Utilitarian’s Dilemma
A common reaction for utilitarians is to “bite the bullet” on each of these examples, i.e., embrace the counterintuitive consequences. Why isn’t this a good response?
It’s not good because utilitarianism, like all ethical theories, rests on ethical intuitions. The utilitarian faces a dilemma:
a) If you don’t accept ethical intuition as a source of justified belief, then you have no reason for thinking that enjoyment is better than suffering, that satisfying desires is better than frustrating them, that we should produce more good rather than less, or that we should care about anyone other than ourselves.
b) If you do accept ethical intuition, then at least prima facie, you should accept each of the above examples as counter-examples to utilitarianism. Since there are so many counter-examples, and the intuitions about these examples are strong and widespread, it’s hard to see how utilitarianism could be justified overall.
So either way, you shouldn’t believe utilitarianism.
Aside: Suppose you think there is some other way of gaining ethical knowledge. E.g., you endorse an ethical naturalist view that claims that ethical theories can be justified like scientific theories. Or you’re a cultural relativist who thinks we just need to observe the social conventions. Or you embrace one of the other confused derivations of ‘ought’ from ‘is’ that are out there. Then the problem is that all of these approaches support values other than simple utilitarianism, if they work at all. (See Ethical Intuitionism, https://www.amazon.com/gp/product/0230573746/, for what’s wrong with these approaches.)
3. A third lemma

The only way out is to argue that not all intuitions are equal: some are probative while others are not. The utilitarian needs to explain why the ethical intuition that we morally ought to care about others counts, but the intuitions about the examples in section 1 above don’t count.
Note: Why did I say “don’t count” rather than “count for less”? Because
i) There are so many strong and widespread intuitions that conflict with utilitarianism that if they even count for a little, you should probably reject utilitarianism overall.
ii) Also, there is an asymmetry between utilitarianism and other views: utilitarians think that enjoyment is good and that we have a moral reason to promote good (for any being that has interests). All other moral views agree with this. The difference is that utilitarians think that is the only moral reason we have, whereas other views think there are additional morally relevant considerations. Hence, to arrive at utilitarianism, you have to first embrace the intuitions common to all moral theories, then reject any other apparent moral reasons. Note the asymmetry: non-utilitarians do not reject utilitarian moral reasons. This is why the utilitarian must reject all but a very few intuitions.
So how might one justify this?
a. Maybe general, abstract intuitions are better than concrete intuitions about particular cases.
Problem: It’s not obvious that utilitarian intuitions are any more abstract or general than non-utilitarian intuitions. E.g., imagine a case of a very selfish person causing harm to others, and you’ll get the intuition that this is wrong. Or think about the Shallow Pond example, or the Trolley Problem. It’s about equally plausible to say that core utilitarian claims rest on intuitions about cases like those as it is to make a similar claim about deontology.
You can also represent deontology as resting on abstract, general intuitions, e.g., that individuals have rights, that we have a duty to keep promises, etc. It’s about equally plausible to say deontology rests on general intuitions like these as to say the same of utilitarianism.
b. Maybe non-utilitarian intuitions are approximations to utilitarian results in normal circumstances.
I’ve heard something like this suggestion. I guess (?) the idea is that maybe on some deep level, we’re really utilitarians, and we have the intuitions cited in section 1 because those sorts of intuitions usually result in maximizing utility, in normal circumstances (e.g., usually killing healthy patients lowers total utility). We just get confused when someone describes a weird case in which the thing that usually lowers utility would raise it.
Responses:
i) Why is it more plausible to say we are subconscious utilitarians who easily get confused than to say that we are subconscious deontologists who don’t get so easily confused?
ii) Also, why is this more plausible than the ethical egoist’s hypothesis that we are really egoists deep down, and that our altruistic intuitions result from the fact that helping other people usually, in normal circumstances, redounds to your own benefit? Then we just get confused when someone raises an unusual case in which the thing that would normally help you doesn’t?
c. Maybe there are specific problems with each of the above intuitions.
This is the only approach that I would accept as a reasonable defense of utilitarianism. I.e., you look at each of the cases from section 1, and for each case you do one of the following:
i) show a way in which that intuition leads to some sort of incoherence or paradox (see, e.g., https://philpapers.org/archive/HUEAPF.pdf);
ii) find specific evidence that the intuition is caused by some factor that we independently take to be unreliable at producing true beliefs (where this factor doesn’t cause standard utilitarian intuitions); or
iii) argue that the intuition is produced by some feature of the case that everyone agrees is morally irrelevant.
So that leaves some room open for a rational utilitarianism, but this would require a lot more work, so we don’t have time to investigate that approach here. But until someone successfully carries out that rather large project, we should default to deontology.
> “The difference is that utilitarians think that is the only moral reason we have, whereas other views think there are additional morally relevant considerations. Hence, to arrive at utilitarianism, you have to first embrace the intuitions common to all moral theories, then reject any other apparent moral reasons.”
If you take one step back from this, you get social constructivism. It is empirically obvious that societies create and enforce norms. Moral beliefs beyond social constructivism are questioned and contended, but everybody believes in cops.
P.S. If by chance you don’t believe in the local police, they have both a website and an office in your hometown. You can go look at them. Alternatively, if you upset other people badly enough they will come to you.
What is social constructivism? Wikipedia says: “Social constructivism is a sociological theory of knowledge according to which human development is socially situated and knowledge is constructed through interaction with others. Like social constructionism, social constructivism states that people work together to construct artifacts.”
Sounds like you mean that we each learn our individual moral intuitions and consciences from our experiences growing up, and those are influenced by our environments/cultures. This says a lot about how norms emerge and are adapted, not so much about whether utilitarianism is justified, whether any particular approach is correct, or in what sense moral claims can be mistaken. But perhaps you did not intend it to.
Correct. I view the question of whether utilitarianism is correct to be fundamentally unanswerable, whereas the definition of and penalties for murder in Florida can be determined by reading section 782 of the Florida statutes.
I’d define moral philosophy as the questions of what people should do and why they should do it. If you do not wish to face the penalties for crimes and lesser violations of social norms, the enforcement of such laws and norms should influence your choices. It may not necessarily be the determining factor depending on your other commitments.
If you “view the question of whether utilitarianism is correct to be fundamentally unanswerable,” and define “moral philosophy as the questions of what people should do and why they should do it,” are there other approaches to moral questions that are more determinate?
How should we decide what is a crime and what penalty should apply? Why is that easier to grapple with than utilitarianism?
I think you made two fatal mistakes in your response, but they may be a single error repeated:
1) “if you take this a step back…” DDTT
2) “police have an office…” I don’t think that’s what is meant by not believing in police.
I agree with the author that utilitarianism cannot be trusted because utilitarians are not omniscient and simply use imaginary logic to declare that whatever they think is “best” must be considered authoritative. They are not definitively wrong, but this leaves them in the same basket as any other philosophers, and therefore utilitarianism is inferior as a philosophy, since its reason for being is to be superior to other philosophies, to escape that basket to begin with.
How one measures “good” or “best” is something that must be considered individually for every actual instance, and so the assumption that morality/ethics can be reduced to quantitative and formulaic dictates is not simply misguided, it is misleading.
Timothy – I’m not saying that the police are justified, or even that “the police are justified” is a sentence with an intelligible meaning. I’m simply saying that other people (including police) exist, that they have expectations of us, and that failing to meet those expectations causes trouble (whether systematized or impromptu). It is wise to take this into account when deciding what to do.
As to your second point, I agree, but you’re describing the problem, not the solution. Where multiple people are involved, it is very common for them to decide differently, and resolving those differences is the problem that presents itself at such times.
David Berger – If a government doesn’t use force, then the law becomes a dead letter. That is usually worse than bad government. Consider the following:
http://www.reuters.com/news/picture/aerial-images-show-extent-of-looting-and-idUSRTXEDBIY
en.wikipedia.org/wiki/2021_South_African_unrest
Dave: I’d recommend a book called Legal Systems Very Different From Ours. Reading it, you’ll see that we’ve been struggling with your questions for a very long time, and we’ve found workable answers but never a perfect answer. So the only answer I have for you is “muddle through”. Welcome to humanity.
Hey Jay, I have my own answers to my questions. I was just curious whether you are particularly dismissive toward utilitarianism or toward moral theories generally. Your comment made it sound like you thought legal reasoning about punishment is easy and moral reasoning is hard, as if they were unrelated. I probably misunderstood.
Thanks for the recommendation, I’ve had that book on the shelf for a while, I should open it up.
You’re not really saying anything much, as I explained. First you appear to be using a reductio ad absurdum approach by saying “take this a step back”, but really you are just knowingly misapplying the moral reasoning presented. Then you misrepresent what is meant by the word “believe” in this context. Your reply failed to address either of my points and only exacerbated my confusion. Then you got even more confusing by leaving the topic of utilitarianism entirely, simply saying it is “wise” to obey the police.
Your reply to Berger was even worse. Did you really mean to claim that government force can be used to quell violence which appears to be a reaction to government force? I’m not picking a “side” in terms of the recent situation in South Africa, but it seems like you are voicing a reflexive preference for violent and oppressive government when you say, while referring to looting by the impoverished as if that were the worst possible form of civil unrest, that ‘not using force is worse than bad government’.
I’m interested in discussing utilitarianism. I consider the ease with which authoritarians justify their policies using utilitarian arguments (often if not always of dubious validity) to be a profound flaw in utilitarianism. If I’m reading you right, you consider it more of a benefit than a flaw, but that I am not interested in discussing. So if that is actually the case, that you “believe in police” meaning so long as they have power they should be obeyed, believe the rioting in South Africa would be easily solved by government crackdowns, and consider the danger of autocrats disingenuously using utilitarian explanations for bad government less of a problem than insufficiently violent government, let me know and I’ll spend no more time on this exchange.
I think the reply is rather easier. A state that is willing to slaughter its own citizens will harm more people than a state that is not. Therefore a state that is willing to kill its own people for convenience cannot be the one with the greatest utility.
I think utilitarianism can deal with most of those counterexamples by appealing to a notion of “what societal norm produces the most utility”. For example, in situations like the surgeon’s “kill one person to save five”, a norm where the surgeon kills the person results in a society where people are reluctant to visit the hospital for fear of being sacrificed, which results in lower overall utility.
Perhaps, but this post was about act-utilitarianism, not rule-utilitarianism. Rule-U is coming up later.
But even act utilitarianism is about a “net good”. And net good means you must consider all foreseeable consequences, not just the ones from the next few hours. If this became a practice, you would get foreseeable responses from a person’s friends and family, and I’d guess from those among the “saved” who didn’t sign up to benefit from cold-blooded murder. The certain result would be social and political discord and, in many cases, violence. Causing riots, revenge assaults, and destabilizing society would outweigh the moral benefit of altruistic murder.
I also think Dan C’s longer-term objections are apt criticism. If utilitarianism is about “net benefit” but you get to ignore inevitable but non-instant effects, this seems like a utilitarian strawman. No principle survives if you shackle it sufficiently to arbitrary ignorance and pathological shortsightedness.
I don’t think so.
What you’re getting at here is the difference between what is called “act utilitarianism” and “rule utilitarianism.” Act utilitarianism, roughly, is the idea that we should do whatever individual act maximizes utility. Rule utilitarianism attempts to get around some of the counterintuitive implications of act utilitarianism by saying we should behave according to rules that would maximize utility, even if following them is not utility-maximizing for a given individual act. Therefore, the surgeon should not kill the patient to save the five lives, because even though that would be utility-maximizing as an individual act, it would not maximize utility if it was adopted as a rule.
But this move still fails, as far as I can tell. Consider the following modification of the organ harvesting case:
A group of surgeons, being committed utilitarians, decide that organ harvesting is the right thing to do. However, they are sensitive to the idea that it might make people unwilling to visit the hospital and thus lower utility. Therefore, working closely together, they ensure they can engage in organ harvesting without getting caught. They only kill patients who are already organ donors, and only patients whose deaths they can pass off as natural, and they make sure all the records verify their stories. They take turns, spreading it out and doing it only rarely, so as not to attract attention. As a result, they are able to kill the occasional patient and use their organs to save five additional lives, while avoiding any of the downsides.
If utilitarianism is correct, you’d have to conclude that *if* this scenario could work as described, then it wouldn’t be wrong for the doctors to kill their patients. In fact, it would be wrong for the doctors to *not* kill their patients in those circumstances. And if utilitarianism were correct, a utilitarian should respond by saying something like “Yeah, if it could work out that way, then the doctors should kill their patients. Unfortunately, that probably wouldn’t work out in real life, but it would be a better world if we could make it happen like that.” But I’ve never heard a utilitarian say this. Their response is more like “Luckily, that wouldn’t work in the real world.” But it’s not clear why they see that as a lucky break, rather than an unfortunate limitation.
Counterexamples (a) and (b), at least, seem to falsely presuppose that utilitarians must endorse reckless short-sighted behaviour. As I explain in this old blog post:
“[Standard counterexamples] generally start by describing a harmful act, done for purpose of some greater immediate benefit, but that we would normally expect to have further bad effects in the long term (esp. the erosion of trust in vital social institutions). The case then stipulates that the immediate goal is indeed obtained, with none of the long-run consequences that we would expect. In other words, this typically disastrous act type happened, in this particular instance, to work out for the best. So, the argument goes, Consequentialism must endorse it, but doesn’t that typically-disastrous act type just seem clearly wrong? (The organ harvesting case is perhaps the paradigm in this style.)
To that objection, the appropriate response seems to me to be something like this: (1) You’ve described a morally reckless agent, who was almost certainly not warranted in thinking that their particular performance of a typically-disastrous act would avoid being disastrous. Consequentialists can certainly criticize that. (2) If we imagine that somehow the voice of God reassured the agent that no-one would ever find out, so no long-run harm would be done, then that changes matters. There’s a big difference between your typical case of “harvesting organs from the innocent” and the particular case of “harvesting organs from the innocent when you have 100% reliable testimony that this will save the most innocent lives on net, and have no unintended long-run consequences.” The salience of the harm done to the first innocent still makes it a bitter pill to swallow. But when one carefully reflects on the whole situation, vividly imagining the lives of the five innocents who would otherwise die, and cautioning oneself against any unjustifiable status-quo bias, then I ultimately find I have no trouble at all endorsing this particular action, in this very unusual situation.”
In general, the allegedly “intuitive” objections to utilitarianism mostly strike me as merely intuitive at first, before one reflects on them more carefully, in just the same way that you suggest the utilitarian principle only sounds intuitive “when you first hear it”.
“If we imagine that somehow the voice of God reassured the agent that no-one would ever find out, so no long-run harm would be done, then that changes matters.”
Does it? How do I know that the voice I think I’m hearing really is the voice of God? Isn’t it far more likely that I’ve made a mistake in believing that? And wouldn’t it be morally reckless for me to just assume that some voice I hear really is that reliable that I can bet other people’s lives on it?
To put it another way: you say “when you have 100% reliable testimony that this will save the most innocent lives on net”. How could there ever be testimony that was that reliable? How is that epistemic state even possible?
It seems to me that this is a fundamental flaw in any philosophy that depends on being able to accurately calculate consequences of arbitrary complexity, arbitrarily far into the future.
> “How could there ever be testimony that was that reliable?”
I’m just describing *what follows* from that hypothetical; it doesn’t matter whether it’s really possible or not. If it isn’t, that just makes my argument (against the putative counterexample) all the stronger. Indeed, this was G.E. Moore’s view: that on simple act utilitarian grounds, given appropriate epistemic humility, one would *never* be justified in breaking generally-reliable rules (e.g. against murder). So the objector’s claim — that utilitarianism counterintuitively licenses murder — would then be shown to be false.
> “any philosophy that depends on being able to accurately calculate consequences of arbitrary complexity”
No moral theory depends on our being able to do any such thing. Google “criterion of rightness vs decision procedure”.
It seems to me that you’re saying utilitarianism is of no practical use, because we can never have enough epistemic confidence in any utilitarian calculation that led to a conclusion at odds with generally accepted deontological rules.
It isn’t supposed to be useful. That’s what T Max has been trying to tell you:
> It isn’t about the reasoning we might use to explain why something is wrong (or right) but the nature of what “wrong” is to begin with.
It’s like the theory of evolution versus intelligent design. They don’t disagree about what kinds of animals there are; they disagree about how they came to exist.
> you’re saying utilitarianism is of no practical use, because we can never have enough epistemic confidence in any utilitarian calculation
Philosophy in general has no direct practical use. That’s nearly definitive: philosophy is intellectual consideration that has no practical use. It seems to me you’re trying to use philosophers as priests and mathematicians as nuns, providing divine dictates, but as algorithms rather than commandments. I use hyperbole in jest, of course, but my point (and yours) should be taken very seriously, and was exactly what I was trying to say. The nature of logic is that unless we have precise quantities for all the variables (physically measurable without need of judgements to assess or categorize), we are using induction rather than deduction at best, and the Problem of Induction will prevent us from EVER having epistemic confidence in ANYTHING. This is not the only fatal flaw in utilitarianism, but it is sufficient. As a philosophical premise, utilitarianism is important and profound, but the utility of a philosophical premise is to explore an intellectual paradigm, not provide practical guidance of personal behavior, or even social policy. A useful analogy is the (epistemically uncertain) distinction between science and engineering.
Certainly Mill, who gets credit for formulating utilitarianism, intended it to guide real-world morals, but he was hardly the first to recognize that more benefit is better than less benefit, and more harm is worse than less harm.
It was indeed my intention to point out that applying utilitarianism in the real world is an approach that is as rife with moral hazard as it is productive, and in fact that this applies to all efforts to mechanize human social matters or behavior as algorithmic. Effectively, this postmodern mode of ignoring the innate uncertainty of all moral statements (indeed, all linguistic statements) is reification, even when we try to hedge our bets by substituting statistical probabilities for epistemic uncertainty.
@Mark Young:
> It isn’t supposed to be useful.
@TMax:
> Philosophy in general has no direct practical use.
Tell that to all the philosophers that advocate that people should do particular things based on philosophical arguments (such as Peter Singer advocating that everyone should donate their income over a certain level to charity based on utilitarian arguments).
@TMax:
> As a philosophical premise, utilitarianism is important and profound, but the utility of a philosophical premise is to explore an intellectual paradigm, not provide practical guidance of personal behavior, or even social policy. A useful analogy is the (epistemically uncertain) distinction between science and engineering.
I’m afraid I don’t see the analogy here at all, at least not the way you are describing philosophy. Science is not just about “exploring an intellectual paradigm”. It builds models that make predictions and tests those predictions against experiments. Models whose predictions disagree with experiments are discarded. Engineering makes use of scientific models whose predictions have been confirmed in building things.
If philosophy means building “intellectual models” of things like ethics and testing them against practical experience, then I could see the analogy with science. But that’s not how you are describing philosophy, since you are separating “exploring an intellectual paradigm” from personal behavior and social policy. But that separation means you can’t *test* an intellectual paradigm, which makes it useless to even start exploring it in the first place. What good is the exploration if you’re never going to be able to test anything you come up with?
Or perhaps when you say…
> the Problem of Induction will prevent us from EVER having epistemic confidence in ANYTHING. This is not the only fatal flaw in utilitarianism, but it is sufficient.
…you are saying that we *can* test utilitarianism against experience, and it fails the test. But then it would be wrong to say that philosophy has no practical use, since there could be *other* philosophical paradigms that would *pass* the test of experience instead of failing it–and philosophers should stop exploring utilitarianism and go looking for other paradigms.
So I’m confused about what your actual position is.
“when you have 100% reliable testimony that this will save the most innocent lives on net, and have no unintended long-run consequences.”
Wouldn’t this just result in you not being able to make moral judgements at all? I do not mean to abuse a reductio ad absurdum, but imagine there is a simple dilemma of a serial killer being on trial. The serial killer actually committed every murder they were accused of. Do you think the jury should convict if they believe the serial killer committed the murders beyond a reasonable doubt? Or do you think the serial killer can only be convicted in the event that every juror gains 100% knowledge of every murder to determine that the defendant is in fact the killer?
The reason I ask is because it seems like a serial killer should be removed from society to save more lives, but in the cases where killers have been removed from society, I do not think there are many (if any) where every juror was 100% knowledgeable of the killer’s murders.
See my reply to Peter Donis. I’m not assuming that 100% reliable testimony is possible. When there’s significant uncertainty (as is always the case in real life), utilitarians can explain how NOT murdering is actually what has higher expected value.
If you have to introduce magic into your example, it acts more to condemn your point than to secure it. This is equivalent to saying “in a world that does not operate like ours,…” OK, but who cares about fictional worlds with made-up properties?
About the organ harvesting – you can just set up the example a bit differently, e.g., nobody else realizes he did it, he fakes some other cause of death, etc.
Utilitarianism is indeed a mess but so is deontology. Show me a categorical imperative and I will show you that it is actually a hypothetical imperative.
Also, all ought claims are derived from is claims. In the history of the world, no one has ever justified an ought claim with anything other than is claims. To say “you cannot derive ought from is” is the equivalent of saying you cannot derive ought, period.
I suppose you can claim that oughts are invalid or incoherent, but what you cannot say is that oughts are derived from or supported by something other than is claims.
Why can’t ought claims be derived from other ought claims?
Isn’t that just a punt? What supports the ought claim that supports the ought claim? Is it oughts all the way down? How is claiming the existence of an ought not a claim about what exists (is)?
All arguments eventually punt. Any assumption can be questioned.
https://www.google.com/search?q=munchausen+trilemma
Of course they do, but even a punt is an “is” statement. The final punt at the bottom of all moral arguments is “because that’s what I desire,” which is also a statement about what is. All justifications for ought claims are “is” declarations.
You have yet to show how you would justify an ought claim with anything other than “is” statements.
“I have an intuition” is an “is” statement.
It is the claim that your intuition exists.
“‘I have an intuition’ is an ‘is’ statement.”
Literally, yes. But you seem to misunderstand. That statement does not necessarily appear in the argument. Someone can assume “no one ought to torture babies” and refuse to argue for it or justify it in that or any other way. I agree if you think that is dogmatic and unconvincing, but they would not need to say “I have this intuition therefore it is true.” They can just say, “this is obvious, and if you require a justification for it, we really have nothing to discuss.”
Persons are not required to place total trust in their intuitions, though trying to find a good perspective from which to criticize them can be difficult. Not everyone is an intuitionist, and I’m not sure, but it doesn’t seem like it would necessarily violate intuitionism for someone to say, “I have seen enough evidence to convince me that my intuition is wrong in this particular case.” I should read MH’s book before I put my foot further into my mouth.
“I have an intuition, therefore it must be true” is not very convincing, is it? “If you share this intuition, you must share the conclusion of this argument,” works better. Is that a version of your thought? That still involves an argument with an ought premise and ought conclusion; the part we are adding is a separate argument for why someone might be persuaded by the first argument (a matter of fact, not “ought”), not part of the “ought” argument itself.
Your approach reminds me of Bertrand Russell’s trick, I’ve forgotten what it’s called, to deal with paradoxical sentences like “the present king of France is bald” when there is no king of France. Is it true or false? He said that to formalize such statements, we must add in an existence claim. So it would formalize to something like “something exists, and that thing is the king of France, and it is bald.” Then it is no longer paradoxical, just trivially false, since there is no king of France. I think someone made a good criticism which I’ve forgotten, and formalization still tosses in existential quantifiers in that circumstance, so it must’ve been a criticism of applying it in ordinary language. Sad how many interesting things I have forgotten.
“To say ‘you can not derive ought from is’ is the equivalent of saying you can not derive ought, period.”
This isn’t true. To say that you can’t derive an ought from an is just means you can’t make a valid argument with nothing but “is” statements as premises and have an “ought” statement as a conclusion. This doesn’t mean no “ought” statements can be derived. It just means that deriving them requires at least one “ought” statement in the premises as well.
So when you say “Also, all ought claims are derived from is claims. In the history of the world no one has ever justified an ought claim with anything other than is claims”, that, too, is false. In the history of the world, nobody has ever justified an “ought” claim solely from “is” claims. But many people have justified ought claims from other ought claims, or from combinations of is and ought claims.
To use a slightly modified version of Mike’s argument against factory farming as an example, the following argument is invalid:
P1: Factory farming causes extreme pain and suffering.
P2: Gustatory pleasure is a minor benefit.
C: Therefore, we ought not support factory farming for our gustatory pleasure.
That’s an invalid argument. The conclusion doesn’t follow from the premises. This argument, however, is valid:
P1: Factory farming causes extreme pain and suffering.
P2: Gustatory pleasure is a minor benefit.
P3: We ought not cause extreme harms for the sake of minor benefits.
C: Therefore, we ought not support factory farming for our gustatory pleasure.
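To make the validity point concrete, here is a minimal sketch in Lean 4 – my own illustration, not part of Mike’s argument, with the proposition names invented for the example. With P3 in place, C follows by simply applying P3 to P1 and P2; delete P3 and no rule of inference will take you from the two “is” premises to the “ought” conclusion.

```lean
-- Hypothetical formalization of the factory-farming argument (names invented).
-- P1, P2, and the conclusion are modeled as bare propositions;
-- p3 is the bridging "ought" premise that links them.
theorem factory_farming
    (ExtremeSuffering MinorBenefit OughtNotSupport : Prop)
    (p1 : ExtremeSuffering)   -- P1: factory farming causes extreme pain and suffering
    (p2 : MinorBenefit)       -- P2: gustatory pleasure is a minor benefit
    (p3 : ExtremeSuffering → MinorBenefit → OughtNotSupport)  -- P3: the "ought" premise
    : OughtNotSupport :=      -- C: we ought not support factory farming
  p3 p1 p2                    -- C is just P3 applied to P1 and P2
```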
What supports or justifies P3?
Why pick on P3? What justifies P1 & P2?
From a certain standpoint, all premises are conclusions from some other argument, with infinite regress. The alternative is to view the argument as a hypothetical – if I believe the premises and the logic is valid, I am stuck with the conclusion. If I don’t like the conclusion, I can find something wrong with a premise or the logic.
Agreed. All 3 Ps require justification with more “is” statements. It always has and always will come down to humans agreeing on what is. All oughts are derived from what is. There are no examples of oughts derived from anything other than is claims, because all oughts are presented as is claims.
“There are no examples of oughts derived from anything other than is claims.”
Can you provide an example of deriving an ought claim entirely from is claims? Or provide an example of a syllogism that contains nothing but “is” statements in the premises and validly derives an “ought” in the conclusion? If you are able to do this successfully, you would become the most famous philosopher in the world overnight.
Also there is no infinite regress. All ought statements are at bottom justified by “because that’s what I want.” Ergo egoism is true.
“I gave money to the homeless person because seeing their suffering was causing me to suffer and giving them money alleviated my suffering and made me feel good and I want to not suffer and I want to feel good.”
All ought claims are supported by the claimer’s desire to move away from suffering and towards satisfaction. To get an outcome you desire requires predicting what actions will result in that outcome. In other words it requires knowing what is. This is why all ought claims are justified by is claims.
Kevin DC: “Can you provide an example of deriving an ought claim entirely from is claims?”
Sure.
Is claim #1: I want to feel good rather than bad.
Is claim #2: Giving money to this homeless person will make me feel good.
Conclusion: Therefore I ought to give money to the homeless person.
All ought claims ever made are derived entirely from is claims. Can you give an example of one that isn’t?
“Is claim #1: I want to feel good rather than bad.
Is claim #2: Giving money to this homeless person will make me feel good.
Conclusion: Therefore I ought to give money to the homeless person.”
This argument you presented is not logically valid – the conclusion is not entailed by the premises you state. I’m not even sure why you think it is? Can you please identify the specific rule of inference you think leads from the premises you stated to the conclusion you reached, e.g., modus tollens, etc.?
@Timmy, you are either making some unstated and controversial assumptions or just missing the point. C above is derived from an ought. So your statement “all oughts are derived from what is” is trivially wrong. It would make more sense to say that all oughts are ultimately derived from is, but that is still controversial and needs some explanation. Maybe you mean that ultimately what we intuit about oughts is determined by sociological processes. But an opponent can grant that and still insist that has nothing to do with what is justified. HM, for example, is an intuitionist and moral realist, so he thinks that sociological processes could be wrong (e.g. Nazi Germany). Some self-referential propositions are made true by being widely believed, but ethical propositions don’t obviously fit in that category.
Dave and Kevin.
Declaring the “existence” of your ethical intuition is a claim about what IS.
Declaring the existence of a feeling you have is a claim about what IS.
In both cases above you are declaring that “My feelings and intuitions exist.”
Moreover, making any ought declaration is claiming the existence (is) of an imperative course of action. All declarations are claims about what exists (is).
Ergo, ought claims ARE is claims. Ought claims are declarations that there exists (is) an imperative course of action.
This is why if you are declaring an ought, every justification you give to support this claim will also be an “is” statement including other ought statements which are claims about the existence of an imperative action.
The notion that an ought claim is not itself a type of “is” claim is false. And this is why most people are utterly confused on the subject of morality.
Socrates nailed it a long long time ago.
Virtue = Wisdom
Ought = Is.
Know thyself. Know what exists. Know what is and you know what ought to be.
@Timmy
You have some interesting ideas, but I don’t think you have delivered the details.
“Moreover, Making any ought declaration is claiming the existence (is) of an imperative course of action. ”
I guess?
“All declarations are claims about what exists (is).”
I guess? “Unicorns have horns.”? Are ought statements declarations?
“Ergo, ought claims ARE is claims. Ought claims are declarations that there exists (is) an imperative course of action.”
Okay? Where must this imperative course of action be located? What sort of existence are we discussing, if existence is not limited to the observable universe? If I say you ought to do something, what do you think that means? That you believe something? That our culture believes something? That it would benefit you?
“This is why if you are declaring an ought, every justification you give to support this claim will also be an “is” statement including other ought statements which are claims about the existence of an imperative action.”
If there is no distinction between is and ought, that is where you should have started the discussion. I could perhaps almost understand you if you say ought is a subset of is. That seems like really contorted thinking, but maybe I don’t understand you quite yet.
“The notion that an ought claim is not itself a type of “is” claim is false. And this is why most people are utterly confused on the subject of morality.”
Now you sound like a moral realist. I am confused indeed.
“Know what is and you know what ought to be.”
This needs more explanation, for me at least.
Ethical intuition – I take that to be self-evident as a premise. Some normative statements are as self-evident as any positive statement. Now, perhaps you disagree. Maybe you find the claim “we ought not cause massive suffering for minor personal gain” totally unintuitive. If so, you can reject P3 and declare the argument unsound (although it would still be logically valid). But that doesn’t change the fact that validly deriving an “ought” in the conclusion still requires an “ought” in the premises. Without an ought in the premises, there can’t be an ought in the conclusion either. That’s just how rules of logical inference work. The conclusion of an argument can’t contain anything that isn’t also contained within the premises. If there is an “ought” in the conclusion, there must also be an “ought” premise; otherwise the argument is automatically invalid.
The declaration that you have an intuition is a claim about what “is.”
Your appeal to logic is a claim that logic exists (is) and that logic IS a thing that validates declarations.
All declarations are claims about what is. There is no part of your justification for ought that is not an “is” claim.
@Timmy
I think the disconnect is this: we are analyzing the argument logically; you are analyzing the conclusion linguistically. Yes, if I accept that I have an obligation, that will alter my behavior. But that does not address the form of argument that persuaded me that I have such an obligation. Yes, we can express ideas that have an ought in them as if they did not, as if they referred to a Platonic realm that we could hypothetically inspect to discover the truth of the matter. But “the sky is blue” and “the sky ought to be blue” express two different ideas. Yes, “it would be good for you to do x” sounds like it is saying “you ought to do x” without the “ought”. But that is quibbling. There is a valid distinction there whether or not the “ought” is explicit, a distinction that you seem to want to ignore.
Is the following argument illogical to you?
I need to go out and get the mail.
It’s raining.
I don’t want to get wet.
I have no raincoat but I do have an umbrella.
The umbrella is the only thing I have that will keep me dry.
I ought to take and use the umbrella when I go get the mail.
Those are all “is” claims.
And it all seems perfectly logical to me.
“I need to go out and get the mail.
It’s raining.
I don’t want to get wet.
I have no raincoat but I do have an umbrella.
The umbrella is the only thing I have that will keep me dry.
I ought to take and use the umbrella when I go get the mail.”
The conclusion does not follow from the premises unless you add another premise, such as “If I don’t want to get wet, I ought to take steps not to get wet.” Since this is a truism, it may seem unnecessary, but if you analyze the argument formally, it is incomplete without that premise connecting the other premises to the conclusion (see the sketch below).
Perhaps we could generalize that you always ought to do what seems likely to accomplish what you want. Non-egoists would disagree. “Want” does not always imply “ought”, and “ought” does not always imply “want.”
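The incompleteness can also be shown formally, in the same hypothetical Lean 4 style as the factory-farming sketch above (premise names invented and compressed). Read each premise as a bare proposition; making them all true while the “ought” conclusion is false satisfies every premise and falsifies the conclusion, so no valid derivation exists until the bridging premise is added.

```lean
-- Countermodel sketch: without a bridging "ought" premise, the umbrella
-- argument's form is invalid. Interpret each "is" premise as True and the
-- "ought" conclusion as False: all premises hold, the conclusion fails.
example : ¬ (∀ (NeedMail Raining WantDry HaveUmbrella OughtTakeUmbrella : Prop),
    NeedMail → Raining → WantDry → HaveUmbrella → OughtTakeUmbrella) := by
  intro h
  exact h True True True True False True.intro True.intro True.intro True.intro
```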
If I want to accomplish X I ought to perform an action that accomplishes X.
“Non-egoists would disagree.”
Well, I’d say that non-egoists are mistaken about what drives the moral actions of humans. What appear to be acts of altruism in humans are better described as acts of wise self-interest. We commit acts of kindness and helping because they feel good and they raise our social status, which raises our likelihood of being well fed and well mated, and that’s why evolution made it feel both good and right to be caring and helpful. Because it will get us fed and laid.
The “moral actions” of humans are not “obligations to others”; they are wise and conditioned self-interest. To see them as obligations to others is to be confused about the reality of our moral nature, I think. Such confusion causes hopelessly dysfunctional moral theories like utilitarianism and deontology.
@Timmy
The “wise” in “wise self-interest” does most of the work, but it leaves a lot unstated. If you are trying to persuade me that we always ought to do what we want, and we always want to do what we ought to do, you need to be more explicit. Which way is causality going? Somehow your exposition so far has not added up to an argument that I understand, and if I can’t understand it, it is unlikely to persuade me. I don’t think I am hostile to your conclusion, I just don’t see how you are getting there clearly.
You seem to distinguish sharply between “oughts” and obligations. That also could benefit from some clarification.
“The declaration that you have an intuition is a claim about what ‘is.’”
This is trivially false. To say that I have an ethical intuition that X is not to say that X *is*; it is only to say that X ought to be.
“If I want to accomplish X I ought to perform an action that accomplishes X.”
This, too, is obviously false. If I want to commit murder, it doesn’t follow from this that I *ought* to go about committing murder. Ted Bundy wanted to commit rape, but it obviously doesn’t follow from this that therefore Ted Bundy *ought* to have raped people. Etc.
But even if it was true that “I want to X, therefore I ought to perform an action that accomplishes X”, in saying this, you’re explicitly contradicting everything else you’ve stated so far. If “I want X” entails or implies “I ought X,” then your syllogisms where you’re claiming to derive “ought” conclusions entirely from “is” statements fail by your own definition. You include premises like “I want to be happy” and “I don’t want to get wet” in the arguments where you claim to derive an “ought” in the conclusion without any “ought” statements in the premises. And, since according to you, “want” statements directly imply “ought” statements, that means by having premises about what you want, you are, in fact, including implicit “ought” in the premises of your arguments, ergo your claim that you’re deriving an “ought” conclusion without any “ought” in the premises is trivially false.
“Know what is and you know what ought to be.”
This, too, is just obviously false. It *is* the case that people regularly commit murder, rape, and genocide. According to what you said, since we know this *is*, we therefore know it *ought* to be. Ergo, you must believe people *ought* to commit murder, rape, and genocide. It *is* the case that children die slow and painful deaths from bone cancer, but this doesn’t entail that it ought be the case. Etc. Knowing what is, only tells you what is. It doesn’t tell you what ought to be. Knowing that “it is the case that I want X” in no way entails “I ought X,” which is why “It is the case that John Wayne Gacy wanted to rape and murder” does not entail “John Wayne Gacy ought to rape and murder.”
@Dave
“If you are trying to persuade me that we always ought to do what we want”
No, it’s not that we “ought” always to do what we want; it’s that the way humans function is that we always do what we believe will make us feel good and suffer less. So we take the action that we believe will result in us feeling good and not feeling bad. That is how humans work. That is not how humans ought to work. That’s how they work. You have no choice.
You do not give money to the homeless person on the street because you intuit that it is “the right thing to do.” Your intuition does not tell you that it is “right”; your intuition tells you that you will feel good immediately when you do it, and that you will also raise your status in the tribe by doing it, which will make you feel even better. And so you give the money because of your belief that the world is such that performing that action will make you feel good, and you want to feel good, and you are driven to take actions that will make you feel good.
This is the form of all moral actions by humans. This fact about the way the world is is illuminated both by science and by the intuition of people who take the time to notice their own human nature. To not understand this is to not “know thyself,” and there’s a good reason why Socrates and the oracle held up “know thyself” as the prime form of wisdom.
Once we know that humans operate in this way, that we will always take actions that we believe will make us feel good, or stop feeling bad (same thing), then we can properly understand what people mean when they make ought claims. In every case, a person making an ought claim is telling you what they want. If they say “you ought to not kill babies” they are telling you that they don’t want anyone killing babies because that will make them feel bad. If you think that when people make ought claims they are telling you anything other than what they want, you are mistaken and confused about the causes of human moral actions and proclamations. You are guilty of not knowing thyself.
Once you understand that all ought claims are nothing more than expressions of what the ought claimer wants, understanding human morality becomes easy. You will no longer be confused by Hume’s observation that all people always justify their ought claims with is claims. Because they are telling you what they want, and once you know what you want, the only thing that can tell you what action to take to get it is your belief about what is.
I invite you to present any moral ought claim and I will demonstrate with ease how it fits into this theory, which is also Socrates’ theory that wisdom = virtue.
And no, the burn of exercising hard is not a counterexample. You go through the burn to get to the euphoria post-workout. The burn was necessary to achieve the euphoria. You would not put yourself through the burn if not for the carrot of the ensuing euphoria.
@ KevinDC
Do you abstain from killing/raping/genociding because you believe it is morally wrong? That’s not why I don’t do those things. I don’t do those things for the same reason 99% of all humans don’t do those things. Because we have no desire to do those things. We think that doing those things will make us feel bad not good and we don’t want to feel bad we want to feel good.
But there is a very small percentage of people who are clinically psychopathic and they DO engage in murder/rape/genocide because they want to and they believe it will make them feel good.
Now you and I can make laws that prohibit and punish them for committing those acts to protect ourselves and others from them, but what does it mean to tell a psychopath that it is “wrong” to murder/rape/genocide? What good does saying “it’s wrong” do? Will it convince them not to do it? You already know that’s not true.
So all you can do is state that you don’t like murder/rape/genocide and you can get together with the other 99% of us who do not like those things and we can make laws to prevent them as best we can but that’s it. The idea that stating it is “morally wrong” for someone else to commit murder is utterly useless considering 99% of the people don’t want to do those things anyway and the 1% who do want to do those things won’t agree with you that it is “wrong.”
@Timmy
Now I think I understand your hypothesis. You have argued that it is a plausible interpretation in every case. But you have not made much effort to show that it is the best interpretation.
Also, you have made a claim about what is happening when we make ought claims, but you have not said much about why particular persons make specific ought claims and not others. Presumably, you think this is because each thinks their claim is more consistent with what they want. This does not really simplify the situation, especially if the sorts of reasons people actually use to convince each other have anything to do with what will make them happy. We end up in the same arguments, but with different motivations.
Your remarks bring Kant and Haidt to mind. Kant would say you are describing prudence and saying nothing about morality – that you are actually claiming persons ignore morality, that it doesn’t exist. He would want to argue against you.
A cynical take on Haidt is that he thinks moral talk is all about public relations, with persons trying to convince others to tolerate what they have done or wish to do, and to protect or enhance their status. Deeper psychological processes actually determine what we do – our conscious thoughts and words about that are mostly rationalizations. So he is sort of agreeing with you. Your position also reminds me of the emotivists, who thought of moral statements as not really making truth claims, but expressing the idea “yay team!” in a complicated way.
But even taking your idea as given doesn’t tell us how our subconscious arrives at its decisions, how often it is right, or what sorts of principles would make us happy if represented in our laws and social norms. Okay, we want to be happy. What will make us happy?
Actually, I don’t see how my moral beliefs affect my happiness directly. If we watered down your thesis a bit, so that it was just a variant of consequentialism advocating attitudes that make the world more pleasant, it would go from implausible to tautological. But I’m pretty sure that isn’t what you mean.
“Once we know that humans operate in this way, that we will always take actions that we believe will make us feel good, or stop feeling bad (same thing), then we can properly understand what people mean when they make ought claims. In every case, a person making an ought claim is telling you what they want.”
If so, it is quite indirect. And you’ve said nothing to exclude other possibilities.
“Once you understand that all ought claims are nothing more than expressions of what the ought claimer wants, understanding human morality becomes easy.”
Really? What I want depends on a lot of non-obvious things. It’s not clear that it is a simpler question, even assuming it is the important question.
“You will no longer be confused by Hume’s observation that all people always justify their ought claims with is claims.”
It seems pretty clear what he meant.
“I invite you to present any moral ought claim and I will demonstrate with ease how it fits into this theory which is also Socrates theory that wisdom = virtue.”
I have no doubt you can apply your thesis to any moral claim. Can you show evidence that it explains all circumstances better than its rivals? Or that it is even empirically distinct from them? What would I observe in my experience or in the behavior of others that can only be explained by your thesis?
“And no, the burn of exercising hard is not a counter factual. You go through the burn to get to the euphoria post workout. The burn was necessary to achieve the euphoria. You would not put yourself through the burn if not for the carrot of the ensuing euphoria.”
That’s one way of thinking of it. It has never worked well for me, if at all. But that is a different topic.
@Dave,
“Can you show evidence that it explains all circumstances better than its rivals?”
I have already done so by presenting the example of giving a homeless person money not because you intuit that it is “the right thing to do” but because you believe it will make you feel good both in the moment and ongoing. This is a better explanation than the theory that you intuit that it is “the right thing to do,” whatever that means.
And I invite you to quarrel with that claim and/or present an example of a different moral situation where my theory does not provide a superior explanation to non-egoism.
“but you have not said much about why particular persons make specific ought claims and not others.”
Because they believe different things about what is. That is another challenge I can throw at you. Show me any moral disagreement and I will show you that it is actually a disagreement about what is.
BTW I am loving this debate. I very much appreciate your well-thought-out engagement and this forum that Professor Huemer has provided. And I do apologize for coming off so hyper cocksure. It’s a strategy that I have discovered works great if you want to find out very quickly that you are wrong about something. People make sure to let me know when I am wrong and will go a long way to prove it. 😉
@Timmy
“I have already [shown evidence that it explains all circumstances better than its rivals] by presenting the example of giving a homeless person money”
That says what your hypothesis is, not why it is more believable than other explanations people might give for donating to the homeless.
“And I invite you to quarrel with that claim and/or present an example of a different moral situation where my theory does not provide a superior explanation to non-egoism.”
No, as I said before, it seems to apply generally, but so do its rivals. I could claim that a devil makes me do things. That would also apply very generally, but would not give much of a reason for believing it.
“Show me any moral disagreement and I will show you that it is actually a disagreement about what is.”
Okay. Kant was very explicit in denying that morality was about consequences of actions. I guess you can just say he is deluded, and his elaborate reasons are just rationalizations. But why should I agree with either of you? The fact that it is possible to consider that possibility is not evidence for it being true.
Thanks for kind words. I have also enjoyed our chat.
@Dave
“But why should I agree with either of you?”
Well, Kant’s moral theory says you should tell the Nazis that Anne Frank is in the attic if they ask. It also says that if we learned that a meteor was going to destroy all life on earth in two weeks’ time, the most important thing for us to do with our final two weeks would be to get busy killing everyone on death row. These are just a few of the obvious problems with Kant’s theory.
And Professor Huemer did a good job of pointing out the dysfunctionality of utilitarianism.
I described pretty well how my brand of egoism works and why it is a better explanation of the moral actions and declarations of humans. I gave some concrete examples of it, and I invited you to point out any flaws, but I don’t see where you have done so. You are just asking why you should believe me over Kant, and I’m like: because you haven’t been able to point out any flaws or false statements in what I am saying, and Kant wants you to snitch on Anne Frank.
@Timmy
“ Well Kant’s moral theory says you should tell the Nazis that Anne Frank is in the attic if they ask.”
Well, your theory says I should/will tell them, if that is what I want. Is there any conclusion it is incapable of endorsing? I guess it would never allow us to endorse an action we don’t want.
“ It also says that if we learned that a meteor was going to destroy all life on earth in two weeks time that the most important thing for us to do with our final 2 weeks would be to get busy killing everyone on death row. “
I’m skeptical of that interpretation. I guess it wouldn’t be Kant’s craziest notion.
“These are just a few of the obvious problems with Kant’s theory.”
I envy you if anything Kant says seems obvious to you. These examples show that Kant sometimes conflicts with our intuitions. Are our intuitions always correct? I guess in your approach they are always expressions of what we want, though perhaps we might be mistaken about the causal linkage between what we advocate and the actual consequences of following our own suggestions.
Does your theory actually contradict any of the content of the other theories, or just make claims about our motivations when advocating for them?
“I described pretty well how my brand of egoism works”
Yes.
“ and why it is a better explanation of the moral actions and declarations of humans”
But the rival theories you seem to want me to reject are more about what we should do and how we know what we should do, rather than why we would feel motivated to talk about it or do it. I guess Kant does talk about motivation here and there, since he wants us to not be motivated by our inclinations.
“ and I gave some concrete examples of it and I invited you to point out any flaws but I don’t see where you have done so.”
Are you saying it is a tautology?
@Dave
“Well, your theory says I should/will tell them, if that is what I want.”
My theory says that you should definitely not tell them, because it will make you feel bad, and you do not want to feel bad; you want to feel good. You would only tell them if you believed it would make you feel good to see them kill her. If you believed that, there are two possibilities.
1. You are mistaken that it will make you feel good. It actually ends up making you feel very bad. Your mistake was getting the causality (the “is”) wrong. If you had gotten the “is” right and predicted correctly that your actions would result in making you feel bad, you would not have told them where she was. Getting the “is” wrong caused you to get the ought wrong.
2. The other possibility is that you were correct and it did make you feel good in the full light of day, which would make you a clinical psychopath. Psychopaths exist in the human population, but only at a rate of around 2%. The other 98% of us humans feel really bad about innocent young girls getting killed. And for the 2% of psychopaths who would feel good about it, what is the usefulness of calling their actions morally wrong? They don’t agree with us and never will. They are pathological. The only thing you can do is say that you don’t like it, and we can make laws against killing, but it has no meaning to tell a psychopath they ought not do what their genetic disease causes them to do.
If you are one of the 98% of non-psychopathic humans in the world, the correct choice is to not tell on Anne Frank, because it will make you feel bad in the full light of day. And if you are a psychopath, what I say about your morality is meaningless.
BTW, this is not my theory. Nothing I am saying is new to philosophy. It’s OG philosophy. It’s Socrates’ “wisdom = virtue.” It’s the “practical wisdom” side of virtue ethics. It is reasoned and rational, which would make Kant happy. It squares with Hume’s “reason is the slave of the passions.” It is naturalism and egoism and eudaimonism and pragmatism, and I could go on. But it’s definitely not deontology or utilitarianism.
I always thought that the problem with utilitarianism is the difficulty of coming up with a utilometer that lets us compare persons’ utility. This seems too technical for most persons, though, and it is true that in some circumstances (e.g. tort lawsuits) we are more or less forced to make such calculations.
“Utilitarianismish” calculations can be useful in high-information environments, e.g. budgeting for a hospital, although changing some of the assumptions won’t make the previous answer “wrong.” While budgets don’t directly decide who should actively be killed, they in effect decide which marginal patients will die. Still, no amount of prescience about risk profiles and costs of treatment seems like a good reason to override basic social norms.
Bernard Gert’s synthesis is a sort of reverse utilitarianism. He advocates a set of general rules, but with a meta-rule that any time the rules seem to lead to a crazy outcome, an exception can be arranged, or a violation can be forgiven, if done quite publicly. Taken to the extreme, this begins to resemble the common law system of precedent.
Another sort of meta-level analysis might examine where utilitarian calculations diverge in their conclusions. That is, where is the dividing line between assumptions about costs and benefits that say “go ahead” and those that say “nope”?
But I am a bit skeptical that actual utilitarians ever get to the point of doing the math. Long-term effects of allowing violations of basic norms seem really hard to predict. God-like beings could perhaps succeed, but they would not really need interpersonal morality if they were infallible or invulnerable. Mere humans need some good cheap heuristics, and a way to get over it when things don’t work out.
Another meta point – utilitarianism doesn’t actually tell us what is good/bad. We have to have a more fundamental idea about that in order to estimate the variables we will use in the cost/benefit calculations. E.g., should we assume that saving two lives is better than saving one? What effects will the two (retirees?) have on future utility, compared to the one (brilliant young researcher?)? Maybe you think the answer is obvious, but utilitarianism doesn’t actually tell us. It lets us make our own decisions, and plug them into a formula. But what principles do we use to derive our answers?
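One way to make that last point vivid is to write the utilitarian rule down as a bare formula (my notation, just a sketch, not anything from the post):

```latex
% The utilitarian decision rule: pick the action a* that maximizes
% the sum of every person's utility u_i(a). The u_i are inputs the
% formula takes as given; it says nothing about how to define or
% estimate them.
\[
  a^{*} = \arg\max_{a \in A} \; \sum_{i=1}^{n} u_i(a)
\]
```

All of the contentful questions (what counts as utility, how to estimate it, how to compare it across persons) live inside the u_i, which the formula simply assumes we already have.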
Can we get a follow-up post on what’s wrong with Rule Utilitarianism? I find it much more plausible and intellectually tempting, and these objections feel less forceful when I take that perspective. What would you say is the tour-de-force case against rule utilitarianism?
Also, I would like a discussion of the amoralism of ethical skeptics who claim not to know what intrinsic moral wrongness even is or the relevance of intuitions in ethics, and a take-down of amoral egoistic theories which say “morality is a useful fiction, social poetry, that gets people to behave well/a practical agreement or contract, and it’s enough to just care about maximizing our own experience to encourage mass social cooperation”.
The popular commentator known as Destiny on Twitch has advocated this worldview for years now, and I constantly meet people in the intellectual corners of the internet who think it’s bulletproof and an excellent foundation for liberalism and progressivism. (He identifies as a weird mix of “amoral egoist” and “rule utilitarian” and a “social contract theorist,” but he is badly misusing these terms.)
What he really means is the view I described above: morality is a fiction that we appeal to in order to create a useful mass agreement that coordinates people’s self-interest. The reason it works is that, on purely empirical terms, it so happens that our self-interest generally overlaps sufficiently among neurologically normal people to allow for social order and prosperity, while the minority of neurologically abnormal antisocial people are contained in prisons through brute force. If you disagree, he might tell you to read Pinker’s books, which in his view show not that moral realism is true, but that the interchangeability of perspectives and economic mutualism are built into the logic of interactions, so that over time (thousands of years, admittedly) it will just be superior on purely self-interested terms for huge societies to cooperate for personal gain. He thinks human history is inherently, ultimately on a path of least resistance to an incentive structure where people behave in the pacified way we see in contemporary society. See his “My Moral Breakdown” video here (the first few minutes summarize the rest):
https://youtu.be/N-eTcjGsK08
Video looks interesting. Bernard Gert took the opposite view. We need simple heuristics in order for society to function, but that means exceptions will always arise (or different interpretations of a general rule). But then if someone can make a public case for making an exception to the rule in a specific circumstance, that can be allowed. The assumption that we need logically bulletproof rules is just too epistemically demanding; the history of philosophy is littered with attempts that all have anomalies and counterexamples. Instead we need something like common law, which uses experience to accumulate wisdom through precedent. That assumes that we know justice in a very general sense, but hard cases will always arise and require interpretation. The flaw in historical common law was that it has been difficult to correct errors, even when there is a pretty good consensus that they were errors. I think this could be addressed, if people were really interested in doing so. But legislation has colonized law, and common law is in low regard.
I think the comment system deleted my reply because I included a link. How frustrating.
It doesn’t delete them. It holds them for moderation.
Can we get a follow-up on what’s wrong with Rule Utilitarianism? Also, what about amoral egoism (“morality is a useful fiction for coordinating mass social agreement”)?
The way you stated it, amoral egoism is a factual claim that does not contradict any ethical claim. (Well, except for the “fiction” part. Obviously, morality is beneficial. That has no relevance for whether it is fiction or something else.)
Rule-U is coming up, but it gets a much more friendly treatment.
I am very happy to see a post about utilitarianism critiqued from the intuitionist perspective. I am sympathetic to your arguments. I spend a lot of time around the rationalist community commenting on blogs like Scott Alexander’s and replying to utilitarians. I find this problem very fascinating and am thinking about it a lot because I am butting heads with them on this issue.
Recently Scott Alexander did an ask-me-anything in which someone asked him how he felt about utilitarianism. He said he was in favor (as I already knew) but believed it had to be grounded in intuition (which I already knew too). I accept that we have intuitions about suffering and pleasure, but utilitarians reject other intuitions that tell against the theory, usually as a categorically bad type of evidence. And if I ask for alternative derivations, they are unconvincing because they do not bridge the is-ought gap. Scott is coherent in treating intuitions as grounding, but he uses only two intuitions and then seems to cast them aside like Wittgenstein’s ladder, never to use intuitions again, even where they would tell against utilitarianism. Here is my very relevant comment from about 2 days ago:
“Your consequentialist FAQ grounds utilitarianism on two intuitions, namely Morality Should Live in the World and Other People Should Have Non-Zero Value. I also think that you have to ground morality on a non-inferential foundation of intuitions, but I believe in certain circumstances other strong intuitions are more important than utility maximization. Do you think other intuitions ever override utilitarianism?
People have a wide range of intuitions such as natural rights, parental duty, blameworthiness, appropriate level of obligation, non-fungibility of persons, etc. If we accept that intuitive evidence (IE) is actually evidence, we should count other intuitions as evidence against our utilitarian hypothesis (UH) rather than just flaws in human perception. If P(UH|IE_1) > P(UH), then it should be the case that IE_2, IE_3, and IE_4, if they are counter to utilitarianism, count as evidence against it: P(UH|IE_2, IE_3, IE_4) < P(UH).
I find some intuitions to be very, very strong. I think that the repugnant conclusion is correct. I think that animal suffering is a utility monster that probably outweighs human utility by an order of magnitude at least. And I think that if consequences are all that matter and commission and omission are identical, then NOT donating more of your surplus income would have to be morally equivalent to murdering people. Not saving a starving child in South Sudan because you want to spend money on a new TV would be like, ceteris paribus, killing the child in South Sudan to get money for the TV, because consequences are all that matter; morality lives in the world.
Putting aside the logistics and technical details such as "well, it might not be as easy to help animals" or "well, killing someone might lead to the degeneration of non-violence norms in South Sudan" etc., I think that utilitarianism just feels deeply unintuitive, and that counts as evidence against the theory."
—-
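To make the updating claim in that quoted comment concrete, here is a minimal numeric sketch of the Bayesian point. Every prior and likelihood below is invented purely for illustration:

```python
# Minimal Bayesian sketch: if intuitions are evidence, they must be
# able to disconfirm as well as confirm. UH = "utilitarianism is true";
# each IE_k is an observed intuition. All numbers are made up.

def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

p = 0.5  # invented prior for UH

# IE_1: a pro-utilitarian intuition, assumed likelier if UH is true.
p = update(p, 0.9, 0.6)
print(f"after IE_1: P(UH) = {p:.2f}")  # rises above the prior

# IE_2..IE_4: anti-utilitarian intuitions (rights, desert,
# non-fungibility), each assumed less likely if UH is true.
for _ in range(3):
    p = update(p, 0.3, 0.7)
print(f"after IE_2..IE_4: P(UH) = {p:.2f}")  # falls below the prior
```

The direction of each update is the whole point: treating the pro-utilitarian intuition as confirming while refusing to let the anti-utilitarian intuitions disconfirm would be incoherent.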
I find that utilitarians often do not bite the bullet and instead raise technical objections that usually rest on maintaining societal norms. But I think these are ex post facto justifications. I think they feel the intuition that omission and commission are not morally equivalent, but they say something like "well, establishing a norm of permitting murder has second-order and third-order effects." Basically, the arguments get reduced to a point where you can't say for certain, and the utilitarian ends up behaving almost exactly as they would have before. Like effective altruists giving only 10% of income when it should be more like 95%: they respond with something like "this is a good equilibrium." I hate to psychologize them, but I think they feel guilty because they know the implication of their utilitarian theory is a lot of giving, and they have to deal with that somehow.
I am reminded of a quote you gave in Ethical Intuitionism, pp. 250-251:
"I am not sure, now, why I thought these things. If someone had asked me, 'Do you think that every belief requires an infinite series of arguments in order to be justified?' I would surely have said no. So what was 'arbitrary' about Ross' ethical views could not have been the mere fact that he advanced some claims without argument. I think I would have been satisfied, or at least less inclined to make the 'arbitrariness' objection, if Ross had stated a single ethical principle from which all other ethical principles could be logically derived-as, for example, the utilitarians do. No one makes the charge of arbitrariness against utilitarians. But this makes little sense-if a single foundational ethical principle may be non-arbitrary, why not two? Or six? Or a hundred?"
Which I remember really sticking out to me as a good point. I think when something is one principle it seems more true.
Thanks, Parrhesia. I guess most utilitarians would say that they should give more to charity, but they are selfish and morally imperfect. Some might say that there are “prudential reasons” to keep the money, but “moral reasons” to give it away, and perhaps the two kinds of reasons are incommensurable.
However, these same utilitarians would not for a second consider committing a (positive) murder in exchange for $20,000, even though they are constantly failing to save lives at a cost of $2000 each. So a simple appeal to selfishness doesn’t seem to explain it. It seems that they, like the rest of us, have a doing/allowing distinction intuition.
*Picture of a somber-looking man*
“A man who was devoted to maximizing happiness”
lol
That, btw, was a picture of John Stuart Mill, the best-known utilitarian. The other two pictures are two other famous utilitarians, Jeremy Bentham and Peter Singer. I guess I could have put that info in the captions.
Wonderful article, as is typical.
I guess I’m not seeing what’s compelling here. You admit that many people hear these arguments and, nevertheless, still judge that utilitarianism is the best overall fit for their intuitions about morality and theoretical simplicity. At the end of the day, if you believe in the philosophical process for discovering moral truth at all, it’s the object-level interplay of the various arguments for and against a theory of morality which everyone has to weigh, and if those object-level arguments don’t convince you, then why would you be compelled by this meta-level claim?
I mean, to the extent I find moral realism appealing, I look at all the arguments you’ve listed and the many other arguments against versions of deontic theories, and I judge that utilitarianism is the far, far better fit for my moral and theoretical intuitions (e.g., the intuition that true theories tend to be simple). It feels like you think this argument should compel some people not already compelled by those object-level considerations, but I’m not seeing why.
Also, your reply that “it’s not clear that utilitarianism reflects simpler or more abstract intuitions” is far from compelling. That hardly rules out that line of argument. As you admit, the game is about trying to satisfy our moral intuitions, and those intuitions often come with feelings of strength; when I weigh those, I come to a different conclusion. If you want to convince those who come to such conclusions, I think you’ll need to advance further object-level arguments.
Are you trying to address MH’s argument, make a new argument, or just sort of provide literary criticism?
I think the examples in the OP would be convincing for many people who had not before thought of them. Of course, if you already heard all of them and rejected them, then this post probably wouldn’t convince you.
If you place a lot of weight on simplicity of moral theories, you might profit from my article “When Is Parsimony a Virtue?”, Philosophical Quarterly 59 (2009): 216-36. The article (I claim) explains why simplicity is important in scientific theorizing, but points out that those reasons don’t generally apply to philosophical theories. So most philosophical appeals to simplicity are misguided.
If parsimony is more important than intuitive plausibility in the face of counterintuitive hypothetical implications (counterexamples), then why not adopt an even simpler and less pluralistic theory than utilitarianism? After all, utilitarianism is itself pluralistic: it folds positive and negative emotion, like pleasure and pain, into its set of considerations, and these are distinct things; no one would say a state of consciousness with zero pleasure is a state of great pain, or vice versa. Why not, say, a theory where the moral good is identified with increasing the rightward spin of as many objects as possible?
You said:
“Utilitarians think you should always perform the action that produces the greatest total amount of benefit in the world”
But for most relevant “actions” in our world (for instance, for all “political actions”) the “benefit” they produce cannot be measured in any meaningful way. You cannot add “the benefit that measure A produces to person X” to “the disbenefit that measure A produces to person Y.”
At the end of the day, pleasure, enjoyment, and frustration are “subjective,” and so unknown to the decision maker; they are also non-measurable: you cannot say that an action gives me 10.26 units of pleasure or 0.12 units of frustration, much less add the two together for different persons. Buchanan’s “Cost and Choice” comes to mind.
So, even if utilitarianism were right, it would be totally useless for any practical purpose.
It sounds like maybe the complaint is that you can’t measure pleasure to the hundredth of a point. That’s true; we can only estimate. The utilitarian wouldn’t be too bothered by this, though. They would say you should strive to maximize expected utility, recognizing that you never know for sure which action will actually produce the most utility. And they would probably also say that you can make estimates, or reasonable guesses, rather than claiming to have precise numbers.
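To illustrate the kind of rough estimation the utilitarian has in mind, here is a minimal sketch of expected-utility reasoning. The actions, probabilities, and utility numbers are all invented placeholders, not anyone’s real estimates:

```python
# Expected-utility sketch: score each action by the probability-weighted
# sum of its outcome utilities, then pick the highest. All numbers
# are invented for illustration.

actions = {
    "donate to program A": [
        (0.8, 100),  # (probability, estimated utility): likely modest success
        (0.2, 10),   # partial failure
    ],
    "donate to program B": [
        (0.5, 300),  # high-risk, high-reward success
        (0.5, 0),    # complete failure
    ],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in actions.items():
    print(f"{name}: EU = {expected_utility(outcomes):.0f}")

best = max(actions, key=lambda name: expected_utility(actions[name]))
print("choose:", best)  # program B (EU 150 vs. 82)
```

Of course, this only pushes the problem back to where the probability and utility numbers come from, which is exactly the objection pressed in the replies below.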
Which can be good enough when you are talking about one individual’s expected utility.
But very likely your action will affect more than one person. Some of the affected people will see their utility increased and others decreased. With “utility” being mostly subjective and difficult to measure, it should be almost impossible for you to come up with an answer, or even a reasonable guess, to the question:
Does my action increase (let alone “maximize”) the “subjective total utility” of the “bunch of people” affected by my action? I doubt that I can even know precisely who is going to be affected by my action, let alone estimate each person’s subjective change in utility (isolating the effect of my action from the other actions affecting their utility at the same time) and add together all these individual subjective changes (positive and negative).
I find it troubling to make a question I cannot possibly answer the key question in deciding my course of action.
Your argument seems to assume that utilitarianism is an alternative to deontology. However, as you describe utilitarianism, it seems to me that utilitarianism *is* deontology! To be specific, it’s the deontological claim that the only ethical value worth pursuing is maximizing pleasure and minimizing pain. The scenarios you pose are simply scenarios where that conflicts with some other deontological value, and you correctly point out that not everyone’s ethical intuitions line up with the utilitarian conclusion in these scenarios.
I (and other ethicists) use “deontology” to refer to non-consequentialist ethical views. That’s just a stipulation.
That just pushes the problem back to what “consequentialist” means. The usual argument from utilitarians is that they look at consequences while deontologists don’t. But I’m not sure that argument actually works. For example, in your “deathbed promise” scenario, a deontologist who believes that breaking promises is wrong can simply phrase that in terms of consequences: the deontologist views the consequence of breaking the promise as worse than the consequence of the money not going to charity. So another way of phrasing the argument would be that, just as utilitarianism is deontology, deontology is consequentialism! It’s just a matter of *which* consequences a particular view gives more weight to.
For the deontologist, it’s not the /consequences/ of breaking the promise that are bad; it’s the /mere fact/ of breaking the promise. Of course the consequentialist asks “But *why* is it bad, if not for the fact that it has the bad consequence of making promises unreliable, and thus making trust and trade so much harder, and thus leading to less thriving?” Which is why the deontologist tries to come up with examples of breaking promises where the consequences are *good*. “See,” they say. “Good consequences, but still a bad act!”
The consequentialist can either accept the hypothetical and say “Well under those precise and odd circumstances breaking the promise wouldn’t be wrong”, or reject the hypothetical and say “But real life doesn’t work like that — we still get bad consequences, and *that* is *why* breaking the promise is still a bad act!” Either way, the deontologist and consequentialist still disagree.
So no, deontology is not a form of consequentialism.
Breaking the promise isn’t the action, it’s a *consequence* of the action. The action is giving the money to charity instead of to the son of the person you made the promise to. A consequentialist could perfectly well judge “breaking the promise” to be a bad consequence of that action. It’s just that the consequentialist you happen to be describing does not attach any “badness” to the “breaking the promise” consequence. But another consequentialist might. The first consequentialist might protest that no, the second one is a deontologist, not a consequentialist, but that’s a quibble about words. Both of them are looking at consequences; they’re just attaching different weights to them.
The “why is it bad” question can just as easily be asked of a utilitarian. Nothing has its “goodness” or “badness” intrinsically written on it. Judging *anything* as good or bad is a human value judgment, and different humans can make different judgments.
In a case where breaking a promise led to something else that was claimed to be “good”, then you have multiple consequences of the same act which have different values, and you have to decide how to weigh them. That conflict is going to be there for anyone, whether they call themselves a deontologist or a consequentialist. The difference between them will be how they balance different, opposing consequences, not that one “looks at consequences” and the other doesn’t. The consequentialist you describe is simply ignoring the consequence “breaking the promise” altogether, giving it zero weight, whereas the deontologist you describe is *considering* that consequence, not ignoring it. So which one is more deserving of the label “consequentialist”?
> Of course the consequentialist asks “But *why* is it bad, if not for the fact that it has the bad consequence of making promises unreliable, and thus making trust and trade so much harder, and thus leading to less thriving?”
Even if we don’t count “breaking the promise” as a consequence itself, it has the consequence of making the person doing it feel bad, and it has the consequence of depriving the son of his inheritance.
Also, since the person you made the promise to is dead, nobody is going to call you out for breaking your promise, so doing it won’t make promises any less reliable. So the “consequences” you describe won’t even happen in this case.
Does the above make it a case where breaking the promise is ok according to the consequentialist?
> Breaking the promise isn’t the action, it’s a *consequence* of the action.
No, that’s wrong.
> The action is giving the money to charity instead of to the son of the person you made the promise to.
Yes, that’s right. That action is an instance of giving money to charity. It is an instance of giving money. It is an instance of giving money that doesn’t belong to the giver. It is an instance of giving money that was promised to one person to a different person. It is an instance of someone who promised to transfer money to one person transferring it to a different person. It is an instance of someone doing something other than what they promised to do. It is an instance of breaking a promise.
It is all of those things.
Consequences of actions come after the actions. “Breaking the promise” is not something that comes after the donation; the donation *is* the breaking of the promise.
A lot of the rest of what you say is true, but irrelevant to the only point *I* made. I am an atheist. I think that all religious beliefs are either false or meaningless. But that doesn’t mean that I can’t tell a Christian from a Hindu. They’re both wrong, but they’re wrong in different ways. And if you say that Hindus worship the Christian God, then you’re wrong, too. Likewise, if you say that deontologists are consequentialists, then you’re wrong.
@Mark Young:
> Consequences of actions come after the actions.
I’m not sure I agree with this, but that’s a question about choice of words; given that this is what you mean by “consequences”, I understand what you are saying. So by this definition, a “consequentialist” is someone who *never* looks at any characteristics of the action itself to decide whether it’s good or bad; they *only* look at things that happen after the action and are caused by it. Whereas a “deontologist” is someone who *does* look at characteristics of the action itself; they might also look at things that happen after the action and are caused by it, but they do not prohibit, as consequentialists do, looking at characteristics of the action itself and assigning moral weight to them.
Given these definitions, it seems like “consequentialism” is a sort of artificially limited form of “deontology”; “deontologists” can take into account everything that consequentialists do, plus more things that “consequentialists” forbid themselves from taking into account.
If you count breaking a rule as a consequence all by itself, deontology disappears in a puff of smoke. DDTT
Likewise for the emotional repercussions to the person breaking the rule. It is trivial to reframe things to make any philosophical question evaporate rather than confront it, but that simply ignores rather than fulfills the purpose of philosophy to begin with. And if I am not mistaken, that is the basic argument being presented by the article: that so-called utilitarians simply make the validity of utilitarianism unfalsifiable by selectively expanding or contracting the scope of their analysis to avoid all the conflicts utilitarianism causes, as if that made the conflicts non-existent.
I would say that if you count “breaking a promise” as a consequence, then the difference between deontology and consequentialism disappears in a puff of smoke. Which was my original point.
As for emotional repercussions, aren’t those part of a person’s utility? And doesn’t utilitarianism consider them? Some people might give them less weight than others, but I don’t think it’s problematic to consider them as part of the consequences.
I agree that conflicts about whether particular acts are ethical/moral or not are unavoidable, and that the claim of utilitarianism to provide a general method for resolving all of them is not valid.
> And if I am not mistaken, that is the basic argument being presented by the article, that so-called utilitarianism simply make the validity of utilitarianism unfalsifiable
If you’re talking about Mike’s post on why he’s not a utilitarian, then I’m quite sure you’re wrong. The article clearly says that utilitarianism is easily falsified.
Sure, he allows that a utilitarian might “bite the bullet” and accept all those odd moral claims, but that’s not anything special about utilitarianism. It’s an option open to anyone whose views are reduced to (near) absurdity.
Thank you for that clarification. I would have hoped it wasn’t needed, but I noticed in several of the comments that wasn’t so.
“You have a tasty cookie that will produce harmless pleasure with no other effects. You can give it to either serial killer Ted Bundy, or the saintly Mother Teresa. Bundy enjoys cookies slightly more than Teresa. Should you therefore give it to Bundy?”
I’m no utilitarian, but I don’t find this counter-intuitive…?
Unless you give it to Bundy, or you invent some purpose-built assumption to justify giving it to the nun (who, apropos of nothing, some people believe caused greater suffering than the serial killer by counseling the suffering poor to accept their lot as God’s grace, a sin made worse by the postmortem revelation that she doubted her own faith even while doing so, but not because she was doing so), then you are being counter-intuitive (in terms of utilitarianism, if not simple psychology).
What I mean is that my moral intuitions are not appalled at the thought of Ted Bundy enjoying a tasty cookie, even a tasty cookie that Mother Teresa (who, for the sake of argument, is very saintly and beyond reproach) could have enjoyed.
The question isn’t “can you” but “should you.” The contention is that utilitarians must believe we *should* give the cookie to Bundy, and that giving it to Mother Teresa would be wrong (because Bundy would enjoy it more).
Does *that* match your intuition?
I don’t have an intuition one way or the other, and am not sure what to make of people who report such intuitions. If I had to report an intuition, it would be that there’s no fact of the matter about what you should do.
It’s one thing to report a preference about who gets the cookie, but if someone wants to claim that there is some stance-independent normative fact about who you “should” give a cookie to, that strikes me as very strange. Even if that’s true, I don’t have an intuition that it’s the case.
No, the “should” part does not match my intuition.
@Timmy:
“Do you abstain from killing/raping/genociding because you believe it is morally wrong?”
Uh, yes?
I mean, I don’t have any particular desire to go out and kill or rape strangers. But there are certainly people I wish didn’t exist and might have a decent chance of successfully killing, and I don’t do it in large part because of moral beliefs. Similarly, I’ve been in plenty of situations where I really, really wanted to have sex with someone who didn’t want to, and I didn’t rape. Saying that I must have had some kind of want to not rape that was larger than my palpable want to have sex is simply begging the question. That so-called “want” was a set of moral beliefs based upon something other than my wants.
Most of the problems you set out can be answered by utilitarians relatively easily, by variations on the theme of: yes, in a world where we were absolutely certain that our actions would actually have the consequences set out in the question, and there would be no other consequences to our actions other than those the question sets out, then we should act as suggested, no matter how morally repugnant it seems. We don’t have that certainty, and the questions are framed in such a way that in considering them people are likely
1. to imagine a real-world situation in which such certainty is absent, while at the same time
2. being asked to assume that such certainty is present.
So the questions are misleading. They effectively ask two questions in one, with very different answers, and then say that people answering in different ways or not being sure how to answer proves that there is an issue.
I think you can frame variants of these trolley problem questions which aren’t misleading. So, say a spaceship is attached to the surface of a moon by a chain. The moon is falling toward the sun. The spaceship can fly off, but only by detaching the chain. The longer the decision is put off, the greater the risk that the spaceship will be unable to escape the sun’s gravity, and that it will be destroyed; it is already so hot outside the spaceship that its shields are starting to break down, and the spaceship’s engines are at full blast, straining to drag the moon away from the sun, with the chain taut between the spaceship and the moon. This has slowed the moon’s acceleration towards the sun, but it continues to accelerate sunwards, and the ship’s screens show the surface of the sun getting ever closer. Dug in, under the surface of the moon, is a human being, who will inevitably die when the moon falls into the sun. It is by now so hot that it would be impossible for them to even leave their bunker, let alone make the safety of the spaceship. They have tried to do so, but been driven back into the bunker by the intense heat, which is increasing all the time. Should the captain of the spaceship detach the chain?
I think most people, putting themselves in the spaceship captain’s position, would detach the chain (though I could of course be wrong!). Even if people don’t agree on the answer, I think this question deals with the issues of causation and certainty in questions like “should I shoot one person when my kidnapper tells me that otherwise he will kill all six of us?”, and gets to the basic issues, showing that it isn’t the case that Utilitarianism leads to morally repugnant outcomes that people would be unlikely to actually support in real life. I’m happy to say both that I would agree with detaching the chain and that I don’t find doing so intuitively repugnant.
I think a variant of this spaceship question would deal with the classic trolley problem where you kill one person rather than allowing five others to die, though crucially it would have to present a compelling and easy to visualise scenario in which causation and certainty were clear, rather than merely stated, as they are in the trolley problem.
The other questions you pose seem to be variants of “should we want people we hate to be happy?”, which is a real issue, but one I’m not sure is particularly problematic for utilitarianism.
The rest of your post is only meaningful on the basis that trolley problems, as normally posed, are getting at something real. I don’t think they are. I think the bigger problem with Utilitarianism is “so what?”
As is inevitably the case with amateurish musing on subjects professionals have thought about for decades, I’m sure that there are flaws in the above that aren’t immediately apparent to me!
I believe you are fundamentally mistaken about the nature and purpose of the trolley problem, and that this is such a common error it is nearly universal. The trolley problem isn’t about using a simplistic comparison of quantitative harm, which is how most people use it, as you have done. It is about examining whether failure to take action has the same moral impact as taking action does. The thought experiment is meant to probe how much less (if any) responsibility inaction confers compared to action, rather than present an outrageously simplistic “ethical calculation” question such as “is it better if one person dies or three?”
I think this ties in to another comment on this article which essentially treats utilitarianism as such an obvious and unavoidable truth that it effectively becomes deontology, despite being entirely and completely consequentialist by definition.
I simply do not find utilitarianism counterintuitive.
I don’t think the mere weight of philosophical consensus is a very strong foundation for judging one philosophical position to be correct and another incorrect. And the 2020 PhilPapers survey finds a nearly equal split between consequentialism, virtue ethics, and deontology. That’s cold comfort to anyone appealing to the expert judgments of philosophers.
Even if there were a consensus, I have yet to see academic philosophy adequately grapple with the possibility that the intuitions popular among those publishing and writing about academic philosophy may be idiosyncratic and unrepresentative of the way other people think. If philosophers are psychologically idiosyncratic, this may undermine much of the force behind arguments relying on intuition, absent some reason to think philosophers generally have more reliable intuitions than others.
I have undertaken the project that you advised utilitarians take and written a series of blog articles responding to your specific examples.
Oh I totally forgot to post the link–here it is
https://benthams.substack.com/
@Peter Donis
The more I read your comments, the more convinced I am that you have a fundamentally mistaken idea about philosophy, both in the subject it studies and the premises it establishes while studying it. And I don’t think you’re the only one, either here or in general, so I’d like to discuss it. This isn’t meant as a personal criticism, but an exploration of issues.
There are no people who consciously “use” consequentialism (including utilitarianism, a more focused subset of consequentialism) or deontology in order to make moral decisions. People simply do what they think is “right”. Philosophers try to deconstruct what this notion of “right” (or, more practically, “wrong”) is intellectually, but on an abstract level, not a behavioral one the way psychologists might. Overall, your questions and responses seem to focus on this psychological perspective, as if deontology or consequentialism are motivational factors, rather than theological (the study of “good”, not simply the study of “god” as it is typically applied) paradigms.
When philosophers identify themselves as one or the other, they’re identifying the philosophical approach they prefer/specialize in, not suggesting they (or anyone else) can or does exclusively consider only consequences or only rules when making decisions. The foundation of either “school” is an unprovable hypothesis, not a falsifiable theory. These hypotheses are contrary but not incompatible. “Are actions wrong because of their consequences or because they violate a rule/law/principle?” is a question that can be answered “yes”. Either premise can be reformulated in the opposite terms; “causing bad consequences is bad” is a “rule” in consequentialist terms, and also deontological terms.
Discussion of these things is complicated (in fact, made both pointless and ludicrous outside of philosophical discourse itself) because in the real world, divorced from philosophical musing, laws are (or should be) established to avoid bad consequences, and socially imposed sanctions are considered “consequences” of behavior independent of whether potential physical repercussions occur or not. Pragmatic morality, which might be educated by philosophy but is not, and cannot be, so “pure” from an intellectual standpoint, provides for both the violation of rules for good (e.g. civil disobedience) and adherence to rules for abstract non-consequentialist reasons (being “law-abiding” as either an object lesson or social action). Religious (theistic) morality further confounds the distinction by establishing non-physical consequences for violating rules or abiding by them (heaven/hell or karma being the most familiar).
I’ll close by noting that in the postmodern era (the last century and a half or so, but mostly the more recent portion) the premise of ‘darwinistic pretensions’ has joined karma and God as a method of imagining rules that “punish” bad actions with ‘deontological consequences’. It has no more validity than the Bible or Buddhist reincarnation cycles, because what traits will prove adaptive in the future is unknowable, which is why natural selection occurs to begin with and morality is not a biological instinct. Evolution is caused by stochastic mutations and capricious nature, not any sort of drive towards a mathematical perfection of either biological reproduction or compassion. There is no “right” or “wrong”, or even “good” or “bad” in nature, there is only “whatever happens happened”.
> When philosophers identify themselves as one or the other, they’re identifying the philosophical approach they prefer/specialize in
This is part of it, yes, but at least some philosophers go beyond this. Peter Singer, for example, doesn’t just say he prefers a utilitarian approach: he claims that everybody should donate all of their income over some level to charity, and he bases this claim on an explicitly utilitarian analysis, not just on “because I think it’s right”. And he’s not the only philosopher who has based claims about how other people should act explicitly on a particular philosophical approach.
I admit my personal attitude towards philosophy is kind of a two-edged sword. On the one hand, whenever I see a particular philosopher making claims like those Singer makes, basing them explicitly on a particular philosophical approach, I almost always disagree with them, and my disagreement is usually based, at least in large part, on disagreeing with the philosophical approach they are using. But on the other hand, if no philosophers ever did that, if all philosophers limited themselves to just saying “this is my preferred approach” without claiming any more, then I probably would find philosophical discussions too uninteresting to participate in!
>I admit my personal attitude towards philosophy is kind of a two-edged sword.
I can’t say I am generally enthusiastic about philosophers, or deny the truth of what you’ve said in this comment. But my instincts are contrary, I guess, which is why I would respond to the behavior you described by observing that the same personal recommendations could be justified as easily with the contrary ethos. Donating a “certain percentage” of income sure seems more deontological than utilitarian to me, no matter how it is explained. I’m not convinced that philosophers who give advice like this aren’t just giving examples of practical applications rather than designating themselves paragons of virtue. But again I’m on the other side, since I do find most philosophical discussions too uninteresting to bother reading, let alone participating in. Yet I find myself in this position when I do, often defending the profession against amateurs, even though I am an amateur who thinks that almost everything philosophers have said for thousands of years is mostly useless junk. (I have a very well formulated paradigm explaining how and why; I’m not just a contrarian grouch.) Go figure.
I appreciate your responding in a charitable manner, either way.
> the same personal recommendations could be justified as easily with the contrary ethos. Donating a “certain percentage” of income sure seems more deontological than utilitarian to me, no matter how it is explained.
My reaction to Singer, for example is similar: he seems like a deontologist to me, even though he says he’s a utilitarian and frames his arguments in explicitly utilitarian terms. That actually is in line with the general point I’ve been making in this thread, which is that utilitarianism (or more generally consequentialism) is actually a form of deontology. 🙂
In case anyone wants to critique me, here is my argument against utilitarianism.
Prop0: The purpose of a theory of ethics/law (for simplicity I’m going to conflate the two) is to resolve interpersonal conflict. Without conflict — e.g. in a garden of Eden utopia — there is no need for an ethical system of beliefs.
Prop1: A theory of ethics/law should be able to resolve interpersonal conflict in a reasonably objective manner. Otherwise, the theory does not objectively resolve interpersonal conflict but simply rationalizes individuals’ conflicting subjective values.
Remark1: A mercury thermometer reading ~71 degrees Fahrenheit is reasonably objective. Namely, it is reasonable/obvious/common sense to say that if any other person comes to that thermometer and says it is reading e.g. 47 degrees Fahrenheit, then that person is wrong/incorrect.
Remark2: The “best” flavor of ice cream is not reasonably objective. Namely, it is not reasonable/obvious to say that if any other person says their favorite ice cream is vanilla, then that person is wrong/incorrect. This observation is simply the economic fact that value is subjective to the individual.
Prop2: Utilitarianism (either rule or act) is inherently not objective, because utilitarian calculations necessarily must perform interpersonal utility comparisons, and these are not reasonably objective. Thus by Prop1, utilitarianism does not meet the requirements of a theory of ethics/law.
Remark3: To be concrete, let’s consider two people, X and Y, on an island. X, who likes to kill people, says “I’ve done my utilitarian calculations and I have concluded that it is a net benefit in ‘utils’ if I kill you.” Person Y, who does not want to be killed, says “No, your ‘util’ calculations are incorrect; it will be a net loss of ‘utils’ if you kill me.” Utilitarianism cannot reduce this argument between X and Y to objective propositions that reasonable people can agree on. This argument over ‘utils’ is analogous to two people arguing about what the “best” flavor of ice cream is.
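One way to make the non-objectivity point concrete: in standard decision theory, a person’s utility function is only defined up to a positive affine rescaling, so a sum of utilities across persons has no scale-independent meaning. A toy sketch (all numbers invented):

```python
# Toy illustration: interpersonal utility sums are not invariant.
# Rescaling one person's utilities (a*u + b, with a > 0) leaves all of
# that person's own preferences unchanged, yet can flip the "total
# utility" ranking of outcomes. All numbers are invented.

u_X = {"A": 5, "B": 1}  # person X's utilities for outcomes A and B
u_Y = {"A": 0, "B": 3}  # person Y's utilities for the same outcomes

def total(scale_X=1, scale_Y=1):
    return {o: scale_X * u_X[o] + scale_Y * u_Y[o] for o in ("A", "B")}

print(total())            # {'A': 5, 'B': 4}  -> A "wins"
print(total(scale_Y=10))  # {'A': 5, 'B': 31} -> B "wins"
# Both scalings encode the very same individual preferences, so the
# interpersonal sum settles nothing that wasn't assumed going in.
```

Neither X nor Y can point to a fact of the matter that fixes the scales, which is exactly why the argument over ‘utils’ never terminates.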
I personally advocate Hoppe’s argumentation ethics. In Hoppe’s framework, we observe that X and Y have conflicting values (X wants to kill Y; Y doesn’t want to be killed). Furthermore, X and Y are having a rational argument to try to resolve their conflict. If we assume both X and Y are arguing in good faith, then Hoppe showed that both X and Y are presupposing self-ownership and, by extension, private property. Therefore, X’s position is self-contradictory.
Btw, even though I advocate Hoppe’s argumentation ethics, I am sympathetic to Huemer’s ethical intuitionist approach. He does a great job reducing arguments about conflict down to very plausible/reasonably objective ethical intuitions.
As a critique, I think your claim that a discussion between two people over whether one should murder the other can be described as either rational or reasonable is basically insane, and reflects badly not just on you but on every philosopher you have ever advocated or sympathized with. But if I were to engage such insanity, I would say the problem results from conflating “utility” with ‘fulfillment of desires’. Obviously they are not inherently exclusive, but that does not suggest that they are therefore coincident. If one must construct an exception to one’s analysis of consequentialism/utilitarianism so that the “good results” exclude such nakedly (and insanely) hedonistic consequences, then so be it. I suspect the pleasure Y receives from not being dead will always be orders of magnitude larger than any pleasure X could get from murder. Of course, I understand your example was meant to be absurd for demonstration’s sake, rather than insane, but I will reiterate that you laid it on a little thick calling this “conflict” reasonable or rational. I suppose converting your gedanken to a more conventional lifeboat scenario, with limited rations making “calculation of utility” more necessary but also more complicated, might not illustrate your conjecture as well. Nevertheless, my analysis remains unchanged: utilitarian consideration of consequences must distinguish “hedonistic pleasure” (nerve sensations without consequence) from “stoic pleasure” (emotional utility) or the game is spoiled before it begins.
I apologize if you do not find this analysis cogent or useful. Thanks for your time. Hope it helps.
As you suspect, the murder scenario is intentionally ridiculous to illustrate how — despite it being reasonably/intuitively obvious that you shouldn’t kill innocent people — it is not at all obvious whether “total utility” would increase or decrease. The reason it is not obvious is because “value” or “utility” or “fulfillment of desires” or whatever are all subjective. Who’s to say person X is not a utility monster? How do reasonable people objectively ascertain this?
I understand that “utility” is a broad term, and that people like to distinguish between different kinds. You mention “hedonistic pleasure” vs “stoic pleasure”. My contention is that these distinctions are irrelevant because all of them are subjective. How can reasonable people objectively calculate the “stoic pleasure” experienced by another person? Ditto for “hedonistic pleasure” or any other type of utility.
Also, I understand that if philosopher X’s supporters are mongoloids then this is evidence that X’s ideas are bad. But still, this is a very weak form of evidence and you really shouldn’t form a strong opinion of other people based on their advocates.
Prop0 seems plausible as a statement about law, but not about morality. Or at least, people like Aristotle and Kant considered intrapersonal issues (duty to the self) as part of morality, though perhaps you could argue that it is incoherent to have a duty to one’s self. There are other areas where legal does not imply moral and moral does not imply legal.
Hoppe’s argument is interesting but incomplete. He does not give an explicit explanation of how we know which norms argumentation presupposes. Also, his argument for why such norms must be considered necessary universals seems just mistaken, though that is not to say no one could make a better case for it. His case is something like this: the argument is analogous to a mathematical proof, and we do not think of mathematical theorems as true only while we are proving them; they are true always and everywhere. But the assumptions of mathematical proofs are true always and everywhere, while the assumptions Hoppe is making are contingent on persons being engaged in argumentation (or at least, they are not obviously necessary, and so are in need of an argument to demonstrate that).
In short, Hoppe argues that you can’t argue without making certain presuppositions, and so you can never refute those presuppositions by arguing. But the part of the argument that establishes exactly what we must presuppose is missing or flawed. He tells us what he thinks must be presupposed, but fails to show that this is logically entailed in the act of arguing.
The weakest part of his argument is where he argues for acquisition of property by first use/objective link. I feel I could easily be persuaded, and may already believe that this is the best approach. But Hoppe argues that if humans had used any other approach, we would all be dead. This seems false for various reasons: his principle has never been an explicit part of most legal systems, it isn’t clear that primitive hunter-gatherers actually used it, and most explicitly denied it (though I suppose he could argue this was propaganda rather than an accurate description of how a society actually worked). Also, it is not clear why humans can’t survive without this, while animals can.
Yes, you are right about Prop0 only applying to law. And by law I don’t mean the arbitrary rules written by politicians but rather the rules that it is moral to coercively enforce.
I have no problem with people who use utilitarian analysis in situations where there is no interpersonal conflict/coercion. I should have said that my argument against utilitarianism is restricted to its application to legal theory. For example: “Wealth redistribution is ethical because it raises total utility, therefore it is moral to coercively take people’s wealth”.
For the Hoppe stuff, I am not an expert but here are some brief remarks that are hopefully helpful.
Physically threatening an opponent in order to affect an argument contradicts good-faith argumentation. This fact is not proven logically but is self-evident. Someone who cannot see the self-evidence simply does not know what good-faith argument means.
For universality, threatening an opponent outside of an actual debate causes an interpersonal conflict. If you are committed to resolving conflicts via rational argument, then you are already presupposing self-ownership of the person you are threatening. There might be an error with this, but I view Hoppe’s theory as being contingent on a person being *committed to* resolving conflicts via argument rather than a person actually engaged in argument at some particular time and location.
For self-ownership implies private property via first use, I agree with you that this is the hardest to establish. There are other ways to show this besides the “we’d all be dead otherwise” approach. But I think it’s difficult to do in just a couple sentences.
>consequentialism) is actually a form of deontology.
This is the misconception I was trying to address. It indicates a mistaken idea of what deontology is. It isn’t about the reasoning we might use to explain why something is wrong (or right) but the nature of what “wrong” is to begin with. Is breaking a rule, independently of any consequences, itself an immoral act? In “pop philosophy”, consequentialism is a form of deontology at least as much as the other way around. (In fact more, unless you reject deontology entirely, which is harder to do than it sounds.) The very idea that the moral implications of the consequences of an action accrue to the action itself is a deontological perspective, if any of those consequences are matters of retribution or punishment rather than direct physical results; indirect results don’t qualify as consequences in a philosophically rigorous consequentialism. If someone says “don’t move or I’ll kill this hostage” and you move and they kill someone, that is a consequence of their behavior, not yours. The linking of proscribed actions with subsequent sanctions, even with foreknowledge, is a deontological principle, a widely accepted but not automatic rule that says “you should avoid actions that *might* have bad consequences”. True consequentialism (including actual utilitarianism, which is the kind the article rejects, rather than the pragmatic quasi-utilitarianism that innately demands a mix of deontology, just as a functional capitalism requires a socialist safety net and a functional socialism cannot prevent personal savings) doesn’t cotton to uncertainty like that. Only the actual results, not potential results (even if actualized in expressly identical but separate instances), make an action “good” or “bad”. And of course arguments from either source must sidestep the epistemological issue of any distinction between whether an action is wrong and whether it can be known to be wrong.
Really, the whole thing, both consequentialism and deontology, is just a secular effort to recreate the authority of God (or karma, or biology) without the supernatural autocracy, to give some explanation for how morality can exist at all without being a physical force. This is why, in philosophy, the two are presented as a contrast; the rule of parsimony suggests that immorality should be defined as “transgression of rules” or “causing suffering”, not both. Each creates the construct that is primitive in the other; rules are made to avoid suffering, and not causing suffering is a rule. Hybrids of the two (all of real-life morality) are necessary and convenient but intellectually impure. So “a deontologist” believes that bad consequences occur because a rule was transgressed, while “a consequentialist” believes a rule was transgressed because bad consequences occurred. Neither denies the existence of both consequences and rules, but which is the chicken and which the egg?
Since a significant proportion of people are theistic, this becomes a theological debate about whether God is more about love and compassion (It decrees commandments because It wants what is best for us; a consequentialist perspective) or more about obedience and worship (whatever It says we shouldn’t do is what “deserves” punishment simply because It said so, the deontological take), with very little change otherwise. Karma is more obviously biased towards consequentialism in principle, but deontology in effect, while ironically Darwinian pretensions are more about the deontological principle “go forth and multiply”, or, alternatively, regard for the species more than the individual, than the more proximate consequentialism that supposedly/actually mediates natural selection.
> It indicates a mistaken idea of what deontology is. It isn’t about the reasoning we might use to explain why something is wrong (or right) but the nature of what “wrong” is to begin with. Is breaking a rule, independently of any consequences, itself an immoral act?
But consequentialism has to apply the same kind of reasoning–just to something causally downstream of the act, instead of the act itself. Consequences don’t come naturally “labeled” with whether they are good or bad any more than acts themselves do; you have to have some system of rules, or axioms, or premises, or whatever you want to call them, that tell you which consequences are good and which are bad. And deontologists, as I’ve already pointed out, are perfectly able to apply their deontological rules to consequences of acts: “Yes, that act itself might be morally neutral, but it would result downstream in an innocent person being killed, and that’s wrong, so we can’t do the act.”
So consequentialism is not somehow independent of applying rules to things. If you want to restrict the term “consequentialism” to mean “applying rules to events causally downstream of the act” and “deontology” to mean “applying rules to the act itself”, I can’t stop you, because your choice of words is yours, but I don’t think the distinction being made is very useful, and I think it often does more to obfuscate actual ethical issues than to clarify them.
> the rule of parsimony suggests that immorality should be defined as “transgression of rules” or “causing suffering”, not both.
I don’t think this is the kind of distinction you are claiming it is. “Suffering” is not something we can directly, objectively “read off” the physical state of things, any more than “transgression of rules” is. So in any practical sense, “causing suffering” just turns out to be another rule that humans have to make judgments about–did this particular act cause suffering, or not? (And in many real situations, making that judgment is actually *harder* than making a judgment about other deontological prohibitions, like breaking a promise or killing an innocent person.) And then “consequentialism”, once again, just becomes a particular variety of deontology in which the only thing that is considered wrong is causing suffering.
I really appreciate this post. However I think there are effective responses to all of these counterexamples on utilitarianism.net
@Mark Young
> They don’t disagree about what kinds of animals there are
Perfectly put. Aside from the potential distraction which might result from wondering which is being condemned by comparison to creationists.
> my moral intuitions are not appalled at the thought of Ted Bundy enjoying a tasty cookie.
The issue isn’t the thought of Bundy enjoying a cookie, but the fact that it was you who gave it to him, while depriving Mother Teresa (or some other person more admirable than Bundy) of that pleasure. If your moral intuition does not find that disturbing, you simply have no moral intuition.
@ Peter
>As for emotional repercussions, aren’t those part of a person’s utility? And doesn’t utilitarianism consider them?
No, and no. Only actions and concrete results are properly considered in actual utilitarianism, which is why most amateur (and no small proportion of professional) consideration of utilitarianism is philosophically insignificant, however helpful it might be in framing quasi-logical moralizing. Of course, whether “pleasure” is considered an emotional or a concrete result will always boil down to the hard (unresolvable) mind/body problem, so even stating the case tends to overstate the case. Particularly when utilitarianism, whether rigorous or postmodern, is applied to social policy and equally illicit Machiavellian “consequences” pertaining to a government’s popularity are considered, since official causes and concrete ramifications like rioting and insurrection are seemingly linked through a populace’s “emotions”.
The idea of utilitarianism as a “logical”, non-deontological approach to ethics requires that the only teleological “cause and effect” which should be considered is that of physics and chronology: inevitable and discrete. (What I call a “forward teleology”.) If the occurrences that follow the action are mitigated/mediated through additional ethical decision-making by other humans (ethical agents), then it is *their* actions which are being examined, not the ones they claim (honestly or not) caused them to want to take those actions. The whole point of consequentialism, including utilitarianism, what makes it a comprehensible notion as a non-deontological ethos, (if we presume it is to begin with) is that intent is entirely and completely irrelevant; the moral significance of an action lies in its antecedents, not its precedents, what follows from it not what occurred prior to it. This makes “inverse teleologies”, the kind humans use when they take actions “in order to” or “so that” some event will (actually might) happen, irrelevant when considering actions and consequences (causes and effects), since in such intention-based teleologies the “cause” is the goal, or intended result, while the consequence (‘I do this as a consequence of believing it will have that result’) is the action being taken. It should be obvious that accepting such backwards teleologies makes all consequentialism incoherent. But of course that doesn’t prevent them from being rampantly utilized, which is what makes most amateur philosophy incoherent.
At this point I feel a responsibility to confess that I am, technically speaking, an amateur philosopher (my vocation is priest of an atheist religion, and my professional employment is entirely unrelated). But rather than accept that my statements are incoherent, I must insist they are more coherent than most others you might disagree with. 😉
> Only actions and concrete results are properly considered in actual utilitarianism
I don’t understand this, because you’ve said that “causing suffering” is wrong according to utilitarianism, and that’s an emotional repercussion, isn’t it?
Or do you mean this statement, that “actual utilitarianism” only considers “actions and concrete results” as a *criticism* of utilitarianism?
> The whole point of consequentialism, including utilitarianism, what makes it a comprehensible notion as a non-deontological ethos, (if we presume it is to begin with) is that intent is entirely and completely irrelevant
I get this, but I don’t see how it’s consistent with saying that “emotional repercussions” aren’t part of the consequences of an action. “Emotional repercussions” aren’t a matter of intent; if you cause someone to suffer, that’s true whether or not you intended to.
> its antecedents, not its precedents
Btw, “antecedent” and “precedent” are synonyms (“ante” and “pre” both mean “before”). A straightforward derivation of an antonym to those from Latin would be “postcedent”, but English doesn’t appear to have adopted that as a word, preferring “consequence”.
@Mark
>>Breaking the promise isn’t the action, it’s a *consequence* of the action.
>That is wrong.
I also bristle at the “deontologists are consequentialists” (or vice versa) canard, and I agree with your comment generally. But I think you might have missed the point being made, in a way that illustrates a broader point I’ve tried to make, so hopefully you’ll forgive me for trying to clarify someone else’s argument. From a deontological perspective, it is breaking the promise that is the action with moral implications, and making the donation after promising not to is simply a consequence of that immoral action. This exemplifies why the “argument” between those who frame their positions from what they consider a utilitarian/consequentialist perspective and those who frame their position from what they consider the deontological perspective (though very few do the latter, postmodernism having made utilitarianism effectively the universal default perspective of all ethical contemplation outside the isolated safety of the densest academic thesis) goes nowhere. Resorting (re-sorting) to declarations of whether the promise was broken before or after the donation is patently silly, because it is reasonably obvious that the two things are simultaneous and not even truly separate.
This leads me to my broader issue. When consequentialists and deontologists disagree about what makes something wrong, they aren’t just debating teleologies like physicists would (if morals were physical forces). They are, usually unknowingly, disagreeing about what the “something” is. In deontology, *breaking rules is itself an immoral action*; it doesn’t even matter if there are physical consequences beyond that, because they are irrelevant in terms of the morality being considered and discussed. Conversely, of course, consequentialists do the opposite; there *are no moral rules*, and so donating money after promising not to *cannot possibly be considered an immoral action*. Or, at least, the morality of the donation is no different than if the promise had not been made, and unless you’re going to say a donation of money that the source of that money wouldn’t appreciate is always immoral, regardless of whether they extracted a promise to that effect, the point is moot.
Before anyone replies with examples that would show this approach would result in absurd or unethical circumstances, the answer is “yes, so?” If either consequentialism or deontology were a “perfectly satisfactory ethos” in which that could not happen, the other wouldn’t exist as even an academic premise, let alone a paradigm worth arguing about. Likewise for my explanation of this matter. It may be correct or incorrect, or proper or improper (the first referring to typical reasoning, the second according to academic philosophy) but it cannot possibly be “wrong”. With that said, I’d be happy to discuss it with anyone who disagrees, because that’s why we’re all here.
> consequentialists do the opposite; there *are no moral rules*
As you state it, this is false. A correct statement, given where you are drawing the line between deontology and consequentialism, is that for consequentialists, there are no moral rules *that apply directly to an action*; you have to apply rules about what is right or wrong, good or bad, to consequences of the action, and those consequences can’t themselves be actions taken by someone else (since, as you said in another post, that means we should be evaluating that someone else’s action instead).
@ Gooby
> “value” or “utility” or “fulfillment of desires” or whatever are all subjective.
Everything beyond *cogito ergo sum* is subjective, philosophically speaking. Of the three, “utility” is necessarily the one that must be (and is) considered objective, or there is no reason or basis for discussing utilitarianism at all. One can simply begin and end with solipsistic hedonism, and get on with fulfilling whatever unavoidably narcissistic desires occur. It was (obviously, I believe) my intention, when I made a distinction between hedonistic and stoic pleasure, to separate arbitrary sensations from effects that result in materially increased productivity, which have more utility than self-centered pleasures; so regardless of whether you deem both “subjective”, that separation remains.
Two additional points. First, I’m unsure whether “mongoloid” has the repercussions or validity you seem to suggest, but I’m quite certain that judging an argument by the followers of the person who offered it is unreasonable. Second, I have neither formulated nor presented any strong opinions of any person in this discussion, only of their arguments. I suppose you meant to take exception to my use of the word “insane” in my analysis, but allowances for hyperbole must be accepted in the spirit in which it was offered. You admit your gedanken was absurd, and offer as justification an absurd stance dismissing essentially all statements as “subjective” and seemingly therefore useless. This juxtaposition reinforces my analysis. None of your intellectual positions is strong; is that tepid enough for you?
> Of the three, “utility” is necessarily the one that must be (and is) considered objective, or there is no reason or basis for discussing utilitarianism at all.
This makes my personal preferred objection to utilitarianism very simple: no, utility *cannot* be validly considered to be objective, so there is indeed no basis for utilitarianism.
@ Gooby
>“Wealth redistribution is ethical because it raises total utility, therefore it is moral to coercively take people’s wealth”.
Now we know the rest of the story. You aren’t interested in discussing philosophy, you have a political perspective you want to advance, with philosophical argument as a mere rubric.
By “wealth redistribution… to coercively take people’s wealth”, you’re obviously inveighing against taxation. But you give away the game with excessive redundancy: “coercive taking”, and government “coercively enforcing” statutes in a different bit, as if there were a way to do anything by statute without “coercion”. ‘Government is bad when it governs by force!’ you are saying, while reserving your criticism for when it does things you personally don’t want it to do. But if you were enlightened, you’d understand why taxation increases the “utility” of those dollars by amassing them with everyone else’s taxes in order to accomplish things that stochastic individual actions cannot ever achieve, and thereby increases rather than decreases the wealth of even the most heavily taxed, at least as currently practiced in civilized and free democratic countries. People who argue that taxes are good (as long as they are equitable [not the same as equal] and fractional [taking a portion of income rather than all of it], and the expenditures are controlled by democratic bodies and include provisions for oversight to minimize corrupt enrichment) because they leverage the taxed amounts to provide more “utility” than they would if controlled by the individuals they were collected from are not relying on utilitarian philosophy, but on actual and demonstrable economic calculations. Taxing income and spending those revenues doesn’t increase some abstract philosophical “utility” but a very real and physical *productivity*, even when you call it “wealth redistribution”. There are plenty of other mechanisms that are actual wealth redistribution in a much more direct and literal sense. And unfortunately for the US, the continued anti-tax wealth-hoarding rhetoric and policies folks like you keep pushing make those less ethical alternatives seem more and more sensible to more and more people when compared to the dysfunctional form of corporate capitalism we currently have, which has much worse ramifications than simple economic concerns or socially-constructed and legally mediated property claims.
So you’re barking up the wrong tree, grabbing the stick by the wrong end, and putting the cart before the horse, among other cliché analogies, by weeping about marginal taxation being bad from either an anti-utilitarian or pro-utilitarian perspective.
> “Everything beyond *cogito ergo sum* is subjective”
Remarks 1 and 2 explain what I mean by subjective vs objective: a proposition is subjective if its truth depends on the person saying it; a proposition is objective if its truth is independent of the person saying it. If a thermometer reads 71 and someone says it reads 47, then this person is wrong. The proposition “this thermometer reads 47” is universally false for all people; it does not matter which person says it. Contrast this with a statement such as “the best flavor of ice cream is vanilla.”
I appreciate your replies, but unfortunately I have not really learned anything. Perhaps you can simply answer this question. Suppose there are two people on an island, person X and Y. The only good on the island is coconuts. X has 1,000 coconuts stored in his stash whereas Y has 10 coconuts stored in his stash. How can a utilitarian, person Z, **objectively** determine whether total utility will increase or decrease after person Z coerces X and Y into a new distribution of 505 coconuts each?
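To make the question concrete, here is a minimal sketch (the three utility curves are stipulated purely for illustration; nothing in the scenario tells Z which, if any, is right) showing that the very same 505/505 redistribution raises, lowers, or leaves unchanged “total utility” depending entirely on which curve you assume:

```python
# Minimal sketch: the same coconut transfer looks good, bad, or neutral
# depending on the (assumed) utility function.
import math

def total_utility(stashes, u):
    """Sum each islander's utility of their own stash."""
    return sum(u(c) for c in stashes)

before = [1000, 10]   # X's and Y's stashes
after  = [505, 505]   # Z's coerced equal split

curves = [
    ("linear   u(c) = c",       lambda c: c),                # constant marginal utility
    ("concave  u(c) = ln(1+c)", lambda c: math.log(1 + c)),  # diminishing marginal utility
    ("convex   u(c) = c^2",     lambda c: c ** 2),           # increasing marginal utility
]

for name, u in curves:
    delta = total_utility(after, u) - total_utility(before, u)
    print(f"{name}: change in total utility = {delta:+.2f}")
# linear: 0.00 (total coconuts unchanged); concave: positive; convex: negative
```

The arithmetic is trivial; the problem is that no observation of X and Y settles which curve describes them, so the sign of the change is underdetermined.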
@ Gooby
>person Z, **objectively** determine whether
You are correct, you haven’t learned anything. You’ll have to correct that deficiency on your own, since by simply ignoring what I’ve tried to teach you, you’ve made it obvious there is no utility in my efforts. Ethics cannot be as easily determined as physical properties like temperature. Go figure.
@ Peter
>utility *cannot* be validly considered to be objective,
You missed the previous caveat: *NOTHING* can be considered objective (except for the unquestionable existence of the entity doing the considering: cogito ergo sum). But something (besides I/you/us) must be as close as is necessary to be considered objective *enough*, or the exercise is over (in fact, all exercises are moot) before it started. This is kindergarten-equivalent metaphysics that shouldn’t even be brought up when discussing ethics.
In consequentialism, consequences (which is to say teleologies: chronologically sequential cause-and-effect links between occurrences) MUST be assumed to be real (objective), or you cannot discuss consequentialism *even for the purpose of disputing it*. You’re just flapping your arms in the air and bellowing “nuh-uh!” as loud as you can.
Likewise, in utilitarianism, utility MUST exist, and be as close to concretely measurable as anything else, or you aren’t considering utilitarianism to begin with, and cannot coherently discuss whether it is invalid. And with deontology, whatever “rules” or “laws” or “norms” or “mandates” or “statutes” being referred to must be assumed to actually be something. It doesn’t matter where they “came from”, or how they are “enforced” or anything else: there MUST be such things, or obviously there can be no deontology to argue for or against.
The Problem of Induction is a LOT tougher than you’re giving it credit for, and all the physics and math in the universe can’t make the assumption stick that abstract things considered by philosophy *necessarily* conform to deductive logic. Even if these things are not (or are!) entirely fictional (and that includes such everyday assumptions as actions and causes and intentions, let alone the more esoteric stuff like values and utility and rules) and invented by philosophers for no purpose other than hypothesis, diddling around with quibbling over whether they “objectively” exist is pointless nonsense. If you want to imagine how the universe or ethics would work if they are not objective, you can only do so by comparison to an ontos or ethos where they are. Which is the control and which the gedanken is unimportant, and not something real philosophers waste time wondering about. At least not unless they are discussing metaphysics rather than ethics, because that is essentially the entirety of what metaphysics boils down to.
> something (besides I/you/us) must be as close as is necessary to be considered objective *enough*, or the exercise is over (in fact, all exercises are moot) before it started.
Yes, fine; but is “utility” such a thing, or not? I think not. Are you saying it is, or are you agreeing with me that it’s not, and thereby saying you think “the exercise is over” (or at least that it should be) as far as utilitarianism is concerned?
I’ve read your entire long post here and I still can’t tell what your answer is to that simple question.
@ Peter
>but is “utility” such a thing, or not?
Yes. No. Maybe. 😉
The answer is “yes” if you are discussing utilitarianism, regardless of whether you are utilitarian. The answer is “no” if you are discussing deontology, regardless of whether you are a deontologist. The answer is “yes and no and maybe” in all other circumstances.
You’re trying to skip to the end. But there isn’t one; it’s an eternal struggle. Your question (and your “I think not” answer) is based on a false assumption that either consequentialism or deontology is “correct” (and a contingent assumption that if one is correct the other is therefore not correct), as if they are competing scientific theories. That is a nearly mandatory attitude for a postmodernist, but is, if you’ll excuse the expression, *wrong*. Or, as several postmodernists who have tried to dismiss my positions on other topics just within the past day or so have put it, “not even wrong”. Their efforts have been unsuccessful for the same reason your consternation is occurring here, and illustrate the quintessential nature of what I mean by “postmodern”. Philosophical paradigms, proper ones at least, are not susceptible to being dismissed; they can only be discussed. The exercise is never over unless you misunderstood its purpose to begin with. That purpose is not idle speculation and endless semantic games, but it isn’t discovery and proof of a scientific theory (mathematical equation) either.
Utilitarianism is neither true nor untrue; it is simply a paradigm, a set of terms if you will, through/by which ethics can be discussed. In the real world (which includes religious tenets and political policy as well as our daily lives) ethics (morality without the theology, which I will again point out is not the same as theism) cannot be ‘solved’. Nor is it a discoverable set of laws, whether of the physical (mathematical) or statute (social) variety. It is the dubious necessity of deciding for yourself how and why to judge each and all of your actions, and oh by the way the actions of everyone else.
So I’ve found myself again providing a long answer to a short comment, because for all my life-long effort I have yet to figure out how to be eloquent, and can only manage to be verbose in its stead. In short, the answer is “that really isn’t as simple a question as you think it is”.
Thanks again for your time. I sincerely continue to hope it helps.
> Your question (and your “I think not” answer) is based on a false assumption that either consequentialism or deontology is “correct” (and a contingent assumption that if one is correct the other is therefore not correct)…
It isn’t a matter of “correct”, it’s a matter of “close enough to objective”. And you’re the one who set that standard: you said that *something* has to be “close enough to objective” for us to do philosophy at all. What that something is will depend on what kind of philosophy we’re trying to do. If we’re trying to do utilitarian philosophy, then utilitarianism has to be such a thing.
And whether or not utilitarianism is such a thing must itself be such a thing. If utilitarianism being “close enough to objective” is not itself a thing that is “close enough to objective”, then by your own standard, the utilitarian has sawed off the branch he’s sitting on. (And the same would be true of a deontologist who treated his preferred deontology as “close enough to objective”, but only for him and those who agree with him, not for those who take other philosophical positions such as utilitarianism.)
This is probably part of the same issue that came up earlier, when you said philosophy has no practical use and I disagreed. To me, if philosophy (or more precisely the “ethics” part of philosophy) is just constructing arbitrary abstract systems and discussing their logical consequences without any regard to whether they have any useful connection to the real world and our real experiences and the real ethical choices we face, then it’s not worthy of the name “ethical philosophy”. It’s just pointless word games, and I’m not in this discussion, or any discussion of philosophy, to talk about pointless word games. And it seems to me that if you are going to set a standard like “close enough to objective”, then you should have a similar attitude to mine. There’s no reason to care whether “utility” is “close enough to objective” if utilitarianism has no practical use.
> … as if they are competing scientific theories. That is a nearly mandatory attitude for a postmodernist
I don’t get this at all. Postmodernists, to the extent they have any coherent system of thought at all, reject the notion of comparing things as if they were competing scientific theories. Any such comparison presupposes that there are objective facts about the world, independent of anyone’s ideological beliefs, that can be discovered by experiment, and that testing theories against experiment can falsify them. Postmodernists reject those presuppositions.
> If we’re trying to do utilitarian philosophy, then utilitarianism has to be such a thing.
Ugh, I meant *utility* has to be such a thing. And similarly with my further comments on the same issue.
> In the real world (which includes religious tenets and political policy as well as our daily lives) ethics (morality without the theology, which I will again point out is not the same as theism) cannot be ‘solved’. Nor is it a discoverable set of laws, whether of the physical (mathematical) or statute (social) variety. It is the dubious necessity of deciding for yourself how and why to judge each and all of your actions, and oh by the way the actions of everyone else.
I agree with all of this. But I don’t appear to draw the same conclusion from it that you do. You appear to draw the conclusion that we can still use utilitarianism, or deontology, or any other ethical paradigm, and discuss ethics that way. I draw the conclusion that we don’t have *any* ethical paradigm that is any good, and we need to either find some new ones or learn how to discuss particular ethical questions without appealing to any general paradigm since none of them work.
> You appear to draw the conclusion that we can still use utilitarianism, or deontology, or any other ethical paradigm, and discuss ethics that way. I draw the conclusion that we don’t have *any* ethical paradigm that is any good
So my response is to recognize and understand both reality and the nature of philosophy (ontos and ethos, if you will); yours is to make petulant declarations about what is “needed” based on assumptions that are *wrong* about what is possible, or even ideal. It isn’t so much that we “can” use these paradigms, but that we *must*.
Seriously, I’m not trying to make this personal. I understand your plight and why the attitude you’re presenting is extremely common, and seems sensible, and even why you are basically helpless but to perceive my acceptance of what you see as the inadequacy of these paradigms as some sort of acquiescence or settling. You’re essentially saying that an ethical premise that isn’t perfect isn’t “any good”, that your proclamation “none of them work” is a meaningful claim. You expect an ethical theory to be like a scientific theory, and want to dismiss consequentialism or deontology or both because they don’t provide absolute moral certainty or authority by “proving” their own validity.
But your attitude and expectations are wrong, for at least two reasons. First, philosophy isn’t science, and that is not a flaw or deficiency, merely a difference, and even a benefit/purpose. Both consequentialism (considering ethics based on the consequences of an action) AND deontology (considering ethics based on application of ethical dictates) are true, and are good, and work. We can find new paradigms, modify and extend these, or even invent an entirely new general paradigm beyond “ethical questions” to begin with. Go for it. None of that will make these already developed paradigms any less important when discussing ethics. You’re demanding a “theory of everything” for ethics, but forgetting that we don’t even have a TOE for physics yet.
The second way you are wrong is related but even trickier to explain and understand. Let’s pretend that ethics not only could be but should be like physics. If you’re actually trying to apply logic to human behavior (which is not constrained by logic), then you should be able to realize that even if some new “theory of everything” ethical paradigm were invented, it would not actually be that unless it resolved all the inconsistencies of consequentialism and deontology (both internal to each and in the contradictions between the two) anyway. Consider an analogy to physics: did Einstein disprove Newton’s equations? Does QCD make relativity inaccurate? A physics TOE would have to not just explain all of these paradigms, but also explain why they sometimes conflict.
I could go on forever in just this vein, but I’m eager to move on to another issue, which I’ll address in my next missive.
> You’re essentially saying that an ethical premise that isn’t perfect isn’t “any good”
No. I’m not asking for perfection. See below.
> that your proclamation “none of them work” is a meaningful claim.
Yes, but not for the reason you give.
> You expect an ethical theory to be like a scientific theory, and want to dismiss consequentialism or deontology or both because they don’t provide absolute moral certainty or authority by “proving” their own validity.
Not at all. I expect an ethical theory to be something that can lead to some kind of consensus, even if it’s an imperfect one, about how to resolve at least some subset of ethical conflicts. But I don’t see any ethical theory actually doing that. Here we are days into this discussion and we can’t even come to consensus about what “consequentialism” and “deontology” mean, let alone how they would resolve any ethical conflicts. More generally, I don’t see ethical theories actually helping to resolve ethical conflicts in the real world. Instead, they give different sides in ethical conflicts one more reason to entrench themselves in their positions and fail to reach any meeting of the minds. The lesson I draw from this is that the strategy of “come up with a general ethical theory and then try to apply it to particular cases” does not work.
Of course you might disagree with the assessment I’ve just made, but if so, you’re not going to convince me by more and more detailed descriptions of what general ethical theories are and how they work. And you’re certainly not going to convince me by saying that ethical philosophy has no practical use: that’s just *agreeing* with me. I would like to see some examples of a general ethical philosophy actually *working* in a practical case, actually resolving some ethical conflict–not in the sense of *claiming* to resolve it in the abstract by applying its general method, but of actually resolving it by convincing real people who started out disagreeing to come to some workable consensus. If your only response to that is “that’s not what ethical theories are for”, then I guess we have a fundamental difference of opinion in this area.
> Consider an analogy to physics
I don’t think ethics is like physics. You’re the one who keeps bringing up that analogy. But to take your questions as you ask them:
> did Einstein disprove Newton’s equations?
No, observations did, by showing cases where they were not accurate. Einstein came up with a more general theory that made predictions that matched those observations, and also showed why Newton’s equations work within the domain in which they were known to work.
> Does QCD make relativity inaccurate?
Bad question since QCD is a quantum field theory, which is by definition consistent with relativity.
> A physics TOE would have to not just explain all of these paradigms, but also explain why they sometimes conflict.
“Paradigms” is not something any physicist came up with. A philosopher came up with it. Physicists *don’t* think of Newtonian physics and General Relativity, for example, as “conflicting”. They think of GR as a more general theory that includes Newtonian mechanics as a particular special case, applicable in a certain more restricted domain. And they also realize that someone in the future is likely to come up with a still more general theory that includes GR as a particular special case, applicable in a certain more restricted domain as compared to that more general theory. (Most physicists believe this more general theory will be some kind of quantum gravity theory, but no such theory to date has made any useful experimental predictions that can be tested.)
The history of ethics doesn’t look anything like this.
> It isn’t so much that we “can” use these paradigms, but that we *must*.
Why must we? I understand why we must face ethical conflicts and see what we can do to resolve them, but why must we use these particular paradigms to do it?
Sorry for the sequential “batch processing” spam responses, but I am loving these discussions, and hope I’m not being too obnoxious posting all I can when I can.
@ Peter
>for consequentialists, there are no moral rules *that apply directly to an action*; you have to apply rules about what is right or wrong, good or bad, to consequences
Yeah, no. You’re getting close, but you aren’t quite grasping the point I made. In consequentialism as an ethical system (not as an approximation of a practical morality that people might model on consequentialism and call that, but the actual philosophical precept of consequentialism as a logical yet moral force analogous to a field theory in empirical science) *there are no rules* about good or bad, there are only instances and categories. In trying to comprehend consequentialism, your standard/typical reasoning skills (sometimes called instincts, though that is a misnomer, or else called logic, though that too is not the case) cannot help but abstract the existence of these categories as set definitions that correspond to and can be expressed as “rules”, but those ‘rules’ not only don’t actually exist, they cannot exist. It is only when we construct an ethos with *no rules about what makes something good or bad* (even the meta-rule that, as I said, you cannot easily avoid constructing in your mind, which says “things with bad consequences are bad”) that we are dealing with actual consequentialism rather than engaging in semantic sleight of hand and word games. It is the bad consequences that make an action that caused them bad, not a rule that says so, because that would be deontology.
Is it possible for such a lack of rules to exist, physically or even intellectually or logically? Of course not (we presume, or else it is all that exists, and the distinction is what is imaginary rather than the ontos/ethos where that is the case), but you still have to try as hard as you can, and then try even harder, to assume it does exist, or all talk of ethics (in this context) is useless, not just the talk we place, epistemically, in the category “consequentialism”. And the same is true on “the other side” with deontology. There are no consequences (or causes) in deontology, no repercussions or effects. Just isolated occurrences (whether they be events such as actions or objects such as measurable quantities) that are each individually judged good or bad independently and solely based on whether they violate a “rule”.
> In consequentialism as an ethical system…*there are no rules* about good or bad, there are only instances and categories.
Ok, fine, then substitute “instances and categories” for “rules”. Go on and on as long as you like about meta and meta-meta and so forth. If you have something that deserves the term “ethics” at all, sooner or later it has to bottom out in something that’s “good” or “bad”. You yourself say “bad consequences”. Ok, fine, then how do I know whether some particular consequence is good or bad? There has to be *some* way of judging that for any given consequence. That is what I mean when I talk about “rules”. You seem to have a hangup about the term “rules” because of your particular definition of “deontology”, but I’m not using the term that way. I don’t know what the standard consequentialist term is for what I’m talking about; if you can suggest a different term for me to use, I’ll be happy to use it for this discussion.
> There are no consequences (or causes) in deontology, no repercussions or effects. Just isolated occurrences (whether they be events such as actions or objects such as measurable quantities) that are each individually judged good or bad independently and solely based on whether they violate a “rule”.
I’m not sure I agree with this narrow definition of “deontology”. Or, if you insist on using “deontology” in this narrow way, then I don’t agree that “deontology” and “consequentialism” as you’ve defined those terms (since your definition of “consequentialism” is narrow as well) are the only two possibilities for an ethical framework.
>“causing suffering” is wrong according to utilitarianism, and that’s an emotional repercussion, isn’t it?
A good question, much appreciated. In any consequentialism, “suffering” identifies, tautologically, a lack of good, or utility, or happiness, or what have you, which we might inextricably link in our minds with an emotional experience, but it is not the emotion that makes it bad, but rather the other way around. So one way of responding to your concern would be to say that it is essentially a coincidence that suffering is both bad and an emotional repercussion, albeit a perfectly predictable coincidence, because of the real (real? objective? meh) nature of emotions and morality.
Another way of seeing it is as the product of a moral instinct: do we feel suffering as a sign that something is bad, rather than its being bad because there is suffering? In an abstract sense, the link is not necessary. Nor is it, it turns out, in a concrete sense; sexual masochism and “no pain no gain” are indicative of the epistemology of the matter. Similarly, penance and punishment. These things aren’t typically related to utilitarianism, but that’s because it becomes too obvious that they’re too complex to be sorted out. We don’t need to calculate all the chaos in the atmosphere to predict the weather, as much as we might aspire towards that as an ideal.
Yet another approach would be considering suffering in more relative terms, though I would avoid that because it would get confused with the supposedly quantitative measurements of utility itself. Amputating a limb causes suffering regardless of whether it was a medical necessity or a workplace accident (ignoring the pain of the event itself, I mean; referring here to the long-term suffering of the enduring loss of the limb). And how shall we account for the anguish of a medical amputation that wasn’t necessary; is it the horror at the futility or the actual loss of function that we mean by “suffering”? And does that mean that people who are easily moved to tears or subject to depression actually “suffer” more from such injury than those who don’t?
So although the association of suffering and emotion is obvious and perhaps even inextricable, that does not mean they are the same thing, and it is the more abstract (though “objective” in postmodern parlance) suffering, rather than the more poignant, emotional, even *real* (but nevertheless “subjective”) suffering, that all ethical philosophical systems are actually dealing with.
The inverse aspect of philosophical ethics is the (novel; I’m not contending this is a standard or even accepted formulation) idea I mentioned in a reply to Gooby drawing a distinction between “hedonistic pleasure” and “stoic pleasure”. The common real-world experiences of pleasure and suffering are not necessarily the same as what is being referred to in ethics. This is unavoidable, because in the real world they are not “objective” enough to be useful in the way philosophical contemplation requires. So “suffering” is a stand-in/shorthand for “experiences that are typically avoided” rather than equated to the emotional experiences that “make” us avoid them. Likewise with “pleasure”, or, more commonly when discussing this seriously rather than trying to dismantle it with postmodernism, “happiness”: a stand-in for “occurrences which are preferred”. The divorce of ethical ramification and emotional repercussion, as you put it, is more obvious with “happiness”, which can be recognized as distinct from “cheerfulness”. Although that very distinction makes the actual intellectual problem worse, since in postmodern hands happiness is much more easily dismissed as “subjective”, i.e., trivial or even illusory. Still, the difference between a psychological state (still just a proxy for moral approbation or condemnation) and an emotional presentation (ramifications that are “merely” transient feelings) is made clear.
Did that help?
> In any consequentialism, “suffering” identifies, tautologically, a lack of good, or utility, or happiness, or what have you, which we might inextricably link in our minds with an emotional experience, but it is not the emotion that makes it bad, but rather the other way around.
So if “suffering” in the sense of a sentient being experiencing pain is *not* tautologically “bad”, but only a predictable consequence of something tautologically “bad”, how is the set of tautologically “bad” things defined? Is it some particular philosopher’s whim? Or is there some kind of at least putatively objective way of telling?
> We don’t need to calculate all the chaos in the atmosphere to predict the weather
We do if we need to predict it more than a few days ahead. Which is why we can’t predict it more than a few days ahead. The equivalent in ethics is that we can only reliably calculate consequences that are close enough, in space and time, to our actions.
> Did that help?
You’ve spent a lot of words talking about what “tautologically good and bad” are *not*. What I’m looking for is your view on what they *are*.
>how is the set of tautologically “bad” things defined?
They aren’t defined, they are experienced. What makes you assume that there is such a “set”? Is it impossible for you to imagine that the idea of morality itself is… imaginary? If so, why is that? (That last is purely a rhetorical question, in that attempting to answer it can only prove that your answer must be incorrect. The word “bad” doesn’t necessarily correspond to a set or category; it is just a description, not a logical symbol in a defined system of calculus.)
>We do if we need to predict it more than a few days ahead.
This isn’t just an assumption, it is a false assumption. The validity of a “calculate chaos” meteorological prediction decreases rather than increases as the length of time increases, since the chaos inevitably averages out. The reason we can’t predict weather more than a few days ahead (and can’t precisely predict extremely localized weather even a few hours ahead) is twofold: limited computational power, and limited observational precision. This combines with a third problem, which is that the assumption that weather can ever be accurately and precisely predicted using computation has not yet been empirically demonstrated to begin with. It is not necessarily a false assumption, but it is still just an assumption.
I wanted to address those two specific statements first, but now it is time to get to the REAL matter, which will be, again, an additional message.
> They aren’t defined, they are experienced.
Earlier you said that experiences are not what makes anything good or bad, they just happen to be correlated with what’s good or bad. But now you’re using experiences as the criterion for something being tautologically good or bad. This seems contradictory. I can’t make sense of your position.
> What makes you assume that there is such a “set”?
I’m not the one who claimed that there are things that are “tautologically” good or bad, you are. I’m just trying to understand what you mean by that. So far what you’ve said doesn’t make sense to me.
> Is it impossible for you to imagine that the idea of morality itself is… imaginary?
I can imagine it just fine. But if that is in fact the case, then nothing is “tautologically” good or bad in any objective sense, or even any “close enough to objective” sense. And then both consequentialism and deontology are meaningless. Which is fine with me, but you keep talking as though they mean something and using a lot of words to try to explain what you think they mean. What’s the point of all that if it’s all imaginary?
If your answer to the above is something along the lines of “well, these creatures called humans have these experiences and use these ethical theories to try to organize them, let’s try to systematically describe how that all works”, then that would be fine if we were discussing anthropology, but we’re not. At least, I thought we were not. If we are, I’ll just bow out of the discussion since I’m not here to discuss anthropology.
> The validity of a “calculate chaos” meteorological prediction decreases rather than increases as the length of time increases, since the chaos inevitably averages out.
This is garbled. The computer models that are used to predict the weather are not “calculating chaos”. They’re calculating approximations to differential equations that express physical laws, using finite differences since computers can only calculate to a finite precision. Chaos, or more precisely sensitive dependence on initial conditions, combined with finite precision of measurement and finite precision of computer calculation, is the reason why those finite difference approximations will inevitably diverge from the actual reality, and the degree of divergence *increases* exponentially as you push the prediction further into the future. It never decreases. There is no “chaos averaging out”; “chaos” is not some overlay or correction to some underlying orderly process.
> The reason we can’t predict weather more than a few days ahead (and can’t precisely predict extremely localized weather even a few hours ahead) is twofold: limited computational power, and limited observational precision.
That, *plus* sensitive dependence on initial conditions, i.e., chaos. That is what causes the divergence of the computer model from reality to increase *exponentially* as you push the prediction further into the future, and thus to make the predictions useless within a short time window. For the (rare) physical systems that are not chaotic (or for which the chaotic aspect only comes into play on very long time scales), this does not happen; the predictions diverge only *linearly* from reality as you push them into the future, even with limited computational power and limited observational precision. That’s why, for example, we can predict the motions of objects in the solar system with good accuracy decades or centuries into the future (and also “retrodict” them decades or centuries into the past, to demonstrate, for example, that the rotation of the Earth has been slowing for a long time).
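To make the exponential divergence concrete, here is a minimal sketch (my own illustration, using the logistic map, a standard textbook chaotic system, not an actual weather model):

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x). Two trajectories starting 1e-10 apart diverge
# roughly exponentially until the separation saturates at the
# size of the attractor (order 1).
r = 4.0                    # parameter value at which the map is fully chaotic
x, y = 0.3, 0.3 + 1e-10    # two nearly identical initial conditions

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
```

The separation roughly doubles every step, so a ten-order-of-magnitude improvement in measuring the initial condition buys only about thirty more steps of useful prediction. That, scaled up, is the weather situation: the chaos never “averages out”; it eats your precision exponentially fast.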
Maximising hedons is probably better than maximising paperclips. As far as arbitrary “high score” games are concerned, you could play worse ones.
That said, the pain/pleasure of others isn’t mine, and sadism/empathy are scope-insensitive emotions. So it’s not like this “high score” game is somehow a logically compelling necessity.
I just lost half an hour of text because of this damnable interface, so I’m going to skip responding to Peter’s quibbling entirely and address Tom’s last comment briefly before returning (or rather adding) to my planned discourse.
>So it’s not like this “high score” game is somehow a logically compelling necessity.
You have defined morality/ethics quite directly: it is not, cannot be, and should not be confused with, a “logically compelling necessity”. This explains why humanity has been forced to rely for thousands of years on the logically incomprehensible idea of “free will” to even imagine that morality exists.
Which brings us to my larger premise, which I will introduce with a quote from Peter, though this is simply a heading rather than the idea I will address:
>You’ve spent a lot of words talking about what “tautologically good and bad” are *not*. What I’m looking for is your view on what they *are*.
As I believe I did mention (perhaps subsequently to your query), they are not; that is what they are. Morals are not logically compelling, they are not physically necessary, they are not tautological. Anything that would be any (and therefore all) of these things is not ethics or morals, and expecting or demanding adherence to them (or using them predictively) would exclude them from being moral or ethical.
Obviously we could all continue to discuss these things interminably, just as more rigorous and astute philosophers have done for thousands of years. I’m fine with that, because I enjoy the process of argument, but I abhor the postmodern practice of confusing reasoning with logic. So I propose a different approach, at this point in the conversation, which I hope Peter and perhaps others will find more productive.
Let’s say that I’ve developed a new, apparently unique, and putatively universal ethical paradigm. Let’s imagine that I present it, briefly and therefore partially, and invite you all to consider and discuss it. Will any such efforts allow it to be more “successful” than consequentialism or deontology? Is it at all possible that it could, regardless of whether we find it convincing or perhaps even a reasonably compelling necessity, make the existing paradigms of “an action is good or bad because of its consequences” or “an action is good or bad because it adheres to or violates a rule” uninformative or incomprehensible? Would any such paradigm be impossible to explain as relying on teleologies, and therefore consequentialism, or on rules, and therefore deontology? Isn’t the consequent nature of utility or emotions unavoidable in any ethos? Doesn’t the fact that any ethical principles can be expressed as “rules” make all ethos deontology?
Let’s try it and see. I propose a theology (an ethos or morality, independent of whether it allows or requires theism) which has one object, words, and one principle, honesty. Nothing else in the universe necessarily exists, though anything or everything in our cosmos very well might exist. Actions (which are not limited to words but could be) are moral if they are honest, without any other referent or categorization, and immoral if they are not honest, or necessarily require or result in dishonesty. I refuse to “define” these terms further, relying on the commonly recognized meanings as sufficient, though more specific or particular “definitions” can be developed, so long as that process itself adheres to this theology. (One further caveat is needed to accommodate participation in this exercise by postmodernists: there can be no use of numbers, math, or abstract symbols, only words.)
Would you like to play a game?
> Morals are not logically compelling, they are not physically necessary, they are not tautological.
I’ve never made any of the claims you are denying here. As I’ve already pointed out, you are the one who introduced the term “tautologically”. I took you to be describing, if not something universally the case, at least something that was *taken* to be the case within the context of some particular ethical theory, such as consequentialism (which is the context in which you introduced the term). In other words, I took you to be saying that *within the framework of consequentialism*, some things are taken to be tautologically good or bad. Which I agree with; I think that within *any* particular framework, some things are taken to be tautologically good or bad. For a particular deontological framework, it’s usually easy to see what those things are, since they’re usually spelled out in the rules (e.g., the Ten Commandments). For consequentialism, they’re not, and I was trying to figure out what *your* understanding was of what those things are for consequentialism, or at least for some particular version of it. As far as I could tell, you never answered that question, and now you say I’m quibbling. I don’t see how it’s quibbling to ask you to explain a claim that you made.
> Would you like to play a game?
Not particularly, no. I note, however, that your proposed ethical theory satisfies my statement that within the framework of any ethical theory, some things are taken to be tautologically good or bad. For your theory, those are honesty (good) and dishonesty (bad).
Darn thing interpreted a bump of the mouse as “post comment”. To finish what I wanted to post: what I would like to know is what you (@TMax) think are the tautologically good and bad things within the ethical framework of consequentialism, since, as I noted, you have confirmed my statement that within any ethical framework there must be *some* such things.
@Peter
> that your proposed ethical theory satisfies my statement
Well, you played the opening round of the game at least, and that is most appreciated. My condolences on your loss through forfeiture immediately after that, but I’m all for you jumping back in at any time. There are no penalties for anything but dishonesty, and there are no penalties for that other than dishonesty.
I’m very serious about both the proposal and the ethos, by the way. You wanted (someone else) to create a better paradigm. I’m guessing you didn’t imagine I’d already done that, years ago, and for all the same reasons you’ve been frustrated by standard philosophy, and many more. But I feel compelled to mention that all ethos have tautological (however you interpret it) “definitions” of good and bad, because good and bad are themselves tautological (as long as you honestly interpret this use with the same “definition”/meaning as the prior one.) In consequentialism, causes are tautological; they are not explicitly defined that way as they seem to be in deontology for the simple reason that cause and effect are already unavoidably (tautologically) assumed and proven to exist in the real world. Deontology cannot rely on that same “logical intuition” to make whatever rules it is referring to exist.
I felt compelled to mention that because I live this ethos daily and moment by moment. And I believe (which is to say ‘know for a fact regardless of whether I can convince you’) that everyone else does, also, despite being only unconsciously aware of it. So my paradigm also satisfies the desired combination of “imperfect consensus” and ‘involuntary logical necessity’ that we have been hypothesizing.
> I felt compelled to mention that because I live this ethos daily and moment by moment.
So there is at least one ethical theory that *does* have practical use in your view?
Anyway, you do deserve credit for actually trying to live your ethos. Many people don’t practice what they preach.
> In consequentialism, causes are tautological
I don’t understand what you mean by this or how it addresses the question of what is tautologically good or bad in consequentialism. “Causes” doesn’t seem to me like the sort of thing that can be good or bad.
@ e…nudo
>I find troubling making a question I cannot possibly answer, the key question in deciding my course of action.
That’s what people refer to as “free will”. I posted on Reddit yesterday, in response to a meme on r/funny concerning God’s consternation at Its creation actually using its free will to disbelieve in It (meant as an insult to the religious rather than as actual humor), that the Garden of Eden story in Genesis is actually an illustration of this very aspect of free will and “moral sentiment”. People who ‘disagree’ with consequentialism, especially the utilitarian variant, routinely make the mistake of believing that actual knowledge is necessary for the suffering caused by an action to be accrued to the immorality of the actor. This reveals an inherent deontological premise that postmodernists accept. Or actually two: that an ethos must be “fair” from their perspective to be valid, and that suffering that results from an action is the fault of whoever executed that action.
All of our perceptions and beliefs about morality are a melange of consequentialism and deontology. It isn’t only postmodernists who insist that only a “pure” version of either can be “correct”; most modernists thought so, too.
>Btw, “antecedent” and “precedent” are synonyms
Thank you, Peter, for that pedantry. But I meant to refer to the common quasi-legalistic idea (misbegotten though it may be) of ‘pressedence’, being the impact of a decision that occurred *before* on a decision that was subsequent. This is in direct contrast to the idea of “consequence”, as it indicates a lack of physically inevitable consequence, though I agree with you that the etymological antecedent of the term, ‘pre-cedence’, is more grammatically and semantically correct. I would have written “subsequent” if the word had come to me, as you seem to have understood.
Since I didn’t directly reply to your confusion about what constitutes “quibbling”, I’ll mention that even though your comment was clearly pedantry (in the unflattering connotation of that term) I don’t consider it quibbling, because you did not (as far as I can tell, or at least not consequentially even if subsequently) use it as an excuse to avoid addressing the actual point being made. Had you claimed you were unable to understand what I meant to say because of this etymologically determinist approach, that would have qualified it as quibbling. Pedantry is merely an aside; quibbling is an obstacle.
@Peter
>> It isn’t so much that we “can” use these paradigms, but that we *must*.
Why must we?
Because they are the paradigms available. You can, at will, avoid discussing ethics. Whether you can avoid considering ethics is an epistemic issue. But regardless of whether “use” is meant to refer to arguing about or thinking about applying ethics, some paradigm is necessary to be doing so, and these are the paradigms we must use in order to be doing so. Practically speaking, of course. You could do what you fantasized doing and what I actually accomplished, and envision some additional paradigm, but A) that merely expands what is meant by “these”, because B) to be honestly ethical, *all* available paradigms must be considered, even if only long enough to be rejected, or else we are not actually dealing with ethics, but simply pretending to. It would be like saying you’re discovering or proving laws of physics but never testing them physically.
> regardless of whether “use” is meant to refer to arguing about or thinking about applying ethics, some paradigm is necessary to be doing so
Here we disagree. Nothing forces us to use a paradigm. Ethical situations and ethical conflicts can be described without using any particular paradigm. Solutions can be suggested without using any particular paradigm. One can of course suggest a paradigm to use, but it’s not required.
> to be honestly ethical, *all* available paradigms must be considered, even if only long enough to be rejected, or we else we are not actually dealing with ethics, but simply pretending to
Although this sounds reasonable, I’m not sure it’s actually required. It might well be prudent in the case of paradigms that have been around a long time and in which a lot of people believe.
> It would be like saying you’re discovering or proving laws of physics but never testing them physically.
I would say that discussing ethics without a paradigm is like discussing the results of particular physical experiments without having a general theory that predicts things in that domain. You can still describe the experiments and look for patterns in the results.
> You can still describe the experiments and look for patterns in the results.
Not really, no. You can naively babble about the physical activity involved in an experiment, or you can chat about the general domain as if the real world were as neatly divided into such categories as scientific research is, obviously. But it isn’t really an experiment without specific patterns being predicted by some specific theory, and only if you are aware of those would your description or discussion be useful or accurate enough to be considered much more than garbled nonsense.
I feel bad just addressing this response, though, because doing so might feed into your mistaken notion that philosophy could or would benefit or be more useful if it were more like science. But I’m an optimistic person, so I’m hopeful there’s at least some small chance you’ll understand how backwards that is, and why discussing an ethical philosophy requires knowing and accepting (if only to intelligibly refute) its premises. Perhaps you can converse about consequentialism without studying consequentialism, but not without knowing what cause and effect are, and thereby accepting the objective existence of consequences. What distinguishes ‘utilitarians’ from ‘deontologists’ is whether they consider consequences or rules to exist *as moral objects*, not whether both are existentially present in the real world. But naive people continue to “think” that ideas like ‘punishment is a consequence of breaking rules, so deontology is consequentialism’ or ‘“bad results are caused by bad acts” is a “rule”, so consequentialism is deontology’ are insightful, and they are mistaken.
To try to salvage your analogy from its postmodern bias, your claim about discussing experiments should be replaced with calculating results. We can perform the math without any knowledge or consideration of theory or predictions. Because that’s science, which *must* be that way; arithmetic principles cannot be domain-specific. In contrast, philosophy is otherwise, conscientiously so.
> it isn’t really an experiment without specific patterns being predicted by some specific theory
Tell that to all the scientists who ran important experiments and collected important data without having any theory that explained what they were doing. I think you have a mistaken view of how much of science progresses.
> your mistaken notion that philosophy could or would benefit or be more useful if it were more like science
I have made no such claim. I have only said that philosophy is not like science, with which you appear to agree. I have never said that philosophy *should* be more like science. I think you are either misreading things I have said, or not really reading what I have said at all but only responding to what you imagine I have said.
> discussing an ethical philosophy requires knowing and accepting (if only to intelligibly refute) its premises.
Sure, that’s true of any logical system.
> knowing what cause and effect are, and thereby accepting the objective existence of consequences.
Of course, this is obvious. I have never said otherwise.
> What distinguishes ‘utilitarians’ from ‘deontologists’ is whether they consider consequences or rules to exist *as moral objects*
Yes, I understand that this is how you are using the terms ‘consequentialist” and “deontologist”. It still doesn’t answer the question of how consequentialists, by your definition, tell which consequences are good and which are bad, which I have asked repeatedly and you haven’t answered.
> postmodern bias
I don’t understand where this is coming from at all. I suspect we have very different ideas of what “postmodernism” is. Earlier I argued against your claim that “postmodernism” means wanting everything to be like science.
> I meant to refer to the common quasi-legalistic idea (misbegotten though it may be) of ‘pressedence’, being the impact of a decision that occurred *before* on a decision that was subsequent.
Ah, that makes sense. I wasn’t intending to be pedantic, I just thought you might have made a typo and meant to use another word.
Huemer, could you comment on Hans-Hermann Hoppe’s argumentation ethics? He talked about it in this lecture.
http://propertyandfreedom.org/2016/10/hans-hermann-hoppe-on-the-ethics-of-argumentation-pfs-2016/
@Peter
> So there is at least one ethical theory that *does* have practical use in your view?
Not the way you’re thinking, which is the equivalent of putting crude oil in your gas tank and expecting the car to run. Practical application of an ethos comes from a theology, not a theory, even if philosophers do appear to apply it using gedanken experiments, or suggest personal behavior based on the paradigm. The practical use of a theology is a religion, whether it is an organized doctrine or a personal morality. Though the phrase “ethical theory” might be used to refer to any and maybe even all of these aspects of ethics, your usage suggests you’re thinking of it as the same thing as a scientific theory. So it is no wonder you find existing ethical theory to be insufficient, since you are mistaken about its methods, purpose, and utility.
>I’m not sure it’s actually required
Do you mean required of the person trying to be ethical, or required by the nature of morality itself? New and old, common or esoteric, popular or not, it is necessary to consider all available frameworks to be truly moral, even those you don’t personally find convincing. People affected by your actions, including those who generate or enforce any rules you may be violating, have as much right to judge your actions as you do, or you aren’t being moral, you’re simply excusing narcissism. You in turn have the freedom to ignore their perspectives *after* considering them, of course, just as you can skip the whole bother of trying to be moral to begin with.
Overall this entire discussion keeps highlighting for me the problem that is the postmodern mindset, and the problems it causes as well. (Which are far worse than the supposed inadequacies of utilitarianism or deontology.) Ethics isn’t an algorithm, and you find that troublesome, but it really is definitive. If guidance for the behavior of entities which putatively have self-determination (whether described as free will or freedom or even moral responsibility) could be reduced to algorithmic calculations or enforced as logical necessity, either those entities would not actually have self-determination or such a system would not actually be ethics.
> Not the way you’re thinking, which the equivalent of putting crude oil in your gas tank and expecting the car to run.
I have no idea what you mean by this. I made it quite clear what I meant by “practical use”: that you are trying to live your ethos, not just state it or talk about it or debate it in the abstract. This is the same thing you describe:
> Practical application of an ethos comes from a theology, not a theory, even if philosophers do appear to do so using gedankin, or suggest personal behavior based on the paradigm. The practical use of a theology is a religion, whether it is an organized doctrine or a personal morality.
Given what you mean by “religion” (since you have described yourself as a priest of an atheistic religion), I agree with this. I just don’t see how it’s any different than what I described as an ethos having practical use.
> the problem that is the postmodern mindset, and the problems it causes as well. (Which are far worse than the supposed inadequacies of utilitarianism or deontology.) Ethics isn’t an algorithm, and you find that troublesome
I only find it “troublesome” in the sense that, considered purely abstractly, it would be nice if we had a method that simple for resolving ethical conflicts. But we don’t, and that’s that.
What I can’t understand is how you are connecting “the postmodern mindset” with finding it troublesome that ethics isn’t an algorithm. As I commented in another post just now, I suspect you and I mean very different things by “postmodern”, and I am having a lot of trouble understanding what you mean by it.
>What I can’t understand is how you are connecting “the postmodern mindset” with finding it troublesome that ethics isn’t an algorithm.
I know. That’s what I mean by the postmodern mindset to begin with. Both finding it troublesome that ethics isn’t an algorithm, and being unable to understand why ethics isn’t an algorithm.
I wish I could be less opaque. I’m not quite ready to post the magnum opus diatribe trying to explain it comprehensively. But maybe soon.
> That’s what I mean by the postmodern mindset to begin with. Both finding it troublesome that ethics isn’t an algorithm, and being unable to understand why ethics isn’t an algorithm.
Neither of these things maps to “postmodern mindset” to me, so I still don’t understand where you’re coming from here. (Unless you mean that “unable to understand why ethics isn’t an algorithm” is just a special case of “unable to understand much of anything”, which *would* be a much better map to “postmodern mindset” to me.)
> it is necessary to consider all available frameworks to be truly moral, even those you don’t personally find convincing. People affected by your actions, including those who generate or enforce any rules you may be violating, have as much right to judge your actions as you do, or you aren’t being moral.
You are saying three different things here. One is that you should consider the effects of your actions on other people before deciding what to do, which is of course true, and I would expect any prudent person to agree with it.
The second is that you should consider whatever general rules are in force in your society before deciding what to do, which is of course also true, and I would also expect any prudent person to agree with that. (Someone else made a similar statement way upthread.)
The third is that you should consider all available ethical frameworks to be truly moral, which I don’t agree with, and which is not implied by either of the other two statements. It’s a much stronger claim.
>The third is that you should consider all available ethical frameworks to be truly moral, which I don’t agree with, and which is not implied by either of the other two statements. It’s a much stronger claim.
It is implied by your acceptance that both of the other two statements are true and what you would expect a “prudent” (aka ethical) person to do. If there were a third ethos, and a fourth, and a fifth, and you likewise recognized each of them as true and what you’d expect a good person to do, would that make it more obvious that my third statement is not simply stronger, but self-evident?
> It is implied by your acceptance that both of the other two statements are true and what you would expect a “prudent” (aka ethical) person to do.
I was not using “prudent” as a synonym for “ethical”, at least not in the sense that you are using the term “ethical”, since you are using it to mean someone who agrees with your claim that a person must consider all available ethical frameworks. I am not claiming that a prudent person must agree with that, nor do I agree that they should.
> If there were a third ethos, and a fourth, and a fifth, and you likewise recognized each of them as true
You could try and suggest more, I suppose, but you’ve already suggested one that I don’t think a prudent person must agree with, so I don’t think your hypothetical here is realistic.
> what you’d expect a good person to do
I was not using “prudent” as a synonym for “good” either. I don’t think it is.
>I was not using “prudent” as a synonym for “ethical”…
Your entire response was quibbling and pseudo-semantic rubbish. You agree that a person would agree with the different judgements of ethos based on the paradigms of those ethos (not a foregone conclusion, though it sounds like, and you made it seem like, it would be), and that is what matters, not how it is described or false claims you want to make about why I described it as I did.
>so I don’t think your hypothetical here is realistic.
I didn’t suggest a third; you’re confabulating my third statement as a third example similar to the first two, which you *did* agree with, before you realized it was going to put you in the uncomfortable position of recognizing your error in reasoning. So now you’re going so far as to suggest that being prudent isn’t a good thing. More important than whether prudence is good (it is), the word directly relates to the issue being discussed, which is whether purposeful contemplation can possibly be considered purposeful contemplation when it arbitrarily rejects an opportunity to contemplate. All I did was observe that there really isn’t any necessary difference between one kind of purposeful contemplation (prudence) and another (ethics).
It might be worth your while to go back and think harder about the “hypothetical”. Would it make any difference if you did not find a third, fourth, or any number of other examples to be true? If you refused to consider them to begin with, would that be ethical?
My position was not that the first two statements (the only two examples I gave) implied the third statement, but that your agreement with the first two implied the truth of the third. My original point was, and still is, that ethical conduct requires a best effort, because of the nature of ethics itself. Your rebuttal is, effectively, “nuh-uh”. How many different ethical systems should we consult to resolve the conflict, by learning what their shared nature is and also determining whether any specific conduct supports or refutes my contention? You might say “only one”, but which one? A random one? Perhaps, but would that actually resolve the conflict ethically, or would the resolution thereby be randomly or arbitrarily determined? I suggest it would be best, because I am magnanimous, that you should choose whichever particular one makes the most convincing case that you are correct and I am mistaken. But how many will we have to, or choose to, or be able to examine to figure out which one that is? My position remains unchanged, and is thereby not merely implied but proven, because the correct answer is also the one that is most favorable to you, which is “as many as possible”.
> I didn’t suggest a third
To clarify, I was referring to your earlier honesty/dishonesty ethos, the one you said you yourself live by.
> Your entire response was quibbling and pseudo-semantic rubbish.
My inclination is to say the same about yours, so evidently we do not have enough common ground to have a useful discussion. Good luck to you.
>I was referring to your earlier honesty/dishonesty ethos
You didn’t offer that as an example though, nor did you address it, the way you did the other two. But I did imagine a prudent person would agree with it.
>we do not have enough common ground to have a useful discussion.
It’s been very useful from my perspective. Perhaps to others as well; we may never know. But for that reason, I’ll address this to anyone who has or will ever read this, including you, Peter:
Thanks for your time. Hope it helps.
A.) While this would maximize our intrinsic good in the short term, it is not something we would want to do long term. If we set this social precedent, people would be afraid to go to surgeons for fear of having their organs harvested. Having social cohesion and rule of law maximizes utility far more than saving 5 lives.
B.) I’d make a similar argument for this claim. Most riots don’t seem to cost a large loss of life- if any. Randomly framing people is setting a very bad precedent for ignoring rule of law.
C.) I’d just bite this one, although I will admit this is a strong objection. I was reading about this in the SEP article on W.D. Ross the other day.
D.) Diminishing marginal utility answers this. Essentially, the person who is no longer being tortured is at such a low point that stopping his suffering matters far more than thousands of small annoyances. Who benefits more from getting 50 extra grains of rice: someone who has 0 grains, or someone who has 5,000,000,000 grains? (See the toy calculation following these replies.)
E.) I don’t think this is a utilitarian debunk. If I were to bite it, the worst you could say is that I’d give a cookie to a serial killer.
F.) Diminishing marginal utility, see D.)
G.) I disagree that a utilitarian would reach this conclusion. We care about rule of law because of the long-term utility it brings. An active murderer is causing far more damage than someone who is donating suboptimally. Furthermore, A is already “saving” lives by donating, so we’d have to factor that in too. (A toy tally following these replies spells this out.)
H.) This is ignoring the fact that John would derive pleasure from giving a cookie away. It’s entirely possible that he would derive more pleasure from giving it away than from eating it. This is really just an issue for classical utilitarians. A flourishing-based utilitarian could easily argue that giving it away makes John more virtuous and self-actualized by helping someone else.
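To make the diminishing-marginal-utility point in D.) and F.) concrete, here is a toy calculation; the logarithmic utility function and the specific numbers are my own illustrative assumptions, not anything the objections themselves specify. Take $u(x) = \ln(1 + x)$ as the utility of holding $x$ grains of rice. Then the gain from 50 extra grains is
$$u(50) - u(0) = \ln 51 \approx 3.9$$
for the person with nothing, but only
$$u(5 \times 10^9 + 50) - u(5 \times 10^9) \approx \frac{50}{5 \times 10^9} = 10^{-8}$$
for the person with five billion grains: hundreds of millions of times smaller. The same shape of reasoning is what lets a utilitarian say, in D.), that ending one person’s torture outweighs a vast number of mild annoyances.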
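The reply to G.) can be made concrete the same way; assume, purely for illustration, that A’s actual donations save 5 lives a year (the original example says only “several”). Measured against doing nothing, the yearly ledgers are
$$\text{A}: +5 \text{ lives}, \qquad \text{B}: -20 \text{ lives},$$
so A’s conduct is straightforwardly better. Only if each agent is measured against his best available alternative does A come out behind ($5 - 55 = -50$ for A versus $-20 - 0 = -20$ for B), and a utilitarian can treat that second measure as guidance about what A should do next rather than as a verdict that A is morally worse than a serial killer.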
@Thaddeus
>this is not something we would want to do long term. If we set this social precedent, people would
I think this is the heart of the criticism of utilitarianism rather than a defense of it. Using reasoning like this, practically anything can be “gamed out” to be insufficiently based in objectivist-style unenlightened self-interest. And, likewise but inversely, anything can be justified by utilitarianism, up to and including totalitarianism. I’m not saying utilitarianism is either worthless or a perfect ethic, but to examine it reasonably we have to limit consideration to direct consequences, not whether they conform to any personal vision of an ideal society. If what is best for society can only be what results in a “best society”, it isn’t an ethos so much as a justification scheme for political unilateralism. So “social precedent” is vaporous as a consequence, since it is merely a proxy for the personal opinion of whichever visionary/sociologist/psychologist is imagining how people could/should react.
It seems to me that a society which takes social obligation more seriously, including even a moral duty to save five lives by harvesting organs from one, might seem insane to us, but it is not necessarily impossible to consider it ethical, so long as that moral duty is not the premise of a legal requirement. We might think that nobody would want their doctor to take their organs, but wouldn’t everybody want their doctor to give them organs?
And I’ll return again to the real moral of the trolley problem, which isn’t a simplistic arithmetic of which quantity is larger, but whether inaction results in as much moral responsibility as action does.
I now think that ethical intuitionism may be inclined toward moral particularism, and toward a consequentialism based on particularism.