Risk Refutes Absolutism

Some Extreme Views in Ethics

I thought of an argument several years ago to refute rights-absolutism. Actually, part of the argument was inspired by David Friedman’s objections to absolutism in The Machinery of Freedom (as I vaguely remember it, he basically says that if you’re an absolutist, then you should probably hold that everything is impermissible). But the clever part of the argument I thought of myself. Then I was scooped by Jackson & Smith, in a now well-known paper: their paper, giving essentially the same argument, appeared while my own paper was under review. 🙁 That made it harder, but eventually I got my argument published.

Anyway, so I’m going to tell you why absolutism is indefensible. Actually, the argument is more general than absolutism. Here are some ethical views you might hold:

Rights Absolutism: The view that rights (at least some rights?) are absolutely inviolable and can never be outweighed by any costs or benefits. If you have to choose between violating a right and letting the entire world be destroyed, you should let the world be destroyed.

*Aside: some people may regard absolutism as built into the concept of “rights”. Libertarians are especially likely to be absolutists (especially baby libertarians). Now, since I know that someone is going to try to pretend that there aren’t really any such crazy libertarians, I’ll give you a pair of quotes from Ayn Rand:

“Since Man has inalienable individual rights, this means that the same rights are held, individually, by every man, by all men, at all times.” “When we say that we hold individual rights to be inalienable, we must mean just that. Inalienable means that which we may not take away, suspend, infringe, restrict or violate—not ever, not at any time, not for any purpose whatsoever.”

Deontological Absolutism: More general than rights absolutism, this is the view that there are some non-consequentialist moral concerns (maybe rights, maybe something else) that outweigh all consequentialist concerns, however large. Kant held this view. E.g., he thought that you should never lie, even to save someone’s life.

Lexical Priority Views: Even more general, this is the view that there is some type of practical concern that outweighs any quantity of some other type of quantifiable concern. This is compatible with consequentialism.

Example: John Stuart Mill is usually understood as taking a lexical priority view, despite being a consequentialist. He thought there were “higher” and “lower” quality pleasures, and, apparently (?), that any amount of a higher pleasure outweighs any quantity of a lower pleasure.

The Risk Problem

So here’s the problem. I’m going to phrase it in terms of rights, but you can do analogous reasoning for any lexical priority view.

Say you’re considering some action that might produce some harm. Suppose it’s the sort of harm such that, if you were certain that the harm would occur, then the action would be a rights violation, and hence, according to absolutists, absolutely prohibited no matter what the consequences. Now, what should you say if the action only has some non-extreme probability (neither 0 nor 1) of causing the harm? Is it still prohibited?

Example: I suppose that punishing a person for a crime they did not commit is a rights violation. So if you know that John is innocent, it would be wrong to punish him, no matter what benefits could be gained, or harms avoided, by doing so. (Anscombe says something like this, without the “rights” talk.) Okay, what about punishing a person who has some probability of being innocent?

I can think of 3 views the absolutist could take about this. All of them are extremely bad views.

  1. The action is absolutely proscribed, as long as there is any nonzero probability of causing the harm.
  2. The action is absolutely proscribed if the probability is 1. Otherwise, it can be justified by sufficiently large costs or benefits.
  3. The action is absolutely proscribed if the probability is over some threshold, T, which is strictly between 0 and 1.

Problem with (1)

That view basically implies (given general facts of reality) that all actions are wrong. That’s because basically every action has a nonzero probability of causing a harm of a kind such that it would be a rights violation to cause that harm with certainty.

Example: we can’t punish any criminal defendants, because there is always some chance, however small, that they are innocent. So shut down the criminal justice system.

Example: Don’t scratch your nose, because there is a nonzero chance that doing so will somehow set off some bizarre series of events that winds up killing someone. (If you knew that this would happen, I take it, it would be wrong to scratch. So, on view (1), it is also wrong if there is any nonzero probability of it happening.)

Problem with (2)

This view defeats the point of being a rights theorist. The probability of a harm happening is never literally 100%.

You could take this view if you want, but it wouldn’t capture any of the intuitions that absolutists are trying to capture. They want their view about rights to apply to some actual actions.

Problem with (3)

This is the clever part. Say there is a threshold, T, such that if (and only if) the probability of harm is less than T, then the action can be justified by large enough costs or benefits.

There could be a pair of actions, A and B, where each of them is slightly below the threshold, and they produce large enough benefits, so both are justified. But it could also be that the combined action, A+B (the “action” of doing both A and B), is over the threshold, so A+B cannot be justified by any benefits, however large.
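To make this concrete, here is a small sketch with made-up numbers, assuming the harms from A and B are independent (an assumption the absolutist cannot plausibly rule out in general):

```python
# Made-up numbers for illustration: a threshold T, and two actions A and B
# whose probabilities of causing the harm each fall just below T.
T = 0.5
p_A = 0.4  # probability that A causes the harm
p_B = 0.4  # probability that B causes the harm

# Probability that the combined action A+B causes the harm at least once,
# assuming the two potential harms are independent events:
p_AB = 1 - (1 - p_A) * (1 - p_B)

print(p_A < T, p_B < T)  # each action falls below the threshold
print(p_AB > T)          # but the combined action exceeds it
```

Any threshold strictly between 0 and 1 is vulnerable to this: stack up enough individually sub-threshold risks and the combined probability crosses T.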

That looks like something close to an incoherence in the theory. If you consider A+B as a single action, then it is prohibited, but if you consider A and B as two separate actions, then both are permitted. But I think that the permissibility of some behavior cannot depend on how you divide things up — e.g., whether you count it as “one action” or “two actions”.

Analogy: Galileo has a great argument to show that objects should fall at the same speed regardless of their weight. Suppose you think that in general, heavier objects fall faster. If you dropped a (heavy) cannon ball and a (light) musket ball, you think the cannon ball would fall faster. Now, what if you tied the two of them together with a thin string and then dropped them? If you consider them now as one object, this one object should fall faster than the cannon ball by itself would fall. On the other hand, if you consider them as two distinct objects, then the musket ball will be trying to fall slower than the cannon ball; therefore, the musket ball will act as a drag on the cannon ball, so they will fall slower than the cannon ball by itself would fall. But it cannot be that the actual physical result depends on whether this assemblage counts as “one object” or “two objects”.

Example: Think of the justice system again. Let’s say that each criminal defendant is permissibly punished because each is sufficiently likely to be guilty, and the benefits are sufficiently large. But, if we run a criminal justice system in any large society, we are pretty much certain that some innocent people are going to be punished, sometimes. So the absolutist view lets you punish each of these defendants individually . . . but prohibits having the criminal justice system.
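The same arithmetic, with hypothetical numbers, shows why a justice system of any size is nearly certain to punish someone innocent:

```python
# Hypothetical numbers: each convicted defendant has a small probability of
# being innocent, presumably below any threshold T the absolutist picks.
p_innocent = 0.01     # chance that a given convicted defendant is innocent
n_defendants = 1000   # convictions handled by the system

# Probability that at least one innocent person is punished, assuming the
# cases are independent:
p_at_least_one = 1 - (1 - p_innocent) ** n_defendants

print(p_at_least_one)  # very close to 1
```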

Solution

You can still be a moderate deontologist. This view holds that rights can in principle be outweighed. The existence of a “right” has the effect of raising the standards for justifying a harm. That is, it’s harder to justify a rights-violating harm than an ordinary, non-rights-violating harm. E.g., you might need to have expected benefits many times greater than the harm.

This view has a coherent response to risk. The requirements for justification are simply discounted by the probability. So, suppose that, to justify killing an innocent person, it would be necessary to have (expected) benefits equal to saving 1,000 lives. (I don’t know what the correct ratio should be.) Then, to justify imposing a 1% risk of killing an innocent person, it would be necessary to have expected benefits equal to saving 10 lives (= (1%)(1,000)).

Note: I think we have to say this in order to avoid the problem where the permissibility of some behavior would depend on how we count actions.
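The moderate deontologist’s discounting rule is just expected-value arithmetic. Here is a minimal sketch, using the post’s illustrative 1,000-lives ratio (not a claim about what the correct ratio is):

```python
# The justification bar for a rights-violating harm, discounted by the
# probability that the harm actually occurs. RATIO is the post's
# illustrative number, not a considered estimate.
RATIO = 1000  # expected lives saved needed to justify one certain killing


def required_benefit(p_harm):
    """Expected benefit (in lives saved) needed to justify imposing a
    probability p_harm of killing an innocent person."""
    return p_harm * RATIO


print(required_benefit(1.0))   # a certain killing: 1000 lives
print(required_benefit(0.01))  # a 1% risk: 10 lives
```

Because the bar scales linearly with probability, it no longer matters whether you evaluate two risky actions separately or as one combined action: the required benefits simply add up.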

This avoids the crazy consequences of absolutism, where, e.g., you can’t scratch your nose. The probability of the nose-scratching killing someone is so small that it gets justified by the modest consequentialist reason in favor of scratching. An absolutist could not say this.

I Love Corporations

Here’s something I don’t understand. It looks to me as if there is quite a lot of distrust and hostility toward corporations, especially large corporations, compared to other people and organizations. I put “compared to…” in there, because I could understand it if people were just distrustful in general (maybe that’s an evolutionary adaptation, whatever). But it seems as if people are much more distrustful of corporations than they are of (a) the government, (b) people they know in their personal lives.

It seems that way to me, from comments that students periodically make, comments I see on the internet, etc. You hear conspiracy theories about the corporate elite running things, vague complaints about how the “system” is “rigged”; you see corporate villains in TV shows and movies, in which the heroes are usually government agents (cops, government spies, etc.). Of course, libertarians are way more distrustful of the government, but libertarians are a small fraction of society. The rest of society, with non-libertarian political views, is more distrustful of corporations.

This is weird to me, because of the three groups, (a) government, (b) individuals in ordinary life, and (c) corporations, (c) appears to me to be by far the best behaved, the most beneficial, the smallest threat — and in ways that should be readily apparent to us.

There must be a lot of people who will be incredulous that I could possibly say such a thing. So let me explain what I mean.

Personal Experience

I would initially expect most people’s attitudes to be pretty closely tied to their personal experience, more so than their book learning or what they hear on the internet. Now, I have had personal experience with individuals, corporations, and government. All three are, of course, sometimes unsatisfactory. But my experience with large corporations is way better than my experience with either individuals or government — better from the standpoint of my ending up feeling satisfied, or being made better off by interacting with them.

It looks to me as if there are a good number of asshole individuals out there — a lot of people are just very self-centered, not good at and not interested in seeing anyone else’s perspective, not particularly interested in being fair, rational, or moral. Anyway, there are few people from whom one can gain much value. Now, those same people are of course in corporations, including management positions. So you might anticipate that they would cause corporations to act like assholes too. But, as far as I have been able to directly observe, the situation is quite the reverse.

The customer experience

Customers of big corporations are often unreasonable and disagreeable, and the company puts up with it and bends over backwards to make the customers happy. Example: I buy a product at a big chain store, take it home, cut off the packaging, then decide, for no particular reason, that I don’t like it anymore. I take it back to the store to return it. Dialogue: “Is there anything wrong with it?” “Nope, I just don’t want it anymore.” “We’re very sorry, sir.” Then they give me my money back. That’s the sort of interaction that I typically have with big corporations and their representatives. (In case this isn’t obvious: in that story, I’m the one who’s being a jerk.)

That is to say nothing of the enormous benefits that I directly reap, every day, from interacting with all these corporations, especially the big ones. For example, I’m typing this on a computer made by a big corporation; I would have no bloody idea how to make a computer, and would probably take a million years to do it without the help of any big corporations. At the same time, I’m sitting on furniture and wearing clothes made by big corporations. They’re high quality and amazingly affordable. If I tried to make my own clothes, I have no doubt, it would come out terrible, and it would take nearly forever. Even if I went to a tailor — but he had to make the clothes without the help of any supplies from big corporations — it would still come out pretty crappy, and it would be many times more expensive.

Basically, if it weren’t for the big corporations, I think we’d have the standard of living of primitive tribes. You don’t have to be overflowing with gratitude all the time (it’s normal to take for granted the great goods in your life, so okay) — but it’s very strange to me if someone, in a position similar to mine, is filled with resentment and distrust at the organizations that regularly provide enormous benefits to them for a fraction of their reservation price.

Now, I understand that these corporations aren’t doing all this out of the goodness of their hearts. They’re in it for the profit. I don’t care; I’m just glad that they’re producing enormous benefits for me and society. And I can hardly complain about the profit motive, since I don’t work for free either. If the university doesn’t pay me, I don’t show up to work!

The employee experience

What about our experiences in the employee-employer relationship? I suppose we might be resentful because we never feel we’re getting paid enough. Whatever your job is, you probably think you deserve more. Or maybe you just look at the low-income manual laborers, and think they deserve more.

Would you act differently if you ran a corporation? Let’s say Amazon offers you the CEO position tomorrow. It comes with a $10 million annual salary. (I’m making that up — I don’t know how much the actual salary is. But that’s realistic for a big corporation.) You have the option of reducing your salary and giving the remaining money to your employees. You could cut your salary by, say, $9.9 million (who needs more than 100k a year?), distribute that money to the employees, and in doing this, raise each of their annual salaries by about $13. (Amazon has 750,000 employees; 9,900,000/750,000 ≈ 13.) Would you do that?

I would not, and neither would most people. Maybe a few very unusual people would. But we can hardly be resentful and distrustful of someone for just behaving the way the vast majority of normal people would behave.*

*By the way, if you’re an altruist, you most definitely should not do that. Because you could instead give that $9.9 million to the most effective charities, and thereby probably save around 5,000 lives every year. Which, by the way, is the sort of thing that some of these rich capitalists in fact do. E.g., Bill Gates and Warren Buffett have literally saved millions of lives through the Bill and Melinda Gates Foundation. How many lives have you saved?
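For the record, here are both back-of-the-envelope figures spelled out. The salary numbers are the hypothetical ones from the text, and the ~$2,000 cost-per-life figure is a rough order-of-magnitude assumption for top charities, not a precise estimate:

```python
# CEO-salary arithmetic from the text (all figures hypothetical/approximate).
salary = 10_000_000   # assumed CEO salary
keep = 100_000        # what you keep for yourself
employees = 750_000   # approximate Amazon headcount cited in the text

given_up = salary - keep                   # $9.9 million
raise_per_employee = given_up / employees  # about $13 per employee per year

# The altruist alternative: divide by a rough assumed cost per life saved
# via highly effective charities to get "around 5,000 lives".
cost_per_life = 2_000  # assumed
lives_saved = given_up / cost_per_life

print(round(raise_per_employee), round(lives_saved))
```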

The citizen experience

I don’t know about you, but my experiences with government are much less satisfactory. They don’t have a customer service number. They don’t have any employees whose job is to try to resolve complaints and keep you happy. They don’t take returns under any circumstances.

They demand money for services that you never asked for, don’t want, and in some cases did not even receive. They issue orders and threaten you with physical violence if you don’t follow their orders or hand over the money they want. Sometimes, they demand that you do free labor for them (“jury duty”), again under threat of violence.

Now, I understand the arguments that this institution is necessary. I understand how someone could think that. But I don’t really understand how a person could feel happy about their interactions with the state, yet feel dissatisfied and resentful about their interactions with corporations.

The saying in the business world is “the customer is always right.” The attitude in government is “the government is always right (even when it’s wrong).” Thus, if a company makes a mistake (gives you a defective product, or whatever), they will generally apologize profusely and pay for their mistake. If the government makes a mistake (e.g., the police fail to respond to your call for help while your house is being broken into, or they bust down the wrong door in one of their no-knock raids and shoot your dog), their attitude will be, “oh well, sucks for you.”

The attitude conveyed by most businesses is “You’re the boss.” “Welcome!” “We’re so happy to see you!” “Thank you, and have a nice day.” “Let us know if anything about your experience is not to your liking.” Etc. Sure, the employees are not really sincere in these expressions of emotion. But at least the business thinks they should act like they care about you.

The government has no such idea. The attitude conveyed in everything they do is “We’re the boss,” and they have no interest in pretending to care about you. Do what we say, give us your money, then get out. If there’s anything about your experience that is not to your liking, you can go fuck yourself. (Note: Not actual quotations.)

Sometimes, you see an irate and unreasonable customer loudly berating an employee of some business over the business’ perceived failure. The employee generally listens patiently and tries to fix the problem. Try doing that to one of the government agents who are there to “serve and protect” you. You’ll probably wind up in jail, if not in the hospital.

Am I idiosyncratic? Is this not the experience of other people? How does it come about that people have more positive feelings about government than business? Or is that not really the case?

History

Maybe we shouldn’t rely on personal experience so much. Look at history. Look at the experiences of people around the country, and around the world.

There are periodic stories about crimes committed by corporations — defrauding customers, polluting the environment, insider trading, etc. But btw, before you get indignant about the harm to the environment, think about your own role. If you’re buying all the products that cause that environmental damage, it’s hard for you to get indignant at the people who are making them for you. People are making plastic goods, gasoline-powered vehicles, and meat products because ordinary people want them, because we (most of us, anyway) won’t buy alternative products. So maybe we should save our biggest resentment for ourselves, the average consumers.

Anyway, there are also stories about the crimes committed by governments. They make the crimes of corporations look cute. When governments misbehave, they do stuff like murdering hundreds of thousands of civilians. Or kidnapping millions of people and holding them prisoner for years. And I don’t mean in some indirect or speculative way — I mean literally sending employees with guns to go shoot people. And in case you think I’m just talking about other governments, bear in mind that the U.S. government killed hundreds of thousands of civilians in the Iraq war. In Vietnam, they probably killed a million. If there were a corporation that did shit like that, people on all sides of the political spectrum would condemn it as the most evil corporation ever.

With most of my readers, who are libertarian-leaning, this is preaching to the choir. But the observations above aren’t controversial or recherché libertarian ideas. They’re all well-known. Everyone knows how businesses and governments behave; that’s why you never see someone trying to chew out a cop for poor service. Everyone knows at least something about the great war crimes of the past, and we know they were done by governments. Everyone also knows about the value of the things that we get from corporations all the time — that’s why we keep buying them.

Hypotheses

I have thought of three psychological explanations.

1: Power worship. People hold more affection for government for the same reason that the government behaves worse: Because they have power. Ordinary human beings do not admire those who treat them well. Ordinary human beings admire power.

2: Ressentiment. People resent the rich, but only the rich who might be thought to have earned their money. For example, people don’t resent the British royal family, or lottery winners, because those rich people have so obviously not done anything to earn their wealth. Therefore, they don’t make us feel inferior. Rich business people, though . . . Grr. Who do they think they are?

3: Democratic Ideology. Maybe we are very generous in our attitudes toward the government because we identify with the government. We buy into the democratic ideology which teaches us that in a democracy, “the people rule”. So whatever the government does, it’s really us doing it. We can’t be mad at ourselves. It’s kind of like how you let your family members get away with crap that you wouldn’t tolerate with strangers — you might even make excuses and pretend that your criminal family member is really a good guy at heart, despite those few people that he murdered. (They probably had it coming. And what about all the people that he didn’t murder? Most murderers are much worse!)

Psychologizing SJT’s

An Annoying Habit

Earlier (https://fakenous.net/?p=1354), I hypothesized that a good part of what motivates Social Justice Warriors (SJW’s) is the desire to preserve the particular social group they belong to, and to acquire status within that group.

This is a very annoying and divisive kind of claim to advance, because it does not actually engage with the SJWs’ theories as theories. It ignores the question of the truth of those theories and ignores the reasons that SJW’s themselves would offer for their ideas. It might also be taken as a personal attack. (Not necessarily, though; it’s not necessarily bad to try to preserve one’s group or increase one’s status.)

In this post, I won’t correct these problems; I’m just going to make things worse. I’m not going to argue the case for or against the ‘Social Justice’ ideology. I’m not doing that because I have so little buy-in to that worldview on its face that it just isn’t interesting to me to talk about. (It’s like why I wouldn’t take part in a debate about whether the Earth is flat. It isn’t a live issue to me, and I wouldn’t expect to learn anything from thinking about it.)

So if you want to know whether the Social Justice ideology is true, don’t read this. Also, if you are an SJW, you probably won’t get anything out of this.

But the psychological/sociological analysis of the movement is of some interest to me. So, I’m going to try to explain why I think that SJW’s are doing the things I say they’re doing. This won’t convince any SJW’s, but it might be interesting to others.

SJT’s

Background: SJW’s can be divided into two categories (with possible overlap, obviously): Social Justice Activists (SJA’s), and Social Justice Theorists (SJT’s). The former are political activists. The latter are people who devote a lot of time, often a whole career, to theorizing about “social justice” from a particular ideological perspective, one focused on race, sex, and other ‘identity’ categories, ‘systems of oppression’, &c.

This post is particularly about SJT’s. I think SJT’s are largely engaged in

(i) working to preserve a social group (the ‘social justice subculture’, if you will), and

(ii) promoting their own social status within that group.

Those aren’t the only things they are doing, but I think those things are a large part of what they are doing. (So, for example, they are not merely trying to understand reality or to improve the larger society.)

Evidence

1. General implausibility

Of course, a significant part of why I think SJT’s are doing these things is that I just find their ideology extremely implausible in various ways. (But if you don’t agree with this, there is no quick way that I could expect to convince you.) Some of the things they find oppressive or offensive, I find very hard to see as creating a major problem for anyone who wasn’t looking to feel oppressed or offended. So, that naturally makes me want to look for other things they are doing, besides trying to describe reality.

2. Preaching to the choir

If you look at SJT writings (which I’ve looked at some of), you’ll notice that they tend to presuppose a far-left, identity-politics ideology. They almost never give arguments that could be expected to appeal to a conservative, a libertarian, a moderate, a reasonable undecided person, or even a leftist of a different variety.

(It’s not impossible to defend a point of your ideology in a way that could appeal to reasonable people who don’t initially share that ideology. See, e.g., my arguments on gun control, immigration, and drug prohibition.)

This could be explained by the hypothesis that their intended audience is other committed SJT’s (surely that is their actual audience), not people with different views. This, in turn, could be explained if they are doing the things I say (preserving the social group and gaining status within it) — then it makes perfect sense that they would mainly be talking to their own group. It is much harder to understand if they are trying to improve the larger society.

3. Linguistic style

Another feature of SJT writings that will probably jump out at you immediately (if you’re not an SJT) is how hard they are to understand. They are often filled with undefined, specialized jargon, vague abstractions, and otherwise confusing sentences.

From Eve Kosofsky Sedgwick’s The Epistemology of the Closet: “An assumption underlying the book is that the relations of the closet — the relations of the known and unknown, the explicit and the unexplicit around homo/heterosexual definition — have the potential for being peculiarly revealing, in fact, about speech acts more generally. It has felt throughout this work as though the density of their social meaning lends any speech act concerning these issues — and the outlines of that ‘concern’ it turns out are broad indeed — the exaggerated propulsiveness of wearing flippers in a swimming pool: the force of various rhetorical effects has seemed uniquely difficult to calibrate.”

One hypothesis is that the linguistic style is intentionally exclusionary: they are specifically trying to keep outsiders from hearing or adding to their conversation. I know it works on me: if I’m looking for a non-libertarian article to read and possibly respond to, I’m not going to pick one that I have to spend five hours just trying to understand. And so the insiders are able to maintain an insular conversation with each other.

But, whether or not SJT’s are actively trying to discourage readers, they certainly are not trying to encourage outsiders to engage with them. Again, this makes sense if their purposes are as I suggest.

4. Rejecting objectivity

There’s a pretty good overlap between far-left theorists (including SJT’s) and people who criticize ideals of objectivity and rationality.

I think rationality and objectivity are necessary if one wants to benefit society. However, they are not needed — indeed, they are a hindrance — if one’s aim is to preserve one’s preferred ideology regardless of the truth, or to signal one’s loyalty to a social group. I also think these facts are readily available to almost any smart, reflective person. So, if a person is trying to undermine objectivity and rationality, that’s evidence that their goals are more along the lines of ideology-defense and affiliation-signaling than benefitting society.

5. Inconsistent pattern of concern

Some of the pattern of concern of SJT’s is hard to reconcile with their stated values. For instance, ending patriarchy and oppression of women is a key SJT value. But then one would think that the treatment of women in the Islamic world would be a central focus of their concern, much more so than the (vastly better) treatment of women in the West.

Another key SJT value is diversity, especially in education. It is said that we need affirmative action in the academy so that students can be exposed to the different perspectives of people who are very different from the students. But then one would think that SJT’s would especially favor affirmative action for people with a variety of different ideologies.

One would also think they would especially favor affirmative action for immigrants. E.g., black people from Africa would be more favored than American blacks, and much more than middle class American blacks.

One would also think SJT’s would favor affirmative action for Asians, at least in those areas where Asians are underrepresented, such as philosophy.

None of these things is the case. I think this sort of thing calls into question whether their main goal is really to promote the values that they profess.

6. Assumptions about SJW status hierarchy

I’m assuming here that there is a social status hierarchy in the woke culture, that people who identify with that culture want higher status, and that status is tied to claimed oppression and claimed concern about others’ oppression. Is there independent evidence of these assumptions?

There have been some interesting cases of people making false claims of SJ-related oppression (being oppressed because of one’s identity group). For instance, the Jussie Smollett case, or the Rolling Stone rape case. If claimed oppression did not confer status, there would be little point in faking it. So it’s clearly not just right-wing ideologues who think SJW’s have a status hierarchy tied to oppression; actual SJW’s (or aspiring SJW’s) perceive that hierarchy and try to gain position in it.

There are also cases of people publicly expressing outrage with dubious sincerity. For instance, in the Rebecca Tuvel case, some people signed the petition, then later, after it appeared that the profession was not getting behind it, wanted their names removed from it. In some cases, we see people expressing outrage about a book, article, or speaker, without having read the allegedly outrage-inspiring texts. I would cite Charles Murray and Noah Carl here.

Granted, people vary a lot in their emotional reactions, so there may be many people who feel genuine outrage at things that do not bother me. But I just find it hard to credit outrage at an article one has not read, or outrage that evaporates when one learns that other people are not getting on the bandwagon.

All of this makes sense, however, if people are trying to gain status in an ideologically-defined subculture. Now, most of these are not SJ theorists per se. But it is relevant to note that there is this kind of status hierarchy that people are trying to climb.

By the way, I found some of the discourse in the Democratic Primary Debates striking. Candidates tried to score points by pointing to their being female or of a minority race. Some tried to discredit others by pointing to the others’ wealth. Quite odd, on its face. But they were trying to get status with the SJW base. (This is compatible with their really thinking that white men are bad and wealthy people are bad.)

7. Psychological mechanisms

The basic psychological mechanisms I’m appealing to are ones that I think we have independent knowledge of. We already know about Groupthink, which happens when a group is too ideologically uniform. There is then a tendency for group members to publicly take increasingly extreme and unreasonable positions. Explanation: they are trying to outdo each other in manifestations of loyalty to the group’s ideology.

I think we also already know, in general, that human social groups, especially ones that provide a sense of meaning and purpose (like religions, cultures, and ideologies) don’t like to disband.

We know that a person’s ideology (religious or political) can be very emotionally important, and that leaving it can be extremely hard. I have not experienced this myself, but I have been told about it by people who have. The famous right-wing ideologue David Horowitz was once a communist. He describes how it was extremely emotionally difficult to leave that ideology, which he had to do after he learned about the Soviet atrocities. Luckily for him, he found another ideology to fervently devote himself to.

So, if you have an ideologically defined social group, it makes sense that there would be people concerned about protecting the ideology. And I don’t just mean rationally defending it from fallacious objections because you are concerned about the truth. I mean there would be a motivation to defend it from any criticism, to insulate it from falsification.

How SJT’s Will Respond

As I say, I don’t think many SJT’s will find the above convincing. Maybe they’ll flat-out deny the factual claims (“No, we don’t have any status hierarchy. I don’t know why you would think that.” “No, there’s nothing obscure about our writing; I have no idea what you’re talking about.”) Anyway, I’m sure they’ll say they aren’t trying to climb a status hierarchy or defend the ideology from being tested by evidence.

But their saying that won’t be significant evidence that I’m wrong, because that’s what we would expect them to say if they are in fact doing those things. Of course if you’re trying to climb the hierarchy and protect the ideology, then you’d want to insist that the ideology is just true and that you’re only motivated by social justice.

And by the way, no, this doesn’t mean that I’m insulating my own view from falsification (cuz I know some people are about to say that). I’m not saying nothing would be evidence against my claims. I’m saying specifically that the disagreement of SJT’s won’t be strong evidence against my claims.

We Need Capitalists

A Fictional Capitalist

A lot of people hate “capitalism”. Many of these people, however, don’t actually know what capitalism is, and so they hate things about “capitalism” that aren’t actually features of capitalism. But I’m going to set that problem aside for now. At least some people are bothered by things that are genuinely key features of capitalism.

So here is one central objection to capitalism that is actually about capitalism: on its face, capitalism appears to reward certain people handsomely for socially valueless activities that no one deserves to be paid for. In particular, capitalists can make a lot of money while seemingly doing no work. On its face, that looks unjust.

Terminology

Just to make sure you know what I’m talking about:

Capitalism is an economic system that allows private control of the means of production by capitalists.

Socialism, by contrast, has collective control of the means of production, by or “on behalf of” the workers. This could mean control by the state, or (in the anarcho-socialist view) worker cooperatives.

Capitalists are people whose income derives from owning capital. A business owner gets to collect the profits from a business, not for doing any particular work, but just for owning the stuff that is being used to produce goods. You could become a capitalist in a small way right now (if you’re not already): go online and buy some stocks. The stocks represent partial ownership of some businesses. If they go up in value, you can make a profit.

An Objection to Capitalism

So here, again, is the objection that I think many have to capitalism:

  • In the capitalist system, empirically, capitalists not only make a living; they often become very rich. But you shouldn’t be able to get rich just because you own stuff. You should have to do real, productive work. Owning stuff isn’t productive work.
  • The workers in a business, meanwhile, typically make much less money than the capitalists. Yet the workers are the ones doing the actual productive work! This looks unjust.

People then try to devise theories to explain how the capitalists have managed to scam society. Notice, by the way, that the underlying evaluative assumption behind the objection is something that most conservatives would agree with — that people should be paid (only) for being productive. (Of course, there are other objections to capitalism that turn on values that conservatives would disagree with.)

To be clear, the objection is not that business management is a useless activity, like “Oh, managers are just pushing papers around; that’s useless.” (That’s kind of a dumb objection.) Defenders of capitalism are liable to confuse capitalists with managers. To be a capitalist, per se, is not to manage a business. Nor is it to do anything else at all, except own stuff, and collect income from that. And the possibility of doing that is central to capitalism.

Recently, I bought some Boeing stock. In so doing, I became a part owner of that business. I am now in principle entitled to some small share of that business’ profits. I can assure you, however, that I have absolutely no intention of helping to manage Boeing. I have no intention of doing any work whatsoever for Boeing. The “work” I will do will be strictly limited to selling those shares at some future date, hopefully at a profit. That’s it.

A Real Capitalist

Of course, many capitalists are also business managers. But (a) as that example shows, the “capitalist” function and the “management” function are distinct; and (b) for the world’s richest people, it is actually the “capitalist” role, not the “manager” role, that makes most of the money. E.g., look at Bill Gates, Warren Buffett, Jeff Bezos: their wealth is almost entirely due to their stock holdings.

So I think this is a good philosophy-of-economics question: how is being a capitalist, as such, productive? How, if at all, does that function make a person worthy of receiving payment? If it doesn’t, is that an injustice, or some other sort of flaw, in our society?

I’m going to guess that most of my readers here are pro-capitalist. But you still might not know exactly how to answer those questions.

Three Cheers for the Capitalist

There are at least three important functions served by capitalists (and these concern specifically the “capitalist” role, not the “manager” role).

1. Risk Acceptance

Business is risky. Most new businesses fail within a few years. Even big, long-established companies frequently go into decline. E.g., Sears was the largest U.S. retailer through the 1980s, yet in 2018 it filed for Chapter 11 bankruptcy.

If business activity is to go on at all, someone has to assume that risk. Workers could do that in theory, but, empirically, few want to. When you take a job, you don’t want to have to front the company some money to get started or expand operations — and then stand to lose that money if the business fails.

So it’s left to another group of people to play that role — the capitalists. When a business fails, the capitalists are in line to take the loss. In return, if the business succeeds, they collect the profits.

2. Deferred Consumption

Most People

If the economy is to grow, someone has to set aside resources today and, instead of consuming them, use them to try to increase future production. Anyone could do that, but, empirically, most people don’t want to do it, or don’t want to do very much of it. Most people, if they acquire some extra money, will shortly spend it.

So it falls to a small group of people to do most of the investing. In return for forgoing immediate consumption, they get a little bit more expected money at a future date.
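The arithmetic of that tradeoff can be sketched with a toy calculation. All figures here are hypothetical (a $10,000 sum and a 5% expected annual real return are my assumptions, not numbers from the post):

```python
# Toy illustration of deferred consumption (all figures hypothetical).
# Forgo consuming $10,000 today and invest it at an assumed 5%
# expected annual real return.
principal = 10_000
rate = 0.05  # assumed expected return, not a real-world figure

one_year = principal * (1 + rate)             # a little more money next year
thirty_years = principal * (1 + rate) ** 30   # compounding over decades

print(round(one_year, 2))      # 10500.0
print(round(thirty_years, 2))  # 43219.42
```

The point is just that the reward for waiting is modest in any one year, though it compounds substantially over decades.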

3. Evaluation and Allocation

It is not a simple matter to have a productive, functioning society. It’s not sufficient to just tell everyone, “Okay, go do some productive work.” Someone has to decide what work should be done, and as a matter of fact, most ideas about that will not work. Most business plans will destroy value rather than create value. I.e., the economic value of the goods or services they will make available will be less than the value of the resources they will use up.

(Note: this is true in an advanced, prosperous society. In a primitive society, it would be much easier to find straightforwardly useful things to do. In an advanced society, there are many more bad business possibilities.)

Because business is risky and most possible business ideas will not work, someone has to evaluate business ideas and decide to which ones resources should be allocated. This, by the way, goes not just for new business proposals, but also for existing businesses. Even a business that has been around for a long time needs constant reevaluation — maybe resources should be diverted away from it to other businesses.

Most people don’t want to do that. Moreover, most people cannot do that — if they try, they will do a very bad job of it. Some relatively small group of people are interested in doing this task and are good at it. In the capitalist system, they will be rewarded with more resources, which they can also direct in efficient ways, and so on. The people who are bad at this task lose their resources and then have to stop doing the task.

The above is the basic theory of capitalism. Of course, there can be failures in practice. It’s possible for an individual or business, in some cases, to acquire resources not through productive activity but through scamming or stealing. The real world is always imperfect, whether it’s a capitalist, socialist, or other system.

We could try a different system. We could have government officials evaluate business ideas, and allocate taxpayer money to the ones they consider worthy. That would be the state socialist system. Most government officials, though, are not very good at this, and the socialist system does not have a credible mechanism for stopping people who are very bad at this job from continuing to do it.

What’s It Worth?

You might be tempted to say: “Okay, so the capitalists are doing something useful. But the workers are still doing the main productive activity. Surely the capitalists are not doing so much as to justify their making millions or billions of dollars.”

No, it’s plausible that they actually are. Two important points about this:

  1. The activity of the capitalist is leveraged, so to speak — i.e., a capitalist’s actions affect what many other people are doing. If an individual blue-collar worker does a good job or a bad job, it makes a small difference to that company’s productivity. But a capitalist’s decisions can easily make the difference to whether that entire company exists or not. They affect the disposition of much larger amounts of resources than the actions of an individual worker. So the productivity benefit of having a single skillful capitalist, rather than an incompetent one, can easily be many times greater than the productivity benefit of having a single skillful worker rather than an incompetent one.
  2. Also keep in mind the role of supply and demand. There are many more people who are willing and able to do blue collar work than there are people who are willing and able to serve the functions of the capitalist. As with anything else, scarcity drives up the price of capitalists.

Conclusion: Our objection to capitalism is misguided. Capitalists serve crucial productive functions, and their wealth is plausibly explained by their large contributions to productivity.

Existential Risks: AI

Some people worry that superintelligent computers might pose an existential threat to humanity. Maybe an advanced AI will for some reason decide that it is advantageous to eliminate all humans, and we won’t be able to stop it, because it will be much more intelligent than us.

When journalists write about this sort of thing, they traditionally include pictures of the Terminator robot from the movie of the same name — so I’ve gone ahead and followed the tradition. I gather that this annoys AI researchers. (As in this news story, which claims that a quarter of IT workers believe that Skynet is coming: betanews.com/2015/07/13/over-a-quarter-of-it-workers-believe-terminators-skynet-will-happen-one-day/. That story is typical media alarmist BS, btw. The headline is completely bogus.)

I have no expertise on this topic. Nevertheless, I’m going to relate my thoughts on it anyway.

(1) Consciousness

It seems to be an article of faith in philosophy of mind and AI research that, as soon as computers become sophisticated enough at information-processing, they will literally be conscious, just as you and I are. I personally have near-zero credence in this. I think there is basically no reason at all to believe it.

This, however, does not mitigate the existential risk. A non-conscious AI can still have enormous power and could still cause great damage. If anything, its lack of consciousness might accentuate the threat. Genuine consciousness might enable a machine to understand what is valuable, to understand the badness of pain and suffering, and so on — which might prevent it from doing horrible things.

Though I don’t think conscious AI is on the horizon, I do think we are, obviously, going to have computer systems doing increasingly sophisticated information-processing, which will do many human-like tasks, and do them better than all or nearly all humans. These systems are going to be so sophisticated as to be essentially beyond normal human understanding. It is possible that some of them will get out of our control and do things they were not intended to do.

(2) Computers Are Not People Too

However, the way that most people imagine AI posing a threat — like in science fiction stories that have human-robot wars — is anthropomorphic and not realistic. People imagine robots that are motivated like humans — e.g., megalomaniacal robots that want to take over the world. Or they imagine robots being afraid of humans and attacking out of fear. Or robots that try to exterminate us because they think we are “inferior”.

Advanced AI won’t have human-like motivations at all — unless we for some reason program it to be so. (In my view, it won’t literally have any motivations whatsoever. So to rephrase my point: AI won’t simulate humanlike motivations, unless we program it to do so. Henceforth, I won’t repeat such tedious qualifications.) I don’t know why we would program an AI to act like a megalomaniac, so I don’t think that will happen. It won’t even have an instinct of self-preservation, unless we decide to program that. It won’t ‘experience fear’ at the prospect of its destruction by us, unless we program that. Etc.

The desire to take over the world is a peculiarly human obsession. Even more specifically, it is a peculiarly male human obsession. Pretty much 100% of people who have tried to take over the world have been men. The reason for this lies in evolutionary psychology (social power led to greater mating opportunities for males, in our evolutionary past). AI won’t be subject to these evolutionary, biological imperatives.

(3) Bugs

But that does not mean that nothing bad is likely to happen. The more general worry that we should have is programming mistakes, or unintended consequences.

Computers will follow the algorithm that we program into them. We can know with near certainty that they will flawlessly follow that algorithm. The problem is that we cannot anticipate all the consequences of a given algorithm in all possible circumstances in which it might be applied, especially if it is an incredibly complicated algorithm, as would be the case for an AI program. And in those cases where an algorithm implies ‘absurd’ results, results that are obviously not what the designer intended, a computer has no capacity whatsoever to see that. It still just follows the algorithm. (Again, that’s because it is not conscious and has no mental states at all.)

Nick Bostrom gives the example of a computer system built to make paperclips more efficiently. The computer might improve its own intelligence, eventually figuring out how to escape human control and convert most of the Earth’s mass into paperclips. This would have human extinction as an unfortunate side effect.

Now, I don’t think that specific example is likely, nor is it a typical example of the sort of unintended consequences that algorithms have. The usual unintended consequences would be more complicated, so not very good for making points in blog posts.

The point of concern, in my view, is that when we write down a general, exceptionless rule, that rule almost always turns out to have some absurd consequences in some circumstances. With a computer program, you can’t just put in a clause that says, “unless this turns out to imply something obviously crazy.” You can’t rule out the obviously crazy unless you first have a complete and perfectly precise definition of that.
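A toy example of this (my own hypothetical, not one from the post): a simple, exceptionless pricing rule that is followed flawlessly even where it implies something the designer obviously never intended.

```python
# A simple, exceptionless rule: apply a discount to a price.
# The computer follows it flawlessly, including in circumstances
# the designer never anticipated.

def discounted_price(price: float, discount: float) -> float:
    """Apply a fractional discount (0.10 means 10% off)."""
    return price * (1 - discount)

print(discounted_price(100.0, 0.10))  # 90.0, the intended case

# An unanticipated input: a "discount" over 100%. The rule implies
# the store should pay the customer, which is obviously not what the
# designer intended, but the program has no capacity to notice that.
# It just follows the rule.
print(discounted_price(100.0, 1.50))  # -50.0
```

Ruling out the crazy case requires the designer to have anticipated it and defined it precisely in advance; the program cannot supply the missing "unless this is obviously crazy" clause itself.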

(4) Crashing

So, I think the simple problem of unintended consequences is a more realistic concern than the worry that AI is going to get its own ideas and decide that it hates us, that it wants political power, that we’re inferior, etc.

But in fact, I still don’t think this is all that big of a threat. The main reason is that, when programs malfunction — when the complicated algorithm the computer is following has an unintended consequence — what almost always happens is that the program crashes, that is, it just shuts down. Or it does some bizarre, random thing that seemingly makes no sense.

Now, that could be very bad if, say, your program is in charge of a robotic surgery, or air traffic control, etc. But it isn’t going to start acting like an agent coherently pursuing some different goal (which is how things go in the sci-fi stories).

So when designing AI, I think we need to have backup systems to take over in case the main system crashes.

(5) The Real Threat

So far, I seem to be minimizing the purported existential threat of AI. But now, actually, I think there is a serious existential risk that has something to do with AI. But it’s not the AI that would be out to get us. It is human beings that would be at the core of the threat.

The danger posed to humanity by humans is nowhere near as speculative as the usual out-of-control-AI scenarios. We know humans are dangerous, because we have many real cases of humans killing large numbers of human beings. Right now, there are people who quite seriously believe that it would be good if humanity became extinct. (A surprisingly plausible argument could be made for that!) There are others who would gladly kill all members of particular countries, or particular religious or ethnic groups. There are probably millions of people in the world who would like to kill all Jews, or all Americans.

Most of these crazy humans fail to trigger genocides and other catastrophes because they simply lack the power to do so, and/or they do not know how to do it.

So here I come to the real AI threat. It is that AI may make dangerous humans even more dangerous. Crazy humans may use sophisticated computer systems to help them figure out how to cause enormous amounts of harm.

Military AI

And here is why I don’t think the Skynet/Terminator story is so silly, after all. In the Terminator stories, Skynet starts as a military defense system. I think it’s completely plausible that the military will use AI to help control military systems and help figure out the best strategies for killing people. If one country does it, other countries will do it too.

I also think it’s not implausible that countries might be almost forced to give control of their military systems to computers — or else face crushing military disadvantage to other nations.

A simple example: suppose there was a computer-controlled fighter jet in a dogfight with a human-controlled jet. The computer can perform 3 billion calculations in one second; the human can do about three. Against a good program, the human is simply going to have no chance. Hence, every military would be forced to put its airplanes under computer control.

A similar point might apply to the rest of the military forces of a nation. Suppose we were in another cold war with Russia, but this time technology were so advanced that, if the Russians launched a nuclear attack on the U.S., the U.S. President would have only two minutes, once the Russian missiles were detected, to decide how to respond. If that were known to be the situation, it would be very tempting to place the U.S. nuclear arsenal under computer control. If both nations put their nuclear arsenals under computer control, then AI-prompted Armageddon starts to seem more likely. (I note that during the Cold War, the U.S. and Soviet Union actually came close to nuclear war several times, and some of those incidents involved computer errors and other false detections of missile launches. https://www.ucsusa.org/sites/default/files/attach/2015/04/Close%20Calls%20with%20Nuclear%20Weapons.pdf)

So there are two threats: governments of powerful nations may kill us with AI-controlled military systems (possibly by mistake). Or terrorists may use AI to figure out how to kill us, because they’re crazy.

Of course, we can use AI to defend against hostile AI. But destruction usually has an advantage over protection. (It’s easier to destroy, there are more ways of doing it, etc.) So the hostile AI’s will just have a big, unfair advantage.

AI in a box

Some of the discussion of the AI existential risk has to do with whether we humans could keep a sophisticated AI under control, or whether it would escape from us and then possibly destroy us. There is a sort of game in which two people role play as a human and an AI, and the AI tries to talk the human into letting it (the AI) have access to the outside world (the internet, etc.). Half the time, the AI succeeds, even against human players who are initially convinced of the dangers of AI (https://en.wikipedia.org/wiki/AI_box). A real superintelligent AI would have better odds.

But I think we don’t need to debate that, because if humans knew how to make an AI sophisticated enough to extinguish us, and it were reasonably easy and affordable, some human beings would create an AI, deliberately program it to cause maximum destruction, and release it.

This point renders moot almost every safeguard that you might think of to prevent AI from causing mass destruction. A human will just disable the safeguard, or deliberately create an AI without it.

As I suggested in a previous post (“We Are Doomed”), we will have to hope that humans become less crazy and evil, before our technology advances enough to make it very easy to destroy everything. This is something of a faint hope, though, because it only takes one crazy person with access to a powerful technology to wreak mass destruction.

What Should Candidates Know?

Back when she was still running for President, Amy Klobuchar was criticized for not knowing the name of the President of Mexico. That was reminiscent of the criticism of Gary Johnson in 2016 for not knowing what Aleppo was. (On the other hand, Trump, though he was certainly ridiculed for them, seemed to be undamaged by the revelations that he thought Frederick Douglass was still alive, that we might be able to stop a hurricane with a nuclear bomb, or that we might be able to stop Covid-19 with an ordinary flu vaccine.)

Incidentally, I don’t believe the news media who report on things like this have any interest in those facts, except to attack someone for not knowing them. The Gary Johnson story was literally the first and last time I ever heard the word “Aleppo” anywhere other than in a computer game, and pretty much the only fact about Aleppo that they reported was that Gary Johnson didn’t know about it. Likewise, the only information I have heard about the current President of Mexico is that Klobuchar didn’t know his name.

By these standards, I myself would not be educated enough to serve as President. I think this is a reductio ad absurdum, since I’m also highly confident that I am among the most educated 1% of people in the country, maybe the top 0.01%. (On the other hand, I wouldn’t have thought of dropping a nuclear bomb on a hurricane, but I guess that doesn’t matter.)

This raises the question: what should we expect a presidential candidate to know? What is the sort of thing that, if they don’t know it, we should take that as disqualifying, or at least a major strike against their candidacy? I guess I’ll assume this is in a possible situation where there are multiple candidates who are decent people, so you’re not forced to choose between ignorance and evil.

Arguments for ‘the President of Mexico’

Information Needed for the Job

You can see the argument that the U.S. President needs to know who the Mexican President is, since the U.S. President will have to deal with that person, perhaps negotiate trade deals, etc. This is just practically relevant information for doing the job.

Reply: Don’t be silly. At the time our President has to meet with the Mexican President, the President’s staff will brief him or her on the important facts and background of the meeting. The U.S. government is huge, and it obviously contains top experts on all the major areas of U.S. policy, and all those experts are ready to serve the President. It’s not like Klobuchar would go into a meeting with López Obrador and not know who he was.

Sign of General Ignorance

Second argument: “That’s all true, but not knowing something like who is the President of Mexico just shows that a candidate is a generally ignorant person, and probably not educated enough or intellectual enough to be a good President.”

Reply: Yeah, I think that’s silly too. Actually, I think that a person who believes that “second argument” is revealing their own ignorance and naivete. If you knew who the President of Mexico was, congratulations. But that’s one of about a million details that someone could say “the President needs to know in order to do the job.” It is naive and simplistic to think that a normal person (not someone with a freakishly encyclopedic memory) would carry around all that information and be able to call it up when a random item from the list of “important facts that the President needs to know” is selected to question them about. That is not how this thing works. The way it works is that the President is surrounded by experts on many different things, each of whom knows their own limited area, and the President talks to the relevant people when a decision needs to be made.

“Educated people” do not know all the important facts in the world; no one does. Most educated people know a tiny sliver of the world’s important facts, they are shocked that everyone doesn’t know the specific sliver of stuff that they know, and they assume all the other stuff outside their area is unimportant. They also typically don’t realize just how much stuff that is.

If we keep doing this “gotcha question” tactic, we’re going to be selecting for Trivial Pursuit champions. A Presidential candidate’s campaign preparation is going to consist in memorizing long lists of details. Start with the names and locations of all the countries in the world. (There are ~200 of them. How many can you name? What, are you unaware of the very existence of dozens of entire countries?!) Then the head of state and form of government of each country. Add in the names and locations of all the major cities in the world that are involved in any important news story that has happened in the last 5 years.

The candidate will have to review all major news stories from the last several years, and will have to memorize all “important” details contained in them. If there is a war somewhere, the candidate will have to know the names of all cities involved in it, the major factions, the cause of the dispute, the leaders’ names, when it started, etc. He or she must also know all terminological variants on any important thing in the story. E.g., if there was a disease outbreak, the candidate has to know every alternative term used to refer to that disease, lest a reporter catch him off guard with an unfamiliar expression and force him to ask “what is that?”

There are many more details that could be said to be necessary for the President to know. Surely, if one is going to take the oath to uphold the Constitution (that’s the only thing Presidents are sworn to do), one must know the Constitution, right? So a candidate will have to know the name and content of every ‘important’ Constitutional provision.

They should also know all the major laws and issues in every important area of policy — farm policy, energy policy, education, labor, health care, highway transportation, pollution, water, immigration, emergency services, tort law, illegal drugs, prescription drugs, health and safety, disaster relief, space exploration, arts, banking, air traffic, firearms, military defense, etc. Those are just a few areas of government policy that I can think of off the top of my head. I’m sure there are many more. You’d have to spend at least several hours studying about each of these things, because some hostile journalist can ask you about any one of them, picking a random technical term from that area. Like, “What do you think of the nuclear triad?”

Or rather, this is what would happen if voters cared about this sort of thing as much as journalists purport to care.

Fair Questions

That said, I don’t think all knowledge questions are unfair, and I think some kinds of ignorance are genuinely disqualifying.

Knowledge of One’s Own Policies

The first category I’d suggest is that of key facts about a candidate’s own proposals, particularly facts that would be needed to assess whether the proposal is a good idea. Thus, if some candidate proposes to offer Medicare for all Americans, that candidate should know such basic things as how much this would cost and how it could be paid for. It’s fair to ask these questions, since the candidate would have to have looked at those things in order to arrive at the conclusion that this is a good proposal. If they didn’t, then the candidate is either irrational or campaigning on false pretenses.

General Understanding

I don’t think a candidate needs to know specific details such as the current Mexican President’s name. But I think a candidate needs to have a general understanding of how many things in the world work. This sort of general, diffuse understanding of our world is not the sort of thing that particular policy-area experts are likely to give the President on an ad hoc basis.

Thus, for example, I think it’s worrisome that a President would think that a nuclear bomb might be a good solution to a hurricane. This suggests a kind of confusion that advisors couldn’t just straightforwardly remedy by reminding the President of a little factual detail. It suggests a person who probably would not be a good decision-maker, even with experts to feed him all the factual details.

Or, for a different example, I should think it worrisome if a President was under the impression that health care for everyone in the country could be paid for by a tax on billionaires. (If all the country’s billionaires gave all their accumulated wealth to the government, it would fund the federal government for nine months.)
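A back-of-the-envelope check of that claim, using rough ballpark figures that I am assuming here (on the order of $3 trillion in total U.S. billionaire wealth and $4 trillion in annual federal spending, both approximate figures around the time of writing):

```python
# Back-of-the-envelope check of the "nine months" claim.
# Both figures are assumed ballpark estimates, in trillions of dollars.
billionaire_wealth = 3.0   # rough total wealth of all U.S. billionaires
federal_spending = 4.0     # rough annual federal spending

months_funded = billionaire_wealth / federal_spending * 12
print(months_funded)  # 9.0
```

On these rough numbers, confiscating every billionaire’s entire wealth would cover roughly three quarters of a single year of federal spending, which is the scale the parenthetical remark is pointing at.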

Under the heading of general understanding, I would say that a candidate ought to have a basic understanding of economics, of the sort that you’d get in econ 101. Thus, they should not think that price controls work, or that the trade deficit represents a “loss” to the U.S. They should know what a demand curve is, and how the Fed lowers interest rates. I list these things because (a) it’s plausible that you have to understand these things in order to make a lot of policy decisions intelligently, and (b) it’s unlikely that the President’s advisors will successfully remedy a complete ignorance of them on an ad hoc basis, each time the President needs to make one of those decisions.

Attitude to Experts

A leader needs to have enough understanding of how the world works to know when to rely on experts. He should not, e.g., think that he knows better than medical professionals do about the properties of a new disease. This is crucial since no individual knows enough to competently manage all the things the government has its hands in. (This is one reason for thinking the government has too many hands in too many things. But leave that aside for now.)

On the other hand, a President should be able to recognize what sorts of things an expert cannot authoritatively tell one, or what they cannot be trusted to be objective about. E.g., he shouldn’t gullibly assume that experts in industry X can be trusted to say what are the best regulations for industry X.

Rationality and Goodness

The final point isn’t a matter of knowledge per se. But a good leader needs to be able to weigh the information that advisors give him, and to make a decision based upon rational moral considerations (as opposed, e.g., to making purely self-interested choices or emotional choices). The leader has to care about what is just, what rights people have, and what is good or bad for society. These two key traits — rationality and morality — cannot be supplied by advisors (as ordinary factual details can be); the leader has to have these traits as standing features of his character.

So those are the sorts of things that questions aimed at political candidates should try to draw out. Journalists shouldn’t be hunting for factual details with which to trip candidates up. They should be trying to get candidates to display how they think about normative political questions.

This calls for questions of detail about the justifications for policies that the candidate supports. People who are used to thinking seriously about normative questions will be able to address objections. If reasonable, they will be able to recognize valid concerns and reason to a conclusion despite the existence of uncertainty and mixed costs and benefits. People who are not serious about figuring out the right answers will find it hard to fake those things.

Of course, by my standards, most (nearly all?) actual politicians would probably be disqualified.

Thoughts on a Pandemic

1. Neither Hoax nor Apocalypse

I’ve seen some extreme differences of opinion regarding the Covid-19 outbreak. Some people think it’s a big hoax, maybe created to make Trump look bad (okay, maybe Trump is the only person who thought that), or to justify a government power-grab (https://thewashingtonstandard.com/the-coronavirus-hoax-overhyped-to-bring-about-more-tyranny/). (Yep, that last one is from none other than Ron Paul.)

Other people seem to think it’s some kind of apocalypse. For some reason, I guess the apocalypse is expected to particularly impact toilet paper production, which explains why the supermarkets are out of toilet paper. Many frozen foods and junk food (the stuff in the potato chip aisle) are also out. Fortunately, in my local stores, there are still plenty of fruits and vegetables in the produce section, so I’m fine. People are desperate, but not desperate enough to eat broccoli. (When the stores start running out of kale, then you’ll know we’re in trouble.)

I’d like to suggest the obvious — that both of these reactions are irrational. They’re the product of two different biases: (i) the bias toward alarmism, and (ii) the bias toward conspiracy thinking. Both biases result from the drive to overdramatize the world. When we form our beliefs, we don’t really try to figure out what is plausible or realistic, so much as what is an entertaining story. Stories with world-ending catastrophes are interesting, as are stories with backroom conspiracies among the powerful.

Neither of these things is happening. Covid-19 is not a hoax, nor is it the apocalypse.

Why I’m Not a Conspiratist

I have written about conspiracy theories before (https://fakenous.net/?p=699), and much of that applies here. To be fair, Ron Paul didn’t mean that the entire thing is invented — he knows that Covid-19 is a real disease that is really killing people. He just thinks that the government and media are greatly exaggerating the threat, because it helps to justify an expansion of government power. I would find that more plausible if it were only politicians and media people, but many medical professionals also seem to be seriously worried. It’s not as if the CDC, WHO, or NIH are known for going crazy with exaggerated pandemic predictions. If they’d done this sort of thing before (and they’ve had plenty of opportunities) and it hadn’t panned out, I’d be suspicious now. But they haven’t.

I have no medical expertise. I also know that at least some people are disputing the more alarming predictions from WHO, etc. But it appears that most medical experts think the situation is very serious. This, it seems to me, is a good time for deferring to experts. (See my earlier discussion of trusting experts: https://fakenous.net/?p=550.)

By the way, I’d probably disagree with those experts about a lot of political questions, and I certainly wouldn’t defer to them on such questions. I also know that a lot of those experts are government employees, so they might have a pro-government bias. Nevertheless, they are obviously better positioned than me to judge a medical, public health question.

Why I’m Not an Apocalyptist

A friend strenuously advised me to start stockpiling food while I still can. I didn’t do it. Why not?

I accept that Covid-19 might kill a million Americans. I don’t know if it will or not — I’m just saying I can’t rule that out based on my current knowledge. But even if that happens, that is not going to bring our society crashing down — our society is not that fragile. The death of a million people, the great majority of whom would be retirees, would not, for example, collapse our food production or distribution capacity. It would not stop us from growing kale or making toilet paper.

(And before someone freaks out, I’m not saying this isn’t a horrible outcome. I’m saying it won’t break down society. At some future date, though, we may well encounter a much worse disease that really would collapse our society.)

I assume there is also going to be a serious recession. Many people will lose their jobs. People in entertainment and travel-related industries should be especially worried. However, again, this is just a big-recession worry, not a collapse-of-society worry. In particular, it really doesn’t call for buying up a 12-month supply of toilet paper. (To be fair, I haven’t really seen people predicting collapse of society. I’m just reacting to the people rushing out to buy up multi-month supplies, as if production of Cheetos and paper goods is going to shut down.)

2. But Don’t Be Stupid

I’ve noticed that quite a lot of people (especially young people and men) are overly risk-tolerant. We just assume that really bad stuff won’t happen to us. E.g., we assume the coronavirus isn’t going to kill us.

I don’t assume that. I think there is a non-negligible chance that, if I keep going outside like normal, Covid-19 will kill me in the next several weeks. I’m not alarmist — I mean, I think maybe there’s a 1% chance that that would happen (due to asthma, I am at higher risk than normal, but even ordinary people have a non-trivial risk). Maybe it’s only 0.1%. Or maybe it’s 5%. I don’t know, since there seems to be a wide range of opinion about the risk. But if I don’t go out, there’s almost 0 chance that I’ll die in that time period.

People have a hard time properly taking account of “small” risks like 1% — normally, if P(A) < 0.01, we just form the outright belief “~A”, and thenceforth ignore the possibility of A. But a 1% chance of death is clearly worth spending several weeks indoors to avert.

I recommend that all of you (readers) do a similar expected-utility estimate if you’re doing things that carry a “small” chance of giving you the virus. Because I wouldn’t want to lose any FakeNous readers to the virus.
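For readers who want to try it, here is a minimal sketch of the kind of expected-utility estimate described above. Every number in it (the probabilities and the utility scale) is an illustrative assumption, not an estimate from this post:

```python
# Toy expected-utility comparison: go out as normal vs. stay home several weeks.
# All numbers below are illustrative assumptions.
p_death_going_out = 0.01       # assumed 1% chance of dying if life continues as normal
p_death_staying_home = 0.0001  # assumed near-zero chance if staying home
u_death = -1_000_000           # disutility of death, in arbitrary units
u_weeks_indoors = -100         # disutility of several boring weeks indoors

eu_going_out = p_death_going_out * u_death
eu_staying_home = p_death_staying_home * u_death + u_weeks_indoors

print(eu_going_out, eu_staying_home)  # -10000.0 vs. -200.0
```

On these assumptions, staying home wins by a wide margin. Notice that the conclusion is robust to the uncertainty mentioned above: even if the probability of death while going out is cut to 0.1%, the expected utility of going out is -1000, still worse than -200.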

3. The Economic Cost

There’s a pretty good chance that the biggest harm of the coronavirus will turn out to be the economic cost created by our response to it. We don’t know whether that’s true, since we don’t know how bad the epidemic will be or how large the economic cost will be. But I’m saying it wouldn’t be terribly surprising if the economic cost turned out to be the greater harm.

I say this despite — again — the fact that I take the virus seriously as a public health threat. It’s just that I also take very seriously the harm to society of shutting down large portions of the economy for weeks at a time. So I think we just need to think seriously about that in designing our response. Public health officials may be experts on things like the spread of infectious disease, but they are not experts on these economic costs, so we can’t just take their word on what is the best overall response to the virus.

The Public Health Threat of Recessions

Now, you might be tempted to react, “Public health is more important than money!” But this is not a rational position. First, of course, whether public health is more important than money depends upon how much public health impact and how much money we are talking about.

Second, some research suggests that economic costs come with public health costs. A lot of people are going to lose their jobs because of the Covid-19 response. When the unemployment rate goes up, we also see an increase in suicides, depression, and deaths from prescription drug overdoses (https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-015-2313-1).

Wait, Maybe Recession Is Good for You

I was going to leave it there, but then I did a little more reading, and I learned that some scholars think that unemployment lowers the death rate. As crazy as it sounds, the case for that effect seems the more compelling one. (See https://www.nature.com/articles/d41586-019-00210-0.) That is, even though increases in unemployment lead to more suicides and mental health problems, the overall death rate seems to go down.

One reason is fewer traffic fatalities, because people are driving less. It may also be that people consume less alcohol and cigarettes, because they’re trying to save money. There will also be fewer on-the-job accidents. All of these are mentioned in that article in Nature. It seems that most of the effect is due to cardiovascular disease deaths, which for some reason decline when unemployment rises (https://www.ncbi.nlm.nih.gov/pubmed/28772108).

However, I still find this intrinsically hard to believe. Anyway, presumably recession is still bad overall; I don’t think anyone is suggesting that we should have more recessions. It just might not be as bad as we thought.

Side point: we should probably try to figure out why employment would increase the death rate, and try to address that. (Are people’s jobs causing them to have heart attacks??)

“Stimulus” Won’t Work

The government has one idea about reacting to recessions: “stimulus”. “Let’s spend a trillion dollars, cut interest rates to zero, cut taxes, and send checks to people.”

None of that will work. It’s a complete misunderstanding of the problem. The coming recession is not going to be caused by a liquidity crisis or a shortage of demand (for which gov’t stimulus is a conventional remedy). Even if you think some recessions are caused by those things, obviously this one isn’t. We know that, because we know why we are anticipating a recession to begin with: we expect a recession because lots of people are not going to work, and people working is where goods and services come from.

There is nothing we can do about that, unless we make people go back to work. The government “stimulus” can’t possibly address the recession, unless it somehow makes people go back to work. Which, in normal circumstances, is in fact what the government hopes for. But in this case, we don’t want people to go back to work, because the whole reason they’re staying home is to avoid transmitting the virus. So this whole stimulus idea is just a confusion — a complete failure to even understand the theory behind stimulus (if you think that theory makes sense to begin with).

Caveat: one thing the government could in fact do is to spread the costs. The economic cost of the coronavirus is inevitable (unless we want to send everyone back to work and spread the virus). That cost is going to fall very hard on people in specific industries (airlines, hotels, restaurants, &c.), and much less on other industries. So the government could spread that around — e.g., take money from the rest of the country and give it to the hotel workers. That would probably reduce the utilitarian cost of the virus. I doubt that they’ll do that, though, because the majority of people won’t care to give their own money, during a recession, to help those in the most heavily-affected industries.

4. Effect on Academia

This isn’t so important to most people, but I’m thinking about the effect of the coronavirus on academia, since that’s my industry. We’re doing comparatively quite well, due to the possibility of online teaching — so we don’t have to just shut down.

After the current panic is over, I think there will be a modest uptick in online courses, because some people will have discovered that they like them. I suspect, though, that most people are discovering that they don’t like online courses. So most will go back to “real” courses after this.

Now, you might think that in-person instruction is so inefficient that it has to go by the wayside, even if it is moderately more satisfying. Is it really worth it to have all these people traveling to meet in the same place, just so they can learn in a slightly more enjoyable way? Besides all the costs of travel, you also have the costs of maintaining the physical buildings and grounds.

If you think that, though, you probably haven’t paid enough attention to the incredible inefficiency that has been endemic to academia for decades. We’ve never cared much about wasting resources. I don’t see why we would start now.

The whole academic industry is founded on signaling (see Caplan, The Case Against Education, https://www.amazon.com/dp/0691174652). Degrees have value because they signal that the holder is smart, hard-working, conscientious, able to follow directions, etc.

So, whether online instruction replaces in-person instruction will ultimately depend upon whether getting an online degree improves, worsens, or has no effect on your ability to signal desirable traits. My guess is that going to the in-person classes helps with the signaling. It’s too easy to take online courses while sleeping in and wearing your pajamas all day.

On the positive side, maybe businesses, including academia, are starting to realize how wasteful and pointless in-person department/business meetings are. Just send out a damn email.

Also, by the way, I suspect that some academics are discovering that we don’t like going to conferences. For instance, some of us have noticed that we were relieved that the APA meeting was cancelled.

Libertarian Pandemic Policy

So, I just had my last in-person classes of the semester. Thanks to Covid-19, my courses have been converted to online courses (at least they’re not completely canceled!). Anyway, this seems like a good time to reflect on what the libertarian response to the threat of disease epidemics might be. I saw a Facebook post from someone else suggesting, I guess, that libertarians have no way of dealing with this kind of problem. (?)

I don’t think many libertarians will have a hard time with this. But if you’re not one, maybe you don’t know the answer. Maybe you think something like this:

“Since libertarians are against government intervention, they’d be against taking any measures to stop deadly diseases, right? Travel restrictions, quarantines, mandatory testing — these all violate our liberties! And that’s always bad, according to libertarians, right?”

Response: Most libertarians are not against all government intervention. Most are minimal statists, who allow a (minimal) role for the state. Others, the libertarian anarchists, ultimately hope for government to be eliminated. These latter libertarians, however, do not oppose all the things that the government does; they would still allow for the crucial minimal functions of the state to be performed by someone (by a non-state actor).

Minimal State Libertarianism

The minimal state view: the government’s central function is to protect individuals against fraud and aggression. But what counts as aggression?

Physically damaging someone’s body without their consent is the paradigm case. Now, two important observations about this:

(1) One does not have to inflict the damage directly; e.g., you don’t have to touch the other person’s body with your body. It can be indirect — you can send a harmful object on a trajectory where it will predictably interact with the other person’s body in a physically damaging way. E.g., throwing a projectile. Or, more to the present point, what if you knowingly sent a virus toward another person, knowing that it would infect the person and then physically damage his body? Surely that would count as aggression.

(2) The damage need not be certain. Inflicting an unreasonable risk of physical harm can count as aggression — or at least, as something relevantly like aggression for purposes of assessing the morally legitimate response. (Maybe it’s not literally aggression, but it’s sufficiently aggression-like.) For instance, playing Russian Roulette with unwilling victims counts as aggression, and calls for a coercive response. That’s true even if you imagine a gun with a million chambers, so that the probability of shooting someone is only 1/1,000,000. No libertarian has trouble with this. Similarly, driving while drunk poses an unreasonable risk to pedestrians and normal motorists and can thus be prohibited.

Of course, what counts as unreasonable risk is open to debate. It’s going to have to do with the probability of harm, the total magnitude of the threatened harm, and how good one’s reasons are for imposing it (see previous post on meat & disease risk).

That’s the core of the libertarian justification for disease-prevention measures. Any individual who is at risk of carrying a communicable disease, such as Covid-19, poses a risk of physical harm to others when he interacts with them. If the risk is ‘unreasonable’ (in light of the probability, magnitude, and reasons for imposing it), then those under the threat would be justified in using coercion to protect themselves from the potential physical harm. And since individuals could justly do that themselves, they can also delegate the task to the state (if you accept the state as legitimate in general).

So that would be the justification for quarantining, restricting movement, requiring testing, etc. (The state can’t impose an unconditional testing requirement, but it can insist that people get tested as a condition of interacting with others in ways that would be dangerous to those others if one had the disease.)

Limits

Obviously, this doesn’t mean that the state could, morally, impose just any restrictions that might reduce disease risk. E.g., they could not declare that all gay people have to be quarantined in concentration camps to protect the population against AIDS. Again, restrictions have to be related to preventing unreasonable risk-imposition.

Yes, this makes it open to debate exactly what is permissible for the government to do, since there will be a wide area of borderline cases. But also, there are some clear cases.

Libertarian Anarchism

The above account is pretty straightforward. But some crazy, extremist libertarians actually advocate for complete abolition of government! How could an anarcho-capitalist society deal with disease outbreaks?

Note: If, like most people, you have never read anything about anarcho-capitalism, then you have no idea what I’m talking about. In that case, you should read part 2 of my book, The Problem of Political Authority, before continuing. I am not going to write up an(other) explanation and defense of an-cap here for people who have no idea what the theory is. By the way, if you think you know what the theory is, but you haven’t actually read any published work by an anarcho-capitalist, then you are mistaken: you do not know what the theory is.

This would have to be done by private agents. Private businesses, associations of businesses in the same geographic area, and associations of homeowners, would need to decide what measures they wanted to take to protect against disease spread. E.g., your HOA could say that no one can enter the neighborhood unless they have a test from a reputable hospital indicating they are negative for the Covid.

Based on the preceding reasoning, they would also be justified in coercively enforcing this rule, and they could direct their protection agency to do so.

You might worry that this doesn’t provide a coordinated, society-wide response — different private groups will adopt different policies — and therefore that it would be better to have a central government. Note, however, that even if you have governments, that doesn’t provide a coordinated international response — different governments will adopt different responses. Yet few people think that this shows that we need to have a world government. The response based on associations of property owners is just like a government-based response, except with very small governments.

Criminal Justice vs. Disease Prevention: Why the Double Standard?

That all seemed like a fairly easy explanation. Is there any serious problem for libertarians in this neighborhood?

Here is the most interesting philosophical problem that I have thought of: libertarians (along with most other people!) think that, in the criminal justice context, a person should be presumed innocent until proven guilty. This implies that in various cases where it is very uncertain whether a person is a criminal, we should let that person go. The probability that they are a criminal could be pretty high (e.g., 50%!), and we would still let them go.

But letting such a person go clearly poses a large risk to the rest of society, since most people who have committed crimes in the past will commit more crimes in the future. How is this consistent with what I said above about how the government can use coercion to protect us from risks of harm? Quarantining people is a lot like imprisoning them. So why is it okay to quarantine a person who only has a 1% chance of being infected, but not okay to imprison a person who has a 50% chance of being a criminal?

By the way, I’d like to point out that this is not just a problem for libertarians, but a problem for anyone with normal, mainstream views. Almost everyone agrees that you can quarantine a person with a 1% chance of having a deadly disease, but that you cannot imprison a criminal defendant who has a 50% chance of being guilty.

As best I can figure, the relevant differences are these:

a. Imprisoning alleged criminals is a punishment measure (it aims at harming them because they deserve it), while quarantine is not. There are higher standards of evidence for punishment.

b. Of course, criminal punishments are usually a lot more harmful than quarantining.

c. A person who has (or might have) a communicable disease poses a danger to others when he interacts with them, even without any further bad choices on his part. But an (alleged) criminal does not pose a danger to others in the future unless he makes further wrongful choices.

d. A person with a communicable disease poses a threat to many more people, because if he transmits the disease to others, these others could transmit it to still others, and so on. Ordinary crime doesn’t exponentially spread in this way. As a result, a much lower probability is needed in the disease case to count as “unreasonable risk”.

e. In the criminal justice case, there is a greater threat of government abuse of power resulting from lowering the standard of evidence.

f. Also, in the case of crimes, it is more realistic to expect very high evidential probabilities of guilt to be attainable if the person is in fact guilty, as contrasted with the case of a new disease, where there might not be reliable tests (and it might not be reasonable to expect there to be such).

How to Prevent New Diseases like Covid-19

The Source of New Infectious Diseases

This seems like a timely question, in light of the current Covid-19 outbreak. Everyone knows how you get an infectious disease: you get it from another person who has it. But where did the first person get it? And if new diseases just evolve the way animals and plants evolve, shouldn’t it take a really long time for them to appear?

In my lifetime, the following new diseases have appeared, just off the top of my head: Covid-19, SARS, Mad Cow Disease, Swine Flu, and AIDS. The last of these has killed tens of millions of people. There will certainly be more such diseases appearing over the next few decades, and we never know when a more deadly, more contagious one will appear. Think about what would happen if there were a disease as deadly as AIDS and as contagious as Covid-19. I think there is a non-negligible chance that this is how the human species will finally end.

Where do these things come from? The answer is well-known, but on the chance that you don’t know about it or (more likely) just haven’t thought about it, the main answer is animals. Those diseases all started out as non-human animal diseases. Apart from SARS, they were transmitted to the human species through a human being eating body parts from an infected animal. Covid-19 probably came from eating pangolin meat, mad cow of course from cows, swine flu from pigs, and AIDS from chimpanzees and monkeys (which were hunted for meat in Africa). The disease enters a human population through a meat-eater, mutates (except Mad Cow, which is a prion disease, not a virus), then spreads from there.

The Moral Problem

So that’s another reason to end the meat industry (besides the severe cruelty of that industry). Of course, when you eat some meat, under normal conditions, you only have a tiny chance of getting a disease from that, let alone a new disease. But, in the extremely unlikely event that you do get a new disease, you create a worldwide threat, and one that may threaten people for generations to come. In the age of globalization, any new communicable disease is in danger of spreading across the world.

Notice also that the risk is not merely assumed by meat-eaters. Once one of these diseases migrates to a human host, it will mutate to become a human disease, and then spread through human-human contact. So everyone who has contact with other humans, whether they are carnivores or vegetarians, is put in danger from this sort of thing.

I think there is a plausible case that this makes meat-eating a rights-violation — not just a violation of animal rights, but a violation of human rights. The meat eaters of the world violate the rights of vegetarians by subjecting us to an unnecessary risk of disease for no good reason.

The Ethics of Risk

Certainly, if you knew that you were going to transmit a deadly disease to other people, and you did so anyway, just to gain a little pleasure for yourself, this would be a rights violation. But you might think it isn’t a rights-violation if there is only a small chance of this.

Sometimes, it’s permissible to impose a small risk of harm on others, and sometimes it isn’t. It matters (a) how big the harm would be, (b) how likely it is, and (c) how good your reason for imposing the risk is. Some examples:

  • (Illustrating (a), Size of Harm): It’s ok to keep a gun in your house, but not ok to keep a nuclear bomb, even if (let’s suppose) the probability of a nuclear bomb accident is much less than the probability of a gun accident.
  • ((b), Probability): It’s permissible to drive normally, but not permissible to drive while drunk.
  • ((c), Quality of Reason): It’s not permissible to play Russian roulette, for fun, with unwilling other people. Even if your revolver has 1,000 chambers (giving only a 1/1000 chance of killing someone), it is still not okay. Nor is it ok with a million chambers, or a billion. No number of chambers is okay.

I just assume those judgments. I’m not going to argue for them. Those examples illustrate the relevance of factors (a), (b), and (c) above.

How does the meat industry fare? Any given meat company or consumer has only a tiny chance of introducing a new disease to humanity, so they’re doing well on (b). However, there is a potential for extremely large harm, so they’re doing poorly on (a). Unfortunately, we have no good estimates of the expected harm.

Finally, meat producers and consumers are doing extremely poorly on (c) — overall, they have extremely poor reasons for selling or eating meat. Note: I take it that, for purposes of assessing the quality of one’s “reason” for imposing a given risk, we look at one’s overall, net reason, omitting the possible moral reason created by the very risk we’re talking about. (That’s to avoid circularity or double counting.)

So, leaving aside the disease risk, how good of a reason do we have for selling or eating meat? Overall, extremely poor — we get some increase in gustatory pleasure, we also get increased risk of cardiovascular disease and cancer, and we cause a vast quantity of suffering for other sentient beings that is orders of magnitude greater than that increase in pleasure that we get. So the overall reason for doing it (leaving aside the disease risk) is less than zero. See my book on vegetarianism: https://www.amazon.com/dp/1138328294/.

So the risk isn’t justified. It’s like the Russian Roulette case, only worse. In the Roulette case, you merely have no reason to do it, considered apart from the shooting risk; in the meat case, you have overall extremely powerful reason not to do it, considered apart from the disease risk.

What if you think, for some reason, that vast quantities of pain and suffering for other creatures are not a reason against meat-production? (Many people appear to assume this, though no one has been able to give any non-absurd defense of this. Again, see my Dialogues on vegetarianism.) Even so, the overall reason you have to eat meat would still be dubious, since it’s not even clear that you are overall better off. There is a reasonable argument that you’re overall worse off due to health effects. (See https://well.blogs.nytimes.com/2011/01/07/nutrition-advice-from-the-china-study/.)

Btw, this moral reason against meat-production applies even to humanely-raised meat (if such there be).

I discuss this and other arguments in this online discussion (with Aeon Skoble, Andy Lamey, and Shawn Klein): https://www.cato-unbound.org/issues/february-2020/what-do-humans-owe-animals.

A Right-Wing, Populist Critique of President Trump

Critics of the President have a variety of arguments that seem to fall on deaf ears with most Republican voters. I’m going to try to explain why that is, then suggest some arguments that might have a chance of being more persuasive. I don’t want to make the liberal, or libertarian, or even non-partisan case against the President here. I want to give a critique from the precise political standpoint that he appeals to.

First, Some Failed Attacks

Example 1: Joe Walsh (one of the Republicans who is mounting a primary challenge to Trump in 2020) addressed a crowd in Iowa, at which Walsh promised to be “honest”, “decent”, and not “cruel”. He went on to criticize how the President “makes everything about himself”. The crowd of fellow Republicans booed Walsh. (https://www.youtube.com/watch?v=DtzkLR2Jv9k) Many other conservative figures have gone after Trump, with similar results.

Example 2: Jordan Klepper from The Daily Show talked to some Trump supporters in Iowa: https://www.youtube.com/watch?v=Cas2wXZz1PE. At one point, a woman says that one can tell that Trump has done nothing wrong, since he has been completely open and not trying to hide anything in the impeachment trial. Klepper points out that Trump actually blocked witnesses from testifying. The woman pauses, then announces, “I don’t care.” (3:05-3:35)

Hypothesis: The arguments against Trump have no effect on Trump-supporters, because they appeal to values that those supporters do not care about (or do not care enough about). Take the Ukraine story: Trumpists might start by denying that Trump tried to blackmail Ukraine. But there is no point in trying to present evidence that he did it, because their real position is not that he didn’t do it. The Trumpists’ real position is complete indifference to that issue.

Example 3: The repeated complaints that Trump said some racially insensitive thing or other. These obviously are not going to have any effect. Anyone who is supporting Trump does not care about that sort of thing.

What Does the Alt-Right Care About?

General rule: If you want to persuade someone, it is necessary to (i) understand what that person cares about, and (ii) appeal to those values in making your argument. These are not sufficient for persuasion, of course, and I don’t know whether any persuasion is possible in the present case. But if it could be done, that’s how it would be done. So, here is what I think the alt-right/populist base wants:

  1. They want America to be respected.
  2. They want the nation to be strong and to be defended from foreign enemies.
  3. They want to preserve American jobs.
  4. They want the Republican party to beat the Democrats.
  5. They want there to be less immigration.
  6. Because many are evangelical Christians: they want Christianity to spread in this society. They want abortion to stop, and they want more traditional, conservative moral values.

Notice that I am not trying to pillory the alt-right; I am trying to best understand the concerns and values that they themselves would appeal to. My argument against President Trump is not to reject those values. My argument is that President Trump will not give us those things. He is promoting and will continue to promote the opposite of most of those things.

The Alt-Right Case Against Trump

1. Respect

Trump does not encourage respect for our country. Respect for America is eroding due to its current leader. Other world leaders literally laugh at our leader (https://www.youtube.com/watch?v=w-3FyvZTIuU, https://www.youtube.com/watch?v=hne29xkUPbg). Regardless of how Republican voters in America perceive him, Mr. Trump is not regarded around the world as an intelligent, knowledgeable, or competent person. This is for a variety of reasons, including his lack of the sort of political knowledge that most leaders have; his frequent language and factual errors; his failure to take his duties seriously; and so on.

2. Strength

President Trump does not make America strong. He weakens America, in several ways.

a. He weakens our relationships with allies. For example, in the Middle East, one of the very few groups that strongly supported America was the Kurds. But back in October, Trump betrayed them by letting Turkey attack them, probably because Turkish strongman Erdogan called up Trump and talked him into it (https://www.newyorker.com/magazine/2019/10/28/turkey-syria-the-kurds-and-trumps-abandonment-of-foreign-policy). Trump showed no concern about ISIS fighters being released as a result. In the Ukraine case, Trump tried to stop military aid to a U.S. ally that was being invaded by a U.S. adversary. He has shown no concern about Russian aggression. He even toyed with the idea of leaving NATO. Long-standing U.S. allies and potential allies are now on notice that they cannot count on America.

b. He admires enemies of the U.S., including communists and former communists such as Kim Jong-Un and Vladimir Putin. Russia is most definitely not our friend, and it is dangerous to have a leader who doesn’t understand that, or doesn’t care. There is ongoing speculation that Putin has kompromat on Trump, which is fed by Trump’s strange refusal to criticize Putin or to recognize any bad behavior by the Russian government. Even if false, this sort of speculation weakens respect for America. If true, it is extremely damaging to America.

c. Trump’s presence weakens America internally, by dividing America against itself. His modus operandi is about fueling tribalism and division. He provokes the angriest reactions, and rather than trying to defuse tensions or encourage compromise, he carries on public insult wars with citizens and other government officials and makes the most extreme and reckless accusations (calling the impeachment inquiry a “coup”, calling his predecessor “the founder of ISIS”, etc.). This is probably why Putin wanted Trump to win in 2016, as Putin’s goal is to weaken America, especially through sowing internal discord. Putin has acknowledged that he wanted Trump to win (though of course he did not admit that his motive was to weaken America) (https://www.youtube.com/watch?v=1O1piTvGt_A).

d. Trump’s leadership weakens the U.S. government specifically (but not in the good way, which would be by reducing its responsibilities and expenditures). One way it does so is by the very high turnover in the government – competent people are constantly quitting or getting fired, due to the extreme difficulty of working with Trump. Morale in the government must be low, especially among honest, non-corrupt people, and respect for our institutions is eroding, due to Trump’s tendency to attack any person or institution that does not just slavishly praise him, and to make any reckless accusation that suits him at the moment. His stances also can be expected to call forth the most extreme partisanship on the part of Democrats.

3. The Economy

Actually, employment statistics are doing surprisingly well. However, Trump’s trade war is estimated to cost average Americans about $1300 a year (https://www.salon.com/2020/02/07/trumps-trade-war-is-making-americans-average-incomes-decrease-government-report-says/). His trillion-dollar deficits are also something that our descendants will be paying off for a long time, the kind of debt that conservatives have traditionally worried about.

4. The future of the GOP

Trump is not helping the Republican party. He is weakening it, much as he weakens America. He has caused prominent, staunch allies to turn against the Republican party. George Will, one of the most articulate and influential voices for conservatism for the last fifty years, finally left the Republican party in 2016, due to Donald Trump. Trump and his voters are on their way to alienating every serious conservative intellectual. This is due not only to Trump’s anti-intellectualism, but also to his disregard of core conservative principles. He has no loyalty to the party, as he will freely attack any other Republican, and any Republican principle, whenever he feels like it (McCain, Romney, the Bushes, etc.). By the time Trump leaves office, the Republican party may be left with no serious intellectual voices, and no leaders with integrity or decency, since Trump and his base seek to destroy anyone who tries to stand on principle. In the long run, this is a powerful blow to the conservative movement, far outweighing a single election victory.

Though Trump energizes a large portion of Republicans, he also alienates moderates in a way that may do lasting damage to the GOP, beyond Trump’s own term. He creates an impression of what Republicans are about that is extremely unappealing to voters in the middle.

5. Immigration

Trump’s administration may have slowed the rate of immigration into the U.S., but not by much (https://usafacts.org/data/topics/people-society/immigration/immigration-and-immigration-enforcement/immigrants/). He has not built the wall. Some old fencing has been updated, but approximately zero new miles of barrier have been added to the U.S.-Mexico border (https://en.wikipedia.org/wiki/Trump_wall). Mexico is not paying for it.

6. Christianity and morality

Mr. Trump is not Christian, nor is he helping Christianity in the long run. He is among the least pious public figures in America and is probably the first atheist President (though he will not publicly admit that). He shows no interest in such Christian virtues as humility, or charity, or chastity, or piety; he is if anything an enormous advertisement for the opposite of all those traits. His position as the nation’s leader helps to promote his combination of deeply profane attitudes, to make them seem socially acceptable and even to encourage admiration for them.

His current putative pro-life stance is probably just a stance of convenience, as he previously called himself “pro-choice in every respect”. Most likely, he does not care about the unborn, any more than he cares about conservatism or the Republican party. But, just as he is weakening the Republican party, he is probably weakening American Christianity, through dividing Christians and turning ordinary Christians against core values that their belief system has hitherto supported.

By the way, he also used to be a ‘liberal’ and a fan of Hillary Clinton, before he was running against her. (https://www.youtube.com/watch?v=JK1QzLW13hI, https://www.youtube.com/watch?v=m7BsXluIq-0). He just decided to become a “Republican” so he could get power. Mr. Trump does not care about the things that sincere Republicans or Christians care about; he just says whatever he thinks is useful for manipulating other people, to get himself money and power. His business career was focused on tricking other people into giving him as much as possible of what he wants, while giving them as little as possible of what they want in return (often refusing to pay even what he agreed to pay). And that is exactly what he has done to the Republican party, and to the country.

When he dies, Mr. Trump will go to his grave laughing at all the people he scammed in his life, not least of all the American voters.

The Social Function of ‘Intersectionality’

This year, I was invited to participate in this Social Justice Warrior conference at Santa Clara University: https://www.scu.edu/fagothey-conference/. I guess I was invited to be the token libertarian. They asked me to speak about Intersectionality (a topic on which I have previously written nothing), and Immigration (on which my views are well-known). The conference is this Friday and Saturday, so I’ll be in the middle of it when this post goes up.

So I thought I would post here my comments on the concept of “intersectionality”. Note: since I have done zero academic work on this topic and in fact did not think about it at all before they invited me to go to this panel discussion, this is just armchair, personal opinions. If you want an academic-style discussion by people who work on this, go elsewhere.

What’s Intersectionality?

The term “intersectionality” (as I recently looked up) was coined by Kimberlé Crenshaw, a social-justice-activist lawyer and philosopher, who also happens to be a black woman. She gives an example of a company that was sued for discrimination against black women (DeGraffenreid v. General Motors). The company had black employees, and it also had female employees, but it had no black female employees. (The black employees were men doing manual labor; the female employees were white women doing secretarial work — or something like that.)

I guess the plaintiffs lost because the judge thought that (a) there was no racial discrimination since the company had black workers, and (b) there was no gender discrimination since the company had female workers, and (c) that’s all there is to it.

Crenshaw has a plausible case that there could still be discrimination, based on the category of ‘black women’. So the result of the intersection of these two categories (‘black’ and ‘woman’) can’t be predicted just from looking at each category separately.

So that led Crenshaw to talk about ‘intersectionality’, which I guess is the phenomenon of having multiple oppressed-people categories, with the suggestion that this can give you unique experiences of marginalization that are more than the sum of the forms of oppression due to the individual categories.

Is it Real?

Every concept attempts to capture some aspect of reality. In addition, a concept may also serve useful social functions. I’ll get to that part in a second.

As to capturing reality: Crenshaw’s example is very plausible on its face. It’s entirely possible that there could be discrimination against black women, even when there was no discrimination against black men, nor against white women. So I see no problem with recognizing intersectionality as a valid concept (one that succeeds in capturing a real phenomenon).

But now I want to talk about its social function.

The Social Function of ‘Intersectionality’

About Cultural Groups

There is a phenomenon among humans of forming ‘cultural groups’. Roughly, these are collections of humans who have some shared norms, beliefs, and practices; who influence each other and help to enforce those norms and practices; and who tend to ‘identify’ as members of that group.

These cultural groups used to be mainly tribes. Now they’re mainly nations. But also, within a large cultural group, there can be smaller ones (sub-cultures). Some cultural groups are territorially united (like nations); others are ideologically united (like ‘the libertarian movement’).

I note some pervasive features of cultures. One, they try to preserve and defend themselves. If it’s a territorial group, it will try to defend its territory against outsiders. If it’s an ideological group, it will try to defend its beliefs against outsider beliefs.

Most cultural groups, by the way, hold extremely self-congratulatory beliefs; they tend to regard themselves as superior to all other groups. In the case of ideological groups (including religions), it is common to believe that one’s own belief system is the key to saving the world.

Another interesting feature: basically all cultural groups have status hierarchies. Social status is unequally distributed, and your status will tend to affect lots of things that people care about, such as how nice other group members are to you, how much respect you get, how many material resources you get, and mating opportunities. People generally want higher positions in these hierarchies.

The Social Justice Tribe

Social justice warriors are an example of an ideologically-united cultural group. (They often don’t like the label ‘Social Justice Warrior’. It is, however, the only common term I know of that refers to this specific group.)

This tribe has, as a core part of their identity, a self-conception as people who are fighting against prejudice in our society. They have core beliefs revolving around the pervasiveness and awfulness of this prejudice, especially racism and sexism (but there are also several other forms of prejudice that are held to be rampant).

The wider society is, for the SJW’s, divided into good and bad groups: the good people are the oppressed, and the bad people are the oppressors.

This tribe, by my read, had its origin in the Civil Rights Movement of the 1960’s. At that time, racial and gender prejudice was very bad and very obvious, so it is easy to understand why a movement would arise to fight against prejudice.

The movement did more than promote political change. It also provided a sense of meaning and identity for its members. It became a new tribe that people could identify with. When this happened, one could predict that the tribe would display the characteristics of human cultural groups mentioned above.

The Threat to the Tribe

Since shortly after its inception, the SJ Tribe has always faced a peculiar threat: the threat of its own success. The Tribe’s identity is centered around the belief in and opposition to rampant prejudice. Therefore, if the tribe were to actually succeed in its goals, that very success would undermine the tribe’s core beliefs. If they successfully defeat prejudice, they will have to stop believing that prejudice is rampant, they will have no reason (or greatly reduced reason) to devote themselves to fighting prejudice, and then the tribe will disband.

Preserving the Tribe

Tribes don’t voluntarily disband. This one has been under great pressure by the waning of prejudice in our society. It also, however, possesses impressive intellectual resources — some of the nation’s most clever and educated elites.

The solution to the above threat is to develop increasingly advanced, increasingly subtle prejudice detectors — so that when the obvious forms of prejudice disappear, you can still find pervasive injustices to fight against. The SJ tribe’s most talented elites have devoted decades to developing theories that help to identify more prejudice and more evils attributable to prejudice.

Enter “intersectionality”. As I see it, this concept is not simply an attempt to describe an aspect of reality (though, as I’ve indicated, it is that). It also serves as one example of the self-defense mechanism of the SJ subculture. Its social function is to enable one to see more forms of prejudice, and worse prejudice, than we could previously see without using that concept.

(It’s hardly the most striking example of that. But it’s the example that I was prompted to think about.)

Climbing the Status Hierarchy

In addition to serving a social function for the cultural group, the concept of intersectionality also serves a useful function for certain individuals: it helps them climb the social status hierarchy within the Social Justice Tribe.

Recall that in the SJ tribe, “oppressed” = Good, and “oppressor” = Bad. Note also that these classifications are group-based, not individual. (You don’t look at whether someone has individually oppressed someone else; you look at whether one group has oppressed another group.)

It is no accident that the person who introduced the idea of Intersectionality was a black woman. In introducing this concept, she was in effect reminding people that she was a member of two oppressed groups. Furthermore, members of multiple oppressed groups are even more oppressed than you would expect just from the oppression of their individual groups. Therefore, they are entitled to higher status in the SJ hierarchy than previously recognized.

The ‘black woman’ category could only be trumped by a category with three or more sources of oppression. (Maybe the best person in the world would be a poor, disabled, black, lesbian, Muslim, immigrant transwoman.) (See https://en.wikipedia.org/wiki/Oppression_Olympics.)

So What

Of course, none of this shows that intersectionality isn’t a real phenomenon. Indeed, the best tools for defending your ideological group and climbing the status hierarchy are going to be ideas that have intuitive plausibility and a basis in fact.

My aim is just to understand what is going on in our society. Lots of concepts are valid descriptions of some aspect of the world but don’t become popular like ‘intersectionality’ (which has very much taken root in academic-progressive culture, though not in the wider society). Most of the time, when you coin a new term and introduce a (semi-)new concept, it does not take hold, and people don’t wind up talking about it decades later.

So that was my analysis of why ‘intersectionality’ took hold: It is useful for particular members of the SJ sub-culture for promoting their status, and it is useful to the sub-culture as a whole for defending against the SJ sub-culture’s primary threat: the waning of prejudice.

Appendix: Reactions

Not everyone at the conference was happy with the above observations. I was on a panel, so each panelist gave a brief presentation, followed by a collective Q&A. I knew people wouldn’t like my thoughts, but I didn’t know exactly what they would say. Here are some of the things they said.

– One person couldn’t understand why anyone would hold my outdated view of cultures, which he thought most cultures don’t conform to. I am not sure, but I think he meant my view about status hierarchies, and I think he thought that most groups are not hierarchical.

– The same person objected to my ostensible view that people who resist oppression are just trying to get status in some academic subculture that those people themselves have never heard of. (Of course, I don’t think that and didn’t say that.)

– The same person objected to my ostensible view that the more subtle forms of oppression that people are objecting to today only recently appeared. (I don’t think that and didn’t say that either.)

– Two people strenuously denied that we (or left-wing activists, I guess?) have gotten any more sensitive to prejudice over the years; we’ve always been equally sensitive to all the same forms of (alleged) prejudice.

– One person strenuously insisted that SJW’s would be happy for their movement to disband due to having succeeded. They hope and pray for this day; they don’t want to be SJ activists, but they are just forced into it by the way the society keeps oppressing them.

Those were the main objections as I recall them.

We Are Doomed

Far Future Doom

Obviously, humanity will at some future time be extinct. That goes without saying. That’s almost a metaphysical truth; nothing (of the relevant kinds) lasts forever.

There is a fascinating Wikipedia article about the far future, https://en.wikipedia.org/wiki/Timeline_of_the_far_future, which includes (among other things) many events that could extinguish life on Earth. The Sun will leave the main sequence (running out of hydrogen) within about 5 billion years. It will probably engulf the Earth within 8 billion years. Long before that, though, multiple other disastrous things are expected to happen. One item says that within only 600 million years, all plants that use C3 photosynthesis (99% of all plant species) will die. Another item says that the rest of the plants will probably die within 800 million years.

I don’t think any people are going to live to see any of that happen, though. I think we’ll die of stupidity long before that. (Life will probably still continue without us, though. E.g., the bacteria will have hundreds of millions of years to flourish without us.)

A Story of Early Doom

Hypothetical: Suppose we learned that a large asteroid was on a collision course for Earth. To best illustrate my point, let’s make up some more details. Suppose that somehow, we know 30 years in advance that the asteroid is coming. Scientists are unsure of whether the asteroid will actually hit us. Some say it will probably hit us; others say that it will almost certainly miss. The median estimate, let’s say, is a 5% chance of hitting the Earth. All agree that the impact would be disastrous, though they disagree on exactly how disastrous it would be.

Suppose, further, that engineers have devised various plans for averting the collision, each of which would require at least several years to implement, and would cost billions of dollars. There is disagreement on exactly how effective each plan is, how much each would cost, and how long each would take to complete. Every expert agrees, though, that at least some plan (if not multiple plans) should be attempted. Finally, to make my point clear, assume that the asteroid will in fact hit the Earth if nothing is done (though the scientists in the scenario are not yet certain of this), but that some of the plans people have devised would in fact work, if adopted in a timely manner.
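To make explicit why every expert in the story favors acting, here is a back-of-the-envelope expected-value calculation on the stipulated numbers (the world-population figure and the per-life valuation are my illustrative assumptions, not part of the story):

\[
\underbrace{0.05}_{\text{median impact probability}} \times \underbrace{7\times 10^{9}}_{\text{lives at risk}} = 3.5\times 10^{8}\ \text{expected lives saved.}
\]

Even at a conservative valuation of, say, \$$10^{6}$ per life, the expected benefit is on the order of \$$3.5\times 10^{14}$ (hundreds of trillions of dollars), against a cost measured in mere billions. Quibbling over which plan, or over whether the true probability is 3% or 8%, does not change the verdict.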

Question: Would we avoid the asteroid impact?

If you asked me this hypothetical 20 years ago, I would have taken for granted that humanity would, one way or another, come together to stop the threat. The last several years, however, have shown that human beings are a good deal stupider and all-around crappier than I previously comprehended. So today, I think there is a pretty high chance that some of the following would happen:

(a) Some political party takes up the cause to avoid the asteroid impact. The opposing political party or parties then immediately decide that “their side” must be pro-asteroid (or anti-asteroid-avoidance). The latter party uses their political power to stall asteroid avoidance plans. Members of the pro-asteroid party who cross the aisle and try to cooperate on asteroid-avoidance get labeled traitors by their party, whereupon they face primary challengers and are kicked out of office.

(b) Asteroid skeptics point to uncertainties in the science, arguing that we have no solid evidence that the asteroid is actually going to hit the Earth. They tout the most optimistic arguments about the asteroid, and magnify all uncertainties in the case for global disaster. They also point to common cost overruns in government programs and argue that we shouldn’t commit to spending unknown billions of dollars to avert a threat that almost certainly isn’t even going to kill anyone.

(c) Different groups of humans can’t agree on who should pay for asteroid avoidance. The Americans want China to pay more; China wants America to pay more. Both are angry at the Russians for refusing to pay anything, and nobody wants to be a sucker and let other nations free ride.

(d) The average human, having never witnessed an asteroid impact, does not intuitively believe that such things happen, and he refuses to believe the “arrogant”, egghead scientists. Web sites appear from trolls and opportunists with conspiracy theories about how the mainstream scientists are all lying and/or incompetent. Some say that you can just look up in the sky and see that there are no asteroids. They argue that there are no large asteroid impacts reported in all of human history, and thus this one is probably a hoax. They try to associate the asteroid theory with particular “identity groups”, and people who don’t belong to those groups then instinctively reject asteroid avoidance. These trolls get money because their unhinged claims attract clicks and hence generate revenue.

(e) More balanced news sites give equal attention to the orthodox position and the skeptical position, represented by the three scientists in the world who think the asteroid isn’t a serious threat.

(f) The U.S. President (who knows that he personally will not be around in 30 years) declares that the asteroid is “fake news” and a very dishonest hoax invented by the biased media and/or greedy astronomers trying to shake down the government for more money for their field. He tweets that if anyone just looks at the telescope images, they can see that the asteroid is a hoax. Millions of his followers retweet these comments, without in fact looking at the telescope images. A few others look at the images and find themselves unable to verify that the asteroid is really on a collision course, whereupon they conclude that the mainstream scientists are wrong.

(g) As it becomes clear that nothing is being done about the asteroid, scientists become increasingly active in trying to convince the masses. They try a variety of approaches. Some try sober, well-reasoned analysis. These scientists, however, are ignored because they are boring; also, skeptics take the scientists’ calm demeanor as proof that the scientists must be lying about the seriousness of the threat. Other scientists make increasingly alarmed and emotional appeals. The latter scientists, however, are dismissed by skeptics as being too emotional and obviously partisan.

(h) Nearly every person working in government knows that the asteroid threat is real, but many of them worry that they’ll be voted out of office if they try to do anything about it, because the asteroid issue is unpopular among the masses. They reason that it’s not worth losing their jobs for a small chance of saving humanity; also, they correctly reason that any given one of them can’t actually make a difference, if they don’t have the rest of their party behind them. Therefore, around half of the political leaders vote to do nothing. Or they vote for a “compromise plan” that takes only very weak, unlikely-to-succeed measures against the asteroid.

(i) When a member of party X complains about the asteroid threat and our failure to do anything about it, members of party Y ignore the issue and immediately start babbling comments like, “What about all the issues that your party hasn’t done anything about? What about the crimes committed by such-and-such politician? What about the threat of nuclear war, or biological weapons, or terrorism, or cancer? Cancer has killed a lot more people in history than asteroids!” This succeeds in derailing the conversation and preventing people from party Y from thinking about the asteroid for more than a few seconds at a time.

(j) The above goes on for 29 years. In the last year, scientists come to 100% agreement that the asteroid is in fact going to hit the Earth within a year and kill everyone. They also agree that it is too late to do anything about it. About half of all average human beings refuse to accept this, up until the day that the asteroid hits, killing billions of people and triggering Earth’s largest mass extinction.

Dying of Stupidity

That’s an example of what I mean by “dying of stupidity”. Specifically, I have in mind a scenario in which:

  1. A threat is identified by experts well in advance,
  2. It is agreed among experts to be serious (though there need not be agreement on exactly how serious),
  3. There are technically feasible plans known to experts that would stop the threat,
  4. The cost of trying to avert the problem is easily worth it and well-known, among the experts, to be so, and yet
  5. The threat is not stopped.

Any minimally smart species would in fact avert any such threat. From what I’ve observed of human beings, however, we are not such a species. There is thus a good chance that we will die of stupidity in the above sense.

Existential Threats

The above story is just an example. I don’t think we are actually going to die of an asteroid impact. We’ll probably have died of something else long before the next big asteroid hits.

In some ways, the asteroid scenario is actually a poor choice to illustrate my point. The asteroid threat is already on some people’s radar screen (pun intended), and some people are already looking out for asteroids. The scenario of a huge asteroid impact is simple, sensational, easy to understand, and relatively far from the main hot-button ideological issues. It’s also not all that expensive to avert. So there is a pretty good chance that we will develop an adequate asteroid defense before one becomes needed, assuming that nothing else kills us first.

The real thing to worry about would be a threat that is complicated or subtle, so that you need expertise to even understand how there is a threat; one that works over a long period of time and with no well-defined ending point; one that touches on hot-button ideological issues; or one whose probability and time of occurrence are extremely difficult to estimate. Those are the kinds of threats that we’re not going to address until it is too late.

By the way, in case you think my story is a metaphor for global warming, it isn’t. My story is meant as an example of one possible existential threat among many, most of which we probably cannot now anticipate. Global warming is not actually an existential threat – although the way people have responded to it should give us apprehension about how we would respond to a genuinely existential threat.

Take, for example, the projected end of C3 photosynthesis in 600 million years (about which I have almost no knowledge, as I am no biologist). To believe that this event is going to happen, one has to have a certain degree of intelligence, plus either expertise in biology or significant trust in experts – none of which the average American presently has. Also, it’s plausible that averting this event (if it could be averted at all) would require planning and very large-scale action, very long in advance. There would not, however, be any specific time – no particular year, or even any particular century – at which one could say that the plan had to be started. So it’s likely that there would be no particular point at which that issue would rise to the top of the political agenda. There would be no election year in which it would be politically advantageous to campaign on a promise to save C3 plants from extinction millions of years in the future.

Species Suicide

I don’t know what the most likely existential threat is. But here is one kind of scenario that I think is particularly likely: humanity is wiped out by one or more human beings working deliberately for that goal.

Aside: Some people worry that out-of-control AI may kill us. But I think we should worry more about out-of-control humans. We already have those. They already possess intelligence, unpredictable motives, and often insane, evil beliefs. Computers are way more predictable and controllable.

In the future, humans are going to have access to more and more powerful technology (including increasingly sophisticated computers, for any technical reasoning that is needed to carry out their plans). So it will become easier and easier to cause a large amount of harm. Now, you might say that advanced technology can also be used for defense, to protect against out-of-control people. That is true; however, it is almost a law of nature that it is easier to destroy than it is to protect or create things of value.

For example, at some not-too-distant future time, it might be technically feasible for an individual person to genetically engineer a virus capable of wiping out humanity. We might develop technology whereby a moderately intelligent person could produce stuff like that – perhaps with computer assistance, this person would not even have to be particularly expert in biology or medicine. If that technology appears, we are doomed. Someone is going to do it. Once the virus is released, it might be difficult or impossible to stop it.

Again, that is just an example. We will probably develop other technologies, thus far undreamt of, that will make it easy for individuals or small organizations to cause enormous amounts of harm. Most likely, these technologies would not be originally designed to cause harm. It’s just that powerful technologies that can do extremely valuable things will also generally let you do extremely bad stuff, if you have the opposite motives. Since we haven’t figured out how to control insane humans, and since a good number of us are crazy, we are very likely going to kill ourselves long before nature destroys us.

I don’t have a particular solution for this. I think we have to hope that humanity becomes less stupid and evil over the next few centuries, before whatever unknown threat appears that’s going to require coordinated action to prevent our extinction. Of course, since we are so awful now, most of us don’t give a crap whether that happens or not.

Great Philosophers Are Bad Philosophers

My introduction to philosophy was largely through the great philosophers of the past — the likes of Plato, Aristotle, Descartes, Hume, Kant. From the beginning, I was struck by how bad they were at thinking. Sometimes, they just seemed to be bad at logic, committing fallacies and non sequiturs that even an undergraduate such as myself could quickly see. Other times (almost always!), they seemed to have extremely poor judgment, happily proclaiming absurd conclusions to the world, rather than going back and questioning their starting points. I wondered why that was. Were these really the best philosophers humanity had produced?

Later, I figured out what (I think) was going on. These were not the best philosophers of the past that we were reading. They were merely the greatest philosophers. Skill at thinking was merely one criterion among many — and not a particularly central one at that — for ‘greatness’.

Bad Thinking

“What? How dare he say such things about Plato, and Kant, and even David Hume! Who does this Huemer guy think he is??”

Well, not a great philosopher. Just a good one. Let me give you some examples of what I’m talking about. Incidentally, of the great philosophers, Aristotle is perhaps the best at thinking. Most of his errors are pretty reasonable mistakes, if you lived in his time. Plato, Hume, and Kant, on the other hand, are all very unreasonable, illogical thinkers.

Plato

In The Republic (book I), Thrasymachus says that government leaders rule solely for their own good, and treat the populace the way a shepherd treats sheep, to be used for their wool and meat. Plato has Socrates respond to this by arguing that the art of the shepherd, as such, is only concerned with the good of the sheep. He goes on to talk about how the art of medicine is concerned with health. He also claims that no one would agree to be a ruler without being paid.

This is all just a terrible way of responding to the challenge. Thrasymachus’ statements about the sheep are just an analogy, which only serves to illustrate Thrasymachus’ point — whether shepherds really are concerned for the good of sheep is completely irrelevant. The relevant point is how leaders actually behave in the real world, which requires empirical evidence about leaders. Arguing about shepherds or doctors is pretty irrelevant to that, and arguing about what is the true “art” of medicine or of governing is definitely irrelevant. The one relevant point Socrates makes is that rulers would not be willing to rule without receiving payment. That, of course, is false. (But maybe this was less obvious in Socrates’ time?)

This isn’t an outlier case, either. The Platonic dialogues have these sorts of useless arguments by analogy all over the place.

Hume

The biggest problem with Hume is that he is constantly drawing absurdly skeptical conclusions, about almost everything, and this doesn't seem to bother him. He doesn't stop and say, "Wow, that sounds crazy. Maybe my starting assumptions are wrong?" This sort of dogmatism and lack of judgment is amazingly common among philosophers.

Here’s an example of Hume’s thinking. He starts with a hypothesis: all concepts (he calls them “ideas”) are formed by your first having a sensory experience (which he calls an “impression”), and then your mind making a sort of fainter copy of that sensory experience. In brief: all ideas are copies of impressions.

Later, he notices that certain concepts really don’t seem as if they could have been formed by copying impressions. For instance, the concept of the self, or the concept of causation as normally understood (I have to put that qualifier in there, because Hume also gives a revisionary account of causation). What would a rational person say at that point? “Ok, so my hypothesis was false. I wonder what the right account is?”

Not Hume. He just declares that we do not in fact have those concepts.

In another (in)famous passage, he talks about the “missing shade of blue”: imagine a person who has seen many shades of blue, but there is one particular shade he has never seen. You show the person a series of color swatches, arranged by hue, with the swatch for that particular shade removed. Could the person imagine the missing shade? Hume agrees that the person probably could imagine it. Then he basically says, “Yeah, that’s a counterexample to my theory, but it’s a weird case, so let’s not worry about it,” then continues using his hypothesis as if it were a known fact.

Kant

We all know about Kant’s ethical theory, centered on the “Categorical Imperative” (which has “three formulations” that are somehow one, kind of like the Holy Trinity). According to one formulation, you always have to act in such a way that you could will that the maxim of your action was a universal law. That’s supposed to capture all of morality, and you have to follow that principle no matter what. E.g., if a murderer comes to your door looking for his intended victim, and the victim happens to be hiding in your house, you can’t just lie to the murderer and tell him the victim is somewhere else. Because you can’t will that everyone always lie.

What about Kant’s argument for the Categorical Imperative? I bet you can’t say what the argument was, can you? That’s because almost no one covers it in classes or discusses it in the literature. And that’s because it’s completely unconvincing and not worth discussing, except to make points about bad arguments. Here is a key statement:

But if I think of a categorical imperative, I know immediately what it contains. For since the imperative contains besides the law only the necessity that the maxim should accord with this law, while the law contains no condition to which it is restricted, there is nothing remaining in it except the universality of law as such to which the maxim of the action should conform; and in effect this conformity alone is represented as necessary by the imperative.
There is, therefore, only one categorical imperative. It is: Act only according to that maxim by which you can at the same time will that it should become a universal law. (Foundations of the Metaphysics of Morals [1969], 44)

Got that? Okay, so that explains why you can’t lie to murderers to stop them from finding their intended victims.

How Bad Is That?

I note that the above are not examples of very subtle mistakes, nor are they attributable, say, to not having access to modern science or other modern discoveries. Those really are simply examples of people being very bad at thinking. No skilled thinker of any time should have said that kind of stuff. It's not as if, e.g., you had to study quantum mechanics or predicate logic in order to realize that you should not cling to a hypothesis after finding multiple counterexamples to it.

Greatness

What Is It?

So that explains why I say those people are bad philosophers. But then . . . in what sense can they possibly be deemed great philosophers?

Well, being “great” in the sense of “the Great Philosophers” isn’t really about being good at thinking in the normal sense. (The normal sense of being good at thinking, I take it, is about forming beliefs that are supported by the available evidence and likely to be correct, and only forming such beliefs.)

Greatness, however, is more about influence. Western philosophy is a big, 2000-year+ conversation. The “great” philosophers are the participants who had the greatest influence on that conversation. They said stuff that other people found interesting, and kept thinking about, and telling other people about for centuries after the great philosophers’ deaths. That’s what a ‘great’ philosopher is – not a philosopher who said a bunch of true stuff based on good reasons.

The Greatness-Badness Connection

Saying true stuff based on good reasons is not incompatible with greatness in that sense. But it isn’t particularly conducive to greatness; in fact, it is strongly anti-correlated with greatness.

Why is that? Think about how one goes about influencing a conversation, and getting other people to talk about oneself. It’s not by saying the most reasonable things. It’s by saying things that are interesting or enjoyable to discuss. You can’t be completely stupid, or else people won’t want to talk about your ideas, but it actually helps if your ideas are implausible. If someone says, “Things are pretty much the way they seem here,” that’s not going to stimulate much discussion. It’s when someone comes up with an idea that is new and amazing or outrageous that other people want to talk about it.

For perhaps the best example, look at Kant. His idea of locating space and time in us is startlingly original. Wouldn’t it be amazing if that was true? Or Hume: isn’t it just outrageous and amazing how he argues against the justification of basically everything we know? Plato is kind of amazing too, with the whole realm of perfect circles, perfect Justice, etc., that the soul grasped before our birth.

But, of course, most ideas that are amazing or outrageous are also very badly wrong. And most arguments for such ideas are of course going to be bad arguments. And so, most people who believe such arguments and ideas are going to have to be bad at thinking – that is, not reliably oriented toward the truth. They’re going to be people with poor judgment and/or poor reasoning skills, since otherwise they’d realize that these amazing ideas are almost certainly wrong.

Our Confusion

Not everyone realizes all this. Most people, I suspect, believe that the Great Philosophers are actually good at philosophy. The reason they think this is that they know the Great Philosophers are the ones whom our ancestors have passed down to us, out of a large number of people who wrote and thought in the past. I guess people assume that “philosophy” or “fame” is sort of like a conscious agent, so if it chooses to focus on certain thinkers and certain works to tell us about, it must be that those are the best and most important things. Or at least very good and important.

But it doesn’t work like that. The current canon of Western philosophy is a spontaneous order, and the factors by which people were selected for inclusion need not involve truth or cogency of reasoning. There is no gatekeeper to say, “This idea is too obviously wrong; I’m not going to let people talk about it.”

Caveat

I don’t know the other thinkers of the past who didn’t get into the canon. So I don’t actually know whether they were generally any better at reasoning than the Greats. It could be that the Greats, despite how bad they were at thinking, were still better than the other thinkers of their time.

(Why) Does Terrorism Work?

Evidence that it Doesn’t Work

I looked at this question briefly in The Problem of Political Authority. The evidence I found from political science suggested that terrorism is rarely effective – i.e., terrorist groups hardly ever attain their stated political goals. In many cases, their goals are set back because of their attacks. This is especially true when they target civilians (cases of successful terrorism tend to be attacks on military targets).

The reason is basically that terrorist attacks, especially on civilians, increase public support for hardline, hawkish politicians, who tend to do the opposite of whatever the terrorists wanted. E.g., the 9/11/2001 attack resulted in two decades of hawkish politicians in America, in which nearly everyone in both parties agreed on a highly aggressive stance in the Middle East. It caused America to invade two countries and topple their governments, killing hundreds of thousands of (mostly Muslim) civilians, in addition to bombing many more people in multiple countries in the Middle East. So, even though the attack succeeded in causing harm to Americans, it was a spectacular failure from the standpoint of benefiting Islam or people in Muslim countries.

Wtf Are They Thinking?

I think if you’re planning to kill people, you’re obligated to think hard about it first, and to try very hard to verify that your plan will actually produce the (putatively) good consequences that you anticipate.

The stuff I said above about 9/11 was not hard to anticipate. It was completely obvious that the U.S. government was not going to swiftly withdraw from the Middle East and that it would instead go to war in the Middle East. Any idiot could also tell that this war or collection of wars would be extremely destructive to the people in the Middle East.

The general empirical facts about terrorist success are no secret either. Anyone who was thinking about doing some terrorism could do some investigation — “Hm, I wonder how often this works?” — and they would find out that it rarely works and often backfires.

So, one can only conclude that terrorists don’t make serious efforts to verify that their killings will advance, rather than hinder, their stated goals.

That’s odd. Why would you go kill a bunch of people, and even sacrifice your own life, when you have approximately no evidence for thinking that it will help your cause, and it may instead harm the cause?

I am not sure. But this makes me suspect that terrorists’ actual motives are not what they often say. E.g., Islamic terrorists don’t care about advancing Islam, or helping Muslim people, or even reducing foreign intervention in Muslim countries. None of that. They don’t want to help the ingroup. They just want to hurt the outgroup. In other words, their motive is hate.

A lot of human behavior is like that.

Wait, Maybe it Worked

As I say, I thought that 9/11 was an excellent example of how a “successful” terrorist attack (as in, the operation was carried out according to plan) can be a complete failure from the standpoint of advancing the motivating cause. I thought that in 2012, when I wrote that book.

More recently, it occurred to me that the 9/11 attack might have worked, to a certain extent. The U.S. didn’t get out of the Middle East, no, and things have gotten worse for Muslims in the Middle East. But Islam has made cultural inroads in the West. Bizarrely enough, I think Islam is more popular in the West, much better respected in certain (non-Muslim) circles, than it was before that attack. I would not have predicted this at all.

More specifically, Islam has become popular among left-wing people. (Of course, not among right-wingers.) Muslims are now one of their favored identity groups – along with women, blacks, Latinos, etc.

I think this is due to 9/11, because as I recall, before 9/11, Islam was not on the radar screen of mainstream U.S. society. People did not talk about it. 9/11 evoked some “Islamophobia” (as people say), which in turn evoked a counter-reaction from the left, turning a fair number of them pro-Islam.

As a case in point, see Ayaan Hirsi Ali, the Muslim apostate who escaped from severe and very tangible oppression in Somalia and has gone on to speak out boldly and insistently, and at great risk to her own physical safety, against the oppression of women in the Islamic world.

Hirsi Ali spoke at CU-Boulder on Monday evening. She spoke of how her home country was riven with tribal conflict, and of her concern that America and the West are descending into tribalism and abandoning the values that made us peaceful and prosperous.

You might assume that Hirsi Ali would be a hero for feminists and social justice crusaders across the world. Instead, many progressives are more interested in attacking her as an “Islamophobe” (https://en.wikipedia.org/wiki/Ayaan_Hirsi_Ali#Criticism) – in effect siding with the oppressors against an oppressed woman of color.

Wtf Are We Thinking?

This is on its face odd, because Islam is not exactly a progressive belief system. The Islamic world is, let us say, not full of liberal feminists and LGBTQ activists. (What if progressives were protesting against people who criticize Trump, calling them “Trumpophobes”?) So what’s going on?

One possibility: appeasement. Left-wing individuals seem to like the idea of appeasement more than right-wingers. Maybe terrorism has worked on the progressives in exactly the way the terrorists intended: left-wing Westerners have been frightened into deferring to Islam.

That is the charitable read. Here is another possibility. Remember how I said above that the motives of terrorists are not what they say? That they don’t care about benefiting their own side, but only about harming their enemies?

The same may be true of a significant portion of Western progressives: they don’t mainly want to help their favored groups; they mainly want to harm their disfavored groups. They don’t, for example, aim to help the poor, or women, or blacks, but simply to harm the rich, men, and whites. Of particular import here: they don’t want to help other countries; they want to hurt America.

That would explain how progressives could be on the same side as Islamic extremists – how they could, for example, side with Muslim traditionalists against Hirsi Ali. If progressives just wanted to help oppressed minorities, they would join people like Hirsi Ali in calling for an Islamic reformation. But if they mainly wanted to hurt America, then they would probably side with the Islamic extremists.

Hostility to One’s Own

Obviously, the above would apply only to some leftists. Many others would in fact side with (e.g.) Hirsi Ali, just as one would expect a feminist or advocate for social justice to do.

You may still think that I have given an excessively uncharitable interpretation, even of just a significant number of progressives. If you’re tempted to say that, then you probably haven’t read enough academic political discourse. So let me help you out.

The following is an excerpt from an essay by Ward Churchill, formerly a Professor of Ethnic Studies at my own university (before he was fired for plagiarism). Churchill wrote this the day after the 9/11 attack, discussing the terrorists who destroyed the World Trade Center (https://cryptome.org/ward-churchill.htm):

They did not license themselves to “target innocent civilians.”
[…]
Let’s get a grip here, shall we? True enough, they [the people in the World Trade Center] were civilians of a sort. But innocent? Gimme a break. They formed a technocratic corps at the very heart of America’s global financial empire – the “mighty engine of profit” to which the military dimension of U.S. policy has always been enslaved – and they did so both willingly and knowingly. […] To the extent that any of them were unaware of the costs and consequences to others of what they were involved in – and in many cases excelling at – it was because of their absolute refusal to see. More likely, it was because they were too busy braying, incessantly and self-importantly, into their cell phones, arranging power lunches and stock transactions, each of which translated, conveniently out of sight, mind and smelling distance, into the starved and rotting flesh of infants. If there was a better, more effective, or in fact any other way of visiting some penalty befitting their participation upon the little Eichmanns inhabiting the sterile sanctuary of the twin towers, I’d really be interested in hearing about it.

Caveat: This is anecdotal. But, reading the above passage, I get a really strong sense that (a) the author hates America and a large portion of Americans, (b) the author is happy about the 9/11 attack, (c) the author puts himself on the same side as the terrorists.

Granted, that is an extreme case (which is why it caused a great controversy when people noticed the article). But it is not unusual in the academic world to read texts with a strong anti-American (and anti-white, anti-male, anti-wealthy, etc.) emotional tone. My subjective impression is that my fellow academics do not merely criticize America (or men, white people, etc.) in the hope of improving these things. Many have a deep-seated emotional hostility that they just need to let out.

I’m not going to talk now about whether such hostility is justified. My point right now is that this could explain how terrorism might succeed: violent attacks on a society could further one’s cause, if a sufficient number of influential people within that society are actually extremely hostile to their own society. Those people might then gain sympathy for the terrorists and their cause.

I think this would work only in very rare circumstances; it is very unusual for a significant portion of a society’s influential people to be that hostile to their own society. But, strange as it seems, I think that has happened to us.

They’re Not on Your Side

I guess I have one more point to make, on the off chance that this isn’t completely obvious to everyone reading this. If you’re a progressive, and you think Islamic terrorists are on your side, then you’re an even bigger fool than the people who read Donald Trump’s tweets and believe them. The Islamists – the ones who blow up buildings and issue fatwas – would not hesitate for one second to kill you, if they somehow had the power in your society. They would have no problem with murdering people for being gay, either, or for being atheists, or for criticizing a theocratic government, or for being victims of rape.

So I think we in the West need to put a leash on our self-hatred long enough to realize that not everyone who attacks us does so in the name of justice.

You Don’t Agree with Karl Popper

I’m teaching Philosophy of Science this semester. It’s a fun class. I just had occasion to discuss the philosophy of Sir Karl Popper, who was among the most influential philosophers of science of the last century (and therefore of all time). He had an enormous influence on our intellectual culture, especially on scientists, and he is the reason why you hear complaints about “unfalsifiability” from time to time (accusations that religion is “unfalsifiable”, etc.).

If you’re a philosopher of science, you probably don’t take Popper’s philosophy seriously. But if you’re an ordinary person, or a libertarian, or especially a scientist, there is a pretty good chance that you think you agree with Karl Popper.

So, as a public service, I am here to explain to you that no, you probably do not agree with Popper at all — unless you are completely out of your mind.

What He Said

You probably associate Popper with these ideas: It’s impossible to verify a theory, with any number of observations. Yet a single observation can refute a theory. Also, science is mainly about trying to refute theories. The way science proceeds is that you start with a hypothesis, deduce some observational predictions, and then see whether those predictions are correct. You start with the ones that you think are most likely to be wrong, because you’re trying to falsify the theory. Theories that can’t in principle be falsified are bad. Theories that could have been falsified but have survived lots of attempts to falsify them are good.

I wrote that vaguely enough that it’s more or less what Popper said. And you might basically agree with the above without being insane. But the above paragraph is vague and ambiguous, and it leaves out the insane core of Popper’s philosophy. If you know a little bit about him, there is a good chance that you completely missed the insane part.

The insane part starts with “deductivism”: the view that the only legitimate kind of reasoning is deduction. Induction is completely worthless; probabilistic reasoning is worthless.

If you know a little about Popper, you probably think he said that we can never be absolutely certain of a scientific theory. No, that’s not his point (nor was it Hume’s point). His point is that there is not the slightest reason to think that any scientific theory is true, or close to true, or likely to be true, or anything else at all in this neighborhood that a normal person might want to say.

Thus, there is no reason whatsoever to believe the Theory of Evolution. Other ways of saying this: we have no evidence for, no support for the Theory of Evolution. There’s no reason to think it’s any more likely that we evolved by natural selection than that God created us in 4004 B.C. The Theory of Evolution is just a completely arbitrary guess.

(This is not something special about Evolution, of course; he’d say that about every scientific theory.)

This is not a minor or peripheral part of his philosophy. This is the core of his philosophy. His starting point is deductivism, which very quickly implies radical skepticism. The deductivism is the reason for all the emphasis on falsification: he decided that since one can’t deduce the truth of a theory from observations, the goal of science must not be to identify truths. But, since we can deduce the falsity of a theory from observations, the goal of science must be to refute theories.

As I say, most people don’t realize that this is Popper’s view — even though he makes it totally clear and explicit. I think there are two reasons why people don’t realize this: (a) the view is so wildly absurd that when you read it, you can’t believe that it means what it says; (b) Popper’s emotional attitude about science is unmistakably positive, and he clearly doesn’t like the things that he calls “unscientific”. So one would assume that his philosophy must give us a basis for saying that scientific theories are more likely to be correct than unscientific ones. But then one would be wrong.

Now, in case you still can’t believe that Popper holds the irrational views I just ascribed to him, here are some quotations:

“We must regard all laws and theories as guesses.” (Objective Knowledge, 9)

“There are no such things as good positive reasons.” (The Philosophy of Karl Popper, 1043)

“Belief, of course, is never rational: it is rational to suspend belief.” (PKP, 69)

“I never assume that by force of ‘verified’ conclusions, theories can be established as ‘true’, or even as merely ‘probable’.” (Logic of Scientific Discovery, 10)

“[O]f two hypotheses, the one that is logically stronger, or more informative, or better testable, and thus the one which can be better corroborated, is always less probable — on any given evidence — than the other.” (LSD, 374)

“[I]n an infinite universe […] the probability of any (non-tautological) universal law will be zero.” (LSD, 375; emphasis in original)

About the last quotation, note that many scientific theories contain universal laws (e.g., the law of gravity). So, Popper is not just denying that we can be certain of these theories, and not just denying that they are likely to be true; he claims that they are absolutely certain to be false.
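To see where that zero comes from, here is one standard reconstruction of the reasoning behind Popper's claim (my gloss, not a quotation from Popper). It assumes that a universal law amounts to an infinite conjunction of instances, each with prior probability at most some fixed p < 1, independently of the others; both assumptions are exactly where Popper's critics push back.

```latex
% One reconstruction of the zero-probability claim.
% Let the law L say: for all objects x, A(x). In an infinite
% universe there are infinitely many instances x_1, x_2, \dots,
% and suppose each has probability at most p < 1, independently.
P(L) \;=\; P\big(A(x_1) \wedge A(x_2) \wedge \cdots\big)
     \;\le\; \lim_{n \to \infty} p^{\,n} \;=\; 0.
```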

In the penultimate quotation, note the “on any given evidence” clause: When you get done testing your scientific theory, and it survives all tests, you can’t say that it’s likely to be correct; it’s less likely to be correct, even after you’ve gathered all the evidence, than some unfalsifiable, unscientific theory.
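The probabilistic fact behind the penultimate quotation is elementary, and Popper is right about it as far as it goes: logical strength can only lower probability. The dispute is over what he infers from it.

```latex
% If T_1 is logically stronger than T_2 (T_1 entails T_2), then
% every possibility in which T_1 holds is one in which T_2 holds, so
P(T_1 \mid E) \;\le\; P(T_2 \mid E) \quad \text{for any evidence } E.
% Critics grant the inequality but deny Popper's conclusion that
% science therefore cannot aim at probable truth.
```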

All of this is the sort of view that you would expect from the most extreme science-hater. The weird thing about Popper is that he inexplicably combines this stuff with a strong positive evaluation of science. We have no reason to believe in science, and pseudoscience is more likely to be correct, and in fact the paradigmatic scientific theories are definitely wrong . . . but hey, isn’t science great? Okay, now let’s get on with the wonderful science stuff!

I can’t explain this combination of attitudes. I don’t think Popper ever attempted to explain it himself.

By the way, the core idea — deductivism, and inductive skepticism — seems to be surprisingly popular among philosophers. It’s ridiculous. It’s like if a major position within geology were that there are no rocks.

How He’s Wrong

I’m only talking about objections that I like here, so I’ll ignore objections based on Thomas Kuhn.

The Duhem-Quine Thesis

This is something widely accepted in philosophy of science: a typical scientific theory doesn’t entail any observational predictions by itself. You at least need some auxiliary assumptions.

Ex.: Newton’s Theory of Gravity, together with Newton’s Laws of Motion, is sometimes said to entail predictions about the orbits of the planets (in particular, to predict Kepler’s laws). But that’s false. Newton’s second law only gives the acceleration of a body as a function of the total force acting on it. The Law of Gravity doesn’t tell you the total force on anything; nor is “total force” observable. So there are no observational predictions from this set of laws, even when combined with all our observations.

Of course, the laws and the observations make certain patterns of motion more plausible or likely than others. If the planets moved in squares around the night sky, it would be very implausible to explain that using Newton’s laws + the hypothesis of some unknown, unobservable forces. But that is completely irrelevant for Popper. Again, for Popper, the only thing that counts as scientific reasoning is deducing the falsity of a theory from observations. You’re not allowed to appeal to any probabilistic judgments to support a theory.

Probabilistic Theories

Another counter-example: Quantum Mechanics. It’s a scientific theory if anything is. But it is clearly unfalsifiable, because all of its observational predictions are probabilistic. It enables one to calculate the probabilities of different possible observed results, but a probabilistic claim (as long as the probability isn’t 0 or 1) can never be falsified — i.e., you can’t logically deduce the falsity of the probability claim from observations. And again, that’s the only thing you’re allowed to appeal to. So, on Popper’s view, quantum mechanics must be unscientific.
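The point can be made concrete with a toy calculation (my illustration, not Popper's). Suppose a theory assigns probability 0.5 to a certain measurement outcome. Then any finite run of observations, however improbable, still has nonzero likelihood under the theory, so its falsity can never be deduced from the data:

```python
# Why probabilistic predictions can't be deductively falsified:
# even an extreme run of observations is logically consistent
# with the probabilistic claim.

def likelihood(num_up: int, num_trials: int, p: float = 0.5) -> float:
    """Probability of one particular sequence with num_up 'up' results."""
    return (p ** num_up) * ((1 - p) ** (num_trials - num_up))

# 100 measurements, all "up": astronomically unlikely if p = 0.5...
lik = likelihood(100, 100)
print(lik > 0)  # ...but still strictly positive, so no contradiction.
```

However many observations you pile up, the likelihood only shrinks; it never hits zero, which is what a deductive refutation would require.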

Evolution

But QM is weird. So let’s think about some perfectly ordinary, paradigmatic examples of scientific theories. Real scientific theories, by the way, are not normally of the form “All A’s are B” (as in philosophers’ examples).

Here’s one: the Theory of Evolution. Humans and other living things evolved by natural selection from simpler organisms, over a long period of time. Here’s an example of the evidence for this: some of the larger constrictor snakes have degenerate hind limbs underneath the skin. This can be explained, in the theory of evolution, by the hypothesis that they evolved from lizards. On the rival theory (Creationism), there’s no obvious explanation.

This isn’t a matter of deduction. The Theory of Evolution does not entail that the larger constrictors would have subcutaneous degenerate hind limbs. It merely gives a reasonable explanation of the phenomenon, which the rival theory doesn’t (but creationism doesn’t entail that there wouldn’t be such degenerate hind limbs; it merely fails to explain why there would).

The Dinosaur Extinction

Here’s another theory: the dinosaurs were driven extinct by a large asteroid impact. And here’s some evidence: there is an enormous crater at the edge of the Yucatan Peninsula in Mexico (the Chicxulub Crater), partly underwater. The crater has been dated to about the time of the Cretaceous–Paleogene extinction event. That’s evidence that an asteroid impact caused the mass extinction.

Again, that’s not deductive. The asteroid-impact theory of the extinction does not entail that we would find a crater. (It’s logically possible that the crater would have been filled in, or that the asteroid hit a mountain and didn’t leave a visible crater, or that the crater was somewhere we haven’t looked, etc.) It merely makes it much more likely that we would find a crater.

So, Popper’s philosophy entails that the Theory of Evolution and the asteroid-impact theory are unscientific, and moreover that we have no evidence at all for either of them.

The Obvious

Of course, the obvious problem is that it’s absurd to say that we don’t have any reason to think any scientific theory is true. We have excellent reasons, for example, to think that humans evolved by natural selection. There is not any serious doubt about that in biology, and that is not something that we should be arguing about. And in fact, I’m not going to argue about it, because I just don’t think that’s serious.

The Correct Theory

What is the correct view of scientific reasoning? Basically, the Bayesian view: it’s probabilistic reasoning.

Take the example of the degenerate hind limbs again: that is evidence for the theory of evolution because it’s more likely that we would see stuff like that if organisms evolved by natural selection, than it is if they were all created by God in 4004 B.C. (or, in general, if they did not evolve). In standard probability theory, that entails that Evolution is rendered more probable by our seeing things like the degenerate hind limbs.
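In code, the update looks like this — a minimal sketch, where the prior and likelihoods are made-up illustrative numbers, not real estimates:

```python
# Bayes' theorem: evidence that is more likely given h than given ~h
# raises the probability of h.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(h | e), computed via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5                # initial credence in evolution (illustrative)
p_limbs_if_evolved = 0.9   # degenerate hind limbs plausible if snakes evolved from lizards
p_limbs_if_not = 0.01      # hard to explain on the rival theory

print(posterior(prior, p_limbs_if_evolved, p_limbs_if_not))  # ≈ 0.989
```

The exact numbers don’t matter; what matters is the likelihood ratio. As long as the evidence is more probable given the hypothesis than given its denial, the posterior exceeds the prior.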

Why Care About Falsifiability?

There really is something important about falsifiability. Intuitively, there is something bad about unfalsifiable theories, and we have Popper to thank for drawing attention to this.

Unfortunately, almost no one seems to have any idea why falsifiability matters, and Popper did not help with that situation, because his theory is incapable of accepting the correct explanation.

The correct explanation is a probabilistic/Bayesian account. In the Bayesian view, a hypothesis is supported when P(h|e) > P(h) (the probability of hypothesis h given evidence e is greater than the initial probability of h). It is a trivial theorem that, for any e, P(h|e) > P(h) iff P(h|~e) < P(h). In other words: e would be evidence for h if and only if the falsity of e would be evidence against h.
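Why is the theorem trivial? Because P(h) = P(h|e)P(e) + P(h|~e)P(~e) is a weighted average of the two conditional probabilities, so whenever one sits above P(h), the other must sit below it. A quick numerical sketch (the probability assignments are arbitrary):

```python
from itertools import product

def conditionals(p_h: float, p_e_given_h: float, p_e_given_not_h: float):
    """Return (P(h|e), P(h|~e)) via Bayes' theorem."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return (p_e_given_h * p_h / p_e,
            (1 - p_e_given_h) * p_h / (1 - p_e))

# For every assignment on which e actually bears on h (unequal likelihoods),
# e supports h exactly when ~e would undermine h:
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
ok = all(
    (conditionals(p_h, a, b)[0] > p_h) == (conditionals(p_h, a, b)[1] < p_h)
    for p_h, a, b in product(grid, repeat=3)
    if a != b
)
print(ok)  # True
```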

That means that if nothing counts as evidence against h, then nothing counts as evidence for h either. But if there’s no evidence for h, then one typically shouldn’t believe it. This is why you shouldn’t believe unfalsifiable theories. By contrast, falsifiable theories are supportable: when one tries and fails to falsify them, their probability goes up.

The point is more general than a point about Popperian falsifiability. Let’s say a theory is “falsifiable” iff one could deduce its falsity from some possible observations, and a theory is “testable” iff there are some observations that would lower the probability of the theory. Then the general point to make is that one can have evidence for a theory iff the theory is testable (it needn’t also be falsifiable), and good scientific theories should be testable.

Popper couldn’t say this, because he was obsessed with deduction, and this is a point about probabilistic reasoning, not deduction. Popper didn’t eschew all talk of probabilities; he just insisted on the most perverse probability judgments (e.g., that scientific theories are less likely to be correct than unscientific ones, even after they survive stringent tests). Which would make one wonder why anyone should prefer scientific theories. Obviously, the correct view is that scientific theories, after surviving tests, become more probable.

The Positive Side of Murder

I’ve seen several people commenting on the recent assassination of the Iranian general Qassem Suleimani (https://en.wikipedia.org/wiki/2020_Baghdad_International_Airport_airstrike). From what I can see, it appears that everyone is against it (but maybe that’s just liberals and libertarians). I’m not quite sure why, though. This is a good time for some comments on assassination.

Is Assassination Just?

Assume you have someone who is going to cause a lot of unjust harm to others. In the present case, I assume that U.S. intelligence is basically correct that Suleimani was a murderer and terrorist, and he was going to murder more people in the future if someone didn’t stop him. I don’t know the details about this, but this doesn’t seem to be disputed. (I do not assume that any attack was imminent, as there seems to be no evidence of this.)

Suppose further that the only way to stop this person is to kill him, or to kill some other people, like several of the people who work for him. That also seems plausible in this case.

In that case, should you assassinate the evildoer?

I don’t see why not. If Suleimani wasn’t a morally legitimate target, I don’t know who would be. The person who is ordering other people to commit evil deeds is surely at least as responsible as the people who directly carry them out. So the top military official is at least as legitimate a target as rank-and-file soldiers in the military. And it’s obviously better to kill him than to kill multiple other people.

According to the NYT, killing Suleimani was the “most extreme” option that Pentagon officials offered to Trump, to respond to Iranian aggression. (https://www.nytimes.com/2020/01/04/us/politics/trump-suleimani.html) Apparently, though, the more “moderate” and more conventional options would have involved killing many people, but only lower-ranking soldiers or militia members, instead of a general.

By what deranged moral metric is it better to kill multiple lower-ranking people than the guy at the top who is ordering people to commit evil deeds?

Why Are People Shocked?

When U.S. forces were hunting Osama bin Laden, no one seemed to have any trouble understanding why it made sense to go particularly after him, as the leader of al Qaeda, rather than merely pursuing his footsoldiers. Of course you would want to take out the guy at the top, or someone close to the top. And when SEAL Team Six assassinated bin Laden, no one seemed to have any problem with that. As a matter of fact, I recall people celebrating.

So why are commentators so shocked by the “extreme” action of assassinating Suleimani? One possibility is that people are shocked because it’s an action by Trump. Anything that Trump does, we’re primed to see as crazy.

Another possibility is that the action seems extreme and shocking because . . . Trump attacked a high government official. Usually, you just attack soldiers. In the bin Laden case, bin Laden was the leader of a non-governmental terrorist group. So of course you can assassinate someone like that. And of course you can kill multiple completely innocent civilians in the course of targeting members of private terrorist groups. But killing a government official? Dear God, is nothing sacred? Has Mr. Trump no limits?!

Is It Prudent?

Perhaps that’s uncharitable. Perhaps the main worry is that assassinating foreign government officials is likely to start a war. (If that’s your worry now, then when the war fails to materialize, presumably you will admit that the assassination was a good move. Right?)

The theory, I suppose, is that assassinating officials is more “provocative” than killing ordinary soldiers or civilians (as we traditionally do) — it’s more provocative to the people who run the foreign government, because those people don’t give a crap about footsoldiers and civilians.

Well, that’s one theory. Here’s another theory, one that relies on premises that my libertarian friends seem to accept in other contexts. Assassinating a high-ranking government official makes it less likely that we will go to war with the foreign country, as compared to merely attacking their troops.

Why? Because the government officials who make the decisions about whether to go to war don’t give a crap about soldiers and civilians. If attacking the U.S. means that the U.S. kills more Iranian soldiers or civilians, then, plausibly, the Iranian government will continue to attack the U.S. They might just be fine with sacrificing multiple such “unimportant” lives, year after year, for the sake of their ideology and their hatred of the U.S.

Assassinating a top government official changes the calculus. It lets the Iranian government know that you’re not just going to target the civilians and soldiers that they don’t care about. You’re going to target them, the people making the decisions. Then they have to think hard about how much they really care about their ideology and their hatred. Do they care enough to sacrifice their own lives?

This is one reason why there is so much war in the history of the state: because the people who make the decisions about going to war almost never have to bear the costs. It’s a lot easier to see the merits of sending other people into a war than it is to see the merits of starting a war in which you yourself are likely to die.

From that point of view, Iran’s decision to essentially back down (only launching missiles that killed no one) (https://www.theguardian.com/us-news/2020/jan/08/irans-assault-on-us-bases-in-iraq-might-satisfy-both-sides), is utterly unsurprising. The Ayatollah is not stupid. He knows what will happen in a war with the U.S. The Iranian government, obviously, would be defeated. Very likely, the U.S. would topple the government and kill the leader, since that’s what happened the last time we went to war in the region. (This might in turn cause ISIS or other terrorist groups to gain more followers, but not before Khamenei himself was dead.)

In fact, Trump just might order Khamenei to be assassinated even without a war. This realization is more likely to curtail Iranian aggression than any amount of economic sanctions or attacks on Iranian soldiers.

Is It Legal?

Having said all that, there is a distinct, purely descriptive, legal question: was the assassination (and other, similar assassinations) legal?

To that, the answer is “obviously not”. There is a little crime called “murder” that you’re considered guilty of when you deliberately kill people. You’re also legally considered to be guilty of it when you successfully direct someone else to kill someone. So, on the face of it, Donald Trump is guilty of murder. That’s not partisan rhetoric, by the way. That’s just an objective, descriptive fact.

Now, there are exceptions, where you can kill someone and not be legally considered a murderer. One exception is for killing in war. But the U.S. is not at war with Iran (thus far!), so that would be a tough case to make.

Another exception is for self-defense or defense of innocent third parties. This is what the Trump administration would like to claim. But, in American law, in order to employ the “defensive killing” defense, you have to argue that the person you killed posed an imminent threat to life or limb. It’s not enough to say that the person was eventually going to kill you or some innocent third party. You have to argue that the person was just about to do it, right when you killed him. (If they were merely going to do it at some further future time, you’re supposed to call the police, run away, or something like that.)

That, obviously, is why Trump administration officials keep insisting that Suleimani was planning an “imminent” attack, at the time U.S. forces killed him. Because if it wasn’t imminent, then, according to our own law, we murdered him. We sure don’t want to say we’re murderers; therefore, it must have been imminent . . . but we can’t provide any information whatsoever about this attack that we’re talking about. We don’t know where it would have been, or when, and we can’t tell you a single thing about the evidence we have for this, but . . . believe us.

Obviously, they’re lying. If you don’t know that, then I would like to sell you the Brooklyn Bridge seven times, because that’s how absurdly gullible you are.

The Obama administration, by the way, ran into the same issue. They also liked to assassinate people with drones, but they didn’t want to call themselves murderers. (Bush also used some drone strikes, but nowhere near as many.) So the Obama Justice Department wrote up a memo explaining that the people they were targeting were all posing “imminent” threats. They seem to have vastly expanded the meaning of “imminent”. In the ordinary context (i.e., if you’re not a government official), an imminent attack pretty much has to be just about to occur — surely within hours, if not seconds. But in the new sense introduced to justify government killings, an “imminent” attack might be coming within a few years.

So the last three Presidents were probably murderers, legally speaking. Of course, you might think that these were mostly good murders. Be that as it may, the Suleimani murder really doesn’t seem to have been a particularly bad or shocking one, among the murders that our leaders have been carrying out over the past couple of decades.

Outlaw Universities

Discrimination in Academia

Probably everyone in the academy knows that affirmative action is widely practiced: racial minorities (except Asians) and women are commonly given preference in hiring and admission decisions at American universities. I would guess, however, that some academics — and many more non-academics — are unaware that typical university hiring practices are blatantly illegal. So I’m going to talk about that for a while, in case you find that interesting.

Job advertisements commonly say that the university rejects discrimination, supports equal employment opportunity, and considers all applicants “without regard to” race, sex, religion, etc. What they actually mean by this is that they only discriminate in certain specific ways. E.g., they don’t discriminate against blacks or Hispanics, but only against whites and Asians. They don’t discriminate against women, only against men. And so on. (This sounds to me like a rather Orwellian use of “equal opportunity”. But what do I know? I’m just some crazy libertarian philosopher.)

I think pretty much everyone in the academic subculture knows this. I don’t know if this is widely known outside academia, though.

Here are a couple of examples:

Example 1: the University of California

This was in the news recently. The University of California, at 8 of its 10 campuses, requires all job applicants to submit a “diversity statement” explaining how they will add to the “diversity” of the faculty. You can read about it on the UCLA site: http://ucla.app.box.com/v/edi-statement-faqs

Great — they want diverse viewpoints! Maybe they’re finally going to add some differing ideological perspectives to the chorus of left-leaning professors, right?

Well, if you think that, I have some bridges to sell you. The requirement is obviously designed to help exclude whites, men, and non-leftists. Here, you can read about some of the results achieved at UC Berkeley: https://ofew.berkeley.edu/sites/default/files/life_sciences_inititatve.year_end_report_summary.pdf. Here are a couple of tables from that report:

That’s from two searches they did. They’re so proud of what they’ve done that they posted that publicly. Can anyone doubt that their aim is to increase the number of women and racial minorities hired?

Now, as I say, every professor already knows the score. You know that if you apply to UC, and you belong to a minority, you should talk about that in your “diversity statement”. You know that the hiring committee wants to ask you your race and gender, but they can’t legally do this, so they put in the “diversity statement” requirement as a proxy. If you want the job, you’ll play ball.

Example 2: The Other UC

We have a more limited version of the idea at my own school: https://www.colorado.edu/postdoctoralaffairs/current-postdocs/chancellors-postdoctoral-fellowship-diversity-program

Those are special post-doctoral fellowships earmarked for candidates who “contribute to diversity”. The successful candidates are meant to become regular tenure-track professors at the end of the fellowship. The candidates, again, have to submit a diversity statement. And again, the university does not explicitly ask you about your race or gender (since that would be illegal), nor do they explicitly say that they are going to exclude white men. But everyone knows the score.

The Relevant Law

Now, what is remarkable about all this? Why do I claim it is illegal?

The Civil Rights Act

Here is one relevant law, with which anyone who makes hiring decisions ought to be familiar: it’s called the Civil Rights Act of 1964. Here are some pertinent quotations (Title VII):

“It shall be an unlawful employment practice for an employer – (1) to fail or refuse to hire or to discharge any individual, or otherwise to discriminate against any individual with respect to his compensation, terms, conditions, or privileges of employment, because of such individual’s race, color, religion, sex, or national origin . . .”
“Notwithstanding any other provision of this subchapter, (1) it shall not be an unlawful employment practice for an employer to hire and employ employees . . . on the basis of his religion, sex, or national origin in those certain instances where religion, sex, or national origin is a bona fide occupational qualification reasonably necessary to the normal operation of that particular business or enterprise . . .”

Notice some things about this:

  • It doesn’t say that it’s illegal to discriminate against women; it says that it is illegal to discriminate on the basis of sex. It doesn’t say it’s illegal to discriminate against blacks; it says it’s illegal to discriminate on the basis of race. There is no asymmetry drawn between the genders, nor between races. It doesn’t make any distinction between the majority and the minority, or between historically privileged and historically disadvantaged groups, or anything like that. So there is just no legal basis for saying that it would be okay to discriminate in one direction but not the other.
  • The second paragraph quoted above makes an exception: you can discriminate on the basis of religion, sex, or national origin, if one of those things is a genuine qualification for the job.
  • Notice what is conspicuously absent from that exception: the text pointedly does not mention race or color in stating the exception. That means the exception does not apply to race and color. Meaning that it remains illegal to discriminate based on race even if a person’s race is a genuine job qualification. This rules out claiming that minority professors will do better at teaching, or in some other way be more suited to the job.
  • You also can’t claim that excluding white men doesn’t count as “discrimination” (say, because it’s only possible to “discriminate against” minorities). Aside from being factually false, that claim runs up against the first clause, which prohibits “failing or refusing to hire” a person because of their race, etc. There is no way that these universities can deny that they have “failed to hire” certain people for particular positions because of those people’s race and sex.

This really is not ambiguous at all. The racial preferences are just clearly completely illegal.

It’s ironic that this law — one of the great triumphs of the Civil Rights Movement of the 1960’s — is precisely the one that modern-day leftists, who fancy themselves heirs to that movement, are most at odds with. This indicates how far that movement has strayed from its founding ideals.

Proposition 209

The policies are extra-illegal in California, because, in addition to the Civil Rights Act, California has a law called Proposition 209, which was passed directly by the voters in 1996. This law was specifically written to exclude affirmative action, and it specifically mentions the University of California:

“The state shall not discriminate against, or grant preferential treatment to, any individual or group on the basis of race, sex, color, ethnicity, or national origin in the operation of public employment, public education, or public contracting. … Nothing in this section shall be interpreted as prohibiting bona fide qualifications based on sex which are reasonably necessary to the normal operation of public employment, public education, or public contracting. … ‘[S]tate’ shall include, but not necessarily be limited to, the state itself, any city, county, city and county, public university system, including the University of California …”

It is crystal clear that this was supposed to prohibit exactly what UC is presently doing.

Note again that an exception is made for cases in which sex is a genuine occupational qualification, with no parallel exception made for race. So racial discrimination is illegal no matter what.

So What?

Why draw attention to all this? One reason is that I think this kind of discrimination is wrong, counterproductive, and bad for society. I’m not going to discuss that in detail, though, both because it’s too large an issue and because you probably already understand the basic reasons why someone would think that.

I think it’s particularly wrong for the government to discriminate in these ways, and this discrimination is indeed going on at government schools.

Also, I think this points up the left-wing bias of universities. The faith in affirmative action is not shared by most of the public, but academics are so strongly and so uniformly in favor of it, that universities across the country are happy to explicitly defy the law. They don’t care if everyone knows they’re doing it. They don’t care if they’re opening themselves up to lawsuits.

The few people in the academy who don’t share this ideological commitment are thus put in a difficult position, if they should happen to get on a hiring committee. They’ll be expected to implement the university’s policy of discrimination. They may be morally opposed to this policy, and they may in any case not be happy to break the law in service of the ‘identity politics’ ideology.

How Are We Getting Away with This?

How has academia’s commitment to race and gender discrimination survived so long, so openly? (It has to be said that the “diversity statement” is a paper-thin disguise. Nor have we made any very serious effort to conceal our discrimination for the past few decades.)

I wonder why the system hasn’t been undermined by the simple measure of people lying about their race. Why don’t half the college applicants just claim to be black on their applications? Don’t they know that that would greatly enhance their chances of getting in? Don’t they know that there is no verification of race other than the applicant’s own self-identification?

I also wonder why the federal EEOC (Equal Employment Opportunity Commission) hasn’t enforced the law against all the universities across the country that are openly thumbing their noses at it.

Here’s my hypothesis: the EEOC is staffed by leftists. You don’t join an agency like that because you’re concerned about discrimination against white men. You join it because you’re concerned about women and minorities, and you’re hoping to have the chance to slap down some employers who are discriminating against them.

It does not matter what the law says. It matters what the people who are supposed to enforce the law want and believe. If those people are in favor of specific forms of discrimination, then they’re going to look the other way, no matter that the law prohibits it.

I think this is the sort of thing that’s supposed to be ruled out by “the principle of the Rule of Law”. But it isn’t — or rather, we simply don’t have the rule of law. Everyone believes in the Rule of Law for the laws that they agree with. Many also believe in it for laws that they don’t care about — like laws that only hurt other people. But very few people actually support the Rule of Law for laws that disagree with their ideology.

You might think the law would be enforced through the courts. Why aren’t there dozens of people suing universities across the country?

I think the answer is that it is very difficult to prove harmful discrimination. To win a lawsuit, as far as I understand it, you would have to prove that a specific university passed you over because of your race, color, sex, national origin, or religion. To prove that, you would have to show that you specifically would have been hired if not for the university’s affirmative action program. It’s not enough that the university is showing preference for some races over others in general. You don’t have standing to sue unless you personally were harmed.

But there’s no way to prove that. When universities have hundreds of applicants for any given job, including dozens who are highly qualified, it’s going to be impossible to establish a probability that a specific person would have been hired by a race-blind process. The left-wing professors from the hiring committee are of course not going to admit to a court that they excluded a candidate for being white. They’re going to make up some other reason, which will be impossible to disprove. And so it goes.

One lesson for policy-makers: a law that there’s no way of enforcing is pointless. Prohibiting behavior that can’t be proved is pointless.

Against History

In my previous two posts, I attacked Continental philosophy and Analytic philosophy, respectively. But some philosophers remain unoffended, so now it’s time for me to attack the third main thing that people do in philosophy departments: the history of philosophy. I don’t understand why we have history of philosophy. I’ve taken several courses in history of philosophy, and listened to many lectures on it over the years, and occasionally I have raised this question, but no one has ever told me why we have this field.

1. What Is History of Philosophy?

Don’t get me wrong. I understand why we read historical figures, and why we cover them in classes — because the famous philosophers of the past are usually interesting, and they gave canonical formulations of very important views that are often still under discussion today. They also tended to have a breadth of scope and a boldness missing from most contemporary work.

What I don’t understand is why we have history of philosophy as a field of academic research. For those who don’t know, philosophers in the English-speaking world have whole careers devoted to researching a particular period in the history of philosophy (almost always within Western philosophy), and sometimes just a single philosopher.

What are these scholars trying to find out? Are they looking for more writings that have been lost or forgotten? Are they trying to trace the historical roots of particular ideas and how they developed over the ages? Or are they perhaps trying to figure out whether particular theories held by historical figures were true or false?

No, not really. Not any of those things. Scholarship in the history of philosophy is mainly like this: there are certain books that we have had for a long time, by a certain list of canonical major figures in philosophy. You read the books of a particular philosopher. Then you pick a particular passage in one of the books, and you argue with other people about what that passage means. In making your arguments, you cite other things the philosopher said. You also try to claim that your interpretation is “more charitable” than some rival interpretation, because it attributes fewer errors, or less egregious errors, to the great figure.

What you most hope to do is come up with some startlingly new way of interpreting the great philosopher’s words, one that no one thought of before but that turns out to be surprisingly defensible. It’s especially fun to deny that the philosopher said one of the main things that he’s known for saying. For instance, wouldn’t it be great if you could somehow argue that Kant was really a consequentialist?*

*Kant might actually be a consequentialist — just a weird kind of consequentialist, who thinks that a good will is lexically superior to (of infinitely greater value than) any mere object of inclination.

2. History of Philosophy Isn’t History or Philosophy

Now, let’s suppose that you have a really good historian of philosophy, who does a really great piece of work by the standards of the field, which also is completely correct and persuasive. What is the most that can have been accomplished?

Answer: “Now we know what philosopher P meant by utterance U.” Before that, maybe some people thought that U meant X; now we know that it meant Y.

This is of no philosophical import. We still don’t know whether X or Y is true. You might think that, because the great philosopher thought Y, this is at least some evidence for Y. But that would be extremely weak evidence (almost all of the major doctrines of the major philosophers are false). It would also be a crazy way of going about investigating the issue. It would be much better to just directly consider what philosophical reasons there are for believing Y.

It is also of minimal historical import. “What thoughts were occurring in the mind of this specific person, when he wrote this specific passage?” is technically a historical question. But it is a trivial historical question, unrelated to understanding any of the major events in history. It’s not as if, for example, we’re going to understand why Rome fell, if only we get the right interpretation of Aristotle’s Metaphysics Gamma.

Even when it comes to purely intellectual history, what is historically important is how Aristotle was understood by the people who read him, whether or not what they understood was what Aristotle truly meant.

Historians of philosophy, in brief, are expending a great deal of intellectual energy on questions that do not matter.

You might ask: What’s wrong with that? At least the historians seem to like what they’re doing, so it’s interesting enough to them. True. But intelligent people are a scarce resource in society. It’s fine for you to use your brainpower on questions that don’t matter. But the rest of society has no reason to pay you for doing that, when there are important questions that society would benefit from having more brainpower devoted to.

3. Why Do We Have History of Philosophy?

Why, then, does this field of academic research exist?

Because research-oriented philosophy departments (like all philosophy departments) have courses in the history of philosophy. When they hire someone to teach these courses, they think they have to hire someone who specializes in history of philosophy. That person will also be expected to do research in addition to teaching, since they are at a research school. So they do the stuff I described above.

A solution: You don’t actually need a historian to teach history of philosophy. Any ordinary philosopher can teach history of philosophy, because any ordinary philosopher can read a few major works of the given historical figure, and explain them well enough for undergraduate students. The more complicated, subtle interpretive questions that scholars in history of philosophy debate are not suitable for undergraduate courses. Scholars in history might even be worse at teaching it than ordinary philosophers, since the history specialists are more likely to confuse students by talking over their heads and getting lost in small interpretive details.

So, just hire any philosopher.

4. When History Is Bad for You

The Problem with Religious Texts

Being too focused on history of philosophy is bad for your mind, in something like the way that being overly religious can be bad for your mind.

Religious people are sometimes prevented from looking at and understanding the real world, because of their focus on a religious text. If you take the Bible, the Koran, etc., as a sacred text, then you might try to understand the whole world in terms of it, and thus have an overly narrow perspective. There is also a good chance that the book contains errors or misleading passages, and that the religious person will arrive at false beliefs by trying to rationalize those errors.

Folie à Deux

The great texts in the history of philosophy are not treated quite like religious texts. But they aren’t treated entirely unlike religious texts. Historians commonly treat their chosen historical figures with more respect and deference than you would treat any contemporary figure, and probably more than you should treat any human being. They try everything in their power to avoid admitting that the great philosopher was wrong or confused about a major philosophical point.

Almost all philosophers are mostly wrong. But if you spend too much time studying a particular philosopher, you get drawn into a sort of folie à deux, in which you start to perceive the world in terms of that philosopher’s ideas. Most historians of philosophy appear to believe that the philosopher they study was basically right (though they do not argue for this in their work, which, as noted above, focuses instead on exegesis).

Prima facie, it’s really unlikely that you should be a follower of some philosopher of the distant past (say, over 200 years ago). One reason is that human knowledge as a whole was in a completely different state two hundred and more years ago. Science scarcely existed when most great philosophers wrote. Even philosophy has developed a great deal in the last two centuries. Contemporary philosophers have the advantage of access to earlier philosophers’ work, as well as more rigorous training, and fruitful interactions with a very large, diverse, and active group of other professional philosophers.

Now, if your philosophy basically corresponds to that of some philosopher who lived hundreds or thousands of years ago, then you’re basically saying that none of the vast expansion in human knowledge that has occurred since then, nor any of the work done by philosophers themselves in the past couple of centuries, is philosophically important. None of that has taken us significantly farther, when it comes to philosophical questions, than some guy who lived in prescientific times.

I think that’s super-unlikely.

Please Don’t Be an Aristotelian

To give one important example, there are people today who are followers of Aristotle. I think that’s crazy. If Aristotle lived today, there is no way that he would be an Aristotelian. If we brought him through time to the present day, he would swiftly start learning modern science, whereupon he would throw out his outdated worldview, and he’d probably laugh at the modern Aristotelians.

Aristotle might have been the greatest thinker of all time. But being a great thinker, even the greatest, is not as important as having access to the accumulated human knowledge of the last 2,000 years. This is why the work of much less-great thinkers who are born today is more likely to be correct than the work of Aristotle. Aristotle’s philosophical method is largely about reconciling the opinions of the many and the wise (the endoxa). But of course, those would be the opinions of the people of his day — who knew next to nothing.

To be a little more specific, Aristotle’s philosophy is shot through with teleology. Things are supposed to have built-in goals or functions — not just conscious beings and artefacts, but everything in nature. This is just a completely false conception of the world. It’s not a dumb thing to think if you’re living 2,000 years ago. But we now have a vast body of detailed and rigorously tested scientific explanations of all manner of natural phenomena. Natural teleology — purposes or ‘functions’ that exist in nature apart from any conscious being’s desires — contributes nothing to any of them.*

*I know that someone is now going to post a comment claiming that natural functions appear in biology. But evolutionary functions are not equivalent to Aristotelian teloi. All biological phenomena are explicable by mechanistic causation.

So if you’re still talking about natural functions, I think that’s kind of like a doctor who’s still worrying about imbalances of the four bodily humors.

Look Outside the Text

Part of the attraction of doing history of philosophy, I believe, is its insularity: one can simply dwell entirely within the world of Philosopher A’s texts. You can read all of those texts, and you can know that there will never be any more, since the philosopher is now dead. (Though, admittedly, there will be more secondary literature.)

If you’re a regular philosopher, you have to worry about objections and evidence that could come from anywhere. Some philosopher could devise a completely new argument that you have to address. Some scientific development (not even a development in your own field!) could cast doubt on one of your theories (unless, of course, you have carefully defined the questions you talk about so that they are ‘purely conceptual’; see previous post).

If you’re a historian, you don’t have to worry about all that, because you’re not arguing that any philosophical thesis is true. You’re just saying that some philosophical thesis is supported by the texts. That makes life simpler and easier.

But that is also why history of philosophy is bad for you. Because people actually should think about the big questions of philosophy. And when we think about them, we need to do so in a way that is open to the many different considerations that are relevant to the truth.

We should think, for example, about what is the right thing to do, not what Kant said was the right thing to do; we should think about what is real, not what Plato said was real.

The Failings of Analytic Philosophy

In my last post, I discussed what is good about analytic philosophy. After that, I was going to note some of the shortcomings of analytic phil, but the post was getting too long. So now, here’s what’s wrong with analytic phil, as currently practiced.

The main problem: too analytic.

Background: analytic statements are, roughly, statements that you can see to be true just from understanding the meanings of words. Like “all rhombuses have four sides” and “the present is before the future” [if the present and future both exist]. There are issues about how exactly to define “analytic” sentences, but let’s not worry about that.

Analytic philosophers used to think that philosophy was or ought to be a body of analytic knowledge, and that analytic knowledge was essentially about the meanings of words, or the relationships between concepts, or something like that, and did not concern substantive, mind-independent facts. So they spent a lot of time talking about word meanings, how to analyze concepts, and boring stuff like that. They never did succeed in analyzing anything, though.

I don’t know how many people still think the job of philosophy is to analyze language/concepts. I don’t think it’s very many. But the field retains leftover influences of that early doctrine. And the central problem with this is that most questions that are amenable to typical analytic-philosophy methods are just not very interesting.

More specifically, I see three things that we’re doing too much of.

1. Fruitless Analysis

We spend too much time trying to analyze concepts. For instance, there are dozens of theories that start with “S knows that P if and only if . . .”, followed by some set of conditions — which get increasingly convoluted and hard to follow as time goes on. This has occupied a large portion of the literature in epistemology over the last 50 years.

There are two problems with this kind of philosophizing. One is that the analyses are always false.

That’s controversial; some philosophers think that they themselves have correctly analyzed one or more philosophically interesting concepts. But most will agree that there are almost no successes, and that no philosophical analysis has attained general acceptance. For every attempted analysis, there are many philosophers who would say that that analysis is wrong and has been refuted.

For discussion of why no one ever successfully analyzed anything, see my “The Failure of Analysis and the Nature of Concepts” in The Palgrave Handbook of Philosophical Methods (2015).

This probably is not just about to change. We’ve had a lot of highly intelligent, highly-educated, dedicated people working on the analysis of various philosophically interesting terms, for decades now. If we don’t have a single clear success by now, I don’t think we need to keep doing this for another 50 years. There ought to be a time when you move on.

Another reason this is fruitless is that the analyses we devise would not be particularly useful, even if one of them were widely accepted. The analyses that epistemologists now debate are so complicated and confusing that you would never try to actually explain the concept of knowledge to anyone by using them. So what is the point?

Perhaps the value of these analyses is purely for the theoretical understanding of philosophers. But understanding of what — how a specific word is used in a specific language? The exact contours of a conventionally defined category? Is that what we need to expend decades’ worth of reflection by a host of highly sophisticated minds to figure out?

2. Semantic Debates

A fair amount of debate in analytic philosophy, even when it is not directly about the analysis of some word or concept, looks to me like essentially semantic debates. And as I’ve just suggested, I don’t find semantic debates especially interesting.*

*Counterpoint: maybe our current conceptual scheme reflects the accumulated wisdom of our society, about what are the important and useful-to-discuss phenomena in the world. And maybe that’s why figuring out the correct account of that conceptual scheme is helpful?

Example 1: Justification

The debates between internalism and externalism in epistemology look semantic. Roughly, the debate concerns whether justification for a belief is entirely determined by the subject’s internal mental states (or states the subject has access to, or something like that).

Ex.: Reliabilists (the most common kind of externalists) sometimes say that a belief is “justified” as long as the subject formed it in a reliable way, whether or not the subject knows or has reason to believe that the belief-forming method is reliable. Internalists say this is not enough.

That looks to me semantic. As an internalist, I don’t deny that reliability exists or is good. I just don’t think that’s what “justification” refers to.

Example 2: Composition

There are debates in metaphysics about the “existence” of various things. These include debates about when a composite object exists.

One view holds that there are no composite objects. That is, if you take some elementary particles, there is nothing you can do to them that will make them collectively comprise a larger object. So tables don’t exist, people don’t exist, etc. (Don’t worry. There are still particles arranged table-wise; it’s just that they don’t add up to a single object.)

Other philosophers say that any objects compose a further object. If you have an object A, and an object B, then there is always a third object that has both A and B as parts. E.g., there’s an object composed of my left eye and Alpha Centauri.

Still other philosophers say that some but not all ways of arranging simple things make them compose a further thing.

All that strikes me as a very semantic sort of debate to have. (There are arguments that it isn’t semantic in the literature. But it really feels semantic.)

3. Defining Down the Issue

Okay, here is my biggest complaint. Philosophers will actually decide what questions to ask based on which questions they can address with purely a priori methods, especially conceptual analysis and deductive arguments. This often involves shifting attention away from questions that matter, to questions that are in the vicinity but that in fact do not matter at all.

Example 3: Authority

Say we’re doing political philosophy. And suppose I have raised the issue (as I have been known to do) of why any government should be thought to have any moral authority over anyone. Why should those clowns in Washington get to tell us what to do, and why should any of us obey them?

As a contemporary academic political philosopher, let’s suppose, you would like to say something about this. But it could be difficult, as I’ve formulated it. So you’re first going to want to change the question to something more “analytic philosophical”. How about this: how should an ideal “liberal” political order make decisions so that we would have reason to respect them, assuming that they were not independently unjust?

You then go on to discuss a theory of the ideal conditions for public deliberation among a group of rational agents who all regard each other as free and equal citizens. These conditions might include, for example, that everyone has a full opportunity to be heard, that all ideas receive fair consideration, and that the outcome of deliberation is determined only by the merits of the arguments.

I then point out (as is my wont*) that none of those conditions obtains in any society, so we still have no basis for political authority. You respond that that is an empirical matter outside your purview. Your interest was simply in describing an ideal.

*See The Problem of Political Authority, sec. 4.2.

That’s lame. That’s essentially replacing the question that matters with a question that doesn’t matter, because the latter doesn’t require getting off the armchair.

We can’t answer whether any state actually has authority by reflecting on concepts. But that question is nevertheless what matters.

Example 4: God

When I worked as a TA in grad school, some of the classes covered the Problem of Evil. Here’s a simple way of understanding the problem:

  • God, if he exists, is supposed to be all-knowing, all-powerful, and maximally good.
  • If God is unaware of all the evils in the world, then he is not all-knowing.
  • If He is aware of evil but unable to do anything about it, then he is not all-powerful.
  • If He is aware of evil and able to eliminate it, but unwilling to do so, then he is not maximally good.
  • But if God is aware of evil and both willing and able to eliminate it, then how can evil exist?

Here is a possible response: maybe God isn’t all-powerful after all. (Or he could fail to be all-knowing, or maximally good, but the ‘all-powerful’ attribute is the one theists are most likely to give up.)

I saw this discussed in one of these introductory philosophy textbooks that the students were reading. The author (who was defending atheism based on the Problem of Evil) said something like “we are merely bored by such replies” — I guess because it’s not interesting to defend a thesis by redefining it. (Well of course you can defend the existence of “God” in some sense of that word!)

I found this kind of amazing. So if it turns out that there is an extremely powerful, intelligent, and good being who created the physical universe, but the being isn’t capable of all logically possible actions, then that would be completely uninteresting to a philosopher, because … it doesn’t satisfy the definition of a certain word that we stipulated at the start? That sounds to me like caring more about word games than about reality.*

*In fairness, there are cases where defending a thesis by redefining it renders it uninteresting. E.g., if you defended “theism” by defining “God” to refer to nature, that would be uninteresting. That’s partly because the new thesis would be uncontroversial, and also because it is too far from what we initially were interested in.

Why do we think the traditional philosophers’ conception of God (the “O3” world-creator: omniscient, omnipotent, omnibenevolent) is the interesting thing to discuss? I suspect the answer is, at least in part, that this conception makes it easy to construct a priori, deductive arguments about “God”, and spawns lots of fun conceptual/logical debates (“Is omnipotence logically coherent?” “Is it compatible with perfect goodness?” “Is maximal goodness compatible with free will?” Etc.)

In other words: the traditional definition creates jobs for armchair philosophers.

But that’s not a rational, reality-oriented basis for selecting a definition. A rational basis for selecting a definition would be something like: “This is the definition that best fits with what we (seem to) have evidence for.” Or: “This is the definition that enables us to formulate the questions that are important.” (Granted, the existence of the O3 world-creator would be important. But a sub-omnipotent creator would also be important.)

Academic philosophers are so used to defining issues in this way (to create jobs for conceptual analysts, so to speak) that, if you try to discuss a normal point, philosophers will often misunderstand you, because they will take you to be making some ‘conceptual point’ that can be verified or refuted by analysis, deduction, and hypothetical examples. They default to hyper-strengthening or hyper-weakening issues. E.g., if someone wants to discuss whether A’s are generally B, philosophers will try to talk about (i) whether it’s conceptually possible for an A to be B, and/or (ii) whether it’s logically necessary that all A’s are B. This shows a greater interest in ideas, and how they relate to each other, than in the actual world.


Tl;dr: Analytic philosophers focus too much on playing with concepts, and not enough on thinking about the parts of reality that matter.

Analytic vs. Continental Philosophy

Some of you might know that there is a split in contemporary philosophy between “analytic” and “continental” styles, but not know what this split is about, or why analytic philosophy is better. This is to remedy that.

I. About Analytic Phil

Analytic philosophy is mainly written in the English-speaking countries (England, America, etc.). Think of people like G.E. Moore, Bertrand Russell, A.J. Ayer, and most people in the high-ranked philosophy departments in the U.S. today.

“Analytic” philosophers used to be people who thought that the main task of philosophy should be to analyze language (explain the meanings of words), or analyze concepts. But now they are basically just people who do philosophy in a certain style (regardless of their substantive views).

What is that style? There is generally a fair attempt to say what one means clearly, to give logical arguments for one’s theses, and to respond (logically) to objections to one’s arguments.

Also, there is still a fair amount of attention paid to questions about the meanings of words, or the logical and semantic relations among concepts, that are of philosophical interest. If an analytic philosopher is discussing justice, you can usually expect discussion of such things as the meaning of “just”; how the concept of justice relates to such concepts as those of fairness or rightness; etc.

II. About Continental Phil

Continental philosophy mainly comes from continental Europe, especially France and Germany. Think of people like Heidegger, Foucault, and the existentialists.

The style is largely the opposite of that of analytic phil. Continental writers are generally much less clear about what they’re saying than the analytic philosophers. They won’t, e.g., explicitly define their terms before proceeding. They use more metaphors without any literal explanation, and they use more idiosyncratic, abstract jargon.

When they advance an idea, they sort of give arguments for it, but it’s hard to isolate specific premises and steps of reasoning. A continental author would never tell you that he has three premises in his argument, and then write them down as statements (1), (2), (3) (as analytic philosophers often do). You also would find a lot less effort to directly address objections or confront alternative theories. They say things that are supposed to lead you along a line of thought. It’s just that at the end, it’s very hard to answer questions like “How many premises were there?”, “What was the 2nd premise?”, and “What was the first objection?”

There are also certain doctrinal themes. Works of continental philosophy are much more likely than analytic philosophy to communicate some kind of subjectivism or irrationalism. That is, you are more likely to find passages that (when you sort of vaguely figure out what they might be saying) seem to be arguing that reality depends on observers, that it is not possible or not desirable to think objectively, or that it’s not possible or not desirable to be rational.


Philosophers in general tend to lean to the left politically. But Continental philosophers tend to lean very far left (more so than other philosophers).*

*Aside: Heidegger, a founding figure of Continental philosophy, was literally a Nazi, which is a “right-wing” view. So a more complete statement would be that continental philosophers are more likely to hold crazy and horrible extreme political views, like communism and fascism.

III. Carnap v. Heidegger


One can’t mention the analytic/continental divide without mentioning the disagreement between (continental philosopher) Martin Heidegger and (analytic philosopher) Rudolf Carnap. In “The Elimination of Metaphysics through Logical Analysis of Language,” Carnap discusses nonsensical utterances that can be made in natural language (http://www.ditext.com/carnap/elimination.html). He gives as an example the following excerpts from Heidegger:

What is to be investigated is being only and—nothing else; being alone and further—nothing; solely being, and beyond being— nothing. What about this Nothing? . . . Does the Nothing exist only because the Not, i.e. the Negation, exists? Or is it the other way around? Does Negation and the Not exist only because the Nothing exists? . . . We assert: the Nothing is prior to the Not and the Negation. . . . Where do we seek the Nothing? How do we find the Nothing. . . . We know the Nothing. . . . Anxiety reveals the Nothing. . . . That for which and because of which we were anxious, was ‘really’—nothing. Indeed: the Nothing itself—as such—was present. . . . What about this Nothing?—The Nothing itself nothings.

(Note: that is much clearer than most of Heidegger’s writing.)

Carnap goes on to explain (using predicate logic) how in a logically proper language, such statements could not be formulated.
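To give the flavor of Carnap’s diagnosis (a rough sketch, not his exact formulas): in a logically proper language, “nothing” is expressed by a negated existential quantifier, not by a name, so Heidegger’s sentences admit no well-formed translation.

```latex
% "Nothing is outside" becomes a negated existential claim:
\neg \exists x\, O(x)
% Heidegger's "the Nothing nothings" would instead require treating
% "the Nothing" as a name $n$ and "nothings" as a predicate $N$:
N(n)
% -- but no such name can be extracted from the quantifier form,
% so the sentence cannot be formulated in the logical language.
```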

IV. Analytic Philosophy Is Obviously Better

Many people interested in Continental philosophy are perfectly nice people. That said, analytic philosophy is obviously better. Why?

A. Style

My description above of the difference between the two schools should make it clear why I say analytic phil is better. These things:

  1. Clear theses
  2. Clear, logical arguments
  3. Direct responses to objections

I would say are the main virtues of philosophical (or other intellectual) writing. And by the way, I don’t think my description of the difference between Continental and Analytic Phil is very controversial. I think almost anyone who looks at samples of the two kinds of work is going to notice those three differences.

Why are those things important? Because (and I assume this without argument) philosophical work has a cognitive purpose. The purpose is to improve the reader’s knowledge and understanding of something. The purpose is not, e.g., to confuse people, to impress people with your vocabulary, to enjoy the contemplation of complex sentence structures, or to induce people to shut up and stop questioning you.

For a work to increase the audience’s knowledge and understanding, it is generally necessary that the reader understand what the work is saying. Therefore, clear expression is a cardinal virtue of philosophical writing.

Also, to increase a reader’s philosophical knowledge and understanding, one generally has to give the reader good reasons for believing what one is saying. That is because, in most cases, philosophical ideas that are worth discussing are not so self-evident that readers can be assumed to see their truth immediately upon their being stated. Usually, one’s main thesis is something that other smart people would disagree with.

Thus, if the reader is going to rationally adopt your position, they will generally need reasons. Also, they will generally need to understand what is wrong with the main objections that other philosophers would raise. If, instead, they adopt your view because of your rhetorical skill, because they’re impressed with your sophistication, because you’ve confused them too much for them to think of objections, etc., then they will not have acquired knowledge and understanding of the subject.

So, giving logical arguments and responding to objections are also cardinal virtues of philosophical work.

You might think this is all trivial and in no need of being explained. But it appears that many people (who prefer the Continental over the Analytic style) don’t appreciate these points.

B. Doctrines

The other thing to point out is that the substantive doctrines most commonly associated with continental philosophers are false.

(1) There is an objective reality.

For example, when you close your eyes, the rest of the world doesn’t pop out of existence. The world was around long before there were humans (or even non-human observers). The Earth has been here for 4.5 billion years, whereas human observers have only existed for 20,000 – 2 million years (depending on what you count as “human”). Therefore, the world doesn’t depend on us.

What’s the objection to this? As far as I can tell, there is one main argument against objective reality. A version of it first appeared, as far as I know, in Berkeley. Berkeley’s argument was something like this (my reconstruction):

  1. If x is inconceivable, then x is impossible. (premise)
  2. It is not possible to conceive of a thing that no one thinks of. (premise)
    • Explanation: if you conceive of x, then you’re thinking of it.
  3. Therefore, [a thing that no one thinks of] is inconceivable. (From 2)
  4. Therefore, [a thing that no one thinks of] is impossible. (From 1, 3)
  5. If there were objective reality, then it would be possible for there to be things that no one thinks of. (From meaning of “objective”)
  6. Therefore, there is no objective reality. (From 4, 5)

(David Stove refers to this argument as “the Gem”, and he has awarded it the prize for the Worst Argument in the World.)

Here, I will just point out the equivocation in (2). In analytic speak, it’s a scope ambiguity. The problem is that 2 could be read as either 2a or 2b:

2a. Not possible: [For some x,y, (x conceives of y, and no one thinks of y)].
2b. Not possible: [For some x, (x conceives: {for some y, no one thinks of y})].

Reading 2a is needed for the premise to be true, but reading 2b is needed for 3-6 to follow.
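For the record, the two readings can be written out in quantified modal notation (a sketch; the predicate letters C for “conceives of” and T for “thinks of” are introduced here for illustration):

```latex
% 2a (true): it is impossible for someone to conceive of a thing
% that no one thinks of -- conceiving of y entails thinking of y.
\neg \Diamond\, \exists x \exists y\, \bigl( C(x,y) \wedge \neg \exists z\, T(z,y) \bigr)

% 2b (false): it is impossible for someone to conceive the content
% "there is a thing that no one thinks of."
\neg \Diamond\, \exists x\, C\bigl( x,\ \ulcorner \exists y\, \neg \exists z\, T(z,y) \urcorner \bigr)
```

The difference is where the quantifier binding y falls: outside the scope of “conceives” in 2a, inside the conceived content in 2b.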

Usually, the argument for subjectivism is stated a lot less clearly. Usually, people say something that sounds more like this:

“It’s impossible for us to know anything without using our minds/conceptual schemes/perceptions/etc. Therefore, we can only know things-as-we-conceive-them/as-we-perceive-them/as-our-minds-represent-them/etc. Therefore, it makes no sense to talk about things as they are in themselves. Therefore, the idea of ‘objective reality’ just makes no sense; it’s meaningless.”

I’m doing my best to make sense of the sorts of arguments I’ve heard and to make them sound sort of logical, but when you hear actual subjectivists talk, it’s usually much less clear than that. (Bishop Berkeley was actually the clearest subjectivist.)

Anyway, the above statement is essentially a (more muddled) version of the Gem, and it basically has the same mistake. The mistake is confusing the statement that a given object of knowledge is represented by a particular mind with the idea that its being represented by that mind is part of the content of the representation.

In other words: I can only imagine the Earth when I’m imagining it. It doesn’t follow from this that I can only imagine the Earth as being imagined by me. I.e., it doesn’t follow that I can’t picture the Earth being there back when I wasn’t around. (And it would be very silly to deny that I can do that.)

(2) Be rational & objective.

I’m not going to discuss at length why you should think rationally. I think that’s basically a tautology. (Rational thinking is just correct thinking. If there were a good reason for ‘not being rational’, that would just prove that the thing you’re calling ‘not being rational’ is in fact rational.)

But I am going to just comment on what’s going on when a thinker starts attacking rationality or objectivity. To be fair, few people will outright say, “Hey, I’m irrational, and you should be too!” (Mostly because “irrational” sounds like a negative evaluative term.) But you can hear people rejecting central tenets of rationality, such as that one should strive to be objective and consistent.

So here’s what I think is going on when that happens: the thinker knows that he himself is wrong. He doesn’t know this fully explicitly, of course; he is self-deceived, and is trying to maintain that self-deception. If you’re wrong, and you want to keep holding your wrong beliefs, then you kind of implicitly know that you need to avoid thinking rationally or objectively. You also have to avoid letting things be clearly stated. Fog, bias, and confusion are the key things that are going to help you keep holding false beliefs.

(Alternately, the thinker may simply want other people to hold false beliefs, and know that bias and confusion are the keys to making that possible.)

So when I hear someone more or less attacking rationality and objectivity, or trying to avoid clear formulations of ideas, I take that as almost a proof that most of the rest of what that person has to say is wrong.


(Still ahead next week: What’s wrong with analytic philosophy.)