We Are Doomed

Far Future Doom

Obviously, humanity will at some future time be extinct. That goes without saying. That’s almost a metaphysical truth; nothing (of the relevant kinds) lasts forever.

There is a fascinating Wikipedia article about the far future, https://en.wikipedia.org/wiki/Timeline_of_the_far_future, which includes (among other things) many events that could extinguish life on Earth. The Sun will leave the main sequence (running out of hydrogen) within about 5 billion years. It will probably engulf the Earth within 8 billion years. Long before that, though, multiple other disastrous things are expected to happen. One item says that within only 600 million years, all plants that use C3 photosynthesis (99% of all plant species) will die. Another item says that the rest of the plants will probably die within 800 million years.

I don’t think any people are going to live to see any of that happen, though. I think we’ll die of stupidity long before that. (Life will probably still continue without us, though. E.g., the bacteria will have hundreds of millions of years to flourish without us.)

A Story of Early Doom

Hypothetical: Suppose we learned that a large asteroid was on a collision course with Earth. To best illustrate my point, let’s make up some more details. Suppose that somehow, we know 30 years in advance that the asteroid is coming. Scientists are unsure of whether the asteroid will actually hit us. Some say it will probably hit us; others say that it will almost certainly miss. The median estimate, let’s say, is a 5% chance of hitting the Earth. All agree that the impact would be disastrous, though they disagree on exactly how disastrous it would be.

Suppose, further, that engineers have devised various plans for averting the collision, each of which would require at least several years to implement, and would cost billions of dollars. There is disagreement on exactly how effective each plan is, how much each would cost, and how long each would take to complete. Every expert agrees, though, that at least some plan (if not multiple plans) should be attempted. Finally, to make my point clear, assume that the asteroid will in fact hit the Earth if nothing is done (though the scientists in the scenario are not yet certain of this), but that some of the plans people have devised would in fact work, if adopted in a timely manner.

Question: Would we avoid the asteroid impact?

If you asked me this hypothetical 20 years ago, I would have taken for granted that humanity would, one way or another, come together to stop the threat. The last several years, however, have shown that human beings are a good deal stupider and all-around crappier than I had previously realized. So today, I think there is a pretty high chance that some of the following would happen:

(a) Some political party takes up the cause to avoid the asteroid impact. The opposing political party or parties then immediately decide that “their side” must be pro-asteroid (or anti-asteroid-avoidance). The latter party uses their political power to stall asteroid avoidance plans. Members of the pro-asteroid party who cross the aisle and try to cooperate on asteroid-avoidance get labeled traitors by their party, whereupon they face primary challengers and are kicked out of office.

(b) Asteroid skeptics point to uncertainties in the science, arguing that we have no solid evidence that the asteroid is actually going to hit the Earth. They tout the most optimistic arguments about the asteroid, and magnify all uncertainties in the case for global disaster. They also point to common cost overruns in government programs and argue that we shouldn’t commit to spending unknown billions of dollars to avert a threat that almost certainly isn’t even going to kill anyone.

(c) Different groups of humans can’t agree on who should pay for asteroid avoidance. The Americans want China to pay more; China wants America to pay more. Both are angry at the Russians for refusing to pay anything, and nobody wants to be a sucker and let other nations free ride.

(d) The average human, having never witnessed an asteroid impact, does not intuitively believe that such things happen, and he refuses to believe the “arrogant”, egghead scientists. Trolls and opportunists create websites pushing conspiracy theories about how the mainstream scientists are all lying and/or incompetent. Some say that you can just look up in the sky and see that there are no asteroids. They argue that there are no large asteroid impacts reported in all of human history, and thus this one is probably a hoax. They try to associate the asteroid theory with particular “identity groups”, and people who don’t belong to those groups then instinctively reject asteroid avoidance. These trolls get money because their unhinged claims attract clicks and hence generate revenue.

(e) More balanced news sites give equal attention to the orthodox position and the skeptical position, represented by the three scientists in the world who think the asteroid isn’t a serious threat.

(f) The U.S. President (who knows that he personally will not be around in 30 years) declares that the asteroid is “fake news” and a very dishonest hoax invented by the biased media and/or greedy astronomers trying to shake down the government for more money for their field. He tweets that if anyone just looks at the telescope images, they can see that the asteroid is a hoax. Millions of his followers retweet these comments, without in fact looking at the telescope images. A few others look at the images and find themselves unable to verify that the asteroid is really on a collision course, whereupon they conclude that the mainstream scientists are wrong.

(g) As it becomes clear that nothing is being done about the asteroid, scientists become increasingly active in trying to convince the masses. They try a variety of approaches. Some try sober, well-reasoned analysis. These scientists, however, are ignored because they are boring; also, skeptics take the scientists’ calm demeanor as proof that the scientists must be lying about the seriousness of the threat. Other scientists make increasingly alarmed and emotional appeals. The latter scientists, however, are dismissed by skeptics as being too emotional and obviously partisan.

(h) Nearly every person working in government knows that the asteroid threat is real, but many of them worry that they’ll be voted out of office if they try to do anything about it, because the asteroid issue is unpopular among the masses. They reason that it’s not worth losing their jobs for a small chance of saving humanity; also, they correctly reason that any given one of them can’t actually make a difference, if they don’t have the rest of their party behind them. Therefore, around half of the political leaders vote to do nothing. Or they vote for a “compromise plan” that takes only very weak, unlikely-to-succeed measures against the asteroid.

(i) When a member of party X complains about the asteroid threat and our failure to do anything about it, members of party Y ignore the issue and immediately start babbling comments like, “What about all the issues that your party hasn’t done anything about? What about the crimes committed by such-and-such politician? What about the threat of nuclear war, or biological weapons, or terrorism, or cancer? Cancer has killed a lot more people in history than asteroids!” This succeeds in derailing the conversation and preventing people from party Y from thinking about the asteroid for more than a few seconds at a time.

(j) The above goes on for 29 years. In the last year, scientists come to 100% agreement that the asteroid is in fact going to hit the Earth within a year and kill everyone. They also agree that it is too late to do anything about it. About half of all average human beings refuse to accept this, up until the day that the asteroid hits, killing billions of people and triggering Earth’s largest mass extinction.

Dying of Stupidity

That’s an example of what I mean by “dying of stupidity”. Specifically, I have in mind a scenario in which:

  1. A threat is identified by experts well in advance,
  2. It is agreed among experts to be serious (though there need not be agreement on exactly how serious),
  3. There are technically feasible plans known to experts that would stop the threat,
  4. The cost of trying to avert the problem is easily worth it and well-known, among the experts, to be so, and yet
  5. The threat is not stopped.

Any minimally smart species would in fact avert any such threat. From what I’ve observed of human beings, however, we are not such a species. There is thus a good chance that we will die of stupidity in the above sense.

Existential Threats

The above story is just an example. I don’t think we are actually going to die of an asteroid impact. We’ll probably have died of something else long before the next big asteroid hits.

In some ways, the asteroid scenario is actually a poor choice to illustrate my point. The asteroid threat is already on some people’s radar screen (pun intended), and some people are already looking out for asteroids. The scenario of a huge asteroid impact is simple, sensational, easy to understand, and relatively far from the main hot-button ideological issues. It’s also not all that expensive to avert. So there is a pretty good chance that we will develop an adequate asteroid defense before one becomes needed, assuming that nothing else kills us first.

The real thing to worry about would be a threat that is complicated or subtle, so that you need expertise to even understand how there is a threat; one that works over a long period of time and with no well-defined ending point; one that touches on hot-button ideological issues; or one whose probability and time of occurrence are extremely difficult to estimate. Those are the kinds of threats that we’re not going to address until it is too late.

By the way, in case you think my story is a metaphor for global warming, it isn’t. My story is meant as an example of one possible existential threat among many, most of which we probably cannot now anticipate. Global warming is not actually an existential threat – although the way people have responded to it should make us apprehensive about how we would respond to a genuinely existential threat.

Take, for example, the projected end of C3 photosynthesis in 600 million years (about which I have almost no knowledge, as I am no biologist). To believe that this event is going to happen, one has to have a certain degree of intelligence, plus either expertise in biology or significant trust in experts – none of which the average American presently has. Also, it’s plausible that averting this event (if it could be averted at all) would require planning and very large-scale action, very long in advance. There would not, however, be any specific time – no particular year, or even any particular century – at which one could say that the plan had to be started. So it’s likely that there would be no particular point at which that issue would rise to the top of the political agenda. There would be no election year in which it would be politically advantageous to campaign on a promise to save C3 plants from extinction millions of years in the future.

Species Suicide

I don’t know what the most likely existential threat is. But here is one kind of scenario that I think is particularly likely: humanity is wiped out by one or more human beings working deliberately for that goal.

Aside: Some people worry that out-of-control AI may kill us. But I think we should worry more about out-of-control humans. We already have those. They already possess intelligence, unpredictable motives, and often insane, evil beliefs. Computers are way more predictable and controllable.

In the future, humans are going to have access to more and more powerful technology (including increasingly sophisticated computers, for any technical reasoning that is needed to carry out their plans). So it will become easier and easier to cause a large amount of harm. Now, you might say that advanced technology can also be used for defense, to protect against out-of-control people. That is true; however, it is almost a law of nature that it is easier to destroy than it is to protect or create things of value.

For example, at some not-too-distant future time, it might be technically feasible for an individual person to genetically engineer a virus capable of wiping out humanity. We might develop technology whereby a moderately intelligent person could produce stuff like that – perhaps with computer assistance, this person would not even have to be particularly expert in biology or medicine. If that technology appears, we are doomed. Someone is going to do it. Once the virus is released, it might be difficult or impossible to stop it.

Again, that is just an example. We will probably develop other technologies, thus far undreamt of, that will make it easy for individuals or small organizations to cause enormous amounts of harm. Most likely, these technologies would not be originally designed to cause harm. It’s just that powerful technologies that can do extremely valuable things will also generally let you do extremely bad stuff, if you have the opposite motives. Since we haven’t figured out how to control insane humans, and since a good number of us are crazy, we are very likely going to kill ourselves long before nature destroys us.

I don’t have a particular solution for this. I think we have to hope that humanity becomes less stupid and evil over the next few centuries, before whatever unknown threat appears that’s going to require coordinated action to prevent our extinction. Of course, since we are so awful now, most of us don’t give a crap whether that happens or not.

Great Philosophers Are Bad Philosophers

My introduction to philosophy was largely through the great philosophers of the past — the likes of Plato, Aristotle, Descartes, Hume, Kant. From the beginning, I was struck by how bad they were at thinking. Sometimes, they just seemed to be bad at logic, committing fallacies and non sequiturs that even an undergraduate such as myself could quickly see. Other times (almost always!), they seemed to have extremely poor judgment, happily proclaiming absurd conclusions to the world, rather than going back and questioning their starting points. I wondered why that was. Were these really the best philosophers humanity had produced?

Later, I figured out what (I think) was going on. These were not the best philosophers of the past that we were reading. They were merely the greatest philosophers. Skill at thinking was merely one criterion among many — and not a particularly central one at that — for ‘greatness’.

Bad Thinking

“What? How dare he say such things about Plato, and Kant, and even David Hume! Who does this Huemer guy think he is??”

Well, not a great philosopher. Just a good one. Let me give you some examples of what I’m talking about. Incidentally, of the great philosophers, Aristotle is perhaps the best at thinking. Most of his errors are pretty reasonable mistakes, if you lived in his time. Plato, Hume, and Kant, on the other hand, are all very unreasonable, illogical thinkers.

Plato

In The Republic (book I), Thrasymachus says that government leaders rule solely for their own good, and treat the populace the way a shepherd treats sheep, to be used for their wool and meat. Plato has Socrates respond to this by arguing that the art of the shepherd, as such, is only concerned with the good of the sheep. He goes on to talk about how the art of medicine is concerned with health. He also claims that no one would agree to be a ruler without being paid.

This is all just a terrible way of responding to the challenge. Thrasymachus’ statements about the sheep are just an analogy, which only serves to illustrate his point — whether shepherds really are concerned for the good of sheep is completely irrelevant. The relevant point is how leaders actually behave in the real world, which requires empirical evidence about leaders. Arguing about shepherds or doctors is pretty irrelevant to that, and arguing about what is the true “art” of medicine or of governing is definitely irrelevant. The one relevant point Socrates makes is that rulers would not be willing to rule without receiving payment. That, of course, is false. (But maybe this was less obvious in Socrates’ time?)

This isn’t an outlier case, either. The Platonic dialogues have these sorts of useless arguments by analogy all over the place.

Hume

The biggest problem with Hume is that he constantly draws absurdly skeptical conclusions, about almost everything, and this doesn’t seem to bother him. He doesn’t stop and say, “Wow, that sounds crazy. Maybe my starting assumptions are wrong?” This sort of dogmatism and lack of judgment is amazingly common among philosophers.

Here’s an example of Hume’s thinking. He starts with a hypothesis: all concepts (he calls them “ideas”) are formed by your first having a sensory experience (which he calls an “impression”), and then your mind making a sort of fainter copy of that sensory experience. In brief: all ideas are copies of impressions.

Later, he notices that certain concepts really don’t seem as if they could have been formed by copying impressions. For instance, the concept of the self, or the concept of causation as normally understood (I have to put that qualifier in there, because Hume also gives a revisionary account of causation). What would a rational person say at that point? “Ok, so my hypothesis was false. I wonder what the right account is?”

Not Hume. He just declares that we do not in fact have those concepts.

In another (in)famous passage, he talks about the “missing shade of blue”: imagine a person who has seen many shades of blue, but there is one particular shade he has never seen. You show the person a series of color swatches, arranged by hue, with the swatch for that particular shade removed. Could the person imagine the missing shade? Hume agrees that the person probably could imagine it. Then he basically says, “Yeah, that’s a counterexample to my theory, but it’s a weird case, so let’s not worry about it,” then continues using his hypothesis as if it were a known fact.

Kant

We all know about Kant’s ethical theory, centered on the “Categorical Imperative” (which has “three formulations” that are somehow one, kind of like the Holy Trinity). According to one formulation, you always have to act in such a way that you could will that the maxim of your action was a universal law. That’s supposed to capture all of morality, and you have to follow that principle no matter what. E.g., if a murderer comes to your door looking for his intended victim, and the victim happens to be hiding in your house, you can’t just lie to the murderer and tell him the victim is somewhere else. Because you can’t will that everyone always lie.

What about Kant’s argument for the Categorical Imperative? I bet you can’t say what the argument was, can you? That’s because almost no one covers it in classes or discusses it in the literature. And that’s because it’s completely unconvincing and not worth discussing, except to make points about bad arguments. Here is a key statement:

But if I think of a categorical imperative, I know immediately what it contains. For since the imperative contains besides the law only the necessity that the maxim should accord with this law, while the law contains no condition to which it is restricted, there is nothing remaining in it except the universality of law as such to which the maxim of the action should conform; and in effect this conformity alone is represented as necessary by the imperative.
There is, therefore, only one categorical imperative. It is: Act only according to that maxim by which you can at the same time will that it should become a universal law. (Foundations of the Metaphysics of Morals [1969], 44)

Got that? Okay, so that explains why you can’t lie to murderers to stop them from finding their intended victims.

How Bad Is That?

I note that the above are not examples of very subtle mistakes, nor are they attributable, say, to not having access to modern science or other modern discoveries. Those really are simply examples of people being very bad at thinking. No skilled thinker of any time should have said that kind of stuff. It’s not as if, e.g., you had to study quantum mechanics or predicate logic in order to realize that you should not cling to a hypothesis after finding multiple counterexamples to it.

Greatness

What Is It?

So that explains why I say those people are bad philosophers. But then . . . in what sense can they possibly be deemed great philosophers?

Well, being “great” in the sense of “the Great Philosophers” isn’t really about being good at thinking in the normal sense. (The normal sense of being good at thinking, I take it, is about forming beliefs that are supported by the available evidence and likely to be correct, and only forming such beliefs.)

Greatness, however, is more about influence. Western philosophy is a big, 2000-year+ conversation. The “great” philosophers are the participants who had the greatest influence on that conversation. They said stuff that other people found interesting, and kept thinking about, and telling other people about for centuries after the great philosophers’ deaths. That’s what a ‘great’ philosopher is – not a philosopher who said a bunch of true stuff based on good reasons.

The Greatness-Badness Connection

Saying true stuff based on good reasons is not incompatible with greatness in that sense. But it isn’t particularly conducive to greatness; in fact, it is strongly anti-correlated with greatness.

Why is that? Think about how one goes about influencing a conversation, and getting other people to talk about oneself. It’s not by saying the most reasonable things. It’s by saying things that are interesting or enjoyable to discuss. You can’t be completely stupid, or else people won’t want to talk about your ideas, but it actually helps if your ideas are implausible. If someone says, “Things are pretty much the way they seem here,” that’s not going to stimulate much discussion. It’s when someone comes up with an idea that is new and amazing or outrageous that other people want to talk about it.

For perhaps the best example, look at Kant. His idea of locating space and time in us is startlingly original. Wouldn’t it be amazing if that were true? Or Hume: isn’t it just outrageous and amazing how he argues against the justification of basically everything we know? Plato is kind of amazing too, with the whole realm of perfect circles, perfect Justice, etc., that the soul grasped before our birth.

But, of course, most ideas that are amazing or outrageous are also very badly wrong. And most arguments for such ideas are of course going to be bad arguments. And so, most people who believe such arguments and ideas are going to have to be bad at thinking – that is, not reliably oriented toward the truth. They’re going to be people with poor judgment and/or poor reasoning skills, since otherwise they’d realize that these amazing ideas are almost certainly wrong.

Our Confusion

Not everyone realizes all this. Most people, I suspect, believe that the Great Philosophers are actually good at philosophy. The reason they think this is that they know the Great Philosophers are the ones whom our ancestors have passed down to us, out of a large number of people who wrote and thought in the past. I guess people assume that “philosophy” or “fame” is sort of like a conscious agent, so if it chooses to focus on certain thinkers and certain works to tell us about, it must be that those are the best and most important things. Or at least very good and important.

But it doesn’t work like that. The current canon of Western philosophy is a spontaneous order, and the factors by which people were selected for inclusion need not involve truth or cogency of reasoning. There is no gatekeeper to say, “This idea is too obviously wrong; I’m not going to let people talk about it.”

Caveat

I don’t know the other thinkers of the past who didn’t get into the canon. So I don’t actually know if they were generally any better at reasoning than the Greats. It could be that the Greats, despite how bad they were at thinking, were still better than the other thinkers of their time.

(Why) Does Terrorism Work?

Evidence that it Doesn’t Work

I looked at this question briefly in The Problem of Political Authority. The evidence I found from political science suggested that terrorism is rarely effective – i.e., terrorist groups hardly ever attain their stated political goals. In many cases, their goals are set back because of their attacks. This is especially true when they target civilians (cases of successful terrorism tend to be attacks on military targets).

The reason is basically that terrorist attacks, especially on civilians, increase public support for hardline, hawkish politicians, who tend to do the opposite of whatever the terrorists wanted. E.g., the 9/11/2001 attack resulted in two decades of hawkish politicians in America, in which nearly everyone in both parties agreed on a highly aggressive stance in the Middle East. It caused America to invade two countries and topple their governments, killing hundreds of thousands of (mostly Muslim) civilians, in addition to bombing many more people in multiple countries in the Middle East. So, even though the attack succeeded in causing harm to Americans, it was a spectacular failure from the standpoint of benefiting Islam or people in Muslim countries.

Wtf Are They Thinking?

I think if you’re planning to kill people, you’re obligated to think hard about it first, and to try very hard to verify that your plan will actually produce the (putatively) good consequences that you anticipate.

The stuff I said above about 9/11 was not hard to anticipate. It was completely obvious that the U.S. government was not going to swiftly withdraw from the Middle East and that it would instead go to war there. Any idiot could also tell that this war or collection of wars would be extremely destructive to the people of the region.

The general empirical facts about terrorist success are no secret either. Anyone who was thinking about doing some terrorism could do some investigation — “Hm, I wonder how often this works?” — and they would find out that it rarely works and often backfires.

So, one can only conclude that terrorists don’t make serious efforts to verify that their killings will advance, rather than hinder, their stated goals.

That’s odd. Why would you go kill a bunch of people, and even sacrifice your own life, when you have approximately no evidence for thinking that it will help your cause, and it may instead harm the cause?

I am not sure. But this makes me suspect that terrorists’ actual motives are not what they often say. E.g., Islamic terrorists don’t care about advancing Islam, or helping Muslim people, or even reducing foreign intervention in Muslim countries. None of that. They don’t want to help the ingroup. They just want to hurt the outgroup. In other words, their motive is hate.

A lot of human behavior is like that.

Wait, Maybe it Worked

As I say, I thought that 9/11 was an excellent example of how a “successful” terrorist attack (as in, the operation was carried out according to plan) can be a complete failure from the standpoint of advancing the motivating cause. I thought that in 2012, when I wrote that book.

More recently, it occurred to me that the 9/11 attack might have worked, to a certain extent. The U.S. didn’t get out of the Middle East, no, and things have gotten worse for Muslims in the Middle East. But Islam has made cultural inroads in the West. Bizarrely enough, I think Islam is more popular in the West, much better respected in certain (non-Muslim) circles, than it was before that attack. I would not have predicted this at all.

More specifically, Islam has become popular among left-wing people. (Of course, not among right-wingers.) Muslims are now one of their favored identity groups – along with women, blacks, Latinos, etc.

I think this is due to 9/11, because as I recall, before 9/11, Islam was not on the radar screen of mainstream U.S. society. People did not talk about it. 9/11 evoked some “Islamophobia” (as people say), which in turn evoked a counter-reaction from the left, turning a fair number of them pro-Islam.

As a case in point, see Ayaan Hirsi Ali, the Muslim apostate who escaped from severe and very tangible oppression in Somalia and has gone on to speak out boldly and insistently, and at great risk to her own physical safety, against the oppression of women in the Islamic world.

Hirsi Ali spoke at CU-Boulder on Monday evening. She spoke of how her home country was riven with tribal conflict, and of her concern that America and the West are descending into tribalism and abandoning the values that made us peaceful and prosperous.

You might assume that Hirsi Ali would be a hero for feminists and social justice crusaders across the world. Instead, many progressives are more interested in attacking her as an “Islamophobe” (https://en.wikipedia.org/wiki/Ayaan_Hirsi_Ali#Criticism) – in effect siding with the oppressors against an oppressed woman of color.

Wtf Are We Thinking?

This is on its face odd, because Islam is not exactly a progressive belief system. The Islamic world is, let us say, not full of liberal feminists and LGBTQ activists. (What if progressives were protesting against people who criticize Trump, calling them “Trumpophobes”?) So what’s going on?

One possibility: appeasement. Left-wing individuals seem to like the idea of appeasement more than right-wingers. Maybe terrorism has worked on the progressives in exactly the way the terrorists intended: left-wing Westerners have been frightened into deferring to Islam.

That is the charitable read. Here is another possibility. Remember how I said above that the motives of terrorists are not what they say? That they don’t care about benefiting their own side, but only about harming their enemies?

The same may be true of a significant portion of Western progressives: they don’t mainly want to help their favored groups; they mainly want to harm their disfavored groups. They don’t, for example, aim to help the poor, or women, or blacks, but simply to harm the rich, men, and whites. Of particular import here: they don’t want to help other countries; they want to hurt America.

That would explain how progressives could be on the same side as Islamic extremists – how they could, for example, side with Muslim traditionalists against Hirsi Ali. If progressives just wanted to help oppressed minorities, they would join people like Hirsi Ali in calling for an Islamic reformation. But if they mainly wanted to hurt America, then they would probably side with the Islamic extremists.

Hostility to One’s Own

Obviously, the above would apply only to some leftists. Many others would in fact side with (e.g.) Hirsi Ali, just as one would expect a feminist or advocate for social justice to do.

You may still think that I have given an excessively uncharitable interpretation, even of just a significant number of progressives. If you’re tempted to say that, then you probably haven’t read enough academic political discourse. So let me help you out.

The following is an excerpt from an essay by Ward Churchill, formerly a Professor of Ethnic Studies at my own university (before he was fired for plagiarism). Churchill wrote this the day after the 9/11 attack, discussing the terrorists who destroyed the World Trade Center (https://cryptome.org/ward-churchill.htm):

They did not license themselves to “target innocent civilians.”
[…]
Let’s get a grip here, shall we? True enough, they [the people in the World Trade Center] were civilians of a sort. But innocent? Gimme a break. They formed a technocratic corps at the very heart of America’s global financial empire – the “mighty engine of profit” to which the military dimension of U.S. policy has always been enslaved – and they did so both willingly and knowingly. […] To the extent that any of them were unaware of the costs and consequences to others of what they were involved in – and in many cases excelling at – it was because of their absolute refusal to see. More likely, it was because they were too busy braying, incessantly and self-importantly, into their cell phones, arranging power lunches and stock transactions, each of which translated, conveniently out of sight, mind and smelling distance, into the starved and rotting flesh of infants. If there was a better, more effective, or in fact any other way of visiting some penalty befitting their participation upon the little Eichmanns inhabiting the sterile sanctuary of the twin towers, I’d really be interested in hearing about it.

Caveat: This is anecdotal. But, reading the above passage, I get a really strong sense that (a) the author hates America and a large portion of Americans, (b) the author is happy about the 9/11 attack, (c) the author puts himself on the same side as the terrorists.

Granted, that is an extreme case (which is why it caused a great controversy when people noticed the article). But it is not unusual in the academic world to read texts with a strong anti-American (and anti-white, anti-male, anti-wealthy, etc.) emotional tone. My subjective impression is that my fellow academics do not merely criticize America (or men, white people, etc.) in the hope of improving these things. Many have a deep-seated emotional hostility that they just need to let out.

I’m not going to talk now about whether such hostility is justified. My point right now is that this could explain how terrorism might succeed: violent attacks on a society could further one’s cause, if a sufficient number of influential people within that society are actually extremely hostile to their own society. Those people might then gain sympathy for the terrorists and their cause.

I think it would be a very rare circumstance that this would work — it is very unusual that a significant portion of influential people are that hostile to their own society. But, strange as it seems, I think that has happened to us.

They’re Not on Your Side

I guess I have one more point to make, on the off chance that this isn’t completely obvious to everyone reading this. If you’re a progressive, and you think Islamic terrorists are on your side, then you’re an even bigger fool than the people who read Donald Trump’s tweets and believe them. The Islamists – the ones who blow up buildings and issue fatwas – would not hesitate for one second to kill you, if they somehow had the power in your society. They would have no problem with murdering people for being gay, either, or for being atheists, or for criticizing a theocratic government, or for being victims of rape.

So I think we in the West need to put a leash on our self-hatred long enough to realize that not everyone who attacks us does so in the name of justice.

You Don’t Agree with Karl Popper

I’m teaching Philosophy of Science this semester. It’s a fun class. I just had occasion to discuss the philosophy of Sir Karl Popper, who was among the most influential philosophers of science of the last century (and therefore of all time). He had an enormous influence on our intellectual culture, especially on scientists, and he is the reason why you hear complaints about “unfalsifiability” from time to time (accusations that religion is “unfalsifiable”, etc.).

If you’re a philosopher of science, you probably don’t take Popper’s philosophy seriously. But if you’re an ordinary person, or a libertarian, or especially a scientist, there is a pretty good chance that you think you agree with Karl Popper.

So, as a public service, I am here to explain to you that no, you probably do not agree with Popper at all — unless you are completely out of your mind.

What He Said

You probably associate Popper with these ideas: It’s impossible to verify a theory, with any number of observations. Yet a single observation can refute a theory. Also, science is mainly about trying to refute theories. The way science proceeds is that you start with a hypothesis, deduce some observational predictions, and then see whether those predictions are correct. You start with the ones that you think are most likely to be wrong, because you’re trying to falsify the theory. Theories that can’t in principle be falsified are bad. Theories that could have been falsified but have survived lots of attempts to falsify them are good.

I wrote that vaguely enough that it’s kind of what Popper said. And you might basically agree with the above, without being insane. But the above paragraph is vague and ambiguous, and it leaves out the insane core of Popper’s philosophy. If you know a little bit about him, there is a good chance that you completely missed the insane part.

The insane part starts with “deductivism”: the view that the only legitimate kind of reasoning is deduction. Induction is completely worthless; probabilistic reasoning is worthless.

If you know a little about Popper, you probably think he said that we can never be absolutely certain of a scientific theory. No, that’s not his point (nor was it Hume’s point). His point is that there is not the slightest reason to think that any scientific theory is true, or close to true, or likely to be true, or anything else at all in this neighborhood that a normal person might want to say.

Thus, there is no reason whatsoever to believe the Theory of Evolution. Other ways of saying this: we have no evidence for, no support for the Theory of Evolution. There’s no reason to think it’s any more likely that we evolved by natural selection than that God created us in 4004 B.C. The Theory of Evolution is just a completely arbitrary guess.

(This is not something special about Evolution, of course; he’d say that about every scientific theory.)

This is not a minor or peripheral part of his philosophy. This is the core of his philosophy. His starting point is deductivism, which very quickly implies radical skepticism. The deductivism is the reason for all the emphasis on falsification: he decided that since one can’t deduce the truth of a theory from observations, the goal of science must not be to identify truths. But, since we can deduce the falsity of a theory from observations, the goal of science must be to refute theories.

As I say, most people don’t realize that this is Popper’s view — even though he makes it totally clear and explicit. I think there are two reasons why people don’t realize this: (a) the view is so wildly absurd that when you read it, you can’t believe that it means what it says; (b) Popper’s emotional attitude about science is unmistakably positive, and he clearly doesn’t like the things that he calls “unscientific”. So one would assume that his philosophy must give us a basis for saying that scientific theories are more likely to be correct than unscientific ones. But then one would be wrong.

Now, in case you still can’t believe that Popper holds the irrational views I just ascribed to him, here are some quotations:

“We must regard all laws and theories as guesses.” (Objective Knowledge, 9)

“There are no such things as good positive reasons.” (The Philosophy of Karl Popper, 1043)

“Belief, of course, is never rational: it is rational to suspend belief.” (PKP, 69)

“I never assume that by force of ‘verified’ conclusions, theories can be established as ‘true’, or even as merely ‘probable’.” (Logic of Scientific Discovery, 10)

“[O]f two hypotheses, the one that is logically stronger, or more informative, or better testable, and thus the one which can be better corroborated, is always less probable — on any given evidence — than the other.” (LSD, 374)

“[I]n an infinite universe […] the probability of any (non-tautological) universal law will be zero.” (LSD, 375; emphasis in original)

About the last quotation, note that many scientific theories contain universal laws (e.g., the law of gravity). So, Popper is not just denying that we can be certain of these theories, and not just denying that they are likely to be true; he claims that they are absolutely certain to be false.
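Popper’s reasoning behind that zero-probability claim can be sketched as follows (this is a standard reconstruction, not a quotation from him): a universal law quantifies over infinitely many instances, and if each untested instance independently has some fixed probability p < 1 of conforming to the law, the probability of the whole law is bounded by an infinite product that vanishes:

```latex
P(\forall x\, Fx) \;\le\; \lim_{n \to \infty} p^{n} \;=\; 0
\qquad (0 \le p < 1)
```

So on any assignment of non-extreme, independent probabilities to the instances, every genuinely universal law gets probability zero. The Bayesian reply, roughly, is that the independence assumption is precisely what learning from experience denies: observed instances should raise the probability of unobserved ones.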

In the penultimate quotation, note the “on any given evidence” clause: When you get done testing your scientific theory, and it survives all tests, you can’t say that it’s likely to be correct; it’s less likely to be correct, even after you’ve gathered all the evidence, than some unfalsifiable, unscientific theory.

All of this is the sort of view that you would expect from the most extreme science-hater. The weird thing about Popper is that he inexplicably combines this stuff with a strong positive evaluation of science. We have no reason to believe in science, and pseudoscience is more likely to be correct, and in fact the paradigmatic scientific theories are definitely wrong . . . but hey, isn’t science great? Okay, now let’s get on with the wonderful science stuff!

I can’t explain this combination of attitudes. I don’t think Popper ever attempted to explain it himself.

By the way, the core idea — deductivism, and inductive skepticism — seems to be surprisingly popular among philosophers. It’s ridiculous. It’s like if a major position within geology were that there are no rocks.

How He’s Wrong

I’m only talking about objections that I like here, so I’ll ignore objections based on Thomas Kuhn.

The Duhem-Quine Thesis

This is something widely accepted in philosophy of science: a typical scientific theory doesn’t entail any observational predictions by itself. You at least need some auxiliary assumptions.

Ex.: Newton’s Theory of Gravity and his Laws of Motion are sometimes said to entail predictions about the orbits of the planets (in particular, to predict Kepler’s laws). But that’s false. Newton’s second law only gives the acceleration of a body as a function of the total force acting on it. The Law of Gravity doesn’t tell you the total force on anything; nor is “total force” observable. So there are no observational predictions from this set of laws, even when combined with all our observations.
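To see the gap concretely (a standard textbook reconstruction, not Popper’s own example), write the two laws down:

```latex
\mathbf{F}_{\text{total}} = m\,\mathbf{a},
\qquad
\mathbf{F}_{\text{grav}} = -\,\frac{G\,m_1 m_2}{r^2}\,\hat{\mathbf{r}}
```

The second law constrains motion only via the total force, while the law of gravity supplies just one contribution to it. Orbital predictions follow only given the auxiliary assumption that gravity is (very nearly) the only force acting on the planets, and that assumption is not itself part of Newton’s laws.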

Of course, the laws and the observations make certain patterns of motion more plausible or likely than others. If the planets moved in squares around the night sky, it would be very implausible to explain that using Newton’s laws + the hypothesis of some unknown, unobservable forces. But that is completely irrelevant for Popper. Again, for Popper, the only thing that counts as scientific reasoning is deducing the falsity of a theory from observations. You’re not allowed to appeal to any probabilistic judgments to support a theory.

Probabilistic Theories

Another counter-example: Quantum Mechanics. It’s a scientific theory if anything is. But it is clearly unfalsifiable, because all of its observational predictions are probabilistic. It enables one to calculate the probabilities of different possible observed results, but a probabilistic claim (as long as the probability isn’t 0 or 1) can never be falsified — i.e., you can’t logically deduce the falsity of the probability claim from observations. And again, that’s the only thing you’re allowed to appeal to. So, on Popper’s view, quantum mechanics must be unscientific.
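The point can be made concrete with a toy calculation (my illustration, not Popper’s): under the hypothesis that a coin is fair, every possible sequence of outcomes has positive probability, so no finite run of observations logically contradicts the hypothesis.

```python
from math import comb

def prob_heads(k, n, p=0.5):
    """Binomial probability of exactly k heads in n independent flips."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Even the most extreme possible outcome is merely improbable, not
# impossible: 100 heads in 100 flips has probability 2**-100 (about 8e-31),
# small but strictly positive. So "the coin is fair" is never deductively
# falsified, no matter what we observe.
extreme = prob_heads(100, 100)
assert extreme > 0
assert all(prob_heads(k, 100) > 0 for k in range(101))
```

The same holds for quantum-mechanical predictions: the theory assigns probabilities strictly between 0 and 1 to measurement outcomes, so no outcome entails its falsity.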

Evolution

But QM is weird. So let’s think about some perfectly ordinary, paradigmatic examples of scientific theories. Real scientific theories, by the way, are not normally of the form “All A’s are B” (as in philosophers’ examples).

Here’s one: the Theory of Evolution. Humans and other living things evolved by natural selection from simpler organisms, over a long period of time. Here’s an example of the evidence for this: some of the larger constrictor snakes have degenerate hind limbs underneath the skin. This can be explained, in the theory of evolution, by the hypothesis that they evolved from lizards. On the rival theory (Creationism), there’s no obvious explanation.

This isn’t a matter of deduction. The Theory of Evolution does not entail that the larger constrictors would have subcutaneous degenerate hind limbs. It merely gives a reasonable explanation of the phenomenon, which the rival theory doesn’t (but creationism doesn’t entail that there wouldn’t be such degenerate hind limbs; it merely fails to explain why there would).

The Dinosaur Extinction

Here’s another theory: the dinosaurs were driven extinct by a large asteroid impact. And here’s some evidence: there is an enormous crater at the edge of the Yucatan Peninsula in Mexico (the Chicxulub Crater), partly underwater. The crater has been dated to about the time of the Cretaceous–Paleogene extinction event. That’s evidence that an asteroid impact caused the mass extinction.

Again, that’s not deductive. The asteroid-impact theory of the extinction does not entail that we would find a crater. (It’s logically possible that the crater would have been filled in, or that the asteroid hit a mountain and didn’t leave a visible crater, or that the crater was somewhere we haven’t looked, etc.) It merely makes it much more likely that we would find a crater.

So, Popper’s philosophy entails that the Theory of Evolution and the asteroid-impact theory are unscientific, and moreover that we have no evidence at all for either of them.

The Obvious

Of course, the obvious problem is that it’s absurd to say that we don’t have any reason to think any scientific theory is true. We have excellent reasons, for example, to think that humans evolved by natural selection. There is not any serious doubt about that in biology, and that is not something that we should be arguing about. And in fact, I’m not going to argue about it, because I just don’t think that’s serious.

The Correct Theory

What is the correct view of scientific reasoning? Basically, the Bayesian view: it’s probabilistic reasoning.

Take the example of the degenerate hind limbs again: that is evidence for the theory of evolution because it’s more likely that we would see stuff like that if organisms evolved by natural selection, than it is if they were all created by God in 4004 B.C. (or, in general, if they did not evolve). In standard probability theory, that entails that Evolution is rendered more probable by our seeing things like the degenerate hind limbs.
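A minimal numeric sketch of that update (the likelihoods here are made-up for illustration; nothing hangs on the exact values):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(h|e) = P(e|h)P(h) / P(e)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: vestigial hind limbs in constrictors are far more
# to be expected if snakes evolved from limbed ancestors (say, P(e|h) = 0.9)
# than if each kind was separately created (say, P(e|~h) = 0.05).
post = posterior(prior=0.5, p_e_given_h=0.9, p_e_given_not_h=0.05)
assert post > 0.5  # the evidence raises the probability of evolution
```

With these (invented) numbers the posterior comes out around 0.95; the qualitative point is just that whenever P(e|h) > P(e|~h), observing e raises the probability of h.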

Why Care About Falsifiability?

There really is something important about falsifiability. Intuitively, there is something bad about unfalsifiable theories, and we have Popper to thank for drawing attention to this.

Unfortunately, almost no one seems to have any idea why falsifiability matters, and Popper did not help with that situation, because his theory is incapable of accepting the correct explanation.

The correct explanation is a probabilistic/Bayesian account. In the Bayesian view, a hypothesis is supported when P(h|e) > P(h) (the probability of hypothesis h given evidence e is greater than the initial probability of h). It is a trivial theorem that, for any e, P(h|e) > P(h) iff P(h|~e) < P(h). In other words: e would be evidence for h if and only if the falsity of e would be evidence against h.
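That trivial theorem follows from the law of total probability (assuming 0 < P(e) < 1):

```latex
P(h) \;=\; P(h \mid e)\,P(e) \;+\; P(h \mid \lnot e)\,P(\lnot e)
```

Since P(h) is a weighted average of P(h|e) and P(h|¬e), with weights summing to 1, if one of the two exceeds P(h) the other must fall below it. So e raises the probability of h exactly when ¬e would lower it.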

That means that if nothing counts as evidence against h, then nothing counts as evidence for h either. But if there’s no evidence for h, then one typically shouldn’t believe it. This is why you shouldn’t believe unfalsifiable theories. By contrast, falsifiable theories are supportable: when one tries and fails to falsify them, their probability goes up.

The point is more general than a point about Popperian falsifiability. Let’s say a theory is “falsifiable” iff one could deduce its falsity from some possible observations, and a theory is “testable” iff there are some observations that would lower the probability of the theory. Then the general point to make is that one can have evidence for a theory iff the theory is testable (it needn’t also be falsifiable), and good scientific theories should be testable.

Popper couldn’t say this, because he was obsessed with deduction, and this is a point about probabilistic reasoning, not deduction. Popper didn’t eschew all talk of probabilities; he just insisted on the most perverse probability judgments (e.g., that scientific theories are less likely to be correct than unscientific ones, even after they survive stringent tests). Which would make one wonder why anyone should prefer scientific theories. Obviously, the correct view is that scientific theories, after surviving tests, become more probable.

The Positive Side of Murder

I’ve seen several people commenting on the recent assassination of the Iranian general Qassem Suleimani (https://en.wikipedia.org/wiki/2020_Baghdad_International_Airport_airstrike). From what I can see, it appears that everyone is against it (but maybe that’s just liberals and libertarians). I’m not quite sure why, though. This is a good time for some comments on assassination.

Is Assassination Just?

Assume you have someone who is going to cause a lot of unjust harm to others. In the present case, I assume that U.S. intelligence is basically correct that Suleimani was a murderer and terrorist, and he was going to murder more people in the future if someone didn’t stop him. I don’t know the details about this, but this doesn’t seem to be disputed. (I do not assume that any attack was imminent, as there seems to be no evidence of this.)

Suppose further that the only way to stop this person is to kill him, or to kill some other people, like several of the people who work for him. That also seems plausible in this case.

In that case, should you assassinate the evildoer?

I don’t see why not. If Suleimani wasn’t a morally legitimate target, I don’t know who would be. The person who is ordering other people to commit evil deeds is surely at least as responsible as the people who directly carry them out. So the top military official is at least as legitimate a target as rank-and-file soldiers in the military. And it’s obviously better to kill him than to kill multiple other people.

According to the NYT, killing Suleimani was the “most extreme” option that Pentagon officials offered to Trump, to respond to Iranian aggression. (https://www.nytimes.com/2020/01/04/us/politics/trump-suleimani.html) Apparently, though, the more “moderate” and more conventional options would have involved killing many people, but only lower-ranking soldiers or militia members, instead of a general.

By what deranged moral metric is it better to kill multiple lower-ranking people than the guy at the top who is ordering people to commit evil deeds?

Why Are People Shocked?

When U.S. forces were hunting Osama bin Laden, no one seemed to have any trouble understanding why it made sense to go particularly after him, as the leader of al Qaeda, rather than merely pursuing his footsoldiers. Of course you would want to take out the guy at the top, or someone close to the top. And when SEAL Team Six assassinated bin Laden, no one seemed to have any problem with that. As a matter of fact, I recall people celebrating.

So why are commentators so shocked by the “extreme” action of assassinating Suleimani? One possibility is that people are shocked because it’s an action by Trump. Anything that Trump does, we’re primed to see as crazy.

Another possibility is that the action seems extreme and shocking because . . . Trump attacked a high government official. Usually, you just attack soldiers. In the bin Laden case, bin Laden was the leader of a non-governmental terrorist group. So of course you can assassinate someone like that. And of course you can kill multiple completely innocent civilians in the course of targeting members of private terrorist groups. But killing a government official? Dear God, is nothing sacred? Has Mr. Trump no limits?!

Is It Prudent?

Perhaps that’s uncharitable. Perhaps the main worry is that assassinating foreign government officials is likely to start a war. (If that’s your worry now, then when the war fails to materialize, presumably you will admit that the assassination was a good move. Right?)

The theory, I suppose, is that assassinating officials is more “provocative” than killing ordinary soldiers or civilians (as we traditionally do) — it’s more provocative to the people who run the foreign government, because those people don’t give a crap about footsoldiers and civilians.

Well, that’s one theory. Here’s another theory, one that relies on premises that my libertarian friends seem to accept in other contexts. Assassinating a high-ranking government official makes it less likely that we will go to war with the foreign country, as compared to merely attacking their troops.

Why? Because the government officials who make the decisions about whether to go to war don’t give a crap about soldiers and civilians. If attacking the U.S. means that the U.S. kills more Iranian soldiers or civilians, then, plausibly, the Iranian government will continue to attack the U.S. They might just be fine with sacrificing multiple such “unimportant” lives, year after year, for the sake of their ideology and their hatred of the U.S.

Assassinating a top government official changes the calculus. It lets the Iranian government know that you’re not just going to target the civilians and soldiers that they don’t care about. You’re going to target them, the people making the decisions. Then they have to think hard about how much they really care about their ideology and their hatred. Do they care enough to sacrifice their own lives?

This is one reason why there is so much war in the history of the state: because the people who make the decisions about going to war almost never have to bear the costs. It’s a lot easier to see the merits of sending other people into a war than it is to see the merits of starting a war in which you yourself are likely to die.

From that point of view, Iran’s decision to essentially back down, launching only missiles that killed no one (https://www.theguardian.com/us-news/2020/jan/08/irans-assault-on-us-bases-in-iraq-might-satisfy-both-sides), is utterly unsurprising. The Ayatollah is not stupid. He knows what will happen in a war with the U.S. The Iranian government, obviously, would be defeated. Very likely, the U.S. would topple the government and kill the leader, since that’s what happened the last time we went to war in the region. (This might in turn cause ISIS or other terrorist groups to gain more followers, but not before Khamenei himself was dead.)

In fact, Trump just might order Khamenei to be assassinated even without a war. This realization is more likely to curtail Iranian aggression than any amount of economic sanctions or attacks on Iranian soldiers.

Is It Legal?

Having said all that, there is a distinct, purely descriptive, legal question: was the assassination (and other, similar assassinations) legal?

To that, the answer is “obviously not”. There is a little crime called “murder” that you’re considered guilty of when you deliberately kill people. You’re also legally considered to be guilty of it when you successfully direct someone else to kill someone. So, on the face of it, Donald Trump is guilty of murder. That’s not partisan rhetoric, by the way. That’s just an objective, descriptive fact.

Now, there are exceptions, where you can kill someone and not be legally considered a murderer. One exception is for killing in war. But the U.S. is not at war with Iran (thus far!), so that would be a tough case to make.

Another exception is for self-defense or defense of innocent third parties. This is what the Trump administration would like to claim. But, in American law, in order to employ the “defensive killing” defense, you have to argue that the person you killed posed an imminent threat to life or limb. It’s not enough to say that the person was eventually going to kill you or some innocent third party. You have to argue that the person was just about to do it, right when you killed him. (If they were merely going to do it at some further future time, you’re supposed to call the police, run away, or something like that.)

That, obviously, is why Trump administration officials keep insisting that Suleimani was planning an “imminent” attack, at the time U.S. forces killed him. Because if it wasn’t imminent, then, according to our own law, we murdered him. We sure don’t want to say we’re murderers; therefore, it must have been imminent . . . but we can’t provide any information whatsoever about this attack that we’re talking about. We don’t know where it would have been, or when, and we can’t tell you a single thing about the evidence we have for this, but . . . believe us.

Obviously, they’re lying. If you don’t know that, then I would like to sell you the Brooklyn Bridge seven times, because that’s how absurdly gullible you are.

The Obama administration, by the way, ran into the same issue. They also liked to assassinate people with drones, but they didn’t want to call themselves murderers. (Bush also used some drone strikes, but nowhere near as many.) So the Obama Justice Department wrote up a memo explaining that the people they were targeting were all posing “imminent” threats. They seem to have vastly expanded the meaning of “imminent”. In the ordinary context (i.e., if you’re not a government official), an imminent attack pretty much has to be just about to occur — surely within hours, if not seconds. But in the new sense introduced to justify government killings, an “imminent” attack might be coming within a few years.

So the last three Presidents were probably murderers, legally speaking. Of course, you might think that these were mostly good murders. Be that as it may, the Suleimani murder really doesn’t seem to have been a particularly bad or shocking one, among the murders that our leaders have been carrying out over the past couple of decades.

Outlaw Universities

Discrimination in Academia

Probably everyone in the academy knows that affirmative action is widely practiced: racial minorities (except Asians) and women are commonly given preference in hiring and admission decisions at American universities. I would guess, however, that some academics — and many more non-academics — are unaware that typical university hiring practices are blatantly illegal. So I’m going to talk about that for a while, in case you find that interesting.

Job advertisements commonly say that the university rejects discrimination, supports equal employment opportunity, and considers all applicants “without regard to” race, sex, religion, etc. What they actually mean by this is that they only discriminate in certain specific ways. E.g., they don’t discriminate against blacks or Hispanics, but only against whites and Asians. They don’t discriminate against women, only against men. And so on. (This sounds to me like a rather Orwellian use of “equal opportunity”. But what do I know? I’m just some crazy libertarian philosopher.)

I think pretty much everyone in the academic subculture knows this. I don’t know if this is widely known outside academia, though.

Here are a couple of examples:

Example 1: the University of California

This was in the news recently. The University of California, in 8 of its 10 campuses, requires all job applicants to submit a “diversity statement” explaining how they will add to the “diversity” of the faculty. You can read about it on the UCLA site: http://ucla.app.box.com/v/edi-statement-faqs

Great — they want diverse viewpoints! Maybe they’re finally going to add some differing ideological perspectives to the chorus of left-leaning professors, right?

Well, if you think that, I have some bridges to sell you. The requirement is obviously designed to help exclude whites, men, and non-leftists. Here, you can read about some of the results achieved at UC Berkeley: https://ofew.berkeley.edu/sites/default/files/life_sciences_inititatve.year_end_report_summary.pdf. The report includes tables summarizing the outcomes of two faculty searches.

They’re so proud of what they’ve done that they posted those results publicly. Can anyone doubt that their aim is to increase the number of women and racial minorities hired?

Now, as I say, every professor already knows the score. You know that if you apply to UC, and you belong to a minority, you should talk about that in your “diversity statement”. You know that the hiring committee wants to ask you your race and gender, but they can’t legally do this, so they put in the “diversity statement” requirement as a proxy. If you want the job, you’ll play ball.

Example 2: The Other UC

We have a more limited version of the idea at my own school: https://www.colorado.edu/postdoctoralaffairs/current-postdocs/chancellors-postdoctoral-fellowship-diversity-program

Those are special post-doctoral fellowships earmarked for candidates who “contribute to diversity”. The successful candidates are meant to become regular tenure-track professors at the end of the fellowship. The candidates, again, have to submit a diversity statement. And again, the university does not explicitly ask you about your race or gender (since that would be illegal), nor do they explicitly say that they are going to exclude white men. But everyone knows the score.

The Relevant Law

Now, what is remarkable about all this? Why do I claim it is illegal?

The Civil Rights Act

Here is one relevant law, with which anyone who makes hiring decisions ought to be familiar: it’s called the Civil Rights Act of 1964. Here are some pertinent quotations (Title VII):

“It shall be an unlawful employment practice for an employer – (1) to fail or refuse to hire or to discharge any individual, or otherwise to discriminate against any individual with respect to his compensation, terms, conditions, or privileges of employment, because of such individual’s race, color, religion, sex, or national origin . . .”
“Notwithstanding any other provision of this subchapter, (1) it shall not be an unlawful employment practice for an employer to hire and employ employees . . . on the basis of his religion, sex, or national origin in those certain instances where religion, sex, or national origin is a bona fide occupational qualification reasonably necessary to the normal operation of that particular business or enterprise . . .”

Notice some things about this:

  • It doesn’t say that it’s illegal to discriminate against women; it says that it is illegal to discriminate on the basis of sex. It doesn’t say it’s illegal to discriminate against blacks; it says it’s illegal to discriminate on the basis of race. There is no asymmetry drawn between the genders, nor between races. It doesn’t make any distinction between the majority and the minority, or between historically privileged and historically disadvantaged groups, or anything like that. So there is just no legal basis for saying that it would be okay to discriminate in one direction but not the other.
  • The second paragraph quoted above makes an exception: you can discriminate on the basis of religion, sex, or national origin, if one of those things is a genuine qualification for the job.
  • Notice what is conspicuously absent: the text pointedly does not mention race or color in stating the exception. That means the exception does not apply to race and color, so it remains illegal to discriminate based on race even if a person’s race is a genuine job qualification. This rules out claiming that minority professors will do better at teaching, or in some other way be more suited to the job.
  • You also can’t claim that excluding white men doesn’t count as “discrimination” (say, because it’s only possible to “discriminate against” minorities). Apart from that being factually false, the first clause prohibits “failing or refusing to hire” a person because of their race, etc. There is no way that these universities can deny that they have “failed to hire” certain people for particular positions because of those people’s race and sex.

This really is not ambiguous at all. The racial preferences are clearly and completely illegal.

It’s ironic that this law — one of the great triumphs of the Civil Rights Movement of the 1960s — is precisely the one that modern-day leftists, who fancy themselves heirs to that movement, are most at odds with. This indicates how far that movement has strayed from its founding ideals.

Proposition 209

The policies are extra-illegal in California, because, in addition to the Civil Rights Act, California has a law called Proposition 209, which was passed directly by the voters in 1996. This law was specifically written to prohibit affirmative action, and it specifically mentions the University of California:

“The state shall not discriminate against, or grant preferential treatment to, any individual or group on the basis of race, sex, color, ethnicity, or national origin in the operation of public employment, public education, or public contracting. … Nothing in this section shall be interpreted as prohibiting bona fide qualifications based on sex which are reasonably necessary to the normal operation of public employment, public education, or public contracting. … ‘[S]tate’ shall include, but not necessarily be limited to, the state itself, any city, county, city and county, public university system, including the University of California …”

It is crystal clear that this was supposed to prohibit exactly what UC is presently doing.

Note again that an exception is made for cases in which sex is a genuine occupational qualification, with no parallel exception made for race. So racial discrimination is illegal no matter what.

So What?

Why draw attention to all this? One reason is that I think this kind of discrimination is wrong, counterproductive, and bad for society. I’m not going to discuss that in detail, though, both because it’s too large an issue and because you probably already understand the basic reasons why someone would think that.

I think it’s particularly wrong for the government to discriminate in these ways, and this discrimination is indeed going on at government schools.

Also, I think this points up the left-wing bias of universities. The faith in affirmative action is not shared by most of the public, but academics are so strongly and so uniformly in favor of it that universities across the country are happy to explicitly defy the law. They don’t care if everyone knows they’re doing it. They don’t care if they’re opening themselves up to lawsuits.

The few people in the academy who don’t share this ideological commitment are thus put in a difficult position, if they should happen to get on a hiring committee. They’ll be expected to implement the university’s policy of discrimination. They may be morally opposed to this policy, and they may in any case not be happy to break the law in service of the ‘identity politics’ ideology.

How Are We Getting Away with This?

How has academia’s commitment to race and gender discrimination survived so long, so openly? (It has to be said that the “diversity statement” is a paper-thin disguise. Nor have we made any very serious effort to conceal our discrimination for the past few decades.)

I wonder why the system hasn’t been undermined by the simple measure of people lying about their race. Why don’t half the college applicants just claim to be black on their applications? Don’t they know that that would greatly enhance their chances of getting in? Don’t they know that there is no verification of race other than the applicant’s own self-identification?

I also wonder why the federal EEOC (Equal Employment Opportunity Commission) hasn’t enforced the law against all the universities across the country that are openly thumbing their noses at it.

Here’s my hypothesis: the EEOC is staffed by leftists. You don’t join an agency like that because you’re concerned about discrimination against white men. You join it because you’re concerned about women and minorities, and you’re hoping to have the chance to slap down some employers who are discriminating against them.

It does not matter what the law says. It matters what the people who are supposed to enforce the law want and believe. If those people are in favor of specific forms of discrimination, then they’re going to look the other way, no matter that the law prohibits it.

I think this is the sort of thing that’s supposed to be ruled out by “the principle of the Rule of Law”. But it isn’t — or rather, we simply don’t have the rule of law. Everyone believes in the Rule of Law for the laws that they agree with. Many also believe in it for laws that they don’t care about — like laws that only hurt other people. But very few people actually support the Rule of Law for laws that disagree with their ideology.

You might think the law would be enforced through the courts. Why aren’t there dozens of people suing universities across the country?

I think the answer is that it is very difficult to prove harmful discrimination. To win a lawsuit, as far as I understand it, you would have to prove that a specific university passed you over because of your race, color, sex, national origin, or religion. To prove that, you would have to show that you specifically would have been hired if not for the university’s affirmative action program. It’s not enough that the university is showing preference for some races over others in general. You don’t have standing to sue unless you personally were harmed.

But there’s no way to prove that. When universities have hundreds of applicants for any given job, including dozens who are highly qualified, it’s going to be impossible to establish a probability that a specific person would have been hired by a race-blind process. The left-wing professors from the hiring committee are of course not going to admit to a court that they excluded a candidate for being white. They’re going to make up some other reason, which will be impossible to disprove. And so it goes.

One lesson for policy-makers: a law that there’s no way of enforcing is pointless. Prohibiting behavior that can’t be proved is pointless.

Against History

In my previous two posts, I attacked Continental philosophy and Analytic philosophy, respectively. But some philosophers remain unoffended, so now it’s time for me to attack the third main thing that people do in philosophy departments: the history of philosophy. I don’t understand why we have history of philosophy. I’ve taken several courses in history of philosophy, and listened to many lectures on it over the years, and occasionally I have raised this question, but no one has ever told me why we have this field.

1. What Is History of Philosophy?

Don’t get me wrong. I understand why we read historical figures, and why we cover them in classes — because the famous philosophers of the past are usually interesting, and they gave canonical formulations of very important views that are often still under discussion today. They also tended to have a breadth of scope and a boldness missing from most contemporary work.

What I don’t understand is why we have history of philosophy as a field of academic research. For those who don’t know, philosophers in the English-speaking world have whole careers devoted to researching a particular period in the history of philosophy (almost always within Western philosophy), and sometimes just a single philosopher.

What are these scholars trying to find out? Are they looking for more writings that have been lost or forgotten? Are they trying to trace the historical roots of particular ideas and how they developed over the ages? Or are they perhaps trying to figure out whether particular theories held by historical figures were true or false?

No, not really. Not any of those things. Scholarship in the history of philosophy is mainly like this: there are certain books that we have had for a long time, by a certain list of canonical major figures in philosophy. You read the books of a particular philosopher. Then you pick a particular passage in one of the books, and you argue with other people about what that passage means. In making your arguments, you cite other things the philosopher said. You also try to claim that your interpretation is “more charitable” than some rival interpretation, because it attributes fewer errors, or less egregious errors, to the great figure.

What you most hope to do is come up with some startlingly new way of interpreting the great philosopher’s words, one that no one thought of before but that turns out to be surprisingly defensible. It’s especially fun to deny that the philosopher said one of the main things that he’s known for saying. For instance, wouldn’t it be great if you could somehow argue that Kant was really a consequentialist?*

*Kant might actually be a consequentialist — just a weird kind of consequentialist, who thinks that a good will is lexically superior to (of infinitely greater value than) any mere object of inclination.

2. History of Philosophy Isn’t History or Philosophy

Now, let’s suppose that you have a really good historian of philosophy, who does a really great piece of work by the standards of the field, which also is completely correct and persuasive. What is the most that can have been accomplished?

Answer: “Now we know what philosopher P meant by utterance U.” Before that, maybe some people thought that U meant X; now we know that it meant Y.

This is of no philosophical import. We still don’t know whether X or Y is true. You might think that, because the great philosopher thought Y, this is at least some evidence for Y. But that would be extremely weak evidence (almost all of the major doctrines of the major philosophers are false). It would also be a crazy way of going about investigating the issue. It would be much better to just directly consider what philosophical reasons there are for believing Y.

It is also of minimal historical import. “What thoughts were occurring in the mind of this specific person, when he wrote this specific passage?” is technically a historical question. But it is a trivial historical question, unrelated to understanding any of the major events in history. It’s not as if, for example, we’re going to understand why Rome fell, if only we get the right interpretation of Aristotle’s Metaphysics Gamma.

Even when it comes to purely intellectual history, what is historically important is how Aristotle was understood by the people who read him, whether or not what they understood was what Aristotle truly meant.

Historians of philosophy, in brief, are expending a great deal of intellectual energy on questions that do not matter.

You might ask: What’s wrong with that? At least the historians seem to like what they’re doing, so it’s interesting enough to them. True. But intelligent people are a scarce resource in society. It’s fine for you to use your brainpower on questions that don’t matter. But the rest of society has no reason to pay you for doing that, when there are important questions that society would benefit from having more brainpower devoted to.

3. Why Do We Have History of Philosophy?

Why, then, does this field of academic research exist?

Because research-oriented philosophy departments (like all philosophy departments) have courses in the history of philosophy. When they hire someone to teach these courses, they think they have to hire someone who specializes in history of philosophy. That person will also be expected to do research in addition to teaching, since they are at a research school. So they do the stuff I described above.

A solution: You don’t actually need a historian to teach history of philosophy. Any ordinary philosopher can teach history of philosophy, because any ordinary philosopher can read a few major works of the given historical figure, and explain them well enough for undergraduate students. The more complicated, subtle interpretive questions that scholars in history of philosophy debate are not suitable for undergraduate courses. Scholars in history might even be worse at teaching it than ordinary philosophers, since the history specialists are more likely to confuse students by talking over their heads and getting lost in small interpretive details.

So, just hire any philosopher.

4. When History Is Bad for You

The Problem with Religious Texts

Being too focused on history of philosophy is bad for your mind, in something like the way that being overly religious can be bad for your mind.

Religious people are sometimes prevented from looking at and understanding the real world, because of their focus on a religious text. If you take the Bible, the Koran, etc., as a sacred text, then you might try to understand the whole world in terms of it, and thus have an overly narrow perspective. There is also a good chance that the book contains errors or misleading passages, and that the religious person will arrive at false beliefs by trying to rationalize those errors.

Folie à Deux

The great texts in the history of philosophy are not treated quite like religious texts. But they aren’t treated entirely unlike religious texts. Historians commonly treat their chosen historical figures with more respect and deference than you would treat any contemporary figure, and probably more than you should treat any human being. They try everything in their power to avoid admitting that the great philosopher was wrong or confused about a major philosophical point.

Almost all philosophers are mostly wrong. But if you spend too much time studying a particular philosopher, you get drawn into a sort of folie à deux, in which you start to perceive the world in terms of that philosopher’s ideas. Most historians of philosophy appear to believe that the philosopher they study was basically right (though they do not argue for this in their work, which, as noted above, focuses instead on exegesis).

Prima facie, it’s really unlikely that you should be a follower of some philosopher of the distant past (say, over 200 years ago). One reason is that human knowledge as a whole was in a completely different state two hundred or more years ago. Science scarcely existed when most great philosophers wrote. Even philosophy has developed a great deal in the last two centuries. Contemporary philosophers have the advantage of access to earlier philosophers’ work, as well as more rigorous training, and fruitful interactions with a very large, diverse, and active group of other professional philosophers.

Now, if your philosophy basically corresponds to that of some philosopher who lived hundreds or thousands of years ago, then you’re basically saying that none of the vast expansion in human knowledge that has occurred since then, nor any of the work done by philosophers themselves in the past couple of centuries, is philosophically important. None of that has taken us significantly farther, when it comes to philosophical questions, than some guy who lived in prescientific times.

I think that’s super-unlikely.

Please Don’t Be an Aristotelian

To give one important example, there are people today who are followers of Aristotle. I think that’s crazy. If Aristotle lived today, there is no way that he would be an Aristotelian. If we brought him through time to the present day, he would swiftly start learning modern science, whereupon he would throw out his outdated worldview, and he’d probably laugh at the modern Aristotelians.

Aristotle might have been the greatest thinker of all time. But being a great thinker, even the greatest, is not as important as having access to the accumulated human knowledge of the last 2,000 years. This is why the work of much less-great thinkers who are born today is more likely to be correct than the work of Aristotle. Aristotle’s philosophical method is largely about reconciling the opinions of the many and the wise (the endoxa). But of course, those would be the opinions of the people of his day — who knew next to nothing.

To be a little more specific, Aristotle’s philosophy is shot through with teleology. Things are supposed to have built-in goals or functions — not just conscious beings and artefacts, but everything in nature. This is just a completely false conception of the world. It’s not a dumb thing to think if you’re living 2,000 years ago. But we now have a vast body of detailed and rigorously tested scientific explanations of all manner of natural phenomena. Natural teleology — purposes or ‘functions’ that exist in nature apart from any conscious being’s desires — contributes nothing to any of them.*

*I know that someone is now going to post a comment claiming that natural functions appear in biology. But evolutionary functions are not equivalent to Aristotelian teloi. All biological phenomena are explicable by mechanistic causation.

So if you’re still talking about natural functions, I think that’s kind of like a doctor who’s still worrying about imbalances of the four bodily humors.

Look Outside the Text

Part of the attraction of doing history of philosophy, I believe, is its insularity: one can simply dwell entirely within the world of Philosopher A’s texts. You can read all of those texts, and you can know that there will never be any more, since the philosopher is now dead. (Though, admittedly, there will be more secondary literature.)

If you’re a regular philosopher, you have to worry about objections and evidence that could come from anywhere. Some philosopher could devise a completely new argument that you have to address. Some scientific development (not even a development in your own field!) could cast doubt on one of your theories (unless, of course, you have carefully defined the questions you talk about so that they are ‘purely conceptual’; see previous post).

If you’re a historian, you don’t have to worry about all that, because you’re not arguing that any philosophical thesis is true. You’re just saying that some philosophical thesis is supported by the texts. That makes life simpler and easier.

But that is also why history of philosophy is bad for you. Because people actually should think about the big questions of philosophy. And when we think about them, we need to do so in a way that is open to the many different considerations that are relevant to the truth.

We should think, for example, about what is the right thing to do, not what Kant said was the right thing to do; we should think about what is real, not what Plato said was real.

The Failings of Analytic Philosophy

In my last post, I discussed what is good about analytic philosophy. After that, I was going to note some of the shortcomings of analytic phil, but the post was getting too long. So now, here’s what’s wrong with analytic phil, as currently practiced.

The main problem: too analytic.

Background: analytic statements are, roughly, statements that you can see to be true just from understanding the meanings of words. Like “all rhombuses have four sides” and “the present is before the future” [if the present and future both exist]. There are issues about how exactly to define “analytic” sentences, but let’s not worry about that.

Analytic philosophers used to think that philosophy was or ought to be a body of analytic knowledge, and that analytic knowledge was essentially about the meanings of words, or the relationships between concepts, or something like that, and did not concern substantive, mind-independent facts. So they spent a lot of time talking about word meanings, how to analyze concepts, and boring stuff like that. They never did succeed in analyzing anything, though.

I don’t know how many people still think the job of philosophy is to analyze language/concepts. I don’t think it’s very many. But the field retains leftover influences of that early doctrine. And the central problem with this is that most questions that are amenable to typical analytic-philosophy methods are just not very interesting.

More specifically, I see three things that we’re doing too much of.

1. Fruitless Analysis

We spend too much time trying to analyze concepts. For instance, there are dozens of theories that start with “S knows that P if and only if . . .”, followed by some set of conditions — which get increasingly convoluted and hard to follow as time goes on. This has occupied a large portion of the literature in epistemology over the last 50 years.

There are two problems with this kind of philosophizing. One is that the analyses are always false.

That’s controversial; some philosophers think that they themselves have correctly analyzed one or more philosophically interesting concepts. But most will agree that there are almost no successes, and that no philosophical analysis has attained general acceptance. For every attempted analysis, there are many philosophers who would say that that analysis is wrong and has been refuted.

For discussion of why no one ever successfully analyzed anything, see my “The Failure of Analysis and the Nature of Concepts” in The Palgrave Handbook of Philosophical Methods (2015).

This is probably not about to change. We’ve had a lot of highly intelligent, highly-educated, dedicated people working on the analysis of various philosophically interesting terms, for decades now. If we don’t have a single clear success by now, I don’t think we need to keep doing this for another 50 years. There ought to be a time when you move on.

Another reason this is fruitless is that the analyses we devise would not be particularly useful, even if one of them were widely accepted. The analyses that epistemologists now debate are so complicated and confusing that you would never try to actually explain the concept of knowledge to anyone by using them. So what is the point?

Perhaps the value of these analyses is purely for the theoretical understanding of philosophers. But understanding of what — how a specific word is used in a specific language? The exact contours of a conventionally defined category? Is that what we need to expend decades’ worth of reflection by a host of highly sophisticated minds to figure out?

2. Semantic Debates

A fair amount of debate in analytic philosophy, even when it is not directly about the analysis of some word or concept, looks to me essentially semantic. And as I’ve just suggested, I don’t find semantic debates especially interesting.*

*Counterpoint: maybe our current conceptual scheme reflects the accumulated wisdom of our society, about what are the important and useful-to-discuss phenomena in the world. And maybe that’s why figuring out the correct account of that conceptual scheme is helpful?

Example 1: Justification

The debates between internalism and externalism in epistemology look semantic. Roughly, the debate concerns whether justification for a belief is entirely determined by the subject’s internal mental states (or states the subject has access to, or something like that).

Ex.: Reliabilists (the most common kind of externalists) sometimes say that a belief is “justified” as long as the subject formed it in a reliable way, whether or not the subject knows or has reason to believe that the belief-forming method is reliable. Internalists say this is not enough.

That looks to me semantic. As an internalist, I don’t deny that reliability exists or is good. I just don’t think that’s what “justification” refers to.

Example 2: Composition

There are debates in metaphysics about the “existence” of various things. These include debates about when a composite object exists.

One view holds that there are no composite objects. That is, if you take some elementary particles, there is nothing you can do to them that will make them collectively comprise a larger object. So tables don’t exist, people don’t exist, etc. (Don’t worry. There are still particles arranged table-wise; it’s just that they don’t add up to a single object.)

Other philosophers say that any objects compose a further object. If you have an object A, and an object B, then there is always a third object that has both A and B as parts. E.g., there’s an object composed of my left eye and Alpha Centauri.

Still other philosophers say that some but not all ways of arranging simple things make them compose a further thing.

All that strikes me as a very semantic sort of debate to have. (There are arguments that it isn’t semantic in the literature. But it really feels semantic.)

3. Defining Down the Issue

Okay, here is my biggest complaint. Philosophers will actually decide what questions to ask based on which questions can be addressed by purely a priori methods, especially conceptual analysis and deductive arguments. This often involves shifting attention away from questions that matter, to questions that are in the vicinity but that in fact do not matter at all.

Example 3: Authority

Say we’re doing political philosophy. And suppose I have raised the issue (as I have been known to do) of why any government should be thought to have any moral authority over anyone. Why should those clowns in Washington get to tell us what to do, and why should any of us obey them?

As a contemporary academic political philosopher, let’s suppose, you would like to say something about this. But it could be difficult, as I’ve formulated it. So you’re first going to want to change the question to something more “analytic philosophical”. How about this: how should an ideal “liberal” political order make decisions so that we would have reason to respect them, assuming that they were not independently unjust?

You then go on to discuss a theory of the ideal conditions for public deliberation among a group of rational agents who all regard each other as free and equal citizens. These conditions might include, for example, that everyone has a full opportunity to be heard, that all ideas receive fair consideration, and that the outcome of deliberation is determined only by the merits of the arguments.

I then point out (as is my wont*) that none of those conditions obtains in any society, so we still have no basis for political authority. You respond that that is an empirical matter outside your purview. Your interest was simply in describing an ideal.

*See The Problem of Political Authority, sec. 4.2.

That’s lame. That’s essentially replacing the question that matters with a question that doesn’t matter, because the latter doesn’t require getting off the armchair.

We can’t answer whether any state actually has authority by reflecting on concepts. But that question is nevertheless what matters.

Example 4: God

When I worked as a TA in grad school, some of the classes covered the Problem of Evil. Here’s a simple way of understanding the problem:

  • God, if he exists, is supposed to be all-knowing, all-powerful, and maximally good.
  • If God is unaware of all the evils in the world, then he is not all-knowing.
  • If he is aware of evil but unable to do anything about it, then he is not all-powerful.
  • If he is aware of evil and able to eliminate it, but unwilling to do so, then he is not maximally good.
  • But if God is aware of evil and both willing and able to eliminate it, then how can evil exist?
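
The bulleted argument can be checked mechanically. Here is a minimal sketch of it in Lean 4 (the propositional names such as `Aware` and `MaxGood` are my own labels, not from any text under discussion); classical reasoning is used to infer each divine attribute's corresponding condition from the contrapositive of each premise:

```lean
-- Each hypothesis h1–h4 encodes one bullet point above.
-- Given the three attributes plus the existence of evil, we derive a
-- contradiction (False), which is the Problem of Evil's conclusion.
theorem problem_of_evil
    (Aware Able Willing AllKnowing AllPowerful MaxGood EvilExists : Prop)
    (h1 : ¬Aware → ¬AllKnowing)                 -- unaware ⇒ not all-knowing
    (h2 : Aware ∧ ¬Able → ¬AllPowerful)         -- aware but unable ⇒ not all-powerful
    (h3 : Aware ∧ Able ∧ ¬Willing → ¬MaxGood)   -- able but unwilling ⇒ not maximally good
    (h4 : Aware ∧ Able ∧ Willing → ¬EvilExists) -- aware, able, willing ⇒ no evil
    (hk : AllKnowing) (hp : AllPowerful) (hg : MaxGood)
    (he : EvilExists) : False :=
  -- From each premise's contrapositive, God must be aware, able, and willing:
  have ha : Aware := Classical.byContradiction fun hna => h1 hna hk
  have hb : Able := Classical.byContradiction fun hnb => h2 ⟨ha, hnb⟩ hp
  have hw : Willing := Classical.byContradiction fun hnw => h3 ⟨ha, hb, hnw⟩ hg
  -- ...but then, by h4, evil should not exist, contradicting he.
  h4 ⟨ha, hb, hw⟩ he
```

As the formalization makes explicit, the contradiction only follows from all four premises together with all three attributes, which is why denying any one attribute (as in the response below) blocks the argument.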

Here is a possible response: maybe God isn’t all-powerful after all. (Or he could fail to be all-knowing, or maximally good, but the ‘all-powerful’ attribute is the one theists are most likely to give up.)

I saw this discussed in one of these introductory philosophy textbooks that the students were reading. The author (who was defending atheism based on the Problem of Evil) said something like “we are merely bored by such replies” — I guess because it’s not interesting to defend a thesis by redefining it. (Well of course you can defend the existence of “God” in some sense of that word!)

I found this kind of amazing. So if it turns out that there is an extremely powerful, intelligent, and good being who created the physical universe, but the being isn’t capable of all logically possible actions, then that would be completely uninteresting to a philosopher, because … it doesn’t satisfy the definition of a certain word that we stipulated at the start? That sounds to me like caring more about word games than about reality.*

*In fairness, there are cases where defending a thesis by redefining it renders it uninteresting. E.g., if you defended “theism” by defining “God” to refer to nature, that would be uninteresting. That’s partly because the new thesis would be uncontroversial, and also because it is too far from what we initially were interested in.

Why do we think the traditional philosophers’ conception of God (the O3 world-creator: omniscient, omnipotent, and omnibenevolent) is the interesting thing to discuss? I suspect the answer is, at least in part, that this conception makes it easy to construct a priori, deductive arguments about “God”, and spawns lots of fun conceptual/logical debates (“Is omnipotence logically coherent?” “Is it compatible with perfect goodness?” “Is maximal goodness compatible with free will?” Etc.)

In other words: the traditional definition creates jobs for armchair philosophers.

But that’s not a rational, reality-oriented basis for selecting a definition. A rational basis for selecting a definition would be something like: “This is the definition that best fits with what we (seem to) have evidence for.” Or: “This is the definition that enables us to formulate the questions that are important.” (Granted, the existence of the O3 world-creator would be important. But a sub-omnipotent creator would also be important.)

Academic philosophers are so used to defining issues in this way (to create jobs for conceptual analysts, so to speak) that, if you try to discuss a normal point, philosophers will often misunderstand you, because they will try to construe you as making some ‘conceptual point’ that can be verified or refuted by analysis, deduction, and hypothetical examples. They default to hyper-strengthening or hyper-weakening issues. E.g., if someone wants to discuss whether A’s are generally B, philosophers will try to talk about (i) whether it’s conceptually possible for an A to be B, and/or (ii) whether it’s logically necessary that all A’s are B. This shows a greater interest in ideas, and how they relate to each other, than in the actual world.


Tl;dr: Analytic philosophers focus too much on playing with concepts, and not enough on thinking about the parts of reality that matter.

Analytic vs. Continental Philosophy

Some of you might know that there is a split in contemporary philosophy between “analytic” and “continental” styles, but not know what this split is about, or why analytic philosophy is better. This post is meant to remedy that.

I. About Analytic Phil

Analytic philosophy is mainly written in the English-speaking countries (England, America, etc.). Think of people like G.E. Moore, Bertrand Russell, A.J. Ayer, and most people in the high-ranked philosophy departments in the U.S. today.

“Analytic” philosophers used to be people who thought that the main task of philosophy should be to analyze language (explain the meanings of words), or analyze concepts. But now they are basically just people who do philosophy in a certain style (regardless of their substantive views).

What is that style? There is generally a fair attempt to say what one means clearly, to give logical arguments for one’s theses, and to respond (logically) to objections to one’s arguments.

Also, there is still a fair amount of attention paid to questions about the meanings of words, or the logical and semantic relations among concepts, that are of philosophical interest. If an analytic philosopher is discussing justice, you can usually expect discussion of such things as the meaning of “just”; how the concept of justice relates to such concepts as those of fairness or rightness; etc.

II. About Continental Phil

Continental philosophy mainly comes from continental Europe, especially France and Germany. Think of people like Heidegger, Foucault, and the existentialists.

The style is largely the opposite of that of analytic phil. Continental writers are generally much less clear about what they’re saying than the analytic philosophers. They won’t, e.g., explicitly define their terms before proceeding. They use more metaphors without any literal explanation, and they use more idiosyncratic, abstract jargon.

When they advance an idea, they sort of give arguments for it, but it’s hard to isolate specific premises and steps of reasoning. A continental author would never tell you that he has three premises in his argument, and then write them down as statements (1), (2), (3) (as analytic philosophers often do). You also would find a lot less effort to directly address objections or confront alternative theories. They say things that are supposed to lead you along a line of thought. It’s just that at the end, it’s very hard to answer questions like “How many premises were there?”, “What was the 2nd premise?”, and “What was the first objection?”

There are also certain doctrinal themes. Works of continental philosophy are much more likely than analytic philosophy to communicate some kind of subjectivism or irrationalism. That is, you are more likely to find passages that (when you sort of vaguely figure out what they might be saying) seem to be arguing that reality depends on observers, that it is not possible or not desirable to think objectively, or that it’s not possible or not desirable to be rational.


Philosophers generally tend to lean to the left politically. But Continental philosophers tend to lean very far left (more so than other philosophers).*

*Aside: Heidegger, a central figure of Continental philosophy, was literally a Nazi, which is a “right-wing” view. So a more complete statement would be that continental philosophers are more likely to hold crazy and horrible extreme political views, like communism and fascism.

III. Carnap v. Heidegger


One can’t mention the analytic/continental divide without mentioning the disagreement between (continental philosopher) Martin Heidegger and (analytic philosopher) Rudolf Carnap. In “The Elimination of Metaphysics through Logical Analysis of Language,” Carnap discusses nonsensical utterances that can be made in natural language (http://www.ditext.com/carnap/elimination.html). He gives as an example the following excerpts from Heidegger:

What is to be investigated is being only and—nothing else; being alone and further—nothing; solely being, and beyond being— nothing. What about this Nothing? . . . Does the Nothing exist only because the Not, i.e. the Negation, exists? Or is it the other way around? Does Negation and the Not exist only because the Nothing exists? . . . We assert: the Nothing is prior to the Not and the Negation. . . . Where do we seek the Nothing? How do we find the Nothing. . . . We know the Nothing. . . . Anxiety reveals the Nothing. . . . That for which and because of which we were anxious, was ‘really’—nothing. Indeed: the Nothing itself—as such—was present. . . . What about this Nothing?—The Nothing itself nothings.

(Note: that is much clearer than most of Heidegger’s writing.)

Carnap goes on to explain (using predicate logic) how in a logically proper language, such statements could not be formulated.
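A rough sketch of the kind of point Carnap makes, in modern notation (my own reconstruction, not a quotation from Carnap): in predicate logic, “nothing” is not a name but a negated quantifier, so it cannot serve as the subject of a predicate.

```latex
% "There is nothing outside" -- here "nothing" dissolves into a negated
% existential quantifier; no object called "nothing" is referred to:
\neg \exists x\, \mathrm{Outside}(x)
% Heidegger's "the Nothing nothings" would instead require "the Nothing"
% to be a singular term $n$ with its own predicate $N$ -- a form which,
% on Carnap's view, a logically proper grammar does not license:
N(n)
```

The grammatical accident that “nothing” can occupy the subject position of an English sentence is, on this diagnosis, what makes Heidegger’s sentences look meaningful when they are not.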

IV. Analytic Philosophy Is Obviously Better

Many people interested in Continental philosophy are perfectly nice people. That said, analytic philosophy is obviously better. Why?

A. Style

My description above of the difference between the two schools should make it clear why I say analytic phil is better. These things:

  1. Clear theses
  2. Clear, logical arguments
  3. Direct responses to objections

I would say are the main virtues of philosophical (or other intellectual) writing. And by the way, I don’t think my description of the difference between Continental and Analytic Phil is very controversial. I think almost anyone who looks at samples of the two kinds of work is going to notice those three differences.

Why are those things important? Because (and I assume this without argument) philosophical work has a cognitive purpose. The purpose is to improve the reader’s knowledge and understanding of something. The purpose is not, e.g., to confuse people, to impress people with your vocabulary, to enjoy the contemplation of complex sentence structures, or to induce people to shut up and stop questioning you.

For a work to increase the audience’s knowledge and understanding, it is generally necessary that the reader understand what the work is saying. Therefore, clear expression is a cardinal virtue of philosophical writing.

Also, to increase a reader’s philosophical knowledge and understanding, one generally has to give the reader good reasons for believing what one is saying. That is because, in most cases, philosophical ideas that are worth discussing are not so self-evident that readers can be assumed to see their truth immediately upon their being stated. Usually, one’s main thesis is something that other smart people would disagree with.

Thus, if the reader is going to rationally adopt your position, they will generally need reasons. Also, they will generally need to understand what is wrong with the main objections that other philosophers would raise. If, instead, they adopt your view because of your rhetorical skill, because they’re impressed with your sophistication, because you’ve confused them too much for them to think of objections, etc., then they will not have acquired knowledge and understanding of the subject.

So, giving logical arguments and responding to objections are also cardinal virtues of philosophical work.

You might think this is all trivial and in no need of being explained. But it appears that many people (who prefer the Continental over the Analytic style) don’t appreciate these points.

B. Doctrines

The other thing to point out is that the substantive doctrines most commonly associated with continental philosophers are false.

(1) There is an objective reality.

For example, when you close your eyes, the rest of the world doesn’t pop out of existence. The world was around long before there were humans (or even non-human observers). The Earth has been here for 4.5 billion years, whereas human observers have only existed for 20,000 – 2 million years (depending on what you count as “human”). Therefore, the world doesn’t depend on us.

What’s the objection to this? As far as I can tell, there is one main argument against objective reality. A version of it first appeared, as far as I know, in Berkeley. Berkeley’s argument was something like this (my reconstruction):

  1. If x is inconceivable, then x is impossible. (premise)
  2. It is not possible to conceive of a thing that no one thinks of. (premise)
    • Explanation: if you conceive of x, then you’re thinking of it.
  3. Therefore, [a thing that no one thinks of] is inconceivable. (From 2)
  4. Therefore, [a thing that no one thinks of] is impossible. (From 1, 3)
  5. If there were objective reality, then it would be possible for there to be things that no one thinks of. (From meaning of “objective”)
  6. Therefore, there is no objective reality. (From 4, 5)

(David Stove refers to this argument as “the Gem”, and he has awarded it the prize for the Worst Argument in the World.)

Here, I will just point out the equivocation in (2). In analytic speak, it’s a scope ambiguity. The problem is that 2 could be read as either 2a or 2b:

2a. Not possible: [For some x, y, (x conceives of y, and no one thinks of y)].
2b. Not possible: [For some x, (x conceives: {for some y, no one thinks of y})].

Reading 2a is needed for the premise to be true, but reading 2b is needed for 3-6 to follow.
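In quantified modal notation, the two readings might be sketched as follows (this is my own hedged formalization, not Berkeley’s; C stands for “conceives of” and T for “thinks of”):

```latex
% 2a (true): no one can conceive of a particular thing while no one
% thinks of that thing -- conceiving is a way of thinking, so the
% embedded conjunction is contradictory:
\text{2a:}\quad \neg \Diamond\, \exists x \exists y\, \big( C(x,y) \wedge \neg \exists z\, T(z,y) \big)
% 2b (what the inference to 3-6 needs, but unsupported): no one can
% conceive THAT there is something no one thinks of:
\text{2b:}\quad \neg \Diamond\, \exists x\, C\!\big(x,\ \ulcorner \exists y\, \neg \exists z\, T(z,y) \urcorner \big)
```

In 2a the quantifier over things ranges outside the conceiving; in 2b the whole quantified claim sits inside the content of the conceiving. The argument trades on sliding from one to the other.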

Usually, the argument for subjectivism is stated a lot less clearly. Usually, people say something that sounds more like this:

“It’s impossible for us to know anything without using our minds/conceptual schemes/perceptions/etc. Therefore, we can only know things-as-we-conceive-them/as-we-perceive-them/as-our-minds-represent-them/etc. Therefore, it makes no sense to talk about things as they are in themselves. Therefore, the idea of ‘objective reality’ just makes no sense; it’s meaningless.”

I’m doing my best to make sense of the sorts of arguments I’ve heard and to make them sound sort of logical, but when you hear actual subjectivists talk, it’s usually much less clear than that. (Bishop Berkeley was actually the clearest subjectivist.)

Anyway, the above statement is essentially a (more muddled) version of the Gem, and it basically has the same mistake. The mistake is confusing the statement that a given object of knowledge is represented by a particular mind with the idea that its being represented by that mind is part of the content of the representation.

In other words: I can only imagine the Earth when I’m imagining it. It doesn’t follow from this that I can only imagine the Earth as being imagined by me. I.e., it doesn’t follow that I can’t picture the Earth being there back when I wasn’t around. (And it would be very silly to deny that I can do that.)

(2) Be rational & objective.

I’m not going to discuss at length why you should think rationally. I think that’s basically a tautology. (Rational thinking is just correct thinking. If there were a good reason for ‘not being rational’, that would just prove that the thing you’re calling ‘not being rational’ is in fact rational.)

But I am going to just comment on what’s going on when a thinker starts attacking rationality or objectivity. To be fair, few people will outright say, “Hey, I’m irrational, and you should be too!” (Mostly because “irrational” sounds like a negative evaluative term.) But you can hear people rejecting central tenets of rationality, such as that one should strive to be objective and consistent.

So here’s what I think is going on when that happens: the thinker knows that he himself is wrong. He doesn’t know this fully explicitly, of course; he is self-deceived, and is trying to maintain that self-deception. If you’re wrong, and you want to keep holding your wrong beliefs, then you kind of implicitly know that you need to avoid thinking rationally or objectively. You also have to avoid letting things be clearly stated. Fog, bias, and confusion are the key things that are going to help you keep holding false beliefs.

(Alternately, the thinker may simply want other people to hold false beliefs, and know that bias and confusion are the keys to making that possible.)

So when I hear someone more or less attacking rationality and objectivity, or trying to avoid clear formulations of ideas, I take that as almost a proof that most of the rest of what that person has to say is wrong.


(Still ahead next week: What’s wrong with analytic philosophy.)

Professorial Bias

Sometimes people ask me what it’s like to be a libertarian in a far-left-dominated institution. How much bias is there? Do left-wing academics try to exclude libertarians and conservatives?

I was thinking about this partly because I am writing a chapter for a forthcoming volume on philosophers with unorthodox (non-leftist) views in the academy (Dissident Philosophers, ed. with Tully Borland and Alan Hillman).

I’m not sure how much ideological bias there is, because it is hard to detect, and biased people rarely announce what they’re doing. But some observations:

The Political Views of Prof’s

The academy is definitely left-leaning. The overall numbers are less extreme than you’d think; surveys find large numbers of faculty who identify as “liberal”, similarly large numbers of “moderates”, and a smaller number (maybe half as many) who identify as “conservative”.

It varies a lot by field, though. Humanities and social sciences are much more leftist than average; engineering, business, and professional schools are much less so. Conservatives who worry about leftward bias are reasonable to do so, because it is the fields where people are most likely to discuss politics in their teaching and research that are most left-dominated. (Who cares if the engineering school has adequate representation of conservatives?)

I think analytic philosophy departments are much better than most of the humanities and social sciences — particularly in trying to evaluate arguments rationally, instead of just being an ideologue. I can’t really speak to Continental philosophy departments, because that is a different culture of which I am not a part.

(Aside: you might have the impression that most philosophy departments in the U.S. are analytic, since analytic philosophy fills the major journals and dominates the prestigious departments. Nonetheless, I would guess that most departments are more Continental. It’s just that these are mainly low-profile departments at teaching schools.)

To be sure, analytic, academic philosophers definitely lean left of center. But insane, dogmatic ideologues are relatively uncommon among them, compared to, say, professors in English or Ethnic Studies.

The left-wing dominance also varies by issue. There are just certain issues that are sacred in the academy — race, gender, and other “identity politics” issues. Even those who disagree with leftism on other issues had best not question the identity-politics orthodoxy.

Does It Lead to Bias?

It would be kind of amazing if the leftward slant of academic culture didn’t lead to some bias in hiring decisions, publication decisions, grant and fellowship awards, and so on. I mean, if there is one thing that we can say people are biased about, it’s politics.

(People who are quick to call out any hint of racial and gender bias often seem strangely unconcerned about ideological bias. Those who advocate most forcefully for the importance of exposing students to the differing viewpoints of people with different skin colors, body shapes, and sexual preferences, also often seem oddly uninterested in exposing students to the differing viewpoints of people who actually have different beliefs from the other faculty.)

In one survey, about a third of professors in social psychology admitted that they would discriminate against conservatives in hiring decisions.

(Source: Inbar, Yoel and Joris Lammers. 2012. “Political Diversity in Social and Personality Psychology,” Perspectives on Psychological Science 20: 1-8.)

The actual number who would discriminate is probably larger, since in general, people under-report traits that are considered improper or socially undesirable.

Now, I don’t know what the numbers would be for other fields. But I suspect that, in left-leaning disciplines generally, there is at least a significant number of people who would discriminate. To be clear, it’s not a majority. I think the majority of people, especially in philosophy, would try to be fair — and would mostly succeed in doing so.

But here is the problem: philosophy is an incredibly competitive field, as are most academic disciplines. There are commonly hundreds of applicants for a given job. In that context, for a person to succeed, they need every advantage they can get. If being a conservative is a small disadvantage — if, for example, it causes you to lose one or two votes in a departmental hiring decision — you can’t afford that. Those 1 or 2 votes can easily make the difference between you getting a job, and the next candidate getting it; and that can easily make the difference between your having a job in the profession at all and your having to choose another career. That is how tough the job market is. So there are strong incentives for conservatives, or other dissenters, to suppress their views.

That is to say nothing of the normal, familiar sort of social pressure to conform. Once a group reaches a certain critical mass of people who subscribe to a particular ideology, groupthink takes over, and people start to compete in advancing ever more extreme positions. Those who disagree with the consensus tend to keep their mouths shut.

Needless to say, none of this can be good for an institution engaged in the pursuit of truth.

My Experience

Hidden Bias?

But have I personally experienced bias? I don’t really know. When editors make publication decisions, or search committees make hiring decisions, they generally don’t announce if their decision was due to political bias.

To take one example, my book, The Problem of Political Authority, was rejected by about a dozen publishers before being taken by Palgrave Macmillan. I am very confident that it was an error for all those publishers to reject it. (Leaving aside whether the book is correct, it has >200 citations on Google Scholar since 2013, which is much more than most philosophy books that those publishers accepted.) Did some of them reject it because of its “right-wing” political orientation — or was it simply ordinary errors of judgment?

There’s no way of knowing. No one outwardly expressed a political bias, but decisions like this are so subjective that it is always possible to devise a facially impartial rationale. There’s no need for even the biased person himself to know. People just find it easier to see flaws in work whose conclusions they reject than in work whose conclusions they find congenial, and they may not notice anything noteworthy about the situation.

Ethical Intuitionism, by the way, was also rejected by more than a dozen publishers. Which was also a mistake on their part. In that case, it was not due to political bias, though broader philosophical bias can’t be ruled out.

Sometimes, it’s not even clear what counts as political bias. Suppose certain styles of writing and thinking are more common among libertarians. And suppose a particular referee dislikes those styles. Is that a political bias? Not exactly. But it’s not exactly complete impartiality either. But it would be almost impossible to avoid that kind of “bias”.

Explicit Bias

That said, I have found the profession surprisingly non-biased in observable ways. No one, that I can recall, has discriminated against me in any obvious way because of my politics — and that sort of thing certainly could have happened.

Many have in fact openly and rationally discussed political philosophy with me, and when I was a student, professors were happy to help me develop my ideas, including political ones. (Of course, one sees bias all the time in people’s assessments of issues, but that’s another matter.)

Tales of Two Biases

Ironically, the two times I can recall that I experienced clearly unprofessional treatment because of my philosophical views, it was entirely non-political.

(1) Discrimination in Logic

The first was when, as an undergraduate student, I tried to enroll in a logic course. I had missed the regular enrollment period, and I needed the logic professor’s signature on a course add form. I went to the first class session, where I intended simply to listen quietly. In the course of his lecture, the professor posed a question to the class: “Is the empty set a member of every set?” No one answered. He pointed at me and told me to answer. I answered that the empty set does not exist, but that, if we supposed there were such a thing, I would guess it to be a member of every set. The professor then explained that the empty set is not a member of every set, but is instead merely a subset of every set.
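The professor’s correction, for what it’s worth, matches standard set theory (leaving aside my question about whether the empty set exists): the empty set is a subset of every set, but not in general a member of one. A quick illustration using Python’s built-in sets:

```python
# In standard set theory, the empty set is a SUBSET of every set
# (vacuously: every element of {} is in a), but it is not in general
# a MEMBER of every set.
a = {1, 2, 3}

print(set() <= a)        # subset test: True
print(frozenset() in a)  # membership test: False -- a's members are 1, 2, 3

# A set that really does have the empty set as a member
# (frozenset is used because mutable sets aren't hashable):
b = {1, frozenset()}
print(frozenset() in b)  # True
```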

After class, I followed the prof to his office, where he signed the form to enable me to enroll in his course. I then engaged him in a discussion of whether the empty set exists. He could not understand my view that just the empty set didn’t exist; I had no objection to sets that have actual members. (No doubt I failed to give a clear explanation; here is how I would explain the point today: a set is supposed to be a collection. If there are some objects, then you can talk about the collection of those objects. If, however, you have no objects at all to start with, then there is no collection to speak of either. Hence, there cannot be a set with no members.)

At the time, I had a very direct, matter-of-fact way of stating points (as opposed to the present day, when I convey tentative assessments using all manner of subtle, indirect speech, as the reader will doubtless notice in this blog). So I would say, “By the way, the empty set doesn’t exist,” in the same tone in which you would say, “By the way, it’s raining outside, so take an umbrella.” The professor could not convince me otherwise (mostly because he did not understand my view and he had no arguments for his own). At the end of the conversation, he took back the course add form and crossed out his signature, claiming that I was “not open to learning from” him.

This rather shocked me, as, until that point, I thought that we were just having an interesting intellectual discussion, which, I had assumed, surely a philosophy professor would welcome. He was not interested in any further discussion of the empty set issue either. (I should be inclined to say that it was he who was not open to learning.) So I had to take logic from another professor in a later semester.

(2) Discrimination in Metaphysics

The second time that I outraged a professor with my philosophical views was during a graduate seminar in metaphysics. The seminar centered on the professor’s own in-progress book manuscript in which he expounded his view that human beings are not smart enough to solve (some wide range of) philosophical problems. Each week, we would sit for three hours, while the prof expounded on how it was theoretically possible for his view to be true (after all, there are certainly problems that dogs can’t solve; so why couldn’t there be ones that we can’t solve?, etc.). I periodically tried to get him to say something nontrivial, but he never would.

One week, we were discussing his chapter on the problem of free will, in which he said (at great length) that human beings are not smart enough to grasp how freedom is compatible with determinism. Thinking that I might finally get him to make an interesting, substantive argument, I pointed out to this professor that there were actually arguments in the philosophical literature that claim to show that free will is not compatible with determinism. So, before claiming that we aren’t smart enough to see how the two are compatible, it would behoove him to start by giving an argument to show that they are in fact compatible. The professor replied, “I have given arguments. They were this whole chapter.” (For the record, there were no such arguments in the chapter manuscript, which in fact did not evince any awareness of the literature on incompatibilism.) I replied, “It seems like the only argument is that you want to believe in free will, and you want to also accept determinism.” Then came the professor’s devastating retort: “That’s the stupidest thing I’ve heard you say. And that’s saying something.”

The class fell silent. In the awkward silence, he proposed that we take a ten-minute break. I never returned to the class, and never spoke to that professor again.


So those are my stories of “ideological discrimination”, so to speak. If they had been about political issues, you would probably see them as evidence of the oppression sponsored by the American left. But in fact, they were just examples of two individuals being assholes.

Bitcoin

I posted about this on FB in January 2018. Almost 2 years have passed, during which bitcoin has fluctuated in price, but it continues to have a large market value.

In case anyone has been living under a rock, bitcoin (BTC) is a privately-created, purely electronic currency, which you can easily transfer securely, and you can use it to buy various things online (and I guess at some physical stores?). E.g., you can use it to buy stuff on Overstock.com (https://help.overstock.com/help/s/article/Bitcoin).

Anyway, here’s what I said about it in 2018 (with minor edits today):

Is bitcoin overvalued or undervalued? Below are the arguments of Bitcoin Skeptics (“BTC is overvalued”) versus Bitcoin Enthusiasts (“BTC is undervalued”), as I see it:

Bitcoin Skeptics

I. No Intrinsic Value

“Intrinsic value” = The value people place on a good for reasons independent of the expectation that other people value it; value independent of the market price. E.g., orange juice has intrinsic value, because (some) people like to drink it, independent of whether others value it. In the case of a stock, intrinsic value is based on the company’s assets and future income stream (as opposed to the price of the stock).

Skeptic claim: BTC has 0 intrinsic value. Therefore, its current price can only be a speculative bubble.

Enthusiast reply: BTC isn’t a security; it’s a currency. Currencies, such as the dollar, typically have zero intrinsic value. This does not mean they have zero value period.

Skeptic: But the dollar’s exchange value was established first by having it backed by gold. Only once it was in widespread use was its backing removed. BTC can’t get established to begin with without some backing.

Enthusiast: That is an empirical, causal claim, which is being empirically tested right now. The empirical evidence is that it’s false. In 2010, a pair of pizzas was sold for 10,000 BTC. Today (12/2019), one BTC is worth over $7,000. Its market value appears to have been established. (https://www.bitcoin2040.com/bitcoin-price-history/)

II. No Rational Valuation

Skeptic: We have no objective method of figuring out how much a bitcoin is worth. It doesn’t pay dividends, there’s no established interest rate for bitcoin loans, and it doesn’t have any intrinsic value. (http://www.businessinsider.com/morgan-stanley-on-bitcoin-value-2017-12) Since we have no rational way to assign a value to BTC, its current price is based only on risky speculation.

Enthusiast: The value of a currency is determined by the value of transactions conducted using the currency, per unit time, and the number of units of the currency in circulation. This is based on the economic principle that the total value of goods and services exchanged using the currency (in a given time period) must equal the total value of the currency that is exchanged in that time period.

So there is a rational basis; it is not purely a matter of predicting other people’s feelings.
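The principle the Enthusiast appeals to is the classical “equation of exchange” (M × V = P × Q). A back-of-the-envelope sketch of how it would assign a value to a coin follows; every number here is a made-up placeholder for illustration, not actual bitcoin data:

```python
# Equation-of-exchange sketch: the total value of goods exchanged via the
# currency per year must equal the dollar value of currency units exchanged
# per year (supply * velocity * price per unit). Solving for price:
annual_txn_value_usd = 100e9  # hypothetical: $100B of goods traded via BTC/year
coin_supply = 18e6            # hypothetical: coins in circulation
velocity = 5.0                # hypothetical: times each coin changes hands/year

implied_price = annual_txn_value_usd / (coin_supply * velocity)
print(round(implied_price, 2))  # 1111.11
```

The point is only that such a calculation exists in principle, given estimates of transaction volume and velocity; the inputs themselves are of course contestable.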

III. BTC Is Rarely Used

Skeptic: Very few vendors are accepting BTC. Therefore, based on the above, it is worth very little. (http://www.businessinsider.com/morgan-stanley-on-bitcoin-value-2017-12) Almost everyone buying BTC is doing so for purposes of speculation, not to use it as a currency.

Enthusiast: Correction to earlier statement: The value of any asset is affected by its anticipated future value. Thus, if BTC *will* be widely used in the future, and this can be rationally anticipated, then it *now* has great value. If it *might* be widely used in the future, then it has some present value based on the probability that this will happen.

IV. Price Increases Look Bubble-like

Skeptic: In 2017, BTC’s price in dollars rose by 1400%. This could not plausibly correspond to any actual change in its objective value. So it must be a speculative bubble.

Enthusiast: Price changes need not be justified by a change in the objective value of an asset. They could instead be justified by a *recognition* of facts previously neglected. The rise in BTC’s price may reflect the fact that it suddenly came to the attention of many investors who previously took no notice of it. They then recognized its enormous potential value.

Skeptic: But this posits a huge market inefficiency before the price increase. Markets are not usually hugely inefficient.

Enthusiast: That is true for well-established assets and asset-classes. But huge mis-valuings, especially undervaluings, could be expected for a totally new asset.

V. Dogecoin Isn’t Worth a Billion Dollars

Skeptic: There are >1300 cryptocurrencies [now almost 3,000, in 12/2019]. The top 100 are all valued by the market at over $180 million [now $35 million, in 12/2019; see https://coinmarketcap.com/all/views/all/]. Dogecoin, for instance, has a market capitalization close to a billion dollars [now <$300 million, 12/2019]. But all these crypto’s are obviously not going to become widely used. So their value is pure speculative bubble.

Enthusiast: True, but that doesn’t mean BTC is a bubble.

Skeptic: But the forces driving up the price of Dogecoin are presumably also affecting the price of BTC.

Enthusiast: True, but that still doesn’t mean BTC is overvalued. It could be that irrational forces are putting upward pressure on the price and that it is still worth more than its current price. This is not implausible. There can be multiple errors in the market.

VI. BTC Is Evil

Skeptic: BTC can be used for black market transactions and money laundering. Also, libertarians like it. It’s evil because it’s not under the control of the state. (https://www.huffingtonpost.com/entry/the-bitcoin-hoax_us_5a3fd6dce4b025f99e17bb2f)

Enthusiast: That is irrelevant to the present question, which is the fair market valuation of the asset.

VII. BTC Is Vulnerable to Hackers

Skeptic: BTC is not a safe store of value. There are multiple cases of hackers stealing large amounts of bitcoin. (https://www.theguardian.com/technology/2014/mar/18/history-of-bitcoin-hacks-alternative-currency)

Enthusiast: Hopefully people will be more careful about security in the future.

VIII. Bitcoin Is Just a Means of Transmitting Money

Warren Buffett says: “Bitcoin is a mirage. It’s a method of transmitting money. It’s a very effective way of transmitting money and you can do it anonymously and all that. A check is a way of transmitting money, too. Are checks worth a whole lot of money just because they can transmit money?” (https://www.cnbc.com/2014/03/14/buffett-blasts-bitcoin-as-mirage-stay-away.html)

Enthusiast: *A bitcoin* is not a method of transmitting money. It is itself a bit of money. *Sending bitcoin* is, of course, a method of transmitting money. It does not follow from this that the money that is sent has no value. On determining the value of a unit of money, see (II) above.

IX. Bitcoin Isn’t Scarce Enough

Skeptic: A bitcoin is just a number. But there are infinitely many numbers. So bitcoin isn’t scarce at all. So it must have minimal value. (I am not making this up! This guy actually said that: https://www.forbes.com/sites/jayadkisson/2017/12/28/the-great-bitcoin-scam/2/)

Enthusiast: Um, a bitcoin isn’t a number.

Smarter Skeptic: Okay, but it’s possible to create any quantity of any number of other cryptocurrencies. Therefore, cryptocurrency in general is not scarce.

Enthusiast: It is also possible to create any quantity of any number of fiat currencies. That doesn’t mean that, say, the dollar has minimal value. Example: the supply of Venezuelan Bolivars has increased massively in recent years, leading to runaway inflation in Venezuela. But that doesn’t cause inflation in the U.S., or destroy the value of the U.S. dollar, because people who use dollars don’t have to use Bolivars. Similarly, the existence of other cryptocurrencies would not lower the value of BTC, unless these other currencies were widely used as substitutes for BTC.

X. The Government Will Shut it Down

Skeptic: The U.S. government may try to shut down bitcoin, e.g., by making bitcoin transactions illegal. They would do this because BTC is useful for money laundering, black market transactions, and tax evasion.

Enthusiast: This was a more plausible worry in the Silk Road days. Now it is becoming increasingly unlikely that they will do this. The total market capitalization of all cryptocurrencies is over $600 billion (https://coinmarketcap.com/) [now ~$200 billion, 12/2019]. Regardless of whether you think those prices are justified, it is highly unlikely that the government will try to eliminate a whole asset class valued by the market at [>$200 billion].

XI. BTC Is Inconvenient

Skeptic: BTC transactions take too long and cost too much. Large numbers of people are not going to put up with that.

Enthusiast: The bitcoin community will probably fix those things; second-layer payment networks such as the Lightning Network, for example, are designed to make transactions faster and cheaper.

Bitcoin Enthusiasts

I. Large Potential Use as Currency

The value of BTC depends on the value of transactions that will be conducted with it in the future. It has presently reached only a tiny portion of its potential, and its use is growing. The number of businesses accepting BTC increased steadily throughout 2017 (from ~8,200 to ~11,000) (https://cointelegraph.com/news/bitcoin-adoption-by-businesses-in-2017). Given all the recent media attention, the number will probably increase even more in 2018 and thereafter.

II. Future Investors

The current price of BTC is driven by a number of bitcoin enthusiasts that is tiny relative to the class of people who could reasonably invest. Only a tiny portion of investors have yet accepted it as a reasonable thing to invest in. This is partly because it is new and unfamiliar, partly because they don’t see enough other people doing it, partly because of the high volatility, and partly because of uncertainty as to whether it is a ‘real currency’.

All those factors will subside in the next several years. As they do, the number of people investing in it can hardly do anything but increase. That means the price will almost inevitably go up, a lot.

III. Sentiment Is Not Excessive Enough

Many are saying that bitcoin is “a bubble”. In an actual asset bubble, the peak price occurs when sentiment is at the maximum. If bitcoin is a bubble, it is nowhere near its peak. We can know this because there are so many skeptics declaring it to be a bubble. If it’s a bubble, the time to get out will be when almost all the skeptics are gone and almost everyone is touting BTC as a great investment.

A Lucky Stalemate

1. An Alleged Problem

Sometimes, we hear lamentations about political “gridlock” — the alleged problem wherein politicians are unable to agree on what should be done, and so “nothing gets done”, i.e., no (or relatively few) new laws are passed.

More generally, we have two major parties in the U.S., and — despite periodic complaints to the contrary when one side loses an important race — they are about evenly matched, and have been for decades. Neither party is able to fully implement their agenda, since there is always the other party there to oppose them. As I write this, the Democrats control the House of Representatives, while the Republicans control the Senate. Since any legislation must pass both houses, and the two parties disagree on many things, it is difficult to get any laws passed.

Many on both sides decry this situation, wishing that their side could finally gain undisputed control so they could just implement their agenda and save the country at last. They just disagree on which side ought to have total power.

Libertarians, on the other hand, often praise gridlock — we don’t want the politicians to be “getting stuff done”, because most of what they would do is bad.

Now I want to make an argument in praise of the political stalemate we’ve been enjoying, but I don’t want to appeal to specifically libertarian premises. I want to say that if you’re a reasonable person, of whatever party, you should hope that the stalemate continues.

2. Moderation

The first point is that the political stalemate — more accurately, the balance of power — exerts a moderating influence on government. The most extreme ideas that might be devised by our leaders are politically infeasible, because they will be rejected by the other party.

The only proposals that can get through the government are ones that can attract at least some support from both sides: some moderate Republicans or Democrats have to be persuaded to vote for them. On average, these proposals will tend to be the relatively less stupid and destructive ones. The most stupid and destructive policies are the most likely to occasion dissent from the “other side” (the party other than the one that proposed the policy).

Many have noticed that our politics has become more polarized in recent years. This has some bad effects, but at least one good one: maybe there will be fewer bad policies passed.

Now, you might think this a surprising argument to come from an avowed political extremist. My own policy ideas are probably more extreme than those of anyone currently in Congress (in most cases, much, much more extreme), and thus my ideas could not get passed. So that’s a bad thing.

True. But, while I think most of the ultimately correct ideas are extreme, I do not think that most of the extreme ideas are correct. I am not in favor of extremism merely as such; in fact, most extreme ideas are terrible. Even though the status quo is badly flawed, most extreme policy proposals that I hear are much worse than the status quo. Therefore, on average, I expect that if more extreme ideas (of the sort that politicians are likely to come up with) get passed, things will get worse.

Of course, you might disagree with this if you are a committed partisan of either the left or the right. Maybe you think that extreme left-wing ideas tend to be good. Obviously, I can’t examine all extreme left(/right)-wing ideas right here, since I’m not going to type in a 2000-page book. All I can do is give a general sort of prima facie, meta-consideration.

In the U.S. at present, things are going relatively well. Not compared to the ideal utopia, of course, but compared to other actual societies, from other times and places on the Earth. In fact, things are going incredibly well by that standard; almost all societies have had things vastly, horribly worse. You could say we are way above the mean of the “goodness” distribution.

In general, most large perturbations of a chaotic system should be expected to push it toward whatever is the most common state for that kind of system. Human societies are complicated and chaotic, and their behavior is difficult to predict. You could think of extreme policy changes as big perturbations of a chaotic system. We should expect such changes, on average, to have the effect of making our society more similar to the average human society. Which is terrible.

Of course, if you have some specific (extreme) policy proposal, and you think you have very powerful arguments demonstrating the advantages of that particular policy, then the above very abstract consideration wouldn’t and shouldn’t change your mind about that proposal. But I think it gives prima facie evidence that in general, making it easier to pass extreme policies (especially if you do not have a great deal of confidence in the judgment and character of typical political leaders, which you should not!) is a bad thing.

3. Corruption

Here is the other thing, which I think is the main point. As Lord Acton said, power tends to corrupt, and absolute power corrupts absolutely. If the American political stalemate were to end with one party assuming unambiguous control of the government, then the U.S. would become a one-party democracy. What’s another word for a one-party democracy? “Dictatorship”.

You might think that your side’s politicians are nice now. (Though chances are, you don’t think that. You probably just think that they are relatively less corrupt and destructive than the other side.) But once they gain unchecked power, it’s a different story. There are almost no cases in history when a group had unchecked power, and they decided to use it mainly for good, and not to benefit themselves at the expense of others.

An interesting case is the communists. If you meet communists today (as you sometimes can on university campuses), they seem like such nice people, who are just deeply concerned about justice, the plight of the downtrodden, and so on. They do not at all seem like power-mad potential killers. But in actual history, in every case where communists gained actual political power, they all but destroyed their societies. They suppressed dissent, oppressed the people, and often murdered millions.

The more morally committed a ruling elite is, the more oppressive they are likely to be. The communists were so oppressive because they wanted to remake society according to their ideals, whereas the kings and queens of old mainly just wanted to live in luxury. That’s why societal dominance by an ideologically defined group is especially dangerous.

4. How Have We Been So Lucky?

When I think about this, it seems to me that America has been very lucky to have such a long-running stalemate. Our political system, with its winner-take-all elections, seems to be designed to favor only two parties. Parliamentary systems such as the U.K.’s are much more favorable to multiple parties, which would seem to make it harder for any one party to gain dominance.

It’s a striking thing that Democrats and Republicans are so evenly divided in the U.S. Most elections turn on a few percentage points of the vote. The overwhelming majority of voters vote along party lines, but it just happens that a little less than half of voters identify with the Dems and a little less than half with the Reps; then there are a few percent of swing voters.

You could easily imagine shifts in the population so that more people start identifying with, say, the Democrats. With a shift of just a few percent of the population, the Dems would sweep the elections, and then we’re on to a one-party democracy.

What is maintaining this equilibrium?

Perhaps the explanation (related to the median voter theorem) is that politicians of both parties deliberately adjust their positions to be just to one side of the middle of the political spectrum (where the “middle” is defined in terms of the preferences of the median voter). Why would they do this? Well, suppose one politician in a race takes a position far to the right of center. Then his opponent should take a position distinctly, but not too far, to the left of the first politician. The second politician will then presumably get the votes of a large majority of voters (everyone who is clearly to the left of the first politician in their preferences).

Obviously, there are some complications (if your position is too moderate, then your party’s extreme members may not show up at the polls, etc.). But this general sort of dynamic could explain the long-lasting equilibrium in which Dems and Reps are about evenly matched.
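The positional logic above can be sketched numerically. Here is a toy model (entirely made-up: voters are assumed to hold positions on a one-dimensional spectrum and to vote for whichever candidate is closer to them):

```python
import random

random.seed(0)
# Toy electorate: one-dimensional policy preferences in [0, 1].
voters = [random.random() for _ in range(100_001)]
median = sorted(voters)[len(voters) // 2]

def vote_share(a, b):
    """Fraction of voters strictly closer to candidate a than to candidate b."""
    return sum(abs(v - a) < abs(v - b) for v in voters) / len(voters)

# Candidate A stakes out a position far to the right of center.
a = 0.8
# Candidate B positions just to A's left, capturing everyone to A's left.
b = a - 0.01
share_b = vote_share(b, a)
print(f"median voter: {median:.2f}, B's vote share: {share_b:.2%}")
# B wins in a landslide, which is the pressure pushing both candidates
# toward the median voter's position.
```

Repeating the exercise from any starting positions yields the same dynamic: whichever candidate is farther from the median can always be undercut, so both converge toward the middle.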

One problem with this story — and this is more than a minor adjustment — is that there’s a fair amount of evidence that voters don’t really vote on the basis of policy preferences. They vote more on the basis of “identity”. They just personally prefer to identify with either Democrats or Republicans. Maybe their parents always voted Republican, or their ethnic group generally votes Republican, and so they “identify as” Republicans. (See Achen and Bartels’ Democracy for Realists.)

That being the case, if the balance between Reps and Dems starts to shift in the voting population, it is not so straightforward for a politician to simply “adjust his position” in a way that will still get him close to half of the votes. If demographic changes start to happen that would produce a decisive majority who prefer to identify with the Democratic tribe, it’s not clear that the Republicans could do anything about this — it’s not clear that there’s anything they could change about themselves that would move them back toward the median voter’s preference. This would make it surprising that the balance has been maintained for so long.

I can think of two possibilities. Either Achen and Bartels are wrong, and policy preferences are a significant part of voting decisions, or, even though voters’ preferences are not mainly based on policy ideas, there are still some adjustable attributes of each party that voter preferences are based on. As the population changes, the Republicans/Democrats can perhaps adjust aspects of their image that appeal to voters’ non-cognitive, emotive preferences, so as to place their party close to the median voter’s preferences.

5. Conclusion: How Progress Works

An implication of my view is that major political progress is not to be hoped for from defeating one’s political opponents (whether those be the Republicans, or the Democrats, or both). If voter choices are based on policy preferences, then long-run progress must come from shifting the median voter.

If (as Achen and Bartels argue) policy preferences are largely epiphenomenal for average voters, then I think progress would have to come from shifting the median policy opinion among the elites.

Impeachment Defenses

I’ve been watching some of the impeachment news. It looks like opinions are extremely divided on partisan lines: ~80% of Democrats think the President should be impeached, while ~80% of Republicans think he shouldn’t. (https://projects.fivethirtyeight.com/impeachment-polls/) Politicians are even more partisan: not one Republican voted in support of an impeachment inquiry.

Who is being unreasonable here? The Republicans are. Here, I’m going to discuss the defenses of the President that I have seen. I don’t know if the politicians and Fox contributors who say these things believe them, but in case someone does, here is what is wrong with them:

Trump Defenses

  1. Trump was duly elected. The Democrats are trying to overturn the election results.
    • Comment: This complaint would make sense if the impeachment had been taken up right after the election, before Trump did anything. But it does not make sense if, after Trump was elected, he went on to abuse his powers in specific ways, and that is what prompted the impeachment inquiry. That is exactly what impeachment is for. Presumably it is not the case that no elected official should ever be impeached; yet any such case would “overturn an election result”.
  2. The Democrats have been wanting to impeach Trump from the beginning.
    • Some were, some were not. Notably, Pelosi was against impeachment until the Ukraine information came out, and no impeachment inquiry started until then. So the Democrats did not in fact consider the President’s conduct impeachable until this story broke.
    • The fact that some wanted to impeach for bad reasons, before the Ukraine story came out, does not show that the Ukraine deal isn’t impeachable. This complaint is irrelevant.
  3. We should wait for the 2020 election.
    • No, we shouldn’t. This amounts to saying that there should be no cost for Trump’s actions. He was already going to face an election in 2020, which he might have lost even if he hadn’t committed this abuse, so having to face an election in 2020 imposes no clear cost for the abuse of power. (Maybe he’ll lose some votes because of it, but no one will ever be sure of that.)
    • It is in fact Congress’ job, under the Constitution, to assess whether the President committed high crimes. This is pretty much the only legal mechanism for enforcing the law against the President. If Congress declines to impeach, they are in effect saying that the President did nothing very wrong. That will be noted, both by voters and by future Presidents.
  4. Impeachment is divisive.
    • True. But voting against it on purely party lines, and making up rationalizations like this, does not reduce the divisiveness. It’s not as though if everyone in Congress votes in support of Trump, then the country will suddenly join together and sing Kumbaya. In general, “Let’s end these divisions by everyone coming over to my side” is not a persuasive proposal.
    • What would actually reduce the divisiveness would be if some people were to vote based on the facts of the case and general principles, rather than based on party affiliation.
    • Pretty much any impeachment would be divisive. It does not follow that we shouldn’t have such a process. A President should not be allowed to commit abuses with impunity just because he has some supporters.
  5. The Bidens were corrupt.
    • Hunter Biden’s taking that job at Burisma was inappropriate and a conflict of interests. And if he or his dad were in office, then maybe we’d be talking about his impeachment. But he’s not, so we’re not. Trump is the one in office now, and we’re talking about him. So this complaint is irrelevant.
  6. What about Hillary Clinton’s email?!
    • Irrelevant.
  7. The whistleblower is untrustworthy, possibly a Democrat, a never-Trumper, or a Deep State operative.
    • Everything said by the whistleblower has been independently corroborated, so, again, this is irrelevant.
  8. Trump said there was no quid pro quo.
    • This does not show there was no quid pro quo. It more likely just indicates that Trump knew that what he was doing was wrong. People who know that they are committing a crime very often deny that they are doing so.
    • There is plenty of independent evidence (other than Trump’s explicitly saying it) that there was a quid pro quo.
    • If you shoot someone while saying, “This is not a murder,” that won’t insulate you from murder charges.
  9. A lot of the testimony is second-hand.
    • Trump’s efforts to prevent first-hand testimony from being given (ordering his people not to testify) suggest that he thinks the first-hand testimony would worsen his case. So the first-hand information is probably even more damning.
    • We now have more first-hand testimony (David Holmes, Gordon Sondland) about Trump’s scheme. The only more direct evidence would be Trump himself testifying, but I don’t think that’s reasonable to expect.
    • Trump already released a call transcript in which he tried to pressure Zelensky, and Mick Mulvaney already admitted that Trump was doing it.
    • Hearsay is bad when it is unreliable, but you can hardly complain about hearsay when you have already admitted to the main substance of what the hearsay supports, nor when you yourself are trying to stop the firsthand information from being used.
  10. Trump was just trying to fight crime, which is good.
    • If you think that Trump was just concerned about upholding the law, and it is just a coincidence that the only alleged crimes he knew of happened to involve (i) a conspiracy theory about someone helping his political opponent in the 2016 election, and (ii) alleged corruption by his leading political rival for the 2020 election, I have some bridges to sell you.
    • The President does not have the legal authority to, for his own reasons, withhold military aid that Congress has appropriated. He could not legally do this, even if he had a good reason for it.
    • He did not have a good reason. We heard testimony that, according to Trump ally Gordon Sondland, Trump did not care about Ukraine and only cared about “big stuff” that benefits himself. By the way, if you didn’t know that Trump was like this, then I would like to talk to you, again, about some very nice bridges that I can get you a great deal on.
    • Trump was in fact putting the security of a U.S. ally in jeopardy, to the benefit of an American enemy, Russia.
  11. We do this all the time (says Mick Mulvaney).
    • I don’t think so. No doubt, the U.S. frequently attaches conditions to the receipt of foreign aid. What does not happen all the time is that the President decides to refuse to implement the policy legally established by Congress, as part of a ploy for harming his political opponents. Nor do Presidents commonly side with a U.S. enemy that is invading a U.S. ally.
    • I know that this is not a routine thing, because of the reactions of career officials in the U.S. government — state department officials and intelligence officials who have worked in the government for many years, and have worked for other Presidents from both parties, found the incident disturbing, inappropriate, and “crazy”. That’s why Trump had to replace the ambassador to Ukraine with a yes-man; that’s why there was this whistleblower report, etc.
  12. Zelensky said he wasn’t being pressured.
    • Trump was sitting right next to Zelensky. A reporter asked Zelensky if he felt pressured by Trump to investigate Biden. Zelensky says, hesitantly: “I think you read everything. So, I think you read text. I’m sorry, but I don’t want to be involved to democratic open, uh, elections, elections of USA. No, you heard that we had, uh, I think good, uh, phone call. It was normal, we spoke about many things, and I, so . . . I think, and you read it, that nobody pushed me.” Trump interjects, “In other words, no pressure.” (https://www.youtube.com/watch?v=SSMmZLud3I4) Frankly, this really did not come across like a person feeling no pressure. He looked about as comfortable as Lando in this clip: https://www.youtube.com/watch?v=OXyH1XkQo44 (“Perhaps you feel you are being treated unfairly?”)
    • Trump was withholding $400 million in military aid, while Zelensky’s country was under Russian invasion. That seems like pressure to me.
  13. It was wrong but does not rise to the level of impeachment; impeachment is very serious.
    • No, giving someone the powers of the Presidency is very serious. We don’t need incredibly high standards for impeaching a President. We need very high standards of conduct for the President. The correct thinking is not that we can’t afford to mistakenly remove a President; the correct judgment is that we can’t afford to have a criminal President. The President losing his job isn’t a terrible thing; entrusting vast power to a corrupt leader is the terrible thing.
    • If he’s not impeached, there is no cost for his violating the law, abusing his powers, etc. If there is no cost, he will of course continue doing it. And so will other Presidents. This case is going to establish the precedent — either that Presidents can be held accountable, or that they are unaccountable as long as their party holds one house of Congress. Next time there is a Democrat in office, don’t be surprised if they violate the law and abuse their powers in order to harm Republican rivals and hold on to power.
    • Again, what Trump did was selling out a U.S. ally, in favor of a U.S. enemy. (Much as he just did with the Kurds.) This might not be impeachable if it were done legally. But if he violates a Congressional mandate to do this, while also using the powers of the government to harm a political rival, I should think that is very serious. And I really don’t think Republicans would have trouble seeing this, if the President in office were a Democrat. If Obama had refused to implement a law passed by Congress, so that he could somehow harm Mitt Romney in 2012, and if in the process he was betraying a U.S. ally that was then being invaded by the Russians, I think Republicans would have voted for impeachment in a New York second.
  14. There has been no due process.
    • An impeachment by Congress is not subject to the legal process requirements of a courtroom trial. That’s partly because it’s just a procedure to fire someone from his job, not a procedure to send someone to jail. In fact, Trump is getting much more process than you normally get when you’re being fired.
    • It’s also partly because, again, we can’t afford to have a corrupt leader in office.
    • Congress, again, is carrying out its Constitutionally prescribed function, exactly as it is legally supposed to do, and with the normal process.
  15. Zelensky never started the Biden investigation, yet he did get the military aid. Therefore, there really was no quid pro quo.
    • What this shows is that Trump’s attempted blackmail/bribery/extortion did not succeed (because it was exposed by the whistleblower). Attempts, however, are typically punishable.
    • If Trump had succeeded, we would probably never have known about his crime. Zelensky would have made the announcement, Trump would have released the aid, and no one would have said anything. Surely the rule can’t be that we don’t punish you in the case where we actually find out about your crime.
  16. We hate the Democrats.
    • Okay, that’s a good point. I have no answer to that.

The way it looks now is that the lesson we will be teaching, for all future politicians, is that a President can do whatever he likes, legal or illegal, as long as his party controls one house of Congress. 30 years from now, we’re not going to be happy that we established that principle. 30 years from now, it’s still going to be Republicans and Democrats vying for power, probably still about evenly matched. Only both will be quicker to abuse their powers for corrupt purposes. We’re selling out the future for ephemeral and doubtful political gains.

The only realistic way this doesn’t happen is if public opinion turns strongly against Trump, at which point Republican senators will switch sides as well.

Addendum

More on the confusion that it’s all about whether Biden should be investigated:

No, the question is not whether Biden is bad. Maybe Biden should be investigated, but:

a. In the American system, the President does not order up investigations of individuals. That’s not how it works. We have law enforcement people who are responsible for starting and running investigations.

b. Having politicians directing investigations of their political opponents is an obvious conflict of interests. I didn’t think that had to be explicitly pointed out, but apparently Trump Derangement Syndrome is now preventing people from seeing the most obvious, standard principles of justice, when applied to Trump.

c. In the context, Trump was obviously pressuring the Ukrainians to say something bad about Biden, whether it was true or not. If you’re sitting in Zelensky’s chair, and you’re not a complete moron, it’s going to be obvious to you that that’s what Trump wants. That’s part of the conflict of interests problem.

d. Anyway, why was Trump insisting on a public announcement? That is not generally how law enforcement works – they don’t publicly announce that they’re investigating someone at the start. The answer is, again, completely obvious: Trump’s purpose was obviously to cause political harm to Joe Biden, his leading rival for the next election, not to get justice.

e. Apart from the problem of potential injustice to the individual, the practice of having the current rulers use the power of the state to suppress their political opponents is a threat to the democratic system generally. This is the sort of thing that happens in dictatorships, not in free countries, the sort of thing that turns a country into a dictatorship. If you like this practice, imagine how you’re going to feel when a Democrat takes office, and suddenly Republicans are all getting their tax returns audited by the IRS. See if then you feel like saying, “Well, the President is just doing his job, enforcing the law.”

f. Even leaving aside all of the above, the President still cannot legally violate a policy passed by Congress. If Congress has appropriated funds to some purpose, the President does not have the legal, Constitutional authority to stop the funds from going through.

None of these are difficult points to see, if one isn’t suffering from TDS. Republicans would have no trouble grasping these points if we were talking about a Democratic politician. And my evidence for this:

g. No one in government who isn’t a partisan politician performing for a public audience thought that Trump’s behavior was okay. Multiple career officials, who had served Presidents from both parties, were shocked. Trump had to remove the ambassador to Ukraine. Then he had to conduct his Ukraine/Biden deal in secret, and with a special group led by his personal lawyer, instead of the normal diplomats. Why do you think that was? His own people, as soon as they saw his infamous phone call with Zelensky, quickly edited out parts of the transcript and then hid it. Why was that? Trump himself tried to insist that he didn’t do it (“no quid pro quo”; “no pressure”). Why was all of that the case, if Trump was just doing his job perfectly normally?

Jesus F. Christ, come on. Every damned person in the government, including Trump allies, including Trump himself, obviously knew that this was not kosher. And yet they’ve still got the Republican base convinced that it’s all perfectly fine.

More on the Health Care Problem

Here is a fascinating podcast: https://www.econtalk.org/keith-smith-on-free-market-health-care/

It’s an interview with Dr. Keith Smith, who runs the Surgery Center of Oklahoma. It’s a for-profit business that provides surgery of various kinds. They don’t take insurance, and their prices are all posted online. Their prices are also radically lower than the prices at regular, “not-for-profit” hospitals.

Near the beginning, Smith describes quoting a price for a surgery to a patient over the phone. He quotes her a price of $1,900. The patient says, “That’s interesting, because the non-profit hospital I talked to before you quoted me a price of $19,000.”

Later in the podcast, he describes some of the scams that go on in the industry. When hospitals claim that they were underpaid (the patient didn’t pay the full cost of treatment), they get money from the government. That sounds reasonable, right? They should be compensated for their good work.

This has caused hospitals to jack up prices to absurd levels, so they can regularly claim they were paid only a tiny fraction of the costs, so they can get more money from the government.

The hospitals make agreements with insurance companies whereby the insurance company only pays a fraction of the absurdly inflated price. This is all cool with the insurance companies too, because it enables them to claim that they negotiated unbelievable discounts (like an 80% or 90% discount) for their customers. It also makes it cost-prohibitive for a patient to get medical care without insurance, which is also fine with the insurance companies.

The Health Care Problem

America’s health care system is obviously f—‘ed up. Even politicians recognize it. Almost everyone I hear talking about health care (a) knows that our system is terrible, and (b) has some radical proposal that would make things much worse and would ignore all the main problems. I’m getting kind of tired of this situation, and I’m worried that when I get old and need more health care, I will have to leave the country in order to avoid being bankrupted. Of course, no one with power will listen to me, and the problem will probably continue worsening for decades, especially if politicians pass more “reforms”. But I’m going to say what I think anyway.

1. The Problem

What’s wrong with American health care? Is it

  1. Its quality
  2. Its quantity
  3. Its cost
  4. Its distribution?

There are some quality issues (like people dying from medical errors), but overall America has high quality health care. The main problem is obviously (3). Costs are absurdly high, which in turn prevents many people from getting health care, so that affects (2) and (4) as well. Anecdotally, you can literally buy some drugs from Canada for less than a tenth the price you would pay in the U.S. If you take your cat to the vet, the cat can get health care for a fraction of the price you would pay for similar care for yourself. So we know that health care does not inherently have to cost this much. Now some statistics:

We spend over a sixth of the entire GDP on health care (18%), over $10,000 per person per year, much more than other developed nations. Costs have been skyrocketing for the past few decades, as shown in these graphs:

If this goes on, we’re all going to go bankrupt from medical costs.
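As a rough sanity check, the per-person figure and the share-of-GDP figure above are consistent with each other (back-of-the-envelope arithmetic, using approximate round numbers for 2018 U.S. population and GDP):

```python
population = 327e6     # approximate U.S. population, 2018
gdp = 20.5e12          # approximate U.S. GDP, 2018, in dollars
per_person = 10_500    # approximate health spending per person per year

total_spending = population * per_person
share_of_gdp = total_spending / gdp
print(f"total: ${total_spending / 1e12:.1f} trillion, "
      f"about {share_of_gdp:.0%} of GDP")
```

Roughly $3.4 trillion per year, in the neighborhood of a sixth of GDP, matching the cited statistics.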

We’re outspending the rest of the world, including other developed nations, but we’re not getting better health outcomes than other developed nations. This is a graph of deaths amenable to prevention by health care, per 100,000 population, in several developed countries:

Notice how the U.S. is doing worst (the red bars).

(Sources: https://en.wikipedia.org/wiki/Health_care_in_the_United_States, https://www.researchgate.net/publication/262881094)

Let’s keep this in mind. The main problem isn’t that we don’t have enough insurance, or that the costs are being borne by the wrong people, or any dumb thing like that. The main problem is the total costs are too high. If we can address that, all other problems are going to get easier. If we don’t address that, then nothing else can really be fixed. No matter how you shuffle around the costs or modify the method of paying, it really cannot produce a satisfactory outcome if you don’t do something to drastically reduce the total.

2. Why Is Health Care So Damned Expensive?

There are many factors. Here are a few big ones.

a. Insurance.

Note: the problem here is not that the greedy insurance companies are making excessive profits. Profit margins in health insurance are not out of line with profit margins for companies in the U.S. economy generally. Rather, the problem is:

  1. Any time you add a middleman, you’re increasing costs. The middlemen have to be paid. If we all bought food using “food insurance”, then food costs would probably double.
  2. Insurance companies want to make sure that they don’t overpay or pay for unnecessary procedures. So they require paperwork to be filled out to convince them of this. There then have to be experts on all the complicated rules about what the insurance company pays for, how much, etc. This greatly increases costs.
  3. Because a third party is paying, patients ignore the prices. Because patients ignore the prices, providers jack up the prices. Since it’s being paid by a big, faceless corporation, no one feels bad about this.

This is part of why veterinary care is much cheaper than human care. It’s also why elective procedures (e.g., cosmetic surgery) are cheaper than “necessary” procedures — because insurance won’t pay for the former.

b. Shadow Prices.

If you go to a doctor or hospital, they never tell you the price of anything before you take it. You just have to take the medical care, whatever it is, and then wait to get a bill in the mail. For this reason, there is no question of going to a lower-cost provider, even if you wanted to.

c. Supply Restrictions.

Basic economics (the Law of Supply and Demand): for a given level of demand, if you restrict the supply of a good, the price goes up. Likewise, if you make suppliers jump over very costly hurdles, you increase prices.
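The point can be sketched with a toy linear market (all numbers below are illustrative assumptions, not estimates of the actual medical market):

```python
# Toy linear market: demand p = 100 - q, supply p = 20 + q.
# "hurdle_cost" shifts the supply curve up; "supply_cap" limits quantity.
# All numbers are made up for illustration.

def equilibrium(hurdle_cost=0.0, supply_cap=None):
    # Curves cross where 100 - q = 20 + hurdle_cost + q.
    q = (80 - hurdle_cost) / 2
    if supply_cap is not None:
        q = min(q, supply_cap)  # a binding cap cuts quantity
    return q, 100 - q           # price is read off the demand curve

print(equilibrium())                 # free entry: q=40.0, p=60.0
print(equilibrium(hurdle_cost=15))   # costly hurdles: q=32.5, p=67.5
print(equilibrium(supply_cap=25))    # capped supply: q=25, p=75
```

Either distortion raises the price; the cap does it while also shrinking the quantity of care supplied.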

In this case, to be allowed to practice medicine in the U.S., you have to:

  1. Get an undergraduate degree (~4 years),
  2. Go to medical school (~4 years),
  3. Do an internship (1 year, possibly included in the following requirement), and
  4. Do a medical residency (~3-7 years).

All of this is extremely costly in both time and money. Therefore, prices of medical care have to go up, a lot, in order to make it worthwhile for people to enter the field. Health care providers have to be compensated for the enormous up-front costs, else there wouldn’t be any providers.

There is also a limited number of residencies available in each specialty, making it impossible to increase the supply of providers to meet demand. Residency Review Committees, staffed by doctors in a given specialty, control how many residencies are offered, and they use this power to restrict the supply and raise prices. (Source: Sean Nicholson, https://www.nber.org/papers/w9649.pdf.)

d. Because We Care.

Why is it so easy to get health care for your cat, and why can he get excellent care for a fraction of what it would cost to care for you? Because he’s just a cat.

Because he’s “just a cat”, people don’t freak out as much about giving him care. They don’t do as many unnecessary tests and procedures. If something goes wrong, you probably won’t sue the vet. If you do sue, you probably won’t be awarded millions of dollars. Therefore, the vet doesn’t have to practice defensive medicine, and doesn’t have to buy malpractice insurance. And because he’s just a cat, there is less regulation governing his care.

When your cat is terminally ill and in pain, the vet will offer euthanasia. When the same situation befalls you, in most states, your doctor will not be permitted to offer the same mercy. This may result in spending much more money on care in the last few months or weeks of life.

3. Bad Ideas that Ignore or Worsen the Problem

Here are some examples of the incredibly bad ideas people come up with to “reform” the system. I think these are products of ideology rather than serious reflection or even awareness of the basic problem:

a. Make everyone buy health insurance. (per Obamacare)

This ignores the main problem. The main problem was never a shortage of insurance policies. The main problem is the total cost. Buying insurance spreads the costs over the insurance pool, which of course might be worth doing, but it does nothing to reduce the total, which is the main thing we need.

In fact, insurance is one of the main reasons why health care costs are so high in the first place, so we should expect this idea to increase the total cost of health care. And indeed, health care costs have continued, utterly unsurprisingly, to rise since the ACA (Obamacare) was adopted (https://ldi.upenn.edu/brief/effects-aca-health-care-cost-containment).

(Note: I know there is more to the ACA than the individual mandate.)

b. Have the government subsidize health care.

On an individual level, this might seem like a solution: if you are facing high health care costs, your problem is solved by having the government pay for (part of) your health care. But this can’t be the solution for society if society as a whole is facing excessive health care costs. This just redistributes the cost; it doesn’t reduce the cost borne by society overall.

In fact, basic economics tells us that if a product is subsidized, then (a) more people will try to buy it, and (b) the prices will rise. So this idea increases total costs.

If the supply is fixed (per section 2c above), the result will be that suppliers raise their prices so that the same number of people receive the good as before. We just spend more money for the same amount of the good.
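Continuing the same sort of toy model (made-up numbers again): with a fixed supply, a subsidy pushes the market price up by the full amount of the subsidy, patients’ out-of-pocket cost stays where it was, and the quantity of care doesn’t budge.

```python
# Toy model: demand q = 100 - (out-of-pocket price), supply fixed at 40.
# All numbers are illustrative assumptions.
FIXED_SUPPLY = 40

def market_price(subsidy=0.0):
    # Patients pay (p - subsidy); demand meets the fixed supply when
    # 100 - (p - subsidy) = 40, i.e. p = 60 + subsidy.
    return 60.0 + subsidy

for s in (0, 20):
    p = market_price(s)
    print(f"subsidy={s}: price={p}, patient pays={p - s}, "
          f"taxpayers pay={s * FIXED_SUPPLY} total")
```

In the sketch, a subsidy of 20 raises the price from 60 to 80: patients pay 60 either way, but taxpayers now also pay 800 for the same quantity of the good.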

c. Have the government pay for all health care. (“Single payer”)

Another proposal to shuffle around who pays the cost or how it is paid. In fairness, there is an economic theory whereby a single payer can reduce prices. It’s just the reverse of a monopoly: if you have a monopoly (single seller), you can raise the prices above what would be the competitive market level. If you have a monopsony (single buyer), you can lower the prices below the competitive market level. Both, by the way, result in a lower total quantity of the good being consumed, and lower economic efficiency (monopoly benefits the seller but harms the buyers by more; monopsony benefits the buyer but harms the sellers by more).
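The monopsony point can be made concrete with a toy pair of linear curves (illustrative numbers only, not an estimate of any real market):

```python
# Toy market: marginal value (demand) p = 100 - q, supply p = 20 + q.
# Illustrative numbers only.

def competitive():
    # Demand meets supply: 100 - q = 20 + q  =>  q = 40, p = 60.
    q = 40.0
    return q, 20 + q

def monopsony():
    # A single buyer facing supply p = 20 + q spends (20 + q) * q in
    # total, so its marginal expenditure is 20 + 2q. It buys until that
    # equals marginal value: 20 + 2q = 100 - q  =>  q = 80/3.
    q = 80 / 3
    return q, 20 + q  # it pays only the supply price at that quantity

qc, pc = competitive()
qm, pm = monopsony()
print(f"competitive: q={qc:.1f}, p={pc:.1f}")  # q=40.0, p=60.0
print(f"monopsony:   q={qm:.1f}, p={pm:.1f}")  # q=26.7, p=46.7
```

The single buyer gets a lower price, but note that the quantity traded falls too, which is the efficiency loss just mentioned.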

However, this theoretical possibility does not mean that things would in fact work out that way, in the United States as it actually is, with the federal government as the single payer. The U.S. government, as it happens, is extremely subject to influence from rent-seeking special interest groups, and is not extremely good at reducing costs or balancing its budget. Another factor is that, when the government is paying for something, many Americans have a tendency to spend as much as they can. Thus, the following are all possible results of a single-payer American system:

  1. Prices might be reduced, maybe by a lot, due to the state’s bargaining power.
  2. The number of health care providers might decrease because of (1). (When prices go down, fewer people want to enter the industry.) There would then be less health care available.
  3. There may be more rent-seeking lobbying from the medical industry, with unpredictable results. This could increase health care costs, as providers use political influence to get the government to cover things that private insurers would not have covered, or even to get the government to pay above market rates for procedures.
  4. Health care providers might start to recommend more, and more expensive forms of, medical care, and patients might agree, since the government is paying.
  5. The federal debt would of course explode, even more than it already has. Because there’s no way we’re going to raise taxes by trillions of dollars per year to pay for it.

. . .

Before proposing a reform, we should think about this: what is going wrong in the health care industry, as compared with normal industries?

In most industries, prices decrease relative to wages over time. Subjectively and anecdotally speaking, the products of most industries seem really affordable yet high-quality. (E.g., for $350, you can buy a machine that can literally perform a billion calculations a second, can store millions of pages of information, connects you effortlessly to a world-wide network so you can share your fake-nous thoughts, and other amazing things.) What is different in the health care industry, compared to the industries that are going well?

I’ve answered that in section 2. The answer is not “the industries where things are going well are run by the government” or “they have a single payer for all of their products” or “everyone has insurance”.

Notice how all of the reforms discussed in this section completely ignore all the points I raised about health care costs in section 2. It’s as if those were completely unknown or incomprehensible points. But they’re not; they’re totally obvious with a basic understanding of economics and basic facts about the American health care system.

4. Non-Stupid Ideas that at Least Acknowledge the Problem

What would a minimally smart person say who was trying to think about the problem, rather than trying to score points with know-nothing ideologues? You can doubtless find more developed and better answers than this, but the following is a start:

(a)

We should eliminate shadow prices. My proposed law: For any medical care accepted by a patient, the price of that care must be made available in advance, or else the patient is not legally obligated to pay. In the case of emergency care, for which there may not be time to discuss pricing information when the particular patient shows up, the information should still be published in a publicly available location.

(A legal rationale: you can’t claim that someone has implicitly agreed to pay some price, if there was no way for them to know what the price was. If patients have not agreed to pay the price, then they should not have to pay it.)

(b)

There should be less reliance on insurance. Insurance should only pay for large, unanticipated expenses, not routine medical care.

Also, insurance should be purchased by the individual, not the individual’s employer. (Take the money that employers are already spending on health insurance: they can just give that money to the employees, then employees can buy insurance themselves.) Then insurance plans could be expected to become better tailored to individuals’ desires. You also wouldn’t lose your insurance if you changed jobs.

(I don’t know exactly how these outcomes would be brought about.)

(c)

Remove the goddamned rent-seeking supply restrictions.

Supply restrictions, unfortunately, are among the most popular anti-libertarian laws. People love being fleeced by industries. Whenever you propose relaxing restrictions on the supply of some good, consumers immediately and indignantly start to repeat the rationalizations originally invented by the industry to serve the industry’s financial interests.

FYI, putting huge hurdles in the way of people providing a good does not generally result in higher quality. Many economic studies have been done of licensing requirements. To quote one review of the evidence, “most research does not find that licensing improves quality or public health and safety.” That’s not some libertarian propaganda piece; that’s a quotation from an Obama administration report on professional licensing laws (https://obamawhitehouse.archives.gov/sites/default/files/docs/licensing_report_final_nonembargo.pdf). Everyone who looks at it agrees, however, that these sorts of barriers definitely do increase prices.

The industries where things are going well — products are affordable, well-adapted to consumer needs, and there is frequent innovation — are not industries with large barriers to entry. They’re the opposite, areas where new entry is easy and frequent. And they do not have terrible quality. If anything, quality tends to be higher in the industries with lower barriers, because they are more competitive. Competition drives up quality. (The tech industry has little regulation and low barriers to entry; that’s why it has frequent innovation, high quality, and decreasing prices.)

(d)

We should reduce legal liability in medicine. Doctors should not fear multi-million-dollar lawsuits for honest mistakes.

. . .

None of suggestions (a)-(c) are even on the public radar screen. No political leader comes close to entertaining any of them. I assume the reason is either (1) that our leaders want prices to keep going up, because they’re in the pocket of the industry, or (2) that no leader (and hardly any citizens either) has given serious, informed, and non-ideological thought to the American health care problem — nor bothered to consult anyone who has.

As long as this continues to be the case, we’re just going to keep pouring more and more of our economy into this industry, without getting any more benefits in return.

Case Against Education

(Based on an FB status update from 11/6/2017.)

My comments on Bryan Caplan’s book, The Case Against Education, https://www.amazon.com/dp/B076ZY8S8J/. (Libertarian scholars like to bite the hand that feeds them. Compare Jason Brennan’s Cracks in the Ivory Tower, https://www.amazon.com/dp/0190846283.)

I read Bryan’s ms. before publication, and it was excellent – very clear, compelling, and interesting.

Thesis: The economic value of education is mainly due to signaling.

Explanation: Education pays off for students because getting a degree signals to employers that you have certain desirable traits, such as intelligence, perseverance, and the ability to follow instructions. It’s not that college gives you those traits; it’s just that those who lack them don’t complete a college degree. College generally does not teach useful skills.

Evidence: Caplan reviews a ton of scholarly evidence, which is collectively very compelling. I can’t review it all here. But a few interesting observations:

  • Education is one of very few products where the buyer doesn’t seem to want it. If you cancel class, the students aren’t angry; they’re happy. For what other product would the customer be happy if they didn’t receive it? The implication is that students don’t want to be educated; they’re not paying for knowledge.
  • Students hardly remember anything they learn in a typical college (or even high school) class. Six months after the class is over, they’re going to have forgotten most of what the course was about. (Exception: basic reading and arithmetic skills.) This is widely recognized. So it can’t be that the economic value of schooling is due to the valuable information one learns.
  • Just look at a typical college class. We teach all kinds of obviously impractical subjects. (Exceptions: engineering, business.)
  • A favorite claim of educators: “We don’t teach people what to think; we teach them how to think.” Problem: there is virtually no empirical evidence that we actually do this. People in educational psychology have studied “transfer of learning” – roughly, the extent to which students transfer lessons they learned in one context to a slightly different context; that is, the extent to which their learning generalizes to make them better at tasks they weren’t explicitly taught to do. The results are generally negative: transfer of learning typically just does not happen, as far as we can measure. Given this, the idea that we make people generally better at thinking is only a wishful assertion. And that is how the claims on behalf of college education generally are: hopes with no empirical support. Everything that we can measure says that schooling is (with a few exceptions) ineffective, so wishful defenders of schooling move to claims of unmeasurable, unobservable benefits.
  • Statistically, almost all of the economic value of schooling accrues at the end. If you complete 3.5 years of your degree program but don’t get the degree, you get little economic value (little increase to your expected earnings). You get the big bump in earnings when you get the diploma. This is hard to square with the theory that the economic value of education is due to learning useful information and skills (do colleges wait until the very end to confer the useful information and skills?).

Some potential implications:

  • a. More education won’t obviously increase productivity for society. Getting a degree increases your earnings, not by making you more productive, but merely by enabling you to outcompete other candidates for a job. If everyone gets more schooling, that will just raise the bar for what you have to do to edge out other candidates.
  • b. It doesn’t really matter (to the economic function of schooling, or what the students are really paying for) what we educators teach, or how well we teach the material. We only need to make it sufficiently challenging that it qualifies as a test of general intelligence, perseverance, and similar traits. (Whew! That makes me feel better for teaching lessons about brains in vats and runaway trolleys.)

My comment: Most of the above is summary of Caplan (except implication b above, which I think I added). But I think Caplan makes a powerful case. We educators, for obvious reasons, would not like to believe all this. But we should fight against our own biases to try to evaluate the argument objectively.

An objection to (a): Actually, signaling is economically valuable. You don’t have to be producing good things in order to be contributing to the economy; you could contribute by providing useful information about where good things are and thus where resources should be directed.

Analogy: Geological surveyors do not produce any natural resources; they merely identify where already-existing resources are located. But that is an extremely valuable service; we should not drastically reduce geological surveying activities upon realizing that they don’t physically produce the natural resources. Similarly, perhaps higher education’s signaling function is highly useful, since it prevents employers from wasting a lot of resources on employees who will not meet their needs.

Now, on the face of it, it seems that this function could be provided at much lower cost in time and money. But economists usually do not second-guess the market. The market has selected this method of sorting people, so maybe that means it is really the most efficient. If you think you can do it more efficiently, you can start a business implementing your idea and make a bunch of money. If no one is doing that, it may be because there isn’t really a good way of doing it.

This of course is not the defense of education that most educators would prefer. But it’s the best defense I can see of the economic value of college education that’s consistent with Caplan’s evidence.

Practical Deference

UT-Austin

I’m in Austin for the AGENT conference (Austin Graduate Ethics & Normativity Talks). I did the keynote address yesterday. Here is a summary of what I talked about.

1. Practical Deference as Solution to Disagreement

The Problem of Disagreement

In any human social group that is trying to cooperate for some common ends, it is almost inevitable that there will arise intractable disagreements about what course of action the group should follow. These disagreements may stop the group from undertaking any coordinated action, may induce group members to directly harm each other, and may waste resources. If there is no solution to the problem, it threatens to disrupt all social cooperation.

The Solution

Social groups have developed one extremely common type of solution. It is that the group members who disagree on the best course of action can nevertheless agree on a process for deciding on what to do. For this to work, three things are required:

  1. It must be easier to agree on what is a fair process than it is to agree on what is the right course of action.
  2. It must also be relatively easy to agree on what the process’ actual outcome is in individual cases. E.g., people who don’t agree on the right policy can usually still agree that taking a vote is a fair way of deciding, and can usually agree on what the actual outcome of the vote is.
  3. Group members must be disposed to defer to the group’s decision, once that process has been employed, even when they regard its outcome as the wrong decision. This deference includes obeying (not outright violating) the decision, as well as more broadly going along with the decision, which may require positively helping to implement it.

A Puzzle

There is a prima facie question about the last condition. How is it rational to defer to the decision that you, ex hypothesi, regard as wrong? We can even suppose that you know the decision to be wrong. Why isn’t this an objectionable demand to sacrifice your autonomy and personal integrity?

Thesis

I argue that there are good moral reasons for such deference, in most typical private groups. But these reasons do not apply to the state or “society” as a whole, so there typically are no such good reasons to defer to the state.

2. Why Defer: The Private Case

In most normal cases, members of some private (non-government) group have some obligation to defer to the decisions of the group. There are several reasons for this:

  • There may be an obligation to show respect to other group members and their judgment.
  • Group members often have a sort of implicit contract with each other, which requires going along with the group decision. This implicit contract is accepted by voluntarily joining the group. Failure to go along with group decisions can thus be seen as breaking faith with one’s fellow group members, and disappointing their justified expectations.
  • Going along with the group’s decision helps to maintain the group’s spirit of cooperation. Going against the group’s decision risks causing other group members to similarly fail to go along with group decisions that they disagree with, and risks causing a breakdown of cooperation.

Example: A philosophy department is hiring a new professor. The department has voted to rank candidate A over candidate B, and thus to make an offer to A, and then, if A turns it down, to give the job to B next. The dept Chair, however, reasonably believes that B is better than A, and that B would definitely accept an offer. The Chair has to extend the offer to A. He could, however, sabotage the hire. He could, e.g., give a negative impression of the department to the candidate, in talking to A over the phone. He could forget to mention some of the advantages of his department, while being admirably forthright about its disadvantages. These things would make A likely to reject the offer, so that the job would go to the better candidate, B. Is that what the Chair should do?

Of course the answer is no. The Chair has to do his best to recruit A. Trying to sabotage A would (a) disrespect the other dept members, (b) violate the implicit job requirements that he accepted in becoming chair, and (c) risk damaging the cooperative culture of the department.

3. Why Defer: The Political Case

I think the above are good moral reasons for deferring in most typical private cases, where the group you belong to is a private group. You might think this could be extended to the political case, so that we would have obligations to go along with the rules that are made by your government (perhaps only if it is a democratic government).

This would be a stronger obligation than merely obeying the law. Going along with the law may require actively helping to implement it. E.g., if you’re a government official, you might be tasked with arresting law-violators, prosecuting them, or punishing them. If you’re on a jury, you might be asked to convict them. Refusing to do these things would not be actually breaking the law, but it would be refusing to go along with the law.

I claim that the reasons in section 2 fail to apply in the political case. As a result, I think you have no significant moral reason, in typical cases, to defer to the policies made by the state.

Here is how the state (typically) differs relevantly from most private organizations:

  1. In the political case, you have about zero chance of changing the cooperative culture of your society. If, for example, you practice jury nullification, you really don’t need to worry that future juries will retaliate against you by convicting some innocent people, or acquitting some people who deserve to be convicted.
    • Exception: if you are a high-ranking government official, then you may have a good chance of changing your nation’s political culture.
  2. In the political case, the state has already broken its implicit agreement with the people, many times. Here are some of the legal doctrines the state has adopted:
    • Sovereign immunity: No one can sue the government, unless the government agrees to be sued.
    • Judge/prosecutor immunity: No one can sue a judge or a prosecutor for anything that they do in carrying out their job. E.g., you can’t sue a prosecutor even if he deliberately sends an innocent person to the electric chair.
    • Government agents have no duty to do anything for citizens. E.g., police have no duty to protect citizens; the Department of Social Services has no duty to protect children from abuse; etc. (See Warren v. District of Columbia, DeShaney v. Winnebago County, et al.)
    • The Constitution authorizes the federal government to do almost anything, because it authorizes Congress to “regulate commerce among the states” (article I, sec. 8), and this means the government can make any law that has an effect on interstate commerce. (See Wickard v. Filburn.)
    • My point about these examples is not simply that these are mistaken doctrines. My point is that these doctrines have no reasonable defense, are obviously in bad faith, and are obviously just invented to serve the state’s own interests. Thus, even if you think there is a contract between the state and the people, you should think the state has broken the contract, and therefore, you may break the contract too.
  3. The idea of an implicit contract with the state or with “society” is much less plausible than the idea that there is an implicit contract between a dept chair and the department, or between an individual member of some other voluntary organization and that group.
  4. Background legitimacy: Most private organizations are basically legitimate & voluntary organizations. But the state is an organization originally created by violence and exploitation, and still coercively imposed on people.
  5. Justice: Few of the mistakes made by private groups are actually major injustices. But nearly all mistakes made by the government are injustices, and many or most of them are major injustices.
    • To make a private case that was analogous to a typical political case, imagine again that you’re in a philosophy department, and you disagree with the department’s decision on some policy. This time, though, imagine that the department has voted to kidnap a student who has been complaining about her professors, and lock that student in the department basement for three years. Now do you have a moral obligation to go along with the departmental decision?
  6. Competence: In typical cases, the original decision-makers (voters) are utterly incompetent, much worse than the decision-makers in a typical private group. Most voters in the U.S., for example, don’t know the name of their Congressional representative. Most can’t name the form of government they live under (given the choices “direct democracy”, “republic”, “oligarchy”, and “confederacy”). Most cannot name the three branches of their government. I mention these things because they seem like the most minimal things you could know about the government. And the decision-making process in which voters engage routinely ignores obvious justice-based considerations. For instance, voters commonly ignore entirely the question of whether drug prohibition violates the rights of drug users.
    • To make an analogous case with a private organization, return to the philosophy department choosing a new hire. Imagine that most of the department members don’t even know the names of the candidates until they receive their ballots in the department meeting. While people are arguing about the candidates, most members are sitting there playing games or viewing cat pictures on their phones. Most of them don’t even know that they’re in a philosophy department; some think they’re in an English department, or a History department, etc. And after ignoring the arguments, the members vote in an obviously unjust way — e.g., they vote to give the job to Charles Manson, because the other candidate was black. In this case, do you have an obligation to respect that decision?

Conclusion

Even though we are often obligated to defer to misguided group decisions, in ordinary private contexts, we are not similarly obligated to defer to government-made laws in democratic (or other) societies.

Click-bait journalism is selfish and immoral. But it works.

Think of the epistemic environment as an intellectual commons. To get a deeper understanding of the world, it helps to have a wide range of ideas to consider, especially when arguments are given to support them. This helps us justify our beliefs, which tends to be good for everyone.

But some people are motivated to pollute the commons with false beliefs that aim to please an audience or promote a political outcome. Examples include dishonest political advertisements and biased journalism intended to provoke outrage. Each of us internalizes the benefit of adding a bit of epistemic pollution, but all of us share the costs. Journalists and activists know that motivated reasoning pervades politics, and that if they repeat dubious accusations or false beliefs that titillate their audience, the beliefs are more likely to stick, and the articles they write are more likely to be shared. Mike Huemer summarizes some of the reasons for this in a previous post.

Dishonest journalism is nothing new, and to some extent it just reflects what consumers of news want to hear. The benefits of making bombastic claims go to the writer of a story, and to those who get a dopamine hit from consuming it and feeling outraged. But the costs accrue to all of us in the epistemic commons, who find it harder to figure out what to believe.

When our indignation is activated, we feel like our lives gain meaning. Don Quixote is a comical character who symbolizes the rush we get from fighting injustice, whether real or imagined.

Don Quixote slaying giants

The social isolation of large, liberal societies tends to make this worse. Many people find meaning in political movements in the same way true believers find direction from religious institutions. As religious belief wanes in free and prosperous societies, religious wars give way to political battles. Journalists and academics assume the position of a secular clergy who take it upon themselves to burn the heretics who challenge sacred dogmas. Those of us who attempt to bend or break the Overton Window of acceptable belief are crucified by the Cathedral. Our ideas are attacked as wicked rather than engaged with. Journalists cash in on indignant mobs. They also create them.

There are many ways of writing click-bait journalism, and many sources of “evidence” to appeal to. Some organizations specialize in producing biased data that journalists can turn to when they want to assassinate a political opponent. For example, some journalists use the SPLC’s “hate map” to discredit political opponents whom they don’t know how to argue against. Everyone who’s studied the issue knows the SPLC is a rapacious and morally bankrupt organization that manipulates data to generate outrage and increase its donations.

The thing is, it works. The SPLC smears people like the former Muslims Ayaan Hirsi Ali and Maajid Nawaz as “Islamophobic” for criticizing female genital mutilation, and for extolling the virtues of Western societies.

There are many other examples online. A website called RationalWiki is an especially egregious offender, and it is – predictably enough – promoted by Google’s search algorithm. It provides an outlet for zealots to create hit pieces against people and publishers whose ideas they disagree with. Because it mimics the template of Wikipedia, which is considered somewhat credible by casual readers, journalists and activists sometimes use it as a source.

Clickbait journalism benefits those who create and consume it. But in the long run it harms all of us. When we’re confronted with hit pieces aimed at indulging our desire to be outraged, we should refuse to click, and targets should refuse to apologize. The only way to clean up the intellectual commons is to shame the mobs and protect their targets.

Research — Who Needs It?

Here’s the background for the title question. An enormous amount of research is being done by academics (myself included), in all manner of fields. This research is valued by the prestigious universities, which are all “research universities”. They generally expect their (tenure-track) faculty to publish, and to keep publishing year after year. And there is a good deal of support of various kinds for academic research, from universities and government and private donors.

It would be of interest to have some sort of assessment of the typical value of a piece of academic research. Is most research, perhaps, intrinsically valuable? Does it produce benefits for academics? Does it produce benefits for society? If it produces benefits, are they large benefits, or small ones?

I won’t comment on intrinsic value. As to the rest, I think the benefits of the vast majority of academic research are tiny at best, possibly negative, and much less than the costs.

1. The Research Status Quo

In case you’re not an academic and don’t know this, let me paint a brief picture of the academic research situation. (Some of this was touched on in an earlier post, http://fakenous.net/?p=768.)

a. Quantity

Start with the simplest feature: quantity. Now, I don’t know how much total research exists, but here’s what I know about my own field. I estimate there are 600 new academic philosophy articles and books written per week in the English-speaking world, or over 30,000 a year.* As this has been going on for some time, there are at least hundreds of thousands of philosophy books and articles, possibly millions.

*Basis: Many years ago, I read that the Philosopher’s Index, which indexed most English-language books and articles published in philosophy, received over 14,000 new records per year. Today, a search on Philpapers.org, which again indexes most English-language philosophy papers & books (including some unpublished mss.), finds 614 records added in the past week containing the words “a”, “the”, “of”, or “is” (restricted to professional authors only).

Philosophy, alas, is not the only field of study. I don’t know exactly how many fields there are, or how philosophy compares to the others in publication quantity. But I can tell you that my university offers, by my count, a total of 79 distinct majors. I assume that all or almost all of those represent fields with their own journals, publishing their own articles. If they were all as prolific as philosophy, that would be two-and-a-half million new academic articles and books a year.
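The arithmetic behind that estimate is easy to check. Here is a minimal back-of-the-envelope sketch; the only inputs are the figures already given above (the per-week PhilPapers count and the number of majors), both of which are rough:

```python
# Rough check of the publication estimate above.
# Inputs: ~614 new philosophy records per week (PhilPapers count),
# and 79 fields assumed to be roughly as prolific as philosophy.
records_per_week = 614
weeks_per_year = 52
fields = 79

philosophy_per_year = records_per_week * weeks_per_year   # just over 30,000
all_fields_per_year = philosophy_per_year * fields        # about 2.5 million

print(f"Philosophy alone: ~{philosophy_per_year:,} items/year")
print(f"All {fields} fields: ~{all_fields_per_year:,} items/year")
```

Running this gives about 31,900 philosophy items a year and roughly 2.5 million across all fields, matching the figures in the text.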

It is literally impossible to keep up with the literature of any academic discipline as a whole. And yet, in order to publish your own articles, you are generally expected to be familiar with the literature relevant to your topic.

The casualty of this is discussion of broad questions that bear on many other questions that we are interested in — in other words, the most interesting sort of discussion. If you try to write about a broad question that intersects with many other issues, then you multiply the mountain of papers you have to go through to be a responsible scholar.

b. Topics

Which brings me to the second typical feature of academic research: the topics that we research. To make it possible to stay reasonably current on the relevant literature, academics have divided their fields of study into increasingly narrow sub-fields, choosing the most microscopic little questions to focus on. Even then, staying current on the literature in your own sub-sub-field is challenging.

For example, a philosopher might work on free will, in which case that might be the only thing, or almost the only thing, that that philosopher ever writes about in his entire career. And the free will philosopher’s typical paper would not (as you might naively assume) address whether we have free will or not. There are too many papers and books already talking about that, so it’s almost impossible to publish anything straight-out addressing that question. Rather, a typical paper would say something like: “I argue that Smith’s latest response in the Northeastern Journal of Nitpicking Philosophers (2018) to Jones’ objection to consequence-style arguments for the incompatibility of free will and determinism does not succeed in refuting Jones’ objection. But I’m not saying Jones’ objection is right, or that other responses don’t refute it, and I’m not saying anything about whether free will is compatible with determinism, let alone whether we have free will.”

Again, I only know my own field. But I’m sure that other academic fields are focused on similarly tiny, recherché topics that it would be hard to get anyone outside the field to care about. E.g., if someone is studying literature, they’re probably writing papers on the use of arboreal imagery in the early writings of the 14th-century Ignoterran author Blabbius Obscurius.

c. Style

So the quantity of academic writings guarantees that almost all of them will be read by almost no one. The topics of these writings further guarantee that. In addition, though, even if you for some reason had a great interest in Blabbius Obscurius’ early arboreal imagery, there is one more obstacle to reading most academic work: the style of the writing. Most academic writing is designed to be a creditable substitute for Benadryl, for those who need help falling asleep at night.

I take an example from Steven Pinker, who found this sentence in an academic psychology article:

“Participants read assertions whose veracity was either affirmed or denied by the subsequent presentation of an assessment word.”

After some detective work, Pinker figured out what this meant: the people who participated in the study read some statements, each followed by the word “true” or “false”. (https://stevenpinker.com/files/pinker/files/why_academics_stink_at_writing.pdf)

Now imagine reading a whole 25-page article written in that style. Most academic authors either don’t care about readability, or don’t have a clue how to produce it. Now you know why academics rarely complain of insomnia.

Here, by the way, are the actual article titles from a recent academic journal issue:

“Local Grammars and Discourse Acts in Academic Writing: A Case Study of ‘Exemplification’ in Linguistics Research Articles”
“Bringing a Social Semiotic Perspective to Secondary Teacher Education in the United States”
“Investigating the Effects of Reducing Linguistic Complexity on EAL Student Comprehension in First-year Undergraduate Assessments”

d. Support

As I say, there is a good deal of support out there for academic research. To begin with, at research universities (including all the “good” universities), the tenure-track faculty are generally given lower teaching loads — like, half the number of classes that people have at teaching-focused schools — to enable us to churn out more research.

Then there are things like sabbaticals (where we get to take off either half a year or a year to go do research), research fellowships (where someone provides monetary support for academics doing research for some time period, free of teaching), and grants provided by government and private donors to pay for research that has nontrivial expenses associated with it. There are things like, e.g., the NEH, the ACLS, and the Templeton Foundation.

There are also prizes given out for academic research, like the APA book prizes and various article prizes.

There are also funds available from universities and donors for things like inviting speakers from another school to come to your university, or organizing academic conferences with speakers from various universities.

And of course, prestige in the academic world is basically 100% connected to research.

2. How Good Is Research?

I had to tell you all that to give you some basis for assessing how valuable the typical piece of academic research is. Or: how much does society need more research?

a. Background Concepts from Econ

Two important ideas from economics apply here:

(1) Opportunity Costs: When we evaluate anything, what matters is usually how that thing compares to its realistic alternatives. Academic research consumes resources. So, is this research more or less valuable than the other things that we could do with those resources?

(2) Marginal Costs and Benefits: Also, when we evaluate anything, what matters for practical purposes is generally the thing’s marginal cost or benefit. E.g., we have millions of academic books already, and they’re here to stay. The practically relevant question is: how much value is added when another book is produced, given the supply that we already have?

When I say most academic research has little to no value, you might misunderstand this to be saying that this work lacks cognitive value, value judged by purely intellectual standards, and thus, e.g., that I would give most published articles an “F” if I were grading them. Of course that’s false. Most of it is intellectually sophisticated and contains some rational reasoning contributing to some kind of knowledge. But the question is whether the addition of more such work to what we already have increases overall value to society, compared to other things we could be doing. To that, the answer is almost always no.

b. Opportunity Cost

Part of the cost of academic research is the money that goes to support it. Some of this money is charitable dollars that could of course go to support much more cost-effective measures to help the world. (See https://animalcharityevaluators.org and https://www.givewell.org for the best charitable causes.)

Here’s the other main cost: academic research absorbs the time and energies of hundreds of thousands of academics. These are generally the sort of people who could be among society’s most productive members. They tend to be

(i) highly intelligent; indeed, research universities house some of the highest-IQ people you’ll find anywhere; and

(ii) conscientious and hardworking — particularly those who are successful researchers at prestigious universities, which is an extremely competitive area.

Those are the main things that could make someone extremely productive and beneficial to society . . . unless that person has their prodigious abilities and energies diverted into playing insular intellectual games with other smart people.

So, to justify the amount of academic research we’re doing, we would have to say that producing more of the material described in section 1 above tends to be better for society than having these people working in other areas — whatever other professions the smartest, most hardworking people would be most likely to join if they weren’t academic researchers. Does that sound likely to you?

c. Diminishing Returns

Academic publications are not like most other goods, where the more of them we produce, the more people can enjoy the good. E.g., if we produce more cars, then (for a while) more people can enjoy the benefits of a car. Not so for, e.g., philosophy books: there is no need to write additional books in order to supply additional readers of philosophy, since every person can already enjoy every philosophy book that has been written. All we have to do is copy some existing books onto new consumers’ computers. As economists say, cars are a rivalrous good; books are non-rivalrous.

Furthermore, the existing books are already more than any human being could read in a million years. So no one has to write more books in order for more people to read books.

It’s a little bit like having a thousand people working in a factory every day manufacturing blue jeans for you, when you already have a pile of 10 million pairs of blue jeans. (And you can’t sell them.) The marginal value of jeans would have fallen to zero, if not below.

That being said, there are still reasons why writing a new book could add some value. Perhaps none of the existing books suffices to satisfy certain consumer demands — perhaps because some consumers have extremely specific tastes, or there are important truths that the existing books have not yet uncovered. (For example, see the books listed here: https://www.amazon.com/Michael-Huemer/e/B001H6GHNU. These obviously added enormous value to the stock of philosophy books.)

But the above reflections set a high bar. Imagine that you made a reading list containing only the best existing books for you to read. The list contains enough books to fill up all your reading time for the rest of your life. Let’s say, 500 books (I don’t suppose you’re going to read more than that). By stipulation, they’re the best books from the standpoint of whatever you want from a book.

Now, my writing a new book benefits you only if my new book is good enough to displace one of the books on that list. So, in this example, my new book would have to be one of the 500 best books ever written (from your standpoint). There are tens of millions of books in existence, so that is a really, really tall order.
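To put a crude number on how tall that order is: suppose (as an illustration, not a figure from the text) that there are 30 million existing books, and suppose, very crudely, that a new book is no likelier than a random existing one to suit a given reader. Then its odds of cracking that reader’s top 500 are tiny:

```python
# Illustrative odds that a new book displaces one of a reader's 500
# lifetime-best books. The 30-million figure is an order-of-magnitude
# assumption; the uniform-chance model is deliberately crude.
existing_books = 30_000_000
slots = 500

chance = slots / existing_books   # 1 in 60,000
print(f"Chance of making the list: {chance:.6%}")
```

On these assumptions the chance is about 0.0017 percent, i.e., 1 in 60,000. The caveat below softens this, since different readers want different things, but the bar remains very high.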

Caveat: These would be the books most suited to you as an individual reader, which are not necessarily the books you would take to be “best” in an objective or quasi-objective sense. E.g., you might prefer to read Goodnight Moon over Principia Mathematica, even though the latter is objectively better. Readers differ a lot in what they want, so there could be many books that are among someone’s top 500. So that makes it less ridiculous to think that I could now write a book that adds some value.

In light of the above points (even with the caveat), it is very plausible that most new academic books add either very little or no value at all to the existing stock of books.

Side note about science vs. humanities: It is plausible that there is a lot more room for useful research in natural science, because there really are a whole lot of extremely specific bits of scientific knowledge about tiny questions that have some practical application. The case is otherwise for knowledge whose main value is the satisfaction that readers get from learning it, which would be the case for most (putative) knowledge in the humanities. Social science is an in-between case.

d. Falsity

There are reasons why a piece of research might actually have negative value. The first reason is that, rather than expanding human knowledge, it might promote false ideas.

“Yes, but is that plausibly the case for most academic books and articles?”

Does the Pope wear a funny hat? Absofuckinglutely. Of course most philosophical articles and books are largely false. I know that because the vast majority of them conflict, in their main claims, with multiple other articles and books (usually by other philosophers). It has been said that the one thing a philosopher can be counted on to do is to disagree with other philosophers. That’s how we know that philosophers are usually wrong.

Well, maybe you could be reliably right if you mainly just make negative points, pointing out errors that other philosophers have made, not advancing your own theories. Sure, and a large amount of published philosophy is like that. But that’s also not a very interesting or valuable kind of philosophy, and it would be odd to try to justify producing 30,000 philosophy articles a year just to correct the errors in other philosophy articles. Unless we sometimes learn significant philosophical truths, other than truths about other philosophers’ mistakes, it’s hard to justify the amount of effort going into the activity.

But, again, when a philosopher advances important, substantive philosophical theses, they are usually wrong. To the extent that they persuade people, they will be worsening our understanding of the things that matter.

In case you think that this is all radically different from scientific research, which is presumably advancing our knowledge all the time, have a look at this famous article by John Ioannidis: http://robotics.cs.tamu.edu/RSS2015NegativeResults/pmed.0020124.pdf,
which explains why most published scientific research results are false. E.g., most of the time, when someone tries to replicate a published result, they can’t replicate it.

e. Distraction

Here is the other reason why research may have negative value: research that is of low to medium quality makes it harder to find the research that is of the highest quality.

Imagine there is a pile of hay with some needles in it. People like to find the needles. If you add a bunch of hay to the pile, you’re making things worse. Each strand of hay that isn’t a needle makes it harder to find the needles.

Similarly, each mediocre article makes it harder to find the best articles. Since there are already more top-quality articles than anyone could read in their lifetime, there is no need to add anything other than top-quality articles to the pile. But of course most new articles people write are only about average, not top-quality. So there’s no reason to add them to the pile.

3. The Economic Function of Research

If an alien showed up and looked at this system, they would find this all pretty strange. Why do we have hundreds of thousands of our most talented people every year spending their time churning out millions of badly-written papers, about unimportant topics, that nobody wants to read? Maybe the professors are doing it because they get paid to, or they enjoy it. But how on Earth is there money to hire all these people to do this?

Part of the answer is that the government subsidizes universities. But that really isn’t the main answer. The main subsidies are student loans and grants, which explain why we have a lot of teaching but don’t really explain why we’re doing so much research. Why not cut costs by only hiring instructors and adjuncts, who get paid a fraction of what tenure-track faculty earn and do twice as much teaching? Universities must think they are getting something really important out of all this research. What is it?

The answer is prestige. The prestigious universities are the ones with the “best” faculty, which means the most well-known researchers, the ones whose work is being talked about by the other researchers. Universities care about prestige because students want to go to the most prestigious university they can. In terms of the signaling model of education, we could explain this by saying that going to a top university signals that one is extra-smart and hardworking.

So if a university hires better faculty, it can perhaps, over a long period of time, increase its prestige, which attracts more students to apply, which means it can select smarter students to come, which means that its graduates are smarter, which further increases the university’s prestige and also enables the university to attract better faculty. It’s a positive feedback loop.

So the goal of a piece of research is to get people to talk about that research. The literature is a bunch of intellectuals all clamoring for attention.

Prestige is different from most goods. It is a positional good, and it is zero-sum. Universities commonly measure prestige by rankings, like this one, which is very widely consulted in philosophy: https://www.philosophicalgourmet.com/overall-rankings/
The total amount of ranking-related goodness is fixed. If I help my department go up in the rankings, this is by definition at the expense of other departments.

From the standpoint of society overall, resources spent on prestige-chasing are wasted. If everyone spent half as many resources trying to garner prestige, the rankings would be unchanged, and the overall quantity of prestige would be about the same.

4. Will I Stop? Not a Chance.

How can I go on and on about how we don’t need all this research, when I myself have already contributed more than my share to the mountains of contemporary philosophical research that hardly anyone reads? Given all I’ve said above, you might think that I should stop piling on.

Of course I have no intention of ceasing philosophical research. I’m going to keep turning out papers and books until I’m too feeble to type. There are a few reasons for this:

  1. Whatever you think about the above remarks, it still is clearly the case that I individually will be appreciated and rewarded for producing more publications.
  2. I love philosophy, and I like writing and communicating interesting ideas to others in a clear way. (E.g., this blog does not earn money or academic prestige. It’s just for communicating ideas to people.)
  3. My research is unusual in that it actually adds value to the existing body of literature, because it is better than almost all other research. In particular,
    • It is mainly true.
    • It is also reasonably well written.
    • It is also generally about important and interesting things.

Are my books actually among the 500 best books ever written? Well, not for all readers, of course. But for a significant portion of readers, yes, they are.

With that, I refer you to my list of books :): http://www.owl232.net/books.htm.

In Defense of Illegal Immigration

Today (Oct. 19, 2019), I’m participating in the Open Borders Conference at the New School in New York — https://freemigrationproject.org. In keeping with that, here is a post about why illegal immigration is cool. There’s no reason to respect immigration laws.

1. Ethical Questions About Illegal Migration

  • Many people — at least politicians during campaigns — appear to express some indignation and condemnation at people who have “violated our laws!!” by immigrating illegally. Are such attitudes appropriate?
  • Sometimes it is suggested that these illegal migrants should be punished for their presumed misdeed. Would that be just?
  • If you have the chance to migrate and thereby improve your overall welfare, but doing so would be illegal, do you have a reason to refrain out of respect for the laws of your desired destination country?

I suggest that the answer to all of these is “no”. Potential illegal migrants have no moral reason to respect immigration laws.

2. Things I Am Not Assuming

I am skeptical about political authority. I think that no state is legitimate, no one has political obligations, and no one has political authority. That stance would make it easy to answer the above questions. Since there is no obligation to obey the law in general (merely because it’s the law), there is also no obligation to obey immigration laws.

Unfortunately, not everyone has read my book, The Problem of Political Authority. Worse, even some who have read it don’t agree with it. So let’s bracket that. I mention the issue of authority only to make clear that I am not assuming any controversial stance on it.

I also am not assuming anything controversial about the justification of immigration laws.

In other words, my claim is that even if the state has authority, and even if the immigration laws are entirely justified, there is still no moral reason at all for potential immigrants to respect the immigration laws.

How can that be? Briefly, (1) even if the state has authority, that authority only extends to its own citizens; and (2) even if there are good reasons for the state to adopt immigration restrictions, those reasons are not reasons that the migrants themselves have. (You might have a reason to try to prevent me from doing something, even while I have no reason not to do it.)

3. Accounts of Authority

Basically, all extant accounts of authority imply that the state’s authority would not extend to potential illegal immigrants. I don’t agree with any of these theories, but let’s pretend that we think one of them is correct.

a. The Social Contract Theory

On this account, the state’s authority derives from a contract with its citizens. We have to obey the government’s laws, because we somehow (“implicitly”?) agreed to do so, in exchange for the state’s protection.

On that view, this authority would not extend to would-be immigrants. People living in other countries are not parties to this state’s social contract, even if the social contract exists. So they would not be obligated to obey this state’s laws.

You might think: “Okay, they’re not now obligated by our laws. But once they migrate into ‘the government’s territory’, or perhaps when they start benefiting from some government services, they will become so obligated (even if they are not citizens). At that point, they will acquire an obligation to deport themselves, in order to stop violating the law.”

Here is the problem. A contract involves some sort of exchange of value. When you sign a rental agreement, you agree to exchange some amount of money for a space to live in. When you sign an employment contract, you agree to exchange labor for money.

In general, it cannot be a condition on a contract that one party give up the very good that they are entering into the contract to receive. For instance, in a rental agreement, there couldn’t be a clause that says the tenant can’t use the apartment in any way. In an employment contract, the employee could not be required to never cash his paychecks.

Now, the central point of the social contract is for individuals to secure the benefits of peaceful cooperation in a given society. That is what the individual is supposed to be entering into the contract for. Therefore, the individual cannot be required, under that contract, to renounce living in that very society. So there could not be a social-contract-based obligation to deport oneself.

b. Fair Play

The Fair Play account says that, when there is a cooperative scheme that produces benefits for all (including you), you generally ought to do your fair share in supporting it; you should not free ride on the efforts of others. Thus, you should obey the law, because general obedience to the law in society is required for there to be peaceful and orderly social cooperation. If you decide to break the law while others are obeying it, then you’re seeking to free ride on the sacrifices of others.

But it is a condition on having a fairness-based obligation that one actually receive a fair share of the benefits of the cooperative scheme in question. If the scheme produces benefits for others but excludes you, then you are not obligated to support it (at least, not by fairness considerations).

Thus, there is a similar constraint here as in the case of the social contract: it cannot be part of doing your “fair share” in supporting a cooperative scheme that you exclude yourself from the benefits of that very cooperative scheme.

Example: say you’re on a lifeboat with several other people. The boat is taking on water and needs to be bailed. You should do your fair share by bailing out the boat. On the other hand, if someone suggests that you should do your fair share by jumping overboard to lighten the load on the boat, that’s ridiculous. You don’t have to do that.

In the case of our alleged political obligations, fair play might require one to obey almost all other laws. But it could not require one to obey a law whereby one would be entirely excluded from the society, for then one would receive none of the benefits of the social cooperation that one was contributing to. Hence, fair play cannot support an obligation to obey immigration restrictions.

c. Democracy

Some say that the state’s authority is based on the will of the majority. Provided that the state is democratic, it has been authorized by the people to make the laws that it makes. For some reason, we have to obey the will of the people, so we have to obey those laws.

This theory obviously implies that non-democratic governments lack authority. Slightly less obviously, I think it also implies that those who are excluded by the state from the democratic process are also outside the state’s authority. Defenses of democracy tend to emphasize the good of equality, which democracy is said to realize (see Christiano), or the good of well-conducted public deliberation (see Habermas, Cohen).

But the immigration laws are made without any participation from the main group that is affected by those laws — the potential immigrants. They are given no vote, and no say in the making of those laws. So even if you think the democratic process confers authority on its outcomes in general, the potential immigrants in particular have no moral reason to respect these laws.

d. Freedom of Association

People in general have a right to freedom of association, which includes a right not to associate with others whom we do not wish to associate with. Perhaps this can be used to explain why the state has the right to make immigration restrictions — perhaps this is an exercise of a right to freedom of (non-)association on the part of the nation’s existing citizens.

Two problems: first, the right to avoid unwanted associations generally applies to substantial relationships, not extremely vague and tenuous “associations”. For instance, if I don’t like you, I can refuse to have you as an employee, friend, or spouse. I cannot, however, refuse to allow you to live in the same neighborhood as me — that is much too weak of an “association” for me to claim rights over. I have a right against your moving into my house, but not against your moving into my neighborhood.

Second, the right to freedom of association includes a right to veto one’s own associations with others, but does not include a right to veto other people’s associations. E.g., if I don’t like you, I can, again, refuse to hire, befriend, or marry you. But I cannot demand that third parties not hire, befriend, or marry you.

Now, immigration restrictions, to be justified by freedom of association, would require rights over extremely tenuous, vague associations — such as a right not to merely be in the same vast geographical region as a certain other person — or rights over other people’s associations — such as a right to demand that employers who want to hire a person not hire that person simply because you yourself don’t want them to.

In my view, individuals have a right not to associate with immigrants or would-be immigrants, if they so choose. That is, one could refuse to hire, befriend, marry, etc., an immigrant. But one cannot justly demand that an immigrant not live anywhere in the same country as oneself (on someone else’s property), nor can one justly demand that someone else not hire an immigrant (using their own money).

4. Reasons for Restriction

a. Jobs

Why should the state restrict migration? Some say that the state should do this in order to protect employment opportunities for low-skill native workers.

This might count as a reason for the state to restrict, provided that the state is justifiably partial toward low-skill native workers. But notice that this reason would not apply to the immigrants themselves — the immigrants themselves have no reason to be partial to native workers. The immigrants rationally prefer themselves to be hired, just as native workers prefer themselves to be hired. If it’s not wrong for a native worker to take a job that an immigrant could hold, then it’s not wrong for an immigrant worker to take a job that a native could hold.

So, even if this is a good reason for the state to restrict, it isn’t a good reason not to illegally migrate.

b. Fiscal Burdens

Some say that immigration imposes fiscal burdens on the government because immigrants consume social services and pay lower-than-average taxes (because they tend to be poor).

This might be a reason for the state to restrict, again assuming the state is justifiably partial to native citizens. But again, there is no reason for the immigrants themselves to be partial to native citizens. If it’s permissible for native citizens to consume social services and impose fiscal burdens on the state, then this should also be permissible for immigrants.

Now, some would say that it is impermissible to take handouts from the government. In that case, both natives and immigrants should refrain from doing so. But they still don’t have to completely exile themselves from society.

c. Political Change

Some say that we should restrict migration to prevent immigrants from changing the political culture of our society, e.g., by voting for authoritarian policies and leaders.

Again, assume that this is a good reason for the state to restrict. It still wouldn’t be a good reason for the immigrants themselves not to illegally migrate. Of course, the immigrants, like all of us, should support freedom and not authoritarianism. If you have been voting for authoritarian candidates, what you should do is either start voting for good candidates or stop voting.

Many native-born voters support authoritarian values. No one thinks that they are obligated to exile themselves. Similarly, there is no reason why immigrants who support authoritarian values would be obligated to exclude themselves from society. In both cases, they should change their values to support freedom; or, failing that, not participate in the political process.

Of course, illegal immigrants can’t vote anyway, so they don’t have to worry about directly supporting bad policies. Perhaps they should also be careful not to speak in favor of bad policies.

Conclusion

Even if the state has good reasons to restrict immigration, and even if the state has authority (both of which points I doubt), there would still be no moral reason for the migrants themselves not to migrate illegally.

I discuss all this at greater length in my chapter in this book: www.amazon.com/Open-Borders-Movement-Geographies-Transformation/dp/0820354260/