In an earlier post, I explained my views on morality, including my belief in moral fictionalism. Moral fictionalism as defined there entails moral nihilism. Unsurprisingly, then, I also believe in moral nihilism. It goes without saying that moral nihilism is not a popular position. Instead, most philosophers (and people in general) are moral realists. Although the case for moral nihilism has been ably made by others, many moral realists resist it on the basis of a well-known counterargument. This counterargument can be called *the Moorean shift argument for moral realism* (MSAMR). In this post, I shall explain the MSAMR. Then, I shall show why it fails to refute moral nihilism.

### Preliminaries

To understand the MSAMR, it is necessary to introduce some basic terminology. Recall from my earlier post that moral propositions are sentences asserting that something has a moral feature or quality. An *objective moral proposition* (OMP) can be understood as a moral proposition that is meant to be either objectively true or objectively false. Here, to say that a moral proposition is objectively true or objectively false means that its truth or falsity is disagreement-independent: if multiple parties disagree about its truth or falsity, only one of them can be correct.^{1} Moral realism, then, can be understood as the view that there are some OMPs that are actually true.

Central to the MSAMR is the notion of the Moorean shift. For the purposes of this post, *the Moorean shift* refers to an argumentative maneuver in which the conclusion of an argument against some claim is rejected because the justification for that claim exceeds the justification for each of the argument’s premises.^{2} To see how the Moorean shift works, consider the claim that I have a hand.^{3} This claim seems very obviously correct to me on the basis of my sensory perception.^{4} I can, after all, see my hand (such as when I examine its palm), hear my hand (such as when it snaps its fingers), feel my hand (such as when it types), and so on.

But suppose I am now confronted with an argument against this claim. Specifically, suppose that according to the argument, I don’t have a hand because I can’t rule out the possibility that *everything* I perceive, including my hand, is actually an illusion. It is possible, for instance, that it is all the same sort of illusion created by the machines in the film *The Matrix.* And the possibility that I am in an elaborately crafted *Matrix*-like illusion is something I can’t rule out.

Now, this argument assumes as a premise that if I can’t rule out the possibility that something I perceive is an illusion, then I am not justified in refusing to believe that possibility. That’s the only way the argument, as stated, could ever show that I don’t have a hand. And if I’m not justified in refusing to believe that possibility, then I’m not justified in believing that I have a hand. After all, believing that I have a hand requires me to *not* believe that my hand is actually an illusion. But what’s the justification for this assumption: that if I can’t rule out the possibility that something I perceive is an illusion, then I’m not justified in not believing that possibility? That premise appears to be a very abstract principle that’s not obvious. In fact, it seems *far* from obvious. It’s certainly much less obvious than the reality of my hand is from my perception. So whatever justification exists for this premise will be weaker than the justification for the claim that I have a hand.

Suppose now that something like this were true for every premise of this argument. In other words, suppose that the justification for the claim that I have a hand exceeded the justification for every premise of this argument. Under these conditions, the justification for the claim would also end up exceeding the justification for the conclusion of the argument. Why? The thought here is that the justification for the conclusion of an argument is only as strong as the justification for all of its premises. But since the justification for the claim that I have a hand exceeds the latter, it exceeds the former also. This means that I’d never be justified in believing the conclusion of this argument. It is this reasoning that constitutes the Moorean shift.

### The MSAMR

We now can proceed to the MSAMR itself. The Moorean shift employed earlier was done to defend a claim about the physical world, namely the claim that I have a hand. The MSAMR contends that the Moorean shift can be employed to defend claims about morality too. Specifically, it contends that there are fundamental OMPs which are so obviously correct that we are never justified in believing the conclusions of arguments to the contrary. Since any argument for moral nihilism is an argument against all OMPs, this means one is never justified in believing in moral nihilism on the basis of such an argument.

We can see the MSAMR in action in the following example. Take the argument from disagreement, a common argument for moral nihilism. According to that argument, it is difficult to explain the existence of widespread, intractable, but sincere moral disagreement between (ostensibly) equally reasonable and informed parties if moral realism is true. After all, how is it that some parties are able to know OMPs while others do not or cannot? And when the parties in question are entire societies, why is it that they so often believe in or endorse OMPs that just so happen to support their own interests or institutions?

The *best* explanation of this disagreement, says the argument, is the nihilist explanation, which states that no OMPs are in fact true. If no OMPs are true, then morality is at best socially constructed by humans, akin to how laws in a polity or rules in board games are. And if morality is at best socially constructed, then it’s entirely expected that different societies would endorse different moral systems and that these systems would tend to support their own interests and institutions. Meanwhile, the question of why some parties know OMPs while others don’t would evaporate: under this explanation, there simply aren’t any true OMPs to be known.

By contrast, alternative explanations more friendly to realism are forced to come up with a story of some sort for why some parties are, despite appearances, not in fact as reasonable or informed as others. But invoking these stories ends up rendering these alternative explanations deficient relative to the nihilist explanation. For once such a story is woven into one of these alternative explanations, that explanation frequently fares worse than the nihilist explanation in terms of explanatory virtues like simplicity and degree of ad hocness.

Now, if an available explanation is the *best* explanation of a fact, then, if any available explanation is to be believed at all, it is *that* explanation and not the alternatives.^{5} Here this would mean not believing in realist-friendly alternative explanations.

So far, things don’t look good for the moral realist. But it’s here that the MSAMR comes into play. Take the OMP that torturing infants for fun is wrong. Most people not only believe this OMP, but also believe that it is manifestly obvious. Indeed, when analyzed alongside the argument from disagreement, that claim might seem more obvious than any of the premises of the argument. That it’s wrong to torture infants for fun, for example, might seem more obvious than the argument’s premise that adding a story to an alternative explanation makes it fare worse than the nihilist explanation.

If this is true for every premise of the argument from disagreement, then the Moorean shift can be employed to resist its nihilist conclusion. It would be just like how it was employed before to defend the claim that I have a hand. And this would mean that we are never justified in believing that the nihilist explanation best explains moral disagreement. And if the nihilist explanation isn’t the best explanation of moral disagreement, that’s one less reason to reject moral realism in favor of moral nihilism.

Moral realists making the MSAMR insist that something like this will happen for any nihilist argument. That is, for any nihilist argument, there is going to be an OMP so obviously correct that the Moorean shift can be employed to defend it against that argument. If they are right about this, then one can never justifiably infer moral nihilism from *any* evidence. And this would mean that moral nihilism is indefensible.

### Problems with the MSAMR

Fortunately for the nihilist, however, the Moorean shift upon which the MSAMR relies is an illicit maneuver. To step back briefly, the idea underlying the Moorean shift appears to be that if a claim has greater justification than each premise in an argument to the contrary, then it must also have greater justification than the argument’s conclusion. After all, if the claim has greater justification than each premise in the argument, then it has greater justification than even the most justified premise in the argument. And, the idea goes, since the most justified premise in an argument is its most probable premise, the conclusion can be no more probable than that premise. This much is supposed to be guaranteed by the fact that multiplying the probabilities of the premises to find the probability of the conclusion will never result in a value greater than the probability of the most probable premise.^{6}
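The arithmetic fact in this last step can be illustrated with a quick sketch in Python. The premise probabilities here are invented purely for illustration:

```python
# Hypothetical premise probabilities, invented for illustration.
premise_probs = [0.9, 0.8, 0.95]

# Multiply the probabilities together, as the rationale for the
# Moorean shift envisions.
product = 1.0
for p in premise_probs:
    product *= p

# Because every probability is at most 1, the product can never exceed
# the probability of the most probable premise.
print(round(product, 3))     # 0.684
print(max(premise_probs))    # 0.95
assert product <= max(premise_probs)
```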

But this rationale for the Moorean shift is flawed in at least two respects.

First, the fact that a claim has greater justification than every premise of an argument to the contrary does not mean that it has greater justification than all of the argument’s premises *taken together.* It sometimes happens that the premises of an argument mutually reinforce each other such that they lend greater justification to the conclusion than what one would expect if one were to merely add up the justification provided by each premise on its own. This is especially evident in abductive arguments. Unlike a deductive argument, where the conclusion is meant to be a logical consequence of the premises, an abductive argument offers a conclusion that is purported to be the best explanation of certain data contained within the premises.^{7}

Consider, for example, the case of a criminal prosecutor in a court of law aiming to show that the defendant committed a certain murder. Suppose that the three facts he can prove are that the murderer in question was a man, that the murderer wore a dress at the time of the murder, and that the defendant is the only male cross-dresser in town. Each of these facts on its own doesn’t provide much justification for the prosecutor’s conclusion that the defendant is the murderer. In fact, each fact by itself provides virtually no justification for the conclusion at all.

Consequently, if one were to sum up the justification provided by each fact in isolation for the prosecutor’s conclusion, it would probably be quite low. But when the facts are taken and analyzed *together* in an abductive argument, the opposite occurs: they provide resoundingly powerful justification for the conclusion that the defendant is the murderer. The same could hold true for an abductive argument for moral nihilism, such as the argument from disagreement described in the previous section. Such an argument may have a conclusion with justification far greater than not only each premise, but also the sum of the justifications of each premise.
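To make the point concrete, here is a toy sketch in Python. All of the numbers (the town’s population, the lone male cross-dresser, and so on) are invented for illustration; the point is only that facts which barely narrow the suspect pool individually can narrow it to a single person jointly:

```python
# A hypothetical town with invented numbers.  Each resident is a
# dictionary recording the two traits relevant to the prosecutor's facts.
town = (
    [{"male": True,  "cross_dresser": False}] * 4999   # other men
  + [{"male": True,  "cross_dresser": True}]           # the defendant
  + [{"male": False, "cross_dresser": False}] * 5000   # women
)

def suspects(condition):
    """Return the residents consistent with a given fact."""
    return [r for r in town if condition(r)]

# Fact 1 alone (the murderer was a man): thousands of suspects remain.
fact1 = suspects(lambda r: r["male"])
# Fact 2 alone (the murderer wore a dress): again thousands remain,
# since every woman and every male cross-dresser qualifies.
fact2 = suspects(lambda r: not r["male"] or r["cross_dresser"])
# Both facts together: only the town's lone male cross-dresser remains.
both = suspects(lambda r: r["male"] and r["cross_dresser"])

print(len(fact1), len(fact2), len(both))  # 5000 5001 1
```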

So, at least for abductive arguments, showing that a claim has greater justification than every premise isn’t enough to show that it has greater justification than the conclusion.

Second, the probabilistic reasoning used in the rationale is fallacious *regardless of whether or not the argument in question is abductive.* Even if an argument is *de*ductive, the probability of its conclusion usually cannot be calculated merely by multiplying the probabilities of its premises together. This is because such multiplication requires that the premises be probabilistically independent.

To understand this problem, it helps to look at how calculating the probability of a series of events proceeds. The probability of a series of events can indeed be calculated by multiplying together the probability of each individual event, but this guarantees an accurate result only if the events are independent of one another. For instance, the probability that I will flip heads on both of the next two coin tosses can be calculated by multiplying the probability that I flip heads on each toss. Since that probability is ^{1}/_{2}, the probability that I will flip heads on both tosses is ^{1}/_{4}, since ^{1}/_{2} × ^{1}/_{2} = ^{1}/_{4}. But this procedure works only because each coin flip is (ostensibly) independent of the other: what I flip on one toss has no bearing upon what I flip on the next.
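The contrast between independent and dependent events can be sketched in Python. The die example below is my own invented illustration:

```python
from fractions import Fraction

# Independent events: two fair coin tosses.  Multiplying the individual
# probabilities gives the right answer.
p_heads = Fraction(1, 2)
assert p_heads * p_heads == Fraction(1, 4)

# Dependent events: a single roll of a fair die.
#   A = "the roll is at least 4" (outcomes 4, 5, 6), so P(A) = 1/2
#   B = "the roll is even"       (outcomes 2, 4, 6), so P(B) = 1/2
# A and B overlap heavily, so naive multiplication goes wrong.
die = range(1, 7)
p_a_and_b = Fraction(sum(1 for roll in die if roll >= 4 and roll % 2 == 0), 6)

print(p_a_and_b)   # 1/3, not the 1/4 that naive multiplication predicts
assert p_a_and_b != p_heads * p_heads
```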

Similarly, multiplying the probabilities of the premises of a deductive argument to find the probability of its conclusion works only if the premises are independent of one another in an analogous way.

But in the potential deductive arguments for moral nihilism, it is not at all clear that their premises are probabilistically independent of one another in this way. Unlike coin flips, the states of affairs described by these premises are often very complex. Accordingly, it is entirely reasonable to expect that these states of affairs might interact with one another in correspondingly complex ways, through long but subtle causal chains. In fact, given the difficulties involved in detecting the presence or absence of these chains, it isn’t clear how their probabilistic independence could *ever* be demonstrated. This means that multiplying the probabilities of their premises to find the probabilities of their conclusions is an invalid procedure.

Indeed, it turns out that calculating a *precise* probability for the conclusion from the premises alone usually isn’t possible.^{8} However, it is possible to calculate a range of potential probabilities for the conclusion. As demonstrated in the appendix of this post, the uncertainty of a conclusion—that is, the probability that it is false—is less than or equal to the sum of the uncertainties of each individual premise.

And this fact is fatal to the MSAMR. For it means that even if a claim has greater justification (and, as a result, less uncertainty) than every premise in a deductive argument against it, it doesn’t necessarily have less uncertainty than the conclusion. Indeed, it doesn’t necessarily have less uncertainty than the conclusion even if it has less uncertainty than *all of the premises taken together.* The uncertainty of the conclusion, after all, is only *at maximum* the sum of the uncertainties of the premises. It could very well be far less than that. To truly threaten the nihilist, it must further be *demonstrated* that the conclusion is at or near the maximum possible uncertainty. Hence, the Moorean shift that the MSAMR requires rests upon a fallacy.
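The gap between the ceiling and the actual uncertainty can be illustrated with invented numbers. In this sketch, two premises are perfectly correlated, say because the second merely restates the first, so the conclusion’s actual uncertainty sits at half the ceiling:

```python
# Invented uncertainties for two premises of a deductive argument.
u_p1 = 0.3
u_p2 = 0.3   # P2 merely restates P1, so it carries the same uncertainty

# The ceiling: U(C) can be at most the sum of the premise uncertainties.
ceiling = u_p1 + u_p2            # 0.6

# But with perfectly correlated premises, the conjunction is no more
# uncertain than a single premise, so the conclusion's uncertainty is
# at most 0.3 -- half of what the ceiling allows.
u_conclusion_max = u_p1

print(ceiling, u_conclusion_max)  # 0.6 0.3
assert u_conclusion_max <= ceiling
```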

### Conclusion

The MSAMR therefore fails to refute moral nihilism. No longer can realists wave away the powerful arguments for nihilism by appealing to Moorean shifts. Of course, I have been granting to the realist throughout this post that the sorts of OMPs employed by the MSAMR really are that intuitively obvious. One can reasonably doubt this: there are good grounds for believing that we often unwittingly conflate emotional reactions towards certain things with intuitions about their having certain moral features or qualities. But that subject is beyond the purposes of this post and best left for another time.

### Appendix

Here, I shall explain how to calculate part of the probability range for the conclusion of a deductive argument, given the probabilities of the argument’s premises. The following explanation isn’t meant to be rigorous to the extent needed for a formal logical proof. But it should be clear enough for readers with even a rudimentary understanding of probability theory to follow.

The probability that any arbitrary proposition A is true is usually represented by the notation P(A). The *uncertainty* of A is simply the probability that A is false, and it can likewise be represented by the notation U(A). Since the probability that something is true and the probability that it is false must add up to 1,

U(A) = 1 – P(A).

Now, suppose that some proposition B is a logical consequence of A. Since the laws of logic and their application are (presumably) certain, no uncertainty can creep in from the inference from A to B. So the probability of B, whatever it might be, is at least as large as the probability of A. This means that

P(A) ≤ P(B).

From this result, an inequality in terms of uncertainties can be shown. After all, since

U(A) = 1 – P(A)

and, similarly,

U(B) = 1 – P(B),

it follows that

P(A) = 1 – U(A) and P(B) = 1 – U(B)

from adding P(A) and subtracting U(A) from both sides of the first equation, and adding P(B) and subtracting U(B) from both sides of the second equation. Substituting these new equations into the original inequality yields

1 – U(A) ≤ 1 – U(B).

Now, subtracting 1 from both sides of the inequality and then adding U(A) and U(B) to both sides results in

U(B) ≤ U(A).

So, the uncertainty of B can never exceed the uncertainty of A. This is a very useful fact to know. However, in most deductive arguments, the conclusion is a logical consequence of *multiple* propositions, not just a single one. No matter: similar reasoning can be used to bound the uncertainty of that conclusion in terms of the uncertainties of its multiple premises. First, for any conclusion C that is a logical consequence of premises P_{1}, P_{2}, P_{3}, . . . , P_{n}, C is also a logical consequence of the conjunction of the premises: P_{1} ∧ P_{2} ∧ P_{3} ∧ . . . ∧ P_{n}. And this conjunction is a single proposition. Thus,

U(C) ≤ U(P_{1} ∧ P_{2} ∧ P_{3} ∧ . . . ∧ P_{n}).

The uncertainty of P_{1} ∧ P_{2} ∧ P_{3} ∧ . . . ∧ P_{n} can then be bounded in terms of the individual premises as follows. Let Q be P_{2} ∧ P_{3} ∧ . . . ∧ P_{n}. Substituting this into the inequality above yields

U(C) ≤ U(P_{1} ∧ Q).

What, then, is the uncertainty of P_{1} ∧ Q? This uncertainty can be determined by thinking about how the probability of the *dis*junction of P_{1} and Q—that is, P_{1} ∨ Q—would be calculated. The probability that either P_{1} *or* Q is true must involve adding the individual probabilities of P_{1} and of Q together. But this isn’t the whole story. For if P_{1} and Q can both happen together, then the probability that they both happen is going to be counted twice by mistake in this calculation. As an illustration of this problem, consider the following propositions:

D_{1}: *On the next roll of the die, I will roll a 1 or a 2.*

and

D_{2}: *On the next roll of the die, I will roll a 2 or a 3.*

The probability of D_{1} is ^{2}/_{6}, and the probability of D_{2} is also ^{2}/_{6}. Adding them together yields ^{4}/_{6}. But this cannot be the probability of D_{1} ∨ D_{2}. The probability of D_{1} ∨ D_{2} is only ^{3}/_{6} because the probability of rolling a 1, a 2, or a 3 is only ^{3}/_{6}. So something went wrong. What went wrong was that the probability of my rolling a 2 on the next roll was counted twice. Specifically, it was counted twice because my rolling a 2 was described in both D_{1} and D_{2}. To correct for this, the probability of my rolling a 2 must be subtracted from the sum of the probabilities of D_{1} and of D_{2}. Performing this subtraction requires finding a way to express the probability of my rolling a 2 in terms of D_{1} and D_{2}. How might this be done? A reasonable (and, it so happens, correct) first guess is as the *con*junction of D_{1} and D_{2}: D_{1} ∧ D_{2}. After all, D_{1} ∧ D_{2} is true if and only if I roll a 2. So, we can conclude that

P(D_{1} ∨ D_{2}) = P(D_{1}) + P(D_{2}) – P(D_{1} ∧ D_{2}).
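This identity can be checked directly by enumerating the six equally likely outcomes of the die (a small sketch; the helper names are mine):

```python
from fractions import Fraction

die = range(1, 7)  # the six equally likely outcomes of a fair die

def p(event):
    """Probability of an event, by counting favorable outcomes."""
    return Fraction(sum(1 for roll in die if event(roll)), 6)

d1 = lambda roll: roll in (1, 2)   # D1: the roll is a 1 or a 2
d2 = lambda roll: roll in (2, 3)   # D2: the roll is a 2 or a 3

lhs = p(lambda roll: d1(roll) or d2(roll))             # P(D1 ∨ D2)
rhs = p(d1) + p(d2) - p(lambda roll: d1(roll) and d2(roll))

print(lhs, rhs)   # 1/2 1/2
assert lhs == rhs
```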

This can be generalized to the finding that, for *any* propositions X and Y,

P(X ∨ Y) = P(X) + P(Y) – P(X ∧ Y).

Returning to the analysis of P_{1} and Q and applying this discovery there, this means that

P(P_{1} ∨ Q) = P(P_{1}) + P(Q) – P(P_{1} ∧ Q).

This now can be put in terms of uncertainties. Again, it was shown that for any proposition A, P(A) = 1 – U(A). So, for P_{1} and Q, this means that

1 – U(P_{1} ∨ Q) = 1 – U(P_{1}) + 1 – U(Q) – (1 – U(P_{1} ∧ Q)),

which simplifies to

1 – U(P_{1} ∨ Q) = 1 – U(P_{1}) – U(Q) + U(P_{1} ∧ Q).

Subtracting 1 from both sides of the equation and then multiplying both sides by –1 yields

U(P_{1} ∨ Q) = U(P_{1}) + U(Q) – U(P_{1} ∧ Q).

The end result is now within reach. To reach it, recall that for any proposition A, U(A) ≥ 0. This is because uncertainties must be between 0 and 1, inclusive. So,

U(P_{1} ∨ Q) ≥ 0.

Since U(P_{1} ∨ Q) = U(P_{1}) + U(Q) – U(P_{1} ∧ Q), the right side of the equation can be substituted for the left side of the inequality to produce

U(P_{1}) + U(Q) – U(P_{1} ∧ Q) ≥ 0.

Adding U(P_{1} ∧ Q) to both sides of the inequality results in

U(P_{1}) + U(Q) ≥ U(P_{1} ∧ Q),

which is equivalent to

U(P_{1} ∧ Q) ≤ U(P_{1}) + U(Q).

So, the uncertainty of the conjunction P_{1} ∧ Q cannot exceed the sum of the uncertainties of P_{1} and Q individually. The uncertainty of Q can be expressed in more basic terms by representing Q itself as a conjunction of two conjuncts and following the same sort of steps from earlier until an inequality in terms of the uncertainties of each individual premise is produced. This will mean that

U(P_{1} ∧ P_{2} ∧ P_{3} ∧ . . . ∧ P_{n}) ≤ U(P_{1}) + U(P_{2}) + U(P_{3}) + . . . + U(P_{n}).

Since U(C) ≤ U(P_{1} ∧ P_{2} ∧ P_{3} ∧ . . . ∧ P_{n}), it follows that, finally,

U(C) ≤ U(P_{1} ∧ P_{2} ∧ P_{3} ∧ . . . ∧ P_{n}) ≤ U(P_{1}) + U(P_{2}) + U(P_{3}) + . . . + U(P_{n}),

which means that

U(C) ≤ U(P_{1}) + U(P_{2}) + U(P_{3}) + . . . + U(P_{n}).

This is the final and key result: the uncertainty of the conclusion of a deductive argument cannot exceed the sum of the uncertainties of each individual premise in that argument.
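As a numerical sanity check on this result (a spot-check, not a proof), one can sample random joint distributions over a few premises and confirm that the uncertainty of their conjunction, and hence of any conclusion they entail, never exceeds the sum of the individual uncertainties:

```python
import random

random.seed(0)
n = 3  # number of premises in this sketch

for _ in range(1000):
    # A random joint distribution over the 2**n truth assignments.
    # Bit i of an assignment encodes whether premise i is true.
    weights = [random.random() for _ in range(2 ** n)]
    total = sum(weights)
    probs = [w / total for w in weights]

    def p(event):
        return sum(pr for a, pr in enumerate(probs) if event(a))

    u_premises = [1 - p(lambda a, i=i: (a >> i) & 1) for i in range(n)]
    u_conjunction = 1 - p(lambda a: a == 2 ** n - 1)

    # The key result: U(P1 ∧ ... ∧ Pn) <= U(P1) + ... + U(Pn).
    assert u_conjunction <= sum(u_premises) + 1e-12

print("bound held in all 1000 sampled distributions")
```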

1. This additional terminology about objectivity excludes moral propositions that are meant to be merely subjective claims, which are not of interest here.

2. Many philosophers speak of the Moorean shift somewhat differently, referring to the maneuver by which one resists a skeptical argument in the form of a *modus ponens* by reversing it into a *modus tollens* and using it against the skeptic. This understanding of the Moorean shift is not the focus of this post.

3. The example of having a hand is taken from the British philosopher G. E. Moore, after whom the Moorean shift is named.

4. To be clear, the claim in question is not equivalent to the claim that *O’Brien* has a hand. The first-person pronoun “I” in the claim is meant to refer to whoever reads and utters the claim. Readers who have hands and can sense them will be able to follow the explanation of how the Moorean shift works in this example by inserting themselves in my place as they read it.

5. There is, of course, the option of not believing any of the explanations at all, perhaps because none of them meet a minimum threshold of plausibility or credence for belief. The point remains, though, that if one of them *is* going to be believed, it must be the best explanation and no other.

6. The probability of anything must be between 0 and 1, inclusive. And this means that multiplying any probability by another will never result in a greater value. This is because the highest probability possible is 1, and anything multiplied by 1 is itself. Multiplying it by any nonnegative number less than 1 results in a smaller value. So, the probability of the most probable premise puts an upper limit on the probability of the conclusion if the latter is calculated by multiplying the probabilities of all the premises.

7. A conclusion of an argument is a logical consequence of the argument’s premises if and only if the conclusion follows from the premises by the rules of logic. When it does, it is impossible for the premises to be true but the conclusion false.

8. There are some exceptions to this rule. One exception occurs when all of the premises of the argument are certain and each have a probability of 1. In that case, the conclusion is also certain and has a probability of 1.