Tag: induction

Induction – Part 1: The Problem

The so-called problem of induction, plainly stated, comes down to this: inductive reasoning appears to have no rational justification. Unlike deductive reasoning, which offers apparent justification in its formal structure, the form of an inductive argument can at best offer probabilistic confidence, and at worst no justification at all, if we examine its application in the context of, say, a causal explanation. To see why this is the case, let’s examine some formal examples.

First, let’s have a look at a deductive argument to see why it appears to be rational:

P1: Thomas is a Catholic monk
P2: Catholic monks believe in a triune God
C: Therefore, Thomas believes in a triune God

In this classic example of a deductive syllogism, the premises are propositional assertions that are independent of each other. That is to say, each is an assertion about an object to which a predicate coherently applies, and each could be uttered on its own, without reliance upon the other.

Yet, together, they share a common feature that links them in an important way. The shared feature is the property of “Catholic monkness”. In the first premise, that property is a predicate applied to Thomas. In the second premise, it is the subject of which belief in a triune God is predicated. Understood this way, you could abstract the assertions into a kind of formula:

T(homas) = M(onk) = G(od belief). Or, mathematically: a = b; b = c.

This is what is known as the “transitive law” of logic (which has an analogue in math as well). This property is what gives our conclusion its deductive weight. If Thomas is a monk, and monks believe in God, then obviously, Thomas believes in God. To put it in formal logical terms: If aRb and bRc, then aRc.
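The shape of this deduction can be made fully explicit in a proof assistant. Here is a minimal sketch in Lean; the names `Person`, `Monk`, `BelievesTriune`, and `thomas` are all hypothetical, introduced purely for illustration:

```lean
-- Hypothetical names introduced for this sketch.
variable (Person : Type) (Monk BelievesTriune : Person → Prop) (thomas : Person)

-- P1: Thomas is a Catholic monk.
-- P2: All Catholic monks believe in a triune God.
-- C:  Therefore, Thomas believes in a triune God.
example (p1 : Monk thomas)
        (p2 : ∀ x, Monk x → BelievesTriune x) :
    BelievesTriune thomas :=
  p2 thomas p1
```

Notice that the proof is nothing more than applying P2 to P1: the “linking” step is built into the very form of the argument, which is what it means for the conclusion to carry deductive weight.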

There are indeed linguistic and ontological questions in philosophy that call into question the nature of the transitive property and of logical necessitation, and by extension the rational basis for accepting this law as read. But those questions are beyond the scope of this essay, and beyond the scope of everyday usage. Suffice it to say that, relative to a valid deduction like this one, we have even less reason to claim rationality for our inductive conclusions, if folks like David Hume are correct. Let’s now juxtapose an inductive syllogism against it, to see the problem in a clearer light:

P1: On Monday, Thomas made his morning offering in the chapel.
P2: On Tuesday, Thomas made his morning offering in the chapel.
C: Therefore, on Wednesday, Thomas will make his morning offering in the chapel.

This is what is known as a simple “enumerative induction”, because it simply enumerates instances and infers a prediction from them. This form of argument suffers from two problems. First, although the enumerative premises are independent of each other (as in the deduction), they do not share a transitive property between them. Nothing “logically links” the enumerations. They are like random pebbles on the beach.

Next, if you look at the conclusion, it too appears to be nothing more than another enumeration, with one difference: it is a prediction. Our premises are statements about the past, and our conclusion is a statement about the future. What, in the two premises, compels the conclusion? What makes it true that Thomas will be in the chapel on Wednesday morning? Hume offered a tentative theory to explain this. He would have said that the “constant conjunction” of experiences of Thomas in the chapel each morning impresses upon us a psychological disposition to expect Thomas in the chapel on subsequent mornings. Perhaps this is so. But if it is, it renders inductive inference a wholly irrational phenomenon, because we derive the expectation not from our reasoning, but from phenomenal “impressions” that organically give rise to an idea of Thomas in the chapel on future mornings.

To be a bit more charitable, let’s restate this induction in a way that appears as deductive as possible:

P1: During his career as a monk, Thomas has always made his morning offering in the chapel.
P2: Presently, it is morning.
C: Therefore, Thomas will soon be making his morning offering in the chapel.

At first glance, this appears to contain a transitive relation between premises one and two, by way of the shared circumstance of the morning. But this is illusory. To see why, it will help to formalize this a bit more:

Let’s call “has made his morning offering”, “was-A”;

Next, let’s call “it is morning”, “is-B”;

Finally, let’s call “will soon be making his morning offering”, “will-be-C”.

Before I even formulate this, the problem should begin to whisper itself in how I labeled the terms. But here is the formula: was-A = is-B = will-be-C. Surely, it’s obvious by now: deduction deals with what is, and only with what is. It cannot cope with movement through time, because it is not possible to formalize epistemic certainty about the future. This syllogism is attempting to masquerade as a deduction, in order to lend deductive weight to modal ways of thinking. In other words, inductive inferences draw conclusions about what is possible, while deductive inferences draw conclusions about what is necessarily so. But the conclusion in our present argument is no more necessitated than in the first induction. Extensions have been made to classical logic in an attempt to deal with this problem (temporal and modal logics, for example), with varying degrees of success, but none of them is definitive. This, again, is beyond the scope of this essay. So, the problem of induction remains for us.
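The contrast between the two forms can be set side by side in standard notation. This is my own gloss, with the symbols M, G, O and the time indices introduced purely for illustration:

```latex
% Deductive form: the conclusion is licensed by universal instantiation
% and modus ponens. M = "is a Catholic monk", G = "believes in a triune
% God", t = Thomas.
\forall x\,(M(x) \to G(x)), \quad M(t) \;\vdash\; G(t)

% Inductive form: O = "makes his morning offering in the chapel";
% t_1, t_2 are past mornings, t_3 a future morning. No rule of
% inference licenses the step from premises to conclusion.
O(t_1),\; O(t_2) \;\therefore\; O(t_3)
```

In the deductive case the turnstile marks a genuine entailment; in the inductive case the “therefore” sign marks only an expectation, which is exactly the gap Hume identified.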

There is a second problem with our second induction. In the case of the deduction, part of what facilitates the transitive property, and imposes necessity upon our conclusion, is the definitional nature of our propositional assertions, and thus of the syllogism as a whole. Thomas must believe in the triune God, because by definition, Catholic monks believe in the triune God. But there is nothing in the definition of a Catholic monk that necessitates morning offerings in the chapel; nothing in the definition of morning that necessitates that Catholic monks will be in chapels; and so forth. Thomas could just as easily make his offerings in his billet, or in the garden, or, if he is ill, not at all, and he would still be a Catholic monk, and mornings would still occur (presumably).

So, the question becomes: is it only rational to believe things that can be derived from valid deductive arguments or definitional tautologies? Or, contra Hume, is a man reasonable for believing things that could only be, at best, probabilistically true? Intuitively, it seems insane to suggest that believing the sun will rise tomorrow is irrational. Scientists, for example, often take the “regularity of nature” as an ontological given, or axiom. They do this because they assume the truth of the optimistic meta-induction: inductive inferences have yielded many successful results in the past, so they will in the future. But this is circular reasoning. And yet, induction does seem to “work”, even in the small things. Each time I breathe in, my expectations are satisfied. Each time I put one foot in front of the other, on my way to the coffee shop, my foot lands on the pavement, and I move forward. Surely, this is a rational expectation?

But perhaps we are confusing the term “rational” with something like “sane” or “acceptable” or “appropriate”. These are value-laden, normative terms. You’re a “right-thinking” or “sane” person, we might say, to expect that your pencil will not suddenly turn into an inflatable raft, or that your girlfriend will not suddenly turn into a cucumber. This is clearly an appeal to a psychological state, rather than a reasoned worldview. So, perhaps there is something to what Hume was suggesting. In which case, our task is to figure out what sort of irrational beliefs are also acceptable or appropriate to have, and on what sort of standard we would base this distinction between acceptable and unacceptable irrational beliefs. The alternative is that we need to rationally account for expectation, which is to say, justify induction, in order to count inductive inferences among the rational set of beliefs, and escape the pit of irrationality we seem to be sliding into.

That justification will be the subject of my next post.

Haack and Dummett on The Justification of Deduction

Susan Haack nicely diagrammed the problem of circularity in her 1976 paper, “The Justification of Deduction”. In that diagram, she drew a direct parallel to the circularity of the inductive justification of induction, as outlined originally by Hume. Haack argues that justification must mean syntactic justification, and offers an illustrative example argument to show why semantic justification fails: namely, that it amounts to axiomatic dogmatism. Deduction is justified by virtue of the fact that we have defined it to be truth preserving.

Haack goes on to argue on syntactic grounds that justification is a non-starter on at least five other fronts, in addition to being circular. However, Dummett, in his 1973 paper of the same name, argued that the only kind of justification that makes any sense is semantic justification: first, because syntax necessarily relies on semantics for its meaning; and second, because the whole point of justification in the first place is confidence in the function of logic as a means of preserving truth values.

Still, Dummett was able to show not only that the justification of deduction was circular, but also that any attempt at it leads inevitably down one of the horns of Agrippa’s Trilemma. As Haack pointed out, we could simply assert the justification definitionally. But in attempting to avoid this dogmatic horn, Dummett points out, we have only two other options: the regress horn or the circularity horn. In the former case, this would mean crafting a set of rules of inference that could be used to independently justify deduction. These new rules would require a language and a theory of soundness and completeness all their own, which in turn would require justification, and the process would descend yet another level. In the latter case, two different sets of rules of inference might be used to justify each other, in perpetuity. Obviously, none of these options is satisfying.

Later in his paper, Dummett attempts to explain how a set of inferential rules might be justified by reference to a theory of meaning for the object language within which it is contained. Essentially, he argues that a soundness and completeness theory provides for a logic what a theory of meaning provides for a language: a functional understanding of its use. In other words, if we are to justify logic at all, we must first have a theory of meaning that shows how sentences can carry truth values. But this seems to me to begin the slide back into circularity, because, as Dummett goes on to explain, our definitions of true and false themselves determine the means by which we arrive at the meanings of the sentences judged by those definitions. All we’ve done is to shrink the circle.

Haack and Dummett continue the debate in subsequent papers, but reach no conclusion. I am inclined to wonder, myself, whether any of it matters. The justification problem in induction has been evident for nearly three hundred years, and the problem of deduction for around fifty. Yet somehow, both of these tools of inference continue to be used and taught, and both still seem to be yielding results that most of us find satisfying most of the time.

In a word, yes: some forms of justification are circular, and no form of justification actually appears to work. But perhaps the problem isn’t what we think it is. Perhaps the process of inference is somehow more fundamental than language. Perhaps it is a feature of consciousness that resides below the level of language, rendering it impervious to notions like justification. Or perhaps the justification of logic will someday come out of the neurological study of the brain, as an explanation of the evolutionary advantage of a linguistic mind to a primate that would otherwise have perished on the plains of Africa.