Category: logic

London School of Philosophy – Summer School Conference

I decided to spend three of my vacation days this week on the London School of Philosophy's "Summer School" conference. The theme of the conference was "Philosophy: Past, Present, and Future", and the talks focused heavily on broad questions: the nature of philosophy, its role and purpose in society, its place in history, its relationship to art and literature, and the implications these questions carry for the future.

Day One: The End In The Beginning

The first day carried us into the past, to ask the question "where did we come from?". The day opened with a lecture by Tom Rubens on Schopenhauer's World as Will and Representation, and ended with a lecture by Tim Beardmore-Gray on Nietzsche's theory of Eternal Recurrence. These two lectures functioned as profound book-ends, framing the picture of the entire day. The never-ending quest to understand ourselves, the universe within which we must take our place, and our significance as self-aware and self-examining creatures was taken up with great gusto by the German half of the Enlightenment project, and it provides a powerful signpost in the history of philosophy. Though the German outlook was deeply pessimistic in character, it was also deeply optimistic in its ambitions, and this sense of conflicting attitudes about the past, present, and future seemed to resonate throughout the conference.

The dualistic character of the day was made almost comical by the juxtaposition of Dr. Hurley's lecture on the history and development of theories of truth with Dr. Golob's discussion of the nature and evolution of stupidity. Questions of what we can justifiably say we know, when certainty transforms into absurdity, how we can tell the difference, and what implications this has in practice, are as old as philosophy itself. While stupidity might seem to be one of those common-sense "I know it when I see it" problems, Dr. Golob made it amusingly clear that the answer is not so simple after all. Likewise, the famously difficult problem of defining truth was humbly demonstrated by Dr. Hurley. For all our progress, it seems philosophy still struggles with the most fundamental questions.

Into this mix entered Descartes, and the problem of the self. Grant Bartley's lecture walked us through the core problem in Descartes' Meditations, the problem of what we can know without doubt, including ourselves, reminding us of the need for philosophy to continually renew and remake itself, and in the process to remake ourselves. As Iris Murdoch puts it in her essay, "The Idea of Perfection":

“I think it is an abiding and not a regrettable characteristic of the discipline, that philosophy has in a sense to keep trying to return to the beginning; a thing which is not at all easy to do…”

Jane O'Grady carried this notion forward in her outline of the Enlightenment project, showing its central characters to be the embodiment of what Iris Murdoch, again, described as the "two-way movement in philosophy… toward the building of elaborate theories, and… back again toward the consideration of simple and obvious facts…" Dr. O'Grady suggested that this movement is how best to understand the Enlightenment, and offered Theodor Adorno's Dialectic of Enlightenment as a guide to how the process might work.

This idea of a cyclical ebb-and-flow, or recreation, of philosophy and of the self reached its crescendo and resolution in the talk by Tim Beardmore-Gray on the Eternal Recurrence. It would be easy to view Nietzsche's idea as an attempt to achieve some sort of Transcendence without calling it Transcendence. But I think the more correct interpretation is one in which Nietzsche is trying to find a path to the resolution of all of philosophy's great dualisms. Self-creation and the embrace of the eternally returning past is not just an embrace of suffering for the sake of the good; it is an acknowledgment and acceptance of all the Heraclitean oppositions of existence and experience (an opposition itself), and an awareness of their necessity to each other. But this view carries us beyond what Beardmore-Gray is likely to assent to. My views are my own, of course.

Day Two: Transcendence, Order, Chaos, and Pessimism

The second day of lectures, addressing the question "where are we now?", opened with the triumphal optimism of Dr. Steinbauer's seminar exploring what philosophy is, and what it can be. At issue in this talk was nothing less than the nature of philosophy itself, and how we ought to regard ourselves, as philosophers, partaking of that nature. Are we scientists? Are we theologians? Are we something else entirely? Ultimately, Dr. Steinbauer eloquently argued that to be a philosopher today is to be a catalyst for understanding, both of the world and of ourselves. The right path, for Dr. Steinbauer, seems to lie somewhere between the ancient Greek love of wisdom and the modern mechanistic notion of philosophers as Conceptual Engineers.

As if on cue, John Heyderman then offered up an attempt to unify the notions of wisdom traditions and conceptual engineering, in the form of Spinoza's pantheistic monism. According to this view, mind and body are two sides of the same coin. Heyderman explained that Spinoza saw all of reality as a consequence of the activity of the mind of God. To put it more succinctly: the universe is an idea in the mind of God, and by analogy, the body of man is an idea in the mind of man. This, perhaps, takes Descartes' speculations about the sustenance of real experience (as a consequence of God's goodness) to another level, by suggesting that his goodness is not enough. It is his existence that makes all of existence possible: his existence as a mind, whose contents are ideas. God, on this view, could be said to be the ultimate conceptual engineer.

Professor Fiona Ellis, later in the day, seemed to borrow from Heyderman the basic idea of Spinoza, but painted the picture in a more naturalistic light. On her model, the universe of facts, the universe explained to us by modern physics and chemistry, is the correct view, but not the complete view. She described a reality in which various features of existence are co-mingled: Nature, Value, and God all count as aspects to be reckoned with, and modern science is only capable of addressing the first. The specter of the fact-value dichotomy and the is-ought problem looms large in this picture, and Professor Ellis struggled to elaborate a coherent reconciliation of these distinctions. In her own defense she invoked Levinas, who apparently argued that attempting to know God is attempting total control of reality, which is nothing less than deluding ourselves. Professor Ellis also argued inspiringly for a kind of knowledge of God as an experience had in essential relationships: something not quite "God is Love", but akin to the notion.

Returning to earth, Keith Barrett gave what I believe to be the highlight lecture of the conference. His was a tour de force defense of the idea of philosophy as a sense-making apparatus, extracting rational order from the chaos of existence around us. To open the discussion, Dr. Barrett provided two fascinating conceptions of order: one Transcendent, and one Immanent. The transcendent order comes to us from the ideal, and is realized by careful study and contemplation. This is the order of Plato's Republic or Augustine's City of God; it is static and uniform. The immanent order is not revealed, but discovered in patterns of essential characteristics made apparent through consistent observation. This is the order of Aristotle's Organon.

Dr. Barrett's bridging synthesis of the thesis of Transcendence and its antithesis of Immanence is the Enlightenment. Here, he argued, the modern natural philosophers took their inspiration from Aristotle, but their ideological commitments from Plato. The science of the Enlightenment, says Barrett, is not a genuinely empirical endeavor, because it goes far beyond the justifiable claims of sense experience and posits a completely new conception of Transcendent order in the mathematics of Newton and the abstractions of the pre-Socratic Atomists. This, coupled with the Judeo-Christian ethical tradition of the 17th and 18th centuries, forms the basis of the Enlightenment worldview and the construction of "The Rational Subject", as posited by Zaretsky in Secrets of the Soul. Dr. Barrett concluded his case by outlining Zaretsky's evolution of the self as a primary feature of the evolution of the Enlightenment, ultimately arguing, in a similar vein to Dr. O'Grady earlier, that the Enlightenment never really ended; it has simply evolved into new forms in the present. The Rational Subject of Descartes, in synchrony with this transformation, has itself transformed into the Situated Subject of Freud, and finally the Deconstructed Subject of Levinas.

Which brings us up to the (philosophical) present, and all the political chaos it presently entails. Mark Fielding's contribution to this effort was a view of the present political landscape through the lens of Hannah Arendt's famous "Truth and Politics" essay of 1967. Here, the opposition presented is between Truth as a value and Power as a value, and the implications of that choice. This talk was, by far, the most confounding to me. The argument seems to run something like this: politicians are expected, as a normative condition, to be liars. The polity loves to be lied to. Successful politicians, then, are the best at offering the lies the polity most wants to hear. The most successful liars, especially, are the ones best able to lie to themselves. However, so goes the rest of the argument, it is also the case that truth is necessary for making sense of the world, and power is the capacity to get things done in the world.
The implications of this paradox are peculiar. If the most successful politicians are indeed the most successful liars, then either those politicians are not actually getting anything done in the world and thus have no real power, or the truth is somehow not necessary for making sense of the world or getting things done within it.

It is utterly unclear how this conundrum is to be solved. But I would venture a guess that the first implication is the correct one, however counterintuitive. Political power is one of the most illusory powers on earth. It often seems as though politicians are getting loads of things done in the world, but when you watch what they do, rather than listen to what they say, you begin to realize that the world of politics is a great deal of sound and fury signifying nothing at all, and that the vast majority of politicians do not, in fact, get anything done in the world. This suggests that Arendt was right to recognize the lying, but failed to see its impotence, because she could not square impotence with the appearance of political power. Had she remembered her Plato, she might have recalled the story of Archelaus from the Gorgias, and Socrates' judgment of him as the least powerful man in Macedon, or Socrates' discussions with Glaucon and Thrasymachus in the Republic on the nature of the truly just man. Perhaps Arendt found these unconvincing, but if Fielding's reading is correct, it is hard to see why anyone would find her convincing.

The night was capped off by adding bitter herbs to this simmering broth of pessimistic cynicism. A four-man panel was convened to discuss "Philosophy in a Post-Truth Age" (whatever that means). The discussion centered primarily around "fake news", "free speech", and the overwrought political dialogue of the popular press. The opening speeches were awkward, curt, and uninteresting, and the room was more or less paralyzed by an overarching anxious malaise that prevented any real discussion from taking place. I left the conference on the second night wondering whether I should come back at all. The contrast with the morning's lecture by Dr. Steinbauer could not have been more stark in its pessimism, and I seriously questioned whether philosophy could have, let alone did have, any traction in the "real world". The chaos of the present had just about scrubbed away most of the enthusiasm for the orderly universe engendered over the course of the rest of the day.

Day Three: Idealism, Utopianism, and The Disintegrating Self
Day three of the conference purported to address the question "where are we going?", beginning with a deep discussion of who we are, and who we want to become. The final lecture of Thursday night, "Human and Robot Minds", by Richard Baron, and the opening lecture of Friday, "Philosophical Zombies", by Rick Lewis, examined the problem of consciousness from opposed internal and external perspectives. Robot minds, it turns out, force us to look inward to discover what matters most about being human, while zombies force us to look outward and face the possibility that there may not actually be anything significantly different. A key point raised by Lewis is Chalmers' conceivability criterion. Chalmers invents the zombie as a means of asking whether it is conceivable that a creature emptied of whatever it is that makes a human special, but behaving in every way the same, could fool us into thinking it was the same. This is the mirror image of the Turing Test, really, and we are now getting to the point where, in some settings, it is difficult to distinguish between a machine brain and a human mind. The point is that it is now conceivable that, in the distant future, philosophical zombies could exist, as robot minds. At that point, how would we tell the difference? And if we can't, then what is it, exactly, that defines the human experience? As dazzlingly futuristic and apparently escapist a topic as this seems, it is profoundly distressing, because it suggests that the mind-body problem resolves not into only mind, but into only body. Perhaps the hard determinists and physicalists are correct, and there are only bodies in motion. Maybe Sam Harris and his ilk are correct, and the self is just a complex delusion, required for the survival of the human organism.

But the intractability of subjective, first-person, conscious experience (what "it is like" to be "me") is a problem only for the empirical disciplines. Notice how all the tests require a third-person perspective, and the sort of data that cannot tell you what you want to know anyway. From the perspective of science, it is an unfalsifiable problem, and as such, not a scientific one. But it does not follow logically that the "self doesn't exist". This is a physicalist presupposition, similar to the old business management maxim: if it can't be measured, it doesn't matter. But conscious experience does matter. Of all our human characteristics, it is the one that seems to matter most. What is needed is a new toolset, or some new methodology, capable of accounting for subjective conscious experience. In the absence of such a thing, philosophers will have to continue to do battle in the realm of speculation, mytho-poetics, and moral philosophy.

The next round of speakers for the day all moved us beyond the self, pressing the problem of the relation between the individual and society. Christian Michel, John Holroyd, and Sam Freemantle each addressed this problem in ways that were simultaneously naively optimistic and yet weighed down by a skeptical wariness born of experience. Christian Michel offered a defense of Nozick's conception of a property-based anarchist utopia. Christian buoyed us with his deeply moving memories of post-WWII France and Charles de Gaulle, and provided a powerful critique of the property-less communist ideal of the French intellectualism of that time. But his exposition of the alternative, while enthusiastic and inspiring, was nonetheless unconvincing because of its superficiality. There are hundreds of critiques of Nozick's book, and numerous treatments of the problems of a stable property-rights regime in an anarchist world, that, once understood, render this dream somewhat stale. His lecture was especially poignant and frustrating for me, because I have my own experience of just this sort of enthusiastic zeal on first discovering the likes of Mises and Rothbard, Nozick and Nock, Friedman and Hoppe. There is no question that the nation-state, as we presently experience it, is not quite right; that something needs to change; and, if you're disposed to think as I do, that the most likely improvement is going to be in the direction of minimalism and decentralization.

John Holroyd, by comparison, was much more circumspect in his aspirations. Holroyd's talk offered interesting perspectives on the problem of localism and the sense of community in an increasingly globalized world. He highlighted Michael Ignatieff's book "Ordinary Virtues" as a possible approach to thinking about these problems; the book contains many allusions to earlier iterations of globalization (before and during WWI, for example). Next, he took on the question of "trans-humanism", the movement eager to expand the conception of the improvement of human life through medical technology to include things like cyborg augmentation (e.g., Neuralink) and life-extension. The problem here, for Holroyd, is how to maintain a sense of humanity in all this augmentation, and what to do about the unintended consequences such changes are likely to have on quality of life and our sense of fulfillment. Lastly, Holroyd wants to tie the answers to these problems to an education system geared toward more human contact. The idea seems to be that, as technology begins to crowd out more and more of our time and attention, a conscious effort will have to be made to incorporate a more "organic", local, human-to-human social culture.

Sam Freemantle's talk purported to address the question of the future of Liberalism. This may be the most important political question of our age. We are on the precipice of ending Liberalism as a political project, and without any serious consideration, that death is likely to come all too quietly. Dr. Freemantle's talk was useful, in the sense that he rightly pointed out many of the biggest problems posed by Liberalism's reliance on traditional Utilitarianism, and he rightly lamented several failed attempts to rescue Utilitarianism from itself (namely, in the form of Rawls' Theory of Justice). However, the talk left me painfully aware of just how much more work needs to be done to revive the Classical Liberal tradition in the mind of the popular demos. Dr. Freemantle offered only tantalizing sketches and suggestions, and while one can't be faulted for not having "The Answer" in a single one-hour talk, it remains to be seen whether anyone will ever have a sufficient answer. Perhaps that's not such a bad thing, but unless someone can explain what could come out of the slow death of English Liberalism, I remain fearful for the future on this front.

Putting It All Into Perspective
There were a few folks whose talks I did not mention here, but this should not be construed to mean they were not worth attending, or that they were not germane to the theme of the conference. On the contrary, it would be difficult to say that any of the talks "didn't belong". The problem, as I see it, is that the subject matter is so broad and so deep that finding ways to integrate it all into a summary such as this, and still do it justice, is a task for a much better writer than myself. There also seems to be an analogy here to the problem of the discipline of philosophy itself. Socrates takes Gorgias to task for being unable to answer the question of what subject rhetoric is "about". In a sense, philosophy itself suffers from this problem. Plato wanted to answer the question by asserting that it was Justice and Truth, as such. But we seem to have collectively rejected that conception throughout history, as simultaneously too narrow and too ill-defined. What philosophy is "about", and what it is "for", is not something I can tackle in this post. And perhaps it is too big a question for any one conference, no matter how thorough or lengthy.

As is the case with most philosophical inquiry, this conference generated more new unanswered questions than it answered. Some argue that philosophy is a tool for sense-making, finding the rational order in the chaos of existence, or seeking understanding. Indeed, it seems even I made overtures to such an explanation earlier in this post. But I think now that maybe the main job of philosophy is not so much "sense-making" as it is discovering what the right questions are, in any given age. This, it seems to me, is a task that is needed now more than ever. We are awash in a sea of noise, from the internet, from the political sphere, and from our various social spheres. One good question can pierce that noise, like a siren in the fog. If this conference has managed to accomplish that, then it was well worth the effort to organize, and well worth the effort to attend. I have indeed heard several sirens over the course of the last three days, and so count this conference as a rousing success.

Book Review: The Art of The Argument, Stefan Molyneux

This weekend I had a little extra time on my hands, because of the bank holiday. It's been quite a while since I've looked at any work by the growing cadre of freelance internet philosophers. So, I decided to have a look at the latest offering from Stefan Molyneux. Not a man to shy away from dramatic overstatement, he titled the book "The Art of The Argument: Civilization's Last Stand".

The basic thesis of the book is that “sophists” – described as those who manipulate language and appeal to emotion to gain power for themselves – are undermining the basic capacity for good people to negotiate terms amongst themselves in good faith, and that without this capacity to engage in rational debate, civilization itself will descend into a chaos of brute force misery and destruction. He has taken it to be his task, then, to recruit and educate the new generation of soldiers in the war of the rational against the “relativist” and the “sophist”, and to train them up in the art of ‘The Argument’.

Some have confused the purpose of this book because of its title. Several reviewers on Amazon took it to be an attempt at a layman-accessible textbook or tutorial, and heavily critiqued the book in ways that, though largely correct, are far too stringent for a polemical tract of this kind, and fall directly into the trap Stefan sets for them in his preface (hilariously titled "Trigger Warning"):

’The Art of the Argument’ is an outright battle manual, not a prissy abstract academic paper… As we approach Western Civilization’s last stand for survival, loftily lecturing people on arcane terms is a mere confession of pitiful impotence…

That ought to give some context as to what to actually expect from this book. Stefan thinks he's distributing a basic survival manual in a state of impending cultural apocalypse (cue the picture of Patton standing in front of the flag). What of those who actually care to be precise and methodical, and who try to practice a little epistemic humility? Well, Stefan just thinks they're "whining", and "turning logic into wingdings".

I Am Absolutely Certain

Still, precision and clarity are precisely what one would want if one were arming people for an 'intellectual battle', and it is true that his explanations of deduction and induction are rushed straw men that, at times, are incoherent or just plain wrong. He tells readers he is equipping them with broadswords, but hands them broom handles instead. We'll get to examples of all this shortly, but first, a note about Stefan's main object of un-ironic attention in the first part of the book: absolute certainty. Unlike most of us, who've come to understand that such a thing probably doesn't exist, and that believing one has obtained it is dangerous to the point of precipitating wars and genocides, Stefan has come to see absolute certainty as a special place one can go to escape the "relativists":

if you surrender to the peace of absolutism – if the premises are correct, and the reasoning is correct, the conclusion is absolute and inescapable – you will quickly find it a beautiful place to be, and that relativists are trying to deny you the peace, Zen, and beauty of the paradise called certainty.

Rather than understanding, self-knowledge (something he used to talk a lot about), curiosity, or mindfulness, it is the absolute certainty of deductive rigor that will get his readers to the truth, and it is absolute certainty that will make his readers the winners of The Argument.

It is this fixation that sets the tone for the opening explication of deductive and inductive reasoning. He rightly describes inductive reasoning as the method of reasoning to probabilities, and deductive reasoning as the method of reasoning to certainties. But because certainty is king, inductive reasoning plays only a secondary, submissive role in Stefan's jungle story known as The Argument, and he equates probabilistic thinking simply with 'rank relativism':

A predator must be absolute in its reasoning. The lion must correctly identify and stalk the zebra, must calculate speed and interception without error, must attack and bite accurately, and must persist until the prey is down. All this must serve the conclusion: the meal. However, prey has a different set of calculations because a predator can see the prey, but the prey usually cannot see the predator – at least until it is too late… Dominant life forms revel in absolutes and fight hard against any encroaching fumes of rank relativism. A tiger cannot hunt if it doubts the evidence of its senses. The life of a zebra is a life of doubt, of fear. [emphasis added]

This zeal for absolute certainty traps him in something of a bind later. When describing the scientific method, he has to characterize it as fundamentally deductive:

The Scientific Method is absolute – deductive – but individual hypotheses are usually conditional… inductive reasoning must be subject to the absolutes of deductive reasoning…

While it's true that deductive reasoning plays a significant role in evaluating hypotheses and the research products of scientific disciplines, it is wrong to assert that deduction is primary in all cases. Deduction and induction play complementary roles in the methods of science, and which has primacy depends on the method and the context (though, for Stefan, The Scientific Method is just one thing).

Stefan says, "All valid hypotheses must conform with – and predict – empirical observations". Embedded implicitly in this assertion is an idea never explicitly referenced, but clearly implied by his rhetoric about the scientific method. He wants to use Popperian falsificationism as a proxy for deductive certainty. While it's true that Popper sought a way to give scientific conclusions a certainty akin to that of deductive arguments, he would never have pretended that falsification was equivalent to deductive certainty. The point was not to inject the absolutism of Augustinian faith declarations into scientific conclusions. Rather, it was to reduce the potential for catastrophic error: a brick wall into which Molyneux seems determined to drive himself. All of this effort comes on the heels of labeling deductive reasoning "alpha" and inductive reasoning "beta". He needed a way to rescue sissy science from the beta-cuck basement; and the way he does it is by making it the twee Robin beside the manly Batman of deduction.

But why is absolute certainty so important to Stefan? Because, for him, no rational action is possible without it:

The lion stalking the zebra is engaged in proactive behavior, and thus, by initiating the encounter, is in far greater control of the variables… Initiating action requires the certainty of deductive reasoning, and control over variables increases that certainty… The pursuit of the lion is the initiating action, the flight of the zebra is the reaction.

Deductive lions are proactive, and inductive zebras are reactive. Neither acts at all without having achieved the absolute certainty of empirical verification. But is this actually how we act? I would argue that it is not. There are many things we do, day to day, without the absolute certainty of a deductive conclusion. In fact, most things we do are this way. He offers the example of deciding to bring an umbrella; one could as easily imagine deciding which arguments to deploy in a debate. The fascination with certainty also seems to run counter to Stefan's commitment to free will. The kind of certainty he describes could easily be imagined as the kind that results in perfect prediction (something akin to what he says above about hypotheses). Does this not imply some sort of threshold determinism? Given this, why, if I were an adherent of some common-sense conception of free will, would I want to believe this was the only way I could act? Buried in this fixation is the need to be morally justified in order to act. For Stefan, acting without certainty is acting without the necessary moral authority. To act instead, as most of us do, on varying degrees of confidence in beliefs, is moral corruption. He needs to be certain, because he needs to be good. If I am absolutely certain, then your condemnations of me are like arrows bouncing off a tank.
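To make the point concrete, here is a toy sketch in Python of the umbrella decision (every number in it is invented for illustration): a rational action taken on a mere degree of confidence, with no deductive certainty anywhere in sight.

```python
# Toy expected-cost model of the umbrella decision (all values hypothetical).
p_rain = 0.3        # my confidence that it will rain: not a certainty
cost_carry = 1.0    # nuisance of carrying an umbrella all day
cost_soaked = 10.0  # misery of getting caught in the rain without one

expected_cost_bring = cost_carry            # pay the nuisance regardless
expected_cost_leave = p_rain * cost_soaked  # risk-weighted misery

choice = "bring it" if expected_cost_bring < expected_cost_leave else "leave it"
print(choice)  # "bring it": a rational act at only 30% confidence
```

Nothing in this little calculation is deductively certain, yet the action it licenses is perfectly rational, which is all the rebuttal the lion-and-zebra picture seems to need.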

What’s The Argument?

In addition to the poor analogy of lions and zebras, and the failure to provide a stable definition of truth (or 'virtue', or 'happiness', or a half-dozen other things), Stefan never takes the time to explain what propositions are, or what makes them a proper part of an argument. This, to me, seems like it would not be too big a leap of effort, even for his readers. Clearly, he knows what propositions are, because he provides lots of them in this book. But he is terribly inconsistent about it. At one point, late in the book, he even seems to confuse validity and truth, and incorrectly marks out a single proposition as an argument:

‘Ice cream contains dairy’ is an argument, since it claims to describe a property objectively measurable and testable…

Being "objectively measurable and testable" does not make something an argument by even the most rudimentary general definition: a 'collection of reasons supporting a conclusion'. What's worse, it doesn't even meet Stefan's own initial definition, an argument being:

an attempt to convince another person of the truth or value of your position using only reason and evidence.

All we have here is an asserted conclusion in the form of a subject-predicate proposition. There are no reasons supporting it, and no evidence offered to 'verify' its 'objective reality'. Even if we take the colloquial presumptions, and accept that the subject 'ice cream' does refer to something in reality, and that the predicate 'contains dairy' accurately modifies this subject with, as he puts it, "a property objectively measurable and testable", it still remains that a measurement must be made, and the result added to this proposition, in order to make it an argument. So, perhaps something like:

  1. Ice cream is made with milk
  2. Milk is a dairy product
  3. Therefore, ice cream contains dairy

Note that this is a standard example of the transitive property applied to the propositions of a logical argument. If Stefan were trying to outfit his army with a sharp argumentative blade, then this was definitely a missed opportunity.
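For readers who like to see the machinery, here is a minimal sketch in Python (the relation names and the rule are my own toy, not anything from the book) of how such a chain of premises yields its conclusion mechanically:

```python
# Minimal forward-chaining sketch of the ice-cream syllogism above.
# Premises are (subject, relation, object) triples; the rule says:
# if X is made with Y, and Y is a dairy product, then X contains dairy.
premises = {
    ("ice cream", "is made with", "milk"),
    ("milk", "is a", "dairy product"),
}

def conclusions(premises):
    derived = set()
    for (x, rel, y) in premises:
        if rel == "is made with" and (y, "is a", "dairy product") in premises:
            derived.add((x, "contains", "dairy"))
    return derived

print(conclusions(premises))
# {('ice cream', 'contains', 'dairy')}: the conclusion, derived from reasons
```

The point of the sketch is simply that an argument has moving parts: premises, a rule of inference, and a conclusion that falls out of them, which is exactly what the bare assertion 'ice cream contains dairy' lacks.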

A bit later, he wants to say that 'inequality is bad' is not an argument, and he tries to sustain this claim by way of his newly minted definition of an argument (its needing to be "objectively measurable and testable"). But rather than argue that "bad" is not "objectively measurable", which would be the obvious thing to do given the new definition, he says this is because "bad" is a false substitute for a preference claim, e.g., "I don't like inequality". But this only makes his explanation inscrutable. Surely, I can objectively measure a man's preferences. Even if we reject self-reporting as acceptable, one could still measure pleasure responses neurologically to obtain the truth of his statement, and in doing so show that "bad" is an acceptable substitute for "dislike".

But it turns out that's not why Molyneux makes this turn in the story. Instead, he wants to lodge an entirely new complaint: that personal preference isn't a reasonable standard for moral judgment. On this point, I might be in agreement (were I to see an argument), but the problem is that it's not germane to the explanation of what is and is not an argument. He's lost sight of the form of his argument, because he's utterly distracted by the content. Perhaps those "wingdings" would come in handy about now?

Getting An Ought From An Ought…

Now we move beyond the logic lessons, and on to some specific content problems with this book. There are dozens of inaccuracies, exaggerations, and hyperbolic misreadings littered across its pages. I am going to focus on just three instances. First on the list (and the most challenging to untangle) is his ham-fisted attempt at a refutation of Hume's is-ought dichotomy. He states Hume's case this way:

David Hume, the famous Scottish philosopher… introduc[ed] the concept of Humean scepticism, or the idea that you cannot get an “ought” from an “is.” While it is true that cutting off a man’s head will kill him, there is nothing in the basic biology that tells us we ought not to do it: in other words, there is no morality in physics.

This is a common simplification of the is-ought dichotomy, and it suffers from the usual problem of misunderstanding Hume's logical point as a reification problem (a question of whether "moral" properties are "real"). His rebuttal to this formulation amounts to two objections. First, predictably, that the is-ought problem is a non-problem ("irrelevant", in his words):

There is no such thing as logic in material physics either, but we do not think that logic is unnecessary or irrelevant or subjective.

This argument fails because it doesn't actually prove the case of irrelevance. Rather, he beats down the straw man of reification. There is no "logic in material physics" (by which he means 'physical reality'), because logic (loosely speaking) is a set of rules defining a means of describing certain features of physical matter (as in Aristotle's three laws). Likewise, there is no "morality in material physics", because morality (loosely speaking) is a set of rules defining a means of evaluating certain features of human character and behavior. The problem is to be found not in where any properties lie, but in those two words, "describing" and "evaluating", and their usage in arguments. Molyneux is aware of this difference. It is a key component of his opening claims about arguments. We'll recall that there are two kinds, according to him:

Truth arguments aim to unite fragmented and subjective humanity under the challenging banner of actual reality… Value arguments aim at improvements in aesthetic or moral standards… A truth argument can tell us who killed someone. A value argument tells us that murder is wrong.

So it's clear that we have two different categories of argumentation, that they need to be accounted for (indeed, justified) independently, and that they then need to be reconciled. Which gets us to Stefan's second objection:

…considering Hume’s argument that you cannot get an “ought” from an “is,” we can easily see that the mirror of The Argument destroys The Argument. If we cannot get an “ought” from an “is,” then anyone who tries to argue that we can is wrong. In other words, we “ought not” get an “ought” from an “is.” Arguing that we cannot derive universally preferable behavior from mere matter and energy argues that it is universally preferable behavior to not derive an “ought” from an “is.” If we cannot derive an “ought” from an “is,” this means that we can derive an “ought” from an “is,” which is that we ought not try it: a self detonating argument.

There are two major problems with this argument. First, contrary to popular misconception, Hume never actually asserted that one cannot derive an ought from an is. Second, Molyneux exemplifies in his prose precisely the problem that Hume was actually describing in his Treatise. Let's take a look at Hume's actual words:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprized to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. (Treatise 3.1.1)

This may seem too subtle for a general overview, but Hume is not saying you "cannot derive an ought from an is". He's saying exactly what I said above: there appear to be two categorically different kinds of reasoning, and authors mix them in their writings without explaining how the relations work. The relation is simply assumed, without justification. That is a problem that is hardly irrelevant to philosophy. Here is a rudimentary example:

  1. Some humans go hungry in winter
  2. Those with food ought to feed the hungry in winter

Just like Stefan's example of murder above, we have one proposition that is in the 'descriptive' category, and one that is in the 'evaluative' category (or, in this case, injunctive, which, loosely speaking, implies normative evaluation). By what laws of logic can the second proposition be derived as a conclusion from the first? Or, at least, how can we show logical linkage between proposition 1 and proposition 2? That is what Hume was asking his reader to consider. For Hume to sustain the broader positive assertion that "one cannot derive an ought from an is", he would have had to construct a theory of deduction that categorically (and, ironically, absolutely) excluded evaluative statements or injunctions as meaningful propositions (in the true/false sense of meaning). He didn't do that.
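The gap can be made vivid with a small sketch in Python (entirely my own toy model, not anything from Hume or Molyneux): if we treat descriptive and evaluative claims as distinct types, no ordinary rule of inference manufactures an Ought from Is-premises alone; a bridging premise has to be smuggled in, and the bridge is exactly what goes unexplained.

```python
# Toy model: descriptive and evaluative propositions as distinct types.
from dataclasses import dataclass

@dataclass(frozen=True)
class Is:        # descriptive: states how the world stands
    content: str

@dataclass(frozen=True)
class Ought:     # evaluative/injunctive: states how the world should stand
    content: str

def modus_ponens(premise, conditional):
    """From P and (P -> Q), conclude Q. The rule itself is category-blind:
    it can only hand back whatever consequent the conditional contains."""
    antecedent, consequent = conditional
    return consequent if premise == antecedent else None

hunger = Is("some humans go hungry in winter")
feed = Ought("those with food ought to feed the hungry in winter")

# The only way to 'derive' the Ought is to assume an Is -> Ought bridge:
bridge = (hunger, feed)
print(modus_ponens(hunger, bridge))  # Ought(...), only because we assumed the bridge
```

Nothing in the inference rule does the categorical crossing; the conditional premise does, and that premise is precisely the "new relation" Hume says should be "observed and explained".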

But what of Stefan’s clever turn? If we take the colloquial assertion as read (regardless of what Hume was saying), does Molyneux successfully refute it? I still don’t think so. First, note his usage of the word “wrong” in that passage:

If we cannot get an “ought” from an “is,” then anyone who tries to argue that we can is wrong.

Does he mean “incorrect”, or does he mean “bad”? Fortunately, Molyneux provides a clarification:

In other words, we “ought not” get an “ought” from an “is.”

Is saying that we cannot derive an ought from an is the same as saying we ought not derive an ought from an is (i.e., that it would be 'bad' for us to do this)? On a broad reading, Molyneux may have a point. The rules of logic are often described as normative as well as descriptive (see Guttenplan, for example). In other words, the rules 'guide good behavior' in argumentation, in some sense, in addition to simply describing the methods of thinking. But that's not what's going on here. As I pointed out above, nobody is saying that it is morally wrong to derive an ought from an is, merely that it doesn't seem possible, given the present theories of logic available to us. The task would be to build a logical system that incorporated normative evaluation and injunction (in other words, one that somehow provided truth-bearing meaning for those sorts of statements). Not an easy task, but also not necessarily an impossibility.

In any case, to make this objection stick, and condemn the dichotomy, Stefan has to appeal to his own moral theory (known as "Universally Preferable Behavior"):

Arguing that we cannot derive universally preferable behavior from mere matter and energy argues that it is universally preferable behavior to not derive an “ought” from an “is.” If we cannot derive an “ought” from an “is,” this means that we can derive an “ought” from an “is,” which is that we ought not try it: a self detonating argument.

This should raise a red flag. Because, again, Hume isn't making a moral condemnation of the traversal from descriptive to normative. He's asking how it's possible to do so in moral theories, given the normal rules of logic. What's more, even if we granted the normative complaint, it's still a stretch to say that all normative evaluations are moral evaluations and must therefore be held to the same definitional standard, thereby treating methodological complaints about evaluative language in logic as moral ones. In other words, to make the objection from UPB work, Stefan has to commit exactly the same sleight of hand that Hume was complaining about: "instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not".

Democracy Is For Dummies

The second example is a straw man the size of the Wicker Man. For a man with both a BA and an MA in history, Stefan displays an utter disregard and contempt for it that leaves me continually in awe. Often, he actually seems proud of the contempt he sprinkles liberally throughout this book. One breathtaking example of these random turds can be found here:

Less intelligent people invented democracy (more intelligent people invented Republics), because, being less intelligent, they could not influence society through the brilliance of their writing and oratory. But naturally they wish to have such influence, and therefore invented the concept of “one adult, one vote.” This makes their political perspectives equally valuable to the greatest genius in the land. In other words, they get the effects of genius, without the genetics or hard work of becoming a genius. From an amoral, biological standpoint, who can blame them?

Setting aside the comparative confusion ("less intelligent" than whom? "more intelligent" than what?), I have to wonder why this was even included in the book. It's one of the most cartoonishly ignorant and cynical descriptions of the invention of democracy I've ever read (and I've read some bad ones). But let's entertain the possibility for a moment. Is it reasonable to suppose that Solon, Cleisthenes, and Ephialtes were indeed low-IQ, genetically inferior parasites who, unable to "influence society through the brilliance of their writing and oratory", still somehow managed to convince the population of Athens, over generations, to divide themselves into classes and organize themselves around a complex system of representative bodies like the Ecclesia, the Boule, and the Areopagus? In a book touting the power of The Argument, dare I ask for An Argument for this claim? What would such an argument look like? What sort of society would have replaced early Greek democracy, had these intellectual inferiors not succeeded in transforming their society? Stefan doesn't bother to elaborate. Rather, he wants this assertion to stand on its own, as a stepping-stone in a chain of shallow reasons he thinks clinches the case for the moral superiority of high intelligence.

But it's not even a good reason to think that. For a man who is constantly pounding his chest in honor of "empirical evidence", he never seems to leave any room for that evidence when making claims such as this one. For anyone who has read any serious history of ancient Greek society and politics, it should be obvious: there is absolutely no evidence to suggest that the founders of Greek democracy were "less intelligent" than their social peers. What's more, far from being incompetent writers and orators, the Athenians were some of the most accomplished writers and orators in the ancient Greek world. It's precisely because of how accomplished they were that we can talk about them at all now. In other words, the evidence stands in direct opposition to this claim. And he doesn't seem to notice.

This is a problem throughout the book. There really is no good reason for the inclusion of the brief line of "reasoning" within which this claim is couched. It's arbitrary and random. It neither sustains his primary claim (that reasoned debate is essential to stable civilizations) nor offers a challenge to it that he can respond to. In fact, it's such an outlandish and distracting empty assertion that it damages his main case. It's not an argument. There are other places where similar interruptions are not nearly as damaging (such as the parenthetical mention of "abduction" at the end of his explication of induction and deduction), which leads me to believe this book would have really benefited from a good editor.

Consequences And Principles

The last example may seem too subtle for some. But I raise the objection here because Stefan claims the mantle of a "public intellectual", and has been, ostensibly, hard at work as an "internet philosopher" for at least ten years. Why is this important? Because lay readers of this book will take its misreadings as more-or-less correct, and find themselves with their pants down when faced with someone who knows better. In particular, I'm referring to his mischaracterization of Consequentialism as a 'pragmatic' (meaning 'unprincipled') doctrine:

Atheists also tend to prefer consequentialism, or outcome-based moral standards. That which produces direct and immediate benefits in society is considered the good: the greatest good for the greatest number, and so on. These are not principled arguments, but pragmatic arguments. The principled argument against the welfare state is that it violates property rights (thou shalt not steal). The consequentialist argument for the welfare state is that it immediately reduces the amount of poverty in society. If your goal is consequentialist, principled arguments often stand in your way.

I am certainly no fan of consequentialist ethics, as readers of this blog will know. But to simply dismiss the theory out of hand as "unprincipled" or "pragmatic" is a weak approach at best, mainly because consequentialism is not unprincipled. Since Stefan has made indirect reference to Mill's Greatest Happiness Principle, I will let Bentham and Mill speak for themselves:

Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do. On the one hand the standard of right and wrong, on the other the chain of causes and effects, are fastened to their throne. They govern us in all we do, in all we say, in all we think: every effort we can make to throw off our subjection, will serve but to demonstrate and confirm it. In words a man may pretend to abjure their empire: but in reality he will remain subject to it all the while. The principle of utility recognizes this subjection, and assumes it for the foundation of that system, the object of which is to rear the fabric of felicity by the hands of reason and of law. Systems which attempt to question it, deal in sounds instead of sense, in caprice instead of reason… By the principle of utility is meant that principle which approves or disapproves of every action whatsoever according to the tendency it appears to have to augment or diminish the happiness of the party whose interest is in question: or, what is the same thing in other words to promote or to oppose that happiness. (Jeremy Bentham, Introduction to the Principles of Morals and Legislation)

And:

The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure. To give a clear view of the moral standard set up by the theory, much more requires to be said; in particular what things it includes in the ideas of pain and pleasure; and to what extent this is left an open question. But these supplementary explanations do not affect the theory of life on which this theory of morality is grounded— namely, that pleasure, and freedom from pain, are the only things desirable as ends; and that all desirable things (which are as numerous in the utilitarian as in any other scheme) are desirable either for the pleasure inherent in themselves, or as means to the promotion of pleasure and the prevention of pain. (JS Mill, Utilitarianism)

Now, it is certainly reasonable to question the correctness of this principle. For example, why is happiness directly equated with pleasure? How do you address the objections in the Protagoras? Or any number of other complaints that I, and other more skilled philosophers, have raised on this blog and elsewhere. The Utility Principle is notorious for its myriad problems. But not being a principle is not one of them. Perhaps Stefan should have taken the time to give us a definition of "principle" before proceeding with a condemnation of consequentialism as "unprincipled".

Later on, he tries to extend the meaning of consequentialism itself, in order to heap more scorn on it:

As the influence of women in society has grown, so has pragmatism, which can also be called consequentialism, which is the idea that an argument can be judged by its effects. If the effects are negative, The Argument is “problematic” or “inappropriate” or “offensive.”

Here, he is attempting to equate the fallacy of appeal to consequences with the moral theory of consequentialism. This is a profound category error. He lays blame for the fallacy almost entirely at the feet of women (in a passage that also contains the only research paper he quotes, but doesn't cite, in the entire book). But setting that silliness aside (yet another distraction), he fails to explain how consequentialism as a moral theory commits anyone to the claim that propositions can be judged true or false according to the desirability of their effects. In this small but incredibly disingenuous two-sentence passage, he's managed to throw pragmatism, consequentialism, women, and The Argument under the bus.

Summary Conclusion (My Amazon Review)

I yearned for this to be a better book than it is. I sympathize with the sentiment that reason is beleaguered in modern society, and crave a good book on the topic. Alas, this is not that book. For all his railing against sophistry, confirmation bias, and appeals to emotion, Stefan relies heavily on an audience so steeped in its own prejudices that it won't notice the factual errors, logical incongruities, or interpretational biases littered throughout the book's pages. What's worse, Molyneux attacks the disingenuous debater so strenuously that he often ends up indicting himself for his own sloppiness.

Molyneux's book reads like a personal journal transcribed directly into print. It is haphazard, overwrought, and at times stream-of-consciousness. If you're not already familiar with the lingo of internet Libertarianism, you'll be completely confused by numerous passages. If you're not already versed in, and in agreement with, the arguments and positions of right-leaning anarchism ("anarcho-capitalism"), you'll find the presumption of foregone conclusions scattered throughout the book irritating at best.

At bottom, the main problem with this book is that it doesn't appear to have an audience. The dismissive and sneering tone taken toward the political left will put them off. The appeals to the political right have earned him podcast interviews, and will continue to, but that audience certainly isn't interested in philosophical inquiry beyond its own prejudices. The academic community has already shunned him as a lightweight at best, a crackpot at worst. The book is too polemical and doctrinaire to appeal to the mainstream (many of whom already fear him as some sort of cult leader). So, who is this book for?

He will say, of course, that it is for the ‘true philosophers’. But any true philosopher will find this book terribly disappointing at best, perniciously self-defeating at worst. His explication of logic is amateur and incomplete, and at times just plain wrong. He takes Popperian falsificationism as a given, as if it were just a fact. He makes a sophomoric straw man of consequentialism, misreads Hume, offers only common-sense intuition explanations for complex topics like virtue and happiness, and deftly shifts from normative to descriptive usages of “right” and “wrong”, where it suits him.

In the end, as near as I can tell, the audience for this book was himself, and the handful who share whatever psychology it is that produced it. The person most likely to be convinced by this work is his own faltering conscience. He is defending heavily against the anxiety of uncertainty: the vulnerability and insecurity of having more questions than answers, and nowhere to look for them.

For centuries, the medievals sought the same security in the reified power of deductive logic that Stefan gropes for so desperately in this book. On that count, I surely sympathize with him. The seduction of certainty, its comforting, self-soothing lullaby of finality, and the Archimedean lever it offers against those who would use doubt and curiosity to hurt, to plunder, and to oppress, is something I have been drawn to at times in my own life.

But for those of us afflicted by the daemon of Socrates, these islands of comfortable absolutism will never make a permanent home. Eventually, the urge to set sail again on the sea of uncertainty – on the path to discovery – will overtake the fear of being unmoored, and away we will voyage, come what may.

Stefan’s book is one such island of comfortable certitude, for some. The philosophers may visit, but they won’t stay long. What concerns me, though, are those who end up shipwrecked on one of these islands, before they’ve even had an opportunity to understand the voyage they set themselves on.

I will close this review with a few quotes from Stefan that I would like to offer as chastening advice to Stefan himself:

There are only two ways to achieve certainty: dogma and philosophy. Dogma is by far the easiest choice, of course, and while it may give you the illusion of certainty, it does not give you the reality of knowledge. Dogma arises, like most dysfunctions, from a greed for the unearned…

If The Argument begins with the conclusion, it is neither an argument, nor a proof of any kind…

Indeed.

Addendum: Making Stefan’s Case For Him

In researching this review, I stumbled across this review of the book, by Alexander Douglas, a philosophy lecturer at St. Andrews in Scotland. On reading his review, I couldn't help but cringe. Dr. Douglas has volunteered to make himself into precisely the bogeyman Stefan points to as an example of why he's so right, and everyone else is so wrong. As Stefan says: "When the debate is lost, slander becomes the tool of the loser".

If you are a professional philosopher, and you think Stefan Molyneux is not worth wasting a single breath on, then why write a "review" like this? Why acknowledge him at all? He's not a professional philosopher, and he sits only on the periphery of the online political debate, which is itself already a periphery. If you do think he's worth the effort to review honestly, then why this sort of silly screed, which only serves to entrench his fans and (as they would put it) "virtue signal" to yours? Why fall into this trap at all?

This is one of the reasons I decided to proceed with this review (I considered abandoning it several times). There needs to be somebody reviewing Stefan Molyneux in an honest way, with rigor and discipline, who doesn't have an axe to grind. People hovering in the orbits of the personalities of the "internet right" need to be able to find genuine criticism, in order to make rational decisions for themselves. It's the only way out of the morass.

The Sorites Paradox – Maybe It’s Not What We Think It Is

It has been asked how, if at all, one might resolve the Sorites paradox. I am not convinced a solution is possible. In this paper I will explain the responses I have become aware of, and why they fail. In the end, I will conclude that there is no solution to the paradox, but I will offer a few suggestions for a way forward.
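For reference, here is the schematic form of the argument I have in mind, stated for baldness (the particular numbers are arbitrary stand-ins for “clearly bald” and “clearly not bald”):

```latex
% Schematic Sorites argument, with B(n) = "a man with n hairs is bald";
% the numbers are illustrative only.
\begin{align*}
P_0 &:\quad B(10)                   && \text{(a man with 10 hairs is bald)} \\
P_n &:\quad B(n) \rightarrow B(n+1) && \text{(one hair never makes the difference)} \\
C   &:\quad B(1{,}000{,}000)        && \text{(by repeated modus ponens)}
\end{align*}
```

Each premise looks innocent on its own; it is the chain of modus ponens steps that delivers the absurdity, and the responses below differ mainly in which link they try to break.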

The first response might be simply to reject the first premise of the argument: deny that a man with 10 hairs is in fact bald, or that 100 grains of sand is in fact a heap. In essence, this would render vague predicates useless at best and meaningless at worst, since no predicate that allows for a vague border case would be permitted to apply to anything. There is one way in which we might stretch this into plausibility, but I will address the other responses first, and return to it in the conclusion.

The second response is to set some arbitrary boundary. This means selecting one among the indefinite number of secondary premises, beyond which all the rest will be counted false. For example, we might say that thirty thousand and one grains of sand is the boundary below which we no longer regard a collection of grains as a heap. At first glance, this approach might seem plausible. After all, we do it frequently in practice: setting the legal drinking age, or the number of credit-hours required to count as a ‘full-time’ student, for example. However, there are two core problems with it. First, within the formal argument there is no good reason to reject any particular one of the conditional premises, and there appears to be no means by which we could discover such a reason; the implicit modus ponens of each conditional compels us to accept them all. Second, as Wright (Vagueness, 1997) pointed out, vague predicates are inherently coarse by virtue of their intended use, so attempts to impose specificity on them would destroy their meaning.

The next approach would be to define a knowledge gap within some middle range of propositions, between the clearly false and the clearly true statements. On one interpretation of the idea, we could use a three-valued logic to evaluate the propositions. At some point, counting up from grain one, the proposition ‘this is a heap’ would cease being false, and would instead be valued ‘unknown’ or ‘undefined’. Later, past the next threshold, the unknown state would transition to true. This would give us grounds to reject the argument, since some of its premises would be neither true nor false. However, this seems to be an attempt to win on a technicality, and it suffers from the same problem as the arbitrary boundary solution: we have no real way of determining where the states should change.
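To make the proposal concrete, here is a minimal sketch using Kleene’s strong three-valued logic. The thresholds bounding the ‘unknown’ band are my own invented placeholders, and picking them non-arbitrarily is exactly the problem just described:

```python
# A sketch of the three-valued approach, using Kleene's strong
# three-valued logic. The thresholds bounding the "unknown" band
# are invented placeholders -- choosing them is the unsolved problem.

TRUE, UNKNOWN, FALSE = 1, 0, -1
LABEL = {1: "true", 0: "unknown", -1: "false"}

def is_heap(grains, lower=10_000, upper=30_000):
    """Three-valued verdict on 'this many grains is a heap'."""
    if grains < lower:
        return FALSE
    if grains > upper:
        return TRUE
    return UNKNOWN

def k_not(p):
    return -p

def k_implies(p, q):
    """Kleene implication: (not p) or q, with 'or' taken as max."""
    return max(k_not(p), q)

# The conditional premise "if n grains is not a heap, then n+1 grains
# is not a heap" loses its truth value around the unknown band:
for n in (5_000, 9_999, 20_000, 30_000, 30_001):
    premise = k_implies(k_not(is_heap(n)), k_not(is_heap(n + 1)))
    print(f"{n:>6}: {LABEL[premise]}")
```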

The next response might be some form of Edgington’s “degrees” of truth (Vagueness, 1997). But this suffers from its own serious flaws. For example, consider the borderline statement ‘it is raining’, and suppose it has a degree of truth of 0.5. Its negation, ‘it is not raining’, will also have a degree of truth of 0.5. On the usual degree-functional rules, the consequence is that the following propositions have exactly the same truth value: ‘It is raining, and it is raining’, and ‘It is raining, and it is not raining’. The same problem arises with our heap of sand. So, again, we are left with no clear way to determine the truth of the conditionals in the Sorites case.
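The arithmetic behind the worry is easy to check. A tiny sketch, assuming the standard degree-functional rules (conjunction as minimum, negation as one minus the value), which is the usual setting in which the objection is raised:

```python
# Degrees of truth with the usual degree-functional rules:
# negation is 1 - x, conjunction is min. The 0.5 value simply
# marks a borderline case.

def f_not(p: float) -> float:
    return 1.0 - p

def f_and(p: float, q: float) -> float:
    return min(p, q)

raining = 0.5  # borderline: neither clearly raining nor clearly not

print(f_and(raining, raining))         # 'raining and raining'     -> 0.5
print(f_and(raining, f_not(raining)))  # 'raining and not raining' -> 0.5
# An outright contradiction scores exactly as well as a repetition.
```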

In the end, there does not seem to be any clear resolution to this paradox. However, I would offer one suggestion. Vague predicates, in addition to being inherently coarse, also seem to describe inherently subjective experiences or judgements, while Sorites arguments want to talk about objective properties of objects. Perhaps there is no solution to the paradox precisely because “tall” or “heap” or “red” or “bald” is not in fact a property of the object being considered, but a property of the experience the subject has of that object. The paradox is perhaps trying to square subjective interpretation with objective matters of fact. And that is why it cannot be resolved.

Haack and Dummett on The Justification of Deduction

Susan Haack nicely diagrammed the problem of circularity in her 1976 paper, The Justification of Deduction. In that diagram, she drew a direct parallel to the circularity of the inductive justification of induction, as originally outlined by Hume. Haack argues that justification must mean syntactic justification, and offers an illustrative example argument to show why semantic justification fails: namely, that it is an axiomatic dogmatism, in which deduction is justified by virtue of the fact that we have defined it to be truth-preserving.
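The circularity is easiest to see schematically. What follows is a standard reconstruction of the worry, my paraphrase rather than a quotation from Haack: any argument that modus ponens is truth-preserving must itself take the form of a modus ponens.

```latex
% A standard reconstruction of the circularity (my paraphrase).
% To show that modus ponens,
\[
\frac{A \qquad A \rightarrow B}{B}
\]
% is truth-preserving, we argue:
%   (1) Suppose $A$ and $A \rightarrow B$ are both true.
%   (2) If $A$ and $A \rightarrow B$ are both true, then $B$ is true
%       (by the truth table for $\rightarrow$).
%   (3) Therefore, $B$ is true.
% But the step from (1) and (2) to (3) is itself an application of
% modus ponens: the rule is used in the very argument meant to
% justify it.
```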

Haack goes on to argue on syntactic grounds that justification is a non-starter on at least five other fronts, in addition to being circular. However, Dummett, in his 1973 paper of the same name, showed that the only kind of justification that makes any sense is semantic justification: first, because syntax necessarily relies on semantics for its meaning, and second, because the whole point of justification in the first place is confidence in the function of logic as a means of preserving truth values.

Still, Dummett was able to show not only that the justification of deduction is circular, but also that any attempt to provide one leads inevitably down one of the horns of Agrippa’s Trilemma. As Haack pointed out, we could simply assert the justification definitionally. But in attempting to avoid this dogmatic horn, Dummett points out, we have only two other options: the regress horn or the circularity horn. The former would mean crafting a set of rules of inference that could be used to independently justify deduction; these new rules would require a language and a theory of soundness and completeness all their own, which would in turn require justification, and the process would descend yet another level. The latter would have two different sets of inference rules justifying each other, in perpetuity. Obviously, none of these options is satisfying.

Later in his paper, Dummett attempts to explain how a set of inferential rules might be justified by reference to a theory of meaning for the object language within which they are contained. Essentially, he argues that the soundness and completeness theory of a logic provides for it what a theory of meaning provides for a language: a functional understanding of its use. In other words, if we are to justify logic at all, we must first have a theory of meaning that shows how sentences can carry truth values. But this seems to me to begin the slide back into circularity, because, as Dummett goes on to explain, our definitions of true and false themselves determine the means by which we arrive at the meanings of the sentences judged by those definitions. All we’ve done is shrink the circle.

Haack and Dummett continue the debate in subsequent papers, but reach no conclusion. I am inclined to wonder, myself, whether any of it matters. The justification problem in induction has been evident for nearly three hundred years, and the problem of deduction for around seventy-five. Yet somehow, both of these tools of inference continue to be used and taught — and both still seem to yield results that most of us find satisfying most of the time.

In a word, yes: some forms of justification are circular, and no form of justification actually seems to work. But perhaps the problem isn’t what we think it is. Perhaps the process of inference is somehow more fundamental than language. Perhaps it is a feature of consciousness that resides below the level of language, rendering it impervious to notions like justification. Or perhaps the justification of logic will someday come out of the neurological study of the brain, as an explanation of the evolutionary advantage of a linguistic mind to a primate that would otherwise have perished on the plains of Africa.

Getting A Handle On The Truth

“What is truth?” ~ Pontius Pilate

This is an interesting and surprisingly difficult question. If you look in the OED, what you’ll find there are entirely circular and self-referential explanations: “the quality or state of being true”, “that which is true or in accordance with fact or reality”, and “a fact or belief that is accepted as true”.

So, the poor souls that rely on the dictionary are left with, essentially, “truth is what’s true”, and “what’s true is what we agree are the facts of reality.” But what if we’re wrong and we still agree? Or worse, what if we disagree, but one of us is right? This can’t be the last word on this topic. What can we say with any confidence about truth, as such? To put it in the words of Bertrand Russell:

“We may believe what is false as well as what is true. We know that on very many subjects different people hold different and incompatible opinions: hence some beliefs must be erroneous. Since erroneous beliefs are often held just as strongly as true beliefs, it becomes a difficult question how they are to be distinguished from true beliefs. How are we to know, in a given case, that our belief is not erroneous? This is a question of the very greatest difficulty, to which no completely satisfactory answer is possible. There is, however, a preliminary question which is rather less difficult, and that is: What do we mean by truth and falsehood?” — The Problems of Philosophy (p. 77)

Thinking on the question a bit, I realized I’m not quite sure what I mean. So, I decided to take a brief look at what philosophy has had to say on the subject over the centuries, to see if I might find something I’m willing to accede to, at least in the short term.

As Russell is careful to point out in the book I just referenced, any real understanding of truth must start with an understanding of what knowledge is. But even this is tricky. I wanted simply to stipulate to the classical definition, in order to shorten this post. But what we find in the traditional definition of knowledge is yet another circular reference: knowledge is Justified True Belief. In other words, that which is known is that which satisfies all three of the following conditions:

  1. It is believed
  2. That belief is justified
  3. That belief is true

For the sake of brevity, I’ll let the Stanford encyclopedia explain these three conditions in detail, and I’ll set aside common objections to this formulation of knowledge for a later post. Nevertheless, in spite of Stanford’s assertion that “the truth condition is largely uncontroversial”, I think the presence of truth in the definition of knowledge is a serious problem for philosophy, because it makes the two terms fundamentally dependent upon each other: truth is that which is known is that which is the truth.

As such, I find it hard to blame the dictionary for its circularity when it relies for its definitions on an academic discipline that can’t seem to provide a clear answer to this question. What’s more, I think it’s a little disingenuous for “serious” philosophers to scoff at Ayn Rand for her insistence on unjustified “axioms” like “Existence Exists”, or to laugh at Christians who, facing no real alternative, rely on Jesus’ pronouncement that actually it is he personally who is “…the way, the truth, and the life…” (John 14:6).

To be completely clear, my aim here is not to argue that there is no such thing as truth, or that we cannot know things or cannot justifiably claim to know the truth — or worse, that we should just throw our hands up and simply declare it to be whatever we want it to be. To do so, I’d have to employ the very tools of thought that I’d be condemning. All I am suggesting is that maybe we’re not as sure as we think we are, and that maybe we need to rethink some of these fundamental questions.

What Everyone Else Thinks

As one might expect, given what I have stated above, there are numerous philosophical theories of truth. The most popular among them, the “correspondence theory”, has the greatest appeal to common sense. This theory is probably where the OED gets its turn of phrase “in accordance with fact or reality”. The theory states that “a proposition is true provided there exists a fact corresponding to it.” But what does “correspondence” mean? And what, exactly, are facts? Russell makes a lot of hay out of this second question in his own conception of correspondence. In short, this definition “works”, but it’s not entirely satisfying (as Russell notes in the quote above).

Some argue for something called “coherence”, in which each new statement is compared to a complete set of beliefs, and rejected if it does not “fit” within that collection. This theory seems to fail on two grounds: first, nothing requires the collection of beliefs to bear any relation to reality; and second, as Russell again points out, because of the first problem there can be many equally “coherent” belief systems existing side by side. How do we know which one to choose? These problems point to a third, which I think also plagues the pragmatist, constructivist, and consensus theories of truth: they all elevate mere belief to the ontological status of fact, by virtue of some ex post facto rationale. What’s more, this equivocation seems to go unnoticed (or worse, dishonestly ignored) by the theories’ adherents.

What I Think

I find Kant’s idea of the conjunction between the noumenal and phenomenal worlds somewhat compelling, though probably not for reasons Kant would approve of. Science shows us that there is a reality outside the reach of the senses. Perhaps truth, then, is the extent to which we can apprehend these non-phenomenal parts of reality and reconcile them with the phenomenal parts. Science has already provided us with all sorts of tools for doing this (telescopes, microscopes, sensors, meters, etc.). If this is true (somewhat ironically), then the way to the truth is through scientific inquiry. This is certainly a different route to truth via science than the one the pragmatists propose, but I think the destination may be the same.

On the other hand, although I don’t quite understand his theory, Alfred Tarski‘s emphasis on semantics got me wondering.

I have heard truth described by some as a relationship between physical reality and conscious awareness. This is not quite the same thing as correspondence, because the focus here is not on the objects in the relation, but on the relation itself. It’s an interesting idea, but I think it isn’t quite complete. If conscious awareness of reality were all that is necessary for a “truth” relation, then beavers and ants and birds would be capable of apprehending the truth. Clearly, then, it must be something more.

That difference is language. Truth is as much a semantic concept as it is a metaphysical one. Like knowledge, the definition of truth is concerned with the objects of mind and reality, and primarily with the nature of the relationship between them. But what is it about the nature of this relation that makes it truth? I think it is the meaning we assign to that relationship, and the value discovered in its contents.

In short, truth is a kind of semantic value judgment of the perception of reality as it is apprehended by a mind capable of apprehending and valuing. But what does this mean in practice? Is it just another way of formulating correspondence? Not quite. Is it the same as claiming that the truth is whatever we want it to be? Not quite. Is it pragmatism in another suit of clothes? I don’t think so.

But I’m struggling to find the words necessary to develop the idea any further. And perhaps that’s a clue to the problem with all of these theories. Maybe the problem lies precisely in the fact that our language is woefully lacking when it comes to describing these sorts of relationships. This is why I am beginning to wonder whether we need a new language, or a new way of thinking and of describing our thoughts, before we can properly answer this question.

Knowledge, Certainty, And Logic

The Epistemic Regress (specifically, the skeptical variety) is a little out of my depth at the moment, but what is plainly obvious from the various presentations of the problem is that at its core lies the Problem of Knowledge. The key question that arises in the examination of the major premises of any deductive argument is: “how do you know?” This suggests that something essential about the nature of those premises needs to be discovered before we are going to solve the riddle.

Perhaps the root of the question lies in an unconscious equivocation between analytic and synthetic statements when we ask it: the latter being knowledge derived from sense perception, the former from “pure reason” (as Kant might have put it). To that end, some suggest that we probably need to revisit the classic problem of Cartesian skepticism yet again. This paper, from someone at the University of Alabama, discusses a theory called “Foundationalism”, which, despite the numerous objections to it, seems somewhat appealing.

However, I think the problem lies precisely in the form of logic itself. It is a tool designed around a positive conception of knowledge: one that presumes certainty is a reasonable and achievable standard of knowledge, and that requires assertions which are absolute. There’s even a term for it, “Justified True Belief”, in which absolute certainty is the gold standard defining what “knowledge” really is; it is the view that drove Descartes to his maxim, Cogito Ergo Sum.

But I take my view more from Karl Popper than from René Descartes: the regress exists because the tool we’re using and the thing we’re trying to achieve with it are incompatible. We need a new form of critical reasoning, and a new conception of knowledge, capable of coping with degrees of uncertainty and degrees of probability.
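For what it’s worth, one existing candidate for at least part of that picture is probabilistic reasoning, in which belief is a degree of confidence revised by evidence rather than a certainty to be proven once and for all. Here is a minimal sketch of Bayesian updating; every number in it is invented purely for illustration:

```python
# A minimal sketch of reasoning with degrees of belief rather than
# certainty: Bayes' rule, P(H|E) = P(E|H) * P(H) / P(E).
# All numbers here are invented for illustration.

def update(prior: float, likelihood: float, false_alarm: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E.

    prior       -- P(H) before seeing E
    likelihood  -- P(E | H)
    false_alarm -- P(E | not H)
    """
    evidence = likelihood * prior + false_alarm * (1.0 - prior)
    return likelihood * prior / evidence

# Belief is never certified as True; it is revised as evidence arrives.
belief = 0.5                       # start agnostic about H
for _ in range(3):                 # three independent observations of E
    belief = update(belief, likelihood=0.8, false_alarm=0.3)
    print(round(belief, 3))        # 0.727, 0.877, 0.95: confident, never certain
```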

Traditional deductive logic (and even some forms of induction) relies too heavily on a conception of knowledge that demands of its users something which, upon very close inspection, seems not to exist and perhaps not even to be possible. We need to get out of the classical playpen of Aristotle and Plato, and grow up a little. What that will look like is a bit beyond me right now. But maybe someone, somewhere, has already beaten me to the punch. I hope so. Maybe tentative uncertainty is the most anyone can hope for.