Category: ethics

Book Review: 12 Rules For Life, Jordan Peterson

Jordan Peterson’s “12 Rules For Life” is an admixture of continental philosophy, eastern mysticism, Jungian psychology, Christian theology, clinical psychotherapy insights, personal biography, and folk wisdom. At 368 pages, it’s just large enough to keep a thoughtful layman engaged without the more intimidating academic burden of his first book, “Maps of Meaning”. Dr. Peterson is obviously well read and quite thoughtful. In addition to some of his own occasional profundities, the book is absolutely littered with references to Shakespeare, Milton, Goethe, Dostoevsky, Orwell, Solzhenitsyn, and many others. If you’re a curious reader, following these up will take you weeks.

A Jungian at heart, Peterson loves to cast his arguments in metaphorical and mytho-poetic form, which can be remarkably frustrating for a more hard-nosed analytical thinker like me (he does this far less in Maps of Meaning). But Peterson is still careful to cite modern sources for most of the empirical assertions throughout the book (with one significant exception, which I’ll get to later).

It took reading nearly the entire book to figure out how the 12 rules relate to one another as a whole, but the effort was well worth it. Chapters 6 (“Set your house in perfect order…”), 7 (“Pursue what is meaningful…”), and 8 (“Tell the truth…”) constitute the heart of the book in my view, with chapters 10 (“Be precise in your speech…”) and 11 (“Do not bother skateboarders…”) serving to drive home the overall message. What is that message? First, that contra Descartes, the fundamental unshakeable truth of human existence is the experience of suffering – a pre-rational, essential phenomenon that is, as Descartes might have put it, the primary “clear and distinct” knowledge we have of ourselves and of our “Being” (Peterson’s term; it seems to mean something like the state of existing and experiencing existence). Second, that suffering is a result of our having awakened to the fact of our own Being, that this awakening was in some sense a choice, and, most importantly, that it leaves us facing the perpetual choice of either accepting or rejecting the burden of this knowledge. For Peterson, this is the fundamental moral choice. Bearing this conscious choice – and the selection itself – is the acting out of our fundamental value, and its ultimate consequence is the wholehearted embrace or rejection of the whole of creation. Not simply, as Nietzsche or Camus might say, the choice of suicide, but the choice of becoming judge, jury, and executioner of all Being, including your own. The moral man, then, chooses life, and makes that his ultimate value in the process.

These chapters are by far the most philosophical of the book. They are essentially Peterson’s response to Nietzsche’s famous critique of value in Zarathustra and the Genealogy of Morals. His formulation of and answer to this problem is clearly influenced by Kierkegaard (whom he quotes twice), but the far stronger influence, it seems to me, is the Judeo-Christian Bible. Peterson casts the opening books of the Bible into Jungian archetypes and uses them to make his case. The Priestly Genesis is the origin of all Being: the Word is self-conscious Truth, spoken as a means of deriving order from the chaos of the deep. Eve chooses to invite chaos into the walled garden of Eden, and Adam follows her lead. In their offspring – Cain and Abel – we are confronted with the choice of life stated above, only in archetypal form: Cain condemns the world, its creator, and himself out of resentment for the suffering he encounters – and not just for the suffering, but for its apparently unequal distribution between himself and his brother Abel. Abel, on the other hand, chooses to properly honor himself and his creator with honest sacrifice. Peterson draws upon this metaphor again later, in a masterful parallel with the parable of Christ’s temptation in the desert.

So, for those who do their philosophy metaphorically, this book is a feast. It is an homage to hope, and a powerful argument against the nihilistic despair that seems to permeate our modern culture. Still, I think this book is only likely to find fertile ground with seekers still open to the intuitive and allegorical approach to philosophical investigation. More to the point, those jaundiced by academic cynicism, or jaded by ideological and intellectual biases, will generally find nothing more than a twenty-first-century Joseph Campbell.

To be sure, there are some problems with the book. Rule 5, for example, lacks much of the intellectual rigor and careful citation of the rest of the book. Peterson makes numerous appeals to the work of B. F. Skinner in this chapter, which is only obliquely relevant anymore, given the decades of work on the developmental psychology of children done since then (none of which he notes). Worse, he makes several trite and easily refutable arguments in support of his position (for example, what I like to call the “hot stove defense”), and fails to acknowledge that much of what he says in this chapter is often used as post hoc justification by very poor parents. I think Peterson could have left this chapter out, and it would have been a better book.

Also, it is possible to charitably dispute Peterson’s allegorical approach to the question of meaning. The Joseph Campbell complaint, while somewhat of a straw man, is not entirely without merit. Sam Harris offers an excellent illustration of this in his book “The End of Faith”, in which he satirically describes the spiritual significance of a Hawaiian snapper recipe. Though it is hyperbole, it raises the question of how one would anchor claims drawn from allegory in something more empirical, in order to make them properly defeasible. Peterson has yet to address this objection fully, as far as I know.

Despite these problems, I think the book is still well worth the effort to read for any lay-philosopher looking for an interesting angle of approach to the problem of value and meaning, and its application in a very real-world way. The parallel psychoanalytic threads running through the book also make it an excellent tool for meditation and self-reflection. It might be tempting, because of this and because of the title, to dismiss the work as mere “self-help”. Don’t be fooled. Peterson explicitly rejects “giving advice” in the book. What’s more, he’s secretly not even giving you “rules” to follow. What he’s offering, through the mnemonic device of easy-to-remember “rules”, is a glimpse into a unique psycho-philosophical framework for making sense of our phenomenal experience of the world. Or, to put it as Peterson might, a means of forging some order out of the chaos of your own suffering existence. The principles that make up the framework will sound surprisingly familiar to anyone who’s read any Greek philosophy:

  • It is better to choose life than death
  • Aim for the ideal of Truth, Beauty, and Goodness, but work on earth with what you have
  • The responsibility for these choices is yours, and yours alone

In a nutshell, he implores us all to be philosophical before (though not to the exclusion of) being theological, and he thinks that if we were, we would make the world better, even if only a little. Who can argue with that?

Hume, Plato, and the Impotence of Reason

From his insight that it is moral opinion, not reason, that moves us to act, Hume infers that reason is not the source of moral opinion. From this, he further argues that moral opinion is a product of the passions – special emotions that arise out of the relations of ideas and impressions. In this essay, I will argue that Hume’s initial inference is correct, but that his subsequent one is not. Passions may indeed arise from relations of ideas and impressions, but there is no good reason to presume that the passions, though necessary, are sufficient to produce a moral opinion.

So, what exactly is a “moral opinion”? Plato held that opinion (doxa) lay in the gray area between knowledge and ignorance. He argued that opinion did not deserve the respect of a truth because it lacked the justification by an eternal, unchanging quality necessary to rise to the level of epistêmê (knowledge). If we apply this standard to moral opinion, then, it would be a doxastic belief about the rightness or wrongness of an action, or the goodness or badness of a character. Moral knowledge, on the other hand, would be a belief of a much stronger type. To say, for example, that I believe stealing to be unpleasant, or that I wouldn’t do it, or that it seems wrong, is to say something contingently true, relative to myself, and subject to correction. To say “stealing is wrong”, on the other hand, is to make a truth claim asserting the real existence of a property in a certain class of actions. To make such an assertion, I would need to be able to identify the property, point it out, and name it. That would require some sort of perception, and perception requires a sense organ, a faculty of the mind, or both. Plato would rule out a sense organ as the source of our moral knowledge, because sensible phenomena are mere imperfect reflections of the ultimate reality of the form of the Good.

The faculty of the mind that perceives such things as rightness or wrongness, according to Plato, is therefore a certain kind of judging faculty that need not rely on the senses. The traditional interpretation says that this is reason, and makes analogies to mathematics to bolster the claim, because this is what Plato seems to do; I disagree. In the Phaedrus, Plato provides an ornate metaphor for his tripartite soul: that of a charioteer and two great horses. Plato puts reason in the charioteer’s seat, and assigns the roles of appetite (the passions) and judgment (moral judgment) to the two horses. This arrangement is important, because it speaks to Hume’s own assertion that “reason is, and ought only to be the slave of the passions”. The charioteer is the apprehending ego, the “reasoning” member of the triad. He does not motivate the chariot; he only steers it. This is consonant with Hume’s view that reason can only guide the passions. But where Hume fails is (to borrow Plato’s analogy) in thinking there is only one horse. Hume presumes there is no judging faculty. With only one horse to pull the chariot, the best the driver can do is provide a bit of helpful guidance as the appetitive horse causes the chariot to careen in whatever direction its whim pulls it.

Jonathan Haidt, in his 2012 book The Righteous Mind, provides a more vivid metaphor: that of a thin, scantily clad tribesman mounted atop an unruly African elephant. In this metaphor, the elephant is almost entirely in control, and all the rider can do is suggest minor alterations in direction with a swat of reeds or a tug on a rope. His metaphor, like Hume’s, includes no faculty of judgment – no capacity to discern the difference between mathematical or empirical facts and the normative consequences of actions taken in light of them, no capacity for selecting among possible goals. For Haidt, the elephant dictates the terms of engagement to the rider, and the rider’s only choice is how enthusiastically to accept them. Haidt is dutiful in his acceptance of Hume’s model of moral psychology. But Hume, as I have said, is wrong. He is indeed correct that the rider does not form his opinions on his own, but he is wrong to say that they necessarily derive from the elephant. Hume is ignoring the judging horse. Kant, reacting to his own observation of this problem, attempts to right the ship by overcorrecting in the opposite direction: he denies the importance of the appetitive horse and gives control of the chariot exclusively to the driver. On Kant’s model, neither appetite nor judgment is empowered to lead us anywhere, and the charioteer is forced to get out and push the chariot, out of “respect for the moral law”. This will not do.

To properly form a “moral opinion”, as anything more than mere opinion, requires judgment. Judgment is the reconciliation of “is” with “ought”, by means of a value determination, and that determination requires a negotiation between experienced desires and reasoned principles. In this way, the charioteer and his two horses have an equal say in the speed, direction, and ultimate destination of the chariot. For all its metaphorical mysticism, Plato’s model of the tripartite soul is a profound insight into human character that is lacking in almost all of his successors, save, perhaps, Aristotle. The rational portion of the soul is the master of what is, the appetitive portion is the master of what I want to be, and the judging portion is the master of what ought to be. Our task, as thinking, self-conscious human beings, is to train ourselves so that these masters learn to live in harmony with one another. When we do, the result is eudaemonia.

Autism and Trolleys – One Good Reason To Reject Utilitarianism

In recent years, it has been speculated that Jeremy Bentham was autistic. This speculation arises out of Bentham’s extreme attempts at systematizing human interactions in his formulation of Utilitarianism. Though I realize modern Utilitarianism is much more sophisticated (in its various sociological and econometric forms), I think its descendants all still suffer from the fundamental assumptions laid down by Bentham. In this essay, I will show how one of those basic tenets leads to absurd conclusions, and hides value assumptions imported from other forms of ethics. What better way to do this than with Philippa Foot’s trolley problem, a common modern tool of the Utilitarian?

Initial assumptions

  1. I’m working with traditional Utilitarianism, not any of the more modern econometric notions of utility. The more sophisticated versions of Utilitarianism would pretend to have an answer to this problem, but I don’t have the space to deal with that here.
  2. I’m assuming “aggregate” pleasure is what we’re after, and not individual pleasure, since neither Bentham nor Mill was willing to concede to pure individualistic hedonism.
  3. I’m assuming all the passive participants in the trolley scenario are “blank slates”, and are of absolutely equal “value” in some objective sense, in order to force the dilemma (i.e., it wouldn’t be much of a dilemma if the five were orphans and the one was Hitler).

The Groundwork

Now, Bentham had the idea that we might be able to parse pleasure and pain into quanta of measurable units. In keeping with the mindset of his time, and in an attempt to take Bentham’s idea to its logical limits, let’s call these quanta “hedons” and “dolors”, where hedons (from “hedonism”) are the finite quanta of pleasure, and dolors (from the Latin for “pain”) are the finite quanta of pain. For each individual, then, imagine a one-dimensional scale with zero at its center: zero is equivalent to “indifferent”, anything above zero is “pleasurable”, and anything below is “painful” (like a barometer that can go negative). For example:

Where +10 would be something like an orgasm whilst simultaneously eating a custard eclair in a warm Jacuzzi bath, and -10 would be something like having your Johnson burned off with an acetylene torch, whilst rabid dogs gnaw your fingers off, in an ice storm.
Since we’re assuming “blank slate” participants, everyone starts out at zero (absolute indifference), and everyone has an equal capacity for either +10 or -10. Also, since we’re dealing with aggregates rather than individuals, we need to take the accumulation of this for all six passive participants. That gives a maximum potential of +60 or -60 for the group (6 people × 10). Lastly, since you can feel neither pleasure nor pain when you’re dead, you cease to count toward the aggregate once you are dead.
In the trolley case, we are assuming that the trolley will kill whichever passive participants it strikes, not just seriously maim them. That means whomever it hits is effectively removed from the aggregate of total hedons and dolors available to make our “greatest good” calculation. Next, I think it’s safe to assume a reasonably sympathetic disposition in most people, so witnessing a horrible tragedy is going to cause some serious distress, and we have to decide how many dolors that amounts to. I am also willing to concede the possibility that the relief at realizing it’s not me that got hit by the trolley will add some hedons. Let’s say witnessing the tragedy is equivalent to 2 dolors, and the self-interested relief is equivalent to 1 hedon.

The Experiment

The trolley scenario I face today is as follows:
* (a) If I pull the lever to the left, I drive the trolley over the five passive participants.
* (b) If I pull the lever to the right, I drive it over the one passive participant.
In situation (a), 5 individuals are removed from the aggregate total of hedons and dolors. So, we are left with only one person on the opposite track. He experiences 2 dolors witnessing the tragedy, and 1 hedon of relief, for a total aggregate score of -1 on the “greatest good” scale.
In situation (b), 1 individual is removed from the aggregate total of hedons and dolors. This leaves us with a total aggregate potential of +50/-50 (the five people on the other track). Each experiences 2 dolors at the witnessing of the tragedy on the other track. That is a total aggregate of 10 dolors. Each experiences 1 hedon at being relieved they weren’t the victim. That’s a total of 5 hedons. So, basic number line calculation would be: -10 + 5 = -5. In other words, we’re left with an aggregate “greatest good” scale calculation of -5.
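The back-of-the-envelope arithmetic above can be sketched in a few lines of code (a toy model only; the 2-dolor and 1-hedon figures are the assumptions conceded earlier, and the function is my own construction, not anything from Bentham):

```python
# Toy model of the aggregate hedon/dolor calculation above.
# Assumed values (conceded earlier in the essay): witnessing the
# tragedy costs 2 dolors; self-interested relief yields 1 hedon.
WITNESS_DOLORS = -2
RELIEF_HEDONS = 1

def aggregate_score(total_people: int, killed: int) -> int:
    """Sum hedons and dolors over survivors; the dead drop out."""
    survivors = total_people - killed
    return survivors * (WITNESS_DOLORS + RELIEF_HEDONS)

score_a = aggregate_score(6, 5)  # scenario (a): run over the five -> -1
score_b = aggregate_score(6, 1)  # scenario (b): run over the one -> -5

# On this calculus, (a) scores "better" than (b): -1 > -5.
assert score_a > score_b
```

Note that the absurdity falls directly out of the “dead drop out of the aggregate” rule: the more people you kill, the fewer sufferers remain to be counted.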

So you see: since an aggregate of one dolor is better than an aggregate of five dolors, it is therefore better to run over the five people than it is to run over the one (all other things being equal).

Interpreting The Results

Now, outside of the framework of Utilitarianism as I have described it here, do I subscribe to this as a reasonable moral theory? Would I actually be willing to run over five people instead of one? In real life, this is a choice I’m not likely ever to face. But if I were, my response would be driven by psychological and emotional causes, not Utilitarian calculations, which are far too speculative and complex to aid anyone in a moment of extreme stress. Of course, Mill would tell you that constant practice and study would leave you with something like a “second nature” that would respond to such situations. But this begs the question. In any case, I am inclined to refuse to answer the question trolley scenarios pose.

Firstly, the natural impulse to run over one instead of five has more to do with the contrived nature of the trolley experiment than it does with proving Utilitarianism. Why should we assume “blank slates” are standing on the tracks? What if the five are a euthanasia club awaiting their prize? Diverting the trolley away from them would then cause great distress, because their wishes would go unfulfilled. On the other hand, what if the one man on the other track is a Nobel-winning agricultural scientist on the verge of solving the world hunger problem? It seems to me that killing five to save him is well worth the cost.

Secondly, these trolley scenarios, and Utilitarianism more generally, pass off individual prejudices as objective values. Who am I to decide which people must die and which must live? Why is my calculation of what’s more pleasurable in any sense synonymous with the objective discovery of what’s good? Aristotle, for one, would have scoffed at such an equivocation.

Thirdly, the whole scenario implicitly adopts life itself as a value above and beyond Utilitarian considerations of pain and pleasure. In other words, it would be better to be alive and suffering the loss of a limb in a trolley accident than to be dead and suffering no pain at all. This value cannot be coherently established within Utilitarianism, and some philosophers have actually committed themselves to denying it. David Benatar comes to mind: arguing more or less from the same Utilitarian presuppositions I have established in this essay, he concludes that the whole of the human race should be rendered impotent, so as to prevent any more human beings from coming into existence, because the accumulated dolors versus hedons (my terms) of existence outweigh the net null of not existing at all.

The Conclusion

Clearly, any framework for ethical calculus that can lead us to the conclusion that death is preferable to life is fundamentally flawed. Even David Benatar himself asserts that the presently living have some sort of “interest” in remaining alive (confusingly, while still insisting that their suffering far outweighs any interest that might favor being alive). Worse yet, any ethical system that implicitly elevates some individual, or small group of individuals, to arbiter of an imaginary objective “greater good” is demonstrably a bad thing. The late 19th century, and all of the 20th, is a wasteland of Utilitarian utopianism – giant state bureaucracies filled with officious systematizers, and political systems overrun by narcissistic do-gooders, all hell-bent on “making society compassionate”, at all costs.

The trolley scenario I have laid out here is a metaphorical demonstration of just this problem. Utilitarianism, as an ethical system, is at best a decision-making tool to be used in very specific, very short-term situations, after we’ve already established a set of moral presuppositions from which to frame the calculations. The Utilitarianism of this trolley scenario relies on the presupposition of life as a value – specifically, human life. But Utilitarianism as a doctrine need not presuppose such a value. This is why many philosophers criticize Utilitarianism for failing to properly protect rights: they intuitively recognize that Utilitarianism is anti-life. When human lives themselves become an expendable means to some other, greater abstract goal, the ethical system that led us there is highly suspect at best. There are all sorts of other problems with Utilitarianism, but this problem is enough by itself to suggest that we ought not adopt it with any degree of confidence.

Judging Virtue

It has been put by some that virtue ethics lacks a decision-procedure to help us make moral decisions, and is therefore not a good moral theory. In this essay, I will argue that the decision-procedure is not a satisfactory standard for judging ethical systems, because decision-procedures do not take the full experience of human morality into account, and because the theories implementing them often achieve exactly the opposite of their stated goals. I then offer an approach to virtue ethics that I think might salvage the theory as a whole, and I conclude that, despite my moral skepticism, such a theory would be preferable to decision-procedure-based approaches.

To begin with, why should a decision-procedure be the standard by which we judge a moral theory? It might be argued that decision-procedures are commonplace tools for making choices in many situations; so, why not under moral circumstances as well? While it is true that there are many contexts in which flowcharting and process modeling are useful, these are practical problems, not necessarily ethical ones. There is already a concrete goal in mind, risks have already been calculated, and processes are followed according to plan, in the hope of achieving the material goal. Decision-procedures are explicitly useful in software development because, in fact, there is no other way to direct the behavior of a computer. Its programs are its decision-procedures. It is designed to do nothing but execute those procedures against given inputs. Anyone familiar with the industry knows that dozens of different languages for building decision-procedures proliferate, often for no other reason than that they are fun to invent. But the human mind is radically different from a computer – not just in degree, but in kind. It is certainly true, as I just described, that the human mind can process decision-procedures; that’s what made computers possible in the first place. We’re very good at crafting tools to relieve ourselves of tedious burdens. Executing decision-procedures, though, is just one kind of operation in which the human mind engages. The mind is an organ (perhaps one of the most important organs) resting inside the head of a complex, dynamic, constantly changing biological organism with a sophisticated psychology, capable of far more than calculating sums or following instructions. It is capable, in combination with the entire body of the organism, of emotional responses to its environment and, perhaps most important of all, of making qualitative evaluations of the relationship between the sensed and calculated reality and the subjective emotional response to that reality. This is where the realm of moral judgment lies: in the qualitative gap between subject and object. Decision-procedures, therefore, are the wrong “tool for the job”, because they fail (in all the prevailing theories) to account for the full moral experience of the human being.
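The sense in which a computer’s program just is a decision-procedure can be made concrete with a trivial sketch (the rules and names here are invented purely for illustration):

```python
# A decision-procedure in the literal, computational sense: a fixed
# mapping from inputs to an outcome. The rule set is hypothetical.
def triage_decision(severity: int, contagious: bool) -> str:
    """Route a case by mechanically following fixed rules."""
    if severity >= 8:
        return "emergency"
    if contagious:
        return "isolation"
    if severity >= 4:
        return "clinic"
    return "home care"

# Identical inputs always yield identical outputs; the procedure
# cannot weigh anything its author did not anticipate in advance.
assert triage_decision(9, False) == "emergency"
assert triage_decision(5, True) == "isolation"
```

However complex the branching becomes, nothing inside such a procedure evaluates the rules themselves – which is precisely the work this essay assigns to judgment.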

What’s more, the prevailing theories all boil morality down to a single principle, such as the “categorical imperative”, or a single linear dimension of value, such as the “pleasure principle”, and their proponents then build unidimensional decision-procedure instruction sets that inevitably lead to distressing absurdities or outright horrors. Utilitarian calculi (pick whichever one you want, really) tend to lead to devastations like the agricultural famine in Ukraine, in the name of equalizing opportunities for the cessation of hunger, or radical insanities like anti-natalism, which argues that the goal of reducing overall suffering requires that we mandate barrenness for all of humanity. Kantians, on the other hand, would have us give alms to the poor in the name of our ontological duty, while simultaneously commanding us not to enjoy doing so, on pain of moral condemnation.

Lastly, Julia Annas1 points out that the decision-procedure (whatever it might be) looks suspiciously like a subtle substitution for mature judgment. Indeed, if we were mere robots or computers, with a slot in the side of the head into which one could insert an SD card with the appropriate set of procedural instructions, it would be hard to imagine why any such thing as philosophy, let alone ethics as a discipline, would even exist.

Virtue ethics, insofar as it recognizes the developmental nature2 and experiential complexity of moral maturity, ‘gets it right’. But Aristotle didn’t have the tools or the intellectual framework to conceive of a model sophisticated enough to make much sense outside of Athens in the fourth century BC. What’s more, later iterations have consistently failed, for much the same reason, to craft a system of values that can be claimed for all humans (let alone a method of evaluating the mastery of those values). One recent valiant attempt comes from Jonathan Haidt’s book, “The Righteous Mind”3 (though he would probably disagree that he was contributing to a system of virtue ethics). Haidt assembles a list of six “foundational” values that he attributes to everyone (in the west, at least) and argues that we differ from each other as human beings only with respect to our psychological “sensitivity” to each of these six values (“care”, “fairness”, “loyalty”, “authority”, “sanctity”, and “liberty”). All six of these propensities are present and set to ‘default sensitivities’ at birth, but they fluctuate as we grow and are influenced by environmental pressures. It isn’t clear from his book whether these fluctuations are like studio sound-board knobs that we consciously adjust (at least to some extent), or merely barometer needles reporting the determined outcomes of causal factors. If the former is the case, then his psychological theory might provide the basis for an Aristotelian normative theory in which each of these sensitivity ‘knobs’ is ‘tuned’ throughout life toward its optimal position. The point here isn’t to prove the case, but simply to show that the merger of psychology and normative ethics is at least plausible, and that such an approach would provide us with a developmental ethic that is simultaneously measurable.

This, it seems to me, is the real basis for the opposition to virtue ethics: not the lack of a ‘decision-procedure’ per se, but the lack of a measurable standard by which I can justifiably judge someone. With a sufficiently sophisticated understanding of human psychology, the “journeyman/apprentice” developmental approach to virtue ethics provides an ethical mentorship system with measurable outcomes. However, I can imagine two potential problems with this concept. First, who decides what the list of values is, how many there are, and what the optimal sensitivity settings are? This problem implies the need for some sort of ur-ethic that can be used to evaluate the evaluation system – and suddenly we’re plummeting into an infinite regress. Second, such a system could ultimately end up stratifying society into the ‘enlightened’ graduates and the ‘benighted savages’ who haven’t yet had the privilege of studying. To the first objection, I must admit I have no reply. It seems a bit like the paradoxes of set theory, and, as with set theory, it calls the whole system into suspicion. But if we’re willing to keep using sets – merely coping with the edge-case problems of set theory – then why not this moral theory as well? Perhaps because set theory won’t get you killed by the state if you run afoul of its paradoxes while using it. To the second objection, I would say that this doesn’t seem to me like a serious concern. If the system were fully adopted in an already liberal democratic culture, the transition would be almost invisible. Much of it simply describes habits of human psychology that we already observe. The rest would be a matter of crafting environments that steer developing minds in the right direction, while modeling appropriate behaviors. The latter is already a natural parental impulse, and the former could be done through modifications to existing social organizations or minor changes to legislation.

Given these arguments, it seems to me that if virtue ethics deserves condemnation for its lack of a decision-procedure, then the prevailing ethical systems that do implement one deserve far greater condemnation for producing, with those procedures, effects that directly oppose their stated goals. Furthermore, given advances in psychology and the flexibility of virtue ethics, it seems to me that, given no options other than these three, a virtue theory coupled with a mature understanding of human psychology would be far superior, regardless of its lack of a formal decision-procedure.

  1. Annas, Julia. In Ethical Theory: An Anthology (Blackwell Philosophy Anthologies), p. 681. Wiley. Kindle Edition.
  2. Ibid., p. 681.
  3. Haidt, Jonathan. The Righteous Mind: Why Good People are Divided by Politics and Religion, p. 146. Penguin Books Ltd. Kindle Edition.

Book Review: The Righteous Mind, Jonathan Haidt

Is it better to be truly just, or merely to seem so? This is the question put to Socrates by Glaucon in The Republic. Jonathan Haidt, in his book “The Righteous Mind”, counts Glaucon among the cynics for putting this challenge to Socrates. But Haidt is missing a subtle and very powerful nuance in Plato’s story. Socrates had just finished embarrassing Thrasymachus for his weak defense of cynical egoism. Glaucon and Adeimantus were certainly entertained, but they were not satisfied. They sought much stronger reasons for accepting the conclusion that true justice is preferable to its appearance, because they did not want merely to seem to agree with Socrates. They really wanted to believe that genuine justice was better, and offering Socrates the strongest objection that could be mustered was the only way an honest man (if he is honest with himself) could earn that belief.

Socrates’ initial response to Glaucon was not the description of the ideal state that the story has become famous for. Rather, it was a likening of the soul to the body. Repeated abuses and illnesses corrupt and degrade the health of the body over time, until at some point it is no longer possible to experience vigor and vitality. Likewise, says Socrates, repeated vices and injustices committed in pursuit of wealth or power or honor will eventually render the soul so degraded and corrupt that it will no longer be capable of achieving eudaemonia (aka ‘contentment’, ‘happiness’, or ‘flourishing’). This is the fate of the man who pursues a life of politics, without first tending to his soul.

Haidt seems almost proud of his “Glauconian cynicism” – a socio-biological view in which he believes he can show that, regardless of which is better, seeming just is what we humans actually seek. Haidt claims explicitly and confidently not to be offering an argument for what ought to be, only what is. But the enthusiasm with which he reports this supposed scientific fact suggests that he also thinks that what is, just is what ought to be. But this is precisely the challenge posed to Socrates by Glaucon: it certainly is true that many people (perhaps even most) are cynical and self-serving. So, why oughtn’t they be? Haidt’s response to this recurring implicit question seems to be to just keep reasserting the fact, in ever more sophisticated and complex ways.

Near the end of the book, despite his earlier explicit refusal to address the problem of normative ethics, Haidt tosses off a flippant endorsement of Utilitarianism, as if that view had already settled the normative question – or simply to signal to the reader that the question just isn’t interesting enough to bother investigating. But this has profound implications for how seriously one can take some of the claims he makes in this book. The tension between what is and what ought to be plagues the whole work, and any reader eager for insight into the gap between descriptive and normative ethics will find it profoundly frustrating.

The Basic Theory, and Its Problems

Haidt’s basic theory of the “Righteous Mind” comes down to two hypotheses. First, that the human brain has evolved for both “tribal” and “hive” social structures. To put it in his terms, “we are 90% chimp, and 10% bee”, and a special “hive switch” in the brain is flipped, when conditions are ideal, that suppresses our self-interested “chimp” psychology and makes us more altruistically “hive-ish”. It’s not quite clear what sort of mechanism this “switch” is, what causes it to flip, or how it gets reset. But he offers a lot of anecdotes from his research describing evidence that suggests its presence.

The second, and much more complex, portion of the theory is his six-dimensional model of moral psychology. His system is powerfully reminiscent of David Hume’s own four-pole system of moral emotions (Pride-vs-Humility / Love-vs-Hate). But there is one extremely significant difference. Hume’s theory was meant to describe morality as a system of “passions” (special kinds of emotions). These passions derive from a natural propensity for pleasure and a natural aversion to pain (he presages the Utilitarians in this respect). What’s more, moral judgments are not reasoned, but felt. Morality, for Hume, just is emotions expressed. Haidt’s theory, on the other hand, describes six dimensions of values, not emotions: Care, Fairness, Loyalty, Authority, Sanctity, and Liberty. Haidt says that all human beings have this six-dimensional system built-in as a consequence of thousands of years of socio-biological evolution. He argues that the “sensitivity level” of each of these dimensions is not permanently fixed, but is set to “defaults” at birth and adjusted over a lifespan by experience. How, precisely, this happens, and by what mechanism, is a bit murky, but again, he offers loads of anecdotal examples (and data from his studies) to show how each of these dimensions is expressed by individuals.

A few questions and objections arose for me about these two hypotheses as I read through the book, and they never seemed to get a satisfactory answer. First, on the six aspects: are they like adjuster knobs on a sound board? Or are they merely barometer needles, reporting varying pressure levels set by environmental impacts on a biological system? If the former, then surely there are “optimal” positions for each of these knobs (even if only circumstantially optimal)? In that case, there is indeed an opening for a normative ethical theory describing these optimums. If the latter, however, it is hard to understand how there could be any such thing as an “ought” at all, much less a system prescribing oughts. Haidt is constantly nudging up to the edge of this Humean is-ought cliff, and retreating from it just when things start to get interesting.

Second, Haidt never quite explicitly acknowledges that he’s describing a system of values, rather than a moral psychology. One might object that a system of values could be said to be a variety of moral psychology, but I would reply that by the time we get to values, we’re already one layer above fundamental psychology. Why these six, and not others? Indeed, Haidt explicitly acknowledges that some early reviewers of the book objected to the absence of “equality” from his list of “aspects”. If “liberty” counts as a foundational psychological value, then why not “equality”? It has just as long a history, after all. More importantly, to talk of values at all is once again to flirt with the realm of the normative. I would have to look more closely at the research he used to back this section of the book, but how do we know he didn’t just happen to find the set of six values that he and his team were already particularly focused on? That is a normative selection process: “these values are more important than those”.

Third, returning to the “hive switch”, Haidt emphasizes the “dangers” of too much hive-ishness or too much groupishness. But he never quite explains how there could be any such thing as a “right amount” of either in the absence of a normative theory. Without any idea of what an ideal amount of either would look like, why would the horrors of Hobbesian anarchy or Stalinist oppression even count as “bad”? Lower primates seem perfectly satisfied with brutal inter-tribal conflict, and ants are obliviously willing to destroy themselves en masse for the sake of colony and queen. What’s worse is that there’s no clear explanation for how the “hive switch” and the six-dimensional moral psychology fit together. Do certain knob settings produce hives instead of tribes, and others tribes instead of hives? What are the right tension levels between the two modes? If the knob settings do influence this, how do we know what they should be? None of this is discussed in the book, except in passionate warnings to beware of extremes. A laudable sentiment, but so what?

Lastly, while Frans de Waal is largely an asset to Haidt’s book, there is one key notion from de Waal that highlights the primary problem with Haidt’s “Glauconian moral matrix”; de Waal captured it in a rather pithy phrase: Veneer Theory. In his book, “Primates and Philosophers”, de Waal uses the phrase to criticize Huxley and Dawkins for uncritically accepting a Hobbesian view of human nature without explaining how a self-serving egoist gets to altruism all on his own. Haidt’s book suffers from a similar problem. Though he does a great job of bridging the gap between the egoist and the “group-altruist”, he fails to explain how the “Glauconian cynic” becomes a genuinely caring being. Haidt has concocted his own variety of Veneer Theory by redefining it as a complex inter-subjective social delusion that we all agree to participate in. He takes this as an answer to the problem of a “veneer” layer, but it only makes his own set of theories look like a Rube Goldberg machine. Haidt makes a strong case for the biological and psychological reality of moral experience as a genuine phenomenon. But this works directly against the idea that we merely wish to appear to care, or to be virtuous. Why layer a “moral matrix” on top of a perfectly reasonable explanation of genuine moral emotions? More to the point, why would evolution tolerate such an expensive and convoluted cognitive load – layers of delusion – on top of the already demanding task of navigating the social world in real time? Even more curiously, why would we count the primitive primate morality of chimps and bonobos as “actual” or “genuine”, while regarding our own as a mere matrix-like delusion?

Final Thoughts

Anyone who has read the entirety of The Republic has to come to terms with a powerful dissonance in Plato’s tale. Either Socrates truly misunderstood human nature (perhaps he confused it with his own psychological projections), or he didn’t actually believe what he was saying. Some philosophers argue for the latter: that the ideal state was intentionally drawn as an impossibility. Socrates was never going to convince the Athenians to drive all the old folks out of the city in order to start afresh, or convince the educated classes to surrender their private property holdings to the commons, or convince them to put their women and children into a breeding commune to be tended by specially bred and trained guardians. He must have known that. So what was really going on here? Remember that the tale was written by Plato, long after Socrates’ execution. Plato was engaging in his own bit of cynical rhetoric, grounded in bitterness. He wanted to demonstrate the utter impossibility of the larger task: convincing men to love virtue for its own sake; to be just, rather than simply to appear just. He had given up on the possibility, and the Republic was his way of showing this. It is hard to blame him, on one level. He’d watched these people destroy his master and teacher – a man for whom Plato had given up a promising life as a poet, in order to follow him in philosophy. Haidt, on the other hand, embraces his cynicism with zeal, because he believes the data tells him he must, and he refuses to even entertain the possibility that we might just be better than that. In effect, he takes Plato’s implicit condemnation of man and turns it into a simple matter-of-fact. But recasting the condemnation as mere description doesn’t change the moral reality; it just hides it behind a veil of cynicism.

Kant vs Aristotle: Virtue and the Moral Law

Kant’s critique of Aristotle is fascinating to me. He uses Aristotle’s own standard against him: if virtue consists in achieving excellence in the unique purpose of a human life, and if that unique purpose is identified by isolating the features unique to the organism as opposed to other organisms, then you have the problem of explaining how reason – our unique feature – could be better suited to helping humans achieve excellence at attaining ‘material ends’ (aka ‘happiness’) than the much more efficient and much less costly instinct, which all other animals have as well.

This is enough for Kant to argue that reason must then have some other purpose — which for him, is accessing ‘universal absolutes’ and functioning as the standard of ‘value’ he ascribes to the “good” will. But in making this move, Kant is also implicitly conceding Aristotle’s notion of a teleological end for which man has been “formed”. He’s simply arguing that Aristotle was muddled about the particulars, and that he has managed to sort it all out for us.

But, in order to make his criticism of Aristotle, Kant needs to reduce the Greek notion of eudaemonia to (apparently) nothing more than the continuous satisfaction of contingent desires. Since these desires are ‘merely subjective’, dependent on circumstance, and governed exclusively by the ‘laws of nature’, their satisfaction can have no ‘moral worth’, because moral worth consists in the ‘good will’ acting on the recognition of necessary duties, found in the ‘moral law’ by way of pure reason, which is independent of contingent circumstances. Thus, hypothetical imperatives cannot “be moral”.

What’s ironic about all of this is that Kant seems to be arguing with Aristotle from the point of view of Plato. Kant wants there to be an absolute truth about moral rules, in a mathematical sense (he even makes an analogy to geometry at one point). He frequently makes reference to the difference between the sensible and the intelligible world, and with it he draws a distinction between absolute value and relative value. All of these notions are constantly present in Plato’s dialogues. Even the distinction between ‘material’ ends and ‘ultimate’ ends is something of a dispute between Aristotle (the Nicomachean Ethics) and Plato (the Timaeus, the Republic).

It seems to me that the debate around free will and morality always resolves itself into the same dichotomies: objective-subjective, ‘intelligible’-‘sensible’, necessary-contingent, absolute-relative, and of course descriptive-normative. Has Kant added anything new to this dispute, beyond Plato and Aristotle? I’m not so sure. The appeal to absolutes is a seductive one. Intuitively, it seems that a moral ‘rule’ could not be valid if it were not absolute, because anything less than “true for everyone, everywhere, at all times” is simply a preference, by definition. However, Kant’s hypothetical examples of the Categorical Imperative in the Groundwork are notoriously confused, and in at least one case (false promises) they seem to argue against the categorical imperative itself. If Kant himself could not imagine at least one unequivocal practical example of his imperative, it’s hardly fair to expect anyone else to. Kant, I suppose, would have argued that even though ‘normal’ folk aren’t philosophers, they still “get it, deep down”. Maybe that’s what I was doing when I mentioned the intuitive appeal of absolutes. Still, it seems a bit like “cheating” for Kant to appeal to common sense when, all throughout the book, he argues that a properly philosophical understanding of morality must be grounded in rigorous logical universals. I’ll have more to say about this later…