Month: February 2018

Hume, Plato, and the Impotence of Reason

Hume infers, from his insight that it is moral opinion and not reason that moves us to act, that reason is not the source of moral opinion. From this, he argues further that moral opinion is a product of the passions – special emotions that arise out of the relations of ideas and impressions. In this essay, I will argue that Hume’s initial inference is correct, but that his subsequent inference is not. Passions may indeed arise from relations of ideas and impressions, but there is no good reason to presume that the passions, though necessary, are sufficient to produce a moral opinion.

So, what exactly is a “moral opinion”? Plato believed that opinion (doxa) lay in the gray area between true and false belief. He argued that opinion does not deserve the status of knowledge because it lacks the grounding in something eternal and unchanging required to rise to the level of epistêmê (knowledge). If we apply this standard to moral opinion, then, it would be a doxastic belief about the rightness or wrongness of an action, or the goodness or badness of a character. Moral knowledge, on the other hand, would be a belief of a much stronger type. To say, for example, that I believe stealing to be unpleasant, or that I wouldn’t do it, or that it seems wrong, is to say something contingently true, relative to myself, and subject to correction. To say “stealing is wrong”, on the other hand, is to make a truth claim asserting the real existence of a property in a certain class of actions. To make such an assertion, I would need to be able to identify the property, point it out, and name it. That would require some sort of perception, and perception requires a sense organ, a faculty of the mind, or both. Plato would rule out a sense organ as the source of our moral knowledge, because sensible phenomena are mere imperfect reflections of the ultimate reality of the form of the good.

The faculty of the mind that perceives such things as rightness or wrongness, according to Plato, is therefore a certain kind of judging faculty that need not rely on the senses. The traditional interpretation is to say that this is reason, and to make analogies to mathematics to bolster the claim, because this is what Plato seems to do, but I disagree. In the Phaedrus, Plato provides an ornate metaphor for his tripartite soul: that of a charioteer and two great horses. Plato puts reason in the charioteer’s seat, and assigns the roles of appetite (the passions) and judgment (moral judgment) to the two horses. This arrangement is important, because it speaks to Hume’s own assertion that “reason is, and ought only to be, the slave of the passions”. The charioteer is the apprehending ego, the “reasoning” member of the triad. He does not motivate the chariot; he only steers it. This is consonant with Hume’s view that reason can only guide the passions, but where Hume fails, to borrow Plato’s analogy, is in thinking there is only one horse. Hume is presuming there is no judging faculty. With only one horse to pull the chariot, the best the driver can do is provide a bit of helpful guidance as the appetitive horse causes the chariot to careen in whatever direction its whim pulls it.

Jonathan Haidt, in his 2012 book The Righteous Mind, provides a more vivid metaphor: that of a thin, scantily clad tribesman mounted atop an unruly African elephant. In this metaphor, the elephant is almost entirely in control, and all the rider can do is suggest minor alterations in direction with a swat of reeds or a tug on a rope. His metaphor, like Hume’s, includes no faculty of judgment, no capacity to discern the difference between mathematical or empirical facts and the normative consequences of actions taken in light of them, and no capacity for selecting among possible goals. For Haidt, the elephant dictates the terms of engagement to the rider, and the rider’s only choice is how enthusiastically to accept them. Haidt is dutiful in his acceptance of Hume’s model of moral psychology. But Hume, as I have said, is wrong. Hume is indeed correct that the rider does not form his opinions on his own, but he is wrong to say that they must derive from the elephant. Hume is ignoring the judging horse. Kant, reacting to his own observation of this problem, attempts to right the ship by overcorrecting in the opposite direction: he denies the importance of the appetitive horse and gives control of the chariot exclusively to the driver. On Kant’s model, neither appetite nor judgment is empowered to lead us anywhere, and the charioteer is forced to get out and push the chariot, out of “respect for the moral law”. This will not do.

To properly form a “moral opinion”, as anything more than mere opinion, requires judgment. Judgment is the reconciliation of “is” with “ought”, by means of a value determination. That determination requires a negotiation between experienced desires and reasoned principles. In this way, the charioteer and his two horses have an equal say in the speed, direction, and ultimate destination of the chariot. For all its metaphorical mysticism, Plato’s model of the tripartite soul is a profound insight into human character that is lacking in almost all of his successors, save, perhaps, Aristotle. The rational portion of the soul is the master of what is, the appetitive portion is the master of what I want to be, and the judging portion of the soul is the master of what ought to be. Our task, as thinking, self-conscious human beings, is to train ourselves so that these masters learn to live in harmony with one another. When we do, the result is eudaemonia.

Autism and Trolleys – One Good Reason To Reject Utilitarianism

In recent years, it has been speculated that Jeremy Bentham was autistic. This speculation arises out of Bentham’s extreme attempts at systematizing human interactions in his formulation of Utilitarianism. Though I realize modern Utilitarianism is much more sophisticated (in its various sociological and econometric forms), I think all of these variants still suffer from the fundamental assumptions laid down by Bentham. In this essay, I will show how one of those basic tenets leads to absurd conclusions, and how it hides value assumptions imported from other forms of ethics. What better way to do this than with Philippa Foot’s trolley problem, a common modern tool of the Utilitarian?

Initial assumptions

  1. I’m working with traditional Utilitarianism, not any of the more modern econometric notions of Utility. The more sophisticated versions of Utilitarianism would pretend to have an answer to this problem, but I don’t have the space to deal with that here.
  2. I’m assuming “aggregate” pleasure is what we’re after, and not individual pleasure, since neither Bentham nor Mill was willing to concede to pure individualistic hedonism.
  3. I’m assuming all the passive participants in the trolley scenario are “blank slates”, and are of absolutely equal “value” in some objective sense, in order to force the dilemma (i.e., it wouldn’t be much of a dilemma if the five were orphans and the one was Hitler).

The Groundwork

Now, Bentham had this idea that we might be able to parse pleasure and pain into quanta of measurable units. In keeping with the mindset of the time, and in an attempt to take Bentham’s idea to its logical limits (something he was himself prone to do), let’s call these quanta “hedons” and “dolors”: hedons (from “hedonism”) are the finite quanta of pleasure, and dolors (from the Latin for “pain”) are the finite quanta of pain. For each individual, then, imagine a one-dimensional graph in which the zero-line runs through the horizontal center. Zero is equivalent to “indifferent”, anything above zero is equivalent to “pleasurable”, and anything below is equivalent to “painful” (like a barometer that can read into the negative). For example:

Here, +10 would be something like an orgasm whilst simultaneously eating a custard eclair in a warm Jacuzzi bath, and -10 would be something like having your Johnson burned off with an acetylene torch whilst rabid dogs gnaw your fingers off in an ice storm.
Since we’re assuming “blank slate” participants, everyone starts out at zero (absolute indifference), and everyone has an equal capacity for either +10 or -10. Also, since we’re dealing with aggregates rather than individuals, we need to sum this across all six passive participants. That gives a maximum potential of +60 or -60 for the group (6 people × 10). Lastly, since you can feel neither pleasure nor pain when you’re dead, you cease to count toward the aggregate once you are dead.
In the trolley case, we are assuming that the trolley is going to kill whichever passive participants it strikes, not just seriously maim them. That means whomever it hits is effectively removed from the aggregate of total hedons and dolors available for our “greatest good” calculation. Next, I think it’s safe to assume a reasonably sympathetic disposition in most people, so witnessing a horrible tragedy is going to cause some serious distress, and we have to decide how many dolors that amounts to. I am also willing to concede that the relief at realizing it’s not me that got hit by the trolley will result in the addition of some hedons. Let’s say witnessing the tragedy is equivalent to 2 dolors, and the self-interested relief is equivalent to 1 hedon.
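
To make the bookkeeping explicit, here is a minimal sketch of the scheme as stipulated so far. The Python, the function and variable names, the ±10 per-person scale, and the rule that the dead drop out are simply my encoding of the assumptions above; none of it is found in Bentham.

```python
# A minimal sketch of the hedon/dolor bookkeeping stipulated above.
# The +/-10 per-person scale and the rule that the dead drop out of the
# aggregate are this essay's assumptions, not anything found in Bentham.

SCALE_MAX = 10  # each living person can register anywhere from -10 to +10


def aggregate(scores):
    """Sum net hedons/dolors over the living; the dead (None) simply drop out."""
    living = [s for s in scores if s is not None]
    if any(abs(s) > SCALE_MAX for s in living):
        raise ValueError("score outside the stipulated -10..+10 range")
    return sum(living)


# Six blank-slate participants all start at perfect indifference:
print(aggregate([0, 0, 0, 0, 0, 0]))              # 0
# Maximum potential pleasure or pain for the group (6 people x 10):
print(aggregate([10] * 6), aggregate([-10] * 6))  # 60 -60
# One participant dead, the rest indifferent:
print(aggregate([None, 0, 0, 0, 0, 0]))           # 0
```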

The Experiment

The trolley scenario I face today is as follows:
* (a) If I pull the lever to the left, I drive the trolley over the five passive participants.
* (b) If I pull the lever to the right, I drive the trolley over the one passive participant.
In situation (a), five individuals are removed from the aggregate total of hedons and dolors, so we are left with only the one person on the opposite track. He experiences 2 dolors witnessing the tragedy and 1 hedon of relief, for a total aggregate score of -1 on the “greatest good” scale.
In situation (b), one individual is removed from the aggregate total of hedons and dolors. This leaves us with a total aggregate potential of +50/-50 (the five people on the other track). Each experiences 2 dolors at witnessing the tragedy on the other track, for a total of 10 dolors. Each experiences 1 hedon at the relief of not being the victim, for a total of 5 hedons. The basic number-line calculation is then -10 + 5 = -5. In other words, we’re left with an aggregate “greatest good” score of -5.
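
For anyone who wants to check the arithmetic, the same two results fall out of a few lines of Python. The figures (2 dolors for witnessing, 1 hedon of self-interested relief, nothing at all from the dead) are just the stipulations above, and the names are mine:

```python
# Checking both scenarios with the stipulated figures: each survivor feels
# 2 dolors from witnessing the tragedy and 1 hedon of self-interested relief,
# and the dead contribute nothing to the aggregate.
WITNESS_DOLORS = 2
RELIEF_HEDONS = 1


def aggregate_score(survivors):
    """Net hedons minus dolors, summed over everyone still alive."""
    return survivors * (RELIEF_HEDONS - WITNESS_DOLORS)


print("Scenario (a), run over the five:", aggregate_score(survivors=1))  # -1
print("Scenario (b), run over the one:", aggregate_score(survivors=5))   # -5
```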

So you see, since a net of one dolor is better than a net of five dolors on the aggregate scale, it is therefore better to run over the five people than to run over the one (all other things being equal).

Interpreting The Results

Now, outside of the framework of Utilitarianism as I have described it here, do I subscribe to this as a reasonable moral theory? Would I actually be willing to run over five people instead of one? In real life, this is a choice I’m not likely ever to face. But if I were, my response would be driven by psychological and emotional causes, not by Utilitarian calculations, which are far too speculative and complex to aid anyone in a moment of extreme stress. Of course, Mill would tell you that constant practice and study would leave you with something like a “second nature” that would respond to such situations. But this begs the question. In any case, I am inclined to refuse to answer the question posed by trolley scenarios.

Firstly, the natural impulse to run over one instead of five has more to do with the contrived nature of the trolley experiment than it does with proving Utilitarianism. Why should we assume “blank slates” are standing on the tracks? What if the five are a euthanasia club awaiting their prize? Pulling the lever to spare them would then cause great distress, because their wishes would go unfulfilled. On the other hand, what if the one man on the other track is a Nobel-winning agricultural scientist who is on the verge of solving the world hunger problem? It seems to me that killing five to save him is well worth the cost.

Secondly, these trolley scenarios, and Utilitarianism more generally, pass off individual prejudices as objective values. Who am I to decide which people must die, and which must live? Why is my calculation of what’s more pleasurable in any sense synonymous with the objective discovery of what’s good? Aristotle, for one, would have scoffed at such an equivocation.

Thirdly, the whole scenario implicitly adopts life itself as a value above and beyond Utilitarian considerations of pain and pleasure. In other words, it would be better to be alive and suffering from the loss of a limb due to a trolley accident than to be dead and suffer no pain at all. This value cannot be coherently established within Utilitarianism, and some philosophers have actually committed themselves to denying it as a result. David Benatar comes to mind. Arguing from more or less the same Utilitarian presuppositions as I have established in this essay, he concludes that the whole of the human race should be rendered impotent, so as to prevent any more human beings from coming into existence, because the accumulated dolors versus hedons (my terms) of existence outweigh the net null of not existing at all.

The Conclusion

Clearly, any framework for ethical calculus that can lead us to the conclusion that death is preferable to life is fundamentally flawed. Even David Benatar himself asserts that the presently living have some sort of “interest” in remaining alive (confusingly, while still insisting that their suffering far outweighs any interest in being alive). Worse yet, any ethical system that implicitly requires the elevation of some individual, or small group of individuals, to arbiter of an imaginary objective “greater good” is demonstrably a bad thing. The late 19th century, and all of the 20th, is a wasteland of Utilitarian utopianism – giant state bureaucracies filled with officious autistics, and political systems overrun by narcissistic do-gooders, all hell-bent on “making society compassionate”, at all costs.

The trolley scenario I have laid out here is a metaphorical demonstration of just this problem. Utilitarianism, as an ethical system, is at best a decision-making tool to be used in very specific, very short-term situations, after we’ve already established a set of moral presuppositions from which to frame the calculations. The Utilitarianism of this trolley scenario relies on the presupposition of life as a value; specifically, human life. But Utilitarianism as a doctrine need not presuppose such a value. This is why many philosophers criticize Utilitarianism for failing to properly protect rights – they are intuitively recognizing the fact that Utilitarianism is anti-life. When human lives themselves become an expendable means to some other, greater abstract goal, the ethical system that led us there is highly suspect at best. There are all sorts of other problems with Utilitarianism, but this one is enough by itself to suggest that we ought not adopt it with any degree of confidence.