Month: January 2018

Judging Virtue

It has been put by some that virtue ethics lacks a decision-procedure to help us make moral decisions and is therefore not a good moral theory. In this essay, I will argue that the decision-procedure is not a satisfactory standard for judging ethical systems, because decision-procedures do not take the full experience of human morality into account, and because the theories implementing them often achieve exactly the opposite of their stated goal. I then offer an approach to virtue ethics that I think might salvage the theory as a whole, and I conclude that, despite my moral skepticism, such a theory would be preferable to decision-procedure-based approaches.

To begin with, why should a decision-procedure be the standard by which we judge a moral theory? It might be argued that decision-procedures are commonplace tools for making choices in many situations. So, why not under moral circumstances as well? While it is true that there are many contexts in which flowcharting and process modeling are useful, these are practical problems, not necessarily ethical ones. There is already a concrete goal in mind, risks have already been calculated, and processes are followed according to plan, in the hope of achieving the material goal. Decision-procedures are explicitly useful in software development because, in fact, there is no other way to direct the behavior of the computer. Its programs are its decision-procedures. It is designed to do nothing but execute those procedures against given inputs. Anyone familiar with the industry knows that dozens of different languages for building decision-procedures proliferate, often for no other reason than that they are fun to invent. But the human mind is radically different from a computer, not just in degree but in kind. It is certainly true, as I just described, that the human mind can process decision-procedures. That’s what made computers possible in the first place. We’re very good at crafting tools to relieve ourselves of tedious burdens. Executing decision-procedures, though, is just one kind of operation in which the human mind engages. The mind is an organ (perhaps one of the most important organs) resting inside the head of a complex, dynamic, constantly changing biological organism with a sophisticated psychology, capable of far more than calculating sums or following instructions.
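To make the contrast vivid, here is a toy sketch of what a decision-procedure amounts to in software terms: a fixed mapping from inputs to outputs, followed mechanically. The function name, thresholds, and categories are invented purely for illustration.

```python
def triage(temperature_c, heart_rate):
    """A toy decision-procedure: fixed rules, applied mechanically.

    The thresholds and labels are invented for this example; the point
    is that the machine only ever follows the branches it was given.
    """
    if temperature_c > 39.0 or heart_rate > 120:
        return "urgent"
    if temperature_c > 37.5 or heart_rate > 100:
        return "monitor"
    return "routine"

# The same inputs always yield the same output; nothing is weighed or felt.
print(triage(38.0, 90))   # "monitor"
print(triage(36.8, 80))   # "routine"
```

The computer executes exactly this and nothing more; whether the rules themselves are wise is a question the procedure cannot ask.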
It is an organ that is capable, in combination with the entire body of the organism, of emotional responses to its environment and, perhaps most important of all, making qualitative evaluations of the relationship between the sensed and calculated reality, and the subjective emotional response to that reality. This is where the realm of moral judgment lies: in the qualitative gap between subject and object. Decision-procedures, therefore, are the wrong “tool for the job”, because they fail (in all the prevailing theories) to account for the full moral experience of the human being.

What’s more, the prevailing theories all boil morality down to a single principle, such as the “categorical imperative”, or a single linear dimension of value, such as the “pleasure principle”, and their proponents then build unidimensional decision-procedure instruction sets that inevitably lead to distressing absurdities or outright horrors. Utilitarian calculi (pick whichever one you want, really) tend to lead to devastations like the agricultural famine in Ukraine, carried out in the name of equalizing opportunities for the cessation of hunger, or radical insanities like anti-natalism, which argues that the goal of reducing overall suffering requires that we mandate barrenness for all of humanity. Kantians, on the other hand, would have us give alms to the poor in the name of our ontological duty, while simultaneously commanding us not to enjoy doing so, on pain of moral condemnation.

Lastly, Julia Annas1 points out that the decision-procedure (whatever it might be) looks suspiciously like a subtle substitution for mature judgment. Indeed, if we were mere robots or computers, with a slot in the side of the head into which one could insert an SD card with the appropriate set of procedural instructions, it would be hard to imagine why any such thing as philosophy, let alone ethics as a discipline, would even exist.

Virtue ethics, insofar as it recognizes the developmental nature2 and experiential complexity of moral maturity, ‘gets it right’. But Aristotle didn’t have the tools or the intellectual framework to conceive of a model sophisticated enough to make much sense outside of Athens in the fourth century BC. What’s more, later iterations have consistently failed, for much the same reason, to craft a system of values that can be claimed of all humans (let alone a method of evaluating the mastery of those values). One recent valiant attempt at this comes from Jonathan Haidt’s book, “The Righteous Mind”3 (though he would probably disagree that he was contributing to a system of virtue ethics). Haidt assembles a list of six “foundational” values that he attributes to everyone (in the West, at least) and argues that we differ from each other as human beings only with respect to our psychological “sensitivity” to each of these six values (“care”, “fairness”, “loyalty”, “authority”, “sanctity”, and “liberty”). All six of these propensities are present and set to ‘default sensitivities’ at birth, but they fluctuate as we grow and are influenced by environmental pressures. It isn’t clear from his book whether these fluctuations are like studio sound-board knobs that we consciously adjust (at least to some extent), or are merely barometer needles reporting the determined outcomes of causal factors. If the former is the case, then his psychological theory might provide the basis for an Aristotelian normative theory in which the position of each of these sensitivity ‘knobs’ is ‘tuned’ throughout life toward its optimal setting. The point here isn’t to prove the case, but simply to show that the merger of psychology and normative ethics is at least plausible, and that such an approach would provide us with a developmental ethic that is simultaneously measurable.
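The ‘knob’ metaphor can be sketched in a few lines of code. The six foundation names come from Haidt as quoted above; the numeric scale, the flat default, and the update rule are invented here purely to illustrate the metaphor, not drawn from his book.

```python
# Haidt's six foundations, modeled as tunable 'sensitivity knobs'.
# The 0..1 scale, the default of 0.5, and the nudge rule are all
# illustrative assumptions, not part of Haidt's actual theory.
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity", "liberty"]

def default_profile():
    # 'Set to default sensitivities at birth': a flat starting point.
    return {f: 0.5 for f in FOUNDATIONS}

def nudge(profile, foundation, delta):
    """Environmental pressure (or deliberate cultivation) adjusts one
    knob, clamped to the 0..1 range."""
    adjusted = dict(profile)
    adjusted[foundation] = min(1.0, max(0.0, adjusted[foundation] + delta))
    return adjusted

profile = default_profile()
profile = nudge(profile, "care", +0.2)       # upbringing heightens care
profile = nudge(profile, "authority", -0.3)  # environment dampens authority
```

Whether the nudges are chosen (sound-board knobs) or merely suffered (barometer needles) is exactly the open question the paragraph above raises; the sketch is neutral between the two readings.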

This, it seems to me, is the real basis for the opposition to virtue ethics: not the lack of a ‘decision-procedure’ per se, but the lack of a measurable standard by which I can justifiably judge someone. With a sufficiently sophisticated understanding of human psychology, the “journeyman / apprentice” developmental approach to virtue ethics provides an ethical mentorship system with measurable outcomes. However, I can imagine two potential problems with this concept. First, who decides which values make the list, how many there are, and what the optimal sensitivity settings are? This problem implies the need for some sort of ur-ethic that can be used to evaluate the evaluation system – and suddenly, we’re plummeting into an infinite regress. Second, such a system could ultimately end up stratifying society into the ‘enlightened’ graduates and the ‘benighted savages’ who haven’t yet had the privilege of studying. To the first objection, I must admit I have no reply. It seems a bit like the paradoxes of set theory, and like set theory, it calls the whole system into suspicion. But if we’re willing to continue using sets – merely coping with the edge-case problems of set theory – then why not this moral theory as well? Perhaps because set theory won’t get you killed by the state if you run afoul of its paradoxes while using it. To the second objection, I would say that this doesn’t seem to me like a serious concern. If the system were fully adopted in an already liberal democratic culture, the transition would be almost invisible. Much of the system simply describes habits of human psychology that we already observe. The rest would be a matter of crafting environments that steer developing minds in the right direction, while modeling appropriate behaviors. The latter is already a natural parental impulse, and the former could be done through modifications to existing social organizations or minor changes to legislation.

Given these arguments, it seems to me that if virtue ethics deserves condemnation for its lack of a decision-procedure, then the prevailing ethical systems that do implement one deserve far greater condemnation for producing effects that directly oppose their stated goals. Furthermore, given advances in psychology, and given the flexibility of virtue ethics, it seems to me that if we are offered no options other than these three – Kantian, utilitarian, or virtue-based – a virtue theory coupled with a mature understanding of human psychology would be far superior, regardless of its lack of a formal decision-procedure.

  1. Ethical Theory: An Anthology (Blackwell Philosophy Anthologies) (p. 681). Wiley. Kindle Edition. 
  2. Ethical Theory: An Anthology (Blackwell Philosophy Anthologies) (p. 681). Wiley. Kindle Edition. 
  3. Haidt, Jonathan. The Righteous Mind: Why Good People are Divided by Politics and Religion (p. 146). Penguin Books Ltd. Kindle Edition.