Category: ethics

Judging Virtue

Some have argued that virtue ethics lacks a decision-procedure to help us make moral decisions and is therefore not a good moral theory. In this essay, I will argue that a decision-procedure is not a satisfactory standard for judging ethical systems, because such procedures do not take the full experience of human morality into account, and because the theories that implement them often achieve exactly the opposite of their stated goals. I then offer an approach to virtue ethics that I think might salvage the theory as a whole, and I conclude that, despite my moral skepticism, such a theory would be preferable to decision-procedure based approaches.

To begin with, why should a decision-procedure be the standard by which we judge a moral theory? It might be argued that decision-procedures are commonplace tools for making choices in many situations. So, why not under moral circumstances as well? While it is true that there are many contexts in which flowcharting and process modeling are useful, these are practical problems, not necessarily ethical ones. There is already a concrete goal in mind, risks have already been calculated, and processes are followed according to plan, in the hope of achieving the material goal. Decision-procedures are explicitly useful in software development because, in fact, there is no other way to direct the behavior of the computer. Its programs are its decision-procedures. It is designed to do nothing but execute those procedures against given inputs. Anyone familiar with the industry knows that dozens of different languages for building decision-procedures proliferate, often for no other reason than that they are fun to invent. But the human mind is radically different from a computer, not just in degree but in kind. It is certainly true, as I just described, that the human mind can process decision-procedures. That’s what made computers possible in the first place. We’re very good at crafting tools to relieve ourselves of tedious burdens. Executing decision-procedures, though, is just one kind of operation in which the human mind engages. It is an organ (perhaps one of the most important organs) resting inside the head of a complex, dynamic, constantly changing biological organism with a sophisticated psychology, capable of more than calculating sums or following instructions. It is an organ that is capable, in combination with the entire body of the organism, of emotional responses to its environment and, perhaps most important of all, of making qualitative evaluations of the relationship between the sensed and calculated reality and the subjective emotional response to that reality. This is where the realm of moral judgment lies: in the qualitative gap between subject and object. Decision-procedures, therefore, are the wrong “tool for the job”, because they fail (in all the prevailing theories) to account for the full moral experience of the human being.
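
To make the contrast concrete, here is a minimal sketch of what a decision-procedure amounts to for a computer. The scenario, function name, and thresholds are my own hypothetical illustration, not drawn from any of the theories discussed: a fixed instruction set executed against given inputs, with no qualitative evaluation anywhere in the loop.

```python
# A hypothetical decision-procedure: every name and threshold here is invented,
# used only to illustrate what "executing procedures against inputs" means for a machine.

def donate_decision(disposable_income: float, suffering_score: int) -> str:
    """Mechanically walk a fixed set of branches and return an output.
    Nothing here weighs how the situation feels to the agent."""
    if suffering_score >= 8 and disposable_income > 0:
        return "donate half of disposable income"
    if suffering_score >= 4:
        return "donate a small amount"
    return "do not donate"

print(donate_decision(disposable_income=200.0, suffering_score=6))
# -> "donate a small amount"
```

Whatever language such a procedure is written in, it is exhausted by its branches; nothing in it occupies the qualitative gap between subject and object described above.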

What’s more, the prevailing theories all boil morality down to a single principle, such as the “categorical imperative”, or a single linear dimension of value, such as the “pleasure principle”, and then their proponents build unidimensional decision-procedure instruction sets that inevitably lead to distressing absurdities or outright horrors. Utilitarian calculi (pick whichever one you want, really) tend to lead to devastations like the agricultural famine in Ukraine, carried out in the name of equalizing opportunities for the cessation of hunger, or radical insanities like anti-natalism, which argues that the goal of reducing overall suffering requires that we mandate childlessness for all of humanity. Kantians, on the other hand, would have us give alms to the poor in the name of our ontological duty, while simultaneously commanding us not to enjoy doing so, on pain of moral condemnation.

Lastly, Julia Annas1 points out that the decision-procedure (whatever it might be) looks suspiciously like a subtle substitution for mature judgment. Indeed, if we were mere robots or computers, with a slot in the side of the head into which one could insert an SD card with the appropriate set of procedural instructions, it would be hard to imagine why any such thing as philosophy, let alone ethics as a discipline, would even exist.

Virtue ethics, insofar as it recognizes the developmental nature2 and experiential complexity of moral maturity, ‘gets it right’. But Aristotle didn’t have the tools or the intellectual framework to conceive of a model sophisticated enough to make much sense outside of Athens in the fourth century BC. What’s more, later iterations have consistently failed, for much the same reason, to craft a system of values that can be claimed of all humans (let alone a method of evaluating the mastery of those values). One recent valiant attempt at this comes from Jonathan Haidt’s book, “The Righteous Mind”3 (though he would probably disagree that he was contributing to a system of virtue ethics). Haidt assembles a list of six “foundational” values that he attributes to everyone (in the west, at least) and argues that we differ from one another as human beings only with respect to our psychological “sensitivity” to each of these six values (“care”, “fairness”, “loyalty”, “authority”, “sanctity”, and “liberty”). All six of these propensities are present and set to ‘default sensitivities’ at birth, but they fluctuate as we grow and are influenced by environmental pressures. It isn’t clear from his book whether these fluctuations are like studio sound-board knobs that we consciously adjust (at least to some extent), or merely barometer needles reporting the determined outcomes of causal factors. If the former is the case, then his psychological theory might provide the basis for an Aristotelian normative theory in which each of these sensitivity ‘knobs’ is ‘tuned’ throughout life toward its optimal position. The point here isn’t to prove the case, but simply to show that the merger of psychology and normative ethics is at least plausible, and that such an approach would provide us with a developmental ethic that is simultaneously measurable.
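
To make the ‘sound-board knob’ reading concrete, here is a minimal sketch of what a measurable, developmental version of the model might look like. The six foundations and the idea of adjustable sensitivities come from Haidt’s book; the numeric scale, default values, and the tune and distance_from functions are my own hypothetical illustration, not anything Haidt proposes.

```python
# A hedged sketch of the 'knob' reading of Haidt's six foundations. The 0-1 scale,
# defaults, and functions below are hypothetical illustrations, not Haidt's own model.

from dataclasses import dataclass, field
from typing import Dict

FOUNDATIONS = ("care", "fairness", "loyalty", "authority", "sanctity", "liberty")

@dataclass
class MoralProfile:
    # Each foundation gets a sensitivity 'knob', set to a default at birth
    # and adjusted by environment and (perhaps) deliberate practice.
    sensitivities: Dict[str, float] = field(
        default_factory=lambda: {f: 0.5 for f in FOUNDATIONS}
    )

    def tune(self, foundation: str, delta: float) -> None:
        """Nudge one knob up or down, clamped to the [0.0, 1.0] range."""
        current = self.sensitivities[foundation]
        self.sensitivities[foundation] = max(0.0, min(1.0, current + delta))

    def distance_from(self, ideal: Dict[str, float]) -> float:
        """A crude 'measurable outcome': the total gap from some stipulated optimum."""
        return sum(abs(self.sensitivities[f] - ideal[f]) for f in FOUNDATIONS)

profile = MoralProfile()
profile.tune("care", +0.2)  # e.g., an upbringing that cultivates care
print(profile.distance_from({f: 0.7 for f in FOUNDATIONS}))
```

Of course, the ‘ideal’ settings handed to distance_from are exactly the open question raised in the next paragraph: someone has to stipulate them, and that stipulation is itself a normative choice.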

This, it seems to me, is the real basis for the opposition to virtue ethics: not the lack of a ‘decision-procedure’ per se, but the lack of a measurable standard by which I can justifiably judge someone. With a sufficiently sophisticated understanding of human psychology, the “journeyman / apprentice” developmental approach to virtue ethics provides an ethical mentorship system with measurable outcomes. However, I can imagine two potential problems with this concept. First, who decides what the values are, how many there are, and what the optimal sensitivity settings are? This problem implies the need for some sort of ur-ethic that can be used to evaluate the evaluation system – and suddenly we’re plummeting into an infinite regress. Secondly, such a system could ultimately end up stratifying society into the ‘enlightened’ graduates and the ‘benighted savages’ who haven’t yet had the privilege of studying. To the first objection, I must admit I have no reply. It seems a bit like the foundational paradoxes of set theory, and like set theory, it calls the whole system into suspicion. But if we’re willing to continue using sets, merely coping with the edge-case problems of set theory, then why not this moral theory as well? Perhaps because set theory won’t get you killed by the state if you run afoul of its paradoxes while using it. To the second objection, I would say that this doesn’t seem to me like a serious concern. If the system were fully adopted in an already liberal democratic culture, the transition would be almost invisible. Much of the system simply describes habits of human psychology that we already observe. The rest would be a matter of crafting environments that steer developing minds in the right direction, while modeling appropriate behaviors. The latter is already a natural parental impulse, and the former could be done by modifying existing social organizations or making minor changes to legislation.

Given these arguments, it seems to me that if virtue ethics deserves condemnation for its lack of a decision procedure, then the prevailing ethical systems that do implement a decision procedure deserve far greater condemnation for producing, with those procedures, effects that directly oppose their stated goals. Furthermore, given advances in psychology and the flexibility of virtue ethics, it seems to me that, given no options other than these three, a virtue theory coupled with a mature understanding of human psychology would be far superior, regardless of its lack of a formal decision procedure.


  1. Ethical Theory: An Anthology (Blackwell Philosophy Anthologies) (p. 681). Wiley. Kindle Edition. 
  2. Ethical Theory: An Anthology (Blackwell Philosophy Anthologies) (p. 681). Wiley. Kindle Edition. 
  3. Haidt, Jonathan. The Righteous Mind: Why Good People are Divided by Politics and Religion (p. 146). Penguin Books Ltd. Kindle Edition. 

Book Review: The Righteous Mind, Jonathan Haidt

Is it better to be truly just, or merely to seem so? This is the question put to Socrates by Glaucon in The Republic. Jonathan Haidt, in his book, “The Righteous Mind”, counts Glaucon among the cynics for putting this challenge to Socrates. But Haidt is missing a subtle and very powerful nuance in Plato’s story. Socrates had just finished embarrassing Thrasymachus for his weak defense of cynical egoism. Glaucon and Adeimantus were certainly entertained, but they were not satisfied with Socrates. They sought much stronger reasons for accepting the conclusion that true justice is preferable to its appearance, because they did not want merely to seem to agree with Socrates. They really wanted to believe that genuine justice was better, and giving Socrates the strongest possible objection that could be mustered is the only way an honest man (if he is honest with himself) can arrive at such a belief.

Socrates’ initial response to Glaucon was not the description of the ideal state that the story has become famous for. Rather, it was a likening of the soul to the body. Repeated abuses and illnesses corrupt and degrade the health of the body over time, until at some point it is no longer possible to experience vigor and vitality. Likewise, says Socrates, repeated vices and injustices committed in pursuit of wealth or power or honor will eventually render the soul so degraded and corrupt that it will no longer be capable of achieving eudaemonia (aka ‘contentment’, ‘happiness’, or ‘flourishing’). This is the fate of the man who pursues a life of politics, without first tending to his soul.

Haidt seems almost proud of his “Glauconian cynicism” – a socio-biological view in which he believes he can show that, regardless of which is better, seeming just is what we humans actually seek. Haidt claims explicitly and confidently not to be offering an argument for what ought to be, only what is. But the enthusiasm with which he reports this supposed scientific fact suggests that he also thinks that what is, just is what ought to be. But this is precisely the challenge posed to Socrates by Glaucon: it certainly is true that many people (perhaps even most) are cynical and self-serving. So, why oughtn’t they be? Haidt’s response to this recurring implicit question seems to be to just keep reasserting the fact, in ever more sophisticated and complex ways.

Near the end of the book, despite having earlier explicitly refused to address the problem of normative ethics, Haidt tosses off a flippant endorsement of Utilitarianism, as if that view had already settled the normative question, or simply to signal to the reader that the question just isn’t interesting enough to bother investigating. But this has profound implications for how seriously one can take some of the claims he makes in this book. The tension between what is and what ought to be plagues the book, and any reader eager for insight into the gap between descriptive and normative ethics will find it profoundly frustrating.

The Basic Theory, and Its Problems

Haidt’s basic theory of the “Righteous Mind” comes down to two hypotheses. First, that the human brain has evolved for both “tribal” and “hive” social structures. To put it in his terms, “we are 90% chimp, and 10% bee”, and a special “hive switch” in the brain is flipped, when conditions are ideal, that suppresses our self-interested “groupish” psychology and makes us more altruistically “hive-ish”. It’s not quite clear what sort of mechanism this “switch” is, what causes it to flip, or how it gets reset. But he offers a lot of anecdotes from his research describing evidence that suggests its presence.

The second, and much more complex, portion of the theory is his six-dimensional model of moral psychology. His system is powerfully reminiscent of David Hume’s own four-pole system of moral emotions (Pride-vs-Humility / Love-vs-Hate). But there is one extremely significant difference. Hume’s theory was meant to describe morality as a system of “passions” (special kinds of emotions). These passions derive from a natural propensity for pleasure and a natural aversion to pain (he presages the Utilitarians in this respect). What’s more, moral judgments are not reasoned, but felt. Morality, for Hume, just is emotion expressed. Haidt’s theory, on the other hand, describes six dimensions of values, not emotions: Care, Fairness, Loyalty, Authority, Sanctity, and Liberty. Haidt says that all human beings have this six-dimensional system built in as a consequence of thousands of years of socio-biological evolution. He argues that the “sensitivity level” of each of these is not permanently fixed, but is set to a “default” at birth and adjusted over a lifespan by experience. How, precisely, this happens and by what mechanism is a bit murky, but again, he offers loads of anecdotal examples (and data from his studies) to show how each of these dimensions is expressed by individuals.

A few questions and objections about these two hypotheses arose for me as I read through the book, and they never seemed to get a satisfactory answer. First, on the six aspects: are they like adjuster knobs on a sound board? Or are they merely barometer needles reporting varying pressure levels set by environmental impacts on a biological system? If the former, then surely there are “optimal” positions for each of these knobs (even if only circumstantially optimal)? In that case, there is indeed an opening for a normative ethical theory describing these optimums. However, if the latter is true, then it is hard to understand how there could be any such thing as an “ought” at all, much less a system prescribing oughts. Haidt is constantly nudging up to the edge of this Humean is-ought cliff, and retreating from it just when things start to get interesting.

Second, Haidt never quite explicitly acknowledges that he’s describing a system of values, rather than a moral psychology. One might object that a system of values could be said to be a variety of moral psychology, but I would reply that by the time we get to values, we’re already one layer above fundamental psychology. Why these six, and not others? Indeed, Haidt explicitly acknowledges in the book that some early reviewers objected to the lack of “equality” as a value on his list of “aspects”. If “liberty” counts as a foundational psychological value, then why not “equality”? It has just as long a history, after all. More importantly, to talk of values at all is to flirt, once again, with the realm of the normative. I would have to look more closely at the research he used to back this section of the book, but how do we know he didn’t just happen to find the set of six values that he and his team were already particularly focused on? That is a normative selection process: “these values are more important than those”.

Third, returning to the “hive switch”, Haidt emphasizes the “dangers” of too much hive-ishness or too much groupishness. But he never quite explains how there could be any such thing as a “right amount” of either, in the absence of a normative theory. Without any idea of what an ideal amount of either would look like, why would the horrors of Hobbesian anarchy or Stalinist oppression even count as “bad”? Lower primates seem perfectly satisfied with brutal inter-tribal conflict, and ants are obliviously willing to destroy themselves en masse for the sake of colony and queen. What’s worse is that there’s no clear explanation for how the “hive switch” and the six-dimensional moral psychology fit together. Do certain knob settings produce hives instead of tribes? Do others produce tribes instead of hives? What are the right tension levels between the two modes? If the knob settings do influence this, how do we know what those settings should be? None of this is discussed in the book, except in passionate warnings to beware of extremes. A laudable sentiment, but so what?

Lastly, while Frans de Waal is largely an asset to Haidt’s book, there is one key notion from de Waal that highlights the primary problem with Haidt’s “Glauconian moral matrix”; de Waal captured it in a rather pithy phrase: Veneer Theory. In his book, “Primates and Philosophers”, de Waal uses the phrase to criticize Huxley and Dawkins for uncritically accepting a Hobbesian view of human nature without providing an explanation for how a self-serving egoist gets to altruism all on his own. Haidt’s book suffers from a similar problem. Though he does a great job of bridging the gap between the egoist and the “group-altruist”, what he fails to do is explain how the “Glauconian cynic” becomes a genuinely caring being. Haidt has concocted his own variety of Veneer Theory by recasting morality itself as a complex inter-subjective social delusion that we all agree to participate in. He takes this as an answer to the problem of a “veneer” layer, but it only makes his own set of theories seem like a Rube Goldberg machine. Haidt makes a strong case for the biological and psychological reality of moral experience as a genuine phenomenon. But this works directly against the idea that we merely wish to appear to care, or to be virtuous. Why layer a “moral matrix” on top of a perfectly reasonable explanation of genuine moral emotions? More to the point, why would evolution tolerate such an expensive and convoluted cognitive load (layers of delusion) on top of the already demanding task of navigating the social world in real time? Even more curiously, why would we count the primitive primate morality of chimps and bonobos as “actual” or “genuine”, while regarding our own as a mere matrix-like delusion?

Final Thoughts

Anyone who has read the entirety of The Republic has to come to terms with a powerful dissonance in Plato’s tale. Either Socrates truly misunderstood human nature (perhaps he confused it with his own psychological projections), or he didn’t actually believe what he was saying. Some philosophers argue for the latter: that the ideal state was meant to remain an ideal. Socrates was never going to convince the Athenians to drive all the old folks out of the city in order to start afresh, or convince the educated classes to surrender their private property holdings to the commons, or convince them to put their women and children into a breeding commune to be tended by specially bred and trained guardians. He must have known that. What was really going on here? Remember that the tale was written by Plato, long after Socrates’ execution. Plato was engaging in his own bit of cynical rhetoric, grounded in bitterness. He wanted to demonstrate the utter impossibility of the larger task: convincing men to love virtue for its own sake; to be just, rather than simply to appear just. He had given up on the possibility, and the Republic was his way of showing this. It is hard to blame him, on one level. He’d watched these people destroy his master and teacher, a man for whom Plato had given up a promising life as a poet in order to follow him in philosophy. Haidt, on the other hand, embraces his cynicism with zeal, because he believes the data tells him he must, and he refuses to even entertain the possibility that we might just be better than that. In effect, he takes Plato’s implicit condemnation of man and turns it into a simple matter of fact. But recasting the condemnation as mere description doesn’t change the moral reality; it just hides it behind a veil of cynicism.

Kant vs Aristotle: Virtue and the Moral Law

Kant’s critique of Aristotle is fascinating to me. He uses Aristotle’s own standard against him: if you say that virtue consists in achieving excellence in the unique purpose of a human life, and that this unique purpose can be identified by isolating the features of the organism that set it apart from other organisms, then you have the problem of explaining how it is that reason, our unique feature, could be better suited to helping humans achieve excellence at attaining ‘material ends’ (i.e., ‘happiness’) than instinct, which all other animals have as well, and which is far more efficient and far less costly.

This is enough for Kant to argue that reason must then have some other purpose — which, for him, is accessing ‘universal absolutes’ and functioning as the standard of ‘value’ he ascribes to the “good” will. But in making this move, Kant is also implicitly conceding Aristotle’s notion of a teleological end for which man has been “formed”. He’s simply arguing that Aristotle was muddled about the particulars, and that he has managed to sort it all out for us.

But in order to make his criticism of Aristotle, Kant needs to reduce the Greek notion of eudaemonia to (apparently) nothing more than the continuous satisfaction of contingent desires. Since these desires are ‘merely subjective’, dependent on circumstance, and governed exclusively by the ‘laws of nature’, their satisfaction can have no ‘moral worth’, because moral worth consists in the ‘good will’ acting on the recognition of necessary duties found in the ‘moral law’ by way of pure reason, which is independent of contingent circumstances. Thus, hypothetical imperatives cannot “be moral”.

What’s ironic about all of this is that Kant seems to be arguing with Aristotle from the point of view of Plato. Kant wants there to be an absolute truth about moral rules, in a mathematical sense (he even makes an analogy to geometry at one point). He frequently makes reference to the difference between the sensible and the intelligible world, and with it he draws a distinction between absolute value and relative value. All of these notions are constantly present in Plato’s dialogues. Even the distinction between ‘material’ ends and ‘ultimate’ ends is something of a dispute between Aristotle (Nicomachean Ethics) and Plato (The Timaeus, The Republic).

It seems to me that the debate around free will and morality always resolves itself into the same dichotomies: objective-subjective, ‘intelligible’-‘sensible’, necessary-contingent, absolute-relative, and of course descriptive-normative. Has Kant added anything new to this dispute beyond Plato and Aristotle? I’m not so sure about that. The appeal to absolutes is a seductive one. Intuitively, it seems like a moral ‘rule’ could not be valid if it were not absolute, because anything less than “true for everyone, everywhere, at all times” is simply a preference by definition. However, Kant’s hypothetical examples of the Categorical Imperative in the Groundwork are notoriously confused, and in at least one case (false promises) seem to argue against the categorical itself. If Kant himself could not imagine at least one unequivocal practical example of his imperative, it’s hardly fair to expect anyone else to be able to. Kant, I suppose, would have argued that in spite of the fact that ‘normal’ folk aren’t philosophers, they still “get it, deep down”. Maybe that’s what I was doing when I mentioned the intuitive appeal of absolutes. Still, it seems a bit like “cheating” for Kant to make appeals to common sense when, all throughout this book, he’s arguing that a properly philosophical understanding of morality must be grounded in rigorous logical universals. I’ll have more to say about this later…

Reason Versus the Passions – Initial thoughts on Hume’s Treatise

…When in exerting any passion in action, we chuse means insufficient for the designed end, and deceive ourselves in our judgment of causes and effects. Where a passion is neither founded on false suppositions, nor chuses means insufficient for the end, the understanding can neither justify nor condemn it. It is not contrary to reason to prefer the destruction of the whole world to the scratching of my finger. It is not contrary to reason for me to chuse my total ruin, to prevent the least uneasiness of an Indian or person wholly unknown to me. It is as little contrary to reason to prefer even my own acknowledged lesser good to my greater, and have a more ardent affection for the former than the latter. A trivial good may, from certain circumstances, produce a desire superior to what arises from the greatest and most valuable enjoyment; nor is there any thing more extraordinary in this, than in mechanics to see one pound weight raise up a hundred by the advantage of its situation. In short, a passion must be accompanyed with some false judgment in order to its being unreasonable; and even then it is not the passion, properly speaking, which is unreasonable, but the judgment.

The consequences are evident. Since a passion can never, in any sense, be called unreasonable, [except] when founded on a false supposition or when it chuses means insufficient for the designed end, it is impossible, that reason and passion can ever oppose each other, or dispute for the government of the will and actions. The moment we perceive the falsehood of any supposition, or the insufficiency of any means our passions yield to our reason without any opposition. I may desire any fruit as of an excellent relish; but whenever you convince me of my mistake, my longing ceases. I may will the performance of certain actions as means of obtaining any desired good; but as my willing of these actions is only secondary, and founded on the supposition, that they are causes of the proposed effect; as soon as I discover the falsehood of that supposition, they must become indifferent to me…. [Book II, Part iii, Section iii]

Hume, David. A Treatise of Human Nature: Bestsellers and famous Books (pp. 388-389). anboco. Kindle Edition.

According to Hume, reason is but a slave to the passions. We are moved to act by a process of primary impressions (e.g., pleasure and pain, or grief and joy, or attraction and aversion), giving rise to relations of ideas (memories and reflections), which then give rise to secondary impressions (pride and shame, or love and hate, etc).

Hume was probably hyperbolizing for the sake of highlighting the point (later in life he apparently lamented stating it so forcefully). But the point was not simply that passions are neither ‘reasonable’ nor ‘unreasonable’; it was primarily that reason is inert: that no calculation of circumstances or train of logic is capable of moving a man all on its own. He reasoned that there must be some process by which ‘relations of impressions and ideas’ are converted into passions (the things that actually provide us with the impulse to act). Hume often depicts reason as lying somewhere between initial impressions and final passions, acting merely as a conduit or proximal cause (though I suppose he would have balked at the word ’cause’ here).

His explication of that process and how it works is woefully naive and speculative (in addition to being incorrect in most respects). However, I think he was on the right track, and simply lacked a sophisticated enough science of human biology and psychology to render his theory into something that would make better sense to a modern mind. For the most part, in the 18th century, the only tools available to him were introspection and a smattering of knowledge of human and animal anatomy. So, frankly, not only should he be excused, he ought to be lauded as a genius for (nearly) single-handedly inventing the science of psychology and the philosophical notion of moral psychology.

Still, I find myself disagreeing with Hume for the following reasons:

First, “reason” and “passion” are not separate ‘faculties’ of the mind, placed into a hierarchy with each other. Even Hume seemed to understand this (at least in part). They are functional capacities that express themselves in varying degrees, in concert, under various circumstances. Reason is no more the slave of the passions than the strings and woodwinds are the ‘slaves’ of the French horns and trumpets in an orchestra.

Second, later on in the Treatise, he’ll introduce yet a third relation: that of moral judgment. At that point, all he’s really doing is describing Plato’s tripartite soul in a much more complex way (Plato, of course, placed reason in the charioteer’s seat). Why philosophers have traditionally insisted on conceptions of consciousness as simple hierarchies is something I don’t quite understand; in truth, the mind is more like an evolving ecosystem than a top-down political structure.

Third, Hume reifies the phenomena in his model. He says that passions are derived from relations of impressions and ideas. He also says that the “self” is itself nothing more than an idea that arises from a relation of other impressions and ideas. But he then says that pride and shame are passions, and that pride and shame have as “their object” the self. And, for this to happen, there must be a “we” (i.e., the “self”) that receives an impression of something beautiful that “we” own. But this is to assign intentionality to mere phenomena, and Hume never explains how this is possible. You can’t, on the one hand, say that all the phenomena of the brain are merely the effects of causal inter-relations between impressions and ideas, and then, on the other, somehow make the impressions and ideas capable of choosing objects at which to direct themselves.

The consequence of this is that the most Hume could reasonably have said was that he didn’t really know whether the passions ‘ruled’ or reason ‘ruled’. At most, our cognitive and emotional capacities are cohabitants, and if you look at the modern scientific literature (admittedly, I am but a layman), there is little in the brain itself to distinguish them. Some would say the difference between the limbic system and the frontal lobes is enough to show this, but despite being separate physical structures, the actual neural activity isn’t so distinct. The limbic system, for example, in addition to being responsible for most of our emotions, is also responsible for several functions related to memory (something Hume would have counted as part of his ‘relations of ideas’ rather than as a sensation). The point is, rather than being master and slave to each other, they’re more like ‘dance partners’.

In fact, it seems to me, the core question here is exactly what role each of the cognitive and emotive capacities of the brain plays in decision-making. Unfortunately, I’m no psychologist, and have only a layman’s familiarity with the smattering of scientific literature that might help answer that question. But I suppose one criticism you could level at Hume is that his overall theory (as it’s proposed here) is unfalsifiable: no matter what you decide, it’s always evidence of the passions at work. But then, it’s not like Hume had access to a rigorous methodology.

Morality in a Determined World

This essay will attempt an answer to the following question: If determinism is true, is morality an illusion? In other words, if we take the basic fact of causal necessity – the brute physical explanation that every effect has a cause – as a given, can we justify a belief in moral value and normative judgment in the narrow sense of “good” and “bad”? I will argue that there are good reasons to believe in the reality of both moral judgment and moral value in spite of causal necessity. Firstly, I will show that causal necessity does not entail what determinists insist of it. Secondly, I will argue that causal necessity leaves us no choice but to accept the responsibility of making moral choices, as members of the human community. Lastly, I will argue that the status of morality as a real phenomenon need not rest on naïve notions of ontological independence from the human mind.

The determinist insists on a universe in which all effects are perfectly determined by prior causes, all the way back to the so-called “Big Bang”. He argues that we could, in principle, explain all effects in terms of their prior causes, if we only had the means to acquire enough knowledge to do so. We know from quantum physics that this is not actually true. Quantum indeterminacy shows that predictions at the sub-atomic level are a probabilistic affair, at best. Though this is not enough to claim free will (because brute randomness is just as much a causal driver as a perfectly predictable mechanical universe), it does show that the traditional view of determinism is in need of some updating. What’s more, as Peter Tse1 has argued, neural activity – one level up from the sub-atomic – is not a purely ballistic process (i.e., like billiard balls bouncing around). Rather, according to Tse, neurons behave more like a “store and forward” messaging system, in which groups of “epi-connected” neurons assemble into temporary networks that collect and release electrochemical energy by way of criterial threshold triggers that may be pattern-specific. These criterial triggers can affect future neural states and the arrangement of subsequent “epi-connected” networks, which makes their behavior indeterminate, but non-random. These two phenomena (random quantum indeterminacy and non-random neural indeterminacy) together function as a necessary first condition for genuine choice-making activity in the brain. But none of this need be true to refute the main complaint of the determinist: namely, that moral “responsibility” could not rest with the individual making the apparent choice, because the individual is not the “ultimate” cause of his behavior and because he’s not really making a choice. To begin with, there is no reason I know of why responsibility can only rest on a causal terminus. So what if I’m not the ultimate cause of my choice? In fact, I can’t really think of any choice I’ve ever made in which I was the originating source of the choice. By this reasoning, the Big Bang itself would become the ultimate scapegoat. So, that objection seems spurious to me. On the question of whether I’m actually making a choice or not, the objection seems to assume the very thing it is trying to prove. Perhaps I am making an actual choice. The traditional determinist has yet to prove otherwise, and as I have shown, there is good evidence to suggest that he may be operating on obsolete information.

But Dr. Tse’s work is, at the moment, only an untested theory. So prudence and charity suggest that taking the determinist’s position as a given might be the safer bet. If the human mind is indeed determined in a ballistic sense, just like the rest of the physical matter in the universe, and the barrier to making all human activity predictable is not one of principle but of mere technological prowess, would this mean our impulse to moral judgment is illusory, or that maintaining a moral position is indulging in a fiction? When I consider what it means to be a human being, I think not. While we are animals that have evolved just as all others have, we are yet primates of a very peculiar variety. We are creatures driven by a psychology that was chiseled out of environmental pressures that determined the set of genetic traits necessary for the reproductive success of primates in that early environment. That process has, as Frans de Waal2 has outlined, equipped us with a highly sophisticated cognitive and emotional apparatus (whether as primary or secondary traits is somewhat irrelevant at the moment) that enables such things as gratification deferment, long-term planning, cost/benefit calculation, and comparative judgment. These traits have enabled highly sophisticated social interactions and complex social structures in which genetic relatedness, reciprocal altruism, sympathetic resentment, emotional contagion, empathy, and a robust theory of mind have all culminated in a “moral sense” that is both self- and other-directed. This collection of “moral sentiments” is necessarily normative, because each of us, as a specimen of the human primate species, requires a means by which we can determine our “fit” in the social order, so as to budget our resource acquisition and mating opportunities. The ways in which this evaluative process expresses itself in the individual, and subsequently in the group, will vary broadly with climatic conditions and population (and are a topic for another time), but in general, when the attitudes arising from these evaluations are systematized into moral “codes” or political philosophies, this is what we want to call “morality”. What this means is that morality is not illusory, but it is also not what we typically think it is. It is a psycho-biological phenomenon consisting in a process of continuous negotiation, competition, and collaboration that regulates the behavior of the species over time, in response to environmental pressures. There is a further consequence: as a member of this species, I have no choice but to participate in this process. My brain is constructed to perform these evaluative judgments, and to seek commerce with (and protection within) my in-groups. The common question, “why should I be moral?”, is thus answerable by saying that you already are, necessarily. The only thing that remains is: what are you going to do about it?

It might be objected at this point that these evaluations are purely mythical, because no such qualities as “good” and “bad” can be identified in the actions of an individual in the way that angular momentum or velocity can be. Or it may be contested that, because such things as moral “value” are negotiated, they are a mere “social construct” and therefore ought not be taken seriously. Both objections, it seems to me, come down to an unreasonably reductive insistence on a narrow conception of “existence” as nothing more than physical matter and energy. Even the most materialist of Marxist economists would be willing to acknowledge the “real” store of value present in a ten-pound note. As noted above, such things as the capacity for long-term planning, gratification deferment, and cost-benefit calculation (as well as language, and a sense of reciprocity and expectation) enable the creation of real symbols of evaluative judgment – simple commodities upon which we psychologically project a certain qualitative or quantitative meaning. While it is true that different groups of people have used different commodities and have imbued those tokens with different degrees of importance and different kinds of meaning, they have all nonetheless engaged in the creation and representation of real value. If we are willing to accept this as an example of value in real form, why would we not accept the same for morality? Why would we take the collective store of moral value to be less “real” than the collective store of economic value? It seems to me that, without a principle for making such a choice, I’d only be engaging in an act of caprice, and denying my own nature in the process.

The arguments above cannot help us explain what kinds of things we need to evaluate, what means of evaluation we ought to engage in, how much importance the evaluations deserve, or even which evaluations are appropriate in any given situation. Rather, all I have tried to show is that accepting a deterministic view of reality hardly excuses us from the fact of morality as a human phenomenon. Far from excusing us from moral choice on the grounds that it is mere illusion, a deterministic understanding, when coupled with the science of evolution and psychology, makes morality an inescapable inevitability for us (or, at least, a biological fact of life – even if accidental). What’s more, in trying to characterize morality as an “illusion” because of causal necessity or biological determination, we’re doing nothing less than trying to deny the responsibility with which nature itself has tasked us. For those of us who choose to take up this burden, the challenge is precisely this: to explain what morality really is, and to show how we can best make use of it.


  1. Peter Tse, The Neural Basis of Free Will, MIT Press, Massachusetts, 2013 
  2. Frans de Waal, Primates and Philosophers, Princeton University Press, Princeton, NJ, 2006 

Is it possible to act selflessly?

The following is my attempt to answer a question posed to me recently.

When I look at the question, it seems to focus on the individual. So, I think the easiest way to begin is to start with the self. Since I’m no Derek Parfit or Bernard Williams, and the question seems to be focusing on moral sentiment and moral choice, I’m going to reduce the ‘self’ to just that part we always end up talking about when we talk about choice: The Will. Lacking a more sophisticated understanding of consciousness, I’m going to cobble together a rudimentary theory of the conscious self from Schopenhauer (Freedom of The Will), Dennett (Freedom Evolves, Consciousness Explained), and Peter Ulric Tse (The Neural Basis of Free Will).

Schopenhauer’s basic sketch of the conscious self, while no longer scientifically accurate, is vague and general enough not to conflict with a simple “modular” or “functional” understanding of the mind (one theory currently being batted around in science these days). So, I’m going to take his model as read, with some technical embellishments: the conscious, motivated, active self is (for our purposes) a neurochemical process of the brain that gives rise to a sensual consciousness (awareness of the outside world), a self-consciousness (awareness of our desires and intentions), and a ‘will’ embedded within that self-consciousness, from which all intentions to act originate, and whose chosen intentions are acted upon by that self.

Free will seems essential to this question. Schopenhauer, of course, argued against such a thing. But I don’t think his argument is conclusive. In short, causal explanation chains looking backward in time are not necessarily evidence of causal necessity looking into the future (see Hume, on induction). Further, it has been shown that sub-atomic indeterminacy can play a role in the way neurons function (see Tse, and Dennett). Thus, it seems there is room to suppose, at a minimum, that absolute determinism is not a certainty (and, at best, that there is some sort of freedom of the underlying will that does not necessarily violate physical causation). So I think we can tentatively accept the idea of an underlying will that is at least possibly free.

But even if all that is wrong, Schopenhauer still gives us a get-out-of-jail-free card, for the purposes of the class. At the beginning of his essay, he defines a conventional notion of freedom that he calls “negative” freedom, meaning, in a phrase, “I am free to do as I will” (regardless of whether the will itself is free). In short, I am unimpeded and uninhibited in the choices available to me, in the basic physical sense. Since this conception of freedom is enough to get us to the point where we have to start making choices, and value judgments about our choices, I think this might be an acceptable “plan B” for answering this question.

So, we have a conscious self that is free to choose to act, or not, on the intentions presented to it by its will. The next question (restating the original question a bit) is: is it possible for this self to act ‘selflessly’? Here, I think we have a straightforward answer, in both the metaphysical and logical sense:

  1. The will is contained within the consciousness that corresponds to its identified self.
  2. The will cannot present intentions to any other self, than the one to which it corresponds.
  3. The self cannot act upon intentions not presented to it by its own will (I will expound on this point, below).
  4. Therefore, it is not possible to act selflessly. 

On premise three: here, it might be tempting to ask, “but what about other people’s stated intentions? Can’t we act on those?” To this, I would appeal to Schopenhauer’s conception of consciousness. If a friend makes an appeal to me (to act in some way), he is presenting my sensual consciousness with a motive. The sensual consciousness would pass that motive to the self-consciousness, and the self-consciousness would report back the intentional desire to act from the will. At this point, I could choose to act on the impulse and respond positively, or I could deny the impulse and respond negatively to my friend. Even if I respond positively, I am still fundamentally acting from the self, because it is my will that gave rise to the impulse to act. This means (at least logically) that I could not possibly be acting selflessly. Therefore, all acts are selfish acts.

But this whole discourse from the raw metaphysical possibility seems a bit impoverished. Perhaps we mean to ask, “Is it possible to act purely from altruistic motives?” Or perhaps, “Is acting altruistically also acting selfishly?” These are much more difficult questions, I think, mostly because they’re intensely psychological. For the first question, I’d lean on my Schopenhauer again and say yes, it is possible to act from an ‘altruistic’ motive; but though we characterize the motive as “altruistic”, it is still fundamentally a motive experienced, and an intention exhibited, by a self, and thus acting on it is necessarily (by definition?) acting ’selfishly’.

The second question is more interesting, and more perplexing. It’s essentially asking how motives interact in the mind, and perhaps even what the basic nature of these two motives (altruism and selfishness) is. If the two are not mutually exclusive, what happens when they “mix” in the mind? Is it an additive mixture, or a subtractive one? If they are exclusive, what calculus is taking place to privilege one over the other? What circumstances or other motives might have an effect on that calculation? Is the mixture or calculus something we can reduce to a principle? If so, would that function as a moral fundamental (even if not THE moral fundamental)?

Arguing from the psychological, I would speculate that a kernel of selfish motive lies at the core of all actions, even those dominated by selfless motives. Logically speaking, this would still be an affirmative answer to your question (Yes, it is possible to act selflessly). However, perplexingly, I could also say, ‘no it’s not possible, because that kernel of selfishness is present’ — and both answers would be true, because both motives are present in my mind at the same time. But perhaps my speculation is incorrect? Perhaps only one motive can be present at a time? Somehow, I doubt that…

That’s what’s interesting about the psychological question. It’s not quite a paradox, because it’s not actually a binary. It’s like drops of black paint in a bucket of white paint. If there is a kernel of selfishness at the core of all my selfless acts, does it “pollute” my altruism? Am I being dishonest somehow if I don’t acknowledge it? Does my worry about being dishonest betray some “turtles all the way down” higher authority that I want to appeal to? Or is this more like microprocessor voltages (below 5 volts of selfishness = altruism; above 5 volts = selfishness)?

We might want to say this is where value judgments can help us out. Well, they may help us clarify which of the two motives would dominate our intentions in some specific instance, but I’m not sure how that could get us to a universal principle (viz., ‘altruism is good’, ‘selfishness is good’, ‘altruism is bad’, or ‘selfishness is bad’).

Mill, Kant, Preference and Universality

If you look closely at Mill’s arguments in Utilitarianism, he seems to be making a very strong response to Kant (perhaps against the Groundwork?). Mill accepts the notion of moral duty, just as Kant does. But he insists it derives not from any a priori truth (such as Kant’s synthetic a priori), but rather from the apparently universal desire of mankind (individually and in aggregate) to seek its own pleasure. Aware of some of the contextual implications of this principle, Mill attacks head-on the charge of Epicureanism. But what strikes me as interesting is the fact that, though he makes frequent reference to Kant, he never directly refutes Kant’s position, and never fully explains how the pleasure principle isn’t already soundly refuted by Kant’s explication of deontology (in the Groundwork). Mill just seems to ignore the problem of subjectivity in the hypothetical imperative, as described by Kant. Perhaps Mill is assuming that the apparently universal preference for pleasure renders the hypothetical imperative a moot point? (i.e., since everyone prefers pleasure, it’s pointless to bother thinking in terms like, ‘if you seek pleasure, then you should do x’).

This idea of a universal preference is an intriguing one. Mill makes frequent appeals to preference – both implicit and explicit. What if we could actually identify a preference that is indeed universal to all human beings? I’m struggling, frankly, to think of one. Even something as intuitively obvious as “life” isn’t so obvious when you consider the willingness of soldiers to throw themselves over the trenches, or the high rate of suicide among men in the west today. Clearly, those folk do not have a preference for living. If something like life itself can’t be ascribed as a preference to all human beings, why should pleasure be?

On the other hand, biology is notoriously fuzzy at the edges. Sometimes a horse is born with five legs. Is it no longer a horse? Sometimes humans are born with three X chromosomes instead of an XY pair. Does that mean there’s no such thing as mammalian sexes? If we can accept this sort of vagueness in our distinctions, then perhaps a “universal preference” could also be accepted as something slightly less than universal?

Perhaps, but when we start ascribing moral significance to such a thing as a preference, the game changes a bit. Because what are we really saying when we say we can judge a behaviour as “right” or “wrong”? When I say something is or isn’t a preference of mine, nothing follows. I just go about my business, and you, yours. But when I take a preference of mine as a standard by which to judge you ‘right’ or ‘wrong’, I am implying a great deal more. It implies, at the very least, that I am licensed to condemn you for not sharing the preference — and at the most extreme end, that I am licensed to kill you.

But what if the standard isn’t some particular material preference (such as an ice cream flavor, or even living), but rather a preference for behavioral reciprocity? Now, if I have a preference for vanilla, and you have a preference for chocolate, but we both share (for example) a preference for not attacking people with differing preferences, then we might be able to negotiate a peaceful existence together. What’s more, we’d then be justified in defending ourselves against someone who didn’t share that meta-preference.

Perhaps this is what Mill was thinking when he suggested we all ought to regard each other equally in the decisions we make? More thought is needed on this one…

Bernard Williams, Moral Dilemmas, and Utilitarianism

It would be amazing if ethics courses would stop trying to put me into some kind of Antonio Banderas / Heath Ledger fantasy nightmare, and actually start teaching me how to work on real problems in the real world. Everyone talks of trying to do "applied" ethics, and of trying to remove all the 'abstractions' and deal 'directly' with our moral intuitions. But these scenarios just seem to me to be driving us further and further away from that goal.


ISP Launch Event: Three Talks On Three Philosophers

This weekend I attended the launch event for the International School of Philosophy here in London. Three Talks on Three Philosophers was intended to showcase the kind of thought one could expect from the new school, as well as provide an opportunity for philosophical learning to the local community (greater Islington, mainly). Sam Freemantle, the founder of the new independent school, provided the first of the three lectures, in the form of an overview of his PhD thesis, “Reconstructing Rawls”. Following Sam, Adrian Brockless offered a passionate argument for a more thoughtful kind of education grounded in Socratic questioning. Lastly, Professor Ken Gemes of the University of London treated us to an extended version of his talk on Nietzsche’s Death of God.

Serendipitously, I also listened this weekend to a new reading of the introduction to Allan Bloom’s “The Closing of the American Mind” (a book I read years ago). I say “serendipitously” because it turns out to be a powerful lens through which to interpret the messages coming out of Saturday’s lectures, particularly those of Professor Gemes and Mr. Brockless, which were laden with themes that could easily have been attributed to Bloom. The erosion of truth and goodness as absolute values (both in society and in the academy), the corruption of the academy to purposes other than the pursuit of the good life, the need for a renewal of these core values, the seemingly intractable challenge of re-establishing them in an educational environment so democratized and demoralized that even the hint of such an effort will raise accusations of elitism: all of these were core concerns of Allan Bloom, and his voice was clearly resonating in the words of both Professor Gemes and Mr. Brockless, though I suspect neither of them would agree.

For Professor Gemes the worry is societal, and spans generations. He began his talk with the story of the madman from Nietzsche’s The Gay Science, which illustrates the central problem for Nietzsche, as Gemes sees it: absent the catalyzing mythology of Christianity, why would we continue to cling to its core values of truth and goodness? Given that the values of honor and glory held by civilization before Christianity seem more seductive, why wouldn’t we return to these, and abandon truth and goodness, in the absence of a dogma that focused us on them? According to Professor Gemes, Nietzsche believed we were clinging to truth as a value by way of some sort of “hangover” from Christianity, and he wanted to know why. I think Nietzsche may have been disadvantaged by his proximity to the downfall of Christianity in the west. Over a century on now, in the “post-truth” era, it appears we have indeed begun to abandon truth and goodness as ultimate values, and have indeed begun replacing them with honor and glory once again.

Nowhere is this shift more clearly and startlingly present than in the academy. Mr. Brockless highlighted this inadvertently, I believe, in his lecture. Using the Socrates of the Gorgias and The Republic as a mentor, Brockless crisply argued for a conception of higher education that differentiates itself from the contemporary academy by focusing on the pursuit of truth through “authentic” learning that exposes students to “meaning and understanding of the human condition”, rather than on the career advancement goals and academic advantages of its students. This plea explicitly demands that truth be reseated in our minds as an absolute value, pursued for its own sake. Although Mr. Brockless’ lecture came before Professor Gemes’, it reads as a direct response to Nietzsche, in the form of a resounding and explicit affirmation of truth and goodness, above honor and glory, at least as far as the academy is concerned. To that end, Brockless counseled a return to the ancient classics, and glowed with a reverence for the Socratic dialogues themselves, even recommending them as a starting point for students.

Interestingly, a popular new voice has also converged on this question. I’ve recently seen a lecture by Jonathan Haidt of New York University, in which he suggests that a “new schism” ought to take place in the modern university, involving the realignment of ultimate values. In his view, these divergent ultimate values are “truth” versus “justice” (actually, “social justice”, which he contends is at times unjust). But rather than pressing for the conquest of truth over social justice, Haidt advocates for an amicable divorce. Haidt centers his lecture on a vision of education very similar to Brockless’, in which universities that adopt truth as a core value dedicate themselves firmly to free expression, and to open dialogue and debate in which no idea is off the table. In other words, the Socratic tradition: the same tradition Brockless described during the question and answer period of his lecture.

Allan Bloom’s book was a vanguard in this discussion, I think. Some might suggest that perhaps there really is no problem, and this is all just varying degrees of predictable conservatism occasionally surfacing above the white noise. After all, these sorts of complaints have been around for almost 50 years, and yet the generations leaving university then and now don’t seem to be too much different from each other. But are they really so much the same? Bloom (and proteges like E. D. Hirsch) would point to the degradation of “dead white males” in the academy, and their gradual replacement with relativist and anti-absolutist dogmas (in addition to the impulse toward radical activism) — and the pervasive cultural ignorance and growing hostility to truth of new students — as certain indicators. I’m not sure that Haidt, Brockless, or even Gemes would necessarily agree with that. But one thing that all of these voices seem to agree on, regardless of the reasons grounding it, is the loss of truth and goodness as guiding star values in our overall culture, and most profoundly, in the academy.

The question is what, if anything, should we do about it? Brockless and Haidt have slightly divergent opinions on this. One suggests lobbying to reestablish the traditional mission of all higher education; the other recommends a more “free market” answer (if I can call it that), by bifurcating the institution into two competing organizations, one focused on truth, the other on justice. Neither of these speakers’ solutions is entirely satisfying to me. I think this problem is bigger than all of us, and may be inevitable. I wonder if Nietzsche thought so, too.