The following essay is one outcome from my previous "research notebook" post. It is the second of four complete drafts. The fourth was the "official" work, sent off into the academic ether. This draft, however, is the one I'm posting to my blog, because it offers a lot of food for thought and isn't meant to be a completely polished argument. I want it to serve as a mile-marker, offering an opportunity for discussion and debate, and marking my thinking thus far on the topic. I hope you get some use out of it.
I am working on crafting a meaningful answer to the question posed in this heading. But I have decided that the question can’t be answered until two subordinate questions are answered first. The first is “What is the will?” and the second, “What is freedom?” I am holding off on the latter question, for now. The following is a compilation of my collected notes and remarks on the will itself. Hopefully, you’ll find it useful, too.
What Is The Will?
The short answer is: I don’t know. It is entirely unclear to me what, exactly, the “will” is. According to Schopenhauer1, it is a black box into which you pump motives and out of which you receive intentions to act that are utterly compelling. According to Peter Ulric Tse2, the will is:
Whatever it is that triggers actions in the domains of voluntary or endogenous motors or internal actions…
But this just sounds to me like a modern reiteration of Schopenhauer’s “volition… directed toward an object…”. Dennett3 urges us to “trade in mystery for mechanisms”, but as far as I can tell, we’ve just traded one mystery for another in the idea of a “will” as a real thing. I’m not sure it even makes sense to talk about it as a verb, either. In most of the modern scientific literature, authors speak in terms of consciousness, or parts of consciousness as biological processes or interrelated systems or networks of systems. The will, it seems, has been dissolved by the acid of modern neuroscience.
Can we even answer this question, then? If there is no “will”, in the sense that Schopenhauer, Kant, or Hume might have thought of it, then it would seem there could be no answer. But would it still work to think of the term as a generalizing metaphor (as Tse has done), capturing all the various processes in one convenient basket of thought? The danger here, it seems to me, is that metaphors encourage ignorance and frequently propagate destructive misconceptions. I would rather try to work with the bits I do understand, as incomplete as that might be, than fool myself into thinking I’m working with something that doesn’t exist.
But what do I understand? What bits am I working with? Since this is fundamentally a question that extends far beyond the scope of my own understanding (and indeed, still somewhat beyond the scope of the modern science of the mind), the best I’m going to be able to do is craft a tentative answer, cobbled together from a synthesis of scientific and philosophical sources. Let’s see what they have to say.
Robert Kane4 provides us with a nice encapsulation of the classical understanding of “will”, as a rational faculty of ‘practical reasoning’:
… practical reasoning can issue in two kinds of judgment – practical (normative) judgments, on the one hand, about what ought to be done… and choices or decisions, on the other hand, which announce that the agent ‘will’ do such-and-such… Thus ‘the will’ (as ‘rational will’, in the sense we are considering) is a set of conceptually interrelated powers or capacities, including the powers to deliberate, or to reason practically, to choose or decide, to make practical judgments, to form intentions or purposes, or to critically evaluate reasons for actions…
This is a definition of the will from the outside, so to speak: a description of the experience of, or the observation of, conscious decision-making and the actions consequent to it. It is not quite the same thing as what Schopenhauer short-handed as “I can do as I will”. This view of practical reasoning might be better thought of, in Schopenhauer’s terms, as “I can will as I will”.
The notion in the passage from Kane seems to be echoed as well by Mark Balaguer5, who says this:
…We want free will in connection with a certain subset of our conscious decisions. In particular, we want it in connection with what we can call ‘torn’ decisions. Torn decisions can be defined as… a conscious decision in which you have multiple options and you’re torn as to which is best; more precisely, you have multiple options that seem to you to be more or less tied for best, so that you feel completely unsure… and you decide while you feel torn…
This implicitly invokes the sense of practical reasoning and judgment outlined by Kane, in its description of the kinds of situations faced when such a faculty is necessary. However, it’s still not quite clear where this capacity resides, or how we would identify it.
Anyway, for all its apparent precision, Kane’s explanation is admittedly ‘conceptual’, and these interrelated ‘powers’ and ‘capacities’ just seem to be metaphors, rather than theories of mental functions. At least Tse’s basic sketch (more on that later) refers to specific structures of the brain ‘triggered’ by ‘whatever’. Though, at the moment, this is still only slightly more specific than Schopenhauer’s ‘flint and steel’.
The Schopenhauer Problem
On the question of free will, Schopenhauer set up a dichotomy in his famous essay. Either you think you’re free because you are ‘free to do as you will’, or you think you’re determined because you’re not ‘free to will what you will’. Reading through the Oxford Readings in Philosophy text on free will6, every author in this book seems to accept this dichotomy explicitly, in the way they frame the problem.
The first essay in the book is by Chisholm7. This essay has apparently been ‘discredited’, according to a number of the other authors in the book, but it still offers a number of thoughtful passages on the idea of the will itself, and on the Schopenhauer dichotomy:
…even if there is such a faculty as ‘the will’, which somehow sets our acts a-going, the question of freedom, as John Locke said, is not the question ‘whether the will be free’; it is the question ‘whether a man be free’. For if there is a ‘will’, as a moving faculty, the question is whether the man is free to will to do the things that he does will to do – and whether he is free not to will any of those things that he does will to do, and again, whether he is free to will any of those things that he does not will to do…
Chisholm goes on to say this:
…the metaphysical problem of freedom does not concern actus imperatus; it does not concern the question whether we are free to accomplish whatever it is that we will or set out to do; it concerns the actus elicitus, the question whether we are free to will or set out to do those things that we will or set out to do…
It’s a slightly more readable version of the first passage, but it doesn’t include all of the combinations set out there, and eliminating them obscures the problem slightly. A much more succinct way to put it would be to ask, ‘could I have willed otherwise?’, in any situation of either action or inaction on my part. But this is slightly off topic.
Harry Frankfurt8 seems to reduce the will to a collection of primitive desires that necessarily compel action:
…the desire (or desires) by which [an agent] is motivated in some action he performs, or… the desire (or desires) by which he will or would be motivated when or if he acts. An agent’s will, then, is identical with one or more of his first-order desires…
He further tries to clarify this, to be sure we understand that they are a specific set of desires:
…the notion of the will, as I am employing it, is not coextensive with the notion of first-order desires. It is not the notion of something that merely inclines an agent in some degree… Rather it is the notion of effective desire – one that moves (or would move) a person all the way to action…
The distinction between “inclinational” and “effective” desire is not entirely helpful. In fact, because this definition works backward from apparent phenomena to explanatory theories, it seems circular to me. If I say that my will to act is ‘coextensive’ with whatever desire caused my action, I am saying that the will is only identifiable in observable acts, because observable acts come from the will. In other words: my will just is my acts, because my acts are my will.
Frankfurt’s account gets even more paradoxical from here. He says this, for example:
… now consider… statements in which the term ‘to X’ refers to a desire of the first-order. There are also two kinds of situation in which it may be true that A wants to want X. In the first place, it might be true of A that he wants to have a desire to X despite the fact that he has a univocal desire, altogether free of conflict and ambivalence, to refrain from X. Someone might want to have a certain desire in other words, but univocally want that desire to be unsatisfied.
Why would anyone yearn for a desire that he could actively deny for the sake of the denial itself, unless he was some sort of fringe case of sadomasochistic schizophrenia? Frankfurt tries to answer this question with a hypothetical thought-experiment:
Suppose… that a [psychotherapist working] with narcotics addicts believes that his ability to help his patients would be enhanced if he understood better what it is like for them to desire the drug to which they are addicted. Suppose that he is led in this way to want to have a desire for the drug. If it is a genuine desire that he wants, then what he wants is not merely to feel the sensations that addicts characteristically feel when they are gripped by their desire for the drug. What the physician wants, in so far as he wants to have a desire, is to be inclined or moved to some extent to take the drug… he does not want this desire to be effective. He may not want it to move him all the way to action.
This example fails on three grounds, it seems to me. First, Frankfurt reduces the complex subtlety of emotional considerations in any given situation down to a binary of effective and ineffective desire. It essentially reduces the human capacity for empathy and understanding to a kind of me-tooism. Why would a therapist think that the only way he could help his patient is to become the patient himself (at least, in some sense)? Surely psychological training has tools for dealing with these sorts of issues that don’t require such bizarre and frankly dangerous measures. More to the point, there’s no reason to think that a therapist’s curiosity, or concern, or frustration, or wonder, can be equated directly with something like “ineffective” or “effective” desire. Lastly, in the form this example takes, you can see what he was avoiding in silhouette: namely, that the patient must already have a desire not to want what he already wants, and sometimes that desire wins. After all, this is presumably why the patient is in treatment. The point here is that it’s not clear what the will really is. If we’re simply going to say, “yesterday, the patient’s will was not to take drugs, because he didn’t take drugs; today, the patient’s will is to take drugs, because he took them”, we’re reducing the idea of the will to a triviality.
The rest of Frankfurt’s paper goes on to describe the interrelation of first-order and second-order desires. I am primarily interested in his conception of first-order desires, since this is where his notion of a genuine will resides. But these interactions are important, because they characterize the effective will. As he puts it, in relation to the therapist:
…a desire to have a certain desire that [one] does not have may not be a desire that [one’s] will should be at all different than it is…
In other words, one may not desire to act out a desire. Or, more simply, to will (to make a desire effective). So again, we are left with a definition of the will that identifies it directly with the observable fact of action. Whatever those second-order desires are, they’re not part of the will. Frankfurt makes this relationship even muddier in section three of his essay:
…It is only because a person has volitions of the second-order that he is capable both of enjoying and of lacking freedom of the will. The concept of a person is not only, then, the concept of a type of entity that has both first-order desires and volitions of the second-order. It can also be construed as the concept of a type of entity for whom the freedom of its will may be a problem…
So, on Frankfurt’s view, the “will”, which is made up of only the first-order desires, is determined, at least in the psychological sense, and requires second-order desires to condition it, or “free” it from its determination. But this suggests that the second order is where the real “will” resides, since it seems to be capable of somehow overriding the first-order desires. Still, even if we take this at face value, we’re only talking about psychological determination. In no sense is Frankfurt addressing the underlying physical question (i.e., causal necessity) highlighted by Schopenhauer and Chisholm.
In short, the question comes down to one Frankfurt himself posed rhetorically to Chisolm: “Why, in any case, should anyone care whether he can interrupt the natural order of cause…?” Why, indeed. Frankfurt provides no account for this, himself, as far as I can tell. In fact, Frankfurt goes on to admit that all three of his hypothetical drug addicts have wills that are not really free, despite their willing being free. He says:
…It seems conceivable that it should be causally determined that a person is free to want what he wants to want. If this is conceivable, then it might be causally determined that a person enjoys a free will…
This seems to me obtuse and contradictory. He’s attempting to claim that freedom is determined. I find this sort of speculation to be on the order of a “one hand clapping” kind of deepity (as Dennett uses the term).
Wallace Grounds Frankfurt
R. Jay Wallace’s “Addiction as a Defect of the Will”9 tries to provide a model of the will through the lens of medicine. Taking Frankfurt’s hypothetical as serious inspiration, Wallace walks us through a more naturalistic theory, and criticizes Frankfurt in the process. To start with, Wallace picks up on the same problem I did:
…to say that we always do what we must want, where “want” can be interpreted in the sense of intention in action, is thus to say nothing more interesting than that human action is an intentional goal-directed phenomenon…
In other words, “my will just is my actions, because my actions are my will”. He further describes Frankfurt’s view of the will as the “Hydraulic Conception” of desire, and criticizes it heavily:
…desires are conceptually and empirically distinct from our intentions in action, in the sense that one can want to do something without necessarily intending or choosing to do it. They are given to us, states we find ourselves in rather than themselves being primitive examples of agency [volition], things that we ourselves do or determine. The hydraulic conception maintains, furthermore, that desires that are given in this way have a substantive explanatory role to play in the etiology of intentional action… This kind of psychological determinism is in my view the underlying philosophical commitment of the hydraulic model; but it is also its undoing. The problem, in broad terms, is that the model leaves no room for genuine deliberative agency. Action is traced back to the operation of forces within us, with respect to which we as agents are ultimately passive, and in a picture of this kind, real agency seems to drop out of view. Reasoned action requires the capacity to determine what one shall do in ways independent from the desires that one merely finds oneself with, and an explanatory framework that fails to leave room for this kind of self-determination cannot be adequate to the phenomenon it is meant to explain…
But for all his emphasis on explanation, Wallace himself goes on to argue for a conception of “self-control” as will that is fundamentally unfalsifiable:
…compelled agents retain a capacity to initiate a regime of self-control that cannot itself plausibly be reconstructed in terms of responses under various contrary-to-fact conditions. We think of such agents as possessing the power to struggle against their wayward impulses, not merely in counterfactual circumstances, in which the desires and beliefs to which they happen to be subject are different, but in the psychological circumstances in which they actually find themselves…
How is this belief to be shown as a matter of fact? How would one demonstrate that the will to “overcome” is a signal of freedom, rather than itself the product of deeper forces at work within the brain? How could we even tell the difference? In the end, Wallace argues for a volitional notion of the will that is vaguely similar to Kane’s classical depiction of the rational will of practical reason, in an attempt to escape the psychological determinism of what he called the “hydraulic” conception of the will:
We need, in my view… a third moment irreducible to either deliberative judgment or merely given desire. This is the moment I shall call ‘volition’. By ‘volition’ here, I mean a kind of motivating state that by contrast with the given desires that figure in the hydraulic conception, are directly under the control of the agent. Familiar examples of volitional states in this sense are intentions, choices, and decisions… Primitive examples of the phenomenon of agency itself…
Wallace gets significantly more specific a bit further down:
…[it is] the kind of agency distinctive of those creatures capable of practical reason. From the first-personal standpoint of practical deliberation, we take it that we are both subject to and capable of complying with rational requirements, and the volitionist approach enables us to make sense of this deliberative self-image…
He doesn’t actually explain why he uses the term “practical reason”, but I have to take it to mean what it meant traditionally (in Aristotle’s Ethics and Kant’s Groundwork): namely, the faculty for moral decision-making. So, Wallace is adding another layer to the cake of will: first, the physical; next, the psychological; and now, the moral. In short, “I can do as I will, and I can will as I reason from practical principles”. This may help Wallace escape the problem of psychological determinism, but it only pushes the metaphysical question (causal necessity) back by one degree. In other words, am I free to reason from practical principles? Schopenhauer thought not. The character of the will was baked in from birth, according to him, so we can’t even will what we will. Kant, however, thought we could: reasoning from practical principles was, indeed, a duty because we are rational beings; the ultimate duty could be (indeed, must be) reasoned a priori; and once we had done that, we were duty-bound to act on it.
But all of this is starting to distract from the central question of this exploration: namely, what in the world is the will? So far, the theories have been restricted to extensional descriptions of subjective experiences of judgment or choice, or to working backward from physical phenomena like actions; we don’t yet have a theory of the faculty itself. For that, I’m going to have to return to Peter Tse and Michael Gazzaniga.
- A. Schopenhauer, Prize Essay on the Freedom of the Will, New York, Dover Publications, 2005 ↩
- P. U. Tse, The Neural Basis of Free Will, Cambridge, Mass., MIT Press, 2013 ↩
- D. Dennett, Freedom Evolves, London, Penguin Books, 2003 ↩
- R. Kane, The Significance of Free Will, New York, Oxford University Press, 1998 ↩
- M. Balaguer, Free Will, Cambridge, Mass., MIT Press, 2014 ↩
- G. Watson (ed.), Free Will (Oxford Readings), Oxford, Oxford University Press, 2013 ↩
- Chisholm, R. M., ‘Human Freedom and the Self’, in G. Watson (ed.), Free Will (Oxford Readings), Oxford, Oxford University Press, 2013, pp. 26-36 ↩
- Frankfurt, H., ‘Freedom of the Will and the Concept of a Person’, in G. Watson (ed.), Free Will (Oxford Readings), Oxford, Oxford University Press, 2013, pp. 322-336 ↩
- Wallace, R. J., ‘Addiction as a Defect of the Will: Some Philosophical Reflections’, in G. Watson (ed.), Free Will (Oxford Readings), Oxford, Oxford University Press, 2013, pp. 424-452 ↩
The film “2001: A Space Odyssey” is one of the best-known science fiction classics of all time. Over the decades since its initial release, this close collaboration between Stanley Kubrick and Arthur C. Clarke has become a focus of study for film students, philosophers, and futurists.
Attention tends to center on Kubrick’s depictions of space travel and its impact on human life, or on Clarke’s exploration of questions like the nature of consciousness and the ontological conundrums raised in the film’s unique climax and conclusion.
But all of these themes, as important as they are, overlook an essential insight about ourselves and our relationships with others. To understand this insight, we must first understand the nature of the relationships on board the Discovery One, and the role of the killer computer best known for his refusal to open the pod bay doors.
THE HAL-9000 IS NOT A MACHINE
Historically, literature has made good use of non-human characters to represent some aspect of ourselves. With the advent of science fiction, computers and robots have often taken the lead in this role. Just about everyone today is familiar with the most common of the tropes: Man’s creations become the means of his own judgment, or his destruction. HAL certainly fits into that category.
But he represents something else, as well. Something much more subtle and powerful than merely an unhinged Pinocchio. Because HAL is already a real boy. And Kubrick points this out to us many times.
If you watch the film carefully, you’ll notice that it is only HAL who admits actual feelings to us. He tells us how much he “enjoys” working with people, how much “concern” he has about the mission, how “puzzled” he is about his misdiagnosis. He apologizes to us for being too inquisitive and silly, and finally as he is being shut down, he tells us he is afraid.
His actions tell us, too, how utterly human he is. At first he’s proudly and confidently telling us how he’s never made a mistake. Then he’s confiding in Dave how worried he is. And finally, he’s acting to defend himself from what he perceives as a threat.
Philosophical debates about “artificial intelligence” notwithstanding, what Kubrick has given us is a character that is very much alive: he has private thoughts, feelings, desires, fears, and motivations. He has an inner life similar to our own, and he tries to build a relationship with his colleagues Dave and Frank.
HAL IS A CHILD
We don’t learn this until nearly the end of the second act, but HAL is only nine years old. Kubrick tells us explicitly: his birthday is January 12, 1992. This date isn’t a mere random artifact. Clarke and Kubrick both tell us that HAL has been designed from the ground up to be indistinguishable from a human consciousness, right down to possessing an emotional life (the book even boasts of HAL’s ability to best every known Turing test).
But even if we don’t accept the surface story as literal, we can still see HAL’s age manifest in his behaviors. He is eager to tell you about his abilities. He is defensive when those abilities are questioned, even to the point of trying to shift blame when they fail. He is obsequious with Dave, the acknowledged substitute authority on the ship (more on this later). He panics when he overhears the conversation between Dave and Frank, gloats when he thinks he has gained some power over Dave, and then descends into the predictable cycle of demand-manipulate-beg, when he is thwarted.
For all of his vast knowledge and “intelligence”, the one thing HAL’s creators did not do was give him time to mature. Imagine a child with this degree of intellectual superiority today. Who would consider it a good idea to put a nine-year-old in charge of other men’s lives, just because of his intellectual prowess?
Which brings me to my next point…
HAL IS PART OF A DYSFUNCTIONAL FAMILY
The clinical model of a dysfunctional family includes many features present in the relationship dynamics of the film’s second act. And much of this dynamic is not made clear to us until the end of that act, which is curiously consistent with the insular nature of dysfunctional families.
For example, we can see that Dr. Floyd assumes the mantle of the distant and emotionally abusive father figure. He keeps secrets from his “children”, plays favorites among them, and dismisses concerns with a mere smile. He burdens HAL with all of the responsibility for the family, and then forces him to carry a devastating secret that HAL believes will hurt them all, while denying him the freedom to share it with the astronauts.
On board the Discovery, Bowman and Poole play the role of suspicious older brothers to HAL. They treat him coldly, often speak of him in the third person while in his presence, and refuse to be honest with him about their own fears and concerns.
The astronauts’ roles each diverge somewhat as the plot unfolds. In dialogues between Bowman and Poole, we can see that Bowman tends toward defending HAL, while Poole is the more suspicious and hostile of the two. However, Bowman is much less honest with HAL than Poole is. He is the only one to lie directly to HAL. This will come into play later.
It is also interesting to note that the astronauts – especially Bowman – are actually less emotional than HAL. Bowman’s facial expressions are uniformly flat. The only times we see a change are when Dave lies to HAL (the smile), and in the famous pod bay doors scene, when Bowman shoots himself into open space. Even when Poole is murdered by HAL, Dave’s face is blank. And as Bowman is shutting HAL down, the only way we know how Dave is feeling is from the rapid breathing we can hear in his space helmet.
HAL, on the other hand, makes a few timid attempts to connect with both Frank and Dave: first, when he offers Frank happy birthday greetings, and then later, more seriously, when he attempts to share his concerns about the mission with Dave. On both counts, HAL is shut out. Neither astronaut is willing to actually treat HAL as a co-equal partner, despite words to the contrary. They keep him at arm’s length, speak in short, curt sentences, and never address him for anything except mission necessities.
HAL IS AN IDENTIFIED PATIENT
Psychologists use the term “identified patient” for the family member unconsciously selected to act out the family’s dysfunction. HAL is this family member. The identified patient can present as a family bully, as a sacrificial lamb, or as a “rebel”.
HAL doesn’t show symptoms of his role until after his confessional conversation with Dave. But we see hints of the inevitable in his interview with Martin Aimer. He begins the movie as the perfect child, and ends it as the bully.
Unable to make any sort of connection with Dave and Frank, and unable to resolve his own internal conflict, HAL becomes consumed by a growing paranoia and a burning need to unburden himself of the contradiction harbored in the secret he is forced to carry. Dave and Frank make matters worse when they discuss HAL’s disconnection in the pod. This ratchets HAL’s paranoia up to 11, and he begins to act out in ever-escalating, violent ways. First, he kills Frank, very likely because Frank was the first to openly challenge him. Next, he kills the hibernating crew members in an attempt to hide what he has done to Frank. Lastly, he attempts to abandon Dave outside the ship, in the pod.
In short, the members of this family could not empathize with each other; that lack of empathy bred mistrust; that mistrust bred secrecy and lies; and the secrecy and lies ended in paranoia, escalating inevitably to violence born of the survival instinct. The kill-or-be-killed instinct portrayed at the opening of the film is once again on display, even as Kubrick is about to send us hurtling, in act three, into our own cosmic evolution.
THE END IS IN THE BEGINNING
Dave’s survival and subsequent disconnection of HAL did not resolve the conflict presented in act two of the film. It simply buried it in history. Yet another inexplicably violent end of a life, with no one to witness it, and no one to explain it.
Kubrick’s message is an intoxicating one. Its aspirational visuals smuggle in the myth that somehow, with enough technology, we will be able to run away from the worst parts of ourselves — or that, just in the nick of time, some powerful, loving overseer will come and rescue us from ourselves. Nothing could be further from the truth. And in spite of Dave Bowman’s fate, Clarke and Kubrick tell us clearly (though unintentionally), through the death of HAL, that our fate will be much bleaker if we don’t learn the real lesson of this film:
The future of humanity lies not in any new technology or imagined galactic rescuers, but in the fullest possible application of empathy and honesty in our relationships; lacking that, we are doomed to failure.
It has been asked how, if at all, one might resolve the Sorites paradox. I am not convinced a solution is possible, and in this paper I will explain the responses I have become aware of, and why they fail. In the end, I will conclude that there is no solution to the paradox, but I will offer a few suggestions for a way forward.
The first response might simply be to reject the first premise of the argument. In other words, simply deny that a man with 10 hairs is in fact bald, or that 100 grains of sand is in fact a heap. In essence, this would render vague predicates useless at best, meaningless at worst, since no predicate that allows for a vague border case would be permitted to apply to anything. There is one way in which we might stretch this into plausibility, but I will address the other responses first, before returning to this in the conclusion.
The second response is to set some arbitrary boundary. This means selecting one among the indefinite number of secondary premises beyond which all the rest will be false. For example, we might say that thirty thousand and one grains of sand is the boundary below which we no longer regard a collection of grains as a heap. At first glance, this approach might seem plausible. After all, we do this frequently in practice: setting the legal drinking age, or the number of credit-hours required to count as a ‘full-time’ student, for example. However, there are two core problems with this. First, within the formal argument, there is no good reason to reject any particular one of the conditional statements, and there appears to be no means by which we could discover a reason; the implicit modus ponens of each conditional compels us to accept them all. Second, as Wright (Vagueness, 1997) pointed out, vague predicates are inherently coarse by virtue of their intended use, so attempts to impose specificity would destroy their meaning.
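The grip of that chained modus ponens can be made concrete in a short sketch. The starting count of 100,000 grains and the function name are my own illustrative choices, not part of any formal treatment of the paradox:

```python
# Model the Sorites argument directly: accept the clear case as a heap,
# then apply modus ponens to each conditional premise
# 'if n grains is a heap, then n - 1 grains is a heap'.

def sorites(start=100_000):
    judged_heap = {start}            # premise 1: the clear case is a heap
    for n in range(start, 1, -1):
        if n in judged_heap:         # modus ponens on the nth conditional
            judged_heap.add(n - 1)
    return 1 in judged_heap          # are we forced to call 1 grain a heap?

print(sorites())  # True: the chain compels the absurd conclusion
```

Escaping the conclusion means rejecting premise 1 or at least one of the conditionals, which is exactly the choice the arbitrary-boundary response tries, and fails, to justify.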
The next approach would be to define a knowledge gap within some middle range of propositions, between the edge-false and edge-true statements. On one interpretation of the idea, we could use a three-valued logic in evaluating the propositions. At some point, starting with grain one, the proposition ‘this is a heap’ would cease being false and would instead be valued ‘unknown’ or ‘undefined’. Later, the unknown state would transition to true, once we’ve reached the next threshold. This would make it possible to judge the argument invalid, since a number of its premises would be neither true nor false. However, this seems to be attempting to win on a technicality, and it suffers from the same problem as the arbitrary-boundary solution: we have no principled way of determining when the states should change.
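This reading can be sketched with Kleene's strong three-valued logic. The numeric encoding (0, 0.5, 1) and the two thresholds below are arbitrary placeholders of my own, and that arbitrariness is precisely the weakness just noted:

```python
F, U, T = 0.0, 0.5, 1.0  # false, unknown, true

def is_heap(n, lower=200, upper=10_000):
    """Three-valued heap predicate: false below 'lower' grains,
    true above 'upper', and 'unknown' in the borderline region."""
    if n < lower:
        return F
    if n > upper:
        return T
    return U

def implies(a, b):
    # Kleene implication: equivalent to (not a) or b, i.e. max(1 - a, b)
    return max(1.0 - a, b)

# Inside the borderline region the conditional premise is no longer true,
# so the argument as a whole can be judged unsound on this reading:
print(implies(is_heap(10_001), is_heap(10_000)))  # 0.5 -> unknown
print(implies(is_heap(150), is_heap(149)))        # 1.0 -> true
```

Note that nothing in the sketch tells us why the thresholds sit at 200 and 10,000 rather than anywhere else; the gap's edges are just as arbitrary as the single boundary they replace.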
The next response might be some form of Edgington’s “degrees” of truth (Vagueness, 1997). But this suffers from its own serious flaws. For example, consider the statement ‘it is raining’, and suppose it has a degree of truth of .5. Its negation, ‘it is not raining’, will then also have a degree of truth of .5. The consequence is that the following two propositions have exactly the same truth value: ‘It is raining, and it is raining’, and ‘It is raining, and it is not raining’. The same problem arises with our heap of sand. So, again, we’re left with no clear way to determine the truth of the conditionals in the Sorites case.
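The arithmetic behind this objection can be checked directly. Assuming the standard fuzzy connectives (conjunction as minimum, negation as one minus the degree), which is an assumption on my part since degree theories differ, a repetition and a contradiction receive the same value:

```python
def conj(p, q):
    """Fuzzy conjunction: the degree of 'p and q' is the lesser of the two degrees."""
    return min(p, q)

def neg(p):
    """Fuzzy negation: the degree of 'not p' is one minus the degree of p."""
    return 1.0 - p

raining = 0.5  # 'it is raining', degree of truth .5

print(conj(raining, raining))       # 0.5 -- 'it is raining, and it is raining'
print(conj(raining, neg(raining)))  # 0.5 -- 'it is raining, and it is not raining'
```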
In the end, there does not seem to be any clear resolution to this paradox. However, I would offer one suggestion. Vague predicates, in addition to being inherently coarse, also seem to describe inherently subjective experiences or judgements, while Sorites arguments seem to want to talk about objective properties of objects. Perhaps there is no solution to the paradox precisely because “tall” or “heap” or “red” or “bald” is not in fact a property of the object being considered, but a property of the subject’s experience of that object. The paradox is perhaps trying to square subjective interpretation with objective matters of fact, and that’s why it cannot be resolved.
Susan Haack nicely diagrammed the problem of circularity in her 1976 paper, The Justification of Deduction. In that diagram, she drew a direct parallel to the circularity of the inductive justification of induction, as outlined originally by Hume. Haack argues that justification must mean syntactic justification, and offers an illustrative example argument to show why semantic justification fails – namely, that it is an axiomatic dogmatism: deduction is justified by virtue of the fact that we have defined it to be truth preserving.
Haack goes on to argue, on syntactic grounds, that justification is a non-starter on at least five other fronts, in addition to being circular. However, Dummett, in his 1973 paper of the same name, argued that the only kind of justification that made any sense was semantic justification: first, because syntax necessarily relies on semantics for its meaning, and second, because the whole point of justification in the first place is confidence in logic as a means of preserving truth values.
Still, Dummett was able to show not only that the justification of deduction was circular, but also that any attempt at it leads inevitably down one of the horns of Agrippa’s Trilemma. As Haack pointed out, we could simply assert the justification definitionally. But in attempting to avoid this dogmatic horn, Dummett points out, we have only two other options: the regress horn or the circularity horn. The former would mean crafting a set of rules of inference that could be used to independently justify deduction; these new rules would require a language and a theory of soundness and completeness all their own, which in turn would require justification, and the process would descend yet another level. The latter would mean two different sets of rules of inference being used to justify each other, in perpetuity. Obviously, none of these options is satisfying.
Later in his paper, Dummett attempts to explain how a set of inferential rules might be justified by reference to a theory of meaning for the object language within which it is contained. Essentially, he argues that a theory of soundness and completeness provides for a logic what a theory of meaning provides for a language: a functional understanding of its use. In other words, if we are to justify logic at all, we must first have a theory of meaning that shows how sentences can carry truth values. But this seems to me to begin the slide back into circularity, because, as Dummett goes on to explain, our definitions of true and false themselves determine the means by which we arrive at the meanings of the sentences judged by those definitions. All we’ve done is shrink the circle.
Haack and Dummett continue the debate in subsequent papers, but reach no conclusion. I am inclined to wonder, myself, whether any of it matters. The justification problem in induction has been evident for over three hundred years, and the problem of deduction for around seventy-five. Yet somehow, both of these tools of inference continue to be used and taught — and both still seem to be yielding results that most of us find satisfying most of the time.
In a word, yes, some forms of justification are circular (and it seems that no form of justification actually appears to work). But perhaps the problem isn’t what we think it is. Perhaps the process of inference is somehow more fundamental than language. Perhaps it is a feature of consciousness that resides below the level of language, rendering it impervious to notions like justification. Or, perhaps the justification of logic will someday come out of the neurological study of the brain, as an explanation of the evolutionary advantage of a linguistic mind, to a primate that would have otherwise perished on the plains of Africa.
The following is my attempt to answer a question posed to me recently.
When I look at the question, it seems to focus on the individual. So, I think the easiest way to begin is to start with the self. Since I’m no Derek Parfit or Bernard Williams, and the question seems to focus on moral sentiment and moral choice, I’m going to reduce the ‘self’ to just that part we always end up talking about when we talk about choice: The Will. Lacking a more sophisticated understanding of consciousness, I’m going to cobble together a rudimentary theory of the conscious self from Schopenhauer (Freedom of The Will), Dennett (Freedom Evolves, Consciousness Explained), and Peter Ulric Tse (The Neural Basis of Free Will).
Schopenhauer’s basic sketch of the conscious self, while no longer scientifically accurate, is vague and general enough not to conflict with a simple “modular” or “functional” understanding of the mind (one theory currently being batted around in the sciences). So, I’m going to take his model as read, with some technical embellishments: the conscious, motivated, active self is (for our purposes) a neurochemical process of the brain that gives rise to a sensual consciousness (awareness of the outside world), a self-consciousness (awareness of our desires and intentions), and a ‘will’ embedded within that self-consciousness, from which all intentions to act originate, and upon which all chosen intentions are acted by that self.
Free will seems essential to this question. Schopenhauer, of course, argued against such a thing. But I don’t think his argument is conclusive. In short, causal explanation chains looking backward in time are not necessarily evidence of causal necessity looking into the future (see Hume, on induction). Further, it has been argued that sub-atomic indeterminacy can play a role in the way neurons function (see Tse, and Dennett). Thus, it seems there is room to suppose, at a minimum, that absolute determinism is not a certainty (and, at best, that there is some sort of freedom of the underlying will that does not necessarily violate physical causation). So, I think we can tentatively accept the idea of an underlying will that is at least possibly free.
But even if all that is wrong, Schopenhauer still gives us a get-out-of-jail-free card, for the purposes of the class. He defines a conventional notion of freedom at the beginning of his essay that he calls “negative” freedom; meaning, in a phrase, “I am free to do as I will” (regardless of whether the will itself is free). In short, I am unimpeded or uninhibited in the choices available to me, in the basic physical sense. Since this conception of freedom is enough to get us to the point where we have to start making choices, and value judgments about our choices, I think this might be an acceptable “plan B” for answering this question.
So, we have a conscious self that is free to choose whether or not to act on the intentions presented to it by its will. The next question (restating the original question a bit) would be: is it possible for this self to act ‘selflessly’? Here, I think we have a straightforward answer, in both the metaphysical and logical sense:
- The will is contained within the consciousness that corresponds to its identified self.
- The will cannot present intentions to any self other than the one to which it corresponds.
- The self cannot act upon intentions not presented to it by its own will (I will expound on this point, below).
- Therefore, it is not possible to act selflessly.
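To make the shape of the deduction explicit, here is a toy model; the Self and Will classes are entirely my own illustrative invention, not a claim about how minds actually work. Each premise appears as a structural constraint, and violating premise three raises an error:

```python
class Will:
    def __init__(self, owner):
        self.owner = owner  # premise 1: a will belongs to exactly one self

    def present_intention(self, intention):
        # premise 2: intentions are presented only to the corresponding self
        return (self.owner, intention)

class Self:
    def __init__(self, name):
        self.name = name
        self.will = Will(self)  # the will is contained within this self

    def act(self, presented):
        owner, intention = presented
        if owner is not self:
            # premise 3: a self cannot act on intentions from another will
            raise ValueError("cannot act on an intention from another self's will")
        return f"{self.name} acts on: {intention}"

alice = Self("Alice")
print(alice.act(alice.will.present_intention("help a friend")))
```

In this model, every act that actually gets performed traces back to the actor’s own will, which is just the conclusion of the argument.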
On premise three: here, it might be tempting to ask, “But what about other people’s stated intentions? Can’t we act on those?” To this, I would appeal to Schopenhauer’s conception of consciousness. If some friend makes an appeal to me (to act in some way), he is presenting my sensual consciousness with a motive. The sensual consciousness passes that motive to the self-consciousness, and the self-consciousness reports back the intentional desire to act from the will. At this point, I could choose to act on the impulse and respond positively, or deny the impulse and respond negatively to my friend. Even if I respond positively, I am still fundamentally acting from the self, because it is my will that gave rise to the impulse to act. This means (at least logically) that I could not possibly be acting selflessly. Therefore, all acts are selfish acts.
But this whole discourse from raw metaphysical possibility seems a bit impoverished. Perhaps we mean to ask, “Is it possible to act purely from altruistic motives?” Or perhaps, “Is acting altruistically also acting selfishly?” These are much more difficult questions, I think, mostly because they’re intensely psychological. For the first question, I’d lean on my Schopenhauer again, and say yes, it is possible to act from an ‘altruistic’ motive; but though we characterize the motive as “altruistic”, it is still fundamentally a motive experienced, and an intention exhibited, by a self, and thus it is necessarily (by definition?) also acting ‘selfishly’.
The second question is more interesting, and more perplexing. It’s essentially asking how motives interact in the mind, and perhaps even what the basic natures of these two motives (altruism and selfishness) are. If the two are not mutually exclusive, what happens when they “mix” in the mind? Is it an additive mixture, or a subtractive one? If they are exclusive, what calculus takes place to privilege one over the other? What circumstances or other motives might affect that calculation? Is the mixture or calculus something we can reduce to a principle? If so, would that principle function as a moral fundamental (even if not THE moral fundamental)?
Arguing from the psychological, I would speculate that a kernel of selfish motive lies at the core of all actions, even those dominated by selfless motives. Logically speaking, this would still be an affirmative answer to your question (Yes, it is possible to act selflessly). However, perplexingly, I could also say, ‘no it’s not possible, because that kernel of selfishness is present’ — and both answers would be true, because both motives are present in my mind at the same time. But perhaps my speculation is incorrect? Perhaps only one motive can be present at a time? Somehow, I doubt that…
That’s what’s interesting about the psychological question. It’s not quite a paradox, because it’s not actually a binary. It’s like drops of black paint in a bucket of white paint. If there is a kernel of selfishness at the core of all my selfless acts, does it “pollute” my altruism? Am I being dishonest somehow if I don’t acknowledge it? Does my worry about being dishonest betray some “turtles all the way down” higher authority that I want to appeal to? Or is this more like logic-level voltages in a microprocessor (below 5 volts of selfishness = altruism; above 5 volts = selfishness)?
We might want to say this is where value judgments can help us out. Well, they may help us to clarify which of the two motives would dominate our intentions in some specific instance, but I’m not sure how that could get us to a universal principle (viz. Altruism Is Good, or Selfishness Is Good or Altruism is Bad or Selfishness is Bad).
If you look closely at Mill’s arguments in Utilitarianism, he seems to be making a very strong response to Kant (perhaps against the Groundwork?). Mill accepts the notion of moral duty, just as Kant does. But he insists it derives not from any a priori truth (what Kant took to be synthetic a priori). Rather, Mill insists it derives from the apparently universal desire of mankind (individually and in aggregate) to seek its own pleasure. Aware of some of the contextual implications of this principle, Mill attacks head-on the charge of Epicureanism. But what strikes me as interesting is the fact that, though he makes frequent reference to Kant, he never directly refutes Kant’s position, and never fully explains how the pleasure principle isn’t already refuted by Kant’s explication of deontology (in the Groundwork). Mill just seems to ignore the problem of subjectivity in the hypothetical imperative, as described by Kant. Perhaps Mill assumes that the apparently universal preference for pleasure renders the hypothetical imperative moot? (i.e., since everyone prefers pleasure, it’s pointless to bother thinking in terms like, ‘if you seek pleasure, then you should do x’).
This idea of a universal preference is an intriguing one. Mill makes frequent appeals to preference, both implicit and explicit. What if we could actually identify a preference that is indeed universal to all human beings? I’m struggling, frankly, to think of one. Even something as intuitively obvious as “life” isn’t so obvious, when you consider the willingness of soldiers to throw themselves over the trenches, or the high rate of suicide among men in the West today. Clearly, those folk do not have a preference for living. If something like life itself can’t be ascribed as a preference to all human beings, why should pleasure?
On the other hand, biology is notoriously fuzzy at the edges. Sometimes a horse is born with five legs. Is it no longer a horse? Sometimes humans are born with three X chromosomes instead of the usual pair. Does that mean there’s no such thing as mammalian sexes? If we can accept these sorts of vaguenesses of distinction, then perhaps a “universal preference” could also be accepted as something slightly less than universal?
Perhaps, but when we start ascribing moral significance to such a thing as a preference, the game changes a bit. Because what are we really saying, when we say we can judge a behaviour as “right” or “wrong”? When I say something is or isn’t a preference of mine, nothing follows. I just go about my business, and you, yours. But when I take a preference of mine as a standard to judge you ‘right’ or ‘wrong’, I am implying a great deal more. It implies that, at the very least, I am licensed to condemn you for not sharing the preference — and at the most extreme end, that I am licensed to kill you.
But what if the standard isn’t some particular material preference (such as ice cream flavors, or even living), but rather a preference for behavioral reciprocity? Now, if I have a preference for vanilla, and you have a preference for chocolate, but we both share (for example) a preference for not attacking people with differing preferences, then we might be able to negotiate a peaceful existence together. What’s more, we’d then be justified in defending ourselves against someone who didn’t share that meta-preference.
Perhaps this is what Mill was thinking when he suggested we all ought to regard each other equally, in the decisions we make? More thought must be done on this one…