Is There An Objective Morality?
Jason McKenzie Alexander | Professor of Philosophy at the London School of Economics and author of The Structural Evolution of Morality.
Read time: approx. 10 mins
In 1920, the U.S. introduced a nationwide ban on alcohol by passing the Eighteenth Amendment. It later reconsidered and repealed the ban in 1933 with the passage of the Twenty-first Amendment. In 2015, the killings prompted by the Charlie Hebdo cartoons made westerners acutely aware of prohibitions against representations of the Prophet Muhammad in Wahhabi Islam. Yet there exist examples of Islamic art from the 13th and 15th centuries which freely contain such representations. And, lest we forget, the history of Christianity also features people like John Calvin, who not only banned representations of God but, like the Taliban, forbade dancing as well, and condemned music as sinful.
People disagree, sometimes extremely, over what they consider morally permissible. Yet despite all these recognised historical variations in conceptions of morality, people generally hold their own moral beliefs to be correct. Stepping back and taking an anthropological point of view, we seem to be wired to have the capacity for morality, while allowing for variability in what is understood to be moral. What’s going on? Is every society that came before us, or that coexists with us, and that has different moral beliefs, simply mistaken? Or is there a more subtle and nuanced way to understand moral differences?
Metaethics is the study of what morality itself is, rather than the study of what morality requires. One of the central debates within metaethics concerns whether moral statements are even capable of being true or false. Statements which are capable of being true or false are known as “truth-apt”. Given a statement which is truth-apt, it is then a further question to ask what it is about the world which makes that statement true or false. Setting aside, for the moment, the question of what exactly grounds the truth of moral statements, it certainly seems as though moral statements are truth-apt. When we say “Lying is wrong,” that is generally understood as describing a fact — some kind of fact — of the world. Furthermore, we can make inferences from general moral principles, like “Lying is wrong,” to arrive at prohibitions of specific actions, like “It is wrong for you to tell your boss you are sick when you are not.” Indeed, our ability to construct arguments involving moral statements is one of the main reasons (due to Gottlob Frege and Peter Geach) for thinking that moral statements are truth-apt, a view known as cognitivism.
Yet one apparent problem with cognitivism is that it threatens to turn moral disagreement into a stark binary conflict. If moral statements are true or false, and one person believes that X is permissible, and another person believes that X is prohibited, then one of them must be wrong. This has motivated some philosophers to develop alternative theories of what morality is. According to one such theory, known as emotivism, moral statements only appear to be truth-apt. Emotivism has been around for a long time, appearing in Hume's 1751 work An Enquiry Concerning the Principles of Morals, and A. J. Ayer's 1936 logical positivist text Language, Truth and Logic. From the emotivist's perspective, moral statements really just express the affective state of the person making the utterance. On this view, a statement like “Lying is wrong” vocalises the speaker's negative attitude towards lying. Similarly, a statement like “It is good to help the poor” expresses a positive attitude towards acts of helping the poor. Uttering moral statements, on this view, amounts to little more than cheerleading. Because of this, emotivism is sometimes (somewhat pejoratively) called the “boo-hurrah” theory of morality. Why? Because statements like “Murder is wrong” and “It is good to be kind” are expressively equivalent to “Murder — Boo!” and “Kindness — Hurrah!”
Despite its counter-intuitive nature (is that really all there is to morality?), one virtue of emotivism is that it explains how moral disagreement is possible without requiring at least one person to be wrong. Differences in affective attitudes occur all the time: one person likes kale, another person can’t stand the stuff; one person may love Proust, whereas another prefers Bukowski. There’s no accounting for taste. Yet this solution to the problem of moral disagreement seems to come at too high a price. It’s hard to shake the belief that there’s more to morality than just emotional expressions of approval or disapproval.
Perhaps we can make sense of the phenomenon of moral disagreement by adopting a different conception of morality. To begin with, note that many moral requirements (within a monocultural society) serve to resolve, reduce or prevent interpersonal conflict. Given this, it’s helpful to approach the study of morality from the point of view of game theory.
Game theory is the branch of mathematics which analyses interdependent decision problems. An interdependent decision problem is one where the outcome depends on the choices and actions taken by multiple individuals. Conflict often ensues when people with different preferences are all trying to simultaneously bring about the outcome that each of them, individually, would like to see realised.
Morality can be conceived of as a social technology that provides guidance on how people should behave when facing interdependent decision problems. The social technology of morality helps us navigate, negotiate or even mitigate the conflict that arises from people having preferences which are not all capable of being satisfied at the same time. Essentially, morality instructs us to behave in ways that are conducive to satisfying everyone’s individual preferences to the greatest extent possible, given the existence of other people.
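For readers who like to see the idea made concrete, the canonical interdependent decision problem is the Prisoner's Dilemma. The following Python sketch uses purely illustrative payoff numbers (higher means more preferred); it shows why individually rational choices can leave everyone worse off, which is exactly the kind of conflict a social technology like morality exists to manage.

```python
# A toy interdependent decision problem: the Prisoner's Dilemma.
# Payoff numbers are illustrative assumptions, not from the article.
# Each entry maps (row_choice, col_choice) -> (row_payoff, col_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(my_options, their_choice):
    """The choice that maximises my own payoff, holding the other player fixed."""
    return max(my_options, key=lambda mine: PAYOFFS[(mine, their_choice)][0])

options = ["cooperate", "defect"]

# Whatever the other player does, defecting pays more individually...
assert best_response(options, "cooperate") == "defect"
assert best_response(options, "defect") == "defect"

# ...yet mutual defection leaves both players worse off than mutual cooperation.
assert PAYOFFS[("defect", "defect")] < PAYOFFS[("cooperate", "cooperate")]
```

A shared rule such as "keep your agreements" functions here as the social technology: it steers both players towards the mutually preferred outcome that neither would reach by individual calculation alone.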
As an illustration, think of the lesson from the parable of the Good Samaritan. A traveller is beaten up and robbed by a gang of bandits and left for dead. Several people pass by and ignore him; only the Samaritan stops to help. Now, it is true that helping people in need is costly: it takes time and effort. We can easily imagine how people might think it is in their interest not to help a stranger in need when they have other pressing concerns. Yet concentrating on the immediate cost at the time of action is too narrow a conception of one’s individual interest. Recall the proposal that morality enables the satisfaction of each person’s individual preferences, to the greatest extent possible, given the existence of other people. Some people have diametrically opposed preferences — like the bandits and the traveller in the parable — and other people have preferences which are only partially misaligned — like a competitor who wants to win but who does not actually harbour ill will towards the competition. Given these possible conflicts, there is always some ineliminable chance that a person might find themselves in need, for whatever reason, despite their best efforts to look out for themselves. The parable recommends a behavioural rule that effectively creates an unofficial, unregulated, implicit insurance scheme distributed across the population for use in times of need. It is also an early example of what has come to be known as indirect reciprocity. The recipient of the benefit from the Good Samaritan will not return the favour to his helper, but rather is told to go and do the same for others. Instead of paying it back, the traveller should pay it forward. The chain of reciprocity is not closed, but self-perpetuating.
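The "implicit insurance scheme" reading can be checked with a small simulation. All the numbers below (cost of helping, loss from being left unhelped, probability of falling into need) are hypothetical assumptions chosen for illustration; the point is only the qualitative comparison between a population that follows the helping norm and one that does not.

```python
import random

random.seed(0)

HELP_COST = 1      # cost of stopping to help a stranger (assumed)
NEED_LOSS = 10     # loss suffered if in need and left unhelped (assumed)
P_NEED = 0.1       # chance of finding oneself in need in a round (assumed)

def average_welfare(population_helps, rounds=100_000):
    """Average per-round welfare for one agent, in a population that either
    universally follows the helping norm or universally ignores the needy."""
    total = 0.0
    for _ in range(rounds):
        if random.random() < P_NEED:
            # I am the one in need this round: helped, I lose nothing;
            # unhelped, I suffer the large loss.
            total += 0 if population_helps else -NEED_LOSS
        elif population_helps and random.random() < P_NEED:
            # I encounter someone in need and pay the small cost of helping.
            total -= HELP_COST
    return total / rounds

# The small, recurring cost of helping buys insurance against the
# rare but large loss of being left unhelped.
assert average_welfare(True) > average_welfare(False)
```

Under these assumptions the helping norm costs each agent roughly 0.09 per round on average, against an expected loss of 1.0 per round without it; the norm pays for itself precisely because anyone can end up in need.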
Ultimately, morality as a social technology will take different forms because not all societies are attempting to solve exactly the same kinds of problems under the same kinds of constraints. And, even if different societies operate under the same kinds of constraints, social problems often admit multiple, different solutions which are functionally equivalent, like whether a country drives on the left or right side of the road. And that variation exists before we introduce the further complexity that different societies may have important differences in their values, or the way in which they order values in terms of their relative importance. Morality, like other forms of technology, can result in competing, incompatible standards, each of which attempts to solve the same underlying social problems.
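The driving-side example can be stated precisely in game-theoretic terms: it is a coordination game with two equally good equilibria. The payoffs below are, again, illustrative assumptions; what matters is that coordinating is good and failing to coordinate is bad.

```python
# A symmetric coordination game: both drivers choosing the same side is good,
# a mismatch is disastrous. Payoff numbers are illustrative assumptions.
# PAYOFF[(mine, theirs)] is my payoff given the two choices.
PAYOFF = {("left", "left"): 1, ("right", "right"): 1,
          ("left", "right"): -10, ("right", "left"): -10}

def other(side):
    return "right" if side == "left" else "left"

def is_equilibrium(a, b):
    """Neither driver gains by unilaterally switching sides."""
    player1_stays = PAYOFF[(a, b)] >= PAYOFF[(other(a), b)]
    player2_stays = PAYOFF[(b, a)] >= PAYOFF[(other(b), a)]
    return player1_stays and player2_stays

# Both conventions are stable equilibria, and neither is "more correct":
assert is_equilibrium("left", "left")
assert is_equilibrium("right", "right")
# A mismatch is not stable: either driver would rather switch.
assert not is_equilibrium("left", "right")
```

This is the sense in which functionally equivalent solutions can coexist: once a society has settled on either convention, no individual has any reason to deviate, even though a different society settled on the other one.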
Does this amount to a rejection of an objective notion of morality? Not at all. To see this, we must recognise that the antonym of objective is not relative, but subjective. To say that a claim is objective is to say that its truth does not depend on the particular speaker. The antonym of relative is absolute (or, if you like, universal). To say that a claim is relative is to say that its truth depends upon a particular context. This gives us the following four-fold set of possibilities:
                    Absolute                           Relative
  Objective     truth independent of both          truth independent of the
                speaker and context                speaker, but dependent on context
  Subjective    truth dependent on the             truth dependent on both
                speaker, but not on context        the speaker and context
When people are concerned about morality being relative, often they are just referring to an unease about morality being subjective — in that the truth depends purely on the attitudes of the individual speaker. This, as we can now see, is where emotivism went awry: it attempted to address the problem of moral disagreement by making moral statements nothing more than expressions of affective attitudes, essentially making morality subjective. Yet, as we have seen, another way to address the problem of moral disagreement is to understand moral statements as objective, yet relative.
Thinking of morality in this way provides a helpful frame for many recent events. Just as in the 1960s, we are now experiencing profound tension in our societies as multiple competing forces seek to rewrite the rules about what is morally acceptable. This tension is understandable. If a society's conception of morality is a social equilibrium, any attempt to move to an alternative conception, seen by its advocates as a local improvement, will be challenging. It will be challenging because moving from one equilibrium to another requires spending time in the out-of-equilibrium transition, when the exact nature of the moral expectations is less well understood. This should not, in itself, be feared. Rather, it should be seen as the unavoidable price of social improvement, provided all are acting in good faith. (What should be feared is when people act deceptively or in bad faith.)
A second helpful point to note is that morality, as a social technology, is like any other form of technology: it can be lost. Recall that the technology of the Roman Empire included hot and cold running water as well as underfloor heating, which disappeared during the Middle Ages. The social technology of morality includes concepts such as universal human rights, international law, the importance of human dignity, individual autonomy, privacy, and freedom of speech, as well as tolerance and respect for others. We should not blithely assume that our current moral beliefs will always endure, or that, when replaced, they will be replaced by beliefs we would consider an improvement. We have created a society with increased tolerance towards racial differences, featuring more equal treatment of men and women, and greater understanding and appreciation of the variation in sexual orientation. We have eliminated capital punishment, and no longer believe it acceptable to punish children by beating them. Yet these and many other gains are fragile. Moral truths are not written into the fabric of the universe, but are socially constructed. The moral truths we accept do not obey Lady Macbeth’s maxim, for what’s done can easily be undone.