Tuesday, 31 October 2017

Why Massimo Pigliucci is Wrong About Moral Psychology

In hindsight, this post may give the impression that Pigliucci's views are more rigid than they really are. That doesn't affect any of the arguments below, but Pigliucci has in other contexts expressed a much more flexible view than the rigid moral realism I read into his article. See, for example, this YouTube video on meta-ethics.

See also a second post in this series here: What Does Normativity Add to Moral Discourse? and a third post here: Why Insights in Evolutionary Moral Psychology Help Resolve Long-Standing Meta-Ethical Questions.

Early last year, Massimo Pigliucci posted a blog essay criticising aspects of cognitive and moral psychology, especially the latter. Pigliucci is a Professor of Philosophy at CUNY-City College and one of the most visible contemporary academic philosophers. He’s also a scientist, which is relevant to this particular issue. His post was inspired by an article by philosopher Tamsin Shaw. I’m writing this in response to Pigliucci’s post – rather than Shaw’s article – because I think he manages to condense the important criticisms, while also adding some useful analogies.

His main criticism is this: moral psychologists frequently make normative moral claims under the guise of science. The problem, as Pigliucci points out, is that this violates David Hume’s is-ought gap: no amount of knowledge about how people behave can tell us how they ought to behave. The two are separate domains. What’s really happening, he suggests, is that moral psychologists are inserting their personal moral beliefs, without acknowledgement, to bridge the gap.

Pigliucci isn’t just pointing out a logical fallacy and warning about its potential to mislead people, though. He’s also objecting to what he sees as an attempt by moral psychologists to encroach on the field of moral philosophy. For Pigliucci, it seems, the domains of descriptive and normative morality map onto the fields of moral psychology and moral philosophy, respectively, and each field should stick to its area of expertise.

***

Pigliucci draws on a couple of analogies to illustrate his criticism. First, he contrasts a mathematician with someone studying other people doing mathematics: the latter, he argues, can never replace the former. Second, he imagines a scientific argument between himself and a creationist. While both parties may engage in similar psychological processes, there is a crucial difference, Pigliucci argues: his beliefs correspond to the facts.

These analogies illuminate Pigliucci’s model of morality: normative moral statements are theories of a moral realm, just as evolution and creationism are theories of the natural world. This implies two things. First, that studying people engaging in morality, as moral psychologists do, adds an intermediary to the study of morality. In other words, moral philosophers study the moral realm directly, while moral psychologists only do so indirectly, through other people. Second, some moral theories are truer than others, and moral psychologists have no method of determining which ones. To do so, one must study the moral realm directly and then compare it to various theories that people hold.

On this view, of course, Pigliucci’s criticisms of moral psychology make sense. No psychologist would dream of proposing mathematical or physical theories based on cognitive studies, so why should morality be different? Why do some psychologists think they can derive morality from studying the human mind, but not mathematics or physics?

***

The answer is that it’s not clear Pigliucci’s model is correct. In fact, his model assumes precisely the thing that moral psychology disputes: that moral beliefs are theories of a moral realm. Jonathan Haidt’s work, in particular, demonstrates that people’s moral views are driven almost entirely by moral intuitions, and that what people take to be moral deliberation is really rationalisation of these underlying intuitions. The potential implication, although Haidt doesn’t explicitly draw it himself, is that there is no moral realm; humans trick themselves into thinking their moral beliefs are theories of an objective moral realm, perhaps so they sound more convincing to others.

Haidt’s examples of moral dumbfounding provide strong evidence that rationalisation-posing-as-rationality works even on the subjects themselves. (People wouldn’t be dumbfounded if they knew they were rationalising.) This suggests, at the very least, that we should be sceptical of our belief in a moral realm, however obvious it seems.

There is also a different area of human activity that reveals a strikingly similar pattern: religious mythology. Take the Jewish biblical prohibition on eating pork. Presumably, this developed as a social norm to avoid disease and/or to foster community. But the ancient Jews didn’t describe it as such. They described it as a command from God. (And most likely believed that it was.) There are many religious myths, across many different cultures, some far more elaborate than this one, which seems to demonstrate humans’ propensity not only to rationalise their behaviour but also to believe the rationalisations they produce.

Consider how poorly a model of moral beliefs as a theory-of-a-moral-realm works here. Do we really want to label the belief that eating pork is morally wrong as true or false? That seems to miss the point entirely. Surely a much better model is that this moral belief is a codification of that society’s social norms, embedded in a myth. In other words, the description is inward, not outward. It just appears to be outward. And that is essentially what Haidt is saying about individuals as well: our moral beliefs are descriptions of our moral intuitions, embedded in rationalisation.

***

Morality-as-individual-preferences is not a new philosophical view; it’s been around in some form since ancient Greece. But the historical objection to it has been that it lacks the universality that morality seems to have. This is addressed by the evolutionary aspect of the work of Haidt and other moral psychologists. While a person’s moral views are individual, describing his or her own moral intuitions, those intuitions have evolved to form universal modules, which natural and social forces can act on.

The benefit of this theory is that it combines a form of relativism with a form of universalism. Both people and societies can have different moralities, depending on their environments and experiences, yet these can still be understood through a universal framework. This seems to me a far superior explanatory tool to the view of morality as a theory of a moral realm. On that view, moral disagreement has really only one degree of freedom: ignorance of the moral facts. When dealing with different societies, or with people in radically different environments, this seems hopelessly inadequate.

***

What are the consequences of rejecting Pigliucci’s model of morality? Well, it reverses the accusation. If we assume that moral deliberation is really a process of rationalising one’s moral intuitions, rather than theorising about a moral realm, then moral philosophers are actually doing moral psychology. They’re essentially describing their own cognitive states. The only difference is that they’re doing it less methodically, and with a sample size of one. In other words, if moral philosophers are to stick to their field of expertise – theorising about the moral realm – and moral psychologists are to stick to theirs – describing how people engage in morality – there’s nothing left for moral philosophers to do.

There is merit in Pigliucci’s main claim, though. Moral psychologists clearly do make normative moral claims on occasion. Since people have a tendency to produce normative statements as rationalisations, that’s understandable. But it’s also a problem: as Hume argued, normative statements can’t be logically derived from descriptive ones, and smuggling them in harms the validity of the science. The solution, I think, is simply to strive to avoid normative language as well as unacknowledged value assumptions. It’s a question of precision.
