I’m a moral realist. That means that I think there really are some moral facts. It is wrong to do some things, and it is right to do some things, and this isn’t just a vent of emotion or an expression of my will, it’s really true. Stephen Law is also a moral realist, but if I’m reading him rightly in his debate with William Lane Craig on the existence of God or in his more recent discussion with me on the Unbelievable radio show where I discussed the moral argument for theism, he’d sooner give up moral realism than accept theism.
An argument I sketched in that discussion was that the best way to explain moral facts is by reference to God. Although he does currently believe in moral facts, he noted that they may not be there after all, so maybe there’s no reason to invoke God as an explanation. After all, he said, we can come up with an evolutionary explanation of why we would believe in moral facts whether they really existed or not. Law wants to be careful here. At the time I raised the concern that this may just be a case of the genetic fallacy: offering an explanation of where a belief came from as though this showed or suggested that the belief is false. But this isn’t what Law means to say, he replied. The point is not that the existence of an evolutionary account of why moral beliefs exist shows that those beliefs are false. That would indeed be the genetic fallacy at work. No, the point is that whether those beliefs are true or false, there exists the same evolutionary account for why we hold them – and that account is unaffected by their truth or falsehood. There is thus no particular reason to think that the evolutionary processes that brought them into being are likely to produce true-belief forming processes.
While this line of argument does not purport to show that the moral beliefs we hold aren’t true, it’s meant to cast doubt on the probability that the process that gave rise to these beliefs (or at least the process that gave rise to the relevant belief forming processes) is likely to result in either true beliefs or reliable belief forming faculties. It’s best to think in terms of the latter, if only because it’s downright bizarre to think that evolution forms beliefs. It plainly doesn’t, but it does form mechanisms or processes that creatures use to form beliefs.
So what should we make of this? Can we give an evolutionary account of why we would believe in moral facts, an account that is blind to the actual existence of those facts? Secondly, if we could give an account like this, would it undermine the probability that the processes that form those beliefs are reliable? I will give two answers: yes, it is trivially true that we can give an account like this, and no, the fact that we can do so should not undermine our confidence in the belief forming processes that produce moral beliefs. In doing so I will be drawing on an argument by Alvin Plantinga, namely the “evolutionary argument against naturalism.” While I am inclined to think that argument is unsound, many of the insights it draws attention to are true nonetheless.
It’s trivially true that we can put our imagination to work and come up with an evolutionary story about why it is we form beliefs about moral facts – a story that isn’t concerned with whether or not those beliefs are true. I say that it’s “trivially” true because it’s obviously true, and because we can actually engage in this sort of story-telling for a wide range of belief types, not just moral beliefs. In any evolutionary account of how a given creaturely function came into being, we’re explaining why there might have existed an adaptive advantage for that creature to have that function, and hence why that function (or change in function) might have been preserved. The emphasis falls on whether or not a function confers an adaptive advantage, since the only thing that evolutionary development “cares” about is producing creatures that are better at surviving and reproducing. Since this is true of all creaturely functions, systems, parts or processes for which we wish to provide an evolutionary account, it is also true of our belief forming apparatus and processes. This can be a fairly strange concept to those approaching the issue for the first time, but bluntly stated: Evolution just doesn’t care (in principle) whether or not our beliefs are true. (The caveat “in principle” is important, and it has to do with why I ultimately don’t agree with Plantinga – or Law, but I’ll comment on that soon.) Just as long as we can come up with a story about how holding a certain type of belief might confer an adaptive advantage – quite apart from whether or not the belief is true – we’ve met the challenge, and we’ve told an evolutionary story about why we would hold beliefs even if they weren’t true.
The key thing to note about Law’s suggested argument is that it involves a break between evolutionary development and the development of reliable belief forming processes. The two don’t go together – or at least it is suggested that how closely they go together may well be inscrutable. Can we tell this sort of story about moral beliefs? Well actually, Dr Law didn’t exactly tell us how we might do that – but he was pretty sure that we could. And he’s right – we can. Before I give some examples of how that story might go, let’s look at other kinds of stories that we might tell. The fact is that in any given scenario, there is a vast array of false beliefs that would very probably give rise to behaviour that would be good in terms of adaptation and survival. Alvin Plantinga asked us to consider the example of Paul, a member of a species very much like ours (although in a pre-civilised age), on a planet very much like ours. Paul is confronted by a tiger. The most adaptive behaviour, let’s agree, is for Paul to flee as quickly as possible. But of course there are far more false beliefs than true beliefs that would get Paul to run away.
Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely that the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief. . . . . Or perhaps he thinks the tiger is a large, friendly, cuddly pussycat and wants to pet it; but he also believes that the best way to pet it is to run away from it. . . . or perhaps he thinks the tiger is a regularly recurring illusion, and, hoping to keep his weight down, has formed the resolution to run a mile at top speed whenever presented with such an illusion; or perhaps he thinks he is about to take part in a 1600 meter race, wants to win, and believes the appearance of the tiger is the starting signal; or perhaps . . . . Clearly there are any number of belief-cum-desire systems that equally fit a given bit of behavior.
[Alvin Plantinga, Warrant and Proper Function (New York: Oxford University Press, 1993), 225-226]
Once we realise what kinds of examples will do the job, we see straight away that the same kind of exercise can be done with moral beliefs. Presumably what Law was getting at is that morality is good for our survival. It manifests in behaviour that benefits our species, and that behaviour could well have come about in such a way that the processes that form the belief aren’t reliable in terms of true belief formation. Take a plausible moral belief: it’s wrong to torture people to death. Acting on this belief may play some role in the survival strength of the species that comes to hold it (or on the other hand it plausibly might not, since only the strongest and fittest will be in the position to torture others). But the rise of this belief could go hand in hand with all kinds of behaviours. What sorts of beliefs might reinforce this behaviour (of not torturing people) over time, to the point where it became a sort of social taboo – a herd mentality that came to be enshrined as moral fact? (This is the type of scenario Law is painting when he talks about false beliefs coming to be regarded as true over time.) As with Plantinga’s tiger scenario, the options are more or less endless. If our ancestors believed that when they tortured people they would lose the ability to breathe and die, then they would likely not torture people. Similarly, if they believed that their children would all be stillborn if they ever tortured anyone, they would likely not do it. True, if they themselves were afraid of being tortured and reasoned that if torture became normal then they themselves might one day be tortured, then they would also be less likely to do it. The point is just that there is a veritable smorgasbord of possibilities when it comes to giving an account of how beliefs and behaviour might come together when the driving factor is survival and reproduction, rather than the acquisition of true beliefs.
So yes, what Stephen says is trivially true. We can come up with an evolutionary story about why we might hold all kinds of beliefs – including moral beliefs – even though those beliefs might be false. In this regard moral beliefs are no different from all sorts of other beliefs we hold about the world. That’s point one.
Point two: But surely just being able to tell a story about a given belief is one thing. Painting a broader picture in which the processes that we use to form beliefs are unreliable in general is quite another. If the wider picture of our developing epistemology is one that favours belief forming processes that are reliable in terms of the production of true beliefs, then the fact that we can tell funny stories about how isolated beliefs might have formed becomes a bit of a side-show. Giving an explanation of how something came to be is not the same as giving the most plausible explanation of how it came to be. Let’s go back to Plantinga’s tiger example. It’s true that if Paul thought that the appearance of a tiger signalled the start of a race and he really wanted to win that race, then the appearance of a tiger might get him running. But why on earth would he start putting one foot in front of the other, or exerting greater effort than before? What is he acting on when he does this? Not just the belief that a tiger signals the start of a race, or his desire to win the race. More is needed. What is needed are some fairly fundamental beliefs about what will happen when Paul interacts with the world in a certain way: beliefs about the general effects of gravity (even if Paul doesn’t know it by that name, or much in detail about what it does), and beliefs about the actions and reactions between his body and the environment. Obviously Paul wouldn’t start running if he believed that putting his foot on the ground with considerable pressure would cause it to explode (to use an equally silly example). In addition to basic beliefs about how physical interaction takes place, Paul would also need to have some beliefs grounded in inductive reasoning. Sure, moving his legs in a certain way yesterday and a couple of weeks ago might have gotten him moving in the intended direction at high speed, but what does that have to do with what will happen today?
For the example to even get off the ground, then, we’ve got to have belief forming processes about the way the physical world works and the way inductive reasoning is done that are, to a reasonable degree, reliable. And then we’ve also got the issue of how Paul knows the object before him is the same sort of thing as any of the other things that produce a similar visual image. I suppose there’s a bit of induction involved here again, in addition to arguably more abstract reasoning involving classification.
Perhaps the lesson that Plantinga could still teach us, however, is that while adaptability is better served by belief forming structures that are, in large part, reliable, more abstract theories (like metaphysical naturalism) might carry little survival value or survival impediment, and so the probability that the structures forming beliefs like that are reliable is inscrutable. Even that seems a bit hard to swallow, however, unless we suppose that we have parallel belief forming structures that do not depend on each other: one structure for all the beliefs necessary for survival, and another for the beliefs that are more of an intellectual luxury, abstruse theories about the meaning of life and so on. It strikes me as more economical to think that the same set of belief forming equipment forms beliefs about a whole range of subjects, the mundane and the cosmic.
But if the above is true, and if morality (unlike, say, metaphysical naturalism) is a pervasive kind of belief held by human beings, then it is likely easier to tell a story in terms of evolutionary development where we came to hold moral beliefs because some of them are true than it is to tell a story about how we came to hold moral beliefs that were all false. What’s particularly interesting is that Law himself rejects Plantinga’s argument by arguing that actually the evolutionary development of our belief forming structures generally (although of course not flawlessly) favours structures that give rise to true beliefs. Perhaps the facts can change as needed in the unholy war against faith!