Every discipline has a ‘blind spot’: one or more assumptions that go largely unquestioned because they constitute the discipline’s raison d’être. For ethics (especially applied ethics) that blind spot is the assumption of moral ignorance: the assumption that there are circumstances in which we do not know the right thing to do. Moral ignorance is not ignorance of facts that may be relevant to making the right decision, such as not knowing that a piano is falling as you walk under it. That kind of factual ignorance is not the bread and butter of ethics, which concerns itself with which decision is the right one if all relevant facts are known.
It is the self-appointed task of moral philosophy to help us reason theoretically, to organize what we already know, or to see it in a different light, in order to arrive at the right decision. By and large, moral philosophers do not undertake surveys, conduct experiments or make field observations to support their theories. Thought experiments in ethics usually include a list of all factual assumptions that are deemed relevant; for example, that an aborted fetus had spina bifida, was of a certain age, was unwanted by both parents, and so on. Of course, there are academics who are interested in psychological or sociological aspects of ethics, such as the evidence for character traits and how that impacts virtue ethics as a philosophical theory. But moral philosophy is generally not an empirical discipline, though there are moral philosophers who do empirical work.
So the question arises: is there moral ignorance? Barring insanity, the folly of youth, or factual ignorance of the sort that would render the actor less culpable, is there some kind of theoretical confusion that justifies the task of conceptual clarification that mainstream ethics claims to attempt? (Confusion is still a type of ignorance, namely ignorance of how to get un-confused.) It turns out that the existence of moral ignorance is far from obvious. Some would reply: if there is no moral ignorance, in what sense can we improve on someone else’s moral beliefs, or for that matter, our own? If I believe that eating meat is unethical for everyone, do I thereby believe that meat-eaters are ignorant of something that I know? And if I don’t believe they are ignorant, then in what sense are they in error?
Some may argue that meat-eaters are ignorant of the conditions of animal slaughter. In other words, they lack the experiential knowledge that might cause them to go vegan after visiting a slaughterhouse, for instance. This is a dangerous argument, because experience goes both ways. For example, slaughterhouse workers can become desensitized to the routine violence of their workplace, and many have no qualms eating meat. The experience of taking drugs may impair our ability to make an objective judgement of the pros and cons of so doing. The same may be said of joining a cult. Furthermore, people come to experiences with personal baggage, whether genetic or environmental, and those predispositions partly shape what they get out of experiences. So if we ground the wrongness of meat-eating in a lack of experiences of certain kinds, we are in danger of losing the absoluteness and universality that we would like to claim for that wrongness. In other words, one person’s meat is another’s poison.
An alternative, though unpopular, proposal is that barring the usual exceptions listed above, we all know right from wrong, and it’s the same rights and wrongs for all of us. This view isn’t as outlandish as it seems. One oft-mentioned objection is that differences in moral codes between cultures render it unlikely that we all share the same moral beliefs. However, if we believe it’s wrong to do something but want to do it anyway, it’s not uncommon to just do it. If one person can do that, so can a group, or an entire culture. So what we view as cultural differences in moral codes could instead be cultural differences as to which parts of the universal moral law we choose to ignore.
For example, some cultures routinely eat meat and others don’t, but it doesn’t follow that they differ in moral beliefs. Even if they explicitly state reasons for doing one or the other, some of those reasons could just be excuses for not doing the right thing. If moral principles are innate and universal, then culture plays no part in ‘inventing’ them, only in reinforcing or ignoring them, and making excuses for not obeying them; and those excuses can vary between cultures, just as they would from person to person.
Some may object to this kind of universalism on the grounds that it doesn’t offer a recipe for resolving moral disputes. Well, neither does moral philosophy. In reality, such disputes are resolved (if at all) pragmatically, either through pre-established mechanisms such as legislatures and ethics committees (in which professional ethicists are either absent or a minority) or if all else fails, through violence. In either case, words like ‘equality’ or ‘justice’ may get bandied around, but the application of these terms in social contexts is always contestable (though not without traction), and more attention is usually paid to the facts surrounding the dispute, or to the preponderance of power, than to abstract ethical theories. So in terms of dispute-resolving power, the assumption of moral ignorance has no advantage over assuming that the moral code is the same for, and known to, everyone (though they differ in how they choose to ignore it, and the excuses they have for doing so).
About the author
Ben Gibran is a writer with an interest in the theory and social science of communication. His work has been published in Journal of Publishing, Publishing Research Quarterly, The Philosopher and Essays in Philosophy, as well as online and in newsprint. He is the author of Why Philosophy Fails: A View From Social Psychology and The DIY Prison: Why Cults Work.