When I present this scenario to my students and ask for their punitive judgments, they respond with revulsion. They sit in their seats and recoil reflexively, the full-blown Darwinian, Jamesian emotion of moral disgust written across their faces in raised upper lips and flared nostrils, and felt in the visceral turning of the stomach and the slowing of heart rate. Then, like good students of Western European culture, they recall their civics lessons about individual rights, freedoms, and privacy. They eventually decide, their viscera notwithstanding, that the individual should not be punished; he should have the right to practice such a culinary (or sexual) act in the privacy of his own home, as long as he keeps the curtains closed and refrains from writing cookbooks or having friends over for dinner.
People’s responses to this kind of thought experiment have led Jonathan Haidt to a new view of moral judgment, one that prioritizes the moral gut. Haidt argues that our moral judgments of right and wrong, virtue, harm, and fairness are the products of two kinds of processes. The first may seem fairly intuitive to you, since it has occupied theorists of moral judgment for 2,000 years: complex, deliberative reason. When we judge whether an action is right or wrong, we engage in many complex reasoning processes: we consider society-wide consequences, cost-benefit analyses, motives and intentions, and abstract principles like rights, freedoms, and duties. Psychological science has privileged these higher-order reasoning processes in accounts of moral judgment. Nothing typifies this better than the well-known theory of moral development of Harvard psychologist Lawrence Kohlberg. Beginning with his dissertation, Kohlberg argued that the highest forms of moral judgment require abstract considerations of rights, equality, and harm, a stage achieved in his research by only 2 to 3 percent of the individuals he studied around the world (most typically highly educated, upper-class males like himself!).
The second, more democratic element of moral judgment, almost completely ignored in psychological science, is the gut. Emotions provide rapid intuitions about fairness, harm, virtue, kindness, and purity. When you first reacted to the sex-with-chicken example, part of your response was most likely a rapid, ancient feeling of revulsion and disgust at the image of such a species-mixing, impure sexual practice. In one study, my first mentor, Phoebe Ellsworth, and I had individuals move their facial muscles, much as Ekman and colleagues did with the DFA, into the facial expression of anger or sadness. As participants held the expression, they made quick judgments about who was to blame for problems they might experience in the future in their romantic, work, and financial lives: other people, or impersonal, situational factors. Those participants who made these judgments with an angry expression on their faces blamed other people for the injustices. Those with faces configured into a sad expression attributed the same problems to fate and impersonal factors. Our moral judgments of blame are guided by sensations arising in the viscera and facial musculature.
Haidt reasons that thousands of generations of human social evolution have honed moral intuitions in the form of embodied emotions like compassion, gratitude, embarrassment, and awe. Emotions are powerful moral guides. They are upheavals that propel us to protect the foundations of moral communities—concerns over fairness, obligations, virtue, kindness, and reciprocity. Our capacity for virtue and concern over right and wrong are wired into our bodies.
If you are not convinced, consider the following neuroimaging study by Joshua Greene and colleagues, which suggests that the emotional and reasoning elements of moral judgment activate different regions of the brain. Participants judged a series of moral and nonmoral dilemmas, indicating whether they considered the action in question appropriate or not. Some moral dilemmas were impersonal and relatively unemotional. For example, in the “trolley dilemma” the participant imagines a runaway trolley headed for five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks, where it will kill one person instead of five. When asked whether it is appropriate to hit that switch and save five lives, participants answer yes with little hesitation.