It’s impossible to ignore the plumes of smoke generated by all the politicians who are “crashing and burning” in Washington of late, each flame-out ignited by varying degrees of individual immoral behavior. The latest in a long line to fall was conservative Senator John Ensign (R-NV). No doubt AAPers, more than most psychologists, have given at least some consideration to what is causing these paragons of moral virtue to end up hoist by their own petards.
It’s easy for us to offer flip responses to this phenomenon: it’s their arrogance or narcissism, the blinding effect of power, the stress of political life, plain unadulterated hypocrisy, their use of projection, or a whole host of other things. As social scientists, though, we ought to be able to provide a more substantive rationale for this behavior.
As it turns out, psychology has an entire literature that offers some insight into this troubling and apparently growing trend. The research is not without weaknesses from my perspective, but it is most stimulating in helping to understand how public figures with careers built around denouncing moral turpitude (crusading prosecutors like Eliot Spitzer, evangelical leaders like Ted Haggard and Jim Bakker, and socially conservative politicians like Mark Foley, David Vitter, Larry Craig, John Ensign, and Governor Mark Sanford) end up confessing to the very acts they so vehemently railed against from their soapboxes. Though it’s a bit simplistic, my favorite current social psychology theory argues that cultivating a persona of morality increases the probability of going off the deep end of the depravity scale.
The study of how we form opinions of our own moral worth is a burgeoning field, and its findings suggest that our thoughts about our own morality may, in fact, work in subtle ways to make hypocrites out of all of us, especially those who hold themselves in the highest moral esteem. According to this research, people who complain bitterly about immorality in others are often themselves fixated on it, and are more likely to succumb to its allure.
With apologies to Lawrence Kohlberg and his stages of moral development, on which we were all professionally raised, my personal sense of these issues aligns best with the school of thought built around a principle known as “moral credentialing.” In a nutshell, the theory posits a kind of moral “set point”: an innate thermostat of sorts that regulates the amount of moral behavior in which we engage. When we stray too far from that set point in either direction, even toward “goody two-shoes” status, we reset, sometimes dramatically, to reestablish it. Personal morality, it would seem, grows out of guilt, an ex post facto story we tell ourselves about actions already decided upon. On this account, it’s not a moral compass that guides our actions; what we have is closer to a thermostat, stubbornly set to a comfortably moderate level, neither too virtuous nor too depraved.
If someone has behaved morally, or has in some other way established a moral image, that frees the person to engage subsequently in less ethical behavior: sort of the moral equivalent of permitting yourself to suck down a strawberry milkshake after jogging for half an hour. For example, one study conducted at Northwestern University by Medin and associates found that participants asked to write a story referencing their positive traits subsequently donated just one-fifth as much as those asked to write a story about their negative traits.
Another experiment in the same paper concluded that this effect occurred because of the story-writing’s impact on participants’ self-concept: people who felt bad about themselves (having made their negative traits salient) bolstered their self-image by being more generous, while those who already felt good about themselves felt free to be less generous. Other research has demonstrated that when study participants’ past behavior established their credentials as non-prejudiced individuals, they became more willing to express attitudes that showed prejudice.
Benoit Monin, an expert on moral psychology at Stanford, believes that many of our moral choices are guided less by abstract principles than by whether we presently feel like a good or bad person, and whether we see other people as good or bad.
Monin’s research reveals the interplay between self-image and morality. His key idea: your sense of self-worth strongly shapes both your own behavior and how you judge the behavior of others. If your self-worth is threatened, you will likely defend it and try to compensate, either by rationalizing your behavior and judging yourself more moral, or by disparaging the morality of others. If, on the other hand, you are confident that you are a good person, you may actually feel less compelled to act ethically.
Monin has identified psychological concepts and everyday situations in which these moral dynamics come into play. He coined the term “sucker to saint” to describe our tendency to cast ourselves as morally superior when another person’s behavior makes us feel naive or foolish. For instance, if a colleague outperforms you by cutting corners while you have dutifully followed every rule, you will be motivated to derogate the co-worker as unethical and to elevate yourself as a paragon of morality, thereby justifying your inferior performance.
Similarly, people often resent “moral rebels,” peers who do the right thing by refusing to go along with a questionable status quo. The rebels’ moral high ground implicitly puts you on lower moral ground; their principled stand points out your failure to do the right thing. You might get annoyed and dismiss the rebel’s actions. This is especially problematic in organizations, where people who take a stand against a practice, or who raise difficult questions that demand rigorous examination before a viable resolution can be reached, must often face resentment from peers who feel threatened by their stance.
In one study by Monin and his coauthors, undergraduates were asked to pick the most likely suspect in a burglary. They were shown three photographs and given information implicating the sole African American over the other two suspects, both white. Participants were later asked to rate fictitious participants who supposedly had been given the same task. The fictitious people were either “obedient” ones who picked the black suspect or “rebels” who refused to circle a face, saying the task was “obviously biased.” Other, neutral participants first rated the fictitious participants and then did the task themselves, so they were not yet implicated in it. Participants who had been implicated by choosing the black face reported disliking the moral rebels; those who had not yet made a selection liked the rebels and rated them as more moral. The conclusion: once you have committed to the status quo response, the rebel makes you feel like a fool, and you come to resent the very person you would otherwise have embraced.
At the end of the day, all of this research suggests that we get more moral behavior from people who still have to demonstrate their positive qualities to others. If so, that has important ramifications for much of what we do. Think about all the organizations in which we are involved, both professionally and personally. Perhaps we should be most wary of those whose positive image and frequent assertions of moral positions give them the most leeway to behave unethically, as it is precisely these people who will feel they have the moral license to do immoral things. I listed public figures who fit this mold above; throw in Newt Gingrich, John Edwards, and Bernie Madoff for good measure. How can figures with such dedication, public esteem, and records of so many good deeds engage in unethical behavior? The answer, according to moral credentialing theory, is simply that it is precisely their moral image that frees them from worrying about, and therefore from adhering to, moral standards.
Of particular interest, providing people with extensive ethics training may actually backfire: going through the training may make them overconfident about their own decision making, and may leave them feeling that, having learned ethics, they have earned the right to violate, at least occasionally, what they have learned.
Another conclusion drawn in this area is that people who have done something wrong and then get a second or third chance are, if they decide to reform, more likely to be scrupulously honest in the future. That’s because they have the most to demonstrate, both to themselves and to others, about how moral they now are. Needless to say, this does not apply to sociopaths and those with other character disorders.
Strangely enough, to encourage the best behavior in a group, it might be prudent not to bestow excessive praise or positive regard upon its members, so that they feel they must continually demonstrate their moral credentials. If moral credentialing theory is accurate, it is when people feel they have nothing left to prove that they stop proving it.