Doing the Right Thing, or Not: What Makes People Less Likely to Be Selfish Jerks?

By Ben Christopher

Some scientific studies come as revelations—biological investigations unlocking keys hidden within the human genome, statistical analyses that identify shocking trends between disparate data sets, and explorations of the cosmos that reveal truths about the very fabric of existence.

And then there are scientific studies that tell us what most of us probably knew all along.

Take the most recent paper by UC Berkeley business ethics professor Ernesto Dal Bó and his brother, Pedro, an economist at Brown University. Pairing cash-desperate Cal undergrads in a series of financial games in which participants were asked to either pool their money for mutual gain or squirrel it away for themselves, the Dal Bó brothers found that participants were much more likely to share if, first, they had been exposed to messages heralding the virtues of sharing and, second, if they knew that parsimony could result in monetary penalty.

In other words, tell people to “Do the Right Thing” (the Spike Lee film-inspired title of the study), then threaten those who do the opposite, and, lo and behold, people are less likely to behave like selfish jerks.

Wondering why someone would need to conduct an experiment to reach such a painfully obvious conclusion? Then it’s safe to say you’re not an economist.

In economics, Berkeley’s Dal Bó explains, humans are presumed to act based on material incentives. “If you want people to collaborate, you have to make it in their self-interest to do so,” he says. “In other words, you have to buy people off.”

But that particular take on human nature seemed to clash with what the two brothers learned while growing up in Argentina, he says. With the final end of military dictatorship in 1983, there was a paroxysm of political organizing and politicking in the long-repressed country. “Even high school students began to get organized. These were very hectic times and people were extremely enthusiastic,” he recalls. “When you get involved in politics like that, a lot of what you do is argue and discuss. You have debates. And you get used to the idea that persuading someone often involves telling them that something is ‘the right thing to do.’ ”

“But when you study economics,” he says (which he went on to do at the University of Buenos Aires, and then Oxford), “that’s nowhere to be found.”

The caricature of decision-making put forward by many introductory economics textbooks is embodied in the half-joking name “Homo Economicus.” This mythic creature of the marketplace is thought to be a hyper-rational, cost-benefit analyzing Randian Vulcan for whom the bottom line is always the bottom line. Or as turn-of-the-century economist Thorstein Veblen described him, he is a “lightning calculator of pleasures and pains, who oscillates like a homogeneous globule of desire of happiness under the impulse of stimuli that shift him about the area, but leave him intact.”

In the share-or-not financial game crafted by the Dal Bó brothers, a Homo Economicus dispassionately assesses which choice is most likely to maximize his or her personal profit. As for any discussion of what might be “the right thing to do,” Homo Economicus is blissfully immune.

Of course, few academics believe that each and every human is a “lightning calculator”—even professional economists occasionally hang out with other humans. Instead, Homo Economicus is a representative actor. On the whole, our respective irrationalities and eccentricities, our excesses and deficiencies, balance each other out. Averaged out, human behavior is generally rational.

“We start with the basic premise that people have material interests and selfish goals. We build theories based on that premise and, on first approximation, those theories are not all that wrong. It explains a lot,” says Dal Bó. “But that’s not everything that we are. We decided that now it’s time to enrich the picture.”

The Dal Bó brothers are not the first academics to try to debunk the myth of the hyper-rational human. Arguably the entire field of behavioral economics is dedicated to the task of convincing the field’s mainstream that humans occasionally act like humans. Even the economics of moral behavior is not virgin territory. A 2006 study showed that simply asking student participants to recall the Ten Commandments from memory prior to taking a test reduced cheating. Merely priming people to think in moral terms seems to tamp down their self-serving instincts.

Similarly, the Dal Bós wanted to see if exposing volunteers to simple moral prompts would incite them to cooperate and share more.

The “game” the two designed is more “game-theory” than Scrabble. In 20 consecutive rounds, over 300 research participants were randomly and anonymously paired with one another, computer terminal to computer terminal. In every round, each participant was given 10 points (which could be redeemed at the end of the experiment for a little over 8 cents each). Players could either keep these points or “invest” them in a joint account, shared with the opposing player. Once both players distributed their funds, the points placed in the joint account would be increased by 40 percent and the proceeds would be split 50-50.

Game-theory aficionados will recognize this as a version of the classic “Prisoner’s Dilemma,” a scenario in which both parties could benefit by simply cooperating—but in which there is a very strong incentive to “defect.”

In this case, collectively, both players would be best served by pooling all 10 points in the joint account. Increased by 40 percent and split up, the 20 pooled points would give each player 14. That is obviously much better than the result if everyone selfishly hoards their 10 points.


Still, the Homo Economicuses among us will recognize that there is a very good reason to hoard anyway. If you assume that the other player will do the intuitively good-natured thing and place all their points in the joint account, contributing none of your own will still give you half of the invested and multiplied proceeds (that is, 7 points). Add those 7 to your 10 and you’ve got yourself a cool 17 points—better than had you collaborated. What’s more, even if you aren’t quite so hardhearted and want to do the Kumbaya thing and share your wealth anyway, what do you really know about your opponent, this anonymous unknown? If you put all 10 points in and your opponent adds nothing, you will lose 3 points.

The expected result of such a scenario is what economists call a “Nash equilibrium,” in which both players, not willing to risk getting burned, end up investing close to nothing in the joint account. With every player receiving little more than 10 points after each round, this is the least profitable outcome for everyone—but in the absence of trust or accountability, it is also the most predictable.
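The round-by-round arithmetic above can be condensed into a short payoff function. This is an illustrative sketch built from the numbers reported in the article (10 points per round, a 40 percent return on the joint account, split 50-50); it is not the authors’ actual experimental code:

```python
def payoff(mine: int, theirs: int) -> float:
    """Points a player ends the round with: whatever they kept,
    plus half of the joint account after its 40 percent increase.
    Each point invested effectively returns 0.7 points to each player."""
    return (10 - mine) + (mine + theirs) * 7 / 10

assert payoff(10, 10) == 14.0  # mutual cooperation
assert payoff(0, 0) == 10.0    # mutual hoarding
assert payoff(0, 10) == 17.0   # free-riding on a cooperator
assert payoff(10, 0) == 7.0    # cooperating with a defector
```

Whatever the other player does, contributing nothing pays exactly 3 points more than contributing everything, which is why hoarding is the Nash equilibrium despite mutual cooperation being better for both.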

Indeed, in the first experiment conducted by the Dal Bó brothers, that is exactly what happened for the participants who played the game all the way through. Contributions began with an average of 3 points, but as defections bred distrust, they declined to a mere 1 point.

But some of the players didn’t play all the way through. Instead, they were exposed to a series of messages between the 10th and 11th rounds. One fifth were reminded of the Golden Rule (“an action of yours is moral if it treats others the way you would like others to treat you”), while others were shown a Utilitarian maxim (“an action of yours is moral if it maximizes the sum of everyone’s payoffs”). For a third group, contributing to the joint fund was merely suggested. For the final group, self-serving defection was actively encouraged.


Perhaps not surprisingly, for those who received the two moral messages, sharing spiked in the next round, increasing by just over 100 percent for the Utilitarians and by nearly 300 percent for the Golden Rulers.

Ah, but that sharing mood proved to be short-lived. In the rounds that followed, trust eroded again and by the game’s end, nine rounds later, the spike in contributions to the joint account had diminished entirely.

Apparently the Golden Rule has an expiration date.

So if simple “moral suasion” doesn’t have a long-term impact on behavior, what about the prospect of punishment?

In a second experiment, 136 participants were split into two groups—those who would see the Golden Rule message halfway through and those who would go without moral instruction entirely. More importantly, each of the participants would be given the opportunity to “punish” their opponent after each round. If a contribution was deemed insufficiently generous, the offended party could delete one point from the stingy opponent at the expense of a quarter-point from the punisher.
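The punishment option is asymmetric by design: the target loses a full point while the punisher pays only a quarter-point. A hypothetical sketch of that bookkeeping, using the costs stated in the article (the function and names are ours, for illustration):

```python
def punish(punisher_pts: float, target_pts: float) -> tuple[float, float]:
    """Apply one act of punishment: the target loses 1 point,
    and the punisher pays 0.25 points for the privilege."""
    return punisher_pts - 0.25, target_pts - 1.0

# Punishing a free-rider who ended the round with 17 points:
assert punish(10.0, 17.0) == (9.75, 16.0)
```

Note that the punisher always ends with fewer points than before, which is why a strict Homo Economicus would never choose the option at all.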

In early rounds, the possibility of penalty seemed to elevate contributions to the joint account. But again, as the game progressed, cooperation eroded and contributions fell.


For players who saw no messages of any kind, the rate of decline over the course of the second experiment was almost identical to that of the first. In other words, there seemed to be virtually no long-term effect of punishment on behavior.

Not by itself, anyway.

But for those players who were exposed to moral proselytization midway through the second experiment, the effect was very different. As in the first experiment, sharing increased dramatically immediately after they were reminded to treat others “as you would like others to treat you.” And notably, that result persisted until the end of the game.

In short, pairing moral messaging and the prospect of punishment together seemed to work wonders.

What makes this even more peculiar is that in strictly rational terms, punishing anyone at all is absolutely “irrational.” Certainly Homo Economicus would never opt to punish his fellow player because there is nothing material to gain by doing so—in fact, it comes at the small price of a quarter-point. Nonetheless, given how popular that option proved to be, it would seem that revenge is its own reward for petulant humans. And more surprising yet, in this case that retaliatory impulse actually yielded real benefits when combined with some moral prodding.

Dal Bó’s theory about why this might be relates to that uniquely human and uniquely uneconomical notion: justice.

“We thought that the reason that cooperation tends to go down over time is that once someone experiences another player being uncooperative, even though they may have the desire to be cooperative themselves, they only have one channel to express the desire to punish: to stop cooperating in future rounds with other players,” says Ernesto Dal Bó, who is affiliated with Berkeley’s Haas School of Business. “But once you give each player an instrument with which to express that desire to punish, that makes it possible to cooperate persistently.”

In other words, the moral message changes behavior, but the possibility of punishment makes people believe that the behavior will stick.

In other studies, the Dal Bós found that players were less likely to share if they were led to doubt that other players had been shown the same moral message, and thus might not be sharing as much. The take-away: In any organization, it’s important to establish policies and make sure everyone knows “we’re all in this together.”

For ethics professor Ernesto Dal Bó, the results of the research are heartening.

He notes that we are often admonished—by politicians, CEOs, university presidents—that something is the right thing to do. “From an economist’s perspective, this just doesn’t make any sense. Maybe they would say that it’s just grooming—some kind of animal thing that people do that doesn’t have any material effect, but that we do anyway because it’s ritualistic or because it signals that we are sensible,” he says. “I felt encouraged that something as immaterial as a simple message on a screen—a very strictly ethical message that doesn’t appeal to norms or tap into herd mentality—can have any kind of effect.”

This might not come as a surprise to priests, propagandists or philosophers. But for economists everywhere, it may be a revelation.
