I mentioned to him that I wanted to leave engineering for social research, because getting along with people at engineering firms is so hard that it defeats the purpose of working in this field; he nodded in heartfelt agreement.
Excerpted below is Wikipedia's summary of the book The Evolution of Cooperation (1984) (the article is currently available only in English, Japanese, German, and Arabic).
Axelrod initially solicited strategies from other game theorists to compete in the first tournament. Each strategy was paired with each other strategy for 200 iterations of a Prisoner's Dilemma game, and scored on the total points accumulated through the tournament. The winner was a very simple strategy submitted by Anatol Rapoport called "TIT FOR TAT" (TFT) that cooperates on the first move, and subsequently echoes (reciprocates) what the other player did on the previous move. The results of the first tournament were analyzed and published, and a second tournament was held to see if anyone could find a better strategy. TIT FOR TAT won again. Axelrod analyzed the results, and made some interesting discoveries about the nature of cooperation, which he describes in his book.
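The tournament mechanics are simple enough to sketch in a few lines of Python. The payoff values (T=5, R=3, P=1, S=0) are the ones Axelrod used; the four strategies below are a small illustrative field of my own choosing, not the actual tournament entries (GRUDGER, which cooperates until first crossed, is a common textbook strategy).

```python
# Payoffs for (my move, their move): standard PD values T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own, opp):
    # Cooperate first, then echo the opponent's previous move.
    return "C" if not opp else opp[-1]

def grudger(own, opp):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opp else "C"

def all_d(own, opp):
    return "D"

def all_c(own, opp):
    return "C"

def play_match(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Round-robin over 200-round matches (including self-play, as in Axelrod's
# tournament): total the points each strategy accumulates.
strategies = {"TFT": tit_for_tat, "GRUDGER": grudger,
              "ALL D": all_d, "ALL C": all_c}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for strat_b in strategies.values():
        score_a, _ = play_match(strat_a, strat_b)
        totals[name_a] += score_a
```

Even with this tiny field the "nice" strategies (TFT and GRUDGER, 1999 points each) finish ahead of ALL C (1800) and ALL D (1608), previewing the results summarized below.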
In both actual tournaments and various replays the best performing strategies were nice: that is, they were never the first to defect. Many of the competitors went to great lengths to gain an advantage over the "nice" (and usually simpler) strategies, but to no avail: tricky strategies fighting for a few points generally could not do as well as nice strategies working together. TFT (and other "nice" strategies generally) "won, not by doing better than the other player, but by eliciting cooperation [and] by promoting the mutual interest rather than by exploiting the other's weakness."
Being "nice" can be beneficial, but it can also lead to being suckered. To obtain the benefit – or avoid exploitation – it is necessary to be provocable to both retaliation and forgiveness. When the other player defects, a nice strategy must immediately be provoked into retaliatory defection. The same goes for forgiveness: return to cooperation as soon as the other player does. Overdoing the punishment risks escalation, and can lead to an "unending echo of alternating defections" that depresses the scores of both players.
Most of the games that game theory had heretofore investigated were "zero-sum" – that is, the total rewards are fixed, and a player does well only at the expense of other players. But real life is not zero-sum. Our best prospects are usually in cooperative efforts. In fact, TFT cannot score higher than its partner; at best it can only do "as good as". Yet it won the tournaments by consistently scoring a strong second-place with a variety of partners. Axelrod summarizes this as don't be envious; in other words, don't strive for a payoff greater than the other player's.
In any IPD game there is a certain maximum score each player can get by always cooperating. But some strategies try to find ways of getting a little more with an occasional defection (exploitation). This can work against some strategies that are less provocable or more forgiving than TIT FOR TAT, but generally they do poorly. "A common problem with these rules is that they used complex methods of making inferences about the other player [strategy] – and these inferences were wrong." Against TFT (and "nice" strategies generally) one can do no better than to simply cooperate. Axelrod calls this clarity. Or: don't be too clever.
The success of any strategy depends on the nature of the particular strategies it encounters, which depends on the composition of the overall population. To better model the effects of reproductive success Axelrod also did an "ecological" tournament, where the prevalence of each type of strategy in each round was determined by that strategy's success in the previous round. The competition in each round becomes stronger as weaker performers are reduced and eliminated. The results were amazing: a handful of strategies – all "nice" – came to dominate the field. In a sea of non-nice strategies the "nice" strategies – provided they were also provocable – did well enough with each other to offset the occasional exploitation. As cooperation became general the non-provocable strategies were exploited and eventually eliminated, whereupon the exploitive (non-cooperating) strategies were out-performed by the cooperative strategies.
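The ecological replay can be sketched with replicator-style dynamics: each generation, a strategy's population share grows in proportion to its average score against the current mix. The three-strategy field and the precomputed 200-round match scores below (row player's total, payoffs T=5, R=3, P=1, S=0) are illustrative, not Axelrod's actual entries.

```python
# SCORE[row][col] = row player's total over a 200-round match against col.
SCORE = {
    "TFT":   {"TFT": 600, "ALL D": 199, "ALL C": 600},
    "ALL D": {"TFT": 204, "ALL D": 200, "ALL C": 1000},
    "ALL C": {"TFT": 600, "ALL D": 0,   "ALL C": 600},
}

shares = {name: 1 / 3 for name in SCORE}   # start from an even mix
for generation in range(200):
    # Expected score of each strategy against the current population mix.
    fitness = {s: sum(shares[t] * SCORE[s][t] for t in SCORE) for s in SCORE}
    mean = sum(shares[s] * fitness[s] for s in SCORE)
    # Next generation's share is proportional to share * fitness.
    shares = {s: shares[s] * fitness[s] / mean for s in SCORE}
```

ALL D thrives at first by exploiting ALL C, then collapses as its prey thins out, and TFT ends up dominant. (A residue of ALL C survives in this simple model because, once defectors are gone, it is neutral with TFT.)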
In summary, success in an evolutionary "game" correlated with the following characteristics:
- Be nice: cooperate, never be the first to defect.
- Be provocable: return defection for defection, cooperation for cooperation.
- Don't be envious: be fair with your partner.
- Don't be too clever: or, don't try to be tricky.
Foundation of reciprocal cooperation
The lessons described above apply in environments that support cooperation, but whether cooperation is supported at all depends crucially on the probability (called ω [omega]) that the players will meet again, also called the discount parameter or, poetically, the shadow of the future. When ω is low – that is, the players have a negligible chance of meeting again – each interaction is effectively a single-shot Prisoner's Dilemma game, and one might as well defect in all cases (a strategy called "ALL D"), because even if one cooperates there is no way to keep the other player from exploiting that. But in the iterated PD the value of repeated cooperative interactions can become greater than the benefit/risk of a single exploitation (which is all that a strategy like TFT will tolerate).
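The shadow of the future can be made concrete with a small calculation, assuming the standard payoffs T=5, R=3, P=1, S=0. Against TFT, the two natural deviations from steady cooperation are defecting forever and alternating defect/cooperate; cooperation is stable when neither pays better.

```python
T, R, P, S = 5, 3, 1, 0   # the payoff values used throughout Axelrod's book

def coop_payoff(w):
    # Cooperating with TFT forever: R every round, discounted by w.
    return R / (1 - w)

def all_d_payoff(w):
    # Defecting forever against TFT: T once, then mutual punishment P.
    return T + w * P / (1 - w)

def alternate_payoff(w):
    # Alternating D, C against TFT: the pair (T, S) repeats every two rounds.
    return (T + w * S) / (1 - w * w)

# Cooperation with TFT is stable against both deviations once w clears this
# cutoff, which works out to 2/3 for these payoffs.
threshold = max((T - R) / (R - S), (T - R) / (T - P))
```

At w = 0.9 steady cooperation (30) beats both deviations, while at w = 0.5 the alternating exploiter (about 6.67) outscores it (6.0), so cooperation cannot be sustained there.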
Curiously, rationality and deliberate choice are not necessary, nor trust nor even consciousness, as long as there is a pattern that benefits both players (e.g., increases fitness), and some probability of future interaction. Often the initial mutual cooperation is not even intentional, but having "discovered" a beneficial pattern both parties respond to it by continuing the conditions that maintain it.
This implies two requirements for the players, aside from whatever strategy they may adopt. First, they must be able to recognize other players, to avoid exploitation by cheaters. Second, they must be able to track their previous history with any given player, in order to be responsive to that player's strategy.
Even when the discount parameter ω is high enough to permit reciprocal cooperation there is still a question of whether and how cooperation might start. One of Axelrod's findings is that when the existing population never offers cooperation nor reciprocates it – the case of ALL D – then no nice strategy can get established by isolated individuals; cooperation is strictly a sucker bet. (The "futility of isolated revolt".) But another finding of great significance is that clusters of nice strategies can get established. Even a small group of individuals with nice strategies, interacting with each other only infrequently, can do so well in those interactions as to make up for the low level of exploitation by non-nice strategies.
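The cluster arithmetic can be sketched as follows, assuming the same payoffs (T=5, R=3, P=1, S=0) and a discount parameter of w = 0.9; here p (my notation) is the proportion of a TFT newcomer's interactions that are with fellow cluster members, while ALL D natives almost always meet other natives.

```python
w, T, R, P, S = 0.9, 5, 3, 1, 0

v_tft_tft = R / (1 - w)            # 30: cooperation all the way down
v_tft_alld = S + w * P / (1 - w)   # 9: suckered once, then mutual defection
v_alld_alld = P / (1 - w)          # 10: what a native earns against natives

def cluster_invades(p):
    # The cluster gets established if a TFT member's mixed payoff beats
    # what the ALL D natives earn among themselves.
    return p * v_tft_tft + (1 - p) * v_tft_alld > v_alld_alld

# Smallest clustering proportion that suffices: 1/21, i.e. under 5%.
min_p = (v_alld_alld - v_tft_alld) / (v_tft_tft - v_tft_alld)
```

Under these numbers, a cluster whose members spend even 5% of their interactions with each other can invade a population of defectors.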
In 1984 Axelrod estimated that there were "hundreds of articles on the Prisoner's Dilemma cited in Psychological Abstracts", and estimated that citations to The Evolution of Cooperation alone were "growing at the rate of over 300 per year". To fully review this literature is infeasible. What follows are therefore only a few selected highlights.
Axelrod has a subsequent book, The Complexity of Cooperation, which he considers a sequel to The Evolution of Cooperation. Other work on the evolution of cooperation has expanded to cover prosocial behavior generally, and in religion, other mechanisms for generating cooperation, the IPD under different conditions and assumptions, and the use of other games such as the Public Goods and Ultimatum games to explore deep-seated notions of fairness and fair play. It has also been used to challenge the rational and self-regarding "economic man" model of economics, and as a basis for replacing Darwinian sexual selection theory with a theory of social selection.
Nice strategies are better able to invade if they have social structures or other means of increasing their interactions. Axelrod discusses this in chapter 8; in a later paper he, Rick Riolo, and Michael Cohen use computer simulations to show cooperation rising among agents who have a negligible chance of future encounters but can recognize similarity of an arbitrary characteristic (such as a green beard).
When an IPD tournament introduces noise (errors or misunderstandings) TFT strategies can get trapped into a long string of retaliatory defections, thereby depressing their score. TFT also tolerates "ALL C" (always cooperate) strategies, which then give an opening to exploiters.
In a 2006 paper Nowak listed five mechanisms by which natural selection can lead to cooperation. In addition to kin selection and direct reciprocity, he shows that:
- Indirect reciprocity is based on knowing the other player's reputation, which is the player's history with other players. Cooperation depends on a reliable history being projected from past partners to future partners.
- Network reciprocity relies on geographical or social factors to increase the interactions with nearer neighbors; it is essentially a virtual group.
- Group selection assumes that groups with cooperators (even altruists) will be more successful as a whole, and this will tend to benefit all members.
And there is the very intriguing paper "The Coevolution of Parochial Altruism and War" by Jung-Kyoo Choi and Samuel Bowles. From their summary:
Altruism—benefiting fellow group members at a cost to oneself—and parochialism—hostility towards individuals not of one's own ethnic, racial, or other group—are common human behaviors. The intersection of the two—which we term "parochial altruism"—is puzzling from an evolutionary perspective because altruistic or parochial behavior reduces one's payoffs by comparison to what one would gain from eschewing these behaviors. But parochial altruism could have evolved if parochialism promoted intergroup hostilities and the combination of altruism and parochialism contributed to success in these conflicts.... [Neither] would have been viable singly, but by promoting group conflict they could have evolved jointly.
They do not claim that humans have actually evolved in this way, but that computer simulations show how war could be promoted by the interaction of these behaviors.