When I first met Sam Bowles (it must have been in the early 2000s) I was already a committed proponent of multilevel selection. My recollection of our interactions at the time was that while he was not a strong critic of the idea, neither was he a strong supporter. I was reminded of our early conversations as I was reading the new book co-authored by Bowles and Herb Gintis, A Cooperative Species: Human Reciprocity and Its Evolution.
A central concept of the book, and more generally of Bowles and Gintis's research, is strong reciprocity. I really dislike this term (I prefer cooperation), and I suspect that Bowles and Gintis use it because it reflects their personal evolution on the way to accepting multilevel selection. Like many other economists, they are very well versed in game theory. Game theory played an extremely important role in defining the central puzzle of cooperation: how can it evolve despite its vulnerability to free-riders? This problem was starkly delineated in the canonical Prisoner's Dilemma (PD) game, in which the only rational decision is to 'defect,' that is, to withdraw cooperation.
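The dominance logic of the one-shot PD is easy to make concrete. Here is a minimal sketch (the payoff values are illustrative conventions, not taken from the book) showing that defection strictly dominates cooperation:

```python
# One-shot Prisoner's Dilemma sketch. Payoffs satisfy the standard
# ordering T > R > P > S (temptation, reward, punishment, sucker's payoff).
T, R, P, S = 5, 3, 1, 0  # illustrative values

# payoff[(my_move, their_move)] -> my payoff; 'C' = cooperate, 'D' = defect
payoff = {
    ('C', 'C'): R,
    ('C', 'D'): S,
    ('D', 'C'): T,
    ('D', 'D'): P,
}

# Whatever the opponent does, defection pays more: D strictly dominates C,
# so two 'rational' players end up at the mutually worst-but-stable (D, D).
for their_move in ('C', 'D'):
    assert payoff[('D', their_move)] > payoff[('C', their_move)]
```

Since both players reason the same way, the game settles at mutual defection, even though mutual cooperation would leave both better off.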
This insight has been with us for a long time and its implications for cooperation have been rather gloomy. During the 1970s and 1980s, an apparent solution was found. It turned out that if the PD game is played repeatedly (the so-called Iterated PD) then a cooperative strategy becomes possible. Such a strategy is conditional: cooperate if the other player does so, defect otherwise. Anatol Rapoport called this strategy "tit for tat," and Robert Trivers referred to it as "reciprocal altruism" (by the way, another misnomer, since it has nothing to do with altruism, as Bowles and Gintis note on p. 52). Unfortunately, the reciprocal-altruism breakthrough turned out to be illusory in the larger quest to understand why humans are such a cooperative species. The problem is that it really only works for pairs, or at most very small groups. Once the group grows beyond 5–10 individuals, reciprocal altruism starts to break down, and it is certainly not the answer to lasting cooperation at any realistic group size, even in small-scale human societies (hundreds, or a few thousand, individuals).
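One way to see why conditional reciprocity scales so badly is a toy calculation (my own illustration, not a model from the book). Suppose each of n conditional cooperators occasionally defects by mistake, with a small error probability e per round, and the whole group reverts to defection after the first error, as a strict "cooperate only if everyone cooperated last round" rule would dictate. Then the expected number of fully cooperative rounds collapses as n grows:

```python
# Toy illustration: expected rounds of full cooperation before the first
# erroneous defection, for a group of n strict conditional cooperators
# each making an execution error with probability e per round.
# A round survives with probability (1 - e)**n, so the expected waiting
# time to the first error is 1 / (1 - (1 - e)**n).
def expected_cooperative_rounds(n, e=0.02):
    """Expected number of error-free rounds in a group of size n."""
    return 1.0 / (1.0 - (1.0 - e) ** n)

for n in (2, 10, 100, 1000):
    print(n, round(expected_cooperative_rounds(n), 1))
```

With e = 0.02, cooperation lasts on the order of 25 rounds for a pair, but barely a single round once n reaches the hundreds: exactly the fragility Hume intuited.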
Although David Hume did not know game theory, he clearly saw the problem with reciprocal altruism in larger groups: “Two neighbors agree to drain a meadow, which they possess in common; because ’tis easy for them to know each other’s mind; and each must perceive, that the immediate consequence of his failing in his part is abandoning of the whole project. But ’tis very difficult and indeed impossible, that a thousand persons shou’d agree in any such action; it being difficult for them to concert so complicated a design, and still more difficult for them to execute; while each seeks a pretext to free himself of the trouble and expense, and wou’d lay the whole burden on others.”
In fact, of course, it is quite possible for a thousand and more people to cooperate in performing highly complex tasks (think of a Roman legion building a fortified camp – something legionaries excelled at). The reason is that people are not rational actors. People can sacrifice their own payoffs in order to promote cooperative ventures, and punish free-riders, even at a cost to themselves.
Bowles and Gintis call such behaviors "strong reciprocity," but I don't think that simply tacking the adjective 'strong' onto 'reciprocity' results in good terminology. Why not use 'cooperation' or 'pro-sociality' instead? One might object that no harm is done, because they clearly define the term (on p. 20 of their book) and so there is no confusion, as long as the reader reads the book carefully. The problem with this approach is that the science of cooperation is a highly multidisciplinary field, drawing on everything from modeling and evolutionary biology to economics, sociology, psychology, neurobiology, anthropology, and even history (although according to the NSF classification history is not a science but a humanity). Additionally, and even more importantly, the science of cooperation is (or, perhaps, should be) of high relevance to the general public and the world of social policy. So it is better to use words carefully and choose terms that will be readily understood by different kinds of scientists, humanists, and non-scientists without a need for technical definitions.
Now that I’ve ‘vented my spleen’ about strong reciprocity, I wish to make a confession. I actually liked A Cooperative Species a lot. This is a book that I’ve been hoping (for years!) that someone would write. Bowles and Gintis do a great job describing the current state of theory in the highly dynamic field of social evolution.
They review dozens of models on diverse topics such as altruistic punishment, coordinated punishment, reputation and indirect reciprocity, signaling, parochial altruism, gene-culture coevolution, coevolution of institutions and preferences, evolution of shame and other pro-social emotions, and many more. It’s the best handbook of social evolutionary theory I’ve read so far. And I found the use of explicit math to be just about optimal: key formulae and equations in the text, the rest in the appendices and in references.
The book is also much more than a simple compendium of models. Bowles and Gintis have thought long and hard about how insights from different models interlock to create a holistic canvas of our understanding of how human cooperation may have evolved. The field is mature enough that several general lessons have emerged.
One such general insight is that all successful models of altruistic behaviors share one feature – positive assortment of altruists (p. 48). Group selection works if altruists are more likely to find themselves in the same group. Kin selection works if relatives preferentially interact with each other. Even reciprocal altruism shares this feature (because of the iterated manner of the interaction: positive assortment in time, rather than space). Although those steeped in the modeling literature have known about this general result for some time, now there is an excellent reference to direct non-modeling colleagues to.
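The assortment condition can be captured in a few lines (again my own toy model, not the book's). Suppose altruists pay a cost c to confer a benefit b on their partner, and with probability r a partner matches your own type, while with probability 1 − r the partner is drawn at random from a population in which altruists have frequency p:

```python
# Toy model of positive assortment. With assortment r, an altruist meets
# another altruist with probability r + (1 - r) * p; a non-altruist meets
# an altruist with probability (1 - r) * p.
def fitness_difference(b, c, r, p):
    """Expected payoff of an altruist minus that of a non-altruist."""
    w_altruist = b * (r + (1 - r) * p) - c
    w_nonaltruist = b * ((1 - r) * p)
    return w_altruist - w_nonaltruist  # simplifies to r*b - c

# Altruism is favored exactly when r*b > c, whatever the frequency p:
assert fitness_difference(b=2, c=1, r=0.6, p=0.9) > 0  # r*b = 1.2 > 1
assert fitness_difference(b=2, c=1, r=0.4, p=0.9) < 0  # r*b = 0.8 < 1
```

The difference reduces to rb − c, a Hamilton-style condition: without some positive assortment (r > 0), no benefit-to-cost ratio can save altruism, which is the common thread running through the models Bowles and Gintis survey.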
So despite my terminological problems, my recommendation is: buy the book, read it, and keep it handy as a reference.