Cooperation in Humans: Is It Really ‘Strong Reciprocity’?


When I first met Sam Bowles (it must have been in the early 2000s) I was already a committed proponent of multilevel selection. My recollection of our interactions at the time was that while he was not a strong critic of the idea, neither was he a strong supporter. I was reminded of our early conversations as I was reading the new book co-authored by Bowles and Herb Gintis, A Cooperative Species: Human Reciprocity and Its Evolution.

A central concept of the book, and more generally of Bowles and Gintis’s research, is strong reciprocity. I really dislike this term (I prefer cooperation), and I suspect that Bowles and Gintis use it because it reflects their personal evolution on the way to accepting multilevel selection. Like many other economists, they are very well versed in game theory. Game theory played an extremely important role in defining the central puzzle of cooperation: how can it evolve despite its vulnerability to free-riders? This problem was starkly delineated in the canonical Prisoner’s Dilemma (PD) game, in which the only rational decision is to ‘defect,’ that is, to withdraw cooperation.

This insight has been with us for a long time, and its implications for cooperation have been rather gloomy. During the 1970s and 1980s, an apparent solution was found. It turned out that if the PD game is played repeatedly (the so-called Iterated PD), then a cooperative strategy becomes possible. Such a strategy is conditional: cooperate if the other player does so, defect otherwise. Anatol Rapoport called this strategy “tit for tat,” and Robert Trivers referred to it as “reciprocal altruism” (by the way, another misnomer, since it has nothing to do with altruism, as Bowles and Gintis note on p. 52). Unfortunately, the reciprocal altruism breakthrough turned out to be illusory in the larger quest to understand why humans are such a cooperative species. The problem is that it really works only for pairs, or at most a handful, of people. Once the group becomes larger than 5–10 individuals, reciprocal altruism starts to break down, and it is certainly not the answer for lasting cooperation at any realistic group size, even in small-scale human societies (hundreds, or a few thousand, individuals).
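The logic of the Iterated PD and the tit-for-tat strategy can be sketched in a few lines of code (the payoff values below are the conventional illustrative ones, not taken from the book):

```python
# Conventional illustrative payoffs: T=5 (defect on a cooperator),
# R=3 (mutual cooperation), P=1 (mutual defection), S=0 (sucker's payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_moves):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_moves else opponent_moves[-1]

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play an Iterated PD and return the two cumulative payoffs."""
    seen_by_a, seen_by_b = [], []  # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is sustained
print(play(tit_for_tat, always_defect))  # (9, 14): TFT is exploited only once
```

Between two conditional cooperators, cooperation sustains itself; the difficulty is precisely that this pairwise bookkeeping does not scale to groups much larger than a handful of players.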

Although David Hume did not know game theory, he clearly saw the problem with reciprocal altruism in larger groups: “Two neighbors agree to drain a meadow, which they possess in common; because ’tis easy for them to know each other’s mind; and each must perceive, that the immediate consequence of his failing in his part is abandoning of the whole project. But ’tis very difficult and indeed impossible, that a thousand persons shou’d agree in any such action; it being difficult for them to concert so complicated a design, and still more difficult for them to execute; while each seeks a pretext to free himself of the trouble and expense, and wou’d lay the whole burden on others.”

In fact, of course, it is quite possible for a thousand and more people to cooperate in performing highly complex tasks (think of a Roman legion building a fortified camp – something legionaries excelled at). The reason is that people are not rational actors. People can sacrifice their own payoffs in order to promote cooperative ventures, and punish free-riders, even at a cost to themselves.

Bowles and Gintis call such behaviors “strong reciprocity,” but I don’t think that simply tacking the adjective ‘strong’ onto ‘reciprocity’ results in good terminology. Why not use ‘cooperation’ or ‘pro-sociality’ instead? One might object that no harm is done, because they clearly define the term (on p. 20 of their book), so there is no confusion, as long as the reader reads the book carefully. The problem with this approach is that the science of cooperation is a highly multidisciplinary field, in which diverse disciplines are involved – from modeling and evolutionary biology to economics, sociology, psychology, neurobiology, anthropology, and even history (although according to the NSF classification history is not a science but a humanity). Additionally, and even more importantly, the science of cooperation is (or, perhaps, should be) of high relevance to the general public and the world of social policy. So it is better to use words carefully and to choose terms that will be readily understood by different kinds of scientists, humanists, and non-scientists without a need for technical definitions.

Now that I’ve ‘vented my spleen’ about strong reciprocity, I wish to make a confession. I actually liked A Cooperative Species a lot. This is a book that I’ve been hoping (for years!) that someone would write. Bowles and Gintis do a great job describing the current state of theory in the highly dynamic field of social evolution.

They review dozens of models on diverse topics such as altruistic punishment, coordinated punishment, reputation and indirect reciprocity, signaling, parochial altruism, gene-culture coevolution, coevolution of institutions and preferences, evolution of shame and other pro-social emotions, and many more. It’s the best handbook of social evolutionary theory I’ve read so far. And I found the use of explicit math to be just about optimal: key formulae and equations in the text, the rest in the appendices and in references.

The book is also much more than a simple compendium of models. Bowles and Gintis have thought long and hard about how insights from different models interlock to create a holistic canvas of our understanding of how human cooperation may have evolved. The field is mature enough that several general lessons have been learned.

One such general insight is that all successful models of altruistic behaviors share one feature – positive assortment of altruists (p. 48). Group selection works if altruists are more likely to find themselves in the same group. Kin selection works if relatives preferentially interact with each other. Even reciprocal altruism shares this feature (because of the iterated manner of the interaction: positive assortment in time, rather than space). Although those steeped in the modeling literature have known about this general result for some time, now there is an excellent reference to direct non-modeling colleagues to.
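This general result can be illustrated with a toy calculation (the benefit, cost, and assortment numbers below are my own illustrative assumptions, not Bowles and Gintis’s):

```python
# Toy model of positive assortment. An altruist pays cost c to confer
# benefit b on its partner; with assortment r, a player meets its own
# type with probability r, and a random member of the population otherwise.
def expected_payoffs(p, b=3.0, c=1.0, r=0.0):
    """p = frequency of altruists; returns (altruist, selfish) payoffs."""
    partner_altruist_if_altruist = r + (1 - r) * p
    partner_altruist_if_selfish = (1 - r) * p
    w_altruist = partner_altruist_if_altruist * b - c
    w_selfish = partner_altruist_if_selfish * b
    return w_altruist, w_selfish

# No assortment: altruists always trail by exactly the cost c.
print(expected_payoffs(0.5, r=0.0))  # (0.5, 1.5)
# Sufficient assortment (r > c/b): altruists come out ahead.
print(expected_payoffs(0.5, r=0.5))  # (1.25, 0.75)
```

The algebra reduces to r·b > c regardless of the frequency of altruists – a Hamilton-style condition in which r measures assortment in general, whether produced by kinship, group structure, or repeated interaction.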

So despite my terminological problems, my recommendation is: buy the book, read it, and keep it handy as a reference.

tmtyler

Surely rational actors could collaborate to do things in large groups – like build fortified camps. Punishment is seen as being a high-status activity – and so has hidden signalling payoffs. Cooperative enterprises are often not zero-sum games – as the prisoner’s dilemma illustrates, both parties can come out ahead. Kin altruism, network altruism and signalled altruism between them can produce a considerable amount of altruism – and they can all be used by rational actors out to maximise their inclusive fitness. Yesterday I cooperated with a total stranger half way around the globe. We cooperated – not due to our irrationality, but because we could see and influence each other’s reputations in a reputation system. Such events illustrate the dramatic power of signalled and network altruism.

tmtyler

I am highly sceptical about all successful models of altruistic behavior sharing the feature of positive assortment of altruists. One common reason people are altruistic is because they are being manipulated. The widow doesn’t give her money to the con-man due to altruists clumping together. Yet she is behaving altruistically – by the common definition of altruism: she’s taking a hit for the benefit of another. People are also manipulated into behaving altruistically towards others by parasites, symbionts and cultural elements – which all often need close host contact in order to reproduce. Altruism in nature goes well beyond the models that involve altruists clumping together.

tmtyler

Checking with page 48, the problem is that the authors assume that the “altruistic alleles” are in the donors of the altruism – whereas in reality, in “induced altruism” the heritable elements responsible for inducing the altruism can be in the recipients, third parties, symbionts, or cultural elements. All the donor needs to have is imperfect defenses against manipulation – and there are plenty of ways for that to happen.

tmtyler

Virtue signalling and reputation systems do work in large groups, though. They can then successfully facilitate reciprocal relationships between large numbers of people. If you know – or can find out – another person’s reputation, that lets you know whether you can trust them.

tmtyler

If you look at large groups in practice you will normally find that individuals in them have a relatively small associated network of cooperating “neighbours” – for example, coworkers and bosses whom they can identify and interact with repeatedly. It’s the same within governments and charities. Since fewer than a hundred people are typically involved in such cooperative networks, keeping track of them doesn’t strain the cognitive capabilities of any individual human. The networks then overlap and interact, helping to create larger-scale cooperation. Iterated reciprocation plays a huge role in these kinds of interaction. That explains some of the interest in “Prisoner’s Dilemma” situations.

tmtyler

I looked up the reference for this (to my eyes) bizarre material about reciprocal altruism not applying to large groups. A prominent paper appears to be: Boyd and Richerson (1988) “The evolution of reciprocity in sizable groups” [*]. The conclusion says that they assumed that the groups had no internal structure, and that their conclusion might no longer hold if they did.

Well, large groups often do have internal structure. For instance they are often spatialised. It’s the internal structure of large groups that helps to understand how reciprocal altruism applies to them. Break the large group down into overlapping networks of around the same size as Dunbar’s number, and you should rapidly realise how important reciprocal altruism is to explaining cooperation within large groups of humans.

* http://merton.sscnet.ucla.edu/anthro/faculty/boyd/BoydRichersonJTB88.pdf

tmtyler

Another issue with the above-cited B&R (1988) paper is that it models cooperation within large groups by repeatedly sampling n individuals from the group and using an n-person prisoner’s dilemma. However, that just doesn’t match up with how reciprocal altruism actually works in large groups. Most reciprocal relationships – as the very word “reciprocal” might imply – involve *two* individuals. That is the case where everyone seems to agree that reciprocal altruism works. However, each individual has dozens – or hundreds – of such reciprocal relationships – throughout their social network. Then the cooperative network of each individual partly overlaps with the cooperative network of every other individual, creating cooperation that permeates the whole group.

The concept of an n-person prisoner’s dilemma seems to be an irrelevant red herring – from my perspective. It is a poor way of representing how reciprocal altruism applies to the situation.

hgintis

We use ‘strong reciprocity’ because there are several distinct forms of cooperation, including strong reciprocity, mutualism and reciprocal altruism. Sometimes we use altruistic cooperation and altruistic punishment for the two sides of strong reciprocity.

Peter Turchin

Herb,

My preferred way of sorting these issues into terminological pigeonholes is to reserve the use of ‘cooperation’ for situations where a public good is produced and private costs are large enough so that a rational self-regarding agent would not contribute, unless forced by moralistic punishers. Thus, a group consisting entirely of self-regarding agents will never be able to cooperate. So cooperation is possible only if some agents are motivated by ‘extra-rational’ prosocial norms, including the norm of moralistic punishment.

Under certain conditions rational agents may become involved in collective action, but I prefer to call this ‘coordination’, rather than cooperation.

Reciprocal altruism is neither altruism nor cooperation as you yourselves say in the book. A better term for it would be simply ‘reciprocity.’

Finally, punishment is an important issue. However, cooperation in the strong sense is possible without punishment. Additionally, as Nowak and others have argued, rewards (‘negative punishment’) may work even better for maintaining a cooperative equilibrium.
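To make the distinction concrete, here is a toy public goods game with moralistic punishment (the group size, multiplier, fine, and cost values are illustrative assumptions, not taken from anyone’s model):

```python
# Toy public goods game with moralistic punishment.
def public_goods_payoffs(contributions, punishers, multiplier=2.0,
                         fine=1.0, punish_cost=0.5, endowment=1.0):
    """contributions[i] is 0 or the endowment; punishers[i] is True if
    player i fines every defector, paying punish_cost per fine."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n  # public good, split equally
    n_punishers = sum(punishers)
    n_defectors = sum(1 for c in contributions if c == 0)
    payoffs = []
    for c, p in zip(contributions, punishers):
        pay = endowment - c + share
        if c == 0:
            pay -= n_punishers * fine          # fined by every punisher
        if p:
            pay -= n_defectors * punish_cost   # punishing is itself costly
        payoffs.append(pay)
    return payoffs

# With no punishers, the free-rider earns the most (multiplier < n):
print(public_goods_payoffs([1, 1, 1, 0], [False] * 4))
# With two punishers, defection no longer pays:
print(public_goods_payoffs([1, 1, 1, 0], [True, True, False, False]))
```

Note that the punishers end up with less than the non-punishing contributor – the second-order free-rider problem – which is why moralistic punishment is itself an ‘extra-rational’ prosocial behavior rather than something a purely self-regarding agent would supply.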

hgintis

PT: My preferred way of sorting these issues into terminological pigeonholes is to reserve the use of ‘cooperation’ for situations where a public good is produced and private costs are large enough so that a rational self-regarding agent would not contribute, unless forced by moralistic punishers.
HG: This is not terminology, but deep theory. If the Folk Theorem conditions hold, then rational self-regarding agents can cooperate in virtually any “situation.” Moreover, “cooperation” is a common, everyday term for a situation in which people coordinate their activities to achieve a commonly desired end. You can’t just redefine it any way you want, in my opinion.
We did the right thing in making up a new word for altruistic cooperation and punishment, because all reigning population biology recognized only kin selection and reciprocal altruism, and the behavior we were describing was quite different (although compatible with inclusive fitness, of course).
PT: Under certain conditions rational agents may become involved in collective action, but I prefer to call this ‘coordination’, rather than cooperation.
HG: You are welcome to use the terms the way you want in what you write, but I don’t think it is productive to say that you “hate” the way other researchers use terms, except that it differs from yours. I’m not saying that terminology is unimportant, because terminology can be misleading. But strong reciprocity is not at all misleading—it is clearly defined and we use it in only one way, I believe.
PT: Reciprocal altruism is neither altruism nor cooperation as you yourselves say in the book. A better term for it would be simply ‘reciprocity.’
HG: I agree that reciprocal altruism is not accurate, but we are saddled with it and it will not change. “Reciprocity” is too general a term for an alternative. “Reciprocal helping” is probably better, and accurate.

PT: Finally, punishment is an important issue. However, cooperation in the strong sense is possible without punishment. Additionally, as Nowak and others have argued, rewards (‘negative punishment’) may work even better for maintaining a cooperative equilibrium.
HG: I don’t agree at all. There is no cooperation without punishing defectors in any social species (and among the cells in multicellular organisms), and reward is a poor alternative to punishment under almost all conditions. Nowak’s models are completely unrealistic, and even obviously so to casual inspection.

tmtyler

That notion of “cooperation” seems to be in conflict with ordinary English usage, as enshrined in dictionaries. That isn’t necessarily a terminal problem – but it doesn’t seem to be a great feature either.

hgintis

By the way, we do not say that altruistic behavior is “irrational.” Far from it. We say that it is “other-regarding,” meaning people care about more than their own payoffs.

hgintis

tmtyler has very interesting comments and an agile mind, but he would do well to read our book before criticizing it and offering alternatives that we deal with at some length in our book and related journal articles.

tmtyler

My tuppence on “A Cooperative Species”: http://on-memetics.blogspot.com/2012/02/cooperative-species-reviewed.html I don’t pretend to be able to offer much more than tuppence at this stage on that particular topic.

Peter Turchin

Enough already with dragging out the specter of poor Wynne-Edwards any time multilevel selection is mentioned. Yes, there was a lot of poor theory and naive group selectionism back in the 1950s and 60s. It’s been decades since I read Wynne-Edwards, but I don’t think he ever used mathematical models to supplement his verbal reasoning. What Bowles and Gintis do very well is review the very sophisticated mathematical theory that was developed through the efforts of literally dozens of theoreticians. The hard fact is that the only theory that has been able to provide an internally consistent explanation of the evolution of human ultrasociality is multilevel selection. And this is the only theory that has generated testable predictions. So there is no realistic alternative.

tmtyler

The modern wave of group selection in the human sciences is not the only way of dealing with human ultrasociality, though.

The main problem appears to be that all the other known causes of cooperation are insufficiently appreciated.

My current assessment is:

* Virtue signalling and reputations are not being given enough weight.

* Very little attention is being given to “induced altruism” of various forms. “Induced altruism” is altruism as a result of manipulation by other entities – often via human culture.

* Framing everything in terms of cultural group selection (rather than cultural kin selection) is very bad. Where is the cultural parental care? Where is the cultural kin recognition? We know from the organic realm that kin selection acting on close relatives is a massive force, far outweighing in significance effects that cause altruism to other group members. Ignoring the corresponding force in the cultural realm makes no scientific sense at all.

* The idea that humans are nice to others because their ancestors were mostly surrounded by friends and kin in the environment in which they evolved appears not to be given enough weight.

* Some folks appear to be dramatically under-estimating the applicability of reciprocity to large human groups – e.g. see: http://on-memetics.blogspot.com/2012/04/role-of-reciprocity.html

I should probably stop here – although a complete list would have more items.

Sure, group selection can be applied to humans, and might even explain some of their features. However, the list of things currently being attributed to group selection – which are actually better explained by items in the list above – looks enormous to me.

I do think that the “clue” relating to human generosity in one-shot encounters (that looks as though it requires explanation in terms of group selection) needs reinterpreting in the light of the paper about “The evolution of direct reciprocity under uncertainty can explain human generosity in one-shot encounters”.

I’m not sure whether the wave of modern group selection will face a rude awakening on the same scale as Wynne-Edwards – but I do expect a fairly radical change to the current picture, as more researchers with a biological background enter the field.

hgintis

TMT: My current assessment is:

* Virtue signalling and reputations are not being given enough weight.
HG: This is not true. We take these very seriously, and they explain a lot of cooperation. But they require public information, whereas much behavior is available only to a small subset of the cooperating group. Thus either subsets must punish/reward, and/or there must be truthful reporting, which is itself a prosocial act requiring altruism in the form of suppressing personal gains from lying or misrepresenting.

TMT: * Very little attention is being given to “induced altruism” of various forms. “Induced altruism” is altruism as a result of manipulation by other entities – often via human culture.
HG: This is important in some cases, but is derivative of other forms of prosociality (e.g., making ethnic cleansing popular).

TMT: * Framing everything in terms of cultural group selection (rather than cultural kin selection) is very bad. Where is the cultural parental care? Where is the cultural kin recognition? We know from the organic realm that kin selection acting on close relatives is a massive force, far outweighing in significance effects that cause altruism to other group members. Ignoring the corresponding force in the cultural realm makes no scientific sense at all.
HG: Cultural transmission is vertical, oblique, and horizontal. You are talking about vertical transmission, which is central to the theory.

TMT: * The idea that humans are nice to others because their ancestors were mostly surrounded by friends and kin in the environment in which they evolved appears not to be given enough weight.
HG: This is because it is not true. There was exogamy, plenty of chances for anonymity, and long-distance trade for hundreds of thousands of years.

TMT: * Some folks appear to be dramatically under-estimating the applicability of reciprocity to large human groups – e.g. see: http://on-memetics.blogspot.com/2012/04/role-of-reciprocity.html
HG: We dealt with this VERY carefully in our book and papers. I spent several years teasing apart these issues. Cooperation in groups of more than a few individuals requires prosociality. That is why all cooperation in mammal groups is always mutualism, not altruism. Only humans and social insects/corals are altruistic. Even reciprocal altruism is very rare in non-human groups, if it exists at all.

tmtyler

Hi, H.G. Looking at “A Cooperative Species”, it does treat reciprocity and virtue signalling – most of my complaints about those issues don’t really apply to that book.

Regarding “induced” altruism – I don’t personally classify this as a subset of any other well-known category of altruistic acts – it isn’t kin selection, group selection, reciprocity, or virtue signalling. It could, perhaps, be classified as a form of indirect reciprocity (though that is because indirect reciprocity is a bit of a dustbin category).

Cultural kin selection is not the same as vertical transmission. It refers to interactions between cultural kin (as opposed to organic kin). So for example soldiers in an army are cultural kin – since they share uniforms and ideas about patriotism. Catholic priests are cultural kin – since they share religious doctrine. School children, nurses and monks are cultural kin – and often you can see this just by looking at them. All the iPhones in the world are cultural kin. Cultural kin selection is a pretty major source of cooperation between encultured humans – just as kin selection is important in the organic domain.

Your final comments about reciprocity read very strangely here – but it seems possible that that is down to different use of terms. As Peter mentioned, you point out that “reciprocal altruism” is a bit of a misnomer. Conventionally, altruism is thought to be common in nature – since “kin altruism” is counted as being a form of altruism.

Peter Turchin

Tim, ‘cultural kin selection’ is a very quirky term. It sounds like you are talking about the same thing that we call cultural group selection, but insist on sneaking in the keyword ‘kin’ (and avoiding the dreaded ‘G’ word).

I agree with Herb’s statement above that, as best as we know, from the dawn of humanity human groups consisted primarily of non-relatives. There is a 2011 article by Kim Hill and others in Science, which shows that in hunter-gatherer societies most individuals in residential groups are genetically unrelated. You would call them ‘cultural kin’ because they are very similar in culture, but doing so will lead to all kinds of misunderstandings.

tmtyler

“Cultural kin selection” is the most common term for the concept. Other names that have been proposed are: “cultural inclusive fitness theory”, “memetic kin selection” and “kith selection”, but those seem to be less common terms. The idea is no more a synonym for cultural group selection than kin selection is a synonym for group selection. They have different models and a different emphasis – and I haven’t yet encountered anyone who advocates calling them by the same name.

Similar comments apply to the term “cultural kin”. That’s the most common term for the concept. Some people have used “memetic kin”. Anthropologists have sometimes used vaguer terms like “social kinship” and “nurtural kin” to refer to a similar idea, but without really having such a clear idea about how it might work.

Think you can contribute terminology that would lead to fewer “misunderstandings”? Great, wonderful, go ahead. On the other hand, if you mean switching to talking about cultural group selection, then no, that isn’t a realistic proposal: kin selection and group selection have different names for a good reason.

Peter Turchin

HG: I don’t agree at all. There is no cooperation without punishing defectors in any social species (and among the cells in multicellular organisms), and reward is a poor alternative to punishment under almost all conditions. Nowak’s models are completely unrealistic, and even obviously so to casual inspection.

PT: Actually, we are broadly in agreement (I also think that Nowak overstates the case for rewards vs. punishment). The disagreements focus on nuances. For example, theoretically, you can have evolution of cooperation without any punishment – as long as group-level selection is strong enough, groups with any free-riders are rapidly weeded out. In practice, however, punishment makes the cooperative equilibrium much more stable, so much weaker group-level selection becomes sufficient. No argument there. But punishment, while an important mechanism, is in this sense secondary. Another important secondary mechanism that makes group selection a potent force in human evolution is that humans are pre-adapted for warfare (a small step from big-game hunting), which intensifies between-group competition. And, of course, culture, imitation, and leveling norms, which decrease within-group variance in fitness (and increase between-group variance).

tmtyler

The “Evolutionary Legacy” hypothesis is concerned with “friends and kin” – just as I originally stated. Evidence relating to relatedness in hunter-gatherer tribes does not significantly impact on its validity.

It is a reasonable way of explaining a phenomenon such as altruistic punishment – though “overgeneralisation” – due to resource-limited cognition – probably plays a role there too. For the case for altruistic punishment as an evolutionary legacy, see: “The Biological and Evolutionary Logic of Human Cooperation” by Terence Burnham and Dominic Johnson http://www.socsci.uci.edu/imbs/CONFERENCES/2007/EVOL%20OF%20PUNISHMENT/jOHNSON2.pdf

tmtyler

The “Burnham” paper I just cited above described “strong reciprocity” as a “confusing misnomer”.

I also notice that the “Sixteen common misconceptions about the evolution of cooperation in humans” article – by West, El Mouden and Gardner – features “strong reciprocity” in misconceptions 14, 15 and 16. http://www.zoo.ox.ac.uk/group/west/pdf/WestElMoudenGardner_11.pdf

john zeb

hey peter, any more book selections that are essential for readers of this web blog?

Peter Turchin

John, my plan is to blog about books as I read them. But perhaps it would be a good idea at some point to supply a list of key works (at least, in my opinion). I’ll think about it.


© Peter Turchin 2023 All rights reserved
