The central question of social evolution is how we can understand the rise of complex societies with extensive cooperation among millions (and more) of people. In less technical terms, what are the origins of civilizations and empires? I couldn’t help but think about this question during my visit to the Cahokia Mounds. Why was the largest-scale society in North America located in southern Illinois? Why did it arise around 1000 CE?
As readers of my books know, my favored explanation for the evolution of social complexity is the theory of cultural group (better, multilevel) selection. Since throughout most of human history competition between societies usually took the form of warfare, we need to look to the patterns of warfare to understand the rise of civilizations.
Take the case of the Sinic (Chinese) civilization. Over the last three thousand years the cradle of the Chinese civilization, the Yellow River Basin, has been unified by one empire after another. There is no other region on Earth that could rival the Yellow River Basin in the intensity of ‘imperiogenesis’ (proportion of time that it found itself within a large empire). In a series of publications (for example, this one) I have argued that the explanation of this remarkable pattern has to do with the very intensive warfare between nomadic pastoralists (Hunnu, Turks, Mongols, etc) and the agrarian Chinese. This is why China was typically unified from the North (and most frequently from the Northwest) – it was the military pressure from the Great Eurasian Steppe that selected for unusually cohesive North Chinese societies, which then would go on to build huge empires by conquering the rest of East Asia.
Steppe frontiers are crucibles of empires; you add a major river and you are practically guaranteed to have an imperiogenesis hotspot. Examples are numerous: the Nile, the Tigris-Euphrates, and the Indus are the usual suspects. But the first empires in sub-Saharan Africa (Ghana, Mali, and Songhai) arose on the Niger River where it flows through the Sahel. This correlation has been long noted. Karl Wittfogel attempted to explain this observation with his theory of ‘hydraulic empires’, based on the control of irrigation by state bureaucracy, but this theory has been empirically disproved. For example, the major river of eastern Europe, the Volga, was the cradle of a number of empires (Bulghar, the Kazan Khanate, and, most notably, Muscovy-Russia), none of which relied on intensive irrigation. Nor did the Chinese along the Yellow River. In other civilizations irrigation was typically a local, rather than an imperial concern. Most likely, the river effect is due to a combination of good environment for intensive agriculture on alluvial soils and the ease of communications (because transporting goods on water was an order of magnitude cheaper than carting them on land).
Whatever the causal factors, let’s go back to the empirical generalization that there is something special about a major river flowing through a steppe frontier that predisposes such places to engender complex societies. If you were to look within the American landmass north of Mexico for the spot that best fits this description, where would it be? Incredibly, southern Illinois.
First, the mightiest river in North America is indisputably the Mississippi. Second, as the Mississippi flows South along the boundary of Missouri and Illinois, it leaves the steppe region (known in North America as the Prairie) just North of modern St. Louis/ancient Cahokia. This can be seen on the map of Gross Primary Productivity, where the Prairie is colored in browns and yellows:
or even more clearly on the map of major biomes from the Cahokia Mounds Interpretive Center:
Note also that Cahokia is located in the far northwest corner of the Mississippian Cultural Area:
This is eerily similar to China, which was invariably unified from the North, and most usually from the Northwest. As a result, Chinese capitals were always located on the Northwestern frontier with the steppe, not in the Yangzi River valley, which is much more centrally located and has much more productive agriculture. In the Mississippian culture, similarly, the greatest plant productivity is in the Southeast (see the map of Gross Primary Production above), yet the first and the largest urban center, Cahokia, is located in the Northwest, on the steppe frontier.
So there are several remarkable similarities between the Chinese and Mississippian civilizations. However, I do not want to push the analogy too far. The population of urban Cahokia was perhaps 30,000 people, whereas populations of Chinese capitals, both in Ancient and Medieval times, easily topped one million. The scale of the Cahokian polity was an order (or two) of magnitude less than that of ancient Chinese empires.
A closer Chinese analog of Cahokia is not the later Imperial period, but the Erlitou Culture (early II millennium BCE), which has been tentatively identified with the shadowy Xia Dynasty (which preceded the better-known Shang period). Like Cahokia, the Erlitou Culture was in the same ambiguous position of being either a very complex chiefdom or an archaic state (recent excavations suggest that it had achieved statehood). And we know very little about the territorial extent of the Erlitou/Xia state, just as we are unsure how far the rule of Cahokia extended. Other such ambiguous cultures include Uruk in Mesopotamia (IV millennium BCE), Egypt under Dynasty 0 (late IV millennium BCE), and the Indus Valley Civilization (III millennium BCE). Archaeologists still argue whether Uruk or Mohenjo-Daro were city-states or capitals of extensive territorial states.
So Cahokia seems similar to the very first urbanized societies of the Old World. Had the Europeans not arrived in the New World around 1500 CE, perhaps the Mississippian culture would have risen again – and fallen again, and so on – in the typical sequence of rise and demise that was the historical pattern in Egypt, Mesopotamia, India, and China. However, the urbanized Mississippian culture arose much later than its Old World analogs. Why couldn’t a complex society evolve in North America before c.1000 CE?
I think there are two factors that determined the timing of the rise of the Mississippian civilization. First, it was only during the first millennium of the Common Era that intensive maize agriculture developed in the Mississippian region. As we know well, a productive agrarian base is a necessary condition for large-scale, complex societies. This is the standard explanation that you will see in archaeological books and articles. But intensive agriculture is only a necessary condition, not a sufficient one. Many regions on Earth had agriculture, but did not develop states until they were colonized by European Great Powers.
The second factor, I would argue, was the diffusion of the bow and arrow into the Mississippian region from the northwest during the second half of the first millennium. The introduction of this novel military technology must have led to more destructive warfare in the Mississippian region, leading to more intense cultural group selection, and from there to larger-scale societies. Perhaps the people of the Oneota Culture, who lived in the Prairies to the northwest of Cahokia, were the functional equivalent of the steppe nomads northwest of the Chinese?
Of course, archers on foot in the American Great Plains are a far cry from the mounted archers of the Great Eurasian Steppe. They were probably a much lesser threat to agrarian societies. But the Mississippian polities were also of a much lesser scale than the Chinese empires. It is a reasonable proposition that similar evolutionary mechanisms were operating, but on a much reduced scale in North America, compared to the Old World.
During the Spring semester I teach a class in Cultural Evolution for about 150 students. We use a ‘clicker’ technology that allows me to poll all students in the class electronically. Every year I ask them, in what state was the most complex, largest-scale pre-Columbian society in North America located? Most students choose New Mexico, because they have heard about the Anasazi. But the Ancient Pueblo People lived in small-scale, uncentralized societies. It is remarkable, but almost none of my students choose the correct answer – Southern Illinois, just across the Mississippi River from St. Louis.
I had wanted to visit Cahokia Mounds for years, and finally my chance came during the trip to St. Louis two weeks ago. What follows is my photo reportage of the visit to the site.
The site itself is not particularly impressive. Here’s the view of the largest mound, known as the Monks Mound:
The staircase that leads to the top of the mound is, of course, a modern addition:
From the top of the Monks Mound one can look across what used to be a huge public space to the Fox Mound:
The folks at the Cahokia Mounds State Historic Museum Society have done a great job of reconstructing what the Mississippian culture might have looked like. Naturally, there is a lot of guesswork involved, but as our archaeological knowledge gets better, many of the errors of interpretation will be corrected.
Here’s an artist’s reconstruction of central Cahokia (by the way, obviously, we don’t know what the natives called their great city; unfortunately the Mississippian society had no writing):
The large mound in the back is the Monks Mound, and the partially visible mound on the right in the forefront is the Fox Mound (compare to my photographs above).
The depicted activities of people are, as far as I know, pure fantasy on the part of the artist (this is not a criticism – he or she had to somehow fill the landscape; just take it with a large helping of salt). One thing that is almost certainly wrong is the grassy green slopes of the mounds. My colleague and co-author David Anderson told me that the mounds were probably covered with red, black, and white clays laid out in horizontal layers, which must have made for a visually striking image back in the thirteenth century.
There is probably a lot wrong with this reconstruction, but one thing seems certain – there was a huge public space in the middle of this pre-Columbian city. It was probably used for public rituals, and its extent is suggestive of huge crowds that must have participated in these rituals.
The scale of social organization is also indicated by the amount of labor that was necessary to mobilize in order to build the monumental mounds:
According to the calculation at the Center, the amount of labor required just to move the earth to create the Monks Mound (the largest mound, but one of many) was c.5,000 person-years.
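The Center’s figure is easy to sanity-check with a back-of-the-envelope calculation. Every parameter below (mound volume, basket size, trips per day, working days per year) is my own illustrative assumption, not a measured value:

```python
# Back-of-envelope check of the Interpretive Center's ~5,000 person-year figure.
# All numbers here are assumptions for illustration, not archaeological data.
mound_volume_m3 = 600_000       # assumed volume of earth in the Monks Mound
basket_load_m3 = 0.02           # ~20 liters of earth carried per basket trip
trips_per_worker_day = 50       # round trips per worker per working day
working_days_per_year = 150     # seasonal labor, between planting and harvest

m3_per_worker_year = basket_load_m3 * trips_per_worker_day * working_days_per_year
person_years = mound_volume_m3 / m3_per_worker_year
print(round(person_years))      # ≈ 4,000 under these assumptions
```

Under these made-up but plausible parameters the estimate lands in the same range as the Center’s c.5,000 person-years. Whatever the exact numbers, moving that much earth basket-load by basket-load required mobilizing thousands of workers over many years – which is the point about the scale of social organization.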
The central plaza was surrounded by houses, and there was a palisade that protected the center of Cahokia from attack:
Note that many buildings were placed outside the defensive walls. Cahokia was a center of trade, and imported exotic materials from far away:
Here’s a very nice Caddoan water bottle that came from another culture to the southwest of Cahokia:
The Mississippian society was a complex, centralized polity. It was either a complex chiefdom, or even an archaic state, as recently argued by some archaeologists such as Tim Pauketat:
This social structure is reconstructed based on historically attested Mississippian chiefdoms after the European contact (which, however, were of much lesser scale than Cahokia).
The paramount ruler of Mississippian polities was called ‘Great Sun’ in later chiefdoms:
We actually have a pretty good idea of the kind of dress and ornamentation the Cahokian elites wore, from the burials:
A live elite individual could have looked like this:
Note the kilt decoration, hair pins, and copper ear spools.
The Mississippian art was quite sophisticated. Here’s a figurine that may represent the “earth mother” or an agricultural deity:
The Mississippian culture was Neolithic. A flintknapper:
Women preparing food:
and cooking it:
Agricultural diets (in particular, over-reliance on corn) imposed heavy costs on the health of ancient Cahokians, as indicated by tooth decay:
So why did the complex society, centered on Cahokia, collapse? The Interpretive Center lists four possible explanations:
1. Over-exploitation of land
2. Climatic change
3. Failure of leadership
4. Cultural change
Clearly, these folks need to learn about cliodynamics 🙂
As readers of my books know, the explanation of societal collapse that I have argued for (at least for societies inhabiting non-marginal environments, which most certainly includes Cahokia) is internal warfare brought on by a structural-demographic crisis. What is remarkable is that the Center never mentions warfare. When I realized this, I made a second circuit and checked all the exhibits. The only indirect mention of warfare I found was the identification of this individual as a high-status warrior:
That was it. This is yet another example of what has been called “the pacification of the past.”
Despite this caveat, I thought that the exhibition was really well done. Naturally, much of it is speculative, and the reconstructors surely got many things wrong. But I still feel very grateful to them for this attempt at bringing the lost culture of the pre-Columbian Mississippians to life. I enjoyed my Cahokia trip enormously.
Cultural evolution is what created the – in many ways – wonderful societies that we live in. It created the potential to free our lives from hunger and early death, and made possible the pursuit of science and art. But cultural evolution also has a dark side, in fact, many ‘dark sides.’
Clearly the domestication of plants and animals is what made our civilization possible. All sufficiently complex societies are possible only on the basis of agriculture. But we have paid, and continue to pay, a huge price for this advance of human knowledge and technology. This idea was brought home to me as a result of several conversations I had with Michael Rose during the Consilience Conference at St. Louis, which I talked about in my previous blog. Michael is an evolutionary biologist at the University of California, Irvine, who studies aging from an evolutionary perspective. I actually read his book Evolutionary Biology of Aging some twenty years ago, but never met him until two weeks ago.
One way people talk about the price of civilization is in terms of evolutionary mismatch (which is one of the focus areas at the Evolution Institute). The idea is that our bodies and minds evolved during the Pleistocene, when we lived in small groups of hunter-gatherers. Now we live in a dramatically different environment, and that causes all kinds of problems. The psychological aspect of the problem was recently discussed by Robin Dunbar and commentators on his Focus Article. The physiological problems include rampant obesity, heart disease, and diabetes.
There is currently no consensus on the role of changing diet and other aspects of lifestyle, most notably exercise, in causing modern-day health problems. Some people argue that our Pleistocene bodies are not adapted to the high-calorie diets and sedentary lifestyles of today. On the other hand, agriculture was invented roughly 10,000 years ago, and 400 generations is not an insignificant length of time for evolution to do its thing. Some anthropologists (including another participant of the Consilience conference, Henry Harpending) argue that humans evolved very intensively during this period. One famous example is the evolution of lactose tolerance, that is, the ability to digest milk as adults.
Michael Rose develops a more subtle and sophisticated argument, which is explained at length in his 55 theses – a New Context for Health. There is a sophisticated mathematical model underlying his argument, but the basic logic of it is actually quite simple.
We think of people having ‘traits,’ but actually we change quite dramatically as we age. The key ‘trick’ is to realize that people have a suite of traits, and they can be quite different, depending on what stage in life we are talking about. As an extreme example, consider reproductive ability, something of great interest to evolution. Humans do not reproduce until they reach a fairly advanced age of maturation (puberty). Young adults are not very good mothers or fathers, but they improve with age during their twenties. After that reproductive ability declines and eventually disappears. So reproductive ability is actually a trait that varies quite a lot with age.
Another example is hair color. One man can have red hair and another blond hair. However, this will be true only while they are relatively young. Older men go grey, and many become bald. So by the time our two men turn 60, they may have the same hair color (grey), or no hair at all (bald). By the way, the reason is likely not simple ‘degradation’ – reduced function due to aging – but that greyness and baldness evolved to signal maturity and wisdom. To really describe the phenotype of an individual, we need to specify at what age it is expressed.
Ability to digest certain foods can also be age-dependent. I have already mentioned the ability to digest lactose, the sugar present in milk. Before we domesticated animals such as cows and sheep, only very young humans had this ability. Natural selection turned this ability off in adults because they never needed it (and it would be wasteful to continue producing the enzyme lactase that aids in the digestion of milk sugar).
Now clearly, traits expressed at different ages are not completely independent of each other. The ability to digest milk sugar as an adult depends on the presence of an enzyme that evolved so that babies could digest their mother’s milk. So traits at different ages can be correlated, either positively or negatively. An example of negative correlation is reproductive ability – in many animals, putting a lot of effort into reproducing early reduces reproductive ability later in life. So a sophisticated mathematical framework for dealing with age-dependent traits has to take into account all kinds of possible correlations, both between the same trait at different ages and between different traits. For example, most individuals have dark eyes and dark hair, or light eyes and light hair, with dark/light and light/dark combinations a relative rarity.
We can now get to the crux of the matter. Because abilities to do something at the age of 10, 30, 50, etc. are separate (even if correlated) traits, they evolve relatively independently of each other. When grains became a large part of the diet, the ability of children to digest them (and detoxify the chemical compounds plants put into seeds to protect them against predators such as us) became critical. If you don’t have genes to help you deal with this new diet, you don’t survive to adulthood and don’t leave descendants. In other words, evolution worked very hard to adapt the young to the new diet. On the other hand, the intensity of selection on the old (e.g., those 55 years old) was much weaker – in large part because most people did not live to the age of 55 until very recently. Additionally, once an animal gets past its reproductive age, evolution largely ceases to have an effect (in humans, the presence of older individuals was somewhat important for the survival of their genes in their children and grandchildren, so selection did not cease entirely, but it was greatly slowed).
What this means is that evolution caused rapid proliferation of genes that enabled children and young adults to easily digest novel foods and detoxify whatever harmful substances were in them. Genes and gene combinations that did the same for older people also increased, but at a much, much slower rate. This may sound puzzling – if we have the detoxifying genes that work for young adults, why shouldn’t they work for older adults? The reason is that one gene-one action model is wrong; it’s not how our bodies work. Most functions are regulated not by a single gene, but by whole networks of them. As we age, some genes come on, and others go off, and the network changes, often in very subtle and nonlinear ways. That’s why we need the ‘trick’ with which I started, to consider functions at different ages as separate traits. During the last 10,000 years evolution worked very hard to optimize the gene network operating during earlier ages to deal with novel foods. But the gene network during later ages was under much less selection to become optimized in this way.
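The logic that selection weakens with age is the core of W. D. Hamilton’s classic ‘force of selection’ argument, which underlies the kind of mathematical framework Rose works with. A minimal sketch of it in Python – the survivorship and fecundity schedules below are hypothetical numbers I made up for illustration, not data from any real population:

```python
# Hamilton's insight: selection against a harmful effect first expressed at a
# given age is proportional to the expected reproduction still remaining at
# that age, i.e., the sum of l(x)*m(x) over all later ages x.
ages = list(range(0, 80, 5))
# l(x): probability of surviving to age x (hypothetical pre-modern schedule)
survivorship = [1.0, 0.6, 0.55, 0.5, 0.45, 0.4, 0.35, 0.3,
                0.25, 0.2, 0.15, 0.1, 0.06, 0.03, 0.01, 0.005]
# m(x): expected offspring at age x (hypothetical; reproduction ~15 to ~45)
fecundity = [0, 0, 0, 1.5, 2.0, 2.0, 1.5, 1.0,
             0.5, 0, 0, 0, 0, 0, 0, 0]

def force_of_selection(age):
    """Remaining expected reproduction from `age` onward: sum of l(x)*m(x)."""
    return sum(l * m for a, l, m in zip(ages, survivorship, fecundity)
               if a >= age)

for a in (10, 30, 55):
    print(a, round(force_of_selection(a), 2))  # force declines steeply with age
```

The force of selection at age 10 comes out several times larger than at age 30, and at 55 (past the reproductive ages in this schedule) it drops to zero: a gene variant that only harms 55-year-olds is nearly invisible to natural selection, exactly the asymmetry described above.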
The striking conclusion from this argument is that older people, even those coming from populations that have practiced agriculture for millennia, may suffer adverse health effects from the agricultural diet, despite having no problems when they were younger. The immediate corollary is that one thing they can do to improve their health is to shift to something known as the ‘Paleolithic diet,’ or paleo diet, for short. In the simplest form, this means eliminating from your diet any cereals (wheat, rice, etc), legumes such as beans and peas, and any dairy products (e.g., cheese). It is striking that this is almost precisely the opposite of the popular Mediterranean diet, which emphasizes wheat products (bread, pasta), cheese, and legumes (as in the Italian bean soup, in pasta fagioli, and in hummus).
We are now getting to something I have a personal (rather than a scientific) interest in. I am about to turn 55, and although I am generally in good health, various worrying indicators – cholesterol, blood sugar – have been inexorably inching up. A couple of years ago I read Ray Kurzweil’s Fantastic Voyage, but I was unpersuaded by his prescriptions for better health and longevity. Kurzweil’s prescription is, at bottom, a calorie-restricted diet. Like the great majority of human beings, I find it extremely difficult to starve myself. More generally, his approach to human health and longevity is that of an engineer – you turn one dial down, another up, and get the result you want (according to his book, he spends one day a week connected to a machine that removes bad things from his blood and adds good things). I am very doubtful that such an approach will work on an evolved system with multiple nonlinear feedbacks, which is what the human body is. Changing one variable (e.g., reducing the cholesterol level in the blood) may have unintended – and usually negative – consequences elsewhere (perhaps increasing the risk of cancer).
To conclude, the paleo diet is the first diet, of the ones I have heard of, that has a sound evolutionary basis going for it. This was the deciding factor in persuading me to try it out, which I did, starting about two weeks ago. It apparently takes about six months to see its full effects, so stay tuned for progress reports.
I was another participant at the Rules as Genotype workshop held at Indiana University recently. Unlike Peter Turchin, however, I came away with a very different perspective on the usefulness of the metaphor. I am undoubtedly somewhat biased since some of my own research explores the idea of a sacred text as a kind of cultural genotype, research which Peter Turchin referred to in his April 24 blog entry. Nevertheless, the stimulating conversations that we had at the workshop reinforced in my mind the usefulness of the idea, especially since the metaphor seems to provoke interesting and productive questions whether we are talking about religious social groups such as Christian congregations operating within the local scale of Binghamton, NY or whether we are talking about constitutional arrangements at the level of nations. I would therefore like to take some space to respond to Peter Turchin’s reflections on the workshop. In this first post, I will address some of his specific concerns, most particularly with his objection that cultural evolution is just “too different” from genetic evolution to make “rules as genotype” a useful construct. In my second post I will explore what I feel is likely common ground between our two positions and give a few reasons why I think the metaphor is not only useful, but necessary for a complete understanding of the evolution of rules.
I suppose I would first question the contention that, because human phenotypes are the result of interactions between genetic, epigenetic, cultural and symbolic influences (to borrow Jablonka and Lamb’s framework), it doesn’t make sense to talk “separately of ‘phenotype resulting from genetic influences’ or ‘phenotype resulting from cultural influences.’” As Robert Boyd argued during the workshop, not all possible sources of inheritance responsible for human behavior have equal explanatory power. While I disagree with him that epigenetic and symbolic systems have a negligible influence, I nevertheless feel that his point here is well made. For example, while humans have evolved genetic systems that make them susceptible to acquiring traits such as foot binding, the genetic influences on foot binding as a particular practice are likely minimal. Cultural systems give you far more explanatory power for that phenomenon. Foot binding, then, would provide an example of a phenotype stemming primarily from cultural influences. On the other hand, if we wanted to understand the evolution of the ice cream sundae, we would need to consider the genetic influences that allow lactose tolerance as well as the cultural influences that led to the development of ice cream. Discussion of what phenotypes result from, then, is largely a matter of choosing an appropriate level of analysis for the specific questions being addressed.
But what about the contention that cultural systems are just too different from genetic systems? I both agree and disagree with this argument. Yes, cultural systems are different. But we need to be careful to consider specifically what aspects make them different and whether those really negate the usefulness of the metaphor. For instance, Peter Turchin argues that one of the properties that make cultural systems too different is that cultural information can come from many sources and be stored in many ways. Yet, prokaryotic organisms are notoriously promiscuous in terms of their horizontal exchange of genetic information. Does this mean that there is no useful distinction between genotype and phenotype in bacteria? Of course not.
I suppose it could be argued that even though bacteria exchange genetic material with wild abandon, the information is still all encoded in DNA. This strikes me as problematic for a couple of reasons. First, many viruses store their genetic material as RNA rather than DNA, and this information can become embedded within the genomes of prokaryotes and eukaryotes. Prions, while not imparting information in the way we think of genes, can nevertheless be passed between individuals and affect phenotype. And, of course, epigenetics involves all manner of DNA modification, as well as changes in associated protein factors that affect transcription and translation. The picture of a uniform genetic material, then, seems to me not quite as clear as it is often portrayed. Second, most of the alternative kinds of cultural storage media cited, such as writing, YouTube clips, etc., are very recent developments. Rules, on the other hand, predate all of these. Regardless of where one stands on the debate about whether these rules would have been stored in people’s brains or in distributed social networks (I couldn’t quite follow that debate either), the fact remains that those rules were not stored in multiple forms of media for most of human evolutionary development. Even if multiple media for cultural storage were a valid strike against rules as genotype in the modern context, it couldn’t have been in the (pre)historical context. This argument may not serve to validate the concept of rules as genotype, but it does caution us to be careful not to extrapolate too far from the modern condition.
Where I share some of Peter Turchin’s anxiety, I think, is in the inherent danger of arguing from metaphor, a concern expressed by several scholars at the workshop. While I may disagree that the differences between cultural and genetic evolution are so great as to make the metaphor of rules as genotype meaningless, I can hardly deny that they are different in many important ways. The notion of a cultural genotype is one laden with all the theoretical baggage of genetic evolution, only some of which may apply to cultural systems. It is an evocative idea, but one that risks not only misapplying biological theories to social phenomena but also causing an unintentional blindness to novel processes at work in cultural evolution that have no counterparts in biological systems. Therefore, while I’m happy enough with the conversations sparked by the cultural genotype metaphor, I would like to see the discussion ultimately move toward more substrate-neutral language.
There is nothing particularly new about this call. Richard Dawkins is often credited with coining the term “universal Darwinism,” but he was by no means the first to extend general Darwinian principles to domains outside genetic processes. Moreover, a host of scholars have worked on the problem of generalizing Darwinism since the term was coined, and I think it would be productive to consider rules from within this broader framework. For instance, in Darwin’s Conjecture: The Search for General Principles of Social and Economic Evolution, Geoffrey Hodgson and Thorbjørn Knudsen develop the idea of generative replicators. According to their definition, genotypes are generative replicators by virtue of the fact that they 1) are causally implicated in their replication, 2) produce copies that are similar in their generative mechanisms to the parent, 3) transfer information and 4) are conditionally activated through their interaction with the environment. From Hodgson and Knudsen’s perspective, however, there is nothing unique about genotypes in fulfilling these four requirements. While they do not address rules in the specific sense employed by Elinor Ostrom’s IAD framework, they do explore the idea of whether judicial laws can be considered generative replicators and conclude that they can.
So is there still life in the rules as genotype metaphor? I certainly think so, and I think it is a conceptually tidy way of considering institutional rules, which are, after all, devised in order to affect human behavior. In other words, rules are conceived and adopted specifically as a way of creating the cultural phenotypes that Peter Turchin argues are the only important things to consider in cultural evolution. Anyone studying institutional development, however, is keenly aware that even the most carefully crafted rules often have unexpected outcomes, either failing to make any changes in human behavior at all or making profound, yet unintended changes, often with perverse consequences. This is because institutional rules interact within a complex biological and social environment, not unlike how genes interact with the environment to create phenotypes. Yes, the mechanisms by which phenotypes are generated are very different between genes and rules. Nevertheless, within a generalized Darwinian framework they are conceptually similar. Creating better institutional rules, then, becomes a process not unlike site-directed mutagenesis, in which changes to specific genes are made and the resultant changes (if any) on phenotype are measured. Instead of using a DNA template, policy makers use statutes. By carefully monitoring the behavioral changes that accompany new policy implementation, researchers can gain a deeper understanding of the mechanisms of cultural evolution. This is particularly true if they are able to take the social and biological context into account. I find it difficult to see how the same level of precision could be reached if the cultural “genotype” were disregarded as unimportant. I’m happy to call it something else, such as a generative replicator, but I’m not quite willing to give up on the general concept.
Last week I was in St. Louis, where I first participated in the Consilience Conference, then went on a day trip to Cahokia Mounds, and finally gave a talk at Washington University. This has been a very intense and productive trip, and I already see that I will need several blogs to cover various themes that came up while I was there. Today, a few words about the issues that I addressed in the two talks I gave.
In the first talk (you can see the slides posted here) I argued that we badly need to learn from history so that we can avoid past mistakes and design a better collective future. The way in which it has been done so far, however, has been completely flawed. When people look to history for lessons, they pick some historical event or situation that appears to resemble the current one, and make an argument by analogy. The worst are the ideologues who simply search through the historical record to find some event that will support their argument. Because the historical record is very rich, by looking hard enough you can always find some instance that will ‘prove’ your pet theory.
The example I used in the talk is the recent proliferation of books that compare America to the Roman Empire. Thus one author asks, “Are We Rome?” and goes on to warn about the fall of empires. But another author publishes a book, “Why America Is Not a New Rome,” so we can relax and not worry about ‘decline’ (whatever that means).
This is clearly the wrong approach to extracting lessons from history. Instead of looking for direct analogies, I argued in the talk, we have to approach this issue in an indirect way. We need to build general theories of social change and test them empirically on the whole historical record (or at least, on a reasonable statistical sample; no cherry-picking is allowed). Once we are reasonably certain we understand why things change the way they do, we can use our understanding of the mechanisms causing the change to ‘tweak’ them in ways that would generate desirable change, or avoid undesirable outcomes. Having a dynamic theory that takes into account nonlinear feedbacks between different processes is key, because it, at least, gives us a chance to foresee unintended consequences.
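To make the point about nonlinear feedbacks concrete, here is a deliberately minimal sketch (my own toy illustration, not any published cliodynamic model; the function name, variables, and parameter values are all hypothetical): two coupled quantities, where population pressure fuels sociopolitical instability and instability in turn suppresses growth. Even this tiny feedback loop produces oscillations rather than straight-line trends, which is exactly why extrapolation and argument-by-analogy fail.

```python
# Toy two-variable model with a nonlinear feedback (illustration only):
#   n -- population pressure (grows logistically, suppressed by instability)
#   i -- sociopolitical instability (grows when population pressure is high)
def simulate(steps=200, r=0.05, a=0.1, b=0.2, n0=0.4, i0=0.2):
    """Iterate the coupled difference equations and return the trajectory."""
    n, i = n0, i0
    traj = []
    for _ in range(steps):
        n, i = (n + r * n * (1 - n) - a * n * i,  # growth minus instability drag
                i + b * i * (n - 0.5))            # instability rises when n > 0.5
        traj.append((n, i))
    return traj

# The trajectory circles around an internal equilibrium instead of
# settling into a monotone trend -- boom-and-bust, not a straight line.
trajectory = simulate()
print(trajectory[0], trajectory[-1])
```

The specific equations are arbitrary; the point is that once two processes feed back on each other nonlinearly, the system's future cannot be read off from its recent past.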
This means that in addition to ‘History as Humanity’ (which I have nothing against; in fact, we need more of it), we need ‘History as Science’ or Cliodynamics.
I gave the second talk to the Math Department of Wash U (but it was also attended by biologists, anthropologists, and historians). While the first talk dealt with very general issues, the second one presented a detailed case study in cliodynamics. The question was: how did social evolution result in the rise of very large human societies? Marie Taris wrote a nice news story about it, Modeling for peace with Peter Turchin.
At the end of her piece, Marie wrote, “As for my part, I was left hoping that future generations may discover a more peaceful story about a superorganism we might call humans.” I don’t think that we can find a more peaceful story in the past – the evidence is very compelling – but nothing precludes us from writing a peaceful story for our future. And a clear-headed understanding of warfare and its role in human history is a necessary precondition for abolishing war. Paradoxically, warfare itself has accomplished part of the job, by providing a selection pressure for larger-scale complex societies that are internally cooperative and peaceful.
After all, the all-important glue that holds our wonderful complex societies together is cooperation. Cooperation evolved as a result of competition among societies, which historically took the form of warfare. But by driving the evolution of complex societies warfare also made our lives less violent and more secure. And if this trend continues, warfare may eventually put itself out of business.
This does not mean that we should simply wait for this to happen. Nothing prevents us from working in ways that will help this process along, and make it happen sooner, rather than later (as an example, see the War and Peace initiative at the Evolution Institute).
What creates political and social changes in a democracy? This is a question being asked a lot lately in the United States, largely because the degree of polarization of American politics is widely perceived to have increased dramatically since the 1960s, with each party becoming less tolerant of ideological diversity in its ranks and both parties therefore finding it harder to compromise during the actual process of governing.
Despite today’s gridlock, it is clear that there is social and political change over time. One need only look at an old movie or watch an episode of the TV series “Mad Men” to realize the immense changes in social attitudes that have taken place in the United States over the past several decades. As it turns out, evolutionary science has a lot to say about this paradox and provides useful insights as to the process and the time scale that will likely be required to break the current political gridlock.
First of all, we know that there are predispositions for certain political points of view that are rooted in personality types (which are analogous to a biological genotype), but they are predispositions that manifest themselves in differing ways depending on circumstances. For example, in the United States, people who are predisposed to highly value compassion will almost certainly be political liberals, and those who most highly value individual autonomy and responsibility will almost certainly become conservatives. The underlying personality types are present in all societies over time, but their proportions can vary to some extent, in a manner analogous to epigenetic expression, as events that occur in a society during a generation’s formative years tend to leave an indelible mark on lifelong political views. For a detailed exploration of how this mechanism works in practice, see Jonathan Haidt’s recent book The Righteous Mind.
In a democratic society, the result is that there are, for example, always liberals and conservatives, but the mix can shift, even though there are probably irreducible minima of each type. So, in the 1960s, the Vietnam War and the counterculture produced two groups in the Baby Boomer generation: one that embraced their inner hippie and a distrust of authority, and one that was appalled at the threats to society that they saw in the “turn on, tune in, drop out” lifestyle. The cultural battles that played out over the following decades in the political arena between these two different perceptions are clearly one reason that it is so hard to find compromise between the groups as their members try to function as adult political leaders.
This kind of historical, generational “lock-in” of political attitudes is reflected in American political scientists’ observations that there are periodic “wave” elections in the United States that break with past patterns and result in new, stable patterns of voting. Political attitudes that are crystallized in such elections tend to be quite impervious to near-term subsequent disruption. By way of example, FDR’s election in 1932 ushered in a Democratic dominance of Presidential elections that ended only in 1968.
It also seems very clear that there are important effects of group identification that influence how the ideological “genotypes” in a society react to events. David Hackett Fischer’s book Albion’s Seed traces how the various waves of emigration from culturally different parts of the British Isles still influence the culture of their descendants today, including their politics. The so-called Scots-Irish, for example, brought anti-authoritarian and military traditions with them that had been forged by generations of living in the relatively lawless border between Scotland and England and in the unstable Plantation of Ulster experiment. These traditions live on, particularly in the mountainous parts of the South, and make their descendants even now disproportionately the source of American military recruits. Likewise, there is no doubt that African-American political attitudes differ sharply from those of American society as a whole, attributable to the very different life in the United States that people of color, especially those descended from slaves, have experienced.
So, how does change happen? It is clearly not the result of changes of heart among large numbers of adults during an election. Political arguments are poor dinner table topics because, like religion, they are more likely to trigger intense arguments than camaraderie if the participants are not already, as we say, “like-minded.” Likewise, most political arguments when a society is polarized are about energizing one’s base rather than reaching out to the other side. There is a reason that people seek political conversations that reinforce their views rather than challenge them. Fox News and its liberal counterparts provide comfort and talking points to their audiences, and sometimes wholly different versions of the facts when needed to reinforce those views.
Instead, it seems clear that the political consensus of the public at any moment is a complex result of the basic predispositions of its citizens, mediated by experiences, with the cumulative effects of the specific experiences of its cohesive groups driving the mix. In this regard, it is a very typical evolutionary outcome, where a population can be quite adaptive even though individuals are not. It is reassuring that American political institutions seem to be well enough designed to have accommodated profound changes in the underlying society while providing continuity. The timetable can be long, but change does happen, in a pattern that looks suggestively like other forms of cultural evolution, continuously but sometimes faster and sometimes slower.
Thanks to all who left comments on my previous post. This discussion has been very useful and led me to adjust my views. Here’s how I would formulate the issues now:
(1) The ‘phenotype’ is determined jointly by (i) genetically stored information, (ii) culturally stored information, and (iii) the environment. It doesn’t make sense to speak separately of a ‘phenotype resulting from genetic influences’ or a ‘phenotype resulting from cultural influences.’ Culture can affect morphology (foot binding in China) and skin coloration (tattoos). Genes affect cultural behaviors (e.g., political leanings toward liberalism versus conservatism). Different traits are affected by different mixtures of genes, culture, and environment, but there are no sharp boundaries.
(2) Genetic information stored within an organism is its genotype, but it is also important to know how much genetic variability there is in a population.
(3) Cultural information can be stored in a variety of media. Initially it resided only inside people’s heads; later texts and images became very important; and today (or in the very near future) most cultural information will reside in electronic form. It doesn’t seem to matter how it is stored. People who want to call cultural information a ‘cultural genotype’ are welcome to do so, but I prefer not to, because:
(i) culture, unlike genes, can be stored in a variety of media
(ii) what’s important is not the cultural information stored in a single person (the most direct analog of the genotype), but the collective store of culture. So, if you want to push the analogy, human groups, not human individuals, have a cultural genotype
(iii) there are several other differences between genetic and cultural kinds of information that make this analogy not very useful (as detailed in my previous post)
A week ago I was at a workshop Rules as Genotypes in Cultural Evolution (check out the Focus Article by Elinor Ostrom that set the stage for the meeting). One major topic of discussion was what might be the cultural analog of genotype.
In biology, the phenotype is the set of observable traits and characteristics of an organism: morphology, coloration, behavior, etc. Phenotypic traits are determined jointly by the organism’s environment and its genotype, or genetically encoded information. Multicellular organisms like us store genetic information in DNA (although things are somewhat complicated by the possibility of epigenetic transmission of acquired traits).
The distinction between the phenotype and the genotype has been enormously productive in evolutionary biology, so folks studying human cultural evolution have proposed that we need to find cultural analogs of the genotype and phenotype. One such scheme that I find fairly coherent (I actually teach it in my class on cultural evolution) is the one formulated by Richerson and Boyd (see their Not By Genes Alone; Rob Boyd participated in the workshop and argued in favor of this view). Richerson and Boyd define culture very broadly, as socially transmitted information. The cultural phenotype is pretty clear – it is the behavioral traits of humans, understood broadly (including collective behaviors such as dance and rituals; knowledge, philosophy, and science; tools, books, clothing, tattoos, domesticated animals, technology, etc). Unlike biological traits in most organisms, human behaviors are affected not only by genes and the environment, but also by culture. Because both genetic and cultural information is transmitted across generations, this theory is also known as the ‘dual inheritance theory.’
So what is the cultural genotype? Boyd and Richerson argue that humans had culture before there were any technological means, such as computer memory chips or written books, to store cultural information. The only place where cultural information could be stored in prehistoric times was people’s brains. So the cultural genotype is the information stored in human brains.
Fine so far, but other participants in the workshop had different views. Some objected to the idea that any information is ‘stored’ in the brain (I never figured out why, though). Others, like David Sloan Wilson, proposed very different views of cultural genotypes. Wilson, together with his graduate student Yasha Hartberg, argued that a sacred text can be thought of as a cultural genotype, because it “consists of many ‘genes’ in the form of stories, commandments, and other texts. A sacred text such as the Christian Bible is replicated with high fidelity and has a potent effect on behavior, which are two requirements of a cultural genotype.”
This view also sounds reasonable, but can cultural ‘genes’ be both neural circuits in the brain and words inked on parchment? After all, biological genes come in only one variety, DNA (let’s ignore viruses and prions for simplicity). This leads me to question whether the whole idea of a ‘cultural genotype’ is a useful concept.
After all, what gets transmitted is not the ‘cultural genotype,’ whatever that is, but the cultural phenotype. Dawkins’s image of memes jumping from brain to brain is a striking metaphor. On further thought, however, I think it is a silly, and certainly not a useful, idea. We are not telepathic! Cultural information is transmitted by people observing the behaviors of others and then attempting to imitate them, with a greater or lesser degree of success. We don’t even know whether the observer/learner encodes the cultural information with precisely the same configuration of neural circuits (if that’s how we store information in our brains) as the one in the brain of the person being imitated. (I believe that Richerson and Boyd made this point before me.) In fact, most likely the same behavior can be encoded by a multitude of very different circuitry configurations. Cultural evolution is Lamarckian, but the distinction between genotype and phenotype is really useful only in a Mendelian framework.
So what really matters is the actual observed behaviors, not how they are encoded in brains. That’s a relief, because we really don’t know how information is stored in the human brain. As Rob Boyd stressed during the workshop, cultural evolution is currently in its pre-Mendelian phase. But I would argue that while it would certainly be interesting to know how brains work, this knowledge is rather academic for the scientific study of cultural evolution. Yes, we need to know about various biases affecting learning and transmission of cultural information, but psychologists are doing a pretty good job investigating such mechanisms experimentally. I am not against brain research, I am just saying that we don’t need to wait for new great insights from neuroscience to study cultural evolution productively.
In any case, in this day and age we have an alternative cultural genotype, whose physical characteristics are completely understood – digital information: books, technical manuals, audiotapes, videos, etc. Any human behaviors can be recorded and transmitted to others. You can now learn how to fix a leaky faucet or study an esoteric martial art on YouTube.
The genotype/phenotype distinction is not a useful way to think about cultural evolution because cultural evolution is too different from genetic evolution. Cultural evolution is Lamarckian, while genetic evolution is Mendelian (but both are Darwinian). Cultural traits can be both discrete and continuous, while genetic traits are discrete. Cultural information is transmitted ‘asexually.’ Finally, in cultural evolution what ultimately matters is not what an individual person does, but what groups of people do.
As I wrote in yesterday’s blog, Robert Bellah’s Religion in Human Evolution is a complex book that addresses many roles of religion in human social evolution. One theme that I was particularly interested in was the influence of religious developments on the evolution of human egalitarianism, especially during the Axial Age. The starting point for approaching this question is what is sometimes called the ‘U-shaped curve of despotism’ in human evolution. We know that our closest relatives, the chimps and gorillas, live in fairly ‘despotic’ or inegalitarian societies. The chimps, for example, establish linear dominance hierarchies, in which alpha males get better food and greater access to females. We don’t know for sure whether human ancestors also lived in similarly inegalitarian societies, but it seems likely.
In contrast, as was argued by Christopher Boehm in Hierarchy in the Forest: The Evolution of Egalitarian Behavior, human hunter-gatherers, who lived in small-scale societies before agriculture, were fiercely egalitarian. A high degree of equality does not simply happen because hunter-gatherers are poor and cannot accumulate much wealth (chimps also cannot accumulate wealth). No, equality requires active maintenance. People living in small-scale societies possess numerous norms and institutions designed to control ‘upstarts’ – those who attempt to set themselves up as alpha males so that they can gain control of an unfair share of resources (including females). The sanctions deployed against upstarts range from gossip and ridicule to ostracism and, ultimately, assassination.
Thus, until c.10,000 years ago, before agriculture was invented, the human evolutionary trend was that of increasing egalitarianism. The adoption of agriculture, however, enabled the rise of large-scale societies organized as states and empires with highly unequal distributions of power, wealth, and social status. In other words, the trend to greater equality reversed itself. What accounts for this U-turn? Why did humans allow inequality to develop?
The answer apparently is that the U-turn was a side effect of the transition from small-scale to large-scale societies. Small-scale societies of hunter-gatherers were integrated by face-to-face sociality. Such a diffuse, non-centralized social organization was well suited to maintaining an egalitarian ethos. However, once the size of a cooperating group increases beyond 100–200 people, even gigantic human brains are overwhelmed by the demands of face-to-face sociality (this is the argument made by Robin Dunbar). Shifting from diffuse, uncentralized social organization to hierarchical organization (such as chains of command) allowed evolution to break through the upper limit on society size imposed by face-to-face sociality. A member of a hierarchically organized group needs to have face-to-face interactions with only a few individuals: a superior and several subordinates. Such links can connect everybody in a group of arbitrarily large size. The group size grows by adding additional hierarchical levels.
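The arithmetic behind this argument is easy to make explicit. In the sketch below (the function name and the span-of-control value of five are my own illustrative assumptions, not figures from Dunbar or Bellah), each member maintains face-to-face ties with only one superior and a handful of subordinates, yet the total size of the hierarchy grows geometrically with the number of levels:

```python
# A command hierarchy is a tree: one ruler at the top, and each person
# supervising `span` direct subordinates. With `levels` hierarchical
# levels, the total membership is 1 + span + span**2 + ... + span**(levels-1).
def max_group_size(span: int, levels: int) -> int:
    """Total members of a complete hierarchy with the given span of control."""
    return sum(span ** i for i in range(levels))

# Each added level multiplies society size by roughly the span of control,
# while every individual's face-to-face network stays tiny (span + 1 people).
for levels in range(1, 8):
    print(levels, max_group_size(span=5, levels=levels))
```

With a span of control of five, four levels already exceed the ~150-person limit of face-to-face sociality, and seven levels reach nearly twenty thousand people; this is the sense in which hierarchy removes the upper bound on group size.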
So far so good, but the great downside of hierarchical organization is that it inevitably leads to inequality. Once you allow a leader to order everybody around, he will use that power to feather his own nest. This is sometimes known as the iron law of oligarchy.
I have argued elsewhere that conditions of endemic warfare between human groups create enormous selection pressures for larger group size (“God is on the side of big battalions”) and for effective (which means centralized) military organizations. Under such conditions, emergence of centralized military hierarchies becomes virtually inevitable. The result is the rise of increasingly complex centralized societies – chiefdoms, complex chiefdoms, and archaic states.
As Bellah notes, archaic states were characterized by an enormous fusion of power in the person of the ruler. Almost invariably the rulers of such states were ‘divinized’, that is, considered to be gods as well as kings. They had literally the power of life and death over their subjects. One frequent characteristic of early centralized societies was the practice of massive human sacrifice. This naked pursuit of power and voracious appetite for consuming resources is reflected in such characterizations of rulers as a land shark who ‘eats’ the island (in Hawaii), or a big rat that gobbles the people’s millet (in archaic China).
Thus, although highly effective on the battlefield, a centralized military hierarchy has several drawbacks as a general way of organizing societies. A society cannot really be held together by force alone. Worse, great inequities resulting from rapacious military chiefs and their retinues alienate large segments of the population. As a result, early despotic chiefdoms and archaic states were very fragile and frequently did not outlast their founders.
The tension between the human preference for equitable outcomes and the need for centralized hierarchy brought about the “legitimation crisis of the early state” (this idea was borrowed by Bellah from Jürgen Habermas). The tension became particularly acute during the Axial Age (c.800–200 BCE), for reasons discussed in my review of Bellah’s book and other publications. One central argument in Bellah’s book is that the new world religions and philosophies that arose during the Axial Age began the long job of building more equitable societies. A large part of this evolution was imposing limits on the power of rulers and replacing power based on naked force with legitimate authority.
This is a very interesting idea. Further, whatever the explanation, it seems clear to me that empirically the post-axial period saw a general trend of human evolution away from the peak of despotism, which was achieved in archaic and early-axial states and empires. In particular, such extreme forms of inequality as human sacrifice, slavery, and distinctions in legal status (such as that between nobles and commoners) have been gradually disappearing over the last 2.5 thousand years. God-kings have gone out of fashion, and what royalty remains has been relegated to an entirely ceremonial function. The spread of democracy over the last couple of centuries has imposed more effective restraints on rulers. The only exception to this overall trend towards greater egalitarianism is that economic inequality remains as large as ever (and, in fact, has been growing over the last three decades in, for example, the US). Still, overall it appears that the peak of despotism (massive concentration of power within the hands of the ruler and the ruling clique) took place in archaic states.
If this is correct, and I believe it is, then the implication is that the evolution of egalitarianism in humans was not just a U-shaped curve, but a more complex trajectory. After ‘zigging’ to greater inequity during the pre-axial period, the trajectory then ‘zagged’ back to greater equity in the last 2.5 thousand years. I propose that we call this evolutionary pattern the Z-curve of human egalitarianism.
Nearly two hundred years ago Alexis de Tocqueville wrote about the exceptional ability of Americans to cooperate in solving problems that required concerted collective action. This capacity for cooperation apparently lasted into the post-World War II era, but numerous indicators suggest that during the last 3-4 decades it has been unraveling.
Pants are the standard item of clothing for people, especially men belonging to the Western civilization. Why not a kilt, a robe, a tunic, a sarong, or a toga?