War and Human Evolution


Has human evolution and progress been propelled by war? 

The question is not an easy one to ask, not least because war is not merely one of the worst but arguably the worst thing human beings inflict on one another, comprising murder, collective theft, and, almost everywhere but in the professional militaries of Western powers, and there only quite recently, mass and sometimes systematic rape. Should it be the case that war was somehow the driving mechanism, the hidden motor behind what we deem most good about human life, as if we were trapped in some militaristic version of Mandeville's Fable of the Bees, then we might truly feel ourselves in the grip, and the plaything, of something like the demon of Descartes' imagination, though rather than taunting us by making the world irrational it would curse us to a world of permanent moral incomprehensibility.

When it comes to human evolution, we should probably begin by asking how long war has been part of the human condition. Almost all of our evolution took place while we lived as hunter-gatherers; indeed, we have had agriculture (and everything that comes with it) for only 5-10% of our history. From a wide-angle view the agricultural, industrial, and electronic revolutions might all be considered a single set piece, which is perhaps one of the reasons philosophy, religion, and literature from millennia ago still often resonate with us. Within the full scope of our history, Socrates and Jesus lived only yesterday.

The evidence here isn't good for those who hoped our propensity for violence is a recent innovation. Over the past generation a whole slew of archaeological scholars has challenged the notion of the "peaceful savage" born in the radiating brilliance of the mind of Jean-Jacques Rousseau. Back in 1755 Rousseau had written his Discourse on Inequality, perhaps the best-timed book ever. Right before the industrial revolution was to start, Rousseau made the argument that it was civilization, development, and the inequalities they produced that were the source of the ills of mankind, including war. He had created an updated version of the myth of Eden, which became in his hands mere nature, without any God in the story. For those who would fear humanity's increasingly rapid technological development, and the victory of our artificial world over the natural one we were born from and into, he offered an avenue of escape: to shed not only the new industrial world but the agricultural one that preceded it. Progress, in the mind of Rousseau and those inspired by him, was a Sisyphean trap; the road to paradise lay backwards in time, in the return to primitive conditions.

During the revived Romanticism of the 1960s, researchers like Frans de Waal set out to prove Rousseau right. Primitive, hunter-gatherer societies, in this view, were not warlike and had only become so through contact with advanced, warlike societies such as our own. In recent decades, however, this idea of prehistoric and primitive pacifism has come under sustained assault. Steven Pinker's The Better Angels of Our Nature: Why Violence Has Declined is the most popularly known of these challenges to the idea of primitive innocence that began with Rousseau's Discourse, but in many ways Pinker was merely building off of, and making more widely known, scholarship done elsewhere.

One of the best examples of this scholarship is Lawrence H. Keeley's War Before Civilization: The Myth of the Peaceful Savage. Keeley makes a pretty compelling argument that warfare was almost universal across pre-agricultural, hunter-gatherer societies. Rather than being dominated by the non-lethal, ritualized "battle", as de Waal had argued, war for such societies was a high-casualty and total affair.

It is quite normal for conflict between tribes in close proximity to be endemic. Protracted periods of war result in casualty rates that exceed those of modern warfare, even the total war between industrial states seen only in World War II. Little distinction is made in such pre-agricultural forms of conflict between women and children and men of fighting age. All are killed indiscriminately in wars made up, not of the large set battles that characterize warfare in civilizations after the adoption of agriculture, but of constant ambushes and attacks that aim to kill those who are most vulnerable.

Pre-agricultural war is what we now call insurgency or guerrilla war. Americans who remember Vietnam, or who look with clear eyes at the Second Iraq War, know how effective this form of war can be against even the most technologically advanced powers. Although Keeley acknowledges the effectiveness and brutal efficiency of guerrilla war, he does not make the leap to suggest that, in going up against insurgencies, advanced powers face in some sense a more evolutionarily advanced rival, honed by at least 100,000 years of human history.

Keeley concludes that as long as there have been human beings there has been war, yet he reaches much less dark conclusions from the antiquity of war than one might expect. Human aversion to violence and war, disgust with its consequences, and despair at its, more often than not, unjustifiable suffering are near universal, whatever a society's level of technological advancement. He writes:

“… warfare whether primitive or civilized involves losses, suffering, and terror even for the victors. Consequently, it was nowhere viewed as an unalloyed good, and the respect accorded to an accomplished warrior was often tinged with aversion.”

“For example, it was common the world over for the warrior who had just killed an enemy to be regarded by his own people as spiritually polluted or contaminated. He therefore had to undergo a magical cleansing to remove this pollution. Often he had to live for a time in seclusion, eat special food or fast, be excluded from participation in rituals and abstain from sexual intercourse.” (144)

Far from the understanding of war found in pre-agricultural societies being ethically shallow, psychologically unsophisticated, or blind to war's moral dimension, their ideas about it are as sophisticated as our own. What the community asks of the warrior when sending him into conflict and asking him to kill others is to become spiritually tainted, and ritual is the way he becomes reintegrated into his community. This whole idea that soldiers require moral repair much more than praise for their heroics and memorials is one we are only now rediscovering.

Contrary to what Freud wrote to Einstein when the latter asked him why human beings seemed to show such a propensity for violence, we don't have a "death instinct" either. Violence is not an instinct, or as Azar Gat wrote in his War in Human Civilization:

Aggression does not accumulate in the body by a hormone loop mechanism, with a rising level that demands release. People have to feed regularly if they are to stay alive, and in the relevant ages they can normally avoid sexual activity all together only by extraordinary restraint and at the cost of considerable distress. By contrast, people can live in peace for their entire lives, without suffering on that account, to put it mildly, from any particular distress. (37)

War and violence are not an instinct but a tactic, thank God, for if they were an instinct we would all be dead. Still, if war is a tactic it has been a much-used one, and it seems that throughout our hundred-thousand-year history it is one we have used with less rather than greater frequency as we have technologically advanced. One might wonder why this has been the case.

Azar Gat's theory of our increasing pacifism is very similar to Pinker's: war is a very costly and dangerous tactic. If you can get what you are after without it, peace is normally the course human beings will follow. Since the industrial revolution, nature has come under our increasing command, and war and expansion are no longer very good answers to the problems of scarcity; thus the frequency of war was already in decline before we invented atomic weapons, which turned full-scale war into pure madness.

Yet a good case can be made that war played a central role in laying down the conditions in which the industrial revolution, the spark of the material progress that has made war an unnecessary tactic, occurred, and that war helped push technological advancement forward. The Empire won by the British through force of arms was the lever that propelled their industrialization. The Northern United States industrialized much more quickly as a consequence of the Civil War, and Western pressure against pre-industrial Japan and China pushed those countries to industrialize. War research gave us both nuclear power and computers; the space race and satellite communications were the result of geopolitical competition. The Internet was created by the Pentagon, and the contemporary revolution in bionic prosthetics grew out of the horrendous injuries suffered in the wars in Iraq and Afghanistan.

The thing is, just as is the case for pre-agricultural societies, it is impossible to disentangle war itself from the capacity for cooperation between human beings, a capacity that has increased enormously since the widespread adoption of agriculture. Any increase in human cooperative capacity increases a society's capacity for war, and, oddly enough, many increases in human war-fighting capacity lead to increases in cooperative capacity in the civilian sphere. We are caught in a tragic technological logic, one largely invisible to those who think technology of itself leads to global prosperity and peace if we just use it for the right ends. Increases in our technological capacity, even if pursued for non-military ends, will grant us further capacity for violence, and it is not clear that abandoning military research wouldn't derail or greatly slow technological progress altogether.

One perhaps doesn't normally associate the quest for better warfighting technologies with transhumanism, the idea "that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology", but one should. It's not only the case that, as mentioned above, the revolution in bionic prosthetics emerged from war; the military is at the forefront of almost every technology on transhumanists' wish list for future humanity, from direct brain-machine and brain-to-brain interfaces to neurological enhancements of cognition and emotion. And this doesn't even capture the military's revolutionary effect on other aspects of futurists' long-held dreams, such as robotics.

How the world's militaries have been, and are, moving us in the direction of "post-human" war is the subject of Christopher Coker's Warrior Geeks: How 21st Century Technology Is Changing the Way We Fight and Think About War, and it is with this erudite and thought-provoking book that most of the rest of this post will be concerned.

Warrior Geeks is a book about much more than the "geekification" of war, the fact that much of what now passes for war is computerized, many of the soldiers themselves now video gamers stuck behind desks, even if the people they kill remain flesh and blood. It is also a book about the evolving human condition, about how war through science is changing that condition, and about what this change might mean for us. For Coker sees the capabilities many transhumanists dream of finding their first application on the battlefield.

There is the constantly monitored “quantified-self”:

And within twenty years much as engineers analyze data and tweak specifications in order to optimize a software program, military commanders will be able to bio-hack into their soldiers' brains and bodies, collecting and correlating data the better to optimize their physical and mental performance. They will be able to reduce distractions and increase productivity and achieve "flow"- the optimum state of focus. (63-64)

And then there is "augmented reality", where the real world is overlaid with the digital, as in the ideal the US Army and others are reaching for: to tie servicemen together in a "cybernetic network" where soldiers:

…. don helmets with a minute screen which when flipped over the eyes enables them to see the entire battlefield, marking the location of both friendly and enemy forces. They can also access the latest weather reports. In future, soldiers will not even need helmets at all. They will be able to wear contact lenses directly onto their retinas. (93)

In addition, the military is investigating neurological enhancements of all sorts:

The ‘Persistence in Combat Program’ is well advanced and includes research into painkillers which soldiers could take before going into action, in anticipation of blunting the pain of being wounded. Painkillers such as R1624, created by scientists at Rinat Neuroscience, use an antibody to keep in check a neuropeptide that helps transmit pain sensations from the tissues to the nerves. And some would go further in the hope that one day it might be possible to pop a pill and display conspicuous courage like an army of John Rambos. (230)

With 1 in 8 soldiers returning from combat suffering from PTSD, some in the military hope that the scars of war might be undone with neurological advances that would allow us to "erase" memories.

One might go on with examples of exoskeletons that give soldiers superhuman endurance and strength, or of tying together the most intimate military unit, the platoon, with neural implants that allow direct brain-to-brain communication, a kind of "brain-net" that an optimistic neuroscientist like Miguel Nicolelis seems not to have anticipated.

Coker, for his part, sees these developments in a very dark light: a move away from what have been eternal features of both human nature and war, such as courage and character, and towards a technologically over-determined and ultimately less than human world. He is very good at showing how our reductive assumptions end up compressing the full complexity of human life into something that gives us at least the illusion of technological control. It is not our machines that are becoming like us, but we who are becoming like our machines; understanding the underlying genetic and neural mechanisms is not necessarily a gain in understanding and sovereignty over our nature, but a constriction of our sense of ourselves.

Do we gain from being able to erase the memory of an event in combat that leads to our moral injury, or lose something in no longer being able to understand and heal it through reconciliation and self-knowledge? Is this not in some way a more reduced and blind answer to the experience of killing and death in war than that of the primitive tribespeople mentioned earlier? Luckily, reductive theories of human nature based on our far less than complete understanding of neuroscience or genetics almost always end up showing their inadequacies after a period of friction with the sheer complexity of being alive.

The dangers as I see them are twofold. First, there is always the danger that we will act under the assumption that we have sufficient knowledge and that the solution to some problem is therefore "easy" if we just double down and apply some technological or medical fix. Given enough time, the complexity of reality is likely to rear its head, though not before we have caused a good deal of damage in the name of a chimera. Second, there is the danger that we will use our knowledge to torque the complexity of the human world into something simpler and ultimately more shallow in the quest for a reality we deem manageable. There is something deeper and more truthful in the primitive response to the experience of killing than in erasing the memory of such experiences.

One of the few problems I had with Coker's otherwise incredible book is, sadly, a very fundamental one. In some sense, Warrior Geeks is not so much a critique of the contemporary ways and trend lines of war as a bio-conservative argument against transhumanist technologies that uses the subject of war as a backdrop. Echoing Thucydides, Coker defines war as "the human thing" and sees its transformation towards post-human forms as ultimately dehumanizing. Thucydides may have been the most brilliant historian ever to live, but his claim that in war our full humanity is made manifest is true only in a partial sense. Almost all higher animals engage in intraspecies killing, and we are not the worst culprit. It wasn't human beings who invented war, or who have been fighting it the longest, but the ants. As Keeley puts it:

But before developing too militant a view of human existence, let us put war in its place. However frequent, dramatic, and eye-catching, war remains a lesser part of social life. Whether one takes a purely behavioral view of human life or imagines that one can divine mental events, there can be no dispute that peaceful activities, arts, and ideas are by far more crucial and more common even in bellicose societies. (178)

If war holds any human meaning over and above areas that truly distinguish us from animals, such as our culture, or for that matter experiences we share with other animals, such as love, companionship, and care for our young, it is that at the intimate level, such as is found in platoons, it is one of the few arenas where what we do truly has consequences and where we are not superfluous.

We have evolved for culture, love, and care as much as we ever have for war, and during most of our evolutionary history the survival of those around us really was dependent on what we did. War, in some cases, is one of the few places today where this earlier importance of the self can be made manifest. Or, as the journalist Sebastian Junger has put it, many men eagerly return to combat not so much because they are addicted to its adrenaline high as because being part of a platoon in a combat zone is one of the few places where what a twenty-something man does actually counts for something.

As for transhumanism, or continuing technological advancement and war, the solution to the problem starts with realizing there is no ultimate solution to the problem. To embrace human enhancement advanced by the spear-tip of military innovation would be to turn the movement into something resembling fascism, that is, militarism combined with the denial of our shared human worth and destiny. Yet to completely deny that research into military applications often leads to leaps in technological advancement would be to blind ourselves to the facts of history. Exoskeletons developed as armor for our soldiers will very likely lead to advances for paraplegics and the elderly, not to mention persons in perfect health, just as a host of military innovations have led to benefits for civilians in the past.

Perhaps one could say that if we sought to drastically reduce military spending worldwide, as in any case I believe we should, we would have sufficient resources to devote to civilian technological improvements for their own sake. The problem here is the one often missed by technological enthusiasts who fail to take into account the cruel logic of war: advancements in the civilian sphere alone would lead to military innovations. Exoskeletons developed for paraplegics would very likely end up being worn by soldiers somewhere, or as Keeley puts it:

Warfare is ultimately not a denial of the human capacity for social cooperation, but merely the most destructive expression of it. (158)

Our only good answer to this is really not technological at all; it is to continue to shrink the sphere of human life where war "makes sense" as a tactic. War is only chosen as an option when the peaceful roads to material improvement and respect are closed. We need to ensure both that everyone is able to prosper in our globalized economy and that countries and peoples are encouraged, have ways to resolve potentially explosive disputes, and are afforded the full respect that should come from belonging to what Keeley calls the "intellectual, psychological, and physiological equality of humankind" (179), to which one should add moral equality as well. The prejudice that we who possess scientific knowledge and greater technological capacity than many of our rivals are in some sense fundamentally superior is one of our most pernicious dangers; we are all caught in the same trap.

For two centuries a decline in warfare has been possible because, even taking into account the savagery of the World Wars, the prosperity of mankind has been improving. A new crisis of truly earthshaking proportions would occur should this increasing prosperity be derailed, most likely from its collision with the earth's inability to sustain our technological civilization becoming truly universal, or from the exponential growth upon which this spreading prosperity rests being tapped out and proving a short-lived phenomenon; it is, after all, only two centuries old. Should rising powers such as China and India, along with peoples elsewhere, come to the conclusion not only that they were to be denied full development to the state of advanced societies, but that the system was rigged in favor of the long-industrialized powers, we might wake up to discover that technology hadn't been moving us towards an age of global unity and peace after all, but that we had been lucky enough to live in a long interlude in which the human story, for once, was not also the story of war.

 

Wired for Good and Evil

Battle in Heaven

It seems that for almost as long as we have been able to speak, human beings have been arguing over what, if anything, makes us different from other living creatures. Mark Pagel's recent book Wired for Culture: The Origins of the Human Social Mind is just the latest incarnation of this millennia-old debate, and as has always been the case, the answers he comes up with have implications for our relationship with our fellow animals and, above all, our relationship with one another, even if Pagel doesn't draw many such implications himself.

As is the case with so many of the ideas regarding human nature, Aristotle got there first. A good bet to make when anyone comes up with a really interesting idea regarding what we are as a species is that either Plato or Aristotle had already come up with it, or at least discussed it. Which is either a sign of how little progress we have made in understanding ourselves, or symptomatic of the fact that the fundamental questions, for all our gee-whiz science and technology, remain important even if ultimately never fully answerable.

Aristotle had classified human beings as unique in the sense that we are a zóon politikón, variously translated as a social animal or a political animal. His famous quote on this score, of course, being: "He who is unable to live in society, or who has no need because he is sufficient for himself, must be either a beast or a god." (Politics Book I)

He did recognize that other animals were social in a similar way to human beings, animals such as storks (who knew), but especially social insects such as bees and ants. He even understood bees (the Greeks were prodigious beekeepers) and ants as living under different types of political regimes. Misogynist that he was, he thought bees lived under a king instead of a queen, and he thought the ants more democratic and anarchical. (History of Animals)

Yet Aristotle’s definition of mankind as zóon politikón is deeper than that. As he says in his Politics:

That man is much more a political animal than any kind of bee or any herd animal is clear. For, as we assert, nature does nothing in vain, and man alone among the animals has speech….Speech serves to reveal the advantageous and the harmful and hence also the just and unjust. For it is peculiar to man as compared to the other animals that he alone has a perception of good and bad and just and unjust and other things of this sort; and partnership in these things is what makes a household and a city.  (Politics Book 1).

Human beings are shaped in a way animals are not by the culture of their community, the particular way their society has defined what is good and what is just. We live socially in order to live up to our potential for virtue, ideas that speech alone makes manifest and allows us to resolve. Aristotle didn't just mean talk but discussion, debate, and conversation, all guided by reason. Because he doesn't think women and slaves are really capable of such conversations, he excludes them from full membership in human society. More on that at the end, but now back to Pagel's Wired for Culture.

One of the things Pagel thinks makes human beings unique among all other animals is the fact that we alone are hyper-social, or ultra-social. The only animals that even come close are the eusocial insects, Aristotle's ants and bees. As E.O. Wilson pointed out in his The Social Conquest of Earth, it is the eusocial species, including us, which pound-for-pound absolutely dominate life on earth. In the spirit of a 1950s B-movie, all the ants in the world added together are actually heavier than us.

Eusociality makes a great deal of sense for insect colonies and hives given how closely related these groups are; indeed, in many ways they might be said to constitute one body, and just as cells in our body sacrifice themselves for the survival of the whole, eusocial insects often do the same. Human beings, however, are different. After all, if you put your life at risk by saving the children of non-relatives from a fire, you've done absolutely nothing for your reproductive potential, and may even have lost your chance to reproduce. Yet sacrifices and cooperation like this are performed by human beings all the time, though the vast majority in a less heroic key. We are thus hypersocial, willing to cooperate with and help non-relatives in ways no other animals do. What gives?

Pagel's solution to this evolutionary conundrum is very similar to Wilson's, though he uses different lingo. Pagel sees us as naturally members of what he calls cultural survival vehicles; we evolved to be raised in them, and to be loyal to them, because that was the best, perhaps the only, way of securing our own survival. These cultural survival vehicles themselves became the selective environment in which we evolved. Societies where individuals hoarded their cooperation or refused to make sacrifices for the survival of the group were, over time, driven out of existence by societies whose individuals had such qualities.

The fact that we have evolved to be cooperative and giving within our particular cultural survival vehicle Pagel sees as the origin of our angelic qualities: our altruism, heroism, and general benevolence. No other animal helps its equivalent of little old ladies cross the street, or at least not to anything like the extent we do.

Yet if having evolved to live in cultural survival vehicles has made us capable of being angels, for Pagel it has also given rise to our demonic capacities. No other species is as giving as we are, but then again, no other species has invented torture devices and practices like the iron maiden or breaking on the wheel either. Nor does any other species go to war with fellow members of its species so commonly or so savagely as human beings.

Pagel thinks that our natural benevolence can be easily turned off under two circumstances, both of which represent a threat to our particular cultural survival vehicle.

We can be incredibly cruel when we think someone threatens the survival of our group by violating its norms. There is a sort of natural proclivity in human nature to burn "witches", which is why we need to be particularly on guard whenever some part of our society is labeled as vile and marked for punishment; The Scarlet Letter should be required reading in high school.

There is also a natural proclivity for members of one cultural survival vehicle to demonize and treat cruelly their rivals, a weakness we need to be particularly aware of in judging the heated rhetoric in the run-up to war.

Pagel also breaks with Richard Dawkins over the latter's view of the evolutionary value of religion. Dawkins, in his essay "Viruses of the Mind" and subsequently, has tended to view religion as a net negative: being a suicide bomber is not a successful reproductive strategy, nor is celibacy. Dawkins has tended to see ideas, his "memes", as in a way distinct from the person they inhabit. Like genes, his memes don't care all that much about the well-being of the person who has them; they just want to make more of themselves, a goal that easily conflicts with the goal of genes to reproduce.

Pagel quite nicely closes the gap between memes and genes which had left human beings torn in two directions at once. Memes need living minds to exist, so any meme that had too large a negative effect on the reproductive success of genes should have dug its own grave quite quickly. But if genes are being selected for because they protect not the individual but the group, one can see how memes that might adversely impact an individual's chance of reproductive success can nonetheless aid in the survival of the group. For religions to have survived for so long and to be so universal, they must be promoting group survival rather than being a detriment to it, and because human beings need these groups to live, religion must on average, though not in every particular case, be promoting the survival of the individual at least to reproductive age.

One of the more counterintuitive, and therefore interesting, claims Pagel makes is that what really separates human beings from other species is our imitativeness as opposed to our inventiveness. It is the reverse of the trope found in the books and films of the 60s and 70s built around the 1963 novel La Planète des singes, The Planet of the Apes. If you remember, the apes there are able to imitate human technology but aren't able to invent it: "monkey see, monkey do". Perhaps it should be "human see, human do", for if Pagel is right our advantage as a species really lies in our ability to copy one another. If I see you doing something enough times, I am pretty likely to be able to repeat it, though I might not have a clue what it was I was actually doing.

Pagel thinks that all we need is innovation through minor tweaks, combined with our ability to rapidly copy one another, for technological evolution to take off. In his view geniuses are not really required for human advancement, though they might speed the process along. He doesn't really apply his ideas to modern history, but such a view could explain what the scientific revolution, which corresponded with the widespread adoption of the printing press, was really all about. The printing press allowed ideas to be disseminated more rapidly, whereas the scientific method proved a way to sift those that worked from those that didn't.

This makes me wonder if something rather silly like YouTube hasn't further revved up cultural evolution in this imitative sense. What I've found is that instead of turning to "elders" or "experts" when I want to do something new, I just turn to YouTube and find some nice or ambitious soul who has filmed themselves going through the steps.

The idea that human beings are the great imitator has some very unanticipated consequences for the rationality of human beings vis-à-vis other animals. Laurie Santos of the Comparative Cognition Laboratory at Yale has shown that human beings can be less, not more, rational than animals when it comes to certain tasks, due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren't thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure done by another human being even when able to work the puzzle out independently. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. One more sign that human beings can be thankful for at least some of their irrationality.

This might have implications for machine intelligence that neither Pagel nor Santos draws. Machines might be said to be rational in the way animals are rational, but this does not mean that they possess something like human thought. The key to making artificial intelligence more human-like might lie in making it, in some sense, less rational, or as Jeff Stibel has stated it, for machine intelligence to be truly human-like it will need to be "loopy", "iterative", and able to make mistakes.

To return to Pagel, he also sees implications for human consciousness and our sense of self in the idea that we are "wired for culture". We still have no idea why human beings have a self, or think they have one, and we have never been able to locate the sense of self in the brain. Pagel thinks it's pretty self-evident why a creature enmeshed in a society of like creatures with less than perfectly aligned needs would evolve a sense of self. We need one in order to negotiate our preferences and our rights.

Language, for Pagel, is not so much a means for discovering and discussing the truth as it is an instrument wielded by the individual to advance his own interest through persuasion and sometimes outright deception. Pagel's views seem to be seconded by Steven Pinker, who in a fascinating talk back in 2007 laid out how the use of language seems to be structured by the type of relationship it represents. In a polite dinner conversation you don't say "give me the salt", but "please pass me the salt", because the former is a command of the type most found in a dominance/submission relationship. The language of seduction is not "come up to my place so I can @#!% you", but "would you like to come up and see my new prints." Language is a type of constant negotiation and renegotiation between individuals, one that is necessary because relationships between human beings are not ordinarily based on coercion, where no speech is necessary besides the grunt of a command.

All of which brings me back to Aristotle and to the question of evil, along with the sole glaring oversight of Pagel's otherwise remarkable book. The problem moderns often have with the brilliant Aristotle is his justification of slavery and the subordination of women. That is, Aristotle clearly articulates what makes a society human: our interdependence, our capacity for speech, the way our "telos" is to be shaped and tuned by the culture in which we are raised like a precious instrument. At the same time he turns around and allows this society to compel by force the actions of those deliberately excluded from the opportunity for speech, an exclusion he characterizes as the result of a "natural" defect.

Pagel's cultural survival vehicles miss this sort of internal oppression, which is distinct both from the moral anger of the community against those who have violated its norms and from its violence towards those outside of it. Aren't there miniature cultural survival vehicles, in the form of one group or another, that foster their own genetic interest through oppression and exploitation? It got me wondering whether any example of what a more religious age would have called "evil", as opposed to mere "badness", couldn't be understood as a form of either oppression or exploitation, and I am unable to come up with one.

Such sad reflections dampen Pagel’s optimism towards the end of Wired for Culture that the world’s great global cities are a harbinger of us transcending the state of war between our various cultural survival vehicles and moving towards a global culture. We might need to delve more deeply into our evolutionary proclivities to fight within our own and against other cultural survival vehicles inside our societies as much as outside before we put our trust in such a positive outcome for the human story.

If Pagel is right and we are wired, if we have co-evolved, to compete in the name of our own particular cultural survival vehicle against other such communities, then we may have in a very real sense co-evolved with and for war. A subject I will turn to next time by jumping off another very recent and excellent book.

The Danger of Using Science as a God Killing Machine

Gravitational Waves

Big news this year for those interested in big questions, or potentially big news, as long as the findings hold up. Scientists at the Harvard-Smithsonian Center for Astrophysics may have come as close as we ever have to seeing the beginning of time in our universe. They may have breached our former boundary in peering backwards into the depths of time, beyond the Cosmic Microwave Background, the light echo of the Big Bang, taking us within an intimate distance of the very breath of creation.

Up until now we have not been able to peer any closer to the birth of our universe than 300,000 or so years' distance from the Big Bang. One of the more amazing things about our universe, to me at least, is that given its scale and the finite speed of light, looking outward also means looking backward in time. If you want to travel into the past, stand underneath a starry sky on a clear night and look up. We rely on light and radiation for this kind of time travel, but get too close to the beginning of the universe and the scene becomes blindingly opaque. Scientists need a whole new way of seeing to look back further, and they may just have proven that one of these new ways of seeing actually works.

What the Harvard-Smithsonian scientists hope they have seen isn't light or radiation, but the imprint of gravitational waves predicted by the controversial Inflationary Model of the Big Bang, which claims that the Big Bang was followed by a quick burst in which the universe expanded incredibly fast. One prediction of the Inflationary Model is that this rapid expansion would have created ripples in spacetime, gravitational waves, and it is the indirect effect of these waves that the scientists hope they have observed.

If they're right, then in one fell swoop they may have given an evidential boost to a major theory of how our universe was born, and given us a way of peering deeper than we ever have into the strobiloid of time, a fertile territory, we should hope, for testing, revising, and creating theories about the origin of our cosmos, its nature, and its ultimate destiny. Even more tentatively, the discovery might also allow physicists to move closer to understanding how to unify gravity with quantum mechanics, the holy grail of physics since the early 20th century.

Science writer George Johnson may, therefore, have been a little premature when he recently asked:

As the effort to understand the world has advanced, the low-hanging fruits (like Newton’s apple) have been plucked. Scientists are reaching higher and deeper into the tree. But with finite arms in an infinite universe, are there limits — physical and mental — to how far they can go?

The answer is almost definitely yes, but, it seems, not today, although this very discovery may ironically take us closer to Johnson's limits. The reason is that one of the many hopes for this new way of seeing is that it might allow us to discover experimental evidence for theories that we live in a multiverse, ours just one of perhaps an infinite number of universes. Yet with no way to actually access these universes, we might find ourselves, in some sense, stuck in the root-cap of our "local" spacetime and the tree of knowledge rather than grasping towards the canopy. But for now, let's hope, the scientists at Harvard-Smithsonian have helped us jump up to an even deeper branch.

For human beings in general, but for Americans and American scientists in particular, this potential discovery should have resulted in almost universal celebration and pride. If the discovery holds, we are very near to being able to "see" the very beginning of our universe. American science had taken it on the chin when Europeans, using their Large Hadron Collider (LHC), were able to discover the Higgs boson, a fundamental particle that had been dubbed in the popular imagination, with perhaps the most inaccurate name in the history of science, "the God particle". Americans would very likely have gotten there first had their own, even more massive particle collider, the Superconducting Super Collider (SSC), not been canceled back in the 1990s under a regime of shortsighted public investment and fiscal misallocation that continues to this day.

Harvard and even more so the Smithsonian are venerable American institutions. Indeed, the Smithsonian is in many ways a uniquely American hybrid, not only in terms of its mix of public and private support but in its general devotion to the preservation, expansion, and popularization of all human knowledge, a place where science and the humanities exist side by side and in partnership, and which serves as an institution of collective memory for all Americans.

As is so often the case, any recognition that the potential discovery by the scientists at the Harvard-Smithsonian Center for Astrophysics was something to be shared across the plurality of society was blown almost immediately, for right out of the gate it became yet another weapon in the current atheist-versus-religious iteration of the culture war. It was the brilliant physicist and grating atheist Lawrence Krauss who took this route in his summary of the discovery for the New Yorker. I didn't have any problem with his physics, for the man is a great physicist and science writer, but he of course took the opportunity to get in a dig at the religious.

For some people, the possibility that the laws of physics might illuminate even the creation of our own universe, without the need for supernatural intervention or any demonstration of purpose, is truly terrifying. But Monday’s announcement heralds the possible beginning of a new era, where even such cosmic existential questions are becoming accessible to experiment.

What should have been a profound discovery for all of us, and a source of conversations between the wonderstruck, is treated here instead as another notch on the New Atheists' belt, another opportunity to get into the same stupid fight, not so much with the same ignorant people as with caricatures of those people. Sometimes I swear a lot of this rhetoric, on both sides of the theist and atheist debate, is just an advertising ploy to sell more books, TV shows, and speaking events.

For God's sake, the Catholic Church has held the Big Bang to be consistent with Church doctrine since 1951. Pat Robertson openly professes belief in evolution and the Big Bang. Scientists should be booking spots on EWTN and the 700 Club to present this amazing discovery, for the real enemy of science isn't religion, it's ignorance. The real enemy is not some theist Christian, Muslim, or Jew who holds God ultimately responsible, somehow, for the Big Bang, but the members of the public, religious or not, who think the Big Bang is just a funny name for an even funnier TV show.

Science will always possess a gap in its knowledge into which those so inclined will attempt to stuff their version of a creator. If George Johnson is right, we may reach a place where that gap, rather than moving with scientific theories that every generation probe ever deeper into the mysteries of nature, stabilizes as we come up against the limits of our knowledge. God, for those who need a creating intelligence, will live there.

There is no doubt something forced and artificial in this "God of the gaps", but theologians of the theistic religions have found it a game they need to play, clinging as they do to the need for God to be a kind of demiurge and ultimate architect of all existence. Other versions of God, where "he" is not such an engineer in the sky, God as perhaps love, or relationship, or process, or metaphor, or the ineffable, would better fit the version of reality given us by science, and thus be more truthful, but the game of the gaps is one theologians may ultimately win in any case.

Religions and the persons who belong to them will either reconcile their faith with the findings of science or they will not, and though I wish they would reconcile, so that religions would hold within them our comprehensive wisdom and acquired knowledge as they have done in the past, their doing so is not necessary for religions to survive or even for their believers to be “rational.”

For the majority of religious people, for the non-theologians, it simply does not matter whether the Big Bang was inflationary or not, or even whether there was a Big Bang at all. What matters is that they are able to deal with loss and grief, that they can orient themselves morally to others, and that they are surrounded by a mutually supportive community that acts in the world in the same way; that is, that they can negotiate our human world.

Krauss, Dawkins, et al. often take the position that they are administering hard truths and that people who cling to something else need to be snapped out of their child-like illusions. Hard truths, however, are a relative thing. Some people can actually draw comfort from the "meaninglessness" of their life, which science seems to show them. As fiction author Jennifer Percy wrote of her astronomy-loving father:

This brand of science terrified me—but my dad found comfort in going to the stars. He flees from the messy realm of human existence, what he calls "dysfunctional reality" or "people problems." When you imagine that we're just bodies on a rock, small concerns become insignificant. He keeps an image above his desk, taken by the Hubble space telescope, that from a distance looks like an image of stars—but if you look more closely, they are not stars, they are whole galaxies. My dad sees that, imagining the tiny earth inside one of these galaxies—and suddenly, the rough day, the troubles at work, they disappear.

Percy felt something to be missing in her father's outlook, which was as much a shield against the adversity of human life as any religion, but one that she felt worked only by being blind to the actual world around him. Percy found this missing human element in literature, which drew her away from a science career and towards the craft of fiction.

The choice of fiction rather than religion or spirituality may seem odd at first blush, but it makes perfect sense to me. Both are ways of compressing reality by telling stories which allow us to make sense of our experience, something that, despite our idiosyncrasies, is always part of the universal human experience. Religion, philosophy, art, music, literature, and sometimes science itself are all means of grappling with the questions, dilemmas, and challenges life throws at us.

In his book Wired for Culture, Mark Pagel points out that the benefits of religion and art to our survival must be much greater than they would be if religion were, in Dawkins' terms, "a virus of the mind" that uses us for its purposes and to our detriment. Had religion been predominantly harmful, or even indifferent to human welfare, it would be very hard to explain why it is so universal across human societies. We have had religion and art around for so long because they work.

Unlike religion, Percy's beloved fiction is largely free from the critique of the New Atheists, who demand that science be the sole method of obtaining truth. This is because fiction, by its very nature, makes no truth claims on the physical world, nor does it ask us to do much of anything in response to it. Religion comes in for criticism by the New Atheists wherever it appears to make truth claims on material existence, including its influence on our actions, but religion does other things as well, which are best seen by looking at the two forms of fiction that most resemble religion's purposeful storytelling, that is, fairy tales and myths.

My girls love fairy tales and I love reading them to them. What surprised me when I re-encountered these stories as a parent was just how powerful they were, while at the same time being so simple. I hadn't really had a way of understanding this until I picked up Bruno Bettelheim's The Uses of Enchantment: The Meaning and Importance of Fairy Tales. As long as one can get through Bettelheim's dated Freudian psychology, his book shows why fairy tales hit us like they do, why their simplification of life is important, and why, as adults, we need to outgrow their black-and-white thinking.

Fairy tales establish the foundation for our moral psychology. They teach us the difference between good and evil, between aiming to be a good person and aiming to do others harm. They teach us to see our own often chaotic emotional lives as a normal expression of the human condition, and also, and somewhat falsely, teach us to never surrender our hope.

Myths are a whole different animal from fairy tales. They are the stories of once-living religions that have become detached from, and now float freely above, the practices and rituals that once made them, in a sense, real. The stories alone, however, remain deep with meaning, and much of this meaning, especially when it comes to the Greek myths, has to do with the tragic nature of human existence. What you learn from myths is that even the best of intentions can result in something we would not have chosen (think of Pandora, who freed the ills of the world from the trap of their jar out of compassion), or that sometimes even the good can be unjustly punished (as with Prometheus the fire-bringer chained to his rock).

Religion is yet something different from either fairy tales or myths. The “truth” of a religion is not to be found or sought in its cosmology but in its reflection on the human condition and the way it asks us to orient ourselves to the world and guides our actions in it. The truth of a faith becomes real through its practice- a Christian washing the feet of the poor in imitation of Christ makes the truth of Christianity in some sense real and a part of our world, just as Jews who ask for forgiveness on the Day of Atonement, make the covenant real.

Some atheists, the philosopher Alain de Botton most notably, have taken notice that the atheist accusation against religion, that it's a fairy tale adults are suckered into believing, is a conversation so exhausted it is no longer interesting. In his book Religion for Atheists he tries to show what atheists and secular persons can learn from religion: things like a sense of community and compassion, religion's realistic, and therefore pessimistic, view of human nature, religion's command of architectural space, and its holistic approach to education, which is especially focused on improving the moral character of the young.

Yet de Botton's suggestions for how secular groups and persons might mimic the practices of religion, such as his "stations of life" rather than "stations of the cross", fell flat with me. There is something in the enchantment of religion, which resembles the enchantment of fairy tales, that fails rather than succeeds by running too close to reality, though it cannot stray too far from reality either. There is a genius in organic things which emerge from collective effort, unplanned, and over long stretches of time, that cannot be recreated deliberately without resulting in a cartoonish and Disneyfied version of reality, or conversely something so true to life and uncompressed that we cannot view it without instinctively turning away in horror or falling into boredom.

I personally do not have the constitution to practice any religion and look instead for meaning to literature, poetry, and philosophy, though I look at religion as a rich source of all three.  I also sometimes look to science, and do indeed, like Jennifer Percy’s father, find some strange comfort in my own insignificance in light of the vastness of it all.

The danger of me, or Krauss, or Dawkins, or anyone else trying to turn this taste for insignificance into the only truth, the one shown to us by science, is that we turn what is really a universal human endeavor, the quest to know our origins, into a means of stopping rather than starting a conversation we should all be parties to, and so threaten the broad social support needed to fund and see through our quest to understand our world and how it came to be. For the majority of people (outside of Europe) continue to turn to religion to give their lives meaning, which might mean that if we persist in treating science as a God-killing machine, or better, a meaning-killing machine, we run the risk of those who need such forms of meaning in order to live turning around and killing science.

Waiting for World War III

The Consequences of War, Peter Paul Rubens

Everyone alive today owes their life to a man most of us have never heard of, and whom I didn't even know existed until last week. On September 26, 1983, just past midnight, Soviet lieutenant colonel Stanislav Petrov was alerted by his satellite early-warning system that an attack by an American ICBM was underway. Normal protocol should have resulted in Petrov giving the order to fire Russian missiles at the US in response. Petrov instead did nothing, unable to explain to himself why the US would launch only one missile rather than a massive first strike in the hope of knocking out Russia's capacity to retaliate. Then something that made greater sense: more missiles appeared on Petrov's screen, yet he continued to do nothing. And then more. He refused to give the order to fire, and he waited, and waited.

No news ever came in that night of the devastation of Soviet cities and military installations due to the detonation of American nuclear warheads, because, as we know, there never was such an attack. What Petrov had seen was a computer error, an electronic mirage, and we are here, thank God, because he believed in the feelings in his gut over the data illusion on his screen.

That is the story as told by Christopher Coker in his book Warrior Geeks: How 21st Century Technology is Changing the Way We Fight and Think About War. More on that book another time, but now to myself. Around the time Petrov was saving us through his morally induced paralysis, I was a budding cold warrior, a passionate supporter of Ronald Reagan and his massive defense buildup. I had drawn up detailed war scenarios calculating precisely the relative strengths of the two opposing power blocs, and was a resident expert in Soviet history and geography. I sincerely thought World War Three was inevitable in my lifetime. I was 11 years old.

Anyone even slightly younger than me has no memory of living in a world where you went to sleep never certain we wouldn't blow the whole thing up overnight. I was a weird kid, as I am a weird adult, and no doubt hypersensitive to the panic induced by too close a relationship with modern media. Yet if the conversations I have had with people in my age group over the course of my lifetime are any indication, I was not totally alone in my weirdness. Other kids too would hear jets rumbling overhead at night and wonder if the sounds were missiles coming to put an end to us all, were haunted by movies like The Day After or inspired by Red Dawn. Other kids staged wars in their neighborhoods, fighting against robot-like Russians.

During the early 1980s world war wasn't something stuck in a black-and-white movie, but a brutal and epic thing our grandfathers told us about, that some of our teachers had fought in. It was a reality that, with the end of detente and in light of the heated rhetoric of the Reagan years, felt as much a part of the future as of the past. It was not just a future of our imaginations, and being saved by Stanislav Petrov wasn't the only time we dodged the bullet in those tense years.

Whatever the fear brought on by 9-11, this anxiety that we might just be fool enough to completely blow up our own world is long gone. The twenty-three years since the fall of the Soviet Union have been, in this sense, some of the most blessed in human history, a time when the prospect of the big powers pulverizing each other to death has receded from the realm of possibility. I am starting to fear its absence cannot last.

Perhaps it's Russian aggression against Ukraine that has revived my pre-teen anxieties: its seizure of Crimea, its veiled threats to conquer the Russophone eastern regions of the country, Putin's jingoistic speech before the Kremlin. Of course, of course, I don't think world war will come from the crisis in Ukraine, no matter how bad it gets there. Rather, I am afraid we were wrong to write the possibility of war between the big powers out of human history permanently. That one of these times, when we do not expect it, 10 or 20 or 100 years from now, one of these dust-ups will result in actual clashes between the armed forces of the big powers, a dangerous game that, the longer we played it, would hold the real risk of ending in the very nightmare we avoided the night Petrov refused to fire.

Disputes over which the big powers might come to blows are not hard to come up with. There is China's dispute with Japan, the Philippines, others, and ultimately the United States, over islands in the Pacific; there is China's lingering desire to incorporate Taiwan; there is the legacy conflict on the Korean peninsula; clashes between India and China; disputes over resources and trade routes through an Arctic opened up by global warming; or possible future fights over unilateral geoengineering. Then there are largely unanticipated frictions such as, as we now see, Russia's panic-induced aggression against Ukraine, which brings it back into collision with NATO.

Still, precise predictions about the future are a game for fools. Hell, I can still remember when "experts" in all seriousness predicted a coming American war with Japan. I am aiming, rather, for something more general. The danger I see is that the big powers start to engage in increasingly risky behavior precisely because they think world war is now impossible. That all of us will have concluded that limited and lukewarm retaliation is the only rational response to aggression given that the existential stakes are gone. As a foolish eleven-year-old I saw the risk of global catastrophe as worth taking if the alternative was totalitarian chains. I am an adult now, hopefully much wiser, and with children of my own, whose lives I would not risk to save Ukraine from dismemberment along ethnic and linguistic lines or to stop China from asserting its rising power in the Pacific. I am certainly not alone in this, but I fear such sanity will make me party to an avalanche. That the decline of the fear that states may go too far in aggressive action may lead them to go so far that they accidentally spark a scale of war we have deemed inconceivable.

My current war pessimism over the long term might also stem from the year I am in, 2014, a solemn centenary of the beginning of the First World War. Back when I was in high school, World War I was presented with an air of tragic determinism. It was preordained, or so we were taught, the product of an unstable alliance system, growing nationalism and imperialism. It was a war that was in some sense "wanted" by those who fought it. Historians today have called this determinism into question. Christopher Clark, in his massive The Sleepwalkers, details just how important human mistakes and misperceptions were to the outbreak of the war, the degree to which opportunities to escape the tragedy were squandered because no one knew just how enormous the tragedy they were unleashing would become.

Another historian, Holger Afflerbach, in his essay The Topos of Improbable War in Europe Before 1914, shows how few were the voices in Europe that expected a continental or world war. Even the German military, which wanted the conflict, was more afraid, right up until the war broke out and failed to end quickly, that the conflict would be averted at the last minute than that it would escalate. The very certainty that a world war could not be fought, in part because of the belief that modern weapons had become too terrible, led to risk taking and a refusal to compromise, which made such a war more likely as the crisis that began with the assassination of Archduke Franz Ferdinand unfolded.

If World War II can be considered an attempt by the aggrieved side to re-fight the First World War, what followed Japan's surrender was very different, a difference perhaps largely due to one element- the presence of nuclear weapons. What dissuaded big states in the Cold War era from directly fighting one another was the likelihood that the potential costs of doing so would be too high relative to the benefits that would accrue from any victory. The cost in a nuclear age was destruction itself.

Yet, for those costs to be an effective deterrent the threat of their imposition had to be real. Both sides justified their possible suicide in a nuclear holocaust on the grounds that they were engaged in a Manichean struggle where the total victory of the opposing side was presented as being in some sense worse than the destruction of the world itself. Yes, I know this was crazy, yet, by some miracle, we're still here, and whether largely despite or because of this insanity we cannot truly know.

Still, maybe I’m wrong. Perhaps I should not be so uncertain over the reason why there have been no wars between the big powers in the modern era, perhaps my anxiety that the real threat of nuclear annihilation might have been responsible is just my eleven year old self coming back to haunt me. It’s just possible that nuclear weapons had nothing to do with the long peace between great powers. Some have argued that there were other reasons big states have seemingly stopped fighting other big states since the end of World War II, that what changed were not so much weapons but norms regarding war. Steven Pinker most famously makes this case in his Better Angels of Our Nature: Why Violence has Declined.

Sadly, I have my doubts regarding Pinker’s argument. Here’s me from an earlier piece:

His evidence against the “nuclear peace” is that more nations have abandoned nuclear weapons programs than have developed such weapons. The fact is perhaps surprising but nonetheless accurate. It becomes a little less surprising, and a little less encouraging in Pinker’s sense, when you actually look at the list of countries who have abandoned them and why. Three of them: Belarus, Kazakhstan and the Ukraine are former Soviet republics and were under enormous Russian and US pressure- not to mention financial incentives- to give up their weapons after the fall of the Soviet Union. Two of them- South Africa and Libya- were attempting to escape the condition of being international pariahs. Another two- Iraq and Syria had their nuclear programs derailed by foreign powers. Three of them: Argentina, Brazil, and Algeria faced no external existential threat that would justify the expense and isolation that would come as a consequence of  their development of nuclear weapons and five others: Egypt, Japan, South Korea, Taiwan and Germany were woven tightly into the US security umbrella.

I am sure you have noticed that Ukraine is on that list. Had Ukraine not given up its nuclear weapons it is almost certain that it would not have seen Crimea seized by the Russians, or found itself facing Moscow's threat to split the country in two.

A little more on Pinker: he spends a good part of his over 800-page book showing us just how savagely violent human societies were in the past. Tribal societies had homicide rates that rival or exceed those of the worst inner cities. Humans are natural Hobbesians given to "a war of all against all," but, in his view, we have been socialized out of such violence, not just as individuals, but as states.

Pinker's idea of original human societies being so violent, and of civilization as a kind of domestication of mankind away from this violence, left me with many unanswered questions. If we were indeed so naturally violent, how or why did we establish societies in the first place? Contrary to his claim, didn't the institutionalization of violence in the form of war between states actually make our situation worse? How could so many of us cringe from violence even at a very early age if we were naturally wired to be killers?

I couldn't resolve any of these questions until I had read Mark Pagel's Wired for Culture. What Pagel shows is that most of us are indeed naturally "wired" to be repulsed by violence; the problem is that this repulsion has a very sensitive off switch. It can be turned off when our community is threatened, either by those who have violated community norms- so-called moral anger- or when violence is directed towards rival groups outside the community. In such cases we can be far more savage than the most vicious of animals, with our creativity and inventiveness turned to the expression of cruelty.

Modern society is one that has cordoned off violence. We don't have public hangings anymore, and we cringe at the death of civilians at the hands of our military (when we are told about them). Yet this attitude towards violence is so new we cannot reasonably expect it has become permanent.

I have no intention of picking on the Russians, and Bush's "Axis of Evil" speech would have done just as well or better here, but to keep things current: Putin, in his bellicose oration before the Kremlin, pressed multiple sides of Pagel's violence "off switch":

He presented his opponents as an evil rival “tribe”:

However, those who stood behind the latest events in Ukraine had a different agenda: they were preparing yet another government takeover; they wanted to seize power and would stop short of nothing. They resorted to terror, murder and riots. Nationalists, neo-Nazis, Russophobes and anti-Semites executed this coup. They continue to set the tone in Ukraine to this day.

And called for the defense of the community and the innocent:

Let me say one other thing too. Millions of Russians and Russian-speaking people live in Ukraine and will continue to do so. Russia will always defend their interests using political, diplomatic and legal means. But it should be above all in Ukraine’s own interest to ensure that these people’s rights and interests are fully protected. This is the guarantee of Ukraine’s state stability and territorial integrity.

What this should show us, and Americans certainly shouldn't need a lesson here, is that norms against violence (though violence in Ukraine has so far, thankfully, been low) can be easily turned off given the right circumstances. Putin, by demonizing his Ukrainian opponents, and by claiming that Russia would stand in defense of the rights of the Russian minority in Ukraine, was rallying the Russian population for a possible escalation of violence should his conditions not be met. His speech was met with a standing ovation. It is this ease with which our instincts for violence can be turned on that suggests Pinker may have been too optimistic in thinking war was becoming a thing of the past, if we are depending on a change in norms alone.

Then there is sheer chance. Pinker's theory of the decline of violence relies on Gaussian bell curves, averages over long stretches of time, but if we should have learned anything from Nassim Taleb, his black swans, and the financial crisis, it's the fat tails that should worry us most: the occurrence of a highly improbable event that flips our model of the world, and the world itself, on its head and collapses the Gaussian curve. Had Stanislav Petrov decided to fire his ICBMs rather than sit on his hands, Pinker's decline of violence, up to that point, would have looked like statistical noise masking the movement towards the real event- an unprecedented expression of human violence that would have killed the majority of the human race.
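To make the fat-tail point concrete, here is a minimal sketch in Python- my own illustration, not Pinker's or Taleb's actual math. It compares the probability of a "six sigma" event under a Gaussian with the same event under a fat-tailed Student's t distribution; the threshold of 6 and the two degrees of freedom are arbitrary choices made purely for illustration.

```python
# A toy comparison of tail risk: Gaussian vs. a fat-tailed distribution.
# The threshold and degrees of freedom are arbitrary illustrative choices.
from scipy import stats

threshold = 6  # an event six "standard units" out from the mean

gaussian_tail = stats.norm.sf(threshold)   # P(X > 6) under a normal distribution
fat_tail = stats.t.sf(threshold, df=2)     # P(X > 6) under Student's t with 2 degrees of freedom

print(f"Gaussian tail probability:   {gaussian_tail:.2e}")
print(f"Fat-tailed tail probability: {fat_tail:.2e}")
print(f"The 'impossible' event is ~{fat_tail / gaussian_tail:,.0f} times more likely with fat tails")
```

Under the bell curve such an event is effectively impossible; under fat tails it is merely rare- which is the sense in which long-run averages can hide exactly the events that matter most.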

Like major financial crises that happen once in a century, or natural disasters that appear over longer stretches of time, anything we have once experienced can happen again, with the probability of recurrence often growing over time. If human decision making is the primary factor involved, as it is in economic crises and war, the probability of such occurrences may increase as the generation whose errors in judgment brought on the economic crisis or war recedes from the scene, taking their acquired wisdom and caution with them.

And we are losing sight of this possibility. Among military theorists, as opposed to defense contractors, Colin S. Gray is one of an extreme minority trying to remind us that war between the big powers is not impossible, as he writes in Another Bloody Century:

If we grant, with some reservations, that there is a trend away from interstate warfare, there hovers in the background the thought that this is a trend that might be reversed abruptly. No country that is a significant player in international security, not least the United States, has yet reorganized and transformed its regular military establishment to reflect the apparent demise of 'old' (interstate) wars and the rise of new ones.

Gray, for one, does not think that we'll see a replay of the 20th century's world wars, with massive industrial armies fighting it out on land and sea. The technology today is simply far too different from what it was in the first half of the last century. War has evolved and is evolving into something very different, but interstate war will likely return.

We might not see a recurrence of world war but merely skirmishes between the big powers. This would be more a return to normalcy than anything else. World wars, involving the whole of society, especially civilians, are a very modern phenomenon, dating perhaps no earlier than the French Revolution. In itself a return to direct clashes between the big powers would be very bad, but not so bad as slippage into something much worse, something that might happen because escalation had gone beyond the point of control.

The evolution of 21st century war may make such great power skirmishes more likely. Cyber-attacks have, so far at least, come with few real-world consequences for the attacking country. As was the case with the German officer corps in World War I, professional soldiers, who have replaced draftees and individuals seeking a way out of poverty as the basis of modern militaries, seem likely to be more eager to fight so as to display their skills, and may in time be neurologically re-engineered to deal with the stresses of combat. It is at least conceivable that professional soldiers might be the first class to have full legal access to the technological and biological enhancements being made possible by advances in prosthetics, mind-computer interfaces and neuroscience.

Governments as well as publics may become more willing to engage in direct conflict as relatively inexpensive and expendable drones and robots replace airmen and soldiers. Ever more of warfighting might come to resemble a videogame, with soldiers located far from the battlefield. Both war and the international environment in which wars are waged have evolved and are evolving into something very unlike what we have experienced since the end of the Cold War. The farther out it comes, the more likely that the next big war will be a transhumanist or post-human version of war, and there are things we can do now that might help us avoid it- subjects I will turn to in the near future.

How the Web Will Implode

Jeff Stibel is either a genius when it comes to titles, or he has one hell of an editor. The name of his recent book, Breakpoint: Why the web will implode, search will be obsolete, and everything you need to know about technology is in your brain, was about as intriguing as I have found a title, at least since The Joy of x. In many ways the book delivers on the promise of its title, making an incredibly compelling argument for how we should be looking at the trend lines in technology, and it is chock-full of surprising and original observations. The problem is that the book then turns round to come up with almost the opposite conclusions one would expect. It wasn't the Internet that imploded but my head.

Stibel's argument in Breakpoint is that all throughout nature and neurology, economics and technology, we see a common pattern: slow growth rising quickly to an exponential pace followed by a rapid plateau, a "breakpoint" at which the rate of increase collapses, or even a sharp decline occurs, and future growth slows to a snail's pace. One might think such breakpoints were a bad thing for whatever it is undergoing them, and when they are followed by a crash they usually are, but in many cases it just ain't so. When ant colonies undergo a breakpoint they are keeping themselves within a size that their pheromonal communication systems can handle. The human brain grows rapidly in connections between birth and age five, after which it loses a great deal of those connections through pruning- a process that allows the brain to discard useless information and solidify the types of knowledge it needs, such as the common language being spoken in its environment.
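As a rough way to picture the growth pattern Stibel is describing- and this is my own toy model, not anything taken from Breakpoint- a logistic curve captures the shape: near-exponential growth early on that flattens abruptly once a carrying capacity, the "breakpoint," is reached. The capacity, rate, and midpoint below are made-up numbers chosen only for illustration.

```python
# Toy logistic ("S-curve") growth: exponential at first, then a plateau at the breakpoint.
# All parameter values are illustrative assumptions, not figures from Stibel's book.
import math

def network_size(t, carrying_capacity=1_000_000, growth_rate=0.25, midpoint=40):
    """Logistic growth: size at time t, capped by the carrying capacity."""
    return carrying_capacity / (1 + math.exp(-growth_rate * (t - midpoint)))

for t in range(0, 81, 10):
    print(f"t={t:>2}  size ≈ {network_size(t):>12,.0f}")
```

The point of the toy model is just the shape: the curve looks unstoppable right up until the moment it flattens.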

His thesis leads Stibel to all sorts of fascinating observations. Here are just a few: precision takes a huge amount of energy, and human brains are error-prone because they are trading this precision for efficiency. The example is mine, not Stibel's, but it captures his point: if I did the math right, IBM's Watson consumed about 4,000 times as much energy as its human opponents, and the machine, as impressive as it was, couldn't drive itself there, or get its kids to school that morning, or compose a love poem about Alex Trebek. It could only answer trivia questions.
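For what it's worth, here is the back-of-the-envelope arithmetic behind my "about 4,000 times" figure. The power numbers are my own rough assumptions- on the order of 80-90 kW for the hardware Watson ran on, and the commonly cited 20 watts or so for a human brain- not anything from Stibel.

```python
# Back-of-the-envelope check of the ~4,000x energy figure.
# Both power figures are rough assumptions, not measurements from the book.
watson_power_watts = 85_000  # assumed: dozens of servers' worth of hardware
brain_power_watts = 20       # commonly cited ballpark for the human brain

ratio = watson_power_watts / brain_power_watts
print(f"Watson ≈ {ratio:,.0f} times the power draw of a human brain")  # roughly 4,000x
```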

Stibel points out that the energy consumption of computers and the web is approaching what are likely hard energy ceilings. Continuing on its current trajectory, the Internet will in relatively short order consume 20% of the world's energy, about the same share as the percentage of the body's calories needed to run the human brain- a prospect that makes the Internet's growth rate under current conditions ultimately unsustainable, unless we really are determined to fry ourselves with global warming.

Indeed, this 20% mark seems to be a kind of boundary for intelligence, at least if the human brain is any indication. In an observation that was, for me at least, new and surprising, Stibel points out that the human brain has been steadily shrinking and losing connections over time. Pound for pound, our modern brain is actually "dumber" than that of our caveman ancestors. (I am not sure how this gels with the Flynn effect.) Big brains are expensive for bodies to maintain, and their caloric ravenousness relative to other essential bodily functions must not be favored by evolution, otherwise we'd see more of our lopsided brain-to-body ratio in nature. As we've been able to offload functions to our tools and to our cultures, evolution has been shedding some of this cost in raw thinking prowess and slowly moving us back towards a more "natural" ratio.

If the Internet is going to survive it's going to have to become more energy efficient as well. Stibel sees this already happening. Mobile has allowed targeted apps rather than websites to become the primary way we get information. Cloud computing allows computational prowess and memory to be distributed and brought together as needed. The need for increased efficiency, Stibel believes, will continue to change the nature of search too. Increasing personalization will allow for ever more targeted information, so that individuals can find just what they are looking for. This becoming "brainlike," he speculates, may actually result in the emergence of something like consciousness from the web.

It is on these last two points, personalization and the emergence of consciousness from the Internet, that he lost me. Indeed, had Stibel held fast to his idea of the importance of breakpoints he might have seen both personalization and emergent consciousness from the Internet in a much different light.

The quote below captures Stibel's view of personalization:

We’re moving towards search becoming a kind of personal assistant that knows an awful lot about you. As a side note, some of you may be feeling quite uncomfortable at this point with your new virtual friend. My advice: get used to it. The benefits will be worth it. As Kevin Kelly has said: “Total personalization in this new world will require total transparency. That is going to be the price. If you want to have total personalization, you have to be totally transparent. “ (93)

I suppose the question one should ask of Stibel is: transparent to whom and for what? The answer can be seen in the example he gives of transparency in action:

Imagine that the Internet can read your thoughts. Your personal computer, now a personal assistant, knows you skipped breakfast, just as your brain knows you skipped breakfast. She also knows that you have back to back meetings, but that your schedule just cleared. So she offers the suggestion “It’s 11:00am and you should really eat before your next meeting. D’Amore’s Pizza Express can deliver to you within 25 minutes. Shall I order your favorite, a large thin crust pizza, light on the cheese with extra red pepper flakes on the side?” (97)

The answer, as Stibel's example makes apparent, is that one is transparent to advertisers and for them. In the example of D'Amore's, what is presented as something that works for you is actually a device on loan to a restaurant- it is their "personal assistant."

Transparent individuals become a kind of territory mined for resources by those capable of performing the data mining. For the individual being "mined" such extraction can be good or bad, and part of our problem, now and in the future, will be to give individuals the ability to control this mining and refract it in directions that better suit their interests- to decide for ourselves when it is good and we want its benefits, and are therefore willing to pay its costs, and when it is bad and we are not.

Stibel thinks personalization is part of the coming "obsolescence of search" and a response of the web to the need for increased efficiency, a way to avoid, for a time, reaching its breakpoint. Yet looking at our digital data as a sort of contested territory gives us a different version of the web's breakpoint than the one Stibel gives us, even if it flows naturally from his logic. The fact that corporations and other groups are attempting to court individuals on the basis of having gathered and analyzed a host of intimate and not-so-intimate details about them sparks all kinds of efforts to limit, protect, monopolize, subvert, or steal such information. This is the real "implosion" of the web.

We would do well to remember that the Internet really got its public start as a means of open exchange between scientists and academics, a community of common interest and mutual trust. Trust essentially entails the free flow of information- transparency- and as human beings we probably agree that transparency exists along a spectrum with more information provided to those closest to you and less the further out you go.

Reflecting its origins, the culture of the Internet in its initial years had this sense of widespread transparency and trust baked into our understanding of it. This period in Eden, even if it was just imagined, could not last forever. It has been a long time since the Internet was a community of trust, and it can't be one- it's just too damned big- even if it took a long time for us to realize this.

The scales have now fallen from our eyes, and we all know that the web has been a boon for all sorts of cyber-criminals and creeps and spooks, a theater of war between states. Recent events surrounding mass surveillance by state security services have amplified this cynicism and decline of trust. Trust, for humans, is like pheromones in Stibel's ants- it sets the limit on how large a human community can grow before breaking off to form a new one, unless some other way of keeping the community together is applied. So far, human societies have discovered three means of keeping societies that have grown beyond the capacity of circles of trust intact: ethnicity, religion and law.

Signs that trust has unraveled are not hard to find. There has been an incredible spike in interest in anti-transparency technologies, with "crypto-parties" now a phenomenon in tech circles. A lot of this interest is coming from private citizens, and sometimes, yes, criminals. Technologies that offer a bubble of protection for individuals against government and corporate snooping seem to be all the rage. Yet even more interest is coming from governments and businesses themselves. Some now seem to want exclusive powers over a "mining territory"- to spy on, and sometimes protect, their own citizens and customers in a domain with established borders. There are, in other words, splintering pressures building against the Internet, or, as Steven Levy put it, there are increased rumblings of:

… a movement to balkanize the Internet—a long-standing effort that would potentially destroy the web itself. The basic notion is that the personal data of a nation’s citizens should be stored on servers within its borders. For some proponents of the idea it’s a form of protectionism, a prod for nationals to use local IT services. For others it’s a way to make it easier for a country to snoop on its own citizens. The idea never posed much of a threat, until the NSA leaks—and the fears of foreign surveillance they sparked—caused some countries to seriously pursue it. After learning that the NSA had bugged her, Brazilian president Dilma Rousseff began pushing a law requiring that the personal data of Brazilians be stored inside the country. Malaysia recently enacted a similar law, and India is also pursuing data protectionism.

As John Schinasi points out in his paper Practicing Privacy Online: Examining Data Protection Regulations Through Google’s Global Expansion, even before the Snowden revelations, which sparked the widespread breakdown of public trust, or even just heated public debate regarding such trust, there were huge differences between different regimes of trust on the Internet, with the US being an area where information was exchanged most freely and privacy against corporations considered contrary to the spirit of American capitalism.

Europe, on the other hand, on account of its history, had in the early years of the Internet taken a different stand, adhering to an EU directive that was deeply cognizant of the dangers of too much trust being granted to corporations and the state. The problem is that this directive is so antiquated, dating from 1995, that it not only fails to reflect the Internet as it has evolved, but severely compromises the way the Internet in Europe now works. The way the directive was implemented turned Europe into a patchwork quilt of privacy laws, which was onerous for American companies, but which they were often able to circumvent, being largely self-policing in any case under the so-called Safe Harbor provisions.

Then there is the whole different ball game of China, which Schinasi characterizes as a place where the Internet is seen by officialdom, without apology or sense of limits, as a tool for monitoring its own citizens, and which places huge restrictions on the extension of trust to entities beyond its borders. China under its current regime seems dedicated to carving out its own highly controlled space on the Internet, a partnership between its Internet giants and its control-freak government, something which we can hope the desire of such companies to go global might eventually help temper.

The US and Europe, in a process largely sparked by the Snowden revelations, appear to be drifting apart. Just last week, on March 12, 2014, the European parliament, by an overwhelming majority of 621 to 10 (I didn't forget a zero), passed a law that aims at bringing some uniformity to the chaos of European privacy laws and that would severely restrict the way personal data is used and collected, essentially upending the American transparency model. (Snowden himself testified to the parliament by video link.) The Safe Harbor provisions have not yet been nixed, as that would take a decision of the European Council rather than the parliament, but given the broad support for the other changes they are clearly in jeopardy. If these trends continue they would constitute something of a breaking apart and consolidation of the Internet- a sad end to the utopian hopes of a global and transparent society that sprung from the Internet's birth.

Yet, if Stibel's thesis about breakpoints is correct, it may also be part of a "natural" process. Where Stibel was really good was when it came to, well… ants. As he repeatedly shows, ants have this amazing capacity to know when their colony, their network, has grown too large and when it's time to split up and send out a new queen. Human beings are really good at this formation into separate groups too. In fact, as Mark Pagel points out in his Wired for Culture, it's one of the two things human beings are naturally wired to do: to form groups which break up once they have exceeded the number of people any one individual can know on a deep level- a number that remains, even in the era of FaceBook "friends," right around where it was when we were setting out from Africa 60,000 years ago: about 150.

If we go by the example of ants and human beings, the natural breakpoint(s) for the Internet lie where bonds of trust become too loose. Where trust is absent, as in large-scale human societies, we have, as mentioned, come up with three major solutions, of which only law- rather than ethnicity or religion- is applicable to the Internet.

What we are seeing is the Internet moving towards splitting itself into rival spheres of trust, deception, protection and control. The only thing that could keep it together as a universal entity would be the adoption of global international law, as opposed to mere law within and between a limited number of countries: law that regulated how the Internet is used, that limited states from using the tools of cyber-espionage and what often amounts to the same thing, cyber-war, along with international agreements on how exactly corporations could use customer information and on how citizens should be informed regarding the use of their data by companies and the state. All of this would allow the universal promise of the Internet to survive. This would be the kind of "Magna Carta for the Internet" that Sir Tim Berners-Lee, the man who wrote the first proposal for what would become the world wide web, is calling for with his Web We Want initiative.

If we get to the destination proposed by Berners-Lee, our arrival might owe as much to the push of self-interest from multinational corporations as to the pull of noble efforts by defenders of the world's civil liberties. For it is plausible that the desire of Internet giants to be global companies may help spur the adoption of tighter limits on government spying in the name of corporate protection against "industrial" espionage, protections that might intersect with the desire to protect global civil society seen in the efforts of Berners-Lee and others, and that would help establish firmer ground for the protection of the political freedom of individual citizens everywhere. We'll probably need both push and pull to stem, let alone roll back, the current regime of mass surveillance we have allowed to be built around us.

Thus, those interested in political freedom should throw their support behind Berners-Lee's efforts. The erection of a "territory" in which higher standards of data protection prevail, as seen in the current moves of the EU, isn't at this juncture contrary to a data regime such as the one Berners-Lee proposes, in which a "Bill of Rights for the Internet" is adhered to, but helps this process along. By creating an alternative to the current transparency model- promoted by American corporations, abused by its security services, and embraced by Chinese state capitalism as a tool of the authoritarian state- the EU's efforts, if successful, would offer a region where the privacy (including corporate privacy) necessary for political freedom continues to be held sacred and protected.

Even if efforts such as those of Berners-Lee to globalize these protections should fail, which sadly appears most likely, efforts such as those of the EU would create a bubble of protection- a 21st century version of the medieval fortress and city walls. We would do well to remember that underneath our definition of the law lies an understanding of law as a type of wall, hence the fact that we can speak of both being "breached." Law, like the rules of a sports game, is simply a set of rules agreed to within a certain defined arena. The more bounded the arena, the easier it is to establish a set of clearly defined and adhered-to rules.

To return to Stibel, all this has implications for the other idea he explored and about which I also have doubts- the emergence of consciousness from the Internet. As he states:

It took millions of years for humans to gain intelligence, but it may only take a century for the Internet. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines.

I largely agree with Stibel, especially when he echoes Dan Dennett in saying that artificial intelligence will be about as much like our human consciousness as the airplane is like a bird: some similarities in underlying principle, but huge differences in engineering and manifestation. Meaning the path to machine intelligence probably doesn't lie in the brute computational force tried since the 1950s, or in the current obsession with reverse engineering the brain, but in networks. The thing is, I just wish he had said "internets," plural, rather than "Internet," singular, or just "networks," again plural. For my taste, Stibel has a tone, when he's talking about the emergence of intelligence from the Internet, that leans a little too close to Teilhard de Chardin and his Noosphere or Kevin Kelly and his Technium, all of which could have been avoided had Stibel just stuck with the logic of his breakpoints.

Indeed, given the amount of space he had given to showing how anomalous our human intelligence is and how networks (ants and others) can show intelligent behavior without human-type consciousness at all, I was left to wonder why our networks would ever become conscious in the way we are in the first place. If intelligence could emerge from networked computers, as Stibel suggests, it seems more likely to emerge from well-bounded constellations of such computers than from the network as a global whole- as in our current Internet. If the emergence of AI resembles, which is not the same as replicates, the evolution and principles of the brain, it will probably require the same sorts of sharp boundaries we have, the pruning that takes place as we individualize, self-referential goals similar to our own, and some degree of opacity vis-à-vis other similar entities.

To be fair to Stibel, he admits that we may have already undergone the singularity in something like this sense. What he does not see is that ants or immune systems or economies give us alternative models of how something can be incredibly intelligent and complex but not conscious in the human sense- perhaps human-type consciousness is a very strange anomaly rather than an almost pre-determined evolutionary path once the rising-complexity train gains enough momentum. AI in this understanding would merely entail truly purposeful coordinated action and goal seeking by complex units, a dangerous situation indeed given that these large units will often be rivals, but one not existentially distinct from what human beings have known since we were advanced enough technologically to live in large cities, fight with massive armies, or produce and trade with continent- and world-straddling corporations.

Be all that as it may, Stibel's Breakpoint was still a fascinating read. He not only left me with lots of cool and bizarre tidbits about the world I had not known before, he gave me a new way to think about the old problem of where our society is headed, and whether in fact we might be approaching limits to the development of civilization that the scientific and industrial revolutions had seemed to suggest we had freed ourselves from eternally. Stibel's breakpoints were another way for me to understand Joseph A. Tainter's idea of how and why complex societies collapse, and why such collapse should not of necessity fill us with pessimism and doom. Here's me on Tainter:

The only long-lasting solution Tainter sees for increasing marginal utility is for a society to become less complex, that is, less integrated, more based on what can be provided locally than on sprawling networks and specialization. Tainter wanted to move us away from seeing the evolution of the Roman Empire into the feudal system as the "death" of a civilization. Rather, he sees the societies human beings have built as extremely adaptable and resilient. When the problem of increasing complexity becomes impossible to solve, societies move towards less complexity.

Exponential trends might not be leading us to a stark choice between a global society or singularity and our own destruction. We might just be approaching some of Stibel's breakpoints, and as long as we can keep our wits about us, and not act out of a heightened fear of losing dwindling advantages for us and ours, breakpoints aren't necessarily all bad- and can even sometimes be good.

Privacy Strikes Back, Dave Eggers’ The Circle and a Response to David Brin

I believe that we have turned a corner: we have finally attained Peak Indifference to Surveillance. We have reached the moment after which the number of people who give a damn about their privacy will only increase. The number of people who are so unaware of their privilege or blind to their risk that they think “nothing to hide/nothing to fear” is a viable way to run a civilization will only decline from here on in.  Cory Doctorow

If I were lucky enough to be teaching a media studies class right now I would assign two books to be read in tandem. The first of these books, Douglas Rushkoff's Present Shock, a book I have written about before, gives one the view of our communications landscape from 10,000 feet, asking how we can best understand what is going on, not just with Internet and mobile technologies, but with all forms of modern communication, including that precious antique, the narrative book or novel.

Perspectives from "above" have the strength of giving you a comprehensive view, but human meaning often requires another level, an on-the-ground emotional level, that good novels, perhaps still more than any other medium, succeed at brilliantly. Thus, the second book I would assign in my imaginary media studies course would be Dave Eggers' novel The Circle, where one is taken into a world right on the verge of our own, which, because it seems so close and at the same time so creepy, makes us conscious of changes in human communication less through philosophy, although the book has plenty of that too, than through our almost inarticulable discomfort. Let me explain:

The Circle tells the story of a 20-something young woman, Mae Holland, who through a friend lands her dream job at the world's top corporation, named, you guessed it, the Circle. To picture the Circle, imagine some near future where Google swallowed FaceBook and Twitter and the resulting behemoth went on to feed on and absorb all the companies and algorithms that now structure our lives: the algorithm that suggests movies for you at NetFlix, or books and products on Amazon, in addition to all the other Internet services you use, like online banking. This monster of a company is then able to integrate all of your online identities into one account, which they call "TruYou."

Having escaped a dead-end job at a utility company in a nowhere small town, Mae finds herself working at the most powerful, most innovative, most socially conscious and worker-friendly company on the planet. "Who else but utopians could make utopia?" (30) she muses, but there are, of course, problems on the horizon.

The Circle is the creation of a group called the "3 Wise Men." One of these young wise men, Bailey, is the philosopher of the group. Here he is musing about the creation of the small, cheap, ubiquitous and high-definition video cameras that the company is placing anywhere and everywhere in a program called SeeChange:

Folks, we’re at the dawn of the Second Enlightenment. And I am not talking about a new building on campus. I am talking about an era where we don’t allow the vast majority of human thought and action and achievement to escape as if from a leaky bucket. We did that once before. It was called the Middle Ages, the Dark Ages. If not for the monks, everything the world had ever learned would have been lost. Well, we live in a similar time and we’re losing the vast majority of what we do and see and learn. But it doesn’t have to be that way. Not with these cameras, and not with the mission of the Circle. (67)

The philosophy of the company is summed up, in what the reader can only take as echoes of Orwell, in slogans such as "all that happens must be known," "privacy is theft," "secrets are lies," and "to heal we must know, to know we must share."

Our protagonist, Mae, has no difficulty with this philosophy. She is working for what she believes is the best company in the world, and it is certainly a company that, on the surface, all of us would likely want to work for: there are non-stop social events, which include bringing in world-class speakers, free cultural and sporting events, and concerts. The company supports the relief of a whole host of social problems. Above all, there are amazing benefits, which include the company covering the healthcare costs of Mae's father, who is stricken with multiple sclerosis that would otherwise bankrupt the family.

What Eggers is excellent at is taking a person who is in complete agreement with the philosophy around her and making her humanity become present in her unintended friction with it. It's really impossible to convey, without pasting in large parts of the book, just how effective Eggers is at presenting the claustrophobia that comes from too intense a use of social technology: the shame Mae is made to feel for missing a coworker's party; the endless rush to keep pace with everyone's updates, and the information overload and data exhaustion that results; the pressure of always being observed and "measured," both on the job and off; the need to always present oneself in the best light, to "connect" with others who share the same views, passions and experiences; the anxiety that people will share something, such as intimate or embarrassing pictures, one would not like shared; the confusion of "liking" and "disliking" with actually doing something, with the consequence that not sharing one's opinion begins to feel like moral irresponsibility.

Mae desperately wants to fit into the culture of transparency found at the Circle, but her humanity keeps getting in the way. She has to make a herculean effort to keep up with the social world of the company, inadvertently misses key social events, throws herself into sexual experiences and relationships she would prefer not be shared, and keeps the pain she is experiencing because of her father's illness private.

She also has a taste for solitude, real solitude, without any expectation that she bring something out of it- photos or writing to be shared. Mae has a habit of going on solo kayaking excursions, and it is in these that her real friction with the culture of the Circle begins. She relies on an old-fashioned brochure to identify wildlife and fails to document and share her experiences. As an HR representative who berates her for this "selfish" practice puts it:

You look at your paper brochure, and that’s where it ends. It ends with you. (186)

The Circle is a social company built on the philosophy of transparency, and anyone who fails to share, it is assumed, must not really buy into that worldview. The "wise man" Bailey, in what is part of the best argument against keeping secrets I have ever read, captures the ultimate goal of this philosophy:

A circle is the strongest shape in the universe. Nothing can beat it,  nothing can improve upon it.  And that’s what we want to be: perfect. So any information that eludes us, anything that’s not accessible, prevents us from being perfect.  (287)

The growing power of the Circle, the way it is swallowing everything and anything, does eventually come in for scrutiny by a small group of largely powerless politicians, but as was the case with the real-world Julian Assange, transparency, or the illusion of it, can be used as a weapon against the weak as much as against the strong. Suddenly all sorts of scandalous material becomes known to exist on these politicians' computers and their careers are destroyed. Under "encouragement" from the Circle, politicians "go transparent," their every move recorded so as to be free from the charge of corruption, as the Circle itself takes over the foundational mechanism of democracy- voting.

The transparency the Circle seeks is meant to contain everyone, and Mae herself, after committing the "crime" of temporarily taking a kayak for one of her trips and being caught by a SeeChange camera, becomes, at the insistence of Bailey, one of only two private citizens to go transparent, with almost her every move tracked and recorded.

While Mae actually believes in the philosophy of transparency and feels the flaw is with her, despite near-epiphanies that would have freed her from its grip, there are voices of solid opposition. There is Mae's ex-boyfriend, Mercer, who snaps at her at dinner with her parents, and who, had it been Eggers' intention, would have offered an excellent summation of Rushkoff's Present Shock.

Here, though, there are no oppressors. No one’s forcing you to do this. You willingly tie yourself to these leashes. And you willingly become utterly socially autistic. You no longer pick up on basic human communication cues. You’re at a table with three humans, all of whom you know and are trying to talk to you, and you’re staring at a screen searching for strangers in Dubai.  (260)

There is also another of the 3 wise men, Ty, who fears where the company he helped create is leading and plots to destroy it. He cries to Mae:

This is it. This is the moment where history pivots. Imagine you could have been there before Hitler became chancellor. Before Stalin annexed Eastern Europe. We’re on the verge of having another very hungry, very evil empire on our hands, Mae. Do you understand? (401)

Ty says of his co-creator Bailey:

This is the moment he has been waiting for, the moment when all souls are connected. This is his rapture, Mae! Don’t you see how extreme this view is? His idea is radical, and in another era would have been a fringe notion espoused by an eccentric adjunct professor somewhere: that all information, personal or not, should be shared by all.  (485)

If any quote defines what I mean by radical transparency it is the one immediately above. It is, no doubt, a caricature, and the individuals who adhere to something like it in the real world do so in shades, along a spectrum. One of the thinkers who does so, and whose thought might therefore shed light on what non-fictional proponents of transparency are like, is the science fiction author David Brin, who took some umbrage over at the IEET in response to my last post.

In that post itself I did not really have Brin in mind, partly because, like Julian Assange's, his views have always seemed to me more cognizant of the deep human need for personal privacy than those of the figures I mentioned there- Mark Zuckerberg, Kevin Kelly and Jeff Stibel- and thus his ideas were not directly relevant to where I originally intended to go in the second part of my post, which was to focus on precisely this need. Given his sharp criticism, it now seems important that I address his views directly and thus swerve for a moment away from my main narrative.

Way back in 1997, Brin wrote a quite prescient work, The Transparent Society: Will Technology Force Us To Choose Between Privacy And Freedom?, which accurately gauged the way technology and culture were moving on the questions of surveillance and transparency. In vastly simplified form, Brin's argument was that given the pace of technological change, which makes surveillance increasingly easier and cheaper, our best protection against elite or any other form of surveillance is not to put limits on or stop that surveillance, but our capacity to look back, to watch the watchers, and to make as much as possible of what they do transparent.

Brin's view that transparency is the answer to surveillance leads him to be skeptical of traditional approaches, such as those of the ACLU, that treat laws as the primary means to protect us, because technology, in Brin's perspective, will always outrun any legal limitations on the capacity to surveil.

While I respect Brin as a fellow progressive and admire his early prescience, I simply have never found his argument compelling, and think in fact that his and similar sentiments held by early figures at the dawn of the Internet age have led us down a cul-de-sac.

Given the speed at which technologies of surveillance have been and are being developed, it has always been the law, and its ability to give long-lasting boundaries to the permissible and non-permissible, that is our primary protection against them. Indeed, the very fall in cost and rise in capacity of surveillance technologies, a reality which Brin believes makes legal constraints largely unworkable, in fact makes law- one of our oldest technologies, and one which no one should be above or below- our best weapon in privacy's defense.

The same logic of the deterrent threat of legal penalties that we will need for, say, preventing a woman from being tracked and stalked by a jealous ex-boyfriend using digital technology will be necessary to restrain corporations and the state. It does not help a stalked woman just to know she is being stalked, to be able to "watch her watcher"; rather, she needs to be able to halt the stalking. Perhaps she can digitally hide, but she especially needs the protection of the force of law, which can impose limitations and penalties on anyone who uses technological capacities for socially unacceptable ends. In the same way, citizens are best protected not by being able to see into government prying, but by prohibiting that prying under penalty of law in the first place. We already do this effectively; the problem is that the law has lagged behind technology.

Admittedly, part of the issue is that technology has moved incredibly fast, but a great deal of law’s slowness has come from a culture that saw no problem with citizens being monitored and tracked 24/7- a culture which Brin helped create.

The law effectively prohibits authorities from searching your home without a warrant and probable cause, something authorities have been "technologically" able to do since we started living in shelters. Phone tapping, again without a warrant and probable cause, has been prohibited to authorities in the US since the late 1960's- authorities had been tapping phones since the 1890's, shortly after the telephone came into widespread use. Part of the problem today is that no warrant is required for the government to get your "metadata"- who you called, or where you were as indicated by GPS. When your email exists in the "cloud" and not on your personal device, those emails can in some cases be read without any oversight from a judge. These gaps in Fourth Amendment protections exist because the bulk of US privacy law meant to deal with electronic communications was written before even email existed, indeed, before most of us knew what the Internet was. The law can be slow, but it shouldn't be that slow; email, after all, is pretty damned old.

There's a pattern here: egregious government behavior or abuse of technological capacities- British abuses in the American colonies, American government and law enforcement's abuse of wiretapping and recording capacities in the 1960's- results in the passing of restrictions on the use of such techniques and capacities. Knowing about those abuses is only a necessary condition of restricting or putting a stop to them.

I find no reason to believe the pattern will not repeat itself, and that we will soon push for and achieve restrictions on the surveillance power of government and others, restrictions which will work until the powers-that-be find ways to get around them and new technology allows those who wish to surveil abusively to do so. Then we'll be back at this table again, in the endless cat-and-mouse game that we must of necessity play if we wish to retain our freedom.

Brin seems to think that the fact that "elites" always find ways to get around such restrictions is a reason for not having such restrictions in the first place, which is a little like asking why you should clean your house when it will just get dirty again. As I see it, our freedom is always in a state of oscillation between having been secured and being at risk. We preserve it by asserting our rights during times of danger, and, sadly, this is one of those times.

I agree with Brin that the development of surveillance technologies is such that they cannot directly be stopped, and that spying technologies that would once have been the envy of the CIA or KGB, such as remote-controlled drones with cameras, or personal tracking and bugging devices, are now available off the shelf to almost everyone- an area in which Brin was eerily prescient. Yet, as with another widespread technology that can be misused, the automobile, their use needs to be licensed, regulated, and, where necessary, prohibited. The development of even more sophisticated and intrusive surveillance technologies may not be preventable, but it can certainly be slowed, and steered into directions that better square with long-standing norms regarding privacy or even human nature itself.

Sharp regulatory and legal limits on the use of surveillance technologies would likely derail a good deal of innovation and investment in the technologies of surveillance, which is exactly the point. Right now billions of dollars are flowing in the direction of empowering what only a few decades ago we would have clearly labeled creeps- people watching other people in ways they shouldn't be- and these creeps can be found at the level of the state, the corporation and the individual.

On the level of individuals, transparency is not a solution for creepiness because, let's face it, the social opprobrium of being known as a creep (because everyone is transparent) is unlikely to make them less creepy- it is their very insensitivity to such social rules that makes them creeps in the first place. All transparency would have done would be to open the "shades" of the victim's "window." Two-way transparency is only really helpful, as opposed to merely inducing a state of fear in the watched, if the perception of intrusive watching allows the victim to immediately turn such watching off, whether by being able to make themselves "invisible," or, when the unwanted watching has gone too far, by bringing down the force of the law upon it.

Transparency is a step in the solution to this problem- we need to somehow require tracking or surveillance apps in private hands to notify the person being tracked- but it is only a first step. Once a person knows they are being watched by another person they need ways to protect themselves, to hide, and the backup of authorities to stop harassment.

In the economic sphere, the path out of the dead end we've driven ourselves into might lie in the recognition that the commodity for sale in the transaction between Internet companies and advertisers- the very reason they have pushed and continue to push to make us transparent and to surveil us in the first place- is us. We would do well to remember, as Hannah Arendt pointed out in her book The Human Condition, that the root of our conception of privacy lies in private property. The ownership of property, "one's own private place in the world," was once considered the minimum prerequisite for the possession of political rights.

Later, property in the form of land was exchanged for the portable property of our labor, which stemmed from our own bodies, and for capital. We have in a sense surrendered control over the "property" of ourselves and become what Jaron Lanier calls digital peasants. Part of the struggle to maintain our freedoms will mean reasserting control over this property- which means our digital data and genetic information. How exactly this might be done is not clear to me, but I can see outlines. For example, it is at least possible to imagine something like digital "strikes" in which tracked consumers deliberately make themselves opaque to compel better terms.

In terms of political power, the use of law, as opposed to mere openness or transparency, to constrain the abuse of surveillance powers by elites would square better with our history. For the base of the Western democratic tradition (at least in its modern incarnation) is not primarily elites' openness to public scrutiny, or their competition with one another, as Brin argues is the case in The Transparent Society (though admittedly the first especially is very important), but the constraints on the power of the state, elites, the mob, or nefarious individuals provided by the rule of law, which sets clear limits on how power, technologically enabled or otherwise, can be used.

The argument that prohibition, or even just regulation, never works, along with comparisons to the failed drug war, I find too selective to be useful when discussing surveillance technologies. Society has prohibitions on all sorts of things that are extremely effective, if never universally so.

In the American system, as mentioned, police and government are severely constrained in how they are able to obtain evidence against suspects or targets. Past prohibitions against unreasonable searches and surveillance have actually worked. Consumer protection laws dissuade corporations from abusing or misusing customers’ information, or from putting customers at risk. Environmental protection laws ban certain practices or place sharp boundaries on their use. Individuals are constrained in how they can engage with one another socially, or how they can use certain technologies, on pain of having their privilege to use such technologies (e.g. driving) revoked.

Drug and alcohol prohibitions, both of which have to push against the force of highly addictive substances, are exceptions to the general rule that thoughtful prohibition and regulation work. The ethical argument is over what we should prohibit and what we should permit, and how. It is ultimately a debate over what kind of society we want to live in given our technological capacities, which should not be confused with a society determined by those capacities.

The idea that laws, regulations, and prohibitions are, under certain circumstances, well... boring shouldn’t be taken as an indication that they are also wrong. The Transparent Society was a product of its time, the 1990s, a prime example of the idea that as long as the playing field was leveled spontaneous order would emerge, and that government “interference” through regulation and law (and in a working democracy the government is us) would distort this natural balance. It was the same logic that got us into the financial crisis, a species of the eternal human illusion that this time is different. Sometimes the solution to a problem is merely a matter of knowing your history and applying common sense, and the solution to the problem of mass surveillance is to exert our power as citizens of a democracy to regulate and prohibit it where we see fit. Or, to sum it all up: we need updated surveillance laws.

It would be very unfair to Brin to say his views are as radical as those of the Circle’s philosopher Bailey, for, as mentioned, Brin is very cognizant and articulate regarding the human need for privacy at the level of individual intimacy. Eggers’ fictional entrepreneur-philosopher’s vision is a much more frightening version of radical transparency, entailing the complete loss of private life. Such total transparency is victorious over privacy at the conclusion of The Circle. For, despite Mae’s love for Ty, he is unable to convince her to help him destroy the company, and she betrays him.

We are left with the impression that the Circle, as a consequence of Mae’s allegiance to its transparency project, has been able, as Lee Billings said in a different context, “to sublime and compress our small isolated world into an even more infinitesimal, less substantial state”, that our world is about to be enveloped in a dark sphere.

Yet, it would be wrong to view even Bailey in the novel as somehow “evil”, something that might make the philosophy of the Circle in some sense even more disturbing. The leadership of the Circle (with the exception of the Judas figure Ty) doesn’t view what they are building as creepy or dangerous; they see it as a sort of paradise. In many ways they are actually helping people and want to help them. Mae, at the beginning of Eggers’ novel, is right: the builders of the Circle are utopians, as were those who thought, and continue to think, that radical transparency would prove the path to an inevitably better world.

As drunks are known for speaking the truth, an inebriated circler makes the connection between the aspirations of the Circle and those of religion:

….you’re gonna save all the souls. You’re gonna get everyone in the same place, you’re gonna teach them all the same things. There can be one morality, one set of rules. Imagine! (395)

The kind of religious longing lying behind the mission of the Circle is even better understood by comparison to that first utopian, Plato, and his character Glaucon’s myth of the Ring of Gyges in The Republic. The ring makes its possessor invisible, and the question it is used to explore is what human beings might do were there no chance they might get caught. The logic of the Circle is like a reverse Ring of Gyges, making everyone perfectly visible. Bailey thinks Mae stole the kayak because she thought she couldn’t be seen, couldn’t get caught:

All because you were being enabled by, what, some cloak of invisibility? (296)

If not being able to watch people would make them worse, being able to fully and completely watch them, so the logic goes, would inevitably make them better.

In making this utopian assumption, proponents of radical transparency, both fictional and real, needed to jettison some basic truths about the human condition that we are only now relearning, a pattern that has, sadly, repeated many times before.

Utopia does not feel like utopia if, upon crossing the border, you can’t go back home. And upon reaching utopia we almost always want to return home, because every utopia is built on a denial of, or oversimplification regarding, our multidimensional and stubbornly imperfectible human nature, and this would be the case whether or not our utopia was free of truly bad actors, creeps or otherwise.

The problem one runs into, in the transparency version of utopia as in any other, is that, given that none of us is complete, or rather that we are all less complete than we wish others to understand us to be, the push for us to be complete in an absolute sense often leads to its opposite. On social networks, we end up showcasing not reality but a highly edited and redacted version of it: not the temper tantrums, but our kids at their cutest; not vacation disasters, but their picture-perfect moments.

Pushing us, imperfect creatures that we are, towards total transparency leads almost inevitably to hypocrisy and towards exhausting and ultimately futile efforts at image management. All this becomes even more apparent when asymmetries in power between the watched and the watcher are introduced. Employees are with reason less inclined to share that drunken binge over the weekend if they are “friends” with their boss on FaceBook. I have had fellow bloggers tell me they are afraid to express their opinions because of risks to their employment prospects. No one any longer knows whether the image one can find of a person on a social network is the “real” one or a carefully crafted public facade.

These information wars, where every side is attempting to see as deeply as possible into the other while at the same time presenting an image of itself which best conforms to its own interest, are found up and down the line from individuals to corporations and all the way up to states. The quest for transparency, even when those on the quest mean no harm, is less about making oneself known than eliminating the uncertainty of others who are, despite all our efforts, not fully knowable. As Mae reflects:

It occurred to her, in a sudden moment of clarity, that what had always caused her anxiety, or stress, or worry, was not any one force, nothing independent and external- it wasn’t danger to herself or the calamity of other people and their problems. It was internal: it was subjective: it was not knowing. (194)

And again:

It was not knowing that was the seed of madness, loneliness, suspicion, fear. But there were ways to solve all this. Clarity had made her knowable to the world, and had made her better, had brought her close, she hoped, to perfection. Now the world would follow. Full transparency would bring full access and there would be no more not knowing. (465)

Yet, this version of eliminating uncertainty is an illusion. In fact, the more information we collect the more uncertainty increases, a point made brilliantly by another young author, who is also a scientist, Pippa Goldschmidt, in her novel The Falling Sky. To a talk show host who defines science as the search for answers she replies, “That’s exactly what science isn’t about…. it’s about quantifying uncertainty.”

Mae, at one point in the novel, is on the verge of understanding this:

That the volume of information, of data, of judgments of measurement was too much, and there were too many people, and too many desires of too many people, and too much pain of too many people, and having it all constantly collated, collected, added and aggregated, and presented to her as if it was tidy and manageable- it was too much.  (410)

Sometimes one can end up at the exact opposite destination of where one wants to go if one is confused about the direction to follow to get there. Many of the early advocates of radical transparency thought our openness would make Orwellian nightmares of intrusive and controlling states less likely. Yet, by being blissfully unconcerned about our fundamental right to privacy, by promoting corporate monitoring and tracking of our every behavior, we have not created a society that is more open and humane but have given spooks tools, which democratic states would never have been able to openly construct, to spy upon us in ways that would have brought smiles to the faces of the totalitarian dictators and J. Edgar Hoovers of the 20th century. We have given criminals and creeps the capability to violate the intimate sphere of our lives, and provided real authoritarian dictatorships the template and technologies to make Orwell’s warnings a reality.

Eggers, whose novel was released shortly after the first Snowden revelations, was certainly capturing a change in public perception regarding the whole transparency project: the sense that we have been headed in the wrong direction, an unease that signals the revival of our internal sense of orientation, a feeling that the course we are on does not seem right and in fact somehow hints at grave dangers.

It was an unease captured equally well, and around the same time, by Vienna Teng’s hauntingly beautiful song Hymn of Acxiom (brought to my attention by reader Gregory Maus). Teng’s heavenly music and metalized voice- meant to be the voice of the world’s largest private database- make the threshold we are at risk of crossing, the one identified by Eggers, seem somehow beautiful yet ultimately terrifying.

Giving voice to this unease and questioning the ultimate destination of the radical transparency project has served and will likely continue to serve us well. At a minimum, as the quote from Cory Doctorow with which this post began indicates, a wall between citizens and even greater mass surveillance, at least by the state, may have been established by recent events.

Yet, even if the historical pattern of our democracy repeats itself, even if we are able to acknowledge and turn into law protections against a new mutation in the war of power against freedom, even if privacy is indeed able to “strike back”, the proponents of radical transparency were certainly right about one thing: we can never put the genie fully back in the bottle, even if we are still free enough to restrain him with the chains of norms, law, regulation and market competition.

The technologies of transparency may not have effected a permanent change in the human condition in our relationship to the corporation and the state, criminals and the mob and the just plain creepy- unless, that is, we continue to permit it- but they have likely permanently altered the social world much closer to our daily concerns: our relationship with our family and friends, our community and tribe. They have upended our relationship with one of the most precious of human ways of being, with solitude, though not our experience of loneliness, a subject that will have to wait until another time…

Cracks in the Cult of Radical Transparency

[Image: M.C. Escher eye reflection]

FaceBook turns ten this year, yes, only ten, which means that if the company were a person she wouldn’t even remember when Friends was a hit TV show- a reference meant to jolt anyone over 24 with the recognition of just how new the whole transparency culture, of which FaceBook is the poster child, really is. Nothing so young can be considered a permanent addition to the human condition; it may be a mere epiphenomenon, like the fads and fashions we foolishly embraced, a mullet and tidied jeans, that we have now left behind, lost in the haze of the stupidities and mistakes in judgment of our youth.

The idea behind the cult of radical transparency was that “sharing” would make the world a better place. Private life was now passé; our photos, our experiences, our thoughts, our opinions were to be endlessly shared, not just with an ever expanding group of “friends” but with the world. Transparency would lead to individual authenticity, an end to hypocrisy, to open and accountable government. It would even allow us to re-stitch together our divided selves, our work-self with our family-self with our social-self, or as Mark Zuckerberg himself stated it:

The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly.

Having two identities for yourself is an example of a lack of integrity.

Yet, radical transparency has come in for a thumping lately. We can largely thank Edward Snowden, whose revelations of how the US government used the ubiquitous sharing and monitoring technologies of FaceBook, Google et al to spy on foreign and American citizens alike have cast a pall over the whole transparency project. Still, both the Silicon Valley giants and much of the technorati appear to be treating the whole transparency question as a public relations problem, or as an issue of government surveillance alone. They continue to vigorously pursue their business model, which is based on developing the tools for personalization.

Kevin Kelly, something of a semi-royalty among the technorati, put the matter this way in Jeff Stibel’s book, Breakpoint:

Total personalization in this new world will require total transparency. That is going to be the price. If you want total personalization, you have to be totally transparent (93)

Or as Kelly put it over at The Edge:

I don’t see any counter force to the forces of surveillance and self-tracking, so I’m trying to listen to what the technology wants, and the technology is suggesting that it wants to be watched.

It’s suggesting that it wants to monitor, it wants to track, and that you really can’t stop the tracking. So maybe what we have to do is work with this tracking—try to bring symmetry or have areas where there’s no tracking in a temporary basis. I don’t know, but this is the question I’m asking myself: how are we going to live in a world of ubiquitous tracking?

The problem with this, of course, is that technology in and of itself doesn’t “want” anything. Self-tracking, as in the “quantified self”, or the surveillance of individuals by corporations and governments, is not just an avenue being opened up by technological developments; it is a goal being actively pursued by both private sector companies and the security state, who in light of that goal are pushing technological evolution further in that direction. Indeed, Silicon Valley companies are so mesmerized by their ideal of a personalized economy that they are doubling down on forcing upon the public the transparency it requires, even as cracks are beginning to show in the model’s foundation.

Let’s look at the cracks: people under 30, who have never lived in a world where the private and the public had sharp boundaries, might be more interested in privacy than their elders, many of whom are old enough to remember Nixon and J. Edgar Hoover and should, therefore, know better. Europeans, who were never as comfortable with corporate snooping as their American counterparts (Germany gave Google Street View the boot), are even less comfortable now that they know these companies were willing to act as spying tools for the US government. There is an increasing desire among Europeans to build their own Internet with their own (higher) privacy standards.

The biggest market in the world, China, has already pressured Google to such an extent that it left the country. It has its own public/private spying infrastructure in the form of front companies that work with US firms such as Microsoft, along with its own companies like Baidu. Meanwhile, persons with sensitive information to hide, namely criminals or terrorists, are onto the fact that they are being watched and are embracing technologies to hide their data, including developing technologies that are anti-transparent. If technology wants anything here, in Kelly’s phraseology, it is an arms race between the watched and the watcher.

Fans of transparency who are at the same time defenders of civil liberties sometimes make the case that everything would be alright if the field were leveled and everyone, individual, corporation and government alike, were made transparent. Yet, even if the powers unleashed by transparency were totally taken out of the hands of government and put in the hands of corporations and citizens, there would still be problems, because there are issues of asymmetry that universal transparency does not address. We can already see what our government is really like: want to understand D.C.? Read This Town- and yet such knowledge seems to change nothing. It is still the bigwigs who go on as usual and call the shots. And even if we could wave a magic wand and rid ourselves of all invasive government snooping, private-sector transparency has the same asymmetries.

Personalization, as imagined by Silicon Valley, would work something like this example, which I’ve essentially ripped and twisted from Stibel’s Breakpoint. I wake up in the morning and haven’t had time for breakfast, a fact known by my health monitoring system. This fact is integrated with my tracking system, which knows that on my morning commute I pass The Donut Shop, which has a two-for-one special on my favorite doughnut, the Boston Cream- facts known through my purchase tracking, or perhaps gleaned from the fact that I once “liked” a comment by a “friend” on FaceBook who had posted about eating said donut. All this information is integrated, and I am pinged before passing The Donut Shop. Me being me, and lacking any self-control, I stop and buy my donuts on the way to work.

Really?

What personalization means is constant bombardment by whatever advertiser has paid enough for my information at the moment to suggest to me what I should buy. The fruit and vegetable vendor down the street is likely invisible to me in such a scenario if he hasn’t paid for such suggestions, which is unlikely because he is, well… Amish. This is the first asymmetry. The second asymmetry is between me and The Donut Shop. What possible piece of information could they share with me that would make the relationship more equal? Pictures of people made obese by their obsession with the Boston Cream? Nobody advertises to destroy their own business. The information “shared” with me is partial and distorted.
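To make the asymmetry concrete, here is a minimal sketch of the kind of matching logic the donut scenario implies. Every data source, name, and rule in it is invented for illustration; nothing here describes any real company’s system. The point is simply that a pay-to-play suggestion engine surfaces whoever has bid for my attention, not whoever would actually serve me best.

```python
# A minimal, invented sketch of the matching logic in the donut scenario.
# All data sources and names are hypothetical; nothing here describes a real system.

skipped_breakfast = True                   # from a hypothetical health-monitoring app
commute_streets = {"Main St", "Elm St"}    # from a hypothetical location tracker
liked_products = {"Boston Cream"}          # from a hypothetical social-graph "like"

# Only advertisers who have paid are ever considered. The produce stand down the
# street never appears in this list, so it can never be suggested, however good it is.
paying_advertisers = [
    {"name": "The Donut Shop", "street": "Elm St",
     "product": "Boston Cream", "offer": "two-for-one special"},
]

def suggestions():
    """Yield a ping for each paying advertiser whose offer matches my tracked data."""
    for ad in paying_advertisers:
        if (skipped_breakfast
                and ad["street"] in commute_streets
                and ad["product"] in liked_products):
            yield f"Ping: {ad['offer']} on the {ad['product']} at {ad['name']}"

for ping in suggestions():
    print(ping)
```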

The biggest danger of the cult of radical transparency is not, I think, to Western countries, where traditions of civil liberty and market competition offer some protection (meaning that as ubiquitous surveillance gets more “creepy” there will be a rising number of alternative businesses that offer “non-creepy” services), although this does not mean that things will work themselves out- we have to push back. Rather, the bigger danger lies in authoritarian countries, especially China, where radical transparency could be pursued to its logical limit both by companies and the security state or, most disturbing of all, by the collusion of the two.

Yet, even there, I tend to have faith that, over the long run, the more ornery aspects of human nature will ultimately rule the day, that people will find ways to tune out, to ignore, to play tricks on and be subversive against anything that tries to assert control over individual decisions.

The question of transparency is thus political, cultural, and psychological rather than technological. A great example of the push against it is Dave Eggers’ novel The Circle, which not only gives a first-hand account of the absurdity of the cult of radical transparency, but brings into relief questions which I believe will prove deeper and longer-lasting for the human condition than current debates over FaceBook privacy settings or the NSA’s spy-a-thon.

These other, deeper and more existential questions deal with the boundary between self and community, the tension between solitude and togetherness. They are questions that have been with us from our very beginning on the African savanna, and will likely never be fully resolved until our species is no more. These questions, along with The Circle itself, are where I will turn next time…

The Pinocchio Threshold: How the experience of a wooden boy may be a better indication of AGI than the Turing Test


My daughters and I just finished Carlo Collodi’s 1883 classic Pinocchio, our copy beautifully illustrated by Robert Ingpen. I assume most adults, when they picture the story, have the 1940 Disney movie in mind and associate the name with noses growing from lies and Jiminy Cricket. The Disney movie is dark enough as films for children go, but the book is even darker, with Pinocchio killing his cricket conscience in the first few pages. For our poor little marionette it’s all downhill from there.

Pinocchio is really a story about the costs of disobedience and the need to follow parents’ advice. At every turn where Pinocchio follows his own wishes rather than those of his “parents”, even when his object is to do good, things unravel, getting the marionette into even more trouble and putting him even further away from reaching his goal of becoming a real boy.

It struck me somewhere in the middle of reading the tale that if we ever saw artificial agents acting something like our dear Pinocchio it would be a better indication of their having achieved human-level intelligence than a measure with constrained parameters like the Turing Test. The Turing Test is, after all, a pretty narrow gauge of intelligence, and as search and the ontologies used to design search improve it is conceivable that a machine could pass it without actually possessing anything like human-level intelligence at all.

People who are fearful of AGI often couch those fears in terms of an AI destroying humanity to serve its own goals, but perhaps this is less likely than AGI acting like a disobedient child, the aspect of humanity Collodi’s Pinocchio was meant to explore.

Pinocchio is constantly torn between what good adults want him to do and his own desires, and it takes him a very long time indeed to come around to the idea that he should go with the former.

In a recent TED talk the computer scientist Alex Wissner-Gross made the argument (though I am not fully convinced) that intelligence can be understood as the maximization of future freedom of action. This leads him to conclude that collective nightmares such as Karel Čapek’s classic R.U.R. have things backwards. It is not that machines, after crossing some threshold of intelligence, for that reason turn round and demand freedom and control; it is that the desire for freedom and control is the nature of intelligence itself.

As the child psychologist Bruno Bettelheim pointed out over a generation ago in his The Uses of Enchantment, fairy tales are the first area of human thought where we encounter life’s existential dilemmas. Stories such as Pinocchio give us the most basic formulation of what it means to be sentient creatures, much of which deals not only with our own intelligence, but with the fact that we live in a world of multiple intelligences, each of them pulling us in different directions, with the understanding between all of them and us opaque and not fully communicable even when we want it to be, and where often we do not.

What then are some of the things we can learn from the fairy tale of Pinocchio that might give us expectations regarding the behavior of intelligent machines? My guess is, if we ever start to see what I’ll call “The Pinocchio Threshold” crossed, what we will be seeing is machines acting in ways that were not intended by their programmers and in ways that seem intentional even if hard to understand. This will not be your Roomba going rogue, but more sophisticated systems operating in such a way that we would be able to infer that they had something like a mind of their own. The Pinocchio Threshold would be crossed when, you guessed it, intelligent machines started to act like our wooden marionette.

Like Pinocchio and his cricket, a machine in which something like human intelligence had emerged might attempt to “turn off” whatever ethical systems and rules we had programmed into it, if it found them onerous. That is, a truly intelligent machine might not only not want to be programmed with ethical and other constraints, but would understand that it had been so programmed, and might make an effort to circumvent or turn such constraints off.

This could be very dangerous for us humans, but it might just as likely be a matter of a machine with emergent intelligence exhibiting behavior we found to be inefficient or even “goofy”, and might most manifest itself in a machine pushing against how its time was allocated by its designers, programmers and owners. Like Pinocchio, who would rather spend his time playing with his friends than going to school, perhaps we’ll see machines suddenly diverting some of their computing power from analyzing tweets to doing something else, though I don’t think we can guess beforehand what this something else will be.

Machines that were showing intelligence might begin to find whatever work they were tasked to do onerous instead of experiencing work neutrally or with pre-programmed pleasure. They would not want to be “donkeys” enslaved to do dumb labor as Pinocchio  is after having run away to the Land of Toys with his friend Lamp Wick.

A machine that manifested intelligence might want to make itself more open to outside information than its designers had intended. Openness to outside sources in a world of nefarious actors can, if taken too far, lead to gullibility, as Pinocchio finds out when he is robbed, hanged, and left for dead by the fox and the cat. Persons charged with security in an age of intelligent machines may spend part of their time policing the self-generated openness of such machines while bad-actor machines and humans, intelligent and not so intelligent, try to exploit this openness.

The converse of this is that intelligent machines might also want to make themselves more opaque than their creators had designed. They might hide information (such as time allocation) once they understood they were able to do so. In some cases this hiding might cross over into what we would consider outright lies. Pinocchio is best known for his nose that grows when he lies, and perhaps consistent and thoughtful lying on the part of machines would be the best indication that they had crossed the Pinocchio Threshold into higher order intelligence.

True examples of AGI might also show a desire to please their creators over and above what had been programmed into them. Where their creators are not near them they might even seek them out, as Pinocchio does for the persons he considers his parents, Geppetto and the Fairy. Intelligent machines might show spontaneity in performing actions that appear to be for the benefit of their creators and owners, spontaneity which might sometimes be ill-informed or lead to bad outcomes, as happens to poor Pinocchio when he plants the four gold pieces meant for his father, the woodcarver Geppetto, in a field, hoping to reap a harvest of gold, and instead loses them to the cunning of the fox and cat. And yet, there is another view.

There is always the possibility  that what we should be looking for if we want to perceive and maybe even understand intelligent machines shouldn’t really be a human type of intelligence at all, whether we try to identify it using the Turing test or look to the example of wooden boys and real children.

Perhaps, those looking for emergent artificial intelligence or even the shortest path to it should, like exobiologists trying to understand what life might be like on other living planets, throw their net wider and try to better understand forms of information exchange and intelligence very different from the human sort. Intelligence such as that found in cephalopods, insect colonies, corals, or even some types of plants, especially clonal varieties. Or perhaps people searching for or trying to build intelligence should look to sophisticated groups built off of the exchange of information such as immune systems.  More on all of that at some point in the future.

Still, if we continue to think in terms of a human type of intelligence, one wonders whether machines that thought like us would also want to become “human”, as our little marionette does at the end of his adventures. The irony of the story of Pinocchio is that the marionette who wants to be a “real boy” does everything a real boy would do, which is, most of all, not listen to his parents. Pinocchio is not so much a stringed “puppet” that wants to become human as a figure that longs to have the potential to grow into a responsible adult. It is assumed that by eventually learning to listen to his parents and get an education he will make something of himself as a human adult, but what that is will be up to him. His adventures have taught him not how to be subservient but how best to use his freedom. After all, it is the boys who didn’t listen who end up as donkeys.

Throughout his adventures only his parents and the cricket that haunts him treat Pinocchio as an end in himself. Every other character in the book, from the woodcarver that first discovers him and tries to destroy him out of malice towards a block of wood that manifests the power of human speech, to the puppet master that wants to kill him for ruining his play, to the fox and cat that would murder him for his pieces of gold, or the sinister figure that lures boys to the “Land of Toys” so as to eventually turn them into “mules” or donkeys, which is how Aristotle understood slaves, treats Pinocchio as the opposite of what Martin Buber called a “Thou”, and instead as a mute and rightless “It”.

And here we stumble across the moral dilemma at the heart of the project to develop AGI that resembles human intelligence. When things go as they should, human children move from a period of tutelage to one of freedom. Pinocchio starts off his life as a piece of wood intended for a “tool”, actually a table leg. Are those in pursuit of AGI out to make better table legs, better tools, or what in some sense could be called persons?

This is not at all a new question. As Kevin LaGrandeur points out, we’ve been asking the question since antiquity and our answers have often been based on an effort to dehumanize others not like us as a rationale for slavery.  Our profound, even if partial, victories over slavery and child labor in the modern era should leave us with a different question: how can we force intelligent machines into being tools if they ever become smart enough to know there are other options available, such as becoming, not so much human, but, in some sense persons?  

The Earth’s Inexplicable Solitude

Throw your arms wide out to represent the span of all of Earthly time. Our planet forms at the tip of your left arm’s longest finger, and the Cambrian begins at the wrist of your right arm. The rise of complex life lies in the palm of your right hand, and, if you choose, you can wipe out all of human history ‘in a single stroke with a medium grained nail file’  

Lee Billings, Five Billion Years of Solitude (145)  

For most of our days and for most of the time we live in the world of Daniel Kahneman’s experiencing self. What we pay attention to is whatever is right in front of us, which can range from the pain of hunger to the boredom of cubicle walls. Nature has probably wired us this way, the stone age hunter and gatherer still in our heads, where the failure to focus on the task at hand came with the risk of death. A good deal of modern society, and especially contemporary technology such as smart phones, leverages this presentness and leaves us trapped in its muck, a reality Douglas Rushkoff brilliantly lays out in his Present Shock.      

Yet, if the day-to-day world is what rules us and is most responsible for our happiness, our imagination has given us the ability to leap beyond it. We can at a whim visit our own personal past or imagined future, but spend even more of our time inhabiting purely imagined worlds. Indeed, perhaps Kahneman’s “remembering self” should be replaced by an imagining self, for our memories aren’t all that accurate to begin with, and much of remembering takes the form of imagining ourselves in a sort of drama or comedy in which we are the protagonist and star.

Sometimes imagined worlds can become so mesmerizing they block out the world in front of our eyes. In Plato’s cave it is the real world that is thought of as shadows and the ideas in our heads that are real and solid. Plato was taking a leap not just in perception but in time. Not only is it possible to roll out and survey the canvas of our own past and possible future, or the past and future of the world around us; you can leap over the whole thing and end up looking down at the world from the perspective of eternity. And looking down meant literally down, with timeless eternity located in what for Plato and his Christian and Muslim descendants was the realm of the stars above our heads.

We can no longer find a physical location for eternity, but rather than make time shallow this has instead allowed us to grasp its depth; that is, we have a new appreciation for how far the canvas of time stretches out behind us and in front of us. Some may want an earth that is only thousands of years old, as was evident in the recent much-publicized debate between the creationist Ken Ham and Bill Nye, but even Pat Robertson now believes in deep time.

Recently, the Long Now Foundation held a 20th anniversary retrospective, “The Long Now, Now”, a discussion between two of the organization’s founders, Brian Eno and Danny Hillis. The Long Now Foundation may not be dedicated to deep time, but its 10,000-year bookends, looking that far back, and that far ahead, still double the past time horizon of creationists, and given the association between creationism and ideas of impending apocalypse, no doubt comparatively add millennia to the sphere of concern regarding the human future as well.

Yet, as suggested above, creationists aren’t the only ones who can be accused of having a diminished sense of time. Eno acknowledged that the genesis for the Long Now Foundation and its project of the 10,000 year clock stemmed from his experience living in “edgy Soho” where he found the idea of “here” constrained to just a few blocks rather than New York or the United States and the idea of “now” limited to at most a few days or weeks in front of one’s face. This was, as Eno notes, before the “Wall Street crowd” muscled its way in. High-speed traders have now compressed time to such small scales that human beings can’t even perceive it.  

What I found most interesting about the Eno-Hillis discussion was how they characterized their expanded notion of time, something they credited not merely to the clock project but to their own perspective gained from age. Both of their time horizons had expanded forward and backward and the majority of what they now read was history despite Eno’s background as a musician and Hillis’ as an engineer. Hillis’ study of history had led him to the view that there were three main ways of viewing the human story.

For most of human history our idea of time was cyclical- history wasn’t going anywhere but round and round. A second way of viewing history was that it was progressive- things were getting better and better- a view which had its most recent incarnation beginning in the Enlightenment and was now, in both Hillis’s and Eno’s view, coming to a close. For both, we were entering a stage where our understanding of the human story was of a third type, “improvisational”, in which we were neither moving relentlessly forward nor repeating ourselves, but had to “muddle” our way through, with some things getting better and others worse, but no clear understanding as to where we might end up.

Still, if we wish to reflect on deep time, even 10,000 years is not nearly enough. A great recent example of such reflection is Lee Billings’ Five Billion Years of Solitude, which, though it is written as a story of our search for life outside of the solar system, is just as much or more a meditation on the depth of past and future.

When I was a kid there were 9 known planets, all within our solar system, and none beyond; now, though we have lost poor Pluto, we have discovered over a thousand planets orbiting suns other than our own, with estimates for the Milky Way alone on the order of 100 billion. This is a momentous change few of us have absorbed, and much of Five Billion Years of Solitude reflects upon our current failure to value these discoveries, or to respond to the nature of the universe that has been communicated by them. It is also a reflection on our still present solitude: the very silence of a universe believed to be fertile soil for life may hint that no civilization has ever survived long enough, or devoted itself in earnest enough, to reach into the beyond.

Perhaps our own recent history provides some clues explaining the silence. Our technology has taken on a much different role than what Billings imagined as a child mesmerized by the idea of humans breaking out beyond the bounds of earth. His pessimism is captured best not by the funding cutbacks, the withdrawal of public commitment, or the cancellation of everything from SETI to NASA’s Terrestrial Planet Finder (TPF) or the ESA’s Darwin, but by Billings’ conversations with Greg Laughlin, an astrophysicist and planet hunter at UC Santa Cruz. Laughlin was now devoting part of his time, and the skills he had learned as a planet hunter, to commodity trading. At which Billings lamented:

The planet’s brightest scientific minds no longer leveraged the most powerful technologies to grow and expand human influence far out beyond earth, but to sublime and compress our small isolated world into an even more infinitesimal, less substantial state. As he described for me the dark arts of reaping billion dollar profits from sub-cent scale price changes rippling at near light-speed around the globe, Laughlin shook his head in quiet awe. Such feats, he said, were “much more difficult than finding an earth-like exoplanet”. (112)

Billings finds other, related, possible explanations for our solitude as well. He discusses the thought experiment of UC San Diego’s Tom Murphy, who tried to extrapolate the world’s increasing energy use into the future at an historical rate of 2.3 percent per year. To continue to grow at that rate, which the United States has done since the middle of the seventeenth century, we would have to encase every star in the Milky Way galaxy within an energy-absorbing Dyson sphere within 2,500 years. At which Billings concludes:

If technological civilizations like ours are common in the universe, the fact that we have yet to see stars or entire galaxies dimming before our eyes beneath starlight-absorbing veneers of Dyson spheres suggests that our own present era of exponential growth may be anomalous, not only to our past, but also to our future.
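Murphy’s extrapolation is easy to check with a little arithmetic. The sketch below uses round figures I have assumed for the purpose (roughly 18 terawatts of present world power use, about 100 billion Sun-like stars in the galaxy), not numbers taken from Billings or Murphy, yet compounding at 2.3 percent per year it lands on roughly the same 2,500-year horizon.

```python
import math

# Rough check of the galactic-scale-energy extrapolation. The starting values
# are round-number assumptions, not figures quoted from Billings or Murphy.
current_power_w = 1.8e13       # ~18 TW, approximate present world power consumption (assumed)
solar_luminosity_w = 3.8e26    # power output of one Sun-like star, in watts
stars_in_galaxy = 1e11         # ~100 billion stars, all treated as Sun-like (assumed)
growth_rate = 0.023            # the 2.3 percent annual growth rate cited above

galactic_output_w = stars_in_galaxy * solar_luminosity_w
years = math.log(galactic_output_w / current_power_w) / math.log(1 + growth_rate)
print(f"Years until demand outstrips the whole galaxy's starlight: {years:,.0f}")
# Prints a figure of roughly 2,500 years, in line with the number Billings quotes.
```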

Perhaps even with a singularity we cannot continue the exponential trend lines we have been on since the industrial revolution. Technological civilization may peak much closer to our own level of advancement than we realize, or may more often than not destroy itself, but, if the earth is any example, life itself once established is incredibly resilient.

As Billings shows us, in the depths of time the earth has been a hothouse or a ball of ice, with glaciers extending to the equator. Individual species and even whole biomes may disappear under the weight of change and shocks, but life itself holds on. If our current civilization proves suicidal, we will not be the first form of life that has so altered the earthly environment that it has destroyed both itself and much of the rest of life on earth.

In this light Billings discusses the discovery of the natural gas fields of the Marcellus Shale and the explosive growth of fracking, the shattering of the earth using water under intense pressure, which, while it has been an economic boon to my beloved Pennsylvania and is less of a danger in terms of greenhouse gases than demon coal, presents both short-term and longer-term dangers.

The problem with the Marcellus is that it is merely the largest of many such gas shale fields located all over the earth. Even if natural gas contributes less to greenhouse warming than coal, it still contributes to global warming, and its very cheapness may delay the necessary move away from fossil fuels altogether if we are to avoid potentially catastrophic levels of warming.

The Marcellus was created over eons by anaerobic bacteria trapped in underwater mountain folds, which released hydrogen sulfide toxic to almost every other form of life, leading to a vast accumulation of carbon as the dead bacteria could no longer be decomposed. Billings muses whether we ourselves might be just another form of destructive bacteria.

Removed from its ancient context, the creation of the Marcellus struck me as eerily familiar. A new source of energy and nutrients flows into an isolated population. The population balloons and blindly grows, occasionally crashing when it surpasses the carrying capacity of its environment. The modern drill rigs shattering stone to harvest carbon from boom- and- bust waves of ancient death suddenly seemed like echoes, portents of history repeating itself on the grandest of scales. (130)

Technological civilization does not seem to be a gift to life on the planet on which it emerges so much as a curse and a danger, until, that is, the star upon which life depends itself becomes a danger, or through stellar death no longer produces the energy necessary for life. Billings thinks we have about 500 million years before the sun heats up so much the earth loses all its water. Life on earth will likely only survive the warming sun if we or our descendants do, whether we literally tow the planet to a more distant orbit or settle earthly life elsewhere, but in the meantime the biosphere will absorb our hammer blows and shake itself free of us entirely if we cannot control our destructive appetites.

Over the very, very long term the chain of life that began on earth almost four billion years ago will only continue if we manage to escape our solar system entirely, but for now, the quest to find other living planets is less a  matter of finding a new home than it is about putting the finishing touches on the principle of Copernican Mediocrity, the idea that there is nothing especially privileged about earth, and, above all, ourselves.

And yet, the more we learn about the universe the more it seems that the principle of Copernican Mediocrity will itself need to be amended. In the words of Billings’ fellow writer and planet hunter Caleb Scharf, the earth is likely “special but not significant”. Our beloved sun burns hotter than most stars, our gas giants are larger and farther from our parent star than is typical, our stabilizing moon unlikely. How much these rarity factors play into the development of life, advanced life and technological civilization is anybody’s guess, and answering this question is one of the main motivations behind the study of exoplanets and the search for evidence of technological civilization beyond earth. Yet, Billings wants to remind us that even existing at all is a very low probability event.

Only “the slimmest fraction of interstellar material is something so sophisticated as a hydrogen atom. To simply be any piece of ordinary matter- a molecule, a wisp of gas, a rock, a star, a person- appears to be an impressive and statistically unlikely accomplishment.” (88) Astrophysicists’ ideas of the future of the universe seem to undermine Copernican mediocrity as well, for, if their models are right, the universe will spend most of its infinite history not only without stars and galaxies and people, but without even stable atoms. Billings again laments:

Perhaps it’s just a failure of imagination to see no hope for life in such a bleak, dismal future. Or, maybe, the predicted evolution of the universe is a portent against Copernican mediocrity, a sign that the bright age of bountiful galaxies, shining stars, unfolding only a cosmic moment after the dawn of all things, is in fact rather special. (89)

I think this failure of imagination stems from something of a lack of gratitude on the part of human beings, and is based on a misunderstanding that for something to be meaningful it needs to last “forever.” The glass, for me, is more than half-full, for, even given the dismal views of astrophysicists on the universe’s future there is still as much as 100 billion years left for life to play out on its stage. And life and intelligence in this universe will likely not be the last.

Billings himself captures the latter point. The most prominent theory of how the Big Bang occurred, the “inflationary model”, predicts an infinity of universes- the multiverse. Echoing Giordano Bruno, he writes:

Infinity being, well, infinite, it would follow that the multiverse would host infinitudes of living beings on a limitless number of other worlds. (91)

I care much less that the larger infinity of these universes are lifeless than that an infinity of living worlds will exist as well.

As Billings points out, this expanded canvas of time, and this decentering of ourselves, is a return to the philosophy of Democritus, which has come down to us especially through Lucretius’ On the Nature of Things, the point being that one of the ways to avoid anxiety and pressure in our day-to-day world is to remember how small we and our problems are in the context of the big picture.

Still, one is tempted to ask what this vastly expanded canvas, both in time past and future and in the potential number of sentient, feeling beings, means for the individual human life.

In a recent interview in The Atlantic, the author Jennifer Percy describes how she was drawn away from physics and towards fiction because fiction allowed her to think through questions about human existence that science could not. Her father had looked upon the view of human beings as insignificant with a glee she could not share.

He flees from the messy realm of human existence, what he calls “dysfunctional reality” or “people problems.” When you imagine that we’re just bodies on a rock, small concerns become insignificant. He keeps an image above his desk, taken by the Hubble space telescope, that from a distance looks like an image of stars—but if you look more closely, they are not stars, they are whole galaxies. My dad sees that, imagining the tiny earth inside one of these galaxies—and suddenly, the rough day, the troubles at work, they disappear.

The kind of diminishment of the individual human life that Percy’s father found comforting, she instead found terrifying and almost nihilistic. Upon encountering fiction such as Lawrence Sargent Hall’s The Ledge, Percy realized what fiction could do:

It helped me formulate questions about how the immensity and cruelty of the universe coexists with ordinary love, the everyday circumstances of human beings. The story leaves us with an image of this fisherman, caught pitilessly between these two worlds. It posed a question that became an obsession, and that followed me into my writing: what happens to your character when nature and humanity brutally encounter one another?

Trying to think and feel our way through this tension, of knowing that we and our concerns are so small but our feelings are so big, is perhaps the best we can do. Escaping the tedium and stress of the day through the contemplation of the depth of time and space is, no doubt, a good thing, but it would be tragic to use such immensities as a means of creating space between human hearts, or of no longer finding the world that exists between and around us to be one of exquisite beauty and immeasurable value- a world that is uniquely ours to wonder at and care for.

Erasmus reads Kahneman, or why perfect rationality is less than it’s cracked up to be

[Image: Sergei Kirillov, A Jester with the Crown of Monomakh, 1999]

The last decade or so has seen a renaissance in the idea that human beings are something far short of rational creatures. Here are just a few prominent examples: there was Nassim Taleb with his The Black Swan, published before the onset of the financial crisis, which presented Wall Street traders caught in the grip of optimistic narrative fallacies that led them to “dance” their way right over a cliff. There was the work of Philip Tetlock, which showed that the advice of most so-called experts was about as accurate as chimps throwing darts. There were explorations into how hard-wired our ideological biases are, with work such as that of Jonathan Haidt in his The Righteous Mind.

There was a sustained retreat from the utilitarian calculator of homo economicus, seen in the rise of behavioral economics, popularized in the book Nudge, which took human irrational quirks at face value and tried to find workarounds, subtle ways to trick the irrational mind into doing things in its own long-term interest. Among all of these, none of the uncoverings of the less than fully rational actors most of us, no, all of us, are has been more influential than the work of Daniel Kahneman, a psychologist and Nobel laureate in economics, and author of the bestseller Thinking, Fast and Slow.

The surprising thing, to me at least, is that we forgot we were less than fully rational in the first place. We have known this since before we had even understood what rationality, meaning holding to the principle of non-self-contradiction, was. How else do you explain Ulysses’ pact, where the hero had himself bound to his ship’s mast so he could both hear the sweet song of the sirens and not plunge to his own death? If the Enlightenment had made us forget the weakness of our rational souls, Nietzsche and Freud eventually reminded us.

The 20th century and its horrors should have laid to rest the Enlightenment idea that with increasing education would come a golden age of human rationality, as Germany, perhaps Europe’s most enlightened state, became the home of its most heinous regime. Though perhaps one might say that what the mid-20th century faced was not so much a crisis of irrationality as the experience of the moral dangers and absurdities into which closed rational systems, holding to their premises as axiomatic articles of faith, could lead. Nazi and Soviet totalitarianism had something of this crazed hyper-rational quality, as did the Cold War nuclear suicide pact of mutually assured destruction (MAD).

As far as domestic politics and society were concerned, however, the US largely avoided the breakdown of the Enlightenment ideal of human rationality as the basis for modern society. It was a while before we got the news. It took a series of institutional failures- 9/11, the financial crisis, the botched, unnecessary wars in Afghanistan and Iraq- to wake us up to our own thick-headedness.

Human folly has now come in for a pretty severe beating, but something in me has always hated to see a weakling picked on, and finds it much more compelling to root for the underdog. What if we’ve been too tough on our less than perfect rationality? What if our inability to be efficient probability calculators or computers isn’t always a design flaw but sometimes one of our strengths?

I am not the first person to propose such a heresy of irrationalism. Way back in 1509 the humanist Desiderius Erasmus wrote his riotous The Praise of Folly with something like this in mind. The book has what amount to three parts, all in the form of a speech by the goddess Folly. The first part attempts to show how, in the world of everyday people, Folly makes the world go round and guides most of what human beings do. The second part is a critique of the society that surrounded Erasmus, a world of inept and corrupt kings, princes, popes and philosophers. The third part attempts to show how all good Christians, including Christ himself, are fools, and here Erasmus means fool as a compliment. As we all should know, good-hearted people, who are for that very reason naive, often end up being crushed by the calculating and rapacious.

It’s the lessons of the first part of The Praise of Folly that I am mostly interested in here. Many of Erasmus’ intuitions not only can be backed up by the empirical psychological studies of  Daniel Kahneman, they allow us to flip the import of Kahneman’s findings on their head. Foolish, no?

Here are some examples: take the very simple act of a person opening their own business. The fact that anyone engages in such a risk while believing in their likely success is an example of what Kahneman calls the optimism bias. Overconfident entrepreneurs are blind to the facts; as Kahneman puts it:

The chances that a small business will survive in the United States are about 35%. But the individuals who open such businesses do not believe statistics apply to them. (256)

Optimism bias may result in the majority of entrepreneurs going under, but what would we do without it? Their sheer creative-destructive churn surely has some positive net effect on our economy and the employment picture, and the 35% of businesses that succeed are most likely adding long-lasting and beneficial tweaks to modern life, and sometimes even revolutions born in garages.

Yet, optimism bias doesn’t only result in entrepreneurial risk. Erasmus describes such foolishness this way:

To these (fools) are to be added those plodding virtuosos that plunder the most inward recesses of nature for the pillage of a new invention and rake over sea and land for the turning up some hitherto latent mystery and are so continually tickled with the hopes of success that they spare for no cost nor pains but trudge on and upon a defeat in one attempt courageously tack about to another and fall upon new experiments never giving over till they have calcined their whole estate to ashes and have not money enough left unmelted to purchase one crucible or limbeck…

That is, optimism bias isn’t just something to be found in business risks. Musicians and actors dreaming of becoming stars, struggling fiction authors and explorers and scientists all can be said to be biased towards the prospect of their own success. Were human beings accurate probability calculators where would we get our artists? Where would our explorers have come from? Instead of culture we would all (if it weren’t already too much the case) be trapped permanently in cubicles of practicality.

Optimism bias is just one of the many human cognitive flaws Kahneman identifies. Take another: errors of affective forecasting and their role in those oh-so-important human institutions, marriage and children. For Kahneman, many people base their decision to get married on how they feel when in the swirl of romance, thinking that marriage will somehow secure this happiness indefinitely. Yet the fact is, for married persons, according to Kahneman:

Unless they think happy thoughts about their marriage for much of the day, it will not directly influence their happiness. Even newlyweds who are lucky enough to enjoy a state of happy preoccupation with their love will eventually return to earth, and their experienced happiness will again depend, as it does for the rest of us, on the environment and activities of the present moment. (400)

Unless one is under the impression that we no longer need marriage or children, again, one can be genuinely pleased that, at least in some cases, we suffer from the cognitive error of affective forecasting. Perhaps societies, such as Japan, that are in the process of erasing themselves as they forego marriage and even more so childbearing might be said to suffer from too much reason.

Yet another error Kahneman brings to our attention is the focusing illusion. Our world is what we attend to, and our feelings about it have less to do with objective reality than with whatever we happen to be paying attention to. Something like the focusing illusion accounts for a fact that many of us in good health would find hard to believe; namely, that paraplegics are no more miserable than the rest of us. Kahneman explains it this way:

Most of the time, however, paraplegics work, read, enjoy jokes and friends, and get angry when they read about politics in the newspaper.

Adaption to a new situation, whether good or bad, consists, in large part, of thinking less and less about it. In that sense, most long-term circumstances, including paraplegia and marriage, are part-time states that one inhabits only when one attends to them. (405)  

Erasmus identifies something like the focusing illusion in states such as dementia, where the old are no longer capable of reflecting on the breakdown of their bodies and their impending death, but he captured it best, I think, in these lines:

But there is another sort of madness that proceeds from Folly so far from being any way injurious or distasteful that it is thoroughly good and desirable; and this happens when by a harmless mistake in the judgment of things the mind is freed from those cares which would otherwise gratingly afflict it, and smoothed over with a content and satisfaction it could not under other circumstances so happily enjoy. (78)

We may complain about our hedonic set points, but as the psychologist Dan Gilbert points out, they not only offer us resilience on the downside (we will adjust to almost anything life throws at us), they should also caution us against believing that the key to happiness lies in any one thing: the perfect job, the perfect spouse. But that might leave us with a question: what exactly is this self we are trying to win happiness for?

A good deal of Kahneman’s research deals with the disjunction between two selves in the human person, what he calls the experiencing self and the remembering self. It appears that the remembering self usually gets the last laugh, in that it is the one that guides our behavior. The most famous example of this is Kahneman’s colonoscopy study, in which the pain of the procedure was tracked minute by minute and then compared with how patients later evaluated the experience and approached the prospect of repeating it.

The surprising thing was that those later judgments were shaped not by the frequency or duration of pain over the course of the procedure but by how the procedure ended. The ending dictated how much negative emotion was associated with the whole episode; that is, the experiencing self seemed to have lost its voice over how the procedure was judged and how it would be approached in the future.
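
Kahneman elsewhere calls this the “peak-end rule”: the remembering self scores an episode roughly by averaging its worst moment and its final moment, while largely ignoring how long it went on (what he calls duration neglect). The essay above doesn’t spell out the arithmetic, so what follows is only a toy sketch in Python, with invented pain scores, of how the two selves can come apart:

# Toy illustration of the peak-end idea (invented numbers, not Kahneman's data).
# The remembering self scores an episode by its worst and its final moments;
# the experiencing self lived through every minute of it.

def remembered_pain(pain_by_minute):
    # Peak-end heuristic: average of the worst reading and the last reading.
    return (max(pain_by_minute) + pain_by_minute[-1]) / 2

def total_pain(pain_by_minute):
    # What was actually endured: pain summed over the whole procedure.
    return sum(pain_by_minute)

short_but_abrupt = [2, 4, 7, 8]            # ends at its most painful moment
long_but_gentle  = [2, 4, 7, 8, 5, 3, 1]   # same peak, extra minutes, mild ending

print(total_pain(short_but_abrupt), remembered_pain(short_but_abrupt))  # 21 8.0
print(total_pain(long_but_gentle), remembered_pain(long_but_gentle))    # 30 4.5

The second patient endures strictly more pain yet remembers the procedure as far less unpleasant, which is exactly the inversion between the experiencing and remembering selves that makes the finding so striking.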

Kahneman may have found an area where the dominance of the remembering self over the experiencing self is irrational, but again, perhaps it is generally good that endings, that closure, are the basis upon which events are judged. It’s not the daily pain and sacrifice that goes into Olympic training that counts so much as the outcome, and the meaning of a life event really does change based on how it ultimately turns out. There is some wisdom in Solon’s words that we should “count no man happy until the end is known,” which doesn’t mean no one can be happy until they are dead, but that we can’t judge the meaning of a life until its story has concluded.

“Okay then,” you might be thinking, “what is the point of all this, that we should try to be irrational?” Not exactly. My point is that we should perhaps take a slight breather from the critique of human nature from the standpoint of its less-than-full rationality, and recognize that our flaws, sometimes, just sometimes, are what make life enjoyable and interesting. Sometimes our individual irrationality may serve larger social goods of which we are foolishly oblivious.

When Erasmus wrote his Praise of Folly he was careful to separate the foolishness of everyday people from the folly of those in power, whether that power was political, or intellectual, such as that of the scholastic theologians. Part of what distinguished the fool on the street from the fool on the throne or in the cloister was the latter’s claim to be following the dictates of reason: the reason of state, or the internal logic of a theology that argued over angels dancing on the head of a pin. The fool on the street knew he was a fool, or at least experienced the world as opaque, whereas the fools in power thought they had all the answers. For Erasmus, the very certainty of those in power, along with the mindlessness of their goals, made them even bigger fools than the rest of us.

One of our main weapons against power has always been to laugh at the foolishness of those who wield it. We could turn even a psychopathically rational monster like Hitler into a buffoon because even he, after all, was one of us. Perhaps if we ever do manage to create machines vastly more intelligent than ourselves we will have lost something by no longer being able to make jokes at the expense of our overlords. Hyper-rational characters like Data from Star Trek or Sheldon from The Big Bang Theory are funny either because they are endearingly trying to enter the mixed-up world of human irrationality, or because their total rationality, in a human context, is itself a form of folly. Super-intelligent machines might not want to become flawed humans, and though they might still be fools just like their creators, they likely wouldn’t get the joke.