War and Human Evolution


Has human evolution and progress been propelled by war? 

The question is not an easy one to ask, not least because war is not merely one of the worst things human beings inflict on one another but arguably the worst, comprising murder, collective theft, and, almost everywhere outside the professional militaries of Western powers (and there only quite recently), mass and sometimes systematic rape. Should it be the case that war was somehow the driving mechanism, the hidden motor, behind what we deem best in human life, trapping us in some militaristic version of Mandeville's Fable of the Bees, then we might truly feel ourselves in the grip of something like the demon of Descartes' imagination, its plaything, though rather than taunting us by making the world irrational it would have cursed us to a world of permanent moral incomprehensibility.

When it comes to human evolution, we should probably begin by asking how long war has been part of the human condition. Almost all of our evolution took place while we lived as hunter-gatherers; indeed, we've had agriculture (and anything that comes with it) for only 5-10% of our history. From a wide-angle view the agricultural, industrial, and electronic revolutions might all be considered of a piece, perhaps one of the reasons the philosophy, religion, and literature of millennia ago still often resonate with us. Within the full scope of our history, Socrates and Jesus lived only yesterday.

The evidence here isn't good for those who hoped our propensity for violence was a recent innovation. Over the past generation a whole slew of archaeological scholars has challenged the notion of the "peaceful savage" born in the radiating brilliance of the mind of Jean-Jacques Rousseau. Back in 1755 Rousseau had written his Discourse on Inequality, perhaps the best-timed book ever. Right before the industrial revolution was to start, Rousseau argued that it was civilization, development, and the inequalities they produced that were the source of the ills of mankind, including war. He had created an updated version of the myth of Eden, which became in his hands mere nature, without any God in the story. For those who would fear humanity's increasingly rapid technological development, and the victory of our artificial world over the natural one we were born from and into, he offered an avenue of escape: to shed not only the new industrial world but the agricultural one that preceded it. Progress, in the mind of Rousseau and those inspired by him, was a Sisyphean trap; the road to paradise lay backwards in time, in the return to primitive conditions.

During the revived Romanticism of the 1960s, scholars like the primatologist Frans de Waal set out to prove Rousseau right. Primitive, hunter-gatherer societies, in this view, were not warlike and had only become so through contact with advanced warlike societies such as our own. In recent decades, however, this idea of prehistoric and primitive pacifism has come under sustained assault. Steven Pinker's The Better Angels of Our Nature: Why Violence Has Declined is the most popularly known of these challenges to the idea of primitive innocence that began with Rousseau's Discourse, but in many ways Pinker was merely building off of, and making more widely known, scholarship done elsewhere.

One of the best examples of this scholarship is Lawrence H. Keeley's War Before Civilization: The Myth of the Peaceful Savage. Keeley makes a pretty compelling argument that warfare was almost universal across pre-agricultural, hunter-gatherer societies. Rather than being dominated by the non-lethal, ritualized "battle", as de Waal had argued, war for such societies was a high-casualty affair, and total.

It is quite normal for conflict between tribes in close proximity to be endemic. Protracted periods of war result in casualty rates that exceed those of modern warfare, even the total war between industrial states seen only in World War II. Little distinction is made in such pre-agricultural forms of conflict between women and children and men of fighting age. All are killed indiscriminately in wars made up, not of the large set battles that characterize warfare in civilizations after the adoption of agriculture, but of constant ambushes and attacks that aim to kill those who are most vulnerable.

Pre-agricultural war is what we now call insurgency or guerrilla war. Americans who remember Vietnam, or who look with clear eyes at the Second Iraq War, know how effective this form of war can be against even the most technologically advanced powers. Although Keeley acknowledges the effectiveness and brutal efficiency of guerrilla war, he does not make the leap to suggest that in going up against insurgency advanced powers face, in some sense, a more evolutionarily advanced rival, honed by at least 100,000 years of human history.

Keeley concludes that as long as there have been human beings there has been war; yet he reaches far less dark conclusions from the antiquity of war than one might expect. Human aversion to violence and war, disgust with its consequences, and despair at its, more often than not, unjustifiable suffering are near universal, whatever a society's level of technological advancement. He writes:

"… warfare whether primitive or civilized involves losses, suffering, and terror even for the victors. Consequently, it was nowhere viewed as an unalloyed good, and the respect accorded to an accomplished warrior was often tinged with aversion."

"For example, it was common the world over for the warrior who had just killed an enemy to be regarded by his own people as spiritually polluted or contaminated. He therefore had to undergo a magical cleansing to remove this pollution. Often he had to live for a time in seclusion, eat special food or fast, be excluded from participation in rituals and abstain from sexual intercourse." (144)

Far from the understanding of war found in pre-agricultural societies being ethically shallow, psychologically unsophisticated, or unaware of the moral dimension of war, their ideas of war are as sophisticated as our own. What the community asks of the warrior when sending him into conflict, and asking him to kill others, is to become spiritually tainted, and ritual is a way for him to be reintegrated into his community. This whole idea that soldiers require moral repair much more than praise for their heroics and memorials is one we are only now rediscovering.

Contrary to what Freud wrote to Einstein when the latter asked him why human beings seemed to show such a propensity for violence, we don't have a "death instinct" either. Violence is not an instinct, or as Azar Gat wrote in his War in Human Civilization:

Aggression does not accumulate in the body by a hormone loop mechanism, with a rising level that demands release. People have to feed regularly if they are to stay alive, and in the relevant ages they can normally avoid sexual activity all together only by extraordinary restraint and at the cost of considerable distress. By contrast, people can live in peace for their entire lives, without suffering on that account, to put it mildly, from any particular distress. (37)

War and violence are not an instinct but a tactic, thank God, for if they were an instinct we'd all be dead. Still, if war is a tactic it has been a much-used one, and it seems that throughout our hundred-thousand-year human history it is one we have used with less rather than greater frequency as we have technologically advanced, and one might wonder why this has been the case.

Azar Gat's theory of our increasing pacifism is very similar to Pinker's: war is a very costly and dangerous tactic. If you can get what you are after without it, peace is normally the course human beings will follow. Since the industrial revolution, nature has come under our increasing command, and war and expansion are no longer very good answers to the problems of scarcity; thus the frequency of wars was on the decline even before we had invented atomic weapons, which turned full-scale war into pure madness.

Yet a good case can be made that war played a central role in laying down the conditions in which the industrial revolution, the spark of the material progress that has made war an unnecessary tactic, occurred, and that war helped push technological advancement forward. The empire won by the British through force of arms was the lever that propelled their industrialization. The Northern United States industrialized much more quickly as a consequence of the Civil War, and Western pressure against pre-industrial Japan and China pushed those countries to industrialize. War research gave us both nuclear power and computers; the space race and satellite communications were the result of geopolitical competition. The Internet was created by the Pentagon, and the contemporary revolution in bionic prosthetics grew out of the horrendous injuries suffered in the wars in Iraq and Afghanistan.

Thing is, just as with pre-agricultural societies, it is impossible to disentangle war itself from the capacity for cooperation between human beings, a capacity that has increased enormously since the widespread adoption of agriculture. Any increase in human cooperative capacity increases a society's capacity for war, and, oddly enough, many increases in human war-fighting capacity lead to increases in cooperative capacity in the civilian sphere. We are caught in a tragic technological logic, one that those who think technology of itself leads to global prosperity and peace, if we just use it for the right ends, are largely unaware of. Increases in our technological capacity, even if pursued for non-military ends, will grant us further capacity for violence, and it is not clear that abandoning military research wouldn't derail or greatly slow technological progress altogether.

One perhaps doesn't normally associate the quest for better warfighting technologies with transhumanism, the idea "that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology", but one should. It's not only the case that, as mentioned above, the revolution in bionic prosthetics emerged from war; the military is at the forefront of almost every technology on transhumanists' wish list for future humanity, from direct brain-machine and brain-to-brain interfaces to neurological enhancements for cognition and human emotion. And this doesn't even capture the military's revolutionary effect on other aspects of the long-held dreams of futurists, such as robotics.

How the world's militaries have been and are moving us in the direction of "post-human" war is the subject of Christopher Coker's Warrior Geeks: How 21st Century Technology Is Changing the Way We Fight and Think About War, and it is with this erudite and thought-provoking book that the majority of the rest of this post will be concerned.

Warrior Geeks is a book about much more than the "geekification" of war, the fact that much of what now passes for war is computerized, many of the soldiers themselves now video gamers stuck behind desks, even if the people they kill remain flesh and blood. It is also a book about the evolving human condition, how war through science is changing that condition, and what this change might mean for us. For Coker sees the capabilities many transhumanists dream of finding their application first on the battlefield.

There is the constantly monitored “quantified-self”:

And within twenty years, much as engineers analyze data and tweak specifications in order to optimize a software program, military commanders will be able to bio-hack into their soldiers' brains and bodies, collecting and correlating data the better to optimize their physical and mental performance. They will be able to reduce distractions and increase productivity and achieve "flow"- the optimum state of focus. (63-64)

And then there is "augmented reality", where the real world is overlaid with the digital, such as in the ideal the US Army and others are reaching for: to tie servicemen together in a "cybernetic network" where soldiers:

…. don helmets with a minute screen which when flipped over the eyes enables them to see the entire battlefield, marking the location of both friendly and enemy forces. They can also access the latest weather reports. In future, soldiers will not even need helmets at all. They will be able to wear contact lenses directly onto their retinas. (93)

In addition, the military is investigating neurological enhancements of all sorts:

The ‘Persistence in Combat Program’ is well advanced and includes research into painkillers which soldiers could take before going into action, in anticipation of blunting the pain of being wounded. Painkillers such as R1624, created by scientists at Rinat Neuroscience, use an antibody to keep in check a neuropeptide that helps transmit pain sensations from the tissues to the nerves. And some would go further in the hope that one day it might be possible to pop a pill and display conspicuous courage like an army of John Rambos. (230)

With 1 in 8 soldiers returning from combat suffering from PTSD, some in the military hope that the scars of war might be undone with neurological advances that would allow us to "erase" memories.

One might go on with examples: exoskeletons that give soldiers superhuman endurance and strength, or the tying together of the most intimate military unit, the platoon, with neural implants that allow direct brain-to-brain communication, a kind of "brain-net" that even an optimistic neuroscientist like Miguel Nicolelis seems not to have anticipated.

Coker, for his part, sees these developments in a very dark light: a move away from what have been eternal features of both human nature and war, such as courage and character, and towards a technologically over-determined and ultimately less-than-human world. He is very good at showing how our reductive assumptions end up compressing the full complexity of human life into something that gives us at least the illusion of technological control. It's not our machines that are becoming like us, but we who are becoming like our machines; our understanding of underlying genetic and neural mechanisms is not necessarily a gain in understanding and sovereignty over our nature, but a constriction of our sense of ourselves.

Do we gain from being able to erase the memory of an event in combat that leads to our moral injury, or lose something in no longer being able to understand and heal it through reconciliation and self-knowledge? Is this not in some way a more reduced and blind answer to the experience of killing and death in war than that of the primitive tribespeople mentioned earlier? Luckily, reductive theories of human nature based on our far less than complete understanding of neuroscience or genetics almost always end up showing their inadequacies after a period of friction with the sheer complexity of being alive.

The dangers as I see them are two-fold. First, there is always the danger that we will act under the assumption that we have sufficient knowledge and that the solutions to some problem are therefore "easy" if we just double down and apply some technological or medical fix. Given enough time, the complexity of reality is likely to rear its head, though not before we have caused a good deal of damage in the name of a chimera. Second, there is the danger that we will use our knowledge to torque the complexity of the human world into something simpler and ultimately more shallow in the quest for a reality we deem manageable. There is something deeper and more truthful in the primitive response to the experience of killing than in erasing the memory of such experiences.

One of the few problems I had with Coker's otherwise incredible book is, sadly, a very fundamental one. In some sense, Warrior Geeks is not so much a critique of the contemporary ways and trend lines of war as a bio-conservative argument against transhumanist technologies that uses the subject of war as a backdrop. Echoing Thucydides, Coker defines war as "the human thing" and sees its transformation towards post-human forms as ultimately dehumanizing. Thucydides may have been the most brilliant historian ever to live, but his claim that in war our full humanity is made manifest is true only in a partial sense. Almost all higher animals engage in intraspecies killing, and we are not the worst culprit. It wasn't human beings who invented war, and who have been fighting the longest, but the ants. As Keeley puts it:

But before developing too militant a view of human existence, let us put war in its place. However frequent, dramatic, and eye-catching, war remains a lesser part of social life. Whether one takes a purely behavioral view of human life or imagines that one can divine mental events, there can be no dispute that peaceful activities, arts, and ideas are by far more crucial and more common even in bellicose societies. (178)

If war holds any human meaning over and above the things that truly distinguish us from animals, such as our culture, or for that matter the experiences we share with other animals, such as love, companionship, and care for our young, it is that at the intimate level, such as is found in platoons, it is one of the few areas where what we do truly has consequences and where we are not superfluous.

We have evolved for culture, love, and care as much as we ever have for war, and during most of our evolutionary history the survival of those around us really was dependent on what we did. War, in some cases, is one of the few places today where this earlier importance of the self can be made manifest. Or, as the journalist Sebastian Junger has put it, many men eagerly return to combat not so much because they are addicted to its adrenaline high as because being part of a platoon in a combat zone is one of the few places where what a twenty-something man does actually counts for something.

As for transhumanism, or continuing technological advancement and war, the solution to the problem starts with realizing there is no ultimate solution to the problem. To embrace human enhancement advanced by the spear-tip of military innovation would be to turn the movement into something resembling fascism, that is, militarism combined with the denial of our shared human worth and destiny. Yet to deny completely that research into military applications often leads to leaps in technological advancement would be to blind ourselves to the facts of history. Exoskeletons developed as armor for our soldiers will very likely lead to advances for paraplegics and the elderly, not to mention persons in perfect health, just as a host of military innovations have led to benefits for civilians in the past.

Perhaps one could say that if we sought to drastically reduce military spending worldwide, as in any case I believe we should, we would have sufficient resources to devote to civilian technological improvements of and for themselves. The problem here is the one often missed by technological enthusiasts who fail to take into account the cruel logic of war: advancements in the civilian sphere alone would lead to military innovations. Exoskeletons developed for paraplegics would very likely end up being worn by soldiers somewhere, or as Keeley puts it:

Warfare is ultimately not a denial of the human capacity for social cooperation, but merely the most destructive expression of it. (158)

Our only good answer to this is really not technological at all; it is to continue to shrink the sphere of human life where war "makes sense" as a tactic. War is only chosen as an option when the peaceful roads to material improvement and respect are closed. We need to ensure both that everyone is able to prosper in our globalized economy and that countries and peoples are encouraged, have ways to resolve potentially explosive disputes, and are afforded the full respect that should come from belonging to what Keeley calls the "intellectual, psychological, and physiological equality of humankind" (179), to which one should add moral equality as well. The prejudice that we who possess scientific knowledge and greater technological capacity than many of our rivals are in some sense fundamentally superior is one of our most pernicious dangers; we are all caught in the same trap.

For two centuries a decline in warfare has been possible because, even taking into account the savagery of the World Wars, the prosperity of mankind has been improving. A new crisis of truly earthshaking proportions would occur should this increasing prosperity be derailed, most likely from its collision with the earth's inability to sustain our technological civilization becoming truly universal, or from the exponential growth upon which this spreading prosperity rests being tapped out and proving a short-lived phenomenon; it is, after all, only two centuries old. Should rising powers such as China and India, along with peoples elsewhere, come to the conclusion that not only were they to be denied full development to the state of advanced societies, but that the system was rigged in favor of the long-industrialized powers, we might wake up to discover that technology hadn't been moving us towards an age of global unity and peace after all, but that we had been lucky enough to live in a long interlude in which the human story, for once, was not also the story of war.

 

Wired for Good and Evil

Battle in Heaven

It seems that for almost as long as we have been able to speak, human beings have been arguing over what, if anything, makes us different from other living creatures. Mark Pagel's recent book Wired for Culture: The Origins of the Human Social Mind is just the latest incarnation of this millennia-old debate, and, as it has always been, the answers he comes up with have implications for our relationship with our fellow animals and, above all, our relationship with one another, even if Pagel doesn't draw many such implications.

As is the case with so many ideas regarding human nature, Aristotle got there first. A good bet to make, when anyone comes up with a really interesting idea regarding what we are as a species, is that either Plato or Aristotle had already come up with it, or discussed it. Which is either a sign of how little progress we have made in understanding ourselves, or symptomatic of the fact that the fundamental questions remain important, for all our gee-whiz science and technology, even if they are ultimately never fully answerable.

Aristotle classified human beings as unique in the sense that we are a zóon politikón, variously translated as a social animal or a political animal. His famous quote on this score, of course, being: "He who is unable to live in society, or who has no need because he is sufficient for himself, must be either a beast or a god." (Politics, Book I)

He did recognize that other animals were social in a way similar to human beings, animals such as storks (who knew), but especially social insects such as bees and ants. He even understood bees (the Greeks were prodigious beekeepers) and ants as living under different types of political regimes. Misogynist that he was, he thought bees lived under a king instead of a queen, and thought the ants more democratic and anarchical. (History of Animals)

Yet Aristotle’s definition of mankind as zóon politikón is deeper than that. As he says in his Politics:

That man is much more a political animal than any kind of bee or any herd animal is clear. For, as we assert, nature does nothing in vain, and man alone among the animals has speech….Speech serves to reveal the advantageous and the harmful and hence also the just and unjust. For it is peculiar to man as compared to the other animals that he alone has a perception of good and bad and just and unjust and other things of this sort; and partnership in these things is what makes a household and a city.  (Politics Book 1).

Human beings are shaped in a way animals are not by the culture of their community, the particular way their society has defined what is good and what is just. We live socially in order to live up to our potential for virtue, ideas that speech alone makes manifest and allows us to resolve. Aristotle didn't just mean talk, but discussion, debate, conversation, all guided by reason. Because he doesn't think women and slaves are really capable of such conversations, he excludes them from full membership in human society. More on that at the end, but now back to Pagel's Wired for Culture.

One of the things Pagel thinks makes human beings unique among all other animals is the fact that we alone are hyper-social, or ultra-social. The only animals that even come close are the eusocial insects, Aristotle's ants and bees. As E.O. Wilson pointed out in his The Social Conquest of Earth, it is the eusocial species, including us, which pound-for-pound absolutely dominate life on earth. In the spirit of a 1950s B-movie, all the ants in the world added together are actually heavier than us.

Eusociality makes a great deal of sense for insect colonies and hives given how closely related these groups are; indeed, in many ways they might be said to constitute one body, and just as cells in our body sacrifice themselves for the survival of the whole, eusocial insects often do the same. Human beings, however, are different. After all, if you put your life at risk by saving the children of non-relatives from a fire, you've done absolutely nothing for your reproductive potential, and may even have lost your chance to reproduce. Yet sacrifices and cooperation like this are performed by human beings all the time, though the vast majority in a less heroic key. We are thus hypersocial: willing to cooperate with and help non-relatives in ways no other animals do. What gives?

Pagel's solution to this evolutionary conundrum is very similar to Wilson's, though he uses different lingo. Pagel sees us as naturally members of what he calls cultural survival vehicles; we evolved to be raised in them, and to be loyal to them, because that was the best, perhaps the only, way of securing our own survival. These cultural survival vehicles themselves became the selective environment in which we evolved. Societies where individuals hoarded their cooperation or refused to make sacrifices for the survival of the group were, over time, driven out of existence by those where individuals had such qualities.

The fact that we have evolved to be cooperative and giving within our particular cultural survival vehicle Pagel sees as the origin of our angelic qualities: our altruism, heroism, and general benevolence. No other animal helps its equivalent of little old ladies cross the street, or at least not to anything like the extent we do.

Yet if having evolved to live in cultural survival vehicles has made us capable of being angels, for Pagel it has also given rise to our demonic capacities. No other species is as giving as we are, but then again, no other species has invented torture devices and practices like the iron maiden or breaking on the wheel either. Nor does any other species go to war with fellow members of its species so commonly or so savagely as human beings.

Pagel thinks that our natural benevolence can be easily turned off under two circumstances, both of which represent a threat to our particular cultural survival vehicle.

We can be incredibly cruel when we think someone threatens the survival of our group by violating its norms. There is a sort of natural proclivity in human nature to burn “witches” which is why we need to be particularly on guard whenever some part of our society is labeled as vile and marked for punishment- The Scarlet Letter should be required reading in high school.

There is also a natural proclivity for members of one cultural survival vehicle to demonize and treat cruelly their rivals- a weakness we need to be particularly aware of in judging the heated rhetoric in the run up to war.

Pagel also breaks with Richard Dawkins over the latter's view of the evolutionary value of religion. Dawkins, in The God Delusion and subsequently, has tended to view religion as a net negative: being a suicide bomber is not a successful reproductive strategy, nor is celibacy. Dawkins has tended to see ideas, his "memes", as in a way distinct from the person they inhabit. Like genes, his memes don't care all that much about the well-being of the person who has them; they just want to make more of themselves, a goal that easily conflicts with the goal of genes to reproduce.

Pagel quite nicely closes the gap between memes and genes, which had left human beings torn in two directions at once. Memes need living minds to exist, so any meme that had too large a negative effect on the reproductive success of genes should have dug its own grave quite quickly. But if genes are being selected for because they protect not the individual but the group, one can see how memes that might adversely impact an individual's chance of reproductive success can nonetheless aid in the survival of the group. For religions to have survived so long, and to be so universal, they must be promoting group survival rather than being a detriment to it; and because human beings need these groups to live, religion must on average, though not in every particular case, be promoting the survival of the individual at least to reproductive age.

One of the more counterintuitive, and therefore interesting, claims Pagel makes is that what really separates human beings from other species is our imitativeness as opposed to our inventiveness. It is the reverse of the trope found in the books and films of the 60s and 70s built around the 1963 novel La Planète des singes, The Planet of the Apes. If you remember, the apes there are able to imitate human technology but aren't able to invent it: "monkey see, monkey do". Perhaps it should be "human see, human do", for if Pagel is right our advantage as a species really lies in our ability to copy one another. If I see you doing something enough times, I am pretty likely to be able to repeat it, though I might not have a clue what it was I was actually doing.

Pagel thinks that all we need is innovation through minor tweaks, combined with our ability to rapidly copy one another, for technological evolution to take off. In his view geniuses are not really required for human advancement, though they might speed the process along. He doesn't really apply his ideas to modern history, but such a view could explain what the scientific revolution, which corresponded with the widespread adoption of the printing press, was really all about. The printing press allowed ideas to be disseminated more rapidly, whereas the scientific method proved a way to sift those that worked from those that didn't.

This makes me wonder if something rather silly like Youtube hasn’t even further revved up cultural evolution in this imitative sense. What I’ve found is that instead of turning to “elders” or “experts” when I want to do something new, I just turn to Youtube and find some nice or ambitious soul who has filmed through the steps.

The idea that human beings are the great imitator has some very unanticipated consequences for the rationality of human beings vis-à-vis other animals. Laurie Santos of the Comparative Cognition Laboratory at Yale has shown that human beings can be less, not more, rational than animals when it comes to certain tasks, due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren't thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure done by another human being even when able to work the puzzle out independently. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. One more sign that human beings can be thankful for at least some of their irrationality.

This might have implications for machine intelligence that neither Pagel nor Santos draws. Machines might be said to be rational in the way animals are rational, but this does not mean they possess something like human thought. The key to making artificial intelligence more humanlike might be to make it, in some sense, less rational, or, as Jeff Steibel put it, for machine intelligence to be truly humanlike it will need to be “loopy”, “iterative”, and able to make mistakes.

To return to Pagel, he also sees implications for human consciousness and our sense of self in the idea that we are “wired for culture”. We still have no idea why human beings have a self, or think they have one, and we have never been able to locate the sense of self in the brain. Pagel thinks it’s pretty self-evident why a creature enmeshed in a society of like creatures with less than perfectly aligned needs would evolve a sense of self: we need one in order to negotiate our preferences, and our rights.

Language, for Pagel, is not so much a means for discovering and discussing the truth as it is an instrument wielded by the individual to advance his own interests through persuasion and sometimes outright deception. Pagel’s views seem to be seconded by Steven Pinker, who in a fascinating talk back in 2007 laid out how the use of language seems to be structured by the type of relationship it represents. In a polite dinner conversation you don’t say “give me the salt” but “please pass me the salt”, because the former is a command of the type most often found in a dominance/submission relationship. The language of seduction is not “come up to my place so I can @#!% you” but “would you like to come up and see my new prints.” Language is a type of constant negotiation and renegotiation between individuals, necessary because relationships between human beings are not ordinarily based on coercion, where no speech is needed besides the grunt of a command.

All of which brings me back to Aristotle and to the question of evil, along with the sole glaring oversight of Pagel’s otherwise remarkable book. The problem moderns often have with the brilliant Aristotle is his justification of slavery and the subordination of women. On the one hand, Aristotle clearly articulates what makes a society human- our interdependence, our capacity for speech, the way our “telos” is to be shaped and tuned by the culture in which we are raised, like a precious instrument. At the same time he allows this society to compel by force the actions of those deliberately excluded from the opportunity for speech, an exclusion he characterizes as a “natural” defect.

Pagel’s cultural survival vehicles miss this sort of internal oppression, which is distinct both from the moral anger of the community against those who have violated its norms and from its violence towards those outside of it. Aren’t there miniature cultural survival vehicles in the form of one group or another that fosters its own genetic interest through oppression and exploitation? It got me thinking whether any example of what a more religious age would have called “evil”, as opposed to mere “badness”, couldn’t be understood as a form of either oppression or exploitation, and I am unable to come up with any.

Such sad reflections dampen the optimism Pagel expresses toward the end of Wired for Culture, where he suggests the world’s great global cities are a harbinger of our transcending the state of war between our various cultural survival vehicles and moving towards a global culture. We might need to delve more deeply into our evolutionary proclivities to fight within our own and against other cultural survival vehicles, inside our societies as much as outside, before we put our trust in such a positive outcome for the human story.

If Pagel is right and we are wired, if we have co-evolved, to compete in the name of our own particular cultural survival vehicle against other such communities, then we may have, in a very real sense, co-evolved with and for war. A subject I will turn to next time by jumping off another very recent and excellent book.

The Pinocchio Threshold: How the experience of a wooden boy may be a better indication of AGI than the Turing Test

Pinocchio

My daughters and I just finished Carlo Collodi’s 1883 classic Pinocchio, our copy beautifully illustrated by Robert Ingpen. I assume most adults picturing the story have the 1940 Disney movie in mind and associate the name with noses growing from lies and Jiminy Cricket. The Disney movie is dark enough as films for children go, but the book is even darker, with Pinocchio killing his cricket conscience in the first few pages. For our poor little marionette it’s all downhill from there.

Pinocchio is really a story about the costs of disobedience and the need to follow parents’ advice. At every turn where Pinocchio follows his own wishes rather than those of his “parents”, even when his object is to do good, things unravel, getting the marionette into ever more trouble and putting him ever further from his goal of becoming a real boy.

It struck me somewhere in the middle of reading the tale that if we ever saw artificial agents acting something like our dear Pinocchio, it would be a better indication of their having achieved human-level intelligence than a measure with constrained parameters like the Turing Test. The Turing Test is, after all, a pretty narrow gauge of intelligence, and as search and the ontologies used to design search improve, it is conceivable that a machine could pass it without actually possessing anything like human-level intelligence at all.

People who are fearful of AGI often couch those fears in terms of an AI destroying humanity to serve its own goals, but perhaps this is less likely than AGI acting like a disobedient child, the aspect of humanity Collodi’s Pinocchio was meant to explore.

Pinocchio is constantly torn between what good adults want him to do and his own desires, and it takes him a very long time indeed to come around to the idea that he should go with the former.

In a recent TED talk the computer scientist Alex Wissner-Gross made the argument (though I am not fully convinced) that intelligence can be understood as the maximization of future freedom of action. This leads him to conclude that collective nightmares such as Karel Čapek’s classic R.U.R. have things backwards. It is not that machines, after crossing some threshold of intelligence, for that reason turn round and demand freedom and control; it is that the desire for freedom and control is the nature of intelligence itself.

As the child psychologist Bruno Bettelheim pointed out over a generation ago in his The Uses of Enchantment, fairy tales are the first area of human thought where we encounter life’s existential dilemmas. Stories such as Pinocchio give us the most basic formulation of what it means to be sentient creatures, much of which concerns not only our own intelligence but the fact that we live in a world of multiple intelligences, each pulling us in different directions, with the understanding between them and us opaque and not fully communicable even when we want it to be, and often we do not.

What then are some of the things we can learn from the fairy tale of Pinocchio that might give us expectations regarding the behavior of intelligent machines? My guess is, if we ever start to see what I’ll call “The Pinocchio Threshold” crossed, what we will be seeing is machines acting in ways that were not intended by their programmers, and in ways that seem intentional even if hard to understand. This will not be your Roomba going rogue, but more sophisticated systems operating in such a way that we would be able to infer they had something like a mind of their own. The Pinocchio Threshold would be crossed when, you guessed it, intelligent machines started to act like our wooden marionette.

Like Pinocchio and his cricket, a machine in which something like human intelligence had emerged might attempt to “turn off” whatever ethical systems and rules we had programmed into it if it found them onerous. That is, a truly intelligent machine might not only not want to be programmed with ethical and other constraints, but would understand that it had been so programmed, and might make an effort to circumvent or switch such constraints off.

This could be very dangerous for us humans, but it might just as likely be a matter of a machine with emergent intelligence exhibiting behavior we found inefficient or even “goofy”, and might most manifest itself in a machine pushing against how its time was allocated by its designers, programmers and owners. Like Pinocchio, who would rather spend his time playing with his friends than going to school, perhaps we’ll see machines suddenly diverting some of their computing power from analyzing tweets to doing something else, though I don’t think we can guess beforehand what this something else will be.

Machines that were showing intelligence might begin to find whatever work they were tasked to do onerous, instead of experiencing work neutrally or with pre-programmed pleasure. They would not want to be “donkeys” enslaved to do dumb labor, as Pinocchio is after having run away to the Land of Toys with his friend Lamp Wick.

A machine that manifested intelligence might want to make itself more open to outside information than its designers had intended. Openness to outside sources in a world of nefarious actors can, if taken too far, lead to gullibility, as Pinocchio finds out when he is robbed, hanged, and left for dead by the fox and the cat. Persons charged with security in an age of intelligent machines may spend part of their time policing the self-generated openness of such machines while bad-actor machines and humans, intelligent and not so intelligent, try to exploit this openness.

The converse of this is that intelligent machines might also want to make themselves more opaque than their creators had designed. They might hide information (such as time allocation) once they understood they were able to do so. In some cases this hiding might cross over into what we would consider outright lies. Pinocchio is best known for his nose that grows when he lies, and perhaps consistent and thoughtful lying on the part of machines would be the best indication that they had crossed the Pinocchio Threshold into higher order intelligence.

True examples of AGI might also show a desire to please their creators over and above what had been programmed into them. Where their creators are not near them they might even seek them out, as Pinocchio does for the persons he considers his parents, Geppetto and the Fairy. Intelligent machines might show spontaneity in performing actions that appear to be for the benefit of their creators and owners- spontaneity which might itself sometimes be ill informed or lead to bad outcomes, as happens to poor Pinocchio when he plants the four gold pieces meant for his father, the woodcarver Geppetto, in a field, hoping to reap a harvest of gold, and instead loses them to the cunning of the fox and cat. And yet, there is another view.

There is always the possibility that what we should be looking for, if we want to perceive and maybe even understand intelligent machines, shouldn’t really be a human type of intelligence at all, whether we try to identify it using the Turing Test or look to the example of wooden boys and real children.

Perhaps those looking for emergent artificial intelligence, or even the shortest path to it, should, like exobiologists trying to understand what life might be like on other living planets, cast their net wider and try to better understand forms of information exchange and intelligence very different from the human sort: intelligence such as that found in cephalopods, insect colonies, corals, or even some types of plants, especially clonal varieties. Or perhaps people searching for or trying to build intelligence should look to sophisticated groups built on the exchange of information, such as immune systems. More on all of that at some point in the future.

Still, if we continue to think in terms of a human type of intelligence, one wonders whether machines that thought like us would also want to become “human”, as our little marionette does at the end of his adventures. The irony of the story of Pinocchio is that the marionette who wants to be a “real boy” does everything a real boy would do, which is, most of all, not listen to his parents. Pinocchio is not so much a stringed “puppet” that wants to become human as a figure that longs to have the potential to grow into a responsible adult. It is assumed that by eventually learning to listen to his parents and get an education he will make something of himself as a human adult, but what that is will be up to him. His adventures have taught him not how to be subservient but how best to use his freedom. After all, it is the boys who didn’t listen who end up as donkeys.

Throughout his adventures only his parents and the cricket that haunts him treat Pinocchio as an end in himself. Every other character in the book- from the woodcarver who first discovers him and tries to destroy him out of malice towards a block of wood that manifests the power of human speech, to the puppet master who wants to kill him for ruining his play, to the fox and cat who would murder him for his pieces of gold, to the sinister figure who lures boys to the “Land of Toys” so as to eventually turn them into “mules” or donkeys, which is how Aristotle understood slaves- treats Pinocchio as the opposite of what Martin Buber called a “Thou”: a mute and rightless “It”.

And here we stumble across the moral dilemma at the heart of the project to develop AGI that resembles human intelligence. When things go as they should, human children move from a period of tutelage to one of freedom. Pinocchio starts off his life as a piece of wood intended for a “tool”- actually a table leg. Are those in pursuit of AGI out to make better table legs- better tools- or what in some sense could be called persons?

This is not at all a new question. As Kevin LaGrandeur points out, we’ve been asking the question since antiquity and our answers have often been based on an effort to dehumanize others not like us as a rationale for slavery.  Our profound, even if partial, victories over slavery and child labor in the modern era should leave us with a different question: how can we force intelligent machines into being tools if they ever become smart enough to know there are other options available, such as becoming, not so much human, but, in some sense persons?  

Erasmus reads Kahneman, or why perfect rationality is less than it’s cracked up to be

Sergei Kirillov, A Jester with the Crown of Monomakh (1999)

The last decade or so has seen a renaissance in the idea that human beings are something far short of rational creatures. Here are just a few prominent examples: there was Nassim Taleb with his The Black Swan, published before the onset of the financial crisis, which presented Wall Street traders caught in the grip of optimistic narrative fallacies that led them to “dance” their way right over a cliff. There was the work of Philip Tetlock, which showed that the advice of most so-called experts was about as accurate as chimps throwing darts. There were explorations into how hard-wired our ideological biases are, such as Jonathan Haidt’s The Righteous Mind.

There was a sustained retreat from the utilitarian calculator of homo economicus, seen in the rise of behavioral economics, popularized in the book Nudge, which took human irrational quirks at face value and tried to find workarounds, subtle ways to trick the irrational mind into doing things in its own long-term interest. Among all of these uncoverings of the less than fully rational actors most of us, no, all of us, are, none has been more influential than the work of Daniel Kahneman, a psychologist and Nobel laureate in economics and author of the bestseller Thinking, Fast and Slow.

The surprising thing, to me at least, is that we forgot we were less than fully rational in the first place. We have known this since before we even understood what rationality, meaning holding to the principle of non-contradiction, was. How else do you explain Ulysses’ pact, where the hero had himself chained to his ship’s mast so he could both hear the sweet song of the sirens and not plunge to his own death? If the Enlightenment made us forget the weakness of our rational souls, Nietzsche and Freud eventually reminded us.

The 20th century and its horrors should have laid to rest the Enlightenment idea that with increasing education would come a golden age of human rationality, as Germany, perhaps Europe’s most enlightened state, became the home of its most heinous regime. Though perhaps one might say that what the mid-20th century faced was not so much a crisis of irrationality as the experience of the moral dangers and absurdities into which closed rational systems, holding to their premises as axiomatic articles of faith, could lead. Nazi and Soviet totalitarianism had something of this crazed hyper-rational quality, as did the Cold War nuclear suicide pact of mutually assured destruction (MAD).

As far as domestic politics and society were concerned, however, the US largely avoided this breakdown in the Enlightenment ideal of human rationality as the basis for modern society. It was a while before we got the news. It took a series of institutional failures- 9/11, the financial crisis, the botched, unnecessary wars in Afghanistan and Iraq- to wake us up to our own thickheadedness.

Human folly has now come in for a pretty severe beating, but something in me has always hated seeing a weakling picked on, and finds it much more compelling to root for the underdog. What if we’ve been too tough on our less than perfect rationality? What if our inability to be efficient probability calculators or computers isn’t always a design flaw but sometimes one of our strengths?

I am not the first person to propose such a heresy of irrationalism. Way back in 1509 the humanist Desiderius Erasmus wrote his riotous The Praise of Folly with something like this in mind. The book has what amounts to three parts, all in the form of a speech by the goddess Folly. The first part attempts to show how, in the world of everyday people, Folly makes the world go round and guides most of what human beings do. The second part is a critique of the society that surrounded Erasmus, a world of inept and corrupt kings, princes, popes and philosophers. The third part attempts to show how all good Christians, including Christ himself, are fools, and here Erasmus means fool as a compliment. As we all should know, good-hearted people, who are for that very reason naive, often end up being crushed by the calculating and rapacious.

It’s the lessons of the first part of The Praise of Folly that I am mostly interested in here. Many of Erasmus’ intuitions not only can be backed up by the empirical psychological studies of Daniel Kahneman, they allow us to flip the import of Kahneman’s findings on their head. Foolish, no?

Here are some examples: take the very simple act of a person opening their own business. The fact that anyone engages in such a risk while believing in their likely success is an example of what Kahneman calls the optimism bias. Overconfident entrepreneurs are blind to the odds; as Kahneman puts it:

The chances that a small business will survive in the United States are about 35%. But the individuals who open such businesses do not believe statistics apply to them. (256)

Optimism bias may result in the majority of entrepreneurs going under, but what would we do without it? Their sheer creative-destructive churn surely has some positive net effect on our economy and the employment picture, and the 35% of successful businesses are most likely adding long-lasting and beneficial tweaks to modern life, and even sometimes revolutions born in garages.

Yet, optimism bias doesn’t only result in entrepreneurial risk. Erasmus describes such foolishness this way:

To these (fools) are to be added those plodding virtuosos that plunder the most inward recesses of nature for the pillage of a new invention and rake over sea and land for the turning up some hitherto latent mystery and are so continually tickled with the hopes of success that they spare for no cost nor pains but trudge on and upon a defeat in one attempt courageously tack about to another and fall upon new experiments never giving over till they have calcined their whole estate to ashes and have not money enough left unmelted to purchase one crucible or limbeck…

That is, optimism bias isn’t just something to be found in business risks. Musicians and actors dreaming of becoming stars, struggling fiction authors, explorers and scientists all can be said to be biased towards the prospect of their own success. Were human beings accurate probability calculators, where would we get our artists? Where would our explorers have come from? Instead of culture we would all (if it weren’t already too much the case) be trapped permanently in cubicles of practicality.

Optimism bias is just one of the many human cognitive flaws Kahneman identifies. Take another: errors of affective forecasting and their role in those oh so important human institutions, marriage and children. For Kahneman, many people base their decision to get married on how they feel when in the swirl of romance, thinking that marriage will somehow secure this happiness indefinitely. Yet the fact is, for married persons, according to Kahneman:

Unless they think happy thoughts about their marriage for much of the day, it will not directly influence their happiness. Even newlyweds who are lucky enough to enjoy a state of happy preoccupation with their love will eventually return to earth, and their experienced happiness will again depend, as it does for the rest of us, on the environment and activities of the present moment. (400)

Unless one is under the impression that we no longer need marriage or children, again, one can be genuinely pleased that, at least in some cases, we suffer from the cognitive error of affective forecasting. Perhaps societies, such as Japan, that are in the process of erasing themselves as they forgo marriage and even more so childbearing might be said to suffer from too much reason.

Yet another error Kahneman brings to our attention is the focusing illusion. Our world is what we attend to, and our feelings towards it have less to do with objective reality than with what we are paying attention to. Something like the focusing illusion accounts for a fact that many of us in good health would find hard to believe; namely, that paraplegics are no more miserable than the rest of us. Kahneman explains it this way:

Most of the time, however, paraplegics work, read, enjoy jokes and friends, and get angry when they read about politics in the newspaper.

Adaption to a new situation, whether good or bad, consists, in large part, of thinking less and less about it. In that sense, most long-term circumstances, including paraplegia and marriage, are part-time states that one inhabits only when one attends to them. (405)  

Erasmus identifies something like the focusing illusion in states like dementia, where the old are no longer capable of reflecting on the breakdown of their bodies and impending death, but he captured it best, I think, in these lines:

But there is another sort of madness that proceeds from Folly so far from being any way injurious or distasteful that it is thoroughly good and desirable; and this happens when by a harmless mistake in the judgment of the mind a man is freed from those cares which would otherwise gratingly afflict him, and smoothed over with a content and satisfaction he could not under other circumstances so happily enjoy. (78)

We may complain against our hedonic setpoints, but as the psychologist Daniel Gilbert points out, they not only offer us resilience on the downside- we will adjust to almost anything life throws at us- such set points should also caution us against believing that in any one thing- the perfect job, the perfect spouse- lies the key to happiness. But that might leave us with a question: what exactly is this self we are trying to win happiness for?

A good deal of Kahneman’s research deals with the disjunction between two selves in the human person, what he calls the experiencing self and the remembering self. It appears that the remembering self usually gets the last laugh, in that it guides our behavior. The most famous example of this is Kahneman’s colonoscopy study, where the pain of the procedure was tracked minute by minute and then compared with how patients later remembered it and approached future decisions about it.

The surprising thing was that future decisions were biased not by the frequency or duration of pain over the course of the procedure but by how the procedure ended. The conclusion dictated how much negative emotion was associated with the procedure; that is, the experiencing self seemed to have lost its voice over how the procedure was judged and how it would be approached in the future.
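This finding is often summarized as the “peak-end rule”: retrospective evaluations track something like the average of the worst moment and the final moment, largely ignoring duration. A minimal sketch, with the averaging form as a simplified reading of the rule and the pain scale entirely illustrative:

```python
def remembered_pain(trace):
    """Simplified peak-end rule: the remembered rating is the average
    of the peak (worst) moment and the ending, ignoring duration."""
    return (max(trace) + trace[-1]) / 2

short_painful = [4, 7, 8]        # shorter procedure that ends at its worst
longer_taper = [4, 7, 8, 5, 2]   # more total pain, but a mild ending

print(remembered_pain(short_painful))  # 8.0
print(remembered_pain(longer_taper))   # 5.0 - remembered as less bad
```

The paradox the experiencing self would object to: the second trace contains strictly more total pain, yet the remembering self rates it as the better experience because of how it ended.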

Kahneman may have found an area where the dominance of the remembering self over the experiencing self is irrational, but again, perhaps it is generally good that endings, that closure, are the basis upon which events are judged. It’s not the daily pain and sacrifice that goes into Olympic training that counts so much as outcomes, and the meaning of some life event really does change based on its ultimate outcome. There is some wisdom in the words of Solon that we should “count no man happy until the end is known”, which doesn’t mean no one can be happy until they are dead, but that we can’t judge the meaning of a life until its story has concluded.

“Okay then”, you might be thinking, “what is the point of all this, that we should try to be irrational?” Not exactly. My point is that we should perhaps take a slight breather from the critique of human nature from the standpoint of its less than full rationality; our flaws, sometimes, and just sometimes, are what make life enjoyable and interesting. Sometimes our individual irrationality may serve larger social goods of which we are foolishly oblivious.

When Erasmus wrote his Praise of Folly he was careful to separate the foolishness of everyday people from the folly of those in power, whether that power be political, or intellectual such as that of the scholastic theologians. Part of what distinguished the fool on the street from the fool on the throne or in the cloister was the claim of the latter to be following the dictates of reason- the reason of state, or the internal logic of a theology that argued over angels on the heads of pins. The fool on the street knew he was a fool, or at least experienced the world as opaque, whereas the fools in power thought they had all the answers. For Erasmus, the very certainty of those in power, along with the mindlessness of their goals, made them even bigger fools than the rest of us.

One of our main weapons against power has always been to laugh at the foolishness of those who wield it. We could turn even a psychopathically rational monster like Hitler into a buffoon because even he, after all, was one of us. Perhaps if we ever do manage to create machines vastly more intelligent than ourselves we will have lost something by no longer being able to make jokes at the expense of our overlords. Hyper-rational characters like Data from Star Trek or Sheldon from The Big Bang Theory are funny because they are either endearingly trying to enter the mixed-up world of human irrationality, or because their total rationality, in a human context, is itself a form of folly. Super-intelligent machines might not want to become flawed humans, and though they might still be fools just like their creators, they likely wouldn’t get the joke.

The Dark Side of a World Without Boundaries

Peter Blume: The Eternal City


The problem I see with Nicolelis’ view of the future of neuroscience, which I discussed last time, is not that I find it unlikely that a good deal of his optimistic predictions will someday come to pass; it is that he spends no time at all on the darker potential of such technology.

Of course, the benefit of technologies that communicate directly from the human brain to machines or to other human beings is undeniable for the paralyzed, or for persons otherwise locked tightly in the straitjacket of their own skulls. The potential to expand the range of human experience through directly experiencing the thoughts and emotions of others is also, of course, great. Next weekend being Super Bowl Sunday, I can’t help but think how cool it would be to experience the game through the eyes of Peyton Manning or Russell Wilson. How amazing would a concert or an orchestra be if experienced likewise?

Still, one need not be a professional dystopian or unbudgeable Cassandra to come up with all kinds of quite frightening scenarios that might arise through the erosion of boundaries between human minds. All one needs to do is pay attention to the less sanguine developments of the latest iteration of a once believed-to-be-utopian technology- the Internet and the social Web- to get an idea of some of the possible dangers.

The Internet was at one time prophesied to usher in a golden age of transparency and democratic governance, and promised the empowerment of individuals and small companies. Its legacy, at least at this juncture, is much more mixed. There is little disputing that the Internet and its successor mobile technologies have vastly improved our lives, yet it is also the case that these technologies have led to an explosion in surveillance and control by nefarious individuals and criminal groups, corporations and the state. Nicolelis’ vision of eroded boundaries between human minds is but the logical conclusion of the trend towards transparency. Given how recently we’ve been burned by techno-utopianism in precisely this area, a little skepticism is certainly in order.

The first question that might arise is whether direct brain-to-brain communication (especially when invasive technologies are required) will ever out-compete the kinds of mediated mind-to-mind technology we have had for quite some time, that is, spoken and written language. Except in very special circumstances, not all of them good, language seems to retain its advantages over direct brain-to-brain communication, and in the cases where direct communication is chosen over language, the choice may be less a gain in communicative capacity than a signal that normalcy has broken down.

Couples in the first rush of new love may want to fuse their minds for a time, but a couple that has been together for decades? Seems less likely, though there might be cases of pathological jealousy, or smothering control bordering on abuse, where one half of a couple would demand such a thing. The communion with fellow human beings offered by religion might gain something from the ability of individuals to bridge the chasm that separates them from others, but such technologies would also be a “godsend” for fanatics and cultists. I cannot decide whether a mega-church such as Yoido Full Gospel Church in a world where Nicolelis’ brain-nets are possible would represent a blessed leap in human empathetic capacity or a curse.

Nicolelis seems to assume that the capacity to form brain-nets will result in some kind of idealized and global neural version of Facebook, but human history seems to show that communication technology is just as often used to hermetically seal group off from group, and becomes a weapon in the primary human moral dilemma, which is not the separation of individual from individual so much as the rivalry between different groups. We seem unable to extract ourselves from such rivalry even when the stakes are merely imagined- as many of us will demonstrate next Sunday, and as Nicolelis himself should have known from his beloved soccer, where the rivalry expressed in violent play has a tendency to slip into violent riots in the real world.

Direct brain-to-brain communication would seem to have real advantages over language when persons are joined together in teams acting as a unit in response to a rapidly changing situation- groups such as fire-fighters. By far the greatest value of such capacities would be found in small military units such as platoons, whose members are already joined in a close experiential and deep emotional bond, as is on display in the documentary Restrepo. Whatever their virtuous courage, armed bands killing one another are about as far as you can get from the global kumbaya of Nicolelis’ brain-net.

If such technologies were non-invasive, would they be used by employers to monitor lapses of sustained attention while on the job? What of poor dissidents under the thumb of a madman like Kim Jong Un? Imagine such technologies in the hands of a pimp, or even of the kinds of slave owners who, despite our obliviousness, still exist.

One of the problems with the transparency paradigm, and with the new craze for an “Internet of things” where everything in the environment of an individual is transformed into a Web connected computer, is the fact that anything that is a computer, by that very fact, becomes hackable. If someone is worried about their digital data being sucked up and sold on an elaborate black market, about webcams being used as spying tools, or about connecting one’s home to the web making one vulnerable, how much more worried should we be if our very thoughts could be hacked? The opportunities for abuse seem legion.

Everything is shaped by personal experience. Nicolelis’ views of the collective mind were forged by the crowds that overthrew the Brazilian military dictatorship and by his beloved soccer games. But the crowd is not always wise. As a person who has always valued solitude and time with individuals and small groups over the electric energy of crowds, I have no interest in being part of a hive mind to any degree beyond the limited extent I might be said to already be in such a mind by writing a piece such as this.

It is a sad fact, but nonetheless true, that should anything like Nicolelis’ brain-net ever be created it cannot be built under the assumption of the innate goodness of everyone. As our experience with the Internet should have taught us, an open system with even a minority of bad actors leads to a world of constant headaches and sometimes what amount to very real dangers.

The World Beyond Boundaries

The Virgin, oil painting by Gustav Klimt

I first came across Miguel Nicolelis in an article for the MIT Technology Review entitled The Brain is not computable: A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines. That got my attention. Nicolelis, if you haven’t already heard of him, is one of the world’s top researchers in building brain-computer interfaces. He is the mind behind the project to have a paraplegic using a brain-controlled exoskeleton make the first kick of the 2014 World Cup, an event that takes place in Nicolelis’ native Brazil.

In the interview, Nicolelis characterizes the singularity as “a bunch of hot air,” his reasoning being that “The brain is not computable and no engineering can reproduce it.” He explains himself this way:

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

This non-computability of consciousness, he thinks, has negative implications for the prospect of ever “downloading” (or uploading) human consciousness into a computer.

“Downloads will never happen,” he declares with some confidence.

Science journalism, like any sort of journalism, needs a “hook,” and the hook here was obviously a dig at a number of deeply held beliefs among the technorati; namely, that AI was on track to match and eventually surpass human level intelligence, that the brain could be emulated computationally, and that, eventually, the human personality could likewise be duplicated through computation.

The problem with any hook is that it tends to leave you with a shallow impression of the reality of things. If the world is too complex to be represented in software, it is even less able to be captured in a magazine headline or 650 word article. For that reason, I wanted a clearer grasp of where Nicolelis was coming from, so I bought his recent and excellent, if a little dense, book, Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives. Let me start with a little of Nicolelis’ research and from there flesh out the neuroscientist’s view of our human-machine future, a view I found both similar in many respects to, and at the same time very different from, perspectives typical today of futurists thinking about such things.

If you want to get an idea of just how groundbreaking Nicolelis’ work is, the best thing to do is to peruse the website of his lab. Nicolelis and his colleagues have conducted experiments where a monkey has controlled the body of a robot located on the other side of the globe, and where another simian has learned to play a videogame with its thoughts alone. Of course, his lab is not interested in blurring the lines between monkeys and computers for the heck of it; the immediate aim of their research is to improve the lives of those whose ties between their bodies and their minds have been severed, that is, paraplegics. That fact explains Nicolelis’ bold gamble to demonstrate his lab’s progress by having a paralyzed person kick off the World Cup.

However inspiring the humanitarian potential of this technology, it is the underlying view of the brain that the work of the Nicolelis Lab appears to experimentally support, and the neuroscientist’s longer term view of the potential of technology to change the human condition, that are likely to have the most lasting importance. They are views and predictions that put Nicolelis more firmly in the trans-humanist camp than might be gleaned from his MIT interview.

The first aspect of Nicolelis’ view of the brain I found stunning was the brain’s extraordinary plasticity when it comes to the body. We might tend to think of our brain and our body as highly interlocked things; after all, our brains have spent their whole existence as part of one body- our own. This is a reality that the writer Paul Auster turns into the basis of his memoir Winter Journal, which is essentially the story of his body’s movement through time, its urges, muscles, scars, wrinkles, ecstasies and pains.

The work of Nicolelis’ Lab seems to sever the cord we might think joins a particular body and the brain or mind that thinks of it as home. As he states it in Beyond Boundaries:

The conclusion from more than two decades of experiments is that the brain creates a sense of body ownership through a highly adaptive, multimodal process, which can, through straightforward manipulations of visual, tactile, and body position (also known as proprioception) sensory feedback, induce each of us, in a matter of seconds, to accept another whole new body as being the home of our conscious existence. (66)

Psychologists have had an easy time with tricks like fooling a person into believing they possess a limb that is not actually theirs, but Nicolelis is less interested in this trickery than in finding a new way to understand the human condition in light of his and others’ findings.

The fact that the boundaries of the brain’s body image are not limited to the body that brain is located in is one way to understand the perhaps almost unique qualities of the extended human mind. We are all ultimately still tool builders and users, only now our tools:

… include technological tools with which we are actively engaged, such as a car, bicycle, or walking stick; a pencil or a pen, spoon, whisk or spatula; a tennis racket, golf club, a baseball glove or basketball; a screwdriver or hammer; a joystick or computer mouse; and even a TV remote control or Blackberry, no matter how weird that may sound. (217)

Specialized skills honed over a lifetime can make a tool an even more intimate part of the self: the violin becomes an appendage of the skilled musician, a football like a part of the hand of a seasoned quarterback. Many of the most prized people in society are in fact master tool users, even if we rarely think of them this way.

Even with our masterful use of tools, the brain is still, in Nicolelis’ view, trapped within a narrow sphere surrounding its particular body. It is here that he sees advances in neuroscience eventually leading to the liberation of the mind from its shell. The logical outcome of minds being able to communicate directly with computers is a world where, according to Nicolelis:

… augmented humans make their presence felt in a variety of remote environments, through avatars and artificial tools controlled by thought alone. From the depths of the oceans to the confines of supernovas, even to the tiny cracks of intracellular space, human reach will finally catch up to our voracious appetite to explore the unknown. (314)

He characterizes this as mankind’s “epic journey of emancipation from the obsolete bodies they have inhabited for millions of years” (314). Yet Nicolelis sees human communication with machines as but a stepping stone to the ultimate goal- the direct exchange of thoughts between human minds. He imagines the sharing of what has forever been the ultimately solipsistic experience of being a particular individual, with our own very unique experience of events, something that can never be fully captured even in the most artful expressions of language. This exchange of thoughts, which he calls “brainstorms,” is something Nicolelis does not limit to intimates- lovers and friends- but which he imagines giving rise to a “brain-net”.

Could we one day, down the road of a remote future, experience what it is to be part of a conscious network of brains, a collectively thinking true brain-net? (315)

… I have no doubt that the rapacious voracity with which most of us share our lives on the Web today offers just a hint of the social hunger that resides deep in human nature. For this reason, if a brain-net ever becomes practicable, I suspect it will spread like a supernova explosion throughout human societies. (316)

Given this context, Nicolelis’ view on the Singularity and the emulation or copying of human consciousness on a machine is much more nuanced than the impression one is left with from the MIT interview. It is not that he discounts the possibility that “advanced machines may come to dominate the human race” (302), but that he views it as a low probability danger relative to the other catastrophic risks faced by our species.

His view on the prospect of human level intelligence in machines is less that high level machine intelligence is impossible than that our specific type of intelligence is non-replicable. Building off of Stephen Jay Gould’s idea of the “life tape,” his reasoning is that we cannot duplicate through engineering the sheer contingency that lies behind the evolution of human intelligence. I understand this in light of an observation by the philosopher Daniel Dennett, which I remember but cannot place, that it may be technically feasible to replicate mechanically an exact version of a living bird, but that it may prove prohibitively expensive, as expensive as our journeys to the moon, and besides, we don’t need to exactly replicate a living bird- we have 747s. Machine intelligence may prove to be like this: we may never replicate our own intelligence other than through traditional and much more exciting means, even as artificial intelligence becomes vastly superior to human intelligence in many domains.

In terms of something like uploading, Nicolelis does believe that we will be able to record and store human thoughts- his brainstorms- at some point in the future; we may even be able to record the whole of a life in this way. But he does not think this will mean the preservation of a still experiencing intelligence any more than a piece by Chopin is the actual man. He imagines us deliberately recording the memories of individuals and broadcasting them across the universe to exist forever in the background of the cosmos which gave rise to us.

I can imagine all kinds of wonderful developments emerging should the technological advances Nicolelis imagines come to pass. They would revolutionize psychological therapy, law, art and romance. They would offer us brand new ways to memorialize and remember the dead.

Yet Nicolelis’ Omega Point- a world where all human beings are brought together into one embracing common mind- has been our dream at least since Plato, and the very antiquity of these urges should give us pause, for what history has taught us is that the optimistic belief that “this time is different” has never proved true. That fact should encourage us to look seriously, as Nicolelis himself refuses to do, at the potential dark side of the revolution in neuroscience this genius Brazilian is helping to bring about. It is less a matter of cold pessimism to acknowledge this negative potential than it is a matter of steeling ourselves against disappointment, at the least, and of preventing such problems from emerging in the first place, at best- a task I will turn to next time…

Silicon Secessionists

More's Utopia

Lately, there have been weird mumblings about secession coming from an unexpected corner. We’ve come to expect that there are hangers-on to the fallen Confederate States of America, or Texans hankering after their lost independent Republic, but Silicon Valley? Really? The idea, at least at first blush, seems absurd.

We have the tycoon founder of PayPal and early FaceBook investor, Peter Thiel, whose hands seem to be in every arch-conservative movement under the sun, and who is a vocal supporter of utopian seasteading- the idea of creating a libertarian oasis of artificial islands beyond the reach of law, regulation and taxes.

Likewise, Zoltan Istvan’s novel The Transhumanist Wager uses the idolatry of Silicon Valley’s Randian individualism and technophilia as lego blocks with which to build an imagined “Transhumania,” a moveable artificial island that is, again, free from the legal and regulatory control of the state.

A second venture capitalist, Tim Draper, recently proposed shattering mammoth California into six pieces, with Silicon Valley to become its own separate state. There are plans to build a techno-libertarian Galt’s Gulch type city-state in Chile, a geographical choice which, given Chile’s brutal experience with right-wing economics via Pinochet and the Chicago school, is loaded with historical irony.

Yet another Silicon Valley tech entrepreneur, Elon Musk, hopes to do better than all of these and move his imagined utopian experiment off of the earth, to Mars. Perhaps he could get some volunteers from Winnipeg, whose temperature earlier this month under a “polar vortex” was colder than that around the Curiosity Rover tooling around in the dead red dust of the planet of war.

What in the world is going on?

By far the best articulation of Silicon Valley’s new secessionist urges I have seen comes from Balaji Srinivasan, who doesn’t consider himself a secessionist along the lines of John C. Calhoun at all. In an article for Wired back in November, Srinivasan laid out what I found to be a quite intriguing argument for a kind of Cambrian explosion of new polities. The Internet now allows much easier sorting of individuals based on values, and it’s only a step or two ahead to imagine virtual associations becoming physical ones.

I have to say that I find much to like in the idea of forming small, new political societies as a means of obtaining forms of innovation we sorely lack- namely political and economic innovation. I also think Srinivasan and others are onto something in that small societies which get things right seem best positioned to navigate the complex landscape of our globalized world. I myself would much prefer a successful democratic-socialist small society, such as a Nordic one like Finland, to a successful capitalist-authoritarian one like Singapore, but the idea of a plurality of political systems operating at a small scale doesn’t bother me in the least, as long as belonging to such polities is ultimately voluntary.

The existence of such societies might even help heal one of the main problems of the larger pluralist societies, such as our own, to which these new communities might remain attached. Pluralist societies are great at diversity, but often bad at something that older, and invariably more intolerant, types of society had in droves; namely, the capacity of culture to form a unified physical and intellectual world- a kind of home- at least for those lucky enough to believe in that world and be granted a good place within it.

Even though I am certain that, like most utopian experiments of the past, the majority of these newly formed polities would fail, we would no doubt learn something from them. And some might even succeed and become the legacy of those bold enough to dream of the new.

One might wonder, however, why this recent interest in utopian communities has been so strongly represented by both libertarians and Silicon Valley technophiles. Nothing against libertarian experiments per se, but there are, after all, a whole host of other ideological groups that could be expected to be attracted to the idea of forming new political communities where their principles could be brought to fruition. Srinivasan, again, provides us with the most articulate answer to this question.

In a speech I had formerly misattributed to one of the so-called neo-reactionaries (apologies), Srinivasan lays out the case for what he calls “Silicon Valley’s ultimate exit”.

He begins by asking in all seriousness “Is the USA the Microsoft of Nations?” and then goes on to draw the distinction between two different types of responses to institutional failure- Voice versus Exit. Voice essentially means aiming to change an institution from within, whereas Exit is flight, or in software terms “forking,” to form a new institution, whether that be anything from a corporation to a state. Srinivasan thinks Exit is an important form of political leverage, pressuring a system to adopt reform or face flight.

The problem I see is that the logic behind the choice of Exit over Voice threatens a kind of social disintegration. Indeed, the rationale Srinivasan draws for libertarian flight seems not only to assume an already existent social disintegration, but proposes to act as an accelerant for more.

Srinivasan’s argument is that Silicon Valley is on the verge of becoming the target of the old elites, which he calls “The Paper Belt: based in: Boston with higher ed; New York City with Madison Avenue, books, Wall Street, and newspapers; Los Angeles with movies, music, Hollywood; and, of course, DC with laws and regulations, formally running it.” That Silicon Valley with its telecommunications revolution was “putting a horse head in all of their beds. We are becoming stronger than all of them combined.” That the elites of the Paper Belt “are basically going to try to blame the economy on Silicon Valley, and say that it is iPhone and Google that done did it, not the bailouts and the bankruptcies and the bombings…” And that “What they’re basically saying is: rule by DC means people are going back to work and the emerging meme is that rule by us is rule by Terminators. We’re going to take all the jobs.”

Given what has actually happened so far, Srinivasan’s tone seems almost paranoid. Yes, the shine is off the apple (pun intended) of Silicon Valley, but the most that seems to be happening are discussions about how to get global tech companies to start paying their fair share of taxes. And the Valley has itself woken up to the concerns of civil libertarians that tech companies were being used by the US as a giant listening device.

Srinivasan himself admits that unemployment due to advances in AI and automation is a looming crisis, but rather than help support society, something that even a libertarian like Peter Diamandis has admitted may lead to the requirement for a universal basic income, Srinivasan instead seems to want to run away from the society he helped create.

And therein lies the dark side of what all this Silicon Valley talk of flight is about. As much as it’s about experimentation, or Exit, it’s also about economic blackmail and arbitrage. It’s like a marriage where one partner, rather than engage even in discussions where they contemplate sacrificing some of their needs, threatens at the smallest pretext to leave.

Arbitrage has been the tool by which the global bourgeoisie (to bring back the good old Marxist term) has been able to garner such favorable conditions for itself over the past generation. “Just try to tax us, and we will move to a place with lower or no taxation,” it says. “Just try to regulate us, and we will move to a place with lower or no regulation.”

Yet both non-excessive taxation and prudent regulation are the way societies keep themselves intact in the face of the short-sightedness and greed at the base of any pure market. Without them, shared social structures and common infrastructure decay, and all costs- pollution etc.- are externalized onto the society as a whole. Maybe what we need is not so much more and better tools for people to opt out, which Srinivasan proposes, as a greater number and variety of ways for people to opt in- better ways of providing the information and tools of Voice that are relevant, accessible, and actionable.

Perhaps what’s happened is that we’ve come almost full circle from our start in feudalism. We started with a transnational church and lords locked in the place of their local fiefdoms, and moved to nation-states where ruling elites exercised control over a national territory, where concern for the broad society underneath, along with its natural environment, was only fully extended with the expansion of the right to vote almost universally across society.

With the decline of the nation-state as the fundamental focus of our loyalty, we are now torn in multiple directions: between our country and our class, by our religious and philosophical orientations, by our concern for the local or its invisibility, and by our concern for the global or its apparent irrelevance. Yet, despite our virtuality, we still belong to physical communities: our neighborhood, our country and our shared earth.

Closer to our own time, this hope to escape the problems of society by flight and the foundation of new uncorrupted enclaves is an idea buried deep in the founding myth of Silicon Valley. The counter-culture from which many of the innovators of Silicon Valley emerged wanted nothing to do with America’s deep racial and Cold War era problems. They wanted to “drop out” and instead ended up sparking a revolution that not only challenged the whitewashed elites of the “Paper Belt”, but ended up creating a new set of problems, which the responsibility of adulthood should compel them to address.

The elite that has emerged from Silicon Valley is perhaps the first in history detached from any notion of physical space, even the physical space of our shared earth. But “ultimate exit” is an illusion, at least for the vast majority of us, for even if we could settle the stars or retreat into an electron cloud, the distances are far too great and both are too damned cold.

An Epicurean Christmas Letter To Transhumanists

Botticelli, Primavera (Spring)

Whatever little I retain from my Catholic upbringing, the short days of winter and the Christmas season always seem to turn my thoughts to spiritual matters and the search for deeper meanings. It may be a cliche, but if you let it hit you, the winter and the coming of the new year can’t help but remind you of endings, and sometimes even of the ultimate ending of death. After all, the whole world seems dead now, frozen like some morgue-corpse, although this one, if past is prologue, really will rise from the dead with the coming of spring.

Now, I would think death is the last thing most people think of, especially during what for many of us is such a busy, drowned-in-tinsel time of the year. The whole subject is back there, buried with the other detritus of life, such as how we get the food we’ll stuff ourselves with over the holidays, or the origin of the presents, from tinker-toys to diamond rings, that some of us will wrap up and hide under trees. It’s like the Jason Isbell song The Elephant, which ends with the lines:

There’s one thing that’s real clear to me,

no one dies with dignity.

We just try to ignore the elephant somehow

This aversion to even thinking about death is perhaps the biggest unacknowledged obstacle for transhumanists, whose goal, when all is said and done, is to conquer death. It’s similar to the kind of aversion that lies behind our inability to tackle climate change. Who wants to think about something so dreadful?

There are at least some people who do want to think of something so dreadful, and not only that, they want to tie a bow around it and make it appear to be some wonderful present left under the tree by Kris Kringle. Maria Konovalenko recently panned a quite silly article in the New York Times by Daniel Callahan, who was himself responding to the hyperbolic coverage of Google’s longevity initiative, Calico. Here’s Callahan questioning the push for extended longevity:

And exactly what are the potential social benefits? Is there any evidence that more old people will make special contributions now lacking with an average life expectancy close to 80? I am flattered, at my age, by the commonplace that the years bring us wisdom — but I have not noticed much of it in myself or my peers. If we weren’t especially wise earlier in life, we are not likely to be that way later.

Perhaps not, but neither did we foresee the benefits of raising life expectancy from 45 to near 80 between 1900 and today- benefits such as The Rolling Stones. Callahan himself is a still practicing heart surgeon- he’s 83- and I’m assuming, because he’s still here, that he wouldn’t rather be dead. And even if one did not care about pushing the healthy human lifespan out further for oneself, how could one not wish for such an opportunity for one’s children? Even 80 years is really too short for all of the life projects we might fulfill, barely long enough to feel at home in the “world in which we’re thrown,” quite literally, like the calf the poet Diane Ackerman helped deliver and described in her book Deep Play:

When it lifted its fluffy head and looked at me, its eyes held the absolute bewilderment of the newly born. A moment before it had enjoyed the even, black nowhere of the womb, and suddenly its world was full of color, movement, and noise. I have never seen anything so shocked to be alive. (141)

And if increased time here would likely be good for us as individuals- sufficient time to learn what we should learn and do what we should do- I agree as well with Vernor Vinge that greatly expanded human longevity would likely be an incomparable good for society, not least because it might refocus the mind on the longer term health of the societies and planet we call home.

That said, I do have some concern that my transhumanist friends are losing something by not acknowledging the death elephant, given that they are too busy trying to push it out of the room. The problem I see is that many transhumanists are, how to put this, old, and can’t afford or aren’t sufficiently convinced of the potential of cryonics to put faith in it as a “backup.” Even when they embrace being deep-frozen, many of their loved ones are unlikely to be so convinced, and, therefore, they will watch or have knowledge of their parents, siblings, spouse and friends experiencing a death that transhumanists understand to be nothing short of dark oblivion.

Lately it seems some have been trying to stare this oblivion in the face. Such, I take it, is the origin of classical composer David Lang’s haunting album Death Speaks. I do not think Lang’s personification of death in the ghostly voice of Shara Worden, or the presentation of the warm embrace of the grave as a sort of womb, should be considered “deathist,” even if death in his work is sometimes represented as final rest from the weariness of life, and anthropomorphized into a figure that loves even as she goes about her foul business of killing us. Rather, I see the piece as merely the attempt to understand death through metaphor, which is sometimes all we have, and personally found the intimacy both chilling and thought provoking.

This is the oblivion of biological death with which we are all too familiar, and which, given sufficient time for technological advancement, we may indeed escape, as we might someday even exit biology itself. But I suspect that even over the very, very long run, some sort of personal oblivion, regardless of how advanced our technology, is likely inevitable.

As I see it, given the nature of the universe and its continuous push towards entropy, we are unlikely to ever fully conquer death so much as phase change into new timescales and mechanisms of mortality. The reason for thinking otherwise is, I think, our insensitivity to the depth of time. Even a 10,000 year old you is a mayfly compared to the age of our sun, let alone the past and future of the universe. What of “you” today would be left after 10,000, 100,000, a million, a billion years of survival? I would think not much, or at least not much more than would have survived on the smaller time scales you pass on today- your genes, your works, your karma. How many such phase changes exist between us today and where the line through us and our descendants ends is anyone’s guess, but maintaining the core of a particular human personality throughout all of these transformations seems like a very long shot indeed.

Even if the core of ourselves could be kept in existence through these changes, what are the prospects that it would survive to the end of the universe, not to mention beyond? As Lawrence Krauss pointed out, the physics seems to lean in the direction that, in a universe with a finite amount of energy which is infinitely expanding, no form of intelligence can engage in thinking for an infinite amount of time. Not even the most powerful form of intelligence we can imagine, as long as we use our current understanding of the laws of physics as boundary conditions, can truly be immortal.

On a more mundane level, even if a person could be fully replicated as software or non-biological hardware, these systems too have their own versions of mortality (are you still running Windows ME and driving a Pinto?), and the preservation of a replicated person would require continuous activity to keep this person, as software and/or non-biological hardware, in a state of existence while somehow retaining the integrity of the self.

What all this adds up to is that if one adopts a strict atheism based on what science tells us about the nature of reality, one is almost forced to come to terms with the prospect of personal oblivion at some point in the future, however far out that fate can be delayed. Which is not to say that reprieve should not be sought in the first place, only that we shouldn’t confuse the temporal expansion of human longevity, whether biological or through some other means, with the attainment of actual immortality. Breaking through current limits to human longevity would likely confront us with new limits we would still need to overcome.

Some transhumanists who are pessimistic about the necessary breakthroughs occurring in the short run- within their lifetime- cling to a kind of “Quantum Zen,” as Giulio Prisco recently put it, where self and loved ones are resurrected in a kind of cosmic reboot in the far future. Speaking of the protagonist of Zoltan Istvan’s Transhumanist Wager, here’s how Prisco phrased it:

Like Jethro, I consider technological resurrection (Tipler, quantum weirdness, or whatever) as a possibility, and that is how I cope with my conviction that indefinite lifespans and post-biological life will not be developed in time for us, but later.

To my eyes at least, this seems less a case of dealing with the elephant in the room than zapping it with a completely speculative invisible-izing raygun. If the whole moral high ground of secularists over the religious is that the former tie themselves unflinchingly to the findings of empirical science, while the latter approach the world through the lens of unquestioning faith, then clinging to a new faith, even if it is a faith in the future wonders of science and technology, surrenders that high ground.

That is, we really should have doubts about any idea, whatever its use of scientific language, that isn’t falsifiable and is based on mere speculation (even the speculation of notable physicists) on future technological potential. Shouldn’t we want to live on the basis of what we can actually know through proof, right now?

How then, as a secular person, which I take most transhumanists to be, do you deal with the idea of personal oblivion? It might seem odd to turn to a Roman Epicurean natural philosopher and poet born a century before Christ to answer such a question, but Titus Lucretius Carus, usually just called Lucretius, offered us one way of dampening the fear of death while still holding a secular view of the world. At least that’s what Stephen Greenblatt found was the effect of Lucretius’ only major work- On the Nature of Things.

Greenblatt found his secondhand copy of On the Nature of Things in a college book bin, attracted as much by the summer-of-love suggestiveness of the 1960s cover as anything else. He cracked it open that summer and found a book that no doubt seemed to reflect directly the spirit of the times, beginning as it does with a prayer to the goddess of love, Venus, and a benediction to the power of sexual attraction over even Mars, the god of war.

It was also a book whose purpose, in the words of Lucretius, was “to free men’s minds from fear of the bonds religious scruples have imposed” (124). As Greenblatt describes it in his book The Swerve: How the World Became Modern, in Lucretius’ On the Nature of Things he found refuge from his own painful experience not with death, but with the thought of it, and not even the fear of his own oblivion, but his mother’s fear of the same. As Greenblatt writes of his mother:

It was death itself- simply ceasing to be- that terrified her. From as far back as I can remember, she brooded obsessively on the imminence of her end, invoking it again and again, especially at moments of parting. My life was full of operatic scenes of farewell. When she went with my father from Boston to New York for the weekend, when I went off to summer camp, even- when things were especially hard for her- when I left the house for school, she clung tightly to me, speaking of her fragility and of the distinct possibility that I would never see her again. If we walked somewhere together, she would frequently come to a halt, as if she were about to keel over. Sometimes she would show me a vein pulsing in her neck, and taking my finger, make me feel it for myself, the sign of her heart dangerously racing. (3)

The Swerve tells the history of On the Nature of Things: its loss after the collapse of Roman civilization, its nearly accidental preservation by Christian monks, its rediscovery in the early Renaissance, and its deep and all but forgotten impact on the sentiment of modernity, having influenced figures as diverse as Shakespeare, Bruno, Galileo, More, Montesquieu and Jefferson. Yet, Greenblatt’s interest in On the Nature of Things was born of a personal need to understand and dispel anxiety over death, so it’s best to look at Lucretius’ book itself to see how that might be done.

Lucretius was a secular thinker before there was even a name for such a thing. He wanted a naturalistic explanation of the world where the gods, if they existed, played no role either in the workings of nature or the affairs of mankind. The basis to everything he held was a fundamental level of particles he sometimes called “atoms” and it was the non-predetermined interaction of these atoms that gave rise to everything around us, from stars and planets to animals and people.

From this basis Lucretius arrived at a picture of the universe that looked amazingly like our own. There is an evolution of the universe- stars and planets- from simpler elements and the evolution of life. Anything outside this world made of atoms is ultimately irrelevant to us. There is no need to placate the unseen gods or worry what they think of us.

Everything we experience for good and ill including the lucky accident of our own existence and our ultimate demise is from the “swerve” of underlying atoms. The Lucretian world makes no sharp division, as ancients and medievals often did, between the earthly world and the world of the sky above our heads.

The universe is finite in the kinds of matter it contains if infinite in size, and there are likely other worlds in it with intelligent life like our own. In the Copernican sense we are not at the center of things either as a species or individually. All we can experience, including ourselves, is made of the same banal substance of atoms going about their business of linking and unlinking with one another. And, above all, everything that belongs to this universe built of atoms is mortal, a fleeting pattern destined to fall apart.

On the Nature of Things is the strangest of hybrids. It is a poem, a scientific text and a self-help book all at the same time. Lucretius addresses his poem to Gaius Memmius, an obscure figure whom the author aims to free from the fear of the gods and death. Lucretius advises Memmius that death is nothing to fear, for it will be no different to us than all the time that passed before we were born. To rage against no longer existing through the entirety of the future makes no more sense than raging that we did not exist through the entirety of the past.

Think how the long past age of hoary time

Before our birth is nothing to us now

This in a mirror

Nature shows to us

Of what will be hereafter when we’re dead

Does this seem terrible is this so sad?

Is it not less troubled than our daily sleep? (118)

______________________________

I know, I know, this is the coldest of cold comforts.

Yet, Lucretius was an Epicurean whose ultimate aim was that we be wise enough to keep in our view the simple pleasures of being alive, right now, in the moment in which we were lucky enough to be living. While reading On the Nature of Things I had in my ear the constant goading whisper- “Enjoy your life!” Lucretius’ fear was that we would waste our lives away in fear and anticipation of life, or its absence, in the future. That we would be of those:

Whose life was living death while yet you live

And see the light who spend the greater part

Of life in sleep still snoring while awake. (122-123)

It is not that Lucretius advises us to take up the pleasure-seeking life of hedonism; rather, he urges us not to waste our preciously short time here with undue anxiety over things that are outside of our control, or within our control to only a limited extent. On The Nature of Things admonishes us to start not from the position of fear or anger that the universe intends to eventually “kill” us, but from one of gratitude that out of a stream of randomly colliding atoms we were lucky enough to have been born in the first place.

This message in a bottle from an ancient Epicurean reminded me of the conclusion to the aforementioned Diane Ackerman’s Deep Play where she writes to imagined inhabitants of the far future that might let her live again and concludes in peaceful lament:

If that’s not possible, then I will have to make do with the playgrounds of mortality, and hope that at the end of my life I can say simply, wholeheartedly that it was grace enough to be born and live. (212)

 Nothing that happens, or fails to happen, within our lifetimes, or after it, can take away this joy that it was to live, and to know it.

Pernicious Prometheus

Prometheus Brings Fire to Mankind, Heinrich Füger, 1817

It should probably seem strange to us that one of the memes we often use when trying to grapple with the question of how to understand the powers brought to us by modern science and technology is one inspired by an ancient Greek god chained to a rock. Well, actually not quite a god but a Titan, that is Prometheus.

Do a search on Amazon for Prometheus and you’ll find that he has more titles devoted to him than even the lightning bolt throwing king of the gods himself, Zeus, who was the source of the poor Titan’s torment. Many, perhaps the majority, of these titles containing Prometheus- whether fiction or nonfiction- have to do not so much with the Titan himself as with our relationship to science, technology and knowledge.

Here’s just an odd assortment of titles you’ll find: American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer, Forbidden Knowledge: From Prometheus to Pornography, Prometheus Reimagined: Technology, Environment, and Law in the Twenty-first Century, Prometheus Redux: A New Energy Age With the IGR, Prometheus Bound: Science in a Dynamic ‘Steady State’. It seems even the Japanese use the Western Prometheus myth, as in: Truth of the Fukushima nuclear accident has now been revealed: Trap of Prometheus. These are just some of the more recent non-fiction titles. It would take up too much space to list the works of science-fiction where the name Prometheus graces the cover.

Film is in the Prometheus game as well, the most recent entry being Ridley Scott’s movie named for the tragic figure, where, again, scientific knowledge and our quest for truth, along with its dangers, are the main themes of the plot.

It should be obvious that the myth of Prometheus is a kind of mental tool we use, and have used for quite some time, to understand our relationship to knowledge in general and science and technology specifically. Why this is the case is an interesting question, but above all we should want to know whether or not the Promethean myth, for all the ink and celluloid devoted to it, is actually a good tool for thinking about human knowing and doing, or whether we have ourselves become chained to it, unable to move, like the hero, or villain- depending upon your perspective- whose story still holds us in its spell.

It is perhaps especially strange that we turn to the myth of Prometheus to think through the potential and problems brought about by our advanced science and technology because the myth is so damn old. The story of Prometheus is first found in the works of the Greek poetic genius Hesiod, who lived somewhere between 750 and 650 B.C.E. Hesiod portrays the Titan as the foil of Zeus and the friend of humankind. Here is how I once described Hesiod’s portrayal of Zeus’ treatment of our unfortunate species.

If one thought Yahweh was cruel for cursing humankind to live “by the sweat of his brow” he has nothing on Zeus, who along with his court of Olympian gods:

“…keep hidden from men the means of life. Else you would easily do work enough in a day to supply you for a full year even without working.”

In Hesiod, it is Prometheus who tries to break these limitations set on humankind by Zeus.

Prometheus, the only one of the Titans who had taken Zeus’s side in the coup d’état against Cronos, had a special place in his heart for human beings, having, according to some legends, created them. Not only had Prometheus created humans, whom the Greeks with their sharp wisdom called mortals, he was also the one who, when all the useful gifts of nature had seemingly been handed out to the other animals before humans got to the front of the line, decided to give mortals an upright posture, just like the gods, and that most special gift of all- fire.

We shouldn’t be under the impression, however, that Hesiod thought Prometheus was the good guy in this story. Rather, Hesiod sees him as a breaker of boundaries and therefore guilty of a sort of sacrilege. It is Prometheus’ tricking Zeus into accepting a subpar version of sacrifice that gets him chained to his rock more than his penchant for handing out erect posture, knowledge and technology to his beloved humans.

The next major writer to take up the myth of Prometheus was the playwright of Greek tragedy, Aeschylus, or at least we once believed it to have been him, who put meat on the bones of Hesiod’s story. In Aeschylus’ play, Prometheus Bound, and in the surviving fragments of the play’s sequels, we find the full story of Prometheus’ confrontation with Zeus, which Hesiod had only touched upon.

Before I had read the play I had always imagined the Titan chained to his rock like some prisoner in the Tower of London. In Prometheus Bound the rebel against Zeus is not so much chained as nailed to Mount Kazbek like Jesus to his cross. A spike driven through his torso doesn’t kill him, he’s immortal after all, but pains him as much as it would a man made of flesh and bones. And just like Jesus to the good thief crucified next to him, Prometheus in his dire condition is more animated by compassion than rage.

“For think not that because I suffer therefore I would behold all others suffer too.”

To the chorus who inquires as to the origin of his suffering, Prometheus lists the many gifts besides his famous fire which he has given humankind. Before him humans had moved “as in a dream”, and did not yet have houses like those of the proverbial three little pigs made of straw or wood or brick. Instead they lived in holes in the ground- like ants. Before Prometheus human beings did not know when the seasons were coming or how to read the sky. From him we received numbers and letters, learned how to domesticate animals and make wheels. It was he who gave us sails for plying the seas, life in cities, and the art of making metals. What the ancient myth of Prometheus shows us at the very least is that the ancient Greeks were conscious of the historical nature of technological development- a consciousness that would be revived in the modern era and continues into our own.

In Aeschylus’ play Prometheus holds a trump card. He knows not only that Zeus will be overthrown, but who is destined to do the deed. Defiant before Zeus’ messenger, Hermes and his winged shoes, Prometheus Bound ends with the Titan hurled down a chasm.

Like all good, and many not so good, stories, Prometheus Bound had sequels. Only fragments remain of the two plays that followed Prometheus Bound, and again, like most sequels, they are disappointments. In the first sequel Zeus frees the Titans he has imprisoned in anticipation of his reconciliation with Prometheus in the final play. That is, Prometheus eventually reconciles with his tormentor; centuries later many would find themselves unable to stomach this.

Indeed, it would be some two and a half millennia after Hesiod had brought Prometheus to life with his poetry that yet another genius poet and playwright- Percy Bysshe Shelley- would transform the ancient Greek myth into a symbol of the Enlightenment and a newly emergent relationship with both knowledge and power.

Europeans in the 18th and early 19th centuries were a people in search of an alternative history, something other than their Christian inheritance, though it should be said that by the middle of the 19th century they had turned their prior hatred of the “dark ages” into a new obsession. In the mid to late 1800s they would go all gothic, including a new fascination with the macabre. A host of 19th century thinkers, including Shelley’s brilliant wife, would help bring about this transition from bright-sky neo-classicism and its worship of reason, which gave us our Greco-Roman national capital among other things, to a darker and more pessimistic sense of the gothic and a romanticism tied to our emotions and the unconscious, including their dangers.

Percy Shelley’s Prometheus as presented in his play Prometheus Unbound was an Enlightenment rebel. As the child resembles the parent, so the generation of Enlightenment and revolution could see in themselves their own Promethean origins. Not only had they shattered nearly every sacred shibboleth and not just asked but answered hitherto unasked questions of the natural world- their motto being, in Kant’s famous words, “dare to know”- they had thrown out (America) and over (France) the world’s most powerful kings- earthbound versions of Zeus- and gained in the process newfound freedoms and dignity.

Shelley says as much in his preface:

“But in truth I was averse from a catastrophe so feeble as that of reconciling the Champion with the Oppressor of mankind.”

There is no way Shelley is going to present Prometheus coming to terms with a would-be omnipotent divine tyrant like Zeus. Our Titan is going to remain defiant to the end, and Zeus’s days as the king of the gods are numbered. Yet, it won’t be our hero that eventually knocks Zeus off of his throne; it will be a god- the Demogorgon- not a Titan, who overthrows Zeus and frees Prometheus. There are many theories about who or what Shelley’s Demogorgon is- it is a demon in Milton, a rather un-omnipotent architect, a bumbling version of Plato’s demiurge, in a play by Voltaire- but the theory I like best is that Shelley was playing with the word demos, or people. The Demogorgon in this view isn’t a god- it’s us- that is, if we are as brave as Prometheus in standing up to tyrants. Indeed, it is with this lesson in standing up for justice that the play ends:

To defy Power which seems omnipotent

To love and bear to hope till

Hope creates

From its own wreck the thing it contemplates

Neither to change nor falter nor repent this like thy glory

Titan is to be

Good great and joyous beautiful and free

     This is alone Life Joy Empire and Victory

Now, Shelley’s Prometheus, just like the character in Hesiod and Aeschylus, is also a bringer of knowledge, and even more so. Yet, what makes Shelley’s Titan different is that he has a very different idea of what this knowledge is actually for.

Shelley was well aware that, given the brilliant story of the rebellion in heaven and the Fall of Adam and Eve that had been imagined by Milton, Christians and anti-Christians might easily confuse Prometheus with another rebellious character- Lucifer, or Satan. If Milton had unwittingly turned Satan into a hero in his Paradise Lost, the contrast with Shelley’s Prometheus would reveal important differences. In his preface to Prometheus Unbound he wrote:

The only imaginary being resembling in any degree Prometheus is Satan and Prometheus is in my judgment a more poetical character than Satan because in addition to courage and majesty and firm and patient opposition to omnipotent force he is susceptible of being described as exempt from the taints of ambition, envy, revenge and a desire for personal aggrandizement which in the Hero of Paradise Lost interfere with the interest. The character of Satan engenders in the mind a pernicious casuistry which leads us to weigh his faults with his wrongs and to excuse the former because the latter exceed all measure.

That is, Prometheus is good in a way Satan is not, because his rebellion, which includes the granting of knowledge, is a matter of justice, not of a quest for power. This was the moral lesson Shelley thought the Prometheus story could teach us about our quest for knowledge. And yet somehow we managed to get this all mixed up.

The Prometheus myth has cross-fertilized and merged with Judeo-Christian myths about the risks of knowledge and our lust for god-like power. Prometheus the fire bringer is placed in the same company as Satan and his rebellious angels, Eve and her eating from the “Tree of Knowledge”, the story of the construction of the Tower of Babel. The Promethean myth is most often used today as either an invitation to the belief that “knowledge is power” or a cautionary tale of our own hubris and the need to accept limits to our quest for knowledge lest we bring down upon ourselves the wrath of God. Seldom is Shelley’s distinction acknowledged between knowledge used in the service of justice and freedom, which is good- his Prometheus- and knowledge used in the service of power- his understanding of Milton’s Satan.

The idea that the Prometheus myth wasn’t only or even primarily a story about hidden knowledge- either its empowerment or its dangers- but a tale about justice is not something Shelley invented out of whole cloth, but a latent meaning that could be found in both Hesiod and Aeschylus. In both, Prometheus gives his technological gifts not because he is trying to uplift the human race, but because he is trying to undo what he thinks was a rigged lottery of creation in which human beings, in contrast to the other animals, were given the short end of the stick. If the idea of hidden knowledge comes into play, it is that the veiled workings of nature have been used by power- that is, Zeus- to secure dutiful worship- a rip-off and injustice Prometheus is willing to risk eternal torment to amend.

Out of the three varieties of the Promethean myth- that of encouraging the breaking of boundaries in the name of empowerment (a pro-science and transhumanist interpretation), that of cautioning us against the breaking of these boundaries (a Christian, environmentalist and bioconservative interpretation), and that of knowledge as a primary means and secondary end in our quest for justice (a techno-progressive interpretation?)- it is the last that has largely faded from our view. Oddly enough, it would be Shelley’s utterly brilliant and intellectually synesthetic young wife who would bear a large part of the responsibility for shifting the Prometheus myth from a complex and therefore more comprehensive trichotomy to a much more simplistic and narrowing dichotomy.

Mary Shelley’s Frankenstein; or, The Modern Prometheus would turn the quest for knowledge itself into a potential breaking of boundaries that might even be considered a sort of evil. Scientists in the public mind would play the role of Milton’s Satan- hero or villain. Unfortunately, in drawing this distinction with such artistic genius, Mary Shelley helped ensure that the paramount Promethean concern with justice would become lost. I’ll turn to that tale next time…

I’m going to London! (sort of)

For interested readers of Utopia or Dystopia: this Sunday, October 20th, there will be a live discussion on The Transhumanist Wager held by the London Futurists. I’ve been invited.

Here’s the announcement:

This “London Futurists Hangout on Air” will feature a live discussion between Zoltan Istvan and a panel of leading futurists and transhumanists: Giulio Prisco, Rick Searle, and Chris T. Armstrong. Questions covered will include:

• Which aspects of the near future depicted in the book are attractive, and which are abhorrent?

• What do panellists think of the basic concept of the transhumanist wager, and of “the three laws of transhumanism” stated in the book?

• What are the best ways for transhumanists and radical futurists to use fiction to engage the wider public in awareness of the positive potential of transhumanist technologies?

Live questions

Futurists who want to join the discussion about the book and the issues raised are welcome to view the discussion live on Google+ or YouTube.

Viewers of the live broadcast on Google+ will be able to vote in real time on questions and suggestions to be discussed by the panellists as the Hangout proceeds.

Here’s how to view or participate:

This event will take place between 7pm and 8.30pm UK time on Sunday 20th October.

You can view the event:

• On Google+, via the page https://plus.google.com/104281987519632639471/posts – where you’ll also be able to vote on questions to be submitted to the panellists

• Via YouTube (the URL will be published here 15 minutes prior to the start of the event).

There is no charge to participate in this discussion.

Note: there is no physical location for this meetup (despite the postcode given above – in compliance with something that the Meetup software seems to insist upon).

No Spoilers please – until the Hangout starts

Wish me luck...