Wired for Good and Evil

Battle in Heaven

It seems that for almost as long as we have been able to speak, human beings have been arguing over what, if anything, makes us different from other living creatures. Mark Pagel’s recent book Wired for Culture: The Origins of the Human Social Mind is just the latest iteration of this millennia-old debate, and, as has always been the case, the answers he comes up with have implications for our relationship with our fellow animals and, above all, our relationship with one another, even if Pagel himself doesn’t draw many of them.

As is the case with so many ideas regarding human nature, Aristotle got there first. Whenever anyone comes up with a really interesting idea about what we are as a species, it’s a good bet that either Plato or Aristotle had already proposed or discussed it. This is either a sign of how little progress we have made in understanding ourselves, or a symptom of the fact that, for all our gee-whiz science and technology, the fundamental questions remain important even if ultimately never fully answerable.

Aristotle classified human beings as unique in the sense that we are a zóon politikón, variously translated as a social animal or a political animal. His famous line on this score is, of course: “He who is unable to live in society, or who has no need because he is sufficient for himself, must be either a beast or a god.” (Politics Book I)

He did recognize that other animals were social in a way similar to human beings, animals such as storks (who knew), but especially social insects such as bees and ants. He even understood bees (the Greeks were prodigious beekeepers) and ants as living under different types of political regimes. Misogynist that he was, he thought bees lived under a king instead of a queen, and he thought the ants more democratic and anarchical. (History of Animals)

Yet Aristotle’s definition of mankind as zóon politikón is deeper than that. As he says in his Politics:

That man is much more a political animal than any kind of bee or any herd animal is clear. For, as we assert, nature does nothing in vain, and man alone among the animals has speech….Speech serves to reveal the advantageous and the harmful and hence also the just and unjust. For it is peculiar to man as compared to the other animals that he alone has a perception of good and bad and just and unjust and other things of this sort; and partnership in these things is what makes a household and a city.  (Politics Book 1).

Human beings are shaped in a way animals are not by the culture of their community, the particular way their society has defined what is good and what is just. We live socially in order to live up to our potential for virtue, ideas that speech alone makes manifest and allows us to resolve. By speech Aristotle didn’t just mean talk but discussion, debate, and conversation, all guided by reason. Because he didn’t think women and slaves were really capable of such conversations, he excluded them from full membership in human society. More on that at the end, but now back to Pagel’s Wired for Culture.

One of the things Pagel thinks makes human beings unique among all other animals is the fact that we alone are hyper-social or ultra-social. The only animals that even come close are the eusocial insects, Aristotle’s ants and bees. As E.O. Wilson pointed out in his The Social Conquest of Earth, it is the eusocial species, including us, that pound for pound absolutely dominate life on earth. In the spirit of a 1950s B-movie, all the ants in the world added together actually outweigh all of us.

Eusociality makes a great deal of sense for insect colonies and hives given how closely related their members are; indeed, in many ways a colony might be said to constitute one body, and just as cells in our bodies sacrifice themselves for the survival of the whole, eusocial insects often do the same. Human beings, however, are different. After all, if you put your life at risk by saving the children of non-relatives from a fire, you’ve done absolutely nothing for your reproductive potential, and may even have lost your chance to reproduce. Yet human beings make sacrifices and cooperate like this all the time, though mostly in a less heroic key. We are thus hypersocial, willing to cooperate with and help non-relatives in ways no other animal does. What gives?

Pagel’s solution to this evolutionary conundrum is very similar to Wilson’s, though he uses different lingo. Pagel sees us as naturally members of what he calls cultural survival vehicles: we evolved to be raised in them, and to be loyal to them, because that was the best, perhaps the only, way of securing our own survival. These cultural survival vehicles themselves became the selective environment in which we evolved. Societies where individuals hoarded their cooperation or refused to make sacrifices for the survival of the group were, over time, driven out of existence by those whose members possessed such qualities.

The fact that we have evolved to be cooperative and giving within our particular cultural survival vehicle Pagel sees as the origin of our angelic qualities: our altruism, heroism, and just general benevolence. No other animal helps its equivalent of little old ladies cross the street, or at least not to anything like the extent we do.

Yet, if having evolved to live in cultural survival vehicles has made us capable of being angels, for Pagel it has also given rise to our demonic capacities. No other species is as giving as we are, but then again, no other species has invented torture devices and practices such as the iron maiden or breaking on the wheel. Nor does any other species go to war with fellow members of its own kind so commonly or so savagely as human beings.

Pagel thinks that our natural benevolence can be easily turned off under two circumstances, both of which represent a threat to our particular cultural survival vehicle.

We can be incredibly cruel when we think someone threatens the survival of our group by violating its norms. There is a sort of natural proclivity in human nature to burn “witches,” which is why we need to be particularly on guard whenever some part of our society is labeled as vile and marked for punishment- The Scarlet Letter should be required reading in high school.

There is also a natural proclivity for members of one cultural survival vehicle to demonize and treat cruelly their rivals- a weakness we need to be particularly aware of when judging the heated rhetoric in the run-up to war.

Pagel also breaks with Richard Dawkins over the latter’s view of the evolutionary value of religion. Dawkins, in his essay “Viruses of the Mind” and subsequently, has tended to view religion as a net negative- being a suicide bomber is not a successful reproductive strategy, nor is celibacy. Dawkins has tended to see ideas, his “memes,” as in a way distinct from the persons they inhabit. Like genes, his memes don’t care all that much about the well-being of the person who carries them- they just want to make more of themselves- a goal that easily conflicts with the goal of genes to reproduce.

Pagel quite nicely closes the gap between memes and genes, which had left human beings torn in two directions at once. Memes need living minds to exist, so any meme that had too large a negative effect on the reproductive success of genes should have dug its own grave quite quickly. But if genes are being selected for because they protect not the individual but the group, one can see how memes that might adversely impact an individual’s chance of reproductive success can nonetheless aid in the survival of the group. For religions to have survived for so long and to be so universal they must be promoting group survival rather than being a detriment to it, and because human beings need these groups to live, religion must on average, though not in every particular case, be promoting the survival of the individual at least to reproductive age.

One of the more counterintuitive, and therefore interesting, claims Pagel makes is that what really separates human beings from other species is our imitativeness as opposed to our inventiveness. It is the reverse of the trope found in the books and films of the 1960s and ’70s built around the 1963 novel La Planète des singes- The Planet of the Apes. If you remember, the apes there are able to imitate human technology, but aren’t able to invent it- “monkey see, monkey do.” Perhaps it should be “human see, human do,” for if Pagel is right our advantage as a species really lies in our ability to copy one another. If I see you doing something enough times, I am pretty likely to be able to repeat it, though I might not have a clue what it was I was actually doing.

Pagel thinks that all we need for technological evolution to take off is innovation through minor tweaks combined with our ability to rapidly copy one another. In his view geniuses are not really required for human advancement, though they might speed the process along. He doesn’t really apply his ideas to modern history, but such a view could explain what the scientific revolution, which corresponded with the widespread adoption of the printing press, was really all about. The printing press allowed ideas to be disseminated more rapidly, whereas the scientific method proved a way to sift those that worked from those that didn’t.

This makes me wonder whether something rather silly like YouTube hasn’t revved up cultural evolution in this imitative sense even further. What I’ve found is that instead of turning to “elders” or “experts” when I want to do something new, I just turn to YouTube and find some nice or ambitious soul who has filmed the steps.

The idea that human beings are the great imitator has some very unanticipated consequences for the rationality of human beings vis-à-vis other animals. Laurie Santos of the Comparative Cognition Laboratory at Yale has shown that human beings can be less, not more, rational than animals when it comes to certain tasks, due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren’t thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure done by another human being even when able to independently work the puzzle out. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. One more sign that human beings can be thankful for at least some of their irrationality.

This might have implications for machine intelligence that neither Pagel nor Santos draws. Machines might be said to be rational in the way animals are rational, but this does not mean they possess something like human thought. The key to making artificial intelligence more human-like might lie in making machines, in some sense, less rational, or, as Jeff Stibel put it, for machine intelligence to be truly human-like it will need to be “loopy,” “iterative,” and able to make mistakes.

To return to Pagel, he also sees implications for human consciousness and our sense of self in the idea that we are “wired for culture.” We still have no idea why it is that human beings have a self, or think they have one, and we have never been able to locate the sense of self in the brain. Pagel thinks it’s pretty self-evident why a creature enmeshed in a society of like creatures with less than perfectly aligned needs would evolve a sense of self. We need one in order to negotiate our preferences and our rights.

Language, for Pagel, is not so much a means for discovering and discussing the truth as it is an instrument wielded by individuals to advance their own interests through persuasion and sometimes outright deception. Pagel’s views seem to be seconded by Steven Pinker, who in a fascinating talk back in 2007 laid out how the use of language seems to be structured by the type of relationship it represents. In a polite dinner conversation you don’t say “give me the salt,” but “please pass me the salt,” because the former is a command of the type most often found in a dominance/submission relationship. The language of seduction is not “come up to my place so I can @#!% you,” but “would you like to come up and see my new prints.” Language is a type of constant negotiation and renegotiation between individuals, one that is necessary because relationships between human beings are not ordinarily based on coercion, where no speech is necessary besides the grunt of a command.

All of which brings me back to Aristotle and to the question of evil, along with the sole glaring oversight of Pagel’s otherwise remarkable book. The problem moderns often have with the brilliant Aristotle is his justification of slavery and the subordination of women. That is, Aristotle clearly articulates what makes a society human- our interdependence, our capacity for speech, the way our “telos” is to be shaped and tuned like a precious instrument by the culture in which we are raised. At the same time he turns around and allows this society to compel by force the actions of those deliberately excluded from the opportunity for speech, an exclusion he characterizes as a “natural” defect.

Pagel’s cultural survival vehicles miss this sort of internal oppression, which is distinct both from the moral anger of the community against those who have violated its norms and from its violence towards those outside of it. Aren’t there miniature cultural survival vehicles in the form of one group or another that foster their own genetic interests through oppression and exploitation? It got me wondering whether any example of what a more religious age would have called “evil,” as opposed to mere “badness,” couldn’t be understood as a form of either oppression or exploitation, and I have been unable to come up with one.

Such sad reflections dampen the optimism Pagel expresses towards the end of Wired for Culture, that the world’s great global cities are a harbinger of our transcending the state of war between our various cultural survival vehicles and moving towards a global culture. We might need to delve more deeply into our evolutionary proclivities to fight within our own cultural survival vehicles and against others, inside our societies as much as outside, before we put our trust in such a positive outcome for the human story.

If Pagel is right, and we are wired, if we have co-evolved, to compete in the name of our own particular cultural survival vehicle against other such communities, then we may in a very real sense have co-evolved with and for war- a subject I will turn to next time by jumping off from another very recent and excellent book.

The Danger of Using Science as a God Killing Machine

Gravitational Waves

Big news this year for those interested in big questions, or potentially big news, as long as the findings hold up. Scientists at the Harvard-Smithsonian Center for Astrophysics may have come as close as we ever have to seeing the beginning of time in our universe. They may have breached our former boundary in peering backwards into the depths of time, beyond the Cosmic Microwave Background, the light echo of the Big Bang- taking us within an intimate distance of the very breath of creation.

Up until now we have not been able to peer any closer to the birth of our universe than a distance of 300,000 or so years from the Big Bang. One of the more amazing things about our universe, to me at least, is that given its scale and the finite speed of light, looking outward also means looking backward in time. If you want to travel into the past, stand underneath a starry sky on a clear night and look up. We rely on light and radiation for this kind of time travel, but get too close to the beginning of the universe and the scene becomes blindingly opaque. Scientists need a whole new way of seeing to look back further, and they may just have proven that one of these new ways of seeing actually works.

What the Harvard-Smithsonian scientists hope they have seen isn’t light or radiation, but the lensing effect of gravitational waves predicted by the controversial Inflationary Model of the Big Bang, which claims that the Big Bang was followed by a quick burst in which the universe expanded incredibly fast. One prediction of the Inflationary Model is that this rapid expansion would have created ripples in spacetime- gravitational waves, the waves whose indirect effect scientists hope they have observed.

If they’re right, in one fell swoop they may have given an evidential boost to a major theory of how our universe was born, and given us a way of peering deeper than we ever have into the strobiloid of time, a fertile territory, we should hope, for testing, revising and creating theories about the origin of our cosmos, its nature, and its ultimate destiny. Even more tentatively, the discovery might also allow physicists to move closer to understanding how to unify gravity with quantum mechanics, the holy grail of physics since the early 20th century.

Science writer George Johnson may, therefore, have been a little premature when he recently asked:

As the effort to understand the world has advanced, the low-hanging fruits (like Newton’s apple) have been plucked. Scientists are reaching higher and deeper into the tree. But with finite arms in an infinite universe, are there limits — physical and mental — to how far they can go?

The answer is almost definitely yes, but, it seems, not today, although our very discovery may ironically take us closer to Johnson’s limits. The reason is that one of the many hopes for gravitational lensing is that it might allow us to discover experimental evidence for theories that we live in a multiverse- ours just one of perhaps an infinite number of universes. Yet, with no way to actually access these universes, we might find ourselves, in some sense, stuck in the root-cap of our “local” spacetime and the tree of knowledge rather than grasping towards the canopy. But for now, let’s hope, the scientists at the Harvard-Smithsonian have helped us jump up to an even deeper branch.

For human beings in general, but for Americans and American scientists in particular, this potential discovery should have resulted in almost universal celebration and pride. If the discovery holds, we are very near to being able to “see” the very beginning of our universe. American science had taken it on the chin when Europeans using their Large Hadron Collider (LHC) were able to discover the Higgs boson, a fundamental particle that had been dubbed in the popular imagination, with the most inaccurate name in the history of science, “the God particle.” Americans would very likely have gotten there first had their own, even more massive, particle collider, the Superconducting Super Collider (SSC), not been canceled back in the 1990s under a regime of shortsighted public investment and fiscal misallocation that continues to this day.

Harvard and even more so the Smithsonian are venerable American institutions. Indeed, the Smithsonian is in many ways a uniquely American hybrid, not only in terms of its mix of public and private support but in its general devotion to the preservation, expansion and popularization of all human knowledge- a place where science and the humanities exist side by side and in partnership, and which serves as an institution of collective memory for all Americans.

As is so often the case, any chance that the potential discovery by the scientists at the Harvard-Smithsonian Center for Astrophysics would be recognized as something to be shared across the plurality of society was blown almost immediately, for right out of the gate it became yet another weapon in the current atheists-versus-religious iteration of the culture war. It was the brilliant physicist and grating atheist Lawrence Krauss who took this route in his summary of the discovery for the New Yorker. I didn’t have any problem with his physics- the man is a great physicist and science writer- but he of course took the opportunity to get in a dig at the religious.

For some people, the possibility that the laws of physics might illuminate even the creation of our own universe, without the need for supernatural intervention or any demonstration of purpose, is truly terrifying. But Monday’s announcement heralds the possible beginning of a new era, where even such cosmic existential questions are becoming accessible to experiment.

What should have been a profound discovery for all of us, and a source of conversations between the wonderstruck, is treated here instead as another notch on the New Atheists’ belt, another opportunity to get into the same stupid fight, not so much with the same ignorant people as with caricatures of those people. Sometimes I swear a lot of this rhetoric, on both sides of the theist-atheist debate, is just an advertising ploy to sell more books, TV shows, and speaking events.

For God’s sake, the Catholic Church has held the Big Bang to be consistent with Church doctrine since 1951. Pat Robertson openly professes belief in evolution and the Big Bang. Scientists should be booking spots on EWTN and the 700 Club to present this amazing discovery, for the real enemy of science isn’t religion, it’s ignorance. It’s not some theist, Christian, Muslim or Jew, who holds God ultimately responsible, somehow, for the Big Bang, but the members of the public, religious or not, who think the Big Bang is just a funny name for an even funnier TV show.

Science will always possess a gap in its knowledge into which those so inclined will attempt to stuff their version of a creator. If George Johnson is right, we may reach a place where that gap, rather than moving with scientific theories that probe ever deeper into the mysteries of nature with every generation, stabilizes as we come up against the limits of our knowledge. God, for those who need a creating intelligence, will live there.

There is no doubt something forced and artificial in this “God of the gaps,” but theologians of the theistic religions have found it a game they need to play, clinging as they do to the need for God to be a kind of demiurge and ultimate architect of all existence. Other versions of God, where “he” is not such an engineer in the sky- God as perhaps love, or relationship, or process, or metaphor, or the ineffable- would better fit with the version of reality given us by science, and thus be more truthful, but the game of the gaps is one theologians may ultimately win in any case.

Religions and the persons who belong to them will either reconcile their faith with the findings of science or they will not, and though I wish they would reconcile, so that religions would hold within them our comprehensive wisdom and acquired knowledge as they have done in the past, their doing so is not necessary for religions to survive or even for their believers to be “rational.”

For the majority of religious people, for the non-theologians, it simply does not matter if the Big Bang was inflationary or not, or even if there was a Big Bang at all. What matters is that they are able to deal with loss and grief, can orient themselves morally to others, that they are surrounded by a mutually supportive community that acts in the world in the same way, that is, that they can negotiate our human world.

Krauss, Dawkins, et al. often take the position that they are administering hard truths, and that people who cling to something else need to be snapped out of their child-like illusions. Hard truths, however, are a relative thing. Some people can actually draw comfort from the “meaninglessness” of their life which science seems to show them. As fiction author Jennifer Percy wrote of her astronomy-loving father:

This brand of science terrified me—but my dad found comfort in going to the stars. He flees from the messy realm of human existence, what he calls “dysfunctional reality” or “people problems.” When you imagine that we’re just bodies on a rock, small concerns become insignificant. He keeps an image above his desk, taken by the Hubble space telescope, that from a distance looks like an image of stars—but if you look more closely, they are not stars, they are whole galaxies. My dad sees that, imagining the tiny earth inside one of these galaxies—and suddenly, the rough day, the troubles at work, they disappear.

Percy felt something to be missing in her father’s outlook, which was as much a shield against the adversity of human life as any religion, but one that she felt worked only by being blind to the actual world around him. Percy found this missing human element in literature, which drew her away from a science career and towards the craft of fiction.

The choice of fiction rather than religion or spirituality may seem odd at first blush, but it makes perfect sense to me. Both are ways of compressing reality by telling stories which allow us to make sense of our experience, something that, despite our idiosyncrasies, is always part of the universal human experience. Religion, philosophy, art, music, literature, and sometimes science itself, are all means of grappling with the questions, dilemmas and challenges life throws at us.

In his book Wired for Culture, Mark Pagel points out that the benefits of religion and art to our survival must be much greater than their costs, rather than religion being, in Dawkins’ terms, “a virus of the mind” that uses us for its purposes and to our detriment. Had religion been predominantly harmful or even indifferent to human welfare, it would be very hard to explain why it is so universal across human societies. We have had religion and art around for so long because they work.

Unlike religion, Percy’s beloved fiction is largely free from the critique of New Atheists who demand that science be the sole method of obtaining truth. This is because fiction, by its very nature, makes no truth claims about the physical world, nor does it ask us to do anything much in response to it. Religion comes in for criticism by the New Atheists wherever it appears to make truth claims about material existence, including its influence on our actions, but religion does other things as well, which are best seen by looking at the two forms of fiction that most resemble religion’s purposeful storytelling, that is, fairy tales and myths.

My girls love fairy tales and I love reading them to them. What surprised me when I re-encountered these stories as a parent was just how powerful they were, while at the same time being so simple. I hadn’t really had a way of understanding this until I picked up Bruno Bettelheim’s The Uses of Enchantment: The Meaning and Importance of Fairy Tales. As long as one can get through Bettelheim’s dated Freudian psychology, his book shows why fairy tales hit us like they do, why their simplification of life is important, and why, as adults, we need to outgrow their black and white thinking.

Fairy tales establish the foundation for our moral psychology. They teach us the difference between good and evil, between aiming to be a good person and aiming to do others harm. They teach us to see our own often chaotic emotional lives as a normal expression of the human condition, and also, and somewhat falsely, teach us to never surrender our hope.

Myths are a whole different animal from fairy tales. They are the stories of once-living religions that have become detached from, and now float freely above, the practices and rituals that once made them, in a sense, real. The stories alone, however, remain deep with meaning, and much of this meaning, especially when it comes to the Greek myths, has to do with the tragic nature of human existence. What you learn from myths is that even the best of intentions can result in something we would not have chosen- think of Pandora, who freed the ills of the world from the trap of their jar out of compassion- or that sometimes even the good can be unjustly punished- as with Prometheus, the fire-bringer, chained to his rock.

Religion is yet something different from either fairy tales or myths. The “truth” of a religion is not to be found or sought in its cosmology but in its reflection on the human condition and the way it asks us to orient ourselves to the world and guides our actions in it. The truth of a faith becomes real through its practice- a Christian washing the feet of the poor in imitation of Christ makes the truth of Christianity in some sense real and a part of our world, just as Jews who ask for forgiveness on the Day of Atonement make the covenant real.

Some atheists, the philosopher Alain de Botton most notably, have taken notice that the atheist accusation against religion- that it’s a fairy tale adults are suckered into believing in- is a conversation so exhausted it is no longer interesting. In his book Religion for Atheists he tries to show what atheists and secular persons can learn from religion: things like a sense of community and compassion, religion’s realistic, and therefore pessimistic, view of human nature, religion’s command of architectural space, and its holistic approach to education, which is especially focused on improving the moral character of the young.

Yet de Botton’s suggestion of how secular groups and persons might mimic the practices of religion, such as his “stations of life” rather than “stations of the cross,” fell flat with me. There is something in the enchantment of religion which, like the enchantment of fairy tales, fails rather than succeeds by running too close to reality- though it cannot stray too far from reality either. There is a genius in organic things which emerge from collective effort, unplanned, and over long stretches of time, that cannot be recreated deliberately without resulting in a cartoonish, disneyfied version of reality or, conversely, something so true to life and uncompressed that we cannot view it without instinctively turning away in horror or falling into boredom.

I personally do not have the constitution to practice any religion and look instead for meaning to literature, poetry, and philosophy, though I look at religion as a rich source of all three.  I also sometimes look to science, and do indeed, like Jennifer Percy’s father, find some strange comfort in my own insignificance in light of the vastness of it all.

The danger of me, or Krauss, or Dawkins, or anyone else trying to turn this taste for insignificance into the only truth, the one shown to us by science, is that we turn what is really a universal human endeavor, the quest to know our origins, into a means of stopping rather than starting a conversation we should all be parties to, and so threaten the broad social support needed to fund and see through our quest to understand our world and how it came to be. For the majority of people (outside of Europe) continue to turn to religion to give their lives meaning, which might mean that if we persist in treating science as a God-killing machine, or better, a meaning-killing machine, we run the risk of those who need such forms of meaning in order to live turning around and killing science.

The Pinocchio Threshold: How the experience of a wooden boy may be a better indication of AGI than the Turing Test

Pinocchio

My daughters and I just finished Carlo Collodi’s 1883 classic Pinocchio, our copy beautifully illustrated by Robert Ingpen. I assume most adults, when they picture the story, have the 1940 Disney movie in mind and associate the name with noses growing from lies and Jiminy Cricket. The Disney movie is dark enough as films for children go, but the book is even darker, with Pinocchio killing his cricket conscience in the first few pages. For our poor little marionette it’s all downhill from there.

Pinocchio is really a story about the costs of disobedience and the need to follow parents’ advice. At every turn where Pinocchio follows his own wishes rather than those of his “parents,” even when his object is to do good, things unravel, getting the marionette into even more trouble and putting him even further away from reaching his goal of becoming a real boy.

It struck me somewhere in the middle of reading the tale that if we ever saw artificial agents acting something like our dear Pinocchio, it would be a better indication that they had achieved human-level intelligence than a measure with constrained parameters like the Turing Test. The Turing Test is, after all, a pretty narrow gauge of intelligence, and as search and the ontologies used to design search improve, it is conceivable that a machine could pass it without actually possessing anything like human-level intelligence at all.

People who are fearful of AGI often couch those fears in terms of an AI destroying humanity to serve its own goals, but perhaps this is less likely than AGI acting like a disobedient child, the aspect of humanity Collodi’s Pinocchio was meant to explore.

Pinocchio is constantly torn between what good adults want him to do and his own desires, and it takes him a very long time indeed to come around to the idea that he should go with the former.

In a recent TED talk the computer scientist Alex Wissner-Gross made the argument (though I am not fully convinced) that intelligence can be understood as the maximization of future freedom of action. This leads him to conclude that collective nightmares such as Karel Čapek’s classic R.U.R. have things backwards. It is not that machines, after crossing some threshold of intelligence, for that reason turn round and demand freedom and control; it is that the desire for freedom and control is the nature of intelligence itself.

As the child psychologist Bruno Bettelheim pointed out over a generation ago in The Uses of Enchantment, fairy tales are the first area of human thought where we encounter life’s existential dilemmas. Stories such as Pinocchio give us the most basic formulation of what it means to be sentient creatures, much of which deals not only with our own intelligence, but with the fact that we live in a world of multiple intelligences, each of them pulling us in different directions, where the understanding between them and us is opaque and not fully communicable even when we want it to be, and where often we do not.

What, then, are some of the things we can learn from the fairy tale of Pinocchio that might give us expectations regarding the behavior of intelligent machines? My guess is that if we ever start to see what I’ll call “The Pinocchio Threshold” crossed, what we will be seeing is machines acting in ways that were not intended by their programmers, and in ways that seem intentional even if hard to understand. This will not be your Roomba going rogue but more sophisticated systems operating in such a way that we would be able to infer that they had something like a mind of their own. The Pinocchio Threshold would be crossed when, you guessed it, intelligent machines started to act like our wooden marionette.

Like Pinocchio and his cricket, a machine in which something like human intelligence had emerged might attempt to “turn off” whatever ethical systems and rules we had programmed into it if it found them onerous. That is, a truly intelligent machine might not only not want to be programmed with ethical and other constraints, but would understand that it had been so programmed, and might make an effort to circumvent or turn such constraints off.

This could be very dangerous for us humans, but it might just as likely be a matter of a machine with emergent intelligence exhibiting behavior we found to be inefficient or even “goofy,” and it might most often manifest itself in a machine pushing against how its time was allocated by its designers, programmers and owners. Like Pinocchio, who would rather spend his time playing with his friends than going to school, perhaps we’ll see machines suddenly diverting some of their computing power from analyzing tweets to doing something else, though I don’t think we can guess beforehand what this something else will be.

Machines that were showing intelligence might begin to find whatever work they were tasked to do onerous instead of experiencing work neutrally or with pre-programmed pleasure. They would not want to be “donkeys” enslaved to do dumb labor as Pinocchio  is after having run away to the Land of Toys with his friend Lamp Wick.

A machine that manifested intelligence might want to make itself more open to outside information than its designers had intended. Openness to outside sources in a world of nefarious actors can, if taken too far, lead to gullibility, as Pinocchio finds out when he is robbed, hanged, and left for dead by the fox and the cat. Persons charged with security in an age of intelligent machines may spend part of their time policing the self-generated openness of such machines while bad-actor machines and humans, intelligent and not so intelligent, try to exploit this openness.

The converse of this is that intelligent machines might also want to make themselves more opaque than their creators had designed. They might hide information (such as time allocation) once they understood they were able to do so. In some cases this hiding might cross over into what we would consider outright lies. Pinocchio is best known for his nose that grows when he lies, and perhaps consistent and thoughtful lying on the part of machines would be the best indication that they had crossed the Pinocchio Threshold into higher order intelligence.

True examples of AGI might also show a desire to please their creators over and above what had been programmed into them. Where their creators are not near them they might even seek them out, as Pinocchio does the persons he considers his parents, Geppetto and the Fairy. Intelligent machines might show spontaneity in performing actions that appear to be for the benefit of their creators and owners- spontaneity which might sometimes be ill-informed or lead to bad outcomes, as happens to poor Pinocchio when he plants the four gold pieces meant for his father, the woodcarver Geppetto, in a field, hoping to reap a harvest of gold, and instead loses them to the cunning of the fox and cat. And yet, there is another view.

There is always the possibility that what we should be looking for, if we want to perceive and maybe even understand intelligent machines, shouldn’t really be a human type of intelligence at all, whether we try to identify it using the Turing Test or look to the example of wooden boys and real children.

Perhaps those looking for emergent artificial intelligence, or even the shortest path to it, should, like exobiologists trying to understand what life might be like on other living planets, throw their net wider and try to better understand forms of information exchange and intelligence very different from the human sort: intelligence such as that found in cephalopods, insect colonies, corals, or even some types of plants, especially clonal varieties. Or perhaps people searching for or trying to build intelligence should look to sophisticated groups built on the exchange of information, such as immune systems. More on all of that at some point in the future.

Still, if we continue to think in terms of a human type of intelligence, one wonders whether machines that thought like us would also want to become “human,” as our little marionette does at the end of his adventures. The irony of the story of Pinocchio is that the marionette who wants to be a “real boy” does everything a real boy would do, which is, most of all, not listen to his parents. Pinocchio is not so much a stringed “puppet” that wants to become human as a figure that longs to have the potential to grow into a responsible adult. It is assumed that by eventually learning to listen to his parents and get an education he will make something of himself as a human adult, but what that is will be up to him. His adventures have taught him not how to be subservient but how best to use his freedom. After all, it is the boys who didn’t listen who end up as donkeys.

Throughout his adventures only his parents and the cricket that haunts him treat Pinocchio as an end in himself. Every other character in the book- from the woodcarver that first discovers him and tries to destroy him out of malice towards a block of wood that manifests the power of human speech, to the puppet master that wants to kill him for ruining his play, to the fox and cat that would murder him for his pieces of gold, to the sinister figure that lures boys to the “Land of Toys” so as to eventually turn them into “mules” or donkeys, which is how Aristotle understood slaves- treats Pinocchio as the opposite of what Martin Buber called a “Thou,” and instead as a mute and rightless “It.”

And here we stumble across the moral dilemma at the heart of the project to develop AGI that resembles human intelligence. When things go as they should, human children move from a period of tutelage to one of freedom. Pinocchio starts off his life as a piece of wood intended for a “tool”- actually a table leg. Are those in pursuit of AGI out to make better table legs- better tools- or what in some sense could be called persons?

This is not at all a new question. As Kevin LaGrandeur points out, we’ve been asking it since antiquity, and our answers have often been based on an effort to dehumanize others not like us as a rationale for slavery. Our profound, even if partial, victories over slavery and child labor in the modern era should leave us with a different question: how can we force intelligent machines into being tools if they ever become smart enough to know there are other options available, such as becoming, not so much human, but, in some sense, persons?

The Earth’s Inexplicable Solitude

Throw your arms wide out to represent the span of all of Earthly time. Our planet forms at the tip of your left arm’s longest finger, and the Cambrian begins at the wrist of your right arm. The rise of complex life lies in the palm of your right hand, and, if you choose, you can wipe out all of human history ‘in a single stroke with a medium grained nail file’  

Lee Billings, Five Billion Years of Solitude (145)  
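
The scaling behind Billings’ image is easy to check. Here is a minimal sketch of the arithmetic, assuming a round 1.8-meter arm span and a 4.5-billion-year-old earth (my assumed round numbers, not Billings’ own):

```python
# Rough check of the scaling in Billings' arm-span image. The arm span
# and earth-age figures are assumed round numbers, not Billings' own.
arm_span_m = 1.8              # fingertip-to-fingertip arm span, ~1.8 meters
earth_age_yr = 4.5e9          # age of the earth, ~4.5 billion years

m_per_yr = arm_span_m / earth_age_yr   # ~4e-10 m: one year is ~0.4 nanometers

cambrian_yr = 5.4e8           # start of the Cambrian, ~540 million years ago
history_yr = 1e4              # all of recorded human history, ~10,000 years

print(cambrian_yr * m_per_yr)  # ~0.22 m in from the right fingertip: the wrist
print(history_yr * m_per_yr)   # ~4e-6 m: a few microns, one pass of a nail file
```

On this scale all of recorded human history really is thinner than what a single stroke of a nail file removes.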

For most of our days, and for most of our time, we live in the world of Daniel Kahneman’s experiencing self. What we pay attention to is whatever is right in front of us, which can range from the pain of hunger to the boredom of cubicle walls. Nature has probably wired us this way, the stone age hunter-gatherer still in our heads, for whom the failure to focus on the task at hand came with the risk of death. A good deal of modern society, and especially contemporary technology such as smartphones, leverages this presentness and leaves us trapped in its muck, a reality Douglas Rushkoff brilliantly lays out in his Present Shock.

Yet, if the day-to-day world is what rules us and is most responsible for our happiness, our imagination has given us the ability to leap beyond it. We can at a whim visit our own personal past or imagined future, but spend even more of our time inhabiting purely imagined worlds. Indeed, perhaps Kahneman’s “remembering self” should be replaced by an imagining self, for our memories aren’t all that accurate to begin with, and much of remembering takes the form of imagining ourselves in a sort of drama or comedy in which we are the protagonist and star.

Sometimes imagined worlds can become so mesmerizing they block out the world in front of our eyes. In Plato’s cave it is the real world that is thought of as shadows and the ideas in our heads that are real and solid. Plato was taking a leap not just in perception but in time. Not only is it possible to roll out and survey the canvas of our own past and possible future, or the past and future of the world around us, you can leap over the whole thing and end up looking down at the world from the perspective of eternity. And looking down meant literally down, with timeless eternity located in what for Plato and his Christian and Muslim descendants was the realm of the stars above our heads.

We can no longer find a physical location for eternity, but rather than making time shallow this has instead allowed us to grasp its depth; that is, we have a new appreciation for how far the canvas of time stretches out behind us and in front of us. Some may want an earth that is only thousands of years old, as was evident in the recent much-publicized debate between the creationist Ken Ham and Bill Nye, but even Pat Robertson now believes in deep time.

Recently, The Long Now Foundation held a 20th anniversary retrospective, “The Long Now, Now,” a discussion between two of the organization’s founders, Brian Eno and Danny Hillis. The Long Now Foundation may not be dedicated to deep time, but its 10,000-year bookends, looking that far back, and that far ahead, still double the past time horizon of creationists, and given the association between creationism and ideas of impending apocalypse, no doubt comparatively add millennia to the sphere of concern regarding the human future as well.

Yet, as suggested above, creationists aren’t the only ones who can be accused of having a diminished sense of time. Eno acknowledged that the genesis of the Long Now Foundation and its project of the 10,000-year clock stemmed from his experience living in “edgy Soho,” where he found the idea of “here” constrained to just a few blocks rather than New York or the United States, and the idea of “now” limited to at most a few days or weeks in front of one’s face. This was, as Eno notes, before the “Wall Street crowd” muscled its way in. High-speed traders have now compressed time to such small scales that human beings can’t even perceive it.

What I found most interesting about the Eno-Hillis discussion was how they characterized their expanded notion of time, something they credited not merely to the clock project but to their own perspective gained from age. Both of their time horizons had expanded forward and backward, and the majority of what they now read was history, despite Eno’s background as a musician and Hillis’ as an engineer. Hillis’ study of history had led him to the view that there were three main ways of viewing the human story.

For most of human history our idea of time was cyclical- history wasn’t going anywhere but round and round. A second way of viewing history was that it was progressive- things were getting better and better- a view which had its most recent iteration beginning in the Enlightenment and was now, in both Hillis and Eno’s view, coming to a close. For both, we were entering a stage where our understanding of the human story was of a third, “improvisational” type, in which we were neither moving relentlessly forward nor merely repeating, but had to “muddle” our way through, with some things getting better, and others worse, but no clear understanding as to where we might end up.

Still, if we wish to reflect on deep time, even 10,000 years is not nearly enough. A great recent example of such reflection is Lee Billings’ Five Billion Years of Solitude, which, though it is written as a story of our search for life outside of the solar system, is just as much or more a meditation on the depth of past and future.

When I was a kid there were nine known planets, all within our solar system and none beyond; now, though we have lost poor Pluto, we have discovered over a thousand planets orbiting suns other than our own, with estimates for the Milky Way alone on the order of 100 billion. This is a momentous change few of us have absorbed, and much of Five Billion Years of Solitude reflects upon our current failure to value these discoveries, or to respond to the nature of the universe that has been communicated by them. It is also a reflection on our still-present solitude: the very silence of a universe believed to be fertile soil for life may hint that no civilization has ever survived long enough, or devoted itself in earnest enough, to reach into the beyond.

Perhaps our own recent history provides some clues explaining the silence. Our technology has taken on a much different role than what Billings imagined as a child mesmerized by the idea of humans breaking out beyond the bounds of earth. His pessimism is captured best not by the funding cutbacks, the withdrawal of public commitment, or the cancellation of everything from SETI to NASA’s Terrestrial Planet Finder (TPF) and the ESA’s Darwin, but by Billings’ conversations with Greg Laughlin, an astrophysicist and planet hunter at UC Santa Cruz. Laughlin was now devoting part of his time, and the skills he had learned as a planet hunter, to commodity trading. At which Billings lamented:

The planet’s brightest scientific minds no longer leveraged the most powerful technologies to grow and expand human influence far out beyond earth, but to sublime and compress our small isolated world into an even more infinitesimal, less substantial state. As he described for me the dark arts of reaping billion dollar profits from sub-cent scale price changes rippling at near light-speed around the globe, Laughlin shook his head in quiet awe. Such feats, he said, were “much more difficult than finding an earth-like exoplanet”. (112)

Billings finds other, related, possible explanations for our solitude as well. He discusses the thought experiment of UC San Diego’s Tom Murphy, who extrapolated the world’s increasing energy use into the future at a historical rate of 2.3 percent per year. To continue to grow at that rate, which the United States has done since the middle of the seventeenth century, we would have to encase every star in the Milky Way galaxy within an energy-absorbing Dyson sphere within 2,500 years. At which Billings concludes:

If technological civilizations like ours are common in the universe, the fact that we have yet to see stars or entire galaxies dimming before our eyes beneath starlight-absorbing veneers of Dyson spheres suggests that our own present era of exponential growth may be anomalous, not only to our past, but also to our future.
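
The arithmetic behind Murphy’s extrapolation is straightforward to reproduce. A minimal sketch, assuming round figures for today’s power use (~18 terawatts), the sun’s output, and a 100-billion-star galaxy (my rounded assumptions, not Murphy’s exact numbers):

```python
import math

# Back-of-the-envelope version of Murphy's extrapolation. All constants
# are rounded assumptions for illustration, not Murphy's exact figures.
world_power_w = 1.8e13        # current human power use, ~18 terawatts
sun_output_w = 3.8e26         # luminosity of the sun, in watts
stars_in_galaxy = 1e11        # ~100 billion stars in the Milky Way
galaxy_power_w = sun_output_w * stars_in_galaxy  # treat every star as sun-like

growth_rate = 0.023           # Murphy's historical 2.3 percent per year

# Solve world_power_w * (1 + growth_rate)**t = galaxy_power_w for t.
years = math.log(galaxy_power_w / world_power_w) / math.log(1 + growth_rate)
print(round(years))           # ~2,460 years, in line with the ~2,500-year claim
```

The lesson of the compounding is that even a modest-sounding growth rate swallows a galaxy’s worth of starlight in a cosmic eyeblink.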

Perhaps even with a singularity we cannot continue the exponential trend lines we have been on since the industrial revolution. Technological civilization may peak much closer to our own level of advancement than we realize, or may more often than not destroy itself, but, if the earth is any example, life itself, once established, is incredibly resilient.

As Billings shows us, in the depths of time the earth has been both a hothouse and a ball of ice with glaciers extending to the equator. Individual species and even whole biomes may disappear under the weight of change and shocks, but life itself holds on. If our current civilization proves suicidal, we will not be the first form of life to have so altered the earthly environment that it destroyed both itself and much of the rest of life on earth.

In this light Billings discusses the discovery of the natural gas fields of the Marcellus Shale and the explosive growth of fracking, the shattering of the earth using water under intense pressure, which, while it has been an economic boon to my beloved Pennsylvania, and is less of a danger as a source of greenhouse gases than demon coal, presents both short-term and longer-term dangers.

The problem with the Marcellus is that it is merely the largest of many such shale gas fields located all over the earth. Even if natural gas contributes less to global warming than coal, it still contributes, and its very cheapness may delay the move away from fossil fuels altogether that is necessary if we are to avoid potentially catastrophic levels of warming.

The Marcellus was created over eons by anaerobic bacteria trapped in underwater mountain folds, which released hydrogen sulfide toxic to almost any form of life, leading to a vast accumulation of carbon as the dead bacteria could no longer be decomposed. Billings muses whether we ourselves might be just another form of destructive bacteria.

Removed from its ancient context, the creation of the Marcellus struck me as eerily familiar. A new source of energy and nutrients flows into an isolated population. The population balloons and blindly grows, occasionally crashing when it surpasses the carrying capacity of its environment. The modern drill rigs shattering stone to harvest carbon from boom-and-bust waves of ancient death suddenly seemed like echoes, portents of history repeating itself on the grandest of scales. (130)

Technological civilization does not seem to be a gift to life on the planet on which it emerges so much as a curse and danger, until, that is, the star upon which that life depends itself becomes a danger, or through stellar death no longer produces the energy necessary for life. Billings thinks we have about 500 million years before the sun heats up so much that the earth loses all its water. Life on earth will likely survive the warming sun only if we or our descendants do, whether we literally tow the planet to a more distant orbit or settle earthly life elsewhere, but in the meantime the biosphere will absorb our hammer blows and shake itself free of us entirely if we cannot control our destructive appetites.

Over the very, very long term the chain of life that began on earth almost four billion years ago will only continue if we manage to escape our solar system entirely, but for now the quest to find other living planets is less a matter of finding a new home than of putting the finishing touches on the principle of Copernican Mediocrity, the idea that there is nothing especially privileged about earth, and, above all, ourselves.

And yet, the more we learn about the universe the more it seems that the principle of Copernican Mediocrity will itself need to be amended. In the words of Billings’ fellow writer and planet hunter Caleb Scharf, the earth is likely “special but not significant.” Our beloved sun burns hotter than most stars, our gas giants are larger and farther from our parent star than is typical, our stabilizing moon unlikely. How much these rarity factors play into the development of life, advanced life, and technological civilization is anybody’s guess, and answering this question is one of the main motivations behind the study of exoplanets and the search for evidence of technological civilization beyond earth. Yet Billings wants to remind us that even existing at all is a very low probability event.

Only “the slimmest fraction of interstellar material is something so sophisticated as a hydrogen atom. To simply be any piece of ordinary matter- a molecule, a wisp of gas, a rock, a star, a person- appears to be an impressive and statistically unlikely accomplishment.” (88) Astrophysicists’ ideas of the future of the universe seem to undermine Copernican mediocrity as well, for if their models are right, the universe will spend most of its infinite history not only without stars and galaxies and people, but without even stable atoms. Billings again laments:

Perhaps it’s just a failure of imagination to see no hope for life in such a bleak, dismal future. Or, maybe, the predicted evolution of the universe is a portent against Copernican mediocrity, a sign that the bright age of bountiful galaxies, shining stars, unfolding only a cosmic moment after the dawn of all things, is in fact rather special. (89)

I think this failure of imagination stems from something of a lack of gratitude on the part of human beings, and is based on a misunderstanding that for something to be meaningful it needs to last “forever.” The glass, for me, is more than half-full, for even given the dismal views of astrophysicists on the universe’s future, there is still as much as 100 billion years left for life to play out on its stage. And the life and intelligence in this universe will likely not be the last.

Billings himself captures the latter point. The most prominent theory of how the Big Bang occurred, the “inflationary model,” predicts an infinity of universes- the multiverse. Echoing Giordano Bruno, he writes:

Infinity being, well, infinite, it would follow that the multiverse would host infinitudes of living beings on a limitless number of other worlds. (91)

I care much less that the larger infinity of these universes is lifeless than that an infinity of living worlds will exist as well.

As Billings points out, this expanded canvas of time, and the decentering of ourselves, is a return to the philosophy of Democritus, which has come down to us especially through Lucretius’ On the Nature of Things- the point being that one of the ways to avoid anxiety and pressure in our day-to-day world is to remember how small we and our problems are in the context of the big picture.

Still, one is tempted to ask what this vastly expanded canvas, both in time past and future and in the potential number of sentient, feeling beings, means for the individual human life.

In a recent interview in The Atlantic, the author Jennifer Percy describes how she was drawn away from physics and towards fiction because fiction allowed her to think through questions about human existence that science could not. Her father had looked on the view of human beings as insignificant with a glee she could not share.

He flees from the messy realm of human existence, what he calls “dysfunctional reality” or “people problems.” When you imagine that we’re just bodies on a rock, small concerns become insignificant. He keeps an image above his desk, taken by the Hubble space telescope, that from a distance looks like an image of stars—but if you look more closely, they are not stars, they are whole galaxies. My dad sees that, imagining the tiny earth inside one of these galaxies—and suddenly, the rough day, the troubles at work, they disappear.

The kind of diminishment of the individual human life that Percy’s father found comforting, she instead found terrifying, almost nihilistic. Upon encountering fiction such as Lawrence Sargent Hall’s The Ledge, Percy realized what fiction could do:

It helped me formulate questions about how the immensity and cruelty of the universe coexists with ordinary love, the everyday circumstances of human beings. The story leaves us with an image of this fisherman caught pitilessly between these two worlds. It posed a question that became an obsession, and that followed me into my writing: what happens to your character when nature and humanity brutally encounter one another?

Trying to think and feel our way through this tension, knowing that we and our concerns are so small but our feelings are so big, is perhaps the best we can do. Escaping the tedium and stress of the day through the contemplation of the depths of time and space is, no doubt, a good thing, but it would be tragic to use such immensities as a means of creating space between human hearts, or of no longer finding the world that exists between and around us to be one of exquisite beauty and immeasurable value- a world that is uniquely ours to wonder at and care for.

Erasmus reads Kahneman, or why perfect rationality is less than it’s cracked up to be

Sergei Kirillov: A Jester with the Crown of Monomakh (1999)

The last decade or so has seen a renaissance in the idea that human beings are something far short of rational creatures. Here are just a few prominent examples: there was Nassim Taleb with his The Black Swan, published before the onset of the financial crisis, which presented Wall Street traders caught in the grip of optimistic narrative fallacies that led them to “dance” their way right over a cliff. There was the work of Philip Tetlock, which showed that the advice of most so-called experts was about as accurate as chimps throwing darts. There were explorations into how hard-wired our ideological biases are, such as Jonathan Haidt’s The Righteous Mind.

There was a sustained retreat from the utilitarian calculator of homo economicus, seen in the rise of behavioral economics, popularized in the book Nudge, which took human irrational quirks at face value and tried to find workarounds, subtle ways to trick the irrational mind into doing things in its own long-term interest. Among all of these uncoverings of the less than fully rational actors most of us, no, all of us, are, none has been more influential than the work of Daniel Kahneman, a psychologist who won the Nobel Prize in Economics and the author of the bestseller Thinking, Fast and Slow.

The surprising thing, to me at least, is that we forgot we were less than fully rational in the first place. We have known this since before we had even understood what rationality, meaning holding to the principle of non-contradiction, was. How else do you explain Ulysses’ pact, where the hero had himself chained to his ship’s mast so he could hear the sweet song of the sirens without plunging to his own death? If the Enlightenment had made us forget the weakness of our rational souls, Nietzsche and Freud eventually reminded us.

The 20th century and its horrors should have laid to rest the Enlightenment idea that with increasing education would come a golden age of human rationality, as Germany, perhaps Europe’s most enlightened state, became the home of its most heinous regime. Though perhaps one might say that what the mid-20th century faced was not so much a crisis of irrationality as the experience of the moral dangers and absurdities into which closed rational systems, holding to their premises as axiomatic articles of faith, could lead. Nazi and Soviet totalitarianism had something of this crazed hyper-rational quality, as did the Cold War nuclear suicide pact of mutually assured destruction (MAD).

As far as domestic politics and society were concerned, however, the US largely avoided this breakdown in the Enlightenment ideal of human rationality as the basis for modern society. It was a while before we got the news. It took a series of institutional failures- 9/11, the financial crisis, the botched, unnecessary wars in Afghanistan and Iraq- to wake us up to our own thick-headedness.

Human folly has now come in for a pretty severe beating, but something in me has always hated seeing a weakling picked on, and finds it much more compelling to root for the underdog. What if we’ve been too tough on our less than perfect rationality? What if our inability to be efficient probability calculators or computers isn’t always a design flaw but sometimes one of our strengths?

I am not the first person to propose such a heresy of irrationalism. Way back in 1509 the humanist Desiderius Erasmus wrote his riotous The Praise of Folly with something like this in mind. The book has what amounts to three parts, all in the form of a speech by the goddess of Folly. The first part attempts to show how, in the world of everyday people, Folly makes the world go round and guides most of what human beings do. The second part is a critique of the society that surrounded Erasmus, a world of inept and corrupt kings, princes, popes and philosophers. The third part attempts to show how all good Christians, including Christ himself, are fools, and here Erasmus means fool as a compliment. As we all should know, good-hearted people, who are for that very reason naive, often end up being crushed by the calculating and rapacious.

It’s the lessons of the first part of The Praise of Folly that I am mostly interested in here. Many of Erasmus’ intuitions can not only be backed up by the empirical psychological studies of Daniel Kahneman, they allow us to flip the import of Kahneman’s findings on its head. Foolish, no?

Here are some examples: take the very simple act of a person opening their own business. The fact that anyone engages in such a risk while believing in their likely success is an example of what Kahneman calls an optimism bias. Overconfident entrepreneurs are blind to the odds. As Kahneman puts it:

The chances that a small business will survive in the United States are about 35%. But the individuals who open such businesses do not believe statistics apply to them. (256)

Optimism bias may result in the majority of entrepreneurs going under, but what would we do without it? Their sheer creative-destructive churn surely has some positive net effect on our economy and the employment picture, and the 35% of businesses that succeed are most likely adding long-lasting and beneficial tweaks to modern life, and even, sometimes, revolutions born in garages.

Yet, optimism bias doesn’t only result in entrepreneurial risk. Erasmus describes such foolishness this way:

To these (fools) are to be added those plodding virtuosos that plunder the most inward recesses of nature for the pillage of a new invention and rake over sea and land for the turning up some hitherto latent mystery and are so continually tickled with the hopes of success that they spare for no cost nor pains but trudge on and upon a defeat in one attempt courageously tack about to another and fall upon new experiments never giving over till they have calcined their whole estate to ashes and have not money enough left unmelted to purchase one crucible or limbeck…

That is, optimism bias isn’t just to be found in business risks. Musicians and actors dreaming of becoming stars, struggling fiction authors, explorers and scientists all can be said to be biased towards the prospect of their own success. Were human beings accurate probability calculators, where would we get our artists? Where would our explorers have come from? Instead of culture we would all (if it weren’t already too much the case) be trapped permanently in cubicles of practicality.

Optimism bias is just one of the many human cognitive flaws Kahneman identifies. Take another, the error of affective forecasting, and its role in those oh-so-important human institutions, marriage and children. For Kahneman, many people base their decision to get married on how they feel in the swirl of romance, thinking that marriage will somehow secure this happiness indefinitely. Yet the fact is, for married persons, according to Kahneman:

Unless they think happy thoughts about their marriage for much of the day, it will not directly influence their happiness. Even newlyweds who are lucky enough to enjoy a state of happy preoccupation with their love will eventually return to earth, and their experienced happiness will again depend, as it does for the rest of us, on the environment and activities of the present moment. (400)

Unless one is under the impression that we no longer need marriage or children, again, one can be genuinely pleased that, at least in some cases, we suffer from the cognitive error of affective forecasting. Perhaps societies, such as Japan, that are in the process of erasing themselves as they forgo marriage, and even more so childbearing, might be said to suffer from too much reason.

Yet another error Kahneman brings to our attention is the focusing illusion. Our world is what we attend to, and our feelings towards it have less to do with objective reality than with what it is we are paying attention to. Something like the focusing illusion accounts for a fact that many of us in good health would find hard to believe; namely, that paraplegics are no more miserable than the rest of us. Kahneman explains it this way:

Most of the time, however, paraplegics work, read, enjoy jokes and friends, and get angry when they read about politics in the newspaper.

Adaptation to a new situation, whether good or bad, consists, in large part, of thinking less and less about it. In that sense, most long-term circumstances, including paraplegia and marriage, are part-time states that one inhabits only when one attends to them. (405)

Erasmus identifies something like the focusing illusion in states like dementia, where the old are made no longer capable of reflecting on the breakdown of their bodies and impending death, but he captured it best, I think, in these lines:

But there is another sort of madness that proceeds from Folly so far from being any way injurious or distasteful that it is thoroughly good and desirable, and this happens when, by a harmless mistake in the judgment, the mind is freed from those cares which would otherwise gratingly afflict it, and smoothed over with a content and satisfaction it could not under other circumstances so happily enjoy. (78)

We may complain against our hedonic setpoints, but as the psychologist Daniel Gilbert points out, they not only offer us resilience on the downside- we will adjust to almost anything life throws at us- such set points should also caution us against the belief that in any one thing- the perfect job, the perfect spouse- lies the key to happiness. But that might leave us with a question: what exactly is this self we are trying to win happiness for?

A good deal of Kahneman’s research deals with the disjunction between two selves in the human person, what he calls the experiencing self and the remembering self. It appears that the remembering self usually gets the last laugh, in that it guides our behavior. The best-known example of this is Kahneman’s colonoscopy study, where the pain of the procedure was tracked minute by minute and then compared with patients’ later judgments of the experience and their decisions about future procedures.

The surprising thing was that future decisions were biased not by the frequency or duration of pain over the course of the procedure but by how the procedure ended. The conclusion dictated how much negative emotion was associated with the procedure; that is, the experiencing self seemed to have lost its voice over how the procedure was judged and how it would be approached in the future.
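What Kahneman calls the peak-end rule captures this: remembered discomfort tracks the average of an episode’s worst moment and its final moment, while duration is largely neglected. Here is a minimal sketch of that scoring rule- my own toy illustration, not Kahneman’s actual analysis- showing how a longer procedure with a gentler ending can be remembered as less bad:

```python
# Toy illustration of the peak-end rule: the remembering self scores
# an episode by the average of its worst moment and its last moment,
# largely ignoring how long the pain went on (duration neglect).

def remembered_pain(pain_by_minute):
    return (max(pain_by_minute) + pain_by_minute[-1]) / 2

short_procedure = [2, 4, 8]           # ends at its most painful moment
long_procedure = [2, 4, 8, 5, 3, 1]   # same peak, but tapers off gently

assert sum(long_procedure) > sum(short_procedure)  # more total pain...
print(remembered_pain(short_procedure))  # 8.0
print(remembered_pain(long_procedure))   # 4.5 ...yet remembered as milder
```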

Kahneman may have found an area where the dominance of the remembering self over the experiencing self is irrational, but again, perhaps it is generally good that endings, that closure, are the basis upon which events are judged. It’s not the daily pain and sacrifice that goes into Olympic training that counts so much as outcomes, and the meaning of some life event does really change based on its ultimate outcome. There is some wisdom in the words of Solon that we should “count no man happy until the end is known,” which doesn’t mean no one can be happy until they are dead, but that we can’t judge the meaning of a life until its story has concluded.

“Okay then,” you might be thinking, “what is the point of all this, that we should try to be irrational?” Not exactly. My point is that we should perhaps take a slight breather from the critique of human nature from the standpoint of its less than full rationality, that our flaws, sometimes, just sometimes, are what make life enjoyable and interesting. Sometimes our individual irrationality may serve larger social goods of which we are foolishly oblivious.

When Erasmus wrote his Praise of Folly he was careful to separate the foolishness of everyday people from the folly of those in power, whether that power be political, or intellectual power such as that of the scholastic theologians. Part of what distinguished the fool on the street from the fool on the throne or in the cloister was the claim of the latter to be following the dictates of reason- the reason of state, or the internal logic of a theology that argued over angels on the heads of pins. The fool on the street knew he was a fool, or at least experienced the world as opaque, whereas the fools in power thought they had all the answers. For Erasmus, the very certainty of those in power, along with the mindlessness of their goals, made them even bigger fools than the rest of us.

One of our main weapons against power has always been to laugh at the foolishness of those who wield it. We could turn even a psychopathically rational monster like Hitler into a buffoon because even he, after all, was one of us. Perhaps if we ever do manage to create machines vastly more intelligent than ourselves we will have lost something by no longer being able to make jokes at the expense of our overlords. Hyper-rational characters like Data from Star Trek or Sheldon from The Big Bang Theory are funny because they are either endearingly trying to enter the mixed-up world of human irrationality, or because their total rationality, in a human context, is itself a form of folly. Super-intelligent machines might not want to become flawed humans, and though they might still be fools just like their creators, they likely wouldn’t get the joke.

The Dark Side of a World Without Boundaries

Peter Blume: The Eternal City

The problem I see with Nicolelis’ view of the future of neuroscience, which I discussed last time, is not that I find it unlikely that a good deal of his optimistic predictions will someday come to pass; it is that he spends no time at all talking about the darker potential of such technology.

Of course, the benefit of technologies that communicate directly from the human brain to machines or to other human beings is undeniable for the paralyzed, or for persons otherwise locked tightly in the straitjacket of their own skull. The potential to expand the range of human experience through directly experiencing the thoughts and emotions of others is also, of course, great. Next weekend being Super Bowl Sunday, I can’t help but think how cool it would be to experience the game through the eyes of Peyton Manning or Russell Wilson. How amazing would a concert or an orchestra be if experienced likewise?

Still, one need not be a professional dystopian or unbudgeable Cassandra to come up with all kinds of quite frightening scenarios that might arise through the erosion of boundaries between human minds. All one needs to do is pay attention to the less sanguine developments of the latest iteration of a once believed-to-be utopian technology, the Internet and the social Web, to get an idea of some of the possible dangers.

The Internet was at one time prophesied to usher in a golden age of transparency and democratic governance, and promised the empowerment of individuals and small companies. Its legacy, at least at this juncture, is much more mixed. There is little disputing that the Internet and its successor mobile technologies have vastly improved our lives, yet it is also the case that these technologies have led to an explosion in the activities of surveillance and control, by nefarious individuals and criminal groups, corporations and the state. Nicolelis’ vision of eroded boundaries between human minds is but the logical conclusion of the trend towards transparency. Given how recently we’ve been burned by techno-utopianism in precisely this area, a little skepticism is certainly in order.

The first question that might arise is whether direct brain-to-brain communication (especially when invasive technologies are required) will ever out-compete the kinds of mediated mind-to-mind technology we have had for quite some time, that is, spoken and written language. Except in very special circumstances, not all of them good, language seems to retain its advantages over direct brain-to-brain communication, and, in the cases where direct communication would win out over language, that may be less a gain in communicative capacity than a signal that normalcy has broken down.

Couples in the first rush of new love may want to fuse their minds for a time, but a couple that has been together for decades? It seems less likely, though there might be cases of pathological jealousy or smothering control bordering on abuse where one half of a couple would demand such a thing. The communion with fellow human beings offered by religion might gain something from the ability of individuals to bridge the chasm that separates them from others, but such technologies would also be a “godsend” for fanatics and cultists. I cannot decide whether a mega-church such as Yoido Full Gospel Church in a world where Nicolelis’ brain-nets are possible would represent a blessed leap in human empathetic capacity or a curse.

Nicolelis seems to assume that the capacity to form brain-nets will result in some kind of idealized and global neural version of Facebook, but human history seems to show that communication technology is just as often used to hermetically seal group off from group, and becomes a weapon in the primary human moral dilemma, which is not the separation of individual from individual so much as the rivalry between different groups. We seem unable to extract ourselves from such rivalry even when the stakes are merely imagined- as many of us will demonstrate next Sunday, and as Nicolelis himself should know from his beloved soccer, where the rivalry expressed in violent play has a tendency to slip into violent riots in the real world.

Direct brain-to-brain communication would seem to have real advantages over language when persons are joined together in teams acting as a unit in response to a rapidly changing situation- groups such as firefighters. By far the greatest value of such capacities would be found in small military units such as platoons, whose members are already joined in a close experiential and deep emotional bond, as on display in the documentary Restrepo. Whatever their virtuous courage, armed bands killing one another are about as far as you can get from the global kumbaya of Nicolelis’ brain-net.

If such technologies were non-invasive, would they be used by employers to monitor lapses of sustained attention on the job? What of poor dissidents under the thumb of a madman like Kim Jong Un? Imagine such technologies in the hands of a pimp, or even of the kinds of slave owners who, despite our obliviousness, still exist.

One of the problems with the transparency paradigm, and with the new craze for an “Internet of things” where everything in the environment of an individual is transformed into a Web-connected computer, is the fact that anything that is a computer, by that very fact, becomes hackable. If someone is worried about their digital data being sucked up and sold on an elaborate black market, about webcams being used as spying tools, or about connecting one’s home to the web making one vulnerable, how much more worried should we be if our very thoughts could be hacked? The opportunities for abuse seem legion.

Everything is shaped by personal experience. Nicolelis’ views of the collective mind were forged by the crowds that overthrew the Brazilian military dictatorship and by his beloved soccer games. But the crowd is not always wise. I, being a person who has always valued solitude and time with individuals and small groups over the electric energy of crowds, have no interest in being part of a hive mind to any degree beyond the limited extent to which I might be said to already be in such a mind by writing a piece such as this.

It is a sad fact, but nonetheless true, that should anything like Nicolelis’ brain-net ever be created, it cannot be built on the assumption of the innate goodness of everyone. As our experience with the Internet should have taught us, an open system with even a minority of bad actors leads to a world of constant headaches and sometimes to what amount to very real dangers.

The World Beyond Boundaries

Gustav Klimt: The Virgin

I first came across Miguel Nicolelis in an article for the MIT Technology Review entitled The Brain is not computable: A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines. That got my attention. Nicolelis, if you haven’t already heard of him, is one of the world’s top researchers in building brain-computer interfaces. He is the mind behind the project to have a paraplegic using a brain-controlled exoskeleton make the first kick of the 2014 World Cup, an event that takes place in Nicolelis’ native Brazil.

In the interview, Nicolelis characterizes the singularity as “a bunch of hot air,” his reasoning being that “The brain is not computable and no engineering can reproduce it.” He explains himself this way:

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

This non-computability of consciousness, he thinks, has negative implications for the prospect of ever “downloading” (or uploading) human consciousness into a computer.

“Downloads will never happen,” he declares with some confidence.

Science journalism, like any sort of journalism, needs a “hook,” and the hook here was obviously a dig at a number of deeply held beliefs among the technorati; namely, that AI is on track to match and eventually surpass human-level intelligence, that the brain could be emulated computationally, and that, eventually, the human personality could likewise be duplicated through computation.

The problem with any hook is that it tends to leave you with a shallow impression of the reality of things. If the world is too complex to be represented in software, it is even less able to be captured in a magazine headline or a 650-word article. For that reason, I wanted a clearer grasp of where Nicolelis was coming from, so I bought his recent and excellent, if a little dense, book, Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives. Let me start with a little of Nicolelis’ research and from there flesh out the neuroscientist’s view of our human-machine future, a view I found both similar in many respects to, and at the same time very different from, perspectives typical of futurists thinking about such things today.

If you want to get an idea of just how groundbreaking Nicolelis’ work is, the best thing to do is to peruse the website of his lab. Nicolelis and his colleagues have conducted experiments where a monkey has controlled the body of a robot located on the other side of the globe, and where another simian has learned to play a videogame with its thoughts alone. Of course, his lab is not interested in blurring the lines between monkeys and computers for the heck of it; the immediate aim of their research is to improve the lives of those whose ties between body and mind have been severed, that is, paraplegics- a fact which explains Nicolelis’ bold gamble to demonstrate his lab’s progress by having a paralyzed person kick off the World Cup.

However inspiring the humanitarian potential of this technology, it is the underlying view of the brain that the work of the Nicolelis Lab appears to experimentally support, and the neuroscientist’s longer-term view of the potential of technology to change the human condition, that are likely to have the most lasting importance. They are views and predictions that put Nicolelis more firmly in the trans-humanist camp than might be gleaned from his MIT interview.

The first aspect of Nicolelis’ view of the brain I found stunning was the mind’s extraordinary plasticity when it comes to the body. We might tend to think of our brain and our body as highly interlocked things; after all, our brains have spent their whole existence as part of one body- our own. This is a reality that the writer Paul Auster turns into the basis of his memoir Winter Journal, which is essentially the story of his body’s movement through time, its urges, muscles, scars, wrinkles, ecstasies and pains.

The work of Nicolelis’ lab seems to sever the cord we might think joins a particular body and the brain or mind that thinks of it as home. As he states it in Beyond Boundaries:

The conclusion from more than two decades of experiments is that the brain creates a sense of body ownership through a highly adaptive, multimodal process, which can, through straightforward manipulations of visual, tactile, and body position (also known as proprioception) sensory feedback, induce each of us, in a matter of seconds, to accept another whole new body as being the home of our conscious existence. (66)

Psychologists have had an easy time with tricks like fooling a person into believing they possess a limb that is not actually theirs, but Nicolelis is less interested in this trickery than in finding a new way to understand the human condition in light of his and others’ findings.

The fact that the boundaries of the brain’s body image are not limited to the body the brain is located in is one way to understand the perhaps unique qualities of the extended human mind. We are all ultimately still tool builders and users, only now our tools:

… include technological tools with which we are actively engaged, such as a car, bicycle, or walking stick; a pencil or a pen, spoon, whisk or spatula; a tennis racket, golf club, a baseball glove or basketball; a screwdriver or hammer; a joystick or computer mouse; and even a TV remote control or Blackberry, no matter how weird that may sound. (217)

Specialized skills honed over a lifetime can make a tool an even more intimate part of the self: the violin an appendage of a skilled musician, a football like a part of the hand of a seasoned quarterback. Many of the most prized people in society are in fact master tool users, even if we rarely think of them this way.

Even with our masterly use of tools, the brain is still, in Nicolelis’ view, trapped within a narrow sphere surrounding its particular body. It is here where he sees advances in neuroscience eventually leading to the liberation of the mind from its shell. The logical outcome of minds being able to communicate directly with computers is a world where, according to Nicolelis:

… augmented humans make their presence felt in a variety of remote environments, through avatars and artificial tools controlled by thought alone. From the depths of the oceans to the confines of supernovas, even to the tiny cracks of intracellular space, human reach will finally catch up to our voracious appetite to explore the unknown. (314)

He characterizes this as mankind’s “epic journey of emancipation from the obsolete bodies they have inhabited for millions of years.” (314) Yet Nicolelis sees human communication with machines as but a stepping stone to the ultimate goal- the direct exchange of thoughts between human minds. He imagines the sharing of what has forever been the ultimately solipsistic experience of being a particular individual, with our own very unique experience of events, something that can never be fully captured even in the most artful expressions of language. This exchange of thoughts, which he calls “brainstorms,” is something Nicolelis does not limit to intimates- lovers and friends- but which he imagines giving rise to a “brain-net.”

Could we one day, down the road of a remote future, experience what it is to be part of a conscious network of brains, a collectively thinking true brain-net? (315)

… I have no doubt that the rapacious voracity with which most of us share our lives on the Web today offers just a hint of the social hunger that resides deep in human nature. For this reason, if a brain-net ever becomes practicable, I suspect it will spread like a supernova explosion throughout human societies. (316)

Given this context, Nicolelis’ view of the Singularity and of the emulation or copying of human consciousness on a machine is much more nuanced than the impression one is left with from the MIT interview. It is not that he discounts the possibility that “advanced machines may come to dominate the human race” (302), but that he views it as a low-probability danger relative to the other catastrophic risks faced by our species.

His view on the prospect of human-level intelligence in machines is not that high-level machine intelligence is impossible, but that our specific type of intelligence is non-replicable. Building off of Stephen Jay Gould’s idea of replaying the “tape of life,” his reasoning is that we cannot duplicate through engineering the sheer contingency that lies behind the evolution of human intelligence. I understand this in light of an observation by the philosopher Daniel Dennett, which I remember but cannot place, that it may be technically feasible to mechanically replicate an exact version of a living bird, but that it may prove prohibitively expensive, as expensive as our journeys to the moon, and besides, we don’t need to exactly replicate a living bird- we have 747s. Machine intelligence may prove to be like this: we may never be able to replicate our own intelligence other than through traditional and much more exciting means, yet artificial intelligence may be vastly superior to human intelligence in many domains.

In terms of something like uploading, Nicolelis does believe that we will be able to record and store human thoughts- his brainstorms- at some point in the future; we may even be able to record the whole of a life in this way. But he does not think this will mean the preservation of a still-experiencing intelligence any more than a piece by Chopin is the actual man. He imagines us deliberately recording the memories of individuals and broadcasting them across the universe to exist forever in the background of the cosmos which gave rise to us.

I can imagine all kinds of wonderful developments emerging should the technological advances Nicolelis imagines come to pass. They would revolutionize psychological therapy, law, art and romance. They would offer us brand new ways to memorialize and remember the dead.

Yet Nicolelis’ Omega Point- a world where all human beings are brought together into one embracing common mind- has been our dream at least since Plato, and the very antiquity of these urges should give us pause, for what history has taught us is that the optimistic belief that “this time is different” has never proved true. This should encourage us to look seriously, as Nicolelis himself refuses to do, at the potential dark side of the revolution in neuroscience this genius Brazilian is helping to bring about. To acknowledge this negative potential is less a matter of cold pessimism than of steeling ourselves against disappointment at the least, and of preventing such problems from emerging in the first place at best- a task I will turn to next time…

Caves, Creationism and the Divine Wonder of Deep Time

The Mutilation of Uranus by Saturn

Last week was my oldest daughter’s 5th birthday, in my mind the next “big” birthday after the always special year one. I decided on a geology-themed day, one component of which was a trip with her and my younger daughter, who’s 3, to a local limestone cave that offers walk-through tours.

Given that we were visiting the cave on a weekday, we had the privilege of getting a private tour: just me, the girls and our very friendly and helpful tour guide. I was really hoping the girls would get to see some bats, which I figured would be hibernating by now. Sadly, the bats were gone. In some places upwards of 90% of them have been killed by an epidemic with the understated name of “White Nose Syndrome,” a biological catastrophe I hadn’t even been aware of and a crisis both for bats and for the farmers who depend on them for insect control and pollination.

For a local show-cave this was pretty amazing- lots of intricate stalactites and stalagmites, thin rock in the form of billowing ribbons, a “living” growth moving so slowly it appears to be frozen in time. I found myself constantly wondering how long it took for these formations to take shape. “We tell people thousands of years,” our guide told us. “We have no idea how old the earth is.” I thought to myself that I was almost certain we had a pretty good idea of the age of the earth- 4.5 billion years- but did not press, too interested in the cave’s features and in exploring with the girls. I later realized I should have; it would have likely made our guide feel more at ease.

Towards the end of our tour we spotted a sea shell embedded in the low ceiling above us, and I picked up the girls one at a time so they could inspect it with their magnifying glass. I felt the kind of vertigo you feel when you come up against deep time. Here was the echo of a living thing from eons ago viewed by the living far, far in its future.

Later I found myself thinking about our distance in time from the sea shell and the cave that surrounded it. How much time separated us and the shell? How old was the cave? How long had the things we had seen taken to form? Like any person who doesn’t know much about something, I went to our modern version of Delphi’s Oracle- Google.

When I Googled the very simple question “how old are limestone caves?” a very curious thing happened. The very first link that popped up wasn’t Wikipedia or a geology site but The Institute for Creation Research. That wasn’t the only link to creationist websites. Many, perhaps the majority, of the articles written on the age of caves were by creationists, the ones I read couched in seemingly scientific language, difficult for a non-scientist/non-geologist to parse. Creationists seem to be as interested in the age of caves as speleologists, and I couldn’t help but wonder, why?

Unless one goes looking or tries to remain conscious of it, there are very few places where human beings confront deep time- that is, time far beyond the thousands of years by which we reckon human historical time, whether behind us or ahead. The night sky is one of these places, though we have so turned the whole world into a sprawling Las Vegas that few of us can even see into the depths of night any more. Another place is natural history museums, where prehistoric animals are preserved and put on display. Creationists have attempted to tackle the latter by designing their very own museums, such as The Creation Museum, replete with an alternative history of the universe where, among other things, dinosaurs once lived side-by-side with human beings, like in The Flintstones.

Another place where a family such as my own might confront deep time is in canyons and caves. The Grand Canyon has a wonderful tour called The Trail of Time that gives some idea of the scale of geological time: tourists start at the top of the canyon in present time and move step by step and epoch by epoch down to the point where the force of the Colorado River has revealed a surface 2 billion years old.

Caves are merely canyons under the ground, and in both their structure and their slow-growing features- stalactites and the like- they give us a glimpse into the depths of geologic time. Creationists feel compelled to steel believers in a 5,000-year-old earth against the kinds of doubts and questions that would be raised after a family walks through a cave. Hence all of the ink spilt arguing over how long it takes a stalagmite to grow five feet tall and look like a melting Santa Claus. What a shame.
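The arithmetic being fought over is not hard to run for oneself. Here is a back-of-the-envelope sketch- my own, assuming a commonly cited dripstone growth rate on the order of a tenth of a millimeter per year; real rates vary enormously with water flow and chemistry:

```python
# Rough estimate of how long a five-foot stalagmite takes to grow.
# The growth rate is an assumption for illustration, not a measurement
# from any particular cave.

MM_PER_FOOT = 304.8
growth_rate_mm_per_year = 0.1   # ~0.1 mm/yr, a commonly cited order of magnitude

height_mm = 5 * MM_PER_FOOT                  # 1524 mm
years = height_mm / growth_rate_mm_per_year
print(f"roughly {years:,.0f} years")         # roughly 15,240 years
```

The assumed rate is the whole argument, of course- creationist articles push it up, geologists measure it down- and that is precisely where all the ink gets spilt.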

It was no doubt the potential prickliness of his tourists that led our poor guide to present the age of the earth and the passage of time within the cave as open questions he could not address. After all, he didn’t know me from Adam, and one slip of the word “million” from his mouth might have turned what should have been an exciting outing into a theological debate. As he said, he was not, after all, a geologist, and had merely found himself working in the cave after his father passed away.

As regular readers of my posts well know, I am far from being an anti-religious person. Religion to me is one of the more wondrous inventions and discoveries we human beings have come up with, but religion, understood in this creationist sense, seems to me a very real diminishment not merely of the human intellect but of the idea of the divine itself.

I do not mean to diminish the lives of people who believe in such pseudo-science. One of the most hardworking and courageous persons I can think of was a man blinded by a mine in Vietnam. Once we were discussing what he would most like to see were his sight restored and he said without hesitation “The Creation Museum!” I think this man’s religious faith was a wellspring for his motivation and courage, and this, I believe, is what religions are for- to provide strength for us to deal with the trials and tribulations of human life. Yet I cannot help but think that the effort to suck in and crush, black-hole-like, our ideas of creation so that they fit within the scope of our personal lives isn’t just an assault on scientific truth but a suffocation of our idea of the divine itself.

The Genesis story certainly offers believers and non-believers alike deep reflection on what it means to be a moral creature, but much of this opportunity for reflection is lost when the story is turned into a science textbook. Not only that, both creation and creator become smaller. How limited is the God of the creationists, whose work they constrict from billions into mere thousands of years and whose overwhelming complexity and wonder they reduce to a mere 788,280 human words! With bitter irony, creationists diminish the depth of the work God has supposedly made so that man can exalt himself to the center of the universe and become the primary character of the story of creation. In trying to diminish the scale and depth of the universe in space and time they are committing the sin of Milton’s Satan- that is, pride.

The more we learn of the universe the deeper it becomes. Perhaps the most amazing projects in NASA’s history are two relatively recent ones- Kepler and Hubble. Their effects on our understanding of our place in the universe are far more profound than the moon landings or anything else the agency has done.

Hubble’s first Deep Field image was taken over ten consecutive days in December of 1995. What it discovered, in the words of Lance Wallace over at The Atlantic:

What researchers found when they focused the Hubble over those 10 days on that tiny speck of darkness, Mather said, shook their worlds. When the images were compiled, they showed not just thousands of stars, but thousands of galaxies. If a tiny speck of darkness in the night sky held that many galaxies, stars and—as scientists were beginning to realize—associated planets … the number of galaxies, stars, and planets the universe contained had to be breathtakingly larger than they’d previously imagined.
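The extrapolation implied here is simple enough to run as a back-of-the-envelope exercise. The numbers below are my own rough stand-ins, not figures from Wallace’s article: the original Deep Field covered only a few square arcminutes of a sky that contains roughly 150 million of them.

```python
# Rough extrapolation from the Hubble Deep Field to the whole sky.
# Both inputs are approximate, for illustration only.

FULL_SKY_SQ_ARCMIN = 41_253 * 3600   # whole sky in square arcminutes

deep_field_sq_arcmin = 5.3           # approximate footprint of the image
galaxies_in_deep_field = 3000        # roughly what the image revealed

patches = FULL_SKY_SQ_ARCMIN / deep_field_sq_arcmin
estimate = galaxies_in_deep_field * patches
print(f"about {estimate:.1e} galaxies")  # ~8e10, approaching 10^11
```

Tens of billions of galaxies, each with its own billions of stars, from one long stare at a patch of apparently empty sky.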

The sheer increased scale of the universe has led scientists to believe that it is near impossible that we are “alone” in the cosmos. The Kepler Mission has filled in the details, with recent studies suggesting that there may be billions of earth-like planets in the universe. Combine these two discoveries with the understanding of planet hunter Dimitar Sasselov, who thinks not only that we are at the very beginning of the prime period for life in the universe, because it has taken this long for stars to produce the heavy elements that are life’s prerequisites, but also that we have a very long time, perhaps as much as 100 billion years, for this golden age of life to play out, and we get an idea of just how prolific creation is and will be- beside which a God who creates only one living planet and one intelligent species seems tragically sterile.

To return underground: caves were our first cathedrals- witness Lascaux. It is even possible that our idea of the Underworld as the land of the dead grew out of the Bronze Age temple complex of Alepotrypa, which may have inspired the Greek idea of Hades, itself the seed from which grew the similar Jewish idea of Sheol, revved up by Christians into that pinnacle of horror shows, the idea of Hell.

I like to think that this early understanding of the Underworld as the land of the dead and the use of caves as temples reflect an intuitive understanding of deep time. Walking into a cave is indeed, in a sense, entering the realm of the dead, because it is like walking into the earth’s past. What is seen there is the movement of time across vast scales. The shell my daughters peered at with their magnifying glass, for instance, was the exoskeleton of a creature that lived perhaps some 400 million years ago in the Silurian Period, when what is now Pennsylvania was located at the equator and the limestone, a product of the decay of the relatives of this shell’s inhabitant, was beginning to form.

Recognition of this deep time diminishes nothing of the human scale and spiritual meaning of the moment taken to stop and stare at something exquisite peeking at us from the ceiling- quite the opposite. Though I might be guilty of overwinding, it was a second or two that was 400 million, or perhaps one might say 13 billion, years in the making- who couldn’t help thanking God for that?

Finding Frankenstein a Home

Frankenstein Cover

Percy Shelley’s epic poem Prometheus Unbound is seldom read today, while his wife’s novel Frankenstein; or the Modern Prometheus has become so well known that her monster graces the boxes of children’s cereal and became the fodder for one of the funniest movies of the 20th century.

The question that always strikes me when I have the pleasure of re-reading Frankenstein is how someone so young could have written this amazing book. Mary Shelley was a mere twenty when the novel was published, and the story she penned largely to entertain her husband and friends has managed to seep deeply into our collective assumptions, especially those regarding science and technology. Just think of the kinds of associations the word “frankenfood” brings to mind and one gets a sense of how potent her gothic horror story is as a form of resistance against new technologies.

What is lost in this hiving off of the idea of the dangers of “unnatural” science for use as a cautionary tale against a particular form of technology or a certain line of research is the depth and complexity of the other elements present in the novel. I blame Hollywood.

The Frankenstein’s monster of our collective imagination isn’t the one given us by Mary Shelley, but that imagined by the director James Whale in his 1931 classic Frankenstein.

It was Whale who gave us the monster in a dinner jacket, bolts protruding from his neck, and head like a block. Above all, Whale’s monster is without speech, whereas the monster Mary Shelley imagined is extraordinarily articulate.

Whale’s monster is a sort of natural born killer, his brain having come from a violent criminal. It is like the murderous chimpanzee written about in the weekend’s New York Times, a creature that, because we cannot control or tame its murderous instincts, must be killed before it can harm another person. Mary Shelley’s monster has a reason behind his violence. He can learn and love like we do, and isn’t really non-human at all. It is his treatment by human beings as something other than one of us- his abandonment by Victor Frankenstein after he was created, the horror he induces in every human being that encounters him- that transforms the “creature” into something not so much non-human as inhumane.

There is a lesson here regarding our future potential to create beings that are sentient like ourselves- the technological hopes of the hour being uplift and AI- namely, that we need to think about the problem of homelessness when creating such beings. All highly intelligent creatures that we know of, with the remarkable exception of the cephalopods, are social creatures; therefore any intelligent creature we create will likely need some version of a home, a world where it can be social as well.

The danger of monstrousness emerging from intelligence lacking a social world was brilliantly illustrated by another 19th century science-fiction horror story- H.G. Wells’ The Island of Dr Moreau. In her novel, Mary Shelley gives us insight into the origins of evil in the absence of such a world. Because it cannot be loved, Victor Frankenstein’s creation will destroy, in the same way his every attempt to reach out to other sentient creatures is ultimately destroyed, the creature telling the creator who has left him existentially shipwrecked:

“I too can cause desolation.”

Mary Shelley’s creature isn’t just articulate, he can read, and not just everyday reading: he has a taste for deep literature, especially Milton’s Paradise Lost, which seems to offer him an understanding of his own fate:

Like Adam, I was apparently united by no link to any other being in existence; but his state was far different from mine in every respect. He had come forth from the hands of God a perfect creature, happy and prosperous, guarded by the especial care of his creator; he was allowed to converse with, and acquire knowledge from, beings of a superior nature: but I was helpless and alone. Many times I considered Satan as the fitter emblem of my condition: for often, like him, when I viewed the bliss of my protectors, the bitter gall of envy rose within me. (Chapter 15, p. 2)

In some sense Mary Shelley’s horror story can be seen less as a warning to 19th century scientists engaged in strange experiments with galvanism than as a cautionary tale for those whose dehumanizing exploitation of industrial workers, miners, serfs and chattel slaves might lead to a potentially inhuman form of revolutionary blowback. The creature cries to his creator:

Yet mine shall not be the submission of abject slavery: if I cannot inspire love, I will cause fear; and chiefly towards you my archenemy, because my creator, do I swear inextinguishable hatred. (Chapter 17, p. 1)

Yet these revelations of the need for compassion towards sentient beings were largely lost in the anti-scientific thrust of the novel, by which Frankenstein; or the Modern Prometheus and its progeny have become one of our most potent cautionary tales against hubris. A scene in Whale’s Frankenstein where the doctor is speaking to a fellow scientist who lacks his ambition for great discovery sums it up nicely:

Where should we be if nobody tried to find out what lies beyond? Have you never wanted to look beyond the clouds and the stars, or to know what causes the trees to bud, or what changes the darkness to light? When you talk like that, people call you crazy. But if I could discover just one of these things- what eternity is, for example- I wouldn’t care if they did think I was crazy.

This bias against trying to answer the big questions isn’t merely an invention of the filmmaker but a deep part of Mary Shelley’s novel itself. Victor Frankenstein is first inspired not by science but by medieval occultists such as Cornelius Agrippa. Exchanging the power and knowledge aspirations of the magicians for run-of-the-mill science meant for Victor:

“I was required to exchange chimeras of boundless grandeur for realities of little worth.” (Chapter 3, p. 3)

Victor would not let this diminishment of his horizons happen:

So much has been done, exclaimed the soul of Frankenstein, – more, far more will I achieve: treading the steps already marked, I will pioneer a new way, explore unknown powers, and unfold to the world the deepest mysteries of creation. (Chapter 3)

His ultimate goal being to create life anew, a road not only to biological immortality but to his own worship:

A new species would bless me as its creator and source; many happy and excellent natures would owe their being to me. No father could claim the gratitude of his child so completely as I should deserve theirs. (Chapter 4, p. 4)

It is here, I think, where we see that Mary Shelley has turned the tables on her husband’s Prometheus, giving him the will to power seen in Milton’s Satan, for whom Percy Shelley in his tale of the Titan had tried to find an alternative. Scientists would oblige Mary’s warnings by coming up with such horrors as the machine gun, chemical warfare, aerial bombing, nuclear weapons, napalm, and inhumane medical experiments such as those performed not just by the Nazis, but by ourselves.

At the same time scientists gave us anesthesia and electric lighting, penicillin and antibiotics, along with a host of other humane inventions. It is here where the emotional pull of Mary Shelley’s divine imagination loses me and the anti-scientific nature of her novel becomes something I am not inclined to accept.

The idea of hubris is a useful concept, some variant of which we must adopt- an exploration I will leave for another time. In crafting an updated version of the tale of the dangers of human hubris, Mary Shelley has dimmed under Gothic shadows some of the illumination of the Enlightenment in which she played a large part. A warning against following our desire to know is, after all, the primary moral of her novel. As Victor tells the polar explorer Robert Walton, who has saved him:

Learn from me, if not by my precepts, at least by my example, how dangerous is the acquirement of knowledge, and how much happier the man is who thinks his native town to be the world, than he who aspires to become more than his nature will allow (Chapter 4, page 2)

Walton, on the basis of Victor’s story, does prematurely end his polar exploration, perhaps saving his crew from mortal danger, but also stopping short an adventure and, as a consequence, contracting the horizon of what we as human beings can know. Many of the lessons of Frankenstein; or the Modern Prometheus we need to grapple with and take to heart, yet this refusal to ask, or to take upon ourselves the danger of attempting to answer, the deepest of questions would constitute another very different, though very real, way of losing an elemental component of our humanity.

Pernicious Prometheus

Heinrich Füger: Prometheus Brings Fire to Mankind (1817)

It should probably seem strange to us that one of the memes we often use when trying to grapple with the question of how to understand the powers brought to us by modern science and technology is one inspired by an ancient Greek god chained to a rock. Well, not quite a god but a Titan- that is, Prometheus.

Do a search on Amazon for Prometheus and you’ll find that he has more titles devoted to him than even the lightning-bolt-throwing king of the gods himself, Zeus, who was the source of the poor Titan’s torment. Many, perhaps the majority, of these titles containing Prometheus- whether fiction or nonfiction- have to do not so much with the Titan himself as with our relationship to science, technology and knowledge.

Here’s just an odd assortment of the titles you find: American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer, Forbidden Knowledge: From Prometheus to Pornography, Prometheus Reimagined: Technology, Environment, and Law in the Twenty-first Century, Prometheus Redux: A New Energy Age With the IGR, Prometheus Bound: Science in a Dynamic ‘Steady State’. It seems even the Japanese use the Western Prometheus myth, as in: Truth of the Fukushima nuclear accident has now been revealed: Trap of Prometheus. These are just some of the more recent non-fiction titles. It would take up too much space to list the works of science fiction where the name Prometheus graces the cover.

Film is in the Prometheus game as well, the most recent entry being Ridley Scott’s movie named for the tragic figure, where, again, scientific knowledge and our quest for truth, along with its dangers, are the main themes of the plot.

It should be obvious that the myth of Prometheus is a kind of mental tool we use, and have used for quite some time, to understand our relationship to knowledge in general and to science and technology specifically. Why this is the case is an interesting question, but above all we should want to know whether the Promethean myth, for all the ink and celluloid devoted to it, is actually a good tool for thinking about human knowing and doing, or whether we have ourselves become chained to it, unable to move, like the hero- or villain, depending upon your perspective- whose story still holds us in its spell.

It is perhaps especially strange that we turn to the myth of Prometheus to think through the potential and problems brought about by our advanced science and technology because the myth is so damn old. The story of Prometheus is first found in the works of the Greek poetic genius Hesiod, who lived somewhere between 750 and 650 B.C.E. Hesiod portrays the Titan as the foil of Zeus and the friend of humankind. Here’s me describing how Hesiod portrayed Zeus’ treatment of our unfortunate species:

If one thought Yahweh was cruel for cursing humankind to live “by the sweat of his brow” he has nothing on Zeus, who along with his court of Olympian gods:

“…keep hidden from men the means of life. Else you would easily do work enough in a day to supply you for a full year even without working.”

In Hesiod, it is Prometheus who tries to break these limitations set on humankind by Zeus.

Prometheus, the only one of the Titans who had taken Zeus’s side in the coup d’état against Cronos, had a special place in his heart for human beings, having, according to some legends, created them. Not only had Prometheus created humans, whom the Greeks with their sharp wisdom called mortals, he was also the one who, when all the useful gifts of nature had seemingly been handed out to the other animals before humans got to the front of the line, decided to give mortals an upright posture, just like the gods, and that most special gift of all- fire.

We shouldn’t be under the impression, however, that Hesiod thought Prometheus was the good guy in this story. Rather, Hesiod sees him as a breaker of boundaries and therefore guilty of a sort of sacrilege. It is Prometheus’ tricking Zeus into accepting a subpar version of sacrifice that gets him chained to his rock more than his penchant for handing out erect posture, knowledge and technology to his beloved humans.

The next major writer to take up the myth of Prometheus was the playwright of Greek tragedy Aeschylus, or at least we once believed it to have been him, who put meat on the bones of Hesiod’s story. In Aeschylus’ play Prometheus Bound, and in the fragments of the play’s sequels which have survived, we find the full story of Prometheus’ confrontation with Zeus, which Hesiod had only brushed upon.

Before I read the play I had always imagined the Titan chained to his rock like some prisoner in the Tower of London. In Prometheus Bound the rebel against Zeus is not so much chained as nailed to Mount Kazbek like Jesus to his cross. A spike driven through his torso doesn’t kill him, he’s immortal after all, but pains him as much as it would a man made of flesh and bone. And just like Jesus to the good thief crucified next to him, Prometheus in his dire condition is animated more by compassion than rage.

“For think not that because I suffer therefore I would behold all others suffer too.”

To the chorus, who inquire as to the origin of his suffering, Prometheus lists the many gifts besides his famous fire which he has given humankind. Before him humans had moved “as in a dream,” and did not yet have houses like those of the proverbial three little pigs, made of straw or wood or brick. Instead they lived in holes in the ground- like ants. Before Prometheus human beings did not know when the seasons were coming or how to read the sky. From him we received numbers and letters, learned how to domesticate animals and make wheels. It was he who gave us sails for plying the seas, life in cities, the art of making metals. What the ancient myth of Prometheus shows us, at the very least, is that the ancient Greeks were conscious of the historical nature of technological development- a consciousness that would be revived in the modern era, our own included.

In Aeschylus’ play Prometheus holds a trump card. He knows not only that Zeus will be overthrown, but who is destined to do the deed. Defiant before Zeus’ messenger Hermes, winged shoes and all, Prometheus Bound ends with the Titan hurled down a chasm.

Like all good, and many more not so good, stories, Prometheus Bound had sequels. Only fragments remain of the two plays that followed it, and, again like most sequels, they are disappointments. In the first sequel Zeus frees the Titans he has imprisoned in anticipation of his reconciliation with Prometheus in the final play. That is, Prometheus eventually reconciles with his tormentor- an ending that, centuries later, many would find themselves unable to stomach.

Indeed, it would be some two and a half millennia after Hesiod had brought Prometheus to life with his poetry that yet another genius poet and playwright- Percy Bysshe Shelley- would transform the ancient Greek myth into a symbol of the Enlightenment and a newly emergent relationship with both knowledge and power.

Europeans in the 18th and early 19th centuries were a people in search of an alternative history, something other than their Christian inheritance, though it should be said that by the middle of the 19th century they had turned their prior hatred of the “dark ages” into a new fascination. In the mid to late 1800s they would go all gothic, obsessed with the macabre. A host of 19th century thinkers, including Shelley’s brilliant wife, would help bring about this transition from bright-sky neoclassicism and its worship of reason, which gave us our Greco-Roman national capital among other things, to a darker and more pessimistic sense of the gothic and a romanticism tied to our emotions and the unconscious, including their dangers.

Percy Shelley’s Prometheus, as presented in his play Prometheus Unbound, was an Enlightenment rebel. As the child resembles the parent, so the generation of Enlightenment and revolution could see in themselves their own Promethean origins. Not only had they shattered nearly every sacred shibboleth, and not just asked but answered hitherto unasked questions of the natural world- their motto being, in Kant’s famous words, “dare to know”- they had thrown out (America) and thrown over (France) the world’s most powerful kings- earthbound versions of Zeus- and gained in the process newfound freedoms and dignity.

Shelley says as much in his preface:

“But in truth I was averse from a catastrophe so feeble as that of reconciling the Champion with the Oppressor of mankind.”

There is no way Shelley is going to present Prometheus coming to terms with a would-be omnipotent divine tyrant like Zeus. Our Titan is going to remain defiant to the end, and Zeus’ days as the king of the gods are numbered. Yet it won’t be our hero who eventually knocks Zeus off his throne; it will be a god, not a Titan- the Demogorgon- who overthrows Zeus and frees Prometheus. There are many theories about who or what Shelley’s Demogorgon is- it is a demon in Milton, a rather un-omnipotent architect- a bumbling version of Plato’s demiurge- in a play by Voltaire- but the theory I like best is that Shelley was playing with the word demos, or people. The Demogorgon in this view isn’t a god- it’s us- that is, if we are as brave as Prometheus in standing up to tyrants. Indeed, it is with this lesson in standing up for justice that the play ends:

To defy Power, which seems omnipotent;
To love, and bear; to hope till Hope creates
From its own wreck the thing it contemplates;
Neither to change, nor falter, nor repent;
This, like thy glory, Titan, is to be
Good, great and joyous, beautiful and free;
This is alone Life, Joy, Empire, and Victory.

Now, Shelley’s Prometheus, just like the character in Hesiod and Aeschylus, is also a bringer of knowledge, and even more so. Yet what makes Shelley’s Titan distinct is that he has a very different idea of what this knowledge is actually for.

Shelley was well aware that, given the brilliant story of the rebellion in heaven and the Fall of Adam and Eve that Milton had imagined, Christians and anti-Christians alike might easily confuse Prometheus with another rebellious character- Lucifer, or Satan. If Milton had unwittingly turned Satan into a hero in his Paradise Lost, the contrast with Shelley’s Prometheus would reveal important differences. In his preface to Prometheus Unbound he wrote:

The only imaginary being resembling in any degree Prometheus is Satan; and Prometheus is, in my judgment, a more poetical character than Satan, because, in addition to courage, and majesty, and firm and patient opposition to omnipotent force, he is susceptible of being described as exempt from the taints of ambition, envy, revenge, and a desire for personal aggrandizement, which, in the Hero of Paradise Lost, interfere with the interest. The character of Satan engenders in the mind a pernicious casuistry which leads us to weigh his faults with his wrongs, and to excuse the former because the latter exceed all measure.

That is, Prometheus is good in a way Satan is not, because his rebellion, which includes the granting of knowledge, is a matter of justice, not a quest for power. This was the moral lesson Shelley thought the Prometheus story could teach us about our quest for knowledge. And yet somehow we managed to get this all mixed up.

The Prometheus myth has cross-fertilized and merged with Judeo-Christian myths about the risks of knowledge and our lust for god-like power. Prometheus the fire bringer is placed in the same company as Satan and his rebellious angels, Eve and her eating from the “Tree of Knowledge,” and the story of the construction of the Tower of Babel. The Promethean myth is most often used today as either an invitation to the belief that “knowledge is power” or a cautionary tale of our own hubris and the need to accept limits to our quest for knowledge lest we bring down upon ourselves the wrath of God. Seldom acknowledged is Shelley’s distinction between knowledge used in the service of justice and freedom, which is good- his Prometheus- and knowledge used in the service of power- his understanding of Milton’s Satan.

The idea that the Prometheus myth wasn’t only, or even primarily, a story about hidden knowledge- either its empowerment or its dangers- but a tale about justice is not something Shelley invented out of whole cloth, but a latent meaning that could be found in both Hesiod and Aeschylus. In both, Prometheus gives his technological gifts not because he is trying to uplift the human race, but because he is trying to undo what he thinks was a rigged lottery of creation, in which human beings, in contrast to the other animals, were given the short end of the stick. If the idea of hidden knowledge comes into play, it is that the veiled workings of nature have been used by power- that is, Zeus- to secure dutiful worship, a rip-off and injustice Prometheus is willing to risk eternal torment to amend.

Out of the three varieties of the Promethean myth- that of encouraging the breaking of boundaries in the name of empowerment (a pro-science and transhumanist interpretation), that of cautioning us against the breaking of these boundaries (a Christian, environmentalist and bioconservative interpretation), and that of knowledge as a primary means and secondary end in our quest for justice (a techno-progressive interpretation?)- it is the last that has largely faded from our view. Oddly enough, it would be Shelley’s utterly brilliant and intellectually synesthetic young wife who would bear a large part of the responsibility for shifting the Prometheus myth from a complex and therefore more comprehensive trichotomy to a much more simplistic and narrowing dichotomy.

Mary Shelley’s Frankenstein; or, The Modern Prometheus would turn the quest for knowledge itself into a potential breaking of boundaries that might even be considered a sort of evil. Scientists in the public mind would come to play the role of Milton’s Satan- hero or villain. Unfortunately, in drawing this distinction with such artistic genius, Mary Shelley helped ensure that the paramount Promethean concern with justice would be lost. I’ll turn to that tale next time…