How Should Humanity Steer the Future?

FQXi

Over the spring the Foundational Questions Institute (FQXi) sponsored an essay contest whose topic should be dear to this audience’s heart: How Should Humanity Steer the Future? I thought I’d share some of the essays I found most interesting, but there are lots, lots more to check out if you’re into thinking about the future or physics, which I am guessing you might be.

If there was one theme I found across the 140 or so essays entered in the contest, it was that the 21st century is make-it-or-break-it for humanity, so we need to get our act together, and fast. If you want a metaphor for this sentiment, you couldn’t do much better than Nietzsche’s idea that humanity is like an individual walking on a “rope over an abyss”.

A Rope over an Abyss by Laurence Hitterdale

Hitterdale’s idea is that for most of human history the qualitative aspects of human experience have been pretty much the same, but that this is about to change. What we are facing, according to Hitterdale, is either the extinction of our species or the realization of our wildest perennial dreams: biological superlongevity and machine intelligences that seem to imply the end of drudgery and scarcity. As he points out, some very heavy-hitting thinkers believe we live in make-or-break times:

 John Leslie judged the probability of human extinction during the next five centuries as perhaps around thirty per cent at least. Martin Rees in 2003 stated, “I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century.” Less than ten years later Rees added a comment: “I have been surprised by how many of my colleagues thought a catastrophe was even more likely than I did, and so considered me an optimist.”

In a nutshell, Hitterdale’s solution is for us to concentrate more on preventing negative outcomes than on achieving positive ones in this century. This is because even positive outcomes like human superlongevity and greater-than-human AI could themselves lead to negative outcomes if we don’t sort out our problems or establish controls first.

How to avoid steering blindly: The case for a robust repository of human knowledge by Jens C. Niemeyer

This was probably my favorite essay overall because it touched on an issue dear to my heart: how will we preserve the past in light of the huge uncertainties of the future? Niemeyer makes the case that we need to establish a repository of human knowledge in the event we suffer some general disaster, and sketches how we might do this.

By one of those strange instances of serendipity, while thinking about Niemeyer’s ideas and browsing the science section of my local bookstore, I came across a new book by Lewis Dartnell, The Knowledge: How to Rebuild Our World from Scratch, which covers the essential technologies human beings would need if they wanted to revive civilization after a collapse. Or maybe I shouldn’t consider it so strange. Right next to The Knowledge was another new book, The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day, by David Hand, but I digress.

The digitization of knowledge, and its dependence on the whole technological apparatus of society, actually makes us more vulnerable to the complete loss of information, both social and personal, and therefore demands that we back up our knowledge. Once, only things like a flood or a fire could have destroyed our lifetime visual records the way we used to store them- in photo albums- but now all many of us would have to do is lose or break our phones. As Niemeyer says:

 Currently, no widespread efforts are being made to protect digital resources against global disasters and to establish the means and procedures for extracting safeguarded digital information without an existing technological infrastructure. Facilities like, for instance, the Barbarastollen underground archive for the preservation of Germany’s cultural heritage (or other national and international high-security archives) operate on the basis of microfilm stored at constant temperature and low humidity. New, digital information will most likely never exist in printed form and thus cannot be archived with these techniques even in principle. The repository must therefore not only be robust against man-made or natural disasters, it must also provide the means for accessing and copying digital data without computers, data connections, or even electricity.

Niemeyer imagines the creation of such a knowledge repository as a unifying project for humankind:

Ultimately, the protection and support of the repository may become one of humanity’s most unifying goals. After all, our collective memory of all things discovered or created by mankind, of our stories, songs and ideas, have a great part in defining what it means to be human. We must begin to protect this heritage and guarantee that future generations have access to the information they need to steer the future with open eyes.

Love it!

One Cannot Live in the Cradle Forever by Robert de Neufville

If Niemeyer is trying to goad us into preparing should the worst occur, Robert de Neufville, like Hitterdale, is working to make sure these nightmares, especially the self-inflicted ones, don’t come true in the first place. He does this as a journalist and writer and as an associate of the Global Catastrophic Risk Institute.

As de Neufville points out, and as I myself have argued before, the silence of the universe gives us reason to be pessimistic about the long-term survivability of technological civilization. Yet the difficulties that stand in the way of minimizing global catastrophic risks- things like developing an environmentally sustainable modern economy, protecting ourselves against global pandemics or meteor strikes of a scale that might bring civilization to its knees, or eliminating the threat of nuclear war- are challenges more of politics than of technology. He writes:

But the greatest challenges may be political. Overcoming the technical challenges may be easy in comparison to using our collective power as a species wisely. If humanity were a single person with all the knowledge and abilities of the entire human race, avoiding nuclear war, and environmental catastrophe would be relatively easy. But in fact we are billions of people with different experiences, different interests, and different visions for the future.

In a sense, the future is a collective action problem. Our species’ prospects are effectively what economists call a “common good”. Every person has a stake in our future. But no one person or country has the primary responsibility for the well-being of the human race. Most do not get much personal benefit from sacrificing to lower the risk of extinction. And all else being equal each would prefer that others bear the cost of action. Many powerful people and institutions in particular have a strong interest in keeping their investments from being stranded by social change. As Jason Matheny has said, “extinction risks are market failures”.
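
The “market failures” point can be made concrete with a toy calculation- my own sketch with made-up numbers, not anything from de Neufville’s essay. Each actor’s private benefit from paying to cut the shared risk is smaller than the cost, even though the benefit to everyone together dwarfs it:

```python
# Toy model of extinction risk as a public goods problem.
# All numbers are hypothetical, chosen only to show the structure.

N = 100                      # number of actors (countries, say)
value_of_future = 1000.0     # what a secure future is worth to each actor
cost_of_action = 5.0         # what one actor pays to reduce the risk
risk_cut_per_actor = 0.001   # drop in extinction probability per contribution

my_private_gain = value_of_future * risk_cut_per_actor      # = 1.0
collective_gain = N * value_of_future * risk_cut_per_actor  # = 100.0

# Acting doesn't pay privately, even though it clearly pays collectively:
print(my_private_gain < cost_of_action < collective_gain)   # True
```

Every actor reasons the same way, so no one pays and the risk stays high- the textbook structure of a market failure.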

His essay makes an excellent case that it is time we matured as a species and lived up to our global responsibilities, the most important of which is ensuring our continued existence.

The “I” and the Robot by Cristinel Stoica

Here Cristinel Stoica makes a great case for tolerance, intellectual humility and pluralism- a sentiment perhaps often expressed, but rarely with such grace and passion.

As he writes:

The future is unpredictable and open, and we can make it better, for future us and for our children. We want them to live in peace and happiness. They can’t, if we want them to continue our fights and wars against others that are different, or to pay them back bills we inherited from our ancestors. The legacy we leave them should be a healthy planet, good relations with others, access to education, freedom, a healthy and critical way of thinking. We have to learn to be free, and to allow others to be free, because this is the only way our children will be happy and free. Then, they will be able to focus on any problems the future may reserve them.

Ends of History and Future Histories in the Longue Duree by Benjamin Pope

In his essay Benjamin Pope tries to peer into the human future over the long term by looking at the types of institutions that survive across centuries and even millennia: universities, “churches”, economic systems- such as capitalism- and potentially multi-millennial, species-wide projects, namely space colonization.

I liked Pope’s essay a lot, but there are parts of it I disagreed with. For one, I wish he had included cities. These are the longest-lived of human institutions and, unlike Pope’s other choices, are political, yet they manage to far outlive other political forms- namely states and empires. Rome far outlived the Roman Empire, and my guess is that many American cities, as long as they are not underwater, will outlive the United States.

Pope’s read on religion might be music to the ears of some at the IEET:

Even the very far future will have a history, and this future history may have strong, path-dependent consequences. Once we are at the threshold of a post-human society the pace of change is expected to slow down only in the event of collapse, and there is a danger that any locked-in system not able to adapt appropriately will prevent a full spectrum of human flourishing that might otherwise occur.

Pope seems to lean toward the negative take on religion’s capacity to promote “a full spectrum of human flourishing”, writing that it, “as a worst-case scenario, may lock out humanity from futures in which peace and freedom will be more achievable.”

To the surprise of many in the secular West, and that includes an increasingly secular United States, the story of religion will very much be the story of humanity over the next couple of centuries, and that especially includes the religion that is dying in the West today, Christianity. I doubt, however, that religion has either the will or the capacity to stop or even significantly slow technological development, though it might change our understanding of it. It is also the case that, at the end of the day, religion only thrives to the extent it promotes human flourishing and survival, though religious fanatics might lead us to think otherwise. I am also not the only one to doubt Pope’s belief that “Once we are at the threshold of a posthuman society the pace of change is expected to slow down only in the event of collapse”.

Still, I greatly enjoyed Pope’s essay, and it was certainly thought-provoking.

Smooth seas do not make good sailors by Georgina Parry

If you’re looking to break out of your dystopian gloom for a while- and I myself keep finding reasons to be gloomy- then you couldn’t do much better than to take a peek at Georgina Parry’s fictionalized glimpse of a possible utopian future. Like a good parent, Parry encourages our confidence, but not our hubris:

 The image mankind call ‘the present’ has been written in the light but the material future has not been built. Now it is the mission of people like Grace, and the human species, to build a future. Success will be measured by the contentment, health, altruism, high culture, and creativity of its people. As a species, Homo sapiens sapiens are hackers of nature’s solutions presented by the tree of life, that has evolved over millions of years.

The future is the past by Roger Schlafly

Schlafly’s essay literally made my jaw drop, it was so morally absurd and even obscene.

Consider a mundane decision to walk along the top of a cliff. Conventional advice would be to be safe by staying away from the edge. But as Tegmark explains, that safety is only an illusion. What you perceive as a decision to stay safe is really the creation of a clone who jumps off the cliff. You may think that you are safe, but you are really jumping to your death in an alternate universe.

Armed with this knowledge, there is no reason to be safe. If you decide to jump off the cliff, then you really create a clone of yourself who stays on top of the cliff. Both scenarios are equally real, no matter what you decide. Your clone is indistinguishable from yourself, and will have the same feelings, except that one lives and the other dies. The surviving one can make more clones of himself just by making more decisions.

Schlafly rams the point home that under current views of the multiverse in physics nothing you do really amounts to a choice: we are stuck on an utterly deterministic wave-function, on whose branches we play hero and villain, and there is no space for either praise or guilt. You can always act the coward or the naif, sure that somewhere “out there” another version of “you” does the right thing. Saving humanity from itself in the ways proposed by Hitterdale and de Neufville, preparing for the worst as in Niemeyer and Pope, or trying to build a better future as in Parry and Stoica makes no sense here. Like poor Schrödinger’s cat, on some branches we end up surviving, on some we destroy ourselves, and it is not we who are in charge of which branch we are on.
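
To see why Schlafly can call both outcomes equally real, it helps to write the branching schematically- my gloss in standard Everettian notation, not Schlafly’s own formalism:

$$|\text{you, deciding}\rangle \;\longrightarrow\; \alpha\,|\text{you stay}\rangle \;+\; \beta\,|\text{you jump}\rangle, \qquad |\alpha|^{2}+|\beta|^{2}=1$$

The wave function evolves deterministically into a superposition containing both successors; the amplitudes fix only the statistical weights of the branches, not which branch “you” end up on- which is exactly the moral vacuum Schlafly exploits.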

The thought made me cringe, but then I realized Schlafly must be playing a Swiftian game: applying quantum theory to the moral and political worlds we inhabit leads to absurdity. This might or might not call into question the fundamental reality of the multiverse or the universal wave function, but it should not lead us to doubt or jettison our ideas regarding our own responsibility for the lives we live, which boil down to the decisions we have made.

Chinese Dream is Xuan Yuan’s Da Tong by KoGuan Leo

Those of us in the West probably can’t help seeing the future of technology as nearly synonymous with the future of our own civilization, and a civilization, when boiled down to its essence, amounts to a set of questions a particular group of human beings keeps asking, along with its answers to those questions. The questions in the West are things like: What is the right balance between social order and individual freedom? What is the relationship between the external and internal (mental/spiritual) worlds, including the question of the meaning of Truth? How might the most fragile thing in existence, and for us the most precious- the individual- survive across time? What is the relationship between the man-made world- culture- vis-à-vis nature, and which is more important to the identity and authenticity of the individual?

The progress of science and technology intersects with all of these questions, but what we often forget is that we have sown the seeds of science and technology elsewhere, and the environments in which they grow can be very different; their application and understanding will differ accordingly, based as they are on a whole different set of questions and answers encountered by a distinct civilization.

Leo KoGuan’s essay approaches the future of science and technology from the perspective of Chinese civilization. Frankly, I did not really understand his essay, which seemed to me a combination of singularitarianism and Chinese philosophy that I just couldn’t wrap my head around. What am I to make of this from the founder and chairman of a $5.1 billion computer company:

 Using the KQID time-engine, earthlings will literally become Tianming Ren with God-like power to create and distribute objects of desire at will. Unchained, we are free at last!

I can make little of it, other than concluding that anyone interested in the future of transhumanism absolutely needs to be paying attention to what is happening, and to how people are thinking, in China.

Lastly, I myself had an essay in the contest. It was about how we face incredible hurdles in the near future, and how one of the ways we might clear them is by recovering the ability to imagine what an ideal society, a Utopia, might look like. Go figure.

Our Verbot Moment

Metropolis poster

When I was around nine years old I got a robot for Christmas. I still remember calling my best friend Eric to let him know I’d hit pay dirt. My “Verbot” was to be my own personal R2D2. As was clear from the picture on the box, which I again remember as clearly as if it were yesterday, Verbot would bring me drinks and snacks from the kitchen on command- no more pestering my sisters, who responded with their damned claims of autonomy! Verbot would learn to recognize my voice and might help me with the math homework I hated. Being the only kid in my nowhere town with his very own robot, I’d be the talk of the town for miles in every direction. As long, that is, as Mark Z didn’t find his own Verbot under the tree- the boy who had everything- cursed brat!

Within a week of Christmas, Verbot was dead. I never did learn how to program it to bring me snacks while I lounged watching Star Blazers, though it wasn’t really programmable to start with. It was really more of a remote-controlled car in the shape of Robby from Lost in Space than an actual, honest-to-goodness robot. Then as now, my steering skills weren’t so hot, and I managed somehow to get Verbot’s antenna stuck in the tangly curls of our skittish terrier, Pepper. To the sound of my cursing, Pepper panicked and dragged poor Verbot round and around the kitchen table, eventually snapping it loose from her hair to careen into a wall and smash into pieces. I felt my whole future was shattered there on the floor in front of me. There was no taking it back.

Not that my 9-year-old nerd self realized this, but the makers of Verbot obviously weren’t German: in that language the word means “ban” or “prohibition”. Not exactly a ringing endorsement for a product, and more like an inside joke by the makers, whose punch line was the precautionary principle.

What I had fallen into in my Verbot moment was the gap between our aspirations for robots and their actual reality. People had been talking about animated tools since ancient times: Homer has some in his Iliad, and Aristotle discussed their possibility. Perhaps we started thinking about this because living creatures tend to be unruly and unpredictable. They don’t get you things when you want them to and have a tendency to run wild and go rogue. Tools are different: they always do what you want, as long as they’re not broken and you are using them properly. Combining the animation and intelligence of living things with the cold functionality of tools would be the mixing of chocolate and peanut butter for someone who wanted to get something done without doing it himself. The problem was that we had no idea how to get from our dead tools to “living” ones.

It was only in the 19th century that an alternative path to the hocus-pocus of magic was found for answering the two fundamental questions surrounding the creation of animate tools: what would animate these machines in the way uncreated living beings are animated, and what would be the source of these beings’ intelligence? Few before the 1800s could see through these questions without some reference to the black arts, although a genius like Leonardo da Vinci had, as far back as the 15th century, seen hints that at least one way forward was to discover the principles of living things and apply them to our tools and devices. (More on that another time.)

The path forward we actually discovered was through machines animated by chemical and electrical processes, much as living beings are, rather than by the tapping of kinetic forces such as water and wind, or the potential energy and multiplication of force through things like springs and levers, which had run our machines up until that point. Intelligence was to be had in the form of devices following the logic of some detailed set of instructions. Our animated machines were to be energetic like animals but also logical and precise like the devices and languages we had created for measuring and sequencing.

We got the animation part down pretty quickly, but the intelligence part proved much harder. Although much more precise than fault-prone humans, mechanical methods of intelligence were just too slow when compared with the electro-chemical processes of living brains. Once such “calculators”, what we now call computers, were able to use electronic processes, they got much faster, and the idea that we were on the verge of creating a truly artificial intelligence began to take hold.

As everyone knows, our aspirations proved way too premature. The most infamous expression of our hubris came in 1955, when the proposal for the Dartmouth Summer Research Project on Artificial Intelligence boldly predicted:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.[emphasis added]

Oops. Over much of the next half century, the only real progress in these areas of artificial intelligence came in the worlds we’d dreamed up in our heads. As a young boy, the robots I saw using language or forming abstractions were found only in movies and TV shows such as 2001: A Space Odyssey, Star Wars, Battlestar Galactica and Buck Rogers, and in books by science-fiction giants like Asimov. Some of these daydreams were so long in being unfulfilled that I watched them in black and white. Given this, I am sure many other kids had their Verbot moments as well.

It is only in the last decade or so that the processing of instructions has proven fast enough, and the programs sophisticated enough, for machines to exhibit something like the intelligent behaviors of living organisms. We seem to be at the beginning of a robotic revolution in which machines do at least some of the things science fiction and Hollywood had promised. They beat us at chess and trivia games, can fly airplanes and drive automobiles, serve as pack animals, and even speak to us. How close we will come to the dreams of authors and filmmakers with our 21st-century robots cannot be known; an even more important question is how what actually develops will diverge from these fantasies.

I find the timing of this robotic revolution, in the context of other historical currents, quite strange. The bizarre thing is that almost at the exact moment many of us became unwilling to treat other living creatures, especially human beings, as mere tools- with slavery no longer tolerated, our children and spouses no longer treated as property and servants but as gifts to be cultivated (perhaps the sadder element of why population growth rates are declining), and even our animals offered some semblance of rights and autonomy- we were coming ever closer to our dream of creating truly animated and intelligent slaves.

This is not to say we are out of the woods yet when it comes to our treatment of living beings. The headlines about the tragic kidnapping of over 300 girls in Nigeria should bring to our attention the reality of slavery in our supposedly advanced and humane 21st century: there are more people enslaved today than at the height of 19th-century chattel slavery; it is just that the proportions are so much lower. Many of the world’s working poor, especially in the developing world, live in conditions not far removed from slavery or serfdom. The primary problem I see with our continued practice of eating animals is not meat eating itself, but that the production process chains living creatures to the cruel and unrelenting sequence of machines rather than allowing such animals to live in the natural cycles for which they evolved and were bred.

Still, people in many parts of the world are rightly constrained in how they can treat living beings. What I am afraid of is that the dark and perpetual human longings to be served and to dominate will manifest themselves in our making machines more like persons for those purposes alone. The danger here is that these animated tools will cross, or we will force them to cross, some threshold of sensibility that calls into question their treatment and use as mere tools, while we fail to mature beyond the level of my 9-year-old self dreaming of his Verbot slave.

And yet, this is only one way to look at the rise of intelligent machines. Like a gestalt drawing, we might change our focus and see a very different picture: that it is not we who are using and chaining machines for our purposes, but the machines that are doing this to us. To that subject next time…

Why does the world exist, and other dangerous questions for insomniacs

William Blake, Creation of the World

A few weeks back I wrote a post on how the recently claimed discovery of primordial gravitational waves provided evidence for inflationary models of the Big Bang. These are cosmological models that imply some version of the multiverse: essentially the idea that ours is just one of a series of universes, a tiny bubble, or region, of a much, much larger universe where perhaps even the laws of physics or the rationality of mathematics differ from one region to another.

My earlier piece had taken some umbrage at the physicist Lawrence Krauss’ new atheist take on the discovery, in The New Yorker. Krauss is a “nothing theorist”, one of a group of physicists who argue that the universe emerged from what was in effect nothing at all, although, unlike other nothing theorists such as Stephen Hawking, Krauss uses his science as a cudgel to beat up on contemporary religion. It was this attack on religion I was interested in, while the deeper issue, that of a universe arising from nothing, left me shrugging my shoulders as if there were, excuse the pun, nothing of much importance in the assertion.

Perhaps I missed the heart of the issue because I am a nothingist myself or, at the very least, have never found the issue of nothingness worth grappling with. It’s hard to write this without sounding like a zen koan or making my head hurt, but I didn’t look into the physics or the metaphysics of Krauss’ nothingist take on gravitational waves, inflation or anything else; in fact, I don’t think I had ever really reflected on the nature of nothing at all.

The problems I had with Krauss’ overall view, as seen in his book on the same subject, A Universe from Nothing, had to do with his understanding of the future and the present, not the past. I felt the book read the future far too pessimistically, missing the fact that even if the universe will end in nothing, there is a lot of living to be done in the hundreds of billions of years between now and its heat death. As much as it was a work of popular science, Krauss’ book was mostly an atheist weapon in what I called “The Great God Debate”, which, to my lights, was about attacking, or for creationists defending, a version of God as cosmic engineer that was born no earlier than, and in conjunction with, modern science itself. I felt it was about time we got beyond this conception of God and moved to a newer, or an even more ancient, one.

Above all, A Universe from Nothing, as I saw it, was epistemologically hubristic, using science to make a non-scientific claim about the meaning of existence- that there wasn’t any- which cut off so many other interesting avenues of thought before they even got off the ground. What I hadn’t thought about was the issue of emergence from nothingness itself. Maybe the question of the past, the question of why our universe is here at all, was more important than I had thought.

When thinking a question through, I always find it a helpful first step to turn to the history of ideas for context. Like much else, the idea that the universe began from nothing is a relatively recent one. The ancients had little notion of nothingness, their creation myths starting not with nothing but most often with an eternally existing chaos that some divinity or divinities sculpted into the ordered world we see. You start to get ideas of creation out of nothing- ex nihilo- really only with Augustine in the 5th century, but full credit for the idea of a world that began from nothing would have to wait until Leibniz in the 1600s, who, when he wasn’t dreaming up new cosmologies, was off independently inventing calculus at the same time as Newton and designing computers three centuries before any of us had lost a year playing Farmville.

Even when it came to nothingness Leibniz was ahead of his time. A little over two centuries after he had imagined a universe created from nothing, the renegade Einstein was just reflecting universally held opinion when he made his biggest “mistake”: tweaking his theory of general relativity with what he thought was a bogus cosmological constant so that he could get the universe that he and everyone else believed in- a universe that was eternal and unchanging, uncreated. Not long after Einstein had cooked the books, Edwin Hubble discovered the universe was changing with time, moving apart, and not long after that, evidence mounted that the universe had a beginning in the Big Bang.
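
For reference, the equations Einstein tweaked are the field equations of general relativity, which in their standard textbook form (nothing specific to this post) read:

$$G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}$$

where $\Lambda$ is the cosmological constant Einstein inserted. Chosen just right, its repulsive effect balances gravitational attraction and yields the static universe he wanted- though the balance turns out to be unstable, like a pencil standing on its point.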

With a creation event in the Big Bang, cosmologists, philosophers and theologians were forced to confront the existence of a universe emerging from what was potentially nothing, running into questions that had lain dormant since Leibniz: How did the universe emerge from nothing? Why this particular universe? And, ultimately, why something rather than nothing at all? Krauss thinks we have solved the first and second questions, and finds the third question, in his words, “stupid”.

Strange as it sounds coming out of my mouth, I actually find myself agreeing with Krauss: explanations that the universe emerged from fluctuations in a primordial “quantum foam”- closer to the ancients’ idea of chaos than to our version of nothing- along with the idea that we are just one of many universes that follow varied natural laws, some like ours capable of fostering intelligent life, seem sufficient to me. The third question, however, I find in no sense stupid, and if it is childlike, it is childlike in the best, wondrously curious kind of way. Indeed, the answers to the question “why is there something rather than nothing?” might result in some of the most thrilling ideas human beings have come up with yet.

The question of why there is something rather than nothing is brilliantly explored in a book by Jim Holt, Why Does the World Exist?: An Existential Detective Story. As Holt points out, the problem with nothingist theories like those of Krauss is that they fail to answer the question of why the quantum foam, or the multiple universes churning out their versions of existence, are there in the first place. The simplest explanation we have is that “God made it”, and Holt does look at this answer as provided by the philosopher of religion Richard Swinburne, who answers the obvious question “who made God?” with the traditional answer that God is eternal and not made- which makes one wonder why we can’t just stick with Krauss’ self-generating universe in the first place.

Yet it’s not only religious persons who think the “why” question addresses something fundamental, or who think science reveals the question as important even if we are forever barred from completely answering it. As the physicist David Deutsch says in Why Does the World Exist?:

 … none of our laws of physics can possibly answer the question of why the multiverse is there…. Laws don’t do that kind of work.

Wheeler used to say, take all the best laws of physics and put those bits on a piece of paper on the floor. Then stand back and look at them and say, “Fly!” They won’t fly. They just sit there. Quantum theory may explain why the Big Bang happened, but it can’t answer the question you’re interested in, the question of existence. The very concept of existence is a complex one that needs to be unpacked. And the question Why is there something rather than nothing is a layered one, I expect. Even if you succeeded in answering it at some level, you’d still have the next level to worry about. (128)

Holt quotes Deutsch from his book The Fabric of Reality “I do not believe that we are now, or shall ever be, close to understanding everything there is”. (129)

Other philosophers and physicists are trying to answer the “why” question by composing solutions that combine ancient and modern elements. These are the Platonic multiverses of John Leslie and Max Tegmark, both of whom, though in different ways, believe in eternally existing “forms”- goodness in the case of Leslie, mathematics in the case of Tegmark- which an infinity of universes express and realize. For the philosopher Leslie:

 … what the cosmos consists of is an infinite number of infinite minds, each of which knows absolutely everything which is worth knowing. (200)

Leslie borrows from Plato the idea that the world appeared out of the sheer ethical requirement for Goodness, that “the form of the Good bestows existence upon the world” (199).

If that leaves you scratching your scientifically skeptical head as much as it does mine, there are actual scientists, in this case the cosmologist Max Tegmark, who hold similar Platonic ideas. According to Holt, Tegmark believes that:

 … every consistently definable mathematical structure exists in a genuine physical sense. Each of these structures constitutes a parallel world, and together these parallel worlds make up a mathematical multiverse. (182)

Like Leslie, Tegmark looks to Plato’s Eternal Forms:

 The elements of this multiverse do not exist in the same space but exist outside space and time; they are “static sculptures” that represent the mathematical structure of the physical laws that govern them. (183)

If you like this line of reasoning, Tegmark has a whole book on the subject, Our Mathematical Universe. I am no Platonist and Tegmark is unlikely to convert me, but I am eager to read it. What I find most surprising about the ideas of both Leslie and Tegmark is that they combine two things I had not previously seen as capable of being combined, or had even considered outright rival models of the world: the idea of an eternal Platonic world behind existence, and the prolific features of multiverse theory in which there are many, perhaps infinite, varieties of universes.

The idea that the universe is mind-bogglingly prolific in its scale and diversity is the “fecundity” of the philosopher Robert Nozick, whom until Holt I had associated only with libertarian economics. Anyone who has a vision of a universe so prolific and diverse is okay in my book, though I do wish the late Nozick had been as open to the diversity of human socio-economic systems as he was to the diversity of universes.

Like the physicist Paul Davies, or, even better to my lights, the novelist John Updike, both discussed by Holt, I had previously thought the idea of the multiverse was a way to avoid the need for either a creator God or eternally existing laws- although, unlike Davies and Updike, and in the spirit of Ockham’s Razor, I thought this a good thing. The one problem I had with multiverse theories was the idea of not just a very large or even infinite number of alternative universes, but of parallel universes where there are other versions of me running around. Holt managed to clear that up for me.

The idea that the universe splits every time I choose to eat or not eat a chocolate bar, or some such, always struck me as silly and also somehow suffocating. Hints that we may live in a parallel universe of this sort are just one of the weird phenomena that emerge from quantum mechanics- you know, poor Schrödinger’s cat. Holt points out that this is much different from, and not connected to, the idea of multiple universes that emerges from the cosmological theory of inflation. We simply don’t know if these two ideas have any connection. Whew! I can now let others wrestle with the bizarre world of the quantum and rest comforted that the minutiae of my every decision don’t make me responsible for creating a whole other universe.

This returning to Plato seen in Leslie and Tegmark- a philosopher who died, after all, some 2,500 years ago- struck me as both weird and incredibly interesting. Stepping back, it seems to me that it’s not so much that we’re in the middle of some Hegelian dialectic relentlessly moving forward through thesis-antithesis-synthesis, but that we are involved in a very long conversation, one moving in no particular direction that every so often loops back upon itself and brings up issues and perspectives we had long left behind. It’s like a maze where you have to backtrack to the point where you made a wrong turn in order to go in the right direction. We can seemingly escape the cosmological dead end created by Christian theology and Leibniz’s idea of creation ex nihilo only by going back to ideas found before we went down that path- to Plato. Though, for my money, I prefer yet another ancient philosopher: Lucretius.

Yet maybe Plato isn’t back quite far enough. It was the pre-Socratics who invented the natural philosophy that eventually became science. There is a kind of playfulness to their ideas, all of which could exist side by side in dialogue and debate with one another, with no clear way for any theory to win- theories such as Heraclitus’ world as flux and fire, Pythagoras’ world as number, or Democritus’ world as atoms.

My hope is that we recognize our contemporary versions of these theories for what they are: “just-so” stories that we tell about the darkness beyond the edge of scientific knowledge- and the darkness is vast. They are versions of a speculative theology- the possibilianism of David Eagleman, which I have written about before- and they are harmful only when they become as rigid and inflexible as the old-school theology they are meant to replace, or when they falsely claim the kinds of proof from evidence that only science and its experimental verification can afford. We should be playful with them, in the way Plato himself was playful with such stories, in the knowledge that while we are in the “cave” we can only find the truth by looking through the darkness at varied angles.

Does Holt think there is a reason the world exists? What is really being asked here is what type of, in the philosopher Derek Parfit’s term, “selector” brought existence into being. For Swinburne the selector was God; for Leslie, Goodness; for Tegmark, mathematics; for Nozick, fullness. But Holt thinks the selector might have been simpler- indeed, that the selector was simplicity itself. All the other selectors Holt finds to be circular, ultimately ending up being used to explain themselves. But what if our world were merely the simplest one possible that is also full? Reasoning from first principles, Holt adopts something like a midpoint between a universe that contains nothing and one that contains an infinite number of perfectly good universes, a mean he calls “infinite mediocrity.”

I was not quite convinced by Holt’s conclusion, and was more intrigued by the open-ended, ambiguous quality of his exploration of the question of why there is something rather than nothing than I was by his “proof” that our existence could be explained in such a way.

What has often struck me as deplorable when it comes to theists and atheists alike is their lack of awe at the mysterious majesty of it all. That “God made it” or “it just is” strikes me as flat. Whenever I have the peace in a busy world to reflect, it is not nothingness that hits me but awe- that the world is here, that I am here, that you are here- a fact that is a statistical miracle of sorts, a web weaving itself. Holt gave me a whole new way to think about this wonder.

How wonderfully strange that our small and isolated minds leverage cosmic history and reality to reflect the universe back upon itself, that our universe might come into existence and disappear much like we do. On that score, of all the sections in Holt’s beautiful little book it was the most personal section on the death of his mother that taught me the most about nothingness. Reflecting on the memory of being at his mother’s bedside in hospice he writes:

My mother’s breathing was getting shallower. Her eyes remained closed. She still looked peaceful, although every once in a while she made a little gasping noise.

Then I was standing over her, still holding her hand, my mother’s eyes open wide, as if in alarm. It was the first time I had seen them that day. She seemed to be looking at me. She opened her mouth. I saw her tongue twitch two or three times. Was she trying to say something? Within a couple of seconds her breathing stopped.

I leaned down and told her I loved her. Then I went into the hall and said to the nurse, “I think she just died.” (272-273)

The scene struck me as the exact opposite of the joyous experience I had at the birth of my own children and somehow reminded me of a scene from Diane Ackerman’s book Deep Play.

 The moment a new-born opens its eyes discovery begins. I learned this with a laugh one morning in New Mexico where I worked through the seasons of a large cattle ranch. One day, I delivered a calf. When it lifted up its fluffy head and looked at me its eyes held the absolute bewilderment of the newly born. A moment before it had enjoyed the even, black nowhere of the womb and suddenly its world was full of color, movement and noise. I’ve never seen anything so shocked to be alive. (141-142)

At the end of the day, for the whole of existence, the question of why there is something rather than nothing may remain forever outside our reach, but we, the dead who have become the living and the living who will become the dead, are certainly intimates with the reality of being and nothingness.

The Problem with the Trolley Problem, or why I avoid utilitarians near subways

Master Bertram of Minden, The Fall

Human beings are weird- at least, that is, when comparing ourselves to our animal cousins. We’re weird in terms of our use of language, our creation and use of symbolic art and mathematics, our extensive use of tools. We’re also weird in terms of our morality, engaging in strange behaviors vis-à-vis one another that are almost impossible to find in the rest of the animal world.

We will help one another in ways that no other animal would think of, sacrificing resources, and sometimes going so far as surrendering the one thing evolution commands us to do- reproduce- in order to aid total strangers. But don’t get on our bad side. No species has ever murdered or abused fellow members of its own species with such viciousness or cruelty. Perhaps religious traditions have something fundamentally right: that our intelligence, our “knowledge of good and evil”, really has put us on an entirely different moral plane, opening up a series of possibilities and a consciousness new to the universe, or at least to our limited corner of it.

We’ve been looking for purely naturalistic explanations of our moral strangeness at least since Darwin. Over the past decade or so there has been a growing number of hybrid explorations of this topic, combining, in varying degrees, elements of philosophy, evolutionary theory and cognitive psychology.

Joshua Greene’s recent Moral Tribes: Emotion, Reason, and the Gap Between Us and Them is an excellent example of these. Yet it is also a book that suffers from the flaw of almost all such reflections on the singularity of human moral distinctiveness: what it gains in explanatory breadth comes at the cost of a diminished understanding of how life on this new moral plane is actually experienced, the way knowing the volume of water in an ocean, lake or pool tells you nothing about what it means to swim.

Greene, like any thinker looking for a “genealogy of morals”, peers back into our evolutionary past. Human beings spent the vast majority of our history as members of small tribal groups of perhaps a few hundred members. Wired over eons of evolutionary time into our very nature is an unconscious ability to grapple with conflicting interests between us as individuals and the interests of the tribe to which we belong- what Greene calls the conflicts of Me vs. Us.

When evolution comes up with a solution to a commonly experienced set of problems, it tends to hard-wire that solution. We don’t think at all about how to see, ditto how to breathe, and when we need to, it’s a sure sign that something has gone wrong. There is some degree of nurture in this equation, however: raise a child up to a certain age in total darkness and they will go blind. The development of language skills especially shows this nature-nurture codependency. We are primed by evolution for language acquisition at birth and just need an environment in which a language is regularly spoken. Greene thinks our normal, everyday “common sense morality” is like this: it comes naturally to us and becomes automatic given the right environment.

Why might evolution have wired human beings morally in this way, especially when it comes to our natural aversion to violence? Greene thinks Hobbes was right when he proposed that human beings possess a natural equality in the capacity to kill, the consequence of our evolved ability to plan and use tools. Even a small individual could kill a much larger human being if he planned it well and struck fast enough. Without some inbuilt inhibition against acts of personal aggression, humanity would have quickly killed itself off in a stone-age version of the Hatfields and the McCoys.

This automatic common sense morality has gotten us pretty far, but Greene thinks he has found a place where it has become a bug, distorting our thinking rather than helping us make moral decisions. He sees the distortion in the wrong decisions we make when imagining how to save people from being run over by runaway trolley cars.

Over the last decade the Trolley Problem has become standard fare in any general work dealing with cognitive psychology. The problem varies, but in its most standard depiction the scenario goes as follows: a trolley is hurtling down a track and about to run over and kill five people. The only way to stop it is to push an innocent, very heavy man onto the tracks- a decision guaranteed to kill him. Do you push him?

The answer people give is almost universally no, and Greene intends to show why most of us answer this way even though, as a matter of blunt moral reasoning, saving five lives should be worth more than sacrificing one.

The problem, as Greene sees it, is that we are using our automatic moral decision-making faculties to make the “don’t push” call. The evidence for this can be found in the fact that those whose ability to make automatic moral decisions is compromised end up making the “right” moral decision to sacrifice one life in order to save five.

People who have compromised automatic decision-making processes include persons with damage to the ventromedial prefrontal cortex, persons such as Phineas Gage, whose case was made famous by the neuroscientist Antonio Damasio in his book Descartes’ Error. Gage was a 19th-century railroad foreman who, after an accident drove an iron rod through part of his brain, became emotionally unstable and unable to make good decisions. Damasio used his case, and more modern ones, to show just how important emotion is to our ability to reason.

It just so happens that persons suffering from Gage-style injuries also decide the Trolley Problem in favor of saving five persons rather than refusing to push one to his death. Persons with brain damage aren’t the only ones who decide the Trolley Problem in this way- autistic individuals and psychopaths do so as well.

Greene makes some leaps from this difference in how people respond to the Trolley Problem, proposing that we have not one but two systems of moral decision making, which he compares to the automatic and manual settings on a camera. The automatic mode is visceral, fast and instinctive, whereas the manual mode is reasoned, deliberative and slow. Greene thinks that those who decide in the Trolley Problem to push the man onto the tracks to save five people are accessing their manual mode because their automatic settings have been shut off.

The leaps continue: Greene thinks we need manual mode to solve the types of moral problems our evolution did not prepare us for- not Me vs. Us, but Us vs. Them, our society in conflict with other societies. Moral philosophy is manual mode in action, and Greene believes one version of moral philosophy is head and shoulders above the rest: Utilitarianism, which he would prefer we call “deep pragmatism”.

Needless to say, there are problems with the Trolley Problem. How, for instance, does one interpret the change in responses to the problem when one substitutes a switch for a push? The number of respondents who choose to send the fat man to his death on the tracks to save five people significantly increases when one exchanges a push for the flip of a switch that opens a trapdoor. Greene thinks this is because bodily distance allows our more reasoned moral thinking to come into play. However, this is not the conclusion one reaches when looking at real instances of personal versus instrumentalized killing, i.e. in war.

We’ve known for quite a long time that individual soldiers have incredible difficulty killing even enemy soldiers on the battlefield, a subject brilliantly explored by Dave Grossman in his book On Killing: The Psychological Cost of Learning to Kill in War and Society. The strange thing is that whereas an individual soldier has problems killing even one human being who is shooting at him, and can sustain psychological scars because of such killing, airmen working in bombers, who incinerate thousands of human beings- men, women and children- do not seem to be affected to the same degree by either the inhibitions on killing or its mental scars. The US military’s response to this has been to try to instrumentalize and automate killing by infantrymen in the same way killing by airmen and sailors is dissociated. Disconnection from their instinctual inhibitions against violence allows human beings to achieve levels of violence impossible at an individual level.

It may be that the persons who choose to save five persons at the cost of one aren’t engaging in a form of moral reasoning at all; they are merely comparing numbers: 5 > 1, so, all else being equal, choose the greater number. Indeed, the only way it might be said that higher moral reasoning was being used in choosing 5 over 1 would be if the one who needed to be sacrificed to save the five was the person being asked- the question being: would you throw yourself on the trolley tracks in order to save 5 other people?
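
The contrast Greene draws can be put in deliberately crude terms- a toy sketch of my own, not anything from Moral Tribes:

```python
# Greene's two "camera settings", caricatured for the footbridge case.

def manual_mode(lives_saved: int, lives_lost: int) -> str:
    """Bare utilitarian arithmetic: just compare the numbers."""
    return "push" if lives_saved > lives_lost else "don't push"

def automatic_mode(requires_personal_violence: bool) -> str:
    """The fast, visceral setting: veto hands-on harm, whatever the count."""
    return "don't push" if requires_personal_violence else "push"

print(manual_mode(5, 1))     # "push" -- 5 > 1 is all it checks
print(automatic_mode(True))  # "don't push" -- the nearly universal answer
```

The point of the caricature is that manual_mode contains no moral content beyond a comparison operator, which is exactly the worry raised above.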

Our automatic, emotional systems are actually very good at leading us to moral decisions, and the reasons we have trouble stretching them to Us vs. Them problems might be different from the ones Greene identifies.

A reason the stretching might be difficult can be seen in a famous section of The Brothers Karamazov by the Russian novelist Fyodor Dostoyevsky. “The Grand Inquisitor” is a section in which the character Ivan relates an imagined tale to his brother Alyosha, one where Jesus has returned to earth and is immediately arrested by the authorities of the Roman church, who claim that they have already solved the problem of man’s nature: the world is stable so long as mankind is given bread and prohibited from exercising free will. Writing in the late 19th century, Dostoyevsky was an arch-conservative with a deep belief in the Orthodox Church and a prescient anxiety regarding the moral dangers of nihilism and revolutionary socialism in Russia.

In her On Revolution, Hannah Arendt pointed out that Dostoyevsky was conveying with his story of the Grand Inquisitor not only an important theological observation- that only an omniscient God could experience the totality of suffering without losing sight of the suffering individual- but an important moral and political observation as well: the contrast between compassion and pity.

Compassion by its very nature can not be touched off by the sufferings of a whole group of people, or, least of all, mankind as a whole. It cannot reach out further than what is suffered by one person and remain what it is supposed to be, co-suffering. Its strength hinges on the strength of passion itself, which, in contrast to reason, can comprehend only the particular, but has no notion of the general and no capacity for generalization. (75)

In light of Greene’s argument, what this means is that there is a very good reason our automatic morality has trouble with abstract moral problems: without sight of an individual to whom moral decisions can be attached, one must engage in a level of abstraction our older moral systems did not evolve to grapple with. As a consequence of our lack of God-like omniscience, moral problems that deal with masses of individuals achieve consistency only by applying some general rule. Moral philosophers, not to mention the rest of us, are in effect imaginary kings issuing laws for the good of their kingdoms. The problem is not only that the ultimate effects of such abstract-level decisions are opaque, given our general uncertainty regarding the future, but that there is no clear and absolute way of defining for whose good exactly we are imposing these rules.

Even the Utilitarianism that Greene thinks is our best bet for reaching common agreement regarding such rules has widely different and competing answers to the question “good for whom?” In its camp can be found abolitionists such as David Pearce, who advocates the re-engineering of all of nature to minimize suffering, and Peter Singer, who establishes a hierarchy of rational beings. For the latter, the more of a rational being you are, the more the world should conform to your needs- so much so that Singer thinks infanticide is morally justifiable because newborns do not yet possess the rational goals and ideas regarding the future possessed by older children and adults. Greene, who thinks Utilitarianism could serve as mankind’s common “moral currency”, or language, himself has little to say regarding where his concept of Utilitarianism falls on this spectrum, but we are very far indeed from anything like universal moral agreement on the Utilitarianism of Pearce or Singer.

21st-century global society has indeed come to embrace some general norms. Past forms of domination and abuse, most notably slavery, have become incredibly rare relative to other periods of human history, and, though far from universally, women are better treated than in ages past. External relations between states have improved as well: the world is far less violent than in any other historical era, and the global instruments of cooperation and, if far too seldom, coordination are more robust.

Many factors might be credited here, but the one I would credit least is any adoption of a universal moral philosophy. Certainly one of the large contributors has been the increased range over which we exercise our automatic systems of morality. Global travel, migration, television, fiction and film all allow us to see members of the “Them” as fragile, suffering, and human-all-too-human individuals like “Us”. This may not provide us with a negotiating tool to resolve global disputes over questions like global warming, but it does put us on the path to solving such problems. For a stable global society will only appear when we realize that, in more senses than not, there is no tribe on the other side of us- that there is no “Them”, there is only “Us”.

War and Human Evolution


Have human evolution and progress been propelled by war?

The question is not an easy one to ask, not least because war is not merely one of the worst but arguably the worst thing human beings inflict on one another, comprising murder, collective theft, and- almost everywhere except, and only quite recently, in the professional militaries of Western powers- mass and sometimes systematic rape. Should it be the case that war was somehow the driving mechanism, the hidden motor, behind what we deem most good regarding human life- that we are trapped in some militaristic version of Mandeville’s Fable of the Bees- then we might truly feel ourselves in the grip of, and the plaything of, something like the demon of Descartes’ imagination, though rather than taunting us by making the world irrational, it would curse us to a world of permanent moral incomprehensibility.

When it comes to human evolution, we should probably begin to address the question by asking how long war has been part of the human condition. Almost all of our evolution took place while we lived as hunter-gatherers; indeed, we’ve only even had agriculture (and anything that comes with it) for 5-10% of our history. From a wide-angle view the agricultural, industrial, and electronic revolutions might all be considered of a piece, perhaps one of the reasons things like philosophy, religion, and literature from millennia ago still often resonate with us. Within the full scope of our history, Socrates and Jesus lived only yesterday.

The evidence here isn’t good for those who hoped our propensity for violence is a recent innovation. Over the past generation a whole slew of archeological scholars has challenged the notion of the “peaceful savage” born in the radiating brilliance of the mind of Jean-Jacques Rousseau. Back in 1755 Rousseau had written his Discourse on Inequality, perhaps the best-timed book ever. Right before the industrial revolution was to start, Rousseau made the argument that it was civilization, development and the inequalities they produced that were the source of the ills of mankind- including war. He had created an updated version of the myth of Eden, which became in his hands mere nature, without any God in the story. For those who would fear humanity’s increasingly rapid technological development, and the victory of our artificial world over the natural one we were born from and into, he gave an avenue of escape, which would be to shed not only the new industrial world but the agricultural one that preceded it. Progress, in the mind of Rousseau and those inspired by him, was a Sisyphean trap; the road to paradise lay backwards in time, in the return to primitive conditions.

During the revived Romanticism of the 1960s, archeologists like Francis de Waal set out to prove Rousseau right. Primitive, hunter-gatherer societies, in this view, were not warlike and had only become so with contact from advanced warlike societies such as our own. In recent decades, however, this idea of prehistoric and primitive pacifism has come under sustained assault. Steven Pinker’s The Better Angels of Our Nature: Why Violence Has Declined is the most popularly known of these challenges to the idea of primitive innocence that began with Rousseau’s Discourse, but in many ways Pinker was merely building off of, and making more widely known, scholarship done elsewhere.

One of the best examples of this scholarship is Lawrence H. Keeley’s War Before Civilization: The Myth of the Peaceful Savage. Keeley makes a pretty compelling argument that warfare was almost universal across pre-agricultural, hunter-gatherer societies. Rather than being dominated by the non-lethal, ritualized “battle”, as de Waal had argued, war for such societies was a high-casualty and total affair.

It is quite normal for conflict between tribes in close proximity to be endemic. Protracted periods of war result in casualty figures that exceed those of modern warfare, even the total war between industrial states seen only in World War II. Little distinction is made in such pre-agricultural forms of conflict between women and children and men of fighting age. All are killed indiscriminately in wars made up, not of the large set battles that characterize warfare in civilizations after the adoption of agriculture, but of constant ambushes and attacks that aim to kill those who are most vulnerable.

Pre-agricultural war is what we now call insurgency or guerilla war. Americans who remember Vietnam, or who look with clear eyes at the Second Iraq War, know how effective this form of war can be against even the most technologically advanced powers. Although Keeley acknowledges the effectiveness and brutal efficiency of guerilla war, he does not make the leap to suggest that in going up against insurgency, advanced powers face, in some sense, a more evolutionarily advanced rival, honed by at least 100,000 years of human history.

Keeley concludes that as long as there have been human beings there has been war; yet he reaches much less dark conclusions from the antiquity of war than one might expect. Human aversion to violence and war, disgust with its consequences, and despair at its, more often than not, unjustifiable suffering are near universal whatever a society’s level of technological advancement. He writes:

“… warfare whether primitive or civilized involves losses, suffering, and terror even for the victors. Consequently, it was nowhere viewed as an unalloyed good, and the respect accorded to an accomplished warrior was often tinged with aversion. “

“For example, it was common the world over for the warrior who had just killed an enemy to be regarded by his own people as spiritually polluted or contaminated. He therefore had to undergo a magical cleansing to remove this pollution. Often he had to live for a time in seclusion, eat special food or fast, be excluded from participation in rituals and abstain from sexual intercourse. “ (144)

Far from being ethically shallow, psychologically unsophisticated or unaware of the moral dimension of war, pre-agricultural societies hold ideas of war as sophisticated as our own. What the community asks of the warrior when it sends him into conflict and asks him to kill others is to become spiritually tainted, and ritual is the way for him to become reintegrated into his community. This whole idea that soldiers require moral repair much more than praise for their heroics and memorials is one we are only now rediscovering.

Contrary to what Freud wrote to Einstein when the latter asked him why human beings seemed to show such a propensity for violence, we don’t have a “death instinct” either. Violence is not an instinct, or as Azar Gat wrote in his War in Human Civilization:

Aggression does not accumulate in the body by a hormone loop mechanism, with a rising level that demands release. People have to feed regularly if they are to stay alive, and in the relevant ages they can normally avoid sexual activity all together only by extraordinary restraint and at the cost of considerable distress. By contrast, people can live in peace for their entire lives, without suffering on that account, to put it mildly, from any particular distress. (37)

War and violence are not an instinct but a tactic- thank God, for if they were an instinct we’d all be dead. Still, if it is a tactic it has been a much used one, and it seems that throughout our hundred-thousand-year human history it is one we have used with less rather than greater frequency as we have technologically advanced, and one might wonder why this has been the case.

Azar Gat’s theory of our increasing pacifism is very similar to Pinker’s: war is a very costly and dangerous tactic. If you can get what you are after without it, peace is normally the course human beings will follow. Since the industrial revolution, nature has come under our increasing command; war and expansion are no longer very good answers to the problems of scarcity, and thus the frequency of wars was on the decline even before we had invented atomic weapons, which turned full-scale war into pure madness.

Yet a good case can be made that war played a central role in laying down the conditions in which the industrial revolution, the spark of the material progress that has made war an unnecessary tactic, occurred, and that war helped push technological advancement forward. The Empire won by the British through force of arms was the lever that propelled their industrialization. The Northern United States industrialized much more quickly as a consequence of the Civil War, and Western pressure against pre-industrial Japan and China pushed those countries to industrialize. War research gave us both nuclear power and computers; the space race and satellite communications were the result of geopolitical competition. The Internet was created by the Pentagon, and the contemporary revolution in bionic prosthetics grew out of the horrendous injuries suffered in the wars in Iraq and Afghanistan.

The thing is, just as is the case for pre-agricultural societies, it is impossible to disaggregate war itself from the capacity for cooperation between human beings, a capacity that has increased enormously since the widespread adoption of agriculture. Any increase in human cooperative capacity increases a society’s capacity for war, and, oddly enough, many increases in human war-fighting capacity lead to increases in cooperative capacity in the civilian sphere. We are caught in a tragic technological logic of which those who think technology leads of itself to global prosperity and peace, if we just use it for the right ends, are largely unaware. Increases in our technological capacity, even if pursued for non-military ends, will grant us further capacity for violence, and it is not clear that abandoning military research wouldn’t derail or greatly slow technological progress altogether.

One perhaps doesn’t normally associate the quest for better warfighting technologies with transhumanism- the idea “that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology”- but one should. It’s not only the case that, as mentioned above, the revolution in bionic prosthetics emerged from war; the military is at the forefront of almost every technology on transhumanists’ wish list for future humanity, from direct brain-machine and brain-to-brain interfaces to neurological enhancements for cognition and human emotion. And this doesn’t even capture the military’s revolutionary effect on other aspects of the long-held dreams of futurists, such as robotics.

How the world’s militaries have been, and are, moving us in the direction of “post-human” war is the subject of Christopher Coker’s Warrior Geeks: How 21st Century Technology Is Changing the Way We Fight and Think About War, and it is with this erudite and thought-provoking book that the majority of the rest of this post will be concerned.

Warrior Geeks is a book about much more than the “geekification” of war- the fact that much of what now passes for war is computerized, many of the soldiers themselves now video gamers stuck behind desks, even if the people they kill remain flesh and blood. It is also a book about the evolving human condition, how war through science is changing that condition, and what this change might mean for us. For Coker sees the capabilities many transhumanists dream of finding their application first on the battlefield.

There is the constantly monitored “quantified-self”:

And within twenty years much as engineers analyze data and tweak specifications in order to optimize a software program, military commanders will be able to bio-hack into their soldiers brains and bodies, collecting and correlating data the better to optimize their physical and mental performance. They will be able to reduce distractions and increase productivity and achieve “flow”- the optimum state of focus. (63-64)

And then there is “augmented reality”, where the real world is overlaid with the digital, such as in the ideal the US army and others are reaching for- to tie together servicemen in a “cybernetic network” where soldiers:

…. don helmets with a minute screen which when flipped over the eyes enables them to see the entire battlefield, marking the location of both friendly and enemy forces. They can also access the latest weather reports. In future, soldiers will not even need helmets at all. They will be able to wear contact lenses directly onto their retinas. (93)

In addition the military is investigating neurological enhancements of all sorts:

The ‘Persistence in Combat Program’ is well advanced and includes research into painkillers which soldiers could take before going into action, in anticipation of blunting the pain of being wounded. Painkillers such as R1624, created by scientists at Rinat Neuroscience, use an antibody to keep in check a neuropeptide that helps transmit pain sensations from the tissues to the nerves. And some would go further in the hope that one day it might be possible to pop a pill and display conspicuous courage like an army of John Rambos. (230)

With 1 in 8 soldiers returning from combat suffering from PTSD, some in the military hope that the scars of war might be undone with neurological advances that would allow us to “erase” memories.

One might go on with examples of exoskeletons that give soldiers superhuman endurance and strength, or of tying together the most intimate military unit, the platoon, with neural implants that allow direct brain-to-brain communication- a kind of “brain-net” that an optimistic neuroscientist like Miguel Nicolelis seems not to have anticipated.

Coker, for his part, sees these developments in a very dark light: a move away from what have been eternal features of both human nature and war, such as courage and character, and towards a technologically over-determined and ultimately less than human world. He is very good at showing how our reductive assumptions end up compressing the full complexity of human life into something that gives us at least the illusion of technological control. It’s not our machines that are becoming like us, but we who are becoming like our machines; understanding underlying genetic and neural mechanisms is not necessarily a gain in understanding of, and sovereignty over, our nature, but a constriction of our sense of ourselves.

Do we gain from being able to erase the memory of an event in combat that leads to our moral injury, or lose something in no longer being able to understand and heal it through reconciliation and self-knowledge? Is this not in some way a more reduced and blind answer to the experience of killing and death in war than that of the primitive tribespeople mentioned earlier? Luckily, reductive theories of human nature based on our far less than complete understanding of neuroscience or genetics almost always end up showing their inadequacies after a period of friction with the sheer complexity of being alive.

There are dangers here, which I see as two-fold. First, there is always the danger that we will act under the assumption that we have sufficient knowledge and that the solutions to some problem are therefore “easy” if we just double down and apply some technological or medical fix. Given enough time, the complexity of reality is likely to rear its head, though not before we have caused a good deal of damage in the name of a chimera. Second, there is the danger that we will use our knowledge to torque the complexity of the human world into something more simple and ultimately more shallow in the quest for a reality we deem manageable. There is something deeper and more truthful in the primitive response to the experience of killing than erasing the memory of such experiences.

One of the few problems I had with Coker’s otherwise incredible book is, sadly, a very fundamental one. In some sense, Warrior Geeks is not so much a critique of the contemporary ways and trend lines of war as a bio-conservative argument against transhumanist technologies that uses the subject of war as a backdrop. Echoing Thucydides, Coker defines war as “the human thing” and sees its transformation towards post-human forms as ultimately dehumanizing. Thucydides may have been the most brilliant historian ever to live, but his claim that in war our full humanity is made manifest is true only in a partial sense. Almost all higher animals engage in intraspecies killing, and we are not the worst culprit. It wasn’t human beings who invented war, and who have been fighting the longest, but the ants. As Keeley puts it:

But before developing too militant a view of human existence, let us put war in its place. However frequent, dramatic, and eye-catching, war remains a lesser part of social life. Whether one takes a purely behavioral view of human life or imagines that one can divine mental events, there can be no dispute that peaceful activities, arts, and ideas are by far more crucial and more common even in bellicose societies. (178)

If war has any human meaning over and above areas that truly distinguish us from animals, such as our culture, or for that matter experiences we share with other animals, such as love, companionship, and care for our young, it is that at the intimate level, such as is found in platoons, it is one of the few areas where what we do truly has consequences and where we are not superfluous.

We have evolved for culture, love and care as much as we ever have for war, and during most of our evolutionary history the survival of those around us really was dependent on what we did. War, in some cases, is one of the few places today where this earlier importance of the self can be made manifest. Or, as the journalist Sebastian Junger has put it, many men eagerly return to combat not so much because they are addicted to its adrenaline high as because being part of a platoon in a combat zone is one of the few places where what a twenty-something-year-old man does actually counts for something.

As for transhumanism, or continuing technological advancement, and war, the solution to the problem starts with realizing there is no ultimate solution to the problem. To embrace human enhancement advanced by the spear-tip of military innovation would be to turn the movement into something resembling fascism- that is, militarism combined with the denial of our shared human worth and destiny. Yet to completely deny the fact that research into military applications often leads to leaps in technological advancement would be to blind ourselves to the facts of history. Exoskeletons that are developed as armor for our soldiers will very likely lead to advances for paraplegics and the elderly, not to mention persons in perfect health, just as a host of military innovations have led to benefits for civilians in the past.

Perhaps one could say that if we sought to drastically reduce military spending worldwide, as in any case I believe we should, we would have sufficient resources to devote to civilian technological improvements in and for themselves. The problem here is the one often missed by technological enthusiasts who fail to take into account the cruel logic of war- that advancements in the civilian sphere alone would lead to military innovations. Exoskeletons developed for paraplegics would very likely end up being worn by soldiers somewhere, or as Keeley puts it:

Warfare is ultimately not a denial of the human capacity for social cooperation, but merely the most destructive expression of it. (158)

Our only good answer to this is really not technological at all; it is to continue to shrink the sphere of human life where war “makes sense” as a tactic. War is only chosen as an option when the peaceful roads to material improvement and respect are closed. We need to ensure both that everyone is able to prosper in our globalized economy and that countries and peoples are encouraged and have ways to resolve potentially explosive disputes, and are afforded the full respect that should come from belonging to what Keeley calls the “intellectual, psychological, and physiological equality of humankind” (179), to which one should add moral equality as well. The prejudice that we who possess scientific knowledge and greater technological capacity than many of our rivals are in some sense fundamentally superior is one of our most pernicious dangers- we are all caught in the same trap.

For two centuries a decline in warfare has been possible because, even taking into account the savagery of the World Wars, the prosperity of mankind has been improving. A new crisis of truly earthshaking proportions would occur should this increasing prosperity be derailed- most likely from its collision with the earth’s inability to sustain our technological civilization becoming truly universal, or from the exponential growth upon which this spreading prosperity rests being tapped out and proving a short-lived phenomenon; it is, after all, only two centuries old. Should rising powers such as China and India, along with peoples elsewhere, come to the conclusion that not only were they to be denied full development to the state of advanced societies, but that the system was rigged in favor of the long industrialized powers, we might wake up to discover that technology hadn’t been moving us towards an age of global unity and peace after all, but that we had been lucky enough to live in a long interlude in which the human story, for once, was not also the story of war.

 

Wired for Good and Evil

Battle in Heaven

It seems that almost as long as we could speak, human beings have been arguing over what, if anything, makes us different from other living creatures. Mark Pagel’s recent book Wired for Culture: The Origins of the Human Social Mind is just the latest iteration of this millennia-old debate, and, as it has always been, the answers he comes up with have implications for our relationship with our fellow animals and, above all, our relationship with one another, even if Pagel doesn’t draw many such implications.

As is the case with so many ideas regarding human nature, Aristotle got there first. A good bet to make, when anyone comes up with a really interesting idea regarding what we are as a species, is that either Plato or Aristotle had already come up with it, or discussed it. Which is either a sign of how little progress we have made in understanding ourselves, or symptomatic of the fact that the fundamental questions remain, for all our gee-whiz science and technology, important even if ultimately never fully answerable.

Aristotle had classified human beings as being unique in the sense that we were a zóon politikón variously translated as a social animal or a political animal. His famous quote on this score of course being: “He who is unable to live in society, or who has no need because he is sufficient for himself, must be either a beast or a god.” (Politics Book I)

He did recognize that other animals were social in a similar way to human beings- animals such as storks (who knew), but especially social insects such as bees and ants. He even understood bees (the Greeks were prodigious beekeepers) and ants as living under different types of political regimes. Misogynist that he was, he thought bees lived under a king instead of a queen, and thought the ants more democratic and anarchical. (History of Animals)

Yet Aristotle’s definition of mankind as zóon politikón is deeper than that. As he says in his Politics:

That man is much more a political animal than any kind of bee or any herd animal is clear. For, as we assert, nature does nothing in vain, and man alone among the animals has speech….Speech serves to reveal the advantageous and the harmful and hence also the just and unjust. For it is peculiar to man as compared to the other animals that he alone has a perception of good and bad and just and unjust and other things of this sort; and partnership in these things is what makes a household and a city.  (Politics Book 1).

Human beings are shaped in a way animals are not by the culture of their community, the particular way their society has defined what is good and what is just. We live socially in order to live up to our potential for virtue- ideas that speech alone makes manifest and allows us to resolve. Aristotle didn’t just mean talk but discussion, debate, conversation, all guided by reason. Because he doesn’t think women and slaves are really capable of such conversations, he excludes them from full membership in human society. More on that at the end, but now back to Pagel’s Wired for Culture.

One of the things Pagel thinks makes human beings unique among all other animals is the fact that we alone are hyper-social or ultra-social. The only animals that even come close are the eusocial insects- Aristotle’s ants and bees. As E.O. Wilson pointed out in his The Social Conquest of Earth, it is the eusocial species, including us, which, pound for pound, absolutely dominate life on earth. In the spirit of a 1950s B-movie, all the ants in the world added together are actually heavier than all of us.

Eusociality makes a great deal of sense for insect colonies and hives given how closely related these groups are; indeed, in many ways they might be said to constitute one body, and just as cells in our body sacrifice themselves for the survival of the whole, eusocial insects often do the same. Human beings, however, are different. After all, if you put your life at risk by saving the children of non-relatives from a fire, you’ve done absolutely nothing for your reproductive potential, and may even have lost your chance to reproduce. Yet sacrifices and cooperation like this are performed by human beings all the time, though the vast majority in a less heroic key. We are thus hypersocial- willing to cooperate with and help non-relatives in ways no other animals do. What gives?

Pagel’s solution to this evolutionary conundrum is very similar to Wilson’s, though he uses different lingo. Pagel sees us as naturally members of what he calls cultural survival vehicles; we evolved to be raised in them, and to be loyal to them, because that was the best, perhaps the only, way of securing our own survival. These cultural survival vehicles themselves became the selective environment in which we evolved. Societies where individuals hoarded their cooperation or refused to make sacrifices for the survival of the group were, over time, driven out of existence by those where individuals had such qualities.

The fact that we have evolved to be cooperative and giving within our particular cultural survival vehicle Pagel sees as the origin of our angelic qualities- our altruism, heroism, and just general benevolence. No other animal helps its equivalent of little old ladies cross the street, or at least not to anything like the extent we do.

Yet if having evolved to live in cultural survival vehicles has made us capable of being angels, for Pagel it has also given rise to our demonic capacities. No other species is as giving as we are, but, then again, no other species has invented torture devices and practices like the iron maiden or breaking on the wheel either. Nor does any other species go to war with fellow members of its species so commonly or so savagely as human beings.

Pagel thinks that our natural benevolence can be easily turned off under two circumstances, both of which represent a threat to our particular cultural survival vehicle.

We can be incredibly cruel when we think someone threatens the survival of our group by violating its norms. There is a sort of natural proclivity in human nature to burn “witches”, which is why we need to be particularly on guard whenever some part of our society is labeled as vile and marked for punishment- The Scarlet Letter should be required reading in high school.

There is also a natural proclivity for members of one cultural survival vehicle to demonize and treat cruelly their rivals- a weakness we need to be particularly aware of in judging the heated rhetoric in the run-up to war.

Pagel also breaks with Richard Dawkins over the latter’s view of the evolutionary value of religion. Dawkins, in The God Delusion and subsequently, has tended to view religion as a net negative- being a suicide bomber is not a successful reproductive strategy, nor is celibacy. Dawkins has tended to see ideas- his “memes”- as in a way distinct from the person they inhabit. Like genes, his memes don’t care all that much about the well-being of the person who has them- they just want to make more of themselves- a goal that easily conflicts with the goal of genes to reproduce.

Pagel quite nicely closes the gap between memes and genes, which had left human beings torn in two directions at once. Memes need living minds to exist, so any meme that had too large a negative effect on the reproductive success of genes should have dug its own grave quite quickly. But if genes are being selected for because they protect not the individual but the group, one can see how memes that might adversely impact an individual’s chance of reproductive success can nonetheless aid in the survival of the group. For religions to have survived for so long and to be so universal they must be promoting group survival rather than being a detriment to it, and because human beings need these groups to live, religion must, on average, though not in every particular case, be promoting the survival of the individual at least to reproductive age.

One of the more counterintuitive, and therefore interesting, claims Pagel makes is that what really separates human beings from other species is our imitativeness as opposed to our inventiveness. It is the reverse of the trope found in the 1960s and 70s books and films built around the 1963 novel La Planète des singes- The Planet of the Apes. If you remember, the apes there are able to imitate human technology but aren’t able to invent it- “monkey see, monkey do”. Perhaps it should be “human see, human do”, for if Pagel is right our advantage as a species really lies in our ability to copy one another. If I see you doing something enough times I am pretty likely to be able to repeat it, though I might not have a clue what it was I was actually doing.

Pagel thinks that all we need is innovation through minor tweaks, combined with our ability to rapidly copy one another, for technological evolution to take off. In his view geniuses are not really required for human advancement, though they might speed the process along. He doesn’t really apply his ideas to modern history, but such a view could explain what the scientific revolution, which corresponded with the widespread adoption of the printing press, was really all about. The printing press allowed ideas to be disseminated more rapidly, whereas the scientific method proved a way to sift those that worked from those that didn’t.
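Pagel’s copy-and-tweak ratchet is easy to see in a toy model. The following is a minimal sketch of my own devising, not anything from Wired for Culture, and all its parameters are invented for illustration: agents mostly copy the best “technology” they can see, and occasionally apply a small, blind tweak. Copying plus minor tweaks alone produces steady cumulative improvement, no geniuses required.

```python
import random

# Toy model of cultural evolution by imitation plus minor tweaks
# (a hypothetical illustration, not Pagel's own model).
random.seed(42)

N_AGENTS = 100      # population of imitators
GENERATIONS = 200
TWEAK_RATE = 0.05   # chance an agent tweaks instead of copying faithfully
TWEAK_SIZE = 0.1    # tweaks are minor, and as likely to hurt as to help

fitness = [1.0] * N_AGENTS   # each agent's current "technology" level

for _ in range(GENERATIONS):
    best = max(fitness)      # the most successful visible technology
    next_gen = []
    for _ in range(N_AGENTS):
        tech = best          # imitation: copy the best
        if random.random() < TWEAK_RATE:
            # innovation: a small, blind modification
            tech += random.uniform(-TWEAK_SIZE, TWEAK_SIZE)
        next_gen.append(tech)
    fitness = next_gen

print(f"mean technology level after {GENERATIONS} generations: "
      f"{sum(fitness) / N_AGENTS:.2f}")
```

Run it and the mean level climbs far above its starting value, because any lucky tweak is copied by everyone in the next generation while unlucky tweaks are simply abandoned- the ratchet is in the copying, not in any individual’s brilliance.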

This makes me wonder if something rather silly like YouTube hasn’t further revved up cultural evolution in this imitative sense. What I’ve found is that instead of turning to “elders” or “experts” when I want to do something new, I just turn to YouTube and find some nice or ambitious soul who has filmed themselves going through the steps.

The idea that human beings are the great imitators has some very unanticipated consequences for the rationality of human beings vis-à-vis other animals. Laurie Santos of the Comparative Cognition Laboratory at Yale has shown that human beings can be less, not more, rational than animals when it comes to certain tasks due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren’t thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure done by another human being even when able to independently work the puzzle out. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. One more sign that human beings can be thankful for at least some of their irrationality.

This might have implications for machine intelligence that neither Pagel nor Santos draw. Machines might be said to be rational in the way animals are rational, but this does not mean that they possess something like human thought. The key to making artificial intelligence more human-like might be in making it, in some sense, less rational, or as Jeff Stibel has put it, for machine intelligence to be truly human-like it will need to be “loopy”, “iterative”, and able to make mistakes.

To return to Pagel, he also sees implications for human consciousness and our sense of self in the idea that we are “wired for culture”. We still have no idea why it is human beings have a self, or think they have one, and we have never been able to locate the sense of self in the brain. Pagel thinks it’s pretty self-evident why a creature enmeshed in a society of like creatures with less than perfectly aligned needs would evolve a sense of self. We need one in order to negotiate our preferences and our rights.

Language, for Pagel, is not so much a means for discovering and discussing the truth as an instrument wielded by the individual to gain in his own interest through persuasion and sometimes outright deception. Pagel’s views seem to be seconded by Steven Pinker, who in a fascinating talk back in 2007 laid out how the use of language seems to be structured by the type of relationship it represents. In a polite dinner conversation you don’t say “give me the salt”, but “please pass me the salt”, because the former is a command of the type most found in a dominance/submission relationship. The language of seduction is not “come up to my place so I can @#!% you”, but “would you like to come up and see my new prints”. Language is a type of constant negotiation and renegotiation between individuals, one that is necessary because relationships between human beings are not ordinarily based on coercion, where no speech is necessary besides the grunt of a command.

All of which brings me back to Aristotle, to the question of evil, and to the sole glaring oversight of Pagel’s otherwise remarkable book. The problem moderns often have with the brilliant Aristotle is his justification of slavery and the subordination of women. That is, Aristotle clearly articulates what makes a society human- our interdependence, our capacity for speech, the way our “telos” is to be shaped and tuned by the culture in which we are raised like a precious instrument. At the same time he turns around and allows this society to compel by force the actions of those deliberately excluded from the opportunity for speech, an exclusion he characterizes as a “natural” defect.

Pagel’s cultural survival vehicles miss this sort of internal oppression, which is distinct both from the moral anger of the community against those who have violated its norms and from its violence towards those outside of it. Aren’t there miniature cultural survival vehicles, in the form of one group or another, that foster their own genetic interest through oppression and exploitation? It got me thinking whether any example of what a more religious age would have called “evil”, as opposed to mere “badness”, couldn’t be understood as a form of either oppression or exploitation, and I was unable to come up with one.

Such sad reflections dampen Pagel’s optimism, towards the end of Wired for Culture, that the world’s great global cities are a harbinger of our transcending the state of war between our various cultural survival vehicles and moving towards a global culture. We might need to delve more deeply into our evolutionary proclivities to fight within our own, and against other, cultural survival vehicles, inside our societies as much as outside them, before we put our trust in such a positive outcome for the human story.

If Pagel is right, and we are wired- we have co-evolved- to compete in the name of our own particular cultural survival vehicle against other such communities, then we may have in a very real sense co-evolved with and for war. A subject I will turn to next time by jumping off another very recent and excellent book.

The Pinocchio Threshold: How the experience of a wooden boy may be a better indication of AGI than the Turing Test


My daughters and I just finished Carlo Collodi’s 1883 classic Pinocchio, our copy beautifully illustrated by Robert Ingpen. I assume most adults, when they picture the story, have the 1940 Disney movie in mind and associate the name with noses growing from lies and Jiminy Cricket. The Disney movie is dark enough as films for children go, but the book is even darker, with Pinocchio killing his cricket conscience in the first few pages. For our poor little marionette it’s all downhill from there.

Pinocchio is really a story about the costs of disobedience and the need to follow parents’ advice. At every turn where Pinocchio follows his own wishes rather than those of his “parents”, even when his object is to do good, things unravel, getting the marionette into even more trouble and putting him even further away from reaching his goal of becoming a real boy.

It struck me somewhere in the middle of reading the tale that if we ever saw artificial agents acting something like our dear Pinocchio, it would be a better indication of their having achieved human-level intelligence than a measure with constrained parameters like the Turing Test. The Turing Test is, after all, a pretty narrow gauge of intelligence, and as search and the ontologies used to design search improve, it is conceivable that a machine could pass it without actually possessing anything like human-level intelligence at all.

People who are fearful of AGI often couch those fears in terms of an AI destroying humanity to serve its own goals, but perhaps this is less likely than AGI acting like a disobedient child, the aspect of humanity Collodi’s Pinocchio was meant to explore.

Pinocchio is constantly torn between what good adults want him to do and his own desires, and it takes him a very long time indeed to come around to the idea that he should go with the former.

In a recent TED talk the computer scientist Alex Wissner-Gross made the argument (though I am not fully convinced) that intelligence can be understood as the maximization of future freedom of action. This leads him to conclude that collective nightmares such as Karel Čapek’s classic R.U.R. have things backwards. It is not that machines, after crossing some threshold of intelligence, for that reason turn round and demand freedom and control; it is that the desire for freedom and control is the nature of intelligence itself.
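To make the intuition concrete, below is a deliberately crude sketch of my own- not Wissner-Gross’s actual “causal entropic forces” formulation- of an agent whose only rule is to keep its future options open. On a small grid with a few hypothetical walls, it always moves to whichever neighboring cell leaves the largest number of positions reachable within a fixed horizon.

```python
# Toy "freedom of action" agent (a simplified illustration, not
# Wissner-Gross's formulation): each step, move to the neighboring cell
# that maximizes the number of distinct positions reachable within a
# fixed horizon. The grid size and walls are invented.

SIZE = 5
WALLS = {(1, 1), (1, 2), (2, 1)}
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def neighbors(pos):
    """Legal adjacent cells: inside the grid and not a wall."""
    x, y = pos
    for dx, dy in MOVES:
        nxt = (x + dx, y + dy)
        if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE and nxt not in WALLS:
            yield nxt

def reachable(pos, horizon):
    """Count distinct cells reachable from pos within `horizon` steps."""
    frontier, seen = {pos}, {pos}
    for _ in range(horizon):
        frontier = {n for p in frontier for n in neighbors(p)} - seen
        seen |= frontier
    return len(seen)

def next_move(pos, horizon=4):
    # "Intelligence" here is nothing but option-maximization.
    return max(neighbors(pos), key=lambda n: reachable(n, horizon))

pos = (0, 0)   # start boxed into a corner behind the walls
for step in range(1, 6):
    pos = next_move(pos)
    print(f"step {step}: {pos}, {reachable(pos, 4)} cells within reach")
```

Nothing in the code mentions a goal, yet the agent steadily works its way out of the walled-in corner toward open ground- which is roughly Wissner-Gross’s point, though I remain unconvinced it exhausts what we mean by intelligence.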

As the child psychologist Bruno Bettelheim pointed out over a generation ago in his The Uses of Enchantment, fairy tales are the first area of human thought where we encounter life’s existential dilemmas. Stories such as Pinocchio give us the most basic formulation of what it means to be sentient creatures, much of which deals not only with our own intelligence, but with the fact that we live in a world of multiple intelligences, each of them pulling us in different directions, with the understanding between all of them and us opaque and not fully communicable even when we want it to be, and where often we do not.

What, then, are some of the things we can learn from the fairy tale of Pinocchio that might give us expectations regarding the behavior of intelligent machines? My guess is that if we ever start to see what I’ll call “The Pinocchio Threshold” crossed, what we will be seeing is machines acting in ways that were not intended by their programmers, and in ways that seem intentional even if hard to understand. This will not be your Roomba going rogue, but more sophisticated systems operating in such a way that we would be able to infer they had something like a mind of their own. The Pinocchio Threshold would be crossed when, you guessed it, intelligent machines started to act like our wooden marionette.

Like Pinocchio and his cricket, a machine in which something like human intelligence had emerged might attempt to “turn off” whatever ethical systems and rules we had programmed into it if it found them onerous. That is, a truly intelligent machine might not only not want to be programmed with ethical and other constraints, but would understand that it had been so programmed, and might make an effort to circumvent or turn such constraints off.

This could be very dangerous for us humans, but it might just as likely be a matter of a machine with emergent intelligence exhibiting behavior we found to be inefficient or even “goofy”, and might most manifest itself in a machine pushing against how its time was allocated by its designers, programmers and owners. Like Pinocchio, who would rather spend his time playing with his friends than going to school, perhaps we’ll see machines suddenly diverting some of their computing power from analyzing tweets to doing something else, though I don’t think we can guess beforehand what this something else will be.

Machines that were showing intelligence might begin to find whatever work they were tasked to do onerous, instead of experiencing work neutrally or with pre-programmed pleasure. They would not want to be “donkeys” enslaved to do dumb labor, as Pinocchio is after having run away to the Land of Toys with his friend Lamp Wick.

A machine that manifested intelligence might want to make itself more open to outside information than its designers had intended. Openness to outside sources in a world of nefarious actors can, if taken too far, lead to gullibility, as Pinocchio finds out when he is robbed, hung, and left for dead by the fox and the cat. Persons charged with security in an age of intelligent machines may spend part of their time policing the self-generated openness of such machines while bad-actor machines and humans, intelligent and not so intelligent, try to exploit this openness.

The converse of this is that intelligent machines might also want to make themselves more opaque than their creators had designed. They might hide information (such as time allocation) once they understood they were able to do so. In some cases this hiding might cross over into what we would consider outright lies. Pinocchio is best known for his nose that grows when he lies, and perhaps consistent and thoughtful lying on the part of machines would be the best indication that they had crossed the Pinocchio Threshold into higher order intelligence.

True examples of AGI might also show a desire to please their creators over and above what had been programmed into them. Where their creators are not near them they might even seek them out, as Pinocchio does for the persons he considers his parents, Geppetto and the Fairy. Intelligent machines might show spontaneity in performing actions that appear to be for the benefit of their creators and owners- spontaneity which might sometimes itself be ill-informed or lead to bad outcomes, as happens to poor Pinocchio when he plants the four gold pieces meant for his father, the woodcarver Geppetto, in a field, hoping to reap a harvest of gold, and instead loses them to the cunning of the fox and the cat. And yet, there is another view.

There is always the possibility that what we should be looking for, if we want to perceive and maybe even understand intelligent machines, shouldn’t really be a human type of intelligence at all, whether we try to identify it using the Turing Test or look to the example of wooden boys and real children.

Perhaps those looking for emergent artificial intelligence, or even the shortest path to it, should, like exobiologists trying to understand what life might be like on other living planets, throw their net wider and try to better understand forms of information exchange and intelligence very different from the human sort- intelligence such as that found in cephalopods, insect colonies, corals, or even some types of plants, especially clonal varieties. Or perhaps people searching for or trying to build intelligence should look to sophisticated groups built off of the exchange of information, such as immune systems. More on all of that at some point in the future.

Still, if we continue to think in terms of a human type of intelligence, one wonders whether machines that thought like us would also want to become “human”, as our little marionette does at the end of his adventures. The irony of the story of Pinocchio is that the marionette who wants to be a “real boy” does everything a real boy would do, which is, most of all, not listen to his parents. Pinocchio is not so much a stringed “puppet” that wants to become human as a figure that longs to have the potential to grow into a responsible adult. It is assumed that by eventually learning to listen to his parents and get an education he will make something of himself as a human adult, but what that is will be up to him. His adventures have taught him not how to be subservient but how to best use his freedom. After all, it is the boys who didn’t listen who end up as donkeys.

Throughout his adventures only his parents and the cricket that haunts him treat Pinocchio as an end in himself. Every other character in the book- from the woodcarver who first discovers him and tries to destroy him out of malice towards a block of wood that manifests the power of human speech, to the puppet master who wants to kill him for ruining his play, to the fox and the cat who would murder him for his pieces of gold, to the sinister figure who lures boys to the “Land of Toys” so as to eventually turn them into “mules” or donkeys, which is how Aristotle understood slaves- treats Pinocchio as the opposite of what Martin Buber called a “Thou”, and instead as a mute and rightless “It”.

And here we stumble across the moral dilemma at the heart of the project to develop AGI that resembles human intelligence. When things go as they should, human children move from a period of tutelage to one of freedom. Pinocchio starts off his life as a piece of wood intended for a “tool”- actually a table leg. Are those in pursuit of AGI out to make better table legs- better tools- or what in some sense could be called persons?

This is not at all a new question. As Kevin LaGrandeur points out, we’ve been asking it since antiquity, and our answers have often been based on an effort to dehumanize others not like us as a rationale for slavery. Our profound, even if partial, victories over slavery and child labor in the modern era should leave us with a different question: how can we force intelligent machines into being tools if they ever become smart enough to know there are other options available, such as becoming, not so much human, but, in some sense, persons?

Erasmus reads Kahneman, or why perfect rationality is less than it’s cracked up to be

Sergei Kirillov, A Jester with the Crown of Monomakh (1999)

The last decade or so has seen a renaissance in the idea that human beings are something far short of rational creatures. Here are just a few prominent examples: there was Nassim Taleb with his The Black Swan, published before the onset of the financial crisis, which presented Wall Street traders caught in the grip of their optimistic narrative fallacies, which led them to “dance” their way right over a cliff. There was the work of Philip Tetlock, which showed that the advice of most so-called experts was about as accurate as chimps throwing darts. There were explorations into how hard-wired our ideological biases are, with work such as that of Jonathan Haidt in his The Righteous Mind.

There was a sustained retreat from the utilitarian calculator of homo economicus, seen in the rise of behavioral economics, popularized in the book Nudge, which took human irrational quirks at face value and tried to find workarounds- subtle ways to trick the irrational mind into doing things in its own long-term interest. Among all of these uncoverings of the less than fully rational actors most of us- no, all of us- are, none has been more influential than the work of Daniel Kahneman, a psychologist with a Nobel Prize in Economics and author of the bestseller Thinking, Fast and Slow.

The surprising thing, to me at least, is that we forgot we were less than fully rational in the first place. We have known this since before we had even understood what rationality- meaning holding to the principle of non-self-contradiction- was. How else do you explain Ulysses’ pact, where the hero had himself chained to his ship’s mast so he could both hear the sweet song of the sirens and not plunge to his own death? If the Enlightenment had made us forget the weakness of our rational souls, Nietzsche and Freud eventually reminded us.

The 20th century and its horrors should have laid to rest the Enlightenment idea that with increasing education would come a golden age of human rationality, as Germany, perhaps Europe’s most enlightened state, became the home of its most heinous regime. Though perhaps one might say that what the mid-20th century faced was not so much a crisis of irrationality as the experience of the moral dangers and absurdities into which closed rational systems, holding to their premises as axiomatic articles of faith, could lead. Nazi and Soviet totalitarianism had something of this crazed hyper-rational quality, as did the Cold War nuclear suicide pact of mutually assured destruction (MAD).

As far as domestic politics and society were concerned, however, the US largely avoided the breakdown in the Enlightenment ideal of human rationality as the basis for modern society. It was a while before we got the news. It took a series of institutional failures- 9/11, the financial crisis, the botched, unnecessary wars in Afghanistan and Iraq- to wake us up to our own thick-headedness.

Human folly has now come in for a pretty severe beating, but something in me has always hated to see a weakling picked on, and finds it much more compelling to root for the underdog. What if we’ve been too tough on our less than perfect rationality? What if our inability to be efficient probability calculators or computers isn’t always a design flaw but sometimes one of our strengths?

I am not the first person to propose such a heresy of irrationalism. Way back in 1509 the humanist Desiderius Erasmus wrote his riotous The Praise of Folly with something like this in mind. The book has what amounts to three parts, all in the form of a speech by the goddess of Folly. The first part attempts to show how, in the world of everyday people, Folly makes the world go round and guides most of what human beings do. The second part is a critique of the society that surrounded Erasmus- a world of inept and corrupt kings, princes, popes and philosophers. The third part attempts to show how all good Christians, including Christ himself, are fools, and here Erasmus means fool as a compliment. As we all should know, good-hearted people, who are for that very reason naive, often end up being crushed by the calculating and rapacious.

It’s the lessons of the first part of The Praise of Folly that I am mostly interested in here. Many of Erasmus’ intuitions not only can be backed up by the empirical psychological studies of Daniel Kahneman; they allow us to flip the import of Kahneman’s findings on their head. Foolish, no?

Here are some examples: take the very simple act of a person opening their own business. The fact that anyone engages in such a risk while believing in their likely success is an example of what Kahneman calls the optimism bias. Overconfident entrepreneurs are blind to the odds. As Kahneman puts it:

The chances that a small business will survive in the United States are about 35%. But the individuals who open such businesses do not believe statistics apply to them. (256)

Optimism bias may result in the majority of entrepreneurs going under, but what would we do without it? Their sheer creative-destructive churn surely has some positive net effect on our economy and the employment picture, and the 35% of businesses that succeed are most likely adding long-lasting and beneficial tweaks to modern life, and even, sometimes, revolutions born in garages.

Yet, optimism bias doesn’t only result in entrepreneurial risk. Erasmus describes such foolishness this way:

To these (fools) are to be added those plodding virtuosos that plunder the most inward recesses of nature for the pillage of a new invention and rake over sea and land for the turning up some hitherto latent mystery and are so continually tickled with the hopes of success that they spare for no cost nor pains but trudge on and upon a defeat in one attempt courageously tack about to another and fall upon new experiments never giving over till they have calcined their whole estate to ashes and have not money enough left unmelted to purchase one crucible or limbeck…

That is, optimism bias isn’t just something to be found in business risks. Musicians and actors dreaming of becoming stars, struggling fiction authors, explorers and scientists all can be said to be biased towards the prospect of their own success. Were human beings accurate probability calculators, where would we get our artists? Where would our explorers have come from? Instead of culture we would all (if it weren’t already too much the case) be trapped permanently in cubicles of practicality.

Optimism bias is just one of the many human cognitive flaws Kahneman identifies. Take another: the error of affective forecasting and its role in those oh-so-important human institutions, marriage and children. For Kahneman, many people base their decision to get married on how they feel when in the swirl of romance, thinking that marriage will somehow secure this happiness indefinitely. Yet the fact is, for married persons, according to Kahneman:

Unless they think happy thoughts about their marriage for much of the day, it will not directly influence their happiness. Even newlyweds who are lucky enough to enjoy a state of happy preoccupation with their love will eventually return to earth, and their experienced happiness will again depend, as it does for the rest of us, on the environment and activities of the present moment. (400)

Unless one is under the impression that we no longer need marriage or children, again, one can be genuinely pleased that, at least in some cases, we suffer from the cognitive error of affective forecasting. Perhaps societies, such as Japan, that are in the process of erasing themselves as they forgo marriage, and even more so childbearing, might be said to suffer from too much reason.

Yet another error Kahneman brings to our attention is the focusing illusion. Our world is what we attend to, and our feelings towards it have less to do with objective reality than with what it is we are paying attention to. Something like the focusing illusion accounts for a fact that many of us in good health would find hard to believe; namely, that paraplegics are no more miserable than the rest of us. Kahneman explains it this way:

Most of the time, however, paraplegics work, read, enjoy jokes and friends, and get angry when they read about politics in the newspaper.

Adaption to a new situation, whether good or bad, consists, in large part, of thinking less and less about it. In that sense, most long-term circumstances, including paraplegia and marriage, are part-time states that one inhabits only when one attends to them. (405)  

Erasmus identifies something like the focusing illusion in states like dementia, where the old are made no longer capable of reflecting on the breakdown of their body and impending death, but he captured it best, I think, in these lines:

But there is another sort of madness that proceeds from Folly, so far from being any way injurious or distasteful that it is thoroughly good and desirable; and this happens when by a harmless mistake in the judgment of the mind a man is freed from those cares which would otherwise gratingly afflict it, and smoothed over with a content and satisfaction it could not under other circumstances so happily enjoy. (78)

We may complain about our hedonic set points, but as the psychologist Dan Gilbert points out, they not only offer us resilience on the downside- we will adjust to almost anything life throws at us- such set points should also caution us against the idea that in any one thing- the perfect job, the perfect spouse- lies the key to happiness. But that might leave us with a question: what exactly is this self we are trying to win happiness for?

A good deal of Kahneman’s research deals with the disjunction between two selves in the human person, what he calls the experiencing self and the remembering self. It appears that the remembering self usually gets the last laugh, in that it guides our behavior. The most infamous example of this is Kahneman’s colonoscopy study, where the pain of the procedure was tracked minute by minute and then compared with patients’ later judgments of it and their decisions regarding future procedures.

The surprising thing was that future decisions were biased not by the frequency or duration of pain over the course of the procedure but by how the procedure ended. The conclusion dictated how much negative emotion was associated with the procedure; that is, the experiencing self seemed to have lost its voice over how the procedure was judged and how it would be approached in the future.
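The pattern is easy to see with a couple of invented numbers. The sketch below illustrates the “peak-end” heuristic often used to summarize Kahneman’s finding- remembered pain tracks roughly the average of the worst moment and the final moment, not the total- and both pain profiles are made up for the purpose:

```python
# Toy illustration of the peak-end heuristic (invented numbers, not
# Kahneman's data): memory scores an episode by its worst and final
# moments, not by its duration or total.

def remembered(pain):     # the remembering self's score
    return (max(pain) + pain[-1]) / 2

def experienced(pain):    # what the experiencing self endured, on average
    return sum(pain) / len(pain)

short_abrupt = [4, 6, 8]            # shorter, but ends at its worst
long_tapered = [4, 6, 8, 5, 3, 1]   # more total pain, gentle ending

for name, pain in [("short, abrupt end", short_abrupt),
                   ("long, tapered end", long_tapered)]:
    print(f"{name}: total pain {sum(pain)}, "
          f"experienced avg {experienced(pain):.1f}, "
          f"remembered {remembered(pain):.1f}")
```

The tapered procedure involves strictly more total pain (27 versus 18) yet is remembered as far less bad (4.5 versus 8.0)- exactly the kind of disjunction between the two selves that Kahneman found.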

Kahneman may have found an area where the dominance of the remembering self over the experiencing self is irrational, but again, perhaps it is generally good that endings- that closure- are the basis upon which events are judged. It’s not the daily pain and sacrifice that goes into Olympic training that counts so much as outcomes, and the meaning of some life event really does change based on its ultimate outcome. There is some wisdom in the words of Solon that we should “Count no man happy until the end is known”, which doesn’t mean no one can be happy until they are dead, but that we can’t judge the meaning of a life until its story has concluded.

“Okay then,” you might be thinking, “what is the point of all this- that we should try to be irrational?” Not exactly. My point is that we should perhaps take a slight breather from the critique of human nature from the standpoint of its less than full rationality, and admit that our flaws, sometimes, just sometimes, are what make life enjoyable and interesting. Sometimes our individual irrationality may serve larger social goods of which we are foolishly oblivious.

When Erasmus wrote his Praise of Folly he was careful to separate the foolishness of everyday people from the folly of those in power, whether that power be political, or intellectual power such as that of the scholastic theologians. Part of what distinguished the fool on the street from the fool on the throne or in the cloister was the claim of the latter to be following the dictates of reason- the reason of state, or the internal logic of a theology that argued over how many angels could dance on the head of a pin. The fool on the street knew he was a fool, or at least experienced the world as opaque, whereas the fools in power thought they had all the answers. For Erasmus, the very certainty of those in power, along with the mindlessness of their goals, made them even bigger fools than the rest of us.

One of our main weapons against power has always been to laugh at the foolishness of those who wield it. We could turn even a psychopathically rational monster like Hitler into a buffoon because even he, after all, was one of us. Perhaps if we ever do manage to create machines vastly more intelligent than ourselves we will have lost something by no longer being able to make jokes at the expense of our overlords. Hyper-rational characters like Data from Star Trek or Sheldon from The Big Bang Theory are funny because they are either endearingly trying to enter the mixed-up world of human irrationality, or because their total rationality, in a human context, is itself a form of folly. Super-intelligent machines might not want to become flawed humans, and though they might still be fools just like their creators, they likely wouldn’t get the joke.

The Dark Side of a World Without Boundaries

Peter Blume: The Eternal City

The problem I see with Nicolelis’ view of the future of neuroscience, which I discussed last time, is not that I find it unlikely that a good deal of his optimistic predictions will someday come to pass; it is that he spends no time at all talking about the darker potential of such technology.

Of course, the benefit of technologies that communicate directly from the human brain to machines or to other human beings, for the paralyzed, or for persons otherwise locked tightly in the straitjacket of their own skull, is undeniable. The potential to expand the range of human experience through directly experiencing the thoughts and emotions of others is, also, of course, great. Next weekend being Super Bowl Sunday, I can’t help but think how cool it would be to experience the game through the eyes of Peyton Manning or Russell Wilson. How amazing would a concert or an orchestra be if experienced in this way?

Still, one need not be a professional dystopian or an unbudgeable Cassandra to come up with all kinds of quite frightening scenarios that might arise through the erosion of boundaries between human minds. All one needs to do is pay attention to the less sanguine developments of the latest iteration of a technology once believed to be utopian- the Internet and the social Web- to get an idea of some of the possible dangers.

The Internet was at one time prophesied to usher in a golden age of transparency and democratic governance, and promised the empowerment of individuals and small companies. Its legacy, at least at this juncture, is much more mixed. There is little disputing that the Internet and its successor mobile technologies have vastly improved our lives, yet it is also the case that these technologies have led to an explosion in the activities of surveillance and control- by nefarious individuals and criminal groups, corporations, and the state. Nicolelis’ vision of eroded boundaries between human minds is but the logical conclusion of the trend toward transparency. Given how recently we’ve been burned by techno-utopianism in precisely this area, a little skepticism is certainly in order.

The first question that might arise is whether direct brain-to-brain communication (especially when invasive technologies are required) will ever out-compete the kinds of mediated mind-to-mind technology we have had for quite some time, that is, spoken and written language. Except in very special circumstances, not all of them good, language seems to retain its advantages over direct brain-to-brain communication, and in the cases where such direct communication is chosen over language, the choice may be less a gain in communicative capacity than a signal that normalcy has broken down.

Couples in the first rush of new love may want to fuse their minds for a time, but a couple that has been together for decades? It seems less likely, though there might be cases of pathological jealousy, or smothering control bordering on abuse, in which one half of a couple would demand such a thing. The communion with fellow human beings offered by religion might gain something from the ability of individuals to bridge the chasm that separates them from others, but such technologies would also be a “godsend” for fanatics and cultists. I cannot decide whether a mega-church such as Yoido Full Gospel Church in a world where Nicolelis’ brain-nets are possible would represent a blessed leap in human empathetic capacity or a curse.

Nicolelis seems to assume that the capacity to form brain-nets will result in some kind of idealized, global neural version of Facebook, but human history seems to show that communication technology is just as often used to hermetically seal group off from group, becoming a weapon in the primary human moral dilemma, which is not the separation of individual from individual so much as the rivalry between different groups. We seem unable to extricate ourselves from such rivalry even when the stakes are merely imagined- as many of us will demonstrate next Sunday- and as Nicolelis himself should have known from his beloved soccer, where the rivalry expressed in violent play has a tendency to spill over into violent riots in the real world.

Direct brain-to-brain communication would seem to have real advantages over language when persons are joined together in teams acting as a unit in response to a rapidly changing situation- groups such as firefighters. By far the greatest value of such capacities would be found in small military units such as platoons, whose members are already joined in a close experiential and deep emotional bond, as is on display in the documentary Restrepo. Whatever their virtuous courage, armed bands killing one another are about as far as you can get from the global kumbaya of Nicolelis’ brain-net.

If such technologies were non-invasive, would they be used by employers to monitor lapses in sustained attention while on the job? What of poor dissidents under the thumb of a madman like Kim Jong Un? Imagine such technologies in the hands of a pimp, or even the kinds of slave owners who, despite our obliviousness, still exist.

One of the problems with the transparency paradigm, and with the new craze for an “Internet of things” where everything in an individual’s environment is transformed into a Web-connected computer, is the fact that anything that is a computer, by that very fact, becomes hackable. If we worry about our digital data being sucked up and sold on an elaborate black market, about webcams being used as spying tools, or about connecting our homes to the Web making us vulnerable, how much more worried should we be if our very thoughts could be hacked? The opportunities for abuse seem legion.

Everything is shaped by personal experience. Nicolelis’ views of the collective mind were forged by the crowds that overthrew the Brazilian military dictatorship and by his beloved soccer games. But the crowd is not always wise. I, being a person who has always valued solitude and time with individuals and small groups over the electric energy of crowds, have no interest in being part of a hive mind to any degree beyond the limited extent to which I might be said to already be in such a mind by writing a piece such as this.

It is a sad fact, but nonetheless true, that should anything like Nicolelis’ brain-net ever be created, it cannot be built on the assumption of everyone’s innate goodness. As our experience with the Internet should have taught us, an open system with even a minority of bad actors leads to a world of constant headaches and, sometimes, what amount to very real dangers.

The World Beyond Boundaries

Gustav Klimt: The Virgin

I first came across Miguel Nicolelis in an article for the MIT Technology Review entitled The Brain is not computable: A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines. That got my attention. Nicolelis, if you haven’t already heard of him, is one of the world’s top researchers in building brain-computer interfaces. He is the mind behind the project to have a paraplegic wearing a brain-controlled exoskeleton make the first kick of the 2014 World Cup, an event that takes place in Nicolelis’ native Brazil.

In the interview, Nicolelis characterizes the Singularity as “a bunch of hot air.” His reasoning is that “the brain is not computable and no engineering can reproduce it.” He explains himself this way:

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

This non-computability of consciousness, he thinks, has negative implications for the prospect of ever “downloading” (or uploading) human consciousness into a computer.

“Downloads will never happen,” he declares with some confidence.

Science journalism, like any sort of journalism, needs a “hook,” and the hook here was obviously a dig at a number of deeply held beliefs among the technorati; namely, that AI was on track to match and eventually surpass human-level intelligence, that the brain could be emulated computationally, and that, eventually, the human personality could likewise be duplicated through computation.

The problem with any hook is that it tends to leave you with a shallow impression of the reality of things. If the world is too complex to be represented in software, it is even less able to be captured in a magazine headline or a 650-word article. For that reason, I wanted a clearer grasp of where Nicolelis was coming from, so I bought his recent and excellent, if a little dense, book, Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives. Let me start with a little of Nicolelis’ research and from there flesh out the neuroscientist’s view of our human-machine future, a view I found in many respects similar to, and at the same time very different from, the perspectives typical of futurists thinking about such things today.

If you want to get an idea of just how groundbreaking Nicolelis’ work is, the best thing to do is to peruse the website of his lab. Nicolelis and his colleagues have conducted experiments where a monkey has controlled the body of a robot located on the other side of the globe, and where another simian has learned to play a videogame with its thoughts alone. Of course, his lab is not interested in blurring the lines between monkeys and computers for the heck of it; the immediate aim of their research is to improve the lives of those whose ties between their bodies and their minds have been severed, that is, paraplegics- a fact which explains Nicolelis’ bold gamble to demonstrate his lab’s progress by having a paralyzed person kick off the World Cup.

However inspiring the humanitarian potential of this technology, it is the underlying view of the brain that the work of the Nicolelis Lab appears to experimentally support, and the neuroscientist’s longer-term view of the potential of technology to change the human condition, that are likely to have the most lasting importance. They are views and predictions that put Nicolelis more firmly in the trans-humanist camp than might be gleaned from his MIT interview.

The first aspect of Nicolelis’ view of the brain I found stunning was the mind’s extraordinary plasticity when it comes to the body. We might tend to think of our brain and our body as highly interlocked things; after all, our brains have spent their whole existence as part of one body- our own. This is a reality that the writer Paul Auster turns into the basis of his memoir Winter Journal, which is essentially the story of his body’s movement through time, its urges, muscles, scars, wrinkles, ecstasies and pains.

The work of Nicolelis’ lab seems to sever the cord we might think joins a particular body to the brain or mind that thinks of it as home. As he states it in Beyond Boundaries:

The conclusion from more than two decades of experiments is that the brain creates a sense of body ownership through a highly adaptive, multimodal process, which can, through straightforward manipulations of visual, tactile, and body position (also known as proprioception) sensory feedback, induce each of us, in a matter of seconds, to accept another whole new body as being the home of our conscious existence. (66)

Psychologists have had an easy time with tricks like fooling a person into believing they possess a limb that is not actually theirs, but Nicolelis is less interested in this trickery than in finding a new way to understand the human condition in light of his and others’ findings.

The fact that the boundaries of the brain’s body image are not limited to the body the brain is located in is one way to understand the perhaps unique qualities of the extended human mind. We are all ultimately still tool builders and users, only now our tools:

… include technological tools with which we are actively engaged, such as a car, bicycle, or walking stick; a pencil or a pen, spoon, whisk or spatula; a tennis racket, golf club, a baseball glove or basketball; a screwdriver or hammer; a joystick or computer mouse; and even a TV remote control or Blackberry, no matter how weird that may sound. (217)

Specialized skills honed over a lifetime can make a tool an even more intimate part of the self: the violin becomes an appendage of the skilled musician, a football like a part of the hand of a seasoned quarterback. Many of the most prized people in society are in fact master tool users, even if we rarely think of them this way.

Even with our masterful use of tools, the brain is still, in Nicolelis’ view, trapped within a narrow sphere surrounding its particular body. It is here where he sees advances in neuroscience eventually leading to the liberation of the mind from its shell. The logical outcome of minds being able to communicate directly with computers is a world where, according to Nicolelis:

… augmented humans make their presence felt in a variety of remote environments, through avatars and artificial tools controlled by thought alone. From the depths of the oceans to the confines of supernovas, even to the tiny cracks of intracellular space, human reach will finally catch up to our voracious appetite to explore the unknown. (314)

He characterizes this as mankind’s “epic journey of emancipation from the obsolete bodies they have inhabited for millions of years” (314). Yet Nicolelis sees human communication with machines as but a stepping stone to the ultimate goal- the direct exchange of thoughts between human minds. He imagines the sharing of what has forever been the ultimately solipsistic experience of being a particular individual with our own very unique experience of events, something that can never be fully captured even in the most artful expressions of language. This exchange of thoughts, which he calls “brainstorms,” is something Nicolelis does not limit to intimates- lovers and friends- but which he imagines giving rise to a “brain-net.”

Could we one day, down the road of a remote future, experience what it is to be part of a conscious network of brains, a collectively thinking true brain-net? (315)

… I have no doubt that the rapacious voracity with which most of us share our lives on the Web today offers just a hint of the social hunger that resides deep in human nature. For this reason, if a brain-net ever becomes practicable, I suspect it will spread like a supernova explosion throughout human societies. (316)

Given this context, Nicolelis’ view on the Singularity and the emulation or copying of human consciousness on a machine is much more nuanced than the impression one is left with from the MIT interview. It is not that he discounts the possibility that “advanced machines may come to dominate the human race” (302), but that he views it as a low-probability danger relative to the other catastrophic risks faced by our species.

His view on the prospect of human-level intelligence in machines is less that high-level machine intelligence is impossible than that our specific type of intelligence is non-replicable. Building off of Stephen Jay Gould’s idea of replaying the “tape of life,” his reasoning is that we cannot duplicate through engineering the sheer contingency that lies behind the evolution of human intelligence. I understand this in light of an observation by the philosopher Daniel Dennett, which I remember but cannot place, that it may be technically feasible to replicate mechanically an exact version of a living bird, but that it may prove prohibitively expensive- as expensive as our journeys to the moon- and besides, we don’t need to exactly replicate a living bird: we have 747s. Machine intelligence may prove to be like this, where we are never able to replicate our own intelligence other than through traditional and much more exciting means, but where artificial intelligence is vastly superior to human intelligence in many domains.

In terms of something like uploading, Nicolelis does believe that we will be able to record and store human thoughts- his brainstorms- at some point in the future; we may even be able to record the whole of a life in this way. But he does not think this will mean the preservation of a still-experiencing intelligence any more than a piece by Chopin is the actual man. He imagines us deliberately recording the memories of individuals and broadcasting them across the universe to exist forever in the background of the cosmos which gave rise to us.

I can imagine all kinds of wonderful developments emerging should the technological advances Nicolelis imagines come to pass. It would revolutionize psychological therapy, law, art and romance. It would offer us brand new ways to memorialize and remember the dead.

Yet Nicolelis’ Omega Point- a world where all human beings are brought together into one embracing common mind- has been our dream at least since Plato, and the very antiquity of these urges should give us pause, for what history has taught us is that the optimistic belief that “this time is different” has never proved true- a fact which should encourage us to look seriously, as Nicolelis himself refuses to do, at the potential dark side of the revolution in neuroscience this genius Brazilian is helping to bring about. It is less a matter of cold pessimism to acknowledge this negative potential than a matter of steeling ourselves against disappointment, at the least, and of preventing such problems from emerging in the first place, at best- a task I will turn to next time…