Return to the Island of Dr. Moreau

The Island of Dr. Moreau

Sometimes a science-fiction novel achieves the impossible and actually succeeds in reaching out and grasping the future, anticipating its concerns, grappling with its possibilities, wrestling with its ethical dilemmas. H.G. Wells’ short 1896 novel, The Island of Dr. Moreau, is like that. The work manages to be relevant to our own time not least because the scientific capabilities Wells imagined in the novel have only begun to be possible today, and will become only more so going forward. The ethical territory he identified with his strange little book is territory we are likely to be increasingly called upon to chart our own course through.

The novel starts with a shipwreck. Edward Prendick, an Englishman who we later learn studied under the Darwinian Thomas Huxley, is saved at sea by a man named Montgomery. Montgomery, along with his beast-like servant M’ling, is a passenger aboard a ship shepherding a cargo of exotic animals, including a puma, to an isolated island. Prendick is a man whose only kind of luck seems to be bad; he manages to anger the captain of the ship and is left to his fate on the island with Montgomery, a man in the service of a mad scientific genius- Dr. Moreau.

Prendick soon discovers that his new island home is a house of horrors. Moreau is a “vivisectionist”, meaning he conducts experiments on live animals, and does so without any compunction as to the pain these animals experience. The natural sounds of the island are drowned out by the cries of suffering Prendick hears from the puma being surgically transformed into a “man” at the hands of Moreau. This is Moreau’s goal: chased out of civilization for pursuing his mission, he uses the skills of vivisection and hypnosis to turn his poor animals into something like human beings, giving them not only humanoid forms but human abilities such as speech.

The list of the “beast folk” transformed in this way includes not only M’ling and the puma but other creatures trapped in the space between human beings and their original animal selves, or blends of purely animal forms. There is the Leopard-Man and the Ape-Man, and the satanic-looking Satyr-Man, a humanoid formed from a goat. There is the smallish Sloth-Man, who resembles a flayed child. Two characters, the Sayer of the Law and the Fox-Bear-Witch, revere and parrot the “Law” of their creator Moreau, which amounts to commandments to act like human beings, or at least not to act like animals: “Not to go on all Fours”, “Not to suck up Drink”, “Not to claw Bark of Trees”, “Not to chase other Men” (81)

There is also the chimeric Hyena-Swine, the only animal that seems able to deliberately flout the commandments of Moreau, and which becomes the enemy of Prendick after the death of the mad scientist at the hands of his last creation- the puma-man. Montgomery, giving in to his own animal instincts, dies shortly after his master’s demise in the midst of a mad alcoholic revelry with the Beast Men.

Prendick escapes the island, before the Hyena-Swine and his allies are able to kill him, by setting sail in a small boat that drifts ashore carrying the dead bodies of the captain who had abandoned him on the island and one of his shipmates. Eventually the scarred castaway makes his way back to England, where his experience seems to have shattered his connection with his fellow human beings, who now appear to Prendick as little more than Beast Men in disguise.

Why did Wells write this strange story? In part, it grew out of an emerging consciousness of cruelty to animals and calls for animal rights. As mentioned, Moreau is an expert in the practice of “vivisection”, an old term that means simply experimentation on live animals, whether the animals are anesthetized or not. He considers the whole question of the pain experienced by his animals a useless hindrance to the pursuit of knowledge, one he needed to overcome to free “the colorless light of these intellectual desires” from the fetters of empathy. The animals he experiments upon are no longer “a fellow creature, but a problem”. Free to pursue his science on his island, he is “never troubled by matters of ethics”, and notes that the study of nature can only lead to one becoming “as remorseless as Nature” itself. (104)

It is often thought that the idea of animal rights first emerged in the 1970s with the publication of the philosopher Peter Singer’s Animal Liberation. Yet the roots of the movement can be traced back nearly a century earlier, to the response to the practice of vivisection. Within two years of Wells’ story the British Union for the Abolition of Vivisection would be founded- a society that has advocated the end of animal experimentation ever since. In the years immediately following the publication of The Island of Dr. Moreau, protests began erupting around experiments on live animals, especially dogs.

It is almost as if the moment the idea of common kinship between humans and animals entered the public mind, with the publication of Darwin’s On the Origin of Species in 1859, the idea of treating animals “humanely” gathered moral force. Yet, however humane and humanistic a figure Darwin himself was, and on both counts there is no doubt, his ideas also left room for darker interpretations. The ethical question opened up by Darwin was whether the fact that humankind shared common kinship with other animals meant that animals should be treated more like human beings, or whether such kinship offered justification for treating fellow human beings as we had always treated animals. The Origin of Species opened up all sorts of anxieties surrounding the narrowness of the gap between human beings and animals. It became hard to see which way humanity was going.

The Island of Dr. Moreau gives us a firsthand experience of this vertigo. Prendick initially thinks Moreau is transforming human beings into animals, and only later finds out that he is engaged in a form of uplift. Moreau wants to find out how difficult it would be to turn an animal into something like a human being- whether the gap can be bridged from below. However, the Beast Men Moreau creates inevitably end up slipping back into their animal natures- the downward pressures are very strong- and this suggests to Prendick that humankind itself stands on a razor’s edge between civilization and the eruption of animal instincts.

This idea of evolution’s degenerative potential was one of the deadliest memes ever to emerge from Western civilization. It should come as no surprise that ideas about the de-generation of the “race” would become the putrid intellectual mother’s milk on which Adolf Hitler, born only seven years before the publication of The Island of Dr. Moreau, was raised. The anxiety that humanity might evolve “backward”, which came with a whole host of racially charged assumptions, would be found in the eugenics movements and anti-immigrant sentiments of both Great Britain and the United States in the early 20th century, in the decades following the publication of Wells’ novel. It was the Nazis, of course, who took this idea of degenerative evolution to its logical extreme, using this fear as justification for mass genocide.

It’s not that no one in the early 20th century held the idea that the course of evolution might be progressive, at least once one stepped aside from evolution controlled by nature and introduced human agency. Leon Trotsky famously made predictions about the Russian Revolution that sound downright transhumanist, such as:

Man will make it his purpose to master his own feelings, to raise his instincts to the heights of consciousness, to make them transparent, to extend the wires of his will into hidden recesses, and thereby to raise himself to a new plane, to create a higher social biologic type, or, if you please, a superman.

Yet around the same time Trotsky was predicting the arrival of the New Soviet Man, the well-respected Soviet scientist Il’ya Ivanov tried to move the arrow backward. Ivanov came close to carrying out an experiment to see whether African women, who were not to be informed of the nature of the experiment, could be inseminated with the sperm of chimpanzees. The idea that Stalin had hatched this plan to create a race of ape-men super-soldiers is a myth, one often played up by modern religious opponents of Darwin’s ideas regarding human evolution. But what it does show is that Wells’ Dr. Moreau was no mere anti-scientific fantasy but a legitimate fear regarding the consequences of Darwin’s discovery- that the boundary between human beings and animals was the thinnest of ice, and that we were standing on it.

Yet the real importance of the questions raised by The Island of Dr. Moreau would have to wait over a century, until our own day, to truly come to the fore. This is because the kind of sovereignty over animals, in terms of their composition and behavior, that Moreau exercised really wasn’t possible until we not only understood the instructions that guide the development and composition of life, something we didn’t grasp until the discovery of the hereditary role of DNA in 1952, but also learned how to actually manipulate this genetic code, something we have only begun to master in our own day.

Another element of the true import of the questions Wells posed would have to wait until recently, when we developed the capacity to manipulate the neural processes of animals. As mentioned, in the novel Moreau uses hypnotism to get his animals to do what he wants. We surely find this idea silly, conjuring up images of barnyard beasts staring mindlessly at swinging pendulums. Yet the meaning of the term hypnotism in The Island of Dr. Moreau would be better expressed by our modern term “programming”. This was what the 19th century mistakenly thought it had stumbled across when it invented the pseudo-science of hypnotism- a way to circumvent the conscious mind and tap into a hidden unconscious in such a way as to provide instructions for how the hypnotized should act. This too is something we have only been able to achieve now that microelectronics have shrunk to a small enough size that they and their programs can be implanted into animals without killing them.

Emily Anthes’ balanced and excellent book Frankenstein’s Cat gives us an idea of what both this genetic manipulation and the programming of animals through bio-electronics look like. According to Anthes, the ability to turn genes on and off has given us an enormous capacity to play with the developmental trajectory of animals such as that poor scientific workhorse- the lab mouse. We can make mice that suffer human-like male-pattern baldness, can only make left turns, or grow tusks. In the less funny department, we can make mice riddled with skin tumors, or that suffer any number of other human diseases, merely by “knocking out” one of their genes. (FC 3-4)

As Anthes lays out, our increasing control over the genetic code of all life gives us powers over animals (not to mention the other kingdoms of living things) that are by turns trivial, profound, and sometimes disturbing. We can insert genes from one species into another that could never mix under natural circumstances- so that fish carrying coral genes can be made to glow under certain wavelengths of light, and we can do the same with the button noses of cats, or even with ourselves if we wanted to be a hit at raves. We can manipulate the genes of dairy animals so that they produce life-saving pharmaceuticals in their milk, or harvest spider’s silk from udders. We have inserted the gene for human growth hormone into pigs to make them leaner- an experiment that resulted in devastating diseases for the pigs concerned and a large dose of yuck factor.

Anthes also shows us how we are attempting to combine the features of our silicon-based servants with those of animals, creating creatures such as remote-controlled rats through the insertion of microelectronics. The new field of optogenetics gives us amazing control over the actions of animals, using light to turn neurons on and off- something that one of the founders of the science, Karl Deisseroth, thinks raises a whole host of ethical and philosophical questions we need to deal with now, as it becomes possible to use optogenetics with humans. These cyborg experiments have even gone D.I.Y. The company Backyard Brains sells a kit called the RoboRoach that allows amateur neuroscientists to turn a humble roach into their own personal insect cyborg.

The primary motivation of Wells’ Dr. Moreau was the discovery of knowledge whatever its cost. It’s hard to see his attempt to turn animals into something with human characteristics as of any real benefit to the animals themselves, given the pain they had to suffer to get there, and Moreau is certainly not interested in any benefit his research might bring to human beings either. Our own Dr. Moreaus (and certainly many of the poor creatures subject to our experiments would see them that way) are, by contrast, driven not just by human welfare but, as Anthes is at pains to point out, by animal welfare as well. If we play our genetic cards right, the kinds of powers we are gaining might allow us to save our beloved pets from painful and debilitating diseases, lessen the pain of animals we use for consumption or sport, revive extinct species, or shield at least some species from the Sixth Great Extinction which human beings have unleashed. On the latter issue, there is always a danger that we will see these gene-manipulating powers as an easy alternative to the really hard choices we have to make regarding the biosphere, but at the very least our mastery of genetics and related areas such as cloning should allow us to mitigate at least some of the damage we are doing.

Anthes wants to steer us away from moral absolutes regarding biological technologies. Remote-controlled rats would probably not be a good idea, either for the rats or for other people, were they sold as “toys” for my 12-year-old nephew. The humane use of such cyborg creations to find earthquake victims, even if disturbing on its face, is another matter entirely.

At one point in The Island of Dr. Moreau, Wells’ mad scientist explains himself to Prendick with the chilling words:

I wanted- it was the only thing I wanted- to find out the extreme limit of plasticity in a living shape. (104)
Again, the achievement of the plasticity Dr. Moreau longed for had to wait until the mastery of genetics. In addition to Anthes’ Frankenstein’s Cat, this newfound technological power is explored in another excellent recent book, this time by the geneticist George Church. In Regenesis, Church walks us through the revolution in molecular biology, and especially the emerging field of synthetic biology. The potential implied in our ability to re-engineer life at the genetic level, or to design life almost from scratch on the genetic plane, is quite simply astounding.

According to Church, we could redesign the true overlords of nature- bacteria- to do an incredible number of things, including creating our fuel, growing our homes, and dealing with major challenges such as climate change. We could use synthetic bacteria to create our drugs and chemicals and put them to work as true nanotech assemblers. We could design “immortal” human organs and restore the natural environment to its state before the impact of our dominant and destructive species, including resurrecting species that have gone extinct by reverse-engineering them.

Some of Church’s ideas might be scientifically sound but nonetheless appear fanciful, including his idea of creating a species of “mirror” humans whose cellular handedness is reversed. Human beings with reversed cellular handedness would be essentially immune from the scourge of viruses, because these pathogens rely on shared handedness to hijack human cells. Aside from the logistics, part of the problem is that a human being possessing the old handedness would be unable to have children with a person possessing the new version. Imagine that version of Romeo and Juliet!

Church’s projections and proposals are all pretty astounding, and none, to me at least, raises red flags in an ethical sense except for one, in which he goes full Dr. Moreau on us. Church proposes that we resurrect Neanderthals:

If society becomes comfortable with human cloning and sees value in human diversity, then the whole Neanderthal creature itself could be cloned by a surrogate mother chimp- or by an extremely adventurous female human. (10)

The immediate problem I see here is not so much that the Neanderthal is brought back into existence as that Church proposes the mother of this “creature” (why does he use this word and not the more morally accurate term- person?) should be a chimp. Nothing against chimpanzees, but Neanderthals are better thought of as an extinct variant of ourselves rather than as a different species, the clearest evidence of which is that Homo sapiens and Neanderthals could produce viable and fertile offspring, and we still carry the genes from such interbreeding. Neanderthals are close enough to us- they buried their dead surrounded by flowers, after all- that they should be considered kin, and were such a project ever to take place they should be carried by a human being and raised as one of us, with all of our rights and possibilities.

Attempting to bring back other hominids, such as Homo habilis or Australopithecus, would raise similar Moreau-type questions, and might be more in line with what Church suggests for the resurrection of Neanderthals. Earlier hominids are distinct enough from modern humans that their full integration into human society would be unlikely. The question then becomes: what kind of world are we bringing these beings into? For it is certainly not our own.

One of the ethical realities The Island of Dr. Moreau is excellent at revealing is how attempts to change animals into something more like ourselves might create creatures that are themselves shipwrecked, forever cut off both from the life of their own species and from the human world into which we have attempted to bring them. Like modern humans, earlier hominids seem to have been extremely gregarious and dependent upon rich social worlds. Unless we could bring a whole society of them into existence at once, we might be in danger of creating extremely lonely and isolated individuals, unable to find a natural and happy place in the world.

Reverse-engineering the early hominids by manipulating the genomes of the higher primates, including our own, might be the shortest path to the uplift of animals such as the chimpanzee, a project suggested by George Dvorsky of the IEET in his interview with Anthes in Frankenstein’s Cat. The hope, it seems, is to give to other animals the possibilities implicit in humanity, and to provide ourselves with a companion at something closer to our level of sentience. Yet the danger here, to my lights, is that we might create creatures that are essentially homeless- caught between a human understanding of what they could or “should” be and the desire to actualize their own non-human nature, the result of millions of years of evolution.

This threat of homelessness applies even in cases where the species in question has the clear potential to integrate into human society. What would it feel like to be the first Neanderthal in tens of thousands of years? Would one feel a sense of pride in one’s Neanderthal heritage, or like a circus freak, subject since childhood to the prying eyes of a curious public? Would there be interminable pressure to act “Neanderthal-like”, or constant questions from ignorant Lamarckians about a lost Neanderthal society one knows nothing about? Would there be pressure to mate only with other Neanderthals, with harping on about the need for a “royal wedding” of “caveman” and “cavewoman”? What if one was only attracted to the kinds of humans seen in Playboy? Would it be scandalous for a human being to date a Neanderthal, and could the latter even find a legitimate human mate, and not just people with a caveman fetish? In other words, how do we avoid this very practical ethical problem of homelessness?

On the surface these questions may appear trivial given the existential importance of what creating such species would symbolize, but they are, after all, the kinds of questions that would need to be addressed if we are serious about giving these beings the possibility of a rich and happy life as themselves, and not just answering to our own philosophical and scientific desires. We can only proceed morally if we begin to grapple with such issues, and here we’ll need the talents not just of scientists and philosophers, but of novelists as well. We need to imagine what it will feel like to live in the kinds of worlds, and as the kinds of beings, we are creating- a job that, as Bruce Sterling has pointed out, is the very purpose of science-fiction novelists in the first place. We will need more of our own versions of The Island of Dr. Moreau.
References:
H.G. Wells, The Island of Dr. Moreau, Lancer Books, 1968. First published 1896.
Emily Anthes, Frankenstein’s Cat: Cuddling Up to Biotech’s Brave New Beasts, Scientific American Books, 2013.
George Church and Ed Regis, Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves, Basic Books, 2012.

The Lost Art of Architectonic

Palmanova

One of the things that most strikes me when thinking about our contemporary discourse regarding the future is just how seldom those engaged in the discussion aim at giving us a vision of society as a whole. There are books that dig deep but remain narrow, such as Eric Topol’s The Creative Destruction of Medicine: How the Digital Revolution Will Create Better Health Care, and George Church’s recent Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves. Many of these books are excellent, but one walks away from them with a good idea of where a particular field or part of our lives might be headed, rather than society as a whole.

Even when authors try to extend the reach of their speculation more broadly, they tend to approach things through a particular lens that ends up constraining them. Steven Johnson’s Future Perfect tries to apply the idea of peer networks to areas as diverse as politics, education, economics and the arts, while Peter Diamandis and Steven Kotler in Abundance project the mindset and passions of Silicon Valley- exponential growth in computers, the D.I.Y. movement, billionaire philanthropy, and the spread of the benefits of technology to the world’s poorest- into a future where human potential can finally be met. However good these broader approaches are, they seldom leave you with an idea of how all the necessary parts of the future societies they hint at and depict fit together, let alone what it might be like to actually live within them.

What seems to be missing in many works on the future is a traveler’s perspective on the worlds they depict. The interesting thing about being a traveler is that you both experience the society you are traveling in as an individual and at the same time stand outside it, able to take a bird’s-eye view that allows you to see connections and get a sense of how the whole thing fits together, like an artful building designed by an architect.

We used to have a whole genre devoted to this architectonic way of looking at the future: the literature of utopia. Writing utopias may seem very different from envisioning positive versions of the future, but perhaps not as different as we might think. Like futurists, many utopian writers tried to extrapolate from technological trends. Way back in 1833, John Adolphus Etzler wrote a utopia, The Paradise Within the Reach of All Men, that bears remarkable similarity to the arguments of Diamandis and Kotler, though the technology that was supposed finally to end human want consisted of nascent industrial-era machines, and Etzler embedded his argument in a narrative that gave the reader an idea of what living in such a society would look like. More famous examples of such extrapolations are Edward Bellamy’s Looking Backward (1888) and the incomparable H.G. Wells’ A Modern Utopia (1905).

One might object that futurism is a predictive endeavor whereas utopianism tries to prescribe what would be best, and therefore what we should do. Again, I am not quite sure this distinction holds, for much futurist writing is as much an argument about potential, meant to coax us into some possible future, or as Diamandis and Kotler put it in Abundance: “The best way to predict the future is to create it yourself.” (239) Utopianism is at least transparent about being prescriptive, as opposed to futurism, which often tries to come across as tomorrow’s news.

I am not quite sure whom we could blame for this state of affairs, but surely Friedrich Engels, of the duo Marx and Engels, would bear part of the responsibility. Back in 1880, in his essay Socialism: Utopian and Scientific, Engels made the case that Marxism, as opposed to the utopianism found in figures such as Robert Owen, was “scientific” in that it was based upon an understanding of the future as determined. Marxists in this view weren’t trying to influence the course of history; they were merely responding to and playing a role in underlying forces that would unfold to an inevitable conclusion anyway. Whether he intended to or not, Engels had removed human agency from the question of what the human future would look like, or rather he had driven such agency into the shadows, unacknowledged and occult.

One might ask: what about futurism that is focused not on good scenarios, at least from the author’s point of view, but on bad outcomes- predictions of disaster and explorations of risk? Even here, I think, futurists are trying to shape the future as much as predict it like a fortune teller. It’s a sadistic prophet indeed who merely tells you your society will be destroyed without giving you some way to avoid that fate. Futurism that focuses on negative futures consists largely of warnings that we need to take steps to prevent something terrible from happening, or at the very least to prepare.

We can find utopian literature in this world of negative futures too, or at least the doppelganger of utopia- the realm of dystopia. Indeed, dystopian literature has provided us with some of the best early-warning systems we have ever had, such as Yevgeny Zamyatin’s novel We, which warned us about Soviet totalitarianism; Jack London’s The Iron Heel, which provided a premonition of fascism; or Aldous Huxley’s Brave New World, which, with its depiction of the dystopia of consumer society, has proved a better reflection of the future than George Orwell’s 1984.

The one place where compelling architectonic versions of the future continue to be found is in science-fiction. Writers of science-fiction continue to play this social role, and often do so with brilliance, as the praise for Kim Stanley Robinson’s recent novel 2312 attests. This teasing out of, and wrestling with, the positive and negative possible worlds we might arrive at, given our current course or with some changes in our trajectory, lends evidence to the observation that science-fiction, whose surface often seems so silly, is in fact the most socially and philosophically serious form of fiction we have.

Yet there are differences between science-fiction and utopian literature, not the least of which is utopia’s demand to convey an architectonic vision of possible worlds. The desire in utopias to get every detail right, down to how people dress and what they eat, makes utopian literature some of the worst we have, because it takes away from the development of characters. In a utopia the normal way fiction is organized is inverted: the world portrayed is the real character, whereas the protagonists and others are mere backdrops for this world.

Unlike much of science-fiction, the focus of utopia is more on social organization than on science and technology, although even in one of the earliest and most famous utopias- Plato’s Republic- science and technology play a role, just not in a form we would readily recognize. Plato based his work on some of the most advanced applied sciences of his day: pedagogy, dialectical philosophy, geometry, musicology, medicine and athletics, and animal husbandry. Yet these sciences and technologies were not the drivers of his utopia. They were the building blocks he used to create a certain form of social organization.

In the sense that they were often seen as proposing a blueprint for the human world, utopian works were often serious in a way science-fiction need not be. Etzler did not merely write a utopia; he tried to build one in Venezuela based upon his ideas. Edward Bellamy is a mere footnote in American literature compared to the geniuses who shared the stage with him during his era: Herman Melville, Henry James, or Mark Twain. Yet Bellamy’s work was considered serious enough that it inspired clubs throughout the United States to debate his ideas regarding the future of industrialism, and it influenced real revolutionaries such as V.I. Lenin. Would any serious social thinker today dare to write a version of the future in the form of a utopia?

Perhaps what we need today is less a revival of utopia as a literary form- something it was never very good at being- than a new way to imagine architectonic possible futures. Given that we are a long way from the day when a person like H.G. Wells could conjure up a vision of utopia secure in the belief that he had some working knowledge of everything under the sun, this new form of utopia would need to be collaborative rather than springing from the mind of just one individual.

I can imagine gathering together a constellation of individuals from across different disciplines in a room- scientists, engineers, artists, fiction writers, philosophers, economists, social scientists- and asking them to design together the “perfect” city. They would ask and answer questions such as how their imagined city provides for its basic needs, how it is laid out, how it fits into the surrounding ecosystem, how its economy works, how its educational system functions, and how its penal system operates. Above all, the exercise would attempt to create a society whose pieces fit together in an architectonic whole. Unlike past utopian literature, such exercises wouldn’t present themselves as somehow final and perfected, but would merely provide us with glimpses of destinations to which we might go.

How Science and Technology Slammed into a Wall and What We Should Do About It

Captain Future, 1944

It might be said that some contemporary futurists tend to use technological innovation and scientific discovery in the same way God was said to use the whirlwind against defiant Job, or Donald Rumsfeld treated the poor citizens of Iraq a decade ago: it’s all about the “shock and awe”. One glance at something like KurzweilAI.net leaves a reader with the impression that brand-new discoveries are flying off the shelf by the nanosecond and that all of our deepest sci-fi dreams are about to come true. No similar effort is made, at least that I know of, to show all the scientific and technological paths that have led into cul-de-sacs, or to chart all the projects packed up and put away, like our childhood chemistry sets, to gather dust in the attic of the human might-have-been. In exact converse to the world of political news, in technological news it’s the jetpacks that do fly we read about, not the ones that never get off the ground.

Aside from the technologies themselves, future-oriented discussion of the potential of technologies or scientific discoveries tends to come in two stripes when it comes to political and ethical concerns: we’re either on the verge of paradise or about to make Frankenstein seem like an amiable dinner guest.

There are a number of problems with this approach to science and technology; I could name more, but here are three: 1) it distorts the reality of innovation and discovery, 2) it isn’t necessarily true, and 3) the political and ethical questions, which are the most essential ones, are too often presented in a simplistic all-good or all-bad manner, when any adult knows that most of life is like ice cream: it tastes great and will make you fat.

Let’s start with distortion: a futurists’ forum like the aforementioned KurzweilAI.Net, by presenting every hint of innovation or discovery side-by-side, does not allow the reader to discriminate between the quality and the importance of such discoveries. Most tentative technological breakouts and discoveries are just that- tentative- and ultimately go nowhere. The first question a reader should ask is whether or not some technique, process, or prediction has been replicated. The second question is whether or not the technology or discovery being presented is actually all that important. Anyone who’s ever seen an infomercial knows people invent things every day that are just minor tweaks on what we already have. Ask anyone trapped like Houdini in a university lab- the majority of scientific work is not about revolutionary paradigm shifts à la Thomas Kuhn but merely about filling in the details. Most scientists aren’t Einsteins in waiting. They just edit his paperwork.

Then we have the issue of reality: anyone familiar with the literature or websites of contemporary futurists is left with the impression that we live in the most innovative and scientifically productive era in history. Yet things may not be as rosy as they appear when we only read the headlines. At least since 2009, there has been a steady chorus of well-respected technologists, scientists and academics telling us that innovation is not happening fast enough- that our rates of technological advancement are not merely failing to exceed those found in the past, they are not even matching them. A common retort to this claim might be to club whoever said it over the head with Moore’s Law; surely, with computer speeds increasing exponentially, it must be pulling everything else along. But, to pull a quote from ol’ Gershwin, “it ain’t necessarily so”.

As Paul Allen, co-founder of Microsoft and the financial muscle behind the Allen Institute for Brain Science, pointed out in his 2011 article “The Singularity Isn’t Near”, the problem of building ever smaller and faster computer chips is actually relatively simple, but many of the other problems we face, such as understanding how the human brain works and applying the lessons of that model- whether to the human brain itself or to the creation of AI- suffer from what Allen calls the “complexity brake”. The problems have become so complex that they are slowing down the pace of innovation itself. Perhaps it’s not so much a brake as a wall we’ve slammed into.

A good real-world example of the complexity brake in action is what is happening with innovation in the drug industry, where new discoveries have stalled. In the words of Matthew Herper at Forbes:

But the drug industry has been very much a counter-example to Kurzweil’s proposed law of accelerating returns. The technologies used in the production of drugs, like DNA sequencing and various types of chemistry approaches, do tend to advance at a Moore’s Law-like clip. However, as Bernstein analyst Jack Scannell pointed out in Nature Reviews Drug Discovery, drug discovery itself has followed the opposite track, with costs increasing exponentially. This is like Moore’s law backwards, or, as Scannell put it, Eroom’s Law.
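To make Eroom’s Law concrete: Scannell’s observation is that the number of new drugs approved per billion (inflation-adjusted) R&D dollars has roughly halved every nine years since 1950. A toy sketch of that curve- the 1950 starting rate below is a made-up number, chosen only to show the shape:

```python
# Illustrative sketch of Eroom's Law: research output per R&D dollar
# halves every ~9 years. The base_rate of 40 drugs per billion dollars
# in 1950 is hypothetical, used only to illustrate the trend.
def drugs_per_billion(year, base_year=1950, base_rate=40.0, halving_years=9.0):
    """Drugs approved per billion R&D dollars under a constant halving rate."""
    return base_rate * 0.5 ** ((year - base_year) / halving_years)

for year in (1950, 1980, 2010):
    print(year, round(drugs_per_billion(year), 2))
```

Over six decades the same research dollar buys roughly a hundredth of the output it once did- Moore’s Law run in reverse, just as Scannell’s coinage suggests.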

It is only when we acknowledge that there is a barrier in front of our hopes for innovation and discovery that we can seek out its source and try to remove it. If you don’t see a wall, you run the risk of running into it, and you certainly won’t be able to do the smart things: swerve, scale, leap, or prepare to bust through.

At least part of the problem stems from the fact that though we are collecting a simply enormous amount of scientific data, we are having trouble bringing this data together to either solve problems or aid in our actual understanding of what it is we are studying. Trying to solve this aspect of the innovation problem is a goal of the brilliant young technologist Jeff Hammerbacher, founder of Cloudera. Hammerbacher has embarked on a project with Mt. Sinai Hospital to apply the tools for organizing and analyzing the overwhelming amounts of data gathered by companies like Google and Facebook to medical information, in the hopes of spurring new understanding and treatments of diseases. The problem Hammerbacher is trying to solve, as he acknowledged in a recent interview with Charlie Rose, is precisely the one identified by Herper in the quote above: that innovation in treating diseases like mental illness is simply not moving fast enough.

Hammerbacher is our Spider-Man. Having helped create the analytical tools that underlie Facebook, he began to wonder if it was worth it, quipping: “The best minds of my generation are thinking about how to make people click ads”. The real problem wasn’t the lack of commercial technology; it was the barriers to important scientific discoveries that would actually save people’s lives. Hammerbacher’s conscientious objection to what technological innovation was being applied for versus what it wasn’t is, I think, a perfect segway (excuse the pun) to my third point: the political and ethical dimension, or lack thereof, in much of futurist writing today.

In my view, futurists too seldom acknowledge the political and ethical dimension in which technology will be embedded. Technologies hoped for in futurist communities, such as brain-computer interfaces, radical life extension, cognitive enhancements or AI, are treated in the spirit of consumer goods: if they exist, we will find a way to pay for them. There is, perhaps, the assumption that such technologies will follow the decreasing cost curves found in consumer electronics: cell phones were once toys for the rich, and now some of the poorest people on earth have them.

Yet it isn’t clear to me that the technologies hoped for will actually follow decreasing cost curves; they may instead resemble health care costs rather than the plethora of cheap goods we’ve scored thanks to Moore’s Law. It’s also not clear to me that, should such technologies be invented and initially be affordable only by the rich, this gap will be at all acceptable to the mass of the middle class and the poor unless it is closed very, very fast. After all, some futurists are suggesting that not just life but some corporeal form of immortality will be at stake. There isn’t much reason to riot if your wealthy neighbor toots around in his jetpack while you’re stuck driving a Pinto. But the equation would surely change if what was at stake was a rich guy living to be a thousand while you’re left broke, jetpackless, driving a Pinto and kicking the bucket at 75.

The political question of equity will thus be important, as will much deeper ethical questions as to what we should do and how we should do it. The slower pace of innovation and discovery, if it holds, might ironically be a good thing for society, for a time at least (though not for individuals who were banking on an earlier date for the arrival of technicolored miracles), for it will give us time to sort these political and ethical questions through.

There are three solutions I can think of that would improve the way science and technology is consumed and presented by futurists, help us get through the current barriers to invention and discovery, and build our capacity to deal with whatever is on the other side. The first problem, that of distortion, might be dealt with by better sorting of scientific news stories, so that the reader has some idea both of where the finding being presented lies along the path of scientific discovery or technological innovation and of how important a discovery is in the overall framework of a field. This would prevent things such as a preliminary finding regarding the creation of an artificial hippocampus in a rat brain being placed next to the discovery of the Higgs boson, at least without some color coding or other signification that these discoveries are both widely separated along the path of discovery and of grossly different import.

As to the barriers to innovation and discovery itself: more attempts such as Hammerbacher’s need to be tried. Walls need to be better identified, and perhaps whole projects bringing together government and venture capital resources and money used to scale over or even bust through these blocked paths. As Hammerbacher’s case seems to illustrate, a lot of technology is fluff- it’s about click-rates, cool gadgets, and keeping up with the Joneses. Yet technology is also vitally important as the road to some of our highest aspirations. Without technological innovation we cannot alleviate human suffering, extend the time we have here, or spread the ability to reach self-defined ends to every member of the human family.

Some technological breakthroughs would actually deserve the appellation. Quantum computing, if viable, and if it lives up to the hype, would be like Joshua’s trumpets against the walls of Jericho in terms of the barriers to innovation we face. This is because, theoretically at least, it would answer the biggest problem of the era, the same one identified by Hammerbacher: we are generating an enormous amount of real data but are having a great deal of trouble organizing this information into actionable units we actually understand. In effect, we are creating an exponentially increasing database that requires exponentially increasing effort to put the pieces of the puzzle together- running to a standstill. Quantum computing, again theoretically at least, would solve this problem by making such databases searchable without the need to organize them beforehand.
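The standard theoretical result behind this hope is Grover’s algorithm, which searches an unstructured collection of N items in roughly √N quantum queries, where a classical machine needs on the order of N. A back-of-the-envelope comparison (the database size below is, of course, made up):

```python
import math

# Query counts for unstructured search: a classical machine expects to
# examine about N/2 items before finding its target, while Grover's
# algorithm needs about (pi/4) * sqrt(N) quantum queries.
def classical_queries(n):
    return n / 2

def grover_queries(n):
    return (math.pi / 4) * math.sqrt(n)

n = 10**12  # a trillion unsorted records (an arbitrary, illustrative size)
print(f"classical: ~{classical_queries(n):.1e} lookups")
print(f"quantum (Grover): ~{grover_queries(n):.1e} queries")
```

A quadratic speedup isn’t magic- the haystack doesn’t disappear- but at this scale it turns half a trillion lookups into well under a million, which is why unsorted data is the canonical example.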

Things in terms of innovation are not, of course, all gloom and doom. One fast-moving field that has recently come to public attention is that of molecular and synthetic biology, perhaps the only area where knowledge and capacity are not merely equaling but exceeding Moore’s Law.

To conclude, the very fact that innovation might be slower than we hope- though we should make every effort to get it moving- should not be taken as an unmitigated disaster but as an opportunity to figure out what exactly it is we want to do when many of the hoped-for wonders of science and technology actually arrive. At the end of his recent TED Talk on reviving extinct species, a possibility that itself grows out of the biological revolution of which synthetic biology is a part, Stewart Brand gives us an idea of what this might look like. When asked if it was ethical for scientists to “play God” in doing such a thing, he responded that he and his fellow pioneers were trying to answer the question of whether we could revive extinct species, not whether we should. The ability to successfully revive extinct species, if it worked, would take some time to master, and would be a multi-generational project whose end Brand, given his age, would not see. This would give us plenty of time to decide whether de-extinction was a good idea, a decision he certainly hoped we would make. The only way we can justly do this is to set up democratic forums to discuss and debate the question.

The ex-hippie Brand has been around a long time, and is great evidence to me that old age plus experience can still result in what we once called wisdom. He has been present long enough to see many of our technological dreams of space colonies and flying cars and artificial intelligence fail to come true despite our childlike enthusiasm. He seems blissfully unswept up in all the contemporary hoopla, and still less in his own importance in the grand scheme of things, and has devoted his remaining years to generation-long projects, such as the Clock of the Long Now or the revival of extinct species, that will hopefully survive into the far future after he is gone.

Brand has also been around long enough to see all the Frankensteins that have not broken loose: the GMOs and China Syndromes and all those oh-so-frightening “test tube babies”. His attitude towards science and technology seems to be that it is neither savior nor Shiva; it’s just the cool stuff we can do when we try. Above all, he knows what the role of scientists and technologists is, and what is the role of the rest of us. The former show us what tricks we can play, but it is up to all of us to decide how, or if, we should play them.

The Ubiquitous Conflict between Past and Future

Materia and Dynamism of a Cyclist by Boccioni

Perhaps one of the best ways to get a grip on our thoughts about the future is to look at the future as seen through the eyes of the past. This is not supposed to be a Zen koan meant to grind the reader’s mind to a screeching halt, but a serious suggestion. Looking at how the past saw the future might reveal some things we might not easily see with our nose so close to the glass of contemporary visions of it. A good place to look, I think, would be the artistic and cultural movement of the early 20th century that went under the name of Futurism.

Futurism was a European movement, found especially in Italy but also elsewhere, that took as its focus radical visions of a technologically transformed human future. Futurists were deeply attracted to the dynamic aspects of technology. They loved speed and skyscrapers and the age of the machine. They also hated the past and embraced violence- two seemingly unrelated orientations that I think may in fact fit hand in glove.

The best document for understanding Futurism is its first: F.T. Marinetti’s The Futurist Manifesto. Here is Marinetti on the Futurists’ stance towards the cultural legacy of the past, which also gives some indication of the movement’s positive outlook towards violence:

Let the good incendiaries with charred fingers come! Here they are! Heap up the fire to the shelves of the libraries! Divert the canals to flood the cellars of the museums! Let the glorious canvases swim ashore! Take the picks and hammers! Undermine the foundation of venerable towns!

 

Given what I think is probably the common modern view- that technology, at least in its non-military form, is a benign force that makes human lives better, where better also means less violent, and that technology in the early 21st century is used as often to preserve the past, say by digitally scanning and making globally available an ancient manuscript such as the Dead Sea Scrolls, as to destroy it- one might ask: what gives with Futurism?

Within the context of early 1900s Italy, both Futurism’s praise of violence and its hatred for the accumulated historical legacy make some sense. Italy was both a relatively new country, having only been established in 1861, and, compared to its much more modernized neighbors, a backward one. At the same time, the sheer weight of the past of pre-unification Italy- stretching back to the Etruscans, followed by the Romans, followed by the age of great Italian city-states such as Florence, not to mention the long history as the seat of the Roman church- was overwhelming.

Italy was a country with too much history, something felt to be holding back its modernization, and the Futurists’ hatred of history and their embrace of violence were dual symptoms of the desire to clear the deck so that modernity could take hold. What we find with Futurism is an acute awareness that the past may be the enemy of the future, and an acknowledgement, indeed a full embrace, of the disruptive nature of technology- the fact that technology upends and destroys old social forms and ways of doing things.

We might think such a conflict between past and future is itself a thing of the past, but I think this would be mistaken. At bottom, much of the current debate around the question of what the human (or post-human) future should look like is really a debate between those who value the past and wish to preserve it and those who want to escape this past and create the world anew.

Bio-conservatives might be thought of as those who hope to preserve evolved human nature and/or the evolved biosphere intact in the face of potentially transformative technology. Transhumanists might be thought of as holding the middle position in the debate between past and future, wanting to preserve in varying degrees some aspects of evolved humanity while embracing some new characteristics that they hope technology will soon make widely available. Singularitarians are found on the far end of the future side of the past-vs-future spectrum, not merely embracing but pursuing the creation of a brand new evolutionary kingdom in an equally new substrate- silicon.

You also find an awareness of this conflict between past and future in the most seemingly disparate of thinkers. On the future-as-destroyer-of-the-past side of the ledger, you have a recent mesmerizing speech at the SXSW Conference by the science-fiction author Bruce Sterling. Part of the point Sterling seems to be making in this multifaceted talk is that technologists need to acknowledge that technology does not always produce the better, just the different. Technology upends the old and creates a new order, and we should both lament the loss of what we have destroyed and embrace our role in having been a party to the destruction of the legacy of the past.

On the past-as-immovable-anchor side of the ledger- the past as that which prevents us from realizing not just the future but the present- we have the architect Rem Koolhaas, who designed such modern wonders as the CCTV Headquarters in Beijing. Koolhaas thinks the past is currently winning in the fight against the future. He is troubled by the increase in the area human beings have declared “preserved”, that is, off-limits to human development. This area is surprisingly huge; Koolhaas claims it is comparable to the space of the entire country of India.

In a hypothetical plan to deal with the “crisis” of historic preservation in Beijing, Koolhaas proposed mapping a grid over the city, with the areas to be preserved and those where development was unconstrained established at random. Koolhaas thinks the process should be random to avoid cultural fights over what should be preserved and what should not, fights which end up reflecting the current realities of power more than any “real” historical significance- a version of “history is written by the victors”. But the very randomness of the process not only leaves me, at least, with the impression that the plan is both historically illiterate and somewhat crazy; it is based, I think, on a distorted picture of the very facts of preservation themselves.

Koolhaas combines the area preserved for historic reasons with that preserved for natural ones. Thus, his numbers include the vast amounts of space countries have set aside, or hope to set aside, as natural preserves. It’s not quite clear to me that these are areas most human beings would really want to live in anyway, so it remains uncertain whether the fact that these areas are prohibited from being developed indeed somehow holds back development in the aggregate. Koolhaas thinks we need a “theory” of preservation as a guide to what we should preserve and what we should allow to be destroyed in the name of something new. A theory suggests that the conflict between past and future is something that can or should be resolved, yet I do not think resolution is the goal we should seek. To my lights, what is really required is a discussion and a debate that acknowledges what we are in fact arguing about: how much we should preserve the past versus how much we should allow the past to be destroyed in the name of the future.

What seems to me a new development, something that sets us off from the early 20th century Futurism with which this post began, is that technology, rather than by default being the force that destroys the past and gives rise to the future, as in Sterling’s speech, is becoming neutral in the conflict between past and future. You can see some of this new neutrality not only in efforts to use technology to preserve and spread engagement with the past, as in the creation of a digital version of the Dead Sea Scrolls; you can also see it in the current project of the eternal outsider Stewart Brand, which hopes to bring back into existence extinct species such as the Passenger Pigeon through a combination of reverse genetic engineering and breeding.

For de-extinct species to be viable in the wild will probably require the resurrection of ecosystems and co-dependent species that have also been gone for quite some time, such as the chestnut forests for the Passenger Pigeon. Thus, what is not in the present and in front of us- the future- will, in Brand’s vision, be the lost past. We are likely to see more of this past/future neutrality of technology going forward, and therefore might do well to stop assuming that technological advancement of necessity leads to the victory of the new.

Because we can never fully determine whether we have been at least secondarily responsible for a species’ extinction, does that mean we should never allow another species to go extinct if we can prevent it? This would constitute not an old world but in fact a very new one- a world in which evolution does not occur, at least the type of evolution in large creatures that leads to extinction. When combined with the aim of ending human biological death, this world has a strong resemblance to the mythical Garden of Eden, perhaps something that should call the goal itself into question. Are we merely trying to realize deeply held religious ideas so enmeshed in our thought patterns that they have become invisible?

Advances in synthetic biology could lead, as Freeman Dyson believes, to life itself soon becoming part software, part art, with brand new species invented with the ease and frequency with which new software is written or songs composed today. Or it could lead to projects, such as Brand’s, to reverse the damage humankind has done to the world and return it to the state of the pre-human past. Perhaps the two can live side by side, perhaps not. Yet what does seem clear is that the choice of one takes time and talent away from the other. Time spent resurrecting the Passenger Pigeon or any other extinct species is time and intellectual capacity not spent inventing species never before seen.

Once one begins seeing technological advancement as neutral in the contest between past and future, a number of aspirations that seem futuristic because of their technological dependence become perhaps less so. A world where people live “forever” seems like a very futuristic vision, but if one assumes this will require fewer people being born, it is perhaps the ultimate victory of the old over the new. We might tend to think that genetic engineering will be used to “advance” humankind, but might it not instead be seen as an alternative to more radical versions of the future- cyborg technologies or AI- which, unlike genetic engineering, go beyond merely reaching the biological potential human beings have had since they emerged on the African savanna long ago?

There is no lasting solution to the conflict of past versus future, nor is it a condition we should in some way lament. Rather, it is merely a reflection of our nature as creatures in time. If we do not hold onto at least something from our past, and that includes both our individual and collective past, we become creatures or societies without identity. At the same time, if we put all of our efforts into preserving what was and never attempt the new, we are in a sense already dead, frozen in amber inside a world of what was. The sooner we acknowledge that this conflict is at the root of many of our debates, the more likely we are to come up with goals to shoot for that neither aim to destroy the deep legacy of the past, both human and biological, nor prevent us from ever crossing into the frontier of the new.

The Ethics of a Simulated Universe

Zhuangzi-Butterfly-Dream

This year marks the tenth anniversary of one of the more thought-provoking thought experiments to appear in recent memory. Nick Bostrom’s paper in the Philosophical Quarterly, “Are You Living in a Computer Simulation?”, might have sounded like the type of conversation we all had after leaving the theater having seen The Matrix, but Bostrom’s attempt was serious. (There is a great recent video of Bostrom discussing his argument at the IEET.) What he did in his paper was create a formal argument around the seemingly fanciful question of whether or not we are living in a simulated world. Here is how he stated it:


This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

To state his case in everyday language: any technological civilization whose progress did not stop at some point would gain the capacity to realistically simulate whole worlds, including the individual minds that inhabit them- that is, it could run realistic ancestor simulations. Either technologically advanced civilizations die out before gaining the capacity to produce realistic ancestor simulations, or there is something stopping such technologically mature civilizations from running such simulations in large numbers, or we ourselves are most likely living in such a simulation, because there would be a great many more simulated worlds than real ones.
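Bostrom’s bookkeeping can be sketched with a few lines of arithmetic. With illustrative, entirely made-up numbers- say only 1% of civilizations reach a posthuman stage, only 1% of those bother with ancestor simulations, but each interested civilization runs a million of them- simulated observers still swamp unsimulated ones:

```python
# A rough sketch of the simulation argument's arithmetic: the fraction of
# human-like observers who are simulated, given (made-up) parameters.
def fraction_simulated(f_posthuman, f_interested, sims_per_civ):
    """Each real civilization contributes 1 unsimulated history; the
    interested posthuman ones contribute sims_per_civ simulated histories
    each. Return simulated histories over the total."""
    sims = f_posthuman * f_interested * sims_per_civ
    return sims / (sims + 1)

# Even with pessimistic fractions, many simulations per civilization
# push the odds of being simulated above 99%.
print(round(fraction_simulated(0.01, 0.01, 10**6), 4))  # prints 0.9901
```

This only unpacks the argument’s structure; it does not endorse its premises. Indeed, the point of what follows is precisely that premise (2)- that mature civilizations run few or no such simulations- may be the live option.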

There is a lot in that argument to digest and a number of underlying assumptions that might be explored or challenged, but I want to look at just one of them: #2. That is, I will make the case that there may be very good reasons, both ethical and scientific, why technological civilizations would both prohibit and be largely uninterested in creating realistic ancestor simulations. Bostrom himself discusses the possibility that ethical constraints might prevent technologically mature civilizations from creating realistic ancestor simulations. He writes:

One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

I think the issue of “suffering that is inflicted on the inhabitants of a simulation” may be more serious than Bostrom appears to believe. Any civilization that has reached the stage where it can create realistic worlds containing fully conscious human beings will almost certainly have escaped the two conditions that haunt the human condition in its current form- namely pain and death. The creation of realistic ancestor simulations would bring back into existence these two horrors and thus might be considered not merely unethical but perhaps even evil. Were our world actually such a simulation, it would confront us with questions that once went by the name of theodicy: the attempt to reconcile the assumed goodness of the creator (in our case, the simulator) with the evil- natural, moral, and metaphysical- that exists in the world.

Questions as to why there is, or the morality of there being, such a wide disjunction between the conditions of any imaginable creator/simulator and those of the created/simulated are a road we’ve been down before, as the philosopher Susan Neiman so brilliantly showed us in her Evil in Modern Thought: An Alternative History of Philosophy. There, Neiman points out that the question of theodicy is a nearly invisible current that swept through the modern age. As a serious topic of thought in this period, it began as an underlying assumption behind the scientific revolution, with thinkers such as Leibniz arguing, in his Essays on the Goodness of God, the Freedom of Man and the Origin of Evil, that ours is “the best of all possible worlds”. Leibniz, having co-invented calculus and been among the first to envision what we would recognize as a modern computer, was no dope, so one might wonder how such a genius could ever believe anything so seemingly contradictory to actual human experience.

What one needs to remember in order to understand this is that the early giants of the scientific revolution- Newton, Descartes, Leibniz, Bacon- weren’t out to replace God but to understand what was thought of as his natural order. Given how stunningly quickly what seemed at the time a complete understanding of the natural world had been formulated using the new methods, it perhaps made sense to think that a similar human understanding of the moral order was close at hand. Given the intricate and harmonious picture of how nature was “designed”, it was easy to think that this natural order did not somehow run in violation of the moral order- that underneath every seemingly senseless and death-bringing natural event, an earthquake or a plague, there was some deeper and more necessary process for the good of humanity going on.


The quest for theodicy continued when Rousseau gave us the idea that for harmony to be restored to the human world we needed to return to the balance found in nature, something we had lost when we developed civilization. Nature, and therefore the intelligence that was thought to have designed it, were good. Human beings got themselves into trouble when they failed to heed this nature- built their cities on fault lines, or even, for Rousseau, lived in cities at all.


Anyone who has ever taken a serious look at history (or even paid real attention to the news) would agree with Gibbon that history “is, indeed, little more than the register of the crimes, follies, and misfortunes of mankind”. There isn’t any room for an ethical creator there, but Hegel tried to find some larger meaning in the long-term trajectory of history. With him, and Marx who followed on his heels, we have not so much a creator as a natural and historical process that leads to a fulfillment- an intelligence or perfect society- at its end. There may be a lot of blood and gore, a huge amount of seemingly unnecessary human suffering, between the beginning of history and its end, but the price is worth paying.

Lest anyone think these views are irrelevant in our current context, one can see a great deal of Rousseau in the positions of contemporary environmentalists and bio-conservatives who take the position that it is us- human beings and the artificial technological society we have created- that is the problem. Likewise, the views of both singularitarians and some transhumanists, who think we are on the verge of reaching some breakout stage where we complete or transcend the human condition, have deep echoes of both Hegel and Marx.

But perhaps the best analog for the kinds of rules a technologically advanced civilization might create around realistic ancestor simulations lies not in these religiously based philosophical ideas but in our regulations regarding more practical areas, such as animal experimentation. Scientific researchers have long recognized that the pursuit of truth needs humane constraints and that such constraints apply not just to human research subjects but to animal subjects as well. In the United States, research that uses live animals is subject to self-regulation and oversight based on standards established by an Institutional Animal Care and Use Committee (IACUC). The criteria are as follows:

The basic criteria for IACUC approval are that the research (a) has the potential to allow us to learn new information, (b) will teach skills or concepts that cannot be obtained using an alternative, (c) will generate knowledge that is scientifically and socially important, and (d) is designed such that animals are treated humanely.

The underlying idea behind these regulations is that researchers should never unnecessarily burden animals in research. Therefore, it is the job of researchers to design and carry out research in a way that does not subject animals to unnecessary burdens. Doing research that does not promise to generate important knowledge, subjecting animals to unnecessary pain, and performing experiments on animals when the objectives can be reached without doing so are all ways of unnecessarily burdening animals.

Would realistic ancestor simulations meet these criteria? I think realistic ancestor simulations would probably fulfill criterion (a), but I have serious doubts that they meet any of the others. In terms of simulations offering us “skills or concepts that cannot be obtained using an alternative” (b), in what sense would a realistic simulation teach skills or concepts that could not be achieved using something short of such simulations, where those within them actually suffer and die? Are there not many types of simulations that would fall short of the full range of subjective human experiences of suffering but would nevertheless grant us a good deal of knowledge regarding such types of worlds and societies? There is probably an even greater ethical hurdle for realistic ancestor simulations to cross in (c), for it is difficult to see exactly what lessons or essential knowledge running such simulations would bring. Societies advanced enough to run such simulations are unlikely to gain vital information about how to run their own societies.

The knowledge they are seeking is bound to be historical in nature, that is, what was it like to live in such and such a period, or what might have happened if some historical contingency were reversed? I find it extremely difficult to believe that we do not already have most of the information needed to create realistic models of what it was like to live in a particular historical period, a recreation that does not have to entail real suffering on the part of innocent participants to be of worth.

Let’s take a specific rather than a general historical example, one dear to my heart because I am a Pennsylvanian: the Battle of Gettysburg. Imagine that you live hundreds or even thousands of years into the future when we are capable of creating realistic ancestor simulations. Imagine that you are absolutely fascinated by the American Civil War and would like to “live” in that period to see it for yourself. Certainly you might want to bring your own capacity for suffering and pain, or to replicate this capacity in a “human” you, but why do the simulated beings in this world with you have to actually feel the lash of a whip, or the pain of a saw hacking off an injured limb? Again, if completely accurate simulations of limited historical events are ethically suspect, why would this suspicion not hold for the simulation of entire worlds?

One might raise the objection that any realistic ancestor simulation would need to possess sentient beings with free will such as ourselves, beings who possess not merely the ability to suffer but the ability to inflict evil upon others. This, of course, is exactly the argument Christian theodicy makes. It is also an argument that was undermined by sceptics of early modern theodicy, such as that put forth by Leibniz.

Arguments that the sentient beings we are now (whether simulated or real) require free will that puts us at risk of self-harm were dealt with brilliantly by the 17th-century philosopher Pierre Bayle, who compared whatever creator might exist behind a world where sentient beings are in constant danger due to their exercise of free will to a negligent parent. Every parent knows that giving their children freedom is necessary for moral growth, but what parent would give this freedom such free rein when it was not only likely but known that it would lead to the severe harm or even death of their child?

Applying this directly to the simulation argument: any creator of such simulations knows that we will kill millions of our fellow human beings in war and torture, deliberately starve, and enslave countless others. At least these are problems we have brought on ourselves, but the world also contains numerous so-called natural evils, such as earthquakes and pandemics, which have devastated us time and again. Above all, it contains death itself, which will claim all of us in the end.

It was quite obvious to another sceptic, David Hume, that the reality we live in has no concern for us. If it was “engineered”, what does it say that the engineer did not provide obvious “tweaks” to the system that would make human life infinitely better? Why are we driven by pains such as hunger rather than by correspondingly greater pleasures alone?

Arguments that human and animal suffering is somehow not “real” because it is simulated seem to me particularly tone-deaf. In a lighter mood I might kick a stone like Samuel Johnson and squeak “I refute it thus!” In a darker mood I might ask those who hold this idea whether they would willingly exchange places with a burn victim. I think not!

If it is the case that we live in a realistic ancestor simulation, then the simulator cares nothing for our suffering on a granular level. This leads us to the question of what type of knowledge could truly be gained from running such realistic ancestor simulations. It might be the case that the more granular a simulation is, the less knowledge can actually be gained from it. If one needs to create entire worlds, with individuals and everything in them, in order to truly understand what is going on, then the results of the process being studied are either contingent, or one is really not in possession of full knowledge regarding how such worlds work.

It might be interesting to run natural selection over again from the beginning of life on earth to see what alternatives evolution might have come up with, but would we really be gaining any knowledge about evolution itself, rather than a view of how it might have looked had such and such initial conditions been changed? And why would we need to replicate an entire world, including the pain suffered by the simulated creatures within it, before we could grasp what alternatives to the path evolution or even history followed might have looked like? Full understanding of the process by which evolution works, which we do not have but a civilization able to create realistic ancestor simulations doubtless would, should allow us to envision alternatives without having to run the process from scratch a countless number of times.

Yet it is in the last criterion (d), that experiments are “designed such that animals are treated humanely”, that the ethical nature of any realistic ancestor simulation really hits a wall. If we are indeed living in an ancestor simulation, it seems pretty clear that the simulator(s) should be declared inhumane. How else would the simulator create a world of pandemics and genocides, torture, war, and murder, and above all, universal death?

One might claim that any simulator is so far above us that it takes little account of our suffering. Yet we have granted this kind of concern to animals, and thus might consider ourselves morally superior to an ethically blind simulator. Without any concern or interest in our collective well-being, why simulate us in the first place?

Indeed, any simulator who created a world such as our own would be the ultimate anti-transhumanist. Having escaped pain and death “he” would bring them back into the world on account of what would likely be socially useless curiosity. Here then we might have an answer to the second half of Bostrom’s quote above:


Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

A society that was technologically capable of creating realistic ancestor simulations and actually ran them would appear to have one of two features: (1) it finds such simulations ethically permissible, or (2) it is unable to prevent such simulations from being created. Perhaps any society that remains open to creating the degree of suffering found in realistic ancestor simulations for anything but reasons of existential survival would be likely to destroy itself for other reasons relating to such a lack of ethical boundaries.

However, it is (2) that I find the most illuminating. For perhaps the condition that decides whether a civilization will continue to exist is its ability to adequately regulate the use of technology within it. Any society that is unable to prevent rogue members from creating realistic ancestor simulations despite deep ethical prohibitions is incapable of preventing the use of destructive technologies, or of managing its own technological development in a way that promotes survival. We can perhaps see glimpses of this in our own predicament with nuclear and biological weapons, or the dangers of the Anthropocene.

This link between ethics, successful control of technology, and long-term survival is perhaps the real lesson we should glean from Bostrom’s provocative simulation argument.

Immortal Jellyfish and the Collapse of Civilization

Luca Giordano, The Cave of Eternity, 1680s

The one rule that seems to hold for everything in our Universe is that all that exists must, after a time, pass away, which for life forms means they will die. From there, however, all bets are off, and the definition of the word “time” in the phrase “after a time” comes into play. The Universe itself may exist for as long as hundreds of trillions of years, to at last disappear into the formless quantum field from whence it came. Galaxies, or more specifically clusters of galaxies and super-galaxies, may survive for perhaps trillions of years, to eventually be pulled in and destroyed by the black holes at their centers.

Stars last for a few billion years, and our own sun, some 5 or so billion years in the future, will die after having expanded and then consumed the last of its nuclear fuel. The earth, having lasted around 10 billion years by that point, will be consumed in this expansion of the Sun. Life on earth seems unlikely to make it all the way to the Sun’s envelopment of it, and will likely be destroyed billions of years before the end of the Sun, as solar expansion boils away the atmosphere and oceans of our precious earth.

The lifespan of even the longest-lived individual among us is nothing compared to this kind of deep time. Against deep time we are, all of us, little more than mayflies, who live out their entire adult lives in little more than a day. Yet, like the mayflies themselves, one of the earth’s oldest extant species, by the very fact that we are the product of a long chain of life stretching backward we have contact with deep time.

Life on earth itself, if not quite immortal, does at least come within the range of the “lifespan” of other systems in our Universe, such as stars. If life that emerged from earth manages to survive, and proves capable of moving beyond the life-cycle of its parent star, perhaps the chain in which we exist can continue in an unbroken line to reach the age of galaxies or even the Universe itself. Here, then, might lie something like immortality.

The most likely route by which this might happen is through our own species, Homo sapiens, or our descendants. Species do not exist forever, and ours is likely to share this fate of demise, either through actual extinction or evolution into something else. In terms of the latter, one might ask, if our survival is assumed, how far into the future we would need to go before our descendants are no longer recognizably human. As long as something doesn’t kill us, or we don’t kill ourselves off first, I think that choice, for at least the foreseeable future, will be up to us.

It is often assumed that species have to evolve or they will die. A common refrain I’ve heard among some transhumanists is “evolve or die!” In one sense, yes, we need to adapt to changing circumstances; in another, no, this is not really what evolution teaches us, or at least not the only thing it teaches us. When one looks at the earth’s longest extant species, what one often sees is that once natural selection comes up with a formula that works, that model will be preserved essentially unchanged over very long stretches of time, even over what can be considered deep time. Cyanobacteria are nearly as old as life on earth itself, and the more complex horseshoe crab is essentially the same as its relatives that walked the earth before the dinosaurs. The exact same type of small creatures that our children torture on beach vacations might have been snacks for a baby T-Rex!

That was the question of the longevity of species, but what about the longevity of individuals? Anyone interested in the subject should check out the amazing photo study by the artist Rachel Sussman. You can see Sussman’s work here at TED, and over at Long Now. The specimens Sussman brings to light include individuals over 2,000 years old. Almost all are bacteria or plants, and clonal, that is, they exist as a single organism composed of genetically identical individuals linked together by common root and other systems. Plants, and especially trees, are perhaps the most interesting because they are so familiar to us, and though no plant can compete with the longevity of bacteria, a clonal colony of Quaking Aspen in Utah is an amazing 80,000 years old!

The only animals Sussman deals with are corals, an artistic decision that reflects the fact that animals do not survive for all that long, although one species of animal she does not cover might give the long-lifers in the other kingdoms a run for their money. The “immortal jellyfish”, Turritopsis nutricula, is thought to be effectively biologically immortal (though none are likely to have survived in the wild for anything even approaching the longevity of the longest-lived plants). The way it achieves this feat is a wonder of biological evolution. The Turritopsis nutricula, after mating upon sexual maturity, essentially reverses its own development process and reverts to its prior clonal state.

Perhaps we could say that the Turritopsis nutricula survives indefinitely by moving between more and less complex types of structures, all the while preserving the underlying genes of an individual specimen intact. Some hold out hope that the Turritopsis nutricula holds the keys to biological immortality for individuals, and let’s hope they’re right, but I, for one, think its lessons likely lie elsewhere.

A jellyfish is a jellyfish, after all. Among more complex animals with well-developed nervous systems, longevity moves much closer to a humanly comprehensible lifespan, with the oldest living animal, a giant tortoise by the too-cute name of “Jonathan”, thought to be around 178 years old. This is still a very long time frame in human terms, and perhaps puts the briefness of our own recent history in perspective: it would be another 26 years after Jonathan hatched from his egg till the first shots of the American Civil War were fired. A lot can happen over the life of a “turtle”.

Individual plants, however, put all individual animals to shame. The oldest non-clonal plant, the Great Basin Bristlecone Pine, has a specimen believed to be 5,062 years old. In some ways this oldest living non-clonal individual perfectly illustrates the (relatively) new way human beings have reoriented themselves to time, and even deep time. When this specimen of pine first emerged from a cone, human beings had only just invented a whole set of tools that would make the transmission of cultural rather than genetic information across vast stretches of time possible. During the 31st century B.C.E. we invented monumental architecture such as Stonehenge and the pyramids of Egypt, whose builders still “speak” to us, pose questions to us, from millennia ago. Above all, we invented writing, which allowed someone with little more than a clay tablet and a carving utensil to say something to me living 5,000 years in his future.

Humans being the social animals that they are, we might ask ourselves about the mortality or potential immortality of groups that survive across many generations, and even for thousands of years. Groups that survive for such long periods seem to emerge most fully out of the technology of writing, which both preserves historical memory and permits a common identity around a core set of ideas. The two major types of human groups based on writing are institutions and societies, the latter including not just the state but also the economic, cultural, and intellectual features of a particular group.


Among the biggest mistakes I think those charged with responsibility for an institution or a society can make is to assume that it is naturally immortal, and that such a condition is independent of whatever decisions and actions those in charge of it take. This was part of the charge Augustine laid against the seemingly eternal Roman Empire in his The City of God. The Empire, Augustine pointed out, was a human institution that had grown and thrived from its virtues in the past just as surely as it was in his day dying from its vices. Augustine, however, saw the Church and its message as truly eternal. Empires would come and go but the people of God and their “city” would remain.

It is somewhat ironic, therefore, that the Catholic Church, which chooses a Pope this week, has been so beset by scandal that its very long-term survivability might be thought at stake. Even seemingly eternal institutions, such as the 2,000-year-old Church, require from human beings an orientation that might be compared to the way theologians once viewed the relationship of God and nature. Once it was held that constant effort by God was required to keep the Universe from slipping back into the chaos from whence it came, and that the action of God was necessary to open every flower. While this idea holds very little for us in terms of our understanding of nature, it is perhaps a good analog for human institutions, states, and our own personal relationships, which require our constant tending or they give way to mortality.

It is perhaps difficult for us to realize that our own societies are as mortal as the empires of old, and that someday my own United States will be no more. America is a very odd country in respect to its views of time and history: in a society seemingly obsessed with the new and the modern, contemporary debates almost always seek reference and legitimacy from men who lived and thought over 200 years ago. The Founding Fathers were obsessed with the mortality of states and deliberately crafted a form of government that they hoped might make the United States almost immortal.

Much of the structure of American constitutionalism, where government is divided into “branches” that “check and balance” one another, was based on a particular reading of long-lived ancient systems of government that had something like this tripartite structure, most notably Sparta and Rome. What “killed” a society, in the view of the Founders, was when one element, the democratic, oligarchic-aristocratic, or kingly, rose to dominate all others. Constitutionally divided government was meant to keep this from happening and would therefore support the survival of the United States indefinitely.

Again, it is a somewhat bitter irony that the very divided nature of American government that was supposed to help the United States survive into the far future seems to be making it impossible for the political class in the US to craft solutions to the country’s quite serious long-term problems, and therefore might someday threaten the very survival of the country divided government was meant to secure.

Anyone interested in the question of the extended survival of their society, indeed of civilization itself, needs to take into account the work of Joseph A. Tainter and his The Collapse of Complex Societies (1988). Here the archaeologist Tainter not only provides us with a “science” that explains the mortality of societies; his viewpoint, I think, also gives us ways to think about, and insight into, the seemingly intractable social, economic, and technological bottlenecks that now confront all developed economies: Japan, the EU/UK, and the United States.

Tainter, in his Collapse, wanted to move us away from the vitalist ideas of the end of civilization seen in thinkers such as Oswald Spengler and Arnold Toynbee. We needed, in his view, to put our finger on the material reality of a society to figure out what conditions most often lead it to dissipate, i.e. to move from a more complex and integrated form, such as the Roman Empire, to a simpler and less integrated form, such as the isolated medieval fiefdoms that followed.

Grossly oversimplified, Tainter’s answer was a dry two-word concept borrowed from economics: marginal utility. The idea is simple if you think about it for a moment. Any society is likely to take advantage of “low-hanging fruit” first. The best land will be the first to be cultivated, the easiest resources the first to be exploited.

The “fruit”, however, quickly becomes harder to pick: problems become harder for a society to solve, which leads to a growth in complexity. Romans first tapped tillable land around the city, but by the end of the Empire the city needed a complex international network of trade and political control to pump grain from the distant Nile Valley into Rome.

Yet as a society deploys more and more complex solutions to problems, it becomes institutionally “heavy” (the legacy of all the problems it has solved in the past), just as problems become more and more difficult to solve. The result is that, at some point, the sheer amount of resources that need to be thrown at a problem to solve it is no longer available, and the only lasting solution becomes to move down the chain of complexity to a simpler form. Roman prosperity and civilization drew in the migration of “barbarian” populations in the north, whose pressures would lead to the splitting of the Empire in two and the eventual collapse of its Western half.
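For readers who like to see the logic laid bare, the dynamic can be put in a toy numerical sketch. This is not Tainter’s own model, just an illustration of the general shape of his argument: assume the benefit a society draws from added complexity grows with diminishing returns (a square root here, an arbitrary choice), while the cost of sustaining that complexity grows linearly. At some point each new “unit” of complexity costs more than it returns.

```python
# Toy illustration (not Tainter's model): diminishing returns on complexity.
# The benefit curve and cost rate below are invented for the sketch.
def net_benefit(complexity, benefit_scale=10.0, unit_cost=1.0):
    """Benefit minus maintenance cost at a given level of complexity."""
    return benefit_scale * complexity ** 0.5 - unit_cost * complexity

# The marginal return of each additional unit of complexity shrinks...
levels = range(1, 101)
marginals = [net_benefit(c + 1) - net_benefit(c) for c in levels]

# ...and eventually turns negative: past this point, added complexity
# costs the society more than it yields.
first_negative = next(c for c, m in zip(levels, marginals) if m < 0)
print(first_negative)  # → 25
```

With these made-up numbers the turning point arrives at a complexity level of 25; the specific number means nothing, but the existence of such a turning point, after which only simplification pays, is the heart of the argument.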

It would seem that we have broken through Tainter’s problem of marginal utility with the industrial revolution, but we should perhaps not judge so fast. The industrial revolution and all of its derivatives up to our current digital and biological revolutions, replaced a system in which goods were largely produced at a local level and communities were largely self-sufficient, with a sprawling global network of interconnections and coordinated activities requiring vast amounts of specialized knowledge on the part of human beings who, by necessity, must participate in this system to provide for their most basic needs.

Clothes that were once produced in the home of the individual who would wear them, are now produced thousands of miles away by workers connected to a production and transportation system that requires the coordination of millions of persons many of whom are exercising specialized knowledge. Food that was once grown or raised by the family that consumed it now requires vast systems of transportation, processing, the production of fertilizers from fossil fuels and the work of genetic engineers to design both crops and domesticated animals.

This gives us an indication of just how far up the chain of complexity we have moved, and I think it leads inevitably to the question of whether such increasing complexity might at some point stall for us, or even be thrown into reverse.

The idea that, despite all the whiz-bang! of modern digital technology, we have somehow stalled out in terms of innovation has recently gained traction. There was the argument made by the technologist and entrepreneur Peter Thiel, at the 2009 Singularity Summit, that the developed world faced real dangers of the Singularity not happening quickly enough. Thiel’s point was that our entire society was built around expectations of exponential technological growth that showed ominous signs of not happening. I only need to think back to my Social Studies textbooks in the 1980s and their projections of the early 2000s, with their glittering orbital and underwater cities, both of which I dreamed of someday living in, to realize our futuristic expectations are far from having been met. More depressingly, Thiel pointed out that all of our technological wonders have not translated into huge gains in economic growth and especially have not resulted in any increase in median income, which has been stagnant since the 1970s.

In addition to Thiel, there was the economist Tyler Cowen, who in his The Great Stagnation (2011) argued compellingly that the real root of America’s economic malaise was that the kinds of huge qualitative innovations seen in the 19th and early 20th centuries, from indoor toilets, to refrigerators, to the automobile, had largely petered out after the low-hanging fruit, the technologies easiest to reach using the new industrial methods, was picked. I may love my iPhone (if I had one), but it sure doesn’t beat being able to sanitarily go to the bathroom indoors, or keep my food from rotting, or travel many miles overland on a daily basis in mere minutes or hours rather than days.

One reason why technological change is perhaps not happening as fast as boosters such as the singularitarians hope, or as fast as our society perhaps needs in order to continue functioning in the way we have organized it, can be seen in the comments of the technologist, social critic, and novelist Ramez Naam. In a recent interview for The Singularity Weblog, Naam points out that one of the things believers in the Singularity, or others who hold to ideas regarding the exponential pace of technological growth, miss is that the complexity of the problems technology is trying to solve is also growing exponentially; that is, problems are becoming exponentially harder to solve. It is for this reason that Naam finds the singularitarians’ timeline wildly optimistic. We are a long, long way from understanding the human brain in such a way that it can be replicated in an AI.

The recent proposal of the Obama Administration to launch an Apollo-type project to understand the human brain, along with the more circumspect, EU-funded Human Brain Project/Blue Brain Project, might be seen as attempts to solve the epistemological problems posed by increasing complexity, and as responses to two seemingly unrelated technological bottlenecks stemming from complexity and the problem of diminishing marginal returns.

On the epistemological front the problem seems to be that we are quite literally drowning in data, but are sorely lacking in models by which we can put the information we are gathering together into working theories that anyone actually understands. As Henry Markram, the founder of the Blue Brain Project, stated:

So yes, there is no doubt that we are generating a massive amount of data and knowledge about the brain, but this raises a dilemma of what the individual understands. No neuroscientists can even read more than about 200 articles per year and no neuroscientists is even remotely capable of comprehending the current pool of data and knowledge. Neuroscientists will almost certainly drown in data the 21st century. So, actually, the fraction of the known knowledge about the brain that each person has is actually decreasing(!) and will decrease even further until neuroscientists are forced to become informaticians or robot operators.

This epistemological problem, which was brilliantly discussed by Noam Chomsky in an interview late last year, is related to the very real bottleneck in Artificial Intelligence, the very technology Peter Thiel thinks is essential if we are to achieve the rates of economic growth upon which our assumptions of technological and economic progress depend.

We have developed machines with incredible processing power, and the digital revolution is real, with amazing technologies just over the horizon. Still, these machines are nowhere near doing what we would call “thinking”. Or, to paraphrase the neuroscientist and novelist David Eagleman: the AI Watson might have been able to beat the very best human being at the game Jeopardy!, but what it could not do was answer a question obvious to any two-year-old, like “When Barack Obama enters a room, does his nose go with him?”

Understanding how human beings think, it is hoped, might allow us to overcome this AI bottleneck and produce machines that possess qualities such as our own or better- an obvious tool for solving society’s complex problems.

The other bottleneck a large-scale research project on the brain is meant to solve is the halted development of psychotropic drugs, a product of the enormous and ever-increasing costs of creating such products, itself a result of the complexity of the problem pharmaceutical companies are trying to tackle, namely: how does the human brain work, and how can we control its functions and manage its development? This is especially troubling given the predictable rise in neurological diseases such as Alzheimer’s. It is my hope that these large-scale projects will help to crack the problem of the human brain, especially as it pertains to devastating neurological disorders; let us pray they succeed.

On the broader front, Tainter identifies a number of solutions societies have come up with to the problem of marginal utility, two of which are merely temporary and one long-term. The first is for a society to become more complex, integrated, bigger. The old-school way to do this was through conquest, but in an age of nuclear weapons and sophisticated insurgencies the big powers seem unlikely to follow that route. Instead what we are seeing is proposals such as the EU-US free trade area and the Trans-Pacific Partnership, both of which appear to assume that the solution to the problems of globalization is more globalization. The second solution is for a society to find a new source of energy. Many might have hoped this would come in the form of green energy rather than in the form it appears to have taken: shale gas, and oil from the tar sands of Canada. In any case, Tainter sees both of these solutions as but temporary respites from the problem of marginal utility.

The only long-lasting solution Tainter sees for the problem of marginal utility is for a society to become less complex, that is, less integrated, more based on what can be provided locally than on sprawling networks and specialization. Tainter wanted to move us away from seeing the evolution of the Roman Empire into the feudal system as the “death” of a civilization. Rather, he sees the societies human beings have built as extremely adaptable and resilient. When the problem of increasing complexity becomes impossible to solve, societies move toward less complexity. It is a solution that strangely echoes that of the “immortal jellyfish”, the Turritopsis nutricula, the only path complex entities have discovered that allows them to survive into something that whispers eternity.

Image description: From the National Gallery in London. “The Cave of Eternity” (1680s) by Luca Giordano. “The serpent biting its tail symbolises Eternity. The crowned figure of Janus holds the fleece from which the Three Fates draw out the thread of life. The hooded figure is Demagorgon who receives gifts from Nature, from whose breasts pours forth milk. Seated at the entrance to the cave is the winged figure of Chronos, who represents Time.”

Finding Our Way in the Great God Debate, part 2

Last time, I tried to tackle the raging debate between religious persons and a group of thinkers who at least aspire to speak for science and who go under the name of the New Atheists. I tried to show how much of the New Atheists’ critique of religion was based on a narrow definition of what “God” is, that is, a kind of uber-engineer who designed the cosmos we inhabit and set up its laws. Despite my many critiques of the New Atheists, which have to do both with their often inflammatory language and, just as often, with their dismissal of and corresponding illiteracy regarding religion, I do think they highlight a very real philosophical and religious problem (at least in the West): the widely held religious concept of the world has almost completely severed any relationship with science, which offers the truest picture of what the world actually is that we have yet discovered.

In what follows I want to look at the opportunities this gap between religion and science offers for new kinds of religious thinking, but most especially for new avenues of secular thought that might manage, unlike the current crop of New Atheists, to hold fast to science while not discarding the important types of thinking and approaches to the human condition that have been hard won by religious traditions since the earliest days of humanity.

The understanding of God as a celestial engineer, the conception of God held by figures of the early scientific revolution such as Newton, was in some ways doomed from the start, predicated not only on a version of God so distant from the lives of average people that it would likely become irrelevant, but also on the failure of science to explain the natural world by its own methods alone. If science is successful at explaining the world, then a “God of the gaps” faces the likely fate that science eventually explains away any conceivable gap in which the need for such a God might be invoked. It is precisely the claim that physics has closed, or is on the verge of closing, the gap in scientific knowledge of what Pope Pius XII thought was God’s domain- the Universe before the Big Bang- that is the argument of another atheist who has entered the “God debate,” Lawrence Krauss, with his A Universe From Nothing.

In his book Krauss offers a compelling picture of the state of current cosmology, one I find fascinating and even wondrous. Krauss shows how the Universe may have appeared from quantum fluctuations out of what was quite literally nothing. He reveals how physicists are moving away from the idea of one Universe whose laws seem ideally tuned to give rise to life and toward a version of the multiverse: a vision in which an untold number of universes other than our own exist in an extended plane, or right next to one another on a so-called “brane” that will forever remain beyond our reach, each of which might have unique physical laws and even mathematics.

Krauss shows us not only the past but the even deeper time of the future, where our own particular Universe is flat and expanding so rapidly that it will, on the order of a few hundred billion years, be re-organized into islands of galaxies surrounded by the blackness of empty space, and how in the very long frame of time, trillions of years, our Universe will become a dead place inhospitable to life- a sea of the same nothing from whence it came.

A Universe from Nothing is a great primer on the current state of physics, but it also has a secondary purpose as an atheist tract- with the afterword written by none other than Dawkins himself. Dawkins, quite presumptuously I think, believes Krauss will banish God from questions regarding the beginning of the Universe and its order in the same way Darwin managed to banish God from questions regarding the origin of life and its complexity.

Many people, myself included, have often found the case made by the New Atheists to be in many ways as counterproductive for the future of science as the kind of theologization and politicization of science found among those trying to have Intelligent Design (ID) taught in schools, or those who, for what amounts to a political position, deny the weight of scientific evidence on a public issue with a clear consensus of researchers, e.g. climate change. This is because their rhetoric forces people to choose between religion and science, a choice that in a country as deeply religious as the United States would likely be to the detriment of science. And yet, perhaps in the long run the New Atheists will have had a positive effect not merely on public discourse but, ironically, on religion itself.

The New Atheists have encouraged a great “coming out” of fellow atheists and agnostics, enabling people in a religious society like the United States to openly express their lack of belief and their skepticism in a way that was perhaps never possible before. They have brought into sharp relief the incongruity of scientific truth and religious belief as it is currently held, and thus pushed scholars such as Karen Armstrong to look for a conception of God that isn’t, like the modern conception, treated as an increasingly irrelevant hypothesis held over from a pre-Darwinian view of the world.


Three of the most interesting voices to have emerged from the God Debate each follow seemingly very different tracks that, when added together, might point the way to the other side. The first is the religious scholar Stephen Prothero. Armstrong’s The Case for God helped inspire Prothero to write his God is Not One (2010). He was not so much responding to the religious/atheist debate as cautioning us against the perennialism at the root of much contemporary thought regarding religion. If the New Atheists went overboard with their claim that religion was either all superstitious nonsense, or evil, and most likely both, the perennialists, with whom he lumps Armstrong, went far too much to the other side, arguing, in their need to press for diversity and respect, the equally false notion that all religions preach in essence the same thing.


Prothero comes up with a neat formula for what a religion is. “Each religion articulates”:


  • a problem;
  • a solution to this problem, which also serves as the religious goal;
  • a technique (or techniques) for moving from this problem to this solution; and
  • an exemplar (or exemplars) who chart this path from problem to solution. (p. 14)

For Prothero, what religions share is, as the perennialists claim, the fundamentals of human ethics, but also a view that the human condition is problematic. The devil, so to speak, is in the details, for the world’s great religions have come up with very different identifications of what exactly this problematic feature is, and hence very different, and in more ways than we might be willing to admit, incompatible solutions to the problems they identify.

God is Not One is a great introduction to the world’s major religions, a knowledge I think is sorely lacking, especially among those who seem to have the most knee-jerk reactions to any mention of religious ideas. For me, one of the most frustrating things about the New Atheists in particular is their stunning degree of religious illiteracy. Not only do they paint all of Judaism, Christianity, and Islam with the same broad brush of religious literalism and fundamentalism, and have no grasp of religious history within the West, they also appear completely uninterested in trying to understand non-Western religious traditions, an understanding that might give them better insight into the actual nature of the human religious instinct, which is less about pre-scientific explanations for natural events than about an ethical orientation towards the world. Perhaps no religion can teach us as much in this regard as Confucianism. In the words of Prothero:

Unlike Christianity which drives a wedge between the sacred and the secular- the eternal “City of God” and the temporal “City of Man”- Confucianism glories in creatively confusing the two. There is a transcendent dimension in Confucianism. Confucians just locate it in the world rather than above or beyond it.

For all these reasons, Confucianism can be considered a religious humanism. Confucians share with secular humanists a single-minded focus on this world of rag and bone. They, too, are far more interested in how to live than in plumbing the depths of Ultimate Reality. But whereas secular humanists insist on emptying the rest of the world of the sacred, Confucians insist on infusing the world with sacred import- on seeing Heaven in humanity, on investing human beings with incalculable value, on hallowing the everyday. In Confucianism, the secular is sacred. (GN1 108)

Religions, then, are in large part philosophical orientations towards the world. They define the world, or better the inadequacies of the world, in a certain way and prescribe a set of actions in light of those inadequacies. In this way they might be thought of as embodied moral philosophies, and it is my second thinker, the philosopher Alain de Botton, who in his Religion for Atheists: A Non-believer’s Guide to the Uses of Religion tries to show just how much even the most secular among us can learn from religion.

What the secular can learn from religions, according to de Botton, has little to do with matters of belief, which in the light of modern science he too finds untenable. Yet the secular can learn quite a lot from religion in terms of institutions and practices. Religion often fosters a deep sense of community founded on a common view of the world. The religious often see education as less about the imparting of knowledge than about the formation of character. Because religion arose in part in response to the human need to live peacefully together in society, it tends to steer people away from violence and retribution and toward kindness and reconciliation. Much of religion offers us tenderness in times of tragedy and holds out the promise of moral improvement. It provides the basis for some of our longest-lived institutions, and embeds art and architecture in a philosophy of life.


Often a religion grants its followers a sense of cosmic perspective that might otherwise be missing from human life; it reminds us of our own smallness in light of all that is, and encourages an attitude of humility in the face of the depths of all we do not yet, and perhaps never will, know. It provides human beings with a sense of the transcendent, something emotionally and intellectually beyond our reach that stretches the human heart and mind to the very edge of what it can be and understand.

So if religion, then, is at root a way for people to ethically orient themselves to others and the world, even the most avowed atheist who otherwise believes that life can have meaning can learn something from it. Though one must hope that monstrous absurdities such as the French Revolution’s Cult of the Supreme Being, or the “miraculously” never-decomposing body of V.I. Lenin, are never repeated.

One problem, however, remains before we all rush out to the local Church, Temple, or Mosque, or start our own version of the Church of Scientology. It is that religion is untrue, or better: while much of the natural ethics found at the bottom of most religions is something to which we secularists can assent, because it was forged for the problems of human beings living together, and we still live together in societies, the ideas of the natural world and of the divinities that supposedly inhabit and guide it are patently false from a scientific point of view. There are two possible solutions I can imagine to this dilemma, both of which play off the ideas of my third figure, the neuroscientist and fiction author David Eagleman.

Like others, what frustrates Eagleman about the current God debate is the absurd position of certainty and the narrowness of perspective taken by both the religious and the anti-religious. Fundamentalist Christians insist that a book written before we knew what the Universe was, how life develops, or what the brain does somehow tells us all we really need to know, indeed that it poses all the questions we should ask. The New Atheists are no better when it comes to such a stance of false certainty, and seem to base many of their arguments on the belief that we are on the verge of knowing all we can know, and that the Universe(s) is already figured out except for a few loose ends.

In my own example, I think you can see this in Krauss, who not only believes we have now almost fully figured out the origins of the Universe and its fate, but also believes he has in his hand its meaning, which is that it doesn’t have one; to drive this home he avoids looking at the hundreds of billions of years between now and the Universe’s predicted end.

Take this almost adolescent quote from Dawkins’ afterword to A Universe From Nothing:

“Finally, and inevitably, the universe will further flatten into a nothing that mirrors its beginning.” What will be left of our current Universe will be “Nothing at all. Not even atoms. Nothing.” Dawkins then adds with his characteristic flourish: “If you think that’s bleak and cheerless, too bad. Reality doesn’t owe us comfort.” (UFN 188)

And here’s Krauss himself:

The structures we can see, like stars and galaxies, were all created in quantum fluctuations from nothing. And the average total Newtonian gravitational energy of each object in our universe is equal to nothing. Enjoy the thought while you can, if this is true, we live in the worst of all possible universes one can live in, at least as far as the future of life is concerned. (UFN 105)


Or, perhaps more prosaically, the late Hitchens:


For those of us who find it remarkable that we live in a universe of Something, just wait. Nothingness is headed on a collision course right towards us! (UFN 119)


Really? Even if the fate of the Universe is exactly as Krauss thinks it will be, and that’s a Big, Big If, what needs to be emphasized here, a point neither Krauss nor Dawkins seems prone to highlight, is that this death of the Universe is trillions of years in the future, and that there will be plenty of time for living: according to Dimitar Sasselov, hundreds of billions of years for life to evolve throughout the Universe, and thus for an unimaginable diversity of experiences to be had. Our Universe, at least at the stage we have entered, and for many billions of years longer than life on earth has existed, is actually a wonderful place for creatures such as ourselves. If one adds to that the idea that there may not be just one Universe but many, many universes, at least some of which perhaps have the right conditions for life to develop and last for hundreds of billions of years as well, you get a picture of diversity and profusion that puts even the Hindu Upanishads to shame. That’s not a bleak picture. It is one of the most exhilarating pictures I can think of, and perhaps in some sense even a religious one.


And the fact of the matter is we have no idea. We have no idea if the theory put forward by Krauss regarding the future of the Universe is ultimately the right one. We have no idea if life plays a large, small, or no role in the ultimate fate of the Universe; no idea if there is any other life in the Universe that resembles ourselves in terms of intelligence; no idea what scale- planet, galaxy, even larger- a civilization such as our own can achieve if it survives for a sufficiently long time, or how common such civilizations might become as the time frame in which the conditions for life and civilizations to appear and develop grows over the next hundred billion years or so. See the glass as empty, half empty, or potentially overflowing: it’s all just guesswork, even if Krauss’ physics makes it an extremely sophisticated and interesting guesswork. To think otherwise is to assume the kind of block-headed certainty Krauss reserves for religious fanatics.

David Eagleman wants to avoid the false certainty found in many of the religious and the New Atheists by adopting what he calls Possibilism. The major idea behind Possibilism is the same one, although Eagleman himself doesn’t make this connection, found in Armstrong’s pre-modern religious thinkers, especially the so-called mystics: the conviction that we are in a position of profound ignorance.

Science has taken us very, very far in terms of our knowledge of the world, and will no doubt take us much, much farther, but we are still probably at a point where we know unbelievably less than will eventually become known, and perhaps there are limits ahead of us beyond which we will never be able to pass. We just don’t know. In a way similar to how the mystics tried to lead a path to the very limits of human thought, beyond which we cannot pass, Possibilism encourages us to step to the very edge of our scientific knowledge and imagine what lies beyond our grasp.

I can imagine at least two potential futures for Possibilism, either one of which I find very encouraging. If traditional religion is to regain its attraction for the educated, it will probably have to develop and adopt a speculative theology that looks a lot like Eagleman’s Possibilism.

A speculative theology would no longer seek to find support for its religious convictions in selective discoveries of science, but would place its core ideas in dialogue with whatever version of the world science comes up with. This need not mean that any religion abandon its core ethical beliefs or practices, both of which were created for human beings at a moral scale that reflects the level at which an individual life is lived. The Golden Rule needs no reference to the Universe as a whole, nor do the rituals surrounding the life cycle of the individual- birth, marriage, and death- at which religion so excels.


What a speculative theology would entail is that religious thinkers would be free to attempt to understand their own tradition in light of modern science and historical studies without any attempt to use either science or history to buttress its own convictions.

Speculative theology would ask how its concepts can continue to be understood in light of the world presented by modern science, and would aim at being a dynamic, creative, and continuously changing theology, one that would better reflect the dynamic nature of modern knowledge, just as theology in the past was tied to a view of knowledge that was static and seemingly eternal. It might more easily tackle contemporary social concerns such as global warming, inequality, and technological change by holding the exact same assumptions about the nature of the physical world as science while remaining committed to interpreting science’s findings in light of its own ethical traditions and perspectives.

Something like this speculative theology already existed during the Islamic Golden Age, a period lasting from the 700s through the 1200s, in which Islamic scientists such as Ibn Sina (Avicenna) managed to combine the insights of ancient Greek, Indian, and even Chinese thinking to create whole new fields of inquiry and ways of knowing: tools from algebra to trigonometry to empiricism that Europeans would later build upon to launch the scientific revolution. The very existence of this golden age of learning in the Muslim world exposes Sam Harris’ anti-Islamic comment for what it is- ignorant bigotry.

Still, a contemporary speculative theology would have to be more expansive and self-aware than anything found among religious thinkers before. It would need to be an open system of knowledge, cognizant of its own limitations, that would examine and take into itself ideas from other religious traditions, even dead ones such as paganism, that add depth and dimension to its core ethical stance. It would be a vehicle through which religious persons could enter into dialogue and debate with persons from other perspectives, both religious and non-religious, who are equally concerned with the human future- humanists and transhumanists, singularitarians, and progressives- along with those who, with some justification, have deep anxieties regarding the future- traditional conservatives, bio-conservatives, neo-Luddites- and persons from every other spiritual tradition in the world. It would mean an end to that dangerous anxiety held by a great number of the world’s religions that each alone needs to be the only truth in order to thrive.

Yet, however beneficial such a speculative theology would be, I find its development out of any current religion highly unlikely. If traditional religions do not adopt something like this stance towards knowledge, I find it increasingly likely that persons alienated from them, as a consequence of the way their beliefs are contradicted by science, will invent something similar for themselves. For even if the Universe as presented by the New Atheists is devoid of meaning, the human need for meaning- to discuss it, argue over it, and try to live it- is unlikely ever to stop. The very quest seems written into our core. Hopefully, such dialogues will avoid dogma and be self-critical and self-conscious in a way religions and secular ideologies never were before. I see such discussions as more akin to works of art than religious treatises, though I hope they provide the basis for community and ethics in the same way religion has in the past.

And we already have nascent forms of such communities: the environmentalist cause is one, as is the transhumanist movement. So is something like The Long Now Foundation, which seeks to bring attention to the issues of the long-term human future and has even adopted the religious practice of pilgrimage with its 10,000-year clock- a journey meant to inspire deeper time horizons in our present-obsessed culture.

Even should such communities and movements become the primary way a certain cohort of scientifically inclined persons seeks to express the human need for meaning, there is every reason for those so inclined to seek out and foster relationships with the more traditionally religious, who are, and will likely always be, the vast majority of human beings. Contrary to the predictions of New Atheists such as Daniel Dennett, the world is becoming more, not less, religious, and Christianity is not a dying faith but one changing location, moving from its locus in Europe and North America to Latin America, Africa, and even Asia. Islam, even more so than Christianity, is likely to be a potent force in world affairs for some time to come, not to mention Hinduism, with India destined to become the world’s most populous country by mid-century. We will need all of their cooperation to address pressing global problems from climate change, to biodiversity, to global pandemics and inequality. And we should not think persons who profess such faiths ignorant for holding fast to beliefs that largely bypass the world brought to us by science. Next to science, religion is among the most amazing things to emerge from humanity, and those who seek meaning from it are entitled to our respect as fellow human beings struggling to figure out not the what? but the why? of human life.

Whether inside or outside of traditional religion, the development of scientifically grounded meaning discourses and communities, and the projects that would hopefully grow from them, would signal that wrestling with the “big questions” again offered a path to human transcendence, a path at long last no longer in conflict with the amazing Universe shown to us by modern science. Finding such ways of living would mean that we truly had found our way through the great God debate.

Finding Our Way Through The Great God Debate

“The way that can be spoken of / Is not the constant way; / The name that can be named / Is not the constant name.”

Lao Tzu, Tao Te Ching

Over the last decade a heated debate broke onto the American scene. The intellectual conflict between the religious and those who came to be called “the New Atheists” did, I think, bring something new into American society that had not existed before: open conflict between what is, at least at the top tier of researchers, a largely secular scientific establishment- or better, those who presumed to speak for it- and the traditionally religious. What emerged on the publishing scene and bestseller lists was not the kind of “soft-core” atheism espoused by previous science popularizers such as the late and much beloved Carl Sagan, but harsh, in-your-face atheists such as Sam Harris, Richard Dawkins, and another late figure, Christopher Hitchens.

On the one hand there was nothing “new” about the New Atheists. Science in the popular imagination had for some time been used as a bridge away from religion. I need only look at myself. What really undermined my belief in the Catholicism in which I was raised were books like Carl Sagan’s The Dragons of Eden and Cosmos, or to a lesser extent Stephen Hawking’s A Brief History of Time; even a theologically sounding book like Paul Davies’ God and the New Physics did nothing to shore up my belief in a personal God or in the philosophically difficult or troubling ideas, such as “virgin birth,” “incarnation,” “transubstantiation,” and “papal infallibility,” under which I was raised.

What I think makes the New Atheists of the 2000s truly new is that many science popularizers have lost the kind of cool indifference, or even openness to reaching out and explaining science in terms of religious concepts, that was found in someone like Carl Sagan. Indeed, their confrontational stance is part of their shtick and a sure way to sell books, in the same way titles like The Tao of Physics sold like hotcakes in the archaic Age of Disco.

For example, compare the soft, even if New Agey, style of Sagan to the kind of anti-religious screeds given by one of the New Atheist “four horsemen,” Richard Dawkins, in which he asks fellow atheists to “mock” and “ridicule” people’s religious beliefs. Sagan sees religion as existing along a continuum with science, an attempt to answer life’s great questions. Science may be right, but religion stems from the same deep human instinct to ask questions and understand, whereas Dawkins seems to see only a dangerously pernicious stupidity.

It is impossible to tease out who fired the first shot in the new conflict between atheists and the religious, but shots were fired. In 2004, Sam Harris’ The End of Faith hit the bookstores, a book I can still remember paging through at a Barnes & Noble, when there were such things, and thinking how shocking it was in tone. That was followed by Harris’ Letter to a Christian Nation and Richard Dawkins’ The God Delusion, along with Dawkins’ documentary about religion with the less than live-and-let-live title The Root of All Evil?, followed by Hitchens’ God is Not Great, among others.

What had happened between the easygoing atheism of the late Carl Sagan and the militant atheism of Harris’ The End of Faith was the tragedy of 9/11, which acted as an accelerant on a trend that had been emerging at least since Dawkins’ “Viruses of the Mind,” published way back in 1993. People who made the argument that religion was evil, or who singled out a specific religion as barbaric, would before 9/11 have been labeled intolerant or racist. Instead, in 2005 they were published in the liberal Huffington Post. Here is Harris:

It is time we admitted that we are not at war with “terrorism”; we are at war with precisely the vision of life that is prescribed to all Muslims in the Koran.

One of the most humanistic figures in recent memory, whose loss is thus even more to our detriment than the loss of either the poetic Sagan or the gadfly Hitchens, tried early on to prevent this conflict from ever breaking out. Stephen Jay Gould, as far back as 1997, tried to minimize the friction between science and religion with his idea of Nonoverlapping Magisteria (NOMA). If you read one link I have ever posted, please read this humane and conciliatory essay. The argument Gould makes in his essay is that there is no natural conflict between science and religion. Both possess their own separate domains: science deals with what is- the nature of the natural world- whereas religion deals with the question of meaning.

Science does not deal with moral questions because the study of nature is unable to yield meaning or morality on a human scale:

As a moral position (and therefore not as a deduction from my knowledge of nature’s factuality), I prefer the “cold bath” theory that nature can be truly “cruel” and “indifferent”—in the utterly inappropriate terms of our ethical discourse—because nature was not constructed as our eventual abode, didn’t know we were coming (we are, after all, interlopers of the latest geological microsecond), and doesn’t give a damn about us (speaking metaphorically). I regard such a position as liberating, not depressing, because we then become free to conduct moral discourse—and nothing could be more important—in our own terms, spared from the delusion that we might read moral truth passively from nature’s factuality.

Gould later turned his NOMA essay into a book, Rocks of Ages, in which he made an extensive argument that the current debate between science and religion is really a false one that emerges largely from a great deal of misunderstanding on both sides.

The real crux of any dispute between science and religion, Gould thinks, is the issue of what constitutes the intellectual domain of each endeavor. Science answers questions regarding the nature of the world, but should not attempt to provide answers to questions of meaning or ethics, whereas religion, if properly oriented, should concern itself with exactly these human-level meaning and ethical concerns. The debate between science and religion was not only unnecessary; the total victory of one domain over the other would, for Gould, result in a diminishment of the depth and complexity that emerge as a product of our humanity.

The New Atheists did not heed Gould’s prescient and humanistic advice. Indeed, he was already in conflict with some of those who would become the movement’s most vociferous figures- namely Dawkins and E.O. Wilson- even before the religion vs. science debate broke fully onto the scene, a consequence of Gould pushing back against what he saw as a dangerous trend towards reductionism among those applying genetics and evolutionary principles to human nature. The debate between Gould and Dawkins even inspired a book of its own, the 2001 Dawkins vs. Gould, published a year before Gould’s death from cancer.

Yet, for however much I respect Gould and am attracted to his idea of NOMA, I do not find his argument to be without deep flaws. One of the inadequacies of Gould’s Rocks of Ages is that it makes the case that the conflict between science and religion is merely a dispute pitting the majority of us, the scientifically inclined along with the traditionally religious, against marginal groups such as creationists and their equally fanatical atheist antagonists. The problem, however, may be deeper than Gould lets on.

Rocks of Ages begins with the story of the Catholic Church’s teaching on evolution as an example of NOMA in action. That story begins with Pius XII’s 1950 encyclical, Humani Generis, which declared that the theory of evolution, if it was eventually proven definitively true by science, would not be in direct contradiction to Church teaching. This was followed up (almost 50 years later, at the Church’s usual geriatric speed) by Pope John Paul II’s 1996 declaration that the theory of evolution was now firmly established as a scientific fact, and that Church theology would have to adjust to this truth. Thus, in Gould’s eyes, the Church had respected NOMA by eventually deferring to science on the question of evolution, whatever challenges evolution might pose to traditional Catholic theology.

Yet, the same Pope Pius XII who grudgingly accepted that evolution might be true used the scientific discovery of the Big Bang, one year later, to argue not for a literal interpretation of the book of Genesis, which hadn’t been held by the Church since Augustine, but for evidence of the more abstract concept of a creator that was the underlying “truth” pointed to in Genesis:

Thus, with that concreteness which is characteristic of physical proofs, it [science] has confirmed the contingency of the universe and also the well-founded deduction as to the epoch when the cosmos came forth from the hands of the Creator.

Hence, creation took place in time.

Therefore, there is a Creator. Therefore, God exists!

Pius XII was certainly violating the spirit, if not the letter, of NOMA here. He preserved the requisite scientific skepticism in regard to evolution until all the facts were in, largely because evolution posed challenges to Catholic theology, while jumping almost immediately on a scientific discovery that seemed to lend support to some of the core ideas of the Church- a creator God and creation ex nihilo.

This seeming inclination of the religious to ask science to keep its hands off when it comes to challenging their claims, while embracing science the minute it seems to offer proof of some religious conviction, is precisely the argument Richard Dawkins makes against NOMA in his The God Delusion, and I think rightly. For however much I otherwise disagree with Dawkins and detest his abusive and condescending style, I think he is right in his claim that NOMA only comes into play for many of the religious when science is being used to challenge their beliefs, not when science seems to lend plausibility to their stories and metaphysics. Dawkins and many of his fellow atheists, however, think this is a cultural problem, namely that the culture is being distorted by the “virus” of religion. I disagree. Instead, what we have is a serious theological problem on our hands, and some good questions to ask might be: what exactly is this problem? Was this always the case? And in light of this, where might the solution lie?

In 2009, Karen Armstrong- ex-nun, popular religious scholar, and, less well known, onetime vocal atheist- threw her hat into the ring in defense of religion. The Case for God, Armstrong’s rejoinder to the New Atheists, is a remarkable book.

At least one explanation of the theological problem we face is that a book like the Bible, when taken literally, is clearly absurd in light of modern science. We know that the earth was not created in 6 days, that there was no primordial pair of first parents, that the world was not destroyed by flood, and we have no proof of miracles, understood now as the suspension of the laws of nature through which God acts in the world.

The critique of religion by the New Atheists assumes a literal understanding either of scripture or of the divine figures behind its stories. Armstrong’s The Case for God sets out to show that this literalism is a modern invention. Whatever the case with the laity, among Christian religious scholars before the modern era the Bible was never read as a statement of scientific or historical fact. This, no doubt, is part of the reason its many contradictions and omissions- places where one story of an event is directly at odds with a different version of the exact same tale- were never seen as a problem.

The way in which medieval Christian scholars were taught to read the Bible gives us an indication of what this meant. They were taught to read first for the literal meaning and, once they understood it, to move to the next level to discover the moral meaning of a text. Only once they had grasped this moral meaning would they attempt to grapple with the true, spiritual meaning of a text. Exegesis thus resembled the movement of the spirit away from the earth and the body and upward into the transcendent.

The Christian tradition in its pre-modern form, for Armstrong then, was not attempting to provide a proto-scientific version of the world that our own science has shown to be primitive and false. The divine order to which it pointed was not some big bearded guy in the clouds, but was understood as the source and background of a mysterious cosmic order which the human mind was incapable of fully understanding.

The New Atheists often conflate religion with bad and primitive science we should have long outgrown: “Why was that man’s barn destroyed by a lightning bolt?” “As punishment from God.” “How can we save our crops from drought?” “Perform a rain dance for the spirits.” It must therefore be incredibly frustrating to the New Atheists for people to cling to such an outdated science. But, according to Armstrong, religion has never primarily been about offering up explanations for the natural world.

Armstrong thinks the best way to understand religion is as akin to art or music or poetry, not as proto-science. Religion is about practice, not belief, and its truths are only accessible to those engaged in such practices. One learns what being a Christian means by imitating Jesus: forgiving those who hurt you, caring for the poor, the sick, the imprisoned, viewing suffering as redemptive or “carrying your cross”. Similar statements can be made for the world’s other religions as well, though the kinds of actions one will pursue will be very different. This is, as Stephen Prothero points out in his God is not One, because the world’s diverse faiths have defined the problem of the human condition quite differently and therefore have come up with quite distinct solutions. (More on that in part two.)

How then did we become so confused regarding what religion is really about? Armstrong traces how Biblical literalism arose in tandem with the scientific revolution, and when you think about it this makes a lot of sense. Descartes, who helped usher in modern mathematics, wrote a “proof” for God. The giant of them all, Newton, was a Biblical literalist and thought his theory of gravity proved the existence of the Christian God. Whereas the older theology had adopted a spirit of humility regarding human knowledge, the new science that emerged in the 16th and 17th centuries was bold in its claims that a new way had been discovered which would lead to a full understanding of the world- including the God who, it was assumed, had created it. It shouldn’t be surprising that Christians, perhaps especially the new Protestants, who sought to return to the true meaning of the faith through a complete knowledge of scripture, would think that God and the Bible could be proved in the same way the new scientific truths were being proved.


God thus became a “fact” like other facts, and Westerners began the road to doubt and religious schizophrenia as the world science revealed showed no trace of the Christian God- or any other God(s)- only a cold, indifferent, and self-organizing natural order. To paraphrase Richard Dawkins’ The Blind Watchmaker: if God was a hypothetical builder of clocks, there is no need for the hypothesis now that we know how a clock can build itself.

Armstrong, thus, provides us with a good grasp of why Gould’s NOMA might have trouble gaining traction. Our understanding of God or the divine has become horribly mixed up with the history of science itself, with the idea that God could be “proven”- an assumption that helped launch the scientific revolution. Instead, what science has done in relation to this early modern idea of God as celestial mechanic and architect of the natural world is push this idea of God into an ever shrinking “gap” of knowledge where science (for the moment) lacks a reasonable natural explanation of events. The gap into which the modern concept of God got itself jammed, after Darwin’s theory of natural selection proved sufficient to explain the complexity of life, eventually became the time before the Big Bang- a hole Pope Pius XII eagerly jumped into, given that it seemed to match up so closely with Church theology. Combining his 1950-51 encyclicals, he seemed to be saying: “We’ll give you the origin of life, but we’ll gladly take the creation of the Universe!”

This was never a good strategy.

Next time I’ll try to show how this religious crisis might actually provide an opportunity for new ways of approaching age-old religious questions and open up new avenues for pursuing transcendence- a possibility that might prove particularly relevant for the community that goes under the label transhumanist.

God in the Age of Alien Earths


Four hundred and thirteen years ago to the day, on February 17, 1600, the mystic philosopher, and some would argue scientist, Giordano Bruno was handed over by the Catholic Church to the civil authorities in Rome and burned to death in the Campo de’ Fiori. Burning at the stake was a punishment that aimed at the annihilation of the person. The scattering of a body’s ashes, as Bruno’s were scattered into the Tiber river, was not, as many now look on similar rituals, a way of connecting a person forever to the fabric of a place they find for whatever reason sacred, but a way to cast a person out of the human world from which they came. It was the way “soulless” creatures such as animals were treated in death.

The “crimes” for which Bruno was executed revolved around his holding heretical beliefs, that is, ideas directly in contradiction to the teachings of the Church. The idea common today that Bruno was not so much a condemned religious heretic as the first martyr in the conflict between science and religion gained traction only in the late 19th century, and is, if not false, doubtless overly simplistic. Only one of the charges against Bruno dealt with a scientific rather than a religious question, or at the very least with what should have been considered a scientific hypothesis alone. Bruno was charged with believing in, in the words of the Inquisition, “the plurality of worlds”.

What was meant by the plurality of worlds? Primarily it meant Bruno had argued for the existence of more earths than our own- that there were many worlds with life and sentient creatures not merely similar but perhaps, given his belief in infinities, identical or only slightly different from those on our own earth. What we see when we witness Bruno trying to apply the new Copernican astronomy, along with his own ideas regarding infinity, to Christian theology is a Christian narrative stretched to the breaking point. Bruno wrote:

I can imagine an infinite number of worlds like the earth, with a Garden of Eden on each one. In all these Gardens of Eden, half the Adams and Eves will not eat the fruit of knowledge, but half will. But half of infinity is infinity, so an infinite number of worlds will fall from grace and there will be an infinite number of crucifixions. Therefore, either there is one unique Jesus who goes from one world to another, or there are an infinite number of Jesuses. Since a single Jesus visiting an infinite number of earths one at a time would take an infinite amount of time, there must be an infinite number of Jesuses. Therefore, God must create an infinite number of Christs.  *

What drew Bruno to the attention of the Inquisitors was, I think, less the fact that he contradicted scripture than the fact that he drew theological implications from the new view of the Universe provided by Copernicus, in light of his own ideas regarding infinity. The drawing of theological implications by someone outside the Church hierarchy was a matter about which the Church had become particularly defensive and aggressive in Bruno’s time, given that this was the very power that had slipped from its fingers with Martin Luther and lay at the root of much of its then raging conflict with Protestants.

Yet, this drawing of specifically Christian theological implications was something Bruno had to do, given the fact that any discussion of the question of cosmological meaning by someone living in a Christian society had to be conducted in Christian terms. What I mean by the phrase cosmological meaning is what might be called the “big questions”. They include questions that were answered almost solely by religion until the modern era but are now the prerogative of science, questions such as “what is the nature of the world?”, “what is its origin?”, “what is its ultimate fate?”, but also questions that really can’t be answered by science even today, questions such as “what is the role of humanity in the future of the Universe?”, or “what is our place in the cosmic scheme of things?”. At the time Bruno was writing there existed no real philosophical or other discourse, not tightly woven with Christian theology, which might address these sorts of questions- questions that had been reopened by the new science.

The fact that Bruno was more likely a theological rather than a scientific martyr can be seen in the experience of the thinker who spurred much of his brilliant imagination. No one had really cared when Copernicus, 60 years before Bruno, had shown that a heliocentric model of the solar system was the best way to predict the orbits of the planets. Church authorities did care when Galileo, in his 1610 book The Starry Messenger, was able to show not only that the Copernican theory was no mere piece of mathematical fancy-footwork with little relation to reality but an observable fact, but also that, as Bruno had argued, objects in the heavens were in some sense worlds like earth- the moon had mountains, and Jupiter was itself orbited by a series of moons. Yet this too might have been for different reasons than at first appear.

Yet, Galileo probably got himself in hot water with the Church as much for his pugnaciousness as for his science. He did, after all, name the dim-witted character who argues for the geocentric model in his book Simplicio, a character in whom it doesn’t take much imagination to see the Pope himself. In the end, Galileo fared far better than Bruno, not being burned, but merely confined to the limits of his home.

That the Church didn’t much care about philosophical arguments for life on other worlds so long as no theological implications were drawn, and that its brutal reaction to Bruno was driven in part by its then raging conflict with “heresies” throughout Europe, seems to be the conclusion one should draw from the reaction- or better, lack of reaction- to Bernard Le Bovier de Fontenelle’s Conversations on the Plurality of Worlds, published in 1686 after the religious wars had died down. The reaction to Fontenelle’s work, to which science-fiction writers ever since owe their lineage, was that no one batted an eye. No Catholic or Protestant theologian took note of the work. Despite its popularity, no one ever attempted to ban it or burn it, as had happened with the works of both Bruno and Galileo, and Fontenelle was never accused of heresy, tortured, put on trial, or imprisoned in his home or anywhere else.

Fontenelle’s Conversations on the Plurality of Worlds is a dialogue between a philosopher (presumably Fontenelle himself) and a marchioness, a noblewoman of refinement and leisure as such persons existed before the French monarchy went to rot, that takes place while the pair take a series of walks in her garden under the beauty of the night sky.

The first chapter in Conversations has the philosopher establish, despite all appearances, the reality of the Copernican heliocentric model of the solar system, whereas the rest is a superb piece of early science fiction in which the philosopher discusses what life might be like on earth’s moon and her sister planets.

The resolution of telescopes wasn’t all that great in Fontenelle’s day, though it was a heck of a lot better than the naked eye. It was thus easy to mistake smooth areas on the moon lacking mountains or craters for large bodies of water, which is why we have features like the Sea of Tranquility, mapped and named by the Jesuit scholars Giovanni Battista Riccioli and Francesco Maria Grimaldi.

What makes Fontenelle’s Conversations remarkable, despite its sometimes jarring Eurocentrism and the droll background of a caricature of a philosopher engaged in a flirtatious dance with a caricature of a noblewoman, is not that he projects the kind of life found on earth outward, but that he argues that we may be blinded to life’s possibility beyond earth by sticking too close to our own experience, and should therefore try to reason our way from the type of planet we are talking about to the kinds of life that might be found upon it.

Fontenelle’s philosopher in Conversations states that “the moon is an earth too and a habitable world” (p. 30). He speculates that our belief that it is lifeless no doubt resembles the ancient belief that there could be no life at the antipodes of the earth- its opposite side. Our position is somewhat akin to that of the American Indians who had no idea that whole civilizations existed over the oceans; someday, the philosopher imagines, we will be able to communicate with beings from the moon, whether the contact is initiated by us or by them.

The revolution in the way we looked into space thanks to the telescope, and the way we looked at the geography of the earth on account of Columbus and those who followed him, weren’t the only revolutions that upended the worldview of early modern Europe. There was also the new world brought to us by the invention of the microscope, and it is here that Fontenelle thinks we can glimpse just how deep the profusion of living forms in the Universe runs, especially forms that defy our expectations:

A mulberry leaf is a little world inhabited by multitudes of these invisible worms which to them is a country of vast extent. What mountains what abysses are there in it? (pp. 71-72)

Fontenelle, writing two centuries before evolution would become known, assumes that life will have to be suited to the environment in which it exists. In leaps we, with our knowledge, likely find ridiculous, but which are close in spirit to the truth of the matter, the philosopher speculates on what life might be like in environments very different from the earth: the arid moon bathed in long stretches of light and darkness, the hothouses of Mercury and Venus, the frigid enormity of Jupiter and Saturn. (For whatever reason he has no interest in Mars.)

The philosopher is overawed by the diversity of a Universe filled with life, and acknowledges that the stars are suns like our own. Around many of these stars planets will circle, although a diversity of types of solar systems will be found. The marchioness recoils from the scale of all this life and its ungraspable diversity; such richness makes her own life seem far too small. Yet the philosopher thinks this should not diminish our appreciation for the specialness of what is in front of us, an ability which has been implanted in us by nature.

“They cannot spoil fine eyes or a pretty mouth their value is still the same in spite of all the worlds that can possibly exist.” (p. 105)

As to life in the solar system, we, three and some centuries after Fontenelle, of course know better. Our exploration of the moons and planets that circle our sun along with us might be written as a tragic tale of the failure to find any life in our neighborhood other than our own. Fontenelle’s marchioness had anticipated a reason for this- the Goldilocks Zone, the fact that the earth appears to orbit the sun at just the right distance. Like the famous porridge, neither too hot nor too cold.

I am sure said the Marchioness we have one great convenience in the situation of our world it is not so hot as Mercury and Venus nor so cold as Jupiter or Saturn and our country is so temperately placed that we have no excess either of heat or cold I have heard of a philosopher who gave thanks to nature that he was born a man and not a beast a Greek and not a Barbarian and for my part I render thanks that I am seated in the mildest planet of the universe and in one of the most temperate regions of that planet. (p. 99)

Yet our own day might give new life to the dream of Fontenelle’s philosopher. This at least is the story one finds in another remarkable and much more recent book- Dimitar Sasselov’s The Life of Super-Earths: How the Hunt for Alien Worlds and Artificial Cells Will Revolutionize Life on Our Planet. Sasselov’s book lays out how astronomers have in the last decade identified a huge number of extra-solar planets and, through amazing instruments, especially NASA’s aptly named Kepler probe, are seeking out earth-like bodies in the Goldilocks Zone around other stars.

Many of the extrasolar planets so far discovered turn out to be something quite different from our earth, especially in terms of scale. They are much bigger than our own “big” blue marble, some only a little smaller than Uranus- hence the term “super-earth”. Sasselov tries to show how such planets might be even more ideally suited for life than the earth itself, which is almost as small as dead planets such as Mars- planets not large enough to hold complex atmospheres and engender internal processes such as plate tectonics- and is nowhere near the upper limit for big living planets, which might be water worlds of pure ocean, a limit closer to the size of Uranus. Planets much larger still, such as Saturn or Jupiter, are not good places for life to take hold, with too much atmospheric complexity and gravity for fragile life forms. We are Goldilocks here too, but we have baby bear’s bowl.

Life requires the kinds of complex elements that make up our own planet and, it is supposed, any super-earths. It is only recently, Sasselov argues, that the generations of stars in the Universe have aged enough to produce these necessary elements. Hence the silence of the Universe today likely reflects the fact that we are among the first to the party. But it will be a long celebration, with more than 10 times the current age of the Universe- 150 billion years- of excellent conditions for life ahead. Fontenelle would leap for joy at the diversity!

The Kepler probe is helping us find planets located in the Goldilocks Zone using the same method Galileo used for identifying Jupiter’s moons: locating transits, moments when an object passes in front of its parent body- in Kepler’s case, a planet in front of its parent star. It will take other techniques and instruments to gauge whether life is present on such planets once they are identified, but Sasselov is pretty confident that we are on the verge of finding life outside the solar system. Thus, in his view, we are about to complete the revolution started by Copernicus.
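To give a rough sense of what transit hunting involves, here is a minimal numerical sketch- my own illustration, not the Kepler pipeline, with the function names, box-shaped dip, and sample numbers all invented for the example. The key geometric fact is that a transiting planet dims its star by roughly the ratio of the two disks’ areas, which for an earth-sized planet crossing a sun-like star is less than one part in ten thousand.

```python
# Toy illustration of transit detection. A planet crossing its star
# blocks a fraction of light equal to the ratio of the two disks'
# areas: (R_planet / R_star) ** 2.

R_SUN = 696_340.0   # km, approximate solar radius
R_EARTH = 6_371.0   # km, approximate earth radius

def transit_depth(r_planet_km, r_star_km=R_SUN):
    """Fractional dimming of the star while the planet is in transit."""
    return (r_planet_km / r_star_km) ** 2

def light_curve(times, t0, duration, depth):
    """Synthetic light curve: constant flux with a box-shaped dip."""
    return [1.0 - depth if t0 <= t <= t0 + duration else 1.0 for t in times]

def find_dip(flux, threshold=0.99999):
    """Indices where the flux drops below the threshold."""
    return [i for i, f in enumerate(flux) if f < threshold]

if __name__ == "__main__":
    depth = transit_depth(R_EARTH)            # earth analogue: < 0.01% dimming
    times = [t * 0.1 for t in range(200)]     # 20 hours sampled every 0.1 h
    flux = light_curve(times, t0=7.95, duration=3.1, depth=depth)
    dip = find_dip(flux)
    print(f"depth = {depth:.6f}")
    print(f"transit spans samples {dip[0]}..{dip[-1]}")
```

The tiny depth for an earth analogue is why the method demands the photometric precision of a dedicated space instrument; the challenge Sasselov describes is largely one of measuring dips this small reliably.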

In one of those strange coincidences of intellectual history, Sasselov is proposing the use of updated versions of ideas and tools that can be found in Fontenelle’s Conversations to understand life beyond the solar system. Sasselov thinks we should be able to use the new synthetic biology, the heir to the philosopher’s microscope, to create in the lab life forms that might have evolved in any of the distinct worlds we find- life which, as Fontenelle claimed in much simpler form, will be different from life on earth because it has evolved (not his word) to match the conditions found there. Synthetic biology, Sasselov thinks, will allow us to reset the clock back to the very beginning of life and run experiments on how these life forms might have evolved given conditions different, in terms of composition, temperature, pressure, etc., from those on earth.

Should life be discovered on another planet in our near future, what might the impact be on traditional religion, especially in light of the current conflict between it and science? Or, to phrase the question differently, how might Bruno fare in a world where his alien earths had actually been found?

I, of course, have no way to know. My sincere hope, however, is that such a completion of the Copernican Revolution will have the effect of ending (or at least substantially decreasing) rather than exacerbating the conflict between the two domains. Right now, a good deal of the conflict between science and religion appears to be driven by the desire of religious authorities to use the language of science to make cosmological claims that support their own theological worldview, and by the overreaction of the so-called New Atheists to these claims. On the one hand you have the misnomer of “creation science”, which seeks to lend an air of scientific support to what are in reality religious convictions. On the other, you have the seemingly open-minded view of an institution such as the Catholic Church, which looks for a God that somehow resembles the God of its tradition in the “gaps” of knowledge science cannot currently explain- seeing, for instance, in the Big Bang evidence for a creator God who brought the world into being ex nihilo, out of nothing. This seemingly modern view inspired an entire book by the vocally atheist physicist Lawrence Krauss, who in his A Universe From Nothing aims to show that the Universe of contemporary physics leaves no room at all for a creator God in the Christian sense.

I have complained about the militant intolerance of the New Atheists elsewhere, but agree with at least this much of their critique: no religion should use the findings of science to back its theological claims, which in the end amount to a claim of exclusive say over the universal human questions of cosmological meaning. For example, followers of the theistic religions- Judaism, Islam, and Christianity- should avoid using scientific phenomena such as the Big Bang to argue for the existence of the creator that underlies their theology, because doing so theologically excludes the many faiths and philosophies that contain no concept of such a creator. Likewise, even if the findings of modern science appear to fit better with the concepts of a particular religion, say Buddhism, this does not mean that Buddhists can turn around and say that the moral, ethical, and existential bearings of a theistic religion such as Christianity are somehow false. This is because the cosmological claims of any religion are merely secondary features rather than primary ones. Religions are above all ethical and philosophical orientations towards life on a human scale. If one finds concepts such as charity or sin the best way to establish one’s bearings towards other human beings and the human condition more generally, then the truth of this can be found only in the living out of those concepts, not in the scientific reality of cosmological claims that were only secondary to the development of the faith in the first place.

As for these big questions, the ones that transcend our individual lives or even the life of our species, these questions of meaning should ideally involve a discussion all of us, secular and religious, scientist and non-scientist, can share- the “why?” and “where to?” questions that explore the issue of human purpose. These types of questions are not really resolvable, in the sense that they will never be answered in the way scientific questions are answered; religious beliefs and practices instead constitute one particular set of answers to the question of how a human being should live.

My hope is that the discovery of life elsewhere in the Milky Way, rather than resulting in continued conflict between religion and science (in which many of the religious play a role similar to “climate deniers” and view such an amazing scientific discovery as a cabal against their deeply held beliefs), will encourage the religious to back off from using cosmological justifications for their faith. Instead, perhaps they will realize that the world modern science has revealed is far more complex and surprising than anything their sacred books contain, and that their most fruitful path lies not in trying to square these two worlds and find “proofs” for their faith, but in ensuring their faith provides meaning and purpose for the human beings who live within it.

In turn, I hope the New Atheists, no longer able to believe that religion is somehow invading their intellectual turf, will stop demonizing and ridiculing religious persons, whose goal has always been, even if it has so often failed, and even if other non-religious paths to the same higher purpose exist, to be better human beings. It would be a world in which no Bruno would ever again be burned at the stake, be he scientist- or saint.

*This Bruno quote is reportedly from the Fifth Dialogue of his Cause, Principle and Unity, but I have not been able, while searching the text, to find it. Any help would be appreciated.

Big Brother, Big Data and The Forked Path Revisited

This week witnessed yet another example of the distortions caused by the 9/11 Wars on the ideals that underlie the American system of government, and of the ballooning powers and reach of the national security state. In a 16-page Justice Department memo obtained by the NBC News reporter Michael Isikoff, legal justifications were outlined for the extra-judicial killing of American citizens deemed to pose a “significant threat” to the United States. The problem here is who gets to define what such a threat is. The absence of any independent judicial process outside the executive branch that can determine whether the rights of an American citizen, including the presumption of innocence until proven guilty, can be stripped amounts to an enormous increase in executive power that will likely survive the Obama Administration. This is not really news, in that we already knew that extra-judicial killings (in the case of at least one known suspect, Anwar al-Awlaki) had already taken place. What was not known was just how sweeping, permanent, and without clear legal boundaries these claims of an executive right to kill American citizens absent proof of guilt actually were. Now we know.

This would be disturbing information if it stood alone, but it does not stand alone. What we have seen since the attacks on 9/11 is the spread of similarly disturbing trends, which have only been accelerated by technological changes such as the growth of Big Data and robotics. Given the news, I thought it might be a good idea to reblog a post I had written back in September where I tried to detail these developments. What follows is a largely unedited version of that original post.

………………………………………………………………………………

The technological ecosystem in which political power operates tends to mark out the possibility space for the kinds of political arrangements, good and bad, that can exist within it. Orwell’s Oceania and its sister tyrannies were imagined in what was the age of big, centralized media. Here the Party had under its control not only the older printing press, with the ability to craft and doctor, at will, anything created using print, from newspapers to government documents to novels; it also controlled the newer mediums of radio and film, and, as Orwell imagined, would twist those technologies around backwards to serve as spying machines aimed at everyone.

The question, to my knowledge, Orwell never asked was what the Party was to do with all that data. How was it to store, sift through, make sense of, or locate actual threats within the yottabytes of information that would be gathered by recording almost every conversation and filming or viewing almost every movement of its citizens’ lives? In other words, the Party would have run into the problem of Big Data. Many of the Orwellian developments since 9/11 have come in the form of the state trying to ride the wave of the Big Data tsunami unleashed with the rise of the internet, an attempt to create its own form of electronic panopticon.

In their book Top Secret America: The Rise of the New American Security State, Dana Priest and William Arkin of the Washington Post present a frightening picture of the surveillance and covert state that has mushroomed in the United States since 9/11: a vast network of endeavors that has grown to dwarf, in terms of the cumulative number of programs and operations, similar efforts during the unarguably much more dangerous Cold War. (TS 12)

Theirs is not so much a vision of an America with dark security services controlled behind the scenes by a sinister figure like J. Edgar Hoover as it is one of complexity gone wild. Priest and Arkin paint a picture of Top Secret America as a vast data-sucking machine, vacuuming up every morsel of information with the intention of correctly "connecting the dots" (TS 150) in the hopes of preventing another tragedy like 9/11.

So much money was poured into intelligence gathering after 9/11, into so many different organizations, that no one, not the President, nor the Director of the CIA, nor any other official, has a full grasp of what is going on. The security state, like the rest of the American government, has become reliant on private contractors who rake in stupendous profits. The same corruption that can be found elsewhere in Washington is found here. Employees of the government and the private sector spin round and round in a revolving door between the Washington connections brought by participation in the political establishment and big-time money in the ballooning world of private security and intelligence. Priest quotes one American intelligence official who had the balls to describe the incestuous relationship between government and private security firms as "a self-licking ice cream cone". (TS 198)

The flood of money that inundated the intelligence field after 9/11 has created what Priest and Arkin call an "alternative geography": companies doing covert work for the government that exist in huge complexes, some of which are large enough to contain their very own "cities" with shopping centers, athletic facilities, and the like. To these are added mammoth government-run complexes, some known and others unknown.

Our modern-day Winston Smiths, who work for such public and private intelligence services, are tasked not with the mind-numbing work of doctoring history, but with the equally superfluous job of repackaging the very same information already produced by another individual in another organization, public or private, with little hope that either would know the other was working on the same damned thing. All of this would be a mere tragic waste of public money that could be better invested in other things, but it goes beyond that by threatening the very freedoms these efforts are meant to protect.

Perhaps the pinnacle of the government's Orwellian version of a Google-FaceBook mashup is the gargantuan supercomputer data center in Bluffdale, Utah, built and run by the premier spy agency of the internet age: the National Security Agency, or NSA. As described by James Bamford for Wired Magazine:

In the process—and for the first time since Watergate and the other scandals of the Nixon administration—the NSA has turned its surveillance apparatus on the US and its citizens. It has established listening posts throughout the nation to collect and sift through billions of email messages and phone calls, whether they originate within the country or overseas. It has created a supercomputer of almost unimaginable speed to look for patterns and unscramble codes. Finally, the agency has begun building a place to store all the trillions of words and thoughts and whispers captured in its electronic net.

It had been thought that domestic spying by the NSA, under a super-secret program with the Carl Saganesque name Stellar Wind, had ended during the G.W. Bush administration, but if the whistleblower William Binney, interviewed in this chilling piece by Laura Poitras of the New York Times, is to be believed, the certainly unconstitutional program remains very much in existence.

The bizarre thing about this program is just how wasteful it is. After all, don't private companies such as FaceBook and Google already possess the very same kinds of data trails that would be provided by such obviously unconstitutional efforts as those at Bluffdale? Why doesn't the US government simply subpoena the internet and telecommunications companies that already track almost everything we do for commercial purposes? The US government, of course, has already tried to turn the internet into a tool of intelligence gathering, most notably with the stalled Cyber Intelligence Sharing and Protection Act, or CISPA. Perhaps it is building Bluffdale in anticipation that such legislation will fail, that however it is changed it might not be to its liking, or because it doesn't want to be bothered with the need to obtain warrants or with constitutional niceties such as our protection against unreasonable search and seizure.

If such behemoth surveillance instruments fulfill the role of the telescreens and hidden microphones in Orwell's 1984, then the role of the Spies, the only group in the novel whose name actually reflects what it is, children who watch their parents for unorthodox behavior and turn them in, is played today by the American public itself. In post-9/11 America it is local law enforcement, neighbors, and passersby who are asked to "report suspicious activity". People who actually do report suspicious activity have their observations and photographs recorded in an ominously named database that Orwell himself might have christened: Guardian. (TS 144)

As Priest writes:

Guardian stores the profiles of tens of thousands of Americans and legal residents who are not accused of any crime. Most are not even suspected of one. What they have done is appear, to a town sheriff, a traffic cop, or even a neighbor, to be acting suspiciously. (TS 145)

Such information is reported to, and initially investigated by, the personnel of another sort of data collector: the "fusion centers" created in every state after 9/11. These fusion centers are often located in rural states where their employees have virtually nothing to do. They tend to be staffed by persons without intelligence backgrounds, who instead hail from law enforcement, because those with even the bare minimum of foreign intelligence experience were sucked up by the behemoth intelligence organizations, both private and public, that have spread like mould around Washington D.C.

Into this vacuum of largely non-existent threats came "consultants" such as Ramon Montijo and Walid Shoebat, who lectured fusion center staff on a fantastical plot by Muslims to establish Sharia Law in the United States (TS 271-272), a story as wild as the concocted boogeymen of Goldstein and the Brotherhood in Orwell's dystopia.

It isn't only mosques or Islamic groups that find themselves spied upon by overeager local law enforcement and sometimes highly unprofessional private intelligence firms. Completely non-violent political groups, such as ones in my native Pennsylvania, have become the targets of "investigations". In 2009 the private intelligence firm the Institute of Terrorism Research and Response compiled reports for state officials on a wide range of peaceful political groups that included: "The Pennsylvania Tea Party Patriots Coalition, the Libertarian Movement, anti-war protesters, animal-rights groups, and an environmentalist dressed up as Santa Claus and handing out coal-filled stockings" (TS 146). A list just about politically broad enough to piss everybody off.

Like the fusion centers, or as part of them, data-rich crime centers such as the Memphis Real Time Crime Center are popping up all over the United States. Local police officers now suck up streams of data about the environments in which they operate and are able to pull that data together to identify suspects: today by scanning license plates, and soon enough by scanning human faces from a distance, as in Arizona, where the Maricopa County Sheriff's office was creating up to 9,000 biometric digital profiles a month (TS 131).

Sometimes crime centers used the information gathered for massive sweeps, arresting over a thousand people at a clip. The result was an overloaded justice and prison system that couldn't handle the caseload (TS 144) and, no doubt, as was the case in territories occupied by the US military, an even more alienated and angry local population.

From one perspective Big Data would seem to make torture more, not less, likely, as all information that can be gathered from suspects, whatever their station, becomes important in a way it wasn't before: a piece in a gigantic electronic puzzle. Yet technological developments outside of Big Data appear to point away from torture as a way of gathering information.

"Controlled torture", the phrase burns in my mouth, has always been a consequence of the unbridgeable space between human minds. Torture attempts to break through the wall of privacy we possess as individuals through physical and mental coercion. Big Data, whether of the commercial or security variety, hates privacy because privacy gums up its capacity to gather more and more information and become what it so desires: Even Bigger Data. The dilemma for the state, or in the case of the Inquisition, the organization, is that once the green light has been given to human sadism it is almost impossible to control. Torture, or the knowledge of torture inflicted on loved ones, breeds more and more enemies.

Torture's ham-fisted and outwardly brutal methods are today going hopelessly out of fashion. They are the equivalent of rifling through someone's trash or breaking into their house to obtain useful information about them. Much better to have them tell you what you need to know because they "like" you.

In that vein, Priest describes some of the new interrogation technologies being developed by the government and private security technology firms. One such technology is an "interrogation booth" that contains avatars with characteristics (such as an older Hispanic woman) that have been psychologically studied to produce more accurate answers from those questioned. There are ideas to replace the booth with a tiny projector mounted on a soldier's or policeman's helmet to produce the needed avatar at a moment's notice. There was also a "lie detecting beam" that could tell, from a distance, whether someone was lying by measuring minuscule changes on a person's skin. (TS 169) But if the security services demand transparency from those they seek to control, they offer no such transparency themselves. This is the case not only in the notoriously secretive nature of the security state, but also in the way the US government explains and seeks support for its policies in the outside world.

Orwell was deeply interested in the abuse of language, and I think here too the actions of the American government would give him much to chew on. Ever since the disaster of the war in Iraq, American officials have been obsessed with the idea of "soft power": the fallacy that resistance to American policy was a matter of "bad messaging" rather than of the policy itself. Sadly, this messaging was often far from truthful and often fell under what the government terms "influence operations", which, according to Priest:

Influence operations, as the name suggests, are aimed at secretly influencing or manipulating the opinions of foreign audiences, either on an actual battlefield- such as during a feint in a tactical battle- or within civilian populations, such as undermining support for an existing government or terrorist group (TS 59)

Another great technological development over the past decade has been the revolution in robotics, which, like Big Data, is brought to us by the ever-expanding information-processing power of computers, the product of Moore's Law.
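Moore's Law is the observation that transistor counts double roughly every two years, and the arithmetic behind that decade of change is simple to sketch. The starting figure below (roughly 42 million transistors, around a 2001-era desktop chip) is an assumption chosen only for illustration:

```python
# Illustrative sketch of Moore's Law: a fixed doubling period
# compounds a transistor count over time. The starting count and
# years are assumptions for the sake of the example.
def transistors(start_count, start_year, end_year, doubling_period=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** ((end_year - start_year) / doubling_period)

# Ten years at a two-year doubling period is five doublings: a 32x increase.
print(transistors(42e6, 2001, 2011) / 42e6)  # prints 32.0
```

Five doublings in a decade, a thirty-two-fold increase, is the kind of compounding that turned both mass surveillance and cheap robotics from fiction into budget line items.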

Since 9/11 multiple forms of robots have been developed, perfected, and deployed by the military, intelligence services, and private contractors, the most discussed and controversial of which are flying drones. It is with these and other tools of covert warfare, and in his quite sweeping understanding and application of executive power, that President Obama has been even more Orwellian than his predecessor.

Obama may have ended the torture of prisoners captured by American soldiers and intelligence officials, and he certainly showed courage and foresight in the assassination of Osama Bin Laden, for which the world can breathe a sigh of relief. The problem is that he has allowed, indeed propelled, the expansion of instruments of American foreign policy that are largely hidden from the purview and control of the democratic public. In addition to the surveillance issues above, he has put forward a sweeping and quite dangerous interpretation of executive power: the indefinite detention without trial found in the NDAA, the extrajudicial killing of American citizens, and the asserted prerogative, questionable under both the constitution and international law, to launch attacks, both covert and overt, on countries with which the United States is not officially at war.

In the words of Conor Friedersdorf of the Atlantic, writing on the unprecedented expansion of executive power under the Obama administration and comparing these very real and troubling developments to the paranoid delusions of right-wing nuts, who seem more concerned with fantastical conspiracy theories, such as the Social Security Administration buying hollow-point bullets:

… the fact that the executive branch is literally spying on American citizens, putting them on secret kill lists, and invoking the state secrets privilege to hide their actions doesn't even merit a mention [by the right wing].

Perhaps surprisingly, the technologies created in the last generation seem tailor-made for the new types of covert war the US is now choosing to fight. This can perhaps best be seen in the ongoing covert war against Iran, which has used not only drones but brand-new forms of weapons such as the Stuxnet worm.

The questions posed to us by the militarized versions of Big Data, new media, robotics, and spyware/computer viruses are the same as those these phenomena pose in the civilian world.

Big Data: does it actually provide us with a useful map of reality, or does it instead drown us in mostly useless information? In analogy to the question of profitability in the economic sphere: does Big Data actually make us safer?

New media: how is the truth to survive in a world where seemingly any organization or person can create their own version of reality? Doesn't the lack of transparency by corporations or the government give rise to all sorts of conspiracy theories in such an atmosphere, and isn't it ultimately futile, and liable to backfire, for corporations and governments to try to shape all these newly enabled voices to their liking through spin and propaganda?

Robotics: in analogy to the question of what it portends for the world of work, what is it doing to the world of war? Is robotics making us safer, or giving us a false sense of security and control? Is it engendering an over-readiness to take risks because we have abstracted away the very human consequences of our actions, at least in terms of the risks to our own soldiers?

Spyware and computer viruses: how open should our systems remain, given their vulnerability to those who would use this openness for ill ends?

At the very least, in terms of Big Data, we should have grave doubts. The kind of FaceBook from hell the government has created doesn't seem all that capable of actually pulling information together into a coherent, much less accurate, picture. Much like their less technologically enabled counterparts who missed the collapse of the Eastern Bloc and the fall of the Soviet Union, the new internet-enabled security services missed the world-shaking event of the Arab Spring.

The problem with all of these technologies, I think, is that they are methods for treating the symptoms of a diseased society rather than the disease itself. But first let me take a detour through Orwell's vision of the future of capitalist, liberal democracy, seen from his vantage point in the 1940s.

Orwell, and this is especially clear in his essay The Lion and the Unicorn, believed the world was poised between two stark alternatives: the Socialist one, which he defined in terms of social justice, political liberty, equal rights, and global solidarity; and a Fascist or Bolshevist one, characterized by the increasingly brutal actions of the state in the name of caste, both domestically and internationally.

He wrote:

Because the time has come when one can predict the future in terms of an “either–or”. Either we turn this war into a revolutionary war (I do not say that our policy will be EXACTLY what I have indicated above–merely that it will be along those general lines) or we lose it, and much more besides. Quite soon it will be possible to say definitely that our feet are set upon one path or the other. But at any rate it is certain that with our present social structure we cannot win. Our real forces, physical, moral or intellectual, cannot be mobilised.

It is almost impossible for those of us in the West who have been raised to believe that capitalist liberal democracy is the end of the line in terms of political evolution to remember that, within the lifetimes of people still with us (such as my grandmother, who tends her garden now the same way she did in the 1940s), this whole system seemed to have been swept into the dustbin of history and the future lay elsewhere.

What the brilliance of Orwell missed, the penetrating insight of Aldous Huxley in his Brave New World caught: that a sufficiently prosperous society would lull its citizens to sleep, and in doing so rob them both of the desire for revolutionary change and of their very freedom.

As I have argued elsewhere, Huxley's prescience may depend on the kind of economic growth and general prosperity that was the norm after the Second World War. What worries me is that if the pessimists are proven correct, if we are in for an era of resource scarcity, population pressures, stagnant economies, and chronic unemployment, then Huxley's dystopia will give way to a more brutal Orwellian one.

This is why we need to push back against the Orwellian features that have crept upon us since 9/11. The fact is we are almost unaware that we are building the architecture for something truly dystopian, and we should pause to think before it is too late.

To return to the question of whether the new technologies help or hurt here: it is almost undeniable that all of the technological wonders that have emerged since 9/11 are good at treating the symptoms of social breakdown, both abroad and at home. They allow us to kill or capture persons who would harm largely innocent Americans, or to catch violent or predatory criminals in our own country, state, and neighborhood. Where they fail is in getting at the root of the disease itself.

America would much better serve its foreign policy interests were it to better align itself with the public opinion of the outside world, insofar as we are able to maintain our long-term interests and continue to guarantee the safety of our allies. Much better than the kind of "information operation" supported by the US government to portray a corrupt, and now deposed, autocrat like Yemen's Ali Abdullah Saleh as "an anti-corruption activist" would be actual assistance by the US and other advanced countries in… I dunno… fighting corruption. Much better Western support for education and health in the Islamic world than the kinds of interference in the internal political development of post-revolutionary Islamic societies driven by geopolitical interest and practiced by the likes of Iran and Saudi Arabia.

This same logic applies inside the United States as well. It is time to radically roll back the Orwellian advances that have occurred since 9/11. The danger of the war on terrorism was always that it would become like Orwell's "continuous warfare", existing perpetually in spite of, rather than because of, the level of threat. We are in danger of investing so much in our security architecture, bloated to a scale that dwarfs our enemies, whom we have blown up in our own imaginations into monstrous shadows, that we are failing to invest in the parts of our society that will actually keep us safe and prosperous over the long term.

In Orwell's Oceania, the poor, the "proles", were largely ignored by the surveillance state. There is a danger that, with the movement of once-advanced technologies (drones, robots, biometric scanners, super-fast data-crunching computers, geo-location technologies) into the hands of local law enforcement, we will domestically move even further in the direction of treating the symptoms of social decay rather than dealing with the underlying conditions that propel it.

The fact of the matter is that the very equality, "the earthly paradise", a product of democratic socialism and technology, that Orwell thought was at our fingertips has retreated farther and farther from us. The reasons for this are multiple. To name just a few: financial concentration, automation, the end of the "low-hanging fruit" of industrialization and the high growth rates it brought, the crisis of complexity, and the problem of ever more marginal returns. This retreat, if it lasts, would likely tip the balance from Huxley's stupefaction by consumption to Orwell's more brutal dystopia, initiated by terrified elites attempting to keep a lid on things.

In a state of fear and panic we have blanketed the world with a sphere of surveillance, propaganda, and covert violence of which Big Brother himself would be proud. This is shameful, and it threatens not only to undermine our very real freedom but to usher in a horribly dystopian world bearing some resemblance to the one outlined in Orwell's dark imaginings. We must return to the other path.