Why the Global Brain needs a Therapist

Gaia Greek Mythology

The idea that the world itself could be considered an overarching form of mind traces its roots deep into the religious longings of pantheism: the idea that the universe itself is God, or the closest thing we will ever find to our conception of God. In large part, I find pantheists to be a noble group. Any club that counts among its members a philosophical giant like Spinoza, a paradigm-shattering genius such as Einstein, or a songbird like Whitman is one I would be honored to join myself. But alas, I have my doubts about pantheism, at least about its contemporary manifestation: the notion that our telecommunications and computer networks deserve the status of an embryonic “global brain”. I wish it were so, but all the evidence seems to point in the other direction.

Key figures in the idea that our communications networks might constitute the neural passageways of a great collective brain predate the Internet by more than a generation. The great prophet of sentience emerging from our ever-growing and intertwined communications networks was the Jesuit priest Pierre Teilhard de Chardin, who stated it this way:

We are faced with a harmonized collectivity of consciousness, the equivalent of a sort of super-consciousness. The idea is that of the earth becoming enclosed in a single thinking envelope, so as to form, functionally, no more than a single vast grain of thought on the cosmic scale …

A more dystopian take on this was brought to us via the genius of Arthur C. Clarke in his 1961 short story “Dial F for Frankenstein”, in which the telephone network “wakes up” and predictable chaos ensues. Only one year later the poetic and insightful Marshall McLuhan gave us a view mixing utopian and dystopian elements. We were weaving ourselves together into what McLuhan called a “global village”, filled with both the intimacy and the terror that were the hallmarks of pre-literate societies. Where de Chardin looked to evolution as the source of comparison for the emergence of his “Noosphere”, McLuhan looked to the human brain. We had, he thought, “extended our central nervous system in a global embrace, abolishing space and time as far as our planet is concerned.”

The phrase “global brain” itself would have to wait until the 1980s and the New Age philosopher Peter Russell. Russell pushed the analogy between telecommunications networks and the human brain even deeper, managing to fuse the two major views of the meaning of our telecommunications networks: de Chardin’s evolutionary analogy and McLuhan’s view of telecommunications networks as a nascent global brain. Russell saw the emergence of the human brain itself as an evolutionary leap toward ever greater interconnectivity, a property of the universe that had been growing at least since the appearance of life and was reaching an apogee with the new computer networks tying individuals together.

In a period of rising communications across the nascent Internet, Russell held that the roughly 10 billion neurons of the human brain represented a phase change in the evolution of consciousness, one that would be replicated when a comparable number of persons living on earth in the early 21st century were themselves connected to one another via computer networks, giving rise to a true “global brain”. With the age of the Internet just beginning, Russell would soon have company.

During the heady early 1990s, when the Internet was exploding into public consciousness, the idea that a global brain was emerging from the ether graced the pages of tech magazines such as Wired. There, journalists such as Jennifer Cobb Kreisberg could quote, without any hint of suspicion, Internet gurus like John Perry Barlow to the effect that:

We stand today at the beginning of Teilhard’s third phase of evolution, the moment at which the world is covered with the incandescent glow of consciousness. Teilhard characterized this as “evolution becoming conscious of itself.” The Net, that great collectivizer of minds, is the primary tool for our emergence into the third phase. “With cyberspace, we are, in effect, hard-wiring the collective consciousness,” says Barlow.

In 2002 Francis Heylighen of the Free University of Brussels could state his hopes for the emerging global brain this way:

The global brain will moreover help eliminate conflicts. It in principle provides a universal channel through which people from all countries, languages and cultures of this world can communicate. This will make it easier to reduce mutual ignorance and misunderstandings, or discuss and resolve differences of opinion. The greater ease with which good ideas can spread over the whole planet will make it easier to reach global consensus about issues that concern everybody. The free flow of information will make it more difficult for authoritarian regimes to plan suppression or war. The growing interdependence will stimulate collaboration, while making war more difficult. The more efficient economy will indirectly reduce the threat of conflict, since there will be less competition for scarce resources.

The Global Brain/Mind is one of those rare ideas in history that prove resilient whatever happens in the real world. 9/11 did not diminish Heylighen’s enthusiasm for the idea, which shouldn’t be surprising, because neither did anything that occurred in the decade following his 2002 essay: the invasion of Iraq, the global economic crisis, the failure to tackle world-impacting phenomena such as climate change, increasing tensions between states, rapidly climbing economic inequality, or the way in which early 21st-century global revolutions have played out to date. The hope that our networks will “wake up” and give rise to something like de Chardin’s “Omega Point” continues to be widely popular in technology circles, both in the Kurzweilian Singularity variety and in guises more aligned with traditional religious thinking, such as that expounded by the Christian Kevin Kelly in his recent book What Technology Wants.

Part of the problem with seeing our telecommunications networks, and especially the Internet, as an embryonic form of global brain is that the underlying idea of what exactly a brain is seems stuck in time and has not kept up with the findings of contemporary neuroscience. Although his actual meaning was far more nuanced, McLuhan’s image of a “global village” suggests a world shrunk to a comfortably small size where all human beings stand in the relation of “neighbors” to one another.

The meaning would have been much different had McLuhan chosen the image of a refugee camp, or of the city of Oran from Albert Camus’ novel The Plague. Like the community in Stephen King’s new TV production Under the Dome, Camus’ imagined Oran is hermetically sealed off from the outside world: a self-contained entity that is more a form of suffocation than of community.

Much more than McLuhan, thinkers such as Russell, Barlow and Heylighen see in the evolution of a single entity an increase in unity. The deepening of our worldwide communication networks will, in Heylighen’s words, “help eliminate conflicts” and “make it easier to reach global consensus”. But the idea that our networks are congealing into one entity embracing all the world’s peoples, along with the belief that this network’s development of self-awareness is the threshold event, both stem from an antiquated understanding of neuroscience. The version of the human brain that proponents of the global brain hope the world’s telecommunications networks will evolve into is a long-discredited picture from the 1960s, ’70s and ’80s. If the Internet and our other networks are evolving toward some brain-like state, it’s pretty important that we have an accurate picture of how the brain actually works.

As far as popular tours through the ganglia of contemporary neuroscience are concerned, none is perhaps better than David Eagleman’s Incognito: The Secret Lives of the Brain. In Incognito, Eagleman shows us how neuroscience has upended one of the deepest of Western assumptions: that of the unity of the self. Here he is in an interview on NPR’s Fresh Air:

EAGLEMAN: Yeah. Intuitively, it feels like there’s a you. So when somebody meets Terry Gross, they feel like: Oh, yeah, that’s one person. But in fact, it turns out what we have under the hood are lots of neural populations, lots of neural networks that are all battling it out to control your behavior.

And it’s exactly a parliament, in the sense that these different political parties might disagree with one another. They’re like a team of rivals in this way, to borrow Kearns Goodwin’s phrase of this. They’re like a team of rivals in that they all feel they know the best way to steer the nation, and yet they have different ways of going about it, just like different political parties do.

You need not be Ulysses, who had himself strapped to the mast of his ship so that he could both listen to the song of the Sirens and resist their murderous call, to have some intuitive sense of divisions within the self. Anyone who has resisted the urge to hit the snooze button one more time understands this. What is remarkable is how deep modern neuroscience has revealed these internal divisions to be. A person need not suffer from Dissociative Identity Disorder, with its multiple personalities, or be a drug addict. The internal rivalry over who we really are is manifest in every decision where we feel pulled in two or more directions at once.

A related idea that Eagleman conveys in Incognito is just how small a role self-awareness plays in the workings of the human brain. The vast majority of what the brain does happens outside the perception of consciousness. Indeed, one of the primary roles of self-consciousness is to learn new things only to bury them outside of conscious access, where further interference from the self-conscious brain would only end up screwing things up. Once you know how to play the piano, ride a bike, or tie your shoelaces, actually thinking about it is sure to turn you into a klutz. It’s not even that the self-conscious part of the brain is, like George W. Bush, “the decider” of our actions. It’s more like the news report of whatever neural faction within us has its hold on the reins of power. Like journalists in general, the self-conscious “I” thinks itself more important than those actually calling the shots.

What is perhaps surprising is that this updated version of the human brain actually does look a lot like the “global mind” we actually have, if not the one we want. The equivalent of the brain’s “neural populations” are the rivalrous countries, corporations, terrorist organizations, criminal groups, NGOs, cooperating citizens and others who populate the medium of the Internet. Rival, and not-so-rival, states, as in the recent case of the US spying on EU officials, use the Internet as a weapon of espionage and “cyber war”; corporations battle one another for market share; terrorist and criminal entities square off against states and each other; NGOs and some citizen groups try to use the Internet to leverage efforts to make the world a better place.

As for this global brain, such as it is, obtaining self-awareness: if self-awareness plays such a small role in human cognition, why should we expect it to be a defining feature of any “true” global brain? As Eagleman makes clear, the reason the human brain exhibits self-awareness is largely its capacity to learn new things. The job of self-awareness is to make what has been learned unconscious, like the pre-programmed instincts and regulatory functions of the body. Only once tasks are unconscious is their performance actually efficient. If any global brain follows the pattern of the one between our ears, then the more efficient it is, the less self-aware it will be.

This understanding of how the brain works has counter-intuitive, and I think largely unexplored, implications beyond the question of any global brain. Take the issue of uploading. The idea behind uploading is that our minds are a kind of software whose thoughts and memories can be uploaded, offering us a form of immortality. Uploading seems to necessitate its flip side, downloading, as well. In the future, if I want to learn how to play the oboe, instead of painful practice I will be able to download into my mind all the skills needed to play, and in a flash I’ll be belting out tunes like Jack Cozen Harel.

Our current understanding of the brain seems to throw a wrench into uploading and downloading as far as thoughts are concerned. For one, the plural nature of the brain leaves one wondering which of our neural populations would make it into the afterlife, and whether they should make it through as one entity at all. The gregarious part of myself has never liked the introverted bookworm. At death, or maybe before, why not make the divorce final and let the two go their separate ways?

Seeing the downloading of thoughts in light of contemporary neuroscience opens up other interesting questions as well. If our self-awareness is most in play when we are going through the tedious steps of learning something new, perhaps we should describe it as a low-bandwidth phenomenon. An increase in the efficiency of getting new things into our heads may come at the cost of self-awareness. The more machine-like we become, the less self-aware we will be.

Those are interesting questions for another time. To return to the global brain: Eagleman made use of Doris Kearns Goodwin’s characterization of Lincoln’s cabinet as “a team of rivals”. With the Civil War fresh in our memories, perhaps we could extend the analogy to say that recent efforts by countries such as China to de-internationalize, or de-Americanize, the Internet are something like the beginning of a movement to “secede” from the global brain itself. As Gara put it in a pre-publication review of Eric Schmidt and Jared Cohen’s The New Digital Age: Reshaping the Future of People, Nations and Business:

Ultimately, Schmidt and Cohen even foresee the possibility of the world’s countries deliberately breaking the Internet into several distinct Internets. According to Gara’s reading of the book, the authors “speculate that the Internet could eventually fracture into pieces, some controlled by an alliance of states that are relatively tolerant and free, and others by groupings that want their citizens to take part in a less rowdy and open online life.”

Whether such a fracturing would constitute the sort of deep loss Schmidt and Cohen present it as, or something much less dire, depends on how one values the global brain as it exists today. As I see it, our networks are nowhere near the sort of sentient and essential globe-spanning consciousness the global brain’s most vocal advocates wish them to be.

We do, however, have hints of what such a true global brain might look like in things like IBM’s Smart Cities initiative, which allows cities like New York and Rio to get instant feedback from their citizens along with a network of sensors, letting these cities respond accordingly and target their services.

Yet what makes something like Smart Cities work is that this feedback is plugged into services and governance with clear paths of response. A similar network of sensors and feedback systems spanning the earth itself would need somewhere to be plugged into. A brain needs a body, but where? As of yet, our tools of global governance are not even up to the information streaming from the limited global brain we have. Were they, such a brain would allow us, among much else, not just to monitor but to care for our earth, and to take on the responsibilities of the terrestrial, if not cosmic, adults we are.

As long as we hold that there is some degree of similarity between the network under our skulls and the network civilization we have been constructing since the first telegraph message in 1844 (“What hath God wrought?”), then we need to update our understanding of what a global brain would actually be. That understanding needs to be based both on current neuroscience and on where our networks themselves are trending, in addition to our commitment to actually heed what such networks might tell us. To do otherwise is to be blinded to the truth by our deep longing for a pantheist deity, hugging us close like the earth-mother of a child’s fable.

The Algorithms Are Coming!

Attack of the Blob

It might not make a great B movie from the late ’50s, but the rise of the algorithms over the last decade has been just as thrilling, spectacular, and, yes, sometimes even scary.

I was first turned on to the rise of algorithms by Kevin Slavin, founder of the gaming company Area/Code, and his fascinating 2011 TED talk on the subject. I was quickly drawn to one of Slavin’s illustrations of the new power of algorithms in the world of finance. Algorithms now control more than 70% of US financial transactions, meaning that the majority of decisions regarding the buying and selling of assets are now made by machines. I initially took, and indeed still take, the rise of algorithms in finance to be a threat to democracy. It took me much longer to appreciate Slavin’s deeper point: that algorithms have become so powerful that they represent a new third player on the stage of human experience: Nature-Humanity-Algorithms. First to finance.

The global financial system has been built around the electronic net we have thrown over the world. Assets are traded at nearly the speed of light, and the system rewards those best equipped to navigate it, granting stupendous profits to those with the largest processing capacity and the fastest speeds. Processing capacity means access to incredibly powerful supercomputers, but the question of speed is perhaps more interesting.

Slavin points out how the desire to shave a few milliseconds off trading time has led to the hollowing out of whole skyscrapers in Manhattan. We tend to think of the Internet as something that is “everywhere”, but it has locations of sorts: the carrier hotels, exchange data centers and other hubs through which its traffic is routed. The desire to get physically close to these hubs, and therefore to be able to move faster, has led not only to these internally re-configured skyscrapers, but to the transformation of the landscape itself.

By far the best example of the needs of algorithms shaping the world is the 825-mile fiber-optic trench dug from Chicago to New York by the company Spread Networks. The trench for this cable was cut straight through my formidable native Alleghenies rather than following, as ordinary communications lines do, the old railway routes.

Slavin doesn’t point this out, but the roughly 13-millisecond round trip enjoyed by those using this cable is only partially explained by its direct route between Chicago and New York. The cable is also “dark fiber”, meaning its traffic does not need to compete with other messages zipping through shared lines. It’s an exclusive line: the private jet of the web. Some alien archaeologist who stumbled across this cable would be able to read in it the economic inequality endemic to early 21st-century life. The Egyptians had pyramids; we have a tube of glass hidden under one of the oldest mountain ranges on earth.
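That 13-millisecond figure is easy to sanity-check from first principles: light in fiber travels at roughly two-thirds of its vacuum speed, so the cable’s 825-mile length sets a hard floor on the round-trip time. A quick back-of-the-envelope calculation (the constants are textbook physics, not Spread Networks’ own specifications):

```python
# Sanity check on the Chicago-New York cable's round-trip time.
# Light in glass fiber travels at roughly 2/3 the vacuum speed of light.
MILES_TO_KM = 1.60934
SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 of c (299,792 km/s)

route_km = 825 * MILES_TO_KM               # length of the direct route
one_way_ms = route_km / SPEED_IN_FIBER_KM_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"one way: {one_way_ms:.1f} ms, round trip: {round_trip_ms:.1f} ms")
# one way ~6.6 ms, round trip ~13.3 ms
```

The straight-line route matters precisely because every extra mile of glass adds an unavoidable five microseconds each way; no amount of computing power buys that back.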

Perhaps the best writer on the intersection of digital technology and finance is the Wall Street Journal’s Scott Patterson, with books like The Quants and, even more so, Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market. In Dark Pools, Patterson takes us right into the heart of the new algorithm-based markets, where a type of evolutionary struggle is raging that few of us are even aware of. There are algorithms that act as a type of “predator”, using their speed to outmaneuver slow-moving “herbivores” such as the mutual funds and pension funds in which the majority of us little guys, if we have any investments at all, have our money parked. Predators, because they can make trades milliseconds faster than these “slow” funds: they spot a change in market position, say the sale of a huge chunk of stock, and pounce, taking an advantageous position relative to the sale and leaving the slow mover with much less than it would have gained, or much more than it would have lost, had these lightning-fast piranhas not been able to strike.

To protect themselves, the slow-moving funds have not only deployed things like “decoy” algorithms to throw the predators off their trail, but have shifted much of their trading into non-public markets, the “dark pools” of Patterson’s title. Yet even these pools have become infected with predator algos. No environment is safe; the evolutionary struggle goes on.
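The predator’s edge is, at bottom, a race: spot a slow fund’s large sale as it begins, then trade ahead of the price impact the rest of that sale will cause. A toy sketch of that logic, with all prices, sizes and impact numbers invented purely for illustration:

```python
# Toy model of latency "predation": a slow fund sells in chunks, each chunk
# pushing the price down; a faster trader shorts at the price it observes
# when the first chunk lands, and covers after the sale has finished.
# All numbers are invented for illustration.

def simulate_sale(chunks=5, start_price=100.00, impact_per_chunk=0.05):
    price = start_price
    predator_entry = None
    for _ in range(chunks):
        if predator_entry is None:
            predator_entry = price      # predator reacts to the first chunk
        price -= impact_per_chunk       # market impact of each sold chunk
    edge_per_share = predator_entry - price  # profit from riding the move down
    return price, edge_per_share

final_price, edge = simulate_sale()
print(f"final price: {final_price:.2f}, predator edge: {edge:.2f}/share")
```

A quarter per share sounds trivial until it is multiplied across millions of shares and thousands of detected sales a day, which is why the milliseconds are worth skyscrapers and mountain trenches.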

However this ends, and it might end very badly, finance is not the only place where we have seen the rise of the algorithms. The books Amazon recommends for you, or the movies Netflix suggests might make for a good movie night, are all based on sophisticated algorithms’ picture of who you are. The same kinds of algorithms that try to “understand” you are used by online dating services, and even shape your interaction with the person on the other end of the line at customer service.
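Under the hood, these “algorithms about who you are” often start from something as simple as user-based collaborative filtering: find the customer whose past ratings most resemble yours, and suggest what they liked that you haven’t seen. A minimal sketch, with invented users, films and ratings:

```python
# Minimal user-based collaborative filtering: recommend items rated highly
# by the user whose tastes most resemble yours. All data is invented.
ratings = {
    "alice": {"Blade Runner": 5, "Alien": 4, "Notting Hill": 1},
    "bob":   {"Blade Runner": 5, "Alien": 5, "Moon": 4},
    "carol": {"Notting Hill": 5, "Amelie": 4, "Alien": 1},
}

def similarity(a, b):
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    # Inverse of the mean rating gap on films both users have rated.
    gap = sum(abs(ratings[a][f] - ratings[b][f]) for f in shared) / len(shared)
    return 1.0 / (1.0 + gap)

def recommend(user):
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    unseen = set(ratings[nearest]) - set(ratings[user])
    return sorted(unseen, key=lambda f: ratings[nearest][f], reverse=True)

print(recommend("alice"))
```

Real systems use far richer signals and far more data, but the principle is the same: you are profiled by your resemblance to everyone else.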

Christopher Steiner, in his Automate This, points out that the little ditty at the beginning of every customer service call, “this call may be monitored…”, is used not so much, as we might be prone to think, to gauge the performance of the person who is supposed to help you with your problem, as to add you to a database of personality types. Your call can then be guided to someone skilled in handling whatever personality type you have. Want no-nonsense answers? No problem! Want a shoulder to cry on? Ditto!
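Mechanically, this kind of call routing is little more than classify-then-look-up: infer a caller “type” from cues in what they say, then match them to an agent pool. A cartoon version, in which the categories, cue words and agent names are all invented:

```python
# Cartoon of personality-based call routing: classify a caller from verbal
# cues, then pick a matching agent pool. Categories and cues are invented.
AGENT_POOLS = {
    "no_nonsense": ["agent_7", "agent_12"],
    "empathetic":  ["agent_3", "agent_9"],
}

def classify(transcript: str) -> str:
    upset_cues = ("frustrated", "angry", "third time", "upset")
    if any(cue in transcript.lower() for cue in upset_cues):
        return "empathetic"
    return "no_nonsense"

def route(transcript: str) -> str:
    pool = AGENT_POOLS[classify(transcript)]
    return pool[0]  # a real system would also balance load across the pool

print(route("This is the third time I've called and I'm frustrated!"))
```

Production systems classify from voice features and purchase history rather than keywords, but the shape of the decision is the same lookup.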

The uber-dream of the big technology companies is to have their algorithms understand every element of our lives and “help” us make decisions accordingly. Whether or not “help” should actually be in quotes is something for us as individuals, and even more so as a society, to decide, the main questions being how much of our privacy we are willing to give up in order to have such smooth transactions, and whether this kind of guidance is a help or a hindrance to the self-actualization we all prize.

The company closest to achieving this algorithmic mastery over our lives is Google, as Steven Kovach points out in a recent article with the somewhat over-the-top title “Google’s plan to take over the world”. Most of us might think of Google as a mere search company that offers a lot of cool complements such as Google Earth. But, as its founders have repeatedly said, the ultimate goal of the company is to achieve true artificial intelligence: a global brain covering the earth.

Don’t think the state, which Nietzsche so brilliantly called “the coldest of all cold monsters”, hasn’t caught on to the new power and potential of algorithms. Just like Wall Street firms and tech companies, the state has seized on advances in artificial intelligence and computing power that allow the scanning of enormous databases. Recent revelations regarding the actions of the NSA should have come as no surprise. Not conspiracy theorists but reputable journalists, such as the Washington Post’s Dana Priest, had already informed us that the US government was sweeping up huge amounts of data about people all over the world, including American citizens, under a program with the Orwellian name of The Guardian. Reporting by James Bamford of Wired in March of last year had already informed us that:

In the process—and for the first time since Watergate and the other scandals of the Nixon administration—the NSA has turned its surveillance apparatus on the US and its citizens. It has established listening posts throughout the nation to collect and sift through billions of email messages and phone calls, whether they originate within the country or overseas. It has created a supercomputer of almost unimaginable speed to look for patterns and unscramble codes. Finally, the agency has begun building a place to store all the trillions of words and thoughts and whispers captured in its electronic net.

The NSA scandals have the potential to shift the ground under US Internet companies, especially ones such as Google whose business model and philosophy are built around the idea of an open Internet. Countries now have even more reason to be energetic in pursuing “Internet sovereignty”: the idea that each country should have the right and power to decide how the Internet is used within its borders.

In many cases, such as in Europe, this might serve to protect citizens against the prying eyes of the US security state, but we should not wave the flag of digitopia quite yet. There are likely to be many more instances of states using “Internet sovereignty” not to protect their people from US snoops, but to shield authoritarian regimes from the democratizing influences of the outside world. Algorithms, and the ecosystem of the Internet in which most of them exist, might be moving from being the vector of a new global era of human civilization to being just another set of tools in the arsenal of state power. Indeed, the use of algorithms as weapons, with the Internet as a means of delivery, is already well under way.

At this early date it’s impossible to know whether the revolution in algorithms will ultimately benefit tyranny or freedom. As of right now, I’d unfortunately have to vote for the tyrants. The increased ability to gather and sift huge pools of data has, as is well known, given authoritarian regimes such as China an ability to spy on their netizens that would make the more primitive totalitarians of the 20th century salivate. Authoritarians have even leveraged the capacities of commercial firms to increase their own power, a fact that goes unnoticed when people discuss the anti-authoritarian “Twitter Revolutions” and the like.

Such was the case in Tunisia during its revolution in 2011, where the state was able to leverage the power of a commercial company, Facebook, to spy on its citizens. Of course, resistance is fought through the Internet as well. As Parmy Olson points out in her We Are Anonymous, it was not the actions of the US government but one of the most politically motivated members of the hacktivist groups Anonymous and LulzSec, a man with the moniker “Sabu” (who later turned FBI informant), who launched a pushback against this authoritarian takeover of the Internet: evidence, if there ever was any, that hacktivism, even when it uses Distributed Denial of Service (DDOS) attacks, can be a legitimate form of political speech.

Yet, unlike in the movies, even the rebels in this story aren’t fully human. Anonymous’ most potent weapon, the DDOS attack, relies on algorithmic bots that infect or inhabit host computers and then strike at some set moment, causing a system to crash under the surge in traffic. Still, it isn’t the hacktivism of groups like Anonymous and LulzSec that should worry any of us, but the weaponization of the Internet by states, corporations and criminals.
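The arithmetic behind such an attack is brutally simple: a server has finite capacity, and once coordinated bot traffic pushes total demand past it, legitimate users get crowded out in proportion. A toy capacity model, with all numbers invented:

```python
# Toy model of a traffic surge: a server with fixed capacity serves requests
# indiscriminately, so legitimate users get only their proportional share
# once demand exceeds capacity. All numbers are invented.
def served_fraction(legit_requests, bot_requests, capacity):
    total = legit_requests + bot_requests
    if total <= capacity:
        return 1.0
    return capacity / total

print(served_fraction(1_000, 0, 10_000))       # normal day: everyone served
print(served_fraction(1_000, 99_000, 10_000))  # surge: only 10% get through
```

This is why defense is so hard: from the server’s vantage point a bot request and a human request look identical, and the only levers are more capacity or better filtering.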

Perhaps the first well-known example of a weaponized algorithm was the Stuxnet worm, deployed by the US, Israel, or both against the Iranian nuclear program. This was a very smart computer worm that could find and disable valuable pieces of Iran’s nuclear infrastructure, leaving one to wonder whether the algo wars on Wall Street are just a foretaste of a much bigger and more dangerous evolutionary struggle.

Hacktivist groups like Anonymous or LulzSec have made DDOS attacks famous. What I did not know, until I read Parmy Olson, is that companies are using botnets to attack other companies, as when Bollywood hired the company AiPlex to hit well-known copyright violators such as The Pirate Bay with DDOS attacks. What this in all likelihood means is that AiPlex infiltrated perhaps millions of computers (maybe your computer), unknown to their owners, to take down sites whose pirated materials you might never have viewed. Indeed, it seems the majority of DDOS attacks are little but apolitical thuggery: mobsters blackmailing gambling sites with takedowns on big betting days, and that sort of Sopranos-esque thing.

Indeed, the “black hats” of criminal groups are embracing the algorithmic revolution with abandon. A lot of this is just annoying: it’s algorithms that keep sending you all those advertisements about penis enlargement or unclaimed lottery winnings, but it doesn’t stop there. One of the more disturbing things I took away from Mark Bowden’s Worm: The First Digital World War is that criminals who don’t know the first thing about programming can buy “kits”, crime algorithms they can customize to, say, find and steal your credit card information by hacking into your accounts. The criminal behind this need only press a few buttons and voila! He’s got himself his very own cyber-burglar.

The most advanced of these criminal algorithms (though it might be the weapon of some state or terrorist group; we just don’t know) is the Conficker worm, the subject of Bowden’s book. Conficker not only infected millions of computers by exploiting a hole in Windows (can you believe it?!), but created the mother of all botnets: one capable of taking down large parts of the Internet if it chose to, yet which, for whatever reason, just sits there without doing a damned thing.

As for algorithms and less kinetic forms of conflict: the Obama campaign of 2012 combined the capability to sort huge data sets with the ability to sort individuals by psychological and social profile, the same mix we see being used by tech companies and customer service firms. Just like the telemarketers or CSRs, the Obama campaign was able to tailor its approach to the individual on the other end of its canvassing, amplifying its persuasive power. That such mobilizing prowess has not also led to an actual capacity to govern is another matter.

All this is dark, depressing stuff, I know, so I should end with a little light. As Steiner points out in Automate This, our newfound power to play with huge data sets and, in what sounds like an oxymoron, to customize automation promises a whole host of amazing benefits. One of these might be a 24/7 personal AI “physician” that monitors our health and drastically reduces medical costs: a real boon for underserved patients, whether in rural Appalachia or the developing world.

Steiner is also optimistic when it comes to artists. Advanced algorithms now allow buyers to link with sellers in a way that has never been possible before, and they’ll only get better. A movie company might be searching for a particular piece of music for its film; now, through a service like Music-Xray, the otherwise invisible musician can be found.

Here I have to put my pessimist cap back on for just a minute, for the idea that algorithms can help artists become viable is, as of this writing, just that: a hope. Sadly, there is little evidence for it in reality. This is a point driven home by the recent Atlantic Online article “The Reality of the Music Business Today: 1 Million Plays = $16.89”. The algorithm used by the Internet music service Pandora may have helped a million people find the musician David Lowery and his song “Low”, but its business model seems incapable of providing even successful musicians with meaningful income. The point that the economic model we have built around the “guy with the biggest computer” has been a bust for artists of all sorts is made most strongly by the virtual reality pioneer and musician Jaron Lanier. Let’s hope Lanier is ultimately wrong and algorithms eventually provide a way of linking artists with their patrons, but we are far, far from there yet. At the very least they should provide artists and writers with powerful tools to create their works.

There are more grounds for optimism. The advance of algorithms is one of the few lit paths out of our current economic malaise. Their rise appears to signal that the deceleration in innovation, which emerged from the gap between the flood of information we could gather, the new discoveries we were making, and our ability to model those discoveries coherently, may be coming to an end almost as soon as it was identified. Advanced algorithms should allow us to build potent and amazing new models of the natural world. In the long run they may allow us to radically expand the artistic, philosophical and religious horizons of intelligence, creating visions of the world of which we can today barely dream.

On a more practical and immediate level, advanced algorithms that can handle huge moving pieces of information seem perfect for tasks like responding to a natural disaster or managing the day to day flows of a major city, such as routing traffic or managing services- something IBM is pioneering with its Smarter Cities projects in New York City and Rio de Janeiro.

What algorithms are not good at, at least so far, is expanding their horizon beyond the immediate present to give us solutions for the long term political, economic and social challenges we confront. We can see this in everything from the Obama campaign in light of its political aftermath, to the war on terrorism, to the 300,000 person protests in Rio this past week despite how “smart” the city is. Instead they act merely as a globe-sized amplifier of grievances, able to bring down governments but not to create a lasting political order. To truly solve our problems we still need the mother of all bots: collective human intelligence. I am still old fashioned enough to call it democracy.

Capitalism, Evolution and the Attack of the Giant Fungus

Armillaria ostoyae

One of the stranger features of our era is its imaginative exhaustion in terms of the future, which I realize is a strange thing to say here. This exhaustion is not so much of an issue when it comes to imagining tomorrow’s gadgets or scientific breakthroughs, but becomes apparent once the question of the future political and economic order is at stake. In fact, the very idea that something different will almost inevitably follow the institutions and systems we live in seems to have retreated from our consciousness at the very time when the endemic failures of our political and economic order have shown that the current world cannot last.

Whatever the failures of government in Washington, no serious person is discussing an alternative to the continued existence of the United States or its constitutional form of government, now over two centuries old. The situation is even more pronounced when it comes to our capitalist economic system, which has taken root almost everywhere and managed to outlive all of its challengers. Discussions about the future economy are rarely about what might succeed capitalism but merely about ironing out its contradictions so that the system itself can continue to function.

It’s not just me saying this. Here is the anthropologist and anarchist philosopher David Graeber in his wonderful Debt: The First 5,000 Years on our contemporary collective brain freeze when it comes to thinking about what a future economy might be like:

It’s only now, at the very moment when it’s becoming increasingly clear that current arrangements are not viable, that we suddenly have hit the wall in terms of our collective imagination.

There is very good reason to believe that, in a generation or so, capitalism itself will no longer exist-most obviously, as ecologists keep reminding us, because it’s impossible to maintain an engine of perpetual growth forever on a finite planet, and the current form of capitalism doesn’t seem to be capable of generating the kind of vast technological breakthroughs and mobilizations that would be required for us to start finding and colonizing any other planets. Yet faced with the prospect of capitalism actually ending, the most common reaction-even from those who call themselves “progressives”-is simply fear. We cling to what exists because we can no longer imagine an alternative that wouldn’t be even worse. (381-382)

There are all sorts of reasons why our imagination has become stuck. To begin with, the only seemingly viable alternative to capitalism- state communism- proved itself a failure in the 1980’s when the Soviet Union began to kick the bucket. Even the socialist alternatives to capitalism were showing their age by then and began pulling back from any sort of direct management of the economy. Then there is one word- China- which embraced a form of state capitalism in the late 1970’s and never looked back. To many of the new middle class in the developing world the globalization of capitalism appears a great success and can be credited with moving millions out of poverty.

Yet capitalism has its problems. There is not only the question of its incompatibility with survival on a finite earth, as Graeber mentions; there are its recurrent financial crises, its runaway inequality, its endemic unemployment in the developed world and its inhuman exploitation in the developing one. One would have thought that the financial crisis would have brought some soul searching to the elites and a creative upsurge in thinking about alternative systems, but, alas, it has not happened except among anarchists like Graeber and the short-lived Occupy movement he helped inspire, and old school unrepentant communists such as Slavoj Zizek.

At least part of our imaginative atrophy can be explained by the fact that capitalism, like all political-economic systems before it, has managed to enmesh itself so deeply into our view of the natural world that it’s difficult to think of it as something we ourselves made and hence can abandon or reconfigure if we wanted to. Egyptian pharaohs, Aztec chieftains, and Chinese emperors all made claims to rule that were justified as reflections of the way the cosmos worked. The European feudal order that preceded the birth of capitalism was based on an imagined chain of being that stretched from the peasant in his field to the king on his throne through the “angelic” planets to God himself- out there somewhere in the Oort Cloud.

The natural order that capitalism is thought to reflect is an evolutionary one, which amounts to a bias against design and control. Like evolution, the “market” is thought to be wiser than any intentional attempt to design, steer or control it could ever be. This is the argument one can find in a 19th century social Darwinist like Thomas Huxley, a 20th century iconoclast like Friedrich Hayek, or a 21st century neo-liberal like Robert Wright, all of whom see in capitalism a reflection of biological evolution in that sense. In the simplest form of this argument, evolution pits individuals against one another in a competition to reproduce, with only the fittest individuals able to get their genes into the next generation. Capitalism pits producers and sellers against others dealing in similar products, with only the most efficient able to survive. History seemed to provide the ultimate proof of this argument as the command economy of the Soviet Union imploded in the 1980’s and country after country adopted some sort of pro-market system. The crash, however, should have sparked some doubts.

The idea that the market is a social version of biological evolution has some strong historical roots. The late Stephen Jay Gould, in his essay “Of bamboos, cicadas and the economy of Adam Smith”, drew our attention to the fact that this similarity between evolution and capitalism might hold not because the capitalist theory of economics emerged under the influence of the theory of evolution but the reverse: the theory of evolution was discovered when it was because Darwin was busy reading the first theorist of capitalism- Adam Smith. I am unable to find a link to the essay, but here is Gould explaining himself.

The top down mercantilist economy Smith attacked in his Wealth of Nations, according to Gould, must have seemed to Darwin like the engineering God of William Paley in his Natural Theology. Paley was the man who gave us the analogy of God as “watchmaker”: if you found a watch on the beach and had never seen such a thing before, you could reasonably assume it was designed by a creature with intelligence. We should then reason from the intricate engineering of nature that it was designed by a being of great intelligence.

Adam Smith was dealing with a whole other sort of question- how do you best design and manage an economy? Smith argued that the best way to do this wasn’t to design it from the top down, but to let the profit motive loose, from which an “invisible hand” would bring the best possible economic order into being. In the free market theory of Smith, Darwin could find a compelling argument against Paley. The way you arrived at the complex order of living things was not to design it from on high but to let the struggle for reproduction loose, and from an uncountable number of failures and successes would emerge the rich tapestry of life which surrounds us. In the words of a much later book on the topic by Richard Dawkins, the “designer” of nature was a Blind Watchmaker.

The problem with thinking our current economic system reflects the deep truth of evolution is not that the comparison lacks a grain of truth, and it certainly isn’t the case that the theory of evolution is untrue or is likely to be shown to be untrue as something like the Great Chain of Being that justified the feudal order was eventually shown to be untrue. Rather, the problem lies with the particularly narrow version of evolution with which capitalism is compared and the papering over of the way evolution often lacks the wisdom of something like Smith’s “invisible hand”.

Perhaps we should borrow another idea from Gould if we are to broaden the analogy between evolution and capitalism. Gould pioneered a way of understanding evolution known as punctuated equilibrium. In this view evolution does not proceed gradually but in fits and starts: periods of equilibrium, in which evolutionary change grinds to a halt, are ended by periods of rapid evolutionary change driven by some disequilibrating event- say a rapid change in climate or the mass appearance of new species, as in the Columbian Exchange. This is then followed by a new period of equilibrium after species have evolved to best meet the new conditions, or gone extinct because they could not adapt, and so on.

The defining feature of late capitalism, or whatever you choose to call it, is that it is unable to function under conditions of equilibrium, or better, that its goal of ever increasing profits is incompatible with the kinds of equilibrium found in mature economies. This is part of the case the financial journalist Chrystia Freeland makes in her engaging book Plutocrats. The fact that so much Western money is now flowing into the developing world stems from the reality that the rapid transformation in such places makes stupendous profits possible. Part of this plethora of potential profits arises from the fact that areas such as the former Soviet Union, China, and countries that have undergone neo-liberal reforms- like India- are virgin territories for capitalist entrepreneurs. As Rosa Luxemburg pointed out during the last great age of globalization at the end of the 19th and beginning of the 20th century, capitalism was the first system “that was calculated for the whole earth”.

To return to the analogy with evolution, it is like the meeting of two formerly separated ecosystems, only one of which has undergone intense selective pressures. Capitalist corporations, whether Western or imitated, are the ultimate invasive species in areas that formerly lived under the zoo-like conditions of state socialism. In the mature economies such as those of the United States, Europe, and Japan, the kinds of disequilibrium which lead to the ever increasing profits at the root of capitalism have come in two very different forms- technological change and deregulation. The revolution in computers and telecommunications has been a source of disequilibrium upending everything from entertainment to publishing to education. In the process it has given rise to the sorts of economic titans, and sadly inequality, seen in similar periods of upheaval. We no longer have Andrew Carnegie, but we do have Bill Gates. Standard Oil is a thing of the past, but we have Google and Facebook and Amazon.

The transformation of society that has come with such technological disequilibrium is probably, on net, a positive thing for all of us. But we have also engendered self-inflicted disequilibrium without clear benefit to the larger society. The enormous growth in the profits and profile of the financial industry came on the back of the dismantling of Depression era controls, making financiers and financial institutions into the wealthiest segment of our society. We know where that got us. It is as if a stable, if staid, island ecosystem suddenly invited upon itself all sorts of natural disasters in the hope of jump starting evolution and got instead little but mutants that threaten to eat everything in sight until the island becomes a wasteland. Late capitalism is like evolution only if we redact the equilibrium from punctuated equilibrium. It is we ourselves who have taken to imposing the kinds of stresses that upend the economy into a state of permanent disequilibrium.

The capitalism/evolution analogy also only works under conditions of a near perfect market, where the state or some other entity not only preserves free competition at its heart but intervenes to dismantle corporations once they get too large. Such interference is akin to the balancing effect of predation against plants or animals that exhibit such rapid reproduction that, if the majority of them were not quickly eaten, they would consume entire ecosystems. Such is the case with the common aphid, which if left to its own devices would have one individual producing 1,560,000,000,000,000,000,000,000 (1 septillion, 560 sextillion) offspring!
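To get a feel for how fast such unchecked reproduction compounds, here is a minimal sketch. The per-generation figures are purely illustrative assumptions of my own (roughly 100 surviving offspring per aphid, over 12 parthenogenetic generations in a season), chosen only to show how quickly repeated multiplication reaches the 10^24 scale cited above.

```python
# Illustrative sketch of unchecked exponential reproduction.
# Both constants are assumptions for illustration, not the essay's data.
OFFSPRING_PER_GENERATION = 100  # assumed surviving offspring per aphid
GENERATIONS = 12                # assumed generations in a season

population = 1  # start from a single aphid
for _ in range(GENERATIONS):
    population *= OFFSPRING_PER_GENERATION

print(f"{population:.2e}")  # 1.00e+24, the order of magnitude cited above
```

The point of the toy model is simply that a dozen rounds of hundredfold multiplication already yield a number of the order quoted in the text, which is why predation (or antitrust) matters: without some external check, compound growth swallows the whole ecosystem.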

The balance of nature is a constraint that every species is desperate to break out of, just as at the root of every corporation lies the less than secret wish to eliminate all of its competition. Most of the time predation manages to prevent the reproductive drive of any one species from threatening the entire ecosystem, but sometimes it fails. This is the case with the giant fungus Armillaria ostoyae, whose relentless growth kills the trees above it and smothers the diverse forest ecosystem from which it had emerged.

We can complain that the failure of the government to break up giant corporations has let loose the likes of Armillaria ostoyae. Calls to dismantle the big banks after the financial crisis fell on deaf ears. Big banks and mega-corporations can now point to their global presence and competition against other behemoths to justify their survival. We couldn’t dismantle Google if we wanted to, because everything left would be swallowed by Baidu.

And it’s not only that the government is failing to preserve market competition by letting companies get too big, it’s also distorting the economic ecosystem to favor the companies that are already there. The corruption of democracy through corporate lobbying has meant that the government, to the extent that it acts at all, is not preserving free competition but distorting it. To quote from Plutocrats:

“Most lobbying seeks to tilt the playing field, in one direction or another, not level it.” (262)

Another, and for my part much more galling, oversight of the capitalism/evolution analogy is that it tends to treat any attempt at design, guidance or intention on the part of the society at large as somehow “unnatural” interference in what would otherwise be a perfectly balanced system. What this position seems to conveniently forget is that the discovery of Natural Selection didn’t somehow lead to the end of Artificial Selection- quite the opposite. We don’t just throw a bunch of animals in a room and cross our fingers that some miracle of milk or egg production will result. What we do is meticulously shape the course of evolution using constraints based on our hoped-for result.

It is we who have established the selective criteria of maximizing profits and growth as the be all and end all of a corporation’s existence, when we could have chosen a much different set of selection criteria that would give rise to completely different sorts of economic entities. Governments already do this when they force industries to comply with constraints such as health and safety or environmental requirements. Without these constraints we get the evolution of economic entities focused on maximizing profits and growth alone, man-made creatures which, giant-fungus-like, care little for the people and societies underneath them.

Related to this is another evolutionary assumption shared by proponents of the unfettered free market, this one with somewhat dubious scientific validity. Those who believe capitalism can run itself seem to subscribe to an economic version of James Lovelock’s “Gaia Hypothesis”. Recall that Lovelock proposed that the earth itself was a kind of living organism that had evolved in such a way as to be self-regulating towards an environment optimal for life. Human beings, if they were crazy enough to challenge this Gaian equilibrium, were asking for extinction, but life itself would go on until it faced a challenger it would be unable to best- the earth’s beloved sun.

Belief that the technological world is a kind of superorganism can be found in thoughts like these from the journalist Robert Wright, which I have quoted elsewhere:

Could it be that, in some sense, the point of evolution — both the biological evolution that created an intelligent species and the technological evolution that a sufficiently intelligent species is bound to unleash — has been to create these social brains, and maybe even to weave them into a giant, loosely organized planetary brain? Kind of in the way that the point of the maturation of an organism is to create an adult organism?”

In this quote can be found both of the great forces of disequilibrium unleashed by late capitalism: the computer and communications revolution, and globalization. But it seems that this planetary brain lacks the part of our neural architecture that makes us most human- the neocortex, by which we are able to act intentionally and plan.

The kinds of hair-trigger threads we are weaving around society are good in many respects, but they are not an answer to the problem of our long term direction. If they are not tempered by foresight, they can even lead to the diminution of long term horizons in the name of whatever is right in front of our nose, and spark crises of uninformed panic lacking any sense of perspective. Twitter was a helpful tool in overthrowing Middle Eastern dictators, but proved useless in actually establishing anything like democracy.

Supercomputers using sophisticated algorithms now perform a great deal of the world’s financial transactions in milliseconds, and sometimes lead to frightening glitches like the May 2010 “flash crash” that may portend deeper risks lying underneath a world where wealth is largely detached from reality and has become a sea of electrons. Even if there are no further such crashes, our ever shortening time scale needs to be somehow tempered and offset with an idea of the future where the long term horizon extends beyond the next financial quarter.

Late capitalism in the early 21st century, with its focus on profit maximization and growth, where corporations have managed to free themselves from social constraints, and where old equilibriums are overturned in the name of creating new opportunities, is just one version of a “natural” economic system. We are free to imagine others. As Graeber hoped, we should start to wonder what different kinds of economic systems might be possible besides the one we live in. The people we would do best to turn to when it comes to imagining such alternatives are unlikely to come from the ranks of economists, who are as orthodox as any medieval priesthood, or our modern fortune-tellers- the futurists- who are little better than “consultants” for the very system we might hope to think our way beyond.

No, the people who might best imagine a future alternative to capitalism are those who are the most free of the need to intellectually conform so as to secure respectability, tenure, promotion, or a possible consulting gig, and who have devoted their lives to thinking about the future. The people who best meet this description today are the authors of science and speculative fiction. It will be to their success and failure in this task that I will turn next time…

Boston, Islam and the Real Scientific Solution

Arab Astronomers

The tragic bombing on April 15 by the Tsarnaev brothers, Tamerlan and Dzhokhar, collided with and amplified contemporary debates- deep and ongoing disputes on subjects as diverse as the role of religion, and especially Islam, in inspiring political violence, and the role and use of surveillance technology in keeping the public safe. It is important that we get a grip on these issues and find a way forward that reflects the reality of the situation rather than misplaced fear and hope, for the danger is that the lessons we take to be the meaning of the Boston bombing will be precisely opposite to those we should, on more clear headed reflection, actually draw- a path that might lead us, as it has in the recent past, to make egregious mistakes that fail to properly understand and therefore address the challenges at issue while putting our freedom at risk in the name of security. What follows, then, is an attempt at clear headedness.

The Boston bombing, admittedly inspired by a violent version of political Islam, seemed almost to fall like clockwork into a recent liberal pushback against perceived Islamophobia on the part of the group of thinkers known as the New Atheists. In late March of this year, less than a month before the bombing, Nathan Lean at Salon published his essay Dawkins, Hitchens, Harris: New Atheists Flirt With Islamophobia, which documented a series of inflammatory statements about Islam by the New Atheists, including the recent statement of Richard Dawkins that “Islam was the greatest source of evil in the world today” and an older quote by Sam Harris that: “Islam, more than any other religion human beings have devised, has all the makings of a thorough going cult of death.” For someone such as myself, who does indeed find many of the statements about Islam made by the New Atheists to be, if not overtly racist, then at least so devoid of religious literacy and above all historical and political self-reflection that they seem about as accurate as pre-modern travelers’ tales about the kingdom of the cyclops, or the lands at the antipodes of the earth where people have feet atop their heads, the bombings could not have come at a worse cultural juncture.

If liberals such as Lean had hoped to dissuade the New Atheists from making derogatory comments about Muslims, at the very least before they made an effort to actually understand the beliefs of the people they were talking about- so that Dawkins, when asked after admitting he had never read the Koran, would not respond in his ever so culturally sensitive way: “Of course you can have an opinion about Islam without having read the Qur’an. You don’t have to read “Mein Kampf” to have an opinion about Nazism”- the fact that the murderers in Boston turned out to be two Muslim brothers of Chechen origin would appear to give the New Atheists all the evidence they need. The argument for a more tolerant discourse has lost all traction.

It wasn’t only this aspect of the “God debate” with which the Boston bombing intersected. There is also the ongoing argument for and against the deployment of widespread surveillance technology, especially CCTV. The fact that the killers were caught on tape, there for all the world to see, seems to give weight to those arguing that, whatever the concerns of civil libertarians, the benefits of widespread CCTV far outweigh its costs. A mere three days after the bombing Slate ran an article by Farhad Manjoo, We Need More Cameras and We Need Them Now. The title kinda says it all, but here’s a quote:

Cities under the threat of terrorist attack should install networks of cameras to monitor everything that happens at vulnerable urban installations. Yes, you don’t like to be watched. Neither do I. But of all the measures we might consider to improve security in an age of terrorism, installing surveillance cameras everywhere may be the best choice. They’re cheap, less intrusive than many physical security systems, and—as will hopefully be the case with the Boston bombing—they can be extremely effective at solving crimes.

Manjoo does not think the use of ubiquitous surveillance would be limited to deterring crime or terrorism or solving such acts once they occur, but that they might eventually give us a version of precrime that seems like something right out of Philip K. Dick’s Minority Report:

The next step in surveillance technology involves artificial intelligence. Several companies are working on software that monitors security-camera images in an effort to spot criminal activity before it happens.

London is the queen of such surveillance technology, but in the US it is New York that has most strongly devoted itself to this technological path of preventing terrorism, spending upwards of 40 million dollars to develop its Domain Awareness System in partnership with Microsoft. New York has done this despite the concerns of those whom Mayor Bloomberg calls “special interests”- that is, those who believe that ubiquitous surveillance represents a grave threat to our right to privacy.

Combining the recent disputes surrounding the New Atheists’ treatment of Islam with the apparent success of surveillance technology in solving the Boston bombing, along with the hope that technology could prevent such events from occurring in the future, might give one a particular reading of contemporary events that goes something as follows. The early 21st century is an age of renewed religious fanaticism centered on the rejection of modernity in general and the findings of science in particular, with Islam being only the most representative of the desire to overturn modern society through the use of violence. The best defense of modern societies in such circumstances is to turn to the very features that made those societies modern in the first place, that is, science and technology. Science and technology not only promise us a form of deterrence against acts of violence by religiously inspired fanatics; they should allow us to prevent such acts from occurring at all if, that is, they are applied with full force.

This might be one set of lessons to draw from the Boston bombings, but would it be the right one? Let’s take the issue of surveillance technology first. The objection to surveillance technology notably CCTV was brilliantly laid out by the science-fiction writer and commentator Cory Doctorow in an article for The Guardian back in 2011.

Something like CCTV works on the assumption that people are acting rationally and therefore can be deterred. No one could argue that Tamerlan and his brother were acting rationally. Their goal seemed to be to kill as many people as possible before they were caught, but they certainly knew they would be caught. The deterrence factor of CCTV and related technologies comes into play even less when suicide bombers are concerned. According to Doctorow, we seem to be hoping that we can use surveillance technology as a stand-in for the social contract that should bind all of us together.

But the idea that we can all be made to behave if only we are watched closely enough all the time is bunkum. We behave ourselves because of our social contract, the collection of written and unwritten rules that bind us together by instilling us with internal surveillance in the form of conscience and aspiration. CCTVs everywhere are an invitation to walk away from the contract and our duty to one another, to become the lawlessness the CCTV is meant to prevent.

This is precisely the lesson we can draw from the foiled train bombing plot in Canada that occurred at almost the same moment bombs were going off in Boston. While the American city was reeling, Canada was making arrests of Muslim terrorists who were plotting to bomb trains headed for the US- an act that would have been far more deadly than the Boston attacks. The Royal Canadian Mounted Police was tipped off to this planned attack by members of the Muslim community in Canada, a fact that highlights the difference in the relationship between law enforcement and Muslim communities in Canada and the US. As reported in the Chicago Tribune:

Christian Leuprecht, an expert in terrorism at Queen’s University in Kingston, Ontario, said the tip-off reflected extensive efforts by the Royal Canadian Mounted Police to improve ties with Muslims.

‘One of the key things, and what makes us very different from the United States, is that the RCMP has always very explicitly separated building relationships with local communities from the intelligence gathering side of the house,’ he told Reuters.

A world covered in surveillance technology based on artificial intelligence that can “read the minds” of would-be terrorists is one solution to the threat of terrorism, but the seemingly low tech approach of the RCMP is something far different and, at least for the moment, far more effective. In fact, when we look under the hood of what the Canadians are doing we find something much more high tech than any application of AI based sensors on the horizon.

We tend to confuse advanced technology with bells and whistles and therefore miss the fact that a solution we don’t need to plug in or program can be just as complex as, if not more complex than, anything we are currently capable of building. The religious figures who turned the Canadian plotters in to the authorities were far more advanced than any “camera” we can currently construct. They were able to gauge the threat of the plotters through networks of trust and communication, using the most advanced machine at our disposal- the human brain. They were also able to largely avoid what will likely be the bane of the first generation of AI based surveillance technologies hoped for by Manjoo- that is, false alarms. Human based threat assessment is far more targeted than the types of ubiquitous surveillance offered by our silicon friends. We only need to pay attention to those who appear threatening rather than watch everybody and separate out potential bad actors through brute force calculation.

The aftereffects of the Boston bombings are likely to make companies that sell surveillance technologies very rich as American cities pour billions into covering themselves with a net of “smart” cameras in the name of safety. The high profile role of such cameras in apprehending the suspects will likely result in an erosion of the kinds of civil libertarian viewpoints held by those such as Doctorow. Yet, in an age of limited resources, are these the kinds of investments cities should be making?

The Boston bombing capped off a series of massacres in Colorado and Newtown, all of which might have been prevented by a greater investment in mental health services. It may seem counterintuitive to suggest that many of those drawn to terrorist activities are suffering from mental health conditions, but that is what some recent research suggests.

We need better ways to identify and help persons who have fallen into some very dark places, and whereas the atomistic nature of much of American social life might not give us inroads to provide these services for many, the very connectedness of immigrant Muslim communities should allow mental health issues to be more quickly identified and addressed. A good example of a seemingly anti-modern community that has embraced mental health services is my neighbors the Amish, among whom problems are perhaps more quickly identified and dealt with than in my own modern community where social ties are much more diffuse. The problem of underinvestment in mental health combined with an overreliance on security technologies isn’t confined to the US alone. Around the same time Russia was touting its superior intelligence gathering capabilities when it came to potential Chechen terrorists, including the Tsarnaev family, an ill-cared-for mental health facility in Moscow burned to the ground, killing 38 of its trapped residents.

Lastly, there is the issue of the New Atheists’ accusations against Islam- that it is particularly anti-modern, anti-scientific, and violent. As we should well know, any belief system can be used to inspire violence, especially when that violence is presented in terms of self-defense. Yet Islam today does seem to be a greater vector of violence than other anti-modern creeds. We need to understand why this is the case.

People who would claim that there is something anti-scientific about Islam would do well to acquaint themselves with a little history. There is a sense in which the scientific revolution in the West would have been impossible without the contribution of Islamic civilization, a case made brilliantly in at least two recent books: Jonathan Lyons’ The House of Wisdom and John Freely’s Aladdin’s Lamp.

It isn’t merely that Islamic civilization preserved the science of the ancient Greeks during the European dark ages, but that it built upon their discoveries and managed to synthesize technology and knowledge from previously disconnected civilizations, from China (paper) to India (the number zero). While Western Europeans still thought diseases were caused by demons, Muslims were inventing what would remain the most effective medical science until the modern age. They gave us brand-new forms of knowledge such as algebra, taught us how to count using our current system of Arabic numerals, and mapped the night sky, giving us the names of our stars. They showed us how to create accurate maps, taught us how to tell time and measure distance, and gave us in its most advanced form that most amazing of instruments, a pre-modern pocket computer- the astrolabe. Seeing is believing, and those who doubt just how incredible the astrolabe was should check out Tom Wujec’s great presentation on the astrolabe at TED.

Compared to the Christian West, which was often busy with brutalities such as genocidal campaigns against religious dissidents, inquisitions, or the persecution, forced conversion, or expulsion of Muslims and Jews, the Islamic world was generally extremely tolerant of religious minorities, to the extent that both Christian and Jewish communities thrived there.

Part of the reason Islamic civilization, which was so far ahead of the West in science and technology in the 1300s, would fall so far behind was a question of geography. It wasn’t merely that the West was able to rip from the more advanced Muslims the navigational technology that would lead to the windfall of discovering the New World; it was that the way science and technology eventually developed through the 18th, 19th, and 20th centuries demanded strong states, which Islamic civilization, on account of geography, lacked, having instead a tradition of weak ones. In Muslim societies it was a diffuse network of religious jurists, rather than a centralized Church in league with the state, that controlled religion, and a highly internationalized network of traders, rather than a tight corporate-state alliance, that dominated the economy. Modernity in its pre-21st-century manifestation required strong states to put down communication and transportation networks and to initiate and support high-impact policies such as economic standardization and universal education.

Yet technology appears to have changed this reliance on the state and brought back into play the kinds of diffuse international networks at which Islamic societies continue to be extremely good. As opposed to earlier state-centric infrastructure, cell phone networks can be put up almost overnight. The global nature of trade puts a premium on international connections, which the communications revolution has put in the hands of everybody. The rapid decline in the cost of creating media, and the bewildering ease with which this media can be distributed globally, has overturned the centralized and highly localized way in which communication used to operate.

Islam’s diffuse geography and deeply ingrained assumptions regarding power left it vulnerable both to its own pitifully weak states and to incursions from outside powers that had followed a more centralized developmental path. Many of these conflicts are now playing themselves out and the legacies of Western incursions unraveling, so that largely Muslim states created out of thin air by imperialist powers- such as Iraq and Syria- are imploding. Sadly, states where a largely secular elite was able to suppress traditional publics with the help of Western aid- most ominously Egypt- are slipping towards fundamentalism. We have helped create an equation where religious fundamentalism is confused with freedom.

Given the nature of modern international communications and the ease of travel, we are now in a situation where an alienated Muslim such as Tamerlan is not only plugged into a worldwide anti-modern discourse, he is able to “shop around” for conflicts in which to insert himself. Though his unhinged mother had apparently suggested he go to Palestine in search of jihad, he reportedly traveled to faraway Dagestan to make contact with like-minded lost souls.

Our only hope here is that these conflicts in the Muslim world will play themselves out as quickly and as peacefully as possible, and that Islam, which is in many ways poised to thrive in the new condition of globalization, will remember its own globalist traditions. Not just its tradition of international trade- think of how successful diaspora peoples such as the Chinese and the Jews have been- but its tradition of philosophic and scientific brilliance as well. The internet allows easy access to jihadi discourses by troubled Muslims, but it also increasingly offers things such as Massive Open Online Courses, or MOOCs, that might, in the same way Islamic civilization did for the West, bring the lessons of modernity and science deep into the Islamic world, even into areas such as Afghanistan that now suffer under the isolating curse of geography.

International communication, over the long term, might be a way to bring Enlightenment norms regarding rational debate and toleration not so much to the Muslim world as back to it- characteristics it in some ways passed to the West in the first place- providing a forerunner of what a global civilization tolerant of differences and committed to combining the best from all the world’s cultures might look like.

Visions of Immortality and the Death of the Eternal City

Van Gogh, Flowers in a Blue Vase

It was a time when the greatest power the world had yet known suffered an attack on its primary city, an attack which seemed to signal the coming of an age of unstoppable decline. The once seemingly unopposable power no longer possessed control over its borders; it was threatened by upheaval in North Africa, unable to bring to heel the stubborn Iranians, or to stem its relative decline. It was suffering under the impact of climate change, its politics infected with systemic corruption, its economy buckling under the weight of prolonged crisis.

Blame was sought. Conservatives claimed the problem lay with the abandonment of traditional religion and the rise of groups they termed “atheists”, especially those who preached the possibility of personal immortality. One of these “atheists” came to the defense of the immortalist movement, arguing not so much that the new beliefs were not responsible for the great tribulations in the political world, but that such tribulations themselves were irrelevant. What counted was the prospect of individual immortality and the cosmic view- the perspective that held that only that which moved humanity along to its ultimate destiny in the universe was of true importance. In the end it would be his movement that survived, after the greatest of empires had collapsed into the dust of scattered ruins….

Readers may be suspicious that I am engaged in a sleight of hand with such an introduction, and I indeed am; however much the description above resembles the United States of the early 21st century, I am in fact describing the Roman Empire of the 400s C.E. The immortalists here are not contemporary transhumanists or singularitarians but early Christians, whom many pagans considered not just dangerously innovative but, because they did not believe in the pagan gods, actually labeled atheists- which is where we get the term. The person who came to the defense of the immortalists, and who laid out the argument that it was this personal immortality and the cosmic view of history that went with it that counted rather than the human drama of politics, history, and culture, was not Ray Kurzweil or any other figure of the singularitarian or transhumanist movements, but Augustine, who did so in his work The City of God.

Now, a book with such a title, not to mention one written in the 400s, might not seem like it would be relevant to any secular 21st-century person contemplating our changing relationship with death and time, but let me try to show you why such a conclusion would be false. When Augustine wrote The City of God he was essentially arguing for a new version of immortality. For however much we might tend to think the dream of immortality was invented to assuage human fears of personal death, the ancient pagans (and the Jews for that matter) had a pretty dark version of it. Or, as Achilles said to Odysseus:

“By god, I’d rather slave on earth for another man—some dirt-poor tenant farmer who scrapes to keep alive—than rule down here over all the breathless dead.”

For the pagans, the only good form of immortality was that brought by fame here on earth. Indeed, in later pagan versions of paradise, which more resembled the Christian notion of heaven, entry was reserved for the bigwigs. The only way to this divinity was to do something godlike on earth in the service of one’s city or the empire. Augustine signaled a change in all that.

Christianity not only upended the pagan idea of immortality by granting it to everybody- slaves no less than kings- but, according to Augustine, rendered the whole public world the pagans had found so important irrelevant. What counted was the fact of our immortality and the big picture- the fate God had in store for the universe, the narrative that got us to this end. The greatest of empires could rise and fall as they would, but they were nothing next to the eternity of our soul, our God, and his creation. Rome had been called the eternal city, but it was more mortal than the soul of its most powerless and insignificant citizen.

Of course, from a secular perspective the immortality Augustine promised was all in his head. If anything, it served as the foundation not for immortal human beings, nor for a narrative of meaning that stretched from the beginning to the end of time, but for a very long-lived institution- the Catholic Church- which has managed to survive for two millennia, much longer indeed than all the empires and states that have come and gone in this period, and which, who knows, if it gets its act together, may even survive for another thousand years.

All of this might seem far removed from contemporary concerns until one realizes the extent to which we ourselves are and will be confronting a revolutionary change in our relationship to both death and time, one akin to, and of more real and lasting impact than, the one Augustine wrestled with. No matter which way one cuts it, our relationship to death has changed and is likely to continue to change. The reason is that we are now living so long, and suspect we might be able to live much longer. A person in the United States in 1913 could expect to live just shy of the ripe old age of 53. The same person in 2013 is expected to live within a hair’s breadth of 79. In this as in other departments the US has a ways to go: the people of Monaco, the country with the highest life expectancy, live on average a decade longer.

How high can such longevity go? We have no idea. Some, most famously Aubrey de Grey, think there is no theoretical limit to how long a human being can live if we get the science right. We are likely some way from this, but given that such physical immortality, or something close to it, does not seem to violate any known laws of nature, then unless something blocks its appearance- say complexity, or, more ominously, expense- we are likely to see something like the defeat of death, if not in this century then in some further future not inconceivably distant from us.

This might be a good time, then, to start asking questions about our relationship to death and time, and how both relate to the societies in which we live. Even if immortality remains forever outside our grasp, by exploring the topic we might learn something important about death, time, and ourselves in the process. It was exactly these sorts of issues that Augustine set out to explore.

What Augustine argued in The City of God was that societies, in light of eternal life, had nothing like the meaning we were accustomed to giving them. What counted was that one lead a Christian life (although as a believer in predestination Augustine really thought that God would make you do this if he wanted to). The kinds of things that counted in being a good pagan were, from Augustine’s standpoint of eternity, little but vanity. A wealthy pagan might devote money to his city, pay for a public festival, finance the construction of a theater or forum. Any pagan above the level of a slave would likely spend a good deal of time debating the decisions of his city, taking concern in its present state and future prospects. As a young man it would be the height of honor for a pagan to risk his life for his city. The pagan would do all these things in the hope that he might be remembered. In this memory, and in the lives of his descendants, lay the possibility of a good version of immortality, as opposed to wallowing in the darkness of Hades after death. Augustine countered with a question that was just as much a charge: what is all this hankering after being remembered compared to the actual immortality offered by Christ?

There are lengths of longevity far in advance of what human beings now possess at which the kinds of tensions Augustine explored- between the social and pagan idea of immortality and the individual Christian idea of eternal life- do not yet come into full force. I think Vernor Vinge is onto something (@36 min) when he suggests that extending the human lifespan into the range of multiple centuries would be extremely beneficial for both the individual and society. Longevity on the order of 500 years would likely anchor human beings more firmly in time. In terms of the past, such a lifespan would give us a degree of wisdom we sorely lack, allowing us to avoid the almost inevitable repetition of the mistakes of a prior age once those with personal experience of such mistakes or tragedies are no longer around to offer warnings, or even to be asked- as happened with our recent financial crisis once most of those who had lived through the Great Depression as adults had passed.

500 years of life would also likely orient us more strongly towards the future. We could not only engage in very long term projects of all sorts that are impossible today given our currently limited number of years, we would actually be invested in making sure our societies weren’t making egregious mistakes- such as changing the climate- whose full effects wouldn’t be felt until centuries in the future. These longer-term horizons, at least when compared to what we have now, would cease being abstractions because it would no longer be our great grandchildren that lived there but us.

The kinds of short-termism that now so distort the politics of the United States might be less likely in a world where longevity was on the order of several centuries. Say what one might about the Founding Fathers, but they certainly took the long-term future of the United States into account and, more importantly, established the types of lasting institutions that would make that future a good one. Adams and Jefferson created our oldest and perhaps most venerable cultural institution- the Library of Congress. Jefferson founded the University of Virginia. Later American political figures, such as Lincoln, not only did the difficult work of keeping the young nation intact, but developed brilliant institutions such as Land-Grant Colleges to allow learning and technological know-how to penetrate the countryside. In addition to all this, the United States invented the idea of national parks, the hope being to preserve for centuries or even millennia the natural legacy of North America.

Where did we get this long term perspective and where did it go? Perhaps it was the afterglow of the Enlightenment’s resurrection of pagan civic virtues. We certainly seem no longer capable of taking such a long view- the light has gone out. Politicians today seem unable to craft policy that will deal with issues beyond the next election cycle let alone respond to issues or foster public investments that will play out or come to fruition in the days of their grandchildren. Individuals seem to have lost some of the ability to think beyond their personal life span, so perhaps, just perhaps, the solution is to increase the lifespan of individuals themselves.

Thinking about time, especially long stretches of time, and how it runs into the annihilation of death, seems to lead inevitably to reflections about human society and the relationship of the young with the old. Freud might have been right when he reduced the human condition to sex and death, for something in the desire to leave a future legacy inspired by death does seem to resemble sex- or rather procreation- allowing us to pass along imperfect “copies” of ourselves, even if, when it comes to culture, these copies are very much removed from us indeed.

This bridging of the past and the future might even be the ultimate description of what societies and institutions are for. That is, both emerge from our confrontation with time and are attempts to win this conflict by passing information, or better, knowledge, from the past into the future in a similar way to how this is accomplished biologically in the passing on of genes through procreation. The common conception that innovation most likely emerges from youth, if it is true, might be as much a reflection of the fact that the young have quite simply not been here long enough to receive this full transmission and are on that account the most common imperfect “copies” of the ideas of their elders. Again, like sex, the transmission of information from the old to the new generation allows novel combinations and even mutations which if viable are themselves passed along.

It’s at least an interesting question what might happen to societies and institutions that act as our answer to the problem of death if human beings lived for what amounted to forever. The conclusion Augustine ultimately reached is that once you eliminate death you eliminate the importance of the social and political worlds. Granting the individual eternity means everything that now grips our attention becomes little but fleeting mayflies in the wind. We might actually experience a foretaste of this- some say we are right now- even before extreme periods of longevity are achieved, if predictions of ever accelerating change in our future bear fruit. Combined with vastly increased longevity, accelerating change means the individual becomes the only point in history that actually stands still. Whole technological regimes, cultures, even civilizations rise and fall under the impact of technological change while only the individual- in terms of a continuous perspective since birth or construction- remains. This would entail a reversal of all of human history to this point, in which it has been institutions that survive for long periods while the individuals within them come and go. We therefore can’t be sure what many of the things we take as mere background to our lives today- our country, culture, idea of history- would in such circumstances even look like.

What might this new relationship between death and technological and social change do to the arts and the humanities, or “post-humanities”? The most hopeful prediction regarding this I have heard is again Vinge’s, though I cannot find the clip. Vinge discussed the possibility of extended projects that are largely impossible given today’s average longevity. Artists and writers probably have an intuitive sense for what he means, but here are some examples- though they are mine, not his. In a world of vastly increased longevity a painter could do a flower study of every flower on earth, or portraits of an entire people. A historian could write a history of a particular city from the Paleolithic through the present in painstaking detail; a novelist could create a whole alternative imagined world filled with made-up versions of memoirs, religious texts, philosophical works, and the like. Projects such as these are impossible given our limited time here, and on account of the fact that there are many more things to care about in life besides these creations.

Vastly increased longevity might therefore mean that our greatest period of cultural creation- the world’s greatest paintings, novels, historical works, and much else besides- will be found in the human future rather than its past. Increased longevity would also, one hopes, open up scientific and technological projects that were centuries rather than decades in the making, such as the recovery of lost species and ecosystems or the move beyond the earth. One is left wondering, however, to what extent such long-term focus will be preserved in light of an accelerating speed of change.

A strange thing is happening: while the productive lifespan of an individual artist is increasing- so much for The Who’s “hope I die before I get old”- the amount of time it takes an artist to create any particular work is, through technological advancement, likely decreasing. This should lead to more artistic productivity, but one is left to ponder what any work of art would mean if the dreams of accelerating technological change come true at precisely the same time that human longevity is vastly increased. Here is what I mean: when one tries to imagine a world where human beings live for lengths of time orders of magnitude longer than those of today, and where technological change proves to be in a persistently accelerating state, what you get is a world so completely transformed within the course of a generation that it bears as much resemblance to the generation before it as we do to the ancient Greeks. What does art mean in such a context, where the world it addresses no longer exists not long after it has been made?

This same condition of fleetingness applies to every other aspect of life as well, from the societies we live in down to our relationships with those we love. It was Augustine’s genius to realize that if one looks at life from the perspective of individual eternity, it is not the case that everything else becomes immortal as well; rather, the mortality of everything we are not becomes highlighted.

This need not spell the end of all those things we currently take as the vastly important long lasting background of our mortal lives, but it will likely change their character. To use a personal story, cultural and political creations and commitments in the future might still be meaningful in the sense that my Nana’s flower garden is meaningful. My Nana just turned 91 and still manages to plant and care for her flowers that bloom in spring only to disappear until brought back by her gentle and wise hands at the season’s return. Perhaps the longer we live the more the world around us will get its meaning not through its durability but as a fleeting beauty brought forth because we cared enough to stop for a moment to orchestrate and hold still, if even for merely a brief instant, time’s relentless flow.


Return to the Island of Dr. Moreau

The Island of Dr Moreau

Sometimes a science-fiction novel achieves the impossible and actually succeeds in reaching out and grasping the future, anticipating its concerns, grappling with its possibilities, wrestling with its ethical dilemmas. H.G. Wells’ short 1896 novel, The Island of Dr. Moreau, is like that. The work achieves the feat of being relevant to our own time at the very least because the scientific capabilities Wells imagined in the novel have really only begun to be possible today, and will be only more so going forward. The ethical territory he identified with his strange little book is one through which we are likely to be increasingly called upon to chart our own course.

The novel starts with a shipwreck. Edward Prendick, an Englishman who we later learn studied under the Darwinian Thomas Huxley, is saved at sea by a man named Montgomery. This Montgomery, along with his beastlike servant M’ling, is a passenger aboard a ship shepherding a cargo of exotic animals, including a puma, to an isolated island. Prendick is a man whose only type of luck seems to be bad; he manages to anger the captain of the ship and is left to his fate on the island with Montgomery, a man in the service of a mad scientific genius- Dr. Moreau.

Prendick soon discovers that his new island home is a house of horrors. Moreau is a “vivisectionist”, meaning he conducts experiments on live animals, and does so without any compunction as to the pain these animals experience. The natural sounds of the island are drowned out by the cries of suffering Prendick hears from the puma being surgically transformed into a “man” at the hands of Moreau. This is the goal of Moreau, who has been chased out of civilization for pursuing his mission: to use the skills of vivisection and hypnosis to turn his poor animals into something like human beings, giving them not only humanoid forms but human abilities such as speech.

The list of the “beast folk” transformed in such a way includes not only M’ling and the puma but other creatures trapped in the space between human beings and their original animal selves, or blended from purely animal forms. There is the Leopard-Man, the Ape-Man, and the satanic-looking Satyr-Man, a humanoid formed from a goat. There is the smallish Sloth-Man, who resembles a flayed child. Two characters, the Sayer of the Law and the Fox-Bear-Witch, revere and parrot the “Law” of their creator Moreau, which amounts to commandments to act like human beings, or not to act like animals: “Not to go on all Fours”, “Not to suck up Drink”, “Not to claw Bark of Trees”, “Not to chase other Men” (81)

There is also the chimera of the Hyena-Swine, the only animal that seems able to deliberately flout the commandments of Moreau, and which becomes the enemy of Prendick after the death of the mad scientist at the hands of his last creation- the puma man. Montgomery, giving in to his own animal instincts, dies shortly after his master’s demise in the midst of a mad alcoholic revelry with the Beast Men.

Prendick escapes the island, before the Hyena-Swine and his allies are able to kill him, by setting sail on a small boat that has drifted near shore bearing the dead bodies of the captain who had abandoned him on the island and his shipmate. Eventually the scarred castaway makes his way back to England, where his experience seems to have shattered his connection with his fellow human beings, who now appear to Prendick as little more than Beast Men in disguise.

Why did Wells write this strange story? In part, the story grew out of an emerging consciousness of cruelty to animals and calls for animal rights. As mentioned, Moreau is an expert in the practice of “vivisection”, an old term that means simply animal experimentation, whether the animals are anesthetized or not. He thinks the whole question of the pain experienced by his animals a useless hindrance to the pursuit of knowledge, one he needed to overcome to free “the colorless light of these intellectual desires” from the fetters of empathy. The animals he experiments upon are no longer “a fellow creature, but a problem”. Free to pursue his science on his island, he is “never troubled by matters of ethics”, and notes that the study of nature can only lead to one becoming “as remorseless as Nature” itself. (104)

It is often thought that the idea of animal rights first emerged in the 1970s with the publication of the philosopher Peter Singer’s Animal Liberation. Yet the roots of the movement can be traced back nearly a century earlier, emerging in response to the practice of vivisection. Within two years of Wells’ story the British Union for the Abolition of Vivisection would be founded- a society which has advocated the end of animal experimentation ever since. In the years immediately following the publication of The Island of Dr. Moreau protests would begin erupting around experiments on live animals, especially dogs.

It is almost as if the moment the idea of common kinship between humans and animals entered the public mind with the publication of Darwin’s On the Origin of Species in 1859, the idea of treating animals “humanely” gathered moral force. Yet, however humane and humanistic a figure Darwin himself was- and in terms of both there is no doubt- his ideas also left room for darker interpretations. The ethical question opened up by Darwin was whether the fact that humankind shared common kinship with other animals meant that animals should be treated more like human beings, or whether such common kinship offered justification for treating fellow human beings as we had always treated animals. The Origin of Species opened up all sorts of anxieties surrounding the narrowness of the gap between human beings and animals. It became hard to see which way humanity was going.

The Island of Dr. Moreau gives us a firsthand experience of this vertigo. Prendick initially thinks Moreau is transforming human beings into animals, and only finds out later that he is engaged in a form of uplift. Moreau wants to find out how difficult it would be to turn an animal into something like a human being- whether the gap can be bridged on the upside. However, the Beast Men that Moreau creates inevitably end up slipping back to their animal natures- the downside pressures are very strong- and this suggests to Prendick that humankind itself is on a razor’s edge between civilization and the eruption of animal instincts.

This idea of the degenerative potential of evolution was one of the most deadly memes ever to have emerged from Western civilization. It should come as no surprise that Adolf Hitler was born only a few years before the publication of The Island of Dr. Moreau, and ideas about the degeneration of the “race” would become the putrid intellectual mother’s milk on which Hitler was raised. The anxiety that humanity might evolve “backward”, which came with a whole host of racially charged assumptions, would be found in the eugenics movements and anti-immigrant sentiments of both Great Britain and the United States in the early 20th century, following the publication of Wells’ novel. It was the Nazis, of course, who took this idea of degenerative evolution to its logical extreme, using this fear as justification for mass genocide.

It’s not that no one in the early 20th century held the idea that the course of evolution might be progressive, at least once one stepped aside from evolution controlled by nature and introduced human agency. Leon Trotsky famously made predictions about the Russian Revolution that sound downright transhumanist, such as:

Man will make it his purpose to master his own feelings, to raise his instincts to the heights of consciousness, to make them transparent, to extend the wires of his will into hidden recesses, and thereby to raise himself to a new plane, to create a higher social biologic type, or, if you please, a superman.

Yet around the same time Trotsky was predicting the arrival of the New Soviet Man, the well-respected Soviet scientist Il’ya Ivanov tried to move the arrow backward. Ivanov came close to carrying out an experiment to see whether African women, who were not to be informed of the nature of the experiment, could be inseminated with the sperm of chimps. The idea that Stalin had hatched this plan to create a race of ape-men super soldiers is a myth, one often played up by modern religious opponents of Darwin’s ideas regarding human evolution. But what it does show is that Wells’s Dr. Moreau was no mere anti-scientific fantasy but a legitimate fear regarding the consequences of Darwin’s discovery- that the boundary between human beings and animals was the thinnest of ice, and that we were standing on it.

Yet the real importance of the questions raised by The Island of Dr. Moreau would have to wait over a century, until our own day, to truly come to the fore. This is because such sovereignty over the composition and behavior of animals really wasn’t possible until we not only understood the instructions that guide the development and composition of life, something we didn’t understand until the discovery of the hereditary role of DNA in 1952, but also knew how to manipulate this genetic code, something we have only begun to master in our own day.

Another element of the true import of the questions Wells posed would have to wait until recently, when we developed the capacity to manipulate the neural processes of animals. As mentioned, in the novel Moreau uses hypnotism to get his animals to do what he wants. We surely find this idea silly, conjuring up images of barnyard beasts staring mindlessly at swinging pendulums. Yet the meaning of the term hypnotism in The Island of Dr. Moreau would be better expressed with our modern term “programming”. This was what the 19th century mistakenly thought it had stumbled across when it invented the pseudo-science of hypnotism- a way to circumvent the conscious mind and tap into a hidden unconscious in such a way as to provide instructions for how the hypnotized should act. This too is something we have only been able to achieve now that microelectronics have shrunk to a small enough size that they and their programs can be implanted into animals without killing them.

Emily Anthes’ balanced and excellent book Frankenstein’s Cat gives us an idea of what both this genetic manipulation and this programming of animals through bio-electronics look like. According to Anthes, the ability to turn genes on and off has given us an enormous ability to play with the developmental trajectory of animals such as that poor scientific workhorse- the lab mouse. We can make mice that suffer human-like male pattern baldness, can only make left turns, or grow tusks. In the less funny department, we can make mice riddled with skin tumors, or that suffer any number of other human diseases, merely by “knocking out” one of their genes. (FC 3-4).

As Anthes lays out, our increasing control over the genetic code of all life gives us power over animals (not to mention the other kingdoms of living things) that is by turns trivial, profound, and disturbing. We can insert genes from one species into another that could never mix under natural circumstances- so that fish with coral genes inserted can be made to glow under certain wavelengths of light, and we could do the same with the button noses of cats, or even with ourselves if we wanted to be a hit at raves. We can manipulate the genes of dairy animals so that they produce life-saving pharmaceuticals in their milk, or harvest spider silk from udders. We have inserted human growth hormone into pigs to make them leaner- an experiment that resulted in devastating diseases for the pigs concerned and a large dose of yuck factor.

Anthes also shows us how we are attempting to combine the features of our silicon-based servants with those of animals, creating creatures such as remote-controlled rats through the insertion of microelectronics. The new field of optogenetics gives us amazing control over the actions of animals, using light to turn neurons on and off- something that one of the founders of the science, Karl Deisseroth, thinks raises a whole host of ethical and philosophical questions we need to deal with now, as it becomes possible to use optogenetics with humans. These cyborg experiments have even gone D.I.Y. The company Backyard Brains sells a kit called RoboRoach that allows amateur neuroscientists to turn a humble roach into their own personal insect cyborg.

The primary motivation of Wells’s Dr. Moreau was the pursuit of knowledge whatever its cost. It’s hard to see his attempt to turn animals into something with human characteristics as of any real benefit to the animals themselves, given the pain they had to suffer to get there. Moreau is certainly not interested in any benefit his research might bring to human beings either. Our own Dr. Moreaus- and certainly many of the poor creatures who are subject to our experiments would see the matter that way- are indeed driven not just by human welfare but, as Anthes is at pains to point out, by animal welfare as well. If we play our genetic cards right, the kinds of powers we are gaining might allow us to save our beloved pets from painful and debilitating diseases, lessen the pain of animals we use for consumption or sport, revive extinct species, or shield at least some species from the Sixth Great Extinction which human beings have unleashed. On the latter issue, there is always a danger that we will see these gene-manipulating powers as an easy alternative to the really hard choices we have to make regarding the biosphere, but at the very least our mastery of genetics and related areas such as cloning should allow us to mitigate at least some of the damage we are doing.

Anthes wants to steer us away from moral absolutes regarding biological technologies. Remote-controlled rats would probably not be a good idea, either for the rats or for other people, were they sold as “toys” for my 12-year-old nephew. The humane use of such cyborg creations to find earthquake victims, even if disturbing on its face, is another matter entirely.

At one point in The Island of Dr. Moreau Wells’s mad scientist explains himself to Prendick with the chilling words:

I wanted- it was the only thing I wanted- to find out the extreme limit of plasticity in a living shape. (104)

 

Again, the achievement of such plasticity as Dr. Moreau longed for had to wait until the mastery of genetics. In addition to Anthes’ Frankenstein’s Cat, this newfound technological power is explored in another excellent recent book, this one by the geneticist George Church. In Regenesis Church walks us through the revolution in molecular biology, and especially the emerging field of synthetic biology. The potential implied in our ability to re-engineer life at the genetic level, or to design and create life almost from scratch, is quite simply astounding.

According to Church, we could redesign the true overlords of nature- bacteria- to do an incredible number of things, including creating our fuel, growing our homes, and dealing with major challenges such as climate change. We could use synthetic bacteria to create our drugs and chemicals and put them to work as true nanotech assemblers. We could design “immortal” human organs and restore the natural environment to its state before the impact of our dominant and destructive species, including resurrecting species that have gone extinct using reverse engineering.

Some of Church’s ideas might be scientifically sound but nonetheless appear fanciful, including his idea of creating a species of “mirror” humans whose cellular handedness is reversed. Human beings with reversed cellular handedness would be essentially immune to the scourge of viruses, because these pathogens rely on shared handedness to hijack human cells. Aside from the logistics, part of the problem is that a human being possessing the old handedness would be unable to have children with a person possessing the new version. Imagine that version of Romeo and Juliet!

Church’s projections and proposals are all pretty astounding, and none, to me at least, raises red flags in an ethical sense except for one, in which he goes all Dr. Moreau on us. Church proposes that we resurrect Neanderthals:

If society becomes comfortable with human cloning and sees value in human diversity, then the whole Neanderthal creature itself could be cloned by a surrogate mother chimp- or by an extremely adventurous female human. (10)

The immediate problem I see here is not so much that the Neanderthal is brought back into existence as that Church proposes the mother of this “creature” (why does he use this word and not the more morally accurate term- person?) should be a chimp. Nothing against chimpanzees, but Neanderthals are better thought of as an extinct variant of ourselves than as a different species, the clearest evidence of which is that Homo sapiens and Neanderthals could produce viable and fertile offspring- and we still carry the genes from such interbreeding. Neanderthals are close enough to us- they buried their dead surrounded by flowers, after all- that they should be considered kin, and were such a project ever to take place they should be carried by a human being and raised as one of us, with all of our rights and possibilities.

Attempting to bring back other hominids such as Homo habilis or Australopithecus would raise similar Moreau-type questions and might be more in line with Church’s suggestion regarding the resurrection of Neanderthals. Earlier hominids are distinct enough from modern humans that their full integration into human society would be unlikely. The question then becomes what type of world we are bringing these beings into, because it is certainly not our own.

One of the ethical realities that The Island of Dr. Moreau is excellent at revealing is how attempts to change animals into something more like ourselves might create creatures that are themselves shipwrecked, forever cut off both from the life of their own species and from the human world into which we have attempted to bring them. Like modern humans, earlier hominids seem to have been extremely gregarious and dependent upon rich social worlds. Unless we could bring a whole society of them into existence at once, we would be in danger of creating extremely lonely and isolated individuals, unable to find a natural and happy place in the world.

Reverse engineering the early hominids by manipulating the genomes of all the higher primates, including our own, might be the shortest path to the uplift of animals such as the chimpanzee, a project suggested by George Dvorsky of the IEET in his interview with Anthes in Frankenstein’s Cat. The hope, it seems, is to give other animals the possibilities implicit in humanity and to provide us with a companion at something closer to our level of sentience. Yet the danger here, by my lights, is that we might create creatures that are essentially homeless- caught between the human understanding of what they could or “should” be and the desire to actualize their own non-human nature, the result of millions of years of evolution.

This threat of homelessness applies even in cases where the species in question has the clear potential to integrate into human society. What would it feel like to be the first Neanderthal in tens of thousands of years? Would one feel a sense of pride in one’s Neanderthal heritage, or like a circus freak constantly subject since childhood to the prying eyes of a curious public? Would there be interminable pressure to act “Neanderthal-like”, or constant questions from ignorant Lamarckians as to the nature of a lost Neanderthal society one knows nothing about? Would there be pressure to mate only with other Neanderthals- harping on about the need for a “royal wedding” of “caveman” and “cavewoman”? What if one were only attracted to the kinds of humans seen in Playboy? Would it be scandalous for a human being to date a Neanderthal, or could the latter even find a legitimate human mate, and not just weird people with a caveman fetish? In other words, how do we avoid this very practical and ethical problem of homelessness?

On the surface these questions may appear trivial given the existential importance of what creating such species would symbolize, but they are, after all, the types of questions that would need to be addressed if we are thinking about giving these beings the possibility of a rich and happy life as themselves, and not just answering to our own philosophic and scientific desires. We can only proceed morally if we begin to grapple with some of these issues, and here we’ll need the talents not just of scientists and philosophers but of novelists as well. We need to imagine what it will feel like to live in the kinds of worlds, and as the kinds of beings, we are creating- a job that, as Bruce Sterling has pointed out, is the very purpose of science fiction novelists in the first place. We will need more of our own versions of The Island of Dr. Moreau.

 

References:
H.G. Wells, The Island of Dr. Moreau, Lancer Books, 1968. First published 1896.
Emily Anthes, Frankenstein’s Cat: Cuddling Up to Biotech’s Brave New Beasts, Scientific American Books, 2013.
George Church and Ed Regis, Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves, Basic Books, 2012.

The Ethics of a Simulated Universe


This year marks the tenth anniversary of one of the more thought-provoking thought experiments in recent memory. Nick Bostrom’s paper in the Philosophical Quarterly, “Are You Living in a Computer Simulation?”, might have sounded like the types of conversations we all had after leaving the theater having seen The Matrix, but Bostrom’s attempt was serious. (There is a great recent video of Bostrom discussing his argument at the IEET). What he did in his paper was build a formal argument around the seemingly fanciful question of whether or not we are living in a simulated world. Here is how he stated it:


This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

To state his case in everyday language: any technological civilization whose progress did not stop at some point would gain the capacity to realistically simulate whole worlds, including the individual minds that inhabit them- that is, it could run realistic ancestor simulations. So either technologically advanced civilizations die out before gaining the capacity to produce realistic ancestor simulations, or something stops such technologically mature civilizations from running simulations in large numbers, or we ourselves are most likely living in such a simulation, because there would be a great many more simulated worlds than real ones.
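The arithmetic behind the trilemma is worth seeing. The following is a rough sketch of the core fraction in Bostrom’s paper; the symbols here are my shorthand for the quantities he defines, so treat the notation as illustrative rather than a quotation:

```latex
% f_P : fraction of human-level civilizations that survive to a
%       posthuman, simulation-capable stage
% N   : average number of ancestor simulations such a civilization runs
% H   : average number of individuals who lived in a civilization
%       before it reached that stage
%
% The fraction of all observers with human-type experiences
% who are simulated is then:
f_{\mathrm{sim}} \;=\; \frac{f_P \, \bar{N} \, H}{\,f_P \, \bar{N} \, H + H\,}
\;=\; \frac{f_P \, \bar{N}}{\,f_P \, \bar{N} + 1\,}
```

Unless f_P is close to zero (proposition 1) or N̄ is close to zero (proposition 2), the product f_P·N̄ is enormous and f_sim approaches one (proposition 3). This also makes clear why the argument of this essay targets proposition 2: if mature civilizations run almost no such simulations, the fraction collapses toward zero.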

There is a lot in that argument to digest, and a number of underlying assumptions that might be explored or challenged, but I want to look at just one of them, #2. That is, I will make the case that there may be very good reasons, both ethical and scientific, why technological civilizations both prohibit and are largely uninterested in creating realistic ancestor simulations. Bostrom himself discusses the possibility that ethical constraints might prevent technologically mature civilizations from creating realistic ancestor simulations. He writes:

One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

I think the issue of “suffering that is inflicted on the inhabitants of a simulation” may be more serious than Bostrom appears to believe. Any civilization that has reached the stage where it can create realistic worlds containing fully conscious human beings will almost certainly have escaped the two conditions that haunt the human condition in its current form- namely pain and death. The creation of realistic ancestor simulations would bring these two horrors back into existence, and thus might well be considered not merely unethical but perhaps even evil. Were our world actually such a simulation, it would confront us with questions that once went by the name of theodicy: the attempt to reconcile the assumed goodness of the creator (in our case, the simulator) with the evil- natural, moral, and metaphysical- that exists in the world.

The question of why there is, or the morality of there being, such a wide disjunction between the conditions of any imaginable creator/simulator and those of the created/simulated is a road we’ve been down before, as the philosopher Susan Neiman so brilliantly showed us in her Evil in Modern Thought: An Alternative History of Philosophy. There, Neiman points out that the question of theodicy is a nearly invisible current that ran through the modern age. As a serious topic of thought it began as an underlying assumption behind the scientific revolution, with thinkers such as Leibniz arguing in his Essays on the Goodness of God, the Freedom of Man and the Origin of Evil that ours is “the best of all possible worlds”. Leibniz, having invented calculus and been the first to envision what we would understand as a modern computer, was no dope, so one might wonder how such a genius could ever believe anything so seemingly contradictory to actual human experience.

What one needs to remember in order to understand this is that the early giants of the scientific revolution- Newton, Descartes, Leibniz, Bacon- weren’t out to replace God but to understand what was thought of as his natural order. Given how stunningly and quickly what seemed at the time a complete understanding of the natural world had been formulated using the new methods, it perhaps made sense to think that a similar human understanding of the moral order was close at hand. Given the intricate and harmonious picture of how nature was “designed”, it was easy to think that this natural order did not somehow run in violation of the moral order- that underneath every seemingly senseless and death-bringing natural event, an earthquake or plague, there was some deeper and more necessary process for the good of humanity going on.


The quest for theodicy continued when Rousseau gave us the idea that for harmony to be restored to the human world we needed to return to the balance found in nature, something we had lost when we developed civilization. Nature, and therefore the intelligence thought to have designed it, were good. Human beings got themselves into trouble when they failed to heed this nature- built their cities on fault lines, or even, for Rousseau, lived in cities at all.


Anyone who has ever taken a serious look at history (or even paid real attention to the news) would agree with Hegel that “History… is, indeed, little more than the register of the ‘crimes, follies, and misfortunes’ of mankind”. There isn’t any room for an ethical creator there, but Hegel himself tried to find some larger meaning in the long-term trajectory of history. With him, and with Marx who followed on his heels, we have not so much a creator as a natural and historical process that leads to a fulfillment- an intelligence or perfect society- at its end. There may be a great deal of blood and gore, a huge amount of seemingly unnecessary human suffering, between the beginning of history and its end, but the price is worth paying.

Lest anyone think that these views are irrelevant in our current context, one can see a great deal of Rousseau in the positions of contemporary environmentalists and bio-conservatives who take the position that it is us, that is human beings and the artificial technological society we have created that is the problem. Likewise, the views of both singularitarians and some transhumanists who think we are on the verge of reaching some breakout stage where we complete or transcend the human condition, have deep echoes of both Hegel and Marx.

But perhaps the best analog for the kinds of rules a technologically advanced civilization might create around realistic ancestor simulations lies not in these religiously based philosophical ideas but in our regulations regarding more practical areas, such as animal experimentation. Scientific researchers have long recognized that the pursuit of truth needs humane constraints, and that such constraints apply not just to human research subjects but to animal subjects as well. In the United States, research using live animals is subject to self-regulation and oversight by Institutional Animal Care and Use Committees (IACUCs). The criteria are as follows:

The basic criteria for IACUC approval are that the research (a) has the potential to allow us to learn new information, (b) will teach skills or concepts that cannot be obtained using an alternative, (c) will generate knowledge that is scientifically and socially important, and (d) is designed such that animals are treated humanely.

The underlying idea behind these regulations is that researchers should never unnecessarily burden animals in research. Therefore, it is the job of researchers to design and carry out research in a way that does not subject animals to unnecessary burdens. Doing research that does not promise to generate important knowledge, subjecting animals to unnecessary pain, doing experiments on animals when the objectives can be reached without doing so are all ways of unnecessarily burdening animals.

Would realistic ancestor simulations meet these criteria? I think they would probably fulfill criterion (a), but I have serious doubts that they meet any of the others. In terms of simulations offering us “skills or concepts that cannot be obtained using an alternative” (b), in what sense would a realistic simulation teach skills or concepts that could not be achieved using something short of a simulation in which those within it actually suffer and die? Are there not many types of simulations that would fall short of the full range of subjective human experiences of suffering but would nevertheless grant us a good deal of knowledge regarding such worlds and societies? There is probably an even greater ethical hurdle for realistic ancestor simulations to cross in (c), for it is difficult to see exactly what lessons or essential knowledge running such simulations would bring. Societies that are advanced enough to run such simulations are unlikely to gain vital information about how to run their own societies from them.

The knowledge they are seeking is bound to be historical in nature: what was it like to live in such and such a period, or what might have happened if some historical contingency were reversed? I find it extremely difficult to believe that we do not already have the majority of the information needed to create realistic models of what it was like to live in a particular historical period- a recreation that does not have to entail real suffering on the part of innocent participants to be of worth.

Let’s take a specific rather than a general historical example, one dear to my heart because I am a Pennsylvanian- the Battle of Gettysburg. Imagine that you live hundreds or even thousands of years in the future, when we are capable of creating realistic ancestor simulations. Imagine that you are absolutely fascinated by the American Civil War and would like to “live” in that period to see it for yourself. Certainly you might want to bring your own capacity for suffering and pain, or to replicate this capacity in a “human” you, but why do the simulated beings in this world with you have to actually feel the lash of a whip, or the pain of a saw hacking off an injured limb? Again, if completely accurate simulations of limited historical events are ethically suspect, why would this suspicion not hold for the simulation of entire worlds?

One might raise the objection that any realistic ancestor simulation would need to possess sentient beings with free will such as ourselves, beings that possess not merely the ability to suffer but the ability to inflict evil upon others. This, of course, is exactly the argument Christian theodicy makes. It is also an argument that was undermined by sceptics of early modern theodicy, such as that put forth by Leibniz.

Arguments that sentient beings like ourselves (whether simulated or real) require a free will that puts us at risk of self-harm were dealt with brilliantly by the 17th century philosopher Pierre Bayle, who compared whatever creator might exist behind a world where sentient beings are in constant danger from their own exercise of free will to a negligent parent. Every parent knows that giving their children freedom is necessary for moral growth, but what parent would give this freedom such free rein when it was not only likely but known that it would lead to the severe harm or even death of their child?

Applying this directly to the simulation argument: any creator of such simulations knows that we will kill millions of our fellow human beings in war and torture, and deliberately starve and enslave countless others. At least these are problems we have brought on ourselves, but the world also contains numerous so-called natural evils, such as earthquakes and pandemics, which have devastated us numerous times. Above all, it contains death itself, which will kill all of us in the end.

It was quite obvious to another sceptic, David Hume, that the reality we live in has no concern for us. If it was “engineered”, what does it say that the engineer did not provide obvious “tweaks” to the system that would make human life infinitely better? Why are we driven by pains such as hunger, and not by correspondingly greater pleasures alone?

Arguments that human and animal suffering is somehow not “real” because it is simulated seem to me particularly tone-deaf. In a lighter mood I might kick a stone like Samuel Johnson and squeak “I refute it thus!” In a darker mood I might ask those who hold this idea whether they would willingly exchange places with a burn victim. I think not!

If it is the case that we live in a realistic ancestor simulation, then the simulator cares nothing for our suffering at the granular level. This leads us to the question of what type of knowledge could truly be gained from running such realistic ancestor simulations. It might be the case that the more granular a simulation is, the less knowledge can actually be gained from it. If one needs to create entire worlds, with individuals and everything in them, in order to truly understand what is going on, then either the results of the process you are studying are contingent, or you are not really in possession of full knowledge regarding how such worlds work. It might be interesting to run natural selection over again from the beginning of life on earth to see what alternatives evolution might have come up with, but would we really be gaining any knowledge about evolution itself, rather than a view of how it might have looked had such and such initial conditions been changed? And why would we need to replicate an entire world, including the pain suffered by the simulated creatures within it, before we could grasp what alternatives to the path of evolution, or even of history, might have looked like? Full understanding of the process by which evolution works- which we do not have, but which a civilization able to create realistic ancestor simulations doubtless would- should allow us to envision alternatives without having to run the process from scratch a countless number of times.

Yet it is with the last criterion (d), that experiments be “designed such that animals are treated humanely”, that the ethical nature of any realistic ancestor simulation really hits a wall. If we are indeed living in an ancestor simulation, it seems pretty clear that the simulator(s) should be declared inhumane. How otherwise would the simulator create a world of pandemics and genocides, torture, war, and murder, and above all, universal death?

One might claim that any simulator is so far above us that it takes little notice of our suffering. Yet we have granted this kind of concern to animals, and thus might consider ourselves morally superior to an ethically blind simulator. And without any concern or interest in our collective well-being, why simulate us in the first place?

Indeed, any simulator who created a world such as our own would be the ultimate anti-transhumanist. Having escaped pain and death, “he” would have brought them back into the world on account of what would likely be socially useless curiosity. Here, then, we might have an answer to the second half of Bostrom’s quote above:


Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

A society that was technologically capable of creating realistic ancestor simulations and actually ran them would appear to have one of two features: (1) it finds such simulations ethically permissible, or (2) it is unable to prevent such simulations from being created. Perhaps any society that remains open to creating the degree of suffering found in realistic ancestor simulations for any reason short of existential survival would be likely to destroy itself for other reasons related to this same lack of ethical boundaries.

However, it is (2) that I find the most illuminating. For perhaps the condition that decides whether a civilization will continue to exist is its ability to adequately regulate the use of technology within it. Any society that is unable to prevent rogue members from creating realistic ancestor simulations despite deep ethical prohibitions is incapable of preventing the use of destructive technologies or of managing its own technological development in a way that promotes survival- a situation we can perhaps see glimpses of in our own predicament with nuclear and biological weapons, or the dangers of the Anthropocene.

This link between ethics, the successful control of technology, and long-term survival is perhaps the real lesson we should glean from Bostrom’s provocative simulation argument.

Immortal Jellyfish and the Collapse of Civilization


The one rule that seems to hold for everything in our Universe is that all that exists must, after a time, pass away- which for life forms means they will die. From there, however, all bets are off, and the definition of “time” in the phrase “after a time” comes into play. The Universe itself may exist for as long as hundreds of trillions of years, to at last disappear into the formless quantum field from whence it came. Galaxies- or more specifically, clusters of galaxies and super-galaxies- may survive for perhaps trillions of years, to eventually be pulled in and destroyed by the black holes at their centers.

Stars last for a few billion years, and our own sun, some 5 or so billion years in the future, will die after having expanded and then consumed the last of its nuclear fuel. The earth, having lasted around 10 billion years by that point, will be consumed in this expansion of the Sun. Life on earth seems unlikely to make it all the way to the Sun’s envelopment of it and will likely be destroyed billions of years before the end of the Sun- as solar expansion boils away the atmosphere and oceans of our precious earth.

The lifespan of even the longest-lived individual among us is nothing compared to this kind of deep time. Against deep time we are, all of us, little more than mayflies, who live out their entire adult lives in little more than a day. Yet, like the mayflies themselves, which are one of the earth’s oldest existent species, by the very fact that we are the product of a long chain of life stretching backward we have contact with deep time.

Life on earth itself, if not quite immortal, does at least come within the range of the “lifespan” of other systems in our Universe, such as stars. If the life that emerged from earth manages to survive and proves capable of moving beyond the life-cycle of its parent star, perhaps the chain in which we exist can continue in an unbroken line to reach the age of galaxies or even the Universe itself. Here then might lie something like immortality.

The most likely route by which this might happen is through our own species, Homo sapiens, or our descendants. Species do not exist forever, and ours is likely to share this fate, either through actual extinction or through evolution into something else. In terms of the latter, one might ask, if our survival is assumed, how far into the future we would need to go before our descendants are no longer recognizably human. As long as something doesn’t kill us, or we don’t kill ourselves off first, I think that choice, for at least the foreseeable future, will be up to us.

It is often assumed that species have to evolve or they will die. A common refrain I’ve heard among some transhumanists is “evolve or die!” In one sense, yes, we need to adapt to changing circumstances; in another, no, this is not really what evolution teaches us, or it is not the only thing it teaches us. When one looks at the earth’s longest extant species, what one often sees is that once natural selection comes up with a formula that works, that model will be preserved essentially unchanged over very long stretches of time, even over what can be considered deep time. Cyanobacteria are nearly as old as life on earth itself, and the more complex Horseshoe Crab is essentially the same as its relatives that walked the earth before the dinosaurs. The exact same type of small creatures that our children torture on beach vacations might have been snacks for a baby T-Rex!

That was the question of the longevity of species, but what about the longevity of individuals? Anyone interested should check out the amazing photo study of the subject by the artist Rachel Sussman. You can see Sussman’s work here at TED, and over at Long Now. The specimens Sussman brings to light are all individuals over 2,000 years old. Almost all are bacteria or plants and clonal- that is, they exist as a single organism composed of genetically identical individuals linked together by common root and other systems. Plants, and especially trees, are perhaps the most interesting because they are so familiar to us, and though no plant can compete with the longevity of bacteria, a clonal colony of Quaking Aspen in Utah is an amazing 80,000 years old!

The only animals Sussman deals with are corals, an artistic decision that reflects the fact that animals do not survive for all that long- although one species of animal she does not cover might give the long-lifers in the other kingdoms a run for their money. The “immortal jellyfish,” Turritopsis nutricula, is thought to be effectively biologically immortal (though none are likely to have survived in the wild for anything even approaching the longevity of the longest-lived plants). The way it achieves this feat is a wonder of biological evolution. The Turritopsis nutricula, after mating upon sexual maturity, essentially reverses its own development process and reverts back to its prior clonal state.

Perhaps we could say that the Turritopsis nutricula survives indefinitely by moving between more and less complex types of structures, all the while preserving the underlying genes of an individual specimen intact. Some hold out the hope that the Turritopsis nutricula holds the keys to biological immortality for individuals, and let’s hope they’re right, but I, for one, think its lessons likely lie elsewhere.

A jellyfish is a jellyfish, after all; among more complex animals with well-developed nervous systems, longevity moves much closer to a humanly comprehensible lifespan, with the oldest living animal a giant tortoise with the too-cute name of “Jonathan,” thought to be around 178 years old. This is still a very long time frame in human terms, and it perhaps puts the briefness of our own recent history in perspective: it would be another 26 years after Jonathan hatched from his egg till the first shots of the American Civil War were fired. A lot can happen over the life of a “turtle.”

Individual plants, however, put all individual animals to shame. The oldest non-clonal plant, the Great Basin Bristlecone Pine, has a specimen believed to be 5,062 years old. In some ways this oldest living non-clonal individual is perfectly illustrative of the (relatively) new way human beings have reoriented themselves to time, and even deep time. When this specimen of pine first emerged from a cone, human beings had only just invented a whole set of tools that would make possible the transmission of cultural rather than genetic information across vast stretches of time. Around the 31st century B.C.E. we invented monumental architecture, such as Stonehenge and the pyramids of Egypt, whose builders still “speak” to us, pose questions to us, from millennia ago. Above all, we invented writing, which allows someone with little more than a clay tablet and a carving utensil to say something to me living 5,000 years in his future.

Humans being the social animals that they are, we might ask ourselves about the mortality, or potential immortality, of groups that survive across many generations, and even for thousands of years. Groups that survive for such long periods of time seem to emerge most fully out of the technology of writing, which both preserves historical memory and permits a common identity around a core set of ideas. The two major types of human groups based on writing are institutions and societies, the latter of which includes not just the state but also the economic, cultural, and intellectual features of a particular group.


Among the biggest mistakes, I think, that those charged with responsibility for an institution or a society can make is to assume that it is naturally immortal, and that such a condition is independent of whatever decisions and actions those in charge of it take. This was part of the charge Augustine laid against the seemingly eternal Roman Empire in his The City of God. The Empire, Augustine pointed out, was a human institution that had grown and thrived from its virtues in the past just as surely as it was in his day dying from its vices. Augustine, however, saw the Church and its message as truly eternal. Empires would come and go, but the people of God and their “city” would remain.

It is somewhat ironic, therefore, that the Catholic Church, which chooses a Pope this week, has been so beset by scandal that its very long-term survivability might be thought to be at stake. Even seemingly eternal institutions, such as the 2,000-year-old Church, require from human beings an orientation that might be compared to the way theologians once viewed the relationship of God and nature. Once it was held that constant effort by God was required to keep the Universe from slipping back into the chaos from whence it came- that the action of God was necessary to open every flower. While this idea holds very little for us in terms of our understanding of nature, it is perhaps a good analog for human institutions, states, and our own personal relationships, which require our constant tending or they give way to mortality.

It is perhaps difficult for us to realize that our own societies are as mortal as the empires of old, and that someday my own United States will be no more. America is a very odd country in respect to its views of time and history. In a society seemingly obsessed with the new and the modern, contemporary debates almost always seek reference and legitimacy on the basis of men who lived and thought over 200 years ago. The Founding Fathers were obsessed with the mortality of states and deliberately crafted a form of government that they hoped might make the United States almost immortal.

Much of the structure of American constitutionalism, where government is divided into “branches” which “check and balance” one another, was based on a particular reading of long-lived ancient systems of government which had something like this tripartite structure, most notably Sparta and Rome. What “killed” a society, in the view of the Founders, was when one element- the democratic, oligarchic-aristocratic, or kingly- rose to dominate all the others. Constitutionally divided government was meant to keep this from happening and therefore would support the survival of the United States indefinitely.

Again, it is a somewhat bitter irony that the very divided nature of American government, which was supposed to help the United States survive into the far future, now seems to be making it impossible for the political class in the US to craft solutions to the country’s quite serious long-term problems, and therefore might someday threaten the very survival divided government was meant to secure.

Anyone interested in the question of the extended survival of their society, indeed of civilization itself, needs to take into account the work of Joseph A. Tainter and his The Collapse of Complex Societies (1988). Here the archaeologist Tainter not only provides us with a “science” that explains the mortality of societies; his viewpoint, I think, also gives us ways to think about, and insight into, the seemingly intractable social, economic, and technological bottlenecks that now confront all developed economies: Japan, the EU/UK, and the United States.

Tainter, in his Collapse, wanted to move us away from the vitalist ideas of the end of civilization seen in thinkers such as Oswald Spengler and Arnold Toynbee. We needed, in his view, to put our finger on the material reality of a society to figure out what conditions most often lead it to dissipate, i.e. to move from a more complex and integrated form, such as the Roman Empire, to a simpler and less integrated form, such as the isolated medieval fiefdoms that followed.

Grossly oversimplified, Tainter’s answer was a dry two-word concept borrowed from economics- marginal utility. The idea is simple if you think about it for a moment. Any society is likely to take advantage of the “low-hanging fruit” first. The best land will be the first to be cultivated, the most easily accessible resources the first to be exploited.

The “fruit,” however, quickly becomes harder to pick- problems become harder for a society to solve, which leads to a growth in complexity. The Romans first tapped the tillable land around the city, but by the end of the Empire the city needed a complex international network of trade and political control to pump grain from the distant Nile Valley into Rome.

Yet as a society deploys more and more complex solutions to its problems it becomes institutionally “heavy” (the legacy of all the problems it has solved in the past), just as problems become more and more difficult to solve. The result is that, at some point, the sheer amount of resources that need to be thrown at a problem to solve it can no longer be mustered, and the only lasting solution becomes to move down the chain of complexity to a simpler form. Roman prosperity and civilization drew in the migration of “barbarian” populations in the north whose pressures would lead to the splitting of the Empire in two and the eventual collapse of its Western half.
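Tainter’s mechanism can be made concrete with a toy model of my own devising (none of the numbers or functions below come from Tainter; they simply illustrate the shape of his argument): if each added “unit” of complexity yields a shrinking benefit while carrying a fixed upkeep cost, net returns rise, peak, and then fall.

```python
# Toy illustration (not Tainter's own math) of diminishing marginal returns.
# Each additional "unit of complexity" a society adds solves problems,
# but the benefit of each new unit shrinks while its upkeep cost stays fixed.

def marginal_benefit(n: int) -> float:
    """Benefit of the n-th unit of complexity: the low-hanging fruit
    (small n) pays off most; later units pay off less and less."""
    return 100.0 / n

UPKEEP_COST = 12.0  # hypothetical fixed cost of maintaining each unit

def net_return(units: int) -> float:
    """Total benefit minus total upkeep after adding `units` of complexity."""
    return sum(marginal_benefit(n) for n in range(1, units + 1)) - UPKEEP_COST * units

# Net returns rise at first, then peak: past the point where the n-th
# unit's benefit drops below its upkeep, every further unit of complexity
# makes the society worse off, and simplification becomes the economical move.
best = max(range(1, 50), key=net_return)
```

With these made-up numbers the peak comes at the eighth unit of complexity (100/n falls below the upkeep cost of 12 at n = 9); past that point the model society is, in Tainter’s terms, paying more for complexity than complexity pays back.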

It would seem that we have broken through Tainter’s problem of marginal utility with the industrial revolution, but we should perhaps not judge so fast. The industrial revolution and all of its derivatives up to our current digital and biological revolutions, replaced a system in which goods were largely produced at a local level and communities were largely self-sufficient, with a sprawling global network of interconnections and coordinated activities requiring vast amounts of specialized knowledge on the part of human beings who, by necessity, must participate in this system to provide for their most basic needs.

Clothes that were once produced in the home of the individual who would wear them are now produced thousands of miles away by workers connected to a production and transportation system that requires the coordination of millions of persons, many of them exercising specialized knowledge. Food that was once grown or raised by the family that consumed it now requires vast systems of transportation and processing, the production of fertilizers from fossil fuels, and the work of genetic engineers to design both crops and domesticated animals.

This gives us an indication of just how far up the chain of complexity we have moved, and I think it leads inevitably to the question of whether such increasing complexity might at some point stall for us, or even be thrown into reverse.

The idea that, despite all the whiz-bang! of modern digital technology, we have somehow stalled out in terms of innovation has recently gained traction. There was the argument made by the technologist and entrepreneur Peter Thiel, at the 2009 Singularity Summit, that the developed world faced real dangers from the Singularity not happening quickly enough. Thiel’s point was that our entire society was built around expectations of exponential technological growth that showed ominous signs of not materializing. I only need to think back to my Social Studies textbooks in the 1980s and their projections of the early 2000s, with their glittering orbital and underwater cities, both of which I dreamed of someday living in, to realize our futuristic expectations are far from having been met. More depressingly, Thiel points out how all of our technological wonders have not translated into huge gains in economic growth, and especially have not resulted in any increase in median income, which has been stagnant since the 1970s.

In addition to Thiel, you had the economist Tyler Cowen, who in his The Great Stagnation (2011) argued compellingly that the real root of America’s economic malaise was that the kinds of huge qualitative innovations seen in the 19th and early 20th centuries- from indoor toilets, to refrigerators, to the automobile- had largely petered out after the low-hanging fruit- the technologies easiest to reach using the new industrial methods- was picked. I may love my iPhone (if I had one), but it sure doesn’t beat being able to sanitarily go to the bathroom indoors, or keep my food from rotting, or travel many miles overland on a daily basis in mere minutes or hours rather than days.

One reason why technological change is perhaps not happening as fast as boosters such as the singularitarians hope, or as fast as our society perhaps needs it to in order to continue functioning in the way we have organized it, can be seen in the comments of the technologist, social critic, and novelist Ramez Naam. In a recent interview for The Singularity Weblog, Naam points out that one of the things believers in the Singularity, or others who hold to ideas of exponential technological growth, miss is that the complexity of the problems technology is trying to solve is also growing exponentially; that is, problems are becoming exponentially harder to solve. It is for this reason that Naam finds the singularitarians’ timeline wildly optimistic. We are a long, long way from understanding the human brain in such a way that it could be replicated in an AI.

The recent proposal of the Obama Administration to launch an Apollo-type project to understand the human brain, along with the more circumspect, EU-funded Human Brain Project/Blue Brain Project, might be seen as attempts to solve the epistemological problems posed by increasing complexity, and as responses to two seemingly unrelated technological bottlenecks stemming from complexity and the problem of diminishing marginal returns.

On the epistemological front the problem seems to be that we are quite literally drowning in data, but are sorely lacking in models by which we can put the information we are gathering together into working theories that anyone actually understands. As Henry Markram, the founder of the Blue Brain Project, stated:

So yes, there is no doubt that we are generating a massive amount of data and knowledge about the brain, but this raises a dilemma of what the individual understands. No neuroscientists can even read more than about 200 articles per year and no neuroscientists is even remotely capable of comprehending the current pool of data and knowledge. Neuroscientists will almost certainly drown in data the 21st century. So, actually, the fraction of the known knowledge about the brain that each person has is actually decreasing(!) and will decrease even further until neuroscientists are forced to become informaticians or robot operators.

This epistemological problem, which was brilliantly discussed by Noam Chomsky in an interview late last year, is related to the very real bottleneck in Artificial Intelligence- the very technology Peter Thiel thinks is essential if we are to achieve the rates of economic growth upon which our assumptions of technological and economic progress depend.

We have developed machines with incredible processing power, and the digital revolution is real, with amazing technologies just over the horizon. Still, these machines are nowhere near doing what we would call “thinking.” Or, to paraphrase the neuroscientist and novelist David Eagleman: the AI Watson might have been able to beat the very best human being at the game Jeopardy!, but what it could not do was answer a question obvious to any two-year-old, like “When Barack Obama enters a room, does his nose go with him?”

Understanding how human beings think, it is hoped, might allow us to overcome this AI bottleneck and produce machines that possess qualities such as our own or better- an obvious tool for solving society’s complex problems.

The other bottleneck a large-scale research project on the brain is meant to solve is the halted development of psychotropic drugs– a product of the enormous and ever-increasing costs of creating such products, itself a product of the complexity of the problem pharmaceutical companies are trying to tackle, namely: how does the human brain work, and how can we control its functions and manage its development? This is especially troubling given the predictable rise in neurological diseases such as Alzheimer’s. It is my hope that these large-scale projects will help to crack the problem of the human brain, especially as it pertains to devastating neurological disorders- let us pray they succeed.

On the broader front, Tainter describes a number of solutions societies have come up with to the problem of marginal utility, two of which are merely temporary and one of which is long-term. The first is for a society to become more complex, integrated, bigger. The old-school way to do this was through conquest, but in an age of nuclear weapons and sophisticated insurgencies the big powers seem unlikely to follow that route. Instead what we are seeing is proposals such as the EU-US free trade area and the Trans-Pacific Partnership, both of which appear to assume that the solution to the problems of globalization is more globalization. The second solution is for a society to find a new source of energy. Many might have hoped this would come in the form of green energy rather than in the form it appears to have taken- shale gas and oil from the tar sands of Canada. In any case, Tainter sees both of these solutions as but temporary respites from the problem of marginal utility.

The only long-lasting solution Tainter sees to diminishing marginal returns is for a society to become less complex, that is, less integrated, more based on what can be provided locally than on sprawling networks and specialization. Tainter wanted to move us away from seeing the evolution of the Roman Empire into the feudal system as the “death” of a civilization. Rather, he sees the societies human beings have built as extremely adaptable and resilient. When the problem of increasing complexity becomes impossible to solve, societies move towards less complexity. It is a solution that strangely echoes that of the “immortal jellyfish,” the Turritopsis nutricula- the only path complex entities have discovered that allows them to survive into something that whispers eternity.

Image description: From the National Gallery in London. “The Cave of Eternity” (1680s) by Luca Giordano. “The serpent biting its tail symbolises Eternity. The crowned figure of Janus holds the fleece from which the Three Fates draw out the thread of life. The hooded figure is Demagorgon who receives gifts from Nature, from whose breasts pours forth milk. Seated at the entrance to the cave is the winged figure of Chronos, who represents Time.”

Big Brother, Big Data and The Forked Path Revisited

This week witnessed yet another example of the distortions caused by the 9/11 Wars on the ideals that underlie the American system of government, and of the ballooning powers and reach of the national security state. In a 16-page Justice Department memo obtained by the NBC News reporter Michael Isikoff, legal justifications were outlined for the extra-judicial killing of American citizens deemed to pose a “significant threat” to the United States. The problem here is who gets to define what such a threat is. The absence of any independent judicial process outside the executive branch that can determine whether the rights of an American citizen- including the presumption of innocence until proven guilty- can be stripped amounts to an enormous increase in executive power that will likely survive the Obama Administration. This is not really news, in that we already knew extra-judicial killings (in the case of at least one known suspect, Anwar al-Awlaki) had already taken place. What was not known was just how sweeping, permanent, and without clear legal boundaries these claims of an executive right to kill American citizens absent proof of guilt actually were. Now we know.

This would be disturbing information if it stood alone, but it does not stand alone. What we have seen since the attacks of 9/11 is the spread of similarly disturbing trends, which have only been accelerated by technological changes such as the growth of Big Data and robotics. Given the news, I thought it might be a good idea to reblog a post I had written back in September where I tried to detail these developments. What follows is a largely unedited version of that original post.

………………………………………………………………………………

The technological ecosystem in which political power operates tends to mark out the possibility space for the kinds of political arrangements, good and bad, that can exist. Orwell’s Oceania and its sister tyrannies were imagined in what was the age of big, centralized media. Here the Party had under its control not only the older technology of the printing press, with the ability to craft and doctor, at will, anything created using print, from newspapers, to government documents, to novels; it also controlled the newer mediums of radio and film, and, as Orwell imagined, would twist those technologies around backwards to serve as spying machines aimed at everyone.

The question, to my knowledge, that Orwell never asked was what the Party was to do with all that data. How was it to store, sift through, make sense of, or locate actual threats within the yottabytes of information that would be gathered by recording almost every conversation and filming or viewing almost every movement of its citizens’ lives? In other words, the Party would have run into the problem of Big Data. Many of the Orwellian developments since 9/11 have come in the form of the state trying to ride the wave of the Big Data tsunami unleashed with the rise of the internet- an attempt to create its own form of electronic panopticon.

In their book Top Secret America: The Rise of the New American Security State, Dana Priest and William Arkin of the Washington Post present a frightening picture of the surveillance and covert state that has mushroomed in the United States since 9/11- a vast network of endeavors that has grown to dwarf, in terms of the cumulative number of programs and operations, similar efforts during the unarguably much more dangerous Cold War. (TS 12)

Theirs is not so much a vision of an America of dark security services controlled behind the scenes by a sinister figure like J. Edgar Hoover as it is one of complexity gone wild. Priest and Arkin paint a picture of Top Secret America as a vast data-sucking machine, vacuuming up every morsel of information with the intention of correctly “connecting the dots” (150) in the hopes of preventing another tragedy like 9/11.

So much money was poured into intelligence gathering after 9/11, into so many different organizations, that no one- not the President, nor the Director of the CIA, nor any other official- has a full grasp of what is going on. The security state, like the rest of the American government, has become reliant on private contractors who rake in stupendous profits. The same corruption that can be found elsewhere in Washington is found here. Employees of the government and the private sector spin round and round in a revolving door between the Washington connections brought by participation in the political establishment and the big-time money of the ballooning world of private security and intelligence. Priest quotes one American intelligence official who had the balls to describe the incestuous relationship between government and private security firms as “a self-licking ice cream cone.” (TS 198)

The flood of money that inundated the intelligence field after 9/11 has created what Priest and Arkin call an “alternative geography”: companies doing covert work for the government that exist in huge complexes, some of which are large enough to contain their very own “cities”- shopping centers, athletic facilities, and the like. To these are added mammoth government-run complexes, some known and others unknown.

Our modern-day Winston Smiths, who work for such public and private intelligence services, are tasked not with the mind-numbing work of doctoring history, but with the equally superfluous job of repackaging the very same information that has been produced by another individual in another organization, public or private, with little hope that either would know the other was working on the same damned thing. All of this would be a mere tragic waste of public money that could be better invested elsewhere, except that it goes beyond that by threatening the very freedoms these efforts are meant to protect.

Perhaps the pinnacle of the government’s Orwellian version of a Google-Facebook mashup is the gargantuan supercomputer data center in Bluffdale, Utah, built and run by the premier spy agency in the age of the internet- the National Security Agency, or NSA. As described by James Bamford for Wired Magazine:

In the process—and for the first time since Watergate and the other scandals of the Nixon administration—the NSA has turned its surveillance apparatus on the US and its citizens. It has established listening posts throughout the nation to collect and sift through billions of email messages and phone calls, whether they originate within the country or overseas. It has created a supercomputer of almost unimaginable speed to look for patterns and unscramble codes. Finally, the agency has begun building a place to store all the trillions of words and thoughts and whispers captured in its electronic net.

It had been thought that domestic spying by the NSA, under a super-secret program with the Carl Saganesque name Stellar Wind, had ended during the G.W. Bush administration, but if the whistleblower William Binney, interviewed in this chilling piece by Laura Poitras of the New York Times, is to be believed, the certainly unconstitutional program remains very much in existence.

The bizarre thing about this program is just how wasteful it is. After all, don’t private companies such as Facebook and Google already possess the very same kinds of data trails that would be provided by such obviously unconstitutional efforts as those at Bluffdale? Why doesn’t the US government just subpoena the internet and telecommunications companies that already track almost everything we do for commercial purposes? The US government, of course, has already tried to turn the internet into a tool of intelligence gathering, most notably with the stalled Cyber Intelligence Sharing and Protection Act, or CISPA, and perhaps it is building Bluffdale in anticipation that such legislation will fail, that however it is changed it might not be to the government’s liking, or because it doesn’t want to be bothered with the need to obtain warrants or with constitutional niceties such as our protection against unreasonable search and seizure.

If such behemoth surveillance instruments fulfill the role of the telescreens and hidden microphones in Orwell’s 1984, then the role of the only group in the novel whose name actually reflects what it is- The Spies, children who watch their parents for unorthodox behavior and turn them in- is taken today by the American public itself. In post-9/11 America it is local law enforcement, neighbors, and passersby who are asked to “report suspicious activity.” People who actually do report suspicious activity have their observations and photographs recorded in an ominous-sounding database, one Orwell himself might have named, called Guardian. (TS 144)

As Priest writes:

Guardian stores the profiles of tens of thousands of Americans and legal residents who are not accused of any crime. Most are not even suspected of one. What they have done is appear, to a town sheriff, a traffic cop, or even a neighbor, to be acting suspiciously. (TS 145)

Such information is reported to, and initially investigated by, the personnel of another sort of data collector- the “fusion centers” created in every state after 9/11. These fusion centers are often located in rural states where their employees have literally nothing to do. They tend to be staffed by persons without intelligence backgrounds, who instead hailed from law enforcement, because those with even the bare minimum of foreign intelligence experience were sucked up by the behemoth intelligence organizations, both private and public, that have spread like mold around Washington D.C.

Into this vacuum of largely non-existent threats came “consultants” such as Ramon Montijo and Walid Shoebat, who lectured fusion center staff on the fantastical plot of Muslims to establish Sharia Law in the United States (TS 271-272)- a story as wild as the concocted boogeymen of Goldstein and the Brotherhood in Orwell’s dystopia.

It isn’t only mosques or Islamic groups that find themselves spied upon by overeager local law enforcement and sometimes highly unprofessional private intelligence firms. Completely non-violent political groups, such as ones in my native Pennsylvania, have become the targets of “investigations.” In 2009 the private intelligence firm the Institute for Terrorism Research and Response compiled reports for state officials on a wide range of peaceful political groups that included: “The Pennsylvania Tea Party Patriots Coalition, the Libertarian Movement, anti-war protesters, animal-rights groups, and an environmentalist dressed up as Santa Claus and handing out coal-filled stockings” (TS 146). A list just about politically broad enough to piss everybody off.

Like the fusion centers, or as part of them, data-rich crime centers such as the Memphis Real Time Crime Center are popping up all over the United States. Local police officers now suck up streams of data about the environments in which they operate and are able to pull that data together to identify suspects: today by scanning license plates, and soon enough by scanning human faces from a distance, as in Arizona, where the Maricopa County Sheriff's Office was creating up to 9,000 biometric digital profiles a month (TS 131).

Sometimes crime centers used the information gathered for massive sweeps, arresting over a thousand people at a clip. The result was an overloaded justice and prison system that couldn't handle the caseload (TS 144) and, no doubt, as was the case in territories occupied by the US military, an even more alienated and angry local population.

From one perspective Big Data would seem to make torture more, not less, likely, as all information that can be gathered from suspects, whatever their station, becomes important in a way it wasn't before: a piece in a gigantic, electronic puzzle. Yet technological developments outside of Big Data appear to point away from torture as a way of gathering information.

"Controlled torture", the phrase burns in my mouth, has always been the consequence of the unbridgeable space between human minds. Torture attempts to break through the wall of privacy we possess as individuals through physical and mental coercion. Big Data, whether of the commercial or security variety, hates privacy because it gums up the capacity to gather more and more information for Big Data to become what it so desires: Even Bigger Data. The dilemma for the state, or, in the case of the Inquisition, the Church, is that once the green light has been given to human sadism it is almost impossible to control. Torture, or the knowledge of torture inflicted on loved ones, breeds more and more enemies.

Torture's ham-fisted and outwardly brutal methods are today going hopelessly out of fashion. They are the equivalent of rifling through someone's trash or breaking into their house to obtain useful information. Much better to have people tell you what you need to know because they "like" you.

In that vein, Priest describes some of the new interrogation technologies being developed by the government and private security technology firms. One such technology is an "interrogation booth" that contains avatars with characteristics (such as an older Hispanic woman) that have been psychologically studied to produce more accurate answers from those questioned. There are ideas to replace the booth with a tiny projector mounted on a soldier's or policeman's helmet to produce the needed avatar at a moment's notice. There was also a "lie detecting beam" that could tell, from a distance, whether someone was lying by measuring minuscule changes on a person's skin (TS 169). But if the security services demand transparency from those they seek to control, they offer up no such transparency themselves. This is true not only of the notoriously secretive security state itself, but also of the way the US government explains and seeks support for its policies in the outside world.

Orwell was deeply interested in the abuse of language, and I think here too the actions of the American government would give him much to chew on. Ever since the disaster of the war in Iraq, American officials have been obsessed with the idea of "soft power": the fallacy that resistance to American policy was a matter of "bad messaging" rather than the policy itself. Sadly, this messaging was often something far from truthful, and often fell under what the government termed "influence operations", which, according to Priest:

Influence operations, as the name suggests, are aimed at secretly influencing or manipulating the opinions of foreign audiences, either on an actual battlefield- such as during a feint in a tactical battle- or within civilian populations, such as undermining support for an existing government or terrorist group (TS 59)

Another great technological development over the past decade has been the revolution in robotics, which, like Big Data, is brought to us by the ever expanding information processing powers of computers, the product of Moore's Law.

Since 9/11 multiple forms of robots have been developed, perfected, and deployed by the military, intelligence services, and private contractors, the most discussed and controversial of which are flying drones. It is with these tools of covert warfare, and in his quite sweeping understanding and application of executive power, that President Obama has been even more Orwellian than his predecessor.

Obama may have ended the torture of prisoners captured by American soldiers and intelligence officials, and he certainly showed courage and foresight in the assassination of Osama Bin Laden, a fact over which the world can breathe a sigh of relief. The problem is that he has allowed, indeed propelled, the expansion of instruments of American foreign policy that are largely hidden from the purview and control of the democratic public. In addition to the surveillance issues above, he has put forward a sweeping and quite dangerous interpretation of executive power in the form of the indefinite detention without trial found in the NDAA, engaged in the extrajudicial killing of American citizens, and asserted the prerogative, questionable under both the constitution and international law, to launch attacks, both covert and overt, on countries with which the United States is not officially at war.

In the words of Conor Friedersdorf of The Atlantic, writing on the unprecedented expansion of executive power under the Obama administration and comparing these very real and troubling developments to the paranoid delusions of right-wing nuts, who seem more concerned with fantastical conspiracy theories such as the Social Security Administration buying hollow-point bullets:

… the fact that the executive branch is literally spying on American citizens, putting them on secret kill lists, and invoking the state secrets privilege to hide their actions doesn't even merit a mention [by the right-wing].

Perhaps surprisingly, the technologies created in the last generation seem tailor-made for the new types of covert war the US is now choosing to fight. This can perhaps best be seen in the ongoing covert war against Iran, which has used not only drones but brand new forms of weapons such as the Stuxnet worm.

The questions posed to us by the militarized versions of Big Data, new media, robotics, and spyware/computer viruses are the same as those these phenomena pose in the civilian world. Big Data: does it actually provide us with a useful map of reality, or instead drown us in mostly useless information? In analog to the question of profitability in the economic sphere: does Big Data actually make us safer? New media: how is the truth to survive in a world where seemingly any organization or person can create their own version of reality? Doesn't the lack of transparency by corporations or the government give rise to all sorts of conspiracy theories in such an atmosphere, and isn't it ultimately futile, and liable to backfire, for corporations and governments to try to shape all these newly enabled voices to their liking through spin and propaganda? Robotics: in analog to the question of what it portends for the world of work, what is it doing to the world of war? Is robotics making us safer, or giving us a false sense of security and control? Is it engendering an over-readiness to take risks because we have abstracted away the very human consequences of our actions, at least in terms of the risks to our own soldiers? And in terms of spyware and computer viruses: how open should our systems remain, given their vulnerability to those who would use this openness for ill ends?

At the very least, in terms of Big Data, we should have grave doubts. The kind of Facebook-from-hell the government has created didn't seem all that capable of actually pulling information together into a coherent, much less accurate, picture. Much like their less technologically enabled counterparts who missed the collapse of the Eastern Bloc and the fall of the Soviet Union, the new internet-enabled security services missed the world-shaking event of the Arab Spring.

The problem with all of these technologies, I think, is that they are methods for treating the symptoms of a diseased society rather than the disease itself. But first let me take a detour through Orwell's vision of the future of capitalist, liberal democracy seen from his vantage point in the 1940s.

Orwell, and this is especially clear in his essay The Lion and the Unicorn, believed the world was poised between two stark alternatives: the Socialist one, which he defined in terms of social justice, political liberty, equal rights, and global solidarity, and a Fascist or Bolshevist one, characterized by the increasingly brutal actions of the state in the name of caste, both domestically and internationally.

He wrote:

Because the time has come when one can predict the future in terms of an “either–or”. Either we turn this war into a revolutionary war (I do not say that our policy will be EXACTLY what I have indicated above–merely that it will be along those general lines) or we lose it, and much more besides. Quite soon it will be possible to say definitely that our feet are set upon one path or the other. But at any rate it is certain that with our present social structure we cannot win. Our real forces, physical, moral or intellectual, cannot be mobilised.

It is almost impossible for those of us in the West who have been raised to believe that capitalist liberal democracy is the end of the line in terms of political evolution to remember that, within the lifetimes of people still with us (such as my grandmother, who tends her garden now in the same way she did in the 1940s), this whole system seemed to have been swept into the dustbin of history and the future seemed to lie elsewhere.

What the brilliance of Orwell missed, the penetrating insight of Aldous Huxley in his Brave New World caught: that a sufficiently prosperous society would lull its citizens to sleep, and in doing so rob them both of the desire for revolutionary change and of their very freedom.

As I have argued elsewhere, Huxley's prescience may depend on the kind of economic growth and general prosperity that was the norm after the Second World War. What worries me is that if the pessimists are proven correct, if we are in for an era of resource scarcity, population pressures, stagnant economies, and chronic unemployment, then Huxley's dystopia will give way to a more brutal Orwellian one.

This is why we need to push back against the Orwellian features that have crept upon us since 9/11. The fact is we are almost unaware that we are building the architecture for something truly dystopian, and we should pause to think before it is too late.

To return to the question of whether the new technologies help or hurt here: It is almost undeniable that all of the technological wonders that have emerged since 9/11 are good at treating the symptoms of social breakdown, both abroad and at home. They allow us to kill or capture persons who would harm largely innocent Americans, or catch violent or predatory criminals in our own country, state, and neighborhood. Where they fail is in getting to the actual root of the disease itself.

America would much better serve its foreign policy interests were it to better align itself with the public opinion of the outside world, insofar as it could maintain its long-term interests and continue to guarantee the safety of its allies. Much better than the kind of "information operation" supported by the US government to portray a corrupt, and now deposed, autocrat like Yemen's Ali Abdullah Saleh as "an anti-corruption activist" would be actual assistance by the US and other advanced countries in, I dunno, fighting corruption. Much better Western support for education and health in the Islamic world than the kinds of interference in the internal political development of post-revolutionary Islamic societies driven by geopolitical interest and practiced by the likes of Iran and Saudi Arabia.

This same logic applies inside the United States as well. It is time to radically roll back the Orwellian advances that have occurred since 9/11. The danger of the war on terrorism was always that it would become like Orwell's "continuous warfare", perpetually existing in spite of, rather than because of, the level of threat. We are in danger of investing so much in our security architecture, bloated to a scale that dwarfs enemies whom we have blown up in our own imaginations into monstrous shadows, that we are failing to invest in the parts of our society that will actually keep us safe and prosperous over the long term.

In Orwell's Oceania the poor, the "proles", were largely ignored by the surveillance state. There is a danger that, with the movement of what were once advanced technologies (drones, robots, biometric scanners, super-fast data-crunching computers, geo-location tools) into the hands of local law enforcement, we will move even further domestically in the direction of treating the symptoms of social decay rather than dealing with the underlying conditions that propel it.

The fact of the matter is that the very equality, the "earthly paradise", a product of democratic socialism and technology, that Orwell thought was at our fingertips has retreated farther and farther from us. The reasons for this are multiple. To name just a few: financial concentration, automation, the end of the "low hanging fruit" of industrialization and the high growth rates it brought, and the crisis of complexity and the problem of ever more marginal returns. This retreat, if it lasts, would likely tip the balance from Huxley's stupefaction by consumption to Orwell's more brutal dystopia, initiated by terrified elites attempting to keep a lid on things.

In a state of fear and panic we have blanketed the world with a sphere of surveillance, propaganda, and covert violence of which Big Brother himself would be proud. This is shameful, and threatens not only to undermine our very real freedom, but to usher in a horribly dystopian world with some resemblance to the one outlined in Orwell's dark imaginings. We must return to the other path.

The Dangers of Religious Rhetoric to the Transhumanist Project

Thomas Cole, Expulsion from the Garden of Eden

When I saw that the scientist and science-fiction novelist David Brin had given a talk at a recent Singularity Summit with the intriguing title "So you want to make gods? Now why would that bother anybody?", my hopes for the current intellectual debate between science and religion, and between rival versions of our human future, were instantly raised. Here was a noted singularitarian, I thought, who might raise questions about how the framing of the philosophy surrounding the Singularity was not only alienating to persons of more traditional religious sentiments, but threatened to give rise to a 21st century version of the culture wars, one that would make current debates over teaching evolution in schools, or the much more charged disputes over abortion, look quaint, and that could ultimately derail us from many of the technological achievements that lie seemingly just over the horizon and promise to vastly improve and even transform the human condition.

Upon listening to Brin’s lecture those hopes were dashed.

Brin's lecture is a seemingly light talk to a friendly audience, punctuated by jokes, some of them lame and therefore charming, but his topic is serious indeed. He defines his audience as "would-be god-makers" ("indeed some of you want to become gods") and admonishes them to avoid the fate of predecessors such as Giordano Bruno, who was burned at the stake.

The suggestion Brin makes for how singularitarians are to avoid the fate of Bruno, a way to prevent the conflict between religion and science, seems, at first, like humanistic and common-sense advice: rather than outright rejection and even ridicule of the religious, singularitarians are admonished to actually understand the religious views of their would-be opponents, and especially the cultural keystone of their religious texts.

Yet the purpose of such understanding soon becomes clear. Knowledge of the Bible, in Brin's eyes, should give singularitarians the ability to reframe their objectives in Christian terms. Brin lays out some examples to explain his meaning. His suggestion that the mythological Adam's first act of naming things defines the purpose of humankind as a co-creator with God is an interesting and probably largely non-controversial one. It's when he steps into the larger Biblical narrative that things get tricky.

Brin finds the seeming justification for the expulsion of Adam and Eve from the Garden of Eden to be particularly potent for singularitarians:

And the LORD God said, Behold, the man is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live for ever. Genesis 3:22 King James Bible


Brin thinks this passage can be used as a Biblical justification for the singularitarian aim of personal immortality and god-like powers. The debate, he thinks, is not over "can we?" but merely over "when should we?" attain these ultimate ends.

The other Biblical passage Brin thinks singularitarians can use to their advantage in their debate with Christians is found in the story of the Tower of Babel.  

And the LORD said, Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do.  Genesis 11:6 King James Bible

As in the story of the expulsion from the Garden of Eden, Brin thinks the story of the Tower of Babel can be used to illustrate that human beings, according to Christianity's own scriptures, have innately god-like powers. The debate between singularitarians and Christians is, therefore, largely a matter of when and how human beings reach their full God-given potential.

Much of Brin's lecture is animated by an awareness of the current conflict between science and religion, and he constructs a wider historical context to explain this tension. For him, new ideas and technologies have the effect of destabilizing hierarchy, and have always given rise to a counter-revolution supported and egged on by oligarchs. The United States is experiencing another of these oligarchic putsches, as evidenced in books such as The Republican War on Science. Brin thinks that Fermi's Paradox, the "silence" of the universe, the seeming lack of intelligent civilizations other than our own, might be a consequence of the fact that counter-revolutionaries, or "grouches", tend to win their struggle with the forces of progress. His hope is that our time has come and that this is the moment when those allied under the banner of progress might win.

The question which gnawed at me after listening to Brin's speech was whether his prescriptions really offered a path to diminishing the conflict between religion and science, or were merely a means to its further exacerbation.

The problem, I think, is that however brilliant a physicist and novelist Brin might be, he is a rather poor religious scholar and even worse as a historian, political scientist, or sociologist.

Part of the problem stems from the fact that Brin appears less interested in opening up a dialogue between singularitarians and other religious communities than in training them in, as he terms it, verbal "judo", so as to neutralize and proselytize to their most vociferous theological opponents: fundamentalist Christians. The whole thing put me in mind of how the early Jesuits were taught to argue their non-Catholic opponents into the ground. Be that as it may, the Christianity Brin deals with is of a literalist sort, in which stories such as the expulsion from the Garden of Eden or the Tower of Babel are to be taken as the actual word of God. But this literalism is primarily a feature of some versions of Protestantism, not Christianity as a whole.

The idea that the book of Genesis is literally true is not the teaching of the Catholic, Anglican, or large parts of the Orthodox Church, the three of which make up the bulk of Christians worldwide. Quoting scripture back at these groups won't get a singularitarian anywhere. Rather, they would likely find themselves in the discussion they should be having: a heated philosophical discussion over humankind's role and place in the universe, in which the very idea of "becoming a god" is ridiculous because God is understood in a non-corporeal, indefinable way, indeed as something sometimes more akin to our notion of "nothing" than to anything else we can speak of. This is the story Karen Armstrong tells in her 2009 book The Case for God.

The result of framing the singularitarian argument in literalist terms may be the alienation of what should be considered more pro-science Christian groups, who are much less interested in aligning the views and goals of science with those found directly in the Bible than in finding a way to navigate our technologically evolving society while keeping intact the essence of their particular culture of religious practice and the ethical perspective they have developed over millennia.

If Brin goes astray in his understanding of religion, he misses even more essential elements when viewed through the eyes of a historian. He seems to think that doctrinal disputes over the meaning of religious texts are less dangerous than disputes between different and non-communicating systems of belief, but that's not what history shows. Protestants and Catholics murdered one another for centuries even when the basic outlines of their interpretations of the Bible were essentially the same. Today, it seems not a month goes by without some report of Sunni-on-Shia violence or vice versa. Once the initial shock of singularitarians quoting the Bible wears off, Christian fundamentalists seem likely to be incensed that outsiders are stealing "their" book for a quite alien purpose.

It's not just that Brin's historical understanding of inter- and intra-religious conflict is a little off; it's that he perpetuates the myth of eternal conflict between science and religion in the supposed name of putting an end to it. That myth, which includes the sad tale of the visionary Giordano Bruno whose fate Brin wants his listeners to avoid, dates back no further than the late 19th century, when it was created by staunch secularists such as Robert Ingersoll and John William Draper (Karen Armstrong, The Case for God, pp. 251-252).

Yes, it is true that the kinds of naturalistic explanations that constitute modern science first emerged within the context of the democratic city-states of ancient Greece. But if one takes the case of the most important martyr for the freedom of thought in history, Socrates, as of any importance, one sees that science and democracy are not of necessity joined at the hip. The relationship between science, democracy, and oligarchy in the early modern period is also complex and ambiguous.

Take perhaps the most famous case of religion's assault on science: Galileo. The moons which Galileo discovered orbiting Jupiter are known today as the Galilean moons. As was pointed out by Michael Nielsen (@27 min), what is less widely known is that Galileo initially named them the Medicean moons, after his very oligarchic patrons in the Medici family.


Battles over science in the early modern period are better seen as conflicts between oligarchic groups than as a conflict in which science stood in support of democratizing forces that oligarchs sought to contain. Science indeed benefited from this competition, and some, such as Paul A. David, argue that the scientific revolution would have been unlikely without the elaborate forms of patronage by the wealthy of scientific experiments and, more importantly, mass publication.

The "new science" that emerged in the early modern period did not necessarily give rise to liberation narratives either. Newton's cosmology was used in England to justify the rule of the "higher" over the "lower" orders, just as the court of France's Sun King had its nobles placed in well-defined "orbits" "circling" their sovereign. (Karen Armstrong, The Case for God, p. 216)

Brin's history, and his reading of current and near-future political and social developments, seem almost Marxist in the sense that the pursuit of scientific knowledge and technological advancement will inevitably lead to further democratization. Such a "faith" I believe to be dangerous. If science and technology prove to be democratizing forces, it will be because we have chosen to make them so, and a backlash is indeed possible. Such a "counter-revolution" can most likely be averted not by technologists taking on yet more religious language and concepts and proselytizing to the non-converted, but by putting some distance between the religious rhetoric of singularitarians and those who believe in the liberating and humanist potential of emerging technologies. If transhumanists frame their goals as the extension of the healthy human lifespan to the longest length possible, and the increase of available intelligence, both human and artificial, so as to navigate and solve the problems of our complex societies, almost everyone would assent. Whereas if transhumanists continue to be dragged into fights with the religious over goals such as "immortality", "becoming gods", or "building gods" (an idea that makes as much sense as saying you were going to build the Tao or design Goodness), we might find ourselves in a 21st century version of a religious war.