The Earth’s Inexplicable Solitude

Throw your arms wide out to represent the span of all of Earthly time. Our planet forms at the tip of your left arm’s longest finger, and the Cambrian begins at the wrist of your right arm. The rise of complex life lies in the palm of your right hand, and, if you choose, you can wipe out all of human history “in a single stroke with a medium-grained nail file.”

Lee Billings, Five Billion Years of Solitude (145)  

For most of our days we live in the world of Daniel Kahneman’s experiencing self. What we pay attention to is whatever is right in front of us, which can range from the pain of hunger to the boredom of cubicle walls. Nature has probably wired us this way, the Stone Age hunter-gatherer still in our heads, for whom failure to focus on the task at hand came with the risk of death. A good deal of modern society, and especially contemporary technology such as smartphones, leverages this presentness and leaves us trapped in its muck, a reality Douglas Rushkoff brilliantly lays out in his Present Shock.

Yet, if the day-to-day world is what rules us and is most responsible for our happiness, our imagination has given us the ability to leap beyond it. We can at a whim visit our own personal past or imagined future, but we spend even more of our time inhabiting purely imagined worlds. Indeed, perhaps Kahneman’s “remembering self” should be replaced by an imagining self, for our memories aren’t all that accurate to begin with, and much of remembering takes the form of imagining ourselves in a sort of drama or comedy in which we are the protagonist and star.

Sometimes imagined worlds can become so mesmerizing they block out the world in front of our eyes. In Plato’s cave it is the real world that is thought of as shadows and the ideas in our heads that are real and solid. Plato was taking a leap not just in perception but in time. Not only is it possible to roll out and survey the canvas of our own past and possible future, or the past and future of the world around us, it is possible to leap over the whole thing and end up looking down at the world from the perspective of eternity. And looking down meant literally down, with timeless eternity located in what for Plato and his Christian and Muslim descendants was the realm of the stars above our heads.

We can no longer find a physical location for eternity, but rather than make time shallow this has instead allowed us to grasp its depth; that is, we have a new appreciation for how far the canvas of time stretches out behind us and in front of us. Some may want an earth that is only thousands of years old, as was evident in the recent much-publicized debate between the creationist Ken Ham and Bill Nye, but even Pat Robertson now believes in deep time.

Recently the Long Now Foundation held a 20th-anniversary retrospective, “The Long Now, Now,” a discussion between two of the organization’s founders- Brian Eno and Danny Hillis. The Long Now Foundation may not be dedicated to deep time, but its 10,000-year bookends, looking that far back and that far ahead, still double the past time horizon of creationists, and given the association between creationism and ideas of impending apocalypse, no doubt comparatively add millennia to the sphere of concern regarding the human future as well.

Yet, as suggested above, creationists aren’t the only ones who can be accused of having a diminished sense of time. Eno acknowledged that the genesis of the Long Now Foundation and its project of the 10,000-year clock stemmed from his experience living in “edgy Soho,” where he found the idea of “here” constrained to just a few blocks rather than New York or the United States, and the idea of “now” limited to at most a few days or weeks in front of one’s face. This was, as Eno notes, before the “Wall Street crowd” muscled its way in. High-speed traders have now compressed time to scales so small that human beings can’t even perceive them.

What I found most interesting about the Eno-Hillis discussion was how they characterized their expanded notion of time, something they credited not merely to the clock project but to their own perspective gained from age. Both of their time horizons had expanded forward and backward and the majority of what they now read was history despite Eno’s background as a musician and Hillis’ as an engineer. Hillis’ study of history had led him to the view that there were three main ways of viewing the human story.

For most of human history our idea of time was cyclical- history wasn’t going anywhere but round and round. A second way of viewing history was that it was progressive- things were getting better and better- a view which had its most recent incarnation beginning in the Enlightenment and was now, in both Hillis and Eno’s view, coming to a close. For both, we were entering a stage where our understanding of the human story was of a third type, “improvisational,” in which we were neither moving relentlessly forward nor repeating but had to “muddle” our way through, with some things getting better and others worse, but no clear understanding as to where we might end up.

Still, if we wish to reflect on deep time, even 10,000 years is not nearly enough. A great recent example of such reflection is Lee Billings’ Five Billion Years of Solitude, which, though it is written as a story of our search for life outside of the solar system, is just as much, or more, a meditation on the depth of past and future.

When I was a kid there were 9 known planets, all within our solar system and none beyond; now, though we have lost poor Pluto, we have discovered over a thousand planets orbiting suns other than our own, with estimates for the Milky Way alone on the order of 100 billion. It is a momentous change few of us have absorbed, and much of Five Billion Years of Solitude reflects upon our current failure to value these discoveries, or to respond to the nature of the universe that has been communicated by them. It is also a reflection on our still present solitude: the very silence of a universe believed to be fertile soil for life may hint that no civilization has ever survived long enough, or devoted itself in earnest enough, to reach into the beyond.

Perhaps our own recent history provides some clues explaining the silence. Our technology has taken on a much different role than what Billings imagined as a child mesmerized by the idea of humans breaking out beyond the bounds of earth. His pessimism is captured best not by the funding cutbacks, the withdrawal of public commitment, or the cancellation of everything from SETI to NASA’s Terrestrial Planet Finder (TPF) and the ESA’s Darwin, but in Billings’ conversations with Greg Laughlin, an astrophysicist and planet hunter at UC Santa Cruz. Laughlin was now devoting part of his time, and the skills he had learned as a planet hunter, to commodity trading. At which Billings lamented:

The planet’s brightest scientific minds no longer leveraged the most powerful technologies to grow and expand human influence far out beyond earth, but to sublime and compress our small isolated world into an even more infinitesimal, less substantial state. As he described for me the dark arts of reaping billion dollar profits from sub-cent scale price changes rippling at near light-speed around the globe, Laughlin shook his head in quiet awe. Such feats, he said, were “much more difficult than finding an earth-like exoplanet”. (112)

Billings finds other, related, possible explanations for our solitude as well. He discusses the thought experiment of UC San Diego’s Tom Murphy, who tried to extrapolate the world’s increasing energy use into the future at its historical rate of 2.3 percent per year. To continue to grow at that rate, which the United States has done since the middle of the seventeenth century, we would have to encase every star in the Milky Way galaxy within an energy-absorbing Dyson sphere within 2,500 years. At which Billings concludes:

If technological civilizations like ours are common in the universe, the fact that we have yet to see stars or entire galaxies dimming before our eyes beneath starlight-absorbing veneers of Dyson spheres suggests that our own present era of exponential growth may be anomalous, not only to our past, but also to our future.
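Murphy’s extrapolation is easy to reproduce. The sketch below uses ballpark figures that are my own assumptions rather than numbers from the book- roughly 18 terawatts of present human power use, a solar luminosity of about 3.8 × 10²⁶ watts, and on the order of 100 billion Sun-like stars in the Milky Way- to ask how long 2.3 percent annual growth would take to reach the entire galaxy’s stellar output:

```python
import math

# Assumed ballpark figures (my assumptions, not values quoted in the book):
current_power_w = 18e12      # ~18 TW, rough present-day human power use
solar_output_w = 3.8e26      # luminosity of the Sun in watts
stars_in_galaxy = 100e9      # order-of-magnitude count of Sun-like stars
galaxy_output_w = solar_output_w * stars_in_galaxy

growth_rate = 0.023          # Murphy's historical 2.3% per year

# Solve current * (1 + r)^t = galaxy for t, the number of years until
# exponential growth demands the whole galaxy's stellar output.
years = math.log(galaxy_output_w / current_power_w) / math.log(1 + growth_rate)
print(round(years))          # on the order of 2,500 years
```

However the starting figures are shaded, the logarithm flattens the result: even changing the inputs by a factor of ten shifts the answer by only about a century, which is why the 2,500-year horizon is so robust.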

Perhaps even with a singularity we cannot continue the exponential trend lines we have been on since the industrial revolution. Technological civilization may peak much closer to our own level of advancement than we realize, or may more often than not destroy itself, but, if the earth is any example, life itself once established is incredibly resilient.

As Billings shows us, in the depths of time the earth has been a hothouse and a ball of ice with glaciers extending to the equator. Individual species and even whole biomes may disappear under the weight of change and shocks, but life itself holds on. If our current civilization proves suicidal, we will not be the first form of life that has so altered the earthly environment that it destroyed both itself and much of the rest of life on earth.

In this light Billings discusses the discovery of the natural gas fields of the Marcellus Shale and the explosive growth of fracking- the shattering of the earth using water under intense pressure- which, while it has been an economic boon to my beloved Pennsylvania, and produces less greenhouse gas when burned than demon coal, presents both short-term and longer-term dangers.

The problem with the Marcellus is that it is merely the largest of many such shale gas fields located all over the earth. Even if natural gas is a less potent source of greenhouse gases than coal, it still contributes to global warming, and its very cheapness may delay the move away from fossil fuels altogether that is necessary if we are to avoid potentially catastrophic levels of warming.

The Marcellus was created over eons by anaerobic bacteria trapped in underwater mountain folds, which released hydrogen sulfide toxic to almost any form of life, leading to a vast accumulation of carbon as the dead bacteria could no longer be decomposed. Billings muses whether we ourselves might be just another form of destructive bacteria.

Removed from its ancient context, the creation of the Marcellus struck me as eerily familiar. A new source of energy and nutrients flows into an isolated population. The population balloons and blindly grows, occasionally crashing when it surpasses the carrying capacity of its environment. The modern drill rigs shattering stone to harvest carbon from boom- and- bust waves of ancient death suddenly seemed like echoes, portents of history repeating itself on the grandest of scales. (130)

Technological civilization does not seem to be a gift to life on the planet on which it emerges so much as a curse and danger- until, that is, the star upon which life depends itself becomes a danger or, through stellar death, no longer produces the energy necessary for life. Billings thinks we have about 500 million years before the sun heats up so much that the earth loses all its water. Life on earth will likely only survive the warming sun if we or our descendants do, whether we literally tow the planet to a more distant orbit or settle earthly life elsewhere, but in the meantime the biosphere will absorb our hammer blows and shake itself free of us entirely if we cannot control our destructive appetites.

Over the very, very long term the chain of life that began on earth almost four billion years ago will only continue if we manage to escape our solar system entirely, but for now the quest to find other living planets is less a matter of finding a new home than it is about putting the finishing touches on the principle of Copernican Mediocrity, the idea that there is nothing especially privileged about earth, and, above all, ourselves.

And yet, the more we learn about the universe, the more it seems that the principle of Copernican Mediocrity will itself need to be amended. In the words of Billings’ fellow writer and planet hunter Caleb Scharf, the earth is likely “special but not significant.” Our beloved sun burns hotter than most stars, our gas giants are larger and farther from our parent star than is typical, our stabilizing moon unlikely. How much these rarity factors play into the development of life, advanced life, and technological civilization is anybody’s guess, and answering this question is one of the main motivations behind the study of exoplanets and the search for evidence of technological civilization beyond earth. Yet Billings wants to remind us that even existing at all is a very low probability event.

Only “the slimmest fraction of interstellar material is something so sophisticated as a hydrogen atom. To simply be any piece of ordinary matter- a molecule, a wisp of gas, a rock, a star, a person- appears to be an impressive and statistically unlikely accomplishment.” (88) Astrophysicists’ ideas of the future of the universe seem to undermine Copernican mediocrity as well, for, if their models are right, the universe will spend most of its infinite history not only without stars and galaxies and people, but without even stable atoms. Billings again laments:

Perhaps it’s just a failure of imagination to see no hope for life in such a bleak, dismal future. Or, maybe, the predicted evolution of the universe is a portent against Copernican mediocrity, a sign that the bright age of bountiful galaxies, shining stars, unfolding only a cosmic moment after the dawn of all things, is in fact rather special. (89)

I think this failure of imagination stems from something of a lack of gratitude on the part of human beings, and is based on a misunderstanding that for something to be meaningful it needs to last “forever.” The glass, for me, is more than half-full, for even given the dismal views of astrophysicists on the universe’s future, there is still as much as 100 billion years left for life to play out on its stage. And life and intelligence in this universe will likely not be the last.

Billings himself captures the latter point. The most prominent theory of how the Big Bang occurred, the “inflationary model,” predicts an infinity of universes- the multiverse. Echoing Giordano Bruno, he writes:

Infinity being, well, infinite, it would follow that the multiverse would host infinitudes of living beings on a limitless number of other worlds. (91)

I care much less that the larger infinity of these universes is lifeless than that an infinity of living worlds will exist as well.

As Billings points out, this expanded canvas of time and decentering of ourselves is a return to the philosophy of Democritus, which has come down to us especially through Lucretius’ On the Nature of Things- the point being that one of the ways to avoid anxiety and pressure in our day-to-day world is to remember how small we and our problems are in the context of the big picture.

Still, one is tempted to ask what this vastly expanded canvas, both in time past and future and in the potential number of sentient, feeling beings, means for the individual human life.

In a recent interview in The Atlantic, author Jennifer Percy describes how she was drawn away from physics and towards fiction because fiction allowed her to think through questions about human existence that science could not. Her father had looked on the view of human beings as insignificant with a glee she could not share.

He flees from the messy realm of human existence, what he calls “dysfunctional reality” or “people problems.” When you imagine that we’re just bodies on a rock, small concerns become insignificant. He keeps an image above his desk, taken by the Hubble space telescope, that from a distance looks like an image of stars—but if you look more closely, they are not stars, they are whole galaxies. My dad sees that, imagining the tiny earth inside one of these galaxies—and suddenly, the rough day, the troubles at work, they disappear.

The kind of diminishment of the individual human life that Percy’s father found comforting, she instead found terrifying and almost nihilistic. Upon encountering fiction such as Lawrence Sargent Hall’s The Ledge, Percy realized what fiction could do:

It helped me formulate questions about how the immensity and cruelty of the universe coexists with ordinary love, the everyday circumstances of human beings. The story leaves us with an image of this fisherman caught pitilessly between these two worlds. It posed a question that became an obsession, and that followed me into my writing: what happens to your character when nature and humanity brutally encounter one another?

Trying to think and feel our way through this tension- knowing that we and our concerns are so small, but our feelings so big- is perhaps the best we can do. Escaping the tedium and stress of the day through the contemplation of the depth of time and space is, no doubt, a good thing, but it would be tragic to use such immensities as a means of creating space between human hearts, or of no longer finding the world that exists between and around us to be one of exquisite beauty and immeasurable value- a world that is uniquely ours to wonder at and care for.

Erasmus reads Kahneman, or why perfect rationality is less than it’s cracked up to be


The last decade or so has seen a renaissance in the idea that human beings are something far short of rational creatures. Here are just a few prominent examples: there was Nassim Taleb with his The Black Swan, published before the onset of the financial crisis, which presented Wall Street traders caught in the grip of optimistic narrative fallacies that led them to “dance” their way right over a cliff. There was the work of Philip Tetlock, which showed that the advice of most so-called experts was about as accurate as chimps throwing darts. There were explorations into how hard-wired our ideological biases are, such as the work of Jonathan Haidt in his The Righteous Mind.

There was a sustained retreat from the utilitarian calculator of homo economicus, seen in the rise of behavioral economics, popularized in the book Nudge, which took human irrational quirks at face value and tried to find workarounds- subtle ways to trick the irrational mind into doing things in its own long-term interest. Among all these uncoverings of the less than fully rational actors most of us- no, all of us- are, none has been more influential than the work of Daniel Kahneman, a psychologist and Nobel laureate in economics, and author of the bestseller Thinking, Fast and Slow.

The surprising thing, to me at least, is that we forgot we were less than fully rational in the first place. We have known this since before we had even understood what rationality- meaning holding to the principle of non-self-contradiction- was. How else do you explain Ulysses’ pact, where the hero had himself chained to his ship’s mast so he could both hear the sweet song of the sirens and not plunge to his own death? If the Enlightenment had made us forget the weakness of our rational souls, Nietzsche and Freud eventually reminded us.

The 20th century and its horrors should have laid to rest the Enlightenment idea that with increasing education would come a golden age of human rationality, as Germany, perhaps Europe’s most enlightened state, became the home of its most heinous regime. Though perhaps one might say that what the mid-20th century faced was not so much a crisis of irrationality as the experience of the moral dangers and absurdities into which closed rational systems, holding to their premises as axiomatic articles of faith, could lead. Nazi and Soviet totalitarianism had something of this crazed hyper-rational quality, as did the Cold War nuclear suicide pact of mutually assured destruction (MAD).

As far as domestic politics and society were concerned, however, the US largely avoided this breakdown in the Enlightenment ideal of human rationality as the basis for modern society. It was a while before we got the news. It took a series of institutional failures- 9/11, the financial crisis, the botched, unnecessary wars in Afghanistan and Iraq- to wake us up to our own thick-headedness.

Human folly has now come in for a pretty severe beating, but something in me has always hated seeing a weakling picked on, and finds it much more compelling to root for the underdog. What if we’ve been too tough on our less than perfect rationality? What if our inability to be efficient probability calculators or computers isn’t always a design flaw but sometimes one of our strengths?

I am not the first person to propose such a heresy of irrationalism. Way back in 1509 the humanist Desiderius Erasmus wrote his riotous The Praise of Folly with something like this in mind. The book has what amounts to three parts, all in the form of a speech by the goddess Folly. The first part attempts to show how, in the world of everyday people, Folly makes the world go round and guides most of what human beings do. The second part is a critique of the society that surrounded Erasmus- a world of inept and corrupt kings, princes, popes, and philosophers. The third part attempts to show how all good Christians, including Christ himself, are fools, and here Erasmus means fool as a compliment. As we all should know, good-hearted people, who are for that very reason naive, often end up being crushed by the calculating and rapacious.

It’s the lessons of the first part of The Praise of Folly that I am mostly interested in here. Many of Erasmus’ intuitions can not only be backed up by the empirical psychological studies of Daniel Kahneman, they allow us to flip the import of Kahneman’s findings on their head. Foolish, no?

Here are some examples. Take the very simple act of a person opening their own business. The fact that anyone engages in such a risk while believing in their likely success is an example of what Kahneman calls optimism bias. Overconfident entrepreneurs are blind to the odds. As Kahneman puts it:

The chances that a small business will survive in the United States are about 35%. But the individuals who open such businesses do not believe statistics apply to them. (256)

Optimism bias may result in the majority of entrepreneurs going under, but what would we do without it? Their sheer creative-destructive churn surely has some positive net effect on our economy and on employment, and the 35% of businesses that succeed are most likely adding long-lasting and beneficial tweaks to modern life, and even, sometimes, revolutions born in garages.

Yet, optimism bias doesn’t only result in entrepreneurial risk-taking. Erasmus describes such foolishness this way:

To these (fools) are to be added those plodding virtuosos that plunder the most inward recesses of nature for the pillage of a new invention and rake over sea and land for the turning up some hitherto latent mystery and are so continually tickled with the hopes of success that they spare for no cost nor pains but trudge on and upon a defeat in one attempt courageously tack about to another and fall upon new experiments never giving over till they have calcined their whole estate to ashes and have not money enough left unmelted to purchase one crucible or limbeck…

That is, optimism bias isn’t just something to be found in business risks. Musicians and actors dreaming of becoming stars, struggling fiction authors, explorers, and scientists all can be said to be biased towards the prospect of their own success. Were human beings accurate probability calculators, where would we get our artists? Where would our explorers come from? Instead of culture we would all (if it weren’t already too much the case) be trapped permanently in cubicles of practicality.

Optimism bias is just one of the many human cognitive flaws Kahneman identifies. Take another: the error of affective forecasting, and its role in those oh-so-important human institutions, marriage and children. For Kahneman, many people base their decision to get married on how they feel when in the swirl of romance, thinking that marriage will somehow secure this happiness indefinitely. Yet the fact is, according to Kahneman, for married persons:

Unless they think happy thoughts about their marriage for much of the day, it will not directly influence their happiness. Even newlyweds who are lucky enough to enjoy a state of happy preoccupation with their love will eventually return to earth, and their experienced happiness will again depend, as it does for the rest of us, on the environment and activities of the present moment. (400)

Unless one is under the impression that we no longer need marriage or children, one can again be genuinely pleased that, at least in some cases, we suffer from the cognitive error of affective forecasting. Perhaps societies, such as Japan, that are in the process of erasing themselves as they forego marriage and, even more so, childbearing might be said to suffer from too much reason.

Yet another error Kahneman brings to our attention is the focusing illusion. Our world is what we attend to, and our feelings towards it have less to do with objective reality than with what we are paying attention to. Something like the focusing illusion accounts for a fact that many of us in good health would find hard to believe; namely, that paraplegics are no more miserable than the rest of us. Kahneman explains it this way:

Most of the time, however, paraplegics work, read, enjoy jokes and friends, and get angry when they read about politics in the newspaper.

Adaption to a new situation, whether good or bad, consists, in large part, of thinking less and less about it. In that sense, most long-term circumstances, including paraplegia and marriage, are part-time states that one inhabits only when one attends to them. (405)  

Erasmus identifies something like the focusing illusion in states like dementia, where the old are no longer capable of reflecting on the breakdown of their body and impending death, but he captured it best, I think, in these lines:

But there is another sort of madness that proceeds from Folly so far from being any way injurious or distasteful that it is thoroughly good desirable and this happens when harmless mistake in the judgment of the mind is freed from those cares would otherwise gratingly afflict it smoothed over with a content and faction it could not under other so happily enjoy. (78)

We may complain against our hedonic setpoints, but as the psychologist Dan Gilbert points out, they not only offer us resilience on the downside- we will adjust to almost anything life throws at us- such set points should also caution us against believing that in any one thing- the perfect job, the perfect spouse- lies the key to happiness. But that might leave us with a question: what exactly is this self we are trying to win happiness for?

A good deal of Kahneman’s research deals with the disjunction between two selves in the human person, what he calls the experiencing self and the remembering self. It appears that the remembering self usually gets the last laugh, in that it guides our behavior. The most famous example of this is Kahneman’s colonoscopy study, in which the pain of the procedure was tracked minute by minute and then compared with patients’ later judgments of the procedure and decisions about undergoing it in the future.

The surprising thing was that future decisions were biased not by the frequency or duration of pain over the course of the procedure but how the procedure ended. The conclusion dictated how much negative emotion was associated with the procedure, that is, the experiencing self seemed to have lost its voice over how the procedure was judged and how it would be approached in the future.

Kahneman may have found an area where the dominance of the remembering self over the experiencing self is irrational, but again, perhaps it is generally good that endings- that closure- are the basis upon which events are judged. It’s not the daily pain and sacrifice that goes into Olympic training that counts so much as outcomes, and the meaning of some life event really does change based on its ultimate outcome. There is some wisdom in the words of Solon that we should “count no man happy until the end is known,” which doesn’t mean no one can be happy until they are dead, but that we can’t judge the meaning of a life until its story has concluded.

“Okay then,” you might be thinking, “what is the point of all this- that we should try to be irrational?” Not exactly. My point is that we should perhaps take a slight breather from the critique of human nature from the standpoint of its less than full rationality- that our flaws, sometimes, and just sometimes, are what make life enjoyable and interesting. Sometimes our individual irrationality may serve larger social goods of which we are foolishly oblivious.

When Erasmus wrote his Praise of Folly he was careful to separate the foolishness of everyday people from the folly of those in power, whether that power be political, or intellectual such as that of the scholastic theologians. Part of what distinguished the fool on the street from the fool on the throne or in the cloister was the claim of the latter to be following the dictates of reason- the reason of state, or the internal logic of a theology that argued over angels on the heads of pins. The fool on the street knew he was a fool, or at least experienced the world as opaque, whereas the fools in power thought they had all the answers. For Erasmus, the very certainty of those in power, along with the mindlessness of their goals, made them even bigger fools than the rest of us.

One of our main weapons against power has always been to laugh at the foolishness of those who wield it. We could turn even a psychopathically rational monster like Hitler into a buffoon because even he, after all, was one of us. Perhaps if we ever do manage to create machines vastly more intelligent than ourselves, we will have lost something by no longer being able to make jokes at the expense of our overlords. Hyper-rational characters like Data from Star Trek or Sheldon from The Big Bang Theory are funny because they are either endearingly trying to enter the mixed-up world of human irrationality, or because their total rationality, in a human context, is itself a form of folly. Super-intelligent machines might not want to become flawed humans, and though they might still be fools just like their creators, they likely wouldn’t get the joke.

Welcome to the New Age of Revolution


Last week the prime minister of Ukraine, Mykola Azarov, resigned under pressure from a series of intense riots that had spread from Kiev to the rest of the country. Photographs of the riots in The Atlantic blew my mind- like something out of a dystopian steampunk flick. Many of the rioters were dressed in gas masks that looked as if they had been salvaged from World War I. As weapons they wielded homemade swords, molotov cocktails, and fireworks. To protect their heads some wore kitchen pots and spaghetti strainers.

The protestors were met by riot police in hypermodern black suits of armor, armed with truncheons, tear gas, and shotguns, not all of them firing only rubber bullets. Orthodox priests with crosses and icons in their hands sometimes placed themselves perilously between the rioters and the police, hoping to bring calm to a situation that was spinning out of control.

Even for Ukraine, it was cold during the weeks of the riots- so cold that blasts from the water cannons used by the police crystallized shortly after contact, leaving the detritus of the protests covered in sheets of ice, as if it had been shot with some kind of high-tech freeze gun.

Students of mine from Ukraine were largely in sympathy with the protestors, but feared civil war unless something changed quickly. The protests had been brought on by a backdoor deal with Russia to walk away from talks aimed at Ukraine joining the European Union. Protests over that agreement led to the passage of an anti-protest law that only further inflamed the rioters. The resignation of the Russophile prime minister seemed to calm the situation for a time, but with the return of the Ukrainian president Viktor Yanukovych to work (he was supposedly ill during the heaviest of the protests) the situation has once again become volatile. It was Yanukovych who was responsible for cutting the deal with Russia and pushing through draconian limits on the freedom of assembly, which had sparked the protests in the first place.

Ukraine, it seems, is a country being torn in two, a conflict born of demographics and history. Its eastern, largely Russian-speaking population looks towards Russia, while its western, largely Ukrainian-speaking population looks towards Europe. In this drama both Russia and the West are no doubt trying to influence the outcome of events in their favor, and thus exacerbating the instability.

Yet, while such high levels of tension are new, the problem they reveal runs deep in historical terms- the cultural tug of war over Ukraine between Russia and Europe, East and West, stretches at least as far back as the 14th century, when western Ukraine was brought into the European cultural orbit by the Poles. Since then, and often with great brutality on the Russian side, the question of Ukrainian identity, Slavic or Western, has been negotiated and renegotiated over centuries- a question that will perhaps never be fully resolved, and whose very tension may be what it actually means to be Ukrainian.

Where Ukraine goes from here is anybody’s guess, but despite its demographic and historical particularities, its recent experience adds to the growing list of mass protests that have toppled governments, or at least managed to pressure governments into reversing course- a list that has been growing steadily since perhaps the 2008 riots in Greece.

I won’t compile a comprehensive list but will simply state the mass protests and riots I can cite from memory. There was the 2009 Green Revolution in Iran that was subsequently crushed by the Iranian government. There was the 2010 Jasmine Revolution in Tunisia, which toppled the government there and began what came to be the horribly misnamed “Arab Spring”. By 2011 mass protests had overthrown Hosni Mubarak in Egypt, and riots had broken out in London. 2012 saw a lull in mass protests, but in 2013 they came back with a vengeance. There were massive riots in Brazil over government cutbacks for the poor combined with extravagant spending in preparation for the 2014 World Cup; there were huge riots in Turkey which shook the government of the increasingly authoritarian Recep Tayyip Erdoğan; and a military coup in the form of mass protests toppled the democratically elected Islamist president in Egypt. Protesters in Thailand have been “occupying” the capital since early January. And now we have Ukraine.

These are just some of the protests that were widely covered in the media. Sometimes quite large, or at least numerous, protests take place in a country and are barely reported in the news at all. Between 2006 and 2010 there were 180,000 reported “mass incidents” in China. It seems the majority of these protests are related to local issues rather than directed against the national government, but the PRC has been adept at keeping them free of the prying eyes of Western media.

The abortive 2009 riots in Iran were the first to be called a “Twitter Revolution” by Western media and digerati, and this new age of revolution is often explained in terms of the impact of the communications revolution and social media. We have since had time to find out just how little a role Western, and overwhelmingly English-language, media platforms such as Twitter and FaceBook have played in this new upsurge of revolutionary energy, but that’s not the whole story.

I wouldn’t go so far as to say technology has been irrelevant in bringing about our new revolutionary era; I’d just point the finger at another technology, namely mobile phones. By 2008 mobile devices had, in the space of a decade, gone from a rich-world luxury into the hands of 4 billion people. By 2013, 6 billion of the world’s 7 billion people had some sort of mobile device- more people than had access to working toilets.

It is the very disjunction between the number of people able to communicate, and hence act en masse, and those lacking what we in the developed world consider necessities that should get our attention- a potentially explosive situation. And yet, we have known since Alexis de Tocqueville that revolutions are less the product of the poor, who have always known misery, than of a rising middle class whose ambitions have been frustrated.

If I wanted to gauge the state of the world a decade or two hence, the questions I would ask a visitor from the near future would be whether the rising middle class in the developing world had put down solid foundations, and whether, and to what extent, it had been cut off at the knees by the derailment of the juggernaut of the Chinese economy, the rising capacity of automation, or both.

The former fear seems to be behind the recent steep declines in the financial markets, where the largest blows have been suffered by developing economies. The latter is a longer-term risk for developing economies, which, if they do not develop quickly enough, may find themselves permanently locked out of the traditional arc of capitalist economic development, moving from agriculture to manufacturing to services.

Automation threatens the wage competitiveness of developing-economy workers at every stage of that arc: poor textile workers in Bangladesh competing with first-world robots, Indians earning a middle-class wage working at call centers or doing grunt legal or medical work increasingly in competition with more and more sophisticated, and in the long run less expensive, bots.

Intimately related to this would be my last question for our near-future time traveler; namely, does the global trend towards increasing inequality continue, accelerate, or dissipate? Aside from government incompetence and corruption combined with mobile-enabled youth, rising inequality appears to be the only macro trend these revolts share. It cannot, though, be the dominant factor; otherwise protests would be largest and most frequent in the country with the fastest growing inequality- the US.

Revolutions- the mobilization of a group of people massive enough and active enough to actually overthrow a government- are a modern phenomenon, and are so for a reason. Only since the printing press and mass literacy has the net of communication been thrown wide enough that revolution, as opposed to mere riots, has become possible. The Internet, and even more so mobile technology, have thrown that net even further, or better, deeper: literacy is no longer necessary, and intergroup communication now happens in real time, no longer in need of or under the direction of a center- as was the case in the era of radio and television.

Technology hasn’t resulted in the “end of history”, but quite the opposite. Mobile technology appears to facilitate the formation of crowds, but what these crowds mobilize around are usually deep-seated divisions which the society in which the protests occur has yet to resolve, or widely unpopular decisions made over the people’s heads.

For many years now we have seen this phenomenon in financial markets, one of the first areas to develop deep, rapidly changing interconnections based on the digital revolution. Only a few years back, democracy seemed to have come under the thumb of much more rapidly moving markets, but now, perhaps, a populist analog has emerged.

What I wonder is how the state will respond to this, and how this new trend of popular mobilization may intersect with yet another contemporary trend- mass surveillance by the state itself.

The military theorist Carl von Clausewitz came up with his now famous concept of the “fog of war” defined as “the uncertainty in situational awareness experienced by participants in military operations. The term seeks to capture the uncertainty regarding one’s own capability, adversary capability, and adversary intent during an engagement, operation, or campaign.”  If one understands revolution as a kind of fever pitch exchange of information leading to action leading to exchange of information and so on, then, all revolutions in the past could be said to have taken place with all players under such a fog.

Past revolutions have only been transparent to historians. From a bird’s eye view and in hindsight scholars of the mother of all revolutions, the French, can see the effects of Jean-Paul Marat’s pamphlets and screeds inviting violence, the published speeches of the moralistic tyrant Robespierre, the plotting letters of Marie-Antoinette to the Austrians or the counter-revolutionary communique of General Lafayette. To the actors in the French Revolution itself the motivations and effects of other players were always opaque, the origin, in part, of the revolution’s paranoia and Reign of Terror which Robespierre saw as a means of unmasking conspirators and hypocrites.

With the new capacity of governments to see into communications, revolutions might be said to be becoming transparent in real time. Insecure governments that might be toppled by mass protest have an obvious interest in developing the capacity to monitor the communication and track the movements of their citizens, and Moore’s Law has made total surveillance by the state, once an unachievable goal, relatively cheap.

During revolutionary situations foreign governments (with the US at the top of the list) may have the inclination to peer into revolutions through digital surveillance, and in some cases will likely use this knowledge to interfere so as to shape outcomes in their own favor. States that are repressive towards their own people, such as China, will likewise try to use these surveillance tools to ensure revolutions never happen, or to steer them toward preferred outcomes if they should occur despite best efforts.

One can only hope that the ability to see into a revolution while it is happening does not engender the illusion that we can also control its outcome, for as the riots and revolutions of the past few years have shown, moves against a government may be enabled by technology imported from outside, but the fate of such actions is decided by people on the ground, who alone might be said to have full responsibility for the future of the society in which revolution has occurred.

Foreign governments are engaged in a dangerous form of hubris if they think they can steer outcomes in their favor while oblivious to local conditions, and governments that think technology gives them a tool by which they can ignore the cries of their citizens are allowing the very basis on which they stand to rot underneath them and eventually collapse- a truth those who consider themselves part of a new global elite should heed when it comes to the issue of inequality.

The Dark Side of a World Without Boundaries

Peter Blume: The Eternal City

The problem I see with Nicolelis’ view of the future of neuroscience, which I discussed last time, is not that I find it unlikely that a good deal of his optimistic predictions will someday come to pass; it is that he spends no time at all talking about the darker potential of such technology.

Of course, the benefit of technologies that communicate directly from the human brain to machines or to other human beings is undeniable for paralyzed persons, or persons otherwise locked tightly in the straitjacket of their own skull. The potential to expand the range of human experience through directly experiencing the thoughts and emotions of others is also, of course, great. With next weekend being Super Bowl Sunday, I can’t help but think how cool it would be to experience the game through the eyes of Peyton Manning or Russell Wilson. How amazing would a concert or an orchestra be if experienced in this way?

Still, one need not be a professional dystopian and unbudgeable Cassandra to come up with all kinds of quite frightening scenarios that might arise through the erosion of boundaries between human minds. All one needs to do is pay attention to the less sanguine developments of the latest iteration of a technology once believed to be utopian- the Internet and the social Web- to get an idea of some of the possible dangers.

The Internet was at one time prophesied to usher in a golden age of transparency and democratic governance, and promised the empowerment of individuals and small companies. Its legacy, at least at this juncture, is much more mixed. There is little disputing that the Internet and its successor mobile technologies have vastly improved our lives, yet it is also the case that these technologies have led to an explosion in the activities of surveillance and control- by nefarious individuals and criminal groups, corporations, and the state. Nicolelis’ vision of eroded boundaries between human minds is but the logical conclusion of the trend towards transparency. Given how recently we’ve been burned by techno-utopianism in precisely this area, a little skepticism is certainly in order.

The first question that might arise is whether direct brain-to-brain communication (especially when invasive technologies are required) will ever out-compete the kind of mediated mind-to-mind technology we have had for quite some time, that is, both spoken and written language. Except in very special circumstances, not all of them good, language seems to retain its advantages over direct brain-to-brain communication, and in cases where direct communication is chosen over language, that choice may be less a gain in communicative capacity than a signal that normalcy has broken down.

Couples in the first rush of new love may want to fuse their minds for a time, but a couple that has been together for decades? It seems less likely, though there might be cases of pathological jealousy, or smothering control bordering on abuse, where one half of a couple would demand such a thing. The communion with fellow human beings offered by religion might gain something from the ability of individuals to bridge the chasm that separates them from others, but such technologies would also be a “godsend” for fanatics and cultists. I cannot decide whether a mega-church such as Yoido Full Gospel Church, in a world where Nicolelis’ brain-nets are possible, would represent a blessed leap in human empathetic capacity or a curse.

Nicolelis seems to assume that the capacity to form brain-nets will result in some kind of idealized and global neural version of FaceBook, but human history seems to show that communication technology is just as often used to hermetically seal group off from group, and becomes a weapon in the primary human moral dilemma, which is not the separation of individual from individual so much as the rivalry between different groups. We seem unable to extricate ourselves from such rivalry even when the stakes are merely imagined- as many of us will demonstrate next Sunday, and as Nicolelis himself should know from his beloved soccer, where the rivalry expressed in violent play has a tendency to slip into violent riots in the real world.

Direct brain-to-brain communication would seem to have real advantages over language when persons are joined together in teams acting as a unit in response to a rapidly changing situation- groups such as firefighters. By far the greatest value of such capacities would be found in small military units such as platoons, whose members are already joined in a close experiential and deep emotional bond, as is on display in the documentary Restrepo. Whatever their virtuous courage, armed bands killing one another are about as far as you can get from the global kumbaya of Nicolelis’ brain-net.

If such technologies were non-invasive, would they be used by employers to monitor the absence of sustained attention while on the job? What of poor dissidents under the thumb of a madman like Kim Jong Un? Imagine such technologies in the hands of a pimp, or even of the kinds of slave owners who, despite our obliviousness, still exist.

One of the problems with the transparency paradigm, and with the new craze for an “Internet of things” in which everything in the environment of an individual is transformed into a Web-connected computer, is the fact that anything that is a computer, by that very fact, becomes hackable. If someone is worried about their digital data being sucked up and sold on an elaborate black market, about webcams being used as spying tools, or about whether connecting one’s home to the Web might make one vulnerable, how much more worried should we be if our very thoughts could be hacked? The opportunities for abuse seem legion.

Everything is shaped by personal experience: Nicolelis’ views of the collective mind were forged by the crowds that overthrew the Brazilian military dictatorship and by his beloved soccer games. But the crowd is not always wise. As a person who has always valued solitude, and time with individuals and small groups, over the electric energy of crowds, I have no interest in being part of a hive mind to any degree beyond the limited extent to which I might be said to already be in such a mind by writing a piece such as this.

It is a sad fact, but nonetheless true, that should anything like Nicolelis’ brain-net ever be created, it cannot be built on the assumption of the innate goodness of everyone. As our experience with the Internet should have taught us, an open system with even a minority of bad actors leads to a world of constant headaches and sometimes to what amount to very real dangers.

The World Beyond Boundaries

The Virgin, oil painting by Gustav Klimt

I first came across Miguel Nicolelis in an article for the MIT Technology Review entitled The Brain is not computable: A leading neuroscientist says Kurzweil’s Singularity isn’t going to happen. Instead, humans will assimilate machines. That got my attention. Nicolelis, if you haven’t already heard of him, is one of the world’s top researchers in building brain-computer interfaces. He is the mind behind the project to have a paraplegic, using a brain-controlled exoskeleton, make the first kick of the 2014 World Cup, an event that takes place in Nicolelis’ native Brazil.

In the interview, Nicolelis characterizes the singularity as “a bunch of hot air”, his reasoning being that “The brain is not computable and no engineering can reproduce it.” He explains himself this way:

“You can’t predict whether the stock market will go up or down because you can’t compute it,” he says. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

This non-computability of consciousness, he thinks, has negative implications for the prospect of ever “downloading” (or uploading) human consciousness into a computer.

“Downloads will never happen,” he declares with some confidence.

Science journalism, like any sort of journalism, needs a “hook”, and the hook here was obviously a dig at a number of deeply held beliefs among the technorati; namely, that AI is on track to match and eventually surpass human-level intelligence, that the brain could be emulated computationally, and that, eventually, the human personality could likewise be duplicated through computation.

The problem with any hook is that it tends to leave you with a shallow impression of the reality of things. If the world is too complex to be represented in software, it is even less able to be captured in a magazine headline or 650-word article. For that reason, I wanted a clearer grasp of where Nicolelis was coming from, so I bought his recent and excellent, if a little dense, book, Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives. Let me start with a little of Nicolelis’ research and from there flesh out the neuroscientist’s view of our human-machine future, a view I found both similar in many respects to, and at the same time very different from, perspectives typical today of futurists thinking about such things.

If you want to get an idea of just how groundbreaking Nicolelis’ work is, the best thing to do is to peruse the website of his lab. Nicolelis and his colleagues have conducted experiments where a monkey has controlled the body of a robot located on the other side of the globe, and where another simian has learned to play a videogame with its thoughts alone. Of course, his lab is not interested in blurring the lines between monkeys and computers for the heck of it; the immediate aim of their research is to improve the lives of those whose ties between body and mind have been severed, that is, paraplegics- a fact which explains Nicolelis’ bold gamble to demonstrate his lab’s progress by having a paralyzed person kick off the World Cup.

However inspiring the humanitarian potential of this technology, it is the underlying view of the brain that the work of the Nicolelis Lab appears to experimentally support, and the neuroscientist’s longer-term view of the potential of technology to change the human condition, that are likely to have the most lasting importance. They are views and predictions that put Nicolelis more firmly in the trans-humanist camp than might be gleaned from his MIT interview.

The first aspect of Nicolelis’ view of the brain I found stunning was the mind’s extraordinary plasticity when it comes to the body. We might tend to think of our brain and our body as highly interlocked things; after all, our brains have spent their whole existence as part of one body- our own. This is a reality that the writer Paul Auster turns into the basis of his memoir Winter Journal, which is essentially the story of his body’s movement through time, its urges, muscles, scars, wrinkles, ecstasies and pains.

The work of Nicolelis’ Lab seems to sever the cord we might think joins a particular body and the brain or mind that thinks of it as home. As he states it in Beyond Boundaries:

The conclusion from more than two decades of experiments is that the brain creates a sense of body ownership through a highly adaptive, multimodal process, which can, through straightforward manipulations of visual, tactile, and body position (also known as proprioception) sensory feedback, induce each of us, in a matter of seconds, to accept another whole new body as being the home of our conscious existence. (66)

Psychologists have long been able to fool a person into believing they possess a limb that is not actually theirs, but Nicolelis is less interested in this trickery than in finding a new way to understand the human condition in light of his and others’ findings.

The fact that the boundaries of the brain’s body image are not limited to the body that brain is located in is one way to understand the perhaps almost unique qualities of the extended human mind. We are all ultimately still tool builders and users, only now our tools:

… include technological tools with which we are actively engaged, such as a car, bicycle, or walking stick; a pencil or a pen, spoon, whisk or spatula; a tennis racket, golf club, a baseball glove or basketball; a screwdriver or hammer; a joystick or computer mouse; and even a TV remote control or Blackberry, no matter how weird that may sound. (217)

Specialized skills honed over a lifetime can make a tool an even more intimate part of the self: the violin becomes an appendage of a skilled musician, a football a part of the hand of a seasoned quarterback. Many of the most prized people in society are in fact master tool users, even if we rarely think of them this way.

Even with our masterly use of tools, the brain is still, in Nicolelis’ view, trapped within a narrow sphere surrounding its particular body. It is here that he sees advances in neuroscience eventually leading to the liberation of the mind from its shell. The logical outcome of minds being able to communicate directly with computers is a world where, according to Nicolelis:

… augmented humans make their presence felt in a variety of remote environments, through avatars and artificial tools controlled by thought alone. From the depths of the oceans to the confines of supernovas, even to the tiny cracks of intracellular space, human reach will finally catch up to our voracious appetite to explore the unknown. (314)

He characterizes this as mankind’s “epic journey of emancipation from the obsolete bodies they have inhabited for millions of years” (314). Yet Nicolelis sees human communication with machines as but a stepping stone to the ultimate goal- the direct exchange of thoughts between human minds. He imagines the sharing of what has forever been the ultimately solipsistic experience of being a particular individual, with our own very unique experience of events, something that can never be fully captured even in the most artful expressions of language. This exchange of thoughts, which he calls “brainstorms”, is something Nicolelis does not limit to intimates- lovers and friends- but which he imagines giving rise to a “brain-net”.

Could we one day, down the road of a remote future, experience what it is to be part of a conscious network of brains, a collectively thinking true brain-net? (315)

… I have no doubt that the rapacious voracity with which most of us share our lives on the Web today offers just a hint of the social hunger that resides deep in human nature. For this reason, if a brain-net ever becomes practicable, I suspect it will spread like a supernova explosion throughout human societies. (316)

Given this context, Nicolelis’ view on the Singularity and the emulation or copying of human consciousness on a machine is much more nuanced than the impression one is left with from the MIT interview. It is not that he discounts the possibility that “advanced machines may come to dominate the human race” (302), but that he views it as a low-probability danger relative to the other catastrophic risks faced by our species.

His view on the prospect of human-level intelligence in machines is less that high-level machine intelligence is impossible than that our specific type of intelligence is non-replicable. Building off of Stephen Jay Gould’s idea of the “life tape”, his reasoning is that we cannot duplicate through engineering the sheer contingency that lies behind the evolution of human intelligence. I understand this in light of an observation by the philosopher Daniel Dennett, which I remember but cannot place, that it may be technically feasible to mechanically replicate an exact version of a living bird, but that it may prove prohibitively expensive, as expensive as our journeys to the moon- and besides, we don’t need to exactly replicate a living bird; we have 747s. Machine intelligence may prove to be like this: we may never be able to replicate our own intelligence other than through traditional, and much more exciting, means, even as artificial intelligence becomes vastly superior to human intelligence in many domains.

In terms of something like uploading, Nicolelis does believe that at some point in the future we will be able to record and store human thoughts- his brainstorms- perhaps even the whole of a life, but he does not think this will mean the preservation of a still-experiencing intelligence, any more than a piece by Chopin is the actual man. He imagines us deliberately recording the memories of individuals and broadcasting them across the universe, to exist forever in the background of the cosmos which gave rise to us.

I can imagine all kinds of wonderful developments emerging should the technological advances Nicolelis imagines coming to pass. It would revolutionize psychological therapy, law, art and romance. It would offer us brand new ways to memorialize and remember the dead.

Yet Nicolelis’ Omega Point- a world where all human beings are brought together into one embracing common mind- has been our dream at least since Plato, and the very antiquity of these urges should give us pause, for what history has taught us is that the optimistic belief that “this time is different” has never proved true. This fact should encourage us to look seriously, as Nicolelis himself refuses to do, at the potential dark side of the revolution in neuroscience this genius Brazilian is helping to bring about. It is less a matter of cold pessimism to acknowledge this negative potential than a matter of steeling ourselves against disappointment at the least, and of preventing such problems from emerging in the first place at best- a task I will turn to next time…

Silicon Secessionists

Moore's Utopia

Lately, there have been weird mumblings about secession coming from an unexpected corner. We’ve come to expect that there are hangers-on to the fallen Confederate States of America, or Texans hankering after their lost independent Republic, but Silicon Valley? Really? The idea, at least at first blush, seems absurd.

We have the tycoon founder of PayPal and early FaceBook investor Peter Thiel, whose hands seem to be in every arch-conservative movement under the sun, and who is a vocal supporter of utopian seasteading- the idea of creating a libertarian oasis of artificial islands beyond the reach of law, regulation and taxes.

Likewise, Zoltan Istvan’s novel The Transhumanist Wager uses the idolatry of Silicon Valley’s Randian individualism and technophilia as Lego blocks with which to build an imagined “Transhumania”- a moveable artificial island that is, again, free from the legal and regulatory control of the state.

Another venture capitalist, Tim Draper, recently proposed shattering mammoth California into six pieces, with Silicon Valley to become its own separate state. There are plans to build a techno-libertarian, Galt’s Gulch-type city-state in Chile, a geographical choice which, given Chile’s brutal experience with right-wing economics via Pinochet and the Chicago school, is loaded with historical irony.

Yet another Silicon Valley tech entrepreneur, Elon Musk, hopes to do better than all of these and move his imagined utopian experiment off of the Earth, to Mars. Perhaps he could get some volunteers from Winnipeg, whose temperature earlier this month under a “polar vortex” was colder than that around the Curiosity Rover tooling around in the dead red dust of the planet of war.

What in the world is going on?

By far the best articulation of Silicon Valley’s new secessionist urges I have seen comes from Balaji Srinivasan, who doesn’t consider himself a secessionist along the lines of John C. Calhoun at all. In an article for Wired back in November, Srinivasan laid out what I found to be a quite intriguing argument for a kind of Cambrian explosion of new polities. The Internet now allows much easier sorting of individuals based on values, and it’s only a step or two further to imagine virtual associations becoming physical ones.

I have to say that I find much to like in the idea of forming small, new political societies as a means of obtaining forms of innovation we sorely lack- namely political and economic innovation. I also think Srinivasan and others are onto something in that small societies which get things right seem best positioned to navigate the complex landscape of our globalized world. I myself would much prefer a successful democratic-socialist small society, such as a Nordic one like Finland, to a successful capitalist-authoritarian one like Singapore, but the idea of a plurality of political systems operating at a small scale doesn’t bother me in the least, as long as belonging to such polities is ultimately voluntary.

The existence of such societies might even help heal one of the main problems of the larger pluralist societies, such as our own, to which these new communities might remain attached. Pluralist societies are great on diversity, but often bad at something older and invariably more intolerant types of society had in droves; namely, the capacity of culture to form a unified physical and intellectual world- a kind of home- at least for those lucky enough to believe in that world and be granted a good place within it.

Even though I am certain that, like most utopian experiments of the past, the majority of these newly formed polities would fail, we would no doubt learn something from them. And some might even succeed and become the legacy of those bold enough to dream of the new.

One might wonder, however, why this recent interest in utopian communities has been so strongly represented by libertarians and Silicon Valley technophiles. Nothing against libertarian experiments per se, but there are, after all, a whole host of other ideological groups that could be expected to be attracted to the idea of forming new political communities where their principles could be brought to fruition. Srinivasan, again, provides us with the most articulate answer to this question.

In a speech I had formerly misattributed to one of the so-called neo-reactionaries (apologies), Srinivasan lays out the case for what he calls “Silicon Valley’s ultimate exit”.

He begins by asking in all seriousness “Is the USA the Microsoft of Nations?” and then goes on to draw the distinction between two different types of responses to institutional failure- Voice versus Exit. Voice essentially means aiming to change an institution from within, whereas Exit is flight, or in software terms “forking”, to form a new institution, whether that be anything from a corporation to a state. Srinivasan thinks Exit is an important form of political leverage, pressuring a system to adopt reform or face flight.

The problem I see is that the logic behind the choice of Exit over Voice threatens a kind of social disintegration. Indeed, the rationale Srinivasan draws for libertarian Exit seems not only to assume an already existent social disintegration, but proposes to act as an accelerant for more.

Srinivasan’s argument is that Silicon Valley is on the verge of becoming the target of the old elites, which he calls “The Paper Belt: based in Boston with higher ed; New York City with Madison Avenue, books, Wall Street, and newspapers; Los Angeles with movies, music, Hollywood; and, of course, DC with laws and regulations, formally running it.” That Silicon Valley with its telecommunications revolution was “putting a horse head in all of their beds. We are becoming stronger than all of them combined.” That the elites of the Paper Belt “are basically going to try to blame the economy on Silicon Valley, and say that it is iPhone and Google that done did it, not the bailouts and the bankruptcies and the bombings…” And that “What they’re basically saying is: rule by DC means people are going back to work and the emerging meme is that rule by us is rule by Terminators. We’re going to take all the jobs.”

Given what has actually happened so far, Srinivasan’s tone seems almost paranoid. Yes, the shine is off the apple (pun intended) of Silicon Valley, but the most that seems to be happening are discussions about how to get global tech companies to start paying their fair share of taxes. And the Valley has itself woken up to the concerns of civil libertarians that tech companies were being used by the US as a giant listening device.

Srinivasan himself admits that unemployment due to advances in AI and automation is a looming crisis, but rather than help support society- something that even a libertarian like Peter Diamandis has admitted may lead to the requirement for a universal basic income- Srinivasan instead seems to want to run away from the society he helped create.

And therein lies the dark side of what all this Silicon Valley talk of flight is about. As much as it’s about experimentation, or Exit, it’s also about economic blackmail and arbitrage. It’s like a marriage where one partner, rather than engage even in discussions where they contemplate sacrificing some of their needs, threatens at the smallest pretext to leave.

Arbitrage has been the tool by which the global (to bring back the good old Marxist term) bourgeoisie has been able to garner such favorable conditions for itself over the past generation. “Just try to tax us, and we will move to a place with lower or no taxation”, “Just try to regulate us and we will move to a place with lower or no regulation”, it says.

Yet both non-excessive taxation and prudent regulation are the ways societies keep themselves intact in the face of the short-sightedness and greed at the base of any pure market. Without them, shared social structures and common infrastructure decay, and all costs- pollution, etc.- are externalized onto the society as a whole. Maybe what we need is not so much more and better tools for people to opt out, which Srinivasan proposes, as a greater number and variety of ways for people to opt in: better ways of providing the information and tools of Voice that are relevant, accessible, and actionable.

Perhaps what’s happened is that we’ve come almost full circle from our start in feudalism. We began with a transnational church and lords locked in place in their local fiefdoms, and moved to nation-states in which ruling elites exercised control over a national territory- though concern for the broader society underneath, along with its natural environment, was only fully extended with the near-universal expansion of the right to vote.

With the decline of the nation-state as the fundamental focus of our loyalty we are now torn in multiple directions: between our country, our class, our religious and philosophical orientations, our concern for the local or its invisibility, and our concern for the global or its apparent irrelevance. Yet, despite our virtuality, we still belong to physical communities: our neighborhood, our country, and our shared earth.

Closer to our own time, this hope to escape the problems of society through flight and the founding of new, uncorrupted enclaves is an idea buried deep in the founding myth of Silicon Valley. The counter-culture from which many of the innovators of Silicon Valley emerged wanted nothing to do with America’s deep racial and Cold War era problems. They wanted to “drop out” and instead ended up sparking a revolution that not only challenged the whitewashed elites of the “Paper Belt”, but ended up creating a new set of problems, which the responsibility of adulthood should compel them to address.

The elite that has emerged from Silicon Valley is perhaps the first in history detached from any notion of physical space, even the physical space of our shared earth. But “ultimate exit” is an illusion, at least for the vast majority of us, for even if we could settle the stars or retreat into an electron cloud, the distances are far too great and both are too damned cold.

Knowledge and Power, Or Dark Thoughts In Winter


For people in cold climes, winter, with its short days and hibernation-inducing frigidity, is a season to let one’s pessimistic imagination roam. It may be overly deterministic, but I often wonder whether those who live in climates that do not vary with the seasons- where it is almost always warm and sunny, or always cold and grim- experience the full spectrum of human sentiments less often over the course of a year, and so end up being either too utopian for reality to justify, or too dystopian for those lucky enough to be here and have a world to complain about in the first place.

The novel I wrote about last time, A Canticle for Leibowitz, is a winter book because it is such a supremely pessimistic one. It presents a world that reaches a stage of technological maturity only to destroy itself again, and again.

What we would consider progress occurs only in terms of Mankind’s technological, not its moral, capacity. The novel ends with yet another nuclear holocaust, only this time the monks who hope to preserve knowledge set out not for the deserts of earth, but for the newly discovered planets around nearby stars- the seeds of a new civilization, but in all likelihood not the beginning of an eternal spring.

It’s a cliché to say that among the biggest problems facing us is that our moral or ethical progress has not kept pace with our technological and scientific progress, but labeling something a cliché doesn’t of necessity mean it isn’t true. Miller, the author of A Canticle for Leibowitz, was tapping into a deep historical anxiety that this disjunction between our technological and moral capacity constituted the ultimate danger for us, and he defined the problem in a certain, and I believe ultimately very useful, way.

Yet, despite Miller’s and others’ anxiety we are still here, so the fear that the chasm between our technological and moral capacity will destroy us remains just that, an anxiety based on a projected future. It is a fear with a long backstory.

Many cultures have hubris myths or warnings against unbridled curiosity- remember Pandora and her jar, or Icarus and his melted wings- but Christianity turned this warning against pride into the keystone of a whole religious cosmology. That is, in the Christian narrative, especially in the writings of Augustine, death, and with it the need for salvation, comes into the world out of the twin sins of Eve’s pride and curiosity.

It was an ancient anxiety, embedded right in the heart of Christianity, which burst into consciousness with renewed vigor during the emergence of modern science, an event that occurred at the same time as Christian revival and balkanization. Many thinkers during the early days of the scientific revolution, from Isaac Newton to Francis Bacon to John Milton to Thomas More, found themselves faced with a kind of contradiction: if the original sin of our first parents was a sin of curiosity, how could a deeply religious age justify its rekindled quest for knowledge?

It is probably hard for most of us to get our minds around just how religious many of the figures of the scientific revolution were, given our own mythology regarding the intractable war between science and religion, and the categories into which secular persons, who tend to rely on science, and religious persons, who far too often exhibit an anti-scientific bias, now often fall. Yet a scientific giant like Newton was in great measure a Christian fundamentalist by today’s standards. One of the most influential publicists for the “new science” was Francis Bacon, who saw the task of science as bringing back the state of knowledge found in the “prelapsarian” world, that is, the world before the fall of Adam and Eve.

As I have written about previously, Bacon was one of the first to confront the contradiction between the urge for new (in his view actually old) knowledge and the traditional Christian narrative regarding forbidden knowledge and the sin of pride. His answer was that the millennium was at hand and therefore a moral revival of humanity was taking place that would parallel and buffer the revival of knowledge. Knowledge was to be used for “the improvement of man’s estate”, and his new science was understood as the ultimate tool of Christian charity. In Bacon’s view, such science would only prove ruinous were it used for the sinful purposes of the lust for individual and group aggrandizement and power.

Others were not so optimistic.

Thomas More, for instance, who is credited with creating the modern genre of utopia, wasn’t so much sketching out a blueprint for a perfect world as he was critiquing his own native England, while at the same time suggesting that no perfect world was possible due to Man’s sinfulness, or what his dear friend Erasmus called “folly”.

Yet the person who best captured the religious tensions and anxieties present when a largely Christian Europe embarked on its scientific revolution was the blind poet John Milton. We don’t normally associate Milton with the scientific revolution, but we should. Milton not only visited the imprisoned Galileo, he made the astronomer and his ideas into recurring themes, presented in a positive light, in his Paradise Lost. Milton also wrote a stunning defense of the freedom of thought, the Areopagitica, which would have made Galileo a free man.

Paradise Lost is, yes, a story in the old Christian vein of warnings against hubris and unbridled curiosity, but it is also a story about power. Namely, how the conclusion that we are “self-begot” most likely led not to a state of utopian-anarchic godlessness, but to the false belief that we ourselves could take the place of God. That is, the discovery of knowledge was tainted not when we, like Adam in Milton’s work, sought answers to our questions regarding the nature of the world, but the minute this knowledge was used as a tool of power against, and rule over, others.

From the time of Milton to the World Wars of the 20th century the balance between a science that had “improved man’s estate” and that which had served as the tool of power leaned largely in the direction of the former, though Romantics like Percy and Mary Shelley gave us warnings.

The idea that science and technology were tools for the improvement of the conditions of living for the mass of mankind, rather than instruments in the pursuit of the perennial human vices of greed and ambition, was not the case, of course, if one lived in a non-scientifically empowered, non-Western civilization and found oneself at the ends of the barrels of Western gunboats- a fact that we in the West should not forget now that the scientific revolution and its technology, for good and ill, is global. In the West itself, however, this other, darker side of science and technology was largely occulted, even in the face of the human devastation of 19th century wars.

The Second World War, and especially the development of nuclear weapons, brought this existential problem back fully into consciousness, and A Canticle for Leibowitz is a near pitch-perfect representative of this thinking- almost the exact opposite of the view of another near-contemporary Catholic thinker, Teilhard de Chardin, who saw technology as the means to godhead in his Phenomenon of Man.

There are multiple voices of conscience in A Canticle for Leibowitz, all of which convey a similar underlying message: that knowledge usurped by power constitutes the gravest of dangers. There is the ageless, wandering Jew, searching for a messiah that never manifests himself, who therefore remains in a fallen world in which he lives a life of eternal exile. There is the Poet, who in his farcical way condemns the alliance between the holders of knowledge, both the Memorabilia and the new and secular collegium, and the new centers of military power.

And then there are the monks of the Albertian Order of Leibowitz itself. Here is a dialogue between the abbot Dom Paulo and the lead scholar of the new collegium, Thon Taddeo, on the latter’s closeness with the rising satrap, Hannegan. It is a dialogue which captures the essential message behind A Canticle for Leibowitz.

Thon Taddeo:

Let’s be frank with each other, Father. I can’t fight the prince that makes my work possible- no matter what I think of his policies or his politics. I appear to support him, superficially, or at least to overlook him- for the sake of the collegium. If he extends his lands, the collegium may incidentally profit. If the collegium prospers, mankind will profit from our work.

What can I do about it? Hannegan is prince, not I.

Dom Paulo:

But you promise to begin restoring Man’s control over nature. But who will govern the use of that power to control natural forces? Who will use it? To what end? How will you hold him in check? Such decisions can still be made. But if you and your group don’t make them now, others will soon make them for you. (206)

And, of course, the wrong decisions are made and power and knowledge are aligned, a choice which unfolds in the book’s final section as another abbot, Dom Zerchi, reflects on a world on the eve of another nuclear holocaust:

Listen, are we helpless? Are we doomed to do it again, and again and again? Have we no choice but to play the Phoenix in an unending sequence of rise and fall?

Are we doomed to it, Lord, chained to the pendulum of our own mad clockwork helpless to stop its swing? (245)

The problem, to state it simply, is that we are not creatures that are wholly, innately good, a fact which did not constitute a danger to human civilization or even earthly life until the 20th century. Our quenchless curiosity has driven a progressive expansion of the scale of our powers, which has reached the stage where it risks intersecting with our flaws- not just our capacity to engage in evil actions, but our foolishness and our greed- to harm billions of persons, or even destroy life on earth. This is the tragic view of the potential dangers of our newly acquired knowledge.

The Christian genealogy of this tragic view provides the theological cosmology behind A Canticle for Leibowitz, yet we shouldn’t be confused into thinking Christianity is the only place where such sober pessimism can be found.

Take Hinduism: once, when asked what he thought of Gibbon’s Decline and Fall of the Roman Empire, Gandhi responded that Gibbon was excellent at compiling “vast masses of facts”, but that the truth he revealed by doing so was nothing compared to that of the ancient Hindu classic the Mahabharata. According to Pankaj Mishra, Gandhi held that:

 The truth lay in the Mahabharata‘s portrait of the elemental human forces of greed and hatred: how they disguise themselves as self-righteousness and lead to a destructive war in which there are no victors, only survivors inheriting an immense wasteland.

Buddhism contains similar lessons about how the root of human suffering was to be found in our consciousness (or illusion) of our own separateness when combined with our desire.

Religions, because they in part contain Mankind’s longest reflections on human nature, tend to capture this tragic condition of ultimately destructive competition between sentient beings with differing desires and wills- a condition which we may find is not only possessed by our fellow animals, but may be part of our legacy to any sentient machines that are our creations as well. Original sin indeed!

Yet recently, religion has been joined by a secular psychology that is reviving Freudian pessimism, though on a much more empirically sound basis. Contemporary psychology, the most well known of which is the work of Daniel Kahneman, has revealed the extent to which human beings are riddled with cognitive and moral faults which stand in the way of rational assessment and moral decisions- truths of which the world religions have long been aware.

The question becomes, then: what, if anything, can we do about this? Yet right out of the gate we might stumble on the assumption behind the claim that our technological knowledge has advanced while our moral nature has remained intractably the same. That assumption is the claim that the Enlightenment project of reforming human nature has failed.

For the moment I am only interested in two diametrically opposed responses to this perceived failure. The first wants to return to the pre-Enlightenment past, to a world built around the assumptions of Mankind’s sinfulness and free of the optimistic assumptions regarding democracy, equality and pluralism, while the second thinks we should use the tools of the type of progress that clearly is working- our scientific and technological progress- to reach in and change human nature so that it better conforms to where we would like Mankind to be in the moral sense.

A thinker like Glenn W. Olsen, the author of The Turn to Transcendence: The Role of Religion in the Twenty-first Century, represents a very erudite and sophisticated version not exactly of fundamentalism, but of a recent reactionary move against modernity. His conclusion is that the Enlightenment project of reforming Mankind into rational and moral creatures has largely failed, so it might be best to revive at least some of the features of the pre-Enlightenment social-religious order, which was built on less optimistic assumptions regarding Mankind’s animal nature, but more optimistic ones about our ultimate spiritual transcendence of those conditions- a transcendence which occurs largely in the world to come.

Like his much less erudite fellow travelers who go by the nom de guerre of neo-reactionaries, Olsen thinks this need to revive pre-Enlightenment forms of orientation to the world will require abandoning our faith in democracy, equality, and pluralism.

A totally opposite view, though equally pessimistic in its assumptions regarding human nature, is that of those who propose using the tools of modern science, especially modern neuroscience and neuropharmacology, to cure human beings of their cognitive and moral flaws. Firmly in this camp is someone like the bio-ethicist, Julian Savulescu who argues that using these tools might be our only clear escape route from a future filled with terrorism and war.

Both of these perspectives have been countered by Steven Pinker in his monumental The Better Angels of Our Nature. Pinker’s is the example par excellence of the argument that the Enlightenment wasn’t a failure at all- it actually worked. People today are much less violent and more tolerant than at any time in the past. Rather than seeing our world as one that has suffered moral decay at worst, and the failure of progressive assumptions regarding human nature at best, Pinker presents a world where we are in every sense morally more advanced than our ancestors, who had no compunction about torturing people on the wheel or enslaving millions of individuals. So much for the nostalgia of the neo-reactionaries.

And Pinker’s argument seems to undermine the logic behind the push for moral enhancement as well, for if current “technologies” such as universal education are working, in that violence has been in almost precipitous decline, why the need to push something far more radical and intrusive?

Here I’ll add my own two cents, for I can indeed see an argument for cognitive and moral enhancement as a humane alternative to our barbaric policy of mass incarceration, where many of the people we currently lock up and conceal, in order to hide from ourselves our own particular variety of barbarism, are there because of deficits of cognition and self-control. Unlike Savulescu, however, I do not see this as an answer to our concerns with security, whether in the form of state-vs-state war or a catastrophic version of terrorism. Were we so powerful that we could universally implement such moral enhancements, and ensure they were not used instead to tie individuals even closer together in groups standing in rivalry against other groups, we would not have these security concerns in the first place.

Our problem is not that the Enlightenment has failed but that it has succeeded in creating educated publics who now live in an economic and political system from which they feel increasingly alienated. These are problems of structure and power that do not easily lend themselves to any sort of technological fix, but ones that require political solutions and change. Yet, even if we could solve these problems other more deeply rooted and existential dangers might remain.

The real danger to us, as it has always been, is less a matter of individual against individual than tribe against tribe, nation against nation, group against group. Reason does not solve the problem here, because reason is baked into the very nature of our conflict, as each group, whether corporation, religious sect, or country pursues its own rational good whose consequence is often to the detriment of other groups.

The danger becomes even more acute when we realize that as artificial intelligence increases in capability many of our decisions will be handed over to machines, which, with all rationality and no sense of a moral universe that demands something different, will continue the war of all against all that has been our lot since the rebellion of “Lucifer’s angels in heaven”.