Welcome to the New Age of Revolution

Fall of the Bastille

Last week the prime minister of Ukraine, Mykola Azarov, resigned under pressure from a series of intense riots that had spread from Kiev to the rest of the country. Photographs from the riots in The Atlantic blew my mind, like something out of a dystopian steampunk flick. Many of the rioters were dressed in gas masks that looked as if they had been salvaged from World War I. As weapons they wielded homemade swords, molotov cocktails, and fireworks. To protect their heads some wore kitchen pots and spaghetti strainers.

The protestors were met by riot police in hypermodern black suits of armor, armed with truncheons, tear gas, and shotguns, not all of them firing only rubber bullets. Orthodox priests with crosses and icons in their hands sometimes placed themselves perilously between the rioters and the police, hoping to bring calm to a situation that was spinning out of control.

Even for Ukraine, it was cold during the weeks of the riots, so cold that the blasts from police water cannons crystallized shortly after contact, leaving the detritus of the protests covered in sheets of ice, as if it had been shot with some kind of high-tech freeze gun.

Students of mine from Ukraine were largely in sympathy with the protestors, but feared civil war unless something changed quickly. The protests had been brought on by a backdoor deal with Russia to walk away from talks aimed at Ukraine joining the European Union. Protests over that agreement led to the passage of an anti-protest law that only further inflamed the rioters. The resignation of the Russophile prime minister seemed to calm the situation for a time, but with the return of the Ukrainian president Viktor Yanukovych to work (he was supposedly ill during the heaviest of the protests) the situation has once again become volatile. It was Yanukovych who was responsible for cutting the deal with Russia and pushing through the draconian limits on the freedom of assembly which had sparked the protests in the first place.

Ukraine, it seems, is a country being torn in two, a conflict born of demographics and history. Its eastern, largely Russian-speaking population looks towards Russia, while its western, largely Ukrainian-speaking population looks towards Europe. In this play both Russia and the West are no doubt trying to influence the outcome of events in their favor, and thus exacerbating the instability.

Yet, while such high levels of tension are new, the problem they reveal is deep in historical terms- the cultural tug of war over Ukraine between Russia and Europe, East and West, stretches at least as far back as the 14th century when western Ukraine was brought into the European cultural orbit by the Poles. Since then, and often with great brutality on the Russian side, the question of Ukrainian identity, Slavic or Western, has been negotiated and renegotiated over centuries- a question that will perhaps never be fully resolved and whose very tension may be what it actually means to be Ukrainian.

Where Ukraine goes from here is anybody’s guess, but despite its demographic and historical particularities, its recent experience adds to the growing list of mass protests, occurring regularly since perhaps the 2008 riots in Greece, that have toppled governments or at least managed to pressure them into reversing course.

I won’t compile a comprehensive list but will simply state the mass protests and riots I can cite from memory. There was the 2009 Green Revolution in Iran that was subsequently crushed by the Iranian government. There was the 2010 Jasmine Revolution in Tunisia, which toppled the government there and began what came to be the horribly misnamed “Arab Spring”. By 2011 mass protests had overthrown Hosni Mubarak in Egypt, and riots had broken out in London. 2012 saw a lull in mass protests, but in 2013 they came back with a vengeance. There were massive riots in Brazil over government cutbacks for the poor combined with extravagant spending in preparation for the 2014 World Cup, there were huge riots in Turkey which shook the government of the increasingly authoritarian Recep Tayyip Erdoğan, and in Egypt mass protests paved the way for a military coup that toppled the democratically elected Islamist president. Protesters in Thailand have been “occupying” the capital since early January. And now we have Ukraine.

These are just some of the protests that were widely covered in the media. Sometimes quite large, or at least numerous, protests take place in a country and are barely reported in the news at all. Between 2006 and 2010 there were 180,000 reported “mass incidents” in China. It seems the majority of these protests are related to local issues and not directed against the national government, but the PRC has been adept at keeping them free of the prying eyes of Western media.

The abortive 2009 riots in Iran were the first to be called a “Twitter Revolution” by Western media and digerati. The new age of revolution is often explained in terms of the impact of the communications revolution and social media. We have since had time to find out just how small a role Western, and overwhelmingly English-language, media platforms such as Twitter and FaceBook have played in this new upsurge of revolutionary energy, but that’s not the whole story.

I wouldn’t go so far as to say technology has been irrelevant in bringing about our new revolutionary era, I’d just put the finger on another technology, namely mobile phones. By 2008 mobile devices had, in the space of a decade, gone from a rich-world luxury into the hands of 4 billion people. By 2013, 6 billion of the world’s 7 billion people had some sort of mobile device, more people than had access to working toilets.

It is the very disjunction between the number of people able to communicate, and hence act en masse, and those lacking what we in the developed world consider necessities that should get our attention: a potentially explosive situation. And yet, we have known since Alexis de Tocqueville that revolutions are less the product of the poor, who have always known misery, than of a rising middle class whose ambitions have been frustrated.

Questions I would ask a visitor from the near future, if I wanted to gauge the state of the world a decade or two hence, would be whether the rising middle class in the developing world had put down solid foundations, and whether, and to what extent, it had been cut off at the knees by either the derailment of the juggernaut of the Chinese economy, the rising capacity of automation, or both.

The former fear seems to be behind the recent steep declines in the financial markets, where the largest blows have been suffered by developing economies. The latter is a longer-term risk for developing economies, which, if they do not develop quickly enough, may find themselves permanently locked out of the traditional arc of capitalist economic development, moving from agriculture to manufacturing to services.

Automation threatens the wage competitiveness of developing-economy workers at every stage of that arc: poor textile workers in Bangladesh competing with first-world robots, Indians earning a middle-class wage working at call centers or doing grunt legal or medical work increasingly in competition with more and more sophisticated, and in the long run less expensive, bots.

Intimately related to this would be my last question for our near-future time traveler; namely, does the global trend towards increasing inequality continue, increase, or dissipate? With the exception of government incompetence and corruption combined with mobile-enabled youth, rising inequality appears to be the only macro trend that these revolts share. It cannot, though, be the dominant factor; otherwise protests would be largest and most frequent in the country with the fastest-growing inequality: the US.

Revolutions, in the sense of the mobilization of a group of people massive enough and active enough to actually overthrow a government, are a modern phenomenon, and are so for a reason. Only since the printing press and mass literacy has the net of communication been thrown wide enough that revolution, as opposed to mere riots, has become possible. The Internet, and even more so mobile technology, have thrown that net even further, or better, deeper, with literacy no longer necessary, and with the capacity for intergroup communication now in real time and no longer in need of, or under the direction of, a center, as was the case in the era of radio and television.

Technology hasn’t resulted in the “end of history”, but quite the opposite. Mobile technology appears to facilitate the formation of crowds, but what these crowds mobilize around are usually deep-seated divisions which the society in which the protests occur has yet to resolve, or widely unpopular decisions made over the people’s heads.

For many years now we have seen this phenomenon in financial markets, one of the first areas to develop deep, rapidly changing interconnections based on the digital revolution. Only a few years back, democracy seemed to have come under the thumb of much more rapidly moving markets, but now, perhaps, a populist analog has emerged.

What I wonder is how the state will respond to this, or how this new trend of popular mobilization may intersect with yet another contemporary trend- mass surveillance by the state itself?

The military theorist Carl von Clausewitz came up with his now famous concept of the “fog of war”, defined as “the uncertainty in situational awareness experienced by participants in military operations. The term seeks to capture the uncertainty regarding one’s own capability, adversary capability, and adversary intent during an engagement, operation, or campaign.” If one understands revolution as a fever-pitch exchange of information leading to action leading to exchange of information and so on, then all revolutions in the past could be said to have taken place with all players under such a fog.

Past revolutions have only been transparent to historians. From a bird’s eye view and in hindsight scholars of the mother of all revolutions, the French, can see the effects of Jean-Paul Marat’s pamphlets and screeds inviting violence, the published speeches of the moralistic tyrant Robespierre, the plotting letters of Marie-Antoinette to the Austrians or the counter-revolutionary communique of General Lafayette. To the actors in the French Revolution itself the motivations and effects of other players were always opaque, the origin, in part, of the revolution’s paranoia and Reign of Terror which Robespierre saw as a means of unmasking conspirators and hypocrites.

With the new capacity of governments to see into communication, revolutions might be said to be becoming transparent in real time. Insecure governments that might be toppled by mass protest would seem to have an interest in developing the capacity to monitor the communication and track the movements of their citizens. Moore’s Law has made the once unachievable goal of total surveillance by the state relatively cheap.

During revolutionary situations foreign governments (with the US at the top of the list) may have the inclination to peer into revolutions through digital surveillance, and in some cases will likely use this knowledge to interfere so as to shape outcomes in their own favor. States that are repressive towards their own people, such as China, will likewise try to use these surveillance tools to ensure revolutions never happen, or to steer them toward preferred outcomes if they should occur despite their best efforts.

One can only hope that the ability to see into a revolution while it is happening does not engender the illusion that we can also control its outcome, for as the riots and revolutions of the past few years have shown, moves against a government may be enabled by technology imported from outside, but the fate of such actions is decided by the people on the ground, who alone might be said to have full responsibility for the future of the society in which revolution has occurred.

Foreign governments are engaged in a dangerous form of hubris if they think they can steer outcomes in their favor oblivious to local conditions, and governments that think technology gives them a tool by which they can ignore the cries of their citizens are allowing the very basis on which they stand to rot underneath them and eventually collapse. This is a truth those who consider themselves part of a new global elite should heed when it comes to the issue of inequality.

The Dark Side of a World Without Boundaries

Peter Blume: The Eternal City


The problem I see with Nicolelis’ view of the future of neuroscience, which I discussed last time, is not that I find it unlikely that a good deal of his optimistic predictions will someday come to pass, it is that he spends no time at all talking about the darker potential of such technology.

Of course, the benefit to paralyzed persons, or persons otherwise locked tightly in the straitjacket of their own skull, of technologies to communicate directly from the human brain to machines or to other human beings is undeniable. The potential to expand the range of human experience through directly experiencing the thoughts and emotions of others is, also, of course, great. Next weekend being Super Bowl Sunday, I can’t help but think how cool it would be to experience the game through the eyes of Peyton Manning, or Russell Wilson. How amazing would a concert or an orchestra be if experienced likewise in this way?

Still, one need not be a professional dystopian and unbudgeable Cassandra to come up with all kinds of quite frightening scenarios that might arise through the erosion of boundaries between human minds. All one needs to do is pay attention to the less sanguine developments in the latest iteration of a technology once believed to be utopian, that is, the Internet and the social Web, to get an idea of some of the possible dangers.

The Internet was at one time prophesied to usher in a golden age of transparency and democratic governance, and promised the empowerment of individuals and small companies. Its legacy, at least at this juncture, is much more mixed. There is little disputing that the Internet and its successor mobile technologies have vastly improved our lives, yet it is also the case that these technologies have led to an explosion in the activities of surveillance and control, by nefarious individuals and criminal groups, corporations, and the state. Nicolelis’ vision of eroded boundaries between human minds is but the logical conclusion of the trend towards transparency. Given how recently we’ve been burned by techno-utopianism in precisely this area, a little skepticism is certainly in order.

The first question that might arise is whether direct brain-to-brain communication (especially when invasive technologies are required) will ever out-compete the kinds of mediated mind-to-mind technology we have had for quite some time, that is, both spoken and written language. Except in very special circumstances, not all of them good, language seems to retain its advantages over direct brain-to-brain communication, and in the cases where direct communication is chosen over language, the choice may be less a gain in communicative capacity than a signal that normalcy has broken down.

Couples in the first rush of new love may want to fuse their minds for a time, but a couple that has been together for decades? It seems less likely, though there might be cases of pathological jealousy, or smothering control bordering on abuse, in which one half of a couple would demand such a thing. The communion with fellow human beings offered by religion might gain something from the ability of individuals to bridge the chasm that separates them from others, but such technologies would also be a “godsend” for fanatics and cultists. I cannot decide whether a mega-church such as Yoido Full Gospel Church, in a world where Nicolelis’ brain-nets are possible, would represent a blessed leap in human empathetic capacity or a curse.

Nicolelis seems to assume that the capacity to form brain-nets will result in some kind of idealized and global neural version of FaceBook, but human history seems to show that communication technology is just as often used to hermetically seal group off from group, and becomes a weapon in the primary human moral dilemma, which is not the separation of individual from individual so much as the rivalry between different groups. We seem unable to extricate ourselves from such rivalry even when the stakes are merely imagined, as many of us will do next Sunday, and as Nicolelis himself should know from his beloved soccer, where the rivalry expressed in violent play has a tendency to slip into violent riots in the real world.

Direct brain-to-brain communication would seem to have real advantages over language when persons are joined together in teams acting as a unit in response to a rapidly changing situation, groups such as firefighters. By far the greatest value of such capacities would be found in small military units such as platoons, whose members are already joined in a close experiential and deep emotional bond, as is on display in the documentary Restrepo. Whatever their virtuous courage, armed bands killing one another are about as far as you can get from the global kumbaya of Nicolelis’ brain-net.

If such technologies were non-invasive, would they be used by employers to monitor lapses of sustained attention while on the job? What of poor dissidents under the thumb of a madman like Kim Jong Un? Imagine such technologies in the hands of a pimp, or even of the kinds of slave owners who, despite our obliviousness, still exist.

One of the problems with the transparency paradigm, and with the new craze for an “Internet of things” in which everything in an individual’s environment is transformed into a Web-connected computer, is the fact that anything that is a computer, by that very fact, becomes hackable. If someone is worried about their digital data being sucked up and sold on an elaborate black market, about webcams being used as spying tools, or about the vulnerabilities that come with connecting one’s home to the Web, how much more worried should they be if their very thoughts could be hacked? The opportunities for abuse seem legion.

Everything is shaped by personal experience. Nicolelis’ views of the collective mind were forged by the crowds that overthrew the Brazilian military dictatorship and by his beloved soccer games. But the crowd is not always wise. I, being a person who has always valued solitude and time with individuals and small groups over the electric energy of crowds, have no interest in being part of a hive mind to any degree beyond the limited extent to which I might be said to already be in such a mind by writing a piece such as this.

It is a sad fact, but nonetheless true, that should anything like Nicolelis’ brain-net ever be created, it cannot be built on the assumption of the innate goodness of everyone. As our experience with the Internet should have taught us, an open system with even a minority of bad actors leads to a world of constant headaches and sometimes to what amount to very real dangers.

Knowledge and Power, Or Dark Thoughts In Winter


For people in cold climes, winter, with its short days and hibernation-inducing frigidity, is a season to let one’s pessimistic imagination roam. It may be overly deterministic, but I often wonder whether those who live in climates that do not vary with the seasons, where it is almost always warm and sunny, or always cold and grim, experience the full spectrum of human sentiments less often over the course of a year, and end up being either too utopian for reality to justify, or too dystopian for those lucky enough to be here and have a world to complain about in the first place.

The novel I wrote about last time, A Canticle for Leibowitz, is a winter book because it is such a supremely pessimistic one. It presents a world that reaches a stage of technological maturity only to destroy itself again, and again.

What we would consider progress occurs only in terms of Mankind’s technological, not its moral, capacity. The novel ends with yet another nuclear holocaust, only this time the monks who hope to preserve knowledge set out not for the deserts of earth, but for the newly discovered planets around nearby stars: the seeds of a new civilization, but in all likelihood not the beginning of an eternal spring.

It’s a cliche to say that among the biggest problems facing us is that our moral or ethical progress has not kept pace with our technological and scientific progress, but labeling something a cliche doesn’t of necessity mean it isn’t true. Miller, the author of A Canticle for Leibowitz, was tapping into a deep historical anxiety that this disjunction between our technological and moral capacity constituted the ultimate danger for us, and he defined the problem in a certain, and I believe ultimately very useful, way.

Yet, despite Miller’s and others’ anxiety we are still here, so the fear that the chasm between our technological and moral capacity will destroy us remains just that, an anxiety based on a projected future. It is a fear with a long backstory.

Many cultures have hubris myths or warnings against unbridled curiosity, remember Pandora and her jar, or Icarus and his melted wings, but Christianity turned this warning against pride into the keystone of a whole religious cosmology. That is, in the Christian narrative, especially in the writings of Augustine, death, and with it the need for salvation, comes into the world out of the twin sins of Eve’s pride and curiosity.

It was an ancient anxiety, embedded right in the heart of Christianity, which burst into consciousness with renewed vigor during the emergence of modern science, an event that occurred at the same time as Christian revival and balkanization. It was a contradiction that many thinkers during the early days of the scientific revolution, from Isaac Newton, to Francis Bacon, to John Milton, to Thomas More, found themselves faced with; namely, if the original sin of our first parents was a sin of curiosity, how could a deeply religious age justify its rekindled quest for knowledge?

It is probably hard for most of us to get our minds around just how religious many of the figures of the scientific revolution were, given our own mythology regarding the intractable war between science and religion, and the categories into which secular persons, who tend to rely on science, and religious persons, who far too often exhibit an anti-scientific bias, now often fall. Yet a scientific giant like Newton was in great measure a Christian fundamentalist by today’s standards. One of the most influential publicists for the “new science” was Francis Bacon, who saw the task of science as bringing back the state of knowledge found in the “prelapsarian” world, that is, the world before the fall of Adam and Eve.

As I have written about previously, Bacon was one of the first to confront the contradiction between the urge for new (in his view actually old) knowledge and the traditional Christian narrative regarding forbidden knowledge and the sin of pride. His answer was that the millennium was at hand and therefore a moral revival of humanity was taking place that would parallel and buffer the revival of knowledge. Knowledge was to be used for “the improvement of man’s estate”, and his new science was understood as the ultimate tool of Christian charity. In Bacon’s view, such science would only prove ruinous were it used for the sinful purposes of the lust for individual and group aggrandizement and power.

Others were not so optimistic.

Thomas More, for instance, who is credited with creating the modern genre of utopia, wasn’t so much sketching out a blueprint for a perfect world as critiquing his own native England, while at the same time suggesting that no perfect world was possible due to Man’s sinfulness, or what his dear friend Erasmus called “folly”.

Yet the person who best captured the religious tensions and anxieties present when a largely Christian Europe embarked on its scientific revolution was the blind poet John Milton. We don’t normally associate Milton with the scientific revolution, but we should. Milton not only visited the imprisoned Galileo, he made the astronomer and his ideas into recurring themes, presented in a positive light, in his Paradise Lost. Milton also wrote a stunning defense of the freedom of thought, the Areopagitica, whose principles would have made Galileo a free man.

Paradise Lost is, yes, a story in the old Christian vein of warnings against hubris and unbridled curiosity, but it is also a story about power. The conclusion that we are “self-begot” leads not to a state of utopian-anarchic godlessness, but to the false belief that we ourselves can take the place of God. That is, the discovery of knowledge was tainted not when we, like Adam in Milton’s work, sought answers to our questions regarding the nature of the world, but the minute this knowledge was used as a tool of power against, and rule over, others.

From the time of Milton to the World Wars of the 20th century the balance between a science that had “improved man’s estate” and that which had served as the tool of power leaned largely in the direction of the former, though Romantics like Percy and Mary Shelley gave us warnings.

The idea that science and technology were tools for improving the conditions of living for the mass of mankind, rather than instruments in the pursuit of the perennial human vices of greed and ambition, rang hollow, of course, if one lived in a non-scientifically empowered, non-Western civilization and found oneself at the ends of the barrels of Western gunboats, a fact we in the West should not forget now that the scientific revolution and its technology, for good and ill, is global. In the West itself, however, this other, darker side of science and technology was largely occulted even in the face of the human devastation of 19th-century wars.

The Second World War, and especially the development of nuclear weapons, brought this existential problem back fully into consciousness, and A Canticle for Leibowitz is a near pitch-perfect representative of this thinking, almost the exact opposite of the view of another near-contemporary Catholic thinker, Teilhard de Chardin, who saw technology as the means to godhead in his The Phenomenon of Man.

There are multiple voices of conscience in A Canticle for Leibowitz, all of which convey a similar underlying message: that knowledge usurped by power constitutes the gravest of dangers. There is the ageless, wandering Jew in search of a messiah who never manifests himself, and who therefore remains in a fallen world in which he lives a life of eternal exile. There is the Poet who in his farcical way condemns the alliance between the holders of knowledge, both the Memorabilia and the new and secular collegium, and the new centers of military power.

And then there are the monks of the Albertian Order of Leibowitz itself. Here is a dialogue between the abbot Dom Paulo and the lead scholar of the new collegium, Thon Taddeo, on the latter’s closeness with the rising satrap, Hannegan. It is a dialogue which captures the essential message behind A Canticle for Leibowitz.

Thon Taddeo:

Let’s be frank with each other, Father. I can’t fight the prince that makes my work possible- no matter what I think of his policies or his politics. I appear to support him, superficially, or at least to overlook him- for the sake of the collegium. If he extends his lands, the collegium may incidentally profit. If the collegium prospers, mankind will profit from our work.

What can I do about it? Hannegan is prince, not I.

Dom Paulo:

But you promise to begin restoring Man’s control over nature. But who will govern the use of that power to control natural forces? Who will use it? To what end? How will you hold him in check.  Such decisions can still be made. But if you and your group don’t make them now, others will soon make them for you. (206)

And, of course, the wrong decisions are made and power and knowledge are aligned, a choice which unfolds in the book’s final section as another abbot, Dom Zerchi, reflects on a world on the eve of another nuclear holocaust:

Listen, are we helpless? Are we doomed to do it again, and again and again? Have we no choice but to play the Phoenix in an unending sequence of rise and fall?

Are we doomed to it, Lord, chained to the pendulum of our own mad clockwork helpless to stop its swing? (245)

The problem, to state it simply, is that we are not creatures that are wholly, innately good, a fact which did not constitute a danger to human civilization or even to earthly life until the 20th century. Our quenchless curiosity has driven a progressive expansion of the scale of our powers, which has now reached the stage where it risks intersecting with our flaws, and not just our capacity to engage in evil actions but our foolishness and our greed, to harm billions of persons, or even destroy life on earth. This is the tragic view of the potential dangers of our newly acquired knowledge.

The Christian genealogy of this tragic view provides the theological cosmology behind A Canticle for Leibowitz, yet we shouldn’t be confused into thinking Christianity is the only place where such sober pessimism can be found.

Take Hinduism: once, when asked what he thought of Gibbon’s Decline and Fall of the Roman Empire, Gandhi responded that Gibbon was excellent at compiling “vast masses of facts”, but that the truth he revealed by doing so was nothing compared to the ancient Hindu classic the Mahabharata. According to Pankaj Mishra, Gandhi held that:

 The truth lay in the Mahabharata‘s portrait of the elemental human forces of greed and hatred: how they disguise themselves as self-righteousness and lead to a destructive war in which there are no victors, only survivors inheriting an immense wasteland.

Buddhism contains similar lessons about how the root of human suffering is to be found in our consciousness (or illusion) of our own separateness when combined with our desire.

Religions, because they in part contain Mankind’s longest reflections on human nature, tend to capture this tragic condition of ultimately destructive competition between sentient beings with differing desires and wills, a condition which we may find is not only shared by our fellow animals, but may be part of our legacy to any sentient machines that are our creations as well. Original sin indeed!

Yet recently, religion has been joined by a secular psychology that is reviving Freudian pessimism, though on a much more empirically sound basis. Contemporary psychology, the best-known example of which is the work of Daniel Kahneman, has revealed the extent to which human beings are riddled with cognitive and moral faults that stand in the way of rational assessment and moral decisions, truths of which the world religions have long been aware.

The question becomes, then: what, if anything, can we do about this? Yet right out of the gate we might stumble on the assumption behind the claim that our technological knowledge has advanced while our moral nature has remained intractably the same. That assumption is the claim that the Enlightenment project of reforming human nature has failed.

For the moment I am only interested in two diametrically opposed responses to this perceived failure. The first wants to return to the pre-Enlightenment past, to a world built around the assumption of Mankind’s sinfulness and free of optimistic assumptions regarding democracy, equality, and pluralism, while the second thinks we should use the tools of the type of progress that clearly is working, our scientific and technological progress, to reach in and change human nature so that it better conforms to where we would like Mankind to be in the moral sense.

A thinker like Glenn W. Olsen, the author of The Turn to Transcendence: The Role of Religion in the Twenty-first Century, offers a very erudite and sophisticated version, not exactly of fundamentalism, but of a recent reactionary move against modernity. His conclusion is that the Enlightenment project of reforming Mankind into rational and moral creatures has largely failed, so it might be best to revive at least some features of the pre-Enlightenment social-religious order, which was built on less optimistic assumptions regarding Mankind’s animal nature, but more optimistic ones about our ultimate spiritual transcendence of those conditions, a transcendence which occurs largely in the world to come.

Like his much less erudite fellow travelers who go by the nom de guerre of neo-reactionaries, Olsen thinks this revival of pre-Enlightenment forms of orientation to the world will require abandoning our faith in democracy, equality, and pluralism.

A totally opposite view, though equally pessimistic in its assumptions regarding human nature, is that of those who propose using the tools of modern science, especially modern neuroscience and neuropharmacology, to cure human beings of their cognitive and moral flaws. Firmly in this camp is the bioethicist Julian Savulescu, who argues that using these tools might be our only clear escape route from a future filled with terrorism and war.

Both of these perspectives have been countered by Steven Pinker in his monumental The Better Angels of Our Nature. Pinker’s is the example par excellence of the argument that the Enlightenment wasn’t a failure at all, but actually worked. People today are much less violent and more tolerant than at any time in the past. Rather than seeing our world as one that has suffered moral decay at worst, or the failure of progressive assumptions regarding human nature at best, Pinker presents a world where we are in every sense morally more advanced than our ancestors, who had no compunction about torturing people on the wheel or enslaving millions of individuals. So much for the nostalgia of neo-reactionaries.

And Pinker’s argument seems to undermine the logic behind the push for moral enhancement as well, for if current “technologies” such as universal education are working, in that violence has been in an almost precipitous decline, why the need to push something far more radical and intrusive?

Here I’ll add my own two cents, for I can indeed see an argument for cognitive and moral enhancement as a humane alternative to our barbaric policy of mass incarceration, where many of the people we currently lock up and conceal, in order to hide from ourselves our own particular variety of barbarism, are there because of deficits of cognition and self-control. Unlike Savulescu, however, I do not see this as an answer to our concerns with security, whether in the form of state-vs-state war or a catastrophic version of terrorism. Were we so powerful that we could universally implement such moral enhancements and ensure that they were not used instead to tie individuals even closer together in groups that stood in rivalry against other groups, then we would not have these security concerns in the first place.

Our problem is not that the Enlightenment has failed but that it has succeeded in creating educated publics who now live in an economic and political system from which they feel increasingly alienated. These are problems of structure and power that do not easily lend themselves to any sort of technological fix, but ones that require political solutions and change. Yet, even if we could solve these problems other more deeply rooted and existential dangers might remain.

The real danger to us, as it has always been, is less a matter of individual against individual than tribe against tribe, nation against nation, group against group. Reason does not solve the problem here, because reason is baked into the very nature of our conflict, as each group, whether corporation, religious sect, or country pursues its own rational good whose consequence is often to the detriment of other groups.

The danger becomes even more acute when we realize that as artificial intelligence increases in capability many of our decisions will be handed over to machines, which, with all rationality and no sense of a moral universe that demands something different, will continue the war of all against all that has been our lot since the rebellion of “Lucifer’s angels in heaven”.

Preparing for a New Dark Age

Monk Scribe

Back in what now seems like a millennium ago, when I was a senior in high school and a freshman in college, I used to go to yard sales. I wasn’t looking for knickknacks or used appliances, but for cheap music and, mostly, for books. If memory serves, you could usually get a paperback for 50 cents, four of them for a dollar, and a hardcover for a buck.

I have no idea what made me purchase the particular books I did, and especially the works of fiction. At that point in my life I didn’t so much know what literature was as I had heard rumors that there was something out there called literature I’d likely be interested in. Unlike Stephen Greenblatt, whom I wrote about last time, I certainly didn’t buy books for their sexually suggestive covers, and thankfully so, for given the area where I was living at the time, I would now be surrounded by shelves of harlequin romances; though, come to think of it, it might have made me more skillful in love.

I don’t buy so many books anymore, having become a Kindle man: I press a button and voilà, a work I’m after appears magically on my little screen. I also live in an area with very good libraries, both public and university, which for a bibliophile like myself is about as good as Florida for a person who worships the sun.

Yet I still have maybe a hundred books surrounding me that I own but have never read. Sometimes I’ll rummage through my shelves to pick out a book I probably haven’t even opened since I bought it, and the untouched pages will be brittle and break under my clumsy fingers. The other day I came across Walter M. Miller, Jr.’s novel A Canticle for Leibowitz. I’ve been working on a story with Catholic and dystopian/utopian technological themes and thought it might be a good idea to read this science-fiction classic before I proceeded any further into the labyrinth of the tale I was crafting, because I knew it dealt with similar ideas.

I did not anticipate the power this wonderful little novel would have for me. It touched on themes I had been thinking about for some time: the search for a long range view that looked to the past as well as the future, the tension between knowledge and power, and the understanding that this tension was an existential component of the human condition, the brake on all our utopian aspirations, and perhaps the “original sin” that would ultimately sink us.

I will look at the deeper lessons of A Canticle for Leibowitz sometime in the future, for now I just want to talk about its suggestions for the long range human future and specifically one aspect of that long range future- how do we preserve human knowledge so as to avoid ever going through another long dark age?   

A Canticle for Leibowitz was published in 1960. Had Miller sketched out rather than merely stated the apocalyptic conditions that precede the world portrayed in the novel, it would certainly have given our own generation’s versions of the apocalypse, with shows like The Walking Dead, a run for their money. The novel takes place after the world has been destroyed in a nuclear holocaust known as the Flame Deluge. After the horrors unleashed by the war, including the creation of hordes of radioactive mutants by “the demon Fall Out”, the masses seek revenge on the holders of knowledge they deem responsible, murdering them and destroying their works, Khmer Rouge style, in a worldwide intellectual genocide known as the Simplification.

A Jewish electrical engineer, Isaac Edward Leibowitz, who had been working for the US military in the run-up to the war, joins the Catholic Church, perhaps the only long-lived institution able to survive the Simplification, and founds a monastic order and monastery known as the Albertian Order of Leibowitz. The order is committed to preserving human knowledge from the Simplification through book smuggling (booklegging), and afterwards aims to store and preserve this knowledge in its Utah desert monastery, a collection of thoughts from the past which the monks call the Memorabilia.

Unlike our own apocalyptic anxieties, which seem so artificial, as if we’ve become addicted to the adrenaline high of scaring ourselves nearly to death, the fears Miller was giving voice to were frighteningly real. Three years after his novel’s publication we really did almost destroy ourselves in a Flame Deluge with the Cuban Missile Crisis, and only escaped our own destruction by a hair’s breadth.

Yet even with these real world anxieties, or perhaps because they were so real, A Canticle for Leibowitz is a rip-roaring funny book, Canterbury Tales funny or even Monty Python funny. Especially the first part, which deals with a hapless monk, Francis, who discovers original manuscripts of the soon-to-be-sainted Leibowitz himself. The rest of the book is not as humorous, much more tragic, as we watch humanity make the same mistake over again, with knowledge being used in the name of the lust for power, and that lust for power, enabled by knowledge, again nearly destroying us.

Miller, of course, was playing with real history, the way the Christian monasteries had preserved knowledge in Western Europe after the fall of Rome. It was a theme explored, though from a much different angle, by Isaac Asimov some years earlier in his Foundation Series where the preservation of knowledge through the establishment of two different “foundations” at the ends of the galaxy is a deliberate effort to shorten a galactic dark age from tens of thousands to a “mere” thousand years.

Monks of the Albertian Order of Leibowitz risked themselves to gather and preserve their Memorabilia, knowledge which they did not understand, willing to wait thousands of years if necessary for the day when “an Interrogator would come, and things would be fitted together again.”  (62)

If I take the picture presented by Stephen Greenblatt in his The Swerve: How the World Became Modern as historically accurate, the reasons our historical monks ended up preserving knowledge were much more accidental than those of their fictional analogues in Miller’s novel. According to Greenblatt, the monasteries ended up preserving knowledge due to a contingent rule of some orders that monks spend some of their time reading. To read, of course, requires something written, and monasteries became one of the few places in the early Middle Ages to not only collect but preserve books through copying.

Still, according to Greenblatt, we shouldn’t imagine that they were doing so in anticipation of a rebirth of learning, and they weren’t all that intellectually engaged with the books they preserved and copied. While reading or copying, monks were forbidden to discuss the books in front of them, which is probably good for us. They became instead immense hive-mind photocopiers, cloning and shelving a hodgepodge of surviving works from the ancients, a task which, had someone not done it with care and regularity, would have quickly led to the disappearance of the vast intellectual heritage of the classical world. The thoughts preserved on papyrus and animal skins would in a short time have been eaten away by literal bookworms.

There is an argument out there, Francis Fukuyama’s is the one that comes to mind, that we are unlikely to experience the kinds of cyclical declines and dark ages seen in prior periods of human history because knowledge is now global. I think there are some other holes in that argument, but for now I won’t quibble, and want to focus on only one chasm- the possibility that the entire globe could experience some hammer blow that would shatter civilization everywhere all at one go.

These are catastrophic risks, things that we should be intensely focused on avoiding in the first place, as the Global Catastrophic Risk Institute and the Future of Humanity Institute, among others, have been urging us to do, but against which we should also implement ways of absorbing the hit should it come. Basically, catastrophic risks are disasters, natural or man-made, that would have the effect of devastating human civilization on a global, not just a local, scale. They are not likely, but their chance is greater than zero.

Though we dodged the bullet that haunted Miller, we might still be faced with the threat of global thermonuclear war at some point in the future. Current saber rattling in the Pacific is not a good sign. We could be whacked by a massive object from outer space such as the one which wiped out the dinosaurs, or zapped by a gamma-ray burst, or crushed by the superintelligences we are trying to build in an AI apocalypse; there could be a super-pandemic, perhaps created deliberately by some group of technologically proficient, nihilistic maniacs trying to kill us all, or a truly runaway greenhouse effect triggered by a methane release in the warming Arctic. In other words, there are a lot of things that might nearly push us back to the stone age, even if no one of them is particularly likely.

To return to my question above: in the face of a catastrophic scenario, how could we preserve human knowledge so as to avoid ever going through another long dark age? The first issue that strikes me when I start thinking about this is the quite practical one of what medium would be best for storing information over the long haul.

Right now, of course, we are all about digital copying and storage. Google has so far scanned a little over 20 million books, a service I love, and one that has kept my acidic fingers off a gem like the first edition of Adam Smith’s Moral Sentiments, though the company’s public service in doing this has not been without controversy.

You’ve also got to hand it to the Scandinavians, who seem to do everything with meditative forethought. (I credit the six months of darkness.) The Norwegians not only have the Svalbard global seed bank, which preserves the world’s agricultural inheritance in a NORAD-like facility in the icy north near the North Pole, but are now aiming to digitize and make available all the world’s books in Norwegian. Should a global catastrophe occur, having done so might mean that, in the words of Alexis Madrigal:

…Norwegians become to 27th-century humans what the Greeks were to the Renaissance. Everyone names the children of the space colonies Per and Henrik, Amalie and Sigrid. The capital of our new home planet will be christened Oslo.

Even absent Google, Americans aren’t totally left in the dust as archivers by our polar-bear-pale brethren up north. We have the quite respectable Internet Archive and the world’s oldest (although the word “old” seems strange here) digital library, Project Gutenberg.

Yet there are a number of possible catastrophic scenarios, such as an AI apocalypse, in which this capacity to easily store and recover digitized information might be irrevocably lost. You also need functioning electricity grids and/or battery manufacturing capacity, both of which seem in danger should a truly big one occur. Disturbing on this score is the fact that a number of libraries are eliminating their physical collections as they embrace digitization, something that the Internet Archive is now trying to rectify by collecting actual physical copies of books.

Venerable institutions such as The Library of Congress and The Smithsonian Institution already have extensive physical collections, not just of books but of physical artifacts as well, and should they somehow survive a global catastrophe, I picture them being our equivalent of the Library of Alexandria, where people will flock to access not just books but working versions of vital technologies that might otherwise have been lost, such as electrical lighting, which the monk Brother Kornhoer in A Canticle for Leibowitz has to jerry-rig back into existence almost from scratch.

The use of paper as a medium to store our books and blueprints at first seems like the tried and true option; after all, it served us so well in the past. But as anyone who has a book more than 50 years old knows, modern paper decays very fast. And using paper as our medium of storage also assumes that whatever catastrophic event has happened has left us with enough trees. Even the antique version of paper, sometimes made from animal skins, succumbs after a few centuries to the literal “bookworm”, and you also need either printing presses or whole human institutions of scriveners, such as the monasteries and monks, to make copies.

The late classical world already had monastic institutions that were widespread before the loss of knowledge, a loss which took a long time to unfold. Our own loss of knowledge, should it (however unlikely) occur, seems less likely to creep into being than to come along with a bang, and in the age of Scarlett Johansson, who wants to be a monk?

As always when it comes to thinking about the deep future, The Long Now Foundation has its Rosetta Project, which preserves the world’s languages on electroformed solid-nickel disks, a model which might serve as a template for long-term information storage. Here’s their description:

The Rosetta Disk fits in the palm of your hand, yet it contains over 13,000 pages of information on over 1,500 human languages. The pages are microscopically etched and then electroformed in solid nickel, a process that raises the text very slightly – about 100 nanometers – off of the surface of the disk. Each page is only 400 microns across – about the width of 5 human hairs – and can be read through a microscope at 650X as clearly as you would from print in a book. Individual pages are visible at a much lower magnification of 100X.

Something like the Rosetta Disk avoids the ravages of the bookworm, and will certainly last a long time, but you do need a microscope to read it, and it’s pretty easy to imagine a future where microscopes are a rare or even nonexistent tool. We could make larger versions of the Rosetta Disk so that the text is readable to the naked eye, but then we run into the limitations of cost: we can’t very well copy more than a handful of the books in existence using this method. And they would, it seems, only be reproducible on a large scale by using one of the other methods.

Then again, we could always look to nature. Life on earth has over a 3-billion-year head start on human beings when it comes to storing and passing along information: it’s called DNA. You can put an amazingly large amount of information on an equally amazingly small segment of DNA, about half a million DVDs of storage on half a gram! At the beginning of 2013, researchers in the UK were able to encode Shakespeare’s sonnets and MLK’s “I have a dream” speech, among other things, in DNA. Much more than any medieval abbot, nature abhors copying errors, and therefore DNA makes not merely a great storage medium (as long as it is stored somewhere cool and dry, it can last for thousands of years) but a means of making copies with near hundred percent fidelity.
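For the curious, a quick back-of-envelope sketch of what that figure implies. This is only a sanity check, assuming a standard 4.7 GB single-layer DVD and taking the “half a million DVDs on half a gram” claim at face value:

```python
# Back-of-envelope check of the DNA storage figure cited above.
# Both inputs are assumptions taken from the text, not measurements.
DVD_GB = 4.7              # assumed capacity of one single-layer DVD, in gigabytes
NUM_DVDS = 500_000        # "about half a million DVDs"
GRAMS_OF_DNA = 0.5        # "on half a gram" of DNA

total_pb = DVD_GB * NUM_DVDS / 1_000_000     # total data, converted to petabytes
density_pb_per_gram = total_pb / GRAMS_OF_DNA

print(f"~{total_pb:.2f} PB on {GRAMS_OF_DNA} g, i.e. ~{density_pb_per_gram:.1f} PB per gram")
```

Roughly two and a half petabytes on half a gram, or a few petabytes per gram, which is in the same ballpark as the densities reported for the 2013 UK experiments.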

DNA exceeds digital media for storage and copying and matches something like the Rosetta Disk for longevity; the problem is that the technology to make, store, and read such DNA texts is relatively high tech, and therefore vulnerable or unworkable in many catastrophic scenarios. It’s also much less readily searchable than digital media or even indexed paper texts.

Perhaps what we need to make sure a good bulk of the world’s knowledge survives a global catastrophe is a tiered system of preservation: the most essential technical and scientific information, including how to build and use other forms of information dissemination and storage, put on something like large Rosetta Disks; a second level of less essential but important and culturally significant knowledge stored on long-lasting paper; almost everything on digital media; and absolutely everything we could get our hands on stored in DNA.

All of these things would have to be done before the occurrence of any catastrophic event that plunged us backward into a new dark age. Once the lights went out we certainly shouldn’t expect, like Miller, that the Catholic Church would play the same role in preserving knowledge as it had in the past, for, as the quip attributed to Mark Twain has it, “history doesn’t repeat, it rhymes”. Indeed, should we create the kinds of information preservation mechanisms I outlined above, we would need an organization already dedicated to those mechanisms to manage those efforts, and in today’s world such an organization seems likely to be secular.

I can imagine a type of global organization whose members were, in their day-to-day reality, scattered across the differing organizations we have in place today for disseminating and storing knowledge: universities, major libraries, and scientific institutions such as the Royal Society. A small number of them would, in a pre-catastrophe world, run the types of information preservation efforts I have sketched out and, in the unlikely case that a global catastrophic event occurred, would work slowly and over generations to re-establish the world’s learning.

I have already suggested ways we might pay for this.

The great bulk of what we would need to re-establish, should a large chunk of the world’s knowledge be destroyed, would be technological and scientific. Essential knowledge would include things like agricultural techniques and science, the germ theory of disease and the techniques behind vaccination, how to build and maintain infrastructure such as sewage disposal and plumbing, energy systems including electrical grids, civil engineering, and the technology behind storing and sharing information. Above all, the scientific method would need to be put firmly back in place.

One of the problems I foresee, should an almost complete blackout occur, is gaps in knowledge domains that are essentially unpredictable beforehand. That is, it seems a safe bet to assume that not only will knowledge have been lost, but the knowledge of how to understand whatever knowledge remains might be lost as well.

It would certainly be an interesting interdisciplinary project to design the kinds of texts that would be necessary to re-establish some field of science should it almost completely disappear. To do so would probably require philosophers and historians of science, mathematicians, practitioners of the science itself, linguists, cultural anthropologists, and instructional designers who were adept at teaching complex ideas to those with minimum starting points in terms of literacy and numeracy.

Given that the source of a global catastrophe is perhaps most likely to come via our own scientific prowess, it’s quite sensible to ask whether we should be making all this effort to salvage our scientific and technological capacity in the first place. This relationship between our knowledge and our possible destruction is a question dealt with on a profound level in A Canticle for Leibowitz, and I’ll turn to it next time. Yet, as we know, man does not live on bread alone, so what of the preservation of less material knowledge, the art and wisdom that is the legacy of our global civilization?

Hopefully we would be able to preserve at least some of our human cultural legacy. Thinking about what we might save from our culture under severe constraints of number might be an interesting and perhaps even revealing parlor game, so I’ll end this post by inviting you to play.

If you could save only 10 books, 10 songs, and 10 artworks from all of human history that should make it through a catastrophic event which would you choose?

An Epicurean Christmas Letter To Transhumanists

Botticelli Spring- Primavera

Whatever little I retain from my Catholic upbringing, the short days of winter and the Christmas season always seem to turn my thoughts to spiritual matters and the search for deeper meanings. It may be a cliche, but if you let it hit you, the winter and the coming of the new year can’t help but remind you of endings, and sometimes even of the ultimate ending of death. After all, the whole world seems dead now, frozen like some morgue-corpse, although this one, if past is prelude, really will rise from the dead with the coming of spring.

Now, I would think death is the last thing most people think of, especially during what for many of us is such a busy, drowned in tinsel, time of the year. The whole subject is back there buried with the other detritus of life, such as how we get the food we’ll stuff ourselves with over the holidays, or the origin of the presents, from tinker-toys to diamond rings, that some of us will wrap up and hide under trees. It’s like the Jason Isbell song The Elephant that ends with the lines:

There’s one thing that’s real clear to me,
no one dies with dignity.
We just try to ignore the elephant somehow

This aversion to even thinking about death is perhaps the biggest unacknowledged obstacle for transhumanists, whose goal, when all is said and done, is to conquer death. It’s similar to the kind of aversion that lies behind our inability to tackle climate change. Who wants to think about something so dreadful?

There are at least some people who do want to think of something so dreadful, and not only that, they want to tie a bow around it and make it appear some wonderful present left under the tree by Kris Kringle. Maria Konovalenko recently panned a quite silly article in the New York Times by Daniel Callahan who was himself responding to the hyperbolic coverage of Google’s longevity initiative, Calico. Here’s Callahan questioning the push for extended longevity:

And exactly what are the potential social benefits? Is there any evidence that more old people will make special contributions now lacking with an average life expectancy close to 80? I am flattered, at my age, by the commonplace that the years bring us wisdom — but I have not noticed much of it in myself or my peers. If we weren’t especially wise earlier in life, we are not likely to be that way later.

Perhaps not, but neither could we have foreseen the benefits of raising life expectancy from 45 to near 80 between 1900 and today, such as The Rolling Stones. Callahan himself is still with us at 83, and I’m assuming, because he’s still here, that he wouldn’t rather be dead. And even if one did not care about pushing the healthy human lifespan out further for oneself, how could one not wish for such an opportunity for one’s children? Even 80 years is really too short for all of the life projects we might fulfill, barely long enough to feel at home in the “world in which we’re thrown”, quite literally, like the calf the poet Diane Ackerman helped deliver and described in her book Deep Play:

When it lifted its fluffy head and looked at me, its eyes held the absolute bewilderment of the newly born. A moment before it had enjoyed the even, black nowhere of the womb, and suddenly its world was full of color, movement, and noise. I have never seen anything so shocked to be alive. (141)

And if increased time to be here would likely be good for us as individuals, sufficient time to learn what we should learn and do what we should do, I agree as well with Vernor Vinge that greatly expanded human longevity would likely be an incomparable good for society, not least because it might refocus the mind on the longer term health of the societies and planet we call home.

That said, I do have some concern that my transhumanist friends are losing something by not acknowledging the death elephant, given that they are too busy trying to push it out of the room. The problem I see is that many transhumanists are, how to put this, old, and can’t afford cryonics or aren’t sufficiently convinced of its potential to put faith in it as a “backup”. Even when they do embrace being deep-frozen, many of their loved ones are unlikely to be so convinced, and, therefore, they will watch or have knowledge of their parents, siblings, spouses and friends experiencing a death that transhumanists understand to be nothing short of dark oblivion.

Lately it seems some have been trying to stare this oblivion in the face. Such, I take it, is the origin of classical composer David Lang’s haunting album Death Speaks. I do not think Lang’s personification of death in the ghostly voice of Shara Worden, or the presentation of the warm embrace of the grave as a sort of womb, should be considered “deathist”, even if death in his work is sometimes represented as final rest from the weariness of life, and anthropomorphized into a figure that loves even as she goes about her foul business of killing us.  Rather, I see the piece as merely the attempt to understand death through metaphor, which is sometimes all we have, and personally found the intimacy both chilling and thought provoking.

This is the oblivion of biological death, with which we are all too familiar, and which, given sufficient time for technological advancement, we may indeed escape, as we might someday even exit biology itself; but I suspect that even over the very, very long run, some sort of personal oblivion, regardless of how advanced our technology, is likely inevitable.

As I see it, given the nature of the universe and its continuous push towards entropy, we are unlikely to ever fully conquer death so much as phase change into new timescales and mechanisms of mortality. The reason we think otherwise is, I believe, our insensitivity to the depth of time. Even a 10,000 year old you is a mayfly compared to the age of our sun, let alone the past and future of the universe. What of “you” today would be left after 10,000, 100,000, a million, a billion years of survival? I would think not much, or at least not much more than would have survived on the smaller time scales over which you pass things on today: your genes, your works, your karma. How many such phase changes exist between us today and where the line through us and our descendants ends is anyone’s guess, but maintaining the core of a particular human personality throughout all of these transformations seems like a very long shot indeed.

Even if the core of ourselves could be kept in existence through these changes, what are the prospects that it would survive to the end of the universe, not to mention beyond? As Lawrence Krauss has pointed out, the physics seems to lean in the direction that, in a universe with a finite amount of energy which is infinitely expanding, no form of intelligence can engage in thinking for an infinite amount of time. Not even the most powerful form of intelligence we can imagine, as long as we use our current understanding of the laws of physics as boundary conditions, can truly be immortal.

On a more mundane level, even if a person could be fully replicated as software or non-biological hardware these systems too have their own versions of mortality (are you still running Windows ME and driving a Pinto?), and the preservation of a replicated person would require continuous activity to keep this person as software and/or non-biological hardware in a state of existence while somehow retaining the integrity of the self.

What all this adds up to is that if one adopts a strict atheism based on what science tells us is the nature of reality one is almost forced to come to terms with the prospect of personal oblivion at some point in the future, however far out that fate can be delayed. Which is not to say that reprieve should not be sought in the first place, only that we shouldn’t confuse the temporal expansion of human longevity, whether biological or through some other means, with the attainment of actual immortality. Breaking through current limits to human longevity would likely confront us with new limits we would still be faced with the need to overcome.

Some transhumanists who are pessimistic about the necessary breakthroughs to keep them in existence occurring in the short run, that is, within their own lifetimes, cling to a kind of “Quantum Zen”, as Giulio Prisco recently put it, where self and loved ones are resurrected in a kind of cosmic reboot in the far future. Speaking of the protagonist of Zoltan Istvan’s Transhumanist Wager, here’s how Prisco phrased it:

Like Jethro, I consider technological resurrection (Tipler, quantum weirdness, or whatever) as a possibility, and that is how I cope with my conviction that indefinite lifespans and post-biological life will not be developed in time for us, but later.

To my eyes at least, this seems less a case of dealing with the elephant in the room than zapping it with a completely speculative invisible-izing raygun. If the whole moral high ground of secularists over the religious is that the former tie themselves unflinchingly to the findings of empirical science, while the latter approach the world through the lens of unquestioning faith, then clinging to a new faith, even if it is a faith in the future wonders of science and technology, surrenders that high ground.

That is, we really should have doubts about any idea, whatever its use of scientific language, that isn’t falsifiable and is based on mere speculation (even the speculation of notable physicists) on future technological potential. Shouldn’t we want to live on the basis of what we can actually know through proof, right now?

How then, as a secular person, which I take most transhumanists to be, do you deal with the idea of personal oblivion? It might seem odd to turn to a Roman Epicurean natural philosopher and poet born a century before Christ to answer such a question, but Titus Lucretius Carus, usually just called Lucretius, offered us one way of dampening the fear of death while still holding a secular view of the world. At least that’s what Stephen Greenblatt found was the effect of Lucretius’ only major work- On the Nature of Things.

Greenblatt found his secondhand copy of On the Nature of Things in a college book bin, attracted as much by the summer-of-love suggestiveness of its 1960s cover as anything else. He cracked it open that summer and found a book that no doubt seemed to reflect directly the spirit of the times, beginning as it does with a prayer to the goddess of love, Venus, and a benediction to the power of sexual attraction over even Mars, the god of war.

It was also a book whose purpose, in the words of Lucretius, was “to free men’s minds from fear of the bonds religious scruples have imposed” (124). As Greenblatt describes it in his book The Swerve: How the World Became Modern, in Lucretius’ On the Nature of Things he found refuge from his own painful experience, not with death, but with the thought of it- and not even his own fear of oblivion, but his mother’s fear of the same. As Greenblatt writes of his mother:

It was death itself- simply ceasing to be- that terrified her. From as far back as I can remember, she brooded obsessively on the imminence of her end, invoking it again and again, especially at moments of parting. My life was full of operatic scenes of farewell. When she went with my father from Boston to New York for the weekend, when I went off to summer camp, even- when things were especially hard for her- when I left the house for school, she clung tightly to me, speaking of her fragility and of the distinct possibility that I would never see her again. If we walked somewhere together, she would frequently come to a halt, as if she were about to keel over. Sometimes she would show me a vein pulsing in her neck, and taking my finger, make me feel it for myself, the sign of her heart dangerously racing. (3)

The Swerve tells the history of On the Nature of Things: its loss after the collapse of Roman civilization, its nearly accidental preservation by Christian monks, its rediscovery in the early Renaissance, and its deep but all-but-forgotten impact on the sensibility of modernity, influencing figures as diverse as Shakespeare, Bruno, Galileo, More, Montesquieu and Jefferson. Yet Greenblatt’s interest in On the Nature of Things was born of a personal need to understand and dispel anxiety over death, so it’s best to look at Lucretius’ book itself to see how that might be done.

Lucretius was a secular thinker before there was even a name for such a thing. He wanted a naturalistic explanation of the world where the gods, if they existed, played no role either in the workings of nature or the affairs of mankind. The basis of everything, he held, was a fundamental level of particles he sometimes called “atoms”, and it was the non-predetermined interaction of these atoms that gave rise to everything around us, from stars and planets to animals and people.

From this basis Lucretius arrived at a picture of the universe that looked amazingly like our own. There is an evolution of the universe- stars and planets- from simpler elements and the evolution of life. Anything outside this world made of atoms is ultimately irrelevant to us. There is no need to placate the unseen gods or worry what they think of us.

Everything we experience for good and ill including the lucky accident of our own existence and our ultimate demise is from the “swerve” of underlying atoms. The Lucretian world makes no sharp division, as ancients and medievals often did, between the earthly world and the world of the sky above our heads.

The universe is finite in matter if infinite in size, and there are likely other worlds in it with intelligent life like our own. In the Copernican sense we are not at the center of things either as a species or individually. All we can experience, including ourselves, is made of the same banal substance of atoms going about their business of linking and unlinking with one another. And, above all, everything that belongs to this universe built of atoms is mortal, a fleeting pattern destined to fall apart.

On the Nature of Things is the strangest of hybrids. It is a poem, a scientific text and a self-help book all at the same time. Lucretius addresses his poem to Gaius Memmius, an otherwise unknown figure whom the author aims to free from the fear of the gods and death. Lucretius advises Memmius that death is nothing to fear, for it will be no different to us than all the time that passed before we were born. To rage against no longer existing through the entirety of the future makes no more sense than raging that we did not exist through the entirety of the past.

Think how the long past age of hoary time

Before our birth is nothing to us now

This in a mirror

Nature shows to us

Of what will be hereafter when we’re dead

Does this seem terrible, is this so sad?

Is it not less troubled than our daily sleep? (118)

______________________________

I know, I know, this is the coldest of cold comforts.

Yet, Lucretius was an Epicurean whose ultimate aim was that we be wise enough to keep in our view the simple pleasures of being alive, right now, in the moment in which we were lucky enough to be living. While reading On the Nature of Things I had in my ear the constant goading whisper- “Enjoy your life!” Lucretius’ fear was that we would waste our lives away in fear and anticipation of life, or its absence, in the future. That we would be of those:

Whose life was living death while yet you live

And see the light who spend the greater part

Of life in sleep still snoring while awake. (122-123)

It is not that Lucretius advises us to take up the pleasure seeking life of hedonism, but he urges us to not waste our preciously short time here with undue anxiety over things that are outside of our control or in our control to only a limited extent. On The Nature of Things admonishes us to start not from the position of fear or anger that the universe intends to eventually “kill” us, but from one of gratitude that out of a stream of randomly colliding atoms we were lucky enough to have been born in the first place.

This message in a bottle from an ancient Epicurean reminded me of the conclusion to the aforementioned Diane Ackerman’s Deep Play where she writes to imagined inhabitants of the far future that might let her live again and concludes in peaceful lament:

If that’s not possible, then I will have to make do with the playgrounds of mortality, and hope that at the end of my life I can say simply, wholeheartedly that it was grace enough to be born and live. (212)

Nothing that happens, or fails to happen, within our lifetimes or after them, can take away this joy that it was to live, and to know it.

Don’t Be Evil!

Panopticon Prisoner kneeling

However interesting a work it is, Eric Schmidt and Jared Cohen’s The New Digital Age is one of those books where if you come to it as a blank slate you’ll walk away from it with a very distorted chalk drawing of what the world actually looks like. Above all, you’ll walk away with the idea that intrusive and questionable surveillance was something those other guys did, the bad guys, not the American government, or US corporations, and certainly not Google, where Schmidt sits as executive chairman. Much ink is spilt explaining the egregious abuses of Internet freedom by countries like China and Iran, or what are, in the vast majority of cited cases, abuses by non-Western companies, but when it comes to the US itself or any of its corporations engaging in similar practices the book is eerily silent.

I may not know what a mote is, but I do know I am supposed to pluck my own out of my eye first. Only then can I get seriously down to the business of pointing out the other guy’s mote, or even helping him yank it out.

The New Digital Age (I’ll call it the NDA from here on to shorten things up) is full of the most reasonable and vanilla sort of advice on the need to balance our conflicting needs for security and privacy, but given its silence on the question of what the actual security/surveillance system in the US actually is, we’re left without the information needed to make such judgments. Let me put that silence in context.

The publication date for the NDA was April 23, 2013. The smoke screen of conspicuous-for-their-absence facts that are never discussed extends not only forward in time- something to be expected given the Edward Snowden revelations were a month out (May 20, 2013)- but, more disturbingly, backward in time as well. That is, Schmidt and Cohen couldn’t really be expected, legally if not morally, to discuss the revelations Snowden would later bring to light. Still, they should be expected to have addressed serious claims about the relationship between American technology companies and the US security state which were already public knowledge.

There had been extensive reporting on the intersection of technology and US government spying since at least 2010. These weren’t stories by Montana survivalists or persons camped out at Area 51, but by hard-hitting journalists with decades covering national security; namely, the work of Dana Priest and the Washington Post. If my memory and the book’s index serve me, neither Priest nor the Post is mentioned in the NDA.

Over a year before the NDA was published, Wired’s James Bamford had written a stunning piece on the NSA’s construction of its huge data center in Bluffdale, Utah, the goal of which was to suck up and store indefinitely the electronic records of all of us- which is the main thing we are arguing about. The main debate is over whether the government has a right to force private companies to provide all the digital data on their customers, which the government will then synthesize, organize and store. If you’re an American you’re lucky enough to have the government require a warrant to look at your records. (Although the court in charge of this- the FISA court- is not really known for turning such requests down.) If you’re unlucky enough to not be an American, then the government can peruse your records whenever the hell it wants to- thank you very much.

The NSA gets two pages devoted to it in the NDA’s 257 pages, both of which are about how open-minded and clever the agency is for hiring pimply-faced hackers. Say, what?

The more I think about what had to be the deliberate silence that runs throughout the whole of the NDA, the more infuriating it becomes, but at least now Google et al have gotten religion- or so I hope. On December 9, 2013 Google, Facebook, Apple, Microsoft, Twitter, Yahoo, LinkedIn, and AOL sent an open letter to the White House urging new restrictions on the government’s ability to seize, use and store information gleaned from them. This is a hopeful sign, but I am not sure we should be handing out Liberty Medals just yet.

For one, this move against the government was not inspired by civil libertarians or even robust reporting, but by threats to the very business model upon which the companies who signed the document are based. As The Economist puts it:

The entire business model of firms like Google, FaceBook and Twitter relies on harvesting intimate information provided by users and then selling that data on to advertisers.

It was private firms that persuaded people to give up lists of their friends, their most sensitive personal communications, and to constantly broadcast their location in real-time. If you had told even the nosiest spook in 1983, that within 30 years, much of the populace would be carrying around a tracking device that kept a permanent record of everywhere they had ever visited, he’d have thought you mad.

Let’s say you’re completely comfortable with the US government keeping such records on you. Perhaps the majority of Americans are unconcerned about this and think it the price of safety. But I doubt Americans would feel as blasé if it were the Chinese or the Russians or, heaven forbid, the French or any other government whose apparatchiks could go through their online personal and financial records at will. Therein lies the threat to American companies whose ultimate aspirations are global. Companies that are seen, rightly or wrongly, as a tool of the US government will lose the trust not mainly of US citizens but of international customers. An ensuing race to the exits and nationalization of the Internet would most likely be driven not by Iranian Mullahs or a testosterone-charged Vladimir Putin paddling around in a submersible like a Bond villain, but by Western Europeans and other democratic societies who were already uncomfortable with the idea that corporations should be trusted by individuals who had made themselves as transparent as the Utah sky.

The Germans, to take one example, were already freaked out by Google Street View, of all things, and managed to have the company abandon that service there. Revulsion at the Snowden revelations is perhaps the one thing that unites the otherwise bickering nationalities of the EU. TED, an event that began as a Silicon Valley lovefest, looked a lot different when it was held in Brussels in October, with Mikko Hypponen urging the secession of Europeans from the American Internet infrastructure and the creation of their own open-sourced platforms. It’s the fear of being thought of as downright Orwellian that seems most likely to have inspired Google’s move to abandon facial recognition on Google Glass.

With the Silicon Valley Letter we might think we’re in the home stretch of this struggle to re-establish the right to privacy, but the sad fact is this fight’s just beginning. As The Economist pointed out, none of the giants that provide the hardware and “plumbing” for the Internet, such as Cisco and AT&T, signed the open letter; they are less afraid, it seems, of losing customers because these are national brick-and-mortar companies in a way the eight signatories of the open letter to the Obama Administration are not. For civil libertarians to win this fight, Americans have to not only get those hardware companies on board, but compel the government to deconstruct a massive amount of spying infrastructure.

That is, we need to get the broader American public to care enough to exert sustained pressure on the government and some of the richest companies in the country to reverse course. Otherwise, the NSA facility at Bluffdale will continue sucking up its petabytes of overwhelmingly useless information like some obsessive Mormon genealogist until the mechanical leviathan lurches to obsolescence or is felled by the shepherd’s stone of better encryption.

The NSA facility that stands today in the Utah desert may offer a treasure trove for the historian of the far future, a kind of massive junkyard of collective memory filled with all our sense and non-sense. If we don’t get our act together, what it will also be is a historical monument to the failure of our more than two-centuries-old experiment with freedom.

Maps: How the Physical World Conquered the Virtual

World map 1600

If we look back to the early days when the Internet was first exploding into public consciousness, in the 1980’s, and even more so in the boom years of the 90’s, what we often find is a kind of utopian sentiment around this new form of “space”. It wasn’t only that a whole new plane of human interaction seemed to be unfolding into existence almost overnight, it was that “cyberspace” seemed poised to swallow the real world- a prospect which some viewed with hopeful anticipation and others with doom.

Things have not turned out that way.

William Gibson, the science fiction author who invented the term “cyberspace” in his classic Neuromancer, himself thinks that when people look back on the era when the Internet emerged, what will strike them as odd is how we could have confused ourselves into thinking that the virtual world and our workaday one were somehow distinct. Gibson characterizes this as the conquest of the real by the virtual. Yet one can see how what has happened is better thought of as the reverse by taking even a cursory glance at our early experience and understanding of cyberspace.

Think back, if you are old enough to remember, to when the online world was supposed to be one where a person could shed their necessarily limited real identity for a virtual one. There were plenty of anecdotes, not all of them insidious, of people faking their way through a contrived identity the unsuspecting thought was real: men coming across as women, women as men, the homely as the beautiful. Cyberspace seemed to level traditional categories and the limits of geography. A poor adolescent could hobnob with the rich and powerful. As long as one had an Internet connection, country of origin and geographical location seemed irrelevant.

It should not come as any surprise, then, that an early digital reality advocate such as Nicole Stenger could end her 1991 essay Mind Is a Leaking Rainbow with the utopian flourish:

According to Sartre, the atomic bomb was what humanity had found to commit collective suicide. It seems, by contrast, that cyberspace, though born of a war technology, opens up a space for collective restoration, and for peace. As screens are dissolving, our future can only take on a luminous dimension! / Welcome to the New World! (58)

Ah, if only.

Even utopian rhetoric was sometimes tempered with dystopian fears. Here is Mark Pesce, the inventor of VRML, in his 1997 essay Ignition:

The power over this realm has been given to you. You are weaving the fabric of perception in information perceptualized. You could – if you choose – turn our world into a final panopticon – a prison where all can be seen and heard and judged by a single jailer. Or you could aim for its inverse, an asylum run by the inmates. The esoteric promise of cyberspace is of a rule where you do as you will; this ontology – already present in the complex system known as Internet – stands a good chance of being passed along to its organ of perception.

The imagery of a “final panopticon” is doubtless too morbid for us at this current stage whatever the dark trends. What is clear though is that cyberspace is a dead metaphor for what the Internet has become- we need a new one. I think we could do worse than the metaphor of the map. For, what the online world has ended up being is less an alternative landscape than a series of cartographies by which we organize our relationship with the world outside of our computer screens, a development with both liberating and troubling consequences.

Maps have always been reflections of culture and power rather than reflections of reality. The fact that medieval maps in the West had Jerusalem at their centers expressed not a geographic but a spiritual truth, although few understood the difference. During the Age of Exploration what we might think of as realistic maps were really navigational aids for maritime trading states, a latent fact present in what the mapmakers found important to display and explain.

The number and detail of maps, along with the science of cartography, rose in tandem with the territorial anchoring of the nation-state. As James C. Scott points out in his Seeing Like a State, maps were one of the primary tools of the modern state, whose ambition was to make what it aimed to control “legible” and thus open to understanding by bureaucrats in far-off capitals and their administrations.

What all of this has to do with the fate of cyberspace, the world where we live today, is that the Internet, rather than offering us an alternative version of physical space and an escape hatch from its problems, has instead evolved into a tool of legibility. What is made legible in this case is us. Our own selves and the micro-worlds we inhabit have become legible to outsiders. Most of the time these outsiders are advertisers who target us based on our “profile”, but sometimes this quest to make individuals legible is by the state- not just in the form of standardized numbers and universal paperwork but in terms of the kinds of information a state could once obtain only by interrogation- the state’s first crack at making individuals legible.

A recent book by Google executive chairman Eric Schmidt, co-authored with foreign policy analyst Jared Cohen- The New Digital Age- is chock-full of examples of corporate advertisers’ and states’ new powers of legibility. They write:

The key advance ahead is personalization. You’ll be able to customize your devices- indeed much of the technology around you- to fit your needs, so that the environment reflects your preferences.

At your fingertips will be an entire world’s worth of digital content, constantly updated, ranked and categorized to help you find the music, movies, shows, books, magazines, blogs and art you like. (23)

Or as journalist Farhad Manjoo quotes Amit Singhal of Google:

I can imagine a world where I don’t even need to search. I am just somewhere outside at noon, and my search engine immediately recommends to me the nearby restaurants that I’d like because they serve spicy food.

There is a very good reason why I did not use the word “individuals” in place of “corporate advertisers” above- a question of intent. Whose interest does the use of such algorithms to make the individual legible ultimately serve? If it were my interest, search algorithms might tell me where I can get a free or even pirated copy of the music, video, etc. I will like so much. They might remind me of my debts, and how much I would save if I skipped dinner at the local restaurant and cooked my quesadillas at home. Google and all its great services, along with similar tech giants aiming to map the individual such as Facebook, aren’t really “free”. While using them I am renting myself to advertisers. All maps are ultimately political.

With the emergence of mobile technology and augmented reality, the physical world has wrestled the virtual one to the ground as Jacob did the angel. Virtual reality is now repurposed to ensconce all of us in our own customized micro-world. Like history? Then maybe your smartphone or Google Glasses will bring everything historical around you out into relief. Same if you like cupcakes and pastry, or strip clubs. These customized maps already existed in our own heads, but now we have the tools for our individualized cartography- the only price being constant advertisements.

There’s even a burgeoning movement among the avant garde, if there can still be said to be such a thing, against this kind of subjection of the individual to corporate-dictated algorithms and logic. Inspired by mid-20th century leftists such as Guy Debord, with his Society of the Spectacle, practitioners of what is called psychogeography are creating and using apps such as Drift, which leads the individual on unplanned walks around their own neighborhood, or Random GPS, which has your car’s navigation system remind you of the joys of getting lost.

My hope is that we will see other versions of these algorithm inverters and breakers, and not just when it comes to geography. How about similar things for book recommendations or music or even dating? We are creatures that sometimes like novelty and surprise, and part of the wonder of life is fortuna- its serendipitous accidents.
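As a purely illustrative sketch of what such an algorithm inverter for, say, book recommendations might look like- the function name and its parameters are my own invention, not the API of any actual app- one could deliberately dilute a ranked recommendation list with random draws from the long tail the ranking would otherwise hide:

```python
import random

def serendipity_shuffle(ranked_items, n=5, surprise_ratio=0.4, seed=None):
    """Return n suggestions: mostly top-ranked picks, deliberately
    diluted with random draws from outside the top ranks."""
    rng = random.Random(seed)
    n_surprise = int(n * surprise_ratio)   # how many "accidents" to inject
    n_top = n - n_surprise                 # how many safe, ranked picks to keep
    top = ranked_items[:n_top]
    tail = ranked_items[n_top:]            # the long tail a ranker would bury
    surprises = rng.sample(tail, min(n_surprise, len(tail)))
    picks = top + surprises
    rng.shuffle(picks)                     # hide which picks were ranked vs. random
    return picks
```

The point isn’t the dozen lines of code, which any engineer could write before lunch; it’s that serendipity now has to be engineered back in on purpose, because the default logic of a recommendation engine optimizes it away.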

Yet, I think these tools will most likely ramp up the social and conformist aspects of our nature. We shouldn’t think they will be limited to corporate persuaders. I can imagine “Catholic apps” that allow one to monitor one’s sins, and a whole host of funny and not so funny ways groups will use the new methods of making the individual legible to tie her even closer to the norms of the group.

A world where I am surrounded by a swirl of constant spam, or helpful and not so helpful suggestions, the minute I am connected, indeed, a barrage that never ends except when I am sleeping because I am always connected, may be annoying, but it isn’t all that scary. It’s when we put these legibility tools in the hands of the state that I get a little nervous.

As Schmidt and Cohen point out, one of the most advanced forms of such efforts at mapping the individual is an entity called Platforma Mexico, which is essentially a huge database that is able to identify any individual and tie them to their criminal record.

Housed in an underground bunker in the Secretariat of Public Security compound in Mexico City, this large database integrates intelligence, crime reports and real time data from surveillance cameras and other inputs from across the country. Specialized algorithms can extract patterns, project social graphs and monitor restive areas for violence and crime as well as for natural disasters and other emergencies.  (174)

The problem I have here is the blurring of the line between the methods used for domestic crime and those used for more existential threats, namely- war. Given that crime in the form of the drug war is an existential threat for Mexico this might make sense, but the same types of tools are being perfected by authoritarian states such as China, which is faced not with an existential threat but with growing pressures for reform, and also in what are supposed to be free societies like the United States where a non-existential threat in the form of terrorism- however already and potentially horrific- is met with similar efforts by the state to map individuals.

Schmidt and Cohen point out that there is a burgeoning trade among autocratic countries and their companies, which are busy perfecting the world’s best spyware. The Egyptian firm Orascom owns a 25 percent share of North Korea’s panopticonic sole Internet provider. (96) Western companies are in the game as well, with the British Gamma Group’s sale of spyware technology to Mubarak’s Egypt being just one recent example.

Yet, if corporations and the state are busy making us legible, there has also been a democratization of the capacity for such mapmaking, which is perhaps one of the reasons why states are finding governance so difficult. Real communities have become almost as easy to create as virtual ones, because all such communities are merely a matter of making and sustaining human relationships and understanding their maps.

Schmidt and Cohen imagine virtual governments in exile waiting in the wings to strike at the propitious moment. Political movements can be created off the shelf, supported by their own ready-made media entities, and the authors picture socially conscious celebrities and wealthy individuals running with this model in response to crises. Every side in a conflict can now have its own media wing whose primary goal is to shape and own the narrative. Even whole bureaucracies could be preserved from destruction by keeping their maps and functions in the cloud.

Sometimes virtual worlds remain limited to the way they affect the lives of individuals but are politically silent. A popular massively multiplayer game such as World of Warcraft may have as much influence on an individual’s life as other invisible kingdoms such as those of religion. An imagined online world becomes real the moment its map is taken as a prescription for the physical world. Are things like Hizb ut-Tahrir, which aims at the establishment of a pan-Islamic caliphate, or The League of the South, which promotes a second secession of American states, “real” political organizations or fictional worlds masking themselves as political movements? I suppose only time will tell.

Whatever the case, society seems torn between the mapmakers of the state who want to use the tools of the virtual world to impose order on the physical and an almost chaotic proliferation using the same tools by groups of all kinds creating communities seemingly out of thin air.

All this puts me in mind of nothing so much as China Mieville’s classic of New Weird fiction, City and the City. It’s a crime novel with the twist that it takes place in two cities- Beszel and Ul Qoma- that exist in the same physical space, superimposed on top of one another. No doubt Mieville was interested in telling a good story, and in getting us thinking about questions of borders and norms, but it’s a pretty good example of the mapping I’ve been talking about- even if it is an imagined one.

In City and the City an inhabitant of Beszel isn’t allowed to see or interact with what’s going on in Ul Qoma, and vice versa; otherwise they commit a crime called “breach”, and there’s a whole secretive bi-city agency called Breach that monitors and prosecutes those infractions. There’s even an imaginary (we are led to believe) third city, “Orciny”, that exists on top of Beszel and Ul Qoma and secretly controls the other two.

This idea of multiple identities- consumer, political- overlaying the same geographical space seems a perfect description of our current condition. What is missing here, though, is the sharp borders imposed by Breach. Such borders might appear more quickly, and in different countries, than one might have supposed, thanks to the recent revelations that the United States has been treating the Internet and its major American companies like satraps. Only now has Silicon Valley woken up to the fact that its close relationship with the American security state threatens its “transparency”-based business model with suicide. The re-imposition of state sovereignty over the Internet would mean a territorialization of the virtual world- a development that would truly constitute its conquest by the physical. To those possibilities I will turn next time…

Shedding Light on The Dark Enlightenment

Eye of Sauron

There has been some ink spilt lately at the IEET over a new movement that goes by the Tolkienesque name, I kid you not, of the Dark Enlightenment, its members also called neo-reactionaries. Khannea Suntzu has looked at the movement from the standpoint of American collapse, and David Brin within the context of a rising oligarchic neo-feudalism.

I have my own take on the neo-reactionary movement, somewhat distinct from that of either Suntzu or Brin, which I will get to below, but first a recap. Neo-reactionaries are a relatively new group of thinkers on the right who in general want to abandon the modern state, built as it is around the pursuit of social welfare, for lean-and-mean governance by business types who, in their view, know how to make the trains run on time. They are sick of having to “go begging” to the political class in order to get what they want done. They hope to cut out the middle-man. It’s obvious that oligarchs run the country, so why don’t we just be honest about it and give them the reins of power? We could even appoint a national CEO- if the country remains in existence- we could call him the king. Oh yeah, and on top of that we should abandon all this racial and sexual equality nonsense. We need to get back to the good old days when the color of a man’s skin and having a penis really meant something- put the “super” back in superior.

At first blush the views of those hoping to turn the lights out on enlightenment (anyone else choking on that oxymoron?) appear something like those of the kind of annoying cousin you try to avoid at family reunions. You know, the kind of well-off white guy who thinks the Civil Rights Movement was a communist plot, calls your wife a "slut" (their words, not mine), and thinks the real problem with America is that we give too much to people who don't have anything and don't lock up or deport enough people with skin any darker than Dove soap. Such people are the moral equivalent of flat-earthers, with no real need to take them seriously, though they can make for some pretty uncomfortable table conversation and are best avoided like a potato salad that has been out too long in the sun.

What distinguishes neo-reactionaries from run-of-the-mill dittoheads or military types with a taste for Doc Martens or short pants is that they tend to be latte-drinking Silicon Valley nerds who have some connection to both the tech and transhumanist communities.

That should get this audience’s attention.

To continue with the analogy from above: it's as if your cousin had a friend, let's just call him, totally at random here… Peter Thiel, who had a net worth of $1.5 billion and was into, among other things, working closely with organizations such as the NSA through a data-mining firm he owned (we'll call it Palantir; damned Frodo Baggins again!), and who serves as a deep pocket for groups like the Tea Party. Just to go all conspiracy on the thing, let's make your cousin's "friend" a sitting member of something we'll call the Bilderberg Group, a secretive cabal of the world's bigwigs who get together to talk about what they really would like done in the world. If that were the case, the last thing you should do is leave your cousin ranting to himself while you made off for another plate of Mrs. T's pierogies. You should take the maniac seriously, because he might just be sitting on enough cash to make his atavistic dreams come true and put you at risk of sliding off a flattened earth.

All this might put me at risk of being accused of lobbing one too many ad hominems, so let me put some meat on the bones of the neo-reactionaries. The Super Friends, or I guess it should be the Legion of Doom, of neo-reaction can be found on the website Radish, where the heroes of the dark enlightenment are laid out in the format of Dungeons & Dragons or Pokémon cards (I can't make this stuff up). Let's just start with the most unfunny and disturbing part of the movement: its open racism and obsession with the 19th-century pseudo-science of dysgenics.

Here's James Donald, who from his card I take to be a dwarf, or perhaps an elf (I'm not sure what the difference is), who likes to fly on a winged tauntaun like the ones from The Empire Strikes Back.

To thrive, blacks need simpler, harsher laws, more vigorously enforced, than whites.  The average black cannot handle the freedom that the average white can handle. He is apt to destroy himself.  Most middle class blacks had fathers who were apt to frequently hit them hard with a fist or stick or a belt, because lesser discipline makes it hard for blacks to grow up middle class.  In the days of Jim Crow, it was a lot easier for blacks to grow up middle class.

Wow, and I thought a country where one quarter of African American children will have experienced at least one of their parents behind bars, thousands of whom will die in prison for nonviolent offenses, was already too harsh. I guess I'm a patsy.

Non-whites aren't the only ones who come in for derision from the neo-reactionaries, a fact that can be summed up by the title of a post by one of their minions, Alfred W. Clark, who writes the blog Occam's Razor: "Are Women Who Tan Sluts?" There's no need to say anything more to realize poor William of Occam is rolling in his grave.

Beyond this neo-Nazism-for-nerds quality, neo-reactionaries can make one chuckle, especially when it comes to "policy innovations" such as bringing back kings.

Here's the modern-day Beowulf, Mencius Moldbug:

What is England’s problem?  What is the West’s problem?  In my jaundiced, reactionary mind, the entire problem can be summed up in two words – chronic kinglessness.  The old machine is missing a part.  In fact, it’s a testament to the machine’s quality that it functioned so long, and so well, without that part.

Yeah, that’s the problem.

Speaking of atavists, one thing that has always confused me about the Tea Party is that I have never been sure which imaginary "golden age" they want us to return to. Is it before desegregation? Before FDR? Prior to the creation of the Federal Reserve in 1913? Or maybe back to the antebellum South? Or back to the Articles of Confederation? Well, at least the neo-reactionaries know where they want to go: back before the American Revolution. Obviously, since this whole democracy thing hasn't worked out, we should bring back the kings, which makes me wonder if these guys hold mourning parties on Bastille Day.

Okay, so the dark voices behind neo-reaction are a bunch of racist, sexist nerds who have a passion for kings and like to be presented as characters on D&D cards. They have some potentially deep pockets, but other than that troubling fact, why should we give them more than a few seconds of serious thought?

Now I need to exchange my satirical cap for my serious one, for the issues are indeed serious. I think understanding neo-reaction is important for two reasons: they are symptomatic of deeper challenges and changes occurring politically, and they have appeared as a response to, and on the cusp of, a change in our relationship to Silicon Valley, a region that has been the fulcrum of technological, economic and political transformation over the past generation.

Neo-reaction shouldn't be viewed in a vacuum. It has appeared at a time when the political and economic order we have had since at least the end of the Second World War, which combines representative democracy, capitalist economics and some form of state-supported social welfare (social democracy), is showing signs of its age.

If this were just happening in the United States, whose 224-year-old political system emerged before almost everything we take to be modern (to pick a list at random: universal literacy, industrialization, railroads, telephones, human flight, the theory of evolution, psychoanalysis, quantum mechanics, genetics, "the Bomb," television, computers, the Internet and mobile technology), then we might be able, as some have, to blame our troubles on an antiquated political system. But the creaking is much more widespread.

We have the upsurge in popularity of the right in Europe, such as that seen in France with its National Front. Secessionist movements are gaining traction in the UK. The right, in the form of Hindu nationalism under a particularly obnoxious figure, Narendra Modi, is poised to win the Indian elections. There is the implosion of states in the Middle East, such as Syria, and revolution and counter-revolution in Egypt. There are rising nationalist tensions in East Asia.

All this is coming against the backdrop of rising inequality. The markets are soaring, no doubt pushed up by the flood of money provided by the Federal Reserve, yet the economy is merely grinding along. Easy money is the de facto cure for our deflationary funk, pursued by all the world's major central banks: in the US, the European Union and now, especially, Japan.

The far left has long abandoned the idea that 21st-century capitalism is a workable system, the differences being over what the alternative should be, whether old-school communism such as that of Slavoj Žižek or the anarchism of someone like David Graeber. Leftists are one thing, the Pope is another, and you know a system is in trouble when the most conservative institution in history wants to change the status quo, as Pope Francis suggested when he recently railed against the inhumanity of capitalism and urged its transformation.

What in the world is going on?

If your house starts leaning, there's something wrong with the foundation, so I think we need to look at the roots of our current problems by going back to the gestation of our system: that balance of representative democracy, capitalism and social democracy I mentioned earlier, whose roots can be found not in the 20th century but in the century prior.

The historical period that is probably most relevant for getting a handle on today's neo-reactionaries is the late 19th century, when a rage for similar ideas infected Europe. There was Nietzsche in Germany and Dostoevsky in Russia (two reactionaries I still can't get myself to dislike, both being so brilliant and tragic). There was Maurras in France and Pareto in Italy. The left, of course, also got a shot of B-12 here, with labor unions, socialist political parties and seriously left-wing intellectuals finally gaining traction. Marxism, whose origins lay earlier in the century, was coming into its own as a political force. You had writers of socialist fiction such as Edward Bellamy and Jack London surging in popularity. Anarchists were making their mark, though unfortunately largely through high-profile assassinations and bomb-throwing. A crisis was building even before the First World War, whose centenary we will mark next year.

Here's the historian J. M. Roberts, from his Europe 1880-1945, on the state of politics on the eve, not in the aftermath, of the outbreak of the First World War:

Liberalism had institutionalized the pursuit of happiness, yet its own institutions seemed to stand in the way of achieving the goal; liberal’s ideas could, it seemed, lead liberalism to turn on itself.

…the practical shortcomings of democracy contributed to a wave of anti-parliamentarianism. Representative institutions had for nearly a century been the shibboleth of liberalism. An Italian sociologist now stigmatized them ‘as the greatest superstition of modern times.’ There was violent criticism of them, both practical and theoretical. Not surprisingly, this went furthest in constitutional states where parliamentary institutions were the formal framework of power but did not represent social realities. Even where parliaments (as in France or Great Britain) had already shown they possessed real power, they were blamed for representing the wrong people and for being hypocritical shams covering self-interest. Professional politicians- a creation of the nineteenth century- were inevitably, it was said, out of touch with real needs.

Sounds familiar, doesn’t it?

Liberalism, by which Roberts means a combination of representative government and laissez-faire capitalism (including free trade), was struggling. Capitalism had obviously brought wealth and innovation, but also enormous instability and tension. The economy had a tendency to rocket towards the stars only to careen earthward and crash, leaving armies of the unemployed. The small-scale capitalism of earlier periods was replaced by continent-straddling bureaucratic corporations. The representative system, which had been based on fleeting mobilization during elections or crises, had yet to adjust to a situation where mass mobilization through the press, unions, or political groups was permanent and unrelenting.

The First World War almost killed liberalism. The Russian Revolution, the Great Depression, the rise of fascism and the Second World War were busy putting nails in its coffin when the adoption of social democracy and Allied victory in the war revived the corpse. Almost the entirety of the 20th century was a fight over whether the West's hybrid system, which kept capitalism and representative democracy but tamed the former, could outperform state communism, and it did.

In the latter half of the 20th century the left got down to the business of extending the rights revolution to marginalized groups, while the right fought to dismantle many of the restrictions that had been put on the capitalist system during its time of crisis. This modus vivendi between left and right was all well and good while the economy was growing and while the extension of legal rights, rather than social rights, for marginalized groups was the primary issue, but by the early 21st century both of these thrusts were spent.

Not only was the right's economic model challenged by the 2008 financial crisis, it had nowhere left to go in terms of realizing its dreams of minimal government and dismantling the welfare state without facing almost impossible electoral hurdles. The major government costs in the US and Europe were pensions and medical care for the elderly, programs that were virtually untouchable. The left, too, was realizing that abstract legal rights were not enough. Did it matter that the US had an African American president when one quarter of black children had experienced a parent in prison, or when a heavily African American city such as Philadelphia had a child poverty rate of 40%? Addressing such inequities was not an easy matter for the left, let alone the extreme changes that would be necessary to offset rising inequality.

Thus, ironically, the problem for both the right and the left is the same one: that governments today are too weak. The right needs an at least temporarily strong government to effect the dismantling of the state, whereas the left needs a strong government not merely to respond to the grinding conditions of the economic "recovery," but to overturn previous policies, put in new protections and find some alternative to the current political and economic order. Dark enlightenment types and progressives are confronting the same frustration while having diametrically opposed goals. It is not so much that Washington is too powerful as that the power it has is embedded in a system which, as Mark Leibovich portrays brilliantly, is feckless and corrupt.

Neo-reactionaries tend to see this as a product of too much democracy, whereas progressives will counter that there is not enough. Here’s one of the princes of darkness himself, Nick Land:

Where the progressive enlightenment sees political ideals, the dark enlightenment sees appetites. It accepts that governments are made out of people, and that they will eat well. Setting its expectations as low as reasonably possible, it seeks only to spare civilization from frenzied, ruinous, gluttonous debauch.

Yet, as the experience of authoritarian societies such as Libya, Egypt and Syria shows (and even that authoritarian wonderchild China is feeling the heat), democratic societies are not the only ones undergoing acute stresses. The universal nature of the crisis of governance is brought home in a recent book by Moisés Naím. In The End of Power, Naím lays out how every large structure in society (armies, corporations, churches, unions) is seeing its power decline and being challenged by small and nimble upstarts.

States are left hobbled by smallish political parties and groups that act as spoilers, preventing governments from getting things done. Armies with budgets in the hundreds of billions of dollars are stymied by insurgents with IEDs made from garage-door openers and cell phones. Long-lived religious institutions, most notably the Catholic Church, are losing parishioners to grassroots preachers, while massive corporations are challenged by Davids that come out of nowhere to upend their business models with a simple stone.

Naím has a theory for why this is happening: we are in the midst of what he calls The More, The Mobility and The Mentality Revolutions. Only the last of these is important for my purposes. Ruling elites are faced today with the unprecedented reality that most of their lessers can read. Not only that, the communications revolution which has fed the wealth of some of these elites has significantly lowered the barriers to political organization and speech. Any Tom, Dick and now Harriet can throw up a website and start organizing for or against some cause. The result has been a sort of Cambrian explosion of political organization, and just as in any acceleration of evolution, you're likely to get some pretty strange mutants. And so here we are.

Some on the left are urging us to adjust our progressive politics to the new distributed nature of power. The writer Steven Johnson, in his recent Future Perfect: The Case for Progress in a Networked Age, calls collaborative efforts by small groups "peer-to-peer networks," and in them he sees a glimpse of our political past (the participatory politics of the ancient Greek polis and the late medieval trading states) becoming our political future. Is this too "reactionary"?

Peer-to-peer networks tend to bring local information back into view. The fact that traditional centralized loci of power, such as the federal government and the national and international media, are often found lacking when it comes to local knowledge is a problem of scale. As Jane Jacobs pointed out, government policies are often best when crafted and implemented at the local level, where differences and details can be seen.

Wikipedia is a good example of Johnson's peer-to-peer model, as is Kickstarter. In government we are seeing the spread of participatory budgeting, in which the local public is allowed to make budgetary decisions. There is also a relatively new concept known as "liquid democracy" that not only enables the creation of legislation through open-source platforms but allows people to "trade" their votes, in the hope that citizens can avoid information overload by targeting their votes to the areas they care most about and, presumably for that reason, know best.

So far, peer-to-peer networks have been most successful at revolt: the Tea Party is peer-to-peer, as was Occupy Wall Street. Peer-to-peer politics was seen in the MoveOn movement and dealt defeat to recent legislation such as SOPA. Authoritarian regimes in the Middle East were toppled by crowdsourced gatherings in the street.

More recently than Johnson's book, there is New York's new progressive mayor Bill de Blasio's experiment with participatory politics, his Talking Transition tent on Canal Street. There, according to NPR, New Yorkers can:

…talk about what they want the next mayor to do. They can make videos, post videos and enter their concerns on 48 iPad terminals. There are concerts, panels on everything from parks to education. And they can even buy coffee and beer.

Democracy, coffee and beer- three of my favorite things!

On the one hand I love this stuff, but me being me, I can't help but have some suspicions, and this relates, I think, to the second issue about neo-reactionaries I raised above: namely, that they reflect something going on in our relationship to Silicon Valley, a change in public perception of tech culture and its tools from hero and wonderworker to villain and illusionist.

As I have pointed out elsewhere, the idea that technology offered an alternative to the lumbering bureaucracies of state and corporation is embedded deep in the foundation myth of Silicon Valley. The use of Moore's Law as a bridge to personalized communication technology was supposed to liberate us from the apparatchiks of the state and the corporation. Remember Apple's "1984" commercial?

It hasn't quite turned out that way. Yes, we are in a condition of hyper economic and political competition largely engendered by technology, but it's not at all clear that we as citizens are the ones who have gained, rather than the "power centers" that use these tools against one another and sometimes even against us. Can anyone spell NSA?

We also went from innovation, and thus potential wealth, being driven by guys in their garages to, on the American scene, five giants that largely own and control all of virtual space: Google, Facebook, Amazon, Apple and Microsoft, with upstarts such as Instagram being slurped up, like Jonah by the whale, the minute they show potential for growth.

Rather than resulting in a telecommuting utopia with all of us working five hours a day from the comfort of our digitally connected homes, technology has led to a world where we are always "at work," wages have not moved since the 1970s and the spectre of technological unemployment looms. Mainstream journalists such as John Micklethwait of The Economist are starting to see a growing backlash against Silicon Valley as the public becomes increasingly estranged from digerati who have not merely failed to deliver on their utopian promises, but are starving the government of revenue as they hide their cash in tax havens, all the while cosying up to the national security state.

Neo-reactionaries are among the first Silicon Valleians to see this backlash building, hence their only half-joking efforts to retreat to artificial islands or into outer space. Here is Balaji Srinivasan, whose speech was transcribed by one of the dark illuminati who goes by the moniker Nydwracu:

The backlash is beginning. More jobs predicted for machines, not people; job automation is a future unemployment crisis looming. Imprisoned by innovation as tech wealth explodes, Silicon Valley, poverty spikes… they are basically going to try to blame the economy on Silicon Valley, and say that it is iPhone and Google that done did it, not the bailouts and the bankruptcies and the bombings, and this is something which we need to identify as false and we need to actively repudiate it.

Srinivasan would have at least some things to say in defense of Silicon Valley: elites there have certainly been socially conscious about global issues. Where I differ is on their proposed solutions. As I have written elsewhere, Valley bigwigs such as Peter Diamandis think the world's problems can be solved by letting the technology train keep on rolling and by winners such as himself devoting their money and genius to philanthropy. This is unarguably a good thing; what I doubt, however, is that such techno-philanthropy can actually carry the load now borne by governments while those made super-rich by capitalism's creative destruction flee the taxman, leaving what's left of government to be funded on the backs of a shrinking middle class.

As I have also written elsewhere, the original generation of Silicon Valley innovators is acutely aware of our government's incapacity to do what states have always done: preserve the past, protect the present and invest in the future. This is the whole spirit behind digerati saint Stewart Brand's Long Now Foundation, in which I find very much to admire. The neo-reactionaries too have latched onto this short-term horizon of ours, only where Brand saw our time paralysis in a host of contemporary phenomena, neo-reactionaries think there is one culprit: democracy. Here again is the dark prince Nick Land:

Civilization, as a process, is indistinguishable from diminishing time-preference (or declining concern for the present in comparison to the future). Democracy, which both in theory and evident historical fact accentuates time-preference to the point of convulsive feeding-frenzy, is thus as close to a precise negation of civilization as anything could be, short of instantaneous social collapse into murderous barbarism or zombie apocalypse (which it eventually leads to). As the democratic virus burns through society, painstakingly accumulated habits and attitudes of forward-thinking, prudential, human and industrial investment, are replaced by a sterile, orgiastic consumerism, financial incontinence, and a ‘reality television’ political circus. Tomorrow might belong to the other team, so it’s best to eat it all now.

The problem here is not that Land has dragged this interpretation of the effect of democracy straight out of Plato's Republic (which he has), or that it's a "kid who eats the marshmallow leads to zombie apocalypse" reading of much more complex political relationships (which it is as well). Rather, it's that there is no real evidence that it is true, and indeed the reason it's not true might give those on the radical left who would like to abandon the US Constitution for something more modern, and who see nothing special in its antiquity, reason for pause.

The study, of course, needs to be replicated, but a paper just out by Hal Hershfield, Min Bang and Elke Weber at New York University seems to suggest that the way to get a country to pay serious attention to long-term investments is not to give it a deep future but a deep past, and not just any past: the continuity of its current political system.

As Hershfield states it:

Our thinking is that the countries who have a longer past are better able see further forward into the future and think about extending the time period that they’ve already been around into the distant future. And that might make them care a bit more about how environmental outcomes are going to play out down the line.

And from further commentary on that segment:

Hershfield is not using the historical age of the country, but when it got started in its present form, when its current form of government got started. So he’s saying the U.S. got started in the year 1776. He’s saying China started in the year 1949.

Now, China, of course, though, is thousands of years old in historical terms, but Hershfield is using the political birth of the country as the starting point for his analysis. Now, this is potentially problematic, because for some countries like China, there’s a very big disparity in the historical age and when the current form of government got started. But Hershfield finds even when you eliminate those countries from the equation, there’s still a strong connection between the age of the country and its willingness to invest in environmental issues.

The very existence of strong environmental movements and regulation in democracies should be enough to disprove Land's thesis about popular government's "convulsive feeding-frenzy." Democracies should have stripped their environments bare like a dog with a Thanksgiving turkey bone. Instead the opposite has happened. Neo-reactionaries might respond with something about the large hunting preserves kept by kings, but the idea that kings were better stewards of the environment and of human beings (I refuse to call them "capital") because they owned them as personal property can be countered with two words and a number: King Leopold II.

Yet we progressives need to be aware of the benefits of political continuity. The right, with their Tea Party and their powdered wigs, have seized American history. They are selling a revolutionary dismantling of the state and the deconstruction of hard-fought-for legacies in the name of returning to "purity," but this history is ours as much as theirs, even if our version of it tends to be as honest about the villains as the heroes. Neo-reactionaries are people who have woken up to the reality that the conservative return to "foundations" has no future. All that is left for them is to sit around daydreaming that the American Revolution and all it helped spark never happened, and that kings still sat on their bedecked thrones.

We Are All Dorian Gray Now

Echo and Narcissus

If someone on the street stopped and asked you what you thought was the meaning behind Oscar Wilde's novel The Picture of Dorian Gray, you'd probably blurt out, like the rest of us, that it had something to do with a frightening portrait and the dangers of pursuing immortality, and, if you remembered vague details about Wilde's life, you might bring up the fact that it must have gotten him into a lot of trouble on account of its homoeroticism.

This, at least, is what I thought before I sat down and actually read the novel. What I found there instead was a remarkable reflection on human consciousness, all the more remarkable because in 1890, when the novel was published, we really had no idea how the mind worked. Wilde arrived at his viewpoint from introspection and observation alone. His reflections on thought and emotion are all the more important because many of the ways his protagonist becomes unhinged from the people around him, and thus cursed to a fateful demise, are a kind of amplified intimation of what social media is doing to us today. Let me explain.

The science writer and poet Diane Ackerman, in her book An Alchemy of Mind, lists three ways evolution has "played tricks" on us: 1) we have brains that can imagine states of perfection they can't achieve; 2) we have brains that compare our insides to other people's outsides; 3) we have brains desperate to stay alive, yet we are finite beings that perish (p. 6). Although the novel is not discussed by Ackerman, The Picture of Dorian Gray explores all three of these "tricks," though the third, which is perhaps most often thought to be the main subject of the novel, is in fact the least important in terms of the story and its meaning.

The whole tragedy of Dorian Gray begins, of course, when Dorian, looking upon his portrait by Basil Hallward, dreams that he could exchange his own temporal youth for the youth frozen in time by the painter:

… if only it were the other way! If it were I who was to be always young, and the picture that was to grow old! For that-for that- I would give everything! Yes, there is nothing in the world I would not give! I would give my soul for that! (25-26)

What drives Dorian to make his "deal with the devil" is his ability to imagine something that cannot be: his own beauty frozen in time, like the dead portrait, while he himself remains living. Of course, the mark of maturity (or sanity) is being able to tell the difference between our dreams and reality and learning somehow to accept the gap. What this devil's wager brings Dorian is a failure to mature, though this will not spare him the other emotional manifestation of aging that emerges if one fails to grow with experience: degeneration and corruption.

If we do ever master the underlying mechanisms of aging, especially cognitive aging, this question of maturation and development seems sure to come up. We may all wish we were as good-looking as we were at 19, or as deft on the basketball court, but few of us would hope for the return of the emotional rollercoaster of adolescence or the lack of foresight we had as children. Many of these emotional changes from youth to middle age and beyond are based upon our accumulated experience, but they also follow what seems to be a natural maturing process in the brain. What we lose in reflex time and rapid memory recall we more than make up for in our ability to see the big picture and exercise emotional control. Aging, in this sense, can indeed be said to be healthy.

These changes are certainly a factor when it comes to the relationship between aging and crime. Not only do rates of crime track chronological age, with the majority of crimes committed by young men in their late teens and early 20s, but rates of recidivism for those convicted and imprisoned plummet by more than half for 59-to-69-year-olds compared with 18-to-24-year-olds.

What does all of this have to do with Dorian Gray? More than you might think. In the novel it is not merely Dorian's physical development that is arrested, but his emotional development as well. The eternal youth goes to parties, visits opium dens and brothels, and does other things that remain unnamed. He does not work, does not marry, does not raise children. He has no real connection with other human beings besides perhaps Lord Henry Wotton, who has introduced him to the philosophy of hedonism, though even there the connection is tenuous. Dorian Gray gains knowledge but does not learn, neither from his own pain nor from the pain he inflicts on others. Had he been arrested, he would make the perfect recidivist.

Yet it is in the second trick Ackerman thinks evolution has played on us, "having a brain that compares our insides to other people's outsides," that the real depth of The Picture of Dorian Gray can be found, and its most important message in light of our own social media age. Indeed, the whole novel might be thought of as a meditation on that simple but universal trap of our own nature. But first I must discuss the question of empathy.

Dorian manages to ruin almost every life he touches. His first romance, with the poor actress Sibyl Vane, ends in her suicide; his blackmail of Alan Campbell drives that man to suicide as well. He will murder Basil Hallward, the person who perhaps loved him most in the world. There are countless other named and unnamed victims of Dorian's influence.

What makes Dorian heartless is that everyone around him becomes a mere surface, a plaything of his own mind. It was his cruelty to Sibyl Vane that led to her suicide. Lord Henry, who plays the Mephistopheles to Dorian's Faust, explains the actress's death to him this way:

No she will never come to life. She has played her final part. But you must think of that lonely death in the tawdry dressing-room simply as a strange fragment of some Jacobean tragedy, as a wonderful scene from Webster, or Ford or Cyril Tourneur. The girl never really lived so she never really died. To you at least she was always a dream a phantom… (103)

To which Dorian eventually responds regarding the girl’s death:

It has been a marvelous experience! That is all. I wonder if life has still in store for me anything as marvelous. (103)

We might reel in revulsion at Dorian's response to the death of another human being, at his stone-heartedness in light of the pain the girl's death has brought to her loved ones, a death for which he bears a great deal of responsibility. But his lack of empathy holds up a mirror to ourselves, one we might need to look into more closely now that our digitally mediated lives allow us to so much more easily turn real people into the mere characters of our own life's play.

There was the tragic case of Rebecca Sedwick, a 12-year-old girl who killed herself by jumping from an industrial platform after repeated bullying from her peers, including multiple incitements by these young women for her to kill herself. The response of one of the bullies to the news of Rebecca Sedwick's death was not horrified remorse, but this message, which she posted on Facebook:

Yes I know I bullied Rebecca and she killed herself, but I don’t give a f—k.

Two of the bullying girls were arrested and charged with felony harassment, though the charges have since been dropped.

In addition to this kind of direct cruelty there is another form of digitally mediated cruelty in which people are willing spectators, and sometimes accomplices, of heinous acts: snuff films, pornography in which some participants are unaware they are being filmed, and especially the epidemic of child pornography. Deliberate cruelty to animals fills out this digitized house of horrors as well.

This cruel pixelization of human beings (and animals) can strike us in seemingly innocent forms as well, as the media critic Douglas Rushkoff wrote of our infatuation with reality TV in his Present Shock:

We readily accept the humiliation of a contestant at the American Idol audition, such as William Hung, a Chinese American boy who revived the song “She Bangs”, even when he doesn’t know we are laughing at him. The more of this media we enjoy the more spectacularly cruel it must be to excite our attention…. (pp. 36-37)

On top of this are all the kinds of daily cruelty we engage in with our simulations: violent video games, or hit shows like “The Walking Dead”.

Is all of this digital cruelty having an effect on our ability to feel empathy? It appears so. Researchers are starting to pick up on what they are calling the “empathy deficit”, a decline in the level of self-reported empathetic concern over the last generation. As Keith O’Brien wrote in The Boston Globe:

Starting around a decade ago, scores in two key areas of empathy begin to tumble downward. According to the analysis, perspective-taking, often known as cognitive empathy — that is, the ability to think about how someone else might feel — is declining. But even more troubling, Konrath noted, is the drop-off the researchers have charted in empathic concern, often known as emotional empathy. This is the ability to exhibit an emotional response to someone else’s distress.

Between 1979 and 2009, according to the new research, empathic concern dropped 48 percent.

The worst case scenario I can think of is that we are witnessing an early stage of a shift away from the non-violent society we, despite our wars, created over the 19th and 20th centuries, a development Steven Pinker so excellently laid out in his The Better Angels of Our Nature.

Pinker traced our development of a society incredibly less violent, and less accepting of violence, than any of its predecessors. We appear to have a much more developed sense of empathy than any society in the human past. One of the major drivers of this transition, from a society where cruelty was ubiquitous to one where it invokes feelings of revulsion and horror, Pinker believes, was the rise of reading, of universal literacy, and especially the reading of novels.

Reading is a technology for perspective-taking. When someone else’s thoughts are in your head, you are observing the world from that person’s vantage point. Not only are you taking in sights and sounds that you could not experience firsthand, but you have stepped inside that person’s mind and are temporarily sharing his attitudes and reactions.

Stepping into someone else’s vantage point reminds you that the other fellow has a first-person present-tense, ongoing stream of consciousness that is very much like your own but not the same as your own. It’s not a very big leap to suppose that the habit of reading other people’s words could put one in the habit of entering other people’s minds, including their pleasures and pains.

What visual/digital media fail to give us, what still allows even bad novels to give great films a run for their money, is any real grasp of characters’ internal lives. The more cut off from others’ internal lives we become, the more they become mere surfaces against which we play the “film” of our own lives, the only internal life we will ever know with certainty to be real.

This failure to recognize others’ internal lives is like the flip side of “having brains that compare our insides to other people’s outsides”, to which I will now, after that long digression, return. Perhaps all of the characters in The Picture of Dorian Grey mistake the surface for the truth of things. This is the philosophy of Lord Wotton, the origin of Dorian’s dark pact, and it even infects the most emotionally attuned character in the novel, Basil.

In one scene, Basil comes to warn Dorian about his growing public infamy. It is a warning that will eventually lead to Basil’s murder at Dorian’s hands. The painter cannot believe the stories swirling around about the unaged man who once sat in his studio and was the muse of his art. Basil gives this explanation for his disbelief:

Mind you, I don’t believe these rumors at all. At least I can’t believe them when I see you. Sin is a thing that writes itself across a man’s face. It cannot be concealed. People talk sometimes about secret vices. There are no such things. If a wretched man has a vice, it shows itself in the lines of his mouth, the droop of his eyelids, the moulding of his hands even. (149-150)

Basil believes Dorian is good because his outside is beautiful; he confuses Dorian’s appearance with virtue. This mistaking of outside for inside is the curse of the Facebook generation. As Stephen Marche explained in his article Is Facebook Making Us Lonely?, we have put ourselves in a situation where we constantly confuse people’s lives with their social media profiles and updates, where we see only the photos of the beautiful kids making crafts, not the temper tantrums; the wonderful action-packed vacation, not the three days in the bathroom with the runs; the sexy new car or gorgeous home, not the crippling car and home payments.

In other words, we assume people to be happy because their digital personas are happy, but we have no idea of the details. We should be able to guess that, just like ours, the internal lives of everyone else are “messy”, but this takes the kind of empathy we seem to be losing. We are stuck with a surface of a surface and are apt to confuse it with the real. To top it all off, we then have to smooth out our own messiness for public consumption, or worse, fall into a pit where we try to dig out of our own complexity so that we, like the personas of others we mistake for real people, can be “happy”.

Here’s Marche:

Not only must we contend with the social bounty of others; we must foster the appearance of our own social bounty. Being happy all the time, pretending to be happy, actually attempting to be happy—it’s exhausting.

So what of the last of Ackerman’s tricks of evolution, that “we have brains desperate to stay alive, yet we are finite beings that perish”? Although it might seem surprising, this desire to be immortal is the element least explored in The Picture of Dorian Grey.

The reason is that Dorian is not so much interested in immortal life as in eternal youth, or better, the eternal beauty that is natural for some in their youth. A deal with the devil to live forever, but be homely, is not one the vain Dorian would have made.

Dorian is certainly afraid of dying; he is terrified and sickened by the fear that Sibyl’s brother will kill him. But in the short space between the appearance of this fear and Dorian’s actual death there is no room to explore Wilde’s thoughts regarding death. Dorian does not live forever; he dies a middle-aged man.

A new radio series, The Confessions of Dorian Grey, does explore this theme barely developed by Wilde, imagining a Dorian Grey who lives rather than dies and carries his corruption through different time periods after the fin de siècle. I look forward to seeing where the writers and producers take the theme.

___________________________________________________________________

In the back of my edition of The Picture of Dorian Grey is a wonderful essay, the ‘Conclusion’ to Walter Pater’s The Renaissance, that is really about what it means to be alive and to be conscious, and the place of art in that. Pater writes:

… the whole scope of observation is dwarfed to the narrow chamber of the individual mind. Experience already reduced to a swarm of impressions, is ringed round for each one of us by that thick wall of personality through which no real voice has ever pierced on its way to us, or from us to that which we can only conjecture to be without. Every one of those impressions is the impression of the individual in his isolation, each mind keeping as a solitary prisoner its own dream of a world.  (226)

Pater, and Wilde after him, wanted to grab hold of the fleeting impressions of life for but a moment and locate the now. Art, and in that sense “art for art’s sake”, was the means of doing this: a spiritual recognition of the beauty of the now.

What ruined Dorian Grey was not the sentiment or truth of that view, though Wilde was exploring the dangers implicit in it. The black box nature of our consciousness, the fact that we create our own very unique internal worlds, is among the truths of modern neuroscience. Rather, Dorian’s failure lay in his unwillingness to recognize that the “dream of a world” of others was as real and precious as his own.

Caves, Creationism and the Divine Wonder of Deep Time

The Mutilation of Uranus by Saturn

Last week was my oldest daughter’s 5th birthday, in my mind the next “big” birthday after the always special year one. I decided on a geology-themed day, one component of which was a trip with her and my younger daughter, who is 3, to a local limestone cave that offers walk-through tours.

Given that we were visiting the cave on a weekday, we had the privilege of a private tour: just me, the girls, and our very friendly and helpful tour guide. I was really hoping the girls would get to see some bats, which I figured would be hibernating by now. Sadly, the bats were gone. In some places upwards of 90% of them have been killed by an epidemic with the understated name of “White Nose Syndrome”, a biological catastrophe I hadn’t even been aware of and a crisis both for bats and for the farmers who depend on them for insect control and pollination.

For a local show cave this one was pretty amazing: lots of intricate stalactites and stalagmites, thin rock in the form of billowing ribbons, a “living” growth moving so slowly it appears to be frozen in time. I found myself constantly wondering how long it took for these formations to take shape. “We tell people thousands of years,” our guide told us. “We have no idea how old the earth is.” I thought to myself that I was almost certain we had a pretty good idea of the age of the earth, about 4.5 billion years, but did not press, too interested in the cave’s features and in exploring with the girls. I later realized I should have; it would likely have made our guide feel more at ease.

Towards the end of our tour we spotted a sea shell embedded in the low ceiling above us, and I picked up the girls one at a time so they could inspect it with their magnifying glass. I felt the kind of vertigo you feel when you come up against deep time. Here was the echo of a living thing from eons ago, viewed by the living far, far in its future.

Later I found myself thinking about our distance in time from the sea shell and the cave that surrounded it. How much time separated us from the shell? How old was the cave? How long had the things we had seen taken to form? Like anyone who doesn’t know much about a subject, I went to our modern version of the Oracle at Delphi: Google.

When I Googled the very simple question “how old are limestone caves?” a very curious thing happened. The first link that popped up wasn’t Wikipedia or a geology site but The Institute for Creation Research. Nor was that the only link to creationist websites. Many, perhaps the majority, of the articles written on the age of caves were by creationists, the ones I read couched in seemingly scientific language difficult for a non-scientist to parse. Creationists seem to be as interested in the age of caves as speleologists, and I couldn’t help but wonder why.

Unless one goes looking, or tries to remain conscious of it, there are very few places where human beings confront deep time, that is, time far beyond the thousands of years by which we reckon human historical time, whether behind us or ahead of us. The night sky is one of these places, though we have so turned the whole world into a sprawling Las Vegas that few of us can even see into the depths of night anymore. Another is the natural history museum, where prehistoric animals are preserved and put on display. Creationists have attempted to tackle the latter by designing their very own museums, such as The Creation Museum, replete with an alternative history of the universe in which, among other things, dinosaurs once lived side-by-side with human beings, as in The Flintstones.

Another place where a family such as my own might confront deep time is in canyons and caves. The Grand Canyon has a wonderful tour called The Trail of Time that gives some idea of the scale of geological time: tourists start at the top of the canyon in the present and move step by step and epoch by epoch down to the point where the force of the Colorado River has revealed a surface 2 billion years old.

Caves are merely canyons under the ground, and both their structure and their slow-growing features, stalactites and the like, give us a glimpse into the depths of geologic time. Creationists feel compelled to steel believers in a 5,000 year old earth against the kinds of doubts and questions that would be raised after a family walks through a cave. Hence all the ink spilt arguing over how long it takes a stalagmite to grow five feet tall and look like a melting Santa Claus. What a shame.

It was no doubt the potential prickliness of his tourists that led our poor guide to present the age of the earth and the passage of time within the cave as open questions he could not address. After all, he didn’t know me from Adam, and one slip of the word “million” from his mouth might have turned what should have been an exciting outing into a theological debate. As he said, he was not a geologist, and had merely found himself working in the cave after his father passed away.

As regular readers of my posts well know, I am far from being an anti-religious person. Religion to me is one of the more wondrous inventions and discoveries we human beings have come up with, but religion understood in this creationist sense seems to me a very real diminishment not merely of the human intellect but of the idea of the divine itself.

I do not mean to diminish the lives of people who believe in such pseudo-science. One of the most hardworking and courageous persons I can think of was a man blinded by a mine in Vietnam. Once, discussing what he would most like to see were his sight restored, he said without hesitation, “The Creation Museum!” I think this man’s religious faith was a wellspring for his motivation and courage, and this, I believe, is what religions are for: to provide strength for us to deal with the trials and tribulations of human life. Yet I cannot help but think that the effort to suck in and crush our ideas of creation, black-hole-like, so that they fit within the scope of our personal lives isn’t just an assault on scientific truth but a suffocation of our idea of the divine itself.

The Genesis story certainly offers believers and non-believers alike deep reflection on what it means to be a moral creature, but much of this opportunity for reflection is lost when the story is turned into a science textbook. Not only that, both creation and creator become smaller. How limited is the God of the creationists, whose work they constrict from billions into mere thousands of years and whose overwhelming complexity and wonder they reduce to a mere 788,280 human words! With bitter irony, creationists diminish the depth of the work God has supposedly made so that man can exalt himself to the center of the universe and become the primary character of the story of creation. In trying to diminish the scale and depth of the universe in space and time they are committing the sin of Milton’s Satan: pride.

The more we learn of the universe the deeper it becomes. Perhaps the most amazing projects in NASA’s history are two relatively recent ones: Kepler and Hubble. Their effects on our understanding of our place in the universe are far more profound than the moon landings or anything else the agency has done.

Hubble’s first Deep Field image was taken over ten consecutive days in December of 1995. Here is what it discovered, in the words of Lance Wallace over at The Atlantic:

What researchers found when they focused the Hubble over those 10 days on that tiny speck of darkness, Mather said, shook their worlds. When the images were compiled, they showed not just thousands of stars, but thousands of galaxies. If a tiny speck of darkness in the night sky held that many galaxies, stars and—as scientists were beginning to realize—associated planets … the number of galaxies, stars, and planets the universe contained had to be breathtakingly larger than they’d previously imagined.

The sheer increased scale of the universe has led scientists to believe it is near impossible that we are “alone” in the cosmos. The Kepler Mission has filled in the details, with recent studies suggesting there may be billions of Earth-like planets in the universe. Combine these two discoveries with the view of the planet hunter Dimitar Sasselov, who thinks not only that we are at the very beginning of the prime period for life in the universe, because it has taken this long for stars to produce the heavy elements that are life’s prerequisites, but also that we have a very long time, perhaps as much as 100 billion years, for this golden age of life to play out. Together they give an idea of just how prolific creation is and will be, beside which a God who creates only one living planet and one intelligent species seems tragically sterile.

To return underground: caves were our first cathedrals; witness Lascaux. It is even possible that our idea of the Underworld as the land of the dead grew out of the Bronze Age temple complex of Alepotrypa, which may have inspired the Greek idea of Hades, itself the seed of the similar idea of Sheol held by the Jews and revved up by Christians into the pinnacle of horror shows, the idea of Hell.

I like to think that this early understanding of the Underworld as the land of the dead, and the use of caves as temples, reflects an intuitive understanding of deep time. Walking into a cave is indeed, in a sense, entering the realm of the dead, because it is like walking into the earth’s past. What is seen there is the movement of time across vast scales. The shell my daughters peered at with their magnifying glass, for instance, was the exoskeleton of a creature that lived perhaps some 400 million years ago, in the Silurian Period, when what is now Pennsylvania was located at the equator and the limestone that is the product of the decay of that shell’s inhabitant’s relatives was beginning to form.

Recognition of this deep time diminishes nothing of the human scale and spiritual meaning of this moment taken to stop and stare at something exquisite peeking at us from the ceiling; quite the opposite. Though I might be guilty of overwinding, it was a second or two that was 400 million, or one might even say 13 billion, years in the making. Who could help thanking God for that?