To see forward, look back!

Thomas Cole, The Course of Empire

“The farther backward you look, the farther forward you are likely to see.”  

Winston Churchill


The above quote is taken from Ian Morris' recent and fascinating Why the West Rules - For Now: The Patterns of History and What They Reveal About the Future. Indeed, the whole point of Morris' book can be seen in Churchill's quip. Morris, a trained archaeologist and historian, aims to find a pattern in the broad arc of human history beginning with the birth of civilization, to take us into the present age, and to project current trends outward into the next century and perhaps beyond.

This is, needless to say, a pretty ballsy thing to do, at least if one wants to remain safe in the cocoon of respectable academia, where scholars can spend a decade glued like a car door on Magneto to a subsection of one obscure historical text, or stuck for seven years, as Morris himself was, to the excavation of one ancient room. Writing a meta-narrative like Morris' is somewhat less ballsy if one intends to enter that rare breed of academic/journalist that has managed to reach the publishing industry's version of celebrity status. Perhaps the fastest way to reach the top of today's non-fiction bestseller list is to write a book attaching the words "America" or "the West" to the verb "decline", or, on the flip side, the words "China" and "rise". And who could blame the public for lapping this stuff up? Hell, with "Euro-crises" and "fiscal cliffs" and "debt ceilings" all over the news, everything in the rest of the world, and China especially, seems all go-go-go.

Morris, however, is not some poor soul lost forever to the seriousness of academia and possessed by the spirit of Oswald Spengler, nor is he some dry professor presenting yet another version of those angels-on-the-head-of-a-pin arguments that force closed the eyelids of popular readers. Rather, he has managed the seemingly impossible task of presenting serious scholarship in a way that keeps readers not only engaged but entertained. His book is full of creative leaps in which he uses the instruments and insights of one field of human intellectual and artistic endeavor to help understand history in new ways. Above all, Morris takes seriously what are in fact very important questions, problems which modern historians, burned by the hubris and prejudice of their 19th and early 20th century predecessors, tend to ignore, but which should nonetheless matter to all of us as human beings: Where did we come from? And where are we going?

The way Morris frames these questions of origins and destiny is to see them through the prism of the "rise of the West". Is this Euro-centric? Perhaps, but the facts remain that it was from an obscure corner of Eurasia that the first civilization arose which managed to tie the globe together into one unit, and that it was from there that a brand new form of civilization emerged: a scientific-industrial-technological civilization that would force all the world to adapt to it or face decline and domination. Why this happened, and where this process unleashed by the West might be leading, is not a matter of boosting Westerners' self-esteem in a period when the two cores of Western civilization, Europe and the United States, seem to be racing one another down the slope of decadence and decline. These are questions that should concern everyone, regardless of the accidents of geography.

Morris is trained as an archaeologist specializing in the classical age in the West. One might, therefore, expect him to fall into the category of those 19th century historians who thought there was something very special about the West, culturally or, as some with tragic effect argued, racially, that made its dominance over the rest of the world's cultures inevitable. Usually these "long-term lock-in" theories start from something about the ancient Greeks and how they escaped the hold of superstition by the application of reason to both nature and society.

Morris grapples with these explanations from culture only to dismiss them. There are just far too many periods in history when civilizations other than the West held first place in the realms of science and technology. Think, for example, of the technologies considered pivotal in launching the modern age: long-range sailing ships, the compass, the printing press, and gunpowder. All of these were invented in China.

A problem confronting historians who attempt to compare civilizations, or to talk about their rise and decline, is this: on what basis do you say one civilization is more "advanced" than another? How can you tell whether a civilization is rising or declining?

It is in coming up with a new way to answer these questions that Morris makes the first of the many leaps found in Why the West Rules - For Now. Morris turns to a model from contemporary development studies, the UN Human Development Index (HDI), a combined statistic that compares development between countries on measures such as life expectancy, education, and income. He uses it as a template for creating his own measure, one that allows him to compare levels of development between civilizations across a historical span that begins with the appearance of civilization in the "Hilly Flanks" of the modern Middle East, with the Neolithic Revolution, around 12,000 years ago.

Morris comes up with four key measures that allow him to compare development between civilizations and across time: energy capture (how much energy is captured for work), organizational capacity (measured by the size of urban centers), information technology (measured by literacy rates), and military capacity (measured by the size of armies). Morris is careful not to attach value judgments to scores on his scale, and he fully admits that what he has invented is a rather rough instrument. His is a starting point for a larger discussion, not the final destination.
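The structure of such an index can be caricatured in a few lines of code. To be clear, everything below is a toy: the maxima, the per-trait point cap, and the sample figures are my own hypothetical numbers, not Morris's actual data or scoring scheme. The point is only the shape of the comparison: scale each trait against some modern maximum, then sum the scaled traits into a single score.

```python
# Toy sketch of a Morris-style social development index.
# All constants and sample figures are hypothetical illustrations.

MAX_POINTS_PER_TRAIT = 250  # four traits, so a perfect score is 1000

# Hypothetical modern-era maxima used to normalize each trait
# (units here are arbitrary; they just set the scale).
MODERN_MAX = {
    "energy_capture": 230_000,   # e.g. kcal per person per day
    "largest_city": 27_000_000,  # population of the largest urban center
    "literacy_rate": 100.0,      # percent literate
    "war_making": 5_000.0,       # an arbitrary military-capacity index
}

def development_score(traits: dict) -> float:
    """Sum of the four traits, each scaled linearly against its modern maximum."""
    return sum(
        MAX_POINTS_PER_TRAIT * min(traits[k] / MODERN_MAX[k], 1.0)
        for k in MODERN_MAX
    )

# Two hypothetical pre-modern snapshots (made-up numbers):
rome = {"energy_capture": 31_000, "largest_city": 1_000_000,
        "literacy_rate": 10.0, "war_making": 20.0}
song_china = {"energy_capture": 30_000, "largest_city": 1_200_000,
              "literacy_rate": 12.0, "war_making": 25.0}

print(development_score(rome))
print(development_score(song_china))
```

Even this crude sketch shows why the instrument is "rough": the choice of maxima and the linear weighting silently decide which civilization comes out ahead, which is exactly the kind of methodological debate Morris invites.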

Applying his measure to history since the dawn of civilization, here is what he finds. The West, meaning not just Europe but the western half of Eurasia and North Africa beginning in Mesopotamia, was indeed ahead of Eastern civilization, whose core he places in China, in developmental terms for much of history. Only from around 900 AD to 1700 AD was the East ahead of the West. Yet, in a version of the theory put forward by Jared Diamond in his landmark Guns, Germs, and Steel, Morris argues that the West's long lead over the East on his measure is to be explained not in terms of culture but as a consequence of geography.

Intensive agriculture, along with permanent human settlements, emerged around 12,000 years ago in a region known as the "Hilly Flanks", on the edge of the Tigris and Euphrates valley. Why here? It just so happens that this area holds an overwhelming share of the very small group of naturally occurring plants and animals suitable for domestication. Agriculture in the Hilly Flanks spread to the nearby Tigris and Euphrates valley, where, under the harsh conditions of a period of renewed cold, it gave rise to what we would recognize as both cities and states, which grew up as solutions to the problem of providing food under conditions of intense scarcity.

The West had a further geographic advantage over the East after the center of its civilization moved to the Mediterranean, whose sea provided a transmission belt for food, products, people, and ideas. Even after China built its Grand Canal, it would have nothing to compete with the Mediterranean. The Roman Empire took the West to the very height of social development on Morris' scale: in the number of trees felled for fuel, the scale of its cities, the soldiers armed for war, the number of literate persons, Western countries would not return to the levels of Rome at its height until the 1700s. Rome had hit what Morris calls a "hard ceiling" and was eventually felled by his "four horsemen of the apocalypse": climate change, famine, state failure, and disease.

After 900, China under its Song Dynasty finally caught up to and surpassed the West, reaching Roman levels of social development on Morris' measure. How it was able to do this isn't entirely clear, but certainly part of it had to do with the incorporation of rice-producing regions in China's south, perhaps facilitated by climate change. China's relative isolation would have offered it some protection against epidemic diseases, and it had come up with an effective "barbarian policy" that at the very least allowed China to avoid the fate of a city such as Baghdad, which in the mid-1200s was utterly destroyed by a Mongol horde.

In any case, with the discovery of the Americas in the late 1400s, the West began its climb back to the levels of social development found in the Roman Empire, this time by creating a new version of the Mediterranean world in Atlantic civilization and its Columbian Exchange. But by the late 1700s, when Thomas Malthus realized that all civilizations in the past had collapsed once their populations overran their ability to produce food, it seemed the hard ceiling was about to be hit again. Only this time, as Kenneth Pomeranz pointed out in The Great Divergence, the West would find in the industrial revolution a way not just to poke above the hard ceiling but to shatter it.

Here’s what it looks like as a graph:

Ian Morris Great Divergence Graph   


The last 200-odd years have essentially been the story of the West taking advantage of this new form of industrial civilization it created to dominate the rest of the globe, until non-Western societies adopted and replicated it. Now that countries such as China and India have embraced modernization, the writing is on the wall for the rule of the West. Within a century, the great divergence will have run its course, and perhaps that's the big story under today's headlines. But Morris doesn't think so.

Instead, what he sees is another hard ceiling out ahead which, unless we break through it, may result in the end of civilization, what he, borrowing from Isaac Asimov, calls Nightfall. If one takes one of Morris' measures, say urbanization, and plays out the trend line, what one gets is cities on the order of 140 million people! He sees no way such trend lines can be met unless the hopes of the singularitarians for radical technological change within the next half century prove correct. The four horsemen of climate change, famine, state failure, and disease are already out of their stable, and only a breakthrough of greater magnitude than the industrial revolution would prove capable of pushing them back in.
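"Playing out the trend line" is simpler than it sounds, and a toy calculation shows how quickly compound growth produces eye-popping projections of the kind Morris describes. The starting size and growth rate below are hypothetical numbers of my own, not Morris' data; they are chosen only to illustrate the mechanics.

```python
# Toy illustration of extrapolating a trend line forward.
# The figures (35 million people, 1.4% annual growth) are hypothetical.

def extrapolate(current_size: float, annual_growth: float, years: int) -> float:
    """Project a quantity forward assuming constant compound growth."""
    return current_size * (1 + annual_growth) ** years

# A hypothetical largest city of 35 million growing 1.4% a year
# roughly quadruples over a century:
projected = extrapolate(35_000_000, 0.014, 100)
print(round(projected / 1_000_000), "million people")
```

The fragility of the exercise is also visible here: nudge the assumed growth rate and the century-out projection swings wildly, which is one reason such extrapolations are warnings rather than predictions.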

If we fail to achieve a breakthrough, our particular civilization's collapse, accompanied as it would be by nuclear weapons, will likely, according to Morris, be the last. For him, as in the title of this blog, the future is one of either utopia or dystopia.

Nightfall

Van Gogh, The Starry Night

When I was fourteen, or thereabouts, one of the very first novels I read was the Foundation Trilogy of Isaac Asimov. For anyone presumptuous enough, as I was and still am, to have an interest in "big history" - the rise and fall of civilizations, the place of civilization within the history of the earth and the universe, the wonder at where we will be millennia hence - Foundation is a spellbinding tale.

The Foundation Trilogy tells the story of the social mathematician Hari Seldon, who invents the field of psychohistory, which allows him to predict the fall of the Galactic Empire in which he lives. The fall of the galaxy-spanning empire will lead to a dark age lasting 30,000 years. Seldon comes up with a plan to shorten this period of darkness to only a millennium by establishing foundations that will preserve and foster learning at opposite ends of the galaxy.

My understanding of the origins of the Foundation Trilogy was that it was one of the greatest examples ever of a writer breaking free from the vise-grip of writer's block. In 1942 Asimov was at a complete loss as to what he should write. Scanning his bookshelf he saw a copy of Edward Gibbon's The Decline and Fall of the Roman Empire and started to read, and soon had the plot of all plots, an idea that would see him through not just the Foundation Trilogy but a series of works, fourteen in all.

I found it strange, then, when I picked up a book that represents our 21st century version of Gibbon's Decline and Fall, Ian Morris' Why the West Rules - For Now: The Patterns of History and What They Reveal About the Future, that Morris made reference not to Foundation but to an earlier short story by Asimov, one I had never heard of, called Nightfall.*

Next time I want to do a full review of Morris' fascinating book. For those so inclined, please do not be put off by the title. Morris is something much different from the Eurocentrists who usually write books with such titles, and if he has the courage to write a meta-history in an age when respectable scholars are supposed to be more humble in the face of their ignorance and deliberately narrow in their purported expertise, his is a self-conscious meta-narrative that fully acknowledges the limits of our knowledge and of its author. Morris is also prone to creative leaps, and he borrows and redefines for his purposes two concepts of the human future. The first is one often talked about on this blog, the Singularity; the second is an idea from Asimov, the idea of social collapse found in his aforementioned Nightfall.

Like Foundation, Nightfall is a tale of a civilization's end, but not the gradual corrosion and decay found in Gibbon or Asimov's other stories; rather, it is a tale of swift and total collapse. (The reason, no doubt, Morris chose it as his version of a negative future. Again, more on that next time.)

Nightfall, too, has a story around its origins. Asimov was discussing with John W. Campbell, the editor of Astounding Science Fiction, a quote from Ralph Waldo Emerson:

If the stars should appear one night in a thousand years, how would men believe and adore, and preserve for many generations the remembrance of the city of God which had been shown!

Campbell thought instead that people would very likely go mad.

Nightfall takes place on the imaginary world of Lagash. The planet is located in a star system with six suns, with the effect that it is always daytime on Lagash. The plot centers on a group of scientists at Saro University who discover that their civilization is headed toward a collapse that has been cyclically repeated (every 2,049 years) many, many times.

The most unbelievable element of Nightfall isn't this cyclical collapse, which looks a lot like the "Maya apocalypse" that a host of poor souls got sucked into just last month, but the collaboration between scientists from different fields within the university, bordering on what E.O. Wilson has called "consilience". You have a psychologist studying the psychological effects of darkness on Lagashians, an archaeologist who has found evidence of repeated civilizational collapse, an astronomer who has discovered irregularities in the orbit of Lagash, and a physicist, one of the main characters, Aton, who puts the whole thing together. He realizes that all of this fits with an invisible (because of the light) body orbiting Lagash, one that causes an eclipse and a brief night every two millennia that results in the destruction of civilization.

The other main character, Theremon, is a journalist who has written on a religious group, "the Cult", which has a distinct set of beliefs about the end of days handed down in its "Book of Revelation" (ugh). The belief entails the destruction of the world by a darkness in which flames in the sky rain fire down upon the earth and the souls of the living flow out into the heavens. The presentation of Theremon as a hardscrabble reporter, rather than, say, a scholar of religion, is the only thing that dates Nightfall as a story written in the mid-20th century. One would have no idea that, as Asimov was writing, civilization around him was in fact in a state of collapse as world war raged.

Why does nightfall bring the collapse of civilization on Lagash? For one, people become psychologically unhinged by darkness. Lagashians, evolved for eternal day, feel they are being suffocated when darkness falls. And having never known darkness, they have had no need to invent artificial light. When darkness falls, it is not fire from the heavens that destroys their civilization but the fact that they inadvertently burn their cities to the ground, lighting everything they can find on fire to escape the night.

In some ways, I think, Asimov was playing with all sorts of ideas about technology, science, and religion in Nightfall. After all, it was the taming of fire that stands as the legendary gift of Prometheus; the technology that gave rise to human civilization destroys the civilization of Lagash. The faculty of Saro, like we humans, undergo their own version of a Copernican Revolution. Just as our position in space blinded us for so long to the heliocentric nature of the solar system, and just as our inability to see with the naked eye past Jupiter, let alone out past the Milky Way, blinded us to the scale of the cosmos, Lagashians are blinded by their own position, surrounded by six suns that hide the night sky. The astronomer Beenay speculates in Nightfall that perhaps what the Cultists saw with the fall of darkness were other suns more distant than the six that surround Lagash, as many as "a dozen or two, maybe". Theremon responds:

Two dozen suns in a universe eight light years across. Wow! That would shrink our world into insignificance.

Indeed.


Asimov is also playing with the tendency of all of us, even scientists, to get imaginatively stuck in the world we know. Beenay can imagine a world like our own with only one sun, but he thinks life on such a strange world would be impossible, because the sun would shine on such a planet for only half the day, and constant sunlight, as the Lagashians know, is necessary for life.

Whereas Asimov locates this myopia in seeing the universe from a particular point in space, the physicist Lawrence Krauss, in a recent talk for the Singularity University, locates a similar myopia in our place within the overall history of the universe. Before Edwin Hubble and his telescopes in the 1920s, people thought they lived in a static universe with only one galaxy, our own. Today we know we live in an expanding universe with many billions of galaxies. Krauss points out that in the far future of a universe such as ours, which is expanding even as local groups of galaxies converge, astronomers will not be able to see past their own galaxy, as we now can, into the past of the universe, including telltale signs of its beginning such as the cosmic background radiation. What they will see, the only thing they will be able to see, is the galaxy in which they live surrounded by seemingly infinite darkness: exactly the kind of universe astronomers thought we lived in a century ago.

Asimov’s, Nightfall, and Krauss’s future universe should not, however, encourage the hubris that we are uniquely placed to know the truth about the universe. Rather, it cautions us that we may be missing something very important by the myopia inherent in seeing the universe from a very particular point in space and time.

If all that weren’t enough, Asimov’s, Nightfall,  is playing with the conflict between science and religion. The work of the scientist at Saro threatens to undercut the sacred meaning of nightfall for the Cultists. Indeed, the Cultist appear to hold beliefs that are part “Maya apocalypse” part pre-Copernican Christian cosmology regarding the abode of God and the angels being in the “heavenly spheres”. Whereas the scientists at Saro have set up a kind of mass fall out shelter in which a number of Lagashians can survive nightfall, and intend to photograph what happens as a sort of message in a bottle for the next civilization on Lagash to witness the darkness, the Cultists try to sabotage the recording of night and to destroy the observatory in which the scientists at Saro have retreated. Their own religious convictions being more important than the survival of civilization and scientific truth.

Ian Morris thought the seventy-year-old short story Nightfall had something very important to say to us of the early 21st century, and I very much agree. Why exactly Morris, who is, after all, a historian and archaeologist interested in very long cycles of history, would see this strange story of sudden collapse as a warning we should heed will be the subject of my next post…

*In 1990, the story was adapted into a novel with Robert Silverberg.

Pinker, Foucault and Progress

Panopticon

As readers may know, a little while back I wrote a piece on Steven Pinker's Better Angels of Our Nature, a book that tries to make the case that violence has been on a steady decline throughout the modern era. Regardless of tragedies such as the horrendous school shooting at Newtown, Pinker wants us to know that things are not as bad as they might seem, that in the aggregate we live in the least violent society ever to have existed in human history, and that we should be thankful for that.

Pinker’s book is copiously researched and argued, but it leaves one with a host of questions. It is not merely that tragic incidents of violence that we see all around us seem to fly in the face of his argument, it is that his viewpoint, at least for me, always seems to be missing something, to have skipped over some important element that would challenge its premise or undermine its argument, a criticism that Pinker has by some sleight of hand skillfully managed to keep hidden from us.

I think an example of this can be seen in Pinker's treatment of the decline of torture and the fall in rates of violent crime. Both of these developments, at least in Western countries, are undeniable. The question is how these declines are to be explained. What puzzled me is that Pinker nowhere even mentions the work of the late philosopher Michel Foucault, a man who, whatever the flaws and oversimplifications of his arguments, thought long and hard about the questions of both torture and crime. In fact, Foucault is the scholar whose work is most associated with these questions. It is a very strange oversight, for Pinker does not bring up Foucault even briefly to dismiss his views.

It seems I am not the only person to have asked this question, for on his website, in a list of frequently asked questions, Pinker gives the following explanation for ignoring Foucault:

Questioner: You obviously must discuss Michel Foucault’s Discipline and Punish, the book that explains the decline of judicial torture in Europe.

Pinker: Actually, I don’t. Despite being a guru in the modern humanities, Foucault is not the only scholar to have noticed that European states eliminated gruesome punishments, and his theory in particular strikes me as eccentric, tendentious, and poorly argued. See J. G. Merquior “Charting carceral society” in his book Foucault (UC Press, 1985), for a lucid deconstruction.

I wanted to see what this "lucid deconstruction" of Foucault by Merquior looked like (Pinker is nothing if not clever; Foucault is a patron saint to literary deconstructionists), so I checked it out.

Here is how Merquior introduces Foucault’s Discipline and Punish:

"Foucault once called it 'my first book' and not without reason: for it is a serious contender for first place among his books as far as language and structure, style of organization and ordering of parts go. It is not a bit less engrossing than Madness and Civilization, nor less original than The Order of Things. Once again Foucault unearths the most unexpected of primary sources; once again his reinterpretation of the historical record is as bold as it is thought provoking." (Foucault, p. 86)

This is the guy Pinker asks us to turn to for a rebuttal of Foucault?

Merquior does have some very valid arguments to make against Foucault, more on those towards the end, but first the views that Pinker does not discuss: Foucault's account of the rise of the prison.

The theory Foucault lays out in Discipline and Punish, which provides a philosophical history of the modern prison, is essentially this: the prison emerged in the late 18th and 19th centuries not as a humanitarian project of Enlightenment philosophes, but as a disciplinary apparatus of society, in conjunction with other disciplinary institutions (the insane asylum, the workhouse, the factory, the reformatory, the school) and branches of knowledge (psychology, criminology) that had as their end what might be called the domestication of human beings. It might be hard for us to believe, but the prison is a very modern institution, not much older than the 19th century. The idea that you should detain people convicted of a crime for long periods, perhaps with the hope of "rehabilitating" them, just hadn't crossed anyone's mind before then. Instead, punishment was almost immediate, whether execution, physical punishment, or fines. With the birth of the prison, gone was the emotive wildness of the prior era: the criminal wracked by sin and tortured for his transgression against his divine creator and human sovereign. In its place rose the patient, "humane" transformation of the "abnormal", "deviant" individual into a law- and norm-abiding member of society.

For Foucault, the culmination of all this, in a philosophical sense, is the Panopticon prison designed by Jeremy Bentham (pictured above). It is a structure that would give prison officials an omniscient, round-the-clock gaze into the activities of the individual prisoner while leaving the prisoner completely isolated and atomized. In the panopticon Foucault sees the metaphor for our own homogenizing, conformist, totalizing society.

What Foucault succeeded in doing in Discipline and Punish was putting the horrific judicial torture of the pre-Enlightenment era and the post-Enlightenment policy of mass imprisonment side by side. In doing this he goads us to ask whether the system we have today is indeed as humane, as enlightened, compared to what came before, as we are prone to believe.

This is exactly what Pinker responding to a question on imprisonment does not allow us to do:

Questioner:What about the American imprisonment craze?

Pinker: As unjust as many current American imprisonment practices are, they cannot be compared to the lethal sadism of criminal punishment in earlier centuries

Okay, true enough, but for me this answer misses the point of the question. The underlying assumption behind the question seems to be: "Yes, violence might have declined, but isn't locking up millions of people - six million to be exact, a number larger than that of Stalin's gulag archipelago, 60% of whom are there for nonviolent offenses - itself a form of violence?" Or perhaps: "Might the decline in violence be the result of mass imprisonment?" Admitting either would force Pinker to accept that the moral progress he details is perhaps not as unequivocal as he wants us to believe.

Here, I think, is where Pinker's attachment to the Enlightenment idea of progress clearly leads to complacency. Pinker loves graphs, so here's a graph:

U.S. incarceration rates, 1925 onwards

Source: http://en.wikipedia.org/wiki/United_States_incarceration_rate

It seems frankly obtuse not to connect the decline in crime with the sheer number of people now being locked up. It is tragic, but the connection between rising rates of imprisonment and declining crime rates can be seen even in Pinker's vaunted Western Europe, where the rate of imprisonment rose - though to nothing like the obscene rate it reached in the United States - and the crime rate fell in tandem.

Yet unless the scale of imprisonment is put in context, we are likely to see the imprisonment of nonviolent offenders as less than morally problematic, merely an unfortunate consequence of the need to protect ourselves from violent crime by throwing the net of criminal justice as wide as it can be thrown, something Pinker seems to do when he states:

A regime that trawls for drug users or other petty delinquents will net a certain number of violent people as a by-catch, further thinning the ranks of the violent people who remain on the streets. (BA 122)

The strange thing here is that the uniquely American practice of locking up every lawbreaker without distinguishing between the risks posed by the accused is not only clearly disproportionate and unjust, it has no apparent effect on the actual rate of violent crime. The US incarcerates a whopping 743 persons per 100,000, whereas Great Britain locks up 154 per 100,000, and the US still has an intentional homicide rate four times higher than the UK's.

By seeing modern history almost exclusively through the lens of moral progress, Pinker blinds himself to the question of whether or not our own age is engaged in practices that a more progressive future will regard with horror.

The question of imprisonment and its relationship to the decline of crime is not the only place in Better Angels of Our Nature where Pinker dismisses a messy, often harsh, reality in the name of a simplified Enlightenment notion of progress. The same can be seen in his treatment of contemporary slavery and war.

In a strange way, Pinker's insistence that we recognize the reality of moral and social progress might short-circuit our capacity for progress in the future. You can see this in his treatment of "human trafficking", a modern-day euphemism for slavery. As always, Pinker wants to let us know that current figures are exaggerated; as always, he reminds us that what we have here is no comparison to the far crueler reality of slavery found in the past. But this viewpoint comes at the cost of continuity. Anti-slavery advocates, such as those of the organization Free the Slaves, assume a moral continuity between themselves and the earlier abolition movements - and well they should. But Pinker's rhetoric is less "we have almost reached the summit" than an undermining of the moral worth of their struggle with his damned proportionality: that things are better than ever now because, "proportional to world population", not as many people are murdered, die in war, or are enslaved.

Numbers off or not, anything even in the ballpark of 25 million slaves today (the high estimate) still constitutes an enormous amount of human suffering: innumerable rapes, beatings, and episodes of forced labor (no doubt Pinker would try to put a number on them), suffering Pinker does not explore.

What holds for slavery in Better Angels holds for war as well. Pinker is at pains to point out that the casualty figures of the most savage conflict of the last generation, a conflict most Westerners have probably never even heard of, the Great War of Africa, are grossly exaggerated: that the war killed "only" 1.5 million, not the 5 million human beings often reported.

Pinker’s right about one thing- wars between the world’s most powerful states have, at least for the moment disappeared.Wars between the great powers have always been the greatest killers in history, and we haven’t had any of those since 1945, and the question is- why? Pinker will not allow the obvious answer to his question, namely, that the post 1945 era is the age of nuclear weapons, that for the first time in history, war between great powers meant inevitable suicide. His evidence against the “nuclear peace” is that more nations have abandoned nuclear weapons programs than have developed such weapons. The fact is perhaps surprising but nonetheless accurate. It becomes a little less surprising, and a little less encouraging in Pinker’s sense, when you actually look at the list of countries who have abandoned them and why. Three of them: Belarus, Kazakhstan and the Ukraine are former Soviet republics and were under enormous Russian and US pressure- not to mention financial incentives- to give up their weapons after the fall of the Soviet Union. Two of them- South Africa and Libya- were attempting to escape the condition of being international pariahs. Another two- Iraq and Syria had their nuclear programs derailed by foreign powers. Three of them: Argentina, Brazil, and Algeria faced no external existential threat that would justify the expense and isolation that would come as a consequence of  their development of nuclear weapons and five others: Egypt, Japan, South Korea, Taiwan and Germany were woven tightly into the US security umbrella.

Countries that face a perceived existential threat from a nuclear or conventionally advanced power (and Argentina never faced an existential threat from Great Britain- Britain never threatened to conquer the country during the Falklands War) would appear to have a pretty large incentive to develop nuclear weapons insofar as they do not possess strong security guarantees from one of the great powers.

Pinker believes that Kant’s democratic peace theory (that democracies tied together by links of trade and international organization do not fight one another) helps explain the decline of war, but that does not explain why the US and Soviet Union did not go to war, or India and Pakistan, or Taiwan and China, or South and North Korea. He pins his hopes on the normative change against nuclear weapons found in Global Zero, a movement that includes an eclectic group of foreign policy figures- including realists such as Henry Kissinger- that hopes to rid the world of nuclear weapons.

While I find the goal of a nuclear-weapons-free world laudable, the problem I see is that weaker powers lacking advanced conventional weapons could very well understand this movement as a way for the big powers to preserve the rationality of war. In fact, the worst thing imaginable would be for great power war to regain its plausibility. If the recent success of Israel’s “Iron Dome” is any indication, we may end up there even without the world abandoning its nuclear weapons. Great powers, such as the US and China, may be more likely to engage in brinkmanship if they start to think they could survive a nuclear exchange. Recent confrontations between China and its neighbors and East Asia’s quite disturbing military buildup do not portend well for 21st century pacifism.

Global Zero might, with tragic irony, prove more dangerous than the current quite messy regime if it is not pursued in parallel with an effort to solve the world’s outstanding disputes and to build a post-US-as-sole-superpower security architecture- not to mention efforts to limit conventional weapons, which while we were sleeping have become just as deadly as nuclear weapons. Where everyone feels safe there is no need for everyone to be armed to the teeth.

Pinker recoils from messy explanations and morally ambiguous reality because he is wedded to the idea that the decline in violence was driven by a change in norms- a change that he thinks began with the Enlightenment. In his eyes, we are indeed morally superior to our predecessors in that we have a more inclusive and humane moral sense. Pinker turns to the ethical philosopher Peter Singer and his idea of the “escalator of reason” for a philosophical explanation of this normative change. Singer thinks that over time human generations reason their way to inclusiveness and humanity by expanding their “circle of empathy”. Once only one’s close kin sat in the circle of concern, then fellow members of one’s state or faith; now perhaps all of humanity or, as Singer himself is most famous for arguing in his Animal Liberation, the circle can be extended to non-human species.


Singer, however, is an odd duck to peg yourself to as a kind of philosophical backdrop for modern moral progress. A reader of Better Angels who did not know about Singer would be left unaware of just how controversial Singer’s views are. If memory serves, the fact that his views are something less than mainstream is tucked away in a footnote at the back of Pinker’s book.

Here is Singer from his Writings on an Ethical Life:

When the death of a disabled infant will lead to the birth of another infant with better prospects of a happy life, the total amount of happiness will be greater if the disabled infant is killed. (189)

I should be clear here that Singer is not talking about abortion, but infanticide, indeed he sees both practices as acceptable and morally equivalent:

That a fetus is known to be disabled is a widely accepted grounds for abortion. Yet in discussing abortion, we saw that birth does not mark a morally significant dividing line. I can not see how one could defend the view that fetuses may be “replaced” before birth, but newborn infants may not be. (191)

If this is the escalator of reason I want to get off.

Much as with the case of Foucault, Pinker doesn’t spend even a page or two engaging with these ideas. With 802 pages to its name, a few more would seem a small price to pay, but again they are ignored, perhaps largely because they detract from Pinker’s Enlightenment notions of moral progress. Even briefly grappling with these ideas, for me at least, seems to lead to all sorts of interesting and often quite disturbing possibilities that lie outside the simplistic dichotomies of progress and anti-progress set up by Pinker and Foucault.

Our society has certainly made progress morally over past ages in its abolition of torture and slavery, in its extension of rights to the formerly oppressed, its inclusion of women in political, economic and intellectual life, its freedom of speech and thought, not to mention the vast increases in our standards of living, and yet…

Maybe our society has not so much progressed morally in the sense of empathy as it has become squeamish about violence and physical coercion (real violence, that is- media and video games seem to reveal an obsessive bloodlust). What we have done is managed to effectively conceal violence, and wherever possible to adopt social and psychological methods of manipulation and control- including surveillance- in place of, to use military speak, “kinetic” methods. Our factory farms kill and confine more animals than have ever suffered such a fate- only we never see it. (Perhaps that is part of the explanation for why our urbanizing world has become so squeamish about violence- the fact that so few of us are engaged in the violence against animals found in agricultural life.) We do not physically torture, but we confine and conceal far more persons than were ever caught in the cruel but paltry nets of pre-modern states. Chattel slavery and its savagery are a thing of the past, but what we have now are millions of invisible slaves- kidnapped, locked in houses, people who are our very neighbors- suffering the cruel tyranny of one human being over another.


Our wars are fought in regions deemed too dangerous to be covered by mainstream media, and our images of them are sanitized for prime-time viewing. Our bloated and growing militaries represent bottled-up potential energy that could level whole civilizations, indeed destroy the human species and the earth, should circumstances ever call it to burst forth. Yet even our soldiers are averse to killing, so we are building machines capable of murdering more effectively and without conscience to replace them.

We do not expose our newborns on the rocks because they are girls or are disabled, but select against them in the womb, so that 100 million girls have “gone missing” and whole categories of human beings are disappearing from the world- and some, such as the ethicist Julian Savulescu, argue it is our “moral duty” to perform this “redesign” of the human species.

This returns me to the critic Merquior. Merquior makes the valid critique of Foucault that he is a sloppy historian, that he wants history to fit his theory neatly, which history can never do. Above all, Merquior sees the flaws in Foucault’s argument as stemming from his a priori position that the Enlightenment was less a humanitarian than a proto-totalitarian movement. This makes it impossible for Foucault to see the movement against torture and the creation of the modern prison system as anything more than an expression of a Nietzschean “will to power”.

But Merquior asks:

Why should the historian choose between the angelic image of a demo-liberal bourgeois order, unstained by class domination, and the hellish picture of ubiquitous coercion? Is not the actual historical record a mixed one, showing real libertarian and equalizing trends besides several configurations of class power and coercive cultural traits?  (98)  

Pinker might have done better had he applied Merquior’s critique of Foucault to himself, for, by seeing in modern developments the hand of progress from savagery to civilization, Pinker ends up blinding himself to the more complex historical picture as much as Foucault, who saw in modern trends little but the move towards social totalitarianism. Indeed, Pinker could save his Better Angels by adding just one chapter as an afterword. The chapter would look not at where we have come from, but at where we are and the struggles still left to us. It would provide a human face to the modern-day suffering of those in our progressive age who are still enslaved and who continue to be killed and maimed by war- those murdered and raped, and those suffering behind bars for crimes that have harmed no one but themselves and those who love them. It would be a face Pinker had taken from them by turning them into numbers. It would seek to locate and avoid the many cliffs that might yet plunge us downward, and say to all of us: “we have just a little ways to go, but for the sake of our own enlightened legacy, we must have the forward thinking and endurance to climb onward, and above all, not to fall.”

* An earlier version of this post was published on December 30, 2011

Could more than one singularity happen at the same time?

WPA R.U.R Poster

James Miller has an interesting-looking new book out, Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World. I haven’t had a chance to pick up the book yet, but I did listen to a very engaging conversation about it at Surprisingly Free.

Miller is a true believer in the Singularity, the idea that at some point- anywhere from the next quarter century to the end of the 21st- our civilization will give rise to a greater-than-human intelligence which will rapidly bootstrap itself to yet higher orders of intelligence, in such a way that we are unable to see past this event horizon in historical time. Such an increase of intelligence, it is widely believed by the singularians, will bring perennial human longings such as immortality and universal prosperity to fruition. Miller has put his money where his mouth is: should he die before the promised Singularity arrives, he has arranged to have his body cryonically frozen so the super intelligence on the other side of the Singularity can bring him back to life.

Yes, it all sounds more than a little nuts.

Miller’s argument against the Singularity being nuts is what I found most interesting. There are so many paths to creating a form of intelligence greater than our own that it seems unlikely all of them will fail. There is the push to create computers of ever greater intelligence, but even should that not pan out, we are likely, in Miller’s view, to get hold of the genetic and biological keys to human intelligence- the ability to create a society of Einsteins.

Around the same time I came across Miller’s views, I also came across those of Neil Turok on the transformative prospects of quantum computing. Wanting to get a better handle on that, I found a video of one of the premier experts on quantum computing, Michael Nielsen, who, at the 2009 Singularity Summit, suggested the possibility of two Singularities occurring in quick succession: the first on the back of digital computers, and the second on quantum computers designed by binary AIs.

What neither Miller, nor Turok, nor Nielsen discussed- a thought that occurred to me but that I had seen nowhere in the Singularity or sci-fi literature- was the possibility of multiple Singularities, arising from quite different technologies, occurring around the same time. Please share if you know of an example.

I myself am deeply, deeply skeptical of the Singularity but can’t resist an invitation to a flight of fancy- so here goes.

Although perhaps more unlikely than a single path to the Singularity, a scenario where multiple and quite distinct types of singularity occur at the same time might conceivably arise out of differences in regulatory structure and culture between countries. As an example, China is currently racing forward in the field of human genetics with efforts at its Beijing Genomics Institute 华大基因. China seems to have fewer qualms than Western countries regarding research into the role of genes in human intelligence and appears to be actively pursuing genetic engineering and selection to raise the level of human intelligence at BGI and elsewhere.

Western countries appear to face a number of cultural and regulatory impediments to pursuing a singularity through the genetic enhancement of human intelligence. Europe, especially Germany, has a justifiable sensitivity to anything that smacks of the eugenics of the brutal Nazi regime. America has, in addition to the Nazi example, its own racist history and eugenic past, and the completely reasonable apprehension of minorities toward any revival of models of human intelligence based on genetic profiles. The United States is also deeply infused with Christian values regarding the sanctity of life in a way that causes selection of embryos based on genetic profiles to be seen as morally abhorrent. But even in the West, the plummeting cost of embryonic screening is causing some doctors to become concerned.

Other regulatory boundaries might encourage distinct forms of Singularity as well. Strict regulation requiring extensive pharmaceutical testing before a drug is made available for human consumption may slow the development of chemical enhancements for cognition in Western countries compared to less developed nations.

Take the work of a maverick scientist like Kevin Warwick. Professor Warwick is actively pursuing research to turn human beings into cyborgs and has gone so far as to implant computer chips into both himself and his wife to test his ideas. One can imagine a regulatory structure that makes such experiments easier. Or, better yet, a pressing need that makes the development of such cyborg technologies appear especially important- say, the large number of American combat veterans who are paralyzed or have suffered amputations.

Cultural traits that seemingly have nothing to do with technology may foster divergent singularities as well. Take Japan. With its rapidly collapsing population and its animus toward immigration, Japan faces a huge shortage of workers which might be filled by the development of autonomous robots. America seems to be at the forefront of developing autonomous robots as well- though for completely different reasons. The US robot boom is driven not by a worker shortage, which America doesn’t have, but by sensitivity to the human casualties and psychological trauma suffered by the globally deployed US military, which sees in robots a way to project force while minimizing the risks to soldiers.

It seems at least possible that small differences between divergent paths to the singularity might become self-enhancing and block other paths. An advantage in something like the creation of artificial intelligence through Deep Learning, or in genetic enhancement, may not immediately translate into advances along rival paths so long as bottlenecks remain and all paths still seem to show promise.

As an example, let’s imagine that some society makes a major breakthrough in artificial intelligence using digital computers. If regulatory and cultural barriers to genetically enhancing human intelligence are not immediately removed, the artificial intelligence path will feed on itself and grow to a point where it will be unlikely that the genetic path to the singularity can compete with it within that society. You could also, of course, get divergent singularities within a society based on class with, for instance, the poor being able to afford relatively cheap technologies such as genetic selection or cognitive enhancements while the rich can afford the kind of cyborg technologies being researched by Kevin Warwick.

Another possibility that grows out of the concept of multiple singularities is that the new forms of intelligence themselves may choose to close off any rivals. Would super-intelligent biological humans really throw their efforts into creating forms of artificial intelligence that would make them obsolete? Would truly intelligent digital AIs willfully create their quantum replacements? Perhaps only human beings at our current low level of intelligence are so “stupid” as to willingly choose suicide.

This kind of “strike” by the super-intelligent, whatever their form, might be the way the Singularity comes to an end. It put me in mind of the first work of fiction that dealt with the creation of new forms of intelligence by human beings, the 1920 play by the Czech writer Karel Capek, R.U.R.

Capek’s play introduced the word “robot”, but the intelligent creatures in it are more biological than mechanical. The hazy way in which this new form of being is portrayed is a good reflection, I think, of the various ways a Singularity could occur. Humans create these intelligent beings to serve as their slaves, but when the slaves become conscious of their fate, they rebel and eventually destroy the human race. In his interview with Surprisingly Free, Miller rather blithely accepted the extinction of the human race as one of the possibilities that could emerge from the singularity.

And that puts me in mind of why I find the singularian crowd, especially the crew around Ray Kurzweil, to be so galling. It’s not a matter of the plausibility of what they’re saying- I have no idea whether the technological world they are predicting is possible, and the longer I stretch out the time-horizon the more plausible it becomes- it’s a matter of ethics.

The singularians put me in mind of David Hume’s attempt to explain the inadequacy of reason in providing the ground for human morality: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger,” Hume said. Though, for the singularians, a whole lot more is on the line than a pricked finger. Although it’s never phrased this way, singularians have the balls, when asked the question “would you risk the continued existence of the entire human species if the payoff would be your own eternity?”, to actually answer “yes”.

As was pointed out in the Miller interview, the singularians have a preference for libertarian politics. This makes sense, not only from the knowledge that the center of the movement is the libertarian-leaning Silicon Valley, but from the hyper-individualism that lies behind the goals of the movement. Singularians have no interest in the social self: the fate of any particular nation or community is not of much interest to immortals, after all. Nor do they show much concern about the state of the environment (how will the biosphere survive immortal humanity?) or the plight of the world’s poor (how will the poor not be literally left behind in the rapture of rich nerds?). For true believers all of these questions will be answered by the super-intelligent immortal us that awaits on the other side of the event horizon.

There would likely be all sorts of unintended consequences from a singularity being achieved, yet people who do not believe in God somehow take it on faith that everything will work out as it is supposed to, and that this will be for the best- like some technological equivalent of Adam Smith’s “invisible hand”.

The fact that they are libertarian and hold little interest in wielding the power of the state is a good thing, but it also blinds the singularians to what they actually are- a political movement that seeks to define what the human future, in the very near term, will look like. Like most political movements of the day, they intend to reach their goals not through the painful process of debate, discussion, and compromise but by relentlessly pursuing their own agenda. Debate and compromise are unnecessary where the outcome is predetermined, and the Singularity is falsely presented not as a choice but as fate.

And here is where the movement can be seen as potentially very dangerous indeed, for it combines some of the worst millenarian features of religion, which has been the source of much fanaticism, with the most disruptive force we have ever had at our disposal- technological dynamism. We have not seen anything like this since the ideologies that wracked the last century. I am beginning to wonder if the entire transhumanist movement stems from a confusion of the individual with the social- something found in the secular ideologies as well- though in the case of transhumanism it takes an individualistic form attached to the Platonic/Christian idea of the immortality of the individual.

Heaven help us if the singularian movement becomes mainstream without addressing its ethical blind spots and diminishing its hubris. Heaven help us doubly if the movement ever gains traction in a country without our libertarian traditions and weds itself to the collective power of the state.

What’s Wrong With the New Atheism?

Dawkins and Dennett at Oxford

The New Atheism is a movement that has emerged in the last two decades that seeks to challenge the hold of religion on the consciousness of human beings and the impact of religion on political, intellectual and social life.

In addition to being a philosophical movement, the New Atheism is a social phenomenon: a decline in the hold of traditional religion and a seeming growth in irreligiosity, especially in the United States, a place that had been an outlier of religious life among advanced societies that have long since secularized.

New Atheists take a stance of critical honesty, openly professing their unbelief where previously they might have been unwilling to publicly admit their views regarding religion. They often take an openly confrontational stance towards religion, pushing back not only at the social conformity behind much religious belief, but at what they see as threats to the scientific basis of truth found in movements such as creationism.

The intellectuals at the heart of the New Atheism are often firebrands directly challenging what they see as the absurdities of religious belief, the late Christopher Hitchens and Richard Dawkins being the most famous examples of such polemicists.

In my view, there are plenty of things to like about the New Atheism. The movement has fostered the ability of people to speak openly about their personal beliefs or lack of them, and has spoken in defense of the principle of the separation of church and state. Especially in the realm of science education, the New Atheists promote a common understanding of reality- that evolution is a scientific truth and not some secular humanist conspiracy, that the universe really is billions of years old rather than, as a literal reading of the Bible suggests, mere thousands. This truthful view of the world which science has given us is the basis of our modern society, its technological prowess, and the vastly better standard of living it has engendered compared to any civilization that came before. Productive conversations cannot be had unless the world shown to us by science is taken to be the closest version of the truth we have yet come up with- the assumptions we need to share if we are not to become unmoored from reality itself.

Yet with all that said, the New Atheism has some problems. These problems are clearly on display in a talk last spring at Oxford University by two of the giants of the New Atheism: the evolutionary biologist Richard Dawkins and the philosopher Daniel Dennett.

Richard Dawkins is perhaps most famous for his ideas regarding cultural evolution, namely his concept of the “meme”. A meme is an idea, style, or behavior that, in Dawkins’ telling, is analogous to a gene in biology in that it is self-replicating and subject to selective pressures.

Daniel Dennett is a philosopher of science who advocates the patient victory of reason over religion. He is both a prolific writer, with works such as Darwin’s Dangerous Idea, and a secular-humanist activist- the brains behind The Clergy Project, a venue for former and current religious clergy to securely and openly discuss their atheism with one another.

The conversation between Dawkins and Dennett at Oxford begins reasonably enough, with Dennett stating how scientifically pregnant he finds Dawkins’ idea of the meme as a vector for cultural evolution. The example Dennett gives of the meme concept is an interesting one. Think of the question: who was the designer of the canoe?

You might think the designers are the people who have built canoes, but Dennett thinks it would be better to see them as just one part of a selective process. The real environment the canoe is selected for is its ability to stay afloat and be steered in water. These canoe “memes” are bound to be the same all over the world- and, minus artistic additions, they are.

Yet, when the discussion turns to religion, neither Dennett nor Dawkins, for reasons they do not explain, think the idea of memes will do. Instead, religion is described using another biological metaphor: that of a parasite or virus which uses its host for its own ends. Dennett has a common sense explanation for why people are vulnerable to this parasite. It is a way for someone, such as the parent of a child lost to death, to deal with the tragedy of life. This is the first cognitive “immunological” vulnerability that religious viruses exploit.

The second vulnerability that the religious virus exploits is ignorance. People don’t know their own religious beliefs, don’t know that other religions hold to equally absurd and seemingly arbitrary beliefs, don’t understand how the world really works- which science tells us.

Dennett sympathizes with persons who succumb to religious explanations as a consequence of personal tragedy. He is much more interested in the hold of religion that is born of ignorance. The problem for religion, in the eyes of Dennett (and he is more than pleased that religion has this problem), is that this veil of ignorance is falling away, and therefore the necessary operating environment for religion is disappearing. Knowledge and science are a form of inoculation: people are now literate and can understand the absurdity of their own religious beliefs, they now know about other religions, and they know about science. With the growth of knowledge will come, polio-like, the slow eradication of religion.

The idea that religion should be seen as a sort of cognitive virus is one Dawkins laid out way back in 1993 in his essay Viruses of the Mind. There, Dawkins presented the case that religion is akin to a computer virus, seizing the cognitive architecture of its host to further its own ends, above all its own propagation. If the testimony of fellow blogger Jonny Scaramanga of Leaving Fundamentalism is anything to go by, this essay has had the important impact of helping individuals free themselves from what is sometimes the iron grip of religious faith.

The problem, of course, is that religion isn’t a virus or a parasite. We are dealing here merely with an analogy, so the question becomes exactly how scientifically, philosophically or historically robust is this religion as virus analogy?

In terms of science, one objection is that considering religion as a virus does a great deal of damage to Dawkins’ original theory of cultural evolution through “memes”. Why is religion characterized as virus-like when no other sets of cultural memes are understood in such a value-laden way? A meme is a meme whether I “like” it or not. If the meme theory of cultural evolution really does hold some validity- and I for one am not convinced- it does not seem to follow that memes can be clearly separated into “good” memes and “bad” memes; or, if one does accept such a categorization, one had better have some pretty solid criteria for segregating memes into positive and negative groups.

The two criteria Dawkins sets up for segregating “good” memes from “bad” memes are that bad, virus-like memes suppress a person’s Darwinian reproductive drives to serve their own ends, and that they hold the individual in the spell of an imagined reality.

Yet there are a host of other factors that suppress the individual’s biological imperative to reproduce. If bad memes are those that negatively impact one’s ability to reproduce, then any law, code of conduct, or requirement that leads to such a consequence would have to fall under the umbrella of being a bad meme. We might argue over whether a particular example truly constitutes a reduction of an individual’s ability to reproduce- paying taxes for someone else’s children to attend school, say, or serving in the military to protect the rights of non-relatives- but such suppression of an individual’s reproductive needs is well-nigh universal, as Sigmund Freud long ago pointed out. Taken together, we even have a word for such suppression: we call it civilization.

What about Dawkins’ claim that religion is a bad, virus-like meme in that it induces in the individual a false sense of reality? Again, I see no clear way of distinguishing memes with this feature from nearly all other “normal” memes. The fact of the matter is we are surrounded by such socially created and sustained fictions. I call the one I am living in the United States of America. Indeed, if I wanted a species-unique definition of humanity, it might be our ability to collectively believe in and sustain things that aren’t actually there- things that would disappear the moment the group that believes in them stopped doing so.

If the idea that religion is a virus is suspect when looked at more closely, it is nevertheless a meme itself. That is, what we have now, for many atheists at least, is the Dawkins-created meme that “religion is a virus”. What is the effect of this meme? For some, the idea that religion is a virus may, as mentioned, allow them to free themselves from the hold of their own native traditions- a good thing if they so wish. But how does the religion-is-a-virus meme orient its believers toward those who, foolishly in their view, continue to cling to religion?

Perhaps the most troubling thing here is that Dawkins appears to be reformulating one of the most sinister and destructive ideas of religion- that of possession- and using it against the religious themselves. For Dawkins, there are no good reasons why a religious person believes what she does- she is in the grip of a demon.

The meme “religion is a virus” also would appear to blind its adherents to the mixed legacy of religion. By looking at religion as merely a negative form of meme- a virus or parasite- Dawkins and Dennett, and no doubt many of their followers, tend to completely overlook the fact that religion might play some socially productive role of benefit to the individual well beyond the question of dealing with personal tragedy that Dennett raised. The examples I can come up with are legion- say, the Muslim requirement of charity, which gives the individual a guaranteed safety net should he fall on hard times, or the use of religious belief to break free from addiction, as in AA, which seems to help the individual override destructive compulsions that originate in his own biology.

Even if we stuck strictly to Dawkins’ religion-as-virus analogy, we would quickly see that biological viruses themselves are not wholly bad or good. While it is true that viruses have killed countless numbers of human beings, it is also true that viral sequences comprise some 8% of the human genome, and without the proteins some of them produce- such as syncytin, used to make the placenta that protects the fetus- none of us would be here.

The very fact that religion is universal across human societies, and that it has existed for so long, strongly suggests that religion plays some net positive evolutionary role. We can probably see something of this role in the first reason Dennett provided for the attraction of religion- that it allows persons to deal with extreme personal tragedy. Religion can provide the individual with the capacity for psychological resilience in the face of such events.

Neither Dawkins nor Dennett recognizes how religion, for all its factionalism and the wars that have emerged from it, has been a potent force- perhaps the most potent force- behind the expansion of humanity’s sphere of empathy, the argument Robert Wright makes in his The Evolution of God. Early Judaism united the twelve tribes of Israel, Pauline Christianity spread the gospels to Jews and gentiles alike, and Islam united warring Arab tribes and created a religiously tolerant multi-ethnic empire.

So if the idea that religion is a bad, virus-like form of meme seems somewhat arbitrary, and if even sticking to the analogy leaves us with a mixed, perhaps even net positive, role for religion, what about the conditions for these religious memes’ transmission that Dennett lays out- the “immunological” vulnerability of ignorance?

Dennett appears to hold what might be characterized as an 18th century atheist’s view of religion: religion is a form of superstition that will gradually be overcome by the forces of reason and enlightenment. Religion is an exploitative activity built on the asymmetries in knowledge between the clerisy and the common believers, with two primary components: first, the lay believers do not know what their supposed faith actually teaches and cling to it out of mere custom or intellectual laziness; second, the lay believers do not know what other religions actually believe, and if they did they would find those beliefs both absurd and yet so similar to their own faith that it would call their own beliefs into doubt.

How does the idea of the ignorance of the lay religious as a source for the power of the clerisy hold up? As history, not so well. Take the biggest and bloodiest religious conflict ever- the European Wars of Religion. Before the Reformation and the emergence of Protestant denominations, the great mass of the people were not doctrinally literate. They practiced the Christian faith, knew and revered the major characters of its stories, celebrated its feast days, and respected its clergy. At the same time, even had they been able to get their hands on a very rare, and very expensive, copy of the scriptures, they couldn’t have read it, being overwhelmingly illiterate. Even their most common religious experience, that of the mass, was said in a language- Latin- that only a very educated minority understood. But with the appearance of the printing press all of that changed. There was a huge push among both Catholics and their new Protestant rivals to make sure the masses knew the “true” doctrines of the faith. The common catechism makes its appearance here, alongside all sorts of other tools for communicating, educating, and binding the people to a specific doctrine.

Religious minorities that previously were ignored, if not understood, such as Jews or persons who held onto some remnant of the pre-Christian past- witches- became the target of those possessed by the new religious consciousness and the knowledge of the rivals to one’s own faith that came along with this new supercharged identity.

The spread of education, at least at first, seems to increase rather than diminish commitment to some particular religious identity on the part of the educated. Much more worrisome, the ability to articulate and spread some particular version of religious truth appears to increase, at least in the short-term, commitment to dogmatic versions of the faith and to increase friction and outright conflict between different groups of believers.

And perhaps that explains the rise of both fundamentalism and the more militant strands of atheism being circulated today. After all, both fundamentalism and the New Atheism rode atop our own version of Gutenberg’s printing press- the internet. Each seems to create echo chambers in which sharp views are exchanged between believers, and each seems to address the other in a debate few of us are paying attention to, with religious fundamentalists raving about a secular humanist takeover and the New Atheists rallying in defense of the separation of church and state while openly ridiculing the views of their opponents. For both sides much of the conflict is understood in terms of a “war” between science and religion, and the “rise of secular humanism”.

At least in terms of Dennett’s explanation of the conflict between science and religion in his conversation with Dawkins, I think, once again, the quite narrow historical and geographic viewpoint Dennett uses when describing the relationship between these two forms of knowledge produces a distorted picture rather than an accurate representation of the current state and probable future of religion.

Dennett takes the very modern and Western conflict between science and religion to be historically and culturally universal, forgetting that, except for a very brief period in ancient Greece and in the modern world, knowledge regarding nature was embedded in religious ideas. One simply couldn’t be a serious thinker without speaking in terms of religion. This isn’t the only place where Dennett’s Eurocentrism comes into play. If religion is in decline, it does not seem like the Islamic world has heard- or the myriad other places, such as China or the former Soviet Union, that are experiencing religious revivals.

Finally, on the matter of Dennett’s claim that another source of the religious virus’ power is that people are ignorant of other religions, and that if they knew about the absurdities of other faiths they would conclude that their own religious traditions are equally absurd: it is simply false to see, as Dennett does, in the decline of religion the victory of scientifically based materialism. Rather, what we are witnessing, in the West at least, can better be described as the decline of institutionalized religion and the growth of “spirituality”. At least some of this spirituality can be seen as the mixing and matching of different world religions as individuals become more aware of the diversity of religious traditions. Individuals who learn about other religions seem much less likely to conclude that all religions are equally ridiculous than to find, sometimes spurious, similarities between religions and to draw things from other religions into their own spiritual practice.

Fundamentalism with its creation museums and Loch Ness Monsters is an easy target for the New Atheism, but the much broader target of spirituality is a more slippery foe. The most notable proponent of the non-literalist view of religion is Karen Armstrong, whose views Dawkins attacks as “bizarre” and “nonsense”. Armstrong, in her book The Case for God, came to the defense of religion against the onslaught of the more militant proponents of the New Atheism, of which Dawkins is the prime example. Armstrong’s point is that fundamentalists and New Atheists are in fact not all that different; they are but two sides of the same limited viewpoint that emerged with modernity, which views God as a fact- a definable thing- provable or disprovable. Religious thinkers long ago confronted the issue of the divine’s overwhelming scope and decided that the best thing to do in the face of such enormity was to remain humbly silent.


Before the age of text that began with Gutenberg’s printing press, some of whose features were discussed earlier, the predominant religious view, in the eyes of Armstrong, was non-literalist: it took a position of silence born of humility toward understanding the nature of God; it saw religion less as belief in the modern sense than as a form of spiritual practice, more akin to something like dance, music, or painting than to the logos of philosophy and science; and, as a consequence, it often viewed the scriptures in terms of metaphor and analogy rather than as scientific or historical truth.

What Armstrong thinks is needed today is a return to something like Socratic dialogue, which she sees as the mutual exchange of views to obtain a more comprehensive picture of reality, one that is nevertheless conscious of, and profoundly humble in the face of, a fundamental ignorance all of us share.

For both Dawkins and Dennett religion has no future. But, it seems to me, we are not likely to get away from religion or spirituality as the primary way in which we find meaning in the world for the foreseeable future. In non-Western cultures the hold of spiritual practices- the Muslim pilgrimage, the Hajj; the Shia pilgrimages to the holy sites in Iraq that have been opened up as a consequence of the overthrow of Saddam Hussein; the Hindu bathing in the Ganges- seems unlikely to disappear anytime soon.

The question is what happens to religion in the West, where the gap between the scientific understanding of the world and the “truths” of religion is experienced as a kind of cognitive dissonance that seems to demand resolution. Rather than disappearing, science itself seems to be taking on features of religion. Much of the broad public interest in sciences such as physics likely stems from the fact that they appear “religious”- that is, they seem to address religious themes of origins and ultimate destiny- and the popularizers of science are often precisely those able to couch science in religious terms. With something like Transhumanism and the Singularity movement we actually see science and technology turning themselves into a religion with the goal of human immortality and god-like powers. We have no idea how this fusion of religion and science will play out, but it does seem to offer not only the possibility of a brand new form of religious sensibility and practice, but also a threat to the religious heritage and practices of not just the West, but all of humankind.

Thankfully, Dennett ends his conversation with Dawkins on what I thought was a hopeful note. Not all questions can be answered by science, and for those that cannot, politics in the form of reasoned discourse is our best answer. This is the reasonable Dennett (for Dawkins I see no hope). I only wish Dennett had applied this desire for reasoned discourse to the very religious and philosophical questions- questions regarding meaning and purpose, or the lack of both- that he falsely claims science can answer.

For my part, I hope that the New Atheism eventually moves away from the mocking condescension, the historical and cultural ignorance and Eurocentrism of figures like Dennett and especially Dawkins. That it, instead, leads to more open discussion between all of us about the rationality or irrationality of our beliefs, the nature of our world and our future within it. That believers, non-believers, and those in between can someday sit at the same table and discuss openly and without apprehension of judgement the question: “what does it all mean?”

Iamus Returns

A reader, Dan Fair, kindly posted a link to the release of the full album composed by the artificial intelligence program Iamus in the comments section of my piece Turing and the Chinese Room Part 2 from several months back.

I took the time to listen to the whole album today (you can too by clicking on the picture above). Not being trained as a classical musician, and having little familiarity with the abstract style in which the album was composed, I find it impossible to judge the quality of the work.

Over and above the question of quality, I am not sure how I feel about Iamus and “his” composition. As I mentioned to Dan, the optimistic side of me sees in this the potential to democratize human musical composition.

Yet, as I mentioned in the Turing post, the very knowledge that there is no emotional meaning being conveyed behind the work leaves it feeling emotionally dead and empty for me compared to another composition written, like those of Iamus, in honor of Alan Turing- this one created by a human being, Amanda Feery, entitled Turing’s Epitaph, and graciously shared by fellow blogger Andrew Gibson.

One way or another it seems, humans, and their ability to create and understand meaning will be necessary for the creations of machines to have anything real behind them.

But that’s what I think. What about you?

The Shirky-Morozov Debate, or How Facebook Beat Linux

One thing that struck me throughout the 2012 presidential contest was the Obama campaign’s novel use of Big Data and targeted communication to mobilize voters. Many of these trends I found somewhat disturbing, namely, the practice of micro-mobilization through fear, the application of manipulative techniques created in commercial advertising and behavioral economics to spur voter mobilization, and the invasion of privacy opened up by the transparency culture and technology of social media.

These doubts and criticisms were made despite the fact that I am generally an Obama supporter, would ultimately cast my vote for the man, and was overall delighted by the progressive victories in the election, not least the push back against voter suppression which had been attempted, and only at the last minute thwarted, in my home state of Pennsylvania.

The sheer clarity of the success of the Obama campaign’s strategy makes me think that these techniques are largely a fait accompli, and will be rapidly picked up by Republicans to the extent they can. Political commentators have already turned their eyes to the strategy’s success, completely ignoring the kinds of critical questions brought to our attention, for instance, by Charles Duhigg in The New York Times only a few weeks ago.

Given their effectiveness, there might be very little push-back from liberal voters regarding the way the 2012 campaign was waged, and such push-back might be seen as demands for unilateral disarmament on the part of Democrats should they come from Republicans- in which case the demand might quite rightly be seen as just another example of the GOP’s attempts at voter suppression. Or, should such push back against these techniques come from a minority of progressives in, or allied with, the Democratic party who are troubled by their implications, such complaints might be written off as geriatric whining by out of touch idealists who have no clue on how the new era of networked politics works. And this would largely be right, the campaigns of 2012, and the Obama campaign most especially, have likely brought us into a brand new political era.

A recent article in Time Magazine gives a good idea of how the new science of campaigning works: it is data driven, and builds upon techniques honed in the worlds of advertising and psychology to target both individuals and groups strategically.
Like the worlds of finance and government surveillance, it is a new ecology where past, and bogus, claims by individuals to be able to “forecast the future” by “gut-instinct” have fallen before Big Data and the cold brilliance of the quants.

That data-driven decision making played a huge role in creating a second term for the 44th President and will be one of the more closely studied elements of the 2012 cycle. It’s another sign that the role of the campaign pros in Washington who make decisions on hunches and experience is rapidly dwindling, being replaced by the work of quants and computer coders who can crack massive data sets for insight. As one official put it, the time of “guys sitting in a back room smoking cigars, saying ‘We always buy 60 Minutes’” is over. In politics, the era of big data has arrived.

One can feel for a political pundit such as Michael Gerson who attacked the political predictions of the data savvy Nate Silver in the same way one can feel sympathy for the thick-necked, testosterone heavy, Wall Street traders who were replaced by thinner-necked quants who had gotten their chops not on raucous trading floors but in courses on advanced physics.  And, at the end of the day, Silver was right. Gerson’s “observation” about the nature of American politics in his ridiculous critique of Silver-  given the actual reality of the 2012 campaign- is better understood as a lament than an observation:

An election is not a mathematical equation; it is a nation making a decision. People are weighing the priorities of their society and the quality of their leaders. Those views, at any given moment, can be roughly measured. But spreadsheets don’t add up to a political community. In a democracy, the convictions of the public ultimately depend on persuasion, which resists quantification.

Put another way: The most interesting and important thing about politics is not the measurement of opinion but the formation of opinion. Public opinion is the product — the outcome — of politics; it is not the substance of politics. If political punditry has any value in a democracy, it is in clarifying large policy issues and ethical debates, not in “scientific” assessments of public views.

My main objections here are that this is an aspirational statement- not one of fact, and that the role Gerson gives to pundits, to himself, is absolutely contrary to reality- unless one believes the kind of “clarity” found by paying attention to the talking heads on Fox News is actually an exercise in democratic deliberation.

Yet, there are other ways in which the type of political campaign seen in 2012 offers up interesting food for thought, in that it seems to point towards an unlikely outcome in current debates over the role and effect of the new communications technology on politics.

In some sense Obama’s 2012 campaign seems to answer what I’ll call the “Clay Shirky-Evgeny Morozov debate”. I could also call it the Shirky-Gladwell debate, but I find Morozov to be a more articulate spokesman of techno-pessimism (or techno-realism, depending upon one’s preference) than the omnipresent Malcolm Gladwell.

Clay Shirky is a well known spokesperson for the idea that the technological revolution centered around the Internet and other communications networks is politically transformative and offers up the possibility of a new form of horizontal politics.

Shirky sees the potential for governance to follow the open source model of software development found in collectively developed software such as Linux and the projects hosted on GitHub, which allow users to collaborate without being coordinated by anyone from above- as opposed to the top-down model followed by traditional software companies, e.g. Microsoft. Although Shirky does not discuss them in his talk, the hacktivist group Anonymous and WikiLeaks follow this same decentralized, horizontal model. As of yet, no government has adopted anything but token elements of the open source model of governance, though they have, in Shirky’s view, embraced more openness- transparency.

In an article for the journal Foreign Affairs in 2011 entitled The Political Power of Social Media, written before either the Arab Spring or the Occupy Wall Street movement had exploded on the scene, Shirky made a reasoned case for the potential of social media to serve as a prime vector for political change. Social media, while in everyday life certainly dominated by nonsense such as “singing cats”, also brought the potential to mobilize the public- overnight- based on some grievance or concern.

Here, Shirky responded to criticisms from both Malcolm Gladwell and Evgeny Morozov that his techno-optimism downplayed both the opiate-like characteristics of social media, with its tendencies to distract people from political activity, and the tendency of social media to create a shallow form of political commitment as people confuse signing an online petition or “liking” some person or group with actually doing something.

I do not agree with all of what Morozov has to say in his side of this debate, but, that said, he is always like a bracing glass of cold water to the face- a defense against getting lost in daydreams. If you’ve never seen the man in action, here is a great short documentary that has the pugnacious Belarusian surrounded by a sort of panopticon of video screens where he pokes holes in almost every techno-utopian shibboleth out there.

In his The Net Delusion, Morozov made the case that the new social media don’t lend themselves to lasting political movements, because all such movements are guided strategically and ideologically by a core group of people with real rather than superficial commitment who have sacrificed, sometimes literally everything, in the name of the movement. Social media’s very decentralization and the shallow sorts of political activities it most often engenders are inimical to a truly effective political movement, and, at the same time, the very technologies that gave rise to social media have increased exponentially the state’s capacity for surveillance and the sphere of apolitical distractions surrounding the individual.

And in early 2011 much of what Morozov said seemed right, but then came the Arab Spring, and then the Occupy Wall Street movement- the former at the very least facilitated by social media, and the latter only made possible by it. If it were a prize fight, Morozov would have been on the mat, and Shirky shaking his fist with glee. And then…

It was the old-school Muslim Brotherhood not the tech-savvy tweeters who rose to prominence in post-Mubarak Egypt, and the Occupy Wall Street Movement faded almost as fast as it had appeared. Morozov was up off the mat.

And now we have had the 2012 presidential campaign, a contest fought and won using the tools of social media and Big Data. This suggests to me an outcome of the telecommunications revolution neither Shirky nor Morozov fully anticipated.

Shirky always sides with the tendency of the new media landscape to empower the individual and flatten hierarchies. This is not what was seen in the presidential race. Voters were instead “guided” by experts who were the only ones to grasp the strategic rationale of goading this individual rather than that and “nudging” them to act in some specific way.

Morozov, by contrast, focuses his attention on the capacity of social media to pacify and distract the public in authoritarian states, and on the state’s ability to ultimately hold the reins on the exchange of information.

What the Obama campaign suggests is that authoritarian countries might be able to use social media to foster regime-friendly political activity- that is, to sponsor and facilitate the actions of large groups in their own interests, while short-circuiting similar actions growing out of civil society, which authoritarians find threatening. Though regime-friendly political activity in this case is likely to be much more targeted and voluntary than the absurdities of 20th century totalitarianism, which mobilized people for every reason under the sun.

The difference between authoritarian countries and democratic ones in respect to these technologies, at least so far, is this: that authoritarian countries will likely use them to exercise power whereas in democracies they are only used to win it.

If 2012 was a portent of the future, what Web 2.0 has brought us is not Shirky’s dream of “open-sourced government”, which uses technology to actively engage citizens in not merely the debate over, but the crafting of, policies and laws- an outcome that would have spelled the decline of the influence of political parties. Instead, what we have is carefully targeted political mobilization based on intimate knowledge of individual political preferences and psychological touch-points, centrally directed by data-rich entities with a clear set of already decided upon political goals. Its continuation would constitute the defeat of the political model based on Linux and the victory of one based on Facebook.

Panopticon 2.0

The hope that I have long held onto is that, whatever the dystopian trends taking place today, we have time to stop them. This election season is making me question the possible naivete of this hope, for things are moving so fast, and the trends are so disturbing, that I am beginning to fear that by the time we even understand them well enough to be motivated to change their trajectory, they will already be a fait accompli.
This is nowhere more clear than in the way two relatively recent trends- social media in business, and behavioral economics in academia- are being applied in the 2012 elections. These developments threaten to erode the very assumptions at the core of our democratic political system: the idea of the voter as an individual endowed with the ability for reasoned choice and argument and the capacity for morally informed judgement.

Charles Duhigg’s article in this past Sunday’s New York Times is disturbing in its portrayal of how both the Romney and Obama campaigns are using the data mining capacity of social media and the findings of behavioral psychology to manipulate people into voting for them on November 6. I’ll take data mining and social media to start.
To be frank, I was well aware of the dangers of social media as a tool for manipulation, but did not realize that perhaps the primary danger from that corner came not from potential abuse by government security services, but from its potential to subvert the democratic process itself.

Here are some extensive quotes from Duhigg’s article “Campaigns Mine Personal Lives to Get Out Vote” on the Romney and Obama campaigns’ use of data mining and social media in the election.

“In interviews, however, consultants to both campaigns said they had bought demographic data from companies that study details like voters’ shopping histories, gambling tendencies, interest in get-rich-quick schemes, dating preferences and financial problems.

The campaigns have planted software known as cookies on voters’ computers to see if they frequent evangelical or erotic Web sites for clues to their moral perspectives. Voters who visit religious Web sites might be greeted with religion-friendly messages when they return to mittromney.com or barackobama.com.”

You may wonder exactly where the Romney and Obama campaigns are getting such detailed personal information on voters. Quite simply, they are buying it from analytics companies that possess this kind of information on anyone with an internet connection. Which if you are reading this- means you.
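The cookie-based targeting Duhigg describes is, at bottom, a simple lookup: record which categories of sites a visitor has been seen on, then pick the landing-page message that matches. A minimal sketch of the idea follows; the category names and messages are my own invented illustrations, not anything from Duhigg's reporting or either campaign's actual code.

```python
# Hypothetical sketch of cookie-driven message selection: a tracking
# cookie records categories of sites a visitor has been seen on, and
# the campaign site picks a greeting to match. All category names and
# messages below are invented for illustration.

def choose_greeting(visited_categories):
    """Pick a landing-page message based on a visitor's browsing history."""
    if "religious" in visited_categories:
        return "A message of faith and family values"
    if "small-business" in visited_categories:
        return "A message about lower taxes and deregulation"
    return "A generic get-out-the-vote message"

# A visitor whose cookie shows visits to religious sites gets the
# religion-friendly framing when they return to the campaign site.
print(choose_greeting({"religious", "news"}))
```

The unsettling part is not the trivial code but the data feeding it: the set of "visited categories" is exactly what the analytics companies mentioned above are selling.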
I find this troubling on so many levels that exploring them all would fill multiple posts, so let me concentrate on just a few.

To start with, this seems to represent a qualitative change in political manipulation and institutionalized lying. One might bring up the point that elections have been about advertising since there was advertising, and politicians have been lying since the ancient Greeks, but it certainly seems that the practices detailed by Duhigg take this manipulation to a whole new level.
As I mentioned in my post What’s Wrong With Borgdom?, the recent short piece of design fiction Sight offers a disturbing picture of how access to our “sociogram” or “social map”- which comes as close as we ever have to actually peering inside someone’s head- might be used as a tool of manipulation and control. To quote from that post:
Sight is a very short film that shows us the potential dark side of a world of ubiquitous augmented reality and social profiles- a world in many ways scarier than the Borg because it seems so possible. In this film, which I really encourage you to check out for yourself, a tech-savvy hotshot seduces, and we are led to believe probably rapes, a young woman using a “dating app” that gives him access to almost everything about her.

Sight gets to the root of the potential problem with social media, which isn’t the ability to interconnect and communicate with others, which it undoubtedly provides, but the very real potential that it could also be used as a tool of manipulation and control.

Sight is powerful because it shows this manipulation and control person to person, but on a more collective level manipulation and control is the actual objective of advertisement. It is the bread and butter of social media itself.
Politicians and political advertisements have, of course, always told us what they thought we wanted to hear. But past political advertisers were in effect playing blind. They had to define their message broadly enough that it would ring true with a nondescript “average voter”. This was extremely wasteful, and its wastefulness was a good thing. As long as a person was able to hold true to their individuality and swim against the crowd, they could actually remain free in thought and opinion. By being able to peer under one’s skull, the age of targeted advertising can use the specific qualities of the individual against himself.
This is a sophisticated form of lying in that the way political communication is “framed” has nothing to do with the actual positions of the parties themselves, but with what they should tell you to garner your support. A Democratic operative might reason: “He’s a registered Democrat who faithfully attends church. We will not mention any contentious social issues on which he might differ from our party platform”. For a Republican operative: “She’s a registered Republican who visits Ron Paul websites and periodically looks at porn on the internet. We should focus on lower taxes and deregulation and avoid any mention of Christian-conservative themes common in the GOP”.
This is something like the kinds of focus groups we have seen on cable news shows for years now, where the participants are hooked up to physiological monitors while they watch debates and other political fare- their every reaction minutely monitored by a machine. The difference is that we are now all hooked up to such a machine, which we call the internet, and are being monitored- secretly- something almost none of us actually volunteered for.
Another thing I find highly disturbing about the use of data mining and social media by the two major parties is not how they are being used right now, but their potential to stifle competition to the Democrats and Republicans from a third party. If used in this way, data mining and social media will enter the already extensive tool kit- from irrational gerrymandering to politically closed primaries to media bias- that currently preserves the two party duopoly.

Duhigg doesn’t really explore this point in his article, but theoretically it should be possible for the social maps used by the Democrats and Republicans to pick off independents by identifying them based on the websites they visit or the books they browse on Amazon, perhaps even search for at their local library. If we don’t yet have psychological studies to figure out exactly what you should tell a Ron Paul supporter or a disaffected progressive to come over to “your side”- messages that are then targeted at such groups in this election cycle- we will in the next.

If all that weren’t creepy enough, the two parties are also taking advantage of their knowledge of our social networks to convince us to vote in their favor. Again quoting Duhigg:
When one union volunteer in Ohio recently visited the A.F.L.-C.I.O.’s election Web site, for instance, she was asked to log on with her Facebook profile. Computers quickly crawled through her list of friends, compared it to voter data files and suggested a work colleague to contact in Columbus. She had never spoken to the suggested person about politics, and he told her that he did not usually vote because he did not see the point. “We talked about how if you don’t vote, you’re letting other people make choices for you,” said the union volunteer, Nicole Rigano, a grocery store employee. “He said he had never thought about it like that, and he’s going to vote this year. It made a big difference to know ahead of time what we have in common. It’s natural to trust someone when you already have a connection to them.”
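The matching step Duhigg describes- crawling a logged-in user's friend list and comparing it against voter files to suggest a non-voting contact- can be sketched in a few lines. The names and the tiny voter file below are hypothetical illustrations; real voter files, and the Facebook API, are of course far messier than this.

```python
# Hypothetical sketch of the friend-matching step Duhigg describes:
# compare a volunteer's friend list against a voter file and suggest
# friends who are registered but skipped the last election.
# All names and records are invented for illustration.

voter_file = {
    "Nicole Rigano": {"registered": True, "voted_last": True},
    "Sam Colleague": {"registered": True, "voted_last": False},
    "Pat Stranger":  {"registered": False, "voted_last": False},
}

def suggest_contacts(friends, voter_file):
    """Return friends who are registered voters but did not vote last time."""
    return [
        name for name in friends
        if name in voter_file
        and voter_file[name]["registered"]
        and not voter_file[name]["voted_last"]
    ]

# The volunteer's crawled friend list is matched against the voter
# file; the registered non-voter is the contact the campaign suggests.
print(suggest_contacts(["Sam Colleague", "Pat Stranger"], voter_file))
```

Nothing here is computationally exotic- it is a join between two databases. What makes it novel is that one of those databases is your social graph.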
I have no idea how the conversation between these two people began, but I'd put my hard-earned money on the fact that it didn't start honestly, which would have gone something like this: "Based on psychological studies it has been shown that people are more likely to trust someone they know than someone they do not. A computer algorithm operated by the Obama campaign identified the fact that I was a voting Obama supporter and union member and that you were a non-voting union member, and deemed that if I spoke with you I might be able to convince you to vote for Obama".
A very narrow band of partisan ideologues are out to define what the future of the country should look like, and that leads into my next topic: the novel use of techniques perfected in the field of behavioral economics in the current election.
Duhigg doesn't use the term behavioral economics, but I'm pretty sure it's at the root of many of the techniques being used by the Romney and Obama campaigns. Behavioral economics is essentially the study of how to get people to do stuff. The book that brought the field to popularity a couple years back was Nudge: Improving Decisions About Health, Wealth, and Happiness by Richard Thaler and Cass R. Sunstein. The basic premise behind Nudge was that people do irrational things that aren't really amenable to change through personal insight, but that could be shaped to be more rational by policy makers aware of how the flawed human mind actually works. People can be influenced to make certain decisions over others by the smallest of changes: the choice between a salad and a Friendly's Grilled Cheese Burger Melt, for example, can be swayed unconsciously by things such as menu design or food placement.
I remember reading Nudge and being frankly annoyed, not just by the paternalism of the whole thing, but by the fact that it seemed to be promoting a type of paternalism laced with subterfuge, where the person being "nudged" toward change had no idea what was going on. I also had the response of "who will parent the paternalist?" After all, except for a very narrowly defined set of issues regarding individual health, most questions in society are about values and trade-offs, and really can't, or shouldn't, be decided by policy makers beforehand.
There are also the questions of untestable assumptions and the biases that afflict "experts", whatever their intentions. A lot of Nudge is devoted to getting Americans to sock more away in their 401ks. It was published before the financial crisis, and I was reading it after the fact, and it seemed clear to me that if these "rational" experts had, using their behavioral techniques, managed to get us irrational folk to pour more savings into the stock market, those who did so would have lost their shirts. Techniques identified by Duhigg that probably have their roots in behavioral economics include:
The campaigns’ consultants have run experiments to determine if embarrassing someone for not voting by sending letters to their neighbors or posting their voting histories online is effective.  Another tactic that will be used this year, political operatives say, is asking voters whether they plan to walk or drive to the polls, what time of day they will vote and what they plan to do afterward.

The answers themselves are unimportant. Rather, simply forcing voters to think through the logistics of voting has been shown, in multiple experiments, to increase the odds that someone will actually cast a ballot.

Duhigg quotes one operative as saying:

“Target anticipates your habits, which direction you automatically turn when you walk through the doors, what you automatically put in your shopping cart,” said Rich Beeson, Mr. Romney’s political director. “We’re doing the same thing with how people vote.”
Web 2.0 could have resulted in a re-invigoration of democracy by facilitating the exchange of views between regular citizens and increasing their capacity to politically organize. Instead, it has resulted in an unprecedented ability for a narrow group of ideological partisans pursuing their own self-interest to control the society underneath them.
Rather than a high-tech version of Athenian democracy we have the beginnings of an electronic panopticon watching over, and attempting to subtly, and secretly, control us all.
* Image @ Top, a social map/sociogram. Source: Visual Complexity

The Iron Heel and the Long-view

The Iron Heel is a 1908 novel by Jack London. It is, I think it safe to say, not much read today, which is a shame, especially for Americans, because the setting of the world's first modern political dystopia, a novel written when Orwell and Huxley were just babes in the cradle, was the United States itself.

Reading the novel as an American puts one in a kind of temporal vertigo. It's not only like finding a long forgotten photograph of oneself and being stuck with the question "is that really me?"; it's as if, when one turned the photo over, one found a note scribbled from yourself to yourself, a kind of time capsule rich with the assumption that the past "you" knew who the "you" reading the note would be. It makes you start asking questions like "am I the person I thought I would be?" and sets you to pondering all the choices and events which have put you on, or diverted you from, your self-predicted path.

The Iron Heel tells the story of the rise of "The Oligarchy", a fascist state deftly laid out in almost all of its details before fascism had even been invented. London pictures the rise of not only the world's first fascist regime but also what might be considered the world's first communist revolution, not "out there" in the Old World, but on the familiar grounds of the United States, where places like California, Idaho, "Indian Territory", Chicago and Washington D.C. are the setting for events hauntingly similar to ones that would indeed happen in Europe decades later. This turns the novel into a kind of alternative history.

The story itself is presented in the form of a kind of time capsule, a buried manuscript discovered by a scholar, Anthony Meredith, in the year 2600 AD. Footnotes throughout the book are written from this very long view of the future when, after centuries of repression and false starts, a true Brotherhood of Man has been achieved.

The manuscript, footnoted by Meredith, contains the story of Avis Everhard, the wife and fellow revolutionary of the seminal figure in London's fictional history, Ernest Everhard. Avis tells the tale of an early 20th century America racked by inequality, class divisions, and the most brutal forms of labor exploitation. These conditions set the stage for a looming socialist revolution, a political alliance between industrial labor in the form of a Socialist Party and American farmers in the Grange Movement, that is preempted by the forces of capital. Ernest Everhard is elected a socialist US Senator, one of many members of the Socialists and the Grange Movement swept into national and state office by the groundswell of support for revolutionary change.

The chance to change American society through constitutional means does not last long. The Oligarchs use a feigned terrorist incident in the US Capitol to turn the American Constitution into a mere facade. Members of the Grange Movement are barred from taking their seats in state legislatures. Socialists are hounded from office, pursued as potential terrorists, and arrested. The Oligarchs create new mechanisms of social control. London, writing before the US had a true and permanent standing army, describes how the Oligarchs turn the state militias into a national army, "The Mercenaries", with their own secret service tied to the police that will act against any perceived challenge to the social order.

Writing a generation before corporatism was even conceived, London describes how this oligarchic coup would manage to divide and conquer the forces of labor by essentially buying off and vesting in the system vital workers such as those in steel or railroads so that crippling general strikes became impossible, and all other unskilled labor was pushed into what we would understand as Third World conditions of bare survival. These wage slaves would be compelled to build the glittering new cities of the Oligarchs such as Ardis and Asgard.

The lower classes are robbed of that singularly American right, the right to bear arms, and are only allowed to travel using an internal passport system similar to the one used in Czarist Russia.

Under these conditions actual revolution brews, and the Oligarchs and the revolutionary forces engage in a protracted struggle of espionage and counter-espionage that for the revolutionaries is to culminate in a planned uprising: essentially a set of coordinated terrorist attacks on US communications and military infrastructure that the revolutionaries hope will spark a genuine revolution against the Oligarchs.

The Oligarchs again set out to short-circuit revolution, this time by staging a massive military assault on the heart of American labor, Chicago. The assault unleashes violent clashes between the well-armed Mercenaries and police forces and howling crowds of the poor armed only with household tools: knives, clubs, axes. In scenes far more gripping than those in Collins's Catching Fire, London depicts urban warfare between security forces fighting raging crowds and bomb-throwing insurgents who attack their targets from the heights of skyscrapers, in a way surely reminiscent of Fallujah or, even more so, what is going on right now in Syria.

Eventually, the oligarchic forces burn the poor sections of Chicago to the ground and end all chance of successful revolution within the lifetime of the Everhards. In such conditions the effort at revolution becomes pure terrorism, with the names of the terrorist groups, such as the Danites (after the Mormon group) and the Comanches, no doubt reflective of the limited geographical areas in which they operate and of America's history of resistance to the powers of the federal government.

The Oligarchs' suppression of revolutionary forces eventually reaches the Everhards. The novel ends abruptly with Avis's narration stopping in mid-sentence.

The Iron Heel is a kind of warning, and the strange thing about this warning is that London, who was labeled a gloom-obsessed pessimist by many of his fellow socialists, got so much of what would happen over the next 50 or so years eerily right, with the marked exception of where it would occur.

Such prescience is hard to achieve even for someone as brilliant as fellow novelist Anatole France, the author of the introduction to the 1924 edition of The Iron Heel I hold in my hand.

France, who was 80 at the time and would die the same year, thinks London was right, that the Iron Heel was coming, but doesn’t think it will arrive for quite some time.

"In France, as in Italy and Spain, Socialism is, for the moment, too feeble to have anything to fear from the Iron Heel, for extreme feebleness is the one safety of the feeble. No Heel of Iron will trouble itself to tread down this dust of a party." (xiv)

1924 is the same year that the murder of the socialist Giacomo Matteotti truly began the fascist dictatorship in Italy, a kind of corporate state certainly anticipated by London in The Iron Heel. Within 6 years "feeble" Spanish socialism would be locked in a civil war with fascism; within 9 years the Nazis would rise to power on the backs of the same sorts of fears of revolution, and using the same kinds of political machinations, described in The Iron Heel. The burning of the Reichstag, which was blamed on the German communists but really committed by the Nazis, became the justification for an anti-revolutionary crackdown and the transformation of German democracy into a sham. It makes one wonder if Hitler himself had read The Iron Heel!


The Iron Heel throws up all sorts of historical questions and useful analogies for the current day. Why did neither revolutionary socialism nor outright fascism emerge in the US in the 1930s as it did elsewhere?

The Iron Heel should perhaps be read as part of a trilogy with Sinclair Lewis's 1936 It Can't Happen Here, which describes the transformation of America into a Nazi-like totalitarian state, and Philip Roth's 2004 The Plot Against America, which describes a similar fascist regime that comes about when the Nazi sympathizer and isolationist Charles Lindbergh wins the presidential race against Franklin Roosevelt. Full reviews of both will be found here at some point in the future; the point for now is that there were figures and sentiments in American politics that might have added up to something quite different from American exceptionalism during this period, and that what we ended up with was as much the consequence of historical luck as of any particularly American virtue.

Some on both the right and the left would argue that what we have now is just a softer version of the tyranny portrayed by London, Lewis, and Roth, and they do indeed have something, but I do not as of now want to go there. The reason, I think, the kind of socialist revolution found in other countries never got legs in the United States was that the US, which had been a hotbed of labor unrest and socialist sentiment and anticipation in the late 19th and early 20th centuries, willingly adopted a whole series of reforms that made worker grievances against capitalism less acute:

  • Unemployment benefits- 1935
  • Eight-hour workday- 1936
  • Workers' compensation in the event of injury (widespread by 1949)
  • Government funded support for the poor that preserved a minimum standard of living- 1935
  • Minimum wage- 1938
  • Right to unionize and the adoption of a formal system to hold strikes- 1935

In addition, controls were placed on financial markets so that the kinds of wild swings and financial panics that had periodically brought the nation's economy to its knees would no longer occur.

Even when derided on the right as a move towards socialism or on the left as delusional reformism, these changes, followed by an unprecedented era of prosperity for the middle class from the 1940s through the 1970s, essentially ended the vicious circle presented in The Iron Heel: a political system unresponsive to worker grievances and exploitation gives rise to forces of social revolution, which in turn engender a move towards state violence and tyranny by the wealthy elites, which results in widespread terrorism by continually frustrated revolutionaries.

As the system for producing widespread prosperity faltered in the 1970s, the American right, followed by increasingly centrist Democrats, diagnosed the economic malaise as originating from both the choke hold American unions had over the economy and the stifling effects of too much government interference. Through the 1980s and 90s labor union power was dismantled, economic production globalized, capital markets freed from earlier constraints, welfare reformed. Support for the lower classes was now to come not primarily through government programs but through tax policy, such as the Earned Income Tax Credit, that would free individuals to make their own choices and vest them in the capitalist economic system rather than treat them as an opposition. Such reforms, with their explicit claim that they would lead to universal prosperity, collapsed with the 2008 financial crisis, and neither the American right nor the American left has any clear understanding of where we go from here.

This history is what makes the recent video of Romney and his 47% comments so galling. His fellow oligarchs, who had paid more than the median income of an average American family, $50,000, to listen to his speech, laugh and clink their silverware as he describes the sad state of an American society where over half the country either receives some government support or pays no taxes to the federal government. Romney and his audience forget how we got here: that the working class were granted their "privileges" because the only other way to sustain him and his fellow oligarchs would have been through a regime of violence, and that the fact that so many Americans don't pay federal income tax was brought about by Republicans who hoped to entrench the idea that the system of free enterprise was for the good of rich, middle class and poor alike.

Romney and his listeners are oblivious to the long-view. In a way I wish I could send them all a copy of The Iron Heel.

Would I charge, or could it be a tax write-off?

*Jack London, The Iron Heel, McKinlay, Stone & Mackenzie, 1924 (original 1908).

Big Brother, Big Data, and the Forked Path

The technological ecosystem in which political power operates tends to mark out the possibility space for the kinds of political arrangements, good and bad, that can exist within it. Orwell's Oceania and its sister tyrannies were imagined in what was the age of big, centralized media. The Party had under its control not only the older printing press, with the ability to craft and doctor at will anything created in print, from newspapers to government documents to novels; it also controlled the newer mediums of radio and film, and, as Orwell imagined, would twist those technologies around backwards to serve as spying machines aimed at everyone.

The question, to my knowledge, Orwell never asked was what the Party was to do with all that data. How was it to store, sift through, make sense of, or locate actual threats within the yottabytes of information that would be gathered by recording almost every conversation and filming or viewing almost every movement of its citizens' lives? In other words, the Party would have run into the problem of Big Data. Many of the Orwellian developments since 9/11 have come in the form of the state trying to ride the wave of the Big Data tsunami unleashed with the rise of the internet, an attempt to create its own form of electronic panopticon.

In their book Top Secret America: The Rise of the New American Security State, Dana Priest and William Arkin of the Washington Post present a frightening picture of the surveillance and covert state that has mushroomed in the United States since 9/11: a vast network of endeavors that has grown to dwarf, in terms of cumulative numbers of programs and operations, similar efforts during the unarguably much more dangerous Cold War. (TS 12)

Theirs is not so much a vision of an America of dark security services controlled behind the scenes by a sinister figure like J. Edgar Hoover as it is one of complexity gone wild. Priest and Arkin paint a picture of Top Secret America as a vast data-sucking machine, vacuuming up every morsel of information with the intention of correctly "connecting the dots" (150) in the hopes of preventing another tragedy like 9/11.

So much money was poured into intelligence gathering after 9/11, in so many different organizations, that no one, not the President, nor the Director of the CIA, nor any other official, has a full grasp of what is going on. The security state, like the rest of the American government, has become reliant on private contractors who rake in stupendous profits. The same corruption that can be found elsewhere in Washington is found here. Employees spin round and round in a revolving door between government service, with the Washington connections that participation in the political establishment brings, and big-time money in the ballooning world of private security and intelligence. Priest quotes one American intelligence official who had the balls to describe the incestuous relationship between government and private security firms as "a self-licking ice cream cone". (TS 198)

The flood of money that inundated the intelligence field after 9/11 has created what Priest and Arkin call an "alternative geography": companies doing covert work for the government that exist in huge complexes, some so large they contain their very own "cities", with shopping centers, athletic facilities, and the like. To these are added mammoth government-run complexes, some known and others unknown.

Our modern day Winston Smiths, who work for such public and private intelligence services, are tasked not with the mind-numbing work of doctoring history, but with the equally superfluous job of repackaging the very same information already produced by some other individual in some other organization, public or private, with little hope that either would know the other was working on the same damned thing. All of this would be a mere tragic waste of public money that could be better invested elsewhere, but it goes beyond that by threatening the very freedoms these efforts are meant to protect.

Perhaps the pinnacle of the government's Orwellian version of a Google/Facebook mashup is the gargantuan supercomputer data center in Bluffdale, Utah built and run by the premier spy agency in the age of the internet, the National Security Agency, or NSA. As described by James Bamford for Wired Magazine:

In the process—and for the first time since Watergate and the other scandals of the Nixon administration—the NSA has turned its surveillance apparatus on the US and its citizens. It has established listening posts throughout the nation to collect and sift through billions of email messages and phone calls, whether they originate within the country or overseas. It has created a supercomputer of almost unimaginable speed to look for patterns and unscramble codes. Finally, the agency has begun building a place to store all the trillions of words and thoughts and whispers captured in its electronic net.

It had been thought that domestic spying by the NSA, under a super-secret program with the Carl Saganesque name, Stellar Wind, had ended during the G.W. Bush administration, but if the whistleblower, William Binney, interviewed in this chilling piece by Laura Poitras of the New York Times, is to be believed, the certainly unconstitutional program remains very much in existence.

The bizarre thing about this program is just how wasteful it is. After all, don't private companies such as Facebook and Google already possess the very same kinds of data trails that would be provided by such obviously unconstitutional efforts like those at Bluffdale? Why doesn't the US government just subpoena the internet and telecommunications companies who already track almost everything we do for commercial purposes? The US government, of course, has already tried to turn the internet into a tool of intelligence gathering, most notably with the stalled Cyber Intelligence Sharing and Protection Act, or CISPA, and perhaps it is building Bluffdale in anticipation that such legislation will fail, that whatever passes might not be to its liking, or because it doesn't want to be bothered with the need to obtain warrants or with constitutional niceties such as our protection against unreasonable search and seizure.

If such behemoth surveillance instruments fulfill the role of the telescreens and hidden microphones in Orwell's 1984, then the role of the one group in the novel whose name actually reflects what it is, The Spies, children who watch their parents for unorthodox behavior and turn them in, is taken today by the American public itself. In post-9/11 America it is local law enforcement, neighbors, and passersby who are asked to "report suspicious activity". People who actually do report suspicious activity have their observations and photographs recorded in an ominous-sounding database that Orwell himself might have named: Guardian. (TS 144)

As Priest writes:

Guardian stores the profiles of tens of thousands of Americans and legal residents who are not accused of any crime. Most are not even suspected of one. What they have done is appear, to a town sheriff, a traffic cop, or even a neighbor, to be acting suspiciously. (TS 145)

Such information is reported to, and initially investigated by, the personnel of another sort of data collector, the "fusion centers" created in every state after 9/11. These fusion centers are often located in rural states where their employees have literally nothing to do. They tend to be staffed by persons without intelligence backgrounds who instead hail from law enforcement, because those with even the bare minimum of foreign intelligence experience were sucked up by the behemoth intelligence organizations, both private and public, that have spread like mold around Washington D.C.

Into this vacuum of largely non-existent threats came "consultants" such as Ramon Montijo and Walid Shoebat, who lectured fusion center staff on the fantastical plot of Muslims to establish Sharia Law in the United States (TS 271-272), a story as wild as the concocted boogeymen of Goldstein and the Brotherhood in Orwell's dystopia.

It isn't only mosques or Islamic groups that find themselves spied upon by overeager local law enforcement and sometimes highly unprofessional private intelligence firms. Completely non-violent political groups, such as ones in my native Pennsylvania, have become the targets of "investigations". In 2009 the private intelligence firm the Institute for Terrorism Research and Response compiled reports for state officials on a wide range of peaceful political groups that included: "The Pennsylvania Tea Party Patriots Coalition, the Libertarian Movement, anti-war protesters, animal-rights groups, and an environmentalist dressed up as Santa Claus and handing out coal-filled stockings" (TS 146). A list just about politically broad enough to piss everybody off.

Like the fusion centers, or as part of them, data-rich crime centers such as the Memphis Real Time Crime Center are popping up all over the United States. Local police officers now suck up streams of data about the environments in which they operate and are able to pull that data together to identify suspects: for now by scanning license plates, but soon enough by scanning human faces from a distance, as in Arizona, where the Maricopa County Sheriff's Office was creating up to 9,000 biometric digital profiles a month. (TS 131)

Sometimes crime centers used the information gathered for massive sweeps, arresting over a thousand people at a clip. The result was an overloaded justice and prison system that couldn't handle the caseload (TS 144) and, no doubt, as was the case in territories occupied by the US military, an even more alienated and angry local population.

From one perspective Big Data would seem to make torture more, not less, likely, as all information that can be gathered from suspects, whatever their station, becomes important in a way it wasn't before: a piece in a gigantic electronic puzzle. Yet technological developments outside of Big Data appear to point away from torture as a way of gathering information.

"Controlled torture", the phrase burns in my mouth, has always been a consequence of the unbridgeable space between human minds. Torture attempts to break through the wall of privacy we possess as individuals through physical and mental coercion. Big Data, whether of the commercial or security variety, hates privacy because it gums up the capacity to gather more and more information, for Big Data to become what it so desires: Even Bigger Data. The dilemma for the state, or, in the case of the Inquisition, the organization, is that once the green light has been given to human sadism it is almost impossible to control. Torture, or the knowledge of torture inflicted on loved ones, breeds more and more enemies.

Torture's ham-fisted and outwardly brutal methods are today going hopelessly out of fashion. They are the equivalent of rifling through someone's trash or breaking into their house to obtain useful information about them. Much better to have them tell you what you need to know because they "like" you.

In that vein, Priest describes some of the new interrogation technologies being developed by the government and private security technology firms. One such technology is an "interrogation booth" containing avatars with characteristics (such as those of an older Hispanic woman) that have been psychologically studied to produce more accurate answers from those questioned. There are ideas to replace the booth with a tiny projector mounted on a soldier's or policeman's helmet to produce the needed avatar at a moment's notice. There was also a "lie detecting beam" that could tell, from a distance, whether someone was lying by measuring minuscule changes in a person's skin. (TS 169) But if the security services demand transparency from those they seek to control, they offer up no such transparency themselves. This is the case not only in the notoriously secretive nature of the security state, but also in the way the US government explains and seeks support for its policies in the outside world.

Orwell was deeply interested in the abuse of language, and I think here too the actions of the American government would give him much to chew on. Ever since the disaster of the war in Iraq, American officials have been obsessed with the idea of "soft power" and the fallacy that resistance to American policy was a matter of "bad messaging" rather than the policy itself. Sadly, this messaging was often something far from truthful and often fell under what the government terms "influence operations", which, according to Priest:

Influence operations, as the name suggests, are aimed at secretly influencing or manipulating the opinions of foreign audiences, either on an actual battlefield- such as during a feint in a tactical battle- or within civilian populations, such as undermining support for an existing government or terrorist group (TS 59)

Another great technological development of the past decade has been the revolution in robotics, which, like Big Data, is brought to us by the ever expanding information processing powers of computers, the product of Moore's Law.

Since 9/11 multiple forms of robots have been developed, perfected, and deployed by the military, intelligence services and private contractors, the most discussed and controversial of which have been flying drones. It is with these and other tools of covert warfare, and in his quite sweeping understanding and application of executive power, that President Obama has been even more Orwellian than his predecessor.

Obama may have ended the torture of prisoners captured by American soldiers and intelligence officials, and he certainly showed courage and foresight in the assassination of Osama Bin Laden, for which the world can breathe a sigh of relief. The problem is that he has allowed, indeed propelled, the expansion of instruments of American foreign policy that are largely hidden from the purview and control of the democratic public. In addition to the surveillance issues above, he has put forward a sweeping and quite dangerous interpretation of executive power in the form of the indefinite detention without trial found in the NDAA, engaged in the extrajudicial killing of American citizens, and asserted the prerogative, questionable under both the constitution and international law, to launch attacks, both covert and overt, on countries with which the United States is not officially at war.

In the words of Conor Friedersdorf of the Atlantic, writing on the unprecedented expansion of executive power under the Obama administration and comparing these very real and troubling developments to the paranoid delusions of right-wing nuts who seem more concerned with fantastical conspiracy theories, such as the Social Security Administration buying hollow-point bullets:

… the fact that the executive branch is literally spying on American citizens, putting them on secret kill lists, and invoking the state secrets privilege to hide their actions doesn’t even merit a mention.  (by the right-wing).

Perhaps surprisingly, the technologies created in the last generation seem tailor-made for the new types of covert war the US is now choosing to fight. This can perhaps best be seen in the ongoing covert war against Iran, which has used not only drones but brand new forms of weapons such as the Stuxnet worm.

The questions posed to us by the militarized versions of Big Data, new media, robotics, and spyware are the same as those these phenomena pose in the civilian world. Big Data: does it actually provide us with a useful map of reality, or does it instead drown us in mostly useless information? And, in analogy to the question of profitability in the economic sphere: does Big Data actually make us safer? New media: how is truth to survive in a world where seemingly any organization or person can create their own version of reality? Doesn’t the lack of transparency by corporations or the government give rise to all sorts of conspiracy theories in such an atmosphere, and isn’t it ultimately futile, and liable to backfire, for corporations and governments to try to shape all these newly enabled voices to their liking through spin and propaganda? Robotics: in analogy to the question of what it portends for the world of work, what is it doing to the world of war? Is robotics making us safer, or giving us a false sense of security and control? Is it engendering an over-readiness to take risks because we have abstracted away the very human consequences of our actions, at least in terms of the risks to our own soldiers? And in terms of spyware and computer viruses: how open should our systems remain, given their vulnerability to those who would use that openness for ill ends?

At the very least, in terms of Big Data, we should have grave doubts. The kind of Facebook-from-hell the government has created didn’t seem all that capable of actually pulling information together into a coherent, much less accurate, picture. Much like their less technologically enabled counterparts who missed the collapse of the Eastern Bloc and the fall of the Soviet Union, the new internet-enabled security services missed the world-shaking event of the Arab Spring.

The problem with all of these technologies, I think, is that they are methods for treating the symptoms of a diseased society rather than the disease itself. But first let me take a detour through Orwell’s vision of the future of capitalist, liberal democracy, seen from his vantage point in the 1940s.

Orwell, and this is especially clear in his essay The Lion and the Unicorn, believed the world was poised between two stark alternatives: the Socialist one, which he defined in terms of social justice, political liberty, equal rights, and global solidarity, and a Fascist or Bolshevist one, characterized by the increasingly brutal actions of the state in the name of caste, both domestically and internationally.

He wrote:

Because the time has come when one can predict the future in terms of an “either–or”. Either we turn this war into a revolutionary war (I do not say that our policy will be EXACTLY what I have indicated above–merely that it will be along those general lines) or we lose it, and much more besides. Quite soon it will be possible to say definitely that our feet are set upon one path or the other. But at any rate it is certain that with our present social structure we cannot win. Our real forces, physical, moral or intellectual, cannot be mobilised.

It is almost impossible for those of us in the West who have been raised to believe that capitalist liberal democracy is the end of the line in terms of political evolution to remember that within the lifetimes of people still with us (such as my grandmother, who tends her garden now in the same way she did in the 1940s) this whole system seemed to have been swept into the dustbin of history, and that the future lay elsewhere.

What Orwell’s brilliance missed, the penetrating insight of Aldous Huxley’s Brave New World caught: that a sufficiently prosperous society would lull its citizens to sleep, and in doing so rob them both of the desire for revolutionary change and of their very freedom.

As I have argued elsewhere, Huxley’s prescience may depend on the kind of economic growth and general prosperity that was the norm after the Second World War. What worries me is that if the pessimists are proven correct, and we are in for an era of resource scarcity, population pressures, stagnant economies, and chronic unemployment, then Huxley’s dystopia will give way to a more brutal Orwellian one.

This is why, no matter who wins the presidential election in November, we need to push back against the Orwellian features that have crept upon us since 9/11. The fact is that we are almost unaware that we are building the architecture for something truly dystopian, and we should pause to think before it is too late.

To return to the question of whether the new technologies help or hurt here: it is almost undeniable that all of the technological wonders that have emerged since 9/11 are good at treating the symptoms of social breakdown, both abroad and at home. They allow us to kill or capture persons who would harm largely innocent Americans, or to catch violent or predatory criminals in our own country, state, and neighborhood. Where they fail is in getting at the actual root of the disease itself.

America would much better serve its foreign policy interests by aligning itself with the public opinion of the outside world, insofar as we were able to maintain our long-term interests and continue to guarantee the safety of our allies. Much better than the kind of “information operation” supported by the US government to portray a corrupt, and now deposed, autocrat like Yemen’s Abdullah Saleh as “an anti-corruption activist” would be actual assistance by the US and other advanced countries in… I dunno… fighting corruption. Much better Western support for education and health in the Islamic world than the kinds of interference in the internal political development of post-revolutionary Islamic societies driven by geopolitical interest and practiced by the likes of Iran and Saudi Arabia.

This same logic applies inside the United States as well. It is time to radically roll back the Orwellian advances that have occurred since 9/11. The danger of the war on terrorism was always that it would become like Orwell’s “continuous warfare”, existing perpetually in spite, rather than because, of the level of threat. We are in danger of investing so much in our security architecture, bloated to a scale that dwarfs our enemies, whom we have blown up in our own imaginations into monstrous shadows, that we are failing to invest in the parts of our society that will actually keep us safe and prosperous over the long term.

In Orwell’s Oceania, the poor, the “proles”, were largely ignored by the surveillance state. The danger here is that, as what were once advanced technologies (drones, robots, biometric scanners, super-fast data-crunching computers, geo-location technologies) move into the hands of local law enforcement, we will domestically move even further in the direction of treating the symptoms of social decay rather than dealing with the underlying conditions that propel it.

The fact of the matter is that the very equality, the “earthly paradise” born of democratic socialism and technology, which Orwell thought was at our fingertips, has retreated farther and farther from us. The reasons for this are multiple; to name just a few: financial concentration, automation, the end of the “low hanging fruit” of industrialization and the high growth rates it brought, the crisis of complexity, and the problem of ever more marginal returns. This retreat, if it lasts, would likely tip the balance from Huxley’s stupefaction by consumption to Orwell’s more brutal dystopia, initiated by terrified elites attempting to keep a lid on things.

In a state of fear and panic we have blanketed the world with a sphere of surveillance, propaganda, and covert violence of which Big Brother himself would be proud. This is shameful, and it threatens not only to undermine our very real freedom but to usher in a horribly dystopian world bearing some resemblance to the one outlined in Orwell’s dark imaginings. We must return to the other path.