Mary Shelley’s other horror story: lessons for super-pandemics

The Last Man

Back in the early 19th century, a novel was written that tells the story of humanity’s downfall in the 21st century. Our undoing is the consequence of a disease that originates in the developing world and radiates outward, eventually spreading into North America, East Asia, and ultimately Europe. The disease proves unstoppable, causing the collapse of civilization, our greatest cities becoming grave sites of ruin. For all the reader is left to know, not one human being survives the pandemic.

We best know the woman who wrote The Last Man in 1825 as the author of Frankenstein, but it seems Mary Shelley had more than one dark tale up her sleeve. And though the destruction wrought by disease in The Last Man is pessimistic in the extreme, we might take from the novel some lessons helpful for understanding not only the very deadly, if less than absolutely ruinous, pandemic of the moment, Ebola, but even more the dangers of super-pandemics, which are now more likely to emerge from within humanity than from a still quite dangerous nature herself.

The Last Man tells the story of Lionel Verney, son of a nobleman who lost his fortune to gambling, and the man who will become the last human being on earth as our species is destroyed by a plague in the 21st century. Do not read the novel hoping to get a glimpse of Shelley’s view of what our 21st century world would be like, for it looks almost exactly like the early 19th century, with people still getting around on horseback and little in the way of future technology.

My guess is that Shelley set her story in the “far future” in order to avoid any political heat for a novel in which England has become a republic. Surely, if she had meant it to take place in a plausible 21st century, and had somehow missed the implications of the industrial revolution, there would at least have been some imagined political differences between that world and her own. The same Greco-Turkish conflict that raged in the 1820s rages on in Shelley’s imagined 21st century, with only the borders of the war having changed. Indeed, the novel is more a reflection on and critique of the Romantic movement, with Lord Byron making his appearance in the form of the character Lord Raymond, and Verney himself a barely concealed version of Mary Shelley’s deceased husband Percy.

In The Last Man Shelley sets out to undermine all the myths of the Romantic movement: the innocence of nature, the redemptive power of revolutionary politics, and the transformative power of art. While of historical interest, such debates offer us little in terms of the meaning of her story for us today. That meaning, I think, can be found in the state of epidemiology, which on the very eve of Shelley’s story was about to undergo a revolution, a transformation that would occur in parallel with humanity’s assertion of general sovereignty over nature, the consequence of the scientific and industrial revolutions.

Reading The Last Man, one needs to be keenly aware that Shelley had no idea how disease actually works. In the 1820s the leading theory was the miasma theory, which held that diseases were caused by “bad air”. When Shelley wrote her story, miasma theory was only beginning to be challenged by what we now call the “germ theory” of disease, with the work of scientists such as Agostino Bassi. This despite the fact that microscopic organisms had been observed since the 17th century, and the idea that invisible seeds of contagion spread disease had been proposed as early as 1546 by the Italian polymath Girolamo Fracastoro. Shelley’s characters thus do things that seem crazy in the light of germ theory; most especially, they make no effort to isolate the infected.

Well, some do. In The Last Man it is only the bad characters who try to run away or isolate themselves from the sick. The supremely tragic element of the novel is how what is most important to us, our small intimate circles, which we cling to despite everything, can be done away with by nature’s cruel shrug. Shelley’s tale is one of extreme pessimism not because it portrays the unraveling of human civilization, turning our monuments into ruins and eventually dust, but because of how it portrays a world where everyone we love most dearly leaves us almost overnight. The novel gives an intimate portrait of what it’s like to watch one’s beloved family and friends vanish, a reality Mary Shelley was all too well acquainted with, having lost her husband and three children.

Here we can find the lesson to take for the Ebola pandemic, for the deaths we are witnessing today in west Africa are in a very real sense a measure of people’s humanity, as if nature, perversely, had set out to target those acting in the most humane way. For, absent modern medical infrastructure, the only ones left to care for the infected are the families of the sick themselves.

This is how New York Times journalist Helene Cooper explained it to interviewer Terry Gross of Fresh Air:

COOPER: That’s the hardest thing, I think, about the disease is it does make pariahs out of the people who are sick. And it – you know, we’re telling the family people – the family members of people with Ebola to not try to help them or to make sure that they put on gloves. And, you know, that’s, you know, easier – I think that can be easier said than done. A lot of people are wearing gloves, but for a lot of people it’s really hard.

One of the things – two days after I got to Liberia, Thomas Eric Duncan sort of happened in the U.S. And, you know, I was getting all these questions from people in the U.S. about why did he, you know, help his neighbor? Why did he pick up that woman who was sick? Which is believed to be how we got it. And I set out trying to do this story about the whole touching thing because the whole culture of touching had gone away in Liberia, which was a difficult thing to understand. I knew the only way I could do that story was to talk to Ebola survivors because then you can ask people who actually contracted the disease because they touched somebody else, you know, why did you touch somebody? It’s not like you didn’t know that, you know, this was an Ebola – that, you know, you were putting yourself in danger. So why did you do it?

And in all the cases, the people I talked to there were, like, family members. There was this one woman, Patience, who contracted it from her daughter who – 2-year-old daughter, Rebecca – who had gotten it from a nanny. And Rebecca was crying, and she was vomiting and, you know, feverish, and her mom picked her up. When you’re seeing a familiar face that you love so much, it’s really, really hard to – I think it’s a physical – you have to physically – to physically restrain yourself from touching them is not as easy as we might think.

The thing we need to do to ensure naturally occurring pandemics such as Ebola cause the minimum of human suffering is to support developing countries that lack the health infrastructure to respond to, or avoid becoming vectors for, infectious diseases. We especially need to address the low number of doctors per capita found in some countries, for example by providing doctor training programs. In a globalized world, being our brother’s keeper is no longer just a moral necessity; it helps preserve our own health as well.

A super-pandemic of the kind imagined by Mary Shelley, though, is an evolutionary near impossibility. It is highly unlikely that nature by itself would come up with a disease so devastating that we could not stop it before it killed us in the billions. Having co-evolved with microscopic life, some human being’s immune system, somewhere, anticipates even nature’s most devious tricks. We are also in the Anthropocene now, able to understand, anticipate, and respond to the deadliest games nature plays. Sadly, however, the 21st century could still see, as Shelley imagined, the world’s first super-pandemic; only the source of such a disaster wouldn’t be nature, it would be us.

One might think I am referencing bio-terrorism, yet the disturbing thing is that the return address for any super-pandemic is just as likely to be stupid and irresponsible scientists as deliberate bioterrorists. Such is the indication from what happened in 2011, when the Dutch scientist Ron Fouchier deliberately turned the H5N1 bird flu into a form that could potentially spread human-to-human. As reported by Laurie Garrett:

Fouchier told the scientists in Malta that his Dutch group, funded by the U.S. National Institutes of Health, had “mutated the hell out of H5N1,” turning the bird flu into something that could infect ferrets (laboratory stand-ins for human beings). And then, Fouchier continued, he had done “something really, really stupid,” swabbing the noses of the infected ferrets and using the gathered viruses to infect another round of animals, repeating the process until he had a form of H5N1 that could spread through the air from one mammal to another.

Genetic research has become so cheap and easy that what once required national labs and huge budgets, creating something nature would have great difficulty achieving through evolutionary means, can now be done by run-of-the-mill scientists in simple laboratories, or even by high school students. The danger here is that scientists will create something so novel that evolution has prepared none of us for it, and that through stupidity and lack of oversight it will escape from the lab and spread through human populations.

News of the reckless Dutch experiments with H5N1 was followed by revelations of mind-bogglingly lax safety procedures around pandemic diseases at federal laboratories, where smallpox virus had been forgotten in a storage area and pathogens were passed around in Ziploc bags.

The U.S. government, at least, has woken up to the danger, imposing a moratorium on such research until its true risks and rewards can be understood and better safety standards established. This has already, and will necessarily continue to, negatively impact potentially beneficial research. Yet what else, one might ask, should the government do, given the potential risks? What will ultimately be needed is an international treaty to monitor, regulate, and sometimes even ban certain kinds of research on pandemic diseases.

Of all the existential risks facing humanity in the 21st century, man-made super-pandemics have the shortest path between nightmare and reality. The risk from runaway super-intelligence remains theoretical, based upon hypothetical technology that, for all we know, may never exist. The danger of runaway global warming is real, but we are unlikely to feel its full impact this century. Meanwhile, the technologies to create a super-pandemic are in large part already here, the key uncertainty being how we might control such a dangerous potential if, as current trends suggest, the ability to manipulate and design organisms at the genetic level continues to both increase and democratize. Strangely enough, Mary Shelley’s warning in her Frankenstein about the dangers of science used for the wrong purposes has the greatest likelihood of coming true in the form of her Last Man.


2040s America will be like 1840s Britain, with robots?


Looked at in a certain light, Adrian Hon’s History of the Future in 100 Objects can be seen as giving us a window into a fictionalized version of an intermediate technological stage we may be entering: the period when the gains in artificial intelligence are clearly happening, but have yet to completely replace human intelligence. The question of whether AI ever will actually replace us is not of interest to me here. It certainly won’t happen tomorrow, and technological prediction beyond a certain limited horizon is a fool’s game.

Nevertheless, some features of the kind of hybrid stage we have entered are clearly apparent. Hon built an entire imagined world around them, with “amplified teams” (AIs working side by side with groups of humans) as one of the major elements of 21st century work, sports, and much else besides.

The economist Tyler Cowen perhaps did Hon one better, for he not only based his very similar version of the future on things that are happening right now, but provided insight on what we should do as job holders and breadwinners in light of the rise of ubiquitous, if less than human-level, artificial intelligence. One only wishes that his vision had room for more politics, for if Cowen is right, then absent our taking collective responsibility for the type of future we want to live in, 2040s America might look like the Britain found in Dickens, only we’ll be surrounded by robots.

Cowen may seem a strange duck to take up the techno-optimism mantle, but he did so with gusto in his recent book Average is Over. The book is in essence a sequel to Cowen’s earlier best seller The Great Stagnation, in which he argued that developed economies, including the United States, had entered a period of secular stagnation beginning in the 1970s. The reason for this stagnation was that advanced economies had essentially picked all the “low hanging fruit” of the industrial revolution.

Arguing that we are in a period of technological stagnation at first seems strange, but consider: we do not fly all that much faster than my grandparents would have in the 1960s; the kitchen in my family photos from the Carter days looks surprisingly like the kitchen I have right now, minus the paneling; and, saddest of all from the point of view of someone brought up on Star Trek, Star Wars, and Star Blazers, with a comforter sporting Viking 2 and Pioneer, not only have we failed to send human visitors to Mars or beyond, we haven’t even been back to the moon. Hell, we don’t even have any human beings beyond low-earth orbit.

Of course, it would be silly to argue there has been no technological progress since Nixon. Information, communication and computer technology have progressed at an incredible speed, remaking much of the world in their wake, and have now seemingly been joined by revolutions in biotechnology and renewable energy.

And yet, despite how revolutionary these technologies have been, they have not done the heavy lifting of prior forms of industrialization, for the simple reason that they haven’t been as qualitatively transformative as the industrial revolution. If I had a different job I could function just fine without the internet, and my life would be different only at the margins. Set the technological clock by which I live back to the days preceding industrialization, before electricity and the internal combustion engine, and I’d be living the life of my dawn-to-dusk Amish neighbors, a different life entirely.

Average is Over follows up on Cowen’s earlier book by arguing that technological changes now taking place will shake us out of our stagnation, or at least that the stagnation is itself evolving into something quite different, with some able to escape its pull while others fall even further behind.

Like Hon, Cowen thinks intermediate-level AI is what we should be paying attention to, rather than Kurzweil- or Bostrom-like hopes and fears regarding superintelligence. Also like Hon, Cowen thinks the most important aspect of artificial intelligence in the near future is human-AI teams. This is the lesson Cowen takes from, among other things, freestyle chess.

For those who haven’t been paying attention to the world of competitive chess, freestyle chess is what emerged once anyone could buy, for a few dollars, a chess program for their phone capable of beating the best players in the world. One might have thought that would be the death knell for human chess, but something quite different has happened. Now some of the most popular chess games are freestyle, meaning human-machine versus human-machine.

The moral Cowen draws from freestyle chess is that the winners of these games, and, he extrapolates, of the economic “games” of the future, are those human beings most willing to defer to the decisions of the machine. I find this conclusion more than a little chilling, given that we’re talking about real people here rather than knights or pawns, but Cowen seems to think it’s just common sense.

In its simplest form, Cowen’s argument boils down to the prediction that an increasing amount of human work in the future will come in the form of these AI-human teams. Some of this, he admits, will amount to no workers at all, with the human part of the “team” reduced to an unpaid customer. I now almost always scan and bag my own goods at the grocery store, just as I can’t remember the last time I actually spoke to a bank teller who wasn’t my mom. Cowen also admits that the rise of AI might mean the world actually gets “dumber,” our interactions with our environment simplified to foster smooth integration with machines and compressed to fit their limits.

In his vision, intelligent machines will revolutionize everything from medicine to education to business management and negotiation to love. The human beings who will best thrive in this new environment will be those whose work best complements that of intelligent machines, and this will be the case all the way from the factory floor to the classroom. Intelligent machines should improve human judgement in areas such as medical diagnostics, and would even replace judges in the courtroom if we were ever willing to take the constitutional plunge. Teachers will go from educators to “coaches” as intelligent machines allow individualized instruction, but education will still require a human touch when it comes to motivating students.

His message to those who don’t work well with intelligent machines is: good luck. He sees automation leading to an ever more competitive job market in which many will fail to develop the skills necessary to thrive. Those unfortunates will be left to fend for themselves in the face of an increasingly penny-pinching state. There is one area, however, where Cowen thinks you might find refuge if machines just aren’t your thing: marketing. Indeed, he sees marketing as one of the major growth areas in an otherwise increasingly post-human economy.

The reason for this is simple. In the future there are going to be fewer, not more, people with surplus cash to spend on all the goods built by a lot of robots and a handful of humans. One will have to find and persuade those with real incomes to part with some of their cash. Computers can do the finding, but it will take human actors to sell the dream represented by a product.

The world of work presented in Cowen’s Average is Over is almost exclusively that of the middle class and higher, who find their way with ease around the Infosphere, or whatever we want to call this shell of information and knowledge we’ve built around ourselves. Either that, or those who thrive economically will be those able to successfully pitch whatever they’re selling to wealthy or well-off buyers, sometimes even with the help of AI able to read human emotions.

I wish Cowen had focused more on what it will be like to be poor in such a world. One thing is certain: it will not be fun. For one, he sees further contraction rather than expansion of the social safety net, and widespread conservatism rather than any attempt at radically new ways of organizing our economy, society, and politics. Himself a libertarian conservative, Cowen sees such conservatism baked into the demographic cake of our aging societies. The old do not lead revolutions, and given enough of them they can prevent the young from forcing any deep structural changes on society.

Cowen also has a thing for so-called “moral enhancement,” though he doesn’t call it that. Moral enhancement need not come only from conservative forces, as the extensive work on the subject by the progressive James Hughes shows, but in the hands of both Hon and Cowen, moral enhancement is a bulwark of conservative societies, where the world of middle class work and the social safety net no longer function, or even exist, in the ways they did in the 20th century.

Hon, with his neuroscience background, sees moral enhancement leveraging our increasing mastery over the brain, but manifesting itself in a revival of religious longings related to meaning, a meaning long provided by the work, callings, and occupations that he projects will become less and less available as we roll through the 21st century and human workers are replaced by increasingly intelligent machines. Cowen, on the other hand, sees moral enhancement as the only way the poor will survive in an increasingly competitive and stingy environment, though his enhancement is to take place by more traditional means: the return of strict schools that inculcate Victorian-era morals, such as self-control and above all conscientiousness, in the young. Cowen is far from alone in thinking that in an era when machines are capable of much of the physical and intellectual labor once done by human beings, what will matter most to individual success are ancient virtues.

In Cowen’s world the rich with money to burn are chased down with a combination of AI, behavioral economics, targeted consumer surveillance, and old-fashioned, fleshy persuasion to part with their cash, but what will such a system be like for those chronically out of work? Even should mass government surveillance disappear tomorrow (fat chance), it seems the poor will still face a world where the forces behind their ever more complex society become increasingly opaque, responsible humans become harder to find, and they are constantly “nudged” by people who claim to know better. For the poor, surveillance technologies will likely be used not to sell them stuff they can’t afford, but as tools of the repo man, debt collector, parole officer, and cop, slowly chiseling away whatever slim column still connects them to the former middle class world of their parents. It is a world more akin to the 1940s, or even the 1840s, than to anything we have taken to be normal since the middle of the 20th century.

I do not know if such a world is sustainable over the long haul, and I pray that it is not. The pessimist in me remembers that the classical and medieval worlds existed for long periods with extreme levels of inequality in both wealth and power; the optimist chimes in that these were ages when the common people did not know how to read. In any case, it is not a society that must come about by some macabre logic of economic determinism. The mechanism by which Cowen sees no sustained response to such a future emerging is our own political paralysis and generational tribalism. He seems to want this world more than he is offering us a warning of its arrival. Let’s decide to prove him wrong, for the technologies he puts so much hope in could be used in totally different ways, in the service of a more just form of society.

However critical I am of Cowen for accepting such a world as a fait accompli, the man still has some rather fascinating things to say. Take for instance his view of the future of science:

Once genius machines start coming up with new theories…. intelligibility will seem like a legacy from the very distant past. (220)

For Cowen, much of science in the 21st century will be driven by coming up with theories and correlations from the massive amounts of data we are collecting, a task more suited to a computer than to a man (or woman) in a lab coat. Eventually machine-derived theories will become so complex that no human being will be able to understand them. Progress in science will be given over to intelligent machines, even as non-scientists find increasing opportunities to engage in “citizen science”.

Come to think of it, lack of intelligibility runs like a red thread through Average is Over, from “ugly” machine chess moves that human players scratch their heads at, to Cowen’s claim that those who succeed in the next century will be those who place their “faith” in the decisions of machines, choices of action they themselves do not fully understand. Let’s hope he’s wrong on that score as well, for lack of intelligibility in politics, economics, and science drives conspiracy theories, paranoia, superstition, and political immobility.

Cowen believes the time when secular persons are able to cull from science a general, intelligible picture of the world is coming to a close. This would be a disaster in the sense that science gives us the only picture of the world capable of being universally shared that is also able to accurately guide our response to both nature and the technological world. At least for the moment, perhaps the best science writer we have suggests something very different. To her new book, next time….

Sherlock Holmes as Cyborg and the Future of Retail

Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st century world. The thing I really enjoy about the show is that it’s the first time I can recall anyone managing to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.

Part of the genius of the series is that the characters around Sherlock, especially Watson, are constantly trying to press upon him the fact that he is indeed human, kind of in the same way Bones is the emotional foil to Spock.

Sherlock uses an ingenious device to display Holmes’ infamous powers of deduction. When Sherlock focuses his attention on a character, words float around them displaying some relevant piece of information: say, the price of a character’s shoes, what kind of razor they used, or what they did the night before. The first couple of times I watched the series I had the eerie feeling that I’d seen this device before, but I couldn’t put my finger on it. And then it hit me: I’d seen it in a piece of design fiction called Sight that I’d written about a while back.

In Sight, the male character is equipped with contact lenses that act as an advanced form of Google Glass. This allows him to surreptitiously access information such as the social profile and real-time emotional reactions of a woman he is out on a date with. The whole thing is downright creepy and appears to end with the woman’s rape and perhaps even her murder; the viewer is left to guess the outcome.

It’s not only the style of heads-up display containing intimate personal detail that put me in mind of the BBC’s Sherlock, where the hero has these cyborg capabilities built into his very nature rather than acquired through technology; it’s also the idea that there is this sea of potentially useful information just sitting there on someone’s face.

In the future, it seems, anyone who wants to will have Sherlock Holmes’ powers of deduction, but that got me thinking: who in the world would want to? I mean, we’re already bombarded with streams of useless data we can’t process. Access to Sight-level amounts of information about everyone we came into contact with and didn’t personally know would squeeze our mental bandwidth down to dial-up speed.

I think the not-knowing part is important, because you’d really only want a narrow stream of new information about a person you were already well acquainted with. Something like the information people now put on their Facebook wall. You know, it’s so-and-so’s niece’s damned birthday, and your monster-truck driving, barrel-necked cousin Tony just bagged something that looks like a mastodon on his hunting trip to North Dakota.

Certainly the ability to scan a person like a QR code would come in handy for police or spooks, and will likely be used by even more sophisticated criminals and creeps than the ones we have now. These groups work in a very narrow gap with people they don’t know all the time. There’s one other large group, besides some medical professionals, that works in this same narrow gap, and for whom it would regularly be of benefit, and not inefficient, to have the maximum amount of information available about the individual standing in front of them: salespeople.

Think about a high-end retailer such as a jeweler, or, perhaps more commonly, an electronics outlet such as the Apple Store. It would certainly benefit a salesperson to be able to instantly gather details about what an unknown person who walked into the store did for a living, their marital status, number of family members, and even their criminal record. There is also the matter of making the sale itself, and here the kinds of feedback data seen in Sight would come in handy. Such feedback data is already possible across multiple technologies and only needs to be combined. All one would need is a name, or perhaps even just a picture.

Imagine it this way: you walk into a store to potentially purchase a high-end product. The salesperson wears the equivalent of Google Glass. They ask for your name and you give it to them. The salesperson is able, without you ever knowing, to gather up everything publicly available about you on the web, after which they can buy your profile, purchasing, and browser history, again surreptitiously, perhaps by just blinking, from a big data company like Acxiom, and tailor their pitch to you. This is similar to what happens now when you are solicited through targeted ads while browsing the Internet, and perhaps the real future of such targeted advertising, as Facebook’s success in mobile shows, lies in the physical rather than the virtual world.

In the Apple Store example, the salesperson would be able to know what products you owned and your use patterns, perhaps getting some of this information directly from your phone, and therefore be able to pitch to you the accessories and upgrades most likely to make a sale.

The next layer, reading your emotional reactions, is a little trickier, but again much of the technology already exists or is in development. We’ve all heard those annoying messages when dealing with customer service over the phone that bleat at us, “This call may be monitored….” One might think this recording is done as insurance against lawsuits, and that is certainly one of the reasons. But, as Christopher Steiner notes in his Automate This, another major reason for these recordings is to refine algorithms that help customer service representatives filter customers by emotional type and interact with them accordingly.

These types of algorithms will only get better, and work on social robots, used largely for medical care and emotional therapy, is moving the rapid-fire algorithmic gauging of, and response to, human emotions from the audio to the visual realm.

If this use of Sherlock Holmes-type powers by those with a power or wealth asymmetry over you makes you uncomfortable, I’m right there with you. But when it gets you down, you might try laughing about it. One thing we need to keep in mind, both for sanity’s sake and so that we can more accurately gauge how people in the future might preserve elements of their humanity in the face of technological capacities we today find new and often alien, is that it would have to be a very dark future indeed for there not to be a lot to laugh at in it.

Writers of utopia can be deliberately funny; Thomas More’s Utopia is meant to crack the reader up. Dystopian visions, whether of the fictional or nonfictional sort, avoid humor for a reason. The whole point of such works is to get us to avoid the imagined future in the first place, not, as is the role of humor, to make almost unlivable situations more human, or to undermine power by making it ridiculous, pointing out that the emperor has no clothes and defeating the devil by laughing at him. Dystopias are a laughless affair.

Just as in Sherlock, it’s a tough trick to pull off, but human beings of the future, as long as there actually are still human beings, will still find the world funny. In terms of technology used as a tool of power, the powerless, as they always have, are likely to get their kicks subverting it, twisting it to the point of breaking its underlying assumptions.

Laughs will be had at epic fails and the sheer ridiculousness of control freaks trying to squish an unruly, messy world into frozen and pristine lines of code. Life in the future will still sometimes feel like a sitcom, even if the sitcoms of that time are pumped directly into our brains through nano-scale neural implants.

Utopias and dystopias emerging from technology are two sides of a crystal-clear future which, because of its very clarity, cannot come to pass. What makes ethical judgement of the technologies discussed above, indeed of all technology, difficult is their damned ambiguity, an ambiguity that largely stems from dual use.

Persons suffering from autism really would benefit from a technology that allowed them to accurately gauge and guide responses to the emotional cues of others. An EMT at the scene of a bad accident really would be empowered by instant access to such a stream of information, obtained simply from a victim’s name or even a glance at their face, especially when such data contains relevant medical information, as would a person working in child protective services or any of myriad forms of counseling.

Without doubt, such technology will be used by stalkers and creeps, but it might also be used to help restore trust and emotional rapport to a couple headed for divorce.

I think Sherry Turkle is essentially right, that the more we turn to technology to meet our emotional needs, the less we turn to each other. Still, the real issue isn’t technology itself, but how we are choosing to use it, and that’s because technology by itself is devoid of any morality and meaning, even if it is a cliché to say it. Using technology to create a more emotionally supportive and connected world is a good thing.

As Louise Aronson said in a recent article for The New York Times on social robots to care for the elderly and disabled:

But the biggest argument for robot caregivers is that we need them. We do not have anywhere near enough human caregivers for the growing number of older Americans. Robots could help solve this work-force crisis by strategically supplementing human care. Equally important, robots could decrease high rates of neglect and abuse of older adults by assisting overwhelmed human caregivers and replacing those who are guilty of intentional negligence or mistreatment.

Our Sisyphean condition is that any gain in our capacity to do good seems also to increase our capacity to do ill. The key, I think, lies in finding ways to contain and control the ill effects. I wouldn’t mind our versions of Sherlock Holmes using the cyborg-like powers we are creating, but I think we should be more than a little careful they don’t also fall into the hands of our world’s far too many Moriartys, though no matter how devilish these characters might be, we will still be able to mock them as buffoons.

This City is Our Future

Erich Kettelhut Metropolis Sketch

If you wish to understand the future you need to understand the city, for the human future is an overwhelmingly urban future. The city may have always been synonymous with civilization, but the rise of urban humanity has occurred almost entirely since the onset of the industrial revolution. In 1800 a mere 3 percent of humanity lived in cities of over one million people. By 2050, 75 percent of humanity will be urbanized. India alone might have six cities with populations of over 10 million.

The trend towards megacities is one into which humanity is, as we speak, accelerating, in a process we do not fully understand, let alone control. As the counterinsurgency expert David Kilcullen writes in his Out of the Mountains:

 To put it another way, these data show that the world’s cities are about to be swamped by a human tide that will force them to absorb- in just one generation- the same population growth that occurred in all of human history up to 1960. And virtually all of this growth will happen in the world’s poorest areas- a recipe for conflict, for crises in health, education and in governance, and for food water and energy scarcity.  (29)

Kilcullen sees four trends reshaping human geography, all of which can be traced to processes that began in the industrial revolution: the aforementioned urbanization and growth of megacities, population growth, littoralization, and connectedness.

In terms of population growth: the world’s population has exploded, going from 750 million in 1750 to a projected 9.1–9.3 billion by 2050. The rate of population growth is thankfully slowing, but barring some incredible catastrophe, the earth seems destined to gain the equivalent of another China and India within the space of a generation. Almost all of this growth will occur in poor and underdeveloped countries already stumbling under the pressures of the populations they have.

One aspect of population growth Kilcullen doesn’t really discuss is the aging of the human population. This is normally understood in terms of the failure of advanced societies in Japan, South Korea, and Europe to reach replacement levels, so that the number of the elderly grows faster than the youth available to support them, a phenomenon also happening in China as a consequence of its draconian one-child policy. Yet the developing world, simply because of sheer numbers and increased longevity, will face its own elderly crisis as tens of millions move into age-related conditions of dependency. As I have said in the past, gaining a “longevity dividend” is not a project for spoiled Westerners alone, but is primarily a development issue.

Another trend Kilcullen explores is littoralization, the concentration of human populations near the sea. In a fact surprising to a landlubber such as myself, Kilcullen points out that in 2012, 80 percent of human beings lived within 60 miles of the ocean (30), a number that is increasing as the interiors of the continents are hollowed out of human inhabitants.

Kilcullen doesn’t discuss climate change much, but the population dislocations that might be caused by moderate, not to mention severe, sea-level rise would be catastrophic should certain climate scenarios play out. This goes well beyond islands or wealthy enclaves such as Miami, New Orleans or Manhattan. Places such as these, and countries such as Denmark, may have the money to engineer defenses against the rising sea, but what of a poor country such as Bangladesh? There, more than 150 million people might find themselves in flight from the relentless forward movement of the oceans. To where will they flee?

It is not merely a matter of displacing the tens of millions of people, or more, living in low-lying coastal areas. Much of the world’s staple crop of rice is produced in deltas that would be destroyed by inundation from the salt seas.

The last and most optimistic of Kilcullen’s trends is growing connectedness. He quotes the journalist John Pollack:

Cell-phone penetration in the developing world reached 79 percent in 2011. Cisco estimates that by 2015 more people in sub-saharan Africa,  South and Southeast Asia and the Middle East will have Internet access than electricity at home.

What makes this less optimistic is the fact that, as Pollack continues:

Across much of the world, this new information power sits uncomfortably upon layers of corrupt and inefficient government.  (231)

One might have thought that the communications revolution had made geography irrelevant or “flat,” in Thomas Friedman’s famous term. Instead, the world has become “spiky,” with the concentration of people, capital, and innovation in cities spread across the globe and interconnected with one another. The need for concentration as a necessary condition for communication is felt by the very rich and the very poor alike, both of whom collect together in cities. Companies running sophisticated trading algorithms have reshaped the very landscape to get closer to the heart of the Internet and gain speed advantages over competitors so small they cannot be perceived by human beings.

Likewise, the very poor flood to the world’s cities because they can there gain access to networks of markets and capital, and, more recently, because only there do they have access to the electricity that allows them to connect with one another or with the larger world, especially their ethnic diaspora or larger civilizational community, through mobile devices and satellite TV. And there are more of these poor struggling to survive in our 21st-century world than we thought: 400 million more, according to a recent report.

For the urban poor and disenfranchised of the cities, the new connectivity can translate into what Audrey Kurth Cronin has called the new levée en masse. The first levée en masse was that of the French Revolution, when the population was mobilized for both military and revolutionary action by new short-length publications written by revolutionary writers such as Robespierre, Saint-Just, or the bloodthirsty Marat. In the new levée en masse, crowds capable of overthrowing governments (witness Tunisia, Egypt and Ukraine) can be mobilized by bloggers, amateur videographers, or just a kind of swarm intelligence emerging on the basis of some failure of the ruling classes.

Even quite effective armies, such as ISIS, now sweeping in from Syria and taking over swaths of Iraq, can be pulled seemingly out of thin air. The mobilizing capacity that was once the possession of the state or long-standing revolutionary groups has, under modern conditions of connectedness, become democratized, even if the money behind such groups can ultimately be traced to states.

The movement of the great mass of human beings into cities portends the movement of war into cities, and this is the underlying subject of Kilcullen’s book: the changing face of war in an urban world. Given that the vast majority of urbanizing countries will be incapable of fielding advanced armies, Kilcullen thinks the kinds of conflicts likely to be encountered there will be guerrilla wars, whether pitting one segment of society against another or drawing in Western armies.

The headless, swarm tactics of guerrilla war, which as the author Lawrence H. Keeley reminded us is in some sense a more evolved, “natural” and ultimately more effective form of warfare than the clashing professional armies of advanced states, its roots stretching back into human prehistory and the ancient practices of both hunting and tribal warfare, are given a potent boost by local communication technologies such as traditional radio and mesh networks. A crowd or small military group tied together by an electronic web becomes something more like an immune system than a modern, centrally directed army.

Attempting to avoid the high casualties so often experienced when advanced armies fight guerrilla wars, those capable of doing so are likely to turn to increasingly sophisticated remote and robotic weapons to fight these conflicts for them. Kilcullen is troubled by this development, not least because it seems to relocate the risk of war onto the civilian population of whatever country is wielding such weapons: the communities in which remote warriors live, or where the weapons themselves are designed and built, are arguably legitimate targets of a remote enemy a community might not even be aware it is fighting. Perhaps the real key is to prevent conflicts that might end in our military engagement in the first place.

Cities likely to experience epidemic crime, civil war or revolutionary upheaval are also those that have, in Kilcullen’s terms, gone “feral,” meaning the order usually imposed by the urban landscape no longer operates due to failures of governance. Into such a vacuum criminal networks often step, exchanging the imposition of some semblance of order for control of illicit trade. All of these things: civil war, revolution, and international crime, represent pull factors for Western military engagement, whether in the name of international stability, humanitarian concerns, or more nefarious ends, most of which center on resource extraction. The question is how one can prevent cities from going feral in the first place, avoiding the deep discontent and social breakdown that leads to civil war, revolution, or the rise of criminal cartels, all of which might end with the military intervention of advanced countries.

The solution lies in thinking of the city as a type of organism with “inflows” such as water, food, resources, manufactured products and capital, and “outflows,” especially waste. There is also the issue of order as a kind of homeostasis. A city such as Beijing or Shanghai with its polluted skies is a sick organism, as is Dhaka in Bangladesh with its polluted waters, or a city with a sky-high homicide rate such as Guatemala City or Sao Paulo. The beautiful thing about the new technologically driven capacity for mass mobilization is that it forces governments to take notice of the people’s problems or figuratively (and sometimes literally) lose their heads. The problem is that once things have gone badly enough to inspire mass riots, the condition is likely systemic and extremely difficult to solve, and the kinds of protests the Internet and mobile technology have inspired have, at least so far, been effective at toppling governments but unable either to found or to serve as governments themselves.

At least one answer to the problems of urban geography that could potentially allow cities to avoid instability is “Big Data,” or so-called “smart cities,” in which a city is minutely monitored in real time for problems that then initiate quick responses by city authorities. There are several problems here. The first is the cost of such systems, but that might be the least insurmountable; the biggest is the sheer data load.
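To make the “smart city” idea concrete, here is a toy sketch of threshold-based monitoring in Python. The sensor names and alert thresholds are invented for illustration; a real system would ingest enormous real-time feeds from thousands of sources, which is precisely the data-load problem just mentioned.

```python
# Toy "smart city" dashboard: flag readings that cross alert thresholds.
# Sensor names and threshold values are invented for illustration only.

THRESHOLDS = {
    "air_quality_index": 150,        # e.g. Beijing's polluted skies
    "water_contaminants_ppm": 10,    # e.g. Dhaka's polluted waters
    "emergency_calls_per_hour": 40,  # e.g. a homicide-rate spike
}

def flag_anomalies(readings: dict) -> list:
    """Return the names of sensors whose readings exceed their thresholds."""
    return [name for name, value in readings.items()
            if value > THRESHOLDS.get(name, float("inf"))]

readings = {
    "air_quality_index": 180,
    "water_contaminants_ppm": 3,
    "emergency_calls_per_hour": 55,
}
print(flag_anomalies(readings))  # ['air_quality_index', 'emergency_calls_per_hour']
```

The hard part, as Kilcullen’s point about signal and noise suggests, is not this comparison step but deciding which of millions of simultaneous readings deserve thresholds at all.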

As Kilcullen puts it in the context of military intelligence, though it could just as well be stated as the problem of city administrators, international NGOs and aid agencies:

The capacity to intercept, tag, track and locate specific cell phone and Internet users from a drone already exists, but distinguishing signal from noise in a densely connected, heavily trafficked piece of digital space is a daunting challenge. (238)

Kilcullen’s answer to the incomplete picture provided by the view from above, from big data, is to combine this data with the partial but deep view of the city held by its inhabitants on the ground. In its essence a city is the stories and connections of those who live in it. Think of the deep, if necessarily narrow, perspective of a major city merchant or even a well-connected drug dealer. Add this to the stories of those working in social and medical services, police officers, big employers, socialites, etc., and one starts to get an idea of the biography of a city. Add to that the big picture of flows and connections and one starts to understand the city for what it is: a complex type of non-biological organism that serves as a stage for human stories.

Kilcullen has multiple examples of where knowledge of the big picture from experts has successfully aligned with grassroots organization to save societies on the brink of destruction, an alignment he calls “co-design.” He cites the Women of Liberia Mass Action for Peace, in which grassroots organizer Leymah Gbowee leveraged the expertise of Western NGOs to stop the civil war in Liberia. CeaseFire Chicago uses a big-picture model of crime literally based on epidemiology and combines it with community-level interventions to stop violent crime before it occurs.

Another group Kilcullen discusses is Crisis Mappers which offers citizens everywhere in the world access to the big picture, what the organization describes as “the largest and most active international community of experts, practitioners, policy makers, technologists, researchers, journalists, scholars, hackers and skilled volunteers engaged at the intersection between humanitarian crises, technology, crowd-sourcing, and crisis mapping.” (253)

On almost all of this I find Kilcullen to be spot on. The problem is that he fails to tackle the really systemic issue, which is inequality. What is necessary to save any city, as Kilcullen acknowledges, is a sense of shared community, what I would call a sense of shared past and future. Insofar as the very wealthy in any society or city are connected to and largely identify with their wealthy fellow elites abroad rather than with their poor neighbors, a city and a society are doomed, for only the wealthy have the wherewithal to support the kinds of social investments that make a city livable for its middle classes, let alone its poor.

The very globalization that has created the opportunity for the rich in once poor countries to rise, and which connects the global poor to their fellow sufferers both in the same country and, more amazingly, across the world, has cleaved the connection between poor and rich within the same society. It is these global connections between classes that give the current situation a revolutionary aspect, one which, as Marx long ago predicted, is global in scope.

The danger is that the wealthy classes will turn the new high-tech tools for monitoring citizens into a way to avoid systemic change, either by using their ability to intimately monitor so-called “revolutionaries” to short-circuit legitimate protest, or by addressing the public’s concerns in only the most superficial of ways.

The long-term solution to the new era of urban mankind is to give people who live in cities the tools, including increasingly sophisticated tools of data gathering and simulation, to control their own fates; to find ways to connect revolutionary movements to progressive forces in societies where cities are not failing, along with their tools for dealing with all the social and environmental problems cities face; and above all, to convince the wealthy to support such efforts, both in their own localities and on a global scale. For the attempt at total control of a complex entity like a city through the tools of the security state, like the paper-flat utopian cities of the state worshipers of days past, is an attempt to build a castle in the sky.


Don’t Be Evil!

Panopticon Prisoner kneeling

However interesting a work it is, Eric Schmidt and Jared Cohen’s The New Digital Age is one of those books where, if you come to it as a blank slate, you’ll walk away with a very distorted chalk drawing of what the world actually looks like. Above all, you’ll walk away with the idea that intrusive and questionable surveillance is something the other guys do, the bad guys, not the American government or US corporations, and certainly not Google, where Schmidt sits as executive chairman. Much ink is spilt explaining the egregious abuses of Internet freedom by countries like China and Iran, or what in the vast majority of cited cases are abuses by non-Western companies, but when it comes to the US itself or any of its corporations engaging in similar practices, the book is eerily silent.

I may not know what a mote is, but I do know I am supposed to pluck my own out of my eye first. Only then can I get seriously down to the business of pointing out the other guy’s mote, or even helping him yank it out.

The New Digital Age (I’ll call it the NDA from here on to shorten things up) is full of the most reasonable and vanilla sort of advice on the need to balance our conflicting needs for security and privacy, but given its silence on the question of what the actual security/surveillance system in the US actually is, we’re left without the information needed to make such judgements. Let me put that silence in context.

The publication date of the NDA was April 23, 2013. The smokescreen of conspicuous-for-their-absence facts that are never discussed extends not only forward in time, something to be expected given that the Snowden revelations were only weeks away (the first stories broke in June 2013), but, more disturbingly, backward in time as well. That is, Schmidt and Cohen couldn’t really be expected, legally if not morally, to discuss the revelations Snowden would later bring to light. Still, they should be expected to have addressed serious claims about the relationship between American technology companies and the US security state that were already public knowledge.

There had been extensive reporting on the intersection of technology and US government spying since at least 2010, and not by Montana survivalists or persons camped out at Area 51, but by hard-hitting journalists with decades covering national security; namely, the work of Dana Priest and the Washington Post. If my memory and the book’s index serve me, neither Priest nor the Post is mentioned in the NDA.

Over a year before the NDA was published, Wired’s James Bamford had written a stunning piece on the NSA’s construction of its huge data center in Bluffdale, Utah, the goal of which was to suck up and store indefinitely the electronic records of all of us, which is the main thing we are arguing about. The main debate is over whether the government has the right to force private companies to provide all the digital data on their customers, which the government will then synthesize, organize and store. If you’re an American, you’re lucky enough to have the government require a warrant to look at your records (although the court in charge of this, the FISA court, is not really known for turning such requests down). If you’re unlucky enough not to be an American, then the government can peruse your records whenever the hell it wants to, thank you very much.

The NSA gets two pages devoted to it in the NDA’s 257 pages, both of which are about how open-minded and clever the agency is for hiring pimply-faced hackers. Say what?

The more I think about what had to be the deliberate silence that runs throughout the whole of the NDA, the more infuriating it becomes, but at least now Google et al. have gotten religion, or so I hope. On December 9, 2013, Google, Facebook, Apple, Microsoft, Twitter, Yahoo, LinkedIn, and AOL sent an open letter to the White House urging new restrictions on the government’s ability to seize, use and store information gleaned from them. This is a hopeful sign, but I am not sure we should be handing out Liberty Medals just yet.

For one, this move against the government was inspired not by civil libertarians or even robust reporting, but by threats to the very business model on which the companies who signed the document are based. As The Economist puts it:

The entire business model of firms like Google, FaceBook and Twitter relies on harvesting intimate information provided by users and then selling that data on to advertisers.

It was private firms that persuaded people to give up lists of their friends, their most sensitive personal communications, and to constantly broadcast their location in real-time. If you had told even the nosiest spook in 1983 that within 30 years much of the populace would be carrying around a tracking device that kept a permanent record of everywhere they had ever visited, he’d have thought you mad.

Let’s say you’re completely comfortable with the US government keeping such records on you. Perhaps the majority of Americans are unconcerned and think it the price of safety. But I doubt Americans would feel as blasé if it were the Chinese or the Russians, or heaven forbid the French, or any other government whose apparatchiks could go through their online personal and financial records at will. Therein lies the threat to American companies whose ultimate aspirations are global. Companies seen, rightly or wrongly, as tools of the US government will lose the trust not mainly of US citizens but of international customers. An ensuing race to the exits and nationalization of the Internet would most likely be driven not by Iranian mullahs or a testosterone-charged Vladimir Putin paddling around in a submersible like a Bond villain, but by Western Europeans and other democratic societies already uncomfortable with the idea that corporations should be trusted by individuals who had made themselves as transparent as the Utah sky.

The Germans, to take one example, were already freaked out by Google Street View, of all things, and managed to have the company abandon that service there. Revulsion at the Snowden revelations is perhaps the one thing that unites the otherwise bickering nationalities of the EU. TED, an event that began as a Silicon Valley lovefest, looked a lot different when held in Brussels in October, with Mikko Hypponen urging Europeans to secede from American Internet infrastructure and create their own open-source platforms. It’s the fear of being thought downright Orwellian that seems most likely to have inspired Google’s move to abandon facial recognition on Google Glass.

With the Silicon Valley letter we might think we’re in the home stretch of this struggle to re-establish the right to privacy, but the sad fact is this fight is just beginning. As The Economist pointed out, none of the giants that provide the hardware and “plumbing” for the Internet, such as Cisco and AT&T, signed the open letter; they are, it seems, less afraid of losing customers because they are national brick-and-mortar companies in a way the eight signatories of the open letter to the Obama Administration are not. For civil libertarians to win this fight, Americans have to not only get those hardware companies on board but compel the government to deconstruct a massive amount of spying infrastructure.

That is, we need to get the broader American public to care enough to exert sustained pressure on the government and some of the richest companies in the country to reverse course. Otherwise, the NSA facility at Bluffdale will continue sucking up its petabytes of overwhelmingly useless information like some obsessive Mormon genealogist until the mechanical leviathan lurches into obsolescence or is felled by the shepherd’s stone of better encryption.

The NSA facility that stands today in the Utah desert may offer a treasure trove for the historian of the far future, a kind of massive junkyard of collective memory filled with all our sense and nonsense. If we don’t get our act together, it will also be a historical monument to the failure of our two-centuries-and-some-old experiment with freedom.

Maps: How the Physical World Conquered the Virtual

Fortuna, or Fortune

If we look back to the early days when the Internet was first exploding into public consciousness, in the 1980s, and even more so in the boom years of the 90s, what we often find is a kind of utopian sentiment around this new form of “space.” It wasn’t only that a whole new plane of human interaction seemed to be unfolding into existence almost overnight; it was that “cyberspace” seemed poised to swallow the real world, a prospect some viewed with hopeful anticipation and others with doom.

Things have not turned out that way.

William Gibson, the science fiction author who coined the term “cyberspace” in his classic Neuromancer, himself thinks that when people look back on the era when the Internet emerged, what will strike them as odd is how we could have confused ourselves into thinking that the virtual world and our work-a-day one were somehow distinct. Gibson characterizes this as the conquest of the real by the virtual. Yet one can see how what has happened is better thought of as the reverse by taking even a cursory glance at our early experience and understanding of cyberspace.

Think back, if you are old enough to remember, to when the online world was supposed to be one where a person could shed their necessarily limited real identity for a virtual one. There were plenty of anecdotes, not all of them insidious, of people faking their way through a contrived identity the unsuspecting thought was real: men coming across as women, women as men, the homely as the beautiful. Cyberspace seemed to level traditional categories and the limits of geography. A poor adolescent could hobnob with the rich and powerful. As long as one had an Internet connection, country of origin and geographical location seemed irrelevant.

It should not come as any surprise, then, that an early digital-reality advocate such as Nicole Stenger could end her 1991 essay Mind is a Leaking Rainbow with the utopian flourish:

According to Sartre, the atomic bomb was what humanity had found to commit collective suicide. It seems, by contrast, that cyberspace, though born of a war technology, opens up a space for collective restoration, and for peace. As screens are dissolving, our future can only take on a luminous dimension! / Welcome to the New World! (58)

Ah, if only.

Even utopian rhetoric was sometimes tempered with dystopian fears. Here is Mark Pesce, the inventor of VRML, in his 1997 essay Ignition:

The power over this realm has been given to you. You are weaving the fabric of perception in information perceptualized. You could – if you choose – turn our world into a final panopticon – a prison where all can be seen and heard and judged by a single jailer. Or you could aim for its inverse, an asylum run by the inmates. The esoteric promise of cyberspace is of a rule where you do as you will; this ontology – already present in the complex system known as Internet – stands a good chance of being passed along to its organ of perception.

The imagery of a “final panopticon” is doubtless too morbid for us at this current stage, whatever the dark trends. What is clear, though, is that cyberspace is a dead metaphor for what the Internet has become; we need a new one. I think we could do worse than the metaphor of the map. For what the online world has ended up being is less an alternative landscape than a series of cartographies by which we organize our relationship with the world outside our computer screens, a development with both liberating and troubling consequences.

Maps have always been reflections of culture and power rather than reflections of reality. The fact that medieval maps in the West had Jerusalem at their center expressed not a geographic but a spiritual truth, although few understood the difference. During the Age of Exploration, what we might think of as realistic maps were really navigational aids for maritime trading states, a latent fact present in what the mapmakers found important to display and explain.

The number and detail of maps, along with the science of cartography, rose in tandem with the territorial anchoring of the nation-state. As James C. Scott points out in his Seeing Like a State, maps were among the primary tools of the modern state, whose ambition was to make what it aimed to control “legible” and thus open to understanding by bureaucrats in far-off capitals and their administrations.

What all of this has to do with the fate of cyberspace, the world where we live today, is that the Internet, rather than offering us an alternative version of physical space and an escape hatch from its problems, has instead evolved into a tool of legibility. What is made legible in this case is us. Our own selves and the micro-worlds we inhabit have become legible to outsiders. Most of the time these outsiders are advertisers who target us based on our “profile,” but sometimes this quest to make individuals legible is undertaken by the state, not just in the form of standardized numbers and universal paperwork but in terms of the kinds of information a state could once obtain only by interrogation, the state’s first crack at making individuals legible.

A recent book by Google executive chairman Eric Schmidt, co-authored with foreign policy analyst Jared Cohen- The New Digital Age- is chock-full of examples of corporate advertisers’ and states’ new powers of legibility. They write:

The key advance ahead is personalization. You’ll be able to customize your devices- indeed much of the technology around you- to fit your needs, so that the environment reflects your preferences.

At your fingertips will be an entire world’s worth of digital content, constantly updated, ranked and categorized to help you find the music, movies, shows, books, magazines, blogs and art you like. (23)

Or as journalist Farhad Manjoo quotes Amit Singhal of Google:

I can imagine a world where I don’t even need to search. I am just somewhere outside at noon, and my search engine immediately recommends to me the nearby restaurants that I’d like because they serve spicy food.

There is a very good reason why I did not use the word “individuals” in place of “corporate advertisers” above- a question of intent. Whose interest does the use of such algorithms to make the individual legible ultimately serve? If it were my interest, then search algorithms might tell me where I can get a free or even pirated copy of the music, videos, etc. I will like so much. It might remind me of my debts, and how much I would save if I skipped dinner at the local restaurant and cooked my quesadillas at home. Google and all its great services, along with similar tech giants aiming to map the individual such as Facebook, aren’t really “free”. While using them I am renting myself to advertisers. All maps are ultimately political.

With the emergence of mobile technology and augmented reality, the physical world has wrestled the virtual one to the ground as Jacob did the angel. Virtual reality is now repurposed to ensconce each of us in our own customized micro-world. Like history? Then maybe your smartphone or Google Glass will bring everything historical around you into relief. The same goes if you like cupcakes and pastry, or strip clubs. These customized maps already existed in our own heads, but now we have the tools for our individualized cartography- the only price being constant advertisements.

There’s even a burgeoning movement among the avant-garde, if there can still be said to be such a thing, against this kind of subjection of the individual to corporate-dictated algorithms and logic. Inspired by mid-20th century leftists such as Guy Debord, with his Society of the Spectacle, practitioners of what is called psychogeography are creating and using apps such as Drift, which leads the individual on unplanned walks around their own neighborhood, or Random GPS, which has your car’s navigation system remind you of the joys of getting lost.

My hope is that we will see other versions of these algorithm inverters and breakers and not just when it comes to geography. How about similar things for book recommendations or music or even dating? We are creatures that sometimes like novelty and surprise, and part of the wonder of life is fortuna-  its serendipitous accidents.

Yet, I think these tools will most likely ramp up the social and conformist aspects of our nature. We shouldn’t think they will be limited to corporate persuaders. I can imagine “Catholic apps” that allow one to monitor one’s sins, and a whole host of funny and not so funny ways groups will use the new methods of making the individual legible to tie her even closer to the norms of the group.

A world where I am surrounded by a swirl of constant spam, or helpful and not-so-helpful suggestions, the minute I am connected- indeed, a barrage that never ends except when I am sleeping, because I am always connected- may be annoying, but it isn’t all that scary. It’s when we put these legibility tools in the hands of the state that I get a little nervous.

As Schmidt and Cohen point out, one of the most advanced forms of such efforts at mapping the individual is an entity called Plataforma México, essentially a huge database that is able to identify any individual and tie them to their criminal record.

Housed in an underground bunker in the Secretariat of Public Security compound in Mexico City, this large database integrates intelligence, crime reports and real time data from surveillance cameras and other inputs from across the country. Specialized algorithms can extract patterns, project social graphs and monitor restive areas for violence and crime as well as for natural disasters and other emergencies.  (174)

The problem I have here is the blurring of the line between the methods used for domestic crime and those used for more existential threats, namely war. Given that crime in the form of the drug war is an existential threat for Mexico, this might make sense, but the same types of tools are being perfected by authoritarian states such as China, which faces not an existential threat but growing pressure for reform, and also in what are supposed to be free societies like the United States, where a non-existential threat in the form of terrorism- however horrific, actually and potentially- is met with similar efforts by the state to map individuals.

Schmidt and Cohen point out that there is a burgeoning trade between autocratic countries and the companies busy perfecting the world’s best spyware. An Egyptian firm, Orascom, owns a 25 percent share of the panopticonic sole Internet provider in North Korea. (96) Western companies are in the game as well, the British Gamma Group’s sale of spyware technology to Mubarak’s Egypt being just one recent example.

Yet, if corporations and the state are busy making us legible, there has also been a democratization of the capacity for such mapmaking, which is perhaps one of the reasons why states are finding governance so difficult. Real communities have become almost as easy to create as virtual ones, because all such communities are merely a matter of making and sustaining human relationships and understanding their maps.

Schmidt and Cohen imagine virtual governments-in-exile waiting in the wings to strike at the propitious moment. Political movements can be created off the shelf, supported by their own ready-made media entities, and the authors picture socially conscious celebrities and wealthy individuals running with this model in response to crises. Every side in a conflict can now have its own media wing whose primary goal is to shape and own the narrative. Even whole bureaucracies could be preserved from destruction by keeping their maps and functions in the cloud.

Sometimes virtual worlds remain limited in the way they affect the lives of individuals and stay politically silent. A popular massively multiplayer game such as World of Warcraft may have as much influence on an individual’s life as other invisible kingdoms, such as those of religion. An imagined online world becomes real the moment its map is taken as a prescription for the physical world. Are groups like Hizb ut-Tahrir, which aims at the establishment of a pan-Islamic caliphate, or the League of the South, which promotes a second secession of American states, “real” political organizations or fictional worlds masquerading as political movements? I suppose only time will tell.

Whatever the case, society seems torn between the mapmakers of the state who want to use the tools of the virtual world to impose order on the physical and an almost chaotic proliferation using the same tools by groups of all kinds creating communities seemingly out of thin air.

All this puts me in mind of nothing so much as China Miéville’s classic of New Weird fiction, The City & the City. It’s a crime novel with the twist that it takes place in two cities- Beszel and Ul Qoma- that exist in the same physical space, superimposed on top of one another. No doubt Miéville was interested in telling a good story and getting us thinking about questions of borders and norms, but it’s a pretty good example of the mapping I’ve been talking about- even if it is an imagined one.

In The City & the City an inhabitant of Beszel isn’t allowed to see or interact with what’s going on in Ul Qoma, and vice versa; otherwise they commit a crime called “breach,” and a whole secretive bi-city agency, also called Breach, monitors and prosecutes those infractions. There’s even an imaginary (we are led to believe) third city, “Orciny,” that exists on top of Beszel and Ul Qoma and secretly controls the other two.

This idea of multiple identities- consumer, political- overlaying the same geographical space seems a perfect description of our current condition. What is missing here, though, are the sharp borders imposed by Breach. Such borders might appear more quickly, and in different countries than one might have supposed, thanks to the recent revelations that the United States has been treating the Internet and its major American companies like satraps. Only now has Silicon Valley woken up to the fact that its close relationship with the American security state threatens its “transparency”-based business model with suicide. The re-imposition of state sovereignty over the Internet would mean a territorialization of the virtual world- a development that would truly constitute its conquest by the physical. To those possibilities I will turn next time…

Shedding Light on The Dark Enlightenment

Eye of Sauron

There has been some ink spilt lately at the IEET over a new movement that goes by the Tolkienesque name, I kid you not, of the Dark Enlightenment, whose adherents are also called neo-reactionaries. Khannea Suntzu has looked at the movement from the standpoint of American collapse, and David Brin within the context of a rising oligarchic neo-feudalism.

I have my own take on the neo-reactionary movement, somewhat distinct from that of either Suntzu or Brin, which I will get to below, but first a recap. Neo-reactionaries are a relatively new group of thinkers on the right who in general want to abandon the modern state, built as it is around the pursuit of social welfare, for lean-and-mean governance by business types who, in their view, know how to make the trains run on time. They are sick of having to “go begging” to the political class in order to get what they want done. They hope to cut out the middleman. It’s obvious that oligarchs run the country, so why not be honest about it and give them the reins of power? We could even appoint a national CEO- if the country remains in existence we could call him the king. Oh yeah, on top of that we should abandon all this racial and sexual equality nonsense. We need to get back to the good old days when the color of a man’s skin and having a penis really meant something- put the “super” back in superior.

At first blush the views of those hoping to turn the lights out on enlightenment (anyone else choking on an oxymoron?) appear something like those of the kind of annoying cousin you try to avoid at family reunions. You know, the kind of well-off white guy who thinks the Civil Rights Movement was a communist plot, calls your wife a “slut” (their words, not mine) and thinks the real problem with America is that we give too much to people who don’t have anything and don’t lock up or deport enough people with skin any darker than Dove soap. Such people are the moral equivalent of flat-earthers, with no real need to take them seriously, though they can make for some pretty uncomfortable table conversation and are best avoided like a potato salad that has been out too long in the sun.

What distinguishes neo-reactionaries from run-of-the-mill dittoheads or military types with a taste for Doc Martens or short pants is that they tend to be latte-drinking Silicon Valley nerds who have some connection to both the tech and transhumanist communities.

That should get this audience’s attention.

To continue with the analogy from above: it’s as if your cousin had a friend, let’s just call him, totally at random here… Peter Thiel, who had a net worth of 1.5 billion and was into, among other things, working closely with organizations such as the NSA through a data-mining firm he owned- we’ll call it Palantir (damned Frodo Baggins again!)- and who serves as a deep pocket for groups like the Tea Party. Just to go all conspiracy on the thing, let’s make your cousin’s “friend” a sitting member of something we’ll call the Bilderberg Group, a secretive cabal of the world’s bigwigs who get together to talk about what they would really like done in the world. If that were the case, the last thing you should do is leave your cousin ranting to himself while you make off for another plate of Mrs. T’s pierogies. You should take the maniac seriously, because he might just be sitting on enough cash to make his atavistic dreams come true and put you at risk of sliding off a flattened earth.

All this might put me at risk of being accused of lobbing one too many ad hominems, so let me put some meat on the bones of the neo-reactionaries. The Super Friends, or I guess it should be Legion of Doom, of neo-reaction can be found on the website Radish, where the heroes of the dark enlightenment are laid out in the format of Dungeons & Dragons or Pokémon cards (I can’t make this stuff up). Let’s just start with the most unfunny and disturbing part of the movement- its open racism and obsession with the 19th-century pseudo-science of dysgenics.

Here’s James Donald, who from his card I take to be a dwarf, or perhaps an elf- I’m not sure what the difference is- who likes to fly on a winged tauntaun like those in The Empire Strikes Back.

To thrive, blacks need simpler, harsher laws, more vigorously enforced, than whites.  The average black cannot handle the freedom that the average white can handle. He is apt to destroy himself.  Most middle class blacks had fathers who were apt to frequently hit them hard with a fist or stick or a belt, because lesser discipline makes it hard for blacks to grow up middle class.  In the days of Jim Crow, it was a lot easier for blacks to grow up middle class.

Wow, and I thought a country where one quarter of African American children will have experienced at least one of their parents behind bars- thousands of whom will die in prison for nonviolent offenses- was already too harsh. I guess I’m a patsy.

Non-whites aren’t the only ones who come in for derision by the neo-reactionaries, a fact that can be summed up by the post title of one of their minions, Alfred W. Clark, who writes the blog Occam’s Razor: “Are Women Who Tan Sluts?” There’s no need to say anything more to realize poor William of Occam is rolling in his grave.

Beyond this neo-Nazism-for-nerds quality, neo-reactionaries can make one chuckle, especially when it comes to “policy innovations” such as bringing back kings.

Here’s the modern-day Beowulf Mencius Moldbug:

What is England’s problem?  What is the West’s problem?  In my jaundiced, reactionary mind, the entire problem can be summed up in two words – chronic kinglessness.  The old machine is missing a part.  In fact, it’s a testament to the machine’s quality that it functioned so long, and so well, without that part.

Yeah, that’s the problem.

Speaking of atavists, one thing that has always confused me about the Tea Party is that I have never been sure which imaginary “golden age” they want us to return to. Is it before desegregation? Before FDR? Prior to the creation of the Federal Reserve (1913)? Or maybe it’s back to the antebellum South? Or back to the Articles of Confederation? Well, at least the neo-reactionaries know where they want to go- back before the American Revolution. Obviously, since this whole democracy thing hasn’t worked out, we should bring back the kings, which makes me wonder if these guys hold mourning parties on Bastille Day.

Okay, so the dark voices behind neo-reaction are a bunch of racist/sexist nerds who have a passion for kings and like to be presented as characters on D&D cards. They have some potentially deep pockets, but other than that troubling fact, why should we give them more than a few seconds of serious thought?

Now I need to exchange my satirical cap for my serious one, for the issues are indeed serious. I think understanding neo-reaction is important for two reasons: neo-reactionaries are symptomatic of deeper challenges and changes occurring politically, and they have appeared as a response to, and on the cusp of, a change in our relationship to Silicon Valley, a region that has been the fulcrum of technological, economic and political transformation over the past generation.

Neo-reaction shouldn’t be viewed in a vacuum. It has appeared at a time when the political and economic order we have had since at least the end of the Second World War- which combines representative democracy, capitalist economics and some form of state-supported social welfare (social democracy)- is showing signs of its age.

If this were happening just in the United States, whose 224-year-old political system emerged before almost everything we take to be modern- to take a list at random: universal literacy, industrialization, railroads, telephones, human flight, the theory of evolution, psychoanalysis, quantum mechanics, genetics, “the Bomb,” television, computers, the Internet and mobile technology- then we might be able, as some have, to blame our troubles on an antiquated political system. But the creaking is much more widespread.

We have the upsurge in popularity of the right in Europe, such as that seen in France with its National Front. Secessionist movements are gaining traction in the UK. The right in the form of Hindu nationalism, under the particularly obnoxious figure of Narendra Modi, is poised to win India’s elections. There is the implosion of states in the Middle East such as Syria, and revolution and counter-revolution in Egypt. There are rising nationalist tensions in East Asia.

All this is coming against the backdrop of rising inequality. The markets are soaring, no doubt pushed up by the flood of money being provided by the Federal Reserve, yet the economy is merely grinding along. Easy money is the de facto cure for our deflationary funk, pursued by all the world’s major central banks- in the US, the European Union and now, especially, Japan.

The far left has long abandoned the idea that 21st-century capitalism is a workable system, the differences being over what the alternative should be- whether communism of the old school, such as that of Slavoj Žižek, or the anarchism of someone like David Graeber. Leftists are one thing; the Pope is another, and you know a system is in trouble when the most conservative institution in history wants to change the status quo, as Pope Francis suggested when he recently railed against the inhumanity of capitalism and urged its transformation.

What in the world is going on?

If your house starts leaning, there’s something wrong with the foundation, so I think we need to look at the roots of our current problems by going back to the gestation of our system- that balance of representative democracy, capitalism and social democracy I mentioned earlier, whose roots can be found not in the 20th century but in the century prior.

The historical period that is probably most relevant for getting a handle on today’s neo-reactionaries is the late 19th century, when a rage for similar ideas infected Europe. There was Nietzsche in Germany and Dostoevsky in Russia (two reactionaries I still can’t get myself to dislike, both being so brilliant and tragic). There was Maurras in France and Pareto in Italy. The left, of course, also got a shot of B-12 here, with labor unions, socialist political parties and seriously left-wing intellectuals finally gaining traction. Marxism, whose origins lay earlier in the century, was coming into its own as a political force. You had writers of socialist fiction such as Edward Bellamy and Jack London surging in popularity. Anarchists were making their mark, though unfortunately largely through high-profile assassinations and bomb-throwing. A crisis was building even before the First World War, whose centenary we will mark next year.

Here’s historian J.M. Roberts, from his Europe 1880-1945, on the state of politics on the eve- not in the aftermath- of the outbreak of the First World War.

Liberalism had institutionalized the pursuit of happiness, yet its own institutions seemed to stand in the way of achieving the goal; liberal’s ideas could, it seemed, lead liberalism to turn on itself.

…the practical shortcomings of democracy contributed to a wave of anti-parliamentarianism. Representative institutions had for nearly a century been the shibboleth of liberalism. An Italian sociologist now stigmatized them ‘as the greatest superstition of modern times.’ There was violent criticism of them, both practical and theoretical. Not surprisingly, this went furthest in constitutional states where parliamentary institutions were the formal framework of power but did not represent social realities. Even where parliaments (as in France or Great Britain) had already shown they possessed real power, they were blamed for representing the wrong people and for being hypocritical shams covering self-interest. Professional politicians- a creation of the nineteenth century- were inevitably, it was said, out of touch with real needs.

Sounds familiar, doesn’t it?

Liberalism, by which Roberts means a combination of representative government and laissez-faire capitalism, including free trade, was struggling. Capitalism had obviously brought wealth and innovation, but also enormous instability and tension. The economy had a tendency to rocket toward the stars only to careen earthward and crash, leaving armies of the unemployed. The small-scale capitalism of earlier periods was replaced by continent-straddling bureaucratic corporations. The representative system, which had been based on fleeting mobilization during elections or crises, had yet to adjust to a situation where mass mobilization through the press, unions, or political groups was permanent and unrelenting.

The First World War almost killed liberalism. The Russian Revolution, the Great Depression, the rise of fascism and the Second World War were busy putting nails in its coffin when the adoption of social democracy and Allied victory in the war revived the corpse. Almost the entirety of the 20th century was a fight over whether the West’s hybrid system- which kept capitalism and representative democracy but tamed the former- could outperform state communism. And it did.

In the latter half of the 20th century the left got down to the business of extending the rights revolution to marginalized groups, while the right fought to dismantle many of the restrictions that had been put on the capitalist system during its time of crisis. This modus vivendi between left and right was all well and good while the economy was growing and while the extension of legal rights, rather than social rights, for marginalized groups was the primary issue, but by the early 21st century both of these thrusts were spent.

Not only was the right’s economic model challenged by the 2008 financial crisis; it had nowhere left to go in terms of realizing its dreams of minimal government and dismantling the welfare state without facing almost impossible electoral hurdles. The major government costs in the US and Europe were pensions and medical care for the elderly- programs that were virtually untouchable. The left, too, was realizing that abstract legal rights were not enough. Did it matter that the US had an African American president when one quarter of black children had experienced a parent in prison, or when a heavily African American city such as Philadelphia had a child poverty rate of 40 percent? Addressing such inequities was not an easy matter for the left, let alone the extreme changes that would be necessary to offset rising inequality.

Thus, ironically, the problem for both the right and the left is the same one- that governments today are too weak. The right needs an at least temporarily strong government to effect the dismantling of the state, whereas the left needs a strong government not merely to respond to the grinding conditions of the economic “recovery”, but to overturn previous policies, put in new protections and find some alternative to the current political and economic order. Dark enlightenment types and progressives are confronting the same frustration while having diametrically opposed goals. It is not so much that Washington is too powerful as that the power it has is embedded in a system which, as Mark Leibovich portrays brilliantly, is feckless and corrupt.

Neo-reactionaries tend to see this as a product of too much democracy, whereas progressives will counter that there is not enough. Here’s one of the princes of darkness himself, Nick Land:

Where the progressive enlightenment sees political ideals, the dark enlightenment sees appetites. It accepts that governments are made out of people, and that they will eat well. Setting its expectations as low as reasonably possible, it seeks only to spare civilization from frenzied, ruinous, gluttonous debauch.

Yet, as the experience of authoritarian societies such as Libya, Egypt and Syria shows (and even the authoritarian wonder-child China is feeling the heat), democratic societies are not the only ones undergoing acute stresses. The universal nature of the crisis of governance is brought home in a recent book by Moisés Naím. In The End of Power, Naím lays out how every large structure in society- armies, corporations, churches and unions- is seeing its power decline and being challenged by small and nimble upstarts.

States are left hobbled by smallish political parties and groups that act as spoilers, preventing governments from getting things done. Armies with budgets in the hundreds of billions of dollars are stymied by insurgents with IEDs made from garage door openers and cell phones. Long-lived religious institutions, most notably the Catholic Church, are losing parishioners to grassroots preachers, while massive corporations are challenged by Davids that come out of nowhere to upend their business models with a simple stone.

Naím has a theory for why this is happening. We are in the midst of what he calls the More, Mobility and Mentality Revolutions. Only the last of these is important for my purposes. Ruling elites are faced today with the unprecedented reality that most of their lessers can read. Not only that, the communications revolution which has fed the wealth of some of these elites has significantly lowered the barriers to political organization and speech. Any Tom, Dick and now Harriet can throw up a website and start organizing for or against some cause. The result is a sort of Cambrian explosion of political organization, and just as in any acceleration of evolution you’re likely to get some pretty strange mutants- and so here we are.

Some on the left are urging us to adjust our progressive politics to the new distributed nature of power. The writer Steven Johnson, in his recent Future Perfect: The Case for Progress in a Networked Age, calls collaborative efforts by small groups “peer-to-peer networks”, and in them he sees a glimpse of our political past (the participatory politics of the ancient Greek polis and late medieval trading states) becoming our political future. Is this too “reactionary”?

Peer-to-peer networks tend to bring local information back into view. The fact that traditional centralized loci of power, such as the federal government and national and international media, are often found lacking when it comes to local knowledge is a problem of scale. As Jane Jacobs pointed out, government policies are often best when crafted and implemented at the local level, where differences and details can be seen.

Wikipedia is a good example of Johnson’s peer-to-peer model, as is Kickstarter. In government we are seeing the spread of participatory budgeting, in which the local public is allowed to make budgetary decisions. There is also a relatively new concept known as “liquid democracy” that not only enables the creation of legislation through open-source platforms but allows people to delegate their votes, in the hope that citizens can avoid information overload by targeting their votes to the areas they care most about and, presumably for that reason, know best.
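The delegation mechanism at the heart of liquid democracy can be sketched in a few lines of Python. This is a hypothetical, simplified model of my own, not the code of any real platform: each voter either votes directly on an issue or hands their vote to a delegate, and a delegated vote follows the chain until it reaches someone who voted directly (delegation cycles are simply discarded).

```python
# A minimal sketch of liquid-democracy vote resolution (hypothetical model).
# direct_votes maps a voter to their choice; delegations maps a voter to the
# voter they have entrusted their vote to.

def tally(direct_votes, delegations):
    """Count votes after resolving delegation chains."""
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until we reach a direct vote or a cycle.
        while current in delegations and current not in direct_votes:
            if current in seen:   # delegation cycle: the vote is lost
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            choice = direct_votes[current]
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# Example: Carol delegates to Bob, who delegates to Alice, so Alice's
# direct "yes" ends up carrying three votes.
votes = {"alice": "yes", "dave": "no"}
delegates = {"bob": "alice", "carol": "bob"}
print(tally(votes, delegates))  # tallies 3 "yes" and 1 "no"
```

The interesting design question, visible even in this toy version, is what to do with broken chains: here a cycle simply loses the vote, whereas a real system would need a fallback rule.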

So far, peer-to-peer networks have been most successful at revolt- the Tea Party is peer-to-peer, as was Occupy Wall Street. Peer-to-peer politics was seen in the MoveOn movement and dealt defeat to recent legislation such as SOPA. Authoritarian regimes in the Middle East were toppled by crowdsourced gatherings in the street.

More recent than Johnson’s book is New York’s new progressive mayor Bill de Blasio’s experiment with participatory politics, his Talking Transition Tent on Canal Street. There, according to NPR, New Yorkers can:

….talk about what they want the next mayor to do. They can make videos, post videos and enter their concerns on 48 iPad terminals. There are concerts, panels on everything from parks to education. And they can even buy coffee and beer.

Democracy, coffee and beer- three of my favorite things!

On the one hand I love this stuff, but, me being me, I can’t help having some suspicions, and this relates, I think, to the second issue about neo-reactionaries I raised above; namely, that they reflect something going on in our relationship to Silicon Valley- a change in public perception of tech culture and its tools from hero and wonderworker to villain and illusionist.

As I have pointed out elsewhere, the idea that technology offered an alternative to the lumbering bureaucracies of state and corporation is embedded deep in the foundation myth of Silicon Valley. The use of Moore’s Law as a bridge to personalized communication technology was supposed to liberate us from the apparatchiks of the state and the corporation- remember Apple’s “1984” commercial?

It hasn’t quite turned out that way. Yes, we are in a condition of hyper economic and political competition largely engendered by technology, but it’s not quite clear that we as citizens have gained, rather than the “power centers” that use these tools against one another and sometimes against us. Can anyone spell NSA?

We also went from innovation, and thus potential wealth, being driven by guys in their garages to, on the American scene, five giants that largely own and control all of virtual space- Google, Facebook, Amazon, Apple and Microsoft- with upstarts such as Instagram being slurped up, like Jonah by the whale, the minute they show potential growth.

Rather than producing a telecommuting utopia with all of us working five hours a day from the comfort of our digitally connected homes, technology has led to a world where we are always “at work”, wages have not moved since the 1970s and the spectre of technological unemployment looms. Mainstream journalists such as John Micklethwait of The Economist are starting to see a growing backlash against Silicon Valley as the public becomes increasingly estranged from digerati who have not merely failed to deliver on their utopian promises, but are starving the government of revenue as they hide their cash in tax havens, all the while cosying up to the national security state.

Neo-reactionaries are among the first in Silicon Valley to see this backlash building- hence their only half-joking efforts to retreat to artificial islands or into outer space. Here is Balaji Srinivasan, whose speech was transcribed by one of the dark illuminati who goes by the moniker Nydwracu:

The backlash is beginning. More jobs predicted for machines, not people; job automation is a future unemployment crisis looming. Imprisoned by innovation as tech wealth explodes, Silicon Valley, poverty spikes… they are basically going to try to blame the economy on Silicon Valley, and say that it is iPhone and Google that done did it, not the bailouts and the bankruptcies and the bombings, and this is something which we need to identify as false and we need to actively repudiate it.

Srinivasan would have at least some evidence to marshal in Silicon Valley’s defense: elites there have certainly been socially conscious about global issues. Where I differ is on their proposed solutions. As I have written elsewhere, Valley bigwigs such as Peter Diamandis think the world’s problems can be solved by letting the technology train keep on rolling and by having winners such as himself devote their money and genius to philanthropy. This is unarguably a good thing; what I doubt, however, is that such techno-philanthropy can actually carry the load now borne by governments while, at the same time, those made super rich by capitalism’s creative destruction flee the tax man, leaving what’s left of government to be funded on the backs of a shrinking middle class.

As I have also written elsewhere, the original generation of Silicon Valley innovators is acutely aware of our government’s incapacity to do what states have always done- preserve the past, protect the present, and invest in the future. This is the whole spirit behind the Long Now Foundation of Stewart Brand, a saint of the digerati, in which I find very much to admire. The neo-reactionaries too have latched onto this short-term horizon of ours, only where Brand saw our time paralysis in a host of contemporary phenomena, neo-reactionaries think there is one culprit- democracy. Here again is dark prince Nick Land:

Civilization, as a process, is indistinguishable from diminishing time-preference (or declining concern for the present in comparison to the future). Democracy, which both in theory and evident historical fact accentuates time-preference to the point of convulsive feeding-frenzy, is thus as close to a precise negation of civilization as anything could be, short of instantaneous social collapse into murderous barbarism or zombie apocalypse (which it eventually leads to). As the democratic virus burns through society, painstakingly accumulated habits and attitudes of forward-thinking, prudential, human and industrial investment, are replaced by a sterile, orgiastic consumerism, financial incontinence, and a ‘reality television’ political circus. Tomorrow might belong to the other team, so it’s best to eat it all now.

The problem here is not that Land has dragged this interpretation of the effect of democracy straight out of Plato’s Republic- which he has- or that it is a “kid who eats the marshmallow leads to zombie apocalypse” reading of much more complex political relationships- which it is as well. Rather, it’s that there is no real evidence that it is true, and indeed the reason it’s not true might give pause to those truly on the radical left who would like to abandon the US Constitution for something more modern, and who see nothing special in its antiquity.

The study, of course, needs to be replicated, but a paper just out by Hal Hershfield, Min Bang and Elke Weber of New York University seems to suggest that the way to get a country to pay serious attention to long-term investments is to give it not a deep future but a deep past- and not just any past, but continuity with its current political system.

As Hershfield states it:

Our thinking is that the countries who have a longer past are better able see further forward into the future and think about extending the time period that they’ve already been around into the distant future. And that might make them care a bit more about how environmental outcomes are going to play out down the line.

And from further commentary on that segment:

Hershfield is not using the historical age of the country, but when it got started in its present form, when its current form of government got started. So he’s saying the U.S. got started in the year 1776. He’s saying China started in the year 1949.

Now, China, of course, though, is thousands of years old in historical terms, but Hershfield is using the political birth of the country as the starting point for his analysis. Now, this is potentially problematic, because for some countries like China, there’s a very big disparity in the historical age and when the current form of government got started. But Hershfield finds even when you eliminate those countries from the equation, there’s still a strong connection between the age of the country and its willingness to invest in environmental issues.

The very existence of strong environmental movements and regulation in democracies should be enough to disprove Land’s thesis about popular government’s “convulsive feeding-frenzy”. Democracies should have stripped their environments bare like a dog with a Thanksgiving turkey bone. Instead, the opposite has happened. Neo-reactionaries might respond with something about large hunting preserves maintained by kings, but the idea that kings were better stewards of the environment and of human beings (I refuse to call them “capital”) because they owned them as personal property can be countered with two words and a number: King Leopold II.

Yet we progressives need to be aware of the benefits of political continuity. The right, with their Tea Party and their powdered wigs, have seized American history. They are selling a revolutionary dismantling of the state and the deconstruction of hard-fought-for legacies in the name of returning to “purity”, but this history is ours as much as theirs, even if our version of it tends to be as honest about the villains as the heroes. Neo-reactionaries are people who have woken up to the reality that the conservative return to “foundations” has no future. All that is left for them is to sit around daydreaming that the American Revolution and all it helped spark never happened, and that kings still sat on their bedecked thrones.