City As Superintelligence

Medieval Paris

A movement is afoot to cover some of the largest and most populated cities in the world with a sophisticated array of interconnected sensors, cameras, and recording devices, able to track and respond to every crime or traffic jam, every crisis or pandemic, as if it were an artificial immune system spread out over hundreds of densely packed kilometers filled with millions of human beings. The movement goes by the name of smart-cities, or sometimes sentient cities, and the fate of the project is intimately tied to the fate of humanity in the 21st century and beyond, because the question of how the city is organized will define the world we live in from here forward- the beginning of the era of urban mankind.

Here are just a few of many possible examples of smart cities at work. There is the city of Songdo in South Korea, a kind of testing ground where companies such as Cisco can experiment with integrated technologies, to quote a recent article on the subject, such as:

TelePresence system, an advanced videoconferencing technology that allows residents to access a wide range of services including remote health care, beauty consulting and remote learning, as well as touch screens that enable residents to control their unit’s energy use.

Another example would be IBM’s Smart City Initiative in Rio, which has covered that city with a dense network of sensors and cameras that allow centralized monitoring and control of vital city functions, and which was somewhat brazenly promoted by that city’s mayor during a TED Talk in 2012. New York has set up a similar system, but it is in the non-Western world where smart cities will live or die, because that is where almost all of the world’s increasingly rapid urbanization is taking place.

Thus India, which has yet to urbanize like its neighbor, and sometimes rival, China, has plans to build up to 100 smart cities, with $4.5 billion of the funding for such projects being provided by perhaps the most urbanized country on the planet- Japan.

China continues to urbanize at a historically unprecedented pace, with 250 million of its people- the equivalent of the entire population of the United States a generation ago- set to move to its cities in the next 12 years. (I didn’t forget a zero.) There you have a city that few of us have even heard of- Chongqing- which The Guardian several years back characterized as “the fastest growing urban center on the planet,” with more people in it than the entire countries of Peru or Iraq. No doubt in response to urbanization pressure, and at least as of 2011, Cisco was helping that city with its so-called Peaceful Chongqing Project, an attempt to blanket the city in 500,000 video surveillance cameras- a collaboration that was possibly derailed by Edward Snowden’s allegations that the NSA had infiltrated or co-opted U.S. companies.

Yet there are other smart-city initiatives that go beyond monitoring technologies. Under this rubric should fall the renewed interest in arcologies- massive buildings that contain within them an entire city, and thus in principle allow a city to be managed, in terms of its climate, flows, etc., in the same way the internal environment of a skyscraper can be managed. China had an arcology in the works in Dongtan, which appears to have been scrapped over corruption and cost-overrun concerns. Abu Dhabi has its green arcology in Masdar City, but it is in Russia, in the grip of a 21st century version of czarism, of all places, where the mother of all arcologies is planned: architect Norman Foster’s Crystal Island, which, if actually built, would be the largest structure on the planet.

On the surface, there is actually much to like about smart-cities and their related arcologies. Smart-cities hold out the promise of greater efficiency for an energy-starved and warming world. They should allow city management to be more responsive to citizens. All things being equal, smart-cities should be better than “dumb” ones at responding to everything from common fires and traffic accidents to major man-made and natural disasters. If Wendy Orent is correct when she writes in a recent issue of Aeon that we have less to fear from pandemics emerging from the wilderness, such as Ebola, than from those that evolve in areas of extreme human density, smart-city applications should make the response to pandemics both quicker and more effective.

Especially in terms of arcologies, smart-cities represent something relatively new. We’ve had our two major models of the city since the early to mid-20th century: the skyscraper cities pioneered by New York and Chicago, and the cul-de-sac suburban sprawl of automobile-dominated cities like Phoenix. Cities going up now in the developing world certainly look more modern than American cities, much of whose infrastructure is in a state of decay, but the model is the same, with the marked exception of all those super-trains.

All that said, there are problems with smart-cities, and the thinker who has written most extensively on the subject, Anthony M. Townsend, lays them out excellently in his book Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia. Townsend sees three potential problems with smart-cities- they might prove, in his terms, “buggy, brittle, and bugged.”

Like all software, the kind that will be used to run smart-cities might exhibit unwanted surprises. We’ve seen this in some of the most sophisticated software we have running: financial trading algorithms whose “flash crashes” have felled billion-dollar companies.

The loss of money, even a great deal of money, is something any reasonably healthy society should be able to absorb, but what if buggy software made essential services go offline for an extended period? Cascades across services now coupled by smart-city software could take out electricity and communication during a heat wave or a re-run of last winter’s “polar vortex” and lead to loss of life. Perhaps having functions separated in silos and with a good deal of redundancy, even at the cost of inefficiency, is “smarter” than having them tightly coupled and under one system. That is, after all, how the human brain works.

Smart-cities might also be brittle. We might not be able to see that we had built a precarious architecture that could collapse in the face of some stressor or effort to intentionally harm- ahem- Windows. Computers crash, and sometimes do so for reasons we are completely at a loss to identify. Or, imagine someone blackmailing a city by threatening to shut it down after having hacked its management system. Old-school dumb-cities don’t really crash, even if they can sicken and die, and it’s hard to say they can be hacked.

Would we be in danger of decreasing a city’s resilience by compressing its complexity into an algorithm? If something like Stephen Wolfram’s principles of computational equivalence and computational irreducibility is correct, then the city is already a kind of computation, and no model we can create of it will ever be more effective than this natural computation itself.
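To make the idea of computational irreducibility a little more concrete, here is a minimal toy sketch- my own illustration, not anything from Wolfram or Townsend- using Wolfram’s Rule 30 cellular automaton. In general, the only way to know the system’s state after n steps is to run all n steps; there is no shortcut formula, which is the sense in which a city-as-computation might resist being compressed into a simpler, faster model.

```python
# Toy illustration of computational irreducibility (assumption: Rule 30, one of
# Wolfram's elementary cellular automata, stands in for the "city"). To know the
# state after n steps you must, in general, compute all n intermediate steps.

def rule30_step(cells):
    """Advance one row of the Rule 30 cellular automaton (wrap-around edges)."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])  # Rule 30 update
        for i in range(n)
    ]

def run(width=64, steps=20):
    # Start from a single "on" cell in the middle and print each generation.
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)
    return row

if __name__ == "__main__":
    run()
```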

Or, to make my meaning clearer, imagine a person who had suffered some horrible accident where, to save him, you had to replace all of his body’s natural information processing with a computer program. Such a program would have to regulate everything from breathing to metabolism to muscle movement, along with the immune system and the exchange between neurons, not to mention a dozen other things. My guess is that you’d have to go through many generations of such programs before they were anywhere near workable; the first generations would miss important elements, be based on wrong assumptions about how things worked, and be loaded with perhaps catastrophic design errors that you couldn’t identify until the program had been fully run through multiple iterations.

We are blissfully unaware that we are the product of billions of years of “engineering” in which “design” failures were weeded out by evolution. Cities have only a few thousand years of a similar type of evolution behind them, but trying to control a great number of their functions via algorithms run by “command centers” might pose risks similar to those in my body example. Reducing city functions to something we can compute in silicon might oversimplify the city in such a way as to reduce its resilience to stressors cities have naturally evolved to absorb. That is, there is, in all use of “Big Data,” a temptation to interpret reality only in light of the model that scaffolds this data, or to reframe problems in ways that can be mathematically modeled. We set ourselves up for crises when we confuse the map with the territory, or as Jaron Lanier said:

What makes something real is that it is impossible to represent it to completion.

Lastly, and as my initial examples of smart-cities should have indicated, smart-cities are by design bugged. They are bugged so as to surveil their citizens in an effort to prevent crime or terrorism, or even just to respond to accidents or disasters. Yet the promise of safety comes at the cost of one of the virtues of city living- the freedom granted by anonymity. But even if we care nothing for such things, I’ve got news- trading privacy for security doesn’t even work.

Chongqing may spend tens of millions of dollars installing CCTV cameras, but would-be hooligans or criminals, or just people who don’t like being watched, such as those in London, have a twenty-dollar answer to all these gizmos- it’s called a hoodie. Likewise, a simple pattern of dollar-store facepaint, strategically applied, can short-circuit the most sophisticated facial recognition software. I will never cease to be amazed at human ingenuity.

We need to acknowledge that it is largely companies and individuals with extremely deep pockets and even deeper political connections that are promoting this model of the city. Townsend estimates it is potentially a $100 billion business. We need to exercise our historical memory and recall how it was automobile companies that lobbied for and ended up creating our world of sprawl. Before investing millions or even billions, cities need to have an idea of what kind of future they want and not be swayed by the latest technological trends.

This is especially the case when it comes to cities in the developing world, where conditions often resemble something more out of Dickens’s 19th century than even the 20th. When I enthusiastically asked a Chinese student about the arcology at Dongtan he responded with something like “Fools! Who would want to live in such a thing! It’s a waste of money. We need clean air and water, not such craziness!” And he’s no doubt largely right. And perhaps we might be happy that the project ultimately unraveled and say with Thoreau:

As for your high towers and monuments, there was a crazy fellow once in this town who undertook to dig through to China, and he got so far that, as he said, he heard the Chinese pots and kettles rattle; but I think that I shall not go out of my way to admire the hole which he made. Many are concerned about the monuments of the West and the East — to know who built them. For my part, I should like to know who in those days did not build them — who were above such trifling.

The age of connectivity, if it’s done thoughtfully, could bring us cities that are cleaner, greener, more able to deal with the shocks of disaster or respond to the spread of disease. Truly smart-cities should support a more active citizenry, a less tone-deaf bureaucracy, and a more socially and culturally rich, entertaining and civil life- the very reasons human beings have chosen to live in cities in the first place.

If the age of connectivity is done wrong, cities will have poured scarce resources down a hole of corruption as deep as the one dug by Thoreau’s townsman, will have turned vibrant cultural and historical environments into corporate “flat-pack” versions of tomorrowland, and, most frighteningly of all, will have turned the potential democratic agora of the city into a massive panopticon of Orwellian monitoring and control. Cities, in the Wolfram rather than the Bostrom sense, are already a sort of superintelligence, or better, a hive mind of their interconnected yet free individuals, more vibrant and important than any human-built structure imaginable. Will we let them stay that way?

How our police became Storm-troopers

Ferguson Riot Police

Scott Olson Getty Images via: International Business Times

The police response to protests and riots in Ferguson, Missouri was filled with images that have become commonplace all over the world in the last decade: police dressed in once-futuristic military gear confronting civilian protesters as if they were a rival army. The uniforms themselves put me in mind of nothing so much as the storm-troopers from Star Wars. I guess that would make the rest of us the rebels.

A democracy has entered a highly unstable state when its executive elements- the police and security services the community pays for through its taxes, which exist for the sole purpose of protecting and preserving that very community- are turned against it. I would have had only a small clue as to how this came about were it not for a rare library accident.

I was trying to get out a book on robots in warfare for a project I am working on, but had grabbed the book next to it by mistake. Radley Balko’s The Rise of the Warrior Cop has been all over the news since Ferguson broke, and I wasn’t the first to notice it: within a day or two of the crisis the book was recalled. The reason is that Ferguson has focused public attention on an issue we should have been grappling with for quite some time- the militarization of America’s police forces. How that came about is the story The Rise of the Warrior Cop lays out cogently and with power.

As Balko explains, much of what we now take as normal police functions would likely have been viewed by the Founders as “a standing army,” something they were keen to prevent. In addition to the fact that Americans were incensed by the British use of soldiers to exercise police functions, the American Revolution had been inspired in part by the British use of “general warrants” that allowed them to bust into and search American homes in their battle against smuggling. From its beginning the United States has had a tradition of separation between military and police power, along with a tradition of limiting police power; indeed, this is the reason our constitutional government exists in the first place.

Balko points out how the U.S., as it developed its own police forces- something that became necessary with the country’s urbanization and modernization- maintained these traditions, which only fairly recently began to erode, largely beginning with the Nixon administration’s “law and order” policy and especially the “war on drugs” launched under Reagan.

In framing the problem of drug use as a war rather than a public health concern, we started down the path of using the police to enforce military-style solutions. If drug use is a public health concern, then efforts will go into providing rehabilitation services for addicts, addressing systemic causes and underlying perceptions, and legalization as a matter of personal liberty where doing so does not pose inordinate risk to the public. If the problem of drug use is framed as a war, then this means using kinetic action to disrupt and disable “enemy” forces. It means pushing to the limits of what is legally allowable when using force to protect one’s own “troops.” It means mass incarceration of captured enemy forces. Fighting a war means that training and equipment needs focus on the effective use of force and not on “social work.”

The militarization of America’s police forces that began in earnest with the war on drugs, Balko reminds us, is not an issue that can easily be reduced to conservative vs. liberal, Republican vs. Democrat. In the 1990s conservatives were incensed at police brutality and the misuse of military-style tactics at Waco and Ruby Ridge. Yet conservatives largely turned a blind eye to the same brutality when it was turned against anarchists and anti-globalization protestors in the Battle of Seattle in 1999. Conservatives have largely supported the militarized effort to stamp out drug abuse and the use of SWAT teams to enforce laws against non-violent offenders, especially illegal immigrants.

The fact that police were increasingly turning to military tactics and equipment was not, however, all an over-reaction. It was inspired by high-profile events such as the Columbine massacre and a dramatic robbery in North Hollywood in 1997. In the latter, the two robbers, Larry Phillips, Jr. and Emil Mătăsăreanu, wore body armor that police with light weapons could not penetrate. The 2008 attacks in Mumbai, in which a small group of heavily armed and well-trained terrorists were able to kill 164 people and temporarily cripple large parts of the city, should serve as a warning of what happens when police cannot rapidly deploy lethal force, as should a whole series of high-profile “lone wolf” style shootings. Police can thus rationally argue that they need access to heavy weapons when needed, along with SWAT teams and training for military-style contingencies. It is important to remember that the police daily put their lives at risk in the name of public safety.

Yet militarization has gone too far and is being driven more by security corporations and their lobbyists than by conditions in actual communities. If the drug war and attention-grabbing acts of violence were where the militarization of America’s police forces began, 9-11 and the wars in Afghanistan and Iraq acted as an accelerant on the trend. These events launched a militarized-police-industrial complex; the country was flooded with grants from the Department of Homeland Security, which funded even small communities to set up SWAT teams and purchase military-grade equipment. Veterans of wars that were largely wars of occupation and counter-insurgency were naturally attracted to using these hard-won skill sets in civilian life- which largely meant either becoming police or entering the burgeoning sector of private security.

So that’s the problem as laid out by Balko; what is his solution? For Balko, the biggest step we could take toward rolling back militarization is to end the drug war and stop using military-style methods to enforce immigration law. He would like to see a return to community policing- if not quite Mayberry, then at least something like the innovative program launched in San Antonio, which uses police as social workers rather than commandos in responding to mental-health-related crime.

Balko also wants us to end our militarized response to protests. There is no reason why protesters in a democratic society should be met by police wielding automatic weapons or dispersed through the use of tear gas. We can also stop the flood of federal funding being used by local police departments to buy surplus military equipment- something that the Obama administration, prompted by Ferguson, seems keen to review.

A positive trend that Balko sees is the ubiquity of photography and film permitted by smartphones, which allows protesters to capture brutality as it occurs- a right which everyone has, despite the insistence of some police in protest situations to the contrary, and which has been consistently upheld by U.S. courts. Indeed, the other potentially positive legacy of Ferguson, besides bringing the problem of police militarization into the public spotlight- for there is no wind so ill it does not blow some good- might be that it has helped launch true citizen-based and crowd-sourced media.

My criticism of The Rise of the Warrior Cop, to the extent I have any, is that Balko only tells the American version of this tale, but it is a story that is playing out globally. The inequality of late capitalism certainly plays a role in this. Wars between states have at least temporarily been replaced by wars within states. Global elites, who are more connected to their rich analogs in other countries than they are to their own nationals, find themselves turning to the large number of the middle class who are located, in one form or another, in the security services of the state. Elites pursue equally internationalized rivals, such as drug cartels and terrorist networks, like one would a cancerous tumor- wishing to rip it out by force- not realizing that this form of treatment is not getting to the root of the problem and might even end up killing the patient.

More troublingly, they use these security services to choke off mass protests by the poor and other members of the middle class- protests now enabled by mobile technologies- because they find themselves incapable of responding to the problems that initiated these protests with long-term political solutions. This relates to another aspect of the police militarization issue Balko doesn’t really explore, namely the privatization of police services, as those who can afford them retreat behind the fortress of private security while the conditions of the society around them erode.

Maybe there was a good reason that The Rise of the Warrior Cop was placed on the library shelf next to books on robot weapons after all. It may sound crazy, but perhaps in the not-so-far-off future elites will automate policing as they are automating everything else. Mass protests, violent or not, will be met not with flesh-and-blood policemen but with military-style robots and drones. And perhaps only then will once-middle-class policemen, made poor by the automation of their calling, realize that all this time they have been fighting on the wrong side of the rebellion.

A Cure for Our Deflated Sense of The Future

Progressland 1964 World's Fair

There’s a condition I’ve noted among former hard-core science-fiction fans that, for want of a better word, I’ll call future-deflation. The condition consists of an air of disappointment with and detachment from the present that emerges on account of the fact that the future one dreamed of in one’s youth has failed to materialize. It was a dream of what the 21st century would entail fostered by science-fiction novels, films and television shows, a dream that has not arrived and will seemingly never arrive- at least within our lifetimes. I think I have a cure for it, or at least a strong preventative.

The strange thing, perhaps, is that anyone would be disappointed in the fact that a fictional world has failed to become real in the first place. No one, I hope, feels like the present is constricted and dull because there aren’t any flying dragons in it to slay. The problem, then, might lie in the way science-fiction is understood in the minds of some avid fans- not as fiction, but as plausible future history, or even a sort of “preview” and promise of all the cool things that await.

Future-deflation is a kind of dulling hangover from a prior bout of future-inflation, when expectations got way out ahead of themselves. If mostly boys, now become men, feel let down by inflated expectations driven by what proved to be the Venetian sunset, rather than the beginning, of the space race- regarding orbital cities, bases on the moon and Mars, and a hundred other things- their experience is a little like that of girls, fed on a diet of romance, who have as adults tasted the bitter reality of love. Following the rule, I suppose I’d call it romance-deflation- cue the Viagra jokes.

Okay, so that’s the condition; how might it be cured? The first step to recovery is admitting you have a problem and identifying its source. Perhaps the main culprit behind future-deflation is the crack cocaine of CGI (and I really do mean CGI, as in computer-generated graphics, which I’ve written about before). Whereas late 20th century novels, movies, and television shows served as gateway drugs to our addiction to digital versions of the future, CGI and the technologies that will follow it are the true rush, allowing us to experience versions of the future that just might be more real than reality itself.

There’s a phenomenon discovered by social psychologists studying motivation: it’s a mistake to visualize your idealized outcomes too clearly, for doing so actually diminishes your motivation to achieve them. You get many of the same emotional rewards without any of the costs, and on that account never get down to doing the real thing. Our ability to create not just compelling but mind-blowing visualizations of technologies that are nowhere on the horizon has become so good, and will only get better, that it may be exacerbating disappointment with the present state of technology and the pace of technological change- increasing the sense of “where’s my jet pack?”

There’s a theory I’ve heard discussed by Brian Eno that the reason we haven’t been visited by any space aliens is that civilizations at a certain point fall into a state of masturbatory self-satisfaction. They stop actually doing things because imagining them becomes so much better and easier than the difficult and much less satisfying achievements experienced in reality.

The cure for future-deflation is really just adulthood. We need to realize that the things we would like to do and see done are hard and expensive and take long commitments over time- often far past our own lifetimes- to achieve. We need to get off our Oculus Rift-weighed-down asses and actually do stuff. Elon Musk with his SpaceX seems to realize this, but with a ticket to Mars projected to cost $500,000, one can legitimately wonder whether he’ll end up creating an escape hatch from earth for the very rich that the rest of us will be stuck gawking at on our big-screen TVs.

And therein lies the heart of the problem, for what matters for the majority of us is actually less what technologies are available in the future than the largely non-technological question of how such a future is politically and economically organized, which will translate into how many of us have access to these technologies.

The question we should be asking when thinking about what we should be doing now to shape the future is a simple and very human one: “What kind of world do I hope my grandchildren live in?” Part of the answer is going to involve technology and scientific advancement, but not as much of it as we might think. Other types of questions, dealing with issues such as the level of equality, peace and security, a livable environment, and the amount of freedom and purpose, are both more important and more capable of being influenced by the average person. These are things we can pursue even if we have no connection to the communities of science and technology. We could even achieve many of them, should technological progress completely stall, with the technological kit we already have.

In a way, because it emerged in tandem with the technological progress begun with the scientific and industrial revolutions, science-fiction seemed to own the future, and those who practiced the art largely did well by us in giving it shape- at least in our minds. But in reality the future was just on loan, and it might do us best to take back a large portion of it and encourage everyone who wants to have more say in defining it. Or better, here’s my advice: for those techno-progressives not involved directly in the development of science and technology, focus more of your efforts on the progressive side of the scale. That way, if even part of the promised future arrives, you won’t be confined to just watching it while wearing your Oculus Rift or stuck in your seat at the IMAX.

The First Machine War and the Lessons of Mortality

Lincoln Motor Co., in Detroit, Michigan, ca. 1918 U.S. Army Signal Corps Library of Congress

I just finished a thrilling little book about the first machine war. The author writes of a war set off by a terrorist attack, where the very speed of machines being put into action, and the near light speed of telecommunications whipping up public opinion to do something now, drives countries into a world war.

In his vision whole new theaters of war, amounting to fourth and fifth dimensions, have been invented. Amid a storm of steel, huge hulking machines roam across the landscape and literally shred human beings in their path to pieces. Low-flying avions fill the sky, taking out individual targets or helping calibrate precision attacks from incredible distances beyond. Wireless communications connect soldiers and machines together in a kind of world-net.

But the most frightening aspect of the new war is its weapons based on plant biology. Such weapons, if they do not immediately scar the faces and infect the bodies of those who have been targeted, relocate themselves in the soil like spores, waiting to release and kill and maim when conditions are ripe- the ultimate terrorist weapon.

Amid all this the author searches for what human meaning might be in a world where men are caught between warring machines. In the end he comes to understand himself as a mere cog in a great machine, a metallic artifice that echoes and rides the rhythms of biological nature, including his own.

__________________________________

A century and a week back from today, humanity began its first war of machines (July 28, 1914). There had been forerunners, such as the American Civil War and the Franco-Prussian War in the 19th century, but nothing before had exposed the full features and horrors of what war between mechanized and industrial societies would actually look like until it came, quite unexpectedly, in the form of World War I.

Those who wish to argue that the speed of technological development is exaggerated need only look at the First World War, where almost all of the weapons we now use in combat were either first used or first used to full effect- the radio, submarine, airplane, tank and other vehicles using the internal combustion engine. Machine guns were let loose in new and devastating ways, as was long-range artillery.

Although again there were forerunners, the first biological and chemical weapons saw their true debut in WWI. The Germans tried to infect the city of St. Petersburg with a strain of the plague, but the most widely used WMDs were chemical weapons, some of them derived from work on the nitrogen cycle of plants- gases such as chlorine and mustard gas, which killed less than they maimed, and sometimes sat in the soil ready to emerge like poisonous mushrooms when weather conditions permitted.

Indeed, the few other weapons in our 21st century arsenal that can’t be found in the First World War- such as the jet, rocket, atomic bomb, radar, and even the computer- would make their debut only a few decades after the end of that war, during what most historians consider its second half: World War II.

What is called the Great War began, as our 9-11 wars began, with a terrorist attack. The Archduke of Austria-Hungary, Franz Ferdinand, was assassinated by the ultimate nobody, a Serbian nationalist not much older than a boy- Gavrilo Princip- whose purely accidental success (he was only able to take his shot because the car the Archduke was riding in had taken a wrong turn) ended up being the catalyst for the most deadly war in human history up until that point, a conflict that would unleash nearly a century of darkness and mortal danger upon the world.

For the first time it would be a war experienced by people thousands of miles from the battlefield in almost real time via the relatively new miracle of the radio. This was only part of the lightning-fast feedback loop that launched and sped European civilization from a minor political assassination to total war. As I recall from Stephen Kern’s 1983 The Culture of Time and Space 1880-1914, political caution and prudence found themselves crushed in the vise of compressed space and time, and suffered the need to make policy decisions aligned not with the needs of human beings and their slow, messy and deliberative politics, but with the pace of machines. Once the decision to mobilize was made it was almost impossible to stop without subjecting the nation to extreme vulnerability; once a jingoistic public was whipped up to demand revenge and action via the new mass media it was again nearly impossible to silence and temper.

The First World War is perhaps the quintessential example of what Nassim Taleb called a “Black Swan”: an event whose nature failed to be foreseen and whose effect ended up changing the future radically from the shape it had previously been projected to have. Taleb defines a Black Swan this way:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Secondly it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. (xxii)

Taleb has something to teach futurists, whom he suggests might start looking for a different day job. For what he is saying is that it is the events we do not and cannot predict that will have the greatest influence on the future, and on account of this blindness the future is essentially unknowable. The best we can do, in his estimation, is to build resilience, robustness and redundancy, which it is hoped might allow us to survive, or even gain from, multiple forms of unpredictable crises- to become, in his terms, “antifragile.”

No one seems to think another world war is possible today, which might itself give us reason for worry. We do have more circuit breakers in place which might allow us to dampen a surge in the direction of war between the big states in the face of a dangerous event, such as Japan downing a Chinese plane in their dispute over Pacific islands, but many more and stronger ones need to be created to avoid such frictions spinning out of control.

States continue to prepare for limited conventional wars against one another. China practices and plans to retake disputed islands, including Taiwan, by force, and to push the U.S. Navy deeper into the Pacific, while the U.S. and Japan practice retaking islands from the Chinese. We do this without recognizing that we need to do everything possible to prevent such potential clashes in the first place, because we have no idea, once they begin, where or how they will end. As in financial crises, the further in time we get from the last great crisis, the more likely we are to have fallen into a dangerous form of complacency, though the threat of nuclear destruction may act as an ultimate brake.

The book I began this essay with is, of course, not some meditation on 21st or even 22nd century war, but the First World War itself. Ernst Jünger’s Storm of Steel is perhaps the best book ever written on the Great War and arguably one of the best books written on the subject of war- period.

It is Jünger’s incredible powers of observation and his desire for reflection that give the book such force. There is a scene that will forever stick with me, where Jünger is walking across the front lines and sees a man sitting there in seemingly perfect stoicism as the war rages around him. It’s only when he looks closer that Jünger realizes the man is dead and that he has no eyes in his sockets- they have been blown out by an explosion from behind.

Unlike another great book on the First World War, Erich Remarque’s All Quiet on the Western Front, Storm of Steel is not a pacifist book. Jünger is a soldier who sees the war as a personal quest and part of his duty as a man. His bravery is undeniable, but he does not question the justice or rationality of the war itself, a fact that would later allow Jünger’s war memoir to be used as a propaganda tool by the Nazis.

Storm of Steel has about it something of the view of war found in the ancients- that it was sent by the gods and there was nothing that could be done about it but to fulfill one’s duty within it. In the Hindu holy book the Bhagavad Gita the prince Arjuna is filled with doubt over the moral righteousness of the battle he is about to engage in- to which the god Krishna responds:

 …you Arjuna, are only a mortal appointee to carry out my divine will, since the Kauravas are destined to die either way, due to their heap of sins. Open your eyes O Bhaarata and know that I encompass the Karta, Karma and Kriya, all in myself. There is no scope for contemplation now or remorse later, it is indeed time for war and the world will remember your might and immense powers for time to come. So rise O Arjuna!, tighten up your Gandiva and let all directions shiver till their farthest horizons, by the reverberation of its string.

Jünger lived in a world that had begun to abandon the gods, or rather had adopted new materialist versions of them- whether the force of history or evolution- stories in which Jünger, like Arjuna, comes to see himself as playing a small part.

 The incredible massing of forces in the hour of destiny, to fight for a distant future, and the violence it so surprisingly, suddenly unleashed, had taken me for the first time into the depths of something that was more than mere personal experience. That was what distinguished it from what I had been through before; it was an initiation that not only had opened red-hot chambers of dread but had also led me through them. (255-256)

Jünger is not a German propagandist. He seems blithely unaware of the reasons his Kaiser’s government gave for why the war was being fought. The dehumanization of the other side, which was a main part of the war propaganda toward the end of the conflict, and on both sides, does not touch him. He is a mere soldier whose bravery comes from the recognition of his own ultimate mortality, just as the mortality of everyone else allows him to kill without malice, as a mere matter of the simple duty of a combatant in war.

Because his memoir of the conflict is so authentic, so without bias or apparent political aims, he ends up conveying truths about war which are difficult for civilians to understand- a difficulty found not only in pacifists, but in nationalist war-mongers with no experience of actual combat.

If we open ourselves to the deepest meditations of those who have actually experienced war, what we find is that combat seems to bring the existential reality of the human condition out from its normal occlusion by the tedium of everyday living. To live in the midst of war is a screaming reminder that we are mortal and our lives ultimately very short. In war it is very clear that we are playing a game of chance against death, which is merely the flip side of the statistical unlikelihood of our very existence, as if our one true God was indeed chance itself. Like any form of gambling, victory against death becomes addictive.

War makes it painfully clear to those who fight in it that we are hanging on just barely to this thread, this thin mortal coil, where our only hope for survival for a time is to hang on tightly to those closest to us- war’s famed brotherhood in arms. These experiences, rather than childish flag-waving notions of nationalism, are likely the primary source of what those who have experience only of peace often find unfathomable: that soldiers from the front often eagerly return to battle. It is this shared experience among those who have seen combat that often leads soldiers to find more in common with the enemies they have engaged than with their fellow citizens back home who have never been to war.

The essential lessons of Storm of Steel are really spiritual answers to the question of combat. Jünger’s war experience leads him to something like Buddhist non-attachment, both to himself and to the futility of the bird’s-eye-view justifications of the conflict.

The nights brought heavy bombardment like swift, devastating summer thunderstorms. I would lie on my bunk on a mattress of fresh grass, and listen, with a strange and quite unjustified feeling of security, to the explosions all around that sent the sand trickling out of the walls.

At such moments, there crept over me a mood I hadn’t known before. A profound reorientation, a reaction to so much time spent so intensely, on the edge. The seasons followed one another, it was winter and then it was summer again, but it was still war. I felt I had got tired, and used to the aspect of war, but it was from familiarity that I observed what was in front of me in a new and subdued light. Things were less dazzlingly distinct. And I felt the purpose with which I had gone out to fight had been used up and no longer held. The war posed new, deeper puzzles. It was a strange time altogether. (260)

In another scene Jünger comes upon a lone enemy officer while by himself on patrol.

 I saw him jump as I approached, and stare at me with gaping eyes, while I, with my face behind my pistol, stalked up to him slowly and coldly. A bloody scene with no witnesses was about to happen. It was a relief to me, finally, to have the foe in front of me and within reach. I set the mouth of my pistol at the man’s temple- he was too frightened to move- while my other fist grabbed hold of his tunic…

With a plaintive sound, he reached into his pocket, not to pull out a weapon, but a photograph which he held up to me. I saw him on it, surrounded by numerous family, all standing on a terrace.

It was a plea from another world. Later, I thought it was blind chance that I had let him go and plunged onward. That one man of all often appeared in my dreams. I hope that meant he got to see his homeland again. (234)

The unfortunate thing about the future of war is not that human beings seem likely to play an ever-diminishing role as fighters in it, as warfare undergoes the same process of automation that has resulted in so few of us now growing our food or producing our goods. Rather, it is the fact that wars will continue to be fought, and human beings- which will come to mean almost solely non-combatants- will continue to die in them.

The lessons Jünger took from war are not so much the product of war itself as of intense reflection on our own and others’ mortality. They are the same type of understanding and depth often seen in those who endure long periods of dying, the terminally ill, who die not in the swift “thief in the night” mode of accidents or bodily failure, but slip from the world with enough time, and while retaining the capacity, to reflect. Even the very young who are terminally ill often speak of a diminishing sense of their own importance, a need to hang onto the moment, a drive to live life to the full, and a longing to treat others in a spirit of charity and mercy.

Even should the next “great war” be fought almost entirely by machines, we can retain these lessons as a culture as long as we give our thoughts over to what it means to be a finite creature with an ending, and we will have the opportunity to experience them personally as long as we are mortal- and given the impossibility of any form of eternity, no matter how far we extend our lifespans, mortal we always will be.


Why the Castles of Silicon Valley are Built out of Sand

Ambrogio Lorenzetti, Temperance with an Hourglass, Allegory of Good Government

If you get just old enough, one of the lessons living through history throws at you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from 100 AD into the 1600s. Perhaps more dreams than not simply refuse to die; they hang on like ghosts, or ghouls, zombies or vampires, or whatever freakish version of the undead suits your fancy. Naming them would take up more room than I have in this post, and would no doubt start one too many arguments, all of our lists being different. Here, I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well the dream I’ll declare dead will have its defenders.

The fact of the matter is, I am not even sure what to call the dream I’ll be talking about. Perhaps digitopia is best. It was the dream that emerged sometime in the 1980s and went mainstream in the heady 1990s that this new thing we were creating called the “Internet,” and the economic model it permitted, was bound to lead to a better world of more sharing, more openness, more equity, if we just let its logic play itself out over a long enough period of time. Almost all the big-wigs in Silicon Valley- the Larry Pages and Mark Zuckerbergs, the Jeff Bezoses and Peter Diamandises- still believe this dream, and walk around like 21st century versions of Mary Magdalene, claiming they can still see what more skeptical souls believe has passed.

By far, the best Doubting Thomas of digitopia we have out there is Jaron Lanier. In part his power in declaring the dream dead comes from the fact that he was there when the dream was born and was once a true believer. Like Kevin Bacon in Hollywood, take any intellectual heavy hitter of digital culture, say Marvin Minsky, and you’ll find Lanier having some connection. Lanier is no Luddite, so when he says there is something wrong with how we have deployed the technology he in part helped develop, it’s right and good to take the man seriously.

The argument Lanier makes in his most recent book Who Owns the Future? against the economic model we have built around digital technology is, in a nutshell, this: what we have created is a machine that destroys middle-class jobs and concentrates information, wealth and power. Say what? Haven’t the Internet and mobile technology democratized knowledge? Don’t average people have more power than ever before? The answer to both questions is no, and the reason why is that the Internet has been swallowed by its own logic of “sharing.”

We need to remember that the Internet really got ramped up when it started to be used by scientists to exchange information with each other. It was built on the idea of openness and transparency, not to mention a set of shared values. When the Internet leapt out into public consciousness, no one had any idea how to turn this sharing capacity and transparency into the basis for an economy. It took the aftermath of the dot-com bubble and bust for companies to come up with a model of how to monetize the Internet, and almost all of the major tech companies that dominate the Internet, at least in America- and there are only a handful: Google, Facebook and Amazon- now follow some variant of this model.

The model is to aggregate all the sharing that the Internet seems to naturally produce and offer it, along with other “complements,” for “free” in exchange for one thing: the ability to monitor, measure and manipulate through advertising whoever uses their services. Like silicon itself, it is a model that is ultimately built out of sand.

When you use a free service like Instagram there are three ways it is ultimately paid for. The first we all know about: the “data trail” we leave when using the site is sold to third-party advertisers, which generates income for the parent company, in this case Facebook. The second and third ways the service is paid for I’ll get to in a moment, but the first way itself opens up all sorts of observations and questions that need to be answered.

We had thought the information (and ownership) landscape of the Internet was going to be “flat.” Instead, it has proven to be extremely “spiky.” What we forgot in thinking it would turn out flat was that someone would have to gather and make useful the mountains of data we were about to create. The big Internet and telecom companies are these aggregators, able to make this data actionable because they possess the most powerful computers on the planet, which allow them not only to route and store this data but to mine it for value. Lanier has a great name for the biggest of these companies- he calls them Siren Servers.

One might think that which particular Siren Servers are at the head of the pack is a matter of which is the most innovative. Not really. Rather, the largest Siren Servers have become so rich they simply swallow any innovative company that comes along. Facebook gobbled up Instagram because it offered a novel and increasingly popular way to share photos.

The second way a free service like Instagram is paid for, and this is one of the primary concerns of Lanier in his book, is that it essentially cannibalizes to the point of destruction the industry that used to provide the service, an industry which in the “old economy” also supported lots of middle-class jobs.

Lanier states the problem bluntly:

Here’s a current example of the challenge we face. At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography is Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only thirteen people. (p.2)

Calling Thomas Piketty….

As Bill Davidow argued recently in The Atlantic, the size of this virtual economy, where people share and get free stuff in exchange for their private data, is now so big that it is giving us a distorted picture of GDP. We can no longer be sure how fast our economy is growing. He writes:

There are no accurate numbers for the aggregate value of those services but a proxy for them would be the money advertisers spend to invade our privacy and capture our attention. Sales of digital ads are projected to be $114 billion in 2014, about twice what Americans spend on pets.

The forecasted GDP growth in 2014 is 2.8 percent and the annual historical growth rate of middle quintile incomes has averaged around 0.4 percent for the past 40 years. So if the government counted our virtual salaries based on the sale of our privacy and attention, it would have a big effect on the numbers.

Fans of Joseph Schumpeter might see all this churn as capitalism’s natural creative destruction, and be unfazed by the government’s inability to measure this “off the books” economy, because what the government cannot see it cannot tax.

The problem is, unlike other times in our history, technological change doesn’t seem to be creating new middle-class jobs as fast as it destroys old ones. Lanier was particularly sensitive to this development because he has always had his feet in two worlds- the world of digital technology and the world of music. Not the Katy Perry world of superstar music, but the world of people who made a living selling local albums, playing small gigs, and, even more importantly, providing the services that made this mid-level musical world possible. Lanier had seen how the digital technology he loved and helped create had essentially destroyed the middle-class world of musicians he also loved and had grown up in. His message for us all was that the Siren Servers are coming for you.

The continued advance of Moore’s Law, which, according to Charlie Stross, will play out for at least another decade or so, means not so much that we’ll achieve AGI, but that machines are just smart enough to automate some of the functions we had previously thought only human beings were capable of doing. I’ll give an example of my own. For decades the GED test, which people take to obtain a high school equivalency diploma, has had an essay section. Thousands of people were needed to score these essays by hand, the majority of whom were likely paid to do so. With the new, computerized GED test this essay scoring has been completely automated, and human readers made superfluous.

This brings me to the third way these new digital capabilities are paid for. They cannibalize work human beings have already done in order to profit a company that presents and sells its services as a form of artificial intelligence. As Lanier writes of Google Translate:

It’s magic that you can upload a phrase in Spanish into the cloud services of a company like Google or Microsoft, and a workable, if imperfect, translation to English is returned. It’s as if there’s a polyglot artificial intelligence residing up there in that great cloud of server farms.

But that is not how cloud services work. Instead, a multitude of examples of translations made by real human translators are gathered over the Internet. These are correlated with the example you send for translation. It will almost always turn out that multiple previous translations by real human translators had to contend with similar passages, so a collage of those previous translations will yield a usable result.

A giant act of statistics is made virtually free because of Moore’s Law, but at core the act of translation is based on real work of people.

Alas, the human translators are anonymous and off the books. (19-20)
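Lanier’s point- that the “magic” is really a statistical collage of prior human work- can be made concrete with a toy sketch. This is emphatically not Google’s actual pipeline, just a hypothetical nearest-neighbor lookup over a tiny, invented memory of human-made translations, scored by crude word overlap:

```python
# A toy "translation" engine in the spirit Lanier describes: it produces nothing
# new, it only retrieves and stitches back prior human work. The phrase memory
# below is invented purely for illustration.

HUMAN_TRANSLATIONS = {
    "buenos dias": "good morning",
    "como estas": "how are you",
    "el gato negro": "the black cat",
    "gracias por tu ayuda": "thank you for your help",
}

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score between two Spanish phrases."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def translate(phrase: str) -> str:
    """Return the stored human translation of the closest remembered phrase."""
    best = max(HUMAN_TRANSLATIONS, key=lambda known: similarity(phrase, known))
    return HUMAN_TRANSLATIONS[best]

print(translate("como estas"))  # -> "how are you"
print(translate("el gato"))     # -> "the black cat" (nearest human example wins)
```

Every answer the toy engine gives was written by an anonymous human somewhere in its memory; scale that memory up by billions of examples and you have the “off the books” labor Lanier is describing.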

The question all of us should be asking ourselves is not “could a machine be me?” with all of our complexity and skills, but “could a machine do my job?”- the answer to which, in nine cases out of ten, is almost certainly “yes!”

Okay, so that’s the problem, what is Lanier’s solution? His solution is not that we pull a Ned Ludd and break the machines or even try to slow down Moore’s Law. Instead, what he wants us to do is to start treating our personal data like property. If someone wants to know my buying habits they have to pay a fee to me the owner of this information. If some company uses my behavior to refine their algorithm I need to be paid for this service, even if I was unaware I had helped in such a way. Lastly, anything I create and put on the Internet is my property. People are free to use it as they chose, but they need to pay me for it. In Lanier’s vision each of us would be the recipients of a constant stream of micropayments from Siren Servers who are using our data and our creations.
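To see what Lanier’s proposal would involve in practice, here is a minimal hypothetical sketch of the bookkeeping it implies: every use of a person’s data or creative work is logged against its owner and accrues a small credit. The class, categories and rates below are my own inventions for illustration, not anything Lanier specifies.

```python
from collections import defaultdict

# Hypothetical micropayment ledger in the spirit of Lanier's proposal: each time
# a Siren Server uses someone's data or creation, the owner is credited.
# Rates and categories are invented for illustration only.
RATES = {"behavioral_data": 0.0005, "creative_work": 0.05}

class MicropaymentLedger:
    def __init__(self):
        self.balances = defaultdict(float)   # owner -> accrued payments
        self.log = []                        # provenance trail of every use

    def record_use(self, owner: str, kind: str, uses: int = 1) -> None:
        """Credit the owner for each use of their data or work."""
        amount = RATES[kind] * uses
        self.balances[owner] += amount
        self.log.append((owner, kind, uses, amount))

    def statement(self, owner: str) -> float:
        return round(self.balances[owner], 4)

ledger = MicropaymentLedger()
ledger.record_use("alice", "behavioral_data", uses=200)  # e.g. ad targeting
ledger.record_use("alice", "creative_work", uses=3)      # e.g. photos reused
print(ledger.statement("alice"))  # 0.25
```

Even this toy version makes the scaling question obvious: the value of any single record is tiny, which is why Lanier imagines payments only becoming meaningful as a constant stream accumulated across millions of uses.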

Such a model is very interesting to me, especially in light of other fights over data ownership, namely the rights of indigenous people against bio-piracy, something I was turned on to by Paolo Bacigalupi’s bio-punk novel The Windup Girl, and what promises to be an increasing fight between pharmaceutical/biotech firms and individuals over the use of what is becoming mountains of genetic data. Nevertheless, I have my doubts as to Lanier’s alternative system and will lay them out in what follows.

For one, such a system seems likely to exacerbate rather than relieve the problem of rising inequality. Assuming most of the data people will receive micropayments for will be banal and commercial in nature, people who are already big spenders are likely to get a much larger cut of the micropayments pie. If I could afford such things, it’s no doubt worth a lot for some extra piece of information to tip the scales between me buying a Lexus or a Beemer; not so much if it’s a question of Tide vs. Whisk.

This issue would be solved if Lanier had adopted the model of a shared public pool of funds into which micropayments would go, rather than routing them to the actual individual involved, but he couldn’t do this out of commitment to the idea that personal data is a form of property. Don’t let his dreadlocks fool you- Lanier is at bottom a conservative thinker. Such a pooled fee might also balance out the glaring problem that Siren Servers effectively pay zero taxes.

But by far the biggest hole in Lanier’s micropayment system is that it ignores the international dimension of the Internet. Silicon Valley companies may be barreling down on their model, as can be seen in Amazon’s recent foray into the smartphone market, which attempts to route everything through itself, but the model has crashed globally. Three events signal the crash: Google was essentially booted out of China, the Snowden revelations threw a pall of suspicion over the model in an already privacy-sensitive Europe, and the EU itself handed the model a major loss with the “right to be forgotten” case in Spain.

Lanier’s system, which accepts mass surveillance as a fact, probably wouldn’t fly in a privacy-conscious Europe, and how in the world would we force Chinese and other digital pirates to provide payments at any scale? And China and other authoritarian countries have their own plans for their Siren Servers, namely, their use as tools of the state.

The fact of the matter is there is probably no truly global solution to continued automation and algorithmization, or to mass surveillance. Yet the much-feared “splinter-net,” the shattering of the global Internet, may be better for freedom than many believe. This is because the Internet, and the Siren Servers that run it, once freed from its spectral existence in the global ether, becomes the responsibility of real, territorially bound people to govern. Each country will ultimately have to decide for itself both how the Internet is governed and how to respond to the coming wave of automation. There’s bound to be diversity because countries are diverse; some might even leap over Lanier’s conservatism and invent radically new, and more equitable, ways of running an economy- an outcome many of the original digitopians who set this train a-rolling might actually be proud of.


This City is Our Future

Erich Kettelhut Metropolis Sketch

If you wish to understand the future you need to understand the city, for the human future is an overwhelmingly urban future. The city may have always been synonymous with civilization, but the rise of urban humanity has occurred almost entirely after the onset of the industrial revolution. In 1800 a mere 3 percent of humanity lived in cities of over one million people. By 2050, 75 percent of humanity will be urbanized. India alone might have 6 cities with a population of over 10 million.

The trend toward megacities is one into which humanity is, as we speak, accelerating, in a process we do not fully understand, let alone control. As the counterinsurgency expert David Kilcullen writes in his Out of the Mountains:

To put it another way, these data show that the world’s cities are about to be swamped by a human tide that will force them to absorb - in just one generation - the same population growth that occurred in all of human history up to 1960. And virtually all of this growth will happen in the world’s poorest areas - a recipe for conflict, for crises in health, education and in governance, and for food, water and energy scarcity. (29)

Kilcullen sees four trends, urbanization among them, that he thinks are reshaping human geography, all of which can be traced to processes that began with the industrial revolution: the aforementioned urbanization and growth of megacities, population growth, littoralization, and connectedness.

In terms of population growth: the world’s population has exploded, going from 750 million in 1750 to a projected 9.1 to 9.3 billion by 2050. The rate of population growth is thankfully slowing, but barring some incredible catastrophe, the earth seems destined to gain the equivalent of another China and India within the space of a generation. Almost all of this growth will occur in poor and underdeveloped countries already stumbling under the pressures of the populations they have.

One aspect of population growth Kilcullen doesn’t really discuss is the aging of the human population. This is normally understood in terms of the failure of advanced societies in Japan, South Korea and Europe to reach replacement levels, so that the number of elderly is growing faster than the youth who will support them, a phenomenon that is also happening in China as a consequence of its draconian one-child policy. Yet the developing world, simply because of sheer numbers and increased longevity, will face its own elderly crisis as tens of millions move into age-related conditions of dependency. As I have said in the past, gaining a “longevity dividend” is not a project for spoiled Westerners alone, but is primarily a development issue.

Another trend Kilcullen explores is littoralization, the concentration of human populations near the sea. A fact that was surprising to a landlubber such as myself: Kilcullen points out that in 2012, 80 percent of human beings lived within 60 miles of the ocean (30), a number that is increasing as the interiors of the continents are hollowed out of human inhabitants.

Kilcullen doesn’t discuss climate change much, but the kinds of population dislocations that might be caused by moderate, not to mention severe, sea level rise would be catastrophic should certain scenarios for climate change play out. This goes well beyond islands or wealthy enclaves such as Miami, New Orleans or Manhattan. Places such as these, and Denmark, may have the money to engineer defenses against the rising sea, but what of a poor country such as Bangladesh? There, almost 200 million people might find themselves in flight from the relentless forward movement of the oceans. To where will they flee?

Nor is it merely a matter of the displacement of tens of millions of people, or more, living in low-lying coastal areas. Much of the world’s staple crop of rice is grown in deltas that would be destroyed by the inundation of salt water.

The last and most optimistic of Kilcullen’s trends is growing connectedness. He quotes the journalist John Pollack:

Cell-phone penetration in the developing world reached 79 percent in 2011. Cisco estimates that by 2015 more people in sub-Saharan Africa, South and Southeast Asia and the Middle East will have Internet access than electricity at home.

What makes this less optimistic is the fact that, as Pollack continues:

Across much of the world, this new information power sits uncomfortably upon layers of corrupt and inefficient government.  (231)

One might have thought that the communications revolution had made geography irrelevant or “flat,” in Thomas Friedman’s famous term. Instead, the world has become “spiky,” with the concentration of people, capital, and innovation in cities spread across the globe and interconnected with one another. The need for concentration as a necessary condition for communication is felt by the very rich and the very poor alike, both of whom collect together in cities. Companies running sophisticated trading algorithms have reshaped the very landscape to get closer to the heart of the Internet and gain a speed advantage over competitors, an advantage so small it cannot be perceived by human beings.
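
The scale of that advantage is easy to put in numbers. Here is a back-of-the-envelope sketch, assuming light in optical fiber travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum); the distances are arbitrary examples, but they show why shaving even tens of kilometers off the route to an exchange matters to an algorithm and means nothing to a person.

```python
# Rough one-way latency over optical fiber, assuming ~200,000 km/s
# (about two-thirds of the vacuum speed of light).
FIBER_SPEED_KM_PER_S = 200_000

def one_way_latency_ms(distance_km):
    return distance_km / FIBER_SPEED_KM_PER_S * 1000

for distance in (1000, 100, 10):
    print(f"{distance:>5} km -> {one_way_latency_ms(distance):.3f} ms one way")

# Human reaction time is on the order of 200 ms, so the ~0.5 ms saved by
# moving 100 km closer to an exchange is imperceptible to a person
# but decisive for competing trading algorithms.
```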

Likewise, the very poor flood to the world’s cities, because they can gain access to networks of markets and capital, but more recently, because only there do they have access to electricity that allows them to connect with one another or the larger world, especially in terms of their ethnic diaspora or larger civilizational community, through mobile devices and satellite TV. And there are more of these poor struggling to survive in our 21st century world than we thought, 400 million more of them according to a recent report.

For the urban poor and disenfranchised of the cities, what the new connectivity can translate into is what Audrey Kurth Cronin has called the new levée en masse. The first levée en masse was that of the French Revolution, where the population was mobilized for both military and revolutionary action by new short-length publications written by revolutionary writers such as Robespierre, Saint-Just or the bloodthirsty Marat. In the new levée en masse, crowds capable of overthrowing governments - witness Tunisia, Egypt and Ukraine - can be mobilized by bloggers, amateur videographers, or just a kind of swarm intelligence emerging on the basis of some failure of the ruling classes.

Even quite effective armies, such as the ISIS forces now sweeping in from Syria and taking over swaths of Iraq, can be pulled seemingly out of thin air. The mobilizing capacity that was once the possession of the state or of long-standing revolutionary groups has, under modern conditions of connectedness, become democratized, even if the money behind such groups can ultimately be traced to states.

The movement of the great mass of human beings into cities portends the movement of war into cities, and this is the underlying subject of Kilcullen’s book: the changing face of war in an urban world. Given that the vast majority of countries in which urbanization is taking place will be incapable of fielding advanced armies, the kinds of conflicts likely to be encountered there, Kilcullen thinks, will be guerrilla wars, whether pitting one segment of society against another or drawing in Western armies.

The headless, swarm tactics of guerrilla war - which, as the author Lawrence H. Keeley reminded us, is in some sense a more evolved, “natural” and ultimately more effective form of warfare than the clashing professional armies of advanced states, its roots stretching back into human prehistory and the ancient practices of both hunting and tribal warfare - are given a potent boost by local communication technologies such as traditional radio and mesh networks. The crowd or small military group can be tied together by an electronic web that turns it into something more like an immune system than a modern, centrally directed army.

Attempting to avoid the high casualties so often experienced when advanced armies try to fight guerrilla wars, those capable of doing so are likely to turn to increasingly sophisticated remote and robotic weapons to fight these conflicts for them. Kilcullen is troubled by this development, not least because it seems to relocate the risk of war onto the civilian population of whatever country is wielding such weapons: the communities in which remote warriors live, or where the weapons themselves are designed and built, are arguably legitimate targets of a remote enemy a community might not even be aware it is fighting. Perhaps the real key is to prevent the conflicts that might end in our military engagement in the first place.

Cities likely to experience epidemic crime, civil war or revolutionary upheaval are also those that have, in Kilcullen’s terms, gone “feral,” meaning the order usually imposed by the urban landscape no longer operates due to failures of governance. Into such a vacuum criminal networks often emerge, exchanging the imposition of some semblance of order for the control of illicit trade. All of these things - civil war, revolution, and international crime - represent pull factors for Western military engagement, whether in the name of international stability, humanitarian concerns or for more nefarious ends, most of which are centered on resource extraction. The question is how one can prevent cities from going feral in the first place, avoiding the deep discontent and social breakdown that leads to civil war, revolution or the rise of criminal cartels, all of which might end with the military intervention of advanced countries.

The solution lies in thinking of the city as a type of organism with “inflows” such as water, food, resources, manufactured products and capital, and “outflows,” especially waste. There is also the issue of order as a kind of homeostasis. A city such as Beijing or Shanghai with its polluted skies is a sick organism, as is Dhaka in Bangladesh with its polluted waters, or a city with a sky-high homicide rate such as Guatemala City or Sao Paulo. The beautiful thing about the new technologically driven capacity for mass mobilization is that it forces governments to take notice of the people’s problems or figuratively (and sometimes literally) lose their heads. The problem is that once things have gone badly enough to inspire mass riots the condition is likely systemic and extremely difficult to solve, and that the kinds of protests the Internet and mobile devices have inspired, at least so far, have been effective at toppling governments, but unable either to found or to serve as governments themselves.

At least one answer to the problems of urban geography that could potentially allow cities to avoid instability is “Big Data,” or so-called “smart cities,” where a city is minutely monitored in real time for problems which then initiate quick responses by city authorities. There are several problems here. The first is the cost of such systems, but that may be the most surmountable one; the biggest is the sheer data load.

As Kilcullen puts it in the context of military intelligence, though it could just as well be stated as the problem of city administrators, international NGOs and aid agencies:

The capacity to intercept, tag, track and locate specific cell phone and Internet users from a drone already exists, but distinguishing signal from noise in a densely connected, heavily trafficked piece of digital space is a daunting challenge. (238)

Kilcullen’s answer to the incomplete picture provided by the view from above, from big data, is to combine that data with the partial but deep view of the city held by its inhabitants on the ground. In its essence a city is the stories and connections of those who live in it. Think of the deep, if necessarily narrow, perspective of a major city merchant or even a well-connected drug dealer. Add this to the stories of those working in social and medical services, police officers, big employers, socialites, etc., and one starts to get an idea of the biography of a city. Add to that the big picture of flows and connections and one starts to understand the city for what it is: a complex type of non-biological organism that serves as a stage for human stories.

Kilcullen has multiple examples of where knowledge of the big picture from experts has successfully aligned with grassroots organization to save societies on the brink of destruction, an alignment he calls “co-design.” He cites the Women of Liberia Mass Action for Peace, where grassroots organizer Leymah Gbowee leveraged the expertise of Western NGOs to help stop the civil war in Liberia. CeaseFire Chicago uses a big-picture model of crime literally based on epidemiology and combines it with community-level interventions to stop violent crime before it occurs.
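
The epidemiological framing CeaseFire borrows can be illustrated with a very simple contagion model. The sketch below is my own toy SIR-style simulation, not CeaseFire’s actual model, and every parameter in it is invented: it simply treats violent behavior as something that spreads through contact, and “interruption” as anything that lowers the transmission rate before the outbreak cascades.

```python
def simulate_violence_contagion(population, initially_violent, transmission_rate,
                                recovery_rate, steps=52):
    """Toy SIR-style model: 'infected' means currently caught up in cycles of
    violence; interrupters work by lowering the transmission rate."""
    susceptible = population - initially_violent
    infected = initially_violent
    recovered = 0
    history = []
    for _ in range(steps):
        new_infections = transmission_rate * infected * susceptible / population
        new_recoveries = recovery_rate * infected
        susceptible -= new_infections
        infected += new_infections - new_recoveries
        recovered += new_recoveries
        history.append(infected)
    return history

# Hypothetical comparison: no intervention vs. interrupters halving transmission.
baseline = simulate_violence_contagion(10_000, 50, transmission_rate=0.6, recovery_rate=0.3)
intervened = simulate_violence_contagion(10_000, 50, transmission_rate=0.3, recovery_rate=0.3)
print(f"Peak caught up in violence, no intervention:  {max(baseline):.0f}")
print(f"Peak caught up in violence, with interruption: {max(intervened):.0f}")
```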

Another group Kilcullen discusses is Crisis Mappers which offers citizens everywhere in the world access to the big picture, what the organization describes as “the largest and most active international community of experts, practitioners, policy makers, technologists, researchers, journalists, scholars, hackers and skilled volunteers engaged at the intersection between humanitarian crises, technology, crowd-sourcing, and crisis mapping.” (253)

On almost all of this I find Kilcullen to be spot on. The problem is that he fails to tackle the really systemic issue, which is inequality. What is necessary to save any city, as Kilcullen acknowledges, is a sense of shared community, what I would call a sense of shared past and future. Insofar as the very wealthy in any society or city are connected to, and largely identify with, their wealthy fellow elites abroad rather than their poor neighbors, a city and a society are doomed, for only the wealthy have the wherewithal to support the kinds of social investments that make a city livable for its middle classes, let alone its poor.

The very globalization that has created the opportunity for the rich in once poor countries to rise, and which connects the global poor to their fellow sufferers both in the same country and, more amazingly, across the world, has cleft the connection between poor and rich within the same society. It is these global connections between classes that give the current situation a revolutionary aspect which, as Marx long ago predicted, is global in scope.

The danger is that the very wealthy classes turn the new high-tech tools for monitoring citizens into a way of avoiding systemic change, either by using their ability to intimately monitor so-called “revolutionaries” to short-circuit legitimate protest, or by addressing the public’s concerns in only the most superficial of ways.

The long-term solution to the new era of urban mankind is to give the people who live in cities the tools, including increasingly sophisticated tools of data gathering and simulation, to control their own fates; to find ways to connect revolutionary movements to progressive forces in societies where cities are not failing, and to their tools for dealing with all the social and environmental problems cities face; and above all, to convince the wealthy to support such efforts, both in their own locality and on a global scale. For the attempt at total control of a complex entity like a city through the tools of the security state, like the paper-flat Utopian cities of the state worshipers of days past, is an attempt to build a castle in thin air.


Waiting for World War III

The Consequences of War, Peter Paul Rubens

Everyone alive today owes their life to a man most of us have never heard of, and that I didn’t even know existed until last week. On September 26, 1983, just past midnight, Soviet lieutenant colonel Stanislav Petrov was alerted by his satellite early warning system that an attack by an American ICBM was underway. Normal protocol should have resulted in Petrov giving the order to fire Russian missiles at the US in response. Petrov instead did nothing, unable to explain to himself why the US would launch only one missile rather than a massive first strike in the hope of knocking out Russia’s capacity to retaliate. Then something that made greater sense appeared: more missiles on Petrov’s screen. Yet he continued to do nothing. And then more. He refused to give the order to fire, and he waited, and waited.

No news ever came in that night of the devastation of Soviet cities and military installations due to the detonation of American nuclear warheads, because, as we know, there never was such an attack. What Petrov had seen was a computer error, an electronic mirage, and we are here, thank God, because he believed in the feelings in his gut over the data illusion on his screen.

That is the story as told by Christopher Coker in his book Warrior Geeks: How 21st Century Technology is Changing the Way We Fight and Think About War. More on that book another time, but now to myself. During the same time Petrov was saving us through morally induced paralysis I was a budding cold warrior, a passionate supporter of Ronald Reagan and his massive defense buildup. I had drawn up detailed war scenarios calculating precisely the relative strengths of the two opposing power blocs, was a resident expert in Soviet history and geography. I sincerely thought World War Three was inevitable in my lifetime. I was 11 years old.

Anyone even slightly younger than me has no memory of living in a world where you went to sleep never certain we wouldn’t blow the whole thing up overnight. I was a weird kid, as I am a weird adult, and no doubt hypersensitive to the panic induced by too close a relationship with modern media. Yet, if the conversations I have had with people in my age group over the course of my lifetime are any indication, I was not totally alone in my weirdness. Other kids too would hear jets rumbling overhead at night and wonder if the sounds were missiles coming to put an end to us all, were haunted by movies like The Day After or inspired by Red Dawn. Other kids staged wars in their neighborhoods fighting against “robot”-like Russians.

During the early 1980s world war wasn’t something stuck in a black and white movie, but a brutal and epic thing our grandfathers told us about, that some of our teachers had fought in. A reality that, with the end of detente and in light of the heated rhetoric of the Reagan years, felt as much a part of the future as of the past. It was not just a future of our imaginations, and being saved by Stanislav Petrov wasn’t the only time we dodged the bullet in those tense years.

Whatever the fear brought on by 9/11, this anxiety that we might just be fool enough to completely blow up our own world is long gone. The last twenty-three years since the fall of the Soviet Union have been, in this sense, some of the most blessed in human history, a time when the prospect of the big powers pulverizing each other to death has receded from the realm of possibility. I am starting to fear its absence cannot last.

Perhaps it’s Russian aggression against Ukraine that has revived my pre-teen anxieties: its seizure of Crimea, veiled threats to conquer the Russophone eastern regions of the country, Putin’s jingoistic speech before the Kremlin. Of course, of course, I don’t think world war will come from the crisis in Ukraine no matter how bad it gets there. Rather, I am afraid we were wrong to write the possibility of war between the big powers out of human history permanently. That one of these times, and when we do not expect it, 10 or 20 or 100 years from now, one of these dust-ups will result in actual clashes between the armed forces of the big powers, a dangerous game that, the longer we play it, holds the real risk of ending in the very nightmare we avoided the night Petrov refused to fire.

Disputes over which the big powers might come to blows are not hard to come up with. There is China’s dispute with Japan, the Philippines, and others, and ultimately the United States, over islands in the Pacific; there is the lingering desire of China to incorporate Taiwan; there is the legacy conflict on the Korean peninsula; clashes between India and China; disputes over resources and trade routes through an Arctic opened up by global warming; or possible future fights over unilateral geoengineering. Then there are the largely unanticipated frictions, such as, as we now see, Russia’s panic-induced aggression against Ukraine, which brings it back into collision with NATO.

Still, precise prediction about the future is a game for fools. Hell, I can still remember when “experts” in all seriousness predicted a coming American war with Japan. I am aiming, rather, for something more general. The danger I see is that the big powers start to engage in increasingly risky behavior precisely because they think world war is now impossible. That all of us will have concluded that limited and lukewarm retaliation is the only rational response to aggression given that the existential stakes are gone. As a foolish eleven-year-old I saw the risk of global catastrophe as worth taking if the alternative was totalitarian chains. I am an adult now, hopefully much wiser, and with children of my own, whose lives I would not risk to save Ukraine from dismemberment along ethnic and linguistic lines or to stop China from asserting its rising power in the Pacific. I am certainly not alone in this, but I fear such sanity will make me party to an avalanche: that the decline of the fear that states may go too far in aggressive action may lead them to go so far that they accidentally spark a scale of war we have deemed inconceivable.

My current war pessimism over the long term might also stem from the year I am in, 2014, a solemn centenary of the beginning of the First World War. Back when I was in high school, World War I was presented with an air of tragic determinism. It was preordained, or so we were taught, the product of an unstable alliance system, growing nationalism and imperialism. It was a war that was in some sense “wanted” by those who fought it. Historians today have called this determinism into question. Christopher Clark in his massive The Sleepwalkers details just how important human mistakes and misperceptions were to the outbreak of the war, and the degree to which opportunities to escape the tragedy were squandered because no one knew just how enormous the tragedy they were unleashing would become.

Another historian, Holger Afflerbach, in his essay “The Topos of Improbable War in Europe Before 1914,” shows how few were the voices in Europe that expected a continental or world war. Even the German military, which wanted conflict, was more afraid, right up until the war broke out and did not end quickly, that conflict would be averted at the last minute than that it would come. The very certainty that a world war could not be fought, in part because of the belief that modern weapons had become too terrible, led to risk-taking and a refusal to compromise, which made such a war more likely as the crisis that began with the assassination of Archduke Franz Ferdinand unfolded.

If World War II can be considered an attempt by the aggrieved side to re-fight the First World War, what followed Japan’s surrender was very different, a difference perhaps largely due to one element: the presence of nuclear weapons. What dissuaded big states in the Cold War era from directly fighting one another was the likelihood that the potential costs of doing so were too high relative to the benefits that would accrue from any victory. The cost in a nuclear age was destruction itself.

Yet for those costs to be an effective deterrent the threat of their use had to be real. Both sides justified their possible suicide in a nuclear holocaust on the grounds that they were engaged in a Manichean struggle where the total victory of the opposing side was presented as being in some sense worse than the destruction of the world itself. Yes, I know this was crazy, yet, by some miracle, we’re still here, and whether largely despite or because of this insanity we cannot truly know.

Still, maybe I’m wrong. Perhaps I should not be so uncertain over the reason why there have been no wars between the big powers in the modern era; perhaps my anxiety that the real threat of nuclear annihilation might have been responsible is just my eleven-year-old self coming back to haunt me. It’s just possible that nuclear weapons had nothing to do with the long peace between great powers. Some have argued that there were other reasons big states have seemingly stopped fighting other big states since the end of World War II, that what changed were not so much weapons but norms regarding war. Steven Pinker most famously makes this case in his The Better Angels of Our Nature: Why Violence Has Declined.

Sadly, I have my doubts regarding Pinker’s argument. Here’s me from an earlier piece:

His evidence against the “nuclear peace” is that more nations have abandoned nuclear weapons programs than have developed such weapons. The fact is perhaps surprising but nonetheless accurate. It becomes a little less surprising, and a little less encouraging in Pinker’s sense, when you actually look at the list of countries who have abandoned them and why. Three of them: Belarus, Kazakhstan and the Ukraine are former Soviet republics and were under enormous Russian and US pressure- not to mention financial incentives- to give up their weapons after the fall of the Soviet Union. Two of them- South Africa and Libya- were attempting to escape the condition of being international pariahs. Another two- Iraq and Syria had their nuclear programs derailed by foreign powers. Three of them: Argentina, Brazil, and Algeria faced no external existential threat that would justify the expense and isolation that would come as a consequence of  their development of nuclear weapons and five others: Egypt, Japan, South Korea, Taiwan and Germany were woven tightly into the US security umbrella.

I am sure you have noticed that Ukraine is on that list. Had Ukraine not given up its nuclear weapons it is almost certain that it would not have seen Crimea seized by the Russians, or found itself facing the threat by Moscow to split the country in two.

A little more on Pinker: he spends a good part of his over-800-page book showing us just how savagely violent human societies were in the past. Tribal societies had homicide rates that rival or exceed those of the worst inner cities. Humans are natural Hobbesians given to “a war of all against all,” but, in his view, we have been socialized out of such violence, and not just as individuals, but as states.

Pinker’s idea of original human societies being so violent and civilization as a kind of domestication of mankind away from this violence left me with many unanswered questions. If we were indeed so naturally violent how or why did we establish societies in the first place? Contrary to his claim, didn’t the institutionalization of violence in the form of war between states actually make our situation worse? How could so many of us cringe from violence at even a very early age, if we were naturally wired to be killers?

I couldn’t resolve any of these questions until I had read Mark Pagel’s Wired for Culture. What Pagel showed is that most of us are indeed naturally “wired” to be repulsed by violence; the problem is that this repulsion has a very sensitive off switch. It can be turned off when our community is threatened, either by those who have violated community norms, so-called moral anger, or when violence is directed towards rival groups outside the community. In such cases we can be far more savage than the most vicious of animals, with our creativity and inventiveness turned to the expression of cruelty.

Modern society is one that has cordoned off violence. We don’t have public hangings anymore, and we cringe at the death of civilians at the hands of our military (when we are told about them). Yet this attitude towards violence is so new we cannot reasonably expect it has become permanent.

I have no intention of picking on the Russians, and Bush’s “Axis of Evil” speech would have done just as well or better here, but to keep things current: Putin, in his bellicose oration before the Kremlin, pressed multiple sides of Pagel’s violence “off switch”:

He presented his opponents as an evil rival “tribe”:

However, those who stood behind the latest events in Ukraine had a different agenda: they were preparing yet another government takeover; they wanted to seize power and would stop short of nothing. They resorted to terror, murder and riots. Nationalists, neo-Nazis, Russophobes and anti-Semites executed this coup. They continue to set the tone in Ukraine to this day.

And called for the defense of the community and the innocent:

Let me say one other thing too. Millions of Russians and Russian-speaking people live in Ukraine and will continue to do so. Russia will always defend their interests using political, diplomatic and legal means. But it should be above all in Ukraine’s own interest to ensure that these people’s rights and interests are fully protected. This is the guarantee of Ukraine’s state stability and territorial integrity.

What this should show us, and Americans certainly shouldn’t need a lesson here, is that norms against violence (though violence in Ukraine has so far, thankfully, been low) can be easily turned off given the right circumstances. Putin, by demonizing his Ukrainian opponents and claiming that Russia would stand in defense of the rights of the Russian minority in Ukraine, was rallying the Russian population for a possible escalation of violence should his conditions not be met. His speech was met with a standing ovation. It is this ease with which our instincts for violence can be turned on that suggests Pinker may have been too optimistic in thinking war is becoming a thing of the past, if we are depending on a change in norms alone.

Then there is sheer chance. Pinker’s theory of the decline of violence relies in general on Gaussian bell curves, averages over long stretches of time, but if we should have learned anything from Nassim Taleb, his black swans and the financial crisis, it’s the fat tails that should worry us most: the occurrence of a highly improbable event that flips our model of the world, and the world itself, on its head and collapses the Gaussian curve. Had Stanislav Petrov decided to fire his ICBMs rather than sit on his hands, Pinker’s decline of violence, up to that point, would have looked like statistical noise masking the movement towards the real event: an unprecedented expression of human violence that would have killed the majority of the human race.
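
The statistical point can be made concrete with a minimal sketch comparing a thin-tailed (normal) distribution with a fat-tailed one. The particular distributions, threshold and parameters below are my own illustration, not anything Pinker or Taleb computes; the point is only how much more probable an extreme event becomes once the tails are fat.

```python
from scipy import stats

# Probability of an event at least 5 "standard units" out, under a
# thin-tailed (normal) vs. a fat-tailed (Student's t, df=2) distribution.
threshold = 5
p_normal = stats.norm.sf(threshold)      # thin tails
p_fat = stats.t.sf(threshold, df=2)      # heavy tails

print(f"P(X > {threshold}) under a normal distribution: {p_normal:.2e}")
print(f"P(X > {threshold}) under a fat-tailed t (df=2): {p_fat:.2e}")
print(f"The fat-tailed distribution makes the extreme event "
      f"roughly {p_fat / p_normal:,.0f} times more likely.")
```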

Like major financial crises that happen once in a century, or natural disasters that appear over longer stretches of time, anything we’ve once experienced can happen again with the probability of recurrence often growing over time.  If human decision making is the primary factor involved, as it is in economic crises and war, the probability of such occurrences may increase as the generation whose errors in judgement brought on the economic crisis or war recedes from the scene taking their acquired wisdom and caution with them.

And we are losing sight of this possibility. Among military theorists, rather than defense contractors, Colin S. Gray is one of an extreme minority trying to remind us that war between the big powers is not impossible, as he writes in Another Bloody Century:

If we grant, with some reservations, that there is a trend away from interstate warfare, there hovers in the background the thought that this is a trend that might be reversed abruptly. No country that is a significant player in international security, not least the United States, has yet reorganized and transformed its regular military establishment to reflect the apparent demise of ‘old’ (interstate) wars and the rise of new ones.

Gray, for one, does not think that we’ll see a replay of the 20th century’s world wars, with massive industrial armies fighting it out on land and sea. The technology today is simply far too different from that of the first half of the last century. War has evolved and is evolving into something very different, but interstate war will likely return.

We might not see the recurrence of world war but merely skirmishes between the big powers. This would be more of a return to normalcy than anything else. World wars, involving the whole of society, especially civilians, are a very modern phenomenon dating perhaps no earlier than the French Revolution. In itself a return to direct clashes between the big powers would be very bad, but not so bad as slippage into something much worse, something that might happen because escalation had gone beyond the point of control.

The evolution of 21st-century war may make such great power skirmishes more likely. Cyber-attacks have, so far at least, come with few real-world consequences for the attacking country. As was the case with the German officer corps in World War I, the professional soldiers who have replaced draftees and individuals seeking a way out of poverty as the basis of modern militaries seem likely to be more eager to fight so as to display their skills, and may in time be neurologically re-engineered to deal with the stresses of combat. It is at least conceivable that professional soldiers might be the first class to have full legal access to the technological and biological enhancements being made possible by advances in prosthetics, mind-computer interfaces and neuroscience.

Governments as well as publics may become more willing to engage in direct conflict as relatively inexpensive and expendable drones and robots replace airmen and soldiers. Ever more of warfighting might come to resemble a videogame, with soldiers located far from the battlefield. Both war and the international environment in which wars are waged have evolved and are evolving into something very unlike what we have experienced since the end of the Cold War. The farther out it comes, the more likely that the next big war will be a transhumanist or post-human version of war, and there are things we can do now that might help us avoid it - subjects I will turn to in the near future.