A Cure for Our Deflated Sense of The Future

Progressland, 1964 World’s Fair

There’s a condition I’ve noted among former hard-core science-fiction fans that, for want of a better word, I’ll call future-deflation. The condition consists of an air of disappointment with and detachment from the present that emerges because the future one dreamed of in one’s youth has failed to materialize. It was a dream of what the 21st century would entail, fostered by science-fiction novels, films and television shows- a dream that has not arrived, and will seemingly never arrive, at least within our lifetimes. I think I have a cure for it, or at least a strong preventative.

The strange thing, perhaps, is that anyone would be disappointed that a fictional world has failed to become real in the first place. No one, I hope, feels the present is constricted and dull because there aren’t any flying dragons in it to slay. The problem, then, might lie in the way science-fiction is understood in the minds of some avid fans- not as fiction, but as plausible future history, or even a sort of “preview” and promise of all the cool things that await.

Future-deflation is a kind of dulling hangover from a prior bout of future-inflation, when expectations got way out ahead of themselves. If boys, now become men, feel let down by inflated expectations- driven by what proved to be the Venetian sunset, rather than the beginning, of the space race- regarding orbital cities, bases on the moon and Mars, and a hundred other things, their experience is a little like that of girls, fed on a diet of romance, who as adults have tasted the bitter reality of love. Following the same rule, I suppose I’d call that condition romance-deflation- cue the Viagra jokes.

Okay, so that’s the condition; how might it be cured? The first step to recovery is admitting you have a problem and identifying its source. Perhaps the main culprit behind future-deflation is the crack cocaine of CGI (and I really do mean CGI, as in computer-generated imagery, which I’ve written about before). Whereas late 20th century novels, movies, and television shows served as gateway drugs to our addiction to digital versions of the future, CGI and the technologies that will follow it are the true rush, allowing us to experience versions of the future that just might be more real than reality itself.

There’s a phenomenon discovered by social psychologists studying motivation: it’s a mistake to visualize your idealized outcomes too clearly, for doing so actually diminishes your motivation to achieve them. You get many of the same emotional rewards without any of the costs, and on that account never get down to doing the real thing. Our ability to create not just compelling but mind-blowing visualizations of technologies that are nowhere on the horizon has become so good, and will only get better, that it may be exacerbating disappointment with the present state of technology and the pace of technological change- increasing the sense of “where’s my jet pack?”

There’s a theory I’ve heard discussed by Brian Eno that the reason we haven’t been visited by any space aliens is that civilizations at a certain point fall into a state of masturbatory self-satisfaction. They stop actually doing stuff because imagining things becomes so much better and easier than the difficult and much less satisfying achievements available in reality.

The cure for future-deflation is really just adulthood. We need to realize that the things we would like to do and see done are hard and expensive and take long commitments over time- often far past our own lifetimes- to achieve. We need to get off our Oculus Rift-weighed-down asses and actually do stuff. Elon Musk with his SpaceX seems to realize this, but with a ticket to Mars projected to cost 500 thousand dollars, one can legitimately wonder whether he’ll end up creating an escape hatch from earth for the very rich that the rest of us will be stuck gawking at on our big-screen TVs.

And therein lies the heart of the problem, for what matters for the majority of us is actually less what technologies are available in the future than the largely non-technological question of how such a future is politically and economically organized- which will determine how many of us have access to these technologies.

The question we should be asking when thinking about things we should be doing now to shape the future is a simple and very human one: “what kind of world do I hope my grandchildren live in?” A part of the answer to this question is going to involve technology and scientific advancement, but not as much of it as we might think. Other types of questions, dealing with issues such as the level of equality, peace and security, a livable environment, and the amount of freedom and purpose, are both more important and more capable of being influenced by the average person. These are things we can pursue even if we have no connection to the communities of science and technology. Even should technological progress completely stall, we could achieve many of these things with the technological kit we already have.

In a way, because it emerged in tandem with the technological progress set in motion by the scientific and industrial revolutions, science-fiction seemed to own the future, and those who practiced the art largely did well by us in giving it shape- at least in our minds. But in reality the future was just on loan, and it might do us best to take back a large portion of it and encourage everyone who wants to have more say in defining it. Or better, here’s my advice: those techno-progressives not involved directly in the development of science and technology should focus more of their efforts on the progressive side of the scale. That way, if even part of the promised future arrives you won’t be confined to just watching it while wearing your Oculus Rift or stuck in your seat at the IMAX.

Why archaeologists make better futurists than science-fiction writers

Astrolabe vs iPhone

Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, tried to use the regularity, or lack of it, in the night sky as a projection of the future and omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge bias is towards the present and the past. Survival means seeing far enough ahead to avoid dangers, so an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.

It’s weird how many of the ancient techniques of divination have a probabilistic element to them, as if the best tool for discerning the future were a magic 8 ball, though perhaps it makes a great deal of sense. Probability and chance come into play wherever our knowledge comes up against limits, and these limits can consist merely of knowledge we are not privy to, as in the game of “He loves me, he loves me not” little girls play with flower petals. As Henri Poincaré said: “Chance is only the measure of our ignorance.”

Poincaré came at the question assuming the future is already determined- the idea of many physicists, who think our knowledge bias towards the present and past is, like the game of “he loves me, he loves me not”, all in our heads; that the future already exists in the same sense as the past and present, and is just a set of facts we can only dimly see. It shouldn’t surprise us that they say this, physics being our most advanced form of divination, and just one among many modern forms, from meteorology to finance to genomics. Our methods of predicting the future are more sophisticated and effective than those of the past, but they still only barely see through the fog in front of us, which doesn’t stop us from being overconfident in our predictive prowess. We even have an entire profession, futurists, who claim- like old-school fortune tellers- to have a knowledge of the future no one can truly possess.

There’s another group that claims the ability to divine at least some rough features of the future- science-fiction writers. The idea of writing about a future substantially different from the past really didn’t emerge until the 19th century, when technology began not only making the present substantially different from the past, but promised to keep doing so out into an infinite future. Geniuses like Jules Verne and H.G. Wells were able to look at the industrial revolution changing the world around them and project it forward in time with sometimes extraordinary prescience.

There is a good case to be made, and I’ve tried to make it before, that science-fiction is no longer very good at this prescience. The shape of the future is now occluded, and we’ve known the reasons for this for quite some time. Karl Popper pointed out as far back as the 1930’s that the future is essentially unknowable, and part of this unknowability is that the course of scientific and technological advancement cannot be known in advance. Too tight a fit with wacky predictions of the future of science and technology is characteristic of the worst and campiest science-fiction.

Another good argument can be made that technologically dependent science-fiction isn’t so much predicting the future as inspiring it- that the children raised on Star Trek’s communicator end up inventing the cell phone. Yet technological horizons can just as easily be atrophied as energized by science-fiction. This is the point Robinson Meyer makes in his blog post “Google Wants to Make ‘Science Fiction’ a Reality—and That’s Limiting Their Imagination”.

Meyer looks at the infamous Google X, a supposedly risky research arm of Google, one of whose criteria for embarking on a project is that it must utilize “a radical solution that has at least a component that resembles science fiction.”

The problem Meyer finds with this is:

…“science fiction” provides but a tiny porthole onto the vast strangeness of the future. When we imagine a “science fiction”-like future, I think we tend to picture completed worlds, flying cars, the shiny, floating towers of midcentury dreams.

We tend, in other words, to imagine future technological systems as readymade, holistic products that people will choose to adopt, rather than as the assembled work of countless different actors, which they’ve always really been.

Meyer mentions a recent talk by a futurist I hadn’t heard of before, Scott Smith, who calls these kinds of clean futuristic visions of total convenience- antiseptic worlds surrounded by smart glass, where all the world’s knowledge is at our fingertips and everyone is productive and happy- “flat-pack futures”: the world of tomorrow ready to be brought home and assembled like a piece of furniture from the IKEA store.

I’ve always had trouble with these kinds of simplistic futures, which are really just corporate advertising devices, not real visions of a complex future that will inevitably have much ugliness in it, including the ugliness that emerges from the very technology that is supposed to make our lives a paradise. The major technological players are not even offering alternative versions of the future, just the same bland future with different corporate labels, as seen here, here, and here.

It’s not that these visions of the future are always totally wrong, or even just unattractive, so much as that people in the present- meaning us- have no real agency over which elements of these visions they want and which they don’t, with the exception of exercising consumer choice, the very thing these flat-pack visions of the future are trying to get us to buy.

Part of the reason Smith sees a mismatch between these anodyne versions of the future, which we’ve had since at least the early 20th century, and what actually happens is that these corporate versions of the future lack diversity and context, not to mention conflict. This shouldn’t really surprise us- they are advertisements, after all, geared to high-end consumers- not actual predictions of what the future will in fact look like for poor people or outcasts or non-conformists of one sort or another. One shouldn’t expect advertisers to show the bad that might happen should a technology fail, or be hacked, or be used for dark purposes.

If you watch enough of them, their more disturbing assumption is that by throwing a net of information over everything we will finally have control, and all the world will fall under the warm blanket of our comfort and convenience. Or as Alexis Madrigal put it in a thought-provoking recent piece, also at The Atlantic:

We’re creating a world that seamlessly, effortlessly shapes itself to human desire. It’s no longer cutting through a mountain to prove we dominate nature; now, it’s satisfying each impulse in the physical world with the ease and speed of digital tools. The shortest way to explain what Uber means: hit a button and something happens in the world (that makes life easier for you).

To return to Meyer, his point was that by looking almost exclusively to science-fiction to see what the future “should” look like, designers, technologists, and by implication some futurists, had reduced themselves to a very limited palette. How might this palette be expanded? I would throw my bet behind turning to archaeologists, anthropologists, and certain kinds of historians.

As Steve Jobs understood, in some ways the design of a technology is as important as the technology itself. But Jobs sought an almost formlessness- the clean, antiseptic modernist zen you feel when you walk into an Apple Store. As someone whose name I can’t place once said, an Apple Store is designed like a temple. But it’s a temple representative of only one very particular form of religion.

All the flat-pack versions of the future I linked to earlier have one prominent feature in common: they are obsessed with glass. Everything in the world, it seems, will be a see-through touchscreen bubbling with the most relevant information for the individual at that moment- the weather, traffic alerts, box scores, your child’s homework assignment. I have a suspicion that this glass fetish is a sort of Freudian slip on the part of tech companies, for the thing about glass is that you can see both ways through it, and what these companies most want is the transparency of you- to be able to peer into the minutiae of your life, so as to be able to sell you stuff.

Technology designers have largely swallowed the modernist glass-covered kool-aid, but one way to purge might be to turn to archeology. For example, I’ve never found a tool more beautiful than the astrolabe, though it might be difficult to carry a smartphone in the form of one in your pocket. Instead of all that glass, why not a Japanese paper house on which all our supposedly important information is to constantly appear? Whiteboards are better for writing anyway. Ransacking the past for the forms in which we could embed our technologies might be one way to escape the current design conformity.

There are reasons to hope that technology itself will help us break out of our futuristic design conformity. Ubiquitous 3D printing may allow us to design and create our own household art, tools, and customized technology devices. Websites my girls and I often look to for artistic inspiration and projects, such as the Met’s 82nd & Fifth, may- as the images get better, eventually becoming 3D and even providing CAD specifications- allow us access to a great deal of the world’s objects from the past, providing design templates for ways to embed our technology that reflect our distinct personal and cultural values, values that might have nothing to do with those whipped up by Silicon Valley.

Of course, we’ve been ransacking the past for ideas about how to live in the present and future since modernity began. Augustus Pugin’s 1836 book Contrasts, with its beautiful, if slightly dishonest, drawings, put the 19th century’s bland, utilitarian architecture up against the exquisite beauty of buildings from the 15th century. How cool would it be to have a book like Pugin’s contrasting technology today with that of the past?

The problem with Pugin’s and others’ efforts in this regard was that they didn’t merely look to the past for inspiration or forms but sought to replace the present and its assumptions with the revival of whole eras, assuming a kind of cultural and historical rewind button or time machine was possible when in reality the past was forever gone.

Looking to the past, as opposed to trying to raise it from the dead, has other advantages when it comes to our ability to peer into and shape the future. It gives us different ways of seeing how our technology might be used as a means towards the things that have always mattered most to human beings- our relationships, art, spirituality, curiosity. Take, for instance, the way social media such as Skype and FaceBook are changing how we relate to death. On the one hand social media seems to break down the way in which we have traditionally dealt with death- as a community sharing a common space- while on the other, it seems to revive a kind of analog to practices regarding death that have long passed from the scene, practices that archeologists and social historians know well, where the dead are brought into the home and tended to for an extended period of time. Now though, instead of actual bodies, we see the online legacy of the dead- websites, FaceBook pages and the like- being preserved and attended to by loved ones. A good question to ask oneself when thinking about how future technology will be used might not be what new thing it will make possible, but what old thing, no longer done, it might be used to revive.

In other words, the design of technology is the reflection of a worldview, but it is only one very limited worldview competing with innumerable others. Without fail, for good and ill, technology will be hacked and used as a means by people whose worldviews have little if any relationship to that of technologists. What would be best is if the design itself- not the electronic innards, but the shell and the organization for use- were controlled by non-technologists in the first place.

The best of science-fiction actually understands this. It’s not the gadgets that are compelling, but the alternative world: how it fits together, where it is going, and what it says about our own world. That’s what books like Asimov’s Foundation Trilogy did, or Frank Herbert’s Dune. Neither was a prediction of the future so much as a fictional archeology and history. It’s not the humans who are interesting in a series like Star Trek, but the non-humans, who each live in their own very distinct version of a life-world.

Those science-fiction writers not interested in creating alternative worlds, or in taking a completely imaginary shot in the dark at what life might be like many hundreds of years beyond our own, might benefit from reflecting on what they are doing, and on why they should lean heavily on knowledge of the past.

To the extent that science-fiction tells us anything about the future, it is through a process of focused extrapolation. As the sci-fi author Paolo Bacigalupi has said (and I’ve quoted before), the role of science-fiction is to take some set of elements of the present and extrapolate them to see where they lead. Or, as William Gibson said: “The future is already here- it’s just not evenly distributed.” In his novel The Windup Girl Bacigalupi extrapolated cutthroat corporate competition, the use of mercenaries, genetic engineering, the way biotechnology and agribusiness seem to be trying to take ownership over food, and our continued failure to tackle carbon emissions, and came up with a chilling dystopian world. Amazingly, Bacigalupi, an American, managed to embed this story in a Buddhist cultural and Thai historical context.

Although I have not had the opportunity to read the book yet- it is now at the top of my end-of-summer reading list- former neuroscientist and founder of the gaming company Six to Start, Adrian Hon, seems to grasp precisely this in his book The History of the Future in 100 Objects. As Hon explained in an excellent talk over at the Long Now Foundation (I believe I’ve listened to it in its entirety 3 times!), he was inspired by Neil MacGregor’s History of the World in 100 Objects (also on my reading list). What Hon realized was that the near future, at least, would still have many of the things we are familiar with- things like religion or fashion- and so what he did was extrapolate his understanding of technology trends, gained from both his neuroscience and gaming backgrounds, onto those things.

Science-fiction writers, and their kissing-cousins the futurists, need to remind themselves when writing about the next few centuries that, as long as there are human beings (and otherwise what are they writing about?), there will still be much about us that we’ve had since time immemorial. Little doubt, there will still be death, but also religion. And not just any religion, but the ones we have now: Christianity and Islam, Hinduism and Buddhism etc. (Can anyone think of a science-fiction story where they still celebrate Christmas?) There will still be poverty, crime, and war. There will also be birth and laughter and love. Culture will still matter, and sub-culture just as much, as will geography. For at least the next few centuries, let us hope, we will still be human, so perhaps I shouldn’t have used archeologists in my title, but anthropologists or even psychologists, for those who can best understand the near future are those who best understand the mind and heart of mankind rather than the current, and inevitably unpredictable, trajectory of our science and technology.


The First Machine War and the Lessons of Mortality

Lincoln Motor Co., Detroit, Michigan, ca. 1918. U.S. Army Signal Corps, Library of Congress

I just finished a thrilling little book about the first machine war. The author writes of a war set off by a terrorist attack, where the very speed with which machines can be put into action, and the near light-speed of telecommunications whipping up public opinion to do something now, drive countries into a world war.

In his vision whole new theaters of war, amounting to fourth and fifth dimensions, have been invented. Amid a storm of steel, huge hulking machines roam across the landscape and literally shred the human beings in their path to pieces. Low-flying avions fill the sky, taking out individual targets or helping calibrate precision attacks from incredible distances. Wireless communications connect soldiers and machines together in a kind of world-net.

But the most frightening aspect of the new war is its weapons based on plant biology. Such weapons, if they do not immediately scar the faces and infect the bodies of those targeted, relocate themselves in the soil like spores, waiting to kill and maim when conditions are ripe- the ultimate terrorist weapon.

Amid all this the author searches for what human meaning might remain in a world where men are caught between warring machines. In the end he comes to understand himself as a mere cog in a great machine, a metallic artifice that echoes and rides the rhythms of biological nature, including his own.

__________________________________

A century and a week back from today (July 28, 1914), humanity began its first war of machines. There had been forerunners, such as the American Civil War and the Franco-Prussian War in the 19th century, but nothing before had exposed the full features and horrors of what war between mechanized and industrial societies would actually look like until it came, quite unexpectedly, in the form of World War I.

Those who wish to argue that the speed of technological development is exaggerated need only look at the First World War, where almost all of the weapons we now use in combat were either first used or first used to full effect- the radio, the submarine, the airplane, the tank and other vehicles using the internal combustion engine. Machine guns were let loose in new and devastating ways, as was long-range artillery.

Although again there were forerunners, the first biological and chemical weapons saw their true debut in WWI. The Germans tried to infect the city of St. Petersburg with a strain of the plague, but the most widely used WMDs were chemical weapons, some of them derived from work on the nitrogen cycle of plants- gases such as chlorine and mustard gas, which maimed more than they killed, and sometimes sat in the soil ready to emerge like poisonous mushrooms when weather conditions permitted.

Indeed, the few other weapons in our 21st century arsenal that can’t be found in the First World War- such as the jet, the rocket, the atomic bomb, radar, and even the computer- would make their debut only a few decades after the end of that war, during what most historians consider its second half: World War II.

What is called the Great War began, as our 9-11 wars began, with a terrorist attack. The Archduke of Austria-Hungary, Franz Ferdinand, was assassinated by the ultimate nobody, a Serbian nationalist not much older than a boy- Gavrilo Princip- whose purely accidental success (he was only able to take his shot because the car the Archduke was riding in had taken a wrong turn) ended up being the catalyst for the most deadly war in human history up until that point, a conflict that would unleash nearly a century of darkness and mortal danger upon the world.

For the first time it would be a war experienced by people thousands of miles from the battlefield in almost real time, via the relatively new miracle of the radio. This was only part of the lightning-fast feedback loop that launched and sped European civilization from a minor political assassination to total war. As I recall from Stephen Kern’s 1983 The Culture of Time and Space 1880-1914, political caution and prudence found themselves crushed in the vise of compressed space and time, forced to make policy decisions aligned not with the needs of human beings and their slow, messy and deliberative politics, but with the pace of machines. Once the decision to mobilize was made it was almost impossible to stop without subjecting the nation to extreme vulnerability; once a jingoistic public had been whipped up via the new mass media to demand revenge and action, it was again nearly impossible to silence and temper.

The First World War is perhaps the ultimate example of what Nassim Taleb called a “Black Swan”: an event that failed to be foreseen and whose effect changed the future radically from the shape it had previously been projected to have. Taleb defines a Black Swan this way:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Secondly it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. (xxii)

Taleb has something to teach futurists, whom he suggests might start looking for a different day job. For what he is saying is that it is the events we do not and cannot predict that will have the greatest influence on the future, and on account of this blindness the future is essentially unknowable. The best we can do, in his estimation, is to build resilience, robustness and redundancy, which it is hoped might allow us to survive, or even gain, in the face of multiple forms of unpredictable crisis- to become, in his terms, “anti-fragile”.

No one seems to think another world war is possible today, which might give us reason for worry. We do have more circuit breakers in place which might allow us to dampen a surge in the direction of war between the big states in the face of a dangerous event such as Japan downing a Chinese plane in their dispute over Pacific islands, but many more and stronger ones need to be created to avoid such frictions spinning out of control.

States continue to prepare for limited conventional wars against one another. China practices and plans to retake disputed islands, including Taiwan, by force, and to push the U.S. Navy deeper into the Pacific, while the U.S. and Japan practice retaking islands from the Chinese. We do this without recognizing that we need to do everything possible to prevent such potential clashes in the first place, because we have no idea, once they begin, where or how they will end. As in financial crises, the further in time we get from the last great crisis the more likely we are to have fallen into a dangerous form of complacency, though the threat of nuclear destruction may act as an ultimate brake.

The book I began this essay with is, of course, not some meditation on 21st or even 22nd century war, but a memoir of the First World War itself. Ernst Jünger’s Storm of Steel is perhaps the best book ever written on the Great War, and arguably one of the best books written on the subject of war- period.

It is Jünger’s incredible powers of observation and his desire for reflection that give the book such force. There is a scene that will forever stick with me, where Jünger is walking across the front lines and sees a man sitting there in seemingly perfect stoicism as the war rages around him. It’s only when he looks closer that Jünger realizes the man is dead, and that he has no eyes in his sockets- they have been blown out by an explosion from behind.

Unlike another great book on the First World War, Erich Remarque’s All Quiet on the Western Front, Storm of Steel is not a pacifist book. Jünger is a soldier who sees the war as a personal quest and part of his duty as a man. His bravery is undeniable, but he does not question the justice or rationality of the war itself, a fact that would later allow Jünger’s war memoir to be used as a propaganda tool by the Nazis.

Storm of Steel has about it something of the view of war found in the ancients- that it was sent by the gods and there was nothing that could be done about it but to fulfill one’s duty within it. In the Hindu holy book the Bhagavad Gita the prince Arjuna is filled with doubt over the moral righteousness of the battle he is about to engage in- to which the god Krishna responds:

 …you Arjuna, are only a mortal appointee to carry out my divine will, since the Kauravas are destined to die either way, due to their heap of sins. Open your eyes O Bhaarata and know that I encompass the Karta, Karma and Kriya, all in myself. There is no scope for contemplation now or remorse later, it is indeed time for war and the world will remember your might and immense powers for time to come. So rise O Arjuna!, tighten up your Gandiva and let all directions shiver till their farthest horizons, by the reverberation of its string.

Jünger lived in a world that had begun to abandon the gods, or rather had adopted new materialist versions of them- whether the force of history or of evolution- stories in which Jünger, like Arjuna, comes to see himself as playing a small part.

 The incredible massing of forces in the hour of destiny, to fight for a distant future, and the violence it so surprisingly, suddenly unleashed, had taken me for the first time into the depths of something that was more than mere personal experience. That was what distinguished it from what I had been through before; it was an initiation that not only had opened red-hot chambers of dread but had also led me through them. (255-256)

Jünger is not a German propagandist. He seems blithely unaware of the reasons his Kaiser’s government gave for why the war was being fought. The dehumanization of the other side, a main part of war propaganda towards the end of the conflict, and on both sides, does not touch him. He is a mere soldier whose bravery comes from the recognition of his own ultimate mortality, just as the mortality of everyone else allows him to kill without malice, as a mere matter of the simple duty of a combatant in war.

Because his memoir of the conflict is so authentic, so without bias or apparent political aims, he ends up conveying truths about war that are difficult for civilians to understand- a difficulty found not only in pacifists, but in nationalist war-mongers with no experience of actual combat.

If we open ourselves to the deepest meditations of those who have actually experienced war, what we find is that combat seems to bring the existential reality of the human condition out from its normal occlusion by the tedium of everyday living. To live in the midst of war is a screaming reminder that we are mortal and our lives ultimately very short. In war it is very clear that we are playing a game of chance against death, which is merely the flip side of the statistical unlikelihood of our very existence- as if our one true God were indeed chance itself. Like any form of gambling, victory against death itself becomes addictive.

War makes it painfully clear to those who fight in it that we are hanging on just barely to this thread, this thin mortal coil, where our only hope for survival for a time is to hang on tightly to those closest to us- war’s famed brotherhood in arms. These experiences, rather than childish flag-waving notions of nationalism, are likely the primary source of what those who have experience only of peace often find unfathomable- that soldiers from the front often eagerly return to battle. It is this shared experience of combat that often leads soldiers to find more in common with the enemies they have engaged than with their fellow citizens back home who have never been to war.

The essential lessons of Storm of Steel are really spiritual answers to the question of combat. Jünger’s war experience leads him to something like Buddhist non-attachment, both to himself and to the futility of the bird’s-eye view justifications of the conflict.

The nights brought heavy bombardment like swift, devastating summer thunderstorms. I would lie on my bunk on a mattress of fresh grass, and listen, with a strange and quite unjustified feeling of security, to the explosions all around that sent the sand trickling out of the walls.

At such moments, there crept over me a mood I hadn’t known before. A profound reorientation, a reaction to so much time spent so intensely, on the edge. The seasons followed one another, it was winter and then it was summer again, but it was still war. I felt I had got tired, and used to the aspect of war, but it was from familiarity that I observed what was in front of me in a new and subdued light. Things were less dazzlingly distinct. And I felt the purpose with which I had gone out to fight had been used up and no longer held. The war posed new, deeper puzzles. It was a strange time altogether. (260)

In another scene Jünger comes upon a lone enemy officer while by himself on patrol.

 I saw him jump as I approached, and stare at me with gaping eyes, while I, with my face behind my pistol, stalked up to him slowly and coldly. A bloody scene with no witnesses was about to happen. It was a relief to me, finally, to have the foe in front of me and within reach. I set the mouth of my pistol at the man’s temple- he was too frightened to move- while my other fist grabbed hold of his tunic…

With a plaintive sound, he reached into his pocket, not to pull out a weapon, but a photograph which he held up to me. I saw him on it, surrounded by numerous family, all standing on a terrace.

It was a plea from another world. Later, I thought it was blind chance that I had let him go and plunged onward. That one man of all often appeared in my dreams. I hope that meant he got to see his homeland again. (234)

The unfortunate thing about the future of war is not that human beings seem likely to play a diminishing role as fighters in it, as warfare undergoes the same process of automation that has resulted in so few of us now growing our food or producing our goods. Rather, it is that wars will continue to be fought, and human beings- which will come to mean almost solely non-combatants- will continue to die in them.

The lessons Jünger took from war are not so much the product of war itself as of intense reflection on our own and others’ mortality. They are the same type of understanding and depth often seen in those who undergo long periods of dying- the terminally ill, who die not in the swift “thief in the night” mode of accidents or bodily failure, but slip from the world with enough time, and while retaining the capacity, to reflect. Even the very young who are terminally ill often speak of a diminishing sense of their own importance, a need to hang onto the moment, a drive to live life to the full, and the longing to treat others in a spirit of charity and mercy.

Even should the next “great war” be fought almost entirely by machines, we can retain these lessons as a culture as long as we give our thoughts over to what it means to be a finite creature with an ending, and we will have the opportunity to experience them personally as long as we are mortal- and given the impossibility of any form of eternity, no matter how far we extend our lifespans, mortal we always will be.


Sherlock Holmes as Cyborg and the Future of Retail

Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st century world. The thing I really enjoy about the show is that it’s the first time I can recall anyone managing to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.

Part of the genius of the series is that the characters around Sherlock, especially Watson, are constantly trying to press upon him the fact that he is indeed human, kind of in the same way Bones is the emotional foil to Spock.

Sherlock uses an ingenious device to display Holmes’ famed powers of deduction. When Sherlock focuses his attention on a character, words float around them displaying some relevant piece of information- say, the price of a character’s shoes, what kind of razor they used, or what they did the night before. The first couple of times I watched the series I had the eerie feeling that I’d seen this device before, but I couldn’t put my finger on it. And then it hit me: I’d seen it in a piece of design fiction called Sight that I’d written about a while back.

In Sight the male character is equipped with contact lenses that act as an advanced form of Google Glass. This allows him to surreptitiously access information such as the social profile and real-time emotional reactions of a woman he is out on a date with. The whole thing is downright creepy, and appears to end with the woman’s rape and perhaps even her murder- the viewer is left to guess the outcome.

It’s not only the style of heads-up display containing intimate personal detail that put me in mind of the BBC’s Sherlock, where the hero has these cyborg capabilities built into his very nature rather than acquired through technology; it’s also the idea that there is this sea of potentially useful information just sitting there on someone’s face.

In the future, it seems, anyone who wants to will have Sherlock Holmes types of deductive powers, but that got me thinking: who in the world would want to? I mean, it’s not like we’re not already bombarded with streams of useless data we are unable to process. Access to Sight-level amounts of information about everyone we came into contact with and didn’t personally know would squeeze our mental bandwidth down to dial-up speed.

I think the not-knowing part is important, because you’d really only want a narrow stream of new information about a person you were already well acquainted with- something like the information people now put on their FaceBook wall. You know, it’s so-and-so’s niece’s damned birthday, and your monster-truck driving, barrel-necked cousin Tony just bagged something that looks like a mastodon on his hunting trip to North Dakota.

Certainly the ability to scan a person like a QR code would come in handy for police or spooks, and will likely be used by even more sophisticated criminals and creeps than the ones we have now. These groups work at close quarters with people they don’t know all the time. Aside from some medical professionals, there’s one other large group that works in this same narrow gap, for whom it would regularly be useful, and not inefficient, to have the maximum amount of information available about the individual standing in front of them- salespeople.

Think about a high-end retailer such as a jeweler, or perhaps more commonly an electronics outlet such as the Apple Store. It would certainly be of benefit to a salesperson to be able to instantly gather details about what an unknown person who walked in the store did for a living, their marital status, number of family members, and even their criminal record. There is also the matter of making the sale itself, and here the kinds of feedback data seen in Sight would come in handy. Such feedback data is already possible across multiple technologies, and only needs to be combined. All one would need is a name, or perhaps even just a picture.

Imagine it this way: you walk into a store to potentially purchase a high-end product. The salesperson wears the equivalent of Google Glass. They ask for your name and you give it to them. The salesperson is able to, without you ever knowing, gather up everything publicly available about you on the web, after which they can buy your profile, purchasing, and browser history- again surreptitiously, perhaps by just blinking- from a big data company like Acxiom, and tailor their pitch to you. This is similar to what happens now when you are solicited through targeted ads while browsing the Internet, and perhaps the real future of such targeted advertising, as the success of FaceBook in mobile shows, lies in the physical rather than the virtual world.

In the Apple Store example, the salesperson would be able to know what products you owned and your usage patterns, perhaps getting some of this information directly from your phone, and would therefore be able to pitch you the accessories and upgrades most likely to make a sale.
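To make the scenario concrete, here’s a minimal sketch of the kind of pipeline such a salesperson’s glasses might run. Everything in it is hypothetical- the PUBLIC_WEB and DATA_BROKER tables and the build_pitch function are invented stand-ins for a web scrape and a purchased broker profile, not any real vendor’s API:

```python
# A toy sketch of the surreptitious sales pipeline described above.
# All data sources and names here are hypothetical stand-ins.

PUBLIC_WEB = {   # what a scrape of public profiles might yield
    "jane doe": {"occupation": "architect", "family_size": 4},
}

DATA_BROKER = {  # what a purchased broker profile might add
    "jane doe": {"owns": ["phone_x", "tablet_y"],
                 "recent_searches": ["noise-cancelling headphones"]},
}

ACCESSORIES = {"phone_x": ["case", "wireless charger"],
               "tablet_y": ["stylus", "keyboard cover"]}

def build_pitch(customer_name: str) -> list[str]:
    """Merge public and purchased data, then assemble a tailored pitch."""
    name = customer_name.lower()
    profile = {**PUBLIC_WEB.get(name, {}), **DATA_BROKER.get(name, {})}
    pitch = []
    # Lead with whatever the customer has already been searching for...
    pitch.extend(profile.get("recent_searches", []))
    # ...then upsell accessories for the devices they already own.
    for device in profile.get("owns", []):
        pitch.extend(ACCESSORIES.get(device, []))
    return pitch

print(build_pitch("Jane Doe"))
# ['noise-cancelling headphones', 'case', 'wireless charger',
#  'stylus', 'keyboard cover']
```

The unsettling part is how little the sketch needs: a name, two lookups, and a merge.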

The next layer, reading your emotional reactions, is a little trickier, but again much of the technology already exists or is in development. We’ve all heard those annoying messages when dealing with customer service over the phone that bleat at us “This call may be monitored….”. One might think this recording is done as insurance against lawsuits, and that is certainly one of the reasons. But, as Christopher Steiner points out in his Automate This, another major reason for these recordings is to refine algorithms that help customer service representatives filter customers by emotional type and interact with them accordingly.

These types of algorithms will only get better, and work on social robots, used largely for medical care and emotional therapy, is moving this rapid-fire algorithmic gauging of and response to human emotions from the audio to the visual realm.
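Purely as an illustration of the kind of filtering Steiner describes- real systems use statistical models trained on recorded calls, not hand-made word lists- a crude emotional-type classifier can be mocked up in a few lines; the cue table below is invented for the sketch:

```python
# A crude, illustrative emotional-type filter of the sort used to
# route customer-service calls. The keyword lists are invented;
# real systems learn these cues from audio and text.

EMOTION_CUES = {
    "angry":    ["unacceptable", "ridiculous", "cancel", "lawyer"],
    "anxious":  ["worried", "deadline", "urgent", "please help"],
    "friendly": ["thanks", "no rush", "appreciate"],
}

def classify_caller(transcript: str) -> str:
    """Score each emotional type by cue hits; return the best match."""
    text = transcript.lower()
    scores = {etype: sum(text.count(cue) for cue in cues)
              for etype, cues in EMOTION_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(classify_caller("This is ridiculous, I want to cancel today."))  # angry
```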

If this use of Sherlock Holmes-type powers by those with a power or wealth asymmetry over you makes you uncomfortable, I’m right there with you. But when it gets you down you might try laughing about it. One thing we need to keep in mind- both for sanity’s sake and so that we can have a more accurate gauge of how people in the future might preserve elements of their humanity in the face of technological capacities we today find new and often alien- is that it would have to be a very dark future indeed for there not to be a lot to laugh at in it.

Writers of utopia can be deliberately funny; Thomas More’s Utopia is meant to crack the reader up. Dystopian visions, whether of the fictional or nonfictional sort, avoid humor for a reason. The whole point of their work is to get us to avoid such a future in the first place- not, as is the role of humor, to make almost unlivable situations more human, or to undermine power by making it ridiculous, to point out the emperor has no clothes, and to defeat the devil by laughing at him. Dystopias are a laughless affair.

Just like in Sherlock, it’s a tough trick to pull off, but human beings of the future- as long as there are actually still human beings- will still find the world funny. In terms of technology used as a tool of power, the powerless, as they always have, are likely to get their kicks subverting it, and twisting its underlying assumptions to the breaking point.

Laughs will be had at epic fails and the sheer ridiculousness of control freaks trying to squish an unruly, messy world into frozen and pristine lines of code. Life in the future will still sometimes feel like a sitcom, even if the sitcoms of that time are pumped directly into our brains through nano-scale neural implants.

Utopias and dystopias emerging from technology are two sides of a crystal-clear future which, because of its very clarity, cannot come to pass. What makes ethical judgment of the technologies discussed above, indeed of all technology, difficult is their damned ambiguity, an ambiguity that largely stems from dual use.

Persons suffering from autism really would benefit from a technology that allowed them to accurately gauge, and guide their responses to, the emotional cues of others. An EMT really would be empowered if, at the scene of a bad accident, they could instantly access such a stream of information from just getting your name or even looking at your face, especially when such data contains relevant medical information- as would a person working in child protective services or the myriad forms of counseling.

Without doubt, such technology will be used by stalkers and creeps, but it might also be used to help restore trust and emotional rapport to a couple headed for divorce.

I think Sherry Turkle is essentially right, that the more we turn to technology to meet our emotional needs, the less we turn to each other. Still, the real issue isn’t technology itself, but how we are choosing to use it, and that’s because technology by itself is devoid of any morality and meaning, even if it is a cliché to say it. Using technology to create a more emotionally supportive and connected world is a good thing.

As Louise Aronson said in a recent article for The New York Times on social robots to care for the elderly and disabled:

But the biggest argument for robot caregivers is that we need them. We do not have anywhere near enough human caregivers for the growing number of older Americans. Robots could help solve this work-force crisis by strategically supplementing human care. Equally important, robots could decrease high rates of neglect and abuse of older adults by assisting overwhelmed human caregivers and replacing those who are guilty of intentional negligence or mistreatment.

Our Sisyphean condition is that any gain in our capacity to do good seems also to increase our capacity to do ill. The key, I think, lies in finding ways to contain and control the ill effects. I wouldn’t mind our versions of Sherlock Holmes using the cyborg-like powers we are creating, but I think we should be more than a little careful they don’t also fall into the hands of our world’s far too many Moriarties- though no matter how devilish these characters might be, we will still be able to mock them as buffoons.

Why the Castles of Silicon Valley are Built out of Sand

Ambrogio Lorenzetti, Temperance with an hourglass, Allegory of Good Government

If you get just old enough, one of the lessons living through history teaches you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from 100 AD into the 1600’s. Perhaps more dreams than not simply refuse to die; they hang on like ghosts, or ghouls, zombies or vampires, or whatever freakish version of the undead suits your fancy. Naming them would take up more room than I have in this post, and would no doubt start one too many arguments, all of our lists being different. Here, I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well the dream I’ll declare dead will have its defenders.

The fact of the matter is, I am not even sure what to call the dream I’ll be talking about. Perhaps digitopia is best. It was the dream, emerging sometime in the 1980’s and going mainstream in the heady 1990’s, that this new thing we were creating called the “Internet”, and the economic model it permitted, was bound to lead to a better world of more sharing, more openness, more equity- if we just let its logic play itself out over a long enough period of time. Almost all the big-wigs in Silicon Valley- the Larry Pages and Mark Zuckerbergs and Jeff Bezos(s) and Peter Diamandis(s)- still believe this dream, and walk around like 21st century versions of Mary Magdalene, claiming they can still see what more skeptical souls believe has passed.

By far the best Doubting Thomas of digitopia we have out there is Jaron Lanier. In part his power in declaring the dream dead comes from the fact that he was there when the dream was born and was once a true believer. Like Kevin Bacon in Hollywood, take any intellectual heavy hitter of digital culture, say Marvin Minsky, and you’ll find Lanier has some connection. Lanier is no Luddite, so when he says there is something wrong with how we have deployed the technology he in part helped develop, it’s right and good to take the man seriously.

The argument Lanier makes against the economic model we have built around digital technology in his most recent book, Who Owns the Future?, is in a nutshell this: what we have created is a machine that destroys middle-class jobs and concentrates information, wealth and power. Say what? Haven’t the Internet and mobile technology democratized knowledge? Don’t average people have more power than ever before? The answer to both questions is no, and the reason why is that the Internet has been swallowed by its own logic of “sharing”.

We need to remember that the Internet really got ramped up when it started to be used by scientists to exchange information with each other. It was built on the idea of openness and transparency, not to mention a set of shared values. When the Internet leapt out into public consciousness no one had any idea how to turn this sharing capacity and transparency into the basis for an economy. It took the aftermath of the dot-com bubble and bust for companies to come up with a model of how to monetize the Internet, and almost all of the major tech companies that dominate the Internet, at least in America- and there are only a handful: Google, FaceBook and Amazon- now follow some variant of this model.

The model is to aggregate all the sharing the Internet seems naturally to produce and offer it, along with other “complements”, for “free”, in exchange for one thing: the ability to monitor, measure and manipulate through advertising whoever uses their services. Like silicon itself, it is a model that is ultimately built out of sand.

When you use a free service like Instagram there are three ways it’s ultimately paid for. The first we all know about: the “data trail” we leave when using the site is sold to third-party advertisers, which generates income for the parent company, in this case FaceBook. The second and third ways the service is paid for I’ll get to in a moment, but the first way itself opens up all sorts of observations and questions that need to be answered.

We had thought the information (and ownership) landscape of the Internet was going to be “flat”. Instead, it’s proven to be extremely “spiky”. What we forgot in thinking it would turn out flat was that someone would have to gather and make useful the mountains of data we were about to create. The big Internet and telecom companies are these aggregators, able to make this data actionable because they possess the most powerful computers on the planet, which allow them not only to route and store this data but to mine it for value. Lanier has a great name for the biggest of these companies- he calls them Siren Servers.

One might think that which particular Siren Servers are at the head of the pack is a matter of which is the most innovative. Not really. Rather, the largest Siren Servers have become so rich they simply swallow any innovative company that comes along. FaceBook gobbled up Instagram because it offered a novel and increasingly popular way to share photos.

The second way a free service like Instagram is paid for- and this is one of the primary concerns of Lanier’s book- is that it essentially cannibalizes, to the point of destruction, the industry that used to provide the service, an industry that in the “old economy” also supported lots of middle-class jobs.

Lanier states the problem bluntly:

 Here’s a current example of the challenge we face. At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography is Instagram. When Instagram was sold to FaceBook for a billion dollars in 2012, it employed only thirteen people.  (p.2)

Calling Thomas Piketty….

As Bill Davidow argued recently in The Atlantic, the size of this virtual economy- where people share and get free stuff in exchange for their private data- is now so big that it is giving us a distorted picture of GDP. We can no longer be sure how fast our economy is growing. He writes:

There are no accurate numbers for the aggregate value of those services but a proxy for them would be the money advertisers spend to invade our privacy and capture our attention. Sales of digital ads are projected to be $114 billion in 2014, about twice what Americans spend on pets.

The forecasted GDP growth in 2014 is 2.8 percent and the annual historical growth rate of middle quintile incomes has averaged around 0.4 percent for the past 40 years. So if the government counted our virtual salaries based on the sale of our privacy and attention, it would have a big effect on the numbers.

Fans of Joseph Schumpeter might see all this churn as capitalism’s natural creative destruction, and be unfazed by the government’s inability to measure this “off the books” economy, because what the government cannot see it cannot tax.

The problem is that, unlike at other times in our history, technological change doesn’t seem to be creating new middle-class jobs as fast as it destroys old ones. Lanier was particularly sensitive to this development because he has always had his feet in two worlds- the world of digital technology and the world of music. Not the Katy Perry world of superstar music, but the world of people who made a living selling local albums, playing small gigs, and, even more importantly, providing the services that made this mid-level musical world possible. Lanier had seen how the digital technology he loved and helped create had essentially destroyed the middle-class world of musicians he also loved and had grown up in. His message for us all was that the Siren Servers are coming for you.

The continued advance of Moore’s Law, which, according to Charlie Stross, will play out for at least another decade or so, means not so much that we’ll achieve AGI, but that machines will become just smart enough to automate some of the functions we had previously thought only human beings were capable of. I’ll give an example of my own. For decades the GED test, which people pursue to obtain a high school equivalency diploma, has had an essay section. Thousands of people were needed to score these essays by hand, most of them paid to do so. With the new, computerized GED test this essay scoring has been completely automated, and human readers made superfluous.
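A caricature of how such automated scoring works is sketched below. Real scoring engines are statistical models trained on thousands of human-graded essays; the surface features and thresholds here are invented, but the underlying move- substituting measurable proxies for human judgment- is the same:

```python
# A caricature of automated essay scoring: crude, measurable proxies
# stand in for a human reader's judgment. The features and thresholds
# are invented for illustration, not taken from any real GED scorer.

def score_essay(essay: str) -> int:
    """Return a 1-4 score from simple surface features."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    length_ok    = len(words) >= 150      # a developed response
    varied_vocab = len({w.lower() for w in words}) / max(len(words), 1) > 0.5
    organized    = len(sentences) >= 8    # multiple sentences
    return 1 + sum([length_ok, varied_vocab, organized])

print(score_essay("Short answer."))  # 2: vocabulary passes, length and organization fail
```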

This brings me to the third way these new digital capabilities are paid for: they cannibalize work human beings have already done, to the profit of a company that presents and sells the result as a form of artificial intelligence. As Lanier writes of Google Translate:

It’s magic that you can upload a phrase in Spanish into the cloud services of a company like Google or Microsoft, and a workable, if imperfect, translation to English is returned. It’s as if there’s a polyglot artificial intelligence residing up there in that great cloud of server farms.

But that is not how cloud services work. Instead, a multitude of examples of translations made by real human translators are gathered over the Internet. These are correlated with the example you send for translation. It will almost always turn out that multiple previous translations by real human translators had to contend with similar passages, so a collage of those previous translations will yield a usable result.

A giant act of statistics is made virtually free because of Moore’s Law, but at core the act of translation is based on real work of people.

Alas, the human translators are anonymous and off the books. (19-20)
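As a toy illustration of the “giant act of statistics” Lanier describes- and emphatically not Google’s actual pipeline, which correlates across vast numbers of examples rather than looking up exact matches- here is the collage idea in miniature:

    # Toy example-based translation in the spirit of Lanier's description:
    # stitch a "collage" of fragments previously translated by humans.
    HUMAN_TRANSLATIONS = {          # gathered from real translators' work
        "buenos dias": "good morning",
        "como esta": "how are you",
        "mi amigo": "my friend",
    }

    def translate(phrase: str) -> str:
        out = []
        for fragment in phrase.lower().split(","):
            fragment = fragment.strip()
            # Look for a previously translated fragment; a real system would
            # correlate statistically over millions of human examples.
            match = HUMAN_TRANSLATIONS.get(fragment)
            out.append(match if match else f"[{fragment}?]")
        return ", ".join(out)

    print(translate("Buenos dias, mi amigo"))   # -> good morning, my friend

The “magic” output is a remix of human labor; the humans just never appear on the balance sheet.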

The question all of us should be asking ourselves is not “could a machine be me?”, with all of our complexity and skills, but “could a machine do my job?”- to which the answer, in nine cases out of ten, is almost certainly “yes!”

Okay, so that’s the problem, what is Lanier’s solution? It is not that we pull a Ned Ludd and break the machines, or even try to slow down Moore’s Law. Instead, what he wants us to do is start treating our personal data like property. If someone wants to know my buying habits, they have to pay a fee to me, the owner of that information. If some company uses my behavior to refine their algorithm, I need to be paid for this service, even if I was unaware I had helped in such a way. Lastly, anything I create and put on the Internet is my property. People are free to use it as they choose, but they need to pay me for it. In Lanier’s vision each of us would receive a constant stream of micropayments from the Siren Servers using our data and our creations.
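Lanier doesn’t spell out an implementation, but the accounting his vision requires is easy to sketch: every commercial use of a person’s data or creations generates a credit back to its originator. A minimal sketch- the class, names and fee amounts here are all assumptions of mine, not Lanier’s:

    # Minimal sketch of the two-way provenance Lanier's proposal would need.
    # Names, fees and structure are illustrative assumptions, not his design.
    from collections import defaultdict

    class MicropaymentLedger:
        def __init__(self):
            self.balances = defaultdict(float)   # person -> accrued payments

        def record_use(self, server: str, person: str,
                       item: str, fee: float) -> None:
            """A Siren Server used someone's data or creation; credit them."""
            self.balances[person] += fee
            print(f"{server} used {person}'s {item}: credit ${fee:.4f}")

    ledger = MicropaymentLedger()
    ledger.record_use("ExampleServer", "alice", "buying habits", 0.0031)
    ledger.record_use("ExampleServer", "alice", "photo used to train algorithm", 0.0008)
    print(f"alice's running balance: ${ledger.balances['alice']:.4f}")

The hard part, as we’ll see, isn’t the bookkeeping but getting every Siren Server on earth to submit to it.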

Such a model is very interesting to me, especially in light of other fights over data ownership- namely, the rights of indigenous peoples against biopiracy, something I was turned on to by Paolo Bacigalupi’s biopunk novel The Windup Girl, and what promises to be an escalating fight between pharmaceutical/biotech firms and individuals over the use of what are becoming mountains of genetic data. Nevertheless, I have my doubts about Lanier’s alternative system, and will lay them out in what follows.

For one, such a system seems likely to exacerbate rather than relieve the problem of rising inequality. Assuming most of the data people receive micropayments for will be banal and commercial in nature, people who are already big spenders are likely to get a much larger cut of the micropayments pie. If I could afford such things, it would no doubt be worth a lot for some extra piece of information to tip the scales between my buying a Lexus or a Beemer; not so much if it’s a question of Tide vs. Wisk.

This issue could be solved if Lanier had adopted the model of a shared public pool of funds into which micropayments would flow, rather than routing them to the actual individuals involved, but he couldn’t do this out of commitment to the idea that personal data is a form of property. Don’t let his dreadlocks fool you: Lanier is at bottom a conservative thinker. Such a pooled fee might also balance out the glaring problem that Siren Servers effectively pay zero taxes.

But by far the biggest hole in Lanier’s micropayment system is that it ignores the international dimension of the Internet. Silicon Valley companies may be doubling down on their model, as can be seen in Amazon’s recent foray into the smartphone market, which attempts to route everything through itself, but the model has crashed globally. Three events signal the crash: Google was essentially booted out of China, the Snowden revelations threw a pall of suspicion over the model in an already privacy-sensitive Europe, and the EU itself handed the model a major loss with the “right to be forgotten” case in Spain.

Lanier’s system, which accepts mass surveillance as a fact, probably wouldn’t fly in a privacy-conscious Europe, and how in the world would we force Chinese and other digital pirates to provide payments of any scale? China and other authoritarian countries have their own plans for their Siren Servers- namely, their use as tools of the state.

The fact of the matter is there is probably no truly global solution to continued automation and algorithmization, or to mass surveillance. Yet the much-feared “splinter-net”, the shattering of the global Internet, may be better for freedom than many believe. This is because the Internet, and the Siren Servers that run it, once freed from their spectral existence in the global ether, become the responsibility of real, territorially bound people to govern. Each country will ultimately have to decide for itself both how the Internet is governed and how to respond to the coming wave of automation. There’s bound to be diversity because countries are diverse; some might even leap over Lanier’s conservatism and invent radically new and more equitable ways of running an economy- an outcome many of the original digitopians who set this train a-rollin’ might actually be proud of.

 

Malthusian Fiction and Fact

The Windup Girl by Raphael Lacoste

Prophecies of doom, especially when they’re particularly frightening, have a way of sticking with us in a way more rosy scenarios never seem to do. We seem to be wired this way by evolution, and for good reason. It’s the lions that almost ate you that you need to remember, not the ones you were lucky enough not to see. Our negativity bias is something we need to be aware of, and where it seems called for, lean against, but that doesn’t mean we should dismiss and ignore every Chicken Little as a false prophet, even when his predictions turn out to be wrong not just once but multiple times. For we can never completely discount the prospect that Chicken Little was right after all, and it just took the sky a long, long time to fall.

The Book of Revelation is a doom prophecy like that, but it is not one that any secular or non-Christian person in their right mind would subscribe to. A better example is the prediction of Thomas Malthus, who not only gave us a version of Armageddon compatible with natural science, but did so in the form of what was perhaps the most ill-timed book in human history.

Malthus, in his An Essay on the Principle of Population published in 1798, was actually responding to one of history’s most wild-eyed optimists. While hiding from French Revolutionary Jacobins who wanted to chop off his head, Nicolas de Condorcet had the balls to write his Sketch for a Historical Picture of the Progress of the Human Spirit, which argued not only that the direction of history was one of continuous progress without limit, but that such progress might one day grant mankind biological immortality. Condorcet himself wouldn’t make it, dying just after being captured and before he could be dragged off to the guillotine- whether from self-ingested poison or just exhaustion at being hunted we do not know.

The argument for the perpetual progress of mankind was too much for the pessimist Malthus to bear. Though no one knew how long we had been around, it was obvious to anyone who knew their history that our numbers didn’t just continually increase, let alone that things got better in some linear fashion. Malthus thought he had fingered the culprit – famine. Periods of starvation, and therefore decreases in human numbers, were brought about whenever human beings exceeded their ability to feed themselves, which had a tendency to happen often. As Malthus observed, the production of food grew in a linear fashion while human population increased geometrically, that is, exponentially. If you’re looking for the origins of today’s debate between environmentalists and proponents of endless growth, from whatever side of the political ledger- here it is, with Condorcet and Malthus.
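In modern terms Malthus’s claim is simply that an exponential always overtakes a line. A worked toy example- the growth rate and the fixed yearly increment of food are illustrative numbers of mine, not Malthus’s:

    # Malthus's arithmetic: population compounding vs. food growing linearly.
    # Starting values and rates are illustrative, not historical estimates.
    population = 1.0   # in units of "mouths the initial harvest can feed"
    food = 1.0         # harvest, in the same units
    year = 0
    while population <= food:
        year += 1
        population *= 1.03   # ~3% growth, doubling roughly every 24 years
        food += 0.05         # a fixed increment of new acreage/yield per year

    print(f"Population outruns food in year {year}")

Whatever the starting surplus, compounding wins in the end; the only question is when.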

Malthus could not have been worse in terms of timing. The settlement of the interiors of North America and Eurasia, along with the mechanization of agriculture, were to add untold tons of food to the world. The 19th century was also when we discovered what proved to be the key to unleashing nature’s bounty- the nitrogen cycle- which we were able to start taking advantage of in large part because in Peru we had stumbled across a hell of a lot of seabird poop.

Supplies of guano, along with caliche deposits from the Atacama desert of Chile, would provide the world with soil-supercharging nitrogen up until the First World War, when the always crafty Germans, looking for ways to feed themselves and at the same time make munitions to use against the allies, developed the Haber-Bosch process. Nitrogen-rich fertilizer was henceforth to be made using fossil fuels. All of that really took off just in time to answer another Malthusian prophecy: Paul Ehrlich’s The Population Bomb of 1968, which looked at the quite literally exploding human population and saw mass starvation. The so-called “Green Revolution” proved those predictions wrong through a mixture of nitrogen fertilizers, plant breeding and pesticides. We had dodged the bullet yet again.

Malthusians today don’t talk much about mass starvation, which seems to make sense in a world where even the developing world has a growing problem with obesity. But the underlying idea of Malthus- that the biological systems of the earth have limits, and that humans have a tendency to grow towards exceeding them- still has legs, and, I would argue, we should take this view very seriously indeed.

We can be fairly certain there are limits out there, though we are in the dark as to what exactly those limits are. Limits to what? Limits to what we can do to, or take from, the earth before we make it unlivable for the majority of creatures who evolved for its current environment- including ourselves- unless we want to live here under shells, as we would on a dead planet like Mars, or as in the 70s science-fiction classic Logan’s Run, itself inspired by Malthusian fears about runaway population growth.

The reason why any particular Malthus is likely wrong is that the future is in essence unknowable. The retort to deterministic pessimism is often deterministic optimism: something has always saved us in the past- usually human inventiveness, i.e. technology- and will therefore save us in the future. I don’t believe in either sort of determinism. We are not fated to destroy ourselves, but the silence of the universe shouldn’t fill us with confidence that we are fated to save ourselves either. Our future is in our hands.

What might the world look like if today’s Malthusian fears prove true, if we don’t stumble upon, or find the will and means to create, solutions to the problems of what people are now calling the Anthropocene- the era when humanity itself has become the predominant biological and geochemical force on earth? The problems are not hard to identify, and they are very real. We are heating up the earth with the carbon output of our technological civilization, and despite all the progress in solar power generation, demon coal is still king. Through our very success as a species we appear to be unleashing a sixth great extinction, akin to the cataclysmic events that almost destroyed life on earth in the deep past. Life and the geography of the planet will bear our scars.

Paolo Bacigalupi is a novelist of the Anthropocene, bringing to life dystopian visions of what the coming centuries might look like if we fail to solve our current Malthusian challenges. The 23rd century Bacigalupi presents in his novel The Windup Girl is not one any of us would want to live in, having inherited the worst of all worlds. Global warming has caused a catastrophic level of sea rise, yet the age of cheap and abundant energy is also over. It is an era with a strange mixture of the pre-industrial and the post-industrial: dirigibles rather than planes ply the air, and sailing ships have returned. Power is once again a matter of simple machines and animals, with huge biologically engineered elephants turning vast springs to store energy that remains, for all that, painfully scarce. Those living in the 23rd century look back on our age of abundance, which they call the Expansion, as a golden age.

What has most changed in the 23rd century is our relationship to the living world, and especially to food. Bacigalupi has said that the role of science-fiction authors is to take some set of elements of the present and extrapolate them to see where they lead. Or, as William Gibson said, “The future is already here- it’s just not evenly distributed.” What Bacigalupi was especially interested in extrapolating was the way biotechnology and agribusiness seem to be trying to take ownership of food: patenting genes, preventing farmers from reusing seed, engaging in bioprospecting. The latter becomes biopiracy when companies patent genetic samples and leverage indigenous knowledge without compensating the peoples from whom this biological knowledge and material has been gleaned.

In the neo-medieval world of The Windup Girl, agribusinesses known as “calorie companies”, with names like AgriGen, PurCal and RedStar, don’t just steal indigenous knowledge and foreign plants and hide this theft under the cloak of patents; they hire mercenaries to topple governments, unleash famines, and engage in a sophisticated form of biological warfare. It is as if today’s patent-obsessed and patent-abusing pharmaceutical companies started to deliberately create and release diseases in order to reap the profits when they swept in and offered cures. A very frightening possible future indeed.

Genetic engineering, whether deliberately or by accident, has brought into being new creatures that outcompete the existing forms of life. There are cats called cheshires that roam everywhere and possess chameleon-like camouflaging capabilities. There are all sorts of mostly deadly diseases that make the transgenic jump from plants to humans.

Congruent with current Malthusianism, The Windup Girl is really not focused on population as our primary problem. Indeed, the humanoid windups, a people whose condition and fate take center stage in the novel, were created not because of population growth, but in response to population decline. In a world without abundant energy robots are a non-starter. Facing a collapsing population, the Japanese have bred a race of infertile servants, called windups for their mechanical, stuttering motions. In just one of the many examples where Bacigalupi turns underlying cultural assumptions into a futuristic reality, the windups are “more Japanese than the Japanese”, combining elements of robots and geishas.

The novel is the story of one of these windups, Emiko, abandoned by her owner in Thailand, coming to find her soul in a world that considers her soulless. She is forced to live as a sexual slave, is barbarically brutalized by Somdet Chaopraya, the regent of the child queen, and murders him as a result. Emiko has become involved with a Westerner, Anderson Lake, who works for AgriGen and is searching out the Thais’ most valuable resources: their seed bank (the Norwegian seed bank at Svalbard having been destroyed in an act of industrial terrorism by the calorie companies) and a mysterious scientist called Gibbons, who keeps the Thais one step ahead of whatever genetic plague nature or the calorie companies can throw at them.

Anderson and the calorie companies are preparing a sort of coup to unseat the powerful Environment Ministry, once led by a murdered figure named Jaidee, in order to open up trade and, especially, access to the genetic wealth of Thailand. The coup is eventually undone, Bangkok is flooded, and Emiko, living in hiding in the deserted city, is offered the chance to create more of her own kind by Gibbons, who relishes playing God.

What I found most amazing about The Windup Girl wasn’t just that Bacigalupi made such a strange world utterly believable, but that he played out the story in a cosmology that is Buddhist, which gives a certain depth to how the characters understand their own suffering.

The prayers of the monks are sometimes all that can be done in otherwise hopeless situations. The ancient temples to faith have survived better than the temples to money- the skyscrapers built during the age of the Expansion. Kamma (karma) gives the Asian characters in the novel a shared explanatory framework, one not possessed by foreigners (the farang). It is the moral conscience that haunts their decisions and betrayals. They have souls, and what they do here, or fail to do, will decide the fate of their souls in the next life.

The possession of a soul is a boundary against the windups. Genetic creations are considered somehow soulless, not subject to the laws of kamma and therefore outside of the moral universe- inherently evil. This belief is driven home with particular power in a scene in which the character Kanya comes across a butterfly. It is beautiful and bejeweled, a fragile thing that has traveled great distances and survived great hardships. It lands on her finger and she studies it. She lets it sit there, and then she shepherds it into the palm of her hand. She crushes it into dust. Remorseless. “Windups have no souls. But they are beautiful.” (241)

It isn’t only a manufactured pollinator like the butterfly that is considered soulless and treated accordingly. Engineered humans like Emiko are treated in such a way as well. They exist either as commodities to be used or as something demonic to be destroyed. But we know different. Emiko is no sexual toy; she is a person, genetically engineered or not, which means, among much else, that she can be scarred by abuse.

 In the privacy of the open air and the setting sun she bathes. It is a ritual process, a careful cleansing. The bucket of water, a fingering of soap. She squats beside the bucket and ladles the warm water over herself. It is a precise thing, a scripted act as deliberate as Jo No Mai, each move choreographed, a worship of scarcity.

Animals bathe to remove dirt. Only humans- only something inhabiting a moral universe and possessing a moral memory, both of which might be the best materialist way of understanding the soul- bathe in the hope of removing emotional scars. If others could see into Emiko, or she into the hearts of other human beings, then both would see that she has a soul to the extent that human beings might be said to have one. The American farang Anderson thinks:

 She is an animal. Servile as a dog.

He wonders if she were a real person if he would feel more incensed at the abuse she suffers. It’s an odd thing, being with a manufactured creature, built and trained to serve.

She admits herself that her soul wars with herself. That she does not rightly know which parts of her are hers alone and which have been inbuilt genetically. Does her eagerness to serve come from some portion of canine DNA that makes her assume that natural people outrank her for pack loyalty? Or is it simply the training she has spoken of? (184)

As we ask of ourselves- nature or nurture? Both are part of our kamma, as is what we do or fail to do with them.

The scientist Gibbons, who saves the Thais from the genetic assaults of the calorie companies and holds the keys to their invaluable seed bank, is not a hero but himself a Promethean monster, surrounding himself with genetically engineered beings he treats as sexual toys, his body rotting from disease. To Kanya’s assertion that his disease is his kamma, Gibbons shouts:

Karma? Did you say karma? And what sort of karma is it that ties your entire country to me, my rotting body.  (245)

Gibbons comes from a quite different religious tradition- a Christian tradition, or rather the Promethean scientific religion which emerged from it, in which we play the role of god.

We are nature. Our every tinkering is nature, our every biological striving. We are what we are, and the world is ours. We are its gods. Your only difficulty is your unwillingness to unleash your potential fully upon it.

I want to shake you sometimes. If you would just let me, I could be your god and shape you to the Eden that beckons us.

To which Kanya replies: “I’m Buddhist.”

Gibbons:

 And we all know windups have no soul. No rebirth for them. They will have to find their own gods to protect them. Their own gods to pray for their dead. Perhaps I will be that one, and your windup children will pray to me for salvation. (243)

This indeed is the world that opens up at the end of the novel, with Gibbons becoming Emiko’s “God”, promising to make her fertile and her race plentiful. One gets the sense that the rise of the windups will not bode well for humanity, our far-future kamma eventually tracing itself back to the inhuman way we treated Emiko.

Bacigalupi has written an amazing book, even if it has its limitations. The Windup Girl manages to avoid being didactic but is nonetheless polemical- it has political points to score. Its heroes are either Malthusians or those carried along by events, with all the bad guys wearing Promethean attire. In this sense it doesn’t take full advantage of the capacity of fiction for political and philosophical insight, which can only be realized when one humanizes all sides, not just one’s own.

Still, it is an incredible novel, one of the best to come out of our new century. Unlike with most works of fiction, however, we can only hope that by the time we reach the 23rd century it represents, the novel will be forgotten. For otherwise the implication would be that Malthusian fiction has, after a long history of false alarms, become Malthusian fact.

 

This City is Our Future

Erich Kettelhut Metropolis Sketch

If you wish to understand the future you need to understand the city, for the human future is an overwhelmingly urban future. The city may have always been synonymous with civilization, but the rise of urban humanity has occurred almost entirely since the onset of the industrial revolution. In 1800 a mere 3 percent of humanity lived in cities of over one million people. By 2050, 75 percent of humanity will be urbanized. India alone might have six cities with populations of over 10 million.

The trend towards megacities is one into which humanity is accelerating as we speak, in a process we do not fully understand, let alone control. As the counterinsurgency expert David Kilcullen writes in his Out of the Mountains:

 To put it another way, these data show that the world’s cities are about to be swamped by a human tide that will force them to absorb- in just one generation- the same population growth that occurred in all of human history up to 1960. And virtually all of this growth will happen in the world’s poorest areas- a recipe for conflict, for crises in health, education and governance, and for food, water and energy scarcity. (29)

Kilcullen sees four trends that he thinks are reshaping human geography, all of which can be traced to processes that began with the industrial revolution: urbanization and the growth of megacities, population growth, littoralization and connectedness.

In terms of population growth: the world’s population has exploded, going from 750 million in 1750 to a projected 9.1-9.3 billion by 2050. The rate of population growth is thankfully slowing, but barring some incredible catastrophe, the earth seems destined to gain the equivalent of another China and India within the space of a generation. Almost all of this growth will occur in poor and underdeveloped countries already stumbling under the pressures of the populations they have.

One aspect of population growth Kilcullen doesn’t really discuss is the aging of the human population. This is normally understood in terms of the failure of advanced societies in Japan, South Korea and Europe to reach replacement-level fertility, so that the number of elderly grows faster than the youth needed to support them- a phenomenon that is also happening in China as a consequence of its draconian one-child policy. Yet the developing world, simply because of sheer numbers and increased longevity, will face its own elderly crisis as tens of millions move into age-related conditions of dependency. As I have said in the past, gaining a “longevity dividend” is not a project for spoiled Westerners alone, but is primarily a development issue.

Another trend Kilcullen explores is littoralization, the concentration of human populations near the sea. In a fact surprising to a landlubber such as myself, Kilcullen points out that in 2012, 80 percent of human beings lived within 60 miles of the ocean (30), a share that is increasing as the interiors of the continents are hollowed out of human inhabitants.

Kilcullen doesn’t discuss climate change much, but the kinds of population dislocations that might be caused by moderate, not to mention severe, sea-level rise would be catastrophic should certain climate scenarios play out. This goes well beyond islands or wealthy enclaves such as Miami, New Orleans or Manhattan. Places such as these, and countries such as Denmark, may have the money to engineer defenses against the rising sea, but what of a poor country such as Bangladesh? There, almost 200 million people might find themselves in flight from the relentless forward movement of the oceans. To where will they flee?

Nor is it merely a matter of displacing the tens of millions of people, or more, living in low-lying coastal areas. Much of the world’s staple crop of rice is produced in deltas that would be destroyed by saltwater inundation.

The last and most optimistic of Kilcullen’s trends is growing connectedness. He quotes the journalist John Pollack:

Cell-phone penetration in the developing world reached 79 percent in 2011. Cisco estimates that by 2015 more people in sub-Saharan Africa, South and Southeast Asia and the Middle East will have Internet access than electricity at home.

What makes this less optimistic is the fact as Pollack continues:

Across much of the world, this new information power sits uncomfortably upon layers of corrupt and inefficient government.  (231)

One might have thought that the communications revolution had made geography irrelevant, or “flat” in Thomas Friedman’s famous term. Instead, the world has become “spiky”, with the concentration of people, capital, and innovation in cities spread across the globe and interconnected with one another. The need for concentration as a necessary condition for communication is felt by the very rich and the very poor alike, both of whom collect together in cities. Companies running sophisticated trading algorithms have reshaped the very landscape to get closer to the heart of the Internet, gaining a speed advantage over competitors too small to be perceived by human beings.

Likewise, the very poor flood into the world’s cities because they can gain access to networks of markets and capital, and, more recently, because only there do they have access to the electricity that allows them to connect with one another and the larger world- especially with their ethnic diaspora or larger civilizational community- through mobile devices and satellite TV. And there are more of these poor struggling to survive in our 21st-century world than we thought: 400 million more of them, according to a recent report.

For the urban poor and disenfranchised of the cities, what the new connectivity can translate into is what Audrey Kurth Cronin has called the new levée en masse. The first levée en masse was that of the French Revolution, where the population was mobilized for both military and revolutionary action by new short-form publications written by revolutionary writers such as Robespierre, Saint-Just or the bloodthirsty Marat. In the new levée en masse, crowds capable of overthrowing governments- witness Tunisia, Egypt and Ukraine- can be mobilized by bloggers, amateur videographers, or just a kind of swarm intelligence emerging on the basis of some failure of the ruling classes.

Even quite effective armies, such as the ISIS forces now sweeping in from Syria and taking over swaths of Iraq, can be pulled seemingly out of thin air. The mobilizing capacity that was once the possession of the state or of long-standing revolutionary groups has, under modern conditions of connectedness, become democratized, even if the money behind such armies can ultimately be traced to states.

The movement of the great mass of human beings into cities portends the movement of war into cities, and this is the underlying subject of Kilcullen’s book: the changing face of war in an urban world. Given that the vast majority of countries in which urbanization is taking place will be incapable of fielding advanced armies, the kinds of conflicts likely to be encountered there, Kilcullen thinks, will be guerrilla wars, whether pitting one segment of society against another or drawing in Western armies.

The headless, swarm tactics of guerrilla war- which, as the author Lawrence H. Keeley reminded us, is in some sense a more evolved, “natural” and ultimately more effective form of warfare than the clashing professional armies of advanced states, its roots stretching back into human prehistory and the ancient practices of both hunting and tribal warfare- are given a potent boost by local communication technologies such as traditional radio and mesh networks. A crowd or small military group can be tied together by an electronic web that turns it into something more like an immune system than a modern, centrally directed army.

Attempting to avoid the high casualties so often experienced when advanced armies try to fight guerrilla wars, those capable of doing so are likely to turn to increasingly sophisticated remote and robotic weapons to fight these conflicts for them. Kilcullen is troubled by this development, not least because it seems to relocate the risk of war onto the civilian population of whatever country is wielding such weapons- the communities in which remote warriors live, or where the weapons themselves are designed and built, being arguably legitimate targets of a remote enemy a community might not even be aware it is fighting. Perhaps the real key is to prevent the conflicts that might end in our military engagement in the first place.

Cities likely to experience epidemic crime, civil war or revolutionary upheaval are also those that have, in Kilcullen’s terms, gone “feral”, meaning the order usually imposed by the urban landscape no longer operates due to failures of governance. Into such a vacuum criminal networks often emerge, exchanging the imposition of some semblance of order for control of illicit trade. All of these things- civil war, revolution and international crime- represent pull factors for Western military engagement, whether in the name of international stability, humanitarian concerns or more nefarious ends, most of which center on resource extraction. The question is how one can prevent cities from going feral in the first place, avoiding the deep discontent and social breakdown that leads to civil war, revolution or the rise of criminal cartels, all of which might end with the military intervention of advanced countries.

The solution lies in thinking of the city as a type of organism with “inflows” such as water, food, resources, manufactured products and capital, and “outflows”, especially waste. There is also the issue of order as a kind of homeostasis. A city such as Beijing or Shanghai with their polluted skies is a sick organism, as is Dhaka in Bangladesh with its polluted waters, or a city with a sky-high homicide rate such as Guatemala City or Sao Paulo. The beautiful thing about the new technologically driven capacity for mass mobilization is that it forces governments to take notice of the people’s problems or figuratively (and sometimes literally) lose their heads. The problem is that once things have gone badly enough to inspire mass riots, the condition is likely systemic and extremely difficult to solve, and the kinds of protests the Internet and mobile devices have inspired have, at least so far, been effective at toppling governments but unable either to found or to serve as governments themselves.

At least one answer to the problems of urban geography that could potentially allow cities to avoid instability is “Big Data”, or so-called “smart cities”, where a city is minutely monitored in real time for problems which then initiate quick responses by city authorities. There are several problems here, the first being the costs of such systems; but that might be the least insurmountable one, the biggest being the sheer data load.

As Kilcullen puts it in the context of military intelligence- though it could just as well be stated as the problem of city administrators, international NGOs and aid agencies:

The capacity to intercept, tag, track and locate specific cell phone and Internet users from a drone already exists, but distinguishing signal from noise in a densely connected, heavily trafficked piece of digital space is a daunting challenge. (238)

Kilcullen’s answer to the incomplete picture provided by the view from above, from big data, is to combine this data with the partial but deep view of the city held by its inhabitants on the ground. In its essence a city is the stories and connections of those that live in it. Think of the deep, if necessarily narrow, perspective of a major city merchant or even a well-connected drug dealer. Add this to the stories of those working in social and medical services, police officers, big employers, socialites, etc., and one starts to get an idea of the biography of a city. Add to that the big picture of flows and connections, and one starts to understand the city for what it is: a complex type of non-biological organism that serves as a stage for human stories.

Kilcullen has multiple examples of where big-picture knowledge from experts has successfully aligned with grassroots organization to save societies on the brink of destruction, an alignment he calls “co-design”. He cites the Women of Liberia Mass Action for Peace, where grassroots organizer Leymah Gbowee leveraged the expertise of Western NGOs to stop the civil war in Liberia. CeaseFire Chicago uses a big-picture model of crime literally based on epidemiology and combines it with community-level interventions to stop violent crime before it occurs.
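CeaseFire’s actual model is its own, but the epidemiological intuition- violence transmits through contact like an infection, so interrupting even a fraction of transmissions can push it below the threshold at which it sustains itself- can be sketched as a toy contagion simulation with assumed parameters:

    # Toy "violence as contagion" simulation: not CeaseFire's actual model,
    # just the epidemiological intuition behind it, with assumed parameters.
    import random

    random.seed(1)
    N = 1000                 # people in the neighborhood
    beta = 0.3               # chance an exposure transmits retaliation
    contacts_per_step = 4    # contacts per "infected" person per step

    def run(interruption: float) -> int:
        """Total people affected, given a fraction of transmissions interrupted."""
        infected, immune = {0}, set()
        while infected:
            nxt = set()
            for _ in infected:
                for _ in range(contacts_per_step):
                    target = random.randrange(N)
                    if target in immune or target in infected:
                        continue
                    if random.random() < beta * (1 - interruption):
                        nxt.add(target)
            immune |= infected
            infected = nxt
        return len(immune)

    for interruption in (0.0, 0.5, 0.8):
        print(f"interruption {interruption:.0%}: {run(interruption)} affected")

The point of the toy model is the threshold effect: interveners don’t need to stop every act of retaliation, only enough of them to drive the “reproduction number” of violence below one.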

Another group Kilcullen discusses is Crisis Mappers, which offers citizens everywhere in the world access to the big picture- what the organization describes as “the largest and most active international community of experts, practitioners, policy makers, technologists, researchers, journalists, scholars, hackers and skilled volunteers engaged at the intersection between humanitarian crises, technology, crowd-sourcing, and crisis mapping.” (253)

On almost all of this I find Kilcullen to be spot on. The problem is that he fails to tackle the truly systemic issue, which is inequality. What is necessary to save any city, as Kilcullen acknowledges, is a sense of shared community- what I would call a sense of shared past and future. Insofar as the very wealthy in any society or city are connected to, and largely identify with, their wealthy fellow elites abroad rather than their poor neighbors, a city and a society are doomed, for only the wealthy have the wherewithal to support the kinds of social investments that make a city livable for its middle classes, let alone its poor.

The very globalization that has created the opportunity for the rich in once poor countries to rise, and which connects the global poor to their fellow sufferers both in the same country and, more amazingly, across the world, has cleft the connection between poor and rich within the same society. It is these global connections between classes which give the current situation a revolutionary aspect- one that, as Marx long ago predicted, is global in scope.

The danger is that the wealthy classes turn the new high-tech tools for monitoring citizens into a way to avoid systemic change, either by using their ability to intimately monitor so-called “revolutionaries” to short-circuit legitimate protest, or by addressing the public’s concerns in only the most superficial of ways.

The long-term solution to the new era of urban mankind is to give the people who live in cities the tools- including increasingly sophisticated tools of data gathering and simulation- to control their own fates; to find ways of connecting revolutionary movements to progressive forces in societies where cities are not failing, and to their tools for dealing with all the social and environmental problems cities face; and above all, to convince the wealthy to support such efforts, in their own localities as well as on a global scale. For the attempt at total control of a complex entity like a city through the tools of the security state, like the paper-flat Utopian cities of the state worshipers of days past, is to attempt to build a castle in thin air.