How to avoid drowning in the Library of Babel

Library of Babel

Between us and the future stands an almost impregnable wall. We cannot see over it, or under it, or through it, no matter how hard we try. Sometimes the best way to see the future is by using the same tools we use to understand the present, which is also, at least partly, hidden from direct view by the dilemma inherent in our use of language. To understand anything we need language and symbols, but neither is the thing we are actually trying to grasp. Even the brain itself, through the senses, is a kind of funnel, giving us a narrow stream of information that we conflate with the entire world.

We often use science to get us beyond the box of our heads, and even, to a limited extent, to see into the future. Less discussed is the fact that we also use good metaphors, fables, and stories to attack the wall of the unknown obliquely- to get the rough outlines of something like the essence of reality while still not being able to see it as it truly is.

It shouldn’t surprise us that some of the writers most skilled at this sideways form of wall scaling ended up becoming blind. To be blind, especially to have become blind after a lifetime of being able to see, is to realize that there is a form to the world which is shrouded in darkness, one that can be known only in bare outlines, and only by indirect methods. For the sighted, touch is a way of knowing what something feels like; for the blind it becomes an oblique and sometimes more revealing way to learn what something actually is.

The blind John Milton was a genius at using symbolism and metaphor to uncover a deeper and hidden reality, and he probably grasped more deeply than anyone before or after him that our age would be defined not by the chase after reality, but by our symbolic representation of its “value”, especially in the form of our ever more virtual talisman of money. Another, and much more recent, writer skilled at such an approach, whose work was even stranger than Milton’s in its imagery, was Jorge Luis Borges, whose writing defies easy categorization.

What Milton did for our confusion of the sign with the signified in his Paradise Lost, Borges would do for the culmination of this symbolization in the “information age” with perhaps his most famous short story, “The Library of Babel”, a story that begins with the strange line:

The universe (which others call the Library) is composed of an indefinite, perhaps infinite number of hexagonal galleries. (112)

The inhabitants of Borges’ imagined Library (and one needs to say inhabitants rather than occupants, for the Library is not just a structure but the entire world, the universe in which all human beings exist) “live” in smallish compartments attached to vestibules on hexagonal galleries lined with shelves of books. Mirrors are placed so that when one looks out of one’s individual hexagon one sees a repeating pattern of galleries- the origin of arguments over whether the Library is infinite or just appears to be so.

All this would be little but an Escher drawing in the form of words were it not for other aspects that reflect Borges’ pure genius. The Library contains not just all books, but all possible books, and therein lies the dilemma of its inhabitants- for if all possible books exist, it is impossible to find any particular book.

Some book explaining the origin of the Library and its system of organization logically must exist, but that book is essentially undiscoverable, surrounded by a perhaps infinite number of other books, the vast majority of which are just nonsensical combinations of letters. How could the inhabitants solve such a puzzle? A sect called the Purifiers thinks it has the answer- to destroy all the nonsensical books- but the futility of this project is soon recognized. Destruction barely puts a dent in the sheer number of nonsensical books in the Library.

The situation within the Library began with the hope that all knowledge, or at least the key to all knowledge, would someday be discovered, but it has led to despair, as the narrator reflects:

I am perhaps misled by old age and fear but I suspect that the human species- the only species- teeters at the verge of extinction, yet that the Library- enlightened, solitary, infinite, perfectly unmoving, armed with precious volumes, pointless, incorruptible and secret- will endure. (118)

“The Library of Babel”, published in 1941, before the “information age” was even an idea, seems to capture one of the paradoxes of our era. The period in which we live is indeed experiencing an exponential rise in knowledge, in useful information, but this is not the whole of the story, and it does not fully reveal what is not just a wonder but a predicament.

The best exploration of this wonder and predicament is James Gleick’s brilliant recent history of the “information age”, The Information: A History, A Theory, A Flood. In that book Gleick gives us a history of information from primitive times to the present, but the idea that everything- from our biology to our society to our physics- can be understood as a form of information is a very recent one. Gleick himself uses Borges’ strange tale of the Library of Babel as a metaphor through which we can better grasp our situation.

The imaginative leap into the information age dates from seven years after Borges’ story was published, in 1948, when Claude Shannon, a researcher at Bell Labs, published his essay “A Mathematical Theory of Communication”. Shannon was interested in how one could communicate messages over channels polluted with noise, but his seemingly narrow aim ended up being revolutionary for fields far beyond his purview. The concept of information was about to take over biology, in the form of DNA, and would conquer communications in the form of the Internet and the exponentially increasing power of computation to compress and represent things as bits.

The physicist John Wheeler would come to see the physical universe itself and its laws as a form of digital information- “It from bit”. As Wheeler put it:

Otherwise put, every “it” — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. “It from bit” symbolizes the idea that every item of the physical world has at bottom — a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

Gleick finds this understanding of reality as information to be both true in a deep sense and in many ways troubling. We are, it seems, constantly told that we are producing more information on a yearly basis than in all of human history combined, though what this actually means is another thing. For there is a wide and seemingly unbridgeable gap between information and what would pass for knowledge or meaning. Shannon himself acknowledged this gap, and deliberately ignored it, for the meaningfulness of the information transmitted was, he thought, “irrelevant to the engineering problem” of transmitting it.

The Internet operates in this way, which is why a digital maven like Nicholas Negroponte is critical of the tech-culture’s shibboleth of “net neutrality”. To humorize his example, a digital copy of a regular-sized novel like Fahrenheit 451 is the digital equivalent of one second of video of “What Does the Fox Say?”, which, however much it cracks me up, just doesn’t seem right. The information content of something says almost nothing about its meaningfulness, and this is a problem for anyone looking at claims that we are producing incredible amounts of “knowledge” compared to any age in the past.
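The comparison is easy to sanity-check with back-of-the-envelope numbers- a rough sketch in Python, with my own assumed figures (word count, bytes per word, video bitrate), not Negroponte’s:

```python
# A novel of ~46,000 words at ~6 bytes per word, versus one second
# of video encoded at a typical ~5 megabits per second.
novel_bits = 46_000 * 6 * 8        # ~2.2 million bits of text
video_bits = 5_000_000             # ~5 million bits of video

print(f"novel: {novel_bits / 1e6:.1f} Mbit")
print(f"one second of video: {video_bits / 1e6:.1f} Mbit")
```

Measured purely in bits, an entire novel and a second or two of pop video really are the same order of magnitude, which is exactly the absurdity Negroponte is pointing at when all bits are treated as equal.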

The problem is perhaps even worse than it appears at first blush. Take these two sets of numbers:

9   2   9   5   5   3   4   7   7   10

And:

9   7   4   3   8   5   4   2   6   3

The first set is a truly random sequence of numbers from 1 through 10, created using a random number generator. The second set has a pattern. Can you see it?

In the second set every other number is a prime number between 2 and 7. Knowing this rule vastly shrinks the set of possible sequences the numbers can form. But here’s the important takeaway- the first set of numbers contains vastly more information than the second set. Randomness increases the information content of a message. (If anyone out there with a better grasp of information theory thinks this example is somehow misleading or incorrect, please advise.)
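Here is a minimal sketch of that intuition in Python, assuming each unconstrained number is drawn uniformly from 1 through 10 (so specifying it costs log2(10) bits), while each constrained position can only be one of the four primes 2, 3, 5, 7:

```python
import math

# Set 1: ten numbers, each free to be anything from 1 to 10.
bits_random = 10 * math.log2(10)                        # ~33.2 bits

# Set 2: five free positions (1-10), five positions constrained
# to one of the four primes {2, 3, 5, 7}.
bits_patterned = 5 * math.log2(10) + 5 * math.log2(4)   # ~26.6 bits

print(f"random set:    {bits_random:.1f} bits")
print(f"patterned set: {bits_patterned:.1f} bits")
```

The rule shaves roughly 6.6 bits off what it takes to specify the patterned sequence- constraint, in Shannon’s terms, is exactly what reduces information content.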

The new tools of the age have meant that a flood of bits now inundates us, the majority of them without deep meaning or value, or even any meaning at all. We are confronted with a burst dam of information pouring over us. Gleick captures our dilemma with a quote from the philosopher of “enlightened doomsaying” Jean-Pierre Dupuy:

I take “hell” in its theological sense, i.e. a place which is devoid of grace – the undeserved, unnecessary, surprising, unforeseen. A paradox is at work here, ours is a world about which we pretend to have more and more information but which seems to us increasingly devoid of meaning. (418)

It is not that useful and meaningful information is not increasing, but that our exponential increase in information of value- medical and genetic information and the like- has risen in tandem with an exponential increase in nonsense and noise. This is the flood of Gleick’s title, and the way to avoid drowning in it is to learn how to swim or get oneself a boat.

A key to not drowning is to know when to hold your breath. Pushing for more and more data collection can sometimes be a waste of resources and time, because it leads one away from actionable knowledge. If the flood of useless information you feel compelled to process seems overwhelming now, just wait for the full flowering of the “Internet of Things”, when everything you own, from your refrigerator to your bathroom mirror, will be “talking” to you.

Don’t get me wrong, there are aspects of the Internet of Things that are absolutely great. Take, for example, the work of the company Indoo.rs, which has created a sort of digital map throughout the San Francisco International Airport that allows a blind person using Bluetooth to know: “the location of every gate, newsstand, wine bar and iPhone charging station throughout the 640,000-square-foot terminal.”

This is exactly the kind of purpose the Internet of Things should be used for- a modern form of miracle that would have brought a different form of sight to the blind like Milton and Borges. We could also take a cue from the powers of imagination found in these authors and use such technologies to make our experience of the world deeper, more multifaceted, more immersive. Something like that is a far cry from an “augmented reality” covered in advertisements, or work like that of neuroscientist David Eagleman (whose projects I otherwise like) on a haptic belt that would massage its wearer into knowing instantaneously the contents of the latest Twitter feed or the stock market’s gyrations.

Where our digital technology fails to make our lives richer, it should be helping us automate- meaning make automatic and without thought- all that is least meaningful in our lives; this is what machines are good at. In areas of life that are fundamentally empty we should be decreasing, not increasing, our cognitive load. The drive for connectivity seems to be pushing in the opposite direction, forcing us to think about things that were formerly done by our thoughtless machines, which lacked precision but also didn’t give us much to think about.

Yet even before the Internet of Things has fully bloomed we already have problems. As Gleick notes, in all past ages it was a given that most of our knowledge would be lost. Hence the logic of a wonderful social institution like the New York Public Library. Gleick writes with sharp irony:

Now expectations have inverted. Everything may be recorded and preserved…. Where does it end? Not with the Library of Congress.

Perhaps our answer is something like the Internet Archive, whose work capturing and recording the internet as it comes into being and disappears is both amazing and might someday prove essential to the survival of our civilization- the analog to the copies made of the famed collection of the lost Library of Alexandria. (Watch this film.)

As individuals, however, we are still faced with the monumental task of separating the wheat from the chaff in the flood of information, the gems from the twerks. Our answer to this has been services that allow the aggregation and organization of this information, where, as Gleick writes:

 Searching and filtering are all that stand between this world and the Library of Babel.  (410)

And here is the one problem I had with Gleick’s otherwise mind-blowing book: he doesn’t really deal with questions of power. A good deal of power in our information society is shifting to those who provide these kinds of aggregating and sifting capabilities and claim on that basis to be able to make sense of a world brimming with noise and nonsense. The NSA makes this claim, and it is precisely the kind of intelligence agency one would predict to emerge in an information age.

Anne Neuberger, the Special Assistant to NSA Director Michael Rogers and the Director of the NSA’s Commercial Solutions Center, recently gave a talk at the Long Now Foundation. It was a hostile audience of Silicon Valley types who felt burned by the NSA’s undermining of digital encryption, of the reputation of American tech companies, and of civil liberties. But nothing captured the essence of our situation better than a question from Paul Saffo.

Paul Saffo: It seems like the intelligence community have always been data optimists- that if we just had more data… and we know that after every crisis it’s ‘well, if we’d just had more data we’d have connected the dots.’ And there’s a classic example- I think of 9-11, but this was said by the Church Committee in 1975: ‘It’s become common to translate criticism of intelligence results into demands for enlarged data collection.’ Does more data really lead to more insight?

As a friend is fond of saying, ‘perhaps the data haystack that the intelligence community has created has become too big to ever find the needle in.’

Neuberger’s telling answer was that the NSA needed better analytics, something the agency could learn from the big data practices of business. Gleick might have compared her answer to the plea for a demon by a character in a story by Stanislaw Lem, whom he quotes:

 We want the Demon, you see, to extract from the dance of atoms only information that is genuine like mathematical formulas, and fashion magazines.  (The Information 425)

Perhaps the best possible future for the NSA’s internet-hoovering facility at Bluffdale would be for it to someday end up as a memory bank of our telecommunications in the 21st century, a more expansive version of the Internet Archive. What it is not, however, is a way to anticipate the future on either small or large scales, or at least that is what history tells us. More information or data does not equal more knowledge.

We are forced to find meaning in the current of the flood. It is a matter of learning what it is important to pay attention to and what to ignore, what we should remember and what we can safely forget, what the models we use to structure this information can tell us and what they can’t, what we actually know along with the boundaries of that which we don’t. It is not an easy job, or as Gleick says:

 Selecting the genuine takes work; then forgetting takes even more work.

And it’s a job, it seems, that just gets harder as our communications systems become ever more burdened by deliberate abuse (69.6 percent of email in 2014 was spam, up almost 25 percent from a decade earlier), and in an age when quite extraordinary advances in artificial intelligence are being used not to solve our myriad problems, but to construct an insidious architecture of government and corporate stalking via our personal “data trails”, which some confuse with our “self”- a Laplacian ambition that Dag Kittlaus unguardedly praises as “incremental revenue nirvana.” Pity the Buddha.

As individuals, institutions and societies we need to find ways to stay afloat and navigate through the flood of information, and its abuse, that has burst upon us, for as in Borges’ Library there is no final escape from the world it represents. As Gleick powerfully concludes The Information:

We are all patrons of the Library of Babel now and we are the librarians too.

The Library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors looking for lines of meaning amid the leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors in which we may recognize creatures of the information.

 

Can Machines Be Moral Actors?

The Tin Man by Greg Hildebrandt

Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science-fiction: is it possible to make moral machines- or, in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one’s tenure at risk, but as the revolution in robotics has rolled forward it has become an issue we need to grapple with, and now.

Perhaps the most surprising thing to emerge from the effort to think through what moral machines might mean has been less what it has revealed about our machines or their potential than what it has brought into relief about our own very human moral behavior and reasoning. But I am getting ahead of myself; let me begin at the beginning.

Step back for a second and think about the largely automated systems that surround you right now- not in any hypothetical future with our version of Rosie from the Jetsons or R2D2, but in the world in which we live today. The car you drove in this morning and the computer you are reading this on were brought to you in part by manufacturing robots. You may have stumbled across this piece via the suggestion of a search algorithm programmed, but not really run, by human hands. The electrical system supplying the computer or tablet on which you are reading is largely automated, as are the actions of the algorithms that are now trying to grow your 401K or pension fund. And while we might not be able to say what this automation will look like ten or twenty years down the road, we can nearly guarantee that, barring some catastrophe, there will only be more of it; given the lag between software and hardware development, this will be the case even should Moore’s Law completely stall out on us.

The question ethicists are asking isn’t really whether we should build an ethical dimension into these types of automated systems, but whether we can, and what doing so would mean for our status as moral agents. Would building AMAs mean a decline in human moral dignity? Personally, I can’t really say I have a dog in this fight, but hopefully that will help me to see the views of all sides with more clarity.

Of course, the most ethically fraught area in which AMAs are being debated is warfare. There seems to be an evolutionary pull to deploy machines in combat with greater and greater levels of autonomy. The reason is simple: advanced nations want to minimize risks to their own soldiers, and remote-controlled weapons are at risk of being jammed and cut off from their controllers. Thus, machines need to be able to act independently of human puppeteers. A good case can be made, though I will not try to make it here, that building machines that can make autonomous decisions to actually kill human beings would constitute the crossing of a threshold humanity would have preferred to remain on this side of.

Still, while it’s typically a bad idea to attach one’s ethical precepts to what is technically possible, here it may serve us well, at least for now. The best case to be made against fully autonomous weapons is that artificial intelligence is nowhere near being able to make ethical decisions regarding the use of force on the battlefield, especially on mixed battlefields with civilians- which is what most battlefields are today. As argued by Guarini and Bello in “Robotic Warfare: some challenges in moving from non-civilian to civilian theaters”, current forms of AI are not up to the task of threat ascription, can’t do mental attribution, and are unable to tackle the problem of isotropy- a fancy word that just means “the relevance of anything to anything”.

This is a classic problem of error through correlation, something that seems inherent not just in these types of weapons systems, but in the kinds of specious correlations made by big-data algorithms as well. It is the difficulty of gauging a human being’s intentions from the pattern of his or her behavior. Is that child holding a weapon or a toy? Is that woman running towards me because she wants to attack, or is she merely frightened and wants to act as a shield between me and her kids? This difficulty in making accurate mental attributions doesn’t need to revolve around life and death situations. Did I click on that ad because I’m thinking about moving to light beer, or because I thought the woman in the ad looked cute? Too strong a belief in correlation can start to look like belief in magic- thinking that drinking the lite beer will get me a reasonable facsimile of the girl smiling at me on the TV.

Algorithms and the machines they run are still pretty bad at this sort of attribution- bad enough, at least right now, that there’s no way they can be deployed ethically on the battlefield. Unfortunately, however, the debate about ethical machines often gets lost in the necessary, but much less comprehensive, fight over “killer robots”. This despite the fact that only a small part of the ethical remit of our new and much more intelligent machines will actually involve robots acting as weapons.

One might think that Asimov already figured all this out with his Three Laws of Robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The problem is that the types of “robots” we are talking about bear little resemblance to the ones dreamed up by the great science-fiction writer, even if Asimov mainly intended to explore the possible tensions between the laws.

The problems robotics ethicists face today have more in common with the “Trolley Problem”, in which a machine has to decide between competing goods rather than contradictory imperatives. A self-driving car might be faced with the choice of hitting a pedestrian or hitting a school bus. Which should it choose? If your health care services are being managed by an algorithm, what are its criteria for allocating scarce resources?

One approach to solving these types of problems might be to consistently use one of the major moral philosophies- Kantian, utilitarian, virtue ethics and the like- as a guide for the types of decisions machines may make. Kant’s categorical imperative of “acting only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction,” or aiming for utilitarianism’s “greatest good of the greatest number”, might seem like a good approach, but as Wendell Wallach and Colin Allen point out in their book Moral Machines, such seemingly straightforward rules are for all practical purposes incalculable.
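A toy sketch in Python- my own illustration, not Wallach and Allen’s- of why a literal utilitarian calculus is incalculable in practice: exhaustively scoring chains of consequences grows exponentially with how far ahead you look:

```python
def branches(actions_per_step: int, steps: int) -> int:
    """Consequence branches an exhaustive utilitarian calculus must score."""
    return actions_per_step ** steps

# With a modest 10 possible actions at each step:
for steps in (5, 10, 20, 30):
    print(f"{steps:>2} steps of lookahead: {branches(10, steps):.1e} branches")
```

Thirty steps of lookahead with only ten options per step already means 10^30 branches to evaluate, before a utility has been assigned to a single one of them.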

Perhaps if we ever develop real quantum computers capable of scanning through every possible universe we will have machines that can solve these problems, but for the moment we do not. Oddly enough, the moral philosophy most dependent on human context, virtue ethics, seems to have the best chance of being instantiated in machines, but the problem is that virtues are highly culture-dependent. We might get machines that make vastly different and opposed ethical decisions depending on the culture in which they are embedded.

And here we find perhaps the biggest question that has been raised by trying to think through the problem of moral machines. For if machines are deemed incapable of using any of the moral systems we have devised to come to ethical decisions, we might wonder whether human beings really can either. This is the case made by Anthony Beavers, who argues that the very attempt to make machines moral might be leading us towards a sort of ethical nihilism.

I’m not so sure. Or rather, as long as our machines, whatever their degree of autonomy, merely reflect the will of their human programmers, I think we’re safe declaring them moral actors but not moral agents, and this should save us from slipping down the slope of ethical nihilism Beavers is rightly concerned about. In other words, I think Wallach and Allen are wrong in asserting that intentionality doesn’t matter when judging whether an entity is a moral agent. Or as they say:

Functional equivalence of behavior is all that can possibly matter for the practical issues of designing AMAs. (Emphasis theirs, p. 68)

Wallach and Allen here repeat the same behaviorist mistake made by Alan Turing and his assertion that consciousness didn’t matter when it came to the question of intelligence- a blind alley they follow so far as to propose their own version of the infamous test, what they call a Moral Turing Test, or MTT. Yet it’s hard to see how we can grant full moral agency to any entity that has no idea what it is actually deciding- indeed, that would just as earnestly do the exact opposite, such as deliberately target women and children, if its program told it to do so.

Still, for the immediate future, and with the frightening and sad exception of robots on the battlefield, the role of moral machines seems likely to have less to do with making ethical decisions themselves than with helping human beings make them. In Moral Machines Wallach and Allen discuss MedEthEx, an amazing algorithm that helps healthcare professionals with decision making. It’s an exciting time for moral philosophers, who will get to turn their ideas into all sorts of apps and algorithms that will hopefully aid moral decision making for individuals and groups.

Some might ask: is leaning on algorithms to help one make moral decisions a form of “cheating”? I’m not sure, but I don’t think so. A moral app reminds me of a story I heard once about George Washington that has stuck with me ever since. Washington, from the time he was a small boy into old age, carried a little pocket virtue guide with him, full of all kinds of rules of thumb for how to act. Some might call that a crutch, but I just call it a tool, and human beings are the only animal that can’t thrive without its tools. The very moral systems we are taught as children might equally be thought of as tools of this sort. Washington only had a way of keeping them near at hand.

The trouble comes when one grows too dependent on such “automation” and in the process loses the underlying capability once the tool is removed. Yet this is a problem inherent in all technology. The ability of people to do long division atrophies because they always have a calculator at hand. I have a heck of a time making a fire without matches or a lighter, and would have difficulty dealing with cold weather without clothes.

It’s certainly true that our capacity to be good people is more important to our humanity than our ability to do maths in our heads or on scratch paper, or even to make fire or protect ourselves from the elements, so we’ll need to be especially conscious, as moral tools develop, to keep our natural ethical skills sharp.

This again applies more broadly than the issue of moral machines, but we also need to be able to better identify the assumptions that underlie the programs running in the instruments that surround us. It’s not so much “code or be coded” as the ability to unpack the beliefs our programs embody. Perhaps someday we’ll even have something akin to food labels on the programs we use that will inform us exactly what those assumptions are. In other words, we have a job in front of us, for whether they help us or hinder us in our goal to be better people, machines remain, for the moment at least, mere actors, and a reflection of our own capacity for good and evil, rather than moral agents in themselves.

 

How our police became Storm-troopers

Ferguson Riot Police

Scott Olson/Getty Images, via International Business Times

The police response to protests and riots in Ferguson, Missouri was filled with images that have become commonplace all over the world in the last decade: police dressed in once-futuristic military gear confronting civilian protesters as if they were a rival army. The uniforms themselves put me in mind of nothing so much as the storm-troopers from Star Wars. I guess that would make the rest of us the rebels.

A democracy has entered a highly unstable state when its executive elements- the police and security services a community pays for through its taxes, which exist for the sole purpose of protecting and preserving that very community- are turned against it. I would have had only a small clue as to how this came about were it not for a rare library accident.

I was trying to check out a book on robots in warfare for a project I am working on, but had grabbed the book next to it by mistake. Radley Balko’s The Rise of the Warrior Cop has been all over the news since Ferguson broke, and I wasn’t the first to notice it, because within a day or two of the crisis the book was recalled. The reason is that Ferguson has focused public attention on an issue we should have been grappling with for quite some time- the militarization of America’s police forces. How that came about is the story The Rise of the Warrior Cop lays out cogently and with power.

As Balko explains, much of what we now take as normal police functions would likely have been viewed by the Founders as “a standing army”, something they were keen to prevent. In addition to the fact that Americans were incensed by the British use of soldiers to exercise police functions, the American Revolution had been inspired in part by the British use of “general warrants” that allowed them to bust into and search American homes in their battle against smuggling. From its beginning the United States has had a tradition of separation between military and police power, along with a tradition of limiting police power- indeed, this is the reason our constitutional government exists in the first place.

Balko points out how the U.S., as it developed its own police forces- something that became necessary with the country’s urbanization and modernization- maintained these traditions, which only fairly recently began to erode, largely beginning with the Nixon administration’s “law and order” policy and especially the “war on drugs” launched under Reagan.

In framing the problem of drug use as a war rather than a public health concern, we started down the path of using the police to enforce military-style solutions. If drug use is a public health concern, then efforts will go into providing rehabilitation services for addicts, addressing systemic causes and underlying perceptions, and legalization as a matter of personal liberty where doing so does not pose inordinate risk to the public. If the problem of drug use is framed as a war, then this means using kinetic action to disrupt and disable “enemy” forces. It means pushing to the limits of what is legally allowable when using force to protect one’s own “troops”. It means mass incarceration of captured enemy forces. Fighting a war means that training and equipment focus on the effective use of force, not “social work”.

The militarization of America’s police forces that began in earnest with the war on drugs is not, Balko reminds us, an issue that can easily be reduced to conservative vs. liberal, Republican vs. Democrat. In the 1990’s conservatives were incensed at police brutality and the misuse of military-style tactics at Waco and Ruby Ridge, yet they largely turned a blind eye to the same brutality directed against anarchists and anti-globalization protestors in the Battle of Seattle in 1999. Conservatives have also largely supported the militarized effort to stamp out drug abuse and the use of SWAT teams to enforce laws against non-violent offenders, especially illegal immigrants.

The fact that police were increasingly turning to military tactics and equipment was not, however, all an overreaction. It was inspired by high-profile events such as the Columbine massacre and a dramatic robbery in North Hollywood in 1997, in which the two robbers, Larry Phillips, Jr. and Emil Mătăsăreanu, wore body armor that police with light weapons could not penetrate. The 2008 attacks in Mumbai, in which a small group of heavily armed and well trained terrorists was able to kill 164 people and temporarily cripple large parts of the city, should serve as a warning of what happens when police cannot rapidly deploy lethal force, as should a whole series of high-profile “lone wolf” shootings. Police can thus rationally argue that they need access to heavy weapons, SWAT teams, and training for military-style contingencies. It is important to remember that the police daily put their lives at risk in the name of public safety.

Yet militarization has gone too far, influenced more by security corporations and their lobbyists than by conditions in actual communities. If the drug war and attention-grabbing acts of violence were where the militarization of America’s police forces began, 9-11 and the wars in Afghanistan and Iraq acted as an accelerant on the trend. These events launched a militarized-police-industrial complex; the country was flooded with grants from the Department of Homeland Security, which funded even small communities to set up SWAT teams and purchase military-grade equipment. Veterans of wars that were largely wars of occupation and counter-insurgency were naturally attracted to using these hard-won skill sets in civilian life- which largely meant either becoming police or entering the burgeoning sector of private security.

So that’s the problem as laid out by Balko; what is his solution? For Balko, the biggest step we could take toward rolling back militarization is to end the drug war and stop using military-style methods to enforce immigration law. He would like to see a return to community policing- if not quite Mayberry, then at least something like the innovative program launched in San Antonio, which uses police as social workers rather than commandos to respond to mental-health-related crime.

Balko also wants us to end our militarized response to protests. There is no reason why protesters in a democratic society should be met by police wielding automatic weapons or dispersed with tear gas. We can also stop the flood of federal funding being used by local police departments to buy surplus military equipment- something the Obama administration, prompted by Ferguson, seems keen to review.

A positive trend Balko sees is the ubiquity of photography and film permitted by smartphones, which allows protesters to capture brutality as it occurs- a right everyone has, despite the insistence of some police in protest situations to the contrary, and one that has been consistently upheld by U.S. courts. Indeed, the other potentially positive legacy of Ferguson, besides bringing the problem of police militarization into the public spotlight- for there is no wind so ill it does not blow some good- might be that it has helped launch truly citizen-based and crowd-sourced media.

My criticism of The Rise of the Warrior Cop, to the extent I have any, is that Balko tells only the American version of this tale, though the story is playing out globally. The inequality of late capitalism certainly plays a role. Wars between states have, at least temporarily, been replaced by wars within states. Global elites, more connected to their rich analogs in other countries than to their own nationals, turn to the large part of the middle class that finds itself located, in one form or another, in the security services of the state. Elites pursue equally internationalized rivals, such as drug cartels and terrorist networks, like one would a cancerous tumor- wishing to rip it out by force- not realizing this form of treatment does not get to the root of the problem and might even end up killing the patient.

More troubling, they use these security services to choke off mass protests by the poor and other members of the middle class- protests now enabled by mobile technologies- because they find themselves incapable of responding to the problems that initiated these protests with long-term political solutions. This relates to another aspect of the police militarization issue Balko doesn’t really explore, namely the privatization of police services, as those who can afford them retreat behind the fortress of private security while the conditions of the society around them erode.

Maybe there was a good reason The Rise of the Warrior Cop was placed on the library shelf next to books on robot weapons after all. It may sound crazy, but perhaps in the not so far off future elites will automate policing as they are automating everything else. Mass protests, violent or not, will be met not with flesh and blood policemen but with military-style robots and drones. And perhaps only then will once middle-class policemen, made poor by the automation of their calling, realize that all this time they have been fighting on the wrong side of the rebellion.

A Cure for Our Deflated Sense of The Future

Progressland 1964 World's Fair

There’s a condition I’ve noted among former hard-core science-fiction fans that, for want of a better word, I’ll call future-deflation. The condition consists of an air of disappointment with, and detachment from, the present that emerges because the future one dreamed of in one’s youth has failed to materialize. It was a dream of what the 21st century would entail fostered by science-fiction novels, films and television shows- a dream that has not arrived, and will seemingly never arrive, at least within our lifetimes. I think I have a cure for it, or at least a strong preventative.

The strange thing, perhaps, is that anyone would be disappointed that a fictional world has failed to become real in the first place. No one, I hope, feels the present is constricted and dull because there aren’t any flying dragons in it to slay. The problem, then, might lie in the way science-fiction is understood in the minds of some avid fans- not as fiction, but as plausible future history, or even a sort of “preview” and promise of all the cool things that await.

Future-deflation is a kind of dulling hangover from a prior bout of future-inflation, when expectations got way out ahead of themselves. If mostly boys, now become men, feel let down by inflated expectations- driven by what proved to be the Venetian sunset, rather than the beginning, of the space race- regarding orbital cities, bases on the moon and Mars, and a hundred other things, their experience is a little like that of girls, fed on a diet of romance, who have as adults tasted the bitter reality of love. Following the same rule, I suppose I’d call that romance-deflation- cue the Viagra jokes.

Okay, so that’s the condition; how might it be cured? The first step to recovery is admitting you have a problem and identifying its source. Perhaps the main culprit behind future-deflation is the crack cocaine of CGI (and I really do mean CGI, as in computer-generated imagery, which I’ve written about before). Whereas late 20th century novels, movies, and television shows served as gateway drugs to our addiction to digital versions of the future, CGI and the technologies that will follow it are the true rush, allowing us to experience versions of the future that just might be more real than reality itself.

There’s a phenomenon discovered by social psychologists studying motivation: it’s a mistake to visualize your idealized outcomes too clearly, for doing so actually diminishes your motivation to achieve them. You get many of the same emotional rewards without any of the costs, and on that account never get down to doing the real thing. Our ability to create not just compelling but mind-blowing visualizations of technologies that are nowhere on the horizon has become so good, and will only get better, that it may be exacerbating disappointment with the present state of technology and the pace of technological change- increasing the sense of “where’s my jet pack?”

There’s a theory I’ve heard discussed by Brian Eno that the reason we haven’t been visited by any space aliens is that civilizations at a certain point fall into a state of masturbatory self-satisfaction. They stop actually doing things because imagining them becomes so much better and easier than the difficult and much less immediately satisfying work of achieving them in reality.

The cure for future-deflation is really just adulthood. We need to realize that the things we would like to do and see done are hard and expensive and take long commitments over time- often far past our own lifetimes- to achieve. We need to get up off our Oculus Rift-weighted asses and actually do stuff. Elon Musk with his SpaceX seems to realize this, but with a ticket to Mars projected to cost 500 thousand dollars, one can legitimately wonder whether he’ll end up creating an escape hatch from earth for the very rich that the rest of us will be stuck gawking at on our big-screen TVs.

And therein lies the heart of the problem, for what matters to the majority of us is actually less which technologies are available in the future than the largely non-technological question of how such a future is politically and economically organized- which will translate into how many of us have access to those technologies.

The question we should be asking when thinking about what we should be doing now to shape the future is a simple and very human one: “what kind of world do I hope my grandchildren live in?” Part of the answer is going to involve technology and scientific advancement, but not as much of it as we might think. Other types of questions, dealing with issues such as the level of equality, peace and security, a livable environment, and the amount of freedom and purpose, are both more important and more capable of being influenced by the average person. These are things we can pursue even if we have no connection to the communities of science and technology. We could achieve many of them with the technological kit we already have, even should technological progress completely stall.

In a way, because it emerged in tandem with the technological progress begun by the scientific and industrial revolutions, science-fiction seemed to own the future, and those who practiced the art largely did well by us in giving it shape- at least in our minds. But in reality the future was just on loan, and it might do us best to take back a large portion of it and encourage everyone who wants to have more say in defining it. Or better, here’s my advice: those techno-progressives not involved directly in the development of science and technology should focus more of their efforts on the progressive side of the scale. That way, if even part of the promised future arrives, you won’t be confined to just watching it while wearing your Oculus Rift or stuck in your seat at the IMAX.

Why archaeologists make better futurists than science-fiction writers

Astrolabe vs iPhone

Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, and tried to use the regularity, or lack of it, in the night sky as a projection of the future and an omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge bias is towards the present and the past. Survival means seeing far enough ahead to avoid dangers: an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.

It’s curious how many of the ancient techniques of divination have a probabilistic element to them, as if the best tool for discerning the future were a magic 8-ball, though perhaps that makes a great deal of sense. Probability and chance come into play wherever our knowledge comes up against limits, and these limits can consist merely of knowledge we are not privy to, like the game of “he loves me, he loves me not” little girls play with flower petals. As Henri Poincaré said: “Chance is only the measure of our ignorance”.

Poincaré was coming at the question with the assumption that the future is already determined- the idea of many physicists, who think our knowledge bias towards the present and past is, like the game of “he loves me, he loves me not”, all in our heads; that the future already exists in the same sense as the past and present, its facts merely ones we can only dimly see. It shouldn’t surprise us that they say this, physics being our own most advanced form of divination, and just one among many modern forms, from meteorology to finance to genomics. Our methods of predicting the future are more sophisticated and effective than those of the past, but they still only barely see through the fog in front of us- which doesn’t stop us from being overconfident in our predictive prowess. We even have an entire profession, futurists, who claim- like old-school fortune tellers- to have a knowledge of the future no one can truly possess.

There’s another group that claims the ability to divine at least some rough features of the future- science-fiction writers. The idea of writing about a future substantially different from the past didn’t really emerge until the 19th century, when technology began not only making the present substantially different from the past, but promised to keep doing so out into an infinite future. Geniuses like Jules Verne and H.G. Wells were able to look at the industrial revolution changing the world around them and project it forward in time with sometimes extraordinary prescience.

There is a good case to be made, and I’ve tried to make it before, that science-fiction is no longer very good at this prescience. The shape of the future is now occluded, and we’ve known the reasons for this for quite some time. Karl Popper pointed out as far back as the 1930’s that the future is essentially unknowable, and part of this unknowability is that the course of scientific and technological advancement cannot be known in advance. A too-tight fit with wacky predictions of the future of science and technology is characteristic of the worst and campiest science-fiction.

Another good argument can be made that technologically dependent science-fiction isn’t so much predicting the future as inspiring it- that the children raised on Star Trek’s communicator end up inventing the cell phone. Yet technological horizons can just as easily be atrophied as energized by science-fiction. This is the point Robinson Meyer makes in his blog post “Google Wants to Make ‘Science Fiction’ a Reality—and That’s Limiting Their Imagination”.

Meyer looks at the famous Google X, the supposedly risky research arm of Google, one of whose criteria for embarking on a project is that it must utilize “a radical solution that has at least a component that resembles science fiction.”

The problem Meyer finds with this is:

…“science fiction” provides but a tiny porthole onto the vast strangeness of the future. When we imagine a “science fiction”-like future, I think we tend to picture completed worlds, flying cars, the shiny, floating towers of midcentury dreams.

We tend, in other words, to imagine future technological systems as readymade, holistic products that people will choose to adopt, rather than as the assembled work of countless different actors, which they’ve always really been.

Meyer mentions a recent talk by a futurist I hadn’t heard of before, Scott Smith, who calls these kinds of clean futuristic visions of total convenience- antiseptic worlds surrounded by smart glass where all the world’s knowledge is at our fingertips and everyone is productive and happy- “flat-pack futures”: the world of tomorrow ready to be brought home and assembled like a piece of furniture from the IKEA store.

I’ve always had trouble with these kinds of simplistic futures, which are really just corporate advertising devices, not real visions of a complex future- a future which will inevitably have much ugliness in it, including the ugliness that emerges from the very technology that is supposed to make our lives a paradise. The major technological players are not even offering alternative versions of the future, just the same bland future with different corporate labels, as seen here, here, and here.

It’s not that these visions of the future are always totally wrong, or even unattractive, so much as that people in the present- meaning us- have no real agency over which elements of these visions they want and which they don’t, with the exception of exercising consumer choice- the very thing these flat-pack visions of the future are trying to get us to buy.

Part of the reason Smith sees a mismatch between these anodyne versions of the future, which we’ve had since at least the early 20th century, and what actually happens is that these corporate versions of the future lack diversity and context, not to mention conflict. That shouldn’t really surprise us- they are advertisements, after all, geared to high-end consumers- not actual predictions of what the future will in fact look like for poor people or outcasts or non-conformists of one sort or another. One shouldn’t expect advertisers to show the bad that might happen should a technology fail, or be hacked, or be used for dark purposes.

If you watch enough of them, their more disturbing assumption is that by throwing a net of information over everything we will finally have control, and all the world will fall under the warm blanket of our comfort and convenience. Or as Alexis Madrigal put it in a thought-provoking recent piece, also at The Atlantic:

We’re creating a world that seamlessly, effortlessly shapes itself to human desire. It’s no longer cutting through a mountain to prove we dominate nature; now, it’s satisfying each impulse in the physical world with the ease and speed of digital tools. The shortest way to explain what Uber means: hit a button and something happens in the world (that makes life easier for you).

To return to Meyer, his point was that by looking almost exclusively to science-fiction to see what the future “should” look like, designers, technologists, and by implication some futurists, have reduced themselves to a very limited palette. How might this palette be expanded? I would throw my bet behind turning to archaeologists, anthropologists, and certain kinds of historians.

As Steve Jobs understood, in some ways the design of a technology is as important as the technology itself. But Jobs sought an almost formlessness- the clean, antiseptic modernist zen you feel when you walk into an Apple Store. As someone once said (I can’t place the attribution), an Apple Store is designed like a temple. But it’s a temple representative of only one very particular form of religion.

All the flat-pack versions of the future I linked to earlier have one prominent feature in common: they are obsessed with glass. Everything in the world, it seems, will be a see-through touchscreen bubbling with the most relevant information for the individual at that moment- the weather, traffic alerts, box scores, your child’s homework assignment. I have a suspicion that this glass fetish is a sort of Freudian slip on the part of tech companies, for the thing about glass is that you can see both ways through it, and what these companies most want is the transparency of you- to be able to peer into the minutiae of your life so as to be able to sell you stuff.

Technology designers have largely swallowed the modernist glass-covered Kool-Aid, but one way to purge might be to turn to archaeology. For example, I’ve never found a tool more beautiful than the astrolabe, though it might be difficult to carry a smartphone in the form of one in your pocket. Instead of all that glass, why not a Japanese paper house on which all our supposedly important information could appear? Whiteboards are better for writing anyway. Ransacking the past for the forms in which we could embed our technologies might be one way to escape the current design conformity.

There are reasons to hope that technology itself will help us break out of our futuristic design conformity. Ubiquitous 3D printing may allow us to design and create our own household art, tools, and customized technology devices. A website my girls and I often look to for artistic inspiration and projects, the Met’s 82nd & Fifth, may- as the images get better, eventually becoming 3D and even providing CAD specifications- allow us access to a great deal of the world’s objects from the past, providing design templates for ways to embed our technology that reflect our distinct personal and cultural values, which might have nothing to do with those whipped up by Silicon Valley.

Of course, we’ve been ransacking the past for ideas about how to live in the present and future since modernity began. Augustus Pugin’s 1836 book Contrasts, with its beautiful, if slightly dishonest, drawings, put the 19th century’s bland, utilitarian architecture up against the exquisite beauty of buildings from the 15th century. How cool would it be to have a book like Pugin’s contrasting technology today with that of the past?

The problem with Pugin’s and others’ efforts in this regard was that they didn’t merely look to the past for inspiration or forms, but sought to replace the present and its assumptions with the revival of whole eras, assuming that a kind of cultural and historical rewind button or time machine was possible, when in reality the past was forever gone.

Looking to the past, as opposed to trying to raise it from the dead, has other advantages when it comes to our ability to peer into and shape the future. It gives us different ways of seeing how our technology might be used as a means towards the things that have always mattered most to human beings- our relationships, art, spirituality, curiosity. Take, for instance, the way social media such as Skype and Facebook are changing how we relate to death. On the one hand, social media seems to break down the way in which we have traditionally dealt with death- as a community sharing a common space. On the other, it seems to revive a kind of analog to practices regarding death that have long passed from the scene, practices that archaeologists and social historians know well, in which the dead were brought into the home and tended to for an extended period of time. Now, though, instead of actual bodies, we see the online legacy of the dead- websites, Facebook pages and the like- being preserved and attended to by loved ones. A good question to ask oneself when thinking about how future technology will be used might be not what new thing it will make possible, but what old thing no longer done it might revive.

In other words, the design of technology is the reflection of a worldview, but any given design reflects only one worldview competing with innumerable others. Without fail, for good and ill, technology will be hacked and used as a means by people whose worldviews have little if any relationship to those of technologists. What would be best is if the design itself- not the electronic innards, but the shell and the organization of use- were controlled by non-technologists in the first place.

The best of science-fiction actually understands this. It’s not the gadgets that are compelling, but the alternative world: how it fits together, where it is going, and what it says about our own world. That’s what books like Asimov’s Foundation Trilogy did, or Frank Herbert’s Dune. Neither was a prediction of the future so much as a fictional archeology and history. It’s not the humans who are interesting in a series like Star Trek, but the non-humans, who each live in their own very distinct version of a life-world.

Science-fiction writers not interested in creating alternative worlds, or in taking a completely imaginary shot in the dark at what life might be like many hundreds of years beyond our own, might benefit from reflecting on what they are doing and on why they should lean heavily on knowledge of the past.

To the extent that science-fiction tells us anything about the future, it is through a process of focused extrapolation. As the sci-fi author Paolo Bacigalupi has said (I’ve quoted him on this before), the role of science-fiction is to take some set of elements of the present and extrapolate them to see where they lead. Or, as William Gibson put it: “The future is already here- it’s just not evenly distributed.” In his novel The Windup Girl, Bacigalupi extrapolated cutthroat corporate competition, the use of mercenaries, genetic engineering, the way biotechnology and agribusiness seem to be trying to take ownership of food, and our continued failure to tackle carbon emissions, and came up with a chilling dystopian world. Amazingly, Bacigalupi, an American, managed to embed this story in a Thai historical and Buddhist cultural context.

Although I have not yet had the opportunity to read it (it’s now at the top of my end-of-summer reading list), The History of the Future in 100 Objects, by Adrian Hon, a former neuroscientist and founder of the gaming company Six to Start, seems to grasp precisely this. As Hon explained in an excellent talk at the Long Now Foundation (I believe I’ve listened to it in its entirety three times!), he was inspired by Neil MacGregor’s A History of the World in 100 Objects (also on my reading list). What Hon realized was that the near future, at least, would still have many of the things we are familiar with, things like religion or fashion, and so what he did was extrapolate his understanding of technology trends, gained from both his neuroscience and gaming backgrounds, onto those things.

Science-fiction writers, and their kissing-cousins the futurists, need to remind themselves when writing about the next few centuries that, as long as there are human beings (and otherwise, what are they writing about?), there will still be much about us that we’ve had since time immemorial. There is little doubt there will still be death, but also religion. And not just any religion, but the ones we have now: Christianity and Islam, Hinduism and Buddhism, etc. (Can anyone think of a science-fiction story where they still celebrate Christmas?) There will still be poverty, crime, and war. There will also be birth and laughter and love. Culture will still matter, and sub-culture just as much, as will geography. For at least the next few centuries, let us hope, we will still be human, so perhaps I shouldn’t have used archeologist in my title but anthropologist, or even psychologist, for those who can best understand the near future are those who best understand the mind and heart of mankind rather than the current, and inevitably unpredictable, trajectory of our science and technology.


The First Machine War and the Lessons of Mortality

Lincoln Motor Co., Detroit, Michigan, ca. 1918 (U.S. Army Signal Corps, Library of Congress)

I just finished a thrilling little book about the first machine war. The author writes of a war set off by a terrorist attack, in which the very speed with which machines can be put into action, and the near light-speed of telecommunications whipping up public opinion to do something now, drive countries into a world war.

In his vision whole new theaters of war, amounting to fourth and fifth dimensions, have been invented. Amid a storm of steel, huge hulking machines roam across the landscape and literally shred to pieces the human beings in their path. Low-flying avions fill the sky, taking out individual targets or helping calibrate precision attacks from incredible distances. Wireless communications connect soldiers and machines together in a kind of world-net.

But the most frightening aspect of the new war is its weapons based on plant biology. Such weapons, if they do not immediately scar the faces and infect the bodies of those targeted, relocate themselves in the soil like spores, waiting to kill and maim when conditions are ripe- the ultimate terrorist weapon.

Amid all this the author searches for what human meaning might remain in a world where men are caught between warring machines. In the end he comes to understand himself as a mere cog in a great machine, a metallic artifice that echoes and rides the rhythms of biological nature, including his own.

__________________________________

A century and a week back from today (July 28, 1914), humanity began its first war of machines. There had been forerunners, such as the American Civil War and the Franco-Prussian War in the 19th century, but nothing before had exposed the full features and horrors of what war between mechanized, industrial societies would actually look like until it came, quite unexpectedly, in the form of World War I.

Those who wish to argue that the speed of technological development is exaggerated need only look at the First World War, where almost all of the weapons we now use in combat were either first used or first used to full effect: the radio, the submarine, the airplane, the tank and other vehicles driven by the internal combustion engine. Machine guns were let loose in new and devastating ways, as was long-range artillery.

Although again there were forerunners, the first biological and chemical weapons saw their true debut in WWI. The Germans allegedly tried to infect the city of St. Petersburg with a strain of the plague, but the most widely used WMDs were chemical weapons, some of them derived from work on the nitrogen cycle of plants: gases such as chlorine and mustard gas, which maimed more than they killed, and sometimes sat in the soil ready to emerge like poisonous mushrooms when weather conditions permitted.

Indeed, the few other weapons in our 21st-century arsenal that can’t be found in the First World War, such as the jet, the rocket, the atomic bomb, radar, and even the computer, would make their debut only a few decades after the end of that war, during what most historians consider its second half: World War II.

What is called the Great War began, as our 9/11 wars began, with a terrorist attack. Archduke Franz Ferdinand of Austria-Hungary was assassinated by the ultimate nobody, a Serbian nationalist not much older than a boy- Gavrilo Princip- whose purely accidental success (he was only able to take his shot because the car the Archduke was riding in had taken a wrong turn) ended up being the catalyst for the most deadly war in human history up until that point, a conflict that would unleash nearly a century of darkness and mortal danger upon the world.

For the first time it was a war experienced by people thousands of miles from the battlefield in almost real time, via the relatively new miracles of the telegraph and wireless. This was only part of the lightning-fast feedback loop that sped European civilization from a minor political assassination to total war. As I recall from Stephen Kern’s 1983 The Culture of Time and Space, 1880-1918, political caution and prudence found themselves crushed in the vise of compressed space and time; policy decisions had to align not with the needs of human beings and their slow, messy, deliberative politics, but with the pace of machines. Once the decision to mobilize was made it was almost impossible to stop without exposing the nation to extreme vulnerability; once a jingoistic public had been whipped up by the new mass media to demand revenge and action, it was nearly impossible to silence and temper.

The First World War is perhaps the quintessential example of what Nassim Taleb called a “Black Swan”: an event that fails to be foreseen and whose effects change the future radically from the shape it had previously been projected to have. Taleb defines a Black Swan this way:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Secondly it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. (xxii)

Taleb has something to teach futurists, whom he suggests might start looking for a different day job. For what he is saying is that it is the events we do not and cannot predict that will have the greatest influence on the future, and on account of this blindness the future is essentially unknowable. The best we can do, in his estimation, is to build resilience, robustness, and redundancy, which might allow us to survive, or even gain from, multiple forms of unpredictable crisis- to become, in his terms, “antifragile”.

No one seems to think another world war is possible today, which might itself give us reason for worry. We do have more circuit breakers in place that might allow us to dampen a surge toward war between the big states in the face of a dangerous event, such as Japan downing a Chinese plane in their dispute over Pacific islands, but many more, and stronger, ones need to be created to keep such frictions from spinning out of control.

States continue to prepare for limited conventional wars against one another. China practices and plans to retake disputed islands, including Taiwan, by force and to push the U.S. Navy deeper into the Pacific, while the U.S. and Japan practice retaking islands from the Chinese. We do this without recognizing that we need to do everything possible to prevent such clashes in the first place, because once they begin we have no idea where or how they will end. As with financial crises, the further in time we are removed from the last great crisis, the more likely we are to have fallen into a dangerous complacency, though the threat of nuclear destruction may act as an ultimate brake.

The book I began this essay with is, of course, not some meditation on 21st or even 22nd century war, but on the First World War itself. Ernst Jünger’s Storm of Steel is perhaps the best book ever written on the Great War, and arguably one of the best books ever written on the subject of war- period.

It is Jünger’s incredible powers of observation and his desire for reflection that give the book such force. There is a scene that will forever stick with me, in which Jünger, walking across the front lines, sees a man sitting there in seemingly perfect stoicism as the war rages around him. It’s only when he looks closer that Jünger realizes the man is dead and has no eyes in his sockets- they had been blown out by an explosion from behind.

Unlike another great book on the First World War, Erich Maria Remarque’s All Quiet on the Western Front, Storm of Steel is not a pacifist book. Jünger is a soldier who sees the war as a personal quest and as part of his duty as a man. His bravery is undeniable, but he does not question the justice or rationality of the war itself, a fact that would later allow his war memoir to be used as a propaganda tool by the Nazis.

Storm of Steel has about it something of the view of war found among the ancients: that it was sent by the gods and there was nothing to be done about it but fulfill one’s duty within it. In the Hindu holy book the Bhagavad Gita, the prince Arjuna is filled with doubt over the moral righteousness of the battle he is about to engage in, to which the god Krishna responds:

 …you Arjuna, are only a mortal appointee to carry out my divine will, since the Kauravas are destined to die either way, due to their heap of sins. Open your eyes O Bhaarata and know that I encompass the Karta, Karma and Kriya, all in myself. There is no scope for contemplation now or remorse later, it is indeed time for war and the world will remember your might and immense powers for time to come. So rise O Arjuna!, tighten up your Gandiva and let all directions shiver till their farthest horizons, by the reverberation of its string.

Jünger lived in a world that had begun to abandon the gods, or rather had adopted new, materialist versions of them, whether the force of history or evolution: stories in which Jünger, like Arjuna, comes to see himself as playing a small part.

 The incredible massing of forces in the hour of destiny, to fight for a distant future, and the violence it so surprisingly, suddenly unleashed, had taken me for the first time into the depths of something that was more than mere personal experience. That was what distinguished it from what I had been through before; it was an initiation that not only had opened red-hot chambers of dread but had also led me through them. (255-256)

Jünger is not a German propagandist. He seems blithely unaware of the reasons his Kaiser’s government gave for fighting the war. The dehumanization of the other side, a main part of war propaganda toward the end of the conflict on both sides, does not touch him. He is a mere soldier whose bravery comes from the recognition of his own ultimate mortality, just as the mortality of everyone else allows him to kill without malice, as a matter of the simple duty of a combatant in war.

Because his memoir of the conflict is so authentic, so without bias or apparent political aims, he ends up conveying truths about war that are difficult for civilians to understand, a difficulty found not only in pacifists, but in nationalist warmongers with no experience of actual combat.

If we open ourselves to the deepest meditations of those who have actually experienced war, what we find is that combat seems to bring the existential reality of the human condition out from its normal occlusion by the tedium of everyday living. To live in the midst of war is a screaming reminder that we are mortal and that our lives are ultimately very short. In war it is very clear that we are playing a game of chance against death, which is merely the flip side of the statistical unlikelihood of our very existence, as if our one true God were indeed chance itself. And like any form of gambling, victory against death becomes addictive.

War makes it painfully clear to those who fight in it that we are hanging on just barely to this thread, this thin mortal coil, where our only hope of survival, for a time, is to hang on tightly to those closest to us- war’s famed brotherhood in arms. These experiences, rather than childish flag-waving notions of nationalism, are likely the primary source of what those with experience only of peace often find unfathomable: that soldiers from the front often eagerly return to battle. It is this shared experience of combat that often leads soldiers to find more in common with the enemies they have fought than with their fellow citizens back home who have never been to war.

The essential lessons of Storm of Steel are really spiritual answers to the questions posed by combat. Jünger’s war experience leads him to something like Buddhist non-attachment, both toward himself and toward the futility of bird’s-eye-view justifications of the conflict.

The nights brought heavy bombardment like swift, devastating summer thunderstorms. I would lie on my bunk on a mattress of fresh grass, and listen, with a strange and quite unjustified feeling of security, to the explosions all around that sent the sand trickling out of the walls.

At such moments, there crept over me a mood I hadn’t known before. A profound reorientation, a reaction to so much time spent so intensely, on the edge. The seasons followed one another, it was winter and then it was summer again, but it was still war. I felt I had got tired, and used to the aspect of war, but it was from familiarity that I observed what was in front of me in a new and subdued light. Things were less dazzlingly distinct. And I felt the purpose with which I had gone out to fight had been used up and no longer held. The war posed new, deeper puzzles. It was a strange time altogether. (260)

In another scene Jünger, by himself on patrol, comes upon a lone enemy officer.

 I saw him jump as I approached, and stare at me with gaping eyes, while I, with my face behind my pistol, stalked up to him slowly and coldly. A bloody scene with no witnesses was about to happen. It was a relief to me, finally, to have the foe in front of me and within reach. I set the mouth of my pistol at the man’s temple- he was too frightened to move- while my other fist grabbed hold of his tunic…

With a plaintive sound, he reached into his pocket, not to pull out a weapon, but a photograph which he held up to me. I saw him on it, surrounded by numerous family, all standing on a terrace.

It was a plea from another world. Later, I thought it was blind chance that I had let him go and plunged onward. That one man of all often appeared in my dreams. I hope that meant he got to see his homeland again. (234)

The unfortunate thing about the future of war is not that human beings seem likely to play an ever-diminishing role as fighters in it, as warfare undergoes the same process of automation that has left so few of us growing our food or producing our goods. Rather, it is that wars will continue to be fought, and human beings, which will come to mean almost solely non-combatants, will continue to die in them.

The lessons Jünger took from war are not so much the product of war itself as of intense reflection on our own and others’ mortality. They are the same type of understanding and depth often seen in those who endure long periods of dying, the terminally ill, who die not in the swift “thief in the night” mode of accidents or sudden bodily failure, but slip from the world with enough time, and while retaining the capacity, to reflect. Even the very young who are terminally ill often speak of a diminishing sense of their own importance, a need to hang onto the moment, a drive to live life to the full, and a longing to treat others in a spirit of charity and mercy.

Even should the next “great war” be fought almost entirely by machines, we can retain these lessons as a culture so long as we give our thoughts over to what it means to be a finite creature with an ending. And we will have the opportunity to experience them personally as long as we are mortal; given the impossibility of any form of eternity, no matter how far we extend our lifespans, mortal we will always be.


Sherlock Holmes as Cyborg and the Future of Retail

Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st-century world. The thing I really enjoy about the show is that it’s the first time I can recall anyone managing to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.

Part of the genius of the series is that the characters around Sherlock, especially Watson, are constantly trying to press upon him the fact that he is indeed human, kind of in the same way Bones is the emotional foil to Spock.

Sherlock uses an ingenious device to display Holmes’ famous powers of deduction. When Sherlock focuses his attention on a character, words float around them displaying some relevant piece of information: say, the price of a character’s shoes, what kind of razor they used, or what they did the night before. The first couple of times I watched the series I had the eerie feeling that I’d seen this device before, but I couldn’t put my finger on it. And then it hit me: I’d seen it in a piece of design fiction called Sight that I’d written about a while back.

In Sight the male character is equipped with contact lenses that act as an advanced form of Google Glass. This allows him to surreptitiously access information such as the social profile and real-time emotional reactions of a woman he is out on a date with. The whole thing is downright creepy and appears to end with the woman’s rape and perhaps even her murder- the viewer is left to guess the outcome.

It’s not only the style of heads-up display full of intimate personal detail that put me in mind of the BBC’s Sherlock, where the hero has these cyborg capabilities built into his very nature rather than acquired through technology; it’s also the idea that there is a sea of potentially useful information just sitting there on someone’s face.

In the future, it seems, anyone who wants Sherlock Holmes-type deductive powers will have them, but that got me thinking: who in the world would want to? I mean, it’s not as if we aren’t already bombarded with streams of useless data we are unable to process. Access to Sight-level amounts of information about everyone we came into contact with and didn’t personally know would squeeze our mental bandwidth down to dial-up speed.

I think the not-knowing part is important, because you’d really only want a narrow stream of new information about a person you were already well acquainted with. Something like the information people now put on their Facebook wall. You know: it’s so-and-so’s niece’s damned birthday, and your monster-truck-driving, barrel-necked cousin Tony just bagged something that looks like a mastodon on his hunting trip to North Dakota.

Certainly the ability to scan a person like a QR code would come in handy for police or spooks, and will likely be used by even more sophisticated criminals and creeps than the ones we have now. These groups work in a very narrow gap with people they don’t know all the time. Besides some medical professionals, there’s one other large group I can think of that works in this same narrow gap, for whom it would regularly be of benefit, and not inefficient, to have the maximum amount of information available about the individual standing in front of them- salespeople.

Think about a high-end retailer such as a jeweler, or, perhaps more commonly, an electronics outlet such as the Apple Store. It would certainly benefit a salesperson to be able to instantly gather details about what an unknown person who walked into the store did for a living, their marital status, the number of their family members, even their criminal record. There is also the matter of making the sale itself, and here the kinds of feedback data seen in Sight would come in handy. Such feedback data is already possible across multiple technologies and only needs to be combined. All one would need is a name, or perhaps even just a picture.

Imagine it this way: you walk into a store to potentially purchase a high-end product. The salesperson wears the equivalent of Google Glass. They ask for your name and you give it to them. The salesperson is able, without you ever knowing, to gather up everything publicly available about you on the web, after which they can buy your profile, purchasing, and browser history, again surreptitiously, perhaps by just blinking, from a big-data company like Acxiom, and tailor their pitch to you. This is similar to what happens now when you are solicited through targeted ads while browsing the Internet, and perhaps the real future of such targeted advertising, as Facebook’s success in mobile shows, lies in the physical rather than the virtual world.

In the Apple Store example, the salesperson would be able to know what products you own and your usage patterns, perhaps getting some of this information directly from your phone, and would therefore be able to pitch the accessories and upgrades most likely to result in a sale.
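Purely as a thought experiment, the flow might look something like the toy sketch below. To be clear, everything named in it is hypothetical: the stub functions, the data sources, and the ranking rule are inventions meant only to make the steps above concrete, not descriptions of any real broker’s API.

```python
# A deliberately toy sketch of the walk-in profiling flow imagined above.
# Every function and data source here is hypothetical: real data brokers
# expose nothing with these names or signatures.
from dataclasses import dataclass, field


@dataclass
class Profile:
    name: str
    public_facts: dict = field(default_factory=dict)      # scraped from the open web
    purchase_history: list = field(default_factory=list)  # bought from a broker
    owned_products: set = field(default_factory=set)      # e.g. read off the phone


def lookup_public_web(name: str) -> dict:
    # Stub standing in for a scrape of publicly available pages.
    return {"occupation": "unknown", "marital_status": "unknown"}


def buy_broker_history(name: str) -> list:
    # Stub standing in for a paid, surreptitious query to a data broker.
    return ["laptop (2012)", "headphones (2013)"]


def build_profile(name: str) -> Profile:
    """Assemble everything 'knowable' about a customer from a name alone."""
    return Profile(
        name=name,
        public_facts=lookup_public_web(name),
        purchase_history=buy_broker_history(name),
        owned_products={"phone"},  # pretend this was read directly off the device
    )


def rank_pitches(profile: Profile, catalog: list) -> list:
    """Sort (product, accessory_for) pairs so that accessories for devices
    the customer already owns come first: the tailored pitch."""
    return sorted(catalog, key=lambda pair: pair[1] not in profile.owned_products)


print(rank_pitches(build_profile("J. Doe"),
                   [("external monitor", "laptop"), ("charging case", "phone")]))
# -> [('charging case', 'phone'), ('external monitor', 'laptop')]
```

Swap the stubs for real data feeds and you would have, in outline, the pitch-tailoring loop described above.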

The next layer, reading your emotional reactions, is a little trickier, but again much of the technology already exists or is in development. We’ve all heard those annoying messages when dealing with customer service over the phone that bleat at us: “This call may be monitored…”. One might think this recording is done as insurance against lawsuits, and that is certainly one of the reasons. But, as Christopher Steiner explains in his Automate This, another major reason for these recordings is to refine algorithms that help customer service representatives filter customers by emotional type and interact with them according to those types.

These types of algorithms will only get better, and work on social robots, used largely for medical care and emotional therapy, is moving the rapid-fire algorithmic gauging of, and response to, human emotions from the audio realm to the visual one.
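In miniature, the filter-and-route idea might look like the sketch below, though the production systems Steiner describes work on vocal features rather than keywords; every cue list and script here is an invention for illustration’s sake.

```python
# A toy illustration of filtering customers by emotional type. The systems
# Steiner describes work on audio features; this keyword version is invented
# here only to show the routing logic in miniature.
ANGER_CUES = {"ridiculous", "unacceptable", "furious", "cancel"}
ANXIETY_CUES = {"worried", "confused", "nervous", "scared"}


def classify_caller(transcript: str) -> str:
    """Guess an emotional type from crude keyword cues."""
    words = set(transcript.lower().replace(",", " ").split())
    if words & ANGER_CUES:
        return "hostile"
    if words & ANXIETY_CUES:
        return "anxious"
    return "neutral"


# Each emotional type maps to a different way of handling the caller.
SCRIPTS = {
    "hostile": "route to a senior rep; open with an apology",
    "anxious": "slow the pace; confirm each step back to the caller",
    "neutral": "standard script",
}

print(SCRIPTS[classify_caller("This is ridiculous, I want to cancel")])
# -> route to a senior rep; open with an apology
```

Move the cues from keywords to tone of voice, and eventually to facial expression, and you have the trajectory described above.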

If this use of Sherlock Holmes-type powers by those with a power or wealth asymmetry over you makes you uncomfortable, I’m right there with you. But when it gets you down, you might try laughing about it. One thing we need to keep in mind, both for sanity’s sake and so that we can more accurately gauge how people in the future might preserve elements of their humanity in the face of technological capacities we today find new and often alien, is that it would have to be a very dark future indeed for there not to be a lot to laugh at in it.

Writers of utopias can be deliberately funny; Thomas More’s Utopia is meant to crack the reader up. Dystopian visions, whether of the fictional or nonfictional sort, avoid humor for a reason. The whole point of such work is to get us to avoid that future in the first place, not, as is the role of humor, to make almost unlivable situations more human, or to undermine power by making it ridiculous, pointing out that the emperor has no clothes and defeating the devil by laughing at him. Dystopias are a laughless affair.

Just as in Sherlock, it’s a tough trick to pull off, but human beings of the future, as long as there actually still are human beings, will still find the world funny. As for technology used as a tool of power, the powerless, as they always have, are likely to get their kicks subverting it, twisting it to the point of breaking its underlying assumptions.

Laughs will be had at epic fails and at the sheer ridiculousness of control freaks trying to squish an unruly, messy world into frozen and pristine lines of code. Life in the future will still sometimes feel like a sitcom, even if the sitcoms of that time are pumped directly into our brains through nano-scale neural implants.

Utopias and dystopias emerging from technology are two sides of a crystal-clear future which, because of its very clarity, cannot come to pass. What makes ethical judgment of the technologies discussed above, indeed of all technology, difficult is their damned ambiguity, an ambiguity that largely stems from dual use.

Persons suffering from autism really would benefit from a technology that allowed them to accurately gauge and guide their responses to the emotional cues of others. An EMT really would be empowered if, at the scene of a bad accident, they could instantly access such a stream of information from getting your name or even just looking at your face, especially when the data contains relevant medical information, as would a person working in child protective services or any of myriad forms of counseling.

Without doubt, such technology will be used by stalkers and creeps, but it might also be used to help restore trust and emotional rapport to a couple headed for divorce.

I think Sherry Turkle is essentially right that the more we turn to technology to meet our emotional needs, the less we turn to each other. Still, the real issue isn’t technology itself, but how we choose to use it, because, cliché though it is to say, technology by itself is devoid of morality and meaning. Using technology to create a more emotionally supportive and connected world is a good thing.

As Louise Aronson said in a recent article for The New York Times on social robots as caregivers for the elderly and disabled:

But the biggest argument for robot caregivers is that we need them. We do not have anywhere near enough human caregivers for the growing number of older Americans. Robots could help solve this work-force crisis by strategically supplementing human care. Equally important, robots could decrease high rates of neglect and abuse of older adults by assisting overwhelmed human caregivers and replacing those who are guilty of intentional negligence or mistreatment.

Our Sisyphean condition is that any gain in our capacity to do good seems also to increase our capacity to do ill. The key, I think, lies in finding ways to contain and control the ill effects. I wouldn’t mind our versions of Sherlock Holmes using the cyborg-like powers we are creating, but I think we should be more than a little careful that they don’t also fall into the hands of our world’s far too many Moriartys, though no matter how devilish these characters might be, we will still be able to mock them as buffoons.