How our police became Storm-troopers

Ferguson riot police. Photo: Scott Olson, Getty Images, via International Business Times.

The police response to protests and riots in Ferguson, Missouri was filled with images that have become commonplace all over the world in the last decade: police dressed in once-futuristic military gear confronting civilian protesters as if they were a rival army. The uniforms themselves put me in mind of nothing so much as the storm-troopers from Star Wars. I guess that would make the rest of us the rebels.

A democracy has entered a highly unstable state when its executive elements, the police and security services a community pays for through its taxes, and which exist for the sole purpose of protecting and preserving that community, are turned against it. I would have had only a small clue as to how this came about were it not for a rare library accident.

I was trying to check out a book on robots in warfare for a project I am working on, but had grabbed the book next to it by mistake. Radley Balko’s The Rise of the Warrior Cop has been all over the news since Ferguson broke, and I wasn’t the first to notice it: within a day or two of the crisis the book was recalled. The reason is that Ferguson has focused public attention on an issue we should have been grappling with for quite some time, the militarization of America’s police forces. How that came about is the story The Rise of the Warrior Cop lays out cogently and with power.

As Balko explains, much of what we now take to be normal police functions would likely have been viewed by the Founders as “a standing army”, something they were keen to prevent. In addition to the fact that Americans were incensed by the British use of soldiers to exercise police functions, the American Revolution had been inspired in part by the British use of “General Warrants” that allowed them to break into and search American homes in their battle against smuggling. From its beginning the United States has had a tradition of separation between military and police power, along with a tradition of limiting police power; indeed, this is the reason our constitutional government exists in the first place.

Balko points out how the U.S., as it developed its own police forces, something that became necessary with the country’s urbanization and modernization, maintained these traditions, which only fairly recently began to erode, largely beginning with the Nixon administration’s “law and order” policy and especially the “war on drugs” launched under Reagan.

In framing the problem of drug use as a war rather than a public health concern we started down the path of using the police to enforce military-style solutions. If drug use is a public health concern, then efforts will go into providing rehabilitation services for addicts, addressing systemic causes and underlying perceptions, and legalization as a matter of personal liberty where doing so does not pose inordinate risk to the public. If the problem of drug use is framed as a war, then this means using kinetic action to disrupt and disable “enemy” forces. It means pushing right up to the limits of what is legally allowable when using force to protect one’s own “troops”. It means mass incarceration of captured enemy forces. Fighting a war means that training and equipment needs focus on the effective use of force and not “social work”.

The militarization of America’s police forces that began in earnest with the war on drugs, Balko reminds us, is not an issue that can easily be reduced to Conservative vs. Liberal, Republican vs. Democrat. In the 1990s conservatives were incensed at police brutality and the misuse of military-style tactics at Waco and Ruby Ridge. Yet conservatives largely turned a blind eye to the same brutality turned against anarchists and anti-globalization protestors in the Battle of Seattle in 1999. Conservatives have largely supported the militarized effort to stomp out drug abuse and the use of SWAT teams to enforce laws against non-violent offenders, especially illegal immigrants.

The fact that police were increasingly turning to military tactics and equipment was not, however, all an over-reaction. It was inspired by high-profile events such as the Columbine massacre and a dramatic robbery in North Hollywood in 1997. In the latter, the two robbers, Larry Phillips, Jr. and Emil Mătăsăreanu, wore body armor that police armed with light weapons could not penetrate. The 2008 attacks in Mumbai, in which a small group of heavily armed and well-trained terrorists were able to kill 164 people and temporarily cripple large parts of the city, should serve as a warning of what happens when police cannot rapidly deploy lethal force, as should a whole series of high-profile “lone wolf” style shootings. Police can thus rationally argue that they need access to heavy weapons, SWAT teams, and training for military-style contingencies. It is important to remember that the police daily put their lives at risk in the name of public safety.

Yet militarization has gone too far and is being influenced more by security corporations and their lobbyists than by conditions in actual communities. If the drug war and attention-grabbing acts of violence were where the militarization of America’s police forces began, 9-11 and the wars in Afghanistan and Iraq acted as an accelerant on the trend. These events launched a militarized-police-industrial complex: the country was flooded with grants from the Department of Homeland Security, which funded even small communities to set up SWAT teams and purchase military-grade equipment. Veterans from wars that were largely wars of occupation and counter-insurgency were naturally attracted to using these hard-won skill sets in civilian life, which largely meant either becoming police or entering the burgeoning sector of private security.

So that’s the problem as laid out by Balko; what is his solution? For Balko, the biggest step we could take toward rolling back militarization is to end the drug war and stop using military-style methods to enforce immigration law. He would like to see a return to community policing, if not quite Mayberry, then at least something like the innovative program launched in San Antonio which uses police as social workers rather than commandos to respond to mental-health-related crime.

Balko also wants us to end our militarized response to protests. There is no reason why protesters in a democratic society should be met by police wielding automatic weapons or dispersed through the use of tear gas. We can also stop the flood of federal funding being used by local police departments to buy surplus military equipment, something that the Obama administration, prompted by Ferguson, seems keen to review.

A positive trend that Balko sees is the ubiquity of photography and film permitted by smartphones, which allows protesters to capture brutality as it occurs, a right everyone has, despite the insistence of some police in protest situations to the contrary, and one that has been consistently upheld by U.S. courts. Indeed, the other potentially positive legacy of Ferguson, besides bringing the problem of police militarization into the public spotlight (for there is no wind so ill it does not blow some good), might be that it has helped launch truly citizen-based and crowd-sourced media.

My criticism of The Rise of the Warrior Cop, to the extent I have any, is that Balko only tells the American version of this tale, but it is a story that is playing out globally. The inequality of late capitalism certainly plays a role in this. Wars between states have at least temporarily been replaced by wars within states. Global elites who are more connected to their rich analogs in other countries than to their own nationals turn to the large number of the middle class who are located, in one form or another, in the security services of the state. Elites pursue equally internationalized rivals, such as drug cartels and terrorist networks, like one would a cancerous tumor, wishing to rip it out by force, not realizing this form of treatment is not getting to the root of the problem and might even end up killing the patient.

More troubling, they use these security services to choke off mass protests by the poor and other members of the middle class, protests now enabled by mobile technologies, because they find themselves incapable of responding to the problems that initiated these protests with long-term political solutions. This relates to another aspect of the police militarization issue Balko doesn’t really explore, namely the privatization of police services, as those who can afford them retreat behind the fortress of private security while the conditions of the society around them erode.

Maybe there was a good reason that The Rise of the Warrior Cop was placed on the library shelf next to books on robot weapons after all. It may sound crazy, but perhaps in the not-so-far-off future elites will automate policing as they are automating everything else. Mass protests, violent or not, will be met not with flesh-and-blood policemen but with military-style robots and drones. And perhaps only then will once middle-class policemen, made poor by the automation of their calling, realize that all this time they have been fighting on the wrong side of the rebellion.

A Cure for Our Deflated Sense of The Future

Progressland, 1964 World’s Fair

There’s a condition I’ve noted among former hard-core science-fiction fans that for want of a better word I’ll call future-deflation. The condition consists of an air of disappointment with, and detachment from, the present that emerges because the future one dreamed of in one’s youth has failed to materialize. It was a dream of what the 21st century would entail fostered by science-fiction novels, films and television shows, a dream that has not arrived, and will seemingly never arrive, at least within our lifetimes. I think I have a cure for it, or at least a strong preventative.

The strange thing, perhaps, is that anyone would be disappointed in the fact that a fictional world has failed to become real in the first place. No one, I hope, feels like the present is constricted and dull because there aren’t any flying dragons in it to slay. The problem, then, might lie in the way science-fiction is understood in the minds of some avid fans- not as fiction, but as plausible future history, or even a sort of “preview” and promise of all the cool things that await.

Future-deflation is a kind of dulling hangover from a prior bout of future-inflation, when expectations got way out ahead of themselves. If the mostly boys, now become men, who were fed inflated expectations of orbital cities, bases on the moon and Mars, and a hundred other things by what proved to be the Venetian sunset, rather than the beginning, of the space race feel let down, their experience is a little like that of girls, fed on a diet of romance, who as adults have tasted the bitter reality of love. Following the rule, I suppose I’d call it romance-deflation: cue the Viagra jokes.

Okay, so that’s the condition; how might it be cured? The first step to recovery is admitting you have a problem and identifying its source. Perhaps the main culprit behind future-deflation is the crack cocaine of CGI (and I really do mean CGI, as in computer-generated imagery, which I’ve written about before). Whereas late 20th-century novels, movies, and television shows served as gateway drugs to our addiction to digital versions of the future, CGI and the technologies that will follow are the true rush, allowing us to experience versions of the future that just might be more real than reality itself.

There’s a phenomenon discovered by social psychologists studying motivation which says that it’s a mistake to visualize your idealized outcomes too clearly, for to do so actually diminishes your motivation to achieve them. You get many of the same emotional rewards without any of the costs, and on that account never get down to doing the real thing. Our ability to create not just compelling but mind-blowing visualizations of technologies that are nowhere on the horizon has become so good, and will only get better, that it may be exacerbating disappointment with the present state of technology and the pace of technological change, increasing the sense of “where’s my jet pack?”

There’s a theory that I’ve heard discussed by Brian Eno that the reason we haven’t been visited by any space aliens is that civilizations at a certain point fall into a state of masturbatory self-satisfaction. They stop actually doing stuff because the imagination of doing things becomes so much better and easier than the difficult and much less satisfying achievements experienced in reality.

The cure for future-deflation is really just adulthood. We need to realize that the things we would like to do and see done are hard and expensive and take long commitments over time, often far past our own lifetimes, to achieve. We need to get off our Oculus Rift-weighed-down asses and actually do stuff. Elon Musk with his SpaceX seems to realize this, but with a ticket to Mars projected to cost 500 thousand dollars one can legitimately wonder whether he’ll end up creating an escape hatch from earth for the very rich that the rest of us will be stuck gawking at on our big-screen TVs.

And therein lies the heart of the problem, for what matters to the majority of us is actually less which technologies are available in the future than the largely non-technological question of how such a future is politically and economically organized, which will translate into how many of us have access to these technologies.

The question we should be asking when thinking about what we should be doing now to shape the future is a simple and very human one: “what kind of world do I hope my grandchildren live in?” A part of the answer to this question is going to involve technology and scientific advancement, but not as much of it as we might think. Other types of questions, dealing with issues such as the level of equality, peace and security, a livable environment, and the amount of freedom and purpose, are both more important and more capable of being influenced by the average person. These are things we can pursue even if we have no connection to the communities of science and technology. Even should technological progress completely stall, we could achieve many of these things with the technological kit we already have.

In a way, because it emerged in tandem with the technological progress begun with the scientific and industrial revolutions, science-fiction seemed to own the future, and those who practiced the art largely did well by us in giving it shape, at least in our minds. But in reality the future was just on loan, and it might do us best to take back a large portion of it and encourage everyone who wants to have more say in defining it. Or better, here’s my advice: for those techno-progressives not involved directly in the development of science and technology, focus more of your efforts on the progressive side of the scale. That way, if even part of the promised future arrives, you won’t be confined to just watching it while wearing your Oculus Rift or stuck in your seat at the IMAX.

Why archaeologists make better futurists than science-fiction writers

Astrolabe vs iPhone

Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, and tried to use the regularity, or lack of it, in the night sky as a projection of the future and omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge bias is towards the present and the past. Survival means seeing far enough ahead to avoid dangers, so that an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.

It’s weird how many of the ancient techniques of divination have a probabilistic element to them, as if the best tool for discerning the future were a magic 8 ball, though perhaps it makes a great deal of sense. Probability and chance come into play wherever our knowledge comes up against limits, and these limits can consist merely of knowledge we are not privy to, like the game of “He loves me, he loves me not” little girls play with flower petals. As Henri Poincaré said: “Chance is only the measure of our ignorance.”

Poincaré was coming at the question with the assumption that the future is already determined, the idea of many physicists, who think our knowledge bias towards the present and past is in fact like a game of “he loves me, he loves me not”, all in our heads, and that the future already exists in the same sense as the past and present; the future is just a set of facts we can only dimly see. It shouldn’t surprise us that they say this, physics being our most advanced form of divination, and just one among many modern forms, from meteorology to finance to genomics. Our methods of predicting the future are more sophisticated and effective than those of the past, but they still only barely see through the fog in front of us, which doesn’t stop us from being overconfident in our predictive prowess. We even have an entire profession, futurists, who claim, like old-school fortune tellers, to have a knowledge of the future no one can truly possess.

There’s another group that claims the ability to divine at least some rough features of the future: science-fiction writers. The idea of writing about a future substantially different from the past really didn’t emerge until the 19th century, when technology began not only making the present substantially different from the past, but promised to keep doing so out into an infinite future. Geniuses like Jules Verne and H.G. Wells were able to look at the industrial revolution changing the world around them and project it forward in time with sometimes extraordinary prescience.

There is a good case to be made, and I’ve tried to make it before, that science-fiction is no longer very good at this prescience. The shape of the future is now occluded, and we’ve known the reasons for this for quite some time. Karl Popper pointed out as far back as the 1930s that the future is essentially unknowable, and part of this unknowability is that the course of scientific and technological advancement cannot be known in advance. Too tight a fit with wacky predictions about the future of science and technology is characteristic of the worst and campiest science-fiction.

Another good argument can be made that technologically dependent science-fiction isn’t so much predicting the future as inspiring it: the children raised on Star Trek’s communicator end up inventing the cell phone. Yet technological horizons can just as easily be atrophied as energized by science-fiction. This is the point Robinson Meyer makes in his blog post “Google Wants to Make ‘Science Fiction’ a Reality—and That’s Limiting Their Imagination”.

Meyer looks at the infamous Google X, a supposedly risky research arm of Google, one of whose criteria for embarking on a project is that it must utilize “a radical solution that has at least a component that resembles science fiction.”

The problem Meyer finds with this is:

 ….“science fiction” provides but a tiny porthole onto the vast strangeness of the future. When we imagine a “science fiction”-like future, I think we tend to picture completed worlds, flying cars, the shiny, floating towers of midcentury dreams.

We tend, in other words, to imagine future technological systems as readymade, holistic products that people will choose to adopt, rather than as the assembled work of countless different actors, which they’ve always really been.

Meyer mentions a recent talk by a futurist I hadn’t heard of before, Scott Smith, who calls the kinds of clean futuristic visions of total convenience, antiseptic worlds surrounded by smart glass where all the world’s knowledge is at our fingertips and everyone is productive and happy, “flat-pack futures”: the world of tomorrow ready to be brought home and assembled like a piece of furniture from the IKEA store.

I’ve always had trouble with these kinds of simplistic futures, which are really just corporate advertising devices, not real visions of a complex future, a future that will inevitably have much ugliness in it, including the ugliness that emerges from the very technology that is supposed to make our lives a paradise. The major technological players are not even offering alternative versions of the future, just the same bland future with different corporate labels, as seen here, here, and here.

It’s not that these visions of the future are always totally wrong, or even just unattractive, so much as the fact that people in the present, meaning us, have no real agency over which elements of these visions they want and which they don’t, with the exception of exercising consumer choice, the very thing these flat-pack visions of the future are trying to get us to do.

Part of the reason Scott sees a mismatch between these anodyne versions of the future, which we’ve had since at least the early 20th century, and what actually happens in the future is that these corporate versions of the future lack diversity and context, not to mention conflict. This shouldn’t really surprise us; they are advertisements, after all, geared to high-end consumers, not actual predictions of what the future will in fact look like for poor people or outcasts or non-conformists of one sort or another. One shouldn’t expect advertisers to show the bad that might happen should a technology fail or be hacked or used for dark purposes.

If you watch enough of them, their more disturbing assumption is that by throwing a net of information over everything we will finally have control, and all the world will fall under the warm blanket of our comfort and convenience. Or as Alexis Madrigal put it in a thought-provoking recent piece, also at The Atlantic:

We’re creating a world that seamlessly, effortlessly shapes itself to human desire. It’s no longer cutting through a mountain to prove we dominate nature; now, it’s satisfying each impulse in the physical world with the ease and speed of digital tools. The shortest way to explain what Uber means: hit a button and something happens in the world (that makes life easier for you).

To return to Meyer, his point was that by looking almost exclusively to science-fiction to see what the future “should” look like, designers, technologists, and by implication some futurists, had reduced themselves to a very limited palette. How might this palette be expanded? I would throw my bet behind turning to archaeologists, anthropologists, and certain kinds of historians.

As Steve Jobs understood, in some ways the design of a technology is as important as the technology itself. But Jobs sought almost formlessness, the clean, antiseptic modernist zen you feel when you walk into an Apple Store. As someone once said (the attribution escapes me), an Apple Store is designed like a temple. But it’s a temple representative of only one very particular form of religion.

All the flat-pack versions of the future I linked to earlier have one prominent feature in common: they are obsessed with glass. Everything in the world, it seems, will be a see-through touchscreen bubbling with the most relevant information for the individual at that moment, the weather, traffic alerts, box scores, your child’s homework assignment. I have a suspicion that this glass fetish is a sort of Freudian slip on the part of tech companies, for the thing about glass is that you can see both ways through it, and what these companies most want is the transparency of you, to be able to peer into the minutiae of your life, so as to be able to sell you stuff.

Technology designers have largely swallowed the modernist glass-covered Kool-Aid, but one way to purge might be to turn to archaeology. For example, I’ve never found a tool more beautiful than the astrolabe, though it might be difficult to carry a smartphone in the form of one in your pocket. Instead of all that glass, why not a Japanese paper house on whose walls all our supposedly important information could constantly appear? Whiteboards are better for writing anyway. Ransacking the past for the forms in which we could embed our technologies might be one way to escape the current design conformity.

There are reasons to hope that technology itself will help us break out of our futuristic design conformity. Ubiquitous 3D printing may allow us to design and create our own household art, tools, and customized technology devices. Websites my girls and I often look to for artistic inspiration and projects, such as the Met’s 82nd & Fifth, may, as the images get better, eventually becoming 3D and even providing CAD specifications, allow us access to a great many of the world’s objects from the past, providing design templates for ways to embed our technology that reflect our distinct personal and cultural values, values that might have nothing to do with those whipped up by Silicon Valley.

Of course, we’ve been ransacking the past for ideas about how to live in the present and future since modernity began. Augustus Pugin’s 1836 book Contrasts, with its beautiful, if slightly dishonest, drawings, put the 19th century’s bland, utilitarian architecture up against the exquisite beauty of buildings from the 15th century. How cool would it be to have a book like Pugin’s contrasting technology today with that of the past?

The problem with Pugin’s and others’ efforts in this regard was that they didn’t merely look to the past for inspiration or forms but sought to replace the present and its assumptions with the revival of whole eras, assuming that a kind of cultural and historical rewind button or time machine was possible, when in reality the past was forever gone.

Looking to the past, as opposed to trying to raise it from the dead, has other advantages when it comes to our ability to peer into and shape the future. It gives us different ways of seeing how our technology might be used as a means toward the things that have always mattered most to human beings: our relationships, art, spirituality, curiosity. Take, for instance, the way social media such as Skype and Facebook are changing how we relate to death. On the one hand, social media seems to break down the way in which we have traditionally dealt with death, as a community sharing a common space; on the other, it seems to revive a kind of analog to practices regarding death that have long passed from the scene, practices that archaeologists and social historians know well, where the dead are brought into the home and tended to for an extended period of time. Now though, instead of actual bodies, we see the online legacy of the dead, websites, Facebook pages and the like, being preserved and attended to by loved ones. A good question to ask oneself when thinking about how future technology will be used might be not what new thing it will make possible, but what old thing no longer done it might be used to revive.

In other words, the design of technology is the reflection of a worldview, but this is only one very limited worldview competing with an innumerable number of others. Without fail, for good and ill, technology will be hacked and used as a means by people whose worldviews have little if any relationship to that of technologists. What would be best is if the design itself, not the electronic innards, but the shell and the organization of its use, were controlled by non-technologists in the first place.

The best of science-fiction actually understands this. It’s not the gadgets that are compelling, but the alternative world: how it fits together, where it is going, and what it says about our own world. That’s what books like Asimov’s Foundation Trilogy did, or Frank Herbert’s Dune. Neither was a prediction of the future so much as a fictional archaeology and history. It’s not the humans who are interesting in a series like Star Trek, but the non-humans, who each live in their own very distinct version of a life-world.

Those science-fiction writers not interested in creating alternative worlds, or in taking a completely imaginary shot in the dark at what life might be like many hundreds of years beyond our own, might benefit from reflecting on what they are doing and on why they should lean heavily on knowledge of the past.

To the extent that science-fiction tells us anything about the future, it is through a process of focused extrapolation. As the sci-fi author Paolo Bacigalupi has said (I’m quoting from an earlier piece of mine), the role of science-fiction is to take some set of elements of the present and extrapolate them to see where they lead. Or, as William Gibson said, “The future is already here; it’s just not evenly distributed.” In his novel The Windup Girl, Bacigalupi extrapolated cutthroat corporate competition, the use of mercenaries, genetic engineering, the way biotechnology and agribusiness seem to be trying to take ownership over food, and our continued failure to tackle carbon emissions, and came up with a chilling dystopian world. Amazingly, Bacigalupi, an American, managed to embed this story in a Buddhist cultural and Thai historical context.

Although I have not had the opportunity to read it yet (it is now at the top of my end-of-summer reading list), The History of the Future in 100 Objects, by Adrian Hon, a former neuroscientist and founder of the gaming company Six to Start, seems to grasp precisely this. As Hon explained in an excellent talk over at the Long Now Foundation (I believe I’ve listened to it in its entirety three times!), he was inspired by Neil MacGregor’s A History of the World in 100 Objects (also on my reading list). What Hon realized was that the near future, at least, would still have many of the things we are familiar with, things like religion or fashion, and so what he did was extrapolate his understanding of technology trends, gained from both his neuroscience and gaming backgrounds, onto those things.

Science-fiction writers, and their kissing-cousins the futurists, need to remind themselves when writing about the next few centuries hence that, as long as there are human beings (and otherwise what are they writing about?), there will still be much about us that we’ve had since time immemorial. Little doubt, there will still be death, but also religion. And not just any religion, but the ones we have now: Christianity and Islam, Hinduism and Buddhism, etc. (Can anyone think of a science-fiction story where they still celebrate Christmas?) There will still be poverty, crime, and war. There will also be birth and laughter and love. Culture will still matter, and sub-culture just as much, as will geography. For at least the next few centuries, let us hope, we will still be human, so perhaps I shouldn’t have used archaeologists in my title, but anthropologists or even psychologists, for those who can best understand the near future are those who best understand the mind and heart of mankind rather than the current, and inevitably unpredictable, trajectory of our science and technology.

 

The First Machine War and the Lessons of Mortality

Lincoln Motor Co., Detroit, Michigan, ca. 1918. U.S. Army Signal Corps, Library of Congress.

I just finished a thrilling little book about the first machine war. The author writes of a war set off by a terrorist attack, in which the very speed with which machines could be put into action, and the near light speed of telecommunications whipping up public opinion to “do something now,” drive countries into a world war.

In his vision whole new theaters of war, amounting to fourth and fifth dimensions, have been invented. Amid a storm of steel, huge hulking machines roam across the landscape and literally shred the human beings in their path to pieces. Low-flying avions fill the sky, taking out individual targets or helping calibrate precision attacks from incredible distances. Wireless communications connect soldiers and machines together in a kind of world-net.

But the most frightening aspect of the new war is weaponry based on plant biology. Such weapons, if they do not immediately scar the faces and infect the bodies of those who have been targeted, lodge themselves in the soil like spores, waiting to be released, to kill and maim when conditions are ripe: the ultimate terrorist weapon.

Amid all this the author searches for what human meaning might remain in a world where men are caught between warring machines. In the end he comes to understand himself as a mere cog in a great machine, a metallic artifice that echoes and rides the rhythms of biological nature, including his own.

__________________________________

A century and a week ago today, humanity began its first war of machines (July 28, 1914). There had been forerunners, such as the American Civil War and the Franco-Prussian War in the 19th century, but nothing before had exposed the full features and horrors of what war between mechanized and industrial societies would actually look like until it came, quite unexpectedly, in the form of World War I.

Those who wish to argue that the speed of technological development is exaggerated need only look at the First World War, where almost all of the weapons we now use in combat were either first used or first used to full effect: the radio, the submarine, the airplane, the tank and other vehicles using the internal combustion engine. Machine guns were let loose in new and devastating ways, as was long-range artillery.

Although again there were forerunners, the first biological and chemical weapons saw their true debut in WWI. The Germans tried to infect the city of St. Petersburg with a strain of the plague, but the most widely used WMDs were chemical weapons, some of them derived from the work on the nitrogen cycle of plants, gases such as chlorine and mustard gas, which maimed more than they killed, and sometimes sat in the soil ready to emerge like poisonous mushrooms when weather conditions permitted.

Indeed, the few other weapons in our 21st-century arsenal that can’t be found in the First World War, such as the jet, the rocket, the atomic bomb, radar, and even the computer, would make their debut only a few decades after the end of that war, during what most historians consider its second half: World War II.

What is called the Great War began, as our 9-11 wars began, with a terrorist attack. Archduke Franz Ferdinand of Austria-Hungary was assassinated by the ultimate nobody, a Serbian nationalist not much older than a boy, Gavrilo Princip, whose purely accidental success (he was only able to take his shot because the car the Archduke was riding in had taken a wrong turn) ended up being the catalyst for the most deadly war in human history up until that point, a conflict that would unleash nearly a century of darkness and mortal danger upon the world.

For the first time it would be a war experienced by people thousands of miles from the battlefield in almost real time, via the relatively new miracle of the radio. This was only part of the lightning-fast feedback loop that launched and sped European civilization from a minor political assassination to total war. As I recall from Stephen Kern’s 1983 The Culture of Time and Space 1880-1914, political caution and prudence found themselves crushed in the vice of compressed space and time, forced to make policy decisions that aligned not with the needs of human beings and their slow, messy, deliberative politics, but with the pace of machines. Once the decision to mobilize was made it was almost impossible to stop without subjecting the nation to extreme vulnerability; once a jingoistic public had been whipped up to demand revenge and action via the new mass media, it was again nearly impossible to silence and temper.

The First World War is perhaps the quintessential example of what Nassim Taleb called a “Black Swan”: an event whose nature failed to be foreseen and whose effect ended up changing the future radically from the shape it had previously been projected to have. Taleb defines a Black Swan this way:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Secondly it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. (xxii)

Taleb has something to teach futurists, whom he suggests might start looking for a different day job. For what he is saying is that it is the events we do not and cannot predict that will have the greatest influence on the future, and on account of this blindness the future is essentially unknowable. The best we can do, in his estimation, is to build resilience, robustness and redundancy, which it is hoped might allow us to survive, or even gain from, multiple forms of unpredictable crisis, to become, in his terms, “anti-fragile”.

No one seems to think another world war is possible today, which might itself give us reason for worry. We do have more circuit breakers in place, which might allow us to dampen a surge in the direction of war between the big states in the face of a dangerous event, such as Japan downing a Chinese plane in their dispute over Pacific islands, but many more and stronger ones need to be created to keep such frictions from spinning out of control.

States continue to prepare for limited conventional wars against one another. China practices and plans to retake disputed islands, including Taiwan, by force, and to push the U.S. Navy deeper into the Pacific, while the U.S. and Japan practice retaking islands from the Chinese. We do this without recognizing that we need to do everything possible to prevent such potential clashes in the first place, because once they begin we have no idea where or how they will end. As in financial crises, the further in time we get from the last great crisis, the more likely we are to have fallen into a dangerous form of complacency, though the threat of nuclear destruction may act as an ultimate brake.

The book I began this essay with is, of course, not some meditation on 21st- or even 22nd-century war, but the First World War itself. Ernst Jünger’s Storm of Steel is perhaps the best book ever written on the Great War, and arguably one of the best books written on the subject of war, period.

It is Jünger’s incredible powers of observation and his desire for reflection that give the book such force. There is a scene that will forever stick with me, where Jünger is walking across the front lines and sees a man sitting there in seemingly perfect stoicism as the war rages around him. It’s only when he looks closer that Jünger realizes the man is dead and has no eyes in his sockets: they have been blown out by an explosion from behind.

Unlike another great book on the First World War, Erich Maria Remarque’s All Quiet on the Western Front, Storm of Steel is not a pacifist book. Jünger is a soldier who sees the war as a personal quest and part of his duty as a man. His bravery is undeniable, but he does not question the justice or rationality of the war itself, a fact that would later allow Jünger’s war memoir to be used as a propaganda tool by the Nazis.

Storm of Steel has about it something of the view of war found in the ancients- that it was sent by the gods and there was nothing that could be done about it but to fulfill one’s duty within it. In the Hindu holy book the Bhagavad Gita the prince Arjuna is filled with doubt over the moral righteousness of the battle he is about to engage in- to which the god Krishna responds:

 …you Arjuna, are only a mortal appointee to carry out my divine will, since the Kauravas are destined to die either way, due to their heap of sins. Open your eyes O Bhaarata and know that I encompass the Karta, Karma and Kriya, all in myself. There is no scope for contemplation now or remorse later, it is indeed time for war and the world will remember your might and immense powers for time to come. So rise O Arjuna!, tighten up your Gandiva and let all directions shiver till their farthest horizons, by the reverberation of its string.

Jünger lived in a world that had begun to abandon the gods, or rather had adopted new materialist versions of them, whether the force of history or of evolution, stories in which Jünger, like Arjuna, comes to see himself as playing a small part.

 The incredible massing of forces in the hour of destiny, to fight for a distant future, and the violence it so surprisingly, suddenly unleashed, had taken me for the first time into the depths of something that was more than mere personal experience. That was what distinguished it from what I had been through before; it was an initiation that not only had opened red-hot chambers of dread but had also led me through them. (255-256)

Jünger is not a German propagandist. He seems blithely unaware of the reasons his Kaiser’s government gave for why the war was being fought. The dehumanization of the other side, a main part of the war propaganda toward the end of the conflict, and on both sides, does not touch him. He is a mere soldier whose bravery comes from the recognition of his own ultimate mortality, just as the mortality of everyone else allows him to kill without malice, as a mere matter of the simple duty of a combatant in war.

Because his memoir of the conflict is so authentic, so without bias or apparent political aims, he ends up conveying truths about war that are difficult for civilians to understand, a difficulty found not only in pacifists but also in nationalist war-mongers with no experience of actual combat.

If we open ourselves to the deepest meditations of those who have actually experienced war, what we find is that combat seems to bring the existential reality of the human condition out from its normal occlusion by the tedium of everyday living. To live in the midst of war is a screaming reminder that we are mortal and our lives ultimately very short. In war it is very clear that we are playing a game of chance against death, which is merely the flip side of the statistical unlikelihood of our very existence, as if our one true God were indeed chance itself. Like any form of gambling, victory against death itself becomes addictive.

War makes it painfully clear to those who fight in it that we are hanging on just barely to this thread, this thin mortal coil, where our only hope of survival for a time is to hang on tightly to those closest to us: war’s famed brotherhood in arms. These experiences, rather than childish flag-waving notions of nationalism, are likely the primary source of what those who have experience only of peace often find unfathomable, that soldiers from the front often eagerly return to battle. It is this shared experience among those who have seen combat that often leads soldiers to find more in common with the enemies they have engaged than with their fellow citizens back home who have never been to war.

The essential lessons of Storm of Steel are really spiritual answers to the question of combat. Jünger’s war experience leads him to something like Buddhist non-attachment, both to himself and to the futility of bird’s-eye-view justifications of the conflict.

The nights brought heavy bombardment like swift, devastating summer thunderstorms. I would lie on my bunk on a mattress of fresh grass, and listen, with a strange and quite unjustified feeling of security, to the explosions all around that sent the sand trickling out of the walls.

At such moments, there crept over me a mood I hadn’t known before. A profound reorientation, a reaction to so much time spent so intensely, on the edge. The seasons followed one another, it was winter and then it was summer again, but it was still war. I felt I had got tired, and used to the aspect of war, but it was from familiarity that I observed what was in front of me in a new and subdued light. Things were less dazzlingly distinct. And I felt the purpose with which I had gone out to fight had been used up and no longer held. The war posed new, deeper puzzles. It was a strange time altogether. (260)

In another scene Jünger comes upon a lone enemy officer while by himself on patrol.

 I saw him jump as I approached, and stare at me with gaping eyes, while I, with my face behind my pistol, stalked up to him slowly and coldly. A bloody scene with no witnesses was about to happen. It was a relief to me, finally, to have the foe in front of me and within reach. I set the mouth of my pistol at the man’s temple- he was too frightened to move- while my other fist grabbed hold of his tunic…

With a plaintive sound, he reached into his pocket, not to pull out a weapon, but a photograph which he held up to me. I saw him on it, surrounded by numerous family, all standing on a terrace.

It was a plea from another world. Later, I thought it was blind chance that I had let him go and plunged onward. That one man of all often appeared in my dreams. I hope that meant he got to see his homeland again. (234)

The unfortunate thing about the future of war is not that human beings seem likely to play an ever-diminishing role as fighters in it, as warfare undergoes the same process of automation that has resulted in so few of us now growing our food or producing our goods. Rather, it is that wars will continue to be fought, and human beings, which will come to mean almost solely non-combatants, will continue to die in them.

The lessons Jünger took from war are not so much the product of war itself as of intense reflection on his own and others’ mortality. They are the same type of understanding and depth often seen in those who endure long periods of dying, the terminally ill, who die not in the swift “thief in the night” mode of accidents or bodily failure, but slip from the world with enough time, and while retaining the capacity, to reflect. Even the very young who are terminally ill often speak of a diminishing sense of their own importance, a need to hang onto the moment, a drive to live life to the full, and a longing to treat others in a spirit of charity and mercy.

Even should the next “great war” be fought almost entirely by machines, we can retain these lessons as a culture as long as we give our thoughts over to what it means to be a finite creature with an ending, and we will have the opportunity to experience them personally as long as we are mortal; and given the impossibility of any form of eternity, no matter how far we extend our lifespans, mortal we always will be.

 

 

Sherlock Holmes as Cyborg and the Future of Retail

Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st-century world. The thing I really enjoy about the show is that it’s the first time I can recall anyone managing to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.

Part of the genius of the series is that the characters around Sherlock, especially Watson, are constantly trying to press upon him the fact that he is indeed human, kind of in the same way Bones is the emotional foil to Spock.

Sherlock uses an ingenious device to display Holmes’ famous powers of deduction. When Sherlock is focusing his attention on a character, words will float around them displaying some relevant piece of information, say, the price of a character’s shoes, what kind of razor they used, or what they did the night before. The first couple of times I watched the series I had the eerie feeling that I’d seen this device before, but I couldn’t put my finger on it. And then it hit me: I’d seen it in a piece of design fiction called Sight that I’d written about a while back.

In Sight the male character is equipped with contact lenses that act as an advanced form of Google Glass. This allows him to surreptitiously access information such as the social profile and real-time emotional reactions of a woman he is out on a date with. The whole thing is downright creepy and appears to end with the woman’s rape and perhaps even her murder; the viewer is left to guess the outcome.

It’s not only the style of heads-up display containing intimate personal detail that put me in mind of the BBC’s Sherlock, where the hero has these cyborg capabilities built into his very nature rather than acquired through technology; it’s also the idea that there is a sea of potentially useful information just sitting there on someone’s face.

In the future, it seems, anyone who wants to will have Sherlock Holmes-type deductive powers, but that got me thinking: who in the world would want to? I mean, it’s not as if we aren’t already bombarded with streams of useless data we are unable to process. Access to Sight-level amounts of information about everyone we came into contact with and didn’t personally know would squeeze our mental bandwidth down to dial-up speed.

I think the not-knowing part is important, because you’d really only want a narrow stream of new information about a person you were already well acquainted with. Something like the information people now put on their Facebook wall. You know, it’s so-and-so’s niece’s damned birthday, and your monster-truck driving, barrel-necked cousin Tony just bagged something that looks like a mastodon on his hunting trip to North Dakota.

Certainly the ability to scan a person like a QR code would come in handy for police or spooks, and will likely be used by even more sophisticated criminals and creeps than the ones we have now. These groups work in a very narrow gap with people they don’t know all the time. There’s one other large group, besides some medical professionals, that works in this same narrow gap, for whom it would regularly be a benefit, and not an inefficiency, to have the maximum amount of information available about the individual standing in front of them: salespeople.

Think about a high-end retailer such as a jeweler, or perhaps more commonly an electronics outlet such as the Apple Store. It would certainly benefit a salesperson to be able to instantly gather details about what an unknown person who walked into the store did for a living, their marital status, number of family members, and even their criminal record. There is also the matter of making the sale itself, and here the kinds of feedback data seen in Sight would come in handy. Such feedback data is already possible across multiple technologies and only needs to be combined. All one would need is a name, or perhaps even just a picture.

Imagine it this way: you walk into a store to potentially purchase a high-end product. The salesperson wears the equivalent of Google Glass. They ask for your name and you give it to them. The salesperson is able to, without you ever knowing, gather up everything publicly available about you on the web, after which they can buy your profile, purchasing, and browser history, again surreptitiously, perhaps by just blinking, from a big-data company like Acxiom, and tailor their pitch to you. This is similar to what happens now when you are solicited through targeted ads while browsing the Internet, and perhaps the real future of such targeted advertising, as the success of Facebook in mobile shows, lies in the physical rather than the virtual world.
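To make that flow concrete, here is a minimal sketch of the lookup-and-pitch loop just described. Everything in it is hypothetical: the broker object, its methods, and the field names are invented for illustration, and stand in for whatever a real data broker might actually expose.

```python
# Hypothetical sketch of the salesperson's heads-up enrichment flow.
# broker.search_public_web() and broker.buy_profile() are invented
# placeholders, not the API of Acxiom or any real data broker.
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    name: str
    public_web: dict = field(default_factory=dict)     # freely scraped public info
    purchase_history: list = field(default_factory=list)
    browser_history: list = field(default_factory=list)

def enrich(name: str, broker) -> CustomerProfile:
    """Assemble a profile the moment a customer gives their name."""
    profile = CustomerProfile(name=name)
    profile.public_web = broker.search_public_web(name)   # surreptitious, free
    paid = broker.buy_profile(name)                       # surreptitious, purchased
    profile.purchase_history = paid.get("purchases", [])
    profile.browser_history = paid.get("browsing", [])
    return profile

def tailor_pitch(profile: CustomerProfile) -> str:
    """Pick the upsell most likely to land, given what the customer already owns."""
    owned = set(profile.purchase_history)
    if "phone" in owned and "earbuds" not in owned:
        return "Pitch the earbuds that pair with their phone."
    return "Open with a general accessories pitch."
```

The unsettling part is how little glue is needed: the customer supplies one key (a name or a face), and everything else is a lookup.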

In the Apple Store example, the salesperson would be able to know what products you owned and your usage patterns, perhaps getting some of this information directly from your phone, and would therefore be able to pitch you the accessories and upgrades most likely to make a sale.

The next layer, reading your emotional reactions, is a little trickier, but again much of the technology already exists or is in development. We’ve all heard those annoying messages when dealing with customer service over the phone that bleat at us, “This call may be monitored….” One might think this recording is done as insurance against lawsuits, and that is certainly one of the reasons. But, as Christopher Steiner explains in his Automate This, another major reason for these recordings is to refine algorithms that help customer service representatives filter customers by emotional type and interact with them according to such types.
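As a toy illustration of the idea, and emphatically not Steiner’s or any vendor’s actual algorithm, even crude keyword scoring over a call transcript can sort callers into rough emotional types:

```python
# A deliberately crude emotional-type filter: count keyword hits per type.
# Real systems use far richer acoustic and linguistic features than this.
EMOTION_LEXICON = {
    "angry":   ("unacceptable", "ridiculous", "furious", "cancel"),
    "anxious": ("worried", "confused", "urgent", "deadline"),
    "calm":    ("thanks", "whenever", "no problem", "fine"),
}

def classify_caller(transcript: str) -> str:
    """Return the emotional type whose keywords appear most often."""
    text = transcript.lower()
    scores = {label: sum(text.count(word) for word in words)
              for label, words in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(classify_caller("This is ridiculous, I want to cancel right now"))  # -> angry
```

A representative, or an automated routing system, could then pull up whichever script had historically worked best on that type of caller.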

These types of algorithms will only get better, and work on social robots, used largely for medical care and emotional therapy, is moving the rapid-fire algorithmic gauging of and response to human emotions from the audio to the visual realm.

If this use of Sherlock Holmes-type powers by those with a power or wealth asymmetry over you makes you uncomfortable, I’m right there with you. But when it gets you down you might try laughing about it. One thing we need to keep in mind, both for sanity’s sake and so that we can more accurately gauge how people in the future might preserve elements of their humanity in the face of technological capacities we today find new and often alien, is that it would have to be a very dark future indeed for there not to be a lot to laugh at in it.

Writers of utopia can be deliberately funny; Thomas More’s Utopia is meant to crack the reader up. Dystopian visions, whether of the fictional or nonfictional sort, avoid humor for a reason. The whole point of their work is to get us to avoid such a future in the first place, not, as is the role of humor, to make almost unlivable situations more human, or to undermine power by making it ridiculous, to point out the emperor has no clothes, and to defeat the devil by laughing at him. Dystopias are a laughless affair.

Just like in Sherlock it’s a tough trick to pull off, but human beings of the future, as long as there are actually still human beings, will still find the world funny. In terms of technology used as a tool of power, the powerless, as they always have, are likely to get their kicks subverting it, and twisting it to the point of breaking underlying assumptions.

Laughs will be had at epic fails and the sheer ridiculousness of control freaks trying to squish an unruly, messy world into frozen and pristine lines of code. Life in the future will still sometimes feel like a sitcom, even if the sitcoms of that time are pumped directly into our brains through nano-scale neural implants.

Utopias and dystopias emerging from technology are two sides of a crystal-clear future which, because of its very clarity, cannot come to pass. What makes ethical judgment of the technologies discussed above, indeed of all technology, difficult is their damned ambiguity, an ambiguity that largely stems from dual use.

Persons suffering from autism really would benefit from a technology that allowed them to accurately gauge, and guide their responses to, the emotional cues of others. An EMT really would be empowered if at the scene of a bad accident they could instantly access such a stream of information, all from getting your name or even just looking at your face, especially when such data contains relevant medical information, as would a person working in child protective services or the myriad forms of counseling.

Without doubt, such technology will be used by stalkers and creeps, but it might also be used to help restore trust and emotional rapport to a couple headed for divorce.

I think Sherry Turkle is essentially right, that the more we turn to technology to meet our emotional needs, the less we turn to each other. Still, the real issue isn’t technology itself, but how we are choosing to use it, and that’s because technology by itself is devoid of any morality and meaning, even if it is a cliché to say it. Using technology to create a more emotionally supportive and connected world is a good thing.

As Louise Aronson said in a recent article for The New York Times on social robots to care for the elderly and disabled:

But the biggest argument for robot caregivers is that we need them. We do not have anywhere near enough human caregivers for the growing number of older Americans. Robots could help solve this work-force crisis by strategically supplementing human care. Equally important, robots could decrease high rates of neglect and abuse of older adults by assisting overwhelmed human caregivers and replacing those who are guilty of intentional negligence or mistreatment.

Our Sisyphean condition is that any gain in our capacity to do good seems also to increase our capacity to do ill. The key, I think, lies in finding ways to contain and control the ill effects. I wouldn’t mind our versions of Sherlock Holmes using the cyborg-like powers we are creating, but I think we should be more than a little careful they don’t also fall into the hands of our world’s far too many Moriartys, though no matter how devilish these characters might be, we will still be able to mock them as buffoons.

Plato and the Physicist: A Multicosmic Love Story

Life as a Braid of Space Time

 

So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it, namely, how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes, it also threw up a whole host of new questions. This beautifully written and thought-provoking book made me wonder about the future of science and the scientific method, the limits of human knowledge, and the scientific, philosophical and moral meaning of the various ideas of the multiverse.

I should start, though, with my initial question of how Tegmark manages to fit something very much like Plato’s Theory of the Forms into the seemingly chaotic landscape of multiverse theories. If you remember back to your college philosophy classes, you might recall something of Plato’s idea of forms, which at its most basic boils down to this: Plato thought there was a world of perfect, eternally existing ideas of which our own supposedly real world was little more than a shadow. The idea sounds out there until you realize that Plato was thinking like a mathematician. We should remember that over the entrance to Plato’s Academy was written “Let no man ignorant of geometry enter here”, and for the Greeks geometry was the essence of mathematics. Plato aimed to create a school of philosophical mathematicians much more than he hoped to turn philosophers into a sect of moral geometers.

Probably almost all mathematicians and physicists hold to some version of platonism, meaning they think mathematical structures are discovered rather than being a form of language invented by human beings. Non-mathematicians, myself very much included, often have trouble understanding this, but a simple example from Plato himself might help clarify.

When the Greeks played around with shapes for long enough they discovered things. And here we really should say discover, because they had no idea shapes had these properties until they stumbled upon them through play. Plato’s dialogue Meno gave us the most famous demonstration of the discovery, rather than invention, of mathematical structures. Socrates asks a “slave boy” (we should take this to be the modern-day equivalent of the man off the street) to construct a square with double the area of a square whose sides have length 2. The key, as Socrates leads the boy to see, is the diagonal of the original square: the square built on that diagonal, the hypotenuse of a right triangle formed by two of the original sides, has exactly double the area. The slave boy explains his measurement epiphany as the “recovery of knowledge from a past life.”
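For the algebraically minded, here is the boy’s discovery in modern notation (the notation is mine, of course; Socrates works it out with nothing but figures drawn in the sand):

```latex
\begin{align*}
A_{\text{original}} &= 2 \times 2 = 4\\
A_{\text{doubled}}  &= 2 \times 4 = 8\\
d^{2} &= 2^{2} + 2^{2} = 8 \quad \text{(Pythagoras, for the diagonal } d\text{)}\\
A_{\text{square on } d} &= d^{2} = 8 = A_{\text{doubled}}
\end{align*}
```

The doubled square simply is the square on the diagonal, a fact that was sitting there waiting to be noticed, which is the whole platonist point.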

The big gap between Plato and modern platonists is that the ancient philosopher thought the natural world was a flawed copy of the crystalline purity of the mathematics of thought. Contrast that with Newton, who saw the hand of God himself in nature’s calculable regularities. The deeper the scientists of the modern age probed with their new mathematical tools, the more nature appeared, as Galileo said, “a book written in the language of mathematics”. For the moderns, mathematical structures and natural structures became almost one and the same. The Spanish filmmaker and graphic designer Cristóbal Vila has a beautiful short film over at AEON reflecting precisely this view.

It’s that “almost” that Tegmark has leapt over with his Mathematical Universe Hypothesis (MUH). The essence of the MUH is not only that mathematical structures have an independent existence, or that nature is a book written in mathematics, but that nature is a mathematical structure, and just as all mathematical structures exist independent of whether we have discovered them or not, all logically coherent universes exist whether or not we have discovered their structures. This is platonism with a capital P, the latter half explaining how the MUH intersects with the idea of the multiverse.

One of the beneficial things Tegmark does in his book is to provide an easy-to-understand set of levels for the different ideas that there is more than one universe.

Level I: Beyond our cosmological horizon

A Level I multiverse is the easiest for me to understand. Within the lifetime of people still alive, our universe was held to be no bigger than our galaxy. Before that, people thought the entirety of what exists consisted of nothing but our solar system, so it is no wonder they thought humanity was the center of creation’s story. As of right now the observable universe is around 46 billion light years in radius, a distance, in light years, far greater than the universe’s age in years, thanks to cosmic expansion. Yet why should we think this observable horizon constitutes everything, when such an assumption has never proved true in the past? The Level I multiverse holds that there are entire other universes outside the limit of what we can observe.

Level II: Universes with different physical constants

The Level II multiverse again makes intuitive sense to me. If one assumes that the Big Bang was not the first or the last of its kind, and if one assumes there are whole other universes, potentially an infinite number of them, why assume that ours is the only way a universe could be organized? Indeed, having a variety of physical constants to choose from would make the fine-tuning of our own universe easier to explain.

Level III: Many-worlds interpretation of quantum mechanics

This is where I start to get lost, or at least this particular doppelganger of me starts to get lost. Here we find Hugh Everett’s interpretation of quantum unpredictability. Rather than Schrodinger’s Cat being pushed from a superposition of states between alive and dead when you open the box, exposing the feline causes the universe to split: in one universe you have a live cat, and in another a dead one. It makes me dizzy just thinking about it; just imagine the poor cat. Wait, I am the cat!

Level IV: Ultimate ensemble

Here we have Tegmark’s model itself, where every universe that can be represented as a logically consistent mathematical structure is said to actually exist. In such a multiverse, when you roll a six-sided die there end up being six universes, one corresponding to each possible outcome, but there is no universe where you have rolled a “1 not 1”, and so on. If a universe’s mathematical structure can be described, then that universe can be said to exist, there being, in Tegmark’s view, no difference between such a mathematical structure and a universe.

I had previously thought the idea of the multiverse was a way to give scale to the shadow of our ignorance and expand our horizon in space and time. As mentioned, we had once thought all that is was only as big as our solar system and merely thousands of years old. By the 19th century the universe had expanded to the size of our galaxy and the past had grown to as much as 400 million years. By the end of the 20th century we knew there were at least 100 billion galaxies in the universe and that its age was 13.7 billion years. There is no reason to believe we have grasped the full totality of existence, that the universe beyond our observable horizon isn’t even bigger, and the past deeper. As someone once cleverly said, though I cannot find the attribution, there is “no sign on the Big Bang saying ‘this happened only once’”.

Ideas of the multiverse also seemed to explain the odd fact that the universe appears fine-tuned to provide the conditions for life, Martin Rees’s “six numbers”, such as epsilon (ε), the strength of the force binding nucleons into nuclei. If you have a large enough sample of universes, then the fact that some universes are friendly to life starts to make more sense. The problem, I think, comes in when you realize just how large this sample size has to be to get you to fine-tuning: somewhere on the order of 10^200. What this means is that you’ve proposed the existence of a very, very large or even infinite number of universes, which as far as we know are unobservable, to explain essentially six values. If this is science, it is radically different from the science we’ve known since Galileo dropped cannonballs off the Leaning Tower of Pisa.
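To see why the sample has to be so absurdly large, suppose, purely for illustration, that each of the six numbers independently had some small chance p_i of falling in its life-permitting range. In an ensemble of M universes the expected number of life-friendly ones would then be:

```latex
\begin{align*}
\mathbb{E}[\text{life-friendly universes}] &= M \prod_{i=1}^{6} p_{i},\\
\text{so one needs}\quad M &\gtrsim \Bigl(\prod_{i=1}^{6} p_{i}\Bigr)^{-1}.
\end{align*}
```

If the combined probability works out to one in 10^200, the ensemble must contain on the order of 10^200 universes before the appearance of even one like ours becomes unremarkable.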

For whatever reason, rather than solidify my belief in the possibility of the multiverse, or convert me to platonism, Tegmark’s book left me with a whole host of new questions, which is what good books do. The problem is my damned doppelgangers, who can be found not only at the crazy quantum Level III, but at the levels I had thought were a preserve of Copernican mediocrity, Levels I and II, or as Tegmark says:

The only difference between Level I and Level III is where your doppelgängers reside.

Yet to my non-physicist eyes the different levels of multiverse sure seem distinct. Level III seems to violate Copernican mediocrity, with observers and actors able to call into being whole new timelines with even the most minutiae-laden of their choices, whereas Levels I and II simply posit that a universe sufficiently large and sufficiently extended in time would allow for repeat performances down to the smallest detail. Perhaps the universe is just smaller than that, or less extended in time, or there is some sort of kink whereby, when what the late Stephen Jay Gould called the “life tape” is replayed, you can never get the same results twice.

Still, our intuitions about reality have often been proven wrong, so no theory can be discounted on the basis of intuitive doubts alone. There are other reasons, however, why we might use caution when it comes to multiverse theories, namely, their potential risk to the scientific endeavor itself. The fact that we can never directly observe parts of the multiverse that are not our own means that we would have to step away from falsifiability as the criterion of scientific truth. The physicist Sean Carroll argues that falsifiability is a weak criterion: what makes a theory scientific is that it is “direct” (says something definite about how reality works) and “empirical”, by which he no longer means the Popperian notion of falsifiability, but a theory’s ability to explain the world. He writes:

Consider the multiverse.

If the universe we see around us is the only one there is, the vacuum energy is a unique constant of nature, and we are faced with the problem of explaining it. If, on the other hand, we live in a multiverse, the vacuum energy could be completely different in different regions, and an explanation suggests itself immediately: in regions where the vacuum energy is much larger, conditions are inhospitable to the existence of life. There is therefore a selection effect, and we should predict a small value of the vacuum energy. Indeed, using this precise reasoning, Steven Weinberg did predict the value of the vacuum energy, long before the acceleration of the universe was discovered.

We can’t (as far as we know) observe other parts of the multiverse directly. But their existence has a dramatic effect on how we account for the data in the part of the multiverse we do observe.

One could look at Tegmark’s MUH and Carroll’s comments as a broadening of our scientific and imaginative horizons and the continuation of our powers of explanation into realms human beings will never observe. The idea of a 22nd century version of Plato’s Academy using amazingly powerful computers to explore all the potential universes of Tegmark’s MUH is an attractive future to me. Yet, given how reliant we are on science and the technology that grows from it, and given the role of science in establishing the consensus view of what our shared physical reality actually is, we need to be cognizant and careful of what such a changed understanding of science might actually mean.

The physicist George Ellis, for one, thinks the multiverse hypothesis, and not just Tegmark’s version of it, opens the door to all sorts of pseudoscience, such as Intelligent Design. After all, the claim that the laws and structure of our universe can be understood only by reference to something “outside” is the essence of explanations from design as well, and just like the multiverse, it cannot be falsified.

One might think that the multiverse amounts to a victory of theorizing over real-world science, but I think Sean Carroll is essentially right when he defends the multiverse theory by saying:

 Science is not merely armchair theorizing; it’s about explaining the world we see, developing models that fit the data.

It’s the use of the word “model” here rather than “theory” that is telling. For a model is a type of representation of something, whereas a theory constitutes an attempt at a coherent, self-contained explanation. If the move from theories to models were happening only in physics, then we might say it had something to do with physics in particular rather than science in general. But we see this move all over the place.

Among neuroscientists, for example, there is no widely agreed-upon theory of how SSRIs work, even though the drugs have been around for a generation. And there’s more. In a widely debated speech, Noam Chomsky argued that current statistical models in AI were bringing us no closer to the goal of AGI or to the understanding of human intelligence because they lacked any coherent theory of how intelligence works. As Yarden Katz wrote for The Atlantic:

Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.

Likewise, the field of systems biology, and especially genomic science, is built not on theory but on our ability to scan enormous databases of genetic information looking for meaningful correlations. The new field of social physics is based on the idea that correlations of human behavior can be used as governance and management tools, and business already believes that statistical correlation is valuable enough to spend billions on and build an economy around.

Will this work as well as the science we’ve had for the last five centuries? It’s too early to tell, but it certainly constitutes a big change for science and for the rest of us who depend upon it. This shouldn’t be taken as an unqualified defense of theory, for if theory were working we wouldn’t be pursuing this new route of data correlation, whatever the powers of our computers. Yet those who are pushing this new model of science should be aware of its uncertain success, and of its dangers.

The primary danger I can see in these new sorts of science, and this includes the MUH, is that they challenge the role of science in establishing the consensus reality upon which we all must agree. Anyone who remembers their Thomas Kuhn can recall that what makes science distinct from almost any system of knowledge we’ve had before is that it both enforces a consensus view of physical reality, beyond which an individual’s view of the world can be considered “unreal”, and provides a mechanism by which this consensus reality can be challenged and, where the challenge is successful, overturned.

With multiverse theories we are approaching what David Eagleman calls Possibilianism: the exploration of the full range of ways existence could be structured that are compatible with the findings of science and rationally coherent. I find this interesting as a philosophical and even spiritual project, but it isn’t science, at least as we’ve understood science since the beginning of the modern world. Declaring the project to be scientific blurs the line between science and speculation, and might allow people to claim the kind of certainty over uncertainty that makes politics and consensus decisions impossible, whether regarding acute needs of the present, such as global warming, or the projected needs of the future.

Let me try to clarify this. I found it very important that in Our Mathematical Universe Tegmark tried to tackle the problem of existential risks facing the human future. He touches upon everything from climate change, to asteroid impacts, to pandemics, to rogue AI. Yet the very idea that there are multiple versions of us out there, and that our own future is determined, seems to rob these issues of their urgency. In an “infinity” of predetermined worlds we destroy ourselves, just as in an “infinity” of predetermined worlds we do what needs to be done. There is no need to urge us forward because, puppet-like, we are destined to do one thing or the other on this particular timeline.

Morally and emotionally, how is what happens in this version of the universe in the future all that different from what happens in another universe? Persons in those parallel universes, our children, parents, spouses, and even ourselves, are even closer to us than the people of the future on our own timeline. According to the deterministic models of the multiverse, the worlds of these others are outside of our influence, and both the expansion and the contraction of our ethical horizon leave us in the same state of moral paralysis. Given this, I will hold off on believing in the multiverse, at least on the doppelganger scale of Levels I and II, and especially Levels III and IV, until it actually becomes established as a scientific fact, which it is not at the moment, and given our limitations perhaps never will be, even if it is ultimately true.

All that said, I greatly enjoyed Tegmark’s book; it was nothing if not thought-provoking. Nor would I say it left me with little but despair, for in one section he imagined a Spinoza-like version of eternity that will last me a lifetime, or perhaps I should say beyond. I am aware that I will contradict myself here: the image that gripped me was of an individual life seen as a braid of space-time. For Tegmark, human beings have the most complex space-time braids we know of. The idea is vastly oversimplified by the image above.

About which Tegmark explains:

At both ends of your spacetime braid, corresponding to your birth and death, all the threads gradually separate, corresponding to all your particles joining, interacting and finally going their own separate ways. This makes the spacetime structure of your entire life resemble a tree: At the bottom, corresponding to early times, is an elaborate system of roots corresponding to the spacetime trajectories of many particles, which gradually merge into thicker strands and culminate in a single tube-like trunk corresponding to your current body (with a remarkable braid-like pattern inside as we described above). At the top, corresponding to late times, the trunk splits into ever finer branches, corresponding to your particles going their own separate ways once your life is over. In other words, the pattern of life has only a finite extent along the time dimension, with the braid coming apart into frizz at both ends.

Because mathematical structures always exist whether or not anyone has discovered them, our life braid can be said to have always existed and will always exist. I have never been able to wrap my head around the religious idea of eternity, but this eternity I understand. Someday I may even do a post on how the notion of time found in the MUH resembles the medieval idea of eternity as nunc stans, the standing-now, but for now I’ll use it to address more down-to-earth concerns.

My youngest daughter, philosopher that she is, has often asked me “where was I before I was born?” To which my lame response has been “you were an egg”, which for a while made big breakfasts difficult. Now I can just tell her to get out her crayons to scribble, and we’ll color our way to something profound.

 

Why the Castles of Silicon Valley are Built out of Sand

Ambrogio Lorenzetti, Temperance with an hourglass, from the Allegory of Good Government

If you get just old enough, one of the lessons living through history teaches you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from 100 AD into the 1600s. Perhaps more dreams than not simply refuse to die; they hang on like ghosts, or ghouls, zombies or vampires, or whatever freakish version of the undead suits your fancy. Naming them all would take up more room than I have here, and would no doubt start one too many arguments, all of our lists being different. Here I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well the dream I’ll declare dead will have its defenders.

The fact of the matter is, I am not even sure what to call the dream I’ll be talking about. Perhaps digitopia is best. It was the dream that emerged sometime in the 1980s, and went mainstream in the heady 1990s, that this new thing we were creating called the “Internet”, and the economic model it permitted, was bound to lead to a better world of more sharing, more openness, more equity, if we just let its logic play itself out over a long enough period of time. Almost all the big-wigs in Silicon Valley, the Larry Pages and Mark Zuckerbergs, the Jeff Bezoses and Peter Diamandises, still believe this dream, and walk around like 21st century versions of Mary Magdalene claiming they can still see what more skeptical souls believe has passed.

By far, the best Doubting Thomas of digitopia we have out there is Jaron Lanier. In part his power in declaring the dream dead comes from the fact that he was there when the dream was born and was once a true believer. Like Kevin Bacon in Hollywood, take any intellectual heavy hitter of digital culture, say Marvin Minsky, and you’ll find Lanier having some connection. Lanier is no Luddite, so when he says there is something wrong with how we have deployed the technology he in part helped develop, it’s right and good to take the man seriously.

The argument Lanier makes in his most recent book, Who Owns the Future?, against the economic model we have built around digital technology is, in a nutshell, this: what we have created is a machine that destroys middle-class jobs and concentrates information, wealth and power. Say what? Haven’t the Internet and mobile technology democratized knowledge? Don’t average people have more power than ever before? The answer to both questions is no, and the reason why is that the Internet has been swallowed by its own logic of “sharing”.

We need to remember that the Internet really got ramped up when scientists started using it to exchange information with one another. It was built on the idea of openness and transparency, not to mention a set of shared values. When the Internet leapt out into public consciousness, no one had any idea how to turn this sharing capacity and transparency into the basis for an economy. It took the aftermath of the dot-com bubble and bust for companies to come up with a model of how to monetize the Internet, and almost all of the major tech companies that dominate the Internet, at least in America (and there are only a handful: Google, Facebook and Amazon), now follow some variant of this model.

The model is to aggregate all the sharing the Internet seems naturally to produce and offer it, along with other “complements”, for “free”, in exchange for one thing: the ability to monitor, measure and manipulate through advertising whoever uses their services. Like silicon itself, it is a model that is ultimately built out of sand.

When you use a free service like Instagram there are three ways it’s ultimately paid for. The first we all know about: the “data trail” we leave when using the site is sold to third-party advertisers, which generates income for the parent company, in this case Facebook. The second and third ways the service is paid for I’ll get to in a moment, but the first way itself opens up all sorts of observations and questions that need to be answered.

We had thought the information (and ownership) landscape of the Internet was going to be “flat”. Instead, it’s proven to be extremely “spiky”. What we forgot in thinking it would turn out flat was that someone would have to gather and make useful the mountains of data we were about to create. The big Internet and telecom companies are these aggregators, able to make this data actionable because they possess the most powerful computers on the planet, which allow them not only to route and store the data but to mine it for value. Lanier has a great name for the biggest of these companies: he calls them Siren Servers.

One might think that which particular Siren Servers stand at the head of the pack is a matter of which is the most innovative. Not really. Rather, the largest Siren Servers have become so rich they simply swallow any innovative company that comes along. Facebook gobbled up Instagram because it offered a novel and increasingly popular way to share photos.

The second way a free service like Instagram is paid for, and this is one of the primary concerns of Lanier’s book, is that it essentially cannibalizes, to the point of destruction, the industry that used to provide the service, which in the “old economy” also supported lots of middle-class jobs.

Lanier states the problem bluntly:

 Here’s a current example of the challenge we face. At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography is Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only thirteen people. (p.2)

Calling Thomas Piketty….

As Bill Davidow argued recently in The Atlantic the size of this virtual economy where people share and get free stuff in exchange for their private data is now so big that it is giving us a distorted picture of GDP. We can no longer be sure how fast our economy is growing. He writes:

 There are no accurate numbers for the aggregate value of those services but a proxy for them would be the money advertisers spend to invade our privacy and capture our attention. Sales of digital ads are projected to be $114 billion in 2014, about twice what Americans spend on pets.

The forecasted GDP growth in 2014 is 2.8 percent and the annual historical growth rate of middle quintile incomes has averaged around 0.4 percent for the past 40 years. So if the government counted our virtual salaries based on the sale of our privacy and attention, it would have a big effect on the numbers.

Fans of Joseph Schumpeter might see all this churn as capitalism’s natural creative destruction, and be unfazed by the government’s inability to measure this “off the books” economy, because what the government cannot see it cannot tax.

The problem is that, unlike at other times in our history, technological change doesn’t seem to be creating new middle-class jobs as fast as it destroys old ones. Lanier was particularly sensitive to this development because he has always had his feet in two worlds: the world of digital technology and the world of music. Not the Katy Perry world of superstar music, but the world of people who made a living selling local albums, playing small gigs, and, even more importantly, providing the services that made this mid-level musical world possible. Lanier had seen how the digital technology he loved and helped create had essentially destroyed the middle-class world of musicians he also loved and had grown up in. His message for us all was that the Siren Servers are coming for you.

The continued advance of Moore’s Law, which, according to Charlie Stross, will play out for at least another decade or so, means not so much that we’ll achieve AGI, but that machines are now just smart enough to automate some of the functions we had previously thought only human beings were capable of performing. I’ll give an example of my own. For decades the GED test, which people take to obtain a high school equivalency diploma, has had an essay section. Thousands of people were needed to score these essays by hand, and most were paid to do so. With the new, computerized GED test this essay scoring has been completely automated, and human readers made superfluous.
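To give a sense of how little “intelligence” such automation requires, here is a deliberately crude Python sketch of the sort of surface features an automated scorer can lean on. It is a toy of my own, not the GED’s actual algorithm:

```python
import re

def score_essay(text: str) -> float:
    """Score an essay on a 0-4 scale from crude surface features.

    A deliberate caricature: length, vocabulary variety, and average
    sentence length stand in for the proxies real scoring engines use.
    """
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0

    length_score = min(len(words) / 300, 1.0)       # reward ~300+ words
    variety_score = len(set(words)) / len(words)    # type/token ratio
    structure_score = min(len(words) / len(sentences) / 20, 1.0)

    weighted = 0.5 * length_score + 0.3 * variety_score + 0.2 * structure_score
    return round(4 * weighted, 2)

# Forty copies of one sentence: long enough to score, too repetitive to score well.
print(score_essay("The quick brown fox jumps over the lazy dog. " * 40))
```

Nothing in that function understands an argument or a metaphor, yet something only modestly more refined was enough to make thousands of human readers superfluous.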

This brings me to the third way these new digital capabilities are paid for: they cannibalize work human beings have already done in order to profit a company that presents and sells its services as a form of artificial intelligence. As Lanier writes of Google Translate:

It’s magic that you can upload a phrase in Spanish into the cloud services of a company like Google or Microsoft, and a workable, if imperfect, translation to English is returned. It’s as if there’s a polyglot artificial intelligence residing up there in that great cloud of server farms.

But that is not how cloud services work. Instead, a multitude of examples of translations made by real human translators are gathered over the Internet. These are correlated with the example you send for translation. It will almost always turn out that multiple previous translations by real human translators had to contend with similar passages, so a collage of those previous translations will yield a usable result.

A giant act of statistics is made virtually free because of Moore’s Law, but at core the act of translation is based on real work of people.

Alas, the human translators are anonymous and off the books. (19-20)
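Lanier’s point is easy to demonstrate. Here is a minimal Python sketch of the “collage” he describes, assuming nothing more than a stored corpus of human translations and an off-the-shelf string-similarity lookup; the corpus and phrases are invented for illustration:

```python
import difflib

# A tiny stand-in for the millions of human translations that
# statistical services correlate against; all entries invented.
HUMAN_TRANSLATIONS = {
    "buenos días": "good morning",
    "¿cómo estás?": "how are you?",
    "muchas gracias por tu ayuda": "thank you very much for your help",
    "el gato está en la mesa": "the cat is on the table",
}

def translate(phrase: str) -> str:
    """Borrow the translation of the most similar human example.

    Real systems stitch fragments from many matches into a collage;
    this toy just reuses the single nearest one wholesale.
    """
    best = difflib.get_close_matches(
        phrase.lower(), HUMAN_TRANSLATIONS, n=1, cutoff=0.0)[0]
    return HUMAN_TRANSLATIONS[best]

print(translate("Buenos dias"))  # -> "good morning"
```

Scale the dictionary up to millions of human-made examples and add fragment-stitching, and you have the outline of a “magic” cloud service whose real engine is uncredited human work.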

The question all of us should be asking ourselves is not “could a machine be me?”, with all of our complexity and skills, but “could a machine do my job?”, the answer to which, in 9 cases out of 10, is almost certainly “yes!”

Okay, so that’s the problem; what is Lanier’s solution? It is not that we pull a Ned Ludd and break the machines, or even try to slow down Moore’s Law. Instead, what he wants us to do is start treating our personal data like property. If someone wants to know my buying habits, they have to pay a fee to me, the owner of this information. If some company uses my behavior to refine their algorithm, I need to be paid for this service, even if I was unaware I had helped in such a way. Lastly, anything I create and put on the Internet is my property. People are free to use it as they choose, but they need to pay me for it. In Lanier’s vision each of us would be the recipient of a constant stream of micropayments from Siren Servers that are using our data and our creations.
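What would this look like in practice? Here is a minimal sketch of the bookkeeping Lanier’s vision implies, with the class, the rate, and the usage counts all invented for illustration; the hard parts, valuing a datum, attributing it, and enforcing payment across borders, are precisely what the sketch leaves out:

```python
from collections import defaultdict

class MicropaymentLedger:
    """Track what a Siren Server owes the people whose data it uses."""

    def __init__(self, rate_per_use: float = 0.001):
        self.rate = rate_per_use            # dollars per use; invented rate
        self.balances = defaultdict(float)

    def record_use(self, person: str, uses: int = 1) -> None:
        """Credit a person whenever their data informs an ad,
        an algorithm, or a 'free' service."""
        self.balances[person] += self.rate * uses

    def statement(self) -> dict[str, float]:
        return dict(self.balances)

ledger = MicropaymentLedger()
ledger.record_use("alice", uses=5000)  # e.g. her photos helped train a model
ledger.record_use("bob", uses=120)     # his clicks refined an ad ranking
print(ledger.statement())              # {'alice': 5.0, 'bob': 0.12}
```

Even this toy makes the distributional question vivid: whoever generates the most commercially interesting data collects the most, which is where my first doubt comes in.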

Such a model is very interesting to me, especially in light of other fights over data ownership, namely the rights of indigenous people against bio-piracy, something I was turned on to by Paolo Bacigalupi’s bio-punk novel The Windup Girl, and what promises to be an increasing fight between pharmaceutical/biotech firms and individuals over the use of what is becoming mountains of genetic data. Nevertheless, I have my doubts as to Lanier’s alternative system and will lay them out in what follows.

For one, such a system seems likely to exacerbate rather than relieve the problem of rising inequality. Assuming most of the data people receive micropayments for is banal and commercial in nature, people who are already big spenders are likely to get a much larger cut of the micropayment pie. If I could afford such things, it’s no doubt worth a lot for some extra piece of information to tip the scales between my buying a Lexus or a Beemer, not so much if it’s a question of Tide vs. Wisk.

This issue would be solved if Lanier adopted the model of a shared public pool into which micropayments would flow, rather than routing them to the particular individuals involved, but he couldn’t do this given his commitment to the idea that personal data is a form of property. Don’t let his dreadlocks fool you: Lanier is at bottom a conservative thinker. Such a pooled fee might also balance out the glaring problem that Siren Servers effectively pay zero taxes.

But by far the biggest hole in Lanier’s micropayment system is that it ignores the international dimension of the Internet. Silicon Valley companies may be doubling down on their model, as can be seen in Amazon’s recent foray into the smartphone market, which attempts to route everything through itself, but the model has crashed globally. Three events signal the crash: Google was essentially booted out of China, the Snowden revelations threw a pall of suspicion over the model in an already privacy-sensitive Europe, and the EU itself handed the model a major loss with the “right to be forgotten” case in Spain.

Lanier’s system, which accepts mass surveillance as a fact, probably wouldn’t fly in a privacy-conscious Europe, and how in the world would we force Chinese and other digital pirates to provide payments on any scale? And China and other authoritarian countries have their own plans for their Siren Servers, namely, their use as tools of the state.

The fact of the matter is there is probably no truly global solution to continued automation and algorithmization, or to mass surveillance. Yet the much feared “splinter-net”, the shattering of the global Internet, may be better for freedom than many believe. This is because the Internet, and the Siren Servers that run it, once freed from their spectral existence in the global ether, become the responsibility of real, territorially bound people to govern. Each country will ultimately have to decide for itself both how the Internet is governed and how to respond to the coming wave of automation. There is bound to be diversity because countries are diverse; some might even leap over Lanier’s conservatism and invent radically new and more equitable ways of running an economy, an outcome many of the original digitopians who set this train a-rolling might actually be proud of.