Truth and Prediction in the Dataclysm

The Deluge by Francis Danby. 1837-1839

Last time I looked at the state of online dating. Among the figures mentioned was Christian Rudder, one of the founders of the dating site OkCupid and the author of a book on big data called Dataclysm: Who We Are When We Think No One’s Looking, which somehow manages to be both laugh-out-loud funny and deeply disturbing.

Rudder is famous, or infamous depending on your view of the matter, for having written a piece about his site with the provocative title We Experiment on Human Beings!, in which he wrote:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

That statement might set some people’s blood boiling, but my own negative reaction to it is somewhat tempered by the fact that Rudder’s willingness to run his experiments on his site’s users originates, it seems, not in any conscious effort to manipulate them more successfully, but as a way to quantify our ignorance. Or, as he puts it in the piece linked above:

I’m the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.

Rudder eventually turned his experiments on the data of OkCupid’s users into his book Dataclysm, which displays the same kind of brutal honesty and acknowledgement of the limits of our knowledge. What he is trying to do is make sense of the deluge of data now inundating us. The only way we have found to do this is to create sophisticated algorithms that allow us to discern patterns in the flood. The problem with using algorithms to try to organize human interactions (which have themselves now become points of data) is that users are often reduced to whatever version of being human the algorithm’s programmers have embedded in it. Rudder is well aware of and completely upfront about these limitations, and refuses to make any special claims for algorithmic wisdom over the normal human sort. As he puts it in Dataclysm:

That said, all websites, and indeed all data scientists, objectify. Algorithms don’t work well with things that aren’t numbers, so when you want a computer to understand an idea, you have to convert as much of it as you can into digits. The challenge facing sites and apps is thus to chop and jam the continuum of human experience into little buckets 1, 2, 3, without anyone noticing: to divide some vast, ineffable process- for Facebook, friendship, for Reddit, community, for dating sites, love- into pieces a server can handle. (13)

At the same time, Rudder appears to see the data collected on sites such as OkCupid as a sort of mirror, reflecting back to us, in ways we have never had available before, the real truth about ourselves, laid bare of the social conventions and politeness that tend to obscure how we truly feel. And what Rudder finds in this data is not the reflection of humanity’s inner beauty one might hope for, but something more like the portrait in The Picture of Dorian Gray.

As an example, take what Rudder calls “Wooderson’s Law”, after the character from Dazed and Confused who says in the film, “That’s what I love about these high school girls: I get older, they stay the same age.” What Rudder has found is that heterosexual male attraction to women peaks when those women are in their early 20s and thereafter falls precipitously. On OkCupid at least, women in their 30s and 40s are effectively invisible when competing against women in their 20s for male sexual attention. Fortunately for heterosexual men, women are more realistic in their expectations and tend to report the strongest attraction to men roughly their own age, until sometime in men’s 40s, when male attractiveness also falls off a cliff… gulp.

Another finding from Rudder’s work is not just that looks rule, but just how absolutely they rule. In his aforementioned piece, Rudder shows that the vast majority of users essentially equate personality with looks. A particularly stunning woman can find herself with a 99% personality rating even if there is not one word in her profile.

These are perhaps somewhat banal and even obvious discoveries about human nature mined from OkCupid’s data, and to my mind at least they are less disturbing than the deep-seated racial bias Rudder finds there as well. Again, at least among OkCupid’s users, dating preferences are heavily skewed against black men and women. Not just whites, it seems, but all other racial groups- Asians, Hispanics- would apparently prefer to date someone of a race other than African. A disheartening finding for the 21st century.

Rudder looks at other dark manifestations of our collective self beyond those found in OkCupid’s data as well. Try using Google search the way one would play the game Taboo. The search suggestions that pop up in the Google search bar, after all, are compiled from Google users’ most popular searches and thus provide a kind of gauge of what 1.17 billion human beings are thinking. Try these, some of which Rudder tries himself:

“why do women?”

“why do men?”

“why do white people?”

“why do black people?”

“why do Asians?”

“why do Muslims?”

The exercise gives a whole new meaning to Nietzsche’s observation that “When you stare into the abyss, the abyss stares back”.

Rudder also looks at the ability of social media to engender mobs. Take this case from Twitter in 2014. On New Year’s Eve that year a young woman tweeted:

“This beautiful earth is now 2014 years old, amazing.”

Science had obviously not been her strength in school, but what should have led to nothing more than collective giggles, or perhaps a polite correction regarding terrestrial chronology, ballooned into a storm of tweets like this:

“Kill yourself”

And:

“Kill yourself you stupid motherfucker”. (139)

As a recent study has pointed out, the emotion second most likely to go viral is rage; we can count ourselves very lucky that the emotion most likely to go viral is awe.

Then there’s the question of the structure of the whole thing. Like Jaron Lanier, Rudder is struck by the degree to which the seemingly democratized architecture of the Internet consistently manifests the opposite, revealing itself as following Zipf’s Law, which Rudder concisely reduces to:

rank x number = constant (160)

Both the economy and the society of the Internet age are dominated by “superstars”: companies such as Google and Facebook that so far outstrip their rivals in search or social media that they might be called monopolies, along with celebrities, musical artists, and authors. Zipf’s Law also seems to apply to dating sites, where a few profiles dominate the class of those viewed by potential partners. In a networked society where invisibility is the common fate of almost all of us, and success often hinges on increasing our own visibility, we are forced to turn ourselves toward “personal branding” and obsession over “Klout scores”. It’s not a new problem, but I wonder how much all this effort at garnering attention steals time from the actual work that makes that attention worthwhile and long-lasting.
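Rudder’s compact form of the law can be sketched in a few lines of code. The constant here is hypothetical, chosen purely for illustration; on a dating site, “number” might be the views a profile receives, so the tenth-ranked profile gets a tenth of what the top profile gets:

```python
# Zipf's law in Rudder's form: rank x number = constant.
# C is a made-up constant for illustration only.
C = 1_000_000

for rank in (1, 2, 3, 10, 100):
    number = C / rank  # the law implies number falls off as 1/rank
    print(f"rank {rank:>3}: number = {number:>9.0f}, rank x number = {rank * number:.0f}")
```

The steep 1/rank falloff is the point: the gap between first and tenth place is far larger than the gap between tenth and hundredth, which is why a handful of profiles, companies, or celebrities soak up most of the attention.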

Rudder is uncomfortable with all this algorithmization while at the same time accepting its inevitability. He writes of the project:

Reduction is inescapable. Algorithms are crude. Computers are machines. Data science is trying to make sense of an analog world. It’s a by-product of the basic physical nature of the micro-chip: a chip is just a sequence of tiny gates.

From that microscopic reality an absolutism propagates up through the whole enterprise, until at the highest level you have the definitions, data types and classes essential to programming languages like C and JavaScript.  (217-218)

Thing is, for all his humility about the effectiveness of big data so far, and his admittedly limited ability to draw solid conclusions from OkCupid’s data, Rudder seems to place undue trust in the ability of large corporations and the security state to succeed at the same project. Much deeper data mining and superior analytics, he thinks, separate his efforts from those of the really big boys. Rudder writes:

Analytics has in many ways surpassed the information itself as the real lever to pry. Cookies in your web browser and guys hacking for your credit card numbers get most of the press and are certainly the most acutely annoying of the data collectors. But they’ve taken hold of a small fraction of your life and for that they’ve had to put in all kinds of work. (227)

He compares them to Mike Myers’ Dr. Evil holding the world hostage “for one million dollars”

… while the billions fly to the real masterminds, like Acxiom. These corporate data marketers, with reach into bank and credit card records, retail histories, and government filings like tax accounts, know stuff about human behavior that no academic researcher searching for patterns on some website ever could. Meanwhile the resources and expertise the national security apparatus brings to bear make enterprise-level data mining look like Minesweeper. (227)

Yet do we really know this faith in big data isn’t an illusion? What discernible effects on the overall economy, or even on consumer behavior, are clearly traceable to the juggernauts of big data such as Acxiom? For us to believe in the power of data, shouldn’t someone have to show us data that it works, and not just the promise that it will transform the economy once it has achieved maximum penetration?

On that same score, what degree of faith should we put in the powers of big data when it comes to security? As far as I am aware, no evidence has been produced that mass surveillance has prevented attacks- it didn’t stop the Charlie Hebdo killers. Just as importantly, it seemingly hasn’t prevented our public officials from being caught flat-footed and flabbergasted in the face of international events such as the revolution in Egypt or the war in Ukraine. And these latter big events would seem to be precisely the kinds of predictions big data should find relatively easy: monitoring broad public sentiment as expressed through social media and across telecommunications networks, and marrying that with inside knowledge of the machinations of the major political players at the storm center of events.

On this point of not yet mastering the art of anticipating the future despite the mountains of data it was collecting, Anne Neuberger, Special Assistant to the NSA Director, gave a fascinating talk over at the Long Now Foundation in August last year. During a sometimes intense Q&A she had this exchange with one of the moderators, Stanford professor Paul Saffo:

Saffo: With big data, as a friend likes to say, “perhaps the data haystack that the intelligence community has created has grown too big to ever find the needle in.”

Neuberger : I think one of the reasons we talked about our desire to work with big data peers on analytics is because we certainly feel that we can glean far more value from the data that we have and potentially collect less data if we have a deeper understanding of how to better bring that together to develop more insights.

It’s a strange admission from a spokesperson for the nation’s premier cyber-intelligence agency that for their surveillance model to work they must learn from the analytics of private-sector big data companies whose own models are far from having proven their effectiveness.

Perhaps, then, Rudder should have extended his skepticism beyond the world of dating websites. As for me, I’ll only know big data in the security sphere works when our politicians, Noah-like, seem unusually well prepared for a major crisis that the rest of us data-poor chumps didn’t see coming a mile away.

 

How to avoid drowning in the Library of Babel

Library of Babel

Between us and the future stands an almost impregnable wall that cannot be scaled. We cannot see over it, or under it, or through it, no matter how hard we try. Sometimes the best way to see the future is by using the same tools we use to understand the present, which is also, at least partly, hidden from direct view by a dilemma inherent in our use of language. To understand anything we need language and symbols, but neither is the thing we are actually trying to grasp. Even the brain itself, through the senses, is a kind of funnel, giving us a narrow stream of information that we conflate with the entire world.

We often use science to get us beyond the box of our heads, and even, to a limited extent, to see into the future. Less discussed is the fact that we also use good metaphors, fables, and stories to attack the wall of the unknown obliquely: to get the rough outlines of something like the essence of reality while still not being able to see it as it truly is.

It shouldn’t surprise us that some of the most skilled writers at this sideways form of wall scaling ended up becoming blind. To be blind, especially to have become blind after a lifetime of being able to see, is to realize that there is a form to the world which is shrouded in darkness, that one can get to know only in bare outlines, and only with indirect methods. Touch in the blind goes from a way to know what something feels like for the sighted to an oblique and sometimes more revealing way to learn what something actually is.

The blind John Milton was a genius at using symbolism and metaphor to uncover a deeper and hidden reality, and he probably grasped more deeply than anyone before or after him that our age would be defined not by the chase after reality, but by our symbolic representation of its “value”, especially in the form of our ever more virtual talisman of money. Another, much more recent, writer skilled at such an approach, whose imagery was even stranger than Milton’s, was Jorge Luis Borges, whose writing defies easy categorization.

What Milton did for our confusion of the sign with the signified in Paradise Lost, Borges would do for the culmination of this symbolization in the “information age” with perhaps his most famous short story, The Library of Babel, which begins with the strange line:

The universe (which others call the Library) is composed of an indefinite, perhaps infinite number of hexagonal galleries. (112)

The inhabitants of Borges’ imagined Library (and one needs to say inhabitants rather than occupants, for the Library is not just a structure but the entire world, the universe in which all human beings exist) “live” in smallish compartments attached to vestibules on hexagonal galleries lined with shelves of books. Mirrors are placed so that when one looks out of one’s individual hexagon one sees a repeating pattern of them- the origin of arguments over whether the Library is infinite or just appears to be so.

All this would be little but an Escher drawing in the form of words were it not for the other aspects that reflect Borges’ pure genius. The Library contains not just all books, but all possible books, and therein lies the dilemma of its inhabitants: for if all possible books exist, it is impossible to find any particular book.

Some book explaining the origin of the Library and its system of organization logically must exist, but that book is essentially undiscoverable, surrounded by a perhaps infinite number of other books the vast number of which are just nonsensical combinations of letters. How could the inhabitants solve such a puzzle? A sect called the Purifiers thinks it has the answer- to destroy all the nonsensical books- but the futility of this project is soon recognized. Destruction barely puts a dent in the sheer number of nonsensical books in the Library.

The situation within the Library began with the hope that all knowledge, or at least the key to all knowledge, would be someday discovered but has led to despair as the narrator reflects:

I am perhaps misled by old age and fear but I suspect that the human species- the only species- teeters at the verge of extinction, yet that the Library- enlightened, solitary, infinite, perfectly unmoving, armed with precious volumes, pointless, incorruptible and secret will endure. (118)

The Library of Babel, published in 1941, before the “information age” was even an idea, seems to capture one of the paradoxes of our era. For the period in which we live is indeed experiencing an exponential rise in knowledge, in useful information, but this is not the whole of the story, and it does not fully reveal what is not just a wonder but a predicament.

The best exploration of this wonder and predicament is James Gleick’s brilliant recent history of the “information age”, The Information: A History, A Theory, A Flood. In that book Gleick gives us a history of information from primitive times to the present, but the idea that everything, from our biology to our society to our physics, can be understood as a form of information is a very recent one. Gleick himself uses Borges’ strange tale of the Library of Babel as a metaphor through which we can better grasp our situation.

The imaginative leap into the information age dates from seven years after Borges’ story was published, in 1948, when Claude Shannon, a researcher at Bell Labs, published his essay “A Mathematical Theory of Communication”. Shannon was interested in how one could communicate messages over channels polluted with noise, but his seemingly narrow aim ended up being revolutionary for fields far beyond his purview. The concept of information was about to take over biology, in the form of DNA, and would conquer communications in the form of the Internet and exponentially increasing powers of computation to compress and represent things as bits.
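Shannon’s central measure, entropy, quantifies information as the average uncertainty per symbol: H = -Σ p·log₂(p) over the symbol frequencies. It can be computed in a few lines; the example strings below are my own illustration, not Shannon’s:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average bits per symbol: H = -sum(p * log2(p)) over symbol frequencies."""
    n = len(message)
    counts = Counter(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A message with no uncertainty carries no information...
print(shannon_entropy("aaaaaaaa"))  # 0.0
# ...while eight equally likely symbols need 3 bits each to encode.
print(shannon_entropy("abcdefgh"))  # 3.0
```

Notice that the measure says nothing about what the symbols mean, only how unpredictable they are, which is exactly the gap between information and meaning that Gleick dwells on.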

The physicist John Wheeler would come to see the physical universe itself and its laws as a form of digital information: “It from bit”.

Otherwise put, every “it” — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. “It from bit” symbolizes the idea that every item of the physical world has at bottom — a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

Gleick finds this understanding of reality as information to be both true in a deep sense and in many ways troubling. We are, it seems, constantly told that we now produce more information each year than in all of prior human history combined, though what this actually means is another matter. For there is a wide and seemingly unbridgeable gap between information and what would pass for knowledge or meaning. Shannon himself acknowledged this gap, and deliberately ignored it, for the meaningfulness of the information transmitted was, he thought, “irrelevant to the engineering problem” of transmitting it.

The Internet operates in this way, which is why a digital maven like Nicholas Negroponte is critical of the tech culture’s shibboleth of “net neutrality”. To update his example humorously: a digital copy of a regular-sized novel like Fahrenheit 451 is the equivalent of about one second of video from “What Does the Fox Say?”, which, however much the latter cracks me up, just doesn’t seem right. The information content of something says almost nothing about its meaningfulness, and this is a problem for anyone looking at claims that we are producing incredible amounts of “knowledge” compared to any age in the past.

The problem is perhaps even worse than at first blush. Take these two sets of numbers:

9 2 9 5 5 3 4 7 7 10

And:

9 7 4 3 8 5 4 2 6 3

The first set of numbers is a truly random set of the numbers 1 through 10 created using a random number generator. The second set has a pattern. Can you see it?

In the second set every other number is a prime between 2 and 7. Knowing this rule vastly decreases the number of possible ten-number sequences of numbers between 1 and 10 that could fit. But here’s the important takeaway: the first set of numbers contains vastly more information than the second set. Randomness increases the information content of a message. (If anyone out there with a better grasp of information theory thinks this example is somehow misleading or incorrect, please advise.)

The new tools of the age have meant that a flood of bits now inundates us, the majority of which is without deep meaning or value, or even any meaning at all. We are confronted with a burst dam of information pouring over us. Gleick captures our dilemma with a quote from the philosopher of “enlightened doomsaying” Jean-Pierre Dupuy:

I take “hell” in its theological sense i.e. a place which is devoid of grace – the undeserved, unnecessary, surprising, unforeseen. A paradox is at work here: ours is a world about which we pretend to have more and more information but which seems to us increasingly devoid of meaning. (418)

It is not that useful and meaningful information is not increasing, but that our exponential increase in information of value, in medical and genetic information and the like, has risen in tandem with an exponential increase in nonsense and noise. This is the flood of Gleick’s title, and the way to avoid drowning in it is to learn how to swim or get oneself a boat.

A key to not drowning is to know when to hold your breath. Pushing for more and more data collection can sometimes be a waste of resources and time, leading one away from actionable knowledge. If you think the flood of useless information you feel compelled to process is overwhelming now, just wait for the full flowering of the “Internet of Things”, when everything you own, from your refrigerator to your bathroom mirror, will be “talking” to you.

Don’t get me wrong, there are aspects of the Internet of Things that are absolutely great. Take, for example, the work of the company Indoo.rs, which has created a sort of digital map of the San Francisco International Airport that allows a blind person using Bluetooth to know “the location of every gate, newsstand, wine bar and iPhone charging station throughout the 640,000-square-foot terminal.”

This is exactly the kind of purpose the Internet of Things should be put to: a modern form of miracle that would have brought a different form of sight to those blind like Milton and Borges. We could also take a cue from the powers of imagination found in these authors and use such technologies to make our experience of the world deeper, more multifaceted, and immersive. Something like that is a far cry from an “augmented reality” covered in advertisements, or work like that of the neuroscientist David Eagleman (whose projects I otherwise like) on a haptic belt that would massage its wearer into knowing instantaneously the contents of the latest Twitter feed or stock market gyrations.

Where our digital technology fails to make our lives richer, it should be helping us automate, meaning to make automatic and without thought, all that is least meaningful in our lives; this is what machines are good at. In areas of life that are fundamentally empty we should be decreasing, not increasing, our cognitive load. The drive for connectivity seems to be pushing in the opposite direction, forcing us to think about things that were formerly done by our thoughtless machines, which gave us little precision but also didn’t give us much to think about.

Yet even before the Internet of Things has fully bloomed we already have problems. As Gleick notes, in all past ages it was a given that most of our knowledge would be lost. Hence the logic of a wonderful social institution like the New York Public Library. Gleick writes with sharp irony:

Now expectations have inverted. Everything may be recorded and preserved…. Where does it end? Not with the Library of Congress.

Perhaps our answer is something like the Internet Archive, whose work capturing and recording the Internet as it comes into being and disappears is both amazing and may someday prove essential to the survival of our civilization: the analog to the copies made of the famed collection of the lost Library of Alexandria. (Watch this film.)

As individuals, however, we are still faced with the monumental task of separating the wheat from the chaff in the flood of information, the gems from the twerks. Our answer to this has been services that allow the aggregation and organization of this information, where, as Gleick writes:

 Searching and filtering are all that stand between this world and the Library of Babel.  (410)

And here is the one problem I had with Gleick’s otherwise mind-blowing book: he doesn’t really deal with questions of power. A good deal of power in our information society is shifting to those who provide these kinds of aggregating and sifting capabilities and claim, on that basis, to be able to make sense of a world brimming with noise and nonsense. The NSA makes this claim, and is precisely the kind of intelligence agency one would predict would emerge in an information age.

Anne Neuberger, Special Assistant to NSA Director Michael Rogers and Director of the NSA’s Commercial Solutions Center, recently gave a talk at the Long Now Foundation. It was a hostile audience of Silicon Valley types who felt burned by the NSA’s undermining of digital encryption, of the reputation of American tech companies, and of civil liberties. But nothing captured the essence of our situation better than a question from Paul Saffo.

Paul Saffo: It seems like the intelligence community has always been data optimists. That if we just had more data- and we know that after every crisis it’s ‘well, if we’d just had more data we’d connect the dots.’ And there’s a classic example- I think of 9/11, but this was said by the Church Committee in 1975: ‘It’s become common to translate criticism of intelligence results into demands for enlarged data collection.’ Does more data really lead to more insight?

As a friend is fond of saying ‘perhaps the data haystack that the intelligence community has created has become too big to ever find the needle in.’    

Neuberger’s telling answer was that the NSA needed better analytics, something the agency could learn from the big data practices of business. Gleick might have compared her answer to the plea for a demon by a character in a story by Stanislaw Lem, whom he quotes:

 We want the Demon, you see, to extract from the dance of atoms only information that is genuine like mathematical formulas, and fashion magazines.  (The Information 425)

Perhaps the best possible future for the NSA’s Internet-hoovering facility at Bluffdale would be for it to someday end up as a memory bank of our telecommunications in the 21st century, a more expansive version of the Internet Archive. What it is not, however, is a way to anticipate the future on either small or large scales, or at least that is what history tells us. More information or data does not equal more knowledge.

We are forced to find meaning in the current of the flood. It is a matter of learning what it is important to pay attention to and what to ignore, what we should remember and what we can safely forget, what the models we use to structure this information can tell us and what they can’t, what we actually know along with the boundaries of that which we don’t. It is not an easy job, or as Gleick says:

 Selecting the genuine takes work; then forgetting takes even more work.

And it’s a job, it seems, that just gets harder as our communications systems become ever more burdened by deliberate abuse (69.6 percent of email in 2014 was spam, up almost 25 percent from a decade earlier) and in an age when quite extraordinary advances in artificial intelligence are being used not to solve our myriad problems, but to construct an insidious architecture of government and corporate stalking via our personal “data trails”, which some confuse with our “self”. A Laplacian ambition that Dag Kittlaus unguardedly praises as “incremental revenue nirvana.” Pity the Buddha.

As individuals, institutions, and societies we need to find ways to stay afloat and navigate through the flood of information and its abuse that has burst upon us, for as in Borges’ Library there is no final escape from the world it represents. As Gleick powerfully concludes The Information:

We are all patrons of the Library of Babel now and we are the librarians too.

The Library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors looking for lines of meaning amid the leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors in which we may recognize creatures of the information.

 

Privacy Strikes Back, Dave Eggers’ The Circle and a Response to David Brin

I believe that we have turned a corner: we have finally attained Peak Indifference to Surveillance. We have reached the moment after which the number of people who give a damn about their privacy will only increase. The number of people who are so unaware of their privilege or blind to their risk that they think “nothing to hide/nothing to fear” is a viable way to run a civilization will only decline from here on in.  Cory Doctorow

If I were lucky enough to be teaching a media studies class right now I would assign two books to be read in tandem. The first, Douglas Rushkoff’s Present Shock, a book I have written about before, gives one the view of our communications landscape from 10,000 feet, asking how we can best understand what is going on with not just Internet and mobile technologies, but all forms of modern communication, including that precious antique, the narrative book or novel.

Perspectives from “above” have the strength of giving you a comprehensive view, but human meaning often requires another level, an on-the-ground emotional level, that good novels, perhaps still more than any other medium, succeed at brilliantly. Thus, the second book I would assign in my imaginary media studies course would be Dave Eggers’ novel The Circle, where one is taken into a world right on the verge of our own, which, because it seems so close and at the same time so creepy, makes us conscious of changes in human communication less through philosophy, though the book has plenty of that too, than through our almost inarticulable discomfort. Let me explain:

The Circle tells the story of a 20-something young woman, Mae Holland, who through a friend lands her dream job at the world’s top corporation, named, you guessed it, the Circle. To picture the Circle, imagine some near future where Google has swallowed Facebook and Twitter, and the resulting behemoth has gone on to feed on and absorb all the companies and algorithms that now structure our lives: the algorithm that suggests movies for you at Netflix, or books and products on Amazon, in addition to all the other Internet services you use, like online banking. This monster of a company is then able to integrate all of your online identities into one account, which they call “TruYou”.

Having escaped a dead-end job at a utility company in a nowhere small town, Mae finds herself working at the most powerful, most innovative, most socially conscious and worker-friendly company on the planet. "Who else but utopians could make utopia?" (30) she muses, but there are, of course, problems on the horizon.

The Circle is the creation of a group called the "Three Wise Men". One of these young wise men, Bailey, is the philosopher of the group. Here he is musing about the creation of the small, cheap, ubiquitous, high-definition video cameras that the company is placing anywhere and everywhere in a program called SeeChange:

Folks, we’re at the dawn of the Second Enlightenment. And I am not talking about a new building on campus. I am talking about an era where we don’t allow the vast majority of human thought and action and achievement to escape as if from a leaky bucket. We did that once before. It was called the Middle Ages, the Dark Ages. If not for the monks, everything the world had ever learned would have been lost. Well, we live in a similar time and we’re losing the vast majority of what we do and see and learn. But it doesn’t have to be that way. Not with these cameras, and not with the mission of the Circle. (67)

The philosophy of the company is summed up, in what the reader can only take as echoes of Orwell, in slogans such as "all that happens must be known", "privacy is theft", "secrets are lies", and "to heal we must know, to know we must share".

Our protagonist, Mae, has no difficulty with this philosophy. She is working for what she believes is the best company in the world, and it is certainly a company that, on the surface, all of us would likely want to work for: there are non-stop social events, including world-class speakers and free cultural and sporting events and concerts, and the company supports the relief of a whole host of social problems. Above all, there are amazing benefits, which include the company covering the healthcare costs of Mae's father, who is stricken with multiple sclerosis, an illness that would otherwise bankrupt the family.

What Eggers is excellent at is taking a person who is in complete agreement with the philosophy around her and letting her humanity surface in her unintended friction with it. It's really impossible to convey, without pasting in large parts of the book, just how effective Eggers is at presenting the claustrophobia that comes from too intense a use of social technology: the shame Mae is made to feel for missing a coworker's party; the endless rush to keep pace with everyone's updates, and the information overload and data exhaustion that result; the pressure of always being observed and "measured", both on the job and off; the need to always present oneself in the best light, to "connect" with others who share the same views, passions and experiences; the anxiety that people will share something, such as intimate or embarrassing pictures, one would not like shared; the confusion of "liking" and "disliking" with actually doing something, with the consequence that not sharing one's opinion begins to feel like moral irresponsibility.

Mae desperately wants to fit into the culture of transparency found at the Circle, but her humanity keeps getting in the way. She has to make a herculean effort to keep up with the social world of the company, mistakenly misses key social events, throws herself into sexual experiences and relationships she would prefer not be shared, and keeps the pain she is experiencing over her father's illness private.

She also has a taste for solitude, real solitude, without any expectation that she bring something out of it- photos or writing to be shared. Mae has a habit of going on solo kayaking excursions, and it is in these that her real friction with the culture of the Circle begins. She relies on an old-fashioned brochure to identify wildlife and fails to document and share her experiences. As an HR representative who berates her for this "selfish" practice puts it:

You look at your paper brochure, and that’s where it ends. It ends with you. (186)

The Circle is a social company built on the philosophy of transparency, and anyone who fails to share, it is assumed, must not really buy into that worldview. The "wise man" Bailey, as part of the best argument against keeping secrets I have ever read, captures the ultimate goal of this philosophy:

A circle is the strongest shape in the universe. Nothing can beat it, nothing can improve upon it. And that's what we want to be: perfect. So any information that eludes us, anything that's not accessible, prevents us from being perfect. (287)

The growing power of the Circle, the way it is swallowing everything and anything, does eventually come in for scrutiny by a small group of largely powerless politicians, but as was the case with the real-world Julian Assange, transparency, or the illusion of it, can be used as a weapon against the weak as much as against the strong. Suddenly all sorts of scandalous material is discovered on these politicians' computers and their careers are destroyed. Under "encouragement" from the Circle, politicians "go transparent", their every move recorded so as to be free from the charge of corruption, as the Circle itself takes over the foundational mechanism of democracy- voting.

The transparency that the Circle seeks must eventually contain everyone, and Mae herself, after committing the "crime" of temporarily taking a kayak for one of her trips and being caught by a SeeChange camera, becomes, at the insistence of Bailey, one of only two private citizens to go transparent, with almost her every move tracked and recorded.

While Mae actually believes in the philosophy of transparency and feels the flaw is with her, despite near-epiphanies that might have freed her from its grip, there are voices of solid opposition. There is Mae's ex-boyfriend, Mercer, who snaps at her while at dinner with her parents, and who, had it been Eggers' intention, would have offered an excellent summation of Rushkoff's Present Shock.

Here, though, there are no oppressors. No one’s forcing you to do this. You willingly tie yourself to these leashes. And you willingly become utterly socially autistic. You no longer pick up on basic human communication cues. You’re at a table with three humans, all of whom you know and are trying to talk to you, and you’re staring at a screen searching for strangers in Dubai.  (260)

There is also another of the Three Wise Men, Ty, who fears where the company he helped create is leading and plots to destroy it. He cries to Mae:

This is it. This is the moment where history pivots. Imagine you could have been there before Hitler became chancellor. Before Stalin annexed Eastern Europe. We’re on the verge of having another very hungry, very evil empire on our hands, Mae. Do you understand? (401)

Ty says of his co-creator Bailey:

This is the moment he has been waiting for, the moment when all souls are connected. This is his rapture, Mae! Don’t you see how extreme this view is? His idea is radical, and in another era would have been a fringe notion espoused by an eccentric adjunct professor somewhere: that all information, personal or not, should be shared by all.  (485)

If any quote defines what I mean by radical transparency it is the one immediately above. It is, no doubt, a caricature, and the individuals who adhere to something like it in the real world do so in shades, along a spectrum. One of the thinkers who does so, and whose thought might therefore shed light on what non-fictional proponents of transparency are like, is the science fiction author David Brin, who took some umbrage over at the IEET in response to my last post.

In that post itself I did not really have Brin in mind, partly because, like Julian Assange, his views have always seemed to me more cognizant of the deep human need for personal privacy than those of the figures I mentioned there- Mark Zuckerberg, Kevin Kelly and Jeff Stibel- and thus his ideas were not directly relevant to where I originally intended to go in the second part of my post, which was to focus on precisely this need. Given his sharp criticism, it now seems important that I address his views directly and thus swerve for a moment away from my main narrative.

Way back in 1997, Brin wrote a quite prescient work, The Transparent Society: Will Technology Force Us To Choose Between Privacy And Freedom?, which accurately gauged the way technology and culture were moving on the questions of surveillance and transparency. In a vastly simplified form, Brin's argument was that given the pace of technological change, which makes surveillance increasingly easier and cheaper, our best protection against elite or any other form of surveillance is not to limit or stop that surveillance, but to preserve our capacity to look back, to watch the watchers, and to make as much as possible of what they do transparent.

Brin’s view that transparency is the answer to surveillance leads him to be skeptical of traditional approaches, such as those of the ACLU, which hold that laws are the primary means to protect us, because technology, in Brin’s perspective, will always outrun any legal limitations on the capacity to surveil.

While I respect Brin as a fellow progressive and admire his early prescience, I simply have never found his argument compelling, and think, in fact, that his and similar sentiments held by early figures at the dawn of the Internet age have led us down a cul-de-sac.

Given the speed at which technologies of surveillance have been and are being developed, it has always been the law, with its ability to set long-lasting boundaries between the permissible and the impermissible, that is our primary protection against them. Indeed, the very fall in cost and rise in capacity of surveillance technologies, a reality which Brin believes makes legal constraints largely unworkable, in fact makes law, one of our oldest technologies, and one which no man should be above or below, our best weapon in privacy’s defense.

The same logic of the deterrent threat of legal penalties that we will need for, say, preventing a woman from being tracked and stalked by a jealous ex-boyfriend using digital technology will be necessary to restrain corporations and the state. It does not help a stalked woman just to know she is being stalked, to be able to "watch her watcher"; rather, she needs to be able to halt the stalking. Perhaps she can digitally hide, but she especially needs the protection of the force of law, which can impose limitations and penalties on anyone who uses technological capacities for socially unacceptable ends. In the same way, citizens are best protected not by being able to see into government prying, but by prohibiting that prying under penalty of law in the first place. We already do this effectively; the problem is that the law has lagged behind technology.

Admittedly, part of the issue is that technology has moved incredibly fast, but a great deal of law’s slowness has come from a culture that saw no problem with citizens being monitored and tracked 24/7- a culture which Brin helped create.

The law effectively prohibits authorities from searching your home without a warrant and probable cause, something authorities have been "technologically" able to do since we started living in shelters. Phone tapping, again without a warrant and probable cause, has been prohibited to authorities in the US since the late 1960’s- though authorities had been tapping phones since the 1890’s, not long after the telephone was invented. Part of the problem today is that no warrant is required for the government to get your "metadata"- who you called, or where you were as indicated by GPS. When your email exists in the "cloud" and not on your personal device, those emails can in some cases be read without any oversight from a judge. These gaps in Fourth Amendment protections exist because the bulk of US privacy law meant to deal with electronic communications was written before email even existed, indeed, before most of us knew what the Internet was. The law can be slow, but it shouldn’t be that slow; email, after all, is pretty damned old.

There’s a pattern here: egregious government behavior or abuse of technological capacities- British abuses in the American colonies, the American government and law enforcement’s misuse of wiretapping and recording capacities in the 1960’s- results in the passing of restrictions on the use of such techniques and technological capacities. Knowing about those abuses is only a necessary condition of restricting or putting a stop to them.

I find no reason to believe the pattern will not repeat itself, and that we will soon push for and achieve restrictions on the surveillance power of government and others- restrictions which will work until the powers-that-be find ways to get around them and new technology allows those who wish to surveil abusively to do so. Then we’ll be back at this table again, in the endless cat-and-mouse game that we of necessity must play if we wish to retain our freedom.

Brin seems to think that the fact that "elites" always find ways to get around such restrictions is a reason for not having such restrictions in the first place, which is a little like asking why you should clean your house when it just gets dirty again. As I see it, our freedom is always in a state of oscillation between having been secured and being at risk. We preserve it by asserting our rights during times of danger, and, sadly, this is one of those times.

I agree with Brin that the development of surveillance technologies cannot itself be directly stopped, and that spying technologies which would once have been the envy of the CIA or KGB, such as remote-controlled drones with cameras, or personal tracking and bugging devices, are now available off the shelf to almost everyone- an area in which Brin was eerily prescient. Yet, as with another widespread technology that can be misused, the automobile, their use needs to be licensed, regulated, and, where necessary, prohibited. The development of even more sophisticated and intrusive surveillance technologies may not be preventable, but it can certainly be slowed, and channeled in directions that better square with long-standing norms regarding privacy, or even human nature itself.

Sharp regulatory and legal limits on the use of surveillance technologies would likely derail a good deal of innovation and investment in the technologies of surveillance, which is exactly the point. Right now billions of dollars are flowing in the direction of empowering what only a few decades ago we would have clearly labeled creeps- people watching other people in ways they shouldn’t be- and these creeps can be found at the level of the state, the corporation and the individual.

On the level of individuals, transparency is not a solution for creepiness because, let’s face it, the social opprobrium of being known as a creep (because everyone is transparent) is unlikely to make creeps less creepy- it is their very insensitivity to such social rules that makes them creeps in the first place. All transparency would have done is open the "shades" of the victim’s "window". Two-way transparency is only really helpful, as opposed to inducing a state of fear in the watched, if the perception of intrusive watching allows the victim to immediately turn such watching off, whether by being able to make themselves "invisible", or, when the unwanted watching has gone too far, by bringing down the force of the law upon it.

Transparency is a step toward the solution to this problem- we need somehow to require tracking or surveillance apps in private hands to notify the person being tracked- but it is only a first step. Once a person knows they are being watched by another person, they need ways to protect themselves, to hide, and the backup of authorities to stop harassment.

In the economic sphere, the path out of the dead end we’ve driven ourselves into might lie in the recognition that the commodity for sale in the transaction between Internet companies and advertisers, the very reason they have pushed and continue to push to make us transparent and to surveil us in the first place, is us. We would do well to remember, as Hannah Arendt pointed out in her book The Human Condition, that the root of our conception of privacy lies in private property. The ownership of property, "one’s own private place in the world", was once considered the minimum prerequisite for the possession of political rights.

Later, property in land was exchanged for the portable property of our labor, which stemmed from our own bodies, and for capital. We have in a sense surrendered control over the "property" of ourselves and become what Jaron Lanier calls digital peasants. Part of the struggle to maintain our freedoms will mean reasserting control over this property- which means our digital data and genetic information. How exactly this might be done is not clear to me, but I can see outlines. For example, it is at least possible to imagine something like digital "strikes" in which tracked consumers deliberately make themselves opaque to compel better terms.

In terms of political power, the use of law, as opposed to mere openness or transparency, to constrain the abuse of surveillance powers by elites would square better with our history. For the base of the Western democratic tradition (at least in its modern incarnation) is not primarily elites’ openness to public scrutiny, or their competition with one another, as Brin argues in The Transparent Society (though admittedly the first especially is very important), but the constraints on the power of the state, elites, the mob, or nefarious individuals provided by the rule of law, which sets clear limits on how power, technologically enabled or otherwise, can be used.

I find the argument that prohibition, or even just regulation, never works- along with comparisons to the failed drug war- too selective to be useful when discussing surveillance technologies. Society has prohibitions on all sorts of things that are extremely effective, if never universally so.

In the American system, as mentioned, police and government are severely constrained in how they are able to obtain evidence against suspects or targets; past prohibitions against unreasonable searches and surveillance have actually worked. Consumer protection laws dissuade corporations from abusing or misusing consumers’ information or otherwise putting customers at risk. Environmental protection laws ban certain practices or place sharp boundaries on their use. Individuals are constrained in how they can engage with one another socially, and in how they can use certain technologies, lest their privilege to use them (e.g. driving) be revoked.

Drug and alcohol prohibitions, both of which have to push against the force of highly addictive substances, are exceptions to the general rule that thoughtful prohibition and regulation works. The ethical argument is over what we should prohibit and what we should permit, and how. It is ultimately a debate over what kind of society we want to live in based on our technological capacities, which should not be confused with a society determined by those capacities.

That laws, regulations, and prohibitions are, well… boring shouldn’t be taken as an indication that they are also wrong. The Transparent Society was a product of its time, the 1990’s- a prime example of the idea that as long as the playing field was leveled, spontaneous order would emerge, and that government "interference" through regulation and law (and in a working democracy the government is us) would distort this natural balance. It was the same logic that got us into the financial crisis, and a species of the eternal human illusion that this time is different. Sometimes the solution to a problem is merely a matter of knowing your history and applying common sense, and the solution to the problem of mass surveillance is to exert our power as citizens of a democracy to regulate and prohibit it where we see fit. Or to sum it all up: we need updated surveillance laws.

It would be very unfair to Brin to say his views are as radical as those of the Circle’s philosopher Bailey, for, as mentioned, Brin is very cognizant and articulate regarding the human need for privacy at the level of individual intimacy. Eggers’ fictional entrepreneur-philosopher’s vision is a much more frightening version of radical transparency, entailing the complete loss of private life. Such total transparency is victorious over privacy at the conclusion of The Circle. For, despite Mae’s love for Ty, he is unable to convince her to help him destroy the company, and she betrays him.

We are left with the impression that the Circle, as a consequence of Mae’s allegiance to its transparency project, has been able, as Lee Billings said in a different context, "to sublime and compress our small isolated world into an even more infinitesimal, less substantial state"- that our world is about to be enveloped in a dark sphere.

Yet it would be wrong to view even Bailey in the novel as somehow "evil", something that might make the philosophy of the Circle in some sense even more disturbing. The leadership of the Circle (with the exception of the Judas figure, Ty) doesn’t view what they are building as creepy or dangerous; they see it as a sort of paradise. In many ways they actually are helping people and want to help them. Mae at the beginning of Eggers’ novel is right- the builders of the Circle are utopians, as were those who thought, and continue to think, that radical transparency would prove the path to an inevitably better world.

As drunks are known for speaking the truth, an inebriated circler makes the connection between the aspirations of the Circle and those of religion:

….you’re gonna save all the souls. You’re gonna get everyone in the same place, you’re gonna teach them all the same things. There can be one morality, one set of rules. Imagine! (395)

The kind of religious longing lying behind the mission of the Circle is even better understood by comparison to that first utopian, Plato, and his character Glaucon’s myth of the Ring of Gyges in The Republic. The ring makes its possessor invisible, and the question it is used to explore is what human beings might do were there no chance they might get caught. The logic of the Circle is like a reverse Ring of Gyges, making everyone perfectly visible. Bailey thinks Mae stole the kayak because she thought she couldn’t be seen, couldn’t get caught:

All because you were being enabled by, what, some cloak of invisibility? (296)

If not being able to watch people would make them worse, being able to fully and completely watch them, so the logic goes, would inevitably make them better.

In making this utopian assumption, proponents of radical transparency, both fictional and real, needed to jettison some basic truths about the human condition that we are only now relearning- a pattern that has, sadly, played out many times before.

Utopia does not feel like utopia if upon crossing the border you can’t go back home.  And upon reaching utopia we almost always want to return home because every utopia is built on a denial of or oversimplification regarding our multidimensional and stubbornly imperfectable human nature, and this would be the case whether or not our utopia was free of truly bad actors, creeps or otherwise.

The problem one runs into, in the transparency version of utopia as in any other, is that, given that none of us is complete, or that we are less complete than we wish others to understand us to be, the push to be transparent in an absolute sense often leads to its opposite. On social networks we end up showcasing not reality but a highly edited and redacted version of it: not the temper tantrums, but our kids at their cutest; not vacation disasters, but their picture-perfect moments.

Pushing us, imperfect creatures that we are, towards total transparency leads almost inevitably to hypocrisy and towards exhausting and ultimately futile efforts at image management. All this becomes even more apparent when asymmetries in power between the watched and the watcher are introduced. Employees are, with reason, less inclined to share that drunken binge over the weekend if they are "friends" with their boss on Facebook. I have had fellow bloggers tell me they are afraid to express their opinions because of risks to their employment prospects. No one any longer knows whether the image one finds of a person on a social network is the "real" one or a carefully crafted public facade.

These information wars, where every side attempts to see as deeply as possible into the other while presenting an image of itself that best conforms to its own interest, are found up and down the line, from individuals to corporations and all the way up to states. The quest for transparency, even when those on the quest mean no harm, is less about making oneself known than about eliminating the uncertainty of others who are, despite all our efforts, not fully knowable. As Mae reflects:

It occurred to her, in a sudden moment of clarity, that what had always caused her anxiety, or stress, or worry, was not any one force, nothing independent and external- it wasn’t danger to herself or the calamity of other people and their problems. It was internal: it was subjective: it was not knowing. (194)

And again:

It was not knowing that was the seed of madness, loneliness, suspicion, fear. But there were ways to solve all this. Clarity had made her knowable to the world, and had made her better, had brought her close, she hoped, to perfection. Now the world would follow. Full transparency would bring full access and there would be no more not knowing. (465)

Yet this version of eliminating uncertainty is an illusion. In fact, the more information we collect, the more uncertainty increases- a point made brilliantly by another young author, who is also a scientist, Pippa Goldschmidt, in her novel The Falling Sky. To a talk show host who defines science as the search for answers she replies: "That’s exactly what science isn’t about…. it’s about quantifying uncertainty."

Mae, at one point in the novel, is on the verge of understanding this:

That the volume of information, of data, of judgments of measurement was too much, and there were too many people, and too many desires of too many people, and too much pain of too many people, and having it all constantly collated, collected, added and aggregated, and presented to her as if it was tidy and manageable- it was too much.  (410)

Sometimes one can end up at the exact opposite destination from where one wants to go if one is confused about the direction to follow to get there. Many of the early advocates of radical transparency thought our openness would make Orwellian nightmares of intrusive and controlling states less likely. Yet, by being blissfully unconcerned about our fundamental right to privacy, by promoting corporate monitoring and tracking of our every behavior, we have not created a society that is more open and humane, but have given spooks tools- tools democratic states would never have been able to construct openly- to spy upon us in ways that would have brought smiles to the faces of the totalitarian dictators and J. Edgar Hoovers of the 20th century. We have given criminals and creeps the capability to violate the intimate sphere of our lives, and provided real authoritarian dictatorships the template and technologies to make Orwell’s warnings a reality.

Eggers, whose novel was released shortly after the first Snowden revelations, was certainly capturing a change in public perception regarding the whole transparency project. It is the sense that we have been headed in the wrong direction- an unease that signals the revival of our internal sense of orientation, the feeling that the course we are headed on does not seem right, and in fact somehow hints at grave dangers.

It was an unease captured equally well, and around the same time, by Vienna Teng’s hauntingly beautiful song Hymn of Acxiom (brought to my attention by reader Gregory Maus). Teng’s heavenly music and metallic, processed voice- meant to be the voice of the world’s largest private database- make the threshold we are at risk of crossing, the one identified by Eggers, seem somehow beautiful yet ultimately terrifying.

Giving voice to this unease and questioning the ultimate destination of the radical transparency project has served and will likely continue to serve us well. At a minimum, as the quote from Cory Doctorow with which this post began indicates, a wall between citizens and even greater mass surveillance, at least by the state, may have been established by recent events.

Yet, even if the historical pattern of our democracy repeats itself, even if we are able to acknowledge this new mutation in the war of power against freedom and turn protections against it into law- if privacy is indeed able to "strike back"- the proponents of radical transparency were certainly right about one thing: we can never put the genie fully back in the bottle, even if we are still free enough to restrain him with the chains of norms, law, regulation and market competition.

The technologies of transparency may not have effected a permanent change in our relationship to the corporation and the state, to criminals, the mob, and the just plain creepy- unless, that is, we continue to permit it- but they have likely permanently altered the social world much closer to our daily concerns: our relationships with family and friends, our community and tribe. They have upended our relationship with one of the most precious of human ways of being, solitude, though not our experience of loneliness- a subject that will have to wait until another time…

Don’t Be Evil!

Panopticon Prisoner kneeling

However interesting a work it is, Eric Schmidt and Jared Cohen’s The New Digital Age is one of those books where, if you come to it as a blank slate, you’ll walk away with a very distorted chalk drawing of what the world actually looks like. Above all, you’ll walk away with the idea that intrusive and questionable surveillance was something those other guys did, the bad guys- not the American government, or US corporations, and certainly not Google, where Schmidt sits as executive chairman. Much ink is spilt explaining the egregious abuses of Internet freedom by countries like China and Iran, or what in the vast majority of cited cases are abuses by non-Western companies, but when it comes to the US itself, or any of its corporations engaging in similar practices, the book is eerily silent.

I may not know what a mote is, but I do know I am supposed to pluck my own out of my eye first. Only then can I get seriously down to the business of pointing out the other guy’s mote, or even helping him yank it out.

The New Digital Age (I’ll call it the NDA from here on to shorten things up) is full of the most reasonable and vanilla sort of advice on the need to balance our conflicting needs for security and privacy, but given its silence on the question of what the actual security/surveillance system in the US actually is, we’re left without the information needed to make such judgments. Let me put that silence in context.

The publication date of the NDA was April 23, 2013. The smoke screen of conspicuous-for-their-absence facts that are never discussed extends not only forward in time- something to be expected, given that the Edward Snowden revelations were about a month out (Snowden flew to Hong Kong on May 20, 2013, and the first stories broke in early June)- but, more disturbingly, backward in time as well. That is, Schmidt and Cohen couldn’t really be expected, legally if not morally, to discuss the revelations Snowden would later bring to light. Still, they should be expected to have addressed serious claims about the relationship between American technology companies and the US security state which were already public knowledge.

There had been extensive reporting on the intersection of technology and US government spying since at least 2010. These weren’t stories by Montana survivalists or persons camped out at Area 51, but by hard-hitting journalists with decades covering national security; namely, the work of Dana Priest and the Washington Post. If my memory and the book’s index serve me, neither Priest nor the Post is mentioned in the NDA.

Over a year before the NDA was published, Wired’s James Bamford had written a stunning piece on the NSA’s construction of its huge data center in Bluffdale, Utah, the goal of which is to suck up and store indefinitely the electronic records of all of us, which is the main thing we are arguing about. The main debate is over whether the government has a right to force private companies to provide all the digital data on their customers, which the government will then synthesize, organize and store. If you’re an American you’re lucky enough to have the government require a warrant to look at your records (although the court in charge of this, the FISA court, is not really known for turning such requests down). If you’re unlucky enough not to be an American, then the government can peruse your records whenever the hell it wants to, thank you very much.

The NSA gets two pages devoted to it in the NDA’s 257 pages, both of which are about how open-minded and clever the agency is for hiring pimply-faced hackers. Say what?

The more I think about what had to be the deliberate silence that runs throughout the whole of the NDA, the more infuriating it becomes. But at least now Google et al. have gotten religion, or so I hope. On December 9, 2013 Google, Facebook, Apple, Microsoft, Twitter, Yahoo, LinkedIn, and AOL sent an open letter to the White House urging new restrictions on the government’s ability to seize, use and store information gleaned from them. This is a hopeful sign, but I am not sure we should be handing out Liberty Medals just yet.

For one, this move against the government was not inspired by civil libertarians or even robust reporting, but by threats to the very business model upon which the companies that signed the document are based. As The Economist puts it:

The entire business model of firms like Google, Facebook and Twitter relies on harvesting intimate information provided by users and then selling that data on to advertisers.

It was private firms that persuaded people to give up lists of their friends, their most sensitive personal communications, and to constantly broadcast their location in real-time. If you had told even the nosiest spook in 1983 that, within 30 years, much of the populace would be carrying around a tracking device that kept a permanent record of everywhere they had ever visited, he’d have thought you mad.

Let’s say you’re completely comfortable with the US government keeping such records on you. Perhaps the majority of Americans are unconcerned about this and think it the price of safety. But I doubt Americans would feel as blasé if it were the Chinese or the Russians or, heaven forbid, the French or any other government whose apparatchiks could go through their online personal and financial records at will. Therein lies the threat to American companies whose ultimate aspirations are global. Companies that are seen, rightly or wrongly, as tools of the US government will lose the trust not mainly of US citizens but of international customers. An ensuing race to the exits and nationalization of the Internet would most likely be driven not by Iranian mullahs or a testosterone-charged Vladimir Putin paddling around in a submersible like a Bond villain, but by Western Europeans and other democratic societies who were already uncomfortable with the idea that corporations should be trusted by individuals who had made themselves as transparent as the Utah sky.

The Germans, to take one example, were already freaked out by Google Street View, of all things, and managed to have the company abandon that service there. Revulsion at the Snowden revelations is perhaps the one thing that unites the otherwise bickering nationalities of the EU. TED, an event that began as a Silicon Valley lovefest, looked a lot different when it was held in Brussels in October, with Mikko Hypponen urging Europeans to secede from American Internet infrastructure and create their own open-sourced platforms. It’s the fear of being thought downright Orwellian that seems most likely to have inspired Google’s move to abandon facial recognition on Google Glass.

With the Silicon Valley letter we might think we’re in the home stretch of this struggle to re-establish the right to privacy, but the sad fact is this fight is just beginning. As The Economist pointed out, none of the giants that provide the hardware and “plumbing” for the Internet, such as Cisco and AT&T, signed the open letter; they are less afraid, it seems, of losing customers because they are national brick-and-mortar companies in a way the eight signatories of the open letter to the Obama Administration are not. For civil libertarians to win this fight, Americans have to not only get those hardware companies on board, but compel the government to deconstruct a massive amount of spying infrastructure.

That is, we need to get the broader American public to care enough to exert sustained pressure on the government and some of the richest companies in the country to reverse course. Otherwise, the NSA facility at Bluffdale will continue sucking up its petabytes of overwhelmingly useless information like some obsessive Mormon genealogist until the mechanical leviathan lurches into obsolescence or is felled by the shepherd’s stone of better encryption.

The NSA facility that stands today in the Utah desert may offer a treasure trove for the historian of the far future, a kind of massive junkyard of collective memory filled with all our sense and nonsense. If we don’t get our act together, it will also be a historical monument to the failure of our more than two-centuries-old experiment with freedom.

Maps: how the physical world conquered the virtual

World map 1600

If we look back to the early days when the Internet was first exploding into public consciousness, in the 1980s, and even more so in the boom years of the 90s, what we often find is a kind of utopian sentiment around this new form of “space”. It wasn’t only that a whole new plane of human interaction seemed to be unfolding into existence almost overnight; it was that “cyberspace” seemed poised to swallow the real world, a prospect some viewed with hopeful anticipation and others with doom.

Things have not turned out that way.

William Gibson, the science fiction author who invented the term “cyberspace” in his classic Neuromancer, himself thinks that when people look back on the era when the Internet emerged, what will strike them as odd is how we could have confused ourselves into thinking that the virtual world and our work-a-day one were somehow distinct. Gibson characterizes this as the conquest of the real by the virtual. Yet one can see how what has happened is better thought of as the reverse by taking even a cursory glance at our early experience and understanding of cyberspace.

Think back, if you are old enough to remember, to when the online world was supposed to be one where a person could shed their necessarily limited real identity for a virtual one. There were plenty of anecdotes, not all of them insidious, of people faking their way through a contrived identity the unsuspecting thought was real: men coming across as women, women as men, the homely as the beautiful. Cyberspace seemed to level traditional categories and the limits of geography. A poor adolescent could hobnob with the rich and powerful. As long as one had an Internet connection, country of origin and geographical location seemed irrelevant.

It should not come as any surprise, then, that an early digital reality advocate such as Nicole Stenger could end her 1991 essay Mind is a Leaking Rainbow with the utopian flourish:

According to Sartre, the atomic bomb was what humanity had found to commit collective suicide. It seems, by contrast, that cyberspace, though born of a war technology, opens up a space for collective restoration, and for peace. As screens are dissolving, our future can only take on a luminous dimension! / Welcome to the New World! (58)

Ah, if only.

Even utopian rhetoric was sometimes tempered with dystopian fears. Here is Mark Pesce, the inventor of VRML, in his 1997 essay Ignition:

The power over this realm has been given to you. You are weaving the fabric of perception in information perceptualized. You could – if you choose – turn our world into a final panopticon – a prison where all can be seen and heard and judged by a single jailer. Or you could aim for its inverse, an asylum run by the inmates. The esoteric promise of cyberspace is of a rule where you do as you will; this ontology – already present in the complex system known as Internet – stands a good chance of being passed along to its organ of perception.

The imagery of a “final panopticon” is doubtless too morbid for us at this stage, whatever the dark trends. What is clear, though, is that cyberspace is a dead metaphor for what the Internet has become; we need a new one. I think we could do worse than the metaphor of the map. For what the online world has ended up being is less an alternative landscape than a series of cartographies by which we organize our relationship with the world outside of our computer screens, a development with both liberating and troubling consequences.

Maps have always been reflections of culture and power rather than reflections of reality. The fact that medieval maps in the West had Jerusalem at their center expressed not a geographic but a spiritual truth, although few understood the difference. During the Age of Exploration, what we might think of as realistic maps were really navigational aids for maritime trading states, a latent fact present in what the mapmakers found important to display and explain.

The number and detail of maps, along with the science of cartography, rose in tandem with the territorial anchoring of the nation-state. As James C. Scott points out in his Seeing Like a State, maps were one of the primary tools of the modern state, whose ambition was to make what it aimed to control “legible” and thus open to understanding by bureaucrats in far-off capitals and their administration.

What all of this has to do with the fate of cyberspace, the world where we live today, is that the Internet, rather than offering us an alternative version of physical space and an escape hatch from its problems, has instead evolved into a tool of legibility. What is made legible in this case is us. Our own selves and the micro-worlds we inhabit have become legible to outsiders. Most of the time these outsiders are advertisers who target us based on our “profile”, but sometimes this quest to make individuals legible is pursued by the state, not just in the form of standardized numbers and universal paperwork but in terms of the kinds of information a state could once obtain only by interrogation, the state’s first crack at making individuals legible.

A recent book by Google’s Eric Schmidt, co-authored with foreign policy analyst Jared Cohen, The New Digital Age, is chock-full of examples of corporate advertisers’ and states’ new powers of legibility. They write:

The key advance ahead is personalization. You’ll be able to customize your devices- indeed much of the technology around you- to fit your needs, so that the environment reflects your preferences.

At your fingertips will be an entire world’s worth of digital content, constantly updated, ranked and categorized to help you find the music, movies, shows, books, magazines, blogs and art you like. (23)

Or as journalist Farhad Manjoo quotes Amit Singhal of Google:

I can imagine a world where I don’t even need to search. I am just somewhere outside at noon, and my search engine immediately recommends to me the nearby restaurants that I’d like because they serve spicy food.

There is a very good reason why I did not use the word “individuals” in place of “corporate advertisers” above: a question of intent. Whose interest does the use of such algorithms to make the individual legible ultimately serve? If it were my interest, then search algorithms might tell me where I can get a free or even pirated copy of the music, videos, etc. I will like so much. They might remind me of my debts, and how much I would save if I skipped dinner at the local restaurant and cooked my quesadillas at home. Google and all its great services, along with similar tech giants aiming to map the individual such as Facebook, aren’t really “free”. While using them I am renting myself to advertisers. All maps are ultimately political.

With the emergence of mobile technology and augmented reality, the physical world has wrestled the virtual one to the ground like Jacob did the angel. Virtual reality is now repurposed to ensconce all of us in our own customized micro-worlds. Like history? Then maybe your smartphone or Google Glass will bring everything historical around you out into relief. Same if you like cupcakes and pastry, or strip clubs. These customized maps already existed in our own heads, but now we have the tools for our individualized cartography, the only price being constant advertisements.

There’s even a burgeoning movement among the avant-garde, if there can still be said to be such a thing, against this kind of subjection of the individual to corporate-dictated algorithms and logic. Inspired by mid-20th-century leftists such as Guy Debord with his Society of the Spectacle, practitioners of what is called psychogeography are creating and using apps such as Drift, which leads the individual on unplanned walks around their own neighborhood, or Random GPS, which has your car’s navigation system remind you of the joys of getting lost.

My hope is that we will see other versions of these algorithm inverters and breakers, and not just when it comes to geography. How about similar things for book recommendations or music or even dating? We are creatures that sometimes like novelty and surprise, and part of the wonder of life is fortuna: its serendipitous accidents.
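As a sketch of what such an inverter might look like for recommendations, here is a toy “serendipity dial” in Python. Nothing here corresponds to how Drift or any real recommender actually works; the function, its parameters, and the replacement rule are all invented for illustration.

```python
import random

def drift_recommendations(personalized, full_catalog, serendipity=0.3, seed=None):
    """Return a recommendation list where a fraction of the slots is handed
    over to chance: items the profile-driven algorithm would have filtered
    out get a say, in the spirit of the psychogeography apps above."""
    rng = random.Random(seed)
    # How many slots to surrender to fortuna (always at least one).
    n_random = max(1, int(len(personalized) * serendipity))
    # Everything in the catalog the algorithm did NOT pick for you.
    leftovers = [item for item in full_catalog if item not in personalized]
    picks = rng.sample(leftovers, min(n_random, len(leftovers)))
    # Replace the tail of the tailored list with the serendipitous picks.
    return personalized[:len(personalized) - len(picks)] + picks
```

The design choice is the point: instead of optimizing a score, the function deliberately spends part of its “budget” on items the profile says you won’t like, which is exactly the inversion of corporate logic the psychogeographers are after.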

Yet, I think these tools will most likely ramp up the social and conformist aspects of our nature. We shouldn’t think they will be limited to corporate persuaders. I can imagine “Catholic apps” that allow one to monitor one’s sins, and a whole host of funny and not so funny ways groups will use the new methods of making the individual legible to tie her even closer to the norms of the group.

A world where I am surrounded by a swirl of constant spam, or helpful and not-so-helpful suggestions, the minute I am connected (indeed, a barrage that never ends except when I am sleeping, because I am always connected) may be annoying, but it isn’t all that scary. It’s when we put these legibility tools in the hands of the state that I get a little nervous.

As Schmidt and Cohen point out, one of the most advanced forms of such efforts at mapping the individual is an entity called Plataforma México, essentially a huge database able to identify any individual and tie them to their criminal record.

Housed in an underground bunker in the Secretariat of Public Security compound in Mexico City, this large database integrates intelligence, crime reports and real time data from surveillance cameras and other inputs from across the country. Specialized algorithms can extract patterns, project social graphs and monitor restive areas for violence and crime as well as for natural disasters and other emergencies.  (174)

The problem I have here is the blurring of the line between the methods used for domestic crime and those used for more existential threats, namely war. Given that crime in the form of the drug war is an existential threat for Mexico, this might make sense. But the same types of tools are being perfected by authoritarian states such as China, which is faced not with an existential threat but with growing pressures for reform, and also in what are supposed to be free societies like the United States, where a non-existential threat in the form of terrorism, however horrific in reality and potential, is met with similar efforts by the state to map individuals.

Schmidt and Cohen point out that there is a burgeoning trade among autocratic countries and their companies, which are busy perfecting the world’s best spyware. The Egyptian firm Orascom owns a 25 percent share of the panoptic sole Internet provider in North Korea. (96) Western companies are in the game as well, with the British Gamma Group’s sale of spyware technology to Mubarak’s Egypt being just one recent example.

Yet if corporations and the state are busy making us legible, there has also been a democratization of the capacity for such mapmaking, which is perhaps one of the reasons why states are finding governance so difficult. Real communities have become almost as easy to create as virtual ones, because all such communities are merely a matter of making and sustaining human relationships and understanding their maps.

Schmidt and Cohen imagine virtual governments in exile waiting in the wings to strike at the propitious moment. Political movements can be created off the shelf, supported by their own ready-made media entities, and the authors picture socially conscious celebrities and wealthy individuals running with this model in response to crises. Every side in a conflict can now have its own media wing whose primary goal is to shape and own the narrative. Even whole bureaucracies could be preserved from destruction by keeping their maps and functions in the cloud.

Sometimes virtual worlds remain limited to affecting the lives of individuals while staying politically silent. A popular massively multiplayer game such as World of Warcraft may have as much influence on an individual’s life as other invisible kingdoms such as those of religion. An imagined online world becomes real the moment its map is taken as a prescription for the physical world. Are things like Hizb ut-Tahrir, which aims at the establishment of a pan-Islamic caliphate, or the League of the South, which promotes a second secession of American states, “real” political organizations or fictional worlds masquerading as political movements? I suppose only time will tell.

Whatever the case, society seems torn between the mapmakers of the state, who want to use the tools of the virtual world to impose order on the physical, and an almost chaotic proliferation of those same tools among groups of all kinds creating communities seemingly out of thin air.

All this puts me in mind of nothing so much as China Mieville’s classic of New Weird fiction, The City and the City. It’s a crime novel with the twist that it takes place in two cities, Beszel and Ul Qoma, that exist in the same physical space, superimposed on top of one another. No doubt Mieville was interested in telling a good story, and in getting us thinking about questions of borders and norms, but it’s a pretty good example of the mapping I’ve been talking about, even if it is an imagined one.

In The City and the City an inhabitant of Beszel isn’t allowed to see or interact with what’s going on in Ul Qoma, and vice versa; otherwise they commit a crime called “breach”, and there’s a whole secretive bi-city agency called Breach that monitors and prosecutes those infractions. There’s even an imaginary (we are led to believe) third city, “Orciny”, that exists on top of Beszel and Ul Qoma and secretly controls the other two.

This idea of multiple identities, consumer and political, overlaying the same geographical space seems a perfect description of our current condition. What is missing here, though, is the sharp borders imposed by Breach. Such borders might appear quicker and in different countries than one might have supposed, thanks to the recent revelations that the United States has been treating the Internet and its major American companies like satraps. Only now has Silicon Valley woken up to the fact that its close relationship with the American security state threatens its transparency-based business model with suicide. The re-imposition of state sovereignty over the Internet would mean a territorialization of the virtual world, a development that would truly constitute its conquest by the physical. To those possibilities I will turn next time…

The Algorithms Are Coming!

Attack of the Blob

It might not make a great B-movie from the late 50s, but the rise of the algorithms over the last decade has been just as thrilling, spectacular, and yes, sometimes even scary.

I was first turned on to the rise of algorithms by Kevin Slavin, founder of the gaming company Area/Code, and his fascinating 2011 talk on the subject at TED. I was quickly drawn to one of Slavin’s illustrations of the new power of algorithms in the world of finance. Algorithms now control more than 70% of US financial transactions, meaning that the majority of decisions regarding the buying and selling of assets are now made by machines. I initially took, indeed I still take, the rise of algorithms in finance to be a threat to democracy. It took me much longer to appreciate Slavin’s deeper point: that algorithms have become so powerful that they represent a new third player on the stage of human experience: Nature-Humanity-Algorithms. First to finance.

The global financial system has been built around the electronic net we have thrown over the world. Assets are traded at the speed of light. The system rewards those best equipped to navigate this system granting stupendous profits to those with the largest processing capacity and the fastest speeds. Processing capacity means access to incredibly powerful supercomputers, but the question of speed is perhaps more interesting.

Slavin points out how the desire to shave a few milliseconds off trading time has led to the hollowing out of whole skyscrapers in Manhattan. We tend to think of the Internet as something that is “everywhere”, but it has locations of sorts: the exchange points and carrier hotels where its traffic is concentrated. The desire to get physically close to this infrastructure, and therefore be able to move faster, has led not only to these internally re-configured skyscrapers, but to the transformation of the landscape itself.

By far the best example of the needs of algorithms shaping the world is the 825-mile fiber optic trench dug from Chicago to New York by the company Spread Networks. The route for this cable was cut through my formidable native Alleghenies rather than following, as regular communications networks do, the old railway lines.

Slavin doesn’t point this out, but the 13 milliseconds of trading advantage enjoyed by those using this cable is only partially explained by its direct route between Chicago and New York. The cable is also “dark fiber”, meaning its messages do not need to compete with other traffic zipping through it. It’s an exclusive line, the private jet of the web. Some alien archaeologist who stumbled across this cable would be able to read in it the economic inequality endemic to early 21st-century life. The Egyptians had pyramids; we have a tube of glass hidden under one of the oldest mountain ranges on earth.

Perhaps the best writer on the intersection of digital technology and finance is the Wall Street Journal’s Scott Patterson, with books like his The Quants and, even more so, his Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market. In Dark Pools Patterson takes us right into the heart of the new algorithm-based markets, where a type of evolutionary struggle is raging that few of us are even aware of. There are algorithms that exist as a type of “predator”, using their speed to outmaneuver slow-moving “herbivores” such as the mutual funds and pension funds in which the majority of us little guys, if we have any investments at all, have our money parked. Predators, because they can make trades milliseconds faster, can see a change in market position by one of these “slow” funds, say the sale of a huge chunk of stock, and then pounce, taking an advantageous position relative to the sale and leaving the slow mover with much less than would have been gained, or much more than would have been lost, had these lightning-fast piranhas not been able to strike.

To protect themselves, the slow-moving funds have not only deployed things like “decoy” algorithms to throw the predators off their trail, but have shifted much of their trading into non-public markets, the “dark pools” of Patterson’s title. Yet even these pools have become infected with predator algos. No environment is safe; the evolutionary struggle goes on.
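The predator-and-herbivore dynamic can be caricatured in a few lines of Python. This is a toy model, not a description of any real trading system: the linear price-impact rule and every number below are invented for illustration.

```python
# A toy caricature of the front-running "predator" described above.
# The price-impact model and all numbers are invented for illustration.

def predator_profit(fund_order_size, price, impact_per_share, latency_edge_ms):
    """A fast trader spots a large sell order milliseconds before it moves
    the market, sells at the old price, and buys back at the depressed one.
    The edge exists only if the predator is strictly faster."""
    if latency_edge_ms <= 0:
        return 0.0  # no speed advantage, no front-running profit
    # Assume the big sale pushes the price down linearly with its size.
    price_after = price - fund_order_size * impact_per_share
    return round((price - price_after) * fund_order_size, 2)

# A pension fund sells 100,000 shares at $50; a 3 ms edge is enough.
print(predator_profit(100_000, 50.0, 0.00001, latency_edge_ms=3))
```

The point of the toy is that the profit scales with the herbivore’s order size and vanishes entirely without the speed edge, which is why a few milliseconds can justify an 825-mile trench.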

However this ends, and it might end very badly, finance is not the only place where we have seen the rise of the algorithms. The books recommended for you by Amazon, or the movies Netflix informs you might make for a good movie night, are all based on sophisticated algorithms about who you are. The same kinds of algorithms that try to “understand” you are used by online dating services, and even shape your interaction with the person on the other end of the line at customer service.

Christopher Steiner in his Automate This points out that the little ditty at the beginning of every customer service call, “this call may be monitored…”, is used not so much, as we might be prone to think, as a way to gauge the performance of the person who is supposed to help you with your problem, but to add you to a database of personality types. Your call can then be guided to someone skilled in handling whatever personality type you have. Want no-nonsense answers? No problem! Want a shoulder to cry on? Ditto!
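The routing idea Steiner describes can be sketched in a few lines. The personality labels, agent names, and matching rule below are all invented for illustration; real systems build their profiles from libraries of recorded calls.

```python
# A toy sketch of personality-matched call routing. All labels, names,
# and the matching rule are invented for illustration only.

AGENTS = {
    "direct": ["Ana", "Raj"],        # suited to no-nonsense callers
    "empathetic": ["Luis", "Mei"],   # suited to callers who want a shoulder
}

def route_call(caller_id, profiles):
    """Look the caller up in a database of personality types and pick an
    agent from the matching pool; unprofiled callers default to 'direct'."""
    style = profiles.get(caller_id, "direct")
    pool = AGENTS[style]
    # Deterministic pick so repeat callers land with the same kind of agent.
    return pool[len(caller_id) % len(pool)]

print(route_call("555-0101", {"555-0101": "empathetic"}))  # prints Luis
```

The interesting part is not the lookup but the database behind it: every “monitored” call is another labeled data point that sharpens the caller’s profile.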

The uber-dream of the big technology companies is to have their algorithms understand every element of our lives and “help” us to make decisions accordingly. Whether or not help should actually be in quotes is something for us as individuals, and even more so as a society, to decide, with the main questions being: how much of our privacy are we willing to give up in order to have smooth transactions, and is this kind of guidance a help or a hindrance to the self-actualization we all prize?

The company closest to achieving this algorithmic mastery over our lives is Google, as Steven Kovach points out in a recent article with the somewhat over-the-top title Google’s plan to take over the world. Most of us might think of Google as a mere search company that offers a lot of cool complements such as Google Earth. But, as its founders have repeatedly said, the ultimate goal of the company is to achieve true artificial intelligence, a global brain covering the earth.

Don’t think the state, which Nietzsche so brilliantly called “the coldest of all cold monsters”, hasn’t caught on to the new power and potential of algorithms. Just as with Wall Street firms and tech companies, the state has seized on the capabilities of advances in artificial intelligence and computing power that allow the scanning of enormous databases. Recent revelations regarding the actions of the NSA should have come as no surprise. Not conspiracy theorists but reputable journalists, such as the Washington Post’s Dana Priest, had already informed us that the US government was sweeping up huge amounts of data about people all over the world, including American citizens, under a program with the Orwellian name of The Guardian. Reporting by James Bamford of Wired in March of last year had already informed us that:

In the process—and for the first time since Watergate and the other scandals of the Nixon administration—the NSA has turned its surveillance apparatus on the US and its citizens. It has established listening posts throughout the nation to collect and sift through billions of email messages and phone calls, whether they originate within the country or overseas. It has created a supercomputer of almost unimaginable speed to look for patterns and unscramble codes. Finally, the agency has begun building a place to store all the trillions of words and thoughts and whispers captured in its electronic net.

The NSA scandals have the potential to shift the ground under US Internet companies, especially companies such as Google whose business model and philosophy are built around the idea of an open Internet. Countries have even more reason now to be energetic in pursuing “Internet sovereignty”, the idea that each country should have the right and power to decide how the Internet is used within its borders.

In many cases, such as in Europe, this might serve to protect citizens against the prying eyes of the US security state, but we should not be waving the flag of digitopia quite yet. There are likely to be many more instances of the state using “Internet sovereignty” not to protect its people from US snoops, but to protect authoritarian regimes from the democratizing influences of the outside world. Algorithms, and the ecosystem of the Internet in which most of them exist, might be moving from being the vector of a new global era of human civilization to being just another set of tools in the arsenal of state power. Indeed, the use of algorithms as weapons, and the Internet as a means of delivery, is already well under way.

At this early date it’s impossible to know whether the revolution in algorithms will ultimately be for the benefit of tyranny or freedom. As of right now, I’d unfortunately have to vote for the tyrants. The increase in the ability to gather and find information in huge pools of data has, as is well known, given authoritarian regimes such as China an ability to spy on their netizens that would make the more primitive totalitarians of the 20th century salivate. Authoritarians have even leveraged the capacities of commercial firms to increase their own power, a fact that goes unnoticed when people discuss the anti-authoritarian “Twitter Revolutions” and the like.

Such was the case in Tunisia during its revolution in 2011, where the state was able to leverage the power of a commercial company, Facebook, to spy on its citizens. Of course, resistance is fought through the Internet as well. As Parmy Olson points out in her We Are Anonymous, it was not the actions of the US government but one of the most politically motivated members of the hacktivist groups Anonymous and LulzSec, a man with the moniker “Sabu” (who later turned FBI informant), that launched a pushback against this authoritarian takeover of the Internet. Evidence, if there ever was any, that hacktivism, even when it uses Distributed Denial of Service (DDoS) attacks, can be a legitimate form of political speech.

Yet, unlike in the movies, even the rebels in this story aren’t fully human. Anonymous’ most potent weapon, the DDoS attack, relies on algorithmic bots to infect or inhabit host computers and then strike at some set moment, causing a system to crash due to surges in traffic. Still, it isn’t the hacktivism of groups like Anonymous and LulzSec that should worry any of us, but the weaponization of the Internet by states, corporations and criminals.

Perhaps the first well-known example of a weaponized algorithm was the Stuxnet worm deployed by the US, Israel, or both against the Iranian nuclear program. This was a very smart computer worm that could find and disable valuable pieces of Iran’s nuclear infrastructure, leaving one to wonder whether the algo wars on Wall Street are just a foretaste of a much bigger and more dangerous evolutionary struggle.

Hacktivist groups like Anonymous or LulzSec have made DDoS attacks famous. What I did not know, until I read Parmy Olson, is that companies are using botnets to attack other companies, as when Bollywood hired the company AiPlex to launch DDoS attacks against well-known copyright violators such as The Pirate Bay. What this in all likelihood means is that AiPlex infiltrated perhaps millions of computers (maybe your computer), unknown to their owners, to take down sites whose pirated materials you might never have viewed. Indeed, it seems the majority of DDoS attacks are little but apolitical thuggery: mobsters blackmailing gambling houses with takedowns on large betting days, and that sort of Sopranosesque thing.

Indeed, the "black hats" of criminal groups are embracing the algorithmic revolution with abandon. A lot of this is just annoying: it's algorithms that keep sending you all those advertisements about penis enlargement or unclaimed lottery winnings, but it doesn't stop there. One of the more disturbing things I took away from Mark Bowden's Worm: The First Digital World War is that criminals who don't know the first thing about programming can buy "kits" - crime algorithms they can customize to, say, find and steal your credit card information by hacking into your accounts. The criminal behind this need only press a few buttons and voila! He's got himself his very own cyber-burglar.

The most advanced of these criminal algorithms - though it might be a weapon of some state or terrorist group, we just don't know - is the Conficker worm, the subject of Bowden's book, which not only infected millions of computers by exploiting a hole in Windows - can you believe it?! - but also created the mother of all botnets, an algorithm capable of taking down large parts of the Internet if it chose, but which for whatever reason just sits there without doing a damned thing.

As for algorithms and less kinetic forms of conflict, the Obama campaign of 2012 combined the capability to sort huge data sets with the ability to sort individuals based on psychological and social profiles - the same mix we see being used by tech companies and customer service firms. Just like the telemarketers or CSRs, the Obama campaign was able to tailor its approach to the individual on the other end of its canvassing, amplifying its persuasive power. That such mobilizing prowess has not also led to an actual capacity to govern is another matter.
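The tailoring logic itself can be startlingly simple. Here is a hypothetical sketch of the kind of profile-based message selection described above; the profile fields and script variants are invented, not taken from any actual campaign system:

```python
# Hypothetical sketch of profile-based message tailoring: a canvasser's
# script is chosen from the voter's data profile. All fields and
# messages below are invented for illustration.

def pick_message(profile: dict) -> str:
    """Select a canvassing script variant based on a voter's profile."""
    if profile.get("top_issue") == "economy":
        return "Ask about jobs and the local economy first."
    if profile.get("age", 0) < 30:
        return "Lead with college costs and student debt."
    return "Use the general persuasion script."

voter = {"age": 24, "top_issue": "education"}
print(pick_message(voter))
```

Multiply a few lines like these across millions of profiles and you get the amplified persuasive power the campaign was credited with.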

All this is dark, depressing stuff, I know. So I should end with a little light. As Steiner points out in his Automate This, our newfound power to play with huge data sets and, in what sounds like an oxymoron, to customize automation promises a whole host of amazing benefits. One of these might be our ability to have a 24/7 personal AI "physician" that monitors our health and drastically reduces medical costs. A real boon for treating underserved patients, whether in rural Appalachia or the developing world.

Steiner is also optimistic when it comes to artists. Advanced algorithms now allow buyers to link with sellers in a way that has never been possible before, and they'll only get better. A movie company might be searching for a particular piece of music for its film. Now, through a service like Music-Xray, the otherwise invisible musician can be found.
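The matching idea is worth pausing on. Here is a toy sketch of feature-based matching - not Music-Xray's actual method, which isn't public in detail, just the general principle of scoring catalog tracks against a buyer's brief (the tags and track names are invented):

```python
# Toy feature-matching sketch: score every track in a catalog against
# a buyer's brief using set overlap (Jaccard similarity). The tags and
# track names are invented for illustration.

def match_score(brief: set, track_tags: set) -> float:
    """Jaccard similarity between the buyer's brief and a track's tags."""
    return len(brief & track_tags) / len(brief | track_tags)

brief = {"acoustic", "melancholy", "slow"}
catalog = {
    "unknown_artist_demo": {"acoustic", "melancholy", "slow", "folk"},
    "stadium_rock_single": {"electric", "upbeat", "loud"},
}

# The best-scoring track surfaces, however obscure its creator.
best = max(catalog, key=lambda name: match_score(brief, catalog[name]))
print(best)
```

The design point is that the algorithm ranks by fit rather than fame, which is exactly why it could, in principle, surface the otherwise invisible musician.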

Here I have to put my pessimist cap back on for just a minute, for the idea that algorithms can help artists be viable is, as of this writing, just that: a hope. Sadly, there is little evidence for it in reality. This is a point hit home by the recent Atlantic Online article "The Reality of the Music Business Today: 1 Million Plays = $16.89". The algorithm used by the Internet music service Pandora may have helped a million people find musician David Lowery and his song "Low", but its business model seems incapable of providing even successful musicians with meaningful income. The point that the economic model we have built around the "guy with the biggest computer" has been a bust for artists of all sorts is most strongly driven home by the virtual reality pioneer and musician Jaron Lanier. Let's hope Lanier is ultimately wrong and algorithms eventually provide a way of linking artists and their patrons, but we are far, far from there yet. At the very least they should provide artists and writers with powerful tools to create their works.
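The arithmetic behind that Atlantic headline, taken at face value, shows just how bleak the numbers are. The minimum-wage comparison below is my own illustration (assuming the US federal rate of $7.25/hour over a 2,080-hour work year), not a figure from the article:

```python
# The per-play payout implied by the headline "1 Million Plays = $16.89".
plays = 1_000_000
payout = 16.89
per_play = payout / plays
print(f"${per_play:.8f} per play")

# Plays needed just to match one year of US federal minimum wage
# (assumption for illustration: $7.25/hour * 2,080 hours = $15,080).
annual_minimum = 7.25 * 2080
print(round(annual_minimum / per_play))  # on the order of 900 million plays
```

At a fraction of a hundredth of a cent per play, "a million listeners" and "a living" turn out to be separated by roughly three orders of magnitude.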

There are more grounds for optimism. The advance of algorithms is one of the few lit paths out of our current economic malaise. Their rise appears to signal that the deceleration in innovation - which emerged from the gap between the flood of information we could gather and the new discoveries we were making, on the one hand, and our ability to model those discoveries coherently, on the other - may be coming to an end almost as soon as it was identified. Advanced algorithms should allow us to make potent and amazing new models of the natural world. In the long run they may allow us to radically expand the artistic, philosophical and religious horizons of intelligence, creating visions of the world of which we can today barely dream.

On a more practical and immediate level, advanced algorithms that can handle huge moving pieces of information seem perfect for dealing with something like responding to a natural disaster or managing the day-to-day flows of a major city, such as routing traffic or managing services - something IBM is pioneering with its Smart Cities projects in New York City and Rio de Janeiro.
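At the heart of traffic routing sits a classic, well-understood algorithm. Here is a minimal sketch of shortest-path routing (Dijkstra's algorithm) over an invented road network - not IBM's system, just the kind of computation such a system runs constantly as sensor data updates the travel times:

```python
import heapq

# Minimal Dijkstra's shortest-path sketch over an invented road network.
# Edge weights are travel times in minutes; a congested road segment can
# simply be re-weighted as live traffic data comes in.

def shortest_time(graph: dict, start: str, goal: str) -> float:
    """Return the minimum travel time from start to goal, or infinity."""
    queue, visited = [(0, start)], set()
    while queue:
        time, node = heapq.heappop(queue)
        if node == goal:
            return time
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (time + minutes, neighbor))
    return float("inf")

roads = {
    "A": {"B": 5, "C": 2},
    "C": {"B": 1, "D": 7},
    "B": {"D": 4},
    "D": {},
}
print(shortest_time(roads, "A", "D"))  # 7 minutes, via A -> C -> B -> D
```

Scaled from four intersections to a city of millions and fed with real-time data, this is the sort of moving-pieces optimization the paragraph above describes.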

What algorithms are not good at, at least so far - and we can see this in everything from the political aftermath of the Obama campaign, to the war on terrorism, to the 300,000-person protests in Rio this past week despite how "smart" the city is - is expanding their horizon beyond the immediate present to give us solutions to the long-term political, economic and social challenges we confront. Instead they merely act as a globe-sized amplifier of grievances, one that can bring down governments but cannot create a lasting political order. To truly solve our problems we still need the mother of all bots: collective human intelligence. I am still old-fashioned enough to call it democracy.