How to avoid drowning in the Library of Babel

Library of Babel

Between us and the future stands an almost impregnable wall that cannot be scaled. We cannot see over it, or under it, or through it, no matter how hard we try. Sometimes the best way to see the future is by using the same tools we use in understanding the present, which is also, at least partly, hidden from direct view by the dilemma inherent in our use of language. To understand anything we need language and symbols, but neither is the thing we are actually trying to grasp. Even the brain itself, through the senses, is a form of funnel, giving us a narrow stream of information that we conflate with the entire world.

We often use science to get us beyond the box of our heads, and even, to a limited extent, to see into the future. Less discussed is the fact that we also use good metaphors, fables, and stories to attack the wall of the unknown obliquely- to get the rough outlines of something like the essence of reality while still not being able to see it as it truly is.

It shouldn’t surprise us that some of the most skilled writers at this sideways form of wall scaling ended up becoming blind. To be blind, especially to have become blind after a lifetime of being able to see, is to realize that there is a form to the world which is shrouded in darkness, one that can be known only in bare outlines, and only with indirect methods. For the blind, touch goes from being a way to know what something feels like, as it is for the sighted, to an oblique and sometimes more revealing way to learn what something actually is.

The blind John Milton was a genius at using symbolism and metaphor to uncover a deeper and hidden reality, and he probably grasped more deeply than anyone before or after him that our age would be defined not by the chase after reality, but by our symbolic representation of its “value”, especially in the form of our ever more virtual talisman of money. Another, and much more recent, writer skilled at such an approach, whose work was even stranger than Milton’s in its imagery, was Jorge Luis Borges, whose writing defies easy categorization.

What Milton did for our confusion of the sign with the signified in his Paradise Lost, Borges would do for the culmination of this symbolization in the “information age” with perhaps his most famous short story, “The Library of Babel”- a story that begins with the strange line:

The universe (which others call the Library) is composed of an indefinite, perhaps infinite number of hexagonal galleries. (112)

The inhabitants of Borges’ imagined Library (and one needs to say inhabitants rather than occupants, for the Library is not just a structure but is the entire world, the universe in which all human beings exist) “live” in smallish compartments attached to vestibules on hexagonal galleries lined with shelves of books. Mirrors are so placed that when one looks out of their individual hexagon they see a repeating pattern of hexagons- the origin of arguments over whether the Library is infinite or just appears to be so.

All this would be little but an Escher drawing in the form of words were it not for other aspects that reflect Borges’ pure genius. The Library contains not just all books, but all possible books, and therein lies the dilemma of its inhabitants- for if all possible books exist, it is impossible to find any particular book.

Some book explaining the origin of the Library and its system of organization logically must exist, but that book is essentially undiscoverable, surrounded by a perhaps infinite number of other books, the vast majority of which are just nonsensical combinations of letters. How could the inhabitants solve such a puzzle? A sect called the Purifiers thinks it has the answer- destroy all the nonsensical books- but the futility of this project is soon recognized. Destruction barely puts a dent in the sheer number of nonsensical books in the Library.

The situation within the Library began with the hope that all knowledge, or at least the key to all knowledge, would someday be discovered, but it has led to despair, as the narrator reflects:

I am perhaps misled by old age and fear but I suspect that the human species- the only species- teeters at the verge of extinction, yet that the Library- enlightened, solitary, infinite, perfectly unmoving, armed with precious volumes, pointless, incorruptible and secret will endure. (118)

“The Library of Babel”, published in 1941, before the “information age” was even an idea, seems to capture one of the paradoxes of our era. For the period in which we live is indeed experiencing an exponential rise of knowledge, of useful information, but this is not the whole of the story, and does not fully reveal what is not just a wonder but a predicament.

The best exploration of this wonder and predicament is James Gleick’s brilliant recent history of the “information age”, The Information: A History, A Theory, A Flood. In that book Gleick gives us a history of information from primitive times to the present, but the idea that everything- from our biology to our society to our physics- can be understood as a form of information is a very recent one. Gleick himself uses Borges’ strange tale of the Library of Babel as a metaphor through which we can better grasp our situation.

The imaginative leap into the information age dates from seven years after Borges’ story was published, in 1948, when Claude Shannon, a researcher at Bell Labs, published his paper “A Mathematical Theory of Communication”. Shannon was interested in how one could communicate messages over channels polluted with noise, but his seemingly narrow aim ended up being revolutionary for fields far beyond his purview. The concept of information was about to take over biology- in the form of DNA- and would conquer communications in the form of the Internet and exponentially increasing powers of computation- the power to compress and represent things as bits.

The physicist John Wheeler would come to see the physical universe itself and its laws as a form of digital information- “It from bit”:

Otherwise put, every “it” — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. “It from bit” symbolizes the idea that every item of the physical world has at bottom — a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

Gleick finds this understanding of reality as information to be both true in a deep sense and in many ways troubling. We are, it seems, constantly told that we are producing more information on a yearly basis than in all of human history combined, though what this actually means is another thing. For there is a wide and seemingly unbridgeable gap between information and what would pass for knowledge or meaning. Shannon himself acknowledged this gap, and deliberately ignored it, for the meaningfulness of the information transmitted was, he thought, “irrelevant to the engineering problem” involved in transmitting it.

The Internet operates in this way, which is why a digital maven like Nicholas Negroponte is critical of the tech culture’s shibboleth of “net neutrality”. To put his example humorously: a digital copy of a regular-sized novel like Fahrenheit 451 is the equivalent of about one second of video from “What Does the Fox Say?”, which, however much it cracks me up, just doesn’t seem right. The information content of something says almost nothing about its meaningfulness, and this is a problem for anyone looking at claims that we are producing incredible amounts of “knowledge” compared to any age in the past.
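The novel-versus-video comparison can be sanity-checked with back-of-the-envelope arithmetic. The figures below are rough assumptions of mine, not measurements or Negroponte's own numbers, but they show that a plain-text novel and a single second of compressed video really do land in the same ballpark of bytes:

```python
# Back-of-the-envelope comparison -- all figures are rough assumptions.
words_in_novel = 46_000        # Fahrenheit 451 runs to roughly 46,000 words
bytes_per_word = 6             # ~5 letters plus a space, one byte each in plain ASCII
novel_bytes = words_in_novel * bytes_per_word       # 276,000 bytes, ~276 KB

video_bitrate_bps = 3_000_000  # ~3 Mbit/s, a plausible HD streaming rate
one_second_of_video_bytes = video_bitrate_bps // 8  # 375,000 bytes, ~375 KB

print(f"novel:        ~{novel_bytes // 1000} KB")
print(f"1 s of video: ~{one_second_of_video_bytes // 1000} KB")
```

A few hundred kilobytes either way- which is exactly the point: measured in bits, decades of a novelist's craft and one second of a viral video are interchangeable.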

The problem is perhaps even worse than at first blush. Take these two sets of numbers:

9, 2, 9, 5, 5, 3, 4, 7, 7, 10

And:

9, 7, 4, 3, 8, 5, 4, 2, 6, 3

The first set is a truly random sequence of numbers between 1 and 10, created using a random number generator. The second set has a pattern. Can you see it?

In the second set every other number is a prime number between 2 and 7. Knowing this rule vastly decreases the number of possible sequences the second set could be. But here’s the important takeaway- the first set of numbers contains vastly more information than the second. Randomness increases the information content of a message. (If anyone out there with a better grasp of information theory thinks this example is somehow misleading or incorrect, please advise.)

The new tools of the age have meant that a flood of bits now inundates us, the majority of which is without deep meaning or value, or even any meaning at all. We are confronted with a burst dam of information pouring itself over us. Gleick captures our dilemma with a quote from the philosopher of “enlightened doomsaying” Jean-Pierre Dupuy:

I take “hell” in its theological sense, i.e. a place which is devoid of grace- the undeserved, unnecessary, surprising, unforeseen. A paradox is at work here: ours is a world about which we pretend to have more and more information but which seems to us increasingly devoid of meaning. (418)

It is not that useful and meaningful information is not increasing, but that our exponential increase in information of value- medical and genetic information and the like- has risen in tandem with an exponential increase in nonsense and noise. This is the flood of Gleick’s title, and the way to avoid drowning in it is to learn how to swim, or get oneself a boat.

A key to not drowning is to know when to hold your breath. Pushing for more and more data collection can be a waste of resources and time when it leads one away from actionable knowledge. If the flood of useless information you feel compelled to process seems overwhelming now, just wait for the full flowering of the “Internet of Things”, when everything you own, from your refrigerator to your bathroom mirror, will be “talking” to you.

Don’t get me wrong, there are aspects of the Internet of Things that are absolutely great. Take, for example, the work of the company Indoo.rs, which has created a sort of digital map throughout the San Francisco International Airport that allows a blind person using Bluetooth to know: “the location of every gate, newsstand, wine bar and iPhone charging station throughout the 640,000-square-foot terminal.”

This is exactly the kind of purpose the Internet of Things should be used for- a modern form of miracle that would have brought a different form of sight to those blind like Milton and Borges. We could also take a cue from the powers of imagination found in these authors and use such technologies to make our experience of the world deeper, more multifaceted, and immersive. Something like that is a far cry from an “augmented reality” covered in advertisements, or work like that of the neuroscientist David Eagleman (whose projects I otherwise like) on a haptic belt that would massage its wearer into knowing instantaneously the contents of the latest Twitter feed or stock market gyrations.

Where our digital technology fails to make our lives richer, it should be helping us automate- that is, make automatic and without thought- all that is least meaningful in our lives; this is what machines are good at. In areas of life that are fundamentally empty we should be decreasing, not increasing, our cognitive load. The drive for connectivity seems to be pushing in the opposite direction, forcing us to think about things that were formerly handled by our thoughtless machines, which lacked precision but also didn’t give us much to think about.

Yet even before the Internet of Things has fully bloomed we already have problems. As Gleick notes, in all past ages it was a given that most of our knowledge would be lost. Hence the logic of a wonderful social institution like the New York Public Library. Gleick writes with sharp irony:

Now expectations have inverted. Everything may be recorded and preserved…. Where does it end? Not with the Library of Congress.

Perhaps our answer is something like the Internet Archive, whose work capturing and recording the internet as it comes into being and disappears is both amazing and might someday prove essential to the survival of our civilization- the analog of the copies made of the famed collection at the lost Library of Alexandria.

As individuals, however, we are still faced with the monumental task of separating the wheat from the chaff in this flood of information, the gems from the twerks. Our answer has been services that aggregate and organize this information, where, as Gleick writes:

 Searching and filtering are all that stand between this world and the Library of Babel.  (410)

And here is the one problem I had with Gleick’s otherwise mind-blowing book: he doesn’t really deal with questions of power. A good deal of power in our information society is shifting to those who provide these kinds of aggregating and sifting capabilities, and who claim on that basis to be able to make sense of a world brimming with noise and nonsense. The NSA makes this claim, and is precisely the kind of intelligence agency one would predict to emerge in an information age.

Anne Neuberger, Special Assistant to NSA Director Michael Rogers and Director of the NSA’s Commercial Solutions Center, recently gave a talk at the Long Now Foundation. It was a hostile audience of Silicon Valley types who felt burned by the NSA’s undermining of digital encryption, of the reputation of American tech companies, and of civil liberties. But nothing captured the essence of our situation better than a question from Paul Saffo.

Paul Saffo: It seems like the intelligence community have always been data optimists. That if we just had more data- and we know that after every crisis, it’s ‘well if we’d just have had more data we’d connect the dots.’ And there’s a classic example- I think of 9-11, but this was said by the Church Committee in 1975: ‘It’s become common to translate criticism of intelligence results into demands for enlarged data collection.’ Does more data really lead to more insight?

As a friend is fond of saying, ‘perhaps the data haystack that the intelligence community has created has become too big to ever find the needle in.’

Neuberger’s telling answer was that the NSA needed better analytics, something the agency could learn from the big data practices of business. Gleick might have compared her answer to the plea for a demon made by a character in a story by Stanislaw Lem, which he quotes:

 We want the Demon, you see, to extract from the dance of atoms only information that is genuine like mathematical formulas, and fashion magazines.  (The Information 425)

Perhaps the best possible future for the NSA’s internet-hoovering facility at Bluffdale would be for it to someday end up as a memory bank of our telecommunications in the 21st century, a more expansive version of the Internet Archive. What it is not, however, is a way to anticipate the future on either small or large scales, or at least that is what history tells us. More information or data does not equal more knowledge.

We are forced to find meaning in the current of the flood. It is a matter of learning what is important to pay attention to and what to ignore, what we should remember and what we can safely forget, what the models we use to structure this information can tell us and what they can’t, what we actually know along with the boundaries of what we don’t. It is not an easy job, or as Gleick says:

 Selecting the genuine takes work; then forgetting takes even more work.

And it’s a job, it seems, that just gets harder as our communications systems become ever more burdened by deliberate abuse (69.6 percent of email in 2014 was spam, up almost 25 percent from a decade earlier), and in an age when quite extraordinary advances in artificial intelligence are being used not to solve our myriad problems, but to construct an insidious architecture of government and corporate stalking via our personal “data trails”, which some confuse with our “self”- a Laplacian ambition that Dag Kittlaus unguardedly praises as “incremental revenue nirvana.” Pity the Buddha.

As individuals, institutions, and societies we need to find ways to stay afloat and navigate the flood of information, and its abuse, that has burst upon us, for as in Borges’ Library there is no final escape from the world it represents. As Gleick powerfully concludes The Information:

We are all patrons of the Library of Babel now and we are the librarians too.

The Library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors looking for lines of meaning amid the leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors in which we may recognize creatures of the information.

 

Can Machines Be Moral Actors?

Tin Man by Greg Hildebrandt

Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science fiction: is it possible to make moral machines, or in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one’s tenure at risk, but as the revolution in robotics has rolled forward it has become an issue necessary to grapple with, and now.

Perhaps the most surprising thing to emerge from the effort to think through what moral machines might mean has been less what it has revealed about our machines or their potential than what it has brought into relief about our own very human moral behavior and reasoning. But I am getting ahead of myself; let me begin at the beginning.

Step back for a second and think about the largely automated systems that surround you right now, not in any hypothetical future with our version of Rosie from the Jetsons, or R2D2, but in the world in which we live today. The car you drove in this morning and the computer you are reading this on were brought to you in part by manufacturing robots. You may have stumbled across this piece via the suggestion of a search algorithm programmed but not really run by human hands. The electrical system supplying the computer or tablet on which you are reading is largely automated, as are the actions of the algorithms now trying to grow your 401K or pension fund. And while we might not be able to say what this automation will look like ten or twenty years down the road, we can nearly guarantee that, barring some catastrophe, there will only be more of it, and given the lag between software and hardware development this will be the case even should Moore’s Law completely stall out on us.

The question ethicists are asking isn’t really whether we should build an ethical dimension into these types of automated systems, but whether we can, and what doing so would mean for humans’ status as moral agents. Would building AMAs mean a decline in human moral dignity? Personally, I can’t really say I have a dog in this fight, but hopefully that will help me to see the views of all sides with more clarity.

Of course, the most ethically fraught area in which AMAs are being debated is warfare. There seems to be an evolutionary pull to deploy machines in combat with greater and greater levels of autonomy. The reason is simple: advanced nations want to minimize risks to their own soldiers, and remote-controlled weapons are at risk of being jammed and cut off from their controllers. Thus, machines need to be able to act independently of human puppeteers. A good case can be made, though I will not try to make it here, that building machines that can make autonomous decisions to actually kill human beings would constitute the crossing of a threshold humanity would have preferred to remain on this side of.

Still, while it’s typically a bad idea to attach one’s ethical precepts to what is technically possible, here it may serve us well, at least for now. The best case to be made against fully autonomous weapons is that artificial intelligence is nowhere near being able to make ethical decisions regarding use of force on the battlefield, especially on mixed battlefields with civilians- which is what most battlefields are today. As argued by Guarini and Bello in “Robotic Warfare: some challenges in moving from non-civilian to civilian theaters”, current forms of AI are not up to the task of threat ascription, can’t do mental attribution, and are unable to tackle the problem of isotropy- a fancy word that just means “the relevance of anything to anything”.

This is a classical problem of error through correlation, something that seems to be inherent in not just these types of weapons systems, but the kinds of specious correlation made by big-data algorithms as well. It is the difficulty of gauging a human being’s intentions from the pattern of his or her behavior. Is that child holding a weapon or a toy? Is that woman running towards me because she wants to attack or is she merely frightened and wants to act as a shield between me and her kids? This difficulty in making accurate mental attributions doesn’t need to revolve around life and death situations. Did I click on that ad because I’m thinking about moving to light beer or because I thought the woman in the ad looked cute? Too strong a belief in correlation can start to look like belief in magic- thinking that drinking the lite beer will get me a reasonable facsimile of the girl smiling at me on the TV.

Algorithms and the machines they run are still pretty bad at this sort of attribution- bad enough, at least right now, that there’s no way they can be deployed ethically on the battlefield. Unfortunately, however, the debate about ethical machines often gets lost in the necessary, but much less comprehensive, fight over “killer robots”- this despite the fact that robots acting as weapons will be only a small part of the ethical remit of our new and much more intelligent machines.

One might think that Asimov already figured all this out with his Three Laws of Robotics: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The problem is that the types of “robots” we are talking about bear little resemblance to the ones dreamed up by the great science fiction writer, even if Asimov merely intended to explore the possible tensions between the laws.

The problems robotics ethicists face today have more in common with the “Trolley Problem”, where a machine has to decide between competing goods, than with contradictory imperatives. A self-driving car might be faced with the choice of hitting a pedestrian or hitting a school bus. Which should it choose? If your health care services are being managed by an algorithm, what are its criteria for allocating scarce resources?

An approach to solving these types of problems might be to consistently use one of the major moral philosophies- Kantian, utilitarian, virtue ethics, and the like- as a guide for the types of decisions machines may make. Kant’s categorical imperative- “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction”- or aiming for utilitarianism’s “greatest good of the greatest number” might seem like a good approach, but as Wendell Wallach and Colin Allen point out in their book Moral Machines, such seemingly logically straightforward rules are for all practical purposes incalculable.

Perhaps if we ever develop real quantum computers capable of scanning through every possible universe we would have machines that could solve these problems, but for the moment we do not. Oddly enough, the moral philosophy most dependent on human context, virtue ethics, seems to have the best chance of being substantiated in machines, but the problem is that virtues are highly culture-dependent. We might get machines that make vastly different and opposed ethical decisions depending on the culture in which they are embedded.

And here we find perhaps the biggest question that has been raised by trying to think through the problem of moral machines. For if machines are deemed incapable of using any of the moral systems we have devised to come to ethical decisions, we might wonder whether human beings really can either. This is the case made by Anthony Beavers, who argues that the very attempt to make machines moral might be leading us towards a sort of ethical nihilism.

I’m not so sure. Or rather, as long as our machines, whatever their degree of autonomy, merely reflect the will of their human programmers I think we’re safe declaring them moral actors but not moral agents, and this should save us from slipping down the slope of ethical nihilism Beavers is rightly concerned about. In other words, I think Wallach and Allen are wrong in asserting that intentionality doesn’t matter when judging an entity to be a moral agent or not. Or as they say:

Functional equivalence of behavior is all that can possibly matter for the practical issues of designing AMAs. (Emphasis theirs, p. 68)

Wallach and Allen want to repeat the same behaviorist mistake made by Alan Turing in his assertion that consciousness didn’t matter when it came to the question of intelligence- a blind alley they follow so far as to propose their own version of the infamous test, what they call a Moral Turing Test, or MTT. Yet it’s hard to see how we can grant full moral agency to any entity that has no idea what it is actually deciding- indeed, one that would just as earnestly do the exact opposite, such as deliberately target women and children, if its program told it to do so.

Still, for the immediate future, and with the frightening and sad exception of robots on the battlefield, the role of moral machines seems less likely to involve making ethical decisions themselves than helping human beings make them. In Moral Machines Wallach and Allen discuss MedEthEx, an amazing algorithm that helps healthcare professionals with decision making. It’s an exciting time for moral philosophers, who will get to turn their ideas into all sorts of apps and algorithms that will hopefully aid moral decision making for individuals and groups.

Some might ask: is leaning on algorithms to help one make moral decisions a form of “cheating”? I’m not sure, but I don’t think so. A moral app reminds me of a story I heard once about George Washington that has stuck with me ever since. Washington, from the time he was a small boy into old age, carried a little pocket virtue guide with him, full of all kinds of rules of thumb for how to act. Some might call that a crutch, but I just call it a tool, and human beings are the only animal that can’t thrive without its tools. The very moral systems we are taught as children might equally be thought of as tools of this sort; Washington only had a way of keeping them near at hand.

The trouble comes when one becomes too dependent on such “automation” and in the process loses the underlying capabilities once the tool is removed. Yet this is a problem inherent in all technology. The ability of people to do long division atrophies because they always have a calculator at hand. I have a heck of a time making a fire without matches or a lighter, and would have difficulty dealing with cold weather without clothes.

It’s certainly true that our capacity to be good people is more important to our humanity than our ability to do maths in our heads or on scratch paper, or even to make fire or protect ourselves from the elements, so we’ll need to be especially conscious, as moral tools develop, to keep our natural ethical skills sharp.

This again applies more broadly than the issue of moral machines, but we also need to be able to better identify the assumptions that underlie the programs running in the instruments that surround us. It’s not so much “code or be coded” as the ability to unpack the beliefs which programs substantiate. Perhaps someday we’ll even have something akin to food labels on the programs we use that will inform us exactly what those assumptions are. In other words, we have a job in front of us, for whether they help us or hinder us in our goal to be better people, machines remain, for the moment at least, mere actors, and a reflection of our own capacity for good and evil, rather than moral agents in themselves.

 

Sherlock Holmes as Cyborg and the Future of Retail

Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st-century world. The thing I really enjoy about the show is that it’s the first time I can recall anyone managing to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.

Part of the genius of the series is that the characters around Sherlock, especially Watson, are constantly trying to press upon him the fact that he is indeed human, kind of in the same way Bones is the emotional foil to Spock.

Sherlock uses an ingenious device to display Holmes’ famous powers of deduction. When Sherlock is focusing his attention on a character, words will float around them displaying some relevant piece of information- say, the price of a character’s shoes, what kind of razor they used, or what they did the night before. The first couple of times I watched the series I had the eerie feeling that I’d seen this device before, but I couldn’t put my finger on it. And then it hit me: I’d seen it in a piece of design fiction called Sight that I’d written about a while back.

In Sight the male character is equipped with contact lenses that act as an advanced form of Google Glass. This allows him to surreptitiously access information such as the social profile and real-time emotional reactions of a woman he is out on a date with. The whole thing is downright creepy, and appears to end with the woman’s rape and perhaps even her murder- the viewer is left to guess the outcome.

It’s not only the style of heads-up display containing intimate personal detail that put me in mind of the BBC’s Sherlock, where the hero has these types of cyborg capabilities not on account of technology but built into his very nature; it’s also the idea that there is this sea of potentially useful information just sitting there on someone’s face.

In the future, it seems, anyone who wants to will have Sherlock Holmes’ types of deductive powers, but that got me thinking: who in the world would want to? I mean, it’s not like we’re not already bombarded with streams of useless data we are unable to process. Access to Sight-level amounts of information about everyone we came into contact with and didn’t personally know would squeeze our mental bandwidth down to dial-up speed.

I think the not-knowing part is important, because you’d really only want a narrow stream of new information about a person you were already well acquainted with. Something like the information people now put on their FaceBook wall. You know, it’s so-and-so’s niece’s damned birthday, and your monster-truck driving, barrel-necked cousin Tony just bagged something that looks like a mastodon on his hunting trip to North Dakota.

Certainly the ability to scan a person like a QR code would come in handy for police or spooks, and will likely be used by even more sophisticated criminals and creeps than the ones we have now. These groups work in a very narrow gap with people they don’t know all the time. Besides some medical professionals, there’s one other large group I can think of that works in this same narrow gap, for whom it would regularly be a benefit, and not an inefficiency, to have the maximum amount of information available about the individual standing in front of them: salespeople.

Think about a high-end retailer such as a jeweler, or perhaps more commonly an electronics outlet such as the Apple Store. It would certainly be of benefit to a salesperson to be able to instantly gather details about what an unknown person who walked into the store did for a living, their marital status, number of family members, and even their criminal record. There is also the matter of making the sale itself, and here the kinds of feedback data seen in Sight would come in handy. Such feedback data is already possible across multiple technologies and only needs to be combined. All one would need is a name, or perhaps even just a picture.

Imagine it this way: you walk into a store to potentially purchase a high-end product. The salesperson wears the equivalent of Google Glasses. They ask for your name and you give it to them. The salesperson is able, without you ever knowing, to gather up everything publicly available about you on the web, after which they can buy your profile, purchasing, and browser history, again surreptitiously, perhaps by just blinking, from a big data company like Acxiom, and tailor their pitch to you. This is similar to what happens now when you are solicited through targeted ads while browsing the Internet, and perhaps the real future of such targeted advertising, as the success of FaceBook in mobile shows, lies in the physical rather than the virtual world.
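The pitch-tailoring loop imagined above can be sketched as a toy pipeline. To be clear, every name, data source, and function below is a hypothetical stand-in; no real data-broker API works this way.

```python
# Toy sketch of the surreptitious pitch-tailoring loop imagined above.
# All names, data, and functions are hypothetical placeholders.

PUBLIC_WEB = {"jane doe": {"occupation": "architect"}}            # step 1 source
BROKER_PROFILES = {"jane doe": {"recent_searches": ["tablet reviews"]}}  # step 2 source

def scrape_public_web(name):
    """Step 1: gather whatever is publicly available about the customer."""
    return PUBLIC_WEB.get(name.lower(), {})

def buy_broker_profile(name):
    """Step 2: 'blink' to purchase a profile from a big data company."""
    return BROKER_PROFILES.get(name.lower(), {})

def tailor_pitch(name):
    """Step 3: fold both sources into a targeted sales pitch."""
    profile = {**scrape_public_web(name), **buy_broker_profile(name)}
    interest = profile.get("recent_searches", ["our products"])[0]
    return f"Since you've been looking at {interest}, may I show you our latest model?"

print(tailor_pitch("Jane Doe"))
```

The unsettling part, as in Sight, is that nothing in this loop requires the customer’s consent or even awareness; each step runs entirely on the salesperson’s side.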

In the Apple Store example, the salesperson would be able to know what products you owned and your use patterns, perhaps getting some of this information directly from your phone, and therefore be able to pitch to you the accessories and upgrades most likely to make a sale.

The next layer, reading your emotional reactions, is a little trickier, but again much of the technology already exists or is in development. We’ve all heard those annoying messages when dealing with customer service over the phone that bleat at us “This call may be monitored….” One might think this recording is done as insurance against lawsuits, and that is certainly one of the reasons. But, as Christopher Steiner points out in his Automate This, another major reason for these recordings is to refine algorithms that help customer service representatives filter customers by emotional type and interact with them according to such types.
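Steiner’s point about sorting callers into emotional types can be caricatured with a deliberately crude keyword classifier. Real systems are trained on acoustic and textual features of recorded calls; the keyword lists and scoring rule here are toy assumptions, meant only to show the filtering idea.

```python
# Crude keyword-based sketch of sorting callers into "emotional types".
# Production systems use trained models over audio; this is only a toy.

EMOTION_KEYWORDS = {
    "angry": ["furious", "unacceptable", "ridiculous"],
    "anxious": ["worried", "afraid", "urgent"],
}

def classify_caller(transcript):
    words = transcript.lower().split()
    # Count keyword hits per emotional type.
    scores = {etype: sum(w in kws for w in words)
              for etype, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(classify_caller("This is ridiculous and totally unacceptable"))  # angry
```

A routing system would then hand “angry” callers to representatives scripted for de-escalation, which is the kind of filtering Steiner describes.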

These types of algorithms will only get better, and work on social robots, used largely for medical care and emotional therapy, is moving the rapid-fire algorithmic gauging of and response to human emotions from the audio to the visual realm.

If this use of Sherlock Holmes-type powers by those with a power or wealth asymmetry over you makes you uncomfortable, I’m right there with you. But when it gets you down you might try laughing about it. One thing we need to keep in mind, both for sanity’s sake and so that we can have a more accurate gauge of how people in the future might preserve elements of their humanity in the face of technological capacities we today find new and often alien, is that it would have to be a very dark future indeed for there not to be a lot to laugh at in it.

Writers of utopia can be deliberately funny; Thomas More’s Utopia is meant to crack the reader up. Dystopian visions, whether of the fictional or nonfictional sort, avoid humor for a reason. The whole point of their work is to get us to avoid such a future in the first place, not, as is the role of humor, to make almost unlivable situations more human, or to undermine power by making it ridiculous, pointing out that the emperor has no clothes and defeating the devil by laughing at him. Dystopias are a laughless affair.

Just like in Sherlock, it’s a tough trick to pull off, but human beings of the future, as long as there are actually still human beings, will still find the world funny. In terms of technology used as a tool of power, the powerless, as they always have, are likely to get their kicks subverting it, twisting it to the point of breaking its underlying assumptions.

Laughs will be had at epic fails and the sheer ridiculousness of control freaks trying to squish an unruly, messy world into frozen and pristine lines of code. Life in the future will still sometimes feel like a sitcom, even if the sitcoms of that time are pumped directly into our brains through nano-scale neural implants.

Utopias and dystopias emerging from technology are two sides of a crystal-clear future which, because of their very clarity, cannot come to pass. What makes ethical judgment of the technologies discussed above, indeed all technology, difficult is their damned ambiguity, an ambiguity that largely stems from dual use.

Persons suffering from autism really would benefit from a technology that allowed them to accurately gauge, and guide their responses to, the emotional cues of others. An EMT really would be empowered if at the scene of a bad accident they could instantly access such a stream of information from a victim’s name or even just their face, especially when such data contains relevant medical information, as would a person working in child protective services or any of myriad forms of counseling.

Without doubt, such technology will be used by stalkers and creeps, but it might also be used to help restore trust and emotional rapport to a couple headed for divorce.

I think Sherry Turkle is essentially right, that the more we turn to technology to meet our emotional needs, the less we turn to each other. Still, the real issue isn’t technology itself, but how we are choosing to use it, and that’s because technology by itself is devoid of any morality and meaning, even if it is a cliché to say it. Using technology to create a more emotionally supportive and connected world is a good thing.

As Louise Aronson said in a recent article for The New York Times on social robots to care for the elderly and disabled:

But the biggest argument for robot caregivers is that we need them. We do not have anywhere near enough human caregivers for the growing number of older Americans. Robots could help solve this work-force crisis by strategically supplementing human care. Equally important, robots could decrease high rates of neglect and abuse of older adults by assisting overwhelmed human caregivers and replacing those who are guilty of intentional negligence or mistreatment.

Our Sisyphean condition is that any gain in our capacity to do good seems to also increase our capacity to do ill. The key, I think, lies in finding ways to contain and control the ill effects. I wouldn’t mind our versions of Sherlock Holmes using the cyborg-like powers we are creating, but I think we should be more than a little careful they don’t also fall into the hands of our world’s far too many Moriartys, though no matter how devilish these characters might be, we will still be able to mock them as buffoons.

Why the Castles of Silicon Valley are Built out of Sand

Ambrogio Lorenzetti, Temperance with an hourglass, from the Allegory of Good Government

If you get just old enough, one of the lessons living through history teaches you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from 100 AD into the 1600s. Perhaps more dreams than not simply refuse to die; they hang on like ghosts, or ghouls, zombies or vampires, or whatever freakish version of the undead suits your fancy. Naming them all would take up more room than I have in a post, and would no doubt start one too many arguments, all of our lists being different. Here, I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well the dream I’ll declare dead will have its defenders.

The fact of the matter is, I am not even sure what to call the dream I’ll be talking about. Perhaps digitopia is best. It was the dream that emerged sometime in the 1980s and went mainstream in the heady 1990s that this new thing we were creating called the “Internet,” and the economic model it permitted, was bound to lead to a better world of more sharing, more openness, more equity, if we just let its logic play itself out over a long enough period of time. Almost all the big-wigs in Silicon Valley, the Larry Pages and Mark Zuckerbergs and Jeff Bezoses and Peter Diamandises, still believe this dream, and walk around like 21st-century versions of Mary Magdalene claiming they can still see what more skeptical souls believe has passed.

By far, the best Doubting Thomas of digitopia we have out there is Jaron Lanier. In part his power in declaring the dream dead comes from the fact that he was there when the dream was born and was once a true believer. Like Kevin Bacon in Hollywood, take any intellectual heavy hitter of digital culture, say Marvin Minsky, and you’ll find Lanier having some connection. Lanier is no Luddite, so when he says there is something wrong with how we have deployed the technology he in part helped develop, it’s right and good to take the man seriously.

The argument Lanier makes in his most recent book, Who Owns the Future?, against the economic model we have built around digital technology is, in a nutshell, this: what we have created is a machine that destroys middle-class jobs and concentrates information, wealth and power. Say what? Hasn’t the Internet and mobile technology democratized knowledge? Don’t average people have more power than ever before? The answer to both questions is no, and the reason why is that the Internet has been swallowed by its own logic of “sharing.”

We need to remember that the Internet really got ramped up when it started to be used by scientists to exchange information with each other. It was built on the idea of openness and transparency, not to mention a set of shared values. When the Internet leapt out into public consciousness no one had any idea of how to turn this sharing capacity and transparency into the basis for an economy. It took the aftermath of the dot-com bubble and bust for companies to come up with a model of how to monetize the Internet, and almost all of the major tech companies that dominate the Internet, at least in America- and there are only a handful: Google, FaceBook and Amazon- now follow some variant of this model.

The model is to aggregate all the sharing that the Internet seems to naturally produce and offer it, along with other “complements,” for “free” in exchange for one thing: the ability to monitor, measure and manipulate through advertising whoever uses their services. Like silicon itself, it is a model that is ultimately built out of sand.

When you use a free service like Instagram there are three ways it’s ultimately paid for. The first we all know about: the “data trail” we leave when using the site is sold to third-party advertisers, which generates income for the parent company, in this case FaceBook. The second and third ways the service is paid for I’ll get to in a moment, but the first way itself opens up all sorts of observations and questions that need to be answered.

We had thought the information (and ownership) landscape of the Internet was going to be “flat.” Instead, it’s proven to be extremely “spiky.” What we forgot in thinking it would turn out flat was that someone would have to gather and make useful the mountains of data we were about to create. The big Internet and telecom companies are these aggregators, able to make this data actionable because they possess the most powerful computers on the planet, which allow them not only to route and store this data but to mine it for value. Lanier has a great name for the biggest of these companies- he calls them Siren Servers.

One might think that which particular Siren Servers sit at the head of the pack is a matter of which is the most innovative. Not really. Rather, the largest Siren Servers have become so rich they simply swallow any innovative company that comes along. FaceBook gobbled up Instagram because it offered a novel and increasingly popular way to share photos.

The second way a free service like Instagram is paid for, and this is one of the primary concerns of Lanier in his book, is that it essentially cannibalizes to the point of destruction the industry that used to provide the service- an industry which, in the “old economy,” also supported lots of middle-class jobs.

Lanier states the problem bluntly:

 Here’s a current example of the challenge we face. At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography is Instagram. When Instagram was sold to FaceBook for a billion dollars in 2012, it employed only thirteen people.  (p.2)

Calling Thomas Piketty….

As Bill Davidow argued recently in The Atlantic the size of this virtual economy where people share and get free stuff in exchange for their private data is now so big that it is giving us a distorted picture of GDP. We can no longer be sure how fast our economy is growing. He writes:

 There are no accurate numbers for the aggregate value of those services but a proxy for them would be the money advertisers spend to invade our privacy and capture our attention. Sales of digital ads are projected to be $114 billion in 2014, about twice what Americans spend on pets.

The forecasted GDP growth in 2014 is 2.8 percent and the annual historical growth rate of middle quintile incomes has averaged around 0.4 percent for the past 40 years. So if the government counted our virtual salaries based on the sale of our privacy and attention, it would have a big effect on the numbers.

Fans of Joseph Schumpeter might see all this churn as capitalism’s natural creative destruction, and be unfazed by the government’s inability to measure this “off the books” economy, because what the government cannot see it cannot tax.

The problem is that, unlike other times in our history, technological change doesn’t seem to be creating new middle-class jobs as fast as it destroys old ones. Lanier was particularly sensitive to this development because he always had his feet in two worlds- the world of digital technology and the world of music. Not the Katy Perry world of superstar music, but the world of people who made a living selling local albums, playing small gigs, and, even more importantly, providing the services that made this mid-level musical world possible. Lanier had seen how the digital technology he loved and helped create had essentially destroyed the middle-class world of musicians he also loved and had grown up in. His message for us all was that the Siren Servers are coming for you.

The continued advance of Moore’s Law, which, according to Charlie Stross, will play out for at least another decade or so, means not so much that we’ll achieve AGI, but that machines will be just smart enough to automate some of the functions we had previously thought only human beings were capable of doing. I’ll give an example of my own. For decades the GED test, which people pursue to obtain a high school equivalency diploma, has had an essay section. Thousands of people were needed to score these essays by hand, most of them paid to do so. With the new, computerized GED test this essay scoring has been completely automated, and human readers made superfluous.

This brings me to the third way these new digital capabilities are paid for. They cannibalize work human beings have already done to profit a company that presents and sells its services as a form of artificial intelligence. As Lanier writes of Google Translate:

It’s magic that you can upload a phrase in Spanish into the cloud services of a company like Google or Microsoft, and a workable, if imperfect, translation to English is returned. It’s as if there’s a polyglot artificial intelligence residing up there in that great cloud of server farms.

But that is not how cloud services work. Instead, a multitude of examples of translations made by real human translators are gathered over the Internet. These are correlated with the example you send for translation. It will almost always turn out that multiple previous translations by real human translators had to contend with similar passages, so a collage of those previous translations will yield a usable result.

A giant act of statistics is made virtually free because of Moore’s Law, but at core the act of translation is based on real work of people.

Alas, the human translators are anonymous and off the books. (19-20)
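Lanier’s “giant act of statistics” can be caricatured in a few lines: store phrases human translators have already produced, then answer a new request by returning the translation of the closest stored phrase. Real systems correlate millions of parallel texts with far more sophisticated statistics; the tiny corpus and word-overlap similarity below are toy assumptions, not how Google Translate is actually built.

```python
# Toy caricature of translation-by-correlation: reuse the stored human
# translation whose source phrase overlaps most with the input.

HUMAN_TRANSLATIONS = {   # a tiny stand-in for millions of scraped human pairs
    "buenos dias amigo": "good morning friend",
    "donde esta la biblioteca": "where is the library",
}

def overlap(a, b):
    """Similarity = number of shared words (a crude stand-in for correlation)."""
    return len(set(a.split()) & set(b.split()))

def translate(phrase):
    """Return the human translation of the most similar stored source phrase."""
    best_source = max(HUMAN_TRANSLATIONS, key=lambda s: overlap(s, phrase))
    return HUMAN_TRANSLATIONS[best_source]

print(translate("donde esta la playa"))  # reuses the closest human translation
```

The point the sketch makes concrete is Lanier’s: the machine contributes only the matching statistics, while every word of the output was authored by an uncredited human.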

The question all of us should be asking ourselves is not “could a machine be me?” with all of our complexity and skills, but “could a machine do my job?” the answer to which, in 9 cases out of 10, is almost certainly- “yes!”

Okay, so that’s the problem; what is Lanier’s solution? It is not that we pull a Ned Ludd and break the machines, or even try to slow down Moore’s Law. Instead, what he wants us to do is start treating our personal data like property. If someone wants to know my buying habits they have to pay a fee to me, the owner of this information. If some company uses my behavior to refine their algorithm I need to be paid for this service, even if I was unaware I had helped in such a way. Lastly, anything I create and put on the Internet is my property. People are free to use it as they choose, but they need to pay me for it. In Lanier’s vision each of us would be the recipient of a constant stream of micropayments from Siren Servers who are using our data and our creations.
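Lanier’s proposal can be sketched as a provenance ledger: every commercial use of a piece of personal data triggers a micropayment back to its originator. The flat fee, class design, and data model below are invented for illustration; Who Owns the Future? argues for the principle without specifying an implementation.

```python
# Toy sketch of Lanier's micropayment idea: a ledger that credits the
# originator of a datum each time a Siren Server makes commercial use of it.
# The flat fee and data model are invented for illustration only.
from collections import defaultdict

class MicropaymentLedger:
    def __init__(self, fee_per_use=0.001):
        self.fee = fee_per_use
        self.provenance = {}                # datum id -> originator
        self.balances = defaultdict(float)  # originator -> accrued payments

    def register(self, datum_id, originator):
        """Record who created a piece of data."""
        self.provenance[datum_id] = originator

    def use(self, datum_id):
        """A commercial use routes a micropayment to the originator."""
        owner = self.provenance[datum_id]
        self.balances[owner] += self.fee
        return owner

ledger = MicropaymentLedger()
ledger.register("photo-123", "alice")
for _ in range(5000):   # an algorithm trains on Alice's photo 5,000 times
    ledger.use("photo-123")
print(f"alice is owed ${ledger.balances['alice']:.2f}")
```

Even this toy makes the hard part visible: the whole scheme depends on universal, trustworthy provenance tracking, which is exactly what the international objections discussed below call into question.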

Such a model is very interesting to me, especially in light of other fights over data ownership, namely the rights of indigenous people against bio-piracy, something I was turned on to by Paolo Bacigalupi’s bio-punk novel The Windup Girl, and what promises to be an increasing fight between pharmaceutical/biotech firms and individuals over the use of what is becoming mountains of genetic data. Nevertheless, I have my doubts as to Lanier’s alternative system and will lay them out in what follows.

For one, such a system seems likely to exacerbate rather than relieve the problem of rising inequality. Assuming most of the data people will receive micropayments for will be banal and commercial in nature, people who are already big spenders are likely to get a much larger cut of the micropayments pie. If I could afford such things, it’s no doubt worth a lot for some extra piece of information to tip the scales between my buying a Lexus or a Beemer; not so much if it’s a question of Tide vs. Wisk.

This issue would be solved if Lanier had adopted the model of a shared public pool of funds into which micropayments would flow, rather than routing them to the actual individuals involved, but he couldn’t do this out of commitment to the idea that personal data is a form of property. Don’t let his dreadlocks fool you: Lanier is at bottom a conservative thinker. Such a pooled fee might also balance out the glaring problem that Siren Servers effectively pay zero taxes.

But by far the biggest hole in Lanier’s micropayment system is that it ignores the international dimension of the Internet. Silicon Valley companies may be doubling down on their model, as can be seen in Amazon’s recent foray into the smartphone market, which attempts to route everything through itself, but the model has crashed globally. Three events signal the crash: Google was essentially booted out of China, the Snowden revelations threw a pall of suspicion over the model in an already privacy-sensitive Europe, and the EU itself handed the model a major loss with the “right to be forgotten” case in Spain.

Lanier’s system, which accepts mass surveillance as a fact, probably wouldn’t fly in a privacy conscious Europe, and how in the world would we force Chinese and other digital pirates to provide payments of any scale? And China and other authoritarian countries have their own plans for their Siren Servers, namely, their use as tools of the state.

The fact of the matter is there is probably no truly global solution to continued automation and algorithmization, or to mass surveillance. Yet the much feared “splinter-net,” the shattering of the global Internet, may be better for freedom than many believe. This is because the Internet, and the Siren Servers that run it, once freed from its spectral existence in the global ether, becomes the responsibility of real, territorially bound people to govern. Each country will ultimately have to decide for itself both how the Internet is governed and how to respond to the coming wave of automation. There’s bound to be diversity because countries are diverse; some might even leap over Lanier’s conservatism and invent radically new and more equitable ways of running an economy, an outcome many of the original digitopians who set this train a-rollin’ might actually be proud of.

 

How the Web Will Implode

Jeff Stibel is either a genius when it comes to titles, or has one hell of an editor. The name of his recent book Breakpoint: Why the Web Will Implode, Search Will Be Obsolete, and Everything You Need to Know About Technology Is in Your Brain was about as intriguing a title as I had come across, at least since The Joys of X. In many ways the book delivers on the promise of its title, making an incredibly compelling argument for how we should be looking at the trend lines in technology, and it is chock-full of surprising and original observations. The problem is that the book then turns round to come up with almost the opposite conclusions one would expect. It wasn’t the Internet that imploded but my head.

Stibel’s argument in Breakpoint is that all throughout nature and neurology, economics and technology, we see a common pattern: slow growth rising quickly to an exponential pace, followed by a rapid plateau, a “breakpoint” at which the rate of increase collapses, or even a sharp decline occurs, and future growth slows to a snail’s pace. One might think such breakpoints were a bad thing for whatever is undergoing them, and when they are followed by a crash they usually are, but in many cases it just ain’t so. When ant colonies undergo a breakpoint they are keeping themselves within a size their pheromonal communication systems can handle. The human brain grows rapidly in connections between birth and age five, after which it loses a great deal of those connections through pruning- a process that allows the brain to discard useless information and solidify the kinds of knowledge it needs, such as the common language being spoken in its environment.
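The growth pattern Stibel keeps returning to, slow start, near-exponential middle, abrupt plateau, is essentially the logistic (S-shaped) curve familiar from population biology, and the “breakpoint” is where the per-step growth peaks and then collapses. The carrying capacity and rate parameters below are arbitrary illustrations, not anything taken from the book.

```python
import math

# Logistic curve: slow start, near-exponential middle, plateau past the
# "breakpoint". Capacity K, rate r, and midpoint t0 are arbitrary choices.
def logistic(t, K=1000.0, r=1.0, t0=10.0):
    return K / (1.0 + math.exp(-r * (t - t0)))

# Growth per time step: rises toward the midpoint, then collapses after it.
growth = [logistic(t + 1) - logistic(t) for t in range(20)]
peak = growth.index(max(growth))
print(f"growth per step peaks around t={peak}, then collapses toward zero")
```

The curve never crashes; it simply saturates, which is Stibel’s point that a breakpoint can be a healthy accommodation to a hard limit rather than a failure.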

His thesis leads Stibel to all sorts of fascinating observations. Here are just a few: Precision takes a huge amount of energy, and human brains are error-prone because they are trading this precision for efficiency. The example is mine, not Stibel’s, but it captures his point: if I did the math right, IBM’s Watson consumed about 4,000 times as much energy as its human opponents, and the machine, as impressive as it was, couldn’t drive itself there, or get its kids to school that morning, or compose a love poem about Alex Trebek. It could only answer trivia questions.
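The back-of-envelope arithmetic behind that 4,000x figure is easy to check. Both inputs are rough, commonly cited order-of-magnitude estimates (Watson’s server cluster drawing on the order of 80 kilowatts, a human brain running on roughly 20 watts), not precise measurements.

```python
# Back-of-envelope check of the ~4,000x figure. Both inputs are rough,
# commonly cited estimates rather than precise measurements.
watson_watts = 80_000  # IBM Watson's server cluster, order of magnitude
brain_watts = 20       # typical human brain

ratio = watson_watts / brain_watts
print(f"Watson drew roughly {ratio:,.0f}x the power of one human brain")
```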

Stibel points out how the energy consumption of computers and the web is approaching what are likely hard energy ceilings. Continuing on its current trajectory the Internet will consume, in relatively short order, 20% of the world’s energy, about as much as the percentage of calories needed to run the human brain. That prospect makes the Internet’s growth rate under current conditions ultimately unsustainable, unless we really are determined to fry ourselves with global warming.

Indeed, this 20% mark seems to be a kind of boundary for intelligence, at least if the human brain is any indication. In what was, for me at least, a new and surprising observation, Stibel points out that the human brain has been steadily shrinking and losing connections over time. Pound for pound, our modern brain is actually “dumber” than our caveman ancestors’. (Not sure how this gels with the Flynn effect.) Big brains are expensive for bodies to maintain, and their caloric ravenousness relative to other essential bodily functions must not be favored by evolution, otherwise we’d see more of our lopsided brain-to-body ratio in nature. As we’ve been able to offload functions to our tools and to our cultures, evolution has been shedding some of this cost in raw thinking prowess and slowly moving us back towards a more “natural” ratio.

If the Internet is going to survive it’s going to have to become more energy efficient as well. Stibel sees this already happening. Mobile has allowed targeted apps, rather than websites, to become the primary way we get information. Cloud computing allows computational prowess and memory to be distributed and brought together as needed. The need for increased efficiency, Stibel believes, will continue to change the nature of search too. Increasing personalization will allow for ever more targeted information, so that the individual can find just what they are looking for. This becoming “brainlike,” he speculates, may actually result in the emergence of something like consciousness from the web.

It is on these last two points, on personalization, and the emergence of consciousness from the Internet that he lost me. Indeed, had Stibel held fast to his idea of the importance of breakpoints he may have seen both personalization and emergent consciousness from the Internet in a much different light.

The quote below captures Stibel’s view of personalization:

We’re moving towards search becoming a kind of personal assistant that knows an awful lot about you. As a side note, some of you may be feeling quite uncomfortable at this point with your new virtual friend. My advice: get used to it. The benefits will be worth it. As Kevin Kelly has said: “Total personalization in this new world will require total transparency. That is going to be the price. If you want to have total personalization, you have to be totally transparent.” (93)

I suppose the question one should ask of Stibel is: transparent to whom, and for what? The answer can be seen in the example he gives of transparency in action:

Imagine that the Internet can read your thoughts. Your personal computer, now a personal assistant, knows you skipped breakfast, just as your brain knows you skipped breakfast. She also knows that you have back to back meetings, but that your schedule just cleared. So she offers the suggestion “It’s 11:00am and you should really eat before your next meeting. D’Amore’s Pizza Express can deliver to you within 25 minutes. Shall I order your favorite, a large thin crust pizza, light on the cheese with extra red pepper flakes on the side?” (97)

The answer, as Stibel’s example makes apparent, is that one is transparent to advertisers and for them. In the D’Amore’s example, what is presented as something that works for you is actually a device on loan to a restaurant- it is their “personal assistant.”

Transparent individuals become a kind of territory mined for resources by those capable of performing the data mining. For the individual being “mined” such extraction can be good or bad, and part of our problem, now and in the future, will be to give the individual the ability to control this mining and refract it in directions that better suit our interests. To decide for ourselves when it is good and we want its benefits, and are therefore willing to pay its costs, and when it is bad and we are not.

Stibel thinks personalization is part of the coming “obsolescence of search,” a response of the web to the need for increased efficiency and a way to avoid, for a time, reaching its breakpoint. Yet looking at our digital data as a sort of contested territory gives us a different version of the web’s breakpoint than the one Stibel gives us, even if it flows naturally from his logic. The fact that corporations and other groups are attempting to court individuals on the basis of having gathered and analyzed a host of intimate and not-so-intimate details about them sparks all kinds of efforts to limit, protect, monopolize, subvert, or steal such information. This is the real “implosion” of the web.

We would do well to remember that the Internet really got its public start as a means of open exchange between scientists and academics, a community of common interest and mutual trust. Trust essentially entails the free flow of information- transparency- and as human beings we probably agree that transparency exists along a spectrum with more information provided to those closest to you and less the further out you go.

Reflecting its origins, the culture of the Internet in its initial years had this sense of widespread transparency and trust baked into our understanding of it. This period in Eden, even if it was just imagined, could not last forever. It has been a long time since the Internet was a community of trust, and it can’t be one; it’s just too damned big, even if it took a long time for us to realize this.

The scales have now fallen from our eyes, and we all know that the web has been a boon for all sorts of cyber-criminals and creeps and spooks, a theater of war between states. Recent events surrounding mass surveillance by state security services have amplified this cynicism and decline of trust. Trust, for humans, is like pheromones in Stibel’s ants- it sets the limits of how large a human community can be before breaking off to form a new one, unless some other way of keeping a community together is applied. So far, human societies have discovered three means of keeping intact societies that have grown beyond the capacity of circles of trust: ethnicity, religion and law.

Signs that trust has unraveled are not hard to find. There has been an incredible spike in interest in anti-transparency technologies, with “crypto-parties” now a phenomenon in tech circles. A lot of this interest is coming from private citizens, and sometimes, yes, criminals. Technologies that offer a bubble of protection for individuals against government and corporate snooping seem to be all the rage. Yet even more interest is coming from governments and businesses themselves. Some now seem to want exclusive rights to a “mining territory”- to spy on, and sometimes protect, their own citizens and customers in a domain with established borders. There are, in other words, splintering pressures building against the Internet, or, as Steven Levy put it, there are increasing rumblings of:

… a movement to balkanize the Internet—a long-standing effort that would potentially destroy the web itself. The basic notion is that the personal data of a nation’s citizens should be stored on servers within its borders. For some proponents of the idea it’s a form of protectionism, a prod for nationals to use local IT services. For others it’s a way to make it easier for a country to snoop on its own citizens. The idea never posed much of a threat, until the NSA leaks—and the fears of foreign surveillance they sparked—caused some countries to seriously pursue it. After learning that the NSA had bugged her, Brazilian president Dilma Rousseff began pushing a law requiring that the personal data of Brazilians be stored inside the country. Malaysia recently enacted a similar law, and India is also pursuing data protectionism.

As John Schinasi points out in his paper Practicing Privacy Online: Examining Data Protection Regulations Through Google's Global Expansion, even before the Snowden revelations sparked a widespread breakdown of public trust, or at least heated public debate about it, there were huge differences between regimes of trust on the Internet, with the US being an area where information was exchanged most freely and privacy protections against corporations were considered contrary to the spirit of American capitalism.

Europe, on the other hand, on account of its history, had in the early years of the Internet taken a different stand, adhering to an EU directive deeply cognizant of the dangers of granting too much trust to corporations and the state. The problem is that this directive is so antiquated, dating from 1995, that it not only fails to reflect the Internet as it has evolved, but severely compromises the way the Internet in Europe now works. The way the directive was implemented turned Europe into a patchwork quilt of privacy laws, which was onerous for American companies, though they were often able to circumvent it, being largely self-policing in any case under the so-called Safe Harbor provisions.

Then there is the whole different ball game of China, which Schinasi characterizes as a place where the Internet is seen by officialdom, without apology or sense of limits, as a tool for monitoring its own citizens, with huge restrictions placed on the extension of trust to entities beyond its borders. China under its current regime seems dedicated to carving out its own highly controlled space on the Internet, a partnership between its Internet giants and its control-freak government, something which we can hope the desire of such companies to go global might eventually help temper.

The US and Europe, in a process largely sparked by the Snowden revelations, appear to be drifting apart. Just last week, on March 12, 2014, the European parliament, by an overwhelming majority of 621 to 10 (I didn't forget a zero), passed a law that aims at bringing some uniformity to the chaos of European privacy laws and that would severely restrict the way personal data is collected and used, essentially upending the American transparency model. (Snowden himself testified to the parliament by video link.) The Safe Harbor provisions have not yet been nixed, as that would take a decision of the European Council rather than the parliament, but given the broad support for the other changes they are clearly in jeopardy. If these trends continue they would constitute something of a breaking apart and consolidation of the Internet- a sad end to the utopian hopes of a global and transparent society that sprung from the Internet's birth.

Yet, if Stibel's thesis about breakpoints is correct, it may also be part of a "natural" process. Where Stibel was really good was when it came to, well… ants. As he repeatedly shows, ants have this amazing capacity to know when their colony, their network, has grown too large and when it's time to split up and send out a new queen. Human beings are really good at this formation into separate groups too. In fact, as Mark Pagel points out in his Wired for Culture, it's one of the things human beings are naturally wired to do: to form groups which break up once they have exceeded the number of people that any one individual can know on a deep level- a number that remains, even in the era of Facebook "friends", right around where it was when we were setting out from Africa 60,000 years ago- about 150.

If we go by the example of ants and human beings, the natural breakpoint(s) for the Internet lie where bonds of trust become too loose. Where trust is absent, as in large-scale human societies, we have, as mentioned, come up with three major solutions, of which only law, rather than ethnicity or religion, is applicable to the Internet.

What we are seeing is the Internet moving towards splitting off into rival spheres of trust, deception, protection and control. The only thing that could keep it together as a universal entity would be the adoption of truly global law, as opposed to mere law within and between a limited number of countries: law regulating how the Internet is used, limiting states' use of the tools of cyber-espionage and what often amounts to the same thing, cyber-war, along with international agreements on how exactly corporations may use customer information and on how citizens should be informed about the use of their data by companies and the state. All of this would allow the universal promise of the Internet to survive. This is the kind of "Magna Carta for the Internet" that Sir Tim Berners-Lee, the man who wrote the first draft of the first proposal for what would become the world wide web, is calling for with his Web We Want initiative.

If we get to the destination proposed by Berners-Lee, our arrival might owe as much to the push of self-interest from multinational corporations as to the pull of noble efforts by defenders of the world's civil liberties. For it is plausible that the desire of Internet giants to be global companies may help spur the adoption of higher limits on government spying in the name of corporate protection against "industrial" espionage, protections that might intersect with the desire to protect global civil society seen in the efforts of Berners-Lee and others, and that would help establish firmer ground for the protection of the political freedom of individual citizens everywhere. We'll probably need both push and pull to stem, let alone roll back, the current regime of mass surveillance we have allowed to be built around us.

Thus, those interested in political freedom should throw their support behind Berners-Lee's efforts. The erection of a "territory" in which higher standards of data protection prevail, as seen in the current moves of the EU, isn't at this juncture contrary to a data regime such as the one Berners-Lee proposes, in which a "Bill of Rights for the Internet" is adhered to, but helps that process along. By creating an alternative to the current transparency model, promoted by American corporations, abused by American security services, and embraced by Chinese state capitalism as a tool of the authoritarian state, the EU's efforts, if successful, would offer a region where the privacy (including corporate privacy) necessary for political freedom continues to be held sacred and protected.

Even if efforts such as those of Berners-Lee to globalize these protections should fail, which sadly appears most likely, efforts such as those of the EU would create a bubble of protection- a 21st century version of medieval fortress and city walls. We would do well to remember that underneath our definition of the law lies an understanding of law as a type of wall, hence the fact that we can speak of both being "breached". Law, like the rules of a sports game, is simply a set of rules agreed to within a certain defined arena. The more bounded the arena, the easier it is to establish a set of clear, defined, and adhered-to rules.

To return to Stibel, all this has implications for the other idea he explored, and about which I also have doubts- the emergence of consciousness from the Internet. As he states:

It took millions of years for humans to gain intelligence, but it may only take a century for the Internet. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines.

I largely agree with Stibel, especially when he echoes Dan Dennett in saying that artificial intelligence will be about as much like human consciousness as the airplane is like a bird: some similarities in underlying principle, but huge differences in engineering and manifestation. Meaning the path to machine intelligence probably doesn't lie in the brute computational force tried since the 1950s, or in the current obsession with reverse engineering the brain, but in networks. Thing is, I just wish he had said "internets", plural, rather than "Internet", singular, or just "networks", again plural. For my taste, Stibel, when talking about the emergence of intelligence from the Internet, strikes a tone that leans a little too close to Teilhard de Chardin and his Noosphere, or Kevin Kelly and his Technium, all of which could have been avoided had Stibel just stuck with the logic of his breakpoints.

Indeed, given the amount of space he had given to showing how anomalous our human intelligence is, and how networks (of ants and others) can show intelligent behavior without human-type consciousness at all, I was left to wonder why our networks would ever become conscious in the way we are in the first place. If intelligence could emerge from networked computers, as Stibel suggests, it seems more likely to emerge from well-bounded constellations of such computers rather than from the network as a global whole- as in our current Internet. If the emergence of AI resembles, which is not the same as replicates, the evolution and principles of the brain, it will probably require the same sorts of sharp boundaries we have, the pruning that takes place as we individualize, self-referential goals similar to our own, and some degree of opacity vis-à-vis other similar entities.

To be fair to Stibel, he admits that we may have already undergone the singularity in something like this sense. What he does not see is that ants or immune systems or economies give us alternative models of how something can be incredibly intelligent and complex but not conscious in the human sense- perhaps human-type consciousness is a very strange anomaly rather than an almost predetermined evolutionary path once the rising complexity train gains enough momentum. AI in this understanding would merely entail truly purposeful, coordinated action and goal seeking by complex units, a dangerous situation indeed given that these large units will often be rivals, but one not existentially distinct from what human beings have known since we were advanced enough technologically to live in large cities, fight with massive armies, or produce and trade with continent- and world-straddling corporations.

Be all that as it may, Stibel's Breakpoint was still a fascinating read. He not only left me with lots of cool and bizarre tidbits about the world I had not known before, he gave me a new way to think about the old problem of whither our society is headed, and whether we might in fact be approaching limits to the development of civilization that the scientific and industrial revolutions had seemed to free us from forever. Stibel's breakpoints gave me another way to understand Joseph A. Tainter's idea of how and why complex societies collapse, and why such collapse should not of necessity fill us with pessimism and doom. Here's me on Tainter:

The only long-lasting solution Tainter sees for declining marginal returns is for a society to become less complex, that is, less integrated, more based on what can be provided locally than on sprawling networks and specialization. Tainter wanted to move us away from seeing the evolution of the Roman Empire into the feudal system as the "death" of a civilization. Rather, he sees the societies human beings have built as extremely adaptable and resilient. When the problem of increasing complexity becomes impossible to solve, societies move towards less complexity.

Exponential trends might not be leading us to a stark choice between a global society or singularity and our own destruction. We might just be approaching some of Stibel's breakpoints, and as long as we can keep our wits about us, and not act out of a heightened fear of losing dwindling advantages for us and ours, breakpoints aren't necessarily all bad- and can even sometimes be good.

Welcome to the New Age of Revolution

Fall of the Bastille

Last week the prime minister of Ukraine, Mykola Azarov, resigned under pressure from a series of intense riots that had spread from Kiev to the rest of the country. Photographs of the riots in The Atlantic blew my mind, like something out of a dystopian steampunk flick. Many of the rioters were dressed in gas masks that looked as if they had been salvaged from World War I. As weapons they wielded homemade swords, molotov cocktails, and fireworks. To protect their heads some wore kitchen pots and spaghetti strainers.

The protestors were met by riot police in hypermodern black suits of armor, armed with truncheons, tear gas, and shotguns, not all of them firing only rubber bullets. Orthodox priests with crosses and icons in their hands, sometimes placed themselves perilously between the rioters and the police, hoping to bring calm to a situation that was spinning out of control.

Even for Ukraine, it was cold during the weeks of the riots, so cold that the blasts from the water cannons used by the police crystallized shortly after contact, leaving the detritus of the protests covered in sheets of ice, as if it had been shot with some kind of high-tech freeze gun.

Students of mine from Ukraine were largely in sympathy with the protestors, but feared civil war unless something changed quickly. The protests had been brought on by a backdoor deal with Russia under which Ukraine walked away from talks aimed at joining the European Union. Protests over that agreement led to the passage of an anti-protest law that only further inflamed the rioters. The resignation of the Russophile prime minister seemed to calm the situation for a time, but with the return to work of the Ukrainian president, Viktor Yanukovych (he was supposedly ill during the heaviest of the protests), the situation has once again become volatile. It was Yanukovych who was responsible for cutting the deal with Russia and pushing through the draconian limits on the freedom of assembly which had sparked the protests in the first place.

Ukraine, it seems, is a country being torn in two, a conflict born of demographics and history: its eastern, largely Russian-speaking population looks towards Russia, while its western, largely Ukrainian-speaking population looks towards Europe. In this play both Russia and the West are no doubt trying to influence the outcome of events in their favor, and thus exacerbating the instability.

Yet, while such high levels of tension are new, the problem they reveal is deep in historical terms- the cultural tug of war over Ukraine between Russia and Europe, East and West, stretches at least as far back as the 14th century when western Ukraine was brought into the European cultural orbit by the Poles. Since then, and often with great brutality on the Russian side, the question of Ukrainian identity, Slavic or Western, has been negotiated and renegotiated over centuries- a question that will perhaps never be fully resolved and whose very tension may be what it actually means to be Ukrainian.

Where Ukraine goes from here is anybody's guess, but despite its demographic and historical particularities, its recent experience adds to the growing list of mass protests that have toppled governments, or at least managed to pressure governments into reversing course- protests that have been occurring regularly since perhaps 2008, with the riots in Greece.

I won't compile a comprehensive list, but will simply cite the mass protests and riots I can recall from memory. There was the 2009 Green Revolution in Iran, subsequently crushed by the Iranian government. There was the 2010 Jasmine Revolution in Tunisia, which toppled the government there and began what came to be the horribly misnamed "Arab Spring". By 2011 mass protests had overthrown Hosni Mubarak in Egypt, and riots had broken out in London. 2012 saw a lull in mass protests, but in 2013 they came back with a vengeance. There were massive riots in Brazil over government cutbacks for the poor combined with extravagant spending in preparation for the 2014 World Cup; there were huge riots in Turkey which shook the government of the increasingly authoritarian Recep Tayyip Erdoğan; and mass protests in Egypt became the occasion for a military coup that toppled the country's democratically elected Islamist president. Protesters in Thailand have been "occupying" the capital since early January. And now we have Ukraine.

These are just some of the protests that were widely covered in the media. Sometimes quite large, or at least numerous, protests take place in a country and are barely reported in the news at all. Between 2006 and 2010 there were 180,000 reported "mass incidents" in China. It seems the majority of these protests are related to local issues rather than directed against the national government, but the PRC has been adept at keeping them free of the prying eyes of Western media.

The abortive 2009 riots in Iran were the first to be called a "Twitter Revolution" by Western media and digerati. The new age of revolution is often explained in terms of the impact of the communications revolution and social media. We have had time to find out just how small a role Western, and overwhelmingly English-language, media platforms such as Twitter and Facebook have played in this new upsurge of revolutionary energy, but that's not the whole story.

I wouldn't go so far as to say technology has been irrelevant in bringing about our new revolutionary era; I'd just point the finger at another technology, namely mobile phones. By 2008 mobile devices had, in the space of a decade, gone from a rich-world luxury to being in the hands of 4 billion people. By 2013, 6 billion of the world's 7 billion people had some sort of mobile device- more people than have access to working toilets.

It is the very disjunction between the number of people able to communicate, and hence act en masse, and those lacking what we in the developed world consider necessities that should get our attention- a potentially explosive situation. And yet we have known since Alexis de Tocqueville that revolutions are less the product of the poor, who have always known misery, than of a rising middle class whose ambitions have been frustrated.

The questions I would ask a visitor from the near future, if I wanted to gauge the state of the world a decade or two hence, would be whether the rising middle class in the developing world had put down solid foundations, and whether, and to what extent, it had been cut off at the knees by the derailment of the juggernaut of the Chinese economy, the rising capacity of automation, or both.

The former fear seems to be behind the recent steep declines in the financial markets, where the largest blows have been suffered by developing economies. The latter is a longer-term risk for developing economies, which, if they do not develop quickly enough, may find themselves permanently locked out of the traditional arc of capitalist development, moving from agriculture to manufacturing to services.

Automation threatens the wage competitiveness of developing-economy workers at every stage of that arc: poor textile workers in Bangladesh competing with first-world robots; Indians earning a middle-class wage at call centers or doing grunt legal or medical work increasingly in competition with more and more sophisticated, and in the long run less expensive, bots.

Intimately related to this would be my last question for our near-future time traveler; namely, does the global trend towards increasing inequality continue, accelerate, or dissipate? Aside from government incompetence and corruption combined with a mobile-enabled youth, rising inequality appears to be the only macro trend these revolts share, though it cannot be the dominant factor; otherwise protests would be largest and most frequent in the country with the fastest-growing inequality- the US.

Revolutions, in the sense of the mobilization of a group of people massive enough and active enough to actually overthrow a government, are a modern phenomenon, and are so for a reason. Only since the printing press and mass literacy has the net of communication been thrown wide enough that revolution, as opposed to mere riot, has become possible. The Internet, and even more so mobile technology, have thrown that net even wider, or better, deeper: literacy is no longer necessary, and intergroup communication now happens in real time, no longer in need of, or under the direction of, a center- as it was in the era of radio and television.

Technology hasn't resulted in the "end of history", but quite the opposite. Mobile technology appears to facilitate the formation of crowds, but what these crowds mobilize around are usually deep-seated divisions which the society in which the protests occur has yet to resolve, or widely unpopular decisions made over the people's heads.

For many years now we have seen this phenomenon in financial markets, one of the first areas to develop deep, rapidly changing interconnections based on the digital revolution. Only a few years back, democracy seemed to have come under the thumb of much more rapidly moving markets, but now, perhaps, a populist analog has emerged.

What I wonder is how the state will respond to this, or how this new trend of popular mobilization may intersect with yet another contemporary trend- mass surveillance by the state itself?

The military theorist Carl von Clausewitz came up with his now famous concept of the “fog of war” defined as “the uncertainty in situational awareness experienced by participants in military operations. The term seeks to capture the uncertainty regarding one’s own capability, adversary capability, and adversary intent during an engagement, operation, or campaign.”  If one understands revolution as a kind of fever pitch exchange of information leading to action leading to exchange of information and so on, then, all revolutions in the past could be said to have taken place with all players under such a fog.

Past revolutions have only been transparent to historians. From a bird's-eye view and in hindsight, scholars of the mother of all revolutions, the French, can see the effects of Jean-Paul Marat's pamphlets and screeds inciting violence, the published speeches of the moralistic tyrant Robespierre, the plotting letters of Marie-Antoinette to the Austrians, or the counter-revolutionary communiqués of General Lafayette. To the actors in the French Revolution itself, the motivations and actions of the other players were always opaque- the origin, in part, of the revolution's paranoia and of the Reign of Terror, which Robespierre saw as a means of unmasking conspirators and hypocrites.

With the new capacity of governments to see into communication, revolutions might be said to be becoming transparent in real time. Insecure governments that might be toppled by mass protest would seem to have an interest in developing the capacity to monitor the communications and track the movements of their citizens. Moore's Law has made total surveillance by the state, long an unachievable goal, relatively cheap.

During revolutionary situations, foreign governments (with the US at the top of the list) may be inclined to peer into revolutions through digital surveillance, and in some cases will likely use this knowledge to interfere so as to shape outcomes in their own favor. States that are repressive towards their own people, such as China, will likewise try to use these surveillance tools to ensure revolutions never happen, or to steer them toward preferred outcomes if they occur despite best efforts.

One can only hope that the ability to see into a revolution while it is happening does not engender the illusion that we can also control its outcome. For, as the riots and revolutions of the past few years have shown, moves against a government may be enabled by technology imported from outside, but the fate of such actions is decided by people on the ground, who alone might be said to bear full responsibility for the future of the society in which revolution has occurred.

Foreign governments are engaged in a dangerous form of hubris if they think they can steer outcomes in their favor while oblivious to local conditions, and governments that think technology gives them a tool by which they can ignore the cries of their citizens are allowing the very basis on which they stand to rot underneath them and eventually collapse. That is a truth those who consider themselves part of a new global elite should heed when it comes to the issue of inequality.

Don’t Be Evil!

Panopticon Prisoner kneeling

However interesting a work it is, Eric Schmidt and Jared Cohen's The New Digital Age is one of those books where, if you come to it as a blank slate, you'll walk away with a very distorted chalk drawing of what the world actually looks like. Above all, you'll walk away with the idea that intrusive and questionable surveillance is something those other guys do- the bad guys- not the American government, or US corporations, and certainly not Google, where Schmidt sits as executive chairman. Much ink is spilled explaining the egregious abuses of Internet freedom by countries like China and Iran, or, in the vast majority of cited cases, abuses by non-Western companies, but when it comes to the US itself, or any of its corporations engaging in similar practices, the book is eerily silent.

I may not know what a mote is, but I do know I am supposed to pluck my own out of my eye first. Only then can I get seriously down to the business of pointing out the other guy’s mote, or even helping him yank it out.

The New Digital Age (I'll call it the NDA from here on to shorten things up) is full of the most reasonable and vanilla sort of advice on the need to balance our conflicting demands for security and privacy, but given its silence on the question of what the actual security/surveillance system in the US is, we're left without the information needed to make such judgements. Let me put that silence in context.

The publication date of the NDA was April 23, 2013. The smoke screen of conspicuous-by-their-absence facts extends not only forward in time- something to be expected given that the Snowden revelations were a month out (May 20, 2013)- but, more disturbingly, backward in time as well. That is, Schmidt and Cohen couldn't really be expected, legally if not morally, to discuss the revelations Snowden would later bring to light. Still, they should be expected to have addressed serious claims about the relationship between American technology companies and the US security state which were already public knowledge.

There had been extensive reporting on the intersection of technology and US government spying since at least 2010. These weren't stories by Montana survivalists or persons camped out at Area 51, but by hard-hitting journalists with decades covering national security; namely, the work of Dana Priest and the Washington Post. If my memory and the book's index serve me, neither Priest nor the Post is mentioned in the NDA.

Over a year before the NDA was published, Wired's James Bamford had written a stunning piece on the NSA's construction of its huge data center in Bluffdale, Utah, the goal of which is to suck up and store indefinitely the electronic records of all of us- which is the main thing we are arguing about. The main debate is over whether the government has a right to force private companies to provide all the digital data on their customers, which the government will then synthesize, organize and store. If you're an American, you're lucky enough to have the government require a warrant to look at your records. (Although the court in charge of this- the FISA court- is not really known for turning such requests down.) If you're unlucky enough not to be an American, then the government can peruse your records whenever the hell it wants to- thank you very much.

The NSA gets two pages devoted to it in the NDA's 257 pages, both of which are about how open-minded and clever the agency is for hiring pimply-faced hackers. Say what?

The more I think about what had to be the deliberate silence that runs throughout the whole of the NDA, the more infuriating it becomes, but at least now Google et al. have gotten religion- or so I hope. On December 9, 2013, Google, Facebook, Apple, Microsoft, Twitter, Yahoo, LinkedIn, and AOL sent an open letter to the White House urging new restrictions on the government's ability to seize, use and store information gleaned from them. This is a hopeful sign, but I am not sure we should be handing out Liberty Medals just yet.

For one, this move against the government was inspired not by civil libertarians or even robust reporting, but by threats to the very business model upon which the companies who signed the document are based. As The Economist puts it:

The entire business model of firms like Google, Facebook and Twitter relies on harvesting intimate information provided by users and then selling that data on to advertisers.

It was private firms that persuaded people to give up lists of their friends, their most sensitive personal communications, and to constantly broadcast their location in real-time. If you had told even the nosiest spook in 1983, that within 30 years, much of the populace would be carrying around a tracking device that kept a permanent record of everywhere they had ever visited, he'd have thought you mad.

Let's say you're completely comfortable with the US government keeping such records on you. Perhaps the majority of Americans are unconcerned, and think it the price of safety. But I doubt Americans would feel as blasé if it were the Chinese or the Russians or, heaven forbid, the French, or any other government whose apparatchiks could go through their online personal and financial records at will. Therein lies the threat to American companies whose ultimate aspirations are global. Companies that are seen, rightly or wrongly, as tools of the US government will lose the trust not mainly of US citizens but of international customers. An ensuing race to the exits and nationalization of the Internet would most likely be driven not by Iranian mullahs or a testosterone-charged Vladimir Putin paddling around in a submersible like a Bond villain, but by Western Europeans and other democratic societies who were already uncomfortable with the idea that corporations should be trusted by individuals who had made themselves as transparent as the Utah sky.

The Germans, to take one example, were already freaked out by Google Street View, of all things, and managed to have the company abandon that service there. Revulsion at the Snowden revelations is perhaps the one thing that unites the otherwise bickering nationalities of the EU. TED, an event that began as a Silicon Valley lovefest, looked a lot different when it was held in Brussels in October, with Mikko Hypponen urging Europeans to secede from American Internet infrastructure and create their own open-source platforms. It's the fear of being thought downright Orwellian that seems most likely to have inspired Google's move to abandon facial recognition on Google Glass.

With the Silicon Valley letter we might think we're in the home stretch of this struggle to re-establish the right to privacy, but the sad fact is this fight is just beginning. As the Economist pointed out, none of the giants that provide the hardware and "plumbing" for the Internet, such as Cisco and AT&T, signed the open letter; they are less afraid, it seems, of losing customers, because they are national brick-and-mortar companies in a way the eight signatories of the open letter to the Obama Administration are not. For civil libertarians to win this fight, Americans have to not only get those hardware companies on board, but compel the government to dismantle a massive amount of spying infrastructure.

That is, we need the broader American public to care enough to exert sustained pressure on the government and on some of the richest companies in the country to reverse course. Otherwise, the NSA facility at Bluffdale will continue sucking up its petabytes of overwhelmingly useless information like some obsessive Mormon genealogist until the mechanical leviathan lurches into obsolescence or is felled by the shepherd’s stone of better encryption.

The NSA facility that stands today in the Utah desert may offer a treasure trove for the historian of the far future, a kind of massive junkyard of collective memory filled with all our sense and nonsense. If we don’t get our act together, it will also be a historical monument to the failure of our more than two-century-old experiment with freedom.

Maps: How the Physical World Conquered the Virtual

World map 1600

If we look back to the early days when the Internet was first exploding into public consciousness, in the 1980s, and even more so in the boom years of the ’90s, what we often find is a kind of utopian sentiment around this new form of “space”. It wasn’t only that a whole new plane of human interaction seemed to be unfolding into existence almost overnight; it was that “cyberspace” seemed poised to swallow the real world, a prospect some viewed with hopeful anticipation and others with doom.

Things have not turned out that way.

William Gibson, the science fiction author of the classic Neuromancer and the man who coined the term “cyberspace”, himself thinks that when people look back on the era when the Internet emerged, what will strike them as odd is how we could have confused ourselves into thinking that the virtual world and our workaday one were somehow distinct. Gibson characterizes this as the conquest of the real by the virtual. Yet even a cursory glance at our early experience and understanding of cyberspace suggests that what has happened is better thought of as the reverse.

Think back, if you are old enough to remember, to when the online world was supposed to be one where a person could shed their necessarily limited real identity for a virtual one. There were plenty of anecdotes, not all of them insidious, of people faking their way through a contrived identity the unsuspecting thought was real: men coming across as women, women as men, the homely as the beautiful. Cyberspace seemed to level traditional categories and the limits of geography. A poor adolescent could hobnob with the rich and powerful. As long as one had an Internet connection, country of origin and geographical location seemed irrelevant.

It should not come as any surprise, then, that an early digital reality advocate such as Nicole Stenger could end her 1991 essay Mind Is a Leaking Rainbow with the utopian flourish:

According to Sartre, the atomic bomb was what humanity had found to commit collective suicide. It seems, by contrast, that cyberspace, though born of a war technology, opens up a space for collective restoration, and for peace. As screens are dissolving, our future can only take on a luminous dimension! / Welcome to the New World! (58)

Ah, if only.

Even utopian rhetoric was sometimes tempered with dystopian fears. Here is Mark Pesce, co-inventor of VRML, in his 1997 essay Ignition:

The power over this realm has been given to you. You are weaving the fabric of perception in information perceptualized. You could – if you choose – turn our world into a final panopticon – a prison where all can be seen and heard and judged by a single jailer. Or you could aim for its inverse, an asylum run by the inmates. The esoteric promise of cyberspace is of a rule where you do as you will; this ontology – already present in the complex system known as the Internet – stands a good chance of being passed along to its organ of perception.

The imagery of a “final panopticon” is doubtless too morbid for our current stage, whatever the dark trends. What is clear, though, is that cyberspace is a dead metaphor for what the Internet has become; we need a new one. I think we could do worse than the metaphor of the map. For what the online world has ended up being is less an alternative landscape than a series of cartographies by which we organize our relationship with the world outside our computer screens, a development with both liberating and troubling consequences.

Maps have always been reflections of culture and power rather than reflections of reality. The fact that medieval maps in the West placed Jerusalem at their center expressed not a geographic but a spiritual truth, though few understood the difference. During the Age of Exploration, what we might think of as realistic maps were really navigational aids for maritime trading states, a latent fact present in what the mapmakers found important to display and explain.

The number and detail of maps, along with the science of cartography, rose in tandem with the territorial anchoring of the nation-state. As James C. Scott points out in his Seeing Like a State, maps were one of the primary tools of the modern state, whose ambition was to make what it aimed to control “legible” and thus open to understanding by bureaucrats in far-off capitals and their administrations.

What all of this has to do with the fate of cyberspace, the world where we live today, is that the Internet, rather than offering us an alternative version of physical space and an escape hatch from its problems, has instead evolved into a tool of legibility. What is made legible in this case is us. Our own selves and the micro-worlds we inhabit have become legible to outsiders. Most of the time these outsiders are advertisers who target us based on our “profile”, but sometimes the quest to make individuals legible is pursued by the state, not just in the form of standardized numbers and universal paperwork but in terms of the kinds of information a state could once obtain only by interrogation, its first crack at making individuals legible.

A recent book by Google CEO Eric Schmidt, co-authored with foreign policy analyst Jared Cohen, The New Digital Age, is chock-full of examples of corporate advertisers’ and states’ new powers of legibility. They write:

The key advance ahead is personalization. You’ll be able to customize your devices- indeed much of the technology around you- to fit your needs, so that the environment reflects your preferences.

At your fingertips will be an entire world’s worth of digital content, constantly updated, ranked and categorized to help you find the music, movies, shows, books, magazines, blogs and art you like. (23)

Or as journalist Farhad Manjoo quotes Amit Singhal of Google:

I can imagine a world where I don’t even need to search. I am just somewhere outside at noon, and my search engine immediately recommends to me the nearby restaurants that I’d like because they serve spicy food.

There is a very good reason why I did not use the word “individuals” in place of “corporate advertisers” above: a question of intent. Whose interest does the use of such algorithms to make the individual legible ultimately serve? If it were my interest, search algorithms might tell me where I could get a free or even pirated copy of the music or video I will like so much. They might remind me of my debts, and of how much I would save if I skipped dinner at the local restaurant and cooked my quesadillas at home. Google and all its great services, along with similar tech giants aiming to map the individual, such as Facebook, aren’t really “free”. While using them I am renting myself to advertisers. All maps are ultimately political.

With the emergence of mobile technology and augmented reality, the physical world has wrestled the virtual one to the ground as Jacob did the angel. Virtual reality is now repurposed to ensconce each of us in our own customized micro-world. Like history? Then maybe your smartphone or Google Glass will bring everything historical around you into relief. The same goes if you like cupcakes and pastry, or strip clubs. These customized maps already existed in our own heads, but now we have the tools for individualized cartography, the only price being constant advertisements.

There’s even a burgeoning movement among the avant-garde, if there can still be said to be such a thing, against this kind of subjection of the individual to corporate-dictated algorithms and logic. Inspired by mid-twentieth-century leftists such as Guy Debord, with his Society of the Spectacle, practitioners of what is called psychogeography are creating and using apps such as Drift, which leads the individual on unplanned walks around their own neighborhood, or Random GPS, which has your car’s navigation system remind you of the joys of getting lost.

My hope is that we will see other versions of these algorithm inverters and breakers, and not just when it comes to geography. How about similar things for book recommendations, or music, or even dating? We are creatures that sometimes like novelty and surprise, and part of the wonder of life is fortuna, its serendipitous accidents.

Yet I think these tools will most likely ramp up the social and conformist aspects of our nature. We shouldn’t think they will be limited to corporate persuaders. I can imagine “Catholic apps” that allow one to monitor one’s sins, and a whole host of funny and not-so-funny ways groups will use the new methods of making the individual legible to tie her even closer to the norms of the group.

A world where I am surrounded the minute I am connected by a swirl of constant spam and helpful and not-so-helpful suggestions (a barrage that never ends except when I am sleeping, because I am always connected) may be annoying, but it isn’t all that scary. It’s when we put these legibility tools in the hands of the state that I get a little nervous.

As Schmidt and Cohen point out, one of the most advanced of these efforts at mapping the individual is an entity called Plataforma México, essentially a huge database able to identify any individual and tie them to their criminal record.

Housed in an underground bunker in the Secretariat of Public Security compound in Mexico City, this large database integrates intelligence, crime reports and real time data from surveillance cameras and other inputs from across the country. Specialized algorithms can extract patterns, project social graphs and monitor restive areas for violence and crime as well as for natural disasters and other emergencies.  (174)

The problem I have here is the blurring of the line between the methods used against domestic crime and those used against more existential threats, namely war. Given that crime in the form of the drug war is an existential threat to Mexico, this might make sense. But the same types of tools are being perfected by authoritarian states such as China, which faces not an existential threat but growing pressure for reform, and also in what are supposed to be free societies like the United States, where a non-existential threat in the form of terrorism, however horrific in actuality and potential, is met with similar efforts by the state to map individuals.

Schmidt and Cohen point out that there is a burgeoning trade between autocratic countries and companies busy perfecting the world’s best spyware. The Egyptian firm Orascom owns a 25 percent share of North Korea’s panopticonic sole Internet provider. (96) Western companies are in the game as well, the British Gamma Group’s sale of spyware technology to Mubarak’s Egypt being just one recent example.

Yet if corporations and the state are busy making us legible, there has also been a democratization of the capacity for such mapmaking, which is perhaps one of the reasons states are finding governance so difficult. Real communities have become almost as easy to create as virtual ones, because all such communities are ultimately a matter of making and sustaining human relationships and understanding their maps.

Schmidt and Cohen imagine virtual governments-in-exile waiting in the wings to strike at the propitious moment. Political movements can be created off the shelf, supported by their own ready-made media entities, and the authors picture socially conscious celebrities and wealthy individuals running with this model in response to crises. Every side in a conflict can now have its own media wing whose primary goal is to shape and own the narrative. Even whole bureaucracies could be preserved from destruction by keeping their maps and functions in the cloud.

Sometimes virtual worlds remain limited to affecting the lives of individuals while staying politically silent. A popular massively multiplayer game such as World of Warcraft may have as much influence on an individual’s life as other invisible kingdoms, such as those of religion. An imagined online world becomes real the moment its map is taken as a prescription for the physical world. Are groups like Hizb ut-Tahrir, which aims at the establishment of a pan-Islamic caliphate, or the League of the South, which promotes a second secession of American states, “real” political organizations or fictional worlds masquerading as political movements? I suppose only time will tell.

Whatever the case, society seems torn between the mapmakers of the state, who want to use the tools of the virtual world to impose order on the physical, and an almost chaotic proliferation of groups of all kinds using the same tools to create communities seemingly out of thin air.

All this puts me in mind of nothing so much as China Miéville’s classic of New Weird fiction, The City & the City. It’s a crime novel with the twist that it takes place in two cities, Besźel and Ul Qoma, that occupy the same physical space, superimposed on top of one another. No doubt Miéville was interested in telling a good story and in getting us thinking about questions of borders and norms, but it’s a pretty good example of the mapping I’ve been talking about, even if an imagined one.

In The City & the City an inhabitant of Besźel isn’t allowed to see or interact with what’s going on in Ul Qoma, and vice versa; otherwise they commit a crime called “breach”, and a whole secretive bi-city agency, also called Breach, monitors and prosecutes those infractions. There’s even an imaginary (we are led to believe) third city, “Orciny”, that exists on top of Besźel and Ul Qoma and secretly controls the other two.

This idea of multiple identities, consumer and political, overlaying the same geographical space seems a perfect description of our current condition. What is missing, though, are the sharp borders imposed by Breach. Such borders might appear sooner, and in different countries, than one might have supposed, thanks to the recent revelations that the United States has been treating the Internet and its major American companies like satraps. Only now has Silicon Valley woken up to the fact that its close relationship with the American security state threatens its “transparency”-based business model with suicide. The re-imposition of state sovereignty over the Internet would mean a territorialization of the virtual world, a development that would truly constitute its conquest by the physical. To those possibilities I will turn next time…

How Science and Technology Slammed into a Wall and What We Should Do About It

Captain Future 1944

It might be said that some contemporary futurists tend to use technological innovation and scientific discovery the way God was said to use the whirlwind against defiant Job, or the way Donald Rumsfeld treated the poor citizens of Iraq a decade ago. It’s all about the “shock and awe”. One glance at something like KurzweilAI.Net leaves a reader with the impression that brand new discoveries are flying off the shelf by the nanosecond and that all of our deepest sci-fi dreams are about to come true. No similar effort is made, at least that I know of, to show all the scientific and technological paths that have led into cul-de-sacs, or to chart all the projects packed up and put away, like our childhood chemistry sets, to gather dust in the attic of the human might-have-been. In exact converse to the world of political news, in technological news it’s the jetpacks that do fly we read about, not the ones that never get off the ground.

Aside from the technologies themselves, future-oriented discussion of the potential of technologies or scientific discoveries tends to come in two stripes when it comes to political and ethical concerns: we’re either on the verge of paradise or about to make Frankenstein seem like an amiable dinner guest.

There are a number of problems with this approach to science and technology. I can name more, but here are three: 1) it distorts the reality of innovation and discovery; 2) it isn’t necessarily true; 3) the political and ethical questions, which are the most essential ones, are too often presented in a simplistic all-good or all-bad manner, when any adult knows that most of life is like ice cream: it tastes great and will make you fat.

Let’s start with distortion. A futurist forum like the aforementioned KurzweilAI.Net, by presenting every hint of innovation or discovery side by side, does not allow the reader to discriminate between the quality and the importance of such discoveries. Most tentative technological breakouts and discoveries are just that, tentative, and ultimately go nowhere. The first question a reader should ask is whether some technique, process, or prediction has been replicated. The second question is whether the technology or discovery being presented is actually all that important. Anyone who’s ever seen an infomercial knows people invent things every day that are just minor tweaks on what we already have. Ask anyone trapped like Houdini in a university lab: the majority of scientific discovery is not about revolutionary paradigm shifts à la Thomas Kuhn but merely about filling in the details. Most scientists aren’t Einsteins in waiting. They just edit his paperwork.

Then we have the issue of reality. Anyone familiar with the literature or websites of contemporary futurists is left with the impression that we live in the most innovative and scientifically productive era in history. Yet things may not be as rosy as they appear when we read only the headlines. At least since 2009 there has been a steady chorus of well-respected technologists, scientists, and academics telling us that innovation is not happening fast enough; that is, our rates of technological advancement are not merely failing to exceed those found in the past, they are not even matching them. A common retort is to club whoever says this over the head with Moore’s Law: surely, with computer speeds increasing exponentially, computing must be pulling everything else along. But, to pull a quote from ol’ Gershwin, “it ain’t necessarily so”.

As Paul Allen, co-founder of Microsoft and the financial muscle behind the Allen Institute for Brain Science, pointed out in his 2011 article The Singularity Isn’t Near, the problem of building ever smaller and faster computer chips is actually relatively simple, but many of the other problems we face, such as understanding how the human brain works and applying the lessons of that model, whether to the human brain itself or to the creation of AI, suffer from what Allen calls the “complexity brake”. The problems have become so complex that they are slowing down the pace of innovation itself. Perhaps it’s not so much a brake as a wall we’ve slammed into.

A good real-world example of the complexity brake in action is what is happening with innovation in the drug industry, where new discoveries have stalled. In the words of Matthew Herper at Forbes:

But the drug industry has been very much a counter-example to Kurzweil’s proposed law of accelerating returns. The technologies used in the production of drugs, like DNA sequencing and various types of chemistry approaches, do tend to advance at a Moore’s Law-like clip. However, as Bernstein analyst Jack Scannell pointed out in Nature Reviews Drug Discovery, drug discovery itself has followed the opposite track, with costs increasing exponentially. This is like Moore’s Law backwards, or, as Scannell put it, Eroom’s Law.

It is only when we acknowledge that there is a barrier in front of our hopes for innovation and discovery that we can seek out its source and try to remove it. If you don’t see a wall you run the risk of running into it, and you certainly won’t be able to do the smart things: swerve, scale, leap, or prepare to bust through.

At least part of the problem stems from the fact that though we are collecting a simply enormous amount of scientific data, we are having trouble bringing this data together either to solve problems or to aid our actual understanding of what it is we are studying. Trying to solve this aspect of the innovation problem is a goal of the brilliant young technologist Jeff Hammerbacher, founder of Cloudera. Hammerbacher has embarked on a project with Mount Sinai Hospital to apply the tools for organizing and analyzing overwhelming amounts of data built by companies like Google and Facebook to medical information, in the hopes of spurring new understanding and treatments of diseases. The problem Hammerbacher is trying to solve, as he acknowledged in a recent interview with Charlie Rose, is precisely the one identified by Herper in the quote above: that innovation in treating diseases like mental illness is simply not moving fast enough.

Hammerbacher is our Spider-Man. Having helped create the analytical tools that underlie Facebook, he began to wonder if it was worth it, quipping: “The best minds of my generation are thinking about how to make people click ads.” The real problem wasn’t the lack of commercial technology; it was the barriers to important scientific discoveries that would actually save people’s lives. Hammerbacher’s conscientious objection over what technological innovation was being applied for, versus what it wasn’t, is, I think, a perfect segway (excuse the pun) to my third point: the political and ethical dimension, or lack thereof, in much futurist writing today.

In my view, futurists too seldom acknowledge the political and ethical context in which technology will be embedded. Technologies hoped for in futurist communities, such as brain-computer interfaces, radical life extension, cognitive enhancement, or AI, are treated in the spirit of consumer goods: if they exist, we will find a way to pay for them. There is, perhaps, the assumption that such technologies will follow the decreasing cost curves found in consumer electronics: cell phones were once toys for the rich, and now the poorest people on earth have them.

Yet it isn’t clear to me that the hoped-for technologies will actually follow decreasing cost curves; they may instead resemble health care costs rather than the plethora of cheap goods we’ve scored thanks to Moore’s Law. Nor is it clear to me that, should such technologies initially be affordable only to the rich, the gap would be at all acceptable to the mass of the middle class and the poor unless it closed very, very fast. After all, some futurists are suggesting that not just life but some corporeal form of immortality will be at stake. There isn’t much reason to riot if your wealthy neighbor toots around in his jetpack while you’re stuck driving a Pinto. But the equation would surely change if what was at stake was a rich guy living to be a thousand while you’re left broke, jetpackless, driving a Pinto and kicking the bucket at 75.

The political question of equity will thus be important, as will much deeper ethical questions about what we should do and how we should do it. The slower pace of innovation and discovery, if it holds, might ironically be a good thing for society, for a time at least (though not for individuals who were banking on an earlier arrival of technicolored miracles), for it will give us time to sort through these political and ethical questions.

There are three solutions I can think of that would improve the way science and technology are consumed and presented by futurists, help us get through the current barriers to invention and discovery, and build our capacity to deal with whatever is on the other side. The first problem, that of distortion, might be dealt with by better sorting of scientific news stories, so that the reader has some idea both of where the finding being presented lies along the path of scientific discovery or technological innovation and of how important it is in the overall framework of a field. This would prevent things like a preliminary finding regarding the creation of an artificial hippocampus in a rat brain being placed next to the discovery of the Higgs boson, at least without some color coding or other signal that these discoveries are both widely separated along the path of discovery and of grossly different import.

As for the barriers to innovation and discovery themselves: more attempts such as Hammerbacher’s need to be tried. Walls need to be better identified, and perhaps whole projects bringing together government and venture capital resources and money should be used to scale over or even bust through these blocked paths. As Hammerbacher’s case seems to illustrate, a lot of technology is fluff; it’s about click rates, cool gadgets, and keeping up with the Joneses. Yet technology is also vitally important as the road to some of our highest aspirations. Without technological innovation we cannot alleviate human suffering, extend the time we have here, or spread the ability to reach self-defined ends to every member of the human family.

Some technological breakthroughs would actually deserve the appellation. Quantum computing, if viable, and if it lives up to the hype, would be like Joshua’s trumpets against the walls of Jericho in terms of the barriers to innovation we face. This is because, theoretically at least, it would answer the biggest problem of the era, the same one identified by Hammerbacher: we are generating an enormous amount of real data but are having a great deal of trouble organizing that information into actionable units we actually understand. In effect, we are creating an exponentially growing database that requires exponentially increasing effort to piece together, running to a standstill. Quantum computing, again theoretically, would address this by making such databases searchable without the need to organize them beforehand.
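The claim that a quantum computer could search a database without organizing it beforehand refers, in its best-known form, to Grover’s algorithm, which locates a marked item among N unsorted entries in roughly √N steps, where a classical scan needs about N/2 on average. Here is a toy classical simulation of the idea in plain numpy (not real quantum hardware, of course; the list size of 1024 and the marked index 137 are arbitrary illustrations):

```python
import numpy as np

def grover_search(n_items, marked):
    """Simulate Grover's algorithm over an unstructured list of n_items,
    amplifying the amplitude of index `marked` in ~sqrt(N) iterations."""
    N = n_items
    # Start in the uniform superposition: every index equally likely.
    state = np.full(N, 1.0 / np.sqrt(N))
    # The optimal iteration count is roughly (pi/4) * sqrt(N).
    iterations = int(round(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        # Oracle step: flip the sign of the marked item's amplitude.
        state[marked] *= -1
        # Diffusion step: reflect every amplitude about the mean.
        state = 2 * state.mean() - state
    # "Measurement": probabilities are squared amplitudes.
    probs = state ** 2
    return int(np.argmax(probs)), float(probs[marked])

idx, p = grover_search(1024, marked=137)
```

With N = 1024 items, about 25 oracle-plus-diffusion iterations concentrate nearly all of the probability on the marked index, whereas a classical search of an unsorted list would expect to examine around 512 entries.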

Things in terms of innovation are not, of course, all gloom and doom. One fast-moving field that has recently come to public attention is molecular and synthetic biology, perhaps the only area where knowledge and capacity are not merely equaling but exceeding Moore’s Law.

To conclude, the very fact that innovation might be slower than we hope (though we should make every effort to get it moving) should not be taken as an unmitigated disaster but as an opportunity to figure out what exactly we want to do when many of the hoped-for wonders of science and technology actually arrive. At the end of his recent TED Talk on reviving extinct species, a possibility that itself grows out of the biological revolution of which synthetic biology is a part, Stewart Brand gives us an idea of what this might look like. Asked whether it was ethical for scientists to “play God” in doing such a thing, he responded that he and his fellow pioneers were trying to answer the question of whether we could revive extinct species, not whether we should. The ability to revive extinct species, if it worked, would take some time to master, and would be a multi-generational project whose end Brand, given his age, would not see. This gives us plenty of time to decide whether de-extinction is a good idea, a decision he certainly hopes we will make. The only way we can justly do this is to set up democratic forums to discuss and debate the question.

The ex-hippie Brand has been around a long time, and he is great evidence to me that old age plus experience can still result in what we once called wisdom. He has been present long enough to see many of our technological dreams of space colonies and flying cars and artificial intelligence fail to come true despite our childlike enthusiasm. He seems blissfully unswept up in all the contemporary hoopla, still less in his own importance in the grand scheme of things, and has devoted his remaining years to generation-long projects, such as the Clock of the Long Now and the revival of extinct species, that will hopefully survive into the far future after he is gone.

Brand has also been around long enough to see all the Frankensteins that have not broken loose: the GMOs and China Syndromes and all those oh-so-frightening “test tube babies”. His attitude toward science and technology seems to be that it is neither savior nor Shiva; it’s just the cool stuff we can do when we try. Above all, he knows what the role of scientists and technologists is, and what the role of the rest of us is. The former show us what tricks we can play, but it is up to all of us to decide how, or whether, we should play them.