Am I a machine?

A question I’ve been obsessing over lately goes something like this: Is life a form of computation, and if so, is this thought somehow dangerous?

Of course, drawing an analogy between life and the most powerful and complex tool human beings have so far invented wouldn’t be a historical first. There was a time in which people compared living organisms to clocks, and then to steam engines, and then, perhaps surprisingly, to telephone networks.

Thoughtful critics, such as the roboticist Rodney Brooks, think this computational metaphor has already exhausted its usefulness and that its overuse is proving detrimental to scientific understanding. In his view scientists and philosophers should jettison the attempt to see everything in terms of computation and information processing, which has blinded them to other, more productive metaphors hidden by the glare of a galloping Moore’s Law. Perhaps today it’s just the computer’s turn, and someday in the future we’ll have an even more advanced tool that will again seem the perfect model for the complexity of living beings. Or maybe not.

If instead we have indeed reached peak metaphor, it will be because with computation we really have discovered a tool that doesn’t just resemble life in terms of features, but reveals something deep about the living world- because it allows us for the first time to understand life as it actually is. It would be as if Giambattista Vico’s claim, made right at the start of the scientific revolution, that “Verum et factum convertuntur”- the true and the made are convertible- had proven strangely right after all these years: humanity can never know the natural world, only what we ourselves have made. Maybe we will finally understand life and intelligence because we are now able to recreate a version of it in a different substrate (not to mention engineering it in the lab). We will know life because we are at last able to make it.

But let me start with the broader story…

Whether we’re blinded by the power of our latest uber-tool, or on the verge of a revolution in scientific understanding, however, might matter less than the unifying power of a universal metaphor. And I would argue that science needs just such a unifying metaphor. It needs one if it is to give us a vision of a rationally comprehensible world. It needs one for the purpose of education, along with public understanding and engagement. Above all, a unifying metaphor is needed today as a ballast against over-specialization, which traps the practitioners of the various branches of science (including the human sciences) in silos, unable to communicate with one another and thereby reunify a picture of a world that science itself artificially divided into fiefdoms as an essential first step towards understanding it. And of all the metaphors we have imagined, computation really does appear uniquely fruitful and revelatory, not just in biology but across multiple and radically different domains- a possible skeleton key for problems that have frustrated scientific understanding for decades.

One place where the computation/information analogy has grown over the past decades is fundamental physics, as an increasing number of physicists have begun to borrow concepts from computer science in the hopes of bridging the gap between general relativity and quantum mechanics.

This informational turn in physics can perhaps be traced back to the Israeli physicist Jacob Bekenstein, who way back in 1972 proposed what became known as the “Bekenstein bound”: an upper limit on the entropy, and thus the information, that can exist in a finite region of space. Pack information any tighter than roughly 10⁶⁹ bits per square meter and that region will collapse to form a black hole. Physics, it seems, puts hard limits on our potential computational abilities (they’re a long way off), just as it places hard limits on our potential speed. What Bekenstein showed was that thinking of physics in terms of computation helped reveal something deeper not just about the physical world but about the nature of computation itself.
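For readers who want a back-of-the-envelope sense of where that 10⁶⁹ figure comes from, here is a rough sketch using the closely related Bekenstein-Hawking entropy of a horizon of area A (the Bekenstein bound proper also involves the system’s energy and size):

```latex
% Rough sketch: the Bekenstein-Hawking entropy of a horizon of area A,
% read as a maximum information content in bits.
\[
  S_{\mathrm{BH}} = \frac{k_B\,A}{4\,\ell_P^{2}},
  \qquad
  I_{\max} = \frac{A}{4\,\ell_P^{2}\,\ln 2}
  \;\approx\; 1.4\times 10^{69}\ \text{bits for } A = 1\ \mathrm{m^{2}},
\]
% where \ell_P^{2} = \hbar G / c^{3} \approx 2.6\times 10^{-70}\ \mathrm{m^{2}} is the Planck area.
```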

This recasting of physics in terms of computation, often called digital physics, really got a boost with an essay by the physicist John Wheeler in 1989 titled “It from Bit.” It’s probably safe to say that no one has ever really understood the enigmatic Wheeler, who, if one weren’t aware that he was one of the true geniuses of 20th century physics, might be mistaken for a mystic like Fritjof Capra or, heaven forbid, a woo-woo spouting conman- here’s looking at you, Deepak Chopra. The key idea of It from Bit is captured in this quote from his aforementioned essay, a quote that also captures something of Wheeler’s koan-like style:

“It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.”

What distinguishes digital physics today from Wheeler’s late 20th century version is above all the fact that we live in a time when quantum computers have not only been given a solid theoretical basis, but have been practically demonstrated. A tool born from an almost offhand observation by the brilliant Richard Feynman, who in the 1980s declared what should have been obvious: “Nature isn’t classical . . . and if you want to make a simulation of Nature, you’d better make it quantum mechanical…”

What Feynman could not have guessed was that physics would make progress (to this point, at least) not from applying quantum computers to the problems of physics, so much as from applying ideas about how quantum computation might work to the physical world itself. Leaps of imagination, such as seeing space itself as a kind of quantum error correcting code, reformulating space-time and entropy in terms of algorithmic complexity or compression, or explaining how, as with Schrödinger’s infamous cat, superpositions existing at the quantum level break down for macroscopic objects as a consequence of computational complexity- the Schrödinger equation remaining effectively unsolvable for large objects so long as P ≠ NP.

There’s something new here, but perhaps also something very old, an echo of Empedocles’ vision of a world formed out of the eternal conflict between Love and Strife. If for the ancient Greek philosopher life could only exist at the mid-point between total union and complete disunion, then we might assert that life can only exist in a universe with room for entropy to grow, neither curled up too small nor too dispersed for any structures to congregate- a causal diamond. In other words, life can only exist in a universe whose structure is computable.

Of course, one shouldn’t make the leap from claiming that everything is somehow explainable from the point of view of computation to making the claim that “a rock implements every finite state automaton” , which, as David Chalmers pointed out, is an observation verging on the vacuous. We are not, as some in Silicon Valley would have it, living in a simulation, so much as in a world that emerges from a deep and continual computation of itself. In this view one needs some idea of a layered structure to nature’s computations, whereby simple computations performed at a lower level open up the space for more complex computations on a higher stage.

Digital physics, however, is mainly theoretical at this point and not without its vocal critics. In the end it may prove just another cul-de-sac rather than a viable road to the unification of physics. Only time will tell, but what may bolster it is the active development of quantum computers. Quantum theory and quantum technology, one can hope, might find themselves locked in an iterative process, each aiding the development of the other in the same way that the steam engine propelled the development of thermodynamics, which in turn yielded yet more efficient steam engines in the 19th century. Above all, quantum computers may help physicists sort out the rival interpretations of what quantum mechanics actually means, with quantum mechanics ultimately understood as a theory of information- a point made brilliantly by the science writer Philip Ball in his book Beyond Weird.

There are thus good reasons for applying the computational metaphor to the realm of physics, but what about where it really matters for my opening question, that is, when it comes to life? To begin, what is the point of connection between computation in merely physical systems and in those we deem alive, and how do they differ?

By far the best answer to these questions that I’ve found is the case made by John E. Mayfield in his book The Engine of Complexity: Evolution as Computation. Mayfield views both physics and biology in terms of the computation of functions. What distinguishes historical processes such as life, evolution, culture and technology from mere physical systems is their ability to pull specific structure from the much more general space of the physically possible. A common screwdriver, for example, is at the end of the day just a particular arrangement of atoms. It’s a configuration that, while completely consistent with the laws of physics, is also highly improbable, or, as the physicist Paul Davies put it in a more recent book on the same topic:

“The key distinction is that life brings about processes that are not only unlikely but impossible in any other way.”

Yet the existence of that same screwdriver is trivially understandable when explained with reference to agency. And this is just what Mayfield argues evolution and intelligence do: they create improbable yet possible structures in light of their own needs.

Fans of Richard Dawkins or students of scientific history might recognize an echo here of William Paley’s famous argument for design. Stumble upon a rock and its structure calls out for little explanation; a pocket watch, with its intricate internal parts, gives clear evidence of design. Paley was nothing like today’s creationists- he had both a deep appreciation for and understanding of biology, and was an amazing writer to boot. The problem is he was wrong. What Darwin showed was that a designer wasn’t required for even the most complex of structures: random mutation plus selection gives us what look like miracles over sufficiently long periods of time.

Or at least evolution kind of does. The issue that most challenges the Theory of Natural Selection is the overwhelming size of the search space. When most random mutations are either useless or outright detrimental, how does evolution find not only structure, but the right structure, and even highly complex structure at that? It’s hard to see how monkeys banging away on typewriters for the entire age of the universe would get you the first pages of Hamlet, let alone Shakespeare himself.

The key, again it seems, is to apply the metaphor of computation. Evolution is a kind of search over the space of possible structures, but these are structures of a peculiar sort. What makes these useful structures peculiar is that they’re often computationally compressible. The code to express them is much shorter than the expression itself. As Jordana Cepelewicz put it for a piece at Quanta:

“Take the well-worn analogy of a monkey pressing keys on a computer at random. The chances of it typing out the first 15,000 digits of pi are absurdly slim — and those chances decrease exponentially as the desired number of digits grows.

But if the monkey’s keystrokes are instead interpreted as randomly written computer programs for generating pi, the odds of success, or “algorithmic probability,” improve dramatically. A code for generating the first 15,000 digits of pi in the programming language C, for instance, can be as short as 133 characters.”
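To make the compressibility point concrete, here is a short program that will keep printing digits of pi for as long as you let it run- a sketch in Python using Gibbons’ well-known “spigot” algorithm, rather than the 133-character C program the article refers to:

```python
# A minimal sketch of algorithmic compressibility: a few lines of code that
# generate an unbounded stream of digits of pi (Gibbons' unbounded spigot).
def pi_digits():
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n                                  # next confirmed digit
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(20)))    # 31415926535897932384
```

The program is a few hundred characters; the output it stands in for is unbounded. That gap between the length of the description and the length of what it describes is exactly what “computationally compressible” means.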

In rough analogy to Turing’s “primitives” for machine-based computation, biological evolution requires only a limited number of actions to access a potentially infinite number of end states. Organisms need a way of exploring the search space; they need a way to record/remember/copy that information, along with the ability to compress this information so that it is easier to record, store and share. Ultimately, the same sort of computational complexity that stymies human-built systems may place limits on the types of solutions evolution itself is able to find.

The reconceptualization of evolution as a sort of computational search over the space of easily compressible possible structures is a case excellently made by Andreas Wagner in his book Arrival of the Fittest. Interestingly, Wagner draws a connection between this view of evolution and Plato’s idea of the forms. The irony, as Wagner points out, being that it was Plato’s idea of ideal types applied to biology that may have delayed the discovery of evolution in the first place. It’s not “perfect” types that evolution is searching over, for perfection isn’t something that exists in an ever changing environment, but useful structures that are computationally discoverable. Molecular and morphological solutions to the problems encountered by an organism- analogs to codes for the first 15,000 digits of pi.

Does such evolutionary search always result in the same outcome? The late paleontologist and humanist Stephen Jay Gould would have said no. Replay the tape of evolution, as happens to George Bailey in “It’s a Wonderful Life,” and you’d get radically different outcomes. Evolution is built upon historical contingency, like Homer Simpson wishing he hadn’t killed that fish.

Yet our view of evolution has become more nuanced since Gould’s untimely death in 2002. In his book Improbable Destinies, Jonathan Losos lays out the current research on the issue. Natural Selection is much more a tinkerer than an engineer. Its solutions are, by necessity, kludgey and a result of whatever is closest at hand. Evolution is constrained by the need for survival to move from next-best solution to next-best solution across the search space of adaptation, like a cautious traveler walking from stone to stone across a stream.

Starting points are thus extremely important for what Natural Selection is able to discover over a finite amount of time. That said, and contra Gould, similar physics and environmental demands do often lead to morphological convergence- think bats and birds with flight. But even when species gravitate towards a particular phenotype there is often a stunning amount of underlying genetic variety or indeterminacy. Such genetic variety within the same species can be leveraged into more than one solution to a challenging environment, with one branch getting bigger and the other smaller in response to, say, predation. Sometimes evolution at the microscopic level can, by pure chance, skip over intermediate solutions entirely and land at something close to the optimal solution thanks to what in probabilistic terms should have been a fatal mutation.

The problem encountered here might be quite different from the one mentioned above: not finding the needle of useful structure in the infinite haystack of possible ones, but avoiding finding the same small sample of needles over and over again. In other words, if evolution really is so good at convergence, why does nature appear so abundant with variety?

Maybe evolution is just plain creative- a cabinet of freaks and wonders. So long as a mutation isn’t detrimental to reproduction, even if it provides no clear gain, Natural Selection is free to give it a whirl. Why do some salamanders have only four toes? Nobody knows.

This idea that evolution needn’t be merely utilitarian, but can be creative, is an argument made by the renowned mathematician Gregory Chaitin, who sees deep connections between biology, computation, and the ultimately infinite space of mathematical possibility itself. Evolutionary creativity is also found in the work of the ornithologist Richard O. Prum, whose controversial The Evolution of Beauty argues that we need to see the perception of beauty by animals in the service of reproductive or consumptive (as in bees and flowers) choice as something beyond a mere proxy measure for genetic fitness. Evolutionary creativity, propelled by sexual selection, can sometimes run in the opposite direction from fitness.

In Prum’s view evolution can be driven not by the search for fitness but by the way in which certain displays resonate with the perceivers doing the selection. In this way organisms become agents of their own evolution and the explored space of possible creatures and behaviors is vastly expanded from what would pertain were alignment to environmental conditions the sole engine of change.

If resonance with perceiving mates or pollinators acts to expand the search space of evolution in multicellular organisms with the capacity for complex perception- brains- then unicellular organisms do them one better through a kind of shared, parallel search over that space.

Whereas multicellular organisms can rely on sex as a way to expand the evolutionary search space, bacteria have no such luxury. What they have instead is an even more powerful form of search- horizontal gene transfer. Constantly swapping genes among themselves gives bacteria a sort of plug-n-play feature.

As Ed Yong in his I Contain Multitudes brilliantly lays out, in possession of nature’s longest lived and most extensive tool kit, bacteria are often used by multicellular organisms to do things such as process foods or signal mates, which means evolution doesn’t have to reinvent the wheel. These bacteria aren’t “stupid”. In their clonal form as bio-films, bacteria not only exhibit specialized/cooperative structures, they also communicate with one another via chemical and electrical signalling– a slime internet, so to speak.

It’s not just in this area of evolution-as-search that the application of the computational metaphor to biology proves fruitful. Biological viruses, which use the cellular machinery of their hosts to self-replicate, bear a striking resemblance to computer viruses that do likewise in silicon. Plants, too, have characteristics that resemble human communications networks, as in forests whose trees communicate, coordinate responses to dangers, and share resources using vast fungal webs.

In terms of agents with limited individual intelligence whose collective behavior can nonetheless be considered quite sophisticated, nothing trumps the eusocial insects, such as ants and termites. In such forms of swarm intelligence, computation is analogous to what we find in cellular automata, where a complex task is tackled by breaking a problem into numerous much smaller problems solved in parallel.
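For a toy picture of what that kind of decentralized computation looks like, here is a minimal sketch (my own illustration, using an elementary one-dimensional cellular automaton, with Rule 110 chosen arbitrarily): every cell updates simultaneously using nothing but its own state and its two neighbors’, yet intricate global patterns emerge from purely local rules.

```python
# Minimal 1D cellular automaton: each cell looks only at itself and its two
# neighbors, all cells update in parallel, and structure emerges globally.
RULE = 110  # any of the 256 elementary rules would do; 110 is just an example

def step(cells):
    """Apply the rule to every cell at once (wrap-around at the edges)."""
    n = len(cells)
    return [(RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 64
cells[32] = 1                       # start from a single "on" cell
for _ in range(20):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```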

Dennis Bray in his superb book Wetware: A Computer in Every Living Cell has provided an extensive reading of biological function through the lens of computation and information processing. Bray meticulously details how enzyme regulation is akin to switching, the up and down regulation of enzymes like the on/off states of a transistor, with chains of enzymes acting like electrical circuits. He describes how proteins form signaling networks within an organism that allow cells to perform logical operations, the components linked not by wires but by molecular diffusion within and between cells. Chemical modifications such as the methyl groups attached to receptor proteins allow cells to measure the concentration of attractants in the environment, in effect acting as counters- performing a kind of calculus.
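A toy model of that counting-and-comparing trick (my own illustrative sketch, not taken from Bray) might look something like this: the cell keeps a slowly adapting “memory” of recent attractant levels- standing in for its methylation state- and compares each new reading against it, in effect taking a crude derivative of its chemical environment over time.

```python
# Toy chemotaxis logic: compare what the cell senses now against a slowly
# adapting memory of what it sensed recently, and decide to keep swimming
# straight ("run") when things are improving, or change direction ("tumble").
def chemotaxis_decisions(readings, adaptation_rate=0.1):
    memory = readings[0]                           # stands in for the methylation level
    decisions = []
    for c in readings:
        decisions.append("run" if c > memory else "tumble")
        memory += adaptation_rate * (c - memory)   # slow adaptation of the memory
    return decisions

# Rising attractant -> mostly "run"; once the trend reverses, "tumble" takes over
# (after a short lag while the memory catches up).
print(chemotaxis_decisions([1.0, 1.2, 1.5, 1.9, 1.4, 1.0, 0.7]))
```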

Bray thinks that it is these protein networks rather than anything that goes on in the brain that are the true analog for computer programs such as artificial neural nets. Just as neural nets are trained, the connections between nodes in a network sculpted by inputs to derive the desired output, protein networks are shaped by the environment to produce a needed biological product.

The history of artificial neural nets, the foundation of today’s deep learning, is itself a fascinating study in the iterative relationship between computer science and our understanding of the brain. When Alan Turing first imagined his Universal Computer, it was the human brain as then understood by the ascendant behavioral psychology that he used as its template.

It wasn’t until 1943 that the first computational model of a neuron was proposed by McCulloch and Pitts, who would expand upon their work with a landmark 1959 paper, “What the frog’s eye tells the frog’s brain.” The title might sound like a snoozer, but it’s certainly worth a read, as much for the philosophical leaps the authors make as for the science.

For what McCulloch and Pitts discovered in researching frog vision was that perception wasn’t passive but an active form of information processing, pulling out distinct features from the environment such as “bug detectors”- what they, leaning on Immanuel Kant of all people, called a physiological synthetic a priori.

It was a year earlier, in 1958, that Frank Rosenblatt superseded McCulloch and Pitts’ earlier and somewhat simplistic computational model of neurons with his perceptrons, whose development would shape the rollercoaster-like future of AI. Perceptrons were in essence single-layer artificial neural networks. What made them appear promising was the fact that they could “learn.” A single-layer perceptron could, for example, be trained to identify simple shapes, in a kind of analog to how the brain was discovered to be wired together by synaptic connections between neurons.
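A minimal sketch of that learning rule (illustrative Python, not Rosenblatt’s original hardware or code): a single-layer perceptron nudges its weights whenever it guesses wrong, and on a linearly separable target like logical AND it quickly settles on a correct set of weights.

```python
# Perceptron learning rule on the (linearly separable) AND function.
def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred              # 0 if correct, +/-1 if wrong
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred, "(target:", target, ")")
# Run the same loop on XOR and it will never converge - the limitation
# Minsky and Papert made famous.
```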

The early success of perceptrons, and as it turned out the future history of AI, was driven into a ditch when in 1969 Marvin Minsky and Seymour Papert, in their landmark book Perceptrons, showed just how limited the potential of perceptrons as a way of mimicking cognition actually was. A single-layer perceptron can handle linearly separable functions such as AND and OR, but it breaks down on functions that lack such a clean boundary, the most famous example being XOR. In the 1970s AI researchers turned away from modeling the brain (connectionist) and towards the symbolic nature of thought (symbolist). The first AI Winter had begun.

The key to getting around these limitations proved to be using multiple layers of perceptrons, what we now call deep learning. Though we’ve known this since the 1980s, it took until now, with the exponential improvement in computer hardware and the accumulation of massive data sets, for the potential of perceptrons to come home.
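A minimal sketch of why stacking layers helps: with a single hidden layer of threshold units (weights chosen by hand here for clarity, rather than learned), XOR- the function no single-layer perceptron can represent- becomes a simple composition of functions each layer can handle.

```python
# Two-layer network computing XOR with hand-picked weights:
# hidden unit 1 computes OR, hidden unit 2 computes NAND, and
# the output unit ANDs them together, which is exactly XOR.
def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)       # OR(x1, x2)
    h2 = step(-x1 - x2 + 1.5)      # NAND(x1, x2)
    return step(h1 + h2 - 1.5)     # AND(h1, h2) == XOR(x1, x2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # prints 0, 1, 1, 0
```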

Yet it isn’t clear that most of the problems with perceptrons identified by Minsky and Papert have truly been solved. Much of what deep learning does can still be characterized as “line fitting”. What’s clear is that, whatever deep learning is, it bears little resemblance to how the brains of animals actually work, which was well described by the authors at the end of their book.

“We think the difference in abilities comes from the fact that a brain is not a single, uniformly structured network. Instead, each brain contains hundreds of different types of machines, interconnected in specific ways which predestin that brain to become a large, diverse society of specialized agencies.” (273)

The mind isn’t a unified computer but an emergent property of a multitude of different computers- connected, yes, but also kept opaque from one another as a product of evolution and as the price paid for computational efficiency. It is because of this modular nature that minds remain invisible even to their possessors, although these computational layers may be stacked, with higher layers building off the end products of the lower, possibly like the layered, unfolding nature of the universe itself. The origin of the mysteriously hierarchical structure of nature and human knowledge?

If Minsky and Papert were right about the nature of mind- if, as in an updated version of their argument recently made by Kevin Kelly, or a similar one made by the AI researcher François Chollet, there are innumerable versions of intelligence, many of which may be incapable of communicating with one another- then this has deep implications for the computational metaphor as applied to thought and life, and sets limits on how far we will be able to go in engineering, modeling, or simulating life with machines following principles laid down by Turing.

The quest of Paul Davies for a unified theory of information to explain life might be doomed from the start. Nature would be shown to utilize a multitude of computational models fit for their purposes, and not just multiple models between different organisms, but multiple models within the same organism. In the struggle between Turing universality and nature’s specialized machines Feynman’s point that traditional computers just aren’t up to the task of simulating the quantum world may prove just as relevant for biology as it does for physics. To bridge the gap between our digital simulations and the complexity of the real we would need analog computers which more fully model what it is we seek to understand, though isn’t this what experiments are- a kind of custom built analog computer? No amount of data or processing speed will eliminate the need for physical experiments. Bray himself says as much:

“A single leaf of grass is immeasurably more complicated than Wolfram’s entire opus. Consider the tens of millions of cells it is built from. Every cell contains billions of protein molecules. Each protein molecule in turn is a highly complex three-dimensional array of tens of thousands of atoms. And that is not the end of the story. For the number, location, and particular chemical state of each protein molecule is sensitive to its environment and recent history. By contrast, an image on a computer screen is simply a two-dimensional array of pixels generated by an iterative algorithm. Even if you allow that pixels can show multiple colors and that underlying software can embody hidden layers of processing, it is still an empty display. How could it be otherwise? To achieve complete realism, a computer representation of a leaf would require nothing less than a life-size model with details down to the atomic level and with a fully functioning set of environmental influences.” (103)

What this diversity of computational models means is not that the Church-Turing Thesis is false (Turing machines would still be capable of duplicating any other model of computation), but that its extended version is likely false. The Extended Church-Turing Thesis claims that any efficient computation can be efficiently performed by a Turing machine, yet our version of Turing machines might prove incapable of efficiently duplicating either the extended capabilities quantum computers gain by leveraging the deep underlying structure of reality, or the twisted structures utilized by biological computers “designed” by the multi-billion-year force of evolutionary contingency. Yet even if the Extended Church-Turing Thesis turns out to be false, the lessons derived from reflection on idealized Turing machines and their real world derivatives will likely continue to be essential to understanding computation in both physics and biology.

Even if science gives us an indication that we are nothing but computers all the way down, from our atoms to our cells, to our very emergence as sentient entities, it’s not at all clear what this actually means. To conclude that we are, at bottom, the product of a nested system of computation is to claim that we are machines. A very special type of machine, to be sure, but machines nonetheless.

The word machine itself conjures up all kinds of ideas very far from our notion of what it means to be alive. Machines are hard, deterministic, inflexible and precise, whereas life is notably soft, stochastic, flexible and analog. If living beings are machines they are also supremely intricate machines, they are, to borrow from an older analogy for life, like clocks. But maybe we’ve been thinking about clocks all wrong as well.

As mentioned, the idea that life is like an intricate machine, and therefore is best understood by looking at the most intricate machines humans have made, namely clocks, has been with us since the very earliest days of the scientific revolution. Yet as the historian Jessica Riskin points out in her brilliant book The Restless Clock, from the beginning there has been a debate over what exactly was being captured in the analogy. As Riskin lays out, starting with Leibniz there was always a minority among the materialists arguing that if life was clock-like, then what it meant to be a clock was to be “restless,” stochastic and undetermined. In the view of this school, sophisticated machines such as clocks or automatons gave us a window into what it meant to be alive, which, above all, meant to possess a sort of internally generated agency.

Life, in an updated version of this view, can be understood as computation, but it’s computation as performed by trillions upon trillions of interconnected, competing, cooperating soft machines- constructing the world through their senses and actions, each machine itself capable of some degree of freedom within an ever changing environment. A living world, unlike a dead one, is a world composed of such agents, and to the extent our machines have been made sophisticated enough to possess something like agency, perhaps we should consider them alive as well.

Barbara Ehrenreich makes something like this argument in her caustic critique of American culture’s denial of death, Natural Causes:

“Maybe then, our animist ancestors were on to something that we have lost sight of in the last few hundred years of rigid monotheism, science, and Enlightenment. And that is the insight that the natural world is not dead, but swarming with activity, sometimes even agency and intentionality. Even the place where you might expect to find solidity, the very heart of matter- the interior of a proton or a neutron- turns out to be flickering with ghostly quantum fluxuations. I would not suggest that the universe is “alive”, since that might invite misleading biological analogies. But it is restless, quivering, and juddering from its vast patches to its tiniest crevices. “ (204)

If this is what it means to be a machine, then I am fine with being one. This does not, however, address the question of danger. For the temptation of such fully materialist accounts of living beings, especially humans, is that they provide a justification for treating individuals as nothing more than a collection of parts.

I think we can avoid this risk even while retaining the notion that what life consists of is deeply explained by reference to computation as long as we agree that it is only ethical when we treat a living being in light of the highest level at which it understands itself, computes itself. I may be a Matryoshka doll of molecular machines, but I understand myself as a father, son, brother, citizen, writer, and a human being. In other words we might all be properly called machines, but we are a very special kind of machine, one that should never be reduced to mere tools, living, breathing computers in possession of an emergent property that might once have been called a soul.

 


Listening to the Abyss

Black hole

In homage to the international team of scientists working on the Event Horizon Telescope, who earlier this month gave us the first ever image of a black hole, below is an essay I wrote for Harvard’s BHI essay competition late last year. The weird reality of the universe we call home never ceases to humble and astound me, nor, when we work together as a species, does our ingenuity at uncovering the sublime order of the natural world cease to be a source of hope and pride.

____________________________

“Of all the conceptions of the human mind, from unicorns to gargoyles to the hydrogen bomb, the most fantastic, perhaps, is the black hole…” Kip Thorne

In the last century we’ve experienced an expansion of cosmic horizons that dwarfs any that occurred before. It’s a universe far larger and weirder than we could have imagined, held together by dark matter and propelled apart by dark energy, neither of which we truly understand. And of all the things in this wide Alice in Wonderland universe, nothing is as mind-blowingly weird as black holes.

Our expanded horizon has been made possible by a revolution in the tools used by astronomers: vastly improved optical telescopes and the birth of radio, gamma ray, infrared, and x-ray astronomy. Only in the last few years have we seen the rise of astronomy based on gravitational waves, the ripples of space-time. Gravitational wave detection gives us the ability not to see but, in some sense, to listen to black holes. [i] The question is how many of us will pay any attention to what they are saying?

For what is striking about this recent expansion of our cosmic horizons is how little it has impacted the world outside of science itself, the world of artists, authors, philosophers. [ii]

Certainly, the popularity of movies such as Interstellar suggests a public hunger for the question-begging awe implied by contemporary cosmology, but a comparison with the past reveals just how meh our reaction has been. One could lay blame for this disinterest on our selfie and celebrity based culture, but the problem goes deeper than that, and goes beyond the old complaint about the “two cultures.” [iii] What I think we’ve lost is our ability to experience the natural sublime.

The only expansion of human imagination, in terms of our place in the universe, that even comes close to the one we’ve experienced in the last century was the one that ended our pre-Copernican view of the world, when Copernicus and those who followed him gave us a new cosmos with the earth no longer at its center and threw the meaning of humanity into question. Yet these discoveries didn’t just circulate among early scientists, but quickly obsessed theologians, poets and philosophers who wrestled with what the new post-Copernican world meant for humanity’s place in the universe.

Many of their reflections express not so much joy at our expanded horizons as a sentiment akin to vertigo, as if people on a previously stable earth had been turned upside down and they were in danger of plunging into a sky that had become an infinite pit. In other words, they were seriously weirded out.

Take the famous 18th century fable of Carazan’s Dream. Carazan, a miserly Baghdad merchant, is tossed “up” into hell upon his death. He finds not tortures, but a journey through the infinite blackness of space. It is “A dreadful region of eternal silence, loneliness, and darkness” that envelops Carazan as he loses sight of the illumination of the last stars. He is filled with unbearable anguish as he realizes that even after traveling “ten-thousand times a thousand years” his journey into blackness would not be at an end. [iv]

The 18th century poet and children’s writer Anna Laetitia Barbauld in her poem “A Summer Evening’s Meditation” expressed something similar.

What hand unseen

Impells me onward thro’ the glowing orbs

Of habitable nature, far remote,

To the dread confines of eternal night,

To solitudes of vast unpeopled space [v]

In the 18th century Edmund Burke termed this strange mixture of foreboding and awe the sublime. [vi] It’s an experience that comes not just from looking up into an infinite sky, but whenever one experiences nature in a way that puts our limited human scale and power in perspective. Immanuel Kant brought the sublime into the realm of mathematics, noting that phenomena for which our everyday imagination showed itself to be woefully insufficient proved tractable with the use of mathematical thinking. Math was a tool for grabbing hold of the weird. [vii]

Another philosopher, Arthur Schopenhauer, went even deeper. For him, the experience of the sublime had nothing to do with fear or anxiety. Instead, it was the result of a kind of tension that emerged from contemplating the scale of nature relative to our own limits while simultaneously understanding it. Nature, which seems so beyond us, is instead in us, indeed is us. [viii]

It’s only a short step from this psychologizing of the sublime found in nature to seeing the sublime as originating in technology itself. For what made the natural sublime perceivable in the first place wasn’t so much the capabilities of the individual human mind as the whole history of science and technology up until that point, the knowledge that allowed us to finally apprehend just how large and weird our cosmic home actually was. Thus the natural sublime emerged in parallel with the idea of the technological sublime that would eventually swallow it. [ix]

Starting in the 19th century technology seemed to liberate us from nature. Our own creations became global in scope and were often looked upon with the same quasi-religious awe once limited to natural phenomena. Yet after the world wars, the creation of nuclear weapons, and the growing environmental consciousness of the 20th century, many non-scientists, with good reason, abandoned this idea of the technological sublime. Unfortunately, because the most powerful examples of the sublime in nature are only perceivable through the technology they distrusted, the natural sublime was rejected as well. [x]

Among the technologists of Silicon Valley and elsewhere belief in the technological sublime never really went away, and we’re still under its spell. Though the technological infrastructure we have built across and above the earth has been of great benefit to the material wellbeing of billions of people, culturally, that same structure has proven almost black hole like in its ability to torque our attention back upon ourselves. [xi] The most vocal proponents of the technological sublime use this very language. Our destiny is to enter the technological “singularity” and any attempts to see beyond it are futile. In a rejection of the universe Copernicus gave us, we’re back at the center of events. [xii]

Black holes seem almost ready made to provide a rejoinder to this kind of anthropocentric hubris, and if attended to, may even help us recover our sense of the natural sublime.

They dwarf anything created by human civilization in a way almost impossible to express, the mass of the largest yet discovered being 17 billion times that of the sun. [xiii] It’s such scale that allows them to exist in the first place. Under normal circumstances gravity is by far the weakest of the four forces, but with enough mass this weakling breaks out of its chains and becomes monstrous. The force of gravity becomes so large that space curves back upon itself, every path outward becomes a path inward towards the center, where even light itself is no longer fast enough to escape the compression of space that veers to infinity- the real singularity.

As objects get closer to the event horizon, beyond which nothing but radiation escapes, they circle this cosmic drain faster and faster, whole stars whipping around the black hole at speeds as fast as 93,000 miles per hour. [xiv] As matter is pulled towards absolute blackness it gives rise to quasars, the brightest objects in the universe, glowing with the light of millions of suns.

Falling into a black hole really would be like something out of Carazan’s Dream, only weirder.

From the perspective of an outside observer, someone slipping into a massive black hole would slow to the point of being frozen, only to be vaporized. Yet the person who actually fell in wouldn’t experience anything at all except the movement ever forward towards the singularity, in the same way we can only move forward in time. But what does that even mean? Here we come upon our limits: the apparent irreconcilability of quantum mechanics and general relativity, the limits of computation in finite time. [xv]

Long after the last star has burned out, black holes and their weirdness will survive. The only energy source in the universe will be their sluggardly Hawking radiation until, over almost unimaginably long spans of time, they too melt away. [xvi] Perhaps future civilizations will float just outside the death grip of the event horizon to siphon off the energy of the black hole at the center of the Milky Way, a mother lode with more energy than all of our galaxy’s billions of stars. [xvii] Perhaps space-faring civilizations don’t head for the stars, but towards the black. [xviii] Maybe black holes are the womb of space-time, giving rise to universes like and unlike our own. Or perhaps, if intelligences like ourselves survive, they will use the wonders of the abyss to create the world anew, which would mean that, in the very long run, the believers in the technological sublime were actually right after all.

[i] Levin, Janna. Black Hole Blues: and Other Songs from Outer Space. Anchor Books, a Division of Penguin Random House LLC, 2017.

[ii] There are, of course, exceptions. An excellent example of which is: Robinson, Marilynne. What Are We Doing Here? Farrar, Straus and Giroux, 2018.

[iii] Snow, C. P., and Stefan Collini. The Two Cultures. Cambridge University Press, 2014.

[iv] Fisher, Anne. The Pleasing Instructor Or Entertaining Moralist. G. Robinson, 1777. p. 143

[v] Barbauld, and Lucy Aikin. The Works of Anna Laetitia Barbauld: with a Memoir. Cambridge University Press, 2014. p.122

[vi] Burke wasn’t the first, but he was certainly the most influential figure to bring the sublime into the public discourse with his: Burke, Edmund, et al. A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful. Printed for R. and J. Taylor, 1772.

[vii] Kant, Immanuel, et al. Critique of Judgement. Oxford Univ. Press, 2008. p. 87

[viii] Schopenhauer, Arthur. The World as Will and Representation. Vol. 1, Cambridge University Press, 2010. p. 230

[ix] Miller, Perry. The Life of the Mind in America: From the Revolution to the Civil War. Harcourt, Brace & World, 1970.

[x] A good recent example of this turn against the natural sublime is: Marris, Emma. Rambunctious Garden: Saving Nature in a Post-Wild World. Bloomsbury, 2013.

[xi] A point beautifully made in: Billings, Lee. Five Billion Years of Solitude: the Search for Life among the Stars. Current, 2014.

[xii] Kurzweil, Ray. The Singularity Is near: When Humans Transcend Biology. Duckworth, 2016.

[xiii] Thomas, Jens, et al. “A 17-Billion-Solar-Mass Black Hole in a Group Galaxy with a Diffuse Core.” Nature, vol. 532, no. 7599, 2016, pp. 340–342, doi:10.1038/nature17197.

[xiv] The fastest recorded orbit of a star around a black hole: Kuulkers, E., et al. “MAXI J1659−152: the Shortest Orbital Period Black-Hole Transient in Outburst.” Astronomy & Astrophysics, vol. 552, 2013, doi:10.1051/0004-6361/201219447.

[xv] Harlow, Daniel, and Patrick Hayden. “Quantum Computation vs. Firewalls.” Journal of High Energy Physics, vol. 2013, no. 6, 2013, doi:10.1007/jhep06(2013)085.

[xvi] Adams, Fred, and Greg Laughlin. The Five Ages of the Universe: inside the Physics of Eternity. Touchstone, 2000.

[xvii] Lawrence, Albion, and Emil Martinec. “Black Hole Evaporation along Macroscopic Strings.” Physical Review D, vol. 50, no. 4, 1994, pp. 2680–2691., doi:10.1103/physrevd.50.2680.

[xviii] It has been suggested that one solution to the Fermi Paradox is that the aliens are “hibernating,” waiting for the Black Hole Era, when the universe has cooled enough to make computation extremely efficient: Sandberg, Anders, Stuart Armstrong, and Milan M. Ćirković. “That Is Not Dead Which Can Eternal Lie: The Aestivation Hypothesis for Resolving Fermi’s Paradox.” 27 Apr. 2017, arxiv.org/abs/1705.03394v1.

The Flash Crash of Reality

“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.”

                                                                                                 H.P. Lovecraft, The Call of Cthulhu  

“All stable processes we shall predict. All unstable processes we shall control.”

John von Neumann

For at least as long as there has been a form of written language to record such a thing, human beings have lusted after divination. The classical Greeks had their trippy Oracle at Delphi, while the Romans scanned entrails for hidden patterns, or more beautifully, sought out the shape of the future in the murmurations of birds. All ancient cultures, it seems, looked for the signs of fate in the movement of the heavens. The ancient Chinese script may have even originated in a praxis of prophecy, a search for meaning in the branching patterns of “oracle bones” and tortoise shells, signaling that perhaps written language itself originated not with accountants but with prophets seeking to overcome the temporal confines of the second law, in whose measure we are forever condemned.

The promise of computation was that this power of divination was at our fingertips at last. Computers would allow us to outrun time, and thus in seeing the future we’d finally be able to change it or deftly avoid its blows- the goal of all seers in the first place.

Indeed, the binary language underlying computation sprang from the fecund imagination of Gottfried Leibniz, who got the idea after he encountered the most famous form of Chinese divination, the I-Ching. The desire to create computing machines emerged with the Newtonian worldview and instantiated its premise; namely, that the world could be fully described in terms of equations whose outcome was preordained. What computers promised was the ability to calculate these equations, offering us a power born of asymmetric information- a kind of leverage via time travel.

Perhaps we should have known that time would not be so easily subdued. Outlining exactly how our recent efforts to know and therefore control the future have failed is the point of James Bridle’s wonderful book New Dark Age: Technology and the End of the Future.

With the kind of clarity demanded of an effective manifesto, Bridle neatly breaks his book up into ten “C’s”: Chasm, Computation, Climate, Calculation, Complexity, Cognition, Complicity, Conspiracy, Concurrency, and Cloud.

Chasm defines what Bridle believes to be our problem. Our language is no longer up to the task of navigating the world that the complex systems we depend upon have wrought. This failure of language, which amounts to a failure of thought, Bridle traces to the origins of computation itself.

Computation

To vastly oversimplify his argument, the problem with computation is that the detailed models it provides too often tempt us into confusing our map with the territory. Sometimes this leads us to mistrust our own judgement and defer to the “intelligence” of machines- a situation that in the most tragic of circumstances has resulted in what those in the National Park Service call “death by GPS.” In other cases, our confusion of the model with reality results in the surrender of power to the minority of individuals capable of creating such models and to the corporations that own and run them.

Computation was invented under the aforementioned premise, born with the success of calculus, that everything, including history itself, could be contained in an equation. It was also seen as a solution to the growing complexity of society. Echoing Stanislaw Lem, Vannevar Bush, one of the founders of modern computing, foresaw something like the internet in the 1940s with his “memex”: the mechanization of knowledge as the only possible lifeboat in the deluge of information modernity had brought.

Climate

One of the first projected purposes of computers was not just to predict, but to actually control the weather. And while we’ve certainly gotten better at the former, the best we’ve gotten from the latter is Kurt Vonnegut’s humorous takedown of the premise in his novel Cat’s Cradle, which was actually based on his chemist brother’s work on weather control for General Electric. It is somewhat ironic, then, that the very fossil fuel based civilization upon which our computational infrastructure depends is not only making the weather less “controllable” and predictable, but is undermining the climatic stability of the Holocene, which facilitated the rise of a species capable of imagining and building something as sophisticated as computers in the first place.

Our new dark age is not just a product of our misplaced faith in computation, but of the growing unpredictability of the world itself, a reality whose existential importance is most apparent in the case of climate change. Our rapidly morphing climate threatens the very communications infrastructure that allows us to see and respond to its challenges. Essential servers and power sources will likely be drowned under the rising seas, cooling-dependent processors taxed by increasing temperatures. Most disturbingly, rising CO2 levels are likely to make human beings dumber. As Bridle writes:

“At 1,000 ppm, human cognitive ability drops by 21 per cent. At higher atmospheric concentrations, CO2 stops us from thinking clearly. Outdoor CO2 already reaches 500 ppm”

An unstable climate undermines the bedrock of predictable natural cycles from which civilization itself emerged, that is, those of agriculture. In a way, our very success at controlling nature by making it predictable is destabilizing the regularity of nature that made such prediction possible in the first place.

It is here that computation reveals its double-edged nature, for while computation is the essential tool we need to see and respond to the “hyperobject” that is climate change, it is also one of the sources of this growing natural instability itself. Much of the energy behind modern computation is directly traceable to fossil fuels, a fact the demon coal lobby has eagerly pointed out.

Calculation

What the explosion of computation has allowed, of course, is an exponential expansion of the power and range of calculation. While one can quibble over whether Moore’s Law, the driving force behind the fact that everything is now software, has finally proved Ray Kurzweil and his ilk wrong by bending towards its asymptote, the fact is that nothing else in our era has followed the semiconductor’s exponential curve. Indeed, as Bridle shows, precisely the opposite.

For all their astounding benefits, machine learning and big data have not, as Chris Anderson predicted, resulted in the “End of Theory.” Science still needs theory, experiment, and, dare I say, humans to make progress, and what is clear is that in many areas outside ICT itself progress has not merely slowed but stalled.

Over the past sixty years, rather than experiencing Moore’s Law type increases, the pharmaceutical industry has suffered the opposite: the so-called Eroom’s Law (Moore spelled backwards), whereby “The number of new drugs approved per billion US dollars spent on research and development has halved every nine years since 1950.”
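Stated as a rough formula (my own paraphrase of the trend rather than anything more exact), Eroom’s Law says that pharmaceutical R&D productivity decays exponentially with a nine-year halving time:

```latex
% Eroom's Law as a rough exponential decay: new drugs approved per billion
% (inflation-adjusted) R&D dollars, halving every nine years since 1950.
\[
  N(t) \;\approx\; N(1950) \times 2^{-(t-1950)/9}
\]
```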

Part of this stems from the fact that the low-hanging fruit of discovery, not just in pharmaceuticals but elsewhere, has already been picked, along with the fact that the problems we’re dealing with are becoming exponentially harder to solve. Yet some portion of the slowdown in research progress is surely a consequence of technology itself, or at least of the ways in which computers are being relied upon and deployed. Ease of sharing, when combined with status hunting, inevitably leads to widespread gaming. Scientists are little different from the rest of us, seeking ways to top Google’s PageRank, YouTube recommendations, Instagram and Twitter feeds, or the sorting algorithms of Amazon, though for scientists the summit of recognition consists of prestigious journals, where publication can make or break a career.

Data being easy to come by, while experimentation and theory remain difficult, has meant that “proof” is often conflated with what turn out to be spurious p-values, or “p-hacking”. The age of big data has also been the age of science’s “replication crisis”, where seemingly ever more findings disappear upon scrutiny.

What all this calculation has resulted in is an almost suffocating level of complexity, which is the source of much of our inegalitarian turn. Connectivity and transparency were supposed to level the playing field. Instead, in areas such as financial markets, where the sheer amount of information to be processed has ended up barring new entrants, calculation has provided the ultimate asymmetric advantage to those with the highest capacity to identify and respond within nanoseconds to changing market conditions.

Asymmetries of information lie behind both our largest corporate successes and the rising inequality they have brought in their wake. Companies such as Walmart and Amazon are in essence logistics operations built on the unique ability of these entities to see and respond first, or most vigorously, to consumer needs. As Bridle points out, this rule of logistics has resulted in a bizarre scrambling of politics, the irony being that:

“The complaint of the Right against communism – that we’d all have to buy our goods from a single state supplier – has been supplanted by the necessity of buying everything from Amazon.”

Yet unlike the corporate giants of old such as General Motors, our 21st century behemoths don’t actually employ all that many people, and in their lust for automation seem determined to employ even fewer. The workplace, and the larger society, these companies are building seem even more designed around the logic of machines than the factories of the heyday of heavy industry. The “chaotic storage” deployed in Amazon’s warehouses is like something dreamed up by Kafka, but that’s because it emerges out of the alien “mind” of an algorithm, a real world analog to Google’s Deep Dream.

The world in this way becomes less and less sensible, except to the tiny number of human engineers who, for the moment, retain control over its systems. This is a problem that is only going to get worse with the spread of the Internet of Things- an extension of computation driven not by necessity, but by the fact that capitalism in its current form seems hell-bent on turning all products into services so as to procure a permanent revenue stream. It’s not a good idea. As Bridle puts it:

“We are inserting opaque and poorly understood computation at the very bottom of Maslow’s hierarchy of needs – respiration, food, sleep, and homeostasis – at the precise point, that is, where we are most vulnerable.”

What we should know by now, if anything, is that the more connected things are, the more hackable they become, and the more susceptible to rapid and often unexplainable crashes. Turning reality into a kind of computer simulation comes with the danger that the world at large might experience the sort of “flash crash” now limited to the stock market. Bridle wonders if we’ve already experienced just such a flash crash of reality:

“Or perhaps the flash crash in reality looks exactly like everything we are experiencing right now: rising economic inequality, the breakdown of the nation-state and the militarisation of borders, totalising global surveillance and the curtailment of individual freedoms, the triumph of transnational corporations and neurocognitive capitalism, the rise of far-right groups and nativist ideologies, and the utter degradation of the natural environment. None of these are the direct result of novel technologies, but all of them are the product of a general inability to perceive the wider, networked effects of individual and corporate actions accelerated by opaque, technologically augmented complexity.”

Cognition

It’s perhaps somewhat startling that even as we place ourselves in greater and greater dependence on artificial intelligence we’re still not really certain how or even if machines can think. Of course, we’re far from understanding how human beings exhibit intelligence, but we’ve never been confronted with this issue of inscrutability when it comes to our machines. Indeed, almost the whole point of machines is to replace the “herding cats” element of the organic world with the deterministic reliability of classical physics. Machines are supposed to be precise, predictable, legible, and above all, under the complete control of the human beings who use them.

The question of legibility today hasn’t gone far beyond the traditional debate between the two schools of AI that have rivaled each other since the field’s birth: those who believe intelligence is merely the product of connections between different parts of the brain, and those who think intelligence has more to do with the mind’s capacity to manipulate symbols. In our era of deep learning the Connectionists hold sway, but it’s not clear if we are getting any closer to machines actually understanding anything. That lack of comprehension has produced a new comedy genre of silly or disturbing mistakes made by computers, but it also presents real-world dangers as we place more and more of our decision making upon machines that have no idea what any of the data they are processing actually means, even as these programs discover dangerous hacks, such as deception, that are as old as life itself.
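To make the contrast between the two schools concrete, here is a toy sketch of my own, not anything drawn from Bridle’s book: the symbolic approach encodes knowledge as an explicit, human-readable rule, while the connectionist approach learns weighted connections from examples and can produce the right answers without anything we would recognize as understanding.

```python
# Symbolic school: knowledge written down as an explicit rule.
def symbolic_and(a, b):
    return 1 if (a == 1 and b == 1) else 0

# Connectionist school: a single perceptron that learns the same
# behavior from examples, encoding it only as numerical weights.
def train_perceptron(data, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (a, b), target in data:
            output = 1 if (w0 * a + w1 * b + bias) > 0 else 0
            error = target - output          # perceptron learning rule
            w0 += lr * error * a
            w1 += lr * error * b
            bias += lr * error
    return w0, w1, bias

# Training examples for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train_perceptron(data)

for (a, b), _ in data:
    learned = 1 if (w0 * a + w1 * b + bias) > 0 else 0
    print(a, b, "rule:", symbolic_and(a, b), "learned:", learned)
```

Both halves of the sketch end up computing the same function; the difference is that only in the first do we know why.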

Complicity

Of course, no techno-social system can exist unless it serves the interests of at least some group in a position of power. Bridle draws an analogy between society in ancient Egypt and our own. There, the power of the priests was premised on their ability to predict the rise and fall of the Nile. To the public this predictive power was shrouded in the language of the religious caste’s ability to commune with the gods, all the while the priests were secretly using the much more prosaic technology of nilometers hidden underground.

Who are the priests of the current moment? Bridle makes a good case that it’s the “three letter agencies”, the NSA, MI5 and their ilk, that are the priests of our age. It’s in the interest of these agencies, born in the militarized atmosphere of the Cold War and the War on Terror, that the logic of radical transparency continues to unfold- where the goal is to see all and to know all.

Who knows how vulnerable these agencies have made our communications architecture in trying to see through it? Who can say, Bridle wonders, if the strongest encryption tools available haven’t already been made useless by some genius mathematician working for the security state? And here is the cruel irony of it all: the agencies whose quest is to see into everything are completely opaque to the publics they supposedly serve. There really is a “deep state”, though given our bizarro-land media landscape our grappling with it quickly gives way to conspiracy theories and lunatic cults like QAnon.

Conspiracy

The hunt for conspiracy stems from the understandable need of the human mind to simplify. It is the search for clear agency where all we can see are blurred lines. Ironically, believers in conspiracy hold more expansive ideas of power and freedom than those who explain the world in terms of “social forces” or other necessary abstractions. For the conspiracist the world is indeed controllable; it’s just that those doing the controlling happen to be terrifying. None of this makes conspiracy anything but an unhinged way of dealing with reality, just a likely one whenever a large number of individuals feel the world is spinning out of control.

The internet ends up being a double boon for conspiracist politics because it both fragments the shared notion of reality that existed in the age of print and mass media and allows individuals who fall under some particular conspiracy’s spell to find one another and validate their own worldview. Yet it’s not just fragmented media and the rise of filter bubbles that plague us, but a kind of shattering of our sense of reality itself.

Concurrency

It is certainly a terrible thing that our current communications and media landscape has fractured into digital tribes, with the gap of language and assumptions between us seemingly unbridgeable, and emotion-driven political frictions resulting in what some have called “a cold civil war.” It’s perhaps even more terrifying that this same landscape has spontaneously given rise to a kind of disturbed collective unconscious that is amplified, and sometimes created, by AI into what amounts to the lucid dreams of a madman that millions of people, many of them children, experience at once.

Youtube isn’t so much a television channel as it is a portal to the twilight zone, where one can move from videos of strangers compulsively wrapping and unwrapping products to cartoons of Peppa Pig murdering her parents. Like its sister “tubes” in the porn industry, Youtube has seemingly found a way to jack straight into the human lizard brain. As is the case with slot machines, the system has been designed with addiction in mind, only the trick here is to hook into whatever tangle of weirdness or depravity exists in the individual human soul- and pull.

The even crazier thing about these sites is that the majority of viewers, and perhaps soon creators, are not humans but bots. As Bridle writes:

“It’s not just trolls, or just automation; it’s not just human actors playing out an algorithmic logic, or algorithms mindlessly responding to recommendation engines. It’s a vast and almost completely hidden matrix of interactions between desires and rewards, technologies and audiences, tropes and masks.”

Cloud

Bridle thinks one thing is certain: we will never again return to the feeling of being completely in control, and the very illusion that we can be, if only we had the right technical tweak, or the right political leader, is perhaps the greatest danger of our new dark age.

In a sense we’re stuck with complexity, and it’s this complex human/machine artifice, which has emerged without anyone having deliberately built it, that is the source of all the ills he has detailed.

The historian George Dyson recently composed a very similar diagnosis. In his essay Childhood’s End Dyson argued that we are leaving the age of the digital and going back to the era of the analog. He didn’t mean that we’d shortly be cancelling our subscriptions to Spotify and rediscovering the beauty of LPs (though plenty of us are doing that), but something much deeper. Rather than being a model of the world’s knowledge, Google, in some sense, now was the world’s knowledge. Rather than representing the world’s social graph, FaceBook now was the world’s social graph.

The problem with analog systems when compared to digital ones is that they are hard to precisely control, and thus are full of surprises, some of which are far from good. Our quest to assert control over nature and society hasn’t worked out as planned. According to Dyson:

“Nature’s answer to those who sought to control nature through programmable machines is to allow us to build machines whose nature is beyond programmable control.”

Bridle’s answer to our situation is to urge us to think, precisely the kind of meditation on the present that he has provided with his book. It’s not as wanting a solution as one might suppose, and for me it had clear echoes of the perspective put forward by Roy Scranton in his book Learning to Die in the Anthropocene, where he wrote:

“We must practice suspending stress-semantic chains of social exhaustion through critical thought, contemplation, philosophical debate, and posing impertinent questions…

We must inculcate ruminative frequencies in the human animal by teaching slowness, attention to detail, argumentative rigor, careful reading, and meditative reflection.”

I’m down with that. Yet the problem I had with Scranton is ultimately the same one I had with Bridle. Where is the politics? Where is human agency? For it is one thing to say that we live in a complex world roamed by “hyperobjects” we at best partly control, but it is quite another to discount our capacity for continuous intervention, especially our ability to “act in concert”, that is politically, to steer the world towards desirable ends.

Perhaps what the arrival of a new dark age means is that we’re regaining a sense of humility. Starting about two centuries ago human beings got it into their heads that they had finally gotten nature under their thumb. What we are learning in the 21st century is not only that this view was incorrect, but that the human-made world itself seems to have a mind of its own. What this means is that we’re likely barred forever from the promised land, condemned to a state of adaptation and response to nature’s cruelty and human injustice, which will only end with our personal death or the extinction of the species, and yet still contains all the joy and wonder of what it means to be a human being cast into a living world.

 

Crushing the Stack

If in The Code Economy Philip Auerswald managed to give us a succinct history of the algorithm, he left us with code that floats like a ghost in the ether, lacking any anchor in our very much material, economic and political world. Benjamin Bratton tries to bring us back to earth. Bratton’s recent book, The Stack: On Software and Sovereignty, provides us with a sort of schematic with which we can grasp the political economy of code and thus anchor it to the wider world.

The problem is that Bratton, unlike Auerswald, has given us this schematic in the almost impenetrable language of postmodern theory, beyond the grasp of even educated readers. This matters, for as Ian Bogost pointed out in his review of The Stack: “The book risks becoming a tome to own and display, rather than a tool to use.” This is a shame, because the public certainly is in need of maps through which it can understand, and seek to control, the computational infrastructure that is now embedded in every aspect of our lives, including, and perhaps especially, in our politics. The failure to understand and democratically regulate such technology leaves society subject to the whims of the often egomaniacal and anti-democratic nerds who design and run such systems.

In that spirit, I’ll try my best below to simplify The Stack into a map we can actually understand and therefore might be inclined to use.

In The Stack Bratton observes that we have entered the era of what he calls “planetary scale computation.” Our whole global system for processing and exchanging information, from undersea fiber-optic cables, satellites, cell-phone towers, and server farms to corporate and personal computers and our ubiquitous smartphones, he sees as “an accidental megastructure” that we have cobbled together without really understanding what we are building. Bratton’s goal, in a sense, is to map this structure by treating it as a “stack”, dissecting it into what he hopes are clearly discernible “layers.” There are six of these: Earth, Cloud, City, Address, Interface and User.

It is the Earth layer that I find both the most important and the most often missed when it comes to discussions of the political economy of code. Far too often the Stack is represented as something literally virtual, disconnected from the biosphere in a way that the other complex artificial systems upon which we have come to depend, such as the food system or the energy system, could never be as a matter of simple common sense. And yet the Stack, just like everything else human beings do, depends upon and affects the earth. As Bratton puts it in his Lovecraftian prose:

The Stack terraforms the host planet by drinking and vomiting its elemental juices and spitting up mobile phones. After its short career as a little computing brick within a larger megamachine, its fate at the dying end of the electronics component life cycle is just as sad. What is called “electronic waste” inverts the process that pulls entropic reserves of metal and oil from the ground and given form, and instead partially disassembles them and reburies them, sometimes a continent away and sometimes right next door. (p.83)

The rare earth minerals upon which much of modern technology depends come at the cost of environmental degradation and even civil war, as seen in the Democratic Republic of Congo. Huge areas of the earth are now wastelands festooned with the obsolescent silicon of our discarded computers and cell phones picked over by the world’s poorest for whatever wealth might be salvaged.

The Stack consumes upwards of 10 percent of the world’s energy, an amount that is growing despite the major tech players’ efforts to diminish its footprint by relocating servers to the Arctic and, perhaps soon, under the sea, although gains in efficiency have, at least temporarily, slowed the rate of growth in energy use.

The threat to the earth from the Stack, as Bratton sees it, is that its ever-growing energy and material requirements will end up destroying the carbon-based life that created it. It’s an apocalyptic scenario that is less fanciful than it sounds, for the Stack is something like the nervous system of the fossil-fuel-based civilization we have built. Absent our abandonment of that form of civilization, we really will create a world that is only inhabitable by machines and machine-like life forms such as bacteria. WALL-E might have been a prophecy and not just a cartoon.

Yet Bratton also sees the Stack as our potential savior, or at least the only possible way, short of a massive die-off of human beings, to get out of this jam. A company like Exxon Mobil, with its dependence on satellites and supercomputers, is only possible with the leverage of the Stack, but then again so is the IPCC.

For the Stack allows us to see nature, to have the tools to monitor, respond to, and perhaps even interfere with the processes of nature, many of which the Stack itself is throwing out of kilter. The Stack might even give us the possibility of finding an alternative source of power and construction for itself, one that is compatible with our own survival along with that of the rest of life on earth.

After the Earth layer comes the Cloud layer. It is here that Bratton expands upon the ideas of Carl Schmitt, a jurist under the Nazi regime whose ideas about the international order have become popular among many on the left, at least since the US invasion of Iraq in 2003, not as a prescription but as a disturbingly prescient description of American politics and foreign policy in the wake of 9/11.

In his work The Nomos of the Earth Schmitt critiqued the American-dominated international order that had begun with the US entry into WWI and reigned supreme during the Cold War. It was, he argued, a type of order that had, by slipping free of the anchor of national sovereignty bound to clearly defined territories, set the world on a course of continuous interventions by states into each other’s domestic politics, leading to a condition of permanent instability and the threat of total war.

Bratton updates Schmitt’s ideas for our era, in which the control of infrastructure has superseded the occupation of territory as the route to power. Control over the nodes of global networks, where assets are measured not in square miles but in underwater cables, wireless towers, and satellites, demands a distributed form of power, and hence helps explain the rise of multinational corporations to their current importance.

In terms of the Stack, these are the corporations that make up Bratton’s Cloud layer, which includes not only platforms such as Google and FaceBook, but the ISPs controlling much of the infrastructure upon which these companies (despite their best efforts to build such infrastructure themselves) continue to depend.

Bratton appears to see current geopolitics as a contest between two very different ideas regarding the future of the Cloud: the globalist vision found among Silicon Valley companies, which aims to abandon the territorial limits of the nation-state, and the Chinese model, which seeks to align the Cloud to the interests of the state. The first skirmish of this war, Bratton notes, was what he calls the Sino-Google War of 2009, in which Google, under pressure from the Chinese government to censor its search results, eventually withdrew from the country.

Unfortunately for Silicon Valley, along with those hoping we were witnessing the last gasp of the nation-state, not only did Google lose this war, it has recently moved to codify the terms of its surrender, while at the same time we have witnessed both a global resurgence of nationalism and the continuing role of the “deep state” in forcing the Cloud to conform to its interests.

Bratton also sees in the platform capitalism enabled by the Cloud the shape of a possible socialist future- a fulfillment of the dreams of rational, society-wide economic planning anticipated by the USSR’s Gosplan and Project Cybersyn in pre-Pinochet Chile. The Stack isn’t the only book covering this increasingly important and interesting beat.

After the Cloud layer comes the City layer. It is in cities that the density of human population makes the technologies of the Stack most apparent. Cities, after all, are thick agglomerations of people and goods in motion, all of them looking for the most efficient path from point A to point B. Cities are composed of privatized spaces, innumerable walls that dictate entry and exit. They are the perfect laboratory for the logic and tools of the Stack.

We recognize the city he describes as filled with suspicious responsive environments, from ATM PINs, to key cards and parking permits, e-tickets to branded entertainment, personalized recommendations from others who have purchased similar items, mobile social network transparencies, GPS-enabled monitoring of parolees, and customer phone tracking for retail layout optimization.  (p. 157)

Following the City layer we find the Address layer. In the Stack (or at least in the version of it dreamed up by salesmen for the Internet of Things), everything must have a location in the network, an address through which it can be connected to other persons and things. Something that lacks an address in some sense doesn’t exist for the Stack; an unconnected object or person fails to be a repository for the information on which the Stack itself feeds.

We’ve only just entered the era in which our everyday objects speak to one another and in the process can reveal information we might have otherwise hidden about ourselves. What Bratton finds astounding is that in the Address layer we can see that the purpose of our communications infrastructure has become not for humans to communicate with other humans via machines, but for machines to communicate with other machines.

The next layer is that of the Interface. It is the world of programs and apps, which for most of us is the closest we get to code. Bratton says it better:

What are Apps? On the one hand, Apps are software applications and so operate within something like an application layer of a specific device-to-Cloud economy. However, because most of the real information processing is going on in the Cloud, and not in the device in your hand, the App is really more an interface to the real applications hidden away in data centers. As an interface, the App connects the remote device to oceans of data and brings those data to bear on the User’s immediate interests; as a data-gathering tool, the App sends data back to the central horde in response to how the User makes use of it. The App is also an interface between the User and his environment and the things within it, by aiding in looking, writing, subtitling, capturing, sorting, hearing, and linking things and events. (p.142)

The problem with apps is that they offer up an extremely narrow window on the world. Bratton is concerned about the political and social effects of such reality compression, a much darker version of Eli Pariser’s “filter bubble”, where the world itself is refracted into a shape that conforms to the individual’s particular fetishes, shattering a once shared social world.

The rise of filter bubbles is the first sign of a reality crisis that Bratton thinks will only get worse with the perfection of augmented reality- there are already AR tours of the Grand Canyon that seek to prove creationism is true.

The Stack’s final layer is that of the User. Bratton here seems mainly concerned with expanding the definition of who or what constitutes one. There’s been a lot of hand-wringing about the use of bots since the 2016 election; California has even passed legislation to limit their use. Admittedly, these short, relatively easy-to-make programs that allow automated posts or calls are a major problem. Hell, over 90% of the phone calls I receive are now unsolicited robocalls, and given that I know I am not alone in this, such spam might just kill the phone call as a means of human communication- ironically, the very reason we have cellphones in the first place.

Yet bots have also become the source of what many of us would consider not merely permissible, but desirable speech. It might upset me that countries like Russia and Saudi Arabia are vociferous users of bots to foster their interests among English-speaking publics, or that scammers use bots to pick people’s pockets, but I actually like the increasing use of bots by NGOs whose missions I support.

Bratton thus isn’t crazy for suggesting we give the bots some space in the form of “rights”. Things might move even further in this direction as bots become increasingly sophisticated and personalized. Few would go as far as Jamie Susskind does in his recent book Future Politics in suggesting we might replace representative government with a system of liquid democracy mediated by bots, one in which bots make political decisions for individuals based on the citizen’s preferences. But, here again, the proposal isn’t as ridiculous or reactionary as it might sound.

Given some issue to decide upon, my bot could scan the positions taken on that issue by organizations and individuals I trust. “My” votes on environmental policy could reflect some weighted measure of the views of the World Wildlife Fund, Bill McKibben and the like, meaning I’d be more likely to cast an informed vote than if I had pulled the lever on my own. This is not to say that I agree with this form of politics, or even believe it to be workable. Rather, I merely think that Bratton might be on to something here: that a key question in the User layer will be the place of bots- for good and ill.
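To make the idea concrete, here is a minimal sketch of such a vote-delegating bot. It is purely hypothetical- neither Susskind nor Bratton offers an implementation- and the sources, weights, and positions below are invented for illustration only.

```python
# Hypothetical sketch of a "liquid democracy" bot: it casts a vote by taking
# a weighted measure of the positions of sources its owner trusts.

def bot_vote(positions, trust_weights, threshold=0.0):
    """Aggregate trusted sources' positions (+1 for, -1 against, 0 abstain)
    into a single yes/no vote using the owner's trust weights."""
    score = 0.0
    total = 0.0
    for source, position in positions.items():
        weight = trust_weights.get(source, 0.0)  # untrusted sources count for nothing
        score += weight * position
        total += weight
    if total == 0:
        return "abstain"
    return "yes" if score / total > threshold else "no"

# Example: an environmental-policy ballot measure (all values invented).
positions = {"World Wildlife Fund": +1, "Bill McKibben": +1, "Chamber of Commerce": -1}
trust_weights = {"World Wildlife Fund": 0.5, "Bill McKibben": 0.4, "Chamber of Commerce": 0.1}
print(bot_vote(positions, trust_weights))  # -> "yes"
```

Everything of political substance, of course, lives in the trust weights, which is precisely why delegating the lever to a bot is less innocent than it sounds.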

The Stack, as Bratton has described it, is not without its problems, and thus he ends his book with proposals for how we might build a better one. We could turn the Stack into a tool for the observation and management of the global environment. We could give Users design control over the interfaces that now dictate their lives, including the choice to enter and exit when they choose, a right that should be extended to movement between states as well. We could use the power of platforms to revive something like centrally planned economies and their dream of eliminating waste and scarcity. We could harness the capacity of the Interface layer to build a world of plural utopias, and extend and articulate the rights and responsibilities of users in a world full of bots.

Is Bratton right? Is this the world we are in, or at least headed towards? For my money, I think he gets some things spectacularly right, such as his explanation of the view of climate change within the political right:

“For those who would prefer neo-Feudalism and/or tooth-and-nail libertarianism, inaction on climate change is not denialism, rather it is action on behalf of a different strategic conclusion.” (p.306)

Yet elsewhere I think his views are not only wrong, but sometimes contradictory. I think he largely misses how the Stack is in large part a product of American empire. He therefore misinterprets the 2009 spat between Google and China as a battle between two models of future politics, rather than seeing the current splintering of the internet for what it is: the emergence of peer competitors in the arena of information, over which the US has for so long been a hegemon.

Bratton is also dismissive of privacy and enraptured by the Internet of Things in a way that can sometimes appear pollyannaish. After all, privacy isn’t just some antiquated right, but one of the few ways to keep hackable systems secure. That he views the IoT as something inevitable and almost metaphysical, rather than the mere marketing it so often is, leads me to believe he really hasn’t thought through what it means to surround ourselves with computers- that is to make everything in our environment hackable. Rather than being destined to plug everything into everything else, we may someday discover that this is not only unnecessary and dangerous, but denotes a serious misunderstanding of what computation is actually for.

Herein lies my main problem with The Stack: though radically different from Yuval Harari, Bratton too seems to have drunk the Silicon Valley Kool-Aid. The Stack takes as its assumption that the apps flowing out of the likes of FaceBook and Google, and the infrastructure behind them, are not merely of world-historical, but of cosmic import. Matter is rearranging itself into a globe-spanning intelligence from unlikely seeds, like a Harvard nerd who wanted a website to rate hot chicks. I just don’t buy it.

What I do buy is that the Stack as a concept, or something like it, will be a necessary tool for negotiating our era, in which the borders between politics and technology have become completely blurred. One can imagine a much less loquacious and more reality-based version of Bratton’s book that used his layers to give us a better grasp of this situation. In the Earth layer we’d see the imperialism behind the rare-earth minerals underlying our technology, the massive Chinese factories like those of Foxconn, and the way in which earth-destroying coal continues to be the primary energy source for the Stack.

In the Cloud layer we’d gain insight into server farms and monopolistic ISPs such as Comcast, and come to understand the fight over Net Neutrality. We’d be shown the contours of the global communications infrastructure and the way in which these are plugged into and policed by government actors such as the NSA.

In the City layer we’d interrogate the idea of smart cities, the automation of inequality, and the digitization of citizenship, along with exploring the role of computation in global finance. In the Address layer we’d uncover the scope of logistics and find out how platforms such as Amazon work their magic, ask whether it really was magic or just parasitism, and consider how we might use these insights for the public good, whether that meant nationalizing the platforms or breaking them into pieces.

In the User layer we’d take a hard look at the addictive psychology behind software, and at the owners and logic behind well-known companies such as FaceBook along with less well-known ones such as MindGeek. Such an alternative version of The Stack would not only better inform us as to what the Stack is, but suggest what we might actually do to build ourselves a better one.

 

The Evolution of Chains

“Progress in society can be thought of as the evolution of chains….

 Phil Auerswald, The Code Economy

“I would prefer not to.”

Herman Melville, Bartleby the Scrivener

The 21st century has proven to be all too much like living in a William Gibson novel. It may be sad, but at least it’s interesting. What gives the times this cyberpunk feel isn’t so much the ambiance (instead of the dark cool of noir we’ve got frog memes and LOLcats) as the fact that absolutely every major happening, and all of our interaction with the wider world, has something to do with the issue of computation, of code.

Software has not only eaten the economy, as Marc Andreessen predicted back in 2011, but also culture, politics, and war. To understand both the present and the future, then, it is essential to get a handle on what exactly this code that now dominates our lives is, to see it in its broader historical perspective.

Phil Auerswald managed to do something of the sort with his book The Code Economy: A Forty-Thousand-Year History. It’s a book that tries to take us to the very beginning, to chart the long march of code to sovereignty over human life. The book defines code (this will prove part of its shortcomings) broadly as a “recipe”, a standardized way of achieving some end. Looked at this way, human beings have been a species dominated by code since our beginnings, that is, since the emergence of language, but there have been clear leaps closer to the reign of code along the way, with Auerswald seeing the invention of writing as the first. Written language, it seems, was the invention of bureaucrats, a way for the tribal scribes of ancient Sumer to keep track of temple tributes and debts. And though it soon broke out of these chains and proved a tool as much for building worlds and wonders like the Epic of Gilgamesh as a radical new means of power and control, code has remained linked to the domination of one human group over another ever since.

Auerswald is mostly silent on the mathematical history of code before the modern age. I wish he had spent time discussing predecessors of the computer such as the abacus, the Antikythera mechanism, clocks, and especially the astrolabe. Where exactly did this idea of mechanizing mathematics actually emerge, and what are the continuities and discontinuities between different forms of such mechanization?

Instead, Gottfried Leibniz, who envisioned the binary arithmetic that underlies all of our modern-day computers, is presented as if he had just fallen out of the matrix. Though Leibniz’s genius does seem almost inexplicable and sui generis: a philosopher who arguably beat Isaac Newton to the invention of calculus, he was also an inventor of ingenious calculating machines and an almost real-life version of Nostradamus. In a letter to Christiaan Huygens in 1670 Leibniz predicted that with machines using his new binary arithmetic: “The human race will have a new kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes.” He was right, though it would be nearly 300 years until these instruments were up and running.
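As a small illustration of what that instrument rests on (my sketch, not Auerswald’s): every number can be written using only 0s and 1s, and addition then reduces to a handful of mechanically repeatable rules on those digits, which is what made binary so suited to machines.

```python
# Minimal illustration of binary arithmetic: representation and addition
# reduced to simple, repeatable digit-level rules.

def to_binary(n):
    """Represent a non-negative integer as a string of 0s and 1s."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))

def add_binary(a, b):
    """Add two binary strings digit by digit, carrying just as on paper."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(to_binary(13), to_binary(6))              # 1101 110
print(add_binary(to_binary(13), to_binary(6)))  # 10011, i.e. 19
```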

Something I found particularly fascinating is that Leibniz found a premonition of his binary number system in the Chinese system of divination, the I Ching, which had been brought to his attention by Father Bouvet, a Jesuit priest then living in China. It seems that from the beginning the idea of computers, and the code underlying them, has been wrapped up with the human desire to know the future in advance.

Leibniz believed that the widespread adoption of binary would make calculation more efficient and thus mechanical calculation easier, but it wouldn’t be until the industrial revolution that we actually had the engineering to make his dreams a reality. Two developments especially aided the emergence of what we might call proto-computers in the 19th century: the division of labor identified by Adam Smith, and Jacquard’s loom. Much like the first lurch toward code that came with the invention of writing, this move was seen as a tool that would extend the reach and powers of bureaucracy.

Smith’s idea of how efficiency was to be gained by breaking down complex tasks into simple, easily repeatable steps served as the inspiration for applying the same methods to computation itself. Faced with the prospect of not having enough trained mathematicians to create the extensive logarithmic and trigonometric tables upon which modern states and commerce were coming to depend, innovators such as Gaspard de Prony in France hit upon the idea of breaking down complex computation into simple tasks. The human “computer” was born.
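De Prony’s table project famously organized its human computers around the method of differences, by which an entire table of a polynomial can be produced using nothing but repeated additions- exactly the sort of simple step an unskilled “computer” could perform all day. A minimal sketch of that decomposition (mine, not Auerswald’s):

```python
# Method of differences: after a short setup, each new table entry costs
# only a few additions -- the kind of step a human "computer" could repeat.

def difference_table(poly, start, step, count):
    """Tabulate a polynomial (poly[i] is the coefficient of x**i)."""
    degree = len(poly) - 1

    def f(x):  # evaluate the polynomial directly, only for the seed values
        return sum(c * x**i for i, c in enumerate(poly))

    # Turn the first degree+1 values into [f(x0), Δf(x0), ..., Δ^d f(x0)].
    diffs = [f(start + i * step) for i in range(degree + 1)]
    for level in range(1, degree + 1):
        for i in range(degree, level - 1, -1):
            diffs[i] -= diffs[i - 1]

    # Generate the table: each new entry uses only `degree` additions.
    table = []
    for _ in range(count):
        table.append(diffs[0])
        for i in range(degree):
            diffs[i] += diffs[i + 1]
    return table

# Example: tabulate x**2 + x + 41 for x = 0..9 using only additions.
print(difference_table([41, 1, 1], start=0, step=1, count=10))
# -> [41, 43, 47, 53, 61, 71, 83, 97, 113, 131]
```

The same trick is what Babbage’s Difference Engine was later designed to mechanize.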

It was the mechanical loom of Joseph Marie Jacquard that in the early 1800s proved that humans themselves weren’t needed once the creation of a complex pattern had been reduced to routine tasks. It was merely a matter of drawing the two, rote human computation and machine-enabled pattern making, together to show the route to Leibniz’s new instrument for thought. And it was Charles Babbage, along with his young assistant Ada Lovelace, who seemed to grasp the implications, beyond data crunching, of building machines that could “think”, who would do precisely this.

Babbage’s Difference and Analytical Engines, however, remained purely things of the mind. Early industrial-age engineering had yet to catch up with Leibniz’s daydreams. Society’s increasing needs for data instead came to be served by a growing army of clerks along with simple adding machines and the like.

In an echo of the Luddites, who rebelled against the fact that their craft was being supplanted by the demands of the machine, at least some of these clerks must have found knowledge reduced to information processing dehumanizing. Melville’s Bartleby the Scrivener gives us a portrait of a man who’s become like a computer stuck in a loopy glitch that eventually leads to his being scrapped.

As an interesting aside, Auerswald doesn’t really explore the philosophical assumptions behind the drive to make the world computable. Melville’s Bartleby, for example, might have been used as a jumping-off point for a discussion of how both the computer and the view of human beings as a sort of automata meant to perform a specific task emerged out of a thoroughly deterministic worldview. That view, after all, was the main target of Melville’s short story, in which a living person has come to resemble a sort of flawed windup toy, and which makes direct reference to religious and philosophical tracts arguing in favor of determinism; namely, the firebrand preacher Jonathan Edwards’ On the Will and the chemist and utopian Joseph Priestley’s Doctrine of Philosophical Necessity.

It is the effect of the development of machine-based code on these masses of 19th and early 20th century white-collar workers, rather than the more commonly covered ground of automation’s effect on the working class, that most interests Auerswald. And he has reasons for this, given that the current surge of AI seems to most threaten low-level clerical workers rather than blue-collar workers, whose jobs have already been automated or outsourced, and given that the menial occupations that have replaced jobs in manufacturing require too much dexterity for even the most advanced robots.

As Auerswald points out, fear of white-collar unemployment driven by the automation of cognitive tasks is at least as old as the invention of the Burroughs calculating machine in the 1880s. Yet rather than lead to a mass of unemployed clerks, the adoption of adding machines, typewriters, and dictaphones only increased the number of people needed to manage and process information. That is, the deeper penetration of code into society, or the desire for society to conform to the demands of code, placed even higher demands on both humans and machines. Perhaps that historical analogy will hold for us as well; Auerswald doesn’t seem to think that this is a problem.

By the start of the 20th century we still weren’t sure how to automate computation, but we were getting close. In an updated version of de Prony, the meteorologist Lewis Fry Richardson showed how we could predict the weather armed with a factory of human computers. There are echoes of both to be seen in the low-paid laborers working for platforms such as Amazon’s Mechanical Turk, who still provide the computation behind the magic act that is much of contemporary AI.

Yet it was World War II and the Cold War that followed that would finally bootstrap Leibniz’s dream of the computer into reality. The names behind this computational revolution are now legendary: Vannevar Bush, Norbert Wiener, Claude Shannon, John Von Neumann, and especially Alan Turing.

It was these figures who laid the cornerstone of what George Dyson has called “Turing’s Cathedral”, for like the great medieval cathedrals, the computational megastructure in which all of us are now embedded was the product of generations of people dedicated to giving substance to the vision of its founders. Unlike the builders of Notre Dame or Chartres, who labored ad majorem Dei gloriam, those who constructed Turing’s Cathedral were driven by ideas regarding the nature of thought and the power and necessity of computation.

Surprisingly, this model of computation created in the early 20th century by Turing and his fellow travelers continues to underlie almost all of our digital technology today. It’s a model that may be reaching its limits, and may need to be replaced with a more environmentally sustainable form of computation, one actually capable of helping us navigate and find the real structure of information in a world whose complexity we are finding so deep as to be intractable, and in which our own efforts at mastery only result in a yet harder to solve maze. It’s a story Auerswald doesn’t tell, and it must wait until another time.

Rather than explore what computation itself means, or what its future might be, Auerswald throws wide open the definition of what constitutes a code. As Erwin Schrodinger observed in his prescient essay What is Life?, and as we later learned with the discovery of DNA, a code does indeed stand at the root of every living thing. Yet in pushing the definition of code ever farther from its origins in the exchange of information and detailed instructions on how to perform a task, something is surely lost. Thus Auerswald sees not only computer software and DNA as forms of code, but also the work of the late celebrity chef Julia Child, the McDonald’s franchise, WalMart, and even the economy itself.

As I suggested at the beginning of this essay, there might be something to be gained by viewing the world through the lens of code. After all, code of one form or another is now the way in which most of the largely private, and remaining government, bureaucracies that run our lives function. The sixties radicals terrified that computers would be the ultimate tool of bureaucrats guessed better where we were headed. “Do not fold, spindle or mutilate.” Right on. Code has also become the primary tool through which the system is hacked- forced to buckle and glitch in ways that expose its underlying artificiality, freeing us for a moment from its claustrophobic embrace. Unfortunately, most of its attackers aren’t rebels fighting to free us, but barbarians at the gates.

Here is where Auerswald really lets us down. Seeing progress as tied to our loss of autonomy and the development of constraints, he seems to think we should just accept our fetters, now made out of 1’s and 0’s. Yet within his broad definition of what constitutes code, it would appear that at least the creation of laws remains in our power. At least in democracies the law is something that the people collectively make. Indeed, if any social phenomenon can be said to be code-like it would be law. It is surprising, then, that Auerswald gives it no mention in his book. Or rather, it’s not really surprising at all.

You see, there’s an implicit political agenda underneath Auerswald’s whole understanding of code, or at least a whole set of political and economic assumptions: assumptions regarding the naturalness of capitalism, the beneficial nature of behemoths built on code such as WalMart, the legitimacy of global standard-setting institutions and protocols, and the dominance of platforms over the other “trophic layers” in the information economy that provide the actual technological infrastructure on which these platforms run. Perhaps not even realizing that these are just assumptions rather than natural facts, Auerswald never develops or defends them. Certainly, code has its own political economy, but to see it we will need to look elsewhere. Next time.

 

Citizenship as Fob Key

One of the key conceptual difficulties faced by those grappling with the resurgence of nationalism today is holding fast to the recognition that this return of the nation-state occurs in an already globalized world. In other words, this isn’t our grandparents’ nationalism we are confronting but something quite new, and forgetting that fact leads to all kinds of intellectual mistakes, most notably imagining that the ghosts of the likes of Hitler, Tojo, and Mussolini have risen from the grave like Halloween ghouls.

The question we should be asking is what does nationalism in an era of globalization actually look like? And perhaps to answer this question it’s better to zoom in rather than zoom out- to focus on the individual rather than the geopolitical. The question then becomes: what does it mean to live in a globalized world where the state, and membership in the group it represents, rather than “withering away”, is becoming not merely more important, but something no individual can effectively function without?

A good bit of this outline can be found in Atossa Araxia Abrahamian’s excellent book Cosmopolites: The Coming of the Global Citizen. Yet, despite the title, what Abrahamian depicts there is less some emergent citizen of the world than a new global regime in which citizenship has been transformed from a sense of belonging and moral commitment into a means of access to rights, benefits, protections, and perhaps above all the freedom of movement that comes with the correct passport- all of which are provided by the state. In other words, citizenship has become a commodity of great value, which, like everything else nowadays, can be bought, sold, and most disturbingly, repossessed.

Cosmopolites is especially focused on the plight of the Bidoon, a stateless people found throughout the Gulf, most notably in Kuwait and the United Arab Emirates. Given the extensive benefits that come with being a citizen of those states, their governments have been loath to extend equal status to their Bidoon populations, even when individuals can trace their roots in the area into the deep past.

Ironically, statelessness itself gives rise to protections; namely, governments are unable to expel stateless people unless some other country is willing to take them in. It was under these conditions that a sophisticated racket arose, which involved purchasing in bulk citizenship for the Bidoon from a far-off island nation near Madagascar, one which doubtless none of the Bidoon residents in the UAE or Kuwait had ever heard of- Comoros.

It’s the kind of brilliant scheme that would make a great plot for a Coen Brothers film: strip unrecognized populations of any claims they could make as de facto citizens by using your oil money to buy them such status in an impoverished country with little else to sell- a failing state to which they could be deported should they become either economically superfluous or an actual political and social nuisance.

The story of the Bidoon is one where the recipients didn’t so much buy citizenship as were forced to accept the “gift” of what was not merely a useless passport but one that put them at risk of being kicked out of the only country they had ever known, along with all they had built within it. Yet the experience of the Bidoon is only one version of the booming market that is the passport trade. As Abrahamian documents it, passports can be bought from countries on all corners of the globe and at wildly variable rates (though none of them easily accessible to the middle class, let alone the world’s poor). Dominica sells its passports for a “paltry” $200,000, while a high-end Austrian passport will set you back millions of euros.

Why do the global rich shell out for such papers? Abrahamian sums it up nicely this way:

People with “good” passports don’t think about them much. But people with “bad” passports think about them a great deal. To the wealthy, this is particularly insulting: A bad passport is like a phantom limb that won’t stop tingling no matter how much money, power, or success they’ve accumulated- a constant reminder that the playing field is never truly level, and that the life for your average Canadian billionaire will be easier than that for a billionaire from Botswana or Peru. (73)

In today’s world, a Swiss passport is among the most valuable possessions on earth, giving its holder the right to travel in nearly 90 percent of the world’s countries and to live in one of its most prosperous and stable states, not to mention the fringe benefit of living close to one’s cash, Switzerland being one of the top three tax havens on the planet.

Notice that I’ve said nothing yet about citizenship beyond the instrumental. It’s all about where one can live, travel or store wealth. Surely, we associate citizenship with something beyond that: not merely the right, but the obligation, to take political responsibility for the decisions of the community one belongs to through acts such as voting, protest, or holding political office. Citizenship entails some commitment on the part of the individual to the past, present and future of the community in which she lives. It is the seat of rights, but also gives rise to obligations, sacrifices in the name of the greater good, which may even demand that an individual risk her life in the name of its defense.

We might be confused into thinking that citizenship in this latter sense, as a bundle of rights and obligations that tie an individual to a particular community, has always existed. We would be mistaken. Citizenship as a way of relating to the world is, in its modern incarnation, no older than the French Revolution. Other than that, it has existed here and there in fleeting moments of freedom- most famously in the city-states of the ancient Greeks- only to be supplanted by notions of empire or religious kinship.

Yet we’d be wrong to think that such non-citizenship-based political and social orders lacked any notion of what we’d recognize as “rights”. It’s just that those rights, conferred on subjects, did not entail actual control of or responsibility for the fate of society itself on the part of the common individual. At least in the case of religiously based social orders, belonging was tightly connected to the obligation of the rich to care for the poor. Think of medieval Europe with its numerous public charities and hospitals, a classless bonding of which we still have echoes at Christmas time and which is alive and well in modern-day Islam.

Abrahamian herself seems to share some affinity with the kinds of post-citizenship that emerged in the ancient world after the decline of the Greek city-states, thanks to the Cynic Diogenes and later the Stoics. In the face of universal empire these pre-Christian thinkers imagined a type of citizenship freed from the notion of place- kosmopolites- “citizens of the universe.” What our circumstance calls for, on this view, is a similar conceptual leap: for citizenship defined as rights to be decoupled from the nation-state and extended to cover the entire world. Part of me wishes she were right, but it’s hard for me to see any signs that we’re moving towards such a borderless world rather than to a place where the whole purpose of the state has become to act as a sort of monstrous gated community and fortress. Citizenship has become something like the mother of all fob keys: a means of entry, exit, identification and ownership.

As a fob key citizenship is pretty weird, above all because its possession, more often than not, is a product of pure dumb luck. Those, like myself, lucky enough to be born in a country whose inhabitants are gifted with globally valuable keys have been granted them as a mere matter of where they were born. Yet it’s not something I could sell, and it would even be hard for me to separate myself from it were I to try.

It’s certainly a theory with a lot of holes, but perhaps a good deal of current nativist insecurity over immigration can be seen as a fear of fob-key inflation. Nativists want to keep the benefits of being an American all to themselves, a kind of selfishness that blinds them to evil. I’m thinking of obscene proposals coming out of the Trump administration such as taking Green Cards away from people who have accessed public benefits, denying foreign soldiers who have risked their lives in US wars the visas they were promised for doing so, and above all, refusing those fleeing violence abroad the right of refuge they are entitled to under international law. We’ve done even worse than that: we’ve locked their children in cages.

Still, citizenship and its passports are just the meta-fob key for a society that’s come to resemble the opening scene of the 1960s comedy “Get Smart”, where it’s locked steel door after locked steel door all the way down. After the citizen-fob, you find the universal privatization of geography: high-security areas, corporate spaces, gated communities, segregated housing, restricted zoning, and, ever more importantly, algorithmic sorting.

The amazing thing is just how quickly the whole panoply of instruments we use to identify ourselves in relation to some social organization (passport/nationality, driver’s license/state resident, bank card/account holder, etc.) is being moved to the body itself.

Biometrics is a booming business whose whole point is to strictly limit access to some space or good. The end result is that a world that was supposed to be becoming “flat” and global is instead taking on a kind of customized topology based on a hierarchy of access to the whole. And worse, the same technological revolution that enables global travel and communication is being put into the service of an ever more surveilled and managed space.

You don’t need to turn to William Gibson to see just how dystopian a future we could be moving toward. China has turned a whole region- Xinjiang- into a giant panopticon where its Muslim Uyghur population is under constant surveillance and oppression made possible by cell phones, CCTV cameras and AI.

We’re probably less likely to reach a similar destination by a move to dictatorship (fingers crossed) than by worshiping at the altars of safety and convenience. People are already inserting microchips under their skin so they can get through security checks quicker, and Amazon Go already allows customers to pay using their “face”. Even shackles have gone digital.

Probably the smallest identifying trait a person has is their DNA. It’s also more specific to an individual than any other biometric. You or your algorithm might confuse my face with someone else’s, but, given the right tools, you’re unlikely to confuse my genes- even if I had a twin.

The big threat from genetics in this context is that it becomes a fob key that doesn’t just limit the movement of individuals, but is itself turned into a sort of unscalable wall: those with the “right” genes considered part of “us” and therefore eligible for our loyalty, protection or beneficence. At first glance this sounds like a revival of ethnic nationalism, or, even more darkly, the kind of mania about people of the same “blood” we saw with the Nazis. Yet I think this kind of tech-enabled social sorting would more likely give rise to something else- not as evil, but just as dismal.

What DNA screening allows you to do is construct the ultimate dynastic society- tribes constructed out of codons. It’s a perfect fit for the unequal society we now live in, where for most of us everything’s a wall and nothing’s a door. If that’s where we’re headed, I’ll side with Diogenes and stay in my tub.

Our Potemkin Civil War

“The trouble is that this calamity arose not from any lack of civilization, backwardness, or mere tyranny, but, on the contrary, that it could not be repaired, because there was no longer any “uncivilized” spot on earth, because whether we like it or not we have really started to live in One World. Only with a completely organized humanity could the loss of home and political status become identical with expulsion from humanity altogether.” Hannah Arendt, The Decline of the Nation-State and the End of the Rights of Man

Something has gone terribly off track when it comes to the nation-state. It has imploded in regions where it had been artificially imposed, such as the Middle East; it is unraveling in some sections of Europe, such as Spain; and perhaps strangest of all, where it has successfully resurged against the forces of globalization, it has done so within the context of an internationally networked, populist right, where so-called nationalists align themselves with what are less nation-states than racially and religiously based empires, that is, Russia and the United States.

A nation-state exists when there is a near “perfect” correspondence between some ethnic group and the instruments of the state. Reference to the nation has been one way of answering the question: why should I obey the laws? Up until very recently, the state had no way of watching every person all of the time, so short of being absolutely terrifying, as was Hobbes’ suggestion, a state had to find a way to encourage people to willingly follow its dictates.

Doing so because it serves the good of the people of which you are a part proved to be a particularly effective inducement to law-following, but as a type of legitimacy and political regime nationalism isn’t much older than the 19th century. Nationalism emerged in Europe, where prior answers to why an individual should follow the laws included because God or the King says so. Being a criminal might lead to hell or the rack, and maybe even both. The nation replaced the myths of God and king with the myth of common origin and shared destiny.

In the strict sense, the United States has never in its history been a nation-state in the European sense of the word, but has possessed a hybrid form of legitimacy based in varying degrees on Protestantism, claims of white supremacy, and civic nationalism. In the latter, the legitimacy of the state is said to flow not from any particular ethnic group but from shared commitments among plural communities to a certain political form and ideal. Civic nationalism remains the most effective and inclusive form of nationalism to this day. Yet after more than half a century of having rejected both religion and race as the basis for its political identity, and having embraced civic nationalism as its foundation, the US is now flirting with the revival of atavistic forms of political authority in a way that connects it with strange bedfellows in Moscow and a European right wing terrified of Islam.

The nation-state often gets a bad rap, but it has played a historical role well beyond inspiring jingoism and ethnic conflict. Nationalism, as in the quest to establish a nation-state, was one of the forces behind the rise of democracy in the 19th century, served as the well-spring for the liberation of colonized peoples in the 20th century, and formed the basis of social democracy and the welfare-state in post-war Europe.

Civic nationalism has been present in the US since the American Revolution, was fleshed out in times of crisis such as the Civil War, and became by far the dominant form of political legitimacy during the Cold War, as first Protestant Anglo-Saxonism and then white supremacy were supplanted by an American ideal rooted in a Whig notion of history, in which society, building off ideas latent in its founding, became ever more just and inclusive over time.

It’s not just European-style ethnic nationalism, which always posed a danger to those outside the nation-state’s particular ethnic majority, that’s experiencing a crisis of legitimacy, but civic nationalism as well. And this decline of civic nationalism isn’t just happening in the US but also in that other great multi-ethnic democracy, India.

Globalization seems to have undermined all of the positive legacies of nationalism and left us only with its shards, spewing forth monstrosities we thought the post-war world had killed. We seem to be in a genuine crisis of legitimacy in which the question of who the state serves, and above all, who is considered inside and outside the community it represents, has been reopened.

This isn’t the first time that nationalism as a source of authority has seemed to be on its deathbed, nor is it the first time that in its unwinding the definition of what constitutes the nation has become confused and taken on strange and blatantly contradictory forms. During the 1940s, when Hannah Arendt was struggling to understand the first period of the nation-state’s decline, she too noticed how some of the nation’s most vocal defenders were themselves deeply embedded in international networks promoting nationalist ideologies, how the concept of the nation had been mixed up with and supplanted by the idea of race, and how the sprawling security, military, and police services (supposedly staffed by the most loyal patriots) so often coordinated with one another, often to the detriment of the national interest.

Right-wing politics today is arguably more international than the right of the 1930s, or than the current left. And just as in the past, the alliance among the parties of the right is a partnership based on destroying the very connectivity that has made their strange international alliance possible. It is an axis driven by fear of what globalization has enabled, that sees the police state as the solution to a world in flux, and that somehow, perhaps differently than its earlier incarnation, is unable, despite all its bluster, to extract itself from the centripetal force of the global capitalism in which it is embedded.

Here is the point at which the historical analogies with the 1930s inevitably break down. In the 21st century right-wing nationalists aren’t really nationalists in the older sense of the word at all. Instead they are, as Nils Gilman brilliantly points out, the mask worn by a “plutocratic insurgency”: a slow-moving counter-revolution whose mission lies less in national greatness than in bending the system to suit its own interests.

Still, we should count ourselves lucky to be living at the beginning of the 21st century rather than the 20th, for the fascists (they should really have been called hyper-nationalists) of the last century could gain their mass appeal among the middle classes by positioning themselves as the only alternative to a genuinely revolutionary left. No such revolutionary left exists today except in the right-winger’s mind, his hatred directed not at a real challenge to his way of life but at a sort of alternately villainous and pathetic cartoon.

Not only this, the military aspect of nationalism has changed immeasurably since Hitler and his goons in short-pants dreamed of Lebensraum. Wars over physical territory in the name of empire building no longer make sense; instead, what constitutes territory has itself been redefined (more on that in a moment). Modern-day nationalists don’t want to conquer the world but to retreat into their shell, which in the end means turning the state itself into a gargantuan gated community. Trying to understand today’s hyper-nationalists by looking at the fascists of the last century is thus an analytical mistake, for the game being played has radically changed.

Perhaps we could get our historical bearings if we extended our view further back into the past and looked at the way systems of political legitimacy have previously sustained themselves only to eventually unravel.

Communication technology has always been at the center of any system of political legitimacy, because the state needs to communicate not only what the laws are but why they should be followed. The Babylonians had their stelae, and Emperor Ashoka had his pillars, both like billboards out of The Flintstones. The medieval Church in Europe had its great cathedrals, a cosmic and human social order reified in stone and in images emblazoned in stained glass. All of it was a means of communicating to the illiterate masses, at a time when reading and writing were confined to an elite: this is the divine and political order in which you exist and to which you must defer.

The modern era of communication, and thus of political legitimacy, began in Europe in the late 1400s with the widespread adoption of the printing press. We’ve been in an almost uninterrupted period of political instability ever since. Great images and structures were replaced by writing that now appeared and circulated with unprecedented speed. Different forms of argument about what society was, what its borders were, who belonged to it, and what was owed to and from those who belonged to it emerged and fought one another for supremacy, with the boundaries between what constituted a religion, a nationality, or a race often hybridized in ways we now find confusing.

In the late 18th century, European imperialism and African slavery gave rise to what would become the most insidious form of establishing the boundaries of political community- the ideology of white supremacy. Yet in that very same period we saw the emergence of a truly secular and potentially multi-ethnic and democratic form of political legitimacy with the rise of civic-nationalism during the American and French revolutions. (Though it had antecedents in the Italian city-states).

All these forms of political legitimacy, along with antiquarian ones that clung to the authority of church and king, and brand new ones such as communism, would battle it out over the course of the 19th century, with stability lasting only during times of war, when the state would be united by a common project whose sacrifices were often justified in civic-nationalist terms.

We don’t really see a stabilization of this pattern until after the world wars. In Europe, this was partly because many states had been ethnically homogenized by the conflicts and their aftermath; in the US it was because the wars had helped fuse a stronger sense of national cohesion along multi-denominational lines, and because the Cold War that followed compelled the US to abandon white supremacy as a basis of national identity out of fear of Soviet incursions in the developing world.

Thus a kind of tamed ethnic-nationalism in Europe and an aspirationally multi-ethnic civic-nationalism in the United States became the dominant forms of political legitimacy in Western countries. Both coincided with what proved to be a temporary embrace of social democracy rather than the laissez-faire capitalism that had been predominant before the Second World War. Again a move inspired by fear of the Soviet Union and the danger of communist revolution.

And all this intersected with the new mass media, which was able, for the first time, to center the public around not only a shared form of political legitimacy but a shared political project, in the form of a consumer’s republic and the fight against communism. It was an interregnum not destined to last long.

Nevertheless, the liberal consensus and the reordering of political legitimacy around civic-nationalism that emerged out of the New Deal and World War II, which saw the last great act of a strong state in forcing an end to racial segregation in the American South, began to unwind in the late 1960s and has perhaps entered a stage from which it can never be rebound.

The feature that has stood at the heart of civic-nationalism since the time of Machiavelli, that of the citizen at arms, died with the debacle of the Vietnam War. Here began the breakdown of trust between technocratic elites and the public, including distrust in the mass media, with the crisis coming to a head with race riots and Watergate, followed by inflation and the war on drugs. And whereas the left in this period largely abandoned politics for the commune and sexual liberation, the right began the slow process of crawling back into power with the hope of deconstructing the administrative state created in the New Deal. For while the political and economic system had become unstable, the mass of benefits, regulations, and protections provided by the state proved exceedingly difficult to kill so long as politics remained democratic.

It was during what proved to be the crescendo of the liberal consensus, after its failure in the Vietnam War and amid the mobilized alienation of what Nixon called “the silent majority”- the mostly white middle and working classes who had in the interwar years lost much of their ties to ethnic identity, and who felt alienated from the new cultural currents sweeping the left (drug culture, the anti-war movement, feminism, gay rights, and black power)- that the right saw its first major opening since FDR. It could use the gap between the old left and the new left as a lever against the entire edifice of the post-war liberal consensus until the structure came crumbling down.

To do this the right would need to bring into being its own majority-based consensus. Some astute and wealthy members of the right blamed the country’s persistent liberalism on the whole infrastructure of knowledge, communication, and law that liberals had used since the 1930s to weave themselves almost intractably deep into both society and the law. Out of this drive a whole alternative infrastructure of information and knowledge would be created, one that included not only popular outlets that copied the then left-leaning mass media platforms of television and radio, but also institutions for policy production such as think tanks.

Thus we get the politicization of reason, not because the broad sweep of academia, along with the first think tanks that held to the liberal consensus, had been apolitical, but because the instruments of policy formation no longer proceeded under the kinds of shared assumptions that make reason possible to begin with.

All of these factors of broken consensus have increased rather than diminished in the 21st century, as the newly decentralized nature of media production and the disappearance of distance as a factor in political alliance have meant that any worldview, no matter how niche and bizarre, now has a platform and is able to forge a constituency. Included in these worldviews are all sorts of atavistic ideas regarding race and gender equality that many of us mistakenly believed we had evolved beyond.

It seems that retrograde ideas never truly go away, but lie waiting for the right conditions to re-emerge, like killer viruses from the melting Arctic permafrost. For many racists the condition of awakening seems to have been the emergence of Trump. As usual, Cory Doctorow said it best:

In a tight race, having a cheap way to reach all the latent Klansmen in a district and quietly inform them that Donald J. Trump is their man is a game-changer.

Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.

What can be dizzying is how historically mashed up and so obviously socially constructed many of these groups are, and how easily they can both resemble and even emerge from forms of fandom. (A point I’ve often seen made by Adam Aelkus).

The only step that would truly rid us of the cacophony of crazy would be to ban these groups from the internet altogether. Banning an obscene organization such as InfoWars, as all the major internet platforms with the exception of Twitter have now done, may be a good thing, but whether such measures actually work, or even ultimately end up being worse than the disease, is less certain. Banned groups are already turning to alternative platforms such as Gab or Hatreon that are more amenable to whatever snake oil they are selling.

Right now I wouldn’t place any bets on either side winning the war between the platforms and those seeking to get their message out, whether it’s crazy or just a narrative that those who own the press and the platforms would prefer the public ignore. It isn’t even clear which side a progressive should be rooting for. For while it seems clear that racist language or deliberate attempts to incite violence have no right to a place in the marketplace of ideas, it’s less clear what to do about a controversial figure like Cyprian Nyakundi who, from what I understand, was banned for leaking nude photos of Radio Africa Managing Director Martin Khafafa with young women- a rather attention-grabbing way of exposing corruption.

The platforms are immensely powerful because they control not only the major interfaces between users and content producers but also the servers on which this content is located. Given the censorship record of the platforms, especially outside the United States, this is not something we should be comfortable with. To give just one example which I think is instructive: Facebook has essentially banned much of political speech from its platform in South Asia, yet much of this speech, along with all types of unsubstantiated rumors and nonsense, continues to circulate through its encrypted WhatsApp, which Facebook is unable to censor at all.

The platforms thus have an enormous amount of power, and yet the structure of the internet itself makes the job of censorship much harder than it might otherwise appear. ISIS, the most reviled and hunted terrorist organization in the world, still manages to have an internet presence; the ubiquity of encryption apps makes such censorship increasingly difficult, not to mention the US commitment to the First Amendment, which makes any complete purge illegal.

And even if we could snap our fingers and make the internet disappear, our former consensus would remain very much in the rearview mirror. Both right and left would still possess whole media infrastructures based on television, radio, and print whose assumptions are so different one might think they originated in different countries.

Indeed, it was tech’s fear of the traditional right, in possession of old media and its ability to gin up scandal, that played a large part in the platforms giving the alt-right a pass during Trump’s election bid. The platforms’ censorship will tread a careful line so long as Republicans are in control of the government and possess the power to break them up. A frightening prospect is that they will be nudged to turn their censorship towards the “radical left”, so that the Overton window remains jammed open but tilted in only one direction- meaning rightward.

In other words, the internet platforms might find themselves aligned with conservative media to take us back to the world as it existed in the past. Not the long dead liberal consensus but the era of the culture wars that followed them. A phantasmagoria pretending to be a civil war where the masses gnaw virtually at each other over questions of identity and pseudo-scandals while the big shots divvy up the spoils of what really matters in an age dominated by technological infrastructure, namely the design of and access to the domains in and through which we all are now compelled to live, and that now subsume every region of the globe.