The Flash Crash of Reality

“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.”

                                                                                                 H.P. Lovecraft, The Call of Cthulhu  

“All stable processes we shall predict. All unstable processes we shall control.”

John von Neumann

For at least as long as there has been a form of written language to record such a thing, human beings have lusted after divination. The classical Greeks had their trippy Oracle at Delphi, while the Romans scanned entrails for hidden patterns, or more beautifully, sought out the shape of the future in the murmurations of birds. All ancient cultures, it seems, looked for the signs of fate in the movement of the heavens. The ancient Chinese script may have even originated in a praxis of prophecy, a search for meaning in the branching patterns of “oracle bones” and tortoise shells, signaling that perhaps written language itself originated not with accountants but with prophets seeking to overcome the temporal confines of the second law, in whose measure we are forever condemned.

The promise of computation was that this power of divination was at our fingertips at last. Computers would allow us to outrun time, and thus in seeing the future we’d finally be able to change it or deftly avoid its blows- the goal of all seers in the first place.

Indeed, the binary language underlying computation sprang from the fecund imagination of Gottfried Leibniz, who got the idea after he encountered the most famous form of Chinese divination, the I-Ching. The desire to create computing machines emerged with the Newtonian worldview and instantiated its premise; namely, that the world could be fully described in terms of equations whose outcome was preordained. What computers promised was the ability to calculate these equations, offering us a power born of asymmetric information- a kind of leverage via time travel.

Perhaps we should have known that time would not be so easily subdued. Outlining exactly how our recent efforts to know and therefore control the future have failed is the point of James Bridle’s wonderful book  New Dark Age: Technology and the End of the Future.

With the kind of clarity demanded of an effective manifesto, Bridle neatly breaks his book up into ten “C’s”: Chasm, Computation, Climate, Calculation, Complexity, Cognition, Complicity, Conspiracy, Concurrency, and Cloud.

Chasm defines what Bridle believes to be our problem. Our language is no longer up to the task of navigating the world wrought by the complex systems upon which we are dependent. This failure of language, which amounts to a failure of thought, Bridle traces to the origins of computation itself.

Computation

To vastly oversimplify his argument, the problem with computation is that the detailed models it provides too often tempt us into confusing our map with the territory. Sometimes this leads us to mistrust our own judgement and defer to the “intelligence” of machines- a situation that in the most tragic of circumstances has resulted in what those in the National Parks Service call “death by GPS”. In other cases our confusion of the model with reality results in the surrender of power to the minority of individuals capable of creating such models, and to the corporations which own and run them.

Computation was invented under the aforementioned premise born with the success of calculus: that everything, including history itself, could be contained in an equation. It was also seen as a solution to the growing complexity of society. Echoing Stanislaw Lem, Vannevar Bush, one of the founders of modern computing, foresaw something like the internet in the 1940s with his “memex”. The mechanization of knowledge seemed the only possible lifeboat in the deluge of information modernity had brought.

Climate

One of the first projected purposes of computers was not just to predict, but to actually control the weather. And while we’ve certainly gotten better at the former, the best we’ve gotten from the latter is Kurt Vonnegut’s humorous takedown of the premise in his novel Cat’s Cradle, which was actually based on his chemist brother’s work on weather control for General Electric. It is somewhat ironic, then, that the very fossil fuel based civilization upon which our computational infrastructure depends is not only making the weather less “controllable” and predictable, but is undermining the climatic stability of the Holocene, which facilitated the rise of a species capable of imagining and building something as sophisticated as computers in the first place.

Our new dark age is not just a product of our misplaced faith in computation, but of the growing unpredictability of the world itself, a reality whose existential importance is most apparent in the case of climate change. Our rapidly morphing climate threatens the very communications infrastructure that allows us to see and respond to its challenges. Essential servers and power sources will likely be drowned under the rising seas, cooling-dependent processors taxed by increasing temperatures. Most disturbingly, rising CO2 levels are likely to make human beings dumber. As Bridle writes:

“At 1,000 ppm, human cognitive ability drops by 21 per cent. At higher atmospheric concentrations, CO2 stops us from thinking clearly. Outdoor CO2 already reaches 500 ppm.”

An unstable climate undermines the bedrock, predictable natural cycles from which civilization itself emerged, that is, those of agriculture. In a way, our very success at controlling nature by making it predictable is destabilizing the regularity of nature that made such predictability possible in the first place.

It is here that computation reveals its double-edged nature, for while computation is the essential tool we need to see and respond to the “hyperobject” that is climate change, it is also one of the sources of this growing natural instability itself. Much of the energy of modern computation is directly traceable to fossil fuels, a fact the demon coal lobby has eagerly pointed out.

Calculation

What the explosion of computation has allowed, of course, is an exponential expansion of the power and range of calculation. While one can quibble over whether or not Moore’s Law, the driving force behind the fact that everything is now software, has finally bent towards its asymptote and proved Ray Kurzweil and his ilk wrong, the fact is that nothing else in our era has followed the semiconductor’s exponential curve. Indeed, as Bridle shows, precisely the opposite.

For all their astounding benefits, machine learning and big data have not, as Chris Anderson predicted, resulted in the “End of Theory”. Science still needs theory, experiment, and dare I say, humans to make progress, and what is clear is that in many areas outside ICT itself progress has not merely slowed but stalled.

Over the past sixty years, rather than experiencing Moore’s Law type increases, the pharmaceutical industry has suffered the opposite: the so-called Eroom’s Law, whereby “The number of new drugs approved per billion US dollars spent on research and development has halved every nine years since 1950.”
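To get a feel for what that halving rate implies, here is a minimal sketch in Python of an Eroom’s Law style decline. The 1950 baseline figure is purely illustrative, my own placeholder rather than a number from Bridle or from the researchers who coined the law; only the nine-year halving comes from the quote above.

```python
# A minimal sketch of Eroom's Law: drugs approved per billion (inflation-adjusted)
# R&D dollars halving every nine years. The 1950 baseline is illustrative only.

BASELINE_1950 = 40.0   # hypothetical approvals per $1 billion of R&D in 1950
HALVING_YEARS = 9      # the halving period cited for Eroom's Law

def drugs_per_billion(year: int) -> float:
    """Approvals per billion R&D dollars implied by a steady nine-year halving."""
    return BASELINE_1950 * 0.5 ** ((year - 1950) / HALVING_YEARS)

if __name__ == "__main__":
    for year in range(1950, 2021, 10):
        print(f"{year}: {drugs_per_billion(year):6.2f} drugs per $1bn R&D")
```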

Part of this stems from the fact that the low-hanging fruit of discovery, not just in pharmaceuticals but elsewhere, has already been picked, along with the fact that the problems we’re dealing with are becoming exponentially harder to solve. Yet some portion of the slowdown in research progress is surely a consequence of technology itself, or at least the ways in which computers are being relied upon and deployed. Ease of sharing, when combined with status hunting, inevitably leads to widespread gaming. Scientists are little different from the rest of us, seeking ways to top Google’s PageRank, YouTube recommendations, Instagram and Twitter feeds, or the sorting algorithms of Amazon, though for scientists the summit of recognition consists of prestigious journals where publication can make or break a career.

Data being easy to come by, while experimentation and theory remain difficult, has meant that “proof” is often conflated with what turn out to be spurious p-values, or “p-hacking”. The age of big data has also been the age of science’s “replication crisis”, where seemingly ever more findings disappear upon scrutiny.
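To see how little it takes for noise to masquerade as “proof”, here is a small simulation, my own sketch rather than anything from Bridle’s book. It runs a hundred “studies” on pure randomness and counts how many clear the conventional p < 0.05 bar anyway.

```python
# A sketch of how spurious "significance" arises: run many studies on pure noise
# and count how often chance alone clears the conventional p < 0.05 threshold.
import math
import random

def null_study(n: int = 1000) -> bool:
    """Flip a fair coin n times and test whether it looks biased.
    Uses the normal approximation: |z| > 1.96 corresponds to p < 0.05."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = (heads - n / 2) / math.sqrt(n / 4)
    return abs(z) > 1.96

if __name__ == "__main__":
    random.seed(0)
    studies = 100
    false_positives = sum(null_study() for _ in range(studies))
    print(f"{false_positives} of {studies} null studies came out 'significant'")
    # With 100 independent tests, the chance of at least one spurious hit is
    # roughly 1 - 0.95**100, i.e. better than 99 per cent.
    print(f"Chance of at least one false positive: {1 - 0.95 ** studies:.3f}")
```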

What all this calculation has resulted in is an almost suffocating level of complexity, which is the source of much of our inegalitarian turn. Connectivity and transparency were supposed to level the playing field. Instead, in areas such as financial markets, where the sheer amount of information to be processed has ended up barring new entrants, calculation has provided the ultimate asymmetric advantage to those with the highest capacity to identify and respond within nanoseconds to changing market conditions.

Asymmetries of information lie behind both our largest corporate successes and the rising inequality that they have brought in their wake. Companies such as WalMart and Amazon are in essence logistics operations built on the unique ability of these entities to see and respond first or most vigorously to consumer needs. As Bridle points out, this rule of logistics has resulted in a bizarre scrambling of politics, the irony that:

“The complaint of the Right against communism – that we’d all have to buy our goods from a single state supplier – has been supplanted by the necessity of buying everything from Amazon.”

Yet unlike the corporate giants of old such as General Motors, our 21st century behemoths don’t actually employ all that many people, and in their lust after automation seem determined to employ even fewer. The workplace, and the larger society, these companies are building seem even more designed around the logic of machines than factories in the heyday of heavy industry. The ‘chaotic storage’ deployed in Amazon’s warehouses is like something dreamed up by Kafka, but that’s because it emerges out of the alien “mind” of an algorithm, a real-world analog to Google’s Deep Dream.

The world in this way becomes less and less sensible, except to the tiny number of human engineers who, for the moment, retain control over its systems. This is a problem that is only going to get worse with the spread of the Internet of Things, an extension of computation born not out of necessity, but because capitalism in its current form seems hell-bent on turning all products into services so as to procure a permanent revenue stream. It’s not a good idea. As Bridle puts it:

“We are inserting opaque and poorly understood computation at the very bottom of Maslow’s hierarchy of needs – respiration, food, sleep, and homeostasis – at the precise point, that is, where we are most vulnerable.”

What we should know by now, if anything, is that the more connected things are, the more hackable they become, and the more susceptible to rapid and often unexplainable crashes. Turning reality into a type of computer simulation comes with the danger that the world at large might experience the kind of “flash crash” now limited to the stock market. Bridle wonders if we’ve already experienced just such a flash crash of reality:

“Or perhaps the flash crash in reality looks exactly like everything we are experiencing right now: rising economic inequality, the breakdown of the nation-state and the militarisation of borders, totalising global surveillance and the curtailment of individual freedoms, the triumph of transnational corporations and neurocognitive capitalism, the rise of far-right groups and nativist ideologies, and the utter degradation of the natural environment. None of these are the direct result of novel technologies, but all of them are the product of a general inability to perceive the wider, networked effects of individual and corporate actions accelerated by opaque, technologically augmented complexity.”

Cognition

It’s perhaps somewhat startling that even as we place ourselves in greater and greater dependence on artificial intelligence we’re still not really certain how or even if machines can think. Of course, we’re far from understanding how human beings exhibit intelligence, but we’ve never been confronted with this issue of inscrutability when it comes to our machines. Indeed, almost the whole point of machines is to replace the “herding cats” element of the organic world with the deterministic reliability of classical physics. Machines are supposed to be precise, predictable, legible, and above all, under the complete control of the human beings who use them.

The question of legibility today hasn’t gone far beyond the traditional debate between the two schools of AI that have rivaled each other since the field’s birth. There are those who believe intelligence is merely the product of connections between different parts of the brain, and those who think intelligence has more to do with the mind’s capacity to manipulate symbols. In our era of deep learning the Connectionists hold sway, but it’s not clear if we are getting any closer to machines actually understanding anything. This lack of comprehension can result in the new comedy genre of silly or disturbing mistakes made by computers, but it also presents us with real-world dangers as we place more and more of our decision making upon machines that have no idea what any of the data they are processing actually means, even as these programs discover dangerous hacks, such as deception, that are as old as life itself.
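To make the contrast concrete, here is a toy sketch of my own, not an example from Bridle: the symbolic approach writes the rule down explicitly, while a tiny connectionist perceptron arrives at the same behavior by nudging numerical weights, with nothing inside it that looks like a rule or an understanding of one.

```python
# A toy contrast between the two schools of AI: a hand-written symbolic rule
# versus a tiny "connectionist" perceptron that learns the same OR function
# from examples. Purely illustrative.

def symbolic_or(a: int, b: int) -> int:
    """Symbolic AI in miniature: the rule is stated explicitly."""
    return 1 if (a == 1 or b == 1) else 0

def train_perceptron(examples, epochs: int = 20, lr: float = 0.1):
    """Connectionism in miniature: weights are nudged until behavior matches."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (a, b), target in examples:
            out = 1 if (w0 * a + w1 * b + bias) > 0 else 0
            err = target - out
            w0, w1, bias = w0 + lr * err * a, w1 + lr * err * b, bias + lr * err
    return w0, w1, bias

if __name__ == "__main__":
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w0, w1, bias = train_perceptron(data)
    for (a, b), _ in data:
        learned = 1 if (w0 * a + w1 * b + bias) > 0 else 0
        print(a, b, "rule:", symbolic_or(a, b), "learned:", learned)
    # The perceptron ends up matching the rule, yet all it "knows" is three numbers.
```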

Complicity

Of course, no techno-social system can exist unless it serves the interest of at least some group in a position of power. Bridle draws an analogy between society in ancient Egypt and our own. There, the power of the priests was premised on their ability to predict the rise and fall of the Nile. To the public this predictive power was shrouded in the language of the religious caste’s ability to commune with the gods, all the while the priests were secretly using the much more prosaic technology of nilometers hidden underground.

Who are the priests of the current moment? Bridle makes a good case that it’s the “three letter agencies”, the NSA, MI5 and their ilk, that are the priests of our age. It’s in the interest of these agencies, born in the militarized atmosphere of the Cold War and the War on Terrorism, that the logic of radical transparency continues to unfold- where the goal is to see all and to know all.

Who knows how vulnerable these agencies have made our communications architecture in trying to see through it? Who can say, Bridle wonders, if the strongest encryption tools available haven’t already been made useless by some genius mathematician working for the security state? And here is the cruel irony of it all, that the agencies whose quest is to see into everything are completely opaque to the publics they supposedly serve. There really is a “deep state”, though given our bizarro-land media landscape our grappling with it quickly gives way to conspiracy theories and lunatic cults like QAnon.

Conspiracy

The hunt for conspiracy stems from the understandable need of the human mind to simplify. It is the search for clear agency where all we can see are blurred lines. Ironically, believers in conspiracy hold more expansive ideas of power and freedom than those who explain the world in terms of “social forces” or other necessary abstractions. For a conspiracist the world is indeed controllable; it’s just that those doing the controlling happen to be terrifying. None of this makes conspiracy anything but an unhinged way of dealing with reality, just a likely one whenever a large number of individuals feel the world is spinning out of control.

The internet ends up being a double boon for conspiracist politics because it fragments the shared notion of reality that existed in the age of print and mass media while also allowing individuals who fall under some particular conspiracy’s spell to find one another and validate their own worldview. Yet it’s not just a matter of fragmented media and the rise of filter bubbles that plagues us but a kind of shattering of our sense of reality itself.

Concurrency

It is certainly a terrible thing that our current communications and media landscape has fractured into digital tribes with the gap of language and assumptions between us seemingly unbridgeable, and emotion-driven political frictions resulting in what some have called “a cold civil war.” It’s perhaps even more terrifying that this same landscape has spontaneously given way to a kind of disturbed collective unconscious that is amplified, and sometimes created, by AI into what amounts to the lucid dreams of a madman that millions of people, many of them children, experience at once.

YouTube isn’t so much a television channel as it is a portal to the twilight zone, where one can move from videos of strangers compulsively wrapping and unwrapping products to cartoons of Peppa Pig murdering her parents. Like its sister “tubes” in the porn industry, YouTube has seemingly found a way to jack straight into the human lizard brain. As is the case with slot machines, the system has been designed with addiction in mind, only the trick here is to hook into whatever tangle of weirdness or depravity exists in the individual human soul- and pull.

The even crazier thing about these sites is that the majority of viewers, and perhaps soon creators, are not humans but bots. As Bridle writes:

“It’s not just trolls, or just automation; it’s not just human actors playing out an algorithmic logic, or algorithms mindlessly responding to recommendation engines. It’s a vast and almost completely hidden matrix of interactions between desires and rewards, technologies and audiences, tropes and masks.”

Cloud

Bridle thinks one thing is certain: we will never again return to the feeling of being completely in control, and the very illusion that we can be, if only we had the right technical tweak or the right political leader, is perhaps the greatest danger of our new dark age.

In a sense we’re stuck with complexity and it’s this complex human/machine artifice which has emerged without anyone having deliberately built it that is the source of all the ills he has detailed.

The historian George Dyson recently composed a very similar diagnosis. In his essay Childhood’s End Dyson argued that we are leaving the age of the digital and going back to the era of the analog. He didn’t mean that we’d shortly be cancelling our subscriptions to Spotify and rediscovering the beauty of LPs (though plenty of us are doing that), but something much deeper. Rather than being a model of the world’s knowledge, Google now, in some sense, was the world’s knowledge. Rather than representing the world’s social graph, Facebook now was the world’s social graph.

The problem with analogue systems when compared to digital is that they are hard to precisely control, and thus are full of surprises, some of which are far from good. Our quest to assert control over nature and society hasn’t worked out as planned. According to Dyson:

“Nature’s answer to those who sought to control nature through programmable machines is to allow us to build machines whose nature is beyond programmable control.”

Bridle’s answer to our situation is to urge us to think, precisely the kind of meditation on the present he has provided with his book. It’s not as wanting a solution as one might suppose, and for me it had clear echoes of the perspective put forward by Roy Scranton in his book Learning to Die in the Anthropocene, where he wrote:

“We must practice suspending stress-semantic chains of social exhaustion through critical thought, contemplation, philosophical debate, and posing impertinent questions…

We must inculcate ruminative frequencies in the human animal by teaching slowness, attention to detail, argumentative rigor, careful reading, and meditative reflection.”

I’m down with that. Yet the problem I had with Scranton is ultimately the same one I had with Bridle. Where is the politics? Where is human agency? For it is one thing to say that we live in a complex world roamed by “hyperobjects” we at best partly control, but it is quite another to discount our capacity for continuous intervention, especially our ability to “act in concert”, that is politically, to steer the world towards desirable ends.

Perhaps what the arrival of a new dark age means is that we’re regaining a sense of humility. Starting about two centuries ago human beings got it into their heads that they had finally gotten nature under their thumb. What we are learning in the 21st century is not only that this view was incorrect, but that the human-made world itself seems to have a mind of its own. What this means is that we’re likely barred forever from the promised land, condemned to a state of adaptation and response to nature’s cruelty and human injustice, which will only end with our personal death or the extinction of the species, and yet still contains all the joy and wonder of what it means to be a human being cast into a living world.

 

The Evolution of Chains

“Progress in society can be thought of as the evolution of chains….”

 Phil Auerswald, The Code Economy

“I would prefer not to.”

Herman Melville, Bartleby the Scrivener

The 21st century has proven to be all too much like living in a William Gibson novel. It may be sad, but at least it’s interesting. What gives the times this cyberpunk feel isn’t so much the ambiance (instead of the dark cool of noir we’ve got frog memes and LOLcats), but the fact that absolutely every major happening, and all of our interaction with the wider world, has something to do with the issue of computation, of code.

Software has not only eaten the economy, as Marc Andreessen predicted back in 2011, but also culture, politics, and war. To understand both the present and the future, then, it is essential to get a handle on what exactly this code that now dominates our lives is, and to see it in its broader historical perspective.

Phil Auerswald managed to do something of the sort with his book The Code Economy: A Forty-Thousand-Year History. It’s a book that tries to take us to the very beginning, to chart the long march of code to sovereignty over human life. The book defines code (this will prove part of its shortcomings) broadly as a “recipe”, a standardized way of achieving some end. Looked at this way, human beings have been a species dominated by code since our beginnings, that is, with the emergence of language, but there have been clear leaps closer to the reign of code along the way, with Auerswald seeing the invention of writing as the first. Written language, it seems, was the invention of bureaucrats, a way for the tribal scribes in ancient Sumer to keep track of temple tributes and debts. And though it soon broke out of these chains and proved a tool as much for building worlds and wonders like the Epic of Gilgamesh as a radical new means of power and control, code has remained linked to the domination of one human group over another ever since.

Auerswald is mostly silent on the mathematical history of code before the modern age. I wish he had spent time discussing predecessors of the computer such as the abacus, the Antikythera mechanism, clocks, and especially the astrolabe. Where exactly did this idea of mechanizing mathematics actually emerge, and what are the continuities and discontinuities between different forms of such mechanization?

Instead, Gottfried Leibniz, who envisioned the binary arithmetic that underlies all of our modern-day computers, is presented as if he had just fallen out of the matrix. Though Leibniz’ genius does seem almost inexplicable and sui generis. A philosopher who arguably beat Isaac Newton to the invention of calculus, he was also an inventor of ingenious calculating machines and an almost real-life version of Nostradamus. In a letter to Christiaan Huygens in 1670 Leibniz predicted that with machines using his new binary arithmetic: “The human race will have a new kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes.” He was right, though it would be nearly 300 years until these instruments were up and running.
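It is worth pausing on what Leibniz actually saw in binary. With only two symbols, every number can be written as a string of on/off states, and arithmetic collapses into a few rules simple enough for a machine to follow. A minimal sketch of the idea, mine rather than Auerswald’s:

```python
# A minimal sketch of Leibniz's insight: with two symbols, arithmetic reduces to
# a handful of mechanical rules. Binary addition below proceeds digit by digit
# with a carry, the way a simple device (or a very bored clerk) could do it.

def to_binary(n: int) -> str:
    """Write a non-negative integer using only the symbols 0 and 1."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))
        n //= 2
    return "".join(reversed(digits))

def add_binary(a: str, b: str) -> str:
    """Add two binary strings using nothing but single-digit rules and a carry."""
    a, b = a.zfill(len(b)), b.zfill(len(a))
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")
    return "".join(reversed(result))

if __name__ == "__main__":
    x, y = 27, 14
    print(f"{to_binary(x)} + {to_binary(y)} = {add_binary(to_binary(x), to_binary(y))}")
    # 11011 + 1110 = 101001, which is 41 in decimal
```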

Something I found particularly fascinating is that Leibniz found a premonition of his binary number system in the Chinese system of divination, the I-Ching, which had been brought to his attention by Father Bouvet, a Jesuit priest then living in China. It seems that from the beginning the idea of computers, and the code underlying them, has been wrapped up with the human desire to know the future in advance.

Leibniz believed that the widespread adoption of binary would make calculation more efficient and thus mechanical calculation easier, but it wouldn’t be until the industrial revolution that we actually had the engineering to make his dreams a reality. Two developments especially aided in the development of what we might call proto-computers in the 19th century. The first was the division of labor first identified by Adam Smith, the second was Jacquard’s loom. Much like the first lurch toward code that came with the invention of writing, this move was seen as a tool that would extend the reach and powers of bureaucracy.       

Smith’s idea of how efficiency was to be gained from breaking down complex tasks into simple, easily repeatable steps served as the inspiration for applying the same methods to computation itself. Faced with the prospect of not having enough trained mathematicians to create the extensive logarithmic and trigonometric tables upon which modern states and commerce were coming to depend, innovators such as Gaspard de Prony in France hit upon the idea of breaking down complex computation into simple tasks. The human “computer” was born.
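What made the scheme workable is worth spelling out: once a skilled mathematician prepares the opening values and differences of a function, the rest of the table can be extended by nothing but addition. The sketch below illustrates the principle with an arbitrary cubic of my own choosing; it shows the logic of the method, not de Prony’s actual tables.

```python
# A sketch of how "complex computation" reduces to simple, repeatable steps:
# after a one-time setup of starting values and differences, every further
# table entry needs only a few additions, the kind of rote task assigned to
# human computers. The cubic below is merely an example.

def f(x: int) -> int:
    """The function to tabulate; any cubic works for this demonstration."""
    return x**3 + 2 * x + 1

def tabulate_by_differences(start: int, count: int, order: int = 3):
    """Extend a table of f using only additions after the initial setup."""
    # One-time "skilled" work: the first few values and their differences.
    values = [f(start + i) for i in range(order + 1)]
    columns = [values]
    for _ in range(order):
        prev = columns[-1]
        columns.append([b - a for a, b in zip(prev, prev[1:])])
    state = [col[0] for col in columns]  # f and its successive differences

    # Rote work: each new entry costs nothing more than `order` additions.
    table = []
    for _ in range(count):
        table.append(state[0])
        for level in range(order):
            state[level] += state[level + 1]
    return table

if __name__ == "__main__":
    computed = tabulate_by_differences(start=0, count=8)
    direct = [f(x) for x in range(8)]
    print(computed)
    assert computed == direct  # additions alone reproduce the direct calculation
```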

It was the mechanical loom of Joseph Marie Jacquard that in the early 1800s proved that humans themselves weren’t needed once the creation of a complex pattern had been reduced to routine tasks. It was merely a matter of drawing the two, rote human computation and machine-enabled pattern making, together to show the route to Leibniz’ new instrument for thought. And it was Charles Babbage, along with his young assistant Ada Lovelace, who seemed to grasp the implications, beyond mere data crunching, of building machines that could “think”, and who would do precisely this.

Babbage’s Difference and Analytical Engines, however, remained purely things of the mind. Early industrial age engineering had yet to catch up with Leibniz’ daydreams. Society’s increasing needs for data instead came to be served by a growing army of clerks along with simple adding machines and the like.

In an echo of the Luddites, who rebelled against the fact that their craft was being supplanted by the demands of the machine, at least some of these clerks must have found knowledge reduced to information processing dehumanizing. Melville’s Bartleby the Scrivener gives us a portrait of a man who’s become like a computer stuck in a loopy glitch that eventually leads to his being scrapped.

An interesting aside here: Auerswald doesn’t really explore the philosophical assumptions behind the drive to make the world computable. Melville’s Bartleby, for example, might have been used as a jumping off point for a discussion about how both the computer and the view of human beings as a sort of automata meant to perform a specific task emerged out of a thoroughly deterministic worldview. This view, after all, was the main target of Melville’s short story, where a living person has come to resemble a sort of flawed windup toy, and which makes direct references to religious and philosophical tracts arguing in favor of determinism; namely, the firebrand preacher Jonathan Edwards’ sermon On the Will, and the chemist and utopian Joseph Priestley’s book Doctrine of Philosophical Necessity.

It is the effect of the development of machine-based code on these masses of 19th and early 20th century white collar workers, rather than the more common ground of automation’s effect on the working class, that most interests Auerswald. And he has reasons for this, given that the current surge of AI seems to most threaten low-level clerical workers rather than blue collar workers, whose jobs have already been automated or outsourced, and given that the menial occupations that have replaced jobs in manufacturing require too much dexterity for even the most advanced robots.

As Auerswald points out, fear of white collar unemployment driven by the automation of cognitive tasks is at least as old as the invention of the Burroughs calculating machine in the 1880s. Yet rather than lead to a mass of unemployed clerks, the adoption of adding machines, typewriters, and dictaphones only increased the number of people needed to manage and process information. That is, the deeper penetration of code into society, or the desire for society to conform to the demands of code, placed even higher demands on both humans and machines. Perhaps that historical analogy will hold for us as well. Auerswald doesn’t seem to think that this is a problem.

By the start of the 20th century we still weren’t sure how to automate computation, but we were getting close. In an updated version of de Prony’s scheme, the meteorologist Lewis Fry Richardson showed how we could predict the weather armed with a factory of human computers. There are echoes of both to be seen in the low-paid laborers working for platforms such as Amazon’s Mechanical Turk who still provide the computation behind the magic act that is much of contemporary AI.

Yet it was World War II and the Cold War that followed that would finally bootstrap Leibniz’s dream of the computer into reality. The names behind this computational revolution are now legendary: Vannevar Bush, Norbert Wiener, Claude Shannon, John von Neumann, and especially Alan Turing.

It was these figures who laid the cornerstone for what George Dyson has called “Turing’s Cathedral”, for, like the great medieval cathedrals, the computational megastructure in which all of us are now embedded was the product of generations of people dedicated to giving substance to the vision of its founders. Unlike the builders of Notre Dame or Chartres, who labored in the name of ad majorem Dei gloriam, those who constructed Turing’s Cathedral were driven by ideas regarding the nature of thought and the power and necessity of computation.

Surprisingly, this model of computation created in the early 20th century by Turing and his fellow travelers continues to underlie almost all of our digital technology today. It’s a model that may be reaching its limits, and that needs to be replaced with a more environmentally sustainable form of computation, one actually capable of helping us navigate and find the real structure of information in a world whose complexity we are finding so deep as to be intractable, and in which our own efforts at mastery only result in a yet harder-to-solve maze. It’s a story Auerswald doesn’t tell, and it must wait until another time.

Rather than explore what computation itself means, or what its future might be, Auerswald throws wide open the definition of what constitutes a code. As Erwin Schrodinger observed in his prescient essay What is Life?, and as we later learned with the discovery of DNA, a code does indeed stand at the root of every living thing. Yet in pushing the definition of code ever farther from its origins in the exchange of information and detailed instructions on how to perform a task, something is surely lost. Thus Auerswald sees not only computer software and DNA as forms of code, but also the work of the late celebrity chef Julia Child, the McDonald’s franchise, WalMart, and even the economy itself.

As I suggested at the beginning of this essay, there might be something to be gained by viewing the world through the lens of code. After all, code of one form or another is now the way in which most of the largely private and remaining government bureaucracies that run our lives function. The sixties radicals, terrified that computers would be the ultimate tool of bureaucrats, guessed better than most where we were headed. “Do not fold, spindle or mutilate.” Right on. Code has also become the primary tool through which the system is hacked- forced to buckle and glitch in ways that expose its underlying artificiality, freeing us for a moment from its claustrophobic embrace. Unfortunately, most of its attackers aren’t rebels fighting to free us, but barbarians at the gates.

Here is where Auerswald really lets us down. Seeing progress as tied to our loss of autonomy and the development of constraints, he seems to think we should just accept our fetters, now made out of 1s and 0s. Yet within his broad definition of what constitutes code, it would appear that at least the creation of laws remains in our power. At least in democracies the law is something that the people collectively make. Indeed, if any social phenomenon can be said to be code-like it would be law. It is surprising, then, that Auerswald gives it no mention in his book. Or rather, it’s not really surprising at all.

You see, there’s an implicit political agenda underneath Auerswald’s whole understanding of code, or at least a whole set of political and economic assumptions. These are assumptions regarding the naturalness of capitalism, the beneficial nature of behemoths built on code such as WalMart, the legitimacy of global standard-setting institutions and protocols, and the dominance of platforms over other “trophic layers” in the information economy that provide the actual technological infrastructure on which these platforms run. Perhaps not even realizing that these are just assumptions rather than natural facts, Auerswald never develops or defends them. Certainly, code has its own political economy, but to see it we will need to look elsewhere. Next time.

 

The Kingdom of Machines

The Book of the Machines

For anyone thinking about the future relationship between nature-man-machines I’d like to make the case for the inclusion of an insightful piece of fiction in the canon. All of us have heard of H.G. Wells, Isaac Asimov or Arthur C. Clarke. And many, though perhaps fewer, of us have likely heard of fiction authors from the other side of the nature/technology fence, writers like Mary Shelley, or Ursula Le Guin, or nowadays, Paolo Bacigalupi, but certainly almost none of us have heard of Samuel Butler, or better, read his most famous novel Erewhon (pronounced with three short syllables: E-re-whon).

I should back up. Many of us who have heard of Butler and Erewhon have likely done so through George Dyson’s amazing little book Darwin Among the Machines, a book that itself deserves a top spot in the nature-man-machine canon and got its title from an essay of Butler’s that found its way into the fictional world of his Erewhon. Dyson’s 1997 book, written just as the Internet age was ramping up, tried to place the digital revolution within the longue durée of human and evolutionary history, but Butler had gotten there first. Indeed, Erewhon articulated challenges ahead of us which took almost 150 years to unfold, issues being discussed today by a set of scientists and scholars concerned over both the future of machines and the ultimate fate of our species.

A few weeks back Giulio Prisco at the IEET pointed me in the direction of an editorial placed in both the Huffington Post and the Independent by a group of scientists including Stephen Hawking, Nick Bostrom and Max Tegmark, warning of the potential dangers emerging from technologies surrounding artificial intelligence that are now going through an explosive period of development. As the authors of the letter note:

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

Most seem to think we are at the beginning rather than at the end of this AI revolution and see it likely unfolding into an unprecedented development; namely, machines that are just as smart if not incredibly more so than their human creators. Should the development of AI as intelligent or more intelligent than ourselves be a concern? Giulio himself doesn’t think so, believing that advanced AIs are in a sense both our children and our evolutionary destiny. The scientists and scholars behind the letter to Huffpost and The Independent, however, are very concerned. As they put it:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

And again:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In a probing article about the existential risks posed by artificial intelligence, and even more so the larger public’s indifference to them, James Hamblin writes of the theoretical physicist and Nobel Laureate Frank Wilczek, another of the figures trying to promote more serious public awareness of the risks posed by artificial intelligence. Wilczek thinks we will face existential risks from intelligent machines not over the long haul of human history but in a very short amount of time.

It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.

To be honest, it’s quite hard to take these scientists and thinkers seriously, perhaps because I’ve been so desensitized by Hollywood dystopias starring killer robots. But when your Noahs are some of the smartest guys on the planet it’s probably a good idea, if not to actually start building your boat, to at least be able to locate, and beat a path if necessary to, higher ground.

What is the scale of the risk we might be facing? Here’s physicist Max Tegmark from Hamblin’s piece:

Well, putting it in the day-to-day is easy. Imagine the planet 50 years from now with no people on it. I think most people wouldn’t be too psyched about that. And there’s nothing magic about the number 50. Some people think 10, some people think 200, but it’s a very concrete concern.

If there’s a potential monster somewhere on your path it’s always good to have some idea of its actual shape; otherwise you find yourself jumping and lunging at harmless shadows. How should we think about potentially humanity-threatening AIs? The first thing is that we need to be clear about what is happening. For all the hype, the AI sceptics are probably right: we are nowhere near replicating the full panoply of our biological intelligence in a machine. Yet this fact should probably increase rather than decrease our concern regarding the potential threat from “super-intelligent” AIs. However far they remain from the full complexity of human intelligence, on account of their much greater speed and memory such machines already largely run something as essential and potentially destructive as our financial markets; the military is already discussing the deployment of autonomous weapons systems; and the domains over which the decisions of AIs hold sway are only likely to increase over time. One just needs to imagine one or more such systems going rogue and doing what may amount to highly destructive things its creators or programmers did not imagine or intend in order to comprehend the risk. Such a rogue system would be more akin to a killer storm than Lex Luthor, and thus Nick Bostrom is probably on to something profound when he suggests that one of the worst things we could do would be to anthropomorphize the potential dangers. Telling Ross Anderson over at Aeon:

You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.

The crazy thing is that Erewhon, a novel published in 1872, clearly stated almost all of these dangers and perspectives. If these fears prove justified it will be Samuel Butler rather than Friedrich Nietzsche who will have been the great prophet to emerge from the 19th century.

Erewhon was released right on the eve of what’s called the second industrial revolution, which lasted from the 1870s to World War I. Here you get mass rail and steam ships, gigantic factories producing iron and steel, and the birth of electrification. It is the age romanticized and reimagined today by the cultural movement of steampunk.

The novel appeared 13 years after the publication of Charles Darwin’s The Origin of Species and, even more importantly, one year after Darwin’s even more revolutionary The Descent of Man. As we learn in Dyson’s book, Butler and Darwin were engaged in a long-lasting intellectual feud, though the issue wasn’t evolution itself, but Butler’s accusation that Charles Darwin had essentially ripped his theory from his grandfather Erasmus Darwin without giving him any of the credit.

Be that as it may, Erewhon was never intended, as many thought at the time, as a satire of Darwinism, but was Darwinian to its core, and tried to apply the lessons of the worldview made apparent by the Theory of Evolution to the innovatively exploding world of machines Butler saw all around him.

The narrator in Erewhon, a man named Higgs, is out exploring the undiscovered valleys of a thinly disguised New Zealand or some equivalent, looking for a large virgin territory to raise sheep. In the process he discovers an unknown community, the nation of Erewhon, hidden in its deep unexplored valleys. The people there are not primitive, but exist at roughly the level of a European village before the industrial revolution. Or as Higgs observes of them in a quip pregnant with meaning – “…savages do not make bridges.” What they find most interesting about the newcomer Higgs is, of all things, his pocket watch.

But by and by they came to my watch, which I had hidden away in the pocket that I had, and had forgotten when they began their search. They seemed concerned and uneasy the moment that they got hold of it. They made me open it and show the works, and as soon as I had done so they gave signs of very grave displeasure, which disturbed me all the more because I could not conceive wherein it could have offended them.  (58)

One needs to know a little of the history of the idea of evolution to realize how deliciously clever this narrative use of a watch by Butler is. He’s jumping off of William Paley’s argument for intelligent design found in Paley’s 1802 Natural Theology or Evidences of the Existence and Attributes of the Deity, about which it has been said:

It is a book I greatly admire for in its own time the book succeeded in doing what I am struggling to do now. He had a point to make, he passionately believed in it, and he spared no effort to ram it home clearly. He had a proper reverence for the complexity of the living world, and saw that it demands a very special kind of explanation. (4)

 

The quote above is Richard Dawkins talking in his The Blind Watchmaker, where he recovered and popularized Paley’s analogy of the stumbled-upon watch as evidence that the stunningly complex world of life around us must have been designed.

Paley begins his Natural Theology with an imagined scenario where a complex machine, a watch, hitherto unknown by its discoverer leads to speculation about its origins.

In crossing a heath suppose I pitched my foot against a stone and were asked how the stone came to be there. I might possibly answer that for any thing I knew to the contrary it had lain there for ever nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground and it should be inquired how the watch happened to be in that place. I should hardly think of the answer which I had before given that for any thing I knew the watch might have always been there. Yet why should not this answer serve for the watch as well as for the stone? Why is it not as admissible in the second case as in the first?  (1)

After investigating the intricacies of the watch, how all of its pieces seem to fit together perfectly and have precise and definable functions, Paley thinks that the:

….inference….is inevitable that the watch must have had a maker that there must have existed at some time and at some place or other an artificer or artificers who formed it for the purpose which we find it actually to answer who comprehended its construction and designed its use. (3-4)

Paley thinks we should infer a designer even if we discover the watch is capable of making copies of itself:

Contrivance must have had a contriver; design, a designer; whether the machine immediately proceeded from another machine or not. (12)

This is creationism, but as even Dawkins admits, it is an eloquent and sophisticated creationism.

Darwin, whichever one got there first, overthrew this need for an engineer as the ultimate source of complexity by replacing a conscious designer with a simple process through which the most intricate of complex entities could emerge over time- in the younger Darwin’s case, natural selection.

 

The brilliance of Samuel Butler in Erewhon was to apply this evolutionary emergence of complexity not just to living things, but to the machines we believe ourselves to have engineered. Perhaps the better assumption to have when we encounter anything of sufficient complexity is that to reach such complexity it must have been something that evolved over time. Higgs says of the magistrate of Erewhon obsessed with the narrator’s pocket watch that he had:

….a look of horror and dismay… a look which conveyed to me the impression that he regarded my watch not as having been designed but rather as the designer of himself and of the universe or as at any rate one of the great first causes of all things  (58)

What Higgs soon discovers, however, is that:

….I had misinterpreted the expression on the magistrate’s face and that it was one not of fear but hatred.  (58)

Erewhon is a civilization where something like the Luddites of the early 19th century have won. The machines have been smashed, dismantled, turned into museum pieces.

Civilization has been reset back to the pre-industrial era, and the knowledge of how to get out of this era and back to the age of machines has been erased, education restructured to strangle in the cradle scientific curiosity and advancement.

All this happened in Erewhon because of a book. It is a book that looks precisely like an essay the real-world Butler had published not long before his novel, his essay Darwin among the Machines, from which George Dyson got his title. In Erewhon it is called simply “The Book of the Machines”.

It is sheer hubris, we read in “The Book of the Machines”, to think that evolution, having played itself out over so many billions of years in the past and likely to play itself out for even longer in the future, has reached its pinnacle in us. And why should we believe there could not be such a thing as “post-biological” forms of life:

…surely when we reflect upon the manifold phases of life and consciousness which have been evolved already it would be a rash thing to say that no others can be developed and that animal life is the end of all things. There was a time when fire was the end of all things another when rocks and water were so. (189)

In the “Book of the Machines” we see the same anxiety about the rapid progress of technology that we find in those warning of the existential dangers we might face within only the next several decades.

The more highly organised machines are creatures not so much of yesterday as of the last five minutes so to speak in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years, see what strides machines have made in the last thousand. May not the world last twenty million years longer. If so what will they not in the end become?  (189-190)

Butler, even in the 1870s, is well aware of the amazing unconscious intelligence of evolution. The cleverest of species use other species for their own reproductive ends. The stars of this show are the flowers, a world on display in Louie Schwartzberg’s stunning documentary Wings of Life, which shows how much of the beauty of our world is the product of this bridging between species as a means of reproduction. Yet flowers are just the most visually prominent example.

Our machines might be said to be like flowers: unable to reproduce on their own, but with an extremely effective reproductive vehicle in use through human beings. This, at least, is what Butler speculated in Erewhon, again in the section “The Book of the Machines”:

No one expects that all the features of the now existing organisations will be absolutely repeated in an entirely new class of life. The reproductive system of animals differs widely from that of plants but both are reproductive systems. Has nature exhausted her phases of this power? Surely if a machine is able to reproduce another machine systematically we may say that it has a reproductive system. What is a reproductive system if it be not a system for reproduction? And how few of the machines are there which have not been produced systematically by other machines? But it is man that makes them do so. Yes, but is it not insects that make many of the plants reproductive and would not whole families of plants die out if their fertilisation were not effected by a class of agents utterly foreign to themselves? Does any one say that the red clover has no reproductive system because the humble bee and the humble bee only must aid and abet it before it can reproduce? (204)

Reproduction is only one way one species or kingdom can use another. There is also their use as a survival vehicle itself. Our increasing understanding of the human microbiome almost leads one to wonder whether our whole biology and everything that has grown up around it has all this time merely been serving as an efficient vehicle for the real show – the millions of microbes living in our guts. Or, as Butler says in what is the most quoted section of Erewhon:

Who shall say that a man does see or hear? He is such a hive and swarm of parasites that it is doubtful whether his body is not more theirs than his and whether he is anything but another kind of ant heap after all. Might not man himself become a sort of parasite upon the machines. An affectionate machine tickling aphid. (196)

Yet, one might suggest that perhaps we shouldn’t separate ourselves from our machines. Perhaps we have always been transhuman in the sense that from our beginnings we have used our technology to extend our own reach. Butler well understood this argument that technology was an extension of the human self.

A machine is merely a supplementary limb; this is the be all and end all of machinery. We do not use our own limbs other than as machines; and a leg is only a much better wooden leg than any one can manufacture. Observe a man digging with a spade; his right forearm has become artificially lengthened and his hand has become a joint.

In fact, machines are to be regarded as the mode of development by which human organism is now especially advancing, every past invention being an addition to the resources of the human body. (219)

Even the Luddites of Erewhon understood that to lose all of our technology would be to lose our humanity, so intimately woven were our two fates. The solution to the dangers of a new and rival kingdom of animated beings arising from machines was to deliberately wind back the clock of technological development: to before the time fossil fuels had freed machines from the constraints of animal, human, and cyclical power, to before machines had become animate in the way life itself was animate.

Still, Butler knew it would be almost impossible to make the solution of Erewhon our solution. Given our numbers, to retreat from the world of machines would unleash death and anarchy such as the world has never seen:

The misery is that man has been blind so long already. In his reliance upon the use of steam he has been betrayed into increasing and multiplying. To withdraw steam power suddenly will not have the effect of reducing us to the state in which we were before its introduction there will be a general breakup and time of anarchy such as has never been known it will be as though our population were suddenly doubled with no additional means of feeding the increased number. The air we breathe is hardly more necessary for our animal life than the use of any machine on the strength of which we have increased our numbers is to our civilisation it is the machines which act upon man and make him man as much as man who has acted upon and made the machines but we must choose between the alternative of undergoing much present suffering or seeing ourselves gradually superseded by our own creatures till we rank no higher in comparison with them than the beasts of the field with ourselves. (215-216)

Therein lie the horns of our current dilemma. The one way we might save biological life from the risk of the new kingdom of machines would be to dismantle our creations and retreat from technological civilization. It is a choice we cannot make, both because of its human toll and for the sake of the long-term survival of the genealogy of earthly life. Butler understood the former, but did not grasp the latter. For the only way the legacy of life on earth, a legacy that has lasted for 3 billion years and which everything living on our world shares, will survive the inevitable life cycle of our sun, which will boil away our planet’s life-sustaining water in less than a billion years and consume the earth itself a few billion years after that, is for technological civilization to survive long enough to provide earthly life with a new home.

I am not usually one for giving humankind a large role to play in cosmic history, but there is at least a chance that something like Peter Ward’s Medea Hypothesis is correct: that given the thoughtless nature of evolution’s imperative to reproduce at all costs, life ultimately destroys its own children like the murderous mother of Euripides’ play.

As Ward points out, it is the bacteria that have almost destroyed life on earth, and more than once, by mindlessly transforming its atmosphere and poisoning its oceans. This is perhaps the risk behind us, one that Bostrom thinks might explain our cosmic loneliness: complex life never gets very far before bacteria kill it off. Still, perhaps we are caught in a sort of pincer of past and future: our technological civilization, its capitalist economic system, and the AIs that might come to sit at the apex of this world may prove ultimately as absent of thought as bacteria, and somehow the source of both biological life’s destruction and their own.

Our luck has been to avoid a Medean fate in our past. If we can keep our wits about us we might be able to avoid a Medean fate in our future, only this one brought to us by the kingdom of machines we have created. If most of life in the universe really has been trapped at the cellular level by something like the Medea Hypothesis, and we actually are able to survive over the long haul, then our cosmic task might be to purpose the kingdom of machines to spread and cultivate higher order life as far and as deeply as we can reach into space. Machines, intelligent or not, might be the interstellar pollinators of biological life. It is we the living who are the flowers.

The task we face over both the short and the long term is to somehow negotiate the rivalry not only of the two worlds that have dominated our existence so far- the natural world and the human world- but a third, an increasingly complex and distinct world of machines. Getting that task right is something that will require an enormous amount of wisdom on our part, but part of the trick might lie in treating machines in some way as we already, when approaching the question rightly, treat nature- containing its destructiveness, preserving its diversity and beauty, and harnessing its powers not just for the good of ourselves but for all of life and the future.

As I mentioned last time, the more sentient our machines become the more we will need to look to our own world, the world of human rights, and for animals similar to ourselves, animal rights, for guidance on how to treat our machines, balancing these rights with the well-being of our fellow human beings.

The nightmare scenario is that instead of carefully cultivating the emergence of the kingdom of machines to serve as a shelter for both biological life and those things we human beings most value, they will instead continue to be tied to the dream of the endless accumulation of capital, or as Douglas Rushkoff recently put it:

When you look at the marriage of Google and Kurzweil, what is that? It’s the marriage of digital technology’s infinite expansion with the idea that the market is somehow going to infinitely expand.

A point that was never better articulated or more nakedly expressed than by the imperialist business man Cecil Rhodes in the 19th century:

To think of these stars that you see overhead at night, these vast worlds which we can never reach. I would annex the planets if I could.

If that happens, a kingdom of machines freed from any dependence on biological life or any reflection of human values might eat the world of the living until the earth is nothing but a valley of our bones.

_____________________________________________________________________

That a whole new order of animated beings might emerge from what was essentially human weakness vis-à-vis other animals is yet another one of the universe’s wonders, but it is a wonder that needs to go right in order to make it a miracle, or even to prevent it from becoming a nightmare. Butler invented almost all of our questions in this regard, and in speculating in Erewhon on how humans might unravel their dependence on machines he raised another interesting question as well; namely, whether the very free will that we use to distinguish the human world from the worlds of animals or machines actually exists. He also pointed to an alternative future where the key event would not be the emergence of the kingdom of machines but the full harnessing and control of the powers of biological life by human beings- questions I’ll look at sometime soon.