Am I a machine?

A question I’ve been obsessing over lately goes something like this: Is life a form of computation, and if so, is this thought somehow dangerous?

Of course, drawing an analogy between life and the most powerful and complex tool human beings have so far invented wouldn’t be a historical first. There was a time in which people compared living organisms to clocks, and then to steam engines, and then, perhaps surprisingly, to telephone networks.

Thoughtful critics, such as the roboticist Rodney Brooks, think this computational metaphor has already exhausted its usefulness and that its overuse is proving detrimental to scientific understanding. In his view, scientists and philosophers should jettison the attempt to see everything in terms of computation and information processing, which has blinded them to other, more productive metaphors hidden in the glare of a galloping Moore's Law. Perhaps today it's just the computer's turn, and someday in the future we'll have an even more advanced tool that will again seem the perfect model for the complexity of living beings. Or maybe not.

If instead we have indeed reached peak metaphor, it will be because with computation we really have discovered a tool that doesn't just resemble life in terms of features, but reveals something deep about the living world, because it allows us for the first time to understand life as it actually is. It would be as if, after all these years, we had managed to prove strangely right Giambattista Vico's claim made at the start of the scientific revolution, "Verum et factum convertuntur": humanity can never know the natural world, only what we ourselves have made. Maybe we will finally understand life and intelligence because we are now able to recreate a version of it in a different substrate (not to mention engineering it in the lab). We will know life because we are at last able to make it.

But let me start with the broader story…

Whether we're blinded by the power of our latest uber-tool or on the verge of a revolution in scientific understanding might matter less, however, than the unifying power of a universal metaphor. And I would argue that science needs just such a unifying metaphor. It needs one if it is to give us a vision of a rationally comprehensible world. It needs one for the purposes of education, public understanding, and engagement. Above all, a unifying metaphor is needed today as ballast against over-specialization, which traps the practitioners of the various branches of science (including the human sciences) in silos, unable to communicate with one another and thereby reassemble a unified picture of a world that science itself artificially divided into fiefdoms as an essential first step towards understanding it. And of all the metaphors we have imagined, computation really does appear uniquely fruitful and revelatory, not just in biology but across multiple and radically different domains: a possible skeleton key for problems that have frustrated scientific understanding for decades.

One place where the computation/information analogy has grown over the past decades is in the area of fundamental physics, as an increasing number in the field have begun to borrow concepts from computer science in the hopes of bridging the gap between general relativity and quantum mechanics.

This informational turn in physics can perhaps be traced back to the Israeli physicist Jacob Bekenstein, who way back in 1972 proposed what became known as the "Bekenstein bound": an upper limit on the entropy, the information, that can exist in a finite region of space. Pack information any tighter than roughly 10⁶⁹ bits per square meter and that region will collapse to form a black hole. Physics, it seems, puts hard limits on our potential computational abilities (we're a long way off), just as it places hard limits on our potential speed. What Bekenstein showed was that thinking of physics in terms of computation helped reveal something deeper not just about the physical world but about the nature of computation itself.
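As a back-of-the-envelope sketch (my own illustration, using the related Bekenstein-Hawking black-hole entropy of one bit per 4 ln 2 Planck areas, rather than the more general bound), that figure of roughly 10⁶⁹ bits per square meter falls out of just three physical constants:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

# The Planck length sets the scale where gravity meets quantum mechanics.
planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_area = planck_length ** 2             # ~2.6e-70 m^2

# Bekenstein-Hawking entropy: one bit per 4*ln(2) Planck areas of horizon.
bits_per_square_meter = 1.0 / (4 * planck_area * math.log(2))

print(f"{bits_per_square_meter:.2e} bits/m^2")  # on the order of 10^69
```

The point of the sketch is only that the limit is set by gravity and quantum mechanics together, not by any engineering detail of our computers.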

This recasting of physics in terms of computation, often called digital physics, really got a boost from an essay by the physicist John Wheeler in 1989 titled "It from Bit." It's probably safe to say that no one has ever really understood the enigmatic Wheeler, who, if one weren't aware that he was one of the true geniuses of 20th-century physics, might be mistaken for a mystic like Fritjof Capra or, heaven forbid, a woo-woo spouting conman (here's looking at you, Deepak Chopra). The key idea of "It from Bit" is captured in this quote from that essay, one that also conveys something of Wheeler's koan-like style:

“It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.”

What distinguishes digital physics today from Wheeler's late 20th-century version is above all the fact that we live in a time when quantum computers have not only been given a solid theoretical basis but have been practically demonstrated. It is a tool born from an almost offhand observation by the brilliant Richard Feynman, who in the 1980s declared what should have been obvious: "Nature isn't classical . . . and if you want to make a simulation of Nature, you'd better make it quantum mechanical . . ."

What Feynman could not have guessed was that physics would make progress (to this point, at least) not from applying quantum computers to the problems of physics so much as from applying ideas about how quantum computation might work to the physical world itself. Leaps of imagination, such as seeing space itself as a kind of quantum error-correcting code, reformulating space-time and entropy in terms of algorithmic complexity or compression, or explaining how, as with Schrodinger's infamous cat, superpositions existing at the quantum level break down for macroscopic objects as a consequence of computational complexity. The Schrodinger equation remains effectively unsolvable for large objects so long as P ≠ NP.

There's something new here, but perhaps also something very old, an echo of Empedocles' vision of a world formed out of the eternal conflict between Love and Strife. If one could claim that for the ancient Greek philosopher life could only exist at the mid-point between total union and complete disunion, then we might assert that life can only exist in a universe with room for entropy to grow, neither curled up too small nor too dispersed for any structures to congregate: a causal diamond. In other words, life can only exist in a universe whose structure is computable.

Of course, one shouldn't leap from claiming that everything is somehow explainable from the point of view of computation to claiming that "a rock implements every finite state automaton," which, as David Chalmers pointed out, is an observation verging on the vacuous. We are not, as some in Silicon Valley would have it, living in a simulation so much as in a world that emerges from a deep and continual computation of itself. In this view one needs some idea of a layered structure to nature's computations, whereby simple computations performed at a lower level open up the space for more complex computations on a higher stage.

Digital physics, however, is mainly theoretical at this point and not without its vocal critics. In the end it may prove just another cul-de-sac rather than a viable road to the unification of physics. Only time will tell, but what may bolster it is the active development of quantum computers. Quantum theory and quantum technology, one can hope, might find themselves locked in an iterative process, each aiding the development of the other, much as the development of steam engines propelled the development of thermodynamics, from which came yet more efficient steam engines in the 19th century. Above all, quantum computers may help physicists sort out the rival interpretations of what quantum mechanics actually means, with quantum mechanics ultimately understood as a theory of information. A point made brilliantly by the science writer Philip Ball in his book Beyond Weird.

There are thus good reasons for applying the computational metaphor to the realm of physics, but what about where it really matters for my opening question, that is, when it comes to life? To begin, what is the point of connection between computation in merely physical systems and in those we deem alive, and how do they differ?

By far the best answer to these questions that I've found is the case made by John E. Mayfield in his book The Engine of Complexity: Evolution as Computation. Mayfield views both physics and biology in terms of the computation of functions. What distinguishes historical processes such as life, evolution, culture, and technology from mere physical systems is their ability to pull specific structure from the much more general space of the physically possible. A common screwdriver, for example, is at the end of the day just a particular arrangement of atoms. It's a configuration that, while completely consistent with the laws of physics, is also highly improbable, or, as the physicist Paul Davies put it in a more recent book on the same topic:

“The key distinction is that life brings about processes that are not only unlikely but impossible in any other way.”

Yet the existence of that same screwdriver is trivially understandable when explained with reference to agency. And this is just what Mayfield argues evolution and intelligence do: they create improbable yet possible structures in light of their own needs.

Fans of Richard Dawkins or students of scientific history might recognize an echo here of William Paley's famous argument for design. Stumble upon a rock and its structure calls out for little explanation; a pocket watch, with its intricate internal parts, gives clear evidence of design. Paley was nothing like today's creationists: he had both a deep appreciation for and understanding of biology, and was an amazing writer to boot. The problem is he was wrong. What Darwin showed was that a designer wasn't required for even the most complex of structures; random mutation plus selection gives us what look like miracles over sufficiently long periods of time.

Or at least evolution kind of does. The issue that most challenges the Theory of Natural Selection is the overwhelming size of the search space. When most random mutations are useless or even detrimental, how does evolution find not only structure, but the right structure, and even highly complex structure at that? It's hard to see how monkeys banging away on typewriters for the entire age of the universe get you the first pages of Hamlet, let alone Shakespeare himself.

The key, again it seems, is to apply the metaphor of computation. Evolution is a kind of search over the space of possible structures, but these are structures of a peculiar sort. What makes these useful structures peculiar is that they’re often computationally compressible. The code to express them is much shorter than the expression itself. As Jordana Cepelewicz put it for a piece at Quanta:

“Take the well-worn analogy of a monkey pressing keys on a computer at random. The chances of it typing out the first 15,000 digits of pi are absurdly slim — and those chances decrease exponentially as the desired number of digits grows.

But if the monkey’s keystrokes are instead interpreted as randomly written computer programs for generating pi, the odds of success, or “algorithmic probability,” improve dramatically. A code for generating the first 15,000 digits of pi in the programming language C, for instance, can be as short as 133 characters.”
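The same compressibility point can be made concrete with a sketch in Python rather than C (Gibbons' well-known unbounded spigot algorithm): a program of a few hundred characters that can, in principle, emit as many digits of pi as you like, a description vastly shorter than its output.

```python
def pi_digits(n):
    """Yield the first n decimal digits of pi (Gibbons' unbounded spigot)."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    produced = 0
    while produced < n:
        if 4 * q + r - t < m * t:
            yield m          # the next digit is settled; emit it
            produced += 1
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:                # not enough precision yet; fold in another term
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)

print("".join(str(d) for d in pi_digits(15)))  # 314159265358979
```

The code is the short "needle": an effectively infinite stream of digits folded into a dozen lines.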

In somewhat of an analogy to Turing's "primitives" for machine-based computation, biological evolution requires only a limited number of actions to access a potentially infinite number of end states. Organisms need a way of exploring the search space; they need a way to record/remember/copy that information, along with the ability to compress this information so that it is easier to record, store, and share. Ultimately, the same sort of computational complexity that stymies human-built systems may place limits on the types of solutions evolution itself is able to find.

The reconceptualization of evolution as a sort of computational search over the space of easily compressible possible structures is a case excellently made by Andreas Wagner in his book Arrival of the Fittest. Interestingly, Wagner draws a connection between this view of evolution and Plato’s idea of the forms. The irony, as Wagner points out, being that it was Plato’s idea of ideal types applied to biology that may have delayed the discovery of evolution in the first place. It’s not “perfect” types that evolution is searching over, for perfection isn’t something that exists in an ever changing environment, but useful structures that are computationally discoverable. Molecular and morphological solutions to the problems encountered by an organism- analogs to codes for the first 15,000 digits of pi.

Does such evolutionary search always result in the same outcome? The late paleontologist and humanist Stephen Jay Gould would have said no. Replay the tape of evolution, the way life is replayed for George Bailey in "It's a Wonderful Life," and you'd get radically different outcomes. Evolution is built upon historical contingency, like Homer Simpson wishing he hadn't killed that fish.

Yet our view of evolution has become more nuanced since Gould's untimely death in 2002. In his book Improbable Destinies, Jonathan Losos lays out the current research on the issue. Natural Selection is much more a tinkerer than an engineer. Its solutions are, by necessity, kludgey and a result of whatever is closest at hand. Evolution is constrained by the need for survival to move from next-best solution to next-best solution across the search space of adaptation, like a cautious traveler walking from stone to stone across a stream.

Starting points are thus extremely important for what Natural Selection is able to discover over a finite amount of time. That said, and contra Gould, similar physics and environmental demands do often lead to morphological convergence: think bats and birds with flight. But even when species gravitate towards a particular phenotype there is often a stunning amount of underlying genetic variety or indeterminacy. Such genetic variety within the same species can be leveraged into more than one solution to a challenging environment, with one branch getting bigger and the other smaller in response to, say, predation. Sometimes evolution at the microscopic level can, by pure chance, skip over intermediate solutions entirely and land at something close to the optimal solution thanks to what in probabilistic terms should have been a fatal mutation.

The problem encountered here might be quite different from the one mentioned above: not finding the needle of useful structure in the infinite haystack of possible ones, but avoiding finding the same small sample of needles over and over again. In other words, if evolution really is so good at convergence, why does nature appear so abundant with variety?

Maybe evolution is just plain creative- a cabinet of freaks and wonders. So long as a mutation isn’t detrimental to reproduction, even if it provides no clear gain, Natural Selection is free to give it a whirl. Why do some salamanders have only four toes? Nobody knows.

This idea that evolution needn't be merely utilitarian, but can be creative, is an argument made by the renowned mathematician Gregory Chaitin, who sees deep connections between biology, computation, and the ultimately infinite space of mathematical possibility itself. Evolutionary creativity is also found in the work of the ornithologist Richard O. Prum, whose controversial The Evolution of Beauty argues that we need to see the perception of beauty by animals in the service of reproductive or consumptive (as in bees and flowers) choice as something beyond a mere proxy measure for genetic fitness. Evolutionary creativity, propelled by sexual selection, can sometimes run in the opposite direction from fitness.

In Prum’s view evolution can be driven not by the search for fitness but by the way in which certain displays resonate with the perceivers doing the selection. In this way organisms become agents of their own evolution and the explored space of possible creatures and behaviors is vastly expanded from what would pertain were alignment to environmental conditions the sole engine of change.

If resonance with perceiving mates or pollinators acts to expand the search space of evolution in multicellular organisms with the capacity for complex perception (brains), then unicellular organisms do them one better through a kind of shared, parallel search over that space.

Whereas multicellular organisms can rely on sex as a way to expand the evolutionary search space, bacteria have no such luxury. What they have instead is an even more powerful form of search- horizontal gene transfer. Constantly swapping genes among themselves gives bacteria a sort of plug-n-play feature.

As Ed Yong brilliantly lays out in I Contain Multitudes, bacteria, in possession of nature's longest-lived and most extensive toolkit, are often used by multicellular organisms to do things such as process foods or signal mates, which means evolution doesn't have to reinvent the wheel. These bacteria aren't "stupid." In their clonal form as biofilms, bacteria not only exhibit specialized, cooperative structures, they also communicate with one another via chemical and electrical signalling: a slime internet, so to speak.

It's not just in this area of evolution-as-search that the application of the computational metaphor to biology proves fruitful. Biological viruses, which use the cellular machinery of their hosts to self-replicate, bear a striking resemblance to computer viruses that do likewise in silicon. Plants, too, have characteristics that resemble human communications networks, as in forests whose trees communicate, coordinate responses to dangers, and share resources using vast fungal webs.

In terms of agents with limited individual intelligence whose collective behavior can be considered quite sophisticated, nothing trumps the eusocial insects, such as ants and termites. In such forms of swarm intelligence, computation is analogous to what we find in cellular automata, where a complex task is tackled by breaking a problem up into numerous much smaller problems solved in parallel.
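The parallel, local character of such computation can be sketched with a one-dimensional cellular automaton (my own toy illustration, not a model of any particular insect colony): every cell updates simultaneously using only its immediate neighbors, yet global structure emerges with no central controller.

```python
def step(cells, rule=110):
    """One synchronous update of a 1-D binary cellular automaton.
    Each cell consults only itself and its two neighbors (wrapping at the
    edges); the 8-bit 'rule' number encodes the local update table."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure spread.
row = [0] * 31
row[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Rule 110, used here, is itself famously capable of universal computation, which is part of why the cellular-automaton analogy carries real weight.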

Dennis Bray, in his superb book Wetware: A Computer in Every Living Cell, has provided an extensive reading of biological function through the lens of computation and information processing. Bray meticulously details how enzyme regulation is akin to switches, their up- and down-regulation like the on/off functions of a transistor, with chains of enzymes acting like electrical circuits. He describes how proteins form signaling networks within an organism that allow cells to perform logical operations, the components linked not by wires but by molecular diffusion within and between cells. Cellular structures such as methyl groups allow cells to measure the concentration of attractants in the environment, in effect acting as counters, performing calculus.
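As a toy sketch of the idea (my own illustration, far simpler than Bray's biochemical detail), treat each protein as a threshold switch that turns on when enough upstream signal is present. Because one inhibitory switch gives you NAND, and NAND is logically universal, chains of such switches can in principle wire up any logical operation:

```python
def protein_switch(*signals, threshold=1):
    """A 'protein' activates when upstream signal exceeds its threshold."""
    return 1 if sum(signals) > threshold else 0

def nand(a, b):
    # An inhibitory interaction: active only when NOT both inputs are high.
    return 1 - protein_switch(a, b, threshold=1)

def xor(a, b):
    # NAND is universal, so even XOR can be wired from four such switches.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

Real signaling cascades are of course analog, noisy, and diffusion-limited; the sketch only shows why "cells perform logical operations" is more than a figure of speech.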

Bray thinks that it is these protein networks rather than anything that goes on in the brain that are the true analog for computer programs such as artificial neural nets. Just as neural nets are trained, the connections between nodes in a network sculpted by inputs to derive the desired output, protein networks are shaped by the environment to produce a needed biological product.

The history of artificial neural nets, the foundation of today's deep learning, is itself a fascinating study in the iterative relationship between computer science and our understanding of the brain. When Alan Turing first imagined his universal computer, it was the human brain, as then understood by the ascendant behaviorist psychology, that he used as its template.

It wasn't until 1943 that the first computational model of a neuron was proposed by Warren McCulloch and Walter Pitts, who (together with Jerome Lettvin and Humberto Maturana) would expand upon their work with a landmark 1959 paper, "What the Frog's Eye Tells the Frog's Brain." The title might sound like a snoozer, but it's certainly worth a read, as much for the philosophical leaps the authors make as for the science.

For what McCulloch and Pitts discovered in researching frog vision was that perception wasn't passive but an active form of information processing, pulling out distinct features from the environment such as "bug detectors," what they, leaning on Immanuel Kant of all people, called a physiological synthetic a priori.

It was a year earlier, in 1958, that Frank Rosenblatt superseded McCulloch and Pitts's earlier and somewhat simplistic computational model of neurons with his perceptrons, whose development would shape the rollercoaster-like future of AI. Perceptrons were in essence single-layer artificial neural networks. What made them appear promising was the fact that they could "learn." A single-layer perceptron could, for example, be trained to identify simple shapes, in a kind of analog to how the brain was discovered to be wired together by synaptic connections between neurons.

The early success of perceptrons, and as it turned out the future history of AI, was driven into a ditch when in 1969 Marvin Minsky and Seymour Papert, in their landmark book Perceptrons, showed just how limited the potential of perceptrons as a way of mimicking cognition actually was. Single-layer perceptrons could handle linearly separable functions such as AND and OR, but broke down on functions without such sharp boundaries, most famously XOR. In the 1970s AI researchers turned away from modeling the brain (the connectionists) and towards the symbolic nature of thought (the symbolists). The first AI Winter had begun.
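A minimal sketch (my own, not Rosenblatt's original hardware) makes the point concrete: the perceptron learning rule easily masters the linearly separable OR, but no setting of a single layer's weights can capture XOR.

```python
import itertools

def train_perceptron(samples, epochs=25, lr=0.1):
    """Rosenblatt's single-layer perceptron learning rule (toy version)."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # nudge weights toward the target
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

inputs = list(itertools.product([0, 1], repeat=2))
OR = [((x1, x2), x1 | x2) for x1, x2 in inputs]
XOR = [((x1, x2), x1 ^ x2) for x1, x2 in inputs]

f_or, f_xor = train_perceptron(OR), train_perceptron(XOR)
print("learned OR: ", all(f_or(x1, x2) == y for (x1, x2), y in OR))
print("learned XOR:", all(f_xor(x1, x2) == y for (x1, x2), y in XOR))
```

However many epochs you allow, the XOR run must fail: a single linear threshold cannot separate the XOR targets, which is exactly Minsky and Papert's objection.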

The key to getting around these limitations proved to be using multiple layers of perceptrons, what we now call deep learning. Though we've known this since the 1980s, it took until now, with our exponential improvement in computer hardware and accumulation of massive data sets, for the potential of perceptrons to come home.

Yet it isn't clear that most of the problems with perceptrons identified by Minsky and Papert have truly been solved. Much of what deep learning does can still be characterized as "line fitting." What's clear is that, whatever deep learning is, it bears little resemblance to how the brains of animals actually work, something well described by the authors at the end of their book:

"We think the difference in abilities comes from the fact that a brain is not a single, uniformly structured network. Instead, each brain contains hundreds of different types of machines, interconnected in specific ways which predestine that brain to become a large, diverse society of specialized agencies." (273)

The mind isn't a unified computer but an emergent property of a multitude of different computers: connected, yes, but also kept opaque from one another as a product of evolution and as the price paid for computational efficiency. It is because of this modular nature that minds remain invisible even to their possessors, although these computational layers may be stacked, with higher layers building off of the end products of the lower, possibly like the layered, unfolding nature of the universe itself. The origin of the mysteriously hierarchical structure of nature and human knowledge?

If Minsky and Papert were right about the nature of mind, if, as in an updated version of their argument recently made by Kevin Kelly, or a similar one made by the AI researcher François Chollet, there are innumerable versions of intelligence, many of which may be incapable of communicating with one another, then this has deep implications for the computational metaphor as applied to thought and life, and sets limits on how far we will be able to go in engineering, modeling, or simulating life with machines following principles laid down by Turing.

The quest of Paul Davies for a unified theory of information to explain life might be doomed from the start. Nature would be shown to utilize a multitude of computational models fit for their purposes, and not just multiple models between different organisms, but multiple models within the same organism. In the struggle between Turing universality and nature's specialized machines, Feynman's point that traditional computers just aren't up to the task of simulating the quantum world may prove just as relevant for biology as it does for physics. To bridge the gap between our digital simulations and the complexity of the real we would need analog computers that more fully model what it is we seek to understand, though isn't this what experiments are: a kind of custom-built analog computer? No amount of data or processing speed will eliminate the need for physical experiments. Bray himself says as much:

“A single leaf of grass is immeasurably more complicated than Wolfram’s entire opus. Consider the tens of millions of cells it is built from. Every cell contains billions of protein molecules. Each protein molecule in turn is a highly complex three-dimensional array of tens of thousands of atoms. And that is not the end of the story. For the number, location, and particular chemical state of each protein molecule is sensitive to its environment and recent history. By contrast, an image on a computer screen is simply a two-dimensional array of pixels generated by an iterative algorithm. Even if you allow that pixels can show multiple colors and that underlying software can embody hidden layers of processing, it is still an empty display. How could it be otherwise? To achieve complete realism, a computer representation of a leaf would require nothing less than a life-size model with details down to the atomic level and with a fully functioning set of environmental influences.” (103)

What this diversity of computational models means is not that the Church-Turing Thesis is false (Turing machines would still be capable of duplicating any other model of computation), but that its extended version is likely false. The Extended Church-Turing Thesis claims that any efficient computation can be efficiently performed by a Turing machine, yet our version of Turing machines might prove incapable of efficiently duplicating either the extended capabilities of quantum computers, gained through leverage of the deep underlying structure of reality, or the twisted structures utilized by biological computers "designed" by the multi-billion-year force of evolutionary contingency. Yet even if the Extended Church-Turing Thesis turns out to be false, the lessons derived from reflection on idealized Turing machines and their real-world derivatives will likely continue to be essential to understanding computation in both physics and biology.

Even if science gives us an indication that we are nothing but computers all the way down, from our atoms to our cells, to our very emergence as sentient entities, it’s not at all clear what this actually means. To conclude that we are, at bottom, the product of a nested system of computation is to claim that we are machines. A very special type of machine, to be sure, but machines nonetheless.

The word machine itself conjures up all kinds of ideas very far from our notion of what it means to be alive. Machines are hard, deterministic, inflexible, and precise, whereas life is notably soft, stochastic, flexible, and analog. If living beings are machines, they are also supremely intricate machines; they are, to borrow from an older analogy for life, like clocks. But maybe we've been thinking about clocks all wrong as well.

As mentioned, the idea that life is like an intricate machine, and therefore is best understood by looking at the most intricate machines humans have made, namely clocks, has been with us since the earliest days of the scientific revolution. Yet as the historian Jessica Riskin points out in her brilliant book The Restless Clock, there has from the start been a debate over what exactly the analogy captures. As Riskin lays out, starting with Leibniz there was always a minority among the materialists arguing that if life was clock-like, then what it meant to be a clock was to be "restless," stochastic and undetermined. In the view of this school, sophisticated machines such as clocks and automatons gave us a window into what it meant to be alive, which, above all, meant possessing a sort of internally generated agency.

Life, in an updated version of this view, can be understood as computation, but it's computation as performed by trillions upon trillions of interconnected, competing, cooperating soft machines, constructing the world through their senses and actions, each machine itself capable of some degree of freedom within an ever-changing environment. A living world, unlike a dead one, is a world composed of such agents, and to the extent our machines have been made sophisticated enough to possess something like agency, perhaps we should consider them alive as well.

Barbara Ehrenreich makes something like this argument in her caustic critique of American culture's denial of death, Natural Causes:

"Maybe then, our animist ancestors were on to something that we have lost sight of in the last few hundred years of rigid monotheism, science, and Enlightenment. And that is the insight that the natural world is not dead, but swarming with activity, sometimes even agency and intentionality. Even the place where you might expect to find solidity, the very heart of matter- the interior of a proton or a neutron- turns out to be flickering with ghostly quantum fluctuations. I would not suggest that the universe is "alive", since that might invite misleading biological analogies. But it is restless, quivering, and juddering from its vast patches to its tiniest crevices." (204)

If this is what it means to be a machine, then I am fine with being one. This does not, however, address the question of danger. For the temptation of such fully materialist accounts of living beings, especially humans, is that it provides a justification for treating individuals as nothing more than a collection of parts.

I think we can avoid this risk even while retaining the notion that what life consists of is deeply explained by reference to computation as long as we agree that it is only ethical when we treat a living being in light of the highest level at which it understands itself, computes itself. I may be a Matryoshka doll of molecular machines, but I understand myself as a father, son, brother, citizen, writer, and a human being. In other words we might all be properly called machines, but we are a very special kind of machine, one that should never be reduced to mere tools, living, breathing computers in possession of an emergent property that might once have been called a soul.



The Kingdom of Machines

The Book of the Machines

For anyone thinking about the future relationship between nature, man, and machines, I'd like to make the case for including an insightful piece of fiction in the canon. All of us have heard of H.G. Wells, Isaac Asimov, or Arthur C. Clarke. And many, though perhaps fewer, of us have likely heard of fiction authors from the other side of the nature/technology fence, writers like Mary Shelley, Ursula Le Guin, or, nowadays, Paolo Bacigalupi. But almost none of us have heard of Samuel Butler, much less read his most famous novel, Erewhon (pronounced with three short syllables: E-re-whon).

I should back up. Many of us who have heard of Butler and Erewhon have likely done so through George Dyson’s amazing little book Darwin Among the Machines, a book that itself deserves a top spot in the nature-man-machine canon and got its title from an essay of Butler’s that found its way into the fictional world of his Erewhon. Dyson’s 1997 book, written just as the Internet age was ramping up, tried to place the digital revolution within the longue durée of human and evolutionary history, but Butler had gotten there first. Indeed, Erewhon articulated challenges ahead of us which took almost 150 years to unfold, issues being discussed today by a set of scientists and scholars concerned over both the future of machines and the ultimate fate of our species.

A few weeks back Giulio Prisco at the IEET pointed me in the direction of an editorial placed in both the Huffington Post and the Independent by a group of scientists including Stephen Hawking, Nick Bostrom, and Max Tegmark, warning of the potential dangers emerging from technologies surrounding artificial intelligence, which are now going through an explosive period of development. As the authors of the letter note:

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

Most seem to think we are at the beginning rather than at the end of this AI revolution and see it likely unfolding into an unprecedented development; namely, machines that are just as smart if not incredibly smarter than their human creators. Should the development of AI as intelligent or more intelligent than ourselves be a concern? Giulio himself doesn’t think so, believing that advanced AIs are in a sense both our children and our evolutionary destiny. The scientists and scholars behind the letter to HuffPost and The Independent, however, are very concerned. As they put it:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

And again:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In a probing article about the existential risks posed by artificial intelligence, and more so the larger public’s indifference to them, James Hamblin writes of the theoretical physicist and Nobel laureate Frank Wilczek, another of the figures trying to promote serious public awareness of the risks posed by artificial intelligence. Wilczek thinks we will face existential risks from intelligent machines not over the long haul of human history but in a very short amount of time.

It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.

To be honest, it’s quite hard to take these scientists and thinkers seriously, perhaps because I’ve been so desensitized by Hollywood dystopias starring killer robots. But when your Noahs are some of the smartest guys on the planet, it’s probably a good idea, if not to actually start building your boat, to at least be able to locate, and beat a path to if necessary, higher ground.

What is the scale of the risk we might be facing? Here’s physicist Max Tegmark from Hamblin’s piece:

Well, putting it in the day-to-day is easy. Imagine the planet 50 years from now with no people on it. I think most people wouldn’t be too psyched about that. And there’s nothing magic about the number 50. Some people think 10, some people think 200, but it’s a very concrete concern.

If there’s a potential monster somewhere on your path it’s always good to have some idea of its actual shape, otherwise you find yourself jumping and lunging at harmless shadows. How should we think about potentially humanity-threatening AIs? The first thing is that we need to be clear about what is happening. For all the hype, the AI sceptics are probably right: we are nowhere near replicating the full panoply of our biological intelligence in a machine. Yet this fact should probably increase rather than decrease our concern regarding the potential threat from “super-intelligent” AIs. However far they remain from the full complexity of human intelligence, on account of their much greater speed and memory such machines already largely run something as essential and potentially destructive as our financial markets; the military is already discussing the deployment of autonomous weapons systems; and the domains over which the decisions of AIs hold sway are only likely to increase over time. One need only imagine one or more such systems going rogue, doing what may amount to highly destructive things their creators or programmers did not imagine or intend, to comprehend the risk. Such a rogue system would be more akin to a killer storm than to Lex Luthor, and thus Nick Bostrom is probably on to something profound when he suggests that one of the worst things we could do would be to anthropomorphize the potential dangers. Telling Ross Anderson over at Aeon:

You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.

The crazy thing is that Erewhon, a novel published in 1872, clearly stated almost all of these dangers and perspectives. If these fears prove justified it will be Samuel Butler rather than Friedrich Nietzsche who will have been the great prophet to emerge from the 19th century.

Erewhon was released right on the eve of what’s called the second industrial revolution, which lasted from the 1870s to World War I. Here you get mass rail and steamships, gigantic factories producing iron and steel, and the birth of electrification. It is the age romanticized and reimagined today by the cultural movement of steampunk.

The novel appeared 13 years after the publication of Charles Darwin’s The Origin of Species and, even more importantly, one year after Darwin’s even more revolutionary The Descent of Man. As we learn in Dyson’s book, Butler and Darwin were engaged in a long-running intellectual feud, though the issue wasn’t evolution itself, but Butler’s accusation that Charles Darwin had essentially ripped his theory from his own grandfather, Erasmus Darwin, without giving him any of the credit. Be that as it may,

Erewhon was never intended, as many thought at the time, as a satire of Darwinism. It was Darwinian to its core, and tried to apply the lessons of the worldview made apparent by the theory of evolution to the innovatively exploding world of machines Butler saw all around him.

The narrator in Erewhon, a man named Higgs, is out exploring the undiscovered valleys of a thinly disguised New Zealand, or some equivalent, looking for a large virgin territory in which to raise sheep. In the process he discovers an unknown community, the nation of Erewhon, hidden in its deep unexplored valleys. The people there are not primitive, but exist at roughly the level of a European village before the industrial revolution. Or as Higgs observes of them in a quip pregnant with meaning: “…savages do not make bridges.” What they find most interesting about the newcomer Higgs is, of all things, his pocket watch.

But by and by they came to my watch which I had hidden away in the inmost pocket that I had, and had forgotten when they began their search. They seemed concerned and uneasy as soon as they got hold of it. They then made me open it and show the works, and as soon as I had done so they gave signs of very grave displeasure which disturbed me all the more because I could not conceive wherein it could have offended them. (58)

One needs to know a little of the history of the idea of evolution to realize how deliciously clever this narrative use of a watch by Butler is. He’s jumping off of William Paley’s argument for intelligent design found in Paley’s 1802 Natural Theology or Evidences of the Existence and Attributes of the Deity, about which it has been said:

It is a book I greatly admire for in its own time the book succeeded in doing what I am struggling to do now. He had a point to make, he passionately believed in it, and he spared no effort to ram it home clearly. He had a proper reverence for the complexity of the living world, and saw that it demands a very special kind of explanation. (4)


The quote above is Richard Dawkins in his The Blind Watchmaker, where he recovered and popularized Paley’s analogy of the stumbled-upon watch as evidence that the stunningly complex world of life around us must have been designed.

Paley begins his Natural Theology with an imagined scenario in which a complex machine, a watch, hitherto unknown by its discoverer, leads to speculation about its origins.

In crossing a heath suppose I pitched my foot against a stone and were asked how the stone came to be there. I might possibly answer that for any thing I knew to the contrary it had lain there for ever nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground and it should be inquired how the watch happened to be in that place. I should hardly think of the answer which I had before given that for any thing I knew the watch might have always been there. Yet why should not this answer serve for the watch as well as for the stone? Why is it not as admissible in the second case as in the first?  (1)

After investigating the intricacies of the watch, how all of its pieces seem to fit together perfectly and have precise and definable functions, Paley thinks that the:

….inference….is inevitable that the watch must have had a maker that there must have existed at some time and at some place or other an artificer or artificers who formed it for the purpose which we find it actually to answer who comprehended its construction and designed its use. (3-4)

Paley thinks we should infer a designer even if we discover the watch is capable of making copies of itself:

Contrivance must have had a contriver design a designer whether the machine immediately proceeded from another machine or not. (12)

This is creationism, but as even Dawkins admits, it is an eloquent and sophisticated creationism.

Darwin, whichever one got there first, overthrew this need for an engineer as the ultimate source of complexity by replacing a conscious designer with a simple process through which the most intricate of complex entities could emerge over time: in the younger’s case, natural selection.


The brilliance of Samuel Butler in Erewhon was to apply this evolutionary emergence of complexity not just to living things, but to the machines we believe ourselves to have engineered. Perhaps the better assumption to make when we encounter anything of sufficient complexity is that to reach such complexity it must have evolved over time. Higgs says of the magistrate of Erewhon, obsessed with the narrator’s pocket watch, that he had:

….a look of horror and dismay… a look which conveyed to me the impression that he regarded my watch not as having been designed but rather as the designer of himself and of the universe or as at any rate one of the great first causes of all things  (58)

What Higgs soon discovers, however, is that:

….I had misinterpreted the expression on the magistrate’s face and that it was one not of fear but hatred.  (58)

Erewhon is a civilization where something like the Luddites of the early 19th century have won. The machines have been smashed, dismantled, turned into museum pieces.

Civilization has been reset back to the pre-industrial era, the knowledge of how to get out of this era and back to the age of machines has been erased, and education has been restructured to strangle scientific curiosity and advancement in the cradle.

All this happened in Erewhon because of a book, one that looks precisely like an essay the real-world Butler had published not long before his novel, “Darwin Among the Machines,” from which George Dyson got his title. In Erewhon it is called simply “The Book of the Machines”.

It is sheer hubris, we read in “The Book of the Machines”, to think that evolution, having played itself out over so many billions of years in the past and likely to play itself out for even longer in the future, has reached its pinnacle in us. And why should we believe there could not be such a thing as “post-biological” forms of life:

…surely when we reflect upon the manifold phases of life and consciousness which have been evolved already it would be a rash thing to say that no others can be developed and that animal life is the end of all things. There was a time when fire was the end of all things another when rocks and water were so. (189)

In the “Book of the Machines” we see the same anxiety about the rapid progress of technology that we find in those warning of the existential dangers we might face within only the next several decades.

The more highly organised machines are creatures not so much of yesterday as of the last five minutes so to speak in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years, see what strides machines have made in the last thousand. May not the world last twenty million years longer. If so what will they not in the end become?  (189-190)

Butler, even in the 1870s, was well aware of the amazing unconscious intelligence of evolution. The cleverest of species use other species for their own reproductive ends. The stars of this show are the flowers, a world on display in Louie Schwartzberg’s stunning documentary Wings of Life, which shows how much of the beauty of our world is the product of this bridging between species as a means of reproduction. Yet flowers are just the most visually prominent example.

Our machines might be said to be like flowers: unable to reproduce on their own, but with an extremely effective reproductive vehicle in human beings. This, at least, is what Butler speculated in Erewhon, again in the section “The Book of the Machines”:

No one expects that all the features of the now existing organisations will be absolutely repeated in an entirely new class of life. The reproductive system of animals differs widely from that of plants but both are reproductive systems. Has nature exhausted her phases of this power? Surely if a machine is able to reproduce another machine systematically we may say that it has a reproductive system. What is a reproductive system if it be not a system for reproduction? And how few of the machines are there which have not been produced systematically by other machines? But it is man that makes them do so. Yes, but is it not insects that make many of the plants reproductive and would not whole families of plants die out if their fertilisation were not effected by a class of agents utterly foreign to themselves? Does any one say that the red clover has no reproductive system because the humble bee and the humble bee only must aid and abet it before it can reproduce? (204)

Reproduction is only one way one species or kingdom can use another. There is also their use as a survival vehicle itself. Our increasing understanding of the human microbiome almost leads one to wonder whether our whole biology, and everything that has grown up around it, has all this time merely been serving as an efficient vehicle for the real show: the millions of microbes living in our guts. Or, as Butler says in what is the most quoted section of Erewhon:

Who shall say that a man does see or hear? He is such a hive and swarm of parasites that it is doubtful whether his body is not more theirs than his and whether he is anything but another kind of ant heap after all. Might not man himself become a sort of parasite upon the machines. An affectionate machine tickling aphid. (196)

Yet, one might suggest that perhaps we shouldn’t separate ourselves from our machines. Perhaps we have always been transhuman in the sense that from our beginnings we have used our technology to extend our own reach. Butler well understood this argument that technology was an extension of the human self.

A machine is merely a supplementary limb this is the be all and end all of machinery We do not use our own limbs other than as machines and a leg is only a much better wooden leg than any one can manufacture. Observe a man digging with a spade his right forearm has become artificially lengthened and his hand has become a joint.

In fact machines are to be regarded as the mode of development by which human organism is now especially advancing every past invention being an addition to the resources of the human body. (219)

Even the Luddites of Erewhon understood that to lose all of our technology would be to lose our humanity, so intimately woven were our two fates. The solution to the dangers of a new and rival kingdom of animated beings arising from machines was to deliberately wind back the clock of technological development to before the time fossil fuels had freed machines from the constraints of animal, human, and cyclical power to before machines had become animate in the way life itself was animate.

Still, Butler knew it would be almost impossible to make the solution of Erewhon our solution. Given our numbers, to retreat from the world of machines would unleash death and anarchy such as the world has never seen:

The misery is that man has been blind so long already. In his reliance upon the use of steam he has been betrayed into increasing and multiplying. To withdraw steam power suddenly will not have the effect of reducing us to the state in which we were before its introduction there will be a general breakup and time of anarchy such as has never been known it will be as though our population were suddenly doubled with no additional means of feeding the increased number. The air we breathe is hardly more necessary for our animal life than the use of any machine on the strength of which we have increased our numbers is to our civilisation it is the machines which act upon man and make him man as much as man who has acted upon and made the machines but we must choose between the alternative of undergoing much present suffering or seeing ourselves gradually superseded by our own creatures till we rank no higher in comparison with them than the beasts of the field with ourselves. (215-216)

Therein lie the horns of our current dilemma. The one way we might save biological life from the risk of the new kingdom of machines would be to dismantle our creations and retreat from technological civilization. It is a choice we cannot make, both because of its human toll and for the sake of the long-term survival of the genealogy of earthly life. Butler understood the former, but did not grasp the latter. For the only way the legacy of life on earth, a legacy that has lasted for 3 billion years and which everything living on our world shares, will survive the inevitable life cycle of our sun, which will boil away our planet’s life-sustaining water in less than a billion years and consume the earth itself a few billion years after that, is for technological civilization to survive long enough to provide earthly life with a new home.

I am not usually one for giving humankind a large role to play in cosmic history, but there is at least a chance that something like Peter Ward’s Medea Hypothesis is correct: that given the thoughtless nature of evolution’s imperative to reproduce at all costs, life ultimately destroys its own children, like the murderous mother of Euripides’ play.

As Ward points out, it is the bacteria that have almost destroyed life on earth, and more than once, by mindlessly transforming its atmosphere and poisoning its oceans. This is perhaps the risk behind us, one that Bostrom thinks might explain our cosmic loneliness: complex life never gets very far before bacteria kill it off. Still, perhaps we are caught in a sort of pincer of past and future: our technological civilization, its capitalist economic system, and the AIs that might come to sit at the apex of this world may ultimately be as absent of thought as bacteria, and somehow the source of both biological life’s destruction and their own.

Our luck has been to avoid a Medea-ian fate in our past. If we can keep our wits about us we might be able to avoid a Medea-ian fate in our future as well, only this one brought to us by the kingdom of machines we have created. If most of life in the universe really has been trapped at the cellular level by something like the Medea Hypothesis, and we actually are able to survive over the long haul, then our cosmic task might be to purpose the kingdom of machines to spread and cultivate higher-order life as far and as deeply into space as we can reach. Machines, intelligent or not, might be the interstellar pollinators of biological life. It is we the living who are the flowers.

The task we face over both the short and the long term is to somehow negotiate the rivalry not only of the two worlds that have dominated our existence so far, the natural world and the human world, but also of a third: an increasingly complex and distinct world of machines. Getting that task right is something that will require an enormous amount of wisdom on our part, but part of the trick might lie in treating machines in some way as we already, when approaching the question rightly, treat nature: containing its destructiveness, preserving its diversity and beauty, and harnessing its powers not just for the good of ourselves but for all of life and the future.

As I mentioned last time, the more sentient our machines become, the more we will need to look to our own world, the world of human rights, and, for animals similar to ourselves, animal rights, for guidance on how to treat our machines, balancing these rights with the well-being of our fellow human beings.

The nightmare scenario is that instead of carefully cultivating the emergence of the kingdom of machines to serve as a shelter for both biological life and those things we human beings most value, we will instead keep machines tied to the dream of the endless accumulation of capital, or as Douglas Rushkoff recently put it:

When you look at the marriage of Google and Kurzweil, what is that? It’s the marriage of digital technology’s infinite expansion with the idea that the market is somehow going to infinitely expand.

A point that was never better articulated or more nakedly expressed than by the imperialist businessman Cecil Rhodes in the 19th century:

To think of these stars that you see overhead at night, these vast worlds which we can never reach. I would annex the planets if I could.

If that happens, a kingdom of machines freed from any dependence on biological life or any reflection of human values might eat the world of the living until the earth is nothing but a valley of our bones.


That a whole new order of animated beings might emerge from what was essentially human weakness vis-à-vis other animals is yet another one of the universe’s wonders, but it is a wonder that needs to go right in order to become a miracle, or even to avoid becoming a nightmare. Butler invented almost all of our questions in this regard, and in speculating in Erewhon about how humans might unravel their dependence on machines he raised another interesting question as well: namely, whether the very free will we use to distinguish the human world from the worlds of animals and machines actually exists. He also pointed to an alternative future where the key event would not be the emergence of the kingdom of machines but the full harnessing and control of the powers of biological life by human beings, questions I’ll look at sometime soon.