The Kingdom of Machines

The Book of the Machines

For anyone thinking about the future relationship between nature, man, and machines, I’d like to make the case for adding an insightful piece of fiction to the canon. All of us have heard of H.G. Wells, Isaac Asimov or Arthur C. Clarke. And many, though perhaps fewer, of us have likely heard of fiction authors from the other side of the nature/technology fence, writers like Mary Shelley, or Ursula Le Guin, or nowadays, Paolo Bacigalupi, but certainly almost none of us have heard of Samuel Butler, or better, read his most famous novel Erewhon (pronounced with three short syllables: E-re-whon).

I should back up. Many of us who have heard of Butler and Erewhon have likely done so through George Dyson’s amazing little book Darwin Among the Machines, a book that itself deserves a top spot in the nature-man-machine canon and got its title from an essay of Butler’s that found its way into the fictional world of his Erewhon. Dyson’s 1997 book, written just as the Internet age was ramping up, tried to place the digital revolution within the longue durée of human and evolutionary history, but Butler had gotten there first. Indeed, Erewhon articulated challenges that took almost 150 years to unfold, issues being discussed today by a set of scientists and scholars concerned over both the future of machines and the ultimate fate of our species.

A few weeks back Giulio Prisco at the IEET pointed me in the direction of an editorial placed in both the Huffington Post and the Independent by a group of scientists including Stephen Hawking, Nick Bostrom and Max Tegmark, warning of the potential dangers emerging from artificial intelligence technologies that are now going through an explosive period of development. As the authors of the letter note:

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

Most seem to think we are at the beginning rather than at the end of this AI revolution and see it likely unfolding into an unprecedented development; namely, machines that are just as smart as, if not incredibly smarter than, their human creators. Should the development of AI as intelligent or more intelligent than ourselves be a concern? Giulio himself doesn’t think so, believing that advanced AIs are in a sense both our children and our evolutionary destiny. The scientists and scholars behind the letter to HuffPost and The Independent, however, are very concerned. As they put it:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

And again:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In a probing article about the existential risks posed by artificial intelligence, and even more so about the larger public’s indifference to them, James Hamblin writes of the theoretical physicist and Nobel laureate Frank Wilczek, another of the figures trying to promote serious public awareness of the risks posed by artificial intelligence. Wilczek thinks we will face existential risks from intelligent machines not over the long haul of human history but in a very short amount of time.

It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.

To be honest, it’s quite hard to take these scientists and thinkers seriously, perhaps because I’ve been so desensitized by Hollywood dystopias starring killer robots. But when your Noahs are some of the smartest guys on the planet, it’s probably a good idea, if not to actually start building your boat, then at least to locate higher ground and be ready to beat a path to it if necessary.

What is the scale of the risk we might be facing? Here’s physicist Max Tegmark from Hamblin’s piece:

Well, putting it in the day-to-day is easy. Imagine the planet 50 years from now with no people on it. I think most people wouldn’t be too psyched about that. And there’s nothing magic about the number 50. Some people think 10, some people think 200, but it’s a very concrete concern.

If there’s a potential monster somewhere on your path it’s always good to have some idea of its actual shape, otherwise you find yourself jumping and lunging at harmless shadows. How should we think about potentially humanity-threatening AIs? The first thing is that we need to be clear about what is happening. For all the hype, the AI sceptics are probably right: we are nowhere near replicating the full panoply of our biological intelligence in a machine. Yet this fact should probably increase rather than decrease our concern regarding the potential threat from “super-intelligent” AIs. However far they remain from the full complexity of human intelligence, on account of their much greater speed and memory such machines already largely run something as essential and potentially destructive as our financial markets, the military is already discussing the deployment of autonomous weapons systems, and the domains over which the decisions of AIs hold sway are only likely to increase over time. One need only imagine one or more such systems going rogue and doing highly destructive things its creators or programmers did not imagine or intend to comprehend the risk. Such a rogue system would be more akin to a killer storm than to Lex Luthor, and thus Nick Bostrom is probably on to something profound when he suggests that one of the worst things we could do would be to anthropomorphize the potential dangers. As he told Ross Anderson over at Aeon:

You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.

The crazy thing is that Erewhon, a novel published in 1872, clearly stated almost all of these dangers and perspectives. If these fears prove justified it will be Samuel Butler rather than Friedrich Nietzsche who will have been the great prophet to emerge from the 19th century.

Erewhon was released right on the eve of what’s called the second industrial revolution, which lasted from the 1870s to World War I. Here you get mass rail and steamships, gigantic factories producing iron and steel, and the birth of electrification. It is the age romanticized and reimagined today by the cultural movement of steampunk.

The novel appeared 13 years after the publication of Charles Darwin’s The Origin of Species and, even more importantly, one year after Darwin’s even more revolutionary The Descent of Man. As we learn in Dyson’s book, Butler and Darwin were engaged in a long-lasting intellectual feud, though the issue wasn’t evolution itself, but Butler’s accusation that Charles Darwin had essentially ripped his theory from his grandfather Erasmus Darwin without giving him any of the credit.

Be that as it may, Erewhon was never intended, as many thought at the time, as a satire of Darwinism, but was Darwinian to its core, and tried to apply the lessons of the worldview made apparent by the theory of evolution to the explosively innovating world of machines Butler saw all around him.

The narrator in Erewhon, a man named Higgs, is out exploring the undiscovered valleys of a thinly disguised New Zealand or some equivalent, looking for a large virgin territory in which to raise sheep. In the process he discovers an unknown community, the nation of Erewhon, hidden in its deep unexplored valleys. The people there are not primitive, but exist at roughly the level of a European village before the industrial revolution. Or as Higgs observes of them in a quip pregnant with meaning: “…savages do not make bridges.” What they find most interesting about the newcomer Higgs is, of all things, his pocket watch.

But by and by they came to my watch which I had hidden away in the pocket that I had and had forgotten when they began their search. They seemed concerned and uneasy the moment that they got hold of it. They made me open it and show the works and as soon as I had done so they gave signs of very grave displeasure which disturbed me all the more because I could not conceive wherein it could have offended them.  (58)

One needs to know a little of the history of the idea of evolution to realize how deliciously clever this narrative use of a watch by Butler is. He’s jumping off of William Paley’s argument for intelligent design found in Paley’s 1802 Natural Theology or Evidences of the Existence and Attributes of the Deity, about which it has been said:

It is a book I greatly admire for in its own time the book succeeded in doing what I am struggling to do now. He had a point to make, he passionately believed in it, and he spared no effort to ram it home clearly. He had a proper reverence for the complexity of the living world, and saw that it demands a very special kind of explanation. (4)


The quote above is Richard Dawkins talking in his The Blind Watchmaker, where he recovered and popularized Paley’s analogy of the stumbled-upon watch as evidence that the stunningly complex world of life around us must have been designed.

Paley begins his Natural Theology with an imagined scenario where a complex machine, a watch, hitherto unknown to its discoverer, leads to speculation about its origins.

In crossing a heath suppose I pitched my foot against a stone and were asked how the stone came to be there. I might possibly answer that for any thing I knew to the contrary it had lain there for ever nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground and it should be inquired how the watch happened to be in that place. I should hardly think of the answer which I had before given that for any thing I knew the watch might have always been there. Yet why should not this answer serve for the watch as well as for the stone? Why is it not as admissible in the second case as in the first?  (1)

After investigating the intricacies of the watch, how all of its pieces seem to fit together perfectly and have precise and definable functions, Paley thinks that the:

….inference….is inevitable that the watch must have had a maker that there must have existed at some time and at some place or other an artificer or artificers who formed it for the purpose which we find it actually to answer who comprehended its construction and designed its use. (3-4)

Paley thinks we should infer a designer even if we discover the watch is capable of making copies of itself:

Contrivance must have had a contriver; design, a designer; whether the machine immediately proceeded from another machine or not. (12)

This is creationism, but as even Dawkins admits, it is an eloquent and sophisticated creationism.

Darwin, whichever one got there first, overthrew this need for an engineer as the ultimate source of complexity by replacing a conscious designer with a simple process through which the most intricate of complex entities could emerge over time: in the younger Darwin’s case, natural selection.


The brilliance of Samuel Butler in Erewhon was to apply this evolutionary emergence of complexity not just to living things, but to the machines we believe ourselves to have engineered. Perhaps the better assumption, when we encounter anything of sufficient complexity, is that it must have evolved over time. Higgs says of the magistrate of Erewhon obsessed with the narrator’s pocket watch that he had:

….a look of horror and dismay… a look which conveyed to me the impression that he regarded my watch not as having been designed but rather as the designer of himself and of the universe or as at any rate one of the great first causes of all things  (58)

What Higgs soon discovers, however, is that:

….I had misinterpreted the expression on the magistrate’s face and that it was one not of fear but hatred.  (58)

Erewhon is a civilization where something like the Luddites of the early 19th century have won. The machines have been smashed, dismantled, turned into museum pieces.

Civilization has been reset back to the pre-industrial era, the knowledge of how to get out of this era and back to the age of machines has been erased, and education has been restructured to strangle scientific curiosity and advancement in the cradle.

All this happened in Erewhon because of a book. It is a book that looks precisely like an essay the real-world Butler had published not long before his novel, Darwin among the Machines, from which George Dyson took his title. In Erewhon it is called simply “The Book of the Machines”.

It is sheer hubris, we read in “The Book of the Machines”, to think that we are the creatures who have reached the pinnacle of evolution, a process that has played itself out over so many billions of years in the past and is likely to play itself out for even longer in the future. And why should we believe there could not be such a thing as “post-biological” forms of life:

…surely when we reflect upon the manifold phases of life and consciousness which have been evolved already it would be a rash thing to say that no others can be developed and that animal life is the end of all things. There was a time when fire was the end of all things another when rocks and water were so. (189)

In the “Book of the Machines” we see the same anxiety about the rapid progress of technology that we find in those warning of the existential dangers we might face within only the next several decades.

The more highly organised machines are creatures not so much of yesterday as of the last five minutes so to speak in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years, see what strides machines have made in the last thousand. May not the world last twenty million years longer? If so what will they not in the end become?  (189-190)

Butler, even in the 1870s, was well aware of the amazing unconscious intelligence of evolution. The cleverest of species use other species for their own reproductive ends. The stars of this show are the flowers, a world on display in Louie Schwartzberg’s stunning documentary Wings of Life, which shows how much of the beauty of our world is the product of this bridging between species as a means of reproduction. Yet flowers are just the most visually prominent example.

Our machines might be said to be like flowers: unable to reproduce on their own, but with an extremely effective reproductive vehicle in human beings. This, at least, is what Butler speculated in Erewhon, again in the section “The Book of the Machines”:

No one expects that all the features of the now existing organisations will be absolutely repeated in an entirely new class of life. The reproductive system of animals differs widely from that of plants but both are reproductive systems. Has nature exhausted her phases of this power? Surely if a machine is able to reproduce another machine systematically we may say that it has a reproductive system. What is a reproductive system if it be not a system for reproduction? And how few of the machines are there which have not been produced systematically by other machines? But it is man that makes them do so. Yes, but is it not insects that make many of the plants reproductive and would not whole families of plants die out if their fertilisation were not effected by a class of agents utterly foreign to themselves? Does any one say that the red clover has no reproductive system because the humble bee and the humble bee only must aid and abet it before it can reproduce? (204)

Reproduction is only one way one species or kingdom can use another. There is also their use as a survival vehicle itself. Our increasing understanding of the human microbiome almost leads one to wonder whether our whole biology, and everything that has grown up around it, has all this time merely been serving as an efficient vehicle for the real show: the trillions of microbes living in our guts. Or, as Butler says in what is the most quoted section of Erewhon:

Who shall say that a man does see or hear? He is such a hive and swarm of parasites that it is doubtful whether his body is not more theirs than his and whether he is anything but another kind of ant heap after all. Might not man himself become a sort of parasite upon the machines? An affectionate machine-tickling aphid. (196)

Yet, one might suggest that perhaps we shouldn’t separate ourselves from our machines. Perhaps we have always been transhuman in the sense that from our beginnings we have used our technology to extend our own reach. Butler well understood this argument that technology was an extension of the human self.

A machine is merely a supplementary limb; this is the be all and end all of machinery. We do not use our own limbs other than as machines; and a leg is only a much better wooden leg than any one can manufacture. Observe a man digging with a spade; his right forearm has become artificially lengthened and his hand has become a joint.

In fact machines are to be regarded as the mode of development by which human organism is now especially advancing, every past invention being an addition to the resources of the human body. (219)

Even the Luddites of Erewhon understood that to lose all of our technology would be to lose our humanity, so intimately woven were our two fates. The solution to the dangers of a new and rival kingdom of animated beings arising from machines was to deliberately wind back the clock of technological development: back to before fossil fuels had freed machines from the constraints of animal, human, and cyclical power, back to before machines had become animate in the way life itself is animate.

Still, Butler knew it would be almost impossible to make the solution of Erewhon our solution. Given our numbers, to retreat from the world of machines would unleash death and anarchy such as the world has never seen:

The misery is that man has been blind so long already. In his reliance upon the use of steam he has been betrayed into increasing and multiplying. To withdraw steam power suddenly will not have the effect of reducing us to the state in which we were before its introduction; there will be a general breakup and time of anarchy such as has never been known; it will be as though our population were suddenly doubled, with no additional means of feeding the increased number. The air we breathe is hardly more necessary for our animal life than the use of any machine, on the strength of which we have increased our numbers, is to our civilisation; it is the machines which act upon man and make him man, as much as man who has acted upon and made the machines; but we must choose between the alternative of undergoing much present suffering or seeing ourselves gradually superseded by our own creatures, till we rank no higher in comparison with them than the beasts of the field with ourselves. (215-216)

Therein lie the horns of our current dilemma. The one way we might save biological life from the risk of the new kingdom of machines would be to dismantle our creations and retreat from technological civilization. It is a choice we cannot make, both because of its human toll and for the sake of the long-term survival of the genealogy of earthly life. Butler understood the former, but did not grasp the latter. For the only way the legacy of life on earth, a legacy that has lasted for 3 billion years and which everything living on our world shares, will survive the inevitable life cycle of our sun, which will boil away our planet’s life-sustaining water in less than a billion years and consume the earth itself a few billion years after that, is for technological civilization to survive long enough to provide earthly life with a new home.

I am not usually one for giving humankind a large role to play in cosmic history, but there is at least a chance that something like Peter Ward’s Medea Hypothesis is correct: that given the thoughtless nature of evolution’s imperative to reproduce at all costs, life ultimately destroys its own children like the murderous mother of Euripides’ play.

As Ward points out, it is the bacteria that have almost destroyed life on earth, and more than once, by mindlessly transforming its atmosphere and poisoning its oceans. This is perhaps the risk behind us, the one Bostrom thinks might explain our cosmic loneliness: complex life never gets very far before bacteria kill it off. Still, perhaps we are in a sort of pincer of past and future, with our technological civilization, its capitalist economic system, and the AIs that might come to sit at the apex of this world ultimately as absent of thought as bacteria, and somehow the source of both biological life’s destruction and their own.

Our luck has been to avoid a Medean fate in our past. If we can keep our wits about us we might be able to avoid a Medean fate in our future as well, only this one brought to us by the kingdom of machines we have created. If most of life in the universe really has been trapped at the cellular level by something like the Medea Hypothesis, and we actually are able to survive over the long haul, then our cosmic task might be to purpose the kingdom of machines to spread and cultivate higher-order life as far and as deeply as we can reach into space. Machines, intelligent or not, might be the interstellar pollinators of biological life. It is we the living who are the flowers.

The task we face over both the short and the long term is to somehow negotiate the rivalry not only of the two worlds that have dominated our existence so far, the natural world and the human world, but of a third, an increasingly complex and distinct world of machines. Getting that task right will require an enormous amount of wisdom on our part, but part of the trick might lie in treating machines in some way as we already, when approaching the question rightly, treat nature: containing its destructiveness, preserving its diversity and beauty, and harnessing its powers not just for the good of ourselves but for all of life and the future.

As I mentioned last time, the more sentient our machines become the more we will need to look to our own world, the world of human rights, and, for animals similar to ourselves, animal rights, for guidance on how to treat our machines, balancing these rights with the well-being of our fellow human beings.

The nightmare scenario is that instead of our carefully cultivating the emergence of the kingdom of machines to serve as a shelter for both biological life and those things we human beings most value, they will instead continue to be tied to the dream of the endless accumulation of capital, or as Douglas Rushkoff recently stated it:

When you look at the marriage of Google and Kurzweil, what is that? It’s the marriage of digital technology’s infinite expansion with the idea that the market is somehow going to infinitely expand.

A point that was never better articulated or more nakedly expressed than by the imperialist businessman Cecil Rhodes in the 19th century:

To think of these stars that you see overhead at night, these vast worlds which we can never reach. I would annex the planets if I could.

If that happens, a kingdom of machines freed from any dependence on biological life or any reflection of human values might eat the world of the living until the earth is nothing but a valley of our bones.


That a whole new order of animated beings might emerge from what was essentially human weakness vis-à-vis other animals is yet another of the universe’s wonders, but it is a wonder that needs to go right in order to become a miracle, or even to keep from becoming a nightmare. Butler invented almost all of our questions in this regard, and in speculating in Erewhon about how humans might unravel their dependence on machines he raised another interesting question as well; namely, whether the very free will we use to distinguish the human world from the worlds of animals and machines actually exists. He also pointed to an alternative future where the key event would not be the emergence of the kingdom of machines but the full harnessing and control of the powers of biological life by human beings, questions I’ll look at sometime soon.






The Ethics of a Simulated Universe


This year one of the more thought-provoking thought experiments to appear in recent memory has its tenth anniversary. Nick Bostrom’s paper in the Philosophical Quarterly, “Are You Living in a Computer Simulation?”, might have sounded like the types of conversations we all had after leaving the theater having seen The Matrix, but Bostrom’s attempt was serious. (There is a great recent video of Bostrom discussing his argument at the IEET). What he did in his paper was create a formal argument around the seemingly fanciful question of whether or not we were living in a simulated world. Here is how he stated it:

This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

To state his case in everyday language: any technological civilization whose progress did not stop at some point would gain the capacity to realistically simulate whole worlds, including the individual minds that inhabit them; that is, it could run realistic ancestor simulations. Technologically advanced civilizations therefore either die out before gaining the capacity to produce realistic ancestor simulations, or something stops such technologically mature civilizations from running such simulations in large numbers, or we ourselves are most likely living in such a simulation, because there would then be a great many more simulated worlds than real ones.
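The rough arithmetic behind that last step is worth making explicit. Paraphrasing the bookkeeping in Bostrom’s paper rather than quoting it (so treat the symbols below as my own shorthand): let \(f_P\) be the fraction of human-level civilizations that ever reach a posthuman stage, and \(\bar{N}\) the average number of ancestor simulations such a civilization runs, each containing roughly as many minds as actually lived. The fraction of all human-like observers who are simulated then comes out to approximately

\[ f_{\text{sim}} \approx \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1} \]

so unless \(f_P\) or \(\bar{N}\) is driven close to zero, which correspond to propositions (1) and (2), \(f_{\text{sim}}\) sits close to one, which is proposition (3).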

There is a lot in that argument to digest and a number of underlying assumptions that might be explored or challenged, but I want to look at just one of them, #2. That is, I will make the case that there may be very good reasons, both ethical and scientific, why technological civilizations would both prohibit and be largely uninterested in creating realistic ancestor simulations. Bostrom himself discusses the possibility that ethical constraints might prevent technologically mature civilizations from creating realistic ancestor simulations. He writes:

One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

I think the issue of “suffering that is inflicted on the inhabitants of a simulation” may be more serious than Bostrom appears to believe. Any civilization that has reached the stage where it can create realistic worlds containing fully conscious human beings will almost certainly have escaped the two conditions that haunt the human condition in its current form, namely pain and death. The creation of realistic ancestor simulations would bring these two horrors back into existence and thus might well be considered not merely unethical but perhaps even evil. Were our world actually such a simulation, it would confront us with questions that once went by the name of theodicy, namely, the attempt to reconcile the assumed goodness of the creator (in our case the simulator) with the natural, moral, and metaphysical evil that exists in the world.

Questions as to why there is, or the morality of there being, such a wide disjunction between the conditions of any imaginable creator/simulator and those of the created/simulated are a road we’ve been down before, as the philosopher Susan Neiman so brilliantly showed us in her Evil in Modern Thought: An Alternative History of Philosophy. There, Neiman points out that the question of theodicy is a nearly invisible current that swept through the modern age. As a serious topic of thought in this period it began as an underlying assumption behind the scientific revolution, with thinkers such as Leibniz arguing in his Essays on the Goodness of God, the Freedom of Man and the Origin of Evil that “ours is the best of all possible worlds”. Leibniz, having invented calculus and envisioned something like what we would understand as a modern computer, was no dope, so one might wonder how such a genius could ever believe in anything so seemingly contradictory to actual human experience.

What one needs to remember in order to understand this is that the early giants of the scientific revolution, Newton, Descartes, Leibniz, Bacon, weren’t out to replace God but to understand what was thought of as his natural order. Given how stunningly quickly what seemed at the time a complete understanding of the natural world had been formulated using the new methods, it perhaps made sense to think that a similar human understanding of the moral order was close at hand. Given the intricate and harmonious picture of how nature was “designed”, it was easy to think that this natural order did not somehow run in violation of the moral order, that underneath every seemingly senseless and death-bringing natural event, an earthquake or a plague, there was some deeper and more necessary process working for the good of humanity.

The quest for theodicy continued when Rousseau gave us the idea that for harmony to be restored to the human world we needed to return to the balance found in nature, something we had lost when we developed civilization. Nature, and therefore the intelligence thought to have designed it, was good. Human beings got themselves into trouble when they failed to heed this nature: built their cities on fault lines, or even, for Rousseau, lived in cities at all.

Anyone who has ever taken a serious look at history (or even paid real attention to the news) would agree with Hegel that “History… is, indeed, little more than the register of the ‘crimes, follies, and misfortunes’ of mankind”. There isn’t any room for an ethical creator there, but Hegel himself tried to find some larger meaning in the long-term trajectory of history. With him, and with Marx who followed on his heels, we have not so much a creator as a natural and historical process that leads to a fulfillment, an intelligence or perfect society, at its end. There may be a lot of blood and gore, a huge amount of seemingly unnecessary human suffering, between the beginning of history and its end, but the price is worth paying.

Lest anyone think that these views are irrelevant in our current context, one can see a great deal of Rousseau in the positions of contemporary environmentalists and bio-conservatives who take the position that it is us, human beings and the artificial technological society we have created, that is the problem. Likewise, the views of both singularitarians and some transhumanists, who think we are on the verge of reaching some breakout stage where we complete or transcend the human condition, have deep echoes of both Hegel and Marx.

But perhaps the best analog for the kinds of rules a technologically advanced civilization might create around the issue of realistic ancestor simulations lies not in these religiously based philosophical ideas but in our regulations regarding more practical areas such as animal experimentation. Scientific researchers have long recognized that the pursuit of truth needs humane constraints and that such constraints apply not just to human research subjects, but to animal subjects as well. In the United States, standards for research that uses live animals are subject to self-regulation and oversight based on those established by the Institutional Animal Care and Use Committee (IACUC). Those criteria are as follows:

The basic criteria for IACUC approval are that the research (a) has the potential to allow us to learn new information, (b) will teach skills or concepts that cannot be obtained using an alternative, (c) will generate knowledge that is scientifically and socially important, and (d) is designed such that animals are treated humanely.

The underlying idea behind these regulations is that researchers should never unnecessarily burden animals in research. Therefore, it is the job of researchers to design and carry out research in a way that does not subject animals to unnecessary burdens. Doing research that does not promise to generate important knowledge, subjecting animals to unnecessary pain, doing experiments on animals when the objectives can be reached without doing so are all ways of unnecessarily burdening animals.

Would realistic ancestor simulations meet these criteria? I think realistic ancestor simulations would probably fulfill criterion (a), but I have serious doubts that they meet any of the others. In terms of simulations offering us “skills or concepts that cannot be obtained using an alternative” (b), in what sense would realistic simulations teach skills or concepts that could not be achieved using something short of simulations in which those within them actually suffer and die? Are there not many types of simulations, falling short of the full range of subjective human experiences of suffering, that would nevertheless grant us a good deal of knowledge regarding such worlds and societies? There is probably an even greater ethical hurdle for realistic ancestor simulations to cross in (c), for it is difficult to see exactly what lessons or essential knowledge running such simulations would bring. Societies that are advanced enough to run such simulations are unlikely to gain vital information about how to run their own societies.

The knowledge they are seeking is bound to be historical in nature, that is, what was it like to live in such and such a period, or what might have happened if some historical contingency were reversed? I find it extremely difficult to believe that we do not already have the majority of the information needed to create realistic models of what it was like to live in a particular historical period, a recreation that does not have to entail real suffering on the part of innocent participants to be of worth.

Let’s take a specific rather than a general historical example, one dear to my heart because I am a Pennsylvanian: the Battle of Gettysburg. Imagine that you live hundreds or even thousands of years into the future when we are capable of creating realistic ancestor simulations. Imagine that you are absolutely fascinated by the American Civil War and would like to “live” in that period to see it for yourself. Certainly you might want to bring your own capacity for suffering and pain, or to replicate this capacity in a “human” you, but why do the simulated beings in this world with you have to actually feel the lash of a whip, or the pain of a saw hacking off an injured limb? Again, if completely accurate simulations of limited historical events are ethically suspect, why would this suspicion not hold for the simulation of entire worlds?

One might raise the objection that any realistic ancestor simulation would need to possess sentient beings with free will such as ourselves, beings that possess not merely the ability to suffer but also the ability to inflict evil upon others. This, of course, is exactly the argument Christian theodicy makes. It is also an argument that was undermined by sceptics of early modern theodicy, such as that put forth by Leibniz.

Arguments that the sentient beings we are now (whether simulated or real) require free will that puts us at risk of self-harm were dealt with brilliantly by the 17th-century philosopher Pierre Bayle, who compared whatever creator might exist behind a world where sentient beings are in constant danger due to their exercise of free will to a negligent parent. Every parent knows that giving their children freedom is necessary for moral growth, but what parent would give this freedom such free rein when it was not only likely but known that it would lead to the severe harm or even death of their child?

Applying this directly to the simulation argument, any creator of such simulations knows that we will kill millions of our fellow human beings in war, and torture, deliberately starve, and enslave countless others. At least these are problems we have brought on ourselves, but the world also contains numerous so-called natural evils, such as earthquakes and pandemics, which have devastated us time and again. Above all, it contains death itself, which will kill all of us in the end.

It was quite obvious to another sceptic, David Hume, that the reality we live in has no concern for us. If it was “engineered”, what does it say that the engineer did not provide obvious “tweaks” to the system that would make human life infinitely better? Why are we driven by pains such as hunger, and not by correspondingly greater pleasures alone?

Arguments that human and animal suffering is somehow not “real” because it is simulated seem to me to be particularly tone deaf. In a lighter mood I might kick a stone like Samuel Johnson and squeak “I refute it thus!” In a darker mood I might ask those who hold the idea whether they would willingly exchange places with a burn victim. I think not! 

If it is the case that we live in a realistic ancestor simulation, then the simulator cares nothing for our suffering on a granular level. This leads us to the question of what type of knowledge could truly be gained from running such realistic ancestor simulations. It might be the case that the more granular a simulation is, the less knowledge can actually be gained from it. If one needs to create entire worlds, with individuals and everything in them, in order to truly understand what is going on, then the results of the process you are studying are either contingent or you are really not in possession of full knowledge regarding how such worlds work. It might be interesting to run natural selection over again from the beginning of life on earth to see what alternatives evolution might have come up with, but would we really be gaining any knowledge about evolution itself, rather than a view of how it might have looked had such and such initial conditions been changed? And why do we need to replicate an entire world, including the pain suffered by the simulated creatures within, before we can grasp what alternatives to the path evolution or even history followed might have looked like? Full understanding of the process by which evolution works, which we do not have but a civilization able to create realistic ancestor simulations doubtless would, should allow us to envision alternatives without having to run the process from scratch a countless number of times.

Yet it is at the last criterion (d), that experiments be “designed such that animals are treated humanely”, that the ethical nature of any realistic ancestor simulation really hits a wall. If we are indeed living in an ancestor simulation, it seems pretty clear that the simulator(s) should be declared inhumane. How otherwise would the simulator create a world of pandemics and genocides, torture, war, and murder, and above all, universal death?

One might claim that any simulator is so far above us that it takes little notice of our suffering. Yet we have granted this kind of concern to animals, and thus might consider ourselves morally superior to an ethically blind simulator. Without any concern or interest in our collective well-being, why simulate us in the first place?

Indeed, any simulator who created a world such as our own would be the ultimate anti-transhumanist. Having escaped pain and death, “he” would bring them back into the world on account of what would likely be socially useless curiosity. Here, then, we might have an answer to the second half of Bostrom’s quote above:

Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

A society that was technologically capable of creating realistic ancestor simulations and actually ran them would appear to have one of two features: (1) it finds such simulations ethically permissible, or (2) it is unable to prevent such simulations from being created. Perhaps any society that remains open to creating the degree of suffering found in realistic ancestor simulations for anything but reasons of existential survival would be likely to destroy itself for other reasons stemming from such a lack of ethical boundaries.

However, it is (2) that I find the most illuminating. For perhaps the condition that decides whether a civilization will continue to exist is its ability to adequately regulate the use of technology within it. Any society that is unable to prevent rogue members from creating realistic ancestor simulations despite deep ethical prohibitions is incapable of preventing the use of destructive technologies or of managing its own technological development in a way that promotes survival, a situation we can perhaps glimpse in our own struggles with nuclear and biological weapons, or the dangers of the Anthropocene.

This link between ethics, the successful control of technology, and long-term survival is perhaps the real lesson we should glean from Bostrom’s provocative simulation argument.