The Terrifying Banality of Humanity 2.0

[Image: William Blake, Urizen in Fetters, Tears Streaming from His Eyes]

Something I think we are prone to forget in this age of chattering heads and two-bit pundits is that ideas have consequences. Anyone engaged in public discourse has some responsibility to wrestle with the ethical implications of their thought, and this is as much the case for the Rush Limbaughs of the world as it is for that disappearing class of thinkers who once proudly went by the name of intellectuals.

In a way, artists have always had an easier time than those engaged in more discursive lines of thought. We hold Plato morally accountable for the ideas he put forward in a work like The Republic in a way we do not hold the tragedian Aeschylus accountable for a play full of human violence like The Oresteia. The reason is that philosophers, unlike artists, are assumed (no doubt in part falsely) to be giving us a view not just of the world as it is, but of the world as they believe it should be.

If ideas are indeed serious things, and we need to hold intellectuals responsible for their moral implications in a way we need not hold artists, then the pushing open, or perhaps better the tearing up, of the envelope of our moral horizons found in Steve Fuller’s recent Humanity 2.0: What it Means to be Human Past, Present and Future needs to be confronted in a way a work of art like the song Vicarious by the band Tool does not, even though both start from a kind of Archimedean point outside or above the normal human perspective, look down upon the world as a whole, and end up with a sort of metaphysical justification for evil and violence.

Upon picking up Humanity 2.0 I did not expect this. What I anticipated was a recent version of run-of-the-mill transhumanism with an epistemological slant (Fuller is a social epistemologist). Instead, what I got was a rather disturbing set of conclusions embedded within an otherwise banal alternative history of sociology.

There were far too many points throughout Humanity 2.0 where I found myself stopping to exclaim “Did he really just say that?” to discuss them all. So, in what follows I will focus on those I found most relevant. For starters, Fuller thinks that for transhumanism, or rather transhumanism as he understands it, to succeed, it has to adopt what can only be understood as a form of politics as deception. The first step in this deception consists merely of the discussions surrounding transhumanist concerns, where, for Fuller:

Intentionally or not, this serves to acclimatize citizens, in the company of their peers, to whatever nano-driven changes might be on the horizon, thereby updating the concept of self-fulfilling prophecy. (147)

In social psychology, this strategy is often dubbed ‘inoculation’, the suggestion being that by allowing people to spend time thinking about extreme or pure cases of some potential threat, you have laid the groundwork for the acceptance of a less virulent version.  (148)

Fuller compares this indirect method of achieving transhumanist ends to the victory of socialism, which emerged as a way to stave off the communist threat. (150)

Let’s pause and think about what he is saying for a moment. For Fuller, discussions and debates surrounding the application of science and technology to transform the current human condition are not so much a way to think through the profound challenges and questions transhumanist concerns pose to our understanding of what it means to be human or even post-human as they are a sort of public relations campaign, one that wins the public over to transhumanist positions precisely by making the proposals for the human future appear as extreme as possible.

That Fuller would adopt a perspective more reminiscent of the television show Mad Men than anything we might recognize from the tradition of moral philosophy or democratic politics shouldn’t be all that surprising, given that he takes the bizarre way in which we afford corporations the rights of individuals to be a primary vector by which his post-human version of utopia will come into being. He muses:

Once the modes of legitimate succession started to be forged along artificial rather than natural lines with the advent of the corporation… the path to the noosphere had been set. (205)

If indeed Fuller means us to associate our modern corporations, our Walmarts and McDonald’s, with the telos of the world, Teilhard de Chardin will not be the only one rolling in his grave.

Fuller’s love of corporations and his view of them as vectors for what he conceives to be the “real” ends of transhumanism is just one shade of his unabashed collectivism. One of his primary aims is to rehabilitate 19th and early 20th century eugenics. Many of the contemporary proponents of eugenics tend to approach the issue from the standpoint of reproductive rights. The problem with antique forms of eugenics, in their view, wasn’t just that they were based upon bad, meaning pre-genetic and racial, science, but that they were focused on empowering the group in complete disdain for the rights and lives of individuals.

Fuller will have none of such individualism or talk of “rights”. We should, in his view, rehabilitate 19th and 20th century eugenics because:

The history of eugenics is relevant to the project of human enhancement because it establishes the point-of-view from which one is to regard human-beings: namely, not as ends in themselves but as a means for the production of benefits… (emphasis added/142)

Lest we think that Fuller is out merely to rehabilitate the image of now largely forgotten eugenicists like Francis Galton, and couldn’t possibly be talking about “rehabilitating” our understanding of National Socialism, we have this:

Put bluntly, we must envisage the prospect of a transformation in the normative image of Nazi Germany comparable to what Barrington Moore described for the French Revolution. This is not easy… there have been only the barest hints of Nazi rehabilitation. But hints there are, helped along by the deaths of those with first-hand experience of Nazism. (244)

Does the death of those with first-hand experience of Nazi atrocities make us wiser in Fuller’s eyes? This is not the kind of “PR” transhumanism needs.

Fuller’s commitment to collectivism takes him very far from traditional transhumanism indeed. In his hands the universal transhumanist commitment to increasing human longevity becomes nonsensical, because the individual herself is, from Fuller’s perspective, a non-entity: a bridge to a kind of reborn primordial soup of cells and silicon which, from his pantheist perch, appears to be the ideal evolutionary future of a disembodied human intelligence or logos.

From his perspective the satirical world of “Utilitaria” imagined by the moral philosopher Steven Lukes, where the characters are fully cared for from birth until old age only to be literally deconstructed, both mentally and physically, to serve as “parts” for the young, becomes not a searing critique of a purely utilitarian world devoid of human rights, but a prescription for his version of our post-human future.

… with nature-inspired technologies we might think more imaginatively (aka divinely) about the terms on which ‘the greatest good’ can be secured for ‘the greatest number’, especially how parts of individuals might be subsumed under this rubric. (228)

This zooming out to an amoral height, from which the individual no longer has real value aside from whatever benefits they can bring not so much to society as to the process of history leading us to Fuller’s post-human age, leads him to embrace a whole host of troubling concepts. He gives his proposal to disassemble the panoply of protections we have built up around the use of human beings in scientific research vanilla, soft-Nietzschean labels such as “suffering smart”, when what his proposals amount to is a kind of martyrology in which individuals sacrifice their own lives and health for the “honor” of propelling scientific research at a faster pace.

Fuller’s version of “heaven” is a return to the Eden of the primordial soup, where the distinctions between species, machines, and even individuals have collapsed. In this he is riding the wave of the transformation of biology into an information science, in which individuals of a species become mere patterns of genes and the boundaries between species become permeable. Here, Fuller takes the extremely irresponsible glorification, coming from figures like Freeman Dyson, of our newfound ability to swap and combine genes between species that have not shared a lineage since the dawn of life on earth, and pushes it to an even greater extreme. Being able to incorporate the genes of other species is for Fuller less a means of human enhancement than a road to the disappearance of the human into some sort of divine stew.

Fuller takes this transformation of biology into an information science so far that he has become one of the most vocal proponents of teaching intelligent design in schools. What? The logic behind his advocacy of the pseudoscience of intelligent design is that the actual history of life on earth has become irrelevant now that we have reduced it to information.

… nothing much hangs on the fact of whether animals and plants evolved naturally or were specially created, let alone whether it happened over 5000 or 5 billion years… (64)

For Fuller, intelligent design is a sort of noble lie meant to teach its students how to be “God-like” designers of nature.

As far as raising critical awareness goes, I will stop there, and am left to wonder how it is possible that Fuller has come up with such a regressive version of transhumanism: one where debate and discourse are replaced by manipulation, where corporations and collectives have replaced individuals, where not just 19th and early 20th century racists but Nazi mass murderers are seen as premonitions of transhumanism, and this is not to be taken as a negative thing, a warning, where individuals become sacrificial animals for a research endeavor whose success would ultimately spell the end of individuals, and where intelligent design is touted as a serious version of biological science.

The extent to which Fuller flirts with the shibboleths of early 20th century fascism (indeed, he appears to see them as the true harbingers of transhumanist politics) suggests that the most recent offshoot of transhumanist thought, so-called techno-progressivism, will need to divorce and divest itself from its transhumanist roots if it is to remain ethically viable. Special efforts need to be made to disentangle techno-progressivism from underlying religious aspirations, for it is these which serve as the launch point from which Fuller would undo much of the moral progress that emerged as a delayed response to the abuses and cognitive distortions of fascism and European imperialism.

As I have warned elsewhere, religious rhetoric, or the historically unconscious adoption of ideas whose origins are ultimately religious, is dangerous to the transhumanist project both because it puts transhumanists and traditional religious persons in an artificial state of conflict and because it leads to the very kinds of moral and intellectual hubris found in a work like Humanity 2.0.

Fuller taps into what Susan Neiman, in her Evil in Modern Thought, declared to be the moral problem that would define the modern age: why is there evil in the world? It is a problem answered by thinkers such as Gottfried Leibniz with his concept of theodicy. The problem of evil, on this view, is one of our own limited human perspective: get up “high” enough and you will see that we live in the “best of all possible worlds”.

It shouldn’t be surprising that theodicy emerged at the same time as modern science and biblical literalism. Before the scientific revolution it was assumed that the everyday world surrounding us was the worst of all possible worlds, save for the Hell thought to lie literally below us, both being consequences of our own and the rebellious angels’ fall from grace.

Newton was just the most prominent of the early moderns to have destroyed the gap between the perfect world of the heavens and that of earth; what guided a thrown object here was the same thing that ruled heavenly objects above. Here emerges the idea of God as a celestial engineer.

Many of the early scientists, such as Newton and Francis Bacon, were proponents of the new biblical literalism. For them the project of freeing our understanding of the world from the hold of ancient and inaccurate science and that of finding the true meaning of the Bible by clearing it of the distortions caused by Catholicism were of a piece, a dual project. In her The Case for God, Karen Armstrong points out how these early moderns created the literalist idea of God, replacing more mystical and metaphorical understandings. What drives debates between vocal New Atheists and fundamentalists is that the two are in fact theological brothers who share the same conception of God and are merely arguing over whether this God exists or not.

Fuller places himself within this camp, although his argument might be described as a leap from New Atheism to fundamentalism: the literalist creator God of the early moderns (and today’s fundamentalists) does not exist, so we should ourselves become this God. All of history then becomes excused as the mere birth pangs of us as a newborn “god”.

The problem is that we are not “gods” and are nowhere near “becoming gods”, or, if we are, we are very strange gods indeed: not the omnipotent creator/designer of the common power fantasy, but a god trapped within his own creation who needs to create and act anew merely to survive, and who can never be sure that the next creative act will not spell his end.

Theodicy of the type Fuller promotes denies the very reality that the Copernican Revolution and all subsequent great leaps in our scientific understanding have brought us: we are not at the center of events; we are not the story.

Theodicy implies that we can judge the universe by the moral qualities characteristic of human individuals: good and evil. But we do not exist on the same moral plane as the universe, because the universe does not have a moral plane. It creates and destroys with absolute indifference.

When doing science we have to approach the world from above, from an Archimedean point, but morality does not and cannot work like this. Justifying the whole of life or the entirety of the universe on moral grounds means justifying what a more theological age called evil: death, pain, and destruction, for there is no need to justify the good. Fuller wants to justify historical and natural evil, but instead ends up in the same place as the character in that Tool song I mentioned earlier, who justifies his own voyeuristic bloodlust on the grounds that the universe itself is a place of pain:

Credulous at best, your desire to believe in angels in the hearts of men.
Pull your head on out your hippy haze and give a listen.
Shouldn’t have to say it all again.
The universe is hostile. So impersonal. Devour to survive.
So it is. So it’s always been.
We all feed on tragedy
It’s like blood to a vampire
Vicariously I live while the whole world dies
Much better you than I

The banality of evil is that it comes not so much from this sort of willful malice as from a mistaken notion of the good and an obliviousness to our own limits. Fuller’s Humanity 2.0 has both in spades.

The Ethics of a Simulated Universe

[Image: Zhuangzi’s butterfly dream]

This year marks the tenth anniversary of one of the more thought-provoking thought experiments to appear in recent memory. Nick Bostrom’s paper in the Philosophical Quarterly, “Are You Living in a Computer Simulation?”, might have sounded like the kinds of conversations we all had after leaving the theater having seen The Matrix, but Bostrom’s attempt was serious. (There is a great recent video of Bostrom discussing his argument at the IEET.) What he did in his paper was create a formal argument around the seemingly fanciful question of whether or not we are living in a simulated world. Here is how he stated it:


This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

To state his case in everyday language: any technological civilization whose progress did not stop at some point would gain the capacity to realistically simulate whole worlds, including the individual minds that inhabit them; that is, it could run realistic ancestor simulations. So technologically advanced civilizations either die out before gaining the capacity to produce realistic ancestor simulations, or there is something stopping such technologically mature civilizations from running such simulations in large numbers, or we ourselves are most likely living in such a simulation, because there would be a great many more simulated worlds than real ones.
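
The force of that last horn comes from simple arithmetic about how many minds end up on each side of the real/simulated ledger. Below is a minimal sketch of that bookkeeping in Python; the function and variable names are my own illustrative choices rather than Bostrom’s notation, and the numbers are arbitrary assumptions, not estimates.

```python
# Illustrative sketch of the arithmetic behind the trilemma.
# Names and sample numbers are hypothetical, not taken from Bostrom's paper.

def simulated_fraction(frac_posthuman: float, sims_per_posthuman_civ: float) -> float:
    """Rough fraction of human-like observers who are simulated, given the
    fraction of civilizations that reach a simulation-capable stage and the
    average number of ancestor simulations each such civilization runs
    (each simulation assumed to hold roughly as many minds as one real
    pre-posthuman history)."""
    weight = frac_posthuman * sims_per_posthuman_civ
    return weight / (weight + 1)

# If even 1% of civilizations survive and each runs a million simulations,
# nearly all observers are simulated (the third horn):
print(simulated_fraction(0.01, 1_000_000))  # ~0.9999
# If no civilization survives (first horn) or none runs simulations
# (second horn), the fraction collapses to zero:
print(simulated_fraction(0.0, 1_000_000))   # 0.0
print(simulated_fraction(0.01, 0.0))        # 0.0
```

The only point of the sketch is that the middle ground is unstable: unless one of the first two horns holds almost without exception, simulated observers vastly outnumber real ones.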

There is a lot in that argument to digest and a number of underlying assumptions that might be explored or challenged, but I want to look at just one of them, #2. That is, I will make the case that there may be very good reasons, both ethical and scientific, why technological civilizations both prohibit and are largely uninterested in creating realistic ancestor simulations. Bostrom himself discusses the possibility that ethical constraints might prevent technologically mature civilizations from creating realistic ancestor simulations. He writes:

One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

I think the issue of “suffering that is inflicted on the inhabitants of a simulation” may be more serious than Bostrom appears to believe. Any civilization that has reached the stage where it can create realistic worlds containing fully conscious human beings will almost certainly have escaped the two conditions that haunt the human condition in its current form, namely pain and death. The creation of realistic ancestor simulations would bring these two horrors back into existence, and thus might well be considered not merely unethical but perhaps even evil. Were our world actually such a simulation, it would confront us with questions that once went by the name of theodicy, namely the attempt to reconcile the assumed goodness of the creator (in our case, the simulator) with the natural, moral, and metaphysical evil that exists in the world.

Questions as to why there is, or the morality of there being, such a wide disjunction between the conditions of any imaginable creator/simulator and those of the created/simulated is a road we’ve been down before, as the philosopher Susan Neiman so brilliantly showed us in her Evil in Modern Thought: An Alternative History of Philosophy. There, Neiman points out that the question of theodicy is a nearly invisible current that swept through the modern age. As a serious topic of thought in this period it began as an underlying assumption behind the scientific revolution, with thinkers such as Leibniz arguing in his Essays on the Goodness of God, the Freedom of Man and the Origin of Evil that “ours is the best of all possible worlds”. Leibniz, having invented calculus and envisioned something like the modern computer, was no dope, so one might wonder how such a genius could ever have believed anything so seemingly contradictory to actual human experience.

What one needs to remember in order to understand this is that the early giants of the scientific revolution (Newton, Descartes, Leibniz, Bacon) weren’t out to replace God but to understand what was thought of as his natural order. Given how stunningly quickly what seemed at the time a complete understanding of the natural world had been formulated using the new methods, it perhaps made sense to think that a similar human understanding of the moral order was close at hand. And given the intricate and harmonious picture of how nature was “designed”, it was easy to think that this natural order did not somehow run in violation of the moral order: that underneath every seemingly senseless and death-bringing natural event, an earthquake or a plague, there was some deeper and more necessary process for the good of humanity going on.


The quest for theodicy continued when Rousseau gave us the idea that for harmony to be restored to the human world we needed to return to the balance found in nature, something we had lost when we developed civilization. Nature, and therefore the intelligence that was thought to have designed it, were good. Human beings got themselves into trouble when they failed to heed this nature: built their cities on fault lines, or even, for Rousseau, lived in cities at all.


Anyone who has ever taken a serious look at history (or even paid real attention to the news) would agree with Gibbon that history “is, indeed, little more than the register of the crimes, follies, and misfortunes of mankind”. There isn’t any room for an ethical creator there, but Hegel tried to find some larger meaning in the long term trajectory of history. With him, and with Marx who followed on his heels, we have not so much a creator as a natural and historical process that leads to a fulfillment, an intelligence or perfect society, at its end. There may be a lot of blood and gore, a huge amount of seemingly unnecessary human suffering, between the beginning of history and its end, but the price is worth paying.

Lest anyone think that these views are irrelevant in our current context, one can see a great deal of Rousseau in the positions of contemporary environmentalists and bio-conservatives who hold that it is we, human beings and the artificial technological society we have created, who are the problem. Likewise, the views of both singularitarians and some transhumanists, who think we are on the verge of reaching some breakout stage where we complete or transcend the human condition, have deep echoes of both Hegel and Marx.

But perhaps the best analog for the kinds of rules a technologically advanced civilization might create around the issue of realistic ancestor simulations lies not in these religiously based philosophical ideas but in our regulations regarding more practical areas such as animal experimentation. Scientific researchers have long recognized that the pursuit of truth needs humane constraints and that such constraints apply not just to human research subjects, but to animal subjects as well. In the United States, research that uses live animals is subject to self-regulation and oversight based on standards established by an Institutional Animal Care and Use Committee (IACUC). Those criteria are as follows:

The basic criteria for IACUC approval are that the research (a) has the potential to allow us to learn new information, (b) will teach skills or concepts that cannot be obtained using an alternative, (c) will generate knowledge that is scientifically and socially important, and (d) is designed such that animals are treated humanely.

The underlying idea behind these regulations is that researchers should never unnecessarily burden animals in research. Therefore, it is the job of researchers to design and carry out research in a way that does not subject animals to unnecessary burdens. Doing research that does not promise to generate important knowledge, subjecting animals to unnecessary pain, doing experiments on animals when the objectives can be reached without doing so are all ways of unnecessarily burdening animals.

Would realistic ancestor simulations meet these criteria? I think realistic ancestor simulations would probably fulfill criterion (a), but I have serious doubts that they meet any of the others. In terms of simulations offering us “skills or concepts that cannot be obtained using an alternative” (b), in what sense would realistic simulations teach skills or concepts that could not be obtained using something short of simulations in which those within them actually suffer and die? Are there not many types of simulations, falling short of the full range of subjective human experiences of suffering, that would nevertheless grant us a good deal of knowledge regarding such worlds and societies? There is probably an even greater ethical hurdle for realistic ancestor simulations to cross in (c), for it is difficult to see exactly what lessons or essential knowledge running such simulations would bring. Societies that are advanced enough to run such simulations are unlikely to gain vital information about how to run their own societies from them.

The knowledge they are seeking is bound to be historical in nature, that is, what was it like to live in such and such a period, or what might have happened if some historical contingency were reversed? I find it extremely difficult to believe that we do not already have most of the information needed to create realistic models of what it was like to live in a particular historical period, a recreation that does not have to entail real suffering on the part of innocent participants to be of worth.

Let’s take a specific rather than a general historical example, one dear to my heart because I am a Pennsylvanian: the Battle of Gettysburg. Imagine that you live hundreds or even thousands of years in the future, when we are capable of creating realistic ancestor simulations. Imagine that you are absolutely fascinated by the American Civil War and would like to “live” in that period to see it for yourself. Certainly you might want to bring your own capacity for suffering and pain, or to replicate this capacity in a “human” you, but why do the simulated beings in this world with you have to actually feel the lash of a whip, or the pain of a saw hacking off an injured limb? Again, if completely accurate simulations of limited historical events are ethically suspect, why would this suspicion not hold for the simulation of entire worlds?

One might raise the objection that any realistic ancestor simulation would need to contain sentient beings with free will such as ourselves, beings that possess not merely the ability to suffer but the ability to inflict evil upon others. This, of course, is exactly the argument Christian theodicy makes. It is also an argument that was undermined by sceptics of early modern theodicies such as the one put forth by Leibniz.

Arguments that the sentient beings we are now (whether simulated or real) require a free will that puts us at risk of self-harm were dealt with brilliantly by the 17th century philosopher Pierre Bayle, who compared whatever creator might exist behind a world where sentient beings are in constant danger due to their exercise of free will to a negligent parent. Every parent knows that giving their children freedom is necessary for moral growth, but what parent would give this freedom such free rein when it was not only likely but known that it would lead to the severe harm or even death of their child?

Applying this directly to the simulation argument: any creator of such simulations knows that we will kill millions of our fellow human beings in war, and torture, deliberately starve, and enslave countless others. At least these are problems we have brought on ourselves, but the world also contains numerous so-called natural evils, such as earthquakes and pandemics, which have devastated us time and again. Above all, it contains death itself, which will kill all of us in the end.

It was quite obvious to another sceptic, David Hume, that the reality we live in has no concern for us. If it was “engineered”, what does it say that the engineer did not provide obvious “tweaks” to the system that would make human life infinitely better? Why are we driven by pains such as hunger and not by correspondingly greater pleasures alone?

Arguments that human and animal suffering is somehow not “real” because it is simulated seem to me particularly tone-deaf. In a lighter mood I might kick a stone like Samuel Johnson and squeak “I refute it thus!” In a darker mood I might ask those who hold the idea whether they would willingly exchange places with a burn victim. I think not!

If it is the case that we live in a realistic ancestor simulation, then the simulator cares nothing for our suffering on a granular level. This leads us to the question of what type of knowledge could truly be gained from running such realistic ancestor simulations. It might be the case that the more granular a simulation is, the less knowledge can actually be gained from it. If one needs to create entire worlds, with individuals and everything in them, in order to truly understand what is going on, then either the results of the process you are studying are contingent or you are really not in possession of full knowledge regarding how such worlds work. It might be interesting to run natural selection over again from the beginning of life on earth to see what alternatives evolution might have come up with, but would we really be gaining any knowledge about evolution itself, rather than a view of how it might have looked had such and such initial conditions been changed? And why would we need to replicate an entire world, including the pain suffered by the simulated creatures within it, before we can grasp what alternatives to the path evolution or even history followed might have looked like? Full understanding of the process by which evolution works, which we do not have but a civilization able to create realistic ancestor simulations doubtless would, should allow us to envision alternatives without having to run the process from scratch a countless number of times.

Yet it is with the last criterion (d), that experiments be “designed such that animals are treated humanely”, that the ethical case for any realistic ancestor simulation really hits a wall. If we are indeed living in an ancestor simulation, it seems pretty clear that the simulator(s) should be declared inhumane. How else could the simulator have created a world of pandemics and genocides, torture, war, and murder, and above all, universal death?

One might claim that any simulator is so far above us that it takes little notice of our suffering. Yet we have granted this kind of concern to animals, and thus might consider ourselves morally superior to an ethically blind simulator. And without any concern or interest in our collective well-being, why simulate us in the first place?

Indeed, any simulator who created a world such as our own would be the ultimate anti-transhumanist. Having escaped pain and death “he” would bring them back into the world on account of what would likely be socially useless curiosity. Here then we might have an answer to the second half of Bostrom’s quote above:


Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

A society that was technologically capable of creating realistic ancestor simulations and actually ran them would appear to have one of two features: (1) it finds such simulations ethically permissible, or (2) it is unable to prevent such simulations from being created. Perhaps any society that remains open to creating the degree of suffering found in realistic ancestor simulations for anything but reasons of existential survival would be likely to destroy itself for other reasons related to such a lack of ethical boundaries.

However, it is (2) that I find the most illuminating. For perhaps the condition that decides whether a civilization will continue to exist is its ability to adequately regulate the use of technology within it. Any society that is unable to prevent rogue members from creating realistic ancestor simulations, despite deep ethical prohibitions, is incapable of preventing the use of destructive technologies or of managing its own technological development in a way that promotes survival. We can perhaps see glimpses of this in our own struggles with nuclear and biological weapons, or with the dangers of the Anthropocene.

This link between ethics, the successful control of technology, and long term survival is perhaps the real lesson we should glean from Bostrom’s provocative simulation argument.