H.P. Lovecraft and the Horror of the Posthuman

[Image: The Shadow Over Innsmouth]

Most boundaries have their origin in our fears, imposed in a vain quest to isolate what frightens us on the other side. The last two centuries have been an era of eroding boundaries, the gradual disappearance of what were once thought to be unassailable walls between ourselves and the “other”. It is a story of liberation whose flip-side has been a steady accumulation of anxiety and dread.

By far the most important boundary, as far as Western culture at least is concerned, was the line that separated the human from the merely animal. It was a border that Charles Darwin began the process of destroying with his book The Origin of Species in 1859; but though that work strongly implied that human beings also were animals, and that we had emerged through the same evolutionary processes as monkeys or sea slugs, that cold fact wouldn’t be directly addressed until Darwin wrote another masterpiece- The Descent of Man in 1871.

Darwin himself was an abolitionist and a believer in the biological equality of the “races”, but with his theory, and especially after The Descent’s publication, the reaction to his ideas wasn’t a scientific affirmation of the core truth found in the book of Genesis- that despite differences in appearance and material circumstances all of humanity is part of one extended family- but a sustained move in the opposite direction.

Darwin’s theories would inadvertently serve as the justification for a renewed stage of oppression and exploitation of non-European peoples on the basis of an imagined evolutionary science of race. Evolutionary theory applied to human beings also formed the basis for a new kind of fear- that of “master races” being dragged back into barbarism by hordes of “inferior” human types- the nightmare that still stalks the racist in his sleep.

Darwin held views that, while attempting to distance his theory from any eugenicist project, nevertheless left it open to such interpretations and their corresponding anxieties. Here Darwin’s humanism is much more obscure than on the question of race, for he drew a conclusion most of us find difficult to accept- that our humanistic instincts had combined with our civilizational capacities in such a way as to cause humanity to evolutionarily “degenerate”. As he wrote in The Descent:

With savages, the weak in body or mind are soon eliminated; and those that survive commonly exhibit a vigorous state of health. We civilised men, on the other hand, do our utmost to check the process of elimination; we build asylums for the imbecile, the maimed, and the sick; we institute poor-laws; and our medical men exert their utmost skill to save the life of every one to the last moment. There is reason to believe that vaccination has preserved thousands, who from a weak constitution would formerly have succumbed to small-pox. Thus the weak members of civilised societies propagate their kind. No one who has attended to the breeding of domestic animals will doubt that this must be highly injurious to the race of man. It is surprising how soon a want of care, or care wrongly directed, leads to the degeneration of a domestic race; but excepting in the case of man himself, hardly any one is so ignorant as to allow his worst animals to breed.

Still, the ever humane Darwin didn’t think such degeneration was a problem we should even try to solve, for in doing so we might lose those features of ourselves- such as our caring for one another- that made our particular form of human life meaningful in the first place. Yet despite his efforts to morally anchor the theory of evolution, his ideas became the starting point for the project of replacing the religious nightmares populated by witches and demons, then being driven from the world by the light of an increasingly scientific and secular age, with new “scientifically based” fears.

What indeed could have been more horrifying for bourgeois whites than barbarians breaching the gates they had so assiduously built: between Western and subject peoples, between the upper and middle classes and the poor- who often were of another color and language- or, lastly, the erosion of the sharp line that had been established between man as the master of nature and nature herself?

H.G. Wells was one of the first to articulate this new, seemingly scientific nightmare that would replace the older religious ones. If evolution was even now reshaping our very bodies and minds, should we not fear that it was molding us in ways we might dread?

In his novel The Time Machine Wells thus imagined a future in which evolution had caused a twisted bifurcation of the human species. The upper class, after eons of having every need and whim provided for, had become imbecilic and gone soft- his childlike Eloi. Yet it was Wells’ imagined degenerates, the Morlocks- the keepers of machines who had evolved to live underground and who lived off the flesh of the Eloi- that would haunt the imagination of a 20th century beset by the war between classes.

In The Time Machine Wells reworked European anxiety regarding “savages” into an imagined future of European civilization itself, one that might be brought about should the course of human evolution come to be shaped by the failure to address class conflict and to preserve the types of challenges that had given life meaning and shaped human evolution in the past. He would explore a similar erosion of boundaries, this time between the human and the animal, in his 1896 novel The Island of Dr. Moreau, which I’ve discussed on a prior Halloween.

Yet Wells would never explore the racial anxieties unleashed by the theory of natural selection, probably because he remained both too intelligent and too humane to be taken in by the pseudoscience of race, even when racism was a common opinion among educated whites. The person who would eventually give vent to these anxieties was a strange and lonely figure from Providence, Rhode Island, whose rise to popularity might serve as a sort of barometer of the early 21st century and its mixture of new and old fears.

I can’t say I’ve read much of H.P. Lovecraft, perhaps because I am no great fan of horror fiction generally, but after reading his 1931 tale The Shadow over Innsmouth I did understand much better what all the fuss over the last decade or so has been about.

Lovecraft is an excellent example of the old saw that brilliance and moral character need have nothing to do with one another. Indeed, he is a master storyteller of a particular sort, one who excels at tapping into and exploring the darker human emotions. Where Lovecraft especially excels is at capturing less the emotion of fear than the sensation of suffocating disgust. Yet for all his brilliance, the man was also, indisputably, a racist asshole.

The plot of The Shadow over Innsmouth concerns an unnamed antiquarian who, in search of the origins of an obscure kind of “crown”, finds himself in the bizarre and ultimately horrifying coastal town of Innsmouth. What the protagonist discovers upon his arrival is something close to a ghost town, an almost post-apocalyptic place neither fully abandoned nor alive and inhabited. Despite the tale’s setting near the sea, the atmosphere Lovecraft describes reminded me very much of what is left of the man-made disaster that is the town of Centralia, a place where our fantasies of the underworld meet reality in a way I know all too well.

The backdrop is important for the message I think Lovecraft is trying to convey, which is that civilization is separated from barbarism and violence, the angelic from the demonic, the human from the animal, by a razor-thin boundary that might disappear in an instant.

Against this setting Lovecraft succeeds in a mashup of the anxieties that haunted Westerners over the course of the modern age. The churches of Innsmouth have been converted to a type of devil worship. Indeed, one of the most striking scenes in the story for me was when the protagonist sees one of the darkly robed priests of this cult walking across the street and finds himself struck by a strange sense of insanity and nausea- what fear can sometimes feel like, as I myself have experienced. Still, Lovecraft is not going to center his story on this one deep-seated, and probably dying, fear of devil worshipers, but will hybridize it and create a chimera of fears old and new.

The devil the inhabitants of Innsmouth worship will turn out to be one of the deities from Lovecraft’s Cthulhu Mythos. In fact, Lovecraft appears to have almost single-handedly revived the ancient Christian idea that the gods of “pagans” and “heathens” weren’t, as modern Christians mostly understand them, fictions or childish ideas regarding the “true” God, but were themselves real powers that could do things for their worshipers- though for a demonic price. The ship captain who brought the worship of a dark god from lands in the Pacific did so for purely utilitarian reasons- prayer to the entity works- unlike prayers to the Christian God, who, in a twist I am not sure Lovecraft grasped the cleverness of, has left the fishing community of Innsmouth with empty nets.

In exchange for full nets, the people of Innsmouth agree to worship these new “dark ones” and to breed with them, as the “sons of God” did with the “daughters of men” in the Book of Genesis. And just as in the Bible, where such interbreeding gives rise to monstrous hybrids that God later destroys with the Flood, in The Shadow over Innsmouth these unnatural couplings produce similar hybrids- only, in a dark reversal of both the Noah story and the evolutionary movement from sea to land, the hybrids are born human and devolve by the time they reach adulthood into a sort of immortal fish.

Okay, that’s weird… and deep.

What’s a shame is that it’s likely Lovecraft we’ll be talking about in 2100, much more so than Joseph Conrad, who, despite attempting to wrestle with similar anxieties, and from a much more humane perspective, will likely be anchored in the 19th and 20th centuries by his historical realism.

Lovecraft will likely last longer because he was tapping into archetypes that seem to transcend history and are burned deep into the human unconscious, but it’s not just this Jungian aspect to Lovecraft’s fiction that will probably keep his version of horror relevant in the 21st century.

The very anxieties Lovecraft exhibited, both personally and in his fiction, are almost bound to become even more acute as fears over immigration are exacerbated by a shrinking world that, for a multitude of reasons, is on the move. These fears are only the tip of the iceberg, however, as boundaries are breaking down in much more radical ways- as the revolutionary changes in public perception of homosexuality and transgender people within just the last couple of years clearly attest.

Add to that the ever more visible presence of cyborgs, the dread of animated environments, and the ubiquity of AI “deities” running the world behind the scenes, combined with the continued proliferation of subcultures and bio-technological innovations that blur the line between humans and animals, and one can see why anxiety over boundaries may be the fear with the greatest psychological and political impact in the 21st century.

What makes Lovecraft essential reading today is that he gives those of us not prone to such fears a window directly into the mind of someone who experiences the erosion of boundaries with visceral horror- the kind of horror that, as in Lovecraft’s own time (he wrote during the era when Nazi propaganda was portraying Jews as vermin), quite dark political forces are prone to tap. For that we might be thankful the author was as dark of soul as he was, for because of it we can be a little more certain that such fears are truly felt by some, and are not just a childlike play for attention, even while the adults among us continue to know there aren’t any monsters under our beds waiting for the lights to go out before they attack.


God’s a Hedge Fund Manager, and I’m a Lab Rat


Of all the books and essays of Steve Fuller I have read, his latest, The Proactionary Imperative: A Foundation for Transhumanism, is by far the most articulate and morally anchored. Coming from me that’s saying a lot, given how critical I have been on more than one occasion of what I’ve understood to be troubling undercurrents in his work: justifications of political violence and the advocacy of a kind of amoral reading of history.

Therefore, I was surprised to find The Proactionary Imperative, which Fuller co-authored with Veronica Lipińska, not only filled with intellectual gems, but informed by a more nuanced ethics than I had seen in Fuller’s prior writing or speeches.

In The Proactionary Imperative Fuller and Lipińska aim to re-calibrate the ideological division between left and right that has existed since the French Revolution’s unfortunate seating arrangement. The authors set out to replace this division, which they see as antiquated, with a new split between adherents of the precautionary principle and those who embrace a version of what Max More was the first to call the proactionary principle. The former urges caution towards technological, and especially biological, interventions in nature, whereas the latter holds that the road to progress is paved with calculated risks.

Fuller and Lipińska trace the modern origins of those who espouse some version of the precautionary principle back to Darwin himself, with his humbling of the human status and his overall pessimism that we could ever transcend our lowly nature as animals. They lump philosophers such as Peter Singer (whom I think David Roden more accurately characterizes as a Critical Posthumanist) into this precautionary sect, which they argue is united by a desire to rebalance the moral scale away from humans and towards our fellow animals. Opposed to this, Fuller and Lipińska argue that we should cling to the pre-Darwinian notion that humans are metaphysically and ontologically superior to and distinct from other animals, on account of the fact that we are the only animal that seeks to transcend its own nature and become “gods”.

Fuller himself is a Christian (this was news to me) of a very peculiar sort- a variant of the faith whose origins the authors fascinatingly trace to changes in our understanding of God first articulated in the philosophy of the 13th century theologian Duns Scotus.

Before Scotus the consensus among Christian theologians stressed the supernatural character of God; that is, God was unlike anything we had experienced in the material world, and therefore none of our mental categories was capable of describing him. To use examples from my own memory: an extreme Scotist position would be the materialism of Thomas Hobbes, who held God to possess an actual body, whereas the unbridgeable gap between ourselves and our ideas and God, or indeed the world itself, is a theme explored with unparalleled brilliance in Milton’s Paradise Lost and brought to its philosophical climax in the work of Kant.

Fuller and Lipińska think that Scotus’ narrowing of the gap between human and godly characteristics was a seminal turning point in modernity. Indeed, a whole new type of theology known as Radical Orthodoxy, associated with the theologian John Milbank, has hinged its critique of modernity on this Scotist turn, and Fuller takes the other side of this Christian split, adopting the perspective that because God is of this world we can become him.

I’ve discussed the problems with using religious language as a justification for transhumanism or science ad nauseam, such as here, here and here, so I won’t bore you with them again. Instead, on to Fuller and Lipińska’s political prescriptions.

The authors want us to embrace our “God-like” nature and throw ourselves into the project of transcending our animal biology. What they seem to think is holding us back from seizing the full technological opportunities in front of us is not merely our fossilized political divisions, whose re-calibration they wish to spur, but the fact that the proactionary principle has been understood up until this point in primarily libertarian terms.

For Fuller and Lipińska, transhumanism understood as merely the morphological freedom of individuals over their own bodies fails to promote either rapid modernization or the kinds of popular mobilization found in most other eras of transformative change. We need other models. Unfortunately, theirs are also from the 19th century, for the authors argue for a reassessment of the progressive and liberal aspects of late 19th and early 20th century eugenics as a template for a new politics, and it was right about there that I knew I was in for a letdown.

Fuller and Lipińska conceptualize a new variant of eugenic politics they call “hedgenetics”. From what I can gather it’s meant to be a left-of-center alternative both to the libertarian view of transhumanism as mere morphological freedom and to the kinds of abuses and corporate capture of genetic information seen in novels like Michael Crichton’s Next or Paolo Bacigalupi’s The Windup Girl. This alternative reconceptualizes genetic inheritance as the collective property of groups, who can then benefit when their genes are studied or shared.

The authors also want to promote individual genetic and medical experimentation and to celebrate individual “sacrifice” in the cause of transhumanist innovation, in something akin to the way we celebrate the sacrifice of individual soldiers in war.

As in the past, I think Fuller fails to grapple with the immoral aspects and legacy of medical experimentation and eugenics even outside of the hellish world concocted by the Nazis. He seems to assume that a lack of constraints on human medical experiments will lead to more rapid medical innovation in the same way fans of Dick Cheney think torture will lead to actionable intelligence- that is, as if it were a case that doesn’t need to be proved. If it were indeed true that weak rules on human experimentation lead to more rapid medical innovation, then the Soviet Union or China should have been among the most medically advanced nations on earth. There’s a very real danger that should we succeed in building the type of society Fuller and Lipińska envision, we’ll have exchanged our role as citizens only to have become a very sophisticated form of lab rat.

Another issue is that the authors seem informed by a version of genetic determinism that bears little resemblance to scientific reality. As Ramez Naam, himself no opponent of human enhancement, has pointed out, even where genes are responsible for a large percentage of a trait such as IQ or personality, literally thousands of genes appear to contribute to that trait, none of them so predominant that intervention is easy or free of the risk of causing other unwanted conditions; enhancing the genes for intelligence, for example, seems to increase the risk of schizophrenia.
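Naam’s point about complexity can be made concrete with a toy model. The sketch below is purely my own illustration (the gene count, effect sizes, and the trait itself are arbitrary assumptions, not anything from Naam or from Fuller and Lipińska): it treats a trait as the sum of thousands of small random gene effects and shows that even the most favorable single edit barely moves the trait relative to its natural spread.

```python
import random

random.seed(42)
N_GENES = 5000  # arbitrary; highly polygenic traits involve thousands of loci

# Per-gene effect sizes: many tiny contributions, no dominant "IQ gene".
effects = [random.gauss(0, 1) for _ in range(N_GENES)]

# The most any single-gene edit could move the trait:
best_single_edit = max(abs(e) for e in effects)

# Natural spread of the trait across genotypes, where each gene variant is
# present or absent with probability 0.5 (variance 0.25 per gene).
trait_sd = (0.25 * sum(e * e for e in effects)) ** 0.5

print(f"largest possible single-edit gain: {best_single_edit:.2f}")
print(f"trait standard deviation:          {trait_sd:.2f}")
# The best edit shifts the trait by only about a tenth of a standard
# deviation; meaningful enhancement would require editing thousands of
# genes at once, each carrying its own chance of unwanted side effects.
```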

Naam points out that it’s unlikely parents will take such genetic risks with their children except to protect against debilitating diseases- a case where genetic intervention appears much easier in any event. Fuller and Lipińska never really discuss parental rights or, more importantly, protections for children, which is odd because eugenics has historically been aimed at reproduction. Perhaps they were thinking of the kinds of gene therapies for adults promised by new techniques like CRISPR, but even there the limitations imposed by the complexity Naam identifies continue to apply.

Nor do Fuller and Lipińska really address how bio-electronic prosthetics and enhancements fit into their idea of hedgenetics. Here the idea of biology as individual or ethnic property would seem to break down, as does the idea of state-subsidized experimentation and enhancement, unless we were to create a system of periodic and free “upgrades” for all. It’s a nice dream, but then again I can’t even get the state to pay to fix a broken tooth. Welcome to godhood.

What makes war between the US and China or Russia inevitable?

[Image: Battle of Lepanto]

There is a dangerous and not so new idea currently making the rounds: that not only is conventional war between the great powers inevitable, but that it would be much less of an existential threat to humanity than we have been led to believe, and might even be necessary for human progress.

The appearance of this argument in favor of war was almost predictable, given that it was preceded by a strong case that war is now obsolete, at least in its great power guise, and is being driven out of history by deep underlying trends towards prosperity and peace. I am thinking here mostly of Steven Pinker in his The Better Angels of Our Nature, but he was not alone.

The exact same thing happened in the 19th century. Then, too, voices arose arguing that war was becoming unnecessary as values became global and the benefits of peaceful trade replaced the plunder of war. The counter-reaction argued that war had been the primary vector of human progress and that without it we would “go soft” or devolve into a much less advanced species.

Given their racist overtones, arguments that we will evolve “backward” absent war are no longer made in intelligent circles. Instead, war has been tied to technological development, the argument being that without war in general, and great power war in particular, we will become technologically stuck. That’s the case made by Ian Morris in his War! What Is It Good For?, where he sees the US as shepherding in the Singularity via war, and it’s a case made from opposite sides of the fence regarding transhumanism by Christopher Coker and Steve Fuller.

The problem with these new arguments in favor of great power conflict is that they discount the prospect of nuclear exchange. Perhaps warfare is the engine of technological progress, but it’s better to be the tortoise when such conflicts carry the risk of driving you back to the stone age.

That said, there is an argument out there that perhaps even nuclear warfare wouldn’t be quite the civilization destroyer we have believed. But that position seems less likely to become widespread than the idea that the great powers could fight each other directly and yet somehow avoid bringing the full weight of their conventional and nuclear forces down upon one another, even in the face of a devastating loss. It’s here that we find Peter W. Singer and August Cole’s recent novel Ghost Fleet: A Novel of the Next World War, which tells the story of a conventional war, mainly at sea, between the US on one side and China and Russia on the other.

It’s a copiously researched book, and probably gives a good portrait of what warfare in the next ten to fifteen years will look like. If the authors are right, in the wars of the future drones- underground, land, air, and sea- will be ubiquitous, and AI will be used to manage battles in which sensors take the place of senses- the crew of a ship need never set eyes on the actual sea.

Cyber attacks in the future will be a fully functional theater of war- as will outer space. The next war will take advantage of enhancement technologies- augmented reality, and interventions such as “stim tabs” for alertness. Advances in neuroscience and bioelectronics will be used, at the very least, as a form of enhanced, and brutal, interrogation.

If the book makes any warning (and all such books seem to have a warning) it is that the US is extremely reliant on the technological infrastructure that allows the country to deploy and direct its global forces. The war begins with a Chinese/Russian attack on American satellites, which effectively blinds the American military. I’ve heard interviews in which Singer claims that in light of this the US Navy is now training its officers in the art of celestial navigation, which in the 21st century is just, well… cool. The authors point out that American hardware is vulnerable as well, to threats like embedded beacons or kill switches, given that much of this hardware has come out of Chinese factories.

The plot of Ghost Fleet is pretty standard: in a surprise attack the Chinese and Russians push the US out of the Pacific by destroying much of the American navy and conquering Hawaii. Seemingly without a single ally, the US then manages to destroy the Chinese fleet after resurrecting the ships of its naval graveyard near San Francisco- a real thing, apparently- especially the newfangled destroyer USS Zumwalt, which is equipped with a new and very powerful form of railgun. I am not sure whether it helped or hindered my enjoyment of the book that its plot seemed eerily similar to my favorite piece of anime from when I was a kid- Star Blazers.

The problem, once again, is in thinking that we can contain such conflicts and therefore need not do everything possible to avoid them. In the novel there never seems to be any possibility of nuclear exchange or strategic bombing, and with the exception of the insurgency and counterinsurgency in Hawaii the war is hermetically contained to the Pacific and the sea. Let’s hope we would be so lucky, but I’m doubtful.

Ghost Fleet is also not merely weak but detrimental in regard to the one “technology” that will be necessary for the US to avoid a war with China or others, and to contain such a war should it break out.

If I had to cast a vote for one work that summed up the historical differences between the West and everyone else- differences that would eventually bring us the scientific revolution and all of the power that came with it- that work wouldn’t be one of science such as the Principia Mathematica, or even some great work of philosophy or literature, but a history book.

What makes Herodotus’ The Histories so unique is that it was the first time one people tried to actually understand their enemies. It’s certainly Eurocentric to say it, but the Greeks, as far as I am aware, were first and unique here. And it wasn’t just a one-off.

Thucydides would do something similar for the intra-Hellenic conflict between Athenians and Spartans. But the idea of actually trying to understand your enemy- no doubt because so much about an enemy lends itself to being hidden, and thus needs to be imagined- was brought to its heights in the worlds of drama and fiction. The great Aeschylus did this for the Persians with his tragedy named after that noble people.

It’s a long tradition that ran right up to The Riddle of the Sands, a 1903 book that depicted a coming war between the British and the Germans. Much different from the dehumanizing war propaganda that would characterize the Germans as less-than-human “Huns” during the First World War, The Riddle of the Sands attempted to make German aggression explicable given Germany’s historical and geographical circumstances, even while arguing that such aggression needed to be countered.

In Ghost Fleet, by contrast, the Chinese especially are reduced to something like Bond villains, US control of the Pacific is wholly justified, and the American characters are largely virtuous and determined. In that sense the novel fails to do what novels naturally excel at; namely, opening up to the imagination realms that otherwise remain inaccessible- in this specific case the motivations, assumptions, and deep historical grievances likely to drive Chinese or Russian policymakers in the run-up to and during any such conflict. Sadly, it is exactly such a lack of understanding that makes great power wars, and the existential risks to humanity they pose, perhaps not inevitable, but increasingly likely.

Stalinism as Transhumanism

[Image: New Soviet Man]

The ever controversial Steve Fuller has recently published a number of jolting essays at the IEET (there has been a good discussion of the topic on David Roden’s blog), yet whatever one thinks about the prospect of a zombie vs transhumanist apocalypse, he has managed to raise serious questions for anyone who identifies with the causes of transhumanism and techno-progressivism; namely, what is the proper role, if any, of the revolutionary, modernizing state in such movements, and to what degree should the movement be open to violence as a means of achieving its ends? Both questions, I will argue, can best be answered by looking at the system constructed in the Soviet Union between 1929 and 1953 under the reign of Joseph Stalin.

Those familiar with the cast of characters in the early USSR will no doubt wonder why I chose to focus on the “conservative” Stalin rather than on Leon Trotsky, who certainly evidenced something like proto-transhumanism in quotes such as this one from his essay Revolutionary and Socialist Art:

Man will make it his purpose to master his own feelings, to raise his instincts to the heights of consciousness, to make them transparent, to extend the wires of his will into hidden recesses, and thereby to raise himself to a new plane, to create a higher social biologic type, or, if you please, a superman.

It was views like this one that stood, for me, as evidence of the common view among intellectuals sympathetic to Marxism that it was Trotsky who was the true revolutionary figure and heir of Lenin, whereas Stalin was instead a reactionary in Marxist garb who both murdered the promise latent in the Russian Revolution and constructed on the revolution’s corpse a system of totalitarian terror.

Serious historians no longer hold such views in light of evidence acquired after the opening of the Soviet archives following the USSR’s collapse in 1991. What we’ve gained from these documents is a much more nuanced and detailed view of the three major figures of the Russian Revolution- Lenin, Trotsky and Stalin- all of whom in some way looked to violence as the premier tool for crafting a new social and political order.

Take the father of the Russian Revolution- Lenin. The common understanding of Lenin in the past was to see him as a revolutionary pragmatist who engaged in widespread terror and advocated dictatorship largely as a consequence of the precarious situation of the revolution and the atmosphere of danger and bloodshed found in the Russian civil war. The historical record presented in the archives now makes clear that many of the abhorrent features of state communism in Russia once associated with Stalin in fact originated under Lenin- obscenities such as the forerunner to the KGB, the Cheka, or the Soviet system of concentration camps, Solzhenitsyn’s gulag archipelago. Far from seeing them as temporary expedients of the revolution’s precarious circumstances, Lenin saw these innovations as a permanent feature of Marx’s “dictatorship of the proletariat”, which he had reimagined as the permanent rule of a revolutionary, technocratic elite.

Trotsky comes off only a little better, because he never assumed the mantle of Soviet dictator. Trotsky’s problems, however, originate from his incredible egotism combined with his stunning political naivete. His idea that conditions in the 1920s were ripe to export the communist revolution globally barely grasped just how precarious the situation was for the newly victorious communist revolutionaries even within his own country. Indeed, to the extent that domestic communists in non-Russian countries identified with Soviet communism and prematurely pushed their societies towards revolution, this only ended up empowering reactionaries and fueling the rise of fascist parties globally- parties that would come quite close to strangling the infant of state communism while it was still in its cradle.

Then there’s Stalin, whom those sympathetic to communism often see as the figure who did the most to destroy its promise. The so-called “man of steel” is understood as a reactionary monster who replaced the idealism of the early Soviet Union with all the snakes that had infested czarist Russia’s head.

This claim too has now been upended. For what is clear is that although Stalin reoriented the USSR away from Trotsky’s dream of world revolution to focus on “socialism in one country”, the state he managed to create was perhaps the most radical and revolutionary the world has yet seen.

The state has always played a large role in modernization, but it was only in the type of communist regime Stalin was the first to envision that the state became synonymous with modernization itself. And it wasn’t only modernization, but a kind of ever accelerating modernization that aimed to break free of the gravity of history and bring humanity to a new world inhabited by a new type of man.

It was his revolutionary impatience and desire to accelerate the pace of history that led Stalin down the path to being perhaps the bloodiest tyrant the world has yet known- the collectivization of Soviet agriculture alone came at the cost of at least 5 million deaths. What Stalin seemed to possess that his fellow revolutionaries lacked was a kind of sociopathy that left him unflinching in the face of mass suffering, though he also became possessed by the paranoia to which tyrants are prone, and towards the end of his reign killed as much out of fear as from his desire to propel the USSR through history.

Here I return to Fuller, who seems to have been inspired by state communism and appears to be making the case that what transhumanism needs is a revolutionary vanguard that will, violently if necessary, push society towards transformative levels of technological change. Such change will then make possible the arrival of a new form of human being.

There is a great deal of similarity here with the view of the state presented by Zoltan Istvan in his novel The Transhumanist Wager, though that work is somewhat politically schizophrenic, containing elements of Ayn Rand and Bolshevism at the same time- which perhaps makes sense given that the Silicon Valley culture it emerged from was largely built on government subsidies yet clings to the illusion that it is the product of heroic individuals such as Steve Jobs. And of course, The Transhumanist Wager exhibits this same belief that violence is necessary to open up the path to radical transformation.

One of the important questions we might ask is how analogous the Russian situation is to our own. What the Soviets were trying to do was transform their society in a way that had already been experienced technologically elsewhere, only under a brand new form of political regime. In other words, the technological transformation- industrialization- was already understood; where the Soviets were innovative was in the political sphere. At the same time, for modernization to be accomplished in the Soviet Union it needed to overthrow and uproot the traditional social forces that had aligned themselves against industrialization- the nobility and the clergy. A strong state was necessary to overcome powerful and intransigent classes.

None of these characteristics seems applicable to the kinds of technological change taking place today. For one, the types of complete technological transformation Fuller is talking about- say the current revolutions in biotechnology, robotics and artificial intelligence- are occurring everywhere nearly simultaneously, with no society having achieved the sort of clear breakout that a revolutionary movement would be necessary to push society towards replicating.

Nor does the question of technological transformation break clearly along class or even religious lines in a world where the right-to-left political spectrum remains very much intact. It is highly unlikely that a technological movement on the right will emerge that can remain viable while jettisoning those adhering to more traditional religious notions.

Likewise, it is improbable that any left-wing technological movement would abandon its commitment to the UN Declaration and the world’s poor. In thinking otherwise I believe Fuller is repeating Trotsky’s mistake- naively believing in political forces and realignments about to surface that simply aren’t there. (The same applies to Zoltan Istvan’s efforts.)

Yet by far the worst mistake Fuller makes is in embracing the Stalinist idea that acceleration needs to be politicized and that violence is a useful tool in achieving this acceleration. The danger here is not that justifying violence in the pursuit of transhumanist ends is likely to result in widespread collective violence in the name of such goals, but that it might start to attract sadists and kooks whose random acts of violence “in the name of transhumanism” put the otherwise humane goals of the movement at risk of social backlash.

At the same time, Fuller is repeating the grave error of Stalinism, which is to conflate power and the state with the process of technological transformation. The state is many and sometimes contradictory things, one of its roles indeed being to modernize society. Yet at the same time the state is tasked with protecting us from that very same modernization, by preventing us from being treated as mere parts in a machine rather than as individuals and citizens.

Fuller also repeats the error of the Stalinist state when it comes to the role and efficacy of violence in the political sphere: the proper role of violence is not as a creative force able to establish something new, but as a means of resistance and protection for those already bound together in some mutual alliance.

Still, we might want to take seriously the possibility that Fuller has tapped into a longer term trend, in which the ways technology is developing are opening up the possibility of the reappearance of the Stalinist state- but that’s a subject that will have to wait for another time.


The debate between the economists and the technologists, who wins?

[Image: Human bat vs robot gangster]

For a while now robots have been back in the news with a vengeance, and almost on cue they seem to have revived many of the nightmares we might have thought had been locked up in the attic of the mind with all sorts of other stuff from the 1980s we hoped we would never need again.

Big fears should probably be tackled only one at a time, so let’s leave aside for today the question of whether robots are likely to kill us, and focus on what should be an easier and less frightening nut to crack; namely, whether we are in the process of automating our way into a state of permanent, systemic unemployment.

Alas, even this seemingly less fraught question is no less difficult to answer. For like everything else, the issue seems to have given rise to two distinct sides, neither of which has a clear monopoly on the truth. Unlike elsewhere, however, the two sides in the debate over “technological unemployment” usually split less along ideological lines than on the basis of professional expertise. That is, those who dismiss the argument that advances in artificial intelligence and robotics have already displaced, or are about to displace, the types of work now done by humans- to the extent that we face a crisis of permanent underemployment and unemployment the likes of which has never been seen before- tend to be economists. How such an optimistic bunch came to be known as dismal scientists is beyond me; note that they are also on the optimistic side of their debate with environmentalists.

Economists are among the first to remind us that we’ve seen fears of looming robot-induced unemployment before, whether those of Ned Ludd and his followers in the 19th century or as recently as the 1960s. The destruction of jobs has, in the past at least, been accompanied by the kinds of transformation that create brand new forms of employment. In 1915 nearly 40% of Americans were agricultural laborers of some sort; now that number hovers around 2%. These farmers weren’t replaced by “robots”, but they certainly were replaced by machines.

Still, we certainly don’t have a 40% unemployment rate. Rather, as the number of farm laborer positions declined they were replaced by jobs that didn’t even exist in 1915. The place these farmers have not gone- where they probably would have gone in 1915, but which wouldn’t be much of an option today- is manufacturing. For in that sector something very similar to the hollowing out of agricultural employment has taken place, with the percentage of laborers in manufacturing declining from 25% in 1915 to around 9% today. Here the workers really have been replaced by robots, though job prospects on the shop floor have declined just as much because the jobs have been globalized. Again, even at the height of the recent financial crisis we didn’t see 25% unemployment, at least not in the US.

Economists therefore continue to feel vindicated by history: any time machines have supplanted human labor, we’ve been able to invent whole new sectors of employment where the displaced or their children have been able to find work. It seems we’ve got nothing to fear from the “rise of the robots.” Or do we?

Again setting aside the possibility that our mechanical servants will go all R.U.R. on us, anyone who takes seriously Ray Kurzweil’s timeline- that by the 2020s computers will match human intelligence and by 2045 exceed our intelligence a billionfold- has to conclude that most jobs as we know them are toast. The problem here, one that economists mostly fail to take into account, is that past technological revolutions ended up replacing human brawn and allowing workers to upscale into cognitive tasks. Human workers had somewhere to go. But a machine that did the same for tasks requiring intelligence, and that was indeed billions of times smarter than us, would make human workers about as essential to the functioning of a company as the Leaper ornament is to the functioning of a Jaguar.

Then again, perhaps we shouldn’t take Kurzweil’s timeline all that seriously in the first place. Skepticism would seem to be in order because the Moore’s Law-based exponential curve at the heart of Kurzweil’s predictions appears to have started to go all sigmoidal on us. That was the case made by John Markoff recently over at The Edge. In an interview about the future of Silicon Valley he said:

All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn’t just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That’s a profound moment.

Kurzweil argues that you have interlocked curves, so even after silicon tops out there’s going to be something else. Maybe he’s right, but right now that’s not what’s going on, so it unwinds a lot of the arguments about the future of computing and the impact of computing on society. If we are at a plateau, a lot of these things that we expect, and what’s become the ideology of Silicon Valley, doesn’t happen. It doesn’t happen the way we think it does. I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account.
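The shape of this disagreement is easy to make concrete. Here is a minimal sketch (entirely my own illustration; the growth rate and ceiling are arbitrary assumptions that come from neither Markoff nor Kurzweil) of why an exponential and a logistic (“sigmoidal”) curve are nearly indistinguishable until shortly before the latter saturates:

```python
import math

RATE = 0.5        # arbitrary growth rate
CEILING = 1000.0  # arbitrary physical ceiling for the logistic curve

def exponential(t):
    """Kurzweil-style growth: doubling on a fixed schedule, forever."""
    return math.exp(RATE * t)

def logistic(t):
    """Markoff-style growth: tracks the exponential early, then saturates."""
    return CEILING / (1.0 + (CEILING - 1.0) * math.exp(-RATE * t))

# Both curves start at 1.0 and are nearly identical for small t;
# by t=30 the exponential is over 3,000 times larger.
for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):>11.1f}  logistic={logistic(t):>7.1f}")
```

For the early points the two curves are almost identical, which is why both camps can read the same historical data and reach opposite forecasts; only the most recent observations- like the transistor prices Markoff cites- can tell them apart.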

Although Markoff admits there has been great progress in pattern recognition, there has been nothing similar for the kinds of routine physical tasks found in much low-skilled, mobile work. As evidenced by the recent DARPA challenge, if you want a job safe from robots, choose something for a living that requires mobility and the performance of a variety of tasks- plumber, home health aide, etc.

Markoff also sees job safety on the higher end of the pay scale in cognitive tasks computers seem far from being able to perform:

We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

The upshot of all this is that there’s less to be feared from technological unemployment than many think:

There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster, and things are just running amok. In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

The problem, I think, with the case against technological unemployment made by many economists and someone like Markoff is that they seem to be taking on a rather weak and caricatured version of the argument. That, at least, is the conclusion one comes to when taking into account what is perhaps the most reasoned and meticulous book to try to convince us that the boogeyman of robots stealing our jobs may have been all in our imagination before, but that it is real indeed this time.

I won’t so much review the book I am referencing, Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future, as lay out how he responds to the case from economics that we’ve been here before and have nothing to worry about, or to Markoff’s observation that because Moore’s Law has hit a wall (has it?), we need no longer worry so much about the transformative implications of embedding intelligence in silicon.

I’ll take the second one first. In terms of the idea that the end of Moore’s Law will derail tech innovation, Ford makes a pretty good case that:

Even if the advance of computer hardware capability were to plateau, there would be a whole range of paths along which progress could continue. (71)

Continued progress in software, and especially in new types of (notably parallel) computer architecture, will continue to be mined long after Moore’s Law has reached its apogee. Cloud computing should also mean that silicon needn’t compete with neurons at scale: you don’t have to fit the computational capacity of a human individual into a machine of roughly similar size, but can remotely tap into a much, much larger and more energy-intensive supercomputer that gives you human-level capacities. Ultimate density has become less important.

What this means is that we should continue to see progress (and perhaps very rapid progress) in robotics and artificially intelligent agents. Given that we have what is often thought of as a three-sector labor market- agriculture, manufacturing, and services- and the first two are already largely mechanized and automated, the next wave will fall hardest on the service sector, and the question becomes: will there be any place left for workers to go?

Ford makes a pretty good case that eventually we should be able to automate almost anything human beings do, for all automation means is breaking a task down into a limited number of steps. And the examples he comes up with, where we have already shown this is possible, are both surprising and sometimes scary.

Perhaps 70 percent of financial trading on Wall Street is now done by trading algorithms (which don’t seem any less inclined to panic). Algorithms now perform legal research, compose orchestral music, and independently discover scientific theories. And those are the fancy robots. Most are like ATMs, where it is the customer who now does part of the labor involved in some task. Ford thinks the fast food industry is ripe for innovation in this way, with the customer designing their own food and some system of small robots then “building” it. Think of your own job. If you can describe it to someone in a discrete number of steps, Ford thinks the robots are coming for you.

I was thankful that, though himself a technologist, Ford placed technological unemployment in a broader context, seeing it as part and parcel of trends since 1970 such as a greater share of GDP moving from labor to capital, soaring inequality, stagnant wages for middle class workers, the decline of unions, and globalization. His solution to these problems is a guaranteed basic income, for which he makes a humane and non-ideological argument, reminding those on the right who might find the idea anathema that conservative heavyweights such as Milton Friedman have argued in its favor.

The problem from my vantage point is not that Ford has failed to make a good case against those on the economists’ side of the issue who would accuse him of committing the Luddite fallacy; it’s that his case is perhaps both premature and not radical enough. It is premature in the sense that while all the other trends- rising inequality, the decline of unions, etc.- are readily apparent in the statistics, technological unemployment is not.

Perhaps, then, technological unemployment is only a small part of a much larger trend pushing us in the direction of the entrenchment and expansion of inequality and away from the type of middle class society established in the last century. The tech culture of Silicon Valley and the companies it has built are here a little like Burning Man: an example of late capitalist culture at its seemingly most radical and imaginative, one that temporarily escapes from, rather than creates, a truly alternative and autonomous political and social space.

Perhaps the types of technological transformation, already here and looming, that Ford lays out could truly serve as the basis for a new form of political and economic order, an alternative to the inegalitarian turn, but he doesn’t explore them. Nor does he discuss the already present dark alternative to the kind of socialism through AI we find in the “Minds” of Iain Banks; namely, the surveillance capitalism we have allowed to be built around us, which now stands as a bulwark against our preserving our humanity and preventing ourselves from becoming robots.

Would AI and Aliens be moral in a godless universe?

[Image: Black hole]

Last time I attempted to grapple with R. Scott Bakker’s intriguing essay on what kinds of philosophy aliens might practice, and came away dizzied by questions. Luckily, I had a book in my possession which seemed to offer me the answers, a book that had nothing to do with a modern preoccupation like the question of alien philosophers at all, but rather with a metaphysical problem that has been barred from philosophy, except among seminary students, since Darwin; namely, whether or not there is such a thing as moral truth if God doesn’t exist.

The name of the book was Robust Ethics: The Metaphysics and Epistemology of Godless Normative Realism (contemporary philosophy isn’t all that sharp when it comes to titles), by Erik J. Wielenberg. Now, I won’t even attempt to write a proper philosophical review of Robust Ethics, for the book has been excellently dissected by a proper philosopher, John Danaher, in pieces such as this, this, and this, and one more. Indeed, it was Danaher’s thoughtful reviews that had resulted in Wielenberg’s short work being in the ever-changing pile of books that shadows my living room floor like a patch of unextractable mold. It was just the book I needed when thinking about what types of intelligence might be possessed by extraterrestrials.

It’s a problem I ran into when reviewing David Roden’s Posthuman Life, and it goes like this: while conceiving of an alternative to our human type of intelligence is not so much easy as it is something at which I don’t simply draw a blank, it’s damned near impossible for me to imagine what an alternative to our moral cognition and action would consist of, and how it would be embedded in these other forms of intelligence.

The way Wielenberg answers this question would seem to throw a wrench into Bakker’s Blind Brain Theory (BBT), because what Bakker is urging is that we be suspicious of our cognitive intuitions, since they were provided by evolution not as a means of knowing the truth but for their effectiveness in supporting survival and reproduction; whereas Wielenberg is making the case that we can (luckily) generally rely on these intuitions because of the way they have emerged out of a very peculiar human evolutionary story, one which we largely do not share with other animals. That is, Wielenberg’s argument is anthropocentric to its core, and therein lies a new set of problems.

His contention, in essence, is that the ability of human beings to perceive moral truth arises as a consequence of the prolonged period of childhood we experience in comparison with other animals. In responding to the argument by Sharon Street that moral “truth” would seem quite different from the perspective of lions, bonobos, or social insects than from a human standpoint, Wielenberg responds:

Lions and bonobos lack the nuclear family structure. Primatologist Frans de Waal suggests the “[b]onobos have stretched the single parent system to the limit”. He also claims that an essential component of human reproductive success is the male-female pair bond which he suggests “sets us apart from the apes more than anything else”. These considerations provide some support for the idea that a moralizing species like ours requires an evolutionary path significantly different from that of lions or bonobos. (171)

The prolonged childhood of humans necessitates both pair-bonding and “alloparents” that for Wielenberg shape and indeed create our moral disposition and perception in a way seen in no other animals.

As for the social insects scenario suggested by Street, the social insects (termites, ants, and many species of bees and wasps) are so different from us that it’s hard to evaluate whether such a scenario is nomologically possible. (171)

In a sense the ability to perceive moral truth, what Wielenberg characterizes as “brute facts” such as “rape is wrong”, emerges out of the slowness with which cultural/technological knowledge must be passed from adults to the young. Were children born fully formed with sufficient information for their own survival (or a quick way of gaining needed capacities/knowledge), neither the pair bond nor the care of the “village” would be necessary, and the moral knowledge that comes as a result of this period of dependence/interdependence might go undiscovered.

Though I was very much rooting for Wielenberg to succeed in providing an account of moral realism absent any need for God, I believe that in these reflections, found in the very last pages of Robust Ethics, he may have inadvertently undermined that very noble project.

I have complained before about someone like E.O. Wilson’s lack of imagination when it comes to alternative forms of intelligence on worlds other than our own, but what Wielenberg has done is perhaps even more suffocating. For if the perception of moral truth depends upon the evolution of creatures dependent on pair bonding and alloparenting, then what this suggests is that, due to our peculiarities, human beings might be the only species in the universe capable of perceiving moral truth. This is not the argument Wielenberg likely hoped he was making at all, and indeed it is more anthropocentric than the argument of some card-carrying theists.

I suppose Wielenberg might object that any intelligent aliens would likely require the same extended period of learning as ourselves, because they too would have arrived at their station via cultural/technological evolution, which seems to demand long periods of dependent learning. Perhaps, or perhaps not. For even if I can’t imagine a biological species in which knowledge is passed directly from parent to offspring, we know that it’s possible- the efficiency of DNA as an information storage device is well known.

Besides, I think it’s a mistake to see biological intelligence as the type of intelligence that will command the stage over the long durée. Even if artificial intelligences, like children, need to learn many tasks through actual experience rather than programming “from above”, the advantage of AI over the biological sort is that it can then share this learning directly with fully grown copies of itself; like Neo in The Matrix, its “progeny” can say “I know kung fu” without ever having learned it themselves. According to Wielenberg’s logic it doesn’t seem that such intelligent entities would necessarily perceive brute moral facts or ethical truths, so if he is right an enormous contraction of the potential scale of the moral universe would have occurred. The actual existence of moral truth, limited to perhaps one species in a lonely corner of an otherwise ordinary galaxy, would then seem to be a blatant violation of the Copernican principle, placing us back onto the center stage of the moral narrative of the universe- if it has such a narrative to begin with.
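The Neo point can be put in a few lines of code. This is a deliberately cartoonish sketch of my own (the class and its methods are hypothetical stand-ins, not any real machine learning API): once a machine has learned something, its learned state can be cloned into any number of “offspring” that never undergo the long dependent childhood on which Wielenberg’s account of moral perception turns.

```python
import copy

class Agent:
    """Hypothetical learning machine; 'skills' stands in for model weights."""
    def __init__(self):
        self.skills = {}

    def learn(self, skill, hours):
        # Stand-in for an expensive, experience-driven training process.
        self.skills[skill] = f"mastered after {hours} hours of practice"

    def spawn(self):
        # Learned state copies over instantly and losslessly: no childhood,
        # no pair bond, no alloparents required.
        child = Agent()
        child.skills = copy.deepcopy(self.skills)
        return child

teacher = Agent()
teacher.learn("kung fu", hours=10_000)
student = teacher.spawn()
print(student.skills["kung fu"])  # "knows kung fu" without ever training
```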

The only way, it seems, that one can hold both the assertion of moral truth found in Godless Normative Realism and the Copernican principle to be true would be if the brute facts of moral truth were more broadly perceivable- across species, and conceivably within forms of intelligence that have emerged, or have been built, on the basis of an evolutionary trajectory and environment very different from our own.

I think the best chance here is if moral truth were somehow related to the truths of mathematics (indeed, Wielenberg thinks the principle of contradiction [which is the core of mathematics/logic] is essential to the articulation and development of our own moral sense, which begins with the emotions but doesn’t end there). Not only do other animals seem to possess forms of moral cognition that rival our own, but even radically different types of creatures, such as the social insects, are capable of discovering mathematical truths about the world- the kind of logic that underlies moral reasoning- something I explored extensively here.

Let’s hope that the perception of moral truth isn’t as dependent on our very peculiar evolutionary development as Wielenberg’s argument suggests, for if it is, that particular form of truth might be so short-lived and isolated in the cosmos that someone might be led to the mistaken conclusion that it never existed at all.

Do Extraterrestrials Philosophize?


The novelist and philosopher R. Scott Bakker recently put out a mind-blowing essay on the philosophy of extraterrestrials, which isn’t as Area 51 as such a topic might seem at first blush. After all, Voltaire covered the topic of aliens, and if a Frenchman is still a little too playful for your philosophical tastes, recall that Kant thought the topic of extraterrestrial intelligence important enough to cover extensively as well, and you can’t get much more full of dull seriousness than the man from Königsberg.

So let’s take an earnest look at Bakker’s alien philosophy… well, not just yet. Before I begin it’s necessary to lay out a very basic version of the philosophical perspective Bakker is coming from, for in a way his real goal is to use some of our common intuitions regarding humanoid aliens as a way of putting flesh on the bones of two often misunderstood and (at least among philosophers) not widely held philosophical positions- eliminativism and Blind Brain Theory- both of which, to my lights at least, could be subsumed under one version of the ominous- and cool-sounding philosophy of Dark Phenomenology. Though once you get a handle on dark phenomenology it won’t seem all that ominous, and if it’s cool, it’s not the type of cool that James Dean or the Fonz before season 5 would have recognized.

Eliminativism, if I understand it, is the full recognition that perhaps all our notions about human mental life are suspect insofar as they have not been given a full scientific explanation. In a sense, then, eliminativism is merely an extension of the materialization (some would call it dis-enchantment) that has been going on since the scientific revolution.

Most of us no longer believe in angels, demons or fairies, not to mention quasi-scientific ideas that have ultimately proven to be empty of content, like the ether or phlogiston. Yet in those areas science has yet to reach, especially areas that concern human thinking and emotion, we continue to cling to what strict eliminativists believe are likely to prove similar fictions- a form of myth that can range from categories of mental disease without much empirical substance to more philosophically and religiously determined beliefs, such as those in free will, intentionality and the self.

I think Bakker is attracted to eliminativism because it allows us to cut the Gordian knot of problems that have remained unresolved since the beginning of Western philosophy itself: problems built around assumptions which seem to be increasingly brought into question in light of our growing knowledge of the actual workings of the human brain, rather than our mere introspection regarding the nature of mental life. Indeed, a kind of subset of eliminativism in the form of Blind Brain Theory essentially consists in the acknowledgement that the brain was designed by evolution for a certain kind of blindness.

What was not necessary for survival has been made largely invisible to the brain, and seeing what has not been revealed takes great effort. Philosophy’s mistake, from the standpoint of a proponent of Blind Brain Theory, has always been to try to shed light upon this darkness from introspection alone- a Sisyphean task in which the philosopher, if not made ridiculous, becomes hopelessly lost in the dark labyrinth of the human imagination. In contrast, an actually achievable role for philosophy would be to define the boundary of the unknown until the science necessary to study this realm has matured enough for its investigations to begin.

The problem becomes: what can one possibly add to the philosophical discourse once one has taken an eliminativist/Blind Brain position? Enter the aliens, for Bakker manages to make a very reasonable argument that we can use both positions to give us a plausible picture of what the mental life and philosophy of intelligent “humanoid” aliens might look like.

In terms of understanding the minds of aliens, eliminativism and Blind Brain Theory are like addenda to evolutionary psychology. An understanding of the perceptual limitations of our aliens- not just mental limitations, but limitations brought about by conditions of time and space- should allow us to make reasonable guesses not only about the philosophical questions, but also the philosophical errors, our intelligent aliens would be likely to make.

In a way the application of eliminativism and BBT to intelligent aliens puts me in mind of Isaac Asimov’s short story Nightfall, in which a world bathed in perpetual light is destroyed when it succumbs to the fall of night. There it is not the evolved limitations of the senses that prevent Asimov’s “aliens” from perceiving darkness, but their living on a planet whose six suns keep it bathed in unending day.

I certainly agree with Bakker that there is something pregnant and extremely useful in both eliminativism and Blind Brain Theory, though perhaps not so much in terms of understanding the possibility space of “alien intelligence” as in understanding our own intelligence- the way it has unfolded and developed over time, and been embedded in a particular spatio-temporal order we have only recently gained the power to see beyond.

Nevertheless, I think there are limitations to the model. After all, it isn’t even clear to what extent the kinds of philosophical problems that capture the attention of intelligence are the same even across our own species. How are we to explain the differences in the primary questions that obsess, say, Western versus Chinese philosophy? Surely something beyond neurobiology and spatio-temporal location is necessary to understand the development of human philosophy in its various schools and cultural guises, including how a discourse has unfolded historically and the degree to which it has been supported by the powers and techniques that secure the survival of some question or perspective over long stretches of time.

There is another way in which the use of eliminativism or Blind Brain Theory might lead us astray when it comes to thinking about alien intelligence- it just isn’t weird enough. When the story of the development of not just human intelligence, but especially our technological/scientific civilization, is told in full detail, it seems so contingent as to be quite unlikely to repeat itself. The big question, I think, is what the possible alternative paths are to intelligence of a human degree or greater, and to technological civilizations like or more advanced than our own. These, of course, are questions for speculative philosophy and fiction- questions that can be scientifically informed in some way, but are very unlikely to be scientifically answered. And even if we could discover the very distant technological artifacts of another technological civilization, as the new Milner/Hawking project hopes, there would remain no way to reverse engineer our way back to the lost “philosophical” questions that would have once obsessed the biological “seeds” of such a civilization.

Then again, we might at least come up with some well-founded theories, though not from direct contact with or investigation of alien intelligence itself. Our studies of biology are already leading to alternative understandings of the ways intelligence can be embedded- say, in the amazing cephalopods. As our capacity for biological engineering increases we will be able to make models of, map alternative histories for, and even create alternative forms of living intelligence. Indeed, our current development of artificial intelligence is like an enormous applied experiment in a form of intelligence alternative to our own.

What we might hope is that such alternative forms of intelligence not only allow us to glimpse the limits of our own perception and pattern-making, but might even allow us to peer into something deeper, more enchanted and mystical, beyond. We might hope even more deeply that in the far future something of the existential questions that have obsessed us will still be there, like fossils, in our posthuman progeny.