The Kingdom of Glass Walls: A Fable


There once was a king of extraordinary vanity who liked to spend his time riding through the great city he ruled in his gilded coach, allowing all who saw him to marvel at his attire, which was woven of gold thread and bejeweled with diamonds and sapphires.

The vain king especially liked parading himself in the neighborhoods of the poor, reasoning that the sight of him would be the closest thing to heaven such filthy people would have before they died and met God’s glorious angels. It was during one of these “tours” that a woman threw a bucket of shit out of her window, which landed squarely on the king’s head.

Who knows if it was by accident, as the poor peasant girl claimed, that she was merely emptying the overflow of her commode onto the streets when the bucket slipped from her hands and fell onto the king? In any case, the king awoke from unconsciousness after the blow a changed man, his vanity transformed.

The changed king adopted the unusual practice of doing away with all prerogatives of royal privacy and ordered that the bricks of his castle walls be replaced with the most transparent glass. Anyone could now see all the doings of the royal household, as if the court were fish in a bowl, to the great annoyance of the queen.

The king soon went so far as to abandon his former golden attire for no clothes at all. The queen at this point wanted to have the royal doctors declare the king mad on account of his having been hit on the head by a bucket, but there were powerful courtiers in the court who decided that the king was to remain on his throne.

After having a dream that his city-kingdom had been invaded by an enemy that feared the naked king was planning a surprise attack, he had the defensive walls of the city leveled, so that no one might think he meant them ill. Having experienced how liberating it felt to be open to all the world through his glass or torn-down walls and his nakedness, the king wished that all his subjects could experience what he had.

He thus offered to pay out of his own treasury the cost of replacing any of his subjects’ walls of wood, brick, or straw with walls of glass. Many of his subjects took the king up on the offer, for who would not want their cold house with walls of straw or dilapidated wood to be replaced with walls of clean, shiny glass?

But then a bug was placed in the king’s ear by a powerful anonymous courtier: “He with nothing to hide has nothing to fear,” and with that the suspicion entered the king’s mind that the only people who would have kept their opaque walls must be hiding something.

Criminals and bandits soon found ways to game this system, largely by using disguises and fake scenes painted on their glass walls, while those with enough money found that with the proper “donation” they could keep their brick houses if they just put in a few more windows. However, most law-abiding persons, now completely surrounded by glass, had left themselves open to all sorts of peeping Toms, shysters, and burglars. And with the walls of the city taken down, some said they could see the torchlights of an army on the nearby hills at night.

Citizens began to complain that perhaps replacing the walls of their homes with glass and tearing down the walls of the city had not been a good idea. Just then the king announced, to everyone’s surprise, that he had fallen in love with a pig and wished to divorce the queen and marry his newly beloved Sus scrofa domesticus.

At this the queen and even more importantly the powerful courtiers had had enough. The king was arrested and placed in a dungeon, although, not to be accused of cruelty, he was allowed to keep his pig with him. New walls much thicker than before replaced the castle’s walls of glass and the walls around the city were rebuilt.  

A decree went out from the court declaring that no citizen was allowed to replace the glass walls of their homes. The queen claimed that such total transparency among the subjects was necessary to catch criminals and even more so spies from neighboring kingdoms who meant to do the city harm, and after many years the people forgot that things had ever been different or that they had once been ruled by a transparent king.         

Stalinism as Transhumanism


The ever-controversial Steve Fuller has recently published a number of jolting essays at the IEET (there has been a good discussion on David Roden’s blog on the topic), yet whatever one thinks about the prospect of a zombie vs. transhumanist apocalypse, he has managed to raise serious questions for anyone who identifies with the causes of transhumanism and techno-progressivism; namely, what is the proper role, if any, of the revolutionary, modernizing state in such movements, and to what degree should the movement be open to violence as a means to achieve its ends? Both questions, I will argue, can best be answered by looking at the system constructed in the Soviet Union between 1929 and 1953 under the reign of Joseph Stalin.

Those familiar with the cast of characters in the early USSR will no doubt wonder why I chose to focus on the “conservative” Stalin rather than Leon Trotsky, who certainly evidenced something like proto-transhumanism with quotes such as this one from his essay Revolutionary and Socialist Art:

Man will make it his purpose to master his own feelings, to raise his instincts to the heights of consciousness, to make them transparent, to extend the wires of his will into hidden recesses, and thereby to raise himself to a new plane, to create a higher social biologic type, or, if you please, a superman.

It was views like this one that stood as evidence for the common view among intellectuals sympathetic to Marxism that it was Trotsky who was the true revolutionary figure and heir of Lenin, whereas Stalin was instead a reactionary in Marxist garb who both murdered the promise latent in the Russian Revolution and constructed on the revolution’s corpse a system of totalitarian terror.

Serious historians no longer hold such views in light of evidence acquired after the opening of the Soviet archives following the USSR’s collapse in 1991. What we’ve gained as a consequence of these documents is a much more nuanced and detailed view of the three major figures of the Russian Revolution: Lenin, Trotsky, and Stalin, all of whom in some way looked to violence as the premier tool for crafting a new social and political order.

Take the father of the Russian Revolution- Lenin. The common understanding of Lenin in the past was to see him as a revolutionary pragmatist who engaged in widespread terror and advocated dictatorship largely as a consequence of the precarious situation of the revolution and the atmosphere of danger and bloodshed found in the Russian civil war. In fact, the historical record presented in the archives now makes clear that many of the abhorrent features of state communism in Russia once associated with Stalin in fact originated under Lenin, obscenities such as the forerunner of the KGB, the Cheka, or the Soviet system of concentration camps- Solzhenitsyn’s gulag archipelago. Far from seeing them as temporary expedients of a revolution endangered by civil war, Lenin saw these innovations as a permanent feature of Marx’s “dictatorship of the proletariat,” which he had reimagined as the permanent rule of a revolutionary, technocratic elite.

Trotsky comes off only a little better, because he never assumed the mantle of Soviet dictator. Trotsky’s problems, however, originate from his incredible egotism combined with his stunning political naivete. His idea was that in the 1920s conditions were ripe to export the communist revolution globally, barely grasping just how precarious the situation was for the newly victorious communist revolutionaries even within his own country. Indeed, to the extent that domestic communists in non-Russian countries identified with Soviet communism and prematurely pushed their societies towards revolution, this only ended up empowering reactionaries and leading to the rise of fascist parties globally, parties that would come quite close to strangling the infant of state communism while it was still in its cradle.

Then there’s Stalin, whom those sympathetic to communism often see as the figure who did the most to destroy its promise. The so-called “man of steel” is understood as a reactionary monster who replaced the idealism of the early Soviet Union with all the snakes that had infected czarist Russia’s head.

This claim too has now been upended. For what is clear is that although Stalin reoriented the USSR away from Trotsky’s dream of world revolution to focus on “socialism in one country,” the state Stalin managed to create was perhaps the most radical and revolutionary the world has yet seen.

The state has always played a large role in modernization, but it was only in the types of communist regimes which Stalin was the first to envision that the state became synonymous with modernization itself. And it wasn’t only modernization, but a kind of ever-accelerating modernization that aimed to break free of the gravity of history and bring humanity to a new world inhabited by a new type of man.

It was his revolutionary impatience and desire to accelerate the pace of history that led Stalin down the path to being perhaps the most bloody tyrant the world has yet known. The collectivization of Soviet agriculture alone came at the cost of at least 5 million deaths. What Stalin seemed to possess that his fellow revolutionaries lacked was a kind of sociopathy that left him unflinching in the face of mass suffering, though he also became possessed by the paranoia to which tyrants are prone, and towards the end of his reign he killed as much out of fear as out of his desire to propel the USSR through history.

Here I return to Fuller who seems to have been inspired by state communism and appears to be making the case that what transhumanism needs is a revolutionary vanguard that will, violently if necessary, push society towards transformative levels of technological change. Such change will then make possible the arrival of a new form of human being.

There is a great deal of similarity here with the view of the state presented by Zoltan Istvan in his novel The Transhumanist Wager, though that work is somewhat politically schizophrenic, combining elements of Ayn Rand and Bolshevism at the same time, which perhaps makes sense given that the Silicon Valley culture it emerged from was largely built on government subsidies yet clings to the illusion that it is the product of heroic individuals such as Steve Jobs. And of course, The Transhumanist Wager exhibits this same belief that violence is necessary to open up the path to radical transformation.

One of the important questions we might ask is how analogous the Russian situation was to our own. What the Soviets were trying to do was to transform their society in a way that had already been experienced technologically elsewhere, only under a brand new form of political regime. In other words, the technological transformation- industrialization- was already understood; where the Soviets were innovative was in the political sphere. At the same time, modernization in the Soviet Union required overthrowing and uprooting the traditional social forces that had aligned themselves against industrialization- the nobility and the clergy. A strong state was necessary to overcome powerful and intransigent classes.

None of these characteristics seems applicable to the kinds of technological change taking place today. For one, the types of complete technological transformation Fuller is talking about are occurring everywhere nearly simultaneously- say, the current revolutions in biotechnology, robotics, and artificial intelligence- where no society has achieved the sort of clear breakout that a revolutionary movement would be necessary to push society towards replicating.

Nor does the question of technological transformation break clearly along class or even religious lines in a world where the right-to-left political spectrum remains very much intact. It is highly unlikely that a technological movement on the right will emerge that can remain viable while jettisoning those adhering to more traditional religious notions.

Likewise, it is improbable that any left wing technological movement would abandon its commitment to the UN Declaration and the world’s poor. In thinking otherwise I believe Fuller is repeating Trotsky’s mistake- naively believing in political forces and realignments about to surface that simply aren’t there. (The same applies to Zoltan Istvan’s efforts.)

Yet by far the worst mistake Fuller makes is in embracing the Stalinist idea that acceleration needs to be politicized and that violence is a useful tool in achieving this acceleration. The danger here is not that justifying violence in the pursuit of transhumanist ends is likely to result in widespread collective violence in the name of such goals, but that it might start to attract sadists and kooks whose random acts of violence “in the name of transhumanism” put the otherwise humane goals of the movement at risk of social backlash.

At the same time, Fuller repeats the grave error of Stalinism, which is to conflate power and the state with the process of technological transformation. The state is many and sometimes contradictory things, with one of its roles indeed being to modernize society. Yet at the same time the state is tasked with protecting us from that very same modernization, by preventing us from being treated as mere parts in a machine rather than as individuals and citizens.

Fuller also repeats the error of the Stalinist state when it comes to the role and efficacy of violence in the political sphere, for the proper role of violence is not as a creative force able to establish something new, but as a means of resistance and protection for those already bound together in some mutual alliance.

Still, we might want to take seriously the possibility that Fuller has tapped into a longer-term trend in which the way technology is developing opens up the possibility of the reappearance of the Stalinist state, but that’s a subject that will have to wait for another time.


The debate between the economists and the technologists, who wins?


For a while now robots have been back in the news with a vengeance, and almost on cue they seem to have revived many of the nightmares we might have thought had been locked up in the attic of the mind with all sorts of other stuff from the 1980s, which it was hoped we would never need again.

Big fears should probably only be tackled one at a time, so let’s leave aside for today the question of whether robots are likely to kill us, and focus on what should be an easier and less frightening nut to crack; namely, whether we are in the process of automating our way into a state of permanent, systemic unemployment.

Alas, even this seemingly less fraught question is no less difficult to answer. For like everything else, the issue seems to have given rise to two distinct sides, neither of which has a clear monopoly on the truth. Unlike elsewhere, however, these two sides in the debate over “technological unemployment” usually split less along ideological lines than on the basis of professional expertise. That is, those who dismiss the argument that advances in artificial intelligence and robotics have already displaced, or are about to displace, the types of work now done by humans to the extent that we face a crisis of permanent underemployment and unemployment the likes of which has never been seen before tend to be economists. How such an optimistic bunch came to be known as dismal scientists is beyond me- note how they are also on the optimistic side of the debate with environmentalists.

Economists are among the first to remind us that we’ve seen fears of looming machine-induced unemployment before, whether those of Ned Ludd and his followers in the 19th century, or as close to us as the 1960s. The destruction of jobs has, in the past at least, been accompanied by the kinds of transformation that created brand new forms of employment. In 1915 nearly 40 percent of Americans were agricultural laborers of some sort; now that number hovers around 2 percent. These farmers weren’t replaced by “robots,” but they certainly were replaced by machines.

Still, we certainly don’t have a 40 percent unemployment rate. Rather, as the number of farm laborer positions declined, they were replaced by jobs that didn’t even exist in 1915. The place these farmers have not gone, though it is where they probably would have gone in 1915, is manufacturing, which wouldn’t be much of an option today. For in that sector something very similar to the hollowing out of employment in agriculture has taken place, with the percentage of laborers in manufacturing declining since 1915 from 25 percent to around 9 percent today. Here the workers really have been replaced by robots, though job prospects on the shop floor have declined just as much because the jobs have been globalized. Again, even at the height of the recent financial crisis we haven’t seen 25 percent unemployment, at least not in the US.

Economists therefore continue to feel vindicated by history: any time machines have managed to supplant human labor we’ve been able to invent whole new sectors of employment where the displaced or their children have been able to find work. It seems we’ve got nothing to fear from the “rise of the robots.” Or do we?

Again setting aside the possibility that our mechanical servants will go all R.U.R. on us, anyone who takes seriously Ray Kurzweil’s timeline- that by the 2020s computers will match human intelligence and by 2045 exceed it a billionfold- has to come to the conclusion that most jobs as we know them are toast. The problem here, and one that economists mostly fail to take into account, is that past technological revolutions ended up replacing human brawn and allowing workers to upscale into cognitive tasks. Human workers had somewhere to go. But machines that did the same for tasks requiring intelligence, and that were indeed billions of times smarter than us, would make human workers about as essential to the functioning of a company as the Leaper ornament is to the functioning of a Jaguar.

Then again, perhaps we shouldn’t take Kurzweil’s timeline all that seriously in the first place. Skepticism would seem to be in order because the Moore’s Law-based exponential curve that is at the heart of Kurzweil’s predictions appears to have started to go all sigmoidal on us. That was the case made by John Markoff recently over at The Edge. In an interview about the future of Silicon Valley he said:

All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn’t just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That’s a profound moment.

Kurzweil argues that you have interlocked curves, so even after silicon tops out there’s going to be something else. Maybe he’s right, but right now that’s not what’s going on, so it unwinds a lot of the arguments about the future of computing and the impact of computing on society. If we are at a plateau, a lot of these things that we expect, and what’s become the ideology of Silicon Valley, doesn’t happen. It doesn’t happen the way we think it does. I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account.

Although Markoff admits there has been great progress in pattern recognition, there has been nothing similar for the kinds of routine physical tasks found in much low-skilled, mobile work. As evidenced by the recent DARPA challenge, if you want a job safe from robots, choose something for a living that requires mobility and the performance of a variety of tasks- plumber, home health aide, etc.

Markoff also sees job safety on the higher end of the pay scale in cognitive tasks computers seem far from being able to perform:

We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

The upshot of all this is that there’s less to be feared from technological unemployment than many think:

There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster, and things are just running amok. In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

The problem, I think, with the case against technological unemployment made by many economists and by someone like Markoff is that they seem to be taking on a rather weak and caricatured version of the argument. That, at least, is the conclusion one comes to when taking into account what is perhaps the most reasoned and meticulous book to try to convince us that the boogeyman of robots stealing our jobs might have been all in our imagination before, but that it is real indeed this time.

I won’t so much review the book I am referencing, Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future, here as lay out how he responds to the case from economics that we’ve been here before and have nothing to worry about, and to the observation of Markoff that because Moore’s Law has hit a wall (has it?), we need no longer worry so much about the transformative implications of embedding intelligence in silicon.

I’ll take the second one first. In terms of the idea that the end of Moore’s Law will derail tech innovation, Ford makes a pretty good case that:

Even if the advance of computer hardware capability were to plateau, there would be a whole range of paths along which progress could continue. (71)

Continued progress in software, and especially in new types of (particularly parallel) computer architecture, will continue to be mined long after Moore’s Law has reached its apogee. Cloud computing should also mean that silicon needn’t compete with neurons at scale. You don’t have to fit the computational capacity of a human individual into a machine of roughly similar size; you could instead tap remotely into a much, much larger and more energy-intensive supercomputer that gives you human-level capacities. Ultimate density has become less important.

What this means is that we should continue to see progress (and perhaps very rapid progress) in robotics and artificially intelligent agents. Given that we have what is often thought of as a three-sector labor market comprised of agriculture, manufacturing, and services, and that the first two are already largely mechanized and automated, the place where the next wave will fall hardest is the service sector, and the question becomes: will there be any place left for workers to go?

Ford makes a pretty good case that eventually we should be able to automate almost anything human beings do, for all automation means is breaking a task down into a limited number of steps. And the examples he comes up with, where we have already shown this is possible, are both surprising and sometimes scary.

Perhaps 70 percent of financial trading on Wall Street is now done by trading algorithms (which don’t seem any less inclined to panic). Algorithms now perform legal research, compose orchestral music, and independently discover scientific theories. And those are the fancy robots. Most are like ATMs, where it is the customer who now does part of the labor involved in some task. Ford thinks the fast food industry is ripe for innovation in this way, with the customer designing their own food that some system of small robots then “builds.” Think of your own job: if you can describe it to someone in a discrete number of steps, Ford thinks the robots are coming for you.

I was thankful that, though himself a technologist, Ford placed technological unemployment in a broader context, seeing it as part and parcel of trends after 1970 such as a greater share of GDP moving from labor to capital, soaring inequality, stagnant wages for middle class workers, the decline of unions, and globalization. His solution to these problems is a guaranteed basic income, for which he makes a humane and non-ideological argument, reminding those on the right who might find the idea anathema that conservative heavyweights such as Milton Friedman have argued in its favor.

The problem from my vantage point is not that Ford has failed to make a good case against those on the economists’ side of the issue who would accuse him of committing the Luddite fallacy; it’s that perhaps his case is both premature and not radical enough. It is premature in the sense that while all the other trends regarding rising inequality, the decline of unions, etc., are readily apparent in the statistics, technological unemployment is not.

Perhaps, then, technological unemployment is only a small part of a much larger trend pushing us in the direction of the entrenchment and expansion of inequality and away from the type of middle class society established in the last century. The tech culture of Silicon Valley and the companies it has built are here a little like Burning Man: an example of late capitalist culture at its seemingly most radical and imaginative that temporarily escapes, rather than creates, a genuinely alternative and autonomous political and social space.

Perhaps the types of technological transformation already here and looming that Ford lays out could truly serve as the basis for a new form of political and economic order offering an alternative to the inegalitarian turn, but he doesn’t explore them. Nor does he discuss the already present dark alternative to the kind of socialism through AI we find in the “Minds” of Iain Banks; namely, the surveillance capitalism we have allowed to be built around us, which now stands as a bulwark against our preserving our humanity and preventing ourselves from becoming robots.

Would AI and Aliens be moral in a godless universe?


Last time I attempted to grapple with R. Scott Bakker’s intriguing essay on what kinds of philosophy aliens might practice, and was left dizzied by questions. Luckily, I had a book in my possession which seemed to offer me the answers, a book that had nothing to do with a modern preoccupation like the question of alien philosophers at all, but rather with a metaphysical problem that had been barred from philosophy, except among seminary students, since Darwin; namely, whether or not there is such a thing as moral truth if God doesn’t exist.

The name of the book is Robust Ethics: The Metaphysics and Epistemology of Godless Normative Realism (contemporary philosophy isn’t all that sharp when it comes to titles), by Erik J. Wielenberg. Now, I won’t even attempt to write a proper philosophical review of Robust Ethics, for the book has been excellently dissected by a proper philosopher, John Danaher, in pieces such as this, this, and this, and one more. Indeed, it was Danaher’s thoughtful reviews that resulted in Wielenberg’s short work being in the ever-changing pile of books that shadows my living room floor like a patch of unextractable mold. It was just the book I needed when thinking about what types of intelligence might be possessed by extraterrestrials.

It’s a problem I ran into when reviewing David Roden’s Post-human Life that goes like this: while it is not so much easy as that I don’t simply draw a blank when conceiving of an alternative form of intelligence to our human type, it’s damned near impossible for me to imagine what an alternative form to our moral cognition and action would consist of, and how it would be embedded in these other forms of intelligence.

The way Wielenberg answers this question would seem to throw a wrench into Bakker’s Blind Brain Theory (BBT), because what Bakker is urging is that we be suspicious of our cognitive intuitions, since they were provided by evolution not as a means of knowing the truth but for their effectiveness in supporting survival and reproduction, whereas Wielenberg is making the case that we can generally (luckily) rely on these intuitions because of the way they have emerged out of a very peculiar human evolutionary story, one which we largely do not share with other animals. That is, Wielenberg’s argument is anthropocentric to its core, and therein lies a new set of problems.

His contention, in essence, is that the ability of human beings to perceive moral truth arises as a consequence of the prolonged period of childhood we experience in comparison to other animals. In responding to the argument by Sharon Street that moral “truth” would seem quite different from the perspective of lions, or bonobos, or social insects than from a human standpoint, Wielenberg responds:

Lions and bonobos lack the nuclear family structure. Primatologist Frans de Waal suggests the “[b]onobos have stretched the single parent system to the limit”. He also claims that an essential component of human reproductive success is the male-female pair bond which he suggests “sets us apart from the apes more than anything else”. These considerations provide some support for the idea that a moralizing species like ours requires an evolutionary path significantly different from that of lions or bonobos. (171)

The prolonged childhood of humans necessitates both pair-bonding and “alloparents” that for Wielenberg shape and indeed create our moral disposition and perception in a way seen in no other animals.

As for the social insects scenario suggested by Street, the social insects (termites, ants, and many species of bees and wasps) are so different from us that it’s hard to evaluate whether such a scenario is nomologically possible. (171)

In a sense the ability to perceive moral truth, what Wielenberg characterizes as “brute facts” such as “rape is wrong,” emerges out of the slow speed at which cultural/technological knowledge must be passed from adults to the young. Were children born fully formed with sufficient information for their own survival (or a quick way of gaining needed capacities/knowledge), neither the pair bond nor the care of the “village” would be necessary, and the moral knowledge that comes as a result of this period of dependence/interdependence might go undiscovered.

Though I was very much rooting for Wielenberg to succeed in providing an account of moral realism absent any need for God, I believe that in these reflections, found in the very last pages of Robust Ethics, he may have inadvertently undermined that very noble project.

I have complained before about someone like E.O. Wilson’s lack of imagination when it comes to alternative forms of intelligence on worlds other than our own, but what Wielenberg has done is perhaps even more suffocating. For if the perception of moral truth depends upon the evolution of creatures dependent on pair bonding and alloparenting, then what this suggests is that, due to our peculiarities, human beings might be the only species in the universe capable of perceiving moral truth. This is not the argument Wielenberg likely hoped he was making at all, and indeed it is more anthropocentric than the arguments of some card-carrying theists.

I suppose Wielenberg might object that any intelligent aliens would likely require the same extended period of learning as ourselves, because they too would have arrived at their station via cultural/technological evolution, which seems to demand long periods of dependent learning. Perhaps, or perhaps not. For even if I can’t imagine some biological species in which knowledge is passed directly from parent to offspring, we know that it’s possible- the efficiency of DNA as a storage device is well known.

Besides, I think it’s a mistake to see biological intelligence as the type of intelligence that will command the stage over the longue durée. Even if artificial intelligence, like children, needs to learn many tasks through actual experience rather than programming “from above”, the advantage of AI over the biological sort is that it can then share this learning directly with fully grown copies of itself; like Neo in the Matrix, its “progeny” can say “I know kung fu” without ever having themselves learned it. According to Wielenberg’s logic it doesn’t seem that such intelligent entities would necessarily perceive brute moral facts or ethical truths, so if he is right an enormous contraction of the potential scale of the moral universe would have occurred. The actual existence of moral truth, limited to perhaps one species in a lonely corner of an otherwise ordinary galaxy, would then seem to be a blatant violation of the Copernican principle and place us back onto the center stage of the moral narrative of the universe- if it has such a narrative to begin with.

It seems the only way one could hold that both the assertion of moral truth found in Godless Normative Realism and the Copernican principle are true would be if the brute facts of moral truth were more broadly perceivable: across species, and conceivably within forms of intelligence that have emerged or been built on the basis of an evolutionary trajectory and environment very different from our own.

I think the best chance here is if moral truth were somehow related to the truths of mathematics (indeed Wielenberg thinks the principle of contradiction [which is the core of mathematics/logic] is essential to the articulation and development of our own moral sense, which begins with the emotions but doesn’t end there). Not only do other animals seem to possess forms of moral cognition that rival our own, but even radically different types of creatures such as the social insects are capable of discovering mathematical truths about the world- the kind of logic that underlies moral reasoning- something I explored extensively here.

Let’s hope that the perception of moral truth isn’t as dependent on our very peculiar evolutionary development as Wielenberg’s argument suggests, for if it is, that particular form of truth might be so short-lived and isolated in the cosmos that someone might be led to the mistaken conclusion that it never existed at all.

Do Extraterrestrials Philosophize?


The novelist and philosopher R. Scott Bakker recently put out a mind-blowing essay on the philosophy of extraterrestrials, which isn’t as Area 51 as such a topic might seem at first blush. After all, Voltaire covered the topic of aliens, but if a Frenchman is still a little too playful for your philosophical tastes, recall that Kant thought the topic of extraterrestrial intelligence important enough to cover extensively as well, and you can’t get much more full of dull seriousness than the man from Königsberg.

So let’s take an earnest look at Bakker’s alien philosophy… well, not just yet. Before I begin it’s necessary to lay out a very basic version of the philosophical perspective Bakker is coming from, for in a way his real goal is to use some of our common intuitions regarding humanoid aliens as a way of putting flesh on the bones of two often misunderstood and (at least among philosophers) not widely held philosophical positions- eliminativism and Blind Brain Theory- both of which, to my lights at least, could be subsumed under one version of the ominous and cool-sounding philosophy of Dark Phenomenology. Though once you get a handle on dark phenomenology it won’t seem all that ominous, and if it’s cool, it’s not the type of cool that James Dean or the Fonz before season 5 would have recognized.

Eliminativism, if I understand it, is the full recognition that perhaps all our notions about human mental life are suspect insofar as they have not been given a full scientific explanation. In a sense, then, eliminativism is merely an extension of the materialization (some would call it dis-enchantment) that has been going on since the scientific revolution.

Most of us no longer believe in angels, demons or fairies, not to mention quasi-scientific ideas that have ultimately proven to be empty of content, like the ether or phlogiston. Yet in those areas where science has yet to reach, especially areas that concern human thinking and emotion, we continue to cling to what strict eliminativists believe are likely to prove similar fictions- a form of myth that can range from categories of mental disease without much empirical substance to more philosophically and religiously determined beliefs such as those in free will, intentionality and the self.

I think Bakker is attracted to eliminativism because it allows us to cut the Gordian knot of problems that have remained unresolved since the beginning of Western philosophy itself- problems built around assumptions which seem to be increasingly brought into question in light of our growing knowledge of the actual workings of the human brain, rather than our mere introspection regarding the nature of mental life. Indeed, a kind of subset of eliminativism in the form of Blind Brain Theory essentially consists in the acknowledgement that the brain was designed for a certain kind of blindness by evolution.

What was not necessary for survival has been made largely invisible to the brain, which cannot without great effort see what has not been revealed. Philosophy’s mistake, from the standpoint of a proponent of Blind Brain Theory, has always been to try to shed light upon this darkness from introspection alone- a Sisyphean task in which the philosopher, if not made ridiculous, becomes hopelessly lost in the dark labyrinth of the human imagination. In contrast, an actually achievable role for philosophy would be to define the boundary of the unknown until the science necessary to study this realm has matured enough for its investigations to begin.

The problem becomes: what can one possibly add to the philosophical discourse once one has taken an eliminativist/Blind Brain position? Enter the aliens, for Bakker manages to make a very reasonable argument that we can use both positions to give us a plausible picture of what the mental life and philosophy of intelligent “humanoid” aliens might look like.

In terms of understanding the minds of aliens, eliminativism and Blind Brain Theory are like addenda to evolutionary psychology. An understanding of the perceptual limitations of our aliens- not just mental limitations, but limitations brought about by conditions of time and space- should allow us to make reasonable guesses about not only the philosophical questions, but also the philosophical errors, likely to be made by our intelligent aliens.

In a way the application of eliminativism and BBT to intelligent aliens put me in mind of Isaac Asimov’s short story Nightfall, in which a world bathed in perpetual light is destroyed when it succumbs to the fall of night. There it is not the evolved limitations of the senses that prevent Asimov’s “aliens” from perceiving darkness but their being on a planet whose multiple suns keep it bathed in an unending day.

I certainly agree with Bakker that there is something pregnant and extremely useful in both eliminativism and Blind Brain Theory, though perhaps not so much in terms of understanding the possibility space of “alien intelligence” as in understanding our own intelligence and the way it has unfolded and developed over time, embedded in a particular spatio-temporal order we have only recently gained the power to see beyond.

Nevertheless, I think there are limitations to the model. After all, it isn’t even clear the extent to which the kinds of philosophical problems that capture the attention of intelligence are the same even across our own species. How are we to explain the differences in the primary questions that obsess, say, Western versus Chinese philosophy? Surely something beyond neurobiology and spatio-temporal location is necessary to understand the development of human philosophy in its various schools and cultural guises, including how a discourse has unfolded historically and the degree to which it has been supported by the powers and techniques that secure the survival of some question or perspective over long stretches of time.

There is another way in which the use of eliminativism or Blind Brain Theory might lead us astray when it comes to thinking about alien intelligence- it just isn’t weird enough. When the story of the development of not just human intelligence, but especially our technological/scientific civilization, is told in full detail, it seems so contingent as to be quite unlikely to repeat itself. The big question to ask, I think, is what are the possible alternative paths to intelligence of a human degree or greater, and to technological civilization like or more advanced than our own. These, of course, are questions for speculative philosophy and fiction that can be scientifically informed in some way, but are very unlikely to be scientifically answered. And even if we could discover the very distant technological artifacts of another technological civilization, as the new Milner/Hawking project hopes, there remains no way to reverse engineer our way to the lost “philosophical” questions that would have once obsessed the biological “seeds” of such a civilization.

Then again, we might at least come up with some well-founded theories, though not from direct contact with or investigation of alien intelligence itself. Our studies of biology are already leading to alternative understandings of the ways intelligence can be embedded, say with the amazing cephalopods. As our capacity for biological engineering increases we will be able to make models of, map alternative histories for, and even create alternative forms of living intelligence. Indeed, our current development of artificial intelligence is like an enormous applied experiment in an alternative form of intelligence to our own.

What we might hope is that such alternative forms of intelligence not only allow us to glimpse the limits of our own perception and pattern making, but might even allow us to peer into something deeper and more enchanted and mystical beyond. We might hope even more deeply that in the far future something of the existential questions that have obsessed us will still be there like fossils in our posthuman progeny.

Why aren’t we living in H.G. Wells’ scientific dictatorship?

H.G. Wells’ Things to Come

One of the more depressing things to come out of the 2008 financial crisis was just how little it managed to affect our expectations about the economy and political forms of the future. Sure, there was Occupy Wall Street, and there’s been at least some interesting intellectual ferment here and there with movements such as Accelerationist Marxism and the like, but none have really gone anywhere. Instead what we’ve got is the same old system, only now with even more guarantees and supports for the super rich. Donald Trump may be a blowhard and a buffoon, but even buffoons and blowhards can tell the truth, as he did during last Thursday’s debate when he essentially stated that politicians were in the pocket of those with the cash, such as himself, who were underneath it all really running the show.

The last really major crisis of capitalism wasn’t anything like this. In the 1930’s not only had the whole system gone down, but nearly everyone seemed convinced that capitalism (and some even thought the representative democracy that had emerged in tandem with it) was on the way out.

Then again, the political and economic innovation of the early 20th century isn’t the kind of thing any of us would wish for. Communism, which to many born after 1989 may seem as much an antiquated creature from another world as American revolutionaries in powdered wigs, was by the 1930’s considered one of the two major ways the society of the future would likely be organized, and its competitor over the shape of the future wasn’t some humane and reasoned alternative, but the National Socialism of Hitler’s dark Reich.

If one wants to get a sense of the degree to which the smart money was betting against the survival of capitalism and democracy in the 1930’s, one couldn’t do much better than that most eerily prescient of science-fiction prophets- H.G. Wells. In many ways, because he was speaking through the veneer of fiction Wells could allow himself to voice opinions which would have led even political radicals to blush. Also, because he was a “mere” fiction author his writings became one of the few ways intellectuals and politicians in liberal societies could daydream about a way out of capitalism’s constant crises, democracy’s fissiparousness and corruption, and, most importantly for the survival of humanity, the nation-state’s increasingly destructive wars.

Wells’ 1933 The Shape of Things to Come, published not long after the Nazis had come to power in Germany, is perhaps his best example of a work that blurs the boundaries between fiction and a piece of political analysis, polemic, and prediction. In the guise of a dream book of a character who has seen the future of the world from the 1930’s to the beginning of the 22nd century, Wells is able to expound upon the events of the day and their possible implications- over a century into the future.

Writing six years before the event took place, Wells spookily imagines World War II beginning with the German invasion of Poland. Also identifying the other major aggressor in a world war still to come, Wells realizes Japan had stepped into a quagmire by invading China, from which much ill would come.

These predictions of coming violence (Wells forecast the outbreak of the Second World War for 1940- one year off) are even more chilling when one watches the movie based upon the book and knows that the bombing of cities it depicts is not some cinematographer’s fantasy, but would kill some of those who watched the film in theaters in 1936- less than five years later.

Nevertheless, Wells gets a host of very important things, not only about the future but about his present, very wrong. He gets it ass backwards in generally admiring the Soviet Union, seeing its problem not as the inhuman treatment of its citizens by the Communist regime, but as the fact that it had wed itself to what Wells believes is an antiquated, dogmatic theory in Marxism.

Indeed, Wells builds his own version of dictatorship in The Shape of Things to Come (though versions of it can be seen in his earlier work) using the ideas of two of Soviet communism’s founders- Trotsky’s idea of a global revolutionary movement that will establish a worldwide government, and Lenin’s idea of an intellectual nucleus that will control all the aspects of society.

Nor did Wells really grasp the nature of Nazism or the strange contradiction of a global alliance of fascist regimes that ostensibly worship the state. Wells saw Hitler as a throwback to a dying order based on the nation-state, his only modernity being

“…control by a self-appointed, self-disciplined élite was a distinct step towards our Modern State organization.” (192)

Wells therefore misses the savagery born of the competition between world shaping ideologies and their mobilization of entire societies that will constitute the Second World War and its aftermath.

Ironically, Wells mistakenly thinks WWII will be short and its fatalities low precisely because he gets his technological predictions right. He clearly foresees the importance of the tank, the airplane, and the submarine to the future war, and because of them even anticipates the Nazi idea of blitzkrieg. At one point he seems to have a glimmer of the death spirit that will seize humankind during the war, when he compares the submarine to a sacrificial altar:

The Germans supplied most of the flesh for this particular altar; willing and disciplined, their youngsters saluted and carried their kit down the ladder into this gently swaying clumsy murder mechanism which was destined to become their coffin. (70)

Nevertheless, he fails to see that the Second World War will unleash the kinds of violence and fanaticism formerly only seen in religious wars.

Two decades after Wells’ novel many would think that, because of the introduction of nuclear weapons, wars would be reduced to minutes. Instead conflict became stretched out across multiple decades. What this should teach us is that we have no idea how any particular technology will ultimately affect the character of war- especially in terms of its intensity or duration- thus those hoping that robotic or cyber weapons will return us to short decisive conflicts are likely seeing a recurrent mirage.

Wells perhaps understood better than other would-be revolutionaries and prophets of the time just how robust existing societies were despite their obvious flaws. The kind of space needed for true political innovation had seemingly occurred only during times of acute stress, such as war, that by their nature were short-lived. A whole new way of organizing society had seemingly revealed itself during World War I, in which the whole industrial apparatus of the nation was mobilized and directed towards a particular end. Yet the old society would reassert itself except in those societies that had experienced either defeat and collapse or Pyrrhic victory (Italy, Japan) in the conflict.

Wells thus has to imagine further crises after economic depression and world war to permanently shatter Western societies that had become fossilized into their current form. The new kind of war had itself erased the boundary between the state and the society, and here Wells is perhaps prescient in seeing the link between mass mobilization, the kinds of wars against civilians seen in the Second World War, and insurgency/terrorism. Yet he pictures the final hammer blow coming not in the form of such a distributed conflict but in the form of a global pandemic that kills half of the world’s people. After that comes the final death of the state and the reversion to feudalism.

It is from a world ruled by warlords that Wells’ imagined “Air Dictatorship” will emerge. It is essentially the establishment of global rule by a scientific technocracy that begins with the imposition of a monopoly over global trade networks and especially control over the air.

To contemporary ears the sections on the Air Dictatorship can be humorously reminiscent of an advertisement for FedEx or the US Navy. And then the humor passes when one recalls that a world dominated by one globe-straddling military and multinational corporations isn’t too far from the one Wells pictured, even if he was more inspired by the role of the Catholic Church in the Dark Ages, the Hanseatic League, or what the damned Bolsheviks were up to in Russia.

Oddly enough, Wells foresaw no resistance to the establishment of a world-state (he called it The Modern State) from global capitalists, or communists, or the remnants of the security services of the states that had collapsed. Instead, falling into a modernist bias that remains quite current, Wells sees the only rival to the “Modern State” in the universal religions, which the Air Dictatorship will therefore have to destroy. Wells’ utopians declare war on Catholics (Protestants oddly offer no resistance), forcibly close Mecca, and declare war on kosher foods. All this destruction is to be followed by “re-education”, which Wells thinks could be done without the kinds of totalitarian nightmares and abuses that were less than two decades away from when he was writing The Shape of Things.

I am no particular fan of the universal confusion called post-modernism, but it does normally prevent most of us from making zingers like this one of Wells’:

They are going to realize that there can be only one right way of looking at the world for a normal human being and only one conception of a proper scheme of social reactions, and that all others must be wrong and misleading and involve destructive distortions of conduct. (323)

Like any self-respecting version of apocalypse, Wells imagines that after a period of pain and violence the process will become self-sustaining and neither will be required, though, most honorably for the time, Wells thinks this world will be one of racial equality that will never again suffer the plague of extreme want.

Analogous to the universal religions, after the establishment of the Modern State all of humankind will become party to the ultimate mission of the scientific endeavor, which the protagonist in the movie version sums up in this manic, crazy speech at the end of the film:

For man, no rest, he must go on. First this little planet and its winds and ways, and then all of the laws of mind and matter that restrain him. Then the planets above and at last out across immensity to the stars. And when he conquers all the depths of space and all of time still he will not be finished.

All the universe or nothing! Which shall it be?

(As a side note, Kim Stanley Robinson seems to think this modernist’s dream that the destiny of humanity is to settle the stars is still alive and kicking. In his newest novel he is out to kill it. Review pending.)

To return to our lack of imagination and direction after 2008: we, unlike Wells, know how his and similar modernist projects failed, and just how horribly they did so. Nevertheless, his diagnosis remains largely sound. It might take a crisis on a scale none of us would wish for to engender real reform, let alone the taking of radically new directions. Given historical experience, such crises are much more likely to give rise to monsters than anything benign.

Anarchists seem to grasp the shape of the time but not its implications. In a globalized world power has slipped out of the grasp of democratic sovereignty and into the hands of networked organizations- from multinational corporations, to security services, to terrorist and criminal groups able to transcend borders. Yet it is tightly organized, “machine-like” organizations rather than decentralized/anarchic ones that seem to thrive in this feudal environment. That very feudalism and its competition makes achieving a unified voice in addressing urgent global problems even more difficult, and despite our current perceptions, war between the armed groups that represent states remains the gravest existential threat to humanity. We, unlike Wells, know that no one group of us has all the answers, and that it is not only inhumane but impossible to win human unity out of the barrel of a ray gun.

The King of Weird Futures

Bosch, The Garden of Earthly Delights

Back in the late winter I wrote a review of the biologist Edward O. Wilson’s grandiloquently mistitled tract- The Meaning of Human Existence. As far as visions of the future go Wilson’s was a real snoozer, although for that very reason it left little to be nervous about. The hope that he articulated in his book was that we somehow manage to keep humanity pretty much the same- genetically at least- “as a sacred trust”, in perpetuity. It’s a bio-conservatism that, on one level, I certainly understand, but one I also find incredibly unlikely given that the future consists of… well… an awfully long stretch of time (that is, as long as we’re wise enough or just plain lucky). How in the world can we expect, especially in light of current advances in fields like genetics, neuroscience, and artificial intelligence, that we can, or even should, keep humanity essentially unchanged not just now, but for 100 years, or 1,000 years, or 10,000 years, or even longer?

If Wilson is the 21st century’s prince of the dull future, the philosopher David Roden should perhaps be crowned the king of weird ones. Indeed, it may be that the primary point of his recent mind-bending book Posthuman Life: Philosophy at the Edge of the Human is to make the case for the strange and unexpected. The Speculative Posthumanism (SP) he helps launch with this book is a philosophy that grapples with the possibility that the future of our species and its descendents will be far weirder than we have so far allowed ourselves to imagine.

I suppose the best place to begin a proper discussion of Posthuman Life would be with explaining just what Roden means by Speculative Posthumanism, something that (as John Danaher has pointed out) Roden manages to uncover like a palimpsest by providing some very useful clarifications for often philosophically confused and conflated areas of speculation regarding humanity’s place in nature and its future.

Essentially Roden sees four domains of thought regarding humanism/posthumanism. There is Humanism of the old-fashioned type, which even absent some kind of spiritual dimension makes the claim that there is something special, cognitively, morally, etc., that marks human beings off from the rest of nature.

Interestingly, Roden sees Transhumanism as merely an updating of this humanism- the expansion of its toolkit for perfecting humankind to include not just things like training and education but physical, cognitive, and moral enhancements made available by advances in medicine, genetics, bio-electronics and similar technologies.

Then there is Critical Posthumanism, by which Roden means a move in Western philosophy, apparent since the later half of the 20th century, that seeks to challenge the anthropocentrism at the heart of Western thinking. The shining example of that anthropocentrism was the work of Descartes, which reduced animals to machines while treating the human intellect as mere “spirit”, about as embodied and tangible as a burnt offering to the gods. Critical Posthumanism, among whose adherents one can count a number of deconstructionist, feminist, multicultural, animal rights, and environmentalist philosophers from the last century, aims to challenge the centrality of the subject and the discourses surrounding the idea of an observer located at some Archimedean point outside of nature and society.

Lastly, there is the philosophy Roden himself hopes to help create- Speculative Posthumanism the goal of which is to expand and explore the potential boundaries of what he calls the posthuman possibility space (PPS). It is a posthumanism that embraces the “weird” in the sense that it hopes, like critical posthumanism, to challenge the hold anthropocentrism has had on the way we think about possible manifestations of phenomenology, moral reasoning, and cognition. Yet unlike Critical Posthumanism, Speculative Posthumanism does not stop at scepticism but seeks to imagine, in so far as it is possible, what non-anthropocentric forms of phenomenology, moral reasoning, and cognition might actually look like. (21)

It is as a work of philosophical clarification that Posthuman Life succeeds best, though a close runner-up would be the way Roden manages to explain and synthesize many of the major movements within philosophy in the modern period in a way that clearly connects them to what many see as upcoming challenges to traditional philosophical categories posed by emerging technologies: machines that exhibit ever greater powers of reasoning, the disappearance of the boundary between the human, the animal, and the machine, or even the erosion of human subjectivity and individuality themselves.

Roden challenges the notion that any potential moral agents of the future that can trace their line of descent back to humanity will be something like Kantian moral agents rather than agents possessing a moral orientation we simply cannot imagine. He also manages to point towards connections between the postmodern thrust of late 20th century philosophy, which challenged the role of the self/subject, and recent developments in neuroscience, including connections between philosophical phenomenology and the neuroscience of human perception that do something very similar to our conception of the self. Indeed, Posthuman Life eclipses similar efforts at synthesis, and Roden excels at bringing to light potentially pregnant connections between thinkers as diverse as Andy Clark and Heidegger, Donna Haraway and Deleuze and Derrida, along with non-philosophical figures like the novelist Philip K. Dick.

It is as a consequence of this very success at philosophical clarification that Roden is led across what I, at least, felt was a bridge (philosophically) too far. As posthumanist philosophers are well aware, the very notion of the “human” suffers from a continuum problem. It is almost impossible to separate humanity from technology, broadly defined, and this is the case even if we go back to the very beginnings of the species, where the technologies in question are the atlatl or the baby sling. We are, in the words of Andy Clark, “natural born cyborgs”. In addition, how a human being is defined is (like anything bound up with historical change) a moving target rather than a reflection of any unchanging essence.

How then can one declare any possible human future that emerges out of our continuing “technogenesis” to be “post” human, rather than just the latest iteration in what is in fact the very old story of the human “artificial ape”? And this status of mere continuation (rather than break with the past) would seem to hold in a philosophical sense even if whatever posthumans emerged bore no genetic, and only a techno-historical, relationship to biological humans. This somewhat different philosophical problem of clarification emerges as the consequence of another continuum problem, namely the fact that human beings are inseparable from the techno-historical world around them- what Roden brilliantly calls “the Wide Human” (WH).

It is largely the effort to find clear boundaries within this confusing continuum that leads Roden to postulate what he calls the “disconnection thesis”. According to this thesis an entity can only properly be said to be posthuman if it is no longer contained within the Wide Human. A “Wide Human descendent is a posthuman if and only if:”

  1. It has ceased to belong to WH (the Wide Human) as a result of technical alteration.
  2. Or is a wide descendent of such a being (outside WH). (112)

Yet it isn’t clear, to me at least, why disconnection from the Wide Human is more likely to result in something more different from humanity and our civilization as they currently exist than anything that could emerge out of, but still remain part of, the Wide Human itself. Roden turns to the idea of “assemblages” developed by Deleuze and Guattari in an attempt to conceptualize how such a disconnection might occur, but his idea is perhaps conceptually clearer if one comes at it from the perspective of the kind of evolutionary drift that occurs when some population of creatures becomes isolated from its parent group, say on an island.

As Darwin realized on his journey to the Galapagos, isolation can lead quite rapidly to wide differences between the isolated variant and its parent species. The problem with applying such isolation analogies to technological development is that, unlike biological evolution (or technological development before the modern era), the evolution of technology is now globally distributed, rapid and continuous.

Something truly disruptive seems much more likely to emerge from within the Wide Human than from some separate entity or enclave- even one located far out in space- if only because the Wide Human possesses the kind of leverage that could turn something disruptive into something transformative enough to be characterized as posthuman.

What I think we should look out for, in terms of the kinds of weird divergence from current humanity that Roden is contemplating (and, though he claims speculative posthumanism is not normative, perhaps rooting for), is not some sort of separation but something more akin to a phase change: the kinds of rapid evolutionary changes seen in events like the Cambrian explosion, or the opening up of whole new evolutionary theaters, as when life in the sea first moved onto the land. It would be something like the singularity predicted by Vernor Vinge, though it might just as likely come from a direction completely unanticipated and cause a transformation that would make the world, from our current perspective, unrecognizable, and indeed, weird.

Still, what real posthuman weirdness would seem to require is something clearly identified by Roden and not dependent, to my lights, on his disconnection thesis being true. The same reality that would make whatever follows humanity truly weird is that which would allow alien intelligence to be truly weird; namely, that the kinds of cognition, logic, mathematics and science found in our current civilization, and the kinds of biology and social organization we ourselves possess, are all contingent. What that would mean, in essence, is that there are a multitude of ways intelligence and technological civilizations might manifest themselves, of which we are only a single type, and by no means the most interesting one. Life itself might be like that, with the earthly variety and its conditions just one example of what is possible, or it might not.

The existence of alien intelligence and technology very different from our own would mean we are not in the grip of any deterministic developmental process and that alternative developmental paths are available. So far we have no evidence one way or the other, though unlike Kant, who used aliens as a trope to defend a certain version of what intelligence and morality mean, we might instead imagine both extraterrestrial and earthly alternatives to our own.

While I can certainly imagine what alternative, and from our view weird, forms of cognition might look like- for example the kinds of distributed intelligence found in a cephalopod or a eusocial insect colony- it is much more difficult for me to conceive what morality and ethics might look like if divorced from our own peculiar hybrid of social existence and individual consciousness (the very features Wilson, perhaps rightfully, hopes we will preserve). For me at least, one side of what Roden calls dark phenomenology is a much deeper shade of black.

What is especially difficult for me to imagine in this regard is how the kind of openness to alternative developmental paths that Roden, at the very least, wants us to refrain from preemptively aborting is compatible with a host of other projects surrounding our relationship to emerging technology which I find extremely important: projects such as subjecting technology to stricter, democratically established ethical constraints, including engineering moral philosophy into machines themselves as the basis for ethical decision making autonomous from human beings. Nor is it clear what guidance Roden’s speculative posthumanism provides when it comes to the question of how to guard against existential risks, dangers which our failure to tackle would foreclose not only a human future but very likely the possibility of a posthuman future as well.

Roden seems to think that because there is no such thing as a human “essence” we should be free to engender whatever types of posthumans we want. As I see it, this kind of ahistoricism is akin to a parent who refuses to use the lessons learned from a difficult youth to inform his own parenting. Despite the pessimism of some, humanity has actually made great moral strides over the arc of its history and should certainly use those lessons to inform whatever posthumans we choose to create.

One would think the types of posthumans whose creation we permit should be constrained by our experience of a world ill designed by the God of Job. How much suffering is truly necessary? Certainly less than sapient creatures currently experience, and thus any posthumans should suffer less than ourselves. We must be alert to, and take precautions against, the danger that posthuman weirdness will emerge from those areas of the Wide Human where the greatest resources are devoted- military or corporate competition- and, for that reason, be terrifying.

Yet the fact that Roden has left one with questions should not detract from what he has accomplished; namely, he has provided us with a framework in which much of modern philosophy can be used to inform the unprecedented questions facing us as a result of emerging technologies. Roden has also managed to put a very important bug in the ear of all those who would move too quickly to prohibit technologies that have the potential to prove disruptive, or close the door to the majority of the hopefully very long future in front of us and our descendants: that in too great an effort to preserve the contingent reality of what we currently are, we risk preventing the appearance of something infinitely more brilliant in our future.