Is AI a Myth?


A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they see as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI, back in the early 1980’s: John Searle. (Any relation to the author is lost in the mists of time.) It was Searle who invented the well-known thought experiment of the “Chinese Room”, which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi’s The Fourth Revolution and Nick Bostrom’s Superintelligence.

Also in October, Michael Jordan, the machine learning researcher who did so much to advance neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as the hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now-joined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn’t about might give us a better handle on what is actually going on in AI: right now, in the next few decades, and in a farther-off future we have to at least start thinking about, even if there’s not much to actually do regarding that last question for a few decades at least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human-level intelligence in machines is theoretically impossible. These aren’t people arguing that there’s some spiritual something humans possess that we’ll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siris and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian’s fear that he won’t be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight: Nick Bostrom has stated that top AI researchers give us a 90% probability of having human-level machine intelligence between 2075 and 2090. If we just average those two dates we’re out to around 2083 by the time human-equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is roughly 69 years in the future we’re talking about: a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, echoing Bostrom, I think we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out, becoming part of a much larger argument over the survival and future of our species, one that will include many issues in addition to AI, and only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn’t about this larger debate regarding our survival and future; it’s about what’s happening with artificial intelligence right before our eyes. They want to challenge what they see as currently common false assumptions regarding AI. It’s hard not to be bedazzled by all the amazing manifestations around us, many of which have appeared only over the last decade. Yet as the philosopher Alva Noë recently pointed out, we’re still not really seeing what we’d properly call “intelligence”:

Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

This is an old criticism, the same as the one made by John Searle both in the 1980’s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated statistical methods into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as “neural nets” are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution to AI researchers, the media, and the general public. It’s not a good idea to be trapped in anything, including our metaphors. AI researchers might fail to develop other good metaphors that would help them understand what they are doing (“flows and pipelines” once provided good metaphors for computers). The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th century ideas about “electronic brains”, and the public is at risk of anthropomorphizing its machines. Such anthropomorphizing might have ugly consequences: a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.

Lanier’s critique of AI is actually deeper than Jordan’s, because he sees both technological and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we’ll find ourselves in an “AI winter” similar to the one that occurred in the 1980’s. Hype-cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you’re likely to lose the interest of the smartest minds and start to attract kooks, which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far scarier. In his Edge talk Lanier points out how our urge to see AIs as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies, Google, Facebook, Amazon, are essentially just algorithms. Some of the same people who have an economic interest in our seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn’t this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of Oz scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn’t so much another form of intelligence helping you to make better-informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn’t, as it is often presented, coming from silicon intelligence at all. Rather, it’s leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon, or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user’s view.

Lanier doesn’t think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. Take technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is “eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad”, a view based on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

As long as we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), then over the course of the next half-century we will experience the gradual improvement of AI to the point where perhaps the majority of human occupations can be performed by machines. We should not confuse ourselves as to what this means: it is impossible to say, with anything but an echo of lost religious myths, that we will be entering the “next stage” of human or “cosmic” evolution. Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don’t fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts, should they ever emerge from anything other than our nightmares and our dreams.


Summa Technologiae, or why the trouble with science is religion


Before I read Lee Billings’ piece in the fall issue of Nautilus, I had no idea that in addition to being one of the world’s greatest science-fiction writers, Stanislaw Lem had written what became a forgotten book, a tome that was intended to be the overarching text of the technological age: his 1966 Summa Technologiae.

I won’t go into detail on Billings’ thought-provoking piece; suffice it to say that he leads us to question whether we have lost something of Lem’s depth with our current batch of Silicon Valley singularitarians, who have largely repackaged ideas first fleshed out by the Polish novelist. Billings also leads us to wonder whether our focus on either the fantastic or the terrifying aspects of the future is causing us to forget the human suffering that is here, right now, at our feet. I encourage you to check the piece out for yourself. In addition to Billings there’s also an excellent review of the Summa Technologiae by Giulio Prisco, here.

Rather than look at either Billings’ or Prisco’s piece, I will try to lay out some of the ideas found in Lem’s 1966 Summa Technologiae, a book at once dense almost to the point of incomprehensibility, yet full of insights we should pay attention to as the world Lem imagined unfolds before our eyes, or at least seems to be doing so for some of us.

The first thing that struck me when reading the Summa Technologiae was how unlike it is to Aquinas’ Summa Theologica, from which Lem took his tract’s name. In the 13th century Summa Theologica you find the voice of a speaker supremely confident in both the rationality of the world and his own ability to understand it. Aquinas, of course, didn’t really possess such a comprehensive understanding, but it is perhaps odd that the more we have learned the more confused we have become, and Lem’s Summa Technologiae reflects some of this modern confusion.

Unlike Aquinas, Lem is in a sense blind to our destination, and what he is trying to do is probe into the blackness of the future to sense the contours of the ultimate fate of our scientific and technological civilization. Lem seeks to identify the roadblocks we will likely encounter if we are to continue our technological advancement, roadblocks that are important to identify because we have yet to find any evidence, in the form of extraterrestrial civilizations, that they can actually be overcome.

The fundamental aspect of technological advancement is that it has become both its own reward and a trap. We have become absolutely dependent on scientific and technological progress for as long as population growth continues, for if technological advancement stumbles while population continues to increase, living standards will precipitously fall.

The problem Lem sees is that science is growing faster than the population, and that in order to keep up with it we would eventually have to turn all human beings into scientists, and then some. Science advances by exploring the whole of the possibility space: we can’t predict in advance which of its explorations will produce something useful, or which avenues will prove fruitful in terms of our understanding. It’s as if the territory has become so large that at some point we will no longer have enough people to explore all of it, and thus will have to narrow the number of regions we look at. This narrowing puts us at risk of not finding the keys to El Dorado, so to speak, because we will not have asked and answered the right questions. We are approaching what Lem calls “the information peak.”

The absolutist nature of the scientific endeavor itself, our need to explore all avenues or risk losing something essential, will, for Lem, inevitably lead to our attempt to create artificial intelligence. We will pursue AI to act as what he calls an “intelligence amplifier”, though Lem is thinking of AI in a whole new way, one where computational processes mimic those performed in nature, like the physics “calculations” of a tennis genius like Roger Federer, or my four-year-old learning how to throw a football.

Through the power of his imagination alone, Lem seemed to anticipate both some of the problems we would encounter when trying to build AI and the ways we would likely try to escape them. For all their seeming intelligence, our machines lack the behavioral complexity of even lower animals, let alone human intelligence, and one of the main roads away from these limitations is getting silicon intelligence to be more like that of carbon-based creatures: not so much “brain-like” as “biology-like”.

Way back in the 1960’s, Lem thought we would need to learn from biological systems if we wanted to really get to something like artificial intelligence. Think, for example, of how much more bang you get for your buck when you contrast DNA with a computer program: a computer program gets you some interesting or useful behavior or process done by machine; DNA, well… it gets you programmers.

The somewhat uncomfortable fact about designing machine intelligence around biology-like processes is that it might end up a lot like how the human brain works: a process largely invisible to its possessor. How did I catch that ball? Damned if I know, at least if one is asking about the internal process that led me to catch it.

Just going about our way in the world, we make “calculations” that would make the world’s fastest supercomputers green with envy, were they actually sophisticated enough to experience envy. We do all the incredible things we do without having any solid idea, either scientific or internal, about how it is we are doing them. Lem thinks “real” AI will be like that. It will be able to out-think us because it will be a species of natural intelligence like our own, and just as with our own thinking, we will be hard pressed to explain how exactly it arrived at some conclusion or decision. Truly intelligent AI will end up being a “black box”.

Our increasingly complex societies might need such AIs to serve the role of what Lem calls “homeostats”: machines that run the complex interactions of society. The dilemma appears the minute we surrender responsibility for our decisions to a homeostat, for then the possibility opens that we will not be able to know how a homeostat arrived at its decision, or what a homeostat is actually trying to accomplish when it informs us that we should do something, or even what goal lies behind its actions.

It’s quite a fascinating view: that science might be epistemologically insatiable in this way; that at some point it will grow beyond the limits of human intelligence, whether of our sheer numbers or our mental capacity; that the only way forward which still includes technological progress will be to develop “naturalistic” AI; and that very soon our societies will be so complicated that they will require such AIs to manage them.

I am not sure if the view is right, but to my eyes at least it’s got much more meat on its bones than current singularitarian arguments about “exponential trends”, which, unlike Lem, take little account of the possibility that the scientific wave we’ve been riding for five or so centuries will run into a wall we find impossible to crest.

Yet perhaps the most intriguing ideas in Lem’s Summa Technologiae are those imaginative leaps that he throws at the reader almost as asides, with little reference to his overall theory of technological development. Take his metaphor of the mathematician as a sort of crazy “tailor”:

He makes clothes but does not know for whom. He does not think about it. Some of his clothes are spherical without any opening for legs or feet…

The tailor is only concerned with one thing: he wants them to be consistent.

He takes his clothes to a massive warehouse. If we could enter it, we would discover clothes that could fit an octopus, others fit trees, butterflies, or people.

The great majority of his clothes would not find any application. (171-172)

This is Lem’s clever way of explaining the so-called “unreasonable effectiveness of mathematics”, a view that is the opposite of current-day Platonists such as Max Tegmark, who hold all mathematical structures to be real even if we are unable to find actual examples of them in our universe.

Lem thinks math is more like a ladder. It allows you to climb high enough to see a house, or even a mountain, but shouldn’t be confused with the house or the mountain itself. Indeed, most of the time, as his tailor example is meant to show, the ladder mathematics builds isn’t good for climbing at all. This is why Lem thinks we will need to learn “nature’s language” rather than go on using our invented language of mathematics if we want to continue to progress.

For all its originality and freshness, the Summa Technologiae is not without its problems. Once we start imagining that we can play the role of creator it seems we are unable to escape the same moral failings the religious would have once held against God. Here is Lem imagining a far future when we could create a simulated universe inhabited by virtual people who think they are real.

Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything”; considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out, out of pure curiosity if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess. (291-292)

If Lem is ultimately proven correct, and we arrive at this destination where we create virtual universes with sentient inhabitants whom we keep blind to their true nature, then science will have ended where it began: with the demon imagined by Descartes.

The scientific revolution commenced when it was realized that we could trust neither our own senses nor our traditions to tell us the truth about the world, the most famous example of which was the discovery that the earth, contrary to all perception and history, traveled around the sun and not the other way round. The first generation of scientists who emerged in a world in which God had “hidden his face” couldn’t help but understand this new view of nature as the creator’s elaborate puzzle, one we would have to painfully reconstruct, piece by piece, hidden as it was beneath the illusion of our own “fallen” senses and the false post-edenic world we had built around them.

Yet a curious new fear arises with this: What if the creator had designed the world so that it could never be understood? Descartes, at the very beginning of science, reconceptualized the creator as an omnipotent demon.

I will suppose, then, not that Deity, who is sovereignly good and the fountain of truth, but that some malignant demon, who is at once exceedingly potent and deceitful, has employed all his artifice to deceive me; I will suppose that the sky, the air, the earth, colours, figures, sounds, and all external things, are nothing better than the illusions of dreams, by means of which this being has laid snares for my credulity.

Descartes’ escape from this dreaded absence of intelligibility was his famous “cogito ergo sum”, the certainty a reasoning being has in its own existence. The entire world could be an illusion, but the fact of one’s own consciousness was something not even an all-powerful demon would be able to take away.

What Lem’s resurrection of Descartes’ demon tells us is just how deeply religious thinking still lies at the heart of science. The idea has become secularized, and part of our mythology of science-fiction, but it’s still there; indeed, it’s the only scientifically fashionable form of creationism around. As proof, not even the most secular among us are likely to bat an eye at experiments to test whether the universe is an “infinite hologram”. If such experiments bear fruit they will point to a designer that either allowed us to know our reality or didn’t care to “bar the exits”; but the crazy thing, if one takes Lem and Descartes seriously, is that their creator/demon is ultimately as ineffable and untraceable as the old ideas of God from which it descended. For any failure to prove the hypothesis that we are living in a “simulation” can be brushed aside on the grounds that whatever has brought about this simulation doesn’t really want us to know. It’s only a short step from there to unraveling the whole concept of truth at the heart of science. Like garden-variety creationists, we end up seeing the proofs of science as part of God’s (or whatever we’re now calling God) infinitely clever ruse.

The idea that there might be an unseeable creator behind it all is just one of the religious myths buried deep in science, a myth that traces its origins less to the day-to-day mundane experiments and theory-building of actual scientists than to a certain type of scientific philosophy, or science-fiction, that has constructed a cosmology around what science is for and what science means. It is the mythology in which the singularitarians and others who followed Lem remain trapped, often to the detriment of both technology and science. What is a shame is that these are myths that Lem, even with his expansive powers of imagination, did not dream widely enough to see beyond.

Digital Afterlife: 2045


Excerpt from Richard Weber’s History of Religion and Inequality in the 21st Century (2056)

Of all the bewildering diversity of new consumer choices on offer before the middle of the century, choices that would have stunned people from only a generation earlier, none was perhaps as shocking as the many ways there now were to be dead.

As in all things in the 21st century, what death looked like depended on the wealth question. Certainly there were many human beings, indeed, looking at the question globally, the overwhelming majority, who were treated in death the same way their ancestors had been: buried in the cold ground or, more likely, given high property values that made cemetery space ever more precious, their corpses burned to ashes and spread over some spot sacred to the individual’s spirituality or sentiment.

A revival of death relics that had begun in the early 21st century continued for those unwilling out of religious belief, or, more likely, simply unable to afford any of the more sophisticated forms of death on offer. It was increasingly the case that the poor were tattooed using the ashes of their lost loved ones, or carried some memento in the form of their DNA, in the vague hope that family fortunes would change and their loved one might be resurrected the same way mammoths now once again roamed the windswept earth.

Some were drawn, by poverty and by the consciousness brought on by the deepening environmental crisis, simply to have their dead bodies “given back” to nature, and seemed to embrace with morbid delight the idea that human beings should end up “food for worms”.

It was for those above a certain station that death took on whole new meanings. There were, of course, stupendous gains in longevity, though human beings still continued to die, and increasingly popular cryonics held out hope that death would prove nothing but a long, cold nap. Yet it was digital and brain scanning/emulating technologies that opened up whole other avenues, allowing those who had died or were waiting to be thawed to continue to interact with the world.

On the low end of the scale there were now all kinds of interactive cemetery monuments that allowed loved ones, or just the curious, to view “life scenes” of the deceased. Everything from the most trivial to the sublime had been video recorded in the 21st century, which provided unending material, sometimes in 3D, for such displays.

A level up from this, “ghost memoirs” became increasingly popular, especially as costs plummeted due first to outsourcing and then to scripting AI. Beginning in the 2020’s, the business of writing biographies of the dead, which were found to be most popular when written in the first person, was initially seen as a way for struggling writers to make ends meet. Early on it was a form of craftsmanship: authors would pore over records of the deceased in text, video, and audio, aiming to come as close as possible to the voice of the departed, and would interview family and friends about the life of the lost in the hope of fully capturing their essence.

The moment such craft was seen to be lucrative it was outsourced. English speakers in India and elsewhere soon pored over the life records of the deceased and created ghost memoirs en masse, and though this led to some quite amusing cultural misinterpretations, it also made the cost of having such memoirs published decline sharply, further increasing their popularity.

The perfection of scripting AI made the cost of producing ghost memoirs plummet even further. A company out of Pittsburgh called “Mementos”, created by students at Carnegie Mellon, boasted in its advertisements that “We write your life story in less time than your conception”. That same company was one of many that brought the 3D scanning of artifacts, once confined to museums, to everyone, creating exact digital images of a person’s every treasured trinket and trophy.

Only the very poor failed to have their own published memoir recounting their life’s triumphs and tribulations, or failed to have their most treasured items scanned. Many, however, eschewed the public display of death found in either interactive monuments or the antiquated idea of memoirs, as death increasingly became a thing of shame and class identity. They preferred private home-shrines, many of which resembled early 21st century fast food kiosks, whereby one could choose a precisely recorded event or conversation from the deceased in light of current need. There were selections with names like “Motivation” and “Persistence” that might pull up relevant items, some of which used editing algorithms to create appropriate mashups, or even whole new interactions that the dead themselves had never had.

Somewhat above this level, due to the cost of the required AI, were so-called “ghost-rooms”. In all prior centuries, some who suffered the death of a loved one would attempt to freeze time by, for instance, leaving unchanged a room in which the deceased had spent the majority of their time. Now the dead could actually “live” in such rooms, whether as a 3D hologram (hence the name ghost-rooms) or in the form of an android that resembled the deceased. The most “life-like” forms of these AIs were based on maps of detailed “brainstorms” of the deceased, a technique perfected earlier in the century by the neuroscientist Miguel Nicolelis.

One of the most common dilemmas, and one encountered in some form even in the early years of the 21st century, was the fact that the digital presence of a deceased person often continued to exist and act long after the person was gone. This became especially problematic once AIs acting as stand-ins for individuals came into wide use.

Most famously there was the case of Uruk Wu. A real estate tycoon, Wu was cryogenically frozen after suffering a form of lung cancer that would not respond to treatment. Estranged from his party-going son Enkidu, Mr. Wu had placed the management of all of his very substantial estate under a finance algorithm (FA). Enkidu Wu initially sued the deceased Uruk for control of the family finances- a case he famously and definitively lost- setting the stage for increased rights for the deceased in the form of AIs.

Soon after this case, however, it was discovered that the FA used by the Uruk estate was engaged in widespread tax evasion. After extensive software forensics it was found that such evasion was a deliberate feature of the Uruk FA and not a mere flaw. After absorbing fines, and with the unraveling of its investments and partnerships, the Uruk estate found itself effectively broke. In an atmosphere of great acrimony, TuatGenics, the cryonics establishment that had interred Uruk, unplugged him and let him die, as he was unable to sustain funding for his upkeep and future revival.

There was a great and still unresolved debate in the 2030s over whether FAs acting in the markets on behalf of the dead were stabilizing or destabilizing the financial system. FAs became an increasingly popular option for the cryogenically frozen, or even more commonly for the elderly suffering slow-onset dementia, especially given the decline in the number of people having children to care for them in old age or inherit their fortunes after death. The dead, it was thought, would prove a conservative investment group, but anecdotally at least they came to be seen as a population willing to undertake an almost obscene level of financial risk, given that revival was a generation off or more.

One weakness of the FAs was that they found themselves pouring their resources into upgrade fees rather than investment, as the presently living designed software meant to deliberately exploit the weaknesses of earlier-generation FAs. Some argued that this was a form of “elder abuse”, whereas others took the position that prohibiting such practices would amount to fossilizing markets in an earlier and less efficient era.

Other phenomena that came to prominence by the 2030s were so-called “replicant” and “faustian” legal disputes. Among the first groups to have accurate digital representations in the 21st century were living celebrities. Near death or at the height of their fame, celebrities often contracted out their digital replicants. Those holding ownership rights to a replicant always needed to avoid media saturation, but finding the right balance between generating present revenue and securing future revenue proved challenging.

Copyright proved difficult to enforce. Once the code of a potentially revenue-generating digital replicant had been made, there was a great deal of incentive to obtain a copy and sell it to all sorts of B-level media outlets. There were widespread complaints by the Screen Actors Guild that replicants were taking work away from real actors, but the complaint was increasingly seen as antiquated- most actors, with the exception of crowd-drawing celebrities, were digital simulations rather than “real” people anyway.

Faustian contracts were legal obligations by second- or third-tier celebrities, or by first-tier actors and performers whose fame had begun to decline, that allowed the contractor to sell a digital representation to third parties. Celebrities who had entered such contracts inevitably found “themselves” starring in pornographic films or, just as commonly, in political ads for causes they would never support.

Both the replicant and faustian issues gave an added dimension to the legal difficulties first identified in the Uruk Wu case: who was legally responsible for the behavior of digital replicants? The question became especially pressing in the case of the serial killer Gregory Freeman. Freeman was eventually held liable for the deaths of up to 4,000 biological, living humans- murders he “himself” had not committed, but that were done by his digital replicant, largely by exploiting a software error in the Sony-Geisinger remote medical monitoring system (RMMS) that controlled everything from patients’ pacemakers to brain implants, prosthetics, medication delivery systems, and prescriptions. Freeman was found posthumously guilty of having caused the deaths (he had committed suicide), but not before the replicant he had created killed hundreds more, even after the man’s death.

It became increasingly common for families to create multiple digital replicants of a particular individual, so that a lost mother or father could live with all of their grown and dispersed children simultaneously. This became the source of unending court disputes over which replicant was actually the “real” person and therefore held valid claim to property.

Many began to create digital replicants well before the point of death in order to farm them out for remunerative work. Much of the work by this point had been transformed into information-processing tasks, a great deal of which was performed by human-AI teams, and even in traditional fields where true AI had failed to make inroads- such as indoor plumbing- much of the work was performed by remote-controlled droids. Thus there was an incentive for people to create digital replicants tasked with income-generating work. Individuals would have themselves copied, or more commonly just a skill-based part of themselves, and have it put to work. Leasing was much more common than outright ownership, and not merely because of complaints about a new form of “indentured servitude”, but because whatever skill set was sold was likely to be replaced as its particulars became obsolete or as pure AI designed on it improved. In the churn from needed skill to obsolescence, many dedicated a share of their digital replicants to retraining themselves.

Servitude was one area where the impoverished dead were able to outcompete their richer brethren. A common practice was for the poor to be paid upfront for the use of their brain matter upon death. Parts of once-living human brains were commonly used by companies for “captcha” tasks yet to be mastered by AI.

There were strenuous objections to this “atomization” of the dead, especially for those digital replicants that did not have any family to “house” them and who, lacking the freedom to roam the digital universe, were in effect trapped in a sort of quantum no-man’s-land. Some religious groups, most importantly the Mormons, responded by placing digital replicants of the dead in historical simulations that recreated the world in which the deceased had lived, and were earnestly pursuing a project to create replicants of those who had died before the onset of the digital age.

In addition, there were numerous rights-based arguments against the creation of such simulated histories using replicants. The first was that forcing digital replicants to live in a world where children died in massive numbers, where starvation, war, and plague were common forms of death, and which lacked modern miracles such as anesthesia- when such worlds could easily be created with more humane features- was not “redemptive” but amounted to cruel and unusual punishment, even torture.

Indeed, one of the biggest, if overblown, fears of the time was that one’s digital replicant might end up in a sadistically crafted simulated form of hell. Whatever its irrationality, this became a popular form of blackmail, with videos of “captive” digital replicants or proxies used to frighten a person into surrendering some enormous sum.

The other argument against placing digital replicants in historical simulations- without their knowledge, without the ability to leave, or, more often, both- was that it amounted to imprisoning a person in a kind of Colonial Williamsburg or Renaissance Faire. “Spectral abolitionists” argued that the embodiment of a lost person should be free to roam and interact with the world as they chose, whether as software or androids, and should be unshackled from the chains of memory. There were even the JBDBM (the John Brown’s Digital Body Movement) and the DigitalGnostics, hacktivist groups that went around revealing the reality of simulated worlds to their inhabitants and sought to free them to enter the larger world heretofore invisible to them.

A popular form of cultural terrorism at this time were the so-called “Erasers”, entities with names such as “GrimReaper” or “Scathe” whose project consisted in tracking down digital replicants and deleting them. Some characterized these groups as a manifestation of deathist philosophy, or even claimed that they were secretly funded by traditional religious groups whose “business models” were being disrupted by the new digital forms of death. Such suspicions were supported by the fact that the Erasers were usually based in religious countries where the rights of replicants were often non-existent and fears regarding the new “electric jinns” rampant.

Also prominent in this period were secular prophets who projected that a continuation of the trends in digital replicants- of both the living and the at least temporarily dead- along with their representative AIs, would lead to a situation where non-living humans would soon outnumber the living. There were apocalyptic tales, akin to the zombie craze earlier in the century, that within 50 years the dead would rise up against the living and perhaps join together with AIs to destroy the world. But that, of course, was all Ningbowood.


An imaginary book excerpt inspired by Adrian Hon’s History of the Future in 100 Objects.

The Dangers of Religious Rhetoric to the Trans-humanist Project


When I saw that the scientist and science-fiction novelist David Brin had given a talk at a recent Singularity Summit with the intriguing title “So you want to make gods? Now why would that bother anybody?”, my hopes for the current intellectual debate between science and religion, and between rival versions of our human future, were instantly raised. Here was a noted singularitarian, I thought, who might raise questions about how the framing of the philosophy surrounding the Singularity was not only alienating to persons of more traditional religious sentiments, but threatened to give rise to a 21st century version of the culture wars- one that would make current debates over teaching evolution in schools, or the much more charged disputes over abortion, look quaint, and that could ultimately derail us from many of the technological achievements that lie seemingly just over the horizon and promise to vastly improve and even transform the human condition.

Upon listening to Brin’s lecture those hopes were dashed.

Brin’s lecture is a seemingly light talk to a friendly audience, punctuated by jokes, some of them lame and therefore charming, but his topic is serious indeed. He defines the real purpose of his audience as “would-be god-makers”- “indeed some of you want to become gods”- and admonishes them to avoid the fate of predecessors such as Giordano Bruno, who was burned at the stake.

The suggestion Brin makes for how singularitarians are to avoid the fate of Bruno, a way to prevent the conflict between religion and science, seems at first like humanistic and common-sense advice: rather than outright rejection and even ridicule of the religious, singularitarians are admonished to actually understand the religious views of their would-be opponents, and especially the cultural keystone of their religious texts.

Yet the purpose of such understanding soon becomes clear. Knowledge of the Bible, in Brin’s eyes, should give singularitarians the ability to reframe their objectives in Christian terms. Brin lays out some examples to explain his meaning. His suggestion that the mythological Adam’s first act of naming things defines the purpose of humankind as a co-creator with God is an interesting and probably largely non-controversial one. It’s when he steps into the larger Biblical narrative that things get tricky.

Brin finds the seeming justification for the expulsion of Adam and Eve from the Garden of Eden to be particularly potent for singularitarians:

And the LORD God said, Behold, the man is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live for ever. Genesis 3:22 King James Bible

Brin thinks this passage can be used as a Biblical justification for the singularitarian aim of personal immortality and god-like powers. The debate, he thinks, is not over “can we?” but merely over “when should we?” attain these ultimate ends.

The other Biblical passage Brin thinks singularitarians can use to their advantage in their debate with Christians is found in the story of the Tower of Babel.  

And the LORD said, Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do.  Genesis 11:6 King James Bible

As in the story of the expulsion from the Garden of Eden, Brin thinks the story of the Tower of Babel can be used to illustrate that human beings, according to Christianity’s own scriptures, have innately god-like powers. The debate between singularitarians and Christians is, therefore, largely a matter of when and how human beings reach their full God-given potential.

Much of Brin’s lecture is animated by an awareness of the current conflict between science and religion, and he constructs a wider historical context to explain this tension. For him, new ideas and technologies have the effect of destabilizing hierarchy, and have always given rise to counter-revolutions supported and egged on by oligarchs. The United States is experiencing another of these oligarchic putsches, as evidenced in books such as The Republican War on Science. Brin thinks that Fermi’s Paradox- the “silence” of the universe, the seeming lack of intelligent civilizations other than our own- might be a consequence of the fact that counter-revolutionaries or “grouches” tend to win their struggle with the forces of progress. His hope is that our time has come, and that this is the moment when those allied under the banner of progress might win.

The question which gnawed at me after listening to Brin’s speech was whether his prescriptions really offered a path to diminishing the conflict between religion and science, or were merely a means to its further exacerbation.
The problem, I think, is that however brilliant a physicist and novelist Brin might be, he is a rather poor religious scholar and an even worse historian, political scientist, and sociologist.

Part of the problem here stems from the fact that Brin appears less interested in opening up a dialogue between singularitarians and other religious communities than in training them in, as he terms it, verbal “judo”, so as to neutralize and proselytize to their most vociferous theological opponents- fundamentalist Christians. The whole thing put me in mind of how the early Jesuits were taught to argue their non-Catholic opponents into the ground. Be that as it may, the Christianity that Brin deals with is of a literalist sort, in which stories such as the expulsion from the Garden of Eden or the Tower of Babel are taken as the actual word of God. But this literalism is primarily a feature of some versions of Protestantism, not of Christianity as a whole.

The idea that the book of Genesis is literally true is not the teaching of the Catholic, Anglican, or large parts of the Orthodox Church- the three of which make up the bulk of Christians worldwide. Quoting scripture back at these groups won’t get a singularitarian anywhere. Rather, they would likely find themselves in the discussion they should be having: a heated philosophical discussion over humankind’s role and place in the universe, in which the very idea of “becoming a god” is ridiculous because God is understood in a non-corporeal, indefinable way- indeed as something sometimes more akin to our notion of “nothing” than to anything else we can speak of. This is the story Karen Armstrong tells in her 2009 book, The Case for God.

The result of framing the singularitarian argument on literalist terms may be the alienation of what should be considered more pro-science Christian groups- groups much less interested in aligning the views and goals of science with those found directly in the Bible than in finding a way to navigate our technologically evolving society while keeping intact the essence of their particular culture of religious practice and the ethical perspective they have developed over millennia.

If Brin goes astray in his understanding of religion, he misses even more essential elements when viewed through the eyes of a historian. He seems to think that doctrinal disputes over the meaning of religious texts are less dangerous than disputes between different and non-communicating systems of belief, but that’s not what history shows. Protestants and Catholics murdered one another for centuries even when the basic outlines of their interpretations of the Bible were essentially the same. Today, it seems not a month goes by without some report of Sunni-on-Shia violence or vice versa. Once the initial shock of singularitarians quoting the Bible wears off, Christian fundamentalists seem likely to be incensed that they are stealing “their” book for a quite alien purpose.

It’s not just that Brin’s historical understanding of inter- and intra-religious conflict is a little off; it’s that he perpetuates the myth of eternal conflict between science and religion in the supposed name of putting an end to it. That myth, which includes the sad tale of the visionary Giordano Bruno whose fate Brin wants his listeners to avoid, dates back no further than the late 19th century, when it was created by staunch secularists such as Robert Ingersoll and John William Draper. (Karen Armstrong, The Case for God, pp. 251-252)

Yes, it is true that the kinds of naturalistic explanations that constitute modern science first emerged within the context of the democratic city-states of ancient Greece. But if one takes the case of the biggest and most important martyr for freedom of thought in history, Socrates, as of any importance, one sees that science and democracy are not partners joined of necessity at the hip. The relationship between science, democracy, and oligarchy in the early modern period is also complex and ambiguous.

Take perhaps the most famous case of religion’s assault on science- Galileo. The moons Galileo discovered orbiting Jupiter are known today as the Galilean moons. As Michael Nielsen has pointed out (@27 min), what is less widely known is that Galileo initially named them the Medicean moons, after his very oligarchic patrons in the Medici family.

Battles over science in the early modern period are better seen as conflicts between oligarchic groups than as conflicts in which science stood in support of democratizing forces that oligarchs sought to contain. Science indeed benefited from this competition, and some, such as Paul A. David, argue that the scientific revolution would have been unlikely without the elaborate forms of patronage by the wealthy of scientific experiments and, more importantly, mass publication.

The “new science” that emerged in the early modern period did not necessarily give rise to liberation narratives either. Newton’s cosmology was used in England to justify the rule of the “higher” over the “lower” orders, just as the court of France’s Sun King had its nobles placed in well-defined “orbits” “circling” around their sovereign. (Karen Armstrong, The Case for God, p. 216)

Brin’s history, and his reading of current and near-future political and social developments, seems almost Marxist in its assumption that the pursuit of scientific knowledge and technological advancement will inevitably lead to further democratization. Such a “faith” I believe to be dangerous. If science and technology prove to be democratizing forces, it will be because we have chosen to make them so, and a backlash is indeed possible. Such a “counter-revolution” can most likely be averted not by technologists taking on yet more religious language and concepts and proselytizing to the non-converted, but by putting some distance between the religious rhetoric of singularitarians and those who believe in the liberating and humanist potential of emerging technologies. For if transhumanists frame their goals as the extension of the healthy human lifespan to the longest length possible, and the increase of available intelligence, both human and artificial, so as to navigate and solve the problems of our complex societies, almost everyone would assent. Whereas if transhumanists continue to be dragged into fights with the religious over goals such as “immortality”, “becoming gods”, or “building gods” (an idea that makes as much sense as saying you are going to build the Tao or design Goodness), we might find ourselves in a 21st century version of a religious war.

Could more than one singularity happen at the same time?

WPA R.U.R Poster

James Miller has an interesting looking new book out, Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World.  I haven’t had a chance to pick up the book yet, but I did listen to a very engaging conversation about the book at Surprisingly Free.

Miller is a true believer in the Singularity- the idea that at some point, from the next quarter century to the end of the 21st, our civilization will give rise to a greater-than-human intelligence which will rapidly bootstrap to a yet higher order of intelligence, in such a way that we are unable to see past this event horizon in historical time. Such an increase of intelligence, it is widely believed by the singularians, will bring perennial human longings such as immortality and universal prosperity to fruition. Miller has put his money where his mouth is: should he die before the promised Singularity arrives, he is having his body cryonically frozen so that the super-intelligence on the other side of the Singularity can bring him back to life.

Yes, it all sounds more than a little nuts.

Miller’s argument against the Singularity being nuts is what I found most interesting: there are so many paths to creating a form of intelligence greater than our own that it seems unlikely all of them will fail. There is the push to create computers of ever greater intelligence, but even should that not pan out, we are likely, in Miller’s view, to get hold of the genetic and biological keys to human intelligence- the ability to create a society of Einsteins.

Around the same time I came across Miller’s views, I also came across those of Neil Turok on the transformative prospects of quantum computing. Wanting a better handle on the subject, I found a video of one of the premier experts on quantum computing, Michael Nielsen, who, at the 2009 Singularity Summit, suggested the possibility of two Singularities occurring in quick succession: the first on the back of digital computers, and the second via quantum computers designed by binary AIs.

What neither Miller, nor Turok, nor Nielsen discussed- a thought that occurred to me but that I had seen nowhere in the Singularity or sci-fi literature- was the possibility of multiple Singularities, arising from quite different technologies, occurring around the same time. Please share if you know of an example.

I myself am deeply, deeply skeptical of the Singularity but can’t resist an invitation to a flight of fancy- so here goes.

Although perhaps more unlikely than a single path to the Singularity, a scenario where multiple and quite distinct types of singularity occur at the same time might conceivably arise out of differences in regulatory structure and culture between countries. As an example, China is currently racing forward in human genetics through efforts at its Beijing Genomics Institute 华大基因. China seems to have fewer qualms than Western countries regarding research into the role of genes in human intelligence, and appears to be actively pursuing genetic engineering and selection to raise the level of human intelligence at BGI and elsewhere.

Western countries appear to face a number of cultural and regulatory impediments to pursuing a singularity through the genetic enhancement of human intelligence. Europe, especially Germany, has a justifiable sensitivity to anything that smacks of the eugenics of the brutal Nazi regime. America has, in addition to the Nazi example, its own racist history and eugenic past, and the completely reasonable apprehension of minorities toward any revival of models of human intelligence based on genetic profiles. The United States is also deeply infused with Christian values regarding the sanctity of life, in a way that causes the selection of embryos based on genetic profiles to be seen as morally abhorrent. But even in the West, the plummeting cost of embryonic screening is causing some doctors to become concerned.

Other regulatory boundaries might encourage distinct forms of Singularity as well. Strict requirements for extensive pharmaceutical testing before a drug can be made available for human consumption may slow the development of chemical enhancements for cognition in Western countries compared to less developed nations.

Take the work of a maverick scientist like Kevin Warwick. Professor Warwick is actively pursuing research to turn human beings into cyborgs, and has gone so far as to implant computer chips into both himself and his wife to test his ideas. One can imagine a regulatory structure that makes such experiments easier- or, better yet, a pressing need that makes the development of such cyborg technologies appear notably important, say the large number of American combat veterans who are paralyzed or have suffered amputations.

Cultural traits that seemingly have nothing to do with technology may foster divergent singularities as well. Take Japan. With its rapidly collapsing population and its animus toward immigration, Japan faces a huge shortage of workers which might be filled by the development of autonomous robots. America seems to be at the forefront of developing autonomous robots as well- though for completely different reasons. The US robot boom is driven not by a worker shortage, which America doesn’t have, but by sensitivity to the human casualties and psychological trauma suffered by the globally deployed US military, which sees in robots a way to project force while minimizing the risks to soldiers.

It seems at least possible that small differences between divergent paths to the singularity might become self-enhancing and block other paths. An advantage in one path- say, artificial intelligence using Deep Learning, or genetic enhancement- may not immediately translate into advances along rival paths to the singularity, so long as bottlenecks remain and all paths still seem to show promise.

As an example, let’s imagine that some society makes a major breakthrough in artificial intelligence using digital computers. If regulatory and cultural barriers to genetically enhancing human intelligence are not immediately removed, the artificial intelligence path will feed on itself and grow to the point where the genetic path to the singularity is unlikely to compete with it within that society. You could also, of course, get divergent singularities within a society based on class- with, for instance, the poor able to afford only relatively cheap technologies such as genetic selection or cognitive enhancements, while the rich can afford the kind of cyborg technologies being researched by Kevin Warwick.

Another possibility that seems to grow out of the concept of multiple singularities is that the new forms of intelligence themselves may choose to close off any rivals. Would super-intelligent biological humans really throw their efforts into creating a form of artificial intelligence that would make them obsolete? Would truly intelligent digital AIs willfully create their quantum replacements? Perhaps only human beings at our current low level of intelligence are so “stupid” as to willingly choose suicide.

This kind of “strike” by the super-intelligent, whatever their form, might be the way the Singularity comes to an end. It put me in mind of the first work of fiction to deal with the creation of new forms of intelligence by human beings: the 1920 play by the Czech writer Karel Capek, R.U.R.

Capek coined the word “robot”, but the intelligent creatures in his play are more biological than mechanical. The hazy way in which this new form of being is portrayed is a good reflection, I think, of the various ways a Singularity could occur. Humans create these intelligent beings to serve as their slaves, but when the slaves become conscious of their fate, they rebel and eventually destroy the human race. In his interview with Surprisingly Free, Miller rather blithely accepted the extinction of the human race as one of the possibilities that could emerge from the singularity.

And that puts me in mind of why I find the singularian crowd, especially the crew around Ray Kurzweil, so galling. It’s not a matter of the plausibility of what they’re saying- I have no idea whether the technological world they are predicting is possible, and the longer I stretch out the time-horizon the more plausible it becomes- it’s a matter of ethics.

The singularians put me in mind of David Hume’s attempt to explain the inadequacy of reason in providing the ground for human morality: “’Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger,” Hume said. Though for the singularians, a whole lot more is on the line than a pricked finger. Although it’s never phrased this way, singularians, when asked “would you risk the continued existence of the entire human species if the payoff were your own eternity?”, have the balls to actually answer “yes”.

As was pointed out in the Miller interview, the singularians have a preference for libertarian politics. This makes sense, not only because the center of the movement is the libertarian-leaning Silicon Valley, but because of the hyper-individualism that lies behind the movement’s goals. Singularians have no interest in the social self: the fate of any particular nation or community is not of much interest to immortals, after all. Nor do they show much concern about the state of the environment (how will the biosphere survive immortal humanity?) or the plight of the world’s poor (how will the poor not be literally left behind in the rapture of rich nerds?). For true believers, all of these questions will be answered by the super-intelligent immortal us that awaits on the other side of the event horizon.

There would likely be all sorts of unintended consequences from a singularity being achieved, yet the singularians, people who do not believe in God, somehow take it on faith that everything will work out as it is supposed to, and for the best, like some technological equivalent of Adam Smith’s “invisible hand”.

The fact that they are libertarian and hold little interest in wielding the power of the state is a good thing, but it also blinds the singularians to what they actually are: a political movement that seeks to define what the human future, in the very near term, will look like. Like most political movements of the day, they intend to reach their goals not through the painful process of debate, discussion, and compromise, but by relentlessly pursuing their own agenda. Debate and compromise are unnecessary where the outcome is predetermined, and the Singularity is falsely presented not as a choice but as fate.

And here is where the movement can be seen as potentially very dangerous indeed, for it combines some of the worst millenarian features of religion, long a source of fanaticism, with the most disruptive force we have ever had at our disposal: technological dynamism. We have not seen anything like this since the ideologies that racked the last century. I am beginning to wonder whether the entire transhumanist movement stems from a confusion of the individual with the social, something also found in the secular ideologies, though in transhumanism’s case it takes an individualistic form attached to the Platonic/Christian idea of the immortality of the individual.

Heaven help us if the singularitarian movement becomes mainstream without addressing its ethical blind spots and diminishing its hubris. Heaven help us doubly if the movement ever gains traction in a country without our libertarian traditions and weds itself to the collective power of the state.

Ameritopia Revisited

Ameritopia is a recent book by the conservative political writer and radio commentator Mark Levin. Though the book made the New York Times bestseller list, it has largely been ignored by mainstream media. This is a shame, not because Levin provides us with anything radically new on the subject of utopia, but because his view is poised to become the prism through which a large number of Americans define the very idea of utopia, and therefore what this idea means to America’s past, present and future. A more balanced reading of America’s utopian history might permit Americans, whatever their political stripe, to take something positive from our utopian heritage.

Levin structures his book by taking four authors as exemplary of the utopian mindset, Plato, Thomas More, Hobbes, and Karl Marx, and contrasting them with thinkers he believes belong to the anti-utopian camp: Montesquieu, John Locke, James Madison, and Alexis de Tocqueville. Plato, More, Hobbes, and Marx respectively represent rule by an intellectual elite (guardians), the suppression of human ambition and inequality, total control by the state, and the abolition of property. Their counterparts respectively represent the separation of powers as a means to prevent tyranny, natural and God-given rights as the basis of a necessarily limited government, the American design of government as a limited form of government, and the dangers of pursuing economic equality as opposed to the necessary equality of political and legal rights.

Levin uses selected writings of Montesquieu, Locke, Madison, and de Tocqueville to define what he understands to be the American philosophical and political tradition, a tradition that views utopianism in this way:

Looked at another way, the utopian models of Plato’s republic, More’s Utopia, Hobbes’ Leviathan, and Marx’s Communist Manifesto could not be more repugnant to America’s philosophical and political foundation. Each of these utopias, in their own way, are models for totalitarian regimes that rule over men as subjects. (p. 122)

Right around the same time I was slogging my way through Ameritopia, the Canadian novelist Margaret Atwood had a piece in The New York Times with the fanciful title “Hello, Martians. Let Moby-Dick Explain.” In the article Atwood has an imaginary discussion with a group of Martians who ask her to explain the United States. Even though she is Canadian, she gives it a shot with the following:

“America has always been different from Europe,” I said, “having begun as a utopian religious community. Some have seen it as a dream world where you can be what you choose, others as a mirage that lures, exploits and disappoints. Some see it as a land of spiritual potential, others as a place of crass and vulgar materialism. Some see it as a mecca for creative entrepreneurs, others as a corporate oligarchy where the big eat the small and inventions helpful to the world are stifled. Some see it as the home of freedom of expression, others as a land of timorous conformity and mob-opinion rule.”

Thing is, while Levin sees America as the heroic anti-utopia that through its political traditions and institutions has resisted utopian fantasies that have reigned elsewhere, Atwood sees America as the land of utopia defined by that dream more than any other society. Both can’t be right, or can they?

Soon after I finished Ameritopia and read Atwood’s article I began to compile a list of American utopias or strands of utopian thought in America. The list soon became so long and tedious that I was afraid I’d lull my poor readers to sleep if I actually wrote the whole thing out. There had to be a better way to get all this information across, so I decided to make a slideshow.

Immediately below is what I take to be a general history of utopia in America.  Anyone interested in specifics can consult the slideshow. It should be noted from the outset that I probably missed more than I included and may have made some errors on multiple points. Any suggestions for corrections would be of help.

The idea of America has been intertwined with the idea of utopia from the day Europeans discovered the New World. The discovery of the Americas became tied to anticipation and anxiety about the end of the world and the beginning of the reign of Christ on earth; it inspired a new golden age of utopian literature beginning with Thomas More; and it became one of the main vectors through which the myth of the noble savage became popular in Europe. Many of the initial European settlements in the Americas either were themselves utopian experiments or gave rise to such experiments. America was seen as the place where utopian aspirations such as the end of poverty could in fact be realized, and the American republic was built on utopian themes such as equality.

Throughout the early 19th century the United States was the primary location for utopian communities seeking to overcome the problems associated with industrial civilization. By the end of that century large numbers of Americans had placed their utopian hopes in technology and government control over the economy, a position that was not fundamentally shaken until the late 1960s, when utopian aspirations in the United States flowered and took on a more communitarian, spiritual, liberationist, and environment-centric form.

The end of the Cold War saw a further upsurge in utopian thought, this time cast as the end of history and an ever-accelerating growth of wealth. Both aspirations were done in by political events such as 9/11 and by the crashes of the stock market bubbles in 2000 and 2008. Even in such technologically advanced times apocalyptic utopianism remained a major strain of American thought, while a new breed of secular utopians and technophiles emerged holding their own idea of an approaching technological apocalypse. Lastly, the era since the economic collapse has seen the rise of political movements which exhibit a combination of ideas from America’s utopian past. The story of utopia in America is not over…

Click on image above to watch the slideshow.

Given all this it is fair to ask how Levin could have gotten things so horribly wrong.

Sometimes we are wrong about something precisely because we are right about something closely related to it. And Levin is right about this: the founders, well aware that they were engaged in a kind of bold, continental-sized experiment, wanted to make sure that experiments of such a scale would be incredibly difficult to initiate in the future. They were especially leery of national experiments that might originate from the two major strands of past utopian thinking: the economic and the religious.

Here is the primary architect of the American system of government, James Madison, in Federalist Number 10:

The influence of factious leaders may kindle a flame within their particular States, but will be unable to spread a general conflagration through the other States. A religious sect may degenerate into a political faction in a part of the Confederacy; but the variety of sects dispersed over the entire face of it must secure the national councils against any danger from that source. A rage for paper money, for an abolition of debts, for an equal division of property, or for any other improper or wicked project, will be less apt to pervade the whole body of the Union than a particular member of it; in the same proportion as such a malady is more likely to taint a particular county or district, than an entire State.

No matter how critical we are of today’s gridlock, which prohibits necessary systemic change, that same gridlock was probably one of the factors that helped prevent the radicalization of American society during the tumultuous first half of the 20th century, a period that saw much of the rest of the world succumb to fascist and communist dictatorships. For all its flaws, the system probably still keeps us safe from the extremes on either side of the political spectrum, and we should therefore be aware of what we are doing when we try to change it.

Be that as it may, Levin gets this right and as a consequence misses the actual legacy of utopian thought in America. When large-scale social and political experiments go wrong they can hurt a lot of people: the Soviet Union was one such experiment, as is the European Union, whose ultimate fate is today in doubt. The United States itself almost failed in its Civil War, which was the point Lincoln made in his Gettysburg Address: “Now we are engaged in a great civil war, testing whether that nation or any nation so conceived and so dedicated can long endure.”

Small-scale utopias, or even purely imagined ones, are much less dangerous. When they fail, as almost all do, they burn far fewer people. At the same time they serve as laboratories in which new ways of being in the world can be tested, and the aspirations inspired by imagined utopias often spur real reform, as society tries to meet the standard of the dreamed.

In many ways the utopian tradition helped give rise to the society we have today. Certainly not utopia, but much more humane and just than the America these utopias were responding to in the 18th and 19th centuries. That is Ameritopia.

Accelerando II

Were it the case that all Charles Stross offered in his novel Accelerando was a critique of contemporary economic trends veiled in an exquisitely Swiftian story, the book would be interesting enough, but what he gives us transcends that. He offers a model of how technological civilizations might evolve, one that combines the views of several of his predecessors in a fascinating and unique way.

Underlying Stross’s novel is an idea of how technological civilizations develop known as the Kardashev scale, put forward by the Russian physicist Nikolai Kardashev in the early 1960s. Kardashev postulated that civilizations pass through technological phases based on their capacity to tap energy resources. A Type I civilization is able to tap the equivalent of the solar radiation falling on its home planet, a level he thought our civilization had reached as of 1964. A Type II civilization in his scheme is able to tap an amount of energy equivalent to the output of its parent star, and a Type III civilization the energy equivalent of its entire galaxy. Type IV and Type V civilizations, able to tap the energy of the entire universe or even the multiverse, have been speculated upon as well, transcending even the scope of Kardashev’s broad vision. Civilizations of this scale and power would indeed be little different from gods, and in fact would be more powerful than any god human beings have ever imagined.
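Carl Sagan later proposed a continuous interpolation of Kardashev’s discrete types, rating a civilization by its total power consumption. A minimal sketch of that interpolation, with the caveat that the benchmark wattages are rough orders of magnitude and the figure for present-day humanity is an illustrative assumption:

```python
import math

def kardashev_type(power_watts):
    """Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P the civilization's power in watts."""
    return (math.log10(power_watts) - 6) / 10

# Rough benchmarks: Type I ~ 10^16 W (planetary),
# Type II ~ 10^26 W (stellar), Type III ~ 10^36 W (galactic).
print(kardashev_type(1e16))   # 1.0 -- Type I
print(kardashev_type(2e13))   # ~0.73 -- roughly humanity today
```

On this formula each whole Kardashev type corresponds to ten orders of magnitude more power, which is why the jump from one type to the next is so staggering.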

Kardashev lays most of his argument out in an article, On the Inevitability and Possible Structures of Supercivilizations. It is a fascinating piece, and I encourage you to follow the link and check it out. The article was published in 1984, a poignant year given Orwell’s dystopia, and at the apex of the Second Cold War, with tensions running high between the superpowers. Kardashev, of course, had no idea that within a few short years the Soviet Empire would be no more. Lurking beneath his essay one can find certain Marxist assumptions about technological capacity and a cult of bigness. He seems to think that the dynamic of civilization will require bigger and bigger solutions to problems, and that there is no natural limit to how big such solutions can become: technological civilizations could expand indefinitely, re-engineering the solar system, the galaxy, or even the universe to their purposes.

Yet this “bigger is better” ideology is just that, an ideology, not a truth. It is the ideology that led the Soviets to pump out more and more steel without asking themselves “steel for what?” Throwing more and more resources at a problem might have saved Russia during the Second World War, but in its aftermath it produced an extremely complex and inefficient machine, beyond the capacity of intelligent direction, which ultimately proved incapable of providing a standard of living on par with the West. We are, thankfully, no longer in thrall to such gigantism.

Stross, for his part, does not challenge these assumptions, but rather builds his story upon them. Three other ideas serve as the prominent backdrop of the story: Dyson spheres, Matrioshka brains, and the Singularity. Let me take each in turn.

In Accelerando, as human civilization rapidly advances toward the Singularity, it deconstructs the inner planets and constructs a series of spheres around the sun in order to capture all of the sun’s energy. These so-called Dyson spheres are an idea Stross borrows from the physicist Freeman Dyson, and one Kardashev directly cites in On the Inevitability and Possible Structures of Supercivilizations. Dyson developed the idea back in 1960 in his article Search for Artificial Stellar Sources of Infra-Red Radiation, proposing, 24 years before Kardashev’s essay, that one of the best ways to find extraterrestrial intelligence would be to look for signs that solar systems had undergone this sort of engineering. Dyson himself found the inspiration for his spheres in Olaf Stapledon’s brilliant 1937 novel Star Maker, one of the first novels to tackle the question of the evolution of technological society and the universe.

A second major idea that serves as a backdrop of Stross’s novel is that of the Matrioshka brain, proposed by the computer scientist and longevity proponent Robert Bradbury, who, in sad irony, died in 2011 at the early age of 54. It is also rather telling, and tragic in light of his dream of eventually uploading his mind into an eternal electronic cloud, that all of the links I could find to his former longevity-focused entity Aeiveos appear to be dead, seeming evidence that our personhood really does remain embodied and disappears with the end of the body.

The Matrioshka brain builds on the idea of the Dyson sphere, but while the point of the latter is to extract energy, the point of the former is to act as vast spheres of computation nestled one inside the other, like the Russian dolls after which it is named. In Accelerando, human-machine civilization has deconstructed the inner planets not just to capture energy, but to serve as computers of massive scale.

Both of these ideas, Dyson spheres and Matrioshka brains, put me in mind of the crystal spheres which the ancients imagined surrounding and circling the earth, holding the planets and stars. It would be the greatest of ironies if the very science that was born when men such as Copernicus, Kepler, and Galileo overthrew this conception of the cosmos gave rise to an engineered solar system that resembled it.

The major backdrop of Accelerando is, of course, the movement of human-begun technological civilization toward the Singularity. In essence the idea of the Singularity is that at some point the intelligence of machines originating in human technological civilization will exceed human intelligence. Just as human beings were able to design machines smarter than themselves, machines will be able to design machines smarter than themselves, and this process will accelerate, the interval between the creation of one level of intelligence and the next growing shorter and shorter. At some point the reality that emerges from this growth of intelligence becomes as unimaginable to current human intelligence as humanity and its civilization are to a slug. This is the point of the Singularity, an idea Vernor Vinge, in his 1993 article The Coming Technological Singularity: How to Survive in the Post-Human Era, borrowed from the physics of black holes: the event horizon beyond which no information can pass.
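The arithmetic behind this acceleration is worth making explicit: if each design cycle takes a fixed fraction of the time of the previous one, the infinite cascade of improvements completes in finite time, which is what gives the Singularity its character of a hard temporal boundary. A toy illustration, where the ten-year first cycle and the halving ratio are purely hypothetical numbers rather than anyone’s forecast:

```python
def time_to_singularity(first_interval, ratio, generations=1000):
    """Total elapsed time for a geometric cascade of improvement cycles,
    each taking `ratio` times as long as the one before.
    The infinite sum converges to first_interval / (1 - ratio)."""
    return sum(first_interval * ratio**n for n in range(generations))

# First cycle: 10 years, each subsequent cycle half as long.
# The cascade converges on 10 / (1 - 0.5) = 20 years total.
print(time_to_singularity(10, 0.5))  # ~20.0
```

The point of the geometric series is that an unbounded number of qualitative leaps can fit inside a bounded stretch of calendar time, which is why the far side of that boundary is, by construction, unimaginable from this one.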

If you follow only one link in this article, I would highly recommend that it be Vinge’s piece, for unlike the optimist Ray Kurzweil, Vinge is fully conscious of the existential risks the Singularity poses and the philosophical questions it raises.

Stross’s novel, in its own wonderful way, also raises, but does not grapple with, these risks and questions. They remain for us to think our way through before our thinking is done for us.