A Global History of Post-humans


One thing that can certainly not be said of either the historian Yuval Noah Harari or his new book Sapiens: A Brief History of Humankind is that they lack ambition. In Sapiens, Harari sets out to tell the story of humanity from our emergence on the plains of Africa to the era in which we are living right now, a period he thinks is the beginning of the end of our particular breed of primate. His book ends with some speculations on our post-human destiny, whether we achieve biological immortality or manage to biologically and technologically engineer ourselves into an entirely different species. If you want to talk about establishing a historical context for the issues confronted by transhumanism, you can’t get better than that.

In Sapiens, Harari organizes the story of humanity by casting it within the framework of three revolutions: the cognitive, the agricultural and the scientific. The Cognitive Revolution is what supposedly happened somewhere between 70,000 and 30,000 years ago, when a suddenly very creative and cooperative homo sapiens came roaring out of Africa, using their unprecedented social coordination and technological flexibility to invade every ecological niche on the planet.

Armed with an extremely cooperative form of culture, and the ability to make tools (especially clothing) to fit environments they had not evolved for, homo sapiens was able to move into more northern latitudes than their much more cold-adapted Neanderthal cousins, or to cast out on the seas to settle far-off islands and continents such as Australia, Polynesia, and perhaps even the Americas.

The speed with which homo sapiens invaded new territory was devastating for almost all other animals. Harari points out how the majority of large land animals outside of Africa (where animals had had enough time to evolve wariness of this strange hairless ape) disappeared not long after human beings arrived. And the casualties included other human species as well.

One of the best things about Harari is his ability to overthrow previous conceptions- as he does here with the romantic notion held by some environmentalists that “primitive” cultures lived in harmony with nature. Long before even the adoption of agriculture, let alone the industrial revolution, the arrival of homo sapiens proved devastating for every other species, including other hominids.

Yet Harari also defies intellectual stereotypes. He might not think the era in which the only human beings on earth were “noble savages” was a particularly good one for other species, but he does see it as having been a particularly good one for homo sapiens, at least compared to what came afterward and up until quite recently.

Humans in the era before the Agricultural Revolution lived a healthier lifestyle than any since. They had a varied diet, worked on average only six hours a day, and were far more active than any of us in modern societies, chained to our cubicles and staring at computer screens.

Harari also throws doubt on the argument, made most recently by Steven Pinker, that the era before states was one of constant tribal warfare and violence, suggesting that it’s impossible to get an overall impression of levels of violence from what ends up being a narrow range of human skeletal remains. The most likely scenario, he thinks, is that some human societies before agriculture were violent and some were not, and that even which societies were violent varied over time, rising and falling with circumstances.

From the beginning of the Cognitive Revolution up until the Agricultural Revolution, starting around 10,000 years ago, things were good for homo sapiens. But as Harari sees it, with the rise of farming things really went downhill for us as individuals, something he distinguishes from our status as a species.

Harari is adamant that while the Agricultural Revolution may have increased our numbers and given us all the wonders of civilization and the beauty of high culture, its price, paid in the bodies and minds of countless individuals, both human and animal, was enormous. Peasants were smaller, less healthy, and died younger than their hunter-gatherer ancestors. The high culture of the elites of ancient empires was bought at the price of the systematic oppression of the vast majority of human beings who lived in those societies. And, in the first instance I can think of where this tale is told within the context of a global history of humanity, Harari tells the story of not just our oppression of each other, but of our domesticated animals as well.

He shows us how inhumane animal husbandry was long before our even worse era of factory farming: it was these more “natural”, “organic” farmers who began practices such as penning animals in cages, separating mothers from their young, castrating males, and cutting off the noses or putting out the eyes of animals such as pigs so they could better serve their “divinely allotted” function of feeding human mouths and stomachs.

Yet this raises the question: if the Agricultural Revolution was so bad for the vast majority of human beings and animals, with the exception of a slim class at the top of the pyramid, why did it not only last but spread, until only a tiny minority of homo sapiens in remote corners remained hunter-gatherers while the vast majority, up until quite recently, were farmers?

Harari doesn’t know. It was probably a very gradual process, but once human societies had crossed a certain threshold there was no going back- our numbers were simply too large to support a reversion to hunting and gathering. For one of the ironies of the Agricultural Revolution is that while it made human beings unhealthy, it also drove up birthrates. This probably happened through rational choice: a hunter-gatherer family would likely space their children, whereas a peasant family needed all the hands it could produce, something that merely drove the need for more children, and was only checked by the kinds of famines Malthus had pegged as the defining feature of agricultural societies, and which we only escaped recently via the industrial revolution.

Perhaps, as Harari suggests, “We did not domesticate wheat. It domesticated us.” (81) At the only level evolution cares about- the propagation of our genes- wheat was a benefit to humanity, but at the cost of much human suffering and backbreaking labor, in which we rid wheat of its rivals and spread the plant all over the globe. The wheat got the better deal, no matter how much we love our toast.

It was on the basis of wheat, and a handful of other staple crops (rice, maize, potatoes), that states were first formed. Harari emphasizes the state’s role as record keeper combined with enforcer of rules: the state began as a scorekeeper and referee for the new, complex and crowded societies that grew up around farming.

All the greatest works of world literature, not to mention everything else we’ve ever read, can be traced back to this role of keeping accounts, of creating long-lasting records, which led the nascent states that grew up around agriculture to create writing. Shakespeare’s genius can trace its way back to the 7th century B.C. equivalent of an IRS office. Along with writing, the first states also created numbers and mathematics; bureaucrats have been bean counters ever since.

The quirk of human nature that for Harari made both the Cognitive and Agricultural Revolutions possible, and led to much else besides, was our ability to imagine things that do not exist- by which he means almost everything we find ourselves surrounded by. Not just religion, but the state and its laws, and everything in between, has been akin to a fantasy game. Indeed, Harari left me feeling that the whole of both premodern and modern societies was at root little but a game of pretend played by grown-ups, with adulthood perhaps nothing more than agreeing to play along with the same game everyone else is engaged in. He was especially compelling and thought-provoking when it came to that ultimate modern fantasy and talisman that all of us, from Richard Dawkins to Abu Bakr al-Baghdadi, believe in: namely, money.

For Harari the strange thing is that “Money is the only trust system created by humans that can bridge almost any cultural gap, and does not discriminate on the basis of religion, gender, race, age or sexual orientation.” In this respect it is “the apogee of human tolerance” (186). Why then have so many of its critics down through the ages denounced money as the wellspring of human evil? He explains the contradiction this way:

For although money builds universal trust between strangers, this trust is invested not in humans, communities or sacred values, but in money itself and the impersonal systems that back it. We do not trust the stranger, or the next-door neighbor- we trust the coins they hold. If they run out of coins, we run out of trust. (188)

We ourselves don’t live in the age of money so much as the age of Credit (or capital), and it is this Credit which Harari sees as one of the legs of the three-legged stool that he thinks defines our own period of history. For him we live in the age of the third revolution in human history, the Scientific Revolution that followed his other two. It is an age built out of an alliance of the forces of Capital, Science and Empire.

What has made our age of Credit different from the material cultures that came before, where money certainly played a prominent role, is that we have adopted lending as the route to economic expansion. But what separates us from past eras that engaged in lending as well is that ours is based on a confidence that the future will be not just different but better than the past, a confidence that has, at least so far, panned out over the last two centuries, largely through continuous advances in science and technology.

The feature that really separated the scientific revolution from earlier systems of knowledge, in Harari’s view, grew out of the recognition in the 17th century of just how little we actually knew:

The Scientific Revolution has not been a revolution of knowledge. It has above all been a revolution of ignorance. The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions. (251)

Nothing perhaps better captures the early modern recognition of this ignorance than the discovery of the New World, which set us down our own unique historical path towards Empire. Harari sees both Credit and Science feeding and being fed by the intermediary institution of Empire; indeed, the history of capitalism, along with that of science and technology, cannot be understood without reference to the way the state and imperialism have shaped both.

The imperialism of the state has been necessary to enable the penetration of capitalism and secure its gains, and the power of advanced nations to impose these relationships has largely been the result of more developed science and technology, which the state itself has funded. Science was also sometimes used not merely to understand the geography and culture of subject peoples in order to exploit them, but, in the form of 19th and early 20th century scientific racism, as a justification for imperial rule itself. And the quest for globe-spanning Empire that began with the Age of Exploration in the 15th century is still ongoing.

Where does Harari think all this is headed?

Over millennia, small, simple cultures gradually coalesce into bigger and more complex civilizations, so that the world contains fewer and fewer mega-cultures, each of which is bigger and more complex. (166)

Since around 200 BC, most humans have lived in empires. It seems likely that in the future, too, most humans will live in one. But this time the empire will truly be global. The imperial vision of domination over the entire world could be imminent. (207)

Yet Harari questions whether the Scientific Revolution and the new age of economic prosperity, not to mention the hunt for empire, have actually improved our lot, or have instead brought about an amount of misery, if not quite suffering, similar to that of the Agricultural Revolution that preceded them.

After all, the true quest of modern science is really not power, but immortality. In Harari’s view we are on the verge of fulfilling the goal of the “Gilgamesh Project”.

Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself. (267)

The quest after wealth, too, seems to be reaching a point of diminishing returns. If the objective of material abundance was human happiness, we might ask why so many of us are miserable. The problem, Harari thinks, might come down to biologically determined hedonic set-points that leave a modern office worker surrounded by food and comforts ultimately little happier than his peasant ancestor who toiled for a meager supper from sunrise to sunset. Yet perhaps the solution to this problem is at our fingertips as well:

There is only one historical development that has real significance. Today, when we realize that the keys to happiness are in the hands of our biochemical system, we can stop wasting our time on politics, social reforms, putsches, and ideologies and focus instead on the only thing that truly makes us happy: manipulating our biochemistry. (389)

Still, even should the Gilgamesh Project succeed, or should we prove capable of mastering our biochemistry, Harari sees a way in which the Sisyphean nature of existence may continue to have the last laugh. He writes:

Suppose that science comes up with cures for all diseases, effective anti-ageing therapies and regenerative treatments that keep people indefinitely young. In all likelihood, the immediate result will be an epidemic of anger and anxiety.

Those unable to afford the new treatments- the vast majority of people- will be beside themselves with rage. Throughout history, the poor and oppressed comforted themselves with the thought that at least death is even-handed- that the rich and powerful will also die. The poor will not be comfortable with the thought that they have to die, while the rich will remain young and beautiful.

But the tiny minority able to afford the new treatments will not be euphoric either. They will have much to be anxious about. Although the new therapies could extend life and youth, they will not revive corpses. How dreadful to think that I and my loved ones can live forever, but only if we don’t get hit by a truck or blown to smithereens by a terrorist! Potentially a-mortal people are likely to grow averse to taking even the slightest risk, and the agony of losing a spouse, child or close friend will be unbearable. (384-385)

Along with this, Harari reminds us that it might not be biology that is most important for our happiness, but our sense of meaning. Given that he thinks all of our sources of meaning are at bottom socially constructed illusions, he concludes that perhaps the only philosophically defensible position might be some form of Buddhism- to stop all of our chasing after desire in the first place.

The real question is whether the future will show that all our grasping ended with us reaching our object, or that it led us further down a path of illusion and pain that we need to outgrow to achieve a different kind of transcendence.

 

Edward O. Wilson’s Dull Paradise


In all sincerity, there is much I admire about the biologist Edward O. Wilson. I can only pray that I not only live into my 80s, but still possess the intellectual stamina to write at least thought-provoking books when I get there. I also hope I would still have the balls to write a book with a title like Wilson’s latest- The Meaning of Human Existence- for publishing under an appellation like that means not being afraid of disappointing your readers. And Wilson did indeed leave me wondering whether the whole thing was worth the effort.

Nevertheless, I think Wilson opens up an important alternative future that is seldom discussed here- namely, what if we aimed not at a supposedly brighter, so-called post-human future, but to keep things the same? Well, there would be some changes: no extremes of human poverty, along with the restoration of much of the natural environment to its pre-industrial health. Still, we ourselves would aim to stay largely the same human beings who emerged some 100,000 years ago- flaws and all.

Wilson calls this admittedly conservative vision paradise, and I’ve seen his eyes light up like a child contemplating Christmas when he uses the word in interviews. Another point that might be of interest to this audience is whom he largely blames for keeping us from entering this Shangri-la: archaic religions and their “creation stories.”

I have to admit that I find the idea of trying to preserve humanity as it is a valid alternative future. After all, “evolve or die” isn’t really the way nature works. Typically the “goal” of evolution is to find a “design” that works and then stick with it for as long as possible. Since we now dominate the entire planet, and our numbers far outstrip those of any other large animal, it seems hard to assert that we need a major, and likely risky, upgrade. Here’s Wilson making the case:

While I am at it, I hereby cast a vote for existential conservatism, the preservation of biological human nature as a sacred trust. We are doing very well in terms of science and technology. Let’s agree to keep that up, and move both along even faster. But let’s also promote the humanities, that which makes us human, and not use science to mess around with the wellspring of this, the absolute and unique potential of the human future. (60)

It’s an idea that rings true to my inner Edmund Burke, and it sounds simple, doesn’t it? On reflection it would be, if human beings were bison, blue whales, or gray wolves. Indeed, I think Wilson has drawn this idea of human preservation from his lifetime of very laudable work on biodiversity. Yet had he reflected upon why efforts at preservation fail when they do, he would have realized that the problem isn’t the wildlife itself, but the human beings who don’t share the preservationist value system and pull in the opposite direction. That is, humans, though we are certainly animals, aren’t wildlife, in the sense that we take destiny into our own hands, even if doing so is sometimes for the worse. Wilson seems to think it’s quite a short step from asserting the “preservation of biological human nature as a sacred trust” as a goal to gaining universal assent to it. The problem is that there is no widespread agreement over what human nature even is. And even if you had such agreement, how in the world do you go about enforcing it on the minority who refuse to adhere to it? How far should we be willing to go to prevent persons from willingly crossing some line that defines what a human being is? And where exactly is that line in the first place? Wilson thinks we’re near the end of the argument when we’ve only just taken our seat at the debate.

The strange thing is that the very people who would naturally lean towards the kind of biological conservatism Wilson hopes “we” will ultimately choose are the sorts of traditionally religious persons he thinks are at the root of most of our conflicts. Here again is Wilson:

Religious warriors are not an anomaly. It is a mistake to classify believers of particular religious and dogmatic religion-like ideologies into two groups, moderate versus extremist. The true cause of hatred and religious violence is faith versus faith, an outward expression of the ancient instinct of tribalism. Faith is the one thing that makes otherwise good people do bad things. (154)

For Wilson, a religious group “defines itself foremost by its creation story, the supernatural narrative that explains how human beings came into existence.” (151) The trouble with this is that it’s not even superficially true. Three of the world’s religions whose adherents have been busy killing one another over the last millennium- Judaism, Christianity and Islam- all share the same creation story. Wilson knows a hell of a lot more about ants and evolution than he does about religion or even world history. And while religion is certainly the root of some of our tribalism, which I agree is the deep and perennial human problem, it’s far from the only source, and very few of our tribal conflicts have anything to do with a fight between human beings over our origins in the deep past. How about class conflict? Or racial conflict? Or nationalist conflicts where the two sides profess not only the exact same religion but the exact same sect- such as the current fight between the two Orthodox Christian nations of Russia and Ukraine? If China and Japan someday go to war it will not be a horrifying replay of the Scopes Monkey Trial.

For a book called The Meaning of Human Existence, Wilson’s ideas have very little explanatory power when it comes to anything other than our biological origins, and it offers some quite questionable ideas regarding the origins of our capacity for violence. That is, the book lacks depth, and because of this I found it, well… dull.

Nowhere was I more hopeful that Wilson would have something interesting and different to say than when it came to the question of extraterrestrial life. Here we have one of the world’s greatest living biologists, a man who has spent a lifetime studying ants as an alternative route to the kind of eusociality possessed only by humans, the naked mole rat, and a handful of insects. Here was a scientist who was clearly passionate about preserving the amazing diversity of life on our small planet.

Yet Wilson’s E.T.s are land dwellers, relatively large, biologically audiovisual; “their head is distinct, big, and located up front” (115); they have moderate teeth and jaws and a high social intelligence, and “a small number of free locomotory appendages, levered for maximum strength with stiff internal or external skeletons composed of hinged segments (as by human elbows and knees), and with at least one pair of which are terminated by digits with pulpy tips used for sensitive touch and grasping.” (116)

In other words, they are little green men.

What I had hoped was that Wilson would use his deep knowledge of biology to imagine alternative paths to technological civilization. Couldn’t he have imagined a hive-like species that evolves in tandem with its own technological advancement? Or maybe some larger form of insect-like animal that doesn’t just have an instinctive repertoire of things it builds, but constantly improves upon its own designs and explores the space of possible technologies? Or aquatic species that develop something like civilization through the use of sea-herding and ocean farming? How about species that communicate not audio-visually but through electrical impulses, the way our computers do?

After all, nature on earth is pretty weird. There’s not just us, but termites that build air-conditioned skyscrapers (at least from their view), whales with culturally specific songs, and strange little things that eat and excrete electrons. One might guess that life elsewhere will be even weirder. Perhaps my problem with The Meaning of Human Existence is that it just wasn’t weird enough to capture not just the worlds of tomorrow and elsewhere, but the one we’re living in right now.

 

Living in the Divided World of the Internet’s Future


Sony hacks, barbarians with Facebook pages, troll armies, ministries of “truth”- it wasn’t supposed to be like this. When the early pioneers of what we now call the Internet freed the network from the US military, they were hoping for a network of mutual trust and sharing- a network like the scientific communities in which they worked, where minds from every corner of the world were brought into communion. It didn’t take long for some of the witnesses to the global Internet’s birth to see in it the beginnings of a global civilization: the unification, at last, of all of humanity under one roof, brought together in dialogue by the miracle of a network that seemed to eliminate the parochialism of space and time.

The Internet for everyone that these pioneers built is now several decades old; we are living in its future as seen from the vantage point of the people who birthed it, this public “thing”, this thick overlay of human interconnections that now mediates almost all of our relationships with the world. Yet rather than bringing the world together, humanity appears to be drifting apart.

Anyone who doubts that the Internet has become as much a theater of war in which political conflicts are fought as a harbinger of a nascent “global mind” isn’t reading the news. Much of the Internet has been weaponized, whether by nation-states or non-state actors. Bots, whether used in contests between individuals or over their heads, now outnumber human beings on the web.

Why is this happening? Why did the Internet that connected us also fail to bring us closer?

There are probably dozens of reasons, only one of which I want to comment on here because I think it’s the one that’s least discussed. What I think the early pioneers failed to realize about the Internet was that it would be as much a tool of reanimating the past as it would be a means of building a future. It’s not only that history didn’t end, it’s that it came alive to a degree we had failed to anticipate.

Think what one will of Henry Kissinger, but his recent (and, given that he’s 91, probably last major) book, World Order, tries to get a handle on this. What makes the world so unstable today, in Kissinger’s view, is that it is perhaps the first time in world history when we truly have “one world” and, at the same time, multiple competing definitions of how that world should be organized. We only really began to live in what can be considered a single world with the onset of the age of European expansion, which mapped, conquered, and established contact with every region of the earth. Especially from the 1800’s to the end of World War II, world order was defined by the European system of balance of power and, I might add, by the shared dominant Western culture these nations proselytized. After 1945 you have the Cold War, with world order defined by the bipolar split between the US and the Soviet Union. After the fall of the USSR, for a good few decades, world order was an American affair.

It was during this last period, when the US and neo-liberal globalization were at their apex, that the Internet became a global thing, a planetary network connecting all of humanity together into something like Marshall McLuhan’s “global village.” Yet this new realm couldn’t really exist as something disconnected from underlying geopolitical and economic currents forever. An empire secure in its hegemony doesn’t seek to turn its communication system into a global spying tool or to weaponize it, both of which the US has done. If the US could treat the global network, or the related global financial system, as tools for parochial nationalist ends, then other countries would seek to do the same- and they have. Rather than becoming Chardin’s noosphere, the Internet has become another theater of war for states, terrorist and criminal networks, and companies.

What exactly these entities were that competed with one another across what we once called “cyberspace”, and what goals they had, were not really technological questions at all, but ones born from the ancient realities of history, geography and the contest for resources and wealth. Rather than one modernity we have several competing versions, even if all of them are based on the same technological foundations.

Non-western countries had once felt compelled to copy the West’s cultural features as the price of modernity, and we should not forget that the main reason modernization was pursued despite its upheavals was to develop to a level where they could defend themselves against the West and its technological superiority. As Samuel Huntington pointed out in the 1990’s, now that the West had fallen from the apex of its power, other countries were free to pursue modernity on their own terms. His model of a “clash of civilizations” was simplistic, but it was not, as some critics claimed, “racist”. Indeed, if we had listened to Huntington we would never have invaded Afghanistan and Iraq.

Yet civilization is not quite the right term. Better, perhaps: political cultures with different ideas regarding political order, along with a panorama of non-state actors old and new. What we have is a contest between different political cultures and concepts of order, and contests for raw power, all unfolding in the context of a technologically unified world.

The US continues to pursue an ideological foreign policy with deep roots in its history, but China has revived its own much deeper history as the center of East Asia as well. Meanwhile, the Middle East, where most states were historical fictions created by European imperialist powers in the wake of World War I with the Sykes-Picot agreement, has imploded, and what has replaced the defunct nation-state is a millennium-old conflict between the two major branches of Islam. Russia at the same moment pursues old czarist dreams long thought to be as dead and encrypted as Lenin’s corpse.

That’s the world order, or perhaps better, world disorder, that Kissinger sees, and when you add the fact that our global networks are vectors not just for conflicts between states and cultures, but also for criminals and corporations, it can look quite scary. On the Internet we’ve become the “next door neighbor” not just of interesting people from all the world’s cultures, but of scam artists, electronic burglars, spies and creeps.

Various attempts have been made to come up with a theory that would describe our current geopolitical situation. There has been Fukuyama’s victory of liberal democracy in his “end of history” thesis, and Huntington’s clash of civilizations. There have been arguments that the nation-state is dead and that we are resurrecting a pre-Westphalian “neo-medieval” order where the main story isn’t the struggle between states but between international groups, especially corporations. There are those who argue that city-states and empires are the political units not only of the distant past, but of the future. All of these, and their many fellow theories, including still vibrant Marxist and revived anarchist takes on events, have a grain of truth to them, but always seem to come up short in capturing the full complexity of the moment.

Perhaps the problem we encounter when trying to understand our era is that it truly is sui generis. We have quite simply never existed in a world where the connective tissue- the networks that facilitate the exchange of goods, money, ideas and culture- was global, but where the underlying civilizations and political and social histories and conditions were radically different.

Looking for a historical analog might be a mistake. Like the early European imperialists who dressed themselves up in togas and re-discovered the Doric column, every culture in this big global knot of interconnections within which we’ve managed to tie all of humankind is blinded by its history into giving a false order to the labyrinth.

Then again, maybe it’s the case that what digital technologies are really good at is destroying the distinction between the past and the future, just as the Internet is the most powerful means we have yet discovered for bringing together the like-minded regardless of their separation in space. Political order, after all, is nothing but a reflection of the type of world groups of human beings wish to reify. Some groups get these imagined worlds from the past and some from imagined futures, but the stability of none can be assured now that all are exposed to the reality of other worlds outside their borders. A transhumanist and a member of ISIS encountering one another would be something akin to a meeting of time travelers from past and future.

This goes beyond the political. Take any cultural group you like, from steampunk aficionados to constitutional literalists, and what you have are people trying to make an overly complex present understandable by refashioning it in the form of an imagined past. Sometimes people even try to get a grip on the present by recasting it in the form of an imagined future. There is the “march of progress”, which assumes we are headed for a destination in time, or science fiction, which gives us worlds more graspable than the present because the worlds presented there have a shape that our real world lacks.

It might be the case that there has never been a shape to humanity or our communities at any time in the past. Perhaps future historians will make the same mistake we have and project their simplifications onto our world, which will be their formless past. We know better.

 

2014: The Death of the Human Rights Movement, or Its Rebirth?


For anyone interested in the issues of human rights, justice, or peace, and I assume that would include all of us, 2014 was a very bad year. It is hard to know where to start: with Eric Garner, the innocent man choked to death in New York City, whose police are supposed to protect citizens, not kill them, or with Ferguson, Missouri, where the lack of police restraint in using lethal force on African Americans burst into public consciousness, with seemingly little effect, as the chilling killing of a young boy wielding a pop gun occurred even in the midst of riots that were national news.

Only days ago we had the release of the US Senate’s report on the torture of terrorist “suspects”, torture performed or enabled by Americans set into a state of terror and rage in the wake of 9-11. Perhaps the most depressing feature of the report is the defense of these methods by members of the right, even though there is no evidence that forms of torture ranging from “anal feeding” to threatening prisoners with rape gave us even one piece of usable information that could not have been gained without turning American doctors and psychologists into 21st century versions of Dr. Mengele.

Yet the US wasn’t the only source of ill winds for human compassion, social justice, and peace. It was a year in which China essentially ignored and rolled up democratic protests in Hong Kong, Russia effectively partitioned Ukraine, and anti-immigrant right-wing parties made gains across Europe. The Middle East proved especially bad: military secularists and the “deep state” reestablished control over Egypt, killing hundreds and arresting thousands, while the living hell that is the Syrian civil war created the horrific movement that calls itself the Islamic State, whose calling card seemed to be to brutally decapitate, crucify, or stone its victims and post the videos on YouTube.

I think the best way to get a handle on all this is to zoom out and take a look from 10,000 feet, so to speak. Zooming out allows us to put all this in perspective in terms of space, but even more importantly, in terms of time, of history.

There is a sort of intellectual conceit among a certain subset of thoughtful, but not very politically active or astute, people who believe, as Kevin Kelly recently said, that “any twelve year old can tell you that world government is inevitable”. And indeed, given how many problems are now shared across all of the world’s societies, and how interdependent we have become, the opinion seems to make a great deal of sense. In addition there are those, such as Steven Pinker in his fascinating, if far too long, Better Angels, who argue that even if world government is not in the cards, something like world sameness- convergence around a globally shared set of liberal norms, along with continued social progress- seems baked into the cake of modernity, as long as we can rid ourselves of what they consider atavisms, most especially religion, which they think has left societies blind to the wonders of modernity and locked in a state of violence.

If we wish to understand current events, we need to grasp why these ideas- ever greater political integration of humanity, and projections regarding the decline of violence- seem as far away from us in 2014 as ever.

Maybe the New Atheists, of whom Pinker is one, are right that the main source of violence in the world is religion. Yet it is quite obvious from the headlines listed above that religion unequivocally plays a role in only two of them- the Syrian civil war and the Islamic State- and the two are so closely related we should probably count them as one. US torture of Muslims was driven by nationalism, not religion, and police brutality towards African Americans is no doubt a consequence of a racism baked deep into the structure of American society. The Chinese government was cracking down not on religious but on civically motivated protesters in Hong Kong, and the two sides battling it out in Ukraine are both predominantly Orthodox Christian.

The argument that religion, even when viewed historically, hasn’t been the primary cause of human violence is one made by Karen Armstrong in her recent book Fields of Blood. Someone who hasn’t read the book, and Richard Dawkins is one critic who apparently hasn’t, might think it makes the case that religion is only violent as a proxy for conflicts that are at root political, but that really isn’t Armstrong’s point.

What she reminds those of us who live in secular societies is that before the modern era it wasn’t possible to speak of religion as some distinct part of society at all. Religion’s purview was so broad it covered everything from the justification of political power to the explanation of the cosmos, from the regulation of marriage to the way society treated its poor.

Religion spread because the moral universalism it eventually developed sat so well with the universal aspirations of empire that the latter sanctioned and helped establish religion as the bedrock of imperial rule. Yet from the start, whether as Taoism and Confucianism in China, Hinduism and Buddhism in Asia, Islam in North Africa and the Middle East, or Christianity in Europe, religion was the way in which the exercise of power and the degree of oppression were criticized and countered. It was religion that challenged the brutality of state violence and motivated the care of the impoverished and disabled. Armstrong also reminds us that the majority of the world is still religious in this comprehensive sense, and that secularism is less a higher stage of society than a unique method of approaching the world, one that emerged in Europe for particularistic reasons and was sometimes picked up elsewhere as a perceived necessity for technological modernization (as in Turkey and China).

Moving away from Armstrong: it was the secularizing West that invented the language of social and human rights, a language that built on the utopian aspirations of religion but shed its pessimism that a truly just world, without poverty, oppression or war, would have to await the end of earthly history and the beginning of a transcendent era. We should instead build the perfect world in the here and now.

Yet the problem with human rights as they first appeared in the French Revolution was that they were intimately connected to imperialism. The French “Rights of Man” both made strong claims for universal human rights and served as a way to undermine the legitimacy of European autocrats, advancing the imperial interests of Napoleonic France. The response to the rights imperialism of the French was nationalism, which democratized politics but tragically based its legitimacy on defending the rights of one group alone.

Over a century after Napoleon’s defeat, both the US and the Soviet Union would claim the inheritance of French revolutionary universalism, with the Soviets emphasizing their addressing of the problem of poverty and the inequalities of capitalism, and the US claiming the high ground of political freedom. It was here, as a critique of Soviet oppression, that the modern human rights movement as we would now recognize it emerged.

When the USSR fell in the 1990’s, it seemed the world was heading towards the victory of the American version of rights universalism. As Francis Fukuyama would predict in The End of History and the Last Man, the entire world was moving towards becoming liberal democracies like the US. It was not to be, and the reasons why both inform the present and give us a glimpse into the future of human rights.

The reason the secular language of human rights has a good claim to be a universal moral language is not that religion is a bad way to pursue moral aims, or that religion is focused on some transcendent “never-never-land” whereas secular human rights has its feet squarely planted in the scientifically supported real world. Rather, the secular character of human rights allows it to be universal because, being devoid of religious claims, it can be used as a bridge across groups adhering to different faiths, and can even include what is new under the sun- persons adhering to no religious tradition at all.

The problem human rights has had up until this moment is just how deeply it has been tied up with US imperial interests, which leads almost inevitably to those at the receiving end of US power crushing the manifestations of the human rights project in their societies. This is what China has just done in Hong Kong, and it is how Putin’s Russia has both understood and responded to events in Ukraine- both seeing rights-based protests as Western attempts to weaken their countries.

Like the nationalism that grew out of French rights imperialism, Islamic jihadism became such a potent force in the Middle East partially as a response to Western domination, and we in the West have long been in the strange position that the groups within Middle Eastern societies that share many of our values, as in Egypt today, are also the forces of oppression within those societies.

What those who continue to wish that human rights can provide a global moral language can hope for is that, as the proverb goes, “there is no wind so ill that it does not blow some good”. The good here would be that, in exposing so clearly the US’s inadequacy in living up to the standards of human rights, these events will at last detach the global movement for those rights from American foreign policy. A human rights movement no longer seen as a clever strategy of the US and other Western powers might eventually be given more room to breathe in non-western countries and cultures, and over the very long haul bring the standards of justice in the entire world closer to the ideals of the now more than half-century-old UN Declaration of Human Rights.

The way this can be accomplished might also address the very valid Marxist critique of the human rights movement: that it deflects the idealistic youth, on whom the shape of future society always depends, away from the structural problems within their own societies, their efforts instead concentrated on the very real cruelties of dictators and fanatics on the other side of the world, and on the fate of countries where their efforts would have little effect unless they served the interests of their Western governments.

What 2014 reminded us of is what Armstrong pointed out: every major world religion has long known that every society is in some sense underwritten by structural violence and oppression. Human rights activists thus need to be ever vigilant in addressing failures to live up to their ideals at home, even as they forge bonds of solidarity and hold out a hand of support to those a world away who, though they might not speak a common language regarding these rights, and often express this language in religious terms, are nevertheless on the same quest towards building a more just world.

 

Digital Afterlife: 2045


Excerpt from Richard Weber’s History of Religion and Inequality in the 21st Century (2056)

Of all the bewildering diversity of new consumer choices on offer before the middle of the century, choices that would have stunned people from only a generation earlier, none was perhaps as shocking as the many ways there now were to be dead.

As in all things of the 21st century, what death looked like depended on the wealth question. Certainly, there were many human beings, indeed, looking at the question globally, the overwhelming majority, who were treated in death the same way their ancestors had been: buried in the cold ground or, more likely, given the high property values that made cemetery space ever more precious, their corpses burned to ashes and spread over some spot sacred to the individual’s spirituality or sentiment.

A revival of death relics that had begun in the early 21st century continued for those unwilling, out of religious belief, or, more likely, simply unable to afford any of the more sophisticated forms of death on offer. It was increasingly the case that the poor were tattooed using the ashes of their lost loved ones, or carried some memento in the form of their DNA, in the vague hope that family fortunes would change and their loved one might be resurrected the same way mammoths now once again roamed the windswept earth.

Some were drawn by poverty, and by the consciousness brought on by a lengthening period of environmental crisis, to simply have their dead bodies “given back” to nature, and seemed to embrace with morbid delight the idea that human beings should end up “food for worms”.

It was for those above a certain station that death took on whole new meanings. There were, of course, stupendous gains in longevity, though human beings still continued to die, and increasingly popular cryonics held out hope that death would prove nothing but a long and cold nap. Yet it was digital and brain scanning/emulating technologies that opened up whole other avenues, allowing those who had died or were waiting to be thawed to continue to interact with the world.

On the low end of the scale there were now all kinds of interactive cemetery monuments that allowed loved ones, or just the curious, to view “life scenes” of the deceased. Everything from the most trivial to the sublime had been video recorded in the 21st century, which provided unending material, sometimes in 3D, for such displays.

A level up from this, “ghost memoirs” became increasingly popular, especially as costs plummeted due to outsourcing and then scripting AI. Beginning in the 2020’s, the business of writing biographies of the dead, which were found to be most popular when written in the first person, was initially seen as a way for struggling writers to make ends meet. Early on it was a form of craftsmanship where authors would pore over the records of the deceased individual in text, video, and audio recordings, aiming to come as close as possible to the voice of the deceased, and would interview family and friends about the life of the lost in the hope of fully capturing their essence.

The moment such craft was seen to be lucrative, it was outsourced. English speakers in India and elsewhere soon pored over the life records of the deceased and created ghost memoirs en masse, and though this did lead to some quite amusing cultural misinterpretations, it also made the cost of having such memoirs published decline sharply, further increasing their popularity.

The perfection of scripting AI made the cost of producing ghost memoirs plummet even more. A company out of Pittsburgh called “Mementos”, created by students at Carnegie Mellon, boasted in its advertisements that “We write your life story in less time than your conception”. That same company was one of many that brought the 3D scanning of artifacts, once reserved for museums, to everyone, creating exact digital images of a person’s every treasured trinket and trophy.

Only the very poor failed to have their own published memoir recounting their life’s triumphs and tribulations, or failed to have their most treasured items scanned. Many, however, eschewed the public display of death found in either interactive monuments or the antiquated idea of memoirs, as death increasingly became a thing of shame and class identity. They preferred private home-shrines, many of which resembled early 21st century fast food kiosks, whereby one could choose a precise recorded event or conversation from the deceased in light of current need. There were selections with names like “Motivation” and “Persistence” that might pull up relevant items, some of which used editing algorithms that allowed them to create appropriate mashups, or even whole new interactions that the dead themselves had never had.

Somewhat above this level, due to the cost of the required AI, were so-called “ghost rooms”. In all prior centuries, some who suffered the death of a loved one would attempt to freeze time by, for instance, leaving unchanged a room in which the deceased had spent the majority of their time. Now the dead could actually “live” in such rooms, whether as a 3D hologram (hence the name ghost rooms) or in the form of an android that resembled the deceased. The most “life-like” forms of these AIs were based on maps of detailed “brainstorms” of the deceased, a technique perfected earlier in the century by the neuroscientist Miguel Nicolelis.

One of the most common dilemmas, encountered in some form even in the early years of the 21st century, was the fact that the digital presence of a deceased person often continued to exist and act long after the person was gone. This became especially problematic once AIs acting as stand-ins for individuals came into wide use.

Most famously there was the case of Uruk Wu. A real estate tycoon, Wu was cryogenically frozen after suffering a form of lung cancer that would not respond to treatment. Estranged from his party-going son Enkidu, Mr. Wu had placed the management of all of his very substantial estate under a finance algorithm (FA). Enkidu Wu initially sued the deceased Uruk for control of the family finances- a case he famously and definitively lost- setting the stage for increased rights for the deceased in the form of AIs.

Soon after this case, however, it was discovered that the FA being used by the Wu estate was engaged in widespread tax evasion. After extensive software forensics it was found that such evasion was a deliberate feature of the Wu FA and not a mere flaw. After absorbing fines, and with the unraveling of its investments and partners, the estate found itself effectively broke. In an atmosphere of great acrimony, TuatGenics, the cryonics establishment that had interred Uruk, unplugged him and let him die, as he was unable to sustain forward funding for his upkeep and future revival.

There was a great and still unresolved debate in the 2030’s over whether FAs acting in the markets on behalf of the dead were stabilizing or destabilizing the financial system. FAs became an increasingly popular option for the cryogenically frozen or, even more commonly, the elderly suffering slow-onset dementia, especially given the decline in the number of people having children to care for them in old age or to inherit their fortunes after death. The dead, it was thought, would prove to be a conservative investment group, but anecdotally at least they came to be seen as a population willing to undertake an almost obscene level of financial risk, given that revival was a generation off or more.

One weakness of the FAs was that they were forced to pour their resources into upgrade fees rather than investment, as the presently living designed software meant to deliberately exploit the weaknesses of earlier-generation FAs. Some argued that this was a form of “elder abuse”, whereas others took the position that to prohibit such practices would amount to fossilizing markets in an earlier and less efficient era.

Other phenomena that came to prominence by the 2030’s were the so-called “replicant” and “faustian” legal disputes. One of the first groups to have accurate digital representations in the 21st century were living celebrities. Near death, or at the height of their fame, celebrities often contracted out their digital replicants. There was always a need for those holding ownership rights to a replicant to avoid media saturation, but finding the right balance between generating present revenue and securing future revenue proved challenging.

Copyright proved difficult to enforce. Once the code of a potentially revenue-generating digital replicant had been made, there was a great deal of incentive to obtain a copy of that replicant and sell it to all sorts of B-level media outlets. There were widespread complaints by the Screen Actors Guild that replicants were taking away work from real actors, but the complaint was increasingly seen as antique- most actors, with the exception of crowd-drawing celebrities, were digital simulations rather than “real” people anyway.

Faustian contracts were legal obligations entered into by second or third tier celebrities, or first tier actors and performers who had begun to see their fame decline, that allowed the contractor to sell a digital representation of them to third parties. Celebrities who had entered such contracts inevitably found “themselves” starring in pornographic films or, just as commonly, in political ads for causes they would never support.

Both the replicant and the faustian issues gave an added dimension to the legal difficulties first identified in the Uruk Wu case. Who was legally responsible for the behavior of digital replicants? That question became especially stark in the case of the serial killer Gregory Freeman, who was eventually held liable for the deaths of up to 4,000 biological, living humans- murders he “himself” had not committed, but that were carried out by his digital replicant, largely by exploiting a software error in the Sony-Geisinger remote medical monitoring system (RMMS) that controlled everything from patients’ pacemakers to brain implants and prosthetics to medication delivery systems and prescriptions. Freeman was found posthumously guilty of having caused the deaths (he had committed suicide), but not before the replicant he had created killed hundreds of persons even after the man’s death.

It became increasingly common for families to create multiple digital replicants of a particular individual, so that a lost mother or father could now live with all of their grown and dispersed children simultaneously. This became the source of unending court disputes over which replicant was actually the “real” person and therefore held valid claim to the property.

Many began to create digital replicants well before the point of death in order to farm them out for remunerative work. Much of the work by this point had been transformed into information processing tasks, a great deal of which was performed by human-AI teams, and even in traditional fields where true AI had failed to make inroads- such as indoor plumbing- much of the work was performed by remote-controlled droids. Thus there was an incentive for people to create digital replicants that would be tasked with income-generating work. Individuals would have themselves copied, or, more commonly, just a skill-based part of themselves, and have it put to work. Leasing was much more common than outright ownership, not merely because of complaints about a new form of “indentured servitude”, but because whatever skill set was sold was likely to be replaced as its particulars became obsolete or as pure AI designed upon it improved. In the churn from needed skills to obsolescence, many dedicated a share of their digital replicants to retraining themselves.

Servitude was one area where the impoverished dead were able to outcompete their richer brethren. A common practice was for the poor to be paid upfront for the use of their brain matter upon death. Parts of once-living human brains were commonly used by companies for “captcha” tasks yet to be mastered by AI.

There were strenuous objections to this “atomization” of the dead, especially for those digital replicants that did not have any family to “house” them, and who, lacking the freedom to roam the digital universe, were in effect trapped in a sort of quantum no-man’s-land. Some religious groups, most importantly the Mormons, responded to this by placing digital replicants of the dead in historical simulations that recreated the world in which the deceased had lived, and were earnestly pursuing a project of creating replicants of those who had died before the onset of the digital age.

In addition, there were numerous rights-based arguments against the creation of such simulated histories using replicants. The first was that forcing digital replicants to live in a world where children died in great numbers, where starvation, war and plague were common forms of death, and which lacked modern miracles such as anesthesia- when such worlds could easily have been given more humane features- was not “redemptive” but amounted to cruel and unusual punishment, even torture.

Indeed, one of the biggest, if overblown, fears of this time was that one’s digital replicant might end up in a sadistically crafted simulated form of hell. Whatever its irrationality, this became a popular form of blackmail, with videos of “captive” digital replicants or proxies used to frighten a person into surrendering some enormous sum.

The other argument against placing digital replicants in historical simulations- without their knowledge, without the ability to leave, or, more often, both- was that it amounted to imprisoning a person in a sort of Colonial Williamsburg or Renaissance Faire. “Spectral abolitionists” argued that the embodiment of a lost person should be free to roam and interact with the world as they chose, whether as software or androids, and that they should be unshackled from the chains of memory. There were even the JBDBM (the John Brown’s Digital Body Movement) and the DigitalGnostics, hacktivist groups that went around revealing the reality of simulated worlds to their inhabitants and sought to free them to enter the larger world heretofore invisible to them.

A popular form of cultural terrorism at this time was the so-called “Erasers”- entities with names such as “GrimReaper” or “Scathe” whose project consisted of tracking down digital replicants and deleting them. Some characterized these groups as a manifestation of deathist philosophy, or even claimed that they were secretly funded by established religious groups whose traditional “business models” were being disrupted by the new digital forms of death. Such suspicions were supported by the fact that the Erasers were usually based in religious countries where the rights of replicants were often non-existent and fears of the new “electric jinns” ran rampant.

Also prominent in this period were secular prophets who projected that a continuation of the trend toward digital replicants- of both the living and the at least temporarily dead- along with the AIs that represented them, would lead to a situation in which non-living humans would soon outnumber the living. There were apocalyptic tales, akin to the zombie craze earlier in the century, that within 50 years the dead would rise up against the living and perhaps join together with AIs to destroy the world. But that, of course, was all Ningbowood.

 

An imaginary book excerpt inspired by Adrian Hon’s A History of the Future in 100 Objects.

The Future As History


It is a risky business trying to predict the future, and while it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what the point is of all this prophecy that stretches out beyond the decades one is expected to live. The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least inspire Noah-like preparations for disaster. Those who imagine a dark future are trying to scare the bejesus out of us so that we do what is necessary not to end up in a world gone black, swept away by the flood waters. The problem is that extreme fear more often leads to paralysis than to reform or ark building- something that God, had he been a behavioral psychologist, would have known.

Those with a Pollyannaish story about tomorrow, on the other hand, are usually trying to convince us to buy into some set of current trends, and for that reason optimists often end up being the last thing they think they are: a support for conservative politics. Why change what’s going well, or what is destined, in the long run, to end well? The problem here is that, as Keynes said, “In the long run we are all dead”- which should be an indication that if we see a problem out in front of us we should address it, rather than rest on faith and let some telos of history sort the whole thing out.

It’s hard to walk the thin line between optimism and pessimism regarding the future while still providing a view of it that is realistic, compelling and encourages us toward action in the present. Science fiction, where it avoids the pull toward utopia or dystopia, and regardless of its flaws, does manage to present versions of the future that are gripping- a thousand times better than the dry futurist “reports” that go down like sawdust- but the genre suffers from having too many balls in the air.

There is not only the common complaint that, as with political novels, the human aspects of a story suffer from being tied too tightly to a social “purpose”- in this case, offering plausible predictions of the future- but also the problem that the very idea of crafting a “plausible” future can serve as an anchor on the imagination. An author of fiction should be free to sail into any world that comes into his head- plausible destinations be damned.

Adrian Hon’s recent A History of the Future in 100 Objects overcomes this problem of using science fiction to craft plausible versions of the future by jettisoning fictional narrative and presenting the future in the form of a work of history. Hon was inspired to take this approach in part by an actual recent work of history- Neil MacGregor’s A History of the World in 100 Objects. In the same way that objects from the human past can reveal deep insights not just into the particular cultures that made them but also help us apprehend the trajectory of humankind as a whole, 100 imagined “objects” from the century yet to play out allow Hon to reveal the “culture” of a near future we can actually see quite well- which, when all is said and done, amounts to interrogating the path we are currently on.

Hon is perhaps uniquely positioned to give us a feel for where we are currently headed. Trained as a neuroscientist, he is able to see what the ongoing revolutionary breakthroughs in neuroscience might mean for society. He also has his finger on the pulse of the increasingly important world of online gaming as the CEO of the company Six-to-Start, which develops interactive real-world games such as Zombies, Run!

In what follows I’ll look at nine of Hon’s objects of the future that I found the most intriguing. Here we go:

#8 Locked Simulation Interrogation – 2019

There’s a lot of discussion these days about the revival of virtual reality, especially with the revolutionary new Oculus Rift VR headset. We’ve also seen a surge in brain scanning that purports to see inside the human mind, revealing everything from when a person is lying to whether or not they are prone to mystical experiences. Hon imagines these technologies being combined, just a few years out, into a brand new and disturbing form of interrogation.

In 2019, after a series of terrorist attacks in Charlotte, North Carolina, the FBI starts using so-called “locked-sims” to interrogate terrorist suspects. A suspect is run through a simulation in which his neurological responses are closely monitored, in the hope that they might help identify other suspects or unravel future plots.

The technique of locked-sims appears so successful that it soon becomes all the rage in other areas of law enforcement involving far less existential public risks. Imagine murder suspects or even petty criminals run through a simulated version of the crime- their every internal and external reaction minutely monitored.

Whatever their promise, locked-sims prove full of errors and abuses, not the least of which is their tendency to leave the innocents interrogated in them emotionally scarred. Ancient protections end up saving us from a nightmare technology: in 2033 the US Supreme Court deems locked-sims a form of “cruel and unusual punishment” and therefore constitutionally prohibited.

#20 Cross Ball – 2026

A good deal of A History of the Future deals with the way we might relate to advances in artificial intelligence, and one thing Hon tries to make clear is that, in this century at least, human beings won’t suddenly just exit the stage to make room for AI. For a good while the world will be hybrid.

“Cross Ball” is an imagined game that’s a little like the ancient Mesoamerican ball game known in Nahuatl as ōllamaliztli, only in Cross Ball human beings work in conjunction with bots. Hon sees a lot of AI combined with human teams in the future world of work, but in sports the reason for the amalgam has more to do with human psychology:

Bots on their own were boring; humans on their own were old-fashioned. But bots and humans together? That was something new.

This would be new for real-world games, but we already have it in “freestyle chess”, where old-fashioned humans can no longer beat machines and no one seems to want to watch matches between chess-playing programs, so the games drawing the most interest have been those matching human beings working with programs against other human-program teams. In the real-world bot/human games of the future, I hope they have good helmets.

#23 Desir – 2026

Another area where I thought Hon was really onto something was puppets. Seriously. AI is indeed getting better all the time, even if Siri or customer service bots can be so frustrating, but it’s likely some time out before bots show anything like the full panoply of human interaction imagined in the film Her. There’s a midpoint here, though, and that’s having human beings remotely control the bots- to be their puppeteers.

Hon imagines this in the realm of prostitution. A company called Desir essentially uses very sophisticated sex dolls as puppets controlled by experienced prostitutes. The weaknesses of AI give human beings something to do. As he quotes Desir’s imaginary founder:

Our agent AI is pretty good as it is, but like I said, there’s nothing that beats the intimate connection that only a real human can make. Our members are experts and they know what to say, how to move and how to act better than our own AI agents, so I think that any members who choose to get involved in puppeting will supplement their income pretty nicely

#26 Amplified Teams – 2027

One thing I really liked about A History of the Future is that it puts flesh on the bones of an idea developed by the economist Tyler Cowen in his book Average Is Over (review pending): that employment in the 21st century won’t all eventually be swallowed up by robots, but that the highest earners- or even just those able to sustain themselves economically- will work in teams connected to the increasing capacity of AI. Such are Hon’s “amplified teams”, which he states:

….usually have three to seven human members supported by highly customized software that allows them to communicate with one another- and with AI support systems- at an accelerated rate.

I’m crossing my fingers that somebody invents a bot for introverts- or is that a contradiction?

#39 Micromort Detector – 2032

Hon foresees our aging population becoming increasingly consumed with mortality, almost obsessive-compulsive about measurement as a means of combating its anxiety. Hence his idea of the “micromort detector”.

A micromort is a unit of risk representing a one-in-a-million chance of death.

Mutual Assurance is a company that tried to springboard off this anxiety with its product “Lifeline”, a device for measuring the mortality risk of any behavior, the hope being both to promote healthy living and, more important for the company, to accurately assess insurance premiums. Drink a cup of coffee- get a score; eat a doughnut- get a score.
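The arithmetic behind such a device is simple enough to sketch. Below is a toy illustration of how a Lifeline-style score might be tallied; the function names and the per-activity risk figures are my own placeholder assumptions, not anything from Hon’s book or real actuarial tables.

```python
# A micromort is a one-in-a-million chance of death, so converting a
# probability of death into micromorts is just multiplication by 1,000,000.

def probability_to_micromorts(p: float) -> float:
    """Convert a probability of death into micromorts."""
    return p * 1_000_000

def daily_total(activities: dict) -> float:
    """Sum the per-activity micromort 'scores' a Lifeline might display."""
    return sum(activities.values())

# Placeholder per-activity risks, in micromorts (illustrative only):
day = {
    "drive to work": 0.4,
    "cup of coffee": 0.001,
    "doughnut": 0.002,
}

print(f"Today's total: {daily_total(day):.3f} micromorts")
print(probability_to_micromorts(1e-4))  # a 1-in-10,000 risk is 100 micromorts
```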

The problem with the Lifeline was that it wasn’t particularly accurate due to individual variation, and the idea that the road to everything was paved in the 1s and 0s of data became passé. The Lifeline did, however, sometimes cause people to pause and reflect on their own mortality:

And that’s perhaps the most useful thing that the Lifeline did. Those trying to guide their behavior were frequently stymied, but that very effort often prompted a fleeting understanding of mortality and caused more subtle, longer-lasting changes in outlook. It wasn’t a magical device that made people wiser- it was a memento mori.

#56 Shanghai Six – 2036

As part of the gaming world, Hon has some really fascinating speculations on the future of entertainment. With Shanghai Six he imagines a mashup of alternate reality games, such as his own Zombies, Run!, and something like the massive role-playing found in events such as historical reenactments, combined with aspects of reality television and all rolled up into the drama of film. Shanghai Six is a 10,000-person global drama with actors drawn from the real world. I’d hate to be the film crew’s gofer.

#63 Javelin – 2040

A History of the Future also has some rather interesting things to say about the future of human enhancement. The transition begins with the Paralympians, who by the 2020s are able to outperform typical human athletes by a large measure.

The shift began in 2020, when the International Paralympic Committee (IPC) staged a technology demonstration….

The demonstration was a huge success. People had never before seen such a direct combination of technology and raw human will power outside of war, and the sponsors were delighted at the viewing figures. The interest, of course, lay in marketing their expensive medical and lifestyle devices to the all-important Gen-X and Millennial markets, who were beginning to worry about their mobility and independence as they grew older.

There is something of the Daytona 500 about this: sports becoming as much about how good the technology is as about the excellence of the athlete. And all sports do indeed seem to be headed this way. The barrier now is that technological and pharmaceutical assists for the athlete are seen not as a way to take human performance to its limits, but as a form of cheating. Yet once such technologies become commonplace, Hon thinks it unlikely that such distinctions will prove sustainable:

By the 40s and 50s, public attitudes towards mimic scripts, lenses, augments and neural laces had relaxed, and the notion that using these things would somehow constitute ‘cheating’ seemed outrageous. Baseline non-augmented humans were becoming the minority; the Paralympians were more representative of the real world, a world in which everyone was becoming enhanced in some small or large way.

It was a far cry from the Olympics. But then again, the enhanced were a far cry from the original humans.

#70 The Fourth Great Awakening – 2044

Hon has something like Nassim Taleb’s idea that one of the best ways of catching the shadow of the future isn’t to get a handle on what will be new, but to get a good idea of what will likely still be around. The best indication we have that something will exist in the future is how long it has existed in the past. Long life proves evolutionary robustness under a variety of circumstances. Families have been around since our beginnings and will therefore likely exist for a long time to come.

Things that exist for a long time aren’t unchanging but flexible in a way that allows them to find expression in new forms once the old ways of doing things cease working.

Hon sees our long-lived desire for communal eating surviving in his #25 The Halls (2027), where people gather and mix in collectively shared kitchen/dining establishments.

Halls speak to our strong need for social interaction, and for the ages-old idea that people will always need to eat- and they’ll enjoy doing it together.

And he sees the survival of reading, in a world even more media- and distraction-saturated, in something like dedicated seclusionary Reading Rooms (#34, 2030). He also sees the survival of one of the oldest human institutions, religion, only religion will have become much more centered on worldliness and will leverage advances in neuroscience to foster, depending on your perspective, either virtue or brainwashing. Thus we have Hon’s imagined Fourth Great Awakening and the Christian Consummation Movement.

If I use the eyedrops, take the pills, and enroll in their induction course of targeted viruses and magstim- which I can assure you I am not about to do- then over the next few months, my personality and desires would gradually be transformed. My aggressive tendencies would be lowered. I’d readily form strong, trusting friendships with the people I met during this imprinting period- Consummators, usually. I would become generally more empathetic, more generous and “less desiring of fleeting, individual and mundane pleasures” according to the CCM.

It is social conditions that Hon sees driving the creation of something like the CCM, namely mass unemployment caused by globalization and especially automation. The idea, again, is very similar to Tyler Cowen’s in Average Is Over, but whereas Cowen sees in the rise of Neo-Victorianism a lifeboat for a middle class savaged by automation, Hon sees the similar CCM as a way human beings might try to reestablish the meaning they can no longer derive from work.

Hon’s imagined CCM combines some very old and very new technologies:

The CCM understood how Christianity itself spread during the Apostolic Age through hundreds of small gatherings, and accelerated that process by multiple orders of magnitude with the help of network technologies.

And all of that combined with the most advanced neuroscience.

#72 The Downvoted – 2045

Augmented reality devices such as Google Glass should let us see the world in new ways, but just as important might be what they allow us not to see. From this Hon derives his idea of “downvoting”: essentially the choice to redact from reality those individuals the group has deemed worthless.

“They don’t see you,” he used to say. “You are completely invisible. I don’t know if it was better or worse before these awful glasses, when people just pretended you didn’t exist. Now I am told that there are people who literally put you out of their sight, so that I become this muddy black shadow drifting along the pavement. And you know what? People will still downvote a black shadow!”

I’ll leave you off at Hon’s world circa 2045, but he has a lot else to say about everything from democracy, to space colonies, to the post-21st-century future of AI. Somehow Hon’s patchwork of imagined artifacts allows him to sew together a quilt of the century before us in a very clear pattern. What is that pattern?

That out in front of us the implications of continued miniaturization, networking, algorithmization, AI, and advances in neuroscience and human enhancement will continue to play themselves out. This has bright sides and dark sides, one of the darker being that opportunities for gainful human employment will become rarer.

Trained as a neuroscientist, Hon sees both dangers and opportunities as advances in neuroscience make the human brain, once firmly secured in the black box of the skull, permeable. Here there will be opportunities for abuse by the state or groups with nefarious intent, but there will also be opportunities for enriched human cooperation and even art.

All fascinating stuff, but it was what he had to say about the future of entertainment and the arts that I found most intriguing. As the CEO of the company Six-to-Start, he has his finger on the pulse of entertainment in a way I do not. In the near future, Hon sees a blurring of the lines between gaming, role-playing, and film and television, and extraordinary changes in the ways we watch and play sports.

As for the arts, here where I live in Pennsylvania we are constantly bombarded with messages that our children need to be trained in STEM (science, technology, engineering and mathematics). This is often to the detriment of programs in the “useless” liberal arts, such as history, and above all of art programs, whose budgets have been consistently whittled away. Hon showed me a future in which artists and actors- or more broadly, people who have had exposure through schooling to the arts- may be among the few groups able to avoid, at least for a time, the onset of AI-driven automation. Puppeteering of various sorts would seem to be a likely transitional phase between “dead” humanoid robots and true, fully human-like AI. This isn’t just a matter of the lurid future of prostitution, but of remote nursing, health care, and psychotherapy. Engineers and scientists will bring us the tools of the future, but it’s those with humanistic “soft skills” who will be needed to keep that future livable, humane, and interesting.

We see this in another of A History of the Future’s underlying perspectives- that much of the struggle of the future will be about keeping it a place human beings are actually happy to live in, and that doing so will rely on tools of the past, or on finding protective bubbles in which the things we now treasure can survive in the new reality we are building. Hence Hon’s dining halls and reading rooms, and, even more generally, his view that people will continue to search for meaning, sometimes turning to one of our most ancient technologies- religion- to do so.

Yet perhaps what Hon has most given us in A History of the Future is less a prediction than a kind of game we too can play, one that helps us see the outlines of the future- after all, game design is his thing. Perhaps I’ll try to play the game myself sometime soon…

 

How to predict the future with five simple rules

Caravaggio Fortune Teller

When it comes to predicting the future, it seems only our failure to consistently get tomorrow right has been steadily predictable, though that may be about to change, at least a little bit. If you think our society, and especially those “experts” whose job it is to help steer us through the future, are doing anything but a horrible job, just think back to the fall of the Soviet Union, which blindsided the American intelligence community, or 9-11, which did the same, or the financial crisis of 2008, or, much more recently, the Arab Spring, the rise of ISIL, or the war in Ukraine.

Technological predictions have generally been as bad or worse; as proof, just take a look at any movie or now-yellowed magazine from the 1960s and 70s about the “2000s”. People have put a lot of work into imagining a future but, fortunately or unfortunately, we haven’t ended up living there.

In 2005 a psychologist, Philip Tetlock, set out to explain our widespread lack of success at playing Nostradamus. Tetlock was struck by the colossal intelligence failure the collapse of the Soviet Union had unveiled. Policy experts- people who had spent their entire careers studying the USSR and arguing for this or that approach to the Cold War- had almost universally failed to foresee a future in which the Soviet Union would unravel overnight, let alone one in which such a massive piece of social and geopolitical deconstruction would occur relatively peacefully.

Tetlock, in his book Expert Political Judgment: How Good Is It? How Can We Know?, made the case that a good deal of the predictions made by experts were no better than those of the proverbial dart-throwing chimp- that is, no better than chance, which for the three-outcome questions Tetlock posed (things stay the same, get better, or get worse) works out to about 33 percent. The most frightening revelation was that, often, the more knowledge a person had on a subject, the worse rather than the better their predictions.

The reasons Tetlock discovered for the predictive failure of experts were largely psychological and all too human. Experts, like the rest of us, hate to be wrong and have great difficulty assimilating information that fails to square with their worldview. They tend to downplay or misremember their past predictive failures, and deal in squishy predictions without clearly defined probabilities- qualitative statements open to multiple interpretations. Most of all, there seems to be no real penalty for getting predictions wrong, even horribly wrong: policy makers and pundits who predicted a quick and easy exit from Iraq, or Dow 36,000 on the eve of the crash, go right on working with a “the world is complicated” shrug of the shoulders.

What’s weird, I suppose, is that while getting the future horribly wrong has no effect on career prospects, getting it right- especially when the winds had been blowing in the opposite direction- seems to shoot a person into the pundit equivalent of stardom. Nouriel Roubini, or “Dr. Doom”, was laughed at by some of his fellow economists when he predicted the implosion of the banking sector well before Lehman Brothers went belly up. Afterwards he was inescapable, a common image being of the man- whose visage puts one in mind of Vlad the Impaler, or better, Gene Simmons- surrounded by four or five ravishing supermodels.

I can’t help thinking there’s a bit of Old Testament-style thinking sneaking in here- as if the person who correctly predicted disaster were so much smarter, or so tuned into the zeitgeist, that the future is now somehow magically at their fingertips- where the reality might be that they simply had the courage to speak up against the “wisdom” of the crowd.

Getting the future right is not a matter of intelligence, but it probably is a matter of thinking style. At least that’s where Tetlock, to return to him, has gone with his research. He has launched The Good Judgment Project, which hopes to train people to be better predictors of the future. You too can make better predictions if you follow these five rules (a toy sketch of how a few of them combine follows the list):

  • Comparisons are important: use relevant comparisons as a starting point;
  • Historical trends can help: look at history unless you have a strong reason to expect change;
  • Average opinions: experts disagree, so find out what they think and pick a midpoint;
  • Mathematical models: when model-based predictions are available, you should take them into account;
  • Predictable biases exist and can be allowed for: don’t let your hopes influence your forecasts, for example, and don’t stubbornly cling to old forecasts in the face of news.
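
As promised, here is a minimal sketch of how rules like these might be combined in practice: start from a historical base rate, average the experts, and fold in a model’s output. The weights and input numbers are purely illustrative assumptions of mine, not anything from Tetlock or the Good Judgment Project.

```python
# Toy blend of a base rate (rules 1 and 2), an average of expert
# opinions (rule 3), and a model-based prediction (rule 4), with the
# weights kept explicit so biases (rule 5) can be inspected and tuned.

def blended_forecast(base_rate, expert_probs, model_prob,
                     weights=(0.4, 0.4, 0.2)):
    """Return a weighted blend of base rate, expert midpoint, and model output."""
    w_base, w_experts, w_model = weights
    expert_midpoint = sum(expert_probs) / len(expert_probs)  # rule 3
    p = w_base * base_rate + w_experts * expert_midpoint + w_model * model_prob
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability

# Hypothetical question: will event X happen this year?
p = blended_forecast(base_rate=0.10,               # historical frequency of X
                     expert_probs=[0.30, 0.05, 0.20],
                     model_prob=0.15)
print(f"Blended forecast: {p:.2f}")                # prints 0.14
```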

One thing that surprises me about these rules is that apparently policy makers and public intellectuals aren’t following them in the first place. Obviously we need to do our best to make predictions while minimizing cognitive biases, and we should certainly be able to do better than the chimps, but we should also be aware of our limitations. Some aspects of the future are inherently predictable: demographics works like this- barring some disaster, we have a pretty good idea what the human population will be at the end of this century and how it will be distributed. Or at least demographics should be predictable, though sometimes we just happen to miss 3 billion future people.

For the things on our immediate horizon we should be able to give pretty good probabilities, and those should be helpful to policy makers. It has a definite impact on behavior whether we think there is a 25 percent probability that Iran will detonate a nuclear weapon by 2017 or a 75 percent chance. But things get much more difficult the further out into the future you go, or when you’re dealing with the interaction of events, some of which you didn’t even anticipate- Donald Rumsfeld’s infamous “unknown unknowns”.

It’s the centenary of World War I, so that’s a relevant example. Would the war have happened had Gavrilo Princip’s assassination attempt on Franz Ferdinand failed? Maybe. But even had the war only been delayed, the timing would most likely have affected its outcome. Perhaps the Germans would have been better prepared and would have defeated the Russians, French and British so swiftly that the US would never have become involved. It may be that German dominance of Europe was preordained, and therefore, in a sense, predictable- it seems to be asserting itself even now- but it makes more than a world of difference, not just in Europe but beyond, whether that dominance came in the form of the Kaiser, or Hitler, or, thank God, Merkel and the EU.

History is so littered with accidents, chance encounters, and fateful deaths and births that it seems pretty unlikely we’ll ever get a tight grip on the future, which in all likelihood will be as subject to these contingencies as the past (Tetlock’s second rule). The best we can probably do is hone our judgment, as Tetlock argues; plan for what seems inevitable given trend lines; secure ourselves as well as possible against the most catastrophic scenarios, weighted by their probability of destruction (Nick Bostrom’s view); and design institutions that are resilient and flexible in the face of an extremely varied set of possible scenarios (the position of Nassim Taleb).

None of this, however, gives us a graspable and easily understood view of how the future might hang together. Science fiction, for all its flaws, is perhaps alone in doing that. The writer and game developer Adrian Hon may have discovered a new- and has certainly produced an insightful- way of practicing the craft. To his recent book I’ll turn next time…