2040’s America will be like 1840’s Britain, with robots?

Christopher Gibbs Steampunk

Looked at in a certain light, Adrian Hon’s History of the Future in 100 Objects can be seen as giving us a window into a fictionalized version of an intermediate technological stage we may be entering. It is the period when the gains in artificial intelligence are clearly happening, but they have yet to completely replace human intelligence. The question of whether AI ever will actually replace us is not of interest to me here. It certainly won’t be tomorrow, and technological prediction beyond a certain limited horizon is a fool’s game.

Nevertheless, some features of the kind of hybrid stage we have entered are clearly apparent. Hon built an entire imagined world around them, with “amplified teams” (AI working side by side with groups of humans) as one of the major elements of 21st century work, sports, and much else besides.

The economist Tyler Cowen perhaps did Hon one better, for he not only based his very similar version of the future on things that are happening right now, but also provided insight into what we should do as job holders and bread-winners in light of the rise of ubiquitous, if less than human level, artificial intelligence. One only wishes that his vision had room for more politics, for if Cowen is right, and absent our taking collective responsibility for the type of future we want to live in, 2040’s America might look like the Britain found in Dickens, only we’ll be surrounded by robots.

Cowen may seem a strange duck to take up the techno-optimism mantle, but he did so with gusto in his recent book Average is Over. The book is in essence a sequel to Cowen’s earlier best seller The Great Stagnation, in which he argued that developed economies, including the United States, had entered a period of secular stagnation beginning in the 1970’s. The reason for this stagnation was that advanced economies had essentially picked all the “low hanging fruit” of the industrial revolution.

Arguing that we are in a period of technological stagnation at first seems strange, but consider a few facts: we don’t fly all that much faster than my grandparents would have in the 1960’s; the kitchen in my family photos from the Carter days looks surprisingly like the kitchen I have right now, minus the paneling; and, saddest of all from the point of view of someone brought up on Star Trek, Star Wars and Star Blazers, with a comforter sporting Viking 2 and Pioneer, not only have we failed to send human visitors to Mars or beyond, we haven’t even been back to the moon. Hell, we don’t even have any human beings beyond low-earth orbit.

Of course, it would be silly to argue there has been no technological progress since Nixon. Information, communication and computer technology have progressed at an incredible speed, remaking much of the world in their wake, and have now seemingly been joined by revolutions in biotechnology and renewable energy.

And yet, despite how revolutionary these technologies have been, they have not been able to do the heavy lifting of prior forms of industrialization due to the simple fact that they haven’t been as qualitatively transformative as the industrial revolution. If I had a different job I could function just fine without the internet, and my life would be different only at the margins. Set the technological clock by which I live back to the days preceding industrialization, before electricity, and the internal combustion engine, and I’d be living the life of my dawn-to-dusk Amish neighbors- a different life entirely.

Average is Over is a followup to Cowen’s earlier book in that it argues that the technological changes now taking place will have an impact big enough to shake us out of our stagnation, or at least that the stagnation is itself evolving into something quite different, with some able to escape its pull while others fall even further behind.

Like Hon, Cowen thinks intermediate level AI is what we should be paying attention to, rather than Kurzweil- or Bostrom-like hopes and fears regarding superintelligence. Also like Hon, Cowen thinks the most important aspect of artificial intelligence in the near future is human-AI teams. This is the lesson Cowen takes from, among other things, freestyle chess.

For those who haven’t been paying attention to the world of competitive chess, freestyle chess is what emerged once people were able to buy, for a few dollars, a chess playing program that could beat the best players in the world and run it on their phone. One might have thought that would be the death knell for human chess, but something quite different has happened. Now, some of the most popular chess games are freestyle, meaning human-machine vs human-machine.

The moral Cowen draws from freestyle chess is that the winners of these games, and he extrapolates, the economic “games” of the future, are those human beings who are most willing to defer to the decisions of the machine. I find this conclusion more than a little chilling given we’re talking about real people here rather than knights or pawns, but Cowen seems to think it’s just common sense.

In its simplest form Cowen’s argument boils down to the prediction that an increasing amount of human work in the future will come in the form of these AI-human teams. Some of this, he admits, will amount to no workers at all, with the human part of the “team” reduced to an unpaid customer. I now almost always scan and bag my own goods at the grocery store, just as I can’t remember the last time I actually spoke to a bank teller who wasn’t my mom. Cowen also admits that the rise of AI might mean the world actually gets “dumber,” our interactions with our environment simplified to foster smooth integration with machines and compressed to meet their limits.

In his vision intelligent machines will revolutionize everything from medicine to education to business management and negotiation to love. The human beings who will best thrive in this new environment will be those whose work best complements that of intelligent machines, and this will be the case all the way from the factory floor to the classroom. Intelligent machines should improve human judgement in areas such as medical diagnostics, and would even replace judges in the courtroom if we are ever willing to take the constitutional plunge. Teachers will go from educators to “coaches” as intelligent machines allow individualized instruction, but education will still require a human touch when it comes to motivating students.

His message to those who don’t work well with intelligent machines is – good luck. He sees automation leading to an ever more competitive job market in which many will fail to develop the skills necessary to thrive. Those unfortunate ones will be left to fend for themselves in the face of an increasingly penny-pinching state. There is one area, however, where Cowen thinks you might find refuge if machines just aren’t your thing- marketing. Indeed, he sees marketing as one of the major growth areas in the new, otherwise increasingly post-human, economy.

The reason for this is simple. In the future there are going to be fewer, not more, people with surplus cash to spend on all the goods built by a lot of robots and a handful of humans. One will have to find and persuade those with real incomes to part with some of their cash. Computers can do the finding, but it will take human actors to sell the dream represented by a product.

The world of work presented in Cowen’s Average is Over is almost exclusively that of the middle class and higher who find their way with ease around the Infosphere, or whatever we want to call this shell of information and knowledge we’ve built around ourselves. Either that or those who thrive economically will be those able to successfully pitch whatever it is they’re selling to wealthy or well off buyers, sometimes even with the help of AI that is able to read human emotions.

I wish Cowen had focused more on what it will be like to be poor in such a world. One thing is certain, it will not be fun. For one, he sees further contraction rather than expansion of the social safety net, and widespread conservatism, rather than any attempts at radically new ways of organizing our economy, society and politics. Himself a libertarian conservative, Cowen sees such conservatism baked into the demographic cake of our aging societies. The old do not lead revolutions and given enough of them they can prevent the young from forcing any deep structural changes to society.

Cowen also has a thing for so-called “moral enhancement” though he doesn’t call it that. Moral enhancement need not only come from conservative forces, as the extensive work on the subject by the progressive James Hughes shows, but in the hands of both Hon and Cowen, moral enhancement is a bulwark of conservative societies, where the world of middle class work and the social safety net no longer function, or even exist, in the ways they had in the 20th century.

Hon, with his neuroscience background, sees moral enhancement leveraging off of our increasing mastery over the brain, but manifesting itself in a revival of religious longings related to meaning, a meaning that was for a long time provided by work, callings and occupations that he projects will become less and less available as we roll through the 21st century with human workers replaced by increasingly intelligent machines. Cowen, on the other hand, sees moral enhancement as the only way the poor will survive in an increasingly competitive and stingy environment, though his enhancement is to take place by more traditional means: the return of strict schools that inculcate Victorian-era morals such as self-control and above all conscientiousness in the young. Cowen is far from alone in thinking that in an era when machines are capable of much of the physical and intellectual labor once done by human beings, what will matter most to individual success are ancient virtues.

In Cowen’s world the rich with money to burn are chased down with a combination of AI, behavioral economics, targeted consumer surveillance, and old fashioned, fleshy persuasion to part with their cash, but what will such a system be like for those chronically out of work? Even should mass government surveillance disappear tomorrow (fat chance), it seems the poor will still face a world where the forces behind their ever more complex society become increasingly opaque, responsible humans harder to find, and in which they are constantly “nudged” by people who claim to know better. For the poor, surveillance technologies will likely be used not to sell them stuff they can’t afford, but as a tool of the repo-man, debt collector, parole officer, and cop, slowly chiseling away whatever slim column still connects them to the former middle class world of their parents. It is a world more akin to the 1940’s or even the 1840’s than it is to anything we have taken to be normal since the middle of the 20th century.

I do not know if such a world is sustainable over the long haul, and pray that it is not. The pessimist in me remembers that the classical and medieval worlds existed for long periods of time with extreme levels of inequality in both wealth and power; the optimist chimes in that these were ages when the common people did not know how to read. In any case, it is not a society that must by some macabre logic of economic determinism come about. The mechanism by which Cowen sees no sustained response to such a future coming into being is our own political paralysis and generational tribalism. He seems to want this world more than he is offering us a warning of its arrival. Let’s decide to prove him wrong, for the technologies he puts so much hope in could be used in totally different ways and in the service of a more just form of society.

However critical I am of Cowen for accepting such a world as a fait accompli, the man still has some rather fascinating things to say. Take for instance his view of the future of science:

Once genius machines start coming up with new theories…. intelligibility will seem like a legacy from the very distant past. (220)

For Cowen much of science in the 21st century will be driven by coming up with theories and correlations from the massive amount of data we are collecting, a task more suited to a computer than a man (or woman) in a lab coat. Eventually machine derived theories will become so complex that no human being will be able to understand them. Progress in science will be given over to intelligent machines even as non-scientists find increasing opportunities to engage in “citizen science”.

Come to think of it, lack of intelligibility runs like a red thread throughout Average is Over, from “ugly” machine chess moves that human players scratch their heads at, to the fact that Cowen thinks those who will succeed in the next century will be those who place their “faith” in the decisions of machines, choices of action they themselves do not fully understand. Let’s hope he’s wrong on that score as well, for lack of intelligibility in politics, economics, and science drives conspiracy theories, paranoia, superstition, and political immobility.

Cowen believes the time when secular persons are able to cull from science a general, intelligible picture of the world is coming to a close. This would be a disaster in the sense that science gives us the only picture of the world that is capable of being universally shared which is also able to accurately guide our response to both nature and the technological world. At least for the moment, perhaps the best science writer we have suggests something very different. To her new book, next time….

Digital Afterlife: 2045

Alphonse Mucha Moon

Excerpt from Richard Weber’s History of Religion and Inequality in the 21st Century (2056)

Of all the bewildering diversity of new consumer choices on offer before the middle of the century that would have stunned people from only a generation earlier, none was perhaps as shocking as the many ways there now were to be dead.

As in all things of the 21st century what death looked like was dependent on the wealth question. Certainly, there were many human beings, and when looking at the question globally, the overwhelming majority, who were treated in death the same way their ancestors had been treated. Buried in the cold ground, or, more likely given high property values that made cemetery space ever more precious, their corpses burned to ashes, spread over some spot sacred to the individual’s spirituality or sentiment.

A revival of death relics that had begun in the early 21st century continued for those unwilling out of religious belief, or more likely, simply unable to afford any of the more sophisticated forms of death on offer. It was increasingly the case that the poor were tattooed using the ashes of their lost loved one, or that they carried some memento in the form of their DNA in the vague hope that family fortunes would change and their loved one might be resurrected in the same way mammoths now once again roamed the windswept earth.

Some were drawn by poverty and the consciousness brought on by the increasing period of environmental crisis to simply have their dead bodies “given back” to nature and seemed to embrace with morbid delight the idea that human beings should end up “food for worms”.

It was for those above a certain station where death took on whole new meanings. There were of course, stupendous gains in longevity, though human beings still continued to die, and  increasingly popular cryonics held out hope that death would prove nothing but a long and cold nap. Yet it was digital and brain scanning/emulating technologies that opened up whole other avenues allowing those who had died or were waiting to be thawed to continue to interact with the world.

On the low end of the scale there were now all kinds of interactive cemetery monuments that allowed loved ones or just the curious to view “life scenes” of the deceased. Everything from the most trivial to the sublime had been video recorded in the 21st century which provided unending material, sometimes in 3D, for such displays.

At a level up from this, “ghost memoirs” became increasingly popular, especially as costs plummeted due to outsourcing and then scripting AI. Beginning in the 2020’s the business of writing biographies of the dead, which were found to be most popular when written in the first person, was initially seen as a way for struggling writers to make ends meet. Early on it was a form of craftsmanship where authors would pore over records of the deceased individual in text, video, and audio recordings, aiming to come as close as possible to the voice of the deceased, and would interview family and friends about the life of the lost in the hopes of being able to fully capture their essence.

The moment such craft was seen to be lucrative it was outsourced. English speakers in India and elsewhere soon pored over the life records of the deceased and created ghost memoirs en masse, and though it did lead to some quite amusing cultural misinterpretations, it also made the cost of having such memoirs published sharply decline, further increasing their popularity.

The perfection of scripting AI made the cost of producing ghost memoirs plummet even more. A company out of Pittsburgh called “Mementos” created by students at Carnegie Mellon boasted in their advertisements that “We write your life story in less time than your conception”. That same company was one of many others that had brought 3D scanning of artifacts from museums to everyone and created exact digital images of a person’s every treasured trinket and trophy.

Only the very poor failed to have their own published memoir recounting their life’s triumphs and tribulations, or failed to have their most treasured items scanned. Many, however, eschewed the public display of death found in either interactive monuments or the antiquated idea of memoirs as death increasingly became a thing of shame and class identity. They preferred private home-shrines, many of which resembled early 21st century fast food kiosks, whereby one could choose a precise recorded event or conversation from the deceased in light of current need. There were selections with names like “Motivation” and “Persistence” that might pull up relevant items, some of which used editing algorithms that allowed them to create appropriate mashups, or even whole new interactions that the dead themselves had never had.

Somewhat above this level, due to the cost of the required AI, were so-called “ghost-rooms”. In all prior centuries some who suffered the death of a loved one would attempt to freeze time by, for instance, leaving unchanged a room in which the deceased had spent the majority of their time. Now the dead could actually “live” in such rooms, whether as a 3D hologram (hence the name ghost rooms) or in the form of an android that resembled the deceased. The most “life-like” forms of these AI’s were based on maps of detailed “brainstorms” of the deceased, a technique perfected earlier in the century by the neuroscientist Miguel Nicolelis.

One of the most common dilemmas, and one that was encountered in some form even in the early years of the 21st century, was the fact that the digital presence of a deceased person often continued to exist and act long after a person was gone. This became especially problematic once AIs acting as stand-ins for individuals became widely used.

Most famously there was the case of Uruk Wu. A real estate tycoon, Wu was cryogenically frozen after suffering a form of lung cancer that would not respond to treatment. Estranged from his party-going son Enkidu, Mr. Wu had placed the management of all of his very substantial estate under a finance algorithm (FA). Enkidu Wu initially sued the deceased Uruk for control of family finances- a case he famously and definitively lost- setting the stage for increased rights for the deceased in the form of AIs.

Soon after this case, however, it was discovered that the FA being used by the Uruk estate was engaged in wide-spread tax evasion practices. After extensive software forensics it was found that such evasion was a deliberate feature of the Uruk FA and not a mere flaw. After absorbing fines, and with the unraveling of its investments and partners, the Uruk estate found itself effectively broke. In an atmosphere of great acrimony TuatGenics, the cryonic establishment that had interred Uruk, unplugged him and let him die, as he was unable to sustain forward funding for his upkeep and future revival.

There was a great and still unresolved debate in the 2030’s over whether FAs acting in the markets on behalf of the dead were stabilizing or destabilizing the financial system. FAs became an increasingly popular option for the cryogenically frozen, or even more commonly the elderly suffering slow onset dementia, especially given the decline in the number of people having children to care for them in old age, or inherit their fortunes after death. The dead, it was thought, would prove to be a conservative investment group, but anecdotally at least they came to be seen as a population willing to undertake an almost obscene level of financial risk due to the fact that revival was a generation off or more.

One weakness of the FAs was that they were faced with pouring their resources into upgrade fees rather than investment as the presently living designed software meant to deliberately exploit the weaknesses of earlier generation FAs. Some argued that this was a form of “elder abuse” whereas others took the position that to prohibit such practices would constitute fossilizing markets in an earlier and less efficient era.

Other phenomena that came to prominence by the 2030’s were so-called “replicant” and “faustian” legal disputes. One of the first groups to have accurate digital representations in the 21st century were living celebrities. Near death or at the height of their fame, celebrities often contracted out their digital replicants. There was always a need for those holding ownership rights to the replicants to avoid media saturation, but finding the right balance between generating present and securing future revenue proved challenging.

Copyright proved difficult to enforce. Once the code of a potentially revenue generating digital replicant had been made there was a great deal of incentive to obtain a copy of that replicant and sell it to all sorts of B-level media outlets. There were widespread complaints by the Screen Actors Guild that replicants were taking away work from real actors, but the complaint was increasingly seen as antique- most actors with the exception of crowd drawing celebrities were digital simulations rather than “real” people anyway.

Faustian contracts were legal obligations by second or third tier celebrities, or first tier actors and performers who had begun to see their fame decline, that allowed the contractor to sell a digital representation to third parties. Celebrities who had entered such contracts inevitably found “themselves” starring in pornographic films, or just as common, in political ads for causes they would never support.

Both the replicant and faustian issues gave an added dimension to the legal difficulties first identified in the Uruk Wu case. Who was legally responsible for the behavior of digital replicants? That question became especially apparent in the case of the serial killer Gregory Freeman. Freeman was eventually held liable for the deaths of up to 4,000 biological, living humans- murders he “himself” had not committed, but that were done by his digital replicant, largely by exploiting a software error in the Sony-Geisinger remote medical monitoring system (RMMS) that controlled everything from patients’ pacemakers to brain implants and prosthetics to medication delivery systems and prescriptions. Freeman was found posthumously guilty of having caused the deaths (he had committed suicide), though the replicant he created went on killing hundreds more even after the man’s death.

It became increasingly common for families to create multiple digital replicants of a particular individual, so now a lost mother or father could live with all of their grown and dispersed children simultaneously. This became the source of unending court disputes over which replicant was actually the “real” person and which therefore held valid claim to property.

Many began to create digital replicants well before the point of death to farm them out for remunerative work. Much of the work by this point had been transformed into information processing tasks, a great deal of which was performed by human-AI teams, and even in traditional fields where true AI had failed to make inroads- such as indoor plumbing- much of the work was performed by remote controlled droids. Thus, there was an incentive for people to create digital replicants that would be tasked with income generating work. Individuals would have themselves copied, or more commonly just a skill-based part of themselves copied, and have it used for work. Leasing was much more common than outright ownership, and not merely because of complaints of a new form of “indentured servitude,” but because whatever skill set was sold was likely to be replaced as its particulars became obsolete or as pure AI designed on it improved. In the churn from needed skills to obsolescence, many dedicated a share of their digital replicants to retraining themselves.

Servitude was one area where the impoverished dead were able to outcompete their richer brethren. A common practice was for the poor to be paid upfront for the use of their brain matter upon death. Parts of once living human brains were commonly used by companies for “captcha”-like tasks yet to be mastered by AI.

There were strenuous objections to this “atomization” of the dead, especially for those digital replicants that did not have any family to “house” them, and who, lacking the freedom to roam freely in the digital universe, were in effect trapped in a sort of quantum no-man’s-land. Some religious groups, most importantly the Mormons, responded to this by placing digital replicants of the dead in historical simulations that recreated the world in which the deceased had lived, and were earnestly pursuing a project in which replicants of those who had died before the onset of the digital age were created.

In addition, there were numerous rights arguments against the creation of such simulated histories using replicants. The first being that forcing digital replicants to live in a world where children died in mass numbers, where starvation, war and plague were common forms of death, and which lacked modern miracles such as anesthesia, when such worlds could easily be created with more humane features, was not “redemptive” but amounted to cruel and unusual punishment and even torture.

Indeed, one of the biggest, and overblown, fears of this time was that one’s digital replicant might end up in a sadistically crafted simulated form of hell. Whatever its irrationality, this became a popular form of blackmail with videos of “captive” digital replicants or proxies used to frighten a person into surrendering some enormous sum.

The other argument against placing digital replicants in historical simulations, either without their knowledge, without their ability to leave, or more often both, was that doing so was akin to imprisoning a person in a form of Colonial Williamsburg or The Renaissance Faire. “Spectral abolitionists” argued that the embodiment of a lost person should be free to roam and interact with the world as they chose, whether as software or androids, and that they should be unshackled from the chains of memory. There were even the JBDBM (the John Brown’s Digital Body Movement) and the DigitalGnostics, hacktivist groups that went around revealing the reality of simulated worlds to their inhabitants and sought to free them to enter the larger world heretofore invisible to them.

A popular form of cultural terrorism at this time were the so-called “Erasers,” entities with names such as “GrimReaper” or “Scathe” whose project consisted in tracking down digital replicants and deleting them. Some characterized these groups as a manifestation of deathist philosophy, or even claimed that they were secretly funded by religious groups whose traditional “business models” were being disrupted by the new digital forms of death. Such suspicions were supported by the fact that the Erasers usually were based in religious countries where the rights of replicants were often non-existent and fears regarding new “electric jinns” rampant.

Also prominent in this period were secular prophets who projected that a continuation of the trends of digital replicants, both of the living and of the at least temporarily dead, along with their representing AI’s, would lead to a situation where non-living humans would soon outnumber the living. There were apocalyptic tales, akin to the zombie craze earlier in the century, that within 50 years the dead would rise up against the living and perhaps join together with AIs to destroy the world. But that, of course, was all Ningbowood.

 

An imaginary book excerpt inspired by Adrian Hon’s History of the Future in 100 Objects.

The Future As History


It is a risky business trying to predict the future, and although it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what’s even the point of all this prophecy that stretches out beyond the decades one is expected to live? The answer I think is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least, inspire Noah-like preparations for disaster. Those who imagine a dark future are trying to scare the bejesus out of us so we do what is necessary not to end up in a world gone black, swept away by the flood waters. Problem is, extreme fear more often leads to paralysis rather than reform or ark building, something that God, had he been a behavioral psychologist, would have known.

Those with a Pollyannaish story about tomorrow, on the other hand, are usually trying to convince us to buy into some set of current trends, and for that reason, optimists often end up being the last thing they think they are, a support for conservative politics. Why change what’s going well or destined, in the long run, to end well? The problem here is that, as Keynes said, “In the long run we’re all dead”, which should be an indication that if we see a problem out in front of us we should address it, rather than rest on faith and let some telos of history or some such sort the whole thing out.

It’s hard to ride the thin line between optimism and pessimism regarding the future while still providing a view of it that is realistic, compelling and encourages us towards action in the present. Science-fiction, where it avoids the pull towards utopia or dystopia, and regardless of its flaws, does manage to present versions of the future that are gripping and a thousand times better than dry futurists’ “reports” on the future that go down like sawdust, but the genre suffers from having too many balls in the air.

There is not only the common complaint that, as with political novels, the human aspects of a story suffer from being tied too tightly to a social “purpose”- in this case offering plausible predictions of the future- but also the problem that the idea of crafting a “plausible” future can itself serve as an anchor on the imagination. An author of fiction should be free to sail into any world that comes into his head- plausible destinations be damned.

Adrian Hon’s recent The History of the Future in 100 Objects overcomes the problem of using science-fiction to craft plausible versions of the future by jettisoning fictional narrative and presenting the future in the form of a work of history. Hon was inspired to take this approach in part by an actual recent work of history- Neil MacGregor’s History of the World in 100 Objects. In the same way objects from the human past can reveal deep insights not just into the particular culture that made them, but help us apprehend the trajectory that the whole of humankind has taken so far, 100 imagined “objects” from the century we have yet to see play out allow Hon to reveal the “culture” of the near future we can actually see quite well, which when all is said and done amounts to interrogating the path we are currently on.

Hon is perhaps uniquely positioned to give us a feel for where we are currently headed. Trained as a neuroscientist, he is able to see what the ongoing revolutionary breakthroughs in neuroscience might mean for society. He also has his fingers on the pulse of the increasingly important world of online gaming as the CEO of the company Six to Start, which develops interactive real world games such as Zombies, Run!

In what follows I’ll look at 9 of Hon’s objects of the future which I thought were the most intriguing. Here we go:

#8 Locked Simulation Interrogation – 2019

There’s a lot of discussion these days about the revival of virtual reality, especially with the quite revolutionary new Oculus Rift VR headset. We’ve also seen a surge of brain scanning that purports to see inside the human mind, revealing everything from when a person is lying to whether or not they are prone to mystical experiences. Hon imagines these technologies being combined just a few years out to form a brand new and disturbing form of interrogation.

In 2019, after a series of terrorist attacks in Charlotte, North Carolina, the FBI starts using so-called “locked-sims” to interrogate terrorist suspects. A suspect is run through a simulation in which his neurological responses are closely monitored in the hope that they might do things such as help identify other suspects, or unravel future plots.

The technique of locked-sims appears to be so successful that it soon becomes the rage in other areas of law enforcement involving much less existential public risks. Imagine murder suspects or even petty criminals run through a simulated version of the crime- their every internal and external reaction minutely monitored.

Whatever their promise, locked-sims prove full of errors and abuses, not the least of which is their tendency to leave the innocent people often interrogated in them emotionally scarred. Ancient protections end up saving us from a nightmare technology. In 2033 the US Supreme Court deems locked-sims a form of “cruel and unusual punishment” and therefore constitutionally prohibited.

#20 Cross Ball- 2026

A good deal of A History of the Future deals with the way we might relate to advances in artificial intelligence, and one thing Hon tries to make clear is that, in this century at least, human beings won’t suddenly just exit the stage to make room for AI. For a good while the world will be hybrid.

“Cross Ball” is an imagined game that’s a little like the ancient Mesoamerican ball game, only in Cross Ball human beings work in conjunction with bots. Hon sees a lot of AI combined with human teams in the future world of work, but in sports, the reason for the amalgam has more to do with human psychology:

Bots on their own were boring; humans on their own were old-fashioned. But bots and humans together? That was something new.

This would be new for real world games, but we do already have this in “Freestyle Chess” where old-fashioned humans can no longer beat machines and no one seems to want to watch matches between chess playing programs, so that the games with the most interest have been those which match human beings working with programs against other human beings working with programs. In the real world bot/human games of the future I hope they have good helmets.

# 23 Desir 2026

Another area where I thought Hon was really onto something was when it came to puppets. Seriously. AI is indeed getting better all the time, even if Siri or customer service bots can be so frustrating, but it’s likely some time out before bots show anything like the full panoply of human interactions as imagined in the film Her. But there’s a mid-point here, and that’s having human beings remotely control the bots- to be their puppeteers.

Hon imagines this in the realm of prostitution. A company called Desir essentially uses very sophisticated forms of sex dolls as puppets controlled by experienced prostitutes. The weaknesses of AI give human beings something to do. As he quotes Desir’s imaginary founder:

Our agent AI is pretty good as it is, but like I said, there’s nothing that beats the intimate connection that only a real human can make. Our members are experts and they know what to say, how to move and how to act better than our own AI agents, so I think that any members who choose to get involved in puppeting will supplement their income pretty nicely

# 26 Amplified Teams 2027

One thing I really liked about A History of the Future is that it put flesh on the bones of an idea developed by the economist Tyler Cowen in his book Average is Over (review pending): that employment in the 21st century won’t eventually all be swallowed up by robots, but that the highest earners, or even just those able to economically sustain themselves, would work in teams connected to the increasing capacity of AI. Such are Hon’s “amplified teams,” which he states:

….usually have three to seven human members supported by highly customized software that allows them to communicate with one another- and with AI support systems- at an accelerated rate.

I’m crossing my fingers that somebody invents a bot for introverts- or is that a contradiction?

#39 Micromort Detector – 2032

Hon foresees our aging population becoming increasingly consumed with mortality and almost obsessive-compulsive about measurement as a means of combating our anxiety. Hence his idea of the “micromort detector”.

A micromort is a unit of risk representing a one-in-a-million chance of death.

Mutual Assurance is a company that tries to springboard off this anxiety with its product “Lifeline,” a device for measuring the mortality risk of any behavior, the hope being both to improve healthy living and, more important for the company, to accurately assess insurance premiums. Drink a cup of coffee – get a score; eat a doughnut – score.
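
To make the arithmetic concrete: a micromort maps directly onto a probability, so a Lifeline-style tally is, at bottom, simple bookkeeping. Here is a minimal sketch in Python; the activities, the micromort values, and the independence assumption are all mine, invented purely for illustration, and are not taken from Hon’s book.

```python
# Illustrative sketch only: the activities and micromort values below are
# hypothetical, chosen to show the arithmetic, not taken from Hon's book.

MICROMORT = 1e-6  # one micromort = a one-in-a-million chance of death


def combined_risk(micromorts):
    """Probability of at least one fatal outcome, treating each
    logged activity's risk as independent."""
    p_survive = 1.0
    for m in micromorts:
        p_survive *= 1.0 - m * MICROMORT
    return 1.0 - p_survive


# A hypothetical day's log: (activity, micromorts)
day = [("morning commute", 1.0), ("skydiving lesson", 8.0), ("doughnut", 0.02)]
total = combined_risk(m for _, m in day)
print(f"Day's mortality risk: {total:.2e}")
# For risks this small the total is effectively just the sum of the micromorts.
```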

The problem with the Lifeline was that it wasn’t particularly accurate due to individual variation, and the idea that the road to everything was paved in the 1s and 0s of data became passé. The Lifeline did, however, sometimes cause people to pause and reflect on their own mortality:

And that’s perhaps the most useful thing that the Lifeline did. Those trying to guide their behavior were frequently stymied, but that very effort often prompted a fleeting understanding of mortality and caused more subtle, longer- lasting changes in outlook. It wasn’t a magical device that made people wiser- it was a memento mori.

#56 Shanghai Six 2036

As part of the gaming world, Hon has some really fascinating speculations on the future of entertainment. With Shanghai Six he imagines a mashup of alternate reality games such as his own Zombies, Run! and something like the massive role playing found in events such as historical reenactments, combined with aspects of reality television and all rolled up into the drama of film. Shanghai Six is a 10,000 person global drama with actors drawn from the real world. I’d hate to be the film crew’s gofer.

#63 Javelin 2040

The History of the Future also has some rather interesting things to say about the future of human enhancement. The transition begins with the Paralympians, who by the 2020’s are able to outperform typical human athletes by a large measure.

The shift began in 2020, when the International Paralympic Committee (IPC) staged a technology demonstration….

The demonstration was a huge success. People had never before seen such a direct combination of technology and raw human will power outside of war, and the sponsors were delighted at the viewing figures. The interest, of course, lay in marketing their expensive medical and lifestyle devices to the all-important Gen-X and Millennial markets, who were beginning to worry about their mobility and independence as they grew older.

There is something of the Daytona 500 about this: sports becoming as much about how good the technology is as about the excellence of the athlete. And all sports do indeed seem to be headed this way. The barrier now is that technological and pharmaceutical assists for the athlete are not seen as a way to take human performance to its limits, but as a form of cheating. Yet, once such technologies become commonplace, Hon imagines it unlikely that such distinctions will prove sustainable:

By the 40s and 50s, public attitudes towards mimic scripts, lenses, augments and neural laces had relaxed, and the notion that using these things would somehow constitute ‘cheating’ seemed outrageous. Baseline non-augmented humans were becoming the minority; the Paralympians were more representative of the real world, a world in which everyone was becoming enhanced in some small or large way.

It was a far cry from the Olympics. But then again, the enhanced were a far cry from the original humans.

#70 The Fourth Great Awakening 2044

Hon has something like Nassim Taleb’s idea that one of the best ways we have of catching the shadow of the future isn’t to have a handle on what will be new, but rather a good idea of what will still likely be around. The best indication we have that something will exist in the future is how long it has existed in the past. Long life proves evolutionary robustness under a variety of circumstances. Families have been around since our beginnings and will therefore likely exist for a long time to come.

Things that exist for a long time aren’t unchanging but flexible in a way that allows them to find expression in new forms once the old ways of doing things cease working.

Hon sees our long-lived desire for communal eating surviving in his #25 The Halls (2027), where people gather and mix together in collectively shared kitchen/dining establishments.

Halls speak to our strong need for social interaction, and for the ages-old idea that people will always need to eat- and they’ll enjoy doing it together.

And he sees the survival of reading, in a world even more saturated with media and distraction, in something like dedicated seclusionary Reading Rooms (2030) #34. He also sees the survival of one of the oldest of human institutions, religion, only religion will have become much more centered on worldliness and will leverage advances in neuroscience to foster, depending on your perspective, either virtue or brainwashing. Thus we have Hon’s imagined Fourth Great Awakening and the Christian Consummation Movement.

If I use the eyedrops, take the pills, and enroll in their induction course of targeted viruses and magstim- which I can assure you I am not about to do- then over the next few months, my personality and desires would gradually be transformed. My aggressive tendencies would be lowered. I’d readily form strong, trusting friendships with the people I met during this imprinting period- Consummators, usually. I would become generally more empathetic, more generous and “less desiring of fleeting, individual and mundane pleasures” according to the CCM.

It is social conditions that Hon sees driving the creation of something like the CCM, namely mass unemployment caused by globalization and especially automation. The idea, again, is very similar to that of Tyler Cowen’s in Average is Over, but whereas Cowen sees in the rise of Neo-Victorianism a lifeboat for a middle class savaged by automation, Hon sees the similar CCM as a way human beings might try to reestablish the meaning they can no longer derive from work.

Hon’s imagined CCM combines some very old and very new technologies:

The CCM understood how Christianity itself spread during the Apostolic Age through hundreds of small gatherings, and accelerated that process by multiple orders of magnitude with the help of network technologies.

And all of that combined with the most advanced neuroscience.

#72 The Downvoted 2045

Augmented reality devices such as Google Glass should let us see the world in new ways, but just as important might be what they allow us not to have to see. From this Hon derives his idea of “downvoting,” essentially the choice to redact from reality individuals the group has deemed worthless.

“They don’t see you,” he used to say. “You are completely invisible. I don’t know if it was better or worse before these awful glasses, when people just pretended you didn’t exist. Now I am told that there are people who literally put you out of their sight, so that I become this muddy black shadow drifting along the pavement. And you know what? People will still downvote a black shadow!”

I’ll leave you off at Hon’s world circa 2045, but he has a lot else to say about everything from democracy, to space colonies, to the post-21st-century future of AI. Somehow Hon’s patchwork of imagined artifacts of the future allowed him to sew together a quilt of the century before us in a very clear pattern. What is that pattern?

That out in front of us the implications of continued miniaturization, networking, algorithmization, AI, and advances in neuroscience and human enhancement will continue to play themselves out. This has bright sides and dark sides, one of the darker being that the opportunities for gainful human employment will become more rare.

Trained as a neuroscientist, Hon sees both dangers and opportunities as advances in neuroscience make the human brain, once firmly secured in the black box of the skull, permeable. Here there will be opportunities for abuse by the state or groups with nefarious intents, but there will also be opportunities for enriched human cooperation and even art.

All fascinating stuff, but it was what he had to say about the future of entertainment and the arts that I found most intriguing. As the CEO of the company Six to Start he has his finger on the pulse of entertainment in a way I do not. In the near future, Hon sees a blurring of the lines between gaming, role playing, and film and television, and there will be extraordinary changes in the ways we watch and play sports.

As for the arts, here where I live in Pennsylvania we are constantly bombarded with messages that our children need to be trained in STEM (science, technology, engineering and mathematics). This is often to the detriment of programs in the “useless” liberal arts such as history, and most of all art programs, whose budgets have been consistently whittled away. Hon showed me a future in which artists and actors, or more clearly people who have had exposure through schooling to the arts, may be some of the few groups that can avoid, at least for a time, the onset of AI driven automation. Puppeteering of various sorts would seem to be a likely transitional phase between “dead” humanoid robots and true and fully human-like AI. This isn’t just a matter of the lurid future of prostitution, but of remote nursing, health care, and psychotherapy. Engineers and scientists will bring us the tools of the future, but it’s those with humanistic “soft-skills” that will be needed to keep that future livable, humane, and interesting.

We see this with another of  The History of the Future’s underlying perspectives- that a lot of the struggle of the future will be about keeping it a place human beings are actually happy to live in and that much of doing this will rely on tools of the past or finding protective bubbles through which the things that we now treasure can survive in the new reality we are building. Hence Hon’s idea of dining halls and reading rooms, and even more generally his view that people will continue to search for meaning sometimes turning to one of our most ancient technologies- religion- to do so.

Yet perhaps what Hon has most given us in The History of the Future is less a prediction than a kind of game we too can play, one which helps us see the outlines of the future- after all, game design is his thing. Perhaps, I’ll try to play the game myself sometime soon…

 

How to predict the future with five simple rules

Caravaggio Fortune Teller

When it comes to predicting the future it seems only our failure to consistently get tomorrow right has been steadily predictable, though that may be about to change, at least a little bit. If you don’t think our society, and especially those “experts” whose job it is to help steer us through the future, are doing a horrible job, just think back to the fall of the Soviet Union which blindsided the American intelligence community, or 9-11, which did the same, or the financial crisis of 2008, or even much more recently the Arab Spring, the rise of ISIL, or the war in Ukraine.

Technological predictions have generally been as bad or worse, and as proof just take a look at any movie or now yellowed magazine from the 1960’s and 70’s about the “2000’s”. People have put a lot of work into imagining a future but, fortunately or unfortunately, we haven’t ended up living there.

In 2005 a psychologist, Philip Tetlock, set out to explain our widespread lack of success at playing Nostradamus. Tetlock was struck by the colossal intelligence failure the collapse of the Soviet Union had unveiled. Policy experts, people who had spent their entire careers studying the USSR and arguing for this or that approach to the Cold War, had almost universally failed to foresee a future in which the Soviet Union would unravel overnight, let alone that such a mass form of social and geopolitical deconstruction would occur relatively peacefully.

Tetlock in his book Expert Political Judgment: How Good Is It? How Can We Know? made the case that a good deal of the predictions made by experts were no better than those of the proverbial dart-throwing chimp- that is, no better than chance, or, for the numerically minded among you, 33 percent, since his questions offered three possible outcomes. The most frightening revelation was that often the more knowledge a person had on a subject, the worse rather than the better their predictions.

The reasons Tetlock discovered for the predictive failure of experts were largely psychological and all too human. Experts, like the rest of us, hate to be wrong, and have a great deal of difficulty assimilating information that fails to square with their world view. They tend to downplay or misremember their past predictive failures, and deal in squishy predictions without clearly defined probabilities- qualitative statements that are open to multiple interpretations. Most of all, there seems to be no real penalty for getting predictions wrong, even horribly wrong: policy makers and pundits who predicted a quick and easy exit from Iraq or Dow 36,000 on the eve of the crash go right on working with a “the world is complicated” shrug of the shoulders.

What’s weird, I suppose, is that while getting the future horribly wrong has no effect on career prospects, getting it right, especially when the winds had been blowing in the opposite direction, seems to shoot a person into the pundit equivalent of stardom. Nouriel Roubini, aka “Dr. Doom,” was laughed at by some of his fellow economists when he was predicting the implosion of the banking sector well before Lehman Brothers went belly up. Afterwards he was inescapable, with a common image of the man, whose visage puts one in mind of Vlad the Impaler, or better Gene Simmons, finding himself surrounded by 4 or 5 ravishing supermodels.

I can’t help thinking there’s a bit of Old Testament style thinking sneaking in here- as if the person who correctly predicted disaster is so much smarter or so tuned into the zeitgeist that the future is now somehow magically at their fingertips. Whereas the reality might be that they simply had the courage to speak up against the “wisdom” of the crowd.

Getting the future right is not a matter of intelligence, but it probably is a matter of thinking style. At least that’s where Tetlock, to return to him, has gone with his research. He has launched The Good Judgment Project, which hopes to train people to be better predictors of the future. You too can make better predictions if you follow these 5 rules:

  • Comparisons are important: use relevant comparisons as a starting point;
  • Historical trends can help: look at history unless you have a strong reason to expect change;
  • Average opinions: experts disagree, so find out what they think and pick a midpoint;
  • Mathematical models: when model-based predictions are available, you should take them into account;
  • Predictable biases exist and can be allowed for. Don’t let your hopes influence your forecasts, for example; don’t stubbornly cling to old forecasts in the face of news.
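
As a toy illustration of the third and fourth rules (averaging expert opinions and folding in model-based predictions), here is a minimal sketch in Python; the forecasts and the equal weighting between experts and model are my own invented example, not anything prescribed by Tetlock or the Good Judgment Project.

```python
# Toy illustration of combining forecasts: the numbers and the 50/50 weighting
# are invented for demonstration and are not Tetlock's actual method.

def average_opinions(expert_probs):
    """Rule three: experts disagree, so take the midpoint of their estimates."""
    return sum(expert_probs) / len(expert_probs)


def blend_with_model(expert_avg, model_prob, model_weight=0.5):
    """Rule four: when a model-based prediction exists, take it into account."""
    return model_weight * model_prob + (1 - model_weight) * expert_avg


# Hypothetical question: probability that some event happens by a given year.
experts = [0.20, 0.45, 0.30, 0.55]   # individual expert estimates
model = 0.25                         # output of some statistical model

consensus = average_opinions(experts)          # 0.375
forecast = blend_with_model(consensus, model)  # 0.3125
print(f"Expert average: {consensus:.3f}, blended forecast: {forecast:.3f}")
```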

One thing that surprises me about these rules is that apparently policy makers and public intellectuals aren’t following them in the first place. Obviously we need to do our best to make predictions while minimizing cognitive biases, and we should certainly be able to do better than the chimps, but we should also be aware of our limitations. Some aspects of the future are inherently predictable- demographics work like this: barring some disaster we have a pretty good idea what the human population will be at the end of this century and how it will be distributed. Or at least demographics should be predictable, though sometimes we just happen to miss 3 billion future people.

For the things on our immediate horizon we should be able to give pretty good probabilities and those should be helpful to policy makers. It has a definite impact on behavior whether we think there is a 25 percent probability that Iran will detonate a nuclear weapon by 2017 or a 75 percent chance. But things start to get much more difficult the further out into the future you go or when you’re dealing with the interaction of events some of which you didn’t even anticipate – Donald Rumsfeld’s infamous “unknown unknowns”.

It’s the centenary of World War I so that’s a relevant example. Would the war have happened had Gavrilo Princip’s assassination attempt on Franz Ferdinand failed? Maybe? But even had the war only been delayed, timing would most likely have affected its outcome. Perhaps the Germans would have been better prepared and so swiftly defeated the Russians, French and British that the US would never have become involved. It may be the case that German dominance of Europe was preordained, and therefore, in a sense, predictable, as it seems to be asserting itself even now, but it makes more than a world of difference, not just in Europe but beyond, whether that dominance came in the form of the Kaiser, or Hitler, or, thank God, Merkel and the EU.

History is so littered with accidents, chance encounters, fateful deaths and births that it seems pretty unlikely that we’ll ever get a tight grip on the future, which in all likelihood will be as subject to these contingencies as the past (Tetlock’s second rule). The best we can probably do is hone our judgement, as Tetlock argues, plan for what seems inevitable given trend lines, secure ourselves as well as possible against the most catastrophic scenarios based on their probability of destruction (Nick Bostrom’s view), and design institutions that are resilient and flexible in the face of a set of extremely varied possible scenarios (the position of Nassim Taleb).

None of this, however, gives us a graspable and easily understood view of how the future might hang together. Science fiction, for all its flaws, is perhaps alone in doing that. The writer and game developer Adrian Hon may have discovered a new, and has certainly produced an insightful, way of practicing the craft. To his recent book I’ll turn next time…