The Roots of Rage

Perhaps the main problem with the case Pankaj Mishra makes in his Age of Anger is that it gives an outsized place to intellectuals and the ideas that inspire them, that is, to people like Mishra and works like his own books, and as a consequence fails to bring to light the material forces that are such ideas' true source.

It's one thing to be aware that today's neo-liberalism, and the current populist revolt against it, have roots stretching back to the Enlightenment and Rousseau's revolt against it, and to be made aware that there's a contradiction at the heart of the Enlightenment project that has yet to be resolved. It's quite another thing to puzzle out why even a likely doomed revolt against this project is taking place right now as opposed to a decade or even decades ago. To do that one needs to turn to insights from sociology and political economy, for if the crisis we are in is truly global, how is it so? Is it the same everywhere, or does it vary across regions?

The big trend that defines our age as much as any other is the growing littoralization of human populations and capital. In the developing world this means the creation of mega-cities. By 2050, 75 percent of humanity will be urbanized. India alone might have six cities with populations of over 10 million.

What's driving littoralization in the developing world? I won't deny that part of the mass migration to the cities can be explained by people seeking more opportunities for themselves and especially for their children. It's also the case that globalization has compelled regions to specialize in the face of cheap food and goods from elsewhere and thus reduced the opportunities for employment. Yet perhaps one of the biggest, and least discussed, reasons for littoralization in the developing world is that huge tracts of land are being bought, often by outside capitalists, to set up massive plantations, industrial farms and mines.

It's a process the urban sociologist Saskia Sassen describes in great detail in her book Expulsions: Brutality and Complexity in the Global Economy, where she writes:

A recent report from the Oakland Institute suggests that during 2009 alone, foreign investors acquired nearly 60 million hectares of land in Africa.

Further, Oxfam estimates that between 2008 and 2009, deals by foreign investors for agricultural land increased by 200 percent. (94-95)

I assume the spread of military-grade satellite imaging will only make these kinds of massive purchases easier, as companies and wealthy individuals become able to spot heretofore obscured investment opportunities in countries whose politicians can easily be bought, where the ability of the public to resist such purchases is minimal, and where developed-world governments no longer administer any oversight of such activities.

For developing-world states strong enough to constrain foreign capital these processes are often more internally than externally driven. Regardless, much of littoralization is driven by the expulsion of the poor as the owning classes use their political influence to chase greater returns on capital, often oblivious to the social consequences. In that sense it's little different from the capitalism we've had since that system's very beginnings, which, after all, began with the conquest of the New World, slavery, the dissolution of the monasteries, and the enclosure movement.

What makes this current iteration of capitalism's perennial features somewhat different is the role played by automation. I'll get to that in a moment, but first it's important to see how the same trend towards littoralization seen in the developing world is playing out much differently in advanced economies.

Whereas the developing world is seeing the mass movement of people to the cities, the developed world is primarily experiencing the movement of capital. Oddly, this has not meant that population has followed the wealth to the coasts: at the same time that capital is becoming concentrated in a few major cities, those same cities are actually declining in their overall share of the population.

The biggest reason for this discrepancy appears to be the increasing price of real estate on the coasts. Here's what the US would look like if it were mapped by land value rather than area:

US land area by wealth


As in the developing world, much of the change in land values appears to be driven by investments by capital not located in the city, and in many instances located abroad.

In the developed world littoralization has been almost all about capital. Though an increasing amount of wealth is becoming located in a few great cities, structural reasons are preventing people from being able to move there. Foreign money, much of it of nefarious origins, has been pouring into global cities such as New York and London, driving up the cost of rent, let alone property ownership. Often such properties are left empty while, as Tim Wu has pointed out, inflated property values have turned the most valuable real estate into something resembling ghost towns.

This is a world that in a strange way was anticipated by William Gibson in his novel The Peripheral, where Gibson leveraged his knowledge of shady Russian real estate deals in London to imagine a future in which the rich actively interfere in the past of an Appalachian society in a state of collapse.

The evidence I have for this is merely anecdotal, but many of the Dominicans newly arrived in small Pennsylvania cities such as Bethlehem and Lancaster are recent refugees from the skyrocketing rents of New York. If this observation is correct, ethnic communities are being driven from large cities where wealth is increasing to interior regions with declining job prospects, regions which have not experienced mass immigration since the 1920s. In other words, we've set the stage for the rise of political nativism.

I said automation plays a role here that might make our capitalist era distinct from prior ones. The developed world has witnessed the hollowing out of the interior through automation before, when farm machinery reduced the number of farmers required from 64 percent of the population in 1850, to around 15 percent in 1950, to just two percent today. The difference is that the decline of employment in agriculture occurred at the same time manufacturing employment was increasing, and this manufacturing was much less concentrated, supporting a plethora of small and mid-sized cities in the nation's interior, and much less dependent on high skills, than the capitalism built around the global city and high-end services we have today.

Automation in manufacturing has been decimating employment in that sector even after it was initially pummeled by globalization. Indeed, the Washington Post has charted how the districts that went for Trump in the last election map almost perfectly onto those where the per capita use of robots has increased.

Again speaking merely anecdotally, a number of the immigrants I know are employed in one of Amazon's "fulfillment centers" (warehouses) in Pennsylvania. Such warehouses are among the most hyper-automated and AI-directed businesses currently running at scale. It isn't hard to see why the native middle class feels it is being crushed in a vise, and it's been far too easy to mobilize human-against-human hate and to deny, as Steven Mnuchin, Trump's Treasury Secretary, recently did, that automation is even a problem.

These conditions are not limited to the US but likely played a role in the Brexit vote in the UK, and are even more pronounced in France, where a declining industrial interior is the source of the far-right Marine Le Pen's base of support.

The decline of industrial employment has meant that employees have been pushed into much less remunerative (on account of being much less unionized) services, that is, if the dislocated are employed at all. This relocation to non-productive services might be one of the reasons why, despite the thrust of technology, overall labor productivity remains so anemic.

Yet should the AI revolution live up to the hype, we should witness a flood of robots into services, a move that will place yet more downward pressure on wages in the developed world.

The situation for developing economies is even worse. If the economist Dani Rodrik is right, developing economies are already suffering what he calls "premature de-industrialization." The widespread application of robots threatens to make manufacturing in developed countries, sans workers, as cheap as products made by cheap labor in the developing world. Countries that have yet to industrialize will be barred from the development path followed by all societies since the industrial revolution, though perhaps labor in services will remain so cheap there that service-sector automation does not take hold. My fear there is that instead of humans being replaced by robots, central direction via directing and monitoring "apps" will turn human beings into something all too robot-like.

A world where employment opportunities are decreasing everywhere, but where population continues to grow in places where wealth has never accumulated, and now cannot, means a world of increased illegal migration and refugee flows, the very forces that enabled Brexit, propelled Trump to the White House, and might just leave Le Pen in charge of France.

The apparent victory of the Kushner over the Bannon faction in the Trump White House luckily saves us from the most vicious ways of responding to these trends. It also means that one of the largest forces behind these dislocations, namely the moguls (like Kushner himself) who run the international real estate market, are now in charge of the country. My guess is that their "nationalism" will consist in gaining a level playing field for wealthy US institutions and individuals to invest abroad in the same way foreign players now do here. That, and ensuring that US investors no longer have their hands tied by ethical standards that investors from countries like China do not face, so that weak countries are even further prevented from erecting barriers against capital.

Still, should the Bannon faction really have fallen apart, it will present an opportunity for the left to address these problems while avoiding the alt-right's hyper-nationalistic solutions. Progressive solutions (at least in developed economies) might entail providing affordable housing in our cities, preventing shadow money from buying up real estate, unionizing services, and recognizing and offsetting the cost to workers of automation. A universal basic income (UBI) should be part of that mix.

The situation is much more difficult for developing countries, which will need to find their own, quite country-specific solutions. Advanced countries will need to help them as much as they can (including helping them restore barriers against ravenous capital) to manage their way into new forms of society, for the model of development that has run for nearly two centuries now appears to be irrevocably broken.

The Future of Money is Liquid Robots


Klimt Midas

Over the last several weeks global financial markets have experienced some truly stomach-churning volatility. Though this seems to have stabilized or even reversed for the moment as players deem assets oversold, the turmoil has revealed in a much clearer way what many have suspected all along: namely, that the idea that the Federal Reserve and other central banks could heal the wounds caused by the 2008 financial crisis by forcing banks to lend money was badly mistaken.

Central banks were perhaps always the wrong knights in shining armor to pin our hopes on, for at precisely the moment they were called upon to fill a vacuum left by a failed and impotent political system, the very instrument under their control, that is, money, was changing beyond all recognition, having been absorbed into the (ongoing if soon to slow) revolutions in communications and artificial intelligence.

The money of the near future will likely be very different from anything in human history. Already we are witnessing wealth become as formless as electrons, and the currencies of sovereign states matter less to an individual's fortune than access to, and the valuation of, artificial intelligence able to surf the liquidity of data and its chaotic churn.

As a reminder, authorities at the onset of the 2008 crisis were facing a Great Depression-level collapse in the demand for goods and services brought about by the bursting of the credit bubble. To stem the crisis, authorities largely surrendered to the demands for bailouts by the "masters of the universe" who had become their most powerful base of support. Yet for political and ideological reasons politicians found themselves unwilling or unable to provide similar levels of support for the lost spending power of average consumers, or to address the crisis of unemployment fiscally; that is, politicians refused to embark on deliberate, sufficient government spending on infrastructure and the like to fill the role of the vacated private sector.

The response authorities hit upon instead, and that would spread from the United States to all of the world's major economies, was to suppress interest rates in order to encourage lending. Part of this was a deliberate effort to re-inflate asset prices that had collapsed during the crisis. It was hoped that with stock markets restored to their highs the so-called wealth effect would encourage consumers to return to emptying their pocketbooks, restoring the economy to a state of normalcy.

It's going on eight years since the onset of the financial crisis, and though the US economy in terms of the unemployment rate and GDP has recovered somewhat from its lows, the recovery has been slow and achieved only under the most unusual of financial conditions: money lent out at an interest rate near zero. Even the small recent move by the Federal Reserve away from zero has been enough to throw the rest of the world's financial markets into a tailspin.

While the US has taken a small step away from zero interest rates, a country like Japan has done the opposite and the unprecedented: it has set rates below zero. To understand how bizarre this is, banks in Japan now charge savers to hold their money. Prominent economists have argued that the US would benefit from negative rates as well, and the Fed has not denied such a possibility should the fragile American recovery stall.

There are plenty of reasons why the kinds of growth that might have been expected from lending money out for free have failed to materialize. One reason I haven't heard much discussed is that the world's central banks are acting under a set of assumptions about what money is that no longer holds; over the last few decades the very nature of money has fundamentally changed, in ways that make the zero or lower interest rates set by central banks of decreasing relevance.

That change started quite some time ago with the move away from money backed by gold to fiat currencies. Those gold bugs who long to return to the era of Bretton Woods understand the current crisis mostly in terms of the distortions caused by countries having abandoned the link between money and anything "real", that is, precious metals and especially gold, back in the 1970s. Indeed, it was at this time that money started its transition from a means of exchange to a form of pure information.

That information is a form of bet. The value of the dollars, euros, yen or yuan in your pocket is a wager by those who trade in such things on the future economic prospects and fiscal responsibility of the power that issued the currency. That is, nation-states no longer really control the value of their currency; the money traders who operate the largest market on the planet, which in reality is nothing but bits representing the world's currencies, are the ones truly running the show.

We had to wait for a more recent period for this move to money in the form of bits to change the existential character of money itself. Both the greatest virtue of money in the form of coins or cash and its greatest danger is its lack of memory. It is a virtue in the sense that money is blind to tribal ties and thus allows individuals to free themselves from dependence upon the narrow circle of those whom they personally know. It is a danger as a consequence of this same amnesia, for a dollar doesn't care how it was made, and human beings, being the types of creatures they are, will purchase all sorts of horrific things.

At first blush it would seem that the libertarian anarchism behind a digital currency like Bitcoin promises to deepen this ability of money to forget. However, governments along with major financial institutions are interested in Bitcoin-like currencies precisely because they promise to rob cash of this natural amnesia and serve as the ultimate weapon against the economic underworld. That is, rather than using Bitcoin-like technologies to hide transactions, they could be used to ensure that every transaction was linked, and therefore traceable, to the actual parties involved.

Though some economists fear that the current regime of loose money and the financial repression of savers is driving people away from traditional forms of money to digital alternatives, others see digital currency as the ultimate tool, something that would allow central banks to more easily do what they have so far proven spectacularly incapable of doing: namely, to spur the spending rather than the hoarding of cash.

Even with interest rates set to zero or below, a person can at least avoid losing their money by putting it in a safe. Digital currency, however, could be made to disappear if it wasn't spent or invested by a certain date. Talk about power! Which is why digital currency will not long remain in the hands of libertarians and anarchists.

The loss of the egalitarian characteristics of cash will likely further entrench already soaring inequality. The wealth of many of us is already leveraged by credit ratings, preferred-customer privileges and the like, whereas others among us are charged a premium for consumption in the form of higher interest rates, rent instead of ownership, and the need to supplement income through government assistance and coupons. In the future all these features are likely to be woven into our digital currency itself. A dollar in my pocket will mean a very different thing from a dollar in yours or anyone else's.

With the increased use of biometric technologies, money itself might disappear into the person and become, as David Birch has suggested, synonymous with identity itself. The value of such personalized forms of currency, which is really just a measure of individual power, will be in a state of constant flux. With everyone linked to some form of artificial intelligence, prices will be subject to constant, rarely visible negotiation between bots.

There will be huge inequality in the quality and capability of these bots. While those of the wealthy, or those used by institutions, will roam the world for short-lived investments and unleash universal volatility, those of the poor will shop for the best deals at big-box stores and vainly defend their owners against the manipulation of ad bots that prod them into self-destructive consumption.

Depending on how well the bots we use on our own behalf do against the bots used to cajole us into spending, the age of currency as liquid robot money could be extremely deflationary, but it would at the same time be more efficient and thus better for the planet.

One could imagine a much different use for artificial intelligence, something like the AI used to run the economies found in Iain Banks's novels. It doesn't look likely. Rather, to quote Jack Weatherford from his excellent The History of Money, which still holds up nearly two decades after it was written:

In the global economy that is still emerging, the power of money and the institutions built on it will supersede that of any nation, combination of nations, or international organization now in existence. Propelled and protected by the power of electronic technology, a new global elite is emerging- an elite without loyalty to any particular country. But history has already shown that the people who make monetary revolutions are not always the ones who benefit from them in the end. The current electronic revolution in money promises to increase even more the role of money in our public and private lives, surpassing kinship, religion, occupation and citizenship as the defining element of social life. We stand now at the dawn of the Age of Money. (268)


The debate between the economists and the technologists: who wins?

Human bat vs robot gangster

For a while now robots have been back in the news with a vengeance, and almost on cue they seem to have revived many of the nightmares we might have thought had been locked up in the attic of the mind with all sorts of other stuff from the 1980s, which it was hoped we would never need again.

Big fears should probably only be tackled one at a time, so let's leave aside for today the question of whether robots are likely to kill us, and focus on what should be an easier and less frightening nut to crack; namely, whether we are in the process of automating our way into a state of permanent, systemic unemployment.

Alas, even this seemingly less fraught question is no less difficult to answer, for like everything else the issue seems to have given rise to two distinct sides, neither of which has a clear monopoly on the truth. Unlike elsewhere, however, these two sides in the debate over "technological unemployment" usually split less on ideological grounds than on the basis of professional expertise. That is, those who dismiss the argument that advances in artificial intelligence and robotics have already displaced, or are about to displace, the types of work now done by humans to the extent that we face a crisis of permanent underemployment and unemployment the likes of which has never been seen before tend to be economists. How such an optimistic bunch came to be known as dismal scientists is beyond me; note how they are also on the optimistic side of the debate with environmentalists.

Economists are among the first to remind us that we've seen fears of looming machine-induced unemployment before, whether those of Ned Ludd and his followers in the 19th century, or as close to us as the 1960s. The destruction of jobs has, in the past at least, been achieved through the kinds of transformation that created brand new forms of employment. In 1915 nearly 40 percent of Americans were agricultural laborers of some sort; now that number hovers around 2 percent. These farmers weren't replaced by "robots", but they certainly were replaced by machines.

Still, we certainly don't have a 40 percent unemployment rate. Rather, as the number of farm-laborer positions declined they were replaced by jobs that didn't even exist in 1915. The place these farmers have not gone, though in 1915 it would have been the obvious destination, is manufacturing, for in that sector something very similar to the hollowing out of employment in agriculture has taken place, with the percentage of laborers in manufacturing declining from 25 percent in 1915 to around 9 percent today. Here the workers really have been replaced by robots, though job prospects on the shop floor have declined just as much because the jobs have been globalized. Again, even at the height of the recent financial crisis we haven't seen 25 percent unemployment, at least not in the US.

Economists therefore continue to feel vindicated by history: any time machines have managed to supplant human labor we've been able to invent whole new sectors of employment where the displaced or their children have been able to find work. It seems we've got nothing to fear from the "rise of the robots." Or do we?

Again setting aside the possibility that our mechanical servants will go all R.U.R. on us, anyone who takes seriously Ray Kurzweil's timeline, by the 2020s computers will match human intelligence and by 2045 exceed it a billionfold, has to conclude that most jobs as we know them are toast. The problem here, and one that economists mostly fail to take into account, is that past technological revolutions ended up replacing human brawn and allowing workers to upscale into cognitive tasks. Human workers had somewhere to go. But a machine that did the same for tasks requiring intelligence, and that was indeed billions of times smarter than us, would make human workers about as essential to the functioning of a company as the Leaper ornament is to the functioning of a Jaguar.

Then again, perhaps we shouldn't take Kurzweil's timeline all that seriously in the first place. Skepticism would seem to be in order because the Moore's Law-based exponential curve at the heart of Kurzweil's predictions appears to have started to go all sigmoidal on us. That was the case made by John Markoff recently over at The Edge. In an interview about the future of Silicon Valley he said:

All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn’t just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That’s a profound moment.

Kurzweil argues that you have interlocked curves, so even after silicon tops out there’s going to be something else. Maybe he’s right, but right now that’s not what’s going on, so it unwinds a lot of the arguments about the future of computing and the impact of computing on society. If we are at a plateau, a lot of these things that we expect, and what’s become the ideology of Silicon Valley, doesn’t happen. It doesn’t happen the way we think it does. I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account.
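To make the "sigmoidal" point concrete, here is a toy sketch of my own (the numbers are illustrative, not Markoff's or Kurzweil's): an exponential curve and a logistic, S-shaped curve are nearly indistinguishable early on, which is why a plateau only becomes obvious after it has arrived.

```python
import math

def exponential(t, rate=0.5):
    """Unbounded exponential growth, the Kurzweil-style assumption."""
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    """Same early growth rate, but saturating at a physical ceiling."""
    return ceiling / (1 + (ceiling - 1) * math.exp(-rate * t))

# Early on the two curves track each other; later they diverge wildly.
for t in [0, 5, 10, 20]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

At t = 0 and t = 5 the two values are close; by t = 20 the exponential has run off to tens of thousands while the logistic has flattened near its ceiling. Whether transistor costs are on the first curve or the second is exactly what Markoff and Kurzweil disagree about.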

Although Markoff admits there has been great progress in pattern recognition, there has been nothing similar for the kinds of routine physical tasks found in much low-skilled, mobile work. As evidenced by the recent DARPA challenge, if you want a job safe from robots, choose something for a living that requires mobility and the performance of a variety of tasks: plumber, home health aide, etc.

Markoff also sees job safety at the higher end of the pay scale, in cognitive tasks computers seem far from being able to perform:

We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

The upshot of all this is that there’s less to be feared from technological unemployment than many think:

There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster, and things are just running amok. In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

The problem, I think, with the case against technological unemployment made by many economists and by someone like Markoff is that they seem to be taking on a rather weak and caricatured version of the argument. That, at least, is the conclusion one comes to after taking into account what is perhaps the most reasoned and meticulous book to try to convince us that the boogeyman of robots stealing our jobs might have been all in our imagination before, but is real indeed this time.

I won't so much review the book I am referencing, Martin Ford's Rise of the Robots: Technology and the Threat of a Jobless Future, as lay out how he responds to the case from economics that we've been here before and have nothing to worry about, and to the observation of Markoff that because Moore's Law has hit a wall (has it?), we need no longer worry so much about the transformative implications of embedding intelligence in silicon.

I'll take the second one first. As for the idea that the end of Moore's Law will derail tech innovation, Ford makes a pretty good case that:

Even if the advance of computer hardware capability were to plateau, there would be a whole range of paths along which progress could continue. (71)

Continued progress in software, and especially in new types of (especially parallel) computer architecture, can be mined long after Moore's Law has reached its apogee. Cloud computing should also mean that silicon needn't compete with neurons at scale. You don't have to fit the computational capacity of a human individual into a machine of roughly similar size; you could instead remotely tap into a much, much larger and more energy-intensive supercomputer that gives you human-level capacities. Ultimate density has become less important.

What this means is that we should continue to see progress (and perhaps very rapid progress) in robotics and artificially intelligent agents. Given that the labor market is often thought of as comprising three sectors, agriculture, manufacturing and services, and that the first two are already largely mechanized and automated, the place where the next wave will fall hardest is the service sector, and the question becomes: will there be any place left for workers to go?

Ford makes a pretty good case that eventually we should be able to automate almost anything human beings do, for all automation means is breaking a task down into a limited number of steps. And the examples he comes up with where we have already shown this is possible are both surprising and sometimes scary.

Perhaps 70 percent of financial trading on Wall Street is now done by trading algorithms (which don't seem any less inclined to panic). Algorithms now perform legal research, compose orchestral scores, and independently discover scientific theories. And those are the fancy robots. Most are like ATMs, where it is the customer who now does part of the labor involved in some task. Ford thinks the fast food industry is ripe for innovation in this way, with the customer designing their own food which some system of small robots then "builds." Think of your own job. If you can describe it to someone in a discrete number of steps, Ford thinks the robots are coming for you.

I was thankful that, though himself a technologist, Ford placed technological unemployment in a broader context, seeing it as part and parcel of trends since 1970 such as a greater share of GDP moving from labor to capital, soaring inequality, stagnant wages for middle-class workers, the decline of unions, and globalization. His solution to these problems is a guaranteed basic income, for which he makes a humane and non-ideological argument, reminding those on the right who might find the idea anathema that conservative heavyweights such as Milton Friedman have argued in its favor.

The problem from my vantage point is not that Ford has failed to make a good case against those on the economists' side of the issue who would accuse him of committing the Luddite fallacy; it's that his case is perhaps both premature and not radical enough. It is premature in the sense that while all the other trends regarding rising inequality, the decline of unions, etc. are readily apparent in the statistics, technological unemployment is not.

Perhaps, then, technological unemployment is only a small part of a much larger trend pushing us in the direction of the entrenchment and expansion of inequality and away from the type of middle-class society established in the last century. The tech culture of Silicon Valley and the companies it has built are here a little like Burning Man: an example of late-capitalist culture at its seemingly most radical and imaginative that temporarily escapes, rather than creates, a genuinely alternative and autonomous political and social space.

Perhaps the types of technological transformation already here and looming that Ford lays out could truly serve as the basis for a new form of political and economic order, an alternative to the inegalitarian turn, but he doesn’t explore them. Nor does he discuss the already present dark alternative to the kind of socialism through AI we find in the “Minds” of Iain Banks: namely, the surveillance capitalism we have allowed to be built around us, which now stands as an obstacle to our preserving our humanity and keeping ourselves from becoming robots.

William Gibson Groks the Future: The Peripheral


It’s hard to get your head around the idea of a humble prophet. Picturing Jeremiah screaming to the Israelites that the wrath of God is upon them and then adding “at least I think so, but I could be wrong…”, or some utopian claiming the millennium is near but then following it up with “then again, this is just one man’s opinion…”, would be the best kind of ridiculous- so out of character as to be both shocking and refreshing.

William Gibson is a humble prophet.

In part this stems from his understanding of what science fiction is for- not to predict the future, but to understand the present, with any right calls about things yet to happen likely just lucky guesses. Over the weekend I finished William Gibson’s novel The Peripheral, and I will take the humble man at his word, as in: “The future is already here- it’s just not very evenly distributed.” As a reader I knew he wasn’t trying to make any definitive calls about the shape of tomorrow; he was trying to tell me how he understands the state of our world right now, including the parts of it that might be important in the future. So let me try to reverse engineer that- to excavate the picture of our present from the ruins of the world of tomorrow Gibson so brilliantly gave us in his gripping novel.

The Peripheral is a time-travel story, but a very peculiar one. In his imagined world we have gained the ability not to travel between past, present and future but to exchange information between different versions of the multiverse. You can’t reach into your own past, but you can reach into the past of an alternate universe that thereafter branches off from the particular version of the multiverse you inhabit. It’s a past that looks like your own history but isn’t.

The novel itself is the story of one of these encounters between “past” and “future.” The past in question is a world that is actually our imagined near future, somewhere in the American South, where the novel’s protagonist, Flynn, her brother Burton, and his mostly veteran friends eke out their existence. (Even if I didn’t have daughters I would probably love Gibson’s use of strong female characters, but having two, I love it even more.) It’s a world that rang very true to me because it was a sort of dystopian extrapolation of the world where I both grew up and live now: a rural county where the economy is largely composed of “Hefty Mart” and people building drugs out of their homes.

The farther future in the story is the world of London decades after a wrenching crisis known as the “jackpot”, much of whose devastation was brought about by global warming that went unchecked and resulted in the loss of billions of human lives and even greater destruction for the other species on earth. It’s a world of endemic inequality, celebrity culture and sycophants. And the major character from this world, Wilf Netherton, would have ended his days as a mere courtier to the wealthy had it not been for his confrontation with an alternate past.

So to start, there are a few observations about the present we can draw out from the novel: the hollowing out of rural economies dominated by box stores, which I see all around me, and the prevalence of meth labs as a keystone of an economy now held up only by the desperation of its people. Ditto.

The other present Gibson is giving us some insight into is London, where Russian oligarchs established a kind of second Moscow after the breakup of the Soviet Union. That’s a world that may fade now with the collapse of the Russian ruble, but the broader trend will likely remain in place- corrupt elites who have made their millions or billions by pilfering their home countries making their homes in, and ultimately shaping the fate of, the world’s greatest cities.

Both the near and far futures in Gibson’s novel are horribly corrupt. Local, state and even national politicians can not only be bought in Flynn’s America; their very job seems to be to put themselves up for sale. The London of the farther future is corrupt to the bone as well. Indeed, it’s hard to say that government exists at all there except as a conduit for corruption. The detective Ainsley Lowbeer, another major character in the novel who plays the role of the law in London, seems not even to be a private contractor, but someone pursuing justice on her own dime. We may not have this level of corruption today, but I have to admit it didn’t seem all that futuristic.

Inequality (both of wealth and power, with little seeming distinction between the two) also unites these two worlds and our own. It’s an inequality that has an effect on privacy, in that only those who have political influence have it. The novel hinges on Flynn being the sole (innocent) witness to a murder. That there is no tape of the crime leaves her incredulous, and, disturbingly enough, left me incredulous as well, until Lowbeer explains it to Flynn this way:

“Yours is a relatively evolved culture of mass surveillance,” Lowbeer said. “Ours, much more so. Mr Zubov’s house here, internally at least, is a rare exception. Not so much a matter of great expense as one of great influence.”

“What does that mean?” (Flynn)

“A matter of whom one knows,” said Lowbeer, “and of what they consider knowing you to be worth.” (223)

2014 saw the breaking open of the shell hiding the contours of the surveillance state we have allowed to be built around us in the wake of 9/11 (though how we didn’t realize this before Edward Snowden is beyond me). If I were a journalist looking for a story, it would be some version of the surveillance-corruption complex described by Gibson’s detective Lowbeer. That is, I would look for ways in which the blindness of the all-seeing state (or even just the overwhelming surveillance powers of companies) was bought or gained by leveraging influence, or where its concentrated gaze was purchased for use as a weapon against rivals. In a world where one’s personal information can be ripped away with simple hacks, or damaging correlations about anyone can be conjured out of thin air, no one is really safe. It is merely an extrapolation from human nature that the asymmetries of power and wealth will ultimately decide who has privacy and who does not. Sadly, again, not all that futuristic.

In both the near and far futures of The Peripheral drones are ubiquitous. Flynn’s brother Burton is a former haptic drone pilot in the US military, and he and his buddies have surrounded themselves with all sorts of drones. In the far future drones are even more widespread and much smaller. Indeed, Flynn witnesses the aforementioned murder while standing in for Burton as a kind of drone-piloting flyswatter, keeping paparazzi drone-flies away from the soon-to-be-killed celebrity Aelita West.

That Flynn ended up a paparazzi flyswatter in an alternate future she thinks is a video game began in the most human of ways- Netherton trying to impress his girlfriend, Daedra West, Aelita’s sister. By far the coolest future-tech element of the book builds off of this, when Flynn goes from being a drone pilot to being the “soul” of a peripheral in order to find Aelita’s murderer.

Peripherals, if I understand them, are quasi-biological forms of puppets. They can act intelligently on their own, but nowhere near with the nuance and complexity they have when a human being is directly controlling them through a brain-peripheral interface. Flynn becomes embodied in an alternative future by controlling the body of a peripheral while herself remaining in an alternative past. Does that leave your head spinning? Don’t worry, Gibson is such a genius that in the novel itself it seems completely natural.

So Gibson is warning us about environmental destruction, inequality, and corruption, and trying to imagine a world of ubiquitous drones and surveillance. All essential stuff for us to pay attention to, and for which The Peripheral provides a kind of frame that might serve as protection against our blindly continuing to head in directions we would rather not.

Yet the most important commentary on the present I gleaned from Gibson’s novel wasn’t any of these things, but what it said about a world where the distinction between the virtual and the real has disappeared, where everything has become a sort of video game.

In the novel, what this results in is a sort of digital imperialism and cruelty. Those in Gibson’s far future derisively call the alternative pasts they interfere in “stubs,” though these are full worlds as much as their own, with people in them who are just as real as us.

As Lowbeer tells Flynn:

Some person or persons unknown have since attempted to have you murdered, in your native continuum, presumably because they know you to be a witness. Shockingly, in my view, I am told that arranging your death would in no way constitute a crime here, as you are, according to current legal opinion, not considered to be real. (200)

The situation is actually much worse than that. As the character Ash explains to Netherton:

There were, for instance, Ash said, continua enthusiasts who’d been at it for several years longer than Lev, some of whom had conducted deliberate experiments on multiple continua, testing them sometimes to destruction, insofar as their human populations were concerned. One of these early enthusiasts, in Berlin, known to the community only as “Vespasian,” was a weapons fetishist, famously sadistic in his treatment of the inhabitants of his continua, whom he set against one another in grinding, interminable, essentially pointless combat, harvesting the weaponry evolved, though some too specialized to be of use outside whatever baroque scenario had produced it. (352)

Some may think this indicates Gibson believes we might ourselves be living in a Matrix-style simulation. In fact I think he’s actually trying to say something about the way the world, beyond dispute, works right now, though we haven’t, perhaps, seen it all in one frame.

Our ability to use digital technology to interact globally is extremely dangerous unless we recognize that there are often real human beings behind the pixels. This is a problem for people who are engaged in military action, such as drone pilots, yes, but it goes well beyond that.

Take financial markets. Some of what Gibson is critiquing is the kind of algorithmic high-speed trading we’ve seen in recent years, and that played a role in the onset of the financial crisis. Those in his far future playing with past continua are doing so in part to game the financial systems there, which they can do not because they have a record of what financial markets in such continua will do, but because their computers are so much faster than those of the “past.” It’s a kind of AI neo-colonialism, itself a fascinating idea to follow up on, but I think the deeper moral lesson of The Peripheral for our own time lies in the fact that such actions, whether destabilizing the economies of continua or throwing them into wars as a sort of weapons-development simulation, are done with impunity because the people in continua are considered nothing but points of data.

Today, with the click of a button, those who hold or manage large pools of wealth can ruin the lives of people on the other side of the globe. Criminals can essentially mug a million people with a keystroke. People can watch videos of strangers’ children and other people’s loved ones being raped and murdered as if they were playing a game in hell. I could go on, but shouldn’t have to.

One of the key ways, perhaps the key way, we might keep technology from facilitating this hell, from turning us into cold, heartless computers ourselves, is to remember that there are real flesh-and-blood human beings on the other side of what we do. We should be using our technology to find them and help them, or at least not to hurt them, rather than target them, or flip their entire world upside down without any recognition of their human reality because it somehow benefits us. Much of the same technology that allows us to treat other human beings as bits, thankfully, gives us tools for doing the opposite as well, and unless we start using our technology in this more positive and humane way we really will end up in hell.

Gibson will have grokked the future and not just the present if we fail to address these problems he has helped us (or me at least) to see anew. For if we fail to overcome them, it will mean that we have continued forward into a very black continuum of the multiverse, and turned Gibson into a dark prophet, though he had disclaimed the title.


2040’s America will be like 1840’s Britain, with robots?


Looked at in a certain light, Adrian Hon’s History of the Future in 100 Objects can be seen as giving us a window into a fictionalized version of an intermediate technological stage we may be entering: the period when the gains in artificial intelligence are clearly happening, but have yet to completely replace human intelligence. The question of whether AI ever will actually replace us is not of interest to me here. It certainly won’t happen tomorrow, and technological prediction beyond a certain limited horizon is a fool’s game.

Nevertheless, some features of the kind of hybrid stage we have entered are clearly apparent. Hon built an entire imagined world around them, with “amplified-teams” (AI working side by side with groups of humans) as one of the major elements of 21st century work, sports, and much else besides.

The economist Tyler Cowen perhaps did Hon one better, for he based his very similar version of the future not only on things that are happening right now, but also provided insight into what we should do as job holders and breadwinners in light of the rise of ubiquitous, if less than human-level, artificial intelligence. One only wishes that his vision had room for more politics, for if Cowen is right, then absent our taking collective responsibility for the type of future we want to live in, 2040’s America might look like the Britain found in Dickens, only we’ll be surrounded by robots.

Cowen may seem a strange duck to take up the techno-optimism mantle, but he did so with gusto in his recent book Average is Over. The book is in essence a sequel to Cowen’s earlier bestseller The Great Stagnation, in which he argued that developed economies, including the United States, had entered a period of secular stagnation beginning in the 1970’s. The reason for this stagnation was that advanced economies had essentially picked all the “low hanging fruit” of the industrial revolution.

Arguing that we are in a period of technological stagnation at first seems strange, but it rings true when I reflect for a moment on facts such as these: we fly no faster than would have been common for my grandparents in the 1960’s; the kitchen in my family photos from the Carter days looks surprisingly like the kitchen I have right now, minus the paneling; and, saddest of all from the point of view of someone brought up on Star Trek, Star Wars and Star Blazers, with a comforter sporting Viking 2 and Pioneer, not only have we failed to send human visitors to Mars or beyond, we haven’t even been back to the moon. Hell, we don’t even have any human beings beyond low-earth orbit.

Of course, it would be silly to argue there has been no technological progress since Nixon. Information, communication and computer technology have progressed at an incredible speed, remaking much of the world in their wake, and have now seemingly been joined by revolutions in biotechnology and renewable energy.

And yet, despite how revolutionary these technologies have been, they have not been able to do the heavy lifting of prior forms of industrialization due to the simple fact that they haven’t been as qualitatively transformative as the industrial revolution. If I had a different job I could function just fine without the internet, and my life would be different only at the margins. Set the technological clock by which I live back to the days preceding industrialization, before electricity, and the internal combustion engine, and I’d be living the life of my dawn-to-dusk Amish neighbors- a different life entirely.

Average is Over follows up on Cowen’s earlier book by arguing that technological changes now taking place will shake us out of our stagnation, or at least show how that stagnation is itself evolving into something quite different, with some able to escape its pull while others fall even further behind.

Like Hon, Cowen thinks intermediate-level AI is what we should be paying attention to, rather than Kurzweil- or Bostrom-like hopes and fears regarding superintelligence. Also like Hon, Cowen thinks the most important aspect of artificial intelligence in the near future is human-AI teams. This is the lesson Cowen takes from, among other things, freestyle chess.

For those who haven’t been paying attention to the world of competitive chess, freestyle chess is what emerged once people could buy, for a few dollars, a chess-playing program for their phone that could beat the best players in the world. One might have thought that would be the death knell for human chess, but something quite different has happened. Now some of the most popular chess games are freestyle, meaning human-machine vs. human-machine.

The moral Cowen draws from freestyle chess is that the winners of these games, and, he extrapolates, of the economic “games” of the future, are those human beings most willing to defer to the decisions of the machine. I find this conclusion more than a little chilling given we’re talking about real people here rather than knights or pawns, but Cowen seems to think it’s just common sense.

In its simplest form Cowen’s argument boils down to the prediction that an increasing amount of human work in the future will come in the form of these AI-human teams. Some of this, he admits, will amount to no workers at all, with the human part of the “team” reduced to an unpaid customer. I now almost always scan and bag my own goods at the grocery store, just as I can’t remember the last time I actually spoke to a bank teller who wasn’t my mom. Cowen also admits that the rise of AI might mean the world actually gets “dumber,” our interactions with our environment simplified to foster smooth integration with machines and compressed to meet their limits.

In his vision intelligent machines will revolutionize everything from medicine to education to business management and negotiation to love. The human beings who will best thrive in this new environment will be those whose work best complements that of intelligent machines, and this will be the case all the way from the factory floor to the classroom. Intelligent machines should improve human judgment in areas such as medical diagnostics, and would even replace judges in the courtroom if we are ever willing to take the constitutional plunge. Teachers will go from educators to “coaches” as intelligent machines allow individualized instruction, but education will still require a human touch when it comes to motivating students.

His message to those who don’t work well with intelligent machines is- good luck. He sees automation leading to an ever more competitive job market in which many will fail to develop the skills necessary to thrive. Those unfortunate ones will be left to fend for themselves in the face of an increasingly penny-pinching state. There is one area, however, where Cowen thinks you might find refuge if machines just aren’t your thing: marketing. Indeed, he sees marketing as one of the major growth areas in the new, otherwise increasingly post-human economy.

The reason for this is simple. In the future there are going to be fewer, not more, people with surplus cash to spend on all the goods built by a lot of robots and a handful of humans. One will have to find and persuade those with real incomes to part with some of their cash. Computers can do the finding, but it will take human actors to sell the dream represented by a product.

The world of work presented in Cowen’s Average is Over is almost exclusively that of the middle class and higher who find their way with ease around the Infosphere, or whatever we want to call this shell of information and knowledge we’ve built around ourselves. Either that or those who thrive economically will be those able to successfully pitch whatever it is they’re selling to wealthy or well off buyers, sometimes even with the help of AI that is able to read human emotions.

I wish Cowen had focused more on what it will be like to be poor in such a world. One thing is certain, it will not be fun. For one, he sees further contraction rather than expansion of the social safety net, and widespread conservatism, rather than any attempts at radically new ways of organizing our economy, society and politics. Himself a libertarian conservative, Cowen sees such conservatism baked into the demographic cake of our aging societies. The old do not lead revolutions and given enough of them they can prevent the young from forcing any deep structural changes to society.

Cowen also has a thing for so-called “moral enhancement” though he doesn’t call it that. Moral enhancement need not only come from conservative forces, as the extensive work on the subject by the progressive James Hughes shows, but in the hands of both Hon and Cowen, moral enhancement is a bulwark of conservative societies, where the world of middle class work and the social safety net no longer function, or even exist, in the ways they had in the 20th century.

Hon, with his neuroscience background, sees moral enhancement leveraging our increasing mastery over the brain, but manifesting itself in a revival of religious longings related to meaning- a meaning that was for a long time provided by work, callings and occupations that he projects will become less and less available as we roll through the 21st century and human workers are replaced by increasingly intelligent machines. Cowen, on the other hand, sees moral enhancement as the only way the poor will survive in an increasingly competitive and stingy environment, though his enhancement is to take place by more traditional means: the return of strict schools that inculcate Victorian-era morals such as self-control and, above all, conscientiousness in the young. Cowen is far from alone in thinking that in an era when machines are capable of much of the physical and intellectual labor once done by human beings, what will matter most to individual success are ancient virtues.

In Cowen’s world the rich with money to burn are chased down with a combination of AI, behavioral economics, targeted consumer surveillance, and old-fashioned, fleshy persuasion to part with their cash, but what will such a system be like for those chronically out of work? Even should mass government surveillance disappear tomorrow (fat chance), it seems the poor will still face a world where the forces behind their ever more complex society become increasingly opaque, responsible humans become harder to find, and they are constantly “nudged” by people who claim to know better. For the poor, surveillance technologies will likely be used not to sell them stuff they can’t afford, but as tools of the repo man, debt collector, parole officer, and cop, slowly chiseling away whatever slim column still connects them to the former middle-class world of their parents. It is a world more akin to the 1940’s or even the 1840’s than to anything we have taken to be normal since the middle of the 20th century.

I do not know if such a world is sustainable over the long haul, and pray that it is not. The pessimist in me remembers that the classical and medieval worlds existed for long periods of time with extreme levels of inequality in both wealth and power; the optimist chimes in that those were ages when the common people did not know how to read. In any case, it is not a society that must come about by some macabre logic of economic determinism. The mechanism by which Cowen sees no sustained response to such a future is our own political paralysis and generational tribalism. He seems to want this world more than he is offering us a warning of its arrival. Let’s decide to prove him wrong, for the technologies he puts so much hope in could be used in totally different ways, in the service of a more just form of society.

However critical I am of Cowen for accepting such a world as a fait accompli, the man still has some rather fascinating things to say. Take for instance his view of the future of science:

Once genius machines start coming up with new theories… intelligibility will seem like a legacy from the very distant past. (220)

For Cowen much of science in the 21st century will be driven by coming up with theories and correlations from the massive amount of data we are collecting, a task more suited to a computer than a man (or woman) in a lab coat. Eventually machine derived theories will become so complex that no human being will be able to understand them. Progress in science will be given over to intelligent machines even as non-scientists find increasing opportunities to engage in “citizen science”.

Come to think of it, lack of intelligibility runs like a red thread throughout Average is Over, from “ugly” machine chess moves that human players scratch their heads at, to Cowen’s claim that those who succeed in the next century will be those who place their “faith” in the decisions of machines- choices of action they themselves do not fully understand. Let’s hope he’s wrong on that score as well, for lack of intelligibility in politics, economics, and science drives conspiracy theories, paranoia, superstition, and political immobility.

Cowen believes the time when secular persons are able to cull from science a general, intelligible picture of the world is coming to a close. This would be a disaster in the sense that science gives us the only picture of the world that is capable of being universally shared which is also able to accurately guide our response to both nature and the technological world. At least for the moment, perhaps the best science writer we have suggests something very different. To her new book, next time….

Digital Afterlife: 2045


Excerpt from Richard Weber’s History of Religion and Inequality in the 21st Century (2056)

Of all the bewildering diversity of new consumer choices on offer before the middle of the century, choices that would have stunned people from only a generation earlier, none was perhaps as shocking as the many ways there now were to be dead.

As in all things of the 21st century, what death looked like depended on the wealth question. Certainly there were many human beings, and when looking at the question globally the overwhelming majority, who were treated in death the same way their ancestors had been: buried in the cold ground or, more likely given the high property values that made cemetery space ever more precious, their corpses burned to ashes and spread over some spot sacred to the individual’s spirituality or sentiment.

A revival of death relics that had begun in the early 21st century continued for those unwilling out of religious belief or, more likely, simply unable to afford any of the more sophisticated forms of death on offer. It was increasingly the case that the poor were tattooed using the ashes of their lost loved ones, or carried some memento in the form of their DNA in the vague hope that family fortunes would change and their loved one might be resurrected in the same way mammoths now once again roamed the windswept earth.

Some were drawn by poverty, and by the consciousness brought on by the deepening environmental crisis, to simply have their dead bodies “given back” to nature, and seemed to embrace with morbid delight the idea that human beings should end up “food for worms.”

It was for those above a certain station that death took on whole new meanings. There were, of course, stupendous gains in longevity, though human beings still continued to die, and increasingly popular cryonics held out hope that death would prove nothing but a long, cold nap. Yet it was digital and brain-scanning/emulating technologies that opened up whole other avenues, allowing those who had died, or were waiting to be thawed, to continue to interact with the world.

On the low end of the scale there were now all kinds of interactive cemetery monuments that allowed loved ones or just the curious to view “life scenes” of the deceased. Everything from the most trivial to the sublime had been video recorded in the 21st century which provided unending material, sometimes in 3D, for such displays.

At a level up from this, “ghost memoirs” became increasingly popular, especially as costs plummeted due to outsourcing and then scripting AI. Beginning in the 2020’s, the business of writing biographies of the dead, which were found to be most popular when written in the first person, was initially seen as a way for struggling writers to make ends meet. Early on it was a form of craftsmanship: authors would pore over records of the deceased individual in text, video, and audio recordings, aiming to come as close as possible to the voice of the deceased, and would interview family and friends about the life of the lost in the hopes of fully capturing their essence.

The moment such craft was seen to be lucrative it was outsourced. English speakers in India and elsewhere soon pored over the life records of the deceased and created ghost memoirs en masse, and though this led to some quite amusing cultural misinterpretations, it also made the cost of having such memoirs published decline sharply, further increasing their popularity.

The perfection of scripting AI made the cost of producing ghost memoirs plummet even more. A company out of Pittsburgh called “Mementos,” created by students at Carnegie Mellon, boasted in its advertisements that “We write your life story in less time than your conception.” The same company was one of many that had brought 3D scanning of artifacts from museums to everyone, creating exact digital images of a person’s every treasured trinket and trophy.

Only the very poor failed to have their own published memoir recounting their life’s triumphs and tribulations, or to have their most treasured items scanned. Many, however, eschewed the public display of death found in either interactive monuments or the antiquated idea of memoirs, as death increasingly became a thing of shame and class identity. They preferred private home-shrines, many of which resembled early 21st century fast food kiosks, whereby one could choose a precise recorded event or conversation from the deceased in light of current need. There were selections with names like “Motivation” and “Persistence” that might pull up relevant items, some of which used editing algorithms to create appropriate mashups, or even whole new interactions the dead themselves had never had.

Somewhat above this level, due to the cost of the required AI, were so-called “ghost rooms”. In all prior centuries, some who suffered the death of a loved one would attempt to freeze time by, for instance, leaving unchanged a room in which the deceased had spent the majority of their time. Now the dead could actually “live” in such rooms, whether as a 3D hologram (hence the name ghost rooms) or in the form of an android that resembled the deceased. The most “life-like” of these AIs were based on maps of detailed “brainstorms” of the deceased, a technique perfected earlier in the century by the neuroscientist Miguel Nicolelis.

One of the most common dilemmas, encountered in some form even in the early years of the 21st century, was that the digital presence of a deceased person often continued to exist and act long after the person was gone. This became especially problematic once AIs acting as stand-ins for individuals were widely used.

Most famously there was the case of Uruk Wu. A real estate tycoon, Wu was cryogenically frozen after suffering a form of lung cancer that would not respond to treatment. Estranged from his party-going son Enkidu, Mr. Wu had placed the management of all of his very substantial estate under a finance algorithm (FA). Enkidu Wu initially sued the deceased Uruk for control of the family finances, a case he famously and definitively lost, setting the stage for increased rights for the deceased in the form of AIs.

Soon after this case, however, it was discovered that the FA being used by the Uruk estate was engaged in widespread tax evasion. After extensive software forensics it was found that such evasion was a deliberate feature of the Uruk FA and not a mere flaw. After absorbing fines, and with the unraveling of its investments and partnerships, the Uruk estate found itself effectively broke. In an atmosphere of great acrimony, TuatGenics, the cryonics establishment that interred Uruk, unplugged him and let him die, as he was unable to sustain forward funding for his upkeep and future revival.

There was a great and still unresolved debate in the 2030s over whether FAs acting in the markets on behalf of the dead were stabilizing or destabilizing the financial system. FAs became an increasingly popular option for the cryogenically frozen, or even more commonly for the elderly suffering slow-onset dementia, especially given the decline in the number of people having children to care for them in old age or inherit their fortunes after death. The dead, it was thought, would prove a conservative investment group, but anecdotally at least they came to be seen as a population willing to undertake an almost obscene level of financial risk, given that revival was a generation off or more.

One weakness of the FAs was that they were forced to pour their resources into upgrade fees rather than investment, as the presently living designed software meant to deliberately exploit the weaknesses of earlier-generation FAs. Some argued that this was a form of “elder abuse”, whereas others took the position that prohibiting such practices would amount to fossilizing markets in an earlier and less efficient era.

Other phenomena that came to prominence by the 2030s were so-called “replicant” and “faustian” legal disputes. One of the first groups to have accurate digital representations in the 21st century were living celebrities. Near death, or at the height of their fame, celebrities often contracted out their digital replicants. Those holding ownership rights to a replicant always needed to avoid media saturation, but finding the right balance between generating present revenue and securing future revenue proved challenging.

Copyright proved difficult to enforce. Once the code of a potentially revenue-generating digital replicant had been made, there was a great deal of incentive to obtain a copy of that replicant and sell it to all sorts of B-level media outlets. There were widespread complaints by the Screen Actors Guild that replicants were taking work away from real actors, but the complaint was increasingly seen as antique: most actors, with the exception of crowd-drawing celebrities, were digital simulations rather than “real” people anyway.

Faustian contracts were legal obligations by second- or third-tier celebrities, or first-tier actors and performers whose fame had begun to decline, that allowed the contractor to sell a digital representation to third parties. Celebrities who had entered such contracts inevitably found “themselves” starring in pornographic films or, just as commonly, in political ads for causes they would never support.

Both the replicant and faustian issues gave an added dimension to the legal difficulties first identified in the Uruk Wu case. Who was legally responsible for the behavior of digital replicants? The question became especially pressing in the case of the serial killer Gregory Freeman. Freeman was eventually held liable for the deaths of up to 4,000 biological, living humans, murders he “himself” had not committed but that were carried out by his digital replicant, largely by exploiting a software error in the Sony-Geisinger remote medical monitoring system (RMMS), which controlled everything from patients’ pacemakers to brain implants, prosthetics, and medication delivery systems and prescriptions. Freeman was found posthumously guilty of having caused the deaths (he committed suicide), but not before the replicant he had created had killed hundreds more, even after the man’s death.

It became increasingly common for families to create multiple digital replicants of a particular individual, so that a lost mother or father could now live with all of their grown and dispersed children simultaneously. This became the source of unending court disputes over which replicant was actually the “real” person and therefore held valid claim to property.

Many began to create digital replicants well before the point of death and to farm them out for remunerative work. Much of the work by this point had been transformed into information-processing tasks, a great deal of it performed by human-AI teams, and even in traditional fields where true AI had failed to make inroads, such as indoor plumbing, much of the work was performed by remote-controlled droids. Thus there was an incentive for people to create digital replicants tasked with income-generating work. Individuals would have themselves copied, or more commonly just a skill-based part of themselves, and have it put to work. Leasing was much more common than outright ownership, not merely because of complaints about a new form of “indentured servitude” but because whatever skill set was sold was likely to be replaced as its particulars became obsolete or as the pure AI designed on it improved. In the churn from needed skill to obsolescence, many dedicated a share of their digital replicants to retraining themselves.

Servitude was one area where the impoverished dead were able to outcompete their richer brethren. A common practice was for the poor to be paid upfront for the use of their brain matter upon death. Parts of once-living human brains were commonly used by companies for “captcha” tasks yet to be mastered by AI.

There were strenuous objections to this “atomization” of the dead, especially for those digital replicants that did not have any family to “house” them and who, lacking the freedom to roam the digital universe, were in effect trapped in a sort of quantum no-man’s-land. Some religious groups, most importantly the Mormons, responded by placing digital replicants of the dead in historical simulations that recreated the world in which the deceased had lived, and were earnestly pursuing a project to create replicants of those who had died before the onset of the digital age.

In addition, there were numerous rights-based arguments against the creation of such simulated histories using replicants. The first was that forcing digital replicants to live in a world where children died in huge numbers, where starvation, war, and plague were common causes of death, and which lacked modern miracles such as anesthesia, when such a world could easily have been created with more humane features, was not “redemptive” but amounted to cruel and unusual punishment, even torture.

Indeed, one of the biggest, and most overblown, fears of this time was that one’s digital replicant might end up in a sadistically crafted simulated form of hell. Whatever its irrationality, this became a popular form of blackmail, with videos of “captive” digital replicants or proxies used to frighten a person into surrendering some enormous sum.

The other argument against placing digital replicants in historical simulations, without their knowledge, without the ability to leave, or, more often, both, was that it was akin to imprisoning a person in a sort of Colonial Williamsburg or Renaissance Faire. “Spectral abolitionists” argued that the embodiment of a lost person should be free to roam and interact with the world as they chose, whether as software or androids, and should be unshackled from the chains of memory. There were even the JBDBM (the John Brown’s Digital Body Movement) and the DigitalGnostics, hacktivist groups that went around revealing the reality of simulated worlds to their inhabitants and sought to free them to enter the larger world heretofore invisible to them.

A popular form of cultural terrorism at this time was the so-called “Erasers”, entities with names such as “GrimReaper” or “Scathe” whose project consisted in tracking down digital replicants and deleting them. Some characterized these groups as a manifestation of deathist philosophy, or even claimed that they were secretly funded by traditional religious groups whose “business models” were being disrupted by the new digital forms of death. Such suspicions were supported by the fact that the Erasers were usually based in religious countries where the rights of replicants were often non-existent and fears of new “electric jinns” rampant.

Also prominent in this period were secular prophets who projected that a continuation of the trends in digital replicants, both of the living and of the at least temporarily dead, along with the AIs representing them, would lead to a situation where non-living humans would soon outnumber the living. There were apocalyptic tales, akin to the zombie craze earlier in the century, that within 50 years the dead would rise up against the living and perhaps join with AIs to destroy the world. But that, of course, was all Ningbowood.


An imaginary book excerpt inspired by Adrian Hon’s History of the Future in 100 Objects.

Welcome to the New Age of Revolution

Fall of the Bastille

Last week the prime minister of Ukraine, Mykola Azarov, resigned under pressure from a series of intense riots that had spread from Kiev to the rest of the country. Photographs from the riots in The Atlantic blew my mind, like something out of a dystopian steampunk flick. Many of the rioters were dressed in gas masks that looked as if they had been salvaged from World War I. As weapons they wielded homemade swords, Molotov cocktails, and fireworks. To protect their heads some wore kitchen pots and spaghetti strainers.

The protestors were met by riot police in hypermodern black suits of armor, armed with truncheons, tear gas, and shotguns, not all of them firing only rubber bullets. Orthodox priests, crosses and icons in hand, sometimes placed themselves perilously between the rioters and the police, hoping to bring calm to a situation that was spinning out of control.

Even for Ukraine, it was cold during the weeks of the riots, so cold that the blasts from the police water cannons crystallized shortly after contact, leaving the detritus of protesters covered in sheets of ice, as if they had been shot with some kind of high-tech freeze gun.

Students of mine from Ukraine were largely in sympathy with the protestors, but feared civil war unless something changed quickly. The protests had been brought on by a backdoor deal with Russia to walk away from talks aimed at Ukraine joining the European Union. Protests over that agreement led to the passage of an anti-protest law that only further inflamed the rioters. The resignation of the Russophile prime minister seemed to calm the situation for a time, but with the return to work of the Ukrainian president Viktor Yanukovych (he was supposedly ill during the heaviest of the protests) the situation has once again become volatile. It was Yanukovych who was responsible for cutting the deal with Russia and pushing through the draconian limits on freedom of assembly that had sparked the protests in the first place.

Ukraine, it seems, is a country being torn in two, a conflict born of demographics and history. Its eastern, largely Russian-speaking population looks toward Russia; its western, largely Ukrainian-speaking population looks toward Europe. In this drama both Russia and the West are no doubt trying to influence the outcome of events in their favor, and thus exacerbating the instability.

Yet, while such high levels of tension are new, the problem they reveal is deep in historical terms- the cultural tug of war over Ukraine between Russia and Europe, East and West, stretches at least as far back as the 14th century when western Ukraine was brought into the European cultural orbit by the Poles. Since then, and often with great brutality on the Russian side, the question of Ukrainian identity, Slavic or Western, has been negotiated and renegotiated over centuries- a question that will perhaps never be fully resolved and whose very tension may be what it actually means to be Ukrainian.

Where Ukraine goes from here is anybody’s guess, but despite its demographic and historical particularities, its recent experience adds to the growing list of mass protests that have toppled governments, or at least managed to pressure governments into reversing course, that have been occurring regularly since perhaps 2008 with riots in Greece.

I won’t compile a comprehensive list but will simply recount the mass protests and riots I can cite from memory. There was the 2009 Green Revolution in Iran, subsequently crushed by the Iranian government. There was the 2010 Jasmine Revolution in Tunisia, which toppled the government there and began what came to be the horribly misnamed “Arab Spring”. By 2011 mass protests had overthrown Hosni Mubarak in Egypt, and riots had broken out in London. 2012 saw a lull in mass protests, but in 2013 they came back with a vengeance. There were massive riots in Brazil over government cutbacks for the poor combined with extravagant spending in preparation for the 2014 World Cup; there were huge riots in Turkey, which shook the government of the increasingly authoritarian Recep Tayyip Erdoğan; and mass protests in Egypt culminated in a military coup that toppled the democratically elected Islamist president. Protesters in Thailand have been “occupying” the capital since early January. And now we have Ukraine.

These are just some of the protests that were widely covered in the media. Sometimes quite large, or at least numerous, protests take place in a country and are barely reported in the news at all. Between 2006 and 2010 there were 180,000 reported “mass incidents” in China. The majority of these protests seem to be related to local issues rather than directed against the national government, but the PRC has been adept at keeping them free of the prying eyes of Western media.

The abortive 2009 riots in Iran were the first to be called a “Twitter Revolution” by Western media and digerati. The new age of revolution is often explained in terms of the impact of the communications revolution and social media. We have had time to find out just how little a role Western, and overwhelmingly English-language, media platforms such as Twitter and Facebook have played in this new upsurge of revolutionary energy, but that’s not the whole story.

I wouldn’t go so far as to say technology has been irrelevant in bringing about our new revolutionary era; I’d just point the finger at another technology, namely mobile phones. By 2008 the number of mobile devices had, in the space of a decade, gone from a rich-world luxury to being in the hands of 4 billion people. By 2013, 6 billion of the world’s 7 billion people had some sort of mobile device, more people than had access to working toilets.

It is the very disjunction between the number of people able to communicate, and hence act en masse, and those lacking what we in the developed world consider necessities that should get our attention: a potentially explosive situation. And yet, we have known since Alexis de Tocqueville that revolutions are less the product of the poor, who have always known misery, than of a rising middle class whose ambitions have been frustrated.

The questions I would ask a visitor from the near future, if I wanted to gauge the state of the world a decade or two hence, would be whether the rising middle class in the developing world had put down solid foundations, and whether, and to what extent, it had been cut off at the knees by the derailment of the juggernaut of the Chinese economy, the rising capacity of automation, or both.

The former fear seems to be behind the recent steep declines in the financial markets, where the largest blows have been suffered by developing economies. The latter is a longer-term risk for developing economies, which, if they do not develop quickly enough, may find themselves permanently locked out of the traditional arc of capitalist development, moving from agriculture to manufacturing to services.

Automation threatens the wage competitiveness of developing-economy workers at every stage of that arc: poor textile workers in Bangladesh competing with first-world robots; Indians earning a middle-class wage at call centers, or doing grunt legal or medical work, increasingly in competition with ever more sophisticated, and in the long run less expensive, bots.

Intimately related to this would be my last question for our near-future time traveler: does the global trend toward increasing inequality continue, accelerate, or dissipate? Aside from government incompetence and corruption combined with a mobile-enabled youth, rising inequality appears to be the only macro trend these revolts share. Though it cannot be the dominant factor; otherwise protests would be largest and most frequent in the country with the fastest-growing inequality: the US.

Revolutions, in the sense of the mobilization of a group of people massive enough and active enough to actually overthrow a government, are a modern phenomenon, and for a reason. Only since the printing press and mass literacy has the net of communication been thrown wide enough that revolution, as opposed to mere riot, has become possible. The Internet, and even more so mobile technology, have thrown that net even further, or rather deeper, with literacy no longer necessary and with the capacity for intergroup communication now in real time, no longer in need of or under the direction of a center, as was the case in the era of radio and television.

Technology hasn’t resulted in the “end of history”, but quite the opposite. Mobile technology appears to facilitate the formation of crowds, but what these crowds mobilize around are usually deep-seated divisions that the society in which the protests occur has yet to resolve, or widely unpopular decisions made over the people’s heads.

For many years now we have seen this phenomenon in financial markets, one of the first areas to develop deep, rapidly changing interconnections based on the digital revolution. Only a few years back, democracy seemed to have come under the thumb of much more rapidly moving markets, but now, perhaps, a populist analog has emerged.

What I wonder is how the state will respond to this, or how this new trend of popular mobilization may intersect with yet another contemporary trend- mass surveillance by the state itself?

The military theorist Carl von Clausewitz came up with his now famous concept of the “fog of war” defined as “the uncertainty in situational awareness experienced by participants in military operations. The term seeks to capture the uncertainty regarding one’s own capability, adversary capability, and adversary intent during an engagement, operation, or campaign.”  If one understands revolution as a kind of fever pitch exchange of information leading to action leading to exchange of information and so on, then, all revolutions in the past could be said to have taken place with all players under such a fog.

Past revolutions have only been transparent to historians. From a bird’s-eye view and in hindsight, scholars of the mother of all revolutions, the French, can see the effects of Jean-Paul Marat’s pamphlets and screeds inciting violence, the published speeches of the moralistic tyrant Robespierre, the plotting letters of Marie-Antoinette to the Austrians, or the counter-revolutionary communiqués of General Lafayette. To the actors in the French Revolution itself, the motivations and effects of the other players were always opaque, the origin, in part, of the revolution’s paranoia and of the Reign of Terror, which Robespierre saw as a means of unmasking conspirators and hypocrites.

With the new capacity of governments to see into communication, revolutions might be said to be becoming transparent in real time. Insecure governments that might be toppled by mass protest have an obvious interest in developing the capacity to monitor the communications and track the movements of their citizens. Moore’s Law has made total surveillance by the state, long an unachievable goal, relatively cheap.

During revolutionary situations foreign governments (with the US at the top of the list) may be inclined to peer into revolutions through digital surveillance, and in some cases will likely use this knowledge to interfere so as to shape outcomes in their own favor. States that are repressive toward their own people, such as China, will likewise try to use these surveillance tools to ensure revolutions never happen, or to steer them toward preferred outcomes if they occur despite best efforts.

One can only hope that the ability to see into a revolution while it is happening does not engender the illusion that we can also control its outcome, for as the riots and revolutions of the past few years have shown, moves against a government may be enabled by technology imported from outside, but the fate of such actions is decided by people on the ground, who alone might be said to bear full responsibility for the future of the society in which revolution has occurred.

Foreign governments are engaged in a dangerous form of hubris if they think they can steer outcomes in their favor while oblivious to local conditions, and governments that think technology gives them a tool by which they can ignore the cries of their citizens are allowing the very basis on which they stand to rot beneath them and eventually collapse. It is a truth that those who consider themselves part of a new global elite should heed when it comes to the issue of inequality.