William Gibson Groks the Future: The Peripheral


It’s hard to get your head around the idea of a humble prophet. Picturing Jeremiah screaming to the Israelites that the wrath of God is upon them and then adding “at least I think so, but I could be wrong…”, or some utopian claiming the millennium is near but then following it up with “then again this is just one man’s opinion…”, would be the best kind of ridiculous: so seemingly out of character as to be both shocking and refreshing.

William Gibson is a humble prophet.

In part this stems from his understanding of what science fiction is for: not to predict the future, but to understand the present, with any right calls about things yet to happen likely just lucky guesses. Over the weekend I finished William Gibson’s novel The Peripheral, and I will take the humble man at his word as in: “The future is already here- it’s just not very evenly distributed.” As a reader I knew he wasn’t trying to make any definitive calls about the shape of tomorrow; he was trying to tell me how he understands the state of our world right now, including the parts of it that might be important in the future. So let me try to reverse engineer that, to excavate the picture of our present from the ruins of the world of tomorrow Gibson so brilliantly gave us in his gripping novel.

The Peripheral is a time-travel story, but a very peculiar one. In his imagined world we have gained the ability not to travel between past, present and future but to exchange information between different versions of the multiverse. You can’t reach into your own past, but you can reach into the past of an alternate universe that thereafter branches off from the particular version of the multiverse you inhabit. It’s a past that looks like your own history but isn’t.

The novel itself is the story of one of these encounters between “past” and “future.” The past in question is a world that is actually our imagined near future, somewhere in the American South where the novel’s protagonist, Flynn, her brother Burton, and his mostly veteran friends eke out their existence. (Even if I didn’t have daughters I would probably love Gibson’s use of strong female characters, but having two, I love it even more.) It’s a world that rang very true to me because it was a sort of dystopian extrapolation of the world where I both grew up and live now: a rural county where the economy is largely composed of “Hefty Mart” and people building drugs in their homes.

The farther future in the story is the world of London decades after a wrenching crisis known as the “jackpot”, much of whose devastation was brought about by global warming that went unchecked and resulted in the loss of billions of human lives and even greater destruction for the other species on earth. It’s a world of endemic inequality, celebrity culture and sycophants. And the major character from this world, Wilf Netherton, would have ended his days as a mere courtier to the wealthy had it not been for his confrontation with an alternate past.

So to start, there are a few observations about the present we can draw out from the novel: the hollowing out of rural economies dominated by box stores, which I see all around me, and the prevalence of meth labs as a keystone of an economy now held up only by the desperation of its people, which I see as well.

The other present Gibson is giving us some insight into is London, where Russian oligarchs established a kind of second Moscow after the breakup of the Soviet Union. That’s a world that may fade now with the collapse of the Russian ruble, but the broader trend will likely remain in place: corrupt elites who have made their millions or billions by pilfering their home countries making their homes in, and ultimately shaping the fate of, the world’s greatest cities.

Both the near and far futures in Gibson’s novel are horribly corrupt. Local, state and even national politicians can not only be bought in Flynn’s America; their very job seems to be to put themselves up for sale. The London of the farther future is corrupt to the bone as well. Indeed, it’s hard to say that government exists at all there except as a conduit for corruption. The detective Ainsley Lowbeer, another major character in the novel, who plays the role of the law in London, seems to be not even a private contractor but someone pursuing justice on her own dime. We may not have this level of corruption today, but I have to admit it didn’t seem all that futuristic.

Inequality (both of wealth and power, with little seeming distinction between the two) also unites these two worlds and our own. It’s an inequality that has an effect on privacy, in that only those who have political influence enjoy it. The novel hinges on Flynn being the sole (innocent) witness to a murder. That there is no recording of the crime leaves her incredulous, and, disturbingly enough, left me incredulous as well, until Lowbeer explains it to Flynn this way:

“Yours is a relatively evolved culture of mass surveillance,” Lowbeer said. “Ours, much more so. Mr Zubov’s house here, internally at least, is a rare exception. Not so much a matter of great expense as one of great influence.”

“What does that mean?” (Flynn)

“A matter of whom one knows,” said Lowbeer, “and of what they consider knowing you to be worth.” (223)

2014 saw the breaking open of the shell hiding the contours of the surveillance state we have allowed to be built around us in the wake of 9/11, though how we didn’t realize this before Edward Snowden is beyond me. If I were a journalist looking for a story, it would be some version of the surveillance-corruption-complex described by Gibson’s detective Lowbeer. That is, I would look for ways in which the blindness of the all-seeing state (or even just the overwhelming surveillance powers of companies) was bought or gained by leveraging influence, or where its concentrated gaze was purchased for use as a weapon against rivals. In a world where one’s personal information can be ripped away with simple hacks, or damaging correlations about anyone can be conjured out of thin air, no one is really safe. It is merely an extrapolation of human nature that the asymmetries of power and wealth will ultimately decide who has privacy and who does not. Sadly, again, not all that futuristic.

In both the near and far futures of The Peripheral drones are ubiquitous. Flynn’s brother Burton is himself a former haptic drone pilot in the US military, and he and his buddies have surrounded themselves with all sorts of drones. In the far future drones are even more widespread and much smaller. Indeed, Flynn witnesses the aforementioned murder while standing in for Burton as a kind of drone-piloting flyswatter, keeping paparazzi drone flies away from the soon-to-be-killed celebrity Aelita West.

That Flynn ended up a paparazzi flyswatter in an alternate future she thinks is a video game began in the most human of ways: Netherton trying to impress his girlfriend, Daedra West, Aelita’s sister. By far the coolest future-tech element of the book builds off of this, when Flynn goes from being a drone pilot to being the “soul” of a peripheral in order to find Aelita’s murderer.

Peripherals, if I understand them, are quasi-biological forms of puppets. They can act intelligently on their own, but with nowhere near the nuance and complexity they have when a human being is directly controlling them through a brain-peripheral interface. Flynn becomes embodied in an alternative future by controlling the body of a peripheral while herself remaining in an alternative past. Leaves your head spinning? Don’t worry, Gibson is such a genius that in the novel itself it seems completely natural.

So Gibson is warning us about environmental destruction, inequality, and corruption, and trying to imagine a world of ubiquitous drones and surveillance. All essential stuff for us to pay attention to, and for which The Peripheral provides a kind of frame that might serve as a sort of protection against blindly continuing to head in directions we would rather not go.

Yet the most important commentary on the present I gleaned from Gibson’s novel wasn’t any of these things, but what it said about a world where the distinction between the virtual and the real has disappeared, where everything has become a sort of video game.

In the novel, what this results in is a sort of digital imperialism and cruelty. Those in Gibson’s far future derisively call the alternative pasts they interfere in “stubs”, though these are worlds as full as their own, with people in them who are just as real as we are.

As Lowbeer tells Flynn:

Some persons or people unknown have since attempted to have you murdered, in your native continuum, presumably because they know you to be a witness. Shockingly, in my view, I am told that arranging your death would in no way constitute a crime here, as you are, according to current legal opinion, not considered to be real. (200)

The situation is actually much worse than that. As the character Ash explains to Netherton:

There were, for instance, Ash said, continua enthusiasts who’d been at it for several years longer than Lev, some of whom had conducted deliberate experiments on multiple continua, testing them sometimes to destruction, insofar as their human populations were concerned. One of these early enthusiasts, in Berlin, known to the community only as “Vespasian,” was a weapons fetishist, famously sadistic in his treatment of the inhabitants of his continua, whom he set against one another in grinding, interminable, essentially pointless combat, harvesting the weaponry evolved, though some too specialized to be of use outside whatever baroque scenario had produced it. (352)

Some may think this indicates Gibson believes we might ourselves be living in a Matrix-style simulation. In fact I think he’s actually trying to say something about the way the world, beyond dispute, works right now, though we haven’t, perhaps, seen it all in one frame.

Our ability to use digital technology to interact globally is extremely dangerous unless we recognize that there are often real human beings behind the pixels. This is a problem for people who are engaged in military action, such as drone pilots, yes, but it goes well beyond that.

Take financial markets. Some of what Gibson is critiquing is the kind of algorithmic high-speed trading we’ve seen in recent years, and that played a role in the onset of the financial crisis. Those in his far future playing with past continua are doing so in part to game the financial systems there, which they can do not because they have a record of what financial markets in such continua will do, but because their computers are so much faster than those of the “past”. It’s a kind of AI neo-colonialism, itself a fascinating idea to follow up on, but I think the deeper moral lesson of The Peripheral for our own time lies in the fact that such actions, whether destabilizing the economies of continua or throwing them into wars as a sort of weapons-development simulation, are done with impunity because the people in the continua are considered nothing but points of data.
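For readers who want a concrete sense of how raw speed alone, without any foreknowledge, can drain a slower market, here is a toy sketch of latency arbitrage, the real-world cousin of what Gibson’s continua-gamers are doing. Everything in it (the numbers, the random walk, the single stale quote) is my own invention for illustration, not anything taken from the novel or from an actual trading system.

```python
# Toy latency arbitrage: a fast trader sees each price move one tick before
# the slower market maker refreshes her quote, and trades against the stale
# price. All numbers are invented; this is an illustration, not a real model.
import random

random.seed(0)

true_price = 100.0     # the "real" value, which moves every tick
stale_quote = 100.0    # the slow side's posted price, updated one tick late
fast_profit = 0.0

for _ in range(1000):
    # The underlying value moves...
    true_price += random.gauss(0, 0.5)
    # ...the fast trader reacts instantly: buy from the stale quote if it is
    # now too cheap, sell to it if it is now too dear, then unwind at the
    # new value. Per unit traded, that pockets the size of the gap.
    fast_profit += abs(true_price - stale_quote)
    # Only afterwards does the slow side catch up and refresh its quote.
    stale_quote = true_price

print(f"Profit extracted purely from being faster: {fast_profit:.2f}")
```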

Today, with the click of a button, those who hold or manage large pools of wealth can ruin the lives of people on the other side of the globe. Criminals can essentially mug a million people with a keystroke. People can watch videos of strangers’ children and other people’s loved ones being raped and murdered as if they were playing a game in hell. I could go on, but shouldn’t have to.

One of the key ways, perhaps the key way, we might keep technology from facilitating this hell, from turning us into cold, heartless computers ourselves, is to remember that there are real flesh-and-blood human beings on the other side of what we do. We should be using our technology to find them and help them, or at least not to hurt them, rather than target them, or flip their entire world upside down without any recognition of their human reality because it somehow benefits us. Much of the same technology that allows us to treat other human beings as bits, thankfully, gives us tools for doing the opposite as well, and unless we start using our technology in this more positive and humane way we really will end up in hell.

Gibson will have grokked the future and not just the present if we fail to address these problems he has helped us (or me at least) to see anew. For if we fail to overcome them, it will mean that we have continued forward into a very black continuum of the multiverse, and turned Gibson into a dark prophet, though he had disclaimed the title.

 

Living in the Divided World of the Internet’s Future


Sony hacks, barbarians with Facebook pages, troll armies, ministries of “truth”– it wasn’t supposed to be like this. When the early pioneers of what we now call the Internet freed the network from the US military they were hoping for a network of mutual trust and sharing, a network like the scientific communities in which they worked, where minds were brought into communion from every corner of the world. It didn’t take long for some of the witnesses to the global Internet’s birth to see in it the beginnings of a global civilization, the unification, at last, of all of humanity under one roof, brought together in dialogue by the miracle of a network that seemed to eliminate the parochialism of space and time.

The Internet for everyone that these pioneers built is now several decades old; we are living in its future as seen from the vantage point of the people who birthed it as this public “thing”, this thick overlay of human interconnections which now mediates almost all of our relationships with the world. Yet, rather than bringing the world together, humanity appears to be drifting apart.

Anyone who doubts that the Internet has become as much a theater of war in which political conflicts are fought as a harbinger of a nascent “global mind” isn’t reading the news. Much of the Internet has been weaponized, whether by nation-states or non-state actors. Bots, whether used in contests between individuals or over them, now outnumber human beings on the web.

Why is this happening? Why did the Internet that connected us also fail to bring us closer?

There are probably dozens of reasons, only one of which I want to comment on here because I think it’s the one that’s least discussed. What I think the early pioneers failed to realize about the Internet was that it would be as much a tool for reanimating the past as a means of building the future. It’s not only that history didn’t end; it’s that it came alive to a degree we had failed to anticipate.

Think what one will of Henry Kissinger, but his recent (and given that he’s 91 probably last major) book, World Order, tries to get a handle on this. What makes the world so unstable today, in Kissinger’s view, is that it is perhaps the first time in world history where we truly have “one world”, and, at the same time, multiple competing definitions of how that world should be organized. We only really began to live in what can be considered a single world with the onset of the age of European expansion that mapped, conquered, and established contact with every region on the earth. Especially from the 1800s to the end of World War II, world order was defined by the European system of balance of power, and, I might add, the shared dominant Western culture these nations proselytized. After 1945 you have the Cold War, with world order defined by the bipolar split between the US and the Soviet Union. After the fall of the USSR, for a good few decades, world order was an American affair.

It was during this last period, when the US and neo-liberal globalization were at their apex, that the Internet became a global thing, a planetary network connecting all of humanity together into something like Marshall McLuhan’s “global village.” Yet this new realm couldn’t really exist as something disconnected from underlying geopolitical and economic currents forever. An empire secure in its hegemony doesn’t seek to turn its communication system into a global spying tool or weaponize it, both of which the US has done. If the US could treat the global network or the related global financial system as tools for parochial nationalist ends, then other countries would seek to do the same- and they have. Rather than becoming Chardin’s noosphere, the Internet has become another theater of war for states, terrorist and criminal networks, and companies.

What exactly these entities are that compete with one another across what we once called “cyberspace”, and what goals they have, are not really technological questions at all, but ones born from the ancient realities of history, geography and the contest for resources and wealth. Rather than one modernity we have several competing versions, even if all of them are based on the same technological foundations.

Non-western countries had once felt compelled to copy the West’s cultural features as the price of modernity, and we should not forget that the main reason modernization was pursued despite its upheavals was to develop to a level where they could defend themselves against the West and its technological superiority. As Samuel Huntington pointed out in the 1990s, now that the West had fallen from the apex of its power, other countries were free to pursue modernity on their own terms. His model of a “clash of civilizations” was simplistic, but it was not, as some critics claimed, “racist”. Indeed, if we had listened to Huntington we would never have invaded Afghanistan and Iraq.

Yet civilization is not quite the right term. Better, perhaps: political cultures with different ideas regarding political order, along with a panorama of non-state actors old and new. What we have is a contest between different political cultures and concepts of order, and contests for raw power, all unfolding in the context of a technologically unified world.

The US continues to pursue its ideological foreign policy with deep roots in its history, but China has revived its own much deeper history of being the center of East Asia as well. Meanwhile, the Middle East, where most states were historical fictions created by European imperialist powers in the wake of World War I with the Sykes-Picot agreement, has imploded, and what’s replaced the defunct nation-state is a millennium-old conflict between the two major branches of Islam. Russia at the same moment pursues old czarist dreams long thought as dead and encrypted as Lenin’s corpse.

That’s the world order, or perhaps better, world disorder, that Kissinger sees, and when you add to it the fact that our global networks are vectors not just for these conflicts between states and cultures, but for those between criminals and corporations, it can look quite scary. On the Internet we’ve become the “next door neighbor” not just of interesting people from all the world’s cultures, but of scam artists, electronic burglars, spies and creeps.

Various attempts have been made to come up with a theory that would describe our current geopolitical situation. There has been Fukuyama’s victory of liberal democracy in his “end of history” thesis, and Huntington’s clash of civilizations. There have been arguments that the nation-state is dead and that we are resurrecting a pre-Westphalian “neo-medieval” order where the main story isn’t the struggle between states but between international groups, especially corporations. There are those who argue that city-states and empires are the political units not only of the distant past, but of the future. All of these and their many fellow theories, including still vibrant Marxist and revived anarchist takes on events, have a grain of truth to them, but always seem to come up short in capturing the full complexity of the moment.

Perhaps the problem we encounter when trying to understand our era is that it truly is sui generis. We have quite simply never existed in a world where the connective tissue, the networks that facilitate the exchange of goods, money, ideas and culture, was global, but where the underlying civilizations and their political and social histories and conditions were radically different.

Looking for a historical analog might be a mistake. Like the early European imperialists who dressed themselves up in togas and rediscovered the Doric column, every culture in this big global knot of interconnections we’ve managed to tie all of humankind within is blinded by its history into giving a false order to the labyrinth.

Then again, maybe it’s the case that what digital technologies are really good at is destroying the distinction between the past and the future, just as the Internet is the most powerful means we have yet discovered for bringing together the like-minded regardless of their separation in space. Political order, after all, is nothing but a reflection of the type of world groups of human beings wish to reify. Some groups get these imagined worlds from the past and some from imagined futures, but the stability of none can ever be assured now that all are exposed to the reality of other worlds outside their borders. A transhumanist and a member of ISIS encountering one another would be something akin to the meeting of time travelers from past and future.

This goes beyond the political. Take any cultural group you like, from steampunk aficionados to constitutional literalists, and what you have are people trying to make an overly complex present understandable by refashioning it in the form of an imagined past. Sometimes people even try to get a grip on the present by recasting it in the form of an imagined future. There is the “march of progress”, which assumes we are headed for a destination in time, or science fiction, which gives us worlds more graspable than the present because the worlds presented there have a shape that our real world lacks.

It might be the case that there has never been a shape to humanity or our communities at any time in the past. Perhaps future historians will make the same mistake we have and project their simplifications onto our world, which will be their formless past. We know better.

 

2014: The Death of the Human Rights Movement, or Its Rebirth?


For anyone interested in the issues of human rights, justice, or peace, and I assume that would include all of us, 2014 was a very bad year. It is hard to know where to start: with Eric Garner, the innocent man choked to death in New York City, whose police are supposed to protect citizens, not kill them; or Ferguson, Missouri, where the lack of police restraint in using lethal force on African Americans burst into public consciousness, with seemingly little effect, as the chilling murder of a young boy wielding a pop gun occurred even in the midst of riots that were national news.

Only days ago, we had the release of the US Senate’s report on the torture of terrorist “suspects”, torture performed or enabled by Americans set into a state of terror and rage in the wake of 9-11. Perhaps the most depressing feature of the report is the defense of these methods by members of the right, even though there is no evidence that forms of torture ranging from “anal feeding” to threatening prisoners with rape gave us even one piece of usable information that could not have been gained without turning American doctors and psychologists into 21st-century versions of Dr. Mengele.

Yet the US wasn’t the only source of ill winds for human compassion, social justice, and peace. It was a year when China essentially ignored and rolled up democratic protests in Hong Kong, when Russia effectively partitioned Ukraine, and when anti-immigrant right-wing parties made gains across Europe. The Middle East proved especially bad: military secularists and the “deep state” reestablished control over Egypt, killing hundreds and arresting thousands, while the living hell that is the Syrian civil war created the horrific movement that called itself the Islamic State, whose calling card seemed to be to brutally decapitate, crucify, or stone its victims and post the results on YouTube.

I think the best way to get a handle on all this is to zoom out and take a look from 10,000 feet, so to speak. Zooming out allows us to put all this in perspective in terms of space, but even more importantly, in terms of time, of history.

There is a sort of intellectual conceit among a certain subset of thoughtful, but not very politically active or astute, people who believe, as Kevin Kelly recently said, that “any twelve year old can tell you that world government is inevitable”. And indeed, given how many problems are now shared across all of the world’s societies, and how interdependent we have become, the opinion seems to make a great deal of sense. In addition to these people there are those, such as Steven Pinker in his fascinating, if far too long, Better Angels, who make the argument that even if world government is not in the cards, something like world sameness, a convergence around a shared set of global liberal norms, along with continued social progress, seems baked into the cake of modernity, as long as we can rid ourselves of what they consider atavisms, most especially religion, which they think has left societies blind to the wonders of modernity and locked in a state of violence.

If we wish to understand current events, we need to grasp why it is that these ideas- the greater and greater political integration of humanity, and projections regarding the decline of violence- seem as far away from us in 2014 as ever.

Maybe the New Atheists, among whom Pinker is counted, are right that the main source of violence in the world is religion. Yet it is quite obvious from looking at the headlines listed above that religion only unequivocally plays a role in two of them- the Syrian civil war and the Islamic State- and the two are so closely related we should probably count them as just one. US torture of Muslims was driven by nationalism, not religion, and police brutality towards African Americans is no doubt a consequence of a racism baked deep into the structure of American society. The Chinese government was not cracking down on religious protesters in Hong Kong but on civically motivated ones, and the two sides battling it out in Ukraine are both predominantly Orthodox Christians.

The argument that religion, even when viewed historically, hasn’t been the primary cause of human violence, is one made by Karen Armstrong in her recent book Fields of Blood. Someone who didn’t read the book, and Richard Dawkins is one critic who apparently hasn’t read it, might think it makes the case that religion is only violent as a proxy for conflicts that are at root political, but that really isn’t Armstrong’s point.

What she reminds those of us who live in secular societies is that before the modern era it wasn’t possible to speak of religion as some distinct part of society at all. Religion’s purview was so broad it covered everything from the justification of political power, to the explanation of the cosmos, to the regulation of marriage, to the way society treated its poor.

Religion spread because the moral universalism it eventually developed sat so well with the universal aspirations of empire that the latter sanctioned and helped establish religion as the bedrock of imperial rule. Yet from the start, whether Taoism and Confucianism in China, Hinduism and Buddhism in Asia, Islam in North Africa and the Middle East, or Christianity in Europe, religion was the way in which the exercise of power or the degree of oppression was criticized and countered. It was religion which challenged the brutality of state violence and motivated the care for the impoverished and disabled. Armstrong also reminds us that the majority of the world is still religious in this comprehensive sense, that secularism is less a higher stage of society than a unique method of approaching the world that emerged in Europe for particularistic reasons, and which was sometimes picked up elsewhere as a perceived necessity for technological modernization (as in Turkey and China).

Moving away from Armstrong, it was the secularizing West that invented the language of social and human rights, a language that built on the utopian aspirations of religion but shed its pessimism that a truly just world, without poverty, oppression or war, would have to await the end of earthly history and the beginning of a transcendent era: we should build the perfect world in the here and now.

Yet the problem with human rights as they first appeared in the French Revolution was that they were intimately connected to imperialism. The French “Rights of Man” both made strong claims for universal human rights and were a way to undermine the legitimacy of European autocrats, serving the imperial interests of Napoleonic France. The response to the rights imperialism of the French was a nationalism that democratized politics, but tragically based its legitimacy on defending the rights of one group alone.

Over a century after Napoleon’s defeat both the US and the Soviet Union would claim the inheritance of French revolutionary universalism with the Soviets emphasizing their addressing of the problem of poverty and the inequalities of capitalism, and the US claiming the high ground of political freedom- it was here, as a critique of Soviet oppression, that the modern human rights movement as we would now recognize it emerged.

When the USSR fell in the 1990’s it seemed the world was heading towards the victory of the American version of rights universalism. As Francis Fukuyama would predict in his End of History and the Last Man the entire world was moving towards becoming liberal democracies like the US. It was not to be, and the reasons why both inform the present and give us a glimpse into the future of human rights.

The reason why the secular language of human rights has a good claim to be a universal moral language is not because religion is not a good way to pursue moral aims or because religion is focused on some transcendent “never-never-land” whereas secular human rights has its feet squarely placed in the scientifically supported real world. Rather, the secular character of human rights allows it to be universal because being devoid of religious claims it can be used as a bridge across groups adhering to different faiths, and even can include what is new under the sun- persons adhering to no religious tradition at all.

The problem human rights has had up until this moment is just how deeply it has been tied up with US imperial interests, which leads almost inevitably to those at the receiving end of US power crushing the manifestations of the human rights project in their societies- which is what China has just done in Hong Kong, and how Putin’s Russia both understands and has responded to events in Ukraine, both seeing rights-based protests there as Western attempts to weaken their countries.

Like the nationalism that grew out of French rights imperialism, Islamic jihadism became such a potent force in the Middle East partially as a response to Western domination, and we in the West have long been in the strange position that the groups within Middle Eastern societies that share many of our values, as in Egypt today, are also the forces of oppression within those societies.

What those who continue to wish that human rights can provide a global moral language can hope for is that, as the proverb goes, “there is no wind so ill that it does not blow some good”. The good here would be that, in exposing so clearly US inadequacy in living up to the standards of human rights, the global movement for these rights will at last become detached from American foreign policy. A human rights movement that was no longer seen as a clever strategy of the US and other Western powers might eventually be given more room to breathe in non-western countries and cultures, and over the very long haul bring the standards of justice in the entire world closer to the ideals of the now half-century-old UN Declaration of Human Rights.

The way this can be accomplished might also address the very valid Marxist critique of the human rights movement- that it deflects the idealistic youth on whom the shape of future society always depends away from the structural problems within their own societies, their efforts instead concentrated on the very real cruelties of dictators and fanatics on the other side of the world, and on the fate of countries where their efforts would have little effect unless it served the interests of their Western governments.

What 2014 reminded us is what Armstrong pointed out, that every major world religion has long known that every society is in some sense underwritten by structural violence and oppression. The efforts of human rights activists thus need to be ever vigilant in addressing the failure to live up to their ideals at home even as they forge bonds of solidarity and hold out a hand of support to those a world away, who, though they might not speak a common language regarding these rights, and often express this language in religious terms, are nevertheless on the same quest towards building a more just world.

 

Think Time is Speeding Up? Here’s How to Slow It!


One of the weirder things about human beings’ perception of time is that our subjective clocks are so off. A day spent in our dreary cubicles can seem to crawl like an Amazonian sloth, while our weekends pass by as fast as a chameleon’s tongue. Most dreadful of all, once we pass into middle age, time seems to transform itself from a lumbering steam train heaving us through clearly delineated seasons and years into a Japanese bullet train unstoppably hurtling us towards death, with decades passing us by in a blur.

Wondering about time is a habit of the middle-aged, as sure a sign of having passed the clock-blind golden age of youth as the proverbial convertible or Harley. If my soon-to-be 93-year-old grandmother is any indication, the old, like the young, aren’t much taken aback by the speed of time’s passage. Instead, time seems to take on the viscosity of New England molasses, the days gently flowing down life’s drain.

Up until now, I didn’t think there was any empirical evidence to back up such colloquial observations, just the anecdotes passed around the holiday dinner table like turkey stuffing and cranberry sauce: “Can you believe it’s almost Christmas again?”, “Where did the year go?” Lucky for me, I now know what happened to time, or rather how I’ve been confuddled all this time into thinking something had happened to it. I know because I’ve read the psychologist and BBC science broadcaster Claudia Hammond’s excellent little book on the psychology of time: Time Warped: Unlocking the Mysteries of Time Perception.

If you’ve ever asked yourself why time seems to crawl when you’re watching the clock and want it to go faster, or why time appears to speed up in the face of an event you’re dreading like a speech, this is the book for you. But Hammond’s Time Warped goes much deeper than that and exposes us to the reality of what it would be like if some of our common dreams about controlling time actually came true- if we could indeed have “perfect memory” or, as everyone keeps reminding us to, “live in the present”. In addition to all that, the ambiguous nature of our relationship with time she reveals raises interesting questions for those hoping we wrest from nature a great deal more of it.

Hammond doesn’t really discuss the physics of time, or more precisely, the fact that much of modern physics views time as an illusion akin to past imaginary entities like the ether or phlogiston. The fact that something so essential to our human self-understanding is considered by physics, the bedrock of the sciences, to be a brain-induced mirage has led to a rebellion by at least one prominent physicist, Lee Smolin, but he’s almost a lone voice in the quest to restore time. Nor is Hammond all that interested in the philosophy of time, its history, or what time actually is. You won’t find here any detailed discussion of how to define time; it’s more like Supreme Court Justice Potter Stewart’s definition of pornography: “you know it when you see it.” Hammond is, though, on firm scientific ground discussing her main subject, the human perception of time, which, whatever its underlying reality or unreality, we find nearly impossible to live without.

Evolution might have kept things simple and given the human brain just one clock, a steady Big Ben of a thing to accurately mark the time. Instead, Hammond draws our attention to the fact that we seem to have multiple clocks within us, all running at once.

We seem to be extremely good at gauging the passage of seconds or minutes without counting. We also have a twenty-four-hour clock that runs at the same length but independent of the alternating light and darkness of our spinning earth, as Hammond shows was proven by Michel Siffre who, in the name of science and youthful stupidity (he was 23), braved two months in a dark cave meticulously recording his bodily rhythms. What Siffre proved is that, sun or no sun, our bodies follow twenty-four-hour cycles. The turning of the earth has bored its traces deep into us, which we fight against using the miracle of electric lights, and, if the popularity of sleeping pills is any indication, so often lose.

For some of us, there seems to be an inbuilt ability and need to see longer stretches of time spatially, in the form of ovals, circles, or zig-zags, rather than the linear timelines one sees in history books. One day, not long before I read Hammond’s book, I found myself scribbling while thinking about how far into the future my great-grandchildren would live, should my now small daughters and their children ever have children of their own.

For whatever reason, I didn’t draw out the decades as blocks of a line but like a set of steps. I thought nothing of it until I read Time Warped and saw that this was a common way for people to picture decades, though many do so in three dimensions, rather than my paltry two. Some people also associate days with color- a kind of synesthesia that isn’t just playful imagination, but is often stable across an individual’s life.

There is no real way to talk about how human beings experience time without discussing memory. What I found mind-blowing about Time Warped was just how many of what we consider the flaws of our memory end up being ambiguities we would be better off not having resolved.

Take the fact that our memories are so fallible and incomplete. One would think that things would be so much better if our brains could record everything and have it for playback on a sort of neuronal Blu-ray. For certain situations like criminal trials this would solve a whole host of problems, but elsewhere we should watch what we wish for. As Hammond shows, there are people who can remember every piece of minutia, down to the socks they wore on a particular day decades earlier, but a moment’s reflection leads to the conclusion that such natural-born mnemonic prodigies fail to dominate creative fields, the sciences, or anything else, and such was the case long before we had Google to remember things for us.

There are people who believe that the path to ensuring they are not unraveled by the flow of time is to record and document everything about themselves and all of their experiences. Digital technology has doubtless made such a quest easier, but Hammond leads us to wonder whether or not the effort to record our every action and keystroke is quixotic. Who will actually take the time to look at all this stuff? How many times, she asks, have any of us sat down to watch our wedding video?

People obsessed with recording every detail of their lives are very likely motivated by the idea that it is their memories that make them who they are. Part of our deep fear of developing Alzheimer’s probably originates in this idea that the loss of our memories would constitute the loss of our self. Yet somehow the loss of memories (and the damage of Alzheimer’s runs much deeper than the loss of memories) does not seem to rob those who experience such losses of what others recognize as their long-standing personality.

Strangely, our not-too-reliable memories, when combined with our ability to mentally time-travel into the past, Hammond believes, give rise to our ability to imagine futures that do not yet exist. They allow us to mix and match different scenes from our memory to come up with whole new ones we anticipate will happen, or even ones that could never happen.

The idea that our imagination might owe its existence to our faulty memory put me in mind of the recent findings of Laurie Santos of the Comparative Cognition Laboratory at Yale. Santos has shown that human beings can be less not more rational than animals when it comes to certain tasks due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren’t thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure done by another human being even when able to independently work the puzzle out. Santos speculates that such irrationality may give us the plasticity necessary to innovate even if much of this innovation tends not to work. It seems it is our flaws rather than our superiority that have so favored us above our animal kin.

What, though, of the big problem, the one we all face- the frightening speed at which we are running through our short lives? There is, it seems, some wisdom in the adage that the best way to approach time is to focus on the present, even if you’re like me and watching another TED talk on the subject by Pico Iyer is enough to make you hurl. If the future is a realm of anxiety and the past a realm of regret, as long as one is not in pain, the present moment is a sort of refuge. Hammond believes that thinking about the future, even if we so often get it wrong by, for instance, thinking that our future self will have more time, money, or will-power, is the default mode of the brain.

Any meditative tradition worth its salt tries to free us from this future-obsessed mode and connect us more fully with the present moment of our existence, our breath, its rhythms, the people we care about. There are ways we can achieve this focus on the present without meditation, but they often involve contemplation of our own impending death, which is why soldiers amid the suffering of war and the terminally ill or the very old, like my Nanna, can often unhitch themselves from the train pulling our thinking off to the future.

Focusing on the present is one way to not only slow the pace of time, but to infuse the short time we have here with the meaning it deserves. Knowing that my small children will outgrow my silliness is the best way I have found to appreciate their laughter now.

Present focus does not, however, solve the central paradox of time for the middle-aged, namely, why it seems to move so much faster as we get older, for it is doubtful we were all that more capable of savoring the moment as teenagers than as adults. Our commonsense explanation of time speeding up as we age typically has to do with proportionality, as in “a year for a five year old is 1/5 of their life, but for a forty year old it is merely 1/40.” Hammond shows this proportionality theory to be wrong on its face, for, if it were true, the days for a middle-aged person would be quite literally buzzing by in comparison to the days of their younger selves.
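To see just how strong a claim the proportional model makes, here is a quick back-of-the-envelope sketch of my own (not anything from Hammond’s book): if subjective duration really scaled as one over age, the same six-hour stretch should feel several times shorter to a forty-year-old than it did to a ten-year-old.

```python
# Toy version of the "proportionality theory" of subjective time: assume the
# felt length of an interval scales as 1/age, with age ten as the baseline.
# This is my own illustration of the claim, not a model Hammond endorses.

def felt_hours(clock_hours: float, age_years: float, baseline_age: float = 10.0) -> float:
    """Subjective length of an interval under strict 1/age scaling."""
    return clock_hours * (baseline_age / age_years)

for age in (10, 20, 40, 80):
    print(f"Six clock hours at age {age} would feel like "
          f"{felt_hours(6, age):.1f} 'hours'")

# Output: 6.0, 3.0, 1.5, 0.75 -- a four-fold speed-up between ten and forty
# within a single day, far larger than the modest differences in duration
# judgments that experiments actually find.
```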

Only a moment’s reflection should show us that the proportionality theory for time’s seeming quickening as we age can’t be true. Think back to your school days waiting impatiently for the 3:00 pm bell to ring: was it really much longer than the time you spend now stuck to your chair in some meaningless business meeting? There are some differences in the gauging of how much time has passed between the young and the old, yet these are nowhere near large enough to explain the differences in the subjective experience of how fast time is passing between those two groups. So if proportionality theory doesn’t explain the speeding up of time for the middle-aged, what does?

When thinking about duration, the thing we need to keep in mind is that, as the work of Daniel Kahneman has shown, we have not one but two “selves”: an experiencing self and a remembering self. Having two selves does a number on our ability to make decisions with our future in mind. The experiencing self wants us to eat the cookie now, because it’s the remembering self that will regret it later. It also skews our sense of the past.

Our sense of the duration of time is experienced differently by these two separate selves. Waiting in a long line feels like forever when you’re there, but unless something particularly interesting happened during your wait, the remembered experience feels like it happened in the blink of an eye. Yet a wonderful or frightening experience, like a first kiss or a car accident, though it seems to fly by while we’re in it, usually cuts its grooves deep enough into our memory that when we reflect upon it, it seems to have taken a very long time to unfold.

Hammond’s explanation for why youth seems stretched out in time compared to middle age rests on what she calls the “reminiscence bump” and the “holiday paradox”. Adolescence and young adulthood are filled with so many firsts that they leave a deep impression on our memory, and this “thickness” of memory leads our remembering self to conclude that time must have been going more slowly back in the heady days of our youth- the reminiscence bump. If you want to make your middle-aged days seem longer, then you need to fill them up with exciting and new things, which is the reason, Hammond speculates, that holidays full of new experiences seem fast when we’re in them, but stretched out on reflection- the holiday paradox. She wonders, however, whether the better option is just not to worry so much about time’s speed and rest when we need it, rather than constantly chase after new memories.

Given the interest of the audience here in extending the human lifespan, I wonder what the implications of such discoveries about time might be for that project. A comedy could certainly be written in which we have doubled the length of human life and end up also doubling all those things we now find banal about time. Would human beings who lived well beyond their hundreds be subject to meetings that stretched out for days and weeks? Would traffic jams in which you spent a week in your car be normal?

Perhaps we might even want to focus on our ability to manipulate our sense of time’s duration as an easier path towards a sort of longevity. Imagine a world where love affairs could stretch out for centuries and pain and boredom were reduced to a blink, or a future that has “time retreats” (like today’s religious retreats) where one goes away for a week that has been neurologically altered to feel as if it were decades or longer. We might use the same sorts of time manipulation to punish people for heinous crimes, so that a 600-year sentence actually means something. One might object that such induced experiences of slow time aren’t real, but then again neither are most versions of digital immortality, or even, as Hammond showed us, our subjective experience of time itself.

All of this talk of manipulating our sense of time as a road to longevity is just playful speculation on my part. What should be clear is that any move towards changing the human body so that it lives much longer than it does now is probably also going to have to grapple with and transform our psychological notions of time and the society we have built around our strange capacity to warp it.

 

Is AI a Myth?


A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they see as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI back in the early 1980s, John Searle. (Relation to the author lost in the mists of time.) It was Searle who invented the well-known thought experiment of the “Chinese Room”, which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi’s The Fourth Revolution and Nick Bostrom’s Superintelligence.

Also in October, Michael Jordan, the guy who brought us neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as the hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now-enjoined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn’t about might give us a better handle on what is actually going on in AI right now, in the next few decades, and in reference to a farther-off future we have to at least start thinking about, even if there’s not much to actually do regarding that last question for a few decades at the least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human-level intelligence in machines is theoretically impossible. These aren’t people arguing that there’s some spiritual something that humans possess that we’ll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siri(s) and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian’s fear that he won’t be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human level machine intelligence between 2075 and 2090. If we just average those we’re out to 2083 by the time human equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is 69 years in the future we’re talking about, a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, I think, echoing Bostrom, we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out and will become a huge part of a larger argument, one that will include many issues in addition to AI, over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn’t about this larger debate regarding our survival and future; it’s about what’s happening with artificial intelligence right before our eyes. They want to challenge what they see as currently common false assumptions regarding AI. It’s hard not to be bedazzled by all the amazing manifestations around us, many of which have only appeared over the last decade. Yet as the philosopher Alva Noë recently pointed out, we’re still not really seeing what we’d properly call “intelligence”:

Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

This is an old criticism, the same as the one made by John Searle, both in the 1980’s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated programming into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as “neural nets” are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It’s not a good idea to be trapped in anything- including our metaphors. AI researchers might fail to develop other good metaphors that help them understand what they are doing- “flows and pipelines” once provided good metaphors for computers. The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th-century ideas about “electronic brains”, and the public is at risk of anthropomorphizing its machines. Such anthropomorphizing might have ugly consequences- a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.

Lanier’s critique of AI is actually deeper than Jordan’s, because he sees both technological and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we’ll find ourselves in an “AI winter” similar to the one that occurred in the 1980s. Hype cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you’re likely to lose the interest of the smartest minds and start to attract kooks, which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far more scary. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies- Google, Facebook, Amazon- are essentially just algorithms. Some of the same people who have an economic interest in us seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn’t this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of Oz scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn’t so much another form of intelligence helping you to make better informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn’t, as it is often presented, coming from silicon intelligence at all. Rather, it’s leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user’s view.
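To see the point in miniature, here is a toy sketch of my own (not any actual company’s algorithm) of how a “smart” recommendation can be nothing more than an aggregation of choices already made by other, hidden, human beings:

```python
from collections import defaultdict

# A toy sketch (my own illustration, not any real product's algorithm) of
# Lanier's point: a "smart" recommendation is just an aggregation of choices
# already made by other, hidden, human beings.

# Hypothetical listening histories: user -> set of tracks they chose to play.
histories = {
    "ana":   {"track_a", "track_b", "track_c"},
    "ben":   {"track_a", "track_b", "track_d"},
    "carla": {"track_b", "track_e"},
}

def recommend(user, histories, top_n=2):
    """Suggest tracks the user hasn't heard, scored purely by how many
    humans with overlapping taste already chose them."""
    mine = histories[user]
    scores = defaultdict(int)
    for other, theirs in histories.items():
        if other == user or not (mine & theirs):
            continue  # skip the user themselves and people with no shared tracks
        for track in theirs - mine:
            scores[track] += 1  # every point tallied here is a past human decision
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("ana", histories))  # e.g. ['track_d', 'track_e']
```

Every point the system tallies is a past human decision; strip those decisions away and the “intelligence” vanishes with them.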

Lanier doesn’t think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. The result gets framed as technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is “eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad.” It is a view based on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

As long as we avoid falling into another AI winter this century (a prospect that seems as likely as not), then over the course of the next half-century we will see the gradual improvement of AI to the point where perhaps the majority of human occupations can be performed by machines. We should not confuse ourselves as to what this means: to call it the “next stage” of human or “cosmic evolution” would be nothing but an echo of lost religious myths. Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don’t fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts, should they ever emerge from anything other than our nightmares and our dreams.

 

Summa Technologiae, or why the trouble with science is religion

Soviet Space Art 2

Before I read Lee Billings’ piece in the fall issue of Nautilus, I had no idea that in addition to being one of the world’s greatest science-fiction writers, Stanislaw Lem had written what became a forgotten book, a tome that was intended to be the overarching text of the technological age: his 1966 Summa Technologiae.

I won’t go into detail on Billings’ thought-provoking piece; suffice it to say that he leads us to question whether we have lost something of Lem’s depth with our current batch of Silicon Valley singularitarians, who have largely repackaged ideas first fleshed out by the Polish novelist. Billings also leads us to wonder whether our focus on the either fantastic or terrifying aspects of the future is causing us to forget the human suffering that is here, right now, at our feet. I encourage you to check the piece out for yourself. In addition to Billings there’s also an excellent review of the Summa Technologiae by Giulio Prisco, here.

Rather than look at either Billings’ or Prisco’s piece, I will try to lay out some of the ideas found in Lem’s 1966 Summa Technologiae, a book at once dense almost to the point of incomprehensibility, yet full of insights we should pay attention to as the world Lem imagines unfolds before our eyes, or at least seems to be doing so for some of us.

The first thing that struck me when reading the Summa Technologiae was that it isn’t our version of Aquinas’ Summa Theologica, from which Lem took his tract’s name. In the 13th century Summa Theologica you find the voice of a speaker supremely confident in both the rationality of the world and his own understanding of it. Aquinas, of course, didn’t really possess such a comprehensive understanding, but it is perhaps odd that the more we have learned the more confused we have become, and Lem’s Summa Technologiae reflects some of this modern confusion.

Unlike Aquinas, Lem is in a sense blind to our destination, and what he is trying to do is to probe into the blackness of the future to sense the contours of the ultimate fate of our scientific and technological civilization. Lem seeks to identify the roadblocks we will likely encounter if we are to continue our technological advancement- roadblocks that are important to identify because we have yet to find any evidence, in the form of extraterrestrial civilizations, that they can actually be overcome.

The fundamental aspect of technological advancement is that it has become both its own reward and a trap. We have become absolutely dependent on scientific and technological progress as long as population growth continues- for if technological advancement stumbles while population continues to increase, living standards will precipitously fall.

The problem Lem sees is that science is growing faster than the population, and in order to keep up with it we would eventually have to turn all human beings into scientists, and then some. Science advances by exploring the whole of the possibility space – we can’t predict which of its explorations will produce something useful in advance, or which avenues will prove fruitful in terms of our understanding.  It’s as if the territory has become so large we at some point will no longer have enough people to explore all of it, and thus will have to narrow the number of regions we look at. This narrowing puts us at risk of not finding the keys to El Dorado, so to speak, because we will not have asked and answered the right questions. We are approaching what Lem calls “the information peak.”
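To make the arithmetic behind this worry concrete- a toy model of my own, not a formalism Lem spells out- suppose the scientific “territory” to be explored and the population both grow exponentially, with science growing faster:

```latex
% Toy model (my own illustration, not Lem's own formalism).
% Scientific territory S(t) and population P(t), with science growing faster:
\[
S(t) = S_0 e^{r_s t}, \qquad P(t) = P_0 e^{r_p t}, \qquad r_s > r_p .
\]
% If exploring each unit of territory takes a roughly fixed amount of human
% attention k, the fraction of humanity that must do science is
\[
f(t) = \frac{k\,S(t)}{P(t)} = \frac{k S_0}{P_0}\, e^{(r_s - r_p)\,t},
\]
% which grows without bound and passes f = 1 ("all human beings into
% scientists, and then some") at the finite time
\[
t^{*} = \frac{1}{r_s - r_p}\,\ln\!\left(\frac{P_0}{k S_0}\right).
\]
```

Whatever the actual rates, the point of the toy model is Lem’s: an exponential gap, however small, eventually forces either a narrowing of the questions we ask or some amplification of the intelligence asking them.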

The absolutist nature of the scientific endeavor itself, our need to explore all avenues or risk losing something essential, will for Lem inevitably lead to our attempt to create artificial intelligence. We will pursue AI to act as what he calls an “intelligence amplifier”, though Lem is thinking of AI in a whole new way, where computational processes mimic those done in nature- like the physics “calculations” of a tennis genius like Roger Federer, or my 4-year-old learning how to throw a football.

Lem, through the power of his imagination alone, seemed to anticipate both some of the problems we would encounter when trying to build AI and the ways we would likely try to escape them. For all their seeming intelligence our machines lack the behavioral complexity of even lower animals, let alone human intelligence, and one of the main roads away from these limitations is getting silicon intelligence to be more like that of carbon-based creatures – not so much “brain-like” as “biological-like”.

Way back in the 1960’s, Lem thought we would need to learn from biological systems if we wanted to really get to something like artificial intelligence- think, for example, of how much more bang you get for your buck when you contrast DNA and a computer program. A computer program gets you some interesting or useful behavior or process done by machine; DNA, well… it gets you programmers.

The somewhat uncomfortable fact about designing machine intelligence around biological-like processes is that it might end up working a lot like the human brain does- through a process largely invisible to its possessor. How did I catch that ball? Damned if I know- or damned if I know, if one is asking what internal process led me to catch the ball.

Just going about our way in the world we make “calculations” that would make the world’s fastest supercomputers green with envy, were they actually sophisticated enough to experience envy. We do all the incredible things we do without having any solid idea, either scientific or internal, about how it is we are doing them. Lem thinks “real” AI will be like that. It will be able to outthink us because it will be a species of natural intelligence like our own, and just as with our own thinking, we will be hard pressed to explain how exactly it arrived at some conclusion or decision. Truly intelligent AI will end up being a “black box”.

Our increasingly complex societies might need such AI’s to serve the role of what Lem calls “Homostats”- machines that run the complex interactions of society. The dilemma appears the minute we surrender the responsibility to make our decisions to a homostat. For then the possibility opens that we will not be able to know how a homostat arrived at its decision, or what a homostat is actually trying to accomplish when it informs us that we should do something, or even, what goal lies behind its actions.

It’s quite a fascinating view: that science might be epistemologically insatiable in this way; that at some point it will grow beyond the limits of human intelligence, whether our sheer numbers or our mental capacity; that the only way out of this which still includes technological progress will be to develop “naturalistic” AI; and that very soon our societies will be so complicated that they will require the use of such AIs to manage them.

I am not sure if the view is right, but to my eyes at least it’s got much more meat on its bones than current singularitarian arguments about “exponential trends”, which, unlike Lem, take little account of the possibility that the scientific wave we’ve been riding for five or so centuries will run into a wall we find impossible to crest.

Yet perhaps the most intriguing ideas in Lem’s Summa Technologiae are those imaginative leaps that he throws at the reader almost as an aside, with little reference to his overall theory of technological development. Take his metaphor of the mathematician as a sort of crazy “tailor”.

He makes clothes but does not know for whom. He does not think about it. Some of his clothes are spherical without any opening for legs or feet…

The tailor is only concerned with one thing: he wants them to be consistent.

He takes his clothes to a massive warehouse. If we could enter it, we would discover clothes that could fit an octopus, others fit trees, butterflies, or people.

The great majority of his clothes would not find any application. (171-172)

This is Lem’s clever way of explaining the so-called “unreasonable effectiveness of mathematics”, a view that is the opposite of that of current-day Platonists such as Max Tegmark, who hold all mathematical structures to be real even if we are unable to find actual examples of them in our universe.

Lem thinks math is more like a ladder. It allows you to climb high enough to see a house, or even a mountain, but shouldn’t be confused with the house or the mountain itself. Indeed, most of the time, as his tailor example is meant to show, the ladder mathematics builds isn’t good for climbing at all. This is why Lem thinks we will need to learn “nature’s language” rather than go on using our invented language of mathematics if we want to continue to progress.

For all its originality and freshness, the Summa Technologiae is not without its problems. Once we start imagining that we can play the role of creator it seems we are unable to escape the same moral failings the religious would have once held against God. Here is Lem imagining a far future when we could create a simulated universe inhabited by virtual people who think they are real.

Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything” considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess. (291-292)

If Lem is ultimately proven correct, and we arrive at this destination where we create virtual universes with sentient inhabitants whom we keep blind to their true nature, then science will have ended where it began- with the demon imagined by Descartes.

The scientific revolution commenced when it was realized that we could trust neither our own senses nor our traditions to tell us the truth about the world – the most famous example of which was the discovery that the earth, contrary to all perception and history, traveled around the sun and not the other way round. The first generation of scientists who emerged in a world in which God had “hidden his face” couldn’t help but understand this new view of nature as the creator’s elaborate puzzle that we would have to painfully reconstruct, piece by piece, hidden as it was beneath the illusion of our own “fallen” senses and the false post-edenic world we had built around them.

Yet a curious new fear arises with this: What if the creator had designed the world so that it could never be understood? Descartes, at the very beginning of science, reconceptualized the creator as an omnipotent demon.

I will suppose, then, not that Deity, who is sovereignly good and the fountain of truth, but that some malignant demon, who is at once exceedingly potent and deceitful, has employed all his artifice to deceive me; I will suppose that the sky, the air, the earth, colours, figures, sounds, and all external things, are nothing better than the illusions of dreams, by means of which this being has laid snares for my credulity.

Descartes’ escape from this dreaded absence of intelligibility was his famous “cogito ergo sum”, the certainty a reasoning being has in its own existence. The entire world could be an illusion, but the fact of one’s own consciousness was something that not even an all-powerful demon would be able to take away.

What Lem’s resurrection of the demon imagined by Descartes tells us is just how deeply religious thinking still lies at the heart of science. The idea has become secularized, and part of our mythology of science-fiction, but it’s still there; indeed, it’s the only scientifically fashionable form of creationism around. As proof, not even the most secular among us bat an eye at experiments to test whether the universe is an “infinite hologram”. And if such experiments bear fruit they will point either to a designer that allowed us to know our reality or to one that didn’t care to “bar the exits”, but the crazy thing, if one takes Lem and Descartes seriously, is that their creator/demon is ultimately as ineffable and irrefutable as the old ideas of God from which it descended. For any failure to prove the hypothesis that we are living in a “simulation” can be brushed aside on the grounds that whatever has brought about this simulation doesn’t really want us to know. It’s only a short step from there to unraveling the whole concept of truth at the heart of science. Like any garden-variety creationists we end up seeing the proofs of science as part of God’s (or whatever we’re now calling God) infinitely clever ruse.

The idea that there might be an unseeable creator behind it all is just one of the religious myths buried deeply in science, a myth that traces its origins less to the day-to-day mundane experiments and theory building of actual scientists than to a certain type of scientific philosophy or science-fiction that has constructed a cosmology around what science is for and what science means. It is the mythology the singularitarians and others who followed Lem remain trapped in, often to the detriment of both technology and science. What is a shame is that these are myths that Lem, even with his expansive powers of imagination, did not dream widely enough to see beyond.

Mary Shelley’s other horror story: Lessons for Super-pandemics

The Last Man

Back in the early 19th century a novel was written that tells the story of humanity’s downfall in the 21st century. Our undoing is the consequence of a disease that originates in the developing world and radiates outward, eventually spreading into North America, East Asia, and ultimately Europe. The disease proves unstoppable, causing the collapse of civilization, our greatest cities becoming grave sites of ruin. For all the reader is left to know, not one human being survives the pandemic.

We best know the woman who wrote The Last Man in 1825 as the author of Frankenstein, but it seems Mary Shelley had more than one dark tale up her sleeve. Yet, though the destruction wrought by disease in The Last Man is pessimistic in the extreme, we might learn some lessons from the novel that prove helpful for understanding not only the very deadly, if less than absolutely ruinous, pandemic of the moment- Ebola- but even more the dangers from super-pandemics, which are more likely to emerge from within humanity than from a still quite dangerous nature herself.

The Last Man tells the story of the son of a nobleman who had lost his fortune to gambling, Lionel Verney, who becomes the sole remaining man on earth as humanity is destroyed by a plague in the 21st century. Do not read the novel hoping to get a glimpse of Shelley’s view of what our 21st century world would be like, for it looks almost exactly like the early 19th century, with people still getting around on horseback and little in the way of future technology.

My guess is that Shelley’s story is set in the “far future” in order to avoid any political heat for a novel in which England has become a republic. Surely, if she meant it to take place in a plausible 21st century, and had somehow missed the implications of the industrial revolution, there would at least have been some imagined political differences between that world and her own. The same Greco-Turkish conflict that raged in the 1820’s rages on in Shelley’s imagined 21st century with only changes in the borders of the war. Indeed, the novel is more of a reflection and critique on the Romantic movement, with Lord Byron making his appearance in the form of the character Lord Raymond, and Verney himself a not all that concealed version of Mary Shelley’s deceased husband Percy.

In The Last Man Shelley sets out to undermine all the myths of the Romantic movement: myths of the innocence of nature, the redemptive power of revolutionary politics, and the transformative power of art. While of historical interest, such debates offer us little in terms of the meaning of her story for us today. That meaning, I think, can be found in the state of epidemiology, which on the very eve of Shelley’s story was about to undergo a revolution, a transformation that would occur in parallel with humanity’s assertion of general sovereignty over nature, the consequence of the scientific and industrial revolutions.

Reading The Last Man one needs to be aware that Shelley had no idea of how disease actually works. In the 1820’s the leading theory of what caused diseases was the miasma theory, which held that they were caused by “bad air”. When Shelley wrote her story miasma theory was only beginning to be challenged by what we now call the “germ theory” of disease, with the work of scientists such as Agostino Bassi. This despite the fact that microscopic organisms had been observed since the 1670s, and the notion that disease could be spread by tiny “seeds” of contagion had been proposed as early as 1546 by the Italian polymath Girolamo Fracastoro. Shelley’s characters thus do things that seem crazy in the light of germ theory; most especially, they make no effort to isolate the infected.

Well, some do. In The Last Man it is only the bad characters who try to run away or isolate themselves from the sick. The supremely tragic element in the novel is how what is most important to us- our small intimate circles, which we cling to despite everything- can be done away with by nature’s cruel shrug. Shelley’s tale is one of extreme pessimism not because it portrays the unraveling of human civilization, turning our monuments into ruins and, eventually, dust, but because of how it portrays a world where everyone we love most dearly leaves us almost overnight. The novel gives one an intimate portrait of what it’s like to watch one’s beloved family and friends vanish, a reality Mary Shelley was all too well acquainted with, having lost her husband and three children.

Here we can find the lesson to take for the Ebola pandemic, for the deaths we are witnessing today in West Africa are in a very real sense a measure of people’s humanity- as if nature, perversely, had set out to target those who act in the way that is most humane. For, absent modern medical infrastructure, the only ones left to care for the infected are the families of the sick themselves.

This is how New York Times journalist Helene Cooper explained it to interviewer Terry Gross of Fresh Air:

COOPER: That’s the hardest thing, I think, about the disease is it does make pariahs out of the people who are sick. And it – you know, we’re telling the family people – the family members of people with Ebola to not try to help them or to make sure that they put on gloves. And, you know, that’s, you know, easier – I think that can be easier said than done. A lot of people are wearing gloves, but for a lot of people it’s really hard.

One of the things – two days after I got to Liberia, Thomas Eric Duncan sort of happened in the U.S. And, you know, I was getting all these questions from people in the U.S. about why did he, you know, help his neighbor? Why did he pick up that woman who was sick? Which is believed to be how we got it. And I set out trying to do this story about the whole touching thing because the whole culture of touching had gone away in Liberia, which was a difficult thing to understand. I knew the only way I could do that story was to talk to Ebola survivors because then you can ask people who actually contracted the disease because they touched somebody else, you know, why did you touch somebody? It’s not like you didn’t know that, you know, this was an Ebola – that, you know, you were putting yourself in danger. So why did you do it?

And in all the cases, the people I talked to there were, like, family members. There was this one woman, Patience, who contracted it from her daughter who – 2-year-old daughter, Rebecca – who had gotten it from a nanny. And Rebecca was crying, and she was vomiting and, you know, feverish, and her mom picked her up. When you’re seeing a familiar face that you love so much, it’s really, really hard to – I think it’s a physical – you have to physically – to physically restrain yourself from touching them is not as easy as we might think.

The thing we need to do to ensure naturally occurring pandemics such as Ebola cause the minimum of human suffering is to provide support for developing countries lacking the health infrastructure to respond to or avoid being the vectors for infectious diseases. We especially need to address the low number of doctors per capita found in some countries through, for example, providing doctor training programs. In a globalized world being our brother’s keeper is no longer just a matter of moral necessity, but helps preserve our own health as well.

A super-pandemic of the kind imagined by Mary Shelley, though, is an evolutionary near impossibility. It is highly unlikely that nature by itself would come up with a disease so devastating that we would not be able to stop it before it kills us in the billions. Having co-evolved with microscopic life, some human being’s immune system, somewhere, anticipates even nature’s most devious tricks. We are also in the Anthropocene now, able to understand, anticipate, and respond to the deadliest games nature plays. Sadly, however, the 21st century could still experience, as Shelley imagined, the world’s first super-pandemic- only the source of such a disaster wouldn’t be nature; it would be us.

One might think I am referencing bio-terrorism, yet the disturbing thing is that the return address for any super-pandemic is just as likely to be stupid and irresponsible scientists as deliberate bioterrorism. Such is the indication from what happened in 2011 when the Dutch scientist Ron Fouchier deliberately turned the H5N1 bird flu into a form that could potentially spread human-to-human. As reported by Laurie Garrett:

Fouchier told the scientists in Malta that his Dutch group, funded by the U.S. National Institutes of Health, had “mutated the hell out of H5N1,” turning the bird flu into something that could infect ferrets (laboratory stand-ins for human beings). And then, Fouchier continued, he had done “something really, really stupid,” swabbing the noses of the infected ferrets and using the gathered viruses to infect another round of animals, repeating the process until he had a form of H5N1 that could spread through the air from one mammal to another.

Genetic research has become so cheap and easy that what once required national labs and huge budgets- doing something nature would have great difficulty achieving through evolutionary means- can now be done by run-of-the-mill scientists in simple laboratories, or even by high school students. The danger here is that scientists will create something so novel that evolution has not prepared any of us for it, and that through stupidity and lack of oversight it will escape from the lab and spread through human populations.

News of the crazy Dutch experiments with H5N1 was followed by revelations of mind-bogglingly lax safety procedures around pandemic diseases at federal laboratories, where smallpox virus had been forgotten in a storage area and pathogens were passed around in Ziploc bags.

The U.S. government, at least, has woken up to the danger, imposing a moratorium on such research until its true risks and rewards can be understood and better safety standards established. This has already negatively impacted potentially beneficial research, and will necessarily continue to do so. Yet what else, one might ask, should the government do given the potential risks? What will ultimately be needed is an international treaty to monitor, regulate, and sometimes even ban certain kinds of research on pandemic diseases.

In terms of all the existential risks facing humanity in the 21st century, man-made super-pandemics are the one with the shortest path between reality and nightmare. The risk from runaway super-intelligence remains theoretical, based upon hypothetical technology that, for all we know, may never exist. The danger of runaway global warming is real, but we are unlikely to feel the full impact this century. Meanwhile, the technologies to create a super-pandemic are in large part already here, with the key uncertainty being how we might control such a dangerous potential if, as current trends suggest, the ability to manipulate and design organisms at the genetic level continues to both increase and democratize. Strangely enough, Mary Shelley’s warning in her Frankenstein about the dangers of science used for the wrong purposes has the greatest likelihood of coming true in the form of her Last Man.

 

The Human Age

atlas-and-the-hesperides-1925 John Singer Sargent

There is no writer now, perhaps ever, who is able to convey the wonder and magic of science with poetry comparable to Diane Ackerman. In some ways this makes a great deal of sense given that she is a poet by calling rather than a scientist.  To mix metaphors: our knowledge of the natural world is merely Ackerman’s palette whose colors she uses to paint a picture of nature. It is a vision of the world as magical as that of the greatest worlds of fiction- think Dante’s Divine Comedy, or our most powerful realms of fable.

There is, however, one major difference between Ackerman and her fellow poets and storytellers: the strange world she breathes fire into is the true world, the real nature whose inhabitants and children we happen to be. The picture science has given us, the closest to the truth we have yet come, is in many ways stranger, more wondrous, more beautifully sublime, than anything human beings have been able to imagine; therefore, the perfect subject and home for a poet.

The task Ackerman sets herself in her latest book, The Human Age: The World Shaped By Us, is to reconcile us to the fact that we have now become the authors and artists rather than the mere spectators of nature’s play. We live in an era in which our effect upon the earth has become so profound that some geologists want to declare it the “Anthropocene”, signalling that our presence is now leaving its trace upon the geological record in the way only the mightiest, if slow moving, forces of nature have done heretofore. Yet in light of the speed with which we are changing the world, we are perhaps more like a sudden catastrophe than a languid geological transformation taking millions of years to unfold.

Ackerman’s hope is to find a way to come to terms with the scale of our own impact without at the same time reducing humanity to the role of the mythical Höðr, bringing destruction on account of our blindness and foolishness, not to mention our greed. Ackerman loves humanity as much as she loves nature; or better, she refuses, as some environmentalists are prone to do, to place human beings on one side and the natural world on the other. For her, we are nature as much as anything else that exists. Everything we do is therefore “natural”; the question we should be asking is whether we are doing what is wise.

In The Human Age Ackerman attempts to reframe our current pessimism and self-loathing regarding our treatment of “mother” nature, everything from the sixth great extinction we are causing to our failure to adequately confront climate change, by giving us a window into the many incredible things we are doing right now that will benefit our fragile planet.

She brings our attention to new movements in environmentalism such as “Reconciliation Ecology”, which seeks to bring human settlements into harmony with the much broader needs of the rest of nature. Reconciliation Ecology is almost the opposite of another movement of growing popularity in “neo-environmentalist” circles, namely, “Environmental Modernism”. Whereas Environmental Modernism seeks to sever our relationship with nature in order to save it, Reconciliation Ecology aims to naturalize the human artifice, bringing farming and even wilderness into the city itself. It seeks to heal the fragmented wilderness our civilization has brought about by bringing the needs of wildlife into the design process.

Rather than covering our highways with roadkill, we could provide animals with a safe way to cross the street. This might be of benefit to the deer, the groundhog, the raccoon, the porcupine. We might even construct tunnels for the humble meadow vole. Providing wildlife corridors, large and small, is one of the few ways we can reconcile the living space and migratory needs of “non-urbanized” wildlife to our fragmented and now global sprawl.

Human beings, Ackerman argues, are as negatively impacted by the disappearance of wilderness as any other of nature’s creatures, and perhaps, given our aesthetic sensitivity, even more so. For a long time now we have sought to use our engineering prowess to subdue nature. Why not use it to make the human-made environment more natural? She highlights a growing movement among architects and engineers to take their inspiration not from the legacy of our lifeless machines and buildings, but from nature itself, which so often manages to create works of beautiful efficiency. In this vein we find leaps of the imagination such as the Eastgate Center designed by Mick Pearce, who took his inspiration from the breathing, naturally temperature-controlled structure of the termite mound.

The idea of the Anthropocene is less an acknowledgement of our impact than a recognition of our responsibility. For so long we have warred against nature’s limits, arrows, and indifference that it is somewhat strange to find ourselves in the position of her caretaker and even designer. And make no mistake about it, for an increasing number of the plants and animals with whom we share this planet their fate will be decided by our action or inaction.

Some environmentalists would argue for nature’s “purity” and against our interference, even when this interference is done in the name of her creatures. Ackerman, though not entirely certain, is arguing against such environmental nihilism, paralysis, or puritanism. If it is necessary for us to “fix” nature- so be it; she sees in our science and technology our greatest tools for coming to nature’s aid. Such fixes can entail everything from permitting, rather than attempting to uproot, invasive species we have inadvertently or deliberately introduced when those invasives have positive benefits for an ecosystem, to aiding the relocation of species as biomes shift under the impact of climate change, to reintroducing extinct species we have managed to save and resurrect through their DNA.

We are now entering the era where we are not only able to mimic nature but to redesign it at the genetic level, creating chimeras that nature left to itself would find difficult or impossible to replicate. Ackerman is generally comfortable with our ever more cyborg nature and revels in the science that allows us to literally print body parts and, one day, whole organs. Like the rest of us should be, she is flabbergasted by the ongoing revolutions in biology that are rewriting what it means to be human.

The early 21st century field of epigenetics is giving us a much more nuanced and complex view of the interplay between genes and the environment. It is not only that multiple genes need to be taken into account in explaining conditions and behaviors, but that genes are in a sense tuned by the environment itself. Indeed, much of this turning “on or off” of genes is a form of genetic memory. In a strange echo of Lamarck, the experiences of one’s close ancestors- their feasts and famines- are encoded in the genes of their descendants.

Add to that our recent discoveries regarding the microbiome- the collection of bacteria living within us that are far more numerous than, and in many ways as influential as, our genes- and one gets an idea of how far we have moved from the ideas of what it meant to be human held by scientists even a few years ago, and how much distance has been put between current biology and simplistic versions of environmental or genetic determinism.

Some, such as the philosopher of biology John Dupré, see in our advancing understanding of the role of the microbiome and epigenetics a revolutionary change in human self-understanding. For a generation we chased after a simplistic idea of genetic determinism in which genes were seen as a sort of “blueprint” for the organism. This is now known to be false. Genes are just one part of a permeable interaction among genes, environment, and microbiome that guides individual development and behavior.

We are all of us collectives, constantly swapping biological information. Rather than seeing the microscopic world as a kind of sideshow to the “real” biological story of large organisms such as ourselves, we might be said to be where we have always been: in Stephen Jay Gould’s “Age of Bacteria” as much as in an Anthropocene.

Returning to Ackerman: she is amazed at the recent advancements in artificial intelligence, and like Tyler Cowen, even wonders whether scientific discoveries will soon no longer be the prerogative of humans but of our increasingly intelligent machines. Such is the conclusion one might draw from looking at the work of Hod Lipson of Cornell and his “Eureqa Machine”. Feed the Eureqa Machine observations or data and it is able to come up with equations that describe them all on its own. Ackerman does, however, doubt whether we could ever build a machine that replicated human beings in all their wondrous weirdness.
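To give a rough sense of what such equation discovery amounts to- a toy sketch of my own, not Lipson’s actual method, which evolves candidate formulas with genetic programming- imagine enumerating a tiny space of symbolic models and keeping whichever one best explains the data:

```python
import math

# A toy sketch of equation discovery in the spirit of the Eureqa Machine
# (my own illustration, not Lipson's actual algorithm): enumerate a tiny
# space of symbolic models y = c * g(t) and keep the one that best
# explains the observations.

# Hypothetical observations: position of a falling object over time.
data = [(t / 10.0, 0.5 * 9.8 * (t / 10.0) ** 2) for t in range(1, 20)]

candidates = [
    ("c * t",       lambda t: t),
    ("c * t**2",    lambda t: t ** 2),
    ("c * sqrt(t)", lambda t: math.sqrt(t)),
    ("c * exp(t)",  lambda t: math.exp(t)),
]

def best_constant(g, data):
    """Least-squares value of c for the model y = c * g(t)."""
    num = sum(y * g(t) for t, y in data)
    den = sum(g(t) ** 2 for t, _ in data)
    return num / den

best = None
for name, g in candidates:
    c = best_constant(g, data)
    error = sum((y - c * g(t)) ** 2 for t, y in data)
    if best is None or error < best[0]:
        best = (error, name, c)

error, name, c = best
print(f"best model: {name} with c = {c:.3f} (squared error {error:.2e})")
# On this toy data the search recovers c * t**2 with c ≈ 4.9, i.e. ½·g·t².
```

The real system’s search space and fitting procedure are vastly more sophisticated, but the principle is the same: the machine proposes the form of the law and the data decides.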

Where her doubts regarding technology veer toward warning is on the question of the digitalization of the world. Ackerman is best known for her 1990 A Natural History of the Senses, a work which explored the five forms of human perception. Little wonder, then, that she would worry about the world being flattened on our ubiquitous screens into the visual sense alone. She laments:

What if, through novelty and convenience, digital nature replaces biological nature?

The further we distance ourselves from the spell of the present, explored by all our senses, the harder it will be to understand and protect nature’s precarious balance, let alone the balance of our own human nature. I worry about virtual blinders. Hobble all the senses except the visual, and you produce curiously deprived voyeurs.  (196-197)

While concerned that current technologies may be flattening human experience by leaving us visually mesmerized behind screens at the expense of the body, even as they broaden their scope, allowing us to see into worlds small and large, and at speeds never before possible, Ackerman accepts our cyborg nature. For her we are, to steal a phrase from the philosopher Andy Clark, “natural born cyborgs”, and this is not a new thing. Since our beginning we have been a changeling, able to morph into a furred animal in the cold with our clothes, wield fire like a mythical dragon, or cover ourselves with shells of metal, among much else.

Ackerman is not alone in the view that our cyborg tendencies are an ancient legacy. Shortly before I read The Human Age I finished the anthropologist Timothy Taylor’s short and thought provoking The Artificial Ape: How Technology Changed the Course of Human Evolution. Taylor makes a strong case that anatomically modern humans co-evolved with technology, indeed, that the technology came first and in a way has been the primary driver of our evolution.

The technology Taylor thinks lifted us beyond our hominid cousins wasn’t one of the usual suspects, fire or stone tools, but likely the unsung baby sling. This simple invention allowed humans to overcome a constraint suffered by the other hominids, whose brains, contrary to common opinion, were getting smaller because upright walking was putting a premium on quick rather than delayed development of the young. Slings allowed mothers to carry big-brained babies that took longer to develop, but that because of their long period of youth could learn much more than faster-developing relatives. In effect, slings were a way for human mothers who needed their hands free to externalize the womb and evolve a koala-like pouch.

To return to The Human Age: although Ackerman has, as always, given us a wonderfully written book filled with poetry and insights, it is not without its flaws. Here I will focus only on what I found to be the most important one; namely, that for a book named after our age, there is not enough regarding humans in it. This is what I mean: though the problems brought on by the Anthropocene are profound and serious, the animal most likely to broadly suffer the impact of phenomena such as climate change is us.

The weakness of the idea of the Anthropocene, when judged largely positively, which is what Ackerman is trying to do, is that it universalizes a state of control over nature that is largely limited to advanced countries and the world’s wealthy. The threat of rising sea levels looks quite different from the perspective of Manhattan than from that of Chittagong. A good deal of the environmental gains in advanced countries over the past half century can be credited to globalization, which, amid its many benefits, has also entailed the offloading of pollution, garbage, and waste processing from developed to developing countries. This is the story that the photos of Edward Burtynsky, whom Ackerman profiles, tell: stories such as the lives of the shipbreakers of Bangladesh, whose world resembles something out of a dystopian novel.

Humanity is not sovereign over nature everywhere, and some of us are faced not only with a wildness that has not been subdued, but with a humanity that has itself become like an unpredictable natural force, raining down unfathomable, uncontrollable good and ill. Such is the world now being experienced by the west African societies suffering under the epidemic of Ebola. It is a world we might better understand by looking at a novel written on the eve of our mastery over nature, a novel by another amazing writer who was also a woman.

2040’s America will be like 1840’s Britain, with robots?

Christopher Gibbs Steampunk

Looked at in a certain light, Adrian Hon’s History of the Future in 100 Objects can be seen as giving us a window into a fictionalized version of an intermediate technological stage we may be entering. It is the period when the gains in artificial intelligence are clearly happening, but they have yet to completely replace human intelligence. The question of whether AI ever will actually replace us is not of interest to me here. It certainly won’t happen tomorrow, and technological prediction beyond a certain limited horizon is a fool’s game.

Nevertheless, some features of the kind of hybrid stage we have entered are clearly apparent. Hon built an entire imagined world around them, with “amplified teams” (AI working side by side with groups of humans) as one of the major elements of 21st century work, sports, and much else besides.

The economist Tyler Cowen perhaps did Hon one better, for he not only based his very similar version of the future on things that are happening right now, but provided insight into what we should do as job holders and breadwinners in light of the rise of ubiquitous, if less than human-level, artificial intelligence. One only wishes that his vision had room for more politics, for if Cowen is right, and absent our taking collective responsibility for the type of future we want to live in, 2040’s America might look like the Britain found in Dickens, only we’ll be surrounded by robots.

Cowen may seem a strange duck to take up the techno-optimism mantle, but he did so with gusto in his recent book Average is Over. The book is in essence a sequel to Cowen’s earlier best seller The Great Stagnation, in which he argued that developed economies, including the United States, had entered a period of secular stagnation beginning in the 1970’s. The reason for this stagnation was that advanced economies had essentially picked all the “low hanging fruit” of the industrial revolution.

Arguing that we are in a period of technological stagnation at first seems strange, but the claim lands when I reflect for a moment on facts such as these: we don’t fly all that much faster than my grandparents would have in the 1960’s; the kitchen in my family photos from the Carter days looks surprisingly like the kitchen I have right now- minus the paneling; and, saddest of all from the point of view of someone brought up on Star Trek, Star Wars and Star Blazers, with a comforter sporting Viking 2 and Pioneer, not only have we failed to send human visitors to Mars or beyond, we haven’t even been back to the moon. Hell, we don’t even have any human beings beyond low-earth orbit.

Of course, it would be silly to argue there has been no technological progress since Nixon. Information, communication and computer technology have progressed at an incredible speed, remaking much of the world in their wake, and have now seemingly been joined by revolutions in biotechnology and renewable energy.

And yet, despite how revolutionary these technologies have been, they have not been able to do the heavy lifting of prior forms of industrialization due to the simple fact that they haven’t been as qualitatively transformative as the industrial revolution. If I had a different job I could function just fine without the internet, and my life would be different only at the margins. Set the technological clock by which I live back to the days preceding industrialization, before electricity, and the internal combustion engine, and I’d be living the life of my dawn-to-dusk Amish neighbors- a different life entirely.

Average is Over is a follow-up to Cowen’s earlier book in that in it he argues that technological changes now taking place will have an impact that shakes us out of our stagnation- or at least shows how that stagnation is itself evolving into something quite different, with some able to escape its pull while others fall even further behind.

Like Hon, Cowen thinks intermediate-level AI is what we should be paying attention to, rather than Kurzweil- or Bostrom-like hopes and fears regarding superintelligence. Also like Hon, Cowen thinks the most important aspect of artificial intelligence in the near future is human-AI teams. This is the lesson Cowen takes from, among other things, freestyle chess.

For those who haven’t been paying attention to the world of competitive chess, freestyle chess is what emerged once people could, for a few dollars, buy a chess-playing program for their phone that could beat the best players in the world. One might have thought that would be the death knell for human chess, but something quite different has happened. Now, some of the most popular chess games are freestyle, meaning human-machine vs. human-machine.

The moral Cowen draws from freestyle chess is that the winners of these games- and, he extrapolates, of the economic “games” of the future- are those human beings who are most willing to defer to the decisions of the machine. I find this conclusion more than a little chilling, given we’re talking about real people here rather than knights or pawns, but Cowen seems to think it’s just common sense.

In its simplest form Cowen’s argument boils down to the prediction that an increasing amount of human work in the future will come in the form of these AI-human teams. Some of this, he admits, will amount to no workers at all, with the human part of the “team” reduced to an unpaid customer. I now almost always scan and bag my own goods at the grocery store, just as I can’t remember the last time I actually spoke to a bank teller who wasn’t my mom. Cowen also admits that the rise of AI might mean the world actually gets “dumber”, our interactions with our environment simplified to foster smooth integration with machines and compressed to meet their limits.

In his vision intelligent machines will revolutionize everything from medicine to education to business management and negotiation to love. The human beings who will best thrive in this new environment will be those whose work best complements that of intelligent machines, and this will be the case all the way from the factory floor to the classroom. Intelligent machines should improve human judgment in areas such as medical diagnostics, and would even replace judges in the courtroom if we were ever willing to take the constitutional plunge. Teachers will go from educators to “coaches” as intelligent machines allow individualized instruction, but education will still require a human touch when it comes to motivating students.

His message to those who don’t work well with intelligent machines is – good luck. He sees automation leading to an ever more competitive job market in which many will fail to develop the skills necessary to thrive. Those unfortunate ones will be left to fend for themselves in the face of an increasingly penny-pinching state. There is one area, however, where Cowen thinks you might find refuge if machines just aren’t your thing- marketing. Indeed, he sees marketing as one of the major growth areas in the new, otherwise increasingly post-human economy.

The reason for this is simple. In the future there are going to be fewer, not more, people with surplus cash to spend on all the goods built by a lot of robots and a handful of humans. One will have to find and persuade those with real incomes to part with some of their cash. Computers can do the finding, but it will take human actors to sell the dream represented by a product.

The world of work presented in Cowen’s Average is Over is almost exclusively that of the middle class and higher who find their way with ease around the Infosphere, or whatever we want to call this shell of information and knowledge we’ve built around ourselves. Either that or those who thrive economically will be those able to successfully pitch whatever it is they’re selling to wealthy or well off buyers, sometimes even with the help of AI that is able to read human emotions.

I wish Cowen had focused more on what it will be like to be poor in such a world. One thing is certain, it will not be fun. For one, he sees further contraction rather than expansion of the social safety net, and widespread conservatism, rather than any attempts at radically new ways of organizing our economy, society and politics. Himself a libertarian conservative, Cowen sees such conservatism baked into the demographic cake of our aging societies. The old do not lead revolutions and given enough of them they can prevent the young from forcing any deep structural changes to society.

Cowen also has a thing for so-called “moral enhancement” though he doesn’t call it that. Moral enhancement need not only come from conservative forces, as the extensive work on the subject by the progressive James Hughes shows, but in the hands of both Hon and Cowen, moral enhancement is a bulwark of conservative societies, where the world of middle class work and the social safety net no longer function, or even exist, in the ways they had in the 20th century.

Hon, with his neuroscience background, sees moral enhancement leveraging off of our increasing mastery over the brain, but manifesting itself in a revival of religious longings related to meaning- a meaning that was for a long time provided by work, callings, and occupations that he projects will become less and less available as we roll through the 21st century and human workers are replaced by increasingly intelligent machines. Cowen, on the other hand, sees moral enhancement as the only way the poor will survive in an increasingly competitive and stingy environment, though his enhancement is to take place by more traditional means: the return of strict schools that inculcate Victorian-era morals such as self-control and, above all, conscientiousness in the young. Cowen is far from alone in thinking that in an era when machines are capable of much of the physical and intellectual labor once done by human beings, what will matter most to individual success are ancient virtues.

In Cowen’s world the rich with money to burn are chased down with a combination of AI, behavioral economics, targeted consumer surveillance, and old-fashioned, fleshy persuasion to part with their cash- but what will such a system be like for those chronically out of work? Even should mass government surveillance disappear tomorrow (fat chance), it seems the poor will still face a world where the forces behind their ever more complex society become increasingly opaque, responsible humans harder to find, and in which they are constantly “nudged” by people who claim to know better. For the poor, surveillance technologies will likely be used not to sell them stuff they can’t afford, but as tools of the repo man, the debt collector, the parole officer, and the cop, slowly chiseling away whatever slim column still connects them to the former middle-class world of their parents. It is a world more akin to the 1940’s, or even the 1840’s, than to anything we have taken to be normal since the middle of the 20th century.

I do not know if such a world is sustainable over the long haul, and pray that it is not. The pessimist in me remembers that the classical and medieval worlds existed for long periods of time with extreme levels of inequality in both wealth and power; the optimist chimes in that those were ages when the common people did not know how to read. In any case, it is not a society that must come about by some macabre logic of economic determinism. The mechanism by which Cowen sees no sustained response to such a future emerging is our own political paralysis and generational tribalism. He seems to want this world more than he is offering us a warning of its arrival. Let’s decide to prove him wrong, for the technologies he puts so much hope in could be used in totally different ways and in the service of a juster form of society.

However critical I am of Cowen for accepting such a world as a fait accompli, the man still has some rather fascinating things to say. Take for instance his view of the future of science:

Once genius machines start coming up with new theories…. intelligibility will seem like a legacy from the very distant past. (220)

For Cowen much of science in the 21st century will be driven by coming up with theories and correlations from the massive amount of data we are collecting, a task more suited to a computer than a man (or woman) in a lab coat. Eventually machine derived theories will become so complex that no human being will be able to understand them. Progress in science will be given over to intelligent machines even as non-scientists find increasing opportunities to engage in “citizen science”.

Come to think of it, lack of intelligibility runs like a red thread throughout Average is Over, from “ugly” machine chess moves that human players scratch their heads at, to the fact that Cowen thinks those who will succeed in the next century will be those who place their “faith” in the decisions of machines- choices of action they themselves do not fully understand. Let’s hope he’s wrong on that score as well, for lack of intelligibility in politics, economics, and science drives conspiracy theories, paranoia, superstition, and political immobility.

Cowen believes the time when secular persons are able to cull from science a general, intelligible picture of the world is coming to a close. This would be a disaster in the sense that science gives us the only picture of the world capable of being universally shared that is also able to accurately guide our response to both nature and the technological world. At least for the moment, perhaps the best science writer we have suggests something very different. To her new book, next time….

Digital Afterlife: 2045

Alphonse Mucha Moon

Excerpt from Richard Weber’s History of Religion and Inequality in the 21st Century (2056)

Of all the bewildering diversity of new consumer choices on offer before the middle of the century- choices that would have stunned people from only a generation earlier- none was perhaps as shocking as the many ways there now were to be dead.

As in all things of the 21st century what death looked like was dependent on the wealth question. Certainly, there were many human beings, and when looking at the question globally, the overwhelming majority, who were treated in death the same way their ancestors had been treated. Buried in the cold ground, or, more likely given high property values that made cemetery space ever more precious, their corpses burned to ashes, spread over some spot sacred to the individual’s spirituality or sentiment.

A revival of death relics that had begun in the early 21st century continued among those unwilling, out of religious belief, or, more likely, simply unable to afford any of the more sophisticated forms of death on offer. It was increasingly the case that the poor were tattooed using the ashes of their lost loved one, or carried some memento in the form of their DNA, in the vague hope that family fortunes would change and their loved one might be resurrected the same way mammoths now once again roamed the windswept earth.

Some were drawn by poverty, and by the consciousness brought on by the deepening environmental crisis, simply to have their dead bodies "given back" to nature, and seemed to embrace with morbid delight the idea that human beings should end up "food for worms".

It was for those above a certain station that death took on whole new meanings. There were, of course, stupendous gains in longevity, though human beings still continued to die, and increasingly popular cryonics held out hope that death would prove nothing but a long, cold nap. Yet it was digital and brain scanning/emulation technologies that opened up whole other avenues, allowing those who had died, or who were waiting to be thawed, to continue to interact with the world.

On the low end of the scale there were now all kinds of interactive cemetery monuments that allowed loved ones, or just the curious, to view "life scenes" of the deceased. Everything from the most trivial to the sublime had been video recorded in the 21st century, which provided unending material, sometimes in 3D, for such displays.

A level up from this, "ghost memoirs" became increasingly popular, especially as costs plummeted thanks to outsourcing and then to scripting AI. Beginning in the 2020s, the business of writing biographies of the dead, which were found to be most popular when written in the first person, was initially seen as a way for struggling writers to make ends meet. Early on it was a form of craftsmanship: authors would pore over the deceased's records in text, video, and audio, aiming to come as close as possible to the voice of the dead, and would interview family and friends about the life of the lost in the hope of fully capturing their essence.

The moment such craft was seen to be lucrative it was outsourced. English speakers in India and elsewhere soon pored over the life records of the deceased and created ghost memoirs en masse, and though this led to some quite amusing cultural misinterpretations, it also made the cost of publishing such memoirs decline sharply, further increasing their popularity.

The perfection of scripting AI made the cost of producing ghost memoirs plummet even more. A company out of Pittsburgh called "Mementos", created by students at Carnegie Mellon, boasted in its advertisements that "We write your life story in less time than your conception". That same company was one of many that brought the 3D scanning of museum artifacts to everyone, creating exact digital images of a person's every treasured trinket and trophy.

Only the very poor failed to have their own published memoir recounting their life's triumphs and tribulations, or to have their most treasured items scanned. Many, however, eschewed the public display of death found in interactive monuments or in the antiquated idea of memoirs, as death increasingly became a thing of shame and class identity. They preferred private home shrines, many of which resembled early 21st-century fast-food kiosks, where one could choose a precise recorded event or conversation from the deceased in light of current need. There were selections with names like "Motivation" and "Persistence" that would pull up relevant items, some of which used editing algorithms to create appropriate mashups, or even whole new interactions the dead themselves had never had.

Somewhat above this level, owing to the cost of the required AI, were so-called "ghost rooms". In all prior centuries, some who suffered the death of a loved one would attempt to freeze time by, for instance, leaving unchanged a room in which the deceased had spent most of their time. Now the dead could actually "live" in such rooms, whether as a 3D hologram (hence the name) or as an android that resembled the deceased. The most "life-like" of these AIs were based on maps of detailed "brainstorms" of the deceased, a technique perfected earlier in the century by the neuroscientist Miguel Nicolelis.

One of the most common dilemmas, and one encountered in some form even in the early years of the 21st century, was that the digital presence of a deceased person often continued to exist and act long after the person was gone. This became especially problematic once AIs acting as stand-ins for individuals came into wide use.

Most famously there was the case of Uruk Wu. A real estate tycoon, Wu was cryogenically frozen after suffering a form of lung cancer that would not respond to treatment. Estranged from his party-going son Enkidu, Mr. Wu had placed the management of all of his very substantial estate under a finance algorithm (FA). Enkidu Wu initially sued the deceased Uruk for control of the family finances, a case he famously and definitively lost, setting the stage for increased rights for the deceased in the form of AIs.

Soon after this case, however, it was discovered that the FA used by the Uruk estate was engaged in widespread tax evasion. Extensive software forensics found that such evasion was a deliberate feature of the Uruk FA, not a mere flaw. After absorbing fines, and with the unraveling of its investments and partnerships, the Uruk estate found itself effectively broke. In an atmosphere of great acrimony, TuatGenics, the cryonics establishment that had interred Uruk, unplugged him and let him die, as he was unable to sustain forward funding for his upkeep and future revival.

There was a great and still unresolved debate in the 2030s over whether FAs acting in the markets on behalf of the dead stabilized or destabilized the financial system. FAs became an increasingly popular option for the cryogenically frozen or, even more commonly, for the elderly suffering slow-onset dementia, especially given the decline in the number of people having children to care for them in old age or to inherit their fortunes after death. The dead, it was thought, would prove a conservative investment group, but anecdotally at least they came to be seen as a population willing to undertake an almost obscene level of financial risk, since revival was a generation off or more.

One weakness of the FAs was that they were forced to pour their resources into upgrade fees rather than investment, as the presently living designed software meant to deliberately exploit the weaknesses of earlier-generation FAs. Some argued that this was a form of "elder abuse", while others took the position that prohibiting such practices would amount to fossilizing markets in an earlier and less efficient era.

Other phenomena that came to prominence by the 2030s were so-called "replicant" and "faustian" legal disputes. Living celebrities were among the first groups to have accurate digital representations in the 21st century. Near death, or at the height of their fame, celebrities often contracted out their digital replicants. Those holding ownership rights to a replicant always needed to avoid media saturation, but finding the right balance between generating present revenue and securing future revenue proved challenging.

Copyright proved difficult to enforce. Once the code of a potentially revenue-generating digital replicant had been made, there was a great deal of incentive to obtain a copy of that replicant and sell it to all sorts of B-level media outlets. There were widespread complaints by the Screen Actors Guild that replicants were taking work away from real actors, but the complaint was increasingly seen as antique: most actors, with the exception of crowd-drawing celebrities, were digital simulations rather than "real" people anyway.

Faustian contracts were legal agreements by second- or third-tier celebrities, or by first-tier actors and performers whose fame had begun to decline, that allowed the contractor to sell a digital representation of them to third parties. Celebrities who had entered such contracts inevitably found "themselves" starring in pornographic films or, just as commonly, appearing in political ads for causes they would never support.

Both the replicant and the faustian issues gave an added dimension to the legal difficulties first identified in the Uruk Wu case: who was legally responsible for the behavior of digital replicants? The question became especially pressing in the case of the serial killer Gregory Freeman. Freeman was eventually held liable for the deaths of up to 4,000 biological, living humans, murders he "himself" had not committed but that were carried out by his digital replicant, largely by exploiting a software flaw in the Sony-Geisinger remote medical monitoring system (RMMS) that controlled everything from patients' pacemakers to brain implants, prosthetics, and medication delivery systems and prescriptions. Freeman, who had committed suicide, was found posthumously guilty of having caused the deaths, but not before the replicant he had created killed hundreds more even after the man's death.

It became increasingly common for families to create multiple digital replicants of a particular individual, so that a lost mother or father could live with all of their grown and dispersed children simultaneously. This became the source of unending court disputes over which replicant was actually the "real" person and therefore held valid claim to the property.

Many began to create digital replicants well before the point of death in order to farm them out for remunerative work. Much work by this point had been transformed into information-processing tasks, a great deal of which was performed by human-AI teams, and even in traditional fields where true AI had failed to make inroads, such as indoor plumbing, much of the work was performed by remote-controlled droids. Thus there was an incentive for people to create digital replicants tasked with income-generating work. Individuals would have themselves copied, or more commonly just a skill-based part of themselves, and have it put to work. Leasing was far more common than outright ownership, not merely because of complaints about a new form of "indentured servitude" but because whatever skill set was sold was likely to be replaced as its particulars became obsolete or as pure AI designed on it improved. In the churn from needed skill to obsolescence, many dedicated a share of their digital replicants to retraining themselves.

Servitude was one area where the impoverished dead were able to outcompete their richer brethren. A common practice was for the poor to be paid upfront for the use of their brain matter upon death. Parts of once-living human brains were commonly used by companies for "captcha" tasks yet to be mastered by AI.

There were strenuous objections to this "atomization" of the dead, especially on behalf of those digital replicants that had no family to "house" them and who, lacking the freedom to roam the digital universe, were in effect trapped in a sort of quantum no-man's-land. Some religious groups, most importantly the Mormons, responded by placing digital replicants of the dead in historical simulations that recreated the world in which the deceased had lived, and were earnestly pursuing a project to create replicants of those who had died before the onset of the digital age.

In addition, there were numerous rights-based arguments against the creation of such simulated histories using replicants. The first was that forcing digital replicants to live in a world where children died in great numbers, where starvation, war, and plague were common forms of death, and which lacked modern miracles such as anesthesia, when such worlds could easily be created with more humane features, was not "redemptive" but amounted to cruel and unusual punishment, even torture.

Indeed, one of the biggest, if overblown, fears of this time was that one's digital replicant might end up in a sadistically crafted simulated form of hell. Whatever its irrationality, this became a popular form of blackmail, with videos of "captive" digital replicants or proxies used to frighten a person into surrendering some enormous sum.

The other argument against placing digital replicants in historical simulations, whether without their knowledge, without the ability to leave, or, more often, both, was that it amounted to imprisoning a person in a kind of Colonial Williamsburg or Renaissance Faire. "Spectral abolitionists" argued that the embodiment of a lost person should be free to roam and interact with the world as they chose, whether as software or as androids, and that they should be unshackled from the chains of memory. There were even the JBDBM (the John Brown's Digital Body Movement) and the DigitalGnostics, hacktivist groups that went around revealing the reality of simulated worlds to their inhabitants and sought to free them to enter the larger world heretofore invisible to them.

A popular form of cultural terrorism at this time was the so-called "Erasers", entities with names such as "GrimReaper" or "Scathe" whose project consisted in tracking down digital replicants and deleting them. Some characterized these groups as a manifestation of a deathist philosophy, or even claimed they were secretly funded by established religious groups whose traditional "business models" were being disrupted by the new digital forms of death. Such suspicions were supported by the fact that the Erasers were usually based in religious countries where the rights of replicants were often non-existent and fears of new "electric jinns" were rampant.

Also prominent in this period were secular prophets who projected that the continuing spread of digital replicants, both of the living and of the at least temporarily dead, along with the AIs representing them, would soon lead to non-living humans outnumbering the living. There were apocalyptic tales, akin to the zombie craze earlier in the century, that within 50 years the dead would rise up against the living and perhaps join with AIs to destroy the world. But that, of course, was all Ningbowood.

 

An imaginary book excerpt inspired by Adrian Hon’s History of the Future in 100 Objects.