Slavery’s past and disturbingly likely future

“The past is never dead. It’s not even past.”

                                      William Faulkner

Dystopias, just like utopias, are never unmoored from a society’s history. Our worst historical experiences inevitably become the source code for our nightmares regarding the future. Thankfully, America has been blessed with a shallow well from which to feed its dystopian imagination, at least when one compares its history to other societies’ sorrows.

After all, what do we have to compare with the devastation of China during the Taiping Rebellion, Japanese invasion, or Great Leap Forward? What in our experience compares to the pain inflicted on the Soviet Union’s peoples during World War II, the factional bloodletting of Europe during the wars of religion or world wars? Only Japan has had the tragic privilege of being terrorized into surrender by having its citizens incinerated into atomic dust, and we were the ones who did it.

Of course, the natural rejoinder here is that I’m looking at American history distorted through the funhouse lens of my own identity as a straight white male. From the perspective of Native Americans, African Americans, women, and sexual minorities, it’s not only that the dark depths of American history were just as bad or worse than those of other societies, it’s that the times when the utopian imagination managed to burst into history are exceedingly difficult to find, if indeed they ever existed at all.

Civil war threatens to afflict a society not only over the question of defining the future, but over the issue of defining the past. Deep divisions occur when what one segment of society takes to be its ideal another defines as its nightmare. Much of the current political conflict in the US can be seen in this light- dueling ideas of history which are equally about how we define desirable and undesirable futures.

Technology, along with cultural balkanization and relative economic abundance, has turned engagement with history into a choice. With the facts and furniture (stuff) of the past so easily accessible, we can make any era of history we choose intimately close. We can also choose to ignore history entirely and devote the attention we might have given it to other realities- even wholly fictional ones.

In reality, devoting all of one’s time to trying to recapture life in the past, and ignoring the past entirely in order to devote one’s attention to one or more fictional worlds, tend to become one and the same. A past experienced as the present is little more real than a completely fictionalized world. Historical re-enactors can aim for authenticity, but then so can fans of Star Trek. And the fact remains that both those who would like to be living in the 24th century and those who would prefer to domicile in the 19th, or the 1950s, reveal by these very desires, and by how they go about pursuing them, that they’re sadly stuck in the early 21st.

What we lose by turning history into a consumer fetish that can either be embraced or pushed aside for other ways to spend our money and attention isn’t so much the past’s facts and furniture, which are for the first time universally accessible, but its meaning- and meaning is not something we can avoid.

We can never either truly ignore or return to the past because that past is deeply embedded in every moment of our present while at the same time being irreversibly mixed up with everything that happened between our own time and whatever era of history we wish to inhabit or avoid.

This strange sort of occlusion of history, where the past is simultaneously irretrievably distant- in that it cannot be experienced as it truly was- and yet intimately close, forming the very structure out of which the present is built, means we need other, more imaginative ways to deal with the past. Above all, we need a way in which the past can be brought out of its occlusion, its ghosts that live in the present and might still haunt our future made visible, its ever-present meaning made clear.

Ben Winters’ novel Underground Airlines does just this. By imagining a present day America in which the Civil War never happened and slavery still exists he not only manages to give us an emotional demonstration of the fact that the legacy of slavery is very much still with us, he also succeeds in warning us how that tragic history might become more, rather than less, part of our future.

The protagonist of Underground Airlines is a man named Victor. A bounty hunter in the American states outside of the “hard four” where slavery has remained at the core of the economy, he is a man with incredible skills of detection and disguise. His job is to hunt runaway slaves.

The character reminded me a little of Professor Moriarty, or better, Sherlock Holmes- minus the cartoonishness (more on why the latter in a second). But for me what made Victor so amazing a character wasn’t his skills or charm but his depth. You see, Victor isn’t just a bounty hunter chasing down men and women trying to escape hellish conditions, he’s an escaped slave himself.

A black man who can only retain what little freedom he has by hunting down human beings just like himself. It’s not so much Victor’s seemingly inevitable redemption from villain to hero that made Underground Airlines so gripping, but Winters’ skill in making me think this redemption might just not happen. That and the fact that the world he depicted in the novel wasn’t just believable, but uncomfortably so.

Underground Airlines puts the reader inside a world of 21st century slavery where our moral superiority over the past crumbles- our assumption that we are far too enlightened to allow such a morally abhorrent order to exist, that had we lived in the 19th century we’d certainly have stood on the righteous side with the abolitionists and not been lulled to sleep by indifference or self-interest.

The novel depicts a thoroughly modern form of slavery, where those indignant over the institution’s existence express that indignation largely through boycotts and virtue signaling, all the while the constitution itself (which had been amended in the early 19th century to legalize slavery forever) permits the evil itself, and the evils that support it, like the human hunting done by Victor, to continue destroying the humanity of those who live under it.

Winters also imagines a world like our own in that pop culture exists in this strange morally ambiguous space. Victor comforts himself by listening to the rhythms of Michael Jackson (a brilliant choice given the real-life Jackson’s uncomfortable relationship with his own race), just as whites in our actually existing world can simultaneously adopt and admire black culture while ignoring the systematic oppression that culture has emerged to salve. It’s a point that has recently been made a million times more powerfully than I ever could.

The fictional premise found in Underground Airlines- that the US could have kept slavery while at the same time clinging to the constitution and the Union- isn’t as absurd as it appears at first blush. Back in the early aughts the constitutional scholar Mark Graber wrote a whole book on that very subject: Dred Scott and the Problem of Constitutional Evil. Graber’s disturbing point was that not only was slavery constitutionally justifiable, it had been built into the very system devised by the founders; thus it was Lincoln and the abolitionists who were engaged in a wholesale reinterpretation of what the republic meant.

No doubt scarred by the then-current failure of building democracy abroad in Iraq, Graber argued that the wise, constitutionally valid course for 19th century politicians would have been to leave slavery intact in the name of constitutional continuity and social stability. He seems to assume that slavery as a system was somehow sustainable and that the constitution itself is in some way above the citizens who are the source of its legitimacy. And Graber makes this claim even though he knows that under modern conditions basing a political system on the brutal oppression of a large minority is a recipe for permanent fragility and constant crises of legitimacy, often fueled by the intervention of external enemies- which is the real lesson he should have taken from American intervention abroad.

In Underground Airlines we see the world that Graber’s 19th century compromisers might have spawned. It’s a world without John Brown-like revolutionaries, in which slave owners run corporate campuses and are at pains to present themselves as somehow humane. What rebellion does occur comes in the context of the underground airlines itself, a network that, like its real historical analog, attempts to smuggle escaped slaves out of the country. Victor himself had tried to escape more than once, but he is manacled in a particularly 21st century way, a tracking chip embedded deep under his skin- like a dog.

What Winters has managed to do by placing slavery in our own historical context is recover for us what it meant. The meaning of our history of slavery is that we should never allow material prosperity to be bought at the price of dehumanizing oppression. That it’s a system based as much on human indifference and cravenness as it is on our capacity for cruelty. It’s a meaning we’ve yet to learn.

This is not a lesson we can afford to forget, for, despite appearances, it’s not entirely clear that we have permanently escaped it. It seems quite possible that we have entered an era when the issue of a narrow prosperity bought by widespread oppression will come to dominate national and global politics. To see that- contra Steven Pinker- we haven’t escaped oppression as the basis of material abundance, but merely skillfully removed it from the sight of those lucky enough to be born in societies and classes where affluence is taken for granted, one need only look at the history of cotton itself.

Sven Beckert, in his Empire of Cotton: A Global History, skillfully laid out the stages in which the last of the triad of great human needs- food, shelter and clothing- was at last secured. Today it is hard for many of us to imagine how difficult it once was just to keep ourselves adequately clothed, to the point where the problem has become one of having so much clothing we can’t find any place to put it.

The conquest of this need started with actual conquest. The war capitalism waged by states and their proxies starting with the Age of Exploration succeeded in monopolizing markets and eventually enslaving untold numbers of Africans to cultivate cotton in the Americas, whose lands had been cleared of inhabitants by disease and genocide. The British especially succeeded not only in monopolizing foreign trade in cotton and in enslaving and resettling Africans in the American south; they had also, at home, managed through enclosure to turn their peasantry into a mass of homeless proletarians who could be forced through necessity and vagrancy laws into factories to spin cloth using the new machines of the industrial revolution. It was a development that would turn the British from the most successful middlemen in the lucrative Asian cotton trade into the world’s key producer of cotton goods, a move that would devastate the farmers of Asia who relied on cotton as a means to buffer their precarious incomes.

The success of the abolitionist movement, and especially the Union victory in the US Civil War, seemed to have permanently severed the relationship between capitalism and slavery, yet smart capitalists had already figured out that the gig was up. Wage labor had inherent advantages over slave-based production. Under a wage-based system labor was no longer linked to one owner but was free floating, thus able to respond rapidly to the ceaseless expansion followed by collapse that seemed to be the normal condition of an industrial economy. Producers no longer needed to worry about how they would extract value from their laborers when faced with falling demand, or about their loss of value and unsellability should they become incurably sick or injured. They could simply shed them and let the market or charity deal with such refuse. Capitalists also knew the days of slavery were numbered in light of successful slave revolts, especially the one in Haiti. The coercive apparatus slavery required was becoming prohibitively expensive.

It took capitalism less than twenty years after the end of American slavery to hit upon a solution to the problem of how to run commodity agriculture without slavery. That solution was to turn farmers themselves into proletarians. The Jim Crow laws that rose up in the former Confederate states after the failure of Reconstruction to turn the country into a true republic (based on civic rather than ethnic nationalism) were in essence a racially based form of proletarianization.

It was a model that, Beckert points out, was soon copied globally. First by Western imperialists, and later by strong states established along Western lines, peasants were coerced into specializing in commodity crops such as cotton and forced to rely on far-flung markets for their survival. In the late 19th century the initial effect of this was a series of devastating famines, which, with technological improvements and the maturation of markets and global supply chains, have thankfully become increasingly rare.

What Beckert’s work definitively shows is that the idea of “the market” arising spontaneously on its own between individuals, free of the interfering hand of the state, is mere fiction. Capitalism of both the commercial and industrial varieties required strong states to establish itself, and those states were essential to creating the kind of choice architecture that compelled individuals to accept their social reality.

Yet this history wasn’t all bad, for the very same strength of the state that had been used to establish markets could be turned around and used to contain and humanize them. It required strong states to enact emancipation and workers’ rights (even if the latter were achieved under conditions of racialized democracy), and it was the state at the height of its strength after the World Wars that finally put an end to Jim Crow.

But by the beginning of the 21st century the state had lost much of this strength. The old danger of basing the material prosperity of some on the oppression of others remained very much alive and well. Beckert charts this change for the realm of cotton production with the major players in our age of globalization being no longer producers but retail giants such as WalMart or Amazon- distributors of finished products which aren’t so much traditional stores as vast logistical networks able to navigate and dominate opaque global supply chains.

In an odd way, perhaps the end of the Cold War did not so much signal the victory of capitalism over state communism as the birth of a rather monstrous hybrid of the two with massive capitalist entities tapping into equally massive pools of socialized production whether that be Chinese factories, Uzbek plantations, or enormous state subsidized farms in the US. Despite its undeniable contribution to global material prosperity this is also a system where the benefits largely flow in one direction and the costs in another.

It’s as if the primary tool of the age somehow ends up defining the shape of its political economy. Our primary tool is the computer, a machine whose use comes with its own logic and cost. To quote Jaron Lanier in Who Owns the Future?:

Computation is the demarcation of a little part of the universe, called a computer, which is engineered to be very well understood and controllable, so that it closely approximates a deterministic, non-entropic process. But in order for a computer to run, the surrounding parts of the universe must take on the waste heat, the randomness. You can create a local shield against entropy, but your neighbors will always pay for it. (143)

Under this logic the middle and upper classes in advanced economies, where they have been prohibited from unloading their waste and pollution onto their own weak and impoverished populations, have merely moved to spewing their entropy abroad or onto the non-human world- offloading their waste, pollution and the social costs of production to the developing world.

Still, such a system isn’t slavery, which has its own peculiar brutalities. Unbeknownst to many, slavery still exists; indeed, according to some estimates, there are more slaves now than there have ever been in human history. It is a scourge we should work ever harder to eradicate, yet it is in no way at the core of our economy as it was during the Roman Empire or the 19th century.

That doesn’t mean, however, that slavery could never return to its former prominence. Such a dark future would depend on certain near-universal assumptions about our technological future failing to come to pass- namely, that Moore’s Law will have no near-term successor, and thus that the revolution in AI and robotics now expected fails to arrive this century. The failure of such a technological revolution might then intersect with current trends that are all too apparent. The frightening thing is that such a return to slavery in a high-tech form (though we wouldn’t call it that) would not require any sort of technological breakthrough at all.

In Underground Airlines what keeps Victor from escaping his fate is the tracking chip implanted deep under his skin. There’s already some use of, and a lot of discussion about using, non-removable GPS tracking devices to keep tabs on former convicts no longer behind bars.

The reasoning behind this is that it seems, at first, to provide a humane alternative to the system of mass incarceration we have today. The current system is in large measure a white, rural jobs program- with upwards of 70 percent of prisons built between 1970 and 2000 constructed in rural areas. It was a system built on the disproportionate incarceration of African Americans, who make up less than 13 percent of the US population but comprise 40 percent of its prisoners.

The election of Donald Trump has for now nixed the nascent movement towards reforming this barbaric system, a movement which has some strangely conservative supporters, most notably the notorious Koch Brothers. What their presence signals is that we are in danger of replacing one inhumane system with an alternative that has dangers of its own- one where people we once imprisoned are now virtually caged and might even be sold out for labor in exchange for state “support”.

This could happen if we enter another AI winter and human labor proves, temporarily at least, irreplaceable by robots, while at the same time we continue down the path of racialized politics. In these conditions immigrants might be treated in a similar way: a roving labor force used to meet shortages on the condition that they can be constantly tracked, hired out Uber-style, and deported at will. Such a “solution” would be a way for racialized, demographically declining societies in Europe, North America, and East Asia to avoid multi-cultural change while clinging to their standard of living. One need only look at how migrant labor works today in a seemingly liberal poster child such as Dubai, or how Filipino servants are used by Israelis who keep their Palestinian neighbors in a state of semi-apartheid, to get glimpses of this.

We might enter such a world almost unawares, our anxieties misdirected by what turn out to be false, science-fiction-based nightmares of jobless futures and Skynet. Let’s do our best to avoid it.

 

Escape from the Body Farm


One of the lesser noted negative consequences of having a tabloid showman for a president is the way the chaos and scandal around him has managed to suck up all the air in the room. Deep social and political problems that would once have made the front page, sat on top of the newsfeed, or been covered in depth by TV news have been relegated to the dustbin of our increasingly monetized attention. And because so few of the public know about these issues, their future remains in the hands of interested parties unlikely to give more than perfunctory concern to the ethics or the common good such issues involve.

For that reason I was extremely pleased when the news service Reuters recently did a series of articles on a topic that seemingly has nothing to do with Trump. That series, called The Body Trade, gives the reader insight into an issue I would bet few of us are aware of: the way that life-saving tissues and organs have been increasingly monetized and turned into profit centers for medical companies, despite the fact that this biological trade is supposedly done on a voluntary basis- not for money, but as what is often the last ethical, charitable act a person can perform in the service of the common good.

According to Reuters, bodies “donated to science” often end up dismembered and sold to the highest bidder by body brokers who peddle the dead for a profit to anyone willing to pay. Whatever your qualifications, you can buy such human remains over the internet, for a price. Many morticians are apparently now onto the game and will convince a family to donate the body of a loved one only to sell those remains at a profit, but the trade also includes large corporations. One such corporate body broker, Science Care, has aimed to become the “McDonald’s” of the dead and has managed to reap a 27 million dollar profit from the sale of whole bodies, many of which were obtained from poor people unable to pay for funeral expenses.

A body reduced to a commodity comes to be treated like a commodity. In one body broker’s warehouse the heads of the dead were stacked like frozen cookie jars. Biological Resource Center dismembered bodies using off-the-shelf power tools and stored the remains like trash in garbage bags. They seem to have been especially adept at getting hold of the bodies of the poor.

All of the cases from the Reuters series appear to have happened in the US and dealt with the remains of the dead, but the body trade is a global phenomenon and often traffics in the parts of the living, and in the living themselves. I knew this because I had recently read Scott Carney’s excellent book on the subject, The Red Market: On the Trail of the World’s Organ Brokers, Bone Thieves, Blood Farmers, and Child Traffickers.

Carney’s book takes readers into the heart of the red market, whose victims come largely from the poor of the developing and post-communist world and whose beneficiaries are the rich and middle class of advanced economies, along with the nouveau riche who, because of globalization, are now everywhere. Skeletons are obtained for the rich world via Indian grave robbers. In one of the most gruesome sections of the book, set on the Indian border with Nepal, a farmer named Papa Yadhav kept captive victims whom he milked like cows- only for the much more valuable commodity of human blood.

Carney reveals that there are whole villages in south Asia that base their economies on selling their kidneys, that Chinese authorities have harvested corneas from political prisoners such as those from the religious movement Falun Gong, and that older women who can afford it can with ease buy the eggs of poor women or rent their wombs for a pittance. Among the world’s poor, pharmaceutical companies can also find willing human guinea pigs at a similarly bargain-basement price.

One might think the quest for organs especially is born from a crisis of supply. Yet Carney points out that the scarcity of organs is largely artificial. Like an oil cartel managing production, the medical industry, by inflating the number of patients eligible for transplants, consciously guarantees that demand will exceed supply.

Then there are the children. Often kidnapped on the streets of the world’s crowded mega-cities, the unluckiest become commodities of the global sex trade, while the lucky ones are adopted into the homes of well-off families who, even with the best of intentions, remain oblivious to their new children’s sinister origins.

Rightfully, Carney dismisses market-based solutions to the problem of the red market. Given the level of global inequality there is no way to sort willing sellers from those forced into the decision to undergo risky and life-changing surgery in order to temporarily escape the vice grip of hunger and homelessness.

His solution is that we mandate transparency throughout the supply chain of human organs and tissues, so that anyone who receives a transplant or other gift of this kind can trace what they have been given back to its original owner or their family. As Paul Auster laid bare in his book Winter Journal, a self is inextricable from its body, our unique experience etched into every scar and wrinkle. What Carney is arguing for is really a form of social memory that links its way back to this personal experience. In the era of ubiquitous big data this shouldn’t be too hard. It is simply a matter of political will.
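Just to make the kind of record Carney seems to have in mind concrete, here is a minimal sketch in Python- purely my own toy, with invented names, fields and IDs, not any real registry’s schema- of what a traceable chain of custody for donated tissue could look like as data. A real system would, of course, need legal teeth far more than clever software.

```python
# A toy chain-of-custody record for donated tissue (illustrative only).
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class CustodyEvent:
    holder: str   # e.g. hospital, tissue bank, courier, surgeon
    action: str   # e.g. "recovered", "processed", "shipped", "transplanted"
    on: date

@dataclass
class TissueRecord:
    tissue_id: str             # pseudonymous ID, not the donor's name
    donor_contact: str         # reachable only through the registry
    consent_scope: str         # what the donor actually agreed to
    chain: List[CustodyEvent] = field(default_factory=list)

    def transfer(self, holder: str, action: str, on: date) -> None:
        """Append a custody event; nothing is ever deleted or overwritten."""
        self.chain.append(CustodyEvent(holder, action, on))

# A recipient or regulator could walk record.chain from the operating room
# all the way back to the original act of donation.
record = TissueRecord("T-0001", "registry-contact", "transplant and research")
record.transfer("County Hospital", "recovered", date(2017, 3, 1))
record.transfer("Regional Tissue Bank", "processed", date(2017, 3, 4))
for event in record.chain:
    print(event.holder, event.action, event.on)
```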

The body trade is just one example of new forms of dystopia missed by 21st century proponents of the belief in human progress, such as Steven Pinker. Optimists focus on the bright side. Organ transplantation, the harvesting of human eggs, and surrogacy are all technical marvels that would be impossible without breakthroughs such as immunosuppressive drugs and antibiotics. They are technologies that, at one level, certainly increase human happiness- allowing patients to live longer, or people to have children where it was previously impossible- as is the case with homosexual couples.

The dystopian aspects of this new relationship towards our own and others’ bodies, however, haven’t been missed by the writers of speculative fiction. Kazuo Ishiguro made it the theme of his novel Never Let Me Go, in which cloned children are raised for their organs. Yet the philosopher Steven Lukes probably gave us the clearest picture with his depiction of a utilitarian dystopia in his book The Curious Enlightenment of Professor Caritat. There the narrator, Nicholas Caritat, gets this response when he suggests that the country of Utilitaria compels organ donation for the benefit of the physically disabled:

 ‘They’re not beneficiaries,’ Priscilla corrected him. ‘They’re benefactors. It’s their distinctive way of contributing to the general welfare. They can’t produce goods or services, but they can provide organs that will enable others to do so. It gives them a purpose in life, and that’s especially valuable as we’ve largely phased out medical care for that particular category.’ (83)

Yet both Ishiguro and Lukes, in their focus on the individual, perhaps underplay the fact that modern day utopias sustain themselves by creating entire dystopian realms, both within societies and between societies on opposite sides of that chasm. There’s a reason both techno-optimists and pessimists, while drawing opposite conclusions, are reading our situation correctly.

In our time the utopian and dystopian aspects of civilization have taken on the same topology as our economics and communications- utopia and dystopia are now global and networked, with little respect for national borders, and membership is based almost solely on your ability to pay, which in turn is based on your capacity to extract rents and displace costs onto those outside your own utopian bubble. If every society takes on the shape of its most important technology, then ours, as Jaron Lanier has pointed out, has the shape of the computer- a box that creates a pocket of order at the price of displaced entropy.

Here are just a few examples of this displacement: material abundance is bought at the cost of brutal conditions for the laboring poor and rampant environmental destruction; food abundance is bought at the price of horrendous animal suffering, wildlife eradication and cruel conditions for migrant labor. The increasing complexity of our societies is bought at the price of displacing our entropy and pain onto other human beings and life itself. Yet chaos can only be held at bay for so long.

Yet I am making it all sound too new. What makes our situation unique is its truly global aspect, its openness to elites everywhere. That this system is based on the domination of human bodies is as old as civilization itself, a cruel reality we, for all our supposed tolerance and lack of overt violence, have never escaped. Ta-Nehisi Coates said it best:

“As for now, it must be said that the process of washing the disparate tribes white, the elevation in the belief in being white, was not achieved through wine tasting, and ice cream socials, but rather through the pillaging of life, liberty, labor, and land; through the flaying of backs; the chaining of limbs; the strangling of dissidents; the destruction of families; the rape of mothers; the sale of children; and various other acts meant, first and foremost, to deny you and me the right to secure and govern our own bodies.

The new people are not original in this. Perhaps there has been, at some point in history, some great power whose elevation was exempt from the violent exploitation of other human bodies. If there has been, I have yet to discover it.” (8)

Since the agricultural revolution elites have mined the laboring bodies of the lower classes- first slaves and the peasantry, then the industrial proletariat- and now, after a brief interim when elite wealth was supported by middle class consumption, they seem to have moved on to mining our data.

I say elites, but given the way human ancestry folds back upon itself, and given how a globalized world has penetrated even the most isolated populations, everyone alive today has been shown to share a common ancestor as recently as roughly 3,600 years ago. What this means is that none of us are truly innocent or completely guilty. All of us can trace our existence back to both cruel masters and blameless slaves. In some sense the moral truths behind the myth of the Fall remain true even in light of Darwin’s discovery of evolution: all human beings share a common parentage, and all exist as a consequence of that shared guilt. It remains up to us to break free from this cycle.

The dystopia of the moment, surveillance capitalism, isn’t the only dystopian iteration of this perennial theme of human fallenness possibly in store for us, although given the profits in medicine the two are likely to become linked. For if human labor is truly becoming superfluous, and production becomes too cheap through globalization and automation to render large profits, then the lower classes- absent technical breakthroughs such as 3D-printed organs, artificial wombs, and the growth of human organs in livestock- still have their bodies themselves left to exploit.

As with global warming, many hope that the rapid pace of technological progress will ultimately save us from such a fate. And thanks to breakthroughs like CRISPR things are moving extremely fast, especially in the area of growing and harvesting human organs from animals.

While exploiting the bodies of animals for life-saving organs would be better than using them for meat, such breakthroughs wouldn’t completely solve the problems of the red market, which stem as much from political economy as they do from technological roadblocks.

Bodies might then be exploited not as a source of organs but as sites for what would now be deemed unnecessary surgeries- in the same way unnecessary testing is done today by the medical industry to drive up profits. This is what is bound to happen when one treats the human person as just another commodity and source of revenue. To disconnect the needs of the human body from the ravenous appetite of  capitalism would be the best thing we could do to ensure its humane treatment.

Yet there is another, more philosophical and spiritual aspect to our condition. When Western culture made the move into Protestantism, followed by the scientific revolution and secularism, we also made a break from an aspect of human culture that was perhaps universal up until that point in history- the respect for and veneration of the dead. And while much understanding and untold good came from this move, in that a good deal of our modern health can be laid at the feet of those courageous enough to pursue knowledge through dissection and other means that came at great personal risk, something was also tragically lost in the bargain.

There are signs, however, that we are getting it back: from efforts to understand death in other cultures, to a desire to naturalize our relationship with death, to the attempts to memorialize the death of loved ones through tokens of remembrance we carry on and even etch into our bodies. All stem from the acknowledgement that we are bodies, material beings prone to decay and death, and yet, through the power of human love and memory, always something else besides. Some might even call it a soul.

A Box of a Trillion Souls


“The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.”                                                                          

Jaron Lanier

 

Stephen Wolfram may, or may not, have a justifiable reputation for intellectual egotism, but I like him anyway. I am pretty sure this is because, whenever I listen to the man speak, I most often walk away not so much with answers as with a whole new way to frame questions I had never seen before. Sometimes, though, I’m just left mesmerized, or perhaps bewildered, by an image he’s managed to draw.

A while back, during a talk/demo at the SXSW festival, he managed to do this when he brought up the idea of “a box of a trillion souls”. He didn’t elaborate much, but left it there, after which I chewed on the metaphor for a few days and then returned to real life, which can be mesmerizing and bewildering enough.

A couple of days ago I finally came across an explanation of the idea in a speech by Wolfram over at John Brockman’s Edge.org. There, Wolfram also opined on the near future of computation and the place of humanity in the universe. I’ll cover those thoughts first before I get to his box full of souls.

One of the things I like about Wolfram is that, uncommonly for a technologist, he tends to approach explanations historically. In his speech he lays out a sort of history of information that begins with information being conveyed genetically with the emergence of life, moves to the interplay between individual and environment with the development of more complex life, and flowers in spoken language with the appearance of humans.

Spoken language eventually gave rise to the written word, though it took almost all of human history for writing to become nearly as common as speaking. For most of that time reading and writing were monopolized by elites. A good deal of mathematics, as well, has moved from being utilized by an intellectual minority to being part of the furniture of the everyday world, though more advanced mathematics continues to be understandable by specialists alone.

The next stage in Wolfram’s history of information, the one we are living in, is the age of code. What distinguishes code from language is that it is “immediately executable”, by which I understand him to mean that code is not just a set of instructions but, when run, the very thing those instructions describe.

Much like reading, writing and basic mathematics before the invention of printing and universal education, code today is largely understood by specialists only. Yet rather than enduring for millennia, as was the case with the clerisy’s monopoly on writing, Wolfram sees the age of non-universal code ending almost as soon as it began.

Wolfram believes that specialized computer languages will soon give way to “natural language programming”. A fully developed form of natural language programming would be readable by both computers and human beings- by numbers of people far beyond those who now know how to code- since code would be written in ordinary human languages like English or Chinese. He is not just making idle predictions; he has created a free program that lets you play around with his own version of natural language programming.

Wolfram makes some predictions as to what a world where natural language programming became ubiquitous- where just as many people could code as can now write- might look like. The gap between law and code would largely disappear. The vast majority of people, including school children, would have the ability to program computers to do interesting things, including perform original research. As computers become embedded in objects, the environment itself will be open to the programming of everyone.
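To give a feel for the gap such natural language programming is meant to close, here is a deliberately tiny sketch in Python- my own toy, not Wolfram’s system- that maps a handful of English requests onto executable operations. Real natural language programming would need genuine parsing and semantics; this only illustrates the idea of a human sentence going in and running code coming out.

```python
# A toy "English in, execution out" dispatcher (illustrative only).
import re
import statistics

def run_request(sentence: str):
    """Interpret a tiny fixed set of English requests and execute them."""
    numbers = [float(tok) for tok in re.findall(r"-?\d+\.?\d*", sentence)]
    text = sentence.lower()
    if "average" in text or "mean" in text:
        return statistics.mean(numbers)
    if "largest" in text or "biggest" in text:
        return max(numbers)
    if "sort" in text:
        return sorted(numbers)
    raise ValueError("request not understood")

print(run_request("What is the average of 3, 9 and 12?"))  # 8.0
print(run_request("Please sort 5, 2 and 11 for me"))        # [2.0, 5.0, 11.0]
```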

All this would seem very good for us humans and would be even better given that Wolfram sees it as the prelude to the end of scarcity, including the scarcity of time that we now call death. But then comes the AI. Artificial intelligence will be both the necessary tool to explore the possibility space of the computational universe and the primary intelligence via which we interact with the entirety of the realm of human thought.  Yet at some threshold AI might leave us with nothing to do as it will have become the best and most efficient way to meet our goals.

What makes Wolfram nervous isn’t human extinction at the hands of super-intelligence so much as what becomes of us after scarcity and death have been eliminated and AI can achieve any goal- artistic ones included- better than us. This is Wolfram’s vision of the not too far off future, which, given the competition from even current reality, isn’t nearly weird enough. It’s only when he starts speculating on where this whole thing is ultimately headed that anything as strange as Boltzmann brains makes an appearance- yet something like them does appear, and no one should be surprised, given his ideas about the nature of computation.

One of Wolfram’s most intriguing, and controversial, ideas is something he calls computational equivalence. With this idea he claims not only that computation is ubiquitous across nature, but that the line between intelligence and merely complicated behavior that grows out of ubiquitous natural computation is exceedingly difficult to draw.

For Wolfram the colloquialism that “the weather has a mind of its own” isn’t just a way of complaining that the rain has ruined your picnic but, in an almost panpsychic or pantheistic way, captures a deeper truth: natural phenomena are the enactment of a sort of algorithm, which, he would claim, is why we can successfully model their behavior with other algorithms we call computer “simulations.” The word simulations needs quotes because, if I understand him, Wolfram is claiming that there would be no difference between a computer simulation of something at a certain level of description and the real thing.

It’s this view of computation that leads Wolfram to his far future and his box of a trillion souls. For if there is no difference between a perfect simulation and reality, and if there is nothing that will prevent us from creating perfect simulations at some point in the future, however far off, then it makes perfect sense to think that some digitized version of you- which as far as you are concerned will be you- could end up in a “box”, along with billions or trillions of similar digitized persons, and perhaps millions or more copies of you.

I’ve tried to figure out where exactly this conclusion from an idea I otherwise find attractive- computational equivalence- goes wrong, beyond just my intuition or common sense. I think the problem might come down to the fact that while many complex phenomena in nature may have computer-like features, they are not universal Turing machines, i.e. general-purpose computers, but machines whose information processing is very limited and specific to their own makeup.

Natural systems, including animals like ourselves, are more like the Tic-Tac-Toe machine built by the young Danny Hillis and described in The Pattern on the Stone, his excellent primer on computers that is still insightful decades after its publication. Of course, animals such as ourselves can show vastly more types of behavior and exhibit a form of freedom of a totally different order than a game tree built out of circuit boards and lightbulbs, but, much like such a specialized machine, the way in which we think isn’t a form of generalized computation; it shows a definite shape based on our evolutionary, cultural and personal history. In a way, Wolfram’s overgeneralization of computational equivalence negates what I find to be his equally or more important idea of the central importance of particular pasts in defining who we are as a species, peoples and individuals.
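The distinction is easier to see in code. Below is a minimal sketch- my own toy in Python, not Hillis’s actual machine- of a “computer” that plays one canned line of tic-tac-toe from a fixed response table. Its entire computation is frozen into its structure, which is exactly what makes it a specialized machine rather than a general-purpose one.

```python
# A tic-tac-toe "machine" that is nothing but a fixed response table.
# Squares are numbered 0-8; the machine always opens in the center (4).
# Keys are the opponent's reply, values are the machine's canned answer.
RESPONSE_TABLE = {
    0: 8, 2: 6, 6: 2, 8: 0,  # corner taken -> take the opposite corner
    1: 7, 3: 5, 5: 3, 7: 1,  # edge taken -> take the opposite edge
}

def machine_move(opponent_square: int) -> int:
    """Look up the canned reply; there is no search and no strategy here."""
    return RESPONSE_TABLE[opponent_square]

print("Machine opens on square 4 (center)")
print("Opponent plays 0, machine replies:", machine_move(0))  # 8
```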

Oddly enough, Wolfram falls into the exact same trap that the science-fiction writer Stanislaw Lem fell into after he had hit upon an equally intriguing, though in some ways quite opposite understanding of computation and information.

Lem believed that the whole system of computation and mathematics human beings use to describe the world is a kind of historical artifact, for which there must be much better alternatives buried in the way systems that evolved over time process information. A key scientific task, he thought, would be to uncover this natural computation and find ways to use it as we now use math and computation.

Where this leads him is precisely the same conclusion as Wolfram’s: the possibility of building an actual world in the form of a simulation. He imagines the future designers of just such simulated worlds:

“Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything” considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess.” (291-292)

Yet it seems to me that moving from the idea that things in the world- a storm, the structure of a seashell, the way particular types of problems are solved- are algorithmic to the conclusion that the entirety of the world hangs together in one universal algorithm is a massive overgeneralization. Perhaps there is some sense in which the universe might be said to be weakly analogous, not to one program, but to a computer language (the laws of physics) upon which an infinite ensemble of other programs can be instantiated, a language structured so as to make some programs more likely to be run while deeming others impossible. Nevertheless, which programs actually get executed is subject to some degree of contingency- not everything that happens in the universe is determined by initial conditions. Our choices actually count.

Still, such a view continues to treat the question of corporeal structure as irrelevant, whereas structure itself may be primary.

The idea of the world as code, or of DNA as a sort of code, is incredibly attractive because it implies a kind of plasticity, which equals power. What gets lost, however, is something of the artifact-like nature of everything that is: the physical stuff that surrounds us, life, our cultural environment. All of it exists as the product of a unique history where every moment counts, and this history, as it were, is the anchor that determines what is real. Asserting that the world is or could be fully represented as a simulation either implies that such a simulation possesses the kinds of compression and abstraction, along with the ahistorical plasticity, that come with mathematics and code, or it doesn’t; and if it doesn’t, it’s difficult to say how anything like a person, let alone trillions of persons, or a universe, could actually, rather than merely symbolically, be contained in a box, even a beautiful one.

For the truly real can perhaps most often be identified by its refusal to be abstracted away or compressed and by its stubborn resistance to our desire to give it whatever shape we please.

 

Is AI a Myth?


A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they see as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI, back in the early 1980’s: John Searle. (Relation to the author lost in the mists of time.) It was Searle who invented the well-known thought experiment of the “Chinese Room”, which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi’s Fourth Revolution and Nick Bostrom’s Superintelligence.

Also in October, Michael Jordan, the man who helped bring us neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as the hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now-joined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn’t about might give us a better handle on what is actually going on in AI right now, in the next few decades, and in reference to a farther-off future we have to at least start thinking about- even if there’s not much to actually do regarding the latter question for a few decades at the least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human-level intelligence in machines is theoretically impossible. These aren’t people arguing that there’s some spiritual something that humans possess that we’ll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siris and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian’s fear that he won’t be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human-level machine intelligence between 2075 and 2090. If we just split the difference, we’re out to around 2083 before human-equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is 69 years in the future we’re talking about- a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, I think, echoing Bostrom, we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out and will become a huge part of a larger argument- one that will include many issues in addition to AI- over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn’t about this larger debate regarding our survival and future; it’s about what’s happening with artificial intelligence right before our eyes. They want to challenge what they see as common false assumptions regarding current AI. It’s hard not to be bedazzled by all the amazing manifestations around us, many of which have appeared only over the last decade. Yet as the philosopher Alva Noë recently pointed out, we’re still not really seeing what we’d properly call “intelligence”:

Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

This is an old criticism- the same one made by John Searle, both in the 1980’s and more recently- and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated statistical methods into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors for algorithmic processing, such as “neural nets”, are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It’s not a good idea to be trapped in anything- including our metaphors. AI researchers might fail to develop other good metaphors that help them understand what they are doing- “flows and pipelines” once provided good metaphors for computers. The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th century ideas about “electronic brains”, and the public is at risk of anthropomorphizing their machines. Such anthropomorphizing might have ugly consequences- a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.

Lanier’s critique of AI is actually deeper than Jordan’s because he sees both technological and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we’ll find ourselves in an “AI winter” similar to the one that occurred in the 1980’s. Hype-cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you’re likely to lose the interest of the smartest minds and start to attract kooks- which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far scarier. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies- Google, Facebook, Amazon- are essentially just algorithms. Some of the same people who have an economic interest in our seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn’t this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of Oz scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn’t so much another form of intelligence helping you make better informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn’t, as it is often presented, coming from silicon intelligence at all. Rather, it’s leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user’s view.
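To make the point concrete, here is a minimal sketch in Python- my own toy, not any company’s actual system- of a “recommendation engine” whose only intelligence is the record of choices other people have already made. The listeners, artists and histories are invented for illustration.

```python
# Recommendations as aggregated human taste (illustrative only).
from collections import Counter

# Each set is one (hypothetical) person's listening history.
HISTORIES = [
    {"Michael Jackson", "Prince", "Stevie Wonder"},
    {"Michael Jackson", "Prince", "Janet Jackson"},
    {"Prince", "Stevie Wonder", "Janet Jackson"},
]

def recommend(liked: str, histories=HISTORIES, top_n: int = 2):
    """Suggest whatever co-occurred most often with `liked` in other
    people's histories- no model of music at all, just other humans' choices."""
    counts = Counter()
    for history in histories:
        if liked in history:
            counts.update(history - {liked})
    return [artist for artist, _ in counts.most_common(top_n)]

print(recommend("Michael Jackson"))  # ['Prince', ...]
```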

Lanier doesn’t think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. Hence the fear of technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is “eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad”- a view based on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

As long as we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), then over the course of the next half-century we will see the gradual improvement of AI to the point where perhaps the majority of human occupations can be performed by machines. We should not confuse ourselves as to what this means: it is impossible to say, with anything but an echo of lost religious myths, that we will be entering the “next stage” of human or “cosmic” evolution. Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don’t fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts, should they ever emerge from anything other than our nightmares and our dreams.

 

Why the Castles of Silicon Valley are Built out of Sand

Ambrogio Lorenzetti, Temperance with an Hourglass, from the Allegory of Good Government

If you get just old enough, one of the lessons living through history teaches you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from around 100 AD into the 1600s. Perhaps even more dreams than not simply refuse to die; they hang on like ghosts, or ghouls, zombies or vampires, or whatever freakish version of the undead suits your fancy. Naming them would take up more room than I have here, and would no doubt start one too many arguments, all of our lists being different. Here, I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well the dream I’ll declare dead will have its defenders.

The fact of the matter is, I am not even sure what to call the dream I’ll be talking about. Perhaps digitopia is best. It was the dream, emerging sometime in the 1980s and going mainstream in the heady 1990s, that this new thing we were creating called the “Internet”, and the economic model it permitted, was bound to lead to a better world of more sharing, more openness, more equity, if we just let its logic play itself out over a long enough period of time. Almost all the bigwigs in Silicon Valley, the Larry Pages and Mark Zuckerbergs, the Jeff Bezoses and Peter Diamandises, still believe this dream, and walk around like 21st century versions of Mary Magdalene claiming they can still see what more skeptical souls believe has passed.

By far the best Doubting Thomas of digitopia we have out there is Jaron Lanier. Part of his power in declaring the dream dead comes from the fact that he was there when the dream was born and was once a true believer. Like Kevin Bacon in Hollywood, take any intellectual heavy hitter of digital culture, say Marvin Minsky, and you’ll find Lanier has some connection to them. Lanier is no Luddite, so when he says there is something wrong with how we have deployed the technology he in part helped develop, it’s right and good to take the man seriously.

The argument Lanier makes against the economic model we have built around digital technology in his most recent book, Who Owns the Future?, is, in a nutshell, this: what we have created is a machine that destroys middle class jobs and concentrates information, wealth, and power. Say what? Hasn’t the Internet and mobile technology democratized knowledge? Don’t average people have more power than ever before? The answer to both questions is no, and the reason why is that the Internet has been swallowed by its own logic of “sharing”.

We need to remember that the Internet really got ramped up when scientists started using it to exchange information with one another. It was built on the idea of openness and transparency, not to mention a set of shared values. When the Internet leapt out into public consciousness no one had any idea of how to turn this sharing capacity and transparency into the basis for an economy. It took the aftermath of the dot-com bubble and bust for companies to come up with a model of how to monetize the Internet, and almost all of the major tech companies that dominate the Internet, at least in America- and there are only a handful- Google, FaceBook and Amazon, now follow some variant of this model.

The model is to aggregate all the sharing that the Internet seems to naturally produce and offer it, along with other “complements”, for “free” in exchange for one thing: the ability to monitor, measure and manipulate through advertising whoever uses their services. Like silicon itself, it is a model that is ultimately built out of sand.

When you use a free service like Instagram there are three ways it’s ultimately paid for. The first we all know about: the “data trail” we leave when using the site is sold to third party advertisers, which generates income for the parent company, in this case FaceBook. The second and third ways the service is paid for I’ll get to in a moment, but the first way itself opens up all sorts of observations and questions that need to be answered.

We had thought the information (and ownership) landscape of the Internet was going to be “flat”. Instead, it’s proven to be extremely “spiky”. What we forgot in thinking it would turn out flat was that someone would have to gather and make useful the mountains of data we were about to create. The big Internet and telecom companies are these aggregators, able to make this data actionable because they possess the most powerful computers on the planet, which allow them not only to route and store the data but to mine it for value. Lanier has a great name for the biggest of these companies- he calls them Siren Servers.

One might think that which particular Siren Servers sit at the head of the pack is a matter of which is the most innovative. Not really. Rather, the largest Siren Servers have become so rich they simply swallow any innovative company that comes along. FaceBook gobbled up Instagram because it offered a novel and increasingly popular way to share photos.

The second way a free service like Instagram is paid for, and this is one of the primary concerns of Lanier in his book, is that it essentially cannibalizes, to the point of destruction, the industry that used to provide the service- an industry which, in the “old economy”, also supported lots of middle class jobs.

Lanier states the problem bluntly:

 Here’s a current example of the challenge we face. At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography is Instagram. When Instagram was sold to FaceBook for a billion dollars in 2012, it employed only thirteen people.  (p.2)

Calling Thomas Piketty….

As Bill Davidow argued recently in The Atlantic, the size of this virtual economy, where people share and get free stuff in exchange for their private data, is now so big that it is giving us a distorted picture of GDP. We can no longer be sure how fast our economy is growing. He writes:

 There are no accurate numbers for the aggregate value of those services but a proxy for them would be the money advertisers spend to invade our privacy and capture our attention. Sales of digital ads are projected to be $114 billion in 2014, about twice what Americans spend on pets.

The forecasted GDP growth in 2014 is 2.8 percent and the annual historical growth rate of middle quintile incomes has averaged around 0.4 percent for the past 40 years. So if the government counted our virtual salaries based on the sale of our privacy and attention, it would have a big effect on the numbers.

Fans of Joseph Schumpeter might see all this churn as capitalism’s natural creative destruction, and be unfazed by the government’s inability to measure this “off the books” economy, because what the government cannot see it cannot tax.

The problem is, unlike other times in our history, technological change doesn’t seem to be creating new middle class jobs as fast as it destroys old ones. Lanier was particularly sensitive to this development because he always had his feet in two worlds- the world of digital technology and the world of music. Not the Katy Perry world of superstar music, but the world of people who made a living selling local albums, playing small gigs, and, even more importantly, providing the services that made this mid-level musical world possible. Lanier had seen how the digital technology he loved and helped create had essentially destroyed the middle class world of musicians he also loved and had grown up in. His message for us all was that the Siren Servers are coming for you.

The continued advance of Moore’s Law, which, according to Charlie Stross, will play out for at least another decade or so, means not so much that we’ll achieve AGI, but that machines will be just smart enough to automate some of the functions we had previously thought only human beings were capable of performing. I’ll give an example of my own. For decades the GED test, which people pursue to obtain a high school equivalency diploma, has had an essay section. Thousands of people were needed to score these essays by hand, the majority of whom were presumably paid to do so. With the new, computerized GED test this essay scoring has been completely automated, and human readers made superfluous.
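To make the scale of that shift concrete, here is a minimal, purely hypothetical sketch of feature-based essay scoring in Python. It is not the GED’s actual (proprietary) engine; the features, weights, and score scale are all invented, and real automated scorers are far more sophisticated. It only illustrates how surface features that merely correlate with quality can stand in for a human reader:

```python
# A toy illustration of "just smart enough" automation: no understanding of
# the essay, only surface features that tend to correlate with quality.
# Hypothetical sketch; NOT the actual GED scoring engine.

import re

def score_essay(text: str) -> float:
    """Return a crude 0-4 score computed from surface features alone."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0

    length_score = min(len(words) / 300, 1.0)        # longer essays score higher, up to a cap
    vocab_score = len(set(words)) / len(words)       # lexical variety
    avg_sentence = len(words) / len(sentences)
    structure_score = 1.0 if 10 <= avg_sentence <= 25 else 0.5  # "reasonable" sentence length

    return round(4 * (0.4 * length_score + 0.3 * vocab_score + 0.3 * structure_score), 2)

if __name__ == "__main__":
    sample = ("The industrial revolution transformed daily life. "
              "Travel, medicine, and work changed almost beyond recognition.")
    print(score_essay(sample))
```

Nothing in that sketch understands an argument; it simply counts. Yet something of roughly this character, scaled up, is enough to make thousands of human scorers unnecessary.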

This brings me to the third way these new digital capabilities are paid for. They cannibalize work human beings have already done in order to profit a company that presents and sells that work as a form of artificial intelligence. As Lanier writes of Google Translate:

It’s magic that you can upload a phrase in Spanish into the cloud services of a company like Google or Microsoft, and a workable, if imperfect, translation to English is returned. It’s as if there’s a polyglot artificial intelligence residing up there in that great cloud of server farms.

But that is not how cloud services work. Instead, a multitude of examples of translations made by real human translators are gathered over the Internet. These are correlated with the example you send for translation. It will almost always turn out that multiple previous translations by real human translators had to contend with similar passages, so a collage of those previous translations will yield a usable result.

A giant act of statistics is made virtually free because of Moore’s Law, but at core the act of translation is based on real work of people.

Alas, the human translators are anonymous and off the books. (19-20)
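To see how far such aggregation is from “a polyglot artificial intelligence”, consider a deliberately tiny sketch. The phrase corpus below is invented, the matching is crude word overlap, and real statistical translation works over billions of professionally translated sentences, but the principle of matching an input against prior human work is the same:

```python
# A minimal sketch of the "collage of previous human translations" Lanier
# describes. The corpus below is a stand-in for work done by real,
# uncredited translators; the matching is crude word overlap, not real MT.

HUMAN_TRANSLATIONS = {  # hypothetical examples, not a real corpus
    "buenos dias": "good morning",
    "donde esta la biblioteca": "where is the library",
    "donde esta el hotel": "where is the hotel",
    "la biblioteca esta cerrada": "the library is closed",
}

def translate(phrase: str) -> str:
    """Return the stored human translation whose source best overlaps the input."""
    words = set(phrase.lower().split())
    best_source = max(HUMAN_TRANSLATIONS,
                      key=lambda src: len(words & set(src.split())))
    return HUMAN_TRANSLATIONS[best_source]

print(translate("donde esta la biblioteca"))  # -> "where is the library"
```

Every answer the function returns was originally written by a person; the machine only retrieves and recombines. That, in miniature, is Lanier’s point about where the value actually comes from.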

The question all of us should be asking ourselves is not “could a machine be me?”, with all of our complexity and skills, but “could a machine do my job?”, the answer to which, in nine cases out of ten, is almost certainly “yes!”

Okay, so that’s the problem; what is Lanier’s solution? His solution is not that we pull a Ned Ludd and break the machines, or even try to slow down Moore’s Law. Instead, what he wants us to do is start treating our personal data like property. If someone wants to know my buying habits they have to pay a fee to me, the owner of this information. If some company uses my behavior to refine their algorithm I need to be paid for this service, even if I was unaware I had helped in such a way. Lastly, anything I create and put on the Internet is my property. People are free to use it as they choose, but they need to pay me for it. In Lanier’s vision each of us would be the recipient of a constant stream of micropayments from the Siren Servers who are using our data and our creations.
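A toy sketch of the bookkeeping such a scheme implies might look like the following. Everything here is hypothetical: the per-use rate, the names, the events, and above all the assumption that each use of someone’s data or creation can even be traced back to them, which is itself the hard part Lanier’s proposal glosses over:

```python
# Toy bookkeeping for Lanier-style micropayments: every recorded use of a
# person's data or creative work accrues a small credit to that person.
# Rates, names, and events are invented for illustration only.

from collections import defaultdict

RATE_PER_USE = 0.001  # dollars credited per recorded use (a made-up figure)

class MicropaymentLedger:
    """Tracks who is owed what for uses of their data and creations."""

    def __init__(self):
        self.balances = defaultdict(float)
        self.history = []

    def record_use(self, person, description, uses=1):
        """Credit a person each time their data or creative work is used."""
        credit = uses * RATE_PER_USE
        self.balances[person] += credit
        self.history.append((person, description, credit))

    def statement(self):
        return dict(self.balances)

ledger = MicropaymentLedger()
ledger.record_use("alice", "photos used to train an image recognizer", uses=500)
ledger.record_use("bob", "past translations reused by a cloud translator", uses=120)
print(ledger.statement())  # roughly {'alice': 0.5, 'bob': 0.12}, give or take float rounding
```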

Such a model is very interesting to me, especially in light of other fights over data ownership, namely the rights of indigenous people against bio-piracy, something I was turned on to by Paolo Bacigalupi’s bio-punk novel The Windup Girl, and what promises to be an increasing fight between pharmaceutical/biotech firms and individuals over the use of what is becoming mountains of genetic data. Nevertheless, I have my doubts as to Lanier’s alternative system and will lay them out in what follows.

For one, such a system seems likely to exacerbate rather than relieve the problem of rising inequality. Assuming most of the data people will receive micropayments for will be banal and commercial in nature, people who are already big spenders are likely to get a much larger cut of the micropayments pie. If I could afford such things, it’s no doubt worth a lot for some extra piece of information to tip the scales between me buying a Lexus or a Beemer; not so much if it’s a question of Tide vs. Whisk.

This issue might be solved if Lanier adopted the model of a shared public pool of funds into which micropayments would flow, rather than routing them to the particular individual involved, but he couldn’t do this given his commitment to the idea that personal data is a form of property. Don’t let his dreadlocks fool you: Lanier is at bottom a conservative thinker. Such a pooled fee might also balance out the glaring problem that Siren Servers effectively pay zero taxes.

But by far the biggest hole in Lanier’s micropayment system is that it ignores the international dimension of the Internet. Silicon Valley companies may be doubling down on their model, as can be seen in Amazon’s recent foray into the smartphone market, which attempts to route everything through itself, but the model has crashed globally. Three events signal the crash: Google was essentially booted out of China; the Snowden revelations threw a pall of suspicion over the model in an already privacy-sensitive Europe; and the EU itself handed the model a major loss with the “right to be forgotten” case in Spain.

Lanier’s system, which accepts mass surveillance as a fact, probably wouldn’t fly in a privacy-conscious Europe, and how in the world would we force Chinese and other digital pirates to provide payments of any scale? China and other authoritarian countries have their own plans for their Siren Servers, namely, their use as tools of the state.

The fact of the matter is there is probably no truly global solution to continued automation and algorithmization, or to mass surveillance. Yet the much feared “splinter-net”, the shattering of the global Internet, may be better for freedom than many believe. This is because the Internet, and the Siren Servers that run it, once freed from its spectral existence in the global ether, becomes the responsibility of real, territorially bound people to govern. Each country will ultimately have to decide for itself both how the Internet is governed and how to respond to the coming wave of automation. There’s bound to be diversity because countries are diverse; some might even leap over Lanier’s conservatism and invent radically new, and more equitable, ways of running an economy, an outcome many of the original digitopians who set this train a-rolling might actually be proud of.

 

Jumping Off The Technological Hype-Cycle and the AI Coup

Robotic Railroad, 1950s

What we know is that the very biggest tech companies have been pouring money into artificial intelligence over the last year. Back in January Google bought the UK artificial intelligence firm DeepMind for 400 million dollars. Only a month earlier, Google had bought the innovative robotics firm Boston Dynamics. FaceBook is in the game as well, having created a massive lab devoted to artificial intelligence in December 2013. And this new obsession with AI isn’t only something latte-pumped-up Americans are into. The Chinese internet giant Baidu, with its own AI lab, recently snagged the artificial intelligence researcher Andrew Ng, whose work for Google included the breakthrough of creating a program that could teach itself to recognize pictures of cats on the Internet, and the word “breakthrough” is not intended to be the punch line of a joke.

Obviously these firms see something that makes these big bets and the competition for talent seem worthwhile, the most obvious thing being advances in an approach to AI known as Deep Learning, which moves programming away from a logical set of instructions and towards the kind of bulking and pruning found in biological forms of intelligence. Will these investments prove worth it? We should know in just a few years, yet we simply don’t right now.
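The contrast between the two styles of programming can be shown with a deliberately tiny example: an OR function written as an explicit rule versus a single weight-and-threshold unit that adjusts its parameters from examples. The task, learning rate, and epoch count are arbitrary illustrative choices, and this is nothing like the scale of real deep learning, which stacks millions of such units, but it hints at what “bulking and pruning” means in practice:

```python
# A toy contrast: an explicit hand-written rule vs. parameters adjusted
# from examples. Real deep learning stacks millions of such units; this
# lone unit only hints at the shift from written logic to learned weights.

def rule_based_or(x1: int, x2: int) -> int:
    """The 'logical set of instructions' style: the programmer writes the rule."""
    return 1 if (x1 == 1 or x2 == 1) else 0

def train_learned_or(examples, epochs: int = 20):
    """The learning style: weights grow and shrink until the examples are satisfied."""
    w1, w2, bias, lr = 0.0, 0.0, 0.0, 0.1
    for _ in range(epochs):
        for x1, x2, target in examples:
            output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - output
            w1 += lr * error * x1      # strengthen or weaken connections
            w2 += lr * error * x2
            bias += lr * error
    return lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

examples = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]  # the OR function
learned_or = train_learned_or(examples)
print([learned_or(a, b) == rule_based_or(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```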

No matter how it turns out we need to beware of becoming caught in the technological hype-cycle. A tech investor, or tech company for that matter, needs to be able to ride the hype-cycle like a surfer rides a wave- when it goes up, she goes up, and when it comes down she comes down- with the key being to position oneself in just the right place, neither too far ahead nor too far behind. The rest of us, however, and especially those charged with explaining science and technology to the general public, namely science journalists, have a very different job: to parse the rhetoric and figure out what is really going on.

A good example of what science journalism should look like is a recent conversation over at Bloggingheads between Freddie deBoer and Alexis Madrigal. As Madrigal points out, we need to be cognizant of what the recent spate of AI wonders we’ve seen actually are. Take the much over-hyped Google self-driving car. It seems much less impressive once you know that the areas where these cars are functional are only those areas that have been mapped beforehand in painstaking detail. The car guides itself not through “reality” but through a virtual world, and anything out of order in that world can only be met with a limited set of pre-programmed responses. The car thus only functions in the context of a mindbogglingly precise map of the area in which it is driving- as if you were unable to make your way through a room unless you knew exactly where every piece of furniture was located. In other words, Google’s self-driving car is undriveable in almost all situations that could be handled by a sixteen-year-old who just learned how to drive. “Intelligence” in a self-driving car is a question of gathering massive amounts of data up front. Indeed, the latest iteration of the Google self-driving car is less an automobile driven by a human being, able to go anywhere without any foreknowledge of the terrain so long as there is a road to drive upon, than a tiny trolley car where information is the “track”.
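A toy sketch makes that dependence plain. The “map”, coordinates, and objects below are invented, and Google’s actual system is of course vastly more elaborate, but the structural point stands: the vehicle can only act at locations that were surveyed ahead of time.

```python
# A toy illustration of map-dependence: the vehicle can only respond to
# locations and obstacles that were surveyed ahead of time. Coordinates
# and objects are invented examples, not Google's system.

PREBUILT_MAP = {
    (0, 0): "lane start",
    (0, 1): "crosswalk",
    (0, 2): "parked car zone",
    (0, 3): "intersection",
}

def next_action(position, sensed_object):
    """Act only if the position was mapped beforehand; otherwise give up."""
    if position not in PREBUILT_MAP:
        return "stop: terrain not in prior map"   # a new driver would simply keep going
    expected = PREBUILT_MAP[position]
    if sensed_object and sensed_object != expected:
        return f"slow down: unexpected {sensed_object} at mapped {expected}"
    return f"proceed through {expected}"

print(next_action((0, 1), None))              # proceed through crosswalk
print(next_action((0, 1), "delivery truck"))  # slow down: unexpected delivery truck ...
print(next_action((5, 5), None))              # stop: terrain not in prior map
```

A human driver carries no such map and copes anyway, which is why the information “track” metaphor fits better than the word “driver”.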

As Madrigal and deBoer also point out in another example, the excellent service of Google Translate isn’t really using machine intelligence to decode language at all. It’s merely aggregating the efforts of thousands of human translators to arrive at approximate results. Again, there is no real intelligence here, just an efficient way to sort through an incredibly huge amount of data.

Yet, what if this tactic of approaching intelligence by “throwing more data at it” ultimately proves a dead end? There may come a point where such a strategy shows increasingly limited returns. The fact of the matter is that we know of only one fully sentient creature- ourselves- and the more-data strategy is nothing like how our own brains work. If we really want to achieve machine intelligence, and it’s an open question whether this is a worthwhile goal, then we should be exploring at least some alternative paths to that end, such as those long espoused by Douglas Hofstadter, the author of the amazing Gödel, Escher, Bach and The Mind’s I, among others.

Predictions about the future capacities of artificially intelligent agents are all predicated on the continued exponential rise in computer processing power. Yet these predictions are based on some less than solid assumptions, the first being that we are nowhere near hard limits to the continuation of Moore’s Law. What this assumption ignores are the increased rumblings that Moore’s Law might be in hospice and destined for the morgue.

But even if no such hard limits are encountered in terms of Moore’s Law, we still have the unproven assumption that greater processing power, almost all by itself, leads to intelligence, or even is guaranteed to bring incredible changes to society at large. The problem here is that sheer processing power doesn’t tell you all that much. Processing power hasn’t brought us machines that are intelligent so much as machines that are fast, nor are the increases in processing power themselves all that relevant to what the majority of us can actually do. As we are often reminded, all of us carry in our pockets, or have sitting on our desktops, computational capacity that exceeds all of NASA’s in the 1960s, yet clearly this doesn’t mean that any of us are thereby capable of sending men to the moon.

AI may be in a technological hype-cycle- again, we won’t really know for a few years- but the danger of any hype-cycle for an immature technology is that it gets crushed as the wave comes down. In a hype-cycle, initial progress in some field is followed by a private sector investment surge and then a transformation of the grant writing and academic publication landscape, as universities and researchers desperate for dwindling research funding try to align their research focus with a new and sexy field. Eventually progress comes to the attention of the general press and gives rise to fawning books and maybe even a dystopian Hollywood movie or two. Once the public is on to it, the game is almost up, for research runs into headwinds and progress fails to meet the expectations of a now profit-fix-addicted market and funders. In the crash many worthwhile research projects end up in the dustbin and funding flows to the new sexy idea.

AI itself went through a similar hype-cycle in the 1980s, back when Hofstadter was writing his Gödel, Escher, Bach, but we have had a spate of more recent candidates. Remember in the 1990s when seemingly every disease and every human behavior was being linked to a specific gene, promising targeted therapies? Well, as almost always, we found out that reality is more complicated than the current fashion. The danger here was that such a premature evaluation of our knowledge led to all kinds of crazy fantasies and nightmares. The fantasy that we could tailor-design human beings by selecting specific genes led to what amounted to some pretty egregious practices, namely the sale of selection services for unborn children based on spurious science- a sophisticated form of quackery. It also led to childlike nightmares, such as those found in the movie Gattaca or Francis Fukuyama’s Our Posthuman Future, in which we were frightened with the prospect of a dystopian future where human beings would be designed like products, a nightmare that was supposed to be just over the horizon.

We now have the field of epigenetics to show us what we should have known: that both genes and environment count, and that we therefore have to address both, and that the world is too complex for us to ever assume complete sovereignty over it. In many ways it is the complexity of nature itself that is our salvation, protecting us from both our fantasies and our fears.

Some other examples? How about MOOCs, which were supposed to be as revolutionary as the invention of universal education or the university? Being involved in distance education for non-university-attending adults, I had always known that the most successful model for online learning was a “blended” one- some face-to-face, some online- and that “soft” study skills were as important to student success as academic ability. The MOOC model largely avoided these hard-won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players, Udacity, losing its foothold. Andrew Ng, the AI researcher scooped up by Baidu whom I mentioned earlier, is just one of a number of high-level MOOC refugees, having helped found Coursera.

The so-called Internet of Things is probably another example of getting caught on the hype-cycle. The IoT is the idea that people are going to be clamoring to connect all of their things- their homes, refrigerators, cars, and even their own bodies- to the Internet in order to be able to constantly monitor those things. The holes in all this are not only that we are already drowning in a deluge of data, or that it’s pretty easy to see how the automation of consumption only benefits those providing the service if we’re either buying more stuff or the automators are capturing a “finder’s fee”; it’s above all that anything connected to the Internet is by that fact hackable, and who in the world wants their home or their very body hacked? This isn’t a paranoid fantasy of the future, as a recent skeptical piece on the IoT in The Economist pointed out:

Last year, for instance, the United States Fair Trade Commission filed a complaint against TrendNet, a Californian marketer of home-security cameras that can be controlled over the internet, for failing to implement reasonable security measures. The company pitched its product under the trade-name “SecureView”, with the promise of helping to protect owners’ property from crime. Yet, hackers had no difficulty breaching TrendNet’s security, bypassing the login credentials of some 700 private users registered on the company’s website, and accessing their live video feeds. Some of the compromised feeds found their way onto the internet, displaying private areas of users’ homes and allowing unauthorised surveillance of infants sleeping, children playing, and adults going about their personal lives. That the exposure increased the chances of the victims being the targets of thieves, stalkers or paedophiles only fuelled public outrage.

Personalized medicine might be considered a cousin of the IoT, and while it makes perfect sense to me for persons with certain medical conditions, or even just an interest in their own health, to monitor themselves or be monitored and connected to health care professionals, such systems will most likely be closed networks to avoid the risk of some maleficent nerd turning off your pacemaker.

Still, personalized medicine itself might be yet another example of the magnetic power of hype. It is one thing to tailor a patient’s treatment based on how others with similar genomic profiles reacted to some pharmaceutical and the like. What would be most dangerous, in terms of health care costs both to individuals and to society, would be something like the “personalized” care for persons with chronic illnesses profiled in the New York Times this April, where, for instance, the:

… captive audience of Type 1 diabetics has spawned lines of high-priced gadgets and disposable accouterments, borrowing business models from technology companies like Apple: Each pump and monitor requires the separate purchase of an array of items that are often brand and model specific.

A steady stream of new models and updates often offer dubious improvement: colored pumps; talking, bilingual meters; sensors reporting minute-by-minute sugar readouts. Ms. Hayley’s new pump will cost $7,350 (she will pay $2,500 under the terms of her insurance). But she will also need to pay her part for supplies, including $100 monitor probes that must be replaced every week, disposable tubing that she must change every three days and 10 or so test strips every day.

The technological hype-cycle gets its rhetoric from the one technological transformation that actually deserves the characterization of a revolution. I am talking, of course, about the industrial revolution, which certainly transformed human life almost beyond recognition from what came before. Every new technology seemingly ends up making its claim to be “revolutionary”, as in absolutely transformative. Just in my lifetime we have had the IT, or digital, revolution, the Genomics Revolution, the Mobile Revolution, and the Big Data Revolution, to name only a few. Yet the fact of the matter is that not only has no single one of these revolutions proven as transformative as the industrial revolution; arguably, all of them combined haven’t matched the industrial revolution either.

This is the far too often misunderstood thesis of economists like Robert Gordon. Gordon’s argument, at least as far as I understand it, is not that current technological advancements aren’t a big deal, just that the sheer qualitative gains seen in the industrial revolution are incredibly difficult to sustain, let alone surpass.

The enormity of the change from a world where it took years, as it took Magellan propelled by the winds, rather than days to circle the globe is hard to get our heads around; the gap between using a horse and using a car for daily travel is just as incredible. The average lifespan has doubled since the 1800s. One in five children born once died in childhood. There were no effective anesthetics before 1846. Millions would die from an outbreak of the flu or other infectious disease. Hunger and famine were common human experiences, however developed one’s society, up until the 20th century, and indoor toilets were not common until then either. Vaccinations did not emerge until the late 19th century.

Bill Gates has characterized views such as those of Gordon as “stupid”. Yet he himself is a Gordonite, as evidenced by this quote:

But asked whether giving the planet an internet connection is more important than finding a vaccination for malaria, the co-founder of Microsoft and world’s second-richest man does not hide his irritation: “As a priority? It’s a joke.”

Then, slipping back into the sarcasm that often breaks through when he is at his most engaged, he adds: “Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.”

And this is really all I think Gordon is saying: that the “revolutions” of the past 50 years pale in comparison to the effects on human living of the period between 1850 and 1950, and that this is the case even if we accept that the pace of technological change is accelerating. It is as if we are running faster and faster at the same time the hill in front of us gets steeper and steeper, so that truly qualitative change in the human condition has become more difficult even as our technological capabilities have vastly increased.

For almost two decades we’ve thought that the combined effects of three technologies in particular- robotics, genetics, and nanotech- were destined to bring qualitative change on the order of the industrial revolution. It’s been fourteen years since Bill Joy warned us that these technologies threatened us with a future without human beings in it, but it’s hard to see how even a positive manifestation of the transformations he predicted has come true. This is not to say that they will never bring such a scale of change, only that they haven’t yet, and fourteen years isn’t nothing after all.

So now, after that long and winding road, back to AI. Erik Brynjolfsson and Andrew McAfee, the authors of the most popular recent book on the advances in artificial intelligence over the past decade, The Second Machine Age, take aim directly at the technological pessimism of Gordon and others. They are firm believers in the AI revolution and its potential. For them, innovation in the 21st century is no longer about brand new breakthrough ideas but, borrowing from biology, about the recombination of ideas that already exist. In their view, we are being “held back by our inability to process all the new ideas fast enough”, and therefore one of the things we need is even bigger computers to test out new ideas and combinations of ideas. (82)

But there are other conclusions one might draw from the metaphor of innovation as “recombination”.  For one, recombination can be downright harmful for organisms that are actually working. Perhaps you do indeed get growth from Schumpeter’s endless cycle of creation and destruction, but if all you’ve gotten as a consequence are minor efficiencies at the margins at the price of massive dislocations for those in industries deemed antiquated, not to mention society as a whole, then it’s hard to see the game being worth the candle.

We’ve seen this pattern in financial services, in music, and in journalism, and it is now desired in education and healthcare. Here innovation is used not so much to make our lives appreciably better as to upend the traditional stakeholders in an industry so that those with the biggest computers, what Jaron Lanier calls “Siren Servers”, can swoop in and take control. A new elite stealing an old elite’s thunder wouldn’t matter all that much to the rest of us peasants were it not for the fact that this new elite’s system of production has little room for us as workers and producers, only as consumers of its goods. It is this desire to destabilize, reorder and control the institutions of pre-digital capitalism, and to squeeze expensive human beings out of the process of production, that is the real source of the push for intelligent machines, the real force behind the current “AI revolution”- but given its nature, we’d do better to call it a coup.

Correction:

The phrase above: “The MOOC model largely avoided these hard-won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players, Udacity, losing its foothold.”

Originally read: “The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY  closing up shop, as a result.”

Defining Home

One would be hard-pressed to find two thinkers as distinct as Jane Jacobs and Jaron Lanier. Jacobs, who passed away in 2006, was a thinker concerned with the concrete reality of real-world communities and, most especially, how to preserve them. Lanier, a pioneer in the field of virtual reality who coined the phrase, has deep ties to the culture of Silicon Valley. This is why I found it so surprising, upon reading relatively recent books from both of these authors, that they provided an almost synergistic perspective in which each author appeared to inform the work of the other, resulting in a more comprehensive whole.

I’ll start with Jane Jacobs. The purpose of her last and by far most pessimistic book, Dark Age Ahead, published in 2004, was to identify what she saw as some major dystopian trends in the West that, if not checked, might result in the appearance of a new dark age. Jacobs gives what is perhaps one of the best descriptions of a dark age that I have ever seen: a state of “mass amnesia” in which not only have certain aspects of a culture been lost, but the fact that these aspects have been lost is forgotten as well.

In Dark Age Ahead, Jacobs identifies five dystopian trends which she thinks are leading us down the path to a new dark age: the decline of communities and the family, the decline of higher education, the decline of science, the failure of government, and the decay of culture. One of the things that makes Jacobs so interesting is that she defies ideological stereotypes. Looking at the world from the perspective of the community allows her to cast her net much wider in the search for explanations than what emerges from “think tanks” of both the right and the left. Want a reason for the decline of the family? How about consumerism, the need for two incomes, and the automobile, rather than the right’s claim of declining moral standards. Want a reason for the failure of government? What about the loss of taxing authority by local governments to the national government, and the innate inability of national bureaucracies to craft effective policies based on local conditions, rather than, as some on the left would have it, the need for a more expansive federal government.

Jacobs’ unique perspective gained her prescience. Over three years before the housing bubble burst and felled the US economy she was able to see the train wreck coming (DA, p. 32). This perspective grows out of her disdain for ideology, which is one of her main targets in Dark Age Ahead. Something like ideology can be seen in what Jacobs understands to be the decline of science. Openness to feedback from the real world is the cornerstone of true science, but, in what Jacobs sees as a far too frequent occurrence, scientists, especially social scientists, ignore such feedback because it fails to conform to the reigning paradigm. Another danger is when fields of knowledge without any empirical base at all successfully pass themselves off as “science”.

But where the negative effect of ideology is most apparent is at the level of national government where the “prefabricated answers” ideology provides become one-size-fits-all “solutions” that are likely to fail, firstly, because profound local differences are ignored, and secondly, because national imperatives and policies emerge from bureaucratic or corporate interests that promote or mandate solutions to broad problems that end up embedding their own ideology and agenda, rather than actually addressing the problem at hand.

Sometimes we are not even aware that policies from distant interests are being thrust upon us. Often what are in fact politically crafted policies reflecting some interest have the appearance of having arisen organically as the product of consumer choice. Jacobs illustrates this by showing how the automobile-centric culture of the US was largely the creation of the automobile industry, which pushed for the dismantling of much of the public transportation system in American cities. Of course, the federal government played a huge role in the expansion of the automobile as well, but it did not do so in order to address the question of what would be the best transportation system to adopt; it did so as a means of fostering national security and, less well known, of promoting the goal of national full employment, largely blind to whatever negative consequences might emerge from such a policy.

Jacobs’ ideas regarding feedback- whether as the basis of real science or as the foundation of effective government policies- have some echoes, I think, of the conservative economist Friedrich Hayek. Both Hayek and Jacobs favored feedback systems- the market, in Hayek’s case, or, for Jacobs, the community (which includes the economy but is also broader)- over the theories and policies crafted and imposed by distant experts.

A major distinction, I think, is that whereas Jacobs looked to provide boundaries to effective systems of feedback- her home city of Toronto was one such feedback system, rather than the economy of all of Canada, North America, or the world- Hayek, emerging from the philosophy of classical liberalism, focused his attention sharply on economics rather than broadening his view to include things such as the education system, institutions of culture and the arts, or local customs. Jacobs saw many markets limited in geographic scope; Hayek saw the MARKET, a system potentially global in scale that, given the adoption of free trade, would constitute a real, as opposed to a politically distorted, feedback system covering the whole earth. Jacobs is also much more attuned to areas that appear on the surface to be driven by market mechanisms- such as the idea that consumer choice led to the widespread adoption of the automobile in the US- but on closer inspection are shown to be driven by influence upon, or decisions taken by, national economic and political elites.

Help from anyone deeply familiar with either Hayek or Jacobs in clarifying my thoughts here would be greatly appreciated, but now back to Lanier.

Just as Jacobs sees a naturally emergent complexity to human environments such as cities, a complexity that makes any de-contextualized form of social engineering likely to end in failure, Lanier, in his 2010 manifesto You Are Not a Gadget, applies an almost identical idea to the human person, and challenges the idea that any kind of engineered “human-like” artificial intelligence will manage to make machines like people. Instead, Lanier claims, by trying to make machines like people we will make people more like machines.

Lanier is not claiming that there is a sort of “ghost in the machine” that makes human beings distinct. His argument is instead evolutionary:

I believe humans are the result of billions of years of implicit, evolutionary study in the school of hard knocks. The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality. (157)

Both human communities and individuals, these authors seem to be suggesting, are the products of deep and largely non-replicable processes. Imagine what it would truly mean to replicate, as software, the city of Rome. It is easy enough to imagine that we could reproduce in amazing detail the architecture and landscape of the city, but how on earth would we replicate all the genealogical legacies that go into a city- its history, culture, language- not to mention the individuals who are the carriers of such legacies? The layers that have gone into making Rome what it is stretch deep back into human, biological, and physical time: beginning with the Big Bang, the formation of the Milky Way, our sun, the earth, life on earth through the eons up until human history, prehistoric settlements, the story of the Roman Republic and Empire, the Catholic Church, Renaissance city-states, national unification, Mussolini’s fascist dictatorship, down to our own day. Or, to quote Lanier: “What makes something fully real is that it is impossible to represent it to completion”. (134)

Lanier thinks the fact that everything is represented in bits has led to the confusion that everything is bits. The result of this type of idolatry is for representation and reality to begin to part company, a delusion which he thinks explains the onset of the economic crisis in 2008. (It’s easy to see why he might think this, when the crisis was engendered by financial Frankensteins such as credit default swaps, which displaced traditional mortgages where the borrower’s credit was a reality lenders were forced to confront when granting a loan.)

Lanier also thinks it is far beyond our current capacity to replicate human intelligence in the form of software, and that when it appears we have actually done so, what we have in fact achieved is a massive reduction in complexity, one which has likely stripped away the most human aspects of whatever quality or activity we are trying to replicate in machines. Take the case of chess, where the psychological aspects of the game are stripped away to create chess-playing machines and the game is reduced to the movement of pieces around a board. Of course, even in this case it really isn’t the chess-playing machine that has won but the human engineers and programmers behind it, who have figured out how to make and program such a machine. Lanier doesn’t even think it is necessary to locate a human activity on a machine for that activity to be stripped of its human elements. He again uses the case of chess, only this time chess played against a grandmaster not by a machine but by a crowd, wherein individual choices are averaged out to choose the move of the crowd “player”. He wants us to ask whether the human layer of chess- the history of the players, their psychological understanding of their opponent- is still in evidence in the game-play of this “hive-mind”. He thinks not.

Like Jacobs and her example of the origins of the US transportation system in the machinations of the automotive industry and the influence of the American government, which promoted an economy built around the automobile for reasons that had nothing to do with transportation as such- namely national security and the desire for full employment- Lanier sees the current state of computer technology and software not as a determined outcome, but as a conscious choice that has been imposed upon the broader society by technologists. What he sees as dangerous here is that any software architecture is built upon a certain number of assumptions that amount to a philosophy, something he calls “digital lock-in”. That philosophy then becomes the technological world in which we live, without our ever having had any broader discussion in society about whether this is truly what we want.

Examples of such assumptions are the non-value of privacy, and the idea that everything is a vehicle for advertising. Lanier thinks the current treatment of content producers as providers of a shell for advertisement is driving artists to the wall. The fact is, we all eventually become stuck with these models once they become universal. We all end up using FaceBook and Google because we have to if we want to participate in the online world. But we should realize that the assumptions behind these architectures were a choice, and did not have to be this way.

It is my hope that, in terms of the Internet, the market and innovation will provide solutions to these problems, even the problem of how artists and writers are to find a viable means of living amid ubiquitously copyable content. But markets are far from perfect and, as Jacobs’ example of the development of the US transportation system shows, are far too often distorted by political manipulation.

A great example of this is the monopolization of the world’s agriculture by a handful of mammoth agribusinesses, a phenomenon detailed by Giulio Caperchi of the blog The Genealogy of Consent. In his post Food Sovereignty, Caperchi details how the world food system is dominated by a small number of global firms and international organizations. He also introduces the novel concept of epistemological sovereignty: “the right to define what systems of knowledge are best suited for particular contexts”. These are ideas that are desperately needed, for if Lanier is right, we are about to embark on an even more dangerous experiment by applying the assumptions of computer science to the natural world, and he cites an article by one of the patriarchs of 20th century physics- Freeman Dyson- to show us that this is so.

There must be something between me and Freeman Dyson, for this is the second time in a short period that I have run into the musings of the man, first while doing research for a post I wrote on the science-fiction novel Accelerando, and now here. In Our Biotech Future, Dyson lays out what he thinks will be the future not just of the biotech industry and the biological sciences but of life itself.

Citing an article by Carl Woese on “the golden age” of life before species had evolved, when gene transfer between organisms was essentially unbounded and occurred rapidly, Dyson writes:

But then, one evil day, a cell resembling a primitive bacterium happened to find itself one jump ahead of its neighbors in efficiency. That cell, anticipating Bill Gates by three billion years, separated itself from the community and refused to share. Its offspring became the first species of bacteria and the first species of any kind reserving their intellectual property for their own private use.

And now, as Homo sapiens domesticates the new biotechnology, we are reviving the ancient pre-Darwinian practice of horizontal gene transfer, moving genes easily from microbes to plants and animals, blurring the boundaries between species. We are moving rapidly into the post-Darwinian era, when species other than our own will no longer exist, and the rules of Open Source sharing will be extended from the exchange of software to the exchange of genes. Then the evolution of life will once again be communal, as it was in the good old days before separate species and intellectual property were invented.

Dyson looks forward to an age when:

Domesticated biotechnology, once it gets into the hands of housewives and children, will give us an explosion of diversity of new living creatures, rather than the monoculture crops that the big corporations prefer. New lineages will proliferate to replace those that monoculture farming and deforestation have destroyed. Designing genomes will be a personal thing, a new art form as creative as painting or sculpture.

Dyson, like Lanier and Jacobs, praises complexity: he thinks swapping genes is akin to cultural evolution, which is more complex than biological evolution, and that the new biological science, unlike much of the physical sciences, will need to reflect this complexity. What he misses, and what both Jacobs and Lanier understand, is that the complexity of life does not emerge just from combination, but from memory, which acts as a constraint and limits choices. Rome is Rome, a person is a person, a species is a species because choices were made which have closed off alternatives.

Dyson is also looking at life through the eyes of the same reductionist science he thinks has reached its limits: I want to make a kitten that glows in the dark, so I insert a firefly gene, and so on. In doing this he is almost oblivious to the fact that in complex systems the consequences are often difficult to predict beforehand, and some could be incredibly dangerous both for natural animals and plants and the ecosystems they live in, and for us human beings as well. Some of this danger will come from bio-terrorism- persons deliberately creating organisms to harm other people- and this would include any reinvigorated effort to develop such weapons on behalf of states as much as the evil intentions of any nihilistic group or individual. Still, a good deal of the danger from such a flippant attitude towards the re-engineering of life would arise from the unintended consequences of our actions. One might counter that we have been doing this re-engineering at least since we domesticated plants and animals, and we have, though not on anything like the scale Dyson is proposing. Such a counter also forgets that one of the unintended consequences of agriculture was to produce diseases that leapt from domesticated animals to humans and resulted in the premature deaths of millions.

Applying the ideas of computer science to biology creates the assumption that life is software. This is an idea that is no doubt pregnant with discoveries that could improve the human condition, but in the end it is only an assumption- the map, not the territory. Holding to it too closely results in us treating all of life as if it were our plaything, and aggressively rather than cautiously applying the paradigm until, like Jacobs’ decaying cities, Lanier’s straitjacket computer technologies, or Caperchi’s industrialized farming, it becomes the reality we have trapped ourselves in without ever having had a conversation about whether we wanted to live there.