Why the Castles of Silicon Valley are Built out of Sand

Ambrogio Lorenzetti, Temperance with an Hourglass, from the Allegory of Good Government

If you live just long enough, one of the lessons history throws at you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from 100 AD into the 1600s. Perhaps more dreams than not simply refuse to die; they hang on like ghosts or ghouls, zombies or vampires, or whatever freakish version of the undead suits your fancy. Naming them would take up more room than I have, and would no doubt start one too many arguments, all of our lists being different. Here, I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well the dream I’ll declare dead will have its defenders.

The fact of the matter is, I am not even sure what to call the dream I’ll be talking about. Perhaps "digitopia" is best. It was the dream that emerged sometime in the 1980s and went mainstream in the heady 1990s that this new thing we were creating called the "Internet," and the economic model it permitted, was bound to lead to a better world of more sharing, more openness, more equity, if we just let its logic play itself out over a long enough period of time. Almost all the big-wigs in Silicon Valley, the Larry Pages and Mark Zuckerbergs, the Jeff Bezoses and Peter Diamandises, still believe this dream, and walk around like 21st century versions of Mary Magdalene claiming they can still see what more skeptical souls believe has passed.

By far, the best Doubting Thomas of digitopia we have out there is Jaron Lanier. Part of his authority in declaring the dream dead comes from the fact that he was there when the dream was born and was once a true believer. Like Kevin Bacon in Hollywood, take any intellectual heavy hitter of digital culture, say Marvin Minsky, and you’ll find Lanier has some connection to them. Lanier is no Luddite, so when he says there is something wrong with how we have deployed the technology he helped develop, it’s right and good to take the man seriously.

The argument Lanier makes in his most recent book Who Owns the Future? against the economic model we have built around digital technology is, in a nutshell, this: what we have created is a machine that destroys middle class jobs and concentrates information, wealth and power. Say what? Hasn’t the Internet and mobile technology democratized knowledge? Don’t average people have more power than ever before? The answer to both questions is no, and the reason why is that the Internet has been swallowed by its own logic of "sharing".

We need to remember that the Internet really got ramped up when scientists started using it to exchange information with one another. It was built on the idea of openness and transparency, not to mention a set of shared values. When the Internet leapt into public consciousness, no one had any idea how to turn this sharing capacity and transparency into the basis for an economy. It took the aftermath of the dot-com bubble and bust for companies to come up with a model of how to monetize the Internet, and almost all of the major tech companies that dominate the Internet, at least in America, and there are only a handful: Google, Facebook and Amazon, now follow some variant of this model.

The model is to aggregate all the sharing the Internet seems to naturally produce and offer it, along with other "complements," for "free" in exchange for one thing: the ability to monitor, measure and manipulate, through advertising, whoever uses these services. Like silicon itself, it is a model that is ultimately built out of sand.

When you use a free service like Instagram there are three ways it’s ultimately paid for. The first we all know about: the "data trail" we leave when using the site is sold to third party advertisers, which generates income for the parent company, in this case Facebook. The second and third ways the service is paid for I’ll get to in a moment, but the first way itself opens up all sorts of observations and questions that need to be answered.

We had thought the information (and ownership) landscape of the Internet was going to be "flat". Instead, it’s proven to be extremely "spiky". What we forgot in thinking it would turn out flat was that someone would have to gather and make useful the mountains of data we were about to create. The big Internet and telecom companies are these aggregators, able to make this data actionable because they possess the most powerful computers on the planet, which allow them not only to route and store this data but to mine it for value. Lanier has a great name for the biggest of these companies: he calls them Siren Servers.

One might think that which particular Siren Servers sit at the head of the pack is a matter of which is most innovative. Not really. Rather, the largest Siren Servers have become so rich they simply swallow any innovative company that comes along. Facebook gobbled up Instagram because it offered a novel and increasingly popular way to share photos.

The second way a free service like Instagram is paid for, and this is one of the primary concerns of Lanier in his book, is that it essentially cannibalizes to the point of destruction the industry that used to provide the service, which in the “old economy” meant it also supported lots of middle class jobs.

Lanier states the problem bluntly:

 Here’s a current example of the challenge we face. At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography is Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only thirteen people. (p. 2)

Calling Thomas Piketty….

As Bill Davidow argued recently in The Atlantic, the size of this virtual economy, where people share and get free stuff in exchange for their private data, is now so big that it is giving us a distorted picture of GDP. We can no longer be sure how fast our economy is growing. He writes:

 There are no accurate numbers for the aggregate value of those services but a proxy for them would be the money advertisers spend to invade our privacy and capture our attention. Sales of digital ads are projected to be $114 billion in 2014, about twice what Americans spend on pets.

The forecasted GDP growth in 2014 is 2.8 percent and the annual historical growth rate of middle quintile incomes has averaged around 0.4 percent for the past 40 years. So if the government counted our virtual salaries based on the sale of our privacy and attention, it would have a big effect on the numbers.

Fans of Joseph Schumpeter might see all this churn as capitalism’s natural creative destruction, and be unfazed by the government’s inability to measure this "off the books" economy, because what the government cannot see it cannot tax.

The problem is that, unlike at other times in our history, technological change doesn’t seem to be creating new middle class jobs as fast as it destroys old ones. Lanier was particularly sensitive to this development because he has always had his feet in two worlds: the world of digital technology and the world of music. Not the Katy Perry world of superstar music, but the world of people who made a living selling local albums and playing small gigs, and, even more importantly, of those providing the services that made this mid-level musical world possible. Lanier had seen how the digital technology he loved and helped create had essentially destroyed the middle class world of musicians he also loved and had grown up in. His message for us all was that the Siren Servers are coming for you.

The continued advance of Moore’s Law, which, according to Charlie Stross, will play out for at least another decade or so, means not so much that we’ll achieve AGI, but that machines are now just smart enough to automate some of the functions we had previously thought only human beings were capable of doing. I’ll give an example of my own. For decades the GED test, which people pursue to obtain a high school equivalency diploma, has had an essay section. Thousands of people were needed to score these essays by hand, most of whom were presumably paid to do so. With the new, computerized GED test this essay scoring has been completely automated, and human readers made superfluous.

This brings me to the third way these new digital capabilities are paid for. They cannibalize work human beings have already done in order to profit a company that presents and sells its services as a form of artificial intelligence. As Lanier writes of Google Translate:

It’s magic that you can upload a phrase in Spanish into the cloud services of a company like Google or Microsoft, and a workable, if imperfect, translation to English is returned. It’s as if there’s a polyglot artificial intelligence residing up there in that great cloud of server farms.

But that is not how cloud services work. Instead, a multitude of examples of translations made by real human translators are gathered over the Internet. These are correlated with the example you send for translation. It will almost always turn out that multiple previous translations by real human translators had to contend with similar passages, so a collage of those previous translations will yield a usable result.

A giant act of statistics is made virtually free because of Moore’s Law, but at core the act of translation is based on real work of people.

Alas, the human translators are anonymous and off the books. (19-20)
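Lanier’s point, that what gets sold as machine intelligence is really a statistical collage of prior human work, can be made concrete with a toy sketch. The corpus, phrases, and greedy matching below are entirely my own invention for illustration; real translation services use statistical models over vastly larger corpora of human translations, not a lookup table:

```python
# Toy sketch of the "collage of human translations" idea.
# The tiny corpus and the greedy longest-match strategy are illustrative
# assumptions, not how Google Translate is actually implemented.

# Fragments of Spanish paired with translations real humans already made.
CORPUS = {
    ("buenos", "dias"): "good morning",
    ("como", "estas"): "how are you",
    ("el", "gato", "negro"): "the black cat",
}

def translate(spanish: str) -> str:
    """Stitch a 'translation' together from fragments humans translated before."""
    words = spanish.lower().split()
    pieces = []
    i = 0
    while i < len(words):
        # Greedily reuse the longest known human-translated fragment at position i.
        for j in range(len(words), i, -1):
            frag = tuple(words[i:j])
            if frag in CORPUS:
                pieces.append(CORPUS[frag])
                i = j
                break
        else:
            pieces.append(f"[{words[i]}]")  # no prior human work to reuse
            i += 1
    return " ".join(pieces)

print(translate("buenos dias el gato negro"))  # → good morning the black cat
```

The "intelligence" here is nothing but retrieval and recombination of anonymous human labor, which is exactly the sleight of hand Lanier is objecting to.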

The question all of us should be asking ourselves is not "could a machine be me?", with all of our complexity and skills, but "could a machine do my job?", the answer to which, in 9 cases out of 10, is almost certainly "yes!"

Okay, so that’s the problem; what is Lanier’s solution? It is not that we pull a Ned Ludd and break the machines, or even try to slow down Moore’s Law. Instead, he wants us to start treating our personal data like property. If someone wants to know my buying habits, they have to pay a fee to me, the owner of this information. If some company uses my behavior to refine their algorithm, I need to be paid for this service, even if I was unaware I had helped in such a way. Lastly, anything I create and put on the Internet is my property. People are free to use it as they choose, but they need to pay me for it. In Lanier’s vision each of us would be the recipient of a constant stream of micropayments from the Siren Servers that are using our data and our creations.
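The accounting behind such a scheme is simple enough to sketch. The class name, per-use rate, and events below are my own illustrative assumptions; Lanier does not specify an implementation:

```python
# A minimal sketch of Lanier's micropayment idea: every commercial use of a
# person's data credits that person. Rates and events are invented examples.
from collections import defaultdict

class SirenServer:
    def __init__(self, rate_per_use: float = 0.001):
        self.rate = rate_per_use          # fee owed per use of someone's data
        self.ledger = defaultdict(float)  # person -> accumulated micropayments

    def use_data(self, owner: str, uses: int = 1) -> None:
        """Record commercial uses of a person's data and credit the owner."""
        self.ledger[owner] += self.rate * uses

server = SirenServer()
server.use_data("alice", uses=500)  # e.g. her photos refined a recommender
server.use_data("bob", uses=20)     # his purchase history tuned an ad
print(server.ledger["alice"])       # → 0.5
```

The hard part, of course, is not the ledger but the tracking: knowing, across the whole economy, whose data contributed to which result, which is precisely what today’s Siren Servers have no incentive to expose.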

Such a model is very interesting to me, especially in light of other fights over data ownership, namely the rights of indigenous people against bio-piracy, something I was turned on to by Paolo Bacigalupi’s bio-punk novel The Windup Girl, and what promises to be an increasing fight between pharmaceutical/biotech firms and individuals over the use of what is becoming mountains of genetic data. Nevertheless, I have my doubts as to Lanier’s alternative system and will lay them out in what follows.

For one, such a system seems likely to exacerbate rather than relieve the problem of rising inequality. Assuming most of the data people receive micropayments for will be banal and commercial in nature, people who are already big spenders are likely to get a much larger cut of the micropayments pie. If I could afford such things, some extra piece of information that tips the scales between me buying a Lexus or a Beemer is no doubt worth a lot; not so much if it’s a question of Tide vs. Whisk.

This issue would be solved if Lanier had adopted the model of a shared public pool of funds into which micropayments would flow, rather than routing them to the particular individuals involved, but he couldn’t do this out of commitment to the idea that personal data is a form of property. Don’t let his dreadlocks fool you: Lanier is at bottom a conservative thinker. Such a pooled fee might also balance out the glaring problem that Siren Servers effectively pay zero taxes.

But by far the biggest hole in Lanier’s micropayment system is that it ignores the international dimension of the Internet. Silicon Valley companies may be doubling down on their model, as can be seen in Amazon’s recent foray into the smartphone market, which attempts to route everything through itself, but the model has crashed globally. Three events signal the crash: Google was essentially booted out of China, the Snowden revelations threw a pall of suspicion over the model in an already privacy sensitive Europe, and the EU itself handed the model a major loss with the "right to be forgotten" case in Spain.

Lanier’s system, which accepts mass surveillance as a fact, probably wouldn’t fly in a privacy conscious Europe, and how in the world would we force Chinese and other digital pirates to provide payments of any scale? And China and other authoritarian countries have their own plans for their Siren Servers, namely, their use as tools of the state.

The fact of the matter is there is probably no truly global solution to continued automation and algorithmization, or to mass surveillance. Yet the much feared "splinter-net", the shattering of the global Internet, may be better for freedom than many believe. This is because the Internet, and the Siren Servers that run it, once freed from their spectral existence in the global ether, become the responsibility of real, territorially bound peoples to govern. Each country will ultimately have to decide for itself both how the Internet is governed and how it will respond to the coming wave of automation. There’s bound to be diversity because countries are diverse; some might even leap over Lanier’s conservatism and invent radically new, and more equitable, ways of running an economy, an outcome many of the original digitopians who set this train a-rollin’ might actually be proud of.


Malthusian Fiction and Fact


Prophecies of doom, especially when they’re particularly frightening, have a way of sticking with us in a way more rosy scenarios never seem to do. We seem to be wired this way by evolution, and for good reason. It’s the lions that almost ate you that you need to remember, not the ones you were lucky enough not to see. Our negativity bias is something we need to be aware of, and where it seems called for, lean against, but that doesn’t mean we should dismiss and ignore every Chicken Little as a false prophet, even when his predictions turn out to be wrong not just once but multiple times. For we can never completely discount the prospect that Chicken Little was right after all, and it just took the sky a long, long time to fall.

 The Book of Revelation is a doom prophecy like that, but it is not one that any secular or non-Christian person in their right mind would subscribe to. A better example is the prediction of Thomas Malthus, who not only gave us a version of Armageddon compatible with natural science, but did so in the form of what was perhaps the most ill-timed book in human history.

Malthus, in his An Essay on the Principle of Population published in 1798, was actually responding to one of history’s most wild-eyed optimists. While hiding from French Revolutionary Jacobins who wanted to chop off his head, Nicolas de Condorcet had the balls to write his Sketch for a Historical Picture of the Progress of the Human Spirit, which argued not only that the direction of history was one of continuous progress without limit, but that such progress might one day grant mankind biological immortality. Condorcet himself wouldn’t make it, dying just after being captured and before he could be dragged off to the guillotine, whether from self-ingested poison or just exhaustion at being hunted we do not know.

The argument for the perpetual progress of mankind was too much for the pessimist Malthus to bear. Though no one knew how long we had been around, it was obvious to anyone who knew their history that our numbers didn’t just continually increase, let alone that things got better in some linear fashion. Malthus thought he had fingered the culprit: famine. Periods of starvation, and therefore decreases in human numbers, were brought about whenever human beings exceeded their ability to feed themselves, which had a tendency to happen often. As Malthus observed, the production of food grows in a linear fashion while human population increases geometrically, that is, exponentially. If you’re looking for the origins of today’s debate between environmentalists and proponents of endless growth, from whatever side of the political ledger, here it is, with Condorcet and Malthus.
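Malthus’s arithmetic is easy to sketch: however generous the starting surplus, a linearly growing food supply must eventually be overtaken by a geometrically growing population. The starting values and growth rate below are invented purely for illustration, not Malthus’s own figures:

```python
# Illustrating Malthus's argument with made-up numbers: food grows
# arithmetically (a fixed amount per generation), population geometrically
# (a fixed multiple per generation), so the curves must eventually cross.

def generations_until_famine(food=100.0, pop=10.0, food_step=10.0, growth=1.3):
    """Count generations until population outruns the food supply.

    Assumes one unit of food feeds one person per generation; all
    parameters are arbitrary illustrative values.
    """
    gen = 0
    while pop <= food:
        food += food_step   # linear (arithmetic) increase
        pop *= growth       # geometric (exponential) increase
        gen += 1
    return gen

print(generations_until_famine())  # → 12
```

Even starting with ten times more food than mouths, the crossing arrives in a handful of generations; raising the growth rate only hastens it. That inevitability, not any particular date, was Malthus’s real claim.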

Malthus could not have been worse in terms of timing. The settlement of the interiors of North America and Eurasia, along with the mechanization of agriculture, were to add untold tons of food to the world. The 19th century was also when we discovered what proved to be the key to unleashing nature’s bounty, the nitrogen cycle, which we were able to start taking advantage of in large part because in Peru we had stumbled across a hell of a lot of seabird poop.

Supplies of guano, along with caliche deposits from the Atacama desert of Chile, would provide the world with soil-supercharging nitrogen up until the First World War, when the always crafty Germans, looking for ways to feed themselves and at the same time make poison gas to kill the Allies, discovered the Haber-Bosch process. Nitrogen-rich fertilizer was henceforth to be made using fossil fuels. All of that really took off just in time to answer another Malthusian prophecy, Paul Ehrlich’s The Population Bomb of 1968, which looked at the quite literally exploding human population and saw mass starvation. The so-called "Green Revolution" proved those predictions wrong through a mixture of nitrogen fertilizers, plant breeding and pesticides. We had dodged the bullet yet again.

Malthusians today don’t talk much about mass starvation, which seems to make sense in a world where even the developing world has a growing problem with obesity. But the underlying idea of Malthus, that the biological systems of the earth have systemic limits and that humans have a tendency to grow toward exceeding them, still has legs, and, I would argue, we should take this view very seriously indeed.

We can be fairly certain there are limits out there, though we are in the dark as to what exactly those limits are. Limits to what? Limits to what we can do to or take from the earth before we make it unlivable for the majority of creatures that evolved for its current environment, including ourselves, unless we want to live here under shells, as we would on a dead planet like Mars, or as in the 70s science-fiction classic Logan’s Run, itself inspired by Malthusian fears about runaway population growth.

The reason any particular Malthusian prophecy is likely wrong is that the future is in essence unknowable. The retort to deterministic pessimism is often deterministic optimism: something has always saved us in the past, usually human inventiveness, i.e. technology, and will therefore save us in the future. I don’t believe in either sort of determinism. We are not fated to destroy ourselves, but the silence of the universe shouldn’t fill us with confidence that we are fated to save ourselves either. Our future is in our hands.

What might the world look like if today’s Malthusian fears prove true, if we don’t stumble upon or find the will and means to address the problems of what people are now calling the Anthropocene, the era when humanity itself has become the predominant biological and geochemical force on earth? The problems are not hard to identify and are very real. We are heating up the earth with the carbon output of our technological civilization, and despite all the progress in solar power generation, demon coal is still king. Through our very success as a species we appear to be unleashing the sixth great extinction, akin to the cataclysmic events that almost destroyed life on earth in the deep past. Life and the geography of the planet will bear our scars.

Paolo Bacigalupi is a novelist of the Anthropocene, bringing to life dystopian visions of what the coming centuries might look like if we fail to solve our current Malthusian challenges. The 23rd century Bacigalupi presents in his novel The Windup Girl is not one any of us would want to live in, inheriting as it does the worst of all worlds. Global warming has caused a catastrophic level of sea rise, yet the age of cheap and abundant energy is also over. We are in an era with a strange mixture of the pre-industrial and the post-industrial: dirigibles rather than planes ply the air, and sailing ships have returned. Power is once again a matter of simple machines and animals, with huge biologically engineered elephants turning vast springs to produce energy that is, for all that, painfully scarce. Those living in the 23rd century look back on our age of abundance, what they call the Expansion, as a golden age.

What has most changed in the 23rd century is our relationship to the living world, and especially to food. Bacigalupi has said that the role of science-fiction authors is to take some set of elements of the present and extrapolate them to see where they lead. Or, as William Gibson said, "The future is already here, it’s just not evenly distributed." What Bacigalupi was especially interested in extrapolating was the way biotechnology and agribusiness seem to be trying to take ownership of food: patenting genes, preventing farmers from re-using seeds, engaging in bioprospecting. The latter becomes biopiracy when companies patent genetic samples and leverage indigenous knowledge without compensating the peoples from whom this biological knowledge and material has been gleaned.

In the neo-medieval world of The Windup Girl, agribusinesses known as "calorie companies," with names like AgriGen, PurCal and RedStar, don’t just steal indigenous knowledge and foreign plants and hide this theft under the cloak of intellectual property; they hire mercenaries to topple governments, unleash famines, and engage in a sophisticated form of biological warfare. It is as if today’s patent-obsessed and patent-abusing pharmaceutical companies started deliberately creating and releasing diseases in order to reap the profits when they swept in and offered cures. A very frightening possible future indeed.

Genetic engineering, whether deliberately or by accident, has brought into being new creatures that outcompete older forms of life. There are cats called cheshires that roam everywhere and possess chameleon-like camouflaging capabilities. And there are all sorts of mostly deadly diseases that make the trans-genetic jump from plants to humans.

Congruent with current Malthusianism, The Windup Girl is really not focused on population as our primary problem. Indeed, the humanoid windups, a people whose condition and fate take center stage in the novel, were created not because of population growth, but in response to population decline. In a world without abundant energy, robots are a non-starter. Facing a collapsing population, the Japanese have bred a race of infertile servants, called windups for their mechanical-like motions. In just one of many examples of Bacigalupi turning underlying cultural assumptions into futuristic reality, the windups are "more Japanese than the Japanese," combining elements of robots and geishas.

The novel is the story of one of these windups, Emiko, who has been abandoned by her owner in Thailand, coming to find her soul in a world that considers her soulless. She is forced to live as a sexual slave, and after being barbarically brutalized by Somdet Chaopraya, the regent of the child queen, she murders him. Emiko had become involved with a Westerner, Anderson Lake, who works for AgriGen and is seeking out the Thais’ most valuable resources: their seed bank, the Norwegian seed bank of Svalbard having been destroyed in an act of industrial terrorism by the calorie companies, and a mysterious scientist called Gibbons who keeps the Thais one step ahead of whatever genetic plague nature or the calorie companies can throw at them.

Anderson and the calorie companies are preparing a sort of coup to unseat the powerful Environment Ministry, once led by a murdered figure named Jaidee, in order to open up trade and especially access to the genetic wealth of Thailand. The coup is eventually undone, Bangkok is flooded, and Emiko, living in hiding in the deserted city, is offered the chance to create more of her own kind by Gibbons, who relishes playing God.

What I found most amazing about The Windup Girl wasn’t just the fact that Bacigalupi made such a strange world utterly believable, but that he played out the story in a cosmology that is Buddhist, which gives a certain depth to how the characters understand their own suffering.

The prayers of the monks are sometimes all that can be done in what are otherwise hopeless situations. The ancient temples to faith have survived better than the temples to money, the skyscrapers built during the age of Expansion. Kamma (karma) gives the Asian characters in the novel a shared explanatory framework, one not possessed by the foreigners (the farang). It is the moral conscience that haunts their decisions and betrayals. They have souls, and what they do and fail to do here will decide the fate of their souls in the next life.

The possession of a soul is a boundary against the windups. Genetic creations are considered somehow soulless, not subject to the laws of kamma and therefore outside of the moral universe, inherently evil. This belief is driven home with particular power in a scene in which the character Kanya comes across a butterfly. It is beautiful, bejeweled, a fragile thing that has traveled great distances and survived great hardships. It lands on her finger and she studies it, then shepherds it into the palm of her hand. She crushes it into dust. Remorseless. "Windups have no souls. But they are beautiful." (241)

It isn’t only a manufactured pollinator like the butterfly that is considered soulless and treated accordingly. Engineered humans like Emiko are treated in such a way as well. They exist either as commodities to be used or as something demonic to be destroyed. But we know different. Emiko is no sexual toy; she is a person, genetically engineered or not, which means, among much else, that she can be scarred by abuse.

 In the privacy of the open air and the setting sun she bathes. It is a ritual process, a careful cleansing. The bucket of water, a fingering of soap. She squats beside the bucket and ladles the warm water over herself. It is a precise thing, a scripted act as deliberate as Jo No Mai, each move choreographed, a worship of scarcity.

Animals bathe to remove dirt. Only humans, only something inhabiting a moral universe and possessing a moral memory, both of which might be the best materialist ways of understanding the soul, bathe in the hope of removing emotional scars. If others could see into Emiko, or she into the hearts of other human beings, then both would see that she has a soul to the extent that human beings might be said to have one. The American farang Anderson thinks:

 She is an animal. Servile as a dog.

He wonders if she were a real person if he would feel more incensed at the abuse she suffers. It’s an odd thing, being with a manufactured creature, built and trained to serve.

She admits herself that her soul wars with herself. That she does not rightly know which parts of her are hers alone and which have been inbuilt genetically. Does her eagerness to serve come from some portion of canine DNA that makes her assume that natural people outrank her for pack loyalty? Or is it simply the training she has spoken of? (184)

As we ask of ourselves: nature or nurture? Both are part of our kamma, as is what we do or fail to do with them.

The scientist Gibbons, who saves the Thais from the genetic assaults of the calorie companies and holds the keys to their invaluable seed bank, is not a hero but himself a Promethean monster, surrounding himself with genetically engineered beings he treats as sexual toys, his body rotting from disease. To Kanya’s assertion that his disease is his kamma, Gibbons shouts:

Karma? Did you say karma? And what sort of karma is it that ties your entire country to me, my rotting body. (245)

Gibbons is coming from a quite different religious tradition, a Christian tradition, or rather the Promethean scientific religion which emerged from it, in which we play the role of god.

We are nature. Our every tinkering is nature, our every biological striving. We are what we are, and the world is ours. We are its gods. Your only difficulty is your unwillingness to unleash your potential fully upon it.

I want to shake you sometimes. If you would just let me, I could be your god and shape you to the Eden that beckons us.

To which Kanya replies: “I’m Buddhist.”


 And we all know windups have no soul. No rebirth for them. They will have to find their own gods to protect them. Their own gods to pray for their dead. Perhaps I will be that one, and your windup children will pray to me for salvation. (243)

This indeed is the world that opens up at the end of the novel, with Gibbons becoming Emiko’s "God," promising to make her fertile and her race plentiful. One gets the sense that the rise of the windups will not bode well for humanity, our far-future kamma eventually tracing itself back to the inhuman way we treated Emiko.

Bacigalupi has written an amazing book, even if it has its limitations. The Windup Girl manages to avoid being didactic but is nonetheless polemical: it has political points to score. Its heroes are either Malthusians or carried along by events, with all the bad guys wearing Promethean attire. In this sense it doesn’t take full advantage of the capacity of fiction for political and philosophical insight, which can only be realized when one humanizes all sides, not just one’s own.

Still, it is an incredible novel, one of the best to come out of our new century. Unlike with most works of fiction, however, we can only hope that by the time we reach the 23rd century it represents, the novel will be forgotten. For otherwise, the implication would be that, after a long history of false alarms, Malthusian fiction had become Malthusian fact.


This City is Our Future

Erich Kettelhut, sketch for Metropolis

If you wish to understand the future you need to understand the city, for the human future is an overwhelmingly urban future. The city may have always been synonymous with civilization, but the rise of urban humanity has almost all occurred since the onset of the industrial revolution. In 1800 a mere 3 percent of humanity lived in cities of over one million people. By 2050, 75 percent of humanity will be urbanized. India alone might have 6 cities with populations of over 10 million.

The trend towards megacities is one into which humanity is, as we speak, accelerating, in a process we do not fully understand, let alone control. As the counterinsurgency expert David Kilcullen writes in his Out of the Mountains:

 To put it another way, these data show that the world’s cities are about to be swamped by a human tide that will force them to absorb, in just one generation, the same population growth that occurred in all of human history up to 1960. And virtually all of this growth will happen in the world’s poorest areas, a recipe for conflict, for crises in health, education and in governance, and for food, water and energy scarcity. (29)

Kilcullen sees four trends, all traceable to processes that began with the industrial revolution, that he thinks are reshaping human geography: urbanization and the growth of megacities, population growth, littoralization and connectedness.

In terms of population growth: the world’s population has exploded, going from 750 million in 1750 to a projected 9.1–9.3 billion by 2050. The rate of population growth is thankfully slowing, but barring some incredible catastrophe, the earth seems destined to gain the equivalent of another China and India within the space of a generation. Almost all of this growth will occur in poor and underdeveloped countries already stumbling under the pressures of the populations they have.

One aspect of population growth Kilcullen doesn’t really discuss is the aging of the human population. This is normally understood in terms of the failure of advanced societies in Japan, South Korea and Europe to reach replacement levels, so that the number of elderly is growing faster than the youth needed to support them, a phenomenon also happening in China as a consequence of its draconian one-child policy. Yet the developing world, simply because of its sheer numbers and increased longevity, will face its own elderly crisis as tens of millions move into age-related conditions of dependency. As I have said in the past, gaining a “longevity dividend” is not a project for spoiled Westerners alone, but is primarily a development issue.

Another trend Kilcullen explores is littoralization, the concentration of human populations near the sea. A fact surprising to a landlubber such as myself: Kilcullen points out that in 2012, 80 percent of human beings lived within 60 miles of the ocean (30), a number that is increasing as the interiors of the continents are hollowed out of human inhabitants.

Kilcullen doesn’t discuss climate change much, but the kinds of population dislocations that might be caused by moderate, not to mention severe, sea level rise would be catastrophic should certain climate scenarios play out. This goes well beyond islands or wealthy enclaves such as Miami, New Orleans or Manhattan. Places such as these, and countries such as Denmark, may have the money to engineer defenses against the rising sea, but what of a poor country such as Bangladesh? There, almost 200 million people might find themselves in flight from the relentless forward movement of the oceans. To where will they flee?

It is not merely a matter of displacing the tens of millions of people, or more, living in low-lying coastal areas. Much of the world’s staple crop of rice is produced in deltas that would be destroyed by the inundation of salt water from the seas.

The last and most optimistic of Kilcullen’s trends is growing connectedness. He quotes the journalist John Pollack:

Cell-phone penetration in the developing world reached 79 percent in 2011. Cisco estimates that by 2015 more people in sub-Saharan Africa, South and Southeast Asia and the Middle East will have Internet access than electricity at home.

What makes this less optimistic is the fact that, as Pollack continues:

Across much of the world, this new information power sits uncomfortably upon layers of corrupt and inefficient government.  (231)

One might have thought that the communications revolution had made geography irrelevant or “flat”, in Thomas Friedman’s famous term. Instead, the world has become “spiky”, with people, capital, and innovation concentrated in cities spread across the globe and interconnected with one another. The need for concentration as a necessary condition for communication is felt by the very rich and the very poor alike, both of whom collect together in cities. Companies running sophisticated trading algorithms have reshaped the very landscape to get closer to the heart of the Internet and gain speed advantages over competitors so small they cannot be perceived by human beings.

Likewise, the very poor flood to the world’s cities, because they can gain access to networks of markets and capital, but more recently, because only there do they have access to electricity that allows them to connect with one another or the larger world, especially in terms of their ethnic diaspora or larger civilizational community, through mobile devices and satellite TV. And there are more of these poor struggling to survive in our 21st century world than we thought, 400 million more of them according to a recent report.

For the urban poor and disenfranchised of the cities, the new connectivity can translate into what Audrey Kurth Cronin has called the new levée en masse. The first levée en masse was that of the French Revolution, where the population was mobilized for both military and revolutionary action by new short-length publications written by revolutionary writers such as Robespierre, Saint-Just or the bloodthirsty Marat. In the new levée en masse, crowds capable of overthrowing governments- witness Tunisia, Egypt and Ukraine- can be mobilized by bloggers, amateur videographers, or just a kind of swarm intelligence emerging in response to some failure of the ruling classes.

Even quite effective armies, such as ISIS, now sweeping in from Syria and taking over swaths of Iraq, can be pulled seemingly out of thin air. The mobilizing capacity that was once the possession of the state or of long-standing revolutionary groups has, under modern conditions of connectedness, become democratized, even if the money behind such armies can ultimately be traced to states.

The movement of the great mass of human beings into cities portends the movement of war into cities, and this is the underlying subject of Kilcullen’s book: the changing face of war in an urban world. Given that the vast majority of countries in which urbanization is taking place will be incapable of fielding advanced armies, Kilcullen thinks the kinds of conflicts likely to be encountered there will be guerrilla wars, whether pitting one segment of society against another or drawing in Western armies.

The headless, swarm tactics of guerrilla war- which, as the author Lawrence H. Keeley reminded us, is in some sense a more evolved, “natural” and ultimately more effective form of warfare than the clashing professional armies of advanced states, with roots stretching back into human prehistory and the ancient practices of both hunting and tribal warfare- are given a potent boost by local communication technologies such as traditional radio and mesh networks. The crowd or small military group can be tied together by an electronic web that turns it into something more like an immune system than a modern centrally directed army.

Attempting to avoid the high casualties so often experienced when advanced armies fight guerrilla wars, those capable of doing so are likely to turn to increasingly sophisticated remote and robotic weapons to fight these conflicts for them. Kilcullen is troubled by this development, not least because it seems to relocate the risk of war onto the civilian population of whatever country is wielding such weapons: the communities in which remote warriors live, or where their weapons are designed and built, become arguably legitimate targets of a remote enemy the community might not even be aware it is fighting. Perhaps the real key is to prevent the conflicts that might end in our military engagement in the first place.

Cities likely to experience epidemic crime, civil war or revolutionary upheaval are also those that have, in Kilcullen’s terms, gone “feral”, meaning the order usually imposed by the urban landscape no longer operates due to failures of governance. Into such a vacuum criminal networks often emerge, exchanging the imposition of some semblance of order for control of illicit trade. All of these things- civil war, revolution, and international crime- represent pull factors for Western military engagement, whether in the name of international stability, humanitarian concerns or more nefarious ends, most of which center on resource extraction. The question is how one can prevent cities from going feral in the first place, avoiding the deep discontent and social breakdown that leads to civil war, revolution or the rise of criminal cartels, all of which might end with the military intervention of advanced countries.

The solution lies in thinking of the city as a type of organism with “inflows” such as water, food, resources, manufactured products and capital, and “outflows”, especially waste. There is also the issue of order as a kind of homeostasis. A city such as Beijing or Shanghai with its polluted skies is a sick organism, as is Dhaka in Bangladesh with its polluted waters, or a city with a sky-high homicide rate such as Guatemala City or São Paulo. The beautiful thing about the new technologically driven capacity for mass mobilization is that it forces governments to take notice of the people’s problems or figuratively (and sometimes literally) lose their heads. The problem is that once things have gone badly enough to inspire mass riots, the condition is likely systemic and extremely difficult to solve, and the kinds of protests the Internet and mobile technology have inspired, at least so far, have been effective at toppling governments but unable either to found or to serve as governments themselves.

At least one answer to the problems of urban geography that could potentially allow cities to avoid instability is “Big Data”, or so-called “smart cities”, where a city is minutely monitored in real time for problems that then trigger quick responses by city authorities. There are several problems here. The first is the cost of such systems, but that may be the most surmountable; the biggest is the sheer data load.

As Kilcullen puts it in the context of military intelligence, in terms that could just as well describe the problem facing city administrators, international NGOs and aid agencies:

The capacity to intercept, tag, track and locate specific cell phone and Internet users from a drone already exists, but distinguishing signal from noise in a densely connected, heavily trafficked piece of digital space is a daunting challenge. (238)

Kilcullen’s answer to the incomplete picture provided by the view from above, from big data, is to combine this data with the partial but deep view of the city held by its inhabitants on the ground. In its essence a city is the stories and connections of those who live in it. Think of the deep, if necessarily narrow, perspective of a major city merchant or even a well connected drug dealer. Add this to the stories of those working in social and medical services, police officers, big employers, socialites and so on, and one starts to get an idea of the biography of a city. Add to that the big picture of flows and connections and one starts to understand the city for what it is: a complex type of non-biological organism that serves as a stage for human stories.

Kilcullen has multiple examples of where knowledge of the big picture from experts has successfully aligned with grassroots organization to save societies on the brink of destruction, an alignment he calls “co-design”. He cites the Women of Liberia Mass Action for Peace, where grassroots organizer Leymah Gbowee leveraged the expertise of Western NGOs to stop the civil war in Liberia. CeaseFire Chicago uses a big-picture model of crime literally based on epidemiology and combines it with community-level interventions to stop violent crime before it occurs.

Another group Kilcullen discusses is Crisis Mappers which offers citizens everywhere in the world access to the big picture, what the organization describes as “the largest and most active international community of experts, practitioners, policy makers, technologists, researchers, journalists, scholars, hackers and skilled volunteers engaged at the intersection between humanitarian crises, technology, crowd-sourcing, and crisis mapping.” (253)

On almost all of this I find Kilcullen to be spot on. The problem is that he fails to tackle the really systemic issue, which is inequality. What is necessary to save any city, as Kilcullen acknowledges, is a sense of shared community- what I would call a sense of shared past and future. Insofar as the very wealthy in any society or city are connected to, and largely identify with, their wealthy fellow elites abroad rather than their poor neighbors, a city and a society are doomed, for only the wealthy have the wherewithal to support the kinds of social investments that make a city livable for its middle classes, let alone its poor.

The very globalization that has created the opportunity for the rich in once poor countries to rise, and which connects the global poor to their fellow sufferers both in the same country and, more amazingly, across the world, has severed the connection between poor and rich in the same society. It is these global connections between classes that give the current situation its revolutionary aspect, one which, as Marx long ago predicted, is global in scope.

The danger is that the wealthy classes will turn the new high-tech tools for monitoring citizens into a way to avoid systemic change, either by using their ability to intimately monitor so-called “revolutionaries” to short-circuit legitimate protest, or by addressing the public’s concerns in only the most superficial of ways.

The long-term solution to the new era of urban humanity is giving the people who live in cities the tools, including increasingly sophisticated tools of data gathering and simulation, to control their own fates; finding ways to connect revolutionary movements to progressive forces in societies where cities are not failing, and to their tools for dealing with all the social and environmental problems cities face; and, above all, convincing the wealthy to support such efforts, both in their own localities and on a global scale. For the attempt at total control of a complex entity like a city through the tools of the security state, like the paper-flat Utopian cities of the state-worshipers of days past, is to attempt building a castle in thin air.






Why the World Cup in Brazil Is Our Future: In More Ways Than One

The bold gamble of the Brazilian neuroscientist Miguel Nicolelis to have a paralyzed person, using an exoskeleton controlled by the brain, kick a soccer ball during the World Cup opening ceremony has paid off. Yet, however important the research of The Walk Again Project is to those suffering paralysis, the less-than-two-second display of the technology did very little to live up to the pre-game media hype.

The technology of saving people suffering paralysis using brain-machine interfaces is still in its infancy, and though I hold out hope for the work of Nicolelis and others in these areas, for those suffering from paralysis or the elderly unable to enjoy the wonders of movement, the future we can see when looking at Brazil today is much different from the one hinted at by the miracle of the paralyzed being able to walk.

I first got a hint of this through a photograph. Seeing the elite Brazilian riot police in their full body armor reminded me of nothing so much as stormtroopers from Star Wars. Should full exoskeletons remain prohibitively expensive, it might be soldiers and police who end up wearing them, to leverage their power against urban protesters. And if the experience of Brazil in the run-up to the World Cup, especially in 2013, but also now, is any indication, protests there will be.

The protests in Brazil reflect a widespread malaise about the present and future of the country. There is endemic concern about youth unemployment and underemployment, crime, and inadequate and grossly underfunded public services. Hosting what is arguably the premier sporting event on the planet, about which the country is fanatic, and in which it is predicted to do amazingly well, should be a great honor. Instead, according to a recent Pew survey:

 About six-in-ten (61%) think hosting the event is a bad thing for Brazil because it takes money away from schools, health care and other public services — a common theme in the protests that have swept the country since June 2013.

The Brazilian government has reasons to be nervous. It is now widely acknowledged, as David Kilcullen points out in his excellent book Out of the Mountains, that rabid soccer fans known as Ultras were instrumental in the downfall of governments in Tunisia and Egypt in 2011. This is not to suggest that Brazil is on the verge of revolution- the country is not, after all, a brittle autocracy like those that experienced what proved to be the horribly misnamed “Arab Spring”- but rather to suggest just how potent a punch can be packed by testosterone-charged young men with nothing to lose.

This situation- an intensely dissatisfied urban population from which mass protests, enabled by ubiquitous mobile technology, can erupt in ways governments have extreme difficulty containing- is not, of course, the province of Brazil alone, but has become one of the defining features of the early 21st century.

One of the problems seems to be that even as technological capacities stream forward, the low-tech foundational capabilities of societies are crumbling- developments which may be related. Brazilians are simultaneously empowered by mobile technology and disempowered by the decline of necessary services such as public transportation.

We can see the implications even for something like Nicolelis’ miracle on a Brazilian soccer field today. Who will pay for such exoskeletons for those both paralyzed and poor? But there is another issue, perhaps as worrisome: our very failure to support low-tech foundations may eventually render many of our high-tech breakthroughs moot.

Take the low-tech foundational technology of antibiotics: as Maryn McKenna pointed out in a chilling article back in 2013, the failure of pharmaceutical companies to invest in low-tech and therefore unprofitable antibiotics, combined with their systemic overuse in both humans and animals, threatens to push us into a frightening post-antibiotic age. Many of our medical procedures for dealing with life-threatening conditions rely on the temporary suppression of the immune system, something that would itself be life-threatening without our ability to use antibiotics. In a world without antibiotics chemotherapy would kill you, severe burns would almost guarantee death, and intensive care would be extremely difficult.

Next to go: surgery, especially on sites that harbor large populations of bacteria such as the intestines and the urinary tract. Those bacteria are benign in their regular homes in the body, but introduce them into the blood, as surgery can, and infections are practically guaranteed. And then implantable devices, because bacteria can form sticky films of infection on the devices’ surfaces that can be broken down only by antibiotics.

One answer to this might be nanotechnology- not in the sense of the microscopic robots touted by thinkers such as Ray Kurzweil, but in the form of nanoparticle sprays that kill pathogens and act as protective coatings to help stem the transmission of killer bacteria and viruses. Research into pathogen-destroying nanoparticles is still in its early stages, so we have no idea where it will go. The risk remains, however, that our failure to invest in and properly use technologies we’ve had for a century might make a technological miracle like Nicolelis’ exoskeletons impossible, at least for paraplegics, where surgery might be involved. It also threatens to erase much of the last century’s gains in longevity. In a world without effective antibiotics my recently infected gum might have killed me. The message from Brazil is that a wondrous future may await, but only if we remember and sustain the achievements and miracles of the past.


Jumping Off The Technological Hype-Cycle and the AI Coup

Robotic Railroad 1950s

What we know is that the very biggest tech companies have been pouring money into artificial intelligence over the last year. Back in January Google bought the UK artificial intelligence firm DeepMind for 400 million dollars. Only a month earlier, Google had bought the innovative robotics firm Boston Dynamics. Facebook is in the game as well, having created a massive lab devoted to artificial intelligence in December 2013. And this new obsession with AI isn’t only something latte pumped-up Americans are into. The Chinese internet giant Baidu, with its own AI lab, recently snagged the artificial intelligence researcher Andrew Ng, whose work for Google included the breakthrough of creating a program that could teach itself to recognize pictures of cats on the Internet- and the word “breakthrough” is not intended to be the punch line of a joke.

Obviously these firms see something that makes these big bets and the competition for talent seem worthwhile, the most obvious candidate being advances in an approach to AI known as Deep Learning, which moves programming away from a logical set of instructions and towards the kind of bulking and pruning found in biological forms of intelligence. Will these investments prove worth it? We should know in just a few years; we simply don’t right now.

No matter how it turns out, we need to beware of getting caught in the technological hype-cycle. A tech investor, or a tech company for that matter, needs to be able to ride the hype-cycle like a surfer rides a wave- when it goes up, she goes up, and when it comes down she comes down- with the key being to position oneself in just the right place, neither too far ahead nor too far behind. The rest of us, however, and especially those charged with explaining science and technology to the general public- namely, science journalists- have a very different job: to parse the rhetoric and figure out what is really going on.

A good example of what science journalism should look like is a recent conversation over at Bloggingheads between Freddie deBoer and Alexis Madrigal. As Madrigal points out, we need to be cognizant of what the recent spate of AI wonders actually are. Take the much over-hyped Google self-driving car. It seems much less impressive once you know that the only areas where these cars are functional are those that have been mapped beforehand in painstaking detail. The car guides itself not through “reality” but through a virtual world whose parameters can be upset by something out of order, to which the car is then pre-programmed to respond in a limited set of ways. The car thus only functions in the context of a mindbogglingly precise map of the area in which it is driving- as if you were unable to make your way through a room unless you knew exactly where every piece of furniture was located. In other words, Google’s self-driving car is undriveable in almost all situations that could be handled by a sixteen-year-old who just learned how to drive. “Intelligence” in a self-driving car is a question of gathering massive amounts of data up front. Indeed, the latest iteration of the Google self-driving car is less an automobile driven by a human being- able to go anywhere, without any foreknowledge of the terrain, so long as there is a road to drive upon- than a tiny trolley car where information is the “track”.

As Madrigal and deBoer also point out in another example, the excellent service of Google Translate isn’t really using machine intelligence to decode language at all. It’s merely aggregating the efforts of thousands of human translators to arrive at approximate results. Again, there is no real intelligence here, just an efficient way to sort through an incredibly huge amount of data.

Yet what if this tactic of approaching intelligence by “throwing more data at it” ultimately proves a dead end? There may come a point where such a strategy shows increasingly limited returns. The fact of the matter is that we know of only one fully sentient creature- ourselves- and the more-data strategy is nothing like how our own brains work. If we really want to achieve machine intelligence, and it’s an open question whether this is a worthwhile goal, then we should be exploring at least some alternative paths to that end, such as those long espoused by Douglas Hofstadter, the author of the amazing Gödel, Escher, Bach and The Mind’s I, among others.

Predictions about the future capacities of artificially intelligent agents are all predicated on the continued exponential rise in computer processing power. Yet these predictions rest on some less than solid assumptions, the first being that we are nowhere near hard limits to the continuation of Moore’s Law. This assumption ignores increased rumblings that Moore’s Law might be in hospice and destined for the morgue.

But even if no such hard limits are encountered in terms of Moore’s Law, we still have the unproven assumption that greater processing power, almost all by itself, leads to intelligence, or is even guaranteed to bring incredible changes to society at large. The problem here is that sheer processing power doesn’t tell you all that much. Processing power hasn’t brought us machines that are intelligent so much as machines that are fast, nor are the increases in processing power themselves all that relevant to what the majority of us can actually do. As we are often reminded, all of us carry in our pockets or have sitting on our desktops computational capacity that exceeds all of NASA in the 1960s, yet clearly this doesn’t mean that any of us are thereby capable of sending men to the moon.

AI may be in a technological hype-cycle- again, we won’t really know for a few years- but the danger of any hype-cycle for an immature technology is that it gets crushed as the wave comes down. In a hype-cycle, initial progress in some field is followed by a private sector investment surge, and then by a transformation of the grant-writing and academic publication landscape, as universities and researchers desperate for dwindling research funding shift their focus to the new and sexy field. Eventually progress comes to the attention of the general press and gives rise to fawning books and maybe even a dystopian Hollywood movie or two. Once the public is on to it, the game is almost up, for research runs into headwinds and progress fails to meet the expectations of a now profit-fix-addicted market and its funders. In the crash many worthwhile research projects end up in the dustbin and funding flows to the next sexy idea.

AI itself went through a similar hype-cycle in the 1980s, back when Hofstadter was writing his Gödel, Escher, Bach, but we have had a spate of more recent candidates. Remember in the 1990s when seemingly every disease and every human behavior was being linked to a specific gene promising targeted therapies? Well, as almost always, we found out that reality is more complicated than the current fashion. The danger here was that such a premature evaluation of our knowledge led to all kinds of crazy fantasies and nightmares. The fantasy that we could tailor-design human beings by selecting specific genes led to some pretty egregious practices, namely the sale of selection services for unborn children based on spurious science- a sophisticated form of quackery. It also led to childlike nightmares, such as those found in the movie Gattaca or Francis Fukuyama’s Our Posthuman Future, where we were frightened with the prospect of a dystopian future in which human beings would be designed like products, a nightmare that was supposed to be just over the horizon.

We now have the field of epigenetics to show us what we should have known: that both genes and environment count, that we therefore have to address both, and that the world is too complex for us ever to assume complete sovereignty over it. In many ways it is the complexity of nature itself that is our salvation, protecting us from both our fantasies and our fears.

Some other examples? How about MOOCs, which were supposed to be as revolutionary as the invention of universal education or the university? Being involved in distance education for non-university-attending adults, I had always known that the most successful model for online learning was a “blended” one- some face-to-face, some online- and that “soft” study skills were as important to student success as academic ability. The MOOC model largely ignored these hard-won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players, Udacity, losing its foothold. Andrew Ng, the AI researcher scooped up by Baidu whom I mentioned earlier, is just one of a number of high-level MOOC refugees, having helped found Coursera.

The so-called Internet of Things is probably another example of getting caught on the hype-cycle. The IoT is the idea that people are going to be clamoring to connect all of their things- their homes, refrigerators, cars, and even their own bodies- to the Internet in order to constantly monitor them. The holes in all this are not only that we are already drowning in a deluge of data, or that it’s pretty easy to see how the automation of consumption only benefits those providing the service if we’re either buying more stuff or the automators are capturing a “finder’s fee”; it’s above all that anything connected to the Internet is by that fact hackable, and who in the world wants their home or their very body hacked? This isn’t a paranoid fantasy of the future, as a recent skeptical piece on the IoT in The Economist pointed out:

Last year, for instance, the United States Federal Trade Commission filed a complaint against TrendNet, a Californian marketer of home-security cameras that can be controlled over the internet, for failing to implement reasonable security measures. The company pitched its product under the trade-name “SecureView”, with the promise of helping to protect owners’ property from crime. Yet, hackers had no difficulty breaching TrendNet’s security, bypassing the login credentials of some 700 private users registered on the company’s website, and accessing their live video feeds. Some of the compromised feeds found their way onto the internet, displaying private areas of users’ homes and allowing unauthorised surveillance of infants sleeping, children playing, and adults going about their personal lives. That the exposure increased the chances of the victims being the targets of thieves, stalkers or paedophiles only fuelled public outrage.

Personalized medicine might be considered a cousin of the IoT, and while it makes perfect sense to me for persons with certain medical conditions, or even just an interest in their own health, to monitor themselves or be monitored and connected to health care professionals, such systems will most likely be closed networks, to avoid the risk of some maleficent nerd turning off your pacemaker.

Still, personalized medicine itself might be yet another example of the magnetic power of hype. It is one thing to tailor a patient’s treatment based on how others with similar genomic profiles reacted to some pharmaceutical and the like. What would be most dangerous, in terms of health care costs to both individuals and society, would be something like the “personalized” care for persons with chronic illnesses profiled in the New York Times this April, where, for instance, the:

… captive audience of Type 1 diabetics has spawned lines of high-priced gadgets and disposable accouterments, borrowing business models from technology companies like Apple: Each pump and monitor requires the separate purchase of an array of items that are often brand and model specific.

A steady stream of new models and updates often offer dubious improvement: colored pumps; talking, bilingual meters; sensors reporting minute-by-minute sugar readouts. Ms. Hayley’s new pump will cost $7,350 (she will pay $2,500 under the terms of her insurance). But she will also need to pay her part for supplies, including $100 monitor probes that must be replaced every week, disposable tubing that she must change every three days and 10 or so test strips every day.

The technological hype-cycle gets its rhetoric from the one technological transformation that actually deserves the characterization of a revolution. I am talking, of course, about the industrial revolution, which certainly transformed human life almost beyond recognition from what came before. Every new technology seemingly ends up making its claim to be “revolutionary”, as in absolutely transformative. Just in my lifetime we have had the IT or digital revolution, the Genomics Revolution, the Mobile Revolution, and the Big Data Revolution, to name only a few. Yet the fact of the matter is that not only has no single one of these revolutions proven as transformative as the industrial revolution; arguably, all of them combined haven’t matched the industrial revolution either.

This is the far too often misunderstood thesis of economists like Robert Gordon. Gordon’s argument, at least as far as I understand it, is not that current technological advancements aren’t a big deal, just that the sheer qualitative gains seen in the industrial revolution are incredibly difficult to sustain let alone surpass.

The enormity of the change is hard to get our heads around: from a world where it took years, as it took Magellan’s expedition propelled by the winds, rather than days to circle the globe; the gap between using a horse and using a car for daily travel is incredible. The average lifespan has doubled since the 1800s. One in five of the children born once died in childhood. There were no effective anesthetics before 1846. Millions would die from an outbreak of the flu or other infectious disease. Hunger and famine were common human experiences however developed one’s society was up until the 20th century, and indoor toilets were not common until then either. Vaccinations did not emerge until the late 19th century.

Bill Gates has characterized views such as those of Gordon as “stupid”. Yet, he himself is a Gordonite as evidenced by this quote:

But asked whether giving the planet an internet connection is more important than finding a vaccination for malaria, the co-founder of Microsoft and world’s second-richest man does not hide his irritation: “As a priority? It’s a joke.”

Then, slipping back into the sarcasm that often breaks through when he is at his most engaged, he adds: “Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.”

And this is really all, I think, that Gordon is saying: that the “revolutions” of the past 50 years pale in comparison to the effects on human living of the period between 1850 and 1950, and this is the case even if we accept that the pace of technological change is accelerating. It is as if we are running faster and faster at the same time the hill in front of us gets steeper and steeper, so that truly qualitative change in the human condition has become more difficult even as our technological capabilities have vastly increased.

For almost two decades we’ve thought that the combined effects of three technologies in particular- robotics, genetics, and nanotech- were destined to bring qualitative change on the order of the industrial revolution. It’s been fourteen years since Bill Joy warned us that these technologies threatened us with a future without human beings in it, but it’s hard to see how even a positive manifestation of the transformations he predicted has come true. This is not to say that they will never bring such a scale of change, only that they haven’t yet, and fourteen years isn’t nothing after all.

So now, after that long and winding road, back to AI. Erik Brynjolfsson and Andrew McAfee, the authors of the most popular recent book on the advances in artificial intelligence over the past decade, The Second Machine Age, take aim directly at the technological pessimism of Gordon and others. They are firm believers in the AI revolution and its potential. For them innovation in the 21st century is no longer about brand new breakthrough ideas but, borrowing from biology, about the recombination of ideas that already exist. In their view, we are being “held back by our inability to process all the new ideas fast enough” and therefore one of the things we need is even bigger computers to test out new ideas and combinations of ideas. (82)

But there are other conclusions one might draw from the metaphor of innovation as “recombination”.  For one, recombination can be downright harmful for organisms that are actually working. Perhaps you do indeed get growth from Schumpeter’s endless cycle of creation and destruction, but if all you’ve gotten as a consequence are minor efficiencies at the margins at the price of massive dislocations for those in industries deemed antiquated, not to mention society as a whole, then it’s hard to see the game being worth the candle.

We’ve seen this pattern in financial services, in music, and in journalism, and it is now desired in education and healthcare. Here innovation is used not so much to make our lives appreciably better as to upend traditional stakeholders in an industry so that those with the biggest computers, what Jaron Lanier calls “Siren Servers”, can swoop in and take control. A new elite stealing an old elite’s thunder wouldn’t matter all that much to the rest of us peasants were it not for the fact that this new elite’s system of production has little room for us as workers and producers, but only as consumers of its goods. It is this desire to destabilize, reorder, and control the institutions of pre-digital capitalism, and to squeeze expensive human beings out of the process of production, that is the real source of the push for intelligent machines, the real force behind the current “AI revolution”; but given its nature, we’d do better to call it a coup.


The phrase above: “The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY losing its foothold.”

Originally read: “The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY closing up shop, as a result.”

The Kingdom of Machines

The Book of the Machines

For anyone thinking about the future relationship between nature-man-machines I’d like to make the case for the inclusion of an insightful piece of fiction in the canon. All of us have heard of H.G. Wells, Isaac Asimov or Arthur C. Clarke. And many, though perhaps fewer, of us have likely heard of fiction authors from the other side of the nature/technology fence, writers like Mary Shelley, or Ursula Le Guin, or nowadays, Paolo Bacigalupi, but certainly almost none of us have heard of Samuel Butler, or better, read his most famous novel Erewhon (pronounced with three short syllables, E-re-whon).

I should back up. Many of us who have heard of Butler and Erewhon have likely done so through George Dyson’s amazing little book Darwin Among the Machines, a book that itself deserves a top spot in the nature-man-machine canon and got its title from an essay of Butler’s that found its way into the fictional world of his Erewhon. Dyson’s 1997 book, written just as the Internet age was ramping up, tried to place the digital revolution within the longue durée of human and evolutionary history, but Butler had gotten there first. Indeed, Erewhon articulated challenges ahead of us which took almost 150 years to unfold, issues being discussed today by a set of scientists and scholars concerned over both the future of machines and the ultimate fate of our species.

A few weeks back Giulio Prisco at the IEET pointed me in the direction of an editorial placed in both the Huffington Post and the Independent by a group of scientists including Stephen Hawking, Nick Bostrom and Max Tegmark, warning of the potential dangers emerging from technologies surrounding artificial intelligence that are now going through an explosive period of development. As the authors of the letter note:

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

Most seem to think we are at the beginning rather than at the end of this AI revolution and see it likely unfolding into an unprecedented development; namely, machines that are just as smart if not incredibly more so than their human creators. Should the development of AI as intelligent or more intelligent than ourselves be a concern? Giulio himself doesn’t think so, believing that advanced AIs are in a sense both our children and our evolutionary destiny. The scientists and scholars behind the letter to HuffPost and The Independent, however, are very concerned. As they put it:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

And again:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In a probing article about the existential risks posed by artificial intelligence, and even more so about the larger public’s indifference to them, James Hamblin writes of the theoretical physicist and Nobel Laureate Frank Wilczek, another of the figures trying to promote greater public awareness of the risks posed by artificial intelligence. Wilczek thinks we will face existential risks from intelligent machines not over the long haul of human history but within a very short amount of time.

It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.

To be honest, it’s quite hard to take these scientists and thinkers seriously, perhaps because I’ve been so desensitized by Hollywood dystopias starring killer robots. But when your Noahs are some of the smartest guys on the planet it’s probably a good idea, if not to actually start building your boat, to at least be able to locate, and beat a path to if necessary, higher ground.

What is the scale of the risk we might be facing? Here’s physicist Max Tegmark from Hamblin’s piece:

Well, putting it in the day-to-day is easy. Imagine the planet 50 years from now with no people on it. I think most people wouldn’t be too psyched about that. And there’s nothing magic about the number 50. Some people think 10, some people think 200, but it’s a very concrete concern.

If there’s a potential monster somewhere on your path it’s always good to have some idea of its actual shape, otherwise you find yourself jumping and lunging at harmless shadows. How should we think about potentially humanity-threatening AIs? The first thing is that we need to be clear about what is happening. For all the hype, the AI skeptics are probably right- we are nowhere near replicating the full panoply of our biological intelligence in a machine. Yet this fact should probably increase rather than decrease our concern regarding the potential threat from “super-intelligent” AIs. However far they remain from the full complexity of human intelligence, on account of their much greater speed and memory such machines already largely run something as essential and potentially destructive as our financial markets; the military is already discussing the deployment of autonomous weapons systems; and the domains over which the decisions of AIs hold sway are only likely to increase over time. One needs only to imagine one or more such systems going rogue, and doing what may amount to highly destructive things its creators or programmers did not imagine or intend, to comprehend the risk. Such a rogue system would be more akin to a killer storm than Lex Luthor, and thus Nick Bostrom is probably on to something profound when he suggests that one of the worst things we could do would be to anthropomorphize the potential dangers, telling Ross Anderson over at Aeon:

You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.

The crazy thing is that Erewhon, a novel published in 1872, clearly stated almost all of these dangers and perspectives. If these fears prove justified it will be Samuel Butler rather than Friedrich Nietzsche who will have been the great prophet to emerge from the 19th century.

Erewhon was released right on the eve of what’s called the second industrial revolution, which lasted from the 1870s to World War I. Here you get mass rail and steamships, gigantic factories producing iron and steel, and the birth of electrification. It is the age romanticized and reimagined today by the cultural movement of steampunk.

The novel appeared 13 years after the publication of Charles Darwin’s The Origin of Species and, even more importantly, one year after Darwin’s even more revolutionary The Descent of Man. As we learn in Dyson’s book, Butler and Darwin were engaged in a long-lasting intellectual feud, though the issue wasn’t evolution itself, but Butler’s accusation that Charles Darwin had essentially ripped his theory from his grandfather Erasmus Darwin without giving him any of the credit. Be that as it may, Erewhon was never intended, as many thought at the time, as a satire of Darwinism, but was Darwinian to its core, and tried to apply the lessons of the worldview made apparent by the Theory of Evolution to the explosively innovative world of machines Butler saw all around him.

The narrator in Erewhon, a man named Higgs, is out exploring the undiscovered valleys of a thinly disguised New Zealand or some equivalent, looking for a large virgin territory to raise sheep. In the process he discovers an unknown community, the nation of Erewhon, hidden in its deep unexplored valleys. The people there are not primitive, but exist at roughly the level of a European village before the industrial revolution. Or as Higgs observes of them in a quip pregnant with meaning – “…savages do not make bridges.” What they find most interesting about the newcomer Higgs is, of all things, his pocket watch.

But by and by they came to my watch which I had hidden away in the inmost pocket that I had, and had forgotten when they began their search. They seemed concerned and uneasy the moment that they got hold of it. They made me open it and show the works; and as soon as I had done so they gave signs of very grave displeasure, which disturbed me all the more because I could not conceive wherein it could have offended them. (58)

One needs to know a little of the history of the idea of evolution to realize how deliciously clever this narrative use of a watch by Butler is. He’s jumping off of William Paley’s argument for intelligent design found in Paley’s 1802 Natural Theology or Evidences of the Existence and Attributes of the Deity, about which it has been said:

It is a book I greatly admire for in its own time the book succeeded in doing what I am struggling to do now. He had a point to make, he passionately believed in it, and he spared no effort to ram it home clearly. He had a proper reverence for the complexity of the living world, and saw that it demands a very special kind of explanation. (4)


The quote above is Richard Dawkins talking in his The Blind Watchmaker, where he recovered and popularized Paley’s analogy of the stumbled-upon watch as evidence that the stunningly complex world of life around us must have been designed.

Paley begins his Natural Theology with an imagined scenario where a complex machine, a watch, hitherto unknown by its discoverer leads to speculation about its origins.

In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer, that, for any thing I knew to the contrary, it had lain there for ever: nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer which I had before given, that, for any thing I knew, the watch might have always been there. Yet why should not this answer serve for the watch as well as for the stone? Why is it not as admissible in the second case as in the first? (1)

After investigating the intricacies of the watch, how all of its pieces seem to fit together perfectly and have precise and definable functions, Paley thinks that the:

….inference… is inevitable, that the watch must have had a maker: that there must have existed, at some time, and at some place or other, an artificer or artificers, who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use. (3-4)

Paley thinks we should infer a designer even if we discover the watch is capable of making copies of itself:

Contrivance must have had a contriver; design, a designer; whether the machine immediately proceeded from another machine or not. (12)

This is creationism, but as even Dawkins admits, it is an eloquent and sophisticated creationism.

Darwin (whichever one got there first) overthrew this need for an engineer as the ultimate source of complexity by replacing a conscious designer with a simple process through which the most intricate of complex entities could emerge over time- in the younger’s case, Natural Selection.


The brilliance of Samuel Butler in Erewhon was to apply this evolutionary emergence of complexity not just to living things, but to the machines we believe ourselves to have engineered. Perhaps the better assumption to have when we encounter anything of sufficient complexity is that, to reach such complexity, it must have evolved over time. Higgs says of the magistrate of Erewhon, obsessed with the narrator’s pocket watch, that he had:

….a look of horror and dismay… a look which conveyed to me the impression that he regarded my watch not as having been designed but rather as the designer of himself and of the universe or as at any rate one of the great first causes of all things  (58)

What Higgs soon discovers, however, is that:

….I had misinterpreted the expression on the magistrate’s face and that it was one not of fear but hatred.  (58)

Erewhon is a civilization where something like the Luddites of the early 19th century have won. The machines have been smashed, dismantled, turned into museum pieces.

Civilization has been reset back to the pre-industrial era, and the knowledge of how to get out of this era and back to the age of machines has been erased, education restructured to strangle in the cradle scientific curiosity and advancement.

All this happened in Erewhon because of a book, a book that looks precisely like an essay the real-world Butler had published not long before his novel, his essay “Darwin Among the Machines”, from which George Dyson got his title. In Erewhon it is called simply “The Book of the Machines”.

It is sheer hubris, we read in “The Book of the Machines”, to think that evolution, having played itself out over so many billions of years in the past and being likely to play itself out for even longer in the future, has reached its pinnacle in us. And why should we believe there could not be such a thing as “post-biological” forms of life:

…surely when we reflect upon the manifold phases of life and consciousness which have been evolved already, it would be a rash thing to say that no others can be developed, and that animal life is the end of all things. There was a time when fire was the end of all things: another when rocks and water were so. (189)

In the “Book of the Machines” we see the same anxiety about the rapid progress of technology that we find in those warning of the existential dangers we might face within only the next several decades.

The more highly organised machines are creatures not so much of yesterday as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become? (189-190)

Butler, even in the 1870s, is well aware of the amazing unconscious intelligence of evolution. The cleverest of species use other species for their own reproductive ends. The stars of this show are the flowers, a world on display in Louie Schwartzberg’s stunning documentary Wings of Life, which shows how much of the beauty of our world is the product of this bridging between species as a means of reproduction. Yet flowers are just the most visually prominent example.

Our machines might be said to be like flowers: unable to reproduce on their own, but with an extremely effective reproductive vehicle in human beings. This, at least, is what Butler speculated in Erewhon, again in the section “The Book of the Machines”:

No one expects that all the features of the now existing organisations will be absolutely repeated in an entirely new class of life. The reproductive system of animals differs widely from that of plants but both are reproductive systems. Has nature exhausted her phases of this power? Surely if a machine is able to reproduce another machine systematically we may say that it has a reproductive system. What is a reproductive system if it be not a system for reproduction? And how few of the machines are there which have not been produced systematically by other machines? But it is man that makes them do so. Yes, but is it not insects that make many of the plants reproductive and would not whole families of plants die out if their fertilisation were not effected by a class of agents utterly foreign to themselves? Does any one say that the red clover has no reproductive system because the humble bee and the humble bee only must aid and abet it before it can reproduce? (204)

Reproduction is only one way one species or kingdom can use another. There is also its use as a survival vehicle itself. Our increasing understanding of the human microbiome almost leads one to wonder whether our whole biology, and everything that has grown up around it, has all this time merely been serving as an efficient vehicle for the real show: the millions of microbes living in our guts. Or, as Butler says in what is the most quoted section of Erewhon:

Who shall say that a man does see or hear? He is such a hive and swarm of parasites that it is doubtful whether his body is not more theirs than his, and whether he is anything but another kind of ant-heap after all. Might not man himself become a sort of parasite upon the machines? An affectionate machine-tickling aphid. (196)

Yet, one might suggest that perhaps we shouldn’t separate ourselves from our machines. Perhaps we have always been transhuman in the sense that from our beginnings we have used our technology to extend our own reach. Butler well understood this argument that technology was an extension of the human self.

A machine is merely a supplementary limb; this is the be all and end all of machinery. We do not use our own limbs other than as machines; and a leg is only a much better wooden leg than any one can manufacture. Observe a man digging with a spade; his right forearm has become artificially lengthened, and his hand has become a joint.

In fact machines are to be regarded as the mode of development by which human organism is now especially advancing, every past invention being an addition to the resources of the human body. (219)

Even the Luddites of Erewhon understood that to lose all of our technology would be to lose our humanity, so intimately woven were our two fates. The solution to the dangers of a new and rival kingdom of animated beings arising from machines was to deliberately wind back the clock of technological development: back before fossil fuels had freed machines from the constraints of animal, human, and cyclical power, back before machines had become animate in the way life itself is animate.

Still, Butler knew it would be almost impossible to make the solution of Erewhon our solution. Given our numbers, to retreat from the world of machines would unleash death and anarchy such as the world has never seen:

The misery is that man has been blind so long already. In his reliance upon the use of steam he has been betrayed into increasing and multiplying. To withdraw steam power suddenly will not have the effect of reducing us to the state in which we were before its introduction; there will be a general break-up and time of anarchy such as has never been known; it will be as though our population were suddenly doubled, with no additional means of feeding the increased number. The air we breathe is hardly more necessary for our animal life than the use of any machine, on the strength of which we have increased our numbers, is to our civilisation; it is the machines which act upon man and make him man, as much as man who has acted upon and made the machines; but we must choose between the alternative of undergoing much present suffering, or seeing ourselves gradually superseded by our own creatures, till we rank no higher in comparison with them than the beasts of the field with ourselves. (215-216)

Therein lie the horns of our current dilemma. The one way we might save biological life from the risk of the new kingdom of machines would be to dismantle our creations and retreat from technological civilization. It is a choice we cannot make, both because of its human toll and for the sake of the long-term survival of the genealogy of earthly life. Butler understood the former, but did not grasp the latter. For the only way the legacy of life on earth, a legacy that has lasted for 3 billion years and which everything living on our world shares, will survive the inevitable life cycle of our sun, which will boil away our planet’s life-sustaining water in less than a billion years and consume the earth itself a few billion years after that, is for technological civilization to survive long enough to provide earthly life with a new home.

I am not usually one for giving humankind a large role to play in cosmic history, but there is at least a chance that something like Peter Ward’s Medea Hypothesis is correct: that given evolution’s thoughtless imperative to reproduce at all costs, life ultimately destroys its own children, like the murderous mother of Euripides’ play.

As Ward points out, it is bacteria that have almost destroyed life on earth, and more than once, by mindlessly transforming its atmosphere and poisoning its oceans. This is perhaps the risk behind us, the one Bostrom thinks might explain our cosmic loneliness: complex life never gets very far before bacteria kill it off. Still, perhaps we are in a sort of pincer of past and future: our technological civilization, its capitalist economic system, and the AIs that might come to sit at the apex of this world, ultimately as absent of thought as bacteria, and somehow the source of both biological life’s and its own destruction.

Our luck has been to avoid a Medean fate in our past. If we can keep our wits about us we might be able to avoid a Medean fate in our future as well, only this one brought to us by the kingdom of machines we have created. If most life in the universe really has been trapped at the cellular level by something like the Medea Hypothesis, and we actually are able to survive over the long haul, then our cosmic task might be to purpose the kingdom of machines to spread and cultivate higher-order life as far and as deeply as we can reach into space. Machines, intelligent or not, might be the interstellar pollinators of biological life. It is we the living who are the flowers.

The task we face over both the short and the long term is to somehow negotiate the rivalry not only of the two worlds that have dominated our existence so far- the natural world and the human world- but a third, an increasingly complex and distinct world of machines. Getting that task right is something that will require an enormous amount of wisdom on our part, but part of the trick might lie in treating machines in some way as we already, when approaching the question rightly, treat nature- containing its destructiveness, preserving its diversity and beauty, and harnessing its powers not just for the good of ourselves but for all of life and the future.

As I mentioned last time, the more sentient our machines become, the more we will need to look to our own world, the world of human rights and, for animals similar to ourselves, animal rights, for guidance on how to treat our machines, balancing these rights with the well-being of our fellow human beings.

The nightmare scenario is that instead of carefully cultivating the emergence of the kingdom of machines to serve as a shelter for both biological life and those things we human beings most value, they will instead continue to be tied to the dream of the endless accumulation of capital, or as Douglas Rushkoff recently put it:

When you look at the marriage of Google and Kurzweil, what is that? It’s the marriage of digital technology’s infinite expansion with the idea that the market is somehow going to infinitely expand.

A point that was never better articulated, or more nakedly expressed, than by the imperialist businessman Cecil Rhodes in the 19th century:

To think of these stars that you see overhead at night, these vast worlds which we can never reach. I would annex the planets if I could.

If that happens, a kingdom of machines freed from any dependence on biological life or any reflection of human values might eat the world of the living until the earth is nothing but a valley of our bones.


That a whole new order of animated beings might emerge from what was essentially human weakness vis-à-vis other animals is yet another of the universe’s wonders, but it is a wonder that needs to go right in order to make it a miracle, or even just to prevent it from becoming a nightmare. Butler invented almost all of our questions in this regard, and in speculating in Erewhon about how humans might unravel their dependence on machines he raised another interesting question as well; namely, whether the very free will that we use to distinguish the human world from the worlds of animals or machines actually exists. He also pointed to an alternative future where the key event would not be the emergence of the kingdom of machines but the full harnessing and control of the powers of biological life by human beings, questions I’ll look at sometime soon.





Our Verbot Moment

Metropolis poster

When I was around nine years old I got a robot for Christmas. I still remember calling my best friend Eric to let him know I’d hit pay dirt. My “Verbot” was to be my own personal R2D2. As was clear from the picture on the box, which I remember as clearly as if it were yesterday, Verbot would bring me drinks and snacks from the kitchen on command- no more pestering my sisters, who responded with their damned claims of autonomy! Verbot would learn to recognize my voice and might help me with the math homework I hated. Being the only kid in my nowhere town with his very own robot, I’d be the talk for miles in every direction. As long, that is, as Mark Z didn’t find his own Verbot under the tree- the boy who had everything- cursed brat!

Within a week of Christmas Verbot was dead. I never did learn how to program it to bring me snacks while I lounged watching Star Blazers, though it wasn't really programmable to start with. It was really more of a remote-controlled car in the shape of Robbie from Lost in Space than an actual honest-to-goodness robot. Then as now, my steering skills weren't so hot, and I managed to get Verbot's antenna stuck in the tangly curls of our skittish terrier, Pepper. To the sounds of my cursing, Pepper panicked and dragged poor Verbot round and round the kitchen table, eventually snapping it loose from her hair to careen into a wall and smash into pieces. I felt my whole future was there shattered on the floor in front of me. There was no taking it back.

Not that my 9-year-old nerd self realized this, but the makers of Verbot obviously weren't German, the word in that language meaning "ban" or "prohibition". Not exactly a ringing endorsement for a product, and more like an inside joke by the makers whose punch line was the precautionary principle.

What I had fallen into in my Verbot moment was the gap between our aspirations for robots and their actual reality. People had been talking about animated tools since ancient times: Homer has some in his Iliad, and Aristotle discussed their possibility. Perhaps we started thinking about this because living creatures tend to be unruly and unpredictable. They don't get you things when you want them to and have a tendency to run wild and go rogue. Tools are different; they always do what you want as long as they're not broken and you are using them properly. Combining the animation and intelligence of living things with the cold functionality of tools would be the mixing of chocolate and peanut butter for someone who wanted to get something done without doing it himself. The problem was that we had no idea how to get from our dead tools to "living" ones.

It was only in the 19th century that an alternative path to the hocus-pocus of magic was found for answering the two fundamental questions surrounding the creation of animate tools: what would animate these machines in the way uncreated living beings were animated, and what would be the source of these beings' intelligence? Few before the 1800s could see through these questions without some reference to the black arts, although a genius like Leonardo da Vinci had, as far back as the 15th century, seen hints that at least one way forward was to discover the principles of living things and apply them to our tools and devices. (More on that another time.)

The path forward we actually discovered was through machines animated by chemical and electrical processes much like those of living beings, rather than by the tapping of kinetic forces such as water and wind, or the potential energy and multiplication of force through things like springs and levers, which had run our machines up until that point. Intelligence was to be had in the form of devices following the logic of some detailed set of instructions. Our animated machines were to be energetic like animals but also logical and precise like the devices and languages we had created for measuring and sequencing.

We got the animation part down pretty quickly, but the intelligence part proved much harder. Although much more precise than fault-prone humans, mechanical methods of intelligence were just too slow when compared to the electro-chemical processes of living brains. Once such "calculators", what we now call computers, were able to use electronic processes they became much faster, and the idea that we were on the verge of creating a truly artificial intelligence began to take hold.

As everyone knows, our aspirations were far too premature. The most infamous expression of our hubris came in 1955, when the proposal for the Dartmouth Summer Research Project on Artificial Intelligence boldly predicted:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.[emphasis added]

Oops. Over much of the next half century, the only real progress in these areas of artificial intelligence came in the worlds we'd dreamed up in our heads. As a young boy, the robots I saw using language or forming abstractions were found only in movies and TV shows such as 2001: A Space Odyssey, Star Wars, Battlestar Galactica and Buck Rogers, and in books by science-fiction giants like Asimov. Some of these daydreams were so long in being unfulfilled that I watched them in black and white. Given this, I am sure many other kids had their Verbot moments as well.

It is only in the last decade or so that the processing of instructions has proven fast enough, and the programs sophisticated enough, for machines to exhibit something like the intelligent behaviors of living organisms. We seem to be at the beginning of a robotic revolution in which machines are doing at least some of the things science fiction and Hollywood had promised. They beat us at chess and trivia games, can fly airplanes and drive automobiles, serve as pack animals, and even speak to us. How close we will come to the dreams of authors and filmmakers when it comes to our 21st century robots cannot be known; an even more important question is how what actually develops will diverge from these fantasies.

I find the timing of this robotic revolution, in the context of other historical currents, quite strange. The bizarre thing is that almost at the exact moment many of us became unwilling to treat other living creatures, especially human beings, as mere tools- with slavery no longer tolerated, our children and spouses no longer treated as property and servants but as gifts to be cultivated (perhaps the sadder element of why population growth rates are declining), and even our animals offered some semblance of rights and autonomy- we were coming ever closer to our dream of creating truly animated and intelligent slaves.

This is not to say we are out of the woods when it comes to our treatment of living beings. The headlines about the tragic kidnapping of over 300 girls in Nigeria should bring to our attention the reality of slavery in our supposedly advanced and humane 21st century, with more people enslaved today than at the height of 19th century chattel slavery; it is just the proportions that are so much lower. Many of the world's working poor, especially in the developing world, live in conditions not far removed from slavery or serfdom. The primary problem I see with our continued practice of eating animals is not meat eating itself, but that the production process chains living creatures to the cruel and unrelenting sequence of machines rather than allowing such animals to live in the natural cycles for which they evolved and were bred.

Still, people in many parts of the world are rightly constrained in how they can treat living beings. What I am afraid of is that the dark and perpetual human longings to be served and to dominate will manifest themselves in our making machines more like persons for those purposes alone. The danger here is that these animated tools will cross, or we will force them to cross, some threshold of sensibility that calls into question their treatment and use as mere tools, while we fail to mature beyond the level of my 9-year-old self dreaming of his Verbot slave.

And yet, this is only one way to look at the rise of intelligent machines. Like a gestalt drawing, we might change our focus and see a very different picture- that it is not we who are using and chaining machines for our purposes, but the machines that are doing this to us. To that subject next time…




The Problem with the Trolley Problem, or why I avoid utilitarians near subways


Master Bertram of Minden, The Fall

Human beings are weird. At least, that is, when we compare ourselves to our animal cousins. We're weird in terms of our use of language, our creation and use of symbolic art and mathematics, and our extensive use of tools. We're also weird in terms of our morality, and engage in strange behaviors vis-à-vis one another that are almost impossible to find anywhere else in the animal world.

We will help one another in ways that no other animal would think of, sacrificing resources, and sometimes going so far as surrendering the one thing evolution commands us to do- to reproduce- in order to aid total strangers. But don't get on our bad side. No species has ever murdered or abused fellow members of its own kind with such viciousness or cruelty. Perhaps religious traditions have something fundamentally right: our intelligence, our "knowledge of good and evil", really has put us on an entirely different moral plane, opening up a series of possibilities and a consciousness new to the universe, or at least to our limited corner of it.

We’ve been looking for purely naturalistic explanations for our moral strangeness at least since Darwin. Over the past decade or so there has been a growing number of hybrid explorations of this topic combining elements in varying degrees of philosophy, evolutionary theory and cognitive psychology.

Joshua Greene's recent Moral Tribes: Emotion, Reason, and the Gap Between Us and Them is an excellent example of these. Yet it is also a book that suffers from the flaw of almost all such reflections on the singularity of human moral distinction: what it gains in explanatory breadth comes at the cost of a diminished understanding of how life on this new moral plane is actually experienced, the way knowing the volume of water in an ocean, lake or pool tells you nothing about what it means to swim.

Greene, like any thinker looking for a "genealogy of morals", peers back into our evolutionary past. Human beings spent the vast majority of our history as members of small tribal groups of perhaps a few hundred members. Wired over eons of evolutionary time into our very nature is an unconscious ability to grapple with conflicts between our interests as individuals and the interests of the tribe to which we belong, what Greene calls the conflicts of Me vs Us.

When evolution comes up with a solution to a commonly experienced set of problems it tends to hard-wire that solution. We don't think at all about how to see, ditto how to breathe, and when we need to, it's a sure sign that something has gone wrong. There is some degree of nurture in this equation, however. Raise a child up to a certain age in total darkness and they will go blind. The development of language skills especially shows this nature-nurture codependency: we are primed by evolution for language acquisition at birth and just need an environment in which a language is regularly spoken. Greene thinks our normal everyday "common sense morality" is like this. It comes naturally to us and becomes automatic given the right environment.

Why might evolution have wired human beings morally in this way, especially when it comes to our natural aversion to violence? Greene thinks Hobbes was right when he proposed that human beings possess a natural equality in the capacity to kill, the consequence of our evolved ability to plan and use tools. Even a small individual could kill a much larger human being if they planned it well and struck fast enough. Without some inbuilt inhibition against acts of personal aggression, humanity would have quickly killed itself off in a stone age version of the Hatfields and the McCoys.

This automatic common sense morality has gotten us pretty far, but Greene thinks he has found a place where it has become a bug, distorting our thinking rather than helping us make moral decisions. He sees this distortion in the way we make the wrong decision when imagining how to save people from being run over by runaway trolley cars.

Over the last decade the Trolley Problem has become standard fare in any general work dealing with cognitive psychology. The problem varies, but in its most standard depiction the scenario goes as follows: a trolley is hurtling down a track and is about to run over and kill five people. The only way to stop it is to push an innocent, very heavy man onto the tracks, a decision guaranteed to kill the man. Do you push him?

The answer people give is almost universally no, and Greene intends to show why most of us answer this way even though, as a matter of blunt moral reasoning, saving five lives should be worth more than sacrificing one.

The problem, as Greene sees it, is that we are using our automatic moral decision making faculties to make the “don’t push call.” The evidence for this can be found in the fact that those who have their ability to make automatic moral decisions compromised end up making the “right” moral decision to sacrifice one life in order to save five.

People with compromised automatic decision making processes include persons with damage to their ventromedial prefrontal cortex, such as Phineas Gage, whose case was made famous by the neuroscientist Antonio Damasio in his book Descartes' Error. Gage was a 19th century railroad foreman who, after an accident that damaged part of his brain, became emotionally unstable and unable to make good decisions. Damasio used his case, and more modern ones, to show just how important emotion is to our ability to reason.

It just so happens that persons suffering from Gage-style injuries also decide the Trolley Problem in favor of saving five persons rather than refusing to push one to his death. Persons with brain damage aren't the only ones who decide the problem this way- autistic individuals and psychopaths do so as well.

Greene makes some leaps from this difference in how people respond to the Trolley Problem, proposing that we have not one but two systems of moral decision making, which he compares to the automatic and manual settings on a camera. The automatic mode is visceral, fast and instinctive, whereas the manual mode is reasoned, deliberative and slow. Greene thinks that those who decide to push the man onto the tracks to save five people are accessing their manual mode because their automatic settings have been shut off.

The leaps continue in that Greene thinks we need manual mode to solve the types of moral problems our evolution did not prepare us for: not Me vs Us, but Us vs Them, our society in conflict with other societies. Moral philosophy is manual mode in action, and Greene believes one version of our moral philosophy is head and shoulders above the rest- Utilitarianism, which he would prefer to call Deep Pragmatism.

Needless to say, there are problems with the Trolley Problem. How, for instance, does one interpret the change in responses when one substitutes a switch for a push? The number of respondents who choose to send the heavy man to his death on the tracks to save five people increases significantly when the push is exchanged for the flip of a switch that opens a trapdoor. Greene thinks this is because bodily distance allows our more reasoned moral thinking to come into play. However, this is not the conclusion one reaches when looking at real instances of personal versus instrumentalized killing, i.e. in war.

We've known for quite a long time that individual soldiers have incredible difficulty killing even enemy soldiers on the battlefield, a subject brilliantly explored by Dave Grossman in his book On Killing: The Psychological Cost of Learning to Kill in War and Society. The strange thing is, whereas an individual soldier has trouble killing even one human being who is shooting at him, and can sustain psychological scars from such killing, airmen working in bombers who incinerate thousands of human beings, including men, women and children, do not seem to be affected to such a degree by either inhibitions on killing or its mental scars. The US military's response has been to try to instrumentalize and automate killing by infantrymen in the same way killing by airmen and sailors is disassociated. Disconnection from their instinctual inhibitions against violence allows human beings to achieve levels of violence impossible at an individual level.

It may be that the persons who choose to save five at the cost of one aren't engaging in a form of moral reasoning at all; they are merely comparing numbers: 5 > 1, so, all else being equal, choose the greater number. Indeed, the only way it might be said that higher moral reasoning was being used in choosing 5 over 1 would be if the 1 that needed to be sacrificed to save the 5 was the person being asked, with the question being: would you throw yourself on the trolley tracks in order to save 5 other people?

Our automatic, emotional systems are actually very good at leading us to moral decisions, and the reasons we have trouble stretching them to Us vs Them problems might be different than the ones Greene identifies.

A reason the stretching might be difficult can be seen in a famous section of The Brothers Karamazov by the Russian novelist Fyodor Dostoyevsky. In "The Grand Inquisitor", the character Ivan conveys to his brother Alyosha an imaginary future in which Jesus has returned to earth and is immediately arrested by authorities of the Roman church, who claim that they have already solved the problem of man's nature: the world is stable so long as mankind is given bread and prohibited from exercising free will. Writing in the late 19th century, Dostoyevsky was an arch-conservative with a deep belief in the Orthodox Church, and he had a prescient anxiety regarding the moral dangers of nihilism and revolutionary socialism in Russia.

In her On Revolution, Hannah Arendt pointed out that Dostoyevsky was conveying with his story of the Grand Inquisitor, not only an important theological observation, that only an omniscient God could experience the totality of suffering without losing sight of the suffering individual, but an important moral and political observation as well- the contrast between compassion and pity.

Compassion by its very nature can not be touched off by the sufferings of a whole group of people, or, least of all, mankind as a whole. It cannot reach out further than what is suffered by one person and remain what it is supposed to be, co-suffering. Its strength hinges on the strength of passion itself, which, in contrast to reason, can comprehend only the particular, but has no notion of the general and no capacity for generalization. (75)

In light of Greene's argument, what this means is that there is a very good reason why our automatic morality has trouble with abstract moral problems: without sight of an individual to whom moral decisions can be attached, one must engage in a level of abstraction our older moral systems did not evolve to grapple with. As a consequence of our lack of God-like omniscience, moral problems that deal with masses of individuals achieve consistency only through the application of some general rule. Moral philosophers, not to mention the rest of us, are in effect imaginary kings issuing laws for the good of their kingdoms. The problem is not only that the ultimate effects of such abstract decisions are opaque, given our general uncertainty regarding the future, but that there is no clear and absolute way of defining for whose good exactly we are imposing these rules.

Even the Utilitarianism that Greene thinks is our best bet for reaching common agreement on such rules has widely different and competing answers to the question "good for whom?" In its camp can be found abolitionists such as David Pearce, who advocates the re-engineering of all of nature to minimize suffering, and Peter Singer, who establishes a hierarchy of rational beings. For the latter, the more of a rational being you are, the more the world should conform to your needs- so much so that Singer thinks infanticide is morally justifiable, because newborns do not yet possess the rational goals and ideas about the future possessed by older children and adults. Greene, who thinks Utilitarianism could serve as mankind's common "moral currency", or language, himself has little to say about where his version of Utilitarianism falls on this spectrum, but we are very far indeed from anything like universal moral agreement on the Utilitarianism of Pearce or Singer.

21st century global society has indeed come to embrace some general norms. Past forms of domination and abuse, most notably slavery, have become incredibly rare relative to other periods of human history, and, though far from universally, women are better treated than in ages past. External relations between states have improved as well: the world is far less violent than in any other historical era, and the global instruments of cooperation and, if far too seldom, coordination are more robust.

Many factors might be credited here, but the one I would credit least is the adoption of any universal moral philosophy. Certainly one of the large contributors has been the increased range over which we exercise our automatic systems of morality. Global travel, migration, television, fiction and film all allow us to see members of the "Them" as fragile, suffering, human-all-too-human individuals like "Us". This may not provide us with a negotiating tool to resolve global disputes over questions like global warming, but it does put us on the path to solving such problems. For a stable global society will only appear when we realize that, in more senses than not, there is no tribe on the other side of us- there is no "Them", there is only "Us".

War and Human Evolution



Have human evolution and progress been propelled by war?

The question is not an easy one to ask, not least because war is not merely one of the worst but arguably the worst thing human beings inflict on one another, comprising murder, collective theft and, almost everywhere but in the professional militaries of Western powers, and there only quite recently, mass and sometimes systematic rape. Should it be the case that war was somehow the driving mechanism, the hidden motor, behind what we deem most good in human life, trapping us in some militaristic version of Mandeville's Fable of the Bees, then we might truly feel ourselves in the grip of, and the plaything of, something like the demon of Descartes' imagination, though rather than taunting us by making the world irrational it would curse us to a world of permanent moral incomprehensibility.

When it comes to human evolution, we should probably begin by asking how long war has been part of the human condition. Almost all of our evolution took place while we lived as hunter gatherers; indeed, we've only had agriculture (and anything that comes with it) for 5-10% of our history. From a wide angle view the agricultural, industrial and electronic revolutions might all be considered of a piece, perhaps one of the reasons things like the philosophy, religion and literature of millennia ago still often resonate with us. Within the full scope of our history, Socrates and Jesus lived only yesterday.

The evidence here isn't good for those who hoped our propensity for violence was a recent innovation, for over the past generation a whole slew of archeological scholars has challenged the notion of the "peaceful savage" born in the radiating brilliance of the mind of Jean-Jacques Rousseau. Back in 1755 Rousseau had written his Discourse on Inequality, perhaps the best-timed book ever. Right before the industrial revolution was to start, Rousseau made the argument that it was civilization, development and the inequalities they produced that were the source of the ills of mankind, including war. He had created an updated version of the myth of Eden, which became in his hands mere nature, without any God in the story. For those who would fear humanity's increasingly rapid technological development, and the victory of our artificial world over the natural one we were born from and into, he gave an avenue of escape: to shed not only the new industrial world but the agricultural one that preceded it. Progress, in the mind of Rousseau and those inspired by him, was a Sisyphean trap; the road to paradise lay backwards in time, in the return to primitive conditions.

During the revived Romanticism of the 1960's, scholars such as Frans de Waal set out to prove Rousseau right. Primitive, hunter-gatherer societies, in this view, were not warlike and had only become so through contact with advanced, warlike societies such as our own. In recent decades, however, this idea of prehistoric and primitive pacifism has come under sustained assault. Steven Pinker's The Better Angels of Our Nature: Why Violence Has Declined is the most popularly known of these challenges to the idea of primitive innocence that began with Rousseau's Discourse, but in many ways Pinker was merely building on, and making more widely known, scholarship done elsewhere.

One of the best examples of this scholarship is Lawrence H. Keeley's War Before Civilization: The Myth of the Peaceful Savage. Keeley makes a pretty compelling argument that warfare was almost universal across pre-agricultural, hunter-gatherer societies. Rather than being dominated by non-lethal, ritualized "battle", as de Waal had argued, war for such societies was a high-casualty affair, and total.

It is quite normal for conflict between tribes in close proximity to be endemic. Protracted periods of war result in casualty figures that exceed those of modern warfare, even the total war between industrial states seen in World War II. Little distinction is made in such pre-agricultural forms of conflict between women and children and men of fighting age. All are killed indiscriminately in wars made up not of the large set battles that characterize warfare in civilizations after the adoption of agriculture, but of constant ambushes and attacks that aim to kill those who are most vulnerable.

Pre-agricultural war is what we now call insurgency or guerilla war. Americans who remember Vietnam, or who look with clear eyes at the Second Iraq War, know how effective this form of war can be against even the most technologically advanced powers. Although Keeley acknowledges the effectiveness and brutal efficiency of guerilla war, he does not make the leap to suggest that, in going up against insurgency, advanced powers face in some sense a more evolutionarily advanced rival, honed by at least 100,000 years of human history.

Keeley concludes that as long as there have been human beings there has been war; yet he reaches far less dark conclusions from the antiquity of war than one might expect. Human aversion to violence and war, disgust with its consequences, and despair at its, more often than not, unjustifiable suffering are near universal, whatever a society's level of technological advancement. He writes:

“… warfare whether primitive or civilized involves losses, suffering, and terror even for the victors. Consequently, it was nowhere viewed as an unalloyed good, and the respect accorded to an accomplished warrior was often tinged with aversion. “

“For example, it was common the world over for the warrior who had just killed an enemy to be regarded by his own people as spiritually polluted or contaminated. He therefore had to undergo a magical cleansing to remove this pollution. Often he had to live for a time in seclusion, eat special food or fast, be excluded from participation in rituals and abstain from sexual intercourse. “ (144)

Far from being ethically shallow, psychologically unsophisticated or unaware of the moral dimension of war, pre-agricultural societies held ideas of war as sophisticated as our own. What the community asks of the warrior when sending him into conflict and asking him to kill is to become spiritually tainted, and ritual is the way for him to be reintegrated into his community. This whole idea that soldiers require moral repair much more than praise for their heroics and memorials is one we are only now rediscovering.

Contrary to what Freud wrote to Einstein when the latter asked him why human beings seem to show such a propensity for violence, we don't have a "death instinct" either. Violence is not an instinct, or as Azar Gat wrote in his War in Human Civilization:

Aggression does not accumulate in the body by a hormone loop mechanism, with a rising level that demands release. People have to feed regularly if they are to stay alive, and in the relevant ages they can normally avoid sexual activity all together only by extraordinary restraint and at the cost of considerable distress. By contrast, people can live in peace for their entire lives, without suffering on that account, to put it mildly, from any particular distress. (37)

War and violence are not an instinct but a tactic- thank God, for if they were an instinct we'd all be dead. Still, if it is a tactic it has been a much-used one, and it seems that throughout our hundred-thousand-year history we have used it with less rather than greater frequency as we have technologically advanced, and one might wonder why this has been the case.

Azar Gat's theory of our increasing pacifism is very similar to Pinker's: war is a very costly and dangerous tactic, and if you can get what you are after without it, peace is normally the course human beings will follow. Since the industrial revolution, nature has come under our increasing command; war and expansion are no longer very good answers to the problems of scarcity, and thus the frequency of wars was on the decline even before we invented atomic weapons, which turned full-scale war into pure madness.

Yet a good case can be made that war played a central role in laying down the conditions in which the industrial revolution, the spark of the material progress that has made war an unnecessary tactic, occurred, and that war helped push technological advancement forward. The Empire won by the British through force of arms was the lever that propelled their industrialization. The Northern United States industrialized much more quickly as a consequence of the Civil War, and Western pressure against pre-industrial Japan and China pushed those countries to industrialize. War research gave us both nuclear power and computers; the space race and satellite communications were the result of geopolitical competition. The Internet was created by the Pentagon, and the contemporary revolution in bionic prosthetics grew out of the horrendous injuries suffered in the wars in Iraq and Afghanistan.

The thing is, just as with pre-agricultural societies, it is impossible to disentangle war itself from the capacity for cooperation between human beings, a capacity that has increased enormously since the widespread adoption of agriculture. Any increase in human cooperative capacity increases a society's capacity for war, and, oddly enough, many increases in war fighting capacity lead to increases in cooperative capacity in the civilian sphere. We are caught in a tragic technological logic of which those who think technology leads of itself to global prosperity and peace, if we just use it for the right ends, are largely unaware. Increases in our technological capacity, even if pursued for non-military ends, will grant us further capacity for violence, and it is not clear that abandoning military research wouldn't derail or greatly slow technological progress altogether.

One perhaps doesn’t normally associate the quest for better war-fighting technologies with transhumanism- the idea “that the human race can evolve beyond its current physical and mental limitations, especially by means of science and technology”- but one should. It’s not only the case that, as mentioned above, the revolution in bionic prosthetics emerged from war; the military is at the forefront of almost every technology on transhumanists’ wish list for future humanity, from direct brain-machine and brain-to-brain interfaces to neurological enhancements for cognition and human emotion. And this doesn’t even capture the military’s revolutionary effect on other aspects of the long-held dreams of futurists, such as robotics.

How the world’s militaries have moved, and are moving, us in the direction of “post-human” war is the subject of Christopher Coker’s Warrior Geeks: How 21st Century Technology Is Changing the Way We Fight and Think About War, and it is with this erudite and thought-provoking book that the majority of the rest of this post will be concerned.

Warrior Geeks is a book about much more than the “geekification” of war- the fact that much of what now passes for war is computerized, many of the soldiers themselves now video gamers stuck behind desks, even if the people they kill remain flesh and blood. It is also a book about the evolving human condition, how war through science is changing that condition, and what this change might mean for us. For Coker sees the capabilities many transhumanists dream of finding their first application on the battlefield.

There is the constantly monitored “quantified-self”:

And within twenty years much as engineers analyze data and tweak specifications in order to optimize a software program, military commanders will be able to bio-hack into their soldiers’ brains and bodies, collecting and correlating data the better to optimize their physical and mental performance. They will be able to reduce distractions and increase productivity and achieve “flow”- the optimum state of focus. (63-64)

And then there is “augmented reality”, where the real world is overlaid with the digital, as in the ideal the US Army and others are reaching for: to tie servicemen together in a “cybernetic network” where soldiers:

…. don helmets with a minute screen which when flipped over the eyes enables them to see the entire battlefield, marking the location of both friendly and enemy forces. They can also access the latest weather reports. In future, soldiers will not even need helmets at all. They will be able to wear contact lenses directly onto their retinas. (93)

In addition, the military is investigating neurological enhancements of all sorts:

The ‘Persistence in Combat Program’ is well advanced and includes research into painkillers which soldiers could take before going into action, in anticipation of blunting the pain of being wounded. Painkillers such as R1624, created by scientists at Rinat Neuroscience, use an antibody to keep in check a neuropeptide that helps transmit pain sensations from the tissues to the nerves. And some would go further in the hope that one day it might be possible to pop a pill and display conspicuous courage like an army of John Rambos. (230)

With 1 in 8 soldiers returning from combat suffering from PTSD, some in the military hope that the scars of war might be undone with neurological advances that would allow us to “erase” memories.

One might go on with examples: exoskeletons that give soldiers superhuman endurance and strength, or the tying together of the most intimate military unit, the platoon, with neural implants that allow direct brain-to-brain communication- a kind of “brain-net” that an optimistic neuroscientist like Miguel Nicolelis seems not to have anticipated.

Coker, for his part, sees these developments in a very dark light: a move away from what have been eternal features of both human nature and war, such as courage and character, and towards a technologically over-determined and ultimately less than human world. He is very good at showing how our reductive assumptions end up compressing the full complexity of human life into something that gives us at least the illusion of technological control. It’s not our machines that are becoming like us, but us like our machines; understanding the underlying genetic and neural mechanisms does not necessarily bring a gain in understanding of, and sovereignty over, our nature, but a constriction of our sense of ourselves.

Do we gain from being able to erase the memory of an event in combat that leads to our moral injury, or lose something in no longer being able to understand and heal it through reconciliation and self-knowledge? Is this not in some way a more reduced and blind answer to the experience of killing and death in war than that of the primitive tribespeople mentioned earlier? Luckily, reductive theories of human nature based on our far less than complete understanding of neuroscience or genetics almost always end up showing their inadequacies after a period of friction with the sheer complexity of being alive.

The dangers here are twofold. First, there is always the danger that we will act under the assumption that we have sufficient knowledge and that the solutions to some problem are therefore “easy” if we just double down and apply some technological or medical fix. Given enough time, the complexity of reality is likely to rear its head, though not before we have caused a good deal of damage in the name of a chimera. Second, there is the danger that we will use our knowledge to torque the complexity of the human world into something more simple and ultimately more shallow in the quest for a reality we deem manageable. There is something deeper and more truthful in the primitive response to the experience of killing than erasing the memory of such experiences.

One of the few problems I had with Coker’s otherwise incredible book is, sadly, a very fundamental one. In some sense, Warrior Geeks is not so much a critique of the contemporary ways and trend lines of war as a bio-conservative argument against transhumanist technologies that uses the subject of war as a backdrop. Echoing Thucydides, Coker defines war as “the human thing” and sees its transformation towards post-human forms as ultimately dehumanizing. Thucydides may have been the most brilliant historian ever to live, but his claim that in war our full humanity is made manifest is true only in a partial sense. Almost all higher animals engage in intraspecies killing, and we are not the worst culprit. It wasn’t human beings who invented war, or who have been fighting the longest, but the ants. As Keeley puts it:

But before developing too militant a view of human existence, let us put war in its place. However frequent, dramatic, and eye-catching, war remains a lesser part of social life. Whether one takes a purely behavioral view of human life or imagines that one can divine mental events, there can be no dispute that peaceful activities, arts, and ideas are by far more crucial and more common even in bellicose societies. (178)

If war has any human meaning over and above areas that truly distinguish us from animals, such as our culture, or for that matter experiences we share with other animals, such as love, companionship, and care for our young, it is that at the intimate level, such as is found in platoons, it is one of the few areas where what we do truly has consequences and where we are not superfluous.

We have evolved for culture, love and care as much as we ever have for war, and during most of our evolutionary history the survival of those around us really was dependent on what we did. War, in some cases, is one of the few places today where this earlier importance of the self can be made manifest. Or, as the journalist Sebastian Junger has put it, many men eagerly return to combat not so much because they are addicted to its adrenaline high as because being part of a platoon in a combat zone is one of the few places where what a twenty-something man does actually counts for something.

As for transhumanism, or continuing technological advancement and war, the solution to the problem starts with realizing there is no ultimate solution to the problem. To embrace human enhancement advanced by the spear-tip of military innovation would be to turn the movement into something resembling fascism- that is, militarism combined with the denial of our shared human worth and destiny. Yet to completely deny that research into military applications often leads to leaps in technological advancement would be to blind ourselves to the facts of history. Exoskeletons developed as armor for our soldiers will very likely lead to advances for paraplegics and the elderly, not to mention persons in perfect health, just as a host of military innovations have led to benefits for civilians in the past.

Perhaps one could say that if we sought to drastically reduce military spending worldwide, as in any case I believe we should, we would have sufficient resources to devote to civilian technological improvements in and for themselves. The problem here is the one often missed by technological enthusiasts who fail to take into account the cruel logic of war: advancements in the civilian sphere alone would lead to military innovations. Exoskeletons developed for paraplegics would very likely end up being worn by soldiers somewhere, or as Keeley puts it:

Warfare is ultimately not a denial of the human capacity for social cooperation, but merely the most destructive expression of it. (158)

Our only good answer to this is really not technological at all; it is to continue to shrink the sphere of human life where war “makes sense” as a tactic. War is only chosen as an option when the peaceful roads to material improvement and respect are closed. We need to ensure both that everyone is able to prosper in our globalized economy and that countries and peoples have ways to resolve potentially explosive disputes and are afforded the full respect that should come from belonging to what Keeley calls the “intellectual, psychological, and physiological equality of humankind” (179), to which one should add moral equality as well. The prejudice that we who possess scientific knowledge and greater technological capacity than many of our rivals are in some sense fundamentally superior is one of our most pernicious dangers- we are all caught in the same trap.

For two centuries a decline in warfare has been possible because, even taking into account the savagery of the World Wars, the prosperity of mankind has been improving. A new crisis of truly earthshaking proportions would occur should this increasing prosperity be derailed- most likely from its collision with the earth’s inability to sustain our technological civilization becoming truly universal, or from the exponential growth upon which this spreading prosperity rests being tapped out and proving a short-lived phenomenon; it is, after all, only two centuries old. Should rising powers such as China and India, along with peoples elsewhere, come to the conclusion not only that they were to be denied full development to the state of advanced societies, but that the system was rigged in favor of the long-industrialized powers, we might wake up to discover that technology hadn’t been moving us towards an age of global unity and peace after all, but that we had been lucky enough to live in a long interlude in which the human story, for once, was not also the story of war.


Wired for Good and Evil

Battle in Heaven

It seems almost as long as we could speak, human beings have been arguing over what, if anything, makes us different from other living creatures. Mark Pagel’s recent book Wired for Culture: The Origins of the Human Social Mind is just the latest incarnation of this millennia-old debate, and as it has always been, the answers he comes up with have implications for our relationship with our fellow animals and, above all, our relationship with one another, even if Pagel doesn’t draw many such implications.

As is the case with so many ideas regarding human nature, Aristotle got there first. A good bet to make when anyone comes up with a really interesting idea about what we are as a species is that either Plato or Aristotle had already come up with it, or discussed it. Which is either a sign of how little progress we have made in understanding ourselves, or symptomatic of the fact that the fundamental questions remain important, for all our gee-whiz science and technology, even if ultimately never fully answerable.

Aristotle had classified human beings as unique in the sense that we were a zóon politikón, variously translated as a social animal or a political animal. His famous quote on this score is, of course: “He who is unable to live in society, or who has no need because he is sufficient for himself, must be either a beast or a god.” (Politics, Book I)

He did recognize that other animals were social in a similar way to human beings- animals such as storks (who knew), but especially social insects such as bees and ants. He even understood bees (the Greeks were prodigious beekeepers) and ants as living under different types of political regimes. Misogynist that he was, he thought bees lived under a king instead of a queen, and thought the ants more democratic and anarchical. (History of Animals)

Yet Aristotle’s definition of mankind as zóon politikón is deeper than that. As he says in his Politics:

That man is much more a political animal than any kind of bee or any herd animal is clear. For, as we assert, nature does nothing in vain, and man alone among the animals has speech….Speech serves to reveal the advantageous and the harmful and hence also the just and unjust. For it is peculiar to man as compared to the other animals that he alone has a perception of good and bad and just and unjust and other things of this sort; and partnership in these things is what makes a household and a city.  (Politics Book 1).

Human beings are shaped in a way animals are not by the culture of their community, the particular way their society has defined what is good and what is just. We live socially in order to live up to our potential for virtue- ideas that speech alone makes manifest and allows us to resolve. Aristotle didn’t just mean talk but discussion, debate, conversation, all guided by reason. Because he doesn’t think women and slaves are really capable of such conversations, he excludes them from full membership in human society. More on that at the end, but now back to Pagel’s Wired for Culture.

One of the things Pagel thinks makes human beings unique among animals is the fact that we alone are hyper-social, or ultra-social. The only animals that even come close are the eusocial insects- Aristotle’s ants and bees. As E.O. Wilson pointed out in his The Social Conquest of Earth, it is the eusocial species, including us, which pound-for-pound absolutely dominate life on earth. In the spirit of a 1950’s B-movie, all the ants in the world added together are actually heavier than us.

Eusociality makes a great deal of sense for insect colonies and hives given how closely related these groups are; indeed, in many ways they might be said to constitute one body, and just as cells in our body sacrifice themselves for the survival of the whole, eusocial insects often do the same. Human beings, however, are different. After all, if you put your life at risk by saving the children of non-relatives from a fire, you’ve done absolutely nothing for your reproductive potential, and may even have lost your chance to reproduce. Yet sacrifices and cooperation like this are performed by human beings all the time, though the vast majority in a less heroic key. We are thus hypersocial- willing to cooperate with and help non-relatives in ways no other animals do. What gives?

Pagel’s solution to this evolutionary conundrum is very similar to Wilson’s, though he uses different lingo. Pagel sees us as naturally members of what he calls cultural survival vehicles; we evolved to be raised in them, and to be loyal to them, because that was the best, perhaps the only, way of securing our own survival. These cultural survival vehicles themselves became the selective environment in which we evolved. Societies where individuals hoarded their cooperation or refused to make sacrifices for the survival of the group were, over time, driven out of existence by those whose members had such qualities.

The fact that we have evolved to be cooperative and giving within our particular cultural survival vehicle Pagel sees as the origin of our angelic qualities: our altruism and heroism and just general benevolence. No other animal helps its equivalent of little old ladies cross the street, or at least not to anything like the extent we do.

Yet if having evolved to live in cultural survival vehicles has made us capable of being angels, for Pagel it has also given rise to our demonic capacities. No other species is as giving as we are, but then again, no other species has invented torture devices and practices like the iron maiden or breaking on the wheel, either. Nor does any other species go to war with fellow members of its species so commonly or so savagely as human beings.

Pagel thinks that our natural benevolence can be easily turned off under two circumstances, both of which represent a threat to our particular cultural survival vehicle.

We can be incredibly cruel when we think someone threatens the survival of our group by violating its norms. There is a sort of natural proclivity in human nature to burn “witches”, which is why we need to be particularly on guard whenever some part of our society is labeled as vile and marked for punishment- The Scarlet Letter should be required reading in high school.

There is also a natural proclivity for members of one cultural survival vehicle to demonize and treat cruelly their rivals- a weakness we need to be particularly aware of in judging the heated rhetoric in the run-up to war.

Pagel also breaks with Richard Dawkins over the latter’s view of the evolutionary value of religion. Dawkins, in The God Delusion and subsequently, has tended to view religion as a net negative: being a suicide bomber is not a successful reproductive strategy, nor is celibacy. Dawkins has tended to see ideas, his “memes”, as in a way distinct from the person they inhabit. Like genes, his memes don’t care all that much about the well-being of the person who has them- they just want to make more of themselves- a goal that easily conflicts with the goal of genes to reproduce.

Pagel quite nicely closes the gap between memes and genes which had left human beings torn in two directions at once. Memes need living minds to exist, so any meme that had too large a negative effect on the reproductive success of genes should have dug its own grave quite quickly. But if genes are being selected for because they protect not the individual but the group, one can see how memes that might adversely impact an individual’s chance of reproductive success can nonetheless aid in the survival of the group. For religions to have survived for so long and to be so universal, they must be promoting group survival rather than being a detriment to it, and because human beings need these groups to live, religion must on average, though not in every particular case, be promoting the survival of the individual at least to reproductive age.

One of the more counterintuitive, and therefore interesting, claims Pagel makes is that what really separates human beings from other species is our imitativeness as opposed to our inventiveness. It’s the reverse of the trope found in the 60’s-70’s books and films built around the 1963 novel La Planète des singes- The Planet of the Apes. If you remember, the apes there are able to imitate human technology, but aren’t able to invent it- “monkey see, monkey do”. Perhaps it should be “human see, human do”, for if Pagel is right our advantage as a species really lies in our ability to copy one another. If I see you doing something enough times, I am pretty likely to be able to repeat it, though I might not have a clue what it was I was actually doing.

Pagel thinks that all we need is innovation through minor tweaks, combined with our ability to rapidly copy one another, for technological evolution to take off. In his view geniuses are not really required for human advancement, though they might speed the process along. He doesn’t really apply his ideas to modern history, but such a view could explain what the scientific revolution, which corresponded with the widespread adoption of the printing press, was really all about. The printing press allowed ideas to be disseminated more rapidly, whereas the scientific method proved a way to sift those that worked from those that didn’t.

This makes me wonder if something rather silly like YouTube hasn’t further revved up cultural evolution in this imitative sense. What I’ve found is that instead of turning to “elders” or “experts” when I want to do something new, I just turn to YouTube and find some nice or ambitious soul who has filmed themselves going through the steps.

The idea that human beings are the great imitator has some very unanticipated consequences for the rationality of human beings vis-à-vis other animals. Laurie Santos of the Comparative Cognition Laboratory at Yale has shown that human beings can be less, not more, rational than animals when it comes to certain tasks, due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren’t thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure performed by another human being even when able to work the puzzle out independently. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. One more sign that human beings can be thankful for at least some of their irrationality.

This might have implications for machine intelligence that neither Pagel nor Santos draws. Machines might be said to be rational in the way animals are rational, but this does not mean that they possess something like human thought. The key to making artificial intelligence more human-like might be making it, in some sense, less rational, or as Jeff Stibel has put it, for machine intelligence to be truly human-like it will need to be “loopy”, “iterative”, and able to make mistakes.

To return to Pagel, he also sees implications for human consciousness and our sense of self in the idea that we are “wired for culture”. We still have no idea why human beings have a self, or think they have one, and we have never been able to locate the sense of self in the brain. Pagel thinks it’s pretty self-evident why a creature enmeshed in a society of like creatures with less than perfectly aligned needs would evolve a sense of self: we need one in order to negotiate our preferences and our rights.

Language, for Pagel, is not so much a means for discovering and discussing the truth as an instrument wielded by the individual to gain in his own interest through persuasion and sometimes outright deception. Pagel’s views seem to be seconded by Steven Pinker, who in a fascinating talk back in 2007 laid out how the use of language seems to be structured by the type of relationship it represents. In a polite dinner conversation you don’t say “give me the salt”, but “please pass me the salt”, because the former is a command of the type most often found in a dominance/submission relationship. The language of seduction is not “come up to my place so I can @#!% you”, but “would you like to come up and see my new prints?” Language is a type of constant negotiation and renegotiation between individuals, one that is necessary because relationships between human beings are not ordinarily based on coercion, where no speech is necessary besides the grunt of a command.

All of which brings me back to Aristotle and to the question of evil, along with the sole glaring oversight of Pagel’s otherwise remarkable book. The problem moderns often have with the brilliant Aristotle is his justification of slavery and the subordination of women. That is, Aristotle clearly articulates what makes a society human- our interdependence, our capacity for speech, the way our “telos” is to be shaped and tuned by the culture in which we are raised like a precious instrument. At the same time he turns around and allows this society to compel by force the actions of those deliberately excluded from the opportunity for speech, an exclusion he characterizes as a “natural” defect.

Pagel’s cultural survival vehicles miss this sort of internal oppression, which is distinct both from the moral anger of the community against those who have violated its norms and from its violence towards those outside of it. Aren’t there miniature cultural survival vehicles, in the form of one group or another, that foster their own genetic interest through oppression and exploitation? It got me thinking whether any example of what a more religious age would have called “evil”, as opposed to mere “badness”, couldn’t be understood as a form of either oppression or exploitation, and I am unable to come up with one.

Such sad reflections dampen Pagel’s optimism, expressed towards the end of Wired for Culture, that the world’s great global cities are a harbinger of our transcending the state of war between our various cultural survival vehicles and moving towards a global culture. We might need to delve more deeply into our evolutionary proclivities to fight within our own and against other cultural survival vehicles, inside our societies as much as outside, before we put our trust in such a positive outcome for the human story.

If Pagel is right, and we are wired- we have co-evolved- to compete in the name of our own particular cultural survival vehicle against other such communities, then we may in a very real sense have co-evolved with and for war. A subject I will turn to next time by jumping off from another very recent and excellent book.