2014: The Death of the Human Rights Movement, or Its Rebirth?

Edwin Abbey Justice Harrisburg

For anyone interested in the issues of human rights, justice, or peace, and I assume that would include all of us, 2014 was a very bad year. It is hard to know where to start: with Eric Garner, the innocent man choked to death in New York City by the very police who are supposed to protect citizens, not kill them, or with Ferguson, Missouri, where the lack of police restraint in using lethal force on African Americans burst into public consciousness, with seemingly little effect, for the chilling killing of a young boy wielding a pop gun occurred even in the midst of riots that were national news.

Only days ago, we had the release of the US Senate's report on the torture of terrorist "suspects", torture performed by or enabled by Americans set into a state of terror and rage in the wake of 9-11. Perhaps the most depressing feature of the report is the defense of these methods by members of the right, even though there is no evidence that forms of torture ranging from "rectal feeding" to threatening prisoners with rape gave us even one piece of usable information that could not have been gained by other means - means that would not have turned American doctors and psychologists into 21st century versions of Dr. Mengele.

Yet the US wasn't the only source of ill winds for human compassion, social justice, and peace. It was a year when China essentially ignored and rolled up democratic protests in Hong Kong, when Russia effectively partitioned Ukraine, and when anti-immigrant right-wing parties made gains across Europe. The Middle East proved especially bad: military secularists and the "deep state" reestablished control over Egypt - killing hundreds and arresting thousands - while the living hell that is the Syrian civil war created the horrific movement that calls itself the Islamic State, whose calling card seemed to be brutally decapitating, crucifying, or stoning its victims and posting the videos on YouTube.

I think the best way to get a handle on all this is to zoom out and take a look from 10,000 feet, so to speak. Zooming out allows us to put all this in perspective in terms of space, but even more importantly, in terms of time, of history.

There is a sort of intellectual conceit among a certain subset of thoughtful, but not very politically active or astute, people who believe, as Kevin Kelly recently said, that "any twelve year old can tell you that world government is inevitable". And indeed, given how many problems are now shared across all of the world's societies, and how interdependent we have become, the opinion seems to make a great deal of sense. In addition there are those, such as Steven Pinker in his fascinating, if far too long, Better Angels, who argue that even if world government is not in the cards, something like world sameness - convergence around a globally shared set of liberal norms, along with continued social progress - seems baked into the cake of modernity, as long as we can rid ourselves of what they consider atavisms, most especially religion, which they think has kept societies blind to the wonders of modernity and locked in a state of violence.

If we wish to understand current events, we need to grasp why these ideas - ever greater political integration of humanity, and projections of the decline of violence - seem as far away from us in 2014 as ever.

Maybe the New Atheists, among whom Pinker is counted, are right that the main source of violence in the world is religion. Yet it is quite obvious from the headlines listed above that religion unequivocally plays a role in only two of them - the Syrian civil war and the Islamic State - and the two are so closely related we should probably count them as one. US torture of Muslims was driven by nationalism, not religion, and police brutality towards African Americans is no doubt a consequence of a racism baked deep into the structure of American society. The Chinese government was not cracking down on religious protesters in Hong Kong but on civically motivated ones, and the two sides battling it out in Ukraine are both predominantly Orthodox Christian.

The argument that religion, even when viewed historically, hasn’t been the primary cause of human violence, is one made by Karen Armstrong in her recent book Fields of Blood. Someone who didn’t read the book, and Richard Dawkins is one critic who apparently hasn’t read it, might think it makes the case that religion is only violent as a proxy for conflicts that are at root political, but that really isn’t Armstrong’s point.

What she reminds those of us who live in secular societies is that before the modern era it wasn't possible to speak of religion as some distinct part of society at all. Religion's purview was so broad it covered everything from the justification of political power, to the explanation of the cosmos, to the regulation of marriage, to the way society treated its poor.

Religion spread because the moral universalism it eventually developed sat so well with the universal aspirations of empire that the latter sanctioned and helped establish religion as the bedrock of imperial rule. Yet from the start, whether Taoism and Confucianism in China, Hinduism and Buddhism in South Asia, Islam in North Africa and the Middle East, or Christianity in Europe, religion was the way in which the exercise of power and the degree of oppression were criticized and countered. It was religion that challenged the brutality of state violence and motivated the care of the impoverished and disabled. Armstrong also reminds us that the majority of the world is still religious in this comprehensive sense, and that secularism is less a higher stage of society than a unique method of approaching the world, one that emerged in Europe for particularistic reasons and was sometimes picked up elsewhere as a perceived necessity for technological modernization (as in Turkey and China).

Moving away from Armstrong: it was the secularizing West that invented the language of social and human rights, a language that built on the utopian aspirations of religion but shed religion's pessimism that a truly just world - without poverty, oppression, or war - would have to await the end of earthly history and the beginning of a transcendent era. We should build the perfect world in the here and now.

Yet the problem with human rights as they first appeared in the French Revolution was that they were intimately connected to imperialism. The French "Rights of Man" both made strong claims for universal human rights and served as a way to undermine the legitimacy of European autocrats, advancing the imperial interests of Napoleonic France. The response to the rights imperialism of the French was a nationalism that democratized politics, but tragically based its legitimacy on defending the rights of one group alone.

Over a century after Napoleon's defeat, both the US and the Soviet Union would claim the inheritance of French revolutionary universalism, with the Soviets emphasizing their answer to the problem of poverty and the inequalities of capitalism, and the US claiming the high ground of political freedom. It was here, as a critique of Soviet oppression, that the modern human rights movement as we would now recognize it emerged.

When the USSR fell in the 1990s it seemed the world was heading towards the victory of the American version of rights universalism. As Francis Fukuyama predicted in his The End of History and the Last Man, the entire world was moving towards becoming liberal democracies like the US. It was not to be, and the reasons why both inform the present and give us a glimpse into the future of human rights.

The reason the secular language of human rights has a good claim to be a universal moral language is not that religion is a poor way to pursue moral aims, or that religion is focused on some transcendent "never-never-land" while secular human rights has its feet squarely planted in the scientifically supported real world. Rather, the secular character of human rights allows it to be universal because, being devoid of religious claims, it can serve as a bridge across groups adhering to different faiths, and can even include what is new under the sun - persons adhering to no religious tradition at all.

The problem human rights has had up until this moment is just how deeply it has been tied up with US imperial interests, which leads almost inevitably to those at the receiving end of US power crushing manifestations of the human rights project in their societies. This is what China has just done in Hong Kong, and it is how Putin's Russia has understood and responded to events in Ukraine - both governments seeing rights-based protests as Western attempts to weaken their countries.

Like the nationalism that grew out of French rights imperialism, Islamic jihadism became such a potent force in the Middle East partially as a response to Western domination, and we in the West have long been in the strange position that the groups within Middle Eastern societies that share many of our values, as in Egypt today, are also the forces of oppression within those societies.

What those who continue to wish that human rights can provide a global moral language can hope for is that, as the proverb goes, "there is no wind so ill that it does not blow some good". The good here would be that, in exposing so clearly US inadequacy in living up to the standards of human rights, 2014 will at last detach the global movement for these rights from American foreign policy. A human rights movement no longer seen as a clever strategy of the US and other Western powers might eventually be given more room to breathe in non-Western countries and cultures, and over the very long haul bring the standards of justice in the entire world closer to the ideals of the now more than half-century-old UN Universal Declaration of Human Rights.

The way this might be accomplished could also address the very valid Marxist critique of the human rights movement - that it deflects the idealistic youth, on whom the shape of future society always depends, away from the structural problems within their own societies, their efforts instead concentrated on the very real cruelties of dictators and fanatics on the other side of the world, and on the fate of countries where those efforts would have little effect unless they served the interests of their Western governments.

What 2014 reminded us of is what Armstrong pointed out: that every major world religion has long known that every society is in some sense underwritten by structural violence and oppression. Human rights activists thus need to be ever vigilant in addressing failures to live up to their ideals at home, even as they forge bonds of solidarity and hold out a hand of support to those a world away who, though they might not speak a common language regarding these rights, and often express this language in religious terms, are nevertheless on the same quest towards building a more just world.

 

Think Time is Speeding Up? Here’s How to Slow It!

seven stages in man's life

One of the weirder things about human beings' perception of time is that our subjective clocks are so off. A day spent in our dreary cubicles can seem to crawl like an Amazonian sloth, while our weekends pass by as fast as a chameleon's tongue. Most dreadful of all, once we pass into middle age, time seems to transform itself from a lumbering steam train heaving us through clearly delineated seasons and years into a Japanese bullet train unstoppably hurtling us towards death, with decades passing us by in a blur.

Wondering about time is a habit of the middle-aged, as sure a sign of having passed the clock-blind golden age of youth as the proverbial convertible or Harley. If my soon-to-be 93-year-old grandmother is any indication, the old, like the young, aren't much taken aback by the speed of time's passage. Instead, time seems to take on the viscosity of New England molasses, the days gently flowing down life's drain.

Up until now, I didn't think there was any empirical evidence to back up such colloquial observations, just the anecdotes passed around the holiday dinner table like turkey stuffing and cranberry sauce: "Can you believe it's almost Christmas again?", "Where did the year go?" Lucky for me, I now know what happened to time - or rather, how I've been confuddled all this time into thinking something had happened to it. I know because I've read the psychologist and BBC science broadcaster Claudia Hammond's excellent little book on the psychology of time, Time Warped: Unlocking the Mysteries of Time Perception.

If you've ever asked yourself why time seems to crawl when you're watching the clock and want it to go faster, or why time appears to speed up in the face of an event you're dreading like a speech, this is the book for you. But Hammond's Time Warped goes much deeper than that and exposes us to the reality of what it would be like if some of our common dreams about controlling time actually came true - if we could indeed have "perfect memory" or, as everyone keeps reminding us to, "live in the present". In addition to all that, the ambiguous relationship with time she reveals raises interesting questions for those hoping we wrest from nature a great deal more of it.

Hammond doesn't really discuss the physics of time, or more precisely, the fact that much of modern physics views time as an illusion akin to past imaginary entities like the ether or phlogiston. The fact that something so essential to our human self-understanding is considered by the bedrock of the sciences to be a brain-induced mirage has led to a rebellion by at least one prominent physicist, Lee Smolin, but he is almost a lone voice in the quest to restore time. Nor is Hammond all that interested in the philosophy of time, its history, or what time actually is. You won't find here any detailed discussion of how to define time; her approach is more like Supreme Court Justice Potter Stewart's definition of pornography: "you know it when you see it." Hammond is, though, on firm scientific ground discussing her main subject, the human perception of time, which, whatever its underlying reality or unreality, we find nearly impossible to live without.

Evolution might have kept things simple and given the human brain just one clock, a steady Big Ben of a thing to accurately mark the time. Instead, Hammond draws our attention to the fact that we seem to have multiple clocks within us, all running at once.

We seem to be extremely good at gauging the passage of seconds or minutes without counting. We also have a twenty-four-hour clock that runs at the same length but independent of the alternating light and darkness of our spinning earth, as Hammond shows was proven by Michel Siffre who, in the name of science and youthful stupidity (he was 23), braved two months in a dark cave meticulously recording his bodily rhythms. What Siffre proved is that, sun or no sun, our bodies follow twenty-four-hour cycles. The turning of the earth has bored its traces deep into us, traces we fight against using the miracle of electric lights and, if the popularity of sleeping pills is any indication, so often lose.

For some of us, there seems to be an inbuilt ability and need to see longer stretches of time spatially, in the form of ovals, circles, or zig-zags, rather than the linear timelines one sees in history books. One day, not long before I read Hammond's book, I found myself scribbling, thinking about how far into the future my great-grandchildren would live, should my now small daughters and their children ever have children of their own.

For whatever reason, I didn't draw out the decades as blocks of a line but as a set of steps. I thought nothing of it until I read Time Warped and saw that this was a common way for people to picture decades, though many do so in three dimensions, rather than my paltry two. Some people also associate days with color - a kind of synesthesia that isn't just playful imagination, but is often stable across an individual's life.

There is no real way to talk about how human beings experience time without discussing memory. What I found mind-blowing about Time Warped was just how many of the things we consider flaws of our memory end up being ambiguities we would be better off not having resolved.

Take the fact that our memories are so fallible and incomplete. One would think things would be so much better if our brains could record everything and have it available for playback on a sort of neuronal Blu-ray. For certain situations like criminal trials this would solve a whole host of problems, but elsewhere we should watch what we wish for. As Hammond shows, there are people who can remember every piece of minutiae, down to the socks they wore on a particular day decades earlier, but a moment's reflection leads to the conclusion that such natural-born mnemonic prodigies fail to dominate creative fields, the sciences, or anything else - and such was the case long before we had Google to remember things for us.

There are people who believe that the path to ensuring they are not unraveled by the flow of time is to record and document everything about themselves and all of their experiences. Digital technology has doubtless made such a quest easier, but Hammond leads us to wonder whether or not the effort to record our every action and keystroke is quixotic. Who will actually take the time to look at all this stuff? How many times, she asks, have any of us sat down to watch our wedding video?

People obsessed with recording every detail of their lives are very likely motivated by the idea that it is their memories that make them who they are. Part of our deep fear of developing Alzheimer's probably originates in this idea that the loss of our memories would constitute the loss of our self. Yet somehow the loss of memories (and the damage of Alzheimer's runs much deeper than the loss of memories) does not seem to rob those who experience such losses of what others recognize as their long-standing personality.

Strangely, our not too reliable memories, when combined with our ability to mentally time-travel into the past, Hammond believes, give rise to our ability to imagine futures that do not yet exist. They allow us to mix and match different scenes from our memory to come up with whole new ones we anticipate will happen, or even ones that could never happen.

The idea that our imagination might owe its existence to our faulty memory put me in mind of the recent findings of Laurie Santos of the Comparative Cognition Laboratory at Yale. Santos has shown that human beings can be less, not more, rational than animals when it comes to certain tasks, due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren't thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure performed by another human being even when able to work the puzzle out independently. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. It seems it is our flaws rather than our superiority that have so favored us above our animal kin.

What, though, of the big problem, the one we all face: the frightening speed with which we are running through our short lives? There is, it seems, some wisdom in the adage that the best way to approach time is to focus on the present, even if you're like me and watching another TED talk on the subject by Pico Iyer is enough to make you hurl. If the future is a realm of anxiety and the past a realm of regret, then as long as one is not in pain, the present moment is a sort of refuge. Hammond believes that thinking about the future, even if we so often get it wrong by, for instance, thinking that our future self will have more time, money, or will-power, is the default mode of the brain.

Any meditative tradition worth its salt tries to free us from this future-obsessed mode and connect us more fully with the present moment of our existence, our breath, its rhythms, the people we care about. There are ways we can achieve this focus on the present without meditation, but they often involve contemplation of our own impending death, which is why soldiers amid the suffering of war and the terminally ill or the very old like my Nanna can often unhitch themselves from the train pulling our thinking off to the future.

Focusing on the present is one way to not only slow the pace of time, but to infuse the short time we have here with the meaning it deserves. Knowing that my small children will outgrow my silliness is the best way I have found to appreciate their laughter now.

Present focus does not, however, solve the central paradox of time for the middle-aged, namely, why it seems to move so much faster as we get older, for it is doubtful we were all that more capable of savoring the moment as teenagers than as adults. Our commonsense explanation of time speeding up as we age typically has to do with proportionality, as in "a year for a five year old is 1/5 of their life, but for a forty year old it is merely 1/40." Hammond shows this proportionality theory to be wrong on its face, for, if it were true, the days for a middle-aged person would be quite literally buzzing by in comparison to the days of their younger selves.
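To see the scale of the discrepancy, here is a minimal Python sketch of the arithmetic the proportionality theory implies, taking the theory at its word; the ages and the inverse-proportion formula are chosen purely for illustration:

```python
# The proportionality theory claims a stretch of time feels like the
# fraction it makes up of your life so far, i.e. felt size ~ 1/age.

def felt_size(duration_years, age_years):
    """Subjective size of a duration under the proportionality theory."""
    return duration_years / age_years

day = 1 / 365
for age in (5, 10, 20, 40):
    print(f"age {age:2d}: a day feels like {felt_size(day, age):.6f} of a life")

# If the theory held, a day at forty would feel eight times shorter than
# a day at five - yet, as Hammond notes, school afternoons crawled no
# more slowly then than today's meetings do now.
```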

Only a moment's reflection should show us that the proportionality theory of time's seeming quickening as we age can't be true. Think back to your school days waiting impatiently for the 3:00 pm bell to ring: was that wait really much longer than the time you spend now stuck to your chair in some meaningless business meeting? There are some differences between the young and the old in gauging how much time has passed, yet these are nowhere near large enough to explain the differences in the subjective experience of how fast time is passing between those two groups. So if proportionality theory doesn't explain the speeding up of time for the middle-aged, what does?

When thinking about duration, the thing we need to keep in mind, as the work of Daniel Kahneman has shown, is that we have not one but two "selves": an experiencing self and a remembering self. Having two selves does a number on our ability to make decisions with our future in mind - the experiencing self wants us to eat the cookie now, because it's the remembering self that will regret it later. It also skews our sense of the past.

Our sense of the duration of time is experienced differently by these two separate selves. Waiting in a long line feels like forever when you're there, but unless something particularly interesting happened during your wait, the remembered experience feels like it happened in the blink of an eye. Yet a wonderful or frightening experience, like a first kiss or a car accident, though it seems to fly by while we're in it, usually cuts its grooves deep enough into our memory that when we reflect upon it, it seems to have taken a very long time to unfold.

Hammond's explanation for why youth seems stretched out in time compared to middle age rests on what she calls the "reminiscence bump" and the "holiday paradox". Adolescence and young adulthood are filled with so many firsts that they leave a deep impression on our memory, and this "thickness" of memory leads our remembering self to conclude time must have been going more slowly back in the heady days of our youth - the reminiscence bump. If you want to make your middle-aged days seem longer, then you need to fill them up with exciting and new things, which is the reason, Hammond speculates, that holidays full of new experiences seem fast while we're in them, but stretched out on reflection - the holiday paradox. She wonders, however, whether the better option is just not to worry so much about time's speed, and to rest when we need it rather than constantly chase after new memories.

Given the interest of the audience here in extending the human lifespan, I wonder what the implications of such discoveries regarding time might be for that project. A comedy could certainly be written in which we have doubled the length of human life, and end up also doubling all those things we now find banal about time. Would human beings who lived well beyond their hundreds be subject to meetings that stretched out for days and weeks? Would traffic jams in which you spent a week in your car be normal?

Perhaps we might even want to focus on our ability to manipulate our sense of time's duration as an easier path towards a sort of longevity. Imagine a world where love affairs could stretch out over centuries and pain and boredom are reduced to a blink, or a future with "time retreats" (like today's religious retreats) where one goes away for a week that has been neurologically altered to feel like decades or longer. We might use the same sorts of time manipulation to punish people for heinous crimes, so that a 600-year sentence actually means something. One might object that such induced experiences of slow time aren't real, but then again neither are most versions of digital immortality, or even, as Hammond showed us, our subjective experience of time itself.

All of this talk of manipulating our sense of time as a road to longevity is just playful speculation on my part. What should be clear is that any move towards changing the human body so that it lives much longer than it does now is probably also going to have to grapple with and transform our psychological notions of time and the society we have built around our strange capacity to warp it.

 

The Human Age

atlas-and-the-hesperides-1925 John Singer Sargent

There is no writer now, perhaps ever, who is able to convey the wonder and magic of science with poetry comparable to Diane Ackerman's. In some ways this makes a great deal of sense, given that she is a poet by calling rather than a scientist. To mix metaphors: our knowledge of the natural world is merely Ackerman's palette, whose colors she uses to paint a picture of nature. It is a vision of the world as magical as that of the greatest works of fiction - think Dante's Divine Comedy - or our most powerful realms of fable.

There is, however, one major difference between Ackerman and her fellow poets and storytellers: the strange world she breathes fire into is the true world, the real nature whose inhabitants and children we happen to be. The picture science has given us, the closest to the truth we have yet come, is in many ways stranger, more wondrous, more beautifully sublime, than anything human beings have been able to imagine; therefore, the perfect subject and home for a poet.

The task Ackerman sets herself in her latest book, The Human Age: The World Shaped by Us, is to reconcile us to the fact that we have now become the authors and artists, rather than the mere spectators, of nature's play. We live in an era in which our effect upon the earth has become so profound that some geologists want to declare it the "Anthropocene", signalling that our presence is now leaving its trace upon the geological record in the way only the mightiest, if slow-moving, forces of nature have done heretofore. Yet in light of the speed with which we are changing the world, we are perhaps more like a sudden catastrophe than a languid geological transformation taking millions of years to unfold.

Ackerman's hope is to find a way to come to terms with the scale of our own impact without reducing humanity to the role of the mythical Höðr, bringing destruction on account of our blindness and foolishness, not to mention our greed. Ackerman loves humanity as much as she loves nature - or better, she refuses, as some environmentalists are prone to do, to place human beings on one side and the natural world on the other. For her, we are nature as much as anything else that exists. Everything we do is therefore "natural"; the question we should be asking is: are we doing what is wise?

In The Human Age Ackerman attempts to reframe our current pessimism and self-loathing regarding our treatment of "mother" nature - everything from the sixth great extinction we are causing to our failure to adequately confront climate change - by giving us a window into the many incredible things we are doing right now that will benefit our fragile planet.

She brings our attention to new movements in environmentalism such as "Reconciliation Ecology", which seeks to bring human settlements into harmony with the much broader needs of the rest of nature. Reconciliation Ecology is almost the opposite of another movement of growing popularity in "neo-environmentalist" circles, namely "Environmental Modernism". Whereas Environmental Modernism seeks to sever our relationship with nature in order to save it, Reconciliation Ecology aims to naturalize the human artifice, bringing farming and even wilderness into the city itself. It seeks to heal the fragmented wilderness our civilization has brought about by bringing the needs of wildlife into the design process.

Rather than leaving our highways covered with roadkill, we could provide animals with a safe way to cross the street. This might benefit the deer, the groundhog, the raccoon, the porcupine. We might even construct tunnels for the humble meadow vole. Providing wildlife corridors, large and small, is one of the few ways we can reconcile the living space and migratory needs of "non-urbanized" wildlife with our fragmented and now global sprawl.

Human beings, Ackerman argues, are as negatively impacted by the disappearance of wilderness as any other of nature's creatures - and perhaps, given our aesthetic sensitivity, even more so. For a long time now we have sought to use our engineering prowess to subdue nature. Why not use it to make the human-made environment more natural? She highlights a growing movement among architects and engineers to take their inspiration not from the legacy of our lifeless machines and buildings, but from nature itself, which so often manages to create works of beautiful efficiency. In this vein we find leaps of the imagination such as the Eastgate Centre designed by Mick Pearce, who took his inspiration from the breathing, naturally temperature-controlled structure of the termite mound.

The idea of the Anthropocene is less an acknowledgement of our impact than a recognition of our responsibility. For so long we have warred against nature’s limits, arrows, and indifference that it is somewhat strange to find ourselves in the position of her caretaker and even designer. And make no mistake about it, for an increasing number of the plants and animals with whom we share this planet their fate will be decided by our action or inaction.

Some environmentalists would argue for nature's "purity" and against our interference, even when this interference is done in the name of her creatures. Ackerman, though not entirely certain, argues against such environmental nihilism, paralysis, or puritanism. If it is necessary for us to "fix" nature, so be it, and she sees in our science and technology our greatest tools for coming to nature's aid. Such fixes can entail everything from permitting, rather than attempting to uproot, invasive species we have inadvertently or deliberately introduced, where those invasives have positive benefits for an ecosystem, to aiding the relocation of species as biomes shift under the impact of climate change, to reintroducing extinct species we have managed to save and resurrect through their DNA.

We are now entering the era where we are not only able to mimic nature, but to redesign it at the genetic level, creating chimeras that nature left to itself would find difficult or impossible to replicate. Ackerman is generally comfortable with our ever more cyborg nature and revels in the science that allows us to literally print body parts and, one day, whole organs. Like the rest of us should be, she is flabbergasted by the ongoing revolutions in biology that are rewriting what it means to be human.

The early 21st century field of epigenetics is giving us a much more nuanced and complex view of the interplay between genes and the environment. It is not only that multiple genes need to be taken into account in explaining conditions and behaviors, but that genes are in a sense tuned by the environment itself. Indeed, much of this turning "on or off" of genes is a form of genetic memory. In a strange echo of Lamarck, the experiences of one's close ancestors - their feasts and famines - are encoded in the genes of their descendants.

Add to that our recent discoveries regarding the microbiome - the collection of bacteria living within us that are far more numerous than, and in many ways as influential as, our genes - and one gets an idea of how far we are moving from the notions of what it means to be human held by scientists even a few years ago, and how much distance has been put between current biology and simplistic versions of environmental or genetic determinism.

Some, such as the philosopher of biology John Dupré, see in our advancing understanding of the role of the microbiome and epigenetics a revolutionary change in human self-understanding. For a generation we chased after a simplistic idea of genetic determinism in which genes were seen as a sort of "blueprint" for the organism. This is now known to be false. Genes are just one part of a permeable interaction among genes, environment, and microbiome that guides individual development and behavior.

We are all of us collectives, constantly swapping biological information, and rather than seeing the microscopic world as a kind of sideshow to the "real" biological story of large organisms such as ourselves, we might be said to be where we have always been: in Stephen Jay Gould's "Age of Bacteria" as much as in an Anthropocene.

Returning to Ackerman: she is amazed at the recent advancements in artificial intelligence, and like Tyler Cowen, even wonders whether scientific discoveries will soon no longer be the prerogative of humans, but of our increasingly intelligent machines. Such is the conclusion one might draw from looking at the work of Hod Lipson of Cornell and his "Eureqa Machine". Feed the Eureqa Machine observations or data and it is able to come up with equations that describe them all on its own. Ackerman does, however, doubt whether we could ever build a machine that replicated human beings in all their wondrous weirdness.
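To give a feel for the idea, here is a minimal Python sketch of the approach behind such machines - symbolic regression - reduced to a brute-force toy. Real systems like Eureqa evolve candidate equations over a vast expression space; the data and the tiny candidate list here are invented for illustration:

```python
import math

# Invented observations following a hidden law: y = 3x^2 + 1
data = [(x, 3 * x**2 + 1) for x in range(-5, 6)]

# A tiny, hand-picked space of candidate equations; a real symbolic
# regression system generates and mutates these automatically.
candidates = {
    "y = x": lambda x: x,
    "y = x^2": lambda x: x**2,
    "y = sin(x)": lambda x: math.sin(x),
    "y = 3x^2 + 1": lambda x: 3 * x**2 + 1,
}

def squared_error(f):
    """How badly a candidate equation misses the observations."""
    return sum((f(x) - y) ** 2 for x, y in data)

best = min(candidates, key=lambda name: squared_error(candidates[name]))
print("best-fitting equation:", best)  # -> y = 3x^2 + 1
```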

Where her doubts regarding technology veer towards warning has to do with the digitalization of the world. Ackerman is best known for her 1990 A Natural History of the Senses, a work which explored the five forms of human perception. Little wonder, then, that she would worry about the world being flattened on our ubiquitous screens into the visual sense alone. She laments:

What if, through novelty and convenience, digital nature replaces biological nature?

The further we distance ourselves from the spell of the present, explored by all our senses, the harder it will be to understand and protect nature’s precarious balance, let alone the balance of our own human nature. I worry about virtual blinders. Hobble all the senses except the visual, and you produce curiously deprived voyeurs.  (196-197)

While concerned that current technologies may be flattening human experience by leaving us visually mesmerized behind screens at the expense of the body, even as they broaden their scope, allowing us to see into worlds small and large and at speeds never before possible, Ackerman accepts our cyborg nature. For her we are, to steal a phrase from the philosopher Andy Clark, "natural-born cyborgs", and this is not a new thing. Since our beginning we have been a changeling, able to morph into a furred animal in the cold with our clothes, wield fire like a mythical dragon, or cover ourselves with shells of metal, among much else.

Ackerman is not alone in the view that our cyborg tendencies are an ancient legacy. Shortly before I read The Human Age I finished the anthropologist Timothy Taylor's short and thought-provoking The Artificial Ape: How Technology Changed the Course of Human Evolution. Taylor makes a strong case that anatomically modern humans co-evolved with technology - indeed, that the technology came first and in a way has been the primary driver of our evolution.

The technology Taylor thinks lifted us beyond our hominid cousins wasn't the usual suspects of fire or stone tools but likely the unsung baby sling. This simple invention allowed humans to overcome a constraint suffered by the other hominids, whose brains, contrary to common opinion, were getting smaller because upright walking was putting a premium on quick rather than delayed development of the young. Slings allowed mothers to carry big-brained babies that took longer to develop but, because of their long period of youth, could learn much more than faster-developing relatives. In effect, slings were a way for human mothers who needed their hands free to externalize the womb and evolve a koala-like pouch.

To return to The Human Age: although Ackerman has, as always, given us a wonderfully written book filled with poetry and insight, it is not without its flaws. Here I will focus only on what I found to be the most important one, namely, that for a book named after our age, there is not enough regarding humans in it. This is what I mean: though the problems arising from the effects of the Anthropocene are profound and serious, the animal most likely to broadly suffer the impact of phenomena such as climate change is us.

The weakness of the idea of the Anthropocene, when judged largely positively, which is what Ackerman is trying to do, is that it universalizes a state of control over nature that is largely limited to advanced countries and the world's wealthy. The threat of rising sea levels looks quite different from the perspective of Manhattan than from Chittagong. A good deal of the environmental gains in advanced countries over the past half century can be credited to globalization, which, amid its many benefits, has also entailed the offloading of pollution, garbage, and waste processing from developed to developing countries. This is the story that the photos of Edward Burtynsky, whom Ackerman profiles, tell - stories such as the lives of the shipbreakers of Bangladesh, whose world resembles something out of a dystopian novel.

Humanity is not sovereign over nature everywhere, and some of us are faced not only with a wildness that has not been subdued, but with a humanity that has itself become like an unpredictable natural force raining down unfathomable, uncontrollable good and ill. Such is the world now being experienced by the West African societies suffering under the epidemic of Ebola. It is a world we might better understand by looking at a novel written on the eve of our mastery over nature, a novel by another amazing writer who was also a woman.

The Future As History


It is a risky business trying to predict the future, and although it makes some sense to try to get a handle on what the world might be like in one's lifetime, one might wonder: what's even the point of all this prophecy that stretches out beyond the decades one is expected to live? The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least inspire Noah-like preparations for disaster. Those who imagine a dark future are trying to scare the bejesus out of us so we do what is necessary not to end up in a world gone black, swept away by the flood waters. The problem is that extreme fear more often leads to paralysis than to reform or ark-building - something that God, had he been a behavioral psychologist, would have known.

Those with a Pollyannaish story about tomorrow, on the other hand, are usually trying to convince us to buy into some set of current trends, and for that reason optimists often end up being the last thing they think they are: a support for conservative politics. Why change what's going well or destined, in the long run, to end well? The problem here is that, as Keynes said, "In the long run we are all dead", which should be an indication that if we see a problem out in front of us we should address it, rather than rest on faith and let some telos of history sort the whole thing out.

It's hard to ride the thin line between optimism and pessimism regarding the future while still providing a view of it that is realistic, compelling, and encourages us towards action in the present. Science-fiction, where it avoids the pull towards utopia or dystopia, and regardless of its flaws, does manage to present versions of the future that are gripping and a thousand times better than the dry futurist "reports" on the future that go down like sawdust; but the genre suffers from having too many balls in the air.

There is not only the common complaint that, as with political novels, the human aspects of a story suffer from being tied too tightly to a social "purpose" - in this case, to offer plausible predictions of the future - but also the problem that the very idea of crafting a "plausible" future can serve as an anchor on the imagination. An author of fiction should be free to sail into any world that comes into his head - plausible destinations be damned.

Adrian Hon's recent A History of the Future in 100 Objects overcomes this problem with using science-fiction to craft plausible versions of the future by jettisoning fictional narrative and presenting the future in the form of a work of history. Hon was inspired to take this approach in part by an actual recent work of history, Neil MacGregor's A History of the World in 100 Objects. In the same way objects from the human past can reveal deep insights not just into the particular cultures that made them, but help us apprehend the trajectory that the whole of humankind has taken so far, 100 imagined "objects" from the century we have yet to see play out allow Hon to reveal the "culture" of the near future we can actually see quite well - which, when all is said and done, amounts to interrogating the path we are currently on.

Hon is perhaps uniquely positioned to give us a feel for where we are currently headed. Trained as a neuroscientist, he is able to see what the ongoing revolutionary breakthroughs in neuroscience might mean for society. He also has his fingers on the pulse of the increasingly important world of online gaming as the CEO of the company Six to Start, which develops interactive real-world games such as Zombies, Run!

In what follows I’ll look at 9 of Hon’s objects of the future which I thought were the most intriguing. Here we go:

#8 Locked Simulation Interrogation – 2019

There's a lot of discussion these days about the revival of virtual reality, especially with the quite revolutionary new Oculus Rift VR headset. We've also seen a surge of brain scanning that purports to see inside the human mind, revealing everything from when a person is lying to whether or not they are prone to mystical experiences. Hon imagines these technologies being combined, just a few years out, to form a brand new and disturbing form of interrogation.

In 2019, after a series of terrorist attacks in Charlotte, North Carolina, the FBI starts using so-called "locked-sims" to interrogate terrorist suspects. A suspect is run through a simulation in which his neurological responses are closely monitored in the hope that they might do things such as help identify other suspects or unravel future plots.

The technique of locked-sims appears to be so successful that it soon becomes the rage in other areas of law enforcement involving much less existential risks to the public. Imagine murder suspects or even petty criminals run through a simulated version of the crime, their every internal and external reaction minutely monitored.

Whatever their promise, locked-sims prove full of errors and abuses, not the least of which is their tendency to leave the innocents often interrogated in them emotionally scarred. Ancient protections end up saving us from a nightmare technology: in 2033 the US Supreme Court deems locked-sims a form of "cruel and unusual punishment" and therefore constitutionally prohibited.

#20 Cross Ball – 2026

A good deal of A History of the Future deals with the way we might relate to advances in artificial intelligence, and one thing Hon tries to make clear is that, in this century at least, human beings won’t suddenly just exit the stage to make room for AI. For a good while the world will be hybrid.

"Cross Ball" is an imagined game that's a little like the ancient Mesoamerican ball game (ōllamaliztli in Nahuatl), only in Cross Ball human beings work in conjunction with bots. Hon sees a lot of AI combined with human teams in the future world of work, but in sports the reason for the amalgam has more to do with human psychology:

Bots on their own were boring; humans on their own were old-fashioned. But bots and humans together? That was something new.

This would be new for real-world games, but we already have it in "freestyle chess": old-fashioned humans can no longer beat machines, and no one seems to want to watch matches between chess-playing programs, so the games with the most interest have been those that match human beings working with programs against other human beings working with programs. In the real-world bot/human games of the future, I hope they have good helmets.

#23 Desir – 2026

Another area where I thought Hon was really onto something was when it came to puppets. Seriously. AI is indeed getting better all the time, even if Siri or customer service bots can be so frustrating, but it's likely some time out before bots show anything like the full panoply of human interaction imagined in the film Her. There's a mid-point here, though, and that's having human beings remotely control the bots - being their puppeteers.

Hon imagines this in the realm of prostitution. A company called Desir essentially uses very sophisticated forms of sex dolls as puppets controlled by experienced prostitutes. The weaknesses of AI give human beings something to do. As he quotes Desir’s imaginary founder:

Our agent AI is pretty good as it is, but like I said, there's nothing that beats the intimate connection that only a real human can make. Our members are experts and they know what to say, how to move and how to act better than our own AI agents, so I think that any members who choose to get involved in puppeting will supplement their income pretty nicely.

#26 Amplified Teams – 2027

One thing I really liked about A History of the Future is that it put flesh on the bones of an idea developed by the economist Tyler Cowen in his book Average Is Over (review pending): that employment in the 21st century won't eventually all be swallowed up by robots, but that the highest earners, or even just those able to economically sustain themselves, will work in teams connected to the increasing capacity of AI. Such are Hon's "amplified teams", which, he states:

…usually have three to seven human members supported by highly customized software that allows them to communicate with one another - and with AI support systems - at an accelerated rate.

I'm crossing my fingers that somebody invents a bot for introverts - or is that a contradiction?

#39 Micromort Detector – 2032

Hon foresees our aging population becoming increasingly consumed with mortality, and almost obsessive-compulsive about measurement as a means of combating our anxiety. Hence his idea of the "micromort detector".

A micromort is a unit of risk representing a one-in-a-million chance of death.

Mutual Assurance is a company that tries to springboard off this anxiety with its product "Lifeline", a device for measuring the mortality risk of any behavior, the hope being both to improve healthy living and, more important for the company, to accurately assess insurance premiums. Drink a cup of coffee - get a score. Eat a doughnut - score.
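To make the unit concrete, here is a toy Python sketch of the kind of ledger a Lifeline-style device might keep. The activities and micromort values are illustrative guesses on my part, not figures from Hon's book:

```python
# 1 micromort = a one-in-a-million chance of death.
# The values below are rough illustrative guesses, not actuarial data.
ACTIVITY_MICROMORTS = {
    "cup of coffee": 0.001,
    "doughnut": 0.005,
    "250 miles by car": 1.0,
    "skydiving jump": 9.0,
}

def daily_micromorts(activities):
    """Tally the day's micromorts, Lifeline-style."""
    return sum(ACTIVITY_MICROMORTS[a] for a in activities)

day = ["cup of coffee", "doughnut", "250 miles by car"]
total = daily_micromorts(day)
print(f"{total:.3f} micromorts ≈ a {total / 1_000_000:.8f} chance of death today")
```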

The problem with the Lifeline was that it wasn't particularly accurate, due to individual variation, and the idea that the road to everything was paved in the 1s and 0s of data became passé. The Lifeline did, however, sometimes cause people to pause and reflect on their own mortality:

And that's perhaps the most useful thing that the Lifeline did. Those trying to guide their behavior were frequently stymied, but that very effort often prompted a fleeting understanding of mortality and caused more subtle, longer-lasting changes in outlook. It wasn't a magical device that made people wiser - it was a memento mori.

#56 Shanghai Six – 2036

As part of the gaming world, Hon has some really fascinating speculations on the future of entertainment. With Shanghai Six he imagines a mashup of alternate reality games, such as his own Zombies, Run!, and something like the massive role playing found in events such as historical reenactments, combined with aspects of reality television and all rolled up into the drama of film. Shanghai Six is a 10,000-person global drama with actors drawn from the real world. I'd hate to be the film crew's gofer.

#63 Javelin – 2040

A History of the Future also has some rather interesting things to say about the future of human enhancement. The transition begins with the Paralympians, who by the 2020s are able to outperform typical human athletes by a large measure.

The shift began in 2020, when the International Paralympic Committee (IPC) staged a technology demonstration….

The demonstration was a huge success. People had never before seen such a direct combination of technology and raw human will power outside of war, and the sponsors were delighted at the viewing figures. The interest, of course, lay in marketing their expensive medical and lifestyle devices to the all-important Gen-X and Millennial markets, who were beginning to worry about their mobility and independence as they grew older.

There is something of the Daytona 500 about this: sports becoming as much about how good the technology is as about the excellence of the athlete. And all sports do indeed seem to be headed this way. The barrier now is that technological and pharmaceutical assists for the athlete are seen not as a way to take human performance to its limits, but as a form of cheating. Yet once such technologies become commonplace, Hon imagines it unlikely that such distinctions will prove sustainable:

By the 40s and 50s, public attitudes towards mimic scripts, lenses, augments and neural laces had relaxed, and the notion that using these things would somehow constitute ‘cheating’ seemed outrageous. Baseline non-augmented humans were becoming the minority; the Paralympians were more representative of the real world, a world in which everyone was becoming enhanced in some small or large way.

It was a far cry from the Olympics. But then again, the enhanced were a far cry from the original humans.

#70 The Fourth Great Awakening – 2044

Hon has something like Nassim Taleb’s idea that one of the best ways we have of catching the shadow of the future isn’t to have a handle on what will be new, but rather a good idea of what will still likely be around. The best indication we have that something will exist in the future is how long it has existed in the past. Long life proves evolutionary robustness under a variety of circumstances. Families have been around since our beginnings and will therefore likely exist for a long time to come.

Things that exist for a long time aren’t unchanging but flexible in a way that allows them to find expression in new forms once the old ways of doing things cease working.

Hon sees our long-lived desire for communal eating surviving in his #25 The Halls (2027), where people gather and mix together in collectively shared kitchen/dining establishments.

Halls speak to our strong need for social interaction, and for the ages-old idea that people will always need to eat - and they'll enjoy doing it together.

He likewise sees reading surviving, in a world even more media- and distraction-saturated, in something like dedicated seclusionary Reading Rooms (#34, 2030). He also sees the survival of one of the oldest of human institutions, religion, only religion will have become much more centered on worldliness and will leverage advances in neuroscience to foster, depending on your perspective, either virtue or brainwashing. Thus we have Hon's imagined Fourth Great Awakening and the Christian Consummation Movement.

If I use the eyedrops, take the pills, and enroll in their induction course of targeted viruses and magstim - which I can assure you I am not about to do - then over the next few months, my personality and desires would gradually be transformed. My aggressive tendencies would be lowered. I'd readily form strong, trusting friendships with the people I met during this imprinting period - Consummators, usually. I would become generally more empathetic, more generous and "less desiring of fleeting, individual and mundane pleasures" according to the CCM.

It is social conditions that Hon sees driving the creation of something like the CCM, namely mass unemployment caused by globalization and especially automation. The idea, again, is very similar to Tyler Cowen's in Average Is Over, but whereas Cowen sees in the rise of neo-Victorianism a lifeboat for a middle class savaged by automation, Hon sees the similar CCM as a way human beings might try to reestablish the meaning they can no longer derive from work.

Hon’s imagined CCM combines some very old and very new technologies:

The CCM understood how Christianity itself spread during the Apostolic Age through hundreds of small gatherings, and accelerated that process by multiple orders of magnitude with the help of network technologies.

And all of that combined with the most advanced neuroscience.

#72 The Downvoted – 2045

Augmented reality devices such as Google Glass should let us see the world in new ways, but just as important might be what they allow us not to have to see. From this Hon derives his idea of "downvoting": essentially, the choice to redact from reality individuals the group has deemed worthless.

"They don't see you," he used to say. "You are completely invisible. I don't know if it was better or worse before these awful glasses, when people just pretended you didn't exist. Now I am told that there are people who literally put you out of their sight, so that I become this muddy black shadow drifting along the pavement. And you know what? People will still downvote a black shadow!"

I'll leave you off at Hon's world circa 2045, but he has a lot else to say about everything from democracy, to space colonies, to the post-21st-century future of AI. Somehow Hon's patchwork of imagined artifacts of the future allows him to sew together a quilt of the century before us in a very clear pattern. What is that pattern?

That out in front of us the implications of continued miniaturization, networking, algorithmization, AI, and advances in neuroscience and human enhancement will continue to play themselves out. This has bright sides and dark sides, and one of the darker is that the opportunities for gainful human employment will become more rare.

Trained as a neuroscientist, Hon sees both dangers and opportunities as advances in neuroscience make the human brain, once firmly secured in the black box of the skull, permeable. Here there will be opportunities for abuse by the state or groups with nefarious intents, but there will also be opportunities for enriched human cooperation and even art.

All fascinating stuff, but it was what he had to say about the future of entertainment and the arts that I found most intriguing. As the CEO of the company Six to Start, he has his finger on the pulse of entertainment in a way I do not. In the near future, Hon sees a blurring of the lines between gaming, role playing, and film and television, and there will be extraordinary changes in the ways we watch and play sports.

As for the arts: here where I live in Pennsylvania we are constantly bombarded with messages that our children need to be trained in STEM (science, technology, engineering and mathematics). This is often to the detriment of programs in the "useless" liberal arts such as history, and most of all of art programs, whose budgets have been consistently whittled away. Hon showed me a future in which artists and actors, or more precisely people who have had exposure through schooling to the arts, may be some of the few groups that can avoid, at least for a time, the onset of AI-driven automation. Puppeteering of various sorts would seem to be a likely transitional phase between "dead" humanoid robots and true and fully human-like AI. This isn't just a matter of the lurid future of prostitution, but of remote nursing, health care, and psychotherapy. Engineers and scientists will bring us the tools of the future, but it's those with humanistic "soft skills" who will be needed to keep that future livable, humane, and interesting.

We see this in another of The History of the Future’s underlying perspectives: that a lot of the struggle of the future will be about keeping it a place human beings are actually happy to live in, and that much of doing this will rely on tools of the past, or on finding protective bubbles through which the things we now treasure can survive in the new reality we are building. Hence Hon’s idea of dining halls and reading rooms, and even more generally his view that people will continue to search for meaning, sometimes turning to one of our most ancient technologies, religion, to do so.

Yet perhaps what Hon has most given us in The History of the Future is less a prediction than a kind of game we too can play, one that helps us see the outlines of the future; after all, game design is his thing. Perhaps I’ll try to play the game myself sometime soon…

 

How to predict the future with five simple rules

Caravaggio Fortune Teller

When it comes to predicting the future, it seems only our failure to consistently get tomorrow right has been steadily predictable, though that may be about to change, at least a little bit. If you doubt that our society, and especially those “experts” whose job it is to help steer us through the future, is doing a horrible job, just think back to the fall of the Soviet Union, which blindsided the American intelligence community, or 9-11, which did the same, or the financial crisis of 2008, or even much more recently the Arab Spring, the rise of ISIL, or the war in Ukraine.

Technological predictions have generally been as bad or worse; as proof just take a look at any movie or now yellowed magazine from the 1960’s and 70’s about the “2000’s”. People have put a lot of work into imagining a future but, fortunately or unfortunately, we haven’t ended up living there.

In 2005 a psychologist, Philip Tetlock, set out to explain our widespread lack of success at playing Nostradamus. Tetlock was struck by the colossal intelligence failure the collapse of the Soviet Union had unveiled. Policy experts, people who had spent their entire careers studying the USSR and arguing for this or that approach to the Cold War, had almost universally failed to foresee a future in which the Soviet Union would unravel overnight, let alone that such a mass form of social and geopolitical deconstruction would occur relatively peacefully.

Tetlock, in his book Expert Political Judgment: How Good Is It? How Can We Know?, made the case that a good deal of the predictions made by experts were no better than those of the proverbial dart-throwing chimp, that is, no better than chance, or, for the numerically minded among you, 33 percent, since his questions typically offered three possible outcomes. The most frightening revelation was that often the more knowledge a person had on a subject, the worse rather than the better their predictions were.

The reasons Tetlock discovered for the predictive failure of experts were largely psychological and all too human. Experts, like the rest of us, hate to be wrong, and have a great deal of difficulty assimilating information that fails to square with their world view. They tend to downplay or misremember their past predictive failures, and deal in squishy predictions without clearly defined probabilities: qualitative statements that are open to multiple interpretations. Most of all, there seems to be no real penalty for getting predictions wrong, even horribly wrong: policy makers and pundits who predicted a quick and easy exit from Iraq, or Dow 36,000 on the eve of the crash, go right on working with a “the world is complicated” shrug of the shoulders.

What’s weird, I suppose, is that while getting the future horribly wrong has no effect on career prospects, getting it right, especially when the winds had been blowing in the opposite direction, seems to shoot a person into the pundit equivalent of stardom. Nouriel Roubini, or “Dr. Doom”, was laughed at by some of his fellow economists when he was predicting the implosion of the banking sector well before Lehman Brothers went belly up. Afterwards he was inescapable, with a common image of the man, whose visage puts one in mind of Vlad the Impaler, or better Gene Simmons, finding himself surrounded by four or five ravishing supermodels.

I can’t help thinking there’s a bit of Old Testament style thinking sneaking in here, as if the person who correctly predicted disaster were so much smarter, or so tuned into the zeitgeist, that the future is now somehow magically at their fingertips, when the reality might be that they simply had the courage to speak up against the “wisdom” of the crowd.

Getting the future right is not a matter of intelligence, but it probably is a matter of thinking style. At least that’s where Tetlock, to return to him, has gone with his research. He has launched The Good Judgement Project, which hopes to train people to be better predictors of the future. You too can make better predictions if you follow these five rules (a toy sketch of how a forecaster might stitch them together follows the list):

  • Comparisons are important: use relevant comparisons as a starting point;
  • Historical trends can help: look at history unless you have a strong reason to expect change;
  • Average opinions: experts disagree, so find out what they think and pick a midpoint;
  • Mathematical models: when model-based predictions are available, you should take them into account;
  • Predictable biases exist and can be allowed for. Don’t let your hopes influence your forecasts, for example; don’t stubbornly cling to old forecasts in the face of news.
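To make the rules concrete, here is a minimal sketch, entirely my own and not the Good Judgement Project’s method, of how the five rules might be blended into a single probability forecast. Every number and weight below is invented purely for illustration:

# A toy sketch (mine, not the Good Judgement Project's) of blending the
# five rules into one probability forecast. All numbers are invented.

def forecast(base_rate, trend_estimate, expert_opinions, model_prediction,
             optimism_bias=0.0):
    """Return a probability between 0 and 1 combining the five rules."""
    # Rule 3: experts disagree, so average them and take the midpoint.
    expert_avg = sum(expert_opinions) / len(expert_opinions)

    # Rules 1, 2 and 4: weight the comparison class (base rate), the
    # historical trend, and the model-based prediction. Equal weights
    # are an arbitrary choice here.
    blended = (base_rate + trend_estimate + expert_avg + model_prediction) / 4

    # Rule 5: correct for a known, predictable bias (e.g. hope inflating
    # the estimate), then clamp to a valid probability.
    return max(0.0, min(1.0, blended - optimism_bias))

# A hypothetical question: "Will event X happen within three years?"
print(forecast(base_rate=0.20,             # how often similar events happen
               trend_estimate=0.25,        # what the trend line suggests
               expert_opinions=[0.10, 0.40, 0.30],
               model_prediction=0.30,
               optimism_bias=0.05))        # -> about 0.20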

One thing that surprises me about these rules is that apparently policy makers and public intellectuals aren’t following them in the first place. Obviously we need to do our best to make predictions while minimizing cognitive biases, and we should certainly be able to do better than the chimps, but we should also be aware of our limitations. Some aspects of the future are inherently predictable; demographics works like this: barring some disaster we have a pretty good idea what the human population will be at the end of this century and how it will be distributed. Or at least demographics should be predictable, though sometimes we just happen to miss 3 billion future people.
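For what it’s worth, here is a crude sketch of why population is among the more predictable quantities: growth compounds slowly, and the parents of 2040 have already been born. The rates are made up; real demographers use cohort-component models, not one-line compounding:

# A toy projection with invented rates; real forecasts use cohort-component
# models, but the point is that slow compounding is easy to anticipate.
pop = 7.2     # approximate world population in billions, circa 2014
rate = 0.010  # assumed annual growth rate, declining slowly below

for _ in range(86):                  # project year by year out to 2100
    pop *= 1 + rate
    rate = max(0.0, rate - 0.00012)  # assumed gradual slowdown in growth

print(round(pop, 1), "billion by 2100")  # prints about 11 billion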

For the things on our immediate horizon we should be able to give pretty good probabilities and those should be helpful to policy makers. It has a definite impact on behavior whether we think there is a 25 percent probability that Iran will detonate a nuclear weapon by 2017 or a 75 percent chance. But things start to get much more difficult the further out into the future you go or when you’re dealing with the interaction of events some of which you didn’t even anticipate – Donald Rumsfeld’s infamous “unknown unknowns”.

It’s the centenary of World War I, so that’s a relevant example. Would the war have happened had Gavrilo Princip’s assassination attempt on Franz Ferdinand failed? Maybe? But even had the war only been delayed, the timing would most likely have affected its outcome. Perhaps the Germans would have been better prepared and so swiftly defeated the Russians, French and British that the US would never have become involved. It may be the case that German dominance of Europe was preordained, and therefore, in a sense, predictable, as it seems to be asserting itself even now, but it makes more than a world of difference, not just in Europe but beyond, whether that dominance came in the form of the Kaiser, or Hitler, or, thank God, Merkel and the EU.

History is so littered with accidents, chance encounters, fateful deaths and births that it seems pretty unlikely that we’ll ever get a tight grip on the future, which in all likelihood will be as subject to these contingencies as the past (Tetlock’s Second Rule). The best we can probably do is hone our judgement, as Tetlock argues, plan for what seems inevitable given trend lines, secure ourselves as well as possible against the most catastrophic scenarios based on their probability of destruction (Nick Bostrom’s view), and design institutions that are resilient and flexible in the face of a set of extremely varied possible scenarios (the position of Nassim Taleb).

None of this, however, gives us a graspable and easily understood view of how the future might hang together. Science fiction, for all its flaws, is perhaps alone in doing that. The writer and game developer Adrian Hon may have discovered a new, and has certainly produced an insightful, way of practicing the craft. To his recent book I’ll turn next time…

 

Can Machines Be Moral Actors?


The Tin Man by Greg Hildebrandt

Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science-fiction. Is it possible to make moral machines, or in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put obtaining tenure at risk, but as the revolution in robotics has rolled forward it has become an issue we need to grapple with, and now.

Perhaps the most surprising thing to emerge out of the effort to think through what moral machines might mean has been less what it has revealed about our machines or their potential than what it has brought into relief in terms of our own very human moral behavior and reasoning. But I am getting ahead of myself; let me begin at the beginning.

Step back for a second and think about the largely automated systems that surround you right now, not in any hypothetical future with our version of Rosie from the Jetsons, or R2D2, but in the world in which we live today. The car you drove in this morning and the computer you are reading this on were brought to you in part by manufacturing robots. You may have stumbled across this piece via the suggestion of a search algorithm programmed, but not really run, by human hands. The electrical system supplying the computer or tablet on which you are reading is largely automated, as are the actions of the algorithms that are now trying to grow your 401K or pension fund. And while we might not be able to say what this automation will look like ten or twenty years down the road, we can nearly guarantee that barring some catastrophe there will only be more of it, and given the lag between software and hardware development this will be the case even should Moore’s Law completely stall out on us.

The question ethicists are asking isn’t really whether we should build an ethical dimension into these types of automated systems, but whether we can, and what doing so would mean for our status as moral agents. Would building AMAs mean a decline in human moral dignity? Personally, I can’t really say I have a dog in this fight, but hopefully that will help me to see the views of all sides with more clarity.

Of course, the most ethically fraught area in which AMAs are being debated is warfare. There seems to be an evolutionary pull to deploy machines in combat with greater and greater levels of autonomy. The reason is simple: advanced nations want to minimize risks to their own soldiers, and remote controlled weapons are at risk of jamming and being cut off from their controllers. Thus, machines need to be able to act independent of human puppeteers. A good case can be made, though I will not try to make it here, that building machines that can make autonomous decisions to actually kill human beings would constitute the crossing of a threshold humanity would have preferred to remain on this side of.

Still, while it’s typically a bad idea to attach one’s ethical precepts to what is technically possible, here it may serve us well, at least for now. The best case to be made against fully autonomous weapons is that artificial intelligence is nowhere near being able to make ethical decisions regarding use of force on the battlefield, especially on mixed battlefields with civilians, which is what most battlefields are today. As argued by Guarini and Bello in “Robotic Warfare: some challenges in moving from non-civilian to civilian theaters”, current forms of AI are not up to the task of threat ascription, can’t do mental attribution, and are unable to tackle the problem of isotropy, a fancy word that just means “the relevance of anything to anything”.

This is a classical problem of error through correlation, something that seems to be inherent not just in these types of weapons systems, but in the kinds of specious correlations made by big-data algorithms as well. It is the difficulty of gauging a human being’s intentions from the pattern of his or her behavior. Is that child holding a weapon or a toy? Is that woman running towards me because she wants to attack, or is she merely frightened and wants to act as a shield between me and her kids? This difficulty in making accurate mental attributions doesn’t need to revolve around life and death situations. Did I click on that ad because I’m thinking about moving to light beer or because I thought the woman in the ad looked cute? Too strong a belief in correlation can start to look like belief in magic: thinking that drinking the light beer will get me a reasonable facsimile of the girl smiling at me on the TV.
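A toy example, entirely invented here and not drawn from Guarini and Bello, may make the point. Imagine a classifier that has learned only the correlations in its training data; it has no access to intentions at all:

# A made-up illustration of error through correlation: the model labels
# intent by majority association, not by any grasp of what a person wants.

# Training world: most "running toward the unit" cases were attacks.
TRAINING = [("running", "attack")] * 9 + [("running", "fleeing")]

def naive_threat_model(behavior):
    """Return the label most correlated with the behavior in training."""
    labels = [label for b, label in TRAINING if b == behavior]
    return max(set(labels), key=labels.count)

# Deployment: a frightened civilian runs toward the robot for protection.
print(naive_threat_model("running"))  # -> "attack": correlation, not intent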

Algorithms and the machines they run are still pretty bad at this sort of attribution. Bad enough, at least right now, that there’s no way they can be deployed ethically on the battlefield. Unfortunately, however, the debate about ethical machines often gets lost in the necessary, but much less comprehensive, fight over “killer robots”. This despite the fact that only a small part of the ethical remit of our new and much more intelligent machines will actually involve robots acting as weapons.

One might think that Asimov already figured all this out with his Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law;
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The problem is that the types of “robots” we are talking about bear little resemblance to the ones dreamed up by the great science-fiction writer, even if Asimov mainly intended the laws as a device for exploring the tensions between them.

The problems robotics ethicists face today have more in common with the “Trolley Problem”, in which a machine has to make a decision between competing goods rather than contradictory imperatives. A self-driving car might be faced with a choice between hitting a pedestrian and hitting a school bus. Which should it choose? If your health care services are being managed by an algorithm, what are its criteria for allocating scarce resources?
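To see how quickly this becomes an ethics problem rather than an engineering one, here is a deliberately crude sketch, entirely my own invention and no vendor’s actual system, of a machine ranking competing harms. Notice that the weights are arbitrary, and someone had to choose them:

# A crude sketch of choosing the "least bad" option by assumed harm.
# The weights encode a moral judgment no engineer is qualified to settle.
HARM_WEIGHTS = {"pedestrian": 1.0, "school_bus": 30.0, "barrier": 0.2}

def least_bad_option(options):
    """Pick the option with the lowest assumed harm score."""
    return min(options, key=lambda option: HARM_WEIGHTS[option])

print(least_bad_option(["pedestrian", "school_bus"]))  # -> "pedestrian"
print(least_bad_option(["pedestrian", "barrier"]))     # -> "barrier"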

One approach to solving these types of problems might be to consistently use one of the major moral philosophies, Kantian, utilitarian, virtue ethics and the like, as a guide for the types of decisions machines may make. Kant’s categorical imperative of “acting only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction,” or aiming for utilitarianism’s “greatest good of the greatest number”, might seem like a good approach, but as Wendell Wallach and Colin Allen point out in their book Moral Machines, such seemingly logically straightforward rules are for all practical purposes incalculable.

Perhaps if we ever develop real quantum computers capable of scanning through every possible universe we would have machines that can solve these problems, but for the moment we do not. Oddly enough, the moral philosophy most dependent on human context, virtue ethics, seems to have the best chance of being substantiated in machines, but the problem is that virtues are highly culture dependent. We might get machines that make vastly different and opposed ethical decisions depending on the culture in which they are embedded.

And here we find perhaps the biggest question that has been raised by trying to think through the problem of moral machines. For if machines are deemed incapable of using any of the moral systems we have devised to come to ethical decisions, we might wonder whether human beings really can use them either. This is the case made by Anthony Beavers, who argues that the very attempt to make machines moral might be leading us towards a sort of ethical nihilism.

I’m not so sure. Or rather, as long as our machines, whatever their degree of autonomy, merely reflect the will of their human programmers, I think we’re safe declaring them moral actors but not moral agents, and this should save us from slipping down the slope of ethical nihilism Beavers is rightly concerned about. In other words, I think Wallach and Allen are wrong in asserting that intentionality doesn’t matter when judging an entity to be a moral agent or not. Or as they say:

“Functional equivalence of behavior is all that can possibly matter for the practical issues of designing AMAs.” (emphasis theirs, p. 68)

Wallach and Allen want to repeat the same behaviorist mistake made by Alan Turing and his assertion that consciousness didn’t matter when it came to the question of intelligence, a blind alley they follow so far as to propose their own version of the infamous test, what they call a Moral Turing Test, or MTT. Yet it’s hard to see how we can grant full moral agency to any entity that has no idea what it is actually deciding, indeed that would just as earnestly do the exact opposite, such as deliberately target women and children, if its program told it to do so.

Still, for the immediate future, and with the frightening and sad exception of robots on the battlefield, the role of moral machines seems less likely to involve making ethical decisions themselves than helping human beings make them. In Moral Machines Wallach and Allen discuss MedEthEx, an amazing algorithm that helps healthcare professionals with decision making. It’s an exciting time for moral philosophers, who will get to turn their ideas into all sorts of apps and algorithms that will hopefully aid moral decision making for individuals and groups.
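As a rough illustration, and emphatically not the real MedEthEx, here is a minimal sketch of a duty-balancing advisor of that general kind: it scores each option against a few prima facie duties and flags the trade-offs for a human to review, rather than deciding anything itself. The duty names and scores are my own invented stand-ins:

# A minimal sketch of a decision-support advisor, loosely in the spirit
# of systems like MedEthEx (this is NOT that system). Scores are invented.
DUTIES = ["autonomy", "beneficence", "nonmaleficence"]

def advise(options):
    """options: {name: {duty: score from -2 to 2}}. Prints a ranked review."""
    ranked = sorted(options, key=lambda name: -sum(options[name].values()))
    for name in ranked:
        total = sum(options[name].values())
        strained = [d for d in DUTIES if options[name][d] < 0]
        print(f"{name}: total {total}, strains {strained or 'no duty'};"
              f" clinician makes the final call")

advise({
    "respect_refusal": {"autonomy": 2, "beneficence": -1, "nonmaleficence": 0},
    "urge_treatment":  {"autonomy": -1, "beneficence": 2, "nonmaleficence": 1},
})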

Some might ask whether leaning on algorithms to help one make moral decisions is a form of “cheating”. I’m not sure, but I don’t think so. A moral app reminds me of a story I heard once about George Washington that has stuck with me ever since. Washington, from the time he was a small boy into old age, carried a little pocket virtue guide with him, full of all kinds of rules of thumb for how to act. Some might call that a crutch, but I just call it a tool, and human beings are the only animal that can’t thrive without its tools. The very moral systems we are taught as children might equally be thought of as tools of this sort. Washington only had a way of keeping them near at hand.

The trouble arises when one becomes too dependent on such “automation” and in the process loses the underlying capability once the tool is removed. Yet this is a problem inherent in all technology. The ability of people to do long division atrophies because they always have a calculator at hand. I have a heck of a time making a fire without matches or a lighter, and would have difficulty dealing with cold weather without clothes.

It’s certainly true that our capacity to be good people is more important to our humanity than our ability to do maths in our head or on scratch paper, or even to make fire or protect ourselves from the elements, so we’ll need to be especially conscious, as moral tools develop, to keep our natural ethical skills sharp.

This again applies more broadly than the issue of moral machines, but we also need to be able to better identify the assumptions that underlie the programs running in the instruments that surround us. It’s not so much “code or be coded” as the ability to unpack the beliefs which programs substantiate. Perhaps someday we’ll even have something akin to food labels on the programs we use that will inform us exactly what those assumptions are. In other words, we have a job in front of us, for whether they help us or hinder us in our goal to be better people, machines remain, for the moment at least, mere actors, and a reflection of our own capacity for good and evil, rather than moral agents in themselves.
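It is not hard to imagine what such a label might look like in practice. A minimal sketch, with every field and value invented here purely for illustration:

# A sketch of a "food label" for software: a machine-readable declaration
# of the assumptions baked into a program. All fields here are invented.
ASSUMPTIONS_LABEL = {
    "objective": "maximize ad click-through",
    "proxies_used": {"user interest": "time spent on page"},
    "known_biases": ["training data skews toward urban users"],
    "value_tradeoffs": ["personalization prioritized over privacy"],
}

def print_label(label):
    """Print the label the way a nutrition panel lists ingredients."""
    for field, value in label.items():
        print(f"{field}: {value}")

print_label(ASSUMPTIONS_LABEL)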

 

City As Superintelligence

Medieval Paris

A movement is afoot to cover some of the largest and most populated cities in the world with a sophisticated array of interconnected sensors, cameras, and recording devices, able to track and respond to every crime or traffic jam, every crisis or pandemic, as if it were an artificial immune system spread out over hundreds of densely packed kilometers filled with millions of human beings. The movement goes by the name of smart-cities, or sometimes sentient cities, and the fate of the project is intimately tied to the fate of humanity in the 21st century and beyond, because the question of how the city is organized will define the world we live in from here forward: the beginning of the era of urban mankind.

Here are just some of many possible examples of smart cities at work. There is the city of Songdo in South Korea, a kind of testing ground for companies such as Cisco, which can experiment there with integrated technologies, to quote a recent article on the subject, such as:

TelePresence system, an advanced videoconferencing technology that allows residents to access a wide range of services including remote health care, beauty consulting and remote learning, as well as touch screens that enable residents to control their unit’s energy use.

Another example would be IBM’s Smart City Initiative in Rio, which has covered that city with a dense network of sensors and cameras that allow centralized monitoring and control of vital city functions, and was somewhat brazenly promoted by that city’s mayor during a TED Talk in 2012. New York has set up a similar system, but it is in the non-Western world where smart cities will live or die, because it is there that almost all of the world’s increasingly rapid urbanization is taking place.

Thus India, which has yet to urbanize like its neighbor, and sometimes rival, China, has plans to build up to 100 smart cities, with $4.5 billion of the funding towards such projects being provided by perhaps the most urbanized country on the planet: Japan.

China continues to urbanize at a historically unprecedented pace, with 250 million of its people, the equivalent of the entire population of the United States a generation ago, set to move to its cities in the next 12 years. (I didn’t forget a zero.) There you have a city few of us have even heard of, Chongqing, which The Guardian several years back characterized as “the fastest growing urban center on the planet”, with more people in it than the entire countries of Peru or Iraq. No doubt in response to urbanization pressure, and at least as far back as 2011, Cisco was helping that city with its so-called Peaceful Chongqing Project, an attempt to blanket the city in 500,000 video surveillance cameras, a collaboration that was possibly derailed by Edward Snowden’s allegations that the NSA had infiltrated or co-opted U.S. companies.

Yet there are other smart-city initiatives that go beyond monitoring technologies. Under this rubric should fall the renewed interest in arcologies, massive buildings that contain within them an entire city, and thus in principle allow a city to be managed, in terms of its climate, flows, etc., the way the internal environment of a skyscraper can be managed. China had an arcology in the works in Dongtan, which appears to have been scrapped over corruption and cost overrun concerns. Abu Dhabi has its green arcology in Masdar City, but it’s in Russia, in the grip of a 21st-century version of czarism, of all places, where the mother of all arcologies is planned: architect Norman Foster’s Crystal Island, which, if actually built, would be the largest structure on the planet.

On the surface, there is actually much to like about smart-cities and their related arcologies. Smart-cities hold out the promise of greater efficiency for an energy starved and warming world. They should allow city management to be more responsive to citizens. All things being equal, smart-cities should be better than “dumb” ones at responding to everything from common fires and traffic accidents to major man-made and natural disasters. If Wendy Orent is correct when she wrote in a recent issue of AEON that we have less to fear from pandemics emerging from the wilderness, such as Ebola, than from those that evolve in areas of extreme human density, smart-city applications should make the response to pandemics both quicker and more effective.

Especially in terms of arcologies, smart-cities represent something relatively new. We’ve had our two major models of the city since the early to mid-20th century, whether the skyscraper cities pioneered by New York and Chicago or the cul-de-sac suburban sprawl of automobile-dominated cities like Phoenix. Cities going up now in the developing world certainly look more modern than American cities, much of whose infrastructure is in a state of decay, but the model is the same, with the marked exception of all those super-trains.

All that said, there are problems with smart-cities, and the thinker who has written most extensively on the subject, Anthony M. Townsend, lays them out excellently in his book Smart Cities: Big Data, Civic Hackers and the Quest for a New Utopia. Townsend sees three potential problems with smart-cities: they might prove, in his terms, “buggy, brittle, and bugged”.

Like all software, the kind that will be used to run smart-cities might exhibit unwanted surprises. We’ve seen this in some of the most sophisticated software we have running: financial trading algorithms whose “flash crashes” have felled billion-dollar companies.

The loss of money, even a great deal of money, is something any reasonably healthy society should be able to absorb, but what if buggy software made essential services go offline over an extended period? Cascades from services now coupled by smart-city software could take out electricity and communication in a heat wave, or in a re-run of last winter’s “polar vortex”, and lead to loss of life. Perhaps having functions separated in silos, with a good deal of redundancy, even at the cost of inefficiency, is “smarter” than having them tightly coupled and under one system. That is, after all, how the human brain works.
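The arithmetic behind that intuition is simple enough. A back-of-envelope sketch, with a failure rate assumed out of thin air:

# Back-of-envelope: why silos plus redundancy can beat tight coupling.
# Assume any one subsystem fails in a crisis with probability 0.01.
p_fail = 0.01

# Tightly coupled: one shared platform, so one failure takes it all down.
coupled_blackout = p_fail

# Siloed with an independent backup: both copies must fail at once.
siloed_blackout = p_fail * p_fail

print(coupled_blackout, siloed_blackout)  # 0.01 versus 0.0001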

Smart-cities might also be brittle. We might not be able to see that we had built a precarious architecture that could collapse in the face of some stressor or deliberate effort to harm (ahem, Windows). Computers crash, and sometimes do so for reasons we are completely at a loss to identify. Or imagine someone blackmailing a city by threatening to shut it down after having hacked its management system. Old school dumb-cities don’t really crash, even if they can sicken and die, and it’s hard to say they can be hacked.

Would we be in danger of decreasing a city’s resilience by compressing its complexity into an algorithm? If something like Stephen Wolfram’s principles of computational equivalence and computational irreducibility are correct, then the city is already a kind of computation, and no model we can create of it will ever be more effective than this natural computation itself.
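Computational irreducibility is easy to show in miniature. Here is a standard cellular-automaton demonstration (textbook fare, not Wolfram’s own code): for Rule 30 no shortcut to step N is known; you simply have to run all N steps, just as, on Wolfram’s view, there is no shortcut past the city’s own computation:

# Rule 30: a simple rule, unpredictable output; no known formula jumps
# ahead to row N without computing every row before it.
RULE = 30

def step(cells):
    """Compute the next row from each cell and its two neighbors."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 +
                      cells[(i + 1) % n])) & 1 for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single live cell
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)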

Or, to make my meaning clearer, imagine a person who had suffered some horrible accident, whereby to save him you had to replace all of his body’s natural information processing with a computer program. Such a program would have to regulate everything from breathing to metabolism to muscle movement, along with the immune system and the exchange between neurons, not to mention a dozen other things. My guess is that you’d have to go through many generations of such programs before they were anywhere near workable. The first generations would miss important elements, be based on wrong assumptions about how things worked, and would be loaded with perhaps catastrophic design errors that you couldn’t identify until the program was fully run in multiple iterations.

We are blissfully unaware that we are the product of billions of years of “engineering” in which “design” failures were weeded out by evolution. Cities have only a few thousand years of a similar type of evolution behind them, but trying to control a great number of their functions via algorithms run by “command centers” might pose risks similar to those in my body example. Reducing city functions to something we can compute in silicon might oversimplify the city in such a way as to reduce its resilience to stressors cities have naturally evolved to absorb. That is, there is, in all use of “Big Data”, a temptation to interpret reality only in light of the model that scaffolds this data, or to reframe problems in ways that can be mathematically modeled. We set ourselves up for crises when we confuse the map with the territory, or, as Jaron Lanier said:

What makes something real is that it is impossible to represent it to completion.

Lastly, and as my initial examples of smart-cities should have indicated, smart-cities are by design bugged. They are bugged so as to surveil their citizens in an effort to prevent crime or terrorism, or even just to respond to accidents or disasters. Yet the promise of safety comes at the cost of one of the virtues of city living: the freedom granted by anonymity. But even if we care nothing for such things, I’ve got news: trading privacy for security doesn’t even work.

Chongqing may spend tens of millions of dollars installing CCTV cameras, but would-be hooligans or criminals, or just people who don’t like being watched, such as those in London, have a twenty-dollar answer to all these gizmos: it’s called a hoodie. Likewise, a simple pattern of dollar store facepaint, strategically applied, can short-circuit the most sophisticated facial recognition software. I will never cease to be amazed at human ingenuity.

We need to acknowledge that it is largely companies and individuals with extremely deep pockets, and even deeper political connections, that are promoting this model of the city. Townsend estimates it is potentially a $100 billion business. We need to exercise our historical memory and recall how it was automobile companies that lobbied for, and ended up creating, our world of sprawl. Before investing millions or even billions, cities need to have an idea of what kind of future they want and not be swayed by the latest technological trends.

This is especially the case when it comes to cities in the developing world, where conditions often resemble something more out of Dickens’s 19th century than even the 20th. When I enthusiastically asked a Chinese student about the arcology at Dongtan he responded with something like “Fools! Who would want to live in such a thing! It’s a waste of money. We need clean air and water, not such craziness!” And he’s no doubt largely right. And perhaps we might be happy that the project ultimately unraveled and say with Thoreau:

As for your high towers and monuments, there was a crazy fellow once in this town who undertook to dig through to China, and he got so far that, as he said, he heard the Chinese pots and kettles rattle; but I think that I shall not go out of my way to admire the hole which he made. Many are concerned about the monuments of the West and the East — to know who built them. For my part, I should like to know who in those days did not build them — who were above such trifling.

The age of connectivity, if it’s done thoughtfully, could bring us cities that are cleaner, greener, and more able to deal with the shocks of disaster or respond to the spread of disease. Truly smart-cities should support a more active citizenry, a less tone-deaf bureaucracy, and a more socially and culturally rich, entertaining, and civil life: the very reasons human beings have chosen to live in cities in the first place.

If the age of connectivity is done wrong, cities will have poured scarce resources down a hole of corruption as deep as the one dug by Thoreau’s townsman, will have turned vibrant cultural and historical environments into corporate “flat-pack” versions of tomorrowland, and, most frighteningly of all, will have turned the potential democratic agora of the city into a massive panopticon of Orwellian monitoring and control. Cities, in the Wolfram rather than the Bostrom sense, are already a sort of superintelligence, or better, a hive mind of their interconnected yet free individuals, more vibrant and important than any humanly built structure imaginable. Will we let them stay that way?