The Human Age

Atlas and the Hesperides (1925), John Singer Sargent

There is no writer now, and perhaps there never has been, able to convey the wonder and magic of science with poetry comparable to Diane Ackerman’s. In some ways this makes a great deal of sense given that she is a poet by calling rather than a scientist. To mix metaphors: our knowledge of the natural world is merely Ackerman’s palette, whose colors she uses to paint a picture of nature. It is a vision of the world as magical as that of the greatest worlds of fiction (think Dante’s Divine Comedy) or our most powerful realms of fable.

There is, however, one major difference between Ackerman and her fellow poets and storytellers: the strange world she breathes fire into is the true world, the real nature whose inhabitants and children we happen to be. The picture science has given us, the closest to the truth we have yet come, is in many ways stranger, more wondrous, more beautifully sublime than anything human beings have been able to imagine; it is therefore the perfect subject, and home, for a poet.

The task Ackerman sets herself in her latest book, The Human Age: The World Shaped by Us, is to reconcile us to the fact that we have now become the authors and artists, rather than the mere spectators, of nature’s play. We live in an era in which our effect upon the earth has become so profound that some geologists want to declare it the “Anthropocene,” signaling that our presence is now leaving its trace upon the geological record in the way only the mightiest, if slow-moving, forces of nature have done heretofore. Yet in light of the speed with which we are changing the world, we are perhaps more like a sudden catastrophe than a languid geological transformation taking millions of years to unfold.

Ackerman’s hope is to find a way to come to terms with the scale of our own impact without at the same time reducing humanity to the role of the mythical Höðr, bringing destruction on account of our blindness and foolishness, not to mention our greed. Ackerman loves humanity as much as she loves nature; or rather, she refuses, as some environmentalists are prone to do, to place human beings on one side and the natural world on the other. For her, we are nature as much as anything else that exists. Everything we do is therefore “natural”; the question we should be asking is whether what we are doing is wise.

In The Human Age Ackerman attempts to reframe our current pessimism and self-loathing regarding our treatment of “mother” nature, everything from the sixth great extinction we are causing to our failure to adequately confront climate change, by giving us a window into the many incredible things we are doing right now that will benefit our fragile planet.

She brings our attention to new movements in environmentalism such as “Reconciliation Ecology,” which seeks to bring human settlements into harmony with the much broader needs of the rest of nature. Reconciliation Ecology is almost the opposite of another movement of growing popularity in “neo-environmentalist” circles, namely “Environmental Modernism.” Whereas Environmental Modernism seeks to sever our relationship with nature in order to save it, Reconciliation Ecology aims to naturalize the human artifice, bringing farming and even wilderness into the city itself. It seeks to heal the fragmented wilderness our civilization has brought about by bringing the needs of wildlife into the design process.

Rather than leaving our highways strewn with roadkill, we could provide animals with a safe way to cross the street. This might benefit the deer, the groundhog, the raccoon, the porcupine. We might even construct tunnels for the humble meadow vole. Providing wildlife corridors, large and small, is one of the few ways we can reconcile the living space and migratory needs of “non-urbanized” wildlife with our fragmented and now global sprawl.

Human beings, Ackerman argues, are as negatively impacted by the disappearance of wilderness as any other of nature’s creatures, and perhaps, given our aesthetic sensitivity, even more so. For a long time now we have sought to use our engineering prowess to subdue nature. Why not use it to make the human-made environment more natural? She highlights a growing movement among architects and engineers to take their inspiration not from the legacy of our lifeless machines and buildings, but from nature itself, which so often manages to create works of beautiful efficiency. In this vein we find leaps of the imagination such as the Eastgate Centre, designed by Mick Pearce, who took his inspiration from the breathing, naturally temperature-controlled structure of the termite mound.

The idea of the Anthropocene is less an acknowledgement of our impact than a recognition of our responsibility. For so long we have warred against nature’s limits, arrows, and indifference that it is somewhat strange to find ourselves in the position of her caretaker and even designer. And make no mistake about it, for an increasing number of the plants and animals with whom we share this planet their fate will be decided by our action or inaction.

Some environmentalists would argue for nature’s “purity” and against our interference, even when this interference is done in the name of her creatures. Ackerman, though not entirely certain, is arguing against such environmental nihilism, paralysis, or puritanism. If it is necessary for us to “fix” nature, so be it, and she sees in our science and technology our greatest tool for coming to nature’s aid. Such fixes can entail everything from permitting, rather than attempting to uproot, invasive species we have inadvertently or deliberately introduced when those invasives benefit an ecosystem, to aiding the relocation of species as biomes shift under the impact of climate change, to reintroducing extinct species we have managed to save and resurrect through their DNA.

We are now entering an era in which we are able not only to mimic nature but to redesign it at the genetic level, creating chimeras that nature, left to itself, would find difficult or impossible to replicate. Ackerman is generally comfortable with our ever more cyborg nature and revels in the science that allows us to literally print body parts and, one day, whole organs. As the rest of us should be, she is flabbergasted by ongoing revolutions in biology that are rewriting what it means to be human.

The early 21st-century field of epigenetics is giving us a much more nuanced and complex view of the interplay between genes and the environment. It is not only that multiple genes need to be taken into account in explaining conditions and behaviors, but that genes are in a sense tuned by the environment itself. Indeed, much of this turning “on or off” of genes is a form of genetic memory. In a strange echo of Lamarck, the experiences of one’s close ancestors, their feasts and famines, are encoded in the genes of their descendants.

Add to that our recent discoveries regarding the microbiome, the collection of bacteria living within us whose cells far outnumber our own and whose influence in many ways rivals that of our genes, and one gets an idea of how far we have moved from the notions of what it meant to be human held by scientists even a few years ago, and of how much distance has been put between current biology and simplistic versions of environmental or genetic determinism.

Some, such as the philosopher of biology John Dupré, see in our advancing understanding of the role of the microbiome and epigenetics a revolutionary change in human self-understanding. For a generation we chased after a simplistic idea of genetic determinism in which genes were seen as a sort of “blueprint” for the organism. This is now known to be false. Genes are just one part of a permeable interaction between genome, environment, and microbiome that guides individual development and behavior.

We are all of us collectives, constantly swapping biological information. Rather than seeing the microscopic world as a kind of sideshow to the “real” biological story of large organisms such as ourselves, we might be said to be where we have always been: in Stephen Jay Gould’s “Age of Bacteria” as much as in an Anthropocene.

Returning to Ackerman: she is amazed at the recent advances in artificial intelligence and, like Tyler Cowen, even wonders whether scientific discoveries will soon no longer be the prerogative of humans but of our increasingly intelligent machines. Such is the conclusion one might draw from looking at the work of Hod Lipson of Cornell and his “Eureqa Machine.” Feed the Eureqa Machine observations or data and it is able to come up with equations that describe them all on its own. Ackerman does, however, doubt whether we could ever build a machine that replicated human beings in all their wondrous weirdness.
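Lipson’s actual system evolves symbolic expressions, a far more powerful search than anything shown here; but as a hedged toy illustration of the general idea (the candidate formulas, data, and scoring are my own invented stand-ins, not Eureqa’s method), here is a sketch that “discovers” the law of falling bodies by scoring a handful of candidate equations against simulated observations:

```python
import math

# Simulated observations of a falling object: y = 4.9 * t^2
data = [(t / 10, 4.9 * (t / 10) ** 2) for t in range(1, 50)]

# A tiny hypothesis space of one-parameter candidate laws. (Eureqa
# searches a vast space of expression trees with evolutionary methods;
# this fixed menu is purely illustrative.)
candidates = {
    "y = a*t":   lambda t, a: a * t,
    "y = a*t^2": lambda t, a: a * t ** 2,
    "y = a*e^t": lambda t, a: a * math.exp(t),
}

def best_fit(form):
    """Coarse scan for the parameter a minimizing squared error."""
    best_a, best_err = None, float("inf")
    for i in range(1, 1000):
        a = i / 100
        err = sum((form(t, a) - y) ** 2 for t, y in data)
        if err < best_err:
            best_a, best_err = a, err
    return best_a, best_err

results = {name: best_fit(f) for name, f in candidates.items()}
winner = min(results, key=lambda name: results[name][1])
print(winner, results[winner])  # the quadratic law, with a near 4.9, wins
```

The interesting part of the real system is that the hypothesis space is not handed to the machine in advance; it invents and refines candidate equations itself, which is what makes the “machine scientist” claim plausible.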

Where her doubts regarding technology veer toward warning is on the question of the digitization of the world. Ackerman is best known for her 1990 A Natural History of the Senses, a work that explored the five forms of human perception. Little wonder, then, that she would worry about the world being flattened, on our ubiquitous screens, into the visual sense alone. She laments:

What if, through novelty and convenience, digital nature replaces biological nature?

The further we distance ourselves from the spell of the present, explored by all our senses, the harder it will be to understand and protect nature’s precarious balance, let alone the balance of our own human nature. I worry about virtual blinders. Hobble all the senses except the visual, and you produce curiously deprived voyeurs.  (196-197)

While concerned that current technologies may be flattening human experience, leaving us visually mesmerized behind screens at the expense of the body even as those screens broaden our scope, allowing us to see into worlds small and large and at speeds never before possible, Ackerman accepts our cyborg nature. For her we are, to steal a phrase from the philosopher Andy Clark, “natural-born cyborgs,” and this is not a new thing. Since our beginning we have been changelings, able to morph into furred animals in the cold with our clothes, wield fire like a mythical dragon, or cover ourselves with shells of metal, among much else.

Ackerman is not alone in the view that our cyborg tendencies are an ancient legacy. Shortly before I read The Human Age I finished the anthropologist Timothy Taylor’s short and thought-provoking The Artificial Ape: How Technology Changed the Course of Human Evolution. Taylor makes a strong case that anatomically modern humans co-evolved with technology; indeed, that the technology came first and has in a way been the primary driver of our evolution.

The technology Taylor thinks lifted us beyond our hominid cousins wasn’t one of the usual suspects, fire or stone tools, but likely the unsung baby sling. This simple invention allowed humans to overcome a constraint suffered by the other hominids, whose brains, contrary to common opinion, were getting smaller because upright walking was putting a premium on quick rather than delayed development of the young. Slings allowed mothers to carry big-brained babies that took longer to develop but, because of that long period of youth, could learn much more than their faster-developing relatives. In effect, slings were a way for human mothers, who needed their hands free, to externalize the womb and evolve a koala-like pouch.

To return to The Human Age: although Ackerman has, as always, given us a wonderfully written book filled with poetry and insight, it is not without its flaws. Here I will focus only on what I found to be the most important one; namely, that for a book named after our age, there is not enough in it regarding humans. This is what I mean: though the problems stemming from the Anthropocene are profound and serious, the animal most likely to suffer broadly from phenomena such as climate change is us.

The weakness of the idea of the Anthropocene, when judged largely positively, which is what Ackerman is trying to do, is that it universalizes a state of control over nature that is largely limited to advanced countries and the world’s wealthy. The threat of rising sea levels looks quite different from the perspective of Manhattan than from that of Chittagong. A good deal of the environmental gains in advanced countries over the past half century can be credited to globalization, which, amid its many benefits, has also entailed the offloading of pollution, garbage, and waste processing from developed to developing countries. This is the story that the photos of Edward Burtynsky, whom Ackerman profiles, tell: stories such as the lives of the shipbreakers of Bangladesh, whose world resembles something out of a dystopian novel.

Humanity is not sovereign over nature everywhere, and some of us are faced not only with a wildness that has not been subdued, but with a humanity that has itself become like an unpredictable natural force, raining down unfathomable, uncontrollable good and ill. Such is the world now being experienced by the West African societies suffering under the Ebola epidemic. It is a world we might better understand by looking at a novel written on the eve of our mastery over nature, a novel by another amazing writer who was also a woman.

The Future As History


It is a risky business trying to predict the future, and although it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what the point is of all this prophecy that stretches out beyond the decades one is expected to live. The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least inspire Noah-like preparations for disaster. Those who imagine a dark future are trying to scare the bejesus out of us so that we do what is necessary not to end up in a world gone dark, swept away by the floodwaters. The problem is, extreme fear more often leads to paralysis than to reform or ark-building, something that God, had he been a behavioral psychologist, would have known.

Those with a Pollyannaish story about tomorrow, on the other hand, are usually trying to convince us to buy into some set of current trends, and for that reason optimists often end up being the last thing they think they are: a support for conservative politics. Why change what’s going well or destined, in the long run, to end well? The problem here is that, as Keynes said, “In the long run we are all dead,” which should be an indication that if we see a problem in front of us we should address it, rather than rest on faith and let some telos of history sort the whole thing out.

It’s hard to ride the thin line between optimism and pessimism regarding the future while still providing a view of it that is realistic, compelling, and encourages us toward action in the present. Science fiction, where it avoids the pull toward utopia or dystopia, and regardless of its flaws, does manage to present versions of the future that are gripping, a thousand times better than the dry futurists’ “reports” that go down like sawdust, but the genre suffers from having too many balls in the air.

There is not only the common complaint that, as with political novels, the human aspects of a story suffer from being tied too tightly to a social “purpose,” in this case to offer plausible predictions of the future, but also the problem that the idea of crafting a “plausible” future can itself serve as an anchor on the imagination. An author of fiction should be free to sail into any world that comes into his head, plausible destinations be damned.

Adrian Hon’s recent A History of the Future in 100 Objects overcomes this problem of using science fiction to craft plausible versions of the future by jettisoning fictional narrative and presenting the future in the form of a work of history. Hon was inspired to take this approach in part by an actual recent work of history, Neil MacGregor’s A History of the World in 100 Objects. In the same way that objects from the human past can reveal deep insights not just into the particular cultures that made them but into the trajectory of humankind as a whole, 100 imagined “objects” from the century yet to play out allow Hon to reveal the “culture” of the near future, which, when all is said and done, amounts to interrogating the path we are currently on.

Hon is perhaps uniquely positioned to give us a feel for where we are currently headed. Trained as a neuroscientist, he is able to see what the ongoing revolutionary breakthroughs in neuroscience might mean for society. He also has his finger on the pulse of the increasingly important world of online gaming as CEO of the company Six-to-Start, which develops interactive real-world games such as Zombies, Run!

In what follows I’ll look at 9 of Hon’s objects of the future which I thought were the most intriguing. Here we go:

#8 Locked Simulation Interrogation – 2019

There’s a lot of discussion these days about the revival of virtual reality, especially with the quite revolutionary Oculus Rift headset. We’ve also seen a surge in brain scanning that purports to see inside the human mind, revealing everything from when a person is lying to whether they are prone to mystical experiences. Hon imagines these technologies being combined, just a few years out, into a brand new and disturbing form of interrogation.

In 2019, after a series of terrorist attacks in Charlotte, North Carolina, the FBI starts using so-called “locked-sims” to interrogate terrorist suspects. A suspect is run through a simulation in which his neurological responses are closely monitored in the hope that they might do things such as help identify other suspects or unravel future plots.

The technique of locked-sims appears so successful that it soon becomes the rage in other areas of law enforcement involving much less existential public risks. Imagine murder suspects or even petty criminals run through a simulated version of the crime, their every internal and external reaction minutely monitored.

Whatever their promise, locked-sims prove full of errors and abuses, not the least of which is their tendency to leave the innocent people interrogated in them emotionally scarred. Ancient protections end up saving us from a nightmare technology: in 2033 the US Supreme Court deems locked-sims a form of “cruel and unusual punishment” and therefore constitutionally prohibited.

#20 Cross Ball – 2026

A good deal of A History of the Future deals with how we might relate to advances in artificial intelligence, and one thing Hon tries to make clear is that, in this century at least, human beings won’t suddenly exit the stage to make room for AI. For a good while the world will be hybrid.

“Cross Ball” is an imagined game a little like the ancient Mesoamerican ballgame, except that in Cross Ball human beings work in conjunction with bots. Hon sees a lot of human-AI teams in the future world of work, but in sports the reason for the amalgam has more to do with human psychology:

Bots on their own were boring; humans on their own were old-fashioned. But bots and humans together? That was something new.

This would be new for real-world games, but we already have it in “freestyle chess,” where old-fashioned humans can no longer beat machines and no one seems to want to watch matches between chess-playing programs, so the games with the most interest have been those matching humans working with programs against other humans working with programs. In the real-world bot/human games of the future, I hope they have good helmets.

#23 Desir – 2026

Another area where I thought Hon was really onto something was puppets. Seriously. AI is indeed getting better all the time, even if Siri or customer-service bots can be so frustrating, but it is likely some time out before bots show anything like the full panoply of human interaction imagined in the film Her. There is a mid-point here, though, and that’s having human beings remotely control the bots, becoming their puppeteers.

Hon imagines this in the realm of prostitution. A company called Desir essentially uses very sophisticated sex dolls as puppets controlled by experienced prostitutes. The weaknesses of AI give human beings something to do. As he quotes Desir’s imaginary founder:

Our agent AI is pretty good as it is, but like I said, there’s nothing that beats the intimate connection that only a real human can make. Our members are experts and they know what to say, how to move and how to act better than our own AI agents, so I think that any members who choose to get involved in puppeting will supplement their income pretty nicely

#26 Amplified Teams – 2027

One thing I really liked about A History of the Future is that it puts flesh on the bones of an idea developed by the economist Tyler Cowen in his book Average is Over (review pending): that employment in the 21st century won’t eventually all be swallowed up by robots, but that the highest earners, or even just those able to economically sustain themselves, will work in teams connected to the increasing capacity of AI. Such are Hon’s “amplified teams,” which, he writes:

…usually have three to seven human members supported by highly customized software that allows them to communicate with one another- and with AI support systems- at an accelerated rate.

I’m crossing my fingers that somebody invents a bot for introverts- or is that a contradiction?

#39 Micromort Detector – 2032

Hon foresees our aging population becoming increasingly consumed with mortality and almost obsessive-compulsive about measurement as a means of combating anxiety. Hence his idea of the “micromort detector.”

A micromort is a unit of risk representing a one-in-a-million chance of death.

Mutual Assurance is a company that tried to springboard off this anxiety with its product “Lifeline,” a device for measuring the mortality risk of any behavior, the hope being both to improve healthy living and, more important for the company, to accurately price insurance premiums. Drink a cup of coffee: get a score. Eat a doughnut: score.
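The arithmetic behind such a device is simple enough to sketch. As a hedged illustration (the activity figures below are rough, commonly cited ballpark numbers, not actuarial data, and the function names are my own), converting a probability of death into micromorts is just division by one in a million:

```python
# 1 micromort = a one-in-a-million chance of death.
MICROMORT = 1e-6

def to_micromorts(probability_of_death: float) -> float:
    """Express a probability of death as a count of micromorts."""
    return probability_of_death / MICROMORT

# Illustrative (not actuarial) risk figures for one-off activities.
activities = {
    "skydiving jump": 8e-6,
    "general anesthesia": 5e-6,
    "motorcycle ride (60 miles)": 1e-5,
}

for name, p in activities.items():
    print(f"{name}: {to_micromorts(p):.0f} micromorts")
```

The unit’s appeal, and presumably the Lifeline’s pitch, is that it puts a cup of coffee and a skydive on the same one-dimensional scale, which is also exactly why, as Hon imagines, it flattens away the individual variation that matters.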

The problem with the Lifeline was that it wasn’t particularly accurate, due to individual variation, and the idea that the road to everything was paved in the 1s and 0s of data became passé. The Lifeline did, however, sometimes cause people to pause and reflect on their own mortality:

And that’s perhaps the most useful thing that the Lifeline did. Those trying to guide their behavior were frequently stymied, but that very effort often prompted a fleeting understanding of mortality and caused more subtle, longer- lasting changes in outlook. It wasn’t a magical device that made people wiser- it was a memento mori.

#56 Shanghai Six – 2036

As part of the gaming world, Hon has some really fascinating speculations on the future of entertainment. With Shanghai Six he imagines a mashup of alternate-reality games such as his own Zombies, Run! and the massive role-playing found in events such as historical reenactments, combined with aspects of reality television and all rolled up into the drama of film. Shanghai Six is a 10,000-person global drama with actors drawn from the real world. I’d hate to be the film crew’s gofer.

#63 Javelin – 2040

A History of the Future also has some rather interesting things to say about the future of human enhancement. The transition begins with the Paralympians, who by the 2020s are able to outperform typical human athletes by a large measure.

The shift began in 2020, when the International Paralympic Committee (IPC) staged a technology demonstration….

The demonstration was a huge success. People had never before seen such a direct combination of technology and raw human will power outside of war, and the sponsors were delighted at the viewing figures. The interest, of course, lay in marketing their expensive medical and lifestyle devices to the all- important Gen-X and Millennial markets, who were beginning to worry about their mobility and independence as they grew older.

There is something of the Daytona 500 about this, sports becoming as much about how good the technology is as about the excellence of the athlete. And all sports do indeed seem headed this way. The barrier now is that technological and pharmaceutical assists for the athlete are seen not as a way to take human performance to its limits but as a form of cheating. Yet once such technologies become commonplace, Hon thinks it unlikely that such distinctions will prove sustainable:

By the 40s and 50s, public attitudes towards mimic scripts, lenses, augments and neural laces had relaxed, and the notion that using these things would somehow constitute ‘cheating’ seemed outrageous. Baseline non-augmented humans were becoming the minority; the Paralympians were more representative of the real world, a world in which everyone was becoming enhanced in some small or large way.

It was a far cry from the Olympics. But then again, the enhanced were a far cry from the original humans.

#70 The Fourth Great Awakening – 2044

Hon employs something like Nassim Taleb’s idea that one of the best ways to catch the shadow of the future isn’t to guess what will be new, but to have a good idea of what will still be around. The best indication we have that something will exist in the future is how long it has existed in the past; long life proves evolutionary robustness under a variety of circumstances. Families have been around since our beginnings and will therefore likely exist for a long time to come.

Things that exist for a long time aren’t unchanging but flexible in a way that allows them to find expression in new forms once the old ways of doing things cease working.

Hon sees our long-lived desire for communal eating surviving in his #25 The Halls (2027), where people gather and mix in collectively shared kitchen/dining establishments.

Halls speak to our strong need for social interaction, and for the ages-old idea that people will always need to eat- and they’ll enjoy doing it together.

He also sees reading surviving, in a world even more media- and distraction-saturated, in something like dedicated, seclusionary Reading Rooms (#34, 2030). And he sees the survival of one of the oldest human institutions, religion, only a religion become much more centered on worldliness, one that leverages advances in neuroscience to foster, depending on your perspective, either virtue or brainwashing. Thus we have Hon’s imagined Fourth Great Awakening and the Christian Consummation Movement.

If I use the eyedrops, take the pills, and enroll in their induction course of targeted viruses and magstim- which I can assure you I am not about to do- then over the next few months, my personality and desires would gradually be transformed. My aggressive tendencies would be lowered. I’d readily form strong, trusting friendships with the people I met during this imprinting period- Consummators, usually. I would become generally more empathetic, more generous and “less desiring of fleeting, individual and mundane pleasures” according to the CCM.

It is social conditions that Hon sees driving the creation of something like the CCM, namely mass unemployment caused by globalization and especially automation. The idea, again, is very similar to Tyler Cowen’s in Average is Over, but whereas Cowen sees in the rise of neo-Victorianism a lifeboat for a middle class savaged by automation, Hon sees the similar CCM as a way human beings might try to reestablish the meaning they can no longer derive from work.

Hon’s imagined CCM combines some very old and very new technologies:

The CCM understood how Christianity itself spread during the Apostolic Age through hundreds of small gatherings, and accelerated that process by multiple orders of magnitude with the help of network technologies.

And all of that combined with the most advanced neuroscience.

#72 The Downvoted – 2045

Augmented-reality devices such as Google Glass should let us see the world in new ways, but just as important might be what they allow us not to see. From this Hon derives his idea of “downvoting,” essentially the choice to redact from reality individuals the group has deemed worthless.

“They don’t see you,” he used to say. “You are completely invisible. I don’t know if it was better or worse before these awful glasses, when people just pretended you didn’t exist. Now I am told that there are people who literally put you out of their sight, so that I become this muddy black shadow drifting along the pavement. And you know what? People will still downvote a black shadow!”

I’ll leave off at Hon’s world circa 2045, but he has a lot else to say about everything from democracy, to space colonies, to the post-21st-century future of AI. Somehow Hon’s patchwork of imagined artifacts of the future allows him to sew together a quilt of the century before us in a very clear pattern. What is that pattern?

That out in front of us the implications of continued miniaturization, networking, algorithmization, AI, and advances in neuroscience and human enhancement will continue to play themselves out. This has bright sides and dark sides, one of the darker being that opportunities for gainful human employment will become rarer.

Trained as a neuroscientist, Hon sees both dangers and opportunities as advances in neuroscience make the human brain, once firmly sealed in the black box of the skull, permeable. There will be opportunities for abuse by the state or by groups with nefarious intents, but there will also be opportunities for enriched human cooperation and even art.

All fascinating stuff, but it was what he had to say about the future of entertainment and the arts that I found most intriguing. As CEO of Six-to-Start he has his finger on the pulse of entertainment in a way I do not. In the near future, Hon sees a blurring of the lines between gaming, role-playing, film, and television, along with extraordinary changes in the ways we watch and play sports.

As for the arts: here where I live in Pennsylvania we are constantly bombarded with messages that our children need to be trained in STEM (science, technology, engineering, and mathematics). This is often to the detriment of programs in “useless” liberal arts such as history, and most of all of art programs, whose budgets have been consistently whittled away. Hon showed me a future in which artists and actors, or more broadly people who have had exposure to the arts through schooling, may be among the few groups able to avoid, at least for a time, the onset of AI-driven automation. Puppeteering of various sorts would seem a likely transitional phase between “dead” humanoid robots and true, fully human-like AI. This isn’t just a matter of the lurid future of prostitution; it extends to remote nursing, health care, and psychotherapy. Engineers and scientists will bring us the tools of the future, but it is those with humanistic “soft skills” who will be needed to keep that future livable, humane, and interesting.

We see this in another of A History of the Future’s underlying perspectives: that much of the struggle of the future will be about keeping it a place human beings are actually happy to live in, and that doing so will rely on tools of the past, or on finding protective bubbles through which the things we now treasure can survive in the new reality we are building. Hence Hon’s dining halls and reading rooms, and, even more generally, his view that people will continue to search for meaning, sometimes turning to one of our most ancient technologies, religion, to do so.

Yet perhaps what Hon has most given us in The History of the Future is less a prediction than a kind of game we too can play, one that helps us see the outlines of the future- after all, game design is his thing. Perhaps I’ll try to play the game myself sometime soon…

 

How to predict the future with five simple rules

Caravaggio Fortune Teller

When it comes to predicting the future, it seems only our failure to consistently get tomorrow right has been steadily predictable, though that may be about to change, at least a little bit. If you don’t think our society, and especially those “experts” whose job it is to help steer us through the future, is doing a horrible job, just think back to the fall of the Soviet Union, which blindsided the American intelligence community, or 9/11, which did the same, or the financial crisis of 2008, or, much more recently, the Arab Spring, the rise of ISIL, or the war in Ukraine.

Technological predictions have generally been as bad or worse; as proof, just take a look at any movie or now-yellowed magazine from the 1960s and 70s about the “2000s”. People have put a lot of work into imagining the future but, fortunately or unfortunately, we haven’t ended up living there.

In 2005 a psychologist, Philip Tetlock, set out to explain our widespread lack of success at playing Nostradamus. Tetlock was struck by the colossal intelligence failure the collapse of the Soviet Union had unveiled. Policy experts, people who had spent their entire careers studying the USSR and arguing for this or that approach to the Cold War, had almost universally failed to foresee a future in which the Soviet Union would unravel overnight, let alone one in which such a massive social and geopolitical deconstruction would occur relatively peacefully.

Tetlock, in his book Expert Political Judgment: How Good Is It? How Can We Know?, made the case that a good deal of the predictions made by experts were no better than those of the proverbial dart-throwing chimp- that is, no better than chance, or, for the numerically minded among you, 33 percent on his three-outcome questions. The most frightening revelation was that often the more knowledge a person had of a subject, the worse rather than the better their predictions became.

The reasons Tetlock discovered for the predictive failure of experts were largely psychological and all too human. Experts, like the rest of us, hate to be wrong, and have a great deal of difficulty assimilating information that fails to square with their worldview. They tend to downplay or misremember their past predictive failures, and deal in squishy predictions without clearly defined probabilities- qualitative statements that are open to multiple interpretations. Most of all, there seems to be no real penalty for getting predictions wrong, even horribly wrong: policy makers and pundits who predicted a quick and easy exit from Iraq, or Dow 36,000 on the eve of the crash, go right on working with a “the world is complicated” shrug of the shoulders.
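One remedy for squishy predictions is to demand explicit probabilities and then score them after the fact. The standard scoring rule, the one Tetlock’s later forecasting tournaments rely on, is the Brier score- the mean squared error between a probabilistic forecast and what actually happened. A minimal sketch, with all forecasts invented for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and outcomes
    (1 = the event occurred, 0 = it did not).
    0.0 is a perfect score; a permanent 50/50 hedge earns 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: a confident pundit vs. a perpetual hedger,
# scored against three events, of which the first two occurred.
pundit = brier_score([0.9, 0.8, 0.9], [1, 1, 0])  # confident, one big miss
hedger = brier_score([0.5, 0.5, 0.5], [1, 1, 0])  # never commits

print(round(pundit, 3))  # 0.287
print(round(hedger, 3))  # 0.25
```

Note how the single confident miss costs the pundit more than the hedger’s fence-sitting- exactly the kind of accountability that qualitative, multiply-interpretable predictions let experts escape.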

What’s weird, I suppose, is that while getting the future horribly wrong has no effect on career prospects, getting it right, especially when the winds had been blowing in the opposite direction, seems to shoot a person into the pundit equivalent of stardom. Nouriel Roubini, or “Dr. Doom”, was laughed at by some of his fellow economists when he was predicting the implosion of the banking sector well before Lehman Brothers went belly up. Afterwards he was inescapable, a common image being the man, whose visage puts one in mind of Vlad the Impaler, or better Gene Simmons, surrounded by four or five ravishing supermodels.

I can’t help thinking there’s a bit of Old Testament-style thinking sneaking in here- as if the person who correctly predicted disaster were so much smarter, or so tuned into the zeitgeist, that the future is now somehow magically at their fingertips. Whereas the reality might be that they simply had the courage to speak up against the “wisdom” of the crowd.

Getting the future right is not a matter of intelligence, but it probably is a matter of thinking style. At least that’s where Tetlock, to return to him, has gone with his research. He has launched the Good Judgment Project, which hopes to train people to be better predictors of the future. You too can make better predictions if you follow these five rules:

  • Comparisons are important: use relevant comparisons as a starting point;
  • Historical trends can help: look at history unless you have a strong reason to expect change;
  • Average opinions: experts disagree, so find out what they think and pick a midpoint;
  • Mathematical models: when model-based predictions are available, you should take them into account;
  • Predictable biases exist and can be allowed for. Don’t let your hopes influence your forecasts, for example; don’t stubbornly cling to old forecasts in the face of news.
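As a rough sketch, the first four rules amount to blending several independent estimates rather than trusting any single one, with the fifth correcting for bias. Every number below is invented purely for illustration:

```python
# Toy forecast of some binary event, following the five rules above.
base_rate = 0.20              # rules 1 & 2: how often comparable events happened
experts = [0.60, 0.30, 0.45]  # rule 3: disagreeing expert opinions
model = 0.35                  # rule 4: output of some statistical model

expert_avg = sum(experts) / len(experts)        # ≈ 0.45, the experts' midpoint
blended = (base_rate + expert_avg + model) / 3  # equal-weight blend of estimates

# Rule 5: shrink the blend toward the historical base rate
# to counter over-enthusiasm about change.
weight = 0.7
forecast = weight * blended + (1 - weight) * base_rate
print(round(forecast, 3))  # 0.293
```

The equal weights and the 0.7 shrinkage factor are arbitrary choices for the sketch; the point is only that the final number leans on comparisons, history, averaged opinion, and a model at once, rather than on any one confident voice.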

One thing that surprises me about these rules is that apparently policy makers and public intellectuals weren’t following them in the first place. Obviously we need to do our best to make predictions while minimizing cognitive biases, and we should certainly be able to do better than the chimps, but we should also be aware of our limitations. Some aspects of the future are inherently predictable- demographics works like this: barring some disaster, we have a pretty good idea what the human population will be at the end of this century and how it will be distributed. Or at least demographics should be predictable, though sometimes we just happen to miss 3 billion future people.

For the things on our immediate horizon we should be able to give pretty good probabilities, and those should be helpful to policy makers. It has a definite impact on behavior whether we think there is a 25 percent probability that Iran will detonate a nuclear weapon by 2017 or a 75 percent chance. But things start to get much more difficult the further out into the future you go, or when you’re dealing with the interaction of events, some of which you didn’t even anticipate- Donald Rumsfeld’s infamous “unknown unknowns”.

It’s the centenary of World War I, so that’s a relevant example. Would the war have happened had Gavrilo Princip’s assassination attempt on Franz Ferdinand failed? Maybe. But even had the war only been delayed, the timing would most likely have affected its outcome. Perhaps the Germans would have been better prepared and so swiftly defeated the Russians, French and British that the US would never have become involved. It may be that German dominance of Europe was preordained, and therefore, in a sense, predictable, as it seems to be asserting itself even now, but it makes more than a world of difference, not just in Europe but beyond, whether that dominance came in the form of the Kaiser, or Hitler, or, thank God, Merkel and the EU.

History is so littered with accidents, chance encounters, and fateful deaths and births that it seems pretty unlikely we’ll ever get a tight grip on the future, which in all likelihood will be as subject to these contingencies as the past (Tetlock’s second rule). The best we can probably do is hone our judgment, as Tetlock argues; plan for what seems inevitable given trend lines; secure ourselves as well as possible against the most catastrophic scenarios based on their probability of destruction (Nick Bostrom’s view); and design institutions that are resilient and flexible in the face of an extremely varied set of possible scenarios (the position of Nassim Taleb).

None of this, however, gives us a graspable and easily understood view of how the future might hang together. Science fiction, for all its flaws, is perhaps alone in doing that. The writer and game developer Adrian Hon may have discovered a new, and has certainly produced an insightful, way of practicing the craft. To his recent book I’ll turn next time…

 

Can Machines Be Moral Actors?

Tin Man by Greg Hildebrandt


Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science fiction: is it possible to make moral machines, or in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one’s chances of tenure at risk, but as the revolution in robotics has rolled forward it has become an issue we must grapple with, and now.

Perhaps the most surprising thing to emerge from the effort to think through what moral machines might mean has been less what it has revealed about our machines or their potential than what it has brought into relief about our own very human moral behavior and reasoning. But I am getting ahead of myself; let me begin at the beginning.

Step back for a second and think about the largely automated systems that surround you right now, not in any hypothetical future with our version of Rosie from the Jetsons, or R2D2, but in the world in which we live today. The car you drove in this morning and the computer you are reading this on were brought to you in part by manufacturing robots. You may have stumbled across this piece via the suggestion of a search algorithm programmed, but not really run, by human hands. The electrical system supplying the computer or tablet on which you are reading is largely automated, as are the actions of the algorithms that are now trying to grow your 401K or pension fund. And while we might not be able to say what this automation will look like ten or twenty years down the road, we can nearly guarantee that, barring some catastrophe, there will only be more of it- and given the lag between software and hardware development, this will be the case even should Moore’s Law completely stall out on us.

The question ethicists are asking isn’t really whether we should build an ethical dimension into these types of automated systems, but whether we can, and what doing so would mean for our own status as moral agents. Would building AMAs mean a decline in human moral dignity? Personally, I can’t really say I have a dog in this fight, but hopefully that will help me see the views of all sides with more clarity.

Of course, the most ethically fraught area in which AMAs are being debated is warfare. There seems to be an evolutionary pull to deploy machines in combat with greater and greater levels of autonomy. The reason is simple: advanced nations want to minimize risks to their own soldiers, and remote-controlled weapons are at risk of being jammed and cut off from their controllers. Thus, machines need to be able to act independently of human puppeteers. A good case can be made, though I will not try to make it here, that building machines that can make autonomous decisions to actually kill human beings would constitute the crossing of a threshold humanity would have preferred to remain on this side of.

Still, while it’s typically a bad idea to attach one’s ethical precepts to what is technically possible, here it may serve us well, at least for now. The best case to be made against fully autonomous weapons is that artificial intelligence is nowhere near being able to make ethical decisions regarding the use of force on the battlefield, especially on mixed battlefields with civilians- which is what most battlefields are today. As Guarini and Bello argue in “Robotic Warfare: some challenges in moving from non-civilian to civilian theaters”, current forms of AI are not up to the task of threat ascription, can’t make mental attributions, and are unable to tackle the problem of isotropy, a fancy word that just means “the relevance of anything to anything”.

This is a classic problem of error through correlation, something that seems inherent not just in these types of weapons systems, but in the kinds of specious correlations made by big-data algorithms as well. It is the difficulty of gauging a human being’s intentions from the pattern of his or her behavior. Is that child holding a weapon or a toy? Is that woman running towards me because she wants to attack, or is she merely frightened and wants to act as a shield between me and her kids? This difficulty in making accurate mental attributions doesn’t need to revolve around life and death situations. Did I click on that ad because I’m thinking about switching to light beer, or because I thought the woman in the ad looked cute? Too strong a belief in correlation can start to look like belief in magic- thinking that drinking the light beer will get me a reasonable facsimile of the girl smiling at me on the TV.

Algorithms and the machines they run are still pretty bad at this sort of attribution- bad enough, at least right now, that there’s no way they can be deployed ethically on the battlefield. Unfortunately, however, the debate about ethical machines often gets lost in the necessary, but much narrower, fight over “killer robots”- this despite the fact that war will make up only a small part of the ethical remit of our new and much more intelligent machines.

One might think that Asimov already figured all this out with his Three Laws of Robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The problem is that the types of “robots” we are talking about bear little resemblance to the ones dreamed up by the great science-fiction writer, even if Asimov himself mainly used the laws as a device for exploring the tensions between them.
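The laws form a strict priority ordering, which is easy enough to render in code. A toy sketch, with hypothetical action labels- the hard part, as everything above suggests, is supplying the judgments the boolean tags below simply take for granted:

```python
def choose(actions):
    """Pick an action compatible with Asimov's Three Laws, in priority order.
    Each action is tagged with judgments (harm, obedience, self-risk) that
    real machines are, so far, very bad at making."""
    # First Law: discard anything that injures a human being.
    safe = [a for a in actions if not a["harms_human"]]
    if not safe:
        return None
    # Second Law: among safe actions, prefer those obeying a human order.
    obedient = [a for a in safe if a["ordered"]]
    pool = obedient or safe
    # Third Law: among those, prefer self-preservation.
    preserving = [a for a in pool if not a["self_destructive"]]
    return (preserving or pool)[0]

# Hypothetical options for a robot ordered to protect a civilian:
options = [
    {"name": "fire",    "harms_human": True,  "ordered": True,  "self_destructive": False},
    {"name": "retreat", "harms_human": False, "ordered": False, "self_destructive": False},
    {"name": "shield",  "harms_human": False, "ordered": True,  "self_destructive": True},
]
print(choose(options)["name"])  # shield: obedience outranks self-preservation
```

The sketch works only because every dilemma here arrives pre-labeled; as the next paragraph argues, today’s real problems are precisely the ones where no such clean labels exist.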

The problems robotics ethicists face today have more in common with the “Trolley Problem”, where a machine has to decide between competing goods rather than contradictory imperatives. A self-driving car might be faced with a choice between hitting a pedestrian and hitting a school bus. Which should it choose? If your health care services are being managed by an algorithm, what are its criteria for allocating scarce resources?

An approach to solving these types of problems might be to consistently use one of the major moral philosophies- Kantian, utilitarian, virtue ethics and the like- as a guide for the types of decisions machines may make. Kant’s categorical imperative of “acting only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction,” or aiming for utilitarianism’s “greatest good of the greatest number”, might seem like a good approach, but as Wendell Wallach and Colin Allen point out in their book Moral Machines, such seemingly logically straightforward rules are for all practical purposes incalculable.

Perhaps if we ever develop real quantum computers capable of scanning through every possible universe we would have machines that can solve these problems, but for the moment we do not. Oddly enough, the moral philosophy most dependent on human context, virtue ethics, seems to have the best chance of being substantiated in machines, but the problem is that virtues are highly culture-dependent. We might get machines that make vastly different and opposed ethical decisions depending on the culture in which they are embedded.

And here we find perhaps the biggest question that has been raised by trying to think through the problem of moral machines. For if machines are deemed incapable of using any of the moral systems we have devised to come to ethical decisions, we might wonder whether human beings really can either. This is the case made by Anthony Beavers, who argues that the very attempt to make machines moral might be leading us towards a sort of ethical nihilism.

I’m not so sure. Or rather, as long as our machines, whatever their degree of autonomy, merely reflect the will of their human programmers, I think we’re safe declaring them moral actors but not moral agents, and this should save us from slipping down the slope of ethical nihilism Beavers is rightly concerned about. In other words, I think Wallach and Allen are wrong in asserting that intentionality doesn’t matter when judging whether an entity is a moral agent. Or as they say:

“Functional equivalence of behavior is all that can possibly matter for the practical issues of designing AMAs.” (Emphasis theirs, p. 68)

Wallach and Allen here repeat the same behaviorist mistake made by Alan Turing in his assertion that consciousness didn’t matter when it came to the question of intelligence, a blind alley they follow so far as to propose their own version of the infamous test- what they call a Moral Turing Test, or MTT. Yet it’s hard to see how we can grant full moral agency to any entity that has no idea what it is actually deciding- indeed, one that would just as earnestly do the exact opposite, such as deliberately targeting women and children, if its program told it to do so.

Still, for the immediate future, and with the frightening and sad exception of robots on the battlefield, the role of moral machines seems less likely to involve making ethical decisions themselves than helping human beings make them. In Moral Machines Wallach and Allen discuss MedEthEx, an amazing algorithm that helps healthcare professionals with decision making. It’s an exciting time for moral philosophers, who will get to turn their ideas into all sorts of apps and algorithms that will hopefully aid moral decision making for individuals and groups.

Some might ask: is leaning on algorithms to help one make moral decisions a form of “cheating”? I’m not sure, but I don’t think so. A moral app reminds me of a story I heard once about George Washington that has stuck with me ever since. Washington, from the time he was a small boy into old age, carried a little pocket virtue guide with him, filled with all kinds of rules of thumb for how to act. Some might call that a crutch, but I just call it a tool, and human beings are the only animal that can’t thrive without its tools. The very moral systems we are taught as children might equally be thought of as tools of this sort. Washington simply had a way of keeping them near at hand.

The trouble arises when one becomes too dependent on such “automation” and in the process loses the underlying capability once the tool is removed. Yet this is a problem inherent in all technology. The ability of people to do long division atrophies because they always have a calculator at hand. I have a heck of a time making a fire without matches or a lighter, and would have difficulty dealing with cold weather without clothes.

It’s certainly true that our capacity to be good people is more important to our humanity than our ability to do math in our heads or on scratch paper, or even to make fire or protect ourselves from the elements, so we’ll need to be especially conscious, as moral tools develop, to keep our natural ethical skills sharp.

This again applies more broadly than the issue of moral machines, but we also need to be able to better identify the assumptions that underlie the programs running in the instruments that surround us. It’s not so much “code or be coded” as the ability to unpack the beliefs which programs substantiate. Perhaps someday we’ll even have something akin to food labels on the programs we use that will inform us exactly what those assumptions are. In other words, we have a job in front of us, for whether they help us or hinder us in our goal to be better people, machines remain, for the moment at least, mere actors- a reflection of our own capacity for good and evil rather than moral agents in themselves.

 

City As Superintelligence

Medieval Paris

A movement is afoot to cover some of the largest and most populated cities in the world with a sophisticated array of interconnected sensors, cameras, and recording devices, able to track and respond to every crime or traffic jam, every crisis or pandemic, as if it were an artificial immune system spread out over hundreds of densely packed kilometers filled with millions of human beings. The movement goes by the name of smart-cities, or sometimes sentient cities, and the fate of the project is intimately tied to the fate of humanity in the 21st century and beyond, because the question of how the city is organized will define the world we live in from here forward- the beginning of the era of urban mankind.

Here are just a few of many possible examples of smart-cities at work. There is the city of Songdo in South Korea, a kind of testing ground where companies such as Cisco can experiment with integrated technologies such as, to quote a recent article on the subject, its:

TelePresence system, an advanced videoconferencing technology that allows residents to access a wide range of services including remote health care, beauty consulting and remote learning, as well as touch screens that enable residents to control their unit’s energy use.

Another example would be IBM’s Smart City Initiative in Rio, which has covered that city with a dense network of sensors and cameras allowing centralized monitoring and control of vital city functions, and which was somewhat brazenly promoted by that city’s mayor during a TED Talk in 2012. New York has set up a similar system, but it is in the non-Western world that smart-cities will live or die, because that is where almost all of the world’s increasingly rapid urbanization is taking place.

Thus India, which has yet to urbanize like its neighbor, and sometimes rival, China, has plans to build up to 100 smart cities, with 4.5 billion dollars of the funding for such projects being provided by perhaps the most urbanized country on the planet- Japan.

China continues to urbanize at a historically unprecedented pace, with 250 million of its people- the equivalent of the entire population of the United States a generation ago- set to move to its cities in the next 12 years. (I didn’t forget a zero.) There you have a city few of us have even heard of- Chongqing- which The Guardian several years back characterized as “the fastest growing urban center on the planet”, with more people in it than the entire countries of Peru or Iraq. No doubt in response to urbanization pressure, and at least as of 2011, Cisco was helping that city with its so-called Peaceful Chongqing Project, an attempt to blanket the city in 500,000 video surveillance cameras- a collaboration that was possibly derailed by Edward Snowden’s revelations that the NSA had infiltrated or co-opted U.S. companies.

Yet there are other smart-city initiatives that go beyond monitoring technologies. Under this rubric should fall the renewed interest in arcologies- massive buildings that contain within them an entire city, and thus in principle allow a city’s climate, flows, and so on to be managed in the same way the internal environment of a skyscraper can be. China had an arcology in the works at Dongtan, which appears to have been scrapped over corruption and cost-overrun concerns. Abu Dhabi has its green arcology in Masdar City, but it’s in Russia, in the grip of a 21st-century version of czarism, of all places, where the mother of all arcologies is planned: architect Norman Foster’s Crystal Island, which, if actually built, would be the largest structure on the planet.

On the surface, there is actually much to like about smart-cities and their related arcologies. Smart-cities hold out the promise of greater efficiency for an energy-starved and warming world. They should allow city management to be more responsive to citizens. All things being equal, smart-cities should be better than “dumb” ones at responding to everything from common fires and traffic accidents to major man-made and natural disasters. If Wendy Orent is correct, as she argued in a recent issue of Aeon, that we have less to fear from pandemics emerging from the wilderness, such as Ebola, than from those that evolve in areas of extreme human density, smart-city applications should make the response to pandemics both quicker and more effective.

Especially in terms of arcologies, smart-cities represent something relatively new. We’ve had our two major models of the city since the early-to-mid 20th century: the skyscraper cities pioneered by New York and Chicago, and the cul-de-sac suburban sprawl of automobile-dominated cities like Phoenix. Cities going up now in the developing world certainly look more modern than American cities, much of whose infrastructure is in a state of decay, but the model is the same, with the marked exception of all those super-trains.

All that said, there are problems with smart-cities, and the thinker who has written most extensively on the subject, Anthony M. Townsend, lays them out excellently in his book Smart Cities: Big Data, Civic Hackers and the Quest for a New Utopia. Townsend sees three potential problems with smart-cities- they might prove, in his terms, “buggy, brittle, and bugged”.

Like all software, the kind that will be used to run smart-cities might exhibit unwanted surprises. We’ve seen this in some of the most sophisticated software we have running: financial trading algorithms whose “flash crashes” have felled billion-dollar companies.

The loss of money, even a great deal of money, is something any reasonably healthy society should be able to absorb, but what if buggy software made essential services go offline over an extended period? Cascades through services now coupled by smart-city software could take out electricity and communication in a heat wave or a re-run of last winter’s “polar vortex” and lead to loss of life. Perhaps having functions separated in silos, with a good deal of redundancy, even at the cost of inefficiency, is “smarter” than having them tightly coupled and under one system. That is, after all, how the human brain works.

Smart-cities might also be brittle. We might not be able to see that we had built a precarious architecture that could collapse in the face of some stressor or intentional effort to harm- ahem- Windows. Computers crash, and sometimes do so for reasons we are completely at a loss to identify. Or imagine someone blackmailing a city by threatening to shut it down after having hacked its management system. Old-school dumb-cities don’t really crash, even if they can sicken and die, and it’s hard to say they can be hacked.

Would we be in danger of decreasing a city’s resilience by compressing its complexity into an algorithm? If something like Stephen Wolfram’s principles of computational equivalence and computational irreducibility is correct, then the city is already a kind of computation, and no model we can create of it will ever be more effective than this natural computation itself.

Or, to make my meaning clearer, imagine a person who had suffered some horrible accident such that, to save him, you had to replace all of his body’s natural information processing with a computer program. Such a program would have to regulate everything from breathing to metabolism to muscle movement, along with the immune system and the exchanges between neurons, not to mention a dozen other things. My guess is that you’d have to go through many generations of such programs before they were anywhere near workable- that the first generations would miss important elements, be based on wrong assumptions about how things worked, and be loaded with perhaps catastrophic design errors that you couldn’t identify until the program had been fully run through multiple iterations.

We are blissfully unaware that we are the product of billions of years of “engineering” in which “design” failures were weeded out by evolution. Cities have only a few thousand years of a similar type of evolution behind them, but trying to control a great number of their functions via algorithms run by “command centers” might pose risks similar to those in my body example. Reducing city functions to something we can compute in silicon might oversimplify the city in such a way as to reduce its resilience to stressors cities have naturally evolved to absorb. There is, in all use of “Big Data”, a temptation to interpret reality only in light of the model that scaffolds the data, or to reframe problems in ways that can be mathematically modeled. We set ourselves up for crises when we confuse the map with the territory, or, as Jaron Lanier said:

What makes something real is that it is impossible to represent it to completion.

Lastly, and as my initial examples of smart-cities should have indicated, smart-cities are by design bugged. They are bugged so as to surveil their citizens in an effort to prevent crime or terrorism, or even just to respond to accidents and disasters. Yet the promise of safety comes at the cost of one of the virtues of city living- the freedom granted by anonymity. And even if we care nothing for such things, I’ve got news: trading privacy for security doesn’t even work.

Chongqing may spend tens of millions of dollars installing CCTV cameras, but would-be hooligans, criminals, or just people who don’t like being watched, such as those in London, have a twenty-dollar answer to all these gizmos- it’s called a hoodie. Likewise, a simple pattern of dollar-store face paint, strategically applied, can short-circuit the most sophisticated facial recognition software. I will never cease to be amazed at human ingenuity.

We need to acknowledge that it is largely companies and individuals with extremely deep pockets, and even deeper political connections, that are promoting this model of the city. Townsend estimates it is potentially a 100-billion-dollar business. We need to exercise our historical memory and recall how it was automobile companies that lobbied for, and ended up creating, our world of sprawl. Before investing millions or even billions, cities need to have an idea of what kind of future they want and not be swayed by the latest technological trends.

This is especially the case for cities in the developing world, where conditions often resemble something out of Dickens’s 19th century more than even the 20th. When I enthusiastically asked a Chinese student about the arcology at Dongtan, he responded with something like: “Fools! Who would want to live in such a thing! It’s a waste of money. We need clean air and water, not such craziness!” And he’s no doubt largely right. And perhaps we might be happy that the project ultimately unraveled, and say with Thoreau:

As for your high towers and monuments, there was a crazy fellow once in this town who undertook to dig through to China, and he got so far that, as he said, he heard the Chinese pots and kettles rattle; but I think that I shall not go out of my way to admire the hole which he made. Many are concerned about the monuments of the West and the East — to know who built them. For my part, I should like to know who in those days did not build them — who were above such trifling.

The age of connectivity, if it’s done thoughtfully, could bring us cities that are cleaner, greener, and more able to deal with the shocks of disaster or respond to the spread of disease. Truly smart cities should support a more active citizenry, a less tone-deaf bureaucracy, and a more socially and culturally rich, entertaining, and civil life- the very reasons human beings have chosen to live in cities in the first place.

If the age of connectivity is done wrong, cities will have poured scarce resources down a hole of corruption as deep as the one dug by Thoreau’s townsman, will have turned vibrant cultural and historical environments into corporate “flat-pack” versions of tomorrowland, and, most frighteningly of all, will have turned the potential democratic agora of the city into a massive panopticon of Orwellian monitoring and control. The city, in the Wolfram rather than the Bostrom sense, is already a sort of superintelligence- or better, a hive mind of interconnected yet free individuals- more vibrant and important than any structure human beings could build. Will we let our cities stay that way?

Why archaeologists make better futurists than science-fiction writers

Astrolabe vs iPhone

Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, and tried to use the regularity, or lack of it, in the night sky as a projection of the future and an omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge is biased towards the present and the past. Survival means seeing far enough ahead to avoid dangers, so an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.

It’s striking how many of the ancient techniques of divination have a probabilistic element to them, as if the best tool for discerning the future were a magic 8 ball, though perhaps that makes a great deal of sense. Probability and chance come into play wherever our knowledge comes up against limits, and these limits can consist merely of knowledge we are not privy to, as in the game of “He loves me, he loves me not” little girls play with flower petals. As Henri Poincaré said: “Chance is only the measure of our ignorance.”

Poincaré was coming at the question with the assumption that the future is already determined. Many physicists likewise think our knowledge bias towards the present and past is, like the game of “he loves me, he loves me not,” all in our heads: the future already exists in the same sense as the past and present; it is just a set of facts we can only dimly see. It shouldn’t surprise us that they say this, physics being our most advanced form of divination, and just one among many modern forms, from meteorology to finance to genomics. Our methods of predicting the future are more sophisticated and effective than those of the past, but they still only barely see through the fog in front of us, which doesn’t stop us from being overconfident in our predictive prowess. We even have an entire profession, futurists, who claim, like old-school fortune tellers, to have a knowledge of the future no one can truly possess.

There’s another group that claims the ability to divine at least some rough features of the future: science-fiction writers. The idea of writing about a future substantially different from the past didn’t really emerge until the 19th century, when technology began not only making the present substantially different from the past, but promised to keep doing so out into an infinite future. Geniuses like Jules Verne and H.G. Wells were able to look at the industrial revolution changing the world around them and project it forward in time with sometimes extraordinary prescience.

There is a good case to be made, and I’ve tried to make it before, that science-fiction is no longer very good at this prescience. The shape of the future is now occluded, and we’ve known the reasons for this for quite some time. Karl Popper pointed out as far back as the 1930s that the future is essentially unknowable, in part because the course of scientific and technological advancement cannot be known in advance. Too tight a fit with wacky predictions about the future of science and technology is characteristic of the worst and campiest science-fiction.

Another good argument can be made that technologically dependent science-fiction isn’t so much predicting the future as inspiring it: the children raised on Star Trek’s communicator end up inventing the cell phone. Yet technological horizons can be atrophied as much as energized by science-fiction. This is the point Robinson Meyer makes in his blog post “Google Wants to Make ‘Science Fiction’ a Reality—and That’s Limiting Their Imagination”.

Meyer looks at the infamous Google X, Google’s supposedly risky research arm, one of whose criteria for embarking on a project is that it must utilize “a radical solution that has at least a component that resembles science fiction.”

The problem Meyer finds with this is:

 ….“science fiction” provides but a tiny porthole onto the vast strangeness of the future. When we imagine a “science fiction”-like future, I think we tend to picture completed worlds, flying cars, the shiny, floating towers of midcentury dreams.

We tend, in other words, to imagine future technological systems as readymade, holistic products that people will choose to adopt, rather than as the assembled work of countless different actors, which they’ve always really been.

Meyer mentions a recent talk by a futurist I hadn’t heard of before, Scott Smith, who calls these kinds of clean futuristic visions of total convenience “flat-pack futures”: antiseptic worlds surrounded by smart glass where all the world’s knowledge is at our fingertips and everyone is productive and happy, the world of tomorrow ready to be brought home and assembled like a piece of furniture from IKEA.

I’ve always had trouble with these kinds of simplistic futures, which are really just corporate advertising devices rather than real visions of a complex future, one which will inevitably have much ugliness in it, including the ugliness that emerges from the very technology that is supposed to make our lives a paradise. The major technological players are not even offering alternative versions of the future, just the same bland future with different corporate labels, as seen here, here, and here.

It’s not so much that these visions of the future are always totally wrong, or even just unattractive, as that people in the present (meaning us) have no real agency over which elements of these visions they want and which they don’t, with the exception of exercising consumer choice, the very thing these flat-pack visions of the future are trying to get us to do.

Part of the reason Scott Smith sees a mismatch between these anodyne versions of the future, which we’ve had since at least the early 20th century, and what actually happens is that these corporate visions lack diversity and context, not to mention conflict. That shouldn’t really surprise us: they are advertisements after all, geared to high-end consumers, not actual predictions of what the future will in fact look like for the poor, or outcasts, or non-conformists of one sort or another. One shouldn’t expect advertisers to show the bad that might happen should a technology fail, be hacked, or be used for dark purposes.

If you watch enough of them, their more disturbing assumption becomes clear: that by throwing a net of information over everything we will finally have control, and all the world will fall under the warm blanket of our comfort and convenience. Or as Alexis Madrigal put it in a thought-provoking recent piece, also at The Atlantic:

We’re creating a world that seamlessly, effortlessly shapes itself to human desire. It’s no longer cutting through a mountain to prove we dominate nature; now, it’s satisfying each impulse in the physical world with the ease and speed of digital tools. The shortest way to explain what Uber means: hit a button and something happens in the world (that makes life easier for you).

To return to Meyer, his point was that by looking almost exclusively to science-fiction to see what the future “should” look like, designers, technologists, and by implication some futurists, have reduced themselves to a very limited palette. How might this palette be expanded? I would throw my bet behind turning to archaeologists, anthropologists, and certain kinds of historians.

As Steve Jobs understood, in some ways the design of a technology is as important as the technology itself. But Jobs sought an almost formless design, the clean, antiseptic, modernist zen you feel when you walk into an Apple Store. As someone whose attribution I can’t place once said, an Apple Store is designed like a temple. But it’s a temple representative of only one very particular form of religion.

All the flat-pack versions of the future I linked to earlier have one prominent feature in common: they are obsessed with glass. Everything in the world, it seems, will be a see-through touchscreen bubbling with the most relevant information for the individual at that moment: the weather, traffic alerts, box scores, your child’s homework assignment. I have a suspicion that this glass fetish is a sort of Freudian slip on the part of tech companies, for the thing about glass is that you can see both ways through it, and what these companies most want is the transparency of you, to be able to peer into the minutiae of your life so as to be able to sell you stuff.

Technology designers have largely swallowed the modernist, glass-covered kool-aid, but one way to purge might be to turn to archeology. For example, I’ve never found a tool more beautiful than the astrolabe, though it might be difficult to carry a smartphone in that form in your pocket. Instead of all that glass, why not a Japanese paper house on which all our supposedly important information could appear? Whiteboards are better for writing anyway. Ransacking the past for forms in which to embed our technologies might be one way to escape the current design conformity.

There are reasons to hope that technology itself will help us break out of our futuristic design conformity. Ubiquitous 3D printing may allow us to design and create our own household art, tools, and customized technology devices. Websites my girls and I often look to for artistic inspiration and projects, such as the Met’s 82nd & Fifth, may, as the images get better, eventually become 3D and even provide CAD specifications, giving us access to a great deal of the world’s objects from the past and providing design templates for embedding our technology in ways that reflect our distinct personal and cultural values, values that might have nothing to do with those whipped up by Silicon Valley.

Of course, we’ve been ransacking the past for ideas about how to live in the present and future since modernity began. Augustus Pugin’s 1836 book Contrasts, with its beautiful, if slightly dishonest, drawings, put the 19th century’s bland, utilitarian architecture up against the exquisite beauty of buildings from the 15th century. How cool would it be to have a book like Pugin’s contrasting technology today with that of the past?

The problem with Pugin’s and others’ efforts in this regard was that they didn’t merely look to the past for inspiration or forms but sought to replace the present and its assumptions with the revival of whole eras, assuming that a kind of impossible cultural and historical rewind button or time machine was available, when in reality the past was forever gone.

Looking to the past, as opposed to trying to raise it from the dead, has other advantages when it comes to our ability to peer into and shape the future. It gives us different ways of seeing how our technology might be used as a means towards the things that have always mattered most to human beings: our relationships, art, spirituality, curiosity. Take, for instance, the way social media such as Skype and Facebook are changing how we relate to death. On the one hand, social media seems to break down the way in which we have traditionally dealt with death, as a community sharing a common space. On the other, it seems to revive an analog to practices regarding death that have long passed from the scene, practices that archeologists and social historians know well, in which the dead are brought into the home and tended to for an extended period of time. Now, though, instead of actual bodies, we see the online legacy of the dead, websites, Facebook pages and the like, being preserved and attended to by loved ones. A good question to ask oneself when thinking about how future technology will be used might not be what new thing it will make possible, but what old thing, no longer done, it might be used to revive.

In other words, the design of technology is the reflection of a worldview, but only of one very limited worldview competing with innumerable others. Without fail, for good and ill, technology will be hacked and used as a means by people whose worldviews have little if any relationship to that of technologists. What would be best is if the design itself, not the electronic innards but the shell and the organization for use, were controlled by non-technologists in the first place.

The best of science-fiction actually understands this. It’s not the gadgets that are compelling, but the alternative world: how it fits together, where it is going, and what it says about our own world. That’s what books like Asimov’s Foundation trilogy did, or Frank Herbert’s Dune. Neither was a prediction of the future so much as a fictional archeology and history. It’s not the humans who are interesting in a series like Star Trek, but the non-humans, each living in their own very distinct version of a life-world.

Those science-fiction writers not interested in creating alternative worlds, or in taking a completely imaginary shot in the dark at what life might be like many hundreds of years beyond our own, might benefit from reflecting on what they are doing and on why they should lean heavily on knowledge of the past.

To the extent that science-fiction tells us anything about the future, it does so through a process of focused extrapolation. As the sci-fi author Paolo Bacigalupi has said (and I have quoted before), the role of science-fiction is to take some set of elements of the present and extrapolate them to see where they lead. Or, as William Gibson said, “The future is already here, it’s just not evenly distributed.” In his novel The Windup Girl, Bacigalupi extrapolated cutthroat corporate competition, the use of mercenaries, genetic engineering, the way biotechnology and agribusiness seem to be trying to take ownership over food, and our continued failure to tackle carbon emissions, and came up with a chilling dystopian world. Amazingly, Bacigalupi, an American, managed to embed this story in a Buddhist cultural and Thai historical context.

Although I have not had the opportunity to read the book yet, it is now at the top of my end-of-summer reading list: former neuroscientist and founder of the gaming company Six to Start, Adrian Hon, seems in his book The History of the Future in 100 Objects to grasp precisely this. As Hon explained in an excellent talk over at the Long Now Foundation (I believe I’ve listened to it in its entirety three times!), he was inspired by Neil MacGregor’s A History of the World in 100 Objects (also on my reading list). What Hon realized was that the near future, at least, would still have many of the things we are familiar with, things like religion or fashion, and so what he did was extrapolate his understanding of technology trends, gained from both his neuroscience and gaming backgrounds, onto those things.

Science-fiction writers, and their kissing-cousins the futurists, need to remind themselves when writing about the next few centuries that, as long as there are human beings (and otherwise, what are they writing about?), there will still be much about us that we’ve had since time immemorial. Little doubt, there will still be death, but also religion. And not just any religion, but the ones we have now: Christianity and Islam, Hinduism and Buddhism, etc. (Can anyone think of a science-fiction story where they still celebrate Christmas?) There will still be poverty, crime, and war. There will also be birth and laughter and love. Culture will still matter, and sub-culture just as much, as will geography. For at least the next few centuries, let us hope, we will still be human, so perhaps I shouldn’t have used archaeologists in my title, but anthropologists or even psychologists, for those who best understand the near future are those who best understand the mind and heart of mankind, rather than the current, and inevitably unpredictable, trajectory of our science and technology.

 

Plato and the Physicist: A Multicosmic Love Story

Life as a Braid of Space Time

 

So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it, namely, how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes, it also threw up a whole host of new questions. This beautifully written and thought-provoking book made me wonder about the future of science and the scientific method, the limits to human knowledge, and the scientific, philosophical, and moral meaning of the various ideas of the multiverse.

I should start though with my initial question of how Tegmark manages to fit something very much like Plato’s Theory of the Forms into the seemingly chaotic landscape of multiverse theories. If you remember back to your college philosophy classes, you might recall something of Plato’s idea of forms, which in its very basics boils down to this: Plato thought there was a world of perfect, eternally existing ideas of which our own supposedly real world was little more than a shadow. The idea sounds out there until you realize that Plato was thinking like a mathematician. We should remember that over the walls of Plato’s Academy was written “Let no man ignorant of geometry enter here”, and for the Greeks geometry was the essence of mathematics. Plato aimed to create a school of philosophical mathematicians much more than he hoped to turn philosophers into a sect of moral geometers.

Probably almost all mathematicians and physicists hold to some version of platonism, meaning they think mathematical structures are discovered rather than being a form of language invented by human beings. Non-mathematicians, myself very much included, often have trouble understanding this, but a simple example from Plato himself might help clarify.

When the Greeks played around with shapes for long enough they discovered things. And here we really should say discover, because they had no idea shapes had these properties until they stumbled upon them through play. Plato’s dialogue Meno gives us the most famous demonstration of the discovery, rather than invention, of mathematical structures. Socrates asks a “slave boy” (we should take this to be the modern-day equivalent of the man off the street) to figure out the side of a square whose area is double that of a square with a side of 2. The key, as Socrates leads the boy to see, is that the diagonal of the original square, the hypotenuse of a right triangle with legs of length 2, is itself the side of the doubled square, allowing you to easily calculate its area. The slave boy explains his measurement epiphany as the “recovery of knowledge from a past life.”
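The boy’s discovery is easy to check numerically. A minimal sketch (the side length of 2 comes from the dialogue; the rest is just arithmetic):

```python
import math

# The Meno construction: the diagonal of a square is the side of
# the square with double the area.
side = 2.0
original_area = side ** 2                 # 4.0

# The diagonal is the hypotenuse of a right triangle with legs of 2.
diagonal = math.hypot(side, side)         # 2 * sqrt(2)

# Using that diagonal as the new side doubles the area (up to floating point).
doubled_area = diagonal ** 2

assert math.isclose(doubled_area, 2 * original_area)
```

The same holds for any starting side, which is exactly the kind of general property the Greeks took themselves to be discovering rather than inventing.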

The big gap between Plato and modern platonists is that the ancient philosopher thought the natural world was a flawed copy of the crystalline purity of the mathematics of thought. Contrast that with Newton, who saw the hand of God himself in nature’s calculable regularities. The deeper the scientists of the modern age probed with their new mathematical tools, the more nature appeared, as Galileo said, “a book written in the language of mathematics”. For the moderns, mathematical structures and natural structures became almost one and the same. The Spanish filmmaker and graphic designer Cristóbal Vila has a beautiful short over at AEON reflecting precisely this view.

It’s that “almost” that Tegmark leaps over with his Mathematical Universe Hypothesis (MUH). The essence of the MUH is not only that mathematical structures have an independent existence, or that nature is a book written in mathematics, but that nature is a mathematical structure; and just as all mathematical structures exist independent of whether we have discovered them, all logically coherent universes exist whether or not we have discovered their structures. This is platonism with a capital P, the latter half explaining how the MUH intersects with the idea of the multiverse.

One of the beneficial things Tegmark does in his book is provide a simple-to-understand set of levels for the different ideas that there is more than one universe.

Level I: Beyond our cosmological horizon

A Level I multiverse is the easiest for me to understand. Within the lifetime of people still alive, our universe was held to be no bigger than our galaxy. Before that, people thought the entirety of what exists consisted of nothing but our solar system, so it is no wonder they thought humanity was the center of creation’s story. As of right now, the observable universe is around 46 billion light-years in radius, a distance larger than the naive limit set by the age of the universe because of the universe’s expansion. Yet why should we think this observable horizon constitutes everything, when such an assumption has never proved true in the past? The Level I multiverse holds that there are entire other universes outside the limit of what we can observe.

Level II: Universes with different physical constants

The Level II multiverse again makes intuitive sense to me. If one assumes that the Big Bang was not the first or the last of its kind, and that there are whole other, potentially infinitely many, universes, why assume that ours is the only way a universe can be organized? Indeed, having a variety of physical constants to choose from would make the fine-tuning of our own universe make more sense.

Level III: Many-worlds interpretation of quantum mechanics

This is where I start to get lost, or at least this particular doppelganger of me starts to get lost. Here we find Hugh Everett’s interpretation of quantum unpredictability. Rather than Schrödinger’s cat being pushed out of a superposition of alive and dead when you open the box, exposing the feline causes the universe to split: in one universe you have a live cat, and in another a dead one. It gets me dizzy just thinking about it; just imagine the poor cat. Wait, I am the cat!

Level IV: Ultimate ensemble

Here we have Tegmark’s model itself, where every universe that can be represented as a logically consistent mathematical structure is said to actually exist. In such a multiverse, when you roll a six-sided die there end up being six universes, one corresponding to each outcome, but there is no universe in which you have rolled a “1 and not 1”, and so on. If a universe’s mathematical structure can be described, then that universe can be said to exist, there being, in Tegmark’s view, no difference between such a mathematical structure and a universe.
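The die example can be made concrete with a toy bit of bookkeeping; this is only an illustration of the ensemble idea, not Tegmark’s formalism:

```python
from itertools import product

# Toy ensemble: rolling a six-sided die n times yields 6**n logically
# consistent branch histories, one "universe" per outcome sequence.
n = 3
branches = list(product(range(1, 7), repeat=n))

assert len(branches) == 6 ** n   # 216 histories for three rolls

# No history contains a contradictory "1 and not 1" outcome: every
# entry is exactly one face value between 1 and 6.
assert all(1 <= face <= 6 for history in branches for face in history)
```

The ensemble contains every consistent history and nothing else, which is the whole content of the “logically consistent structures exist” claim in this toy setting.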

I had previously thought the idea of the multiverse was a way to give scale to the shadow of our ignorance and expand our horizon in space and time. As mentioned, we had once thought all that is was only as big as our solar system and merely thousands of years old. By the 19th century the universe had expanded to the size of our galaxy and the past had grown to as much as 400 million years. By the end of the 20th century we knew there were at least 100 billion galaxies in the universe and that its age was 13.7 billion years. There is no reason to believe we have grasped the full totality of existence, that the universe beyond our observable horizon isn’t even bigger, and the past deeper. There is “no sign on the Big Bang saying ‘this happened only once’”, as someone whose attribution I cannot find once cleverly said.

Ideas of the multiverse also seemed to explain the odd fact that the universe appears fine-tuned to provide the conditions for life, as captured in Martin Rees’ “six numbers”, such as epsilon (ε), the strength of the force binding nucleons into nuclei. If you have a large enough sample of universes, then the fact that some universes are friendly to life starts to make more sense. The problem, I think, comes in when you realize just how large this sample size has to be to get you to fine-tuning: somewhere on the order of 10^200. What this means is that you’ve proposed the existence of a very, very large or even infinite number of values, as far as we know unobservable, in order to explain essentially six. If this is science, it is radically different from the science we’ve known since Galileo dropped cannon balls off the Leaning Tower of Pisa.
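The scale of the required ensemble can be seen with a back-of-envelope sketch. All the numbers here are illustrative assumptions, not physics: suppose each of six constants must land in a life-friendly window covering a thousandth of its range, with the constants drawn independently and uniformly.

```python
import math

# Chance that a single random universe gets all six constants into
# their (assumed) life-friendly windows: (1/1000)^6.
p_single = (1e-3) ** 6   # 1e-18 per universe

def chance_of_at_least_one(n_universes):
    # P(at least one friendly universe) = 1 - (1 - p)^N,
    # computed in log space to avoid underflow for tiny p.
    return -math.expm1(n_universes * math.log1p(-p_single))

# A quadrillion universes is nowhere near enough...
assert chance_of_at_least_one(1e15) < 0.01
# ...but make the ensemble big enough and a friendly universe
# becomes a near-certainty.
assert chance_of_at_least_one(1e20) > 0.99
```

With narrower windows or more constants the required ensemble balloons very quickly, which is how figures on the scale of 10^200 enter the argument.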

For whatever reason, rather than solidify my belief in the possibility of the multiverse, or convert me to platonism, Tegmark’s book left me with a whole host of new questions, which is what good books do. The problem is my damned doppelgangers, who can be found not only at the crazy quantum Level III, but at the levels I thought were a preserve of Copernican mediocrity, Levels I and II. As Tegmark says:

The only difference between Level I and Level III is where your doppelgängers reside.

Yet to my non-physicist eyes the different levels of multiverse sure seem distinct. Level III seems to violate Copernican mediocrity, with observers and actors able to call into being whole new timelines with even the most minutiae-laden of their choices, whereas Levels I and II simply posit that a universe sufficiently large and sufficiently extended in time would allow for repeat performances down to the smallest detail. Perhaps the universe is just smaller than that, or less extended in time, or there is some sort of kink whereby, when what the late Stephen Jay Gould called the “life tape” is replayed, you can never get the same results twice.

Still, our intuitions about reality have often been proven wrong, so no theory can be discounted on the basis of intuitive doubts alone. There are other reasons, however, why we might use caution when it comes to multiverse theories, namely their potential risk to the scientific endeavor itself. The fact that we can never directly observe parts of the multiverse that are not our own means that we would have to step away from falsifiability as the criterion for scientific truth. The physicist Sean Carroll argues that falsifiability is a weak criterion: what makes a theory scientific is that it is “direct” (says something definite about how reality works) and “empirical”, by which he no longer means the Popperian notion of falsifiability, but the theory’s ability to explain the world. He writes:

Consider the multiverse.

If the universe we see around us is the only one there is, the vacuum energy is a unique constant of nature, and we are faced with the problem of explaining it. If, on the other hand, we live in a multiverse, the vacuum energy could be completely different in different regions, and an explanation suggests itself immediately: in regions where the vacuum energy is much larger, conditions are inhospitable to the existence of life. There is therefore a selection effect, and we should predict a small value of the vacuum energy. Indeed, using this precise reasoning, Steven Weinberg did predict the value of the vacuum energy, long before the acceleration of the universe was discovered.

We can’t (as far as we know) observe other parts of the multiverse directly. But their existence has a dramatic effect on how we account for the data in the part of the multiverse we do observe.

One could look at Tegmark’s MUH and Carroll’s comments as a broadening of our scientific and imaginative horizons, and a continuation of our power to explain into realms human beings will never observe. The idea of a 22nd-century version of Plato’s Academy using amazingly powerful computers to explore all the potential universes à la Tegmark’s MUH is an attractive future to me. Yet, given how reliant we are on science and the technology that grows from it, and given the role of science in our society in establishing the consensus view of what our shared physical reality actually is, we need to be cognizant and careful of what such a changed understanding of science might actually mean.

The physicist, George Ellis, for one, thinks the multiverse hypothesis, and not just Tegmark’s version of it, opens the door to all sorts of pseudoscience such as Intelligent Design. After all, the explanation that the laws and structure of our universe can be understood only by reference to something “outside” is the essence of explanations from design as well, and just like the multiverse, cannot be falsified.

One might think the multiverse represents a victory of theorizing over real-world science, but I think Sean Carroll is essentially right when he defends the multiverse theory by saying:

 Science is not merely armchair theorizing; it’s about explaining the world we see, developing models that fit the data.

It’s the use of the word “model” here, rather than “theory”, that is telling. For a model is a type of representation of something, whereas a theory constitutes an attempt at a coherent, self-contained explanation. If the move from theories to models were happening only in physics, we might say it had something to do with physics in particular rather than science in general. But we see this move all over the place.

Among neuroscientists, for example, there is no widely agreed-upon theory of how SSRIs work, even though they’ve been around for a generation. And there’s more: in a widely debated speech, Noam Chomsky argued that current statistical models in AI were bringing us no closer to the goal of AGI, or to the understanding of human intelligence, because they lacked any coherent theory of how intelligence works. As Yarden Katz wrote for The Atlantic:

Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.

Likewise, the field of systems biology, and especially genomic science, is built not on theory but on our ability to scan enormous databases of genetic information looking for meaningful correlations. The new field of social physics is based on the idea that correlations of human behavior can be used as governance and management tools, and business already believes that statistical correlation is worth enough to spend billions on and build an economy around.

Will this work as well as the science we’ve had for the last five centuries? It’s too early to tell, but it certainly constitutes a big change for science and for the rest of us who depend upon it. This shouldn’t be taken as an unqualified defense of theory, for if theory were working we wouldn’t be pursuing this new route of data correlation, whatever the powers of our computers. Yet those who are pushing this new model of science should be aware of its uncertain success, and of its dangers.

The primary danger I can see from these new sorts of science, and this includes the MUH, is that they challenge the role of science in establishing the consensus reality upon which we all must agree. Anyone who remembers their Thomas Kuhn can recall that what makes science distinct from almost any system of knowledge we’ve had before is that it both enforces a consensus view of physical reality, beyond which an individual’s view of the world can be considered “unreal”, and provides a mechanism by which this consensus reality can be challenged and, where the challenge is successful, overturned.

With multiverse theories we are approaching what David Eagleman calls Possibilianism: the exploration of every way existence could be structured that is compatible with the findings of science and is rationally coherent. I find this interesting as a philosophical and even spiritual project, but it isn’t science, at least as we’ve understood science since the beginning of the modern world. Declaring the project to be scientific blurs the line between science and speculation, and might allow people to claim the kind of understanding over uncertainty that makes politics and consensus decisions regarding acute needs of the present, such as global warming, or projected needs of the future, impossible.

Let me try to clarify this. I found it very important that in Our Mathematical Universe Tegmark tries to tackle the existential risks facing the human future. He touches upon everything from climate change to asteroid impacts, pandemics, and rogue AI. Yet the very idea that there are multiple versions of us out there, and that our own future is determined, seems to rob these issues of their urgency. In an “infinity” of predetermined worlds we destroy ourselves, just as in an “infinity” of predetermined worlds we do what needs to be done. There is no need to urge us forward because, puppet-like, we are destined to do one thing or the other on this particular timeline.

Morally and emotionally, how is what happens in this version of the universe in the future all that different from what happens in another universe? Persons in those parallel universes, our children, parents, spouses, even ourselves, are closer to us than the people of the future on our own timeline. Yet according to the deterministic models of the multiverse, the worlds of these others are outside our influence, and both the expansion and the contraction of our ethical horizon leave us in the same state of moral paralysis. Given this, I will hold off on believing in the multiverse, at least on the doppelgänger scale of Levels I and II, and especially Levels III and IV, until it actually becomes established as a scientific fact, which it is not at the moment, and which, given our limitations, it perhaps never will be, even if it is ultimately true.

All that said, I greatly enjoyed Tegmark’s book; it was nothing if not thought-provoking. Nor would I say it left me with little but despair, for in one section he imagines a Spinoza-like version of eternity that will last me a lifetime, or perhaps I should say beyond. I am aware that I will contradict myself here: the image that gripped me was of an individual life seen as a braid of space-time. For Tegmark, human beings have the most complex space-time braids we know of. The image above vastly oversimplifies the idea.

As Tegmark explains:

At both ends of your spacetime braid, corresponding to your birth and death, all the threads gradually separate, corresponding to all your particles joining, interacting and finally going their own separate ways. This makes the spacetime structure of your entire life resemble a tree: At the bottom, corresponding to early times, is an elaborate system of roots corresponding to the spacetime trajectories of many particles, which gradually merge into thicker strands and culminate in a single tube-like trunk corresponding to your current body (with a remarkable braid-like pattern inside as we described above). At the top, corresponding to late times, the trunk splits into ever finer branches, corresponding to your particles going their own separate ways once your life is over. In other words, the pattern of life has only a finite extent along the time dimension, with the braid coming apart into frizz at both ends.
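The roots-trunk-branches pattern Tegmark describes can be caricatured in a few lines of code. This is not his actual construction, just a toy one-dimensional sketch of my own: the particle count, timescales, and the strength of the pull that binds the threads together during “life” are all invented for illustration. Particles diffuse freely before birth and after death, but are drawn toward their common centroid while the body exists, so the spread of the threads narrows into a trunk and then frays apart again.

```python
import random

random.seed(0)

def braid_spread(n_particles=50, t_total=90, t_birth=30, t_death=60):
    """Toy 'spacetime braid': track the spread of particle threads over time."""
    # Scattered starting positions: the 'roots' of the tree.
    positions = [random.uniform(-10.0, 10.0) for _ in range(n_particles)]
    spreads = []
    for t in range(t_total):
        centroid = sum(positions) / n_particles
        for i in range(n_particles):
            step = random.gauss(0.0, 1.0)       # free diffusion (the 'frizz')
            if t_birth <= t < t_death:          # 'alive': threads cohere into a trunk
                step += 0.5 * (centroid - positions[i])
            positions[i] += step
        mean = sum(positions) / n_particles
        spreads.append((sum((x - mean) ** 2 for x in positions) / n_particles) ** 0.5)
    return spreads

spreads = braid_spread()
# Spread is large before 'birth', small during 'life', large again after 'death':
# roots, trunk, branches.
print(round(spreads[5], 1), round(spreads[45], 1), round(spreads[85], 1))
```

Plotting `spreads` against time gives the hourglass silhouette of the tree: wide at both ends, pinched into a trunk in the middle, which is all the quotation below really claims.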

Because mathematical structures always exist whether or not anyone has discovered them, our life braid can be said to have always existed and to always exist. I have never been able to wrap my head around the religious idea of eternity, but this eternity I understand. Someday I may even do a post on how the notion of time found in the MUH resembles the medieval idea of eternity as the nunc stans, the standing now, but for now I’ll use it to address more down-to-earth concerns.

My youngest daughter, philosopher that she is, has often asked me “where was I before I was born?” To which my lame response has been “you were an egg”, which for a while made big breakfasts difficult. Now I can just tell her to get out her crayons to scribble, and we’ll color our way to something profound.