Sex and Love in the Age of Algorithms

Eros and Psyche

How’s this for a 21st century Valentine’s Day tale: a group of religious fundamentalists wants to redefine human sexual and gender relationships based on a religious text more than 2,000 years old. Yet instead of trying to seize hold of the cultural and political institutions of society, a task they find impossible, they create an algorithm whose religiously derived assumptions are invisible to the people who enter its world. Those who enter have no control over their actions within it, and surrender their autonomy for the promise of finding their “soul mate”.

I’m not writing a science-fiction story- it’s a tale that’s essentially true.

One of the first places, perhaps the only place, where the desire to compress human behavior into algorithmically processable and rationalized “data” has run into a wall is the ever-so-irrational realm of sex and love. Perhaps I should have titled this piece “Cupid’s Revenge”, for the domain of sex and love has proved itself so unruly and non-computable that something now almost unbelievable has happened- real human beings have been brought back into the process of making the actual decisions that affect their lives, rather than relying on silicon oracles to tell them what to do.

It’s a story not much known and therefore important to tell. It begins with the exaggerated claims of one of the first and biggest online dating sites- eHarmony. Founded in 2000 by Neil Clark Warren, a clinical psychologist and former marriage counselor, eHarmony promoted itself as more than a mere dating site, claiming that it could help those using its service find their “soul mate”. As its senior research scientist, Gian C. Gonzaga, put it:

It is possible “to empirically derive a matchmaking algorithm that predicts the relationship of a couple before they ever meet.”

Even as it made such claims, eHarmony was very controlling in the way its customers were allowed to use its dating site. Members were not allowed to search for potential partners on their own, but were steered toward “appropriate” matches chosen by the site’s algorithm on the basis of a 200-item questionnaire- an algorithm that remained opaque to its users. This model of what dating should be was doubtless driven by Warren’s religious background, for in addition to his psychological credentials, Warren was also a Christian theologian.

By 2011 eHarmony had garnered the attention of sceptical social psychologists, most notably Eli J. Finkel, who, along with his co-authors, wrote a critical piece that year for the American Psychological Association on eHarmony and related online dating sites.

What Finkel wanted to know was whether claims such as eHarmony’s- that it had discovered some ideal way to match individuals to long-term partners- actually stood up to critical scrutiny. What he and his co-authors concluded was that while online dating had opened up a new frontier for romantic relationships, it had not solved the problem of how to actually find the love of one’s life. Or as he put it in a later article:

As almost a century of research on romantic relationships has taught us, predicting whether two people are romantically compatible requires the sort of information that comes to light only after they have actually met.

Faced with critical scrutiny, eHarmony felt compelled to do something that, to my knowledge, none of the programmers of the various algorithms that now mediate much of our relationship with the world have done: namely, make the assumptions behind their algorithms explicit.

As Gonzaga explained it, eHarmony’s matching algorithm was based on six key characteristics of users, including things like “level of agreeableness” and “optimism”. Yet as another critic of eHarmony, Dr. Reis, told Gonzaga:

That agreeable person that you happen to be matching up with me would, in fact, get along famously with anyone in this room.
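To make the idea concrete, here is a minimal, hypothetical sketch of what trait-based matching of this kind might look like. The six trait names and the similarity rule are my own illustrative assumptions- eHarmony’s actual algorithm was never disclosed.

```python
# A hypothetical sketch of trait-similarity matching in the spirit of a
# questionnaire-based dating algorithm. Trait names and the scoring rule
# are illustrative assumptions, not the company's actual (opaque) method.

TRAITS = ["agreeableness", "optimism", "emotional_stability",
          "extraversion", "conscientiousness", "openness"]

def match_score(a: dict, b: dict) -> float:
    """Mean similarity across the six traits, each self-rated 0-10.

    Returns a value between 0.0 (maximally different) and 1.0 (identical).
    """
    diffs = [abs(a[t] - b[t]) / 10 for t in TRAITS]
    return 1 - sum(diffs) / len(TRAITS)

alice = {t: 8 for t in TRAITS}
bob = {t: 7 for t in TRAITS}
carol = {t: 3 for t in TRAITS}

print(round(match_score(alice, bob), 2))    # similar profiles score high
print(round(match_score(alice, carol), 2))  # dissimilar profiles score low
```

Under a scheme like this, matching reduces to similarity on a few self-reported dimensions- exactly the kind of main-effects scoring Reis objected to, since it says nothing about how any particular pair will actually interact once they meet.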

Still, the major problem critics found with eHarmony wasn’t just that it made exaggerated claims for the effectiveness of its romantic algorithms, which were at best a version of skimming; it’s that it asserted nearly complete control over the way its users defined what love actually was. As is the case with many algorithms, the one used by eHarmony was a way for its designers and owners to impose, rightly or wrongly, their own value assumptions about the world on those using it.

And like many classic romantic tales, this one ended with the rebellion of messy human emotion against reason and paternalistic control. Social psychologists weren’t the only ones who found eHarmony’s model constraining, nor were they the first to notice its flaws. Christian Rudder, one of the founders of the alternative dating site OkCupid, has noted that much of what his organization did was a reaction against the exaggerated claims for the efficacy of their algorithms and the top-down constraints imposed by the creators of eHarmony. But it is another, much maligned dating site, Tinder, that proved to be the real rebel in this story.

Critics of Tinder, where users swipe through profile pictures to find potential dates, have labeled it a “hook-up” site that encourages shallowness. Yet Finkel concludes:

Yes, Tinder is superficial. It doesn’t let people browse profiles to find compatible partners, and it doesn’t claim to possess an algorithm that can find your soulmate. But this approach is at least honest and avoids the errors committed by more traditional approaches to online dating.

And appearance-driven sites are unlikely to be the last word in online dating, especially for older Romeos and Juliets who would like to go a little deeper than looks. The psychologist Robert Epstein, working at the MIT Media Lab, sees two up-and-coming trends that will likely further humanize the 21st century dating experience. The first is the rise of virtual dating environments that feel less like video games. As he describes it:

…so at some point you will be able to have, you know, something like a real date with someone, but do it virtually, which means the safety issue is taken care of and you’ll find out how you interact with someone in some semi-real setting or even a real setting; maybe you can go to some exotic place, maybe you can even go to the Champs-Élysées in Paris or maybe you can go down to the local fast-food joint with them, but do it virtually and interact with them.

The other, just as important but less tech-sexy, change Epstein sees coming is bringing friends and family back into the dating experience:

Right now, if you sign up with the eHarmony or match.com or any of the other big services, you’re alone—you’re completely alone. It’s like being at a huge bar, but going without your guy friends or your girl friends—you’re really alone. But in the real world, the community is very helpful in trying to determine whether someone is right for you, and some of the new services allow you to go online with friends and family and have, you know, your best friend with you searching for potential partners, checking people out. So, that’s the new community approach to online dating.

As has long been the case, sex and love have been among the first explorers moving out into a previously uncharted realm of human possibility. Yet because of this, sex and love are also the proverbial canary in the coal mine, warning us of potential dangers. The experience of online dating suggests that we need to be sceptical of the exaggerated claims of the various algorithms that now mediate much of our lives, and demand to see their underlying assumptions. To be successful, algorithms need to bring our humanity back into the loop rather than regulate it away as something messy, imperfect, irrational and unsystematic.

There is another lesson here as well: the more something becomes disconnected from our human capacity to extend trust through person-to-person contact, and through tapping into the wisdom of our own collective networks of trust, the more dependent we become on overseers who, in exchange for protecting us from deception, demand from us the kinds of intimate knowledge only friends and lovers deserve.

 

Edward O. Wilson’s Dull Paradise

Garden of Eden

In all sincerity I have to admit that there is much I admire about the biologist Edward O. Wilson. I can only pray that I not only live into my 80s, but still possess the intellectual stamina to write at least thought-provoking books when I get there. I also wish I still have the balls to write a book with the title of Wilson’s latest- The Meaning of Human Existence- for publishing under an appellation like that would mean I wasn’t afraid of disappointing my readers. And Wilson did indeed leave me wondering whether the whole thing was worth the effort.

Nevertheless, I think Wilson opened up an important alternative future that is seldom discussed here- namely, what if we aimed not at a supposedly brighter, so-called post-human future, but to keep things the same? Well, there would be some changes: no extremes of human poverty, along with the restoration of much of the natural environment to its pre-industrial-revolution health. Still, we ourselves would aim to stay largely the same human beings who emerged some 100,000 years ago- flaws and all.

Wilson calls this admittedly conservative vision paradise, and I’ve seen his eyes light up like a child contemplating Christmas when he uses the word in interviews. Another point that might be of interest to this audience is whom he largely blames for keeping us from entering this Shangri-la: archaic religions and their “creation stories”.

I have to admit that I find the idea of trying to preserve humanity as it is a valid alternative future. After all, “evolve or die” isn’t really the way nature works. Typically the “goal” of evolution is to find a “design” that works and then stick with it for as long as possible. Since we now dominate the entire planet, and our numbers far outstrip those of any other large animal, it seems hard to assert that we need a major, and likely risky, upgrade. Here’s Wilson making the case:

While I am at it, I hereby cast a vote for existential conservatism, the preservation of biological human nature as a sacred trust. We are doing very well in terms of science and technology. Let’s agree to keep that up, and move both along even faster. But let’s also promote the humanities, that which makes us human, and not use science to mess around with the wellspring of this, the absolute and unique potential of the human future. (60)

It’s an idea that rings true to my inner Edmund Burke, and sounds simple, doesn’t it? And on reflection it would be, if human beings were bison, blue whales, or gray wolves. Indeed, I think Wilson has drawn this idea of human preservation from his lifetime of very laudable work on biodiversity. Yet had he reflected on why efforts at preservation fail when they do, he would have realized that the problem isn’t the wildlife itself, but the human beings who don’t share the preservationists’ values and pull in the opposite direction. That is, humans, though we are certainly animals, aren’t wildlife, in the sense that we take destiny into our own hands, even if doing so is sometimes for the worse. Wilson seems to think it’s quite a short step from asserting the “preservation of biological human nature as a sacred trust” as a goal to gaining universal assent to it. The problem is that there is no widespread agreement over what human nature even is. And even if you had such agreement, how in the world do you go about enforcing it on the minority who refuse to adhere to it? How far should we be willing to go to prevent persons from willingly crossing some line that defines what a human being is? And where exactly is that line in the first place? Wilson thinks we’re near the end of the argument when we have only just taken our seat at the debate.

The strange thing is that the very people who would naturally lean towards the kind of biological conservatism Wilson hopes “we” will ultimately choose are the sorts of traditionally religious persons he thinks are at the root of most of our conflicts. Here again is Wilson:

Religious warriors are not an anomaly. It is a mistake to classify believers of a particular religious and dogmatic religion-like ideologies into two groups, moderates versus extremists. The true cause of hatred and religious violence is faith versus faith, an outward expression of the ancient instinct of tribalism. Faith is the one thing that makes otherwise good people do bad things. (154)

For Wilson, a religious group “defines itself foremost by its creation story, the supernatural narrative that explains how human beings came into existence.” (151) The trouble with this is that it’s not even superficially true. Three of the world’s religions that have been busy killing one another over the last millennium- Judaism, Christianity and Islam- all share the same creation story. Wilson knows a hell of a lot more about ants and evolution than he does about religion or even world history. And while religion is certainly the root of some of our tribalism, which I agree is the deep and perennial human problem, it’s far from the only source, and very few of our tribal conflicts have anything to do with fights between human beings over our origins in the deep past. How about class conflict? Or racial conflict? Or nationalist conflicts where the two sides profess not only the exact same religion but the exact same sect- such as the current fight between the two Christian Orthodox nations of Russia and Ukraine? If China and Japan someday go to war it will not be a horrifying replay of the Scopes Monkey Trial.

For a book called The Meaning of Human Existence, Wilson’s ideas have very little explanatory power when it comes to anything other than our biological origins, along with some quite questionable ideas regarding the origins of our capacity for violence. That is, the book lacks depth, and because of this I found it, well… dull.

Nowhere was I more hopeful that Wilson would have something interesting and different to say than on the question of extraterrestrial life. Here we have one of the world’s greatest living biologists, a man who spent a lifetime studying ants as an alternative route to the kind of eusociality otherwise possessed only by humans, the naked mole rat, and a handful of insects. Here was a scientist who was clearly passionate about preserving the amazing diversity of life on our small planet.

Yet Wilson’s E.T.s are land dwellers, relatively large, biologically audiovisual; “their head is distinct, big, and located up front” (115); they have moderate teeth and jaws, a high social intelligence, and “a small number of free locomotory appendages, levered for maximum strength with stiff internal or external skeletons composed of hinged segments (as by human elbows and knees), and with at least one pair of which are terminated by digits with pulpy tips used for sensitive touch and grasping.” (116)

In other words, they are little green men.

What I had hoped was that Wilson would use his deep knowledge of biology to imagine alternative paths to technological civilization. Couldn’t he have imagined a hive-like species that evolves in tandem with its own technological advancement? Or some larger form of insect-like animal that doesn’t just have an instinctive repertoire of things it builds, but constantly improves upon its own designs and explores the space of possible technologies? Or an aquatic species that develops something like civilization through sea-herding and ocean farming? How about species that communicate not audio-visually but through electrical impulses, the way our computers do?

After all, nature on earth is pretty weird. There’s not just us, but termites that build air-conditioned skyscrapers (at least from their point of view), whales with culturally specific songs, and strange little things that eat and excrete electrons. One might guess that life elsewhere will be even weirder. Perhaps my problem with The Meaning of Human Existence is that it just wasn’t weird enough- not only to capture the worlds of tomorrow and elsewhere, but the one we’re living in right now.

 

There are two paths to superlongevity: only one of them is good

Memento Mori Ivories

Looked at from a longer historical perspective, we have already achieved something our ancestors would consider superlongevity. In the UK, life expectancy at birth averaged around 37 in 1700; it is roughly 81 today. The extent to which this reflects decreased child mortality versus an increase in the survival rate of the elderly I’ll get to a little later, but for now, just try to get your head around the fact that we have managed to more than double the life expectancy of human beings in about three centuries.

By themselves, the gains we have made in longevity are pretty incredible, but we have also managed to redefine what it means to be old. A person in 1830 was old at forty not just by the averages, but by the condition of his body. A revealing game to play is to find pictures of adults from the 19th century and try to guess their ages. My bet is that you, like me, will consistently estimate the people in these photos to be older than they actually were when the picture was taken. This isn’t a reflection of their lack of Botox and Photoshop so much as the fact that they were missing the miracle of modern dentistry, and were felled, or at least weathered, by diseases we now consider mere nuisances. If I were my current age in 1830 I would be missing most of my teeth, and the pneumonia I caught a few years back would surely have killed me, pneumonia having been a major cause of death in the age of Darwin and Dickens.

Sixty- or even seventy-year-olds today are probably in the state of health a forty-year-old was in the 19th century. In other words, we’ve increased the healthspan, not just the lifespan. Sixty really is the new forty, though what matters is how you define “new”. Yet get past eighty in the early 21st century and you’re almost right back in the world where our ancestors lived. Experiencing the debilitations of old age is the fate of those of us lucky enough to survive through the pleasures of youth and middle age. The disability of the old is part of the tragic aspect of life, and as always when it comes to giving poetic shape to our comic/tragic existence, the Greeks got to the essence of old age with their myth of Tithonus.

Tithonus was a youth who had the ill fortune of inspiring the love of the goddess of the dawn, Eos. (Love affairs between gods and mortals never end well.) Eos asked Zeus to grant the youth immortality, which he did, but, of course, not in the way Eos intended. Tithonus would never die, but he would also continue to age, becoming not merely old and decrepit but eventually shriveling away into a grasshopper hugging a room’s corner. It is best not to ask the gods for anything.

Despite our successes, those of us lucky enough to live into our 7th and 8th decades still end up like poor old Tithonus. The deep lesson of the ancient myth still holds: longevity is not worth as much as we might hope if it is not also combined with the health of youth, and despite all our advances, we are essentially still in Tithonus’ world.

Yet perhaps not for long- at least if one believes the story told by Jonathan Weiner in his excellent book Long for this World. I learned much about our quest for long life and eternal youth from Long for this World, both its religious and cultural history and the trajectory and state of its science. I never knew that Jewish folklore had a magical city called Luz, which the death unleashed in Eden was prevented from entering, and which existed until all its inhabitants became so bored that they walked out from its walls and were struck down by the Angel of Death waiting eagerly outside.

I did not know that Descartes, who had helped unleash the scientific revolution, thought that gains in knowledge were growing so fast that he would live to be 1,000. (He died in 1650, at 53.) I did not realize that two other key figures of the scientific revolution, Roger and Francis Bacon (no relation), thought that science would restore us to our prelapsarian knowledge- the knowledge we had before the fall- which would allow us to live forever; or the depth to which very different Chinese traditions, with no guilt at all about human immortality, pursued the goal with all sorts of elixirs and practices, none of which, of course, worked. I was especially taken with the story of how Pennsylvania’s most famous son- Benjamin Franklin- wanted to be “pickled” and awoken a century later.

Reviewing the past, when even ancient Egyptian hieroglyphs offered up recipes for “guaranteed to work” wrinkle creams, shows us just how deeply human the longing for agelessness is. It wasn’t invented by Madison Avenue or Dr. Oz, and the ancients’ attempts to find a fountain of youth seem no less silly than many of our own. The question, I suppose, is the one that most risks the accusation that one is a fool: “Is this time truly different?” Are we, out of all the generations that have believed they had discovered the route to human “immortality” (and every generation since the rise of modern science has had those who thought so), actually the ones who will achieve this dream?

Long for this World is at heart a serious attempt to grapple with this question, and it tries to give us a clear picture of longevity science, built around the theoretical biologist Aubrey de Grey, who will either go down in history as a courageous prophet of a new era of superlongevity, or as just another figure in our long history of thinking biological immortality is at our fingertips when all we are seeing is a mirage.

One advantage we have over our ancestors who chased this dream is that we know much, much more about the biology of aging. Darwinian evolution allowed us to conceive non-poetic theories on the origins of death. In the 1880s the German biologist August Weismann, in his essay “Upon the Eternal Duration of Life”, provided a kind of survival-of-the-fittest argument for death and aging. Even an ageless creature, Weismann argued, would over time absorb multiple shocks and eventually end up disabled; the longer something lives, the more crippled and worn out it becomes. Thus it is in the interest of the species that death exists to clear the world of these disabled. Very damned German, the whole thing.

Just after World War II the biologist Peter Medawar challenged Weismann’s view. For Medawar, if you look at any species, selective pressures are really only operating when the organism is young. Those who survive long enough to breed are the only ones that really count when it comes to natural selection. Like James Dean or Marilyn Monroe, nature is just fine if we exit the world in the bloom of youth- as long, that is, as we have passed on our genes.

In other words, healthful longevity is not something natural selection has been selecting most organisms for, and because of this it hasn’t been selecting against the bad things that can happen to old organisms either- as we’re finding when, by saving people from heart attacks in their 50s, we destine them to die of diseases that were rare or unknown in the past, like Alzheimer’s. In a sense we’re the victims of natural selection’s indifference to the health and longevity of those past reproductive age.

Well, this is only partly true. Organisms that live in conditions where survival in youth is more secure end up with stretched longevity for their size. Some bats can live for decades when similarly sized mice have a lifespan of only a couple of years. Tortoises can live for well over a century, while alligators of the same weight live 30 to 50 years.

Stretching healthful longevity is also something that occurs when you starve an animal. We’ve known for decades that lifespan (in other animals at least) can be increased through caloric restriction. Although the mechanism is unclear, the Darwinian logic is not: under conditions of starvation it’s a bad idea to breed, and the body seems to respond by slowing development, waiting for the return of food and a good time to mate.

Thus there is no such thing as a death clock; lifespan is malleable and can be changed if we just learn how to work the dials. We should have known this from our historical experience over the last two hundred years, in which we doubled the human lifespan, but now we know that nature does it all the time- and not, as we do, by addressing the symptoms of aging, but by resetting the clock of life itself.

We might find it easy to reset our own aging clock if there weren’t multiple factors at play in its ticking. Aubrey de Grey has identified seven, the most important of which (excluding cancerous mutations) are probably the accumulation of “junk” within cells and the development of harmful “cross-links” between cells. The strange thing about these is that they are not something that suddenly appears when we are actually “old”; they are there all along, only reaching levels where they become noticeable and start to cause problems after many decades. We start dying the day we are born.

As we learn in Long for This World, there is hope that someday we may be able to effectively intervene against all these causes of aging. Every year the science needed to do so advances. Yet as Aubrey de Grey has indicated, the greatest threat to this quest for biological immortality is something we are all too familiar with – cancer.

The possibility of developing cancer emerges from the very way our cells work. Over a lifetime our trillions of cells replicate themselves an even more mind-bogglingly high number of times. It is almost impossible that every copying error will be caught before it takes on a life of its own and becomes a cancerous growth. Increasing lifespan only increases the amount of time in which such copying errors can occur.

It’s over Aubrey de Grey’s solution to this last and most serious of superlongevity’s medical hurdles that Weiner’s faith in the sense of the project breaks down, as does mine. De Grey’s cure for cancer goes by the name of WILT- whole-body interdiction of the lengthening of telomeres. A great many of the cancers that afflict human beings achieve their deadly replication without limit by taking control of the telomerase gene. De Grey’s solution is to delete the telomerase gene from all of our cells, something that, even if successful in preventing cancerous growths, would also leave us unable to replenish our red and white blood cells. To allow us to live without this capacity, de Grey proposes regular infusions of stem cells. What this would leave us with is a life of constant chemotherapy and invasive medical interventions just to keep us alive- in other words, a life in which even healthy people relate to their bodies, and are kept alive by medical interventions, in ways now experienced only by the terminally ill.

I think what shocks Weiner about this last step in SENS is that it underscores just how radical the medical requirements of engineering superlongevity might become. It’s one thing to talk about strengthening the cell’s junk collector, the lysosome, by adding an enzyme or through some genetic tweak; it’s another to talk about removing the very cells and structures that define human biology- blood cells and platelets- which have always been essential for human life and health.

Yet WILT struck me with somewhat different issues and questions. Here’s how I have come to understand it. For simplicity’s sake, we might be said to have two models of healthcare, both of which have contributed to the gains we have seen in human health and longevity since 1800. As is often noted, a good deal of this gain in longevity was a consequence of improving childhood mortality. Having fewer and fewer people die at the age of five drastically improves the average lifespan. We made these gains largely through public health: things like drastically improved sanitation, potable water, vaccinations, and, in the 20th century, antibiotics.
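A toy calculation makes the child-mortality effect vivid. The mortality rates and ages below are round illustrative numbers of my own, not historical data: the point is simply that cutting childhood deaths lifts the average dramatically even if adult lifespans don’t budge.

```python
# A toy model of average life expectancy in a cohort where some fraction
# dies in childhood and the rest die at a fixed adult age. The numbers
# are illustrative assumptions, not historical statistics.

def life_expectancy(child_mortality: float,
                    child_death_age: float = 2,
                    adult_death_age: float = 65) -> float:
    """Average age at death for the cohort."""
    return (child_mortality * child_death_age
            + (1 - child_mortality) * adult_death_age)

# With 40% of children dying young, the mean falls far below what any
# surviving adult could expect; cut child mortality to 1% and the average
# leaps, even though adult lifespans are unchanged.
print(round(life_expectancy(0.40), 1))
print(round(life_expectancy(0.01), 1))
```

This is why a jump in average life expectancy from 37 to 81 does not mean our ancestors all dropped dead at 40: the low average was dragged down by the many who died as children.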

This first set of improvements in human health was cheap, “easy”, and either comprised of general environmental conditions or administered at most annually- like the flu shot. These features allowed the first model of healthcare to be distributed broadly across the population, increasing longevity by saving the lives primarily of the young. In part these improvements, and above all the development of antibiotics, also allowed longevity increases at the older end of the scale, which, although less pronounced than the improvements in child mortality, are nonetheless very real. Gains at the older end are the domain of my second model of healthcare, which includes everything from open-heart surgery, to chemo and radiation treatments for cancer, to lifelong prescription drugs to treat chronic conditions.

As opposed to the first model, the second one is expensive, relatively difficult, and varies greatly among different segments of the population. My Amoxicillin and Larry Page’s Amoxicillin are the same, but the medical care we would receive to treat something like cancer would be radically different.

We are actually making greater strides in the battle against cancer than at any time since Nixon declared war on the scourge back in the 1970s. A new round of immunotherapy drugs is proving so successful against a host of different cancers that John LaMattina, former head of research and development for Pfizer, has stated that “We are heading towards a world where cancer will become a chronic disease in much the same way as we have seen with diabetes and HIV.”

The problem is the cost, which can run up to $150,000 per year. The new drugs are so expensive that the NHS has reduced the amount it is willing to spend on them by 30 percent. Here we are running up against the limits of the second model of healthcare, limits that at some point will force societies to choose between providing life-preserving care for all, or only for those rich enough to afford it.

If the superlongevity project is going to be a progressive project, it seems essential to me that it look like the first model of healthcare rather than the second. Otherwise it will either leave us with divergences in longevity, within and between societies, that make us long nostalgically for the “narrowness” of the current gap between today’s poorest and richest societies, or it will bankrupt countries that seek to extend increased longevity to everyone.

This would require a U-turn from the trajectory of healthcare today, which is dominated and distorted by the lucrative world of the second model. As an example of this distortion: the physicist Paul Davies is working on a new approach to cancer that involves attacking the disease with viruses. If successful, this would be a good example of model one. Using viruses (in a way the reverse of the new immunotherapies) to treat cancer would likely be much cheaper than current approaches involving radiation, chemotherapy, and surgery, because viruses can self-replicate after being engineered rather than needing to be expensively and painstakingly constructed in drug labs. The problem is that it’s extremely difficult for Davies to get funding for such research, precisely because there isn’t much money to be made in it.

In an interview about his research, Davies compared his plight to how drug companies treat aspirin. There’s good evidence that plain old aspirin might be an effective preventative against cancer. Sadly, it’s almost impossible to find funding for large-scale studies of aspirin’s efficacy in preventing cancer, because you can buy a bottle of the stuff for a little over a buck, and what multi-billion-dollar pharmaceutical company could justify profit margins as low as that?

The distortions of the second model are even more in evidence when it comes to antibiotics. Here is one of the few places where the second model of healthcare is dependent upon the first. As a chilling article by Maryn McKenna drives home, we are in danger of letting the second model lead to the nightmare of a sudden sharp reversal of the health and longevity gains of the last century.

We are only now waking up to the full danger implicit in antibiotic resistance. We’ve so overprescribed these miracle treatments, both to ourselves and to our poor farm animals, whom we treat as mere machines and “grow” in hellish, unsanitary conditions, that bacteria have evolved to no longer be treatable with the suite of antibiotics we have, which are now a generation old or older. If you don’t think this is a big deal, think about what it means to live in a world where a toothache can kill you and surgeries and chemotherapy can no longer be performed. A long winter of antibiotic resistance would mean that many of our dreams of superlongevity this century would be moot. It would mean many of us might die quite young from common illnesses, or from the surgical and treatment procedures that have combined to give us the longevity we have now.

Again, the reason we don’t have alternatives to legacy antibiotics is that pharmaceutical companies don’t see any profit in them as opposed to, say, Viagra. But the other part of the reason for this failure is just as interesting. It’s that we have overtreated ourselves because we find the discomfort of being even mildly sick for a few days unbearable. It’s also because we want nature, in this case our farm animals, to function like machines. Mechanical functioning means regularity, predictability, standardization, and efficiency, and we’ve had to so distort the living conditions, food, and even genetics of the animals we raise that they would not survive without our constant medical interventions, including antibiotics.

There is a great deal of financial incentive to build solutions to human medical problems around interminable treatments rather than once-and-done cures or interventions performed only periodically. Constant consumption and obsolescence guarantee revenue streams. Not too long ago, Danny Hillis, whom I otherwise have the deepest respect for, gave an interview on, among other things, proteomics, which, for my purposes here, essentially means the minute analysis of bodily processes with the purpose of intervening the moment things begin to go wrong- to catch diseases before they cause us to exhibit symptoms. An audience member asked a thought-provoking question which, when followed up by the interviewer Alexis Madrigal, seemed to leave the otherwise loquacious Hillis stumped. How do you draw the line between illness without symptoms and what the body just naturally does? The danger is that you might end up turning everyone, including the healthy, into “patients” and “profit centers”.

We already have a world where seemingly healthy people need to constantly monitor and medicate themselves just to stay alive, where the body seems to be in a state of almost constant, secret revolt. This is the world as diabetics often experience it, and it’s not a pretty one. What I wonder is whether, in a world in which everyone sees themselves as permanently sick- as in the process of dying- and in need of medical intervention to counter this sickness, we will still remember the joy of considering ourselves healthy. This is medicine becoming subsumed under our current model of consumption.

Everyone, it seems, has woken up to the fact that consumer electronics has the perfect consumption-sustaining model. If things quickly grow “old” to the point where they no longer work with everything else you own, or become so rare that one is unable to find replacement parts, then one is forced to upgrade merely to ensure that one’s stuff still works. Like the automotive industry, healthcare now seems to be embracing technological obsolescence as a road to greater profitability. Insurance companies seem poised to use devices like the Apple Watch to sort and monitor customers, but that is likely only the beginning.

Let me give you my nightmare scenario for a world of superlongevity. It’s a world largely bereft of children, where our relationship to our bodies has become something like the one we have with our smartphones: we are constantly faced with the obsolescence of the hardware- the chemicals, nano-machines, and genetically engineered organisms under our own skin- and are in near-continuous need of upgrades to keep us alive. It is a world where those too poor to be in the throes of this cycle of upgrades followed by obsolescence followed by further upgrades are considered a burden and disposable, in the same way August Weismann viewed the disabled in his day. It’s a world where the rich have brought capitalism into the body itself, an individual life preserved because it serves as a perpetual “profit center”.

The other path would be to pursue superlongevity along my first model of healthcare, focusing its efforts on understanding the genetic underpinnings of aging by looking at miracles such as the bowhead whale, which can live for two centuries and gets cancer no more often than we do even though it has trillions more cells than us. It would focus on interventions that are cheap, one-time or periodic, and could spread quickly through populations. This would be a progressive superlongevity. If successful, rather than bolster much of the system built around the second model of healthcare, it would bankrupt it, for it would represent a true cure for many of the diseases that ail us rather than a treatment.

Yet even superlongevity pursued with the demands of justice in mind confronts a moral dilemma that seems to lie at the heart of any superlongevity project. The morally problematic feature of superlongevity pursued along the second model of healthcare is that it risks giving long life only to the few. Troublingly, superlongevity pursued along the first model ends up in a similar place, robbing future generations of both human beings and other lifeforms of the possibility of existing. It is very difficult to see how, if a near-future generation gains the ability to live indefinitely, this new state could exist side-by-side with the birth of new people, or how a world full of “immortals” of the highly consuming type of creature we are would be compatible with the survival of the diversity of the natural world.

I see no real solution to this dilemma, though perhaps, as elsewhere, the limits of nature will provide one for us: we may discover some bound to the length of human life which is compatible with new people being given the opportunity to be born and experience the sheer joy and wonder of being alive, a bound that would also allow the other creatures with whom we share our planet to continue to experience these joys and wonders as well. Thankfully, there is probably some distance between current human lifespans and such a bound, and thus the most important thing we can do for now is try to ensure that research into superlongevity has the question of sustainable equity as its ethical lodestar.

 Image: Memento Mori, South Netherlands, c. 1500-1525, the Thomson collection

2014: The Death of the Human Rights Movement, or Its Rebirth?

Edwin Abbey Justice Harrisburg

For anyone interested in the issues of human rights, justice, or peace- and I assume that would include all of us- 2014 was a very bad year. It is hard to know where to start: with Eric Garner, the innocent man choked to death in New York City, whose police are supposed to protect citizens, not kill them, or with Ferguson, Missouri, where the lack of police restraint in using lethal force on African Americans burst into public consciousness, with seemingly little effect, as the chilling killing of a young boy wielding a pop gun occurred even in the midst of riots that were national news.

Only days ago, we had the release of the US Senate’s report on the torture of terrorist “suspects”, torture performed or enabled by Americans set into a state of terror and rage in the wake of 9/11. Perhaps the most depressing feature of the report is the defense of these methods by members of the right, even though there is no evidence that forms of torture ranging from “anal feeding” to threatening prisoners with rape gave us even one piece of usable information that could not have been gained without turning American doctors and psychologists into 21st-century versions of Dr. Mengele.

Yet the US wasn’t the only source of ill winds for human compassion, social justice, and peace. It was a year when China essentially ignored and rolled up democratic protests in Hong Kong, when Russia effectively partitioned Ukraine, and when anti-immigrant right-wing parties made gains across Europe. The Middle East proved especially bad: military secularists and the “deep state” reestablished control over Egypt, killing hundreds and arresting thousands, while the living hell that is the Syrian civil war spawned the horrific movement calling itself the Islamic State, whose calling card seemed to be to brutally decapitate, crucify, or stone its victims and post the videos on YouTube.

I think the best way to get a handle on all this is to zoom out and take a look from 10,000 feet, so to speak. Zooming out allows us to put all this in perspective in terms of space, but even more importantly, in terms of time, of history.

There is a sort of intellectual conceit among a certain subset of thoughtful, but not very politically active or astute, people who believe, as Kevin Kelly recently said, that “any twelve year old can tell you that world government is inevitable”. And indeed, given how many problems are now shared across all of the world’s societies, and how interdependent we have become, the opinion seems to make a great deal of sense. In addition to these people there are those, such as Steven Pinker in his fascinating, if far too long, Better Angels, who argue that even if world government is not in the cards, something like world sameness- convergence around a globally shared set of liberal norms, along with continued social progress- seems baked into the cake of modernity, so long as we can rid ourselves of what they consider atavisms, most especially religion, which they think has blinded societies to the wonders of modernity and locked them in a state of violence.

If we wish to understand current events, we need to grasp why these ideas- the greater and greater political integration of humanity, and projections regarding the decline of violence- seem as far away from us in 2014 as ever.

Maybe the New Atheists, among whom Pinker is a member, are right that the main source of violence in the world is religion. Yet it is quite obvious from the headlines listed above that religion unequivocally plays a role in only two of them- the Syrian civil war and the Islamic State- and the two are so closely related we should probably count them as one. US torture of Muslims was driven by nationalism, not religion, and police brutality towards African Americans is no doubt a consequence of a racism baked deep into the structure of American society. The Chinese government was not cracking down on religious but on civically motivated protesters in Hong Kong, and the two sides battling it out in Ukraine are both predominantly Orthodox Christian.

The argument that religion, even when viewed historically, hasn’t been the primary cause of human violence is one made by Karen Armstrong in her recent book Fields of Blood. Someone who hadn’t read the book- and Richard Dawkins is one critic who apparently hasn’t- might think it makes the case that religion is only violent as a proxy for conflicts that are at root political, but that really isn’t Armstrong’s point.

What she reminds those of us who live in secular societies is that before the modern era it isn’t possible to speak of religion as some distinct part of society at all. Religion’s purview was so broad it covered everything from the justification of political power, to the explanation of the cosmos to the regulation of marriage to the way society treated its poor.

Religion spread because the moral universalism it eventually developed sat so well with the universal aspirations of empire that the latter sanctioned and helped establish religion as the bedrock of imperial rule. Yet from the start, religion- whether Taoism and Confucianism in China, Hinduism and Buddhism in Asia, Islam in North Africa and the Middle East, or Christianity in Europe- was the way in which the exercise of power and the degree of oppression were criticized and countered. It was religion which challenged the brutality of state violence and motivated the care for the impoverished and disabled. Armstrong also reminds us that the majority of the world is still religious in this comprehensive sense, that secularism is less a higher stage of society than a unique method of approaching the world that emerged in Europe for particularistic reasons, and which was sometimes picked up elsewhere as a perceived necessity for technological modernization (as in Turkey and China).

Moving away from Armstrong: it was the secularizing West that invented the language of social and human rights, which built on the utopian aspirations of religion but shed the pessimism that a truly just world- without poverty, oppression, or war- would have to await the end of earthly history and the beginning of a transcendent era. We should build the perfect world in the here and now.

Yet the problem with human rights as they first appeared in the French Revolution was that they were intimately connected to imperialism. The French “Rights of Man” both made strong claims for universal human rights and were a way to undermine the legitimacy of European autocrats, serving the imperial interests of Napoleonic France. The response to the rights imperialism of the French was nationalism that both democratized politics, but tragically based its legitimacy on defending the rights of one group alone.

Over a century after Napoleon’s defeat both the US and the Soviet Union would claim the inheritance of French revolutionary universalism with the Soviets emphasizing their addressing of the problem of poverty and the inequalities of capitalism, and the US claiming the high ground of political freedom- it was here, as a critique of Soviet oppression, that the modern human rights movement as we would now recognize it emerged.

When the USSR fell in the 1990s, it seemed the world was heading towards the victory of the American version of rights universalism. As Francis Fukuyama would predict in his The End of History and the Last Man, the entire world was moving towards becoming liberal democracies like the US. It was not to be, and the reasons why both inform the present and give us a glimpse into the future of human rights.

The reason why the secular language of human rights has a good claim to be a universal moral language is not because religion is not a good way to pursue moral aims or because religion is focused on some transcendent “never-never-land” whereas secular human rights has its feet squarely placed in the scientifically supported real world. Rather, the secular character of human rights allows it to be universal because being devoid of religious claims it can be used as a bridge across groups adhering to different faiths, and even can include what is new under the sun- persons adhering to no religious tradition at all.

The problem human rights has had up until this moment is just how deeply it has been tied up with US imperial interests, which leads almost inevitably to those at the receiving end of US power crushing the manifestation of the human rights project in their societies. This is what China has just done in Hong Kong, and it is how Putin’s Russia has understood and responded to events in Ukraine- both seeing rights-based protests as Western attempts to weaken their countries.

Like the nationalism that grew out of French rights imperialism, Islamic jihadism became such a potent force in the Middle East partially as a response to Western domination, and we in the West have long been in the strange position that the groups within Middle Eastern societies that share many of our values, such as Egypt today, are also the forces of oppression within those societies.

What those who continue to wish that human rights can provide a global moral language can hope for is that, as the proverb goes, “there is no wind so ill that it does not blow some good”. The good here would be that, in exposing so clearly US inadequacy in living up to the standards of human rights, the global movement for these rights will at last become detached from American foreign policy. A human rights movement that was no longer seen as a clever strategy of the US and other Western powers might eventually be given more room to breathe in non-western countries and cultures, and over the very long haul bring the standards of justice in the entire world closer to the ideals of the now half-century-old UN Declaration of Human Rights.

The way this can be accomplished might also address the very valid Marxist critique of the human rights movement- that it deflects the idealistic youth, on whom the shape of future society always depends, away from the structural problems within their own societies, their efforts instead concentrated on the very real cruelties of dictators and fanatics on the other side of the world, and on the fate of countries where their efforts would have little effect unless they served the interests of their Western governments.

What 2014 reminded us is what Armstrong pointed out, that every major world religion has long known that every society is in some sense underwritten by structural violence and oppression. The efforts of human rights activists thus need to be ever vigilant in addressing the failure to live up to their ideals at home even as they forge bonds of solidarity and hold out a hand of support to those a world away, who, though they might not speak a common language regarding these rights, and often express this language in religious terms, are nevertheless on the same quest towards building a more just world.

 

Think Time is Speeding Up? Here’s How to Slow It!

seven stages in man's life

One of the weirder things about human beings’ perception of time is that our subjective clocks are so off. A day spent in our dreary cubicles can seem to crawl like an Amazonian sloth, while our weekends pass by as fast as a chameleon’s tongue. Most dreadful of all, once we pass into middle age, time seems to transform itself from a lumbering steam train heaving us through clearly delineated seasons and years into a Japanese bullet train unstoppably hurtling us towards death, with decades passing us by in a blur.

Wondering about time is a habit of the middle-aged, as sure a sign of having passed the clock-blind golden age of youth as the proverbial convertible or Harley. If my soon-to-be 93-year-old grandmother is any indication, the old, like the young, aren’t much taken aback by the speed of time’s passage. Instead, time seems to take on the viscosity of New England molasses, the days gently flowing down life’s drain.

Up until now, I didn’t think there was any empirical evidence to back up such colloquial observations, just the anecdotes passed around the holiday dinner table like turkey stuffing and cranberry sauce: “Can you believe it’s almost Christmas again?”, “Where did the year go?” Lucky for me, I now know what happened to time, or rather how I’ve been confuddled all this time into thinking something had happened to it. I know because I’ve read the psychologist and BBC science broadcaster Claudia Hammond’s excellent little book on the psychology of time: Time Warped: Unlocking the Mysteries of Time Perception.

If you’ve ever asked yourself why time seems to crawl when you’re watching the clock and want it to go faster, or why time appears to speed up in the face of an event you’re dreading like a speech, this is the book for you. But Hammond’s Time Warped goes much deeper than that, exposing us to the reality of what it would be like if some of our common dreams about controlling time actually came true- if we could indeed have “perfect memory” or, as everyone keeps reminding us to, “live in the present”. In addition to all that, the ambiguous relationship with time she reveals raises interesting questions for those hoping we wrestle from nature a great deal more of it.

Hammond doesn’t really discuss the physics of time, or more precisely, the fact that much of modern physics views time as an illusion akin to past imaginary entities like the ether or phlogiston. The fact that something so essential to our human self-understanding is considered by the bedrock of the sciences to be a brain-induced mirage has led to a rebellion by at least one prominent physicist, Lee Smolin, but he is almost a lone voice in the quest to restore time. Nor is Hammond all that interested in the philosophy of time, its history, or what time actually is. You won’t find here any detailed discussion of how to define time; it’s more like Supreme Court Justice Potter Stewart’s definition of pornography: “you know it when you see it.” Hammond is, though, on firm scientific ground discussing her main subject, the human perception of time, which, whatever its underlying reality or unreality, we find nearly impossible to live without.

Evolution might have kept things simple and given the human brain just one clock, a steady Big Ben of a thing to accurately mark the time. Instead, Hammond draws our attention to the fact that we seem to have multiple clocks within us, all running at once.

We seem to be extremely good at gauging the passage of seconds or minutes without counting. We also have a twenty-four-hour clock that runs with the same length but independently of the alternating light and darkness of our spinning earth, as Hammond shows was proven by Michel Siffre who, in the name of science and youthful stupidity (he was 23), braved two months in a dark cave meticulously recording his bodily rhythms. What Siffre proved is that, sun or no sun, our bodies follow twenty-four-hour cycles. The turning of the earth has bored its traces deep into us, which we fight against using the miracle of electric lights, and, if the popularity of sleeping pills is any indication, so often lose.

For some of us, there seems to be an inbuilt ability and need to see longer stretches of time spatially, in the form of ovals, circles, or zig-zags, rather than the linear timelines one sees in history books. One day, not long before I read Hammond’s book, I found myself scribbling while thinking about how far into the future my great-grandchildren would live, should my now small daughters and their children ever have children of their own.

For whatever reason, I didn’t draw out the decades as blocks of a line but like a set of steps. I thought nothing of it until I read Time Warped and saw that this was a common way for people to picture decades, though many do so in three dimensions, rather than my paltry two. Some people also associate days with color- a kind of synesthesia that isn’t just playful imagination, but is often stable across an individual’s life.

There is no real way to talk about how human beings experience time without discussing memory. What I found mind-blowing about Time Warped was just how many of what we consider the flaws of our memory end up being ambiguities we would be better off not having resolved.

Take the fact that our memories are so fallible and incomplete. One would think things would be so much better if our brains could record everything and have it available for playback on a sort of neuronal blu-ray. For certain situations like criminal trials this would solve a whole host of problems, but elsewhere we should watch what we wish for. As Hammond shows, there are people who can remember every piece of minutia, down to the socks they wore on a particular day decades earlier, but a moment’s reflection leads to the conclusion that such natural-born mnemonic prodigies fail to dominate creative fields, the sciences, or anything else, and such was the case long before we had Google to remember things for us.

There are people who believe that the path to ensuring they are not unraveled by the flow of time is to record and document everything about themselves and all of their experiences. Digital technology has doubtless made such a quest easier, but Hammond leads us to wonder whether or not the effort to record our every action and keystroke is quixotic. Who will actually take the time to look at all this stuff? How many times, she asks, have any of us sat down to watch our wedding video?

People obsessed with recording every detail of their lives are very likely motivated by the idea that it is their memories that make them who they are. Part of our deep fear of developing Alzheimer’s probably originates in this idea that the loss of our memories would constitute the loss of our self. Yet somehow the loss of memories (and the damage of Alzheimer’s runs much deeper than the loss of memories) does not seem to rob those who experience such losses of what others recognize as their long-standing personality.

Strangely, our not-too-reliable memories, when combined with our ability to mentally time-travel into the past, give rise, Hammond believes, to our ability to imagine futures. They allow us to mix and match different scenes from our memory to come up with whole new ones we anticipate will happen, or even ones that could never happen.

The idea that our imagination might owe its existence to our faulty memory put me in mind of the recent findings of Laurie Santos of the Comparative Cognition Laboratory at Yale. Santos has shown that human beings can be less, not more, rational than animals on certain tasks due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren’t thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure done by another human being even when able to work the puzzle out independently. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. It seems it is our flaws rather than our superiority that have so favored us above our animal kin.

What, though, of the big problem, the one we all face- the frightening speed through which we are running through our short lives? There is, it seems, some wisdom in the adage that the best way to approach time is in focusing on the present, even if you’re like me and watching another TED talk on the subject by Pico Iyer is enough to make you hurl. If the future is a realm of anxiety and the past a realm of regret, as long as one is not in pain, the present moment is a sort of refuge. Hammond believes that thinking about the future, even if we so often get it wrong by, for instance, thinking that our future self will have more time, money, or will-power, is the default mode of the brain.

Any meditative tradition worth its salt tries to free us from this future-obsessed mode and connect us more fully with the present moment of our existence, our breath, its rhythms, the people we care about. There are ways we can achieve this focus on the present without meditation, but they often involve contemplation of our own impending death, which is why soldiers amid the suffering of war and the terminally ill or the very old like my Nanna can often unhitch themselves from the train pulling our thinking off to the future.

Focusing on the present is one way to not only slow the pace of time, but to infuse the short time we have here with the meaning it deserves. Knowing that my small children will outgrow my silliness is the best way I have found to appreciate their laughter now.

Present focus does not, however, solve the central paradox of time for the middle-aged, namely, why it seems to move so much faster as we get older, for it is doubtful we were any more capable of savoring the moment as teenagers than as adults. Our commonsense explanation of time speeding up as we age typically has to do with proportionality, as in “a year for a five-year-old is 1/5 of their life, but for a forty-year-old it is merely 1/40.” Hammond shows this proportionality theory to be wrong on its face, for, if it were true, the days for a middle-aged person would be quite literally buzzing by in comparison to the days of their younger selves.

Only a moment’s reflection should show us that the proportionality theory of time’s seeming quickening as we age can’t be true. Think back to your school days, waiting impatiently for the 3:00 pm bell to ring: was it really much longer than the time you spend now stuck to your chair in some meaningless business meeting? There are some differences between the young and the old in gauging how much time has passed, yet these are nowhere near large enough to explain the differences in their subjective experiences of how fast time is passing. So if proportionality theory doesn’t explain the speeding up of time for the middle-aged, what does?

When thinking about duration, the thing we need to keep in mind, as the work of Daniel Kahneman has shown, is that we have not one but two “selves”: an experiencing self and a remembering self. Having two selves does a number on our ability to make decisions with our future in mind. The experiencing self wants us to eat the cookie now, because it’s the remembering self that will regret it later. It also skews our sense of the past.

Our sense of the duration of time is experienced differently by these two separate selves. Waiting in a long line feels like forever when you’re there, but unless something particularly interesting happened during your wait, the remembered experience feels like it happened in the blink of an eye. Yet a wonderful or frightening experience, like a first kiss or a car accident, though it seems to fly by while we’re in it, usually cuts its grooves deep enough into our memory that when we reflect upon it, it seems to have taken a very long time to unfold.

Hammond’s explanation for why youth seems stretched out in time compared to middle age rests on what she calls the “reminiscence bump” and the “holiday paradox”. Adolescence and young adulthood are filled with so many firsts that they leave a deep impression on our memory, and this “thickness” of memory leads our remembering self to conclude that time must have been going more slowly back in the heady days of our youth- the reminiscence bump. If you want to make your middle-aged days seem longer, you need to fill them up with exciting and new things, which is the reason, Hammond speculates, that holidays full of new experiences seem fast while we’re in them but stretched out on reflection- the holiday paradox. She wonders, however, whether the better option is just not to worry so much about time’s speed, and to rest when we need it rather than constantly chase after new memories.

Given the interest of the audience here in extending the human lifespan, I wonder what the implications of such discoveries about time might be for that project. A comedy could certainly be written in which we have doubled the length of human life and end up also doubling all those things we now find banal about time. Would human beings who lived well beyond their hundreds be subject to meetings that stretched out for days and weeks? Would traffic jams in which you spent a week in your car be normal?

Perhaps we might even want to focus on our ability to manipulate our sense of time’s duration as an easier path towards a sort of longevity. Imagine a world where love affairs could stretch out centuries and pain and boredom are reduced to a blink, or a future that has “time retreats” (like today’s religious retreats) where one goes away for a week that has been neurologically altered to feel like decades or longer. We might use the same sorts of time manipulation to punish people for heinous crimes, so that a 600-year sentence actually means something. One might object that such induced experiences of slow time aren’t real, but then again, neither are most versions of digital immortality, or even, as Hammond showed us, our subjective experience of time itself.

All of this talk of manipulating our sense of time as a road to longevity is just playful speculation on my part. What should be clear is that any move towards changing the human body so that it lives much longer than it does now is probably also going to have to grapple with and transform our psychological notions of time and the society we have built around our strange capacity to warp it.


The Human Age

Atlas and the Hesperides (1925), John Singer Sargent

There is no writer now, perhaps ever, able to convey the wonder and magic of science with poetry comparable to Diane Ackerman’s. In some ways this makes a great deal of sense, given that she is a poet by calling rather than a scientist. To mix metaphors: our knowledge of the natural world is merely Ackerman’s palette, whose colors she uses to paint a picture of nature. It is a vision of the world as magical as that of the greatest worlds of fiction- think of Dante’s Divine Comedy, or our most powerful realms of fable.

There is, however, one major difference between Ackerman and her fellow poets and storytellers: the strange world she breathes fire into is the true world, the real nature whose inhabitants and children we happen to be. The picture science has given us, the closest to the truth we have yet come, is in many ways stranger, more wondrous, more beautifully sublime, than anything human beings have been able to imagine; therefore, the perfect subject and home for a poet.

The task Ackerman sets herself in her latest book, The Human Age: The World Shaped by Us, is to reconcile us to the fact that we have now become the authors and artists, rather than the mere spectators, of nature’s play. We live in an era in which our effect upon the earth has become so profound that some geologists want to declare it the “Anthropocene”, signalling that our presence is now leaving its trace upon the geological record in the way only the mightiest, if slow moving, forces of nature have done heretofore. Yet in light of the speed with which we are changing the world, we are perhaps more like a sudden catastrophe than a languid geological transformation taking millions of years to unfold.

Ackerman’s hope is to find a way to come to terms with the scale of our own impact without at the same time reducing humanity to the mythical Höðr, bringing destruction on account of our blindness and foolishness, not to mention our greed. Ackerman loves humanity as much as she loves nature, or better, she refuses, as some environmentalists are prone to do, to place human beings on one side and the natural world on the other. For her, we are nature as much as anything else that exists. Everything we do is therefore “natural”; the question we should be asking is whether we are doing what is wise.

In The Human Age Ackerman attempts to reframe our current pessimism and self-loathing regarding our treatment of “mother” nature- everything from the sixth great extinction we are causing to our failure to adequately confront climate change- by giving us a window into the many incredible things we are doing right now that will benefit our fragile planet.

She brings our attention to new movements in environmentalism such as “Reconciliation Ecology”, which seeks to bring human settlements into harmony with the much broader needs of the rest of nature. Reconciliation Ecology is almost the opposite of another movement of growing popularity in “neo-environmentalist” circles, namely “Environmental Modernism”. Whereas Environmental Modernism seeks to sever our relationship with nature in order to save it, Reconciliation Ecology aims to naturalize the human artifice, bringing farming and even wilderness into the city itself. It seeks to heal the fragmented wilderness our civilization has brought about by bringing the needs of wildlife into the design process.

Rather than covering our highways with roadkill, we could provide animals with a safe way to cross the street. This might be of benefit to the deer, the groundhog, the raccoon, the porcupine. We might even construct tunnels for the humble meadow vole. Providing wildlife corridors, large and small, is one of the few ways we can reconcile the living space and migratory needs of “non-urbanized” wildlife with our fragmented and now global sprawl.

Human beings, Ackerman argues, are as negatively impacted by the disappearance of wilderness as any other of nature’s creatures, and perhaps, given our aesthetic sensitivity, even more so. For a long time now we have sought to use our engineering prowess to subdue nature. Why not use it to make the human-made environment more natural? She highlights a growing movement among architects and engineers to take their inspiration not from the legacy of our lifeless machines and buildings, but from nature itself, which so often manages to create works of beautiful efficiency. In this vein we find leaps of the imagination such as the Eastgate Centre designed by Mick Pearce, who took his inspiration from the breathing, naturally temperature controlled structure of the termite mound.

The idea of the Anthropocene is less an acknowledgement of our impact than a recognition of our responsibility. For so long we have warred against nature’s limits, arrows, and indifference that it is somewhat strange to find ourselves in the position of her caretaker and even designer. And make no mistake about it, for an increasing number of the plants and animals with whom we share this planet their fate will be decided by our action or inaction.

Some environmentalists would argue for nature’s “purity” and against our interference, even when this interference is done in the name of her creatures. Ackerman, though not entirely certain, is arguing against such environmental nihilism, paralysis, or puritanism. If it is necessary for us to “fix” nature- so be it, and she sees in our science and technology our greatest tool to come to nature’s aid. Such fixes can entail everything from permitting, rather than attempting to uproot, invasive species we have inadvertently or deliberately introduced, where those invasives have positive benefits for an ecosystem, to aiding the relocation of species as biomes shift under the impact of climate change, to reintroducing extinct species we have managed to save and resurrect through their DNA.

We are now entering the era in which we are able not only to mimic nature but to redesign it at the genetic level, creating chimeras that nature, left to itself, would find difficult or impossible to replicate. Ackerman is generally comfortable with our ever more cyborg nature and revels in the science that allows us to literally print body parts and, one day, whole organs. Like the rest of us should be, she is flabbergasted by the ongoing revolutions in biology that are rewriting what it means to be human.

The early 21st century field of epigenetics is giving us a much more nuanced and complex view of the interplay between genes and the environment. It is not only that multiple genes need to be taken into account in explaining conditions and behaviors, but that genes are in a sense tuned by the environment itself. Indeed, much of this turning “on or off” of genes is a form of genetic memory. In a strange echo of Lamarck, the experiences of one’s close ancestors- their feasts and famines- are encoded in the genes of their descendants.

Add to that our recent discoveries regarding the microbiome- the collection of bacteria living within us that are far more numerous than, and in many ways as influential as, our genes- and one gets an idea of how far we are moving from the ideas of what it meant to be human held by scientists even a few years ago, and how much distance has been put between current biology and simplistic versions of environmental or genetic determinism.

Some, such as the philosopher of biology John Dupré, see in our advancing understanding of the role of the microbiome and epigenetics a revolutionary change in human self-understanding. For a generation we chased after a simplistic idea of genetic determinism in which genes were seen as a sort of “blueprint” for the organism. This is now known to be false. Genes are just one part of a permeable interaction among genes, the environment, and the microbiome that guides individual development and behavior.

We are all of us collectives, constantly swapping biological information, and rather than seeing the microscopic world as a kind of sideshow to the “real” biological story of large organisms such as ourselves, we might be said to be where we have always been: in Stephen Jay Gould’s “Age of Bacteria” as much as in an Anthropocene.

Returning to Ackerman: she is amazed at recent advancements in artificial intelligence, and like Tyler Cowen, even wonders whether scientific discoveries will soon no longer be the prerogative of humans, but of our increasingly intelligent machines. Such is the conclusion one might draw from looking at the work of Hod Lipson of Cornell and his “Eureqa Machine”. Feed the Eureqa Machine observations or data and it is able to come up with equations that describe them all on its own. Ackerman does, however, doubt whether we could ever build a machine that replicated human beings in all their wondrous weirdness.
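As a playful aside, the core idea behind machines like Eureqa can be sketched in a few lines of Python. This is emphatically not Lipson’s actual method (Eureqa uses evolutionary search over symbolic expressions); it is a toy version in which a fixed library of candidate formulas is scored against observations and the best-fitting one wins:

```python
import math

# Toy "equation discovery": given (x, y) observations, search a small
# library of candidate formulas and keep the one with the lowest
# squared error. A drastic simplification of Eureqa's evolutionary
# search, but it captures the flavor of letting the data pick the law.
candidates = {
    "y = 2x + 1":   lambda x: 2 * x + 1,
    "y = x^2":      lambda x: x ** 2,
    "y = sin(x)":   lambda x: math.sin(x),
    "y = 3x^2 - x": lambda x: 3 * x ** 2 - x,
}

def discover(observations):
    """Return the name of the candidate formula minimizing squared error."""
    def error(f):
        return sum((f(x) - y) ** 2 for x, y in observations)
    return min(candidates, key=lambda name: error(candidates[name]))

# Observations secretly generated from y = 3x^2 - x
data = [(x, 3 * x ** 2 - x) for x in range(-5, 6)]
print(discover(data))  # → y = 3x^2 - x
```

The real system, of course, has to invent the candidate expressions rather than pick from a menu, which is where the hard (and impressive) work lies.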

Where her doubts regarding technology veer towards warning is on the question of the digitalization of the world. Ackerman is best known for her 1990 A Natural History of the Senses, a work which explored the five forms of human perception. Little wonder, then, that she would worry about the world being flattened, on our ubiquitous screens, into the visual sense alone. She laments:

What if, through novelty and convenience, digital nature replaces biological nature?

The further we distance ourselves from the spell of the present, explored by all our senses, the harder it will be to understand and protect nature’s precarious balance, let alone the balance of our own human nature. I worry about virtual blinders. Hobble all the senses except the visual, and you produce curiously deprived voyeurs.  (196-197)

While concerned that current technologies may be flattening human experience by leaving us visually mesmerized behind screens at the expense of the body, even as they broaden their scope, allowing us to see into worlds small and large at speeds never before possible, Ackerman accepts our cyborg nature. For her we are, to steal a phrase from the philosopher Andy Clark, “natural born cyborgs”, and this is not a new thing. Since our beginning we have been changelings, able to morph into a furred animal in the cold with our clothes, wield fire like a mythical dragon, or cover ourselves with shells of metal, among much else.

Ackerman is not alone in the view that our cyborg tendencies are an ancient legacy. Shortly before I read The Human Age I finished the anthropologist Timothy Taylor’s short and thought-provoking The Artificial Ape: How Technology Changed the Course of Human Evolution. Taylor makes a strong case that anatomically modern humans co-evolved with technology- indeed, that the technology came first and in a way has been the primary driver of our evolution.

The technology Taylor thinks lifted us beyond our hominid cousins wasn’t one of the usual suspects, fire or stone tools, but likely the unsung baby sling. This simple invention allowed humans to overcome a constraint suffered by the other hominids, whose brains, contrary to common opinion, were getting smaller because upright walking was putting a premium on quick rather than delayed development of the young. Slings allowed mothers to carry big-brained babies that took longer to develop but, because of their long period of youth, could learn much more than faster-developing relatives. In effect, slings were a way for human mothers, who needed their hands free, to externalize the womb and evolve a koala-like pouch.

To return to The Human Age: although Ackerman has, as always, given us a wonderfully written book filled with poetry and insights, it is not without its flaws. Here I will focus only on what I found to be the most important one; namely, that for a book named after our age, there is not enough regarding humans in it. This is what I mean: though the problems suffered from the effects of the Anthropocene are profound and serious, the animal most likely to suffer broadly from phenomena such as climate change is us.

The weakness of the idea of the Anthropocene, when judged largely positively, which is what Ackerman is trying to do, is that it universalizes a state of control over nature that is largely limited to advanced countries and the world’s wealthy. The threat of rising sea levels looks quite different from the perspective of Manhattan than from Chittagong. A good deal of the environmental gains in advanced countries over the past half century can be credited to globalization, which, amid its many benefits, has also entailed the offloading of pollution, garbage, and waste processing from developed to developing countries. This is the story that the photos of Edward Burtynsky, whom Ackerman profiles, tell: stories such as the lives of the shipbreakers of Bangladesh, whose world resembles something out of a dystopian novel.

Humanity is not sovereign over nature everywhere, and some of us are faced not only with a wildness that has not been subdued, but with a humanity that has itself become like an unpredictable natural force, raining down unfathomable, uncontrollable good and ill. Such is the world now being experienced by the West African societies suffering under the epidemic of Ebola. It is a world we might better understand by looking at a novel written on the eve of our mastery over nature, a novel by another amazing writer who was also a woman.

The Future As History


It is a risky business trying to predict the future, and although it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what the point is of all this prophecy that stretches out beyond the decades one is expected to live. The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least inspire Noah-like preparations for disaster. Those who imagine a dark future are trying to scare the bejesus out of us so we do what is necessary not to end up in a world gone black, swept away by the flood waters. The problem is that extreme fear more often leads to paralysis than to reform or ark building- something that God, had he been a behavioral psychologist, would have known.

Those with a Pollyannaish story about tomorrow, on the other hand, are usually trying to convince us to buy into some set of current trends, and for that reason optimists often end up being the last thing they think they are: a support for conservative politics. Why change what’s going well or destined, in the long run, to end well? The problem here is that, as Keynes said, “in the long run we are all dead”, which should be an indication that if we see a problem out in front of us we should address it, rather than rest on faith and let some telos of history sort the whole thing out.

It’s hard to ride the thin line between optimism and pessimism regarding the future while still providing a view of it that is realistic, compelling, and encourages us towards action in the present. Science fiction, where it avoids the pull towards utopia or dystopia, and regardless of its flaws, does manage to present versions of the future that are gripping, and a thousand times better than dry futurists’ “reports” that go down like sawdust, but the genre suffers from having too many balls in the air.

There is not only the common complaint that, as with political novels, the human aspects of a story suffer from being tied too tightly to a social “purpose”- in this case, to offer plausible predictions of the future- but also the problem that the idea of crafting a “plausible” future can itself serve as an anchor on the imagination. An author of fiction should be free to sail into any world that comes into his head- plausible destinations be damned.

Adrian Hon’s recent A History of the Future in 100 Objects overcomes this problem of using science fiction to craft plausible versions of the future by jettisoning fictional narrative and presenting the future in the form of a work of history. Hon was inspired to take this approach in part by an actual recent work of history- Neil MacGregor’s A History of the World in 100 Objects. In the same way that objects from the human past can reveal deep insights not just into the particular cultures that made them but into the trajectory of the whole of humankind so far, 100 imagined “objects” from the century we have yet to see play out allow Hon to reveal the “culture” of the near future, which, when all is said and done, amounts to interrogating the path we are currently on.

Hon is perhaps uniquely positioned to give us a feel for where we are currently headed. Trained as a neuroscientist, he is able to see what the ongoing revolutionary breakthroughs in neuroscience might mean for society. He also has his fingers on the pulse of the increasingly important world of online gaming as the CEO of the company Six-to-Start, which develops interactive real world games such as Zombies, Run!

In what follows I’ll look at nine of Hon’s objects of the future that I found the most intriguing. Here we go:

#8 Locked Simulation Interrogation – 2019

There’s a lot of discussion these days about the revival of virtual reality, especially with the quite revolutionary new Oculus Rift VR headset. We’ve also seen a surge in brain scanning that purports to see inside the human mind, revealing everything from when a person is lying to whether or not they are prone to mystical experiences. Hon imagines these technologies being combined, just a few years out, to form a brand new and disturbing form of interrogation.

In 2019, after a series of terrorist attacks in Charlotte, North Carolina, the FBI starts using so-called “locked-sims” to interrogate terrorist suspects. A suspect is run through a simulation in which his neurological responses are closely monitored in the hope that they might do things such as identify other suspects or unravel future plots.

The technique of locked-sims appears so successful that it soon becomes the rage in other areas of law enforcement involving much less existential public risks. Imagine murder suspects or even petty criminals run through a simulated version of the crime- their every internal and external reaction minutely monitored.

Whatever their promise, locked-sims prove full of errors and abuses, not the least of which is their tendency to leave the innocent people interrogated in them emotionally scarred. Ancient protections end up saving us from a nightmare technology: in 2033 the US Supreme Court deems locked-sims a form of “cruel and unusual punishment” and therefore constitutionally prohibited.

#20 Cross Ball – 2026

A good deal of A History of the Future deals with the way we might relate to advances in artificial intelligence, and one thing Hon tries to make clear is that, in this century at least, human beings won’t suddenly just exit the stage to make room for AI. For a good while the world will be hybrid.

“Cross Ball” is an imagined game that’s a little like ōllamaliztli, the ancient Mesoamerican ball game, only in Cross Ball human beings work in conjunction with bots. Hon sees a lot of AI combined with human teams in the future world of work, but in sports the reason for the amalgam has more to do with human psychology:

Bots on their own were boring; humans on their own were old-fashioned. But bots and humans together? That was something new.

This would be new for real world games, but we already have it in “freestyle chess”, where old-fashioned humans can no longer beat machines, and no one seems to want to watch matches between chess playing programs, so the games with the most interest have been those matching human beings working with programs against other humans working with programs. In the real world bot/human games of the future, I hope they have good helmets.

#23 Desir – 2026

Another area where I thought Hon was really onto something was puppets. Seriously. AI is indeed getting better all the time, even if Siri or customer service bots can be so frustrating, but it’s likely some time before bots show anything like the full panoply of human interaction imagined in the film Her. But there’s a mid-point here, and that’s having human beings remotely control the bots- to be their puppeteers.

Hon imagines this in the realm of prostitution. A company called Desir essentially offers very sophisticated sex dolls as puppets controlled by experienced prostitutes. The weaknesses of AI give human beings something to do. As he quotes Desir’s imaginary founder:

Our agent AI is pretty good as it is, but like I said, there’s nothing that beats the intimate connection that only a real human can make. Our members are experts and they know what to say, how to move and how to act better than our own AI agents, so I think that any members who choose to get involved in puppeting will supplement their income pretty nicely

#26 Amplified Teams – 2027

One thing I really liked about A History of the Future is that it puts flesh on the bones of an idea developed by the economist Tyler Cowen in his book Average is Over (review pending): that employment in the 21st century won’t eventually all be swallowed up by robots, but that the highest earners, or even just those able to economically sustain themselves, will work in teams connected to the increasing capacity of AI. Such are Hon’s “amplified teams”, which, he writes:

…usually have three to seven human members supported by highly customized software that allows them to communicate with one another- and with AI support systems- at an accelerated rate.

I’m crossing my fingers that somebody invents a bot for introverts- or is that a contradiction?

#39 Micromort Detector – 2032

Hon foresees our aging population becoming increasingly consumed with mortality and almost obsessive-compulsive about measurement as a means of combating anxiety. Hence his idea of the “micromort detector”.

A micromort is a unit of risk representing a one-in-a-million chance of death.

Mutual Assurance is a company that tries to springboard off this anxiety with its product “Lifeline”, a device for measuring the mortality risk of any behavior, the hope being both to improve healthy living and, more importantly for the company, to accurately assess insurance premiums. Drink a cup of coffee- get a score; eat a doughnut- get a score.
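The arithmetic behind such a device is simple enough to sketch. A minimal illustration in Python, with the caveat that the activity risk values below are invented for the example (a real Lifeline would need actuarial data, and, as Hon notes, would still founder on individual variation):

```python
# A micromort is a one-in-a-million (1e-6) chance of death, so converting
# a probability of death into micromorts is just a change of units.
# The per-activity probabilities below are hypothetical, for illustration.
MICROMORT = 1e-6

def to_micromorts(probability_of_death):
    """Express a probability of death in micromorts."""
    return probability_of_death / MICROMORT

daily_log = {
    "morning commute by car": 0.5e-6,   # hypothetical risk values
    "skydiving lesson":       8e-6,
    "doughnut":               0.05e-6,
}

total = sum(to_micromorts(p) for p in daily_log.values())
print(f"Today's score: {total:.2f} micromorts")
```

The unit’s appeal is exactly this additivity: tiny risks that are meaningless as raw probabilities become countable, comparable scores.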

The problem with the Lifeline was that it wasn’t particularly accurate, owing to individual variation, and the idea that the road to everything was paved in the 1s and 0s of data became passé. The Lifeline did, however, sometimes cause people to pause and reflect on their own mortality:

And that’s perhaps the most useful thing that the Lifeline did. Those trying to guide their behavior were frequently stymied, but that very effort often prompted a fleeting understanding of mortality and caused more subtle, longer-lasting changes in outlook. It wasn’t a magical device that made people wiser- it was a memento mori.

#56 Shanghai Six – 2036

As part of the gaming world, Hon has some really fascinating speculations on the future of entertainment. With Shanghai Six he imagines a mashup of alternate reality games such as his own Zombies, Run! and something like the massive role playing found in events such as historical reenactments, combined with aspects of reality television, all rolled up into the drama of film. Shanghai Six is a 10,000 person global drama with actors drawn from the real world. I’d hate to be the film crew’s gofer.

#63 Javelin – 2040

A History of the Future also has some rather interesting things to say about the future of human enhancement. The transition begins with the Paralympians, who by the 2020s are able to outperform typical human athletes by a large measure.

The shift began in 2020, when the International Paralympic Committee (IPC) staged a technology demonstration….

The demonstration was a huge success. People had never before seen such a direct combination of technology and raw human will power outside of war, and the sponsors were delighted at the viewing figures. The interest, of course, lay in marketing their expensive medical and lifestyle devices to the all- important Gen-X and Millennial markets, who were beginning to worry about their mobility and independence as they grew older.

There is something of the Daytona 500 about this: sports becoming as much about how good the technology is as about the excellence of the athlete. And all sports do indeed seem to be headed this way. The barrier now is that technological and pharmaceutical assists for the athlete are seen not as a way to take human performance to its limits, but as a form of cheating. Yet once such technologies become commonplace, Hon imagines, it is unlikely that such distinctions will prove sustainable:

By the 40s and 50s, public attitudes towards mimic scripts, lenses, augments and neural laces had relaxed, and the notion that using these things would somehow constitute ‘cheating’ seemed outrageous. Baseline non-augmented humans were becoming the minority; the Paralympians were more representative of the real world, a world in which everyone was becoming enhanced in some small or large way.

It was a far cry from the Olympics. But then again, the enhanced were a far cry from the original humans.

#70 The Fourth Great Awakening – 2044

Hon works with something like Nassim Taleb’s idea that one of the best ways of catching the shadow of the future isn’t to have a handle on what will be new, but rather a good idea of what will still be around. The best indication we have that something will exist in the future is how long it has existed in the past: long life proves evolutionary robustness under a variety of circumstances. Families have been around since our beginnings and will therefore likely exist for a long time to come.

Things that exist for a long time aren’t unchanging but flexible in a way that allows them to find expression in new forms once the old ways of doing things cease working.

Hon sees our long-lived desire for communal eating surviving in his #25 The Halls (2027), where people gather and mix together in collectively shared kitchen/dining establishments.

Halls speak to our strong need for social interaction, and for the ages-old idea that people will always need to eat- and they’ll enjoy doing it together.

He likewise sees reading surviving, in a world even more saturated with media and distraction, in dedicated seclusionary Reading Rooms (#34, 2030). He also sees the survival of one of the oldest of human institutions, religion, only religion will have become much more centered on worldliness and will leverage advances in neuroscience to foster, depending on your perspective, either virtue or brainwashing. Thus we have Hon’s imagined Fourth Great Awakening and the Christian Consummation Movement.

If I use the eyedrops, take the pills, and enroll in their induction course of targeted viruses and magstim- which I can assure you I am not about to do- then over the next few months, my personality and desires would gradually be transformed. My aggressive tendencies would be lowered. I’d readily form strong, trusting friendships with the people I met during this imprinting period- Consummators, usually. I would become generally more empathetic, more generous and “less desiring of fleeting, individual and mundane pleasures” according to the CCM.

It is social conditions that Hon sees driving the creation of something like the CCM, namely mass unemployment caused by globalization and especially automation. The idea, again, is very similar to Tyler Cowen’s in Average is Over, but whereas Cowen sees in the rise of neo-Victorianism a lifeboat for a middle class savaged by automation, Hon sees the similar CCM as a way human beings might try to reestablish the meaning they can no longer derive from work.

Hon’s imagined CCM combines some very old and very new technologies:

The CCM understood how Christianity itself spread during the Apostolic Age through hundreds of small gatherings, and accelerated that process by multiple orders of magnitude with the help of network technologies.

And all of that combined with the most advanced neuroscience.

#72 The Downvoted – 2045

Augmented reality devices such as Google Glass should let us see the world in new ways, but just as important might be what they allow us not to see. From this Hon derives his idea of “downvoting”: essentially, the choice to redact from reality individuals the group has deemed worthless.

“They don’t see you,” he used to say. “You are completely invisible. I don’t know if it was better or worse before these awful glasses, when people just pretended you didn’t exist. Now I am told that there are people who literally put you out of their sight, so that I become this muddy black shadow drifting along the pavement. And you know what? People will still downvote a black shadow!”

I’ll leave you off at Hon’s world circa 2045, but he has a lot else to say about everything from democracy to space colonies to the post-21st-century future of AI. Somehow Hon’s patchwork of imagined artifacts allows him to sew together a quilt of the century before us in a very clear pattern. What is that pattern?

That out in front of us the implications of continued miniaturization, networking, algorithmization, AI, and advances in neuroscience and human enhancement will continue to play themselves out. This has bright sides and dark sides, and one of the darker is that opportunities for gainful human employment will become more rare.

Trained as a neuroscientist, Hon sees both dangers and opportunities as advances in neuroscience make the human brain, once firmly secured in the black box of the skull, permeable. Here there will be opportunities for abuse by the state or groups with nefarious intents, but there will also be opportunities for enriched human cooperation and even art.

All fascinating stuff, but it was what he had to say about the future of entertainment and the arts that I found most intriguing. As the CEO of Six-to-Start he has his finger on the pulse of entertainment in a way I do not. In the near future, Hon sees a blurring of the lines between gaming, role playing, film, and television, and extraordinary changes in the ways we watch and play sports.

As for the arts: here where I live in Pennsylvania we are constantly bombarded with messages that our children need to be trained in STEM (science, technology, engineering, and mathematics). This is often to the detriment of programs in “useless” liberal arts such as history, and most of all art programs, whose budgets have been consistently whittled away. Hon showed me a future in which artists and actors, or more precisely people who have had exposure through schooling to the arts, may be among the few groups that can avoid, at least for a time, the onset of AI-driven automation. Puppeteering of various sorts would seem a likely transitional phase between “dead” humanoid robots and fully human-like AI. This isn’t just a matter of the lurid future of prostitution, but of remote nursing, health care, and psychotherapy. Engineers and scientists will bring us the tools of the future, but it’s those with humanistic “soft skills” who will be needed to keep that future livable, humane, and interesting.

We see this in another of A History of the Future’s underlying perspectives: that a lot of the struggle of the future will be about keeping it a place human beings are actually happy to live in, and that much of doing this will rely on tools of the past, or on finding protective bubbles through which the things we now treasure can survive in the new reality we are building. Hence Hon’s dining halls and reading rooms, and, even more generally, his view that people will continue to search for meaning, sometimes turning to one of our most ancient technologies- religion- to do so.

Yet perhaps what Hon has most given us in A History of the Future is less a prediction than a kind of game we too can play, one that helps us see the outlines of the future- after all, game design is his thing. Perhaps I’ll try to play the game myself sometime soon…