There are two paths to superlongevity: only one of them is good

Memento Mori Ivories

Looked at in the longer historical perspective, we have already achieved something our ancestors would consider superlongevity. In the UK life expectancy at birth averaged around 37 in 1700. It is roughly 81 today. The extent to which this is a reflection of decreased child mortality versus an increase in the survival rate of the elderly I'll get to a little later, but for now, just try to get your head around the fact that we have managed to more than double the life expectancy of human beings in roughly three centuries.

By themselves the gains we have made in longevity are pretty incredible, but we have also managed to redefine what it means to be old. A person in 1830 was old at forty not just because of averages, but because of the condition of his body. A revealing game to play is to find pictures of adults from the 19th century and try to guess their ages. My bet is that you, like myself, will consistently estimate the people in these photos to be older than they actually were when the picture was taken. This isn't a reflection of their lack of Botox and Photoshop so much as the fact that they were missing the miracle of modern dentistry, and were felled, or at least weathered, by diseases which we now consider mere nuisances. If I were my current age in 1830 I would be missing most of my teeth, and the pneumonia I caught a few years back would surely have killed me, pneumonia having been a major cause of death in the age of Darwin and Dickens.

Sixty or even seventy year olds today are probably in the state of health a forty year old was in the 19th century. In other words, we've increased the healthspan, not just the lifespan. Sixty really is the new forty, though what is important is how you define "new". Yet get past eighty in the early 21st century and you're almost right back in the world where our ancestors lived. Experiencing the debilitations of old age is the fate of those of us lucky enough to survive through the pleasures of youth and middle age. The disability of the old is part of the tragic aspect of life, and as always when it comes to giving poetic shape to our comic/tragic existence, the Greeks got to the essence of old age with their myth of Tithonus.

Tithonus was a youth who had the ill fortune of inspiring the love of the goddess of the dawn, Eos. (Love affairs between gods and mortals never end well.) Eos asked Zeus to grant the youth immortality, which he did, but, of course, not in the way Eos intended. Tithonus would never die, but he would also continue to age, becoming not merely old and decrepit, but eventually shriveling away into a grasshopper hugging a room's corner. It is best not to ask the gods for anything.

Despite our successes, those of us lucky enough to live into our seventh and eighth decades still end up like poor old Tithonus. The deep lesson of the ancient myth still holds- longevity is not worth as much as we might hope if it is not also combined with the health of youth, and despite all of our advances, we are essentially still in Tithonus' world.

Yet perhaps not for long. At least if one believes the story told by Jonathan Weiner in his excellent book Long for this World. I learned much about our quest for long life and eternal youth from Long for this World, both its religious and cultural history, and the trajectory and state of its science. I never knew that Jewish folklore had a magical city called Luz where the death unleashed in Eden was prevented from entering, and that it existed until all its inhabitants became so bored that they walked out from its walls and were struck down by the Angel of Death waiting eagerly outside.

I did not know that Descartes, who had helped unleash the scientific revolution, thought that gains in knowledge were growing so fast that he would live to be 1,000. (He died in 1650 at 53.) I did not realize that two other visionaries of science, Roger and Francis Bacon (no relation), thought that science would restore us to the knowledge we had before the fall (prelapsarian knowledge), which would allow us to live forever, or the depth to which very different Chinese traditions had no guilt at all about human immortality and pursued the goal with all sorts of elixirs and practices, none of which, of course, worked. I was especially taken with the story of how Pennsylvania's most famous son- Benjamin Franklin- wanted to be "pickled" and awoken a century later.

Reviewing the past, when even ancient Egyptian hieroglyphs offer up recipes for "guaranteed to work" wrinkle creams, shows us just how deeply human the longing for agelessness is. It wasn't invented by Madison Avenue or Dr. Oz, and the ancients' attempts to find a fountain of youth seem no less silly than many of our own. The question, I suppose, is the one that most risks the accusation that one is a fool: "Is this time truly different?" Are we, out of all the generations that have come before us believing they had found the route to human "immortality" (and every generation since the rise of modern science has had those who thought so), actually the ones who will achieve this dream?

Long for this World is at its heart a serious attempt to grapple with this question and tries to give us a clear picture of longevity science built around the theoretical biologist, Aubrey de Grey, who will either go down in history as a courageous prophet of a new era of superlongevity, or as just another figure in our long history of thinking biological immortality is at our fingertips when all we are seeing is a mirage.

One thing we have on our ancestors who chased this dream is that we know much, much more about the biology of aging. Darwinian evolution allowed us to conceive non-poetic theories of the origins of death. In the 1880s the German biologist August Weismann, in his essay "Upon the Eternal Duration of Life", provided a kind of survival-of-the-fittest argument for death and aging. Even an ageless creature, Weismann argued, would over time have to absorb multiple shocks and would eventually end up disabled. The longer something lives, the more crippled and worn out it becomes. Thus, it is in the interest of the species that death exists to clear the world of these disabled- very damned German, the whole thing.

Just after World War II the biologist Peter Medawar challenged Weismann's view. For Medawar, if you look at any species, selective pressures are really only operating when the organism is young. Those who can survive long enough to breed are the only ones that really count when it comes to natural selection. Like versions of James Dean or Marilyn Monroe, nature is just fine if we exit the world in the bloom of youth- as long, that is, as we have passed on our genes.

In other words, healthful longevity has not really been something that natural selection has been selecting most organisms for, and because of this it hasn't been selecting against bad things that can happen to old organisms either, as we're finding when, by saving people from heart attacks in their 50s, we destine them to die of diseases that were rare or unknown in the past, like Alzheimer's. In a sense we're the victims of natural selection not caring about the health of those past reproductive age or their longevity.

Well, this is only partly true. Organisms that live in conditions where survival in youth is more secure end up with stretched longevity for their size. Some bats can live decades when similarly sized mice have a lifespan of only a couple of years. Tortoises can live for well over a century while alligators of the same weight live 30 to 50 years.

Stretching healthful longevity is also something that occurs when you starve an animal. We've known for decades that lifespan (in other animals at least) can be increased through caloric restriction. Although the mechanism is unclear, the Darwinian logic is not. Under conditions of starvation it's a bad idea to breed, and the body seems to respond by slowing development, waiting for the return of food and a good time to mate.

Thus, there is no such thing as a death clock: lifespan is malleable and can be changed if we just learn how to work the dials. We should have known this from our historical experience over the last two hundred years, in which we doubled the human lifespan, but now we know that nature itself does it all the time- and not, as we do, by addressing the symptoms of aging, but by resetting the clock of life itself.

We might find it easy to reset our aging clock ourselves if there weren't multiple factors that play a role in its ticking. Aubrey de Grey has identified seven- the most important of which (excluding cancerous mutations) are probably the accumulation of "junk" within cells and the development of harmful "cross links" between cells. The strange thing about these is that they are not something that suddenly appears when we are actually "old" but are there all along, only reaching levels where they become noticeable and start to cause problems after many decades. We start dying the day we are born.

As we learn in Long for This World, there is hope that someday we may be able to effectively intervene against all these causes of aging. Every year the science needed to do so advances. Yet as Aubrey de Grey has indicated, the greatest threat to this quest for biological immortality is something we are all too familiar with – cancer.

The possibility of developing cancer emerges from the very way our cells work. Over a lifetime our trillions of cells replicate themselves an even more mind-bogglingly high number of times. It is almost impossible that every copying error will be caught before it takes on a life of its own and becomes a cancerous growth. Increasing lifespan only increases the amount of time in which such copying errors can occur.

It's in Aubrey de Grey's solution to this last and most serious of superlongevity's medical hurdles that Weiner's faith in the sense of that project breaks down, as does mine. De Grey's cure for cancer goes by the name of WILT- whole-body interdiction of the lengthening of telomeres. A great many of the cancers that afflict human beings achieve their deadly replication without limit by taking control of the telomerase gene. De Grey's solution is to strip every human cell of its telomerase gene, something that, even if successful in preventing cancerous growths, would also leave us unable to replenish our red and white blood cells. To allow us to keep producing these cells, de Grey proposes regular infusions of stem cells. What this would leave us with is a life of constant, chemotherapy-like, invasive medical interventions just to keep us alive. In other words, a life in which even healthy people relate to their bodies through, and are kept alive by, medical interventions that are now only experienced by the terminally ill.

I think what shocks Weiner about this last step in SENS is that it underscores just how radical the medical requirements of engineering superlongevity might become. It's one thing to talk about strengthening the cell's junk collector, the lysosome, by adding an enzyme or through some genetic tweak; it's another to talk about removing the very cells and structures which define human biology- blood cells and platelets- which have always been essential for human life and health.

Yet WILT struck me with somewhat different issues and questions. Here's how I have come to understand it. For simplicity's sake, we might be said to have two models of healthcare, both of which have contributed to the gains we have seen in human health and longevity since 1800. As is often noted, a good deal of this gain in longevity was a consequence of reducing childhood mortality. Having fewer and fewer people die at the age of five drastically improves the average lifespan. We made these gains largely through public health: things like drastically improved sanitation, potable water, vaccinations, and, in the 20th century, antibiotics.

This set of improvements in human health was cheap, "easy", and either comprised of general environmental conditions or administered at most annually- like the flu shot. These features allowed this first model of healthcare to be distributed broadly across the population, leading to increased longevity by saving the lives primarily of the young. In part these improvements, and above all the development of antibiotics, also allowed longevity increases at the older end of the scale, which, although less pronounced than the improvements in child mortality, are nonetheless very real. This is where my second model of healthcare comes in, and it includes everything from open heart surgery, to chemo and radiation treatments for cancer, to lifelong prescription drugs to treat chronic conditions.

As opposed to the first model, the second one is expensive, relatively difficult, and varies greatly among different segments of the population. My Amoxicillin and Larry Page’s Amoxicillin are the same, but the medical care we would receive to treat something like cancer would be radically different.

We actually are making greater strides in the battle against cancer than at any time since Nixon declared war on the scourge way back in the 1970s. A new round of immunotherapy drugs is proving so successful against a host of different cancers that John LaMattina, former head of research and development for Pfizer, has stated that "We are heading towards a world where cancer will become a chronic disease in much the same way as we have seen with diabetes and HIV."

The problem is the cost, which can run up to $150,000 per year. The new drugs are so expensive that the NHS has reduced the amount it is willing to spend on them by 30 percent. Here we are running up against the limits of the second model of healthcare, a limit that at some point will force societies to choose between providing life-preserving care for all, or only for those rich enough to afford it.

If the superlongevity project is going to be a progressive project it seems essential to me that it look like the first model of healthcare rather than the second. Otherwise it will either leave us with divergences in longevity within and between societies that make us long nostalgically for the "narrowness" of the current gap between today's poorest and richest societies, or it will bankrupt countries that seek to extend increased longevity to everyone.

This would require a U-turn from the trajectory of healthcare today, which is dominated and distorted by the lucrative world of the second model. As an example of this distortion: the physicist Paul Davies is working on a new approach to cancer that involves attempting to attack the disease with viruses. If successful, this would be a good example of model one. Using viruses (in a way the reverse of the new immunotherapies) to treat cancer would likely be much cheaper than current approaches involving radiation, chemotherapy, and surgery, because viruses can self-replicate once engineered rather than needing to be expensively and painstakingly constructed in drug labs. The problem is that it's extremely difficult for Davies to get funding for such research precisely because there isn't that much money to be made in it.

In an interview about his research, Davies compared his plight to how drug companies treat aspirin. There’s good evidence to show that plain old aspirin might be an effective preventative against cancer. Sadly, it’s almost impossible to find funding for large scale studies of aspirin’s efficacy in preventing cancer because you can buy a bottle of the stuff for a little over a buck, and what multi-billion dollar pharmaceutical company could justify profit margins as low as that?

The distortions of the second model are even more in evidence when it comes to antibiotics. Here is one of the few places where the second model of healthcare is dependent upon the first. As this chilling article by Maryn McKenna drives home, we are in danger of letting the second model lead to the nightmare of a sudden, sharp reversal of the health and longevity gains of the last century.

We are only now waking up to the full danger implicit in antibiotic resistance. We've so overprescribed these miracle treatments, both to ourselves and to our poor farm animals, whom we treat as mere machines and "grow" in hellish, unsanitary conditions, that bacteria have evolved to no longer be treatable with the suite of antibiotics we have, which are now a generation old or older. If you don't think this is a big deal, think about what it means to live in a world where a toothache can kill you and surgeries and chemotherapy can no longer be performed. A long winter of antibiotic resistance would mean that many of our dreams of superlongevity this century would be moot. It would mean many of us might die quite young from common illnesses, or from the surgical and treatment procedures that have combined to give us the longevity we have now.

Again, part of the reason we don't have alternatives to legacy antibiotics is that pharmaceutical companies don't see any profit in them as opposed to, say, Viagra. But the other part of the reason for this failure is just as interesting. It's that we have overtreated ourselves because we find the discomfort of being even mildly sick for a few days unbearable. It's also because we want nature, in this case our farm animals, to function like machines. Mechanical functioning means regularity, predictability, standardization and efficiency, and we've had to so distort the living conditions, food, and even genetics of the animals we raise that they would not survive without our constant medical interventions, including antibiotics.

There is a great deal of financial incentive to build solutions to human medical problems around interminable treatments rather than once-and-done cures or interventions that are needed only periodically. Constant consumption and obsolescence guarantee revenue streams. Not too long ago, Danny Hillis, whom I otherwise have the deepest respect for, gave an interview on, among other things, proteomics, which, for my purposes here, essentially means the minute analysis of bodily processes with the purpose of intervening the moment things begin to go wrong- to catch diseases before they cause us to exhibit symptoms. An audience member asked a thought-provoking question which, when followed up by the interviewer Alexis Madrigal, seemed to leave the otherwise loquacious Hillis stumped. How do you draw the line between illness without symptoms and what the body just naturally does? The danger is you might end up turning everyone, including the healthy, into "patients" and "profit centers".

We already have a world where seemingly healthy people need to constantly monitor and medicate themselves just to stay alive, where the body seems to be in a state of almost constant, secret revolt. This is the world as diabetics often experience it, and it's not a pretty one. What I wonder is whether, in a world in which everyone sees themselves as permanently sick- as in the process of dying- and in need of medical intervention to counter this sickness, we will still remember the joy of considering ourselves healthy. This is medicine becoming subsumed under our current model of consumption.

Everyone, it seems, has woken up to the fact that consumer electronics has the perfect consumption-sustaining model. If things quickly grow "old" to the point where they no longer work with everything else you own, or become so rare that one is unable to find replacement parts, then one is forced to upgrade, if only to ensure that one's stuff still works. Like the automotive industry, healthcare now seems to be embracing technological obsolescence as a road to greater profitability. Insurance companies seem poised to use devices like the Apple Watch to sort and monitor customers, but that is likely only the beginning.

Let me give you my nightmare scenario for a world of superlongevity. It's a world largely bereft of children, where our relationship to our bodies has become something like the one we have with our smartphones: we are constantly faced with the obsolescence of the hardware- the chemicals, nano-machines and genetically engineered organisms under our own skins- and are in near continuous need of upgrades to keep us alive. It is a world where those too poor to be in the throes of this cycle of upgrades followed by obsolescence followed by further upgrades are considered a burden and disposable, in the same way August Weismann viewed the disabled in his day. It's a world where the rich have brought capitalism into the body itself, an individual life preserved because it serves as a perpetual "profit center".

The other path would be for superlongevity to be pursued along my first model of healthcare, focusing its efforts on understanding the genetic underpinnings of aging by looking at marvels such as the bowhead whale, which can live for two centuries and gets cancer no more often than we do even though it has trillions more cells than we have. It would focus on interventions that were cheap, one-time or periodic, and could be spread quickly through populations. This would be a progressive superlongevity. If successful, rather than bolster the system built around the second model of healthcare, it would bankrupt much of it, for it would represent a true cure for, rather than a treatment of, many of the diseases that ail us.

Yet even superlongevity pursued to reflect the demands of justice confronts a moral dilemma that seems to lie at the heart of any superlongevity project. The morally problematic feature of superlongevity pursued along the second model of healthcare is that it risks giving long life only to the few. Troublingly, even superlongevity pursued along the first model of healthcare ends up in a similar place, robbing future generations of both human beings and other lifeforms of the possibility of existing. For it is very difficult to see, if a near-future generation gains the ability to live indefinitely, how this new state could exist side-by-side with the birth of new people, or how a world of many "immortals" of the highly consuming type of creature we are could be compatible with the survival of the diversity of the natural world.

I see no real solution to this dilemma, though perhaps, as elsewhere, the limits of nature will provide one for us: we may discover some bound to the length of human life which is compatible with new people being given the opportunity to be born and experience the sheer joy and wonder of being alive, a bound that would also allow the other creatures with whom we share our planet to continue to experience these joys and wonders as well. Thankfully, there is probably some distance between current human lifespans and such a bound, and thus the most important thing we can do for now is try to ensure that research into superlongevity takes the question of sustainable equity as its ethical lodestar.

 Image: Memento Mori, South Netherlands, c. 1500-1525, the Thomson collection

Why Europe Shouldn’t Print the Cartoons

The horrendous murders in Paris appear to have ignited a firestorm of defenders of free speech urging us to "print the cartoons", an understandable, and likely unheeded, plea, at least where the West's major newspapers are concerned. That is all to the good, for contrary to claims that not re-printing cartoons inflammatory to many Muslims amounts to cowardice (a claim I do not understand, given how many journalists at these institutions have risked or lost their lives covering conflicts), printing them in seemingly defiant defense of free speech is exactly what the terrorists wish we would do.

Islamists need there to be a violent confrontation between their version of the world and the one born in the West; they need Muslim minorities in Western countries to feel besieged, their religion disparaged, their value as human beings reduced to that of a threatening other.

France's population of 5 million Muslims is perhaps the most secular group of Muslims in the world. The overwhelming majority aren't looking for a Caliphate; they're just looking for a job. They are not, as a recent best-selling work of French speculative fiction, Soumission (Submission), depicts, likely to create a future Islamized France.

Rather, a handful of crazies might have managed to make a European nightmare of 21st century Ottomans at the gates of Vienna seem real, but only if Europeans let it.

It's a nightmare that won't benefit the Islamists much, but could greatly benefit the European right, which has already identified Europe's Muslim minority as the scapegoat for the continent's decline- neo-fascist parties such as Marine Le Pen's National Front in France itself, or Nigel Farage's UKIP in the UK. Europe is going through a very troubling identity crisis where even countries that should surely know better, namely Germany, have seen the rise of things like "Pegida"- Patriotic Europeans Against the Islamization of the West- which brings thousands into the streets in anti-immigrant, anti-Muslim marches. Islamists and the right need each other like the communists and Nazis did in order to make fanatics out of the rest of us. I sincerely hope that neither the Muslims nor the non-Muslims of Europe will let them do it.

William Gibson Groks the Future: The Peripheral

Image: William Gibson (pencil sketch)

It's hard to get your head around the idea of a humble prophet. Picturing Jeremiah screaming to the Israelites that the wrath of God is upon them and then adding "at least I think so, but I could be wrong…", or some utopian claiming the millennium is near but then following it up with "then again this is just one man's opinion…", would be the best kind of ridiculous- seemingly so out of character as to be both shocking and refreshing.

William Gibson is a humble prophet.

In part this stems from his understanding of what science fiction is for- not to predict the future, but to understand the present, with right calls about things yet to happen likely being just lucky guesses. Over the weekend I finished William Gibson's novel The Peripheral, and I will take the humble man at his word as in: "The future is already here- it's just not very evenly distributed." As a reader I knew he wasn't trying to make any definitive calls about the shape of tomorrow; he was trying to tell me how he understands the state of our world right now, including the parts of it that might be important in the future. So let me try to reverse engineer that, to try and excavate the picture of our present in the ruins of the world of tomorrow Gibson so brilliantly gives us in his gripping novel.

The Peripheral is a time-travel story, but a very peculiar one. In his imagined world we have gained the ability not to travel between past, present and future but to exchange information between different versions of the multiverse. You can’t reach into your own past, but you can reach into the past of an alternate universe that thereafter branches off from the particular version of the multiverse you inhabit. It’s a past that looks like your own history but isn’t.

The novel itself is the story of one of these encounters between "past" and "future." The past in question is a world that is actually our imagined near future, somewhere in the American South where the novel's protagonist, Flynn, her brother Burton and his mostly veteran friends eke out their existence. (Even if I didn't have a daughter I would probably love Gibson's use of strong female characters, but having two, I love it even more.) It's a world that rang very true to me because it was a sort of dystopian extrapolation of the world where I both grew up and live now: a rural county where the economy is largely composed of "Hefty Mart" and people making drugs in their homes.

The farther future in the story is the world of London decades after a wrenching crisis known as the “jackpot”, much of whose devastation was brought about by global warming that went unchecked and resulted in the loss of billions of human lives and even greater destruction for the other species on earth. It’s a world of endemic inequality, celebrity culture and sycophants. And the major character from this world, Wilf Netherton, would have ended his days as a mere courtier to the wealthy had it not been for his confrontation with an alternate past.

So to start, there are a few observations about the present we can draw out from the novel: the hollowing out of rural economies dominated by box stores, which I see all around me, and the prevalence of meth labs as a keystone of an economy now held up only by the desperation of its people. Ditto.

The other present Gibson is giving us some insight into is London, where Russian oligarchs established a kind of second Moscow after the breakup of the Soviet Union. That's a world that may fade now with the collapse of the Russian ruble, but the broader trend will likely remain in place- corrupt elites who have made their millions or billions by pilfering their home countries making their homes in, and ultimately shaping the fate of, the world's greatest cities.

Both the near and far futures in Gibson's novel are horribly corrupt. Local, state and even national politicians can not only be bought in Flynn's America; their very jobs seem to be to put themselves up for sale. The London of the farther future is corrupt to the bone as well. Indeed, it's hard to say that government exists at all there except as a conduit for corruption. The detective Ainsley Lowbeer, another major character in the novel who plays the role of the law in London, seems not even to be a private contractor, but someone pursuing justice on her own dime. We may not have this level of corruption today, but I have to admit it didn't seem all that futuristic.

Inequality (both of wealth and power, with seemingly little distinction between the two) also unites these two worlds and our own. It's an inequality that has an effect on privacy, in that only those who have political influence have it. The novel hinges on Flynn being the sole (innocent) witness of a murder. That there is no tape of the crime is something that leaves her incredulous, and, disturbingly enough, left me incredulous as well, until Lowbeer explains it to Flynn this way:

“Yours is a relatively evolved culture of mass surveillance,” Lowbeer said. “Ours, much more so. Mr Zubov’s house here, internally at least, is a rare exception. Not so much a matter of great expense as one of great influence.”

“What does that mean?” (Flynn)

“A matter of whom one knows,” said Lowbeer, “and of what they consider knowing you to be worth.” (223)

2014 saw the breaking open of the shell hiding the contours of the surveillance state we have allowed to be built around us in the wake of 9/11. Though how we didn't realize this before Edward Snowden is beyond me. If I were a journalist looking for a story it would be some version of the surveillance-corruption complex described by Gibson's detective Lowbeer. That is, I would look for ways in which the blindness of the all-seeing state (or even just the overwhelming surveillance powers of companies) was bought or gained from leveraging influence, or where its concentrated gaze was purchased for use as a weapon against rivals. In a world where one's personal information can be ripped away with simple hacks, or damaging correlations about anyone can be conjured out of thin air, no one is really safe. It is merely an extrapolation of human nature that the asymmetries of power and wealth will ultimately decide who has privacy and who does not. Sadly, again, not all that futuristic.

In both the near and far futures of The Peripheral drones are ubiquitous. Flynn's brother Burton was a haptic drone pilot in the US military, and he and his buddies have surrounded themselves with all sorts of drones. In the far future drones are even more widespread and much smaller. Indeed, Flynn witnesses the aforementioned murder while standing in for Burton as a kind of drone-piloting flyswatter, keeping paparazzi drone-flies away from the soon-to-be-killed celebrity Aelita West.

That Flynn ended up a paparazzi flyswatter in an alternate future she thinks is a video game began in the most human of ways- Netherton trying to impress his girlfriend, Daedra West, Aelita's sister. By far the coolest future-tech element of the book builds off of this, when Flynn goes from being a drone pilot to being the "soul" of a peripheral in order to find Aelita's murderer.

Peripherals, if I understand them, are quasi-biological forms of puppets. They can act intelligently on their own, but nowhere near with the nuance and complexity they have when a human being is directly controlling them through a brain-peripheral interface. Flynn becomes embodied in an alternative future by controlling the body of a peripheral while herself being in an alternative past. Does that leave your head spinning? Don't worry, Gibson is such a genius that in the novel itself it seems completely natural.

So Gibson is warning us about environmental destruction, inequality and corruption, and trying to imagine a world of ubiquitous drones and surveillance. All essential stuff for us to pay attention to, and for which The Peripheral provides a kind of frame that might serve as a sort of protection against blindly continuing to head in directions we would rather not go.

Yet the most important commentary on the present I gleaned from Gibson's novel wasn't any of these things, but what it said about a world where the distinction between the virtual and the real has disappeared, where everything has become a sort of video game.

In the novel, what this results in is a sort of digital imperialism and cruelty. Those in Gibson's far future derisively call the alternative pasts they interfere in "stubs", though these are worlds as full as their own, with people in them who are just as real as we are.

As Lowbeer tells Flynn:

Some persons or people unknown have since attempted to have you murdered, in your native continuum, presumably because they know you to be a witness. Shockingly, in my view, I am told that arranging your death would in no way constitute a crime here, as you are, according to current legal opinion, not considered to be real.(200)

The situation is actually much worse than that. As the character Ash explains to Netherton:

There were, for instance, Ash said, continua enthusiasts who'd been at it for several years longer than Lev, some of whom had conducted deliberate experiments on multiple continua, testing them sometimes to destruction, insofar as their human populations were concerned. One of these early enthusiasts, in Berlin, known to the community only as "Vespasian," was a weapons fetishist, famously sadistic in his treatment of the inhabitants of his continua, whom he set against one another in grinding, interminable, essentially pointless combat, harvesting the weaponry evolved, though some too specialized to be of use outside whatever baroque scenario had produced it. (352)

Some may think this indicates Gibson believes we might ourselves be living in a Matrix-style simulation. In fact I think he's actually trying to say something about the way the world, beyond dispute, works right now, though we haven't, perhaps, seen it all in one frame.

Our ability to use digital technology to interact globally is extremely dangerous unless we recognize that there are often real human beings behind the pixels. This is a problem for people who are engaged in military action, such as drone pilots, yes, but it goes well beyond that.

Take financial markets. Some of what Gibson is critiquing is the kind of algorithmic high-speed trading we've seen in recent years, and that played a role in the onset of the financial crisis. Those in his far future playing with past continua are doing so in part to game the financial systems there, which they can do not because they have a record of what financial markets in such continua will do, but because their computers are so much faster than those of the "past". It's a kind of AI neo-colonialism, itself a fascinating idea to follow up on, but I think the deeper moral lesson of The Peripheral for our own time lies in the fact that such actions, whether destabilizing the economies of continua or throwing them into wars as a sort of weapons development simulation, are done with impunity because the people in the continua are considered nothing but points of data.

Today, with the click of a button, those who hold or manage large pools of wealth can ruin the lives of people on the other side of the globe. Criminals can essentially mug a million people with a keystroke. People can watch videos of strangers' children and other people's loved ones being raped and murdered as if they were playing a game in hell. I could go on, but shouldn't have to.

One of the key ways, perhaps the key way, we might keep technology from facilitating this hell, from turning us into cold, heartless computers ourselves, is to remember that there are real flesh and blood human beings on the other side of what we do. We should be using our technology to find them and help them, or at least not to hurt them, rather than target them, or flip their entire world upside down without any recognition of their human reality because it somehow benefits us. Much of the same technology that allows us to treat other human beings as bits, thankfully, gives us tools for doing the opposite as well, and unless we start using our technology in this more positive and humane way we really will end up in hell.

Gibson will have grokked the future and not just the present if we fail to address these problems he has helped us (or me at least) to see anew. For if we fail to overcome these issues, it will mean that we will have continued forward into a very black continuum of the multiverse, and turned Gibson into a dark prophet, though he has disclaimed the title.

 

Living in the Divided World of the Internet’s Future

Image: Marten van Valckenborch the Elder, The Tower of Babel (Google Art Project)

Sony hacks, barbarians with Facebook pages, troll armies, ministries of "truth"- it wasn't supposed to be like this. When the early pioneers of what we now call the Internet freed the network from the US military they were hoping for a network of mutual trust and sharing- a network like the scientific communities in which they worked, where minds were brought into communion from every corner of the world. It didn't take long for some of the witnesses to the global Internet's birth to see in it the beginnings of a global civilization, the unification, at last, of all of humanity under one roof, brought together in dialogue by the miracle of a network that seemed to eliminate the parochialism of space and time.

The Internet for everyone that these pioneers built is now several decades old; we are living in its future as seen from the vantage point of the people who birthed it as this public "thing", this thick overlay of human interconnections which now mediates almost all of our relationships with the world. Yet rather than the world coming together, humanity appears to be drifting apart.

Anyone who doubts that the Internet has become as much a theater of war in which political conflicts are fought as a harbinger of a nascent "global mind" isn't reading the news. Much of the Internet has been weaponized, whether by nation-states or non-state actors. Bots, whether used in contests between individuals or over them, now outnumber human beings on the web.

Why is this happening? Why did the Internet that connected us also fail to bring us closer?

There are probably dozens of reasons, only one of which I want to comment on here, because I think it's the one that's least discussed. What I think the early pioneers failed to realize about the Internet was that it would be as much a tool for reanimating the past as a means of building a future. It's not only that history didn't end, it's that it came alive to a degree we had failed to anticipate.

Think what one will of Henry Kissinger, but his recent (and, given that he's 91, probably last major) book, World Order, tries to get a handle on this. What makes the world so unstable today, in Kissinger's view, is that it is perhaps the first time in world history when we truly have "one world" and, at the same time, multiple competing definitions of how that world should be organized. We only really began to live in what can be considered a single world with the onset of the age of European expansion that mapped, conquered, and established contact with every region on the earth. Especially from the 1800s to the end of World War II, world order was defined by the European system of balance of power, and, I might add, the shared dominant Western culture these nations proselytized. After 1945 you have the Cold War, with the world order defined by the bipolar split between the US and the Soviet Union. After the fall of the USSR, for a good two decades, world order was an American affair.

It was during this last period, when the US and neo-liberal globalization were at their apex, that the Internet became a global thing, a planetary network connecting all of humanity together into something like Marshall McLuhan's "global village." Yet this new realm couldn't really exist as something disconnected from underlying geopolitical and economic currents forever. An empire secure in its hegemony doesn't seek to turn its communication system into a global spying tool or weaponize it, both of which the US has done. If the US could treat the global network or the related global financial system as tools for parochial nationalist ends, then other countries would seek to do the same- and they have. Rather than becoming Chardin's noosphere, the Internet has become another theater of war for states, terrorist and criminal networks, and companies.

What exactly these entities were that competed with one another across what we once called "cyberspace", and what goals they had, were not really technological questions at all, but ones born from the ancient realities of history, geography and the contest for resources and wealth. Rather than one modernity we have several competing versions, even if all of them are based on the same technological foundations.

Non-Western countries had once felt compelled to copy the West's cultural features as the price of modernity, and we should not forget that the main reason modernization was pursued despite its upheavals was to develop to a level where they could defend themselves against the West and its technological superiority. As Samuel Huntington pointed out in the 1990s, now that the West had fallen from the apex of its power, other countries were free to pursue modernity on their own terms. His model of a "clash of civilizations" was simplistic, but it was not, as some critics claimed, "racist". Indeed, if we had listened to Huntington we would never have invaded Afghanistan and Iraq.

Yet civilization is not quite the right term. Better, perhaps, political cultures with different ideas regarding political order, along with a panorama of non-state actors old and new. What we have is a contest between different political cultures, concepts of order, and contests for raw power, all unfolding in the context of a technologically unified world.

The US continues to pursue its ideological foreign policy with deep roots in its history, but China has revived its own much deeper history of being the center of East Asia as well. Meanwhile, the Middle East, where most states were historical fictions created by European imperialist powers in the wake of World War I with the Sykes-Picot agreement, has imploded, and what's replaced the defunct nation-state is a millennium-old conflict between the two major branches of Islam. Russia at the same moment pursues old czarist dreams long thought as dead and encrypted as Lenin's corpse.

That's the world order, or perhaps better world disorder, that Kissinger sees, and when you add to it the fact that our global networks are vectors not just for these conflicts between states and cultures but also for those between criminals and corporations, it can look quite scary. On the Internet we've become the "next door neighbor" not just of interesting people from all the world's cultures, but also of scam artists, electronic burglars, spies and creeps.

Various attempts have been made to come up with a theory that would describe our current geopolitical situation. There has been Fukuyama's victory of liberal democracy in his "end of history" thesis, and Huntington's clash of civilizations. There have been arguments that the nation-state is dead and that we are resurrecting a pre-Westphalian "neo-medieval" order where the main story isn't the struggle between states but between international groups, especially corporations. There are those who argue that city-states and empires are the political units not only of the distant past, but of the future. All of these and their many fellow theories, including still vibrant Marxist and revived anarchist takes on events, have a grain of truth to them, but always seem to come up short in capturing the full complexity of the moment.

Perhaps the problem we encounter when trying to understand our era is that it truly is sui generis. We have quite simply never existed in a world where the connective tissue- the networks that facilitate the exchange of goods, money, ideas and culture- was global, but where the underlying civilizations and political and social histories and conditions were radically different.

Looking for a historical analog might be a mistake. Like the early European imperialists who dressed themselves up in togas and re-discovered the Doric column, every culture in this big global knot of interconnections within which we've managed to tie all of humankind is blinded by its history into giving a false order to the labyrinth.

Then again, maybe it's the case that what digital technologies are really good at is destroying the distinction between the past and the future, just as the Internet is the most powerful means we have yet discovered for bringing together the like-minded regardless of their separation in space. Political order, after all, is nothing but a reflection of the type of world groups of human beings wish to reify. Some groups get these imagined worlds from the past and some from imagined futures, but the stability of none can be assured now that all are exposed to the reality of other worlds outside their borders. A transhumanist and a member of ISIS encountering one another would be something akin to a meeting of time travelers from past and future.

This goes beyond the political. Take any cultural group you like, from steampunk aficionados to constitutional literalists, and what you have are people trying to make an overly complex present understandable by refashioning it in the form of an imagined past. Sometimes people even try to get a grip on the present by recasting it in the form of an imagined future. There is the "march of progress", which assumes we are headed for a destination in time, or science fiction, which gives us worlds more graspable than the present because the worlds presented there have a shape that our real world lacks.

It might be the case that there has never been a shape to humanity or our communities at any time in the past. Perhaps future historians will make the same mistake we have and project their simplifications on our world which was their formless past. We know better.

 

2014: The Death of the Human Rights Movement, or Its Rebirth?

Image: Edwin Abbey, Justice, Harrisburg

For anyone interested in the issues of human rights, justice, or peace, and I assume that would include all of us, 2014 was a very bad year. It is hard to know where to start: with Eric Garner, the innocent man choked to death in New York City, whose police are supposed to protect citizens, not kill them, or with Ferguson, Missouri, where the lack of police restraint in using lethal force on African Americans burst into public consciousness, with seemingly little effect, as the chilling murder of a young boy wielding a pop gun occurred even in the midst of riots that were national news.

Only days ago we had the release of the US Senate's report on the torture of terrorist "suspects", torture performed or enabled by Americans set into a state of terror and rage in the wake of 9/11. Perhaps the most depressing feature of the report is the defense of these methods by members of the right, even though there is no evidence that forms of torture ranging from "rectal feeding" to threatening prisoners with rape gave us even one piece of usable information that could not have been gained without turning American doctors and psychologists into 21st century versions of Dr. Mengele.

Yet the US wasn't the only source of ill winds for human compassion, social justice, and peace. It was a year in which China essentially ignored and rolled up democratic protests in Hong Kong, in which Russia effectively partitioned Ukraine, and in which anti-immigrant right-wing parties made gains across Europe. The Middle East proved especially bad: military secularists and the "deep state" reestablished control over Egypt, killing hundreds and arresting thousands, while the living hell that is the Syrian civil war created the horrific movement that calls itself the Islamic State, whose calling card seemed to be to brutally decapitate, crucify, or stone its victims and post the results on YouTube.

I think the best way to get a handle on all this is to zoom out and take a look from 10,000 feet, so to speak. Zooming out allows us to put all this in perspective in terms of space, but even more importantly, in terms of time, of history.

There is a sort of intellectual conceit among a certain subset of thoughtful, but not very politically active or astute, people who believe that, as Kevin Kelly recently said, "any twelve year old can tell you that world government is inevitable". And indeed, given how many problems are now shared across all of the world's societies, and how interdependent we have become, the opinion seems to make a great deal of sense. In addition to these people there are those, such as Steven Pinker in his fascinating, if far too long, Better Angels, who argue that even if world government is not in the cards, something like world sameness- convergence around a globally shared set of liberal norms, along with continued social progress- seems baked into the cake of modernity, as long as we can rid ourselves of what they consider atavisms, most especially religion, which they think has left societies blind to the wonders of modernity and locked in a state of violence.

If we wish to understand current events, we need to grasp why these ideas- of greater and greater political integration of humanity, and projections regarding the decline of violence- seem as far away from us in 2014 as ever.

Maybe the New Atheists, of whom Pinker is one, are right that the main source of violence in the world is religion. Yet it is quite obvious from looking at the headlines listed above that religion only unequivocally plays a role in two of them- the Syrian civil war and the Islamic State- and the two are so closely related we should probably count them as just one. US torture of Muslims was driven by nationalism, not religion, and police brutality towards African Americans is no doubt a consequence of a racism baked deep into the structure of American society. The Chinese government was not cracking down on religiously motivated protesters in Hong Kong but on civically motivated ones, and the two sides battling it out in Ukraine are both predominantly Orthodox Christians.

The argument that religion, even when viewed historically, hasn't been the primary cause of human violence is one made by Karen Armstrong in her recent book Fields of Blood. Someone who hasn't read the book, and Richard Dawkins is one critic who apparently hasn't, might think it makes the case that religion is only violent as a proxy for conflicts that are at root political, but that really isn't Armstrong's point.

What she reminds those of us who live in secular societies is that before the modern era it wasn't possible to speak of religion as some distinct part of society at all. Religion's purview was so broad it covered everything from the justification of political power, to the explanation of the cosmos, to the regulation of marriage, to the way society treated its poor.

Religion spread because the moral universalism it eventually developed sat so well with the universal aspirations of empire that the latter sanctioned and helped establish religion as the bedrock of imperial rule. Yet from the start, whether Taoism and Confucianism in China, Hinduism and Buddhism in Asia, Islam in North Africa and the Middle East, or Christianity in Europe, religion was the way in which the exercise of power or the degree of oppression was criticized and countered. It was religion which challenged the brutality of state violence and motivated the care of the impoverished and disabled. Armstrong also reminds us that the majority of the world is still religious in this comprehensive sense, that secularism is less a higher stage of society than a unique method of approaching the world that emerged in Europe for particularistic reasons, and that it was sometimes picked up elsewhere as a perceived necessity for technological modernization (as in Turkey and China).

Moving away from Armstrong, it was the secularizing West that invented the language of social and human rights, a language that built on the utopian aspirations of religion but shed its pessimism that a truly just world- without poverty, oppression or war- would have to await the end of earthly history and the beginning of a transcendent era. Instead, we should build the perfect world in the here and now.

Yet the problem with human rights as they first appeared in the French Revolution was that they were intimately connected to imperialism. The French "Rights of Man" both made strong claims for universal human rights and were a way to undermine the legitimacy of European autocrats, serving the imperial interests of Napoleonic France. The response to the rights imperialism of the French was a nationalism that democratized politics, but tragically based its legitimacy on defending the rights of one group alone.

Over a century after Napoleon's defeat, both the US and the Soviet Union would claim the inheritance of French revolutionary universalism, with the Soviets emphasizing their addressing of the problem of poverty and the inequalities of capitalism, and the US claiming the high ground of political freedom- it was here, as a critique of Soviet oppression, that the modern human rights movement as we would now recognize it emerged.

When the USSR fell in the 1990s it seemed the world was heading towards the victory of the American version of rights universalism. As Francis Fukuyama would predict in his The End of History and the Last Man, the entire world was moving towards becoming liberal democracies like the US. It was not to be, and the reasons why both inform the present and give us a glimpse into the future of human rights.

The reason the secular language of human rights has a good claim to be a universal moral language is not that religion is a poor way to pursue moral aims, or that religion is focused on some transcendent "never-never-land" whereas secular human rights has its feet squarely planted in the scientifically supported real world. Rather, the secular character of human rights allows it to be universal because, being devoid of religious claims, it can be used as a bridge across groups adhering to different faiths, and can even include what is new under the sun- persons adhering to no religious tradition at all.

The problem human rights has had up until this moment is just how deeply it has been tied up with US imperial interests, which leads almost inevitably to those at the receiving end of US power crushing the manifestations of the human rights project in their societies- which is what China has just done in Hong Kong, and how Putin's Russia both understands and has responded to events in Ukraine, both seeing rights-based protests there as Western attempts to weaken their countries.

Like the nationalism that grew out of French rights imperialism, Islamic jihadism became such a potent force in the Middle East partially as a response to Western domination, and we in the West have long been in the strange position that the groups within Middle Eastern societies that share many of our values, as in Egypt today, are also the forces of oppression within those societies.

What those who continue to hope that human rights can provide a global moral language can wish for is that, as the proverb goes, “there is no wind so ill that it does not blow some good”. The good here would be that, in exposing so clearly US inadequacy in living up to the standards of human rights, the global movement for these rights will at last become detached from American foreign policy. A human rights movement no longer seen as a clever strategy of the US and other Western powers might eventually be given more room to breathe in non-Western countries and cultures, and over the very long haul bring the standards of justice in the entire world closer to the ideals of the now more than half-century-old UN Declaration of Human Rights.

The way this could be accomplished might also address the very valid Marxist critique of the human rights movement- that it deflects the idealistic youth, on whom the shape of future society always depends, away from the structural problems within their own societies, concentrating their efforts instead on the very real cruelties of dictators and fanatics on the other side of the world, and on the fate of countries where those efforts would have little effect unless they served the interests of their Western governments.

What 2014 reminded us of is what Armstrong pointed out: that every major world religion has long known that every society is in some sense underwritten by structural violence and oppression. Human rights activists thus need to be ever vigilant in addressing the failure to live up to their ideals at home, even as they forge bonds of solidarity and hold out a hand of support to those a world away who, though they might not speak a common language regarding these rights, and often express this language in religious terms, are nevertheless on the same quest towards building a more just world.

 

Is AI a Myth?

Nielsen, East of the Sun 29

A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they see as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by John Searle, the most famous skeptic from the last peak in AI in the early 1980’s. (Relation to the author lost in the mists of time.) It was Searle who invented the well-known thought experiment of the “Chinese Room”, which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi’s The Fourth Revolution and Nick Bostrom’s Superintelligence.

Also in October, Michael Jordan, the machine learning researcher who did pioneering work on neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as the hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now enjoined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn’t about might give us a better handle on what is actually going on in AI right now, in the next few decades, and in a farther-off future we at least have to start thinking about- even if there’s not much to actually do regarding that last question for a few decades at least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human level intelligence in machines is theoretically impossible. These aren’t people arguing that there’s some spiritual something that humans possess that we’ll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siri(s) and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian’s fear that he won’t be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human level machine intelligence between 2075 and 2090. If we just average those we’re out to 2083 by the time human equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is 69 years in the future we’re talking about- a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, echoing Bostrom, I think we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out and will become part of a much larger argument, one that will include many issues in addition to AI, over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn’t about this larger debate regarding our survival and future; it’s about what’s happening with artificial intelligence right before our eyes. They want to challenge what they see as commonly held false assumptions regarding AI. It’s hard not to be bedazzled by all the amazing manifestations around us, many of which have appeared only over the last decade. Yet as the philosopher Alva Noë recently pointed out, we’re still not really seeing what we’d properly call “intelligence”:

Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

This is an old criticism, the same one made by John Searle both in the 1980’s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated statistical methods into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as “neural nets” are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It’s not a good idea to be trapped by anything- including our metaphors. AI researchers might fail to develop other good metaphors that help them understand what they are doing- “flows and pipelines” once provided good metaphors for computers. The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th century ideas about “electronic brains”, and the public is at risk of anthropomorphizing its machines. Such anthropomorphizing might have ugly consequences- a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.

Lanier’s critique of AI is actually deeper than Jordan’s because he sees both technological and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we’ll find ourselves in an “AI winter” similar to the one that occurred in the 1980’s. Hype-cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you’re likely to lose the interest of the smartest minds and start to attract kooks- which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far scarier. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies- Google, Facebook, Amazon- are essentially just algorithms. Some of the same people who have an economic interest in us seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn’t this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of Oz scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn’t so much another form of intelligence helping you to make better informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn’t, as it is often presented, coming from silicon at all. Rather, it’s leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user’s view.
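
To make that aggregation concrete, here is a minimal sketch of item-based collaborative filtering, the family of techniques behind many such recommenders. It is not any particular company’s actual system, and the ratings matrix is invented for illustration; the point is that every number the “AI” works with is a judgment a human being already made, and the recommendation is just a weighted average of those judgments.

```python
# A minimal sketch of item-based collaborative filtering (illustrative only).
# Rows are users, columns are items; 0 means "hasn't rated it yet".
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two item rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def predict(user, item):
    """Predict a missing rating as a similarity-weighted average of the
    user's own ratings of other items."""
    num, den = 0.0, 0.0
    for other in range(ratings.shape[1]):
        if other == item or ratings[user, other] == 0:
            continue
        sim = cosine_sim(ratings[:, item], ratings[:, other])
        num += sim * ratings[user, other]
        den += abs(sim)
    return num / den if den else 0.0

# User 0 never rated item 2; the "recommendation" is built entirely out of
# ratings that humans supplied.
print(round(predict(0, 2), 2))
```

Nothing in that loop knows what a song or a film is; it only reuses choices people already made, which is Lanier’s point about where the intelligence actually lives.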

Lanier doesn’t think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. Even the answer to technological unemployment offered by the otherwise laudable philanthropist Bill Gates- “eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad”- rests on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

As long as we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), then over the course of the next half-century we will experience the gradual improvement of AI to the point where perhaps the majority of human occupations can be performed by machines. We should not confuse ourselves as to what this means: it is impossible to say, with anything but an echo of lost religious myths, that we will be entering the “next stage” of human or “cosmic evolution”. Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don’t fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts, should they ever emerge from anything other than our nightmares and our dreams.

 

Summa Technologiae, or why the trouble with science is religion

Soviet Space Art 2

Before I read Lee Billings’ piece in the fall issue of Nautilus, I had no idea that in addition to being one of the world’s greatest science-fiction writers, Stanislaw Lem had written what became a forgotten book, a tome that was intended to be the overarching text of the technological age: his 1966 Summa Technologiae.

I won’t go into detail on Billings’ thought-provoking piece; suffice it to say that he leads us to question whether we have lost something of Lem’s depth with our current batch of Silicon Valley singularitarians, who have largely repackaged ideas first fleshed out by the Polish novelist. Billings also leads us to wonder whether our focus on either the fantastic or the terrifying aspects of the future is causing us to forget the human suffering that is here, right now, at our feet. I encourage you to check the piece out for yourself. In addition to Billings there’s also an excellent review of the Summa Technologiae by Giulio Prisco, here.

Rather than look at either Billings’ or Prisco’s piece, I will try to lay out some of the ideas found in Lem’s 1966 Summa Technologiae, a book at once dense almost to the point of incomprehensibility, yet full of insights we should pay attention to as the world Lem imagined unfolds before our eyes, or at least seems to be doing so for some of us.

The first thing that struck me when reading the Summa Technologiae was that it isn’t really our version of Aquinas’ Summa Theologica, from which Lem took his tract’s name. In the 13th century Summa Theologica you find the voice of a speaker supremely confident both in the rationality of the world and in his own ability to understand it. Aquinas, of course, didn’t really possess such a comprehensive understanding, but it is perhaps odd that the more we have learned the more confused we have become, and Lem’s Summa Technologiae reflects some of this modern confusion.

Unlike Aquinas, Lem is in a sense blind to our destination, and what he is trying to do is to probe into the blackness of the future to sense the contours of the ultimate fate of our scientific and technological civilization. Lem seeks to identify the roadblocks we are likely to encounter if we are to continue our technological advancement- roadblocks that are important to identify because we have yet to find any evidence, in the form of extraterrestrial civilizations, that they can actually be overcome.

The fundamental aspect of technological advancement is that it has become both its own reward and a trap. We have become absolutely dependent on scientific and technological progress for as long as population growth continues- for if technological advancement stumbles while population continues to increase, living standards will fall precipitously.

The problem Lem sees is that science is growing faster than the population, and in order to keep up with it we would eventually have to turn all human beings into scientists, and then some. Science advances by exploring the whole of the possibility space- we can’t predict in advance which of its explorations will produce something useful, or which avenues will prove fruitful in terms of our understanding. It’s as if the territory has become so large that at some point we will no longer have enough people to explore all of it, and thus will have to narrow the number of regions we look at. This narrowing puts us at risk of not finding the keys to El Dorado, so to speak, because we will not have asked and answered the right questions. We are approaching what Lem calls “the information peak.”
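
Lem’s arithmetic is easy to sketch with a toy model. The growth rates and starting figures below are my own illustrative assumptions, not Lem’s numbers, but they show the shape of the argument: if scientific output compounds faster than population, the share of humanity needed just to keep up eventually exceeds everyone alive.

```python
# An illustrative back-of-the-envelope version of Lem's argument.
# All of these figures are assumptions chosen for illustration only:
DOUBLING_YEARS = 15          # assumed doubling time of scientific output
POP_GROWTH = 0.01            # assumed annual population growth rate
population = 7.0e9           # starting world population
scientists_needed = 7.0e6    # assumed number of scientists required today

year = 0
while scientists_needed < population:
    year += 1
    population *= 1 + POP_GROWTH
    scientists_needed *= 2 ** (1 / DOUBLING_YEARS)

print(f"Under these toy assumptions, every living person would need to be "
      f"a scientist within about {year} years.")
```

Change the assumed rates and the crossover moves, but as long as the first curve compounds faster than the second, the crossover always arrives; that is the wall Lem calls the information peak.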

The absolutist nature of the scientific endeavor itself, our need to explore all avenues or risk losing something essential, will for Lem inevitably lead to our attempt to create artificial intelligence. We will pursue AI to act as what he calls an “intelligence amplifier”, though Lem is thinking of AI in a whole new way, in which computational processes mimic those done in nature- like the physics “calculations” of a tennis genius like Roger Federer, or my 4 year old learning how to throw a football.

Lem, through the power of his imagination alone, seemed to anticipate both some of the problems we would encounter when trying to build AI and the ways we would likely try to escape them. For all their seeming intelligence our machines lack the behavioral complexity of even lower animals, let alone human intelligence, and one of the main roads away from these limitations is getting silicon intelligence to be more like that of carbon-based creatures- not so much “brain-like” as “biological-like”.

Way back in the 1960’s, Lem thought we would need to learn from biological systems if we wanted to really get to something like artificial intelligence- think, for example, of how much more bang you get for your buck when you contrast DNA and a computer program. A computer program gets you some interesting or useful behavior or process done by a machine; DNA, well… it gets you programmers.

The somewhat uncomfortable fact about designing machine intelligence around biological-like processes is that it might end up a lot like how the human brain works- a process largely invisible to its possessor. How did I catch that ball? Damned if I know- or at least damned if I know what internal process led me to catch it.

Just going about our way in the world we make “calculations” that would make the world’s fastest supercomputers green with envy, were they actually sophisticated enough to experience envy. We do all the incredible things we do without having any solid idea, either scientific or internal, about how it is we are doing them. Lem thinks “real” AI will be like that. It will be able to outthink us because it will be a species of natural intelligence like our own, and just as with our own thinking, we will be hard pressed to explain how exactly it arrived at some conclusion or decision. Truly intelligent AI will end up being a “black box”.
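
The black-box point is already visible at toy scale. The snippet below is only a stand-in (an ordinary small neural network trained on a synthetic dataset, nothing like the “naturalistic” AI Lem imagines): the model gives an answer, but when we ask it why, all it can offer is arrays of weights.

```python
# A toy illustration of the "black box" point, not Lem's envisioned AI:
# the model answers correctly, but its internals are just weight matrices.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                      random_state=0).fit(X, y)

point = [[0.5, 0.25]]
print("prediction:", model.predict(point)[0])           # an answer...
print("explanation:", [w.shape for w in model.coefs_])  # ...and only weight
                                                        # matrices when we ask why
```

Even at this scale there is no human-readable account of how the answer was reached; Lem’s homostats would pose the same problem, only with society itself as the input.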

Our increasingly complex societies might need such AI’s to serve the role of what Lem calls “Homostats”- machines that run the complex interactions of society. The dilemma appears the minute we surrender the responsibility to make our decisions to a homostat. For then the possibility opens that we will not be able to know how a homostat arrived at its decision, or what a homostat is actually trying to accomplish when it informs us that we should do something, or even, what goal lies behind its actions.

It’s quite a fascinating view: that science might be epistemologically insatiable in this way; that at some point it will grow beyond the limits of human intelligence, whether our sheer numbers or our mental capacity; that the only way out which still includes technological progress will be to develop “naturalistic” AI; and that very soon our societies will be so complicated that they will require such AIs to manage them.

I am not sure if the view is right, but to my eyes at least it’s got much more meat on its bones than current singularitarian arguments about “exponential trends”, which, unlike Lem, take little account of the possibility that the scientific wave we’ve been riding for five or so centuries will run into a wall we find impossible to crest.

Yet perhaps the most intriguing ideas in Lem’s Summa Technologiae are those imaginative leaps he throws at the reader almost as asides, with little reference to his overall theory of technological development. Take his metaphor of the mathematician as a sort of crazy “tailor”.

He makes clothes but does not know for whom. He does not think about it. Some of his clothes are spherical without any opening for legs or feet…

The tailor is only concerned with one thing: he wants them to be consistent.

He takes his clothes to a massive warehouse. If we could enter it, we would discover clothes that could fit an octopus, others fit trees, butterflies, or people.

The great majority of his clothes would not find any application. (171-172)

This is Lem’s clever way of explaining the so-called “unreasonable effectiveness of mathematics”, a view that is the opposite of that of present-day Platonists such as Max Tegmark, who hold all mathematical structures to be real even if we are unable to find actual examples of them in our universe.

Lem thinks math is more like a ladder. It allows you to climb high enough to see a house, or even a mountain, but shouldn’t be confused with the house or the mountain itself. Indeed, most of the time, as his tailor example is meant to show, the ladder mathematics builds isn’t good for climbing at all. This is why Lem thinks we will need to learn “nature’s language” rather than go on using our invented language of mathematics if we want to continue to progress.

For all its originality and freshness, the Summa Technologiae is not without its problems. Once we start imagining that we can play the role of creator, it seems we are unable to escape the same moral failings that were once held against God. Here is Lem imagining a far future in which we could create a simulated universe inhabited by virtual people who think they are real.

Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything” considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess. (291-292)

If Lem is ultimately proven correct, and we arrive at this destination where we create virtual universes with sentient inhabitants whom we keep blind to their true nature, then science will have ended where it began- with the demon imagined by Descartes.

The scientific revolution commenced when it was realized that we could trust neither our own senses nor our traditions to tell us the truth about the world- the most famous example of which was the discovery that the earth, contrary to all perception and history, traveled around the sun and not the other way around. The first generation of scientists, who emerged in a world in which God had “hidden his face”, couldn’t help but understand this new view of nature as the creator’s elaborate puzzle, one we would have to painfully reconstruct, piece by piece, hidden as it was beneath the illusion of our own “fallen” senses and the false post-edenic world we had built around them.

Yet a curious new fear arises with this: What if the creator had designed the world so that it could never be understood? Descartes, at the very beginning of science, reconceptualized the creator as an omnipotent demon.

I will suppose, then, not that Deity, who is sovereignly good and the fountain of truth, but that some malignant demon, who is at once exceedingly potent and deceitful, has employed all his artifice to deceive me; I will suppose that the sky, the air, the earth, colours, figures, sounds, and all external things, are nothing better than the illusions of dreams, by means of which this being has laid snares for my credulity.

Descartes’ escape from this dreaded absence of intelligibility was his famous “cogito ergo sum”, the certainty a reasoning being has in its own existence. The entire world could be an illusion, but the fact of one’s own consciousness was something not even an all-powerful demon would be able to take away.

What Lem’s resurrection of the demon imagined by Descartes tells us is just how deeply religious thinking still lies at the heart of science. The idea has become secularized, and part of our mythology of science-fiction, but it’s still there; indeed, it’s the only scientifically fashionable form of creationism around. As proof, not even the most secular among us bat an eye at experiments to test whether the universe is an “infinite hologram”. If such experiments bear fruit they will point to a designer that either allowed us to know our reality or didn’t care to “bar the exits”, but the crazy thing, if one takes Lem and Descartes seriously, is that their creator/demon is ultimately as ineffable and irrefutable as the old ideas of God from which it descended. For any failure to prove the hypothesis that we are living in a “simulation” can be brushed aside on the grounds that whatever has brought about this simulation doesn’t really want us to know. It’s only a short step from there to unraveling the whole concept of truth at the heart of science. Like any garden-variety creationist, we end up seeing the proofs of science as part of God’s (or whatever we’re now calling God) infinitely clever ruse.

The idea that there might be an unseeable creator behind it all is just one of the religious myths buried deep within science, a myth that traces its origins less to the day-to-day mundane experiments and theory building of actual scientists than to a certain type of scientific philosophy or science-fiction that has constructed a cosmology around what science is for and what science means. It is the mythology in which the singularitarians and others who followed Lem remain trapped, often to the detriment of both technology and science. What is a shame is that these are myths that Lem, even with his expansive powers of imagination, did not dream widely enough to see beyond.