Life: Inevitable or Accident?

The Tree of Life, Gustav Klimt

Here’s the question: does the existence of life in the universe reflect something deep and fundamental, or is it merely an accident, an epiphenomenon?

There’s an interesting new theory coming out of the field of biophysics that claims the cosmos is indeed built for life, and not merely in the sense of the so-called “anthropic principle”, which holds that, just by being here, we can infer that nature’s fundamental constants must be friendly to complex organisms such as ourselves, organisms able to ask such questions. The new theory claims that not just life, but life of ever-growing complexity and intelligence, is the inevitable result of the laws of nature.

The proponent of the new theory is a young physicist at MIT named Jeremy England. I can’t claim to grasp all the intricate details of England’s theory, though he does an excellent job of explaining it here, but perhaps the best way of capturing it succinctly is to think of the laws of physics as a landscape, and a leaning one at that.

The second law of thermodynamics leans in the direction of increased entropy: systems naturally move in the direction of losing rather than gaining order over time, which is why we break eggs to make omelettes and not the other way round. The second law would seem to be a bad thing for living organisms, but, oddly enough, it ends up being a blessing not just for life but for any self-organizing system, so long as that system has a means of radiating this entropy away from itself.

For England, the second law provides the environment and direction in which life evolves. Wherever outside energy is available and can be dissipated within some bounded medium, such as a pool of water, self-organizing systems naturally come to be dominated by those forms that are particularly good at absorbing energy from their surrounding environment and radiating less organized energy, in the form of heat (entropy), back into it.

This landscape in which life evolves, England postulates, may tilt as well in the direction of complexity and intelligence. In a system whose energy frequently oscillates, those forms able to anticipate the direction of such oscillations gain the possibility of aligning themselves with them, and thus become able to accomplish even more work through resonance.

England is in no sense out to replace Darwin’s natural selection as the mechanism through which evolution is best understood, though, should he be proved right, he would end up greatly amending it. If his theory ultimately proves successful, and it is admittedly very early days, it will have answered one of the fundamental questions that has dogged evolution since its beginnings. For while Darwin’s theory provides us with all the explanation we need for how complex organisms such as ourselves could have emerged out of seemingly random processes- that is, through natural selection- it has never quite explained how you go from the inorganic to the organic and get evolution working in the first place. England’s work is blurring the line between the organic and the most complicated self-organizing forms of the inorganic, making the line separating cells from snowflakes and storms a little less distinct.

Whatever its ultimate fate, however, England’s theory faces major hurdles, not least because it seems to have a bias towards increasing complexity and, in its most radical form, points towards the inevitability that life will evolve in the direction of increased intelligence- ideas which many evolutionary thinkers vehemently disavow.

Some evolutionary theorists may see efforts such as England’s not as a paradigm shift waiting in the wings, but as an example of a misconception regarding the relationship between complexity and evolution that now appears to have been adopted by actual scientists rather than a merely misguided public- a misconception that, couched in scientific language, will further muddy the minds of the public, leaving them with a conception of evolution that belongs much more to the 19th century than to the 21st. It is a misconception whose most vocal living opponent, after the death of the irreplaceable Stephen Jay Gould, has been the paleontologist, evolutionary biologist, and senior editor of the journal Nature, Henry Gee, who has set out to disabuse us of it in his book The Accidental Species.

Gee’s goal is to remind us of what he holds to be the fundamental truth behind the theory of evolution: evolution has one singular purpose from which everything else follows in lockstep- reproduction. His objective is to do away, once and for all, with the common misconception that evolution is leading towards complexity and progress, and that the highest peak of this complexity and progress is us- human beings.

If improved prospects for reproduction can be bought through the increased complexity of an organism then that is what will happen, but it needn’t be the case. Gee points out that many species, notably some worms and many parasites, have achieved improved reproductive prospects by decreasing their complexity. Therefore the idea that complexity (as in an increase in the specialization and number of parts an organism has) is merely a matter of evolution plus time doesn’t hold up to close scrutiny. Judged through the eyes of evolution, losing features and becoming simpler is not necessarily a vice. All that counts is an organism’s ability to make more copies, or for animals that reproduce through sex, blended copies, of itself.

Evolution in this view isn’t beautiful but coldly functional and messy- a matter of mere reproductive success. Gee reminds us of Darwin’s image of evolution’s product as a “tangled bank”- a weird menagerie of creatures, each having its own particular historical evolutionary trajectory. The anal-retentive Victorian-era philosophers who tried to build upon Darwin’s ideas couldn’t accept such a mess and:

…missed the essential metaphor of Darwin’s tangled bank, however, and saw natural selection as a sort of motor that would drive transformation from one preordained station on the ladder of life to the next one. (37)

Gee also sets out to show how deeply limited our abilities are when it comes to understanding the past through the fossil record. Very, very few of the species that have ever existed left evidence of their lives in the form of fossils, which form only under very special conditions, and the process of fossilization greatly favors the preservation of some species over others. The past is thus incredibly opaque, making it impossible to impose an overarching narrative upon it- such as increasing complexity- as we move from the past towards the present.

Gee, though an ardent defender of evolution and opponent of creationist pseudoscience, finds the gaps in the fossil record so pronounced that he thinks we can create almost any story we want from it and end up projecting our contemporary biases onto the speechless stones. This is the case even when the remains we are dealing with are of much more recent origin and especially when their subject is the origin of us.

We’ve tended, for instance, to link tool use and intelligence, even in cases such as Homo habilis, where the records and artifacts point to a different story. We’ve tended not to see other human species, such as the so-called Hobbit man, as ways we might actually have evolved had circumstances not played out precisely as they did. We have not, in Gee’s estimation, been working our way towards the inevitable goal of our current intelligence and planetary dominance, but have stumbled into it by accident.

Although Gee is in no sense writing in anticipation of a theory such as England’s, his line of thinking does seem to pose obstacles that the latter’s hypothesis will have to address. If it is indeed the case that, as England has stated it, complex life arises inevitably from the physics of the universe, so that in his estimation:

You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.

Then England will have to address why it took so incredibly long- 4 billion years of the earth’s 4.5-billion-year history- for actual plants to make their debut, not to mention similar spans for other complex eukaryotes such as animals like ourselves.

Whether something like England’s inevitable complexity or Gee’s, not just blind, but drunk and random, evolutionary walk is ultimately the right way to understand evolution has implications far beyond evolutionary theory. Indeed, it might have deep implications for the status and distribution of life in the universe and even inform the way we understand the current development of new forms of artificial intelligence.

What we have discovered over the last decade is that bodies of water appear to be much more widespread, and to exist in environments far beyond those previously considered. Hence NASA’s recent announcement that we are likely to find microbial life in the next 10-30 years, both in our solar system and beyond. What this means is that England’s heat baths are likely ubiquitous, and if he’s correct, life can likely be found anywhere there is water- meaning nearly everywhere. There may even be complex lifelike forms that did not evolve through what we would consider normal natural selection at all.

If Gee is right, the universe might be ripe for life, but the vast, vast majority of that life will be microbial, and no amount of time will change that fate on most life-bearing worlds. If England in his minor key is correct, the universe should at least be filled with complex multicellular life forms such as ourselves. Yet it is the possibility that England is right in his major key- that consciousness, civilization, and computation might flow naturally from the need of organisms to resonate with their fluctuating environments- that, biased as we are, we likely find most exciting. Such a view leaves us with the prospect of many, many more forms of intelligence and technological civilizations like our own spread throughout the cosmos.

The fact that the universe so far has proven silent and devoid of any signs of technological civilization might give us pause when it comes to endorsing England’s optimism over Gee’s pessimism- unless, that is, there is some sort of limit or wall along our own perceived technological trajectory that can address the questions that emerge from the ideas of both. To that story, next time…


Is the Anthropocene a Disaster or an Opportunity?

Bradley Cantrell Pod-mod

Recently the journal Nature published a paper arguing that the Anthropocene, the proposed geological era in which the collective actions of the human species started to trump other natural processes in terms of their impact, began in the year 1610 AD. If that year leaves you, as it did me, scratching your head and wondering what you missed while you dozed off in your 10th grade history class, don’t worry, because 1610 is a year in which nothing much happened at all. In fact, that’s why the authors chose it.

There are a whole host of plausible candidates for the beginning of the Anthropocene that are much, much better known than 1610, starting relatively recently with the explosion of the first atomic bomb in 1945 and stretching backwards in time through the beginning of the industrial revolution (1750s), to the discovery of the Americas and the bridging of the old and new worlds in the form of the Columbian Exchange (1492), and back even further to some hypothetical year in which humans first began farming, or further still to the extinction of mega-fauna such as woolly mammoths, probably at the hands of our very clever, and therefore dangerous, hunter-gatherer forebears.

The reason Simon L. Lewis and Mark A. Maslin (the authors of the Nature paper) chose 1610 is that in that year there can be found a huge, discernible drop in the amount of carbon dioxide in the atmosphere- a rapid decline in greenhouse gases which probably plunged the earth into a cold snap, global warming in reverse. The reason this happened goes to the heart of a debate plaguing the environmental community, a debate that someone like myself, merely listening from the sidelines but concerned about the ultimate result, should have been paying closer attention to.

It seems that while I was focusing on how paleontologists had been undermining the myth of the “peaceful savage”, they had also called into question another widely held idea regarding the past, one deeply ingrained in the American environmental movement itself: that the Native Americans did not attempt the intensive engineering of the environment of the sort Europeans brought with them from the Old World, and therefore lived in a state of harmony with nature we should pine for or emulate.

As paleontologists and historians have come to understand the past more completely, we’ve learned that it just isn’t so. Native Americans across North America- not just in well known population centers such as Tenochtitlan in Mexico, the lost civilizations of the Pueblos (Anasazi) in the deserts of the southwest, or Cahokia in the Mississippi Valley, but in the woodlands of the northeast, in states such as my beloved Pennsylvania- practiced extensive agriculture and land management in ways that had a profound impact on the environment over the course of centuries.

Their tool in all this was the planful use of fire. As the naturalist and historian Scott Weidensaul tells the story in his excellent account of the conflicts between Native Americans and the Europeans who settled the eastern seaboard, The First Frontier:

Indians had been burning Virginia and much of the rest of North America annually for fifteen thousand years and it was certainly not a desert. But they had clearly altered their world in ways we are only still beginning to appreciate. The grasslands of the Great Plains, the lupine-studded savannas of northwestern Ohio’s “oak openings”, the longleaf pine forests of the southern coastal plain, the pine barrens of the Mid-Atlantic and Long Island- all were ecosystems that depended on fire and were shaped over millennia of use by its frequent and predictable application. (69)

The reason, I think, most of us have this vision in our heads of North America at the dawn of European settlement as an endless wilderness is not on account of some conspiracy by English and French settlers (though I wouldn’t put it past them). Rather, when the first settlers looked out onto a continent sparsely settled by noble “savages”, what they were seeing was a mere remnant of the large Indian population that had existed there only a little over a century before; the native peoples of North America whom Europeans struggled so hard to subdue and expel were refugees from a fallen civilization. As the historian Charles Mann has put it: when settlers gazed out into a seemingly boundless American wilderness they were not seeing a land untouched by the hands of man, a virgin land given to them by God, but a graveyard.

Given the valiant, ingenious, and indeed sometimes brutal resistance Native Americans put up against European (mostly English) encroachment, it seems highly unlikely that the United States and Canada would exist today in anything like their current form had the Indian population not been continually subject to collapse. What so ravaged the Native American population and caused their civilization to go into reverse were the same infectious diseases, unwittingly brought by Europeans, that were the primary culprit in the destruction of the even larger Indian civilizations in Mexico and South America. It was a form of accidental biological warfare that targeted Native Americans based upon their genomes, their immunity to diseases never having been shaped by contact with domesticated animals like pigs and cows, as had that of the Europeans who were set on stealing their lands.

Well, it wasn’t always accidental. For though it would be some time before we understood how infectious diseases worked, Europeans were well aware of their disproportionate impact on the Indians. In 1763 William Trent, in charge of Fort Pitt- the seed that would become Pittsburgh- while it was besieged by Indians, engaged in the first documented case of biological warfare. Weidensaul quotes one of Trent’s letters:

Out of regard to the [Indian emissaries] we gave them two blankets and a handkerchief out of the Small Pox Hospital. I hope it will have the desired effect. (381)

It’s from this devastating impact of infectious diseases- which decimated Native American populations and ended their management of the ecosystem through the systematic use of fire- that the authors of the Nature paper get their 1610 start date for the beginning of the Anthropocene. With the collapse of Indian fire ecology, eastern forests bloomed to such an extent that vast amounts of carbon otherwise left in the atmosphere were sucked up by the trees, with the effect that the earth rapidly cooled.

Beyond the desire to uncover the truth and to enter into sympathy with those of the past, which lies at the heart of the practice and study of history, this new view of the North American past has implications for the present. For the goal of returning- both the land itself and ourselves as individuals- to the purity of the wilderness, which has been at the heart of the environmentalist project since its inception in the 19th century, now encounters a problem with time. Where in the past does the supposedly pure natural state to which we are supposed to be returning actually exist?

Such an untrammeled state would seem to exist either before Native Americans’ development of fire ecology, or after their decimation by disease but before European settlement west of the Allegheny mountains. This, after all, was the period of America as endless wilderness that has left its mark on our imagination. That is, we should attempt to return as much of North America as is compatible with civilization to the state it was in after the collapse of the Native American population, but before the spread of Europeans into the American wilderness. The problem with such a goal is that this natural splendor may have hidden the fact that such an ecology was ultimately unsustainable in the wild state in which European settlers first encountered it.

As Tim Flannery pointed out in his book The Eternal Frontier: An Ecological History of North America and Its Peoples, the fire ecology adopted by Native Americans may have been filling a vital role, doing the job that should have been done by large herbivores- animals which North America, unlike other comparable continents, lacked, likely due to the effectiveness of the first wave of human hunters to enter it from Asia as the ultimate “invasive species”. Large herbivores cull the forests and keep them healthy, a role which in their absence can also be filled by fire.

Although projects are currently in the works to try to bring back some of these large foresters, such as the woolly mammoth, I am doubtful large populations will ever be restored in a world filled even with self-driving cars. Living in Pennsylvania I’ve had the misfortune of running into a mere hundred-pound-or-so white-tailed deer (or her running into me). I don’t even want to think about what a mammoth would do to my compact. Besides, woolly mammoths don’t seem to be creatures made for the age we’re headed into, which is one of likely continued, and extreme, warming.

Losing our romanticized picture of nature’s past means we are now also unclear as to its future. Some are using this new understanding of the ecological history of North America to make an argument against traditional environmentalism, arguing in essence that without a clear past to return to, the future of the environment is ours to decide.

Perhaps what this New Environmentalism signals is merely the normalization of North Americans’ relationship with nature, a move away from the idea of nature as a sort of lost Eden and towards something like ideas found elsewhere, which need not be any less reverential or religious. In Japan especially, nature is worshiped, but it is a natural world that has been deeply sculpted by human beings to serve their aesthetic and spiritual needs. This is the Anthropocene interpreted as opportunity rather than as unmitigated disaster.

Ultimately, I think it is far too soon to embrace without some small degree of skepticism the project of the New Environmentalism espoused by organizations such as the Breakthrough Institute, by the chief scientist and firebrand of the Nature Conservancy, Peter Kareiva, or by environmental journalists and scholars espousing those views such as Andrew Revkin, Emma Marris, or even the incomparable Diane Ackerman. For even if the New Environmentalism contains a degree of truth, it is inherently at risk of being captured by actors whose environmental concern is a mere smokescreen for economic interests, and it threatens, at the very least, to undermine long-held support for setting aside areas free from human development, whose stillness and silence many of us in our neon-drenched and ever-beeping world hold dear.

It would be wonderful if objective scientists could adjudicate disputes between traditional environmentalists and New Environmentalists expressing non-orthodox views on everything from hydraulic fracturing, to genetic engineering, to invasive species, to geo-engineering. The problem is that the very uncertainty at the heart of these types of issues probably makes definitive objectivity impossible. At the end of the day, the way one decides comes down to values.

Yet what the New Environmentalists have done that I find most intriguing is to challenge the mythology of Edenic nature, and just as in any other undermining of a ruling mythology, this has opened the door to both freedom and risk. On the basis of my own values, the way I would use that freedom would not be to hive off humanity from nature, to achieve the “de-ecologization of our material welfare”, as some New Environmentalists want to do, but for the artifact of human civilization itself to be made more natural.

Up until today humanity has gained its ascendancy over nature by assaulting it with greater and more concentrated physical force. We’ve dammed and redrawn the course of rivers, built massive levees against the sea, cleared the land of its natural inhabitants (nonhuman and human) and reshaped it to grow our food.

We’ve reached the point where we’re smarter than this now, and understand much better how nature achieves ends close to our own without resorting to excessively destructive and wasteful force. Termites achieve climate control through the geometry of their mounds rather than anything like our energy-intensive air conditioning. IBM’s Watson took a small city’s worth of electric power to beat its human opponents on Jeopardy!, opponents whose brains consumed no more power than a light bulb.

Although I am no great fan of the term “cyborg-ecology”, I am attracted to the idea as it is being developed by figures such as Bradley Cantrell, who propose that we use our new capabilities and understanding to create a flexible infrastructure that allows, for instance, a river to be a river rather than a mere “concrete channel”- a form we’ve preferred to a natural river because it is predictable while still providing for our needs. This is a suffocatingly narrow redefinition of nature that has come at great cost to the other species with which we share such habitats, and it ultimately causes large-scale problems that require yet more brute-force engineering for us to control once nature breaks free from our chains.

I can at least imagine an Anthropocene that provides a much richer and more biodiverse landscape than our current one, a world where we are much more planful about providing for creatures other than just our fellow humans. It would be a world where we perhaps break from environmentalism’s shibboleths and do things such as prudently and judiciously use genetic engineering to make the natural world more robust, and where scientists who propose that we use the tools our knowledge has given us to address plagues, not just our own but those of our fellow creatures, do not engender scorn but careful consideration of their proposals.

It would be a much greener world, where the cold kingdom of machines hadn’t replaced the living but was merely a temporary way station towards the rebirth of the biological world after our long, and almost terminal, assault upon it. If our strategies before have been divided between raping and subduing nature in light of our needs, or conversely, placing nature at such a remove from our civilization that she might be deemed safe from our impact, perhaps, having gained knowledge of nature’s secrets, we can now live not so much with her as a part of her, and with her as a part of us.

A Global History of Post-humans

Cave painting hand prints

One thing that certainly cannot be said of either the historian Yuval Harari or his new book Sapiens: A Brief History of Humankind is that they lack ambition. In Sapiens, Harari sets out to tell the story of humanity from our emergence on the plains of Africa up to the era in which we are living right now, a period he thinks is the beginning of the end of our particular breed of primate. His book ends with some speculations on our post-human destiny, whether we achieve biological immortality or manage to biologically and technologically engineer ourselves into an entirely different species. If you want to talk about establishing a historical context for the issues confronted by transhumanism, you can’t get better than that.

In Sapiens, Harari organizes the story of humanity by casting it within the framework of three revolutions: the Cognitive, the Agricultural, and the Scientific. The Cognitive Revolution is what supposedly happened somewhere between 70,000 and 30,000 years ago, when a suddenly very creative and cooperative homo sapiens came roaring out of Africa, using their unprecedented social coordination and technological flexibility to invade every ecological niche on the planet.

Armed with an extremely cooperative form of culture and the ability to make tools (especially clothing) to fit environments they had not evolved for, homo sapiens was able to move into more northern latitudes than their much more cold-adapted Neanderthal cousins, or to cast out on the seas to settle far-off islands and continents such as Australia, Polynesia, and perhaps even the Americas.

The speed with which homo sapiens invaded new territory was devastating for almost all other animals. Harari points out how the majority of large land animals outside of Africa (where animals had had enough time to evolve wariness of this strange hairless ape) disappeared not long after human beings arrived. And the casualties included other human species as well.

One of the best things about Harari is his ability to overthrow previous conceptions- as he does here with the romantic notion, held by some environmentalists, that “primitive” cultures lived in harmony with nature. Long before even the adoption of agriculture, let alone the industrial revolution, the arrival of homo sapiens proved devastating for every other species, including other hominids.

Yet Harari also defies intellectual stereotypes. He might not think the era in which the only human beings on earth were “noble savages” was a particularly good one for other species, but he does see it as having been a particularly good one for homo sapiens, at least compared to what came afterward, and up until quite recently.

Humans in the era before the Agricultural Revolution lived a healthier lifestyle than any since. They had a varied diet, worked on average only six hours a day, and were far more active than any of us in modern societies chained to our cubicles and staring at computer screens.

Harari also throws doubt on the argument, made most recently by Steven Pinker, that the era before states was one of constant tribal warfare and violence, suggesting that it’s impossible to get an overall impression of levels of violence from what ends up being a narrow range of human skeletal remains. The most likely scenario, he thinks, is that some human societies before agriculture were violent and some were not, and that even which societies were violent varied over time, rising and falling with circumstances.

From the beginning of the Cognitive Revolution up until the Agricultural Revolution, starting around 10,000 years ago, things were good for homo sapiens. But as Harari sees it, with the rise of farming things really went downhill for us as individuals, something he distinguishes from our fortunes as a species.

Harari is adamant that while the Agricultural Revolution may have had the effect of increasing our numbers, and gave us all the wonders of civilization and the beauty of high culture, its price, on the bodies and minds of countless individuals, both human and animal, was enormous. Peasants were smaller, less healthy, and died younger than their hunter-gatherer ancestors. The high culture of the elites of ancient empires was bought at the price of the systematic oppression of the vast majority of human beings who lived in those societies. And, in the first instance I can think of in which the tale is told within the context of a global history of humanity, Harari recounts not just our oppression of each other, but of our domesticated animals as well.

He shows us how inhumane animal husbandry was long before our era of factory farming (which is even worse): it was these more “natural”, “organic” farmers who began practices such as penning animals in cages, separating mothers from their young, castrating males, and cutting off the noses or cutting out the eyes of animals such as pigs, so they could better serve their “divinely allotted” function of feeding human mouths and stomachs.

Yet this raises the question: if the Agricultural Revolution was so bad for the vast majority of human beings, and for animals, with the exception of a slim class at the top of the pyramid, why did it not only last but spread, until only a tiny minority of homo sapiens in remote corners continued to be hunter gatherers while the vast majority, up until quite recently, were farmers?

Harari doesn’t know. It was probably a very gradual process, but once human societies had crossed a certain threshold there was no going back- our numbers were simply too large to support a reversion to hunting and gathering. For one of the ironies of the Agricultural Revolution is that while it made human beings unhealthy, it also drove up birthrates. This probably happened through rational choice. A hunter-gatherer family would likely space their children, whereas a peasant family needed all the hands it could produce, something that merely drove the need for more children, and was only checked by the kinds of famines Malthus had pegged as the defining feature of agricultural societies, and which we escaped only recently via the industrial revolution.

Perhaps, as Harari suggests, “We did not domesticate wheat. It domesticated us.” (81) At the only level evolution cares about- the propagation of our genes- wheat was a benefit to humanity, but it came at the cost of much human suffering and backbreaking labor, through which we rid wheat of its rivals and spread the plant all over the globe. The wheat got the better deal, no matter how much we love our toast.

It was on the basis of wheat, and a handful of other staple crops (rice, maize, potatoes) that states were first formed. Harari emphasizes the state’s role as record keeper, combined with enforcer of rules. The state saw its beginning as a scorekeeper and referee for the new complex and crowded societies that grew up around farming.

All the greatest works of world literature, not to mention everything else we’ve ever read, can be traced back to this role of keeping accounts, of creating long-lasting records, which led the nascent states that grew up around agriculture to create writing. Shakespeare’s genius can trace its way back to the 7th century B.C. equivalent of an IRS office. Along with writing, the first states also created numbers and mathematics; bureaucrats have been bean counters ever since.

The quirk of human nature that for Harari made both the Cognitive and Agricultural Revolutions possible, and led to much else besides, was our ability to imagine things that do not exist- by which he means almost everything we find ourselves surrounded by: not just religion, but the state and its laws, and everything in between, has been akin to a fantasy game. Indeed, Harari left me feeling that the whole of both premodern and modern societies was at root little but a game of pretend played by grown-ups, with adulthood perhaps nothing more than agreeing to play along with the same game everyone else is engaged in. He was especially compelling and thought-provoking when it came to that ultimate modern fantasy and talisman that all of us, from Richard Dawkins to Abu Bakr al-Baghdadi, believe in; namely money.

For Harari the strange thing is that: “Money is the only trust system created by humans that can bridge almost any cultural gap, and does not discriminate on the basis of religion, gender, race, age or sexual orientation.” In this respect it is “the apogee of human tolerance” (186). Why then have so many of its critics down through the ages denounced money as the wellspring of human evil? He explains the contradiction this way:

For although money builds universal trust between strangers, this trust is invested not in humans, communities or sacred values, but in money itself and the impersonal systems that back it. We do not trust the stranger or the next-door neighbor- we trust the coins they hold. If they run out of coins, we run out of trust. (188)

We ourselves don’t live in the age of money so much as the age of Credit (or capital), and it is this Credit which Harari sees as one leg of the three-legged stool which he thinks defines our own period of history. For him we live in the age of the third revolution in human history, the age of the Scientific Revolution that has followed his other two. It is an age built out of an alliance of the forces of Capital, Science, and Empire.

What has made ours the age of Credit, and different from the material cultures that came before where money certainly played a prominent role, is that we have adopted lending as the route to economic expansion. But what separates us from past eras that engaged in lending as well is that ours is based on a confidence that the future will be not just different but better than the past, a confidence that has, at least so far, panned out over the last two centuries largely through the continuous advances in science and technology.

The feature that really separated the scientific revolution from earlier systems of knowledge, in Harari’s view, grew out of the recognition in the 17th century of just how little we actually knew:

The Scientific Revolution has not been a revolution of knowledge. It has above all been a revolution of ignorance. The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions. (251)

Nothing perhaps better captures the early modern recognition of this ignorance than the discovery of the New World, which set us down our own unique historical path towards Empire. Harari sees Credit and Science both feeding and being fed by the intermediary institution of Empire, and indeed argues that the history of capitalism, along with that of science and technology, cannot be understood without reference to the way the state and imperialism have shaped both.

The imperialism of the state has been necessary to enable the penetration of capitalism and secure its gains, and the power of advanced nations to impose these relationships has been largely the result of more developed science and technology, which the state itself has funded. Science was also sometimes used not merely to understand the geography and culture of subject peoples in order to exploit them, but, in the form of 19th and early 20th century racism, as a justification for that exploitation itself. And the quest for globe-spanning Empire that began with the Age of Exploration in the 15th century is still ongoing.

Where does Harari think all this is headed?

 Over millennia, small simple cultures gradually coalesce into bigger and more complex civilizations, so that the world contains fewer and fewer mega-cultures each of which is bigger and more complex. (166)

Since around 200 BC, most humans have lived in empires. It seems likely that in the future, too, most humans will live in one. But this time the empire will truly be global. The imperial vision of domination over the entire world could be imminent. (207)

Yet Harari questions whether the scientific revolution, the new age of economic prosperity, and the hunt for empire haven’t brought about a similar amount of misery, if not quite suffering, as the Agricultural Revolution that preceded them.

After all, the true quest of modern science is really not power, but immortality. In Harari’s view we are on the verge of fulfilling the goal of the “Gilgamesh Project”.

Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself. (267)

The quest after wealth, too, seems to be reaching a point of diminishing returns. If the objective of material abundance was human happiness, we might ask why so many of us are miserable. The problem, Harari thinks, might come down to biologically determined hedonic set-points that leave a modern office worker surrounded by food and comforts ultimately little happier than his peasant ancestor who toiled for a meager supper from sunrise to sunset. Yet perhaps the solution to this problem is at our fingertips as well:

 There is only one historical development that has real significance. Today when we realize that the keys to happiness are in the hands of our biochemical system, we can stop wasting our time on politics, social reforms, putsches, and ideologies and focus instead on the only thing that truly makes us happy: manipulating our biochemistry.  (389)

Still, even should the Gilgamesh Project succeed, or we prove capable of mastering our biochemistry, Harari sees a way that life’s Sisyphean nature may continue to have the last laugh. He writes:

Suppose that science comes up with cures for all diseases, effective anti-ageing therapies and regenerative treatments that keep people indefinitely young. In all likelihood, the immediate result will be an epidemic of anger and anxiety.

Those unable to afford the new treatments- the vast majority of people- will be beside themselves with rage. Throughout history, the poor and oppressed comforted themselves with the thought that at least death is even-handed- that the rich and powerful will also die. The poor will not be comfortable with the thought that they have to die, while the rich will remain young and beautiful.

But the tiny minority able to afford the new treatments will not be euphoric either. They will have much to be anxious about. Although the new therapies could extend life and youth they will not revive corpses. How dreadful to think that I and my loved ones can live forever, but only if we don’t get hit by a truck or blown to smithereens by a terrorist! Potentially a-mortal people are likely to grow averse to taking even the slightest risk, and the agony of losing a spouse, child or close friend will be unbearable. (384-385)

Along with this, Harari reminds us that it might not be biology that is most important for our happiness, but our sense of meaning. Given that he thinks all of our sources of meaning are at bottom socially constructed illusions, he concludes that perhaps the only philosophically defensible position might be some form of Buddhism- to stop all of our chasing after desire in the first place.

The real question is whether the future will show if all our grasping has ended up in us reaching our object or has led us further down the path of illusion and pain that we need to outgrow to achieve a different kind of transcendence.

 

Sex and Love in the Age of Algorithms

Eros and Psyche

How’s this for a 21st century Valentine’s Day tale: a group of religious fundamentalists want to redefine human sexual and gender relationships based on a more than 2,000-year-old religious text. Yet instead of doing this by aiming to seize hold of the cultural and political institutions of society, a task they find impossible, they create an algorithm, and once people enter its world their experience is shaped by religiously derived assumptions users cannot see. People who enter this world have no control over their actions within it, and surrender their autonomy for the promise of finding their “soul mate”.

I’m not writing a science-fiction story- it’s a tale that’s essentially true.

One of the first places, perhaps the only place, where the desire to compress human behavior into algorithmically processable and rationalized “data” has run into a wall is the ever so irrational realm of sex and love. Perhaps I should have titled this piece “Cupid’s Revenge”, for the domain of sex and love has proved itself so unruly and non-computable that what is now almost unbelievable has happened: real human beings have been brought back into the process of making actual decisions that affect their lives, rather than relying on silicon oracles to tell them what to do.

It’s a story not much known, and therefore important to tell. The story begins with the exaggerated claims of one of the first and biggest online dating sites: eHarmony. Founded in 2000 by Neil Clark Warren, a clinical psychologist and former marriage counselor, eHarmony promoted itself as more than just a mere dating site, claiming that it had the ability to help those using its service find their “soul mate”. As their senior research scientist, Gian C. Gonzaga, would put it:

 It is possible “to empirically derive a matchmaking algorithm that predicts the relationship of a couple before they ever meet.”

At the same time it made such claims, eHarmony was also very controlling in the way its customers were allowed to use its dating site. Members were not allowed to search for potential partners on their own, but were directed to “appropriate” matches chosen, on the basis of a 200-item questionnaire, by the site’s algorithm, which remained opaque to its users. This model of what dating should be was doubtless driven by Warren’s religious background, for in addition to his psychological credentials, Warren was also a Christian theologian.

By 2011 eHarmony had garnered the attention of sceptical social psychologists, most notably Eli J. Finkel, who, along with his co-authors, wrote a critical piece for the American Psychological Association that year on eHarmony and related online dating sites.

What Finkel wanted to know was whether claims like eHarmony’s, that it had discovered some ideal way to match individuals to long-term partners, actually stood up to critical scrutiny. What he and his co-authors concluded was that while online dating had opened up a new frontier for romantic relationships, it had not solved the problem of how to actually find the love of one’s life. Or as he put it in a more recent article:

As almost a century of research on romantic relationships has taught us, predicting whether two people are romantically compatible requires the sort of information that comes to light only after they have actually met.

Faced with critical scrutiny, eHarmony felt compelled to do something, to my knowledge, none of the programmers of the various algorithms that now mediate much of our relationship with the world have done; namely, to make the assumptions behind their algorithms explicit.

As Gonzaga explained it, eHarmony’s matching algorithm was based on six key characteristics of users, including things like “level of agreeableness” and “optimism”. Yet as another critic of eHarmony, Dr. Reis, told Gonzaga:

That agreeable person that you happen to be matching up with me would, in fact, get along famously with anyone in this room.

Still, the major problem critics found with eHarmony wasn’t just that it made exaggerated claims for the effectiveness of its romantic algorithms, which at best amounted to a version of skimming; it’s that it asserted nearly complete control over the way its users defined what love actually was. As is the case with many algorithms, the one used by eHarmony was a way for its designers and owners to impose, rightly or wrongly, their own value assumptions about the world on those using it.
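To make Reis’s objection concrete, here is a minimal sketch, in Python, of a questionnaire-style trait-similarity matcher of the general kind critics describe. The trait names, the 0-1 scale, and the scoring rule are hypothetical stand-ins of my own, not eHarmony’s actual, undisclosed formula.

```python
# Hypothetical sketch of a questionnaire-based matcher: score a pair
# by weighted similarity of self-reported traits. Trait names, weights,
# and the scoring rule are illustrative assumptions, not a real site's code.

TRAITS = ["agreeableness", "optimism", "emotional_stability"]

def compatibility(a, b, weights=None):
    """Return a 0-1 'compatibility' score: 1 means identical trait profiles.

    The model's built-in value assumption is that similar profiles
    make good partners, which is exactly what critics questioned.
    """
    weights = weights or {t: 1.0 for t in TRAITS}
    total = sum(weights.values())
    score = sum(weights[t] * (1 - abs(a[t] - b[t])) for t in TRAITS)
    return score / total

# Toy questionnaire results on a 0-1 scale
alice = {"agreeableness": 0.9, "optimism": 0.8, "emotional_stability": 0.7}
bob   = {"agreeableness": 0.85, "optimism": 0.75, "emotional_stability": 0.6}
carol = {"agreeableness": 0.2, "optimism": 0.3, "emotional_stability": 0.9}

print(round(compatibility(alice, bob), 3))    # similar profiles -> 0.933
print(round(compatibility(alice, carol), 3))  # dissimilar profiles -> 0.533
```

Under a rule like this, a highly agreeable, optimistic profile scores well against nearly every similar profile, which is precisely Reis’s point: profile similarity is easy to compute, but it is not the same thing as romantic compatibility.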

And like many classic romantic tales, this one ended with the rebellion of messy human emotion against reason and paternalistic control. Social psychologists weren’t the only ones who found eHarmony’s model constraining, nor the first to notice its flaws. One of the founders of an alternative dating site, Christian Rudder of OkCupid, has noted that much of what his organization has done was in reaction to the exaggerated claims for the efficacy of eHarmony’s algorithms and the top-down constraints imposed by its creators. But it is another, much maligned dating site, Tinder, that proved to be the real rebel in this story.

Critics of Tinder, where users swipe through profile pictures to find potential dates, have labeled the site a “hook-up” site that encourages shallowness. Yet Finkel concludes:

Yes, Tinder is superficial. It doesn’t let people browse profiles to find compatible partners, and it doesn’t claim to possess an algorithm that can find your soulmate. But this approach is at least honest and avoids the errors committed by more traditional approaches to online dating.

And appearance-driven sites are unlikely to be the last word in online dating, especially for older Romeos and Juliets who would like to go a little deeper than looks. The psychologist Robert Epstein, working at the MIT Media Lab, sees two up-and-coming trends that will likely further humanize the 21st century dating experience. The first is the rise of virtual dating environments that are more than video games. As he describes it:

….so at some point you will be able to have, you know, something like a real date with someone, but do it virtually, which means the safety issue is taken care of and you’ll find out how you interact with someone in some semi-real setting or even a real setting; maybe you can go to some exotic place, maybe you can even go to the Champs-Élysées in Paris or maybe you can go down to the local fast-food joint with them, but do it virtually and interact with them.

The other, just as important but less tech-sexy, change Epstein sees coming is bringing friends and family back into the dating experience:

Right now, if you sign up with the eHarmony or match.com or any of the other big services, you’re alone—you’re completely alone. It’s like being at a huge bar, but going without your guy friends or your girl friends—you’re really alone. But in the real world, the community is very helpful in trying to determine whether someone is right for you, and some of the new services allow you to go online with friends and family and have, you know, your best friend with you searching for potential partners, checking people out. So, that’s the new community approach to online dating.

As has long been the case, sex and love have been among the first explorers moving out into a previously uncharted realm of human possibility. Yet because of this, sex and love are also the proverbial canary in the coal mine, informing us of potential dangers. The experience of online dating suggests that we need to be sceptical of the exaggerated claims of the various algorithms that now mediate much of our lives, and be privy to their underlying assumptions. To be successful, algorithms need to bring our humanity back into the loop rather than regulate it away as something messy, imperfect, irrational and unsystematic.

There is another lesson here as well: the more something becomes disconnected from our human capacity to extend trust through person-to-person contact, and through tapping into the wisdom of our own collective networks of trust, the more dependent we become on overseers who, in exchange for protecting us from deception, demand the kinds of intimate knowledge from us only friends and lovers deserve.

 

Edward O. Wilson’s Dull Paradise

Garden of Eden

In all sincerity I have to admit that there is much I admire about the biologist Edward O. Wilson. I can only pray that I not only live into my 80s, but still possess the intellectual stamina to write what are at least thought-provoking books when I get there. I also wish I may still have the balls to write a book with the title of Wilson’s latest- The Meaning of Human Existence- for publishing with an appellation like that would mean I wasn’t afraid I would disappoint my readers, and Wilson did indeed leave me wondering if the whole thing was worth the effort.

Nevertheless, I think Wilson opened up an important alternative future that is seldom discussed here- namely, what if we aimed not at a supposedly brighter, so-called post-human future but at keeping things the same? Well, there would be some changes: no extremes of human poverty, along with the restoration of much of the natural environment to its pre-industrial health. Still, we ourselves would aim to stay largely the same human beings who emerged some 100,000 years ago- flaws and all.

Wilson calls this admittedly conservative vision paradise, and I’ve seen his eyes light up like a child contemplating Christmas when using the word in interviews. Another point that might be of interest to this audience is who he largely blames for keeping us from entering this Shangri-la; archaic religions and their “creation stories.”

I have to admit that I find the idea of trying to preserve humanity as it is to be a valid alternative future. After all, “evolve or die” isn’t really the way nature works. Typically the “goal” of evolution is to find a “design” that works and then stick with it for as long as possible. Since we now dominate the entire planet, and our numbers far outstrip those of any other large animal, it seems hard to assert that we need a major, and likely risky, upgrade. Here’s Wilson making the case:

While I am at it, I hereby cast a vote for existential conservatism, the preservation of biological human nature as a sacred trust. We are doing very well in terms of science and technology. Let’s agree to keep that up, and move both along even faster. But let’s also promote the humanities, that which makes us human, and not use science to mess around with the wellspring of this, the absolute and unique potential of the human future. (60)

It’s an idea that rings true to my inner Edmund Burke, and sounds simple, doesn’t it? And on reflection it would be, if human beings were bison, blue whales, or gray wolves. Indeed, I think Wilson has drawn this idea of human preservation from his lifetime of very laudable work on biodiversity. Yet had he reflected upon why efforts at preservation fail, when they do, he would have realized that the problem isn’t the wildlife itself, but the human beings whose value systems pull in the opposite direction. That is, humans, though we are certainly animals, aren’t wildlife, in the sense that we take destiny into our own hands, even if doing so is sometimes for the worse.

Wilson seems to think that it’s quite a short step from asserting the “preservation of biological human nature as a sacred trust” as a goal to gaining universal assent to it. The problem is that there is no widespread agreement over what human nature even is, and even if you had such agreement, how in the world do you go about enforcing it on the minority who refuse to adhere to it? How far should we be willing to go to prevent persons from willingly crossing some line that defines what a human being is? And where exactly is that line in the first place? Wilson thinks we’re near the end of the argument when we have only just taken our seat at the debate.

Strange thing is, the very people who would likely lean naturally towards the kind of biological conservatism Wilson hopes “we” will ultimately choose are the sorts of traditionally religious persons he thinks are at the root of most of our conflicts. Here again is Wilson:

Religious warriors are not an anomaly. It is a mistake to classify believers of particular religions and dogmatic religion-like ideologies into two groups, moderates versus extremists. The true cause of hatred and religious violence is faith versus faith, an outward expression of the ancient instinct of tribalism. Faith is the one thing that makes otherwise good people do bad things. (154)

For Wilson, a religious group “defines itself foremost by its creation story, the supernatural narrative that explains how human beings came into existence.” (151) The trouble with this is that it’s not even superficially true. Three of the world’s religions that have been busy killing one another over the last millennium- Judaism, Christianity and Islam- all have the same creation story. Wilson knows a hell of a lot more about ants and evolution than he does about religion or even world history. And while religion is certainly the root of some of our tribalism, which I agree is the deep and perennial human problem, it’s far from the only source, and very few of our tribal conflicts have anything to do with the fight between human beings over our origins in the deep past. How about class conflict? Or racial conflict? Or nationalist conflicts where the two sides profess not only the exact same religion but the exact same sect- such as the current fight between the two Christian Orthodox nations of Russia and Ukraine? If China and Japan someday go to war it will not be a horrifying replay of the Scopes Monkey Trial.

For a book called The Meaning of Human Existence, Wilson’s ideas have very little explanatory power when it comes to anything other than our biological origins, along with some quite questionable ideas regarding the origins of our capacity for violence. That is, the book lacks depth, and because of this I found it, well… dull.

Nowhere was I more hopeful that Wilson would have something interesting and different to say than when it came to the question of extraterrestrial life. Here we have one of the world’s greatest living biologists, a man who had spent a lifetime studying ants as an alternative route to the kind of eusociality possessed only by humans, the naked mole rat, and a handful of insects. Here was a scientist who was clearly passionate about preserving the amazing diversity of life on our small planet.

Yet Wilson’s E.T.s are land dwellers, relatively large, biologically audiovisual; “their head is distinct, big, and located up front” (115); they have moderate teeth and jaws, high social intelligence, and “a small number of free locomotory appendages, levered for maximum strength with stiff internal or external skeletons composed of hinged segments (as by human elbows and knees), and with at least one pair of which are terminated by digits with pulpy tips used for sensitive touch and grasping.” (116)

In other words they are little green men.

What I had hoped was that Wilson would use his deep knowledge of biology to imagine alternative paths to technological civilization. Couldn’t he have imagined a hive-like species that evolves in tandem with its own technological advancement? Or maybe some larger form of insect-like animal which doesn’t just have an instinctive repertoire of things it builds, but constantly improves upon its own designs and explores the space of possible technologies? Or an aquatic species that develops something like civilization through sea-herding and ocean farming? How about species that communicate not audio-visually but through electrical impulses, the way our computers do?

After all, nature on earth is pretty weird. There’s not just us, but termites that build air-conditioned skyscrapers (at least at their scale), whales with culturally specific songs, and strange little things that eat and excrete electrons. One might guess that life elsewhere will be even weirder. Perhaps my problem with The Meaning of Human Existence is that it just wasn’t weird enough- not weird enough to capture the worlds of tomorrow and elsewhere, or even the one we’re living in right now.

 

There are two paths to superlongevity: only one of them is good

Memento Mori Ivories

Looked at in the longer historical perspective, we have already achieved something our ancestors would consider superlongevity. In the UK, life expectancy at birth averaged around 37 in 1700. It is roughly 81 today. The extent to which this reflects decreased child mortality versus an increase in the survival rate of the elderly I’ll get to a little later, but for now, just try to get your head around the fact that we have managed to more than double the life expectancy of human beings in the space of three centuries.

By themselves the gains we have made in longevity are pretty incredible, but we have also managed to redefine what it means to be old. A person in 1830 was old at forty not just by the averages, but by the condition of his body. A revealing game to play is to find pictures of adults from the 19th century and try to guess their ages. My bet is that you, like me, will consistently estimate the people in these photos to be older than they actually were when the picture was taken. This isn’t a reflection of their lack of Botox and Photoshop so much as the fact that they were missing the miracle of modern dentistry, and were felled, or at least weathered, by diseases which we now consider mere nuisances. If I were my current age in 1830 I would be missing most of my teeth, and the pneumonia I caught a few years back would surely have killed me, pneumonia having been a major cause of death in the age of Darwin and Dickens.

Sixty or even seventy year olds today are probably in the state of health a forty year old was in the 19th century. In other words, we’ve increased the healthspan, not just the lifespan. Sixty really is the new forty, though what is important is how you define “new”. Yet get past eighty in the early 21st century and you’re almost right back in the world where our ancestors lived. Experiencing the debilitations of old age is the fate of those of us lucky enough to survive through the pleasures of youth and middle age. The disability of the old is part of the tragic aspect of life, and as always when it comes to giving poetic shape to our comic/tragic existence, the Greeks got to the essence of old age with their myth of Tithonus.

Tithonus was a youth who had the ill fortune of inspiring the love of the goddess of the dawn, Eos. (Love affairs between gods and mortals never end well.) Eos asked Zeus to grant the youth immortality, which he did, but, of course, not in the way Eos intended. Tithonus would never die, but he would also continue to age, becoming not merely old and decrepit but eventually shriveling away to a grasshopper hugging a room’s corner. It is best not to ask the gods for anything.

Despite our successes, those of us lucky enough to live into our 7th and 8th decades still end up like poor old Tithonus. The deep lesson of the ancient myth still holds- longevity is not worth as much as we might hope if not also combined with the health of youth, and despite all of our advances, we are essentially still in Tithonus’ world.

Yet perhaps not for long. At least if one believes the story told by Jonathan Weiner in his excellent book Long for this World. I learned much about our quest for long life and eternal youth from Long for this World, both its religious and cultural history and the trajectory and state of its science. I never knew that Jewish folklore had a magical city called Luz, where the death unleashed in Eden was prevented from entering, and which existed until all its inhabitants became so bored that they walked out from its walls and were struck down by the Angel of Death waiting eagerly outside.

I did not know that Descartes, who had helped unleash the scientific revolution, thought that gains in knowledge were growing so fast that he would live to be 1,000. (He died in 1650 at 54.) I did not realize that two other key figures of the scientific revolution, Roger and Francis Bacon (no relation), thought that science would restore us to our prelapsarian knowledge, the knowledge before the fall, which would allow us to live forever, or the depth to which very different Chinese traditions had no guilt at all about human immortality and pursued the goal with all sorts of elixirs and practices, none of which, of course, worked. I was especially taken with the story of how Pennsylvania’s most famous son- Benjamin Franklin- wanted to be “pickled” and awoken a century later.

Reviewing the past, when even ancient Egyptian hieroglyphs offer up recipes for “guaranteed to work” wrinkle creams, shows us just how deeply human the longing for agelessness is. It wasn’t invented by Madison Avenue or Dr. Oz, even if the ancients’ attempts to find a fountain of youth seem no less silly than many of our own. The question, I suppose, is the one that most risks the accusation that one is a fool: “Is this time truly different?” Are we, out of all the generations that came before us believing they had found the route to human “immortality” (and every generation since the rise of modern science has had those who thought so), actually the ones who will achieve this dream?

Long for this World is at its heart a serious attempt to grapple with this question and tries to give us a clear picture of longevity science built around the theoretical biologist, Aubrey de Grey, who will either go down in history as a courageous prophet of a new era of superlongevity, or as just another figure in our long history of thinking biological immortality is at our fingertips when all we are seeing is a mirage.

One thing we have on our ancestors who chased this dream is that we know much, much more about the biology of aging. Darwinian evolution allowed us to conceive non-poetic theories on the origins of death. In the 1880s the German biologist August Weismann, in his essay “Upon the Eternal Duration of Life”, provided a kind of survival-of-the-fittest argument for death and aging. Even an ageless creature, Weismann argued, would over time have to absorb multiple shocks and eventually end up disabled. The longer something lives, the more crippled and worn out it becomes. Thus, it is in the interest of the species that death exists to clear the world of these disabled- a very damned German idea, the whole thing.

Just after World War II the biologist Peter Medawar challenged Weismann’s view. For Medawar, if you look at any species, selective pressures are really only operating when the organism is young. Those who survive long enough to breed are the only ones that really count when it comes to natural selection. Like James Dean or Marilyn Monroe, nature is just fine if we exit the world in the bloom of youth- as long, that is, as we have passed on our genes.

In other words, healthful longevity has not really been something natural selection has been selecting most organisms for, and because of this it hasn’t been selecting against the bad things that can happen to old organisms either, as we’re finding when, by saving people from heart attacks in their 50s, we destine them to die of diseases that were rare or unknown in the past, like Alzheimer’s. In a sense we’re the victims of natural selection not caring about the health of those past reproductive age, or their longevity.

Well, this is only partly true. Organisms that live in conditions where survival in youth is more secure end up with stretched longevity for their size. Some bats can live for decades, while similarly sized mice have a lifespan of only a couple of years. Tortoises can live for well over a century, while alligators of the same weight live 30 to 50 years.

Stretching healthful longevity is also something that occurs when you starve an animal. We’ve known for decades that lifespan (in other animals at least) can be increased through caloric restriction. Although the mechanism is unclear, the Darwinian logic is not: under conditions of starvation it’s a bad idea to breed, and the body seems to respond by slowing development, waiting for the return of food and a good time to mate.

Thus, there is no such thing as a death clock; lifespan is malleable and can be changed if we just learn how to work the dials. We should have known this from our historical experience over the last two hundred years, in which we doubled the human lifespan, but now we know that nature itself does it all the time, and not, as we do, by addressing the symptoms of aging, but by resetting the clock of life itself.

We might find it easy to reset our aging clock ourselves if there weren’t multiple factors playing a role in its ticking. Aubrey de Grey has identified seven, the most important of which (excluding cancerous mutations) are probably the accumulation of “junk” within cells and the development of harmful “cross links” between cells. The strange thing about these is that they are not something that suddenly appears when we are actually “old” but are there all along, only reaching levels where they become noticeable and start to cause problems after many decades. We start dying the day we are born.

As we learn in Long for This World, there is hope that someday we may be able to effectively intervene against all these causes of aging. Every year the science needed to do so advances. Yet as Aubrey de Grey has indicated, the greatest threat to this quest for biological immortality is something we are all too familiar with: cancer.

The possibility of developing cancer emerges from the very way our cells work. Over a lifetime our trillions of cells replicate themselves an even more mind-bogglingly high number of times. It is almost impossible that every copying error will be caught before it takes on a life of its own and becomes a cancerous growth. Increasing lifespan only increases the amount of time in which such copying errors can occur.

It’s in Aubrey de Grey’s solution to this last and most serious of superlongevity’s medical hurdles that Weiner’s faith in the sense of that project breaks down, as does mine. De Grey’s cure for cancer goes by the name of WILT: whole-body interdiction of the lengthening of telomeres. A great deal of the cancers that afflict human beings achieve their deadly replication without limit by taking control of the telomerase gene. De Grey’s solution is to strip every human cell of the telomerase gene, something that, even if successful in preventing cancerous growths, would also leave us unable to replenish our red and white blood cells. In order to allow us to live without this capacity, de Grey proposes regular infusions of stem cells. What this leaves us with would be a life of constant chemotherapy-like and invasive medical interventions just to keep us alive. In other words, a life in which even healthy people relate to their bodies, and are kept alive by, medical interventions that are now only experienced by the terminally ill.

I think what shocks Weiner about this last step in SENS is that it underscores just how radical the medical requirements of engineering superlongevity might become. It’s one thing to talk about strengthening the cell’s junk collector, the lysosome, by adding an enzyme or through some genetic tweak; it’s another to talk about removing the very cells and structures which define human biology, blood cells and platelets, which have always been essential for human life and health.

Yet WILT struck me with somewhat different issues and questions. Here’s how I have come to understand it. For simplicity’s sake, we might be said to have two models of healthcare, both of which have contributed to the gains we have seen in human health and longevity since 1800. As is often noted, a good deal of this gain in longevity was a consequence of improving childhood mortality. Having fewer and fewer people die at the age of five drastically improves the average lifespan. We made these gains largely through public health: things like drastically improved sanitation, potable water, vaccinations, and, in the 20th century, antibiotics.

This set of improvements in human health was cheap, “easy,” and either comprised of general environmental conditions or administered at most annually, like the flu shot. These features allowed this first model of healthcare to be distributed broadly across the population, leading to increased longevity by saving the lives primarily of the young. In part these improvements, and above all the development of antibiotics, also allowed longevity increases at the older end of the scale, which, although less pronounced than the improvements in child mortality, are nonetheless very real. This is my second model of healthcare, and it includes everything from open heart surgery, to chemo and radiation treatments for cancer, to lifelong prescription drugs to treat chronic conditions.

As opposed to the first model, the second one is expensive, relatively difficult, and varies greatly among different segments of the population. My amoxicillin and Larry Page’s amoxicillin are the same, but the medical care we would each receive to treat something like cancer would be radically different.

We actually are making greater strides in the battle against cancer than at any time since Nixon declared war on the scourge back in the 1970s. A new round of immunotherapy drugs is proving so successful against a host of different cancers that John LaMattina, former head of research and development for Pfizer, has stated that “We are heading towards a world where cancer will become a chronic disease in much the same way as we have seen with diabetes and HIV.”

The problem is the cost, which can range up to $150,000 per year. The new drugs are so expensive that the NHS has reduced the amount it is willing to spend on them by 30 percent. Here we are running up against the limits of the second model of healthcare, a limit that at some point will force societies to choose between providing life-preserving care for all, or only for those rich enough to afford it.

If the superlongevity project is going to be a progressive project, it seems essential to me that it look like the first model of healthcare rather than the second. Otherwise it will either leave us with divergences in longevity within and between societies that make us long nostalgically for the “narrowness” of the current gap between today’s poorest and richest societies, or it will bankrupt countries that seek to extend increased longevity to everyone.

This would require a U-turn from the trajectory of healthcare today, which is dominated and distorted by the lucrative world of the second model. As an example of this distortion: the physicist Paul Davies is working on a new approach to cancer that involves attempting to attack the disease with viruses. If successful, this would be a good example of model one. Using viruses to treat cancer would likely be much cheaper than current approaches involving radiation, chemotherapy, and surgery, because viruses can self-replicate after being engineered rather than needing to be expensively and painstakingly constructed in drug labs. The problem is that it’s extremely difficult for Davies to get funding for such research precisely because there isn’t that much money to be made in it.

In an interview about his research, Davies compared his plight to how drug companies treat aspirin. There’s good evidence to show that plain old aspirin might be an effective preventative against cancer. Sadly, it’s almost impossible to find funding for large scale studies of aspirin’s efficacy in preventing cancer because you can buy a bottle of the stuff for a little over a buck, and what multi-billion dollar pharmaceutical company could justify profit margins as low as that?

The distortions of the second model are even more in evidence when it comes to antibiotics. Here is one of the few places where the second model of healthcare is dependent upon the first. As a chilling article by Maryn McKenna drives home, we are in danger of letting the second model lead to the nightmare of a sudden, sharp reversal of the health and longevity gains of the last century.

We are only now waking up to the full danger implicit in antibiotic resistance. We’ve so overprescribed these miracle treatments, both to ourselves and to our poor farm animals, which we treat as mere machines and “grow” in hellish, unsanitary conditions, that bacteria have evolved to no longer be treatable with the suite of antibiotics we have, which are now a generation old or older. If you don’t think this is a big deal, think about what it means to live in a world where a toothache can kill you and surgeries and chemotherapy can no longer be performed. A long winter of antibiotic resistance would mean that many of our dreams of superlongevity this century would be moot. It would mean many of us might die quite young from common illnesses, or from the surgical and treatment procedures that have combined to give us the longevity we have now.

Again, part of the reason we don’t have alternatives to legacy antibiotics is that pharmaceutical companies don’t see any profit in them as opposed to, say, Viagra. But the other part of the reason is just as interesting. It’s that we have overtreated ourselves because we find the discomfort of being even mildly sick for a few days unbearable. It’s also because we want nature, in this case our farm animals, to function like machines. Mechanical functioning means regularity, predictability, standardization, and efficiency, and we’ve had to so distort the living conditions, food, and even genetics of the animals we raise that they would not survive without our constant medical interventions, including antibiotics.

There is a great deal of financial incentive to build solutions to human medical problems around interminable treatments rather than once-and-done cures or something done only periodically. Constant consumption and obsolescence guarantee revenue streams. Not too long ago, Danny Hillis, whom I otherwise have the deepest respect for, gave an interview on, among other things, proteomics, which, for my purposes here, essentially means the minute analysis of bodily processes with the purpose of intervening the moment things begin to go wrong, to catch diseases before they cause us to exhibit symptoms. An audience member asked a thought-provoking question, which, when followed up by the interviewer Alexis Madrigal, seemed to leave the otherwise loquacious Hillis stumped: how do you draw the line between illness without symptoms and what the body just naturally does? The danger is you might end up turning everyone, including the healthy, into “patients” and “profit centers.”

We already have a world where seemingly healthy people need to constantly monitor and medicate themselves just to stay alive, where the body seems to be in a state of almost constant, secret revolt. This is the world as diabetics often experience it, and it’s not a pretty one. What I wonder is whether, in a world in which everyone sees themselves as permanently sick, as in the process of dying, and in need of medical intervention to counter this sickness, we will still remember the joy of considering ourselves healthy. This is medicine becoming subsumed under our current model of consumption.

Everyone, it seems, has woken up to the fact that consumer electronics has the perfect consumption-sustaining model. If things quickly grow “old” to the point where they no longer work with everything else you own, or become so rare that one is unable to find replacement parts, then one is forced to upgrade merely to ensure that one’s stuff still works. Like the automotive industry, healthcare now seems to be embracing technological obsolescence as a road to greater profitability. Insurance companies seem poised to use devices like the Apple Watch to sort and monitor customers, but that is likely only the beginning.

Let me give you my nightmare scenario for a world of superlongevity. It’s a world largely bereft of children, where our relationship to our bodies has become something like the one we have with our smartphones: we are constantly faced with the obsolescence of the hardware, the chemicals, the nano-machines, and the genetically engineered organisms under our own skins, and are in near continuous need of upgrades to keep us alive. It is a world where those too poor to be in the throes of this cycle of upgrades followed by obsolescence followed by further upgrades are considered a burden and disposable, in the same way August Weismann viewed the disabled in his day. It’s a world where the rich have brought capitalism into the body itself, an individual life preserved because it serves as a perpetual “profit center.”

The other path would be for superlongevity to be pursued along my first model of healthcare, focusing its efforts on understanding the genetic underpinnings of aging by looking at miracles such as the bowhead whale, which can live for two centuries and gets cancer no more often than we do even though it has trillions more cells than us. It would focus on interventions that are cheap, one-time or periodic, and can be spread quickly through populations. This would be a progressive superlongevity. If successful, rather than bolster much of the system built around the second model of healthcare, it would bankrupt it, for it would represent a true cure rather than a treatment for many of the diseases that ail us.

Yet even superlongevity pursued to reflect the demands of justice confronts a moral dilemma that seems to lie at the heart of any superlongevity project. The morally problematic feature of superlongevity pursued along the second model of healthcare is that it risks giving long life only to the few. Troublingly, even superlongevity pursued along the first model ends up in a similar place, robbing future generations of both human beings and other lifeforms of the possibility of existing. For it is very difficult to see how, if a near-future generation gains the ability to live indefinitely, this new state could exist side by side with the birth of new people, or how such a world of many “immortals” of the highly consuming type of creature we are could be compatible with the survival of the diversity of the natural world.

I see no real solution to this dilemma, though perhaps, as elsewhere, the limits of nature will provide one for us: we may discover some bound to the length of human life which is compatible with new people being given the opportunity to be born and experience the sheer joy and wonder of being alive, a bound that would also allow the other creatures with whom we share our planet to continue to experience these joys and wonders as well. Thankfully, there is probably some distance between current human lifespans and such a bound, and thus the most important thing we can do for now is try to ensure that research into superlongevity has the question of sustainable equity as its ethical lodestar.

 Image: Memento Mori, South Netherlands, c. 1500-1525, the Thomson collection

2014: The Death of the Human Rights Movement, or Its Rebirth?

Edwin Abbey Justice Harrisburg

For anyone interested in the issues of human rights, justice, or peace, and I assume that would include all of us, 2014 was a very bad year. It is hard to know where to start: with Eric Garner, the innocent man choked to death in New York City, whose police are supposed to protect citizens, not kill them; or with Ferguson, Missouri, where the lack of police restraint in using lethal force on African Americans burst into public consciousness, with seemingly little effect, as the chilling killing of a young boy wielding a pop gun occurred even in the midst of riots that were national news.

Only days ago, we had the release of the US Senate’s report on the torture of terrorist “suspects,” torture performed by or enabled by Americans set into a state of terror and rage in the wake of 9/11. Perhaps the most depressing feature of the report is the defense of these methods by members of the right, even though there is no evidence that forms of torture ranging from “rectal feeding” to threatening prisoners with rape gave us even one piece of usable information that could not have been gained without turning American doctors and psychologists into 21st-century versions of Dr. Mengele.

Yet the US wasn’t the only source of ill winds for human compassion, social justice, and peace. It was a year when China essentially ignored and rolled up democratic protests in Hong Kong, when Russia effectively partitioned Ukraine, and when anti-immigrant right-wing parties made gains across Europe. The Middle East proved especially bad: military secularists and the “deep state” reestablished control over Egypt, killing hundreds and arresting thousands, while the living hell that is the Syrian civil war created the horrific movement that calls itself the Islamic State, whose calling card seemed to be to brutally decapitate, crucify, or stone its victims and post the results on YouTube.

I think the best way to get a handle on all this is to zoom out and take a look from 10,000 feet, so to speak. Zooming out allows us to put all this in perspective in terms of space, but even more importantly, in terms of time, of history.

There is a sort of intellectual conceit among a certain subset of thoughtful, but not very politically active or astute, people who believe that, as Kevin Kelly recently said, “any twelve year old can tell you that world government is inevitable”. And indeed, given how many problems are now shared across all of the world’s societies, and how interdependent we have become, the opinion seems to make a great deal of sense. In addition to these people there are those, such as Steven Pinker in his fascinating, if far too long, Better Angels, who make the argument that even if world government is not in the cards, something like world sameness, convergence around a globally shared set of liberal norms, along with continued social progress, seems baked into the cake of modernity, as long as we can rid ourselves of what they consider atavisms, most especially religion, which they think has blinded societies to the wonders of modernity and locked them in a state of violence.

If we wish to understand current events, we need to grasp why these ideas, of ever greater political integration of humanity and of the decline of violence, seem as far away from us in 2014 as ever.

Maybe the New Atheists, among whom Pinker is counted, are right that the main source of violence in the world is religion. Yet it is quite obvious from the headlines listed above that religion unequivocally plays a role in only two of them, the Syrian civil war and the Islamic State, and the two are so closely related we should probably count them as one. US torture of Muslims was driven by nationalism, not religion, and police brutality towards African Americans is no doubt a consequence of a racism baked deep into the structure of American society. The Chinese government was not cracking down on religiously but on civically motivated protesters in Hong Kong, and the two sides battling it out in Ukraine are both predominantly Orthodox Christians.

The argument that religion, even when viewed historically, hasn’t been the primary cause of human violence, is one made by Karen Armstrong in her recent book Fields of Blood. Someone who didn’t read the book, and Richard Dawkins is one critic who apparently hasn’t read it, might think it makes the case that religion is only violent as a proxy for conflicts that are at root political, but that really isn’t Armstrong’s point.

What she reminds those of us who live in secular societies is that before the modern era it isn’t possible to speak of religion as some distinct part of society at all. Religion’s purview was so broad it covered everything from the justification of political power, to the explanation of the cosmos to the regulation of marriage to the way society treated its poor.

Religion spread because the moral universalism it eventually developed sat so well with the universal aspirations of empire that the latter sanctioned and helped establish religion as the bedrock of imperial rule. Yet from the start, whether as Taoism and Confucianism in China, Hinduism and Buddhism in Asia, Islam in North Africa and the Middle East, or Christianity in Europe, religion was the way in which the exercise of power or the degree of oppression was criticized and countered. It was religion which challenged the brutality of state violence and motivated the care for the impoverished and disabled. Armstrong also reminds us that the majority of the world is still religious in this comprehensive sense, and that secularism is less a higher stage of society than a unique method of approaching the world that emerged in Europe for particularistic reasons, and which was sometimes picked up elsewhere as a perceived necessity for technological modernization (as in Turkey and China).

Moving away from Armstrong, it was the secularizing West that invented the language of social and human rights, which built on the utopian aspirations of religion but shed its pessimism that a truly just world without poverty, oppression, or war would have to await the end of earthly history and the beginning of a transcendent era. We should build the perfect world in the here and now.

Yet the problem with human rights as they first appeared in the French Revolution was that they were intimately connected to imperialism. The French “Rights of Man” both made strong claims for universal human rights and were a way to undermine the legitimacy of European autocrats, serving the imperial interests of Napoleonic France. The response to the rights imperialism of the French was nationalism, which democratized politics but tragically based its legitimacy on defending the rights of one group alone.

Over a century after Napoleon’s defeat, both the US and the Soviet Union would claim the inheritance of French revolutionary universalism, with the Soviets emphasizing their addressing of the problem of poverty and the inequalities of capitalism, and the US claiming the high ground of political freedom. It was here, as a critique of Soviet oppression, that the modern human rights movement as we would now recognize it emerged.

When the USSR fell in the 1990s, it seemed the world was heading towards the victory of the American version of rights universalism. As Francis Fukuyama would predict in The End of History and the Last Man, the entire world was moving towards becoming liberal democracies like the US. It was not to be, and the reasons why both inform the present and give us a glimpse into the future of human rights.

The reason why the secular language of human rights has a good claim to be a universal moral language is not because religion is a poor way to pursue moral aims, or because religion is focused on some transcendent “never-never-land” whereas secular human rights has its feet squarely placed in the scientifically supported real world. Rather, the secular character of human rights allows it to be universal because, being devoid of religious claims, it can be used as a bridge across groups adhering to different faiths, and can even include what is new under the sun: persons adhering to no religious tradition at all.

The problem human rights has had up until this moment is just how deeply it has been tied up with US imperial interests, which leads almost inevitably to those on the receiving end of US power crushing the manifestation of the human rights project in their societies. This is what China has just done in Hong Kong, and it is how Putin’s Russia has understood and responded to events in Ukraine, both seeing rights-based protests there as Western attempts to weaken their countries.

Like the nationalism that grew out of French rights imperialism, Islamic jihadism became such a potent force in the Middle East partially as a response to Western domination, and we in the West have long been in the strange position that the groups within Middle Eastern societies that share many of our values, as in Egypt today, are also the targets of the forces of oppression within those societies.

What those who continue to wish that human rights can provide a global moral language can hope for is that, as the proverb goes, “there is no wind so ill that it does not blow some good”. The good here would be that, in exposing so clearly US inadequacy in living up to the standards of human rights, the global movement for these rights will at last become detached from American foreign policy. A human rights movement that was no longer seen as a clever strategy of the US and other Western powers might eventually be given more room to breathe in non-Western countries and cultures, and over the very long haul bring the standards of justice in the entire world closer to the ideals of the now half-century-old UN Declaration of Human Rights.

The way this can be accomplished might also address the very valid Marxist critique of the human rights movement: that it deflects the idealistic youth, on whom the shape of future society always depends, away from the structural problems within their own societies, concentrating their efforts instead on the very real cruelties of dictators and fanatics on the other side of the world, and on the fate of countries where their efforts would have little effect unless they served the interests of their Western governments.

What 2014 reminded us of is what Armstrong pointed out: that every major world religion has long known that every society is in some sense underwritten by structural violence and oppression. The efforts of human rights activists thus need to be ever vigilant in addressing the failure to live up to their ideals at home, even as they forge bonds of solidarity and hold out a hand of support to those a world away who, though they might not speak a common language regarding these rights, and often express this language in religious terms, are nevertheless on the same quest towards building a more just world.