Life: Inevitable or Accident?

The Tree of Life, Gustav Klimt

Here’s the question: does the existence of life in the universe reflect something deep and fundamental, or is it merely an accident, an epiphenomenon?

There’s an interesting new theory coming out of the field of biophysics that claims the cosmos is indeed built for life, and not merely in the sense of the so-called “anthropic principle,” which states that just by being here we can assume that nature’s fundamental constants must be friendly to complex organisms such as ourselves that are able to ask such questions. The new theory claims that not just life, but life of ever-growing complexity and intelligence, is not merely likely but the inevitable result of the laws of nature.

The proponent of the new theory is a young physicist at MIT named Jeremy England. I can’t claim to grasp all the intricate details of England’s theory, though he does an excellent job of explaining it here. Perhaps the best way of capturing it succinctly is to think of the laws of physics as a landscape, and a leaning one at that.

The second law of thermodynamics leans in the direction of increased entropy: systems naturally lose rather than gain order over time, which is why we break eggs to make omelettes and not the other way round. The second law would seem to be a bad thing for living organisms, but, oddly enough, it ends up being a blessing not just for life but for any self-organizing system, so long as that system has a means of radiating this entropy away from itself.
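
To make that last clause concrete, here is the standard second-law bookkeeping (ordinary thermodynamics, not England’s own result). The law only constrains the total:

$$\Delta S_{\text{system}} + \Delta S_{\text{environment}} \geq 0$$

so a system is free to become more ordered ($\Delta S_{\text{system}} < 0$) provided it exports at least as much entropy to its surroundings ($\Delta S_{\text{environment}} \geq |\Delta S_{\text{system}}|$)- typically by absorbing usable energy and radiating heat back out, which is exactly the “means of radiating entropy away” the theory requires.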

For England, the second law provides the environment and the direction in which life evolves. Wherever energy flows in from outside and can be dissipated across some boundary, such as a pool of water, self-organizing systems naturally come to be dominated by those forms that are particularly good at absorbing energy from their surrounding environment and dissipating less organized energy, in the form of heat (entropy), back into it.

This landscape in which life evolves, England postulates, may tilt as well in the direction of complexity and intelligence: in a system whose energy inputs oscillate, those forms able to anticipate the direction of the oscillations can align themselves with them, and thus accomplish even more work through resonance.

England is in no sense out to replace Darwin’s natural selection as the mechanism through which evolution is best understood, though, should he be proved right, he would end up greatly amending it. If his theory ultimately proves successful, and it is admittedly very early days, it will have answered one of the fundamental questions that has dogged evolutionary theory since its beginnings. For while Darwin’s theory provides us with all the explanation we need for how complex organisms such as ourselves could have emerged out of seemingly random processes- that is, through natural selection- it has never quite explained how you get from the inorganic to the organic and set evolution working in the first place. England’s work blurs the boundary between the organic and the most complicated self-organizing forms of the inorganic, making the line separating cells from snowflakes and storms a little less distinct.

Whatever its ultimate fate, however, England’s theory faces major hurdles, not least because it seems to have a bias towards increasing complexity and, in its most radical form, points towards the inevitability that life will evolve in the direction of increased intelligence- ideas which many evolutionary thinkers vehemently reject.

Some evolutionary theorists may see efforts such as England’s not as a paradigm shift waiting in the wings, but as an example of a misconception regarding the relationship between complexity and evolution that has now been adopted by actual scientists rather than a merely misguided public- a misconception that, couched in scientific language, will further muddy the minds of the public, leaving them with a view of evolution that belongs much more to the 19th century than to the 21st. It is a misconception whose most vocal living opponent, after the death of the irreplaceable Stephen Jay Gould, has been the paleontologist, evolutionary biologist, and senior editor of the journal Nature, Henry Gee, who has set out to disabuse us of it in his book The Accidental Species.

Gee’s goal is to remind us of what he holds to be the fundamental truth behind the theory of evolution: evolution has a single purpose from which everything else follows in lockstep- reproduction. His objective is to do away, once and for all, with the common misconception that evolution is leading towards complexity and progress, and that the highest peak of this complexity and progress is us- human beings.

If improved prospects for reproduction can be bought through the increased complexity of an organism then that is what will happen, but it needn’t be the case. Gee points out that many species, notably some worms and many parasites, have achieved improved reproductive prospects by decreasing their complexity. Therefore the idea that complexity (as in an increase in the specialization and number of parts an organism has) is merely a matter of evolution plus time doesn’t hold up to close scrutiny. Judged through the eyes of evolution, losing features and becoming simpler is not necessarily a vice. All that counts is an organism’s ability to make more copies, or for animals that reproduce through sex, blended copies, of itself.

Evolution in this view isn’t beautiful but coldly functional and messy- a matter of mere reproductive success. Gee reminds us of Darwin’s image of evolution’s product as a “tangled bank”- a weird menagerie of creatures, each with its own particular historical evolutionary trajectory. The anal-retentive Victorian-era philosophers who tried to build upon Darwin’s ideas couldn’t accept such a mess and:

…missed the essential metaphor of Darwin’s tangled bank, however, and saw natural selection as a sort of motor that would drive transformation from one preordained station on the ladder of life to the next one. (37)

Gee also sets out to show how deeply limited our abilities are when it comes to understanding the past through the fossil record. Very, very few of the species that have ever existed left evidence of their lives in the form of fossils, which form only under very special conditions, and the process of fossilization greatly favors the preservation of some species over others. The past is thus incredibly opaque, making it impossible to impose an overarching narrative upon it- such as increasing complexity- as we move from the past towards the present.

Gee, though an ardent defender of evolution and opponent of creationist pseudoscience, finds the gaps in the fossil record so pronounced that he thinks we can create almost any story we want from it and end up projecting our contemporary biases onto the speechless stones. This is the case even when the remains we are dealing with are of much more recent origin, and especially when their subject is our own origins.

We’ve tended, for instance, to link tool use and intelligence, even in cases such as Homo habilis where the records and artifacts point to a different story. We’ve tended not to see other human species, such as the so-called Hobbit, as ways we might have actually evolved had circumstances not played out in precisely the way they did. We have not, in Gee’s estimation, been working our way towards the inevitable goal of our current intelligence and planetary dominance, but have stumbled into it by accident.

Although Gee is in no sense writing in anticipation of a theory such as England’s, his line of thinking does seem to pose obstacles that the latter’s hypothesis will have to address. If it is indeed the case that, as England has put it, complex life arises inevitably from the physics of the universe, so that in his estimation:

You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.

Then England will have to address why it took so incredibly long- 4 billion years of the earth’s 4.5-billion-year history- for actual plants to make their debut, not to mention similar spans for other complex eukarya such as animals like ourselves.

Whether something like England’s inevitable complexity or Gee’s not just blind but drunk and random evolutionary walk is ultimately the right way to understand evolution has implications far beyond evolutionary theory. Indeed, it might have deep implications for the status and distribution of life in the universe, and even inform the way we understand the current development of new forms of artificial intelligence.

What we have discovered over the last decade is that bodies of water appear to be much more widespread than previously thought, and can be found in environments far beyond those previously considered possible- hence NASA’s recent announcement that we are likely to find microbial life in the next 10-30 years, both in our solar system and beyond. What this means is that England’s heat baths are likely ubiquitous, and if he’s correct, life can likely be found anywhere there is water- meaning nearly everywhere. There may even be complex lifelike forms that did not evolve through what we would consider normal natural selection at all.

If Gee is right, the universe might be ripe for life, but the vast, vast majority of that life will be microbial, and no amount of time will change that fate on most life-inhabited worlds. If England in his minor key is correct, the universe should at least be filled with complex multicellular life forms such as ourselves. Yet it is the possibility that England is right in his major key- that consciousness, civilization, and computation might flow naturally from the need of organisms to resonate with their fluctuating environments- that, biased as we are, we likely find most exciting. Such a view leaves us with the prospect of many, many more forms of intelligence and technological civilizations like our own spread throughout the cosmos.

The fact that the universe has so far proven silent and devoid of any signs of technological civilization might give us pause when it comes to endorsing England’s optimism over Gee’s pessimism- unless, that is, there is some sort of limit or wall on our own perceived technological trajectory that can address the questions that emerge from the ideas of both. To that story, next time…

 

Truth and Prediction in the Dataclysm

The Deluge by Francis Danby, 1837-1839

Last time I looked at the state of online dating. Among the figures mentioned was Christian Rudder, one of the founders of the dating site OkCupid and the author of a book on big data called Dataclysm: Who We Are When We Think No One’s Looking, which somehow manages to be both laugh-out-loud funny and deeply disturbing at the same time.

Rudder is famous, or infamous depending on your view of the matter, for having written a piece about his site with the provocative title We experiment on human beings! There he wrote:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

That statement might set the blood of some boiling, but my own negative reaction to it is somewhat tempered by the fact that Rudder’s willingness to run experiments on his site’s users originates, it seems, not in any conscious effort to be more successful at manipulating them, but as a way to quantify our ignorance. Or, as he puts it in the piece linked to above:

I’m the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.

Rudder eventually turned his experiments on the data of OkCupid’s users into his book Dataclysm, which displays the same kind of brutal honesty and acknowledgement of the limits of our knowledge. What he is trying to do is make sense of the deluge of data now inundating us. The only way we have found to do this is to create sophisticated algorithms that allow us to discern patterns in the flood. The problem with using algorithms to try to organize human interactions (which have themselves now become points of data) is that users are often reduced to whatever version of being human the algorithm’s programmers have embedded in it. Rudder is well aware of and completely upfront about these limitations, and refuses to make any special claims about algorithmic wisdom compared to the normal human sort. As he puts it in Dataclysm:

That said, all websites, and indeed all data scientists, objectify. Algorithms don’t work well with things that aren’t numbers, so when you want a computer to understand an idea, you have to convert as much of it as you can into digits. The challenge facing sites and apps is thus to chop and jam the continuum of human experience into little buckets 1, 2, 3, without anyone noticing: to divide some vast, ineffable process- for Facebook, friendship, for Reddit, community, for dating sites, love- into pieces a server can handle. (13)

At the same time, Rudder appears to see the data collected on sites such as OkCupid as a sort of mirror, reflecting back to us, in ways we have never had available before, the real truth about ourselves, laid bare of the social conventions and politeness that tend to obscure the way we truly feel. And what Rudder finds in this data is not the reflection of humanity’s inner beauty one might hope for, but something more like the mirror out of The Picture of Dorian Gray.

As an example, take what Rudder calls “Wooderson’s Law,” after the character from Dazed and Confused who said in the film, “That’s what I love about these high school girls: I get older, they stay the same age.” What Rudder has found is that heterosexual male attraction to women peaks when those women are in their early 20’s and thereafter falls precipitously. On OkCupid at least, women in their 30’s and 40’s are effectively invisible when competing against women in their 20’s for male sexual attention. Fortunately for heterosexual men, women are more realistic in their expectations and tend to report the strongest attraction to men roughly their own age, until sometime in men’s 40’s, when male attractiveness also falls off a cliff… gulp.

Another finding from Rudder’s work is not just that looks rule, but just how absolutely they rule. In his aforementioned piece, Rudder lays out how the vast majority of users essentially equate personality with looks. A particularly stunning woman can find herself with a 99% personality rating even if she has not one word in her profile.

These are perhaps somewhat banal and even obvious discoveries about human nature that Rudder has been able to mine from OkCupid’s data, and, to my mind at least, they are less disturbing than the deep-seated racial bias he finds there as well. At least among OkCupid’s users, dating preferences are heavily skewed against black men and women. Not just whites, it seems, but all other racial groups- Asians, Hispanics- would apparently prefer to date someone of a race other than black: a disheartening finding for the 21st century.

Rudder looks at other dark manifestations of our collective self beyond those found in OkCupid’s data as well. Try using Google search the way one would play the game Taboo. The search suggestions that pop up in the Google search bar, after all, are compiled on the basis of users’ most popular searches, and thus provide a kind of gauge on what 1.17 billion human beings are thinking. Try these, some of which Rudder plays himself:

“why do women?”

“why do men?”

“why do white people?”

“why do black people?”

“why do Asians?”

“why do Muslims?”

The exercise gives a whole new meaning to Nietzsche’s observation that “When you stare into the abyss, the abyss stares back”.

Rudder also looks at the ability of social media to engender mobs. Take this case from Twitter in 2014. On New Year’s Eve of that year a young woman tweeted:

“This beautiful earth is now 2014 years old, amazing.”

Science was obviously not her strength in school, but what should have led to no more than collective giggles, or perhaps a polite correction regarding terrestrial chronology, ballooned into a storm of tweets like this:

“Kill yourself”

And:

“Kill yourself you stupid motherfucker”. (139)

As a recent study has pointed out, the emotion second most likely to go viral is rage; we can count ourselves very lucky that the emotion most likely to go viral is awe.

Then there’s the question of the structure of the whole thing. Like Jaron Lanier, Rudder is struck by the degree to which the seemingly democratized architecture of the Internet consistently manifests the opposite, revealing itself to follow Zipf’s Law, which Rudder concisely reduces to:

rank × number = constant (160)

Both the economy and the society of the Internet age are dominated by “superstars”: companies such as Google and Facebook that so far outstrip their rivals in search or social media that they might be called monopolies, along with celebrities, musical artists, and authors. Zipf’s Law also seems to apply to dating sites, where a few profiles dominate the class of those viewed by potential partners. In the environment of a networked society, where invisibility is the common fate of almost all of us and success often hinges on increasing our own visibility, we are forced to turn ourselves towards “personal branding” and obsession over “Klout scores.” It’s not a new problem, but I wonder how much all this effort at garnering attention steals time from the actual work that makes that attention worthwhile and long-lasting.
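
To see what Rudder’s compressed formula implies, here is a minimal sketch (my own illustration, with a made-up constant; nothing from the book):

```python
# A minimal sketch of Zipf's law as Rudder states it: rank × count ≈ constant,
# so the #2 item gets about half the attention of #1, #3 about a third, and so on.
constant = 1_000_000  # hypothetical count for the top-ranked item

for rank in range(1, 6):
    count = constant / rank
    print(f"rank {rank}: count = {count:>9,.0f}, rank × count = {rank * count:,.0f}")
```

The steepness is the point: by rank 100 an item is getting one percent of the leader’s attention, which is why invisibility is the default condition.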

Rudder is uncomfortable with all this algorithmization while at the same time accepting its inevitability. He writes of the project:

Reduction is inescapable. Algorithms are crude. Computers are machines. Data science is trying to make sense of an analog world. It’s a by-product of the basic physical nature of the micro-chip: a chip is just a sequence of tiny gates.

From that microscopic reality an absolutism propagates up through the whole enterprise, until at the highest level you have the definitions, data types and classes essential to programming languages like C and JavaScript.  (217-218)

The thing is, for all his humility about the effectiveness of big data so far, and his admittedly limited ability to draw solid conclusions from OkCupid’s data, he seems to place undue trust in the ability of large corporations and the security state to succeed at the same project. Much deeper data mining and superior analytics, he thinks, separate his efforts from those of the really big boys. Rudder writes:

Analytics has in many ways surpassed the information itself as the real lever to pry. Cookies in your web browser and guys hacking for your credit card numbers get most of the press and are certainly the most acutely annoying of the data collectors. But they’ve taken hold of a small fraction of your life and for that they’ve had to put in all kinds of work. (227)

He compares them to Mike Myers’s Dr. Evil holding the world hostage “for one million dollars”…

… while the billions fly to the real masterminds, like Acxiom. These corporate data marketers, with reach into bank and credit card records, retail histories, and government filings like tax accounts, know stuff about human behavior that no academic researcher searching for patterns on some website ever could. Meanwhile the resources and expertise the national security apparatus brings to bear makes enterprise-level data mining look like Minesweeper (227)

Yet do we really know this faith in big data isn’t an illusion? What discernible effects, clearly traceable to the juggernauts of big data such as Acxiom, can we point to in the overall economy, or even in consumer behavior? For us to believe in the power of data, shouldn’t someone have to show us the data that it works, and not just the promise that it will transform the economy once it has achieved maximum penetration?

On that same score, what degree of faith should we put in the powers of big data when it comes to security? As far as I am aware, no evidence has been produced that mass surveillance has prevented attacks- it didn’t stop the Charlie Hebdo killers. Just as importantly, it seemingly hasn’t prevented our public officials from being caught flat-footed and flabbergasted in the face of international events such as the revolution in Egypt or the war in Ukraine. And these latter big events would seem to be precisely the kinds of predictions big data should find relatively easy: monitoring broad public sentiment as expressed through social media and across telecommunications networks, and marrying that with inside knowledge of the machinations of the major political players at the storm center of events.

On this point of not yet having mastered the art of anticipating the future despite the mountains of data being collected, Anne Neuberger, Special Assistant to the NSA Director, gave a fascinating talk at the Long Now Foundation in August last year. During a sometimes intense Q&A she had this exchange with one of the moderators, the Stanford professor Paul Saffo:

Saffo: With big data, as a friend likes to say, “perhaps the data haystack that the intelligence community has created has grown too big to ever find the needle in.”

Neuberger: I think one of the reasons we talked about our desire to work with big data peers on analytics is because we certainly feel that we can glean far more value from the data that we have and potentially collect less data if we have a deeper understanding of how to better bring that together to develop more insights.

It’s a strange admission from a spokesperson from the nation’s premier cyber-intelligence agency that for their surveillance model to work they have to learn from the analytics of private sector big data companies whose models themselves are far from having proven their effectiveness.

Perhaps, then, Rudder should have extended his skepticism beyond the world of dating websites. As for me, I’ll only know big data works in the security sphere when our politicians, Noah-like, seem unusually well prepared for a major crisis that the rest of us data-poor chumps didn’t see coming a mile away.

 

Big Data as statistical masturbation

Infinite Book Tunnel

It’s just possible that there is a looming crisis in yet another technological sector whose proponents have leaped too far ahead, too soon, promising all kinds of things they are unable to deliver. It’s strange how we keep ramming our heads into this same damned wall, but this next crisis is perhaps more important than the deflated hype of other eras- say, our over-optimism about the timeline for human space flight in the 1970’s, or the “AI winter” of the 1980’s, or the miracles that seemed just at our fingertips when we cracked the human genome while pulling riches out of the air during the dotcom boom, both of which brought us to a state of mania in the 1990’s and early 2000’s.

The thing that separates a potential new crisis in the area of so-called “Big Data” from these earlier ones is that, literally overnight, we have reconstructed much of our economy and national security infrastructure on its yet-to-be-proven premises, eroding our ancient right to privacy in the process. Now we are on the verge of changing not just the nature of the science upon which we all depend, but nearly every other field of human intellectual endeavor. And we’ve done, and are doing, this despite the fact that the most over-the-top promises of Big Data are about as epistemologically grounded as divining the future by looking at goat entrails.

Well, that might be a little unfair. Big Data is helpful, but the question is: helpful for what? A tool, as opposed to a supposedly magical talisman, has its limits, and understanding those limits should lead not to our jettisoning large-scale data analysis, but to working out what needs to be done to make these new capacities actually useful- rather than, like all forms of divination, merely comforting us with the idea that we can know the future and thus somehow exert control over it, when in reality both our foresight and our powers are much more limited.

Start with the issue of the digital economy. One model underlies most of the major Internet giants- Google, Facebook, and to a lesser extent Apple and Amazon- along with a whole set of behemoths few of us can name but that underlie everything we do online, especially data aggregators such as Acxiom. That model is essentially to gather up every last digital record we leave behind, many of them given in exchange for “free” services, and to use this living archive to target advertisements at us.

It’s not only that this model has provided the infrastructure for an unprecedented violation of privacy by the security state (more on which below); it’s that there’s no real evidence that it even works.

Just anecdotally reflect on your own personal experience. If companies can very reasonably be said to know you better than your mother, your wife, or even you know yourself, why are the ads coming your way so damn obvious, and frankly even oblivious? In my own case, if I shop online for something, a hammer, a car, a pair of pants, I end up getting ads for that very same type of product weeks or even months after I have actually bought a version of the item I was searching for.

In large measure, the Internet is a giant market in which we find products and information. Targeted ads can only really work if they are able to refract the information I am searching for in their marketed product’s favor- if they lead me to buy something I would not have purchased otherwise. Derek Thompson, in the piece linked to above, points out that this problem is called endogeneity, or more colloquially: “hell, I was going to buy it anyway.”

The problem with this economic model, though, goes even deeper than that. At least one-third of clicks on digital ads come not from human beings at all but from bots- a way of gaming advertising revenue like something right out of a William Gibson novel.

Okay, so we have this economic model based on what at its root is really just spyware, and despite all the billions poured into it, we have no idea if it actually affects consumer behavior. That might be merely an annoying feature of the present rather than something to fret about were it not for the fact that this surveillance architecture has apparently been captured by the security services of the state. Their model is essentially just a darker version of its commercial forebear. Here the NSA, GCHQ, et al. hoover up as much of the Internet’s information as they can get their hands on. Ostensibly, they’re doing this so they can algorithmically sort through the data to identify threats.

In this case we have just as many reasons to suspect that it doesn’t really work, and though they claim it does, none of these intelligence agencies will actually let us look at their supposed evidence. The reasons to suspect that mass surveillance might suffer flaws similar to those of mass “personalized” marketing were excellently summed up in a recent Financial Times article by Zeynep Tufekci, where she wrote:

But the assertion that big data is “what it’s all about” when it comes to predicting rare events is not supported by what we know about how these methods work, and more importantly, don’t work. Analytics on massive datasets can be powerful in analysing and identifying broad patterns, or events that occur regularly and frequently, but are singularly unsuited to finding unpredictable, erratic, and rare needles in huge haystacks. In fact, the bigger the haystack — the more massive the scale and the wider the scope of the surveillance — the less suited these methods are to finding such exceptional events, and the more they may serve to direct resources and attention away from appropriate tools and methods.
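
Tufekci’s haystack point is, at bottom, the base-rate problem, and it’s easy to put numbers on. A back-of-the-envelope sketch (my hypothetical figures, not hers):

```python
# A back-of-the-envelope illustration of why rare-event prediction drowns in
# false positives, however big the data. All figures below are hypothetical.
population = 300_000_000       # people whose data is swept up
true_threats = 300             # genuinely dangerous "needles"
hit_rate = 0.99                # fraction of real threats correctly flagged
false_positive_rate = 0.01     # fraction of innocents incorrectly flagged

true_flags = true_threats * hit_rate                             # ≈ 297
false_flags = (population - true_threats) * false_positive_rate  # ≈ 3,000,000
precision = true_flags / (true_flags + false_flags)
print(f"share of flags that are real threats: {precision:.4%}")
# ≈ 0.0099%: roughly ten thousand false leads for every real one.
```

Even an implausibly accurate screen produces a flood of false leads precisely because the needles are so rare, which is Tufekci’s point about bigger haystacks.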

I’ll get to what’s epistemologically wrong with using Big Data in the way used by the NSA that Tufekci rightly criticizes in a moment, but on a personal, not societal level, the biggest danger from getting the capabilities of Big Data wrong seems most likely to come through its potentially flawed use in medicine.

Here’s the kind of hype we’re in the midst of, as found in a recent article by Tim McDonnell in Nautilus:

We’re well on our way to a future where massive data processing will power not just medical research, but nearly every aspect of society. Viktor Mayer-Schönberger, a data scholar at the University of Oxford’s Oxford Internet Institute, says we are in the midst of a fundamental shift from a culture in which we make inferences about the world based on a small amount of information to one in which sweeping new insights are gleaned by steadily accumulating a virtually limitless amount of data on everything.

The value of collecting all the information, says Mayer-Schönberger, who published an exhaustive treatise entitled Big Data in March, is that “you don’t have to worry about biases or randomization. You don’t have to worry about having a hypothesis, a conclusion, beforehand.” If you look at everything, the landscape will become apparent and patterns will naturally emerge.

Here’s the problem with this line of reasoning- a problem that, I think, is the same as, and shares a solution with, the problem of mass surveillance by the NSA and other security agencies. It begins with the idea that “the landscape will become apparent and patterns will naturally emerge.”

The flaw in this reasoning has to do with the way very large data sets work. One would think that sampling millions of people, as ubiquitous monitoring now lets us do, would offer enormous gains over the population samples of a few thousand we used to be confined to, yet this isn’t necessarily the case. The problem is that the more attributes you measure, and the more combinations of attributes you consider, the greater your chance of finding false correlations.

Previously I had thought that surely this was a problem statisticians had either solved or were on the verge of solving. They haven’t, at least according to the computer scientist Michael Jordan, who fears we might be on the verge of a “Big Data winter” similar to the one AI went through in the 1980’s and 90’s. Let’s say you had an extremely large database with multiple kinds of metrics:

Now, if I start allowing myself to look at all of the combinations of these features—if you live in Beijing, and you ride bike to work, and you work in a certain job, and are a certain age—what’s the probability you will have a certain disease or you will like my advertisement? Now I’m getting combinations of millions of attributes, and the number of such combinations is exponential; it gets to be the size of the number of atoms in the universe.

Those are the hypotheses that I’m willing to consider. And for any particular database, I will find some combination of columns that will predict perfectly any outcome, just by chance alone. If I just look at all the people who have a heart attack and compare them to all the people that don’t have a heart attack, and I’m looking for combinations of the columns that predict heart attacks, I will find all kinds of spurious combinations of columns, because there are huge numbers of them.
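
Jordan’s worry is easy to reproduce on a laptop. A minimal sketch of my own construction (using single columns rather than the combinations he describes, which only make things exponentially worse):

```python
import numpy as np

# A toy version of Jordan's point: with enough columns, pure noise will
# contain a column that appears to "predict" any outcome.
rng = np.random.default_rng(0)
n_people, n_features = 50, 100_000

X = rng.integers(0, 2, size=(n_people, n_features))   # random binary attributes
outcome = rng.integers(0, 2, size=n_people)           # random binary outcome

# Accuracy of each single column as a "predictor" of the outcome.
accuracy = (X == outcome[:, None]).mean(axis=0)
print(f"best single-column 'predictor': {accuracy.max():.0%} accurate by chance")
# Every column here is noise, yet the best one looks like a strong predictor.
# Allow combinations of columns and perfect spurious fits become inevitable.
```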

The actual mathematics of distinguishing spurious from potentially useful correlations is, in Jordan’s estimation, far from worked out:

We are just getting this engineering science assembled. We have many ideas that come from hundreds of years of statistics and computer science. And we’re working on putting them together, making them scalable. A lot of the ideas for controlling what are called familywise errors, where I have many hypotheses and want to know my error rate, have emerged over the last 30 years. But many of them haven’t been studied computationally. It’s hard mathematics and engineering to work all this out, and it will take time.

It’s not a year or two. It will take decades to get right. We are still learning how to do big data well.

Alright, now that’s a problem. As you’ll no doubt notice, the danger of false correlation that Jordan identifies as a problem for science is almost exactly the critique Tufekci made of the NSA’s mass surveillance. That is, unless the NSA and its cohorts have actually solved the statistical and engineering problems Jordan identifies and haven’t told us, all the biggest data haystack in the world will produce is too many leads to follow, most of them false, and many of which will drain resources from actual public protection. Perhaps equally troubling: if the security services have solved these problems, how much research funding will be wasted, and how many lives lost, because medical scientists were kept from the tools that would have empowered their research?

At least part of the solution to this will be remembering why we developed statistical analysis in the first place. Herbert I. Weisberg with his recent book Willful Ignorance: The Mismeasure of Uncertainty has provided a wonderful, short primer on the subject.

Statistical evidence, according to Weisberg, was first introduced to medical research back in the 1950’s as a protection against exaggerated claims of efficacy and widespread quackery. Since then we have come to take a p-value of .05 almost as the truth itself. Weisberg’s book is really a plea to clinicians to know their patients, and not rely almost exclusively on statistical analyses of “average” patients, in helping those in their care make life-altering decisions about which medicines to take or procedures to undergo. Weisberg thinks that personalized medicine will over the long term solve these problems, and while I won’t go into my doubts about that here, I do think that in the experience of the physician he identifies the root of the solution to our Big Data problem.
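
On the danger of taking p < .05 “almost as the truth itself,” a quick illustration of my own (not Weisberg’s): run enough independent tests at that threshold and a false positive becomes a near-certainty.

```python
# Why a lone p < .05 can't be "the truth itself": across many independent
# tests, the chance of at least one false positive approaches certainty.
alpha = 0.05       # conventional significance threshold
m = 100            # number of independent hypothesis tests

p_at_least_one_false_positive = 1 - (1 - alpha) ** m
print(f"{p_at_least_one_false_positive:.1%}")  # ≈ 99.4%
```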

Rather than think of Big Data as somehow providing us with a picture of reality “naturally emerging,” as Mayer-Schönberger suggests above, we should start to view it as a way to easily and cheaply gauge the potential validity of a hypothesis. And it’s not only the first step that should continue to be guided by old-fashioned science rather than computer-driven numerology, but the remaining steps as well: a positive signal followed up by actual scientists and other researchers exercising such now-rusting skills as running actual experiments and building theories to explain their results. Big Data, done right, won’t turn science into another form of divination, but will instead be used as a primary tool for keeping scientists from going down culs-de-sac.

The same principle applied to mass surveillance means a return to old-school human intelligence, even if it now needs to be empowered by new digital tools. Rather than Big Data being used to hoover up and analyze all potential leads, espionage and counterterrorism should become more targeted and based on efforts to understand and penetrate threat groups themselves. The move back to human intelligence and towards more targeted surveillance, rather than the mass data grab symbolized by Bluffdale, may be a reality forced on the NSA et al. by events. In part due to the Snowden revelations, terrorist and criminal networks have already abandoned the non-secure public networks the rest of us use. Mass surveillance has lost its raison d’être.

At least in terms of science and medicine, I recently saw a version of how Big Data done right might work. In an article for Quanta and Scientific American, Veronique Greenwood discussed two recent efforts by researchers to use Big Data to find new understandings of, and treatments for, disease.

The physicist (not biologist) Stefan Thurner has created a network model of comorbid diseases, trying to uncover the hidden relationships between different, seemingly unrelated medical conditions. What I find interesting about this is that it gives us a new way of understanding disease, breaking free of hermetically sealed categories that may blind us to mechanisms shared by medical conditions. I find this especially pressing when it comes to mental health, where the kind of symptom-listing found in the DSM- the Bible of mental health care professionals- has never resulted in a causative model of how conditions such as anxiety or depression actually work, and is based on an antiquated separation between mind and body, not to mention its neglect of the social and environmental factors that give shape to mental health.

Even more interesting, from Greenwood’s piece, are the efforts of Joseph Loscalzo of Harvard Medical School to come up with a whole new model of disease, one that looks beyond genome-disease associations to map out the molecular networks of disease, isolating the statistical correlation between a particular variant of such a network and a disease. This relationship between the genes and proteins correlated with a disease is something Loscalzo calls a “disease module.”

Thurner described the underlying methodology behind his, and by implication Loscalzo’s, efforts to Greenwood this way:

“Once you draw a network, you are drawing hypotheses on a piece of paper,” Thurner said. “You are saying, ‘Wow, look, I didn’t know these two things were related. Why could they be? Or is it just that our statistical threshold did not kick it out?’” In network analysis, you first validate your analysis by checking that it recreates connections that people have already identified in whatever system you are studying. After that, Thurner said, “the ones that did not exist before, those are new hypotheses. Then the work really starts.”

It’s in the next steps- the testing of hypotheses, the development of a stable model- that the most important work really lies. Like any intellectual fad, Big Data has its element of truth. We can now much more easily distill large and sometimes previously invisible patterns from the deluge of information in which we are drowning. This has potentially huge benefits for science, medicine, social policy, and law enforcement.

The problem comes from thinking that our data-crunching algorithms can now do the work for us, and are about to replace human beings and their skill at investigating problems deeply and in the real world. The danger there is thinking that knowledge could work like self-gratification: a mere thing of the mind, without all the hard work, compromises, and conflict between expectations and reality that go into a real relationship. Ironically, this was a truth perhaps discovered first not by scientists or intelligence agencies but by online dating services. To that strange story, next time….

Edward O. Wilson’s Dull Paradise

Garden of Eden

In all sincerity, there is much I admire about the biologist Edward O. Wilson. I can only pray that I not only live into my 80’s but still possess the intellectual stamina to write what are at least thought-provoking books when I get there. I also wish that I still have the balls to write a book with the title of Wilson’s latest- The Meaning of Human Existence- for publishing under an appellation like that means not being afraid of disappointing your readers, and Wilson did indeed leave me wondering whether the whole thing was worth the effort.

Nevertheless, I think Wilson opens up an important alternative future that is seldom discussed here: namely, what if we aimed not at a supposedly brighter, so-called post-human future, but to keep things the same? Well, there would be some changes- no extremes of human poverty, along with the restoration of much of the natural environment to its pre-Industrial Revolution health. Still, we ourselves would aim to stay largely the same human beings who emerged some 100,000 years ago- flaws and all.

Wilson calls this admittedly conservative vision paradise, and I’ve seen his eyes light up like a child contemplating Christmas when he uses the word in interviews. Another point that might be of interest to this audience is whom he largely blames for keeping us from entering this Shangri-La: archaic religions and their “creation stories.”

I have to admit that I find the idea of trying to preserve humanity as it is a valid alternative future. After all, “evolve or die” isn’t really the way nature works. Typically the “goal” of evolution is to find a “design” that works and then stick with it for as long as possible. Since we now dominate the entire planet, and our numbers far outstrip those of any other large animal, it seems hard to argue that we need a major, and likely risky, upgrade. Here’s Wilson making the case:

While I am at it, I hereby cast a vote for existential conservatism, the preservation of biological human nature as a sacred trust. We are doing very well in terms of science and technology. Let’s agree to keep that up, and move both along even faster. But let’s also promote the humanities, that which makes us human, and not use science to mess around with the wellspring of this, the absolute and unique potential of the human future. (60)

It’s an idea that rings true to my inner Edmund Burke, and it sounds simple, doesn’t it? On reflection it would be, if human beings were bison, blue whales, or gray wolves. Indeed, I think Wilson has drawn this idea of human preservation from his lifetime of very laudable work on biodiversity. Yet had he reflected on why efforts at preservation fail when they do, he would have realized that the problem isn’t the wildlife itself, but the human beings who don’t share the preservationists’ values and pull in the opposite direction. That is, humans, though we are certainly animals, aren’t wildlife, in the sense that we take destiny into our own hands, even if doing so is sometimes for the worse. Wilson seems to think it quite a short step from asserting the “preservation of biological human nature as a sacred trust” as a goal to gaining universal assent to it. The problem is that there is no widespread agreement over what human nature even is; and even if you had such agreement, how in the world do you go about enforcing it on the minority who refuse to adhere to it? How far should we be willing to go to prevent persons from willingly crossing some line that defines what a human being is? And where exactly is that line in the first place? Wilson thinks we’re near the end of the argument when we have only just taken our seats at the debate.

The strange thing is that the very people who would naturally lean towards the kind of biological conservatism Wilson hopes “we” will ultimately choose are the sorts of traditionally religious persons he thinks are at the root of most of our conflicts. Here again is Wilson:

Religious warriors are not an anomaly. It is a mistake to classify believers of particular religious and dogmatic religion-like ideologies into two groups, moderates versus extremists. The true cause of hatred and religious violence is faith versus faith, an outward expression of the ancient instinct of tribalism. Faith is the one thing that makes otherwise good people do bad things. (154)

For Wilson, a religious group “defines itself foremost by its creation story, the supernatural narrative that explains how human beings came into existence.” (151) The trouble with this is that it’s not even superficially true. Three of the world’s religions that have been busy killing one another over the last millennium- Judaism, Christianity, and Islam- share the same creation story. Wilson knows a hell of a lot more about ants and evolution than he does about religion or even world history. And while religion is certainly one root of our tribalism, which I agree is the deep and perennial human problem, it’s far from the only source, and very few of our tribal conflicts have anything to do with fights over our origins in the deep past. How about class conflict? Or racial conflict? Or nationalist conflicts where the two sides profess not only the exact same religion but the exact same sect- such as the current fight between the two Orthodox Christian nations of Russia and Ukraine? If China and Japan someday go to war, it will not be a horrifying replay of the Scopes Monkey Trial.

For a book called The Meaning of Human Existence, Wilson’s ideas have very little explanatory power when it comes to anything other than our biological origins, along with some quite questionable claims regarding the origins of our capacity for violence. That is, the book lacks depth, and because of this I found it, well… dull.

Nowhere was I more hopeful that Wilson would have something interesting and different to say than on the question of extraterrestrial life. Here we have one of the world’s greatest living biologists, a man who spent a lifetime studying ants as an alternative route to the kind of eusociality otherwise possessed only by humans, the naked mole rat, and a handful of insects. Here was a scientist clearly passionate about preserving the amazing diversity of life on our small planet.

Yet Wilson’s E.T.s are land dwellers, relatively large, biologically audiovisual; “their head is distinct, big, and located up front” (115); they have moderate teeth and jaws and a high social intelligence, and “a small number of free locomotory appendages, levered for maximum strength with stiff internal or external skeletons composed of hinged segments (as by human elbows and knees), and with at least one pair of which are terminated by digits with pulpy tips used for sensitive touch and grasping.” (116)

In other words they are little green men.

What I had hoped was that Wilson would use his deep knowledge of biology to imagine alternative paths to technological civilization. Couldn’t he have imagined a hive-like species that evolves in tandem with its own technological advancement? Or some larger form of insect-like animal that doesn’t just have an instinctive repertoire of things it builds, but constantly improves upon its own designs and explores the space of possible technologies? Or an aquatic species that develops something like civilization through sea-herding and ocean farming? How about species that communicate not audio-visually but through electrical impulses, the way our computers do?

After all, nature on earth is pretty weird. There’s not just us, but termites that build air-conditioned skyscrapers (at least from their point of view), whales with culturally specific songs, and strange little things that eat and excrete electrons. One might guess that life elsewhere will be even weirder. Perhaps my problem with The Meaning of Human Existence is that it just wasn’t weird enough- not just to capture the worlds of tomorrow and elsewhere, but the one we’re living in right now.

 

There are two paths to superlongevity: only one of them is good

Memento Mori Ivories

Looked at from the longer historical perspective, we have already achieved something our ancestors would consider superlongevity. In the UK, life expectancy at birth averaged around 37 in 1700. It is roughly 81 today. The extent to which this reflects decreased child mortality versus an increase in the survival rate of the elderly I’ll get to a little later, but for now, just try to get your head around the fact that we have managed to more than double the life expectancy of human beings in a little over two centuries.

By themselves the gains we have made in longevity are pretty incredible, but we have also managed to redefine what it means to be old. A person in 1830 was old at forty not just by the averages, but by the condition of his body. A revealing game to play is to find pictures of adults from the 19th century and try to guess their ages. My bet is that you, like me, will consistently estimate the people in these photos to be older than they actually were when the picture was taken. This isn’t a reflection of their lack of Botox and Photoshop so much as of the fact that they were missing the miracle of modern dentistry, and were felled, or at least weathered, by diseases we now consider mere nuisances. If I were my current age in 1830 I would be missing most of my teeth, and the pneumonia I caught a few years back would surely have killed me- pneumonia having been a major cause of death in the age of Darwin and Dickens.

Sixty- or even seventy-year-olds today are probably in the state of health a forty-year-old was in the 19th century. In other words, we’ve increased the healthspan, not just the lifespan. Sixty really is the new forty, though what matters is how you define “new.” Yet get past eighty in the early 21st century and you’re almost right back in the world where our ancestors lived, experiencing the debilitations of old age that are the fate of those of us lucky enough to survive the pleasures of youth and middle age. The disability of the old is part of the tragic aspect of life, and, as always when it comes to giving poetic shape to our comic/tragic existence, the Greeks got to the essence of old age with their myth of Tithonus.

Tithonus was a youth who had the ill fortune of inspiring the love of the goddess of the dawn, Eos. (Love affairs between gods and mortals never end well.) Eos asked Zeus to grant the youth immortality, which he did- but, of course, not in the way Eos intended. Tithonus would never die, but he would continue to age, becoming not merely old and decrepit but eventually shriveling away into a grasshopper hugging a room’s corner. It is best not to ask the gods for anything.

Despite our successes, those of us lucky enough to live into our 7th and 8th decades still end up like poor old Tithonus. The deep lesson of the ancient myth still holds- longevity is not worth as much as we might hope if not also combined with the health of youth, and despite all of our advances, we are essentially still in Tithonus’ world.

Yet perhaps not for long- at least if one believes the story told by Jonathan Weiner in his excellent book Long for This World. I learned much about our quest for long life and eternal youth from it, both the religious and cultural history of that quest and the trajectory and state of its science. I never knew that Jewish folklore had a magical city called Luz, which the death unleashed in Eden was prevented from entering, and which existed until all its inhabitants became so bored that they walked out from its walls and were struck down by the Angel of Death waiting eagerly outside.

I did not know that Descartes, who had helped unleash the scientific revolution, thought that gains in knowledge were coming so fast that he would live to be 1,000. (He died in 1650 at 54.) I did not realize that two other key figures in the history of science, Roger and Francis Bacon (no relation), thought that science would restore us to the prelapsarian knowledge we possessed before the Fall, which would allow us to live forever, or the depth to which very different Chinese traditions, with no guilt at all about human immortality, pursued the goal with all sorts of elixirs and practices, none of which, of course, worked. I was especially taken with the story of how Pennsylvania’s most famous son, Benjamin Franklin, wanted to be “pickled” and awoken a century later.

Reviewing this past, in which even ancient Egyptian hieroglyphs offer up recipes for “guaranteed to work” wrinkle creams, shows us just how deeply human the longing for agelessness is. It wasn’t invented by Madison Avenue or Dr. Oz, even if the ancients’ attempts to find a fountain of youth seem no less silly than many of our own. The question, I suppose, is the one that most risks the accusation of foolishness: is this time truly different? Are we, out of all the generations that have come before us believing they had discovered the route to human “immortality” (and every generation since the rise of modern science has had those who thought so), actually the ones who will achieve this dream?

Long for This World is at its heart a serious attempt to grapple with this question, and it tries to give us a clear picture of longevity science, built around the theoretical biologist Aubrey de Grey, who will either go down in history as a courageous prophet of a new era of superlongevity, or as just another figure in our long history of thinking biological immortality is at our fingertips when all we are seeing is a mirage.

One advantage we have over our ancestors who chased this dream is that we know much, much more about the biology of aging. Darwinian evolution allowed us to conceive non-poetic theories of the origins of death. In the 1880’s the German biologist August Weismann, in his essay “Upon the Eternal Duration of Life,” provided a kind of survival-of-the-fittest argument for death and aging. Even an ageless creature, Weismann argued, would over time have to absorb multiple shocks and eventually end up disabled; the longer something lives, the more crippled and worn out it becomes. Thus it is in the interest of the species that death exists, to clear the world of the disabled- very damned German, the whole thing.

Just after World War II the biologist Peter Medawar challenged Weismann’s view. For Medawar, in any species selective pressures really operate only when the organism is young. Those who survive long enough to breed are the only ones that count when it comes to natural selection. Like versions of James Dean or Marilyn Monroe, nature is just fine with our exiting the world in the bloom of youth- as long, that is, as we have passed on our genes.

In other words, healthful longevity is not something natural selection has been selecting most organisms for, and because of this it hasn’t been selecting against the bad things that can happen to old organisms either, as we’re finding when, by saving people from heart attacks in their 50’s, we destine them to die of diseases that were rare or unknown in the past, like Alzheimer’s. In a sense we’re the victims of natural selection’s indifference to the health and longevity of those past reproductive age.

Well, this is only partly true. Organisms that live in conditions where survival in youth is more secure end up with stretched longevity for their size. Some bats can live for decades while similar-sized mice have a lifespan of only a couple of years. Tortoises can live for well over a century while alligators of the same weight live 30-50 years.

Stretching healthful longevity is also something that occurs when you starve an animal. We’ve known for decades that lifespan (in other animals at least) can be increased through caloric restriction. Although the mechanism is unclear, the Darwinian logic is not: under conditions of starvation it’s a bad idea to breed, and the body seems to respond by slowing development, waiting for the return of food and a good time to mate.

Thus there is no such thing as a death clock; lifespan is malleable and can be changed if we just learn how to work the dials. We should have known this from our historical experience of the last two hundred years, in which we doubled the human lifespan, but now we know that nature does it all the time- and not, as we do, by addressing the symptoms of aging, but by resetting the clock of life itself.

We might find it easy to reset our own aging clock if there weren’t multiple factors playing a role in its ticking. Aubrey de Grey has identified seven, the most important of which (excluding cancerous mutations) are probably the accumulation of “junk” within cells and the development of harmful “cross-links” between cells. The strange thing about these is that they are not something that suddenly appears when we are actually “old”; they are there all along, only reaching levels at which they become noticeable and start to cause problems after many decades. We start dying the day we are born.

As we learn in Long for This World, there is hope that someday we may be able to effectively intervene against all these causes of aging. Every year the science needed to do so advances. Yet as Aubrey de Grey has indicated, the greatest threat to this quest for biological immortality is something we are all too familiar with – cancer.

The possibility of developing cancer emerges from the very way our cells work. Over a lifetime our trillions of cells replicate themselves a mind-bogglingly high number of times. It is almost impossible that every copying error will be caught before it takes on a life of its own and becomes a cancerous growth. Increasing lifespan only increases the amount of time in which such copying errors can occur.
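To get a rough feel for why perfect error-catching is such a long shot, consider a back-of-the-envelope sketch (the numbers here are illustrative assumptions of mine, not figures from Weiner’s book): the human body is often estimated to undergo on the order of $10^{16}$ cell divisions in a lifetime, so even if the chance that any single division slipped a dangerous, uncaught mutation past the body’s defenses were as low as one in a quadrillion, we would still expect

$$ \underbrace{10^{16}}_{\text{divisions per lifetime}} \times \underbrace{10^{-15}}_{\text{assumed escape rate per division}} \approx 10 $$

such events per lifetime. Doubling the lifespan doubles the number of draws from this lottery, which is why cancer looms so large over every superlongevity scenario.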

It’s in Aubrey de Grey’s solution to this last and most serious of superlongevity’s medical hurdles that Weiner’s faith in the project breaks down, as does mine. De Grey’s cure for cancer goes by the name of WILT – whole-body interdiction of the lengthening of telomeres. A great many of the cancers that afflict human beings achieve their deadly, limitless replication by taking control of the telomerase gene. De Grey’s solution is to strip every cell in the body of that gene, something that, even if successful in preventing cancerous growths, would eventually leave us unable to replenish our red and white blood cells. In order to allow us to live without the ability to make these cells, de Grey proposes regular infusions of stem cells. What this would leave us with is a life of constant, chemotherapy-like, invasive medical interventions just to keep us alive – in other words, a life in which even healthy people relate to their bodies, and are kept alive by, medical interventions of a kind now experienced only by the terminally ill.

I think what shocks Weiner about this last step in SENS is that it underscores just how radical the medical requirements of engineering superlongevity might become. It’s one thing to talk about strengthening the cell’s junk collector, the lysosome, by adding an enzyme or through some genetic tweak; it’s another to talk about removing the very cells and structures which define human biology – blood cells and platelets – which have always been essential for human life and health.

Yet WILT struck me with somewhat different issues and questions. Here’s how I have come to understand it. For simplicity’s sake, we might be said to have two models of healthcare, both of which have contributed to the gains we have seen in human health and longevity since 1800. As is often noted, a good deal of this gain in longevity was a consequence of improving childhood mortality: having fewer and fewer people die at the age of five drastically improves the average lifespan. We made these gains largely through public health – things like drastically improved sanitation, potable water, vaccinations, and, in the 20th century, antibiotics.

This set of improvements in human health was cheap, “easy”, and either comprised general environmental conditions or was administered at most annually, like the flu shot. These features allowed this first model of healthcare to be distributed broadly across the population, increasing longevity by saving the lives primarily of the young. In part these improvements, and above all the development of antibiotics, also allowed longevity increases at the older end of the scale, which, although less pronounced than the improvements in child mortality, are nonetheless very real. This is my second model of healthcare, and it includes everything from open-heart surgery, to chemo and radiation treatments for cancer, to lifelong prescription drugs for chronic conditions.

As opposed to the first model, the second is expensive, relatively difficult, and varies greatly among different segments of the population. My amoxicillin and Larry Page’s amoxicillin are the same, but the medical care we would each receive to treat something like cancer would be radically different.

We actually are making greater strides in the battle against cancer than at any time since Nixon declared war on the scourge way back in the 1970s. A new round of immunotherapy drugs is proving so successful against a host of different cancers that John LaMattina, former head of research and development for Pfizer, has stated that “We are heading towards a world where cancer will become a chronic disease in much the same way as we have seen with diabetes and HIV.”

The problem is the cost, which can range up to $150,000 per year. The new drugs are so expensive that the NHS has reduced the amount it is willing to spend on them by 30 percent. Here we are running up against the limits of the second model of healthcare, limits that at some point will force societies to choose between providing life-preserving care for all, or only for those rich enough to afford it.

If the superlongevity project is going to be a progressive project, it seems essential to me that it look like the first model of healthcare rather than the second. Otherwise it will either leave us with divergences in longevity within and between societies that make us long nostalgically for the “narrowness” of the current gap between today’s poorest and richest societies, or it will bankrupt countries that seek to extend increased longevity to everyone.

This would require a U-turn from the trajectory of healthcare today, which is dominated and distorted by the lucrative world of the second model. As an example of this distortion: the physicist Paul Davies is working on a new approach to cancer that involves attacking the disease with viruses. If successful, this would be a good example of model one. Using viruses (in a way the reverse of the immunotherapy approach) to treat cancer would likely be much cheaper than current approaches involving radiation, chemotherapy, and surgery, because viruses can self-replicate after being engineered rather than needing to be expensively and painstakingly manufactured in drug labs. The problem is that it’s extremely difficult for Davies to get funding for such research precisely because there isn’t much money to be made in it.

In an interview about his research, Davies compared his plight to how drug companies treat aspirin. There’s good evidence that plain old aspirin might be an effective preventative against cancer. Sadly, it’s almost impossible to find funding for large-scale studies of aspirin’s efficacy in preventing cancer, because you can buy a bottle of the stuff for a little over a buck, and what multi-billion-dollar pharmaceutical company could justify profit margins as low as that?

The distortions of the second model are even more in evidence when it comes to antibiotics. Here is one of the few places where the second model of healthcare is dependent upon the first. As this chilling article by Maryn McKenna drives home, we are in danger of letting the second model lead to the nightmare of a sudden, sharp reversal of the health and longevity gains of the last century.

We are only now waking up to the full danger implicit in antibiotic resistance. We’ve so overprescribed these miracle treatments, both to ourselves and to our poor farm animals, whom we treat as mere machines and “grow” in hellish, unsanitary conditions, that bacteria have evolved to no longer be treatable with the suite of antibiotics we have, which are now a generation old or older. If you don’t think this is a big deal, think about what it means to live in a world where a toothache can kill you and surgeries and chemotherapy can no longer be performed. A long winter of antibiotic resistance would mean that many of our dreams of superlongevity this century would be moot. It would mean many of us might die quite young from common illnesses, or from the surgical and treatment procedures that have combined to give us the longevity we have now.

Again, part of the reason we don’t have alternatives to legacy antibiotics is that pharmaceutical companies don’t see any profit in them as opposed to, say, Viagra. But the other part of the reason is just as interesting. It’s that we have overtreated ourselves because we find the discomfort of being even mildly sick for a few days unbearable. It’s also because we want nature, in this case our farm animals, to function like machines. Mechanical functioning means regularity, predictability, standardization, and efficiency, and we’ve had to so distort the living conditions, food, and even genetics of the animals we raise that they would not survive without our constant medical interventions, including antibiotics.

There is a great deal of financial incentive to build solutions to human medical problems around interminable treatments rather than once-and-done cures or interventions administered only periodically. Constant consumption and obsolescence guarantee revenue streams. Not too long ago Danny Hillis, whom I otherwise hold in the deepest respect, gave an interview on, among other things, proteomics, which, for my purposes here, essentially means the minute analysis of bodily processes with the purpose of intervening the moment things begin to go wrong – to catch diseases before they cause us to exhibit symptoms. An audience member asked a thought-provoking question which, when followed up by the interviewer Alexis Madrigal, seemed to leave the otherwise loquacious Hillis stumped: how do you draw the line between illness without symptoms and what the body just naturally does? The danger is you might end up turning everyone, including the healthy, into “patients” and “profit centers”.

We already have a world where seemingly healthy people need to constantly monitor and medicate themselves just to stay alive, where the body seems to be in a state of almost constant, secret revolt. This is the world as diabetics often experience it, and it’s not a pretty one. What I wonder is whether, in a world in which everyone sees themselves as permanently sick – as in the process of dying – and in need of medical intervention to counter this sickness, we will still remember the joy of considering ourselves healthy. This is medicine becoming subsumed under our current model of consumption.

Everyone, it seems, has woken up to the fact that consumer electronics has the perfect consumption-sustaining model. If things quickly grow “old” to the point where they no longer work with everything else you own, or become so rare that you are unable to find replacement parts, then you are forced to upgrade merely to ensure that your stuff still works. Like the automotive industry, healthcare now seems to be embracing technological obsolescence as a road to greater profitability. Insurance companies seem poised to use devices like the Apple Watch to sort and monitor customers, but that is likely only the beginning.

Let me give you my nightmare scenario for a world of superlongevity. It’s a world largely bereft of children, where our relationship to our bodies has become something like the one we have with our smartphones: where we are constantly faced with the obsolescence of the hardware, the chemicals, the nano-machines, and the genetically engineered organisms under our own skins, and are in near-continuous need of upgrades to stay alive. It is a world where those too poor to be in the throes of this cycle of upgrades followed by obsolescence followed by further upgrades are considered a burden and disposable, in the same way August Weismann viewed the disabled in his day. It’s a world where the rich have brought capitalism into the body itself, an individual life preserved because it serves as a perpetual “profit center”.

The other path would be for superlongevity to be pursued along my first model of healthcare, focusing its efforts on understanding the genetic underpinnings of aging by looking at miracles such as the bowhead whale, which can live for two centuries and gets cancer no more often than we do even though it has trillions more cells than us. It would focus on interventions that were cheap, one-time or periodic, and could be spread quickly through populations. This would be a progressive superlongevity. If successful, rather than bolster the system built around the second model of healthcare, it would bankrupt much of it, for it would represent a true cure rather than a treatment for many of the diseases that ail us.

Yet even superlongevity pursued to reflect the demands of justice confronts a moral dilemma that seems to lie at the heart of any superlongevity project. The morally problematic feature of superlongevity pursued along the second model of healthcare is that it risks giving long life only to the few. Troublingly, superlongevity pursued along the first model ends up in a similar place, robbing future generations of both human beings and other lifeforms of the possibility of existing. It is very difficult to see how, if a near-future generation gains the ability to live indefinitely, this new state could exist side by side with the birth of new people, or how such a world of many “immortals” of the highly consuming type we are would be compatible with the survival of the diversity of the natural world.

I see no real solution to this dilemma, though perhaps, as elsewhere, the limits of nature will provide one for us: we may discover some bound to the length of human life which is compatible with new people being given the opportunity to be born and experience the sheer joy and wonder of being alive, a bound that would also allow the other creatures with whom we share our planet to continue to experience these joys and wonders as well. Thankfully, there is probably some distance between current human lifespans and such a bound, and thus the most important thing we can do for now is try to ensure that research into superlongevity has the question of sustainable equity serve as its ethical lodestar.

 Image: Memento Mori, South Netherlands, c. 1500-1525, the Thomson collection

Think Time is Speeding Up? Here’s How to Slow It!

seven stages in man's life

One of the weirder things about human beings’ perception of time is that our subjective clocks are so off. A day spent in our dreary cubicles can seem to crawl like an Amazonian sloth, while our weekends pass by as fast as a chameleon’s tongue. Most dreadful of all, once we pass into middle age, time seems to transform itself from a lumbering steam train heaving us through clearly delineated seasons and years into a Japanese bullet train unstoppably hurtling us towards death, with decades passing us by in a blur.

Wondering about time is a habit of the middle aged, as sure a sign of having passed the clock-blind golden age of youth as the proverbial convertible or Harley. If my soon-to-be 93-year-old grandmother is any indication, the old, like the young, aren’t much taken aback by the speed of time’s passage. Instead, time seems to take on the viscosity of New England molasses, the days gently flowing down life’s drain.

Up until now, I didn’t think there was any empirical evidence to back up such colloquial observations, just the anecdotes passed around the holiday dinner table like turkey stuffing and cranberry sauce: “Can you believe it’s almost Christmas again?”, “Where did the year go?” Lucky for me, I now know what happened to time, or rather how I’ve been confuddled all this time into thinking something had happened to it. I know because I’ve read the psychologist and BBC science broadcaster Claudia Hammond’s excellent little book on the psychology of time: Time Warped: Unlocking the Mysteries of Time Perception.

If you’ve ever asked yourself why time seems to crawl when you’re watching the clock and want it to go faster, or why time appears to speed up in the face of an event you’re dreading like a speech, this is the book for you. But Hammond’s Time Warped goes much deeper than that and exposes us to the reality of what it would be like if some of our common dreams about controlling time actually came true – if we could indeed have “perfect memory” or, as everyone keeps reminding us to, “live in the present”. In addition to all that, the ambiguous relationship with time she reveals raises interesting questions for those hoping we wrest from nature a great deal more of it.

Hammond doesn’t really discuss the physics of time, or more precisely, the fact that much of modern physics views time as an illusion akin to past imaginary entities like the ether or phlogiston. The fact that something so essential to our human self-understanding is considered by the bedrock of the sciences to be a brain-induced mirage has led to a rebellion by at least one prominent physicist, Lee Smolin, but he’s almost a lone voice in the quest to restore time. Nor is Hammond all that interested in the philosophy of time, its history, or what time actually is. You won’t find here any detailed discussion of how to define time; it’s more like Supreme Court Justice Potter Stewart’s definition of pornography: “you know it when you see it.” Hammond is, though, on firm scientific ground discussing her main subject, the human perception of time, which, whatever its underlying reality or unreality, we find nearly impossible to live without.

Evolution might have kept things simple and given the human brain just one clock, a steady Big Ben of a thing to accurately mark the time. Instead, Hammond draws our attention to the fact that we seem to have multiple clocks within us, all running at once.

We seem to be extremely good at gauging the passage of seconds or minutes without counting. We also have a twenty-four-hour clock that runs with the same length but independently of the alternating light and darkness of our spinning earth, as Hammond shows was proven by Michel Siffre who, in the name of science and youthful stupidity (he was 23), braved two months in a dark cave meticulously recording his bodily rhythms. What Siffre proved is that, sun or no sun, our bodies follow twenty-four-hour cycles. The turning of the earth has bored its traces deep into us, which we fight against using the miracle of electric lights, and, if the popularity of sleeping pills is any indication, so often lose.

For some of us, there seems to be an inbuilt ability, and need, to see longer stretches of time spatially, in the form of ovals, circles, or zig-zags rather than the linear timelines one sees in history books. One day, not long before I read Hammond’s book, I found myself scribbling while thinking about how far into the future my great-grandchildren would live, should my now small daughters and their children ever have children of their own.

For whatever reason, I didn’t draw out the decades as blocks of a line but as a set of steps. I thought nothing of it until I read Time Warped and saw that this was a common way for people to picture decades, though many do so in three dimensions rather than my paltry two. Some people also associate days with color – a kind of synesthesia that isn’t just playful imagination, but is often stable across an individual’s life.

There is no real way to talk about how human beings experience time without discussing memory. What I found mind-blowing about Time Warped was just how many of what we consider the flaws of our memory end up being ambiguities we would be better off not having resolved.

Take the fact that our memories are so fallible and incomplete. One would think things would be so much better if our brains could record everything and have it ready for playback on a sort of neuronal blu-ray. For certain situations, like criminal trials, this would solve a whole host of problems, but elsewhere we should watch what we wish for. As Hammond shows, there are people who can remember every piece of minutiae, down to the socks they wore on a particular day decades earlier, but a moment’s reflection leads to the conclusion that such natural-born mnemonic prodigies fail to dominate creative fields, the sciences, or anything else – and such was the case long before we had Google to remember things for us.

There are people who believe that the path to ensuring they are not unraveled by the flow of time is to record and document everything about themselves and all of their experiences. Digital technology has doubtless made such a quest easier, but Hammond leads us to wonder whether or not the effort to record our every action and keystroke is quixotic. Who will actually take the time to look at all this stuff? How many times, she asks, have any of us sat down to watch our wedding video?

People obsessed with recording every detail of their lives are very likely motivated by the idea that it is their memories that make them who they are. Part of our deep fear of developing Alzheimer’s probably originates in this idea that the loss of our memories would constitute the loss of our self. Yet somehow the loss of memories (and the damage of Alzheimer’s runs much deeper than the loss of memories) does not seem to rob those who experience such losses of what others recognize as their long-standing personality.

Strangely, our not-too-reliable memories, when combined with our ability to mentally time-travel into the past, give rise, Hammond believes, to our ability to imagine futures that do not yet exist. They allow us to mix and match different scenes from our memory to come up with whole new ones we anticipate will happen, or even ones that could never happen.

The idea that our imagination might owe its existence to our faulty memory put me in mind of recent findings by Laurie Santos of the Comparative Cognition Laboratory at Yale. Santos has shown that human beings can be less, not more, rational than animals when it comes to certain tasks, due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren’t thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure performed by another human being even when able to work the puzzle out independently. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. It seems it is our flaws rather than our superiority that have so favored us above our animal kin.

What, though, of the big problem, the one we all face – the frightening speed with which we are running through our short lives? There is, it seems, some wisdom in the adage that the best way to approach time is to focus on the present, even if you’re like me and watching another TED talk on the subject by Pico Iyer is enough to make you hurl. If the future is a realm of anxiety and the past a realm of regret, then as long as one is not in pain, the present moment is a sort of refuge. Hammond believes that thinking about the future, even if we so often get it wrong by, for instance, thinking that our future self will have more time, money, or willpower, is the default mode of the brain.

Any meditative tradition worth its salt tries to free us from this future-obsessed mode and connect us more fully with the present moment of our existence: our breath, its rhythms, the people we care about. There are ways we can achieve this focus on the present without meditation, but they often involve contemplation of our own impending death, which is why soldiers amid the suffering of war, the terminally ill, and the very old like my Nanna can often unhitch themselves from the train pulling our thinking off to the future.

Focusing on the present is one way to not only slow the pace of time, but to infuse the short time we have here with the meaning it deserves. Knowing that my small children will outgrow my silliness is the best way I have found to appreciate their laughter now.

Present focus does not, however, solve the central paradox of time for the middle aged, namely, why it seems to move so much faster as we get older, for it is doubtful we were much more capable of savoring the moment as teenagers than we are as adults. Our commonsense explanation of time speeding up as we age typically has to do with proportionality, as in “a year for a five year old is 1/5 of their life, but for a forty year old it is merely 1/40.” Hammond shows this proportionality theory to be wrong on its face, for, if it were true, the days of a middle-aged person would be quite literally buzzing by in comparison to the days of their younger selves.
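To see what that prediction actually amounts to, it helps to run the proportionality theory as a quick hypothetical calculation (my own illustration, not one from Hammond’s book): if subjective duration really scaled with the fraction of life an interval represents, an hour’s felt length would vary inversely with age, so that

$$ \frac{\text{felt length of an hour at 40}}{\text{felt length of an hour at 5}} = \frac{1/40}{1/5} = \frac{1}{8}, $$

meaning a six-hour stretch for the forty-year-old should feel the way a mere forty-five minutes felt to the five-year-old. Nobody’s afternoons actually run at one-eighth speed, which is the absurdity the following observation makes plain.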

Only a moment’s reflection should show us that the proportionality theory of time’s seeming quickening as we age can’t be true. Think back to your school days waiting impatiently for the 3:00 pm bell to ring: was that wait really much longer than the time you spend now stuck to your chair in some meaningless business meeting? There are some differences between the young and the old in gauging how much time has passed, yet these are nowhere near large enough to explain the differences in the subjective experience of how fast time passes between those two groups. So if the proportionality theory doesn’t explain the speeding up of time for the middle aged, what does?

When thinking about duration, the thing we need to keep in mind, as the work of Daniel Kahneman has shown, is that we have not one but two “selves”: an experiencing self and a remembering self. Having two selves does a number on our ability to make decisions with our future in mind – the experiencing self wants us to eat the cookie now, because it’s the remembering self that will regret it later. It also skews our sense of the past.

Our sense of the duration of time is experienced differently by these two separate selves. Waiting in a long line feels like forever when you’re there, but unless something particularly interesting happened during your wait, the remembered experience feels like it happened in the blink of an eye. Yet a wonderful or frightening experience, like a first kiss or a car accident, though it seems to fly by while we’re in it, usually cuts its grooves deep enough into our memory that when we reflect upon it, it seems to have taken a very long time to unfold.

Hammond’s explanation for why youth seems stretched out in time compared to middle age rests on what she calls the “reminiscence bump” and the “holiday paradox”. Adolescence and young adulthood are filled with so many firsts that they leave a deep impression on our memory, and this “thickness” of memory leads our remembering self to conclude that time must have been going more slowly back in the heady days of our youth – the reminiscence bump. If you want to make your middle-aged days seem longer, then you need to fill them up with exciting and new things, which is the reason, Hammond speculates, that holidays full of new experiences seem fast when we’re in them but stretched out on reflection – the holiday paradox. She wonders, however, whether the better option is just not to worry so much about time’s speed, and to rest when we need it rather than constantly chase after new memories.

Given the interest of the audience here in extending the human lifespan, I wonder what the implications of such discoveries regarding time might be for that project. A comedy could certainly be written in which we have doubled the length of human life and end up also doubling all those things we now find banal about time. Would human beings who lived well beyond their hundreds be subject to meetings that stretched out for days and weeks? Would traffic jams in which you spent a week in your car be normal?

Perhaps we might even want to focus on our ability to manipulate our sense of time’s duration as an easier path towards a sort of longevity. Imagine a world where love affairs could stretch out for centuries and pain and boredom are reduced to a blink, or a future that has “time retreats” (like today’s religious retreats) where one goes away for a week that has been neurologically altered to feel like decades or longer. We might use the same sorts of time manipulation to punish people for heinous crimes, so that a 600-year sentence actually means something. One might object that such induced experiences of slow time aren’t real, but then again neither are most versions of digital immortality, or even, as Hammond showed us, our subjective experience of time itself.

All of this talk of manipulating our sense of time as a road to longevity is just playful speculation on my part. What should be clear is that any move towards changing the human body so that it lives much longer than it does now is probably also going to have to grapple with and transform our psychological notions of time and the society we have built around our strange capacity to warp it.

 

Summa Technologiae, or why the trouble with science is religion

Soviet Space Art 2

Before I read Lee Billings’ piece in the fall issue of Nautilus, I had no idea that, in addition to being one of the world’s greatest science-fiction writers, Stanislaw Lem had written what became a forgotten book, a tome intended to be the overarching text of the technological age: his 1966 Summa Technologiae.

I won’t go into detail on Billings’ thought-provoking piece; suffice it to say that he leads us to question whether we have lost something of Lem’s depth with our current batch of Silicon Valley singularitarians, who have largely repackaged ideas first fleshed out by the Polish novelist. Billings also leads us to wonder whether our focus on either the fantastic or the terrifying aspects of the future is causing us to forget the human suffering that is here, right now, at our feet. I encourage you to check the piece out for yourself. In addition to Billings there’s also an excellent review of the Summa Technologiae by Giulio Prisco, here.

Rather than look at either Billings’ or Prisco’s piece, I will try to lay out some of the ideas found in Lem’s 1966 Summa Technologiae, a book at once dense almost to the point of incomprehensibility, yet full of insights we should pay attention to as the world Lem imagined unfolds before our eyes – or at least seems to be doing so for some of us.

The first thing that struck me when reading the Summa Technologiae was that it isn’t our version of Aquinas’ Summa Theologica, from which Lem got his tract’s name. In the 13th-century Summa Theologica you find the voice of a speaker supremely confident in both the rationality of the world and his own understanding of it. Aquinas, of course, didn’t really possess such a comprehensive understanding, but it is perhaps odd that the more we have learned the more confused we have become, and Lem’s Summa Technologiae reflects some of this modern confusion.

Unlike Aquinas, Lem is in a sense blind to our destination; what he is trying to do is probe into the blackness of the future to sense the contours of the ultimate fate of our scientific and technological civilization. Lem seeks to identify the roadblocks we will likely encounter if we are to continue our technological advancement – roadblocks that are important to identify because we have yet to find any evidence, in the form of extraterrestrial civilizations, that they can actually be overcome.

The fundamental aspect of technological advancement is that it has become both its own reward and a trap. We have become absolutely dependent on scientific and technological progress as long as population growth continues, for if technological advancement stumbles while population continues to increase, living standards will precipitously fall.

The problem Lem sees is that science is growing faster than the population, and in order to keep up with it we would eventually have to turn all human beings into scientists, and then some. Science advances by exploring the whole of the possibility space – we can’t predict in advance which of its explorations will produce something useful, or which avenues will prove fruitful in terms of our understanding. It’s as if the territory has become so large that at some point we will no longer have enough people to explore all of it, and thus will have to narrow the number of regions we look at. This narrowing puts us at risk of never finding the keys to El Dorado, so to speak, because we will not have asked and answered the right questions. We are approaching what Lem calls “the information peak.”

The absolutist nature of the scientific endeavor itself – our need to explore all avenues or risk losing something essential – will, for Lem, inevitably lead to our attempt to create artificial intelligence. We will pursue AI to act as what he calls an “intelligence amplifier”, though Lem is thinking of AI in a whole new way, where computational processes mimic those done in nature, like the physics “calculations” of a tennis genius like Roger Federer, or my 4-year-old learning how to throw a football.

Lem, through the power of his imagination alone, seemed to anticipate both some of the problems we would encounter when trying to build AI and the ways we would likely try to escape them. For all their seeming intelligence, our machines lack the behavioral complexity of even lower animals, let alone human intelligence, and one of the main roads away from these limitations is getting silicon intelligence to be more like that of carbon-based creatures – not so much “brain-like” as “biology-like”.

Way back in the 1960s, Lem thought we would need to learn from biological systems if we wanted to really get to something like artificial intelligence. Think, for example, of how much more bang you get for your buck when you contrast DNA with a computer program: a computer program gets you some interesting or useful behavior or process done by a machine; DNA, well… it gets you programmers.

The somewhat uncomfortable fact about designing machine intelligence around biology-like processes is that it might end up working a lot like the human brain – a process largely invisible to its possessor. How did I catch that ball? Damned if I know, or at least damned if I know what internal process led me to catch it.

Just going about our way in the world, we make “calculations” that would make the world’s fastest supercomputers green with envy, were they actually sophisticated enough to experience envy. We do all the incredible things we do without having any solid idea, either scientific or internal, about how we are doing them. Lem thinks “real” AI will be like that. It will be able to outthink us because it will be a species of natural intelligence like our own, and just as with our own thinking, we will be hard-pressed to explain how exactly it arrived at some conclusion or decision. Truly intelligent AI will end up being a “black box”.

Our increasingly complex societies might need such AIs to serve the role of what Lem calls “homeostats” – machines that run the complex interactions of society. The dilemma appears the minute we surrender the responsibility for making our decisions to a homeostat. For then the possibility opens that we will not be able to know how the homeostat arrived at its decision, or what it is actually trying to accomplish when it informs us that we should do something, or even what goal lies behind its actions.

It’s quite a fascinating view: that science might be epistemologically insatiable in this way; that at some point it will grow beyond the limits of human intelligence, whether our sheer numbers or our mental capacity; that the only way out which still includes technological progress will be to develop “naturalistic” AI; and that very soon our societies will be so complicated that they will require such AIs to manage them.

I am not sure if the view is right, but to my eyes at least it’s got much more meat on its bones than current singularitarian arguments about “exponential trends”, which, unlike Lem, take little account of the possibility that the scientific wave we’ve been riding for five or so centuries will run into a wall we find impossible to crest.

Yet perhaps the most intriguing ideas in Lem’s Summa Technologiae are those imaginative leaps that he throws at the reader almost as asides, with little reference to his overall theory of technological development. Take his metaphor of the mathematician as a sort of crazy “tailor”:

He makes clothes but does not know for whom. He does not think about it. Some of his clothes are spherical without any opening for legs or feet…

The tailor is only concerned with one thing: he wants them to be consistent.

He takes his clothes to a massive warehouse. If we could enter it, we would discover clothes that could fit an octopus, others fit trees, butterflies, or people.

The great majority of his clothes would not find any application. (171-172)

This is Lem’s clever way of explaining the so-called “unreasonable effectiveness of mathematics”, a view that is the opposite of current-day Platonists such as Max Tegmark, who hold all mathematical structures to be real even if we are unable to find actual examples of them in our universe.

Lem thinks math is more like a ladder. It allows you to climb high enough to see a house, or even a mountain, but shouldn’t be confused with the house or the mountain itself. Indeed, most of the time, as his tailor example is meant to show, the ladder mathematics builds isn’t good for climbing at all. This is why Lem thinks we will need to learn “nature’s language” rather than go on using our invented language of mathematics if we want to continue to progress.

For all its originality and freshness, the Summa Technologiae is not without its problems. Once we start imagining that we can play the role of creator, it seems we are unable to escape the same moral failings that have long been held against God. Here is Lem imagining a far future in which we could create a simulated universe inhabited by virtual people who think they are real:

Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything” considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess. (291-292)

If Lem is ultimately proven correct, and we arrive at this destination where we create virtual universes with sentient inhabitants whom we keep blind to their true nature, then science will have ended where it began – with the demon imagined by Descartes.

The scientific revolution commenced when it was realized that we could trust neither our own senses nor our traditions to tell us the truth about the world – the most famous example of which was the discovery that the earth, contrary to all perception and history, traveled around the sun and not the other way round. The first generation of scientists, who emerged in a world in which God had “hidden his face”, couldn’t help but understand this new view of nature as the creator’s elaborate puzzle, one we would have to painfully reconstruct, piece by piece, hidden as it was beneath the illusion of our own “fallen” senses and the false post-Edenic world we had built around them.

Yet a curious new fear arises with this: What if the creator had designed the world so that it could never be understood? Descartes, at the very beginning of science, reconceptualized the creator as an omnipotent demon.

I will suppose, then, not that Deity, who is sovereignly good and the fountain of truth, but that some malignant demon, who is at once exceedingly potent and deceitful, has employed all his artifice to deceive me; I will suppose that the sky, the air, the earth, colours, figures, sounds, and all external things, are nothing better than the illusions of dreams, by means of which this being has laid snares for my credulity.

Descartes’ escape from this dreaded absence of intelligibility was his famous “cogito ergo sum”, the certainty a reasoning being has in its own existence. The entire world could be an illusion, but the fact of one’s own consciousness was something not even an all-powerful demon could take away.

What Lem’s resurrection of Descartes’ demon tells us is just how deeply religious thinking still lies at the heart of science. The idea has become secularized, and part of our science-fiction mythology, but it’s still there; indeed, it’s the only scientifically fashionable form of creationism around. As proof, not even the most secular among us are likely to bat an eye at experiments to test whether the universe is an “infinite hologram”. If such experiments bear fruit, they will point to a designer that either allowed us to know our reality or didn’t care to “bar the exits”; but the crazy thing, if one takes Lem and Descartes seriously, is that their creator/demon is ultimately as ineffable and unroutable as the old ideas of God from which it descended. For any failure to prove the hypothesis that we are living in a “simulation” can be brushed aside on the grounds that whatever has brought about this simulation doesn’t really want us to know. It’s only a short step from there to unraveling the whole concept of truth at the heart of science. Like any garden-variety creationist, we end up seeing the proofs of science as part of God’s (or whatever we’re now calling God) infinitely clever ruse.

The idea that there might be an unseeable creator behind it all is just one of the religious myths buried deeply in science, a myth that traces its origins less to the mundane day-to-day experiments and theory-building of actual scientists than to a certain type of scientific philosophy, or science fiction, that has constructed a cosmology around what science is for and what science means. It is a mythology the singularitarians and others who followed Lem remain trapped in, often to the detriment of both technology and science. What is a shame is that these are myths that Lem, even with his expansive powers of imagination, did not dream widely enough to see beyond.