Life: Inevitable or Accident?

The Tree of Life, Gustav Klimt (https://www.artsy.net/artist/gustav-klimt)

Here’s the question: does the existence of life in the universe reflect something deep and fundamental or is it merely an accident and epiphenomenon?

There's an interesting new theory coming out of the field of biophysics that claims the cosmos is indeed built for life, and not merely in the sense found in the so-called "anthropic principle," which states that just by being here we can assume that nature's fundamental constants must be friendly to complex organisms such as ourselves that are able to ask such questions. The new theory makes the claim that not just life, but life of ever-growing complexity and intelligence, is not merely likely but the inevitable result of the laws of nature.

The proponent of the new theory is a young physicist at MIT named Jeremy England. I can’t claim I quite grasp all the intricate details of England’s theory, though he does an excellent job of explaining it here, but perhaps the best way of capturing it succinctly is by thinking of the laws of physics as a landscape, and a leaning one at that.

The second law of thermodynamics leans in the direction of increased entropy: systems naturally move in the direction of losing rather than gaining order over time, which is why we break eggs to make omelettes and not the other way round. The second law would seem to be a bad thing for living organisms, but oddly enough, ends up being a blessing not just for life, but for any self-organizing system so long as that system has a means of radiating this entropy away from itself.

For England, the second law provides the environment and direction in which life evolves. In those places where energy from outside is available and can be dissipated across some boundary, such as a pool of water, self-organizing systems naturally come to be dominated by those forms that are particularly good at absorbing energy from their surrounding environment and radiating less organized energy, in the form of heat (entropy), back into it.
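For those who want the gist in symbols, here is a rough sketch from my own reading of England's 2013 paper on the statistical physics of self-replication (treat the details as my paraphrase rather than gospel) of the kind of bound he derives:

$$ \beta \langle \Delta Q \rangle_{I \to II} + \ln\frac{\pi(II \to I)}{\pi(I \to II)} + \Delta S_{int} \;\geq\; 0 $$

Here $\beta$ is the inverse temperature of the surrounding bath, $\langle \Delta Q \rangle_{I \to II}$ the average heat released into it during the transition from macrostate I to macrostate II, $\pi$ the forward and reverse transition probabilities, and $\Delta S_{int}$ the system's internal entropy change. Roughly: the harder a transformation is to reverse (think of a clump of molecules becoming a self-replicator), the more heat it must, on average, dump into its surroundings- which is why good absorbers and dissipaters of energy come to dominate.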

This landscape in which life evolves, England postulates, may tilt as well in the direction of complexity and intelligence, because in an environment whose energy supply fluctuates, those forms able to anticipate the direction of such oscillations gain the possibility of aligning themselves with them, and thus become able to accomplish even more work through resonance.

England is in no sense out to replace Darwin's natural selection as the mechanism through which evolution is best understood, though, should he be proved right, he would end up greatly amending it. If his theory ultimately proves successful, and it is admittedly very early days, it will have answered one of the fundamental questions that has dogged evolution since its beginnings. For while Darwin's theory provides us with all the explanation we need for how complex organisms such as ourselves could have emerged out of seemingly random processes- that is, through natural selection- it has never quite explained how you go from the inorganic to the organic and get evolution working in the first place. England's work is blurring the line between the organic and the most complicated self-organizing forms of the inorganic, making the line separating cells from snowflakes and storms a little less distinct.

Whatever its ultimate fate, however, England’s theory faces major hurdles, not least because it seems to have a bias towards increasing complexity, and in its most radical form, points towards the inevitability that life will evolve in the direction of increased intelligence, ideas which many evolutionary thinkers vehemently disavow.

Some evolutionary theorists may see efforts such as England's not as a paradigm shift waiting in the wings, but as an example of a misconception regarding the relationship between increasing complexity and evolution that now appears to have been adopted by actual scientists rather than merely a misguided public. A misconception that, couched in scientific language, will further muddy the minds of the public, leaving them with a conception of evolution that belongs much more to the 19th century than to the 21st. It is a misconception whose most vocal living opponent, after the death of the irreplaceable Stephen Jay Gould, has been the paleontologist, evolutionary biologist, and senior editor of the journal Nature, Henry Gee, who has set out to disabuse us of it in his book The Accidental Species.

Gee's goal is to remind us of what he holds to be the fundamental truth behind the theory of evolution- that evolution has a single purpose from which everything else follows in lockstep- reproduction. His objective is to do away, once and for all, with what he feels is a common misconception: that evolution is leading towards complexity and progress, and that the highest peak of this complexity and progress is us- human beings.

If improved prospects for reproduction can be bought through the increased complexity of an organism then that is what will happen, but it needn't be the case. Gee points out that many species, notably some worms and many parasites, have achieved improved reproductive prospects by decreasing their complexity. Therefore the idea that complexity (as in an increase in the specialization and number of parts an organism has) is merely a matter of evolution plus time doesn't hold up to close scrutiny. Judged through the eyes of evolution, losing features and becoming simpler is not necessarily a vice. All that counts is an organism's ability to make more copies, or for animals that reproduce through sex, blended copies, of itself.

Evolution in this view isn't beautiful but coldly functional and messy- a matter of mere reproductive success. Gee reminds us of Darwin's image of evolution's product as a "tangled bank"- a weird menagerie of creatures, each with its own particular historical evolutionary trajectory. The anal-retentive Victorian-era philosophers who tried to build upon his ideas couldn't accept such a mess and:

…missed the essential metaphor of Darwin's tangled bank, however, and saw natural selection as a sort of motor that would drive transformation from one preordained station on the ladder of life to the next one. (37)

Gee also sets out to show how deeply limited our abilities are when it comes to understanding the past through the fossil record. Very, very few of the species that have ever existed left evidence of their lives in the form of fossils, which are formed only under very special conditions, and the process of fossilization greatly favors the preservation of some species over others. The past is thus incredibly opaque, making it impossible to impose an overarching narrative upon it- such as increasing complexity- as we move from the past towards the present.

Gee, though an ardent defender of evolution and opponent of creationist pseudoscience, finds the gaps in the fossil record so pronounced that he thinks we can create almost any story we want from it and end up projecting our contemporary biases onto the speechless stones. This is the case even when the remains we are dealing with are of much more recent origin and especially when their subject is the origin of us.

We've tended, for instance, to link tool use and intelligence, even in those cases, such as Homo habilis, where the records and artifacts point to a different story. We've tended not to see other human species, such as the so-called Hobbit man, as ways we might have actually evolved had circumstances not played out in precisely the way they did. We have not, in Gee's estimation, been working our way towards the inevitable goal of our current intelligence and planetary dominance, but have stumbled into it by accident.

Although Gee is in no sense writing in anticipation of a theory such as England's, his line of thinking does seem to pose obstacles that the latter's hypothesis will have to address. If it is indeed the case that, as England has stated it, complex life arises inevitably from the physics of the universe, so that in his estimation:

You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.

Then England will have to address why it took so incredibly long – 4 billion years out of the earth's 4.5-billion-year history – for actual plants to make their debut, not to mention similar spans for other complex eukaryotes such as animals like ourselves.

Whether something like England’s inevitable complexity or Gee’s, not just blind, but drunk and random, evolutionary walk is ultimately the right way to understand evolution has implications far beyond evolutionary theory. Indeed, it might have deep implications for the status and distribution of life in the universe and even inform the way we understand the current development of new forms of artificial intelligence.

What we have discovered over the last decade is that bodies of water appear to be much more widespread than previously thought, and can be found in environments far beyond those previously considered. Hence NASA's recent announcement that we are likely to find microbial life in the next 10-30 years, both in our solar system and beyond. What this means is that England's heat baths are likely ubiquitous, and if he's correct, life can likely be found anywhere there is water- meaning nearly everywhere. There may even be complex lifelike forms that did not evolve through what we would consider normal natural selection at all.

If Gee is right, the universe might be ripe for life, but the vast, vast majority of that life will be microbial, and no amount of time will change that fate on most life-inhabited worlds. If England in his minor key is correct, the universe should at least be filled with complex multicellular life forms such as ourselves. Yet it is the possibility that England is right in his major key- that consciousness, civilization, and computation might flow naturally from the need of organisms to resonate with their fluctuating environments- that, biased as we are, we likely find most exciting. Such a view leaves us with the prospect of many, many more forms of intelligence and technological civilizations like our own spread throughout the cosmos.

The fact that the universe so far has proven silent and devoid of any signs of technological civilization might give us pause when it comes to endorsing England’s optimism over Gee’s pessimism, unless, that is, there is some sort of limit or wall when it comes to our own perceived technological trajectory that can address the questions that emerge from the ideas of both. To that story, next time…

Truth and Prediction in the Dataclysm

The Deluge by Francis Danby. 1837-1839

Last time I looked at the state of online dating. Among the figures mentioned was Christian Rudder, one of the founders of the dating site OkCupid and the author of a book on big data called Dataclysm: Who We Are When We Think No One's Looking, which somehow manages to be both laugh-out-loud funny and deeply disturbing at the same time.

Rudder is famous, or infamous depending on your view of the matter, for having written a piece about his site with the provocative title "We experiment on human beings!" There he wrote:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

That statement might set the blood of some boiling, but my own negative reaction to it is somewhat tempered by the fact that Rudder's willingness to run experiments on his site's users originates, it seems, not in any conscious effort to be more successful at manipulating them, but as a way to quantify our ignorance. Or, as he puts it in the piece linked to above:

I’m the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.

Rudder eventually turned his experiments on the data of OkCupid's users into his book Dataclysm, which displays the same kind of brutal honesty and acknowledgement of the limits of our knowledge. What he is trying to do is make sense of the deluge of data now inundating us. The only way we have found to do this is to create sophisticated algorithms that allow us to discern patterns in the flood. The problem with using algorithms to try to organize human interactions (which have themselves now become points of data) is that users are often reduced to whatever version of what it means to be a human being has been embedded in the algorithm by its programmers. Rudder is well aware of and completely upfront about these limitations, and refuses to make any special claims about algorithmic wisdom compared to the normal human sort. As he puts it in Dataclysm:

That said, all websites, and indeed all data scientists, objectify. Algorithms don't work well with things that aren't numbers, so when you want a computer to understand an idea, you have to convert as much of it as you can into digits. The challenge facing sites and apps is thus to chop and jam the continuum of human experience into little buckets 1, 2, 3, without anyone noticing: to divide some vast, ineffable process- for Facebook, friendship, for Reddit, community, for dating sites, love- into pieces a server can handle. (13)

At the same time, Rudder appears to see the data collected on sites such as OkCupid as a sort of mirror, reflecting back to us, in ways we have never had available before, the real truth about ourselves, laid bare of the social conventions and politeness that tend to obscure the way we truly feel. And what Rudder finds in this data is not a reflection of the inner beauty of humanity one might hope for, but something more like the portrait out of The Picture of Dorian Gray.

As an example take what Rudder calls "Wooderson's Law," after the character from Dazed and Confused who said in the film, "That's what I love about these high school girls; I get older while they stay the same age." What Rudder has found is that heterosexual male attraction to females peaks when those women are in their early 20's and thereafter precipitously falls. On OkCupid at least, women in their 30's and 40's are effectively invisible when competing against women in their 20's for male sexual attention. Fortunately for heterosexual men, women are more realistic in their expectations and tend to report the strongest attraction to men roughly their own age, until sometime in men's 40's, when male attractiveness also falls off a cliff… gulp.

Another finding from Rudder's work is not just that looks rule, but just how absolutely they rule. In his aforementioned piece, Rudder lays out that the vast majority of users essentially equate personality with looks. A particularly stunning woman can find herself with a 99% personality rating even if she has not written one word in her profile.

These are perhaps somewhat banal and even obvious discoveries about human nature that Rudder has been able to mine from OkCupid's data, and to my mind at least, they are less disturbing than the deep-seated racial bias he finds there as well. Again, at least among OkCupid's users, dating preferences are heavily skewed against black men and women. Not just whites, it seems, but all other racial groups- Asians, Hispanics- would apparently prefer to date someone from a race other than African- disheartening for the 21st century.

Rudder looks at other dark manifestations of our collective self than those found in OkCupid's data as well. Try using Google search as one would play the game Taboo. The search suggestions that pop up in the Google search bar, after all, are compiled on the basis of Google users' most popular searches and thus provide a kind of gauge on what 1.17 billion human beings are thinking. Try these, some of which Rudder plays himself:

“why do women?”

“why do men?”

“why do white people?”

“why do black people?”

“why do Asians?”

“why do Muslims?”

The exercise gives a whole new meaning to Nietzsche’s observation that “When you stare into the abyss, the abyss stares back”.

Rudder also looks at the ability of social media to engender mobs. Take this case from Twitter in 2014. On New Year's Eve of that year a young woman tweeted:

“This beautiful earth is now 2014 years old, amazing.”

Science obviously wasn't her strength in school, but what should have led to nothing more than collective giggles, or perhaps a polite correction regarding terrestrial chronology, ballooned into a storm of tweets like this:

“Kill yourself”

And:

“Kill yourself you stupid motherfucker”. (139)

As a recent study has pointed out, the emotion second most likely to go viral is rage; we can count ourselves very lucky that the emotion most likely to go viral is awe.

Then there’s the question of the structure of the whole thing. Like Jaron Lanier, Rudder is struck by the degree to which the seemingly democratized architecture of the Internet appears to consistently manifest the opposite and reveal itself as following Zipf’s Law, which Rudder concisely reduces to:

rank x number = constant (160)

Both the economy and the society of the Internet age are dominated by "superstars": companies such as Google and Facebook, which so far outstrip their rivals in search or social media that they might be called monopolies, along with celebrities, musical artists, and authors. Zipf's Law also seems to apply to dating sites, where a few profiles dominate the class of those viewed by potential partners. In the environment of a networked society, where invisibility is the common fate of almost all of us and success often hinges on increasing our own visibility, we are forced to turn ourselves towards "personal branding" and obsession over "Klout scores." It's not a new problem, but I wonder how much of all this effort at garnering attention is stealing time from the actual work that makes that attention worthwhile and long lasting.
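To make Rudder's little formula concrete, here is a minimal sketch in Python (the view counts below are invented for illustration, not taken from Rudder's data) of the check it implies: sort items by popularity and see whether rank times count stays roughly constant.

# A minimal sketch with made-up numbers: under Zipf's law, an item's rank
# multiplied by its count should stay roughly constant down the list.
profile_views = [10000, 5100, 3300, 2400, 2050, 1650, 1450, 1230, 1140, 990]  # hypothetical

for rank, views in enumerate(profile_views, start=1):
    print(f"rank {rank:2d}   views {views:6d}   rank*views {rank * views}")

# Each product hovers around 10,000: a handful of "superstar" profiles soak up
# most of the attention while the long tail goes effectively unseen.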

Rudder is uncomfortable with all this algorithmization while at the same time accepting its inevitability. He writes of the project:

Reduction is inescapable. Algorithms are crude. Computers are machines. Data science is trying to make sense of an analog world. It’s a by-product of the basic physical nature of the micro-chip: a chip is just a sequence of tiny gates.

From that microscopic reality an absolutism propagates up through the whole enterprise, until at the highest level you have the definitions, data types and classes essential to programming languages like C and JavaScript.  (217-218)

Thing is, for all his humility about the effectiveness of big data so far, and his admittedly limited ability to draw solid conclusions from the data of OkCupid, he seems to place undue trust in the ability of large corporations and the security state to succeed at the same project. Much deeper data mining and superior analytics, he thinks, separate his efforts from those of the really big boys. Rudder writes:

Analytics has in many ways surpassed the information itself as the real lever to pry. Cookies in your web browser and guys hacking for your credit card numbers get most of the press and are certainly the most acutely annoying of the data collectors. But they’ve taken hold of a small fraction of your life and for that they’ve had to put in all kinds of work. (227)

He compares them to Mike Myers's Dr. Evil holding the world hostage "for one million dollars"

… while the billions fly to the real masterminds, like Acxiom. These corporate data marketers, with reach into bank and credit card records, retail histories, and government filings like tax accounts, know stuff about human behavior that no academic researcher searching for patterns on some website ever could. Meanwhile the resources and expertise the national security apparatus brings to bear makes enterprise-level data mining look like Minesweeper. (227)

Yet do we really know this faith in big data isn't an illusion? What discernible effects, clearly traceable to the juggernauts of big data such as Acxiom, can we actually point to in the overall economy or even in consumer behavior? For us to believe in the power of data, shouldn't someone have to show us the data that it works, and not just the promise that it will transform the economy once it has achieved maximum penetration?

On that same score, what degree of faith should we put in the powers of big data when it comes to security? As far as I am aware no evidence has been produced that mass surveillance has prevented attacks- it didn't stop the Charlie Hebdo killers. Just as importantly, it seemingly hasn't prevented our public officials from being caught flat-footed and flabbergasted in the face of international events such as the revolution in Egypt or the war in Ukraine. And these latter big events would seem to be precisely the kinds of predictions big data should find relatively easy- monitoring broad public sentiment as expressed through social media and across telecommunications networks, and marrying that with inside knowledge of the machinations of the major political players at the storm center of events.

On this point of not yet having mastered the art of anticipating the future despite the mountains of data it was collecting, Anne Neuberger, Special Assistant to the NSA Director, gave a fascinating talk over at the Long Now Foundation in August last year. During a sometimes intense Q&A she had this exchange with one of the moderators, the Stanford professor Paul Saffo:

Saffo: With big data, as a friend likes to say, "perhaps the data haystack that the intelligence community has created has grown too big to ever find the needle in."

Neuberger : I think one of the reasons we talked about our desire to work with big data peers on analytics is because we certainly feel that we can glean far more value from the data that we have and potentially collect less data if we have a deeper understanding of how to better bring that together to develop more insights.

It's a strange admission from a spokesperson for the nation's premier cyber-intelligence agency that for their surveillance model to work they have to learn from the analytics of private sector big data companies whose models themselves are far from having proven their effectiveness.

Perhaps, then, Rudder should have extended his skepticism beyond the world of dating websites. As for me, I'll only know big data in the security sphere works when our politicians, Noah-like, seem unusually well prepared for a major crisis that the rest of us data-poor chumps didn't see coming a mile away.

 

Big Data as statistical masturbation

Infinite Book Tunnel

It's just possible that there is a looming crisis in yet another technological sector whose proponents have leaped too far ahead, and too soon, promising all kinds of things they are unable to deliver. It's strange how we keep ramming our head into this same damned wall, but this next crisis is perhaps more important than the deflated hype of other times- say, our overoptimism about the timeline for human space flight in the 1970's, or the "AI winter" of the 1980's, or the miracles that seemed just at our fingertips when we cracked the human genome while pulling riches out of the air during the dotcom boom- both of which brought us to a state of mania in the 1990's and early 2000's.

The thing that separates a potentially new crisis in the area of so-called "Big Data" from these earlier ones is that, almost literally overnight, we have rebuilt much of our economy and national security infrastructure- eroding our ancient right to privacy in the process- on its yet-to-be-proven premises. Now we are on the verge of changing not just the nature of the science upon which we all depend, but nearly every other field of human intellectual endeavor. And we've done, and are doing, this despite the fact that the most over-the-top promises of Big Data are about as epistemologically grounded as divining the future by looking at goat entrails.

Well, that might be a little unfair. Big Data is helpful, but the question is: helpful for what? A tool, as opposed to a supposedly magical talisman, has its limits, and understanding those limits should lead not to our jettisoning large-scale data analysis, but to asking what needs to be done to make these new capacities actually useful- rather than, like all forms of divination, merely comforting us with the idea that we can know the future and thus somehow exert control over it, when in reality both our foresight and our powers are much more limited.

Start with the issue of the digital economy. One model underlies most of the major Internet giants- Google, Facebook, and to a lesser extent Apple and Amazon- along with a whole set of behemoths few of us can name but that underlie everything we do online, especially data aggregators such as Acxiom. That model is to gather up essentially every last digital record we leave behind, many of them surrendered in exchange for "free" services, and to use this living archive to target advertisements at us.

It's not only that this model has provided the infrastructure for an unprecedented violation of privacy by the security state (more on which below); it's that there's no real evidence it even works.

Just anecdotally reflect on your own personal experience. If companies can very reasonably be said to know you better than your mother, your wife, or even you know yourself, why are the ads coming your way so damn obvious, and frankly even oblivious? In my own case, if I shop online for something, a hammer, a car, a pair of pants, I end up getting ads for that very same type of product weeks or even months after I have actually bought a version of the item I was searching for.

In large measure, the Internet is a giant market in which we can find products or information. Targeted ads can only really work if they are able to refract the information I am searching for in their marketed product's favor- if they lead me to buy something I would not otherwise have purchased. Derek Thompson, in the piece linked to above, points out that this problem is called endogeneity, or more colloquially: "hell, I was going to buy it anyway."

The problem with this economic model, though, goes even deeper than that. At least one-third of clicks on digital ads come not from human beings at all but from bots, which represent a way of gaming advertising revenue like something right out of a William Gibson novel.

Okay, so we have this economic model based on what at its root is really just spyware, and despite all the billions poured into it, we have no idea if it actually affects consumer behavior. That might be merely an annoying feature of the present rather than something to fret about were it not for the fact that this surveillance architecture has apparently been captured by the security services of the state. Their model is essentially just a darker version of its commercial forebear. Here the NSA, GCHQ et al hoover up as much of the Internet's information as they can get their hands on. Ostensibly, they're doing this so they can algorithmically sort through this data to identify threats.

In this case, we have just as many reasons to suspect that it doesn't really work, and though they claim it does, none of these intelligence agencies will actually show us their supposed evidence. The reasons to suspect that mass surveillance might suffer flaws similar to those of mass "personalized" marketing were excellently summed up in a recent article in the Financial Times by Zeynep Tufekci, when she wrote:

But the assertion that big data is “what it’s all about” when it comes to predicting rare events is not supported by what we know about how these methods work, and more importantly, don’t work. Analytics on massive datasets can be powerful in analysing and identifying broad patterns, or events that occur regularly and frequently, but are singularly unsuited to finding unpredictable, erratic, and rare needles in huge haystacks. In fact, the bigger the haystack — the more massive the scale and the wider the scope of the surveillance — the less suited these methods are to finding such exceptional events, and the more they may serve to direct resources and attention away from appropriate tools and methods.

I'll get to what's epistemologically wrong with using Big Data in the way the NSA uses it, which Tufekci rightly criticizes, in a moment, but on a personal rather than societal level, the biggest danger from getting the capabilities of Big Data wrong seems most likely to come through its potentially flawed use in medicine.

Here's the kind of hype we're in the midst of, as found in a recent article by Tim McDonnell in Nautilus:

We’re well on our way to a future where massive data processing will power not just medical research, but nearly every aspect of society. Viktor Mayer-Schönberger, a data scholar at the University of Oxford’s Oxford Internet Institute, says we are in the midst of a fundamental shift from a culture in which we make inferences about the world based on a small amount of information to one in which sweeping new insights are gleaned by steadily accumulating a virtually limitless amount of data on everything.

The value of collecting all the information, says Mayer-Schönberger, who published an exhaustive treatise entitled Big Data in March, is that “you don’t have to worry about biases or randomization. You don’t have to worry about having a hypothesis, a conclusion, beforehand.” If you look at everything, the landscape will become apparent and patterns will naturally emerge.

Here's the problem with this line of reasoning, a problem that I think is the same as- and shares the same solution with- the issue of mass surveillance by the NSA and other security agencies. It begins with the idea that "the landscape will become apparent and patterns will naturally emerge."

The flaw in this reasoning has to do with the way very large data sets work. One would think that sampling millions of people, which we're now able to do via ubiquitous monitoring, would offer enormous gains over the days when we were confined to population samples of only a few thousand, yet this isn't necessarily the case. The problem is that the more attributes you measure and the more hypotheses you test, the greater your chance of turning up false correlations.

Previously I had thought that surely this was a problem statisticians had either solved or were on the verge of solving. They haven't, at least according to the computer scientist Michael Jordan, who fears that we might be on the verge of a "Big Data winter" similar to the one AI went through in the 1980's and 90's. Say you had an extremely large database with multiple forms of metrics:

Now, if I start allowing myself to look at all of the combinations of these features—if you live in Beijing, and you ride bike to work, and you work in a certain job, and are a certain age—what’s the probability you will have a certain disease or you will like my advertisement? Now I’m getting combinations of millions of attributes, and the number of such combinations is exponential; it gets to be the size of the number of atoms in the universe.

Those are the hypotheses that I’m willing to consider. And for any particular database, I will find some combination of columns that will predict perfectly any outcome, just by chance alone. If I just look at all the people who have a heart attack and compare them to all the people that don’t have a heart attack, and I’m looking for combinations of the columns that predict heart attacks, I will find all kinds of spurious combinations of columns, because there are huge numbers of them.
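Jordan's warning is easy to reproduce on a laptop. Below is a minimal sketch (entirely synthetic data- no real patients, and the variable names are just placeholders) showing that when you measure enough random attributes on a small group, some attribute will correlate impressively with the outcome by pure chance:

# A minimal sketch of Jordan's point: with enough random columns, some column
# will "predict" the outcome purely by accident.
import numpy as np

rng = np.random.default_rng(42)
n_people, n_features = 50, 100_000                 # a small sample, a huge number of attributes
X = rng.standard_normal((n_people, n_features))    # random "attributes": pure noise
outcome = rng.standard_normal(n_people)            # the "heart attack" column: also pure noise

# Pearson correlation of every column with the outcome, computed in one pass
corrs = (X - X.mean(axis=0)).T @ (outcome - outcome.mean()) / (
    n_people * X.std(axis=0) * outcome.std())
print("strongest correlation found:", round(float(np.abs(corrs).max()), 2))

# Typically prints something like 0.6 or 0.7, even though no column carries any
# real signal: the more hypotheses you scan, the better the best accident looks.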

The actual mathematics of sorting spurious from potentially useful correlations is, in Jordan's estimation, far from worked out:

We are just getting this engineering science assembled. We have many ideas that come from hundreds of years of statistics and computer science. And we’re working on putting them together, making them scalable. A lot of the ideas for controlling what are called familywise errors, where I have many hypotheses and want to know my error rate, have emerged over the last 30 years. But many of them haven’t been studied computationally. It’s hard mathematics and engineering to work all this out, and it will take time.

It’s not a year or two. It will take decades to get right. We are still learning how to do big data well.
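The "familywise error" control Jordan mentions is, in its textbook form, brutally simple, and that simplicity hints at why it breaks down at scale. A small continuation of the sketch above (again purely illustrative):

# The classical familywise correction (Bonferroni) just divides the
# significance threshold by the number of hypotheses being tested.
alpha, n_hypotheses = 0.05, 100_000
print("per-test threshold:", alpha / n_hypotheses)   # 5e-07

# With millions or billions of attribute combinations the threshold becomes so
# severe that real effects risk being thrown out along with the spurious ones-
# part of why, as Jordan says, making these ideas scale is still unfinished work.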

Alright, now that's a problem. As you'll no doubt notice, the danger of false correlation that Jordan identifies as a problem for science is almost exactly the same critique Tufekci made against the mass surveillance of the NSA. That is, unless the NSA and its cohorts have actually solved the statistical and engineering problems Jordan identifies and haven't told us, all the biggest data haystack in the world is going to yield is too many leads to follow, most of them false, and many of which will drain resources from actual public protection. Perhaps equally troubling: if the security services have solved these statistical and engineering problems, how much will be wasted in research funding, and how many lives will be lost, because medical scientists were kept from the tools that would have empowered their research?

At least part of the solution to this will be remembering why we developed statistical analysis in the first place. Herbert I. Weisberg with his recent book Willful Ignorance: The Mismeasure of Uncertainty has provided a wonderful, short primer on the subject.

Statistical evidence, according to Weisberg, was first introduced to medical research back in the 1950's as a protection against exaggerated claims of efficacy and widespread quackery. Since then we have come to take the p-value threshold of .05 almost as the truth itself. Weisberg's book is really a plea to clinicians to know their patients and not rely almost exclusively on statistical analyses of "average" patients to help those in their care make life-altering decisions about which medicines to take or procedures to undergo. Weisberg thinks that personalized medicine will over the long term solve these problems, and while I won't go into my doubts about that here, I do think that in the experience of the physician he identifies the root of the solution to our Big Data problem.

Rather than think of Big Data as somehow providing us with a picture of reality "naturally emerging," as Mayer-Schönberger, quoted above, suggested, we should start to view it as a way to easily and cheaply give us a metric for the potential validity of a hypothesis. And it's not only the first step that should continue to be guided by old-fashioned science rather than computer-driven numerology, but the remaining steps as well: a positive signal followed up by actual scientists and other researchers exercising such now-rusting skills as running experiments and building theories to explain their results. Big Data, if done right, won't end up making science a form of information processing, but will instead be used as a primary tool for keeping scientists from going down a cul-de-sac.

The same principle applied to mass surveillance means a return to old-school human intelligence, even if it now needs to be empowered by new digital tools. Rather than Big Data being used to hoover up and analyze all potential leads, espionage and counterterrorism should become more targeted and based on efforts to understand and penetrate threat groups themselves. The move back to human intelligence and towards more targeted surveillance, rather than the mass data grab symbolized by Bluffdale, may be a reality forced on the NSA et al by events. In part due to the Snowden revelations, terrorist and criminal networks have already abandoned the non-secure public networks the rest of us use. Mass surveillance has lost its raison d'être.

At least in terms of science and medicine, I recently saw a version of how Big Data done right might work. In an article for Quanta and Scientific American, Veronique Greenwood discussed two recent efforts by researchers to use Big Data to find new understandings of, and treatments for, disease.

The physicist (not biologist) Stefan Thurner has created a network model of comorbid diseases, trying to uncover the hidden relationships between different, seemingly unrelated medical conditions. What I find interesting about this is that it gives us a new way of understanding disease, breaking free of hermetically sealed categories that may blind us to underlying mechanisms shared by medical conditions. I find this especially pressing when it comes to mental health, where the kind of symptom listing found in the DSM- the Bible for mental health care professionals- has never resulted in a causative model of how conditions such as anxiety or depression actually work, and is based on an antiquated separation between the mind and the body, not to mention the social and environmental factors that all give shape to mental health.

Even more interesting, from Greenwood's piece, are the efforts by Joseph Loscalzo of Harvard Medical School to come up with a whole new model of disease, one that looks beyond genome associations to map out the molecular networks underlying disease, isolating the statistical correlation between a particular variant of such a map and a disease. This relationship between genes and proteins correlated with a disease is something Loscalzo calls a "disease module."

Thurner describes the underlying methodology behind his, and by implication Loscalzo's, efforts to Greenwood this way:

"Once you draw a network, you are drawing hypotheses on a piece of paper," Thurner said. "You are saying, 'Wow, look, I didn't know these two things were related. Why could they be? Or is it just that our statistical threshold did not kick it out?'" In network analysis, you first validate your analysis by checking that it recreates connections that people have already identified in whatever system you are studying. After that, Thurner said, "the ones that did not exist before, those are new hypotheses. Then the work really starts."

It's in the next steps- the testing of hypotheses, the development of a stable model- where the most important work really lies. Like any intellectual fad, Big Data has its element of truth. We can now much more easily distill large, and sometimes previously invisible, patterns from the deluge of information in which we are now drowning. This has potentially huge benefits for science, medicine, social policy, and law enforcement.

The problem comes from thinking that we are at the point where our data-crunching algorithms can do the work for us and are about to replace human beings and their skills at investigating problems deeply and in the real world. The danger there would be thinking that knowledge could work like self-gratification: a mere thing of the mind, without all the hard work, compromises, and conflict between expectations and reality that goes into a real relationship. Ironically, this was a truth perhaps discovered first not by scientists or intelligence agencies but by online dating services. To that strange story, next time….

Edward O. Wilson’s Dull Paradise

Garden of Eden

In all sincerity I have to admit that there is much I admire about the biologist Edward O. Wilson. I can only pray that I not only live into my 80's, but still possess the intellectual stamina to write what are at least thought-provoking books when I get there. I also hope I'll still have the balls to write a book with a title like that of Wilson's latest- The Meaning of Human Existence- for publishing under an appellation like that would mean I wasn't afraid of disappointing my readers, and Wilson did indeed leave me wondering if the whole thing was worth the effort.

Nevertheless, I think Wilson opened up an important alternative future that is seldom discussed here- namely, what if we aimed not at a supposedly brighter, so-called post-human future but to keep things the same? Well, there would be some changes: no extremes of human poverty, along with the restoration of much of the natural environment to its pre-industrial-revolution health. Still, we ourselves would aim to stay largely the same human beings who emerged some 100,000 years ago- flaws and all.

Wilson calls this admittedly conservative vision paradise, and I've seen his eyes light up like a child contemplating Christmas when he uses the word in interviews. Another point that might be of interest to this audience is whom he largely blames for keeping us from entering this Shangri-la: archaic religions and their "creation stories."

I have to admit that I find the idea of trying to preserve humanity as it is a valid alternative future. After all, "evolve or die" isn't really the way nature works. Typically the "goal" of evolution is to find a "design" that works and then stick with it for as long as possible. Since we now dominate the entire planet and our numbers far outstrip those of any other large animal, it seems hard to argue that we need a major, and likely risky, upgrade. Here's Wilson making the case:

While I am at it, I hereby cast a vote for existential conservatism, the preservation of biological human nature as a sacred trust. We are doing very well in terms of science and technology. Let’s agree to keep that up, and move both along even faster. But let’s also promote the humanities, that which makes us human, and not use science to mess around with the wellspring of this, the absolute and unique potential of the human future. (60)

It's an idea that rings true to my inner Edmund Burke, and sounds simple, doesn't it? And on reflection it would be, if human beings were bison, blue whales, or gray wolves. Indeed, I think Wilson has drawn this idea of human preservation from his lifetime of very laudable work on biodiversity. Yet had he reflected upon why efforts at preservation fail, when they do, he would have realized that the problem isn't the wildlife itself, but the human beings who don't share the same value system and pull in the opposite direction. That is, humans, though we are certainly animals, aren't wildlife, in the sense that we take destiny into our own hands, even if doing so is sometimes for the worse. Wilson seems to think that it's quite a short step from asserting the "preservation of biological human nature as a sacred trust" as a goal to gaining universal assent to it. The problem is that there is no widespread agreement over what human nature even is, and then, even if you had such agreement, how in the world do you go about enforcing it for the minority who refuse to adhere to it? How far should we be willing to go to prevent persons from willingly crossing some line that defines what a human being is? And where exactly is that line in the first place? Wilson thinks we're near the end of the argument when we have only just taken our seat at the debate.

The strange thing is that the very people who would likely lean towards the kind of biological conservatism Wilson hopes "we" will ultimately choose are the sorts of traditionally religious persons he thinks are at the root of most of our conflicts. Here again is Wilson:

Religious warriors are not an anomaly. It is a mistake to classify believers of particular religious and dogmatic religion-like ideologies into two groups, moderates versus extremists. The true cause of hatred and religious violence is faith versus faith, an outward expression of the ancient instinct of tribalism. Faith is the one thing that makes otherwise good people do bad things. (154)

For Wilson, a religious group "defines itself foremost by its creation story, the supernatural narrative that explains how human beings came into existence." (151) The trouble with this is that it's not even superficially true. Three of the world's religions that have been busy killing one another over the last millennium- Judaism, Christianity, and Islam- all have the same creation story. Wilson knows a hell of a lot more about ants and evolution than he does about religion or even world history. And while religion is certainly the root of some of our tribalism, which I agree is the deep and perennial human problem, it's far from the only source, and very few of our tribal conflicts have anything to do with the fight between human beings over our origins in the deep past. How about class conflict? Or racial conflict? Or nationalist conflicts in which the two sides profess not only the exact same religion but the exact same sect- such as the current fight between the two Orthodox Christian nations of Russia and Ukraine? If China and Japan someday go to war it will not be a horrifying replay of the Scopes Monkey Trial.

For a book called The Meaning of Human Existence Wilson’s ideas have very little explanatory power when it comes to anything other than our biological origins, and some quite questionable ideas regarding the origins of our capacity for violence. That is, the book lacks depth, and because of this I found it, well… dull.

Nowhere was I more hopeful that Wilson would have something interesting and different to say than when it came to the question of extraterrestrial life. Here we have one of the world's greatest living biologists, a man who has spent a lifetime studying ants as an alternative route to the kind of eusociality possessed only by humans, the naked mole rat, and a handful of insects. Here was a scientist who was clearly passionate about preserving the amazing diversity of life on our small planet.

Yet Wilson's E.T.s are land dwellers, relatively large, biologically audiovisual; "their head is distinct, big, and located up front" (115); they have moderate teeth and jaws; they have high social intelligence; and they possess "a small number of free locomotory appendages, levered for maximum strength with stiff internal or external skeletons composed of hinged segments (as by human elbows and knees), and with at least one pair of which are terminated by digits with pulpy tips used for sensitive touch and grasping." (116)

In other words they are little green men.

What I had hoped was that Wilson would use his deep knowledge of biology to imagine alternative paths to technological civilization. Couldn't he have imagined a hive-like species that evolves in tandem with its own technological advancement? Or maybe some larger form of insect-like animal which doesn't just have an instinctive repertoire of things that it builds, but constantly improves upon its own designs and explores the space of possible technologies? Or aquatic species that develop something like civilization through the use of sea-herding and ocean farming? How about species that communicate not audio-visually but through electrical impulses, the way our computers do?

After all, nature on earth is pretty weird. There's not just us, but termites that build air-conditioned skyscrapers (at least from their point of view), whales with culturally specific songs, and strange little things that eat and excrete electrons. One might guess that life elsewhere will be even weirder. Perhaps my problem with The Meaning of Human Existence is that it just wasn't weird enough to capture not just the worlds of tomorrow and elsewhere, but the one we're living in right now.

 

There are two paths to superlongevity: only one of them is good

Memento Mori Ivories

Looked at in the longer historical perspective, we have already achieved something our ancestors would consider superlongevity. In the UK life expectancy at birth averaged around 37 in 1700. It is roughly 81 today. The extent to which this is a reflection of decreased child mortality versus an increase in the survival rate of the elderly I'll get to a little later, but for now, just try to get your head around the fact that we have managed to more than double the life expectancy of human beings in roughly three centuries.

By itself the gain we have made in longevity is pretty incredible, but we have also managed to redefine what it means to be old. A person in 1830 was old at forty not just because of averages, but by the condition of his body. A revealing game to play is to find pictures of adults from the 19th century and try to guess their ages. My bet is that you, like me, will consistently estimate the people in these photos to be older than they actually were when the picture was taken. This isn't a reflection of their lack of Botox and Photoshop so much as the fact that they were missing the miracle of modern dentistry, and were felled, or at least weathered, by diseases which we now consider mere nuisances. If I were my current age in 1830 I would be missing most of my teeth, and the pneumonia I caught a few years back would surely have killed me, it having been a major cause of death in the age of Darwin and Dickens.

Sixty or even seventy year olds today are probably in the state of health that forty year olds were in the 19th century. In other words, we've increased the healthspan, not just the lifespan. Sixty really is the new forty, though what is important is how you define "new." Yet get past eighty in the early 21st century and you're almost right back in the world where our ancestors lived: experiencing the debilitations of old age is the fate of those of us lucky enough to survive through the pleasures of youth and middle age. The disability of the old is part of the tragic aspect of life, and as always when it comes to giving poetic shape to our comic/tragic existence, the Greeks got to the essence of old age with their myth of Tithonus.

Tithonus was a youth who had the ill fortune of inspiring the love of the goddess of the dawn, Eos. (Love affairs between gods and mortals never end well.) Eos asked Zeus to grant the youth immortality, which he did, but, of course, not in the way Eos intended. Tithonus would never die, but he would also continue to age, becoming not merely old and decrepit but eventually shriveling away into a grasshopper hugging a room's corner. It is best not to ask the gods for anything.

Despite our successes, those of us lucky enough to live into our 7th and 8th decades still end up like poor old Tithonus. The deep lesson of the ancient myth still holds- longevity is not worth as much as we might hope if not also combined with the health of youth, and despite all of our advances, we are essentially still in Tithonus’ world.

Yet perhaps not for long. At least if one believes the story told by Jonathan Weiner in his excellent book Long for This World. I learned much about our quest for long life and eternal youth from Long for This World, both its religious and cultural history and the trajectory and state of its science. I never knew that Jewish folklore had a magical city called Luz, where the death unleashed in Eden was prevented from entering, and which existed until all its inhabitants became so bored that they walked out from its walls and were struck down by the Angel of Death waiting eagerly outside.

I did not know that Descartes, who had helped unleash the scientific revolution, thought that gains in knowledge were growing so fast that he would live to be 1,000. (He died in 1650 at 53.) I did not realize that two other key figures in the history of science, Roger and Francis Bacon (no relation), thought that science would restore us to the prelapsarian knowledge held before the fall, which would allow us to live forever, or the depth to which very different Chinese traditions had no guilt at all about human immortality and pursued the goal with all sorts of elixirs and practices, none of which, of course, worked. I was especially taken with the story of how Pennsylvania's most famous son- Benjamin Franklin- wanted to be "pickled" and awoken a century later.

Reviewing the past, when even ancient Egyptian hieroglyphs offer up recipes for "guaranteed to work" wrinkle creams, shows us just how deeply human the longing for agelessness is. It wasn't invented by Madison Avenue or Dr. Oz, even if the ancients' attempts to find a fountain of youth seem no less silly than many of our own. The question, I suppose, is the one that most risks the accusation that one is a fool: is this time truly different? Are we, out of all the generations that have come before us believing they had found the route to human "immortality" (and every generation since the rise of modern science has had those who thought so), actually the ones who will achieve this dream?

Long for This World is at its heart a serious attempt to grapple with this question, and it tries to give us a clear picture of longevity science built around the theoretical biologist Aubrey de Grey, who will either go down in history as a courageous prophet of a new era of superlongevity, or as just another figure in our long history of thinking biological immortality is at our fingertips when all we are seeing is a mirage.

One thing we have on our ancestors who chased this dream is that we know much, much more about the biology of aging. Darwinian evolution allowed us to conceive non-poetic theories on the origins of death. In the 1880's the German biologist August Weismann, in his essay "Upon the Eternal Duration of Life," provided a kind of survival-of-the-fittest argument for death and aging. Even an ageless creature, Weismann argued, would over time have to absorb multiple shocks and eventually end up disabled; the longer something lives, the more crippled and worn out it becomes. Thus, it is in the interest of the species that death exists to clear the world of the disabled- very damned German, the whole thing.

Just after World War II the biologist Peter Medawar challenged Weismann's view. For Medawar, if you look at any species, selective pressures are really only operating when the organism is young. Those who survive long enough to breed are the only ones that really count when it comes to natural selection. Like versions of James Dean or Marilyn Monroe, nature is just fine if we exit the world in the bloom of youth- as long, that is, as we have passed on our genes.

In other words, healthful longevity has not really been something that natural selection has been selecting most organisms for, and because of this it hasn't been selecting against the bad things that can happen to old organisms either, as we're finding when, by saving people from heart attacks in their 50's, we destine them to die of diseases that were rare or unknown in the past, like Alzheimer's. In a sense we're the victims of natural selection not caring about the health of those past reproductive age, or their longevity.

Well, this is only partly true. Organisms that live in conditions where survival in youth is more secure end up with stretched longevity for their size. Some bats can live decades when similar sized mice have a lifespan of only a couple of years. Tortoises can live for well over a century while alligators of the same weight live from 30-50 years.

Stretching healthful longevity is also something that occurs when you starve an animal. We've known for decades that lifespan (in other animals at least) can be increased through caloric restriction. Although the mechanism is unclear, the Darwinian logic is not. Under conditions of starvation it's a bad idea to breed, and the body seems to respond by slowing development, waiting for the return of food and a good time to mate.

Thus, there is no such thing as a death clock; lifespan is malleable and can be changed if we just learn how to work the dials. We should have known this from our historical experience over the last two hundred years, in which we doubled the human lifespan, but now we know that nature itself does it all the time- and not, as we do, by addressing the symptoms of aging, but by resetting the clock of life itself.

We might find it easy to reset our own aging clock if there weren't multiple factors that play a role in its ticking. Aubrey de Grey has identified seven, the most important of which (excluding cancerous mutations) are probably the accumulation of "junk" within cells and the development of harmful "cross-links" between cells. The strange thing about these is that they are not something that suddenly appears when we are actually "old"; they are there all along, only reaching levels at which they become noticeable and start to cause problems after many decades. We start dying the day we are born.

As we learn in Long for This World, there is hope that someday we may be able to effectively intervene against all these causes of aging. Every year the science needed to do so advances. Yet as Aubrey de Grey has indicated, the greatest threat to this quest for biological immortality is something we are all too familiar with – cancer.

The possibility of developing cancer emerges from the very way our cells work. Over a lifetime our trillions of cells replicate themselves a mind-bogglingly high number of times. It is almost impossible that every copying error will be caught before it takes on a life of its own and becomes a cancerous growth. Increasing lifespan only increases the amount of time over which such copying errors can occur.
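To get a rough sense of the scale involved (these are ballpark orders of magnitude of my own, offered for illustration, not figures from Weiner's book): the human body contains on the order of $10^{13}$ cells, and over a lifetime something like $10^{16}$ cell divisions take place. Even a tiny assumed chance that any one division leaves behind a dangerous, uncaught error multiplies out to an enormous number of opportunities:

\[
\underbrace{10^{16}}_{\text{lifetime cell divisions}} \times \underbrace{10^{-9}}_{\text{assumed chance of a risky uncaught error per division}} \approx 10^{7}\ \text{such errors}
\]

The arithmetic is only a sketch, but it makes plain why adding decades of life, and hence more rounds of division, mechanically raises the odds that some error eventually slips past every safeguard.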

It's in Aubrey de Grey's solution to this last and most serious of superlongevity's medical hurdles that Weiner's faith in the sense of that project breaks down, as does mine. De Grey's cure for cancer goes by the name of WILT- whole-body interdiction of the lengthening of telomeres. A great many of the cancers that afflict human beings achieve their deadly replication without limit by taking control of the telomerase gene. De Grey's solution is to strip our cells of that gene, something that, even if successful in preventing cancerous growths, would also leave us unable to replenish our red and white blood cells. To allow us to live without that capacity, de Grey proposes regular infusions of stem cells. What this would leave us with is a life of constant chemotherapy and invasive medical interventions just to keep us alive; in other words, a life in which even healthy people would relate to their bodies through, and be kept alive by, medical interventions of a kind now experienced only by the terminally ill.

I think what shocks Weiner about this last step in SENS is that it underscores just how radical the medical requirements of engineering superlongevity might become. It's one thing to talk about strengthening the cell's junk collector, the lysosome, by adding an enzyme or through some genetic tweak; it's another to talk about removing the very cells and structures which define human biology, blood cells and platelets, which have always been essential for human life and health.

Yet WILT struck me with somewhat different issues and questions. Here's how I have come to understand it. For simplicity's sake, we might be said to have two models of healthcare, both of which have contributed to the gains we have seen in human health and longevity since 1800. As is often noted, a good deal of this gain in longevity was a consequence of improving childhood mortality: having fewer and fewer people die at the age of five drastically improves the average lifespan. We made these gains largely through public health: things like drastically improved sanitation, potable water, vaccinations, and, in the 20th century, antibiotics.

This set of improvements in human health was cheap, "easy", and either comprised of general environmental conditions or administered at most annually, like the flu shot. These features allowed this first model of healthcare to be distributed broadly across the population, leading to increased longevity by saving the lives primarily of the young. In part these improvements, and above all the development of antibiotics, also allowed longevity increases at the older end of the scale, which, although less pronounced than improvements in child mortality, are nonetheless very real. This is where my second model of healthcare comes in, and it includes everything from open heart surgery, to chemo and radiation treatments for cancer, to lifelong prescription drugs to treat chronic conditions.

As opposed to the first model, the second one is expensive, relatively difficult, and varies greatly among different segments of the population. My Amoxicillin and Larry Page’s Amoxicillin are the same, but the medical care we would receive to treat something like cancer would be radically different.

We actually are making greater strides in the battle against cancer than at any time since Nixon declared war on the scourge back in the 1970s. A new round of immunotherapy drugs is proving so successful against a host of different cancers that John LaMattina, former head of research and development for Pfizer, has stated that "We are heading towards a world where cancer will become a chronic disease in much the same way as we have seen with diabetes and HIV."

The problem is the cost, which can run up to $150,000 per year. The new drugs are so expensive that the NHS has reduced the amount it is willing to spend on them by 30 percent. Here we are running up against the limits of the second model of healthcare, a limit that at some point will force societies to choose between providing life-preserving care for all, or only for those rich enough to afford it.

If the superlongevity project is going to be a progressive project, it seems essential to me that it look like the first model of healthcare rather than the second. Otherwise it will either leave us with divergences in longevity within and between societies that make us long nostalgically for the "narrowness" of the current gap between today's poorest and richest societies, or it will bankrupt countries that seek to extend increased longevity to everyone.

This would require a U-turn from the trajectory of healthcare today, which is dominated and distorted by the lucrative world of the second model. As an example of this distortion: the physicist Paul Davies is working on a new approach to cancer that involves attempting to attack the disease with viruses. If successful, this would be a good example of model one. Using viruses (in a way the reverse of the new immunotherapy drugs) to treat cancer would likely be much cheaper than current approaches involving radiation, chemotherapy, and surgery, because viruses can self-replicate after being engineered rather than needing to be expensively and painstakingly constructed in drug labs. The problem is that it's extremely difficult for Davies to get funding for such research precisely because there isn't that much money to be made in it.

In an interview about his research, Davies compared his plight to how drug companies treat aspirin. There’s good evidence to show that plain old aspirin might be an effective preventative against cancer. Sadly, it’s almost impossible to find funding for large scale studies of aspirin’s efficacy in preventing cancer because you can buy a bottle of the stuff for a little over a buck, and what multi-billion dollar pharmaceutical company could justify profit margins as low as that?

The distortions of the second model are even more in evidence when it comes to antibiotics. Here is one of the few places where the second model of healthcare is dependent upon the first. As a chilling article by Maryn McKenna drives home, we are in danger of letting the second model lead to the nightmare of a sudden, sharp reversal of the health and longevity gains of the last century.

We are only now waking up to the full danger implicit in antibiotic resistance. We've so overprescribed these miracle treatments, both to ourselves and to our poor farm animals, whom we treat as mere machines and "grow" in hellish sanitary conditions, that bacteria have evolved to no longer be treatable with the suite of antibiotics we have, which are now a generation old, or older. If you don't think this is a big deal, think about what it means to live in a world where a toothache can kill you and surgeries and chemotherapy can no longer be performed. A long winter of antibiotic resistance would mean that many of our dreams of superlongevity this century would be moot. It would mean many of us might die quite young from common illnesses, or from the surgical and treatment procedures that have, combined, given us the longevity we have now.

Again, part of the reason we don't have alternatives to legacy antibiotics is that pharmaceutical companies don't see any profit in them as opposed to, say, Viagra. But the other part of the reason for this failure is just as interesting. It's that we have overtreated ourselves because we find the discomfort of being even mildly sick for a few days unbearable. It's also because we want nature, in this case our farm animals, to function like machines. Mechanical functioning means regularity, predictability, standardization and efficiency, and we've had to so distort the living conditions, food, and even genetics of the animals we raise that they would not survive without our constant medical interventions, including antibiotics.

There is a great deal of financial incentive to build solutions to human medical problems around interminable treatments rather than once-and-done cures or interventions needed only periodically. Constant consumption and obsolescence guarantee revenue streams. Not too long ago, Danny Hillis, whom I otherwise have the deepest respect for, gave an interview on, among other things, proteomics, which, for my purposes here, essentially means the minute analysis of bodily processes with the purpose of intervening the moment things begin to go wrong- to catch diseases before they cause us to exhibit symptoms. An audience member asked a thought-provoking question, which, when followed up by the interviewer Alexis Madrigal, seemed to leave the otherwise loquacious Hillis stumped: how do you draw the line between illness without symptoms and what the body just naturally does? The danger is that you might end up turning everyone, including the healthy, into "patients" and "profit centers".

We already have a world where seemingly healthy people need to constantly monitor and medicate themselves just to stay alive, where the body seems to be in a state of almost constant, secret revolt. This is the world as diabetics often experience it, and it's not a pretty one. What I wonder is whether, in a world in which everyone sees themselves as permanently sick- as in the process of dying- and in need of medical intervention to counter this sickness, we will still remember the joy of considering ourselves healthy. This is medicine becoming subsumed under our current model of consumption.

Everyone, it seems, has woken up to the fact that consumer electronics has the perfect consumption-sustaining model. If things quickly grow "old" to the point where they no longer work with everything else you own, or become so rare that you are unable to find replacement parts, then you are forced to upgrade, if only to ensure that your stuff still works. Like the automotive industry, healthcare now seems to be embracing technological obsolescence as a road to greater profitability. Insurance companies seem poised to use devices like the Apple Watch to sort and monitor customers, but that is likely only the beginning.

Let me give you my nightmare scenario for a world of superlongevity. It's a world largely bereft of children, where our relationship to our bodies has become something like the one we have with our smartphones: we are constantly faced with the obsolescence of the hardware, the chemicals, the nano-machines and the genetically engineered organisms under our own skins, and are in near continuous need of upgrades to keep us alive. It is a world where those too poor to be in the throes of this cycle of upgrades followed by obsolescence followed by further upgrades are considered a burden and disposable, in the same way August Weismann viewed the disabled in his day. It's a world where the rich have brought capitalism into the body itself, an individual life preserved because it serves as a perpetual "profit center".

The other path would be for superlongevity to be pursued along my first model of healthcare, focusing its efforts on understanding the genetic underpinnings of aging by looking at miracles such as the bowhead whale, which can live for two centuries and gets cancer no more often than we do even though it has trillions more cells than us. It would focus on interventions that were cheap, one-time or periodic, and could be spread quickly through populations. This would be a progressive superlongevity. If successful, rather than bolster the system built around the second model of healthcare, it would bankrupt much of it, for it would represent a true cure rather than a treatment for many of the diseases that ail us.

Yet even superlongevity pursued to reflect the demands of justice confronts a moral dilemma that seems to be at the heart of any superlongevity project. The morally problematic feature of superlongevity pursued along the second model of healthcare is that it risks giving long life only to the few. Troublingly, superlongevity pursued along the first model of healthcare ends up in a similar place, robbing future generations of both human beings and other lifeforms of the possibility of existing. For it is very difficult to see how, if a near-future generation gains the ability to live indefinitely, this new state could exist side by side with the birth of new people, or how a world of many "immortals" of the highly consuming type of creature we are is compatible with the survival of the diversity of the natural world.

I see no real solution to this dilemma, though perhaps, as elsewhere, the limits of nature will provide one for us: we will discover some bound to the length of human life which is compatible with new people being given the opportunity to be born and experience the sheer joy and wonder of being alive, a bound that would also allow the other creatures with whom we share our planet to continue to experience these joys and wonders as well. Thankfully, there is probably some distance between current human lifespans and such a bound, and thus the most important thing we can do for now is try to ensure that research into superlongevity has the question of sustainable equity as its ethical lodestar.

 Image: Memento Mori, South Netherlands, c. 1500-1525, the Thomson collection

Think Time is Speeding Up? Here’s How to Slow It!

seven stages in man's life

One of the weirder things about human beings' perception of time is that our subjective clocks are so off. A day spent in our dreary cubicles can seem to crawl like an Amazonian sloth, while our weekends pass by as fast as a chameleon's tongue. Most dreadful of all, once we pass into middle age, time seems to transform itself from a lumbering steam train heaving us through clearly delineated seasons and years into a Japanese bullet train unstoppably hurtling us towards death, with decades passing us by in a blur.

Wondering about time is a habit of the middle aged, as sure a sign of having passed the clock-blind golden age of youth as the proverbial convertible or Harley. If my soon-to-be 93-year-old grandmother is any indication, the old, like the young, aren't much taken aback by the speed of time's passage. Instead, time seems to take on the viscosity of New England molasses, the days gently flowing down life's drain.

Up until now, I didn't think there might be any empirical evidence to back up such colloquial observations, just the anecdotes passed around the holiday dinner table like turkey stuffing and cranberry sauce: "Can you believe it's almost Christmas again?", "Where did the year go?" Lucky for me, I now know what happened to time, or how I've been confuddled all this time into thinking something had happened to it. I know because I've read the psychologist and BBC science broadcaster Claudia Hammond's excellent little book on the psychology of time: Time Warped: Unlocking the Mysteries of Time Perception.

If you've ever asked yourself why time seems to crawl when you're watching the clock and want it to go faster, or why time appears to speed up in the face of an event you're dreading like a speech, this is the book for you. But Hammond's Time Warped goes much deeper than that, and exposes us to the reality of what it would be like if some of our common dreams about controlling time actually came true- if we could indeed have "perfect memory" or, as everyone keeps reminding us to, "live in the present". In addition to all that, the nature of our ambiguous relationship with time that she reveals raises interesting questions for those hoping we wrest a great deal more of it from nature.

Hammond doesn't really discuss the physics of time, or more precisely, the fact that much of modern physics views time as an illusion akin to past imaginary entities like the ether or phlogiston. The fact that something so essential to our human self-understanding is considered by the bedrock of the sciences to be a brain-induced mirage has led to a rebellion by at least one prominent physicist, Lee Smolin, but he's almost a lone voice in the quest to restore time. Nor is Hammond all that interested in the philosophy of time, its history, or what time actually is. You won't find here any detailed discussion of how to define time; it's more like Supreme Court Justice Potter Stewart's definition of pornography: "you know it when you see it." Hammond is, though, on firm scientific ground discussing her main subject, the human perception of time, which, whatever its underlying reality or unreality, we find nearly impossible to live without.

Evolution might have kept things simple and given the human brain just one clock, a steady Big Ben of a thing to accurately mark the time. Instead, Hammond draws our attention to the fact that we seem to have multiple clocks within us, all running at once.

We seem to be extremely good at gauging the passage of seconds or minutes without counting. We also have a twenty-four-hour clock that runs with the same length but independently of the alternating light and darkness of our spinning earth, as Hammond shows was proven by Michel Siffre who, in the name of science and youthful stupidity (he was 23), braved two months in a dark cave meticulously recording his bodily rhythms. What Siffre proved is that, sun or no sun, our bodies follow twenty-four-hour cycles. The turning of the earth has bored its traces deep into us, which we fight against using the miracle of electric lights, and, if the popularity of sleeping pills is any indication, so often lose.

For some of us, there seems to be an inbuilt ability and need to see longer stretches of time spatially, in the form of ovals, circles, or zig-zags, rather than the linear timelines one sees in history books. One day, not long before I read Hammond's book, I found myself scribbling, thinking about how far into the future my great-grandchildren would live, should my now small daughters and their children ever have children of their own.

For whatever reason, I didn’t draw out the decades as blocks of a line but like a set of steps. I thought nothing of it until I read Time Warped and saw that this was a common way for people to picture decades, though many do so in three dimensions, rather than my paltry two. Some people also associate days with color- a kind of synesthesia that isn’t just playful imagination, but is often stable across an individual’s life.

There is no real way to talk about how human beings experience time without discussing memory. What I found mind-blowing about Time Warped was just how many of what we consider the flaws of our memory end up being ambiguities we would be better off not having resolved.

Take the fact that our memories are so fallible and incomplete. One would think things would be so much better if our brains could record everything and keep it for playback on a sort of neuronal Blu-ray. For certain situations, like criminal trials, this would solve a whole host of problems, but elsewhere we should watch what we wish for. As Hammond shows, there are people who can remember every piece of minutia, down to the socks they wore on a particular day decades earlier, but a moment's reflection leads to the conclusion that such natural-born mnemonic prodigies fail to dominate creative fields, the sciences, or anything else, and such was the case long before we had Google to remember things for us.

There are people who believe that the path to ensuring they are not unraveled by the flow of time is to record and document everything about themselves and all of their experiences. Digital technology has doubtless made such a quest easier, but Hammond leads us to wonder whether or not the effort to record our every action and keystroke is quixotic. Who will actually take the time to look at all this stuff? How many times, she asks, have any of us sat down to watch our wedding video?

People obsessed with recording every detail of their lives are very likely motivated by the idea that it is their memories that make them who they are. Part of our deep fear of developing Alzheimer's probably originates in this idea that the loss of our memories would constitute the loss of our self. Yet somehow the loss of memories (and the damage of Alzheimer's runs much deeper than the loss of memories) does not seem to rob those who experience such losses of what others recognize as their long-standing personality.

Strangely, our not-too-reliable memories, when combined with our ability to mentally time-travel into the past, Hammond believes, give rise to our ability to imagine futures that do not yet exist. They allow us to mix and match different scenes from our memory to come up with whole new ones we anticipate will happen, or even ones that could never happen.

The idea that our imagination might owe its existence to our faulty memory put me in mind of the recent findings of Laurie Santos of the Comparative Cognition Laboratory at Yale. Santos has shown that human beings can be less, not more, rational than animals when it comes to certain tasks, due to our proclivity for imitation. Monkeys will solve a puzzle by themselves and aren't thrown off by other monkeys doing something different, whereas a human being will copy a wrong procedure done by another human being even when able to work the puzzle out independently. Santos speculates that such irrationality may give us the plasticity necessary to innovate, even if much of this innovation tends not to work. It seems it is our flaws rather than our superiority that have so favored us above our animal kin.

What, though, of the big problem, the one we all face- the frightening speed through which we are running through our short lives? There is, it seems, some wisdom in the adage that the best way to approach time is in focusing on the present, even if you’re like me and watching another TED talk on the subject by Pico Iyer is enough to make you hurl. If the future is a realm of anxiety and the past a realm of regret, as long as one is not in pain, the present moment is a sort of refuge. Hammond believes that thinking about the future, even if we so often get it wrong by, for instance, thinking that our future self will have more time, money, or will-power, is the default mode of the brain.

Any meditative tradition worth its salt tries to free us from this future-obsessed mode and connect us more fully with the present moment of our existence, our breath, its rhythms, the people we care about. There are ways we can achieve this focus on the present without meditation, but they often involve contemplation of our own impending death, which is why soldiers amid the suffering of war, the terminally ill, and the very old like my Nanna can often unhitch themselves from the train pulling our thinking off to the future.

Focusing on the present is one way to not only slow the pace of time, but to infuse the short time we have here with the meaning it deserves. Knowing that my small children will outgrow my silliness is the best way I have found to appreciate their laughter now.

Present focus does not, however, solve the central paradox of time for the middle aged, namely, why it seems to move so much faster as we get older, for it is doubtful we were all that more capable of savoring the moment as teenagers than as adults. Our commonsense explanation of time speeding up as we age typically has to do with proportionality, as in "a year for a five year old is 1/5 of their life, but for a forty year old it is merely 1/40." Hammond shows this proportionality theory to be wrong on its face, for, if it were true, the days for a middle-aged person would be quite literally buzzing by in comparison to the days of their younger selves.
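A minimal back-of-the-envelope version of why that follows (my own gloss, not Hammond's wording): if the felt length of a year really scaled with the fraction of life it represents, then

\[
\frac{\text{felt length of a year at age } 5}{\text{felt length of a year at age } 40} = \frac{1/5}{1/40} = 8,
\]

so an afternoon at five would have to drag on roughly eight times longer than the same stretch of clock time at forty. Nothing in our actual moment-to-moment experience, or in the differences Hammond reports between young and old, comes anywhere close to an eightfold gap.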

Only a moment's reflection should show us that the proportionality theory of time's seeming quickening as we age can't be true. Think back to your school days, waiting impatiently for the 3:00 pm bell to ring: was it really much longer than the time you spend now stuck to your chair in some meaningless business meeting? There are some differences in the gauging of how much time has passed between the young and the old, yet these are nowhere near large enough to explain the differences in the subjective experience of how fast time is passing between those two groups. So if proportionality theory doesn't explain the speeding up of time for the middle aged- what does?

When thinking about duration, the thing we need to keep in mind, as the work of Daniel Kahneman has shown, is that we have not one but two "selves": an experiencing self and a remembering self. Having two selves does a number on our ability to make decisions with our future in mind. The experiencing self wants us to eat the cookie now, because it's the remembering self that will regret it later. It also skews our sense of the past.

Our sense of the duration of time is experienced differently by these two separate selves. Waiting in a long line feels like forever while you're there, but unless something particularly interesting happened during your wait, the remembered experience feels like it happened in the blink of an eye. Yet a wonderful or frightening experience, like a first kiss or a car accident, though it seems to fly by while we're in it, usually cuts its grooves deep enough into our memory that when we reflect upon it, it seems to have taken a very long time to unfold.

Hammond's explanation for why youth seems stretched out in time compared to middle age rests on what she calls the "reminiscence bump" and the "holiday paradox". Adolescence and young adulthood are filled with so many firsts that they leave a deep impression on our memory, and this "thickness" of memory leads our remembering self to conclude time must have been going more slowly back in the heady days of our youth- the reminiscence bump. If you want to make your middle-aged days seem longer, then you need to fill them up with exciting and new things, which is the reason, Hammond speculates, that holidays full of new experiences seem fast when we're in them, but stretched out on reflection- the holiday paradox. She wonders, however, whether the better option is just not to worry so much about time's speed and to rest when we need it rather than constantly chase after new memories.

Given the interest of the audience here in extending the human lifespan, I wonder what the implications of such discoveries about time might be for that project. A comedy could certainly be written in which we have doubled the length of human life, and end up also doubling all those things we now find banal about time. Would human beings who lived well beyond their hundreds be subject to meetings that stretched out for days and weeks? Would traffic jams in which you spent a week in your car be normal?

Perhaps we might even want to focus on our ability to manipulate our sense of time's duration as an easier path towards a sort of longevity. Imagine a world where love affairs could stretch out for centuries and pain and boredom are reduced to a blink, or a future that has "time retreats" (like today's religious retreats) where one goes away for a week that has been neurologically altered to feel as if it lasted decades or longer. We might use the same sorts of time manipulation to punish people for heinous crimes, so that a 600-year sentence actually means something. One might object that such induced experiences of slow time aren't real, but then again neither are most versions of digital immortality, or even, as Hammond showed us, our subjective experience of time itself.

All of this talk of manipulating our sense of time as a road to longevity is just playful speculation on my part. What should be clear is that any move towards changing the human body so that it lives much longer than it does now is probably also going to have to grapple with and transform our psychological notions of time and the society we have built around our strange capacity to warp it.

 

Summa Technologiae, or why the trouble with science is religion

Soviet Space Art 2

Before I read Lee Billings' piece in the fall issue of Nautilus, I had no idea that in addition to being one of the world's greatest science fiction writers, Stanislaw Lem had written what became a forgotten book, a tome intended to be the overarching text of the technological age: his 1966 Summa Technologiae.

I won't go into detail on Billings' thought-provoking piece; suffice it to say that he leads us to question whether we have lost something of Lem's depth with our current batch of Silicon Valley singularitarians, who have largely repackaged ideas first fleshed out by the Polish novelist. Billings also leads us to wonder whether our focus on the either fantastic or terrifying aspects of the future is causing us to forget the human suffering that is here, right now, at our feet. I encourage you to check the piece out for yourself. In addition to Billings, there's also an excellent review of the Summa Technologiae by Giulio Prisco, here.

Rather than look at either Billings' or Prisco's piece, I will try to lay out some of the ideas found in Lem's 1966 Summa Technologiae, a book at once dense almost to the point of incomprehensibility, yet full of insights we should pay attention to as the world Lem imagined unfolds before our eyes, or at least seems to be doing so for some of us.

The first thing that struck me when reading the Summa Technologiae was that it isn't really our version of Aquinas' Summa Theologica, from which Lem took his tract's name. In the 13th-century Summa Theologica you find the voice of a speaker supremely confident in both the rationality of the world and his own understanding of it. Aquinas, of course, didn't really possess such a comprehensive understanding, but it is perhaps odd that the more we have learned the more confused we have become, and Lem's Summa Technologiae reflects some of this modern confusion.

Unlike Aquinas, Lem is in a sense blind to our destination, and what he is trying to do is probe into the blackness of the future to sense the contours of the ultimate fate of our scientific and technological civilization. Lem seeks to identify the roadblocks we will likely encounter if we are to continue our technological advancement- roadblocks that are important to identify because we have yet to find any evidence, in the form of extraterrestrial civilizations, that they can actually be overcome.

The fundamental aspect of technological advancement is that it has become both its own reward and a trap. We have become absolutely dependent on scientific and technological progress so long as population growth continues, for if technological advancement were to stumble while population continued to increase, living standards would fall precipitously.

The problem Lem sees is that science is growing faster than the population, and in order to keep up with it we would eventually have to turn all human beings into scientists, and then some. Science advances by exploring the whole of the possibility space – we can’t predict which of its explorations will produce something useful in advance, or which avenues will prove fruitful in terms of our understanding.  It’s as if the territory has become so large we at some point will no longer have enough people to explore all of it, and thus will have to narrow the number of regions we look at. This narrowing puts us at risk of not finding the keys to El Dorado, so to speak, because we will not have asked and answered the right questions. We are approaching what Lem calls “the information peak.”

The absolutist nature of the scientific endeavor itself, our need to explore all avenues or risk losing something essential, for Lem, will inevitably lead to our attempt to create artificial intelligence. We will pursue AI to act as what he calls an “intelligence amplifier” though Lem is thinking of AI in a whole new way where computational processes mimic those done in nature, like the physics “calculations” of a tennis genius like Roger Federer, or my 4 year old learning how to throw a football.

Lem, through the power of his imagination alone, seemed to anticipate both some of the problems we would encounter when trying to build AI and the ways we would likely try to escape them. For all their seeming intelligence our machines lack the behavioral complexity of even lower animals, let alone human intelligence, and one of the main roads away from these limitations is getting silicon intelligence to be more like that of carbon-based creatures- not so much "brain-like" as "biology-like".

Way back in the 1960s, Lem thought we would need to learn from biological systems if we wanted to really get to something like artificial intelligence- think, for example, of how much more bang you get for your buck when you contrast DNA with a computer program. A computer program gets you some interesting or useful behavior or process done by a machine; DNA, well… it gets you programmers.

The somewhat uncomfortable fact about designing machine intelligence around biology-like processes is that it might end up working a lot like the human brain does- through a process largely invisible to its possessor. How did I catch that ball? Damned if I know, or damned if I know if one is asking what internal process led me to catch the ball.

Just going about our way in the world we make “calculations” that would make the world’s fastest supercomputers green with envy, were they actually sophisticated enough to experience envy. We do all the incredible things we do without having any solid idea, either scientific or internal, about how it is we are doing them. Lem thinks “real” AI will be like that. It will be able to out think us because it will be a species of natural intelligence like our own, and just like our own thinking, we will soon become hard pressed to explain how exactly it arrived at some conclusion or decision. Truly intelligent AI will end up being a “black box”.

Our increasingly complex societies might need such AI’s to serve the role of what Lem calls “Homostats”- machines that run the complex interactions of society. The dilemma appears the minute we surrender the responsibility to make our decisions to a homostat. For then the possibility opens that we will not be able to know how a homostat arrived at its decision, or what a homostat is actually trying to accomplish when it informs us that we should do something, or even, what goal lies behind its actions.

It's quite a fascinating view: that science might be epistemologically insatiable in this way; that at some point it will grow beyond the limits of human intelligence, whether of our sheer numbers or our mental capacity; that the only way out of this which still includes technological progress will be to develop "naturalistic" AI; and that very soon our societies will be so complicated that they will require the use of such AIs to manage them.

I am not sure if the view is right, but to my eyes at least it's got much more meat on its bones than current singularitarian arguments about "exponential trends", which, unlike Lem, take little account of the possibility that the scientific wave we've been riding for five or so centuries will run into a wall we will find impossible to crest.

Yet perhaps the most intriguing ideas in Lem's Summa Technologiae are those imaginative leaps he throws at the reader almost as asides, with little reference to his overall theory of technological development. Take his metaphor of the mathematician as a sort of crazy "tailor".

He makes clothes but does not know for whom. He does not think about it. Some of his clothes are spherical without any opening for legs or feet…

The tailor is only concerned with one thing: he wants them to be consistent.

He takes his clothes to a massive warehouse. If we could enter it, we would discover clothes that could fit an octopus, others fit trees, butterflies, or people.

The great majority of his clothes would not find any application. (171-172)

This is Lem's clever way of explaining the so-called "unreasonable effectiveness of mathematics", a view that is the opposite of that of current-day platonists such as Max Tegmark, who hold all mathematical structures to be real even if we are unable to find actual examples of them in our universe.

Lem thinks math is more like a ladder. It allows you to climb high enough to see a house, or even a mountain, but shouldn’t be confused with the house or the mountain itself. Indeed, most of the time, as his tailor example is meant to show, the ladder mathematics builds isn’t good for climbing at all. This is why Lem thinks we will need to learn “nature’s language” rather than go on using our invented language of mathematics if we want to continue to progress.

For all its originality and freshness, the Summa Technologiae is not without its problems. Once we start imagining that we can play the role of creator it seems we are unable to escape the same moral failings the religious would have once held against God. Here is Lem imagining a far future when we could create a simulated universe inhabited by virtual people who think they are real.

Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything” considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess. (291-292)

If Lem is ultimately proven correct, and we arrive at this destination where we create virtual universes with sentient inhabitants whom we keep blind to their true nature, then science will have ended where it began- with the demon imagined by Descartes.

The scientific revolution commenced when it was realized that we could trust neither our own senses nor our traditions to tell us the truth about the world- the most famous example of which was the discovery that the earth, contrary to all perception and history, traveled around the sun and not the other way round. The first generation of scientists, who emerged in a world in which God had "hidden his face", couldn't help but understand this new view of nature as the creator's elaborate puzzle that we would have to painfully reconstruct, piece by piece, hidden as it was beneath the illusion of our own "fallen" senses and the false post-edenic world we had built around them.

Yet a curious new fear arises with this: What if the creator had designed the world so that it could never be understood? Descartes, at the very beginning of science, reconceptualized the creator as an omnipotent demon.

I will suppose, then, not that Deity, who is sovereignly good and the fountain of truth, but that some malignant demon, who is at once exceedingly potent and deceitful, has employed all his artifice to deceive me; I will suppose that the sky, the air, the earth, colours, figures, sounds, and all external things are nothing better than the illusions of dreams, by means of which this being has laid snares for my credulity.

Descartes' escape from this dreaded absence of intelligibility was his famous "cogito ergo sum", the certainty a reasoning being has in its own existence. The entire world could be an illusion, but the fact of one's own consciousness was something not even an all-powerful demon would be able to take away.

What Lem's resurrection of the demon imagined by Descartes tells us is just how deeply religious thinking still lies at the heart of science. The idea has become secularized, and part of our mythology of science fiction, but it's still there; indeed, it's the only scientifically fashionable form of creationism around. As proof, not even the most secular among us are likely to bat an eye at experiments to test whether the universe is an "infinite hologram". And if such experiments bear fruit, they will point either to a designer that allowed us to know our reality or to one that didn't care to "bar the exits"; but the crazy thing, if one takes Lem and Descartes seriously, is that their creator/demon is ultimately as ineffable and unroutable as the old ideas of God from which it descended. For any failure to prove the hypothesis that we are living in a "simulation" can be brushed aside on the basis that whatever has brought about this simulation doesn't really want us to know. It's only a short step from there to unraveling the whole concept of truth at the heart of science. Like any garden-variety creationists, we end up seeing the proofs of science as part of God's (or whatever we're now calling God) infinitely clever ruse.

The idea that there might be an unseeable creator behind it all is just one of the religious myths buried deeply in science, a myth that traces its origins less to the day-to-day mundane experiments and theory building of actual scientists than to a certain type of scientific philosophy or science fiction that has constructed a cosmology around what science is for and what science means. It is the mythology in which the singularitarians and others who followed Lem remain trapped, often to the detriment of both technology and science. What is a shame is that these are myths that Lem, even with his expansive powers of imagination, did not dream widely enough to see beyond.

Mary Shelley's other horror story: Lessons for Super-pandemics

The Last Man

Back in the early 19th century a novel was written that tells the story of humanity’s downfall in the 21st century.  Our undoing was the consequence of a disease that originates in the developing world and radiates outward eventually spreading into North America, East Asia, and ultimately Europe. The disease proves unstoppable causing the collapse of civilization, our greatest cities becoming grave sites of ruin. For all the reader is left to know, not one human being survives the pandemic.

We best know the woman who wrote The Last Man in 1825 as the author of Frankenstein, but it seems Mary Shelley had more than one dark tale up her sleeve. Yet, though the destruction wrought by disease in The Last Man is pessimistic in the extreme, we might learn some lessons from the novel that would help us understand not only the very deadly, if less than absolute, ruination of the pandemic of the moment- Ebola- but, even more, the dangers from super-pandemics, which are more likely to emerge from within humanity than from a still quite dangerous nature herself.

The Last Man tells the story of the son of a nobleman who had lost his fortune to gambling, Lionel Verney, who becomes the sole remaining man on earth as humanity is destroyed by a plague in the 21st century. Do not read the novel hoping to get a glimpse of Shelley's view of what our 21st-century world would be like, for it looks almost exactly like the early 19th century, with people still getting around on horseback and little in the way of future technology.

My guess is that Shelley’s story is set in the “far future” in order to avoid any political heat for a novel in which England has become a republic. Surely, if she meant it to take place in a plausible 21st century, and had somehow missed the implications of the industrial revolution, there would at least have been some imagined political differences between that world and her own. The same Greco-Turkish conflict that raged in the 1820’s rages on in Shelley’s imagined 21st century with only changes in the borders of the war. Indeed, the novel is more of a reflection and critique on the Romantic movement, with Lord Byron making his appearance in the form of the character Lord Raymond, and Verney himself a not all that concealed version of Mary Shelley’s deceased husband Percy.

In The Last Man Shelley sets out to undermine all the myths of the Romantic movement: myths of the innocence of nature, the redemptive power of revolutionary politics, and the transformative power of art. While of historical interest, such debates offer us little in terms of the meaning of her story for us today. That meaning, I think, can be found in the state of epidemiology, which on the very eve of Shelley's story was about to undergo a revolution, a transformation that would occur in parallel with humanity's assertion of general sovereignty over nature, the consequence of the scientific and industrial revolutions.

Reading The Last Man one needs to be aware that Shelley has no idea of how disease actually works. In the 1820s the leading theory of what caused diseases was the miasma theory, which held that they were caused by "bad air". When Shelley wrote her story, miasma theory was only beginning to be challenged by what we now call the "germ theory" of disease through the work of scientists such as Agostino Bassi. This despite the fact that microscopic organisms had been observed since the 1600s and their potential role in disease had been suggested as early as 1546 by the Italian polymath Girolamo Fracastoro. Shelley's characters thus do things that seem crazy in the light of germ theory; most especially, they make no effort to isolate the infected.

Well, some do. In The Last Man it is only the bad characters who try to run away or isolate themselves from the sick. The supremely tragic element in the novel is how what is most important to us, our small intimate circles, which we cling to despite everything, can be done away with by nature's cruel shrug. Shelley's tale is one of extreme pessimism not because it portrays the unraveling of human civilization, and turns our monuments into ruins and, eventually, dust, but because of how it portrays a world where everyone we love most dearly leaves us almost overnight. The novel gives one an intimate portrait of what it's like to watch one's beloved family and friends vanish, a reality Mary Shelley was all too well acquainted with, having lost her husband and three children.

Here we can find the lesson to take for the Ebola pandemic, for the deaths we are witnessing today in West Africa are in a very real sense a measure of people's humanity, as if nature, perversely, had set out to target those who are acting in the most humane way. For, absent modern medical infrastructure, the only ones left to care for the infected are the families of the sick themselves.

This is how New York Times journalist Helene Cooper explained it to interviewer Terry Gross of Fresh Air:

COOPER: That’s the hardest thing, I think, about the disease is it does make pariahs out of the people who are sick. And it – you know, we’re telling the family people – the family members of people with Ebola to not try to help them or to make sure that they put on gloves. And, you know, that’s, you know, easier – I think that can be easier said than done. A lot of people are wearing gloves, but for a lot of people it’s really hard.

One of the things – two days after I got to Liberia, Thomas Eric Duncan sort of happened in the U.S. And, you know, I was getting all these questions from people in the U.S. about why did he, you know, help his neighbor? Why did he pick up that woman who was sick? Which is believed to be how we got it. And I set out trying to do this story about the whole touching thing because the whole culture of touching had gone away in Liberia, which was a difficult thing to understand. I knew the only way I could do that story was to talk to Ebola survivors because then you can ask people who actually contracted the disease because they touched somebody else, you know, why did you touch somebody? It’s not like you didn’t know that, you know, this was an Ebola – that, you know, you were putting yourself in danger. So why did you do it?

And in all the cases, the people I talked to there were, like, family members. There was this one woman, Patience, who contracted it from her daughter who – 2-year-old daughter, Rebecca – who had gotten it from a nanny. And Rebecca was crying, and she was vomiting and, you know, feverish, and her mom picked her up. When you’re seeing a familiar face that you love so much, it’s really, really hard to – I think it’s a physical – you have to physically – to physically restrain yourself from touching them is not as easy as we might think.

The thing we need to do to ensure naturally occurring pandemics such as Ebola cause the minimum of human suffering is to provide support for developing countries lacking the health infrastructure to respond to or avoid being the vectors for infectious diseases. We especially need to address the low number of doctors per capita found in some countries through, for example, providing doctor training programs. In a globalized world being our brother’s keeper is no longer just a matter of moral necessity, but helps preserve our own health as well.

A super-pandemic of the kind imagined by Mary Shelley, though, is an evolutionary near impossibility. It is highly unlikely that nature by itself would come up with a disease so devastating that we could not stop it before it killed us in the billions. Having co-evolved with microscopic life, some human being's immune system, somewhere, anticipates even nature's most devious tricks. We are also in the Anthropocene now, able to understand, anticipate, and respond to the deadliest games nature plays. Sadly, however, the 21st century could experience, as Shelley imagined, the world's first super-pandemic- only the source of such a disaster wouldn't be nature; it would be us.

One might think I am referencing bio-terrorism, yet the disturbing thing is that the return address for any super-pandemic is just as likely to be stupid and irresponsible scientists as deliberate bioterrorism. Such is the indication from what happened in 2011 when the Dutch scientist Ron Fouchier deliberately turned the H5N1 bird flu into a form that could potentially spread human-to-human. As reported by Laurie Garrett:

Fouchier told the scientists in Malta that his Dutch group, funded by the U.S. National Institutes of Health, had “mutated the hell out of H5N1,” turning the bird flu into something that could infect ferrets (laboratory stand-ins for human beings). And then, Fouchier continued, he had done “something really, really stupid,” swabbing the noses of the infected ferrets and using the gathered viruses to infect another round of animals, repeating the process until he had a form of H5N1 that could spread through the air from one mammal to another.

Genetic research has become so cheap and easy that feats nature would have great difficulty achieving through evolutionary means, which once required national labs and huge budgets, can now be accomplished by run-of-the-mill scientists in simple laboratories, or even by high school students. The danger here is that scientists will create something so novel that evolution has not prepared any of us for it, and that through stupidity and lack of oversight it will escape from the lab and spread through human populations.

News of the crazy Dutch experiments with H5N1 was followed by revelations of mind-bogglingly lax safety procedures around pandemic diseases at federal laboratories, where smallpox virus had been forgotten in a storage area and pathogens were passed around in Ziploc bags.

The U.S. government, at least, has woken up to the danger, imposing a moratorium on such research until its true risks and rewards can be understood and better safety standards established. This has already, and will necessarily, negatively impact potentially beneficial research. Yet what else, one might ask, should the government do given the potential risks? What will ultimately be needed is an international treaty to monitor, regulate, and sometimes even ban certain kinds of research on pandemic diseases.

In terms of all the existential risks facing humanity in the 21st century, man-made super-pandemics are the one with the shortest path between reality and nightmare. The risk from runaway superintelligence remains theoretical, based upon hypothetical technology that, for all we know, may never exist. The danger of runaway global warming is real, but we are unlikely to feel its full impact this century. Meanwhile, the technologies to create a super-pandemic are in large part already here, with the key uncertainty being how we might control such a dangerous potential if, as current trends suggest, the ability to manipulate and design organisms at the genetic level continues to both increase and democratize. Strangely enough, Mary Shelley's warning in her Frankenstein about the dangers of science used for the wrong purposes has the greatest likelihood of coming true in the form of her Last Man.

 

Plato and the Physicist: A Multicosmic Love Story

Life as a Braid of Space Time

 

So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it, namely, how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes, it also threw up a whole host of new questions. This beautifully written and thought provoking book made me wonder about the future of science and the scientific method, the limits to human knowledge, and the scientific, philosophical and moral meaning of various ideas of the multiverse.

I should start though with my initial question of how Tegmark manages to fit something very much like Plato’s Theory of the Forms into the seemingly chaotic landscape of multiverse theories. If you remember back to your college philosophy classes, you might recall something of Plato’s idea of forms, which in its very basics boils down to this: Plato thought there was a world of perfect, eternally existing ideas of which our own supposedly real world was little more than a shadow. The idea sounds out there until you realize that Plato was thinking like a mathematician. We should remember that over the walls of Plato’s Academy was written “Let no man ignorant of geometry enter here”, and for the Greeks geometry was the essence of mathematics. Plato aimed to create a school of philosophical mathematicians much more than he hoped to turn philosophers into a sect of moral geometers.

Probably almost all mathematicians and physicists hold to some version of platonism, which means that they think mathematical structures are something discovered rather than a form of language invented by human beings. Non-mathematicians, myself very much included, often have trouble understanding this, but a simple example from Plato himself might help clarify.

When the Greeks played around with shapes for long enough they discovered things. And here we really should say discover, because they had no idea shapes had these properties until they stumbled upon them through play. Plato's dialogue Meno gave us the most famous demonstration of the discovery, rather than invention, of mathematical structures. Socrates asks a "slave boy" (we should take this to be the modern-day equivalent of the man off the street) to figure out the side of a square whose area is double that of a square with a side of 2. The key, as Socrates leads the boy to see, is that the diagonal of the original square can serve as the side of the doubled square, allowing him to construct it and confirm that its area really is twice as large. Socrates presents the boy's measurement epiphany as the recovery of knowledge from a past life.
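For anyone who wants the numbers behind the construction, here is a minimal sketch of the arithmetic (my own gloss, not a passage from the dialogue):

\[
\text{side } 2 \;\Rightarrow\; \text{area } 2^{2} = 4, \qquad \text{diagonal } = \sqrt{2^{2}+2^{2}} = 2\sqrt{2},
\]
\[
\text{square built on the diagonal: area } \left(2\sqrt{2}\right)^{2} = 8 = 2 \times 4.
\]

The square whose side is the diagonal of the original has exactly double the area, which is precisely the fact the boy is led to "recollect" rather than invent.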

The big gap between Plato and modern platonists is that the ancient philosopher thought the natural world was a flawed copy of the crystalline purity of the mathematics of thought. Contrast that with Newton, who saw the hand of God himself in nature's calculable regularities. The deeper the scientists of the modern age probed with their new mathematical tools, the more nature appeared, as Galileo said, "a book written in the language of mathematics". For the moderns, mathematical structures and natural structures became almost one and the same. The Spanish filmmaker and graphic designer Cristóbal Vila has a beautiful short over at AEON reflecting precisely this view.

It’s that “almost” that Tegmark has leapt over with his Mathematical Universe Hypothesis (MUH). The essence of the MUH is not only that mathematical structures have an independent existence, or that nature is a book written in mathematics, but that nature is a mathematical structure, and just as all mathematical structures exist independent of whether we have discovered them or not, all logically coherent universes exist whether or not we have discovered their structures. This is platonism with a capital P, the latter half explaining how the MUH intersects with the idea of the multiverse.

One of the helpful things Tegmark does in his book is to provide an easy-to-understand set of levels for the different ideas that there is more than one universe.

Level I: Beyond our cosmological horizon

A Level I multiverse is the easiest for me to understand. Within the lifetime of people still alive, our universe was held to be no bigger than our galaxy. Before that, people thought the entirety of what exists consisted of nothing but our solar system, so it is no wonder they thought humanity was the center of creation’s story. As of right now the observable universe has a radius of around 46 billion light years, a distance greater than light could have traveled in the 13.8 billion years since the Big Bang, because space itself has been expanding. Yet why should we think this observable horizon constitutes everything, when such an assumption has never proved true in the past? The Level I multiverse holds that there are entire other universes outside the limit of what we can observe.
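To make the numbers a little more concrete (a rough sketch using the commonly cited figures, nothing exact): light traveling for the age of the universe covers only

$$c \times 13.8\ \text{billion years} \approx 13.8\ \text{billion light years},$$

yet the sources of the oldest light we can see now sit roughly 46 billion light years away, because the space that light crossed has kept stretching behind it. The observable horizon is a statement about how far things are now, not about how long the light has been traveling.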

Level II: Universes with different physical constants

The Level II multiverse again makes intuitive sense to me. If one assumes that the Big Bang was not the first or the last of its kind, and if one assumes there are whole other universes, potentially an infinite number of them, why assume that ours is the only way a universe could be organized? Indeed, having a variety of physical constants to choose from would make the fine-tuning of our own universe make more sense.

Level III: Many-worlds interpretation of quantum mechanics

This is where I start to get lost, or at least this particular doppelganger of me starts to get lost. Here we find Hugh Everett’s interpretation of quantum unpredictability. Rather than Schrödinger’s cat being pushed out of a superposition of alive and dead states when you open the box, exposing the feline causes the universe to split: in one universe you have a live cat, and in another a dead one. It makes me dizzy just thinking about it; just imagine the poor cat- wait, I am the cat!

Level IV: Ultimate ensemble

Here we have Tegmark’s model itself, where every universe that can be represented as a logically consistent mathematical structure is said to actually exist. In such a multiverse, when you roll a six-sided die there end up being six universes, one for each possible outcome, but there is no universe where you have rolled “1 and not 1”, and so on. If a universe’s mathematical structure can be described, then that universe can be said to exist, there being, in Tegmark’s view, no difference between such a mathematical structure and a universe.

I had previously thought the idea of the multiverse was a way to give scale to the shadow of our ignorance and expand our horizon in space and time. As mentioned, we had once thought all that is was only as big as our solar system and merely thousands of years old. By the 19th century the universe had expanded to the size of our galaxy and the past had grown to as much as 400 million years. By the end of the 20th century we knew there were at least 100 billion galaxies in the universe and that its age was 13.7 billion years. There is no reason to believe that we have grasped the full totality of existence, that the universe beyond our observable horizon isn’t even bigger, and the past deeper. There is “no sign on the Big Bang saying ‘this happened only once’”, as someone whose attribution I cannot find once cleverly said.

Ideas of the multiverse seemed to explain the odd fact that the universe appears fine-tuned to provide the conditions for life, Martin Rees’s “six numbers” such as Epsilon (ε), the strength of the force binding nucleons into nuclei. If you have a large enough sample of universes, then the fact that some universes are friendly for life starts to make more sense. The problem, I think, comes in when you realize just how large this sample size has to be to get you to fine-tuning: somewhere on the order of 10^200. What this means is that you’ve proposed the existence of a very, very large or even infinite number of universes, which as far as we know are unobservable, to explain essentially six values. If this is science, it is radically different from the science we’ve known since Galileo dropped cannonballs off of the Leaning Tower of Pisa.
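A back-of-the-envelope way to see where a number like 10^200 comes from (treating it purely as a stand-in for however improbable life-friendly constants really are): if a randomly chosen universe is life-friendly with probability $p$, you expect about $Np$ friendly universes in an ensemble of $N$, so you need roughly

$$N \gtrsim \frac{1}{p} \sim 10^{200}$$

universes before even one like ours stops being surprising. All of the explanatory work is being done by the sheer size of the ensemble, which is precisely what makes the move feel so different from dropping cannonballs.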

For whatever reason, rather than solidify my belief in the possibility of the multiverse, or convert me to platonism, Tegmark’s book left me with a whole host of new questions, which is what good books do. The problem is my damned doppelgangers, who can be found not only at the crazy quantum Level III, but at the levels I thought were a preserve of Copernican Mediocrity, Levels I and II. As Tegmark puts it:

The only difference between Level I and Level III is where your doppelgängers reside.

Yet, to my non-physicist eyes, the different levels of multiverse sure seem distinct. Level III seems to violate Copernican Mediocrity, with observers and actors able to call into being whole new timelines with even the most minutiae-laden of their choices, whereas Levels I and II simply posit that a universe sufficiently large and sufficiently extended in time would allow for repeat performances down to the smallest detail. Perhaps the universe is just smaller than that, or less extended in time, or there is some sort of kink whereby, when what the late Stephen Jay Gould called the “life tape” is replayed, you can never get the same results twice.

Still, our intuitions about reality have often been proven wrong, so no theory can be discounted on the basis of intuitive doubts alone. There are other reasons, however, why we might exercise caution when it comes to multiverse theories, namely their potential risk to the scientific endeavor itself. The fact that we can never directly observe parts of the multiverse that are not our own means that we would have to step away from falsifiability as the criterion for scientific truth. The physicist Sean Carroll argues that falsifiability is a weak criterion; what makes a theory scientific is that it is “direct” (says something definite about how reality works) and “empirical”, by which he no longer means the Popperian notion of falsifiability, but its ability to explain the world. He writes:

Consider the multiverse.

If the universe we see around us is the only one there is, the vacuum energy is a unique constant of nature, and we are faced with the problem of explaining it. If, on the other hand, we live in a multiverse, the vacuum energy could be completely different in different regions, and an explanation suggests itself immediately: in regions where the vacuum energy is much larger, conditions are inhospitable to the existence of life. There is therefore a selection effect, and we should predict a small value of the vacuum energy. Indeed, using this precise reasoning, Steven Weinberg did predict the value of the vacuum energy, long before the acceleration of the universe was discovered.

We can’t (as far as we know) observe other parts of the multiverse directly. But their existence has a dramatic effect on how we account for the data in the part of the multiverse we do observe.

One could look at Tegmark’s MUH and Carroll’s comments as a broadening of our scientific and imaginative horizons, and the continuation of our powers to explain into realms beyond what human beings will ever observe. The idea of a 22nd-century version of Plato’s Academy using amazingly powerful computers to explore all the potential universes à la Tegmark’s MUH is an attractive future to me. Yet, given how reliant we are on science and the technology that grows from it, and given the role of science in our society in establishing the consensus view of what our shared physical reality actually is, we need to be cognizant and careful of what such a changed understanding of science might actually mean.

The physicist, George Ellis, for one, thinks the multiverse hypothesis, and not just Tegmark’s version of it, opens the door to all sorts of pseudoscience such as Intelligent Design. After all, the explanation that the laws and structure of our universe can be understood only by reference to something “outside” is the essence of explanations from design as well, and just like the multiverse, cannot be falsified.

One might think that the multiverse was a victory of theorizing over real world science, but I think Sean Carroll is essentially right when he defends the multiverse theory by saying:

 Science is not merely armchair theorizing; it’s about explaining the world we see, developing models that fit the data.

It’s the use of the word “model” here rather than “theory” that is telling. For a model is a type of representation of something, whereas a theory constitutes an attempt at a coherent, self-contained explanation. If the move from theories to models were only happening in physics, then we might say that this had something to do merely with physics as a science rather than science in general. But we see this move all over the place.

Among neuroscientists, for example, there is no widely agreed upon theory of how SSRIs work, even though they’ve been around for a generation, and there’s more. In a widely debated speech Noam Chomsky argued that current statistical models in AI were bringing us no closer to the goal of AGI or the understanding of human intelligence because they lacked any coherent theory of how intelligence works. As Yarden Katz wrote for The Atlantic:

Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.

Likewise, the field of systems biology and especially genomic science is built not on theory but on our ability to scan enormous databases of genetic information looking for meaningful correlations. The new field of social physics is based on the idea that correlations of human behavior can be used as governance and management tools, and business already believes that statistical correlation is worth enough to spend billions on and build an economy around.

Will this work as well as the science we’ve had for the last five centuries? It’s too early to tell, but it certainly constitutes a big change for science and the rest of us who depend upon it. This shouldn’t be taken as an unqualified defense of theory- for if theory was working then we wouldn’t be pursuing this new route of data correlation whatever the powers of our computers. Yet, those who are pushing this new model of science should be aware of its uncertain success, and its dangers.

The primary danger I can see from these new sorts of science, and this includes the MUH, is that they challenge the role of science in establishing the consensus reality which we all must agree upon. Anyone who remembers their Thomas Kuhn can recall that what makes science distinct from almost any system of knowledge we’ve had before is that it both enforces a consensus view of physical reality, beyond which an individual’s view of the world can be considered “unreal”, and provides a mechanism by which this consensus reality can be challenged and, where the challenge is successful, overturned.

With multiverse theories we are approaching what David Eagleman calls Possibilianism: the exploration of the full range of ways existence might be structured that are compatible with the findings of science and rationally coherent. I find this interesting as a philosophical and even spiritual project, but it isn’t science, at least as we’ve understood science since the beginning of the modern world. Declaring the project to be scientific blurs the line between science and speculation, and might allow people to claim the kind of understanding over uncertainty that makes politics and consensus decisions regarding acute needs of the present, such as global warming, or projected needs of the future, impossible.

Let me try to clarify this. I found it very important that in Our Mathematical Universe Tegmark tried to tackle the problem of existential risks facing the human future. He touches upon everything from climate change, to asteroid impacts, to pandemics to rogue AI. Yet, the very idea that there are multiple versions of us out there, and that our own future is determined seems to rob these issues of their urgency. In an “infinity” of predetermined worlds we destroy ourselves, just as in an “infinity” of predetermined worlds we do what needs to be done. There is no need to urge us forward because, puppet-like, we are destined to do one thing or the other on this particular timeline.

Morally and emotionally, how is what happens in this version of the universe in the future all that different from what happens in another universe? Persons in those parallel universes, our children, parents, spouses, and even ourselves, are even closer to us than the people of the future on our own timeline. According to the deterministic models of the multiverse, the worlds of these others are outside of our influence, and both the expansion and the contraction of our ethical horizon leave us in the same state of moral paralysis. Given this, I will hold off on believing in the multiverse, at least on the doppelganger scale of Levels I and II, and especially Levels III and IV, until it actually becomes established as a scientific fact, which it is not at the moment, and given our limitations perhaps never will be, even if it is ultimately true.

All that said, I greatly enjoyed Tegmark’s book; it was nothing if not thought provoking. Nor would I say it left me with little but despair, for in one section he imagined a Spinoza-like version of eternity that will last me a lifetime, or perhaps I should say beyond. I am aware that I will contradict myself here: the image of his that gripped me was of an individual life seen as a braid of space-time. For Tegmark, human beings have the most complex space-time braids we know of. The idea is vastly oversimplified by the image above.

About which Tegmark explains:

At both ends of your spacetime braid, corresponding to your birth and death, all the threads gradually separate, corresponding to all your particles joining, interacting and finally going their own separate ways. This makes the spacetime structure of your entire life resemble a tree: At the bottom, corresponding to early times, is an elaborate system of roots corresponding to the spacetime trajectories of many particles, which gradually merge into thicker strands and culminate in a single tube-like trunk corresponding to your current body (with a remarkable braid-like pattern inside as we described above). At the top, corresponding to late times, the trunk splits into ever finer branches, corresponding to your particles going their own separate ways once your life is over. In other words, the pattern of life has only a finite extent along the time dimension, with the braid coming apart into frizz at both ends.

Because mathematical structures always exist whether or not anyone has discovered them, our life braid can be said to have always existed and will always exist. I have never been able to wrap my head around the religious idea of eternity, but this eternity I understand. Someday I may even do a post on how the notion of time found in the MUH resembles the medieval idea of eternity as nunc stans, the standing-now, but for now I’ll use it to address more down to earth concerns.

My youngest daughter, philosopher that she is, has often asked me “where was I before I was born?”. To which my lame response has been “you were an egg” which for a while made big breakfasts difficult. Now I can just tell her to get out her crayons to scribble, and we’ll color our way to something profound.

 

How Should Humanity Steer the Future?

FQXi

Over the spring the Foundational Questions Institute (FQXi) sponsored an essay contest whose topic should be dear to this audience’s heart: How Should Humanity Steer the Future? I thought I’d share some of the essays I found most interesting, but there are lots, lots more to check out if you’re into thinking about the future or physics, which I am guessing you might be.

If there was any theme I found across the 140 or so essays entered in the contest, it was that the 21st century is make-it-or-break-it for humanity, so we need to get our act together, and fast. If you want a metaphor for this sentiment, you couldn’t do much better than Nietzsche’s idea that humanity is like an individual walking on a “rope over an abyss”.

A Rope over an Abyss by Laurence Hitterdale

Hitterdale’s idea is that for most of human history the qualitative aspects of human experience have been pretty much the same, but that is about to change. What we are facing, according to Hitterdale, is either the extinction of our species or the realization of our wildest perennial human dreams: biological superlongevity and machine intelligence that seem to imply the end of drudgery and scarcity. As he points out, some very heavy-hitting thinkers seem to think we live in make-or-break times:

John Leslie judged the probability of human extinction during the next five centuries as perhaps around thirty per cent at least. Martin Rees in 2003 stated, “I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century.” Less than ten years later Rees added a comment: “I have been surprised by how many of my colleagues thought a catastrophe was even more likely than I did, and so considered me an optimist.”

In a nutshell, Hitterdale’s solution is for us to concentrate more on preventing negative outcomes than on achieving positive ones in this century. This is because even positive outcomes like human superlongevity and greater-than-human AI could lead to negative outcomes if we don’t sort out our problems or establish controls first.

How to avoid steering blindly: The case for a robust repository of human knowledge by Jens C. Niemeyer

This was probably my favorite essay overall because it touched on issues dear to my heart: how will we preserve the past in light of the huge uncertainties of the future? Niemeyer makes the case that we need to establish a repository of human knowledge in the event we suffer some general disaster, and shows how we might do this.

By one of those strange instances of serendipity, while thinking about Niemeyer’s ideas and browsing the science section of my local bookstore, I came across a new book by Lewis Dartnell, The Knowledge: How to Rebuild Our World from Scratch, which covers the essential technologies human beings will need if they want to revive civilization after a collapse. Or maybe I shouldn’t consider it so strange. Right next to The Knowledge was another new book, The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day, by David Hand, but I digress.

The digitization of knowledge, and its dependence on the whole technological apparatus of society, actually makes us more vulnerable to the complete loss of information, both social and personal, and therefore demands that we back up our knowledge. Only things like a flood or a fire could have destroyed our lifetime visual records the way we used to store them, in photo albums, but now all many of us would have to do is lose or break our phone. As Niemeyer says:

 Currently, no widespread efforts are being made to protect digital resources against global disasters and to establish the means and procedures for extracting safeguarded digital information without an existing technological infrastructure. Facilities like, for instance, the Barbarastollen underground archive for the preservation of Germany’s cultural heritage (or other national and international high-security archives) operate on the basis of microfilm stored at constant temperature and low humidity. New, digital information will most likely never exist in printed form and thus cannot be archived with these techniques even in principle. The repository must therefore not only be robust against man-made or natural disasters, it must also provide the means for accessing and copying digital data without computers, data connections, or even electricity.

Niemeyer imagines the creation of such a knowledge repository as a unifying project for humankind:

Ultimately, the protection and support of the repository may become one of humanity’s most unifying goals. After all, our collective memory of all things discovered or created by mankind, of our stories, songs and ideas, have a great part in defining what it means to be human. We must begin to protect this heritage and guarantee that future generations have access to the information they need to steer the future with open eyes.

Love it!

One Cannot Live in the Cradle Forever by Robert de Neufville

If Niemeyer is trying to goad us into preparing should the worst occur, Robert de Neufville, like Hitterdale, is working towards making sure these nightmares, especially the self-inflicted ones, don’t come true in the first place. He does this as a journalist and writer and as an associate of the Global Catastrophic Risk Institute.

As de Neufville points out, and as I myself have argued before, the silence of the universe gives us reason to be pessimistic about the long-term survivability of technological civilization. Yet the difficulties that stand in the way of minimizing global catastrophic risks, things like developing an environmentally sustainable modern economy, protecting ourselves against global pandemics or meteor strikes of a scale that might bring civilization to its knees, or eliminating the threat of nuclear war, are more challenges of politics than technology. He writes:

But the greatest challenges may be political. Overcoming the technical challenges may be easy in comparison to using our collective power as a species wisely. If humanity were a single person with all the knowledge and abilities of the entire human race, avoiding nuclear war, and environmental catastrophe would be relatively easy. But in fact we are billions of people with different experiences, different interests, and different visions for the future.

In a sense, the future is a collective action problem. Our species’ prospects are effectively what economists call a “common good”. Every person has a stake in our future. But no one person or country has the primary responsibility for the well-being of the human race. Most do not get much personal benefit from sacrificing to lower the risk of extinction. And all else being equal each would prefer that others bear the cost of action. Many powerful people and institutions in particular have a strong interest in keeping their investments from being stranded by social change. As Jason Matheny has said, “extinction risks are market failures”.

His essay makes an excellent case that it is time we mature as a species and live up to our global responsibilities. The most important of which is ensuring our continued existence.

The “I” and the Robot by Cristinel Stoica

Here Cristinel Stoica makes a great case for tolerance, intellectual humility and pluralism, a sentiment perhaps often expressed but rarely with such grace and passion.

As he writes:

The future is unpredictable and open, and we can make it better, for future us and for our children. We want them to live in peace and happiness. They can’t, if we want them to continue our fights and wars against others that are different, or to pay them back bills we inherited from our ancestors. The legacy we leave them should be a healthy planet, good relations with others, access to education, freedom, a healthy and critical way of thinking. We have to learn to be free, and to allow others to be free, because this is the only way our children will be happy and free. Then, they will be able to focus on any problems the future may reserve them.

Ends of History and Future Histories in the Longue Duree by Benjamin Pope

In his essay Benjamin Pope tries to peer into the human future over the long term by looking at the types of institutions that survive across centuries and even millennia: universities, “churches”, economic systems such as capitalism, and potentially multi-millennial, species-wide projects, namely space colonization.

I liked Pope’s essay a lot, but there are parts of it I disagreed with. For one, I wish he had included cities. These are the longest-lived of human institutions, and unlike Pope’s other choices they are political, yet they manage to far outlive other political forms, namely states or empires. Rome far outlived the Roman Empire, and my guess is that many American cities, as long as they are not underwater, will outlive the United States.

Pope’s read on religion might be music to the ears of some at the IEET:

Even the very far future will have a history, and this future history may have strong, path-dependent consequences. Once we are at the threshold of a post-human society the pace of change is expected to slow down only in the event of collapse, and there is a danger that any locked-in system not able to adapt appropriately will prevent a full spectrum of human flourishing that might otherwise occur.

Pope seems to lean toward the negative take on the role of religion in promoting “a full spectrum of human flourishing” and, “as a worst-case scenario, may lock out humanity from futures in which peace and freedom will be more achievable.”

To the surprise of many in the secular West, which includes an increasingly secular United States, the story of religion will very much be the story of humanity over the next couple of centuries, and that includes especially the religion that is dying in the West today, Christianity. I doubt, however, that religion has either the will or the capacity to stop or even significantly slow technological development, though it might change our understanding of it. It is also the case that, at the end of the day, religion only thrives to the extent it promotes human flourishing and survival, though religious fanatics might lead us to think otherwise. I am also not the only one to doubt Pope’s belief that “Once we are at the threshold of a posthuman society the pace of change is expected to slow down only in the event of collapse”.

Still, I greatly enjoyed Pope’s essay, and it was certainly thought provoking.  

Smooth seas do not make good sailors by Georgina Parry

If you’re looking to break out of your dystopian gloom for a while, and I myself keep finding reasons to be gloomy, then you couldn’t do much better than to take a peek at Georgina Parry’s fictionalized glimpse of a possible utopian future. Like a good parent, Parry encourages our confidence, but not our hubris:

 The image mankind call ‘the present’ has been written in the light but the material future has not been built. Now it is the mission of people like Grace, and the human species, to build a future. Success will be measured by the contentment, health, altruism, high culture, and creativity of its people. As a species, Homo sapiens sapiens are hackers of nature’s solutions presented by the tree of life, that has evolved over millions of years.

The future is the past by Roger Schlafly

Schlafly’s essay literally made my jaw drop, it was so morally absurd and even obscene.

Consider a mundane decision to walk along the top of a cliff. Conventional advice would be to be safe by staying away from the edge. But as Tegmark explains, that safety is only an illusion. What you perceive as a decision to stay safe is really the creation of a clone who jumps off the cliff. You may think that you are safe, but you are really jumping to your death in an alternate universe.

Armed with this knowledge, there is no reason to be safe. If you decide to jump off the cliff, then you really create a clone of yourself who stays on top of the cliff. Both scenarios are equally real, no matter what you decide. Your clone is indistinguishable from yourself, and will have the same feelings, except that one lives and the other dies. The surviving one can make more clones of himself just by making more decisions.

Schlafly rams the point home that under current views of the multiverse in physics nothing you do really amounts to a choice; we are stuck on an utterly deterministic wave function on whose branches we play hero and villain, and there is no space for either praise or guilt. You can always act the coward, or remain naive, sure that somewhere “out there” another version of “you” does the right thing. Saving humanity from itself in the ways proposed by Hitterdale and de Neufville, preparing for the worst as in Niemeyer and Pope, or trying to build a better future as in Parry and Stoica makes no sense here. Like poor Schrödinger’s cat, on some branches we end up surviving, on some we destroy ourselves, and it is not we who are in charge of which branch we are on.

The thought made me cringe, but then I realized Schlafly must be playing a Swiftian game. Applying quantum theory to the moral and political worlds we inhabit leads to absurdity. This might or might not call into question the fundamental reality of the multiverse or the universal wave function, but it should not lead us to doubt or jettison our ideas regarding our own responsibility for the lives we live, which boil down to the decisions we have made.

Chinese Dream is Xuan Yuan’s Da Tong by KoGuan Leo

Those of us in the West probably can’t help seeing the future of technology as nearly synonymous with the future of our own civilization, and a civilization, when boiled down to its essence, amounts to a set of questions a particular group of human beings keeps asking, and their answers to those questions. The questions in the West are things like: what is the right balance between social order and individual freedom? What is the relationship between the external and internal (mental/spiritual) worlds, including the question of the meaning of Truth? How might the most fragile thing in existence, and for us the most precious, the individual, survive across time? What is the relationship between the man-made world, and culture, vis-à-vis nature, and which is most important to the identity and authenticity of the individual?

The progress of science and technology intersects with all of these questions, but what we often forget is that we have sown the seeds of science and technology elsewhere, and the environment in which they will grow can be very different; their application and understanding will differ as well, based as they will be on a whole different set of questions and answers encountered by a distinct civilization.

Leo KoGuan’s essay approaches the future of science and technology from the perspective of Chinese civilization. Frankly, I did not really understand his essay, which seemed to me a combination of singularitarianism and Chinese philosophy that I just couldn’t wrap my head around. What am I to make of this from the Founder and Chairman of a 5.1 billion dollar computer company:

 Using the KQID time-engine, earthlings will literally become Tianming Ren with God-like power to create and distribute objects of desire at will. Unchained, we are free at last!

Other than the fact that anyone interested in the future of transhumanism absolutely needs to be paying attention to what is happening in China, and to what and how people there are thinking, not much.

Lastly, I myself had an essay in the contest. It argued that we are facing incredible hurdles in the near future, and that one of the ways we might succeed in clearing them is by recovering the ability to imagine what an ideal society, Utopia, might look like. Go figure.