How Science and Technology Slammed into a Wall and What We Should Do About It

Captain Future, 1944

It might be said that some contemporary futurists tend to use technological innovation and scientific discovery in the same way God was said to use the whirlwind against defiant Job, or Donald Rumsfeld treated the poor citizens of Iraq a decade ago: it's all about the "shock and awe". One glance at something like KurzweilAI.Net leaves a reader with the impression that brand new discoveries are flying off the shelf by the nanosecond and that all of our deepest sci-fi dreams are about to come true. No similar effort is made, at least that I know of, to show all the scientific and technological paths that have led into cul-de-sacs, or to chart all the projects packed up and put away like our childhood chemistry sets to gather dust in the attic of the human might-have-been. In exact converse to the world of political news, in technological news it's the jetpacks that do fly we read about, not the ones that never get off the ground.

Aside from the technologies themselves, future-oriented discussion of the potential of technologies or scientific discoveries tends to come in two stripes when it comes to political and ethical concerns: we're either on the verge of paradise or about to make Frankenstein seem like an amiable dinner guest.

There are a number of problems with this approach to science and technology. I can name more, but here are three: 1) it distorts the reality of innovation and discovery; 2) it isn't necessarily true; and 3) the political and ethical questions, which are the most essential ones, are too often presented in a simplistic all-good or all-bad manner, when any adult knows that most of life is like ice cream: it tastes great and will make you fat.

Let's start with distortion: a futurist forum like the aforementioned KurzweilAI.Net, by presenting every hint of innovation or discovery side-by-side, does not allow the reader to discriminate between discoveries of differing quality and importance. Most tentative technological breakthroughs and discoveries are just that- tentative- and ultimately go nowhere. The first question a reader should ask is whether or not some technique, process, or prediction has been replicated. The second question is whether or not the technology or discovery being presented is actually all that important. Anyone who's ever seen an infomercial knows people invent things every day that are just minor tweaks on what we already have. Ask anyone trapped like Houdini in a university lab- the majority of scientific discovery is not about revolutionary paradigm shifts à la Thomas Kuhn but merely filling in the details. Most scientists aren't Einsteins in waiting. They just edit his paperwork.

Then we have the issue of reality: anyone familiar with the literature or websites of contemporary futurists is left with the impression that we live in the most innovative and scientifically productive era in history. Yet things may not be as rosy as they appear when we only read the headlines. At least since 2009, there has been a steady chorus of well-respected technologists, scientists and academics telling us that innovation is not happening fast enough- that our rates of technological advancement are not merely failing to exceed those found in the past, they are not even matching them. A common retort to this claim might be to club whoever said it over the head with Moore's Law: surely, with computer speeds increasing exponentially, it must be pulling everything else along. But, to pull a quote from ol' Gershwin, "it ain't necessarily so".

As Paul Allen, co-founder of Microsoft and the financial muscle behind the Allen Institute for Brain Science, pointed out in his 2011 article "The Singularity Isn't Near," the problem of building ever smaller and faster computer chips is actually relatively simple, but many of the other problems we face, such as understanding how the human brain works and applying the lessons of that model- to the human brain itself or to the creation of AI- run up against what Allen calls the "complexity brake". The problems have become so complex that they are slowing down the pace of innovation itself. Perhaps it's not so much a brake as a wall we've slammed into.

A good real-world example of the complexity brake in action is what is happening with innovation in the drug industry, where new discoveries have stalled. In the words of Matthew Herper at Forbes:

But the drug industry has been very much a counter-example to Kurzweil's proposed law of accelerating returns. The technologies used in the production of drugs, like DNA sequencing and various types of chemistry approaches, do tend to advance at a Moore's Law-like clip. However, as Bernstein analyst Jack Scannell pointed out in Nature Reviews Drug Discovery, drug discovery itself has followed the opposite track, with costs increasing exponentially. This is like Moore's Law backwards, or, as Scannell put it, Eroom's Law.
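To put rough numbers on Eroom's Law (my gloss on Scannell's finding, not part of Herper's article): the number of new drugs approved per billion inflation-adjusted dollars of R&D spending has fallen by roughly half every nine years since 1950. Written out, that is something like

$$ N_{\text{drugs per \$1B}}(t) \;\approx\; N_0 \cdot 2^{-t/9} $$

with $t$ measured in years- an exponential decay that is, as advertised, Moore's Law run in reverse.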

It is only when we acknowledge that there is a barrier in front of our hopes for innovation and discovery that we can seek to find its source and try to remove it. If you don't see a wall, you run the risk of running into it, and you certainly won't be able to do the smart things: swerve, scale it, leap it, or prepare to bust through.

At least part of the problem stems from the fact that though we are collecting a simply enormous amount of scientific data, we are having trouble bringing this data together either to solve problems or to aid in our actual understanding of what it is we are studying. Trying to solve this aspect of the innovation problem is a goal of the brilliant young technologist Jeff Hammerbacher, founder of Cloudera. Hammerbacher has embarked on a project with Mount Sinai Hospital to apply the tools for organizing and analyzing the overwhelming amounts of data gathered by companies like Google and Facebook to medical information, in the hopes of spurring new understanding and treatments of diseases. The problem Hammerbacher is trying to solve, as he acknowledged in a recent interview with Charlie Rose, is precisely the one identified by Herper in the quote above: that innovation in treating diseases like mental illness is simply not moving fast enough.

Hammerbacher is our Spider-Man. Having helped create the analytical tools that underlie Facebook, he began to wonder if it was worth it, quipping: "The best minds of my generation are thinking about how to make people click ads." The real problem wasn't the lack of commercial technology; it was the barriers to important scientific discoveries that would actually save people's lives. Hammerbacher's conscientious objection over what technological innovation was being applied to versus what it wasn't is, I think, a perfect segway (excuse the pun) to my third point: the political and ethical dimension, or lack thereof, in much futurist writing today.

In my view, futurists too seldom acknowledge the political and ethical dimension in which technology will be embedded. Technologies hoped for in futurist communities- brain-computer interfaces, radical life extension, cognitive enhancement, AI- are treated in the spirit of consumer goods: if they exist, we will find a way to pay for them. There is, perhaps, the assumption that such technologies will follow the decreasing cost curves found in consumer electronics: cell phones were once toys for the rich and now the poorest people on earth have them.

Yet it isn't clear to me that the technologies hoped for will actually follow such decreasing cost curves; they may instead resemble health care costs rather than the plethora of cheap goods we've scored thanks to Moore's Law. It's also not clear to me that, should such technologies be affordable only by the rich when first invented, this gap will be at all acceptable to the mass of the middle class and the poor unless it is closed very, very fast. After all, some futurists are suggesting that not just life but some corporeal form of immortality will be at stake. There isn't much reason to riot if your wealthy neighbor toots around in his jetpack while you're stuck driving a Pinto. But the equation would surely change if what was at stake was a rich guy living to be a thousand while you're left broke, jetpackless, driving a Pinto and kicking the bucket at 75.

The political question of equity will thus be important, as will much deeper ethical questions as to what we should do and how we should do it. The slower pace of innovation and discovery, if it holds, might ironically be a good thing for society, for a time at least (though not for individuals who were banking on an earlier date for the arrival of technicolored miracles), for it will give us time to sort these political and ethical questions through.

There are three solutions I can think of that would improve the way science and technology is consumed and presented by futurists, help us get through the current barriers to invention and discovery, and build our capacity to deal with whatever is on the other side. The first problem, that of distortion, might be dealt with by better sorting of scientific news stories, so that the reader has some idea both of where the finding being presented lies along the path of scientific discovery or technological innovation and of how important it is in the overall framework of a field. This would prevent things such as a preliminary finding regarding the creation of an artificial hippocampus in a rat brain being placed next to the discovery of the Higgs boson, at least without some color coding or other signification that these discoveries are both widely separated along the path of discovery and of grossly different import.

As to the barriers to innovation and discovery themselves: more attempts such as Hammerbacher's need to be tried. Walls need to be better identified, and perhaps whole projects bringing together government and venture capital resources and money should be used to scale over or even bust through these blocked paths. As Hammerbacher's case seems to illustrate, a lot of technology is fluff; it's about click-rates, cool gadgets, and keeping up with the Joneses. Yet technology is also vitally important as the road to some of our highest aspirations. Without technological innovation we cannot alleviate human suffering, extend the time we have here, or spread the ability to reach self-defined ends to every member of the human family.

Some technological breakthroughs would actually deserve the appellation. Quantum computing, if viable, and if it lives up to the hype, would be like Joshua's trumpets against the walls of Jericho in terms of the barriers to innovation we face. This is because, theoretically at least, it would answer the biggest problem of the era, the same one identified by Hammerbacher: that we are generating an enormous amount of real data but are having a great deal of trouble organizing this information into actionable units we actually understand. In effect, we are creating an exponentially growing database that requires exponentially increasing effort to put the pieces of the puzzle together- running just to stand still. Quantum computing, again theoretically at least, would ease this problem by making such databases searchable without the need to organize them beforehand.
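A hedge is in order here. The textbook basis for this hope is Grover's algorithm, which finds an item in an unstructured database of $N$ entries in on the order of $\sqrt{N}$ steps, where a classical scan needs on the order of $N$:

$$ T_{\text{classical}} = O(N) \qquad\text{vs.}\qquad T_{\text{quantum}} = O(\sqrt{N}) $$

That is a quadratic speedup, not an exponential one- enough to loosen the data bottleneck considerably, but not to abolish it.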

Things in terms of innovation are not, of course, all gloom and doom. One fast-moving field that has recently come to public attention is that of molecular and synthetic biology, perhaps the only area where knowledge and capacity are not merely equaling but exceeding Moore's Law.

To conclude, the very fact that innovation might be slower than we hope- though we should make every effort to get it moving- should not be taken as an unmitigated disaster but as an opportunity to figure out what exactly it is we want to do when many of the hoped-for wonders of science and technology actually arrive. At the end of his recent TED Talk on reviving extinct species, a possibility that itself grows out of the biological revolution of which synthetic biology is a part, Stewart Brand gives us an idea of what this might look like. When asked if it was ethical for scientists to "play God" in doing such a thing, he responded that he and his fellow pioneers were trying to answer the question of whether we could revive extinct species, not whether we should. The ability to successfully revive extinct species, if it worked, would take some time to master, and would be a multi-generational project the end of which Brand, given his age, would not see. This would give us plenty of time to decide if de-extinction was a good idea, a decision he certainly hoped we would make. The only way we can justly make that decision is to set up democratic forums to discuss and debate the question.

The ex-hippie Brand has been around a long time, and is great evidence to me that old age plus experience can still result in what we once called wisdom. He has been present long enough to see many of our technological dreams of space colonies and flying cars and artificial intelligence fail to come true despite our childlike enthusiasm. He seems blissfully unswept up in the contemporary hoopla, and still less in his own importance in the grand scheme of things, and has devoted his remaining years to generation-long projects such as the Clock of the Long Now, or the revival of extinct species that will hopefully survive into the far future after he is gone.

Brand has also been around long enough to see all the Frankensteins that have not broken loose: the GMOs and China Syndromes and all those oh-so-frightening "test tube babies". His attitude towards science and technology seems to be that it is neither savior nor Shiva; it's just the cool stuff we can do when we try. Above all, he knows what the role of scientists and technologists is, and what is the role of the rest of us. The former show us what tricks we can play, but it is up to all of us how, or whether, we should play them.

The Ubiquitous Conflict between Past and Future

Materia and Dynamism of a Cyclist, by Boccioni

Perhaps one of the best ways to get a grip on our thoughts about the future is to look at the future as seen through the eyes of the past. This is not supposed to be a Zen koan meant to bring the reader's mind to a screeching halt, but a serious suggestion. Looking at how the past saw the future might reveal some things we might not easily see with our nose so close to the glass of contemporary visions. A good place to look, I think, would be the artistic and cultural movement of the early 20th century that went under the name of Futurism.

Futurism was a European movement, found especially in Italy but also elsewhere, that took as its focus radical visions of a technologically transformed human future. Futurists were deeply attracted to the dynamic aspects of technology. They loved speed and skyscrapers and the age of the machine. They also hated the past and embraced violence- two seemingly unrelated orientations that I think may in fact fit hand in glove.

The best document for understanding Futurism is its first: F.T. Marinetti's The Futurist Manifesto. Here is Marinetti on the Futurists' stance towards the cultural legacy of the past, in a passage that also gives some indication of the movement's positive outlook towards violence:

Let the good incendiaries with charred fingers come! Here they are! Heap up the fire to the shelves of the libraries! Divert the canals to flood the cellars of the museums! Let the glorious canvases swim ashore! Take the picks and hammers! Undermine the foundation of venerable towns!


Given what is probably the common modern view- that technology, at least in its non-military form, is a benign force that makes human lives better, where better also means less violent, and that technology in the early 21st century is used as often to preserve the past, say by digitally scanning and making globally available an ancient manuscript like the Dead Sea Scrolls, as to destroy it- one might ask: what gives with Futurism?

Within the context of early-1900s Italy, both Futurism's praise of violence and its hatred for the accumulated historical legacy make some sense. Italy was both a relatively new country, having only been established in 1861, and, compared to its much more modernized neighbors, a backward one. At the same time, the sheer weight of the past of pre-unification Italy- stretching back to the Etruscans, followed by the Romans, followed by the age of great Italian city-states such as Florence, not to mention the long history as the seat of the Roman church- was overwhelming.

Italy was a country with too much history, something felt to be holding back its modernization, and the Futurists' hatred of history and their embrace of violence were dual symptoms of the desire to clear the deck so that modernity could take hold. What we find in Futurism is an acute awareness that the past may be the enemy of the future, and an acknowledgement, indeed a full embrace, of the disruptive nature of technology- that technology upends and destroys old social forms and ways of doing things.

We might think such a conflict between past and future is itself a thing of the past, but I think this would be mistaken. At bottom, many of the current debates around the question of what the human (or post-human) future should look like are really debates between those who value the past and wish to preserve it and those who want to escape this past and create the world anew.

Bio-conservatives might be thought of as those who hope to preserve evolved human nature and/or the evolved biosphere intact in the face of potentially transformative technology. Transhumanists might be thought of as holding the middle position in the debate between past and future, wanting to preserve in varying degrees some aspects of evolved humanity while embracing new characteristics that they hope technology will soon make widely available. Singularitarians are found on the far end of the future side of the past-vs-future spectrum, not merely embracing but pursuing the creation of a brand new evolutionary kingdom in an equally new substrate- silicon.

You also find an awareness of this conflict between past and future in the most seemingly disparate of thinkers. On the future-as-destroyer-of-the-past side of the ledger you have a recent mesmerizing speech at the SXSW Conference by the science fiction author Bruce Sterling. Part of the point Sterling seems to be making in this multifaceted talk is that technologists need to acknowledge that technology does not always produce the better, just the different. Technology upends the old and creates a new order, and we should both lament the loss of what we have destroyed and embrace our role in having been a party to the destruction of the legacy of the past.

On the past-as-immovable-anchor side of the ledger- the past as that which prevents us from realizing not just the future but the present- we have the architect Rem Koolhaas, who designed such modern wonders as the CCTV Headquarters in Beijing. Koolhaas thinks the past is currently winning in the fight against the future. He is troubled by the increase in the area human beings have declared "preserved", that is, off-limits to development. This area is surprisingly huge; Koolhaas claims it is comparable to the entire area of India.

In a hypothetical plan to deal with the "crisis" of historic preservation in Beijing, Koolhaas proposed mapping a grid over the city, with the areas to be preserved and those where development would be unconstrained established at random. Koolhaas thinks the process should be random in order to avoid cultural fights over what should be preserved and what should not, fights which end up reflecting the current realities of power more than any "real" historical significance- a version of "history is written by the victors". But the very randomness of the process not only strikes me as both historically illiterate and somewhat crazy, it is based, I think, on a distorted picture of the very facts of preservation themselves.

Koolhaas combines the area preserved for historic reasons with that preserved for natural ones. Thus his numbers include the vast amounts of space countries have set aside, or hope to set aside, as natural preserves. It's not at all clear to me that these are areas most human beings would really want to live in anyway, so it remains uncertain whether prohibiting their development actually holds back development in the aggregate. Koolhaas thinks we need a "theory" of preservation as a guide to what we should preserve and what we should allow to be destroyed in the name of something new. A theory suggests that the conflict between past and future is something that can or should be resolved, yet I do not think resolution is the goal we should seek. To my lights, what is really required is a discussion and a debate that acknowledges what we are in fact arguing about: how much we should preserve the past versus how much we should allow it to be destroyed in the name of the future.

What seems to me a new development, something that sets us off from the early 20th-century Futurism with which this post began, is that technology, rather than being by default the force that destroys the past and gives rise to the future, as in Sterling's speech, is becoming neutral in the conflict between past and future. You can see some of this new neutrality not only in efforts to use technology to preserve and spread engagement with the past, as in the creation of a digital version of the Dead Sea Scrolls; you can also see it in the current project of the eternal outsider, Stewart Brand, which hopes to bring back into existence extinct species such as the Passenger Pigeon through a combination of reverse genetic engineering and breeding.

For de-extinct species to be viable in the wild will probably require the resurrection of ecosystems and co-dependent species that have also been gone for quite some time, such as the chestnut forests the Passenger Pigeon depended on. Thus, in Brand's vision, what is not in the present and in front of us- the future- will be the lost past. We are likely to see more of this past/future neutrality of technology going forward, and therefore might do well to stop assuming that technological advancement of necessity leads to the victory of the new.

Because we can never fully determine whether we have been at least secondarily responsible for a species' extinction, does that mean we should never allow another species to go extinct if we can prevent it? This would constitute not an old world but in fact a very new one: a world in which evolution does not occur, at least not the type of evolution in large creatures that leads to extinction. When combined with the aim of ending human biological death, this world bears a strong resemblance to the mythical Garden of Eden, which should perhaps call the goal itself into question. Are we merely trying to realize deeply held religious ideas so enmeshed in our thought patterns that they have become invisible?

Advances in synthetic biology could lead, as Freeman Dyson believes, to life itself soon becoming part software, part art, with brand new species invented with the ease and frequency that new software is written or songs composed today. Or it could lead to projects, like Brand's, to reverse the damage humankind has done to the world and return it to the state of the pre-human past. Perhaps the two can live side by side; perhaps not. What does seem clear is that the choice of one takes time and talent away from the other. Time spent resurrecting the Passenger Pigeon or any other extinct species is time and intellectual capacity not spent inventing species never seen.

Once one begins seeing technological advancement as neutral in the contest between past and future, a number of aspirations that seem futuristic because of their technological dependence become perhaps less so. A world where people live "forever" seems like a very futuristic vision, but if one assumes this will require fewer people being born, it is perhaps the ultimate victory of the old over the new. We might tend to think that genetic engineering will be used to "advance" humankind, but might it not instead be an alternative to more radical versions of the future- cyborg technologies or AI- which, unlike genetic engineering, go beyond the biological potential human beings have carried since they emerged on the African savanna long ago?

There is no lasting solution to the conflict of past versus future, nor is it a condition we should in some way lament. Rather, it is merely a reflection of our nature as creatures in time. If we do not hold onto at least something from our past, both individual and collective, we become creatures or societies without identity. At the same time, if we put all of our efforts into preserving what was and never attempt the new, we are in a sense already dead, frozen in amber inside a world of what was. The sooner we acknowledge that this conflict is at the root of many of our debates, the more likely we are to come up with goals that neither aim to destroy the deep legacy of the past, both human and biological, nor prevent us from ever crossing into the frontier of the new.

The Ethics of a Simulated Universe

Zhuangzi's Butterfly Dream

This year one of the more thought-provoking thought experiments to appear in recent memory has its tenth anniversary. Nick Bostrom's paper in the Philosophical Quarterly, "Are You Living in a Computer Simulation?", might have sounded like the types of conversations we all had after leaving the theater having seen The Matrix, but Bostrom's attempt was serious. (There is a great recent video of Bostrom discussing his argument at the IEET.) What he did in his paper was create a formal argument around the seemingly fanciful question of whether or not we are living in a simulated world. Here is how he stated it:


This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

To state his case in everyday language: any technological civilization whose progress did not stop at some point would gain the capacity to realistically simulate whole worlds, including the individual minds that inhabit them- that is, it could run realistic ancestor simulations. So either technologically advanced civilizations die out before gaining the capacity to produce realistic ancestor simulations, or something stops technologically mature civilizations from running such simulations in large numbers, or we ourselves are most likely living in such a simulation, because there would be a great many more simulated worlds than real ones.
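The arithmetic underneath the trilemma can be sketched in simplified form (Bostrom's paper gives the fuller version, which includes a separate term for the fraction of posthuman civilizations interested in running simulations). If a fraction $f_p$ of civilizations at our stage reach technological maturity, and each mature civilization runs an average of $\bar{N}$ ancestor simulations of its own history, then among all observers with human-type experiences the simulated fraction is roughly

$$ f_{\text{sim}} \;\approx\; \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1} $$

Because $\bar{N}$ could be astronomically large, $f_{\text{sim}}$ is driven toward 1 unless $f_p \approx 0$ (proposition 1) or $\bar{N} \approx 0$ (proposition 2).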

There is a lot in that argument to digest and a number of underlying assumptions that might be explored or challenged, but I want to look at just one of them: #2. That is, I will make the case that there may be very good reasons, both ethical and scientific, why technological civilizations both prohibit and are largely uninterested in creating realistic ancestor simulations. Bostrom himself discusses the possibility that ethical constraints might prevent technologically mature civilizations from creating realistic ancestor simulations. He writes:

One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral. On the contrary, we tend to view the existence of our race as constituting a great ethical value. Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

I think the issue of "suffering that is inflicted on the inhabitants of a simulation" may be more serious than Bostrom appears to believe. Any civilization that has reached the stage where it can create realistic worlds containing fully conscious human beings will almost certainly have escaped the two conditions that haunt the human condition in its current form- namely pain and death. The creation of realistic ancestor simulations would bring these two horrors back into existence, and thus might be considered not merely unethical but perhaps even evil. Were our world actually such a simulation, it would confront us with questions that once went by the name of theodicy: the attempt to reconcile the assumed goodness of the creator (in our case, the simulator) with the natural, moral, and metaphysical evil that exists in the world.

The question of why there is, and the morality of there being, such a wide disjunction between the conditions of any imaginable creator/simulator and those of the created/simulated is a road we've been down before, as the philosopher Susan Neiman so brilliantly showed us in her Evil in Modern Thought: An Alternative History of Philosophy. There, Neiman points out that the question of theodicy is a nearly invisible current that runs through the modern age. As a serious topic of thought in this period it began as an underlying assumption behind the scientific revolution, with thinkers such as Leibniz arguing, in his Essays on the Goodness of God, the Freedom of Man and the Origin of Evil, that ours is "the best of all possible worlds". Leibniz, having invented calculus and been the first to envision what we would understand as a modern computer, was no dope, so one might wonder how such a genius could believe anything so seemingly contradictory to actual human experience.

What one needs to remember in order to understand this is that the early giants of the scientific revolution- Newton, Descartes, Leibniz, Bacon- weren't out to replace God but to understand what was thought of as his natural order. Given how stunningly quickly what seemed at the time a complete understanding of the natural world had been formulated using the new methods, it perhaps made sense to think that a similar human understanding of the moral order was close at hand. Given the intricate and harmonious picture of how nature was "designed", it was easy to think that this natural order did not somehow run in violation of the moral order- that underneath every seemingly senseless and death-bringing natural event, an earthquake or a plague, there was some deeper and more necessary process working for the good of humanity.


The quest for theodicy continued when Rousseau gave us the idea that for harmony to be restored to the human world we needed to return to the balance found in nature, something we had lost when we developed civilization. Nature, and therefore the intelligence that was thought to have designed it, were good. Human beings got themselves into trouble when they failed to heed this nature- built their cities on fault lines, or even, for Rousseau, lived in cities at all.


Anyone who has ever taken a serious look at history (or even paid real attention to the news) would agree with Gibbon that history "is, indeed, little more than the register of the crimes, follies, and misfortunes of mankind". There isn't any room for an ethical creator there, but Hegel tried to find some larger meaning in the long-term trajectory of history. With him, and Marx who followed on his heels, we have not so much a creator as a natural and historical process that leads to a fulfillment- an intelligence or perfect society- at its end. There may be a lot of blood and gore, a huge amount of seemingly unnecessary human suffering, between the beginning of history and its end, but the price is worth paying.

Lest anyone think these views are irrelevant in our current context, one can see a great deal of Rousseau in the positions of contemporary environmentalists and bio-conservatives, who take the position that it is us- human beings and the artificial technological society we have created- that is the problem. Likewise, the views of both singularitarians and some transhumanists, who think we are on the verge of reaching some breakout stage where we complete or transcend the human condition, carry deep echoes of both Hegel and Marx.

But perhaps the best analog for the kinds of rules a technologically advanced civilization might create around realistic ancestor simulations lies not in these religiously based philosophical ideas but in our regulations regarding more practical areas, such as animal experimentation. Scientific researchers have long recognized that the pursuit of truth needs humane constraints, and that such constraints apply not just to human research subjects but to animal subjects as well. In the United States, research using live animals is subject to self-regulation and oversight by an Institutional Animal Care and Use Committee (IACUC). Its criteria are as follows:

The basic criteria for IACUC approval are that the research (a) has the potential to allow us to learn new information, (b) will teach skills or concepts that cannot be obtained using an alternative, (c) will generate knowledge that is scientifically and socially important, and (d) is designed such that animals are treated humanely.

The underlying idea behind these regulations is that researchers should never unnecessarily burden animals in research. It is therefore the job of researchers to design and carry out research in a way that does not subject animals to unnecessary burdens. Doing research that does not promise to generate important knowledge, subjecting animals to unnecessary pain, and doing experiments on animals when the objectives can be reached without doing so are all ways of unnecessarily burdening animals.

Would realistic ancestor simulations meet these criteria? I think they would probably fulfill criterion (a), but I have serious doubts that they meet any of the others. In terms of simulations offering us "skills or concepts that cannot be obtained using an alternative" (b): in what sense would a realistic simulation teach skills or concepts that could not be achieved with something short of a simulation in which those within it actually suffer and die? Are there not many types of simulations that would fall short of the full range of subjective human experiences of suffering but would nevertheless grant us a good deal of knowledge about such worlds and societies? There is probably an even greater ethical hurdle for realistic ancestor simulations to cross in (c), for it is difficult to see exactly what lessons or essential knowledge running such simulations would bring. Societies advanced enough to run such simulations are unlikely to gain vital information about how to run their own societies from them.

The knowledge they are seeking is bound to be historical in nature- that is, what was it like to live in such and such a period, or what might have happened if some historical contingency were reversed? I find it extremely difficult to believe that we do not already have the majority of the information needed to create realistic models of what it was like to live in a particular historical period, a recreation that does not have to entail real suffering on the part of innocent participants to be of worth.

Let's take a specific rather than a general historical example, one dear to my heart because I am a Pennsylvanian: the Battle of Gettysburg. Imagine that you live hundreds or even thousands of years in the future, when we are capable of creating realistic ancestor simulations. Imagine that you are absolutely fascinated by the American Civil War and would like to "live" in that period to see it for yourself. Certainly you might want to bring your own capacity for suffering and pain, or to replicate this capacity in a "human" you, but why do the simulated beings in this world with you have to actually feel the lash of a whip, or the pain of a saw hacking off an injured limb? Again, if completely accurate simulations of limited historical events are ethically suspect, why would this suspicion not hold for the simulation of entire worlds?

One might raise the objection that any realistic ancestor simulation would need to possess sentient beings with free will, such as ourselves, who possess not merely the ability to suffer but the ability to inflict evil upon others. This, of course, is exactly the argument Christian theodicy makes. It is also an argument that was undermined by sceptics of early modern theodicy, such as that put forth by Leibniz.

Arguments that sentient beings like us (whether simulated or real) require a free will that puts us at risk of self-harm were dealt with brilliantly by the 17th-century philosopher Pierre Bayle, who compared whatever creator might exist behind a world where sentient beings are in constant danger from their exercise of free will to a negligent parent. Every parent knows that giving their children freedom is necessary for moral growth, but what parent would give this freedom such free rein when it was not only likely but known that it would lead to the severe harm or even death of their child?

Applying this directly to the simulation argument: any creator of such simulations knows that we will kill millions of our fellow human beings in war and torture, and deliberately starve and enslave countless others. At least these are problems we have brought on ourselves, but the world also contains numerous so-called natural evils, such as earthquakes and pandemics, which have devastated us numerous times. Above all, it contains death itself, which will kill all of us in the end.

It was quite obvious to another sceptic, David Hume, that the reality we live in has no concern for us. If it was "engineered", what does it say that the engineer did not provide obvious "tweaks" to the system that would make human life infinitely better? Why are we driven by pains such as hunger and not by correspondingly greater pleasures?

Arguments that human and animal suffering is somehow not "real" because it is simulated seem to me particularly tone-deaf. In a lighter mood I might kick a stone like Samuel Johnson and squeak "I refute it thus!" In a darker mood I might ask those who hold the idea whether they would willingly exchange places with a burn victim. I think not!

If it is the case that we live in a realistic ancestor simulation, then the simulator cares nothing for our suffering at the granular level. This leads us to the question of what type of knowledge could truly be gained from running such simulations. It might be the case that the more granular a simulation is, the less knowledge can actually be gained from it. If one needs to create entire worlds, with individuals and everything in them, in order to truly understand what is going on, then either the results of the process being studied are contingent or one is not really in possession of full knowledge of how such worlds work.

It might be interesting to run natural selection over again from the beginning of life on earth to see what alternatives evolution might have come up with, but would we really be gaining any knowledge about evolution itself, rather than a view of how it might have looked had such and such initial conditions been changed? And why would we need to replicate an entire world, including the pain suffered by the simulated creatures within it, before we can grasp what alternatives to the path of evolution, or even history, might have looked like? Full understanding of the process by which evolution works, which we do not have but a civilization able to create realistic ancestor simulations doubtless would, should allow us to envision alternatives without having to run the process from scratch a countless number of times.

Yet it is on the last criterion (d), that experiments be "designed such that animals are treated humanely", that the ethical nature of any realistic ancestor simulation really hits a wall. If we are indeed living in an ancestor simulation, it seems pretty clear that the simulator(s) should be declared inhumane. How else could the simulator have created a world of pandemics and genocides, torture, war, and murder, and above all, universal death?

One might claim that any simulator is so far above us that it takes little account of our suffering. Yet we have granted this kind of concern to animals, and thus might consider ourselves morally superior to an ethically blind simulator. And without any concern or interest in our collective well-being, why simulate us in the first place?

Indeed, any simulator who created a world such as our own would be the ultimate anti-transhumanist. Having escaped pain and death, "he" would bring them back into the world on account of what would likely be socially useless curiosity. Here, then, we might have an answer to the second half of Bostrom's quote above:


Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.

A society that is technologically capable of creating realistic ancestor simulations and actually runs them would appear to have one of two features: (1) it finds such simulations ethically permissible, or (2) it is unable to prevent them from being created. Perhaps any society that remains open to creating the degree of suffering found in realistic ancestor simulations for any reason short of existential survival would be likely to destroy itself for other reasons related to this lack of ethical boundaries.

However, it is (2) that I find the most illuminating. For perhaps the condition that decides whether a civilization will continue to exist is its ability to adequately regulate the use of technology within it. Any society that is unable to prevent rogue members from creating realistic ancestor simulations despite deep ethical prohibitions is incapable of preventing the use of destructive technologies or of managing its own technological development in a way that promotes survival- a situation we can perhaps glimpse in our own predicament with nuclear and biological weapons, or the dangers of the Anthropocene.

This link between ethics, successful control of technology, and long-term survival is perhaps the real lesson we should glean from Bostrom's provocative simulation argument.

Immortal Jellyfish and the Collapse of Civilization

Luca Giordano, The Cave of Eternity, 1680s

The one rule that seems to hold for everything in our Universe is that all that exists, after a time, must pass away, which for life forms means they will die. From there, however, all bets are off, and the definition of the word "time" in the phrase "after a time" comes into play. The Universe itself may exist for as long as hundreds of trillions of years, to at last disappear into the formless quantum field from whence it came. Galaxies, or more specifically clusters of galaxies and super-galaxies, may survive for perhaps trillions of years, to eventually be pulled in and destroyed by the black holes at their centers.

Stars last for a few billion years, and our own sun, some 5 or so billion years in the future, will die after having expanded and then consumed the last of its nuclear fuel. The earth, having lasted around 10 billion years by that point, will be consumed in this expansion of the Sun. Life on earth seems unlikely to make it all the way to the Sun's envelopment, and will likely be destroyed billions of years before the end of the Sun, as solar expansion boils away the atmosphere and oceans of our precious earth.

The lifespan of even the longest-lived individual among us is nothing compared to this kind of deep time. Against deep time we are, all of us, little more than mayflies, who live out their entire adult lives in little more than a day. Yet like the mayflies themselves, which are one of the earth's oldest extant species, we have contact with deep time by the very fact that we are the product of a long chain of life stretching backward.

Life on earth itself, if not quite immortal, does at least come within the range of the "lifespan" of other systems in our Universe, such as stars. If the life that emerged from earth manages to survive, and proves capable of moving beyond the life-cycle of its parent star, perhaps the chain in which we exist can continue in an unbroken line to reach the age of galaxies or even the Universe itself. Here, then, might lie something like immortality.

The most likely route by which this might happen is through our own species, Homo sapiens, or our descendants. Species do not exist forever, and ours is likely to share this fate of demise, either through actual extinction or through evolution into something else. In terms of the latter, one might ask, if our survival is assumed, how far into the future we would need to go before our descendants are no longer recognizably human. As long as something doesn't kill us, or we don't kill ourselves off first, I think that choice, for at least the foreseeable future, will be up to us.

It is often assumed that species have to evolve or they will die; a common refrain I've heard among some transhumanists is "evolve or die!" In one sense, yes, we need to adapt to changing circumstances; in another, no, this is not really what evolution teaches us, or it is not the only thing it teaches us. When one looks at the earth's longest extant species, what one often sees is that once natural selection comes up with a formula that works, that model is preserved essentially unchanged over very long stretches of time, even over what can be considered deep time. Cyanobacteria are nearly as old as life on earth itself, and the more complex horseshoe crab is essentially the same as its relatives that walked the earth before the dinosaurs. The exact same type of small creature that our children torture on beach vacations might have been a snack for a baby T. rex!

So much for the longevity of species, but what about the longevity of individuals? Anyone interested should check out the amazing photo study of the subject by the artist Rachel Sussman. You can see Sussman's work here at TED, and over at Long Now. The specimens Sussman brings to light are all over 2,000 years old. Almost all are bacteria or plants, and clonal- that is, they exist as a single organism composed of genetically identical individuals linked together by common root and other systems. Plants, and especially trees, are perhaps the most interesting because they are so familiar to us, and though no plant can compete with the longevity of bacteria, a clonal colony of Quaking Aspen in Utah is an amazing 80,000 years old!

The only animals Sussman deals with are corals, an artistic decision that reflects the fact that animals do not survive all that long- although one species of animal she does not cover might give the long-lifers in the other kingdoms a run for their money. The "immortal jellyfish", Turritopsis nutricula, is thought to be effectively biologically immortal (though none is likely to have survived in the wild for anything even approaching the longevity of the longest-lived plants). The way it achieves this feat is a wonder of biological evolution: after mating upon sexual maturity, Turritopsis nutricula essentially reverses its own development process and reverts to a prior clonal state.

Perhaps we could say that Turritopsis nutricula survives indefinitely by moving between more and less complex types of structure, all the while preserving the underlying genes of an individual specimen intact. Some hold out the hope that Turritopsis nutricula holds the keys to biological immortality for individuals, and let's hope they're right, but I, for one, think its lessons likely lie elsewhere.

A jellyfish is a jellyfish, after all. Among more complex animals with well-developed nervous systems, longevity moves much closer to a humanly comprehensible lifespan, with the oldest living animal, a giant tortoise by the too-cute name of "Jonathan", thought to be around 178 years old. This is still a very long time frame in human terms, and perhaps puts the briefness of our own recent history in perspective: it would be another 26 years after Jonathan hatched from his egg until the first shots of the American Civil War were fired. A lot can happen over the life of a "turtle".

Individual plants, however, put all individual animals to shame. The oldest non-clonal plant, the Great Basin Bristlecone Pine, has a specimen believed to be 5,062 years old. In some ways this oldest living non-clonal individual perfectly illustrates the (relatively) new way human beings have reoriented themselves to time, and even deep time. When this specimen of pine first emerged from a cone, human beings had only just invented a whole set of tools that would make the transmission of cultural rather than genetic information across vast stretches of time possible. During the 31st century B.C.E. we invented monumental architecture, such as Stonehenge and the pyramids of Egypt, whose builders still "speak" to us, pose questions to us, from millennia ago. Above all, we invented writing, which allows someone with little more than a clay tablet and a carving utensil to say something to me living 5,000 years in his future.

Humans being the social animals that they are, we might ask about the mortality, or potential immortality, of groups that survive across many generations, even for thousands of years. Groups that survive for such long periods seem to emerge most fully out of the technology of writing, which both preserves historical memory and permits a common identity built around a core set of ideas. The two major types of human groups based on writing are institutions and societies, the latter including not just the state but also the economic, cultural, and intellectual features of a particular group.


Among the biggest mistakes I think those charged with responsibility for an institution or a society can make is to assume that it is naturally immortal, and that such a condition is independent of whatever decisions and actions those in charge of it take. This was part of the charge Augustine laid against the seemingly eternal Roman Empire in his The City of God. The Empire, Augustine pointed out, was a human institution that had grown and thrived from its virtues in the past just as surely as it was, in his day, dying from its vices. Augustine, however, saw the Church and its message as truly eternal: empires would come and go, but the people of God and their "city" would remain.

It is somewhat ironic, therefore, that the Catholic Church, which chooses a Pope this week, has been so beset by scandal that its very long-term survivability might be thought at stake. Even seemingly eternal institutions, such as the 2,000-year-old Church, require from human beings an orientation that might be compared to the way theologians once viewed the relationship of God and nature. Once it was held that constant effort by God was required to keep the Universe from slipping back into the chaos from whence it came- that the action of God was necessary to open every flower. While this idea holds very little for us in terms of our understanding of nature, it is perhaps a good analog for human institutions, states, and our own personal relationships, which require our constant tending or they give way to mortality.

It is perhaps difficult for us to realize that our own societies are as mortal as the empires of old, and that someday my own United States will be no more. America is a very odd country in respect to its views of time and history. In a society seemingly obsessed with the new and the modern, contemporary debates almost always seek reference and legitimacy on the basis of men who lived and thought over 200 years ago. The Founding Fathers were obsessed with the mortality of states and deliberately crafted a form of government that they hoped might make the United States almost immortal.

Much of the structure of American constitutionalism, where government is divided into "branches" that "check and balance" one another, was based on a particular reading of long-lived ancient systems of government that had something like this tripartite structure, most notably Sparta and Rome. What "killed" a society, in the view of the Founders, was when one element- the democratic, the oligarchic-aristocratic, or the kingly- rose to dominate all the others. Constitutionally divided government was meant to keep this from happening and thereby support the survival of the United States indefinitely.

Again, it is a somewhat bitter irony that the very divided nature of American government, which was supposed to help the United States survive into the far future, seems to be making it impossible for the political class in the US to craft solutions to the country's quite serious long-term problems, and therefore might someday threaten the very survival of the country divided government was meant to secure.

Anyone interested in the question of the extended survival of their society, indeed of civilization itself, needs to take into account the work of Joseph A. Tainter and his The Collapse of Complex Societies (1988). Here the archaeologist Tainter not only provides us with a "science" that explains the mortality of societies; his viewpoint, I think, also gives us ways to think about, and insight into, the seemingly intractable social, economic, and technological bottlenecks that now confront all developed economies: Japan, the EU/UK, and the United States.

Tainter, in his Collapse, wanted to move us away from the vitalist ideas of the end of civilization seen in thinkers such as Oswald Spengler and Arnold Toynbee. We needed, in his view, to put our finger on the material reality of a society to figure out what conditions most often lead it to dissipate, i.e. to move from a more complex and integrated form, such as the Roman Empire, to a simpler and less integrated form, such as the isolated medieval fiefdoms that followed.

Grossly oversimplified, Tainter's answer was a dry two-word concept borrowed from economics: marginal utility. The idea is simple if you think about it for a moment. Any society is likely to take advantage of "low-hanging fruit" first: the best land will be the first to be cultivated, the easiest resources to gain access to the first exploited.

The "fruit", however, quickly becomes harder to pick- problems become harder for a society to solve, which leads to a growth in complexity. Romans first tapped the tillable land around the city, but by the end of the Empire the city needed a complex international network of trade and political control to pump grain from the distant Nile Valley into Rome.

Yet as a society deploys more and more complex solutions to its problems, it becomes institutionally "heavy" (the legacy of all the problems it has solved in the past) just as its problems become more and more difficult to solve. The result is that, at some point, the sheer amount of resources that needs to be thrown at a problem to solve it is no longer available, and the only lasting solution becomes to move down the chain of complexity to a simpler form. Roman prosperity and civilization drew in the migration of "barbarian" populations from the north, whose pressures would lead to the splitting of the Empire in two and the eventual collapse of its western half.
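A toy formalization (mine, not Tainter's) may make the logic clearer. Suppose the benefits of social complexity $C$ grow concavely, say $B(C) = a\sqrt{C}$, because the easiest problems are solved first, while the costs of maintaining complexity grow linearly, $bC$. The net return

$$ R(C) = a\sqrt{C} - bC $$

is maximized where $R'(C) = \frac{a}{2\sqrt{C}} - b = 0$, that is, at $C^* = (a/2b)^2$. Past $C^*$, every further layer of complexity costs more than it returns, and moving back down the chain of complexity becomes, in the grim accounting sense, the rational "solution".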

It would seem that we broke through Tainter's problem of marginal utility with the industrial revolution, but we should perhaps not judge so fast. The industrial revolution and all of its derivatives, up to our current digital and biological revolutions, replaced a system in which goods were largely produced at a local level and communities were largely self-sufficient with a sprawling global network of interconnections and coordinated activities requiring vast amounts of specialized knowledge on the part of human beings who, by necessity, must participate in this system to provide for their most basic needs.

Clothes that were once produced in the home of the individual who would wear them are now produced thousands of miles away by workers connected to a production and transportation system that requires the coordination of millions of persons, many of whom are exercising specialized knowledge. Food that was once grown or raised by the family that consumed it now requires vast systems of transportation and processing, the production of fertilizers from fossil fuels, and the work of genetic engineers to design both crops and domesticated animals.

This gives us an indication of just how far up the chain of complexity we have moved, and I think it leads inevitably to the question of whether such increasing complexity might at some point stall for us, or even be thrown into reverse.

The idea that, despite all the whiz-bang! of modern digital technology, we have somehow stalled out in terms of innovation has recently gained traction. There was the argument made by the technologist and entrepreneur Peter Thiel, at the 2009 Singularity Summit, that the developed world faced the real danger of the Singularity not happening quickly enough. Thiel’s point was that our entire society was built around expectations of exponential technological growth that showed ominous signs of not materializing. I need only think back to my Social Studies textbooks in the 1980s and their projections of the early 2000s, with their glittering orbital and underwater cities, both of which I dreamed of someday living in, to realize our futuristic expectations are far from having been met. More depressingly, Thiel points out that all of our technological wonders have not translated into huge gains in economic growth, and especially have not resulted in any increase in median income, which has been stagnant since the 1970s.

In addition to Thiel, you had the economist Tyler Cowen, who in his The Great Stagnation (2011) argued compellingly that the real root of America’s economic malaise was that the kinds of huge qualitative innovations seen in the 19th and early 20th centuries- from indoor toilets, to refrigerators, to the automobile- had largely petered out after the low-hanging fruit, the technologies easiest to reach using the new industrial methods, was picked. I may love my iPhone (if I had one), but it sure doesn’t beat being able to sanitarily go to the bathroom indoors, or keep my food from rotting, or travel many miles overland on a daily basis in mere minutes or hours rather than days.

One reason why technological change is perhaps not happening as fast as boosters such as the singularitarians hope, or as fast as our society perhaps needs it to in order to keep functioning in the way we have organized it, can be seen in the comments of the technologist, social critic, and novelist Ramez Naam. In a recent interview for The Singularity Weblog, Naam points out that one of the things believers in the Singularity, or others who hold to ideas of exponential technological growth, miss is that the complexity of the problems technology is trying to solve is also growing exponentially- that is, problems are becoming exponentially harder to solve. It is for this reason that Naam finds the singularitarians’ timeline wildly optimistic. We are a long, long way from understanding the human brain in such a way that it could be replicated in an AI.
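
Naam’s observation is easy to see with a back-of-the-envelope sketch. The growth rates below are invented purely for illustration; the point is only what happens when the difficulty of the remaining problems compounds as fast as, or faster than, our capabilities do:

```python
# Invented numbers, for illustration only: capability doubles every
# two years (Moore's-Law style), while the difficulty of the problems
# still unsolved is assumed to double slightly faster.

for t in range(0, 21, 5):  # years from now
    capability = 2 ** (t / 2.0)   # doubles every 2 years
    difficulty = 2 ** (t / 1.8)   # doubles every 1.8 years (assumed)
    print(t, round(capability / difficulty, 3))
# The capability-to-difficulty ratio drifts steadily below 1.0:
# exponential growth in raw power does not by itself guarantee
# exponential progress on the problems that matter.
```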

The recent proposal of the Obama Administration to launch an Apollo-type project to understand the human brain, along with the more circumspect, EU-funded Human Brain Project/Blue Brain Project, might be seen as attempts to solve the epistemological problems posed by increasing complexity, and as responses to two seemingly unrelated technological bottlenecks stemming from complexity and the problem of declining marginal returns.

On the epistemological front the problem seems to be that we are drowning in data but sorely lacking in models by which we can put the information we are gathering together into working theories that anyone actually understands. As Henry Markram, the founder of the Blue Brain Project, stated:

So yes, there is no doubt that we are generating a massive amount of data and knowledge about the brain, but this raises a dilemma of what the individual understands. No neuroscientist can even read more than about 200 articles per year, and no neuroscientist is even remotely capable of comprehending the current pool of data and knowledge. Neuroscientists will almost certainly drown in data in the 21st century. So, actually, the fraction of the known knowledge about the brain that each person has is actually decreasing(!) and will decrease even further until neuroscientists are forced to become informaticians or robot operators.
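
The arithmetic behind Markram’s worry is straightforward. Purely for illustration (these figures are assumptions of mine, not his), suppose a field publishes 5,000 papers a year, growing at 5% annually, while a diligent reader manages a fixed 200 papers a year:

```python
# Assumed figures, for illustration only: a fixed reading budget
# against a literature whose annual output compounds at 5%.

papers_read = 200      # roughly the ceiling Markram cites
new_papers = 5_000     # assumed annual output of the field today
growth = 0.05          # assumed yearly growth in output

for year in (0, 10, 20, 30):
    output = new_papers * (1 + growth) ** year
    print(year, f"{papers_read / output:.2%}")
# Prints 4.00%, 2.46%, 1.51%, 0.93%: the share of even the current
# year's literature one person can read shrinks steadily, exactly the
# decreasing fraction of knowledge Markram describes.
```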

This epistemological problem, which was brilliantly discussed by Noam Chomsky in an interview late last year, is related to the very real bottleneck in Artificial Intelligence- the very technology Peter Thiel thinks is essential if we are to achieve the rates of economic growth upon which our assumptions of technological and economic progress depend.

We have developed machines with incredible processing power, and the digital revolution is real, with amazing technologies just over the horizon. Still, these machines are nowhere near doing what we would call “thinking”. Or, to paraphrase the neuroscientist and novelist David Eagleman: the AI WATSON might have been able to beat the very best human players at the game Jeopardy!, but what it could not do was answer a question obvious to any two-year-old, like “When Barack Obama enters a room, does his nose go with him?”

Understanding how human beings think, it is hoped, might allow us to overcome this AI bottleneck and produce machines that possess qualities such as our own or better- an obvious tool for solving society’s complex problems.

The other bottleneck a large-scale research project on the brain is meant to solve is the halted development of psychotropic drugs- a product of the enormous and ever-increasing costs of creating such products, which are themselves a product of the complexity of the problem pharmaceutical companies are trying to tackle, namely: how does the human brain work, and how can we control its functions and manage its development? This is especially troubling given the predictable rise in neurological diseases such as Alzheimer’s. It is my hope that these large-scale projects will help crack the problem of the human brain, especially as it pertains to devastating neurological disorders; let us pray they succeed.

On the broader front, Tainter identifies a number of solutions societies have come up with to the problem of declining marginal utility, two of which are merely temporary and one of which is long-term. The first is for a society to become more complex, integrated, bigger. The old-school way to do this was through conquest, but in an age of nuclear weapons and sophisticated insurgencies the big powers seem unlikely to follow that route. Instead, what we are seeing is proposals such as the EU-US free trade area and the Trans-Pacific Partnership, both of which appear to assume that the solution to the problems of globalization is more globalization. The second solution is for a society to find a new source of energy. Many might have hoped this would come in the form of green energy rather than in the form it appears to have taken- shale gas, and oil from the tar sands of Canada. In any case, Tainter sees both of these solutions as but temporary respites from the problem of marginal utility.

The only long-lasting solution Tainter sees for the problem of declining marginal utility is for a society to become less complex, that is, less integrated, more dependent on what can be provided locally than on sprawling networks and specialization. Tainter wanted to move us away from seeing the evolution of the Roman Empire into the feudal system as the “death” of a civilization. Rather, he sees the societies human beings have built as extremely adaptable and resilient. When the problem of increasing complexity becomes impossible to solve, societies move toward less complexity. It is a solution that strangely echoes that of the “immortal jellyfish”, Turritopsis nutricula, the only path complex entities have discovered that allows them to survive into something that whispers eternity.

Image description: From the National Gallery in London, “The Cave of Eternity” (1680s) by Luca Giordano. “The serpent biting its tail symbolises Eternity. The crowned figure of Janus holds the fleece from which the Three Fates draw out the thread of life. The hooded figure is Demagorgon who receives gifts from Nature, from whose breasts pours forth milk. Seated at the entrance to the cave is the winged figure of Chronos, who represents Time.”

Finding Our Way in the Great God Debate, part 2

Last time, I tried to tackle the raging debate between religious persons and a group of thinkers who at least aspire to speak for science and go under the name of the New Atheists. I tried to show how much of the New Atheists’ critique of religion is based on a narrow definition of what “God” is, that is, a kind of uber-engineer who designed the cosmos we inhabit and set up its laws. Despite my many critiques of the New Atheists, most of which have to do with their often inflammatory language and, just as often, their dismissal of and corresponding illiteracy regarding religion, I do think they highlight a very real philosophical and religious problem (at least in the West), namely, that the widely held religious concept of the world has almost completely severed any relationship with science, which offers the truest picture of what the world actually is that we have yet discovered.

In what follows I want to take a look at the opportunities this gap between religion and science offers for new kinds of religious thinking, but most especially for new avenues of secular thought that might manage, unlike the current crop of New Atheists, to hold fast to science while not discarding the important ways of thinking and approaches to the human condition that have been hard-won by religious traditions since the earliest days of humanity.

The understanding of God as a celestial engineer, the conception of God held by figures of the early scientific revolution such as Newton, was in some ways doomed from the start: it was predicated not only on a version of God so distant from the lives of average people that it would likely become irrelevant, but also on the failure of science to explain the natural world by its own methods alone. If science is successful at explaining the world, then a “God of the gaps” faces the likely fate that science eventually explains away any conceivable gap in which the need for such a God might be invoked. It is precisely the claim that physics has closed, or is on the verge of closing, the gap in scientific knowledge of what Pope Pius XII thought was God’s domain- the Universe before the Big Bang- that is the argument of another atheist who has entered the “God debate”, Lawrence Krauss, with his A Universe From Nothing.

In his book Krauss offers a compelling picture of the state of current cosmology, one I find fascinating and even wondrous. Krauss shows how the Universe may have appeared from quantum fluctuations out of what was quite literally nothing. He reveals how physicists are moving away from the idea of one Universe whose laws seem ideally tuned to give rise to life toward a version of a multiverse: a vision in which an untold number of universes other than our own exist on an extended plane, or right next to one another on a so-called “brane” that will forever remain beyond our reach, and in which each universe might have unique physical laws and even mathematics.

Krauss shows us not only the past but the even deeper time of the future, in which our own particular Universe, flat and expanding ever more rapidly, will, on the order of a few hundred billion years from now, be reorganized into islands of galaxies surrounded by the blackness of empty space, and in which, on the longest time frame of trillions of years, our Universe will become a dead place inhospitable to life- a sea of the same nothing from whence it came.

A Universe from Nothing is a great primer on the current state of physics, but it also has a secondary purpose as an atheist tract- with the afterword written by none other than Dawkins himself. Dawkins, I think quite presumptuously, believes Krauss will banish God from questions regarding the beginning of the Universe and its order in the same way Darwin managed to banish God from questions regarding the origin of life and its complexity.

Many people, myself included, have often found the case made by the New Atheists in many ways as counter-productive for the future of science as the theologization and politicization of science found among those trying to have Intelligent Design (ID) taught in schools, or those denying, for what amounts to a political position, the weight of scientific evidence on a public issue with a clear consensus of researchers, e.g. climate change. This is because their rhetoric forces people to choose between religion and science, a choice that in a country as deeply religious as the United States would likely be to the detriment of science. And yet, perhaps in the long run the New Atheists will have had a positive effect not merely on public discourse but, ironically, on religion itself.

The New Atheists have encouraged a great “coming out” of both atheists and agnostics, in a way that has enabled people in a religious society like the United States to openly express their lack of belief and their skepticism in a way that was perhaps never possible before. They have brought into sharp relief the incongruity of scientific truth and religious belief as it is currently held, and have thus pushed scholars such as Karen Armstrong to look for a conception of God that isn’t, like the modern conception, treated as an increasingly irrelevant hypothesis held over from a pre-Darwinian view of the world.


Three of the most interesting voices to have emerged from the God Debate each follow seemingly very different tracks that, when added together, might point the way to the other side. The first is the religious scholar Stephen Prothero. Armstrong’s Case for God helped inspire Prothero to write his God is Not One (2010). He was not so much responding to the religious/atheist debate as cautioning us against the perennialism at the root of much contemporary thought regarding religion. If the New Atheists went overboard with their claim that religion is either all superstitious nonsense, or evil, and most likely both, the perennialists, with whom he lumps Armstrong, went far too much to the other side, arguing, in their need to press for diversity and respect, the equally false notion that all religions preach in essence the same thing.


Prothero comes up with a neat formula for what a religion is- neat enough, as the sketch after the list shows, that you could almost write it down as a data structure. “Each religion articulates”:

  • a problem;
  • a solution to this problem, which also serves as the religious goal;
  • a technique (or techniques) for moving from this problem to this solution; and
  • an exemplar (or exemplars) who chart this path from problem to solution. (p. 14)
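
Prothero’s formula is tidy enough that, half in jest, it can be rendered as a small data structure. The sketch below is mine, not Prothero’s, and the Buddhist entries paraphrase his stock example rather than quote him:

```python
# A playful rendering of Prothero's four-part formula. The field
# comments restate his list; the example values are paraphrase.
from dataclasses import dataclass

@dataclass
class Religion:
    problem: str    # what the tradition says is wrong with the human condition
    solution: str   # the religious goal that answers that problem
    technique: str  # the practice(s) for moving from problem to solution
    exemplar: str   # the figure(s) who chart the path

buddhism = Religion(
    problem="suffering",
    solution="awakening (nirvana)",
    technique="the Noble Eightfold Path, meditation",
    exemplar="the Buddha, arhats, and bodhisattvas",
)
print(buddhism)
```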

For Prothero, what religions share is, as the perennialists claim, the fundamentals of human ethics, but also a view that the human condition is problematic. The devil, so to speak, is in the details, for the world’s great religions have come up with very different identifications of what exactly this problematic feature is, and hence very different, and in more ways than we might be willing to admit, incompatible solutions to the problems they identify.

God is Not One is a great introduction to the world’s major religions, a knowledge I think is sorely lacking, especially among those who have the most knee-jerk reactions to any mention of religious ideas. For me, one of the most frustrating things about the New Atheists in particular is their stunning degree of religious illiteracy. Not only do they paint all of Judaism, Christianity, and Islam with the same broad brush of religious literalism and fundamentalism, and have no grasp of religious history within the West; they also appear completely uninterested in trying to understand non-Western religious traditions- an understanding that might give them better insight into the actual nature of the human religious instinct, which is less about pre-scientific explanations for natural events than about an ethical orientation toward the world. Perhaps no religion so much as Confucianism can teach us something in this regard. In the words of Prothero:

Unlike Christianity which drives a wedge between the sacred and the secular- the eternal “City of God” and the temporal “City of Man”- Confucianism glories in creatively confusing the two. There is a transcendent dimension in Confucianism. Confucians just locate it in the world rather than above or beyond it.

For all these reasons, Confucianism can be considered religious humanism. Confucians share with secular humanists a single-minded focus on this world of rag and bone. They, too, are far more interested in how to live than in plumbing the depths of Ultimate Reality. But whereas secular humanists insist on emptying the rest of the world of the sacred, Confucians insist on infusing the world with sacred import- on seeing Heaven in humanity, on investing human beings with incalculable value, on hallowing the everyday. In Confucianism, the secular is sacred. (GN1 108)

Religions, then, are in large part philosophical orientations toward the world. They define the world, or better the inadequacies of the world, in a certain way and prescribe a set of actions in light of those inadequacies. In this way they might be thought of as embodied moral philosophies, and it is my second thinker, the philosopher Alain de Botton, who, in his Religion for Atheists: A Non-believer’s Guide to the Uses of Religion, tries to show just how much even the most secular among us can learn from religion.

What the secular can learn from religion, according to de Botton, has little to do with matters of belief, which in the light of modern science he too finds untenable. Yet the secular can learn quite a lot from religion in terms of institutions and practices. Religion often fosters a deep sense of community founded on a common view of the world. The religious often see education as less about the imparting of knowledge than about the formation of character. Because religion arose in part from the human need to live peacefully together in society, it tends to steer people away from violence and retribution and toward kindness and reconciliation. Much of religion offers us tenderness in times of tragedy and holds out the promise of moral improvement. It provides the basis for some of our longest-lived institutions, and embeds art and architecture in a philosophy of life.


Often a religion grants its followers a sense of cosmic perspective that might otherwise be missing from human life. It reminds us of our own smallness in light of all that is, and encourages an attitude of humility in the face of the depths of all we do not yet, and perhaps never will, know. It provides human beings with a sense of the transcendent- something emotionally and intellectually beyond our reach, which stretches the human heart and mind to the very edge of what they can be and understand.

So if religion, then, is at root a way for people to ethically orient themselves toward others and the world, even the most avowed atheist who otherwise believes that life can have meaning can learn something from it- though one must hope that monstrous absurdities such as the French Revolution’s Cult of the Supreme Being, or the “miraculously” never-decomposing body of V.I. Lenin, are never repeated.

One problem, however, remains before we all rush out to the local Church, Temple, or Mosque, or start our own version of the Church of Scientology. It is that religion is untrue- or better: while much of the natural ethics found at the bottom of most religions is something to which we secularists can assent, because it was forged for the problems of human beings living together, and we still live together in societies, the ideas of the natural world and of the divinities that supposedly inhabit and guide it are patently false from a scientific point of view. There are two possible solutions I can imagine to this dilemma, both of which play off the ideas of my third figure, the neuroscientist and fiction author David Eagleman.

Like others, what frustrates Eagleman about the current God debate is the absurd position of certainty and the narrowness of perspective taken by both the religious and the anti-religious. Fundamentalist Christians insist that a book written before we knew what the Universe was, how life develops, or what the brain does somehow tells us all we really need to know- indeed, that it poses all the questions we should ask. The New Atheists are no better when it comes to such a stance of false certainty, and seem to base many of their arguments on the belief that we are on the verge of knowing all we can know, and that the Universe(s) is already figured out except for a few loose ends.

In my own example, I think you can see this in Krauss, who not only thinks we have now almost fully figured out the origins of the Universe and its fate, but also thinks he has in hand its meaning- which is that it doesn’t have one- and, to drive this home, avoids looking at the hundreds of billions of years between now and the Universe’s predicted end.

Take this almost adolescent quote from Dawkins’ afterword to A Universe From Nothing:

“Finally, and inevitably, the universe will further flatten into a nothing that mirrors its beginning”; what will be left of our current Universe will be “Nothing at all. Not even atoms. Nothing.” Dawkins then adds with his characteristic flourish: “If you think that’s bleak and cheerless, too bad. Reality doesn’t owe us comfort.” (UFN 188)

And here’s Krauss himself:

The structures we can see, like stars and galaxies, were all created in quantum fluctuations from nothing. And the average total Newtonian gravitational energy of each object in our universe is equal to nothing. Enjoy the thought while you can, if this is true, we live in the worst of all possible universes one can live in, at least as far as the future of life is concerned. (UFN 105)


Or, perhaps more prosaically, the late Hitchens:


For those of us who find it remarkable that we live in a universe of Something, just wait. Nothingness is headed on a collision course right towards us! (UFN 119)


Really? Even if the fate of the Universe is exactly as Krauss thinks it will be, and that’s a Big, Big If, what needs to be emphasized here, a point that neither Krauss nor Dawkins seems prone to highlight, is that this death of the Universe is trillions of years in the future, and that there will be plenty of time for living- according to Dimitar Sasselov, hundreds of billions of years- for life to evolve throughout the Universe, and thus for an unimaginable diversity of experiences to be had. Our Universe, at least at the stage we have entered, and for many billions of years longer than life on earth has existed, is actually a wonderful place for creatures such as ourselves. If one adds to that the idea that there may not be just one Universe but many, many universes, with at least some perhaps having the right conditions for life to develop and last for hundreds of billions of years as well, you get a picture of diversity and profusion that puts even the Hindu Upanishads to shame. That’s not a bleak picture. It is one of the most exhilarating pictures I can think of, and perhaps in some sense even a religious one.


And the fact of the matter is we have no idea. We have no idea if the theory put forward by Krauss regarding the future of the Universe is ultimately the right one. We have no idea if life plays a large, small, or no role in the ultimate fate of the Universe. We have no idea if there is any other life in the Universe that resembles ourselves in terms of intelligence, or what scale- planet, galaxy, even larger- a civilization such as our own can achieve if it survives for a sufficiently long time, or how common such civilizations might become as the time frame in which conditions allow life and civilizations to appear and develop grows over the next hundred billion years or so. See the glass as empty, half empty, or potentially overflowing: it’s all just guesswork, even if Krauss’ physics makes it extremely sophisticated and interesting guesswork. To think otherwise is to assume the kind of block-headed certainty Krauss reserves for religious fanatics.

David Eagleman wants to avoid the false certainty found in many of the religious and the New Atheists by adopting what he calls Possibilism. The major idea behind Possibilism, although Eagleman himself doesn’t make this connection, is the same one found in Armstrong’s pre-modern religious thinkers, especially the so-called mystics: the conviction that we are in a position of profound ignorance.

Science has taken us very, very far in terms of our knowledge of the world and will no doubt take us much, much farther, but we are still probably at a point where we know unbelievably less than will eventually become known, and perhaps there are limits to our ability to know ahead of us beyond which we will never be able to pass. We just don’t know. In a way similar to how the mystics tried to walk a path to the very limits of human thought, beyond which we cannot pass, Possibilism encourages us to step to the very edge of our scientific knowledge and imagine what lies beyond our grasp.

I can imagine at least two potential futures for Possibilism, either of which I would find very encouraging. If traditional religion is to regain its attraction for the educated, it will probably have to develop and adopt a speculative theology that looks a lot like Eagleman’s Possibilism.

A speculative theology would no longer seek to find support for its religious convictions in selective discoveries of science, but would place its core ideas in dialogue with whatever version of the world science comes up with. This need not mean that any religion abandon its core ethical beliefs or practices, both of which were created for human beings at a moral scale that reflects the level at which an individual life is lived. The Golden Rule needs no reference to the Universe as a whole, nor do the rituals surrounding the life cycle of the individual- birth, marriage, and death- at which religion so excels.


What a speculative theology would entail is that religious thinkers would be free to attempt to understand their own tradition in light of modern science and historical studies without any attempt to use either science or history to buttress their own convictions.

Speculative theology would ask how its concepts can continue to be understood in light of the world presented by modern science, and would aim at being a dynamic, creative, and continuously changing theology that better reflects the dynamic nature of modern knowledge, just as theology in the past reflected a view of knowledge that was static and seemingly eternal. It might more easily be able to tackle contemporary social concerns such as global warming, inequality, and technological change by holding the exact same assumptions as science about the nature of the physical world while remaining committed to interpreting science’s findings in light of its own ethical traditions and perspectives.

Something like this speculative philosophy already existed during the Islamic Golden Age, a period lasting from the 700s through the 1200s in which Islamic scientists such as Ibn Sina (Avicenna) managed to combine the insights of ancient Greek, Indian, and even Chinese thinking to create whole new fields of inquiry and ways of knowing- tools from algebra to trigonometry to empiricism that would later be built upon by Europeans to launch the scientific revolution. The very existence of this golden age of learning in the Muslim world exposes Sam Harris’ anti-Islamic comment for what it is- ignorant bigotry.

Still, a contemporary speculative theology would have to be more expansive and self-aware than anything found among religious thinkers before. It would need to be an open system of knowledge, cognizant of its own limitations, and would examine and take into itself ideas from other religious traditions, even dead ones such as paganism, that add depth and dimension to its core ethical stance. It would be a vehicle through which religious persons could enter into dialogue and debate with persons from other perspectives, both religious and non-religious, who are equally concerned with the human future- humanists and transhumanists, singularitarians, and progressives, along with those who, with some justification, have deep anxieties regarding the future: traditional conservatives, bio-conservatives, neo-luddites, and persons from every other spiritual tradition in the world. It would mean an end to the dangerous anxiety, held by a great number of the world’s religions, that each alone needs to be the only truth in order to thrive.

Yet however beneficial such a speculative theology would be, I find its development out of any current religion highly unlikely. If traditional religions do not adopt something like this stance toward knowledge, I find it increasingly likely that persons alienated from them, as a consequence of the way their beliefs are contradicted by science, will invent something similar for themselves. This is because, even if the Universe as presented by the New Atheists is devoid of meaning, the human need for meaning- to discuss it, argue over it, and try to live it- is unlikely ever to stop among human beings. The quest seems written into our very core. Hopefully, such dialogues will avoid dogma and be self-critical and self-conscious in a way religions and secular ideologies never were before. I see such discussions as more akin to works of art than to religious treatises, though I hope they provide the basis for community and ethics in the same way religion has in the past.

And we already have nascent forms of such communities: the environmentalist cause is one, as is the transhumanist movement. So is something like The Long Now Foundation, which seeks to bring attention to the issues of the long-term human future and has even adopted the religious practice of pilgrimage to its 10,000-year clock- a journey that is meant to inspire deeper time horizons in our present-obsessed culture.

Even should such communities and movements become the primary way a certain cohort of scientifically inclined persons seeks to express the human need for meaning, there is every reason for those so inclined to seek out and foster relationships with the more traditionally religious, who are, and will likely always be, the vast majority of human beings. Contrary to the predictions of New Atheists such as Daniel Dennett, the world is becoming more, not less, religious, and Christianity is not a dying faith but one changing location, moving from its locus in Europe and North America to Latin America, Africa, and even Asia. Islam, even more so than Christianity, is likely to be a potent force in world affairs for some time to come, not to mention Hinduism, with India destined to become the world’s most populous country by mid-century. We will need all of their cooperation to address pressing global problems from climate change, to biodiversity loss, to global pandemics and inequality. And we should not think persons who profess such faiths ignorant for holding fast to beliefs that largely bypass the world brought to us by science. Next to science, religion is among the most amazing things to have emerged from humanity, and those who seek meaning from it are entitled to our respect as fellow human beings struggling to figure out not the what? but the why? of human life.

Whether inside or outside of traditional religion, the development of scientifically grounded meaning discourses and communities, and the projects that would hopefully grow from them, would signal that wrestling with the “big questions” again offered a path to human transcendence- a path at long last no longer in conflict with the amazing Universe that has been shown to us by modern science. Finding such ways of living would mean that we truly had found our way through the great God debate.