Over the spring the Foundational Questions Institute (FQXi) sponsored an essay contest whose topic should be dear to this audience’s heart: How Should Humanity Steer the Future? I thought I’d share some of the essays I found most interesting, but there are lots, lots more to check out if you’re into thinking about the future or physics, which I am guessing you might be.
If there was one theme running across the 140 or so essays entered in the contest, it was that the 21st century is make-it-or-break-it for humanity, so we need to get our act together, and fast. If you want a metaphor for this sentiment, you couldn’t do much better than Nietzsche’s image of humanity as an individual walking on a “rope over an abyss”.
Hitterdale’s idea is that for most of human history the qualitative aspects of human experience have remained pretty much the same, but that is about to change. What we are facing, according to Hitterdale, is either the extinction of our species or the realization of our wildest perennial dreams: biological superlongevity and machine intelligence that seem to imply the end of drudgery and scarcity. As he points out, some very heavy-hitting thinkers believe we live in make-or-break times:
John Leslie judged the probability of human extinction during the next five centuries as perhaps around thirty per cent at least. Martin Rees in 2003 stated, “I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century.” Less than ten years later Rees added a comment: “I have been surprised by how many of my colleagues thought a catastrophe was even more likely than I did, and so considered me an optimist.”
In a nutshell, Hitterdale’s solution is for us to concentrate more on preventing negative outcomes than on achieving positive ones in this century. This is because even positive developments like human superlongevity and greater-than-human AI could lead to negative outcomes if we don’t sort out our problems or establish controls first.
This was probably my favorite essay overall because it touched on an issue dear to my heart: how will we preserve the past in light of the huge uncertainties of the future? Niemeyer makes the case that we need to establish a repository of human knowledge in the event we suffer some general disaster, and sketches how we might do this.
By one of those strange instances of serendipity, while thinking about Niemeyer’s ideas and browsing the science section of my local bookstore, I came across a new book by Lewis Dartnell, The Knowledge: How to Rebuild Our World from Scratch, which covers the essential technologies human beings would need to revive civilization after a collapse. Or maybe I shouldn’t consider it so strange. Right next to The Knowledge was another new book, The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day, by David Hand, but I digress.
The digitization of knowledge and its dependence on the whole technological apparatus of society actually make us more vulnerable to the complete loss of information, both social and personal, and therefore demand that we back up our knowledge. Once, only something like a flood or a fire could have destroyed our lifetime visual records the way we used to store them, in photo albums; now all many of us would have to do is lose or break our phone. As Niemeyer says:
Currently, no widespread efforts are being made to protect digital resources against global disasters and to establish the means and procedures for extracting safeguarded digital information without an existing technological infrastructure. Facilities like, for instance, the Barbarastollen underground archive for the preservation of Germany’s cultural heritage (or other national and international high-security archives) operate on the basis of microfilm stored at constant temperature and low humidity. New, digital information will most likely never exist in printed form and thus cannot be archived with these techniques even in principle. The repository must therefore not only be robust against man-made or natural disasters, it must also provide the means for accessing and copying digital data without computers, data connections, or even electricity.
Niemeyer imagines the creation of such a knowledge repository as a unifying project for humankind:
Ultimately, the protection and support of the repository may become one of humanity’s most unifying goals. After all, our collective memory of all things discovered or created by mankind, of our stories, songs and ideas, have a great part in defining what it means to be human. We must begin to protect this heritage and guarantee that future generations have access to the information they need to steer the future with open eyes.
If Niemeyer is trying to goad us into preparing should the worst occur, Robert de Neufville, like Hitterdale, is working toward making sure these nightmares, especially the self-inflicted ones, don’t come true in the first place. He does this as a journalist and writer and as an associate of the Global Catastrophic Risk Institute.
As de Neufville points out, and as I myself have argued before, the silence of the universe gives us reason to be pessimistic about the long-term survivability of technological civilization. Yet the difficulties that stand in the way of minimizing global catastrophic risks, things like developing an environmentally sustainable modern economy, protecting ourselves against global pandemics or meteor strikes of a scale that might bring civilization to its knees, or eliminating the threat of nuclear war, are more challenges of politics than technology. He writes:
But the greatest challenges may be political. Overcoming the technical challenges may be easy in comparison to using our collective power as a species wisely. If humanity were a single person with all the knowledge and abilities of the entire human race, avoiding nuclear war, and environmental catastrophe would be relatively easy. But in fact we are billions of people with different experiences, different interests, and different visions for the future.
In a sense, the future is a collective action problem. Our species’ prospects are effectively what economists call a “common good”. Every person has a stake in our future. But no one person or country has the primary responsibility for the well-being of the human race. Most do not get much personal benefit from sacrificing to lower the risk of extinction. And all else being equal each would prefer that others bear the cost of action. Many powerful people and institutions in particular have a strong interest in keeping their investments from being stranded by social change. As Jason Matheny has said, “extinction risks are market failures”.
His essay makes an excellent case that it is time we mature as a species and live up to our global responsibilities, the most important of which is ensuring our continued existence.
Here Cristinel Stoica makes a great case for tolerance, intellectual humility and pluralism, a sentiment perhaps often expressed but rarely with such grace and passion.
As he writes:
The future is unpredictable and open, and we can make it better, for future us and for our children. We want them to live in peace and happiness. They can’t, if we want them to continue our fights and wars against others that are different, or to pay them back bills we inherited from our ancestors. The legacy we leave them should be a healthy planet, good relations with others, access to education, freedom, a healthy and critical way of thinking. We have to learn to be free, and to allow others to be free, because this is the only way our children will be happy and free. Then, they will be able to focus on any problems the future may reserve them.
In his essay Benjamin Pope tries to peer into the long-term human future by looking at the types of institutions that survive across centuries and even millennia: universities, “churches”, economic systems such as capitalism, and potentially multi-millennial, species-wide projects, namely space colonization.
I liked Pope’s essay a lot, but there are parts of it I disagreed with. For one, I wish he had included cities. These are the longest-lived of human institutions, and unlike Pope’s other choices they are political, yet they manage to far outlive other political forms, namely states and empires. Rome far outlived the Roman Empire, and my guess is that many American cities, as long as they are not underwater, will outlive the United States.
Pope’s read on religion might be music to the ears of some at the IEET:
Even the very far future will have a history, and this future history may have strong, path-dependent consequences. Once we are at the threshold of a post-human society the pace of change is expected to slow down only in the event of collapse, and there is a danger that any locked-in system not able to adapt appropriately will prevent a full spectrum of human flourishing that might otherwise occur.
Pope seems to lean toward a negative take on religion’s ability to promote “a full spectrum of human flourishing”; religion, “as a worst-case scenario, may lock out humanity from futures in which peace and freedom will be more achievable.”
To the surprise of many in the secular West, and that includes an increasingly secular United States, the story of religion will very much be the story of humanity over the next couple of centuries, and that includes especially the religion that is dying in the West today, Christianity. I doubt, however, that religion has either the will or the capacity to stop or even significantly slow technological development, though it might change our understanding of it. It is also the case that, at the end of the day, religion only thrives to the extent it promotes human flourishing and survival, though religious fanatics might lead us to think otherwise. I am also not the only one to doubt Pope’s belief that “once we are at the threshold of a post-human society the pace of change is expected to slow down only in the event of collapse”.
Still, I greatly enjoyed Pope’s essay, and it was certainly thought-provoking.
If you’re looking to break out of your dystopian gloom for a while, and I myself keep finding reasons to be gloomy, then you couldn’t do much better than to take a peek at Georgina Parry’s fictionalized glimpse of a possible utopian future. Like a good parent, Parry encourages our confidence, but not our hubris:
The image mankind call ‘the present’ has been written in the light but the material future has not been built. Now it is the mission of people like Grace, and the human species, to build a future. Success will be measured by the contentment, health, altruism, high culture, and creativity of its people. As a species, Homo sapiens sapiens are hackers of nature’s solutions presented by the tree of life, that has evolved over millions of years.
Schlafly’s essay literally made my jaw drop, it was so morally absurd and even obscene.
Consider a mundane decision to walk along the top of a cliff. Conventional advice would be to be safe by staying away from the edge. But as Tegmark explains, that safety is only an illusion. What you perceive as a decision to stay safe is really the creation of a clone who jumps off the cliff. You may think that you are safe, but you are really jumping to your death in an alternate universe.
Armed with this knowledge, there is no reason to be safe. If you decide to jump off the cliff, then you really create a clone of yourself who stays on top of the cliff. Both scenarios are equally real, no matter what you decide. Your clone is indistinguishable from yourself, and will have the same feelings, except that one lives and the other dies. The surviving one can make more clones of himself just by making more decisions.
Schlafly rams the point home that under current multiverse views in physics nothing you do really amounts to a choice; we are stuck on an utterly deterministic wave function, on whose branches we play hero and villain, and there is no space for either praise or guilt. You can always act the coward, sure that somewhere “out there” another version of “you” does the right thing. Saving humanity from itself in the ways proposed by Hitterdale and de Neufville, preparing for the worst as in Niemeyer and Pope, or trying to build a better future as in Parry and Stoica makes no sense here. Like poor Schrödinger’s cat, on some branches we end up surviving and on some we destroy ourselves, and it is not we who are in charge of which branch we are on.
The thought made me cringe, but then I realized Schlafly must be playing a Swiftian game. Applying quantum theory to the moral and political worlds we inhabit leads to absurdity. This might or might not call into question the fundamental reality of the multiverse or the universal wave function, but it should not lead us to doubt or jettison our ideas regarding our own responsibility for the lives we live, which boil down to the decisions we have made.
Those of us in the West probably can’t help seeing the future of technology as nearly synonymous with the future of our own civilization. A civilization, boiled down to its essence, amounts to a set of questions a particular group of human beings keeps asking, and its answers to those questions. The questions in the West are things like: What is the right balance between social order and individual freedom? What is the relationship between the external and internal (mental/spiritual) worlds, including the question of the meaning of Truth? How might the most fragile thing in existence, and for us the most precious, the individual, survive across time? What is the relationship between the man-made world, including culture, vis-à-vis nature, and which is most important to the identity and authenticity of the individual?
The progress of science and technology intersects with all of these questions, but what we often forget is that we have sown the seeds of science and technology elsewhere, in environments that can be very different. Their application and understanding will differ accordingly, based as they will be on a whole different set of questions and answers encountered by a distinct civilization.
Leo KoGuan’s essay approaches the future of science and technology from the perspective of Chinese civilization. Frankly, I did not really understand his essay, which seemed to me a combination of singularitarianism and Chinese philosophy that I just couldn’t wrap my head around. What am I to make of this from the founder and chairman of a 5.1-billion-dollar computer company:
Using the KQID time-engine, earthlings will literally become Tianming Ren with God-like power to create and distribute objects of desire at will. Unchained, we are free at last!
I took little from it other than the fact that anyone interested in the future of transhumanism absolutely needs to be paying attention to what is happening, and to what and how people are thinking, in China.
Lastly, I myself had an essay in the contest. It was about how we are facing incredible hurdles in the near future, and how one of the ways we might clear them is by recovering the ability to imagine what an ideal society, a Utopia, might look like. Go figure.