A Cure for Our Deflated Sense of the Future

Progressland 1964 World's Fair

There’s a condition I’ve noted among former hard-core science-fiction fans that, for want of a better word, I’ll call future-deflation. The condition consists of an air of disappointment with and detachment from the present, which emerges because the future one dreamed of in one’s youth has failed to materialize. It was a dream of what the 21st century would entail, fostered by science-fiction novels, films and television shows- a dream that has not arrived, and seemingly never will, at least within our lifetimes. I think I have a cure for it, or at least a strong preventative.

The strange thing, perhaps, is that anyone would be disappointed that a fictional world has failed to become real in the first place. No one, I hope, feels the present is constricted and dull because there aren’t any flying dragons in it to slay. The problem, then, might lie in the way science-fiction is understood in the minds of some avid fans- not as fiction, but as plausible future history, or even a sort of “preview” and promise of all the cool things that await.

Future-deflation is a kind of dulling hangover from a prior bout of future-inflation, when expectations got way out ahead of themselves. If mostly boys, now become men, feel let down by inflated expectations driven by what proved to be the Venetian sunset, rather than the beginning, of the space race- orbital cities, bases on the moon and Mars, and a hundred other things- their experience is a little like that of girls, fed on a diet of romance, who as adults have tasted the bitter reality of love. Following the same naming rule, I suppose I’d call that romance-deflation- cue the Viagra jokes.

Okay, so that’s the condition- how might it be cured? The first step to recovery is admitting you have a problem and identifying its source. Perhaps the main culprit behind future-deflation is the crack cocaine of CGI (and I really do mean CGI as in computer-generated graphics, which I’ve written about before). Whereas late 20th century novels, movies, and television shows served as gateway drugs to our addiction to digital versions of the future, CGI and the technologies that will follow it are the true rush, allowing us to experience versions of the future that just might be more real than reality itself.

There’s a phenomenon discovered by social psychologists studying motivation: it’s a mistake to visualize your idealized outcomes too clearly, because doing so actually diminishes your motivation to achieve them. You get many of the same emotional rewards without any of the costs, and on that account never get down to doing the real thing. Our ability to create not just compelling but mind-blowing visualizations of technologies that are nowhere on the horizon has become so good, and will only get better, that it may be exacerbating disappointment with the present state of technology and the pace of technological change- increasing the sense of “where’s my jet pack?”

There’s a theory I’ve heard discussed by Brian Eno that the reason we haven’t been visited by any space aliens is that civilizations at a certain point fall into a state of masturbatory self-satisfaction. They stop actually doing stuff because imagining things becomes so much better and easier than the difficult and much less satisfying achievements experienced in reality.

The cure for future-deflation is really just adulthood. We need to realize that the things we would like to do and see done are hard and expensive and take long commitments over time- often far past our own lifetimes- to achieve. We need to get off our Oculus Rift-weighted asses and actually do stuff. Elon Musk with his SpaceX seems to realize this, but with a ticket to Mars projected to cost 500 thousand dollars, one can legitimately wonder whether he’ll end up creating an escape hatch from earth for the very rich that the rest of us will be stuck gawking at on our big-screen TVs.

And therein lies the heart of the problem, for what technologies are available in the future actually matters less for the majority of us than the largely non-technological question of how such a future is politically and economically organized- which will determine how many of us have access to those technologies.

The question we should be asking when thinking about what we should be doing now to shape the future is a simple and very human one- “what kind of world do I hope my grandchildren live in?” Part of the answer to this question is going to involve technology and scientific advancement, but not as much of it as we might think. Other types of questions, dealing with issues such as the level of equality, peace and security, a livable environment, and the amount of freedom and purpose, are both more important and more capable of being influenced by the average person. These are things we can pursue even if we have no connection to the communities of science and technology. Even if technological progress completely stalled, we could achieve many of them with the technological kit we already have.

In a way, because it emerged in tandem with the technological progress set off by the scientific and industrial revolutions, science-fiction seemed to own the future, and those who practiced the art largely did well by us in giving it shape- at least in our minds. But in reality the future was just on loan, and it might serve us best to take back a large portion of it and encourage everyone who wants to have more say in defining it. Or better, here’s my advice: for those techno-progressives not involved directly in the development of science and technology, focus more of your efforts on the progressive side of the scale. That way, if even part of the promised future arrives, you won’t be confined to just watching it while wearing your Oculus Rift or stuck in your seat at the IMAX.

Why archaeologists make better futurists than science-fiction writers

Astrolabe vs iPhone

Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, tried to use the regularity, or lack of it, in the night sky as a projection of the future and an omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge is biased towards the present and the past. Survival means seeing far enough ahead to avoid dangers, so an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.

It’s weird how many of the ancient techniques of divination have a probabilistic element to them, as if the best tool for discerning the future were a magic 8 ball- though perhaps it makes a great deal of sense. Probability and chance come into play wherever our knowledge comes up against limits, and these limits can consist merely of knowledge we are not privy to, like the game of “He loves me, he loves me not” little girls play with flower petals. As Henri Poincaré said, “Chance is only the measure of our ignorance.”

Poincaré was coming at the question with the assumption that the future is already determined- the idea of many physicists, who think our knowledge bias towards the present and past is, like the game of “he loves me not”, all in our heads; that the future already exists in the same sense as the past and present, and consists of facts we can only dimly see. It shouldn’t surprise us that they say this, physics being our most advanced form of divination, and just one among many modern forms, from meteorology to finance to genomics. Our methods of predicting the future are more sophisticated and effective than those of the past, but they still only barely see through the fog in front of us, which doesn’t stop us from being overconfident in our predictive prowess. We even have an entire profession, futurists, who claim- like old-school fortune tellers- to have a knowledge of the future no one can truly possess.

There’s another group that claims the ability to divine at least some rough features of the future- science-fiction writers. The idea of writing about a future substantially different from the past really didn’t emerge until the 19th century, when technology began not only making the present substantially different from the past but promising to do so out into an infinite future. Geniuses like Jules Verne and H.G. Wells were able to look at the industrial revolution changing the world around them and project it forward in time with sometimes extraordinary prescience.

There is a good case to be made, and I’ve tried to make it before, that science-fiction is no longer very good at this prescience. The shape of the future is now occluded, and we’ve known the reasons for this for quite some time. Karl Popper pointed out as far back as the 1930s that the future is essentially unknowable, and part of this unknowability is that the course of scientific and technological advancement cannot be known in advance. Too tight a fit with wacky predictions of the future of science and technology is the hallmark of the worst and campiest science-fiction.

Another good argument can be made that technologically dependent science-fiction isn’t so much predicting the future as inspiring it- that the children raised on Star Trek’s communicator end up inventing the cell phone. Yet technological horizons can just as easily be atrophied as energized by science-fiction. This is the point Robinson Meyer makes in his blog post “Google Wants to Make ‘Science Fiction’ a Reality—and That’s Limiting Their Imagination”.

Meyer looks at the infamous Google X, a supposedly risky research arm of Google, one of whose criteria for embarking on a project is that it must utilize a radical solution that has at least “a component that resembles science fiction.”

The problem Meyer finds with this is:

 ….“science fiction” provides but a tiny porthole onto the vast strangeness of the future. When we imagine a “science fiction”-like future, I think we tend to picture completed worlds, flying cars, the shiny, floating towers of midcentury dreams.

We tend, in other words, to imagine future technological systems as readymade, holistic products that people will choose to adopt, rather than as the assembled work of countless different actors, which they’ve always really been.

Meyer mentions a recent talk by a futurist I hadn’t heard of before, Scott Smith, who calls these kinds of clean futuristic visions of total convenience- antiseptic worlds surrounded by smart glass, where all the world’s knowledge is at our fingertips and everyone is productive and happy- “flat-pack futures”: the world of tomorrow ready to be brought home and assembled like a piece of furniture from the IKEA store.

I’ve always had trouble with these kinds of simplistic futures, which are really just corporate advertising devices, not real visions of a complex future that will inevitably have much ugliness in it, including the ugliness that emerges from the very technology that is supposed to make our lives a paradise. The major technological players are not even offering alternative versions of the future, just the same bland future with different corporate labels, as seen here, here, and here.

It’s not that these visions of the future are always totally wrong, or even just unattractive, so much as that people in the present- meaning us- have no real agency over which elements of these visions they want and which they don’t, with the exception of exercising consumer choice, the very thing these flat-pack visions of the future are trying to get us to buy.

Part of the reason Smith sees a mismatch between these anodyne versions of the future, which we’ve had since at least the early 20th century, and what actually happens is that these corporate futures lack diversity and context, not to mention conflict. That shouldn’t really surprise us- they are advertisements, after all, geared to high-end consumers- not actual predictions of what the future will in fact look like for poor people or outcasts or non-conformists of one sort or another. One shouldn’t expect advertisers to show the bad that might happen should a technology fail, or be hacked, or be used for dark purposes.

If you watch enough of them, their more disturbing assumption is that by throwing a net of information over everything we will finally have control, and all the world will at last fall under the warm blanket of our comfort and convenience. Or as Alexis Madrigal put it in a thought-provoking recent piece, also at The Atlantic:

We’re creating a world that seamlessly, effortlessly shapes itself to human desire. It’s no longer cutting through a mountain to prove we dominate nature; now, it’s satisfying each impulse in the physical world with the ease and speed of digital tools. The shortest way to explain what Uber means: hit a button and something happens in the world (that makes life easier for you).

To return to Meyer, his point was that by looking almost exclusively to science-fiction to see what the future “should” look like, designers, technologists, and, by implication, some futurists have reduced themselves to a very limited palette. How might this palette be expanded? I would throw my bet behind turning to archaeologists, anthropologists, and certain kinds of historians.

As Steve Jobs understood, in some ways the design of a technology is as important as the technology itself. But Jobs sought an almost formless design- the clean, antiseptic, modernist zen you feel when you walk into an Apple Store. As someone once said (the attribution escapes me), an Apple Store is designed like a temple. But it’s a temple representative of only one very particular form of religion.

All the flat-pack versions of the future I linked to earlier have one prominent feature in common: they are obsessed with glass. Everything in the world, it seems, will be a see-through touchscreen bubbling with the most relevant information for the individual at that moment- the weather, traffic alerts, box scores, your child’s homework assignment. I have a suspicion that this glass fetish is a sort of Freudian slip on the part of tech companies, for the thing about glass is that you can see both ways through it, and what these companies most want is the transparency of you- to be able to peer into the minutiae of your life, so as to be able to sell you stuff.

Technology designers have largely swallowed the modernist glass-covered Kool-Aid, but one way to purge might be to turn to archaeology. For example, I’ve never found a tool more beautiful than the astrolabe, though it might be difficult to carry a smartphone in that form in your pocket. Instead of all that glass, why not a Japanese paper house where all our supposedly important information is to constantly appear? Whiteboards are better for writing anyway. Ransacking the past for the forms in which we could embed our technologies might be one way to escape the current design conformity.

There are reasons to hope that technology itself will help us break out of our futuristic design conformity. Ubiquitous 3D printing may allow us to design and create our own household art, tools, and customized technology devices. Websites my girls and I often look to for artistic inspiration and projects, such as the Met’s 82nd & Fifth, may- as the images get better, eventually becoming 3D and even providing CAD specifications- allow us access to a great deal of the world’s objects from the past, providing design templates for ways to embed our technology that reflect our distinct personal and cultural values, values that might have nothing to do with those whipped up by Silicon Valley.

Of course, we’ve been ransacking the past for ideas about how to live in the present and future since modernity began. Augustus Pugin’s 1836 book Contrasts, with its beautiful, if slightly dishonest, drawings, put the 19th century’s bland, utilitarian architecture up against the exquisite beauty of buildings from the 15th century. How cool would it be to have a book contrasting technology today with that of the past, like Pugin’s?

The problem with Pugin’s and others’ efforts in this regard was that they didn’t merely look to the past for inspiration or forms but sought to replace the present and its assumptions with the revival of whole eras, assuming that a kind of cultural and historical rewind button or time machine was possible, when in reality the past was forever gone.

Looking to the past, as opposed to trying to raise it from the dead, has other advantages when it comes to our ability to peer into and shape the future. It gives us different ways of seeing how our technology might be used as a means towards the things that have always mattered most to human beings- our relationships, art, spirituality, curiosity. Take, for instance, the way social media such as Skype and Facebook are changing how we relate to death. On the one hand, social media seems to break down the way in which we have traditionally dealt with death- as a community sharing a common space- while on the other, it seems to revive a kind of analog to practices regarding death that have long passed from the scene, practices that archaeologists and social historians know well, in which the dead were brought into the home and tended to for an extended period of time. Now, though, instead of actual bodies, we see the online legacy of the dead- websites, Facebook pages and the like- being preserved and attended to by loved ones. A good question to ask oneself when thinking about how future technology will be used might not be what new thing it will make possible, but what old thing no longer done it might be used to revive.

In other words, the design of technology is the reflection of a worldview, but it is only one very limited worldview competing with innumerable others. Without fail, for good and ill, technology will be hacked and used as a means by people whose worldviews have little if any relationship to those of technologists. What would be best is if the design itself- not the electronic innards, but the shell and the organization of its use- were controlled by non-technologists in the first place.

The best of science-fiction actually understands this. It’s not the gadgets that are compelling, but the alternative world- how it fits together, where it is going, and what it says about our own world. That’s what books like Asimov’s Foundation trilogy did, or Frank Herbert’s Dune. Neither was a prediction of the future so much as a fictional archaeology and history. It’s not the humans who are interesting in a series like Star Trek, but the non-humans, who each live in their own very distinct version of a life-world.

Those science-fiction writers not interested in creating alternative worlds, or in taking a completely imaginary shot in the dark at what life might be like many hundreds of years beyond our own, might benefit from reflecting on what they are doing and why, and from leaning heavily on knowledge of the past.

To the extent that science-fiction tells us anything about the future, it is through a process of focused extrapolation. As the sci-fi author Paolo Bacigalupi has said (and as I’ve quoted before), the role of science-fiction is to take some set of elements of the present and extrapolate them to see where they lead. Or, as William Gibson put it, “The future is already here- it’s just not evenly distributed.” In his novel The Windup Girl, Bacigalupi extrapolated cutthroat corporate competition, the use of mercenaries, genetic engineering, the way biotechnology and agribusiness seem to be trying to take ownership over food, and our continued failure to tackle carbon emissions, and came up with a chilling dystopian world. Amazingly, Bacigalupi, an American, managed to embed this story in a Buddhist cultural and Thai historical context.

Although I have not had the opportunity to read it yet- it is now at the top of my end-of-summer reading list- The History of the Future in 100 Objects, by Adrian Hon, a former neuroscientist and founder of the gaming company Six to Start, seems to grasp precisely this. As Hon explained in an excellent talk over at the Long Now Foundation (I believe I’ve listened to it in its entirety three times!), he was inspired by Neil MacGregor’s A History of the World in 100 Objects (also on my reading list). What Hon realized was that the near future, at least, would still have many of the things we are familiar with- things like religion or fashion- and so what he did was extrapolate his understanding of technology trends, gained from both his neuroscience and gaming backgrounds, onto those things.

Science-fiction writers, and their kissing-cousins the futurists, need to remind themselves when writing about the next few centuries that, as long as there are human beings (and otherwise what are they writing about?), there will still be much about us that we’ve had since time immemorial. Little doubt, there will still be death, but also religion- and not just any religion, but the ones we have now: Christianity and Islam, Hinduism and Buddhism, etc. (Can anyone think of a science-fiction story where they still celebrate Christmas?) There will still be poverty, crime, and war. There will also be birth and laughter and love. Culture will still matter, and sub-culture just as much, as will geography. For at least the next few centuries, let us hope, we will still be human, so perhaps I shouldn’t have used archaeologists in my title, but anthropologists or even psychologists, for those who best understand the near future are those who best understand the mind and heart of mankind rather than the current, and inevitably unpredictable, trajectory of our science and technology.

 

The First Machine War and the Lessons of Mortality

Lincoln Motor Co., in Detroit, Michigan, ca. 1918 U.S. Army Signal Corps Library of Congress

I just finished a thrilling little book about the first machine war. The author writes of a war set off by a terrorist attack, in which the very speed with which machines could be put into action, and the near light speed of telecommunications whipping up public opinion to do something now, drive countries into a world war.

In his vision whole new theaters of war, amounting to fourth and fifth dimensions, have been invented. Amid a storm of steel, huge hulking machines roam across the landscape and literally shred the human beings in their path to pieces. Low-flying avions fill the sky, taking out individual targets or helping calibrate precision attacks from incredible distances. Wireless communications connect soldiers and machines together in a kind of world-net.

But the most frightening aspect of the new war is its weapons based on plant biology. Such weapons, if they do not immediately scar the faces and infect the bodies of those targeted, lodge themselves in the soil like spores, waiting to be released to kill and maim when conditions are ripe- the ultimate terrorist weapon.

Amid all this the author searches for what human meaning might remain in a world where men are caught between warring machines. In the end he comes to understand himself as a mere cog in a great machine, a metallic artifice that echoes and rides the rhythms of biological nature, including his own.

__________________________________

A century and a week ago, humanity began its first war of machines (July 28, 1914). There had been forerunners, such as the American Civil War and the Franco-Prussian War in the 19th century, but nothing before had exposed the full features and horrors of what war between mechanized and industrial societies would actually look like until it came, quite unexpectedly, in the form of World War I.

Those who wish to argue that the speed of technological development is exaggerated need only look at the First World War, where almost all of the weapons we now use in combat were either first used or first used to full effect- the radio, the submarine, the airplane, the tank and other vehicles using the internal combustion engine. Machine guns were let loose in new and devastating ways, as was long-range artillery.

Although again there were forerunners, the first biological and chemical weapons saw their true debut in WWI. The Germans tried to infect the city of St. Petersburg with a strain of the plague, but the most widely used WMDs were chemical weapons, some of them derived from work on the nitrogen cycle of plants- gases such as chlorine and mustard gas, which maimed more than they killed, and sometimes sat in the soil ready to emerge like poisonous mushrooms when weather conditions permitted.

Indeed, the few other weapons in our 21st century arsenal that can’t be found in the First World War- the jet, the rocket, the atomic bomb, radar, and even the computer- would make their debut only a few decades after the end of that war, during what most historians consider its second half: World War II.

What is called the Great War began, as our 9-11 wars began, with a terrorist attack. Archduke Franz Ferdinand of Austria-Hungary was assassinated by the ultimate nobody, a Serbian nationalist not much older than a boy- Gavrilo Princip- whose purely accidental success (he was only able to take his shot because the car the Archduke was riding in had taken a wrong turn) ended up being the catalyst for the most deadly war in human history up to that point, a conflict that would unleash nearly a century of darkness and mortal danger upon the world.

For the first time it would be a war experienced by people thousands of miles from the battlefield in almost real time, via the relatively new miracle of the radio. This was only part of the lightning-fast feedback loop that launched and sped European civilization from a minor political assassination to total war. As I recall from Stephen Kern’s 1983 The Culture of Time and Space, 1880-1918, political caution and prudence found themselves crushed in the vice of compressed space and time, forced to make policy decisions aligned not with the needs of human beings and their slow, messy, deliberative politics, but with the pace of machines. Once the decision to mobilize was made it was almost impossible to stop without subjecting the nation to extreme vulnerability; once a jingoistic public was whipped up to demand revenge and action via the new mass media it was again nearly impossible to silence and temper.

The First World War is perhaps the ultimate example of what Nassim Taleb called a “Black Swan”- an event whose nature failed to be foreseen and whose effect ended up changing the future radically from the shape it had previously been projected to have. Taleb defines a Black Swan this way:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Secondly it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. (xxii)

Taleb has something to teach futurists, who he suggests might start looking for a different day job. For what he is saying is that it is the events we do not and cannot predict that will have the greatest influence on the future, and on account of this blindness the future is essentially unknowable. The best we can do in his estimation is to build resilience, robustness and redundancy, which it is hoped might allow us to survive, or even gain, in the face of multiple forms of unpredictable crises- to become, in his terms, “anti-fragile”.

No one seems to think another world war is possible today, which might give us reason for worry. We do have more circuit breakers in place, which might allow us to dampen a surge toward war between the big states in the face of a dangerous event- such as Japan downing a Chinese plane in their dispute over Pacific islands- but many more and stronger ones need to be created to keep such frictions from spinning out of control.

States continue to prepare for limited conventional wars against one another. China practices and plans to retake disputed islands, including Taiwan, by force, and to push the U.S. Navy deeper into the Pacific, while the U.S. and Japan practice retaking islands from the Chinese. We do this without recognizing that we need to do everything possible to prevent such potential clashes in the first place, because once they begin we have no idea where or how they will end. As in financial crises, the further in time we get from the last great crisis, the more likely we are to have fallen into a dangerous form of complacency, though the threat of nuclear destruction may act as an ultimate brake.

The book I began this essay with is, of course, not some meditation on 21st or even 22nd century war, but the First World War itself. Ernst Jünger’s Storm of Steel is perhaps the best book ever written on the Great War and arguably one of the best books written on the subject of war- period.

It is Jünger’s incredible powers of observation and his desire for reflection that give the book such force. There is a scene that will forever stick with me in which Jünger is walking across the front lines and sees a man sitting there in seemingly perfect stoicism as the war rages around him. It’s only when he looks closer that Jünger realizes the man is dead and has no eyes in his sockets- they have been blown out by an explosion from behind.

Unlike another great book on the First World War, Erich Remarque’s All Quiet on the Western Front, Storm of Steel is not a pacifist book. Jünger is a soldier who sees the war as a personal quest and part of his duty as a man. His bravery is undeniable, but he does not question the justice or rationality of the war itself, a fact that would later allow Jünger’s war memoir to be used as a propaganda tool by the Nazis.

Storm of Steel has about it something of the view of war found in the ancients- that it was sent by the gods and there was nothing that could be done about it but to fulfill one’s duty within it. In the Hindu holy book the Bhagavad Gita the prince Arjuna is filled with doubt over the moral righteousness of the battle he is about to engage in- to which the god Krishna responds:

 …you Arjuna, are only a mortal appointee to carry out my divine will, since the Kauravas are destined to die either way, due to their heap of sins. Open your eyes O Bhaarata and know that I encompass the Karta, Karma and Kriya, all in myself. There is no scope for contemplation now or remorse later, it is indeed time for war and the world will remember your might and immense powers for time to come. So rise O Arjuna!, tighten up your Gandiva and let all directions shiver till their farthest horizons, by the reverberation of its string.

Jünger lived in a world that had begun to abandon the gods, or rather to adopt new materialist versions of them- whether the force of history or of evolution- stories in which Jünger, like Arjuna, comes to see himself as playing a small part.

 The incredible massing of forces in the hour of destiny, to fight for a distant future, and the violence it so surprisingly, suddenly unleashed, had taken me for the first time into the depths of something that was more than mere personal experience. That was what distinguished it from what I had been through before; it was an initiation that not only had opened red-hot chambers of dread but had also led me through them. (255-256)

Jünger is not a German propagandist. He seems blithely unaware of the reasons his Kaiser’s government gave for why the war was being fought. The dehumanization of the other side, which became a main part of the war propaganda towards the end of the conflict, and on both sides, does not touch him. He is a mere soldier whose bravery comes from the recognition of his own ultimate mortality, just as the mortality of everyone else allows him to kill without malice, as a mere matter of the simple duty of a combatant in war.

Because his memoir of the conflict is so authentic, so without bias or apparent political aims, he ends up conveying truths about war that are difficult for civilians to understand- a difficulty found not only in pacifists, but in nationalist war-mongers with no experience of actual combat.

If we open ourselves to the deepest meditations of those who have actually experienced war, what we find is that combat seems to bring the existential reality of the human condition out from its normal occlusion by the tedium of everyday living. To live in the midst of war is a screaming reminder that we are mortal and our lives ultimately very short. In war it is very clear that we are playing a game of chance against death, which is merely the flip side of the statistical unlikelihood of our very existence, as if our one true God were indeed chance itself. Like any form of gambling, victory against death becomes addictive.

War makes it painfully clear to those who fight in it that we are hanging on just barely to this thread, this thin mortal coil, and that our only hope for survival, for a time, is to hang on tightly to those closest to us- war’s famed brotherhood in arms. These experiences, rather than childish flag-waving notions of nationalism, are likely the primary source of what those who have experience only of peace often find unfathomable- that soldiers from the front often eagerly return to battle. It is this shared experience of combat that often leads soldiers to find more in common with the enemies they have engaged than with their fellow citizens back home who have never been to war.

The essential lessons of Storm of Steel are really spiritual answers to the question of combat. Jünger’s war experience leads him to something like Buddhist non-attachment, both to himself and to the futility of the bird’s-eye-view justifications of the conflict.

The nights brought heavy bombardment like swift, devastating summer thunderstorms. I would lie on my bunk on a mattress of fresh grass, and listen, with a strange and quite unjustified feeling of security, to the explosions all around that sent the sand trickling out of the walls.

At such moments, there crept over me a mood I hadn’t known before. A profound reorientation, a reaction to so much time spent so intensely, on the edge. The seasons followed one another, it was winter and then it was summer again, but it was still war. I felt I had got tired, and used to the aspect of war, but it was from familiarity that I observed what was in front of me in a new and subdued light. Things were less dazzlingly distinct. And I felt the purpose with which I had gone out to fight had been used up and no longer held. The war posed new, deeper puzzles. It was a strange time altogether. (260)

In another scene Jünger comes upon a lone enemy officer while by himself on patrol.

 I saw him jump as I approached, and stare at me with gaping eyes, while I, with my face behind my pistol, stalked up to him slowly and coldly. A bloody scene with no witnesses was about to happen. It was a relief to me, finally, to have the foe in front of me and within reach. I set the mouth of my pistol at the man’s temple- he was too frightened to move- while my other fist grabbed hold of his tunic…

With a plaintive sound, he reached into his pocket, not to pull out a weapon, but a photograph which he held up to me. I saw him on it, surrounded by numerous family, all standing on a terrace.

It was a plea from another world. Later, I thought it was blind chance that I had let him go and plunged onward. That one man of all often appeared in my dreams. I hope that meant he got to see his homeland again. (234)

The unfortunate thing about the future of war is not that human beings seem likely to play an ever diminishing role as fighters in it, as warfare undergoes the same process of automation that has resulted in so few of us now growing our food or producing our goods. Rather, it is that wars will continue to be fought and human beings- which will come to mean almost solely non-combatants- will continue to die in them.

The lessons Jünger took from war are not so much the product of war itself as of intense reflection on our own and others’ mortality. They are the same type of understanding and depth often seen in those who endure long periods of dying, the terminally ill, who die not in the swift “thief in the night” mode of accidents or sudden bodily failure, but slip from the world with enough time, and while retaining the capacity, to reflect. Even the very young who are terminally ill often speak of a diminishing sense of their own importance, a need to hang onto the moment, a drive to live life to the full, and a longing to treat others in a spirit of charity and mercy.

Even should the next “great war” be fought almost entirely by machines, we can retain these lessons as a culture so long as we give our thoughts over to what it means to be a finite creature with an ending, and we will have the opportunity to experience them personally for as long as we are mortal- and given the impossibility of any true eternity, no matter how far we extend our lifespans, mortal we always will be.

 

 

How Should Humanity Steer the Future?

FQXi

Over the spring the Foundational Questions Institute (FQXi) sponsored an essay contest whose topic should be dear to this audience’s heart- How Should Humanity Steer the Future? I thought I’d share some of the essays I found most interesting, but there are lots, lots more to check out if you’re into thinking about the future or physics, which I am guessing you might be.

If there was any theme I found across the 140 or so essays entered in the contest, it was that the 21st century is make-it-or-break-it for humanity, so we need to get our act together, and fast. If you want a metaphor for this sentiment, you couldn’t do much better than Nietzsche’s idea that humanity is like an individual walking on a “rope over an abyss”.

A Rope over an Abyss by Laurence Hitterdale

Hitterdale’s idea is that for most of human history the qualitative aspects of human experience have pretty much been the same, but that is about to change. What we are facing, according to Hitterdale, is either the extinction of our species or the realization of our wildest perennial human dreams- biological superlongevity and machine intelligence- which seem to imply the end of drudgery and scarcity. As he points out, some very heavy-hitting thinkers seem to think we live in make-or-break times:

 John Leslie judged the probability of human extinction during the next five centuries as perhaps around thirty per cent at least. Martin Rees in 2003 stated, “I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century.” Less than ten years later Rees added a comment: “I have been surprised by how many of my colleagues thought a catastrophe was even more likely than I did, and so considered me an optimist.”

In a nutshell, Hitterdale’s solution is for us to concentrate more on preventing negative outcomes than on achieving positive ones in this century. This is because even positive outcomes like human superlongevity and greater-than-human AI could lead to negative ones if we don’t sort out our problems or establish controls first.

How to avoid steering blindly: The case for a robust repository of human knowledge by Jens C. Niemeyer

This was probably my favorite essay overall, because it touched on an issue dear to my heart- how we will preserve the past in light of the huge uncertainties of the future. Niemeyer makes the case that we need to establish a repository of human knowledge in the event we suffer some general disaster, and shows how we might do this.

By one of those strange instances of serendipity, while thinking about Niemeyer’s ideas and browsing the science section of my local bookstore, I came across a new book by Lewis Dartnell, The Knowledge: How to Rebuild Our World from Scratch, which covers the essential technologies human beings will need if they want to revive civilization after a collapse. Or maybe I shouldn’t consider it so strange. Right next to The Knowledge was another new book, The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day, by David Hand- but I digress.

The digitization of knowledge and its dependence on the whole technological apparatus of society actually make us more vulnerable to the complete loss of information, both social and personal, and therefore demand that we back up our knowledge. It used to take something like a flood or a fire to destroy our lifetime visual records the way we used to store them- in photo albums- but now all many of us would have to do is lose or break our phone. As Niemeyer says:

 Currently, no widespread efforts are being made to protect digital resources against global disasters and to establish the means and procedures for extracting safeguarded digital information without an existing technological infrastructure. Facilities like, for instance, the Barbarastollen underground archive for the preservation of Germany’s cultural heritage (or other national and international high-security archives) operate on the basis of microfilm stored at constant temperature and low humidity. New, digital information will most likely never exist in printed form and thus cannot be archived with these techniques even in principle. The repository must therefore not only be robust against man-made or natural disasters, it must also provide the means for accessing and copying digital data without computers, data connections, or even electricity.

Niemeyer imagines the creation of such a knowledge repository as a unifying project for humankind:

Ultimately, the protection and support of the repository may become one of humanity’s most unifying goals. After all, our collective memory of all things discovered or created by mankind, of our stories, songs and ideas, have a great part in defining what it means to be human. We must begin to protect this heritage and guarantee that future generations have access to the information they need to steer the future with open eyes.

Love it!

One Cannot Live in the Cradle Forever by Robert de Neufville

If Niemeyer is trying to goad us into preparing should the worst occur, Robert de Neufville, like Hitterdale, is working towards making sure these nightmares, especially the self-inflicted ones, don’t come true in the first place. He does this as a journalist and writer and as an associate of the Global Catastrophic Risk Institute.

As de Neufville points out, and as I myself have argued before, the silence of the universe gives us reason to be pessimistic about the long-term survivability of technological civilization. Yet the difficulties that stand in the way of minimizing global catastrophic risks- things like developing an environmentally sustainable modern economy, protecting ourselves against global pandemics or meteor strikes of a scale that might bring civilization to its knees, or eliminating the threat of nuclear war- are more challenges of politics than of technology. He writes:

But the greatest challenges may be political. Overcoming the technical challenges may be easy in comparison to using our collective power as a species wisely. If humanity were a single person with all the knowledge and abilities of the entire human race, avoiding nuclear war, and environmental catastrophe would be relatively easy. But in fact we are billions of people with different experiences, different interests, and different visions for the future.

In a sense, the future is a collective action problem. Our species’ prospects are effectively what economists call a “common good”. Every person has a stake in our future. But no one person or country has the primary responsibility for the well-being of the human race. Most do not get much personal benefit from sacrificing to lower the risk of extinction. And all else being equal each would prefer that others bear the cost of action. Many powerful people and institutions in particular have a strong interest in keeping their investments from being stranded by social change. As Jason Matheny has said, “extinction risks are market failures”.

His essay makes an excellent case that it is time we mature as a species and live up to our global responsibilities, the most important of which is ensuring our continued existence.

The “I” and the Robot by Cristinel Stoica

Here Cristinel Stoica makes a great case for tolerance, intellectual humility and pluralism, a sentiment perhaps often expressed but rarely with such grace and passion.

As he writes:

The future is unpredictable and open, and we can make it better, for future us and for our children. We want them to live in peace and happiness. They can’t, if we want them to continue our fights and wars against others that are different, or to pay them back bills we inherited from our ancestors. The legacy we leave them should be a healthy planet, good relations with others, access to education, freedom, a healthy and critical way of thinking. We have to learn to be free, and to allow others to be free, because this is the only way our children will be happy and free. Then, they will be able to focus on any problems the future may reserve them.

Ends of History and Future Histories in the Longue Duree by Benjamin Pope

In his essay Benjamin Pope tries to peer into the human future over the long term by looking at the types of institutions that survive across centuries and even millennia: universities, “churches”, economic systems- such as capitalism- and potentially multi-millennial, species-wide projects, namely space colonization.

I liked Pope’s essay a lot, but there are parts of it I disagreed with. For one, I wish he had included cities. These are the longest-lived of human institutions, and unlike Pope’s other choices they are political, yet they manage to far outlive other political forms- namely states and empires. Rome far outlived the Roman Empire, and my guess is that many American cities, as long as they are not underwater, will outlive the United States.

Pope’s read on religion might be music to the ears of some at the IEET:

Even the very far future will have a history, and this future history may have strong, path-dependent consequences. Once we are at the threshold of a post-human society the pace of change is expected to slow down only in the event of collapse, and there is a danger that any locked-in system not able to adapt appropriately will prevent a full spectrum of human flourishing that might otherwise occur.

Pope seems to lean toward the negative take on religion’s role in promoting “a full spectrum of human flourishing”, and suggests that religion, “as a worst-case scenario, may lock out humanity from futures in which peace and freedom will be more achievable.”

To the surprise of many in the secular West, and that includes an increasingly secular United States, the story of religion will very much be the story of humanity over the next couple of centuries- and that includes especially the religion that is dying in the West today, Christianity. I doubt, however, that religion has either the will or the capacity to stop or even significantly slow technological development, though it might change our understanding of it. It is also the case that, at the end of the day, religion only thrives to the extent it promotes human flourishing and survival, though religious fanatics might lead us to think otherwise. I am also not the only one to doubt Pope’s belief that “Once we are at the threshold of a posthuman society the pace of change is expected to slow down only in the event of collapse”.

Still, I greatly enjoyed Pope’s essay, and it was certainly thought-provoking.

Smooth seas do not make good sailors by Georgina Parry

If you’re looking to break out of your dystopian gloom for a while- and I myself keep finding reasons to be gloomy- then you couldn’t do much better than to take a peek at Georgina Parry’s fictionalized glimpse of a possible utopian future. Like a good parent, Parry encourages our confidence, but not our hubris:

 The image mankind call ‘the present’ has been written in the light but the material future has not been built. Now it is the mission of people like Grace, and the human species, to build a future. Success will be measured by the contentment, health, altruism, high culture, and creativity of its people. As a species, Homo sapiens sapiens are hackers of nature’s solutions presented by the tree of life, that has evolved over millions of years.

The future is the past by Roger Schlafly

Schlafly’s essay literally made my jaw drop, it was so morally absurd and even obscene.

Consider a mundane decision to walk along the top of a cliff. Conventional advice would be to be safe by staying away from the edge. But as Tegmark explains, that safety is only an illusion. What you perceive as a decision to stay safe is really the creation of a clone who jumps off the cliff. You may think that you are safe, but you are really jumping to your death in an alternate universe.

Armed with this knowledge, there is no reason to be safe. If you decide to jump off the cliff, then you really create a clone of yourself who stays on top of the cliff. Both scenarios are equally real, no matter what you decide. Your clone is indistinguishable from yourself, and will have the same feelings, except that one lives and the other dies. The surviving one can make more clones of himself just by making more decisions.

Schlafly rams the point home that under current views of the multiverse in physics nothing you do really amounts to a choice; we are stuck on an utterly deterministic wave-function, on whose branches we play hero and villain, and there is no space for either praise or guilt. You can always act the coward or the naif, sure that somewhere “out there” another version of “you” does the right thing. Saving humanity from itself in the ways proposed by Hitterdale and de Neufville, preparing for the worst as in Niemeyer and Pope, or trying to build a better future as in Parry and Stoica, makes no sense here. Like poor Schrödinger’s cat, on some branches we end up surviving and on some we destroy ourselves, and it is not we who are in charge of which branch we are on.

The thought made me cringe, but then I realized Schlafly must be playing a Swiftian game. Applying quantum theory to the moral and political worlds we inhabit leads to absurdity. This might or might not call into question the fundamental reality of the multiverse or the universal wave function, but it should not lead us to doubt or jettison our ideas regarding our own responsibility for the lives we live, which boil down to the decisions we have made.

Chinese Dream is Xuan Yuan’s Da Tong by KoGuan Leo

Those of us in the West probably can’t help seeing the future of technology as nearly synonymous with the future of our own civilization- and a civilization, when boiled down to its essence, amounts to a set of questions a particular group of human beings keeps asking, and their answers to those questions. The questions in the West are things like: What is the right balance between social order and individual freedom? What is the relationship between the external and internal (mental/spiritual) worlds, including the question of the meaning of Truth? How might the most fragile thing in existence, and for us the most precious- the individual- survive across time? What is the relationship between the man-made world- and culture- vis-à-vis nature, and which is most important to the identity and authenticity of the individual?

The progress of science and technology intersects with all of these questions, but what we often forget is that we have sown the seeds of science and technology elsewhere, and the environment in which they grow can be very different- and hence their application and understanding different- based as they will be on a whole other set of questions and answers encountered by a distinct civilization.

Leo KoGuan’s essay approaches the future of science and technology from the perspective of Chinese civilization. Frankly, I did not really understand his essay, which seemed to me a combination of singularitarianism and Chinese philosophy that I just couldn’t wrap my head around. What am I to make of this, from the founder and chairman of a 5.1-billion-dollar computer company:

 Using the KQID time-engine, earthlings will literally become Tianming Ren with God-like power to create and distribute objects of desire at will. Unchained, we are free at last!

Not much, other than that anyone interested in the future of transhumanism absolutely needs to be paying attention to what is happening in China, and to what, and how, people there are thinking.

Lastly, I myself had an essay in the contest. It was about how we are facing incredible hurdles in the near future, and how one of the ways we might clear them is by recovering the ability to imagine what an ideal society- Utopia- might look like. Go figure.

Our Verbot Moment

Metropolis poster

When I was around nine years old I got a robot for Christmas. I still remember calling my best friend Eric to let him know I’d hit pay dirt. My “Verbot” was to be my own personal R2-D2. As was clear from the picture on the box, which I also remember as clearly as if it were yesterday, Verbot would bring me drinks and snacks from the kitchen on command- no more pestering my sisters, who responded with their damned claims of autonomy! Verbot would learn to recognize my voice and might help me with the math homework I hated. Being the only kid in my nowhere town with his very own robot, I’d be the talk for miles in every direction. As long, that is, as Mark Z didn’t find his own Verbot under the tree- the boy who had everything- cursed brat!

Within a week of Christmas, Verbot was dead. I never did learn how to program it to bring me snacks while I lounged watching Star Blazers, though it wasn’t really programmable to start with. It was really more of a remote-controlled car in the shape of Robbie from Lost in Space than an actual honest-to-goodness robot. Then as now, my steering skills weren’t so hot, and I managed to get Verbot’s antenna stuck in the tangly curls of our skittish terrier, Pepper. To the sound of my cursing, Pepper panicked and dragged poor Verbot round and round the kitchen table, eventually snapping it loose from her hair and sending it careening into a wall to smash into pieces. I felt my whole future was there in front of me, shattered on the floor. There was no taking it back.

Not that my 9-year-old nerd self realized this, but the makers of Verbot obviously weren’t German, the word in that language meaning a ban or prohibition. Not exactly a ringing endorsement for a product- more like an inside joke by the makers, whose punch line was the precautionary principle.

What I had fallen into in my Verbot moment was the gap between our aspirations for robots and their actual reality. People have been talking about animated tools since ancient times: Homer has some in his Iliad, and Aristotle discussed their possibility. Perhaps we started thinking about this because living creatures tend to be unruly and unpredictable. They don’t get you things when you want them to and have a tendency to run wild and go rogue. Tools are different- they always do what you want them to, as long as they’re not broken and you are using them properly. Combining the animation and intelligence of living things with the cold functionality of tools would be like mixing chocolate and peanut butter for someone who wanted to get something done without doing it himself. The problem was we had no idea how to get from our dead tools to “living” ones.

It was only in the 19th century that an alternative path to the hocus-pocus of magic was found for answering the two fundamental questions surrounding the creation of animate tools: what would animate these machines in the same way uncreated living beings are animated, and what would be the source of these beings’ intelligence? Few before the 1800s could see through these questions without some reference to the black arts, although a genius like Leonardo da Vinci had, as far back as the 15th century, seen hints that at least one way forward was to discover the principles of living things and apply them to our tools and devices. (More on that another time.)

The path forward we actually discovered was through machines animated by chemical and electrical processes much like living beings are, rather than by the tapping of kinetic forces such as water and wind, or the potential energy and multiplication of force through things like springs and levers, which had run our machines up until that point. Intelligence was to be had in the form of devices following the logic of some detailed set of instructions. Our animated machines were to be energetic like animals but also logical and precise like the devices and languages we had created for measuring and sequencing.

We got the animation part down pretty quickly, but the intelligence part proved much harder. Although much more precise than fault-prone humans, mechanical methods of intelligence were just too slow when compared to the electro-chemical processes of living brains. Once such “calculators”, what we now call computers, were able to use electronic processes they got much faster, and the idea that we were on the verge of creating a truly artificial intelligence began to take hold.

As everyone knows, our aspirations were far too premature. The most infamous display of our hubris came in the proposal for the 1956 Dartmouth Summer Research Project on Artificial Intelligence, which boldly predicted:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.[emphasis added]

Oops. Over much of the next half century, the only real progress in these areas of artificial intelligence came in the worlds we’d dreamed up in our heads. When I was a young boy, the only robots I saw using language or forming abstractions were found in movies and TV shows such as 2001: A Space Odyssey, Star Wars, Battlestar Galactica, and Buck Rogers, and in books by science-fiction giants like Asimov. Some of these daydreams were so long in being unfulfilled that I watched them in black and white. Given this, I am sure many other kids had their Verbot moments as well.

It is only in the last decade or so that the processing of instructions has proven fast enough, and the programs sophisticated enough, for machines to exhibit something like the intelligent behaviors of living organisms. We seem to be at the beginning of a robotic revolution where machines are doing at least some of the things science-fiction and Hollywood had promised. They beat us at chess and trivia games, can fly airplanes and drive automobiles, serve as pack animals, and even speak to us. How close we will come to the dreams of authors and filmmakers when it comes to our 21st century robots cannot be known; an even more important question, though, is how what actually develops will diverge from these fantasies.

I find the timing of this robotic revolution, in the context of other historical currents, quite strange. The bizarre thing is that almost at the exact moment many of us became unwilling to treat other living creatures, especially human beings, as mere tools- with slavery no longer tolerated, our children and spouses no longer treated as property and servants but as gifts to be cultivated (perhaps the sadder element of why population growth rates are declining), and even our animals offered some semblance of rights and autonomy- we were coming ever closer to our dream of creating truly animated and intelligent slaves.

This is not to say we are out of the woods yet when it comes to our treatment of living beings. The headlines of the tragic kidnapping of over 300 girls in Nigeria should bring to our attention the reality of slavery in our supposedly advanced and humane 21st century: there are more people enslaved today than at the height of 19th century chattel slavery; it is only the proportion that is so much lower. Many of the world’s working poor, especially in the developing world, live in conditions not far removed from slavery or serfdom. The primary problem I see with our continued practice of eating animals is not meat eating itself, but that the production process chains living creatures to the cruel and unrelenting sequence of machines rather than allowing such animals to live in the natural cycles for which they evolved and were bred.

Still, people in many parts of the world are rightly constrained in how they can treat living beings. What I am afraid of is that the dark and perpetual human longings to be served and to dominate will manifest themselves in our making machines more like persons for those purposes alone. The danger here is that these animated tools will cross, or we will force them to cross, some threshold of sensibility that calls into question their very treatment and use as mere tools, while we fail to mature beyond the level of my 9 year old self dreaming of his Verbot slave.

And yet, this is only one way to look at the rise of intelligent machines. Like a gestalt drawing we might change our focus and see a very different picture- that it is not we who are using and chaining machines for our purposes, but the machines who are doing this to us.  To that subject next time…

Why does the world exist, and other dangerous questions for insomniacs

William Blake Creation of the World

A few weeks back I wrote a post on how the recent discovery of primordial gravitational waves provided evidence for inflationary models of the Big Bang. These are cosmological models that imply some version of the multiverse, essentially the idea that ours is just one of a series of universes, a tiny bubble, or region, of a much, much larger universe where perhaps even the laws of physics or the rationality of mathematics differ from one region to another.

My earlier piece had taken some umbrage at the physicist Lawrence Krauss’ new atheist take on the discovery of gravitational waves, in the New Yorker. Krauss is a “nothing theorist”, one of a group of physicists who argue that the universe emerged from what was, in effect, nothing at all, although, unlike other nothing theorists such as Stephen Hawking, Krauss uses his science as a cudgel to beat up on contemporary religion. It was this attack on religion I was interested in, while the deeper issue, that of a universe arising from nothing, left me shrugging my shoulders as if there was, excuse the pun, nothing of much importance in the assertion.

Perhaps I missed the heart of the issue because I am a nothingist myself, or, at the very least, have never found the issue of nothingness something worth grappling with. It’s hard to write this without sounding like a zen koan or making my head hurt, but I didn’t look into the physics or the metaphysics of Krauss’ nothingist take on gravitational waves, inflation or anything else; in fact, I don’t think I had ever really reflected on the nature of nothing at all.

The problems I had with Krauss’ overall view, as seen in his book on the same subject, A Universe from Nothing, had to do with his understanding of the future and the present, not the past. I felt the book read the future far too pessimistically, missing the fact that even if the universe will end in nothing there is a lot of living to be done between now and its heat death hundreds of billions of years from now. As much as it was a work of popular science, Krauss’ book was mostly an atheist weapon in what I called “The Great God Debate” which, to my lights, was about attacking, or for creationists defending, a version of God as a cosmic engineer that was born no earlier than, and in conjunction with, modern science itself. I felt it was about time we got beyond this conception of God and moved to a newer, or even a more ancient, one.

Above all, A Universe from Nothing, as I saw it, was epistemologically hubristic, using science to make a non-scientific claim about the meaning of existence- that there wasn’t any- which cut off so many other interesting avenues of thought before they even got off the ground. What I hadn’t thought about was the issue of emergence from nothingness itself. Maybe the question of the past, the question of why our universe is here at all, was more important than I thought.

When thinking a question through, I always find it a helpful first step to turn to the history of ideas for some context. Like much else, the idea that the universe began from nothing is a relatively recent one. The ancients had little notion of nothingness, their creation myths starting not with nothing but most often with an eternally existing chaos that some divinity or divinities sculpted into the ordered world we see. You start to get ideas of creation out of nothing- ex nihilo- really only with Augustine in the 5th century, but full credit for the idea of a world that began with nothing would have to wait until Leibniz in the 1600s, who, when he wasn’t dreaming up new cosmologies, was off independently inventing calculus at the same time as Newton and designing computers three centuries before any of us had lost a year playing Farmville.

Even when it came to nothingness Leibniz was ahead of his time. About three centuries after he had imagined a universe created from nothing, the renegade Einstein was just reflecting universally held opinion when he made his biggest “mistake”, tweaking his theory of general relativity with what he thought was a bogus cosmological constant so that he could get the universe he and everyone else believed in- a universe that was eternal, unchanging, uncreated. Not long after Einstein had cooked the books, Edwin Hubble discovered the universe was changing with time, moving apart, and not long after that, evidence mounted that the universe had a beginning in the Big Bang.

With a creation event in the Big Bang, cosmologists, philosophers and theologians were forced to confront the existence of a universe emerging from what was potentially nothing, running into questions that had lain dormant since Leibniz: how did the universe emerge from nothing? Why this particular universe? And ultimately, why something rather than nothing at all? Krauss thinks we have solved the first and second questions and finds the third question, in his words, “stupid”.

Strange as it sounds coming out of my mouth, I actually find myself agreeing with Krauss: explanations that the universe emerged from fluctuations in a primordial “quantum foam”- closer to the ancients’ idea of chaos than to our version of nothing- along with the idea that we are just one of many universes following varied natural laws, some, like ours, capable of fostering intelligent life, seem sufficient to me. The third question, however, I find in no sense stupid, and if it’s childlike, it is childlike in the best, wondrously curious kind of way. Indeed, the answers to the question “why is there something rather than nothing?” might result in some of the most thrilling ideas human beings have come up with yet.

The question of why there is something rather than nothing is brilliantly explored in a book by Jim Holt, Why Does the World Exist?: An Existential Detective Story. As Holt points out, the problem with nothingist theories like those of Krauss is that they fail to answer the question of why the quantum foam, or the multiple universes churning out their versions of existence, are there in the first place. The simplest explanation we have is that “God made it”, and Holt does look at this answer as provided by the philosopher of religion Richard Swinburne, who answers the obvious question “who made God?” with the traditional answer that God is eternal and not made- which makes one wonder why we can’t just stick with Krauss’ self-generating universe in the first place.

Yet it’s not only religious persons who think the why question addresses something fundamental, or even that science reveals the question as important even if we are forever barred from completely answering it. As the physicist David Deutsch says in Why Does the World Exist?:

 … none of our laws of physics can possibly answer the question of why the multiverse is there…. Laws don’t do that kind of work.

Wheeler used to say, take all the best laws of physics and put those bits on a piece of paper on the floor. Then stand back and look at them and say, “Fly!” They won’t fly they just sit there. Quantum theory may explain why the Big Bang happened, but it can’t answer the question you’re interested in, the question of existence. The very concept of existence is a complex one that needs to be unpacked. And the question Why is there something rather than nothing is a layered one, I expect. Even if you succeeded in answering it at some level, you’d still have the next level to worry about.  (128)

Holt quotes Deutsch from his book The Fabric of Reality: “I do not believe that we are now, or shall ever be, close to understanding everything there is.” (129)

Other philosophers and physicists are trying to answer the “why” question by composing solutions that combine ancient and modern elements. These are the Platonic multiverses of John Leslie and Max Tegmark, both of whom, though in different ways, believe in eternally existing “forms”- goodness in the case of Leslie and mathematics in the case of Tegmark- which an infinity of universes express and realize. For the philosopher Leslie:

 … what the cosmos consists of is an infinite number of infinite minds, each of which knows absolutely everything which is worth knowing. (200)

Leslie borrows from Plato the idea that the world appeared out of the sheer ethical requirement for Goodness, that “the form of the Good bestows existence upon the world” (199).

If that leaves you scratching your scientifically skeptical head as much as it does mine, there are actual scientists, in this case the cosmologist Max Tegmark, who hold similar Platonic ideas. According to Holt, Tegmark believes that:

 … every consistently describable mathematical structure exists in a genuine physical sense. Each of these structures constitutes a parallel world, and together these parallel worlds make up a mathematical multiverse. (182)

Like Leslie, Tegmark looks to Plato’s Eternal Forms:

 The elements of this multiverse do not exist in the same space but exist outside space and time; they are “static sculptures” that represent the mathematical structure of the physical laws that govern them. (183)

If you like this line of reasoning, Tegmark has a whole book on the subject, Our Mathematical Universe. I am no Platonist and Tegmark is unlikely to convert me, but I am eager to read it. What I find most surprising about the ideas of both Leslie and Tegmark is that they combine two things I had not previously seen as capable of being combined, and had even considered outright rival models of the world: an idea of an eternal Platonic world behind existence and the prolific features of multiverse theory, in which there are many, perhaps infinite, varieties of universes.

The idea that the universe is mind-bogglingly prolific in its scale and diversity is the “fecundity” of the philosopher Robert Nozick, whom until Holt I had only associated with libertarian economics. Anyone who has a vision of a universe so prolific and diverse is okay in my book, though I do wish the late Nozick had been as open to the diversity of human socio-economic systems as he was to the diversity of universes.

Like the physicist Paul Davies, or, even better to my lights, the novelist John Updike, both discussed by Holt, I had previously thought the idea of the multiverse was a way to avoid the need for either a creator God or eternally existing laws- although, unlike Davies and Updike, and in the spirit of Ockham’s Razor, I thought this a good thing. The one problem I had with multiverse theories was the idea of not just a very large or even infinite number of alternative universes but parallel universes where there are other versions of me running around. Holt managed to clear that up for me.

The idea that the universe was splitting every time I chose to eat or not eat a chocolate bar or some such always struck me as silly and also somehow suffocating. Hints that we may live in a parallel universe of this sort are just one of the weird phenomena that emerge from quantum mechanics- you know, poor Schrödinger’s Cat. Holt points out that this is much different from, and not connected to, the idea of multiple universes that emerges from the cosmological theory of inflation. We simply don’t know if these two ideas have any connection. Whew! I can now let others wrestle with the bizarre world of the quantum and rest comforted that the minutiae of my every decision don’t make me responsible for creating a whole other universe.

This return in Leslie and Tegmark to Plato, a philosopher who died, after all, some 2,500 years ago, struck me as both weird and incredibly interesting. Stepping back, it seems to me that it’s not so much that we’re in the middle of some Hegelian dialectic relentlessly moving forward through thesis-antithesis-synthesis, but that we are involved in a very long conversation that is moving in no particular direction and every so often loops back upon itself, bringing up issues and perspectives we had long left behind. It’s like a maze where you have to backtrack to the point where you made a wrong turn in order to go in the right direction. We can seemingly escape the cosmological dead end created by Christian theology and Leibniz’s idea of creation ex nihilo only by going back to ideas found before we went down that path, to Plato. Though, for my money, I prefer yet another ancient philosopher- Lucretius.

Yet maybe Plato isn’t far enough back. It was the pre-Socratics who invented the natural philosophy that eventually became science. There is a kind of playfulness to their ideas, all of which could exist side-by-side in dialogue and debate with one another with no clear way for any theory to win: Heraclitus’ world as flux and fire, Pythagoras’ world as number, Democritus’ world as atoms.

My hope is that we recognize our contemporary versions of these theories for what they are: “just-so” stories that we tell about the darkness beyond the edge of scientific knowledge- and the darkness is vast. They are versions of a speculative theology- the possibilianism of David Eagleman, which I have written about before- and are harmful only when they become as rigid and inflexible as the old school theology they are meant to replace, or falsely claim the kinds of proof from evidence that only science and its experimental verification can afford. We should be playful with them, in the way Plato himself was playful with such stories, in the knowledge that while we are in the “cave” we can only find the truth by looking through the darkness at varied angles.

Does Holt think there is a reason the world exists? What is really being asked here is what type of, in the philosopher Derek Parfit’s term, “selector” brought existence into being. For Swinburne the selector was God, for Leslie Goodness, for Tegmark mathematics, for Nozick fullness, but Holt thinks the selector might have been simpler still- indeed, that the selector was simplicity. All the other selectors Holt finds to be circular, ultimately ending up being used to explain themselves. But what if our world is merely the simplest one possible that is also full? Moving from reason alone, Holt adopts something like the mid-point between a universe that contained nothing and one that contained an infinite number of universes that are perfectly good, a mean he calls “infinite mediocrity.”

I was not quite convinced by Holt’s conclusion, and was more intrigued by the open-ended and ambiguous quality of his exploration of the question of why there is something rather than nothing than I was by his “proof” that our existence could be explained in such a way.

What has often struck me as deplorable when it comes to theists and atheists alike is their lack of awe at the mysterious majesty of it all. That “God made it” or “it just is” falls flat. Whenever I have the peace in a busy world to reflect, it is not nothingness that hits me but awe- that the world is here, that I am here, that you are here, a fact that is a statistical miracle of sorts, a web weaving itself. Holt gave me a whole new way to think about this wonder.

How wonderfully strange that our small and isolated minds leverage cosmic history and reality to reflect the universe back upon itself, that our universe might come into existence and disappear much like we do. On that score, of all the sections in Holt’s beautiful little book it was the most personal section on the death of his mother that taught me the most about nothingness. Reflecting on the memory of being at his mother’s bedside in hospice he writes:

My mother’s breathing was getting shallower. Her eyes remained closed. She still looked peaceful, although every once in a while she made a little gasping noise.

Then I was standing over her, still holding her hand, my mother’s eyes open wide, as if in alarm. It was the first time I had seen them that day. She seemed to be looking at me. She opened her mouth. I saw her tongue twitch two or three times. Was she trying to say something? Within a couple of seconds her breathing stopped.

I leaned down and told her I loved her. Then I went into the hall and said to the nurse, “I think she just died.” (272-273)

The scene struck me as the exact opposite of the joyous experience I had at the birth of my own children and somehow reminded me of a scene from Diane Ackerman’s book Deep Play.

 The moment a new-born opens its eyes discovery begins. I learned this with a laugh one morning in New Mexico where I worked through the seasons of a large cattle ranch. One day, I delivered a calf. When it lifted up its fluffy head and looked at me its eyes held the absolute bewilderment of the newly born. A moment before it had enjoyed the even, black  nowhere of the womb and suddenly its world was full of color, movement and noise. I’ve never seen anything so shocked to be alive. (141-142)

At the end of the day, for the whole of existence, the question of why there is something rather than nothing may remain forever outside our reach, but we, the dead who have become the living and the living who will become the dead, are certainly intimates with the reality of being and nothingness.

The Problem with the Trolley Problem, or why I avoid utilitarians near subways

Master Bertram of Minden The Fall

Human beings are weird. At least, that is, when comparing ourselves to our animal cousins. We’re weird in terms of our use of language, our creation and use of symbolic art and mathematics, and our extensive use of tools. We’re also weird in terms of our morality, and we engage in strange behaviors vis-à-vis one another that are almost impossible to find throughout the rest of the animal world.

We will help one another in ways that no other animal would think of, sacrificing resources, and sometimes going so far as to surrender the one thing evolution commands us to do- to reproduce- in order to aid total strangers. But don’t get on our bad side. No species has ever murdered or abused fellow members of its own species with such viciousness or cruelty. Perhaps religious traditions have something fundamentally right: that our intelligence, our “knowledge of good and evil”, really has put us on an entirely different moral plane, opened up a series of possibilities and a consciousness new to the universe, or at least to our limited corner of it.

We’ve been looking for purely naturalistic explanations for our moral strangeness at least since Darwin. Over the past decade or so there has been a growing number of hybrid explorations of this topic, combining in varying degrees elements of philosophy, evolutionary theory and cognitive psychology.

Joshua Greene’s recent Moral Tribes: Emotion, Reason, and the Gap Between Us and Them is an excellent example of these. Yet it is also a book that suffers from the flaw of almost all of these reflections on the singularity of human moral distinction: what it gains in explanatory breadth comes at the cost of a diminished understanding of how life on this new moral plane is actually experienced, the way knowing the volume of water in an ocean, lake or pool tells you nothing about what it means to swim.

Greene, like any thinker looking for a “genealogy of morals”, peers back into our evolutionary past. Human beings spent the vast majority of our history as members of small tribal groups of perhaps a few hundred members. Wired over eons of evolutionary time into our very nature is an unconscious ability to grapple with the conflicting interests between us as individuals and the tribe to which we belong- what Greene calls the conflict of Me vs. Us.

When evolution comes up with a solution to a commonly experienced set of problems it tends to hard-wire that solution. We don’t think at all about how to see, ditto how to breathe, and when we need to, it’s a sure sign that something has gone wrong. There is some degree of nurture in this equation, however. Raise a child up to a certain age in total darkness and they will go blind. The development of language skills, especially, shows this nature-nurture codependency. We are primed by evolution for language acquisition at birth and just need an environment in which any language is regularly spoken. Greene thinks our normal everyday “common sense morality” is like this. It comes naturally to us and becomes automatic given the right environment.

Why might evolution have wired human beings morally in this way, especially when it comes to our natural aversion to violence? Greene thinks Hobbes was right when he proposed that human beings possess a natural equality in the capacity to kill, a consequence of our evolved ability to plan and use tools. Even a small individual could kill a much larger human being if they planned it well and struck fast enough. Without some inbuilt inhibition against acts of personal aggression, humanity would have quickly killed itself off in a stone age version of the Hatfields and the McCoys.

This automatic common sense morality has gotten us pretty far, but Greene thinks he has found a place where it has also become a bug, distorting our thinking rather than helping us make moral decisions. He sees this distortion in the wrong decisions we make when imagining how to save people about to be run over by runaway trolley cars.

Over the last decade the Trolley Problem has become standard fare in any general work dealing with cognitive psychology. The problem varies, but in its most standard depiction the scenario goes as follows: there is a trolley hurtling down a track that is about to run over and kill five people. The only way to stop the trolley is to push an innocent, very heavy man onto the tracks- a decision guaranteed to kill him. Do you push him?

The answer people give is almost universally- no, and Greene intends to show why most of us answer in this way even though, as a matter of blunt moral reasoning, saving five lives should be worth more than sacrificing one.

The problem, as Greene sees it, is that we are using our automatic moral decision making faculties to make the “don’t push” call. The evidence for this can be found in the fact that those whose ability to make automatic moral decisions is compromised end up making the “right” moral decision to sacrifice one life in order to save five.

People with compromised automatic decision making processes include persons with damage to their ventromedial prefrontal cortex, persons such as Phineas Gage, whose case was made famous by the neuroscientist Antonio Damasio in his book Descartes’ Error. Gage was a 19th century railroad foreman who, after suffering an accident that damaged part of his brain, became emotionally unstable and unable to make good decisions. Damasio used his case and more modern ones to show just how important emotion is to our ability to reason.

It just so happens that persons suffering from Gage-style injuries also decide the Trolley Problem in favor of saving five persons rather than refusing to push one to his death. Persons with brain damage aren’t the only ones who decide the Trolley Problem in this way- autistic individuals and psychopaths do so as well.

Greene makes some leaps from this difference in how people respond to the Trolley problem, proposing that we have not one but two systems of moral decision making which he compares to the automatic and manual settings on a camera. The automatic mode is visceral, fast and instinctive whereas the manual mode is reasoned, deliberative and slow. Greene thinks that those who are able to decide in the Trolley problem to push the man onto the tracks to save five people are accessing their manual mode because their automatic settings have been shut off.

The leaps continue in that Greene thinks we need manual mode to solve the types of moral problems our evolution did not prepare us for: not Me vs. Us, but Us vs. Them- our society in conflict with other societies. Moral philosophy is manual mode in action, and Greene believes one version of moral philosophy is head and shoulders above the rest, Utilitarianism, which he would prefer to call Deep Pragmatism.

Needless to say, there are problems with the Trolley Problem. How, for instance, does one interpret the change in responses when one substitutes a switch for a push? The number of respondents who choose to send the heavy man to his death on the tracks to save five people increases significantly when a push is exchanged for the flip of a switch that opens a trapdoor. Greene thinks this is because the bodily distance allows our more reasoned moral thinking to come into play. However, this is not the conclusion one gets when looking at real instances of personal versus instrumentalized killing, for example in war.

We’ve known for quite a long time that individual soldiers have incredible difficulty killing even fellow enemy soldiers on the battlefield, a subject brilliantly explored by Dave Grossman in his book On Killing: The Psychological Cost of Learning to Kill in War and Society. The strange thing is, whereas an individual soldier has problems killing even one human being who is shooting at him, and can sustain psychological scars because of such killing, airmen working in bombers who incinerate thousands of human beings, including men, women and children, do not seem to be affected to such a degree by either inhibitions on killing or its mental scars. The US military’s response to this has been to try to instrumentalize and automate killing by infantrymen in the same way killing by airmen and sailors is disassociated. Disconnection from their instinctual inhibitions against violence allows human beings to achieve levels of violence impossible at an individual level.

It may be that the persons who choose to save five at the cost of one aren’t engaging in a form of moral reasoning at all; they are merely comparing numbers: 5 > 1, so, all else being equal, choose the greater number. Indeed, the only way it might be said that higher moral reasoning was being used in choosing 5 over 1 would be if the 1 who needed to be sacrificed to save the 5 was the person being asked- with the question being: would you throw yourself onto the trolley tracks in order to save 5 other people?

Our automatic, emotional systems are actually very good at leading us to moral decisions, and the reasons we have trouble stretching them to Us vs. Them problems might be different from the ones Greene identifies.

One reason the stretching might be difficult can be seen in a famous section of The Brothers Karamazov by the Russian novelist Fyodor Dostoyevsky. In “The Grand Inquisitor” the character Ivan conveys to his brother Alyosha an imagined scenario in which Jesus has returned to earth and is immediately arrested by authorities of the Roman church, who claim that they have already solved the problem of man’s nature: the world is stable so long as mankind is given bread and prohibited from exercising his free will. Writing in the late 19th century, Dostoyevsky was an arch-conservative with a deep belief in the Orthodox Church and a prescient anxiety regarding the moral dangers of nihilism and revolutionary socialism in Russia.

In her On Revolution, Hannah Arendt pointed out that Dostoyevsky was conveying, with his story of the Grand Inquisitor, not only an important theological observation- that only an omniscient God could experience the totality of suffering without losing sight of the suffering individual- but an important moral and political observation as well: the contrast between compassion and pity.

Compassion by its very nature can not be touched off by the sufferings of a whole group of people, or, least of all,  mankind as a whole. It cannot reach out further than what is suffered by one person and remain what it is supposed to be, co-suffering. Its strength hinges on the strength of passion itself, which, in contrast to reason, can comprehend only the particular, but has no notion of the general and no capacity for generalization. (75)

In light of Greene’s argument, what this means is that there is a very good reason why our automatic morality has trouble with abstract moral problems: without sight of an individual to whom moral decisions can be attached, one must engage in a level of abstraction our older moral systems did not evolve to grapple with. As a consequence of our lack of God-like omniscience, moral problems that deal with masses of individuals achieve consistency only by applying some general rule. Moral philosophers, not to mention the rest of us, are in effect imaginary kings issuing laws for the good of their kingdoms. The problem here is not only that the ultimate effects of such abstract decisions are opaque, due to our general uncertainty regarding the future, but that there is no clear and absolute way of defining for whose good, exactly, we are imposing these rules.

Even the Utilitarianism that Greene thinks is our best bet for reaching common agreement regarding such rules offers widely different and competing answers to the question “good for whom?” In its camp can be found abolitionists such as David Pearce, who advocates the re-engineering of all of nature to minimize suffering, and Peter Singer, who establishes a hierarchy of rational beings. For the latter, the more of a rational being you are, the more the world should conform to your needs- so much so that Singer thinks infanticide is morally justifiable because newborns do not yet possess the rational goals and ideas regarding the future possessed by older children and adults. Greene, who thinks Utilitarianism could serve as mankind’s common “moral currency”, or language, himself has little to say regarding where his concept of Utilitarianism falls on this spectrum, but we are very far indeed from anything like universal moral agreement on the Utilitarianism of Pearce or Singer.

21st century global society has indeed come to embrace some general norms. Past forms of domination and abuse, most notably slavery, have become incredibly rare relative to other periods of human history, and, though far from universally, women are better treated than in ages past. External relations between states have improved as well: the world is far less violent than in any other historical era, and the global instruments of cooperation and, if far too seldom, coordination are more robust.

Many factors might be credited here, but the one I would least credit is the adoption of any universal moral philosophy. Certainly one of the large contributors has been the increased range over which we exercise our automatic systems of morality. Global travel, migration, television, fiction and film all allow us to see members of “Them” as fragile, suffering, and human-all-too-human individuals like “Us”. This may not provide us with a negotiating tool to resolve global disputes over questions like global warming, but it does put us on the path to solving such problems. For a stable global society will only appear when we realize that, in more senses than not, there is no tribe on the other side, that there is no “Them”, there is only “Us”.