Imagining the Anthropocene

Mining operations near Green Valley, Arizona. (NASA)

 

Almost a year ago now, while reading an article by the historian Yuval Harari in the British newspaper The Guardian, I had a visceral experience of what it means to live in the Anthropocene. Harari’s piece was about the horrors of industrial meat production, and as evidence of the scale of the monstrosity, he listed a set of facts that I had either not known or had never taken the time to fully contemplate. Facts such as that the world’s domesticated animals, taken together, not only weigh double all the human beings on earth, but seven times all of the world’s large land animals combined, or that there are more chickens in Europe than all of that continent’s wild birds taken together. It struck me while reading Harari’s piece to what degree we as a species have changed much of nature into something mechanically hellish, and I shuddered at the thought.

If one conducted a kind of moral forensics of the human impact on nature, industrial farming would certainly be among its darkest findings. Luckily for us, such a forensics would also turn up signs of human benevolence, such as the millions of acres many of the world’s nations have set aside for the protection of wildlife, or our growing propensity to establish animal rights.

While a moral forensics would give us an idea of our impact on the natural world right now, the proposed geological epoch known as the Anthropocene is measured in the duration of the geological and atmospheric scars we are leaving behind, for geological epochs are marked off by the differences in the layers put down by planet-transforming processes. Collectively we have become just such a process, and hypothetical geologists living in the deep future will be able to read evidence of how we have shaped and changed the earth and the rest of life upon it. Whether that evidence ultimately comes to reflect our uncontrolled and self-destructive avariciousness and shortsightedness, or our benevolence and foresight, remains up to us to decide.

Communicating the idea that the Anthropocene is both the period of greatest danger and a historical opportunity to right our relationship to the planet and to one another isn’t easy in an age of ever sharper ideological divisions and politics performed in 140 characters. Nevertheless, such communication is something Steven Bradshaw’s newly released documentary ANTHROPOCENE does brilliantly, introducing viewers to the idea in a way that retains its complexity while at the same time conveying the concept in the visceral way only a well-made film can accomplish.

ANTHROPOCENE conveys the perspective of seven members of the working group on the Anthropocene, along with an environmental expert, on what it means to say we have entered the Anthropocene. Among them are some of the leading figures of twenty-first century environmentalism: Will Steffen, Erle Ellis, Jan Zalasiewicz, Andrew Revkin, John McNeill, Monica Berger Gonzalez, Eric Odada, and Davor Vidas.

The working group was established by the Subcommission on Quaternary Stratigraphy, “the only body concerned with stratigraphy on a global scale”. Its task is to establish whether we have truly exited the geological epoch in which humans have lived since our beginnings, the Holocene, and caused the onset of a new epoch, the Anthropocene.

It was the Nobel Prize-winning atmospheric chemist Paul J. Crutzen who in 2000 helped revive the term “anthropocene” and propel it to its current unprecedented traction. The idea of the Anthropocene may be academic, but such ideas have consequences, and conveying them to the larger public, as Bradshaw’s documentary sets out to do, is extremely important in light of these consequences. Only when we have some intuitive sense of the scale of humanity’s impact on the planet since the industrial revolution can we overcome the much older sense that we are dwarfed by nature and that anything we are capable of doing pales in comparison to what nature herself does to us.

Bradshaw’s ANTHROPOCENE tells the story of the development of humanity into a force capable of shaping the whole of nature in the form of chapters of a book. While the early chapters set the stage and introduce us to a human species that has always shaped, and, as with the extinction of megafauna, severely disrupted nature in pursuit of our own interests, the rising action of the story does not occur until as late as the 1950s with the “Great Acceleration”, when human population growth and energy use began their exponential rise. And though the developed countries have since fallen off this exponential curve, the majority of the world’s population is only now undergoing a Great Acceleration of their own.

While human beings had an outsized impact well before the contemporary period that began around the middle of the last century, only after 1950 has our effect been such as to leave behind evidence that will be discoverable millions of years into the future, evidence of a completely different order than the kinds of scars left by non-human natural processes.

Many of these scars will be located in what the documentary calls “sacrifice zones”: areas such as islands in the Pacific where countries tested the most powerful nuclear weapons ever built. Sacrifice zones also comprise the vast areas of the earth that have been scarred by our resource extraction, whole mountains torn into in the quest for coal or precious metals. In addition there will be the huge swaths of territory where we have disposed of the waste of human civilization. Our plastics and toxins will likely far outlast us, while the aspects we most identify with the pinnacle of urbanism, being built of concrete and glass, may survive for less time than the stone monuments of prior civilizations.

Still, much of the underbelly of cities, along with other structures and artifacts that become subsumed by tectonic plates, will form an event layer, which will speak of the strange species that dominated a world only to lose it: ourselves.

It will not only be this debris and these artifacts, that is, what is there, which will call out from the geological strata the sheer fact of our past existence; we will also be legible through what is absent. If we succeed in causing what some are calling the sixth great extinction, then much of the Anthropocene stratum will be a kind of dead zone lacking the great diversity of plants and animals found in the strata before it.

The idea of a planet scarred for millions of years by our technological civilization is certainly disturbing, yet the ultimate message of Bradshaw’s documentary neither surrenders to the dystopian spirit of the times, nor does it counsel stoic resignation to our self-destruction. The message I took from the film was much more nuanced: we have spent the last few centuries transforming a nature we believed separate from us only to learn that this distinction was like a child playing pretend. If we can mature quickly enough we can foster a world good for both ourselves and the rest of life. But should we fail to grow up in time, the earth will shrug free from our weight, and the life that remains will continue into the deep future without us.

* Bullfrog Films is the distributor of the documentary ANTHROPOCENE and holds the license to public performance rights. The DVD is featured in their catalog: http://www.bullfrogfilms.com/catalog/anthro.html

 

 

A Less Bleak Lesson from the Silent Universe


The astronomers Adam Frank and Woodruff Sullivan have an interesting paper out in which they’ve essentially flipped the Drake equation on its head. If that equation is meant to give us some handle on the probability that there are aliens out there, Frank and Sullivan have used the plethora of exoplanets discovered since the launch of the Kepler space telescope to calculate the chance that we alone have been the only advanced civilization in the 13.7 billion year history of the universe. I won’t bore you with the actual numbers, but they estimate the chance that we’re the first and only is 1 in 10 billion trillion. I shouldn’t have to tell you that is a really, really small number.

Frank and Sullivan’s paper emerges as a consequence of the fact that we now have very clear answers to at least some of the values of the Drake equation. We now know not only how many stars are out there, but also how many of those stars are likely to have planets, and how many of those planets are likely within their star’s habitable zone. It is the fact that this number of potentially habitable planets is, to channel Donald Trump, so huge that leads to the conclusion (when plugged into Frank and Sullivan’s rejiggered Drake equation, which drops the requirements that an alien civilization exist presently and lie within communication distance of us) that the only way we could have been the first and only technological civilization is if we were the beneficiaries of the most extraordinary luck.
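Their “1 in 10 billion trillion” figure is easy to reproduce in rough outline. A minimal sketch, using ballpark inputs of my own (roughly 2 × 10²² stars in the observable universe, about a fifth with a habitable-zone planet) rather than the paper’s exact numbers:

```python
# Rough reconstruction of Frank & Sullivan's "pessimism line": the per-planet
# probability of a technological civilization below which we would likely be
# the only one ever. Both inputs below are assumed ballpark figures.
STARS_IN_OBSERVABLE_UNIVERSE = 2e22
FRACTION_WITH_HABITABLE_PLANET = 0.2

habitable_planets = STARS_IN_OBSERVABLE_UNIVERSE * FRACTION_WITH_HABITABLE_PLANET
pessimism_line = 1.0 / habitable_planets

print(f"habitable planets: {habitable_planets:.0e}")
print(f"pessimism line:    {pessimism_line:.1e}")  # on the order of 1e-22
```

Unless nature makes a technological civilization rarer than about one chance in 10 billion trillion per habitable planet, we are almost certainly not the first.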

We’re not quite at the point, however, where we can make a definitive guess as to just how lucky, or how normal, our civilization is in the history of the cosmos. To do that we’ll need to find out what the probability is that life will develop on planets that have liquid water, and how difficult the leap is from single-celled life to multicellularity. As I’ve written about before, these questions are the subject of sharp differences and debate between biophysicists and evolutionary biologists, with some of the former, notably Jeremy England, seeing the laws of physics as in a sense priming the universe for life and complexity, and the latter seeing evolution as much more contingent and intelligence of our sort a lucky accident.

The historical question, namely, how likely complex life is to give rise to a technological species, will remain unanswerable as long as the universe continues to be silent, or until we discover the artifacts of some dead civilization. Frank and Sullivan’s answer to the silence is that the vast majority of alien civilizations have died, and those that might exist are beyond any distance at which we could observe or communicate with them. Theirs is the ultimate cautionary tale.

Then again, perhaps we’re on the very verge of finding other technological civilizations that either exist now or existed in the very recent past, once we start to look in the proper way. That, I think, is the lesson we should take from last year’s observation by the Kepler space telescope of a very strange phenomenon circling a star between the constellations of Cygnus the Swan and Lyra. Initially caught by citizen scientists reviewing Kepler data, what struck astronomers such as Tabetha Boyajian was the fact that whatever it is they were looking at is distributed so widely, and follows such an unusual orbit, that it has led some to speculate that there is a remote possibility this object might be some sort of Dyson Sphere.

It seems most likely that some natural explanation will be found for the Kepler data, and yet, whatever happens, we are witnessing the birth of what Frank and Sullivan call cosmic archeology: the use of existing tools, and eventually the creation of tools for that purpose, to look for the imprint of technological civilizations elsewhere in the cosmos.

We do have pretty good evidence of at least one thing: if there are, or have been, technological civilizations out there, none is using the majority of its galaxy’s energy. That is what Jason Wright at Penn State, who conceived of the recent scan of 100,000 galaxies observed by NASA’s WISE satellite for the infrared fingerprints of a galactic civilization, discovered. Wright observed:

Our results mean that, out of the 100,000 galaxies that WISE could see in sufficient detail, none of them is widely populated by an alien civilization using most of the starlight in its galaxy for its own purposes. That’s interesting because these galaxies are billions of years old, which should have been plenty of time for them to have been filled with alien civilizations, if they exist. Either they don’t exist, or they don’t yet use enough energy for us to recognize them.

Yet perhaps we should conclude something different about the human future from this absence of galactic-scale civilizations than the sad recognition that our species is highly unlikely to have one. Instead, maybe what we’re learning is that the kind of extrapolation of the industrial revolution into an infinite future that has been prevalent in science-fiction and futurism for well over a century is itself deeply flawed. We might actually have very little idea of what the future will actually be like.

Then again, maybe the silence gives us some clues. Rather than present us with evidence for our species’ probable extinction, perhaps what we’re witnessing is the propensity of civilizations to reach technological limits before they have grown to the extent that they are observable across great interstellar distances by other technological civilizations. To quote myself from a past post:

This physics of civilizational limits comes from Tom Murphy of the University of California, San Diego who writes the blog Do The Math. Murphy’s argument, as profiled by the BBC, has some of the following points:

  • Assuming rising energy use and economic growth remain coupled, as they have in the past, confronts us with the absurdity of exponentials. At a 2.3 percent growth rate, within 2,500 years we would require all the energy from all the stars in the Milky Way galaxy to function.
  • At 3 percent growth, within four hundred years we will have boiled away the earth’s oceans, not because of global warming, but from the excess heat that is the normal byproduct of energy production. (Even clean fusion leaves us boiling away the world’s oceans for the same reason.)
  • Renewables push out this reckoning, but not indefinitely. At a 3 percent growth rate, even if solar efficiency were 100 percent, we would need to capture all of the sunlight hitting the earth within three hundred years.
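Murphy’s timescales fall out of simple compound-growth arithmetic. A back-of-envelope sketch, where today’s world power use and the astronomical luminosities are rough assumptions of mine rather than Murphy’s exact inputs:

```python
import math

def years_to_reach(target_watts, current_watts=18e12, growth=0.023):
    """Years of compounded growth for energy use to climb from current_watts
    (roughly today's ~18 TW of world power use, an assumed figure) to target_watts."""
    return math.log(target_watts / current_watts) / math.log(1.0 + growth)

SUNLIGHT_ON_EARTH = 1.7e17   # watts of sunlight intercepted by Earth (approximate)
MILKY_WAY_OUTPUT = 4e36      # rough total stellar luminosity of the galaxy

print(years_to_reach(SUNLIGHT_ON_EARTH, growth=0.03))    # ~300 years
print(years_to_reach(MILKY_WAY_OUTPUT, growth=0.023))    # ~2,400 years
```

The absurdity is all in the logarithm: adding nineteen orders of magnitude to the target only pushes the date out by a couple of thousand years.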

There are thus reasonable grounds for assuming no technological civilization ever reaches a galactic scale, whether because it has destroyed itself or because, for all we know just as likely, all such civilizations run into developmental constraints far closer to our own than someone like Ray Kurzweil would have you believe.

Energy constraints might even result in the cyclical return of intelligence from silicon back to carbon-based forms. At least that’s one unique version of the future as imagined by the astrobiologist Caleb Scharf in a recent piece for Aeon. Silicon intelligence has some advantages over carbon-based, as Lee Sedol can tell you, but you can’t beat life when it comes to efficiency. As Scharf points out:

Estimates of what you’d need in terms of computing power to approach the oomph of a human brain (measured by speed and complexity of operations) come with an energy efficiency budget that needs to be about a billion times better than that wall.

To put that in a different context, our brains use energy at a rate of about 20 watts. If you wanted to upload yourself intact into a machine using current computing technology, you’d need a power supply roughly the same as that generated by the Three Gorges Dam hydroelectric plant in China, the biggest in the world. To take our species, all 7.3 billion living minds, to machine form would require an energy flow of at least 140,000 petawatts. That’s about 800 times the total solar power hitting the top of Earth’s atmosphere. Clearly human transcendence might be a way off.
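Scharf’s figures check out on the back of an envelope. A sketch using round numbers of my own (a ~20 GW, Three Gorges-scale power plant per uploaded mind, and the standard solar constant) rather than Scharf’s exact inputs:

```python
import math

SOLAR_CONSTANT = 1361.0    # W per square meter at the top of the atmosphere
EARTH_RADIUS = 6.371e6     # meters

# Sunlight intercepted by Earth's cross-section: about 1.7e17 W
solar_input = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2

POWER_PER_UPLOADED_MIND = 20e9   # ~20 GW, roughly Three Gorges scale (assumed)
MINDS = 7.3e9                    # living human minds

all_minds = MINDS * POWER_PER_UPLOADED_MIND   # ~1.5e20 W, i.e. ~150,000 petawatts
print(all_minds / solar_input)                # roughly 800, as Scharf says
```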

A problem that neither Murphy nor Scharf really deals with is one of integration over a vast scale. A not insignificant group of techno-optimists sees the human technological artifice not just as an advanced form of industry-based civilization but as an emerging universal mind in embryo. Personally, I think it more likely that we are moving in the exact opposite direction, towards a balkanization of this global brain, but there might be reasons to think that even if what we’re seeing is the birth pangs of something out of Stanisław Lem’s Solaris or Teilhard de Chardin, such an intelligence wouldn’t occupy a space all that much larger than the earth.

As the physicist Gregory Laughlin recently pointed out in the magazine Nautilus, the speed of light would seem to impose limits on how big any integrated intelligent being can become:

If our brains grew enormously to say, the size of our solar system, and featured speed-of-light signaling, the same number of message crossings would require more than the entire current age of the universe, leaving no time for evolution to work its course. If a brain were as big as our galaxy, the problem would become even more severe. From the moment of its formation, there has been time for only 10,000 or so messages to travel from one side of our galaxy to the other. We can argue, then, that, it is difficult to imagine any life-like entities with complexity rivaling the human brain that occupy scales larger than the stellar size scale. Were they to exist, they wouldn’t yet have had sufficient time to actually do anything.

Since the industrial revolution our ideas about both the human future and the nature of any alien civilization have taken the shape of more of the same. Yet the evidence so far seems to point to a much different fate. We need to start thinking through the implications of the silence beyond just assuming we are either prodigies, or that, in something much less than the long run, we’re doomed.

 

The Deeper Meaning of the Anthropocene


Last year, when I wrote a review of E.O. Wilson’s book The Meaning of Human Existence, I felt sure it would be the then 85-year-old’s last major work. I was wrong, having underestimated Professor Wilson’s already impressive intellectual stamina. Perhaps his latest book, Half-Earth: Our Planet’s Fight for Life, is indeed his last, the final book of the trilogy begun with The Social Conquest of Earth and The Meaning of Human Existence. This has less to do with his mental staying power, which I will not make the mistake of underestimating again, than with the fact that it’s hard to imagine what might follow Half-Earth, for with it Wilson has less started a conversation than attempted to launch a crusade.

The argument Wilson makes in Half-Earth isn’t all that difficult to understand, and for those who are concerned with the health of the planet, and especially the well-being of the flora and fauna with which we share the earth, it might initially be hard to disagree with. Powerfully, Wilson reminds us that we are at the beginning of the sixth great extinction, a mass death of species on par with other great dyings such as the one that killed the dinosaurs.

The difference between the mass extinction of our own time and those that occurred in the past is that rather than being the consequence of mindless processes, like a meteor strike or bacteria breathing poison, it is the result of the actions of a species that actually knows what it is doing- that is, us. Wilson’s answer to the crisis we are causing is apparent in the very title of his book. He is urging us to declare half of the earth’s surface a wildlife preserve where large-scale human settlement and development will be prohibited.

Any polemic such as the one Wilson has written requires an enemy, but rather than take aim at the capitalist/industrial system, or the aspiration to endless consumption, Wilson’s enemy is a relatively new and not yet influential movement within environmentalism that aims to normalize our perspective on the natural world.

Despite the fact that he definitely has a political and ideological target with which he takes umbrage, being a scientist rather than a philosopher he fails to clearly define what exactly it is. Instead he labels anyone who holds doubts about the major assumptions of the environmental movement as believers in the “Anthropocene”- the idea that human beings have become so technologically powerful that we now constitute a geological force.
The problem with this is that Wilson’s beef is really only with a small subset of people who believe in the Anthropocene; indeed, Wilson himself seems to believe in it, which shows you just how confused his argument is.

The subset he opposes would include thinkers like Emma Marris or Jedediah Purdy who have been arguing that we need to untangle ourselves from ideas about nature that we inherited from 19th century romanticism. These concepts regarding the purity of the natural as opposed to the artificiality of the man-made- the idea that not only is humanity distinct from nature but that anything caused by our agency is somehow unnatural- are now both ubiquitous and the subject of increasing doubts.

While mass extinction is certainly real and constitutes an enormous tragedy, it does not necessarily follow that the best way to counter such extinction is to declare half of the earth off limits to humans. Much better for both human and animal welfare would be to make the human artifice more compatible with the needs of wildlife. Though the idea of a pure, wild and natural place free from human impact, and above all dark and quiet, is one I certainly understand and find attractive, our desire that it exist is much less a matter of environmental science than a particular type of spiritual longing.

As Daniel Duane pointed out in a recent New York Times article, the places we deem to be the most natural, that is, the national parks, which have been set aside for the very purpose of preserving wilderness, are instead among the most human-managed landscapes on earth. And technology, though it can never lead to complete mastery, makes this nature increasingly easy to manage:

More and more, though, as we humans devour habitat, and as hardworking biologists — thank heaven — use the best tools available to protect whatever wild creatures remain, we approach that perhaps inevitable time when every predator-prey interaction, every live birth and every death in every species supported by the terrestrial biosphere, will be monitored and manipulated by the human hive mind.

Yet even were we to adopt Wilson’s half-earth proposal whole cloth, we would still face scenarios where we will want to act against the dictates of nature. There are, for instance, good arguments to intervene on behalf of, say, bats whose populations have been decimated by the white-nose fungus, or great apes threatened with extinction as a consequence of viral infections. Where and why such interventions occur are more than merely scientific questions, and they arise not from the human desire to undo the damage we have done, but from the damage nature inflicts upon herself.

From the opposite angle, climate change will not respect any artificial borders we establish between the natural and the human worlds. It seems clear that we have a moral duty to limit the suffering nature experiences as a consequence of our action or inaction. We are in the midst of discovering the burden of our moral responsibility. Perhaps this discovery points to a need to expand the moral boundaries of the Anthropocene itself.

Rather than abandoning or merely continuing to narrowly apply the idea of the Anthropocene to the environment alone, maybe we should extend it to embrace other aspects of human agency that have expanded since the birth of the modern world. For what has happened, especially since the industrial revolution, is that areas previously outside the human ability to affect through action have come increasingly not so much under our control as within our ability to influence, both for good and ill.

It’s not just nature that is now shaped by our choices, that has become a matter of moral and political dispute, but poverty and hunger, along with disease. Some even now interpret death itself in moral and political terms. With his half-earth proposal Wilson wants to do away with a world where the state of the biosphere has become a matter of moral and political dispute. This dismissal of human political capacity and rights seems to run like a red thread through Wilson’s thinking, and ends, as I have pointed out elsewhere, in treating humans like animals in a game preserve.

Indeed, the American ideal of wilderness as an edenic world unsullied by the hands of man that Wilson wants to see over half the earth has had negative political consequences even in the richest of nations. The recent violent standoff in Oregon emerged out of the resentments of a West where the federal government owns nearly 50 percent of the land. Such resentments have helped make the West, which might otherwise lean culturally in a quite different direction, a bulwark of the political right. As Robert Fletcher and Bram Büscher have argued in Aeon, Wilson’s prescription could result in grave injustices in the developing world against native peoples, in the same way the demands of environmentalists for wilderness stripped of humans resulted in the violent expulsion of Native Americans from large swaths of the American West.

Eden, it seems, refuses to be reestablished despite our best efforts and intentions. Welcome to the Fall.

 

Bruce Sterling urges us not to panic, just yet


My favorite part about the SXSW festival comes at the end. For three decades now the science-fiction writer Bruce Sterling has been giving some of the most insightful (and funny) speeches on the state of technology and society. In some sense this year’s closing remarks were no different, and in others they represented something very new.

What made this year’s speech different was that politics has taken such a weird turn, like something out of dystopian science-fiction, that Sterling, having mastered the craft, felt obliged to anchor our sense of reality. He did this, however, only after trying to come to grips with exactly why things had gotten so weird that the writers of The Simpsons seemed to be in possession of a crystal ball.

A read on events Sterling finds somewhat compelling is that put forward by Clay Shirky, who claims that the age of social media has shattered something political science geeks call the Overton window. The Overton window is essentially the boundary of politically acceptable discourse as defined by political elites. Sterling points out that in the age of broadcast television that boundary was easy to control, but with the balkanization of media- first with cable TV and then the Internet (and I would add talk radio)- that border has eroded.

Here’s the conservative David French’s view on what Donald Trump himself has done to the Overton window:

Then along came Donald Trump. On key issues, he didn’t just move the Overton Window, he smashed it, scattered the shards, and rolled over them with a steamroller. On issues like immigration, national security, and even the manner of political debate itself, there’s no window left. Registration of Muslims? On the table. Bans on Muslims entering the country? On the table. Mass deportation? On the table. Walling off our southern border at Mexico’s expense? On the table. The current GOP front-runner is advocating policies that represent the mirror-image extremism to the Left’s race and identity-soaked politics.

All this certainly resembles what Moisés Naím has described as the end of power, where traditional institutions and elites have lost control over events largely as a result of a democratized communication environment. Or, as Sterling himself put it in his speech, the political parties have been:

“Balkanized by demagogues who brought in their own megaphones.”

Sterling thinks it’s clear that the new technology and media landscape is a contributing factor to the current dystopian ambiance. The world has tended to take some very strange turns during the rise to dominance of new forms of media and new forms of economy, and maybe this is one of those moments where old media and tech are supplanted by the new, in the form of the “Big Five”: Apple, Amazon, Alphabet (Google), Facebook and Microsoft. Sterling thinks the academic Shoshana Zuboff is onto something when she describes this new order as surveillance capitalism, an economic order based on turning the private lives of individuals into a saleable commodity.

Sterling is clearly worried about this but is also certain that the illusion of techno-libertarianism behind something like Bitcoin isn’t the solution. Some alternative technological order can’t solve our problems, but if it can’t solve them then perhaps technology itself isn’t the primary source of our problems in the first place.

Evidence that technology alone, or the coming into being of surveillance capitalism, isn’t to blame can be seen in the global nature of the current political crisis. The same, and indeed incomparably worse, problems exemplified by the rise of Trump in the US are apparent almost everywhere. Middle Eastern states have collapsed, an anti-immigrant, anti-globalization right is on the rise across Europe, and Great Britain is poised to exit the EU, further threatening that institution with dissolution. Venezuela is on the verge of collapse, nationalist tensions continue to roil Asia, and the global economy continues to suffer the injuries of the financial crisis even as economic policies become increasingly unorthodox. A much more environmentally and politically unstable world looms.

Yet Sterling points out that there’s one people that seems particularly calm through this whole affair and does not generally seem panicked by the bizarre turn politics has taken in the US. The Italians see in Trump America’s version of their own Silvio Berlusconi. If politics in the US follows the Berlusconi model after a Trump victory (however unlikely), then though we may be in for a very seedy political period, it will not necessarily be a dangerous or chaotic one.

As for myself, I am not as sanguine as Sterling about the idea of a President Trump, given that he would have at his disposal the most powerful military and surveillance apparatus on the planet. Francis Fukuyama, who also pointed out the resemblance between Trump and Berlusconi, thinks Trump’s flirtation with violence is much more troubling.

Nevertheless, Sterling certainly is right when he points out that, in light of historical precedents- say the 1960s- the level of political violence we have seen in 2016 is nothing to panic over. Nor is society in any way in a state of collapse- the lights are still on, food is still available, we are not entering some survivalist scenario- for the moment.

While events elsewhere may continue to take the world in a dystopian direction as a result of state and institutional collapse, the dystopia the US will most likely enter will be much less of the type found in science-fiction novels. It is one where the US is governed by a gentrified political elite that clings to its own power and the status quo while Americans remain distracted by the “glass lozenges” of their smartphones, and where mass surveillance isn’t scary a la Minority Report because it isn’t all that effective. Or, as Sterling puts it:

“Is there anybody with a drone over their head who is actually doing what the guys with the drones want?”

It’s a world where everything is failing but nothing has truly and completely failed, where we have plenty to be unhappy about but no reason in particular to panic.

 

A Box of a Trillion Souls


“The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.”

Jaron Lanier

 

Stephen Wolfram may, or may not, have a justifiable reputation for intellectual egotism, but I like him anyway. I am pretty sure this is because, whenever I listen to the man speak, I most often walk away not so much with answers as with a whole new way to frame questions I had never seen before, but sometimes I’m just left mesmerized, or perhaps bewildered, by an image he’s managed to draw.

A while back, during a talk/demo at the SXSW festival, he managed to do this when he brought up the idea of “a box of a trillion souls”. He didn’t elaborate much, but left it there, after which I chewed on the metaphor for a few days and then returned to real life, which can be mesmerizing and bewildering enough.

A couple of days ago I finally came across an explanation of the idea in a speech by Wolfram over at John Brockman’s Edge.org. There, Wolfram also opined on the near future of computation and the place of humanity in the universe. I’ll cover those thoughts first before I get to his box full of souls.

One of the things I like about Wolfram is that, uncommonly for a technologist, he tends to approach explanations historically. In his speech he lays out a sort of history of information that begins with information being conveyed genetically with the emergence of life, moves to the interplay between individual and environment with the development of more complex life, and flowers in spoken language with the appearance of humans.

Spoken language eventually gave rise to the written word, though it took almost all of human history for writing to become nearly as common as speaking. For most of that time reading and writing were monopolized by elites. A good deal of mathematics, as well, has moved from being utilized by an intellectual minority to being part of the furniture of the everyday world, though more advanced maths continues to be understandable by specialists alone.

The next stage in Wolfram’s history of information, the one we are living in, is the age of code. What distinguishes code from language is that it is “immediately executable”, by which I understand him to mean that code is not just a set of instructions but, when run, is itself the thing those instructions describe.

Much like reading, writing and basic mathematics before the invention of printing and universal education, code is today largely understood by specialists only. Yet rather than endure for millennia, as was the case with the monopoly of writing by the clerisy, Wolfram sees the age of non-universal code to be ending almost as soon as it began.

Wolfram believes that specialized computer languages will soon give way to “natural language programming”. A fully developed form of natural language programming would be readable by both computers and human beings- by numbers of people far beyond those who now know how to code- so that code would be written in typical human languages like English or Chinese. He is not just making idle predictions, but has created a free program that allows you to play around with his own version of an NLP.

Wolfram makes some predictions as to what a world where natural language programming became ubiquitous- where just as many people could code as can now write- might look like. The gap between law and code would largely disappear. The vast majority of people, including school children, would have the ability to program computers to do interesting things, including performing original research. As computers become embedded in objects, the environment itself will be open to the programming of everyone.

All this would seem very good for us humans and would be even better given that Wolfram sees it as the prelude to the end of scarcity, including the scarcity of time that we now call death. But then comes the AI. Artificial intelligence will be both the necessary tool to explore the possibility space of the computational universe and the primary intelligence via which we interact with the entirety of the realm of human thought.  Yet at some threshold AI might leave us with nothing to do as it will have become the best and most efficient way to meet our goals.

What makes Wolfram nervous isn’t human extinction at the hands of super-intelligence so much as what becomes of us after scarcity and death have been eliminated and AI can achieve any goal- artistic ones included- better than us. This is Wolfram’s vision of the not too far off future, which, given the competition from even current reality, isn’t nearly weird enough. It’s only when he starts speculating on where this whole thing is ultimately headed that anything so strange as Boltzmann brains makes its appearance, yet something like them does appear, and no one should be surprised given his ideas about the nature of computation.

One of Wolfram’s most intriguing, and controversial, ideas is something he calls computational equivalence. With this idea he claims not only that computation is ubiquitous across nature, but that the line between intelligence and merely complicated behavior that grows out of ubiquitous natural computation is exceedingly difficult to draw.

For Wolfram the colloquialism that “the weather has a mind of its own” isn’t just a way of complaining that the rain has ruined your picnic, but, in an almost panpsychic or pantheistic way, captures a deeper truth: that natural phenomena are the enactment of a sort of algorithm, which, he would claim, is why we can successfully model their behavior with other algorithms we call computer “simulations.” The word simulations needs quotes because, if I understand him, Wolfram is claiming that there would be no difference between a computer simulation of something at a certain level of description and the real thing.
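Wolfram’s paradigm case for simple rules enacting complex behavior is the elementary cellular automaton, most famously Rule 30, whose single-line update rule produces patterns irregular enough to serve as a randomness source. A minimal sketch (the grid width, step count and ASCII rendering are my own illustrative choices, not Wolfram’s):

```python
def step(cells, rule=30):
    """Apply an elementary cellular automaton rule to one row of cells.

    Each new cell is bit (4*left + 2*center + right) of the rule number.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, steps=15, rule=30):
    """Evolve from a single black cell and return every row of the history."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

for row in run():
    print("".join("#" if c else "." for c in row))
```

Despite the rule fitting in a single byte, the triangle of cells it grows shows no obvious repetition, which is the intuition behind treating natural processes as “running programs.”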

It’s this view of computation that leads Wolfram to his far future and his box of a trillion souls. For if there is no difference between a perfect simulation and reality, if there is nothing that will prevent us from creating perfect simulations, at some point in the future however far off, then it makes perfect sense to think that some digitized version of you, which as far as you are concerned will be you, could end up in a “box”, along with billions or trillions of similar digitized persons, with perhaps millions or more copies of  you.   

I’ve tried to figure out where exactly this conclusion for an idea I otherwise find attractive- that is, computational equivalence- goes wrong, other than just in terms of my intuition or common sense. I think the problem might come down to the fact that while many complex phenomena in nature may have computer-like features, they are not universal Turing machines, i.e. general purpose computers, but machines whose information processing is very limited and specific to that established by their makeup.

Natural systems, including animals like ourselves, are more like the Tic-Tac-Toe machine built by the young Danny Hillis and described in his excellent primer on computers, The Pattern on the Stone, which is still insightful decades after its publication. Of course, animals such as ourselves can show vastly more types of behavior and exhibit a form of freedom of a totally different order than a game tree built out of circuit boards and lightbulbs, but, much like such a specialized machine, the way in which we think isn’t a form of generalized computation, but shows a definitive shape based on our evolutionary, cultural and personal history. In a way, Wolfram’s overgeneralization of computational equivalence negates what I find to be his equally or more important idea of the central importance of particular pasts in defining who we are as a species, people and individuals.
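The Hillis machine makes the distinction concrete: its entire “mind” is an exhaustively wired game tree that can do one thing and nothing else. A sketch of the same idea in software, assuming the standard minimax treatment of Tic-Tac-Toe (the board encoding and function names are mine, not Hillis’s):

```python
from functools import lru_cache

# The eight winning lines of a 3x3 board, indexed 0..8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if X can force a win, -1 if O can, 0 for a forced draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0  # board full, no winner: draw
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], nxt)
            for i, s in enumerate(board) if s == "."]
    return max(vals) if player == "X" else min(vals)

# Perfect play from the empty board is a draw:
print(value("." * 9, "X"))  # → 0
```

The whole tree is tiny enough to enumerate, which is exactly why such a machine, however flawless at its one game, is nothing like a general-purpose computer.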

Oddly enough, Wolfram falls into the exact same trap that the science-fiction writer Stanislaw Lem fell into after he had hit upon an equally intriguing, though in some ways quite opposite understanding of computation and information.

Lem believed that the whole system of computation and mathematics human beings use to describe the world was a kind of historical artifact, for which there must be much better alternatives buried in the way systems that had evolved over time processed information. A key scientific task, he thought, would be to uncover this natural computation and find ways to use it in the way we now use math and computation.

Where this leads him is to precisely the same conclusion as Wolfram: the possibility of building an actual world in the form of a simulation. He imagines the future designers of just such simulated worlds:

“Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything” considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess.” ( 291 -292)

Yet it seems to me that moving from the idea that things in the world- a storm, the structure of a seashell, the way particular types of problems are solved- are algorithmic, to the conclusion that the entirety of the world could be hung together in one universal algorithm, is a massive overgeneralization. Perhaps there is some sense in which the universe might be said to be weakly analogous, not to one program, but to a computer language (the laws of physics) upon which an infinite ensemble of other programs can be instantiated, but which is structured so as to make some programs more likely to be run while deeming others impossible. Nevertheless, which programs actually get executed is subject to some degree of contingency- all that happens in the universe is not determined from initial conditions. Our choices actually count.

Still, such a view continues to treat the question of corporeal structure as irrelevant, whereas structure itself may be primary.

The idea of the world as code, or DNA as a sort of code, is incredibly attractive because it implies a kind of plasticity which equals power. What gets lost, however, is something of the artifact-like nature of everything that is: the physical stuff that surrounds us, life, our cultural environment. All that is exists as the product of a unique history where every moment counts, and this history, as it were, is the anchor that determines what is real. Asserting that the world is or could be fully represented as a simulation either implies that such a simulation possesses the kinds of compression and abstraction, along with the ahistorical plasticity, that come with mathematics and code, or it doesn’t; and if it doesn’t, it’s difficult to say how anything like a person, let alone trillions of persons, or a universe, could actually, rather than merely symbolically, be contained in a box, even a beautiful one.

For the truly real can perhaps most often be identified by its refusal to be abstracted away or compressed and by its stubborn resistance to our desire to give it whatever shape we please.

 

How dark epistemology explains the rise of Donald Trump

Conjurer_Bosch

We are living in what is likely the golden age of deception. It would be difficult enough were we merely threatened with drowning in what James Gleick has called the flood of information, or were we doomed to roam blind through the corridors of Borges’ library of Babel, but the problem is actually much worse than that. Our dilemma is that the very instruments that once promised liberation via the power of universal access to all the world’s knowledge seem just as likely to be used to sow the seeds of conspiracy, to manipulate us and to obscure the path to the truth.

Unlike what passes for politicians these days, I won’t open with such a tirade only to walk away. Let me instead explain myself. You can trace the origins of our age of deception not only to the 2008 financial crisis but much further back, to its very root. Even before the 1950s, elites believed they had the economic problem, and therefore the political instability that came with it, permanently licked. The solution was some measure of state intervention into the workings of capitalism.

These interventions ranged on a spectrum from the complete seizure and control of the economy by the state in communist countries, to regulation, social welfare and redistributive taxation in even the most solidly capitalist economies such as the United States. Here both the pro-business and pro-labor parties, despite the initial resistance of the former, ended up accepting the basic parameters of the welfare state. Remember it was the Nixon administration that both created the EPA and flirted with the idea of a basic income. By the early 1980s, with the rise of Reagan and Thatcher, the hope that politics had become a realm of permanent consensus- Friedrich Engels’ prophesied “administration of things”- collapsed in the face of inflation, economic stagnation and racial tensions.

The ideological groundwork for this neo-liberal revolution had, however, been laid as far back as 1945 when state and expert directed economics was at its height. It was in that year that Austrian economist Friedrich Hayek in a remarkable essay entitled The Use of Knowledge in Society pointed out that no central planner or director could ever be as wise as the collective perception and decision making of economic actors distributed across an entire economy.

At the risk of vastly oversimplifying his argument, what Hayek was in essence pointing out was that markets provide an unrivaled form of continuous and distributed feedback. The “five year plans” of state-run economies may or may not have been able to meet their production targets, but only the ultimate register of price can tell you whether any particular level of production is justified or not.

A spike in price is the consequence of an unanticipated demand and will send producers scrambling to meet it the moment it is encountered. The hubris behind rational planning is that it claims to be able to see through the uncertainty that lies at the heart of any economy, and that experts at 10,000 feet are somehow more knowledgeable than the people on the ground, who exist not in some abstract version of an economy built out of equations, but in the real thing.

It was perhaps one of the first versions of the idea of the wisdom of crowds, and an argument for what we now understand as the advantages of evolutionary approaches over deliberate design. It was also perhaps one of the first arguments that what lies at the very core of an economy was not so much the exchange of goods as the exchange of information.

The problem with Hayek’s understanding of economics and information wasn’t that it failed to capture the inadequacies of state-run economies, at least at the level of information technologies they possessed when he was writing (a distinction I think important and hope to return to in the future), but that it was true for only part of the economy- the part dealing largely with the production and distribution of goods, and not the consumer economy that would take center stage after the Second World War.

Hayek’s idea that markets were better ways of conveying information than any kind of centralized direction worked well in a world of scarcity, where the problem was an accurate gauge of supply versus demand for a given resource, yet it missed that the new era would be one of engineered scarcity, where the key to economic survival was to convince consumers they had a “need” they had not previously identified. Or as John Kenneth Galbraith put it in his 1958 book The Affluent Society, we had:

… managed to transfer the sense of urgency in meeting consumer need that was once felt in a world where more production meant more food for the hungry, more clothing for the cold, and more houses for the homeless to a world where increased output satisfies the craving for more elegant automobiles, more exotic food, more elaborate entertainment- indeed for the entire modern range of sensuous, edifying, and lethal desires. (114-115).

Yet rather than seeing the economic problems of the 1970’s through this lens, that the difficulties we were experiencing were as much a matter of our expectations regarding what economic growth should look like and the measure of our success in having rid ourselves (in advanced countries) of the kinds of life threatening scarcity that had threatened all prior human generations, the US and Britain set off on the path prescribed by conservative economists such as Hayek and began to dismantle the hybrid market/state society that had been constructed after the Great Depression.

It was this revolt against state-directed (or even just restrained) capitalism which became the neoliberal gospel that reigned almost everywhere after the fall of the Soviet Union, and to which the Clinton administration converted the Democratic party. The whole edifice came crashing down in 2008, since which we have become confused enough that demons long dormant have come home to roost.

At least since the crisis, economists have taken a renewed interest not only in the irrational elements of human economic behavior, but in how that irrationality has itself become a sort of saleable commodity. A good version of this is Robert J. Shiller and George Akerlof’s recent Phishing for Phools: The Economics of Manipulation and Deception. In their short book the authors examine the myriad ways all of us are “phished”- probed by con artists looking for “phools” to take advantage of and manipulate.

The techniques have become increasingly sophisticated as psychologists have gotten a clearer handle on the typology of irrationality otherwise known as human nature. Gains in knowledge always come with tradeoffs:

“But theory of mind also has its downside. It also means we can figure out how to lure people into doing things that are in our interest, but not in theirs. As a result, many new ideas are not just technological. They are not ways to deliver good-for-you/good-for-me’s. They are, instead, new uses of the theory of mind,  regarding how to deliver good-for-me/bad-for-you’s.” (98)

This, it seems, would be the very opposite of the world dominated by non-zero-sum games that was heralded in the 1990s; rather it’s the construction of an entire society around the logic of the casino, where psychological knowledge is turned into a tool to push consumers into choices contrary to their own long-term interest.

This type of manipulation, of course, has been the basis of our economies for quite some time. What is different is the level of sophistication and resources being thrown at the problem of how to sustain human consumption in a world drowning in stuff. The solution has been to sell things that simply disappear after use- like experiences- which are made to take on the qualities of the ultimate version of such consumables, namely addictive drugs.

It might seem strange, but the Internet hasn’t made achieving safety from this manipulation any easier. Part of the reason for this is something Shiller and Akerlof do not fully discuss: that much of the information resources used in our economies serve the purpose not so much of selling things consumers would be better off avoiding, let alone conveying actual useful information, but of distorting the truth to the advantage of those doing the distorting.

This is a phenomenon for which Robert Proctor has coined the term agnotology. It is essentially a form of dark epistemology whose knowledge consists in how to prevent others from obtaining the knowledge you wish to hide.

We live in an age too cultured for any barbarism such as book burning or direct censorship. Instead we have discovered alternative means of preventing the spread of information detrimental to our interests. The tobacco companies pioneered this. Outright denials of the health risks of smoking were replaced with the deliberate manufacture of doubt. Companies whose businesses models are threatened by any concerted efforts to address climate change have adopted similar methods.

Warfare itself, where the power of deception and disinformation was always better understood, has woken up to its potential in the digital age: witness the information war still being waged by Russia in eastern Ukraine.

All this, I think, partly explains the strange rise of Trump. Ultimately, neoliberal policies failed to sustain rising living standards for the working and middle classes- with incomes stagnant since the 1970s. Perhaps this should never have been the goal in the first place.

At the same time we live in a media environment in which no one can be assumed to be telling the truth, in which everything is a sales pitch of one sort or another, and in which no institution’s narrative fails to be spun by its opponents into a conspiracy repackaged for maximum emotional effect. In an information ecosystem where trusted filters have failed, or are deemed irredeemably biased, and in which we are saturated by a flood of data so large it can never be processed, those who inspire the strongest emotions, even the emotion of revulsion, garner the only sustained attention. In such an atmosphere the fact that Trump is a deliberate showman whose pretense to authenticity is not that he is committed to core values, but that he is open about the very reality of his manipulations makes a disturbing kind of sense.

An age of dark epistemology will be ruled by those who can tap into the hidden parts of our nature, including the worst ones, for their own benefit, and will prey off the fact we no longer know what the truth is nor how we could find it even if we still believed in its existence. Donald Trump is the perfect character for it.

 

The Future of Money is Liquid Robots

 

Klimpt Midas

Over the last several weeks global financial markets have experienced some really stomach-churning volatility, and though this seems to have stabilized or even reversed for the moment as players deem assets oversold, the turmoil has revealed much more clearly what many have suspected all along: namely, that the idea that the Federal Reserve and other central banks, by forcing banks to lend money, could heal the wounds caused by the 2008 financial crisis was badly mistaken.

Central banks were perhaps always the wrong knights in shining armor to pin our hopes on, for at precisely the moment they were called upon to fill a vacuum left by a failed and impotent political system, the very instrument under their control, that is money, was changing beyond all recognition, and had been absorbed into the revolutions (ongoing if soon to slow) in communications and artificial intelligence.

The money of the near future will likely be very different than anything at any time in all of human history. Already we are witnessing wealth that has become as formless as electrons and the currencies of sovereign states less important to an individual’s fortune than the access to and valuation of artificial intelligence able to surf the liquidity of data and its chaotic churn.

As a reminder, authorities at the onset of the 2008 crisis were facing a Great Depression-level collapse in the demand for goods and services brought about by the bursting of the credit bubble. To stem the crisis, authorities largely surrendered to the demands for bailouts by the “masters of the universe” who had become their most powerful base of support. Yet for political and ideological reasons politicians found themselves unwilling or unable to provide similar levels of support for the lost spending power of average consumers, or to address the crisis of unemployment fiscally- that is, politicians refused to embark on deliberate, sufficient government spending on infrastructure and the like to fill the role vacated by the private sector.

The response authorities hit upon instead and that would spread from the United States to all of the world’s major economies would be to suppress interest rates in order to encourage lending. Part of this was in a deliberate effort to re-inflate asset prices that had collapsed during the crisis. It was hoped that with stock markets restored to their highs the so-called wealth effect would encourage consumers to return to emptying their pocket books and restoring the economy to a state of normalcy.

It’s going on eight years since the onset of the financial crisis, and though the US economy in terms of the unemployment rate and GDP has recovered somewhat from its lows, the recovery has been slow and achieved only under the most unusual of financial conditions- money lent out at an interest rate near zero. Even the small recent move by the Federal Reserve away from zero has been enough to throw the rest of the world’s financial markets into a tailspin.

While the US has taken a small step away from zero interest rates, a country like Japan has done the opposite and the unprecedented: it has set rates below zero. To understand how bizarre this is, consider that banks in Japan now charge savers to hold their money. Prominent economists have argued that the US would benefit from negative rates as well, and the Fed has not denied such a possibility should the fragile American recovery stall.
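To put a number on what a sub-zero rate means for an ordinary saver, here is a toy comparison between cash kept in a safe and a deposit charged a negative rate (the -0.1% figure and ten-year horizon are purely illustrative choices of mine, not Japan’s actual terms):

```python
def balance_after(principal, annual_rate, years):
    """Compound a balance at a fixed annual rate; a negative rate is a charge."""
    return principal * (1 + annual_rate) ** years

cash = balance_after(10_000, 0.0, 10)        # money kept in a safe
deposit = balance_after(10_000, -0.001, 10)  # a hypothetical -0.1% deposit rate

# What the saver pays, in effect, for keeping the money at the bank:
print(round(cash - deposit, 2))
```

However small the per-year charge looks, the sign flip inverts the whole logic of saving: holding a deposit now costs money, which is precisely the pressure to spend that policymakers are after.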

There are plenty of reasons why the kinds of growth that might have been expected from lending money out for free have failed to materialize. One reason I haven’t heard much discussed is that the world’s central banks are acting under a set of assumptions about what money is that no longer holds- that over the last few decades the very nature of money has fundamentally changed, in ways that make zero or lower interest rates set by central banks of decreasing relevance.

That change started quite some time ago with the move away from money backed by gold to fiat currencies. Those gold bugs who long to return to the era of Bretton Woods understand the current crisis mostly in terms of the distortions caused by countries that abandoned the link between money and anything “real”- that is, precious metals and especially gold- back in the 1970s. Indeed it was at this time that money started its transition from a means of exchange to a form of pure information.

That information is a form of bet. The value of the dollars, euros, yen or yuan in your pocket is a wager by those who trade in such things on the future economic prospects and fiscal responsibility of the power that issued the currency. That is, nation-states no longer really control the value of their currency; the money traders who operate the largest market on the planet- which in reality is nothing but bits representing the world’s currencies- are the ones truly running the show.

We had to wait for a more recent period for this move to money in the form of bits to change the existential character of money itself. Both the greatest virtue of money in the form of coins or cash and its greatest danger is its lack of memory. It is a virtue in the sense that money is blind to tribal ties and thus allows individuals to free themselves from dependence upon the narrow circle of those whom they personally know. It is a danger as a consequence of this same amnesia, for a dollar doesn’t care how it was made, and human beings being the types of creatures that they are will purchase all sorts of horrific things.

At first blush it would seem that the libertarian anarchism behind a digital currency like Bitcoin promises to deepen this ability of money to forget. However, governments along with major financial institutions are interested in Bitcoin-like currencies because they promise to rob cash of this very natural amnesia and serve as the ultimate weapon against the economic underworld. That is, rather than use Bitcoin-like technologies to hide transactions, they could be used to ensure that every transaction was linked, and therefore traceable, to its actual transactee.

Though some economists fear that the current regime of loose money and the financial repression of savers is driving people away from traditional forms of money to digital alternatives, others see digital currency as the ultimate tool- something that would also allow central banks to more easily do what they have so far proven spectacularly incapable of doing: namely, to spur the spending rather than the hoarding of cash.

Even with interest rates set to zero or below, a person can at least avoid losing their money by putting it in a safe. Digital currency, however, could be made to disappear if by a certain date it wasn’t spent or invested. Talk about power! Which is why digital currency will not long remain in the hands of libertarians and anarchists.

The loss of the egalitarian characteristics of cash will likely further entrench already soaring inequality. The wealth of many of us is already leveraged by credit ratings, preferred-customer privileges and the like, whereas others among us are charged a premium for our consumption in the form of higher interest rates, rent instead of ownership and the need to stretch income through government assistance and coupons. In the future all these features are likely to be woven into our digital currency itself. A dollar in my pocket will mean a very different thing from a dollar in yours or anyone else’s.

With the increased use of biometric technologies, money itself might disappear into the person and may become, as David Birch has suggested, synonymous with identity itself. The value of such personalized forms of currency- which is really just a measure of individual power- will be in a state of constant flux. With everyone linked to some form of artificial intelligence, prices will be subject to a permanent negotiation between bots of a kind rarely seen before.

There will be a huge inequality in the quality and capability of these bots, and while those of the wealthy or used by institutions will roam the world for short lived investments and unleash universal volatility, those of the poor will shop for the best deals at box stores and vainly defend their owners against the manipulation of ad bots who prod them into self-destructive consumption.

Depending on how well the bots we use for ourselves do against the bots used to cajole us into spending, the age of currency as liquid robot money could be extremely deflationary, but it would at the same time be more efficient and thus better for the planet.

One could imagine a much different use for artificial intelligence- something like the AIs that run economies in Iain Banks’ novels. It doesn’t look likely. Rather, to quote Jack Weatherford from his excellent The History of Money, which still holds up nearly two decades after it was written:

In the global economy that is still emerging, the power of money and the institutions built on it will supersede that of any nation, combination of nations, or international organization now in existence. Propelled and protected by the power of electronic technology, a new global elite is emerging- an elite without loyalty to any particular country. But history has already shown that the people who make monetary revolutions are not always the ones who benefit from them in the end. The current electronic revolution in money promises to increase even more the role of money in our public and private lives, surpassing kinship, religion, occupation and citizenship as the defining element of social life. We stand now at the dawn of the Age of Money. (268)