The Man Who Invented the Future


It is strange how some of the most influential individuals in human history can slip out of public consciousness to the extent that almost no one knows who they are. What if I were to tell you that the ideas of one person who lived almost 900 years ago were central to everything from the Protestant Reformation, to the French Revolution, to Russia and America’s peculiar types of nationalism, to Communism and Nazism, to neo-liberal optimists such as Steven Pinker and now Michael Shermer, to (of most interest to this audience) followers of Ray Kurzweil and his Singularity; would you believe me, or think I was pulling a Dan Brown?

That individual was a 12th-century monk named Joachim de Fiore, very much for real, and someone of whom until very recently I had never heard. In some ways this strange monk was not only a necessary figure in the formation of all the systems of thought and political movements listed above, he also might seriously be credited with inventing the very idea of the future itself.

Whether you’re a fan of Game of Thrones or not, put yourself for a moment in the mind of a European in the Middle Ages. Nothing around you has what we would consider a rational explanation; rather, everything can only be understood in reference to the will of an unseen deity or his demonic rival. It is quite a frightening place, and it would be spatially disorienting even for a modern person used to maps and to ideas about how the different worlds one sees while looking at the ground, or up towards the sky at night, fit together. It would have seemed like being stuck at the bottom of a seemingly endless well, unable to reach the “real” world above. A vertical version of Plato’s famous cave.

Starting in the 13th century there were attempts to understand humankind’s position in space more clearly, and some of these attempts were indeed brilliant, even anticipating current ideas such as the Big Bang and the multiverse. This was shown recently in a wonderful collaboration among scholars in the humanities, mathematicians, and scientists on the work of another largely forgotten medieval figure, Robert Grosseteste.

Even before Grosseteste was helping expand medievals’ understanding of space, Joachim de Fiore had expanded their notion of time. For time in the medieval worldview was almost as suffocating as the stuck-in-a-well quality of their notion of space.

Medievals largely lacked a notion of what we would understand as an impersonal future that would be different from the past. It wasn’t that a person living then lacked the understanding that their own personal tomorrow would be different from today – that their children would age and have children of their own, and that the old would die – it was that there was little notion that the world itself was changing. It would be a tough sell to get a medieval to pay for a trip into the future, for in their view, whether you traveled 100 or 1,000 years forward, everything would be almost exactly the same – unless, that is, the world wasn’t there to visit at all.

Being a medieval was a lot like being on death row: you know exactly how your life will end, you just don’t know when exactly you’ll run out of appeals. A medieval Christian had the end of the story in the Book of Revelation, with its cast of characters such as the Antichrist and scenes such as the battle of Armageddon. A Christian was always on the lookout for the arrival of all the props for the play that would be the end of the current world, and if they believed in any real difference between the present and the future it was that the world in which these events were to occur would be what we would consider less technologically and socially advanced than the Roman world into which Christ had been born. History for the medievals, far from being the story of human advancement, was instead the tale of societal decay.

Well before actual events on the ground (improvements in human living standards, technological capacity, and scientific understanding) undermined this medieval idea of the future as mere decadence before a final ending, Joachim de Fiore would do so philosophically and theologically.

Joachim was born around 1135 into a well-off family, his father being a member of the Sicilian court. Joachim too would begin his career as a court official, but it would not last. In his early twenties, while on a diplomatic mission to the Byzantine Emperor, Joachim broke away to visit the Holy Land, and according to legend felt the call of God. He spent the Lenten season in meditation on Mt Tabor where “on the eve of Easter day, he received ‘the fullness of knowledge’”. (3)

He became a monk and soon prior and then abbot of Corazzo, one assumes on account of his remarkable intelligence, but Joachim would spend his time writing his great trilogy: The Harmony of the New and Old Testaments, the Exposition of the Apocalypse, and the Psaltery of Ten Strings, eventually receiving a dispensation from his work as abbot from Pope Clement III so he could devote himself fully to his writing.

Eventually a whole monastic order would grow up around Joachim, and though this order would last only a few centuries, and Joachim would die in 1202 before finishing his last book, the Tract on the Four Gospels, his legacy would be felt most fully in the way we understand history and the future.

Joachim constructed a theory of history based on the Christian Trinity: the Age of the Father, from Adam to the birth of Christ; the Age of the Son, from Christ until the dawn of the third age; and the coming Age of the Spirit. There had been examples of breaking history into ages before; what made Joachim different was his idea that:

“… Scripture taught a record of man’s gradual spiritual developments, leading to a perfected future age which was the fulfillment of prophetic hope.” (13)

“… one to be ushered in with a New Age of guidance by the Holy Spirit acting through a new order of meditative men who truly contemplated God. “ (12)

What was distinct and new about this was that history became spiritualized: it became the story of humankind’s gradual improvement, a movement towards a state of perfection achievable in the material world (not in some purely spiritual paradise). And we could arrive at this destination if only we would accept the counsel of the virtuous and wise. It was a powerful story, and one that has done us far more harm than good.


Joachimite ideas can be found at the root of the millenarian fantasies of the European religious wars, and were secularized in the period of the French Revolution by figures like the Marquis de Condorcet and Auguste Comte. The latter even managed to duplicate Joachim’s tripartite model of history, only now the ages were the Theological, the Metaphysical, and the Positive, which we should read as religious, philosophical, and scientific, with Joachim’s monks being swapped for scientists.

Hegel turned this progressive version of history into a whole system of philosophy and the atheist Marx turned Hegel “upside down” and created a system of political economy and revolutionary program. The West would justify its imperial conquest of Africa and subjugation of Asia on the grounds that they were bringing the progressive forces of history to “barbaric” peoples. The great ideological struggle of the early 20th century between Communism, Nazism, and Liberalism pitted versions of historical determinism against one another.

If postmodernism should have meant the end of overarching metanarratives, the political movements that have so far most shaped the 21st century seem not to have gotten the memo. The century began with an attack by a quasi-millenarian cult (though they would not recognize Joachim as a forebear). The 9-11 attacks enabled an avaricious and ultimately stupid foreign policy on the part of the United States, one only possible because the American people actually believed in their own myth that their system of government represented the end of history and the secret wish of every oppressed people in the world.

Now we have a truly apocalyptic cult in the form of ISIS, while Russia descends into its own version of Joachimite fantasy based on Russia’s “historical mission”, and truth itself disappears in a postmodern hall of mirrors. Thus it is that Joachim’s ideas regarding the future remain potent. And the characters attracted to his idea of history are, of course, not all bad. I’d include among this benign group both singularitarians and the new neo-liberal optimists. The first because they see human history moving towards an inevitable conclusion and believe that there is an elite that should guide us into paradise – the technologists. Neo-liberal optimists may not be so starry-eyed, but they do see history as the gradual unfolding of progress and seem doubtful that this trend might reverse.

The problem I have with the singularitarians is their blindness to the sigmoid curve – the graveyard where all exponentials go to die. There is a sort of “evolutionary” determinism to singularitarianism that holds not only that there is only one destination to history, but that we already largely know what that destination is. For Joachim such determinism made sense, his world having been set up and run by an omnipotent God; for what I assume to be mostly secular singularitarians it does not, and we are left facing the contingencies of evolution and history.
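The point about the sigmoid curve can be made concrete in a few lines of code. The sketch below is my own illustration, with arbitrarily chosen growth rate r and ceiling K (not anyone’s forecast model): logistic growth tracks an exponential almost perfectly in its early phase, then flattens as it approaches its ceiling.

```python
import math

def exponential(t, r=0.5):
    # Unbounded exponential growth: exp(r * t)
    return math.exp(r * t)

def logistic(t, r=0.5, K=100.0):
    # Logistic (sigmoid) growth: starts at 1, saturates at carrying capacity K
    return K / (1 + (K - 1) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable...
for t in (0, 1, 2, 3):
    print(t, round(exponential(t), 2), round(logistic(t), 2))

# ...but while the exponential keeps exploding, the logistic stalls near K
for t in (10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Seen only from its early data points, a sigmoid is indistinguishable from an exponential; the difference shows up precisely where extrapolation matters most.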

My beef with the neo-liberal optimists, in addition to the fact that they keep assaulting me with their “never been better” graphs, is that I care less that human suffering has decreased than about the reasons why, so that such a decrease can be continued or its lower levels preserved. I also care much more about where the moral flaws of our society remain acute, because only then will I know where to concentrate my political and ethical action. If the world today is indeed better than the world in the past (and it’s not a slam-dunk argument even with the PowerPoint), let’s remember the struggles that were necessary to achieve that, and continue to move ourselves towards Joachim’s paradise while being humble and wise enough to avoid mistaking ourselves for the forces of God or history, a mistake that has been at the root of so much suffering and evil.

Freedom in the Age of Algorithms


Reflect for a moment on what for many of us has become the average day. You are awoken by your phone, whose clock is set via a wireless connection to a cell phone tower, connected to a satellite, all ultimately ending in the ultimate precision machine, a clock that will not lose even a second after 15 billion years of ticking. Upon waking you connect to the world through the “siren server” of your pleasure, which “decides” for you, based on an intimate profile built up over years, the very world you will see – ranging from Kazakhstan to Kardashian – and connects your intimates, those whom you “follow”, and, if you’re lucky enough, your streams of “followers”.

Perhaps you use a health app to count your breakfast calories or the fat you’ve burned on your morning run, or perhaps you’ve just spent the morning playing Bejeweled, and will need to pay for your sins of omission later. On your mindless drive to work you make the mistake of answering a text from the office in front of a cop who, unbeknownst to you, has instantly run your license plate to find out if you are a weirdo or a terrorist. Thank heavens his algorithm confirms you’re neither.

When you pull into the Burger King drive-through to buy your morning coffee, you thoughtlessly end up buying yet another bacon, egg, and cheese with a side of hash browns in spite of your best self nagging you from inside your smart phone. Having done this one too many times this month, your fried-food preference has now been sold to the highest bidders, two databanks, through which you’ll now be receiving annoying coupons in the mail along with even more annoying and intrusive adware while you surf the web – the first from all the fast food restaurants along the path of your morning commute, the other friendly, and sometimes frightening, suggestions that you ask your doctor about the new cholesterol drug evolocumab.

You did not, of course, pay for your meal with cash but with plastic. Your money swirling somewhere out there in the ether in the form of ones and zeroes stored and exchanged on computers to magically re-coalesce and fill your stomach with potatoes and grease. Your purchases correlated and crunched to define you for all the machines and their cold souls of software that gauge your value as you go about your squishy biological and soulless existence.

____________________________________________

The bizarre thing about this common scenario is that all of this happens before you arrive at the office, or store, or factory, or wherever it is you earn your life’s bread. Not only that, almost all of these constraints on how one views and interacts with the world have been self-imposed. The medium through which we experience much of the world, and through which we respond to it, is now apps and algorithms of one sort or another. It’s gotten to the point that we now need apps and algorithms to experience what it’s like to be lost, which seems to, well… misunderstand the definition of being lost.

I have no idea where future historians, whatever their minds are made of, will date the start of this trend of tracking and constraining ourselves so as to maintain “productivity” and “wellness” – perhaps with all those seven-habits-of-highly-effective books that started taking up shelf space in now ancient book stores sometime in the 1980s – but it’s certainly gotten more personal and intimate with the rise of the smart phone. In a way we’ve brought the logic of the machine out of the factory and into our lives and even our bodies – the idea of super-efficient man-machine merger as invented by Frederick Taylor and never captured better than in Charlie Chaplin’s brilliant 1936 film Modern Times.

The film is for silent pictures what The Wizard of Oz was for color, bridging the two worlds: almost all of the spoken parts come through the medium of machines, including a giant flat screen that seemed entirely natural in a world that has been gone for eighty years. It portrays the Tramp asserting his humanity in the dehumanizing world of automation found in a factory where even eating lunch has been mechanized and made maximally efficient. Chaplin no doubt would have been pleasantly surprised with how well much of the world turned out given the bleakness of the economic depression and soon world war he was facing, but I think he also would have been shocked at how much of the Tramp in us all we have given up without reason and largely of our own volition.

Still, the fact of the matter is that this new rule of apps and algorithms, much of which comes packaged in the spiritualized wrapping of “mindfulness” and “happiness”, would be much less troubling did it not smack of a new form of Marx’s “opiate of the people” and divert us away from trying to understand and challenge the structural inadequacies of society.

For there is nothing inherently wrong with measuring performance as a means to pursue excellence, or attending to one’s health and mental tranquility. There’s a sort of postmodern cynicism that kicks in whenever some cultural trend becomes too popular, and while it protects us from groupthink, it also tends to lead to intellectual and cultural paralysis.  It’s only when performance measures find their way into aspects of our lives that are trivialized by quantifying – such as love or family life- that I think we should earnestly worry, along, perhaps, with the atrophy of our skills to engage with the world absent these algorithmic tools.

My really deep concern lies with the way apps and algorithms now play the role of invisible instruments of power. Again, this is nothing new to the extent that in the pre-digital age such instruments came in the form of bureaucracy and rule by decree rather than law, as Hannah Arendt laid out in her Origins of Totalitarianism back in the 1950s:

In governments by bureaucracy decrees appear in their naked purity as though they were no longer issued by powerful men, but were the incarnation of power itself and the administrator only its accidental agent. There are no general principles behind the decree, but ever changing circumstances which only an expert can know in detail. People ruled by decree never know what rules them because of the impossibility of understanding decrees in themselves and the carefully organized ignorance of specific circumstances and their practical significance in which all administrators keep their subjects.  (244)

It’s quite easy to read the rule of apps and algorithms into that quote, especially the parts about “only an expert can know in detail” and “carefully organized ignorance”, a fact that became clear to me after I read what is perhaps the best book yet on our new algorithmically ruled lives, Frank Pasquale’s The Black Box Society: The Secret Algorithms That Control Money and Information.

I have often wondered what exactly was being financially gained by gathering up all this data on individuals, given how obvious and ineffective the so-called targeted advertisements that follow me around on the internet seem to be, and Pasquale managed to explain this clearly. What is being “traded” is my “digital reputation”, whether as a debtor, or an insurance risk (medical or otherwise), or a customer with a certain depth of pocket and identity – “father, 40s, etc.” – or even the degree to which I can be considered a “sucker” for scam and con artists of one sort or another.

This is a reputation matrix much different from earlier arrangements based on personal knowledge, or from later impersonal systems such as credit reporting (though both had their abuses) or that for health records under H.I.P.A.A., in the sense that the new digital form of reputation is largely invisible to me, its methodology inscrutable, its declarations of my “identity” immune to challenge and immutable. It is, as Pasquale so aptly terms it, a “black box” in the strongest sense of that word, meaning unintelligible and opaque to the individual within it, like the rules Kafka’s characters suffer under in his novels about the absurdity of hyper-bureaucracy (and of course more), The Castle and The Trial.

Much more troubling, however, is how such corporate surveillance interacts with the blurring of the line between intelligence and police functions – the distinction between the foreign and domestic spheres – that has been one of the defining features of our constitutional democracy. As Pasquale reminds us:

Traditionally, a critical distinction has been made between intelligence and investigation. Once reserved primarily for overseas spy operations, “intelligence” work is anticipatory, it is the job of agencies like the CIA, which gather potentially useful information on external enemies that pose a threat to national security. “Investigation” is what police do once they have evidence of a crime. (47)

It isn’t only that such moves towards a model of “predictive policing” mean the undoing of constitutionally guaranteed protections and legal due process (presumptions of innocence, and 5th amendment protections); it is also that it has far too often turned the police into a political instrument, which, as Pasquale documents, has meant monitoring groups ranging from peaceful protesters to supporters of Ron Paul, all in the name of preventing a “terrorist act” by members of these groups. (48)

The kinds of illegal domestic spying performed by the NSA and its acronymic companions were built on the back of an already existing infrastructure of commercial surveillance. The same could be said for the blurring of the line between intelligence and investigation exemplified by the creation of “fusion centers” after 9-11, which repurposed espionage tools once confined to the intelligence services, turning them towards domestic targets for the purpose of controlling crime.

Both domestic spying by federal intelligence agencies and new forms of invasive surveillance by state and local law enforcement have been enabled by the commercial surveillance architecture established by corporate behemoths such as Facebook and Google, to whom citizens had surrendered their right to privacy seemingly willingly.

Given the degree to which these companies now hold near monopolies over the information citizens receive, Pasquale thinks it would be wise to revisit the breakup of the “trusts” in the early part of the last century. It’s not only that the power of these companies is already enormous; it’s that were they ever turned into overt political tools they would undermine or upend democracy itself, given that citizen action requires the free exchange of information to achieve anything at all.

The black box features of our current information environment have not just managed to colonize the worlds of security, crime, and advertisement; they have become the defining feature of late capitalism itself. A great deal of the 2008 financial crisis can be traced to the computerization of finance over the 1980s. Computers were an important feature of the pre-crisis argument that we had entered a period of “The Great Moderation”. We had become smart enough, and our markets sophisticated enough (so the argument went), that there would be no replay of something like the 1929 Wall Street crash and Great Depression. Unlike in the prior era, markets without debilitating crashes were to be the consequence not of government regulation to contain the madness of crowds and their bubbles and busts, but in part of the new computer modeling, which would exorcise from the markets the demon of “animal spirits” and allow human beings to do what they had always dreamed of doing – to know the future. Pasquale describes it this way:

As information technology improved, lobbyists could tell a seductive story: regulators were no longer necessary.  Sophisticated investors could vet their purchases.  Computer models could identify and mitigate risk. But the replacement of regulation by automation turned out to be as fanciful as flying cars or space colonization. (105)

Computerization gave rise to ever more sophisticated financial products, such as mortgage-backed securities, based on ever more sophisticated statistical models that, by bundling investments, gave the illusion of stability. Even had there been more prophets crying from the wilderness that the system was unstable, they would not have been able to prove it, for the models being used were “a black box, programmed in proprietary software with the details left to the quants and the computers”. (106)
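The illusion of stability from bundling is easy to reproduce in miniature. The sketch below is my own toy example, not a model from Pasquale’s book, and the per-loan default probability is an arbitrary assumption: if mortgage defaults are independent, a security backed by 100 loans that fails only when 20 or more default looks almost riskless; if defaults are perfectly correlated (everyone defaults together in a housing bust), its failure probability snaps back to that of a single loan.

```python
import math

def tranche_failure_prob(n, k, p):
    # P(at least k of n loans default) when each defaults independently
    # with probability p - the optimistic, pre-2008 style assumption
    return sum(math.comb(n, d) * p**d * (1 - p)**(n - d)
               for d in range(k, n + 1))

p = 0.05  # assumed per-loan default probability (hypothetical)

independent = tranche_failure_prob(100, 20, p)  # vanishingly small
correlated = p  # if all loans default together, bundling buys nothing

print(f"independent defaults: {independent:.2e}")
print(f"perfectly correlated: {correlated:.2f}")
```

The apparent safety lives entirely in the independence assumption, which is exactly the kind of detail buried inside proprietary models where no outsider can check it.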

It seems there is a strange dynamic at work throughout the digital economy, not just in finance but certainly exhibited in full force there, where the whole game is in essence a contest of asymmetric information. You either have the data someone else lacks to make a trade, or you process that data faster, or both. Keeping your algorithms secret becomes a matter of survival, for as soon as they are out there they can be exploited by rivals or cracked by hackers – or at least this is the argument companies make. One might doubt it, though, once one sees how nearly ubiquitous this corporate secrecy and patent hoarding has become in areas radically different from software, such as pharmaceuticals, or biotech corporations like Monsanto, which hold patents on life itself and whose logic leads to something like Paolo Bacigalupi’s dystopian novel The Windup Girl.

For Pasquale, complexity itself becomes a tool of obfuscation in which corruption and skimming can’t help but become commonplace. The contest of asymmetric information means companies are engaged in what amounts to an information war, where the goal is as much to obscure real value from rivals and clients as to profit from the resulting distortion. In such an atmosphere markets stop being able to perform the informative role Friedrich Hayek thought was their very purpose. Here’s Pasquale himself:

…financialization has created enormous uncertainty about the value of companies, homes, and even (thanks to the pressing need for bailouts) the once rock solid promises of governments themselves.

Finance thrives in this environment of radical uncertainty, taking commissions in cash as investors (or, more likely, their poorly monitored agents) race to speculate on or hedge against an ever less knowable future. (138)

Okay, if Pasquale has clearly laid out the problem, what is his solution? I could go through a list of his suggestions, but I will stick to the general principle. Pasquale’s goal, I think, is to restore our faith in our ability to publicly shape digital technology in ways that better reflect our democratic values. The claim that software is unregulable is an assumption, not a truth, and the tools and models of regulation and public input developed over the last century for the physical world are equally applicable to the digital one.

We have already developed a complex, effective system of privacy protections in the form of H.I.P.A.A.; there are already examples of mandating fair, understandable contracts (as opposed to indecipherable “terms of service” agreements) in the form of various consumer protection provisions; and up until the 1980s we were capable of regulating the boom and bust cycles of markets without crashing the economy. Lastly, the world did not collapse when earlier corporations that had gotten so large they threatened not only the free competition of markets but, more importantly, democracy itself were broken up, and it would not collapse were the likes of Facebook, Google, or the big banks broken up either.

Above all, Pasquale urges us to seek out and find some way to make the algorithmization of the world intelligible and open to the political, social, and ethical influence of a much broader segment of society than the current group of programmers and their paymasters who have so far been the only ones running the show. For if we do not assert such influence, and algorithms continue to structure more and more of our relationship with the world and each other, then algorithmization and democracy would seem to be on a collision course. Or, as Taylor Owen pointed out in a recent issue of Foreign Affairs:

If algorithms represent a new ungoverned space, a hidden and potentially ever-evolving unknowable public good, then they are an affront to our democratic system, one that requires transparency and accountability in order to function. A node of power that exists outside of these bounds is a threat to the notion of collective governance itself. This, at its core, is a profoundly undemocratic notion—one that states will have to engage with seriously if they are going to remain relevant and legitimate to their digital citizenry who give them their power.

Pasquale has given us an excellent start to answering the question of how democracy, and freedom, can survive in the age of algorithms.

 

The strange prescience of Frank Herbert’s Dune


As William Gibson always reminds us, the real role of science-fiction isn’t so much to predict the future as to astound us with the future’s possible weirdness. It almost never happens that science-fiction writers get the core or essential features of this future weirdness right, and when they do, according to Gibson, it’s almost entirely by accident. Nevertheless, someone writing about the future can sometimes, and even deliberately, play the role of Old Testament prophet, seeing some danger to which the rest of us are oblivious and guessing at traps into which we later fall. (Though let’s not forget about the predictions of opportunity.)

Frank Herbert’s Dune certainly wasn’t intended to predict the future, but he was certainly trying to give us a warning. Unlike others who would spend the 1960s and 1970s warning us of dangers that we ended up avoiding almost by sheer luck – such as nuclear war – Herbert focused his warnings on very ancient dangers: the greed of mercantile corporations, the conflicts of feudalism, and the danger that arises from a too tight coupling between politics and religion. This Herbert imagined at a time well before capitalism’s comeback, when the state and its authority seemed ascendant, and secularism seemed inseparable from modernity to the extent that it appeared we had left religion in history’s dry dust.

To these ancient dangers Herbert added a new one – ecological fragility- a relatively newly discovered danger to humanity at the time Dune was published (1965). In a very strange way these things added together capture, I think, something essential about our 21st century world.

The world the novel depicts is a future some 21,000 years from now, which, if we were taking the date seriously, makes it almost certain that everything Herbert “predicted” would be wrong. The usefulness in placing his novel so far ahead in the future, I think, lies in the fact that he could essentially ignore all the major stories of his day, like the Cold War, the threat of nuclear destruction, Vietnam, or even social movements such as those fighting for civil rights.

By depicting such a far removed future Herbert had no obligation to establish continuity with our own time. The only pressing assumption or question that a reader would face when considering the plausibility of this future world was “where are the computers and robots?” – for surely human civilization in the future will have robots! Dune’s answer is that they had been destroyed in something known as the Butlerian Jihad. This is brilliant because it liberated Herbert from the fool’s errand of having to make technological predictions about the future, and allowed him to build a far future with recognizable human beings still in it.

Herbert essentially ransacks the past for artifacts, including ideas and social systems, and uses them to build a world that allows him to flesh out his warnings, including the new question of ecological fragility mentioned above.

Most of the novel takes place on the desert planet of Arrakis, a planet that would be without importance for anyone but the Fremen who inhabit it were it not for the fact that it is also the only source of “the spice” (melange), a sort of psychotropic drug and elixir that is the most valuable commodity in the universe, not only because once ingested its absence will lead to death, but because it is the source of the prescience humans need in a world without even the most rudimentary form of artificial intelligence as a consequence of the Butlerian Jihad, about which the novel contains only whispers.

Given our current concerns about the rise of artificial intelligence, when reading Dune now the Butlerian Jihad jumps out at you. Could this be where it ends, not with superintelligence but with a version of Samuel Butler’s revolt against the machines depicted in his novel Erewhon, only this time a revolt on religious and humanist grounds?

Yet rather than present a world that has returned to a pre-technological state because it denied itself the use of even “thinking” machines at the level of a calculator, Dune fills those roles with human biological computers, the “mentats”, who, like our computers today, are used to see into a future we believe to be determined.

It is the spice-enabled prescience of the Guild Navigators that allows space travel and thus exchange between the planets. The spice trade is controlled by two monopolistic corporate entities, the Spacing Guild and CHOAM, which effectively control all trade in the interstellar empire.

It is in reference to our looming fears about artificial intelligence and our trepidation at growing inequality that the kind of mercantilism and feudalism depicted in Dune make the novel feel prescient, even if accidentally so. There is an Empire in Dune, much as there is a global empire today in the form of the United States, but, just as in our case, it is a very weak empire, riddled by divisions between the corporate entities that control trade and rival families that compete to take center stage.

Then there is the predominance of religion. Many have been surprised by la revanche de Dieu (the revenge of God) in the late 20th and early 21st centuries: the predominance of religious questions and conflicts at a time when many had predicted God's death. Dune reminds us of our current time because it is steeped in religion. Religious terms, most tellingly jihad, are used throughout the novel. Characters understand themselves and are understood by others in religious terms. Paul (Muad'Dib), the protagonist of the novel, is understood in messianic terms. He is a figure prophesied to save the desert Fremen people of Arrakis and turn their world into a paradise.

Yet however much he was interested in and sympathetic to world religions, Herbert was also trying to warn us against their potential for violence and abuse. Though he tries to escape it, Paul feels fated to conquer the universe in a galactic jihad. This despite the fact that he knows the messianic myth is a mere role he is playing, created by others, namely the Bene Gesserit order to which he and his mother belong. In Dune religious longings are manipulated in plots and counter-plots over the control of resources, a phenomenon with which we are all too familiar.

It is not just that in Dune we find much of the same, sometimes alien, religious language we've heard on the news since the start of the "Long War"; even the effectiveness of the Fremen insurgents of the deserts against crack imperial troops, the Sardaukar, feels too damned familiar. Perhaps what Herbert had done was give us a glimpse of what would be the future of the Middle East by looking at its past, including figures such as Lawrence of Arabia, on whom the character of Paul Atreides appears to be based.

All this and we haven't even gotten to the one danger Herbert identifies in Dune that was relatively new: ecological fragility. As is well known, Dune was inspired by Herbert's experience of the Oregon Dunes and the US Department of Agriculture's attempt to control the spread of sands created by millions of years of coastal erosion using natural methods such as the planting of grasses.

Here I think Herbert found what he thought was the correct model for our relationship with nature. We would not be able to rule over nature like gods, but neither would we surrender our efforts to control her destructiveness or to make deserts bloom. Instead of pummeling her with mechanical power (a form of exploitation that will eventually kill a living planet), we should use the softer and more intelligent methods of nature herself to steer her in a slow dance in which we would not always be in the lead.

The interstellar civilization in Dune is addicted to the spice in the same way we are addicted to our fossil fuels, and that addiction has turned the world of Arrakis into a desert, for the worms that produce the spice also make the world dry, much as the carbon we are emitting is turning much of the North American continent into a desert.

As I was reading Dune, the story of California's historic drought was all over the news, especially the pictures. Our own Arrakis. As Kynes the ecologist imagines his dead father saying (how many other novels have an ecologist as a main character?):

“The highest function of ecology is understanding consequences.” (272)

If Herbert was in a sense prescient about the themes of the first decades of the 21st century, it was largely by accident, and his novel provides a metaphysical theory as to why true prescience will prove ultimately impossible even for the most powerful superintelligences, should we choose to build (or biologically engineer) them.

Paul experiences the height of his ability to peer into the future this way:

The prescience, he realized, was an illumination that incorporated the limits of what it revealed- at once a source of accuracy and meaningful error. A kind of Heisenberg indeterminacy intervened: the expenditure of energy that revealed what he saw, changed what he saw.

… the most minute action- the wink of an eye, a careless word, a misplaced grain of sand- moved a gigantic lever across the known universe. He saw violence with the outcome subject to so many variables that his slightest movement created vast shiftings in the patterns.

The vision made him want to freeze into immobility, but this, too was action with its consequences. (296)

In other words, even if reality is truly deterministic it remains unpredictable, because the smallest action (or inaction) can have the consequence of opening up another set of possibilities, a whole new multiverse that will have its own future. Either that, or perhaps all Paul ever sees are just imagined possibilities, and we remain undetermined and free.

 

Auguries of Immortality, Malthus and the Verge

Hindu Goddess Tara

Sometimes, if you want to see something in the present clearly, it's best to go back to its origins. This is especially true when dealing with some monumental historical change, a phase transition from one stage to the next. The reason I think this is helpful is that those lucky enough to live at the beginning of such events have no historical or cultural baggage to obscure their forward view. When you live in the middle, or at the end, of an era, you find yourself surrounded, sometimes suffocated, by all the good and bad that has come as a result. As a consequence, understanding the true contours of your surroundings or ultimate destination is almost impossible; your nose is stuck to the glass.

The question is, are we ourselves at the beginning of such an era, in the middle, or at an end? How would we even know?

If I were to make the case that we find ourselves in either the middle or the end of an era, I know exactly where I would start. In 1793 the eccentric English writer William Godwin published his Enquiry Concerning Political Justice and its Influence on Morals and Happiness, a book which few people remember. What Godwin is remembered for instead is his famous daughter Mary Shelley, and her even more famous monster, though I should add that if you like thrillers you can thank Godwin for having invented them.

Godwin’s Enquiry, however, was a different kind of book. It grew out of the environment of a time which, in Godwin’s eyes at least, seemed pregnant with once unimaginable hope. The scientific revolution had brought about a fuller understanding of nature and her laws than anything achieved by the ancients; the superstitions of earlier eras had been abandoned for a new age of enlightenment; the American Revolution had brought into the world a whole new form of government based on Enlightenment principles; and, as Godwin wrote, an even more important revolution of the same kind had just overthrown the monarchy in France.

All this along with the first manifestations of what would become the industrial revolution led Godwin to speculate in the Enquiry that mankind had entered a new era of perpetual progress. Where then could such progress and mastery over nature ultimately lead? Jumping off of a comment by his friend Ben Franklin, Godwin wrote:

 Let us here return to the sublime conjecture of Franklin, that “mind will one day become omnipotent over matter.” If over all other matter, why not over the matter of our own bodies? If over matter at ever so great a distance, why not over matter which, however ignorant we may be of the tie that connects it with the thinking principle, we always carry about with us, and which is in all cases the medium of communication between that principle and the external universe? In a word, why may not man be one day immortal?

Here then we can find evidence for Yuval Harari’s recent claim that “The leading project of the Scientific Revolution is to give humankind eternal life.” (268) In later editions of the Enquiry, however, Godwin dropped the suggestion of immortality. It seems he did so not so much because of the criticism such comments provoked, or because he stopped believing in it, but because immortality seemed too much like a termination point for his notion of progress, which he now thought would extend forever into the future. His key point was that the mind’s growing understanding would result in an ever increasing power over the material world, in a process that would literally never end.

Almost at the exact same time as Godwin was writing his Enquiry another figure was making almost the exact same argument including the idea that scientific progress would eventually result in indefinite human lifespans. The Marquis de Condorcet’s Sketch for a Historical Picture of the Progress of the Human Spirit was a book written by a courageous man while on the run from a French Revolutionary government that wanted to cut off his head. Amazingly, even while hunted down during the Terror, Condorcet retained his long term optimism regarding the ultimate fate of humankind.

A young English parson with a knack for the just-emerging science of economics not only wasn’t buying it, he wanted to scientifically prove (though I am using the term loosely) exactly why such optimism should not be believed. This was Thomas Malthus, whose name, quite mistakenly, has spawned its own adjective, Malthusian, which has come to mean environmental disaster caused by our own human hands and faults.

As is commonly known, Malthus’ argument was that population tends to grow geometrically while the production of food grows only arithmetically, a mismatch that sooner or later leads to famine and decline. It was Godwin’s and Condorcet’s claims regarding future human immortality that were, in part, responsible for Malthus stumbling upon his specific argument centered on population. For the obvious rejoinder to those claiming that the human lifespan would increase forever was: what would we do with all of these people?
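Malthus’ famous ratios can be sketched in a few lines of code. The doubling interval and the one-unit food increment below are illustrative assumptions of mine, not Malthus’ own figures, but they capture the shape of his argument: a geometric series must eventually outrun an arithmetic one.

```python
def malthus_gap(generations):
    """Toy model of Malthus' ratios: population doubles each
    generation (geometric), while food output adds one fixed
    unit (arithmetic). Numbers are illustrative, not historical."""
    pop, food = 1.0, 1.0
    history = []
    for g in range(1, generations + 1):
        pop *= 2      # geometric growth
        food += 1     # arithmetic growth
        history.append((g, pop, food, food / pop))
    return history

for g, pop, food, per_capita in malthus_gap(6):
    print(f"generation {g}: population {pop:>4.0f}, "
          f"food {food:.0f}, food per head {per_capita:.3f}")
```

However the constants are chosen, the food-per-head ratio collapses within a handful of generations, which is the whole of Malthus’ point.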

Both Godwin and Condorcet thought they had answered this question by claiming that in the future the birth rate would decline to zero. Stunningly, this has proven at least partly correct: population growth rates have declined in parallel with increases in longevity. Though rather than declining through the victory of “reason” and the conquest of the “passions”, as both Godwin and Condorcet thought, they declined because sex was, for the first time in human history, decoupled from reproduction through the creation of effective forms of birth control.

So far, at least, it seems Godwin and Condorcet have gotten the better side of the argument. Since the Enquiry we have experienced more than two centuries of nearly uninterrupted progress in which mind has gained increasing mastery over the material world. And though we are little closer to the optimists’ dream of “immortality”, their prescient guess that longevity would be coupled with a declining birth rate would seem to clear the goal of increased longevity from being self-defeating on Malthusian grounds.

This would not be, of course, the first time Malthus has been shown to be wrong. Yet his ideas, or a caricature of his ideas, have a long history of retaining their hold over our imagination. Exactly why this is the case is a question explored in detail by Robert J. Mayhew in his excellent Malthus: The Life and Legacies of an Untimely Prophet. Malthus’ argument in his famous, if rarely actually read, An Essay on the Principle of Population has become a sort of secular version of armageddon, his views latched onto by figures both sinister and benign over the two centuries since the essay’s publication.

Malthus’ argument was used against laws to alleviate the burdens of poverty, which, it was argued, would only increase population growth and hasten an ultimate reckoning (and this view, at least, was close to that of Malthus himself). It was used by anti-immigrant and racist groups in the 19th and early 20th centuries. Hitler’s expansionist and genocidal policy in eastern Europe was justified on Malthusian grounds.

On the more benign side, Malthusian arguments were used as a spur to the Green Revolution in agriculture in the 1960s (though Mayhew thinks the warnings of pending famine were political, arising from the Cold War, and overdone). Malthusianism was invoked in the 1970s by Paul Ehrlich to warn of a “population bomb” that never came, slid during the Oil Crisis into fear over resource constraints, and can now be found in predictions about the coming “resource” and “water” wars. There is also a case where Malthus really may have his revenge, though more on that in a little bit.

And yet, we would be highly remiss were we not to take the question Malthus posed seriously. For what he was really inquiring about is whether or not there might be ultimate limits on the ability of the human mind to shape the world in which it finds itself. What Malthus was looking for was the boundary, or verge, of our limits as established by the laws of nature as he understood them. Those who espoused the new human perfectionism, such as Godwin and Condorcet, faced what appeared to Malthus an insurmountable barrier to their theories being considered scientific, no matter how much they attached themselves to the language and symbols of the recent successes of science. For what they were predicting had no empirical basis: it had never happened before. While knowledge did indeed seem to increase through history, if merely as a consequence of there having been time for it to accumulate, the kind of radical and perpetual progress Godwin and Condorcet predicted was absent from human history. Malthus set out to provide a scientific argument for why.

In philosophical terms Malthus’ Essay is best read as a theodicy, an attempt, like that of Leibniz before him, to argue that even in light of the world’s suffering we live in the “best of all possible worlds”. As Newton had done for falling objects, Malthus sought the laws of nature, as designed by his God, that explained the development of human society. For Malthus, the gap between mind and matter was part of what made us uniquely human, and technological and social progress had remained static in the past even as human knowledge of the world accumulated over generations. What caused us to most directly experience this gap and kept progress static? Malthus thought he had pinned down the source of the stasis: famine and population decline.

As with any other physical system, for human societies the question boiled down to how much energy was available to do meaningful work. Given that in Malthus’ day the vast majority of work was done by bodies that required energy in the form of food, whether human or animal, the limited amount of land that could be efficiently tilled set an upper bound on the size, complexity, and progress of any human society.

What Malthus missed, of course, was that the relationship between food and work was about to be severed. Or rather, the new machines did consume a form of processed “food”: organic material that had been chemically “constructed” and accumulated over the eons, in the form of fossil fuels, an easily accessible type of energy different in kind from anything that had come before it.

The sheer force of the age of machines Malthus had failed to foresee did indeed break with the flatline of human history he had identified in every prior age, a fact that has perhaps never been shown more clearly than in Ian Morris’ simple graph below.

Ian Morris Great Divergence Graph

What made possible this break between all of past human history and the last few centuries is the very thing that could, tragically, prove Malthus right after all: fossil fuels. In any society before 1800 the majority of energy not derived from food came in the form of wood, whether as timber itself or charcoal. But as Lewis Dartnell pointed out in a recent piece in Aeon, the world before fossil fuels posed a seemingly insurmountable (should I say Malthusian?) dilemma; namely:

The central problem is that woodland, even when it is well-managed, competes with other land uses, principally agriculture. The double-whammy of development is that, as a society’s population grows, it requires more farmland to provide enough food and also greater timber production for energy. The two needs compete for largely the same land areas.

Dartnell’s point is that we have been both extremely lucky and unlucky in how accessible and potent fossil fuels have been. On the one hand, fossil fuels gave us a rather short path to technological society; on the other, not only will it be difficult to wean ourselves from them, it is hard to imagine how we could reboot as a civilization should we suffer collapse, having already used up most of the world’s most easily accessed forms of energy.

It is a useful exercise, then, to continue to take Malthus’ argument seriously, for even if we escape the second Malthusian trap (fossil-fuel-induced climate change) that came with our escape from the trap Malthus originally identified (our need to literally grow our energy), there are other predictable traps that likely lie in store.

One of the traps that interests me most has to do with the “energy problem” that Malthus understood in terms of the production of food. As I’ve written about before, and as brought to my attention by the science writer Lee Billings in his book Five Billion Years of Solitude, there is a good and little-discussed case from physics for thinking we might be closer to the end of the era that began with the industrial revolution than to its middle, let alone its beginning.

This physics of civilizational limits comes from Tom Murphy of the University of California, San Diego who writes the blog Do The Math. Murphy’s argument, as profiled by the BBC, has some of the following points:

  • Assuming rising energy use and economic growth remain coupled, as they have in the past, we are confronted with the absurdity of exponentials. At a 2.3 percent growth rate, within 2,500 years we would require all the energy of all the stars in the Milky Way galaxy to function.
  • At 3 percent growth, within four hundred years we will have boiled away the earth’s oceans, not because of global warming, but from the excess heat that is the normal byproduct of energy production. (Even clean fusion leaves us boiling away the world’s oceans for the same reason.)
  • Renewables push out this reckoning, but not indefinitely. At a 3 percent growth rate, even if solar efficiency were 100 percent, we would need to capture all of the sunlight hitting the earth within three hundred years.
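The arithmetic behind these limits is easy to reproduce. The sketch below assumes compounding at a fixed annual rate, a current consumption of roughly 18 terawatts, and round figures for the sunlight intercepted by Earth and the luminosity of the Milky Way; these are my own rough inputs rather than Murphy’s exact numbers, but they land close to his timescales.

```python
import math

def years_to_reach(target_watts, current_watts=1.8e13, growth=0.03):
    """Years until energy use compounding at `growth` per year
    reaches `target_watts`. All wattage figures are rough assumptions."""
    return math.log(target_watts / current_watts) / math.log(1.0 + growth)

SUNLIGHT_ON_EARTH = 1.7e17  # watts intercepted by Earth, roughly
MILKY_WAY_OUTPUT  = 5.0e36  # rough total stellar luminosity, watts

# ~300 years at 3% growth to need every watt of sunlight hitting Earth
print(round(years_to_reach(SUNLIGHT_ON_EARTH, growth=0.03)))
# ~2,400 years at 2.3% growth to need the whole galaxy's output
print(round(years_to_reach(MILKY_WAY_OUTPUT, growth=0.023)))
```

The exact wattages barely matter: because the growth is exponential, being wrong about the target by a factor of ten only shifts the answer by about 78 years at 3 percent growth.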

Will such limits prove to be correct and Malthus, in some sense, be shown to have been right all along? Who knows. The only way we’d have strong evidence to the contrary is if we came across evidence of civilizations with a much, much greater energy signature than our own. A recent project out of Penn State to do just that, which looked at 100,000 galaxies, found nothing, though this doesn’t mean the search is over.

Relating back to my last post: the universe may lean towards giving rise to complexity in the way the physicist Jeremy England suggests, but the landscape is littered with great canyons where evolution gets stuck for very long periods of time, which explains its blindness as perceived by someone like Henry Gee. The scary thing is that getting out of these canyons is a race against time: complex life could have been killed off by some disaster shortly after the Cambrian explosion; we could have remained hunter-gatherers and failed to develop agriculture before another ice age did us in; some historical contingency could have prevented industrialization before a global catastrophe that we are now advanced enough to respond to wiped us out.

If Tom Murphy is right we are now in a race to secure exponentially growing sources of energy, and it is a race we are destined to lose. The reason we don’t see any advanced civilizations out there is because the kind of growth we’ve extrapolated from the narrow slice of the past few centuries is indeed a finite thing as the amount of energy such growth requires reaches either terrestrial or cosmic limits. We simply won’t be able to gain access to enough energy fast enough to keep technological progress going at its current rate.

Of course, even if we believe that progress has some limit out there, that does not necessarily entail that we shouldn’t pursue it, in many of its forms, until we hit the verge itself. Accepting arguments that there might be limits to our technological progress is one thing; accepting limits to our moral progress, our efforts to address suffering in the world, is quite another, for there accepting limits would mean accepting some level of suffering or injustice as just the “way things are”. That we should not accept this, Malthus himself nearly concluded:

 Evil exists in the world not to create despair but activity. We are not patiently to submit to it, but to exert ourselves to avoid it. It is not only the interest but the duty of every individual to use his utmost efforts to remove evil from himself and from as large a circle as he can influence, and the more he exercises himself in this duty, the more wisely he directs his efforts, and the more successful these efforts are, the more he will probably improve and exalt his own mind and the more completely does he appear to fulfil the will of his Creator. (124-125)

The problem lies with justifying the suffering of individual human beings in any particular form as “natural”. The crime at the heart of many versions of Malthusianism is this kind of aggregation of human beings into some destructive force, which leads to the denial of the only scale at which someone’s true humanity can be seen: the level of the individual. Such a moral blindness that sees only the crowd can be found in the most famous piece of modern Malthusianism, Paul Ehrlich’s The Population Bomb, where he discusses his experience of Delhi.

The streets seemed alive with people. People eating, people washing, people sleeping. People visiting, arguing, and screaming. People thrusting their hands through the taxi window, begging. People defecating and urinating. People clinging to the buses. People herding animals. People, people, people, people. As we moved slowly through the mob, hand horn squawking, the  dust, noise, heat, and the cooking fires gave the scene a hellish aspect. Would we ever get to our hotel? All three of us were, frankly, frightened. (p. 1)

Stripped of this inability to see that the value of human beings can only be grasped at the level of the individual, and that suffering can only be assessed in a moral sense at this individual level, Malthus can help remind us that our minds themselves emerge out of their confrontation and interaction with a material world whose boundaries we constantly explore, overcome, and confront again. He thought the world itself was probably “a mighty process for awakening matter into mind”, and even the most ardent proponents of human perfectionism, modern day transhumanists or singularitarians, or just plain old humanists, would agree with that.

* Image: Tara (Devi): Hindu goddess of the unquenchable hunger that compels all life.

Life: Inevitable or Accident?

The Tree of Life, Gustav Klimt (https://www.artsy.net/artist/gustav-klimt)

Here’s the question: does the existence of life in the universe reflect something deep and fundamental or is it merely an accident and epiphenomenon?

There’s an interesting new theory coming out of the field of biophysics claiming that the cosmos is indeed built for life, and not merely in the sense of the so-called “anthropic principle”, which states that just by being here we can assume nature’s fundamental constants must be friendly to complex organisms, such as ourselves, able to ask such questions. The new theory makes the claim that not just life, but life of ever-growing complexity and intelligence, is the inevitable result of the laws of nature.

The proponent of the new theory is a young physicist at MIT named Jeremy England. I can’t claim I quite grasp all the intricate details of England’s theory, though he does an excellent job of explaining it here, but perhaps the best way of capturing it succinctly is by thinking of the laws of physics as a landscape, and a leaning one at that.

The second law of thermodynamics leans in the direction of increased entropy: systems naturally move in the direction of losing rather than gaining order over time, which is why we break eggs to make omelettes and not the other way round. The second law would seem to be a bad thing for living organisms, but oddly enough, ends up being a blessing not just for life, but for any self-organizing system so long as that system has a means of radiating this entropy away from itself.

For England, the second law provides the environment and the direction in which life evolves. In bounded systems where outside energy is available, such as a pool of water, self-organizing systems naturally come to be dominated by those forms that are particularly good at absorbing energy from the surrounding environment and dissipating less organized energy, as heat (entropy), back into it.

This landscape in which life evolves, England postulates, may tilt as well in the direction of complexity and intelligence due to the fact that in a system that frequently changes in terms of oscillations of energy, those forms able to anticipate the direction of such oscillations gain the possibility of aligning themselves with them and thus become able to accomplish even more work through resonance.

England is in no sense out to replace Darwin’s natural selection as the mechanism through which evolution is best understood, though, should he be proved right, he would end up greatly amending it. If his theory ultimately proves successful, and it is admittedly very early days, England’s theory will have answered one of the fundamental questions that has dogged evolution since its beginnings. For while Darwin’s theory provides us with all the explanation we need for how complex organisms such as ourselves could have emerged out of seemingly random processes- that is through natural selection- it has never quite explained how you go from the inorganic to the organic and get evolution working in the first place. England’s work is blurring the line between the organic and the most complicated self-organizing forms of the inorganic, making the line separating cells from snowflakes and storms a little less distinct.

Whatever its ultimate fate, however, England’s theory faces major hurdles, not least because it seems to have a bias towards increasing complexity, and in its most radical form, points towards the inevitability that life will evolve in the direction of increased intelligence, ideas which many evolutionary thinkers vehemently disavow.

Some evolutionary theorists may see efforts such as England’s not as a paradigm shift waiting in the wings, but as an example of a misconception regarding the relationship between increasing complexity and evolution, now adopted by actual scientists rather than a merely misguided public. A misconception that, couched in scientific language, will further muddy the minds of the public, leaving them with a conception of evolution that belongs much more to the 19th century than to the 21st. It is a misconception whose most vocal living opponent, after the death of the irreplaceable Stephen Jay Gould, has been the paleontologist, evolutionary biologist, and senior editor of the journal Nature, Henry Gee, who has set out to disabuse us of it in his book The Accidental Species.

Gee’s goal is to remind us of what he holds to be the fundamental truth behind the theory of evolution- evolution has one singular purpose from which everything else follows in lockstep- reproduction. His objective is to do away, once and for all, with what he feels is a common misconception that evolution is leading towards complexity and progress and that the highest peak of this complexity and progress is us- human beings.

If improved prospects for reproduction can be bought through the increased complexity of an organism then that is what will happen, but it needn’t be the case. Gee points out that many species, notably some worms and many parasites, have achieved improved reproductive prospects by decreasing their complexity. Therefore the idea that complexity (as in an increase in the specialization and number of an organism’s parts) is merely a matter of evolution plus time doesn’t hold up to close scrutiny. Judged through the eyes of evolution, losing features and becoming more simple is not necessarily a vice. All that counts is an organism’s ability to make more copies, or for animals that reproduce through sex, blended copies, of itself.

Evolution in this view isn’t beautiful but coldly functional and messy- a matter of mere reproductive success. Gee reminds us of Darwin’s idea of evolution’s product as a “tangled bank”- a weird menagerie of creatures each having their own particular historical evolutionary trajectory. The anal retentive Victorian era philosophers who tried to build upon his ideas couldn’t accept such a mess and:

…missed the essential metaphor of Darwin’s tangled bank, however, and saw natural selection as a sort of motor that would drive transformation from one preordained station on the ladder of life to the next one. (37)

Gee also sets out to show how deeply limited our abilities are when it comes to understanding the past through the fossil record. Very, very few of the species that have ever existed left evidence of their lives in the form of fossils, which are formed only under very special conditions, and where the process of fossilization greatly favors the preservation of some species over others. The past is thus incredibly opaque, making it impossible to impose an overarching narrative upon it, such as increasing complexity, as we move from the past towards the present.

Gee, though an ardent defender of evolution and opponent of creationist pseudoscience, finds the gaps in the fossil record so pronounced that he thinks we can create almost any story we want from it and end up projecting our contemporary biases onto the speechless stones. This is the case even when the remains we are dealing with are of much more recent origin and especially when their subject is the origin of us.

We’ve tended, for instance, to link tool use and intelligence, even in cases such as Homo habilis, where the records and artifacts point to a different story. We’ve tended not to see other human species, such as the so-called Hobbit man, as ways we might actually have evolved had circumstances not played out in precisely the way they did. We have not, in Gee’s estimation, been working our way towards the inevitable goal of our current intelligence and planetary dominance, but have stumbled into it by accident.

Although Gee is in no sense writing in anticipation of a theory such as England’s, his line of thinking does seem to pose obstacles that the latter’s hypothesis will have to address. If it is indeed the case that, as England has put it, complex life arises inevitably from the physics of the universe, so that in his estimation:

You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.

then England will have to address why it took so incredibly long, some 4 billion years of the earth’s 4.5 billion year history, for actual plants to make their debut, not to mention similar spans for other complex eukaryotes such as animals like ourselves.

Whether something like England’s inevitable complexity or Gee’s not just blind but drunk and random evolutionary walk is ultimately the right way to understand evolution has implications far beyond evolutionary theory. Indeed, it might have deep implications for the status and distribution of life in the universe, and even inform the way we understand the current development of new forms of artificial intelligence.

What we have discovered over the last decade is that bodies of water appear to be much more widespread and to exist in environments far beyond those previously considered. Hence NASA’s recent announcement that we are likely to find microbial life in the next 10 to 30 years, both in our solar system and beyond. What this means is that England’s heat baths are likely ubiquitous, and if he’s correct, life can likely be found anywhere there is water, meaning nearly everywhere. There may even be complex lifelike forms that did not evolve through what we would consider normal natural selection at all.

If Gee is right, the universe might be ripe for life, but the vast, vast majority of that life will be microbial, and no amount of time will change that fate on most life-inhabited worlds. If England in his minor key is correct, the universe should at least be filled with complex multicellular life forms such as ourselves. Yet it is the possibility that England is right in his major key, that consciousness, civilization, and computation might flow naturally from the need of organisms to resonate with their fluctuating environments, that, biased as we are, we likely find most exciting. Such a view leaves us with the prospect of many, many more forms of intelligence and technological civilizations like ourselves spread throughout the cosmos.

The fact that the universe so far has proven silent and devoid of any signs of technological civilization might give us pause when it comes to endorsing England’s optimism over Gee’s pessimism, unless, that is, there is some sort of limit or wall when it comes to our own perceived technological trajectory that can address the questions that emerge from the ideas of both. To that story, next time…

Is the Anthropocene a Disaster or an Opportunity?

Bradley Cantrell Pod-mod

Recently the journal Nature published a paper arguing that the Anthropocene, the proposed geological era in which the collective actions of the human species started to trump other natural processes in terms of their impact, began in the year 1610 AD. If that year leaves you, as it did me, scratching your head and wondering what you missed while you dozed off in your 10th grade history class, don’t worry, because 1610 is a year in which nothing much happened at all. In fact, that’s why the authors chose it.

There are a whole host of plausible candidates for the beginning of the Anthropocene that are much, much better known than 1610, starting relatively recently with the explosion of the first atomic bomb in 1945 and stretching backwards in time through the beginning of the industrial revolution (1750s), to the discovery of the Americas and the bridging of the old and new worlds in the form of the Columbian Exchange (1492), and back even further to some hypothetical year in which humans first began farming, or further still to the extinction of mega-fauna such as woolly mammoths, probably at the hands of our very clever, and therefore dangerous, hunter-gatherer forebears.

The reason Simon L. Lewis and Mark A. Maslin (the authors of the Nature paper) chose 1610 is that in that year there is a huge, discernible drop in the amount of carbon dioxide in the atmosphere, a rapid decline in greenhouse gases which probably plunged the earth into a cold snap: global warming in reverse. The reason this happened goes to the heart of a debate plaguing the environmental community, a debate that someone like myself, merely listening from the sidelines but concerned about the ultimate result, should have been paying closer attention to.

It seems that while I was focusing on how paleontologists had been undermining the myth of the “peaceful savage”, they had also called into question another widely held idea regarding the past, one that was deeply ingrained in the American environmental movement itself- that the Native Americans did not attempt the intensive engineering of the environment of the sort that Europeans brought with them from the Old World, and therefore, lived in a state of harmony with nature we should pine for or emulate.

As paleontologists and historians have come to understand the past more completely we’ve learned that it just isn’t so. Native Americans, across North America, and not just in well known population centers such as Tenochtitlan in Mexico, the lost civilizations of the Pueblos (Anasazi) in the deserts of the southwest, or Cahokia in the Mississippi Valley, but in the woodlands of the northeast in states such as my beloved Pennsylvania, practiced extensive agriculture and land management in ways that had a profound impact on the environment over the course of centuries.

Their tool in all this was the planful use of fire. As the naturalist and historian Scott Weidensaul tells the story in his excellent account of the conflicts between Native Americans and the Europeans who settled the eastern seaboard, The First Frontier:

Indians had been burning Virginia and much of the rest of North America annually for fifteen thousand years and it was certainly not a desert. But they had clearly altered their world in ways we are only still beginning to appreciate. The grasslands of the Great Plains, the lupine-studded savannas of northwestern Ohio’s “oak openings”, the longleaf pine forests of the southern coastal plain, the pine barrens of the Mid-Atlantic and Long Island- all were ecosystems that depended on fire and were shaped over millennia of use by its frequent and predictable application.  (69)

The reason, I think, most of us have this vision in our heads of North America at the dawn of European settlement as an endless wilderness is not on account of some conspiracy by English and French settlers (though I wouldn’t put it past them), but rather that, when the first settlers looked out into a continent sparsely settled by noble “savages”, what they were seeing was a mere remnant of the large Indian population that had existed there only a little over a century before. The native peoples of North America whom Europeans struggled so hard to subdue and expel were mere refugees from a fallen civilization. As the historian Charles Mann has put it: when settlers gazed out into a seemingly boundless American wilderness they were not seeing a land untouched by the hands of man, a virgin land given to them by God, but a graveyard.

Given the valiant, ingenious, and indeed sometimes brutal resistance Native Americans put up against European (mostly English) encroachment, it seems highly unlikely that the United States and Canada would exist today in anything like their current form had the Indian population not been continually subject to collapse. What so ravaged the Native American population and caused their civilization to go into reverse were the same infectious diseases, unwittingly brought by Europeans, that were the primary culprit in the destruction of the even larger Indian civilizations in Mexico and South America. It was a form of accidental biological warfare that targeted Native Americans based upon their genomes, their immunity to diseases never having been shaped by contact with domesticated animals like pigs and cows, as had that of the Europeans who were set on stealing their lands.

Well, it wasn’t always accidental. For though it would be some time before we understood how infectious diseases worked, Europeans were well aware of their disproportionate impact on the Indians. In 1763 William Trent, in charge of Fort Pitt, the seed that would become Pittsburgh, while it was besieged by Indians, engaged in the first documented case of biological warfare. Weidensaul quotes one of Trent’s letters:

Out of regard to the [Indian emissaries] we gave them two blankets and a handkerchief out of the Small Pox Hospital. I hope it will have the desired effect. (381)

It’s from this devastating impact of infectious diseases, which decimated Native American populations and ended their management of the ecosystem through the systematic use of fire, that Lewis and Maslin get their 1610 start date for the beginning of the Anthropocene. With the collapse of Indian fire ecology, eastern forests bloomed to such an extent that vast amounts of carbon that would otherwise have remained in the atmosphere were sucked up by the trees, with the effect that the earth rapidly cooled.

Beyond the desire to uncover the truth and to enter into sympathy with those of the past that lies at the heart of the practice and study of history, this new view of the North American past has implications for the present. For the goal of returning, both the land itself and ourselves as individuals, to the purity of the wilderness, a goal that has been at the heart of the environmentalist project since its inception in the 19th century, now encounters a problem with time. Where in the past does the supposedly pure natural state to which we are supposed to be returning actually exist?

Such an untrammeled state would seem to exist either before Native Americans’ development of fire ecology, or after their decimation by disease but before European settlement west of the Allegheny Mountains. This, after all, was the period of America as endless wilderness that has left its mark on our imagination. That is, we should attempt to return as much of North America as is compatible with civilization to the state it was in after the collapse of the Native American population, but before the spread of Europeans into the American wilderness. The problem with such a goal is that this natural splendor may have hid the fact that such an ecology was ultimately unsustainable in the wild state in which European settlers first encountered it.

As Tim Flannery pointed out in his book The Eternal Frontier: An Ecological History of North America and Its Peoples, the fire ecology adopted by Native Americans may have been filling a vital role, doing the job that should have been done by large herbivores, which North America, unlike other similar continents, and likely due to the effectiveness of the first wave of human hunters to enter it as the ultimate “invasive species” from Asia, lacked.  Large herbivores cull the forests and keep them healthy, a role which in their absence can also be filled by fire.

Although projects are currently in the works to try to bring back some of these large foresters, such as the woolly mammoth, I am doubtful large populations will ever be restored, even in a world filled with self-driving cars. Living in Pennsylvania I’ve had the misfortune of running into a mere hundred-pound-or-so white-tailed deer (or her running into me). I don’t even want to think about what a mammoth would do to my compact. Besides, woolly mammoths don’t seem to be creatures made for the age we’re headed into, which is one of likely continued, and extreme, warming.

Losing our romanticized picture of nature’s past means we are now also unclear as to its future. Some are using this new understanding of the ecological history of North America to make an argument against traditional environmentalism, arguing in essence that without a clear past to return to, the future of the environment is ours to decide.

Perhaps what this New Environmentalism signals is merely the normalization of North America’s relationship with nature, a move away from the idea of nature as a sort of lost Eden and towards something like ideas found elsewhere, which need not be any less reverential or religious. In Japan especially nature is worshiped, but it is in the form of a natural world that has been deeply sculpted by human beings to serve their aesthetic and spiritual needs. This is the Anthropocene interpreted as opportunity rather than as unmitigated disaster.

Ultimately, I think it is far too soon to embrace without some small degree of skepticism the project of the New Environmentalism espoused by organizations such as the Breakthrough Institute, or the chief scientist and firebrand of the Nature Conservancy, Peter Kareiva, or environmental journalists and scholars espousing those views, like Andrew Revkin, Emma Marris, or even the incomparable Diane Ackerman. For even if the New Environmentalism contains a degree of truth, it is inherently at risk of being captured by actors whose environmental concern is a mere smokescreen for economic interests, and, at the very least, it threatens to undermine the long-held support for setting aside areas free from human development, whose stillness and silence many of us in our neon-drenched and ever-beeping world hold dear.

It would be wonderful if objective scientists could adjudicate disputes between traditional environmentalists and New Environmentalists expressing non-orthodox views on everything from hydraulic fracturing, to genetic engineering, to invasive species, to geo-engineering. The problem is that the very uncertainty at the heart of these types of issues probably makes definitive objectivity impossible. At the end of the day the way one decides comes down to values.

Yet what the New Environmentalists have done that I find most intriguing is to challenge the mythology of Edenic nature, and just as with any other undermining of a ruling mythology, this has opened the door to both freedom and risk. On the basis of my own values, the way I would use that freedom would not be to hive off humanity from nature, to achieve the “de-ecologization of our material welfare”, as some New Environmentalists want to do, but for the artifact of human civilization itself to be made more natural.

Up until today humanity has gained its ascendancy over nature by assaulting it with greater and more concentrated physical force. We’ve dammed and redrawn the course of rivers, built massive levees against the sea, cleared the land of its natural inhabitants (nonhuman and human) and reshaped it to grow our food.

We’ve reached the point where we’re smarter than this now and understand much better how nature achieves ends close to our own without resorting to excessively destructive and wasteful force. Termites achieve climate control through the geometry of their mounds rather than our energy-intensive air conditioning. IBM’s Watson took a small city’s worth of electric power to beat its human opponents on Jeopardy!, opponents whose brains consumed no more power than a light bulb.

Although I am no great fan of the term “cyborg-ecology”, I am attracted to the idea as it is being developed by figures such as Bradley Cantrell, who propose that we use our new capabilities and understanding to create a flexible infrastructure that allows, for instance, a river to be a river rather than a mere “concrete channel”, which we’ve found superior to the alternative of a natural river because it is predictable while providing for our needs. That is a suffocatingly narrow redefinition of nature, one that has come at great cost to the other species with which we share such habitats, and that ultimately causes large-scale problems requiring yet more brute-force engineering for us to control once nature breaks free from our chains.

I can at least imagine an Anthropocene that provides a much richer and biodiverse landscape than our current one, a world where we are much more planful about providing for creatures other than just our fellow humans. It would be a world where we perhaps break from environmentalism’s shibboleths and do things such as prudently and judiciously use genetic engineering to make the natural world more robust, and where scientists who propose that we use the tools our knowledge has given to address plagues not just to ourselves, but to fellow creatures, do not engender scorn, but careful consideration of their proposals.

It would be a much greener world, where the cold kingdom of machines hadn’t replaced the living but was merely a temporary way station towards rebirth after our long and almost terminal assault on the biological world. If our strategies before have been divided between raping and subduing nature in light of our needs, or conversely, placing nature at such a remove from our civilization that she might be deemed safe from our impact, perhaps, having gained knowledge of nature’s secrets, we can now live not so much with her as a part of her, and with her as a part of us.

The Sofalarity is Near

Mucha Goddess Maia

Many readers here have no doubt spent at least some time thinking about the Singularity, whether in a spirit of hope or fear, or perhaps more reasonably some admixture of both. For my part, though, I am much less worried about a coming Singularity than I am about a Sofalarity in which our ability to create realistic illusions of achievement and adventure convinces the majority of humans that reality isn’t really worth all the trouble after all. Let me run through the evidence of an approaching Sofalarity. I hope you’re sitting down… well… actually I hope you’re not.

I would define a Sofalarity as a hypothetical point in human history when the majority of human beings spend most of their time engaged in activities that have little or no connection to actual life in the physical world. It’s not hard to see the outline of this today: on average, Americans already spend an enormous amount of time with their attention focused on worlds either wholly or partly imagined. The numbers aren’t very precise, and differ among age groups and sectors of the population, but they come out to be somewhere around five hours watching television per day, three hours online, and another three hours playing video games. That means, collectively at least, we spend almost half of our day in dream worlds, not counting the old fashioned kind such as those found in books or the ones we encounter when we’re actually sleeping.
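The tally above can be checked with a quick back-of-the-envelope calculation; the hours are the essay’s rough approximations, not precise survey data:

```python
# Rough tally of the cited daily screen-time estimates (hours per day are
# approximations that vary by age group and population sector).
tv_hours = 5       # television
online_hours = 3   # time online
gaming_hours = 3   # video games

total = tv_hours + online_hours + gaming_hours
share_of_day = total / 24

print(total)                  # 11
print(f"{share_of_day:.0%}")  # 46%, i.e. almost half the day
```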

There’s perhaps no better example of how the virtual is able to hijack our very real biology than pornography. Worldwide, the share of total internet traffic that is categorized as erotica ranges from a low of four to as high as thirty percent. When one combines that with recent figures claiming that up to 36 percent of internet traffic isn’t even generated by human beings but by bots, it’s hard not to experience future shock.

Amidst all the complaining that the future hasn’t arrived yet and “where’s my jetpack?”, a 21st century showed up where upwards of 66 percent of internet traffic could be people looking for pornography, bots pretending to be human, or, weirdest of all, bots pretending to be human looking for humans to have sex with. Take that, Alvin Toffler.

Still, all of this remains simply our version of painting on the walls of a prehistoric cave. Any true Sofalarity would likely require more than just television shows and YouTube clips. It would need to have gained the keys to our emotional motivations and senses.

As a species we’ve been trying to open the doors of perception with drugs for longer than almost anything else. What makes our current situation more likely to lead toward a Sofalarity is that this quest is now a global business, and that we’ve become increasingly sophisticated when it comes to playing tricks on our neurochemistry.

The problem any society has with individuals screwing with their neurochemistry is twofold. The first part is to make sure that enough sober people are available for the necessary work of keeping the society operating at a functional level, and the second is to prevent any ill effects of the mind-altered from spilling over into the society at large.

The contemporary world has seemingly found a brilliant solution to this problem: contain the mind-altered in space and time, and make sure only the sober are running the show. The reason bars or dance clubs work is that only the customers are drunk or stoned, and such places exist in a state of controlled chaos, with the management carefully orchestrating the whole affair, keeping things lively enough that customers will return while ensuring they don’t get so dangerous that patrons will stay away for the opposite reason.

The whole affair is contained in time because drunken binges last only through the weekend with individuals returning to their straight-laced bourgeois jobs on Monday, propped up, perhaps by a bit of stimulants to promote productivity.

Sometimes this controlled chaos is meant to last for longer stretches than the weekends, yet here again, it is contained in space and time. If you want to see controlled chaos perfected with technology thrown into the mix you can’t get any better than Las Vegas where seemingly endless opportunities for pleasure and losing one’s wits abound all the while one is being constantly monitored both in the name of safety, and in order that one not develop any existential doubts about the meaning of life under all that neon.   

If you ever find yourself in Vegas losing your dopamine fix after one too many blows from lady luck behind a one-armed bandit, and suddenly find some friendly casino staff next to you offering you free drinks or tickets to a local show, bless not the goddess of Fortune, but the surveillance cameras that have informed the house they are about to lose an unlucky, and therefore lucrative, customer. Las Vegas is the surveillance capital of the United States, and it’s not just inside the casinos.

Ubiquitous monitoring seems to be the price of Las Vegas’ adoption of vice as a form of economy. Or as Megan McArdle put it in a recent article:

Is the friendly police state the price of the freedom to drink and gamble with abandon? Whatever your position on vice industries, they are heavily associated with crime, even where they are legal. Drinking makes people both violent and vulnerable; gambling presents an almost irresistible temptation to cheating and theft. Las Vegas has Disneyfied libertinism. But to do so, it employs armies of security guards and acres of surveillance cameras that are always and everywhere recording your every move.

Even the youngest of our children now have a version of this: we call it Disney World. The home of Mickey Mouse has used current surveillance technology to its fullest, allowing it to give visitors to the “magic kingdom” both the experience of being free and one of reality seemingly bending itself into the shape of innocent fantasy and expectation. It’s a technology they work very hard to keep invisible. Disney’s MagicBand, which not only allows visitors to navigate seamlessly through its theme parks, but allows your dinner to be brought to you before you’ve ordered it, or the guy or gal in the Mickey suit to greet your children by name before they have introduced themselves, was described recently in a glowing article in Wired that quoted the company’s COO Tom Staggs this way:

 Staggs couches Disney’s goals for the MagicBand system in an old saw from Arthur C. Clarke. “Any sufficiently advanced technology is indistinguishable from magic,” he says. “That’s how we think of it. If we can get out of the way, our guests can create more memories.”

Nothing against the “magic of Disney” for children, but I do shudder a little thinking that so many parents don’t think twice about creating memories in a “world” that is not so much artificial as completely staged. And it’s not just for kids. An entire industry has been built around our ridiculousness here, especially in places like China, where people pay to have their photos taken in front of fake pyramids or the Eiffel Tower, or to vacation in places pretending to be someplace else.

Yet neither Las Vegas nor a Disney theme park resembles what a Sofalarity would look like in full flower. After all, the showgirls at Bally’s or the poor soul under the mouse suit in Orlando are real people. What a true Sofalarity would entail is nobody being there at all, the very people behind the pretense having disappeared.

We’re probably some way off from a point where the majority of human labor is superfluous, but if things keep going at the rate they are, we’re not talking centuries. The rejoinder to claims that human labor will be replaced to the extent that most of us no longer have anything to do is often that we’ll become the creators behind the scenes, the same way Apple’s American workers do the high-end work of designing its products while the work of actually putting them together is done by numb fingers over in China. In the utopian version of our automated future we’ll all be designers on the equivalent of Infinite Loop Street while the robots provide the fingers.

Yet, over the long run, I am not sure this humans-as-mental-creators/machines-as-physical-producers distinction will hold. Our (quite dumb) computers already create visually stunning and unanticipated works of art, compose music that is indistinguishable from that created by human minds, and write things none of us realize are the product of clever programs. Who’s to say that decades hence, or over a longer stretch, they won’t be able to create richer fantasy worlds of every type that blow our minds and grip our attention far more than any crafted by our flesh and blood brethren?

And still, even should every human endeavor be taken over by machines, including our politics, we would be short of a true Sofalarity, because we would be left with the things that make us most human: the relationships we have with our loved ones. No, to see the Sofalarity in full force we’d need to have become little more than piles of undulating mush, like the creatures in the original conception of the movie WALL-E, from which I ripped the term.

The only way we’d get to that point is if our created fantasies could reach deep under our skin and skulls and give us worlds and emotional experiences that atrophy, to the point of irrecoverability, what we now consider most essential to being a person. The signs are clear that we’re headed there. In Japan, for instance, there are perhaps 700,000 hikikomori, modern-day hermits: adults who have withdrawn from three-dimensional social relationships and live out their lives primarily online.

Don’t get me wrong, there is a lot of very cool stuff either here or shortly coming down the pike. There’s Oculus Rift, should it ever find developers and conquer the nausea problem, and there are wonders such as Magic Leap, a virtual reality platform that allows you to see 3D images by beaming them directly into your eyes. Add to these things like David Eagleman’s crazy haptic vest, or brain readers that sit in your ear, not to mention things a little further off in terms of public debut that seem to have jumped right off the pages of Nexus, like brain-to-brain communication, or magnetic nanoparticles that allow brain stimulation without wires, and it’s plain to see we’re on the cusp of a revolution in creating and experiencing purely imagined worlds, all of which makes it even more difficult to bust a poor hikikomori out of his 4’ x 4’ apartment.

It seems we might be on the verge of losing the distinction between the Enchantment of Fantasy and Magic that J.R.R. Tolkien brought us in his brilliant lecture On Fairy-Stories:

Enchantment produces a Secondary World into which both designer and spectator can enter, to the satisfaction of their senses while they are inside; but in its purity it is artistic in desire and purpose. Magic produces, or pretends to produce, an alteration in the Primary World. It does not matter by whom it is said to be practiced, fay or mortal, it remains distinct from the other two; it is not an art but a technique; its desire is power in this world, domination of things and wills.

Fantasy is a natural human activity. It certainly does not destroy or even insult Reason; and it does not either blunt the appetite for, nor obscure the perception of, scientific verity. On the contrary. The keener and the clearer is the reason, the better fantasy will it make. If men were ever in a state in which they did not want to know or could not perceive truth (facts or evidence), then Fantasy would languish until they were cured. If they ever get into that state (it would not seem at all impossible), Fantasy will perish, and become Morbid Delusion.

For creative Fantasy is founded upon the hard recognition that things are so in the world as it appears under the sun; on a recognition of fact, but not a slavery to it. So upon logic was founded the nonsense that displays itself in the tales and rhymes of Lewis Carroll. If men really could not distinguish between frogs and men, fairy-stories about frog-kings would not have arisen.

For Tolkien, the primary point of fantasy was to enrich our engagement with the real world, to allow us to see the ordinary anew. Though it might also be a place to hide and escape should the real world, whether for the individual or society as a whole, become hellish, as Tolkien, who had fought in World War I, lived through the Great Depression, and gave his lecture on the eve of a second world war, well knew.

To those who believe we might already be living in a simulation, perhaps all this is merely like having traveled around the world only to end up exactly where you started, as in Borges’ story The Circular Ruins, or in the idea of many of the world’s great religions that we are already living in a state of maya or illusion, though we now make the case using much more scientific language. There’s a very serious argument out there, such as that of Nick Bostrom, that we are already living in a simulation. The way one comes to this conclusion is merely by looking at the virtual world we’ve already created and extrapolating the trend outward for hundreds or thousands of years. In such a world the majority of sentient creatures would be “living” entities in virtual environments, and these would end up comprising the overwhelming number of sentient creatures that will ever exist. Statistical reasoning would seem to lead to the conclusion that you are more likely than not, right now, a merely virtual entity. There are even, supposedly, possible scientific experiments to test for evidence of this. Thankfully, in my view at least, the Higgs particle might prevent me from being a Boltzmann brain.
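The statistical step in that argument can be made concrete with a toy calculation. The counts below are purely illustrative assumptions, nothing in the argument itself fixes them; the point is only that once simulated minds vastly outnumber unsimulated ones, indifference reasoning assigns you a high probability of being one of the simulated:

```python
def credence_simulated(real_minds: int, sims_run: int, minds_per_sim: int) -> float:
    """Fraction of all minds (real + simulated) that are simulated,
    under the assumption that you could equally be any one of them."""
    simulated = sims_run * minds_per_sim
    return simulated / (simulated + real_minds)

# Hypothetical numbers: one base civilization of 100 billion minds running
# 1,000 ancestor simulations of comparable population.
p = credence_simulated(real_minds=100_000_000_000,
                       sims_run=1_000,
                       minds_per_sim=100_000_000_000)
print(f"{p:.4f}")  # 0.9990: nearly every mind that ever exists is simulated
```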

For my part, I have trouble believing I am living in a simulation. Running “ancestor simulations” seems like something a being with human intelligence might do, but it would probably bore the hell out of any superintelligence capable of actually creating such things; they would not provide any essential information for its survival, and, given the historical and present suffering of our world, they would have to be run by a very amoral, indeed immoral, being.

That was part of the fear Descartes was tapping into when he proposed that the world, and himself in it, might be nothing more than the dream of an “evil demon”. Yet what he was really getting at, just as was the case with other great skeptics of history such as Hume, wasn’t so much the plausibility of the Matrix as the limits surrounding what we can ever be said to truly know.

Some might welcome the prospect of a coming Sofalarity for the same reasons they embrace the Singularity, and indeed, when it comes to many features such as exponential technological advancement or automation, the two are hardly distinguishable. Yet the biggest hope that sofalarians and singularitarians would probably share is that technological trends point towards the possibility of uploading minds into computers.

Sadly, or thankfully, uploading is some ways off. The EU seems to have finally heeded the complaints of neuroscientists that Henry Markram’s Human Brain Project, which aimed to simulate an entire human brain, was scientifically premature, premature enough to be laughable were it not for the billion euros invested in it that might have been spent on much more pressing areas like mental illness or Alzheimer’s research. The project has not been halted, but a recent scathing official report is certainly a big blow.

Nick Bostrom has pondered that if we are not now living in a simulation, then there may be something that prevents civilizations such as ours from reaching the technological maturity needed to create such simulations. As I see it, perhaps the movement towards a Sofalarity ultimately contains the seeds of its own destruction. Just as I am convinced that hell, which exists in the circumscribed borders of torture chambers, death camps, or the human heart, can never be the basis for an entire society, let alone a world, it is quite possible that a heaven we could only reach by escaping the world as it exists is like that as well, and that any society built around the fantasy of permanent escape would not last long in its confrontation with reality. Fermi paradox, anyone?