The root cause of the US failure to grapple with the ongoing health and economic crises unleashed by the coronavirus ultimately stems from the collapse of any notion of a publicly shared reality. To beat the virus everyone would need to believe in its existence and act on that reality. They would need to socially distance, wear a mask, wash their hands, and limit interactions with others to small groups and necessities. They would need to eliminate unnecessary travel and avoid contact with vulnerable members of the population whenever possible.

If most evil arises from ignorance, then what is driving our failed response is a lack of belief, because the knowledge of what to do is everywhere. Too many of us distrust the medical experts’ understanding of the disease and the media’s communication of what needs to be done in light of it. America is not alone in this epistemic crisis- it is a shared legacy of the West- but we suffer from a particularly acute version of it, one that has made our capacity to respond to the virus far worse than any of our peers’. It’s a crisis that, like Trump himself, has been a long time in coming.

Restoring a shared sense of reality is the clearest way out of the crisis, but the difficulty such a restoration faces is that we’ve known for a long time now that any such notion of the real world is a fabrication. Like Adam and Eve, we find it impossible to return to a state of lost innocence; our intellectual virginity was lost long ago. We’ve known since Kant that the “thing in itself” lies just outside our reach, and since Nietzsche that we’re all liars. The lessons of social construction have been deeply embedded in anthropology and sociology, and infuse philosophy through the lucid caterwaul of postmodernism and the workaday plodding of pragmatism.

Even the most whiggish liberal among us understands that the news is a ratings and click-bait game, objectivity a smokescreen- we’re all Chomskyans now. All good scientists know that our theories don’t correspond to some true world “out there”, but are mere models- provisional guides for the blind through an uncertain labyrinth. Even the best of our models of covid-19 and its spread are merely provisional maps of a constantly changing landscape, and because of that full of holes only the future will reveal.

The greater public might not learn of this alienation from Truth via university education, but they experience it viscerally every day through television and the internet. Hard-core reality has disappeared behind screens, becoming fungible, editable. Not a mirror held up to the real but a means of distortion, compression, enhancement, and amplification. All we have are models now, and whose models you trust depends on a set of prior commitments that seem unshakable even in the face of the catastrophic failure of one’s own assumptions regarding the world.

This reduction of capital T truth to models isn’t necessarily a bad thing. The idea that human beings who are part of and made by the world could somehow through our narrow sieve of cognition create a completely faithful copy of that world was always a species of hubris. In that sense, our closest approximation of the truth might come from the acknowledgement of our own limits- identifying the permanent holes in our maps. Yet the replacement of the truth by models does come with its own unique set of dangers, the first being that only part of a society gets the memo.

In that case you get a group or groups of individuals in a society who have sincerely confused the map with the territory. We might call these people fanatics, or maybe just believers, depending on our mood. They need not be anti-scientific; rather, they mistake science for a description of reality, when its actual task is to demarcate uncertainty. Because they never update their beliefs to incorporate new information that conflicts with their priors, given the right circumstances they will repeat the same mistakes over and over again- like Charlie Brown and his football. In honor of Lucy we can call them blockheads, though this should not be taken as an insult. Blockheads are made, not born. And it is the supposedly cleverest among us who are responsible for having made so many of them.

Of course, all past societies did this. They organized life around socially constructed fictions. Surprisingly, this ability to live on the basis of imagined worlds has been a great strength of our species rather than a weakness. Part of the reason that it could remain a strength was that such delusions have always been constrained by common sense. This partly biological and partly learned shared understanding of how the world works arises from our inherited or experienced contact with the real world. At a bare minimum our social constructions are anchored to the language we share with whatever larger human group we belong to. We share meanings for commonly experienced things within the world, where what we have inherited constrains the forms any future social constructions can take.

Even the insane don’t normally lose this grounding, but merely adopt a private version of delusions that are usually social and make little sense outside a collective context. The only people who experience its loss are those suffering from severe forms of dementia, where what is lost is not so much beliefs as shared protocols for interfacing with the world- and thanks to the globalization of technology, the protocols used in the human-made world are now universal. A person suffering from a manic delusion might wrongly think that there is a monster hiding behind a door, but they will rarely forget how to open one.

What distinguishes our own age of belief is how good we’ve become at inventing and spreading imaginary worlds while, at the very same time, the origin of our protocols has drifted away from common sense. The modern scientific revolution began as an effort to clarify, demarcate, and formalize this common sense. In the search for the grounds of our commonly experienced world, science ended up taking us farther and farther away from the scale of the world as actually experienced by human beings- confronting us with new scales in both space and time. Invisible entities, or hyper-objects too large for us to see, time scales too fast for human perception or too long to be comprehensible against our own limited lifespan, replaced angels and demons, infinity and the abyss.

Yet our belief or disbelief in these invisible entities has no consequence for our use of technologies and interfaces based upon them. Using my computer requires no understanding of what computation actually is; for all I know, it could be tiny Keebler Elves running the show back there. A person traveling to a flat earth convention isn’t flummoxed in the least by the fact that he uses GPS to find his way there.

Blockheads can normally get along just fine in the world precisely because of this persistent gap between belief and performance. Politics in such a world doesn’t normally take the form of Mytilenian debates, with all the existential risks they entail, but is more like loyalty to a sports team- a matter of taste where nothing of real consequence is at stake. As with sports, politics can be tribal, and because a change in position is tied to questions of identity, such changes are rare. As with switching out a favorite team for a new one, the job is taxing and usually deemed not worth it. Constantly updating one’s priors is hard, and when the civil war is merely a Potemkin conflict- so long as you yourself aren’t among its “collateral damage”- it is just rare enough not to have revolutionary consequences. At least up until now.

With the coronavirus (and climate change) we’re faced with particularly harsh examples of what it means to live in a society that depends on science for its existence, yet where so few of us need any understanding or experience of this science to thrive. With the coronavirus you have something that is both too small for us to see, and whose effects- until it infects us or a loved one- are too large and diffuse for us to understand. Outside the normal human scale it becomes just another demon dreamed up by eggheads in lab coats. Whereas every human society in the past confronted the mass death of pandemics in its homes and on its streets, our deaths are now hidden away in hospitals, whose horror shows- like those of our slaughterhouses, factories, or prisons- we never need to personally see.

Combine the fact that the virus is outside our normal human scale and hidden from everyday experience with an information ecosystem that is so obviously corrupted- one where much of science itself has been captured by these very forces of corruption- and it’s no wonder we find ourselves in a situation where a medical crisis has become yet another football in our endless and self-destructive partisan conflict. Indeed, in the greatest of ironies, once the pandemic became political it could thereby be deemed unreal, just another front in the meaningless war between rival armies of blockheads.

For the second danger in replacing truth with models is that we start to view all models with cynicism. Ultimately, the cynic comes to see the search for knowledge itself as nothing but a game. In his view what separates the wise man from the fool is that the wise man knows that everybody is playing him. A chaotic good cynic might take this one remaining piece of truth in the world and use it to become a prankster whose sole goal is the exposure of all those whose power arises from their control of these language games. Someone of the chaotic evil bent, on the other hand, might use the truth of untruth as a vehicle for wealth or power, for what it gives him is an unprecedented flexibility when it comes to language. The primary skill of both the salesman and the demagogue is knowing how to tell you exactly what you want to hear.

This penchant for using the flexibility of language to create fantasy worlds or steer the blockheaded crowd lies deep in the roots of the American psyche- a country whose first truly novel cultural export was the confidence man, exemplified by P.T. Barnum. Some would see it as mere salesmanship, but there is a fine line between persuasion and deception, a line drawn where the persuader’s belief in his own veracity fails.

The constant pulling at our coattails by marketers wouldn’t matter so much had we been able to hive off the important issues of the public sphere from their influence- something Walter Lippmann urged us to do as far back as the 1920s. What we’ve proven since then is that it’s impossible to build a system of public communication that is supported by advertising and political propaganda while at the same time dedicated to informing us of the truth in a way we’ll actually believe.

Luckily, in America few of these cynics leading blockheads have become demagogues, and fewer still have obtained positions of national power. On the white populist side we’ve had William Jennings Bryan, Huey Long, Father Coughlin, Joseph McCarthy, George Wallace, and now Donald Trump. The last has managed to smoothly transition from huckster cynic to the only demagogue ever to obtain the nation’s highest office. God help us all.

The seemingly intractable problem we now face is that we have a man leading the country who doesn’t believe in reality and who has amassed a following of cynics and blockheads large enough to make an effective response to a very real pandemic impossible. In all likelihood the pandemic will move much faster than our finding an escape route from our epistemic jam, but we should try to chart a path towards an exit in any case- otherwise we’ll keep repeating the same mistakes until reality has its final say and at last does us in.

The fact is that we’re returning to a world where substituting the creation of fantasy worlds for the discovery of necessarily tentative truths will become increasingly untenable. It’s often been said that it was Huxley’s Brave New World rather than Orwell’s 1984 that got the late 20th century right, because Huxley put his finger on the fact that the society being built was all about escapism. What critics of Orwell miss is that the societies he imagined were based on an entirely fake war, because war itself had become impossible.

It was the absence of war that made the complete replacement of the search for reality with the construction of fantasy possible, because the existential stakes of war require those who wage it to see the world in front of them as truthfully as possible, while acknowledging the limits of their own perception and the necessity of making decisions even when information is less than complete. And while 21st century warfare may in large part concern itself with getting inside the enemy’s OODA loop and leveraging misinformation, both assume that there is an actual reality whose perception can be disrupted or hidden.

The US may or may not be headed towards a new cold war style conflict with China, but in any case, the American exceptionalism that is our refusal to accept reality cannot be sustained. Our wealth and geographic isolation have saved us from facing the full consequences of our numerous mistakes going all the way back to at least the Vietnam War. What we face now is not so much an escalating competition with other societies as it is an intensifying struggle with Nature itself. What we desperately need to learn is that the world outside of us follows an agenda all its own, oblivious to our fantasy worlds and regardless of how skilled we are at building them.


How the police became storm troopers

It’s been six goddamned years since I wrote the bulk of what follows below, six goddamned years since the combination of racism, militarized police, and ubiquitous cell phone cameras exploded into the BLM movement, six goddamned years in which nothing of substance has really changed.

But maybe, just maybe, this time really will have been different, the murder at the hands of a policeman too brutal to ignore, the images of cops acting like an occupying army too discomforting to liberal centrists, the rage at the injustice of it all torqued even higher by the shattering experience under a pandemic too deep to be swept aside by the next fashion in the news cycle to really change things for the better.

And we are better prepared for deep reforms. Much intellectual groundwork has been laid since Radley Balko wrote his book The Rise of the Warrior Cop, explaining exactly how our police became militarized. There has been Stuart Schrader’s Badges Without Borders: How Global Counterinsurgency Transformed American Policing, which places militarized policing in the broader historical context of American empire- a boomerang effect in which the type of power used to coerce foreigners far from the homeland is eventually turned on citizens themselves, much in the way Hannah Arendt traced the rise of the European police state to the age of imperialism.

In addition to these excellent historical descriptions there have been increasing calls for abolition- the wholesale transformation of law enforcement into something radically different. Books such as Alex S. Vitale’s The End of Policing have joined a growing chorus (among both the left and the libertarian right) to move away from the carceral state, which, much more than policing, makes the US an outlier among developed nations.

I have no idea where any of these movements will end up, but given the dangers to any democracy that come from abandoning legitimacy for the compulsion of force, I think it helpful to review how we got here- a reality I first learned of from Balko’s excellent book.


A democracy has entered a highly unstable state when its executive elements- the police and security services it pays for through its taxes, which exist for the sole purpose of protecting and preserving that very community- are turned against it. I would have had only a small clue as to how this came about were it not for a rare library accident. I was trying to check out a book on robots in warfare for a project I am working on, but grabbed the book next to it by mistake- Radley Balko’s The Rise of the Warrior Cop.

As Balko explains, much of what we now take as normal police functions would likely have been viewed by the Founders as “a standing army”, something they were keen to prevent. In addition to the fact that Americans were incensed by the British use of soldiers to exercise police functions, the American Revolution had been inspired in part by the British use of “General Warrants”, which allowed them to bust into American homes in their battle against smuggling. From its beginning the United States has had a tradition of separating military from police power, along with a tradition of limiting police power- indeed, this is the reason our constitutional government exists in the first place.

Balko points out how the U.S., as it developed its own police forces- something that became necessary with the country’s urbanization and modernization- maintained these traditions, which only fairly recently began to erode, largely starting with the Nixon administration’s “law and order” policies and especially the “war on drugs” launched under Reagan.

In framing the problem of drug use as a war rather than a public health concern, we started down the path of using the police to enforce military-style solutions. If drug use is a public health concern, then efforts go into providing rehabilitation services for addicts, addressing systemic causes and underlying perceptions, and legalizing where doing so does not pose inordinate risk to the public, as a matter of personal liberty. If the problem of drug use is framed as a war, then this means using kinetic action to disrupt and disable “enemy” forces. It means pushing to the limits of what is legally allowable when using force to protect one’s own “troops”. It means mass incarceration of captured enemy forces. Fighting a war means that training and equipment focus on the effective use of force, not “social work”.

The militarization of America’s police forces that began in earnest with the war on drugs, Balko reminds us, is not an issue that can easily be reduced to conservative vs. liberal, Republican vs. Democrat. In the 1990s conservatives were incensed at police brutality and the misuse of military-style tactics at Waco and Ruby Ridge. Yet conservatives largely turned a blind eye to the same brutality used against anarchists and anti-globalization protestors in the Battle of Seattle in 1999. Conservatives have largely supported the militarized effort to stamp out drug abuse and the use of SWAT teams to enforce laws against non-violent offenders, especially illegal immigrants.

The fact that police were increasingly turning to military tactics and equipment was not, however, entirely an overreaction. It was inspired by high-profile events such as the Columbine massacre and a dramatic robbery in North Hollywood in 1997, in which the two robbers, Larry Phillips, Jr. and Emil Mătăsăreanu, wore body armor that police with light weapons could not penetrate. The 2008 attacks in Mumbai, in which a small group of heavily armed and well-trained terrorists was able to kill 164 people and temporarily cripple large parts of the city, should serve as a warning of what happens when police cannot rapidly deploy lethal force, as should a whole series of high-profile “lone wolf” shootings. Police can thus rationally argue that they need access to heavy weapons, SWAT teams, and training for military-style contingencies. It is important to remember that police daily put their lives at risk in the name of public safety.

Yet militarization has gone too far, influenced more by security corporations and their lobbyists than by conditions in actual communities. If the drug war and attention-grabbing acts of violence were where the militarization of America’s police forces began, 9/11 and the wars in Afghanistan and Iraq acted as an accelerant on the trend. These events launched a militarized-police-industrial complex: the country was flooded with grants from the Department of Homeland Security, which funded even small communities to set up SWAT teams and purchase military-grade equipment. Veterans of wars that were largely wars of occupation and counterinsurgency were naturally attracted to using these hard-won skill sets in civilian life- which largely meant either becoming police or entering the burgeoning sector of private security.

So that’s the problem as laid out by Balko; what is his solution? For Balko, the biggest step we could take toward rolling back militarization is to end the drug war and stop using military-style methods to enforce immigration law. He would like to see a return to community policing- if not quite Mayberry, then at least something like the innovative program launched in San Antonio, which uses police as social workers rather than commandos in responding to mental-health-related crime.

Balko also wants us to end our militarized response to protests. There is no reason why protesters in a democratic society should be met by police wielding automatic weapons or dispersed with tear gas. We can also stop the flood of federal funding used by local police departments to buy surplus military equipment- something the Obama administration, prompted by Ferguson, seems keen to review.

A positive trend Balko sees is the ubiquity of photography and film permitted by smartphones, which allows protesters to capture brutality as it occurs- a right everyone has, despite the insistence of some police in protest situations to the contrary, and one that has been consistently upheld by U.S. courts. Indeed, the other potentially positive legacy of Ferguson, besides bringing the problem of police militarization into the public spotlight- for there is no wind so ill it does not blow some good- might be that it has helped launch truly citizen-based and crowd-sourced media.

My criticism of The Rise of the Warrior Cop, to the extent I have any, is that Balko only tells the American version of this tale, though it is a story playing out globally. The inequality of late capitalism certainly plays a role. Wars between states have at least temporarily been replaced by wars within states. Global elites, more connected to their rich analogs in other countries than to their own nationals, find themselves turning to the large number of the middle class who are located in one form or another in the security services of the state. Elites pursue equally internationalized rivals, such as drug cartels and terrorist networks, as one would a cancerous tumor- wishing to rip it out by force- not realizing that this form of treatment does not get to the root of the problem and might even end up killing the patient.

More troublingly, they use these security services to choke off mass protests by the poor and other members of the middle class- protests now enabled by mobile technologies- because they find themselves incapable of responding to the problems that initiated them with long-term political solutions. This relates to another aspect of police militarization Balko doesn’t really explore: the privatization of police services, as those who can afford it retreat behind the fortress of private security while the conditions of the society around them erode.

Maybe there was a good reason that The Rise of the Warrior Cop was placed on the library shelf next to books on robot weapons after all. It may sound crazy, but perhaps in the not so far off future elites will automate policing as they are automating everything else. Mass protests, violent or not, will be met not with flesh-and-blood policemen but with military-style robots and drones. And perhaps only then will once-middle-class policemen, made poor by the automation of their calling, realize that all this time they have been fighting on the wrong side of the rebellion.

Mary Shelley’s other horror story: lessons for super-pandemics

Given the ongoing COVID-19 pandemic, I thought it might be a good idea to re-post this piece I wrote several years back, during the halcyon days when the US had a government competent enough to help other countries in a pandemic rather than being incapable of even helping itself. It was written in 2014, at the height of the Ebola outbreak, and reviews Mary Shelley’s horror novel about a civilization-destroying pandemic, The Last Man.

The lessons I gleaned from Shelley’s novel I believe still stand- not that the world is ending, but that what we most have to fear from pandemics is their uncanny ability to make almost everything we love disappear. This includes, especially, those closest to us.

What made Ebola particularly tragic was that it tended to prey upon those who loved its victims enough to provide them with some kind of care. COVID-19, although far less deadly, may in some sense be worse, for what appears to work best at stopping its spread isn’t the massive lockdown of cities as seen in Wuhan, but the identification of infected individuals followed by their rapid separation from their families. Let’s hope that as few as possible of these separations turn out to be final, and that we use what this crisis is teaching us to fix our all too broken society.


Back in the early 19th century a novel was written that tells the story of humanity’s downfall in the 21st century. Our undoing is the consequence of a disease that originates in the developing world and radiates outward, eventually spreading into North America, East Asia, and ultimately Europe. The disease proves unstoppable, causing the collapse of civilization, our greatest cities becoming grave sites of ruin. For all the reader is left to know, no more than a single human being survives the pandemic.

We best know the woman who wrote The Last Man in 1826 as the author of Frankenstein, but it seems Mary Shelley had more than one dark tale up her sleeve. Yet, though the destruction wrought by disease in The Last Man is pessimistic in the extreme, we might learn some lessons from the novel helpful for understanding not only the very deadly, if less than absolute, ruination of the pandemic of the moment- Ebola- but even more the dangers of super-pandemics, which are more likely to emerge from within humanity than from a still quite dangerous nature herself.

The Last Man tells the story of Lionel Verney, son of a nobleman who lost his fortune to gambling, who becomes the sole remaining man on earth as humanity is destroyed by a plague in the 21st century. Do not read the novel hoping to get a glimpse of Shelley’s view of what our 21st century world would be like, for it looks almost exactly like the early 19th century, with people still getting around on horseback and little in the way of future technology.

My guess is that Shelley set her story in the “far future” in order to avoid any political heat for a novel in which England has become a republic. Surely, if she had meant it to take place in a plausible 21st century, and had somehow missed the implications of the industrial revolution, there would at least have been some imagined political differences between that world and her own. The same Greco-Turkish conflict that raged in the 1820s rages on in Shelley’s imagined 21st century, with only changes in the borders of the war. Indeed, the novel is more a reflection on and critique of the Romantic movement, with Lord Byron making his appearance in the form of the character Lord Raymond, and Verney himself a barely concealed version of Mary Shelley’s deceased husband Percy.

In The Last Man Shelley sets out to undermine all the myths of the Romantic movement: myths of the innocence of nature, the redemptive power of revolutionary politics, and the transformative power of art. While of historical interest, such debates offer us little in terms of the meaning of her story for us today. That meaning, I think, can be found in the state of epidemiology, which on the very eve of Shelley’s story was about to undergo a revolution- a transformation that would occur in parallel with humanity’s assertion of general sovereignty over nature, the consequence of the scientific and industrial revolutions.

Reading The Last Man one needs to be aware that Shelley had no idea how disease actually works. In the 1820s the leading theory was the miasma theory, which held that diseases were caused by “bad air”. When Shelley wrote her story, miasma theory was only beginning to be challenged by what we now call the “germ theory” of disease, through the work of scientists such as Agostino Bassi. This despite the fact that microscopic organisms had been known since the 17th century, and a role for invisible “seeds” of contagion had been proposed as early as 1546 by the Italian polymath Girolamo Fracastoro. Shelley’s characters thus do things that seem crazy in light of germ theory; most especially, they make no effort to isolate the infected.

Well, some do. In The Last Man it is only the bad characters who try to run away or isolate themselves from the sick. The supremely tragic element in the novel is how what is most important to us- the small intimate circles we cling to despite everything- can be done away with by nature’s cruel shrug. Shelley’s tale is one of extreme pessimism not because it portrays the unraveling of human civilization, turning our monuments into ruins and eventually dust, but because of how it portrays a world where everyone we love most dearly leaves us almost overnight. The novel gives an intimate portrait of what it’s like to watch one’s beloved family and friends vanish- a reality Mary Shelley was all too well acquainted with, having lost her husband and three of her children.

Here we can find the lesson to take for the Ebola pandemic, for the deaths we are witnessing today in west Africa are in a very real sense a measure of people’s humanity- as if nature, perversely, set out to target those acting in the way that is most humane. For, absent modern medical infrastructure, the only ones left to care for the infected are the families of the sick themselves.

This is how New York Times journalist Helene Cooper explained it to interviewer Terry Gross of Fresh Air:


COOPER: That’s the hardest thing, I think, about the disease is it does make pariahs out of the people who are sick. And it – you know, we’re telling the family people – the family members of people with Ebola to not try to help them or to make sure that they put on gloves. And, you know, that’s, you know, easier – I think that can be easier said than done. A lot of people are wearing gloves, but for a lot of people it’s really hard.

One of the things – two days after I got to Liberia, Thomas Eric Duncan sort of happened in the U.S. And, you know, I was getting all these questions from people in the U.S. about why did he, you know, help his neighbor? Why did he pick up that woman who was sick? Which is believed to be how we got it. And I set out trying to do this story about the whole touching thing because the whole culture of touching had gone away in Liberia, which was a difficult thing to understand. I knew the only way I could do that story was to talk to Ebola survivors because then you can ask people who actually contracted the disease because they touched somebody else, you know, why did you touch somebody? It’s not like you didn’t know that, you know, this was an Ebola – that, you know, you were putting yourself in danger. So why did you do it?

And in all the cases, the people I talked to there were, like, family members. There was this one woman, Patience, who contracted it from her daughter who – 2-year-old daughter, Rebecca – who had gotten it from a nanny. And Rebecca was crying, and she was vomiting and, you know, feverish, and her mom picked her up. When you’re seeing a familiar face that you love so much, it’s really, really hard to – I think it’s a physical – you have to physically – to physically restrain yourself from touching them is not as easy as we might think.


The thing we need to do to ensure naturally occurring pandemics such as Ebola cause the minimum of human suffering is to provide support for developing countries lacking the health infrastructure to respond to or avoid being the vectors for infectious diseases. We especially need to address the low number of doctors per capita found in some countries through, for example, providing doctor training programs. In a globalized world being our brother’s keeper is no longer just a matter of moral necessity, but helps preserve our own health as well.

A super-pandemic of the kind imagined by Mary Shelley, though, is an evolutionary near impossibility. It is highly unlikely that nature by itself would come up with a disease so devastating that we would be unable to stop it before it kills us in the billions. Having co-evolved with microscopic life, some human being’s immune system, somewhere, anticipates even nature’s most devious tricks. We are also in the Anthropocene now, able to understand, anticipate, and respond to the deadliest games nature plays. Sadly, however, the 21st century could still experience, as Shelley imagined, the world’s first super-pandemic, only the source of such a disaster wouldn’t be nature- it would be us.

One might think I am referencing bio-terrorism, yet the disturbing thing is that the return address for any super-pandemic is just as likely to be stupid and irresponsible scientists as deliberate bio-terrorists. Such is the indication from what happened in 2011, when the Dutch scientist Ron Fouchier deliberately turned the H5N1 bird flu into a form that could potentially spread human-to-human. As reported by Laurie Garrett:

Fouchier told the scientists in Malta that his Dutch group, funded by the U.S. National Institutes of Health, had “mutated the hell out of H5N1,” turning the bird flu into something that could infect ferrets (laboratory stand-ins for human beings). And then, Fouchier continued, he had done “something really, really stupid,” swabbing the noses of the infected ferrets and using the gathered viruses to infect another round of animals, repeating the process until he had a form of H5N1 that could spread through the air from one mammal to another.

Genetic research has become so cheap and easy that what once required national labs and huge budgets- achieving something nature would have great difficulty producing through evolutionary means- can now be done by run-of-the-mill scientists in simple laboratories, or even by high school students. The danger here is that scientists will create something so novel that evolution has not prepared any of us for it, and that through stupidity and lack of oversight it will escape from the lab and spread through human populations.

News of the reckless Dutch experiments with H5N1 was followed by revelations of mind-bogglingly lax safety procedures around pandemic diseases at federal laboratories, where smallpox virus had been forgotten in a storage area and pathogens were passed around in Ziploc bags.

The U.S. government, at least, has woken up to the danger, imposing a moratorium on such research until its true risks and rewards can be understood and better safety standards established. This has already negatively impacted potentially beneficial research, and will necessarily continue to do so. Yet what else, one might ask, should the government do given the potential risks? What will ultimately be needed is an international treaty to monitor, regulate, and sometimes even ban certain kinds of research on pandemic diseases.

In terms of all the existential risks facing humanity in the 21st century, man-made super-pandemics are the one with the shortest path between reality and nightmare. The risk from runaway super-intelligence remains theoretical, based upon hypothetical technology that, for all we know, may never exist. The danger of runaway global warming is real, but we are unlikely to feel its full impact this century. Meanwhile, the technologies to create a super-pandemic are in large part already here, with the key uncertainty being how we might control such a dangerous potential if, as current trends suggest, the ability to manipulate and design organisms at the genetic level continues to both increase and democratize. Strangely enough, Mary Shelley’s warning in Frankenstein about the dangers of science used for the wrong purposes has the greatest likelihood of coming true in the form of her Last Man.

We Are All Thursdays Now

I’ve been in quite a mood for mysteries lately, only God knows why. Just in the past few months I’ve devoured a good chunk of Chandler, somewhat less of Borges, and even a slight bit of unread Conan Doyle. I’ve also been reading an author whose work I was only familiar with in the form of essays and religious polemics- the incomparable G.K. Chesterton.

I wasn’t all that impressed by the stories revolving around the character Chesterton is now most famous for- the seemingly dim-witted sleuth Father Brown who, Columbo-like, solves crimes while leaving others to wonder how he’s managed to tie his shoes. But if Father Brown proved a less compelling character than Philip Marlowe or Sherlock Holmes, Chesterton made up for it in spades by the last page of his metaphysical thriller- The Man Who Was Thursday.

Published in 1908, The Man Who Was Thursday tells the story of Syme, a policeman who infiltrates the High Council of Anarchists, a group of terrorists plotting a double assassination of the president of France and the Czar of Russia. Chesterton ripped this background right out of the news. Back in 1885 the anarchist Johann Most (grandfather of the Celtics broadcaster Johnny Most) had preached the Propaganda of the Deed. Most had even written a handbook with detailed instructions on how to make bombs and succeed in acts of violence- The Science of Revolutionary Warfare. The book earned Most a moniker worthy of a supervillain or a 1970s disco act- Dynamost.

For the Dynamost and his ilk the quickest way to overthrow society was “by the annihilation of its exponents.” High-profile murders played on the weaknesses of a bourgeois society both fascinated and terrified by acts of violence. The new mass media of the industrial press could be exploited to amplify the impact of otherwise limited acts of political violence. Political murder as media spectacle thus predates the electronic age, and was already over a century old when it was picked up by late 20th century maniacs like Charles Manson.

Before writing The Man Who Was Thursday Chesterton would have lived through a number of such high profile assassinations and examples of the Propaganda of the Deed. In 1894 Sadi Carnot, the president of France, was stabbed to death. In 1897 the prime minister of Spain was shot and killed. 1898 saw Empress Elisabeth of Austria stabbed to death, and in 1900 the King of Italy was assassinated. By 1901 anarchist violence had come to the United States with the assassination of President McKinley. In the year The Man Who Was Thursday was published the King of Portugal and his son were shot to death.

The assassination of Austrian Archduke Franz Ferdinand in 1914 in some ways proved those preaching the Propaganda of the Deed right after all. For with this one act the foundations of Europe would crumble, leaving an era of war and revolution in its wake, at the end of which would emerge a world likely unrecognizable to either Chesterton or the anarchists.

But in 1908 Chesterton wasn’t interested in all that. His point wasn’t to play into public fears and turn revolutionaries into uber-villains, or the staid middle class into heroes, but to show us something of the vanity, blindness, and profound weakness behind the kinds of Manichean tales we tell ourselves.

You see, The Man Who Was Thursday is a comedy. It’s a Keystone Cops-style farce of policemen, but it’s also a comedy in a metaphysical sense. As long as The Good lives, life, existence itself, can only be deemed a comedy.

Whereas Syme, who is known as Thursday to the other anarchists of the High Council, thinks he is the sole policeman infiltrating the group, in fact all of them, each also called by a day of the week, are policemen as well, with the exception of a towering man called Sunday- the mysterious figure who has brought the group together. Indeed, none of the fake anarchists can even agree on who or what Sunday is, for each only sees him within the frame of his own narrow perspective, as a kind of shadow cast on his relationship to the world.

Throughout the book, and to humorous effect, Chesterton plays on an all too common human weakness. Once we’ve deemed another person “bad” it becomes extremely difficult to see them as good, and all of their actions are interpreted in the light of this moral judgement.

The only genuinely villainous character in the novel is a man named Gregory, an actual anarchist- perhaps Satan himself- and the figure whom Syme dupes into giving him his place on the High Council. What Chesterton does with this character is to give us insight into the origins of evil, which is not so much a matter of nature or nurture as a matter of perspective. What evil consists of is a certain stance of the individual vis-à-vis the world.

What that perspective consists of is a profound sense of victimhood, entitlement, and injustice. Gregory’s hatred of Syme and the others on the Council, his desire for the destruction of the entire world, is driven by the belief that others do not suffer:

“Oh, I could forgive you everything, you that rule all mankind, if I could feel for once that you have suffered for one hour a real agony such as I –.” (190).   

In more ways than one we are merely living in a digitally souped-up version of Chesterton’s world. All of us are Thursdays now, turning real people into cartoon villains, when the vast majority of them are simply working their way through some alternative notion of The Good, or are frightened by demons we can’t even perceive.

It’s a comedy of sorts, even if it makes necessities like governing impossible. Even when it’s deadly serious the echoes of Chesterton’s world remain profound. Gregory could easily be an Incel rather than an anarchist; we could replace dynamite with the AR-15, Johann Most’s deadly pamphlet upgraded to a digital manifesto and how-to.

In our world, however, the puppet master isn’t a benevolent trickster and stand-in for God, like Sunday, but those who foster and profit from the play. Perhaps these are the villains we should most seek to fight, for they are making it increasingly difficult to distinguish good from evil in the first place- a nihilism of confusion rather than destruction, a world full of Thursdays chasing themselves.


Pray for a Good Behemoth

Try for a moment to imagine what the world’s political order will look like in the year 2075. Of course prediction is the pastime of fools, and anything imagined today about something so far in the future will by its nature become a sort of comedy. Then again, our century at the three-quarter mark really isn’t that far off when you think about it. 2075 is only as far from today as today is from 1963, the year of JFK’s assassination, which, for those of a certain age, shows it’s not so far off after all.

So indulge me, then, in this flight of fancy- or rather Geoff Mann and Joel Wainwright’s flight of fancy, for what follows immediately comes from their book of political futurology- Climate Leviathan.

By the end of the 21st century one of four global political orders will have come into being, each a product of what will be the most important issue for the foreseeable future- climate change. In a version of the first of these futures the world’s two most powerful countries- the US and China- have entered an alliance that aims to use all the powers of the state to systematically decarbonize the world economy without replacing its capitalist basis.

The two countries use every tool available, from Henry Farrell’s “weaponized interdependence” to military force, to compel even recalcitrant petro-states to abandon fossil fuels, and have embarked on a massive infrastructure program in the developing world to establish renewable energy as the sole source of economic growth, in addition to forcefully preventing the exploitation of resources, such as forests, whose destruction contributes to atmospheric carbon dioxide. In conjunction, both countries actively pursue geoengineering- whether to cool the earth or to suck carbon from the atmosphere- on a scale that mirrors the very fossil fuel economy that drove warming in the first place.

This is just one variant of the scenario Mann and Wainwright call “Climate Leviathan”. It’s a response to climate change that clings to both capitalism and sovereignty, which means that an era of global crisis will require a global sovereign- a world empire, just as Thomas Hobbes believed that civil war required the creation of a national sovereign- a leviathan.

A somewhat more difficult future to imagine from our current juncture would be what Mann and Wainwright call “Climate Mao”. It is a global revolutionary government that has dismantled capitalism and forcefully wrenched at least part of the world economy away from carbon and towards sustainable forms of energy. Climate Mao is not democratic but an extreme form of the “dictatorship of the proletariat” whose rapid transformations would almost certainly require violence to achieve its ends.

Mann and Wainwright freely admit that from our vantage point Climate Mao does not seem very likely, given that the world’s two most powerful states- the US and China- are so deeply capitalist. Even so, China, though it falsely wears the mantle of Maoism, contains a reservoir of revolutionary Marxist adherents and puritans who, given the right circumstances, might find themselves at the helm of a state that has been transformed into a hyper-capitalist global juggernaut.

Following Climate Mao in Mann and Wainwright’s four-quadrant scheme there is Climate Behemoth, to my mind the most likely of their four future scenarios. In a world ruled by Behemoths, sovereignty, rather than going global, has reasserted itself at the national scale. Behemoths might be petro-states whose national interests prevent them from accepting the implications of climate change, or they may accept climate change as a reality but nonetheless refuse to surrender their powers to the kind of global political order necessary to prevent its worst impacts.

Instead of coordinating with other powers, Behemoths build walls, hoping to deflect the human costs- the flood of refugees and environmental crises- onto weaker states, thus protecting their own populations and their standard of living at the expense of other peoples deemed to possess the wrong creed or color. Climate-change-denying Trumpian nationalists are one example of the politics of Behemoth, but such anti-environmentalism on the part of conservatives need not be permanent, and may eventually give way to what Nils Gilman calls “avocado politics”, which uses panic over the climate to fuel the cultural, racial, and security agenda of the right.

It is the last and most hopeful quadrant of Mann and Wainwright’s imagined future politics that at this juncture seems least likely. What they call “Climate X” would entail a democratic, global mass movement combining Marxist and indigenous politics. It would be a revolution that would overthrow capitalism, right global inequality, and establish the basis for a sustainable economy that included the interests of both the human and the non-human world.

From the perspective of a middle class person in the developed world today, none of these futures might seem all that plausible, even if it seems our politics have already gone off the deep end. Yet our lack of imagination likely stems from the fact that we have yet to really absorb just how profoundly climate change will challenge our current political and economic order, not in the far future, but over the next few decades, and lasting as far as the eye can see. Indeed, these changes are not something we need to wait for; they are occurring with ever greater frequency today.

The best glimpse so far of what climate change will look like should we continue on our current trajectory is provided by David Wallace-Wells in his apocalyptically titled book The Uninhabitable Earth. Wallace-Wells first came to widespread attention after a 2017 article for New York magazine with the same title. He was widely attacked in climate change circles for doom mongering and putting people in the “wrong frame” to tackle the problem of anthropogenic warming, where the rhetoric has largely centered on our ability to “fix” things if only we possessed the political will.

The problem with this logic, Wallace-Wells seems to have recognized, is that such optimism is built purely on faith: the belief that technology would be developed and deployed at the scale necessary to reverse the changes we have already caused (along with the naive premise that we would ever possess such a precise reset button in the first place), and the blind faith that human reason and benevolent politics would eventually marshal the resources necessary to stop climate change, even though neither reason nor benevolence is something human history is especially noted for.

All Wallace-Wells does, then, is try to lay out what is likely to happen should the world continue to do exactly what it has been doing since climate change was widely acknowledged as a looming threat to the human future- which is next to nothing. The nightmare he thus presents isn’t a fantasy, but a version of our most likely future. And it is nothing short of a nightmare.

In the coming decades climate change will stress to the point of collapse the systems upon which modern civilization and our current, historically unprecedented lifestyle depend. Take the food system: by 2050 we could be living in a world with 50 percent more people than today who will need to be fed on 50 percent less food, as climate change devastates grain yields. (50) Just to feed ourselves, meat and dairy production will need to be cut in half. (54) And the food we do grow will have become much less nutritious, as the buildup of carbon in the atmosphere, counter-intuitively, will degrade the nutrient profile of the world’s most important crops. (57)

Within the same time frame, water- both too much and too little of it- will transform the world as we know it, necessitating that whole cities be moved. In less than two decades much of the world’s internet infrastructure could be inundated by sea level rise. (62) Major cities such as Shanghai, Hong Kong, Mumbai, and Kolkata might be permanently flooded. (64) Tens of millions of people globally will be affected by inland flooding. Much of the world will suffer near permanent states of drought, with water scarcity likely to become an increasing source of civil disruption and interstate conflict.

As we exit what has been for humanity a climatic Goldilocks period, the Holocene, and enter the Anthropocene, driven by the impact of human built systems, nature will have her revenge, like Godzilla appearing from the deep. By 2070 Asian megacities could lose up to $75 trillion from the impact of ever more powerful storms. (82) Forest fires such as the ones that recently ravaged California could become up to 60 times more powerful. Tropical diseases will spread farther and farther north.

Heat waves in certain regions will make it unsafe for human beings to spend any time outdoors, beyond the reach of air conditioning. The Hajj to Mecca might be made impossible. Much of the ocean’s life will have suffocated to death.

It’s not hard to imagine truly radical politics emerging in such a scenario, most of them quite terrifying: a racist and Islamophobic right wing emboldened by the flood of refugees, or a nihilistic left driven to Kaczynskite terrorism in an attempt to disable an industrial society rushing towards ecocide.

Indeed, any politics capable of addressing climate change will need to be radical from the standpoint of today, given the sheer scale of the changes that would be necessary to decarbonize the world economy. As Wallace-Wells puts it:

“The scale of the technological transformation required dwarfs any achievement that has emerged from Silicon Valley- in fact dwarfs every technological revolution ever engineered in human history, including electricity and telecommunications and even the invention of agriculture ten thousand years ago. It dwarfs them by definition, because it contains all of them- every single one needs to be replaced at the root, since every single one breathes carbon, like a ventilator.”(180)

The problem with the wholesale replacement of our current fossil fuel based economy with a carbon neutral one, thus, has much less to do with technological solutions (which in areas such as transportation and electricity are both abundant and increasingly cheap) than with the political and economic crisis it entails- the need to uproot vast numbers of people from whole sectors of the economy, and with them the basis of material well-being within entire countries.

In a recent interview with Wallace-Wells, the environmental scientist Vaclav Smil bluntly described the difficulty of our position this way:

This is what I’m telling you. It cannot be simply done without wrenchingly, massively centering our economy. It cannot be done on the margin only. It has to be done on a very large scale to have a large-scale global effect. And it has to be done in China, and in India, eventually, not only among the rich countries. It is the scale of the problem which is most important. We have many known solutions, we have many technical means how to fix things, we are pretty inventive, and we can come up with more, and better things yet. But it’s a thing to deploy them in time and on the scale needed. That’s our major problem, scale.

To do a dent [sic] in this global need for water, gas, oil, electricity, carbon emissions, whatever — to make that dent, you are talking about billions and billions of tons of everything. Let’s say if you want to get rid of coal, right? We are mining now more than 7 billion tons of coal. So, you want to lower the coal consumption by half, you have to cut down close to 4 billion tons of coal. More than 4 billion tons of oil. You want to get rid of oil and replace it with natural gas? Fine and dandy, but you have to get rid of more than 2 billion tons of oil.

These are transformations on a billion-ton scale, globally: (A) They cannot be done alone by next Monday; (B) they will be wrenching with huge economic consequences; and (C), what we can do, and the Chinese can do, the Indians can not. The Indians published a new paper a few months ago saying, “Coal will be our No. 1 fuel until 2047.”

And this pessimism doesn’t even take into account what will happen within the large number of states whose economies are almost completely reliant on fossil fuel extraction- highly fragile states such as Russia, Saudi Arabia, and Iran. Among the richer economies, who exactly will be responsible for plugging the fiscal hole that opens up when the Canadian tar sands are closed, or will take charge of dismantling the fracking industry that has overnight transformed the US into the world’s leading oil exporter?

It’s when you really try to get a grip on the sheer scale of this transformation, a transformation that I very much believe to be necessary, that Mann and Wainwright’s political predictions move from the implausible to the seemingly inevitable. Yet if I were a betting man I’d sadly have to put all of my money on Behemoth.

I say this despite the fact that the climate left is politically in the best position it has ever been, and as the future shadows of what might become Climate X- in the form of massive global protests and an increasingly activist youth- have become visible.

This rising chorus to save the planet has intersected with a very real and justified revolt against inequality, such that the two causes have become confused. It is always easiest to have one villain rather than several, and for this reason it is now de rigueur among the left to blame climate change on the 20 or so multinational companies that dominate the global economy and concentrate its wealth. Yet while this analysis is at root true, it skillfully elides the fact that it is only through these entities that the stuff of middle class life is obtained. While socializing or heavily taxing these corporations would certainly move us in the direction of economic justice, and would help pay for a transition away from our carbon based economy, it does nothing to address the fundamental issue driving the environmental crisis and its political constraints in the first place: the need to support mass consumption, and the entirely understandable instinct of people not to allow that consumption to be taken away, even, and perhaps especially, in places where middle class societies are only now taking root.

It’s this middle class panic over the loss of current or expected levels of consumption that I think is the more likely lodestar for right-wing movements than Gilman’s “avocado politics”. Gilman fears the right will move from outright denial to full scale embrace of the frightening reality of climate change, and will use this fear to justify its long-held policy agenda of restricting immigration and even further empowering the security state.

Given the fact that much of the right is funded by fossil fuel interests, at least in the United States, it’s hard to see how this could happen. And unlike what Trump did in the 2016 election, which was to mobilize a latent xenophobia that had rightly been excluded from American politics, there doesn’t seem to be a large population of people waiting to be mobilized if only right-wing cultural policies and green environmental concerns were brought into alignment.

Rather, the right has been able to appear populist to the extent it has presented itself as the defender of the lifestyle of a middle class that fears being crushed in the vise of inequality and the state. This is the lesson to be gleaned from mass protests such as that of the Yellow Vests in France, which were sparked by moves by the state against middle class consumption for the purpose of environmental protection.

Gilman is certainly correct when he points out that a kind of merger between right-wing and green politics is not without historical precedent, in that Nazism exhibited a kind of proto-environmentalism. But what this misses is that Nazism was much more a kind of Malthusian consumerism. Hitler, with his re-conceptualization of Lebensraum as both living space and “lifestyle”, essentially claimed a right to starve rival peoples if that was the only way German consumers could enjoy the same standard of living as their American counterparts.

Severing this nearly 50-year-old link between consumption and mass politics, the democratic variant of which has been called “the consumer republic”, is likely to be a long process, requiring precisely the kind of patience we can seemingly no longer afford. And while I have no idea how to untangle this Gordian knot, I am almost certain it is our primary problem.

What we are experiencing is a strange reversal of normal time scales, where geological forces are running faster than political ones, where human systems appear inert and immovable while the seemingly eternally solid earth melts away at speeds we can see. Thus we might gain insight by acknowledging our inertia rather than dreaming it away. That means we’re unlikely to face unprecedented challenges by creating hitherto unseen political forms- good or ill- but will instead confront all the horrors Wallace-Wells describes while still clinging to political institutions and an international system that, in more ways than not, resemble those of today.

On the geopolitical side that means a multipolar world of states rather than some version of global empire, whether wicked or benign- in other words, a world of Behemoths. It’s a fate whose more benevolent possibilities Mann and Wainwright for some reason elide. We will likely see more Trumpian Behemoths obsessed with defending the indefensible and even seeking to extend the reach of carbon into the economy. And it’s possible that Europe will see Behemoths driven by the kind of avocado politics predicted by Gilman. Yet one can at least hope that we’ll witness the appearance of “good Behemoths” as well.

A good Behemoth would be a state whose politics had become sufficiently green that almost its entire domestic and international agenda would be driven by environmental concerns. It would be a state (and eventually, if successful, an alliance of like-minded states) that used all of its sovereign powers to respond to climate change and its related environmental crises. It would extend its influence globally, both by aiding foreign citizens’ lawfare against climate-destroying corporate interests and by supporting grassroots indigenous environmental movements. It would fund international research into alternatives to carbon, invest in its own, and conditionally in other states’, resilience in the face of climate events, and structure its trade based on carbon footprint. It would above all seek to present the world with a model of what a decarbonized middle class society could look like, in the hope of convincing middle classes elsewhere of its feasibility.

In other words, had the state not existed, the climate crisis would have forced us to invent it. For it is only through something like a state that collective priorities can be transformed into binding policy and law. A world state may not be on the horizon, but just as in our current geopolitical world, size does matter. Given the realities of geopolitics, the bigger and more powerful such a good Behemoth is, the more global will be its impact. Absent the arrival of good Behemoths, we may have little chance of avoiding crises on a scale that would make them impossible.


Capitalism, Communism and the End of Nature

When it comes to its understanding of nature, capitalism is nothing short of schizophrenic. Right from its beginnings, political economists have argued that of all the possible economic systems, none is more in tune with the way the world actually is than the capitalist system, where human beings are free to truck and barter to their hearts’ content.

One could begin here in 1705 with Mandeville’s infamous The Fable of the Bees: or, Private Vices, Publick Benefits, where the subtitle gives the whole plot away. The idea that the world has been set up so that individual greed in the aggregate brings untold benefits to the whole was maybe the earliest version of “the balance of nature”, a fairy tale constructed from the observations of an industrious insect. And we’ve gotten to the point where people are not just arguing that markets are natural, but that nature itself is a market.

Weirdly enough, these ideas of capitalism’s naturalness run parallel to another idea at the root of its economics- that nature is something external to the capitalist system, a mere source of inputs at best and at worst an easily ignorable dumping ground for its unavoidable material waste.

It is this second (and in his view unique) attitude towards nature that is the main subject of Jason Moore’s book Capitalism in the Web of Life: Ecology and the Accumulation of Capital. In his view capitalism inherited a dualistic idea of nature from the Cartesians, in which society and nature are separate, and all of us, including environmentalists, have been caught in this intellectual trap ever since.

Moore argues that all economic systems are natural in the sense that they are the way in which human beings socially plug into the flows of the larger biosphere in which they are embedded. All such social systems aim to reshape nature in their own image, and while at first glance this might seem a highly unnatural thing to do, on reflection one realizes that not just humans but probably most animals do something similar in kind, if not in degree.

What he is describing is something ecologists call niche construction, and it can be found throughout the living world. Beavers are famous for molding their environment in this way, and before the arrival of humans in the Americas (and perhaps even after), they had a profound impact on the natural ecology. Yet beavers are nothing compared to microbes, which have radically reshaped the earth's oceans and atmosphere, sometimes to disastrous effect.

For Moore what makes capitalism distinct is that it is a system that can only function when it has access to “cheap” nature, that is, cheap labor, cheap food, cheap energy, and cheap resources. Capitalism’s expertise lies in scanning the globe and constantly finding new sources of cheap inputs to exploit. At the heart of every wave of wealth accumulation, somewhere lies a hidden act of expropriation.

What I find compelling here is that Moore has managed to clearly link the concerns of environmental and social justice, while also connecting different realms of social justice into one overarching framework that includes the nonhuman world. Capitalism hasn't merely lived off the underpaid labor of child and industrial workers, or off imperialism against the global south, or off slavery and sharecropping, but also off the unpaid work of women and even the family itself.

Like all social systems, Moore thinks capitalism has an end date, and though I agree with him on that point, I think the death of capitalism is nowhere in sight, and this is the case even if I accept his proposed cause of mortality.

What Capitalism in the Web of Life argues is that we're coming to the end of the age of the "Four Cheaps", and that without them capitalism will be unable to function. As the population ages cheap labor disappears; the combined effects of a plateau in crop yields and climate change mean the end of cheap food; we've reached the end of the line when it comes to cheap energy in the form of fossil fuels; and resources are becoming ever scarcer and therefore more expensive to extract. However, capitalism is a wily beast, and if it resembles anything in nature it is evolution, with its seemingly infinite adaptability. On the verge of running out of their current cheaps, capitalists are exploiting yet more new ones.

In terms of labor we have not so much automation as partial automation as a route to turning the customers themselves into part of the workforce. A large Walmart might have just two or three cashiers, leaving those unwilling to waste an afternoon in line to check themselves out. Turning customers into workers is the essence of surveillance capitalism. The aging of the workforce in developed countries may or may not lead to increasing levels of automation, but for now our robots can be driven by remote workers in the developing world, our AI a magic trick performed by ghost workers hidden from sight. Many of these workers will eventually be drawn from a global youth hungry for wages yet prevented from being able to move; Africa alone is set to have hundreds of millions of them. Frighteningly enough, capitalism's new frontier seems to be inside the human body: our genes, our thoughts and emotions, and the body itself.

The end of cheap energy doesn't seem to be in the current cards either. The arrival of green energy is supplementing rather than replacing fossil fuels; fracking has turned the US into the world's largest producer of oil; and dirty coal remains cheap and plentiful, in decline largely because fracked natural gas has gotten so cheap so fast. When it comes to cheap resources, we now have serious talk about mining the ocean floor, or showering the earth with the mineral wealth derived from asteroids and the moon.

The one place where continued cheapness seems legitimately threatened is food, as yields plateau in the face of climate change and a still-rising global population, and as the hoped-for second Green Revolution via biotech continues to fail to arrive. I would include cheap water with that. But then again capitalism might just end up proposing we turn bugs into food.

A question I kept asking myself over and over as I read Capitalism in the Web of Life was whether capitalism was really to blame for our environmental crisis, or whether it was something else. For while I hate that system as much as anyone, I can't help wondering if we lost something important when we abandoned the concept of modernization. It was a détente-era concept meant to explain the transition from agricultural to industrial societies, from economies based on limited commerce to consumer societies driven by the needs and wants of mass society.

Modernization theory, as I understand it, was ideologically ecumenical. Whether these transitions were driven from the top down, as in Soviet communism, or from the bottom up, as in US-style capitalism, wasn't as much a concern as the universal nature of these developments. We jettisoned modernization theory after the Cold War. The capitalists had won; it looked like there was only one way to truly modernize after all.

What was lost in abandoning modernization was the ability to see not just the difference between capitalism and communism, but their similarities. Creating societies based on ever rising production and mass consumption was bound to have a huge impact on the larger biosphere. Communist societies pursuing these goals were no better, and sometimes much worse, than capitalist ones.

In line with what I’ve said previously, what makes capitalism worse for nature than its alternatives is not so much the problems it causes but that it makes addressing these problems so wickedly difficult. If we really are facing the end of the Four Cheaps it will be because capitalism in its current neoliberal form proves itself incapable of making the kinds of systemic changes that system’s very survival requires. Balkanized global capital is probably not up to the task of generating what would amount to a second industrial revolution in terms of energy use, food production, resource extraction and a dozen other things. In that case our only hope will lie in planning from above, otherwise we’ll face not just the end of this despicable form of economy we call capitalism, but the end of a nature compatible with human flourishing as well.

The Wicked Problem

“She was asked what she had learned from the Holocaust, and she said that 10 percent of any population is cruel, no matter what, and that 10 percent is merciful, no matter what, and that the remaining 80 percent could be moved in either direction.”  Kurt Vonnegut

It seems certain that human beings need stories to live, and need to share some of these stories in order to coexist with one another. In our postmodern era these shared stories, meta-narratives, are passé; the voluntary suspension of disbelief has become impossible; the wizard behind the curtain has been unmasked. Today's apparent true believers are instead almost cartoonish versions of the adherents of the fanatical belief systems, political ideologies, and unquestionable cultural assumptions and prejudices of past eras. Not even their most vocal adherents really believe in them, except, perhaps, for those who put their faith in conspiracy theories, which at root are little but a panicked, even deranged, search for answers after realizing the world is a scam.

Yet just because we live in an age when all stories have an aura of fantasy doesn’t mean we’ve stopped making them, or even stopped looking for an overarching story that might explain to us our predicament and provide us with some guidance. The realization that the map is never the territory doesn’t imply that maps are useless, only that every map demands interrogation during its use and as a prerequisite to our trust.   

I recently had the pleasure of picking up a fresh version of one of these meta-narratives or maps, a book by the astrobiologist Adam Frank called Light of the Stars: Alien Worlds and the Fate of the Earth. Frank's viewpoint, I think, is a somewhat common one among secular, environmentally conscious people. He thinks that we as a species have been going through the equivalent of adolescence: unless we learn to use our newly developed capacities wisely, we're doomed to a bad end.

The difference between Frank and others on this score is that he wants us to see this story in a cosmic context. We are, he argues, very unlikely to be the first species in the universe to experience growing up in this sense. Recent discoveries showing the ubiquity of planets, for Frank, put the odds in favor of life, and even intelligence and technological civilization, having developed throughout the universe many times before.

Using what we already know about the earth and the thermodynamic costs of energy use, Frank is able to create sophisticated mathematical models showing how technological civilizations can rise only to collapse due to the impact of their energy use on their planetary environment, and why any technological civilization that survives will need to have found a way to align its system of energy use with the boundaries of its biosphere.
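The qualitative behavior Frank describes can be sketched with a toy pair of coupled equations (my own illustrative construction, not Frank's actual model): a population grows logistically toward a carrying capacity that its own energy consumption degrades, so that heavy consumption produces overshoot and collapse while light consumption plateaus.

```python
def simulate(growth=0.05, consumption=0.002, recovery=0.001,
             steps=2000, dt=1.0):
    """Toy population/planet model: N grows logistically toward a
    carrying capacity K, while per-capita energy use erodes K."""
    N, K = 1.0, 100.0
    history = []
    for _ in range(steps):
        dN = growth * N * (1 - N / K)            # logistic growth
        dK = recovery * (100.0 - K) - consumption * N  # planet degrades under use
        N = max(N + dN * dt, 0.0)
        K = max(K + dK * dt, 1e-6)
        history.append(N)
    return history

high = simulate(consumption=0.004)    # heavy energy use: overshoot, then decline
low = simulate(consumption=0.0005)    # light energy use: sustainable plateau
```

With heavy consumption the population peaks well above the level it can sustain and then falls; with light consumption it settles at a higher long-run level, which is the whole of Frank's argument in miniature.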

Frank’s is an interesting and certainly humane viewpoint, yet it leads to all sorts of questions. Does the idea of adolescence and maturation even make any sense when applied to our species? If anything, doesn’t history show the development of civilization to be cyclical rather than progressive? To get a real sense of our predicament, wouldn’t it be better to turn to the actual history of human societies rather than look to the fates of purely imagined alien civilizations?

Indeed, for a book on how our technological civilization can avoid what is an otherwise inevitable collapse Light of the Stars is surprisingly thin on the rise and fall of real human societies over the course of history. To the extent such history plays any role at all in Frank’s model it focuses on the kinds of outright collapse seen in places like Easter Island, which have recently become the focus of historians and anthropologists such as Jared Diamond.         

By focusing on the binary division between extinction and redemption, Frank’s is just one more voice urging us to “immanentize the eschaton”. But one can ask whether what we face is less a juncture between utopian and dystopian outcomes than something like the rolling apocalypse of William Gibson’s “Jackpot”. That is, not the “utopia or oblivion” version of alternative futures that probably made sense during the mutually-assured-destruction madness of the Cold War, but the perhaps permanent end of the golden-age climatic, technological, and economic conditions of the past, as bets on the human future placed long ago draw dead.

Whiggish tales of perpetual progress were popular a few years ago, but have run into hard times of late, and for good reasons. Instead, we have the return of cyclical narratives, stories of rise and fall. The Age of Trump lends itself to comparison with the fall of Rome: a declining empire with a vain, corrupt, incompetent, and increasingly deranged leadership. Trump is like the love child of Nero and Caligula, as someone joked on Twitter, which is both funny and disturbing because it’s true.

Personally I’m much more inclined towards these cyclical versions of history than linear ones, though admittedly this is some sort of deep-seated cognitive bias, for I tend to find cyclical cosmologies more intriguing as well. Unfortunately, there don’t seem to be any alternatives besides a history with a clear beginning, middle, and end, and one that circles back upon itself. It may be a limit of culture, or of human cognition, rather than a true reflection of the world, but how to see beyond it in a way that doesn’t deny time and change entirely I cannot fathom.

These days it’s hard to mention cyclical history without being confused for a fanboy of Oswald Spengler and getting spammed by Jordan Peterson with invitations to join the Intellectual Dark Web. Nevertheless, there are good (and strange to say), progressive versions of such histories if you know where to find them.      

A recent example of these is Bas van Bavel’s The Invisible Hand? How Market Economies Have Emerged and Declined Since AD 500. In a kind of modern version of the 14th-century Islamic scholar Ibn Khaldun’s theory of societies’ passage from barbarism to decadence and back to barbarism, van Bavel traces the way in which prosperous economies have time and time again been undone by elite capture. Every thriving economy eventually gives way to a revolt of the winners, who use their wealth to influence politics in their favor in an effort to institutionalize their position. Eventually, this has the effect of undermining the very efficiency that had allowed the economy to prosper in the first place.

Though largely focused on the distant past, van Bavel is clearly saying something about our own post-Keynesian era, where plutocrats and predators use the state as a means of pursuing their own interests. And like most cyclical versions of history, his view of our capacity to break free from this vicious cycle is deeply pessimistic.

“None of the different types of states or government systems in the long run was able to sustain or protect the relatively broad distribution of property and power found in these societies that became dominated by factor markets, for instance by devising redistributive mechanisms. Rather, in all these cases, the state increasingly came under the influence of those who benefited most from the market system and would resist redistribution.” (271)

How this elite capture of our politics will intersect with global climate change is anybody’s guess. Right now it doesn’t look good: as long as a large portion of this elite has its wealth tied up in the carbon economy, or worse, thinks its wealth somehow gives it an escape hatch from the earth’s environmental crisis, the move towards decarbonization will continue to be too little and too late.

One downside to cyclical theories of history, for me at least, is that far too often they become reduced to virtue politics. In a sort of inversion of the way old-school liberals like Steven Pinker see moderns as morally superior to people in the past, old-school conservatives, who by their nature are in thrall to their ancestors, tend to view those who came before as better versions of ourselves.

Though it’s becoming increasingly difficult to argue that the time in which we live hasn’t produced a greater share of despicable characters than in times past, on reflection that’s very unlikely to be the case. What separates us from our ancestors isn’t their superior virtue, but the degree of autonomy and interdependence that makes such virtues necessary in the first place. A lack of autonomy along with the fact that each of us is now interchangeable with another of similar skills (of which there are many) is a reflection of our society’s complexity, and for this reason it’s the stories told about the unsustainability of this complexity that I find the most compelling.  

The granddaddy of this idea, that it is unsustainable complexity which makes the fall of societies inevitable, was certainly Joseph Tainter and his 1988 book The Collapse of Complex Societies. Tainter thought that societies begin to decline once they reach the point where increasing complexity yields ever thinner marginal returns. Though “fall” isn’t quite the right word here: Tainter broke from the moralism that had colored prior histories of rise and fall. Instead, he viewed the move towards simplicity we characterize as decline as something natural, even good, just another stage in the life cycle of societies.

More recent works on decline due to complexity are perhaps not as non-judgmental as Tainter’s, but have something of his spirit nonetheless. There is James Bridle’s excellent book New Dark Age, and Samo Burja’s idea of the importance of what he calls “intellectual dark matter” and the dangers of its decay. Outside of historians and social scientists, the video game developer Jonathan Blow has done some important work on the need to remove complexity from bloated systems, while the programmer Casey Muratori has been arguing for the need to simplify software.

To return to Adam Frank’s book: he’s right that the stories we tell ourselves are extremely important, in that they serve as a guide to our actions. The dilemma is that we can never be sure if we’re telling ourselves the right one. The problem with either/or stories that focus on opposing outcomes, like human extinction or technologically enabled harmony with the biosphere, is that they’re likely focusing on the tails of the graph, whereas the meat lies in the middle, with all the scenarios between the two.

Within that graph on the negative side, though far short of species extinction, lies the possibility that we’ve reached a point of no return when it comes to climate change, not in terms of the need to decarbonize, but in Roy Scranton’s sense of having set in motion feedback loops which we will not be able to stop and that will make human civilization as currently constituted impossible. Also here would be found the possibility that we’ve reached a plateau of sustainable complexity for a civilization. 

These two possibilities might be correlated. The tangled complexity of the carbon economy, including its political aspects, makes addressing climate change extremely difficult. To replace fossil fuels requires not just a new energy system, but new ways to grow our food, produce our chemicals, build our roads. It requires the deconstruction of vast industries that possess a huge amount of political power during precisely the time when wealth has seized control of politics, and the willful surrender of power and wealth by hydrocarbon states, including now, the most powerful country on the planet.  

If being faced with a problem of seemingly intractable complexity is the story we tell ourselves, then we should probably start preparing for scenarios in which we fail to crack the code before the bomb goes off. That would mean planning for a humane retreat by simplifying and localizing whatever can be, increasing our capacity to aid one another across borders, including ways to absorb refugees fleeing deteriorating conditions, and making preparations to shorten as much as possible any period of intellectual and scientific darkness and suffering that would occur in conditions of systemic breakdown.

Perhaps the most important story Frank provides is a way of getting ourselves out of an older one. Almost since the dawn of human space exploration, science-fiction writers, and figures influenced by science-fiction such as Elon Musk, have been obsessed with the Kardashev scale: the idea that technological civilizations can be grouped into types, where the lowest taps the full energy of its home planet, the next the energy of its sun, and the last all the energy of its galaxy. It’s an idea that basically extends the industrial revolution into outer space, and Frank will have none of it. A mature civilization, in his view, wouldn’t use all the energy of its biosphere, because to do so would leave it without a world in which to live. Instead, what he calls a Class 5 civilization would maximize the efficient use of energy for both itself and the rest of the living world. It’s an end state rather than a beginning, though perhaps we might have reached that destination without the painful, and increasingly unlikely, transition we will now need to make in order to do so.
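Carl Sagan’s continuous interpolation of the Kardashev scale (a standard extension, not something Frank himself derives) makes the logic explicit. Taking $P$ as a civilization’s total power use in watts:

```latex
K = \frac{\log_{10} P - 6}{10},
\qquad
P \approx 10^{16}\,\mathrm{W} \Rightarrow K \approx 1,\quad
P \approx 10^{26}\,\mathrm{W} \Rightarrow K \approx 2,\quad
P \approx 10^{36}\,\mathrm{W} \Rightarrow K \approx 3
```

The formula has no ceiling, which is exactly Frank’s complaint: it imagines energy use growing without limit, with no term anywhere for the biosphere that has to absorb it.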

There’s an interesting section in Light of the Stars where Frank discusses the possible energy paths to modernization. He doesn’t just list fossil fuels, but also hydro, wind, solar, and nuclear as possible sources of energy a civilization could tap to become technological. I might have once wondered whether an industrial revolution was even possible had fossil fuels not allowed us to take the first step, but I didn’t need to wonder. The answer was yes, a green industrial revolution was at least possible. In fact, it almost happened.

No book has changed my understanding of the industrial revolution more than Andreas Malm’s Fossil Capital. There I learned that the early days of industrialization in the United Kingdom consisted of a battle over whether water or coal would power the dawning age of machines, a battle water only just barely lost. What mattered in coal’s victory over water, according to Malm’s story, wasn’t so much coal’s superiority as a fuel source as it was the political economy of fossil fuels. The distributed nature of the rivers needed to power water wheels left employers at a disadvantage to labor, whereas coal allowed factories to be concentrated in one place, cities, where labor was easy to find and thus easily dismissed. This is quite the opposite of what happened in the mines themselves, where concentration gave the working class access to vital choke points and thus empowered labor, a situation that eventually led to coal being supplanted by oil, a form of energy impervious to national strikes.

But we probably shouldn’t take the idea of a green industrial revolution all that far. Water might have been capable of providing the same amount of energy for stationary machines as steam derived from burning coal, but it would not have had the same potential when it came to locomotion or the production of steel. At least not within the constraints of 19th-century technology.

In another one of those strange, and all too common, mountains emerging from mole-hills moments of human history, it may have been a simple case of greed that birthed the industrial revolution. The greed of owners wanting to capture the maximum income from their workers drove them to choose coal as their source of power, a choice which soon birthed a whole, and otherwise unlikely, infrastructure for steel and the world shrinking machines built from it.            

In other words, energy transitions are political and moral, and have always been so. In a way, looking to hypothetical civilizations in the cosmos that may have succeeded or failed in these transitions lends itself to ignoring the questions of values and politics at the core of our dilemma, and thus fails to provide the kind of map to the future Frank was hoping for. He assumes an already politically empowered “we” exists, when in fact it is something that needs to be built in light of the very real divisions between countries and classes, the old and the young, humans and non-humans, and even between those living in the present and those yet to be born.

The outcome of such a conflict isn’t really a matter of our species maturing, for history likely has no such telos, no set terminus or promised land to arrive at, only a perpetual rise and fall. Nonetheless, one might consider it to be a story, and there really are villains and heroes in the tale. Whether that story will ultimately be deemed a triumph, a tragedy, or more likely something in between, is a matter of which 10 percent those of us among the swayable 80 ultimately side with.

Am I a machine?

A question I’ve been obsessing over lately goes something like this: Is life a form of computation, and if so, is this thought somehow dangerous?

Of course, drawing an analogy between life and the most powerful and complex tool human beings have so far invented wouldn’t be a historical first. There was a time in which people compared living organisms to clocks, and then to steam engines, and then, perhaps surprisingly, to telephone networks.

Thoughtful critics, such as the roboticist Rodney Brooks, think this computational metaphor has already exhausted its usefulness, and that its overuse is proving detrimental to scientific understanding. In his view scientists and philosophers should jettison the attempt to see everything in terms of computation and information processing, which has blinded them to other, more productive metaphors hidden by the glare of a galloping Moore’s Law. Perhaps today it’s just the computer’s turn, and someday in the future we’ll have an even more advanced tool that will again seem the perfect model for the complexity of living beings. Or maybe not.

If instead we have indeed reached peak metaphor, it will be because with computation we really have discovered a tool that doesn’t just resemble life in terms of features, but reveals something deep about the living world, because it allows us for the first time to understand life as it actually is. It would be as if we’ve managed to prove Giambattista Vico’s claim, made at the start of the scientific revolution, that “verum et factum convertuntur” (the true and the made are convertible) strangely right after all these years: humanity can never know the natural world, only what we ourselves have made. Maybe we will finally understand life and intelligence because we are now able to recreate a version of it in a different substrate (not to mention engineering it in the lab). We will know life because we are at last able to make it.

But let me start with the broader story…

Whether we’re blinded by the power of our latest uber-tool, or on the verge of a revolution in scientific understanding, might matter less than the unifying power of a universal metaphor. And I would argue that science needs just such a unifying metaphor. It needs one if it is to give us a vision of a rationally comprehensible world. It needs one for the purpose of education, along with public understanding and engagement. Above all, a unifying metaphor is needed today as a ballast against over-specialization, which traps the practitioners of the various branches of science (including the human sciences) in silos, unable to communicate with one another and thereby formulate a reunified picture of a world that science itself has artificially divided into fiefdoms as an essential first step towards understanding it. And of all the metaphors we have imagined, computation really does appear uniquely fruitful and revelatory, and not just in biology but across multiple and radically different domains: a possible skeleton key for problems that have frustrated scientific understanding for decades.

One place where the computation/information analogy has grown over the past decades is in the area of fundamental physics, as an increasing number in the field have begun to borrow concepts from computer science in the hopes of bridging the gap between general relativity and quantum mechanics.

This informational turn in physics can perhaps be traced back to the Israeli physicist Jacob Bekenstein, who way back in 1972 proposed what became known as the “Bekenstein bound”: an upper limit to the entropy, and thus the information, that can exist in a finite region of space. Pack information any tighter than about 10⁶⁹ bits per square meter and that region will collapse to form a black hole. Physics, it seems, puts hard limits on our potential computational abilities (limits that are a long way off), just as it places hard limits on our potential speed. What Bekenstein showed was that thinking of physics in terms of computation helped reveal something deeper not just about the physical world but about the nature of computation itself.
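For readers who want the numbers: the bound for a region of radius $R$ containing energy $E$, together with the related black-hole (holographic) limit that yields the bits-per-area figure, can be written as follows (a standard statement, with $k_B$ Boltzmann’s constant and $l_P$ the Planck length):

```latex
S \;\le\; \frac{2\pi k_B R E}{\hbar c},
\qquad
I_{\max} \;=\; \frac{A}{4\,l_P^{2}\ln 2} \;\approx\; 1.4\times 10^{69}\ \text{bits per square meter}
```

The striking thing is that the maximum information scales with the region’s surface area rather than its volume, a clue that later fed directly into the holographic principle.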

This recasting of physics in terms of computation, often called digital physics, really got a boost from an essay by the physicist John Wheeler in 1989 titled “It from Bit.” It’s probably safe to say that no one has ever really understood the enigmatic Wheeler, who, if one weren’t aware that he was one of the true geniuses of 20th-century physics, might be confused for a mystic like Fritjof Capra, or, heaven forbid, a woo-woo spouting conman (here’s looking at you, Deepak Chopra). The key idea of “It from Bit” is captured in this quote from his aforementioned essay, a quote that also captures something of Wheeler’s koan-like style:

“It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.”

What distinguishes digital physics today from Wheeler’s late 20th-century version is above all the fact that we live in a time when quantum computers have not only been given a solid theoretical basis, but have been practically demonstrated. A tool born from an almost offhand observation by the brilliant Richard Feynman, who in the 1980s declared what should have been obvious: “Nature isn’t classical . . . and if you want to make a simulation of Nature, you’d better make it quantum mechanical…”

What Feynman could not have guessed was that physics would make progress (to this point, at least) not from applying quantum computers to the problems of physics, so much as from applying ideas about how quantum computation might work to the physical world itself. Leaps of imagination, such as seeing space itself as a kind of quantum error correcting code, reformulating space-time and entropy in terms of algorithmic complexity or compression, or explaining how, as with Schrödinger’s famous cat, superpositions existing at the quantum level break down for macroscopic objects as a consequence of computational complexity, the Schrödinger equation remaining effectively unsolvable for large objects so long as P ≠ NP.

There’s something new here, but perhaps also something very old, an echo of Empedocles’ vision of a world formed out of the eternal conflict between Love and Strife. If one could claim that for the ancient Greek philosopher life could only exist at the mid-point between total union and complete disunion, then we might assert that life can only exist in a universe with room for entropy to grow, neither curled up too small nor too dispersed for any structures to congregate: a causal diamond. In other words, life can only exist in a universe whose structure is computable.

Of course, one shouldn’t make the leap from claiming that everything is somehow explainable from the point of view of computation to claiming that “a rock implements every finite state automaton”, which, as David Chalmers pointed out, is an observation verging on the vacuous. We are not, as some in Silicon Valley would have it, living in a simulation, so much as in a world that emerges from a deep and continual computation of itself. In this view one needs some idea of a layered structure to nature’s computations, whereby simple computations performed at a lower level open up the space for more complex computations on a higher stage.

Digital physics, however, is mainly theoretical at this point and not without its vocal critics. In the end it may prove just another cul-de-sac rather than a viable road to the unification of physics. Only time will tell, but what may bolster it is the active development of quantum computers. Quantum theory and quantum technology, one can hope, might find themselves locked in an iterative process, each aiding the development of the other, in the same way that the development of steam engines propelled the development of thermodynamics, from which came yet more efficient steam engines in the 19th century. Above all, quantum computers may help physicists sort out the rival interpretations of what quantum mechanics actually means, with quantum mechanics at the end of the day perhaps best understood as a theory of information. A point made brilliantly by the science writer Philip Ball in his book Beyond Weird.

There are thus good reasons for applying the computational metaphor to the realm of physics, but what about where it really matters for my opening question, that is, when it comes to life? To begin, what is the point of connection between computation in merely physical systems and in those we deem alive, and how do they differ?

By far the best answer to these questions that I’ve found is the case made by John E. Mayfield in his book The Engine of Complexity: Evolution as Computation. Mayfield views both physics and biology in terms of the computation of functions. What distinguishes historical processes such as life, evolution, culture and technology from mere physical systems is their ability to pull specific structure from the much more general space of the physically possible. A common screwdriver, as an example, is at the end of the day just a particular arrangement of atoms. It’s a configuration that, while completely consistent with the laws of physics, is also highly improbable, or, as the physicist Paul Davies put it in a more recent book on the same topic:

“The key distinction is that life brings about processes that are not only unlikely but impossible in any other way.”

Yet the existence of that same screwdriver is trivially understandable when explained with reference to agency. And this is just what Mayfield argues evolution and intelligence do: they create improbable yet possible structures in light of their own needs.

Fans of Richard Dawkins or students of scientific history might recognize an echo here of William Paley’s famous argument for design. Stumble upon a rock and its structure calls out for little explanation; a pocket watch, with its intricate internal parts, gives clear evidence of design. Paley was nothing like today’s creationists: he had both a deep appreciation for and understanding of biology, and was an amazing writer to boot. The problem is he was wrong. What Darwin showed was that a designer wasn’t required for even the most complex of structures; random mutation plus selection gives us what looks like miracles over sufficiently long periods of time.

Or at least evolution kind of does. The issue that most challenges the Theory of Natural Selection is the overwhelming size of the search space. Given that most random mutations are either useless or even detrimental, how does evolution find not only structure, but the right structure, and even highly complex structure at that? It’s hard to see how monkeys banging away on typewriters for the entire age of the universe would get you the first pages of Hamlet, let alone Shakespeare himself.
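The standard answer is that selection is cumulative rather than a single lucky draw: each generation keeps whatever random variation happens to be closest to working. Dawkins’ well-known “weasel” thought experiment makes the point; here is a minimal sketch of it (the population size, mutation rate, and seed are just illustrative choices, not anyone’s canonical parameters):

```python
import random

def weasel(target="METHINKS IT IS LIKE A WEASEL", pop=100, rate=0.04, seed=1):
    """Cumulative selection: random mutation plus keep-the-best each generation."""
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
    rng = random.Random(seed)
    parent = "".join(rng.choice(alphabet) for _ in target)  # pure monkey-typing start
    score = lambda s: sum(a == b for a, b in zip(s, target))
    generation = 0
    while parent != target and generation < 10_000:
        generation += 1
        # each child is a copy of the parent with a few randomly mutated letters
        children = ["".join(rng.choice(alphabet) if rng.random() < rate else c
                            for c in parent) for _ in range(pop)]
        # keep the parent in the pool so the best match never regresses
        parent = max(children + [parent], key=score)
    return parent, generation

result, generations = weasel()
print(result, generations)
```

With cumulative selection the target phrase is typically reached within a few hundred generations, whereas pure random typing would need on the order of 27^28 attempts to hit the same 28-character string.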

The key, again it seems, is to apply the metaphor of computation. Evolution is a kind of search over the space of possible structures, but structures of a peculiar sort: the useful ones are often computationally compressible. The code to express them is much shorter than the expression itself. As Jordana Cepelewicz put it in a piece for Quanta:

“Take the well-worn analogy of a monkey pressing keys on a computer at random. The chances of it typing out the first 15,000 digits of pi are absurdly slim — and those chances decrease exponentially as the desired number of digits grows.

But if the monkey’s keystrokes are instead interpreted as randomly written computer programs for generating pi, the odds of success, or “algorithmic probability,” improve dramatically. A code for generating the first 15,000 digits of pi in the programming language C, for instance, can be as short as 133 characters.”
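To make the quoted point concrete, here is a pi generator in Python rather than C, somewhat longer than 133 characters but the same idea, using the well-known unbounded spigot algorithm of Gibbons. A few hundred characters of program encode arbitrarily many digits:

```python
def pi_digits(n):
    """Return the first n decimal digits of pi (Gibbons' streaming spigot)."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    out = []
    while len(out) < n:
        if 4 * q + r - t < m * t:
            out.append(m)  # the next digit is settled; shift the state by 10
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # consume one more term of the underlying series for pi
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return out

print("".join(map(str, pi_digits(15))))  # 314159265358979
```

The “code” is vastly shorter than the digit string it stands for, which is exactly what makes its algorithmic probability high.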

In a rough analogy to Turing’s “primitives” for machine-based computation, biological evolution requires only a limited number of actions to access a potentially infinite number of end states. Organisms need a way of exploring the search space; they need a way to record/remember/copy that information, along with the ability to compress this information so that it is easier to record, store and share. Ultimately, the same sort of computational complexity that stymies human-built systems may place limits on the types of solutions evolution itself is able to find.

The reconceptualization of evolution as a sort of computational search over the space of easily compressible possible structures is a case excellently made by Andreas Wagner in his book Arrival of the Fittest. Interestingly, Wagner draws a connection between this view of evolution and Plato’s idea of the forms. The irony, as Wagner points out, is that it was Plato’s idea of ideal types applied to biology that may have delayed the discovery of evolution in the first place. It’s not “perfect” types that evolution is searching over, for perfection isn’t something that exists in an ever changing environment, but useful structures that are computationally discoverable: molecular and morphological solutions to the problems encountered by an organism, analogs to codes for the first 15,000 digits of pi.

Does such evolutionary search always result in the same outcome? The late paleontologist and humanist Stephen Jay Gould would have said no. Replay the tape of evolution, as happens to George Bailey in “It’s a Wonderful Life,” and you’d get radically different outcomes. Evolution is built upon historical contingency, like Homer Simpson wishing he hadn’t killed that fish.

Yet our view of evolution has become more nuanced since Gould’s untimely death in 2002. In his book Improbable Destinies, Jonathan Losos lays out the current research on the issue. Natural Selection is much more a tinkerer than an engineer. Its solutions are, by necessity, kludgey, a result of whatever is closest at hand. Evolution is constrained by the need for survival to move from next best solution to next best solution across the search space of adaptation, like a cautious traveler walking from stone to stone across a stream.

Starting points are thus extremely important for what Natural Selection is able to discover over a finite amount of time. That said, and contra Gould, similar physics and environmental demands do often lead to morphological convergence- think bats and birds with flight. But even when species gravitate towards a particular phenotype there is often a stunning amount of underlying genetic variety or indeterminacy. Such genetic variety within the same species can be leveraged into more than one solution to a challenging environment, with one branch getting bigger and the other smaller in response to, say, predation. Sometimes evolution at the microscopic level can, by pure chance, skip over intermediate solutions entirely and land at something close to the optimal solution thanks to what in probabilistic terms should have been a fatal mutation.

The problem encountered here might be quite different from the one mentioned above: not finding the needle of useful structure in the infinite haystack of possible ones, but how to avoid finding the same small sample of needles over and over again. In other words, if evolution really is so good at convergence, why does nature appear so abundant with variety?

Maybe evolution is just plain creative- a cabinet of freaks and wonders. So long as a mutation isn’t detrimental to reproduction, even if it provides no clear gain, Natural Selection is free to give it a whirl. Why do some salamanders have only four toes? Nobody knows.

This idea that evolution needn’t be merely utilitarian, but can be creative, is an argument made by the renowned mathematician Gregory Chaitin, who sees deep connections between biology, computation, and the ultimately infinite space of mathematical possibility itself. Evolutionary creativity is also found in the work of the ornithologist Richard O. Prum, whose controversial The Evolution of Beauty argues that we need to see the perception of beauty by animals in the service of reproductive or consumptive (as in bees and flowers) choice as something beyond a mere proxy measure for genetic fitness. Evolutionary creativity, propelled by sexual selection, can sometimes run in the opposite direction from fitness.

In Prum’s view evolution can be driven not by the search for fitness but by the way in which certain displays resonate with the perceivers doing the selection. In this way organisms become agents of their own evolution and the explored space of possible creatures and behaviors is vastly expanded from what would pertain were alignment to environmental conditions the sole engine of change.

If resonance with perceiving mates or pollinators acts to expand the search space of evolution in multicellular organisms with the capacity for complex perception- brains- then unicellular organisms do them one better through a kind of shared, parallel search over that space.

Whereas multicellular organisms can rely on sex as a way to expand the evolutionary search space, bacteria have no such luxury. What they have instead is an even more powerful form of search- horizontal gene transfer. Constantly swapping genes among themselves gives bacteria a sort of plug-n-play feature.

As Ed Yong brilliantly lays out in I Contain Multitudes, bacteria, in possession of nature’s longest lived and most extensive tool kit, are often used by multicellular organisms to do things such as process foods or signal mates, which means evolution doesn’t have to reinvent the wheel. These bacteria aren’t “stupid”. In their clonal form as biofilms, bacteria not only exhibit specialized/cooperative structures, they also communicate with one another via chemical and electrical signalling: a slime internet, so to speak.

It’s not just in this area of evolution as search that the application of the computational metaphor to biology proves fruitful. Biological viruses, which use the cellular machinery of their hosts to self-replicate, bear a striking resemblance to computer viruses that do likewise in silicon. Plants, too, have characteristics that resemble human communications networks, as in forests whose trees communicate and coordinate responses to dangers and share resources using vast fungal webs.

In terms of agents with limited individual intelligence whose collective behavior is clearly sophisticated, nothing trumps the eusocial insects, such as ants and termites. In such forms of swarm intelligence, computation is analogous to what we find in cellular automata, where a complex task is tackled by breaking a problem into numerous much smaller problems solved in parallel.
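An elementary cellular automaton gives the flavor of this style of computation: every cell applies the same tiny local rule simultaneously, consulting only its immediate neighbors, yet complex global patterns emerge. A minimal sketch, using Wolfram’s Rule 30 simply as a familiar example:

```python
def rule30_step(cells):
    """One synchronous update of elementary cellular automaton Rule 30.
    Each cell sees only (left, self, right): new = left XOR (self OR right)."""
    padded = [0] + cells + [0]  # treat everything beyond the row as dead
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]  # start from a single live cell
for _ in range(3):           # print a few parallel-update generations
    print("".join(".#"[c] for c in row))
    row = rule30_step(row)
```

No cell has any global view, yet the colony of cells, updated in parallel, computes a structure none of them individually contains, which is the sense in which ant colonies and termite mounds are computational.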

Dennis Bray, in his superb book Wetware: A Computer in Every Living Cell, has provided an extensive reading of biological function through the lens of computation/information processing. Bray meticulously details how enzyme regulation is akin to switches, their up and down regulation like the on/off functions of a transistor, with chains of enzymes acting like electrical circuits. He describes how proteins form signal networks within an organism that allow cells to perform logical operations, the components linked not by wires but by molecular diffusion within and between cells. Cellular structures such as methyl groups allow cells to measure the concentration of attractants in the environment, in effect acting as counters, performing calculus.

Bray thinks that it is these protein networks rather than anything that goes on in the brain that are the true analog for computer programs such as artificial neural nets. Just as neural nets are trained, the connections between nodes in a network sculpted by inputs to derive the desired output, protein networks are shaped by the environment to produce a needed biological product.

The history of artificial neural nets, the foundation of today’s deep learning, is itself a fascinating study in the iterative relationship between computer science and our understanding of the brain. When Alan Turing first imagined his universal computer, it was the human brain, as then understood by the ascendant behavioral psychology, that he used as its template.

It wasn’t until 1943 that the first computational model of a neuron was proposed by McCulloch and Pitts, who would expand upon their work in a landmark 1959 paper, co-authored with Jerome Lettvin and Humberto Maturana, “What the frog’s eye tells the frog’s brain”. The title might sound like a snoozer, but it’s certainly worth a read, as much for the philosophical leaps the authors make as for the science.

For what the authors discovered in researching frog vision was that perception wasn’t passive but an active form of information processing, pulling distinct features such as “bug detectors” out of the environment- what they, leaning on Immanuel Kant of all people, called a physiological synthetic a priori.

It was a year earlier, in 1958, that Frank Rosenblatt superseded McCulloch and Pitts’ somewhat simplistic computational model of neurons with his perceptrons, whose development would shape the rollercoaster-like future of AI. Perceptrons were in essence single-layer artificial neural networks. What made them appear promising was the fact that they could “learn”. A single-layer perceptron could, for example, be trained to identify simple shapes, a kind of analog for how the brain was discovered to be wired together by synaptic connections between neurons.

The early success of perceptrons, and it turned out the future history of AI, was driven into a ditch when in 1969 Marvin Minsky and Seymour Papert, in their landmark book Perceptrons, showed just how limited the potential of perceptrons as a way of mimicking cognition actually was. Single-layer perceptrons could handle only linearly separable functions such as AND and OR, and broke down on those that aren’t, most famously XOR. In the 1970s AI researchers turned away from modeling the brain (connectionist) and towards the symbolic nature of thought (symbolist). The first AI Winter had begun.
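The limitation is easy to demonstrate. A minimal sketch (my own illustration, not code from the book): Rosenblatt’s learning rule quickly masters AND, but no setting of a single layer’s weights can ever get all four XOR cases right, because no straight line separates XOR’s classes:

```python
def train_perceptron(samples, epochs=25):
    """Rosenblatt's learning rule for a single 2-input perceptron."""
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = y - pred  # nudge weights toward each mistake
            w0, w1, b = w0 + err * x0, w1 + err * x1, b + err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

f_and, f_xor = train_perceptron(AND), train_perceptron(XOR)
print(all(f_and(*x) == y for x, y in AND))  # True: AND is linearly separable
print(all(f_xor(*x) == y for x, y in XOR))  # False: XOR is not
```

Inserting a hidden layer between input and output is what dissolves this limitation, the move behind today’s deep learning.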

The key to getting around these limitations proved to be using multiple layers of perceptrons, what we now call deep learning. Though we’ve known this since the 1980s, it took until now, with our exponential improvement in computer hardware and accumulation of massive data sets, for the potential of perceptrons to be realized.

Yet it isn’t clear that most of the problems with perceptrons identified by Minsky and Papert have truly been solved. Much of what deep learning does can still be characterized as “line fitting”. What’s clear is that, whatever deep learning is, it bears little resemblance to how the brains of animals actually work, something well described by the authors at the end of their book:

“We think the difference in abilities comes from the fact that a brain is not a single, uniformly structured network. Instead, each brain contains hundreds of different types of machines, interconnected in specific ways which predestine that brain to become a large, diverse society of specialized agencies.” (273)

The mind isn’t a unified computer but an emergent property of a multitude of different computers, connected, yes, but also kept opaque from one another as a product of evolution and as the price paid for computational efficiency. It is because of this modular nature that minds remain invisible even to their possessors, although these computational layers may be stacked, with higher layers building off of the end products of the lower, possibly like the layered, unfolding nature of the universe itself. Could this be the origin of the mysteriously hierarchical structure of nature and human knowledge?

If Minsky and Papert were right about the nature of mind, if, as in an updated version of their argument recently made by Kevin Kelly, or a similar one made by the AI researcher François Chollet, there are innumerable versions of intelligence, many of which may be incapable of communicating with one another, then this has deep implications for the computational metaphor as applied to thought and life, and sets limits on how far we will be able to go in engineering, modeling, or simulating life with machines following principles laid down by Turing.

The quest of Paul Davies for a unified theory of information to explain life might be doomed from the start. Nature would be shown to utilize a multitude of computational models fit for their purposes, and not just multiple models between different organisms, but multiple models within the same organism. In the struggle between Turing universality and nature’s specialized machines, Feynman’s point that traditional computers just aren’t up to the task of simulating the quantum world may prove just as relevant for biology as it does for physics. To bridge the gap between our digital simulations and the complexity of the real, we would need analog computers which more fully model what it is we seek to understand- though isn’t this what experiments are, a kind of custom-built analog computer? No amount of data or processing speed will eliminate the need for physical experiments. Bray himself says as much:

“A single leaf of grass is immeasurably more complicated than Wolfram’s entire opus. Consider the tens of millions of cells it is built from. Every cell contains billions of protein molecules. Each protein molecule in turn is a highly complex three-dimensional array of tens of thousands of atoms. And that is not the end of the story. For the number, location, and particular chemical state of each protein molecule is sensitive to its environment and recent history. By contrast, an image on a computer screen is simply a two-dimensional array of pixels generated by an iterative algorithm. Even if you allow that pixels can show multiple colors and that underlying software can embody hidden layers of processing, it is still an empty display. How could it be otherwise? To achieve complete realism, a computer representation of a leaf would require nothing less than a life-size model with details down to the atomic level and with a fully functioning set of environmental influences.” (103)

What this diversity of computational models means is not that the Church-Turing Thesis is false (Turing machines would still be capable of duplicating any other model of computation), but that its extended version is likely false. The Extended Church-Turing Thesis claims that any possible efficient computation can be efficiently performed by a Turing machine, yet our version of Turing machines might prove incapable of efficiently duplicating either the extended capabilities of quantum computers, gained through leverage of the deep underlying structure of reality, or the twisted structures utilized by biological computers “designed” by the multi-billion-year force of evolutionary contingency. Yet even if the Extended Church-Turing Thesis turns out to be false, the lessons derived from reflection on idealized Turing machines and their real world derivatives will likely continue to be essential to understanding computation in both physics and biology.

Even if science gives us an indication that we are nothing but computers all the way down, from our atoms to our cells, to our very emergence as sentient entities, it’s not at all clear what this actually means. To conclude that we are, at bottom, the product of a nested system of computation is to claim that we are machines. A very special type of machine, to be sure, but machines nonetheless.

The word machine itself conjures up all kinds of ideas very far from our notion of what it means to be alive. Machines are hard, deterministic, inflexible and precise, whereas life is notably soft, stochastic, flexible and analog. If living beings are machines they are also supremely intricate machines; they are, to borrow from an older analogy for life, like clocks. But maybe we’ve been thinking about clocks all wrong as well.

As mentioned, the idea that life is like an intricate machine, and therefore is best understood by looking at the most intricate machines humans have made, namely clocks, has been with us since the very earliest days of the scientific revolution. Yet as the historian Jessica Riskin points out in her brilliant book The Restless Clock, since that day there has been a debate over what exactly was being captured in the analogy. As Riskin lays out, starting with Leibniz there was always a minority among the materialists arguing that life was clock-like, and that what it meant to be a clock was to be “restless”, stochastic and undetermined. In the view of this school, sophisticated machines such as clocks or automata gave us a window into what it meant to be alive, which, above all, meant to possess a sort of internally generated agency.

Life, in an updated version of this view, can be understood as computation, but it’s computation as performed by trillions upon trillions of interconnected, competing, cooperating soft machines, constructing the world through their senses and actions, each machine itself capable of some degree of freedom within an ever changing environment. A living world, unlike a dead one, is a world composed of such agents, and to the extent our machines have been made sophisticated enough to possess something like agency, perhaps we should consider them alive as well.

Barbara Ehrenreich makes something like this argument in her caustic critique of American culture’s denial of death, Natural Causes:

“Maybe then, our animist ancestors were on to something that we have lost sight of in the last few hundred years of rigid monotheism, science, and Enlightenment. And that is the insight that the natural world is not dead, but swarming with activity, sometimes even agency and intentionality. Even the place where you might expect to find solidity, the very heart of matter- the interior of a proton or a neutron- turns out to be flickering with ghostly quantum fluctuations. I would not suggest that the universe is “alive”, since that might invite misleading biological analogies. But it is restless, quivering, and juddering from its vast patches to its tiniest crevices.” (204)

If this is what it means to be a machine, then I am fine with being one. This does not, however, address the question of danger. For the temptation of such fully materialist accounts of living beings, especially humans, is that it provides a justification for treating individuals as nothing more than a collection of parts.

I think we can avoid this risk, even while retaining the notion that what life consists of is deeply explained by reference to computation, as long as we agree that it is only ethical to treat a living being in light of the highest level at which it understands itself, computes itself. I may be a Matryoshka doll of molecular machines, but I understand myself as a father, son, brother, citizen, writer, and a human being. In other words, we might all be properly called machines, but we are a very special kind of machine, one that should never be reduced to mere tools: living, breathing computers in possession of an emergent property that might once have been called a soul.


Listening to the Abyss


In homage to the international team of scientists working on the Event Horizon Telescope, who earlier this month gave us the first ever image of a black hole, below is an essay I wrote for Harvard’s BHI essay competition late last year. The weird reality of the universe we call home never ceases to humble and astound me, nor, when we work together as a species, does our ingenuity at uncovering the sublime order of the natural world cease to be a source of hope and pride.


“Of all the conceptions of the human mind, from unicorns to gargoyles to the hydrogen bomb, the most fantastic, perhaps, is the black hole…” Kip Thorne

In the last century we’ve experienced an expansion of cosmic horizons that dwarfs any that occurred before. It’s a universe far larger and weirder than we could have imagined, held together by dark matter and propelled apart by dark energy, neither of which we truly understand. And out of all the things in this wide Alice in Wonderland universe nothing is as mind-blowingly weird as black holes.

Our expanded horizon has been made possible by a revolution in the tools used by astronomers: vastly improved optical telescopes and the birth of radio, gamma ray, infrared, and x-ray astronomy. Only in the last few years have we seen the rise of astronomy based on gravitational waves, the ripples of space-time. Gravitational wave detection gives us the ability not to see but, in some sense, to listen to black holes. [i] The question is how many of us will pay any attention to what they are saying?

For what is striking about this recent expansion of our cosmic horizons is how little it has impacted the world outside of science itself, the world of artists, authors, philosophers. [ii]

Certainly, the popularity of movies such as Interstellar suggests a public hunger for the question-begging awe implied by contemporary cosmology, but a comparison with the past reveals just how meh our reaction has been. One could lay blame for this indifference on our selfie and celebrity based culture, but the problem goes deeper than that, and goes beyond the old complaint about the “two cultures”. [iii] What I think we’ve lost is our ability to experience the natural sublime.

The only expansion of human imagination in terms of our place in the universe that even comes close to the one we’ve experienced in the last century was the one that ended our pre-Copernican view of the world, when Copernicus and those who followed him gave us a new cosmos with the earth no longer at its center and threw the meaning of humanity into question. Yet these discoveries didn’t just circulate among early scientists, but quickly obsessed theologians, poets and philosophers, who wrestled with what the new post-Copernican world meant for humanity’s place in the universe.

Many of their reflections express not so much joy at our expanded horizons as a sentiment akin to vertigo, as if people on a previously stable earth had been turned upside down and they were in danger of plunging into a sky that had become an infinite pit. In other words, they were seriously weirded out.

Take the famous 18th century fable of Carazan’s Dream. Carazan, a miserly Baghdad merchant, is tossed “up” into hell upon his death. He finds not tortures, but a journey through the infinite blackness of space. It is “A dreadful region of eternal silence, loneliness, and darkness” that envelops Carazan as he loses sight of the illumination of the last stars. He is filled with unbearable anguish as he realizes that even after traveling “ten-thousand times a thousand years” his journey into blackness would not be at an end. [iv]

The 18th century poet and children’s writer Anna Laetitia Barbauld, in her poem “A Summer Evening’s Meditation”, expressed something similar.

What hand unseen

Impells me onward thro’ the glowing orbs

Of habitable nature, far remote,

To the dread confines of eternal night,

To solitudes of vast unpeopled space [v]

In the 18th century Edmund Burke termed this strange mixture of foreboding and awe the sublime. [vi] It’s an experience that comes not just from looking up into an infinite sky, but whenever one experiences nature in a way that puts our limited human scale and power in perspective. Immanuel Kant brought the sublime into the realm of mathematics, noting that phenomena for which our everyday imagination showed itself to be woefully insufficient proved tractable with the use of mathematical thinking. Math was a tool for grabbing hold of the weird. [vii]

Another philosopher, Arthur Schopenhauer, went even deeper. For him, the experience of the sublime had nothing to do with fear or anxiety. Instead, it was the result of a kind of tension that emerged from contemplating the scale of nature relative to our own limits while simultaneously understanding it. Nature that seems so beyond us is instead in us, indeed is us. [viii]

It’s only a short step from this psychologizing of the sublime found in nature to seeing the sublime as originating in technology itself. For what made the natural sublime perceivable in the first place wasn’t so much the capabilities of the individual human mind as the whole history of science and technology up until that point, the knowledge that allowed us to finally apprehend just how large and weird our cosmic home actually was. Thus the natural sublime emerged in parallel with the idea of the technological sublime that would eventually swallow it. [ix]

Starting in the 19th century technology seemed to liberate us from nature. Our own creations became global in scope and were often looked upon with the same quasi-religious awe once reserved for natural phenomena. Yet after the world wars, the creation of nuclear weapons, and the growing environmental consciousness of the 20th century, many non-scientists, with good reason, abandoned this idea of the technological sublime. Unfortunately, because the most powerful examples of the sublime in nature are only perceivable through the technology they distrusted, the natural sublime was rejected as well. [x]

Among the technologists of Silicon Valley and elsewhere belief in the technological sublime never really went away, and we’re still under its spell. Though the technological infrastructure we have built across and above the earth has been of great benefit to the material wellbeing of billions of people, culturally, that same structure has proven almost black hole like in its ability to torque our attention back upon ourselves. [xi] The most vocal proponents of the technological sublime use this very language. Our destiny is to enter the technological “singularity” and any attempts to see beyond it are futile. In a rejection of the universe Copernicus gave us, we’re back at the center of events. [xii]

Black holes seem almost ready made to provide a rejoinder to this kind of anthropocentric hubris, and if attended to, may even help us recover our sense of the natural sublime.

They dwarf anything created by human civilization in a way almost impossible to express, the mass of the largest yet discovered being 17 billion times that of the sun. [xiii] It’s such scale that allows them to exist in the first place. Under normal circumstances gravity is by far the weakest of the four forces, but with enough mass this weakling breaks out of its chains and becomes monstrous. The force of gravity becomes so large that space curves back upon itself, every path outward becomes a path inward towards the center, where even light itself is no longer fast enough to escape the compression of space that veers to infinity, the real singularity.

As objects get closer to the event horizon, beyond which nothing but radiation escapes, they circle this cosmic drain faster and faster, whole stars whipping around the black hole at speeds as fast as 93,000 miles per hour. [xiv] As matter is pulled towards absolute blackness it gives rise to quasars, the brightest objects in the universe, glowing with the light of millions of suns.

Falling into a black hole really would be like something out of Carazan’s Dream, only weirder.

From the perspective of an outside observer, someone slipping into a massive black hole would slow to the point of being frozen, only to be vaporized. Yet the person who actually fell in wouldn’t experience anything at all except the movement ever forward towards the singularity, in the same way we can only move forward in time. But what does that even mean? Here we come upon our limits: the apparent irreconcilability of quantum mechanics and general relativity, the limits of computation in finite time. [xv]

Long after the last star has burned out, black holes and their weirdness will survive. The only energy source in the universe will be their sluggardly Hawking radiation until, after quadrillions of years, they too melt away. [xvi] Perhaps future civilizations will float just outside the death grip of the event horizon to siphon off the energy of the black hole at the center of the Milky Way, a mother lode with more energy than all of our galaxy’s billions of stars. [xvii] Perhaps space faring civilizations don’t head for the stars, but towards the black. [xvii] Maybe black holes are the womb of space-time, giving rise to universes like and unlike our own. Or perhaps, if intelligences like ourselves survive, they will use the wonders of the abyss to create the world anew, which would mean that, in the very long run, the believers in the technological sublime were actually right after all.

[i] Levin, Janna. Black Hole Blues: and Other Songs from Outer Space. Anchor Books, a Division of Penguin Random House LLC, 2017.

[ii] There are, of course, exceptions. An excellent example of which is: Robinson, Marilynne. What Are We Doing Here? Farrar, Straus and Giroux, 2018.

[iii] Snow, C. P., and Stefan Collini. The Two Cultures. Cambridge University Press, 2014.

[iv] Fisher, Anne. The Pleasing Instructor Or Entertaining Moralist. G. Robinson, 1777. p. 143

[v] Barbauld, and Lucy Aikin. The Works of Anna Laetitia Barbauld: with a Memoir. Cambridge University Press, 2014. p.122

[vi] Burke wasn’t the first, but he was certainly the most influential figure to bring the sublime into the public discourse with his: Burke, Edmund, et al. A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful. Printed for R. and J. Taylor, 1772.

[viii] Kant, Immanuel, et al. Critique of Judgement. Oxford Univ. Press, 2008. p. 87

[ix] Schopenhauer, Arthur. The World as Will and Representation. Vol. 1, Cambridge University Press, 2010. p.230

[x] Miller, Perry. The Life of the Mind in America: From the Revolution to the Civil War. Harcourt, Brace & World, 1970.

[xi] A good recent example of this turn against the natural sublime is: Marris, Emma. Rambunctious Garden: Saving Nature in a Post-Wild World. Bloomsbury, 2013.

[xi] A point beautifully made in: Billings, Lee. Five Billion Years of Solitude: the Search for Life among the Stars. Current, 2014.

[xii] Kurzweil, Ray. The Singularity Is near: When Humans Transcend Biology. Duckworth, 2016.

[xiii] Thomas, Jens, et al. “A 17-Billion-Solar-Mass Black Hole in a Group Galaxy with a Diffuse Core.” Nature, vol. 532, no. 7599, 2016, pp. 340–342, doi:10.1038/nature17197.

[xiv] The fastest known orbit of a star around a black hole: Kuulkers, E., et al. “MAXI J1659−152: the Shortest Orbital Period Black-Hole Transient in Outburst.” Astronomy & Astrophysics, vol. 552, 2013, doi:10.1051/0004-6361/201219447.

[xv] Harlow, Daniel, and Patrick Hayden. “Quantum Computation vs. Firewalls.” Journal of High Energy Physics, vol. 2013, no. 6, 2013, doi:10.1007/jhep06(2013)085.

[xvi] Adams, Fred, and Greg Laughlin. The Five Ages of the Universe: inside the Physics of Eternity. Touchstone, 2000.

[xvii] Lawrence, Albion, and Emil Martinec. “Black Hole Evaporation along Macroscopic Strings.” Physical Review D, vol. 50, no. 4, 1994, pp. 2680–2691, doi:10.1103/physrevd.50.2680.

[xviii] It has been suggested that one solution to the Fermi Paradox is that the aliens are “hibernating”, waiting for the Black Hole Era, when the universe will have cooled enough to make computation extremely efficient: Sandberg, et al. “That Is Not Dead Which Can Eternal Lie: the Aestivation Hypothesis for Resolving Fermi’s Paradox.” 27 Apr. 2017.

The Flash Crash of Reality

“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.”

                                                                                                 H.P. Lovecraft, The Call of Cthulhu  

“All stable processes we shall predict. All unstable processes we shall control.”

John von Neumann

For at least as long as there has been a form of written language to record such a thing, human beings have lusted after divination. The classical Greeks had their trippy Oracle at Delphi, while the Romans scanned entrails for hidden patterns, or more beautifully, sought out the shape of the future in the murmurations of birds. All ancient cultures, it seems, looked for the signs of fate in the movement of the heavens. The ancient Chinese script may have even originated in a praxis of prophecy, a search for meaning in the branching patterns of “oracle bones” and tortoise shells, signaling that perhaps written language itself originated not with accountants but with prophets seeking to overcome the temporal confines of the second law, in whose measure we are forever condemned.

The promise of computation was that this power of divination was at our fingertips at last. Computers would allow us to outrun time, and thus in seeing the future we’d finally be able to change it or deftly avoid its blows- the goal of all seers in the first place.

Indeed, the binary language underlying computation sprang from the fecund imagination of Gottfried Leibniz, who got the idea after he encountered the most famous form of Chinese divination, the I-Ching. The desire to create computing machines emerged with the Newtonian worldview and instantiated its premise; namely, that the world could be fully described in terms of equations whose outcome was preordained. What computers promised was the ability to calculate these equations, offering us a power born of asymmetric information- a kind of leverage via time travel.

Perhaps we should have known that time would not be so easily subdued. Outlining exactly how our recent efforts to know and therefore control the future have failed is the point of James Bridle’s wonderful book New Dark Age: Technology and the End of the Future.

With the kind of clarity demanded of an effective manifesto, Bridle neatly breaks his book up into ten “C’s”: Chasm, Computation, Climate, Calculation, Complexity, Cognition, Complicity, Conspiracy, Concurrency, and Cloud.

Chasm defines what Bridle believes to be our problem. Our language is no longer up to the task of navigating the world which the complex systems upon which we are dependent have wrought. This failure of language, which amounts to a failure of thought, Bridle traces to the origins of computation itself.


To vastly oversimplify his argument, the problem with computation is that the detailed models it provides too often tempt us into confusing our map with the territory. Sometimes this leads us to mistrust our own judgement and defer to the “intelligence” of machines- a situation that in the most tragic of circumstances has resulted in what those in the National Park Service call “death by GPS”. In other cases our confusion of the model with reality results in the surrender of power to the minority of individuals capable of creating such models, and to the corporations which own and run them.

Computation was invented under the aforementioned premise, born with the success of calculus, that everything, including history itself, could be contained in an equation. It was also seen as a solution to the growing complexity of society. Echoing Stanislaw Lem, Vannevar Bush, one of the founders of modern computing, foresaw something like the internet in the 1940s with his “memex”: the mechanization of knowledge was the only possible lifeboat in the deluge of information modernity had brought.


One of the first projected purposes of computers was not just to predict, but to actually control the weather. And while we’ve certainly gotten better at the former, the best we’ve gotten from the latter is Kurt Vonnegut’s humorous takedown of the premise in his novel Cat’s Cradle, which was actually based on his chemist brother’s work on weather control for General Electric. It is somewhat ironic, then, that the very fossil fuel based civilization upon which our computational infrastructure depends is not only making the weather less “controllable” and predictable, but is undermining the climatic stability of the Holocene, which facilitated the rise of a species capable of imagining and building something as sophisticated as computers in the first place.

Our new dark age is not just a product of our misplaced faith in computation, but also of the growing unpredictability of the world itself, a reality whose existential importance is most apparent in the case of climate change. Our rapidly morphing climate threatens the very communications infrastructure that allows us to see and respond to its challenges. Essential servers and power sources will likely be drowned under the rising seas, cooling-dependent processors taxed by increasing temperatures. Most disturbingly, rising CO2 levels are likely to make human beings dumber. As Bridle writes:

“At 1,000 ppm, human cognitive ability drops by 21 per cent. At higher atmospheric concentrations, CO2 stops us from thinking clearly. Outdoor CO2 already reaches 500 ppm”

An unstable climate undermines the bedrock of predictable natural cycles from which civilization itself emerged, namely those of agriculture. In a way our very success at controlling nature by making it predictable is destabilizing the regularity that made such prediction possible in the first place.

It is here that computation reveals its double-edged nature, for while computation is the essential tool we need to see and respond to the “hyperobject” that is climate change, it is also one of the sources of this growing natural instability itself. Much of the energy of modern computation is directly traceable to fossil fuels, a fact the demon coal lobby has eagerly pointed out.


What the explosion of computation has allowed, of course, is an exponential expansion of the power and range of calculation. While one can quibble over whether Moore’s Law, the driving force behind the fact that everything is now software, has finally proved Ray Kurzweil and his ilk wrong and bent towards the asymptote, the fact is that nothing else in our era has followed the semiconductor’s exponential curve. Indeed, as Bridle shows, in many fields we find precisely the opposite.

For all their astounding benefits, machine learning and big data have not, as Chris Anderson predicted, resulted in the “End of Theory”. Science still needs theory, experiment, and dare I say, humans to make progress, and what is clear is that in many areas outside ICT itself progress has not merely slowed but stalled.

Over the past sixty years, rather than experiencing Moore’s Law type increases, the pharmaceutical industry has suffered the opposite, the so-called Eroom’s Law: “The number of new drugs approved per billion US dollars spent on research and development has halved every nine years since 1950.”
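That halving compounds brutally fast. A minimal sketch of the arithmetic (the function name is my own illustration, not from the book):

```python
def erooms_law_productivity(years_since_1950: float, halving_period: float = 9.0) -> float:
    """Drugs approved per billion R&D dollars, relative to the 1950 baseline,
    assuming productivity halves every `halving_period` years."""
    return 0.5 ** (years_since_1950 / halving_period)

# Sixty-three years of halving-every-nine-years is seven halvings,
# leaving 1/128th of the original productivity.
print(erooms_law_productivity(63))  # 0.0078125
```

In other words, a billion dollars of pharmaceutical R&D in the 2010s bought less than one per cent of the new drugs it would have bought in 1950, the mirror image of the semiconductor curve.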

Part of this stems from the fact that the low hanging fruit of discovery, not just in pharmaceuticals but elsewhere, has already been picked, along with the fact that the problems we’re dealing with are becoming exponentially harder to solve. Yet some portion of the slowdown in research progress is surely a consequence of technology itself, or at least of the ways in which computers are being relied upon and deployed. Ease of sharing, when combined with status hunting, inevitably leads to widespread gaming. Scientists are little different from the rest of us, seeking ways to top Google’s PageRank, YouTube recommendations, Instagram and Twitter feeds, or the sorting algorithms of Amazon, though for scientists the summit of recognition consists of prestigious journals, where publication can make or break a career.

Data being easy to come by, while experimentation and theory remain difficult, has meant that “proof” is often conflated with what turn out to be spurious p-values, or “p-hacking”. The age of big data has also been the age of science’s “replication crisis”, where seemingly ever more findings disappear upon scrutiny.
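The mechanics behind those spurious p-values are easy to demonstrate. A minimal simulation (entirely my own illustration, not from the book): when no real effect exists anywhere, p-values are uniformly distributed, so roughly 5 per cent of tests will still clear the conventional 0.05 significance bar, and a researcher who runs enough comparisons is all but guaranteed a “finding”.

```python
import random

def spurious_hits(n_tests: int = 1000, alpha: float = 0.05, seed: int = 42) -> int:
    """Count 'significant' results among n_tests when the null hypothesis
    is true everywhere (p-values are then uniform on [0, 1])."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_tests) if rng.random() < alpha)

# Even with no real effects at all, dozens of 'discoveries' appear.
print(spurious_hits())
```

Report only the hits and discard the rest, and you have p-hacking in a nutshell, which is why replication, not a single p-value, is the real test.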

What all this calculation has resulted in is an almost suffocating level of complexity, which is the source of much of our inegalitarian turn. Connectivity and transparency were supposed to level the playing field. Instead, in areas such as financial markets, where the sheer amount of information to be processed has ended up barring new entrants, calculation has provided the ultimate asymmetric advantage to those with the highest capacity to identify and respond within nanoseconds to changing market conditions.

Asymmetries of information lie behind both our largest corporate successes and the rising inequality they have brought in their wake. Companies such as Walmart and Amazon are in essence logistics operations, built on the unique ability of these entities to see and respond first, or most vigorously, to consumer needs. As Bridle points out, this rule of logistics has resulted in a bizarre scrambling of politics, the irony that:

“The complaint of the Right against communism – that we’d all have to buy our goods from a single state supplier – has been supplanted by the necessity of buying everything from Amazon.”

Yet unlike the corporate giants of old, such as General Motors, our 21st century behemoths don’t actually employ all that many people, and in their lust after automation seem determined to employ even fewer. The workplace, and the larger society, these companies are building seem even more designed around the logic of machines than the factories of heavy industry’s heyday. The ‘chaotic storage’ deployed in Amazon’s warehouses is like something dreamed up by Kafka, but that’s because it emerges out of the alien “mind” of an algorithm, a real world analog to Google’s Deep Dream.

The world in this way becomes less and less sensible except to the tiny number of human engineers who, for the moment, retain control over its systems. This is a problem that is only going to get worse with the spread of the Internet of Things, an extension of computation driven not by necessity, but by a capitalism that in its current form seems hell-bent on turning all products into services so as to procure a permanent revenue stream. It’s not a good idea. As Bridle puts it:

“We are inserting opaque and poorly understood computation at the very bottom of Maslow’s hierarchy of needs – respiration, food, sleep, and homeostasis – at the precise point, that is, where we are most vulnerable.”

What we should know by now, if anything, is that the more connected things are, the more hackable they become, and the more susceptible to rapid and often unexplainable crashes. Turning reality into a type of computer simulation comes with the danger that the world at large might experience the kind of “flash crash” now limited to the stock market. Bridle wonders if we’ve already experienced just such a flash crash of reality:

“Or perhaps the flash crash in reality looks exactly like everything we are experiencing right now: rising economic inequality, the breakdown of the nation-state and the militarisation of borders, totalising global surveillance and the curtailment of individual freedoms, the triumph of transnational corporations and neurocognitive capitalism, the rise of far-right groups and nativist ideologies, and the utter degradation of the natural environment. None of these are the direct result of novel technologies, but all of them are the product of a general inability to perceive the wider, networked effects of individual and corporate actions accelerated by opaque, technologically augmented complexity.”


It’s perhaps somewhat startling that even as we place ourselves in greater and greater dependence on artificial intelligence we’re still not really certain how or even if machines can think. Of course, we’re far from understanding how human beings exhibit intelligence, but we’ve never been confronted with this issue of inscrutability when it comes to our machines. Indeed, almost the whole point of machines is to replace the “herding cats” element of the organic world with the deterministic reliability of classical physics. Machines are supposed to be precise, predictable, legible, and above all, under the complete control of the human beings who use them.

The question of legibility today hasn’t gone far beyond the traditional debate between the two schools of AI that have rivaled each other since the field’s birth: those who believe intelligence is merely the product of connections between different parts of the brain, and those who think intelligence has more to do with the mind’s capacity to manipulate symbols. In our era of deep learning the Connectionists hold sway, but it’s not clear if we are getting any closer to machines actually understanding anything. This lack of comprehension has given us a new comedy genre of silly or disturbing mistakes made by computers, but it also presents real world dangers as we place more and more of our decision making upon machines that have no idea what any of the data they are processing actually means, even as these programs discover dangerous hacks, such as deception, that are as old as life itself.


Of course, no techno-social system can exist unless it serves the interests of at least some group in a position of power. Bridle draws an analogy between society in ancient Egypt and our own. There, the power of the priests was premised on their ability to predict the rise and fall of the Nile. To the public this predictive power was shrouded in the language of the religious caste’s ability to commune with the gods, all the while the priests were secretly using the much more prosaic technology of nilometers hidden underground.

Who are the priests of the current moment? Bridle makes a good case that it’s the “three letter agencies”, the NSA, MI5 and their ilk, that are the priests of our age. It’s in the interest of these agencies, born in the militarized atmosphere of the Cold War and the War on Terrorism, that the logic of radical transparency continues to unfold- where the goal is to see all and to know all.

Who knows how vulnerable these agencies have made our communications architecture in trying to see through it? Who can say, Bridle wonders, if the strongest encryption tools available haven’t already been made useless by some genius mathematician working for the security state? And here is the cruel irony of it all: the agencies whose quest is to see into everything are completely opaque to the publics they supposedly serve. There really is a “deep state”, though given our bizarro-land media landscape our grappling with it quickly gives way to conspiracy theories and lunatic cults like QAnon.


The hunt for conspiracy stems from the understandable need of the human mind to simplify. It is the search for clear agency where all we can see are blurred lines. Ironically, believers in conspiracy hold more expansive ideas of power and freedom than those who explain the world in terms of “social forces” or other necessary abstractions. For the conspiracist the world is indeed controllable; it’s just that those doing the controlling happen to be terrifying. None of this makes conspiracy anything but an unhinged way of dealing with reality, just a likely one whenever a large number of individuals feel the world is spinning out of control.

The internet ends up being a double boon for conspiracist politics because it both fragments the shared notion of reality that existed in the age of print and mass media, and allows individuals who fall under some particular conspiracy’s spell to find one another and validate their own worldview. Yet it’s not just a matter of fragmented media and the rise of filter bubbles that plagues us, but a kind of shattering of our sense of reality itself.


It is certainly a terrible thing that our current communications and media landscape has fractured into digital tribes with the gap of language and assumptions between us seemingly unbridgeable, and emotion-driven political frictions resulting in what some have called “a cold civil war.” It’s perhaps even more terrifying that this same landscape has spontaneously given way to a kind of disturbed collective unconscious that is amplified, and sometimes created, by AI into what amounts to the lucid dreams of a madman that millions of people, many of them children, experience at once.

YouTube isn’t so much a television channel as it is a portal to the twilight zone, where one can move from videos of strangers compulsively wrapping and unwrapping products to cartoons of Peppa Pig murdering her parents. Like its sister “tubes” in the porn industry, YouTube has seemingly found a way to jack straight into the human lizard brain. As is the case with slot machines, the system has been designed with addiction in mind, only the trick here is to hook into whatever tangle of weirdness or depravity exists in the individual human soul- and pull.

The even crazier thing about these sites is that the majority of viewers, and perhaps soon creators, are not humans but bots. As Bridle writes:

“It’s not just trolls, or just automation; it’s not just human actors playing out an algorithmic logic, or algorithms mindlessly responding to recommendation engines. It’s a vast and almost completely hidden matrix of interactions between desires and rewards, technologies and audiences, tropes and masks.”


Bridle thinks one thing is certain: we will never again return to the feeling of being completely in control, and the very illusion that we can be, if only we had the right technical tweak or the right political leader, is perhaps the greatest danger of our new dark age.

In a sense we’re stuck with complexity, and it’s this complex human/machine artifice, which has emerged without anyone having deliberately built it, that is the source of all the ills he has detailed.

The historian George Dyson recently offered a very similar diagnosis. In his essay “Childhood’s End” Dyson argued that we are leaving the age of the digital and going back to the era of the analog. He didn’t mean that we’d shortly be cancelling our subscriptions to Spotify and rediscovering the beauty of LPs (though plenty of us are doing that), but something much deeper. Rather than being a model of the world’s knowledge, in some sense Google now was the world’s knowledge. Rather than representing the world’s social graph, Facebook now was the world’s social graph.

The problem with analog systems when compared to digital ones is that they are hard to precisely control, and thus are full of surprises, some of which are far from good. Our quest to assert control over nature and society hasn’t worked out as planned. According to Dyson:

“Nature’s answer to those who sought to control nature through programmable machines is to allow us to build machines whose nature is beyond programmable control.”

Bridle’s answer to our situation is to urge us to think, precisely the kind of meditation on the present he has provided with his book. It’s not as wanting a solution as one might suppose, and for me it had clear echoes of the perspective put forward by Roy Scranton in his book Learning to Die in the Anthropocene, where he wrote:

“We must practice suspending stress-semantic chains of social exhaustion through critical thought, contemplation, philosophical debate, and posing impertinent questions…

We must inculcate ruminative frequencies in the human animal by teaching slowness, attention to detail, argumentative rigor, careful reading, and meditative reflection.”

I’m down with that. Yet the problem I had with Scranton is ultimately the same one I had with Bridle. Where is the politics? Where is human agency? For it is one thing to say that we live in a complex world roamed by “hyperobjects” we at best partly control, but it is quite another to discount our capacity for continuous intervention, especially our ability to “act in concert”, that is politically, to steer the world towards desirable ends.

Perhaps what the arrival of a new dark age means is that we’re regaining a sense of humility. Starting about two centuries ago human beings got it into their heads that they had finally gotten nature under their thumb. What we are learning in the 21st century is that not only was this view incorrect, but that the human-made world itself seems to have a mind of its own. What this means is that we’re likely barred forever from the promised land, condemned to a state of adaptation and response to nature’s cruelty and human injustice, which will only end with our personal death or the extinction of the species, and yet still contains all the joy and wonder of what it means to be a human being cast into a living world.