The Deeper Meaning of the Anthropocene


Last year, when I wrote a review of E.O. Wilson’s book The Meaning of Human Existence, I felt sure it would be the then 85-year-old’s last major work. I was wrong, having underestimated Professor Wilson’s already impressive intellectual stamina. Perhaps his latest book, Half-Earth: Our Planet’s Fight for Life, is indeed his last, the final volume of the trilogy begun with The Social Conquest of Earth and The Meaning of Human Existence. This has less to do with his mental staying power, which I will not make the mistake of underestimating again, than with the fact that it’s hard to imagine what might follow Half-Earth, for with it Wilson has less started a conversation than attempted to launch a crusade.

The argument Wilson makes in Half-Earth isn’t all that difficult to understand, and for those who are concerned with the health of the planet, and especially the well-being of the flora and fauna with which we share the earth, it might initially be hard to disagree with. Powerfully, Wilson reminds us that we are at the beginning of the sixth great extinction, a mass death of species on a par with other great dyings such as the one that killed the dinosaurs.

The difference between the mass extinction of our own time and those that occurred in the past is that rather than being the consequence of mindless processes, like a meteor strike or bacteria breathing poison, it is the result of the actions of a species that actually knows what it is doing: us. Wilson’s answer to the crisis we are causing is apparent in the very title of his book. He is urging us to declare half of the earth’s surface a wildlife preserve where large-scale human settlement and development will be prohibited.

Any polemic such as the one Wilson has written requires an enemy, but rather than take aim at the capitalist/industrial system, or the aspiration to endless consumption, Wilson’s enemy is a relatively new and not yet influential movement within environmentalism that aims to normalize our perspective on the natural world.

Though Wilson definitely has a political and ideological target with which he takes umbrage, being a scientist rather than a philosopher he fails to clearly define what exactly it is. Instead he labels anyone who holds doubts about the major assumptions of the environmental movement a believer in the “Anthropocene”- the idea that human beings have become so technologically powerful that we now constitute a geological force.
The problem with this is that Wilson’s quarrel is really only with a small subset of the people who believe in the Anthropocene. Indeed, Wilson himself seems to believe in it, which shows just how confused his argument is.

The subset he opposes would include thinkers like Emma Marris or Jedediah Purdy, who have been arguing that we need to untangle ourselves from ideas about nature that we inherited from 19th-century romanticism. These concepts regarding the purity of the natural as opposed to the artificiality of the man-made- the idea that not only is humanity distinct from nature but that anything caused by our agency is somehow unnatural- are now ubiquitous, yet they have become the subject of increasing doubt.

While mass extinction is certainly real and constitutes an enormous tragedy, it does not necessarily follow that the best way to counter such extinction is to declare half of the earth off-limits to humans. Much better for both human and animal welfare would be to make the human artifice more compatible with the needs of wildlife. Though the idea of a pure, wild and natural place free from human impact, and above all dark and quiet, is one I certainly understand and find attractive, our desire that it exist is much less a matter of environmental science than of a particular type of spiritual longing.

As Daniel Duane pointed out in a recent New York Times article, the places we deem most natural, namely the national parks, which have been set aside for the very purpose of preserving wilderness, are instead among the most human-managed landscapes on earth. And technology, though it can never lead to complete mastery, makes this nature increasingly easy to manage:

More and more, though, as we humans devour habitat, and as hardworking biologists — thank heaven — use the best tools available to protect whatever wild creatures remain, we approach that perhaps inevitable time when every predator-prey interaction, every live birth and every death in every species supported by the terrestrial biosphere, will be monitored and manipulated by the human hive mind.

Yet even were we to adopt Wilson’s half-earth proposal whole cloth, we would still face scenarios in which we will want to act against the dictates of nature. There are, for instance, good arguments to intervene on behalf of, say, bats whose populations have been decimated by the white-nose fungus, or great apes threatened with extinction as a consequence of viral infections. Where and why such interventions occur are more than merely scientific questions, and they arise not from the human desire to undo the damage we have done, but from the damage nature inflicts upon herself.

From the opposite angle, climate change will not respect any artificial borders we establish between the natural and the human worlds. It seems clear that we have a moral duty to limit the suffering nature experiences as a consequence of our action or inaction. We are in the midst of discovering the burden of our moral responsibility. Perhaps this discovery points to a need to expand the moral boundaries of the Anthropocene itself.

Rather than abandoning the idea of the Anthropocene, or continuing to apply it narrowly to the environment alone, maybe we should extend it to embrace other aspects of human agency that have expanded since the birth of the modern world. For what has happened, especially since the industrial revolution, is that areas previously beyond the reach of human action have come increasingly not so much under our control as within our power to influence, both for good and ill.

It’s not just nature that is now shaped by our choices, and has thereby become a matter of moral and political dispute, but poverty and hunger, along with disease. Some now interpret even death itself in moral and political terms. With his half-earth proposal Wilson wants to do away with a world where the state of the biosphere has become a matter of moral and political dispute. This dismissal of human political capacity and rights runs like a red thread through Wilson’s thinking, and ends, as I have pointed out elsewhere, in treating humans like animals in a game preserve.

Indeed, the American ideal of wilderness as an Edenic world unsullied by the hands of man, which Wilson wants to see cover half the earth, has had negative political consequences even in the richest of nations. The recent violent standoff in Oregon emerged out of the resentments of a West where the federal government owns nearly 50 percent of the land. Such resentments have made the West, which might otherwise lean quite differently culturally, a bulwark of the political right. As Robert Fletcher and Bram Büscher have argued in Aeon, Wilson’s prescription could result in grave injustices against native peoples in the developing world, in the same way that environmentalists’ demands for a wilderness stripped of humans resulted in the violent expulsion of Native Americans from large swaths of the American West.

Eden, it seems, refuses to be reestablished despite our best efforts and intentions. Welcome to the Fall.


Bruce Sterling urges us not to panic, just yet


My favorite part of the SXSW festival comes at the end. For three decades now the science-fiction writer Bruce Sterling has been giving some of the most insightful (and funny) speeches on the state of technology and society. In some ways this year’s closing remarks were no different, and in others they represented something very new.

What made this year’s speech different was that politics has taken such a weird turn, like something out of dystopian science-fiction, that Sterling, having mastered the craft, felt obliged to anchor our sense of reality. He did this, however, only after trying to come to grips with exactly why things had gotten so weird that the writers of The Simpsons seemed to be in possession of a crystal ball.

A read on events Sterling finds somewhat compelling is the one put forward by Clay Shirky, who claims that the age of social media has shattered something political science geeks call the Overton window. The Overton window is essentially the boundary of politically acceptable discourse as defined by political elites. Sterling points out that in the age of broadcast television that boundary was easy to control, but with the balkanization of media- first with cable TV and then the Internet (and I would add talk radio)- that border has eroded.

Here’s the conservative David French’s view of what Donald Trump himself has done to the Overton window:

Then along came Donald Trump. On key issues, he didn’t just move the Overton Window, he smashed it, scattered the shards, and rolled over them with a steamroller. On issues like immigration, national security, and even the manner of political debate itself, there’s no window left. Registration of Muslims? On the table. Bans on Muslims entering the country? On the table. Mass deportation? On the table. Walling off our southern border at Mexico’s expense? On the table. The current GOP front-runner is advocating policies that represent the mirror-image extremism to the Left’s race and identity-soaked politics.

All this certainly resembles what Moisés Naím has described as the end of power, where traditional institutions and elites have lost control over events largely as a result of a democratized communication environment. Or, as Sterling himself put it in his speech, the political parties have been:

“Balkanized by demagogues who brought in their own megaphones”.  

Sterling thinks it’s clear that the new technology and media landscape is a contributing factor in the current dystopian ambiance. The world has tended to take some very strange turns during the rise to dominance of new forms of media and new forms of economy, and maybe this is one of those moments where old media and tech are supplanted by the new in the form of the “Big Five”: Apple, Amazon, Alphabet (Google), Facebook and Microsoft. Sterling thinks the academic Shoshana Zuboff is onto something when she describes this new order as surveillance capitalism, an economic order based on turning the private lives of individuals into a saleable commodity.

Sterling is clearly worried about this, but is also certain that the illusion of techno-libertarianism behind something like Bitcoin isn’t the solution. No alternative technological order can solve our problems; but if none can, then perhaps technology itself isn’t the primary source of our problems in the first place.

Evidence that technology alone, or the coming into being of surveillance capitalism, isn’t to blame can be seen in the global nature of the current political crisis. The same, and indeed incomparably worse, problems exemplified by the rise of Trump in the US are apparent almost everywhere. Middle Eastern states have collapsed; an anti-immigrant, anti-globalization right is on the rise across Europe; Great Britain is threatening to exit the EU, further menacing that institution with dissolution. Venezuela is on the verge of collapse, nationalist tensions continue to roil Asia, and the global economy continues to suffer the injuries of the financial crisis even as economic policies become increasingly unorthodox. A much more environmentally and politically unstable world looms.

Yet Sterling points out that there’s one people that seems particularly calm through this whole affair, not generally panicked by the bizarre turn politics has taken in the US. The Italians see in Trump America’s version of their own Silvio Berlusconi. If politics in the US follows the Berlusconi model after a Trump victory (however unlikely), then though we may be in for a very seedy political period, it will not necessarily be a dangerous or chaotic one.

As for myself, I am not as sanguine as Sterling about the idea of a President Trump, given that he would have at his disposal the most powerful military and surveillance apparatus on the planet. Francis Fukuyama, who also pointed out the resemblance between Trump and Berlusconi, finds Trump’s flirtation with violence much more troubling.

Nevertheless, Sterling certainly is right when he points out that, in light of historical precedents- say, the 1960s- the level of political violence we have seen in 2016 is nothing to panic over. Nor is society in any way in a state of collapse- the lights are still on, food is still available, we are not entering some survivalist scenario- for the moment.

While events elsewhere may continue to take the world in a dystopian direction as a result of state and institutional collapse, the dystopia the US will most likely enter will be much less like the type found in science-fiction novels. It is one where the US is governed by a gentrified political elite that clings to its own power and the status quo while Americans remain distracted by the “glass lozenges” of their smartphones. Where mass surveillance isn’t scary a la Minority Report because it isn’t all that effective, or as Sterling puts it:

“Is there anybody with a drone over their head who is actually doing what the guys with the drones want?”

It’s a world where everything is failing but nothing has truly and completely failed, where we have plenty to be unhappy about but also no particular reason to panic.


A Box of a Trillion Souls


“The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.”                                                                          

Jaron Lanier


Stephen Wolfram may, or may not, have a justifiable reputation for intellectual egotism, but I like him anyway. I am pretty sure this is because, whenever I listen to the man speak, I most often walk away not so much with answers as with a whole new way to frame questions I had never seen before; but sometimes I’m just left mesmerized, or perhaps bewildered, by an image he’s managed to draw.

A while back, during a talk/demo at the SXSW festival, he managed to do this when he brought up the idea of “a box of a trillion souls”. He didn’t elaborate much, but left it there, after which I chewed on the metaphor for a few days and then returned to real life, which can be mesmerizing and bewildering enough.

A couple of days ago I finally came across an explanation of the idea in a speech by Wolfram over at John Brockman’s site, where Wolfram also opined on the near future of computation and the place of humanity in the universe. I’ll cover those thoughts first before I get to his box full of souls.

One of the things I like about Wolfram is that, uncommonly for a technologist, he tends to approach explanations historically. In his speech he lays out a sort of history of information that begins with information being conveyed genetically with the emergence of life, moves to the interplay between individual and environment with the development of more complex life, and flowers in spoken language with the appearance of humans.

Spoken language eventually gave rise to the written word, though it took almost all of human history for writing to become nearly as common as speaking. For most of that time reading and writing were monopolized by elites. A good deal of mathematics, as well, has moved from being utilized by an intellectual minority to being part of the furniture of the everyday world, though more advanced maths continue to be understandable by specialists alone.

The next stage in Wolfram’s history of information, the one we are living in, is the age of code. What distinguishes code from language is that it is “immediately executable”, by which I understand him to mean that code is not just some set of instructions but, when run, becomes the very thing those instructions describe.

Much like reading, writing and basic mathematics before the invention of printing and universal education, code is today largely understood by specialists only. Yet rather than endure for millennia, as was the case with the monopoly of writing by the clerisy, Wolfram sees the age of non-universal code to be ending almost as soon as it began.

Wolfram believes that specialized computer languages will soon give way to “natural language programming”. A fully developed form of natural language programming would be readable by both computers and human beings- far more people than currently know how to code- because code would be written in ordinary human languages like English or Chinese. He is not just making idle predictions, but has created a free program that lets you play around with his own version of natural language programming.

Wolfram makes some predictions as to what a world where natural language programming became ubiquitous- where just as many people could code as can now write- might look like. The gap between law and code would largely disappear. The vast majority of people, including schoolchildren, would have the ability to program computers to do interesting things, including perform original research. And as computers become embedded in objects, the environment itself will be open to the programming of everyone.

All this would seem very good for us humans, and would be even better given that Wolfram sees it as the prelude to the end of scarcity, including the scarcity of time that we now call death. But then comes the AI. Artificial intelligence will be both the necessary tool to explore the possibility space of the computational universe and the primary intelligence via which we interact with the entirety of the realm of human thought. Yet at some threshold AI might leave us with nothing to do, as it will have become the best and most efficient way to meet our goals.

What makes Wolfram nervous isn’t human extinction at the hands of super-intelligence so much as what becomes of us after scarcity and death have been eliminated and AI can achieve any goal- artistic ones included- better than us. This is Wolfram’s vision of the not-too-far-off future, which, given the competition from even current reality, isn’t nearly weird enough. It’s only when he starts speculating on where this whole thing is ultimately headed that anything as strange as Boltzmann brains makes its appearance; yet something like them does appear, and no one should be surprised, given his ideas about the nature of computation.

One of Wolfram’s most intriguing, and controversial, ideas is something he calls computational equivalence. With this idea he claims not only that computation is ubiquitous across nature, but that the line between intelligence and merely complicated behavior that grows out of ubiquitous natural computation is exceedingly difficult to draw.

For Wolfram the colloquialism that “the weather has a mind of its own” isn’t just a way of complaining that the rain has ruined your picnic but, in an almost panpsychic or pantheistic way, captures a deeper truth: that natural phenomena are the enactment of a sort of algorithm, which, he would claim, is why we can successfully model their behavior with other algorithms we call computer “simulations.” The word simulations needs quotes because, if I understand him, Wolfram is claiming that there would be no difference between a computer simulation of something at a certain level of description and the real thing.
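Wolfram’s standard exhibit for this claim is his Rule 30 cellular automaton, in which a one-line update rule generates behavior complex enough to look “natural.” A minimal sketch (my own Python, not Wolfram’s code):

```python
# Rule 30: each new cell is a fixed function of its three neighbors.
# Despite the rule's triviality, the output's center column is so
# disordered that Wolfram has used it as a source of randomness.

def rule30_step(cells):
    """Apply one step of Rule 30 to a row of 0/1 cells (edges held at 0)."""
    n = len(cells)
    new = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        new[i] = left ^ (cells[i] | right)  # Rule 30: left XOR (center OR right)
    return new

def run(width=31, steps=15):
    """Start from a single live cell in the middle and iterate."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

for r in run():
    print("".join("#" if c else "." for c in r))
```

Run it and the familiar chaotic triangle unfolds; the point, for Wolfram, is that nothing in the rule itself hints at the complexity of what it enacts.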

It’s this view of computation that leads Wolfram to his far future and his box of a trillion souls. For if there is no difference between a perfect simulation and reality, and if nothing will prevent us from creating perfect simulations at some point in the future, however far off, then it makes perfect sense to think that some digitized version of you, which as far as you are concerned will be you, could end up in a “box”, along with billions or trillions of similar digitized persons, including perhaps millions or more copies of you.

I’ve tried to figure out where exactly this conclusion, drawn from an idea I otherwise find attractive, namely computational equivalence, goes wrong, other than just in terms of my intuition or common sense. I think the problem might come down to the fact that while many complex phenomena in nature may have computer-like features, they are not universal Turing machines, i.e. general-purpose computers, but machines whose information processing is very limited and specific to the tasks established by their makeup.

Natural systems, including animals like ourselves, are more like the Tic-Tac-Toe machine built by the young Danny Hillis and described in his excellent primer on computers, The Pattern on the Stone, which is still insightful decades after its publication. Of course, animals such as ourselves can show vastly more types of behavior and exhibit a form of freedom of a totally different order than a game tree built out of circuit boards and lightbulbs, but, much like such a specialized machine, the way in which we think isn’t a form of generalized computation; it shows a definitive shape based on our evolutionary, cultural and personal history. In a way, Wolfram’s overgeneralization of computational equivalence negates what I find to be his equally or more important idea: the central importance of particular pasts in defining who we are as a species, peoples and individuals.
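Hillis’s machine makes the contrast concrete: its entire “mind” is one game’s tree, exhaustively wired. Here is a hypothetical sketch of the same idea in Python (my own illustration, not Hillis’s actual design): a player that searches tic-tac-toe completely and can do nothing else.

```python
# A specialized "machine": exhaustive minimax over the tic-tac-toe game
# tree. Like Hillis's lightbulb computer, it embodies one game's rules;
# it is computation, but nothing like general-purpose computation.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): +1 means a win for X, -1 for O, 0 a draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, v in enumerate(board) if v is None]
    if not moves:
        return 0, None  # board full: draw
    results = []
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None
        results.append((score, m))
    return (max if player == "X" else min)(results)
```

On an empty board the search visits the whole tree and reports the well-known result that perfect play is a draw. What matters for the argument is how unlike a mind this is: change one rule of the game and the “machine” knows nothing at all.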

Oddly enough, Wolfram falls into the exact same trap that the science-fiction writer Stanislaw Lem fell into after he had hit upon an equally intriguing, though in some ways quite opposite understanding of computation and information.

Lem believed that the whole system of computation and mathematics human beings use to describe the world was a kind of historical artifact, for which there must be much better alternatives buried in the ways that systems evolved over time process information. A key scientific task, he thought, would be to uncover this natural computation and find ways to use it as we now use math and computation.

Where this leads him is to precisely the same conclusion as Wolfram: the possibility of building an actual world in the form of a simulation. He imagines the future designers of just such simulated worlds:

“Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything” considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess.” (291-292)

Yet it seems to me that moving from the idea that things in the world- a storm, the structure of a seashell, the way particular types of problems are solved- are algorithmic to the conclusion that the entirety of the world could be hung together in one universal algorithm is a massive overgeneralization. Perhaps there is some sense in which the universe might be said to be weakly analogous, not to one program, but to a computer language (the laws of physics) upon which an infinite ensemble of other programs can be instantiated, a language structured so as to make some programs more likely to be run while deeming others impossible. Nevertheless, which programs actually get executed is subject to some degree of contingency- all that happens in the universe is not determined from initial conditions. Our choices actually count.

Still, such a view continues to treat the question of corporal structure as irrelevant, whereas structure itself may be primary.

The idea of the world as code, or of DNA as a sort of code, is incredibly attractive because it implies a kind of plasticity, which equals power. What gets lost, however, is something of the artifact-like nature of everything that is: the physical stuff that surrounds us, life, our cultural environment. All that is exists as the product of a unique history where every moment counts, and this history, as it were, is the anchor that determines what is real. Asserting that the world is or could be fully represented as a simulation either implies that such a simulation possesses the kinds of compression and abstraction, along with the ahistorical plasticity, that come with mathematics and code, or it doesn’t; and if it doesn’t, it’s difficult to say how anything like a person, let alone trillions of persons, or a universe, could actually, rather than merely symbolically, be contained in a box, even a beautiful one.

For the truly real can perhaps most often be identified by its refusal to be abstracted away or compressed and by its stubborn resistance to our desire to give it whatever shape we please.


How dark epistemology explains the rise of Donald Trump


We are living in what is likely the golden age of deception. It would be difficult enough were we merely threatened with drowning in what James Gleick has called the flood of information, or were we doomed to roam blind through the corridors of Borges’ library of Babel, but the problem is actually much worse than that. Our dilemma is that the very instruments that once promised liberation via universal access to all the world’s knowledge seem just as likely to be used to sow the seeds of conspiracy, to manipulate us and to obscure the path to the truth.

Unlike what passes for politicians these days, I won’t open with such a tirade only to walk away. Let me instead explain myself. You can trace the origins of our age of deception not only to the 2008 financial crisis but back much further, to its very root. Even before the 1950s, elites believed they had the economic problem, and therefore the political instability that came with it, permanently licked. The solution was some measure of state intervention into the workings of capitalism.

These interventions ranged on a spectrum from the complete seizure and control of the economy by the state in communist countries, to regulation, social welfare and redistributive taxation in even the most solidly capitalist economies such as the United States. Here both the pro-business and pro-labor parties, despite the initial resistance of the former, ended up accepting the basic parameters of the welfare state. Remember, it was the Nixon administration that both created the EPA and flirted with the idea of a basic income. By the early 1980s, with the rise of Reagan and Thatcher, the hope that politics had become a realm of permanent consensus- Friedrich Engels’ prophesied “administration of things”- collapsed in the face of inflation, economic stagnation and racial tensions.

The ideological groundwork for this neoliberal revolution had, however, been laid as far back as 1945, when state- and expert-directed economics was at its height. It was in that year that the Austrian economist Friedrich Hayek, in a remarkable essay entitled The Use of Knowledge in Society, pointed out that no central planner or director could ever be as wise as the collective perception and decision-making of economic actors distributed across an entire economy.

At the risk of vastly over simplifying his argument, what Hayek was in essence pointing out was that markets provide an unrivaled form of continuous and distributed feedback. The “five year plans” of state run economies may or may not have been able to meet their production targets, but only the ultimate register of price can tell you whether any particular level of production is justified or not.

A spike in price is the consequence of an unanticipated demand, and will send producers scrambling to meet it the moment it is encountered. The hubris behind rational planning is that it claims to be able to see through the uncertainty that lies at the heart of any economy, and that experts at 10,000 feet are somehow more knowledgeable than the people on the ground, who exist not in some abstract version of an economy built out of equations, but in the real thing.
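The feedback Hayek describes can be caricatured in a few lines of code. In this toy sketch (my own illustration, with made-up linear supply and demand curves, nothing from Hayek), no one computes the equilibrium price; it is “discovered” purely by reacting to shortages and gluts:

```python
# Toy price discovery: excess demand pushes the price up, excess supply
# pushes it down. The clearing price emerges from feedback alone.

def demand(price):
    """Quantity buyers want at a given price (assumed linear)."""
    return max(0.0, 100.0 - 2.0 * price)

def supply(price):
    """Quantity producers offer at a given price (assumed linear)."""
    return 3.0 * price

def discover_price(price=1.0, rate=0.05, steps=2000):
    """Adjust the price toward the market-clearing level by feedback."""
    for _ in range(steps):
        excess = demand(price) - supply(price)  # shortage if positive, glut if negative
        price = max(0.0, price + rate * excess)
    return price
```

With these curves the process settles at a price of 20, where demand and supply balance at 60 units each; the planner’s-eye calculation (solving 100 - 2p = 3p) is never performed by anyone.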

It was perhaps one of the first versions of the idea of the wisdom of crowds, and an argument for what we now understand as the advantages of evolutionary approaches over deliberate design. It was also perhaps one of the first arguments that what lies at the very core of an economy was not so much the exchange of goods as the exchange of information.

The problem with Hayek’s understanding of economics and information wasn’t that it failed to capture the inadequacies of state-run economies, at least at the level of information technology they possessed when he was writing (a distinction I think important and hope to return to in the future), but that it was true for only part of the economy- the part dealing largely with the production and distribution of goods, and not with the consumer economy that would take center stage after the Second World War.

Hayek’s idea that markets were better ways of conveying information than any kind of centralized direction worked well in a world of scarcity, where the problem was accurately gauging supply against demand for a given resource; yet it missed that the new era would be one of engineered scarcity, where the key to economic survival was to convince consumers they had a “need” they had not previously identified. Or, as John Kenneth Galbraith put it in his 1958 book The Affluent Society, we had:

… managed to transfer the sense of urgency in meeting consumer need that was once felt in a world where more production meant more food for the hungry, more clothing for the cold, and more houses for the homeless to a world where increased output satisfies the craving for more elegant automobiles, more exotic food, more elaborate entertainment- indeed for the entire modern range of sensuous, edifying, and lethal desires. (114-115).

Yet rather than seeing the economic problems of the 1970s through this lens- that our difficulties were as much a matter of our expectations about what economic growth should look like, and a measure of our success in having rid ourselves (in advanced countries) of the kinds of life-threatening scarcity that had stalked all prior human generations- the US and Britain set off on the path prescribed by conservative economists such as Hayek and began to dismantle the hybrid market/state society that had been constructed after the Great Depression.

It was this revolt against state-directed (or even just state-restrained) capitalism that became the neoliberal gospel which reigned almost everywhere after the fall of the Soviet Union, and to which the Clinton administration converted the Democratic Party. The whole edifice came crashing down in 2008, since when we have become confused enough that long-dormant demons have come home to roost.

At least since the crisis, economists have taken a renewed interest not only in the irrational elements of human economic behavior, but in how that irrationality has itself become a sort of saleable commodity. A good version of this is Robert J. Shiller and George Akerlof’s recent Phishing for Phools: The Economics of Manipulation and Deception. In their short book the authors examine the myriad ways all of us are “phished”- probed by con artists looking for “phools” to take advantage of and manipulate.

The techniques have become increasingly sophisticated as psychologists have gotten a clearer handle on the typology of irrationality otherwise known as human nature. Gains in knowledge always come with tradeoffs:

“But theory of mind also has its downside. It also means we can figure out how to lure people into doing things that are in our interest, but not in theirs. As a result, many new ideas are not just technological. They are not ways to deliver good-for-you/good-for-me’s. They are, instead, new uses of the theory of mind, regarding how to deliver good-for-me/bad-for-you’s.” (98)

This, it seems, would be the very opposite of the world dominated by non-zero-sum games that was heralded in the 1990s. Rather, it is the construction of an entire society around the logic of the casino, where psychological knowledge is turned into a tool for pushing consumers into choices contrary to their own long-term interest.

This type of manipulation, of course, has been the basis of our economies for quite some time. What is different is the level of sophistication and resources being thrown at the problem of how to sustain human consumption in a world drowning in stuff. The solution has been to sell things that simply disappear after use, like experiences, which are made to take on the qualities of the ultimate version of such consumables, namely addictive drugs.

It might seem strange, but the Internet hasn’t made achieving safety from this manipulation any easier. Part of the reason is something Shiller and Akerlof do not fully discuss: much of the information resources used in our economies serve not so much to sell things consumers would be better off avoiding, let alone to convey actually useful information, but to distort the truth to the advantage of those doing the distorting.

This is a phenomenon for which Robert Proctor has coined the term agnotology. It is essentially a form of dark epistemology whose knowledge consists in how to prevent others from obtaining the knowledge you wish to hide.

We live in an age too cultured for barbarisms such as book burning or direct censorship. Instead we have discovered alternative means of preventing the spread of information detrimental to our interests. The tobacco companies pioneered this: outright denials of the health risks of smoking were replaced with the deliberate manufacture of doubt. Companies whose business models are threatened by any concerted effort to address climate change have adopted similar methods.

Warfare itself, where the power of deception and disinformation was always better understood, has woken up to its potential in the digital age: witness the information war still being waged by Russia in eastern Ukraine.

All this, I think, partly explains the strange rise of Trump. Ultimately, neoliberal policies failed to sustain rising living standards for the working and middle class, with incomes stagnant since the 1970s. Perhaps this should never have been the goal in the first place.

At the same time we live in a media environment in which no one can be assumed to be telling the truth, in which everything is a sales pitch of one sort or another, and in which no institution’s narrative fails to be spun by its opponents into a conspiracy repackaged for maximum emotional effect. In an information ecosystem where trusted filters have failed, or are deemed irredeemably biased, and in which we are saturated by a flood of data so large it can never be processed, those who inspire the strongest emotions, even the emotion of revulsion, garner the only sustained attention. In such an atmosphere the fact that Trump is a deliberate showman whose pretense to authenticity is not that he is committed to core values, but that he is open about the very reality of his manipulations makes a disturbing kind of sense.

An age of dark epistemology will be ruled by those who can tap into the hidden parts of our nature, including the worst ones, for their own benefit, and will prey off the fact we no longer know what the truth is nor how we could find it even if we still believed in its existence. Donald Trump is the perfect character for it.


The Future of Money is Liquid Robots


Klimt Midas

Over the last several weeks global financial markets have experienced some really stomach-churning volatility, and though this seems to have stabilized or even reversed for the moment as players deem assets oversold, the turmoil has revealed in a much clearer way what many have suspected all along: namely, that the idea that the Federal Reserve and other central banks, by pushing banks to lend money, could heal the wounds caused by the 2008 financial crisis was badly mistaken.

Central banks were perhaps always the wrong knights in shining armor to pin our hopes on, for at precisely the moment they were called upon to fill a vacuum left by a failed and impotent political system, the very instrument under their control, that is money, was changing beyond all recognition, and had been absorbed into the revolutions (ongoing if soon to slow) in communications and artificial intelligence.

The money of the near future will likely be very different from anything in human history. Already we are witnessing wealth that has become as formless as electrons, and the currencies of sovereign states matter less to an individual’s fortune than access to, and the valuation of, artificial intelligence able to surf the liquidity of data and its chaotic churn.

As a reminder, authorities at the onset of the 2008 crisis were facing a Great Depression level collapse in the demand for goods and services brought about by the bursting of the credit bubble. To stem the crisis authorities largely surrendered to the demands for bailouts by the “masters of the universe” who had become their most powerful base of support. Yet for political and ideological reasons politicians found themselves unwilling or unable to provide similar levels of support for the lost spending power of average consumers or to address the crisis of unemployment fiscally; that is, politicians refused to embark on deliberate, sufficient government spending on infrastructure and the like to fill the role vacated by the private sector.

The response authorities hit upon instead, and that would spread from the United States to all of the world’s major economies, was to suppress interest rates in order to encourage lending. Part of this was a deliberate effort to re-inflate asset prices that had collapsed during the crisis. It was hoped that with stock markets restored to their highs the so-called wealth effect would encourage consumers to return to emptying their pocketbooks, restoring the economy to a state of normalcy.

It’s going on eight years since the onset of the financial crisis, and though the US economy, in terms of the unemployment rate and GDP, has recovered somewhat from its lows, the recovery has been slow and achieved only under the most unusual of financial conditions: money lent out at an interest rate near zero. Even the small recent move by the Federal Reserve away from zero has been enough to throw the rest of the world’s financial markets into a tailspin.

While the US has taken a small step away from zero interest rates, a country like Japan has done the opposite, and the unprecedented: it has set rates below zero. To grasp how bizarre this is, banks in Japan now charge savers to hold their money. Prominent economists have argued that the US would benefit from negative rates as well, and the Fed has not ruled out such a possibility should the fragile American recovery stall.

There are plenty of reasons why the kinds of growth that might have been expected from lending money out for free have failed to materialize. One reason I haven’t heard much discussed is that the world’s central banks are acting under a set of assumptions about what money is that no longer holds: over the last few decades the very nature of money has fundamentally changed, in ways that make the zero or negative interest rates set by central banks of decreasing relevance.

That change started quite some time ago with the move away from money backed by gold to fiat currencies. Gold bugs who long to return to the era of Bretton Woods understand the current crisis mostly in terms of the distortions caused by countries having abandoned, back in the 1970s, the link between money and anything “real”, that is, precious metals and especially gold. Indeed it was at this time that money began its transition from a mere means of exchange to a form of pure information.

That information is a form of bet. The value of the dollars, euros, yen or yuan in your pocket is a wager by those who trade in such things on the future economic prospects and fiscal responsibility of the power that issued the currency. That is, nation-states no longer really control the value of their currency; the money traders who operate the largest market on the planet, which in reality is nothing but bits representing the world’s currencies, are the ones truly running the show.

We had to wait for a more recent period for this move to money in the form of bits to change the existential character of money itself. Both the greatest virtue of money in the form of coins or cash and its greatest danger is its lack of memory. It is a virtue in the sense that money is blind to tribal ties and thus allows individuals to free themselves from dependence upon the narrow circle of those whom they personally know. It is a danger as a consequence of this same amnesia, for a dollar doesn’t care how it was made, and human beings, being the types of creatures they are, will purchase all sorts of horrific things.

At first blush the libertarian anarchism behind a digital currency like Bitcoin would seem to promise to deepen this ability of money to forget. However, governments along with major financial institutions are interested in Bitcoin-like currencies precisely because they promise to rob cash of this natural amnesia and serve as the ultimate weapon against the economic underworld. That is, rather than using Bitcoin-like technologies to hide transactions, they could be used to ensure that every transaction was linked and therefore traceable to the actual parties behind it.
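As a toy illustration of how such linked transactions strip money of its amnesia, here is a minimal sketch in Python. This is my own simplification, not the actual Bitcoin protocol: each transaction simply records the hash of the transaction that funded it, so a coin’s history can be walked back to its origin.

```python
import hashlib
import json

def tx_hash(tx: dict) -> str:
    """Deterministic hash of a transaction's contents."""
    return hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()

def make_tx(sender, receiver, amount, prev_tx=None):
    return {
        "sender": sender,
        "receiver": receiver,
        "amount": amount,
        # Link to the transaction that supplied these funds, if any.
        "prev": tx_hash(prev_tx) if prev_tx else None,
    }

def trace(tx, ledger):
    """Walk a transaction's funding chain back to its origin."""
    chain = [tx]
    while tx["prev"] is not None:
        tx = ledger[tx["prev"]]
        chain.append(tx)
    return chain

# Mint a coin, then pass it along twice.
t1 = make_tx("mint", "alice", 10)
t2 = make_tx("alice", "bob", 10, prev_tx=t1)
t3 = make_tx("bob", "carol", 10, prev_tx=t2)
ledger = {tx_hash(t): t for t in (t1, t2, t3)}

# Carol's coin is traceable all the way back to the mint.
history = trace(t3, ledger)
senders = [tx["sender"] for tx in history]
# senders == ["bob", "alice", "mint"]
```

In a real system the chain of ownership is also secured by digital signatures and distributed consensus; the point here is only that a hash link is enough to make every unit of currency traceable end to end.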

Though some economists fear that the current regime of loose money and the financial repression of savers is driving people away from traditional forms of money toward digital alternatives, others see digital currency as the ultimate tool: something that would allow central banks to more easily do what they have so far proven spectacularly incapable of doing, namely to spur the spending rather than the hoarding of cash.

Even with interest rates set to zero or below, a person can at least avoid losing money by putting it in a safe. Digital currency, however, could be made to disappear if by a certain date it wasn’t spent or invested. Talk about power! Which is why digital currency will not long remain in the hands of libertarians and anarchists.
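A minimal sketch of what such “use it or lose it” money might look like; the design here is purely illustrative, not any real central bank proposal. A balance carries an expiry date, and whatever hasn’t been spent by then simply vanishes.

```python
from dataclasses import dataclass

@dataclass
class ExpiringBalance:
    amount: float
    expires_on: int  # e.g. a day number set by the issuer

    def spendable(self, today: int) -> float:
        # After expiry the unspent balance is simply gone.
        return self.amount if today <= self.expires_on else 0.0

    def spend(self, amount: float, today: int) -> None:
        if amount > self.spendable(today):
            raise ValueError("insufficient (or expired) funds")
        self.amount -= amount

wallet = ExpiringBalance(amount=100.0, expires_on=30)
wallet.spend(40.0, today=10)       # fine: 60.0 remains
print(wallet.spendable(today=25))  # 60.0
print(wallet.spendable(today=31))  # 0.0 -- the money has "disappeared"
```

An issuer could tune the expiry window the way central banks now tune interest rates, making the hoarding of cash not merely unrewarding but impossible.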

The loss of the egalitarian characteristics of cash will likely further entrench already soaring inequality. The wealth of many of us is already leveraged by credit ratings, preferred customer privileges and the like, whereas others among us are charged a premium for our consumption in the form of higher interest rates, rent instead of ownership and the need to leverage income through government assistance and coupons. In the future all these features are likely to be woven into our digital currency itself. A dollar in my pocket will mean a very different thing from a dollar in yours or anyone else’s.

With the increased use of biometric technologies money itself might disappear into the person, becoming, as David Birch has suggested, synonymous with identity itself. The value of such personalized forms of currency, which is really just a measure of individual power, will be in a state of constant flux. With everyone linked to some form of artificial intelligence, prices will be subject to constant, largely invisible negotiation between bots.

There will be a huge inequality in the quality and capability of these bots, and while those of the wealthy or used by institutions will roam the world for short lived investments and unleash universal volatility, those of the poor will shop for the best deals at box stores and vainly defend their owners against the manipulation of ad bots who prod them into self-destructive consumption.

Depending on how well the bots we use for ourselves do against the bots used to cajole us into spending, the age of currency as liquid robot money could be extremely deflationary, but it would at the same time be more efficient and thus better for the planet.

One could imagine a much different use for artificial intelligence, something like the AIs used to run economies in Iain Banks’ novels. It doesn’t look likely. Rather, to quote Jack Weatherford from his excellent The History of Money, which still holds up nearly two decades after it was written:

In the global economy that is still emerging, the power of money and the institutions built on it will supersede that of any nation, combination of nations, or international organization now in existence. Propelled and protected by the power of electronic technology, a new global elite is emerging- an elite without loyalty to any particular country. But history has already shown that the people who make monetary revolutions are not always the ones who benefit from them in the end. The current electronic revolution in money promises to increase even more the role of money in our public and private lives, surpassing kinship, religion, occupation and citizenship as the defining element of social life. We stand now at the dawn of the Age of Money. (268)


The one percent discovers transhumanism: Davos 2016

Land of Oz

The World Economic Forum in Davos, Switzerland just wrapped up its annual gathering. It isn’t hard to make fun of this yearly coming together of the global economic and cultural elites who rule the world, or at least think they do. A comment by one of the journalists covering the event summed up how even those who really do in some sense have the fate of the world in their hands are as blinded by glitz as the rest of us: rather than wanting to discuss policy, the question he had most often been asked was whether he had seen Kevin Spacey or Bono. Nevertheless, 2016’s WEF might go down in history as one of the most important, for it was the year when the world’s rich and powerful acknowledged that we had entered an age of transhumanist revolution.

The theme of this year’s WEF was what Klaus Schwab calls the Fourth Industrial Revolution, a period of deeply transformative change which Schwab believes we are merely at the beginning of. The three revolutions which preceded the current one were: the first industrial revolution, which occurred between 1760 and 1840 and brought us the steam engine and railroads; the second industrial revolution, in the late 19th and early 20th centuries, which brought us mass production and electricity; and the third, computer or digital, revolution, which began in the 1960s and brought us mainframes, personal computers, the Internet, and mobile technologies.

The Fourth Industrial Revolution, whose beginning all of us are lucky enough to see, includes artificial intelligence and machine learning, the “Internet of things” and its ubiquitous sensors, along with big data. In addition to these technologies that grow directly out of Moore’s Law, the Fourth Industrial Revolution includes rapid advances in the biological sciences that portend everything from enormous gains in human longevity to “designer babies”. Here we find our rapidly increasing knowledge of the human brain, the new neuroscience, which will likely not only upend our treatment of mental and neurodegenerative diseases such as Alzheimer’s but also transform areas from criminal justice to advertising.

If you have had any relationship to, or even knowledge of, transhumanism over the past generation or so then all of this should be very familiar to you. Yet to the extent that the kinds of people who attend or follow Davos have an outsized impact on the shape of our world, how they understand these transhumanist issues, and how they come to feel such issues should be responded to, might be a better indication of the near-term future of transhumanism than anything that has occurred on the level of us poor plebs.

So what was the 1 percent’s take on the fourth industrial revolution? Below is a rundown of some of the most interesting sessions.

One session titled “The Transformation of Tomorrow” managed to capture what I think are some of the contradictions of those who in some respects feel themselves responsible for the governance of the world. Two of the panel members were Sheryl Sandberg, COO of Facebook, and the much lesser-known president of Rwanda, Paul Kagame. That pairing itself is kind of mind-bending. Kagame has been a darling of technocrats for his successful governance of Rwanda. He is also a repressive autocrat who has been accused of multiple human rights abuses and who, to the dismay of the Obama administration, managed to secure through a referendum his rule of Rwanda into the 2030s.

Kagame did not have much to say other than that Rwanda would be a friendly place for Western countries wishing to export the Fourth Industrial Revolution. For her part Sandberg was faced with questions that have emerged as it has become increasingly clear that social media is a tool for both good and ill, with groups like Daesh proving masterful users of the connectivity brought by the democratization of media. Not only that, but the kinds of filter bubbles offered by this media landscape have often been found to be inimical to public discourse and a shared search for Truth.

Silicon Valley has begun to adopt the role of actively policing its networks for political speech it wishes to counter. Sandberg did not discuss this, focusing instead on attempts to understand how discourses such as Daesh’s manage to capture the imagination of some, and on short-circuiting such narratives of hate through things such as “like attacks”, where the negativity of websites is essentially flooded by crowds posting messages of love.

It’s a nice thought, but as the very presence of Kagame on the same stage with Sandberg while she was making these comments makes clear: it seems blind to the underlying political issues and portends a quite frightening potential for a new and democratically unaccountable form of power. Here elites would use their control over technology and media platforms to enforce their own version of the world in a time when both democratic politics and international relations are failing.

Another panel dealt with “The State of Artificial Intelligence.” The takeaway here was that no one took Ray Kurzweil’s 2029-2045 timeline for human-level and greater AI seriously, but everyone was convinced that the disruption of the labor force by AI was a real prospect not being taken seriously enough by policy makers.

In a related session titled “A World Without Work” the panelists were largely in agreement that the adoption of AI would further push the labor force in the direction of bifurcation and would thus tend, absent public policy pushing in a contrary direction, to result in increasing inequality.

In the near-term future AI seems poised to take middle-income jobs, anything that deals with the routine processing of information. Where AI will struggle to make inroads is in low-skilled, low-paying jobs that require a high level of mobility, jobs like cleaners and nurses. Given how reliant Western countries have become on immigrant labor for these tasks, we might be looking at the re-emergence of an openly racist politics as a white middle class finds itself crushed between cheap immigrant labor and super-efficient robots. According to those on the panel, AI will also continue to struggle with highly creative and entrepreneurial tasks, which means those at the top of the pyramid aren’t going anywhere.

At some point the only solution to technological unemployment short of smashing the machines or dismantling capitalism might be the adoption of a universal basic income, which again all panelists seemed to support. Though as one member of the panel, Erik Brynjolfsson, pointed out, such a publicly guaranteed income would provide us with only one of Voltaire’s three purposes of work, which is to save us from “the great evils of boredom, vice and need.” Brynjolfsson also wisely observed that the question that confronts us over the next generation is whether we want to protect the past from the future or the future from the past.

The discussions at Davos also covered the topic of increasing longevity. The conclusions drawn by the panel “What if you are still alive in 2100?” were that aging itself is highly malleable, but that there was no one gene or set of genes that would prove to be a magic bullet in slowing or stopping the aging clock. Nevertheless, there is no reason human beings shouldn’t be able to live past the 120 year limit that has so far been a ceiling on human longevity.

Yet even the great success of increased longevity itself poses problems. Even extending current longevity estimates of those middle-aged today merely to 85 would bankrupt state pension systems as they are now structured. Here we will probably need changes such as workplace daycare for the elderly and cities re-engineered for the frail and old.

By 2050 upwards of 130 million people may suffer from dementia. Perhaps surprisingly, technology rather than pharmaceuticals is proving to be the first line of defense in dealing with dementia, by allowing those suffering from it to outsource their minds.

Many of the vulnerable and at need elderly will live in some of the poorest countries (on a per capita basis at least) on earth: China, India and Nigeria. Countries will need to significantly bolster and sometimes even build social security systems to support a tidal wave of the old.

Living to “even” 150, the panelists concluded, would have a revolutionary effect on human social life. Perhaps it will lead to the abandonment of work-life balance for women (and, one should hope, men as well) so that parents can focus their time on their children during the younger years. Extended longevity would make the idea of choosing a permanent career as early as one’s 20s untenable, resulting in a much longer period of career exploration, and may make lifelong monogamy untenable as well.

Lastly, there was a revealing panel on neuroscience and the law entitled “What if your brain confesses?” Panelists there argued that neuroscience is being increasingly used, and misused, by the criminal defense bar. Only in very few cases, such as those that result from tumors, can we draw a clear line between underlying neurological structure and specific behavior.

We can currently “read minds” in limited domains, but our methods lack the precision and depth that would be necessary for standard questions of guilt and innocence. Eventually we should get there, though getting information in, as in Matrix-style kung-fu uploading, will prove much harder than getting it out. We’re also getting much better at decoding thoughts from behavior, which opens dark opportunities for marketing and other manipulation. Yet we could also use this more granular knowledge of human psychology to structure judicial procedures to be much freer from human cognitive biases.

The fact that elites have begun to seriously discuss these issues is extremely important, but letting them take ownership of the response to these transformations would surely be a mistake. For just like any elite they are largely blinded to opportunities for the new by their desire to preserve the status quo, despite how much the revolutionary changes we face open up opportunities for a world much different from, and better than, our own.


Religion and violence revisited


A few weeks back I did a post on religion and violence, the gist of which was that it’s far too simplistic to connect religiosity to violence without paying much closer attention to social context. Religious violence, I argued, should be seen as a response to some real or perceived mistreatment. In addition I suggested that perhaps what appears to make believers in monotheistic faiths particularly prone to violence is their insistence that they alone possess religious truth.

That post got some push back, particularly here, and especially over the issue of whether references in the Koran especially made Muslims more prone to violence than other religious groups. As I tried to make clear in the discussion, it makes much less sense to try to understand the most relevant form of religious violence today, that is, Islamist violence, through reference to the Koran than it does to grapple with the actual historical and social context in which such violence occurs.

For the most part, I still think that. Where my views have become somewhat more nuanced is in the role of belonging, for at least if some recent work in moral psychology is correct, religious violence is just the flipside of the moral community all successful religions help create.

It seems that while I wasn’t paying attention the case being made by some of the most vocal New Atheists was itself changing. Back in October of last year, Sam Harris, who could be accused of having made the most inflammatory statements about Islam, published a dialogue with the former Islamist radical Maajid Nawaz. Having himself been jailed and tortured for his political views in Egypt, Nawaz has developed a four-part model of the causes of religious extremism:

  1. grievance narrative (real or perceived)
  2. identity crisis
  3. a charismatic recruiter
  4. ideological dogma

It’s not this model, however, through which Nawaz has managed to convert Harris to a less nihilistic interpretation of Islam, but the implications of Islam’s long history of pluralism.

Sunni Islam especially, having no “pope” and no body of scholars in charge of defining the faith, is essentially defined by what the majority of Muslims understand it to be. The task then, Nawaz argues, is to shrink the significant minority of Muslims who believe either that violence is a legitimate way to address political grievances, or that some group of believers has the right and duty to enforce their particular interpretation of Islam on not just other Muslims but non-believers as well.

Yet despite the power Nawaz garners from his personal experience in explicating religious violence, the one element he seems to miss is the one deemed most important by recent psychosocial research on the topic: communal participation. According to such research, religious violence, when it does occur, has much less to do with the messages found in scriptures, actual belief, or even oppression (though the last I still think the most important of the three) than with the hive-like nature of the groups that human beings, almost alone outside of the eusocial insects, are able to create.

This at least is the case made in two recent works by leading philosophers in the field of moral psychology: Paul Bloom’s Just Babies: The Origins of Good and Evil and Jonathan Haidt’s The Righteous Mind: Why Good People Are Divided by Politics and Religion. The last decade or so has seen a flood of popular and quite good books by moral philosophers and psychologists using evolutionary theory and clever experiments to gain insight into human thinking about moral topics. Bloom and Haidt are two of the best, but there’s also Joshua Greene and, although covering quite different topics, Daniel Kahneman, both of whom I’ve written about before.

Perhaps I’ll write proper reviews of Bloom and Haidt’s books some other time, for now I just want to focus on one question: what does the latest work in moral psychology tell us about the relationship between religion and violence?

First off: it’s complicated. Bloom, in his discussion of the issue in Just Babies, points out how difficult it is to disaggregate anything from religion, for, as he phrases it, “religion is everywhere”. Bloom also points out that though research on how religiosity affects behavior isn’t as informative as we might like, it does show a strong correlation between religion and charitable giving. It seems that religious individuals donate a higher percentage of their income to charity than secular persons, and even give more than non-religious individuals to secular charities, which is kind of mind-blowing.

One might think that all this giving by religious people was somehow the result of adherence or belief. That is, one could reasonably assume that the reason religious individuals are so damned charitable is that their scriptures command it, and/or that giving is the product of belief in an afterlife of rewards and punishments for how one acted in this life. Charity would here be seen as a kind of non-temporal investment.

According to Bloom, however, that is not what the studies show. Bloom quotes the famous sociologist Robert Putnam to make his point: it is neither adherence nor belief that makes religious people more charitable but membership in a group:

“…. the statistics suggest that even an atheist who happened to become involved in the social life of the congregation (perhaps through a spouse) is much more likely to volunteer in a soup kitchen than the most fervent believer who prays alone. It is religious belongingness that matters for neighborliness, not religious believing.” (203)

This good side of religious belonging is mirrored by religion’s darker side where, again, it seems that it is a matter of participation in a religious community that predicts support for violence in the supposed defense of that community rather than the measure of the person’s depth of belief.

Jonathan Haidt characterizes religions as “moral exoskeletons” or “moral matrixes.” For their members they establish communities of trust, but their very success in binding groups together means they can also lead to moralistic killings against what is believed to be betrayal of the group, or result in martyrdom in the name of the community’s defense.

Anything that binds people together into a moral matrix that glorifies the in-group while at the same time demonizing another group can lead to moralistic killing, and many religions are well suited to that task. Religion is therefore often an accessory to atrocity, rather than the driving force of atrocity. (310)

The problem is that the very same features that allow religion, as in the case of abolitionism, to work in the defense (even the violent defense) of minorities also allow religious groups to act violently against minorities, whether Jews in the case of Christians until very recently in historical times, or, in the case of groups like ISIS today, very vulnerable religious minorities such as the Yazidis and social minorities such as women and homosexuals.

The dilemma is that the atrophy of communal ties undeniably fosters the autonomy of individuals, yet we have not reached the point where we can be certain such a society lacking strong communal ties is sustainable over the long term. As Haidt puts it:

Societies that forgo the exoskeleton of religion should reflect carefully on what will happen to them over several generations. We don’t really know, because the first atheistic societies have only emerged in Europe in the last few decades. They are the least efficient societies ever known at turning resources (of which they have a lot) into offspring (of which they have few). (311)

At one point the smart money was predicting religious belief would be largely irrelevant to humanity by the end of the 20th century. If religion has appeared an especially powerful force during that period and into our own, such an appearance is partly an illusion caused by the incredible atrophy of the formerly powerful secular collective bonds that once seemed to have replaced religious ties: nationalism, both ethnic and civic, socialism, communism, or even the hold of political parties.

What we’ve gotten instead is a plethora of micro-affective communities competing for our attention and commitment, some of which target the very darkest aspects of human nature: our bloodlust, fear, or anger. Sadly, the pluralism and diversity so beloved by liberals (myself included) fail to offer solutions to the problems posed by either fundamentalist-inspired violence or the right’s feeding off of our fear of it; namely, how to sustain a society over the longue durée absent some shared definition of the good, when our communications architecture seems built to produce sharp divisions and rival truths, and when the violence of extreme minorities continually results in calls either to suppress minorities at home or to protect vulnerable minorities in regions that have yet to discover the West’s pluralism.