Why liberals might kill free speech

We’ve got a huge problem on our hands, one the 2016 election, along with Brexit, did not so much create as fully expose. What we’ve witnessed is a kind of short-circuit between the three pillars that have defined our particular form of democratic liberalism over the last century. Democratic liberalism over the 20th and into the 21st century consisted of a kind of balance between the public at large, the mass media, and policy elites, with the link between the three being the political representatives of the major parties. As idealized by public philosophers such as Walter Lippmann, the role of politicians was to choose among the policy options presented by experts and “sell” those policies to the public, using the tools of mass communication to ensure their legitimacy.

The fact that such a balance became the ideal in the first place, let alone its inevitable failure, can only be grasped fully when one becomes familiar with its history.

Non-print mass media only became available during the course of the First World War, and it was here that the potential of media such as film, radio, posters, and billboards to create a truly emotionally and ideologically unified public became apparent- although the US had come close to this discovery a little over a decade earlier in the form of mass circulation newspapers, which were instrumental in getting the American public behind the Spanish-American War, and whose excesses themselves gave rise to real standards of objectivity in journalism.

During WWI it was the Americans and British who mastered the art of war propaganda, transforming their enemies the Germans into savage “huns” and engendering a will to sacrifice for what (at least for the Americans) was a distant and abstract cause. Lippmann himself served on the Creel Committee, which launched this then new form of political propaganda. Hitler would write enviously of British and American propaganda in Mein Kampf, and both the Nazis and the Soviets would use the new media, and the proof of concept offered by the Allied powers in the war, to form the basis of the totalitarian state. Those systems ultimately failed, but their rise and attraction reveal the extent to which democracy, less than a century from our own time, was seen to be failing. Not just the victory of the Soviets in the war, but the way they were able to rapidly transform the Russian Empire from an agrarian backwater into an industrial and scientific powerhouse, seemed to show that the future belonged to the system that most fully empowered its technocrats.

The Great Depression and Second World War would prove to be the golden age of experts in the West as well. In the US it was technocrats who crafted the response to the economic crisis, who managed the American economy during the war, and who were responsible for technological breakthroughs such as atomic weapons, rockets capable of reaching space, and the first computers. It was policy experts who crafted novel responses to unprecedented political events such as the Marshall Plan and Containment.

Where the Western and Soviet views of the role of experts differed had less to do with experts’ prominence and more to do with their plurality, or lack of it. Whereas in the Soviet Union all experts were united under the umbrella of the Party, Western countries left the plurality of experts intact: the bureaucrats who ran big business were distinct from the bureaucrats who ran government agencies, and neither had any clear relationship to the parties that remained the source of mass political mobilization, while the press remained free (if not free of elite assumptions and pressures) to forge the public’s interpretation of events as it liked.

Lippmann had hoped the revolutionary medium of his time- television- would finally provide a way for the technocrats he thought necessary to rule a society that had become too complex for the form of representative democracy that had preceded it, allowing experts to communicate directly with the public and in so doing forge consensus for elite policies. What dashed his hopes was a rigged game show.

The quiz show scandal that broke in the 1950’s (it was made into an excellent movie in the 90’s) proved to Lippmann that American-style television, with its commercial pressures, could not be the medium he had hoped for. In his essay Television: whose creature, whose servant? Lippmann called for the creation of an American version of the BBC. (PBS would be created in 1970, as would NPR.) Indeed, the scandal did drive the three major US television networks- especially CBS- towards the coverage of serious news and critical reporting. Such reporting helped erode political support for the Vietnam War, though not, as is often believed, by turning public opinion against the war, but, as Michael Mandelbaum pointed out back in the 1980’s in his essay Vietnam: The Television War, by helping to mobilize such a vast number of opponents as to polarize the American public in a way that made the post-war consensus unsustainable. Vietnam was the first large scale failure of the technocrats- it would not be their last.

From the 1970’s until today this polarization was mined by a new entry on the media landscape- cable news- starting with Ted Turner and CNN. As Tim Wu lays out in his book The Master Switch, the rise of cable was in part enabled by Nixon’s mistrust of what was then “mainstream news” (Nixon helped deregulate cable). This rise (more accurately, return) of partisan media occurred at the same time Noam Chomsky (owl of Minerva-like) was arguing in his book Manufacturing Consent that the press was much less free and independent than it pretended to be. Instead it was wholly subservient to commercial influence and the groupthink of those posing as experts. And hadn’t, after all, George Kennan, the brilliant mind behind containment and an unapologetic elitist, compared American democracy to a monster with a brain the size of a pin?

Chomsky’s point held even in the era of cable news, for there was a great deal of political diversity that fell outside the range between Fox News and CNN. Manufactured consent would fail, however, with the rise of the internet, which would allow the cheap production and distribution of political speech in a way that had never been seen before, though there had been glimpses. Political speech was democratized at almost the exact moment trust in policy elites collapsed. The reasons for such a collapse in trust aren’t hard to find.

American policy elites have embraced an economic agenda that has left working class income stagnant for over a generation. The globalization and de-unionization they promoted has played a large (though not the only) role in the decline of the middle class on which stable democracy depends. The Clinton machine bears a large responsibility for the left’s foolish embrace of this neoliberal agenda, which abandoned blue collar workers to transform the Democratic party into a vehicle for white collar professionals and identity groups.

Foreign policy elites, along with an uncritical mainstream media, led us into at least one disastrous and wholly unnecessary war in Iraq, a war whose consequences continue to be felt and which was exacerbated by yet more failures by these same elites. Our economic high-priests brought us the 2008 financial crisis, the response to which has been a coup by the owning classes at the cost of trillions of dollars. As Trump’s “populist” revolt staffed by Goldman Sachs alums demonstrates, the oligarchs now thoroughly control American government.

And it’s not only social science experts, politicians and journalists who have earned the public’s lack of trust. Science itself is in a crisis of gamed metrics, where it seems “results” matter much more than the truth. Corporations engage in the deliberate production of ignorance and disinformation, the study of which Robert Proctor calls agnotology.

The three legs of Lippmann’s stool- policy experts, the media, and the public- have collapsed as expertise has become corporatized and politicians have become beholden to those corporate interests, while at the same time political speech has escaped from anyone’s overt control. Trump seems to be the first political figure to have capitalized on this breakdown- a fact that does not bode well for democracy’s future.

Perhaps we should just call a spade a spade and abandon political representation and policy experts for government via electronic referendum. Yet, however much I love the idea of direct democracy, it seems highly unlikely that the sort of highly complex society we currently possess could survive absent the heavy input of experts– even in light of their very obvious flaws.

It’s just as possible that China, where technocrats rule and political speech and activity are tightly controlled by leveraging the centralized nature of the internet, could be the real shape of the future. The current structure of the internet, which is controlled by only a handful of companies, certainly makes the path to such a plutocratic censorship regime possible.

Returning to the work of Tim Wu, we can see the way in which communications empires have risen and fallen over the course of the last century: we’ve had the telephone, film, radio, television and now the computer. In all cases, with the noted exception of television, new media have arisen in a decentralized fashion, merged into gigantic corporations such as Bell Telephone, and then later been broken up or lost dominance to upstarts who adopted new means of transmission or entirely new types of media.

What perhaps makes our era different, in a way Wu doesn’t explore, is that for the first time diversity of content is occurring under conditions of concentrated ownership. Were only a handful of companies such as Facebook and Google to pursue the task in earnest, they could exercise nearly complete control over political speech and thus end the current era. Such rule need not be rapacious but could instead represent a kind of despotic-liberalism that mobilizes public opinion behind policies many of us care about, such as stemming global warming. It’s the kind of highly rational nightmare Malka Older imagined in her sci-fi thriller Infomocracy and Dave Eggers gave a darker hue in his book The Circle.

Hopefully liberalism itself, in the form of constitutional protections of free speech, will prevent us from going so far down this route. (Although the courts appear to think that Google et al.’s right to police their platforms’ content is itself protected under the First Amendment.) How our long-standing constitutional protections adapt to a world where “speech” can come in the form of bots which outnumber humans, and where foreign governments insert themselves into our elections, is anybody’s guess.

The best alternative to either despotic-liberalism or chaos is to restore trust in policy elites by finding ways to make such elites more accountable and therefore trustworthy. We need to come up with new ways to combine the necessary input of real experts with the revolution in communications that has turned every citizen into a source of media. Failing to find a way to rebalance expertise and democratic governance would mean we either lose our democracy to flawed experts (as Plato would have wanted) or surrender to the chaos of an equally flawed and fickle, and now seemingly permanently Balkanized, public opinion.


Should Facebook Censor the News?


In the era of information wars, knowledge of the past is perhaps the only way we can remain anchored to reality. Such collective memory shouldn’t consist only of an accurate record of the facts, but should also include a sense of the history of knowledge and of infowar itself.

When not seen from the point of false omniscience we call the present, history has always been the unwieldy struggle of rival forces, shifting alliances, and enemies that cannot be clearly distinguished along purely ideological or religious lines. There is not, nor has there ever been, a direction to history, it being as Churchill lamented “one damned thing after another”. It’s perhaps the fact that we’ve been forced to re-learn this that makes the present so damned painful. Many of those who thought we were headed towards a brighter future instead find themselves slipping back into nightfall.

At least part of the reason for our shock over the 2016 election wasn’t just the outcome but its timing. Stable, even sclerotic, societies such as ours don’t usually play Russian roulette with their future, whatever the imagined benefits that might come if the chamber is found empty. Almost from the start of the 21st century we had experienced shocks, none of which gave rise to even minor reforms, let alone the kind of political earthquake Trump’s election represents.

As a reminder, since 2000 we’ve gone through a presidential election whose outcome was decided not by the voters but by the US Supreme Court, the bursting of the 90’s tech bubble, the 9-11 attacks, the 2008 financial crisis and subsequent Great Recession, and two failed wars in Afghanistan and Iraq, along with nearly a decade of lackluster economic growth despite unprecedented measures taken by the world’s major central banks. And yet it is now, when none of these crises are as acute as they have been in the past, that their consequent political upheaval has occurred.

What I think such questions regarding timing miss is that the breakdown in trust between elites (especially in the media and the academy) and a large portion of the American (indeed Western) citizenry has not only been building across these different crises, but that this erosion has been running in parallel with a transformation of the communications landscape that has upended the ability of elites to, as Noam Chomsky characterized it, “manufacture consent”.

From the Second World War until things began to unwind in the 1980’s, there was only a marginal difference between Republicans and Democrats (it was Nixon, after all, whom we have to thank for the EPA, and Jimmy Carter who started what we now think of as Reagan’s arms buildup). American elites were in overall agreement on the fundamental questions regarding society, and possessed means the likes of which had never been seen before to ensure the rest of society also held these assumptions as sacrosanct.

This was perhaps an odd situation given that liberal elites in Western democracies were able to reach such mutual agreement, and gain such a degree of public acquiescence, absent the types of control over information and speech that had been present historically and which were so pronounced in the Communist societies that were their ultimate rivals. It was the shape of this occluded form of control which political theorists such as Herbert Marcuse, among others, tried to uncover.

None of these is more important for my purposes than Noam Chomsky and Manufacturing Consent, the book he co-wrote with Edward S. Herman, first published in 1988. Ever like the owl of Minerva, this revelatory book appeared on the very eve when the conditions it depicted were about to be transformed into something radically different.

In that work Chomsky argues that five features of the 20th century media landscape resulted in a world in which the media, rather than challenge elites, instead helped to consolidate elite control over the public. These five features were:

1) Size and concentrated ownership of media outlets

2) Advertising as the main source of revenue

3) Media reliance on government and corporate “experts”

4) “Flak” directed at individuals who stepped outside of elite norms.

5) Anti-communism as an inviolable national religion.

By 2016 all of the elements Chomsky had described in Manufacturing Consent had either been radically transformed or were no longer in existence.

The internet had permitted the rise of alternative or even conspiratorial media, of which Breitbart and Infowars were just two prominent right wing examples. While advertising remained a primary source of revenue, the cost of producing and distributing media (minus the kinds of editorial constraints of the mainstream media) effectively shrank to zero, with advertising’s role having shifted to content distributors such as Facebook that refused to bear any editorial responsibilities.

2016 was also the year of the revolt against experts- the consequence, no doubt, of their repeated failures, from the non-existent weapons of mass destruction in Iraq to the financial collapse that had not been foreseen by the phony experts and pseudo-scientists into whose hands we had placed our future. We call them economists.

It was also a year in which standard norms regarding political discourse collapsed, and the national religion of anti-communism was such an ancient memory that a former KGB operative could hack the American election in favor of the Republican candidate and very few within the GOP would be upset about it.

In some ways at least this merely returns us to the pre-cold war era before the kinds of media/elite alliance Chomsky describes in Manufacturing Consent had taken hold. We’ve been moving in that direction for quite some time now with the rise of openly partisan cable news in the 1980’s and 90’s.

In order to get our bearing we might have to look back even further to the period of Yellow Journalism when figures like William Randolph Hearst and Joseph Pulitzer battled for readership using the tools of sensationalism and scandal. Indeed, it was Pulitzer’s shame over his abuse of the truth during this period that convinced him to foster professionalism and standards of evidence through instruments such as the Columbia School of Journalism.

Yet we may have to look even further back. For one of the historical conditions that made the manufacturing of consent possible was the fact that in the late 1800’s information production itself had become industrialized. Those who had access to capital could produce such a flood of material that the effect was to drown out anyone who merely had access to the older, much smaller, means of publishing and distribution.

This centralization continued through the post-print forms of media. Radio was really only democratizing on a local level, which is why up until the 1950’s culture could still emerge from regional diversity- just ask Wolfman Jack. National, not to mention international, broadcasts required access to limited-in-number (and therefore expensive) telephone lines. Television production and distribution was even more capital intensive. And then the internet changed everything. We’re now back to something that resembles the pre-industrial media world, with both its possibility for a truly public form of speech and its lack of any editorial bearing or control.

And yet, though media and speech have in one sense become decentralized and slipped completely outside the bonds of control, in another they are more susceptible to censorship and oversight via centralized mediators than ever. A concerted effort by Google, Twitter, and especially Facebook could in reality asphyxiate the platforms of the Alt-right should they so choose. The question is, even if it were politically possible at the moment, should we want them to?

My guess, from where we stand today, is that launching on such a course would not only ultimately fail but would come back to haunt us. Preventing the ugliest of sentiments from being spoken openly does not prevent people from holding them, and perhaps does the opposite. After all, politics in countries with much stricter hate speech laws than the US has not merely gone down the same dark path as our own, but one that is perhaps even darker. The kinds of censorship in the name of social stability and elite interest Facebook is flirting with to secure its foothold in China should give us pause. For not only is this the opposite of the technologically enabled democratic future many of us long for, which would entail real democratic control over such editorial decisions and transparency regarding how those decisions were made, but we can never be sure that such a weapon, used today against frankly despicable enemies, won’t someday be used by the very same elites to define the despicable- as us.

Immortal Jellyfish and the Collapse of Civilization

[Image: Luca Giordano, Cave of Eternity, 1680s]

The one rule that seems to hold for everything in our Universe is that all that exists, after a time, must pass away, which for life forms means they will die. From there, however, all bets are off, and the definition of the word “time” in the phrase “after a time” comes into play. The Universe itself may exist for as long as hundreds of trillions of years, to at last disappear into the formless quantum field from whence it came. Galaxies, or more specifically clusters of galaxies and super-galaxies, may survive for perhaps trillions of years, to eventually be pulled in and destroyed by the black holes at their centers.

Stars last for a few billion years, and our own sun, some 5 or so billion years in the future, will die after having expanded and then consumed the last of its nuclear fuel. The earth, having lasted around 10 billion years by that point, will be consumed in this expansion of the Sun. Life on earth seems unlikely to make it all the way to the Sun’s envelopment of the planet and will likely be destroyed billions of years before the end of the Sun- as solar expansion boils away the atmosphere and oceans of our precious earth.

The lifespan of even the longest-lived individual among us is nothing compared to this kind of deep time. Against deep time we are, all of us, little more than mayflies, who live out their entire adult lives in little more than a day. Yet, like the mayflies themselves, who belong to one of the earth’s oldest extant species, by the very fact that we are the product of a long chain of life stretching backward we have contact with deep time.

Life on earth itself if not quite immortal does at least come within the range of the “lifespan” of other systems in our Universe, such as stars. If life that emerged from earth manages to survive and proves capable of moving beyond the life-cycle of its parent star, perhaps the chain in which we exist can continue in an unbroken line to reach the age of galaxies or even the Universe itself. Here then might lie something like immortality.

The most likely route by which this might happen is through our own species, Homo sapiens, or our descendants. Species do not exist forever, and ours is likely to share this fate, either through actual extinction or evolution into something else. In terms of the latter, one might ask, if our survival is assumed, how far into the future we would need to go before our descendants are no longer recognizably human. As long as something doesn’t kill us, or we don’t kill ourselves off first, I think that choice, for at least the foreseeable future, will be up to us.

It is often assumed that species have to evolve or they will die. A common refrain I’ve heard among some transhumanists is “evolve or die!”. In one sense, yes, we need to adapt to changing circumstances; in another, no, this is not really what evolution teaches us, or is not the only thing it teaches us. When one looks at the earth’s longest extant species, what one often sees is that once natural selection comes up with a formula that works, that model will be preserved essentially unchanged over very long stretches of time, even over what can be considered deep time. Cyanobacteria are nearly as old as life on earth itself, and the more complex horseshoe crab is essentially the same as its relatives that walked the earth before the dinosaurs. The exact same type of small creature that our children torture on beach vacations might have been a snack for a baby T-Rex!

That was the question of the longevity of species, but what about the longevity of individuals? Anyone interested in the subject should check out the amazing photo study by the artist Rachel Sussman. You can see Sussman’s work here at TED, and over at Long Now. The specimens Sussman brings to light include individuals over 2,000 years old. Almost all are bacteria or plants, and clonal- that is, they exist as a single organism composed of genetically identical individuals linked together by common root and other systems. Plants, and especially trees, are perhaps the most interesting because they are so familiar to us, and though no plant can compete with the longevity of bacteria, a clonal colony of Quaking Aspen in Utah is an amazing 80,000 years old!

The only animals Sussman deals with are corals, an artistic decision that reflects the fact that animals do not survive for all that long- although one species of animal she does not cover might give the long-lifers in the other kingdoms a run for their money. The “immortal jellyfish”, Turritopsis nutricula, is thought to be effectively biologically immortal (though none are likely to have survived in the wild for anything even approaching the longevity of the longest lived plants). The way it achieves this feat is a wonder of biological evolution. Turritopsis nutricula, after mating upon sexual maturity, essentially reverses its own development process and reverts back to its prior polyp state.

Perhaps we could say that Turritopsis nutricula survives indefinitely by moving between more and less complex types of structures, all the while preserving the underlying genes of an individual specimen intact. Some hold out the hope that Turritopsis nutricula holds the keys to biological immortality for individuals, and let’s hope they’re right, but I, for one, think its lessons likely lie elsewhere.

A jellyfish is a jellyfish, after all. Among more complex animals with well developed nervous systems, longevity moves much closer to a humanly comprehensible lifespan, with the oldest living animal, a giant tortoise by the too-cute name of “Jonathan”, thought to be around 178 years old. This is still a very long time frame in human terms, and perhaps puts the briefness of our own recent history in perspective: it would be another 26 years after Jonathan hatched from his egg till the first shots of the American Civil War were fired. A lot can happen over the life of a “turtle”.

Individual plants, however, put all individual animals to shame. The oldest non-clonal plant, the Great Basin Bristlecone Pine, has a specimen believed to be 5,062 years old. In some ways this oldest living non-clonal individual perfectly illustrates the (relatively) new way human beings have reoriented themselves to time, and even deep time. When this specimen of pine first emerged from a cone, human beings had only just invented a whole set of tools that would make the transmission of cultural rather than genetic information across vast stretches of time possible. During the 31st century B.C.E. we invented monumental architecture such as Stonehenge and the pyramids of Egypt, whose builders still “speak” to us, pose questions to us, from millennia ago. Above all, we invented writing, which allowed someone with little more than a clay tablet and a carving utensil to say something to me living 5,000 years in his future.

Humans being the social animals that they are, we might ask ourselves about the mortality, or potential immortality, of groups that survive across many generations, and even for thousands of years. Groups that survive for such long periods of time seem to emerge most fully out of the technology of writing, which both allows the preservation of historical memory and permits a common identity formed around a core set of ideas. The two major types of human groups based on writing are institutions and societies, the latter including not just the state but also the economic, cultural, and intellectual features of a particular group.


Among the biggest mistakes I think those charged with responsibility for an institution or a society can make is to assume that it is naturally immortal, and that such a condition is independent of whatever decisions and actions those in charge of it take. This was part of the charge Augustine laid against the seemingly eternal Roman Empire in his The City of God. The Empire, Augustine pointed out, was a human institution that had grown and thrived from its virtues in the past just as surely as it was in his day dying from its vices. Augustine, however, saw the Church and its message as truly eternal. Empires would come and go but the people of God and their “city” would remain.

It is somewhat ironic, therefore, that the Catholic Church, which chooses a Pope this week, has been so beset by scandal that its very long-term survivability might be thought at stake. Even seemingly eternal institutions, such as the 2,000 year old Church, require from human beings an orientation that might be compared to the way theologians once viewed the relationship of God and nature. It was once held that constant effort by God was required to keep the Universe from slipping back into the chaos from whence it came- that the action of God was necessary to open every flower. While this idea holds very little for us in terms of our understanding of nature, it is perhaps a good analog for human institutions, states, and our own personal relationships, which require our constant tending or they give way to mortality.

It is perhaps difficult for us to realize that our own societies are as mortal as the empires of old, and that someday my own United States will be no more. America is a very odd country in respect to its views of time and history. In a society seemingly obsessed with the new and the modern, contemporary debates almost always seek reference and legitimacy from men who lived and thought over 200 years ago. The Founding Fathers were obsessed with the mortality of states and deliberately crafted a form of government that they hoped might make the United States almost immortal.

Much of the structure of American constitutionalism, where government is divided into “branches” which “check and balance” one another, was based on a particular reading of long-lived ancient systems of government which had something like this tripartite structure, most notably Sparta and Rome. What “killed” a society, in the view of the Founders, was when one element- the democratic, the oligarchic-aristocratic, or the kingly- rose to dominate all others. Constitutionally divided government was meant to keep this from happening and would therefore support the survival of the United States indefinitely.

Again, it is a somewhat bitter irony that the very divided nature of American government that was supposed to help the United States survive into the far future seems to be making it impossible for the political class in the US to craft solutions to the country’s quite serious long-term problems, and therefore might someday threaten the very survival of the country divided government was meant to secure.

Anyone interested in the question of the extended survival of their society, indeed of civilization itself, needs to take into account the work of Joseph A. Tainter and his The Collapse of Complex Societies (1988). Here the archaeologist Tainter not only provides us with a “science” that explains the mortality of societies; his viewpoint, I think, also gives us ways to think about the seemingly intractable social, economic, and technological bottlenecks that now confront all developed economies: Japan, the EU/UK, and the United States.

Tainter, in his Collapse, wanted to move us away from the vitalist ideas of the end of civilization seen in thinkers such as Oswald Spengler and Arnold Toynbee. We needed, in his view, to put our finger on the material reality of a society to figure out what conditions most often lead it to dissipate, i.e. to move from a more complex and integrated form, such as the Roman Empire, to a simpler and less integrated form, such as the isolated medieval fiefdoms that followed.

Grossly oversimplified, Tainter’s answer was a dry two-word concept borrowed from economics: marginal utility. The idea is simple if you think about it for a moment. Any society is likely to take advantage of “low-hanging fruit” first. The best land will be the first to be cultivated, the most easily accessible resources the first to be exploited.

The “fruit,” however, quickly becomes harder to pick: problems become harder for a society to solve, which leads to a growth in complexity. The Romans first tapped the tillable land around their city, but by the end of the Empire a complex international network of trade and political control was needed to pump grain from the distant Nile Valley into the city of Rome.

Yet as a society deploys more and more complex solutions to its problems, it becomes institutionally “heavy” (the legacy of all the problems it has solved in the past) just as its problems become more and more difficult to solve. The result is that, at some point, the sheer amount of resources that would need to be thrown at a problem to solve it is no longer available, and the only lasting solution becomes to move down the chain of complexity to a simpler form. Roman prosperity and civilization drew in the migration of “barbarian” populations from the north, whose pressures would lead to the splitting of the Empire in two and the eventual collapse of its Western half.

It would seem that we broke through Tainter’s problem of marginal utility with the industrial revolution, but we should perhaps not judge so fast. The industrial revolution, and all of its derivatives up to our current digital and biological revolutions, replaced a system in which goods were largely produced at a local level and communities were largely self-sufficient with a sprawling global network of interconnections and coordinated activities requiring vast amounts of specialized knowledge on the part of human beings who, by necessity, must participate in this system to provide for their most basic needs.

Clothes that were once produced in the home of the individual who would wear them are now produced thousands of miles away by workers connected to a production and transportation system that requires the coordination of millions of people, many of them exercising specialized knowledge. Food that was once grown or raised by the family that consumed it now requires vast systems of transportation and processing, the production of fertilizers from fossil fuels, and the work of genetic engineers to design both crops and domesticated animals.

This gives us an indication of just how far up the chain of complexity we have moved, and I think it leads inevitably to the question of whether such increasing complexity might at some point stall for us, or even be thrown into reverse.

The idea that, despite all the whiz-bang! of modern digital technology, we have somehow stalled out in terms of innovation has recently gained traction. There was the argument made by the technologist and entrepreneur Peter Thiel, at the 2009 Singularity Summit, that the developed world faced real dangers from the Singularity not happening quickly enough. Thiel’s point was that our entire society was built around expectations of exponential technological growth that showed ominous signs of not materializing. I need only think back to my Social Studies textbooks in the 1980s, with their projections of an early 2000s filled with glittering orbital and underwater cities (both of which I dreamed of someday living in), to realize our futuristic expectations are far from having been met. More depressingly, Thiel points out that all of our technological wonders have not translated into huge gains in economic growth, and especially have not resulted in any increase in median income, which has been stagnant since the 1970s.

In addition to Thiel, there is the economist Tyler Cowen, who in his The Great Stagnation (2011) argued compellingly that the real root of America’s economic malaise was that the kinds of huge qualitative innovations seen in the 19th and early 20th centuries (from indoor toilets, to refrigerators, to the automobile) had largely petered out after the low-hanging fruit, the technologies easiest to reach using the new industrial methods, was picked. I may love my iPhone (if I had one), but it sure doesn’t beat being able to sanitarily go to the bathroom indoors, or keep my food from rotting, or travel many miles overland on a daily basis in mere minutes or hours rather than days.

One reason why technological change is perhaps not happening as fast as boosters such as singularitarians hope, or as fast as our society perhaps needs in order to continue functioning the way we have organized it, can be seen in the comments of the technologist, social critic, and novelist Ramez Naam. In a recent interview for The Singularity Weblog, Naam points out that what believers in the Singularity, and others who hold ideas about the exponential pace of technological growth, miss is that the complexity of the problems technology is trying to solve is also growing exponentially; that is, problems are becoming exponentially harder to solve. It is for this reason that Naam finds the singularitarians’ timeline wildly optimistic. We are a long, long way from understanding the human brain in such a way that it could be replicated in an AI.

The recent proposal of the Obama Administration to launch an Apollo-style project to understand the human brain, along with the more circumspect, EU-funded Human Brain Project/Blue Brain Project, might be seen as attempts to solve the epistemological problems posed by increasing complexity, and as responses to two seemingly unrelated technological bottlenecks stemming from complexity and the problem of diminishing marginal returns.

On the epistemological front, the problem seems to be that we are quite literally drowning in data but sorely lacking in models by which we can put the information we are gathering together into working theories that anyone actually understands. As Henry Markram, the founder of the Blue Brain Project, stated:

So yes, there is no doubt that we are generating a massive amount of data and knowledge about the brain, but this raises a dilemma of what the individual understands. No neuroscientist can even read more than about 200 articles per year and no neuroscientist is even remotely capable of comprehending the current pool of data and knowledge. Neuroscientists will almost certainly drown in data in the 21st century. So, actually, the fraction of the known knowledge about the brain that each person has is actually decreasing(!) and will decrease even further until neuroscientists are forced to become informaticians or robot operators.

This epistemological problem, which was brilliantly discussed by Noam Chomsky in an interview late last year, is related to the very real bottleneck in Artificial Intelligence, the very technology Peter Thiel thinks is essential if we are to achieve the rates of economic growth upon which our assumptions of technological and economic progress depend.

We have developed machines with incredible processing power, and the digital revolution is real, with amazing technologies just over the horizon. Still, these machines are nowhere near doing what we would call “thinking.” Or, to paraphrase the neuroscientist and novelist David Eagleman: the AI Watson might have been able to beat the very best human beings at the game Jeopardy!, but what it could not do was answer a question obvious to any two-year-old, like “When Barack Obama enters a room, does his nose go with him?”

Understanding how human beings think, it is hoped, might allow us to overcome this AI bottleneck and produce machines that possess qualities such as our own, or better: an obvious tool for solving society’s complex problems.

The other bottleneck a large-scale research project on the brain is meant to solve is the halted development of psychotropic drugs, a product of the enormous and ever-increasing costs of creating such products, which is in turn a product of the complexity of the problem pharmaceutical companies are trying to tackle, namely: how does the human brain work, and how can we control its functions and manage its development? This is especially troubling given the predictable rise in neurological diseases such as Alzheimer’s. It is my hope that these large-scale projects will help to crack the problem of the human brain, especially as it pertains to devastating neurological disorders; let us pray they succeed.

On the broader front, Tainter identifies a number of solutions societies have come up with to the problem of marginal utility, two of which are merely temporary and one of which is long-term. The first is for a society to become more complex, more integrated, bigger. The old-school way to do this was through conquest, but in an age of nuclear weapons and sophisticated insurgencies the big powers seem unlikely to follow that route. Instead, what we are seeing are proposals such as the EU-US free trade area and the Trans-Pacific Partnership, both of which appear to assume that the solution to the problems of globalization is more globalization. The second solution is for a society to find a new source of energy. Many might have hoped this would come in the form of green energy rather than in the form it appears to have taken: shale gas and oil from the tar sands of Canada. In any case, Tainter sees both of these solutions as but temporary respites from the problem of marginal utility.

The only lasting solution Tainter sees to the problem of marginal utility is for a society to become less complex, that is, less integrated, more based on what can be provided locally than on sprawling networks and specialization. Tainter wanted to move us away from seeing the evolution of the Roman Empire into the feudal system as the “death” of a civilization. Rather, he sees the societies human beings have built as extremely adaptable and resilient. When the problem of increasing complexity becomes impossible to solve, societies move toward less complexity. It is a solution that strangely echoes that of the “immortal jellyfish,” Turritopsis nutricula, which survives by reverting to a simpler form: the only path complex entities have discovered that allows them to survive into something that whispers of eternity.

Image description: From the National Gallery in London, “The Cave of Eternity” (1680s) by Luca Giordano. “The serpent biting its tail symbolises Eternity. The crowned figure of Janus holds the fleece from which the Three Fates draw out the thread of life. The hooded figure is Demogorgon who receives gifts from Nature, from whose breasts pours forth milk. Seated at the entrance to the cave is the winged figure of Chronos, who represents Time.”