The Evolution of Chains

“Progress in society can be thought of as the evolution of chains…”

 Phil Auerswald, The Code Economy

“I would prefer not to.”

Herman Melville, Bartleby the Scrivener

The 21st century has proven to be all too much like living in a William Gibson novel. It may be sad, but at least it’s interesting. What gives the times this cyberpunk feel isn’t so much the ambiance (instead of the dark cool of noir we’ve got frog memes and LOLcats) as the fact that absolutely every major happening, and all of our interaction with the wider world, has something to do with the issue of computation, of code.

Software has not only eaten the economy, as Marc Andreessen predicted back in 2011, but also culture, politics, and war. To understand both the present and the future, then, it is essential to get a handle on what exactly this code that now dominates our lives is, and to see it in its broader historical perspective.

Phil Auerswald managed to do something of the sort with his book The Code Economy: A Forty-Thousand-Year History. It’s a book that tries to take us to the very beginning, to chart the long march of code to sovereignty over human life. The book defines code (this will prove part of its shortcomings) broadly as a “recipe”: a standardized way of achieving some end. Looked at this way, human beings have been a species dominated by code since our beginnings, that is, since the emergence of language, but there have been clear leaps closer to the reign of code along the way, with Auerswald seeing the invention of writing as the first. Written language, it seems, was the invention of bureaucrats, a way for the scribes in ancient Sumer to keep track of temple tributes and debts. And though it soon broke out of these chains and proved a tool as much for building worlds and wonders like the Epic of Gilgamesh as a radical new means of power and control, code has remained linked to the domination of one human group over another ever since.

Auerswald is mostly silent on the mathematical history of code before the modern age. I wish he had spent time discussing predecessors of the computer such as the abacus, the Antikythera mechanism, clocks, and especially the astrolabe. Where exactly did this idea of mechanizing mathematics actually emerge, and what are the continuities and discontinuities between different forms of such mechanization?

Instead, Gottfried Leibniz, who envisioned the binary arithmetic that underlies all of our modern-day computers, is presented as if he had just fallen out of the Matrix. Though Leibniz’s genius does seem almost inexplicable and sui generis. A philosopher who arguably beat Isaac Newton to the invention of calculus, he was also an inventor of ingenious calculating machines and an almost real-life version of Nostradamus. In a letter to Christiaan Huygens in 1670, Leibniz predicted that with machines using his new binary arithmetic: “The human race will have a new kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes.” He was right, though it would be nearly 300 years until these instruments were up and running.

Something I found particularly fascinating is that Leibniz found a premonition of his binary number system in the Chinese system of divination, the I Ching, which had been brought to his attention by Father Bouvet, a Jesuit priest then living in China. It seems that from the beginning the idea of computers, and the code underlying them, has been wrapped up with the human desire to know the future in advance.
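The correspondence Leibniz saw is easy to sketch. A hexagram is a stack of six lines, each either broken (yin) or unbroken (yang); read an unbroken line as 1 and a broken line as 0, and every hexagram becomes a six-bit binary number. A minimal illustration (mine, not Auerswald’s; the reading order of the lines is an assumption):

```python
def hexagram_to_int(lines):
    """Interpret six I Ching hexagram lines as a six-bit binary number.

    `lines` is a sequence of six strings, "yin" (broken, read as 0) or
    "yang" (unbroken, read as 1), given here from the most significant
    bit down -- which end of the hexagram to start from is an assumption.
    """
    value = 0
    for line in lines:
        value = value * 2 + (1 if line == "yang" else 0)
    return value

# Six unbroken lines read as binary 111111, i.e. 63.
print(hexagram_to_int(["yang"] * 6))  # 63
# Six broken lines read as binary 000000, i.e. 0.
print(hexagram_to_int(["yin"] * 6))   # 0
```

The sixty-four hexagrams thus enumerate exactly the numbers 0 through 63, which is the pattern Leibniz took as an anticipation of his arithmetic.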

Leibniz believed that the widespread adoption of binary would make calculation more efficient and thus mechanical calculation easier, but it wouldn’t be until the industrial revolution that we actually had the engineering to make his dreams a reality. Two developments especially aided the emergence of what we might call proto-computers in the 19th century. The first was the division of labor first identified by Adam Smith; the second was Jacquard’s loom. Much like the first lurch toward code that came with the invention of writing, this move was seen as a tool that would extend the reach and powers of bureaucracy.

Smith’s idea of how efficiency was to be gained by breaking down complex tasks into simple, easily repeatable steps served as the inspiration for applying the same methods to computation itself. Faced with the prospect of not having enough trained mathematicians to create the extensive logarithmic and trigonometric tables upon which modern states and commerce were coming to depend, innovators such as Gaspard de Prony in France hit upon the idea of breaking down complex computation into simple tasks. The human “computer” was born.

It was the mechanical loom of Joseph Marie Jacquard that in the early 1800s proved that humans themselves weren’t needed once the creation of a complex pattern had been reduced to routine tasks. It was merely a matter of drawing the two together, rote human computation and machine-enabled pattern making, to show the route to Leibniz’s new instrument for thought. And it was Charles Babbage, along with his young assistant Ada Lovelace, who seemed to grasp the implications, beyond data crunching, of building machines that could “think,” who would do precisely this.

Babbage’s Difference and Analytical Engines, however, remained purely things of the mind. Early industrial-age engineering had yet to catch up with Leibniz’s daydreams. Society’s increasing needs for data instead came to be served by a growing army of clerks along with simple adding machines and the like.

In an echo of the Luddites, who rebelled against the fact that their craft was being supplanted by the demands of the machine, at least some of these clerks must have found knowledge reduced to information processing dehumanizing. Melville’s Bartleby the Scrivener gives us a portrait of a man who has become like a computer stuck in a loopy glitch that eventually leads to his being scrapped.

As an interesting aside: Auerswald doesn’t really explore the philosophical assumptions behind the drive to make the world computable. Melville’s Bartleby, for example, might have been used as a jumping-off point for a discussion of how both the computer and the view of human beings as a sort of automaton meant to perform a specific task emerged out of a thoroughly deterministic worldview. This view, after all, was the main target of Melville’s short story, in which a living person has come to resemble a sort of flawed windup toy, and which makes direct reference to religious and philosophical tracts arguing in favor of determinism: namely, the firebrand preacher Jonathan Edwards’s treatise Freedom of the Will, and the chemist and utopian Joseph Priestley’s The Doctrine of Philosophical Necessity.

It is the effect of the development of machine-based code on these masses of 19th and early 20th century white-collar workers, rather than the more commonly covered ground of automation’s effect on the working class, that most interests Auerswald. And he has reasons for this, given that the current surge of AI seems to most threaten low-level clerical workers rather than blue-collar workers, whose jobs have already been automated or outsourced, and given that the menial occupations that have replaced jobs in manufacturing require too much dexterity for even the most advanced robots.

As Auerswald points out, fear of white-collar unemployment driven by the automation of cognitive tasks is at least as old as the invention of the Burroughs calculating machine in the 1880s. Yet rather than leading to a mass of unemployed clerks, the adoption of adding machines, typewriters, and dictaphones only increased the number of people needed to manage and process information. That is, the deeper penetration of code into society, or the desire for society to conform to the demands of code, placed even higher demands on both humans and machines. Perhaps that historical analogy will hold for us as well. Auerswald doesn’t seem to think that this is a problem.

By the start of the 20th century we still weren’t sure how to automate computation, but we were getting close. In an updated version of de Prony’s scheme, the meteorologist Lewis Fry Richardson showed how we could predict the weather armed with a factory of human computers. There are echoes of both to be seen in the low-paid laborers working for platforms such as Amazon’s Mechanical Turk, who still provide the computation behind the magic act that is much of contemporary AI.

Yet it was World War II and the Cold War that followed that would finally bootstrap Leibniz’s dream of the computer into reality. The names behind this computational revolution are now legendary: Vannevar Bush, Norbert Wiener, Claude Shannon, John Von Neumann, and especially Alan Turing.

It was these figures who laid the cornerstone for what George Dyson has called “Turing’s Cathedral,” for, like the great medieval cathedrals, the computational megastructure in which all of us are now embedded was the product of generations of people dedicated to giving substance to the vision of its founders. Unlike the builders of Notre Dame or Chartres, who labored ad majorem Dei gloriam, those who constructed Turing’s Cathedral were driven by ideas regarding the nature of thought and the power and necessity of computation.

Surprisingly, this model of computation, created in the early 20th century by Turing and his fellow travelers, continues to underlie almost all of our digital technology today. It’s a model that may be reaching its limits, and that needs to be replaced with a more environmentally sustainable form of computation, one actually capable of helping us navigate and find the real structure of information in a world whose complexity we are finding so deep as to be intractable, and in which our own efforts at mastery only result in a yet harder-to-solve maze. It’s a story Auerswald doesn’t tell, and it must wait until another time.

Rather than explore what computation itself means, or what its future might be, Auerswald throws wide open the definition of what constitutes a code. As Erwin Schrödinger observed in his prescient essay What Is Life?, and as we later learned with the discovery of DNA, a code does indeed stand at the root of every living thing. Yet in pushing the definition of code ever farther from its origins in the exchange of information and detailed instructions on how to perform a task, something is surely lost. Thus Auerswald sees not only computer software and DNA as forms of code, but also the work of the late celebrity chef Julia Child, the McDonald’s franchise, Walmart, and even the economy itself.

As I suggested at the beginning of this essay, there might be something to be gained by viewing the world through the lens of code. After all, code of one form or another is now the way in which most of the largely private, and the remaining government, bureaucracies that run our lives function. The sixties radicals terrified that computers would be the ultimate tool of bureaucrats guessed better where we were headed. “Do not fold, spindle or mutilate.” Right on. Code has also become the primary tool through which the system is hacked: forced to buckle and glitch in ways that expose its underlying artificiality, freeing us for a moment from its claustrophobic embrace. Unfortunately, most of its attackers aren’t rebels fighting to free us, but barbarians at the gates.

Here is where Auerswald really lets us down. Seeing progress as tied to our loss of autonomy and the development of constraints, he seems to think we should just accept our fetters, now made out of 1s and 0s. Yet within his broad definition of what constitutes code, it would appear that at least the creation of laws remains in our power. At least in democracies, the law is something that the people collectively make. Indeed, if any social phenomenon can be said to be code-like, it would be law. It is surprising, then, that Auerswald gives it no mention in his book. Meaning it’s not really surprising at all.

You see, there’s an implicit political agenda underneath Auerswald’s whole understanding of code, or at least a whole set of political and economic assumptions. These are assumptions regarding the naturalness of capitalism, the beneficial nature of behemoths built on code such as Walmart, the legitimacy of global standard-setting institutions and protocols, and the dominance of platforms over the other “trophic layers” in the information economy that provide the actual technological infrastructure on which those platforms run. Perhaps not even realizing that these are just assumptions rather than natural facts, Auerswald never develops or defends them. Certainly, code has its own political economy, but to see it we will need to look elsewhere. Next time.



How we all got trapped in a Skinner Box

Sigmund Freud may be the most famous psychologist of the 20th century, but his legacy has nothing on that of B.F. Skinner. Step into your TARDIS or DeLorean, travel back to 1950, and bring the then forty-six-year-old behaviorist to 2017, and not only would much about our world seem familiar to Skinner, he would likely realize how much his work had helped build it. Put a smartphone in his hand and let him play around for a while, then afterwards inform him that not only was the device not a toy, but that we were utterly dependent upon it, and the epiphany would no doubt come to him that all of us were now living in a sort of globe-sized Skinner box.

Invented around 1930, an operant conditioning chamber, or Skinner box, is a cage in which an animal is trained via regular rewards and punishments to exhibit some targeted behavior. To Skinner it offered an empirical demonstration that animals (including humans) lack free will. For him and many of his fellow behaviorists, our sense of freedom, even consciousness itself, is an elaborate psychological illusion. All of us are mere puppets of our environment.
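The core mechanism is simple enough to simulate. The sketch below (my illustration, not drawn from Skinner or the books discussed here) models a variable-ratio reinforcement schedule, the arrangement Skinner’s research found produces the most persistent responding, since any given press might be the one that pays off:

```python
import random

def variable_ratio_session(presses, mean_ratio=5, seed=0):
    """Simulate lever presses rewarded, on average, once every
    `mean_ratio` presses -- a variable-ratio schedule. The subject
    can never predict which particular press will pay off."""
    rng = random.Random(seed)  # seeded so a run is reproducible
    rewards = 0
    for _ in range(presses):
        # each press is independently rewarded with probability 1/mean_ratio
        if rng.random() < 1 / mean_ratio:
            rewards += 1
    return rewards

# Over many presses the total converges on presses / mean_ratio,
# but the timing of each individual reward stays unpredictable.
print(variable_ratio_session(10_000, mean_ratio=5))
```

It is that unpredictability, rather than the size of any single reward, that sustains the behavior; the slot machines and social-media notifications discussed later in this essay are built on exactly this schedule.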

Skinner wasn’t one for humility. He thought he had stumbled upon the true and final keys for controlling human behavior. Given this, he feared the keys might fall into the wrong hands: communists, fascists, or advertisers of cola or face creams. In 1948 he published his utopian novel Walden Two (Thoreau would not have appreciated the homage). It depicted a “paradise” designed from above, where only its architect was anyone we would recognize as free. It’s perhaps closer to the world we are living in than any of the dystopian utopias written during the same period, even Orwell’s 1984 (also written in 1948) or Huxley’s Brave New World (1931).

The novel tells the story of a psychology professor, Burris, and a philosophy professor, Castle, who, along with a handful of Burris’s students and their girlfriends, visit a utopian community named Walden Two, established over a decade earlier by a colleague of Burris’s, T. E. Frazier.

The novel is all very 1940s, filled with cigarettes and sexual innuendo, minus the sex. The book has the feel of Mad Men, and for someone who doesn’t believe in interiority, Skinner is surprisingly good as a novelist, even if predictably didactic.

In Walden Two you can find utopian tropes common since Plato invented the genre: the abolition of the family and at least some degree of shared property. What makes the book distinct, and relevant today, is that Skinner’s novel imagines an entire society built around operant conditioning. It’s a society that bears some remarkable resemblances to our own.

Walden Two is a community whose inhabitants gleefully “never have to go out of doors at all” (20). Designed for the maximally efficient use of space and time, it is a place where large-scale, universally shared activities have been supplanted by the niche interests of individuals. Lack of mass interest in shared culture also translates into a lack of knowledge of, or involvement in, areas of common responsibility and concern, that is, politics. Most people, according to Frazier, the utopia’s designer, “want to be free of the responsibility of planning.” (154)

Walden Two’s efficiency has allowed it to reduce work hours to no more than four per day. Housework has been commodified, and mass education replaced by individualized instruction that focuses on the student’s unique skills and interests. Mid-20th-century sexual puritanism, especially for teenagers, has been supplanted by open sexuality, and (writing twelve years before the introduction of oral contraceptives) biological parenthood has been pushed back to ages ranging from the late teens to the early twenties, with parenting itself provided by the community.

Much of this readers will either find attractive, or it will at least seem more familiar to us in 2017 than it would have to anyone in 1948. Yet it’s the darker side of Skinner’s vision that should concern us, because he was perhaps even more prescient there.

Inhabitants of Walden Two still engage in the electoral politics of the outside world; they just do so at the direction of the community’s planners. The ultimate object is to spread their model from one city to the next: a kind of mondialist revolution.

In Walden Two it’s not merely that individuals have ceded political responsibility to experts out of lack of interest; it’s that they’ve surrendered all control over their environment to those who write and manage what Skinner calls the “Code” under which they live. An individual may lodge an objection to some particular aspect of the Code, “…but in no case must he argue about the Code with the members at large.” (152)

Indeed, it’s not so much the fact that the common individual is barred from challenging the Code that renders her essentially powerless in Walden Two; it’s that the Code itself is deliberately obscured from those who live under it. As Frazier states: “We deliberately conceal the planning and managerial machinery to further the same end.” (220)

Although Skinner justifies this opacity on the grounds that it promotes a sense of equality, in combining it with a deliberate inattention to history (justified on the same grounds) he ends up robbing the inhabitants of Walden Two of any sense of an alternative to the system under which they live, the very thing that, as the historian Yuval Noah Harari has pointed out, the study of history provides.

The purpose of Skinner’s Walden Two is to promote human happiness, but its design is one that, as his fictional stand-in Frazier openly admits, will only work if humans are no more free than a clockwork orange.

“My answer is simple enough,” said Frazier. “I deny that freedom exists at all. I must deny it- or my program would be absurd. You can’t have a science about a subject matter which hops capriciously about. Perhaps we can never prove that man isn’t free; it’s an assumption. But the increasing success of the science of behavior makes it more and more plausible.” (257)

The world of Walden Two presents itself as one no longer concerned with the human-all-too-human desire for power. Yet the character who actually wields power in Walden Two, that is, Frazier, is presented as almost manic in his sense of self, believing he has obtained the status of what in reality is an atrophied idea of God.

“Of course, I’m not indifferent to power!” Frazier said hotly. “And I like to play God! Who wouldn’t, under the circumstances? After all, man, even Jesus Christ thought he was God!” (281)

Perhaps Skinner’s real motive was sheer will to power masked by altruism, as it so often is. Still, he gives some indication that his actual motivation was the conviction that his studies of pigeons and rats trapped in boxes, along with the susceptibility of “mass man” to propaganda as evidenced by the Nazis and the Soviets (along with liberal states at war), had proven human freedom a lie. Those in possession of even primitive versions of his science of human behavior were the true puppet masters capable of exercising control. The future, for Skinner, if not populated by Waldens, would be dominated by totalitarian states or ad men.

Yet the powers offered by behaviorism ended up being much less potent than Skinner imagined, which is not to say that, for limited purposes, they didn’t work. Once totalitarian states passed from the scene, the main proponents of behaviorism outside of psychology departments did indeed become ad men, along with the politicians who came to copy their methods.

Manufactured behavioral addiction has become the modus operandi of late capitalism. As Natasha Dow Schüll points out in her book about machine gambling, Addiction by Design, a huge amount of scientific research goes into designing machines that optimize user addiction. Whole conferences are held to hawk these sophisticated slot machines, while state revenue becomes ever more dependent on such predatory economics.

Now we all carry addictive-by-design machines in our pockets. Software is deliberately built around the kinds of positive reinforcement Skinner would easily recognize. It’s a world where everything is being gamified. As Adam Alter writes in his book Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked:

A like on Facebook and Instagram strikes one of those notes, as does the reward of completing a World of Warcraft mission, or seeing one of your tweets shared by hundreds of Twitter users. The people who create and refine tech, games, and interactive experiences are very good at what they do. They run hundreds of tests with thousands of users to learn which tweaks work and which don’t- which background colors, fonts, and audio tones maximize engagement and minimize frustration. As an experience evolves, it becomes an irresistible, weaponized version of what it once was. In 2004, Facebook was fun; in 2016, it’s addictive. (5)

We are the inheritors of a darker version of Skinner’s freedomless world, though by far not the darkest. Yet even should we get the beneficent paternalism of contemporary Skinnerites, such as Richard Thaler and Cass R. Sunstein, who wish to “nudge” us this way and that, it would harm freedom not so much by proving that our decisions are indeed in large measure determined by our environment as by the fact that the shape of that environment would be in the hands of someone other than ourselves, individually and collectively.

Anthony Burgess, the man who wrote the most powerful work against behaviorism ever written, A Clockwork Orange, wrote this about behaviorism of even the best kind:

One may take the principle of evil as applying in areas of conduct where the destruction of an organism is not intended. It is wrong to push drugs among children, but few would deny that it is also evil: the capacity of an organism for self-determination is being impaired. Maiming is evil. Acts of aggression are evil, though we are inclined to find mitigating factors in the hot spirit of revenge (“a kind of wild justice,” said Francis Bacon) or in the desire to protect others from expected, if not always fulfilled, acts of violence. We all hold in our imaginations or memories certain images of evil in which there is no breath of mitigation—four grinning youths torturing an animal, a gang rape, cold-blooded vandalism. It would seem that enforced conditioning of a mind, however good the social intention, has to be evil.

It’s an observation that remains true, and increasingly relevant, today.