A Box of a Trillion Souls


“The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.”                                                                          

Jaron Lanier

 

Stephen Wolfram may, or may not, have a justifiable reputation for intellectual egotism, but I like him anyway. I am pretty sure this is because, whenever I listen to the man speak, I most often walk away not so much with answers as with a whole new way to frame questions I had never seen before, but sometimes I’m just left mesmerized, or perhaps bewildered, by an image he’s managed to draw.

A while back, during a talk/demo at the SXSW festival, he managed to do this when he brought up the idea of “a box of a trillion souls”. He didn’t elaborate much, but left it there, after which I chewed on the metaphor for a few days and then returned to real life, which can be mesmerizing and bewildering enough.

A couple of days ago I finally came across an explanation of the idea in a speech by Wolfram over at John Brockman’s Edge.org. There, Wolfram also opined on the near future of computation and the place of humanity in the universe. I’ll cover those thoughts first before I get to his box full of souls.

One of the things I like about Wolfram is that, uncommonly for a technologist, he tends to approach explanations historically. In his speech he lays out a sort of history of information that begins with information being conveyed genetically with the emergence of life, moves to the interplay between individual and environment with the development of more complex life, and flowers in spoken language with the appearance of humans.

Spoken language eventually gave rise to the written word, though it took almost all of human history for writing to become nearly as common as speaking. For most of that time reading and writing were monopolized by elites. A good deal of mathematics, as well, has moved from being utilized by an intellectual minority to being part of the furniture of the everyday world, though more advanced maths continues to be understandable by specialists alone.

The next stage in Wolfram’s history of information, the one we are living in, is the age of code. What distinguishes code from language is that it is “immediately executable”, by which I understand him to mean that code is not just some set of instructions but, when run, is itself the thing those instructions describe.

Much like reading, writing and basic mathematics before the invention of printing and universal education, code is today largely understood by specialists only. Yet rather than endure for millennia, as was the case with the clerisy’s monopoly on writing, Wolfram sees the age of non-universal code as ending almost as soon as it began.

Wolfram believes that specialized computer languages will soon give way to “natural language programming”. A fully developed form of natural language programming would be readable by both computers and human beings- by numbers of people far beyond those who know how to code- since code would be written in ordinary human languages like English or Chinese. He is not just making idle predictions, but has created a free program that allows you to play around with his own version of natural language programming.
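To make “immediately executable” natural language a little more concrete, here is a deliberately toy sketch in Python- my own illustration, not Wolfram’s system- in which a narrow, hand-picked slice of English is mapped straight onto running code. The function name and the phrase patterns it recognizes are invented for the example; a real natural language programming environment would have to cope with vastly more ambiguity than this.

```python
# A toy sketch (not Wolfram's actual system): a narrow, hand-picked slice
# of English mapped directly onto running code. The phrase patterns below
# are invented for illustration only.

def run_phrase(phrase: str) -> int:
    """Interpret a tiny fragment of English as an executable request."""
    words = phrase.lower().replace(",", "").split()
    if words[:4] == ["sum", "of", "the", "first"]:
        n = int(words[4])              # e.g. "sum of the first 100 squares"
        if "squares" in words:
            return sum(i * i for i in range(1, n + 1))
        return sum(range(1, n + 1))
    raise ValueError(f"I don't understand: {phrase!r}")

print(run_phrase("sum of the first 100 squares"))   # 338350
print(run_phrase("sum of the first 10 integers"))   # 55
```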

Wolfram makes some predictions as to what a world where natural language programming has become ubiquitous- where just as many people could code as can now write- might look like. The gap between law and code would largely disappear. The vast majority of people, including school children, would have the ability to program computers to do interesting things, including perform original research. As computers become embedded in objects, the environment itself will be open to programming by everyone.

All this would seem very good for us humans and would be even better given that Wolfram sees it as the prelude to the end of scarcity, including the scarcity of time that we now call death. But then comes the AI. Artificial intelligence will be both the necessary tool to explore the possibility space of the computational universe and the primary intelligence via which we interact with the entirety of the realm of human thought.  Yet at some threshold AI might leave us with nothing to do as it will have become the best and most efficient way to meet our goals.

What makes Wolfram nervous isn’t human extinction at the hands of super-intelligence so much as what becomes of us after scarcity and death have been eliminated and AI can achieve any goal- artistic ones included- better than us. This is Wolfram’s vision of the not too far off future, which, given the competition from even current reality, isn’t nearly weird enough. It’s only when he starts speculating on where this whole thing is ultimately headed that anything so strange as Boltzmann brains makes its appearance, yet something like them does, and no one should be surprised given his ideas about the nature of computation.

One of Wolfram’s most intriguing, and controversial, ideas is something he calls computational equivalence. With this idea he claims not only that computation is ubiquitous across nature, but that the line between intelligence and merely complicated behavior that grows out of ubiquitous natural computation is exceedingly difficult to draw.
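Wolfram’s standard exhibit for this claim is the elementary cellular automaton: a rule simple enough to fit in a single byte whose behavior nonetheless looks, at a glance, indistinguishable from randomness. Below is a minimal sketch of one such rule, Rule 30, written in Python purely as an illustration of how little machinery it takes for “merely complicated behavior” to start blurring into something mind-like.

```python
# Minimal sketch: Rule 30, one of Wolfram's elementary cellular automata.
# Each cell's next state depends only on itself and its two neighbors,
# yet the resulting pattern is complex enough that its center column has
# been used as a pseudo-random number generator.

RULE = 30

def step(cells):
    """Apply the rule to every cell (edges wrap around)."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        new.append((RULE >> neighborhood) & 1)               # read off the rule bit
    return new

cells = [0] * 31
cells[15] = 1                      # start from a single black cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```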

For Wolfram the colloquialism that “the weather has a mind of its own” isn’t just a way of complaining that the rain has ruined your picnic, but, in an almost panpsychic or pantheistic way, captures a deeper truth: natural phenomena are the enactment of a sort of algorithm, which, he would claim, is why we can successfully model their behavior with other algorithms we call computer “simulations.” The word simulations needs quotes because, if I understand him, Wolfram is claiming that there would be no difference between a computer simulation of something at a certain level of description and the real thing.

It’s this view of computation that leads Wolfram to his far future and his box of a trillion souls. For if there is no difference between a perfect simulation and reality, and if nothing will prevent us from creating perfect simulations at some point in the future, however far off, then it makes perfect sense to think that some digitized version of you, which as far as you are concerned will be you, could end up in a “box”, along with billions or trillions of similar digitized persons, and perhaps millions or more copies of you.

I’ve tried to figure out where exactly this conclusion from an idea I otherwise find attractive, that is computational equivalence, goes wrong, other than just in terms of my intuition or common sense. I think the problem might come down to the fact that while many complex phenomena in nature may have computer-like features, they are not universal Turing machines, i.e. general purpose computers, but machines whose information processing is very limited and specific to what is established by their makeup.

Natural systems, including animals like ourselves, are more like the Tic-Tac-Toe machine built by the young Danny Hillis and described in The Pattern on the Stone, his excellent primer on computers that is still insightful decades after its publication. Of course, animals such as ourselves can show vastly more types of behavior and exhibit a form of freedom of a totally different order than a game tree built out of circuit boards and lightbulbs, but, much like such a specialized machine, the way in which we think isn’t a form of generalized computation, but shows a definitive shape based on our evolutionary, cultural and personal history. In a way, Wolfram’s overgeneralization of computational equivalence negates what I find to be his equally or more important idea: the central importance of particular pasts in defining who we are as a species, as peoples and as individuals.
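Hillis’s machine hard-wired a fixed strategy into switches and lightbulbs. The sketch below is my own rough software analogue of that kind of special-purpose device, not a description of his actual wiring: a rigid, baked-in preference list rather than anything that deliberates, and certainly not a universal computer.

```python
# A rough software analogue of a hard-wired Tic-Tac-Toe machine: no search,
# no learning, just a fixed preference list baked into the "circuitry".
# (My own illustration; Hillis's actual machine encoded its strategy in
# physical switches and lightbulbs.)

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
             (0, 4, 8), (2, 4, 6)]                 # diagonals

def machine_move(board):
    """board is a list of 9 cells, each 'X', 'O' or ' '; the machine plays 'O'."""
    def winning_cell(player):
        for a, b, c in WIN_LINES:
            line = [board[a], board[b], board[c]]
            if line.count(player) == 2 and line.count(" ") == 1:
                return (a, b, c)[line.index(" ")]
        return None

    # Fixed, wired-in priorities: complete a win, block the opponent, take center.
    for cell in (winning_cell("O"), winning_cell("X"), 4):
        if cell is not None and board[cell] == " ":
            return cell
    # Otherwise the first open corner, then any open side.
    for cell in (0, 2, 6, 8, 1, 3, 5, 7):
        if board[cell] == " ":
            return cell
    raise ValueError("board is full")

print(machine_move(["X", " ", " ",
                    " ", "X", " ",
                    " ", " ", " "]))   # prints 8: blocks X's diagonal threat
```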

Oddly enough, Wolfram falls into the exact same trap that the science-fiction writer Stanislaw Lem fell into after he had hit upon an equally intriguing, though in some ways quite opposite understanding of computation and information.

Lem believed that the whole system of computation and mathematics human beings use to describe the world was a kind of historical artifact, for which there must be much better alternatives buried in the way systems that had evolved over time process information. A key scientific task, he thought, would be to uncover this natural computation and find ways to use it in the way we now use math and computation.

Where this leads him is to precisely the same conclusion as Wolfram: the possibility of building an actual world in the form of a simulation. He imagines the future designers of just such simulated worlds:

“Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything”; considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess.” (291-292)

Yet it seems to me that moving from the idea that particular things in the world- a storm, the structure of a seashell, the way particular types of problems are solved- are algorithmic, to the conclusion that the entirety of the world could hang together in one universal algorithm, is a massive overgeneralization. Perhaps there is some sense in which the universe might be said to be weakly analogous, not to one program, but to a computer language (the laws of physics) upon which an infinite ensemble of other programs can be instantiated, but which is structured so as to make some programs more likely to be run while ruling others impossible. Nevertheless, which programs actually get executed is subject to some degree of contingency- not all that happens in the universe is determined by initial conditions. Our choices actually count.

Still, such a view continues to treat the question of corporeal structure as irrelevant, whereas structure itself may be primary.

The idea of the world as code, or of DNA as a sort of code, is incredibly attractive because it implies a kind of plasticity, which equals power. What gets lost, however, is something of the artifact-like nature of everything that is: the physical stuff that surrounds us, life, our cultural environment. All that exists is the product of a unique history where every moment counts, and this history, as it were, is the anchor that determines what is real. Asserting that the world is, or could be, fully represented as a simulation either implies that such a simulation possesses the kinds of compression and abstraction, along with the ahistorical plasticity, that come with mathematics and code, or it doesn’t; and if it doesn’t, it’s difficult to say how anything like a person, let alone trillions of persons, or a universe, could actually, rather than merely symbolically, be contained in a box, even a beautiful one.

For the truly real can perhaps most often be identified by its refusal to be abstracted away or compressed and by its stubborn resistance to our desire to give it whatever shape we please.

 

How dark epistemology explains the rise of Donald Trump


We are living in what is likely the golden age of deception. It would be difficult enough were we merely threatened with drowning in what James Gleick has called the flood of information, or were we doomed to roam blind through the corridors of Borges’ library of Babel, but the problem is actually much worse than that. Our dilemma is that the very instruments that once promised liberation via universal access to all the world’s knowledge are just as likely to be used to sow the seeds of conspiracy, to manipulate us, and to obscure the path to the truth.

Unlike what passes for politicians these days, I won’t open with such a tirade only to walk away. Let me instead explain myself. You can trace the origins of our age of deception not only to the 2008 financial crisis but back much further, to its very root. Even before the 1950’s, elites believed they had the economic problem, and therefore the political instability that came with it, permanently licked. The solution was some measure of state intervention into the workings of capitalism.

These interventions ranged on a spectrum from the complete seizure and control of the economy by the state in communist countries, to regulation, social welfare and redistributive taxation in even the most solidly capitalist economies such as the United States. Here both the pro-business and pro-labor parties, despite the initial resistance of the former, ended up accepting the basic parameters of the welfare state. Remember it was the Nixon administration that both created the EPA and flirted with the idea of a basic income. By the early 1980’s, with the rise of Reagan and Thatcher, the hope that politics had become a realm of permanent consensus- Friedrich Engels’ prophesied “administration of things”- collapsed in the face of inflation, economic stagnation and racial tensions.

The ideological groundwork for this neo-liberal revolution had, however, been laid as far back as 1945, when state and expert directed economics was at its height. It was in that year that the Austrian economist Friedrich Hayek, in a remarkable essay entitled “The Use of Knowledge in Society”, pointed out that no central planner or director could ever be as wise as the collective perception and decision making of economic actors distributed across an entire economy.

At the risk of vastly oversimplifying his argument, what Hayek was in essence pointing out was that markets provide an unrivaled form of continuous and distributed feedback. The “five year plans” of state-run economies may or may not have been able to meet their production targets, but only the ultimate register of price can tell you whether any particular level of production is justified or not.

A spike in price is the consequence of an unanticipated demand and will send producers scrambling to meet it the moment it is encountered. The hubris behind rational planning is that it claims to be able to see through the uncertainty that lies at the heart of any economy, and that experts at 10,000 feet are somehow more knowledgeable than the people on the ground, who exist not in some abstract version of an economy built out of equations, but in the real thing.

It was perhaps one of the first versions of the idea of the wisdom of crowds, and an argument for what we now understand as the advantages of evolutionary approaches over deliberate design. It was also perhaps one of the first arguments that what lies at the very core of an economy is not so much the exchange of goods as the exchange of information.
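As a crude illustration of the feedback Hayek had in mind, here is a toy simulation in Python- my own invention, with made-up numbers, not a model of any real market- in which producers never observe the demand shock itself, only the price it generates, and adjust their output accordingly.

```python
# Toy illustration of price as a distributed feedback signal (invented
# numbers, not a model of any real market): producers never see the
# demand shock directly, only the price it produces.

price = 10.0
supply = 100.0

def demand(p, shock=0.0):
    """Quantity demanded falls as price rises; 'shock' is unanticipated."""
    return 160.0 - 6.0 * p + shock

for period in range(8):
    shock = 60.0 if period >= 3 else 0.0          # an unanticipated demand spike
    excess = demand(price, shock) - supply        # imbalance nobody plans for
    price += 0.05 * excess                        # the price registers the imbalance
    supply += 0.5 * (price - 10.0)                # producers react to price alone
    print(f"period {period}: price={price:5.2f}  supply={supply:6.1f}")
```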

The problem with Hayek’s understanding of economics and information wasn’t that it failed to capture the inadequacies of state-run economies, at least given the level of information technology they possessed when he was writing (a distinction I think important and hope to return to in the future), but that it was true for only part of the economy- the part dealing largely with the production and distribution of goods, and not the consumer economy that would take center stage after the Second World War.

Hayek’s idea that markets were better ways of conveying information than any kind of centralized direction worked well in a world of scarcity, where the problem was accurately gauging supply versus demand for a given resource, yet it missed that the new era would be one of engineered scarcity, where the key to economic survival was to convince consumers they had a “need” they had not previously identified. Or as John Kenneth Galbraith put it in his 1958 book The Affluent Society, we had:

… managed to transfer the sense of urgency in meeting consumer need that was once felt in a world where more production meant more food for the hungry, more clothing for the cold, and more houses for the homeless to a world where increased output satisfies the craving for more elegant automobiles, more exotic food, more elaborate entertainment- indeed for the entire modern range of sensuous, edifying, and lethal desires. (114-115).

Yet rather than seeing the economic problems of the 1970’s through this lens- that the difficulties we were experiencing were as much a matter of our expectations regarding what economic growth should look like, and a measure of our success in having rid ourselves (in advanced countries) of the kinds of life-threatening scarcity that had stalked all prior human generations- the US and Britain set off on the path prescribed by conservative economists such as Hayek and began to dismantle the hybrid market/state society that had been constructed after the Great Depression.

It was this revolt against state-directed (or even just state-restrained) capitalism that became the neoliberal gospel reigning almost everywhere after the fall of the Soviet Union, and to which the Clinton administration converted the Democratic party. The whole edifice came crashing down in 2008, since which we have become confused enough that demons long dormant have come home to roost.

At least since the crisis, economists have taken a renewed interest not only in the irrational elements of human economic behavior, but in how that irrationality has itself become a sort of saleable commodity. A good version of this is Robert J. Shiller and George Akerlof’s recent Phishing for Phools: The Economics of Manipulation and Deception. In their short book the authors examine the myriad ways all of us are “phished” – probed by con-artists looking for “phools” to take advantage of and manipulate.

The techniques have become increasingly sophisticated as psychologists have gotten a clearer handle on the typology of irrationality otherwise known as human nature. Gains in knowledge always come with tradeoffs:

“But theory of mind also has its downside. It also means we can figure out how to lure people into doing things that are in our interest, but not in theirs. As a result, many new ideas are not just technological. They are not ways to deliver good-for-you/good-for-me’s. They are, instead, new uses of the theory of mind,  regarding how to deliver good-for-me/bad-for-you’s.” (98)

This, it seems, would be the very opposite of the world dominated by non-zero-sum games that was heralded in the 1990’s; rather, it’s the construction of an entire society around the logic of the casino, where psychological knowledge is turned into a tool for getting consumers to make choices contrary to their own long-term interest.

This type of manipulation, of course, has been the basis of our economies for quite some time. What is different is the level of sophistication and resources being thrown at the problem of how to sustain human consumption in a world drowning in stuff. The solution has been to sell things that simply disappear after use- like experiences- which are made to take on the qualities of the ultimate version of such consumables, namely addictive drugs.

It might seem strange, but the Internet hasn’t made achieving safety from this manipulation any easier. Part of the reason is something Shiller and Akerlof do not fully discuss- that much of the information used in our economies serves the purpose not so much of selling things consumers would be better off avoiding, let alone of conveying actually useful information, as of distorting the truth to the advantage of those doing the distorting.

This is a phenomenon for which Robert Proctor has coined the term agnotology. It is essentially a form of dark epistemology whose knowledge consists in how to prevent others from obtaining the knowledge you wish to hide.

We live in an age too cultured for any barbarism such as book burning or direct censorship. Instead we have discovered alternative means of preventing the spread of information detrimental to our interests. The tobacco companies pioneered this. Outright denials of the health risks of smoking were replaced with the deliberate manufacture of doubt. Companies whose businesses models are threatened by any concerted efforts to address climate change have adopted similar methods.

Warfare itself, where the power of deception and disinformation was always better understood, has woken up to its potential in the digital age: witness the information war still being waged by Russia in eastern Ukraine.

All this I think partly explains the strange rise of Trump. Ultimately, neoliberal policies failed to sustain rising living standards for the working and middle classes- with incomes stagnant since the 1970’s. Perhaps this should never have been the goal in the first place.

At the same time we live in a media environment in which no one can be assumed to be telling the truth, in which everything is a sales pitch of one sort or another, and in which no institution’s narrative fails to be spun by its opponents into a conspiracy repackaged for maximum emotional effect. In an information ecosystem where trusted filters have failed, or are deemed irredeemably biased, and in which we are saturated by a flood of data so large it can never be processed, those who inspire the strongest emotions, even the emotion of revulsion, garner the only sustained attention. In such an atmosphere the fact that Trump is a deliberate showman whose pretense to authenticity is not that he is committed to core values, but that he is open about the very reality of his manipulations makes a disturbing kind of sense.

An age of dark epistemology will be ruled by those who can tap into the hidden parts of our nature, including the worst ones, for their own benefit, and who will prey on the fact that we no longer know what the truth is, nor how we could find it even if we still believed in its existence. Donald Trump is the perfect character for it.