Am I a machine?

A question I’ve been obsessing over lately goes something like this: Is life a form of computation, and if so, is this thought somehow dangerous?

Of course, drawing an analogy between life and the most powerful and complex tool human beings have so far invented wouldn’t be a historical first. There was a time in which people compared living organisms to clocks, and then to steam engines, and then, perhaps surprisingly, to telephone networks.

Thoughtful critics, such as the roboticist Rodney Brooks, think this computational metaphor has already exhausted its usefulness and that its overuse is proving detrimental to scientific understanding. In his view, scientists and philosophers should jettison the attempt to see everything in terms of computation and information processing, which has blinded them to other, perhaps more productive, metaphors hidden in the glare of a galloping Moore’s Law. Perhaps today it’s just the computer’s turn, and someday in the future we’ll have an even more advanced tool that will again seem the perfect model for the complexity of living beings. Or maybe not.

If instead we have indeed reached peak metaphor, it will be because with computation we really have discovered a tool that doesn’t just resemble life in terms of features, but reveals something deep about the living world- because it allows us for the first time to understand life as it actually is. It would be as if Giambattista Vico’s claim from the early days of the scientific revolution, “Verum et factum convertuntur”, had turned out to be strangely right after all these years: humanity can never know the natural world, only what we ourselves have made. Maybe we will finally understand life and intelligence because we are now able to recreate a version of it in a different substrate (not to mention engineering it in the lab). We will know life because we are at last able to make it.

But let me start with the broader story…

Whether we’re blinded by the power of our latest uber-tool, or on the verge of a revolution in scientific understanding, might matter less than the unifying power of a universal metaphor. And I would argue that science needs just such a unifying metaphor. It needs one if it is to give us a vision of a rationally comprehensible world. It needs one for the purpose of education, along with public understanding and engagement. Above all, a unifying metaphor is needed today as ballast against over-specialization, which traps the practitioners of the various branches of science (including the human sciences) in silos, unable to communicate with one another and thereby formulate a reunified picture of a world that science itself has artificially divided into fiefdoms as an essential first step towards understanding it. And of all the metaphors we have imagined, computation really does appear uniquely fruitful and revelatory, not just in biology but across multiple and radically different domains. A possible skeleton key for problems that have frustrated scientific understanding for decades.

One place where the computation/information analogy has taken hold over the past decades is fundamental physics, as an increasing number of researchers in the field have begun to borrow concepts from computer science in the hope of bridging the gap between general relativity and quantum mechanics.

This informational turn in physics can perhaps be traced back to the Israeli physicist Jacob Bekenstein, who way back in 1972 proposed what became known as the “Bekenstein bound”: an upper limit on the entropy, the information, that can exist in a finite region of space. Pack information any tighter than about 10⁶⁹ bits per square meter of bounding surface and that region will collapse to form a black hole. Physics, it seems, puts hard limits on our potential computational abilities (they’re a long way off), just as it places hard limits on our potential speed. What Bekenstein showed was that thinking of physics in terms of computation helped reveal something deeper not just about the physical world but about the nature of computation itself.
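That figure of roughly 10⁶⁹ bits per square meter can be recovered, at least as an order of magnitude, from the Bekenstein-Hawking entropy of a horizon, S = A/4l_p² (in nats). A minimal back-of-the-envelope sketch of the arithmetic, my own illustration rather than anything from Bekenstein’s paper:

```python
import math

# Planck length in meters (CODATA 2018 value)
PLANCK_LENGTH = 1.616255e-35
PLANCK_AREA = PLANCK_LENGTH ** 2

# Bekenstein-Hawking entropy of one square meter of horizon, A / (4 * l_p^2),
# is expressed in nats; dividing by ln(2) converts it to bits.
bits_per_square_meter = 1.0 / (4 * PLANCK_AREA * math.log(2))

print(f"{bits_per_square_meter:.2e} bits per square meter")  # ~1.4e+69
```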

This recasting of physics in terms of computation, often called digital physics, really got a boost from a 1989 essay by the physicist John Wheeler in which he coined the phrase “It from Bit.” It’s probably safe to say that no one has ever really understood the enigmatic Wheeler, who, if one wasn’t aware that he was one of the true geniuses of 20th century physics, might be mistaken for a mystic like Fritjof Capra, or heaven forbid, a woo-woo spouting conman- here’s looking at you, Deepak Chopra. The key idea of It from Bit is captured in this quote from that essay, a quote that also captures something of Wheeler’s koan-like style:

“It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.”

What distinguishes digital physics today from Wheeler’s late 20th century version is above all the fact that we live in a time when quantum computers have not only been given a solid theoretical basis, but have been practically demonstrated. A tool born from an almost offhand observation by the brilliant Richard Feynman, who in the 1980s declared what should have been obvious: “Nature isn’t classical . . . and if you want to make a simulation of Nature, you’d better make it quantum mechanical…”

What Feynman could not have guessed was that physics would make progress (to this point, at least) not from applying quantum computers to the problems of physics, so much as from applying ideas about how quantum computation might work to the physical world itself. Leaps of imagination, such as seeing space itself as a kind of quantum error correcting code, reformulating space-time and entropy in terms of algorithmic complexity or compression, or explaining how, as with Schrödinger’s infamous cat, superpositions existing at the quantum level break down for macroscopic objects as a consequence of computational complexity- the Schrödinger equation remaining effectively unsolvable for large objects so long as P ≠ NP.

There’s something new here, but perhaps also something very old, an echo of Empedocles’ vision of a world formed out of the eternal conflict between Love and Strife. If one could claim that for the ancient Greek philosopher life could only exist at the mid-point between total union and complete disunion, then we might assert that life can only exist in a universe with room for entropy to grow, neither curled up too small nor too dispersed for any structures to congregate- a causal diamond. In other words, life can only exist in a universe whose structure is computable.

Of course, one shouldn’t make the leap from claiming that everything is somehow explainable from the point of view of computation to claiming that “a rock implements every finite state automaton”, which, as David Chalmers pointed out, is an observation verging on the vacuous. We are not, as some in Silicon Valley would have it, living in a simulation, so much as in a world that emerges from a deep and continual computation of itself. In this view one needs some idea of a layered structure to nature’s computations, whereby simple computations performed at a lower level open up the space for more complex computations on a higher stage.

Digital physics, however, is mainly theoretical at this point and not without its vocal critics. In the end it may prove just another cul-de-sac rather than a viable road to the unification of physics. Only time will tell, but what may bolster it is the active development of quantum computers. Quantum theory and quantum technology, one can hope, might find themselves locked in an iterative process, each aiding the development of the other, in the same way that the development of steam engines propelled the development of thermodynamics, from which came yet more efficient steam engines in the 19th century. Above all, quantum computers may help physicists sort out the rival interpretations of what quantum mechanics actually means, with quantum mechanics perhaps ultimately understood as a theory of information. A point made brilliantly by the science writer Philip Ball in his book Beyond Weird.

There are thus good reasons for applying the computational metaphor to the realm of physics, but what about where it really matters for my opening question, that is, when it comes to life? To begin, what is the point of connection between computation in merely physical systems and in those we deem alive, and how do the two differ?

By far the best answer to these questions that I’ve found is the case made by John E. Mayfield in his book The Engine of Complexity: Evolution as Computation. Mayfield views both physics and biology in terms of the computation of functions. What distinguishes historical processes such as life, evolution, culture and technology from mere physical systems is their ability to pull specific structure from the much more general space of the physically possible. A common screwdriver, for example, is at the end of the day just a particular arrangement of atoms. It’s a configuration that, while completely consistent with the laws of physics, is also highly improbable, or, as the physicist Paul Davies put it in a more recent book on the same topic:

“The key distinction is that life brings about processes that are not only unlikely but impossible in any other way.”

Yet the existence of that same screwdriver is trivially understandable when explained with reference to agency. And this is just what Mayfield argues evolution and intelligence do: they create improbable yet possible structures in light of their own needs.

Fans of Richard Dawkins or students of scientific history might recognize an echo here of William Paley’s famous argument for design. Stumble upon a rock and its structure calls for little explanation; a pocket watch, with its intricate internal parts, gives clear evidence of design. Paley was nothing like today’s creationists; he had both a deep appreciation for and understanding of biology, and was an amazing writer to boot. The problem is he was wrong. What Darwin showed was that a designer wasn’t required for even the most complex of structures: random mutation plus selection gives us what look like miracles over sufficiently long periods of time.

Or at least evolution kind of does. The issue that most challenges the Theory of Natural Selection is the overwhelming size of the search space. When most random mutations are useless or even detrimental, how does evolution find not only structure, but the right structure, and even highly complex structure at that? It’s hard to see how monkeys banging away on typewriters for the entire age of the universe could get you the first pages of Hamlet, let alone Shakespeare himself.

The key, again it seems, is to apply the metaphor of computation. Evolution is a kind of search over the space of possible structures, but these are structures of a peculiar sort. What makes these useful structures peculiar is that they’re often computationally compressible. The code to express them is much shorter than the expression itself. As Jordana Cepelewicz put it for a piece at Quanta:

“Take the well-worn analogy of a monkey pressing keys on a computer at random. The chances of it typing out the first 15,000 digits of pi are absurdly slim — and those chances decrease exponentially as the desired number of digits grows.

But if the monkey’s keystrokes are instead interpreted as randomly written computer programs for generating pi, the odds of success, or “algorithmic probability,” improve dramatically. A code for generating the first 15,000 digits of pi in the programming language C, for instance, can be as short as 133 characters.”
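To make the compressibility point concrete- this is my own illustration, not the program the Quanta piece refers to- here is a short Python generator, based on Gibbons’ well-known unbounded spigot algorithm, whose few hundred characters stand in for an arbitrarily long stream of the digits of pi:

```python
def pi_digits():
    """Stream the decimal digits of pi using Gibbons' unbounded spigot algorithm."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is now settled
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print([next(gen) for _ in range(20)])
# [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4]
```

The program is vastly shorter than the digit string it can produce, which is the sense in which the digits of pi, like the useful structures evolution finds, are computationally compressible.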

In somewhat of an analogy to what Turing’s “primitives” are for machine-based computation, biological evolution requires only a limited number of actions to access a potentially infinite number of end states. Organisms need a way of exploring the search space; they need a way to record/remember/copy that information, along with the ability to compress this information so that it is easier to record, store and share. Ultimately, the same sort of computational complexity that stymies human-built systems may place limits on the types of solutions evolution itself is able to find.

The reconceptualization of evolution as a sort of computational search over the space of easily compressible possible structures is a case excellently made by Andreas Wagner in his book Arrival of the Fittest. Interestingly, Wagner draws a connection between this view of evolution and Plato’s idea of the forms. The irony, as Wagner points out, is that it was Plato’s idea of ideal types applied to biology that may have delayed the discovery of evolution in the first place. It’s not “perfect” types that evolution is searching over, for perfection isn’t something that exists in an ever-changing environment, but useful structures that are computationally discoverable. Molecular and morphological solutions to the problems encountered by an organism- analogs to codes for the first 15,000 digits of pi.

Does such evolutionary search always result in the same outcome? The late paleontologist and humanist Stephen Jay Gould would have said no. Replay the tape of evolution, like what happens to George Bailey in “It’s a Wonderful Life”, and you’d get radically different outcomes. Evolution is built upon historical contingency, like Homer Simpson wishing he hadn’t killed that fish.

Yet our view of evolution has become more nuanced since Gould’s untimely death in 2002. In his book Improbable Destinies, Jonathan Losos lays out the current research on the issue. Natural Selection is much more a tinkerer than an engineer. Its solutions are, by necessity, kludgey, a result of whatever is closest at hand. Evolution is constrained by the need for survival to move from next-best solution to next-best solution across the search space of adaptation, like a cautious traveler walking from stone to stone across a stream.

Starting points are thus extremely important for what Natural Selection is able to discover over a finite amount of time. That said, and contra Gould, similar physics and environmental demands do often lead to morphological convergence- think bats and birds with flight. But even when species gravitate towards a particular phenotype there is often a stunning amount of underlying genetic variety or indeterminacy. Such genetic variety within the same species can be leveraged into more than one solution to a challenging environment, with one branch getting bigger and the other smaller in response to, say, predation. Sometimes evolution at the microscopic level can, by pure chance, skip over intermediate solutions entirely and land at something close to the optimal solution, thanks to what in probabilistic terms should have been a fatal mutation.

The problem encountered here might be quite different from the one mentioned above: not finding the needle of useful structure in the infinite haystack of possible ones, but avoiding finding the same small sample of needles over and over again. In other words, if evolution really is so good at convergence, why does nature appear so abundant with variety?

Maybe evolution is just plain creative- a cabinet of freaks and wonders. So long as a mutation isn’t detrimental to reproduction, even if it provides no clear gain, Natural Selection is free to give it a whirl. Why do some salamanders have only four toes? Nobody knows.

This idea that evolution needn’t be merely utilitarian, but can be creative, is an argument made by the renowned mathematician Gregory Chaitin, who sees deep connections between biology, computation, and the ultimately infinite space of mathematical possibility itself. Evolutionary creativity is also found in the work of the ornithologist Richard O. Prum, whose controversial The Evolution of Beauty argues that we need to see the perception of beauty by animals in the service of reproductive or consumptive (as in bees and flowers) choice as something beyond a mere proxy measure for genetic fitness. Evolutionary creativity, propelled by sexual selection, can sometimes run in the opposite direction from fitness.

In Prum’s view evolution can be driven not by the search for fitness but by the way in which certain displays resonate with the perceivers doing the selection. In this way organisms become agents of their own evolution and the explored space of possible creatures and behaviors is vastly expanded from what would pertain were alignment to environmental conditions the sole engine of change.

If resonance with perceiving mates or pollinators acts to expand the search space of evolution in multicellular organisms with the capacity for complex perception- brains- then unicellular organisms do them one better through a kind of shared, parallel search over that space.

Whereas multicellular organisms can rely on sex as a way to expand the evolutionary search space, bacteria have no such luxury. What they have instead is an even more powerful form of search- horizontal gene transfer. Constantly swapping genes among themselves gives bacteria a sort of plug-n-play feature.

As Ed Yong brilliantly lays out in I Contain Multitudes, bacteria, in possession of nature’s longest-lived and most extensive tool kit, are often used by multicellular organisms to do things such as process foods or signal mates, which means evolution doesn’t have to reinvent the wheel. These bacteria aren’t “stupid”. In their clonal form as biofilms, bacteria not only exhibit specialized, cooperative structures, they also communicate with one another via chemical and electrical signalling- a slime internet, so to speak.

It’s not just in this area of evolution-as-search that the application of the computational metaphor to biology proves fruitful. Biological viruses, which use the cellular machinery of their hosts to self-replicate, bear a striking resemblance to computer viruses that do likewise in silicon. Plants, too, have characteristics that resemble human communications networks, as in forests whose trees communicate, coordinate responses to dangers, and share resources using vast fungal webs.

In terms of agents with limited individual intelligence whose collective behavior can nonetheless be considered quite sophisticated, nothing trumps the eusocial insects, such as ants and termites. In such forms of swarm intelligence, computation is analogous to what we find in cellular automata, where a complex task is tackled by breaking a problem into numerous much smaller problems solved in parallel, as in the sketch below.
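A toy illustration of that decomposition- my own sketch, not drawn from the swarm-intelligence literature- is an elementary cellular automaton: every cell updates simultaneously using only its immediate neighborhood, yet global structure emerges from purely local rules.

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.
    Each cell sees only itself and its two neighbors (wrapping at the edges)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" cell and watch a global pattern grow from local updates.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```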

Dennis Bray, in his superb book Wetware: A Computer in Every Living Cell, has provided an extensive reading of biological function through the lens of computation and information processing. Bray meticulously details how enzyme regulation is akin to switches, their up and down regulation like the on/off functions of a transistor, with chains of enzymes acting like electrical circuits. He describes how proteins form signal networks within an organism that allow cells to perform logical operations, the components linked not by wires but by molecular diffusion within and between cells. Cellular structures such as methyl groups allow cells to measure the concentration of attractants in the environment, in effect acting as counters- performing calculus.

Bray thinks that it is these protein networks rather than anything that goes on in the brain that are the true analog for computer programs such as artificial neural nets. Just as neural nets are trained, the connections between nodes in a network sculpted by inputs to derive the desired output, protein networks are shaped by the environment to produce a needed biological product.

The history of artificial neural nets, the foundation of today’s deep learning, is itself a fascinating study in the iterative relationship between computer science and our understanding of the brain. When Alan Turing first imagined his universal computer, it was the human brain, as then understood by the ascendant behavioral psychology, that he used as its template.

It wasn’t until 1943 that the first computational model of a neuron was proposed by McCulloch and Pitts, who would expand upon their work (together with Jerome Lettvin and Humberto Maturana) in a landmark 1959 paper, “What the Frog’s Eye Tells the Frog’s Brain”. The title might sound like a snoozer, but it’s certainly worth a read, as much for the philosophical leaps the authors make as for the science.

For what McCulloch and Pitts discovered in researching frog vision was that perception wasn’t passive but an active form of information processing, pulling distinct features, such as “bug detectors”, out of the environment- what they, leaning on Immanuel Kant of all people, called a physiological synthetic a priori.

It was a year earlier, in 1958, that Frank Rosenblatt superseded McCulloch and Pitts’ earlier and somewhat simplistic computational model of neurons with his perceptrons, whose development would shape what was to be the rollercoaster-like future of AI. Perceptrons were in essence single-layer artificial neural networks. What made them appear promising was the fact that they could “learn”. A single-layer perceptron could, for example, be trained to identify simple shapes, in a kind of analog of how the brain was discovered to be wired together by synaptic connections between neurons.

The early success of perceptrons, and it turned out the future history of AI, was driven into a ditch when in 1969 Marvin Minsky and Seymour Papert, in their landmark book Perceptrons, showed just how limited the potential of perceptrons as a way of mimicking cognition actually was. Single-layer perceptrons could handle only linearly separable functions such as AND and OR; they broke down on functions without such sharp boundaries, most famously XOR. In the 1970s AI researchers turned away from modeling the brain (the connectionist approach) and towards the symbolic nature of thought (the symbolist approach). The first AI Winter had begun.
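A minimal sketch of that limitation- my own illustration, not an example from Minsky and Papert’s book: the classic perceptron learning rule quickly finds weights for AND, but because XOR is not linearly separable no single-layer perceptron can learn it, and training never converges.

```python
import itertools

def train_perceptron(dataset, epochs=100, lr=0.1):
    """Classic single-layer perceptron learning rule on a tiny boolean dataset."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in dataset:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            if error:
                errors += 1
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
        if errors == 0:      # every example classified correctly
            return w, b
    return None              # never converged

inputs = list(itertools.product([0, 1], repeat=2))
AND = [(x, int(all(x))) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

print("AND:", train_perceptron(AND))  # separating weights found
print("XOR:", train_perceptron(XOR))  # None: not linearly separable
```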

The key to getting around these limitations proved to be using multiple layers of perceptrons, the approach we now call deep learning. Though we’ve known this since the 1980s, it took until now, with exponential improvements in computer hardware and the accumulation of massive data sets, for the potential of perceptrons to come home.
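Continuing the toy example above- again only a sketch- a single hidden layer is enough to fix the XOR failure. Here the two hidden units compute OR and NAND with hand-picked weights, and the output unit ANDs them together:

```python
def step_unit(inputs, weights, bias):
    """A single threshold unit: fire (1) if the weighted sum plus bias is positive."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_net(x1, x2):
    h_or = step_unit((x1, x2), (1, 1), -0.5)        # hidden unit 1: OR
    h_nand = step_unit((x1, x2), (-1, -1), 1.5)     # hidden unit 2: NAND
    return step_unit((h_or, h_nand), (1, 1), -1.5)  # output unit: AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```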

Yet it isn’t clear that most of the problems with perceptrons identified by Minsky and Papert have truly been solved. Much of what deep learning does can still be characterized as “line fitting”. What’s clear is that, whatever deep learning is, it bears little resemblance to how the brains of animals actually work, a point Minsky and Papert themselves made well at the end of their book:

“We think the difference in abilities comes from the fact that a brain is not a single, uniformly structured network. Instead, each brain contains hundreds of different types of machines, interconnected in specific ways which predestine that brain to become a large, diverse society of specialized agencies.” (273)

The mind isn’t a unified computer but an emergent property of a multitude of different computers, connected, yes, but also kept opaque from one another as a product of evolution and as the price paid for computational efficiency. It is because of this modular nature that minds remain invisible even to their possessors, although these computational layers may be stacked, with higher layers building off of the end products of the lower- possibly like the layered, unfolding nature of the universe itself. The origin of the mysteriously hierarchical structure of nature and human knowledge?

If Minsky and Papert were right about the nature of mind- if, as in an updated version of their argument recently made by Kevin Kelly, or a similar one made by the AI researcher François Chollet, there are innumerable versions of intelligence, many of which may be incapable of communicating with one another- then this has deep implications for the computational metaphor as applied to thought and life, and sets limits on how far we will be able to go in engineering, modeling, or simulating life with machines following principles laid down by Turing.

The quest of Paul Davies for a unified theory of information to explain life might be doomed from the start. Nature would be shown to utilize a multitude of computational models fit for their purposes, and not just multiple models between different organisms, but multiple models within the same organism. In the struggle between Turing universality and nature’s specialized machines, Feynman’s point that traditional computers just aren’t up to the task of simulating the quantum world may prove just as relevant for biology as it does for physics. To bridge the gap between our digital simulations and the complexity of the real we would need analog computers which more fully model what it is we seek to understand- though isn’t this what experiments are, a kind of custom-built analog computer? No amount of data or processing speed will eliminate the need for physical experiments. Bray himself says as much:

“A single leaf of grass is immeasurably more complicated than Wolfram’s entire opus. Consider the tens of millions of cells it is built from. Every cell contains billions of protein molecules. Each protein molecule in turn is a highly complex three-dimensional array of tens of thousands of atoms. And that is not the end of the story. For the number, location, and particular chemical state of each protein molecule is sensitive to its environment and recent history. By contrast, an image on a computer screen is simply a two-dimensional array of pixels generated by an iterative algorithm. Even if you allow that pixels can show multiple colors and that underlying software can embody hidden layers of processing, it is still an empty display. How could it be otherwise? To achieve complete realism, a computer representation of a leaf would require nothing less than a life-size model with details down to the atomic level and with a fully functioning set of environmental influences.” (103)

What this diversity of computational models means is not that the Church-Turing Thesis is false (Turing machines would still be capable of duplicating any other model of computation), but that its extended version is likely false. The Extended Church-Turing Thesis claims that any computation that can be performed efficiently by some physical means can be performed efficiently by a Turing machine, yet our version of Turing machines might prove incapable of efficiently duplicating either the extended capabilities of quantum computers, gained through leverage of the deep underlying structure of reality, or the twisted structures utilized by biological computers “designed” by the multi-billion-year force of evolutionary contingency. Yet even if the Extended Church-Turing Thesis turns out to be false, the lessons derived from reflection on idealized Turing machines and their real world derivatives will likely continue to be essential to understanding computation in both physics and biology.

Even if science gives us an indication that we are nothing but computers all the way down, from our atoms to our cells, to our very emergence as sentient entities, it’s not at all clear what this actually means. To conclude that we are, at bottom, the product of a nested system of computation is to claim that we are machines. A very special type of machine, to be sure, but machines nonetheless.

The word machine itself conjures up all kinds of ideas very far from our notion of what it means to be alive. Machines are hard, deterministic, inflexible and precise, whereas life is notably soft, stochastic, flexible and analog. If living beings are machines they are also supremely intricate machines, they are, to borrow from an older analogy for life, like clocks. But maybe we’ve been thinking about clocks all wrong as well.

As mentioned, the idea that life is like an intricate machine, and therefore is best understood by looking at the most intricate machines humans have made, namely clocks, has been with us since the very earliest days of the scientific revolution. Yet as the historian Jessica Riskin points out in her brilliant book The Restless Clock, there has from the beginning been a debate over what exactly was being captured in the analogy. As Riskin lays out, starting with Leibniz there was always a minority among the materialists arguing that life was clock-like precisely because what it meant to be a clock was to be “restless”, stochastic and undetermined. In the view of this school, sophisticated machines such as clocks or automatons gave us a window into what it meant to be alive, which, above all, meant to possess a sort of internally generated agency.

Life, in an updated version of this view, can be understood as computation, but it’s computation as performed by trillions upon trillions of interconnected, competing, cooperating soft machines, constructing the world through their senses and actions, each machine itself capable of some degree of freedom within an ever-changing environment. A living world, unlike a dead one, is a world composed of such agents, and to the extent our machines have been made sophisticated enough to possess something like agency, perhaps we should consider them alive as well.

Barbara Ehrenreich makes something like this argument in her caustic critique of American culture’s denial of death, Natural Causes:

“Maybe then, our animist ancestors were on to something that we have lost sight of in the last few hundred years of rigid monotheism, science, and Enlightenment. And that is the insight that the natural world is not dead, but swarming with activity, sometimes even agency and intentionality. Even the place where you might expect to find solidity, the very heart of matter- the interior of a proton or a neutron- turns out to be flickering with ghostly quantum fluctuations. I would not suggest that the universe is “alive”, since that might invite misleading biological analogies. But it is restless, quivering, and juddering from its vast patches to its tiniest crevices.” (204)

If this is what it means to be a machine, then I am fine with being one. This does not, however, address the question of danger. For the temptation of such fully materialist accounts of living beings, especially humans, is that they provide a justification for treating individuals as nothing more than a collection of parts.

I think we can avoid this risk even while retaining the notion that what life consists of is deeply explained by reference to computation as long as we agree that it is only ethical when we treat a living being in light of the highest level at which it understands itself, computes itself. I may be a Matryoshka doll of molecular machines, but I understand myself as a father, son, brother, citizen, writer, and a human being. In other words we might all be properly called machines, but we are a very special kind of machine, one that should never be reduced to mere tools, living, breathing computers in possession of an emergent property that might once have been called a soul.

 

How to avoid drowning in the Library of Babel


Between us and the future stands an almost impregnable wall that cannot be scaled. We cannot see over it, or under it, or through it, no matter how hard we try. Sometimes the best way to see the future is by using the same tools we use in understanding the present, which is also, at least partly, hidden from direct view by the dilemma inherent in our use of language. To understand anything we need language and symbols, but neither is the thing we are actually trying to grasp. Even the brain itself, through the senses, is a kind of funnel, giving us a narrow stream of information that we conflate with the entire world.

We often use science to get us beyond the box of our heads, and even, to a limited extent, to see into the future. Less discussed is the fact that we also use good metaphors, fables, and stories to attack the wall of the unknown obliquely- to get the rough outlines of something like the essence of reality while still not being able to see it as it truly is.

It shouldn’t surprise us that some of the most skilled writers at this sideways form of wall scaling ended up becoming blind. To be blind, especially to have become blind after a lifetime of being able to see, is to realize that there is a form to the world which is shrouded in darkness, that one can get to know only in bare outlines, and only with indirect methods. Touch in the blind goes from a way to know what something feels like for the sighted to an oblique and sometimes more revealing way to learn what something actually is.

The blind John Milton was a genius at using symbolism and metaphor to uncover a deeper and hidden reality, and he probably grasped more deeply than anyone before or after him that our age would be defined not by the chase after reality, but by our symbolic representation of its “value”, especially in the form of our ever more virtual talisman of money. Another, and much more recent, writer skilled at such an approach, whose work was even stranger than Milton’s in its imagery, was Jorge Luis Borges, whose writing defies easy categorization.

What Milton did for our confusion of the sign with the signified in Paradise Lost, Borges would do for the culmination of this symbolization in the “information age” with perhaps his most famous short story, “The Library of Babel”. A story that begins with the strange line:

The universe (which others call the Library) is composed of an indefinite, perhaps infinite number of hexagonal galleries. (112)

The inhabitants of Borges’ imagined Library (and one needs to say inhabitants rather than occupants, for the Library is not just a structure but the entire world, the universe in which all human beings exist) “live” in smallish compartments attached to vestibules on hexagonal galleries lined with shelves of books. Mirrors are so placed that when one looks out of one’s individual hexagon one sees a repeating pattern of galleries- the origin of arguments over whether the Library is infinite or just appears to be so.

All this would be little but an Escher drawing in the form of words were it not for other aspects that reflect Borges’ pure genius. The Library contains not just all books, but all possible books, and therein lies the dilemma of its inhabitants- for if all possible books exist, it is impossible to find any particular book.

Some book explaining the origin of the Library and its system of organization logically must exist, but that book is essentially undiscoverable, surrounded by a perhaps infinite number of other books, the vast majority of which are just nonsensical combinations of letters. How could the inhabitants solve such a puzzle? A sect called the Purifiers thinks it has the answer- destroy all the nonsensical books- but the futility of this project is soon recognized. Destruction barely puts a dent in the sheer number of nonsensical books in the Library.

The situation within the Library began with the hope that all knowledge, or at least the key to all knowledge, would be someday discovered but has led to despair as the narrator reflects:

I am perhaps misled by old age and fear but I suspect that the human species- the only species- teeters at the verge of extinction, yet that the Library- enlightened, solitary, infinite, perfectly unmoving, armed with precious volumes, pointless, incorruptible and secret will endure. (118)

The Library of Babel, published in 1941, before the “information age” was even an idea, seems to capture one of the paradoxes of our era. For the period in which we live is indeed experiencing an exponential rise in knowledge, in useful information, but this is not the whole of the story, and it does not fully reveal what is not just a wonder but a predicament.

The best exploration of this wonder and predicament is James Gleick’s recent, brilliant history of the “information age”, The Information: A History, A Theory, A Flood. In that book Gleick gives us a history of information from primitive times to the present, but the idea that everything- from our biology to our society to our physics- can be understood as a form of information is a very recent one. Gleick himself uses Borges’ strange tale of the Library of Babel as a metaphor through which we can better grasp our situation.

The imaginative leap into the information age dates from seven years after Borges’ story was published, to 1948, when Claude Shannon, a researcher at Bell Labs, published his paper “A Mathematical Theory of Communication”. Shannon was interested in how one could communicate messages over channels polluted with noise, but his seemingly narrow aim ended up being revolutionary for fields far beyond his purview. The concept of information was about to take over biology, in the form of DNA, and to conquer communications, in the form of the Internet and exponentially increasing powers of computation to compress and represent things as bits.

The physicist John Wheeler would come to see the physical universe itself and its laws as a form of digital information- “It from bit”.

Otherwise put, every “it” — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. “It from bit” symbolizes the idea that every item of the physical world has at bottom — a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

Gleick finds this understanding of reality as information to be both true in a deep sense and in many ways troubling. We are, it seems, constantly told that we are producing more information on a yearly basis than all of human history combined, although what this actually means is another thing. For there is a wide and seemingly unbridgeable gap between information and what would pass for knowledge or meaning. Shannon himself acknowledged this gap, and deliberately ignored it, for the meaningfulness of the information transmitted was, he thought, “irrelevant to the engineering problem” involved in transmitting it.

The Internet operates in this way, which is why a digital maven like Nicholas Negroponte is critical of the tech culture’s shibboleth of “net neutrality”. To put a humorous spin on his example, a digital copy of a regular-sized novel like Fahrenheit 451 is the equivalent in bits of one second of video from “What Does the Fox Say?”, which, however much it cracks me up, just doesn’t seem right. The information content of something says almost nothing about its meaningfulness, and this is a problem for anyone looking at claims that we are producing incredible amounts of “knowledge” compared to any age in the past.

The problem is perhaps even worse than it appears at first blush. Take these two sets of numbers:

9  2  9  5  5  3  4  7  7  10

And:

9  7  4  3  8  5  4  2  6  3

The first set of numbers is a truly random set of the numbers 1 through 10 created using a random number generator. The second set has a pattern. Can you see it?

In the second set every other number is a prime number between 2 and 7. Knowing this rule vastly shrinks the set of possible ten-number sequences the pattern can fit. But here’s the important takeaway: the first set of numbers contains vastly more information than the second set. Randomness increases the information content of a message. (If anyone out there with a better grasp of information theory thinks this example is somehow misleading or incorrect, please advise.)
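One rough way to put numbers on this- my own back-of-the-envelope, treating each position as uniform and independent- is to count the bits needed to specify each sequence:

```python
import math

# Ten positions, each uniformly one of the ten values 1..10: the fully random case.
random_bits = 10 * math.log2(10)

# The patterned case: five positions are restricted to a prime in {2, 3, 5, 7},
# so they carry log2(4) = 2 bits each; the other five still carry log2(10).
patterned_bits = 5 * math.log2(10) + 5 * math.log2(4)

print(f"random sequence:    {random_bits:.1f} bits")     # ~33.2
print(f"patterned sequence: {patterned_bits:.1f} bits")   # ~26.6
```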

The new tools of the age have meant that a flood of bits now inundates us, the majority of which is without deep meaning or value, or even any meaning at all. We are confronted with a burst dam of information pouring itself over us. Gleick captures our dilemma with a quote from the philosopher of “enlightened doomsaying” Jean-Pierre Dupuy:

I take “hell” in its theological sense, i.e., a place which is devoid of grace- the undeserved, unnecessary, surprising, unforeseen. A paradox is at work here: ours is a world about which we pretend to have more and more information but which seems to us increasingly devoid of meaning. (418)

It is not that useful and meaningful information is not increasing, but that our exponential increase in information of value- medical and genetic information and the like- has risen in tandem with an exponential increase in nonsense and noise. This is the flood of Gleick’s title, and the way to avoid drowning in it is to learn how to swim or get oneself a boat.

A key to not drowning is to know when to hold your breath. Pushing for more and more data collection might sometimes be a waste of resources and time because it is leading one away from actionable knowledge. If you think the flood of useless information hitting you that you feel compelled to process is overwhelming now, just wait for the full flowering of the “Internet of Things” when everything you own from your refrigerator to your bathroom mirror will be “talking” to you.

Don’t get me wrong, there are aspects of the Internet of Things that are absolutely great. Take, for example, the work of the company Indoo.rs, which has created a sort of digital map of the San Francisco International Airport that allows a blind person using Bluetooth to know: “the location of every gate, newsstand, wine bar and iPhone charging station throughout the 640,000-square-foot terminal.”

This is exactly the kind of purpose the Internet of Things should be used for. A modern form of miracle that would have brought a different form of sight to the blind like Milton and Borges. We could also take a cue from the powers of imagination found in these authors and use such technologies to make our experience of the world deeper, more multifaceted, more immersive. Something like that is a far cry from an “augmented reality” covered in advertisements, or work like that of the neuroscientist David Eagleman (whose projects I otherwise like) on a haptic belt that would massage its wearer into knowing instantaneously the contents of the latest Twitter feed or stock market gyrations.

Our digital technology, where it fails to make our lives richer, should be helping us automate- meaning make automatic and without thought- all that is least meaningful in our lives; this is what machines are good at. In areas of life that are fundamentally empty we should be decreasing, not increasing, our cognitive load. The drive for connectivity seems to be pushing in the opposite direction, forcing us to think about things that were formerly done by our thoughtless machines, which gave us less precision but also didn’t give us much to think about.

Yet even before the Internet of Things has fully bloomed we already have problems. As Gleick notes, in all past ages it was a given that most of our knowledge would be lost. Hence the logic of a wonderful social institution like the New York Public Library. Gleick writes with sharp irony:

Now expectations have inverted. Everything may be recorded and preserved…. Where does it end? Not with the Library of Congress.

Perhaps our answer is something like the Internet Archive, whose work capturing a record of the internet as it comes into being and disappears is both amazing and might someday prove essential to the survival of our civilization- the analog of the copies that were made of the famed collection at the lost Library of Alexandria. (Watch this film).

As individuals, however, we are still faced with the monumental task of separating the wheat from the chaff in the flood of information, the gems from the twerks. Our answer to this has been services which allow the aggregation and organization of this information, where as Gleick writes:

 Searching and filtering are all that stand between this world and the Library of Babel.  (410)

And here is the one problem I had with Gleick’s otherwise mind-blowing book: he doesn’t really deal with questions of power. A good deal of power in our information society is shifting to those who provide these kinds of aggregating and sifting capabilities and claim on that basis to be able to make sense of a world brimming with noise and nonsense. The NSA makes this claim, and is precisely the kind of intelligence agency one would predict to emerge in an information age.

Anne Neuberger, the Special Assistant to NSA Director Michael Rogers and the Director of the NSA’s Commercial Solutions Center, recently gave a talk at the Long Now Foundation. It was a hostile audience of Silicon Valley types who felt burned by the NSA’s undermining of digital encryption, of the reputation of American tech companies, and of civil liberties. But nothing captured the essence of our situation better than a question from Paul Saffo.

Paul Saffo: It seems like the intelligence community have always been data optimists. That if we just had more data, and we know that after every crisis, it’s ‘well if we’d just have had  more data we’d connect the dots.’ And there’s a classic example, I think of 9-11 but this was said by the Church Committee in 1975. It’s become common to translate criticism of intelligence results into demands for enlarged data collection. Does more data really lead to more insight?

As a friend is fond of saying ‘perhaps the data haystack that the intelligence community has created has become too big to ever find the needle in.’    

Neuberger’s telling answer was that the NSA needed better analytics, something the agency could learn from the big data practices of business. Gleick might have compared her answer to the plea for a demon made by a character in a story by Stanislaw Lem, whom he quotes:

 We want the Demon, you see, to extract from the dance of atoms only information that is genuine like mathematical formulas, and fashion magazines.  (The Information 425)

Perhaps the best possible future for the NSA’s internet-hoovering facility at Bluffdale would be for it to someday end up as a memory bank of our telecommunications in the 21st century, a more expansive version of the Internet Archive. What it is not, however, is a way to anticipate the future on either small or large scales, or at least that is what history tells us. More information or data does not equal more knowledge.

We are forced to find meaning in the current of the flood. It is a matter of learning what it is important to pay attention to and what to ignore, what we should remember and what we can safely forget, what the models we use to structure this information can tell us and what they can’t, what we actually know along with the boundaries of that which we don’t. It is not an easy job, or as Gleick says:

 Selecting the genuine takes work; then forgetting takes even more work.

And it’s a job, it seems, that just gets harder as our communications systems become ever more burdened by deliberate abuse (69.6 percent of email in 2014 was spam, up almost 25 percent from a decade earlier) and in an age when quite extraordinary advances in artificial intelligence are being used not to solve our myriad problems, but to construct an insidious architecture of government and corporate stalking via our personal “data trails”, which some confuse with our “self”. A Laplacian ambition that Dag Kittlaus unguardedly praises as “incremental revenue nirvana.” Pity the Buddha.

As individuals, institutions and societies we need to find ways to stay afloat and navigate through the flood of information, and its abuse, that has burst upon us, for as in Borges’ Library there is no final escape from the world it represents. As Gleick powerfully concludes The Information:

We are all patrons of the Library of Babel now and we are the librarians too.

The Library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors looking for lines of meaning amid the leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors in which we may recognize creatures of the information.