Am I a machine?

A question I’ve been obsessing over lately goes something like this: Is life a form of computation, and if so, is this thought somehow dangerous?

Of course, drawing an analogy between life and the most powerful and complex tool human beings have so far invented wouldn’t be a historical first. There was a time in which people compared living organisms to clocks, and then to steam engines, and then, perhaps surprisingly, to telephone networks.

Thoughtful critics, such as the roboticist Rodney Brooks, think this computational metaphor has already exhausted its usefulness and that its overuse is proving detrimental to scientific understanding. In his view, scientists and philosophers should jettison the attempt to see everything in terms of computation and information processing, which has blinded them to other, more productive metaphors hidden by the glare of a galloping Moore’s Law. Perhaps today it’s just the computer’s turn, and someday in the future we’ll have an even more advanced tool that will again seem the perfect model for the complexity of living beings. Or maybe not.

If instead we have indeed reached peak metaphor, it will be because with computation we really have discovered a tool that doesn’t just resemble life in terms of features, but reveals something deep about the living world, because it allows us for the first time to understand life as it actually is. It would be as if, after all these years, we’ve managed to prove Giambattista Vico’s early eighteenth-century claim “Verum et factum convertuntur” strangely right: humanity can never know the natural world, only what we ourselves have made. Maybe we will finally understand life and intelligence because we are now able to recreate a version of it in a different substrate (not to mention engineer it in the lab). We will know life because we are at last able to make it.

But let me start with the broader story…

Whether we’re blinded by the power of our latest uber-tool or on the verge of a revolution in scientific understanding, however, might matter less than the unifying power of a universal metaphor. And I would argue that science needs just such a unifying metaphor. It needs one if it is to give us a vision of a rationally comprehensible world. It needs one for the purposes of education, public understanding, and engagement. Above all, a unifying metaphor is needed today as ballast against over-specialization, which traps the practitioners of the various branches of science (including the human sciences) in silos, unable to communicate with one another and thereby to formulate a reunified picture of a world that science itself has artificially divided into fiefdoms as an essential first step toward understanding it. And of all the metaphors we have imagined, computation really does appear uniquely fruitful and revelatory, not just in biology but across multiple and radically different domains. A possible skeleton key for problems that have frustrated scientific understanding for decades.

One place where the computation/information analogy has grown over the past decades is in the area of fundamental physics, as an increasing number in the field have begun to borrow concepts from computer science in the hopes of bridging the gap between general relativity and quantum mechanics.

This informational turn in physics can perhaps be traced back to the Israeli physicist Jacob Bekenstein, who back in 1972 proposed what became known as the “Bekenstein bound”: an upper limit on the entropy, the information, that can exist in a finite region of space. Pack information any tighter than roughly 10⁶⁹ bits per square meter of bounding surface and that region will collapse to form a black hole. Physics, it seems, puts hard limits on our potential computational abilities (we’re a long way from hitting them), just as it places hard limits on our potential speed. What Bekenstein showed was that thinking of physics in terms of computation helped reveal something deeper, not just about the physical world but about the nature of computation itself.
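
Where does that 10⁶⁹ figure come from? As a back-of-the-envelope sketch (my own arithmetic, not Bekenstein’s derivation): the holographic form of the bound allows roughly one bit for every four Planck areas of bounding surface, which a few lines of Python can turn into a number:

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length and Planck area
l_p = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
planck_area = l_p**2              # ~2.6e-70 m^2

# Holographic bound: entropy scales with the boundary's area,
# about one bit per 4*ln(2) Planck areas
bits_per_m2 = 1 / (4 * planck_area * math.log(2))
print(f"~{bits_per_m2:.1e} bits per square meter")  # ~1.4e69
```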

This recasting of physics in terms of computation, often called digital physics, got a real boost from an essay by the physicist John Wheeler in 1989 titled “It from Bit.” It’s probably safe to say that no one has ever really understood the enigmatic Wheeler, who, if one wasn’t aware that he was one of the true geniuses of 20th century physics, might be mistaken for a mystic like Fritjof Capra or, heaven forbid, a woo-woo spouting conman- here’s looking at you, Deepak Chopra. The key idea of It from Bit is captured in this quote from his aforementioned essay, a quote that also captures something of Wheeler’s koan-like style:

“It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.”

What distinguishes digital physics today from Wheeler’s late 20th century version is above all the fact that we live in a time when quantum computers have not only been given a solid theoretical basis, but have been practically demonstrated. A tool born from an almost offhand observation by the brilliant Richard Feynman, who in the 1980s declared what should have been obvious: “Nature isn’t classical . . . and if you want to make a simulation of Nature, you’d better make it quantum mechanical…”

What Feynman could not have guessed was that physics would make progress (to this point, at least) not from applying quantum computers to the problems of physics so much as from applying ideas about how quantum computation might work to the physical world itself. Leaps of imagination, such as seeing space itself as a kind of quantum error correcting code, reformulating space-time and entropy in terms of algorithmic complexity or compression, or explaining how, as with Schrödinger’s infamous cat, superpositions existing at the quantum level break down for macroscopic objects as a consequence of computational complexity, the Schrödinger equation being effectively unsolvable for large objects so long as P ≠ NP.

There’s something new here, but perhaps also something very old, an echo of Empedocles’ vision of a world formed out of the eternal conflict between Love and Strife. If one could claim that for the ancient Greek philosopher life could only exist at the mid-point between total union and complete disunion, then we might assert that life can only exist in a universe with room for entropy to grow, neither curled up too small nor too dispersed for any structures to congregate- a causal diamond. In other words, life can only exist in a universe whose structure is computable.

Of course, one shouldn’t make the leap from claiming that everything is somehow explainable from the point of view of computation to making the claim that “a rock implements every finite state automaton”, which, as David Chalmers pointed out, is an observation verging on the vacuous. We are not, as some in Silicon Valley would have it, living in a simulation, so much as in a world that emerges from a deep and continual computation of itself. In this view one needs some idea of a layered structure to nature’s computations, whereby simple computations performed at a lower level open up the space for more complex computations on a higher stage.

Digital physics, however, is mainly theoretical at this point and not without its vocal critics. In the end it may prove just another cul-de-sac rather than a viable road to the unification of physics. Only time will tell, but what may bolster it is the active development of quantum computers. Quantum theory and quantum technology, one can hope, might find themselves locked in an iterative process, each aiding the development of the other, in the same way that the development of steam engines propelled the development of thermodynamics, from which came yet more efficient steam engines in the 19th century. Above all, quantum computers may help physicists sort out the rival interpretations of what quantum mechanics actually means, with quantum mechanics at the end of the day understood as a theory of information. A point made brilliantly by the science writer Philip Ball in his book Beyond Weird.

There are thus good reasons for applying the computational metaphor to the realm of physics, but what about where it really matters for my opening question, that is, when it comes to life? To begin, what is the point of connection between computation in merely physical systems and in those we deem alive, and how do they differ?

By far the best answer to these questions that I’ve found is the case made by John E. Mayfield in his book The Engine of Complexity: Evolution as Computation. Mayfield views both physics and biology in terms of the computation of functions. What distinguishes historical processes such as life, evolution, culture and technology from mere physical systems is their ability to pull specific structure from the much more general space of the physically possible. A common screwdriver, as an example, is at the end of the day just a particular arrangement of atoms. It’s a configuration that, while completely consistent with the laws of physics, is also highly improbable, or, as the physicist Paul Davies put it in a more recent book on the same topic:

“The key distinction is that life brings about processes that are not only unlikely but impossible in any other way.”

Yet the existence of that same screwdriver is trivially understandable when explained with reference to agency. And this is just what Mayfield argues evolution and intelligence do: they create improbable yet possible structures in light of their own needs.

Fans of Richard Dawkins or students of scientific history might recognize an echo here of William Paley’s famous argument for design. Stumble upon a rock and its structure calls out for little explanation; a pocket watch, with its intricate internal parts, gives clear evidence of design. Paley was nothing like today’s creationists: he had both a deep appreciation for and understanding of biology, and was an amazing writer to boot. The problem is he was wrong. What Darwin showed was that a designer wasn’t required for even the most complex of structures; random mutation plus selection gave us what look like miracles over sufficiently long periods of time.

Or at least evolution kind of does. The issue that most challenges the Theory of Natural Selection is the overwhelming size of the search space. When most random mutations are either useless or even detrimental, how does evolution find not only structure, but the right structure, and even highly complex structure at that? It’s hard to see how monkeys banging away on typewriters for the entire age of the universe get you the first pages of Hamlet let alone get you Shakespeare himself.

The key, again it seems, is to apply the metaphor of computation. Evolution is a kind of search over the space of possible structures, but these are structures of a peculiar sort. What makes these useful structures peculiar is that they’re often computationally compressible. The code to express them is much shorter than the expression itself. As Jordana Cepelewicz put it for a piece at Quanta:

“Take the well-worn analogy of a monkey pressing keys on a computer at random. The chances of it typing out the first 15,000 digits of pi are absurdly slim — and those chances decrease exponentially as the desired number of digits grows.

But if the monkey’s keystrokes are instead interpreted as randomly written computer programs for generating pi, the odds of success, or “algorithmic probability,” improve dramatically. A code for generating the first 15,000 digits of pi in the programming language C, for instance, can be as short as 133 characters.”
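
To make the compression point concrete, here is a hedged illustration rather than the actual 133-character C program the piece mentions: a short Python generator (Gibbons’ unbounded spigot algorithm) whose source is a few hundred characters, yet which pins down arbitrarily many digits of pi. The program is vastly shorter than its output, which is exactly what “computationally compressible” means.

```python
def pi_digits():
    """Yield decimal digits of pi one at a time, using only exact
    integer arithmetic (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < m * t:
            yield m  # the next digit is settled; emit it
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(20)))  # 31415926535897932384
```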

In somewhat of an analogy to Turing’s “primitives” for machine-based computation, biological evolution requires only a limited number of actions to access a potentially infinite number of end states. Organisms need a way of exploring the search space; they need a way to record/remember/copy that information, along with the ability to compress this information so that it is easier to record, store and share. Ultimately, the same sort of computational complexity that stymies human-built systems may place limits on the types of solutions evolution itself is able to find.

The reconceptualization of evolution as a sort of computational search over the space of easily compressible possible structures is a case excellently made by Andreas Wagner in his book Arrival of the Fittest. Interestingly, Wagner draws a connection between this view of evolution and Plato’s idea of the forms. The irony, as Wagner points out, is that it was Plato’s idea of ideal types applied to biology that may have delayed the discovery of evolution in the first place. It’s not “perfect” types that evolution is searching over, for perfection isn’t something that exists in an ever-changing environment, but useful structures that are computationally discoverable. Molecular and morphological solutions to the problems encountered by an organism- analogs to codes for the first 15,000 digits of pi.

Does such evolutionary search always result in the same outcome? The late paleontologist and humanist Stephen Jay Gould would have said no. Replay the tape of evolution, as happens to George Bailey in “It’s a Wonderful Life”, and you’d get radically different outcomes. Evolution is built upon historical contingency, like Homer Simpson wishing he hadn’t killed that fish.

Yet our view of evolution has become more nuanced since Gould’s untimely death in 2002. In his book Improbable Destinies, Jonathan Losos lays out the current research on the issue. Natural Selection is much more a tinkerer than an engineer. Its solutions are, by necessity, kludgey, a result of whatever is closest at hand. Evolution is constrained by the need for survival to move from next-best solution to next-best solution across the search space of adaptation, like a cautious traveler walking from stone to stone across a stream.

Starting points are thus extremely important for what Natural Selection is able to discover over a finite amount of time. That said, and contra Gould, similar physics and environmental demands do often lead to morphological convergence- think bats and birds with flight. But even when species gravitate towards a particular phenotype there is often a stunning amount of underlying genetic variety or indeterminacy. Such genetic variety within the same species can be leveraged into more than one solution to a challenging environment, with one branch getting bigger and the other smaller in response to, say, predation. Sometimes evolution at the microscopic level can, by pure chance, skip over intermediate solutions entirely and land at something close to the optimal solution thanks to what in probabilistic terms should have been a fatal mutation.

The problem encountered here might be quite different from the one mentioned above: not finding the needle of useful structure in the infinite haystack of possible ones, but avoiding finding the same small sample of needles over and over again. In other words, if evolution really is so good at convergence, why does nature appear so abundant with variety?

Maybe evolution is just plain creative- a cabinet of freaks and wonders. So long as a mutation isn’t detrimental to reproduction, even if it provides no clear gain, Natural Selection is free to give it a whirl. Why do some salamanders have only four toes? Nobody knows.

This idea that evolution needn’t be merely utilitarian, but can be creative, is an argument made by the renowned mathematician Gregory Chaitin, who sees deep connections between biology, computation, and the ultimately infinite space of mathematical possibility itself. Evolutionary creativity is also something found in the work of the ornithologist Richard O. Prum, whose controversial The Evolution of Beauty argues that we need to see the perception of beauty by animals in the service of reproductive or consumptive (as in bees and flowers) choice as something beyond a mere proxy measure for genetic fitness. Evolutionary creativity, propelled by sexual selection, can sometimes run in the opposite direction from fitness.

In Prum’s view evolution can be driven not by the search for fitness but by the way in which certain displays resonate with the perceivers doing the selection. In this way organisms become agents of their own evolution and the explored space of possible creatures and behaviors is vastly expanded from what would pertain were alignment to environmental conditions the sole engine of change.

If resonance with perceiving mates or pollinators acts to expand the search space of evolution in multicellular organisms with the capacity for complex perception- brains- then unicellular organisms do them one better through a kind of shared, parallel search over that space.

Whereas multicellular organisms can rely on sex as a way to expand the evolutionary search space, bacteria have no such luxury. What they have instead is an even more powerful form of search- horizontal gene transfer. Constantly swapping genes among themselves gives bacteria a sort of plug-n-play feature.

As Ed Yong brilliantly lays out in his I Contain Multitudes, being in possession of nature’s longest-lived and most extensive tool kit, bacteria are often used by multicellular organisms to do things such as process foods or signal mates, which means evolution doesn’t have to reinvent the wheel. These bacteria aren’t “stupid”. In their clonal form as biofilms, bacteria not only exhibit specialized, cooperative structures, they also communicate with one another via chemical and electrical signalling- a slime internet, so to speak.

It’s not just in this area of evolution as search that the application of the computational metaphor to biology proves fruitful. Biological viruses, which use the cellular machinery of their hosts to self-replicate, bear a striking resemblance to computer viruses that do likewise in silicon. Plants, too, have characteristics that resemble human communications networks, as in forests whose trees communicate, coordinate responses to dangers, and share resources using vast fungal webs.

In terms of agents with limited individual intelligence whose collective behavior is nonetheless quite sophisticated, nothing trumps the eusocial insects, such as ants and termites. In such forms of swarm intelligence, computation is analogous to what we find in cellular automata, where a complex task is tackled by breaking a problem up into numerous much smaller problems solved in parallel.
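
A minimal sketch of that idea (an elementary cellular automaton, not a model of any actual insect colony): every cell follows the same trivial local rule, every cell updates in parallel, and intricate global structure emerges anyway.

```python
def step(cells, rule=110):
    """One synchronous update: each cell computes its next state in
    parallel, looking only at itself and its two nearest neighbors."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 79
cells[39] = 1  # a single "on" cell in the middle
for _ in range(30):  # watch complex structure grow from a trivial local rule
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```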

Dennis Bray in his superb book Wetware: A Computer in Every Living Cell has provided an extensive reading of biological function through the lens of computation and information processing. Bray meticulously details how enzyme regulation is akin to switching: up- and down-regulation work like the on/off functions of a transistor, with chains of enzymes acting like electrical circuits. He describes how proteins form signal networks within an organism that allow cells to perform logical operations, the components linked not by wires but by molecular diffusion within and between cells. Chemical marks such as methyl groups allow cells to measure the concentration of attractants in the environment, in effect acting as counters- performing calculus.
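
A cartoon of those two ideas, not real biochemistry: an enzyme “switch” acting as an AND gate, and a slowly adapting memory (standing in for receptor methylation) that lets a cell respond to the change in an attractant’s concentration rather than its absolute level. All thresholds and rates here are made up for illustration.

```python
def protein_and_gate(signal_a, signal_b, threshold=0.5):
    """Toy switch: the downstream protein turns on (1.0) only when
    both upstream signals exceed an activation threshold: an AND gate."""
    return 1.0 if signal_a > threshold and signal_b > threshold else 0.0

def chemotaxis_response(concentrations, adaptation=0.9):
    """Toy comparator: a slowly adapting running average plays the role
    of methylation 'memory', so the cell reads the *change* in attractant
    concentration, a crude derivative."""
    memory, responses = concentrations[0], []
    for c in concentrations:
        responses.append(round(c - memory, 3))  # positive: things are improving
        memory = adaptation * memory + (1 - adaptation) * c
    return responses

print(protein_and_gate(0.8, 0.9))                   # 1.0
print(chemotaxis_response([1.0, 1.2, 1.4, 1.4, 1.1]))
```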

Bray thinks that it is these protein networks rather than anything that goes on in the brain that are the true analog for computer programs such as artificial neural nets. Just as neural nets are trained, the connections between nodes in a network sculpted by inputs to derive the desired output, protein networks are shaped by the environment to produce a needed biological product.

The history of artificial neural nets, the foundation of today’s deep learning, is itself a fascinating study in the iterative relationship between computer science and our understanding of the brain. When Alan Turing first imagined his Universal Computer it was the human brain, as then understood by the ascendant behaviorist psychology, that he used as its template.

It wasn’t until 1943 that the first computational model of a neuron was proposed by McCulloch and Pitts, who, together with Jerome Lettvin and Humberto Maturana, would expand upon their work with a landmark paper in 1959, “What the Frog’s Eye Tells the Frog’s Brain”. The title might sound like a snoozer, but it’s certainly worth a read, as much for the philosophical leaps the authors make as for the science.

For what the authors discovered in researching frog vision was that perception wasn’t passive but an active form of information processing, pulling distinct features such as “bug detectors” out of the environment: what they, leaning on Immanuel Kant of all people, called a physiological synthetic a priori.

It was a year before the frog paper, in 1958, that Frank Rosenblatt superseded McCulloch and Pitts’ earlier and somewhat simplistic computational model of neurons with his perceptrons, whose development would shape what was to be the rollercoaster-like future of AI. Perceptrons were in essence single-layer artificial neural networks. What made them appear promising was the fact that they could “learn”. A single-layer perceptron could, for example, be trained to identify simple shapes, in a kind of analog to how the brain was discovered to be wired together by synaptic connections between neurons.
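
Here is a minimal sketch of Rosenblatt’s learning rule, trained on the OR function rather than shapes (my choice, purely for brevity): after each example, the weights are nudged in proportion to the error until the outputs match the targets.

```python
import random
random.seed(0)

# Truth table for OR, a function a single-layer perceptron can learn
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [random.uniform(-1, 1), random.uniform(-1, 1)], 0.0, 0.1

def predict(x):
    # Fire (output 1) only when the weighted sum crosses the threshold
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(50):  # perceptron learning rule: nudge weights by the error
    for x, target in data:
        error = target - predict(x)
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```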

The early success of perceptrons, and as it turned out the future history of AI, was driven into a ditch when in 1969 Marvin Minsky and Seymour Papert, in their landmark book Perceptrons, showed just how limited the potential of perceptrons as a way of mimicking cognition actually was. A single-layer perceptron can only learn functions whose cases can be separated by a straight line: it handles AND and OR, but breaks down on something as simple as XOR, which is not linearly separable. In the 1970s AI researchers turned away from modeling the brain (connectionist) and towards the symbolic nature of thought (symbolist). The first AI Winter had begun.

The key to getting around these limitations proved to be using multiple layers of perceptrons, what we now call deep learning. Though we’ve known this since the 1980s, it took until now, with our exponential improvement in computer hardware and accumulation of massive data sets, for the potential of perceptrons to come home.
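
To see what the extra layers buy you, here’s a hedged toy (a two-layer network trained by backpropagation; the hidden size, learning rate, and iteration count are all invented for illustration): the XOR function that defeats a single perceptron yields to a hidden layer of four units.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backpropagate the squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```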

Yet it isn’t clear that most of the problems with perceptrons identified by Minsky and Papert have truly been solved. Much of what deep learning does can still be characterized as “line fitting”. What’s clear is that, whatever deep learning is, it bears little resemblance to how the brains of animals actually work, something well described by the authors at the end of their book:

“We think the difference in abilities comes from the fact that a brain is not a single, uniformly structured network. Instead, each brain contains hundreds of different types of machines, interconnected in specific ways which predestine that brain to become a large, diverse society of specialized agencies.” (273)

The mind isn’t a unified computer but an emergent property of a multitude of different computers, connected, yes, but also kept opaque from one another as a product of evolution and as the price paid for computational efficiency. It is because of this modular nature that minds remain invisible even to their possessors, although these computational layers may be stacked, with higher layers building off the end products of the lower, possibly like the layered, unfolding nature of the universe itself. The origin of the mysteriously hierarchical structure of nature and human knowledge?

If Minsky and Papert were right about the nature of mind, if, as in an updated version of their argument recently made by Kevin Kelly, or a similar one made by the AI researcher François Chollet, there are innumerable versions of intelligence, many of which may be incapable of communicating with one another, then this has deep implications for the computational metaphor as applied to thought and life, and sets limits on how far we will be able to go in engineering, modeling, or simulating life with machines following principles laid down by Turing.

The quest of Paul Davies for a unified theory of information to explain life might be doomed from the start. Nature would be shown to utilize a multitude of computational models fit for their purposes, and not just multiple models between different organisms, but multiple models within the same organism. In the struggle between Turing universality and nature’s specialized machines, Feynman’s point that classical computers just aren’t up to the task of simulating the quantum world may prove just as relevant for biology as it does for physics. To bridge the gap between our digital simulations and the complexity of the real we would need analog computers which more fully model what it is we seek to understand, though isn’t this what experiments are- a kind of custom-built analog computer? No amount of data or processing speed will eliminate the need for physical experiments. Bray himself says as much:

“A single leaf of grass is immeasurably more complicated than Wolfram’s entire opus. Consider the tens of millions of cells it is built from. Every cell contains billions of protein molecules. Each protein molecule in turn is a highly complex three-dimensional array of tens of thousands of atoms. And that is not the end of the story. For the number, location, and particular chemical state of each protein molecule is sensitive to its environment and recent history. By contrast, an image on a computer screen is simply a two-dimensional array of pixels generated by an iterative algorithm. Even if you allow that pixels can show multiple colors and that underlying software can embody hidden layers of processing, it is still an empty display. How could it be otherwise? To achieve complete realism, a computer representation of a leaf would require nothing less than a life-size model with details down to the atomic level and with a fully functioning set of environmental influences.” (103)

What this diversity of computational models means is not that the Church-Turing Thesis is false (Turing machines would still be capable of duplicating any other model of computation), but that its extended version is likely false. The Extended Church-Turing Thesis claims that any computation that can be performed efficiently at all can be performed efficiently by a Turing machine, yet our version of Turing machines might prove incapable of efficiently duplicating either the extended capabilities of quantum computers, gained through leverage of the deep underlying structure of reality, or the twisted structures utilized by biological computers “designed” by the multi-billion-year force of evolutionary contingency. Yet even if the Extended Church-Turing Thesis turns out to be false, the lessons derived from reflection on idealized Turing machines and their real-world derivatives will likely continue to be essential to understanding computation in both physics and biology.

Even if science gives us an indication that we are nothing but computers all the way down, from our atoms to our cells, to our very emergence as sentient entities, it’s not at all clear what this actually means. To conclude that we are, at bottom, the product of a nested system of computation is to claim that we are machines. A very special type of machine, to be sure, but machines nonetheless.

The word machine itself conjures up all kinds of ideas very far from our notion of what it means to be alive. Machines are hard, deterministic, inflexible and precise, whereas life is notably soft, stochastic, flexible and analog. If living beings are machines they are also supremely intricate machines; they are, to borrow from an older analogy for life, like clocks. But maybe we’ve been thinking about clocks all wrong as well.

As mentioned, the idea that life is like an intricate machine, and therefore is best understood by looking at the most intricate machines humans have made, namely clocks, has been with us since the very earliest days of the scientific revolution. Yet as the historian Jessica Riskin points out in her brilliant book The Restless Clock, since that day there has been a debate over what exactly was being captured in the analogy. As Riskin lays out, starting with Leibniz there was always a minority among the materialists arguing that life was clock-like precisely because to be a clock was to be “restless”, stochastic and undetermined. In the view of this school, sophisticated machines such as clocks or automatons gave us a window into what it meant to be alive, which, above all, meant to possess a sort of internally generated agency.

Life, in an updated version of this view, can be understood as computation, but it’s computation as performed by trillions upon trillions of interconnected, competing, cooperating soft machines, constructing the world through their senses and actions, each machine itself capable of some degree of freedom within an ever-changing environment. A living world, unlike a dead one, is a world composed of such agents, and to the extent our machines have been made sophisticated enough to possess something like agency, perhaps we should consider them alive as well.

Barbara Ehrenreich makes something like this argument in her caustic critique of American culture’s denial of death, Natural Causes:

“Maybe then, our animist ancestors were on to something that we have lost sight of in the last few hundred years of rigid monotheism, science, and Enlightenment. And that is the insight that the natural world is not dead, but swarming with activity, sometimes even agency and intentionality. Even the place where you might expect to find solidity, the very heart of matter- the interior of a proton or a neutron- turns out to be flickering with ghostly quantum fluctuations. I would not suggest that the universe is “alive”, since that might invite misleading biological analogies. But it is restless, quivering, and juddering from its vast patches to its tiniest crevices.” (204)

If this is what it means to be a machine, then I am fine with being one. This does not, however, address the question of danger. For the temptation of such fully materialist accounts of living beings, especially humans, is that they provide a justification for treating individuals as nothing more than a collection of parts.

I think we can avoid this risk, even while retaining the notion that what life consists of is deeply explained by reference to computation, as long as we agree that it is only ethical to treat a living being in light of the highest level at which it understands itself, computes itself. I may be a Matryoshka doll of molecular machines, but I understand myself as a father, son, brother, citizen, writer, and human being. In other words, we might all properly be called machines, but we are a very special kind of machine, one that should never be reduced to mere tools: living, breathing computers in possession of an emergent property that might once have been called a soul.

 

The Evolution of Chains

“Progress in society can be thought of as the evolution of chains….

 Phil Auerswald, The Code Economy

“I would prefer not to.”

Herman Melville, Bartleby the Scrivener

The 21st century has proven to be all too much like living in a William Gibson novel. It may be sad, but at least it’s interesting. What gives the times this cyberpunk feel isn’t so much the ambiance (instead of the dark cool of noir we’ve got frog memes and LOLcats) as the fact that absolutely every major happening, and all of our interaction with the wider world, has something to do with the issue of computation, of code.

Software has not only eaten the economy, as Marc Andreessen predicted back in 2011, but also culture, politics, and war. To understand both the present and the future, then, it is essential to get a handle on what exactly this code that now dominates our lives is, to see it in its broader historical perspective.

Phil Auerswald managed to do something of the sort with his book The Code Economy: A Forty-Thousand-Year History. It’s a book that tries to take us to the very beginning, to chart the long march of code to sovereignty over human life. The book defines code (this will prove part of its shortcomings) broadly as a “recipe”, a standardized way of achieving some end. Looked at this way, human beings have been a species dominated by code since our beginnings, that is, since the emergence of language, but there have been clear leaps closer to the reign of code along the way, with Auerswald seeing the invention of writing as the first. Written language, it seems, was the invention of bureaucrats, a way for the scribes in ancient Sumer to keep track of temple tributes and debts. And though it soon broke out of these chains and proved a tool as much for building worlds and wonders like the Epic of Gilgamesh as a radical new means of power and control, code has remained linked to the domination of one human group over another ever since.

Auerswald is mostly silent on the mathematical history of code before the modern age. I wish he had spent time discussing predecessors of the computer such as the abacus, the Antikythera mechanism, clocks, and especially the astrolabe. Where exactly did this idea of mechanizing mathematics actually emerge, and what are the continuities and discontinuities between different forms of such mechanization?

Instead, Gottfried Leibniz, who envisioned the binary arithmetic that underlies all of our modern-day computers, is presented as if he had just fallen out of the Matrix. Though Leibniz’s genius does seem almost inexplicable and sui generis. A philosopher who arguably beat Isaac Newton to the invention of calculus, he was also an inventor of ingenious calculating machines and an almost real-life version of Nostradamus. In a letter to Christiaan Huygens, Leibniz predicted that with machines using his new binary arithmetic: “The human race will have a new kind of instrument which will increase the power of the mind much more than optical lenses strengthen the eyes.” He was right, though it would be nearly 300 years until these instruments were up and running.

Something I found particularly fascinating is that Leibniz found a premonition of his binary number system in the Chinese system of divination, the I Ching, which had been brought to his attention by Father Bouvet, a Jesuit priest then living in China. It seems that from the beginning the idea of computers, and the code underlying them, has been wrapped up with the human desire to know the future in advance.

Leibniz believed that the widespread adoption of binary would make calculation more efficient and thus mechanical calculation easier, but it wouldn’t be until the industrial revolution that we actually had the engineering to make his dreams a reality. Two developments especially aided the emergence of what we might call proto-computers in the 19th century: the first was the division of labor identified by Adam Smith, the second was Jacquard’s loom. Much like the first lurch toward code that came with the invention of writing, this move was seen as a tool that would extend the reach and powers of bureaucracy.

Smith’s idea that efficiency was to be gained by breaking down complex tasks into simple, easily repeatable steps served as the inspiration for applying the same methods to computation itself. Faced with the prospect of not having enough trained mathematicians to create the extensive logarithmic and trigonometric tables upon which modern states and commerce were coming to depend, innovators such as Gaspard de Prony in France hit upon the idea of breaking complex computation down into simple tasks. The human “computer” was born.

It was the mechanical loom of Joseph Marie Jacquard that in the early 1800s proved that humans themselves weren’t needed once the creation of a complex pattern had been reduced to routine tasks. It was merely a matter of drawing the two, rote human computation and machine-enabled pattern making, together to show the route to Leibniz’s new instrument for thought. And it was Charles Babbage, along with his young assistant Ada Lovelace, who seemed to grasp the implications, beyond mere data crunching, of building machines that could “think”, and who would do precisely this.

Babbage’s Difference and Analytical Engines, however, remained purely things of the mind. Early industrial-age engineering had yet to catch up with Leibniz’s daydreams. Society’s increasing need for data instead came to be served by a growing army of clerks along with simple adding machines and the like.

In an echo of the Luddites, who rebelled against the fact that their craft was being supplanted by the demands of the machine, at least some of these clerks must have found knowledge reduced to information processing dehumanizing. Melville’s Bartleby the Scrivener gives us a portrait of a man who’s become like a computer stuck in a glitching loop that eventually leads to his being scrapped.

What I think is an interesting aside here: Auerswald doesn’t really explore the philosophical assumptions behind the drive to make the world computable. Melville’s Bartleby, for example, might have been used as a jumping-off point for a discussion of how both the computer and the view of human beings as a sort of automata meant to perform a specific task emerged out of a thoroughly deterministic worldview. This view, after all, was the main target of Melville’s short story, in which a living person has come to resemble a sort of flawed windup toy, and which makes direct reference to religious and philosophical tracts arguing in favor of determinism: namely, the firebrand preacher Jonathan Edwards’ Freedom of the Will and the chemist and utopian Joseph Priestley’s Doctrine of Philosophical Necessity.

It is the effect of the development of machine-based code on these masses of 19th and early 20th century white-collar workers, rather than the more common ground of automation’s effect on the working class, that most interests Auerswald. And he has reasons for this, given that the current surge of AI seems most to threaten low-level clerical workers rather than blue-collar workers, whose jobs have already been automated or outsourced, and given that the menial occupations that have replaced jobs in manufacturing require too much dexterity for even the most advanced robots.

As Auerswald points out, fear of white-collar unemployment driven by the automation of cognitive tasks is at least as old as the invention of the Burroughs calculating machine in the 1880s. Yet rather than leading to a mass of unemployed clerks, the adoption of adding machines, typewriters, and dictaphones only increased the number of people needed to manage and process information. That is, the penetration of code into society, or the desire for society to conform to the demands of code, placed even higher demands on both humans and machines. Perhaps that historical analogy will hold for us as well. Auerswald doesn’t seem to think that this is a problem.

By the start of the 20th century we still weren’t sure how to automate computation, but we were getting close. In an updated version of de Prony, the meteorologist Lewis Fry Richardson showed how we could predict the weather armed with a factory of human computers. There are echoes of both to be seen in the low-paid laborers working for platforms such as Amazon’s Mechanical Turk, who still provide the computation behind the magic act that is much of contemporary AI.

Yet it was World War II and the Cold War that followed that would finally bootstrap Leibniz’s dream of the computer into reality. The names behind this computational revolution are now legendary: Vannevar Bush, Norbert Wiener, Claude Shannon, John von Neumann, and especially Alan Turing.

It was these figures who laid the cornerstone for what George Dyson has called “Turing’s Cathedral”, for like the great medieval cathedrals, the computational megastructure in which all of us are now embedded was the product of generations of people dedicated to giving substance to the vision of its founders. Unlike the builders of Notre Dame or Chartres, who labored ad majorem Dei gloriam, those who constructed Turing’s Cathedral were driven by ideas regarding the nature of thought and the power and necessity of computation.

Surprisingly, this model of computation created in the early 20th century by Turing and his fellow travelers continues to underlie almost all of our digital technology today. It’s a model that may be reaching its limits, and that may need to be replaced with a more environmentally sustainable form of computation, one actually capable of helping us navigate and find the real structure of information in a world whose complexity we are finding so deep as to be intractable, and in which our own efforts at mastery result only in a yet harder-to-solve maze. It’s a story Auerswald doesn’t tell, and one that must wait until another time.

Rather than explore what computation itself means, or what its future might be, Auerswald throws wide open the definition of what constitutes a code. As Erwin Schrödinger observed in his prescient essay What is Life?, and as we later learned with the discovery of DNA, a code does indeed stand at the root of every living thing. Yet in pushing the definition of code ever farther from its origins in the exchange of information and detailed instructions on how to perform a task, something is surely lost. Thus Auerswald sees not only computer software and DNA as forms of code, but also the work of the late celebrity chef Julia Child, the McDonald’s franchise, WalMart, and even the economy itself, all as forms of it.

As I suggested at the beginning of this essay, there might be something to be gained by viewing the world through the lens of code. After all, code of one form or another is now the way in which most of the largely private, and remaining government, bureaucracies that run our lives function. The sixties radicals terrified that computers would be the ultimate tool of bureaucrats guessed better where we were headed. “Do not fold, spindle or mutilate.” Right on. Code has also become the primary tool through which the system is hacked- forced to buckle and glitch in ways that expose its underlying artificiality, freeing us for a moment from its claustrophobic embrace. Unfortunately, most of its attackers aren’t rebels fighting to free us, but barbarians at the gates.

Here is where Auerswald really lets us down. Seeing progress as tied to our loss of autonomy and the development of constraints, he seems to think we should just accept our fetters, now made out of 1s and 0s. Yet within his broad definition of what constitutes code, it would appear that at least the creation of laws remains in our power. At least in democracies, the law is something that the people collectively make. Indeed, if any social phenomenon can be said to be code-like it would be law. It’s surprising, then, that Auerswald gives it no mention in his book. Meaning it’s not really surprising at all.

You see, there’s an implicit political agenda underneath Auerswald’s whole understanding of code, or at least a whole set of political and economic assumptions. These are assumptions regarding the naturalness of capitalism, the beneficial nature of behemoths built on code such as WalMart, the legitimacy of global standard-setting institutions and protocols, and the dominance of platforms over the other “trophic layers” in the information economy that provide the actual technological infrastructure on which these platforms run. Perhaps not even realizing that these are just assumptions rather than natural facts, Auerswald never develops or defends them. Certainly, code has its own political economy, but to see it we will need to look elsewhere. Next time.

 

Iamus Returns

A reader, Dan Fair, kindly posted a link to the release of the full album composed by the artificial intelligence program Iamus in the comments section of my piece Turing and the Chinese Room Part 2 from several months back.

I took the time to listen to the whole album today (you can too by clicking on the picture above). Not being trained as a classical musician, or having much familiarity with the abstract style in which the album was composed, makes it impossible for me to judge the quality of the work.

Over and above the question of quality, I am not sure how I feel about Iamus and “his” composition. As I mentioned to Dan, the optimistic side of me sees in this the potential to democratize human musical composition.

Yet, as I mentioned in the Turing post, the very knowledge that there is no emotional meaning being conveyed behind the work leaves it feeling emotionally dead and empty for me, compared to another composition composed, like those of Iamus, in honor of Alan Turing, this one created by a human being, Amanda Feery, entitled Turing’s Epitaph, which was graciously shared by fellow blogger Andrew Gibson.

One way or another, it seems, humans and their ability to create and understand meaning will be necessary for the creations of machines to have anything real behind them.

But that’s what I think. What about you?

Turing and the Chinese Room 2

I propose to consider the question, “Can machines think?”

Thus began Alan Turing’s 1950 essay Computing Machinery and Intelligence, without doubt the most important single piece written on the subject of what became known as “artificial intelligence”.

Straight away Turing insists that we won’t be able to answer this question by merely reflecting upon it. If we went about it this way we’d get all caught up in arguments over what exactly “thinking” means. Instead he proposes a test.

Turing imagines an imitation game composed of three players: (A) a man, (B) a woman, and (C) an interrogator, whose role is to ask questions of the two players and determine which one is the man. Turing then transforms this game by exchanging the man with a machine and replacing the woman with the man. The interrogator is then asked to figure out which is the real man:

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”

The machine in question Turing now narrowly defines as a digital computer, which consists of three parts: (i) the Store, that is, the memory; (ii) the Executive Unit, the part of the machine that performs the operations; and (iii) the Control, the part of the machine that makes sure the Executive Unit performs operations based upon instructions that make up part of the Store. Digital computers are also “discrete state” machines, that is, they are characterized by clear on-off states.
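
If it helps to see the three parts at work, here is a deliberately tiny sketch of my own (not Turing’s formalism): the Store holds both the instructions and the data, the Executive Unit carries out each operation, and the Control fetches instructions from the Store and dispatches them.

```python
store = {  # Store: instructions and data live together in memory
    "program": [("LOAD", 7), ("ADD", 5), ("ADD", 1), ("HALT", None)],
    "accumulator": 0,
}

def executive(op, arg):  # Executive Unit: performs one operation
    if op == "LOAD":
        store["accumulator"] = arg
    elif op == "ADD":
        store["accumulator"] += arg

def control():  # Control: steps through the instructions held in the Store
    for op, arg in store["program"]:
        if op == "HALT":
            break
        executive(op, arg)

control()
print(store["accumulator"])  # 13
```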

This is all a little technical so maybe a non-computer example will help. We can perhaps best understand digital technology by comparing it to its old rival, analog. Think about a selection of music stored on your iPod versus, say, your collection of vintage ‘70s 8-tracks. Your iPod has music stored in a discrete state, represented by 1s and 0s. It has an Executive Unit that allows you to translate these 1s and 0s into sound, and a Control that keeps the whole thing running smoothly and allows you, for example, to jump between tracks. Your 8-tracks, on the other hand, store music as “impressions” on a magnetic tape, not as discrete state representations; the “head”, in contact with the tape, reverses this process and transforms the impressions back into sound, and you move between tracks by causing the head to actually physically move.

Perhaps Turing’s choice of digital over analog computers can be said to amount to a bet about how the future of computer technology would play out. By compressing information into 1s and 0s, as representation, you could achieve seemingly limitless Store/Control capacity. Imagine if all the songs on your iPod needed to be stored on 8-tracks! If you wanted to build an intelligent machine using analog you might as well just duplicate the very biological/analog intelligence you were trying to mimic. Digital technology represented, for Turing, a viable alternative path to intelligence other than the biological one we had always known.

Back to his article. He rephrases the initial question:

Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?

If we were able to answer this question in the affirmative, Turing insisted, then such a machine could be said to possess human level intelligence.

Turing then runs through and dismisses what he considers the most likely objections to the idea that a machine that could think might be built:

The Theological Objection– computers wouldn’t have a soul. Turing’s reply: wouldn’t God grant a soul to any other being that possessed human-level intelligence? What is the difference between bringing such an intelligent vessel for a soul into the world by procreation and by construction?

Heads in the Sand Objection– the idea that computers could be as smart as humans is too horrible to be true. Turing’s reply: ideas aren’t true or false based on our wishing them so.

The Mathematical Objection- machines can’t properly answer logically paradoxical sentences such as “this sentence is false”. Turing’s reply: okay, but at the end of the day humans probably can’t either.

The Argument from Consciousness- Turing quotes the neurosurgeon Geoffrey Jefferson’s Lister Oration: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain- that is, not only write it but know that it had written it.” Turing’s reply: if this is to be the case, why don’t we apply the same prejudice to people? How do I really know that another human being thinks except through his actions and words?

The Argument from Disability- whatever a machine does, it will never be able to do X. Turing’s reply: these arguments are essentially unprovable claims based on induction- I’ve never seen a machine do X, therefore no machine will ever do X. Many of them are also alternative arguments from consciousness:

The claim that a machine cannot be the subject of its own thought can of course only be answered if it can be shown that the machine has some thought with some subject matter. Nevertheless, “the subject matter of a machine’s operations” does seem to mean something, at least to the people who deal with it. If, for instance, the machine was trying to find a solution of the equation x² – 40x – 11 = 0 one would be tempted to describe this equation as part of the machine’s subject matter at that moment. In this sort of sense a machine undoubtedly can be its own subject matter.

Lady Lovelace’s Objection- Lady Lovelace was a friend of Charles Babbage, whose plans for his Analytical Engine in the early 1800s were probably the first fully conceived modern computer, and she was perhaps the author of the first computer program. She had this to say:

“The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.”

Turing’s response: this is yet another argument from consciousness. The computers he worked with surprised him all the time with results he did not expect.

Argument from the Continuity of the Nervous System: the nervous system is fundamentally different from a discrete state machine, therefore the output of the two will always be fundamentally different. Turing’s response: the human brain is analogous to a “differential analyser” (our old analog computer discussed above), and the answers of the two types of computer can be made indistinguishable. Hence a digital computer is able to do at least some things the analog computer of the human brain does.

Argument from the Informality of Behavior: Human beings, unlike machines, are free to break rules and are thus unpredictable in a way machines are not. Turing’s response: We cannot really make the claim that we are not determined just because we are able to violate human conventions. The output of computers can be as unpredictable as human behavior and this emerges despite the fact that they are clearly designed to follow laws i.e. are programmed.

Argument from ESP: given a situation in which the man playing against the machine possesses some as-yet-unexplained telepathic power, he could always influence the interrogator against the machine and in his favor. Turing’s response: this would mean the game was rigged until we found out how to build a computer that could somehow balance out clairvoyance. For now, put the interrogator in a “telepathy-proof room”.

So that, in a nutshell, is the argument behind the Turing test. By far the most well-known challenge to this test was made by the philosopher John Searle (any relation to the author has been lost in the mists of time). Searle has so influenced the debate around the Turing test that it might be said that much of the philosophy of mind that has dealt with the question of artificial intelligence has been a series of arguments about why Searle is wrong.

Like Turing, Searle, in his 1980 essay Minds, Brains, and Programs, introduces us to a thought experiment:

Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles.

Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that ‘formal’ means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch.

With some additions, this is the essence of Searle’s thought experiment, and what he wants us to ask is: does the person in this room, moving around a bunch of symbols according to a set of predefined rules, actually understand Chinese? And our common sense answer is- “of course not!”

Searle’s argument is actually even more clever than it seems, because it could have been taken right from one of Turing’s own computer projects. Turing had written a computer program that could have allowed a computer to play chess. I say could have allowed because there wasn’t actually a computer at the time sophisticated enough to run his program. What Turing did then was to use the program as a set of rules he followed to play a human being at chess. He found that by following the rules he was unable to beat his friend. He was, however, able to beat his friend’s wife! (No sexism intended.)

Now had Turing given these rules to someone who knew nothing about chess at all they would have been able to play a reasonable game. That is, they would have played a reasonable game without having any knowledge or understanding of what it was they were actually doing.

Searle’s goal is to bring into doubt what he calls “strong AI”: the idea that the formal manipulation of symbols- syntax- can give rise to the true understanding of meaning- semantics. He identifies part of our problem in our tendency to anthropomorphize our machines:

The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality; our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door “understands instructions” from its photoelectric cell is not at all the sense in which I understand English.

Even with this cleverness, Searle’s argument has been shown to have all sorts of inconsistencies. Among the best refutations I’ve read is one of the early ones, Margaret A. Boden’s 1987 essay Escaping the Chinese Room. In gross simplification, Boden’s argument is this: Look, normal human consciousness is made up of a patchwork of “stupid” subsystems that do not understand, or possess, what Searle claims is the foundation stone of true thinking, intentionality (“subjective states that relate me to the rest of the world”), in anything like his sense at all. In fact, most of what the mind does is made up of these “stupid” processes. Boden wants to remind us that we really have no idea how such seemingly dumb processes somehow add up to what we experience as human-level intelligence.

Still, what Searle has done is make us aware of the huge difference between formal symbol manipulation and what we would call thinking. He made us aware of the algorithms (a process or set of rules to be followed in calculations or other problem-solving operations) that would become a simultaneous curse and backdrop of our own day: a day when our world has become mediated by algorithms in its economics and its warfare, in our choice of movies, books, and music, in our memory and cognition (Google), in love (dating sites), and now, it seems, in its art (see the excellent presentation by Kevin Slavin, How Algorithms Shape Our World). Algorithms that Searle ultimately understood to be lifeless.

In the words of the philosopher Andrew Gibson on Iamus’ Hello World!:

I don’t really care that this piece of art is by a machine, but the process behind it is what is problematic. Algorithmization is arguably part of this ultra-modern abstractionism, abstracting even the artist.

The question I think should be asked here is how exactly Iamus, the algorithm that composed Hello World!, worked. The composition was created using a very particular form of algorithm known as a genetic algorithm; in other words, Iamus is a genetic algorithm. In very oversimplified terms, a genetic algorithm works like evolution: (a) a “population” of randomly created individuals is generated (in Iamus’ case, sounds from a collection of instruments); (b) those individuals are selected against an environment for the condition of best fit (I do not know, in Iamus’ case, whether this fit was the judgment of classically trained humans, some selection of previous human-created compositions, or something else), and the individuals that survive (those chosen as best meeting the environment) are then combined to form new individuals (compositions, in Iamus’ case); (c) random features are introduced to individuals along the way to see if they help individuals better meet the fit. The survivor that best meets the fit is your end result.
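Since Iamus’ actual representation and fitness function have not, so far as I know, been published in detail, the following Python toy is only a sketch of the general recipe above: the “environment” here, a hypothetical stand-in, simply scores individuals by how close they are to a target phrase.

```python
import random

# A toy genetic algorithm illustrating steps (a)-(c) above. This is NOT
# Iamus' actual method; the target-string "environment" is a hypothetical
# stand-in for whatever fitness measure Iamus really used.

TARGET = "HELLO WORLD"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
POP_SIZE = 100
MUTATION_RATE = 0.05

def fitness(individual):
    # (b) how well does the individual fit the environment?
    return sum(a == b for a, b in zip(individual, TARGET))

def crossover(mom, dad):
    # surviving individuals are combined to form new ones
    cut = random.randrange(len(TARGET))
    return mom[:cut] + dad[cut:]

def mutate(individual):
    # (c) random features introduced along the way
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in individual)

# (a) a population of randomly created individuals
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]

for generation in range(10_000):
    best = max(population, key=fitness)
    if best == TARGET:
        break
    survivors = sorted(population, key=fitness, reverse=True)[: POP_SIZE // 2]
    population = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                  for _ in range(POP_SIZE)]

print(f"generation {generation}: {best}")
```

The striking thing, and the reason the technique suits composition, is that nothing in the loop “knows” what it is building; fitness pressure alone sculpts the result.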

Searle would obviously claim that Iamus was just another species of symbol manipulation, and therefore did not really represent something new under the sun, and in a broad sense I fully agree. Nevertheless, I am not so sure this is the end of the story, because to follow Searle’s understanding of artificial intelligence seems to close as many doors as it opens, in essence locking Turing’s machine forever in his Chinese room. Searle writes:

I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.

For me, the line he draws between intelligent behavior or properties emerging from machines and those emerging from biological life is far too sharp, and it rests on rather philosophically slippery concepts such as understanding, intentionality, and causal dependence. Searle again:

Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.

Taking Turing’s test and Searle’s Chinese Room together leaves us, I think, lost in a sort of intellectual trap. On the one side we have Turing arguing that human thinking and digital computation are essentially equivalent, a claim all experience so far suggests is false. On the other side we have Searle arguing that digital computation and the types of thinking done by biological creatures are essentially nothing alike, an assertion that is not obviously true. The problem is that between them we lose what are really the two most fundamental questions: how are computation and thought alike, and how are they different?

Instead, what we have is something like Iamus being “introduced” to the world, along with its creation, as if it were a type of person by the Turing faction, when in fact it has almost none of the qualities that we consider constitutive of personhood. A type of showmanship that put me in mind of the wanderings of the Mechanical Turk, the 18th-century mechanical illusion that for a time fooled the courts of Europe into thinking a machine could defeat human beings at chess. (There was, of course, a short person inside.)

To this, the Searle faction responds with horror and utter disbelief, aiming to disprove that such a “soulless” thing as a computer could ever, in the words of the Lister Oration quoted by Turing, “write a sonnet or compose a concerto” that was the product of anything other than “the chance fall of symbols”. The problem with this is that our modern-day Mechanical Turks are no longer mere illusions (there is no longer a person hiding inside), and our machines really do defeat grandmasters at chess, win trivia games, write news articles, and now compose classical music. Who knows where this will stop, if indeed it will stop, and how many of what have long been considered the most defining human activities will be successfully replicated, and even surpassed, by such machines.

We need to break through this dichotomy and start asking ourselves some very serious questions, for only by doing so are we likely to arrive at an understanding both of where we are as a technology-dependent species and of where what we are doing is taking us. Only then will we have the means to make intentional choices about where we want to go.

I will leave off here with something shared with me by Andrew Gibson: a piece by his friend Amanda Feery, composed for the centenary of Turing’s birth and entitled “Turing’s Epitaph”.

And I will leave you with questions:

How is this similar to Iamus’ Hello World!? And how is it different?

Turing and the Chinese Room 1

Two thousand and twelve marks the centenary of Alan Turing’s birth. Turing, of course, was a British mathematical genius who was preeminent among the “code-breakers”, those who during the Second World War helped to break the Nazi Enigma codes and were thus instrumental in helping the Allies win the war. Turing was also one of the pioneers of modern computing. He was the first to imagine what became known as a Universal Turing Machine: a single machine able to simulate any other computing machine, and therefore able, given sufficient time, to solve any problem whose solution was computable.

In the last years of his life Turing worked on applying mathematical concepts to biology. Presaging the work of Benoît Mandelbrot, he was particularly struck by how the repetition of simple patterns could give rise to complex forms. Turing was in effect using computation as a metaphor to understand nature, something that can be seen today in the work of people such as Stephen Wolfram in his A New Kind of Science, and in fields such as chaos and complexity theory. It also lies at the root of the synthetic biology being pioneered, as we speak, by Craig Venter.
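The flavor of this “simple rules, complex forms” idea is easy to demonstrate. Turing’s own morphogenesis work used reaction-diffusion equations, which are beyond a blog post, but a minimal Python sketch of Wolfram’s elementary cellular automaton Rule 30 carries the same moral: a tiny update rule and a single seed cell produce an intricate, seemingly chaotic pattern.

```python
# A minimal sketch of "simple rules, complex forms" using Wolfram's
# elementary cellular automaton Rule 30. Not Turing's reaction-diffusion
# model, just the simplest modern illustration of the same idea.

RULE = 30          # the rule number encodes all eight neighborhood outcomes
WIDTH, STEPS = 64, 32

def step(cells):
    """Update every cell from its three-cell neighborhood (wrapping at edges)."""
    return [
        (RULE >> (cells[(i - 1) % WIDTH] * 4 + cells[i] * 2 + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single "on" cell: the simplest possible seed

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Run it and an elaborate triangular lace unfolds from one cell and one rule, with nothing resembling the complexity of the output written into the input.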

Turing is best remembered, however, for a thought experiment he proposed in his 1950 paper Computing Machinery and Intelligence, which became known as the “Turing test”, a test that has remained the pole star of the field of artificial intelligence up until our own day, and for the tragic circumstances that surrounded the last years of his life.

In my next post, I will provide a more in-depth explanation of the Turing test. For now, it is sufficient to know that the test proposes to identify whether a computer possesses human-level intelligence by seeing if the computer can fool a person into thinking it is human. If the computer can do so, then it can be said, under Turing’s definition, to possess human-level intelligence.

Turing was a man who was comfortable with his own uniqueness, and that uniqueness included the fact that he was homosexual at a time when homosexuality was considered both a disease and a crime. In 1952, Turing, one of the greatest minds ever produced by Britain, a hero of the Second World War, and a member of the Order of the British Empire, was convicted in a much-publicized trial for having engaged in a homosexual relationship. It was the same law under which Oscar Wilde had been prosecuted almost sixty years before.

Throughout his trial, Turing maintained that he had done nothing wrong in having such a relationship, but upon conviction he faced a series of nothing-but-bad options, the least bad of which, he decided, was hormone “therapy” as a means of controlling his “deviant” urges, a treatment he chose to undergo given that the alternatives included imprisonment.

The idea of hormone therapy represented a very functionalist view of the mind. Hormones had been discovered to regulate human sexual behavior, so the idea was that if you could successfully manipulate hormone levels you could produce the desired behavior. Perhaps part of the reason Turing chose this treatment is that it had something of the input-output quality of the computers he had made the core of his life’s work. In any case, the therapy ended after a year, the period for which he had been sentenced to undergo the treatment. Being shot through with estrogen did not have the effect of mentally and emotionally castrating Turing, but it did have nasty side effects, such as making him grow breasts and “feminizing” his behavior.

Turing’s death, all evidence appears to indicate, came at his own hand: he ingested an apple laced with poison, in seeming imitation of the movie Snow White. Why did Turing commit suicide, if in fact he did so? It was not, it seems, an effect of his hormone therapy, which had ended quite some time before. He did not appear especially despondent to his family or friends in the period immediately before his death.

One plausible explanation is that, given the rising tensions of the Cold War and the obvious value Turing had as an espionage target, Turing realized that on account of his very public outing he had lost his private life. Who could be trusted now except his very oldest and deepest friends? What intimacy could he have when everyone he met might be a Soviet spy meant to seduce what they considered a vulnerable target, or an agent of the British or American states sent to entrap him? From here on out his private life was likely to be closely scrutinized: by the press, by the academy, by Western and Soviet security agencies. He had to wonder if he would face new trials, new torturous “treatments” to make him “normal”, perhaps even imprisonment.

What a loss of honor for a man who had done so much to help the British win the war; this, perhaps, is why Turing’s death occurred on or very near the ten-year anniversary of the D-Day landings he had helped make possible. What outlets could there be for a man of his genius when all of the “big science” was taking place under the umbrella of the machinery of war, where so much of science was regarded as state secrets guarded like locked treasure, and any deviation from the norm was considered a vulnerability that needed to be expunged? Perhaps his brilliant mind had shielded him from the reality of what had happened and of his new situation after his trial, but only for so long; once he realized what his life had become, the facts were too much to bear.

We will likely never truly know.

The centenary of Turing’s birth has been rightly celebrated with a bewildering array of tributes and remembrances. Perhaps the most interesting of these tributes, so far, has been the unveiling of a symphony composed entirely by a computer and performed by the London Symphony Orchestra on July 2, 2012. This was a piece (really a series of several pieces) “composed” by a computer algorithm, Iamus, and entitled “Hello World!”

Please take a listen to this piece by clicking on the link below, and share with me your thoughts. Your ideas will help guide my own thinking on this perplexing event, but my suspicion right now is that “Hello World!” represents something quite different from what either its sympathizers or detractors suggest, and gives us a window into the meaning and significance of the artificial intelligences we are bringing into what was once our world alone.

Hello World!