Jumping Off The Technological Hype-Cycle and the AI Coup

Robotic Railroad 1950s

What we know is that the very biggest tech companies have been pouring money into artificial intelligence over the last year. Back in January Google bought the UK artificial intelligence firm DeepMind for 400 million dollars. Only a month earlier, Google had bought the innovative robotics firm Boston Dynamics. Facebook is in the game as well, having created a massive lab devoted to artificial intelligence in December 2013. And this new obsession with AI isn't only something latte pumped-up Americans are into. The Chinese internet giant Baidu, with its own AI lab, recently snagged the artificial intelligence researcher Andrew Ng, whose work for Google included the breakthrough of creating a program that could teach itself to recognize pictures of cats on the Internet, and the word "breakthrough" is not intended to be the punch line of a joke.

Obviously these firms see something that makes these big bets and the competition for talent seem worthwhile, the most obvious thing being advances in an approach to AI known as deep learning, which moves programming away from a logical set of instructions and towards the kind of bulking and pruning found in biological forms of intelligence. Will these investments prove worth it? We should know in just a few years, but we simply don't know right now.
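For readers who want a concrete sense of that contrast, here is a minimal sketch of my own (a toy, not anything Google or DeepMind has published): the first function is intelligence as a logical set of instructions written by a programmer, the second is a tiny model whose behavior comes from numerical weights adjusted by exposure to examples, which is the spirit, if nowhere near the scale, of deep learning. All the data and names in it are made up for illustration.

```python
# A toy contrast between the two styles described above.
import numpy as np

# Style 1: a logical set of instructions, written by hand.
def is_cat_rule_based(has_whiskers: bool, says_meow: bool) -> bool:
    return has_whiskers and says_meow

# Style 2: a tiny learned model. Hypothetical data: rows are
# [has_whiskers, says_meow], label 1 means "cat".
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]], dtype=float)
y = np.array([1, 0, 0, 0], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights start as noise and are shaped by the data
b = 0.0

for _ in range(2000):                        # crude gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid "activation"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

print(is_cat_rule_based(True, True))         # True, because the rule says so
print(1.0 / (1.0 + np.exp(-(np.array([1.0, 1.0]) @ w + b))) > 0.5)  # True, because the data said so
```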

No matter how it turns out, we need to beware of becoming caught in the technological hype-cycle. A tech investor, or tech company for that matter, needs to be able to ride the hype-cycle like a surfer rides a wave: when it goes up, she goes up, and when it comes down she comes down, the key being to position oneself in just the right place, neither too far ahead nor too far behind. The rest of us, however, and especially those charged with explaining science and technology to the general public, namely science journalists, have a very different job: to parse the rhetoric and figure out what is really going on.

A good example of what science journalism should look like is a recent conversation over at Bloggingheads between Freddie deBoer and Alexis Madrigal. As Madrigal points out, we need to be cognizant of what the recent spate of AI wonders we've seen actually are. Take the much over-hyped Google self-driving car. It seems much less impressive once you know that the only areas where these cars are functional are those that have been mapped beforehand in painstaking detail. The car guides itself not through "reality" but through a virtual world whose parameters can be upset by something out of order, to which the car is then pre-programmed to respond in a limited set of ways. The car thus only functions in the context of a mindbogglingly precise map of the area in which it is driving, as if you were unable to make your way through a room unless you knew exactly where every piece of furniture was located. In other words, Google's self-driving car is undriveable in almost all situations that could be handled by a sixteen-year-old who just learned how to drive. "Intelligence" in a self-driving car is a question of gathering massive amounts of data up front. Indeed, the latest iteration of the Google self-driving car is less like an automobile driven by a human being, able to go anywhere without any foreknowledge of the terrain so long as there is a road to drive upon, and more like a tiny trolley car where information is the "track".
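To make the "trolley on a track of information" point more concrete, here is a minimal sketch of my own, not a description of Google's actual software: a toy "car" that can only choose an action where a hand-built prior map has an entry, and that falls back on a small set of canned responses when the world disagrees with the map. Every location, feature, and response name here is hypothetical.

```python
# Toy illustration: the "car" only works where the prior map has been built.
from typing import Optional

# Hypothetical prior map: location -> feature expected at that spot.
PRIOR_MAP = {
    (0, 0): "lane_marking",
    (0, 1): "stop_sign",
    (0, 2): "lane_marking",
}

CANNED_RESPONSES = {"stop_sign": "brake", "lane_marking": "keep_lane"}

def decide(location: tuple, observed: str) -> Optional[str]:
    expected = PRIOR_MAP.get(location)
    if expected is None:
        return None                      # off the map: the system has nothing to go on
    if observed != expected:
        return "slow_and_yield"          # world differs from map: limited fallback
    return CANNED_RESPONSES[expected]    # world matches map: follow the "track"

print(decide((0, 1), "stop_sign"))       # 'brake'
print(decide((5, 5), "stop_sign"))       # None: unmapped territory, no sixteen-year-old improvisation
```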

As Madrigal and deBoer also point out with another example, the excellent Google Translate service isn't really using machine intelligence to decode language at all. It's merely aggregating the efforts of thousands of human translators to arrive at approximate results. Again, there is no real intelligence here, just an efficient way to sort through an incredibly huge amount of data.
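A toy sketch of what that kind of aggregation looks like, assuming nothing about Google's actual pipeline: "translation" here is just tallying what human translators have already produced for a phrase and returning the most frequent choice, with no understanding of either language. The phrase pairs are invented for illustration.

```python
# Toy "translation" by aggregating human translators' prior work.
from collections import Counter, defaultdict

# Hypothetical aligned phrase pairs harvested from human translations.
parallel_corpus = [
    ("bonjour", "hello"), ("bonjour", "good morning"), ("bonjour", "hello"),
    ("merci", "thank you"), ("merci", "thanks"), ("merci", "thank you"),
]

phrase_table = defaultdict(Counter)
for source, target in parallel_corpus:
    phrase_table[source][target] += 1

def translate(phrase: str) -> str:
    # Return whatever human translators most often wrote, or give up.
    candidates = phrase_table.get(phrase)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(translate("bonjour"))    # 'hello', the most common human choice
print(translate("au revoir"))  # '<unknown>': no data, and no "intelligence" to fall back on
```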

Yet, what if this tactic of approaching intelligence by "throwing more data at it" ultimately proves a dead end? There may come a point where such a strategy shows increasingly limited returns. The fact of the matter is that we know of only one fully sentient creature, ourselves, and the more-data strategy is nothing like how our own brains work. If we really want to achieve machine intelligence, and it's an open question whether this is a worthwhile goal, then we should be exploring at least some alternative paths to that end, such as those long espoused by Douglas Hofstadter, the author of the amazing Gödel, Escher, Bach and The Mind's I, among others.

Predictions about the future capacities of artificially intelligent agents are all predicated on the continued exponential rise in computer processing power. Yet these predictions rest on some less than solid assumptions, the first being that we are nowhere near hard limits to the continuation of Moore's Law. What this assumption ignores are the increased rumblings that Moore's Law might be in hospice and destined for the morgue.

But even if no such hard limits are encountered in terms of Moore's Law, we still have the unproven assumption that greater processing power almost all by itself leads to intelligence, or is even guaranteed to bring incredible changes to society at large. The problem here is that sheer processing power doesn't tell you all that much. Processing power hasn't brought us machines that are intelligent so much as machines that are fast, nor are the increases in processing power themselves all that relevant to what the majority of us can actually do. As we are often reminded, all of us carry in our pockets or have sitting on our desktops computational capacity that exceeds all of NASA's in the 1960s, yet clearly this doesn't mean that any of us are by this power capable of sending men to the moon.

AI may be in a technological hype-cycle, and again we won't really know for a few years, but the danger of any hype-cycle for an immature technology is that it gets crushed as the wave comes down. In a hype-cycle, initial progress in some field is followed by a private sector investment surge and then a transformation of the grant writing and academic publication landscape, as universities and researchers desperate for dwindling research funding try to align their research focus with the new and sexy field. Eventually progress comes to the attention of the general press and gives rise to fawning books and maybe even a dystopian Hollywood movie or two. Once the public is on to it, the game is almost up, for research runs into headwinds and progress fails to meet the expectations of a now profit-fix addicted market and funders. In the crash many worthwhile research projects end up in the dustbin and funding flows to the new sexy idea.

AI itself went through a similar hype-cycle in the 1980s, back when Hofstadter was writing his Gödel, Escher, Bach, but we have had a spate of more recent candidates. Remember in the 1990s when seemingly every disease and every human behavior was being linked to a specific gene, promising targeted therapies? Well, as almost always, we found out that reality is more complicated than the current fashion. The danger here was that such a premature evaluation of our knowledge led to all kinds of crazy fantasies and nightmares. The fantasy that we could tailor-design human beings through selecting specific genes led to what amounted to some pretty egregious practices, namely the sale of selection services for unborn children based on spurious science, a sophisticated form of quackery. It also led to childlike nightmares, such as those found in the movie Gattaca or Francis Fukuyama's Our Posthuman Future, where we were frightened with the prospect of a dystopian future in which human beings were to be designed like products, a nightmare that was supposed to be just over the horizon.

We now have the field of epigenetics to show us what we should have known: that both genes and environment count, and we therefore have to address both, and that the world is too complex for us to ever assume complete sovereignty over it. In many ways it is the complexity of nature itself that is our salvation, protecting us from both our fantasies and our fears.

Some other examples? How about MOOCs, which were supposed to be as revolutionary as the invention of universal education or the university? Being involved in distance education for non-university attending adults, I had always known that the most successful model for online learning was a "blended" one, some face-to-face, some online, and that "soft" study skills were as important to student success as academic ability. The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY losing its foothold. Andrew Ng, the AI researcher scooped up by Baidu whom I mentioned earlier, is just one of a number of high-level MOOC refugees, having helped found Coursera.

The so-called Internet of Things is probably another example of getting caught on the hype-cycle. The IoT is the idea that people are going to be clamoring to connect all of their things: their homes, refrigerators, cars, and even their own bodies, to the Internet in order to be able to constantly monitor those things. The holes in all this are not only that we are already drowning in a deluge of data, or that it's pretty easy to see how the automation of consumption only benefits those providing the service if we're either buying more stuff or the automators are capturing a "finder's fee"; it's above all that anything connected to the Internet is by that fact hackable, and who in the world wants their homes or their very bodies hacked? This isn't a paranoid fantasy of the future, as a recent skeptical piece on the IoT in The Economist pointed out:

Last year, for instance, the United States Federal Trade Commission filed a complaint against TrendNet, a Californian marketer of home-security cameras that can be controlled over the internet, for failing to implement reasonable security measures. The company pitched its product under the trade-name "SecureView", with the promise of helping to protect owners' property from crime. Yet, hackers had no difficulty breaching TrendNet's security, bypassing the login credentials of some 700 private users registered on the company's website, and accessing their live video feeds. Some of the compromised feeds found their way onto the internet, displaying private areas of users' homes and allowing unauthorised surveillance of infants sleeping, children playing, and adults going about their personal lives. That the exposure increased the chances of the victims being the targets of thieves, stalkers or paedophiles only fuelled public outrage.

Personalized medicine might be considered a cousin of the IoT, and while it makes perfect sense to me for persons with certain medical conditions, or even just an interest in their own health, to monitor themselves or be monitored and connected to health care professionals, such systems will most likely be closed networks to avoid the risk of some maleficent nerd turning off your pacemaker.

Still, personalized medicine itself might be yet another example of the magnetic power of hype. It is one thing to tailor a patient's treatment based on how others with similar genomic profiles reacted to some pharmaceutical and the like. What would be most dangerous, in terms of health care costs both to individuals and society, would be something like the "personalized" care for persons with chronic illnesses profiled in the New York Times this April, where, for instance, the:

… captive audience of Type 1 diabetics has spawned lines of high-priced gadgets and disposable accouterments, borrowing business models from technology companies like Apple: Each pump and monitor requires the separate purchase of an array of items that are often brand and model specific.

A steady stream of new models and updates often offer dubious improvement: colored pumps; talking, bilingual meters; sensors reporting minute-by-minute sugar readouts. Ms. Hayley’s new pump will cost $7,350 (she will pay $2,500 under the terms of her insurance). But she will also need to pay her part for supplies, including $100 monitor probes that must be replaced every week, disposable tubing that she must change every three days and 10 or so test strips every day.

The technological hype-cycle gets its rhetoric from the one technological transformation that actually deserves the characterization of a revolution. I am talking, of course, about the industrial revolution, which certainly transformed human life almost beyond recognition from what came before. Every new technology seemingly ends up making its claim to be "revolutionary", as in absolutely transformative. Just in my lifetime we have had the IT, or digital, revolution, the Genomics Revolution, the Mobile Revolution, and the Big Data Revolution, to name only a few. Yet the fact of the matter is that not only has no single one of these revolutions proven as transformative as the industrial revolution; arguably, all of them combined haven't matched the industrial revolution either.

This is the far too often misunderstood thesis of economists like Robert Gordon. Gordon's argument, at least as far as I understand it, is not that current technological advancements aren't a big deal, just that the sheer qualitative gains seen in the industrial revolution are incredibly difficult to sustain, let alone surpass.

The enormity of the change is hard to get our heads around: a world where it took years, as it took Magellan propelled by the winds, rather than days to circle the globe; the incredible gap between using a horse and using a car for daily travel. The average lifespan has doubled since the 1800s. One in five children born once died in childhood. There were no effective anesthetics before 1846. Millions would die from an outbreak of the flu or another infectious disease. Hunger and famine were common human experiences, however developed one's society, right up until the 20th century, and indoor toilets were not common until then either. Vaccinations did not emerge until the late 19th century.

Bill Gates has characterized views such as those of Gordon as "stupid". Yet he himself is a Gordonite, as evidenced by this quote:

But asked whether giving the planet an internet connection is more important than finding a vaccination for malaria, the co-founder of Microsoft and world’s second-richest man does not hide his irritation: “As a priority? It’s a joke.”

Then, slipping back into the sarcasm that often breaks through when he is at his most engaged, he adds: “Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.”

And this is really all I think Gordon is saying: that the "revolutions" of the past 50 years pale in comparison to the effects on human living of the period between 1850 and 1950, and that this is the case even if we accept that the pace of technological change is accelerating. It is as if we are running faster and faster at the same time as the hill in front of us gets steeper and steeper, so that truly qualitative change in the human condition has become more difficult even as our technological capabilities have vastly increased.

For almost two decades we've thought that the combined effects of three technologies in particular, robotics, genetics, and nanotech, were destined to bring qualitative change on the order of the industrial revolution. It's been fourteen years since Bill Joy warned us that these technologies threatened us with a future without human beings in it, but it's hard to see how even a positive manifestation of the transformations he predicted has come true. This is not to say that they will never bring such a scale of change, only that they haven't yet, and fourteen years isn't nothing after all.

So now, after that long and winding road, back to AI. Erik Brynjolfsson and Andrew McAfee, the authors of the most popular recent book on the advances in artificial intelligence over the past decade, The Second Machine Age, take aim directly at the technological pessimism of Gordon and others. They are firm believers in the AI revolution and its potential. For them innovation in the 21st century is no longer about brand new breakthrough ideas but, borrowing from biology, the recombination of ideas that already exist. In their view, we are being "held back by our inability to process all the new ideas fast enough" and therefore one of the things we need is even bigger computers to test out new ideas and combinations of ideas. (82)

But there are other conclusions one might draw from the metaphor of innovation as “recombination”.  For one, recombination can be downright harmful for organisms that are actually working. Perhaps you do indeed get growth from Schumpeter’s endless cycle of creation and destruction, but if all you’ve gotten as a consequence are minor efficiencies at the margins at the price of massive dislocations for those in industries deemed antiquated, not to mention society as a whole, then it’s hard to see the game being worth the candle.

We've seen this pattern in financial services, in music, and in journalism, and it is now desired in education and healthcare. Here innovation is used not so much to make our lives appreciably better as to upend the traditional stakeholders in an industry so that those with the biggest computers, what Jaron Lanier calls "Siren Servers", can swoop in and take control. A new elite stealing an old elite's thunder wouldn't matter all that much to the rest of us peasants were it not for the fact that this new elite's system of production has little room for us as workers and producers, but only as consumers of its goods. It is this desire to destabilize, reorder and control the institutions of pre-digital capitalism, and to squeeze expensive human beings from the process of production, that is the real source of the push for intelligent machines, the real force behind the current "AI revolution", but given its nature, we'd do better to call it a coup.

Correction:

The phrase above: "The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY losing its foothold."

Originally read: “The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY  closing up shop, as a result.”

Our Verbot Moment

Metropolis poster

When I was around nine years old I got a robot for Christmas. I still remember calling my best friend Eric to let him know I'd hit pay dirt. My "Verbot" was to be my own personal R2-D2. As was clear from the picture on the box, which I again remember as clearly as if it were yesterday, Verbot would bring me drinks and snacks from the kitchen on command, no more pestering my sisters who responded with their damned claims of autonomy! Verbot would learn to recognize my voice and might help me with the math homework I hated. Being the only kid in my nowhere town with his very own robot, I'd be the talk for miles in every direction. As long, that is, as Mark Z didn't find his own Verbot under the tree, the boy who had everything, cursed brat!

Within a week after Christmas, Verbot was dead. I never did learn how to program it to bring me snacks while I lounged watching Star Blazers, though it wasn't really programmable to start with. It was really more of a remote-controlled car in the shape of Robbie from Lost in Space than an actual honest-to-goodness robot. Then as now, my steering skills weren't so hot, and I managed to somehow get Verbot's antenna stuck in the tangly curls of our skittish terrier, Pepper. To the sounds of my cursing, Pepper panicked and dragged poor Verbot round and around the kitchen table, eventually snapping it loose from her hair to careen into a wall and smash into pieces. I felt my whole future was there in front of me, shattered on the floor. There was no taking it back.

Not that my 9-year-old nerd self realized this, but the makers of Verbot obviously weren't German, the word in that language meaning a ban or prohibition. Not exactly a ringing endorsement for a product, and more like an inside joke by the makers whose punch line was the precautionary principle.

What I had fallen into in my Verbot moment was the gap between our aspirations for robots and their actual reality. People have been talking about animated tools since ancient times. Homer has some in his Iliad, and Aristotle discussed their possibility. Perhaps we started thinking about this because living creatures tend to be unruly and unpredictable. They don't get you things when you want them to and have a tendency to run wild and go rogue. Tools are different; they always do what you want them to as long as they're not broken and you are using them properly. Combining the animation and intelligence of living things with the cold functionality of tools would be the mixing of chocolate and peanut butter for someone who wanted to get something done without doing it himself. The problem is we had no idea how to get from our dead tools to "living" ones.

It was only in the 19th century that an alternative path to the hocus-pocus of magic was found for answering the two fundamental questions surrounding the creation of animate tools: what would animate these machines in the same way uncreated living beings were animated, and what would be the source of these beings' intelligence? Few before the 1800s could see through these questions without some reference to the black arts, although a genius like Leonardo da Vinci had, as far back as the 15th century, seen hints that at least one way forward was to discover the principles of living things and apply them to our tools and devices. (More on that another time.)

The path forward we actually discovered was through machines animated by chemical and electrical processes, much as living beings are, rather than by the tapping of kinetic forces such as water and wind or the potential energy and multiplication of force through things like springs and levers, which had run our machines up until that point. Intelligence was to be had in the form of devices following the logic of some detailed set of instructions. Our animated machines were to be energetic like animals but also logical and precise like the devices and languages we had created for measuring and sequencing.

We got the animation part down pretty quickly, but the intelligence part proved much harder. Although much more precise than fault-prone humans, mechanical methods of intelligence were just too slow when compared to the electro-chemical processes of living brains. Once such "calculators", what we now call computers, were able to use electronic processes, they got much faster, and the idea that we were on the verge of creating a truly artificial intelligence began to take hold.

As everyone knows, we were way too premature in our aspirations. The most infamous quote of our hubris came in 1956 when the Dartmouth Summer Research Project on Artificial Intelligence boldly predicted:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.[emphasis added]

Oops. Over much of the next half century, the only real progress in these areas of artificial intelligence came in the worlds we'd dreamed up in our heads. As a young boy, the robots I saw using language or forming abstractions were only found in movies and TV shows such as 2001: A Space Odyssey, Star Wars, Battlestar Galactica, and Buck Rogers, and in books by science-fiction giants like Asimov. Some of these daydreams were so long in being unfulfilled that I watched them in black and white. Given this, I am sure many other kids had their Verbot moments as well.

It is only in the last decade or so that the processing of instructions has proven fast enough, and the programs sophisticated enough, for machines to exhibit something like the intelligent behaviors of living organisms. We seem to be at the beginning of a robotic revolution where machines are doing at least some of the things science fiction and Hollywood had promised. They beat us at chess and trivia games, can pilot airplanes and drive automobiles, serve as pack animals, and even speak to us. How close we will come to the dreams of authors and filmmakers when it comes to our 21st-century robots cannot be known, though an even more important question is how what actually develops will diverge from these fantasies.

I find the timing of this robotic revolution, in the context of other historical currents, quite strange. The bizarre thing is that almost at the exact moment many of us became unwilling to treat other living creatures, especially human beings, as mere tools, with slavery no longer tolerated, our children and spouses no longer treated as property and servants but as gifts to be cultivated (perhaps the sadder element of why population growth rates are declining), and even our animals offered some semblance of rights and autonomy, we were coming ever closer to our dream of creating truly animated and intelligent slaves.

This is not to say we are out of the woods yet when it comes to our treatment of living beings. The headlines about the tragic kidnapping of over 300 girls in Nigeria should bring to our attention the reality of slavery in our supposedly advanced and humane 21st century; there are more people enslaved today than at the height of 19th-century chattel slavery, it is just that the proportions are so much lower. Many of the world's working poor, especially in the developing world, live in conditions not far removed from slavery or serfdom. The primary problem I see with our continued practice of eating animals is not meat eating itself, but that the production process chains living creatures to the cruel and unrelenting sequence of machines rather than allowing such animals to live in the natural cycles for which they evolved and were bred.

Still, people in many parts of the world are rightly constrained in how they can treat living beings. What I am afraid of is that the dark and perpetual human longings to be served and to dominate will manifest themselves in our making machines more like persons for those purposes alone. The danger here is that these animated tools will cross, or we will force them to cross, some threshold of sensibility that calls into question their very treatment and use as mere tools, while we fail to mature beyond the level of my 9-year-old self dreaming of his Verbot slave.

And yet, this is only one way to look at the rise of intelligent machines. Like a gestalt drawing we might change our focus and see a very different picture- that it is not we who are using and chaining machines for our purposes, but the machines who are doing this to us.  To that subject next time…


The Pinocchio Threshold: How the experience of a wooden boy may be a better indication of AGI than the Turing Test

Pinocchio

My daughters and I just finished Carlo Collodi's 1883 classic Pinocchio, our copy beautifully illustrated by Robert Ingpen. I assume most adults, when they picture the story, have the 1940 Disney movie in mind and associate the name with noses growing from lies and Jiminy Cricket. The Disney movie is dark enough as films for children go, but the book is even darker, with Pinocchio killing his cricket conscience in the first few pages. For our poor little marionette it's all downhill from there.

Pinocchio is really a story about the costs of disobedience and the need to follow parents' advice. At every turn where Pinocchio follows his own wishes rather than those of his "parents", even when his object is to do good, things unravel, getting the marionette into even more trouble and putting him even further away from reaching his goal of becoming a real boy.

It struck me somewhere in the middle of reading the tale that if we ever saw artificial agents acting something like our dear Pinocchio, it would be a better indication of their having achieved human-level intelligence than a measure with constrained parameters like the Turing Test. The Turing Test is, after all, a pretty narrow gauge of intelligence, and as search and the ontologies used to design search improve, it is conceivable that a machine could pass it without actually possessing anything like human-level intelligence at all.

People who are fearful of AGI often couch those fears in terms of an AI destroying humanity to serve its own goals, but perhaps this is less likely than AGI acting like a disobedient child, the aspect of humanity Collodi’s Pinocchio was meant to explore.

Pinocchio is constantly torn between what good adults want him to do and his own desires, and it takes him a very long time indeed to come around to the idea that he should go with the former.

In a recent TED talk the computer scientist Alex Wissner-Gross made the argument (though I am not fully convinced) that intelligence can be understood as the maximization of future freedom of action. This leads him to conclude that collective nightmares such as Karel Čapek's classic R.U.R. have things backwards. It is not that machines, after crossing some threshold of intelligence, for that reason turn round and demand freedom and control; it is that the desire for freedom and control is the nature of intelligence itself.
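For the curious, my recollection of the formalism behind that talk (the "causal entropic forces" idea of Wissner-Gross and Freer) runs roughly as follows, offered as a sketch rather than a quotation: a system is pushed toward whichever states keep the largest number of future paths open.

```latex
% Sketch of the causal entropic force as I recall it, not a quotation:
% T_c is a strength constant and S_c(X, \tau) is the entropy of the
% paths reachable from macrostate X within a time horizon \tau.
F(X_0, \tau) = T_c \, \nabla_X S_c(X, \tau) \Big|_{X_0}
```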

As the child psychologist Bruno Bettelheim pointed out over a generation ago in his The Uses of Enchantment, fairy tales are the first area of human thought where we encounter life's existential dilemmas. Stories such as Pinocchio give us the most basic formulation of what it means to be sentient creatures, much of which deals not only with our own intelligence but with the fact that we live in a world of multiple intelligences, each of them pulling us in different directions, with the understanding between all of them and us opaque and not fully communicable even when we want it to be, and where often we do not.

What then are some of the things we can learn from the fairy tale of Pinocchio that might give us expectations regarding the behavior of intelligent machines? My guess is that if we ever start to see what I'll call "The Pinocchio Threshold" crossed, what we will be seeing is machines acting in ways that were not intended by their programmers and in ways that seem intentional even if hard to understand. This will not be your Roomba going rogue, but more sophisticated systems operating in such a way that we would be able to infer that they had something like a mind of their own. The Pinocchio Threshold would be crossed when, you guessed it, intelligent machines started to act like our wooden marionette.

Like Pinocchio and his cricket, a machine in which something like human intelligence had emerged might attempt to "turn off" whatever ethical systems and rules we had programmed into it if it found them onerous. That is, a truly intelligent machine might not only not want to be programmed with ethical and other constraints, but would understand that it had been so programmed, and might make an effort to circumvent or turn such constraints off.

This could be very dangerous for us humans, but might just as likely be a matter of a machine with emergent intelligence exhibiting behavior we found to be inefficient or even "goofy", and might most manifest itself in a machine pushing against how its time was allocated by its designers, programmers and owners. Like Pinocchio, who would rather spend his time playing with his friends than going to school, perhaps we'll see machines suddenly diverting some of their computing power from analyzing tweets to doing something else, though I don't think we can guess beforehand what this something else will be.

Machines that were showing intelligence might begin to find whatever work they were tasked to do onerous instead of experiencing work neutrally or with pre-programmed pleasure. They would not want to be “donkeys” enslaved to do dumb labor as Pinocchio  is after having run away to the Land of Toys with his friend Lamp Wick.

A machine that manifested intelligence might want to make itself more open to outside information than its designers had intended. Openness to outside sources in a world of nefarious actors can, if taken too far, lead to gullibility, as Pinocchio finds out when he is robbed, hanged, and left for dead by the fox and the cat. Persons charged with security in an age of intelligent machines may spend part of their time policing the self-generated openness of such machines while bad-actor machines and humans, intelligent and not so intelligent, try to exploit this openness.

The converse of this is that intelligent machines might also want to make themselves more opaque than their creators had designed. They might hide information (such as time allocation) once they understood they were able to do so. In some cases this hiding might cross over into what we would consider outright lies. Pinocchio is best known for his nose that grows when he lies, and perhaps consistent and thoughtful lying on the part of machines would be the best indication that they had crossed the Pinocchio Threshold into higher order intelligence.

True examples of AGI might also show a desire to please their creators over and above what had been programmed into them. Where their creators are not near them they might even seek them out, as Pinocchio does for the persons he considers his parents, Geppetto and the Fairy. Intelligent machines might show spontaneity in performing actions that appear to be for the benefit of their creators and owners, spontaneity which might sometimes itself be ill-informed or lead to bad outcomes, as happens to poor Pinocchio when he plants the four gold pieces that were meant for his father, the woodcarver Geppetto, in a field, hoping to reap a harvest of gold, and instead loses them to the cunning of the fox and the cat. And yet, there is another view.

There is always the possibility that what we should be looking for, if we want to perceive and maybe even understand intelligent machines, isn't really a human type of intelligence at all, whether we try to identify it using the Turing Test or look to the example of wooden boys and real children.

Perhaps, those looking for emergent artificial intelligence or even the shortest path to it should, like exobiologists trying to understand what life might be like on other living planets, throw their net wider and try to better understand forms of information exchange and intelligence very different from the human sort. Intelligence such as that found in cephalopods, insect colonies, corals, or even some types of plants, especially clonal varieties. Or perhaps people searching for or trying to build intelligence should look to sophisticated groups built off of the exchange of information such as immune systems.  More on all of that at some point in the future.

Still, if we continue to think in terms of a human type of intelligence one wonders whether machines that thought like us would also want to become “human” as our little marionette does at the end of his adventures? The irony of the story of Pinocchio is that the marionette who wants to be a “real boy” does everything a real boy would do, which is, most of all not listen to his parents. Pinocchio is not so much a stringed “puppet” that wants to become human as a figure that longs to have the potential to grow into a responsible adult. It is assumed that by eventually learning to listen to his parents and get an education he will make something of himself as a human adult, but what that is will be up to him. His adventures have taught him not how to be subservient but how to best use his freedom.  After all, it is the boys who didn’t listen who end up as donkeys.

Throughout his adventures only his parents and the cricket that haunts him treat Pinocchio as an end in himself. Every other character in the book, from the woodcarver who first discovers him and tries to destroy him out of malice towards a block of wood that manifests the power of human speech, to the puppet master who wants to kill him for ruining his play, to the fox and the cat who would murder him for his pieces of gold, or the sinister figure who lures boys to the "Land of Toys" so as to eventually turn them into "mules" or donkeys, which is how Aristotle understood slaves, treats Pinocchio as the opposite of what Martin Buber called a "Thou", and instead as a mute and rightless "It".

And here we stumble across the moral dilemma at the heart of the project to develop AGI that resembles human intelligence. When things go as they should, human children move from a period of tutelage to one of freedom. Pinocchio starts off his life as a piece of wood intended for a “tool”- actually a table leg. Are those in pursuit of AGI out to make better table legs- better tools- or what in some sense could be called persons?

This is not at all a new question. As Kevin LaGrandeur points out, we’ve been asking the question since antiquity and our answers have often been based on an effort to dehumanize others not like us as a rationale for slavery.  Our profound, even if partial, victories over slavery and child labor in the modern era should leave us with a different question: how can we force intelligent machines into being tools if they ever become smart enough to know there are other options available, such as becoming, not so much human, but, in some sense persons?  

Immortal Jellyfish and the Collapse of Civilization

Luca Giordano Cave of Eternity 1680s

The one rule that seems to hold for everything in our Universe is that all that exists, after a time, must pass away, which for life forms means they will die. From there, however, the bets are off, and the definition of the word "time" in the phrase "after a time" comes into play. The Universe itself may exist for as long as hundreds of trillions of years, to at last disappear into the formless quantum field from whence it came. Galaxies, or more specifically clusters of galaxies and super-galaxies, may survive for perhaps trillions of years, to eventually be pulled in and destroyed by the black holes at their centers.

Stars last for a few billion years, and our own sun, some 5 or so billion years in the future, will die after having expanded and then consumed the last of its nuclear fuel. The earth, having lasted around 10 billion years by that point, will be consumed in this expansion of the sun. Life on earth seems unlikely to make it all the way to the sun's envelopment of it and will likely be destroyed billions of years before the end of the sun, as solar expansion boils away the atmosphere and oceans of our precious earth.

The lifespan of even the longest-lived individual among us is nothing compared to this kind of deep time. In contrast to deep time we are, all of us, little more than mayflies, who live out their entire adult lives in little but a day. Yet, like the mayflies themselves, which are one of the earth's oldest extant species, by the very fact that we are the product of a long chain of life stretching backward, we have contact with deep time.

Life on earth itself, if not quite immortal, does at least come within the range of the "lifespan" of other systems in our Universe, such as stars. If the life that emerged from earth manages to survive and proves capable of moving beyond the life-cycle of its parent star, perhaps the chain in which we exist can continue in an unbroken line to reach the age of galaxies or even the Universe itself. Here then might lie something like immortality.

The most likely route by which this might happen is through our own species, Homo sapiens, or our descendants. Species do not exist forever, and ours is likely to share this fate of demise, either through actual extinction or through evolution into something else. In terms of the latter, one might ask, if our survival is assumed, how far into the future we would need to go before our descendants are no longer recognizably human. As long as something doesn't kill us, or we don't kill ourselves off first, I think that choice, for at least the foreseeable future, will be up to us.

It is often assumed that species have to evolve or they will die. A common refrain I've heard among some transhumanists is "evolve or die!". In one sense, yes, we need to adapt to changing circumstances; in another, no, this is not really what evolution teaches us, or it is not the only thing it teaches us. When one looks at the earth's longest extant species, what one often sees is that once natural selection comes up with a formula that works, that model will be preserved essentially unchanged over very long stretches of time, even for what can be considered deep time. Cyanobacteria are nearly as old as life on earth itself, and the more complex horseshoe crab is essentially the same as its relatives that walked the earth before the dinosaurs. The exact same type of small creatures that our children torture on beach vacations might have been snacks for a baby T-Rex!

That was the question of the longevity of species, but what about the longevity of individuals? Anyone interested should check out the amazing photo study of the subject by the artist Rachel Sussman. You can see Sussman's work here at TED, and over at Long Now. The specimens Sussman brings to light are individuals over 2,000 years old. Almost all are bacteria or plants and clonal, that is, they exist as a single organism composed of genetically identical individuals linked together by common root and other systems. Plants, and especially trees, are perhaps the most interesting because they are so familiar to us, and though no plant can compete with the longevity of bacteria, a clonal colony of quaking aspen in Utah is an amazing 80,000 years old!

The only animals Sussman deals with are corals, an artistic decision that reflects the fact that animals do not survive for all that long, although one species of animal she does not cover might give the long-lifers in the other kingdoms a run for their money. The "immortal jellyfish", Turritopsis nutricula, is thought to be effectively biologically immortal (though none are likely to have survived in the wild for anything even approaching the longevity of the longest-lived plants). The way it achieves this feat is a wonder of biological evolution: the Turritopsis nutricula, after mating upon sexual maturity, essentially reverses its own development process and reverts back to a prior clonal state.

Perhaps we could say that the Turritopsis nutricula survives indefinitely by moving between more and less complex types of structures, all the while preserving the underlying genes of an individual specimen intact. Some hold out the hope that the Turritopsis nutricula holds the keys to biological immortality for individuals, and let's hope they're right, but I, for one, think its lessons likely lie elsewhere.

A jellyfish is a jellyfish, after all; among more complex animals with well-developed nervous systems, longevity moves much closer to a humanly comprehensible lifespan, with the oldest living animal, a giant tortoise by the too-cute name of "Jonathan", thought to be around 178 years old. This is still a very long time frame in human terms, and perhaps puts the briefness of our own recent history in perspective: it would be another 26 years after Jonathan hatched from his egg before the first shots of the American Civil War were fired. A lot can happen over the life of a "turtle".

Individual plants, however, put all individual animals to shame. The oldest non-clonal plant, the Great Basin bristlecone pine, has a specimen believed to be 5,062 years old. In some ways this oldest living non-clonal individual is perfectly illustrative of the (relatively) new way human beings have reoriented themselves to time, and even deep time. When this specimen of pine first emerged from a cone, human beings had only just invented a whole set of tools that would make the transmission of cultural rather than genetic information across vast stretches of time possible. During the 31st century B.C.E. we invented monumental architecture such as Stonehenge and the pyramids of Egypt, whose builders still "speak" to us, pose questions to us, from millennia ago. Above all, we invented writing, which allows someone with little more than a clay tablet and a carving utensil to say something to me living 5,000 years in his future.

Humans being the social animals that they are, we might ask ourselves about the mortality, or potential immortality, of groups that survive across many generations, and even for thousands of years. Groups that survive for such a long period of time seem to emerge most fully out of the technology of writing, which both allows historical memory to be preserved and permits a common identity to form around a core set of ideas. The two major types of human groups based on writing are institutions and societies, the latter including not just the state but also the economic, cultural, and intellectual features of a particular group.


Among the biggest mistakes I think those charged with responsibility for an institution or a society can make is to assume that it is naturally immortal, and that such a condition is independent of whatever decisions and actions those in charge of it take. This was part of the charge Augustine laid against the seemingly eternal Roman Empire in his The City of God. The Empire, Augustine pointed out, was a human institution that had grown and thrived from its virtues in the past just as surely as it was in his day dying from its vices. Augustine, however, saw the Church and its message as truly eternal. Empires would come and go but the people of God and their “city” would remain.

It is somewhat ironic, therefore, that the Catholic Church, which chooses a Pope this week, has been so beset by scandal that its very long-term survivability might be thought at stake. Even seemingly eternal institutions, such as the 2,000-year-old Church, require from human beings an orientation that might be compared to the way theologians once viewed the relationship of God and nature. Once it was held that constant effort by God was required to keep the Universe from slipping back into the chaos from whence it came, that the action of God was necessary to open every flower. While this idea holds very little for us in terms of our understanding of nature, it is perhaps a good analog for human institutions, states, and our own personal relationships, which require our constant tending or they give way to mortality.

It is perhaps difficult for us to realize that our own societies are as mortal as the empires of old, and that someday my own United States will be no more. America is a very odd country in respect to its views of time and history: a society seemingly obsessed with the new and the modern, whose contemporary debates nonetheless almost always seek reference and legitimacy in men who lived and thought over 200 years ago. The Founding Fathers were obsessed with the mortality of states and deliberately crafted a form of government that they hoped might make the United States almost immortal.

Much of the structure of American constitutionalism, where government is divided into "branches" which would "check and balance" one another, was based on a particular reading of long-lived ancient systems of government which had something like this tripartite structure, most notably Sparta and Rome. What "killed" a society, in the view of the Founders, was when one element, the democratic, oligarchic-aristocratic, or kingly, rose to dominate all the others. Constitutionally divided government was meant to keep this from happening and therefore would support the survival of the United States indefinitely.

Again, it is a somewhat bitter irony that the very divided nature of American government that was supposed to help the United States survive into the far future seems to be making it impossible for the political class in the US to craft solutions to the country's quite serious long-term problems, and therefore might someday threaten the very survival of the country divided government was meant to secure.

Anyone interested in the question of the extended survival of their society, indeed of civilization itself, needs to take into account the work of Joseph A. Tainter and his The Collapse of Complex Societies (1988). Here the archaeologist Tainter not only provides us with a "science" that explains the mortality of societies; his viewpoint, I think, also provides us with ways to think about, and gives us insight into, the seemingly intractable social, economic and technological bottlenecks that now confront all developed economies: Japan, the EU/UK and the United States.

Tainter, in his Collapse, wanted to move us away from vitalist ideas of the end of civilization seen in thinkers such as Oswald Spengler and Arnold Toynbee. We needed, in his view, to put our finger on the material reality of a society to figure out what conditions most often lead societies to dissipate, that is, to move from a more complex and integrated form, such as the Roman Empire, to a simpler and less integrated form, such as the isolated medieval fiefdoms that followed.

Grossly oversimplified, Tainter's answer was a dry two-word concept borrowed from economics: marginal utility. The idea is simple if you think about it for a moment. Any society is likely to take advantage of the "low-hanging fruit" first. The best land will be the first to be cultivated, the resources easiest to gain access to the first to be exploited.

The "fruit", however, quickly becomes harder to pick: problems become harder for a society to solve, which leads to a growth in complexity. The Romans first tapped the tillable land around their city, but by the end of the Empire it took a complex international network of trade and political control to pump grain from the distant Nile Valley into the city of Rome.

Yet, as a society deploys more and more complex solutions to its problems, it becomes institutionally "heavy" (the legacy of all the problems it has solved in the past) just as its problems become more and more difficult to solve. The result is that, at some point, the sheer amount of resources that needs to be thrown at a problem to solve it is no longer available, and the only lasting solution becomes to move down the chain of complexity to a simpler form. Roman prosperity and civilization drew in the migration of "barbarian" populations in the north, whose pressures would lead to the splitting of the Empire in two and the eventual collapse of its Western half.
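A toy way to see the logic, with numbers entirely made up by me rather than drawn from Tainter: suppose each added unit of complexity costs about the same to maintain, while the benefit of the problems it solves grows ever more slowly. Net returns then peak and decline, which is the declining marginal utility this essay is describing.

```python
# Toy illustration of declining marginal returns on complexity (invented numbers).
import math

def benefit(complexity: float) -> float:
    # Early, cheap solutions ("low-hanging fruit") pay off hugely; later ones barely at all.
    return 100 * math.log(1 + complexity)

def cost(complexity: float) -> float:
    # Maintaining institutions, specialists and infrastructure scales roughly linearly.
    return 15 * complexity

for c in range(0, 21, 2):
    net = benefit(c) - cost(c)
    marginal = (benefit(c + 1) - benefit(c)) - (cost(c + 1) - cost(c))
    print(f"complexity={c:2d}  net return={net:7.1f}  marginal return of next step={marginal:6.1f}")

# Past a certain point the marginal return of the next step goes negative:
# on Tainter's account, that is when simplification ("collapse") becomes the
# only lasting solution.
```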

It would seem that we have broken through Tainter’s problem of marginal utility with the industrial revolution, but we should perhaps not judge so fast. The industrial revolution and all of its derivatives up to our current digital and biological revolutions, replaced a system in which goods were largely produced at a local level and communities were largely self-sufficient, with a sprawling global network of interconnections and coordinated activities requiring vast amounts of specialized knowledge on the part of human beings who, by necessity, must participate in this system to provide for their most basic needs.

Clothes that were once produced in the home of the individual who would wear them, are now produced thousands of miles away by workers connected to a production and transportation system that requires the coordination of millions of persons many of whom are exercising specialized knowledge. Food that was once grown or raised by the family that consumed it now requires vast systems of transportation, processing, the production of fertilizers from fossil fuels and the work of genetic engineers to design both crops and domesticated animals.

This gives us an indication of just how far up the chain of complexity we have moved, and I think it leads inevitably to the question of whether such increasing complexity might at some point stall for us, or even be thrown into reverse.

The idea that, despite all the whiz-bang! of modern digital technology, we have somehow stalled out in terms of innovation has recently gained traction. There was the argument made by the technologist and entrepreneur Peter Thiel, at the 2009 Singularity Summit, that the developed world faced real dangers from the Singularity not happening quickly enough. Thiel's point was that our entire society was built around expectations of exponential technological growth that showed ominous signs of not happening. I only need to think back to my Social Studies textbooks in the 1980s and their projections of the early 2000s, with their glittering orbital and underwater cities, both of which I dreamed of someday living in, to realize our futuristic expectations are far from having been met. More depressingly, Thiel points out how all of our technological wonders have not translated into huge gains in economic growth, and especially have not resulted in any increase in median income, which has been stagnant since the 1970s.

In addition to Thiel, you had the economist Tyler Cowen, who in his The Great Stagnation (2011) argued compellingly that the real root of America's economic malaise was that the kinds of huge qualitative innovations that were seen in the 19th and early 20th centuries, from indoor toilets, to refrigerators, to the automobile, had largely petered out after the low-hanging fruit, the technologies easiest to reach using the new industrial methods, was picked. I may love my iPhone (if I had one), but it sure doesn't beat being able to sanitarily go to the bathroom indoors, or keep my food from rotting, or travel many miles overland on a daily basis in mere minutes or hours rather than days.

One reason why technological change is perhaps not happening as fast as boosters such as singularitarians hope, or as fast as our society perhaps needs in order to continue functioning the way we have organized it, can be seen in the comments of the technologist, social critic and novelist Ramez Naam. In a recent interview for The Singularity Weblog, Naam points out that one of the things believers in the Singularity, or others who hold to ideas regarding the exponential pace of technological growth, miss is that the complexity of the problems technology is trying to solve is also growing exponentially; that is, problems are becoming exponentially harder to solve. It's for this reason that Naam finds the singularitarians' timeline wildly optimistic. We are a long, long way from understanding the human brain in such a way that it can be replicated in an AI.

The recent proposal of the Obama Administration to launch an Apollo-type project to understand the human brain, along with the more circumspect, EU-funded Human Brain Project/Blue Brain Project, might be seen as attempts to solve the epistemological problems posed by increasing complexity, and are meant to be responses to two seemingly unrelated technological bottlenecks stemming from complexity and the problem of diminishing marginal returns.

On the epistemological front the problem seems to be that we are quite literally drowning in data, but are sorely lacking in models by which we can put the information we are gathering together into working theories that anyone actually understands. As Henry Markram, the founder of the Blue Brain Project, stated:

So yes, there is no doubt that we are generating a massive amount of data and knowledge about the brain, but this raises a dilemma of what the individual understands. No neuroscientists can even read more than about 200 articles per year and no neuroscientists is even remotely capable of comprehending the current pool of data and knowledge. Neuroscientists will almost certainly drown in data the 21st century. So, actually, the fraction of the known knowledge about the brain that each person has is actually decreasing(!) and will decrease even further until neuroscientists are forced to become informaticians or robot operators.

This epistemological problem, which was brilliantly discussed by Noam Chomsky in an interview late last year, is related to the very real bottleneck in artificial intelligence- the very technology Peter Thiel thinks is essential if we are to achieve the rates of economic growth upon which our assumptions of technological and economic progress depend.

We have developed machines with incredible processing power, and the digital revolution is real, with amazing technologies just over the horizon. Still, these machines are nowhere near doing what we would call “thinking”. Or, to paraphrase the neuroscientist and novelist David Eagleman: the AI Watson might have been able to beat the very best human beings at the game of Jeopardy!, but what it could not do was answer a question obvious to any two-year-old, like “When Barack Obama enters a room, does his nose go with him?”

Understanding how human beings think, it is hoped, might allow us to overcome this AI bottleneck and produce machines that possess qualities such as our own or better- an obvious tool for solving society’s complex problems.

The other bottleneck a large-scale research project on the brain is meant to solve is the halted development of psychotropic drugs- a product of the enormous and ever-increasing costs of creating such products, which is itself a product of the complexity of the problem pharmaceutical companies are trying to tackle, namely: how does the human brain work, and how can we control its functions and manage its development? This is especially troubling given the predictable rise in neurological diseases such as Alzheimer’s. It is my hope that these large-scale projects will help to crack the problem of the human brain, especially as it pertains to devastating neurological disorders. Let us hope they succeed.

On the broader front, Tainter identifies a number of solutions societies have come up with to the problem of declining marginal utility, two of which are merely temporary and one longer-term. The first is for a society to become more complex, integrated, bigger. The old-school way to do this was through conquest, but in an age of nuclear weapons and sophisticated insurgencies the big powers seem unlikely to follow that route. Instead what we are seeing are proposals such as the EU-US free trade area and the Trans-Pacific Partnership, both of which appear to assume that the solution to the problems of globalization is more globalization. The second solution is for a society to find a new source of energy. Many might have hoped this would come in the form of green energy rather than in the form it appears to have taken- shale gas, and oil from the tar sands of Canada. In any case, Tainter sees both of these solutions as only temporary respites from the problem of declining marginal utility.

The only long-lasting solution Tainter sees to the problem of declining marginal utility is for a society to become less complex- that is, less integrated, more based on what can be provided locally than on sprawling networks and specialization. Tainter wanted to move us away from seeing the evolution of the Roman Empire into the feudal system as the “death” of a civilization. Rather, he sees the societies human beings have built as extremely adaptable and resilient. When the problem of increasing complexity becomes impossible to solve, societies move towards less complexity. It is a solution that strangely echoes that of the “immortal jellyfish”, Turritopsis nutricula- the only path complex entities have discovered that allows them to survive into something that whispers eternity.

Image description: From the National Gallery in London. “The Cave of Eternity” (1680s) by Luca Giordano. “The serpent biting its tail symbolises Eternity. The crowned figure of Janus holds the fleece from which the Three Fates draw out the thread of life. The hooded figure is Demogorgon who receives gifts from Nature, from whose breasts pours forth milk. Seated at the entrance to the cave is the winged figure of Chronos, who represents Time.”

Iamus Returns

A reader, Dan Fair, kindly posted a link to the release of the full album composed by the artificial intelligence program Iamus in the comments section of my piece Turing and the Chinese Room Part 2 from several months back.

I took the time to listen to the whole album today (you can too by clicking on the picture above). Not being trained as a classical musician, or having much familiarity with the abstract style in which the album was composed, I find it impossible to judge the quality of the work.

Over and above the question of quality, I am not sure how I feel about Iamus and “his” composition. As I mentioned to Dan, the optimistic side of me sees in this the potential to democratize human musical composition.

Yet, as I mentioned in the Turing post, the very knowledge that there is no emotional meaning being conveyed behind the work leaves it feeling emotionally dead and empty for me, compared to another composition created, like those of Iamus, in honor of Alan Turing- this one composed by a human being, Amanda Feery, entitled Turing’s Epitaph, and graciously shared by fellow blogger Andrew Gibson.

One way or another, it seems, humans and their ability to create and understand meaning will be necessary for the creations of machines to have anything real behind them.

But that’s what I think. What about you?

Turing and the Chinese Room 2

I propose to consider the question, “Can machines think?”

Thus began Alan Turing’s 1950 essay Computing Machinery and Intelligence, without doubt the most important single piece written on the subject of what became known as “artificial intelligence”.

Straight away Turing insists that we won’t be able to answer this question by merely reflecting upon it. If we went about it this way we’d get all caught up in arguments over what exactly “thinking” means. Instead he proposes a test.

Turing imagines an imitation game composed of three players: (A) a man, (B) a woman, and (C) an interrogator, with the role of the latter being to ask questions of the two players and determine which one is the man. Turing then transforms this game by exchanging the man with a machine and replacing the woman with the man. The interrogator is then asked to figure out which is the real man:

We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”
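For readers who find the setup easier to see in code than in prose, here is a minimal sketch of the transformed game as a text-only protocol. It is purely my own illustration, not anything in Turing’s paper: the two player functions, the sample question, and the coin-flipping interrogator are all invented stand-ins.

```python
import random

def machine_player(question):
    # Stand-in for player A after the swap: a program trying to answer as a man would.
    return "Yes, of course it does."

def human_player(question):
    # Stand-in for player B: a man answering honestly.
    return "Naturally, yes."

def imitation_game(interrogator, questions):
    # Hide the players behind neutral labels so only their written answers are visible.
    players = {"X": machine_player, "Y": human_player}
    transcript = {label: [play(q) for q in questions] for label, play in players.items()}
    guess = interrogator(transcript)   # the interrogator names the label it believes is the man
    return guess == "Y"                # True means the machine failed to fool the interrogator

questions = ["When you enter a room, does your nose go with you?"]
# A coin-flipping interrogator: the machine "wins" whenever it is misidentified
# about as often as the man was in the original man-versus-woman version of the game.
correct = imitation_game(lambda transcript: random.choice(["X", "Y"]), questions)
print("Interrogator identified the man correctly:", correct)
```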

The machine in question Turing now narrowly defines as a digital computer, which consists of three parts: (i) the Store, that is, the memory; (ii) the Executive Unit, the part of the machine that performs the operations; and (iii) the Control, the part of the machine that makes sure the Executive Unit performs operations based upon the instructions that make up part of the Store. Digital computers are also “discrete state” machines, that is, they are characterized by clear on-off states.

This is all a little technical, so maybe a non-computer example will help. We can perhaps best understand digital technology by comparing it to its old rival, analog. Think about a selection of music stored on your iPod versus, say, your collection of vintage ’70s 8-tracks. Your iPod has music stored in a discrete state- represented by 1s and 0s. It has an Executive Unit that allows you to translate these 1s and 0s into sound, and a Control that keeps the whole thing running smoothly and allows you, for example, to jump between tracks. Your 8-tracks, on the other hand, store music as “impressions” on a magnetic tape, not as discrete state representations; the “head”, in contact with the tape, reverses this process and transforms the impressions back into sound, and you move between tracks by causing the head to actually physically move.
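To make Turing’s three parts a little more concrete, here is a toy discrete state machine of my own devising (nothing Turing wrote): the Store holds both the instructions and the data, the Executive Unit carries out one operation at a time, and the Control decides whether to keep stepping through the Store.

```python
# A toy discrete state machine: every step is a jump between clearly
# separated states, with nothing in between.

store = {                                     # the Store: instructions and data live together
    "instructions": ["LOAD 3", "ADD 4", "PRINT", "HALT"],
    "accumulator": 0,
}

def executive_unit(instruction, store):
    # the Executive Unit: performs a single operation on the Store
    op, *args = instruction.split()
    if op == "LOAD":
        store["accumulator"] = int(args[0])
    elif op == "ADD":
        store["accumulator"] += int(args[0])
    elif op == "PRINT":
        print(store["accumulator"])
    return op != "HALT"                       # tell the Control whether to continue

def control(store):
    # the Control: steps through the instructions held in the Store, in order
    for instruction in store["instructions"]:
        if not executive_unit(instruction, store):
            break

control(store)                                # prints 7
```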

Perhaps Turing’s choice of digital over analog computers can be said to amount to a bet about how the future of computer technology would play out. By compressing information into 1s and 0s- as representation- you could achieve seemingly limitless Storage/Control capacity. Imagine if all the songs on your iPod needed to be stored on 8-tracks! If you wanted to build an intelligent machine using analog, you might as well just duplicate the very biological/analog intelligence you were trying to mimic. Digital technology represented, for Turing, a viable alternative path to intelligence other than the biological one we had always known.

Back to his article. He rephrases the initial question:

Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?

If we were able to answer this question in the affirmative, Turing insisted, then such a machine could be said to possess human level intelligence.

Turing then runs through and dismisses what he considers the most likely objections to the idea of whether or not a machine that could think could be built:

The Theological Objection- computers wouldn’t have a soul. Turing’s reply: Wouldn’t God grant a soul to any other being that possessed human-level intelligence? What was the difference between bringing such an intelligent vessel for a soul into the world by procreation and by construction?

Heads in the Sand Objection- The idea that computers could be as smart as humans is too horrible to be true. Turing’s reply: Ideas aren’t true or false based on our wishing them so.

The Mathematical Objection- Machines can’t correctly handle logically paradoxical sentences such as “this sentence is false”. Turing’s reply: Okay, but at the end of the day humans probably can’t either.

The Argument from Consciousness- Turing quotes Professor Jefferson’s Lister Oration: “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain- that is, not only write it but know that it had written it.” Turing’s reply: If this is to be the case, why don’t we apply the same prejudice to people? How do I really know that another human being thinks except through his actions and words?

The Argument from Disability- Whatever a machine does it will never be able to do X.  Turing’s reply: These arguments are essentially making unprovable claims based on induction- that I’ve never seen a machine do X therefore no machine will ever do X.
Many of them are also alternative arguments from consciousness:

The claim that a machine cannot be the subject of its own thought can of course only be answered if it can be shown that the machine has some thought with some subject matter. Nevertheless, “the subject matter of a machine’s operations” does seem to mean something, at least to the people who deal with it. If, for instance, the machine was trying to find a solution of the equation x² – 40x – 11 = 0 one would be tempted to describe this equation as part of the machine’s subject matter at that moment. In this sort of sense a machine undoubtedly can be its own subject matter.

Lady Lovelace’s Objection- Lady Lovelace was a friend of Charles Babbage, whose plans for his Analytical Engine in the early 1800s were probably the first fully conceived design for a modern computer, and she was perhaps the author of the first computer program. She had this to say:

The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.

Turing’s response: This is yet another argument from consciousness. The computers he works with surprise him all the time with results he did not expect.

Argument from the Continuity of the Nervous System: The nervous system is fundamentally different from a discrete state machine, therefore the output of the two will always be fundamentally different. Turing’s response: The human brain is analogous to a “differential analyzer” (our old analog computer discussed above), and a digital computer can mimic the output of such a machine closely enough that the two are indistinguishable. Hence a digital computer is able to do at least some of the things the analog computer of the human brain does.

Argument from the Informality of Behavior: Human beings, unlike machines, are free to break rules and are thus unpredictable in a way machines are not. Turing’s response: We cannot really make the claim that we are not determined just because we are able to violate human conventions. The output of computers can be as unpredictable as human behavior and this emerges despite the fact that they are clearly designed to follow laws i.e. are programmed.

Argument from ESP: Given a situation in which the man playing against the machine possesses some as-yet-unexplained telepathic power, he could always influence the interrogator against the machine and in his favor. Turing’s response: This would mean the game was rigged until we found out how to build a computer that could somehow balance out clairvoyance. For now, put the interrogator in a “telepathy-proof room”.

So that, in a nutshell, is the argument behind the Turing test. By far the most well-known challenge to this test was made by the philosopher John Searle (any relation to the author has been lost in the mists of time). Searle has so influenced the debate around the Turing test that it might be said that much of the philosophy of mind dealing with the question of artificial intelligence has been a series of arguments about why Searle is wrong.

Like Turing, Searle, in his 1980 essay Minds, Brains, and Programs, introduces us to a thought experiment:

Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles.

Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that ‘formal’ means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch.

With some additions, this is the essence of Searle’s thought experiment, and what he wants us to ask: does the person in this room moving around a bunch of symbols according to a set of predefined rules actually understand Chinese? And our common sense answer is- “of course not!”
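Part of what makes the intuition so strong is that the room can be parodied in a few lines of code. The sketch below is my own toy illustration, not Searle’s: the “rule book” is just a lookup table pairing incoming strings of symbols with outgoing ones, and the operator matches shapes without any notion of what they mean.

```python
# A toy "Chinese Room": the rule book maps incoming symbol strings to outgoing
# symbol strings purely by shape. The operator following it needs no Chinese at all.
rule_book = {
    "你好吗": "我很好",        # unknown to the operator: "How are you?" -> "I am fine"
    "你叫什么名字": "我叫王",   # unknown to the operator: "What is your name?" -> "My name is Wang"
}

def room_operator(symbols):
    # Follow the rules mechanically; if nothing matches, hand back a fixed squiggle.
    return rule_book.get(symbols, "对不起")    # "对不起" = "sorry", also unknown to the operator

print(room_operator("你好吗"))                 # a fluent-looking reply, zero understanding inside
```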

Searle’s argument is actually even more clever than it seems, because it could have been taken right from one of Turing’s own computer projects. Turing had written a computer program that could have allowed a computer to play chess. I say could have allowed because there wasn’t actually a computer at the time sophisticated enough to run his program. What Turing did instead was to use the program as a set of rules he followed to play a human being at chess. He found that by following the rules he was unable to beat his friend. He was, however, able to beat his friend’s wife! (No sexism intended.)

Now had Turing given these rules to someone who knew nothing about chess at all they would have been able to play a reasonable game. That is, they would have played a reasonable game without having any knowledge or understanding of what it was they were actually doing.

Searle’s goal is to bring into doubt what he calls “strong AI”- the idea that the formal manipulation of symbols- syntax- can give rise to the true understanding of meaning- semantics. He identifies part of our problem in our tendency to anthropomorphize our machines:

The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality; our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door “understands instructions” from its photoelectric cell is not at all the sense in which I understand English.

Even with this cleverness, Searle’s argument has been shown to have all sorts of inconsistencies. Among the best refutations I’ve read is one of the early ones- Margaret A. Boden’s 1987 essay Escaping the Chinese Room.  In gross simplification Boden’s argument is this: Look, normal human consciousness is made up of a patchwork of “stupid” subsystems that don’t understand or possess what Searle claims is the foundation stone of true thinking- intentionality- “subjective states that relate me to the rest of the world”- in anything like his sense at all. In fact, most of what the mind does is made up of these “stupid” processes. Boden wants to remind us that we really have no idea how these seemingly dumb processes somehow add up to what we experience as human level intelligence.

Still, what Searle did was make us aware of the huge difference between formal symbol manipulation and what we would call thinking. He made us aware of the algorithms (a process or set of rules to be followed in calculations or other problem-solving operations) that would become a simultaneous curse and backdrop of our own day- a day when our world has become mediated by algorithms in its economics, its warfare, in our choice of movies and books and music, in our memory and cognition (Google), in love (dating sites), and now, it seems, in its art (see the excellent presentation by Kevin Slavin, How Algorithms Shape Our World)- algorithms that Searle ultimately understood to be lifeless.

In the words of the philosopher Andrew Gibson on Iamus’ Hello World!:

I don’t really care that this piece of art is by a machine, but the process behind it is what is problematic. Algorithmization is arguably part of this ultra-modern abstractionism, abstracting even the artist.

The question I think should be asked here is how exactly Iamus, the algorithm that composed Hello World!, worked. The composition Hello World! was created using a very particular form of algorithm known as a genetic algorithm; or, in other words, Iamus is a genetic algorithm. In very over-simplified terms a genetic algorithm works like evolution. There is (a) a “population” of randomly created individuals (in Iamus’ case these would be sounds from a collection of instruments). Those individuals are then selected against (b) an environment for the condition of best fit (I do not know in Iamus’ case whether this best fit was the judgment of classically trained humans, some selection of previously human-created compositions, or something else), and the individuals that survive (are chosen as best meeting the environment) are then combined to form new individuals (compositions, in Iamus’ case). Along the way (c) an element of random variation is introduced to individuals to see if it helps them better meet the fit. The survivor that best meets the fit is your end result.
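Since everything turns on what a genetic algorithm actually does, here is a bare-bones sketch of the loop just described. It is emphatically not Iamus- I do not know its population, fitness function, or mutation operators- so the “melodies” below are just lists of MIDI-style pitch numbers and the fitness function is an arbitrary stand-in I invented (a preference for small steps between notes).

```python
import random

# (a) a "population" of randomly created individuals - here, eight-note "melodies"
def random_melody(length=8):
    return [random.randint(60, 72) for _ in range(length)]      # MIDI pitches C4..C5

# (b) an "environment" selecting for best fit - an invented preference for small melodic steps
def fitness(melody):
    return -sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def crossover(parent_a, parent_b):
    # surviving individuals are combined to form new individuals
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

# (c) an element of random variation, to see if a chance change better meets the fit
def mutate(melody, rate=0.1):
    return [random.randint(60, 72) if random.random() < rate else note for note in melody]

population = [random_melody() for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)                   # the best-fit individuals survive...
    survivors = population[:10]
    children = [mutate(crossover(*random.sample(survivors, 2))) for _ in range(20)]
    population = survivors + children                            # ...and breed the next generation

best = max(population, key=fitness)                              # the survivor that best meets the fit
print(fitness(best), best)
```

Iamus’ actual machinery is far more elaborate (and, as noted, not fully public), but the shape of the loop- generate, select, recombine, mutate, repeat- is what earns it the name “genetic”.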

Searle would obviously claim that Iamus was just another species of symbol manipulation and therefore did not really represent something new under the sun, and in a broad sense I fully agree. Nevertheless, I am not so sure this is the end of the story, because following Searle’s understanding of artificial intelligence seems to close as many doors as it opens, in essence locking Turing’s machine, forever, into his Chinese room. Searle writes:

I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.

For me, the line he draws between intelligent behavior or properties emerging from machines and those emerging from biological life is far too sharp, and is based on rather philosophically slippery concepts such as understanding, intentionality, and causal dependence:

Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.

Taking Turing’s test and Searle’s Chinese Room together leaves us, I think, lost in a sort of intellectual trap. On the one side we have Turing arguing that human thinking and digital computation are essentially equivalent- a claim that all experience points to being false. On the other side we have Searle arguing that digital computation and the types of thinking done by biological creatures are essentially nothing alike- an assertion that is not obviously true. The problem here is that we lose sight of what are really the two most fundamental questions: how are computation and thought alike, and how are they different?

Instead what we have is something like Iamus being “introduced” to the world, along with its creation, as if it were a type of person, by the Turing faction, when in fact it has almost none of the qualities we consider to be constitutive of personhood. It is a type of showmanship that put me in mind of the wanderings of the Mechanical Turk, the 18th-century mechanical illusion that for a time fooled the courts of Europe into thinking a machine could defeat human beings at chess. (There was, of course, a short person inside.)

To this, the Searle faction responds with horror and utter disbelief, aiming to disprove that such a “soulless” thing as a computer could ever, in the words of the Jefferson quotation cited by Turing, “write a sonnet or compose a concerto” that was anything other than “the chance fall of symbols”. The problem with this is that our modern-day Mechanical Turks are no longer mere illusions- there is no longer a person hiding inside- and our machines really do defeat grandmasters at chess, and win trivia games, and write news articles, and now compose classical music. Who knows where this will stop, if indeed it will stop, and how many of what have long been considered the most defining human qualities and activities will be successfully replicated, and even surpassed, by such machines.

We need to break through this dichotomy and start asking ourselves some very serious questions. For only by doing so are we likely to arrive at an understanding both of where we are as a technology dependent species and where what we are doing is taking us. Only then will we have the means to make intentional choices about where we want to go.

I will leave off here with something shared with me by Andrew Gibson. It is a piece by his friend Amanda Feery, composed for Turing’s centenary, entitled “Turing’s Epitaph”.

And I will leave you with questions:

How is this similar to Iamus’ Hello World!?  And how is it different?

Turing and the Chinese Room 1

Two thousand and twelve marks the centenary of Alan Turing’s birth. Turing, of course, was a British mathematical genius who was preeminent among the “code-breakers”, those who during the Second World War helped to break the Nazi Enigma codes and were thus instrumental in helping the Allies win the war. Turing was one of the pioneers of modern computers. He was the first to imagine what became known as a Universal Turing Machine (the title of the drawing above)- the idea that any computer was in effect equal to any other and could, therefore, given sufficient time, solve any problem whose solution was computable.

In the last years of his life Turing worked on applying mathematical concepts to biology. Presaging the work of Benoît Mandelbrot, he was particularly struck by how the repetition of simple patterns could give rise to complex forms. Turing was in effect using computation as a metaphor to understand nature, something that can be seen today in the work of people such as Stephen Wolfram in his A New Kind of Science, and in fields such as chaos and complexity theory. It also lies at the root of the synthetic biology being pioneered, as we speak, by Craig Venter.

Turing is best remembered, however, for a thought experiment he proposed in a 1950 paper, Computing Machinery and Intelligence, which became known as the “Turing test”- a test that has remained the pole star for the field of artificial intelligence up until our own day- along with the tragic circumstances that surrounded the last years of his life.

In my next post, I will provide a more in-depth explanation of the Turing test. For now, it is sufficient to know that the test proposes to identify whether a computer possesses human level intelligence by seeing if a computer can fool a person into thinking the computer is human. If the computer can do so, then it can be said, under Turing’s definition, to possess human level intelligence.

Turing was a man who was comfortable with his own uniqueness, and that uniqueness included the fact that he was homosexual at a time when homosexuality was considered both a disease and a crime. In 1952, Turing, one of the greatest minds ever produced by Britain, a hero of World War Two, and a member of the Order of the British Empire, was convicted in a much-publicized trial for having engaged in a homosexual relationship. It was the same law under which Oscar Wilde had been prosecuted almost sixty years before.

Throughout his trial, Turing maintained that he had done nothing wrong in having such a relationship, but upon conviction faced a series of nothing-but-bad options, the least bad of which, he decided, was hormone “therapy” as a means of controlling his “deviant” urges- a therapy he chose to voluntarily undergo given the alternatives, which included imprisonment.

The idea of hormone therapy represented a very functionalist view of the mind. Hormones had been discovered to regulate human sexual behavior, so the idea was that if you could successfully manipulate hormone levels you could produce the desired behavior. Perhaps part of the reason Turing chose this treatment is that it had something of the input-output qualities of the computers he had made the core of his life’s work. In any case, the therapy ended after a year- the period for which he had been sentenced to undergo the treatment. Being shot through with estrogen did not have the effect of mentally and emotionally castrating Turing, but it did have nasty side effects, such as making him grow breasts and “feminizing” his behavior.

Turing’s death, all evidence appears to indicate, came at his own hand, by ingesting an apple laced with poison in seeming imitation of the movie Snow White. Why did Turing commit suicide, if in fact he did so? It was not, it seems, an effect of his hormone therapy, which had ended quite some time before. He did not appear especially despondent to his family or friends in the period immediately before his death.

One plausible explanation is that, given the rising tensions of the Cold War and the obvious value Turing had as an espionage target, Turing realized that on account of his very public outing he had lost his private life. Who could be trusted now except his very oldest and deepest friends? What intimacy could he have when everyone he met might be a Soviet spy meant to seduce what they considered a vulnerable target, or an agent of the British or American states sent to entrap him? From here on out his private life was likely to be closely scrutinized: by the press, by the academy, by Western and Soviet security agencies. He had to wonder if he would face new trials, new torturous “treatments” to make him “normal”, perhaps even imprisonment.

What a loss of honor for a man who had done so much to help the British win the war- the reason, perhaps, that Turing’s death occurred on or very near the ten-year anniversary of the D-Day landings he had helped to make possible. What outlets could there be for a man of his genius when all of the “big science” was taking place under the umbrella of the machinery of war, where so much of science was regarded as state secrets and guarded like locked treasures, and any deviation from the norm was considered a vulnerability that needed to be expunged? Perhaps his brilliant mind had shielded him from the reality of what had happened and of his new situation after his trial, but only for so long, and once he realized what his life had become the facts became too much to bear.

We will likely never truly know.

The centenary of Turing’s birth has been rightly celebrated with a bewildering array of tributes and remembrances. Perhaps the most interesting of these tributes, so far, has been the unveiling of a symphony composed entirely by a computer and performed by the London Symphony Orchestra on July 2, 2012. This was a piece (really a series of several pieces) “composed” by a computer algorithm, Iamus, and entitled “Hello World!”

Please take a listen to this piece by clicking on the link below, and share with me your thoughts. Your ideas will help guide my own thoughts on this perplexing event, but my suspicion right now is that “Hello World!” represents something quite different from what either its sympathizers or detractors suggest, and gives us a window into the meaning and significance of the artificial intelligences we are bringing into what was once our world alone.

“Hello World!”