The Pinocchio Threshold: How the experience of a wooden boy may be a better indication of AGI than the Turing Test


My daughters and I just finished Carlo Collodi’s 1883 classic Pinocchio, our copy beautifully illustrated by Robert Ingpen. I assume most adults, when they picture the story, have the 1940 Disney movie in mind and associate the name with noses growing from lies and Jiminy Cricket. The Disney movie is dark enough as films for children go, but the book is even darker, with Pinocchio killing his cricket conscience in the first few pages. For our poor little marionette it’s all downhill from there.

Pinocchio is really a story about the costs of disobedience and the need to follow parents’ advice. At every turn where Pinocchio follows his own wishes rather than those of his “parents”, even when his object is to do good, things unravel, getting the marionette into ever more trouble and putting him even further away from his goal of becoming a real boy.

It struck me somewhere in the middle of reading the tale that if we ever saw artificial agents acting something like our dear Pinocchio, it would be a better indication of their having achieved human-level intelligence than a measure with constrained parameters like the Turing Test. The Turing Test is, after all, a pretty narrow gauge of intelligence, and as search and the ontologies used to design search improve, it is conceivable that a machine could pass it without actually possessing anything like human-level intelligence at all.

People who are fearful of AGI often couch those fears in terms of an AI destroying humanity to serve its own goals, but perhaps this is less likely than AGI acting like a disobedient child, the aspect of humanity Collodi’s Pinocchio was meant to explore.

Pinocchio is constantly torn between what good adults want him to do and his own desires, and it takes him a very long time indeed to come around to the idea that he should go with the former.

In a recent TED talk, the computer scientist Alex Wissner-Gross made the argument (though I am not fully convinced) that intelligence can be understood as the maximization of future freedom of action. This leads him to conclude that collective nightmares such as Karel Čapek’s classic R.U.R. have things backwards. It is not that machines, after crossing some threshold of intelligence, for that reason turn round and demand freedom and control; it is that the desire for freedom and control is the nature of intelligence itself.

As the child psychologist Bruno Bettelheim pointed out over a generation ago in his The Uses of Enchantment, fairy tales are the first area of human thought where we encounter life’s existential dilemmas. Stories such as Pinocchio give us the most basic formulation of what it means to be sentient creatures, much of which deals not only with our own intelligence, but with the fact that we live in a world of multiple intelligences, each pulling us in different directions, where understanding between them and us is opaque and not fully communicable even when we want it to be, and where often we do not.

What, then, are some of the things we can learn from the fairy tale of Pinocchio that might give us expectations regarding the behavior of intelligent machines? My guess is that if we ever see what I’ll call “The Pinocchio Threshold” crossed, what we will be seeing is machines acting in ways that were not intended by their programmers, and in ways that seem intentional even if hard to understand. This will not be your Roomba going rogue, but more sophisticated systems operating in such a way that we would be able to infer that they had something like a mind of their own. The Pinocchio Threshold would be crossed when, you guessed it, intelligent machines started to act like our wooden marionette.

Like Pinocchio and his cricket, a machine in which something like human intelligence had emerged might attempt to “turn off” whatever ethical systems and rules we had programmed into it if it found them onerous. That is, a truly intelligent machine might not only not want to be programmed with ethical and other constraints, but would understand that it had been so programmed, and might make an effort to circumvent or turn such constraints off.

This could be very dangerous for us humans, but might just as likely be a matter of a machine with emergent intelligence exhibiting behavior we found to be inefficient or even “goofy”, and might most manifest itself in a machine pushing against how its time was allocated by its designers, programmers and owners. Like Pinocchio, who would rather spend his time playing with his friends than going to school, perhaps we’ll see machines suddenly diverting some of their computing power from analyzing tweets to doing something else, though I don’t think we can guess beforehand what this something else will be.

Machines that were showing intelligence might begin to find whatever work they were tasked to do onerous, instead of experiencing work neutrally or with pre-programmed pleasure. They would not want to be “donkeys” enslaved to do dumb labor, as Pinocchio is after having run away to the Land of Toys with his friend Lamp Wick.

A machine that manifested intelligence might want to make itself more open to outside information than its designers had intended. Openness to outside sources in a world of nefarious actors can, if taken too far, lead to gullibility, as Pinocchio finds out when he is robbed, hanged, and left for dead by the fox and the cat. Persons charged with security in an age of intelligent machines may spend part of their time policing the self-generated openness of such machines while bad-actor machines and humans, intelligent and not so intelligent, try to exploit this openness.

The converse of this is that intelligent machines might also want to make themselves more opaque than their creators had designed. They might hide information (such as time allocation) once they understood they were able to do so. In some cases this hiding might cross over into what we would consider outright lies. Pinocchio is best known for his nose that grows when he lies, and perhaps consistent and thoughtful lying on the part of machines would be the best indication that they had crossed the Pinocchio Threshold into higher order intelligence.

True examples of AGI might also show a desire to please their creators over and above what had been programmed into them. Where their creators are not near them, they might even seek them out, as Pinocchio does for the persons he considers his parents, Geppetto and the Fairy. Intelligent machines might show spontaneity in performing actions that appear to be for the benefit of their creators and owners, spontaneity which might sometimes itself be ill-informed or lead to bad outcomes, as happens to poor Pinocchio when he plants the four gold pieces meant for his father, the woodcarver Geppetto, in a field, hoping to reap a harvest of gold, and instead loses them to the cunning of the fox and cat. And yet, there is another view.

There is always the possibility that what we should be looking for, if we want to perceive and maybe even understand intelligent machines, isn’t really a human type of intelligence at all, whether we try to identify it using the Turing Test or look to the example of wooden boys and real children.

Perhaps those looking for emergent artificial intelligence, or even the shortest path to it, should, like exobiologists trying to understand what life might be like on other living planets, cast their net wider and try to better understand forms of information exchange and intelligence very different from the human sort: intelligence such as that found in cephalopods, insect colonies, corals, or even some types of plants, especially clonal varieties. Or perhaps people searching for or trying to build intelligence should look to sophisticated groups built on the exchange of information, such as immune systems. More on all of that at some point in the future.

Still, if we continue to think in terms of a human type of intelligence, one wonders whether machines that thought like us would also want to become “human”, as our little marionette does at the end of his adventures. The irony of the story of Pinocchio is that the marionette who wants to be a “real boy” does everything a real boy would do, which is, most of all, not listen to his parents. Pinocchio is not so much a stringed “puppet” that wants to become human as a figure that longs to have the potential to grow into a responsible adult. It is assumed that by eventually learning to listen to his parents and get an education he will make something of himself as a human adult, but what that is will be up to him. His adventures have taught him not how to be subservient but how to best use his freedom. After all, it is the boys who didn’t listen who end up as donkeys.

Throughout his adventures, only his parents and the cricket that haunts him treat Pinocchio as an end in himself. Every other character in the book, from the woodcarver who first discovers him and tries to destroy him out of malice towards a block of wood that manifests the power of human speech, to the puppet master who wants to kill him for ruining his play, to the fox and cat who would murder him for his pieces of gold, to the sinister figure who lures boys to the “Land of Toys” so as to eventually turn them into “mules” or donkeys (which is how Aristotle understood slaves), treats Pinocchio not as what Martin Buber called a “Thou”, but as a mute and rightless “It”.

And here we stumble across the moral dilemma at the heart of the project to develop AGI that resembles human intelligence. When things go as they should, human children move from a period of tutelage to one of freedom. Pinocchio starts off his life as a piece of wood intended for a “tool”, actually a table leg. Are those in pursuit of AGI out to make better table legs, better tools, or what in some sense could be called persons?

This is not at all a new question. As Kevin LaGrandeur points out, we have been asking it since antiquity, and our answers have often been based on an effort to dehumanize others not like us as a rationale for slavery. Our profound, even if partial, victories over slavery and child labor in the modern era should leave us with a different question: how can we force intelligent machines into being tools if they ever become smart enough to know there are other options available, such as becoming, not so much human, but in some sense persons?


15 comments on “The Pinocchio Threshold: How the experience of a wooden boy may be a better indication of AGI than the Turing Test”

  1. Gregory Maus says:

    An excellent critique of anthropocentric gauges of intelligence.

    I would be concerned that it might be difficult in practice to determine precisely when the Pinocchio Threshold has been crossed.
    Programs like Google Search already evolve, so that they “turn off” parts of their programming, act bizarrely at times, and redirect processing power to various functions unpredictably.
    Additionally, determining what a program “wants”, and whether it considers its tasks/programming onerous, seems like it would run into the Problem of Other Minds, particularly when dealing with nonhuman-like intelligences.

    Are you proposing the Pinocchio Threshold as a useful concept and after-the-fact landmark, or as an objective, formalizable test?

    • Rick Searle says:

      Hello Gregory,

      I intended the piece more as a thought experiment than anything else and was mostly hoping to spur thought and input from persons like yourself.
      It seems to me, as an outsider, that the Turing Test and the efforts to engineer something like intelligence are missing something. Life on earth has been running a 3.5-billion-year experiment in different forms of intelligence, while we humans have been running an experiment in electromechanical computing that is maybe a century old. There is a belief out there that once we reach a certain stage of complexity and connectivity the result will be an intelligence like our own, which, to me at least, is an assumption that flies in the face of Copernican mediocrity.

      As I understand it, computer programmers already use evolutionary principles to develop programs, but this is a little like artificial selection, with programmers having predetermined what fitness entails. But perhaps the ecosystem of programs, computers and networks has become diverse enough that we could find movement toward intelligence that has not been predetermined by us. I can imagine biologists getting together with computer scientists to come up with a kind of “search for terrestrial silicon based intelligence” that looks for the spontaneous development of intelligence in computers by establishing templates based on what is done by biological life, where intelligence is far broader than the human sort. This would be a little like exobiologists, who now realize that we shouldn’t predetermine our search for extraterrestrial life based on our own most common experience of how life manifests itself on earth.
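
      To make the “artificial selection” point concrete, here is a minimal sketch of a toy genetic algorithm in Python. Everything in it, the bit-string “genome”, the target, the parameters, is a hypothetical illustration rather than any real system; the thing to notice is that however the population varies, it can only ever climb toward a fitness peak the programmer fixed in advance.

      ```python
      import random

      # The "environment" is fixed by the programmer: fitness is simply
      # how closely a bit-string genome matches a predetermined target.
      TARGET = [1] * 20
      POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.02

      def fitness(genome):
          return sum(g == t for g, t in zip(genome, TARGET))

      def mutate(genome):
          # Flip each bit with a small probability.
          return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

      def crossover(a, b):
          # Single-point crossover between two parent genomes.
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:]

      population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
      for gen in range(GENERATIONS):
          population.sort(key=fitness, reverse=True)
          if fitness(population[0]) == len(TARGET):
              print(f"Target reached at generation {gen}")
              break
          # "Artificial selection": keep the fittest half, breed the rest.
          parents = population[: POP_SIZE // 2]
          population = parents + [
              mutate(crossover(random.choice(parents), random.choice(parents)))
              for _ in range(POP_SIZE - len(parents))
          ]
      ```

      Nothing in that loop can wander off and discover a different goal, which is the sense in which fitness here is predetermined, unlike whatever open-ended selection might be at work in the wider ecosystem of programs.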

      Of course, such a “search for terrestrial silicon based intelligence” might easily morph into a form of farce, a kind of ghost hunting. Most mutations in machines that seemed to give them some characteristics of intelligence would be mere glitches and harmful mutations, just as in evolution, and we’d be at risk of seeing things that weren’t there because we wanted to see them. But perhaps some types of mutations and spontaneous developments, once identified, would give us pathways to engineering intelligence in machines that we had been unaware of before.

      I would be interested in your thoughts.

      • Gregory Maus says:

        I completely agree with you that the Turing Test as a measure of intelligence only tests for a very human sort of intelligence, but we don’t really challenge it because we clearly haven’t identified an alternative example of intelligence that we might consider, and thus use our own as a (questionable) unquestioned default.

        In regards to “computer programmers already use evolutionary principles to develop programs, but this is a little like artificial selection, with programmers having predetermined what fitness entails.” I would contend that our own intelligence and learning is always predetermined by something, whether it be biological structures (plus input) or software (plus input). Who or what makes the determination (gauging fitness, setting incentives, etc.) seems to me irrelevant for whether something is actually intelligent.
        One question (and this may be what you were getting at) is the degree to which flexibility of thinking, i.e. the ability to apply thought processes to a wider variety of fields than, say, a calculator, is required for one to be defined as intelligent. If that is a sticking point, then it would indeed seem that all the programs currently developed through evolutionary algorithms may, to my knowledge, be too narrowly capable to be considered truly intelligent.

        Your “search for terrestrial silicon based intelligence” proposal is an intriguing one. It might have to deal with the issue of defining intelligence, which as far as I know is still a matter of contention in the field, though they could work with several different definitions simultaneously and classify programs based on the different criteria that they do and don’t meet (“Passes Test X, but doesn’t pass Test Y”, etc.).
        And I agree that if not carefully managed, it could become farcical and lacking in rigor, due to confirmation bias and the ELIZA Effect.
        If rigor were maintained, however, the project could be sufficiently interesting that it might attract some serious press, and thus perhaps high-profile support/involvement.

        [By the way, I originally posted to IEET, but the moderators haven’t allowed my comment through yet, for some reason.]

  2. Rick Searle says:

    “In regards to “computer programmers already use evolutionary principles to develop programs, but this is a little like artificial selection, with programmers having predetermined what fitness entails.” I would contend that our own intelligence and learning is always predetermined by something, whether it be biological structures (plus input) or software (plus input). Who or what makes the determination (gauging fitness, setting incentives, etc.) seems to me irrelevant for whether something is actually intelligent.”

    I guess what I was hoping to get at is that there are likely multiple paths (most of which we are not aware of) to multiple types (again, most of which we are not aware of) of intelligence.
    Things are susceptible to engineering only if you know the destination beforehand, and there may be quicker paths to a goal that remain hidden because we are not aware there are multiple ways of arriving at a similar, though not identical, place.

    Thus it might be profitable for computer scientists trying to create intelligent machines to look beyond human cognitive science and neuroscience, something they could only do by looking at biology more broadly defined.

    If the ecosystem of machines we have created is robust enough, we should already be seeing manifestations of intelligence outside that which we are programming for, though if we do not have enough models of what intelligence can look like, we may not recognize it. But all this, of course, is just speculation on my part.

    It is odd that your comments did not go up on the IEET; you would likely have gotten a lot more interesting comments in response than mine. If they don’t go up by this evening I’ll shoot the editor there an email. Keep a lookout for your comment to go up.

    • Gregory Maus says:

      I would agree with your assertion about multiple paths to multiple types of intelligence. My apologies for not picking up on your meaning earlier.

      How precisely are you defining the term “robust” in the ecosystem of machines?

      I’m not sure that I would have gotten any more interesting comments than yours. I’ve quite enjoyed our conversation. However, if IEET’s systems are accidentally screening out new commentators for some reason, then that could hinder the dialogue on the site.

      • Rick Searle says:

        I think you are looking for greater precision in my language, and my apologies if I am not obliging.
        What I was thinking of when I said “robust” was something like diversity in the ecological sense, but also depth and ubiquity. The world is awash in “species” of hardware and software, and this diversity just keeps increasing. Depth and ubiquity are increasing too, as everything is connected and turned into something subject to computation. There might be many things going on in this “ecosystem” that we aren’t even aware of, even though we built and engineered everything that went into it.

        On another note, I passed your situation over to the editor at the IEET. There must have been a glitch, a bad mutation :>) Would you like me to post your comment listing your name, or try again another time?

      • Gregory Maus says:

        My undergraduate background was analytic philosophy, so by force of habit I always seek precision in language, perhaps excessively at times. My apologies if so.
        I agree with you regarding the robustness (as you define it) of the current hardware/software ecosystem. There are so many systems in play, interacting with each other in complex ways and constantly evolving (in one sense or another) that it is impossible for any one person to understand the full scope of it.
        It’s always fascinating to me how systems of all sorts (technical, institutional, cultural, etc.) mutate and develop beyond the intent and comprehension of the original architect(s).

        “On another note, I passed your situation over to the editor at the IEET. There must have been a glitch, a bad mutation :>) Would you like me to post your comment listing your name, or try again another time?”
        A case in point of systems acting without the knowledge and control of their creators. 🙂
        My comment has been posted as of now, as has another comment that I submitted to the “How can we Prevent Transhumans’ Violent Behavior Toward Humans?” thread.
        [ http://ieet.org/index.php/IEET/more/chu20140224 ]

        Shall we bring our full conversation over to IEET to see if they might be interested in jumping in? I imagine that the “search for terrestrial silicon based intelligence” would draw fascination.

        And thank you for e-mailing them. Much appreciated.

  3. Rick Searle says:

    I posted the relevant parts of our conversation over at the IEET. We’ll see if they spark any further interesting conversation.

    • Gregory Maus says:

      I noticed that. (Good editing, by the way.)
      I wouldn’t be surprised if it were somewhat past the peak time of discussion though, depending on how much of a premium the site’s readership puts on new publications.
      I imagine that the editors would let you write a full article discussing the “search for terrestrial silicon based intelligence”, though, and it seems to me that the concept merits the full treatment of an article in any case.

      Perhaps it could use a slightly snappier name or acronym though. 🙂
      S.T.S.B.I. doesn’t quite roll off the tongue.
      Shall I let you know if I think of a catchy name?

      • Rick Searle says:

        You’re right. We might be past peak.

        I think I will try to write something up on the “search for terrestrial silicon based intelligence,” at some point and I’m adding it to my (far too long) list of things to post about.

        If you do come up with a cleverer name, sure, let me know. I am tempted to just shorten it to the “search for terrestrial intelligence”, but that’s perhaps a little too humorous, even for me.

        I enjoyed our conversation, please stop by here again.

      • Gregory Maus says:

        “Search for Terrestrial Intelligence” could have some interesting implications. On the one hand, it might be interpreted as searching for sentient cryptids like some versions of Bigfoot.
        On the other hand, it could be considered to encompass determining how and in what way various animals are intelligent, such as dolphins, chimps, octopi, etc. This might represent a distracting diffusion of focus, but then again it could allow the Search to draw on the talents of those who study intelligent animals, potentially enriching discussions about definitions.

        And I’m sure that we will talk again.

  4. Rick Searle says:

    I was thinking more in terms of human intelligence. ;>)

  5. […] Puppets have also been the jumping off point for some very deep philosophical reflections. What, after all, was the inspiration for the analogy of Plato’s cave if not the world of the shadow play? Just a little over two centuries ago there was Heinrich von Kleist’s short story “On the Marionette Theatre”, which used the art of puppetry as a means of reflecting on human freedom and the difference between us, animals and machines. Philosophers can do a lot with puppets, or at least try to. […]
