Is AI a Myth?


A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have come forward over the last few months to challenge what they see as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI back in the early 1980s, John Searle (any relation to the author lost in the mists of time). It was Searle who invented the well-known thought experiment of the “Chinese Room”, which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi’s The Fourth Revolution and Nick Bostrom’s Superintelligence.

Also in October, Michael Jordan, the machine learning pioneer who did so much to develop neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as the hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now enjoined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn’t about might give us a better handle on what is actually going on in AI right now, over the next few decades, and in that farther-off future we need to at least start thinking about, even if there’s not much we can actually do regarding the latter for a few decades at the least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human-level intelligence in machines is theoretically impossible. These aren’t people arguing that there’s some spiritual something that humans possess which we’ll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siris, self-driving cars, and Watsons. This question of timing is important far beyond a singularitarian’s fear that he won’t be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human-level machine intelligence between 2075 and 2090. If we just average those dates we’re out to around 2083 before human-equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is 69 years in the future we’re talking about, a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, echoing Bostrom, I think we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out and to become part of a much larger argument, encompassing many issues in addition to AI, over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn’t about this larger debate regarding our survival and future; it’s about what’s happening with artificial intelligence right before our eyes. They want to challenge what they see as common false assumptions currently made regarding AI. It’s hard not to be bedazzled by all the amazing manifestations around us, many of which have appeared only over the last decade. Yet as the philosopher Alva Noë recently pointed out, we’re still not really seeing what we’d properly call “intelligence”:

Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

This is an old criticism, the same as the one made by John Searle both in the 1980s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated statistical methods into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan puts it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as “neural nets” are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It’s not a good idea to be trapped by anything, including our metaphors. AI researchers stuck inside the brain metaphor might fail to develop other good metaphors that help them understand what they are doing; “flows and pipelines”, after all, once provided good metaphors for computing. The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th-century ideas about “electronic brains”, and the public is at risk of anthropomorphizing its machines. Such anthropomorphizing might have ugly consequences: a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.
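To see just how cartoonish the metaphor is, it helps to look at what an artificial “neuron” actually amounts to. The minimal sketch below (an illustration of the standard textbook idea only, not any particular researcher’s model) shows that a “neural network” layer is nothing more than a weighted sum pushed through a simple squashing function; everything real neurons do beyond that arithmetic is simply left out.

```python
# A cartoon "neural network": two layers of weighted sums and squashing functions.
# Illustrative only - real brains involve spike timing, neurochemistry, dendritic
# computation, and much else that this sketch ignores entirely.
import numpy as np

def layer(x, weights, bias):
    """One 'neural' layer: multiply by weights, add a bias, squash with tanh."""
    return np.tanh(weights @ x + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                             # a 4-number "stimulus"
hidden = layer(x, rng.normal(size=(3, 4)), rng.normal(size=3))     # three "neurons"
output = layer(hidden, rng.normal(size=(1, 3)), rng.normal(size=1))
print(output)  # the whole "brain-like" computation is a couple of matrix products
```

Everything impressive such systems do comes from scaling this arithmetic up and fitting the weights to data, not from anything resembling the deep principles Jordan says neuroscience has yet to uncover.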

Lanier’s critique of AI is actually deeper than Jordan’s because he sees both research and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we’ll find ourselves in an “AI winter” similar to the one that occurred in the 1980s. Hype cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you’re likely to lose the interest of the smartest minds and start to attract kooks, which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far scarier. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies, Google, Facebook, Amazon, are essentially just algorithms. Some of the same people who have an economic interest in our seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn’t this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of Oz scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn’t so much another form of intelligence helping you to make better-informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn’t, as it is often presented, silicon intelligence at all. Rather, it’s leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon, or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user’s view.
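A toy example makes the point concrete. The sketch below (purely illustrative, with a made-up ratings matrix and no resemblance to any company’s actual system) “recommends” an item by doing nothing more than averaging the past choices of similar human users; the apparent machine intelligence is just their judgments, aggregated and hidden behind the interface.

```python
# A toy recommender: the "intelligence" is entirely borrowed from past human choices.
import numpy as np

# rows = users, columns = items; entries are ratings real people gave (0 = unrated)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def recommend(user_row, ratings):
    """Score unrated items by how similar users rated them (a cosine-weighted vote)."""
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(user_row)
    sims = ratings @ user_row / np.where(norms == 0, 1, norms)  # similarity to each past user
    scores = sims @ ratings / (sims.sum() + 1e-9)               # weighted vote of other humans
    scores[user_row > 0] = -np.inf                              # skip items already rated
    return int(np.argmax(scores))

new_user = np.array([5, 0, 0, 1], dtype=float)
print("recommended item:", recommend(new_user, ratings))
```

Swap the toy matrix for millions of real listening or viewing histories and you have, in outline, the “intelligence” Lanier is describing: human choices aggregated, compressed, and presented back as if the machine had thought of them.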

Lanier doesn’t think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. The result is technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is “eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad.” It is a view based on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

As long as we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), then over the course of the next half-century we will experience the gradual improvement of AI to the point where perhaps the majority of human occupations can be performed by machines. We should not confuse ourselves as to what this means: it is impossible to say with anything but an echo of lost religious myths that we will be entering the “next stage” of human or “cosmic evolution”. Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don’t fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts, should they ever emerge from anything other than our nightmares and our dreams.

 


15 comments on “Is AI a Myth?”

  1. lfloridi says:

    Please read my reply to Searle; he basically did not read the book: http://linkis.com/blog.oup.com/2014/11/m71u0

  2. James Cross says:

    But what is intelligence?

    Oddly, in Bostrom’s book I can’t find a clear definition of it. We often prefix AI with “human level”, as in human-level AI. The reference point seems to be what humans do.

    I think we need a way of defining intelligence that is objective and abstract enough that it does not need to refer to humans or human capabilities.

    I tried to explore some of this partially in one of my posts where I discuss a paper by A. D. Wissner-Gross and C. E. Freer.

    I quote from them a definition of intelligent behavior.

    “Adaptive behavior might emerge more generally in open thermodynamic systems as a result of physical agents acting with some or all of the systems’ degrees of freedom so as to maximize the overall diversity of accessible future paths of their worlds.”

    My own post is here:

    http://broadspeculations.com/2

    Then there is the difficult relationship between intelligence and consciousness.

    We tend to believe these to be associated, that intelligence requires consciousness, although no such claims are being made for Watson or some of the “AI” technologies of today. Nevertheless, I think we somehow believe that once human level AI arrives the machines will be conscious. If they are not, they would not be human level.

    Or so the argument goes.

    What if the relationship is reversed? What if intelligence does require consciousness? What if consciousness is a creation of intelligence, one way in which intelligence expands its range of adaptive behavior?

    In that case, we could have superintelligence without consciousness.

    I hope to do something more expansive on this topic in the future but I have been on a little bit of hiatus from blogging.

    • Rick Searle says:

      Hello James,

      It’s strange, someone brought up the same issue over at the IEET, and since I’m responding to this on my lunch break I’ll give the same answer to you both.

      I don’t think consciousness is necessary for behavior to be intelligent, meaning “an agent’s ability to achieve goals in a wide range of environments”:
      http://intelligence.org/2013/06/19/what-is-intelligence-2/
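
      If you want the formal version, the Legg-Hutter measure that post draws on looks roughly like this (my own sketch of it, so take the notation loosely):

      $$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi$$

      where E is a set of computable environments, K(μ) is the complexity of environment μ, and V_μ^π is the expected cumulative reward the agent π earns in μ. Intelligence here is just goal achievement averaged across environments, weighted toward simpler ones, and nothing in the definition mentions consciousness.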

      We’re experiencing this with our machines, which aren’t conscious in any sense, but can often outperform us in some tasks that require intelligence, e.g. chess. But we should have known that consciousness wasn’t necessary for intelligent behavior all along. Neither an immune system nor a bee colony is conscious (rather than aware), but they certainly show intelligent behavior.

      I guess that leads to the question of what I mean by consciousness. To me consciousness is the “what am I doing?”; it’s a situational awareness we share with many animals (and perhaps are now beginning to share with some machines). Self-consciousness is a “higher” version of this; it is the “what are you doing?”, my own explanation of my situation and behavior as if I were “outside” of myself.

      In my view, we are only just beginning to crack the nut of machine consciousness and are nowhere near obtaining self-consciousness, which would require semantic understanding of language.

      I think we still could have super-intelligence without consciousness, but there would be gaps in such an intelligence’s understanding between its “internalized” world and the real world that would make it much less threatening than some think. The problem with a super-intelligence that wasn’t conscious would be its being hacked by human beings who knew very well what they were doing.

  3. James Cross says:

    I meant “What if intelligence does NOT require consciousness?”

  4. […] IEET By Rick Searle  Utopia or Dystopia Nov 30, […]

  5. James Cross says:

    Rick,

    That link is great!

    First is the acknowledgement that they do not precisely know what intelligence is; then comes the working definition from Legg and Hutter, and the “optimization power” concept of intelligence, which measures an agent’s power to optimize the world according to its preferences.

    This is very close to what I am thinking also.

  6. Bill Benzon says:

    Hi Rick, if I may. I found my way from Tyler Cowen’s blog, where he mentions your review of Average is Over. I was struck by your remarks on the human need for intelligibility (to which I’ve responded over at New Savanna).

    On AI winter, the 1980s event was in fact the second time around for that sort of thing. The first time was in the 1960s. But the defunding didn’t happen to AI, it happened to MT (machine translation), a sister discipline that morphed into computational linguistics and now, it seems, has become NLP (natural language processing). As you may know, machine translation is one of the founding problem areas of computer science (behind artillery tables and a-bombs). The US government poured lots of money into the effort in the 1950s and early 1960s. They were looking for a practical result, computers that could produce high quality translations from Russian into English. When that result seemed to recede ever further into the distance, they pulled the funding plug in the middle 1960s.

    Though my degree is in English Lit, my most important teacher was one of that first generation of researchers in computational linguistics, David Hays. He led the RAND Corporation’s MT project and, when he decided that the excitement was over at RAND, became founding chair of the Linguistics Department at SUNY Buffalo, which is where I worked with him (I was a graduate student in English). He was a great believer in computational research into human psychology and a skeptic about AI. Which is pretty much where I am.

    One thing I think has been going on is that lots of very intelligent and creative researchers have simply misunderestimated the difficulties of exploring and inhabiting a new intellectual continent. So, every time someone finds a new place that looks suitable for long-term living, they set up camp and declare that it’s all like this so we can start putting up the houses and send home for the women and children. Well, it isn’t. Sooner or later the bears or vultures or snarks show themselves and make it clear that this is still wild unexplored country.

    It’s vast, immeasurably vast.

    My own view is that there is a “singularity” in the future, and it does involve computing in an assisting role. But the singularity happens to us when we start putting lots of things together and suddenly find ourselves with more sophisticated ways of looking at things, including the problem of intelligence. Here’s a longish post on that: Redefining the Coming Singularity – It’s not what you think.

    • Rick Searle says:

      Hi Bill, thanks for linking to my posts. I’ll try to put this comment both here and on your own blog to make sure you see it, for I would like to hear your response.

      Like yourself, I do not believe in the Singularity as an “intelligence explosion”. Though what I think Cowen discussed, and which worries me as well, is something different.

      We already have a problem of intelligibility when it comes to something like String Theory or even the fact that no one person can now know all the relevant facts of any field. One can imagine AI that isn’t intelligent in the broad human sense at all but is extremely good at mining for patterns in scientific data, able to come up with theories or techniques which we essentially cannot understand, neither how they hang together nor how they ultimately work. This is the possibility Cowen draws from computer chess, where humans have written the programs but end up scratching their heads at what the program does even when it works.

      It’s an idea that was perhaps first developed by Stanislaw Lem, which I wrote about here:

      https://utopiaordystopia.com/2014/11/22/summa-technologiae-or-why-the-trouble-with-science-is-religion/

  7. […] AI as currently constructed manifests intelligence more akin to puppet show illusions like the old Mechanical Turk than actual intellect. Nor does Gray really extend Kleist’s analogy to interrogate how we, both […]

  8. […] for platforms such as Amazon’s Mechanical Turk who still provide the computation behind the magic-act that is much of contemporary […]
