Do Extraterrestrials Philosophize?


The novelist and philosopher R. Scott Bakker recently put out a mind-blowing essay on the philosophy of extraterrestrials, which isn't as Area 51 as such a topic might seem at first blush. After all, Voltaire covered the topic of aliens, but if a Frenchman is still a little too playful for your philosophical tastes, recall that Kant thought the topic of extraterrestrial intelligence important enough to cover extensively as well, and you can't get much more full of dull seriousness than the man from Königsberg.

So let's take an earnest look at Bakker's alien philosophy… well, not just yet. Before I begin it's necessary to lay out a very basic version of the philosophical perspective Bakker is coming from, for in a way his real goal is to use some of our common intuitions regarding humanoid aliens as a way of putting flesh on the bones of two often misunderstood and (at least among philosophers) not widely held philosophical positions: eliminativism and Blind Brain Theory. Both, to my lights at least, could be subsumed under one version of the ominous and cool-sounding philosophy of Dark Phenomenology. Though once you get a handle on dark phenomenology it won't seem all that ominous, and if it's cool, it's not the type of cool that James Dean or the Fonz before season 5 would have recognized.

Eliminativism, if I understand it, is the recognition that perhaps all our notions about human mental life are suspect insofar as they have not been given a full scientific explanation. In a sense, then, eliminativism is merely an extension of the materialization (some would call it disenchantment) that has been going on since the scientific revolution.

Most of us no longer believe in angels, demons, or fairies, not to mention quasi-scientific ideas that have ultimately proven empty of content, like the ether or phlogiston. Yet in those areas where science has yet to reach, especially areas that concern human thinking and emotion, we continue to cling to what strict eliminativists believe are likely to be proved similar fictions: a form of myth that can range from categories of mental disease without much empirical substance to more philosophically and religiously determined beliefs such as those in free will, intentionality and the self.

I think Bakker is attracted to eliminativism because it allows us to cut the Gordian knot of problems that have remained unresolved since the beginning of Western philosophy itself, problems built around assumptions which seem increasingly brought into question in light of our growing knowledge of the actual workings of the human brain, rather than our mere introspection regarding the nature of mental life. Indeed, a kind of subset of eliminativism in the form of Blind Brain Theory essentially consists in the acknowledgement that the brain was designed by evolution for a certain kind of blindness.

What was not necessary for survival has been made largely invisible to the brain, and seeing what has not been revealed takes great effort. Philosophy's mistake, from the standpoint of a proponent of Blind Brain Theory, has always been to try to shed light upon this darkness from introspection alone: a Sisyphean task in which the philosopher, if not made ridiculous, becomes hopelessly lost in the dark labyrinth of the human imagination. In contrast, an actually achievable role for philosophy would be to define the boundary of the unknown until the science necessary to study this realm has matured enough for its investigations to begin.

The problem becomes: what can one possibly add to philosophical discourse once one has taken an eliminativist/Blind Brain position? Enter the aliens, for Bakker manages to make a very reasonable argument that we can use both positions to give us a plausible picture of what the mental life and philosophy of intelligent "humanoid" aliens might look like.

In terms of understanding the minds of aliens, eliminativism and Blind Brain Theory are like addenda to evolutionary psychology. An understanding of the perceptual limitations of our aliens, not just mental limitations but limitations brought about by conditions of time and space, should allow us to make reasonable guesses about not only the philosophical questions but also the philosophical errors likely to be made by our intelligent aliens.

In a way the application of eliminativism and BBT to intelligent aliens put me in mind of Isaac Asimov's short story Nightfall, in which a world bathed in perpetual light is destroyed when it succumbs to the fall of night. There it is not the evolved limitations of the senses that prevent Asimov's "aliens" from perceiving darkness but their being on a planet orbiting multiple suns that keep it bathed in an unending day.

I certainly agree with Bakker that there is something pregnant and extremely useful in both eliminativism and Blind Brain Theory, though perhaps not so much in terms of understanding the possibility space of "alien intelligence" as in understanding our own intelligence and the way it has unfolded and developed over time, embedded in a particular spatio-temporal order we have only recently gained the power to see beyond.

Nevertheless, I think there are limitations to the model. After all, it isn't even clear to what extent the kinds of philosophical problems that capture the attention of intelligence are the same even across our own species. How are we to explain the differences in the primary questions that obsess, say, Western versus Chinese philosophy? Surely, something beyond neurobiology and spatio-temporal location is necessary to understand the development of human philosophy in its various schools and cultural guises, including how a discourse has unfolded historically and the degree to which it has been supported by the powers and techniques that secure the survival of some question or perspective over long stretches of time.

There is another way in which the use of eliminativism or Blind Brain Theory might lead us astray when it comes to thinking about alien intelligence: it just isn't weird enough. When the story of the development of not just human intelligence, but especially our technological/scientific civilization, is told in full detail it seems so contingent as to be quite unlikely to repeat itself. The big question to ask, I think, is what are the possible alternative paths to intelligence of a human degree or greater, and to technological civilization like or more advanced than our own. These, of course, are questions for speculative philosophy and fiction that can be scientifically informed in some way, but are very unlikely to be scientifically answered. And even if we could discover the very distant technological artifacts of another technological civilization, as the new Milner/Hawking project hopes, there remains no way to reverse engineer our way back to the lost "philosophical" questions that would have once obsessed the biological "seeds" of such a civilization.

Then again, we might at least come up with some well-founded theories, though not from direct contact with or investigation of alien intelligence itself. Our studies of biology are already leading to alternative understandings of the ways intelligence can be embodied, say, in the amazing cephalopods. As our capacity for biological engineering increases we will be able to make models of, map alternative histories for, and even create alternative forms of living intelligence. Indeed, our current development of artificial intelligence is like an enormous applied experiment in an alternative form of intelligence to our own.

What we might hope is that such alternative forms of intelligence not only allow us to glimpse the limits of our own perception and pattern making, but might even allow us to peer into something deeper and more enchanted and mystical beyond. We might hope even more deeply that in the far future something of the existential questions that have obsessed us will still be there like fossils in our posthuman progeny.

Is AI a Myth?


A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a "myth", and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI back in the early 1980s, John Searle (relation to the author lost in the mists of time). It was Searle who invented the well-known thought experiment of the "Chinese Room", which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi's The Fourth Revolution and Nick Bostrom's Superintelligence.

Also in October, Michael Jordan, the guy who brought us neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks) sought to puncture what he sees as hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Anderson gave us a very long piece in Vanity Fair in which he wondered which side of this now enjoined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn't about might give us a better handle on what is actually going on in AI right now, in the next few decades, and in reference to a farther-off future we have to at least start thinking about, even if there's not much to actually do regarding the latter question for a few decades at the least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human-level intelligence in machines is theoretically impossible. These aren't people arguing that there's some spiritual something that humans possess that we'll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siri(s) and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian's fear that he won't be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human-level machine intelligence between 2075 and 2090. If we just average those dates we're out to around 2083 by the time human-equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is 69 years in the future we're talking about, a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, echoing Bostrom, I think we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out and will become a huge part of a larger argument, one that will include many issues in addition to AI, over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn't about this larger debate regarding our survival and future; it's about what's happening with artificial intelligence right before our eyes. They want to challenge what they see as currently common false assumptions regarding AI. It's hard not to be bedazzled by all the amazing manifestations around us, many of which have only appeared over the last decade. Yet as the philosopher Alva Noë recently pointed out, we're still not really seeing what we'd properly call "intelligence":

Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson. We used "it" the way we use clocks.

This is an old criticism, the same as the one made by John Searle, both in the 1980’s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated programming into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as "neural nets" are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It's not a good idea to be trapped in anything, including our metaphors. AI researchers might fail to develop other good metaphors that help them understand what they are doing ("flows and pipelines" once provided good metaphors for computers). The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th-century ideas about "electronic brains", and the public is at risk of anthropomorphizing their machines. Such anthropomorphizing might have ugly consequences: a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.
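To see why "cartoon" is a fair description, it helps to look at what a single unit of an artificial neural network actually computes: a weighted sum pushed through a squashing function, and nothing more. The sketch below is purely illustrative (the function name and the numbers are mine, not Jordan's and not anyone's production code):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One 'neuron' in an artificial neural net: a weighted sum of its inputs
    passed through a logistic squashing function. Spike timing, neurochemistry,
    dendritic computation, and everything else real neurons do is absent."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Three "synapses" feeding one unit; the output is just a number between 0 and 1.
print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.3, 0.2], bias=0.1))
```

Stacking millions of such units yields impressive engineering, but the distance between this arithmetic and actual cortical tissue is exactly the gap Jordan is pointing to.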

Lanier's critique of AI is actually deeper than Jordan's, because he sees both technological and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we'll find ourselves in an "AI winter" similar to the one that occurred in the 1980s. Hype cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you're likely to lose the interest of the smartest minds and start to attract kooks, which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far more scary. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies (Google, Facebook, Amazon) are essentially just algorithms. Some of the same people who have an economic interest in us seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn't this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of Oz scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn't so much another form of intelligence helping you to make better informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn't, as it is often presented, silicon intelligence at all. Rather, it's leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user's view.
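Lanier's "leveraged human intelligence" is easy to see in miniature with a recommender. The toy sketch below (hypothetical data and function names, not any company's actual pipeline) produces a "recommendation" by doing nothing more than averaging the ratings other people have already supplied:

```python
from collections import defaultdict

# Ratings supplied by actual human beings (made-up data for illustration).
human_ratings = {
    "ann":   {"Blade Runner": 5, "Alien": 4, "Amelie": 1},
    "bob":   {"Blade Runner": 4, "Alien": 5, "Solaris": 4},
    "carol": {"Amelie": 5, "Solaris": 2, "Alien": 4},
}

def recommend(for_user, ratings):
    """Suggest items the target user hasn't rated, scored by the average of
    everyone else's ratings. The apparent machine intelligence is just a
    compression of many hidden human judgments."""
    seen = set(ratings[for_user])
    totals, counts = defaultdict(float), defaultdict(int)
    for person, prefs in ratings.items():
        if person == for_user:
            continue
        for item, score in prefs.items():
            if item not in seen:
                totals[item] += score
                counts[item] += 1
    return sorted(((totals[i] / counts[i], i) for i in totals), reverse=True)

print(recommend("ann", human_ratings))  # [(3.0, 'Solaris')]
```

Real systems are vastly larger and more sophisticated, but the point stands: remove the human ratings (or translations, or clicks) and the "AI" has nothing left to say.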

Lanier doesn't think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. This feeds the specter of technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is "eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad". Such a view rests on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

As long as we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), over the course of the next half-century we will experience the gradual improvement of AI to the point where perhaps the majority of human occupations are able to be performed by machines. We should not confuse ourselves as to what this means: we cannot say, with anything but an echo of lost religious myths, that we will be entering the "next stage" of human or "cosmic" evolution. Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don't fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts, should they ever emerge from anything other than our nightmares and our dreams.