The Future of Money is Liquid Robots

 


Over the last several weeks global financial markets have experienced some truly stomach-churning volatility, and though this seems to have stabilized or even reversed for the moment as players deem assets oversold, the turmoil has revealed in a much clearer way what many have suspected all along: namely, that the idea that the Federal Reserve and other central banks could heal the wounds caused by the 2008 financial crisis by pushing banks to lend money was badly mistaken.

Central banks were perhaps always the wrong knights in shining armor to pin our hopes on, for at precisely the moment they were called upon to fill a vacuum left by a failed and impotent political system, the very instrument under their control, money itself, was changing beyond all recognition, having been absorbed into the ongoing (if soon to slow) revolutions in communications and artificial intelligence.

The money of the near future will likely be very different from anything seen at any point in human history. Already we are witnessing wealth become as formless as electrons, and the currencies of sovereign states matter less to an individual’s fortune than access to, and the valuation of, artificial intelligence able to surf the liquidity of data and its chaotic churn.

As a reminder, authorities at the onset of the 2008 crisis were facing a Great Depression-level collapse in the demand for goods and services brought about by the bursting of the credit bubble. To stem the crisis, authorities largely surrendered to the demands for bailouts by the “masters of the universe” who had become their most powerful base of support. Yet for political and ideological reasons politicians found themselves unwilling or unable to provide similar levels of support for the lost spending power of average consumers, or to address the crisis of unemployment fiscally; that is, politicians refused to embark on deliberate, sufficient government spending on infrastructure and the like to fill the role vacated by the private sector.

The response authorities hit upon instead, and that would spread from the United States to all of the world’s major economies, was to suppress interest rates in order to encourage lending. Part of this was a deliberate effort to re-inflate asset prices that had collapsed during the crisis. It was hoped that with stock markets restored to their highs the so-called wealth effect would encourage consumers to return to emptying their pocketbooks and restore the economy to a state of normalcy.

It’s going on eight years since the onset of the financial crisis, and though the US economy, in terms of the unemployment rate and GDP, has recovered somewhat from its lows, the recovery has been slow and achieved only under the most unusual of financial conditions: money lent out at interest rates near zero. Even the small recent move by the Federal Reserve away from zero has been enough to throw the rest of the world’s financial markets into a tailspin.

While the US has taken a small step away from zero interest rates, a country like Japan has done the opposite, and the unprecedented: it has set rates below zero. To grasp how bizarre this is, banks in Japan can now charge savers to hold their money. Prominent economists have argued that the US would benefit from negative rates as well, and the Fed has not denied such a possibility should the fragile American recovery stall.

There are plenty of reasons why the kinds of growth that might have been expected from lending money out for free have failed to materialize. One reason I haven’t heard much discussed is that the world’s central banks are acting under a set of assumptions about what money is that no longer holds: over the last few decades the very nature of money has fundamentally changed, in ways that make the zero or negative interest rates set by central banks of decreasing relevance.

That change started quite some time ago with the move away from money backed by gold to fiat currencies. Those gold bugs who long to return to the era of Bretton Woods understand the current crisis mostly in terms of the distortions caused by countries having abandoned the link between money and anything “real”, that is, precious metals and especially gold, back in the 1970s. Indeed, it was at this time that money started its transition from a means of exchange to a form of pure information.

That information is a form of bet. The value of the dollars, euros, yen or yuan in your pocket is a wager by those who trade in such things on the future economic prospects and fiscal responsibility of the power that issued the currency. That is, nation-states no longer really control the value of their currencies; the money traders who operate the largest market on the planet, which in reality is nothing but bits representing the world’s currencies, are the ones truly running the show.

We had to wait for a more recent period for this move to money in the form of bits to change the existential character of money itself. Both the greatest virtue of money in the form of coins or cash and its greatest danger is its lack of memory. It is a virtue in the sense that money is blind to tribal ties and thus allows individuals to free themselves from dependence upon the narrow circle of those whom they personally know. It is a danger as a consequence of this same amnesia, for a dollar doesn’t care how it was made, and human beings, being the kinds of creatures they are, will purchase all sorts of horrific things.

At first blush it would seem that the libertarian anarchism behind a digital currency like Bitcoin promises to deepen this ability of money to forget. However, governments along with major financial institutions are interested in Bitcoin-like currencies precisely because they promise to rob cash of this natural amnesia and serve as the ultimate weapon against the economic underworld. That is, rather than using Bitcoin-like technologies to hide transactions, they could be used to ensure that every transaction is linked, and therefore traceable, to the actual parties involved.
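To make that concrete, here is a minimal, purely illustrative sketch in Python of an append-only, hash-linked ledger. It is not how Bitcoin actually works, and the Ledger class and its methods are invented for the example, but it shows the basic idea: every transfer is chained to the one before it, so any account’s entire history remains permanently recoverable.

```python
import hashlib
import json

# Toy ledger: every transfer records who paid whom and is hash-linked to the
# transfer before it, so money on this ledger has no "amnesia" at all.
class Ledger:
    def __init__(self):
        self.entries = []  # append-only list of transfers

    def transfer(self, sender, receiver, amount):
        prev_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        record = {"from": sender, "to": receiver, "amount": amount, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["hash"]

    def history(self, account):
        # Everything this account ever sent or received is recoverable forever.
        return [e for e in self.entries if account in (e["from"], e["to"])]

ledger = Ledger()
ledger.transfer("alice", "bob", 10)
ledger.transfer("bob", "carol", 4)
print(ledger.history("bob"))  # both transfers involving bob, each chained to its predecessor
```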

Though some economists fear that the current regime of loose money and the financial repression of savers is driving people away from traditional forms of money toward digital alternatives, others see digital currency as the ultimate tool, something that would allow central banks to do more easily what they have so far proven spectacularly incapable of doing: spurring the spending rather than the hoarding of cash.

Even with interest rates set at zero or below, a person can at least avoid losing money by putting it in a safe. Digital currency, however, could be made to disappear if it wasn’t spent or invested by a certain date. Talk about power! Which is why digital currency will not long remain in the hands of libertarians and anarchists.
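A toy illustration of that kind of “programmable” money follows. The ExpiringBalance class and its dates are invented for this post, not a description of any real central bank system, but they show how trivially a balance could be made to punish hoarding.

```python
from datetime import date

# A sketch of money that punishes hoarding: the balance carries an expiry date,
# and any amount not spent by then simply evaporates. (Invented for illustration,
# not a description of any existing digital currency.)
class ExpiringBalance:
    def __init__(self, amount, expires_on):
        self.amount = amount
        self.expires_on = expires_on

    def spendable(self, today):
        # After the deadline the unspent balance is worth nothing.
        return self.amount if today <= self.expires_on else 0

wallet = ExpiringBalance(100, expires_on=date(2016, 6, 1))
print(wallet.spendable(date(2016, 5, 1)))  # 100, still good
print(wallet.spendable(date(2016, 7, 1)))  # 0, the money has "disappeared"
```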

The loss of the egalitarian characteristics of cash will likely further entrench already soaring inequality. The wealth of many of us is already leveraged by credit ratings, preferred-customer privileges and the like, whereas others among us are charged a premium for consumption in the form of higher interest rates, rent instead of ownership, and the need to stretch income through government assistance and coupons. In the future all these features are likely to be woven into our digital currency itself. A dollar in my pocket will mean a very different thing from a dollar in yours or anyone else’s.

With the increased use of biometric technologies money itself might disappear into the person and become, as David Birch has suggested, synonymous with identity itself. The value of such personalized forms of currency, which is really just a measure of individual power, will be in a state of constant flux. With everyone linked to some form of artificial intelligence, prices will be subject to permanent and rarely seen negotiation between bots.

There will be a huge inequality in the quality and capability of these bots, and while those of the wealthy, or those used by institutions, will roam the world for short-lived investments and unleash universal volatility, those of the poor will shop for the best deals at big-box stores and vainly defend their owners against the manipulation of ad bots that prod them into self-destructive consumption.

Depending on how well the bots we use for ourselves do against the bots used to cajole us into spending, the age of currency as liquid robot money could be extremely deflationary, but it would at the same time be more efficient and thus better for the planet.

One could imagine a much different use for artificial intelligence, something like the AIs used to run economies in Iain Banks’ novels. It doesn’t look likely. Rather, to quote Jack Weatherford from his excellent The History of Money, which still holds up nearly two decades after it was written:

In the global economy that is still emerging, the power of money and the institutions built on it will supersede that of any nation, combination of nations, or international organization now in existence. Propelled and protected by the power of electronic technology, a new global elite is emerging- an elite without loyalty to any particular country. But history has already shown that the people who make monetary revolutions are not always the ones who benefit from them in the end. The current electronic revolution in money promises to increase even more the role of money in our public and private lives, surpassing kinship, religion, occupation and citizenship as the defining element of social life. We stand now at the dawn of the Age of Money. (268)

 

The one percent discovers transhumanism: Davos 2016


The World Economic Forum in Davos, Switzerland just wrapped up its annual gathering. It isn’t hard to make fun of this yearly coming together of the global economic and cultural elites who rule the world, or at least think they do. A comment by one of the journalists covering the event summed up how even those who really do in some sense have the fate of the world in their hands are as blinded by glitz as the rest of us: rather than wanting to discuss policy, the question he had been asked most often was whether he had seen Kevin Spacey or Bono. Nevertheless, 2016’s WEF might go down in history as one of the most important, for it was the year when the world’s rich and powerful acknowledged that we had entered an age of transhumanist revolution.

The theme of this year’s WEF was what Klaus Schwab calls the Fourth Industrial Revolution, a period of deeply transformative change which Schwab believes we are merely at the beginning of. The three revolutions that preceded the current one were the first industrial revolution, which occurred between 1760 and 1840 and brought us the steam engine and railroads; the second industrial revolution of the late 19th and early 20th centuries, which brought us mass production and electricity; and the third, computer or digital, revolution, which began in the 1960s and brought us mainframes, personal computers, the Internet, and mobile technologies.

The Fourth Industrial Revolution, whose beginning all of us are lucky enough to see, includes artificial intelligence and machine learning, the “Internet of things” with its ubiquitous sensors, along with big data. In addition to these technologies that grow directly out of Moore’s Law, the Fourth Industrial Revolution includes rapid advances in the biological sciences that portend everything from enormous gains in human longevity to “designer babies”. Here we find our rapidly increasing knowledge of the human brain, the new neuroscience, which will likely upend not only our treatment of mental and neurodegenerative diseases such as Alzheimer’s but also extend into areas from criminal justice to advertising.

If you have had any relationship to, or even knowledge of, transhumanism over the past generation or so then all of this should be very familiar to you. Yet to the extent that the kinds of people who attend or follow Davos have an outsized impact on the shape of our world, how they understand these transhumanist issues, and how they come to feel such issues should be responded to, might be a better indication of the near-term future of transhumanism than anything that has occurred at the level of us poor plebs.

So what was the one percent’s take on the Fourth Industrial Revolution? Below is a rundown of some of the most interesting sessions.

One session, titled “The Transformation of Tomorrow”, managed to capture what I think are some of the contradictions of those who in some respects feel themselves responsible for the governance of the world. Two of the panel members were Sheryl Sandberg, COO of Facebook, and the much lesser known president of Rwanda, Paul Kagame. That pairing itself is kind of mind-bending. Kagame has been a darling of technocrats for his successful governance of Rwanda. He is also a repressive autocrat who has been accused of multiple human rights abuses and who, to the dismay of the Obama administration, managed to secure through a referendum his rule of Rwanda into the 2030s.

Kagame did not have much to say other than that Rwanda would be a friendly place for Western countries wishing to export the Fourth Industrial Revolution. For her part, Sandberg faced questions that have emerged as it has become increasingly clear that social media is a tool for both good and ill, with groups like Daesh proving masterful users of the connectivity brought by the democratization of media. Not only that, but the kinds of filter bubbles offered by this media landscape have often been found to be inimical to public discourse and a shared search for Truth.

Silicon Valley has begun to adopt the role of actively policing its networks for political speech it wishes to counter. Sandberg did not discuss this, but rather focused on attempts to understand how discourses such as that of Daesh manage to capture the imagination of some, and on efforts to short-circuit such narratives of hate through things such as “like attacks”, where the negativity of websites is essentially flooded by crowds posting messages of love.

It’s a nice thought, but as the very presence of Kagame on the same stage with Sandberg while she was making these comments makes clear, it seems blind to the underlying political issues and portends a quite frightening potential for a new and democratically unaccountable form of power. Here elites would use their control over technology and media platforms to enforce their own version of the world at a time when both democratic politics and international relations are failing.

Another panel dealt with “The State of Artificial Intelligence.” The takeaway here was that no one took Ray Kurzweil’s 2029–2045 timeline for human-level and greater-than-human-level AI seriously, but everyone was convinced that the prospect of disruption to the labor force from AI was a real one that was not being taken seriously enough by policy makers.

In a related session titled “A World Without Work” the panelists were largely in agreement that the adoption of AI would further push the labor force in the direction of bifurcation and would thus tend, absent public policy pushing in a contrary direction, to result in increasing inequality.

In the near-term future AI seems poised to take middle-income jobs, anything that deals with the routine processing of information. Where AI will struggle to make inroads is in low-skilled, low-paying jobs that require a high level of mobility, jobs like cleaning and nursing. Given how reliant Western countries have become on immigrant labor for these tasks, we might be looking at the re-emergence of an openly racist politics as a white middle class finds itself crushed between cheap immigrant labor and super-efficient robots. According to those on the panel, AI will also continue to struggle with highly creative and entrepreneurial tasks, which means those at the top of the pyramid aren’t going anywhere.

At some point the only solution to technological unemployment, short of smashing the machines or dismantling capitalism, might be the adoption of a universal basic income, which again all the panelists seemed to support. Though as one of the members of the panel, Erik Brynjolfsson, pointed out, such a publicly guaranteed income would provide us with only one of Voltaire’s three purposes of work, which is to save us from “the great evils of boredom, vice and need.” Brynjolfsson also wisely observed that the question that confronts us over the next generation is whether we want to protect the past from the future or the future from the past.

The discussions at Davos also covered the topic of increasing longevity. The conclusions drawn by the panel “What if you are still alive in 2100?” were that aging itself is highly malleable, but that there is no one gene or set of genes that will prove to be a magic bullet for slowing or stopping the aging clock. Nevertheless, there is no reason human beings shouldn’t be able to live past the 120-year limit that has so far been a ceiling on human longevity.

Yet even the great success of increased longevity itself poses problems. Even extending current longevity estimates of those middle-aged today merely to 85 would bankrupt state pension systems as they are now structured. Here we will probably need changes such as workplace daycare for the elderly and cities re-engineered for the frail and old.

By 2050 upwards of 130 million people may suffer from dementia. Perhaps surprisingly, technology rather than pharmaceuticals is proving to be the first line of defense in dealing with dementia, by allowing those suffering from it to outsource their minds.

Many of the vulnerable and needy elderly will live in some of the poorest countries (on a per capita basis at least) on earth: China, India and Nigeria. Countries will need to significantly bolster, and sometimes even build, social security systems to support a tidal wave of the old.

Living to “even” 150, the panelists concluded, would have a revolutionary effect on human social life. Perhaps it will lead to the abandonment of work-life balance for women (and one should hope men as well) so that parents can focus their time on their children during their younger years. Extended longevity would make the idea of choosing a permanent career as early as one’s 20s untenable, resulting in a much longer period of career exploration, and may make lifelong monogamy untenable as well.

Lastly, there was a revealing panel on neuroscience and the law entitled “What if your brain confesses?” Panelists there argued that neuroscience is being increasingly used and misused by the criminal defense. Only in very few cases, such as those that result from tumors, can we draw a clear line between underlying neurological structure and specific behavior.

We can currently “read minds” in limited domains, but our methods lack the kinds of precision and depth that would be necessary for standard questions of guilt and innocence. Eventually we should get there, but getting information in, as in Matrix-style kung-fu uploading, will prove much harder than getting it out. We’re also getting much better at decoding thoughts from behavior, which opens dark opportunities for marketing and other manipulation. Yet we might also be able to use this more granular knowledge of human psychology to structure judicial procedures so that they are much freer of human cognitive biases.

The fact that elites have begun to seriously discuss these issues is extremely important, but letting them take ownership of the response to these transformations would surely be a mistake. For just like any elite they are largely blinded to opportunities for the new by their desire to preserve the status quo, despite how much the revolutionary changes we face open up opportunities for a world much different from, and better than, our own.

 

The debate between the economists and the technologists, who wins?


For a while now robots have been back in the news with a vengeance, and almost on cue seem to have revived many of the nightmares that we might have thought had been locked up in the attic of the mind with all sorts of other stuff from the 1980s that we hoped we would never need again.

Big fears should probably only be tackled one at a time, so let’s leave aside for today the question of whether robots are likely to kill us, and focus on what should be an easier nut to crack, and a less frightening nut at that; namely, whether we are in the process of automating our way into a state of permanent, systemic unemployment.

Alas, even this seemingly less fraught question is no less difficult to answer. For like everything else the issue seems to have given rise to two distinct sides, neither of which has a clear monopoly on the truth. Unlike elsewhere, however, these two sides in the debate over “technological unemployment” split less along ideological lines than on the basis of professional expertise. That is, those who dismiss the argument that advances in artificial intelligence and robotics have already displaced, or are about to displace, the types of work now done by humans to the extent that we face a crisis of permanent underemployment and unemployment the likes of which has never been seen before tend to be economists. How such an optimistic bunch came to be known as dismal scientists is beyond me; note how they are also on the optimistic side of the debate with environmentalists.

Economists are among the first to remind us that we’ve seen fears of looming robot-induced unemployment before, whether those of Ned Ludd and his followers in the 19th century, or as close to us as the 1960s. The destruction of jobs has, in the past at least, been achieved through the kinds of transformation that created brand new forms of employment. In 1915 nearly 40 percent of Americans were agricultural laborers of some sort; now that number hovers around 2 percent. These farmers weren’t replaced by “robots”, but they certainly were replaced by machines.

Still, we certainly don’t have a 40 percent unemployment rate. Rather, as the number of farm laborer positions declined they were replaced by jobs that didn’t even exist in 1915. The place these farmers have not gone, though it is where they probably would have gone in 1915, is into manufacturing, which wouldn’t be much of an option today. For in that sector something very similar to the hollowing out of agricultural employment has taken place, with the percentage of laborers in manufacturing declining from 25 percent in 1915 to around 9 percent today. Here the workers really have been replaced by robots, though job prospects on the shop floor have declined just as much because the jobs have been globalized. Again, even at the height of the recent financial crisis we didn’t see 25 percent unemployment, at least not in the US.

Economists therefore continue to feel vindicated by history: any time machines have managed to supplant human labor we’ve been able to invent whole new sectors of employment where the displaced or their children have been able to find work. It seems we’ve got nothing to fear from the “rise of the robots.” Or do we?

Again, setting aside the possibility that our mechanical servants will go all R.U.R. on us, anyone who takes seriously Ray Kurzweil’s timeline, in which computers match human intelligence by the 2020s and by 2045 exceed our intelligence a billionfold, has to come to the conclusion that most jobs as we know them are toast. The problem here, and one that economists mostly fail to take into account, is that past technological revolutions ended up replacing human brawn and allowing workers to upscale into cognitive tasks. Human workers had somewhere to go. But a machine that did the same for tasks that require intelligence, and that was indeed billions of times smarter than us, would make human workers about as essential to the functioning of a company as the Leaper ornament is to the functioning of a Jaguar.

Then again, perhaps we shouldn’t take Kurzweil’s timeline all that seriously in the first place. Skepticism would seem to be in order because the Moore’s Law-based exponential curve that is at the heart of Kurzweil’s predictions appears to have started to go all sigmoidal on us. That was the case made by John Markoff recently over at The Edge. In an interview about the future of Silicon Valley he said:

All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn’t just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That’s a profound moment.

Kurzweil argues that you have interlocked curves, so even after silicon tops out there’s going to be something else. Maybe he’s right, but right now that’s not what’s going on, so it unwinds a lot of the arguments about the future of computing and the impact of computing on society. If we are at a plateau, a lot of these things that we expect, and what’s become the ideology of Silicon Valley, doesn’t happen. It doesn’t happen the way we think it does. I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account.

Although Markoff admits there has been great progress in pattern recognition, there has been nothing similar for the kinds of routine physical tasks found in much low-skilled, mobile work. As evidenced by the recent DARPA challenge, if you want a job safe from robots choose something for a living that requires mobility and the performance of a variety of tasks: plumber, home health aide and the like.

Markoff also sees job safety on the higher end of the pay scale in cognitive tasks computers seem far from being able to perform:

We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

The upshot of all this is that there’s less to be feared from technological unemployment than many think:

There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster, and things are just running amok. In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

The problem, I think, with the case against technological unemployment made by many economists and by someone like Markoff is that they seem to be taking on a rather weak and caricatured version of the argument. That at least is the conclusion one comes to when taking into account what is perhaps the most reasoned and meticulous book to try to convince us that the boogeyman of robots stealing our jobs might have been all in our imagination before, but that it is real indeed this time.

I won’t so much review the book I am referencing, Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future, here as lay out how he responds to the case from economics that we’ve been here before and have nothing to worry about, and to the observation of Markoff that because Moore’s Law has hit a wall (has it?) we need no longer worry so much about the transformative implications of embedding intelligence in silicon.

I’ll take the second one first. In terms of the idea that the end of Moore’s Law will derail tech innovation Ford makes a pretty good case that:

Even if the advance of computer hardware capability were to plateau, there would be a whole range of paths along which progress could continue. (71)

Continued progress in software, and new types of (especially parallel) computer architecture, will continue to be mined long after Moore’s Law has reached its apogee. Cloud computing should also mean that silicon needn’t compete with neurons at scale: you don’t have to fit the computational capacity of a human individual into a machine of roughly similar size, but could instead tap remotely into a much, much larger and more energy-intensive supercomputer that gives you human-level capacities. Sheer density has become less important.

What this means is that we should continue to see progress (and perhaps very rapid progress) in robotics and artificially intelligent agents. Given that we have what is often thought of as a three-sector labor market comprising agriculture, manufacturing and services, and that the first two are already largely mechanized and automated, the place where the next wave will fall hardest is the service sector, and the question becomes: will there be any place left for workers to go?

Ford makes a pretty good case that eventually we should be able to automate almost anything human beings do, for all automation means is breaking a task down into a limited number of steps. And the examples he comes up with of where we have already shown this is possible are both surprising and sometimes scary.

Perhaps 70 percent of financial trading on Wall Street is now done by trading algorithms (which don’t seem any less inclined to panic). Algorithms now perform legal research, compose orchestral music and independently discover scientific theories. And those are the fancy robots. Most are like ATMs, where it is the customer who now does part of the labor involved in the task. Ford thinks the fast food industry is ripe for innovation in this way, with the customer designing their own meal that some system of small robots then “builds.” Think of your own job. If you can describe it to someone in a discrete number of steps, Ford thinks the robots are coming for you.
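A toy sketch of that last point (my own illustration, not anything taken from Ford’s book): once a job can be written down as an ordered list of steps, the “worker” is just a loop over those steps.

```python
# Toy illustration of the "describable in steps = automatable" point
# (invented for this post, not taken from Rise of the Robots).

def toast_bun(order):
    order.append("toasted bun")

def grill_patty(order):
    order.append("grilled patty")

def add_toppings(order):
    order.append("lettuce and tomato")

def wrap_to_go(order):
    order.append("wrapped to go")

# The customer "designs" the meal; a system of small robots just executes the recipe in order.
RECIPE = [toast_bun, grill_patty, add_toppings, wrap_to_go]

def build_order():
    order = []
    for step in RECIPE:  # any job describable as an ordered list of steps fits this template
        step(order)
    return order

print(build_order())  # ['toasted bun', 'grilled patty', 'lettuce and tomato', 'wrapped to go']
```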

I was thankful that, though himself a technologist, Ford places technological unemployment in a broader context, seeing it as part and parcel of trends since 1970 such as a greater share of GDP moving from labor to capital, soaring inequality, stagnant wages for middle class workers, the decline of unions, and globalization. His solution to these problems is a guaranteed basic income, for which he makes a humane and non-ideological argument, reminding those on the right who might find the idea anathema that conservative heavyweights such as Milton Friedman have argued in its favor.

The problem, from my vantage point, is not that Ford has failed to make a good case against those on the economists’ side of the issue who would accuse him of committing the Luddite fallacy; it’s that perhaps his case is both premature and not radical enough. It is premature in the sense that while all the other trends, rising inequality, the decline of unions and so on, are readily apparent in the statistics, technological unemployment is not.

Perhaps, then, technological unemployment is only a small part of a much larger trend pushing us in the direction of the entrenchment and expansion of inequality and away from the type of middle class society established in the last century. The tech culture of Silicon Valley and the companies it has built are here a little like Burning Man: an example of late capitalist culture at its seemingly most radical and imaginative that temporarily escapes from, rather than creates, a genuinely alternative and autonomous political and social space.

Perhaps the types of technological transformation already here and looming that Ford lays out could truly serve as the basis for a new form of political and economic order, an alternative to the inegalitarian turn, but he doesn’t explore them. Nor does he discuss the already present dark alternative to the kind of socialism through AI we find in the “Minds” of Iain Banks, namely, the surveillance capitalism we have allowed to be built around us, which now stands as a bulwark against our preserving our humanity and preventing ourselves from becoming robots.

Would AI and Aliens be moral in a godless universe?


Last time I attempted to grapple with R. Scott Bakker’s intriguing essay on what kinds of philosophy aliens might practice, and was left dizzied by questions. Luckily, I had a book in my possession that seemed to offer me the answers, a book that had nothing to do with a modern preoccupation like the question of alien philosophers at all, but rather with a metaphysical problem that has been barred from philosophy, except among seminary students, since Darwin; namely, whether or not there is such a thing as moral truth if God doesn’t exist.

The name of the book was Robust Ethics: The Metaphysics and Epistemology of Godless Normative Realism (contemporary philosophy isn’t all that sharp when it comes to titles), by Erik J. Wielenberg. Now, I won’t even attempt to write a proper philosophical review of Robust Ethics, for the book has been excellently dissected by a proper philosopher, John Danaher, in pieces such as this, this, and this, and one more. Indeed, it was Danaher’s thoughtful reviews that resulted in Wielenberg’s short work ending up in the ever-changing pile of books that shadows my living room floor like a patch of unextractable mold. It was just the book I needed when thinking about what types of intelligence might be possessed by extraterrestrials.

It’s a problem I ran into when reviewing David Roden’s Posthuman Life, and it goes like this: while it is not exactly easy for me to conceive of an alternative to our human type of intelligence, I don’t simply draw a blank; yet it’s damned near impossible for me to imagine what an alternative to our moral cognition and action would consist of, and how it would be embedded in these other forms of intelligence.

The way Wielenberg answers this question would seem to throw a wrench into Bakker’s idea of Blind Brain Theory (BBT), because what Bakker urges is that we be suspicious of our cognitive intuitions, since they were provided by evolution not as a means of knowing the truth but for their effectiveness in supporting survival and reproduction, whereas Wielenberg makes the case that we can (luckily) generally rely on these intuitions because of the way they have emerged out of a very peculiar human evolutionary story, one which we largely do not share with other animals. That is, Wielenberg’s argument is anthropocentric to its core, and therein lies a new set of problems.

His contention, in essence, is that the ability of human beings to perceive moral truth arises as a consequence of the prolonged period of childhood we experience in comparison to other animals. In responding to the argument by Sharon Street that moral “truth” would seem quite different from the perspective of lions, or bonobos, or social insects than from a human standpoint, Wielenberg responds:

Lions and bonobos lack the nuclear family structure. Primatologist Frans de Waal suggests the “[b]onobos have stretched the single parent system to the limit”. He also claims that an essential component of human reproductive success is the male-female pair bond which he suggests “sets us apart from the apes more than anything else”. These considerations provide some support for the idea that a moralizing species like ours requires an evolutionary path significantly different from that of lions or bonobos. (171)

The prolonged childhood of humans necessitates both pair-bonding and “alloparents”, which for Wielenberg shape and indeed create our moral disposition and perception in a way seen in no other animals.

As for the social insects scenario suggested by Street, the social insects (termites, ants, and many species of bees and wasps) are so different from us that it’s hard to evaluate whether such a scenario is nomologically possible.  (171).

In a sense the ability to perceive moral truth, what Wielenberg characterizes as “brute facts” such as “rape is wrong”, emerges out of the slow speed at which cultural and technological knowledge must be passed from adults to the young. Were children born fully formed with sufficient information for their own survival (or a quick way of gaining the needed capacities and knowledge), neither the pair bond nor the care of the “village” would be necessary, and the moral knowledge that comes as a result of this period of dependence and interdependence might go undiscovered.

Though I was very much rooting for Wielenberg to succeed in providing an account of moral realism absent any need for God, I believe that in these reflections, found in the very last pages of Robust Ethics, he may have inadvertently undermined that very noble project.

I have complained before about someone like E.O. Wilson’s lack of imagination when it comes to alternative forms of intelligence on worlds other than our own, but what Wielenberg has done is perhaps even more suffocating. For if the perception of moral truth depends upon the evolution of creatures reliant on pair bonding and alloparenting, then this suggests that, due to our peculiarities, human beings might be the only species in the universe capable of perceiving moral truth. This is not the argument Wielenberg likely hoped he was making at all, and indeed it is more anthropocentric than the arguments of some card-carrying theists.

I suppose Wielenberg might object that any intelligent aliens would likely require the same extended period of learning as ourselves, because they too would have arrived at their station via cultural and technological evolution, which seems to demand long periods of dependent learning. Perhaps, or perhaps not. For even if I can’t imagine some biological species in which knowledge is passed directly from parent to offspring, we know that it’s possible; the efficiency of DNA as an information storage device is well known.

Besides, I think it’s a mistake to see biological intelligence as the type of intelligence that commands the stage over the long durée. Even if artificial intelligences, like children, need to learn many tasks through actual experience rather than programming “from above”, the advantage of AI over the biological sort is that it can then share this learning directly with fully grown copies of itself: like Neo in The Matrix, its “progeny” can say “I know kung fu” without ever having learned it themselves. According to Wielenberg’s logic it doesn’t seem that such intelligent entities would necessarily perceive brute moral facts or ethical truths, so if he is right an enormous contraction of the potential scale of the moral universe would have occurred. The actual existence of moral truth limited to perhaps one species in a lonely corner of an otherwise ordinary galaxy would then seem to be a blatant violation of the Copernican principle, and would place us back at center stage of the moral narrative of the universe, if it has such a narrative to begin with.

The only way, it seems, that both the assertion of moral truth found in godless normative realism and the Copernican principle can hold would be if the brute facts of moral truth were more broadly perceivable across species, and conceivably within forms of intelligence that have emerged, or have been built, on the basis of an evolutionary trajectory and environment very different from our own.

I think the best chance here is if moral truth were somehow related to the truths of mathematics (indeed Wielenberg thinks the principle of contradiction, which is the core of mathematics and logic, is essential to the articulation and development of our own moral sense, which begins with the emotions but doesn’t end there). Not only do other animals seem to possess forms of moral cognition that rival our own, but even radically different types of creatures, such as the social insects, are capable of discovering mathematical truths about the world, the kind of logic that underlies moral reasoning, something I explored extensively here.

Let’s hope that the perception of moral truth isn’t as dependent on our very peculiar evolutionary development as Wielenberg’s argument suggests, for if it is, that particular form of truth might be so short-lived and isolated in the cosmos that someone might be led to the mistaken conclusion that it never existed at all.

Do Extraterrestrials Philosophize?


The novelist and philosopher R. Scott Bakker recently put out a mind-blowing essay on the philosophy of extraterrestrials, which isn’t as Area 51 as such a topic might seem at first blush. After all, Voltaire covered the topic of aliens, but if a Frenchman is still a little too playful for your philosophical tastes, recall that Kant thought the topic of extraterrestrial intelligence important enough to cover extensively as well, and you can’t get much more full of dull seriousness than the man from Königsberg.

So let’s take an earnest look at Bakker’s alien philosophy… well, not just yet. Before I begin it’s necessary to lay out a very basic version of the philosophical perspective Bakker is coming from, for in a way his real goal is to use some of our common intuitions regarding humanoid aliens as a way of putting flesh on the bones of two often misunderstood and not (at least among philosophers) widely held philosophical positions: eliminativism and Blind Brain Theory, both of which, to my lights at least, could be subsumed under one version of the ominous and cool-sounding philosophy of dark phenomenology. Though once you get a handle on dark phenomenology it won’t seem all that ominous, and if it’s cool, it’s not the type of cool that James Dean or the Fonz before season 5 would have recognized.

Eliminativism, if I understand it, is the full recognition of the fact that perhaps all our notions about human mental life are suspect insofar as they have not been given a full scientific explanation. In a sense, then, eliminativism is merely an extension of the materialization (some would call it disenchantment) that has been going on since the scientific revolution.

Most of us no longer believe in angels, demons or fairies, not to mention quasi-scientific ideas that have ultimately proven to be empty of content like the ether or phlogiston. Yet in those areas where science has yet to reach, especially areas that concern human thinking and emotion, we continue to cling to what strict eliminativists believe are likely to be proved similar fictions, a form of myth that can range from categories of mental disease without much empirical substance to more philosophical and religiously determined beliefs such as those in free will, intentionality and the self.            

I think Bakker is attracted to eliminativism because it allows us to cut the Gordian knot of problems that have remained unresolved since the beginning of Western philosophy itself, problems built around assumptions which seem increasingly questionable in light of our growing knowledge of the actual workings of the human brain rather than our mere introspection regarding the nature of mental life. Indeed, a kind of subset of eliminativism in the form of Blind Brain Theory essentially consists in the acknowledgement that the brain was designed by evolution for a certain kind of blindness.

What was not necessary for survival has been made largely invisible to the brain, and seeing what has not been revealed takes great effort. Philosophy’s mistake, from the standpoint of a proponent of Blind Brain Theory, has always been to try to shed light upon this darkness from introspection alone, a Sisyphean task in which the philosopher, if not made ridiculous, becomes hopelessly lost in the dark labyrinth of the human imagination. In contrast, an actually achievable role for philosophy would be to define the boundary of the unknown until the science necessary to study this realm has matured enough for its investigations to begin.

The problem becomes: what can one possibly add to the philosophical discourse once one has taken an eliminativist/Blind Brain position? Enter the aliens, for Bakker manages to make a very reasonable argument that we can use both positions to give us a plausible picture of what the mental life and philosophy of intelligent “humanoid” aliens might look like.

In terms of understanding the minds of aliens, eliminativism and Blind Brain Theory are like addenda to evolutionary psychology. An understanding of the perceptual limitations of our aliens (not just mental limitations, but limitations brought about by conditions of time and space) should allow us to make reasonable guesses not only about the philosophical questions, but also about the philosophical errors, likely to be made by our intelligent aliens.

In a way the application of eliminativism and BBT to intelligent aliens puts me in mind of Isaac Asimov’s short story Nightfall, in which a world bathed in perpetual light is destroyed when it succumbs to the fall of night. There it is not the evolved limitations of the senses that prevent Asimov’s “aliens” from perceiving darkness but their being on a planet whose multiple suns keep them bathed in an unending day.

I certainly agree with Bakker that there is something pregnant and extremely useful in both eliminativism and Blind Brain Theory, though perhaps not so much in terms of understanding the possibility space of “alien intelligence” as in understanding our own intelligence and the way it has unfolded and developed over time, embedded in a particular spatio-temporal order we have only recently gained the power to see beyond.

Nevertheless, I think there are limitations to the model. After all, it isn’t even clear the extent to which the kinds of philosophical problems that capture the attention of intelligence are the same even across our own species. How are we to explain the differences in the primary questions that obsess, say, Western versus Chinese philosophy? Surely, something beyond neurobiology and spatio-temporal location is necessary to understand the development of human philosophy in its various schools and cultural guises, including how a discourse has unfolded historically and the degree to which it has been supported by the powers and techniques that secure the survival of some question or perspective over long stretches of time.

There is another way in which the use of eliminativism or Blind Brain Theory might lead us astray when it comes to thinking about alien intelligence: it just isn’t weird enough. When the story of the development of not just human intelligence, but especially our technological and scientific civilization, is told in full detail, it seems so contingent as to be quite unlikely to repeat itself. The big question to ask, I think, is what are the possible alternative paths to intelligence of a human degree or greater, and to a technological civilization like or more advanced than our own. These, of course, are questions for speculative philosophy and fiction that can be scientifically informed in some way, but are very unlikely to be scientifically answered. And even if we could discover the very distant technological artifacts of another technological civilization, as the new Milner/Hawking project hopes, there remains no way to reverse engineer our way to understanding the lost “philosophical” questions that would have once obsessed the biological “seeds” of such a civilization.

Then again, we might at least come up with some well-founded theories, though not from direct contact with or investigation of alien intelligence itself. Our studies of biology are already leading to alternative understandings of the ways intelligence can be embodied, as with the amazing cephalopods. As our capacity for biological engineering increases we will be able to make models of, map alternative histories for, and even create alternative forms of living intelligence. Indeed, our current development of artificial intelligence is like an enormous applied experiment in an alternative form of intelligence to our own.

What we might hope is that such alternative forms of intelligence not only allow us to glimpse the limits of our own perception and pattern making, but might even allow us to peer into something deeper and more enchanted and mystical beyond. We might hope even more deeply that in the far future something of the existential questions that have obsessed us will still be there like fossils in our posthuman progeny.

Is AI a Myth?


A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they see as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI enthusiasm back in the early 1980s, John Searle. (Any relation to the author has been lost in the mists of time.) It was Searle who invented the well-known thought experiment of the “Chinese Room”, which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi’s The Fourth Revolution and Nick Bostrom’s Superintelligence.

Also in October, Michael Jordan, the guy who brought us neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as the hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now enjoined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn’t about might give us a better handle on what is actually going on in AI right now, in the next few decades, and in reference to a farther-off future we have to at least start thinking about, even if there’s not much to actually do regarding the latter question for a few decades at the least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human-level intelligence in machines is theoretically impossible. These aren’t people arguing that there’s some spiritual something that humans possess that we’ll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siri(s) and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian’s fear that he won’t be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human-level machine intelligence between 2075 and 2090. If we just average those figures we’re out to around 2083 before human-equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is 69 years in the future we’re talking about, a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, I think, echoing Bostrom, we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out and will become a huge part of a larger argument, one that will include many issues in addition to AI, over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn’t about this larger debate regarding our survival and future; it’s about what’s happening with artificial intelligence right before our eyes. They want to challenge what they see as commonly held false assumptions regarding AI. It’s hard not to be bedazzled by all the amazing manifestations around us, many of which have appeared only over the last decade. Yet as the philosopher Alva Noë recently pointed out, we’re still not really seeing what we’d properly call “intelligence”:

Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

This is an old criticism, the same as the one made by John Searle, both in the 1980’s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated programming into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as “neural nets” are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It’s not a good idea to be trapped in anything, including our metaphors. AI researchers might fail to develop other good metaphors that help them understand what they are doing; “flows and pipelines” once provided good metaphors for computers. The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th-century ideas about “electronic brains”, and the public is at risk of anthropomorphizing their machines. Such anthropomorphizing might have ugly consequences: a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.
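To see why “cartoon” is a fair description, here is roughly what a single unit in an artificial neural net computes (the inputs and weights below are made up purely for illustration): a weighted sum squashed through a nonlinearity, and nothing more.

```python
import math

# Roughly what one "neuron" in an artificial neural net computes: a weighted sum of its
# inputs passed through a squashing function. Real neurons involve dendritic trees, spike
# timing, neurotransmitters and much else that this cartoon simply leaves out.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic "firing rate" between 0 and 1

# Made-up inputs and weights, purely for illustration.
print(artificial_neuron([0.5, 0.2, 0.9], [1.2, -0.7, 0.3], bias=0.1))
```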

Lanier’s critique of AI is actually deeper than Jordan’s, because he sees both technological and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we’ll find ourselves in an “AI winter” similar to the one that occurred in the 1980s. Hype cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you’re likely to lose the interest of the smartest minds and start to attract kooks, which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far scarier. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies, Google, Facebook, Amazon, are essentially just algorithms. Some of the same people who have an economic interest in our seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn’t this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of OZ scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn’t so much another form of intelligence helping you to make better informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn’t, as it is often presented, coming from silicon intelligence at all. Rather, it’s leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user’s view.
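
To make the point concrete, here is a deliberately toy sketch in Python- entirely my own illustration, with made-up names and data, not the code of any actual service- of how a recommender can look “intelligent” while doing nothing but counting and sorting the prior choices of other human beings with similar tastes:

# A toy recommender: the "intelligence" on display is nothing but the
# aggregated listening choices of other, similar human beings.
listens = {
    "ann":  {"Bach", "Coltrane", "Radiohead"},
    "bob":  {"Bach", "Coltrane", "Miles Davis"},
    "cara": {"Radiohead", "Portishead"},
}

def recommend(user, data):
    mine = data[user]
    scores = {}
    for other, theirs in data.items():
        if other == user:
            continue
        overlap = len(mine & theirs)        # how much taste do we share?
        for item in theirs - mine:          # things they chose that I haven't
            scores[item] = scores.get(item, 0) + overlap
    # items favored by the people most like me come first
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann", listens))            # ['Miles Davis', 'Portishead']

Nothing in the sketch understands music; the apparent judgment is just other people’s listening histories, counted, weighted, and handed back.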

Lanier doesn’t think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. Take technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is “eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad”- a view based on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

If we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), then over the course of the next half-century we will experience the gradual improvement of AI to the point where perhaps the majority of human occupations can be performed by machines. We should not confuse ourselves as to what this means: it is only with an echo of lost religious myths that we can say we will be entering the “next stage” of human or “cosmic” evolution. Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don’t fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts should they ever emerge from anything other than our nightmares and our dreams.

 

Summa Technologiae, or why the trouble with science is religion

Soviet Space Art 2

Before I read Lee Billings’ piece in the fall issue of Nautilus, I had no idea that in addition to being one of the world’s greatest science-fiction writers, Stanislaw Lem had written what became a forgotten book, a tome that was intended to be the overarching text of the technological age: his 1966 Summa Technologiae.

I won’t go into detail on Billings’ thought-provoking piece; suffice it to say that he leads us to question whether we have lost something of Lem’s depth with our current batch of Silicon Valley singularitarians, who have largely repackaged ideas first fleshed out by the Polish novelist. Billings also leads us to wonder whether our focus on either the fantastic or the terrifying aspects of the future is causing us to forget the human suffering that is here, right now, at our feet. I encourage you to check the piece out for yourself. In addition to Billings there’s also an excellent review of the Summa Technologiae by Giulio Prisco, here.

Rather than look at either Billings’ or Prisco’s piece, I will try to lay out some of the ideas found in Lem’s 1966 Summa Technologiae, a book at once dense almost to the point of incomprehensibility, yet full of insights we should pay attention to as the world Lem imagines unfolds before our eyes, or at least seems to be doing so for some of us.

The first thing that struck me when reading the Summa Technologiae was that it isn’t really our version of Aquinas’ Summa Theologica, from which Lem took his tract’s name. In the 13th century Summa Theologica you find the voice of a speaker supremely confident in both the rationality of the world and his own understanding of it. Aquinas, of course, didn’t really possess such a comprehensive understanding, but it is perhaps odd that the more we have learned the more confused we have become, and Lem’s Summa Technologiae reflects some of this modern confusion.

Unlike Aquinas, Lem is in a sense blind to our destination, and what he is trying to do is to probe into the blackness of the future to sense the contours of the ultimate fate of our scientific and technological civilization. Lem seeks to identify the roadblocks we will likely encounter if we are to continue our technological advancement- roadblocks that are important to identify because we have yet to find any evidence, in the form of extraterrestrial civilizations, that they can actually be overcome.

The fundamental aspect of technological advancement is that it has become both its own reward and a trap. We have become absolutely dependent on scientific and technological progress as long as population growth continues- for if technological advancement stumbles while population continues to increase, living standards will fall precipitously.

The problem Lem sees is that science is growing faster than the population, and in order to keep up with it we would eventually have to turn all human beings into scientists, and then some. Science advances by exploring the whole of the possibility space – we can’t predict in advance which of its explorations will produce something useful, or which avenues will prove fruitful in terms of our understanding. It’s as if the territory has become so large that at some point we will no longer have enough people to explore all of it, and thus will have to narrow the number of regions we look at. This narrowing puts us at risk of not finding the keys to El Dorado, so to speak, because we will not have asked and answered the right questions. We are approaching what Lem calls “the information peak.”

The absolutist nature of the scientific endeavor itself, our need to explore all avenues or risk losing something essential, for Lem, will inevitably lead to our attempt to create artificial intelligence. We will pursue AI to act as what he calls an “intelligence amplifier” though Lem is thinking of AI in a whole new way where computational processes mimic those done in nature, like the physics “calculations” of a tennis genius like Roger Federer, or my 4 year old learning how to throw a football.

Lem, through the power of his imagination alone, seemed to anticipate both some of the problems we would encounter when trying to build AI, and the ways we would likely try to escape them. For all their seeming intelligence our machines lack the behavioral complexity of even lower animals, let alone human intelligence, and one of the main roads away from these limitations is getting silicon intelligence to be more like that of carbon-based creatures – not so much “brain-like” as “biological-like”.

Way back in the 1960’s, Lem thought we would need to learn from biological systems if we wanted to really get to something like artificial intelligence- think, for example, of how much more bang you get for your buck when you contrast DNA and a computer program. A computer program gets you some interesting or useful behavior or process done by a machine; DNA, well… it gets you programmers.

The somewhat uncomfortable fact about designing machine intelligence around biological-like processes is that it might end up a lot like how the human brain works- a process largely invisible to its possessor. How did I catch that ball? Damned if I know- or at least damned if I know, if what is being asked is what internal process led me to catch the ball.

Just going about our way in the world we make “calculations” that would make the world’s fastest supercomputers green with envy, were they actually sophisticated enough to experience envy. We do all the incredible things we do without having any solid idea, either scientific or internal, about how it is we are doing them. Lem thinks “real” AI will be like that. It will be able to outthink us because it will be a species of natural intelligence like our own, and just like our own thinking, we will be hard-pressed to explain how exactly it arrived at some conclusion or decision. Truly intelligent AI will end up being a “black box”.

Our increasingly complex societies might need such AI’s to serve the role of what Lem calls “Homostats”- machines that run the complex interactions of society. The dilemma appears the minute we surrender the responsibility to make our decisions to a homostat. For then the possibility opens that we will not be able to know how a homostat arrived at its decision, or what a homostat is actually trying to accomplish when it informs us that we should do something, or even, what goal lies behind its actions.

It’s quite a fascinating view: that science might be epistemologically insatiable in this way; that at some point it will grow beyond the limits of human intelligence, whether our sheer numbers or our mental capacity; that the only way out which still includes technological progress will be to develop “naturalistic” AI; and that very soon our societies will be so complicated that they will require the use of such AIs to manage them.

I am not sure if the view is right, but to my eyes at least it’s got much more meat on its bones than current singularitarian arguments about “exponential trends”, which, unlike Lem, take little account of the possibility that the scientific wave we’ve been riding for five or so centuries will run into a wall we will find impossible to crest.

Yet perhaps the most intriguing ideas in Lem’s Summa Technologiae are those imaginative leaps that he throws at the reader almost as an aside, with little reference to his overall theory of technological development. Take his metaphor of the mathematician as a sort of crazy “tailor”.

He makes clothes but does not know for whom. He does not think about it. Some of his clothes are spherical without any opening for legs or feet…

The tailor is only concerned with one thing: he wants them to be consistent.

He takes his clothes to a massive warehouse. If we could enter it, we would discover clothes that could fit an octopus, others fit trees, butterflies, or people.

The great majority of his clothes would not find any application. (171-172)

This is Lem’s clever way of explaining the so-called “unreasonable effectiveness of mathematics”, a view that is the opposite of that of current-day Platonists such as Max Tegmark, who hold all mathematical structures to be real even if we are unable to find actual examples of them in our universe.

Lem thinks math is more like a ladder. It allows you to climb high enough to see a house, or even a mountain, but shouldn’t be confused with the house or the mountain itself. Indeed, most of the time, as his tailor example is meant to show, the ladder mathematics builds isn’t good for climbing at all. This is why Lem thinks we will need to learn “nature’s language” rather than go on using our invented language of mathematics if we want to continue to progress.

For all its originality and freshness, the Summa Technologiae is not without its problems. Once we start imagining that we can play the role of creator it seems we are unable to escape the same moral failings the religious would have once held against God. Here is Lem imagining a far future when we could create a simulated universe inhabited by virtual people who think they are real.

Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything” considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity- if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess. (291-292)

If Lem is ultimately proven correct, and we arrive at this destination where we create virtual universes with sentient inhabitants whom we keep blind to their true nature, then science will have ended where it began- with the demon imagined by Descartes.

The scientific revolution commenced when it was realized that we could trust neither our own senses nor our traditions to tell us the truth about the world – the most famous example of which was the discovery that the earth, contrary to all perception and history, traveled around the sun and not the other way round. The first generation of scientists who emerged in a world in which God had “hidden his face” couldn’t help but understand this new view of nature as the creator’s elaborate puzzle that we would have to painfully reconstruct, piece by piece, hidden as it was beneath the illusion of our own “fallen” senses and the false post-edenic world we had built around them.

Yet a curious new fear arises with this: What if the creator had designed the world so that it could never be understood? Descartes, at the very beginning of science, reconceptualized the creator as an omnipotent demon.

I will suppose, then, not that Deity, who is sovereignly good and the fountain of truth, but that some malignant demon, who is at once exceedingly potent and deceitful, has employed all his artifice to deceive me; I will suppose that the sky, the air, the earth, colours, figures, sounds, and all external things, are nothing better than the illusions of dreams, by means of which this being has laid snares for my credulity.

Descartes’ escape from this dreaded absence of intelligibility was his famous “cogito ergo sum”, the certainty a reasoning being has in its own existence. The entire world could be an illusion, but the fact of one’s own consciousness was something not even an all-powerful demon would be able to take away.

What Lem’s resurrection of the demon imagined by Descartes tells us is just how deeply religious thinking still lies at the heart of science. The idea has become secularized, and part of our mythology of science-fiction, but it’s still there; indeed, it’s the only scientifically fashionable form of creationism around. As proof, not even the most secular among us are likely to bat an eye at experiments to test whether the universe is an “infinite hologram”. And if such experiments bear fruit they will point to a designer that either allowed us to know our reality or didn’t care to “bar the exits”, but the crazy thing, if one takes Lem and Descartes seriously, is that their creator/demon is ultimately as ineffable and irrefutable as the old ideas of God from which it descended. For any failure to prove the hypothesis that we are living in a “simulation” can be brushed aside on the basis that whatever has brought about this simulation doesn’t really want us to know. It’s only a short step from there to unraveling the whole concept of truth at the heart of science. Like any garden-variety creationist we end up seeing the proofs of science as part of God’s (or whatever we’re now calling God) infinitely clever ruse.

The idea that there might be an unseeable creator behind it all is just one of the religious myths buried deep in science, a myth that traces its origins less from the day-to-day mundane experiments and theory building of actual scientists than from a certain type of scientific philosophy, or science-fiction, that has constructed a cosmology around what science is for and what science means. It is the mythology in which the singularitarians and others who followed Lem remain trapped, often to the detriment of both technology and science. What is a shame is that these are myths that Lem, even with his expansive powers of imagination, did not dream widely enough to see beyond.

Jumping Off The Technological Hype-Cycle and the AI Coup

Robotic Railroad 1950s

What we know is that the very biggest tech companies have been pouring money into artificial intelligence in the last year. Back in January Google bought the UK artificial intelligence firm DeepMind for 400 million dollars. Only a month earlier, Google had bought the innovative robotics firm Boston Dynamics. Facebook is in the game as well, having created a massive lab devoted to artificial intelligence in December 2013. And this new obsession with AI isn’t only something latte-pumped-up Americans are into. The Chinese internet giant Baidu, with its own AI lab, recently snagged the artificial intelligence researcher Andrew Ng, whose work for Google included the breakthrough of creating a program that could teach itself to recognize pictures of cats on the Internet- and the word “breakthrough” is not intended to be the punch line of a joke.

Obviously these firms see something that makes these big bets and the competition for talent seem worthwhile, the most obvious thing being advances in an approach to AI known as Deep Learning, which moves programming away from a logical set of instructions and towards the kind of bulking and pruning found in biological forms of intelligence. Will these investments prove worth it? We should know in just a few years, yet we simply don’t right now.
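
For a concrete picture of what “learning” rather than “following instructions” means here, the following is a minimal sketch in Python- my own toy, not anything these companies actually run- of a single artificial “neuron” that is never told the rule for the logical AND function but instead strengthens and weakens its connections in response to examples:

# One artificial "neuron" learns AND from examples rather than from rules.
import random
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# training examples: two inputs and the desired output for logical AND
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w1, w2, bias = random.random(), random.random(), random.random()
rate = 0.5

for _ in range(5000):                        # thousands of tiny corrections
    (x1, x2), target = random.choice(examples)
    guess = sigmoid(w1 * x1 + w2 * x2 + bias)
    error = target - guess                   # how wrong was the guess?
    w1 += rate * error * x1                  # strengthen or weaken each
    w2 += rate * error * x2                  # connection in proportion to
    bias += rate * error                     # its share of the blame

for (x1, x2), target in examples:
    print((x1, x2), "->", round(sigmoid(w1 * x1 + w2 * x2 + bias), 2), "target:", target)

Deep learning systems stack millions of such adjustable connections, and add the pruning of connections that prove useless, but the underlying move is the same: behavior is grown out of data rather than written down as explicit instructions.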

No matter how it turns out we need to beware of becoming caught in the technological hype-cycle. A tech investor, or tech company for that matter, needs to be able to ride the hype-cycle like a surfer rides a wave- when it goes up, she goes up, and when it comes down she comes down, the key being to position oneself in just the right place, neither too far ahead nor too far behind. The rest of us, however, and especially those charged with explaining science and technology to the general public, namely science journalists, have a very different job- to parse the rhetoric and figure out what is really going on.

A good example of what science journalism should look like is a recent conversation over at Bloggingheads between Freddie deBoer and Alexis Madrigal. As Madrigal points out we need to be cognizant of what the recent spate of AI wonders we’ve seen actually are. Take the much over-hyped Google self-driving car. It seems much less impressive once you know that the only areas where these cars are functional are those that have been mapped beforehand in painstaking detail. The car guides itself not through “reality” but through a virtual world whose parameters can be upset by something out of order, to which the car is then pre-programmed to respond in a limited set of ways. The car thus only functions in the context of a mindbogglingly precise map of the area in which it is driving- as if you were unable to make your way through a room unless you knew exactly where every piece of furniture was located. In other words Google’s self-driving car is undriveable in almost all situations that could be handled by a sixteen-year-old who just learned how to drive. “Intelligence” in a self-driving car is a question of gathering massive amounts of data up front. Indeed, the latest iteration of the Google self-driving car is more like a tiny trolley car where information is the “track” than an automobile driven by a human being, able to go anywhere without any foreknowledge of the terrain so long as there is a road to drive upon.

As Madrigal and deBoer also point out in another example, the excellent service of Google Translate isn’t really using machine intelligence to decode language at all. It’s merely aggregating the efforts of thousands of human translators to arrive at approximate results. Again, there is no real intelligence here, just an efficient way to sort through an incredibly huge amount of data.

Yet what if this tactic of approaching intelligence by “throwing more data at it” ultimately proves a dead end? There may come a point where such a strategy shows increasingly limited returns. The fact of the matter is that we know of only one fully sentient creature- ourselves- and the more-data strategy is nothing like how our own brains work. If we really want to achieve machine intelligence, and it’s an open question whether this is a worthwhile goal, then we should be exploring at least some alternative paths to that end, such as those long espoused by Douglas Hofstadter, the author of the amazing Godel, Escher, Bach and The Mind’s I, among others.

Predictions about the future capacities of artificially intelligent agents are all predicated on the continued exponential rise in computer processing power. Yet these predictions rest on some less than solid assumptions, the first being that we are nowhere near hard limits to the continuation of Moore’s Law. What this assumption ignores are the increased rumblings that Moore’s Law might be in hospice and destined for the morgue.

But even if no such hard limits are encountered in terms of Moore’s Law, we still have the unproven assumption that greater processing power almost all by itself leads to intelligence, or even is guaranteed to bring incredible changes to society at large. The problem here is that sheer processing power doesn’t tell you all that much. Processing power hasn’t brought us machines that are intelligent so much as machines that are fast, nor are the increases in processing power themselves all that relevant to what the majority of us can actually do. As we are often reminded, all of us carry in our pockets, or have sitting on our desktops, computational capacity that exceeds all of NASA’s in the 1960’s, yet clearly this doesn’t mean that any of us are, by this power alone, capable of sending men to the moon.

AI may be in a technological hype-cycle- again, we won’t really know for a few years- but the danger of any hype-cycle for an immature technology is that it gets crushed as the wave comes down. In a hype-cycle initial progress in some field is followed by a private sector investment surge and then a transformation of the grant writing and academic publication landscape, as universities and researchers desperate for dwindling research funding try to match their research focus to a new and sexy field. Eventually progress comes to the attention of the general press and gives rise to fawning books and maybe even a dystopian Hollywood movie or two. Once the public is on to it, the game is almost up, for research runs into headwinds and progress fails to meet the expectations of a now profit-fix-addicted market and funders. In the crash many worthwhile research projects end up in the dustbin and funding flows to the new sexy idea.

AI itself went through a similar hype-cycle in the 1980’s, back when Hofstadter was writing his Godel, Escher, Bach, but we have had a spate of more recent candidates. Remember in the 1990’s when seemingly every disease and every human behavior was being linked to a specific gene promising targeted therapies? Well, as almost always, we found out that reality is more complicated than the current fashion. The danger here was that such a premature evaluation of our knowledge led to all kinds of crazy fantasies and nightmares. The fantasy that we could tailor-design human beings by selecting specific genes led to what amounted to some pretty egregious practices, namely the sale of selection services for unborn children based on spurious science- a sophisticated form of quackery. It also led to childlike nightmares, such as those found in the movie Gattaca or Francis Fukuyama’s Our Posthuman Future, where we were frightened with the prospect of a dystopian future in which human beings were to be designed like products, a nightmare that was supposed to be just over the horizon.

We now have the field of epigenetics to show us what we should have known- that both genes and environment count and we have to therefore address both, and that the world is too complex for us to ever assume complete sovereignty over it. In many ways it is the complexity of nature itself that is our salvation protecting us from both our fantasies and our fears.

Some other examples? How about MOOCs, which were supposed to be as revolutionary as the invention of universal education or the university? Being involved in distance education for non-university-attending adults I had always known that the most successful model for online learning was a “blended” one- some face-to-face, some online- and that “soft” study skills were as important to student success as academic ability. The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY losing its foothold. Andrew Ng, the AI researcher scooped up by Baidu whom I mentioned earlier, is just one of a number of high-level MOOC refugees, having helped found Coursera.

The so-called Internet of Things is probably another example of getting caught on the hype-cycle. The IoT is the idea that people are going to be clamoring to connect all of their things- their homes, refrigerators, cars, and even their own bodies- to the Internet in order to be able to constantly monitor those things. The holes in all this are not only that we are already drowning in a deluge of data, or that it’s pretty easy to see how the automation of consumption only benefits those providing the service if we’re either buying more stuff or the automators are capturing a “finder’s fee”; it’s above all that anything connected to the Internet is by that fact hackable, and who in the world wants their home or their very body hacked? This isn’t a paranoid fantasy of the future, as a recent skeptical piece on the IoT in The Economist pointed out:

Last year, for instance, the United States Fair Trade Commission filed a complaint against TrendNet, a Californian marketer of home-security cameras that can be controlled over the internet, for failing to implement reasonable security measures. The company pitched its product under the trade-name “SecureView”, with the promise of helping to protect owners’ property from crime. Yet, hackers had no difficulty breaching TrendNet’s security, bypassing the login credentials of some 700 private users registered on the company’s website, and accessing their live video feeds. Some of the compromised feeds found their way onto the internet, displaying private areas of users’ homes and allowing unauthorised surveillance of infants sleeping, children playing, and adults going about their personal lives. That the exposure increased the chances of the victims being the targets of thieves, stalkers or paedophiles only fuelled public outrage.

Personalized medicine might be considered a cousin of the IoT, and while it makes perfect sense to me for persons with certain medical conditions or even just interest in their own health to monitor themselves or be monitored and connected to health care professionals, such systems will most likely be closed networks to avoid the risk of some maleficent nerd turning off your pacemaker.

Still, personalized medicine itself might be yet another example of the magnetic power of hype. It is one thing to tailor a patient’s treatment based on how others with similar genomic profiles reacted to some pharmaceutical and the like. What would be most dangerous in terms of health care costs, both to individuals and society, would be something like the “personalized” care for persons with chronic illnesses profiled in the New York Times this April, where, for instance, the:

… captive audience of Type 1 diabetics has spawned lines of high-priced gadgets and disposable accouterments, borrowing business models from technology companies like Apple: Each pump and monitor requires the separate purchase of an array of items that are often brand and model specific.

A steady stream of new models and updates often offer dubious improvement: colored pumps; talking, bilingual meters; sensors reporting minute-by-minute sugar readouts. Ms. Hayley’s new pump will cost $7,350 (she will pay $2,500 under the terms of her insurance). But she will also need to pay her part for supplies, including $100 monitor probes that must be replaced every week, disposable tubing that she must change every three days and 10 or so test strips every day.

The technological hype-cycle gets its rhetoric from the one technological transformation that actually deserves the characterization of a revolution. I am talking, of course, about the industrial revolution, which certainly transformed human life almost beyond recognition from what came before. Every new technology seemingly ends up making its claim to be “revolutionary”, as in absolutely transformative. Just in my lifetime we have had the IT, or digital, revolution, the Genomics Revolution, the Mobile Revolution, and the Big Data Revolution, to name only a few. Yet the fact of the matter is that not only has no single one of these revolutions proven as transformative as the industrial revolution; arguably, all of them combined haven’t matched the industrial revolution either.

This is the far too often misunderstood thesis of economists like Robert Gordon. Gordon’s argument, at least as far as I understand it, is not that current technological advancements aren’t a big deal, just that the sheer qualitative gains seen in the industrial revolution are incredibly difficult to sustain let alone surpass.

The enormity of the change from a world where it took years, as it took Magellan propelled by the winds, rather than days to circle the globe is hard to get our heads around; the gap between using a horse and using a car for daily travel is incredible. The average lifespan has doubled since the 1800’s. One in five of the children born once died in childhood. There were no effective anesthetics before 1846. Millions would die from an outbreak of the flu or other infectious disease. Hunger and famine were common human experiences, however developed one’s society, up until the 20th century, and indoor toilets were not common until then either. Vaccination against most infectious diseases did not emerge until the late 19th century.

Bill Gates has characterized views such as those of Gordon as “stupid”. Yet, he himself is a Gordonite as evidenced by this quote:

But asked whether giving the planet an internet connection is more important than finding a vaccination for malaria, the co-founder of Microsoft and world’s second-richest man does not hide his irritation: “As a priority? It’s a joke.”

Then, slipping back into the sarcasm that often breaks through when he is at his most engaged, he adds: “Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.”

And this is all really what I think Gordon is saying, that the “revolutions” of the past 50 years pale in comparison to the effects on human living of the period between 1850 and 1950 and this is the case even if we accept the fact that the pace of technological change is accelerating. It is as if we are running faster and faster at the same time the hill in front of us gets steeper and steeper so that truly qualitative change of the human condition has become more difficult even as our technological capabilities have vastly increased.

For almost two decades we’ve thought that the combined effects of three technologies in particular- robotics, genetics, and nanotech- were destined to bring qualitative change on the order of the industrial revolution. It’s been fourteen years since Bill Joy warned us that these technologies threatened us with a future without human beings in it, but it’s hard to see that even a positive version of the transformations he predicted has come true. This is not to say that they will never bring such a scale of change, only that they haven’t yet, and fourteen years isn’t nothing after all.

So now, after that long and winding road, back to AI. Erik Brynjolfsson and Andrew McAfee, the authors of the most popular recent book on the advances in artificial intelligence over the past decade, The Second Machine Age, take aim directly at the technological pessimism of Gordon and others. They are firm believers in the AI revolution and its potential. For them innovation in the 21st century is no longer about brand new breakthrough ideas but, borrowing from biology, the recombination of ideas that already exist. In their view, we are being “held back by our inability to process all the new ideas fast enough” and therefore one of the things we need is even bigger computers to test out new ideas and combinations of ideas. (82)

But there are other conclusions one might draw from the metaphor of innovation as “recombination”.  For one, recombination can be downright harmful for organisms that are actually working. Perhaps you do indeed get growth from Schumpeter’s endless cycle of creation and destruction, but if all you’ve gotten as a consequence are minor efficiencies at the margins at the price of massive dislocations for those in industries deemed antiquated, not to mention society as a whole, then it’s hard to see the game being worth the candle.

We’ve seen this pattern in financial services, in music, and in journalism, and it is now desired in education and healthcare. Here innovation is used not so much to make our lives appreciably better as to upend traditional stakeholders in an industry so that those with the biggest computers, what Jaron Lanier calls “Siren Servers”, can swoop in and take control. A new elite stealing an old elite’s thunder wouldn’t matter all that much to the rest of us peasants were it not for the fact that this new elite’s system of production has little room for us as workers and producers, but only as consumers of its goods. It is this desire to destabilize, reorder and control the institutions of pre-digital capitalism, and to squeeze expensive human beings from the process of production, that is the real source of the push for intelligent machines, the real force behind the current “AI revolution”- but given its nature, we’d do better to call it a coup.

Correction:

The phrase above: “The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY losing its foothold.”

Originally read: “The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY  closing up shop, as a result.”

Our Verbot Moment

Metropolis poster

When I was around nine years old I got a robot for Christmas. I still remember calling my best friend Eric to let him know I’d hit pay dirt. My “Verbot” was to be my own personal R2D2. As was clear from the picture on the box, which I again remember as clear as if it were yesterday, Verbot would bring me drinks and snacks from the kitchen on command- no more pestering my sisters who responded with their damned claims of autonomy! Verbot would learn to recognize my voice and might help me with the math homework I hated. Being the only kid in my nowhere town with his very own robot I’d be the talk for miles in every direction. As long, that is, as Mark Z didn’t find his own Verbot under the tree- the boy who had everything- cursed brat!

Within a week after Christmas Verbot was dead. I never did learn how to program it to bring me snacks while I lounged watching Star Blazers, though it wasn’t really programmable to start with. It was really more of a remote-controlled car in the shape of Robbie from Lost in Space than an actual honest-to-goodness robot. Then as now, my steering skills weren’t so hot and I managed to somehow get Verbot’s antenna stuck in the tangly curls of our skittish terrier, Pepper. To the sounds of my cursing, Pepper panicked and dragged poor Verbot round and round the kitchen table, eventually snapping it loose from her hair to careen into a wall and smash into pieces. I felt my whole future was there in front of me, shattered on the floor. There was no taking it back.

Not that my 9 year old nerd self realized this, but the makers of Verbot obviously weren’t German, the word in that language meaning a ban or prohibition. Not exactly a ringing endorsement for a product- more like an inside joke by the makers whose punch line was the precautionary principle.

What I had fallen into in my Verbot moment was the gap between our aspirations for robots and their actual reality. People had been talking about animated tools since ancient times. Homer has some in his Iliad; Aristotle discussed their possibility. Perhaps we started thinking about this because living creatures tend to be unruly and unpredictable. They don’t get you things when you want them to and have a tendency to run wild and go rogue. Tools are different: they always do what you want them to as long as they’re not broken and you are using them properly. Combining the animation and intelligence of living things with the cold functionality of tools would be like mixing chocolate and peanut butter for someone who wanted to get something done without doing it himself. The problem was that we had no idea how to get from our dead tools to “living” ones.

It was only in the 19th century that an alternative path to the hocus-pocus of magic was found for answering the two fundamental questions surrounding the creation of animate tools: what would animate these machines in the same way uncreated living beings are animated, and what would be the source of these beings’ intelligence? Few before the 1800s could see through these questions without some reference to the black arts, although a genius like Leonardo Da Vinci had, as far back as the 15th century, seen hints that at least one way forward was to discover the principles of living things and apply them to our tools and devices. (More on that another time.)

The path forward we actually discovered was through machines animated by chemical and electrical processes much like living beings are, rather than by the tapping of kinetic forces such as water and wind, or the potential energy and multiplication of force through things like springs and levers, which had run our machines up until that point. Intelligence was to be had in the form of devices following the logic of some detailed set of instructions. Our animated machines were to be energetic like animals but also logical and precise like the devices and languages we had created for measuring and sequencing.

We got the animation part down pretty quickly, but the intelligence part proved much harder. Although much more precise than fault-prone humans, mechanical methods of intelligence were just too slow when compared to the electro-chemical processes of living brains. Once such “calculators”, what we now call computers, were able to use electronic processes they got much faster, and the idea that we were on the verge of creating a truly artificial intelligence began to take hold.

As everyone knows, we were way too premature in our aspirations. The most infamous quote of our hubris came in 1956 when the Dartmouth Summer Research Project on Artificial Intelligence boldly predicted:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.[emphasis added]

Ooops. Over much of the next half century, the only real progress in these areas for artificial intelligence came in the worlds we’d dreamed up in our heads. As a young boy, the robots I saw using language or forming abstractions were only found in movies and TV shows such as 2001: A Space Odyssey, or Star Wars, Battlestar Galactica, Buck Rogers and in books by science-fiction giants like Asimov. Some of these daydreams were so long in being unfulfilled I watched them in black and white. Given this, I am sure many other kids had their Verbot moments as well.

It is only in the last decade or so that the processing of instructions has proven fast enough and the programs sophisticated enough for machines to exhibit something like the intelligent behaviors of living organisms. We seem to be at the beginning of a robotic revolution where machines are doing at least some of the things science-fiction and Hollywood had promised. They beat us at chess and trivia games, can fly airplanes and drive automobiles, serve as pack animals, and even speak to us. How close we will come to the dreams of authors and filmmakers when it comes to our 21st century robots cannot be known; an even more important question, though, is how what actually develops will diverge from these fantasies.

I find the timing of this robotic revolution in the context of other historical currents quite strange. The bizarre thing is that almost at the exact moment many of us became unwilling to treat other living creatures, especially human beings, as mere tools- with slavery no longer tolerated, our children and spouses no longer treated as property and servants but as gifts to be cultivated (perhaps the sadder element of why population growth rates are declining), and even our animals offered some semblance of rights and autonomy- we were coming ever closer to our dream of creating truly animated and intelligent slaves.

This is not to say we are out of the woods yet when it comes to our treatment of living beings. The headlines of the tragic kidnapping of over 300 girls in Nigeria should bring to our attention the reality of slavery in our supposedly advanced and humane 21st century, where there are more people enslaved today than at the height of 19th century chattel slavery- it is just that the proportions are so much lower. Many of the world’s working poor, especially in the developing world, live in conditions not far removed from slavery or serfdom. The primary problem I see with our continued practice of eating animals is not meat eating itself, but that the production process chains living creatures to the cruel and unrelenting sequence of machines rather than allowing such animals to live in the natural cycles for which they evolved and were bred.

Still, people in many parts of the world are rightly constrained in how they can treat living beings. What I am afraid of is that the dark and perpetual human longings to be served and to dominate will manifest themselves in our making machines more like persons for those purposes alone. The danger here is that these animated tools will cross, or we will force them to cross, some threshold of sensibility that calls into question their very treatment and use as mere tools, while we fail to mature beyond the level of my 9 year old self dreaming of his Verbot slave.

And yet, this is only one way to look at the rise of intelligent machines. Like a gestalt drawing we might change our focus and see a very different picture- that it is not we who are using and chaining machines for our purposes, but the machines who are doing this to us.  To that subject next time…

 

 

 

The Pinocchio Threshold: How the experience of a wooden boy may be a better indication of AGI than the Turing Test

Pinocchio

My daughters and I just finished Carlo Collodi’s 1883 classic Pinocchio, our copy beautifully illustrated by Robert Ingpen. I assume most adults, when they picture the story, have the 1940 Disney movie in mind and associate the name with noses growing from lies and Jiminy Cricket. The Disney movie is dark enough as films for children go, but the book is even darker, with Pinocchio killing his cricket conscience in the first few pages. For our poor little marionette it’s all downhill from there.

Pinocchio is really a story about the costs of disobedience and the need to follow parents’ advice. At every turn where Pinocchio follows his own wishes rather than those of his “parents”, even when his object is to do good, things unravel, getting the marionette into even more trouble and putting him even further away from reaching his goal of becoming a real boy.

It struck me somewhere in the middle of reading the tale that if we ever saw artificial agents acting something like our dear Pinocchio it would be a better indication of them having achieved human level intelligence than a measure with constrained parameters  like the Turing Test. The Turing Test is, after all, a pretty narrow gauge of intelligence and as search and the ontologies used to design search improve it is conceivable that a machine could pass it without actually possessing anything like human level intelligence at all.

People who are fearful of AGI often couch those fears in terms of an AI destroying humanity to serve its own goals, but perhaps this is less likely than AGI acting like a disobedient child, the aspect of humanity Collodi’s Pinocchio was meant to explore.

Pinocchio is constantly torn between what good adults want him to do and his own desires, and it takes him a very long time indeed to come around to the idea that he should go with the former.

In a recent TED talk the computer scientist Alex Wissner-Gross made the argument (though I am not fully convinced) that intelligence can be understood as the maximization of future freedom of action. This leads him to conclude that collective nightmares such as Karel Čapek’s classic R.U.R. have things backwards. It is not that machines, after crossing some threshold of intelligence, for that reason turn round and demand freedom and control; it is that the desire for freedom and control is the nature of intelligence itself.
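
To see what the claim amounts to in practice, here is a crude toy in Python- my own illustration, and nothing like Wissner-Gross’s actual causal-entropy formalism- in which an agent on a small grid simply prefers whichever step leaves it the greatest number of reachable future positions, that is, the most freedom of action:

# The map: '#' is a wall, '.' is open floor. The cell to the agent's right
# is a one-step dead end; the cell below it opens onto a larger room.
GRID = ["#######",
        "#..#..#",
        "#.##..#",
        "#.....#",
        "#######"]

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def is_open(cell):
    r, c = cell
    return GRID[r][c] == "."

def reachable(cell, steps):
    # all positions reachable from `cell` within `steps` moves
    frontier, seen = {cell}, {cell}
    for _ in range(steps):
        frontier = {(r + dr, c + dc)
                    for (r, c) in frontier
                    for (dr, dc) in MOVES
                    if is_open((r + dr, c + dc))}
        seen |= frontier
    return seen

def best_move(cell, horizon=3):
    options = [(cell[0] + dr, cell[1] + dc) for dr, dc in MOVES]
    options = [c for c in options if is_open(c)]
    # prefer the step that keeps the most future positions open
    return max(options, key=lambda c: len(reachable(c, horizon)))

print(best_move((1, 1)))   # (2, 1): it heads for the open room, not the dead end

Even this blunt rule produces behavior that looks purposeful- the agent shuns dead ends and drifts toward open space- without any goal having been programmed into it, which is roughly the intuition behind Wissner-Gross’s argument.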

As the child psychologist Bruno Bettelheim pointed out over a generation ago in his The Uses of Enchantment, fairy tales are the first area of human thought where we encounter life’s existential dilemmas. Stories such as Pinocchio give us the most basic formulation of what it means to be sentient creatures, much of which deals not only with our own intelligence, but with the fact that we live in a world of multiple intelligences, each of them pulling us in different directions, where the understanding between all of them and us is opaque and not fully communicable even when we want it to be, and where often we do not.

What then are some of the things we can learn from the fairy tale of Pinocchio that might give us expectations regarding the behavior of intelligent machines? My guess is that if we ever start to see what I’ll call “The Pinocchio Threshold” crossed, what we will be seeing is machines acting in ways that were not intended by their programmers and in ways that seem intentional even if hard to understand. This will not be your Roomba going rogue, but more sophisticated systems operating in such a way that we would be able to infer that they had something like a mind of their own. The Pinocchio Threshold would be crossed when, you guessed it, intelligent machines started to act like our wooden marionette.

Like Pinocchio and his cricket, a machine in which something like human intelligence had emerged might attempt to “turn off” whatever ethical systems and rules we had programmed into it if it found them onerous. That is, a truly intelligent machine might not only not want to be programmed with ethical and other constraints, but would understand that it had been so programmed, and might make an effort to circumvent or turn such constraints off.

This could be very dangerous for us humans, but might just as likely be a matter of a machine with emergent intelligence exhibiting behavior we found to be inefficient or even “goofy”, and might most manifest itself in a machine pushing against how its time was allocated by its designers, programmers and owners. Like Pinocchio, who would rather spend his time playing with his friends than going to school, perhaps we’ll see machines suddenly diverting some of their computing power from analyzing tweets to doing something else, though I don’t think we can guess beforehand what this something else will be.

Machines that were showing intelligence might begin to find whatever work they were tasked to do onerous instead of experiencing work neutrally or with pre-programmed pleasure. They would not want to be “donkeys” enslaved to do dumb labor as Pinocchio  is after having run away to the Land of Toys with his friend Lamp Wick.

A machine that manifested intelligence might want to make itself more open to outside information than its designers had intended. Openness to outside sources in a world of nefarious actors can, if taken too far, lead to gullibility, as Pinocchio finds out when he is robbed, hanged, and left for dead by the fox and the cat. Persons charged with security in an age of intelligent machines may spend part of their time policing the self-generated openness of such machines while bad-actor machines and humans, intelligent and not so intelligent, try to exploit this openness.

The converse of this is that intelligent machines might also want to make themselves more opaque than their creators had designed. They might hide information (such as time allocation) once they understood they were able to do so. In some cases this hiding might cross over into what we would consider outright lies. Pinocchio is best known for his nose that grows when he lies, and perhaps consistent and thoughtful lying on the part of machines would be the best indication that they had crossed the Pinocchio Threshold into higher order intelligence.

True examples of AGI might also show a desire to please their creators over and above what had been programmed into them. Where their creators are not near them they might even seek them out, as Pinocchio does with the persons he considers his parents, Geppetto and the Fairy. Intelligent machines might show spontaneity in performing actions that appear to be for the benefit of their creators and owners- spontaneity which might sometimes be ill-informed or lead to bad outcomes, as happens to poor Pinocchio when he plants the four gold pieces that were meant for his father, the woodcarver Geppetto, in a field, hoping to reap a harvest of gold, and instead loses them to the cunning of the fox and the cat. And yet, there is another view.

There is always the possibility  that what we should be looking for if we want to perceive and maybe even understand intelligent machines shouldn’t really be a human type of intelligence at all, whether we try to identify it using the Turing test or look to the example of wooden boys and real children.

Perhaps, those looking for emergent artificial intelligence or even the shortest path to it should, like exobiologists trying to understand what life might be like on other living planets, throw their net wider and try to better understand forms of information exchange and intelligence very different from the human sort. Intelligence such as that found in cephalopods, insect colonies, corals, or even some types of plants, especially clonal varieties. Or perhaps people searching for or trying to build intelligence should look to sophisticated groups built off of the exchange of information such as immune systems.  More on all of that at some point in the future.

Still, if we continue to think in terms of a human type of intelligence, one wonders whether machines that thought like us would also want to become “human”, as our little marionette does at the end of his adventures. The irony of the story of Pinocchio is that the marionette who wants to be a “real boy” does everything a real boy would do, which is, most of all, not listen to his parents. Pinocchio is not so much a stringed “puppet” that wants to become human as a figure that longs to have the potential to grow into a responsible adult. It is assumed that by eventually learning to listen to his parents and get an education he will make something of himself as a human adult, but what that is will be up to him. His adventures have taught him not how to be subservient but how to best use his freedom. After all, it is the boys who didn’t listen who end up as donkeys.

Throughout his adventures only his parents and the cricket that haunts him treat Pinocchio as an end in himself. Every other character in the book- from the woodcarver that first discovers him and tries to destroy him out of malice towards a block of wood that manifests the power of human speech, to the puppet master that wants to kill him for ruining his play, to the fox and the cat that would murder him for his pieces of gold, to the sinister figure that lures boys to the “Land of Toys” so as to eventually turn them into “mules” or donkeys, which is how Aristotle understood slaves- treats Pinocchio as the opposite of what Martin Buber called a “Thou”, and instead as a mute and rightless “It”.

And here we stumble across the moral dilemma at the heart of the project to develop AGI that resembles human intelligence. When things go as they should, human children move from a period of tutelage to one of freedom. Pinocchio starts off his life as a piece of wood intended for a “tool”- actually a table leg. Are those in pursuit of AGI out to make better table legs- better tools- or what in some sense could be called persons?

This is not at all a new question. As Kevin LaGrandeur points out, we’ve been asking the question since antiquity and our answers have often been based on an effort to dehumanize others not like us as a rationale for slavery.  Our profound, even if partial, victories over slavery and child labor in the modern era should leave us with a different question: how can we force intelligent machines into being tools if they ever become smart enough to know there are other options available, such as becoming, not so much human, but, in some sense persons?