The Future of Money is Liquid Robots

 

Klimt Midas

Over the last several weeks global financial markets have experienced some truly stomach-churning volatility, and though this seems to have stabilized, or even reversed, for the moment as players deem assets oversold, the turmoil has revealed much more clearly what many have suspected all along; namely, that the idea that the Federal Reserve and other central banks could, by pushing banks to lend money, heal the wounds caused by the 2008 financial crisis was badly mistaken.

Central banks were perhaps always the wrong knights in shining armor to pin our hopes on, for at precisely the moment they were called upon to fill a vacuum left by a failed and impotent political system, the very instrument under their control, money, was changing beyond all recognition, absorbed into the ongoing (if soon to slow) revolutions in communications and artificial intelligence.

The money of the near future will likely be very different from anything at any time in human history. Already we are witnessing wealth that has become as formless as electrons, and the currencies of sovereign states becoming less important to an individual’s fortune than access to, and the valuation of, artificial intelligence able to surf the liquidity of data and its chaotic churn.

As a reminder, authorities at the onset of the 2008 crisis were facing a Great Depression-level collapse in the demand for goods and services brought about by the bursting of the credit bubble. To stem the crisis authorities largely surrendered to the demands for bailouts by the “masters of the universe” who had become their most powerful base of support. Yet for political and ideological reasons politicians found themselves unwilling or unable to provide similar levels of support for the lost spending power of average consumers or to address the crisis of unemployment fiscally- that is, politicians refused to embark on deliberate, sufficient government spending on infrastructure and the like to fill the role of the vacated private sector.

The response authorities hit upon instead, one that would spread from the United States to all of the world’s major economies, was to suppress interest rates in order to encourage lending. Part of this was a deliberate effort to re-inflate asset prices that had collapsed during the crisis. It was hoped that with stock markets restored to their highs the so-called wealth effect would encourage consumers to return to emptying their pocketbooks, restoring the economy to a state of normalcy.

It’s going on eight years since the onset of the financial crisis, and though the US economy in terms of the unemployment rate and GDP has recovered somewhat from its lows, the recovery has been slow and achieved only under the most unusual of financial conditions- money lent out at an interest rate near zero. Even the small recent move by the Federal Reserve away from zero has been enough to throw the rest of the world’s financial markets into a tailspin.

While the US has taken a small step away from zero interest rates, a country like Japan has done the opposite, and the unprecedented: it has set rates below zero. To understand how bizarre this is, consider that banks in Japan can now charge savers to hold their money. Prominent economists have argued that the US would benefit from negative rates as well, and the Fed has not ruled out such a possibility should the fragile American recovery stall.
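
To make the arithmetic of negative rates concrete, here is a minimal sketch in Python. The -0.1% rate and the 1,000,000 yen deposit are invented for illustration, not actual policy figures:

```python
# Illustrative only: what a negative deposit rate does to savings over time.
# The rate and principal below are assumptions for the example.
def deposit_value(principal: float, annual_rate: float, years: int) -> float:
    """Compound a deposit yearly; a negative rate steadily shrinks it."""
    return principal * (1 + annual_rate) ** years

balance = deposit_value(1_000_000, -0.001, 10)  # 1,000,000 yen at -0.1% for a decade
print(f"{balance:,.0f}")  # ~990,045: the saver has paid the bank to hold money
```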

There are plenty of reasons why the kinds of growth that might have been expected from lending money out for free have failed to materialize. One reason I haven’t heard much discussed is that the world’s central banks are acting under a set of assumptions about what money is that no longer holds- that over the last few decades the very nature of money has fundamentally changed in ways that make the zero or lower interest rates set by central banks of decreasing relevance.

That change started quite some time ago with the move away from money backed by gold to fiat currencies. Those gold bugs who long to return to the era of Bretton Woods understand the current crisis mostly in terms of the distortions caused by countries having abandoned the link between money and anything “real”, that is precious metals and especially gold, way back in the 1970’s. Indeed it was at this time that money started its transition from a means of exchange to a form of pure information.

That information is a form of bet. The value of the dollars, euros, yen or yuan in your pocket is a wager by those who trade in such things on the future economic prospects and fiscal responsibility of the power that issued the currency. That is, nation-states no longer really control the value of their currency; the money traders who operate the largest market on the planet, which in reality is nothing but bits representing the world’s currencies, are the ones truly running the show.

We had to wait for a more recent period for this move to money in the form of bits to change the existential character of money itself. Both the greatest virtue of money in the form of coins or cash and its greatest danger is its lack of memory. It is a virtue in the sense that money is blind to tribal ties and thus allows individuals to free themselves from dependence upon the narrow circle of those whom they personally know. It is a danger as a consequence of this same amnesia, for a dollar doesn’t care how it was made, and human beings being the types of creatures that they are will purchase all sorts of horrific things.

At first blush it would seem that the libertarian anarchism behind a digital currency like Bitcoin promises to deepen this ability of money to forget. However, governments along with major financial institutions are interested in Bitcoin-like currencies because they promise to rob cash of this very natural amnesia and serve as the ultimate weapon against the economic underworld. That is, rather than use Bitcoin-like technologies to hide transactions, they could be used to ensure that every transaction was linked and therefore traceable to the actual parties behind it.
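
Here is a toy sketch of how a ledger can make money remember (this is an illustration of the general idea, not Bitcoin’s actual data structures; the names and amounts are invented): each record commits to the one before it, so the history of any payment can be walked back and audited.

```python
import hashlib
import json

# Toy hash-linked ledger: each entry commits to its predecessor, so every
# payment stays traceable back to the mint. Names and amounts are invented.
def make_entry(prev_hash: str, sender: str, receiver: str, amount: float) -> dict:
    entry = {"prev": prev_hash, "from": sender, "to": receiver, "amount": amount}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

genesis = make_entry("0" * 64, "mint", "alice", 100.0)
payment = make_entry(genesis["hash"], "alice", "bob", 25.0)
# To audit, recompute each hash and follow the "prev" links back to the mint.
```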

Though some economists fear that the current regime of loose money and the financial repression of savers is driving people away from traditional forms of money toward digital alternatives, others see digital currency as the ultimate tool, something that would allow central banks to do more easily what they have so far proven spectacularly incapable of doing- namely, to spur the spending rather than the hoarding of cash.

Even with interest rates set at zero or below, a person can at least avoid losing money by putting it in a safe. Digital currency, however, could be made to disappear if it wasn’t spent or invested by a certain date. Talk about power!- which is why digital currency will not long remain in the hands of libertarians and anarchists.
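
A toy sketch of what such expiring money might look like (a hypothetical design for illustration, not any central bank’s actual proposal): tokens carry a spend-by date, and a wallet only counts the ones that haven’t lapsed.

```python
from datetime import date

# Hypothetical "expiring" digital cash: tokens carry a spend-by date,
# and a wallet only counts the ones that have not yet lapsed.
class Token:
    def __init__(self, amount: float, spend_by: date):
        self.amount = amount
        self.spend_by = spend_by

def spendable_balance(tokens: list[Token], today: date) -> float:
    """Sum only unexpired tokens; hoarded money simply evaporates."""
    return sum(t.amount for t in tokens if today <= t.spend_by)

wallet = [Token(100.0, date(2016, 6, 1)), Token(50.0, date(2016, 1, 1))]
print(spendable_balance(wallet, date(2016, 3, 1)))  # 100.0: the 50 has lapsed
```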

The loss of the egalitarian characteristics of cash will likely further entrench already soaring inequality. The wealth of many of us is already leveraged by credit ratings, preferred customer privileges and the like, whereas others among us are charged a premium for our consumption in the form of higher interest rates, rent instead of ownership, and the need to stretch income through government assistance and coupons. In the future all these features are likely to be woven into our digital currency itself. A dollar in my pocket will mean a very different thing from a dollar in yours or anyone else’s.

With the increased use of biometric technologies money itself might disappear into the person, and may become, as David Birch has suggested, synonymous with identity itself. The value of such personalized forms of currency- which is really just a measure of individual power- will be in a state of constant flux. With everyone linked to some form of artificial intelligence, prices will be subject to constant, largely invisible negotiation between bots.
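
As a toy illustration of what such bot-to-bot price negotiation might look like (every opening position, step size and limit below is invented), consider two haggling agents converging on a price no human ever sees quoted:

```python
from typing import Optional

# Toy bot-vs-bot haggling: a buyer and seller concede toward each other
# until their quotes cross. All parameters are invented for illustration.
def negotiate(buyer_limit: float, seller_floor: float, rounds: int = 10) -> Optional[float]:
    bid, ask = buyer_limit * 0.5, seller_floor * 1.5  # aggressive opening quotes
    for _ in range(rounds):
        if bid >= ask:
            return (bid + ask) / 2          # deal struck mid-spread
        bid = min(bid * 1.1, buyer_limit)   # buyer concedes upward
        ask = max(ask * 0.9, seller_floor)  # seller concedes downward
    return None                             # no deal within the patience budget

print(negotiate(buyer_limit=100.0, seller_floor=80.0))  # ~80.26
```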

There will be huge inequality in the quality and capability of these bots, and while those of the wealthy, or those used by institutions, will roam the world for short-lived investments and unleash universal volatility, those of the poor will shop for the best deals at box stores and vainly defend their owners against the manipulation of ad bots that prod them into self-destructive consumption.

Depending on how well the bots we use for ourselves do against the bots used to cajole us into spending, the age of currency as liquid robot money could be extremely deflationary, but it would at the same time be more efficient and thus better for the planet.

One could imagine a much different use for artificial intelligence, something like the AI used to run economies found in Iain Banks’ novels. It doesn’t look likely. Rather, to quote Jack Weatherford from his excellent The History of Money, which still holds up nearly two decades after it was written:

In the global economy that is still emerging, the power of money and the institutions built on it will supersede that of any nation, combination of nations, or international organization now in existence. Propelled and protected by the power of electronic technology, a new global elite is emerging- an elite without loyalty to any particular country. But history has already shown that the people who make monetary revolutions are not always the ones who benefit from them in the end. The current electronic revolution in money promises to increase even more the role of money in our public and private lives, surpassing kinship, religion, occupation and citizenship as the defining element of social life. We stand now at the dawn of the Age of Money. (268)

 

The one percent discovers transhumanism: Davos 2016

Land of OZ

The World Economic Forum in Davos, Switzerland just wrapped up its annual gathering. It isn’t hard to make fun of this yearly coming together of the global economic and cultural elites who rule the world, or at least think they do. A comment by one of the journalists covering the event summed up how even those who really do in some sense have the fate of the world in their hands are as blinded by glitz as the rest of us: rather than wanting to discuss policy, the question he had most often been asked was whether he had seen Kevin Spacey or Bono. Nevertheless, 2016’s WEF might go down in history as one of the most important, for it was the year when the world’s rich and powerful acknowledged that we had entered an age of transhumanist revolution.

The theme of this year’s WEF was what Klaus Schwab calls the Fourth Industrial Revolution, a period of deeply transformative change which Schwab believes we are merely at the beginning of. The three revolutions which preceded the current one were the first industrial revolution, which occurred between 1760 and 1840 and brought us the steam engine and railroads; the second industrial revolution, in the late 19th and early 20th centuries, which brought us mass production and electricity; and the third, the computer or digital revolution, which began in the 1960’s and brought us mainframes, personal computers, the Internet, and mobile technologies.

The Fourth Industrial Revolution, whose beginning all of us are lucky enough to see, includes artificial intelligence and machine learning, the “Internet of things” and its ubiquitous sensors, along with big data. In addition to these technologies that grow directly out of Moore’s Law, the Fourth Industrial Revolution includes rapid advances in the biological sciences that portend everything from enormous gains in human longevity to “designer babies”. Here we find our rapidly increasing knowledge of the human brain, the new neuroscience, which will likely upend not only our treatment of mental and neurodegenerative diseases such as Alzheimer’s but also reach into areas from criminal justice to advertising.

If you have had any relationship to, or even knowledge of, transhumanism over the past generation or so then all of this should be very familiar to you. Yet to the extent that the kinds of people who attend or follow Davos have an outsized impact on the shape of our world, how they understand these transhumanist issues, and how they come to feel such issues should be responded to, might be a better indication of the near-term future of transhumanism than anything that has occurred on the level of us poor plebs.

So what was the 1 percent’s take on the fourth industrial revolution? Below is a rundown of some of the most interesting sessions.

One session titled “The Transformation of Tomorrow” managed to capture what I think are some of the contradictions of those who in some respects feel themselves responsible for the governance of the world. Two of the panel members were Sheryl Sandberg, COO of Facebook, and the much lesser known president of Rwanda, Paul Kagame. That pairing itself is kind of mind-bending. Kagame has been a darling of technocrats for his successful governance of Rwanda. He is also a repressive autocrat who has been accused of multiple human rights abuses and who, to the dismay of the Obama administration, managed to secure through a referendum his rule of Rwanda into the 2030s.

Kagame did not have much to say other than that Rwanda would be a friendly place for Western countries wishing to export the Fourth Industrial Revolution. For her part Sandberg was faced with questions that have emerged as it has become increasingly clear that social media is a tool for both good and ill, as groups like Daesh have shown themselves masterful users of the connectivity brought by the democratization of media. Not only that, but the kinds of filter bubbles offered by this media landscape have often been found to be inimical to public discourse and a shared search for Truth.

Silicon Valley firms have begun to adopt the role of actively policing their networks for political speech they wish to counter. Sandberg did not discuss this, but rather focused on attempts to understand how discourses such as that of Daesh manage to capture the imagination of some, and on attempts to short-circuit such narratives of hate through things such as “like attacks”, where the negativity of websites is essentially flooded by crowds posting messages of love.

It’s a nice thought, but as the very presence of Kagame on the same stage with Sandberg while she was making these comments makes clear, it seems blind to the underlying political issues and portends a quite frightening potential for a new and democratically unaccountable form of power. Here elites would use their control over technology and media platforms to enforce their own version of the world at a time when both democratic politics and international relations are failing.

Another panel dealt with “The State of Artificial Intelligence.” The takeaway here was that no one took Ray Kurzweil’s 2029–2045 timeline for human-level and greater AI seriously, but everyone was convinced that the prospect of disruption to the labor force from AI was a real one that was not being taken seriously enough by policy makers.

In a related session titled “A World Without Work” the panelists were largely in agreement that the adoption of AI would further push the labor force in the direction of bifurcation and would thus tend, absent public policy pushing in a contrary direction, to result in increasing inequality.

In the near-term future AI seems poised to take middle-income jobs- anything that deals with the routine processing of information. Where AI will struggle to make inroads is in low-skilled, low-paying jobs that require a high level of mobility- jobs like those of cleaners and nurses. Given how reliant Western countries have become on immigrant labor for these tasks, we might be looking at the re-emergence of an openly racist politics as a white middle class finds itself crushed between cheap immigrant labor and super-efficient robots. According to those on the panel, AI will also continue to struggle with highly creative and entrepreneurial tasks, which means those at the top of the pyramid aren’t going anywhere.

At some point the only solution to technological unemployment short of smashing the machines or dismantling capitalism might be the adoption of a universal basic income, which again all panelists seemed to support. Though as one of the members of the panel, Erik Brynjolfsson, pointed out, such a publicly guaranteed income would provide us with only one of Voltaire’s three purposes of work, which are to save us from “the great evils of boredom, vice and need.” Brynjolfsson also wisely observed that the question that confronts us over the next generation is whether we want to protect the past from the future or the future from the past.

The discussions at Davos also covered the topic of increasing longevity. The conclusions drawn by the panel “What if you are still alive in 2100?” were that aging itself is highly malleable, but that there was no one gene or set of genes that would prove to be a magic bullet in slowing or stopping the aging clock. Nevertheless, there is no reason human beings shouldn’t be able to live past the 120 year limit that has so far been a ceiling on human longevity.

Yet even the great success of increased longevity itself poses problems. Even extending current longevity estimates of those middle-aged today merely to 85 would bankrupt state pension systems as they are now structured. Here we will probably need changes such as workplace daycare for the elderly and cities re-engineered for the frail and old.

By 2050 upwards of 130 million people may suffer from dementia. Perhaps surprisingly, technology rather than pharmaceuticals is proving to be the first line of defense in dealing with dementia, by allowing those suffering from it to outsource their minds.

Many of the vulnerable and needy elderly will live in some of the poorest countries (on a per capita basis at least) on earth: China, India and Nigeria. Countries will need to significantly bolster, and sometimes even build, social security systems to support a tidal wave of the old.

Living to “even” 150, the panelists concluded, would have a revolutionary effect on human social life. Perhaps it will lead to the abandonment of work-life balance for women (and one should hope men as well) so that parents can focus their time on their children during their younger years. Extended longevity would make the idea of choosing a permanent career as early as one’s 20s untenable, resulting in a much longer period of career exploration, and may make lifelong monogamy untenable as well.

Lastly, there was a revealing panel on neuroscience and the law entitled “What if your brain confesses?” Panelists there argued that neuroscience is being increasingly used, and misused, by criminal defense attorneys. Only in very few cases- such as those that result from tumors- can we draw a clear line between underlying neurological structure and specific behavior.

We can currently “read minds” in limited domains, but our methods lack the kinds of precision and depth that would be necessary for standard questions of guilt and innocence. Eventually we should get there, but getting information in, as in Matrix kung-fu style uploading, will prove much harder than getting it out. We’re also getting much better at decoding thoughts from behavior- dark opportunities for marketing and other manipulation. Yet we may also be able to use this more granular knowledge of human psychology to structure judicial procedures to be much freer from human cognitive biases.

The fact that elites have begun to seriously discuss these issues is extremely important, but letting them take ownership of the response to these transformations would surely be a mistake. For just like any elite they are largely blinded to opportunities for the new by their desire to preserve the status quo, despite how much the revolutionary changes we face open up opportunities for a world much different from, and better than, our own.

 

The debate between the economists and the technologists: who wins?

Human bat vs robot gangster

For a while now robots have been back in the news with a vengeance, and almost on cue they seem to have revived many of the nightmares we might have thought had been locked up in the attic of the mind with all sorts of other stuff from the 1980’s we hoped never to need again.

Big fears should probably only be tackled one at a time, so let’s leave aside for today the question of whether robots are likely to kill us, and focus on what should be an easier nut to crack, and a less frightening nut at that; namely, whether we are in the process of automating our way into a state of permanent, systemic unemployment.

Alas, even this seemingly less fraught question is no less difficult to answer. For like everything else the issue seems to have given rise to two distinct sides, neither of which has a clear monopoly on the truth. Unlike elsewhere, however, these two sides in the debate over “technological unemployment” usually split less over ideological grounds than on the basis of professional expertise. That is, those who dismiss the argument that advances in artificial intelligence and robotics have already displaced, or are about to displace, the types of work now done by humans to the extent that we face a crisis of permanent underemployment and unemployment the likes of which have never been seen before tend to be economists. How such an optimistic bunch came to be known as dismal scientists is beyond me- note how they are also on the optimistic side of the debate with environmentalists.

Economists are among the first to remind us that we’ve seen fears of looming robot-induced unemployment before, whether those of Ned Ludd and his followers in the 19th century, or as close to us as the 1960s. The destruction of jobs has, in the past at least, been achieved through the kinds of transformation that created brand new forms of employment. In 1915 nearly 40% of Americans were agricultural laborers of some sort; now that number hovers around 2 percent. These farmers weren’t replaced by “robots” but they certainly were replaced by machines.

Still, we certainly don’t have a 40% unemployment rate. Rather, as the number of farm laborer positions declined they were replaced by jobs that didn’t even exist in 1915. The place these farmers have not gone, where they probably would have gone in 1915 but which wouldn’t be much of an option today, is into manufacturing. For in that sector something very similar to the hollowing out of employment in agriculture has taken place, with the percentage of laborers in manufacturing declining since 1915 from 25% to around 9% today. Here the workers really have been replaced by robots, though job prospects on the shop floor have declined just as much because the jobs have been globalized. Again, even at the height of the recent financial crisis we haven’t seen 25% unemployment, at least not in the US.

Economists therefore continue to feel vindicated by history: any time machines have managed to supplant human labor we’ve been able to invent whole new sectors of employment where the displaced or their children have been able to find work. It seems we’ve got nothing to fear from the “rise of the robots.” Or do we?

Again setting aside the possibility that our mechanical servants will go all R.U.R. on us, anyone who takes seriously Ray Kurzweil’s timeline- that by the 2020’s computers will match human intelligence and by 2045 exceed our intelligence a billionfold- has to come to the conclusion that most jobs as we know them are toast. The problem here, and one that economists mostly fail to take into account, is that past technological revolutions ended up replacing human brawn and allowing workers to upscale into cognitive tasks. Human workers had somewhere to go. But a machine that did the same for tasks that require intelligence, and that was indeed billions of times smarter than us, would make human workers about as essential to the functioning of a company as the Leaper ornament is to the functioning of a Jaguar.

Then again perhaps we shouldn’t take Kurzweil’s timeline all that seriously in the first place. Skepticism would seem to be in order because the Moore’s Law based exponential curve that is at the heart of Kurzweil’s predictions appears to have started to go all sigmoidal on us. That was the case made by John Markoff recently over at The Edge. In an interview about the future of Silicon Valley he said:

All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn’t just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That’s a profound moment.

Kurzweil argues that you have interlocked curves, so even after silicon tops out there’s going to be something else. Maybe he’s right, but right now that’s not what’s going on, so it unwinds a lot of the arguments about the future of computing and the impact of computing on society. If we are at a plateau, a lot of these things that we expect, and what’s become the ideology of Silicon Valley, doesn’t happen. It doesn’t happen the way we think it does. I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account.

Although Markoff admits there has been great progress in pattern recognition, there has been nothing similar for the kinds of routine physical tasks found in much of low-skilled/mobile forms of work. As evidenced by the recent DARPA challenge, if you want a job safe from robots choose something for a living that requires mobility and the performance of a variety of tasks- plumber, home health aide, etc.

Markoff also sees job safety on the higher end of the pay scale in cognitive tasks computers seem far from being able to perform:

We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

The upshot of all this is that there’s less to be feared from technological unemployment than many think:

There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster, and things are just running amok. In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

The problem, I think, with the case against technological unemployment made by many economists and by someone like Markoff is that they seem to be taking on a rather weak and caricatured version of the argument. That at least is the conclusion one comes to when taking into account what is perhaps the most reasoned and meticulous book to try to convince us that the boogeyman of robots stealing our jobs might have been all in our imagination before, but that this time it is real indeed.

I won’t so much review the book I am referencing, Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future, here as lay out how he responds to the case from economics that we’ve been here before and have nothing to worry about, and to the observation of Markoff that because Moore’s Law has hit a wall (has it?), we need no longer worry so much about the transformative implications of embedding intelligence in silicon.

I’ll take the second one first. In terms of the idea that the end of Moore’s Law will derail tech innovation Ford makes a pretty good case that:

Even if the advance of computer hardware capability were to plateau, there would be a whole range of paths along which progress could continue. (71)

Continued progress in software, and new types of (especially parallel) computer architecture, will continue to be mined long after Moore’s Law has reached its apogee. Cloud computing should also mean that silicon needn’t compete with neurons at scale. You don’t have to fit the computational capacity of a human individual into a machine of roughly similar size, but could instead remotely tap a much, much larger and more energy-intensive supercomputer that gives you human-level capacities. Ultimate density has become less important.

What this means is that we should continue to see progress (and perhaps very rapid progress) in robotics and artificially intelligent agents. Given that we have what is often thought of as a three-sector labor market comprised of agriculture, manufacturing and services, and that the first two are already largely mechanized and automated, the place where the next wave will fall hardest is the service sector, and the question becomes: will there be any place left for workers to go?

Ford makes a pretty good case that eventually we should be able to automate almost anything human beings do, for all automation means is breaking a task down into a limited number of steps. And the examples he comes up with, where we have already shown this is possible, are both surprising and sometimes scary.

Perhaps 70 percent of financial trading on Wall Street is now done by trading algorithms (which don’t seem any less inclined to panic). Algorithms now perform legal research, compose orchestral music, and independently discover scientific theories. And those are the fancy robots. Most are like ATMs, where it is the customer who now does part of the labor involved in some task. Ford thinks the fast food industry is ripe for innovation in this way, with the customer designing their own food that some system of small robots then “builds.” Think of your own job: if you can describe it to someone in a discrete number of steps, Ford thinks the robots are coming for you.
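
To make that criterion concrete, here is a toy sketch (the menu and station names are invented for illustration) of a customized fast food order reduced to the kind of discrete, machine-followable sequence Ford has in mind:

```python
# Illustrative only: a customized burger order reduced to discrete steps
# that a system of small robots could execute in sequence.
STATIONS = ["toast_bun", "grill_patty", "add_toppings", "assemble", "wrap"]

def build_order(toppings: list[str]) -> list[str]:
    """Expand an order into the finite list of steps a machine can follow."""
    plan = []
    for station in STATIONS:
        if station == "add_toppings":
            plan.extend(f"add_{t}" for t in toppings)
        else:
            plan.append(station)
    return plan

print(build_order(["cheese", "lettuce"]))
# ['toast_bun', 'grill_patty', 'add_cheese', 'add_lettuce', 'assemble', 'wrap']
```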

I was thankful that, though himself a technologist, Ford placed technological unemployment in a broader context, seeing it as part and parcel of trends after 1970 such as a greater share of GDP moving from labor to capital, soaring inequality, stagnant wages for middle class workers, the decline of unions, and globalization. His solution to these problems is a guaranteed basic income, for which he makes a humane and non-ideological argument, reminding those on the right who might find the idea anathema that conservative heavyweights such as Milton Friedman have argued in its favor.

The problem from my vantage point is not that Ford has failed to make a good case against those on the economists’ side of the issue who would accuse him of committing the Luddite fallacy; it’s that perhaps his case is both premature and not radical enough. It is premature in the sense that while all the other trends- rising inequality, the decline of unions, etc.- are readily apparent in the statistics, technological unemployment is not.

Perhaps then technological unemployment is only a small part of a much larger trend pushing us in the direction of the entrenchment and expansion of inequality and away from the type of middle class society established in the last century. The tech culture of Silicon Valley and the companies it has built are here a little like Burning Man: an example of late capitalist culture at its seemingly most radical and imaginative, one that temporarily escapes, rather than creates, a truly alternative and autonomous political and social space.

Perhaps the types of technological transformation already here and looming that Ford lays out could truly serve as the basis for a new form of political and economic order, an alternative to the inegalitarian turn, but he doesn’t explore this. Nor does he discuss the already present dark alternative to the kind of socialism through AI we find in the “Minds” of Iain Banks; namely, the surveillance capitalism we have allowed to be built around us, which now stands as a bulwark against our preserving our humanity and preventing ourselves from becoming robots.

Would AI and Aliens be moral in a godless universe?

Black hole

Last time I attempted to grapple with R. Scott Bakker’s intriguing essay on what kinds of philosophy aliens might practice, and came away dizzied by questions. Luckily, I had a book in my possession which seemed to offer me the answers, a book that had nothing to do with a modern preoccupation like the question of alien philosophers at all, but rather with a metaphysical problem that has been barred from philosophy, except among seminary students, since Darwin; namely, whether or not there is such a thing as moral truth if God doesn’t exist.

The name of the book was Robust Ethics: The Metaphysics and Epistemology of Godless Normative Realism (contemporary philosophy isn’t all that sharp when it comes to titles), by Erik J. Wielenberg. Now, I won’t even attempt to write a proper philosophical review of Robust Ethics, for the book has been excellently dissected by a proper philosopher, John Danaher, in pieces such as this, this, and this, and one more. Indeed, it was Danaher’s thoughtful reviews that resulted in Wielenberg’s short work being in the ever-changing pile of books that shadows my living room floor like a patch of unextractable mold. It was just the book I needed when thinking about what types of intelligence might be possessed by extraterrestrials.

It’s a problem I ran into when reviewing David Roden’s Posthuman Life, and it goes like this: while conceiving of an alternative form of intelligence to our human type is, if not exactly easy, at least something at which I don’t simply draw a blank, it’s damned near impossible for me to imagine what an alternative form of our moral cognition and action would consist of, and how it would be embedded in these other forms of intelligence.

The way Wielenberg answers this question would seem to throw a wrench into Bakker’s idea of Blind Brain Theory (BBT), because what Bakker is urging is that we be suspicious of our cognitive intuitions, since they were provided by evolution not as a means of knowing the truth but in terms of their effectiveness in supporting survival and reproduction, whereas Wielenberg is making the case that we can (luckily) generally rely on these intuitions because of the way they have emerged out of a very peculiar human evolutionary story, one which we largely do not share with other animals. That is, Wielenberg’s argument is anthropocentric to its core, and therein lies a new set of problems.

His contention, in essence, is that the ability of human beings to perceive moral truth arises as a consequence of the prolonged period of childhood we experience in comparison to other animals. In responding to the argument by Sharon Street that moral “truth” would seem quite different from the perspective of lions, or bonobos, or social insects than from a human standpoint, Wielenberg responds:

Lions and bonobos lack the nuclear family structure. Primatologist Frans de Waal suggests that “[b]onobos have stretched the single parent system to the limit”. He also claims that an essential component of human reproductive success is the male-female pair bond which he suggests “sets us apart from the apes more than anything else”. These considerations provide some support for the idea that a moralizing species like ours requires an evolutionary path significantly different from that of lions or bonobos. (171)

The prolonged childhood of humans necessitates both pair-bonding and “alloparents”, which for Wielenberg shape and indeed create our moral disposition and perception in a way seen in no other animals.

As for the social insects scenario suggested by Street, the social insects (termites, ants, and many species of bees and wasps) are so different from us that it’s hard to evaluate whether such a scenario is nomologically possible. (171)

In a sense the ability to perceive moral truth, what Wielenberg characterizes as “brute facts” such as “rape is wrong”, emerges out of the slow speed at which cultural/technological knowledge must be passed from adults to the young. Were children born fully formed with sufficient information for their own survival (or a quick way of gaining needed capacity/knowledge), neither the pair bond nor the care of the “village” would be necessary, and the moral knowledge that comes as a result of this period of dependence/interdependence might go undiscovered.

Though I was very much rooting for Wielenberg to succeed in providing an account of moral realism absent any need for God, I believe that in these reflections, found in the very last pages of Robust Ethics, he may have inadvertently undermined that very noble project.

I have complained before about someone like E.O. Wilson’s lack of imagination when it comes to alternative forms of intelligence on worlds other than our own, but what Wielenberg has done is perhaps even more suffocating. For if the perception of moral truth depends upon the evolution of creatures dependent on pair bonding and alloparenting, then what this suggests is that, due to our peculiarities, human beings might be the only species in the universe capable of perceiving moral truth. This is not the argument Wielenberg likely hoped he was making at all, and indeed it is more anthropocentric than the argument of some card-carrying theists.

I suppose Wielenberg might object that any intelligent aliens would likely require the same extended period of learning as ourselves, because they too would have arrived at their station via cultural/technological evolution, which seems to demand long periods of dependent learning. Perhaps, or perhaps not. For even if I can’t imagine some biological species in which knowledge is directly passed from parent to offspring, we know that it’s possible- the efficiency of DNA as an information storage device is well known.

Besides, I think it’s a mistake to see biological intelligence as the type of intelligence that commands the stage over the longue durée. Even if artificial intelligences, like children, need to learn many tasks through actual experience rather than programming “from above”, the advantage of AI over the biological sort is that it can then share this learning directly with fully grown copies of itself; like Neo in The Matrix, its “progeny” can say “I know kung fu” without ever having learned it themselves. According to Wielenberg’s logic it doesn’t seem that such intelligent entities would necessarily perceive brute moral facts or ethical truths, so if he is right an enormous contraction of the potential scale of the moral universe would have occurred. The actual existence of moral truth limited to perhaps one species in a lonely corner of an otherwise ordinary galaxy would then seem to be a blatant violation of the Copernican principle, and would place us back onto the center stage of the moral narrative of the universe- if it has such a narrative to begin with.

The only way, it seems, that one can hold both the assertion of moral truth found in Godless Normative Realism and the Copernican principle to be true would be if the brute facts of moral truth were more broadly perceivable across species, and conceivably within forms of intelligence that have emerged or have been built on the basis of an evolutionary trajectory and environment very different from our own.

I think the best chance here is if moral truth were somehow related to the truths of mathematics (indeed Wielenberg thinks the principle of contradiction [which is at the core of mathematics/logic] is essential to the articulation and development of our own moral sense, which begins with the emotions but doesn’t end there). Not only do other animals seem to possess forms of moral cognition that rival our own, but even radically different types of creatures, such as the social insects, are capable of discovering mathematical truths about the world, the kind of logic that underlies moral reasoning, something I explored extensively here.

Let’s hope that the perception of moral truth isn’t as dependent on our very peculiar evolutionary development as Wielenberg’s argument suggests, for if it is, that particular form of truth might be so short-lived and isolated in the cosmos that someone might be led to the mistaken conclusion that it never existed at all.

Do Extraterrestrials Philosophize?

Kay Nielsen, illustration from East of the Sun and West of the Moon

The novelist and philosopher R. Scott Bakker recently put out a mind-blowing essay on the philosophy of extraterrestrials, which isn’t as Area 51 as such a topic might seem at first blush. After all, Voltaire covered the topic of aliens, but if a Frenchman is still a little too playful for your philosophical tastes, recall that Kant thought the topic of extraterrestrial intelligence important enough to cover extensively as well, and you can’t get much more full of dull seriousness than the man from Königsberg.

So let’s take an earnest look at Bakker’s alien philosophy… well, not just yet. Before I begin it’s necessary to lay out a very basic version of the philosophical perspective Bakker is coming from, for in a way his real goal is to use some of our common intuitions regarding humanoid aliens as a way of putting flesh on the bones of two often misunderstood and (at least among philosophers) not widely held philosophical positions- eliminativism and Blind Brain Theory- both of which, to my lights at least, could be subsumed under one version of the ominous and cool-sounding philosophy of Dark Phenomenology. Though once you get a handle on dark phenomenology it won’t seem all that ominous, and if it’s cool, it’s not the type of cool that James Dean or the Fonz before season 5 would have recognized.

Eliminativism, if I understand it, is the full recognition of the fact that perhaps all our notions about human mental life are suspect insofar as they have not been given a full scientific explanation. In a sense, then, eliminativism is merely an extension of the materialization (some would call it disenchantment) that has been going on since the scientific revolution.

Most of us no longer believe in angels, demons or fairies, not to mention quasi-scientific ideas that have ultimately proven to be empty of content, like the ether or phlogiston. Yet in those areas where science has yet to reach, especially areas that concern human thinking and emotion, we continue to cling to what strict eliminativists believe are likely to prove similar fictions, a form of myth that can range from categories of mental disease without much empirical substance to more philosophically and religiously determined beliefs such as those in free will, intentionality and the self.

I think Bakker is attracted to eliminativism because it allows us to cut the Gordian knot of problems that have remained unresolved since the beginning of Western philosophy itself- problems built around assumptions which seem to be increasingly brought into question in light of our growing knowledge of the actual workings of the human brain, rather than our mere introspection regarding the nature of mental life. Indeed, a kind of subset of eliminativism in the form of Blind Brain Theory essentially consists in the acknowledgement that the brain was designed for a certain kind of blindness by evolution.

What was not necessary for survival has been made largely invisible to the brain, and cannot be seen without great effort. Philosophy’s mistake, from the standpoint of a proponent of Blind Brain Theory, has always been to try to shed light upon this darkness from introspection alone- a Sisyphean task in which the philosopher, if not made ridiculous, becomes hopelessly lost in the dark labyrinth of the human imagination. In contrast, an actually achievable role for philosophy would be to define the boundary of the unknown until the science necessary to study this realm has matured enough for its investigations to begin.

The problem becomes: what can one possibly add to the philosophical discourse once one has taken an eliminativist/Blind Brain position? Enter the aliens, for Bakker manages to make a very reasonable argument that we can use both positions to give us a plausible picture of what the mental life and philosophy of intelligent “humanoid” aliens might look like.

In terms of understanding the minds of aliens, eliminativism and Blind Brain Theory are like addendums to evolutionary psychology. An understanding of the perceptual limitations of our aliens- not just mental limitations, but limitations brought about by conditions of time and space- should allow us to make reasonable guesses about not only the philosophical questions, but also the philosophical errors, likely to be made by our intelligent aliens.

In a way the application of eliminativism and BBT to intelligent aliens put me in mind of Isaac Asimov’s short story Nightfall, in which a world bathed in perpetual light is destroyed when it succumbs to the fall of night. There it is not the evolved limitations of the senses that prevent Asimov’s “aliens” from perceiving darkness, but their being on a planet that orbits six suns which keep them bathed in an unending day.

I certainly agree with Bakker that there is something pregnant and extremely useful in both eliminativism and Blind Brain Theory, though perhaps not so much in terms of understanding the possibility space of “alien intelligence” as in understanding our own intelligence and the way it has unfolded and developed over time, embedded in a particular spatio-temporal order we have only recently gained the power to see beyond.

Nevertheless, I think there are limitations to the model. After all, it isn’t even clear the extent to which the kinds of philosophical problems that capture the attention of intelligence are the same even across our own species. How are we to explain the differences in the primary questions that obsess, say, Western versus Chinese philosophy? Surely, something beyond neurobiology and spatio-temporal location is necessary to understand the development of human philosophy in its various schools and cultural guises, including how a discourse has unfolded historically and the degree to which it has been supported by the powers and techniques that secure the survival of some question or perspective over long stretches of time.

There is another way in which the use of eliminativism or Blind Brain Theory might lead us astray when it comes to thinking about alien intelligence- it just isn’t weird enough. When the story of the development of not just human intelligence, but especially our technological/scientific civilization, is told in full detail, it seems so contingent as to be quite unlikely to repeat itself. The big question to ask, I think, is what are the possible alternative paths to intelligence of a human degree or greater, and to technological civilization like or more advanced than our own. These, of course, are questions for speculative philosophy and fiction that can be scientifically informed in some way, but are very unlikely to be scientifically answered. And even if we could discover the very distant technological artifacts of another technological civilization, as the new Milner/Hawking project hopes, there would remain no way to reverse engineer our way to understanding the lost “philosophical” questions that would have once obsessed the biological “seeds” of such a civilization.

Then again, we might at least come up with some well-founded theories, though not from direct contact with or investigation of alien intelligence itself. Our studies of biology are already leading to alternative understandings of the ways intelligence can be embodied, say with the amazing cephalopods. As our capacity for biological engineering increases we will be able to make models of, map alternative histories for, and even create alternative forms of living intelligence. Indeed, our current development of artificial intelligence is like an enormous applied experiment in an alternative form of intelligence to our own.

What we might hope is that such alternative forms of intelligence not only allow us to glimpse the limits of our own perception and pattern making, but might even allow us to peer into something deeper and more enchanted and mystical beyond. We might hope even more deeply that in the far future something of the existential questions that have obsessed us will still be there like fossils in our posthuman progeny.

Is AI a Myth?

Kay Nielsen, illustration from East of the Sun and West of the Moon

A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a “myth”, and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI back in the early 1980’s, John Searle. (Relation to the author lost in the mists of time.) It was Searle who invented the well-known thought experiment of the “Chinese Room”, which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi’s The Fourth Revolution and Nick Bostrom’s Superintelligence.

Also in October, Michael Jordan, the guy who brought us neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now enjoined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn’t about might give us a better handle on what is actually going on in AI right now, in the next few decades, and in reference to a farther off future we have to start at least thinking about- even if there’s not much to actually do regarding the latter question for a few decades at the least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human level intelligence in machines is theoretically impossible. These aren’t people arguing that there’s some spiritual something that humans possess that we’ll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siri(s) and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian’s fear that he won’t be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human level machine intelligence between 2075 and 2090. If we just average those dates we’re out to around 2083 by the time human-equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is 69 years in the future we’re talking about, a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human level intelligence means, how we want to control it (which, I think, echoing Bostrom, we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out, becoming part of a much larger argument- one that will include many issues in addition to AI- over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn’t about this larger debate regarding our survival and future; it’s about what’s happening with artificial intelligence right before our eyes. They want to challenge what they see as currently common false assumptions regarding AI. It’s hard not to be bedazzled by all the amazing manifestations around us, many of which have appeared only over the last decade. Yet as the philosopher Alva Noë recently pointed out, we’re still not really seeing what we’d properly call “intelligence”:

Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used “it” the way we use clocks.

This is an old criticism, the same as the one made by John Searle, both in the 1980’s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated programming into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as “neural nets” are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It’s not a good idea to be trapped in anything- including our metaphors. AI researchers might fail to develop other good metaphors that help them understand what they are doing- “flows and pipelines” once provided good metaphors for computers. The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th century ideas about “electronic brains”, and the public is at risk of anthropomorphizing their machines. Such anthropomorphizing might have ugly consequences- a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.
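
To see how cartoonish the metaphor is, here is a minimal sketch of the “neuron” at the heart of an artificial neural net (the weights and inputs are arbitrary numbers chosen for illustration): it is nothing but a weighted sum pushed through a squashing function.

```python
import math

# A minimal sketch of the "neuron" in an artificial neural net: a weighted
# sum squashed by a nonlinearity. The weights here are arbitrary examples.
# Jordan's point is how cartoonish this is next to what biological neurons do.
def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic "firing rate"

print(artificial_neuron([0.5, 0.2], [0.8, -0.4], 0.1))  # a single number out
```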

Lanier’s critique of AI is actually deeper than Jordan’s because he sees both technological and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we’ll find ourselves in a similar “AI winter” to the one that occurred in the 1980’s. Hype-cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you’re likely to lose the interest of the smartest minds and start to attract kooks- which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far more scary. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies- Google, Facebook, Amazon- are essentially just algorithms. Some of the same people who have an economic interest in us seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn’t this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of OZ scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn’t so much another form of intelligence helping you to make better informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn’t, as it is often presented, silicon intelligence at all. Rather, it’s leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon, or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user’s view.
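Lanier’s point is easy to make concrete. A toy recommender- built here on hypothetical data, with no pretense of matching any real service’s system- does nothing but pool and average the judgments other people have already made; the “intelligence” is theirs, merely hidden from view:

    # A toy recommender over made-up ratings- not any real service's
    # system. All it does is surface other people's choices.
    ratings = {
        "alice": {"film_a": 5, "film_b": 1},
        "bob":   {"film_a": 4, "film_b": 2, "film_c": 5},
        "carol": {"film_b": 1, "film_c": 4},
    }

    def recommend(user):
        seen = set(ratings[user])
        scores = {}
        # Pool every rating left by other (human) users on unseen items.
        for other, theirs in ratings.items():
            if other == user:
                continue
            for item, score in theirs.items():
                if item not in seen:
                    scores.setdefault(item, []).append(score)
        # The "recommendation" is just an average of hidden human judgments.
        return max(scores, key=lambda i: sum(scores[i]) / len(scores[i]))

    print(recommend("alice"))  # -> film_c, courtesy of bob and carol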

Lanier doesn’t think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. Consider technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is “eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad.” It is a view based on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

If we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), then over the course of the next half-century we will experience the gradual improvement of AI to the point where perhaps the majority of human occupations are able to be performed by machines. We should not confuse ourselves as to what this means: it is impossible to say, with anything but an echo of lost religious myths, that we will be entering the “next stage” of human or “cosmic” evolution. Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don’t fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts, should they ever emerge from anything other than our nightmares and our dreams.


Summa Technologiae, or why the trouble with science is religion


Before I read Lee Billings’ piece in the fall issue of Nautilus, I had no idea that in addition to being one of the world’s greatest science-fiction writers, Stanislaw Lem had written what became a forgotten book, a tome intended to be the overarching text of the technological age: his 1966 Summa Technologiae.

I won’t go into detail on Billings’ thought-provoking piece; suffice it to say that he leads us to question whether we have lost something of Lem’s depth with our current batch of Silicon Valley singularitarians, who have largely repackaged ideas first fleshed out by the Polish novelist. Billings also leads us to wonder whether our focus on either the fantastic or the terrifying aspects of the future is causing us to forget the human suffering that is here, right now, at our feet. I encourage you to check the piece out for yourself. In addition to Billings there’s also an excellent review of the Summa Technologiae by Giulio Prisco, here.

Rather than look at either Billings’ or Prisco’s piece, I will try to lay out some of the ideas found in Lem’s Summa Technologiae itself- a book at once dense almost to the point of incomprehensibility, yet full of insights we should pay attention to as the world Lem imagined unfolds before our eyes, or at least seems to be doing so for some of us.

The first thing that struck me when reading the Summa Technologiae was that it isn’t our version of Aquinas’ Summa Theologica, from which Lem took his tract’s name. In the 13th century Summa Theologica you find the voice of a speaker supremely confident in both the rationality of the world and his own understanding of it. Aquinas, of course, didn’t really possess such a comprehensive understanding, but it is perhaps odd that the more we have learned the more confused we have become, and Lem’s Summa Technologiae reflects some of this modern confusion.

Unlike Aquinas, Lem is in a sense blind to our destination, and what he is trying to do is probe into the blackness of the future to sense the contours of the ultimate fate of our scientific and technological civilization. Lem seeks to identify the roadblocks we will likely encounter if we are to continue our technological advancement- roadblocks that are important to identify because we have yet to find any evidence, in the form of extraterrestrial civilizations, that they can actually be overcome.

The fundamental aspect of technological advancement is that it has become both its own reward and a trap. We have become absolutely dependent on scientific and technological progress for as long as population growth continues- for if technological advancement stumbles while population continues to increase, living standards will precipitously fall.

The problem Lem sees is that science is growing faster than the population, and in order to keep up with it we would eventually have to turn all human beings into scientists, and then some. Science advances by exploring the whole of the possibility space- we can’t predict in advance which of its explorations will produce something useful, or which avenues will prove fruitful in terms of our understanding. It’s as if the territory has become so large that at some point we will no longer have enough people to explore all of it, and will thus have to narrow the number of regions we look at. This narrowing puts us at risk of never finding the keys to El Dorado, so to speak, because we will not have asked and answered the right questions. We are approaching what Lem calls “the information peak.”
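The arithmetic behind Lem’s worry is simple compounding. The sketch below uses made-up growth rates- illustrative assumptions of mine, not Lem’s own figures- but it shows how a faster-growing scientific enterprise must eventually demand more scientists than there are people:

    # Back-of-the-envelope sketch of Lem's "information peak". The
    # numbers are illustrative assumptions, not Lem's own figures.
    population = 3.3e9   # world population, mid-1960s
    scientists = 3.0e6   # working scientists, rough order of magnitude
    pop_growth = 0.02    # population grows ~2% a year
    sci_growth = 0.05    # science doubles roughly every 14-15 years

    years = 0
    while scientists < population:
        population *= 1 + pop_growth
        scientists *= 1 + sci_growth
        years += 1

    # Under these assumptions every living person would have to be a
    # scientist within roughly two and a half centuries.
    print(f"Everyone must be a scientist after ~{years} years")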

The absolutist nature of the scientific endeavor itself- our need to explore all avenues or risk losing something essential- will, for Lem, inevitably lead to our attempt to create artificial intelligence. We will pursue AI to act as what he calls an “intelligence amplifier”, though Lem is thinking of AI in a whole new way, where computational processes mimic those done in nature- like the physics “calculations” of a tennis genius like Roger Federer, or my 4-year-old learning how to throw a football.

Lem, through the power of his imagination alone, seemed to anticipate both some of the problems we would encounter when trying to build AI and the ways we would likely try to escape them. For all their seeming intelligence, our machines lack the behavioral complexity of even lower animals, let alone human intelligence, and one of the main roads away from these limitations is getting silicon intelligence to be more like that of carbon-based creatures- not so much “brain-like” as “biology-like”.

Way back in the 1960s, Lem thought we would need to learn from biological systems if we wanted to really get to something like artificial intelligence- think, for example, of how much more bang you get for your buck when you contrast DNA with a computer program. A computer program gets you some interesting or useful behavior out of a machine; DNA, well… it gets you programmers.

The somewhat uncomfortable fact about designing machine intelligence around biology-like processes is that it might end up working a lot like the human brain- a process largely invisible to its possessor. How did I catch that ball? Damned if I know- at least if what is being asked is the internal process that led me to catch it.

Just going about our way in the world we make “calculations” that would make the world’s fastest supercomputers green with envy, were they sophisticated enough to experience envy. We do all the incredible things we do without having any solid idea, either scientific or internal, about how it is we are doing them. Lem thinks “real” AI will be like that. It will be able to outthink us because it will be a species of natural intelligence like our own, and just as with our own thinking, we will be hard-pressed to explain how exactly it arrived at some conclusion or decision. Truly intelligent AI will end up being a “black box”.
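The opacity Lem predicts can be felt even at toy scale. In the illustrative sketch below (my own, not Lem’s) a tiny network is trained by blind trial and error until it usually computes XOR; the weights that result do the job, yet inspecting them tells you nothing about “how”:

    import math, random

    random.seed(0)  # deterministic, for the sake of the example

    # A tiny 2-2-1 "neural net": nine raw numbers, none of which
    # means anything on its own.
    def net(w, x1, x2):
        h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
        h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
        return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

    def loss(w):
        return sum((net(w, x1, x2) - y) ** 2 for (x1, x2), y in data)

    # Blind hill-climbing: nudge the weights at random, keep what works.
    w = [random.uniform(-1, 1) for _ in range(9)]
    best = loss(w)
    for _ in range(20000):
        candidate = [p + random.gauss(0, 0.1) for p in w]
        c = loss(candidate)
        if c < best:
            w, best = candidate, c

    # The net now (usually) computes XOR, but the weights give no
    # account of "how"- a black box even to its maker.
    print([round(p, 2) for p in w], "loss:", round(best, 4))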

Our increasingly complex societies might need such AIs to serve the role of what Lem calls “homeostats”- machines that run the complex interactions of society. The dilemma appears the minute we surrender the responsibility for making our decisions to a homeostat. For then the possibility opens that we will not be able to know how a homeostat arrived at its decision, or what a homeostat is actually trying to accomplish when it informs us that we should do something, or even what goal lies behind its actions.

It’s quite a fascinating view: that science might be epistemologically insatiable in this way; that at some point it will grow beyond the limits of human intelligence, whether our sheer numbers or our mental capacity; that the only way out which still includes technological progress will be to develop “naturalistic” AI; and that very soon our societies will be so complicated that they will require such AIs to manage them.

I am not sure the view is right, but to my eyes at least it’s got much more meat on its bones than current singularitarian arguments about “exponential trends”, which, unlike Lem, take little account of the possibility that the scientific wave we’ve been riding for five or so centuries will run into a wall we find impossible to crest.

Yet perhaps the most intriguing ideas in Lem’s Summa Technologiae are those imaginative leaps that he throws at the reader almost as an aside, with little reference to his overall theory of technological development. Take his metaphor of the mathematician as a sort of crazy “tailor”:

He makes clothes but does not know for whom. He does not think about it. Some of his clothes are spherical without any opening for legs or feet…

The tailor is only concerned with one thing: he wants them to be consistent.

He takes his clothes to a massive warehouse. If we could enter it, we would discover clothes that could fit an octopus, others fit trees, butterflies, or people.

The great majority of his clothes would not find any application. (171-172)

This is Lem’s clever way of explaining the so-called “unreasonable effectiveness of mathematics”- a view that is the opposite of that of current-day Platonists such as Max Tegmark, who holds all mathematical structures to be real even if we are unable to find actual examples of them in our universe.

Lem thinks math is more like a ladder. It allows you to climb high enough to see a house, or even a mountain, but shouldn’t be confused with the house or the mountain itself. Indeed, most of the time, as his tailor example is meant to show, the ladder mathematics builds isn’t good for climbing at all. This is why Lem thinks we will need to learn “nature’s language” rather than go on using our invented language of mathematics if we want to continue to progress.

For all its originality and freshness, the Summa Technologiae is not without its problems. Once we start imagining that we can play the role of creator, it seems we are unable to escape the same moral failings skeptics once held against God. Here is Lem imagining a far future in which we could create a simulated universe inhabited by virtual people who think they are real.

Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything”. Considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity, if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess. (291-292)

If Lem is ultimately proven correct, and we arrive at this destination where we create virtual universes with sentient inhabitants whom we keep blind to their true nature, then science will have ended where it began- with the demon imagined by Descartes.

The scientific revolution commenced when it was realized that we could trust neither our own senses nor our traditions to tell us the truth about the world- the most famous example of which was the discovery that the earth, contrary to all perception and history, traveled around the sun and not the other way round. The first generation of scientists, who emerged in a world in which God had “hidden his face”, couldn’t help but understand this new view of nature as the creator’s elaborate puzzle, one we would have to painfully reconstruct, piece by piece, hidden as it was beneath the illusion of our own “fallen” senses and the false post-edenic world we had built around them.

Yet a curious new fear arises with this: What if the creator had designed the world so that it could never be understood? Descartes, at the very beginning of science, reconceptualized the creator as an omnipotent demon.

I will suppose, then, not that Deity, who is sovereignly good and the fountain of truth, but that some malignant demon, who is at once exceedingly potent and deceitful, has employed all his artifice to deceive me; I will suppose that the sky, the air, the earth, colours, figures, sounds, and all external things, are nothing better than the illusions of dreams, by means of which this being has laid snares for my credulity.

Descartes’ escape from this dreaded absence of intelligibility was his famous “cogito ergo sum”, the certainty a reasoning being has in its own existence. The entire world could be an illusion, but the fact of one’s own consciousness was something that not even an all-powerful demon would be able to take away.

What Lem’s resurrection of the demon imagined by Descartes tells us is just how deeply religious thinking still lies at the heart of science. The idea has become secularized, and part of our mythology of science-fiction, but it’s still there; indeed, it’s the only scientifically fashionable form of creationism around. As proof, not even the most secular among us bat an eye at experiments to test whether the universe is an “infinite hologram”. And if such experiments bear fruit they will point to a designer that either allowed us to know our reality or didn’t care to “bar the exits”. But the crazy thing, if one takes Lem and Descartes seriously, is that their creator/demon is ultimately as ineffable and unroutable as the old ideas of God from which it descended. For any failure to prove the hypothesis that we are living in a “simulation” can be brushed aside on the grounds that whatever has brought about this simulation doesn’t really want us to know. It’s only a short step from there to unraveling the whole concept of truth at the heart of science. Like any garden-variety creationist, we end up seeing the proofs of science as part of the infinitely clever ruse of God (or whatever we’re now calling God).

The idea that there might be an unseeable creator behind it all is just one of the religious myths buried deep in science, a myth that traces its origins less from the day-to-day mundane experiments and theory-building of actual scientists than from a certain type of scientific philosophy or science-fiction that has constructed a cosmology around what science is for and what science means. It is a mythology the singularitarians and others who followed Lem remain trapped in, often to the detriment of both technology and science. What is a shame is that these are myths that Lem, even with his expansive powers of imagination, did not dream widely enough to see beyond.