The debate between the economists and the technologists: who wins?

[Image: human bat vs. robot gangster]

For a while now robots have been back in the news with a vengeance, and almost on cue they seem to have revived many of the nightmares we might have thought were locked up in the attic of the mind with all the other stuff from the 1980s we hoped never to need again.

Big fears should probably be tackled only one at a time, so let’s leave aside for today the question of whether robots are likely to kill us, and focus on what should be an easier and less frightening nut to crack; namely, whether we are in the process of automating our way into a state of permanent, systemic unemployment.

Alas, even this seemingly less fraught question is no easier to answer. Like everything else, the issue has given rise to two distinct sides, neither of which has a clear monopoly on the truth. Unlike elsewhere, however, the two sides in the debate over “technological unemployment” split less along ideological lines than by professional expertise. That is, those who dismiss the argument that advances in artificial intelligence and robotics have already displaced, or are about to displace, the types of work now done by humans, to the extent that we face a crisis of permanent underemployment and unemployment the likes of which has never been seen before, tend to be economists. How such an optimistic bunch came to be known as practitioners of the dismal science is beyond me- note that they are also on the optimistic side of the debate with environmentalists.

Economists are among the first to remind us that we’ve seen fears of looming robot-induced unemployment before, whether those of Ned Ludd and his followers in the 19th century, or as close to us as the 1960s. The destruction of jobs has, in the past at least, been accompanied by the kinds of transformation that created brand new forms of employment. In 1915 nearly 40% of Americans were agricultural laborers of some sort; now that number hovers around 2 percent. These farmers weren’t replaced by “robots,” but they certainly were replaced by machines.

Still, we certainly don’t have a 40% unemployment rate. Rather, as the number of farm laborer positions declined, they were replaced by jobs that didn’t even exist in 1915. The place these farmers have not gone- where they probably would have gone in 1915, but which wouldn’t be much of an option today- is into manufacturing. In that sector something very similar to the hollowing out of employment in agriculture has taken place, with the percentage of laborers in manufacturing declining from 25% in 1915 to around 9% today. Here the workers really have been replaced by robots, though job prospects on the shop floor have declined just as much because the jobs have been globalized. Again, even at the height of the recent financial crisis we didn’t see 25% unemployment, at least not in the US.

Economists therefore continue to feel vindicated by history: any time machines have managed to supplant human labor we’ve been able to invent whole new sectors of employment where the displaced or their children have found work. It seems we’ve got nothing to fear from the “rise of the robots.” Or do we?

Again setting aside the possibility that our mechanical servants will go all R.U.R. on us, anyone who takes seriously Ray Kurzweil’s timeline- that by the 2020s computers will match human intelligence, and by 2045 exceed it a billionfold- has to conclude that most jobs as we know them are toast. The problem here, one that economists mostly fail to take into account, is that past technological revolutions replaced human brawn and allowed workers to move up into cognitive tasks. Human workers had somewhere to go. But machines that did the same for tasks requiring intelligence, machines billions of times smarter than us, would make human workers about as essential to the functioning of a company as the Leaper hood ornament is to the functioning of a Jaguar.

Then again, perhaps we shouldn’t take Kurzweil’s timeline all that seriously in the first place. Skepticism would seem to be in order because the Moore’s Law based exponential curve at the heart of Kurzweil’s predictions appears to have started to go all sigmoidal on us. That was the case made by John Markoff recently over at Edge. In an interview about the future of Silicon Valley he said:

All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn’t just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That’s a profound moment.

Kurzweil argues that you have interlocked curves, so even after silicon tops out there’s going to be something else. Maybe he’s right, but right now that’s not what’s going on, so it unwinds a lot of the arguments about the future of computing and the impact of computing on society. If we are at a plateau, a lot of these things that we expect, and what’s become the ideology of Silicon Valley, doesn’t happen. It doesn’t happen the way we think it does. I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account.
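Markoff’s “sigmoidal” point is easy to see with a toy calculation. Below is a minimal sketch (my own illustration; the numbers are arbitrary, not Markoff’s or Kurzweil’s) of how an exponential curve and a logistic, S-shaped, curve are nearly indistinguishable early on, then diverge wildly once the logistic curve nears its ceiling:

```python
import math

# Toy model: "computing capability" over 60 years, starting at 1.
CEILING = 1e6            # arbitrary plateau for the logistic curve
RATE = math.log(2) / 2   # doubling every two years, Moore's Law style

def exponential(t):
    return math.exp(RATE * t)

def logistic(t):
    # Same starting value and initial growth rate as the exponential,
    # but saturating at CEILING instead of growing without bound.
    return CEILING / (1 + (CEILING - 1) * math.exp(-RATE * t))

for t in range(0, 61, 10):
    print(f"year {t:2d}: exponential {exponential(t):>13,.0f}   "
          f"logistic {logistic(t):>11,.0f}")
```

If the cost of computing has been riding the left side of an S-curve rather than a true exponential, extrapolations like Kurzweil’s 2045 figure inherit the error.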

Although Markoff admits there has been great progress in pattern recognition, there has been nothing similar for the kinds of routine physical tasks found in much low-skilled, mobile work. As evidenced by the recent DARPA Robotics Challenge, if you want a job safe from robots, choose something for a living that requires mobility and the performance of a variety of tasks- plumber, home health aide, etc.

Markoff also sees job safety at the higher end of the pay scale, in cognitive tasks computers seem far from being able to perform:

We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

The upshot of all this is that there’s less to be feared from technological unemployment than many think:

There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster, and things are just running amok. In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

The problem, I think, with the case against technological unemployment made by many economists and by someone like Markoff is that they seem to be taking on a rather weak and caricatured version of the argument. That, at least, is the conclusion one comes to after taking into account what is perhaps the most reasoned and meticulous book to argue that the boogeyman of robots stealing our jobs may have been imaginary before, but is real indeed this time.

I won’t so much review the book I am referencing, Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future, as lay out how he responds to the case from economics- that we’ve been here before and have nothing to worry about- and to the observation of Markoff that because Moore’s Law has hit a wall (has it?), we need no longer worry so much about the transformative implications of embedding intelligence in silicon.

I’ll take the second one first. On the idea that the end of Moore’s Law will derail tech innovation, Ford makes a pretty good case that:

Even if the advance of computer hardware capability were to plateau, there would be a whole range of paths along which progress could continue. (71)

Continued progress in software, and new types of computer architecture (especially parallel ones), will continue to be mined long after Moore’s Law has reached its apogee. Cloud computing should also mean that silicon needn’t compete with neurons at scale. You don’t have to fit the computational capacity of a human individual into a machine of roughly similar size; you could instead remotely tap a much, much larger and more energy-hungry supercomputer that gives you human-level capacities. Ultimate density has become less important.
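A crude way to make Ford’s point concrete: even with per-chip performance frozen, aggregate capability can keep climbing by scaling out and by improving software. The growth rates below are invented for illustration; they are not Ford’s figures:

```python
# Toy model with invented numbers: per-chip speed is frozen (Moore's Law
# over), but capability still grows through parallel scale-out in the
# cloud and through algorithmic improvements in software.
PER_CHIP = 1.0          # per-chip performance, held flat forever
chips = 1_000           # chips deployed in the data center at year 0
software = 1.0          # cumulative algorithmic speedup factor

for year in range(11):
    capability = PER_CHIP * chips * software
    print(f"year {year:2d}: {chips:>8,d} chips x software {software:5.2f} "
          f"= capability {capability:>12,.1f}")
    chips = int(chips * 1.4)   # assume 40% yearly growth in deployed chips
    software *= 1.2            # assume 20% yearly software/algorithm gains
```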

What this means is that we should continue to see progress (perhaps very rapid progress) in robotics and artificially intelligent agents. Given that the labor market is often thought of as having three sectors- agriculture, manufacturing, and services- and that the first two are already largely mechanized and automated, the next wave will fall hardest on the service sector, and the question becomes: will there be any place left for workers to go?

Ford makes a pretty good case that eventually we should be able to automate almost anything human beings do, for all automation means is breaking a task down into a limited number of steps. And the examples he comes up with, where we have already shown this is possible, are both surprising and sometimes scary.

Perhaps 70 percent of financial trading on Wall Street is now done by trading algorithms (which don’t seem any less inclined to panic). Algorithms now perform legal research, compose orchestral music, and independently discover scientific theories. And those are the fancy robots. Most are like ATMs, where it is the customer who now does part of the labor involved in some task. Ford thinks the fast food industry is ripe for innovation in this way, with the customer designing their own food, which some system of small robots then “builds.” Think of your own job. If you can describe it to someone in a discrete number of steps, Ford thinks the robots are coming for you.
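Ford’s test is, in effect, the definition of a program: a job reduced to a discrete sequence of steps is something a machine can execute. Here is a minimal sketch of his fast-food example (the step names are my invention, not Ford’s):

```python
# If a job can be written as a finite, ordered list of steps, it can in
# principle be handed to a machine. The "recipe" below is an invented
# illustration of Ford's customer-designed, robot-built fast food.

def toast_bun(meal):          return meal + ["toasted bun"]
def grill_patty(meal):        return meal + ["grilled patty"]
def add_toppings(meal):       return meal + ["customer-chosen toppings"]
def assemble_and_wrap(meal):  return meal + ["assembled and wrapped"]

# The customer "designs" the order; a pipeline of small robots "builds" it.
PIPELINE = [toast_bun, grill_patty, add_toppings, assemble_and_wrap]

def robot_build():
    meal = []
    for step in PIPELINE:     # each station performs one discrete step
        meal = step(meal)
    return meal

print(robot_build())
```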

I was thankful that, though himself a technologist, Ford placed technological unemployment in a broader context, seeing it as part and parcel of trends since 1970 such as the movement of a greater share of GDP from labor to capital, soaring inequality, stagnant wages for middle class workers, the decline of unions, and globalization. His solution to these problems is a guaranteed basic income, for which he makes a humane and non-ideological argument, reminding those on the right who might find the idea anathema that conservative heavyweights such as Milton Friedman have argued in its favor.

The problem from my vantage point is not that Ford has failed to make a good case against those on the economists’ side of the issue who would accuse him of committing the Luddite fallacy; it’s that his case is perhaps both premature and not radical enough. It is premature in the sense that while all the other trends- rising inequality, the decline of unions, and the rest- are readily apparent in the statistics, technological unemployment is not.

Perhaps, then, technological unemployment is only a small part of a much larger trend pushing us toward the entrenchment and expansion of inequality and away from the type of middle class society established in the last century. The tech culture of Silicon Valley and the companies it has built are here a little like Burning Man: an example of late capitalist culture at its seemingly most radical and imaginative, one that temporarily escapes from, rather than creates, a truly alternative and autonomous political and social space.

Perhaps the types of technological transformation already here and looming that Ford lays out could truly serve as the basis for a new form of political and economic order, an alternative to the inegalitarian turn, but he doesn’t explore them. Nor does he discuss the already present dark alternative to the kind of socialism through AI we find in the “Minds” of Iain Banks; namely, the surveillance capitalism we have allowed to be built around us, which now stands as a bulwark against our preserving our humanity and preventing ourselves from becoming robots.

Our Verbot Moment

[Image: Metropolis poster]

When I was around nine years old I got a robot for Christmas. I still remember calling my best friend Eric to let him know I’d hit pay dirt. My “Verbot” was to be my own personal R2-D2. As was clear from the picture on the box, which I also remember as if it were yesterday, Verbot would bring me drinks and snacks from the kitchen on command- no more pestering my sisters, who responded with their damned claims of autonomy! Verbot would learn to recognize my voice and might help me with the math homework I hated. Being the only kid in my nowhere town with his very own robot, I’d be the talk for miles in every direction. As long, that is, as Mark Z didn’t find his own Verbot under the tree- the boy who had everything- cursed brat!

Within a week of Christmas, Verbot was dead. I never did learn how to program it to bring me snacks while I lounged watching Star Blazers, though it wasn’t really programmable to start with. It was really more of a remote controlled car in the shape of Robbie from Lost in Space than an actual honest to goodness robot. Then as now, my steering skills weren’t so hot, and I managed to get Verbot’s antenna stuck in the tangly curls of our skittish terrier, Pepper. To the sound of my cursing, Pepper panicked and dragged poor Verbot around and around the kitchen table, eventually snapping it loose from her hair to careen into a wall and smash to pieces. I felt my whole future was shattered there on the floor in front of me. There was no taking it back.

Not that my 9 year old nerd self realized this, but the makers of Verbot obviously weren’t German: in that language the word means “prohibition” or “ban.” Not exactly a ringing endorsement for a product- more like an inside joke by the makers, whose punch line was the precautionary principle.

What I had fallen into in my Verbot moment was the gap between our aspirations for robots and their actual reality. People have been talking about animated tools since ancient times. Homer has some in his Iliad; Aristotle discussed their possibility. Perhaps we started thinking about this because living creatures tend to be unruly and unpredictable. They don’t get you things when you want them to and have a tendency to run wild and go rogue. Tools are different: they always do what you want, as long as they’re not broken and you are using them properly. Combining the animation and intelligence of living things with the cold functionality of tools would be the mixing of chocolate and peanut butter for someone who wanted to get something done without doing it himself. The problem is we had no idea how to get from our dead tools to “living” ones.

It was only in the 19th century that an alternative path to the hocus-pocus of magic was found for answering the two fundamental questions surrounding the creation of animate tools: what would animate these machines in the way uncreated living beings are animated, and what would be the source of these beings’ intelligence? Few before the 1800s could see through these questions without some reference to the black arts, although a genius like Leonardo da Vinci had, as far back as the 15th century, seen hints that at least one way forward was to discover the principles of living things and apply them to our tools and devices. (More on that another time.)

The path forward we actually discovered was through machines animated by chemical and electrical processes, much as living beings are, rather than by tapping kinetic forces such as water and wind, or the potential energy and multiplication of force through things like springs and levers, which had run our machines up until that point. Intelligence was to be had in the form of devices following the logic of some detailed set of instructions. Our animated machines were to be energetic like animals but also logical and precise like the devices and languages we had created for measuring and sequencing.

We got the animation part down pretty quickly, but the intelligence part proved much harder. Although much more precise than fault-prone humans, mechanical methods of intelligence were just too slow compared to the electro-chemical processes of living brains. Once such “calculators,” what we now call computers, were able to use electronic processes, they got much faster, and the idea that we were on the verge of creating a truly artificial intelligence began to take hold.

As everyone knows, our aspirations were far too premature. The most infamous statement of our hubris came in the 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence, which boldly predicted:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.[emphasis added]

Ooops. Over much of the next half century, the only real progress in these areas of artificial intelligence came in the worlds we’d dreamed up in our heads. As a young boy, the robots I saw using language or forming abstractions were found only in movies and TV shows such as 2001: A Space Odyssey, Star Wars, Battlestar Galactica, and Buck Rogers, and in books by science-fiction giants like Asimov. Some of these daydreams had gone so long unfulfilled I watched them in black and white. Given this, I am sure many other kids had their Verbot moments as well.

It is only in the last decade or so that the processing of instructions has proven fast enough, and the programs sophisticated enough, for machines to exhibit something like the intelligent behaviors of living organisms. We seem to be at the beginning of a robotic revolution in which machines do at least some of the things science fiction and Hollywood promised. They beat us at chess and trivia games, fly airplanes and drive automobiles, serve as pack animals, and even speak to us. How close we will come to the dreams of authors and filmmakers with our 21st century robots cannot be known; an even more important question is how what actually develops will diverge from these fantasies.

I find the timing of this robotic revolution, in the context of other historical currents, quite strange. The bizarre thing is that almost at the exact moment many of us became unwilling to treat other living creatures, especially human beings, as mere tools- slavery no longer tolerated, our children and spouses treated no longer as property and servants but as gifts to be cultivated (perhaps the sadder element of why population growth rates are declining), even our animals offered some semblance of rights and autonomy- we were coming ever closer to our dream of creating truly animated and intelligent slaves.

This is not to say we are out of the woods when it comes to our treatment of living beings. The headlines about the tragic kidnapping of over 300 girls in Nigeria should bring to our attention the reality of slavery in our supposedly advanced and humane 21st century: there are more people enslaved today than at the height of 19th century chattel slavery; it is just that the proportion is so much lower. Many of the world’s working poor, especially in the developing world, live in conditions not far removed from slavery or serfdom. The primary problem I see with our continued practice of eating animals is not meat eating itself, but that the production process chains living creatures to the cruel and unrelenting sequence of machines rather than allowing such animals to live in the natural cycles for which they evolved and were bred.

Still, people in many parts of the world are rightly constrained in how they can treat living beings. What I am afraid of is that the dark and perpetual human longings to be served and to dominate will manifest themselves in our making machines more like persons for those purposes alone. The danger here is that these animated tools will cross, or we will force them to cross, some threshold of sensibility that calls into question their treatment and use as mere tools, while we fail to mature beyond the level of my 9 year old self dreaming of his Verbot slave.

And yet, this is only one way to look at the rise of intelligent machines. Like a gestalt drawing, we might change our focus and see a very different picture- that it is not we who are using and chaining machines for our purposes, but the machines that are doing this to us. To that subject next time…

Psychobot

It is interesting… how weapons reflect the soul of their maker.

- Don DeLillo, Underworld

Singularity or something far short of it, the very real revolution in artificial intelligence and robotics is already encroaching on existential aspects of the human condition that are as old as our history. Robotics is indeed changing the nature of work, and is likely to continue to do so throughout this century and beyond. But, as in most technological revolutions, the impact of change is felt first and foremost in the field of war.

In 2012 IEET Fellow Patrick Lin published a fascinating article in The Atlantic about a discussion he had at the CIA concerning the implications of the robotics revolution. The use of robots in war raises all kinds of questions in the area of just-war theory that have barely begun to be addressed. An assumption throughout Lin’s article is that robots are likely to make war more, not less, ethical, since robots can be programmed never to target civilians, or never to cross the thin line that separates interrogation from torture.

This idea- that the application of robots to war could ultimately take some of the nastier parts of the human condition out of the calculus of warfare- is also touched on from the same perspective in Peter Singer’s Wired for War. There, Singer brings up the case of Steven Green, a US soldier charged with the premeditated rape and murder of a 14 year old Iraqi girl. Singer contrasts the young soldier “swirling with hormones” with the calm calculations of a robot lacking such sexual and murderous instincts.

The problem with this interpretation of Green is that it relies on an outdated understanding of how the brain works. As I’ll try to show, Green is really more like a robot soldier than like most human beings.

Lin and Singer’s idea of the “good robot” as a replacement for the “bad soldier” is based on an understanding of the nature of moral behavior that can be traced, like most things in Western civilization, back to Plato. In Plato’s conception, the godly part of human nature, its reason, was a charioteer tasked with reining in the chaotic human passions. People did bad things whenever reason lost control. The idea was updated by Freud with his Id (instincts), Ego (self), and Super-Ego (social conscience). The thing is, this version of why human beings act morally or immorally is most certainly wrong.

The neuroscience writer Jonah Lehrer, in his How We Decide, has a chapter, “The Moral Mind,” devoted to this very topic. The odd thing is that the normal soldier does not want to kill anybody- even enemy combatants. Lehrer cites a study of thousands of American soldiers after WWII done by U.S. Army Brigadier General S.L.A. Marshall.

His shocking conclusion was that less than 20 percent actually shot at the enemy even when under attack. “It is fear of killing,” Marshall wrote, “rather than fear of being killed, that is the most common cause of battle failure of the individual.” When soldiers were forced to directly confront the possibility of directly harming another human being- this is a personal moral decision- they were literally incapacitated by their emotions. “At the most vital point of battle,” Marshall wrote, “the soldier becomes a conscientious objector.”

After this study was published, the Army redesigned its training to reduce this natural moral impediment to battlefield effectiveness. “What was being taught in this environment is the ability to shoot reflexively and instantly… Soldiers are de-sensitized to the act of killing until it becomes an automatic response.” (pp. 179-180)

Lehrer, of course, has been discredited as a result of plagiarism scandals, so we should accept his ideas with caution. Yet they do suggest what we already know: the existential condition of war is that it is difficult for human beings to kill one another, and well it should be. If modern training methods are meant to remove this obstruction in the name of combat effectiveness, they also remove the soldier from the actual moral reality of war. That moral reality is the reason wars should be fought infrequently and only under the most extreme of circumstances. We should be willing to kill other human beings only under the most threatening and limited of conditions.

The designers of robot warriors are unlikely to program this moral struggle with killing into their machines. Such machines will kill or not kill fellow sentient beings as they are programmed to do. They will be truly amoral in nature or, to use a loaded and antiquated term, without a soul.

We could certainly program robots with ethical rules of war, as Singer and Lin suggest. Such robots would be less likely to kill the innocent in the fear and haste of the fog of war. It is impossible to imagine robots committing the horrible crime of rape, which is far too common in war. All these things are good things. The question for the farther future is: how would a machine with a human or supra-human level of intelligence experience war? What would be its moral and existential reality of war, compared to how the most highly sentient creatures today, human beings, experience combat?
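What “programming the ethical rules of war” might amount to, in caricature, is something like the sketch below: a veto check over proposed actions, in the spirit of machine-ethics proposals for an “ethical governor.” Everything here is my own toy illustration, not anything from Lin or Singer; notice that nothing in the loop resembles the felt reluctance to kill that Marshall documented:

```python
# Toy "ethical governor": declarative rules can veto an action, but the
# check involves no analogue of the emotional resistance of a human
# soldier. All names and rules are invented for illustration only.
RULES = [
    ("never target civilians",     lambda a: a["target_type"] == "combatant"),
    ("force must be proportional", lambda a: a["force"] <= a["threat"]),
    ("no action during ceasefire", lambda a: not a["ceasefire"]),
]

def governor(action):
    """Return (permitted, list_of_vetoed_rules)."""
    vetoes = [name for name, ok in RULES if not ok(action)]
    return (not vetoes, vetoes)

proposed = {"target_type": "combatant", "force": 2,
            "threat": 3, "ceasefire": False}
print(governor(proposed))   # (True, []) - rules satisfied; no hesitation
```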

Singer’s use of Steven Green as a flawed human being whose “hormones” have overwhelmed his reason, and who is thus ethically inferior to the cold reason of an artificial intelligence with no such passions to control, is telling, and again rests on the flawed Plato/Freud model of human conscience. A clear way to see this is to look inside the mind of the rapist and murderer Green, who before he committed his crime had been quoted in the Washington Post as saying:

I came over here because I wanted to kill people…

I shot a guy here when we were out at a traffic checkpoint, and it was like nothing. Over here killing people is like squashing an ant. I mean you kill somebody and it’s like ‘All right, let’s go get some pizza’.

In other words, Green is a psychopath.

Again we can turn to Lehrer, who writes in describing the serial killer John Wayne Gacy:

According to the court appointed psychiatrist, Gacy seemed incapable of experiencing regret, sadness, or joy. Instead his inner life consisted entirely of sexual impulses and ruthless rationality. (p. 169)

It is not the presence of out of control emotions that explains the psychopath, but the very absence of emotion. Psychopaths are unmoved by the sympathy that makes it so difficult for normal soldiers to kill. Unlike other human beings, they show no emotional response when shown depictions of violence. In fact, they are unmoved by emotions at all. For them there are simply “goals” (set by biology or the environment) that they want to achieve. The means to those goals, including murder, are, for them, irrelevant. Lehrer quotes G.K. Chesterton:

The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.

Whatever the timeline, we are in the process of creating sentient beings who will kill other sentient beings, human and machine, without anger, guilt, or fear. I see no easy way out of this dilemma, for the very selective pressures of war appear to be weighted against programming such moral qualities (as opposed to rules for whom and when to kill) into our machines. Rather than ushering in an era of “humane” warfare, on the existential level- that is, in the minds of the beings actually doing the fighting- the moral dimension of war will be relentlessly suppressed. We will have created what is, in effect, an army of psychopaths.