What makes war between the US and China or Russia inevitable?

Battle of Lepanto

There is a dangerous and not so new idea currently making the rounds: not only is conventional war between the great powers inevitable, but it would be much less of an existential threat to humanity than we have been led to believe, and might even be necessary for human progress.

The appearance of this argument in favor of war was almost predictable, given that it was preceded by a strong case that war was now obsolete and, at least in its great power guise, was being driven out of history by deep underlying trends toward prosperity and peace. I am thinking here mostly of Steven Pinker in his The Better Angels of Our Nature, but he was not alone.

The exact same thing happened in the 19th century. Voices arose then arguing that war was becoming unnecessary as values became global and the benefits of peaceful trade replaced the plunder of war. The counter-reaction argued that war had been the primary vector of human progress and that without it we would "go soft" or de-evolve into a species much less advanced than we are now.

Given their racist overtones, arguments that we will evolve "backward" absent war are no longer made in intelligent circles. Instead, war has been tied to technological development, the argument being that without war in general, and great power war in particular, we will become technologically stuck. That's the case made by Ian Morris in his War! What Is It Good For?, where he sees the US as shepherding in the Singularity via war, and it's a case made from opposite sides of the fence regarding transhumanism by Christopher Coker and Steve Fuller.

The problem with these new arguments in favor of great power conflict is that they discount the prospect of nuclear exchange. Perhaps warfare is the hare of technological progress, but it's better to be the tortoise when such conflicts hold the risk of driving you back to the stone age.

That said, there is an argument out there that perhaps even nuclear warfare wouldn't be quite the civilization destroyer we have believed it to be. But that position seems less likely to become widespread than the idea that the great powers could fight each other directly and yet somehow avoid bringing the full weight of their conventional and nuclear forces down upon one another, even in the face of a devastating loss. It's here that we find Peter W. Singer and August Cole's recent novel Ghost Fleet: A Novel of the Third World War, which tells the story of a conventional war, mainly at sea, between the US on one side and China and Russia on the other.

It's a copiously researched book, and probably gives a good portrait of what warfare in the next ten to fifteen years will look like. If the authors are right, in the wars of the future drones (underground, land, air, and sea) will be ubiquitous, and AI will be used to manage battles in which sensors take the place of senses: the crew of a ship need never set eyes on the actual sea.

In the future, cyber attacks will be a fully functional theater of war, as will outer space. The next war will take advantage of enhancement technologies: augmented reality, and interventions such as "stim tabs" for alertness. Advances in neuroscience and bioelectronics will be used, at the very least, as a form of enhanced, and brutal, interrogation.

If the book makes any warning (and all such books seem to have a warning), it is that the US is extremely reliant on the technological infrastructure that allows the country to deploy and direct its global forces. The war begins with a Chinese/Russian attack on American satellites, which effectively blinds the American military. I've heard interviews where Singer claims that in light of this the US Navy is now training its officers in the art of celestial navigation, which in the 21st century is just, well… cool. The authors also point out how vulnerable American hardware is to threats like embedded beacons or kill switches, given that much of this hardware has come out of Chinese factories.

The plot of Ghost Fleet is pretty standard: in a surprise attack the Chinese and Russians push the US out of the Pacific by destroying much of the American navy and conquering Hawaii. Seemingly without a single ally, the US then manages to destroy the Chinese fleet after resurrecting the ships of its naval graveyard near San Francisco (a real thing, apparently), especially a newfangled destroyer, the USS Zumwalt, which is equipped with a new and very powerful form of rail gun. I am not sure whether it helped or hindered my enjoyment of the book that its plot seemed eerily similar to my favorite piece of anime from when I was a kid: Star Blazers.

The problem, once again, is in thinking that we can contain such conflicts and therefore need not do everything possible to avoid them. In the novel there never seems to be any possibility of nuclear exchange or strategic bombing, and with the exception of the insurgency and counterinsurgency in Hawaii, the war is hermetically contained to the Pacific and the sea. Let's hope we would be so lucky, but I'm doubtful.

Ghost Fleet is also not merely weak but detrimental in regard to the one "technology" that will be necessary for the US to avoid a war with China or others, and to contain such a war should it break out.

If I had to cast a vote for the one work that summed up the historical differences between the West and everyone else, differences that would eventually bring us the scientific revolution and all the power that came with it, that work wouldn't be one of science such as the Principia Mathematica, or even some great work of philosophy or literature, but a history book.

What makes Herodotus' The Histories so unique is that it marked the first time one people tried to actually understand their enemies. It's certainly eurocentric to say it, but the Greeks, as far as I am aware, were first and unique here. Nor was it a one-off.

Thucydides would do something similar for the intra-Hellenic conflict between Athenians and Spartans, but the idea of actually trying to understand your enemy (no doubt because so much about being an enemy lends itself to being hidden, and thus needs to be imagined) was brought to its heights in the worlds of drama and fiction. The great Aeschylus did this with the Persians in his tragedy named for that noble people.

It's a long tradition that ran right up through The Riddle of the Sands, a 1903 book that depicted a coming war between the British and the Germans. Much different from the dehumanizing war propaganda that would characterize the Germans as less-than-human "Huns" during the First World War, The Riddle of the Sands attempted to make German aggression explicable given Germany's historical and geographical circumstances, even while arguing that such aggression needed to be countered.

In Ghost Fleet, by contrast, the Chinese especially are reduced to something like Bond villains, US control of the Pacific is wholly justified, and its characters are largely virtuous and determined. In that sense the novel fails to do what novels naturally excel at: opening up realms to the imagination that otherwise remain inaccessible, in this specific case the motivations, assumptions, and deep historical grievances likely to drive Chinese or Russian policy makers in the run-up to and during any such conflict. Sadly, it is exactly such a lack of understanding that makes great power wars, and the existential risks to humanity they pose, perhaps not inevitable, but increasingly likely.

Psychobot

It is interesting… how weapons reflect the soul of their maker.

Don DeLillo, Underworld

Singularity or something far short of it, the very real revolution in artificial intelligence and robotics is already encroaching on existential aspects of the human condition that have existed for as long as our history. Robotics is indeed changing the nature of work, and is likely to continue to do so throughout this century and beyond. But, as in most technological revolutions, the impact of change is felt first and foremost in the field of war.

In 2012 IEET Fellow Patrick Lin had a fascinating article in The Atlantic about a discussion he had at the CIA on the implications of the robotics revolution. The use of robots in war raises all kinds of questions in the area of just-war theory that have yet to even begin to be addressed. An assumption throughout Lin's article is that robots are likely to make war more, not less, ethical, since robots can be programmed never to target civilians, or never to cross the thin line that separates interrogation from torture.

This idea, that the application of robots to war could ultimately take some of the nastier parts of the human condition out of the calculus of warfare, is also touched upon, from the same perspective, in Peter Singer's Wired for War. There, Singer brings up the case of Steven Green, a US soldier charged with the premeditated rape and murder of a 14-year-old Iraqi girl. Singer contrasts the young soldier "swirling with hormones" with the calm calculations of a robot lacking such sexual and murderous instincts.

The problem with this interpretation of Green is that it relies on an outdated understanding of how the brain works. As I'll try to show, Green is really more like a robot-soldier than most human beings are.

Lin and Singer's idea of the "good robot" as a replacement for the "bad soldier" is based on an understanding of the nature of moral behavior that can be traced, like most things in Western civilization, back to Plato. In Plato's conception, the godly part of human nature, its reason, was seen as a charioteer tasked with guiding the chaotic human passions. People did bad things whenever reason lost control. The idea was updated by Freud with his id (instincts), ego (self), and superego (social conscience). The thing is, this version of why human beings act morally or immorally is most certainly wrong.

The neuroscience writer Jonah Lehrer, in his How We Decide, has a chapter, "The Moral Mind," devoted to this very topic. The odd thing is that the normal soldier does not want to kill anybody, not even enemy combatants. Lehrer cites a study of thousands of American soldiers after WWII done by U.S. Army Brigadier General S.L.A. Marshall.

His shocking conclusion was that less than 20 percent actually shot at the enemy, even when under attack. "It is fear of killing," Marshall wrote, "rather than fear of being killed, that is the most common cause of battle failure of the individual." When soldiers were forced to directly confront the possibility of harming another human being, which is a personal moral decision, they were literally incapacitated by their emotions. "At the most vital point of battle," Marshall wrote, "the soldier becomes a conscientious objector."

After this study was published, the Army redesigned its training to reduce this natural moral impediment to battlefield effectiveness. "What was being taught in this environment is the ability to shoot reflexively and instantly… Soldiers are de-sensitized to the act of killing until it becomes an automatic response." (pp. 179-180)

Lehrer, of course, has been discredited as a result of plagiarism scandals, so we should accept his ideas with caution. Yet they do suggest what we already know: the existential condition of war is that it is difficult for human beings to kill one another, and well it should be. If modern training methods are meant to remove this obstruction in the name of combat effectiveness, they also remove the soldier from the actual moral reality of war. That moral reality is the reason why wars should be fought infrequently and only under the most extreme of circumstances. We should be willing to kill other human beings only under the most threatening and limited of conditions.

The designers of robot warriors are unlikely to program this moral struggle with killing into their machines. Such machines will kill, or not kill, fellow sentient beings as they are programmed to do. They will be truly amoral in nature, or, to use a loaded and antiquated term, without a soul.

We could certainly program robots with ethical rules of war, as Singer and Lin suggest. Such robots would be less likely to kill the innocent in the fear and haste of the fog of war. It is impossible to imagine robots committing the horrible crime of rape, which is far too common in war. All of these are good things. The question for the farther future is: how would a machine with a human or supra-human level of intelligence experience war? What would be its moral and existential reality of war compared to how the most highly sentient creatures today, human beings, experience combat?

Singer's use of Steven Green as a flawed human being whose "hormones" have overwhelmed his reason, and who is thus ethically inferior to the cold reason of an artificial intelligence that has no such passions to control, is telling, and again is based on the flawed Plato/Freud model of human conscience. A clear way to see this is by looking inside the mind of the rapist and murderer Green, who, before he committed his crime, had been quoted in the Washington Post as saying:

I came over here because I wanted to kill people…

I shot a guy here when we were out at a traffic checkpoint, and it was like nothing. Over here killing people is like squashing an ant. I mean you kill somebody and it’s like ‘All right, let’s go get some pizza’.

In other words, Green is a psychopath.

Again we can turn to Lehrer, who writes of the serial killer John Wayne Gacy:

According to the court-appointed psychiatrist, Gacy seemed incapable of experiencing regret, sadness, or joy. Instead his inner life consisted entirely of sexual impulses and ruthless rationality. (p. 169)

It is not the presence of out-of-control emotions that explains the psychopath, but the very absence of emotion. Psychopaths are unmoved by the very sympathy that makes it difficult for normal soldiers to kill. Unlike other human beings, they show no emotional response when shown depictions of violence. In fact, they are unmoved by emotions at all. For them there are simply "goals" (set by biology or the environment) that they want to achieve. The means to those goals, including murder, are, for them, irrelevant. Lehrer quotes G.K. Chesterton:

The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.

Whatever the timeline, we are in the process of creating sentient beings who will kill other sentient beings, human and machine, without anger, guilt, or fear. I see no easy way out of this dilemma, for the very selective pressures of war appear to be weighted against programming such moral qualities (as opposed to rules for whom and when to kill) into our machines. Rather than ushering in an era of "humane" warfare on the existential level, that is, in the minds of the beings actually doing the fighting, the moral dimension of war will be relentlessly suppressed. We will have created what is, in effect, an army of psychopaths.

Of drones and democracy

P.W. Singer has a fascinating article in last Sunday’s edition of the New York Times about the implications of the current robotics revolution in warfare for democracy.

Singer's reputation as someone who asks the hard questions (and asks them first) about war was earned in his studies of the rise of privatized warfare and the increasing use of children in warfare. On both of those issues, Singer was among the first to correctly identify disturbing trends. His perspective shows little ideological bent, merely the search for truth. On those issues he was almost a lone voice crying in the wilderness, as he is now in bringing to our attention the troubling questions raised as the revolution in computer technology and robotics is increasingly applied to war. His goal, as always, appears to be to reveal the obvious trends in front of us to which we are blind, and in doing so to start a conversation we should already be having.

I think a short review of Singer's Wired for War is in order, so that his New York Times article can be seen in its full context. In that book Singer takes readers on a wild ride through the current robotics revolution in warfare. He sees our era as akin to WWI, when new technologies like the tank and the airplane were thrown into the field but no one yet knew how to actually use them. (224)

The military now funds over 80 percent of American spending on artificial intelligence. Much of this funding flows through the Defense Advanced Research Projects Agency (DARPA). Joel Garreau of the Washington Post says the mission of DARPA "is to accelerate the future into being". (78)

Not only the US Army, but also the Navy and Air Force (which wants 45 percent of its bomber fleet to be composed of unmanned vehicles), are rushing to develop unmanned systems. Some of these systems seem straight out of science fiction, whether swarming insect-sized robots or super-sized planes able to stay aloft for months or even years. (117-118)

Many of the new robots resemble animals, such as a robotic dog.

What does all of this have to do with democracy?

In his article he makes the point that, increasingly, the US is relying on its advantage in military technology, most famously the use of unmanned drones, to wage what is in effect a constant war, without the democratic oversight and control that is supposed to be the job of our elected representatives in Congress. With the biggest cost of war, the lives of its citizens, no longer at risk, the American government can wage war on terrorist targets throughout the world with seeming impunity.

As a prime example, he cites the recent US military action in Libya.

Starting on April 23, American unmanned systems were deployed over Libya. For the next six months, they carried out at least 146 strikes on their own. They also identified and pinpointed the targets for most of NATO’s manned strike jets. This unmanned operation lasted well past the 60-day deadline of the War Powers Resolution, extending to the very last airstrike that hit Colonel Qaddafi’s convoy on Oct. 20 and led to his death.

Choosing to make the operation unmanned proved critical to initiating it without Congressional authorization and continuing it with minimal public support. On June 21, when NATO’s air war was lagging, an American Navy helicopter was shot down by pro-Qaddafi forces. This previously would have been a disaster, with the risk of an American aircrew being captured or even killed. But the downed helicopter was an unmanned Fire Scout, and the story didn’t even make the newspapers the next day.

Singer’s conclusion:

We must now accept that technologies that remove humans from the battlefield, from unmanned systems like the Predator to cyberweapons like the Stuxnet computer worm, are becoming the new normal in war.

And like it or not, the new standard we’ve established for them is that presidents need to seek approval only for operations that send people into harm’s way — not for those that involve waging war by other means.

Without any actual political debate, we have set an enormous precedent, blurring the civilian and military roles in war and circumventing the Constitution’s mandate for authorizing it. Freeing the executive branch to act as it chooses may be appealing to some now, but many future scenarios will be less clear-cut. And each political party will very likely have a different view, depending on who is in the White House.

America’s founding fathers may not have been able to imagine robotic drones, but they did provide an answer. The Constitution did not leave war, no matter how it is waged, to the executive branch alone.

In a democracy, it is an issue for all of us.