The debate between the economists and the technologists: who wins?

Human bat vs robot gangster

For a while now robots have been back in the news with a vengeance, and almost on cue they seem to have revived many of the nightmares we might have thought were locked in the attic of the mind with all the other relics of the 1980s we hoped never to need again.

Big fears should probably be tackled one at a time, so let’s leave aside for today the question of whether robots are likely to kill us and focus on what should be an easier, and less frightening, nut to crack; namely, whether we are in the process of automating our way into a state of permanent, systemic unemployment.

Alas, even this seemingly less fraught question is no less difficult to answer. For like so much else, the issue has given rise to two distinct sides, neither of which has a clear monopoly on the truth. Unlike elsewhere, however, the two sides in the debate over “technological unemployment” split less along ideological lines than along lines of professional expertise. That is, those who dismiss the argument that advances in artificial intelligence and robotics have already displaced, or are about to displace, the types of work now done by humans to the extent that we face a crisis of permanent underemployment and unemployment the likes of which has never been seen tend to be economists. How such an optimistic bunch came to be known as practitioners of the “dismal science” is beyond me; note that they are also on the optimistic side of the debate with environmentalists.

Economists are among the first to remind us that we’ve seen fears of looming robot-induced unemployment before, whether those of Ned Ludd and his followers in the 19th century or those as close to us as the 1960s. The destruction of jobs has, in the past at least, come through the kinds of transformation that created brand new forms of employment. In 1915 nearly 40 percent of Americans were agricultural laborers of some sort; now that number hovers around 2 percent. These farmers weren’t replaced by “robots,” but they certainly were replaced by machines.

Still, we certainly don’t have a 40 percent unemployment rate. Rather, as the number of farm laborer positions declined they were replaced by jobs that didn’t even exist in 1915. Where these displaced farmers have not gone, though it probably would have been their destination in 1915, is into manufacturing. In that sector something very similar to the hollowing out of agricultural employment has taken place, the percentage of laborers in manufacturing having declined from 25 percent in 1915 to around 9 percent today. Here the workers really have been replaced by robots, though job prospects on the shop floor have declined just as much because the jobs have been globalized. And again, even at the height of the recent financial crisis we haven’t seen 25 percent unemployment, at least not in the US.

Economists therefore continue to feel vindicated by history: any time machines have managed to supplant human labor we’ve been able to invent whole new sectors of employment where the displaced or their children have been able to find work. It seems we’ve got nothing to fear from the “rise of the robots.” Or do we?

Again setting aside the possibility that our mechanical servants will go all R.U.R. on us, anyone who takes seriously Ray Kurzweil’s timeline, in which computers match human intelligence by the 2020s and exceed it a billionfold by 2045, has to conclude that most jobs as we know them are toast. The problem here, and one that economists mostly fail to take into account, is that past technological revolutions replaced human brawn and allowed workers to move up into cognitive tasks. Human workers had somewhere to go. But a machine that did the same for tasks requiring intelligence, a machine billions of times smarter than us, would make human workers about as essential to the functioning of a company as a Leaper ornament is to the functioning of a Jaguar.

Then again, perhaps we shouldn’t take Kurzweil’s timeline all that seriously in the first place. Skepticism would seem to be in order because the Moore’s Law-based exponential curve at the heart of Kurzweil’s predictions appears to have started to go all sigmoidal on us. That was the case made by John Markoff recently over at The Edge. In an interview about the future of Silicon Valley he said:

All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn’t just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That’s a profound moment.

Kurzweil argues that you have interlocked curves, so even after silicon tops out there’s going to be something else. Maybe he’s right, but right now that’s not what’s going on, so it unwinds a lot of the arguments about the future of computing and the impact of computing on society. If we are at a plateau, a lot of these things that we expect, and what’s become the ideology of Silicon Valley, doesn’t happen. It doesn’t happen the way we think it does. I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account.

Although Markoff admits there has been great progress in pattern recognition, there has been nothing similar for the kinds of routine physical tasks found in much low-skilled, mobile work. As evidenced by the recent DARPA challenge, if you want a job safe from robots, choose something for a living that requires mobility and the performance of a variety of tasks: plumber, home health aide, etc.

Markoff also sees job safety on the higher end of the pay scale in cognitive tasks computers seem far from being able to perform:

We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

The upshot of all this is that there’s less to be feared from technological unemployment than many think:

There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster, and things are just running amok. In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

The problem, I think, with the case against technological unemployment made by many economists, and by someone like Markoff, is that they seem to be taking on a rather weak and caricatured version of the argument. That at least is the conclusion one comes to after taking into account what is perhaps the most reasoned and meticulous book arguing that while the boogeyman of robots stealing our jobs may have been imaginary before, it is real indeed this time.

I won’t so much review the book I am referencing, Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future, here as lay out how he responds to the case from economics that we’ve been here before and have nothing to worry about, and to Markoff’s observation that because Moore’s Law has hit a wall (has it?), we need no longer worry so much about the transformative implications of embedding intelligence in silicon.

I’ll take the second one first. As to the idea that the end of Moore’s Law will derail tech innovation, Ford makes a pretty good case that:

Even if the advance of computer hardware capability were to plateau, there would be a whole range of paths along which progress could continue. (71)

Continued progress in software, and in new (especially parallel) computer architectures, will be mined long after Moore’s Law has reached its apogee. Cloud computing should also mean that silicon needn’t compete with neurons at scale. You don’t have to fit the computational capacity of a human individual into a machine of roughly similar size; you could instead remotely tap a much, much larger and more energy-intensive supercomputer that gives you human-level capacities. Ultimate density has become less important.

What this means is that we should continue to see progress (and perhaps very rapid progress) in robotics and artificially intelligent agents. Given that the labor market is often thought of as three sectors, agriculture, manufacturing, and services, and that the first two are already largely mechanized and automated, the place where the next wave will fall hardest is the service sector. The question becomes: will there be any place left for workers to go?

Ford makes a pretty good case that eventually we should be able to automate almost anything human beings do, for all automation requires is breaking a task down into a limited number of steps. And the examples he comes up with where we have already shown this is possible are both surprising and sometimes scary.

Perhaps 70 percent of financial trading on Wall Street is now done by trading algorithms (which don’t seem any less inclined to panic). Algorithms now perform legal research, compose orchestral music, and independently discover scientific theories. And those are the fancy robots. Most are like ATMs, where it is the customer who now does part of the labor involved in some task. Ford thinks the fast food industry is ripe for innovation along these lines, with the customer designing their own food, which some system of small robots then “builds.” Think of your own job. If you can describe it to someone in a discrete number of steps, Ford thinks the robots are coming for you.

I was thankful that, though himself a technologist, Ford placed technological unemployment in a broader context, seeing it as part and parcel of trends after 1970 such as the movement of a greater share of GDP from labor to capital, soaring inequality, stagnant wages for middle class workers, the decline of unions, and globalization. His solution to these problems is a guaranteed basic income, for which he makes a humane and non-ideological argument, reminding those on the right who might find the idea anathema that conservative heavyweights such as Milton Friedman have argued in its favor.

The problem from my vantage point is not that Ford has failed to make a good case against those on the economists’ side of the issue who would accuse him of committing the Luddite fallacy; it’s that his case is perhaps both premature and not radical enough. It is premature in the sense that while all the other trends, rising inequality, the decline of unions, etc., are readily apparent in the statistics, technological unemployment is not.

Perhaps, then, technological unemployment is only a small part of a much larger trend pushing us toward the entrenchment and expansion of inequality and away from the type of middle class society established in the last century. The tech culture of Silicon Valley and the companies it has built are here a little like Burning Man: an example of late capitalist culture at its seemingly most radical and imaginative that temporarily escapes, rather than creates, a truly alternative and autonomous political and social space.

Perhaps the types of technological transformation, already here and looming, that Ford lays out could truly serve as the basis for a new form of political and economic order, an alternative to the inegalitarian turn, but he doesn’t explore them. Nor does he discuss the already present dark alternative to the kind of socialism through AI we find in the “Minds” of Iain Banks; namely, the surveillance capitalism we have allowed to be built around us, which now stands as a bulwark against our preserving our humanity and preventing ourselves from becoming robots.

The King of Weird Futures

Bosch, Garden of Earthly Delights

Back in the late winter I wrote a review of the biologist Edward O. Wilson’s grandiloquently mistitled tract The Meaning of Human Existence. As far as visions of the future go Wilson’s was a real snoozer, although for that very reason it left little to be nervous about. The hope he articulated in his book was that we somehow manage to keep humanity pretty much the same, genetically at least, “as a sacred trust,” in perpetuity. It’s a bio-conservatism that, on one level, I certainly understand, but one I also find incredibly unlikely given that the future consists of… well… an awfully long stretch of time (that is, as long as we’re wise enough, or just plain lucky). How in the world can we expect, especially in light of current advances in fields like genetics, neuroscience, and artificial intelligence, that we can, or even should, keep humanity essentially unchanged not just now, but for 100 years, or 1,000 years, or 10,000 years, or even longer?

If Wilson is the 21st century’s prince of the dull future, the philosopher David Roden should perhaps be crowned the king of weird ones. Indeed, it may be that the primary point of his recent mind-bending book Posthuman Life: Philosophy at the Edge of the Human is to make the case for the strange and unexpected. The Speculative Posthumanism (SP) he helps launch with this book is a philosophy that grapples with the possibility that the future of our species and its descendants will be far weirder than we have so far allowed ourselves to imagine.

I suppose the best place to begin a proper discussion of Posthuman Life is by explaining just what Roden means by Speculative Posthumanism, something that (as John Danaher has pointed out) Roden uncovers like a palimpsest by providing some very useful clarifications of often philosophically confused and conflated areas of speculation regarding humanity’s place in nature and its future.

Essentially Roden sees four domains of thought regarding humanism/posthumanism. There is Humanism of the old fashioned type, which even absent some kind of spiritual dimension makes the claim that there is something special, cognitively, morally, etc., that marks human beings off from the rest of nature.

Interestingly, Roden sees Transhumanism as merely an updating of this humanism: the expansion of its toolkit for perfecting humankind to include not just things like training and education but the physical, cognitive, and moral enhancements made available by advances in medicine, genetics, bio-electronics, and similar technologies.

Then there is Critical Posthumanism, by which Roden means a move in Western philosophy, apparent since the latter half of the 20th century, that seeks to challenge the anthropocentrism at the heart of Western thinking. The shining example of that anthropocentrism was the work of Descartes, which reduced animals to machines while treating the human intellect as mere “spirit,” about as embodied and tangible as a burnt offering to the gods. Critical Posthumanism, among whose ranks one can count a number of deconstructionist, feminist, multicultural, animal rights, and environmentalist philosophers from the last century, aims to challenge the centrality of the subject and the discourses surrounding the idea of an observer located at some Archimedean point outside of nature and society.

Lastly, there is the philosophy Roden himself hopes to help create: Speculative Posthumanism, the goal of which is to expand and explore the potential boundaries of what he calls the posthuman possibility space (PPS). It is a posthumanism that embraces the “weird” in the sense that it hopes, like Critical Posthumanism, to challenge the hold anthropocentrism has had on the way we think about possible manifestations of phenomenology, moral reasoning, and cognition. Yet unlike Critical Posthumanism, Speculative Posthumanism does not stop at skepticism but seeks to imagine, in so far as it is possible, what non-anthropocentric forms of phenomenology, moral reasoning, and cognition might actually look like. (21)

It is as a work of philosophical clarification that Posthuman Life succeeds best, though a close runner-up would be the way Roden manages to explain and synthesize many of the major movements within modern philosophy in a way that clearly connects them to what many see as upcoming challenges to traditional philosophical categories posed by emerging technologies: machines that exhibit ever more reasoning, the disappearance of the boundary between the human, the animal, and the machine, or even the erosion of human subjectivity and individuality themselves.

Roden challenges the notion that any potential moral agents of the future who can trace their line of descent back to humanity will be something like Kantian moral agents, rather than agents possessing a moral orientation we simply cannot imagine. He also manages to point toward connections between the postmodern thrust of late 20th century philosophy, which challenged the role of the self/subject, and recent developments in neuroscience, including connections between philosophical phenomenology and the neuroscience of human perception that do something very similar to our conception of the self. Indeed, Posthuman Life eclipses similar efforts at synthesis, and Roden excels at bringing to light potentially pregnant connections between thinkers as diverse as Andy Clark and Heidegger, Donna Haraway and Deleuze and Derrida, along with non-philosophical figures like the novelist Philip K. Dick.

It is as a consequence of this very success at philosophical clarification that Roden is led across what I, at least, felt was a bridge (philosophically) too far. As posthumanist philosophers are well aware, the very notion of the “human” suffers from a continuum problem. Unique to us alone, technology broadly defined is almost impossible to separate from humanity, and this is the case even if we go back to the very beginnings of the species, when the technologies in question were the atlatl or the baby sling. We are, in the words of Andy Clark, “natural born cyborgs.” In addition, there is the fact that (like anything bound up with historical change) how a human being is defined is a moving target rather than a reflection of any unchanging essence.

How then can one declare any possible human future that emerges out of our continuing “technogenesis” to be “post” human, rather than just the latest iteration in what is in fact the very old story of the human “artificial ape”? And this status of mere continuation (rather than break with the past) would seem to hold in a philosophical sense even if whatever posthumans emerged bore no genetic, and only a techno-historical, relationship to biological humans. This somewhat different problem of philosophical clarification again emerges as the consequence of another continuum problem, namely the fact that human beings are inseparable from the techno-historical world around them, what Roden brilliantly calls “the Wide Human” (WH).

It is largely the effort to find clear boundaries within this confusing continuum that leads Roden to postulate what he calls the “disconnection thesis.” According to this thesis an entity can only properly be said to be posthuman if it is no longer contained within the Wide Human. A “Wide Human descendent is a posthuman if and only if:”

  1. It has ceased to belong to WH (the Wide Human) as a result of technical alteration.
  2. Or is a wide descendent of such a being (outside WH). (112)

Yet it isn’t clear, to me at least, why disconnection from the Wide Human is more likely to result in something different from humanity and our civilization as they exist today than anything that could emerge out of, yet still remain part of, the Wide Human itself. Roden turns to the idea of “assemblages” developed by Deleuze and Guattari in an attempt to conceptualize how such a disconnection might occur, but his idea is perhaps conceptually clearer if one comes at it from the perspective of the kind of evolutionary drift that occurs when some population of creatures becomes isolated from another, say on an island.

As Darwin realized on his journey to the Galapagos, isolation can lead quite rapidly to wide differences between the isolated variant and its parent species. The problem with applying such isolation analogies to technological development is that, unlike biological evolution (or technological development before the modern era), the evolution of technology is now globally distributed, rapid, and continuous.

Something truly disruptive seems much more likely to emerge from within the Wide Human than from some separate entity or enclave, even one located far out in space. At the very least this is because the Wide Human possesses the kind of leverage that could turn something disruptive into something transformative enough to be characterized as posthuman.

What I think we should look out for, in terms of the kind of weird divergence from current humanity that Roden is contemplating (and, though he claims Speculative Posthumanism is not normative, perhaps rooting for), is something more akin to a phase change, or to the kinds of rapid evolutionary changes seen in events like the Cambrian explosion or the opening up of whole new evolutionary theaters, as when life in the sea first moved onto the land, than to some sort of separation. It would be something like the singularity predicted by Vernor Vinge, though it might just as likely come from a direction completely unanticipated and cause a transformation that would make the world, from our current perspective, unrecognizable, and indeed, weird.

Still, what real posthuman weirdness would seem to require is something clearly identified by Roden and not dependent, to my lights, on his disconnection thesis being true. The same reality that would make whatever follows humanity truly weird is what would allow alien intelligence to be truly weird; namely, that the kinds of cognition, logic, mathematics, and science found in our current civilization, and the kinds of biology and social organization we ourselves possess, are all contingent. What that would mean, in essence, is that there are a multitude of ways intelligence and technological civilizations might manifest themselves, of which we are only a single type, and by no means the most interesting one. Life itself might be like that, with the earthly variety and its conditions just one example of what is possible, or it might not.

The existence of alien intelligence and technology very different from our own would mean we are not in the grip of any deterministic developmental process and that alternative developmental paths are available. So far we have no evidence one way or the other, though unlike Kant, who used aliens as a trope to defend a certain version of what intelligence and morality mean, we might instead imagine both extraterrestrial and earthly alternatives to our own.

While I can certainly imagine what alternative, and from our view weird, forms of cognition might look like, for example the kinds of distributed intelligence found in a cephalopod or a eusocial insect colony, it is much more difficult for me to conceive what morality and ethics might look like if divorced from our own peculiar hybrid of social existence and individual consciousness (the very features Wilson, perhaps rightfully, hopes we will preserve). For me at least, one side of what Roden calls dark phenomenology is a much deeper shade of black.

What is especially difficult for me to imagine in this regard is how the openness to alternative developmental paths that Roden, at the very least, wants us to refrain from preemptively aborting is compatible with a host of other projects surrounding our relationship to emerging technology which I find extremely important: projects such as subjecting technology to stricter, democratically established ethical constraints, including engineering moral philosophy into machines themselves as the basis for ethical decision making autonomous from human beings. Nor is it clear what guidance Roden’s Speculative Posthumanism provides when it comes to the question of how to regulate against existential risks, dangers which, if we fail to tackle them, will foreclose not only a human future but very likely the possibility of a posthuman future as well.

Roden seems to think that because there is no such thing as a human “essence” we should be free to engender whatever types of posthumans we want. As I see it, this kind of ahistoricism is akin to a parent refusing to use the lessons learned from a difficult youth to inform his own parenting. Despite the pessimism of some, humanity has actually made great moral strides over the arc of its history and should certainly use those lessons to inform whatever posthumans we choose to create.

One would think the types of posthumans whose creation we permit should be constrained by our experience of a world ill designed by the God of Job. How much suffering is truly necessary? Certainly less than sapient creatures currently experience, and thus any posthumans should suffer less than ourselves. We must be alert to, and take precautions to avoid, the danger that posthuman weirdness will emerge from those areas of the Wide Human where the greatest resources are devoted, military and corporate competition, and for that reason be terrifying.

Yet the fact that Roden has left one with questions should not detract from what he has accomplished: he has provided us with a framework in which much of modern philosophy can be used to inform the unprecedented questions facing us as a result of emerging technologies. Roden has also managed to put a very important bug in the ear of all those who would move too quickly to prohibit technologies that have the potential to prove disruptive, or who would close the door on the majority of the hopefully very long future in front of us and our descendants: in too great an effort to preserve the contingent reality of what we currently are, we risk preventing the appearance of something infinitely more brilliant in our future.

John Gray and the Puppets of Gloom

Javanese shadow puppets

Lately I’ve been thinking a lot about puppets. I know that sounds way too paleo-tech, and weird, but hear me out. Puppets are an ancient technology which, for all the millennia before the very recent past, was the primary way we experienced animated art. For the vast majority of human history, the way we watched projected figures playing out some imagined drama in front of us was as shadows cast on a wall.

Such shadows were the forerunners of movies and television, videogames and VR. And if you don’t think an artistry and brilliance similar to these newer media can be found in ancient marionettes, you should take a peek at the beautiful, bizarre world conjured up by the Javanese, whose long tradition of shadow theater remains the best there is.

Puppets have also been the jumping off point for some very deep philosophical reflections. What, after all, was the inspiration for the analogy of Plato’s cave other than the world of the shadow play? Just over two centuries ago there was Heinrich von Kleist’s short story “On the Marionette Theatre,” which used the art of puppetry as a means of reflecting on human freedom and the difference between us, animals, and machines. Philosophers can do a lot with puppets, or at least try to.

Thus when I heard that the philosopher John Gray had written a recent book whose starting point was Kleist’s short story, The Soul of a Marionette, I felt compelled to pick it up. I was ready to kick myself for not having realized first that Kleist’s story was an excellent way to address contemporary questions such as the difference between human and artificial intelligence, or the challenges posed to common notions of freedom by recent neuroscience.

As I am not alone in seeing, rather than diminishing in importance as we develop new and superior forms of entertainment, a grasp of the ancient art of puppetry might be a key to understanding our own confusing age. For it seems we are entering a golden age of puppetry, in which humans are the puppeteers of all sorts of semi-autonomous machines, from drones to artificial prostitutes. That fate seems much more likely over the next few decades than the kind of looming full machine autonomy predicted (and feared) by many today.

The specter of the marionette can also be seen in the quite legitimate fear that some recent advances in neuroscience could be used to infringe on the autonomy not only of animals but of human beings as well.

In other words, I had high hopes for The Soul of a Marionette given that its jumping off point for discussing the modern world was Kleist’s brilliant 1810 story on the philosophy of puppetry. But it seems I didn’t deserve a kick after all, for those hopes were dashed when I discovered Gray was merely using Kleist’s tale (and his entire book) as a prop for his otherwise stale, endless argument with liberals and “utopians.” Allow me to do, in my own limited way, what Gray should have done but did not. For that, those unaware will first need to hear Kleist’s tale.

It’s impossible to capture the genius of Kleist’s bizarre yet brilliant short story, but I will try nonetheless. Ostensibly it is the story of a man who, while attending a marionette show, encounters a famed dancer and choreographer named Herr C. This becomes the setting for what is really a philosophical discussion about how thought and free will often interfere with the ability of human individuals to act effectively, a theme Kleist also explored in his essay On the Gradual Production of Thoughts Whilst Speaking.

Any of us who have played a sport, given an impromptu speech, or even planted a kiss knows precisely what Kleist is talking about. Consciousness, once one gets past the initial point of learning something, can actually trip us up. Herr C compares, for the inquiring man, clumsy human dancers with the grace of marionettes, free of the limitations imposed by gravity and self-doubting minds. The inquirer himself recalls how with a mere joke he had inadvertently destroyed the unreflective confidence of a friend, which prompts Herr C to tell a story illustrating how much better the natural skills of a bear are than even the best trained human fencer. After which the two men end their conversation.

Such a story would mean little, especially for us two centuries later, had Kleist not put into the mouth of his Herr C what amounts to philosophical and even religious speculation pregnant with connections for today, specifically in light of recent advances in artificial intelligence.

At one point in their discussion the man inquiring of Herr C compares the marionettes to mere machines, like a “hurdy-gurdy,” quite unlike real human dancers. Herr C does indeed believe “that this final trace of the intellect could eventually be removed from the marionettes, so that their dance could pass entirely over into the world of the mechanical and be operated by means of a handle.” Yet rather than reflecting a diminished judgment of the marionettes vis-à-vis human dancers, Herr C believes full artificiality and automatism to be their great virtues:

He smiled and replied that he dared to venture that a marionette constructed by a craftsman according to his requirements could perform a dance that neither he nor any other outstanding dancer of his time, not even Vestris himself, could equal. Have you, he asked while I gazed thoughtfully at the ground, ever heard of those mechanical legs that English craftsmen manufacture for unfortunate people who have lost their own limbs? I replied that I had never seen such artifacts. That’s a shame, he replied, for when I tell you that these unfortunate people are able to dance with the use of them, you most certainly will not believe me. What do I mean by using the word dance? The span of their movements is quite limited, but those movements of which they are capable are accomplished with a composure, lightness, and grace that would amaze any sensitive observer.

Here Kleist, at the very least, opens up not only the possibility that a machine constructed by a craftsman according to some specifications could outperform a human being, but also that human beings with mechanical parts would be superior to merely biological humans. When the interrogator questions this claim that machines could potentially be superior to human beings, the choreographer/philosopher responds:

….it would be almost impossible for a man to attain even an approximation of a mechanical being. In such a realm only a God could measure up to this matter, and this is the point where both ends of the circular world would join one another.

For Herr C, human beings were trapped between the infinite consciousness of God and the freedom from consciousness of machines. Getting free from this trap would entail eating again from the “tree of knowledge” and this would be “the last chapter of the history of the world.”

Now Kleist, of course, had no intention of addressing what we would consider questions regarding artificial intelligence, yet given developments in that field of late, one can’t help but be struck (at least if you’re not Gray) by the fact that “On the Marionette Theatre” seems to touch on current issues, such as what Yuval Harari brilliantly characterized as the “decoupling of intelligence from consciousness”. In at least some formerly human endeavors, such as playing chess, machines with intelligence but no consciousness at all can outperform us, like the marionettes patiently observed by Herr C. Indeed, this is the big surprise of recent gains in AI: we can get very close to smart and even superior behavior without any need for general intelligence, let alone consciousness.

There are many places where Gray might have leveraged Kleist’s strange tale: from addressing what such a decoupling means for the whole Western philosophical tradition, which began, after all, with the injunction “know thyself”, to wrestling with claims that AI as currently constructed manifests intelligence more akin to puppet-show illusions like the old Mechanical Turk than to the intellect of a mind. Nor does Gray really extend Kleist’s analogy to interrogate how we, both voluntarily and involuntarily, seem hell-bent on turning ourselves into a version of automata through technologies of micro-surveillance for the purpose of self-control and efficiency, or how this connects to the project of much of philosophy itself.

Gray might also have discussed how the problem with the version of marionette freedom proposed by Herr C is that it appears blind to the dictatorship of the puppeteer who continues to exist behind the scenes. Recognizing and countering this is the first step towards ensuring technology actually does enhance human freedom, especially as that technology becomes merged with the body and brain themselves and subject to outside control.

These problems with The Soul of a Marionette stem largely from the fact that the book is ultimately the right weapon used to hit the wrong target. Although on the surface it appears that Gray is out to philosophically grapple with our current technological trajectory in light of our ancient human condition, his real target is Steven Pinker and his exhausting band of optimists.

The Soul of a Marionette, I think rightly, makes the case that the philosophy behind much of modern technology is a modern form of Gnosticism. In this case Gnosticism means the belief that the world is somehow ill constructed and that through our knowledge and efforts we can fix it. But rather than make the case for this technological version of Gnosticism, à la Steve Fuller, or use such a recognition as the basis for a critique, as does Luciano Floridi, Gray sidesteps the issue to make a rather weak case against common notions of “progress”.

It is indeed true that those who insist upon perpetual human progress share the same intellectual roots as those claiming we are rapidly approaching a technological singularity. Most importantly, both emerge out of “the death of God” in the 19th century, which left human beings responsible for both their own knowledge and their own fate, a responsibility we have been grappling with ever since.

Gray essentially adopts the old trope that while we have advanced technologically we have not advanced in our morality or our wisdom. At the same time, he accepts the destination predicted by singularitarians: that human beings will be supplanted by artificial intelligence. What distinguishes him from figures like Ray Kurzweil is that Gray wants to make it clear that the coming “spiritual machines” will carry forward the same moral flaws we have as human beings, which, contrary to Pinker and his ilk, we have retained.

The first problem here is that any suggestion that moral progress (or even technological progress) is or is not perpetual remains mere speculation; it’s not really an answerable question. The second and bigger problem for Gray’s case is that, in failing to acknowledge singularitarian technological projections as a political project, he severs our ability to influence how technological development unfolds, that is, to define its moral and ethical dimension. By failing to keep in view the still very real and relevant human beings (moral and immoral) behind our intelligent machines, he obscures the essential political and economic questions in his cloud of existential gloom.

Gray would like us to abandon whatever freedom we have left to join him in some stoic version of freedom “of the inward variety prized by the ancient world” (162). He is certainly premature in urging our retreat into the desert. Following him would only accelerate the very unraveling of our moral progress that he predicts. To step aside and let the very real political and moral gains we have made over the last few centuries disappear would not be forgiven by our descendants, unless, that is, they really have become soulless marionettes.

 

Freedom in the Age of Algorithms


Reflect for a moment on what for many of us has become the average day. You are awoken by your phone, whose clock is set via a wireless connection to a cell phone tower, connected to a satellite, all ending in the ultimate precision machine: a clock that will not lose even a second after 15 billion years of ticking. Upon waking you connect to the world through the “siren server” of your pleasure, which “decides” for you, based on an intimate profile built up over years, the very world you will see, ranging from Kazakhstan to Kardashian, and connects your intimates, those whom you “follow” and, if you’re lucky enough, your streams of “followers”.

Perhaps you use a health app to count your breakfast calories or the fat you’ve burned on your morning run, or perhaps you’ve just spent the morning playing Bejeweled and will need to pay for your sins of omission later. On your mindless drive to work you make the mistake of answering a text from the office while in front of a cop who, unbeknownst to you, has instantly run your license plate to find out if you are a weirdo or a terrorist. Thank heavens his algorithm confirms you’re neither.

When you pull into the Burger King drive-through to buy your morning coffee, you thoughtlessly end up buying yet another bacon, egg, and cheese with a side of hash browns, in spite of your best self nagging you from inside your smartphone. Having done this one too many times this month, your fried-food preference has now been sold to the highest bidders, two databanks, through which you’ll now receive annoying coupons in the mail along with even more annoying and intrusive adware while you surf the web: the first from all the fast food restaurants along the path of your morning commute, the other friendly, and sometimes frightening, suggestions that you ask your doctor about the new cholesterol drug evolocumab.

You did not, of course, pay for your meal with cash but with plastic, your money swirling somewhere out there in the ether in the form of ones and zeroes stored and exchanged on computers, only to magically re-coalesce and fill your stomach with potatoes and grease. Your purchases are correlated and crunched to define you for all the machines and their cold souls of software that gauge your value as you go about your squishy, biological, and soulless existence.

____________________________________________

The bizarre thing about this common scenario is that all of it happens before you arrive at the office, or store, or factory, or wherever it is you earn your life’s bread. Not only that, almost all of these constraints on how one views and interacts with the world have been self-imposed. The medium through which we experience much of the world, and respond to it, is now apps and algorithms of one sort or another. It’s gotten to the point that we now need apps and algorithms to experience what it’s like to be lost, which seems to, well… misunderstand the definition of being lost.

I have no idea where future historians, whatever their minds are made of, will date the start of this trend of tracking and constraining ourselves so as to maintain “productivity” and “wellness”, perhaps with all those 7-habits-of-highly-effective books that started taking up shelf space in now ancient book stores sometime in the 1980’s, but it’s certainly gotten more personal and intimate with the rise of the smartphone. In a way we’ve brought the logic of the machine out of the factory and into our lives and even our bodies: the idea of super-efficient man-machine merger as invented by Frederick Taylor and never captured better than in Charlie Chaplin’s brilliant 1936 film Modern Times.

The film is for silent pictures what The Wizard of Oz was for color, bridging two worlds: almost all of its spoken parts come through the medium of machines, including a giant flat screen that seemed entirely natural in a world that has been gone for eighty years. It portrays the Tramp asserting his humanity in the dehumanizing world of automation found in a factory where even eating lunch had been mechanized and made maximally efficient. Chaplin no doubt would have been pleasantly surprised by how well much of the world turned out, given the bleakness of economic depression and the world war he was soon facing, but I think he also would have been shocked at how much of the Tramp in us all we have given up without reason and largely of our own volition.

Still, the fact of the matter is that this new rule of apps and algorithms, much of which comes packaged in the spiritualized wrapping of “mindfulness” and “happiness”, would be much less troubling did it not smack of a new form of Marx’s “opium of the people”, diverting us from trying to understand and challenge the structural inadequacies of society.

For there is nothing inherently wrong with measuring performance as a means to pursue excellence, or with attending to one’s health and mental tranquility. There’s a sort of postmodern cynicism that kicks in whenever some cultural trend becomes too popular, and while it protects us from groupthink, it also tends to lead to intellectual and cultural paralysis. It’s only when performance measures find their way into aspects of our lives that are trivialized by quantification, such as love or family life, that I think we should earnestly worry, along, perhaps, with worrying over the atrophy of our skills to engage with the world absent these algorithmic tools.

My really deep concern lies with the way apps and algorithms now play the role of invisible instruments of power. Again, this is nothing new, to the extent that in the pre-digital age such instruments came in the form of bureaucracy and rule by decree rather than by law, as Hannah Arendt laid out in her Origins of Totalitarianism back in the 1950s:

In governments by bureaucracy decrees appear in their naked purity as though they were no longer issued by powerful men, but were the incarnation of power itself and the administrator only its accidental agent. There are no general principles behind the decree, but ever changing circumstances which only an expert can know in detail. People ruled by decree never know what rules them because of the impossibility of understanding decrees in themselves and the carefully organized ignorance of specific circumstances and their practical significance in which all administrators keep their subjects.  (244)

It’s quite easy to read the rule of apps and algorithms into that quote, especially the parts about how “only an expert can know in detail” and the “carefully organized ignorance”, a fact that became clear to me after I read what is perhaps the best book yet on our new algorithmically ruled lives, Frank Pasquale’s The Black Box Society: The Secret Algorithms That Control Money and Information.

I have often wondered what exactly is being financially gained by gathering up all this data on individuals, given how obvious and ineffective the so-called targeted advertisements that follow me around on the internet seem to be, and Pasquale managed to explain this clearly. What is being “traded” is my “digital reputation”, whether as a debtor, or an insurance risk (medical or otherwise), or a customer with a certain depth of pocket and identity (“father, 40s”, etc.), or even the degree to which I can be considered a “sucker” for scam and con artists of one sort or another.

This is a reputation matrix much different from earlier arrangements based on personal knowledge, or from later impersonal systems such as credit reporting (though both had their abuses) or health records under HIPAA, in the sense that the new digital form of reputation is largely invisible to me, its methodology inscrutable, its declarations of my “identity” immune to challenge and immutable. It is, as Pasquale so aptly terms it, a “black box” in the strongest sense of that word: unintelligible and opaque to the individual within it, like the rules Kafka’s characters suffer under in his novels about the absurdity of hyper-bureaucracy (and of course more), The Castle and The Trial.

Much more troubling, however, is how such corporate surveillance interacts with the blurring of the line between intelligence and police functions, the distinction between the foreign and domestic spheres, that has been one of the defining features of our constitutional democracy. As Pasquale reminds us:

Traditionally, a critical distinction has been made between intelligence and investigation. Once reserved primarily for overseas spy operations, “intelligence” work is anticipatory, it is the job of agencies like the CIA, which gather potentially useful information on external enemies that pose a threat to national security. “Investigation” is what police do once they have evidence of a crime. (47)

It isn’t only that such moves towards a model of “predictive policing” mean the undoing of constitutionally guaranteed protections and legal due process (presumptions of innocence, and 5th amendment protections); it is also that they have far too often turned the police into a political instrument who, as Pasquale documents, have monitored groups ranging from peaceful protesters to supporters of Ron Paul, all in the name of preventing a “terrorist act” by members of these groups. (48)

The kinds of illegal domestic spying performed by the NSA and its acronymic companions were built on the back of an already existing infrastructure of commercial surveillance. The same could be said for the blurring of the line between intelligence and investigation exemplified by the creation of “fusion centers” after 9/11, which repurposed espionage tools once confined to the intelligence services, turning them towards domestic targets for the purpose of controlling crime.

Both domestic spying by federal intelligence agencies and new forms of invasive surveillance by state and local law enforcement were enabled by the commercial surveillance architecture established by corporate behemoths such as Facebook and Google, to whom citizens had surrendered their right to privacy, seemingly willingly.

Given the degree to which these companies now hold near monopolies over the information citizens receive, Pasquale thinks it would be wise to revisit the breakup of the “trusts” in the early part of the last century. It’s not only that the power of these companies is already enormous; it’s that were they ever turned into overt political tools they would undermine or upend democracy itself, given that citizen action requires the free exchange of information to achieve anything at all.

The black box features of our current information environment have not just managed to colonize the worlds of security, crime, and advertisement; they have become the defining feature of late capitalism itself. A great deal of the 2008 financial crisis can be traced to the computerization of finance over the 1980’s. Computers were an important feature of the pre-crisis argument that we had entered a period of “The Great Moderation”. We had become smart enough, and our markets sophisticated enough (so the argument went), that there would be no replay of something like the 1929 Wall Street crash and the Great Depression. Unlike in the prior era, markets without debilitating crashes were to come not from government regulation containing the madness of crowds, with their bubbles and busts, but in part from new computer modeling, which would exorcise from the markets the demon of “animal spirits” and allow human beings to do what they had always dreamed of doing: to know the future. Pasquale describes it this way:

As information technology improved, lobbyists could tell a seductive story: regulators were no longer necessary.  Sophisticated investors could vet their purchases.  Computer models could identify and mitigate risk. But the replacement of regulation by automation turned out to be as fanciful as flying cars or space colonization. (105)

Computerization gave rise to ever more sophisticated financial products, such as mortgage-backed securities, based on ever more sophisticated statistical models that, by bundling investments, gave the illusion of stability. Even had there been more prophets crying from the wilderness that the system was unstable, they would not have been able to prove it, for the models being used were “a black box, programmed in proprietary software with the details left to the quants and the computers”. (106)

It seems there is a strange dynamic at work throughout the digital economy, not just in finance but certainly exhibited in full force there, where the whole game is in essence a contest of asymmetric information. You either have the data someone else lacks to make a trade, you process that data faster, or both. Keeping your algorithms secret becomes a matter of survival, for as soon as they are out there they can be exploited by rivals or cracked by hackers; or at least this is the argument companies make. One might doubt it, though, once you see how nearly ubiquitous this corporate secrecy and patent hoarding has become in areas radically different from software, such as pharmaceuticals, or at biotech corporations like Monsanto, which hold patents on life itself and whose logic leads to something like Paolo Bacigalupi’s dystopian novel The Windup Girl.

For Pasquale, complexity itself becomes a tool of obfuscation in which corruption and skimming can’t help but become commonplace. The contest of asymmetric information means companies are engaged in what amounts to an information war, where the goal is as much to obscure real value from rivals and clients as to profit from the resulting distortion. In such an atmosphere markets stop being able to perform the informative role Friedrich Hayek thought was their very purpose. Here’s Pasquale himself:

…financialization has created enormous uncertainty about the value of companies, homes, and even (thanks to the pressing need for bailouts) the once rock solid promises of governments themselves.

Finance thrives in this environment of radical uncertainty, taking commissions in cash as investors (or, more likely, their poorly monitored agents) race to speculate on or hedge against an ever less knowable future. (138)

Okay, if Pasquale has clearly laid out the problem, what is his solution? I could go through a list of his suggestions, but I should stick to the general principle. Pasquale’s goal, I think, is to restore our faith in our ability to publicly shape digital technology in ways that better reflect our democratic values. The argument that software is unregulable is an assumption, not a truth, and the tools and models of regulation and public input developed over the last century for the physical world are equally applicable to the digital one.

We have already developed a complex, effective system of privacy protections in the form of HIPAA, and there are already examples of mandating fair, understandable contracts (as opposed to indecipherable “terms of service” agreements) in the form of various consumer protection provisions. Up until the 1980’s we were capable of regulating the boom and bust cycles of markets without crashing the economy. Lastly, the world did not collapse when earlier corporations that had grown so large they threatened not only the free competition of markets but, more importantly, democracy itself were broken up, and it would not collapse were the likes of Facebook, Google, or the big banks broken up either.

Above all, Pasquale urges us to seek out some way to make the algorithmization of the world intelligible and open to the political, social, and ethical influence of a much broader segment of society than the current group of programmers and their paymasters who have so far been the only ones running the show. For if we do not assert such influence, and algorithms continue to structure more and more of our relationship with the world and each other, then algorithmization and democracy would seem to be on a collision course. Or, as Taylor Owen pointed out in a recent issue of Foreign Affairs:

If algorithms represent a new ungoverned space, a hidden and potentially ever-evolving unknowable public good, then they are an affront to our democratic system, one that requires transparency and accountability in order to function. A node of power that exists outside of these bounds is a threat to the notion of collective governance itself. This, at its core, is a profoundly undemocratic notion—one that states will have to engage with seriously if they are going to remain relevant and legitimate to their digital citizenry who give them their power.

Pasquale has given us an excellent start to answering the question of how democracy, and freedom, can survive in the age of algorithms.

 

Auguries of Immortality, Malthus and the Verge

Hindu Goddess Tara

Sometimes, if you want to see something in the present clearly, it’s best to go back to its origins. This is especially true when dealing with some monumental historical change, a phase transition from one stage to the next. The reason I think this is helpful is that those lucky enough to live at the beginning of such events have no historical or cultural baggage to obscure their forward view. When you live in the middle, or at the end, of an era, you find yourself surrounded, sometimes suffocated, by all the good and bad that has come as a result. As a consequence, understanding the true contours of your surroundings or your ultimate destination is almost impossible; your nose is stuck to the glass.

The question is, are we ourselves at the beginning of such an era, in the middle, or at an end? How would we even know?

If I were to make the case that we find ourselves in either the middle or the end of an era, I know exactly where I would start. In 1793 the eccentric English writer William Godwin published his Enquiry Concerning Political Justice and its Influence on Morals and Happiness, a book which few people remember. What Godwin is remembered for instead is his famous daughter Mary Shelley, and her even more famous monster, though I should add that if you like thrillers you can thank Godwin for having invented them.

Godwin’s Enquiry, however, was a different kind of book. It grew out of the environment of a time which, in Godwin’s eyes at least, seemed pregnant with once unimaginable hope. The scientific revolution had brought about a fuller understanding of nature and her laws than anything achieved by the ancients, the superstitions of earlier eras had been abandoned for a new age of enlightenment, the American Revolution had brought into the world a whole new form of government based on Enlightenment principles, and, as Godwin wrote, a similar and much more important revolution had overthrown the monarchy in France.

All this along with the first manifestations of what would become the industrial revolution led Godwin to speculate in the Enquiry that mankind had entered a new era of perpetual progress. Where then could such progress and mastery over nature ultimately lead? Jumping off of a comment by his friend Ben Franklin, Godwin wrote:

 Let us here return to the sublime conjecture of Franklin, that “mind will one day become omnipotent over matter.” If over all other matter, why not over the matter of our own bodies? If over matter at ever so great a distance, why not over matter which, however ignorant we may be of the tie that connects it with the thinking principle, we always carry about with us, and which is in all cases the medium of communication between that principle and the external universe? In a word, why may not man be one day immortal?

Here, then, we can find evidence for the recent claim of Yuval Harari that “The leading project of the Scientific Revolution is to give humankind eternal life.” (268) In later editions of the Enquiry, however, Godwin dropped the suggestion of immortality, though it seems he did so not so much because of the criticism such comments provoked, or because he stopped believing in it, but because it seemed too much a termination point for his notion of progress, which he now thought really would extend forever into the future. For his key point was that the mind’s growing understanding would result in an ever increasing power over the material world, in a process that would literally never end.

Almost at the exact same time as Godwin was writing his Enquiry, another figure was making almost exactly the same argument, including the idea that scientific progress would eventually result in indefinite human lifespans. The Marquis de Condorcet’s Sketch for a Historical Picture of the Progress of the Human Spirit was written by a courageous man on the run from a French Revolutionary government that wanted to cut off his head. Amazingly, even while hunted down during the Terror, Condorcet retained his long-term optimism regarding the ultimate fate of humankind.

A young English parson with a knack for the just emerging science of economics not only wasn’t buying it, he wanted to scientifically prove (though I am using the term loosely) exactly why such optimism should not be believed. This was Thomas Malthus, whose name, quite mistakenly, has spawned its own adjective, Malthusian, which has come to mean, essentially, environmental disaster caused by our own human hands and faults.

As is commonly known, Malthus’ argument was that historically there has been a mismatch between the growth of population and the production of food, which sooner or later has led to famine and decline. It was Godwin’s and Condorcet’s claims regarding future human immortality that were, in part, responsible for Malthus stumbling upon his specific argument centered on population. For the obvious rejoinder to those claiming that the human lifespan would increase forever was: what would we do with all of these people?
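The arithmetic behind Malthus’ famous claim, that unchecked population grows geometrically while food production grows only arithmetically, can be seen in a toy sketch (the starting values and rates below are arbitrary illustrations, not Malthus’ own figures):

```python
# A minimal sketch of Malthus' mismatch: population doubling each
# generation (geometric growth) against food gaining a fixed increment
# each generation (arithmetic growth). Units are arbitrary.
population = 1.0
food = 1.0

for generation in range(1, 7):
    population *= 2   # geometric: multiply by a constant ratio
    food += 1         # arithmetic: add a constant increment
    print(f"gen {generation}: population {population:.0f}, food {food:.0f}")

# By the sixth generation population (64) has far outrun food (7),
# the gap Malthus believed famine must eventually close.
```

However the rates are chosen, any geometric series eventually overtakes any arithmetic one, which is why Malthus thought the conclusion followed from the form of the growth laws alone.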

Both Godwin and Condorcet thought they had answered this question by claiming that in the future the birth rate would decline to zero. Stunningly, this has largely proven correct: population growth rates have declined in parallel with increases in longevity. Though rather than declining due to the victory of “reason” and the conquest of the “passions”, as both Godwin and Condorcet thought, they declined because sex was, for the first time in human history, decoupled from reproduction through the creation of effective forms of birth control.

So far, at least, it seems Godwin and Condorcet have gotten the better side of the argument. Since the Enquiry we have experienced more than two centuries of uninterrupted progress in which mind has gained increasing mastery over the material world. And though we are little closer to the optimists’ dream of “immortality”, their prescient guess that longevity would be coupled with a declining birth rate would seem to clear the goal of increased longevity from being self-defeating on Malthusian grounds.

This would not be, of course, the first time Malthus has been shown to be wrong. Yet his ideas, or a caricature of them, have a long history of retaining their hold over our imagination. Exactly why this is the case is a question explored in detail by Robert J. Mayhew in his excellent Malthus: The Life and Legacies of an Untimely Prophet. Malthus’ argument in his famous, if rarely actually read, An Essay on the Principle of Population has become a sort of secular version of Armageddon, his views latched onto by figures both sinister and benign in the two centuries since the essay’s publication.

Malthus’ argument was used against laws to alleviate the burdens of poverty, which, it was argued, would only increase population growth and hasten an ultimate reckoning (and this view, at least, was close to that of Malthus himself). His ideas were used by anti-immigrant and racist groups in the 19th and early 20th centuries. Hitler’s expansionist and genocidal policy in eastern Europe was justified on Malthusian grounds.

On the more benign side, Malthusian arguments were used as a spur to the Green Revolution in agriculture in the 1960’s (though Mayhew thinks the warnings of pending famine were political, arising from the Cold War, and overdone). Malthusianism resurfaced in the 1970’s when Paul Ehrlich warned of a “population bomb” that never came, slid during the Oil Crisis into fear over resource constraints, and can now be found in predictions about the coming “resource” and “water” wars. There is also a case where Malthus really may have his revenge, though more on that in a bit.

And yet we would be highly remiss were we not to take the question Malthus posed seriously. For what he was really inquiring about is whether there might be ultimate limits on the ability of the human mind to shape the world in which it found itself. What Malthus was looking for was the boundary, or verge, of our limits as established by the laws of nature as he understood them. Those who espoused the new human perfectionism, such as Godwin and Condorcet, faced what appeared to Malthus to be an insurmountable barrier to their theories being considered scientific, no matter how much they attached themselves to the language and symbols of the recent successes of science. For what they were predicting had no empirical basis: it had never happened before. Given that knowledge did indeed seem to increase through history, if merely as a consequence of there having been time for it to accumulate, it was still the case that the kind of radical and perpetual progress Godwin and Condorcet predicted was absent from human history. Malthus set out to provide a scientific argument for why.

In philosophical terms Malthus’ Essay is best read as a theodicy, an attempt, like that of Leibniz before him, to argue that even in light of the world’s suffering we live in the “best of all possible worlds”. As Newton had done for falling objects, Malthus sought the laws of nature, as designed by his God, that explained the development of human society. Technological and social progress had remained static in the past even as human knowledge regarding the world accumulated over generations, because the gap between mind and matter is what makes us uniquely human. What caused us to most directly experience this gap, and caused progress to remain static? Malthus thought he had pinned the source of stasis: famine and population decline.

As with any other physical system, the question for human societies boiled down to how much energy was available to do meaningful work. Given that the vast majority of work in Malthus' day was done by things that required energy in the form of food, whether humans or animals, the limited amount of land that could be efficiently tilled set an upper bound on the size, complexity, and progress of any human society.

What Malthus missed, of course, was that the relationship between food and work was about to be severed. Or rather, the new machines did consume a form of processed "food": organic material that had been chemically "constructed" and accumulated over the eons in the form of fossil fuels, which offered an easily accessible type of energy different in kind from anything that had come before it.

The sheer force of the age of machines Malthus had failed to foresee did indeed break with the flatline of human history he had identified in every prior age. That fact has perhaps never been shown more clearly than in Ian Morris' simple graph below.

Ian Morris Great Divergence Graph

What made possible this break between all of past human history and the last few centuries is the very thing that could, tragically, prove Malthus right after all: fossil fuels. For any society before 1800, the majority of energy not derived from food came in the form of wood, whether as timber itself or as charcoal. But as Lewis Dartnell pointed out in a recent piece in Aeon, the world before fossil fuels posed a seemingly insurmountable (should I say Malthusian?) dilemma; namely:

The central problem is that woodland, even when it is well-managed, competes with other land uses, principally agriculture. The double-whammy of development is that, as a society’s population grows, it requires more farmland to provide enough food and also greater timber production for energy. The two needs compete for largely the same land areas.

Dartnell's point is that we have been both extremely lucky and unlucky in how accessible and potent fossil fuels have been. On the one hand, fossil fuels gave us a rather short path to technological society; on the other, not only will it be difficult to wean ourselves from them, it is hard to imagine how we could reboot as a civilization should we suffer collapse, having already used up most of the world's most easily accessed forms of energy.

It is a useful exercise, then, to continue to take Malthus' argument seriously, for even if we escape the second Malthusian trap (fossil-fuel-induced climate change) set by the very thing that allowed us to break free from the trap Malthus originally identified (our need to literally grow our energy), there are other predictable traps that likely lie in store.

One of these traps, the one that interests me the most, has to do with the "energy problem" that Malthus understood in terms of the production of food. As I've written about before, and as brought to my attention by the science writer Lee Billings in his book Five Billion Years of Solitude, there is a good and little-discussed case from physics for thinking we might be closer to the end of the era that began with the industrial revolution than to its middle or even its beginning.

This physics of civilizational limits comes from Tom Murphy of the University of California, San Diego, who writes the blog Do The Math. Murphy's argument, as profiled by the BBC, makes some of the following points:

  • Assuming rising energy use and economic growth remain coupled, as they have in the past, we are confronted with the absurdity of exponentials. At a 2.3 percent growth rate, within 2,500 years we would require all the energy from all the stars in the Milky Way galaxy to function.
  • At 3 percent growth, within four hundred years we will have boiled away the earth's oceans, not because of global warming, but from the waste heat that is the normal byproduct of energy production. (Even clean fusion leaves us boiling away the world's oceans for the same reason.)
  • Renewables push out this reckoning, but not indefinitely. At a 3 percent growth rate, even if solar efficiency were 100 percent, we would need to capture all of the sunlight hitting the earth within three hundred years.
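Murphy's point about the absurdity of exponentials is easy to check with a few lines of arithmetic. The sketch below is my own, not Murphy's code, and the starting figures (roughly 18 terawatts of current world power use, roughly 174,000 terawatts of sunlight intercepted by the Earth) are round assumptions; the qualitative result, a ceiling reached within a few centuries, doesn't depend on their precision:

```python
import math

# Rough assumptions (not Murphy's exact inputs): world power use ~18 TW today,
# total sunlight intercepted by Earth ~174,000 TW.
CURRENT_TW = 18.0
SOLAR_TW = 174_000.0

def years_until(limit_tw, start_tw=CURRENT_TW, growth=0.03):
    """Years of compound growth before start_tw reaches limit_tw.

    Solves start_tw * (1 + growth)**t = limit_tw for t.
    """
    return math.log(limit_tw / start_tw) / math.log(1.0 + growth)

# At 3 percent growth we exhaust the entire solar budget in roughly
# three centuries, matching the "within three hundred years" bullet above.
print(f"3.0% growth: {years_until(SOLAR_TW):.0f} years")
print(f"2.3% growth: {years_until(SOLAR_TW, growth=0.023):.0f} years")
```

Notice that slowing growth from 3 percent to 2.3 percent buys only about another century, which is the sense in which renewables "push out this reckoning, but not indefinitely."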

Will such limits prove to be correct and Malthus, in some sense, be shown to have been right all along? Who knows. The only way we’d have strong evidence to the contrary is if we came across evidence of civilizations with a much, much greater energy signature than our own. A recent project out of Penn State to do just that, which looked at 100,000 galaxies, found nothing, though this doesn’t mean the search is over.

Relating back to my last post: the universe may lean toward giving rise to complexity in the way the physicist Jeremy England suggests, but the landscape is littered with great canyons where evolution gets stuck for very long periods of time, which explains its blindness as perceived by someone like Henry Gee. The scary thing is that getting out of these canyons is a race against time: complex life could have been killed off by some disaster shortly after the Cambrian explosion; we could have remained hunter-gatherers and failed to develop agriculture before another ice age did us in; some historical contingency could have prevented industrialization before a global catastrophe we are now advanced enough to respond to wiped us out.

If Tom Murphy is right we are now in a race to secure exponentially growing sources of energy, and it is a race we are destined to lose. The reason we don’t see any advanced civilizations out there is because the kind of growth we’ve extrapolated from the narrow slice of the past few centuries is indeed a finite thing as the amount of energy such growth requires reaches either terrestrial or cosmic limits. We simply won’t be able to gain access to enough energy fast enough to keep technological progress going at its current rate.

Of course, even if we believe that progress has some limit out there, that does not necessarily entail that we shouldn't pursue it, in many of its forms, until we hit the verge itself. Arguments that there might be limits to our technological progress are one thing; arguments for limits to our moral progress, our efforts to address suffering in the world, are quite another, for there accepting limits would mean accepting some level of suffering or injustice as just the "way things are". That we should not accept this, Malthus himself nearly concluded:

 Evil exists in the world not to create despair but activity. We are not patiently to submit to it, but to exert ourselves to avoid it. It is not only the interest but the duty of every individual to use his utmost efforts to remove evil from himself and from as large a circle as he can influence, and the more he exercises himself in this duty, the more wisely he directs his efforts, and the more successful these efforts are, the more he will probably improve and exalt his own mind and the more completely does he appear to fulfil the will of his Creator. (124-125)

The problem lies with the justification of the suffering of individual human beings in any particular form as "natural". The crime at the heart of many versions of Malthusianism is this kind of aggregation of human beings into some kind of destructive force, which leads to the denial of the only scale at which someone's true humanity can be seen: the level of the individual. Such moral blindness, which sees only the crowd, can be found in the most famous piece of modern Malthusianism, Paul Ehrlich's The Population Bomb, where he discusses his experience of Delhi.

The streets seemed alive with people. People eating, people washing, people sleeping. People visiting, arguing, and screaming. People thrusting their hands through the taxi window, begging. People defecating and urinating. People clinging to the buses. People herding animals. People, people, people, people. As we moved slowly through the mob, hand horn squawking, the dust, noise, heat, and the cooking fires gave the scene a hellish aspect. Would we ever get to our hotel? All three of us were, frankly, frightened. (p. 1)

Stripped of this inability to see that the value of human beings can only be grasped at the level of the individual, and that suffering can only be assessed in a moral sense at this individual level, Malthus can help remind us that our minds themselves emerge out of their confrontation and interaction with a material world whose boundaries we constantly explore, overcome, and confront again. The world itself, he wrote, was probably "a mighty process for awakening matter into mind", and even the most ardent proponents of human perfectionism, modern-day transhumanists or singularitarians, or just plain old humanists, would agree with that.

* Image: Tara (Devi): Hindu goddess of the unquenchable hunger that compels all life.

Truth and Prediction in the Dataclysm

The Deluge by Francis Danby. 1837-1839

Last time I looked at the state of online dating. Among the figures mentioned was Christian Rudder, one of the founders of the dating site OkCupid and the author of a book on big data called Dataclysm: Who We Are When We Think No One's Looking, which somehow manages to be both laugh-out-loud funny and deeply disturbing at the same time.

Rudder is famous, or infamous depending on your view of the matter, for having written a piece about his site with the provocative title We Experiment on Human Beings! There he wrote:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

That statement might set the blood of some boiling, but my own negative reaction to it is somewhat tempered by the fact that Rudder's willingness to run experiments on his site's users originates, it seems, not in any conscious effort to be more successful at manipulating them, but as a way to quantify our ignorance. Or, as he puts it in the piece linked to above:

I’m the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.

Rudder eventually turned his experiments on the data of OkCupid's users into his book Dataclysm, which displays the same kind of brutal honesty and acknowledgement of the limits of our knowledge. What he is trying to do is make sense of the deluge of data now inundating us. The only way we have found to do this is to create sophisticated algorithms that allow us to discern patterns in the flood. The problem with using algorithms to try to organize human interactions (which have themselves now become points of data) is that their users are often reduced to the version of what being a human being is that has been embedded by the algorithm's programmers. Rudder is well aware of and completely upfront about these limitations, and refuses to make any special claims for algorithmic wisdom over the normal human sort. As he puts it in Dataclysm:

That said, all websites, and indeed all data scientists, objectify. Algorithms don't work well with things that aren't numbers, so when you want a computer to understand an idea, you have to convert as much of it as you can into digits. The challenge facing sites and apps is thus to chop and jam the continuum of human experience into little buckets 1, 2, 3, without anyone noticing: to divide some vast, ineffable process- for Facebook, friendship, for Reddit, community, for dating sites, love- into pieces a server can handle. (13)

At the same time, Rudder appears to see the data collected on sites such as OkCupid as a sort of mirror, reflecting back to us, in ways we have never had available before, the real truth about ourselves laid bare of the social conventions and politeness that tend to obscure the way we truly feel. And what Rudder finds in this data is not a reflection of the inner beauty of humanity one might hope for, but something more like the portrait out of The Picture of Dorian Gray.

As an example, take what Rudder calls "Wooderson's Law", after the character from Dazed and Confused who says in the film, "That's what I love about these high school girls: I get older while they stay the same age". What Rudder has found is that heterosexual male attraction to women peaks when those women are in their early 20s and thereafter falls precipitously. On OkCupid at least, women in their 30s and 40s are effectively invisible when competing against women in their 20s for male sexual attention. Fortunately for heterosexual men, women are more realistic in their expectations and tend to report the strongest attraction to men roughly their own age, until sometime in men's 40s, when male attractiveness also falls off a cliff… gulp.

Another finding from Rudder's work is not just that looks rule, but just how absolutely they rule. In his aforementioned piece, Rudder lays out how the vast majority of users essentially equate personality with looks. A particularly stunning woman can find herself with a 99 percent personality rating even if she has not one word in her profile.

These are perhaps somewhat banal and even obvious discoveries about human nature that Rudder has been able to mine from OkCupid's data, and to my mind at least they are less disturbing than the deep-seated racial bias he finds there as well. Again, at least among OkCupid's users, dating preferences are heavily skewed against black men and women. Not just whites, it seems, but all other racial groups, Asians and Hispanics among them, would apparently prefer to date someone of a race other than African: disheartening for the 21st century.

Rudder looks at other dark manifestations of our collective self beyond those found in OkCupid's data as well. Try using Google search the way one would play the game Taboo. The search suggestions that pop up in the Google search bar, after all, are compiled on the basis of users' most popular searches and thus provide a kind of gauge on what 1.17 billion human beings are thinking. Try these, some of which Rudder plays himself:

“why do women?”

“why do men?”

“why do white people?”

“why do black people?”

“why do Asians?”

“why do Muslims?”

The exercise gives a whole new meaning to Nietzsche’s observation that “When you stare into the abyss, the abyss stares back”.

Rudder also looks at the ability of social media to engender mobs. Take this case from Twitter in 2014. On New Year's Eve of that year a young woman tweeted:

“This beautiful earth is now 2014 years old, amazing.”

Science was obviously not her strength in school, but what should have led to collective giggles, or perhaps a polite correction regarding terrestrial chronology, instead ballooned into a storm of tweets like this:

“Kill yourself”

And:

“Kill yourself you stupid motherfucker”. (139)

As a recent study has pointed out, the emotion second most likely to go viral is rage; we can count ourselves very lucky that the emotion most likely to go viral is awe.

Then there's the question of the structure of the whole thing. Like Jaron Lanier, Rudder is struck by the degree to which the seemingly democratized architecture of the Internet consistently manifests the opposite, revealing itself as following Zipf's Law, which Rudder concisely reduces to:

rank x number = constant (160)
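A toy illustration of Rudder's formula (my own, not from the book): in an ideal Zipf distribution the item at rank r appears constant / r times, so multiplying rank by count recovers roughly the same constant all the way down the list.

```python
# Ideal Zipfian popularity list: the count at rank r is C / r,
# where C is an assumed count for the single most popular item.
C = 12_000
counts = [C // r for r in range(1, 11)]

for rank, count in enumerate(counts, start=1):
    # rank * count stays approximately constant (exactly C up to
    # integer rounding), which is Zipf's Law in Rudder's compact form.
    print(rank, count, rank * count)
```

The same pattern shows up whether the "items" are word frequencies, city populations, or, as Rudder notes, views of dating profiles: a few entries at the top dwarf the long tail beneath them.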

Both the economy and the society of the Internet age are dominated by "superstars": companies such as Google and Facebook that so far outstrip their rivals in search or social media that they might be called monopolies, along with celebrities, musical artists, and authors. Zipf's Law also seems to apply to dating sites, where a few profiles dominate the class of those viewed by potential partners. In the environment of a networked society, where invisibility is the common fate of almost all of us and success often hinges on increasing our own visibility, we are forced to turn ourselves toward "personal branding" and obsess over "Klout scores". It's not a new problem, but I wonder how much all this effort at garnering attention is stealing time from the actual work that makes that attention worthwhile and long-lasting.

Rudder is uncomfortable with all this algorithmization while at the same time accepting its inevitability. He writes of the project:

Reduction is inescapable. Algorithms are crude. Computers are machines. Data science is trying to make sense of an analog world. It’s a by-product of the basic physical nature of the micro-chip: a chip is just a sequence of tiny gates.

From that microscopic reality an absolutism propagates up through the whole enterprise, until at the highest level you have the definitions, data types and classes essential to programming languages like C and JavaScript.  (217-218)

The thing is, for all his humility about the effectiveness of big data so far, and his admittedly limited ability to draw solid conclusions from OkCupid's data, Rudder seems to place undue trust in the ability of large corporations and the security state to succeed at the same project. Much deeper data mining and superior analytics, he thinks, separate his efforts from those of the really big boys. Rudder writes:

Analytics has in many ways surpassed the information itself as the real lever to pry. Cookies in your web browser and guys hacking for your credit card numbers get most of the press and are certainly the most acutely annoying of the data collectors. But they've taken hold of a small fraction of your life and for that they've had to put in all kinds of work. (227)

He compares them to Mike Myers' Dr. Evil holding the world hostage "for one million dollars"

… while the billions fly to the real masterminds, like Acxiom. These corporate data marketers, with reach into bank and credit card records, retail histories, and government filings like tax accounts, know stuff about human behavior that no academic researcher searching for patterns on some website ever could. Meanwhile the resources and expertise the national security apparatus brings to bear make enterprise-level data mining look like Minesweeper. (227)

Yet do we really know this faith in big data isn't an illusion? What discernible effects on the overall economy, or even on consumer behavior, are clearly traceable to the juggernauts of big data such as Acxiom? For us to believe in the power of data, shouldn't someone have to show us the data that it works, and not just the promise that it will transform the economy once it has achieved maximum penetration?

On that same score, what degree of faith should we put in the powers of big data when it comes to security? As far as I am aware, no evidence has been produced that mass surveillance has prevented attacks: it didn't stop the Charlie Hebdo killers. Just as importantly, it seemingly hasn't prevented our public officials from being caught flat-footed and flabbergasted in the face of international events such as the revolution in Egypt or the war in Ukraine. And these latter big events would seem to be precisely the kinds of predictions big data should find relatively easy: monitoring broad public sentiment as expressed through social media and across telecommunications networks, and marrying that with inside knowledge of the machinations of the major political players at the storm center of events.

On this point of not yet mastering the art of anticipating the future despite the mountains of data being collected, Anne Neuberger, Special Assistant to the NSA Director, gave a fascinating talk at the Long Now Foundation in August of last year. During a sometimes intense Q&A she had this exchange with one of the moderators, the Stanford professor Paul Saffo:

Saffo: With big data, as a friend likes to say, "perhaps the data haystack that the intelligence community has created has grown too big to ever find the needle in."

Neuberger : I think one of the reasons we talked about our desire to work with big data peers on analytics is because we certainly feel that we can glean far more value from the data that we have and potentially collect less data if we have a deeper understanding of how to better bring that together to develop more insights.

It's a strange admission from a spokesperson for the nation's premier cyber-intelligence agency that for their surveillance model to work they have to learn from the analytics of private-sector big data companies whose own models are far from having proven their effectiveness.

Perhaps, then, Rudder should have extended his skepticism beyond the world of dating websites. For me, I'll only know big data in the security sphere works when our politicians, Noah-like, seem unusually well prepared for a major crisis that the rest of us data-poor chumps didn't see coming a mile away.

 

Sex and Love in the Age of Algorithms

Eros and Psyche

How's this for a 21st-century Valentine's Day tale: a group of religious fundamentalists wants to redefine human sexual and gender relationships based on a more than 2,000-year-old religious text. Yet instead of doing this by seizing hold of the cultural and political institutions of society, a task they find impossible, they create an algorithm: once people enter it, their experience is shaped by religiously derived assumptions they cannot see. People who enter this world have no control over their actions within it, and surrender their autonomy for the promise of finding their "soul mate".

I'm not writing a science-fiction story: it's a tale that's essentially true.

One of the first places, perhaps the only place, where the desire to compress human behavior into algorithmically processable and rationalized "data" has run into a wall is the ever-so-irrational realm of sex and love. Perhaps I should have titled this piece "Cupid's Revenge", for the domain of sex and love has proved itself so unruly and non-computable that something now almost unbelievable has happened: real human beings have been brought back into the process of making actual decisions that affect their lives, rather than relying on silicon oracles to tell them what to do.

It's a story not much known and therefore worth telling. The story begins with the exaggerated claims of one of the first and biggest online dating sites: eHarmony. Founded in 2000 by Neil Clark Warren, a clinical psychologist and former marriage counselor, eHarmony promoted itself as more than a mere dating site, claiming that it had the ability to help those using its service find their "soul mate". As its senior research scientist, Gian C. Gonzaga, would put it:

 It is possible “to empirically derive a matchmaking algorithm that predicts the relationship of a couple before they ever meet.”

At the same time it made such claims, eHarmony was also very controlling in the way its customers were allowed to use its dating site. Members were not allowed to search for potential partners on their own, but were directed to "appropriate" matches based on a 200-item questionnaire and the site's algorithm, which remained opaque to its users. This model of what dating should be was doubtless driven by Warren's religious background, for in addition to his psychological credentials, Warren was also a Christian theologian.

By 2011 eHarmony had garnered the attention of sceptical social psychologists, most notably Eli J. Finkel, who, along with his co-authors, wrote a critical piece for the American Psychological Association that year on eHarmony and related online dating sites.

What Finkel wanted to know was whether claims such as eHarmony's, that it had discovered some ideal way to match individuals with long-term partners, actually stood up to critical scrutiny. What he and his co-authors concluded was that while online dating had opened up a new frontier for romantic relationships, it had not solved the problem of how to actually find the love of one's life. Or as he later put it in a recent article:

As almost a century of research on romantic relationships has taught us, predicting whether two people are romantically compatible requires the sort of information that comes to light only after they have actually met.

Faced with critical scrutiny, eHarmony felt compelled to do something, to my knowledge, none of the programmers of the various algorithms that now mediate much of our relationship with the world have done; namely, to make the assumptions behind their algorithms explicit.

As Gonzaga explained it, eHarmony's matching algorithm was based on six key characteristics of users, including things like "level of agreeableness" and "optimism". Yet as another critic of eHarmony, Dr. Reis, told Gonzaga:

That agreeable person that you happen to be matching up with me would, in fact, get along famously with anyone in this room.

Still, the major problem critics found with eHarmony wasn't just that it made exaggerated claims for the effectiveness of its romantic algorithms, which were at best a version of skimming; it's that it asserted nearly complete control over the way its users defined what love actually was. As is the case with many algorithms, the one used by eHarmony was a way for its designers and owners to constrain those using it, to impose, rightly or wrongly, their own value assumptions about the world.

And like many classic romantic tales, this one ended with the rebellion of messy human emotion over reason and paternalistic control. Social psychologists weren't the only ones who found eHarmony's model constraining, and they weren't the first to notice its flaws. One of the founders of an alternative dating site, Christian Rudder of OkCupid, has noted that much of what his organization has done was in light of the exaggerated claims for the efficacy of the algorithms and the top-down constraints imposed by the creators of eHarmony. But it is another, much-maligned dating site, Tinder, that proved to be the real rebel in this story.

Critics of Tinder, where users swipe through profile pictures to find potential dates, have labeled it a "hook-up" site that encourages shallowness. Yet Finkel concludes:

Yes, Tinder is superficial. It doesn’t let people browse profiles to find compatible partners, and it doesn’t claim to possess an algorithm that can find your soulmate. But this approach is at least honest and avoids the errors committed by more traditional approaches to online dating.

And appearance-driven sites are unlikely to be the last word in online dating, especially for older Romeos and Juliets who would like to go a little deeper than looks. The psychologist Robert Epstein, working at the MIT Media Lab, sees two up-and-coming trends that will likely further humanize the 21st-century dating experience. The first is the rise of non-video-game-like virtual dating environments. As he describes it:

….so at some point you will be able to have, you know, something like a real date with someone, but do it virtually, which means the safety issue is taken care of and you'll find out how you interact with someone in some semi-real setting or even a real setting; maybe you can go to some exotic place, maybe you can even go to the Champs-Élysées in Paris or maybe you can go down to the local fast-food joint with them, but do it virtually and interact with them.

The other, just as important but less tech-sexy, change Epstein sees coming is bringing friends and family back into the dating experience:

Right now, if you sign up with the eHarmony or match.com or any of the other big services, you’re alone—you’re completely alone. It’s like being at a huge bar, but going without your guy friends or your girl friends—you’re really alone. But in the real world, the community is very helpful in trying to determine whether someone is right for you, and some of the new services allow you to go online with friends and family and have, you know, your best friend with you searching for potential partners, checking people out. So, that’s the new community approach to online dating.

As has long been the case, sex and love are among the first explorers to move out into a previously unexplored realm of human possibility. Yet because of this, sex and love are also the proverbial canary in the coal mine, informing us of potential dangers. The experience of online dating suggests that we need to be sceptical of the exaggerated claims of the various algorithms that now mediate much of our lives, and be privy to their underlying assumptions. To be successful, algorithms need to bring our humanity back into the loop rather than regulate it away as something messy, imperfect, irrational and unsystematic.

There is another lesson here as well, for the more something becomes disconnected from our human capacity to extend trust through person-to-person contact and through tapping into the wisdom of our own collective networks of trust, the more dependent we become on overseers who, in exchange for protecting us from deception, demand the kinds of intimate knowledge from us that only friends and lovers deserve.