The debate between the economists and the technologists: who wins?

Human bat vs robot gangster

For a while now robots have been back in the news with a vengeance, and almost on cue they seem to have revived many of the nightmares we thought had been locked up in the attic of the mind with all the other relics of the 1980s we hoped never to need again.

Big fears should probably be tackled only one at a time, so let’s leave aside for today the question of whether robots are likely to kill us, and focus on what should be an easier nut to crack, and a less frightening nut at that; namely, whether we are in the process of automating our way into a state of permanent, systemic unemployment.

Alas, even this seemingly less fraught question is no less difficult to answer. Like everything else, the issue seems to have given rise to two distinct sides, neither of which has a clear monopoly on the truth. Unlike elsewhere, however, these two sides in the debate over “technological unemployment” split less along ideological lines than along lines of professional expertise. Those who dismiss the argument that advances in artificial intelligence and robotics have already displaced, or are about to displace, the types of work now done by humans to the extent that we face a crisis of permanent underemployment and unemployment the likes of which have never been seen before tend to be economists. How such an optimistic bunch came to be known as dismal scientists is beyond me; note that they also take the optimistic side in debates with environmentalists.

Economists are among the first to remind us that we’ve seen fears of looming robot-induced unemployment before, whether those of Ned Ludd and his followers in the 19th century, or as close to us as the 1960s. The destruction of jobs has, in the past at least, come through the kinds of transformation that created brand new forms of employment. In 1915 nearly 40% of Americans were agricultural laborers of some sort; now that number hovers around 2 percent. These farmers weren’t replaced by “robots”, but they certainly were replaced by machines.

Still, we certainly don’t have a 40% unemployment rate. Rather, as the number of farm laborer positions declined, they were replaced by jobs that didn’t even exist in 1915. The place these farmers have not gone, though it is probably where they would have gone in 1915, and it wouldn’t be much of an option today, is into manufacturing. For in that sector something very similar to the hollowing out of employment in agriculture has taken place, with the percentage of laborers in manufacturing declining from 25% in 1915 to around 9% today. Here the workers really have been replaced by robots, though job prospects on the shop floor have declined just as much because the jobs have been globalized. Again, even at the height of the recent financial crisis we haven’t seen 25% unemployment, at least not in the US.

Economists therefore continue to feel vindicated by history: any time machines have managed to supplant human labor we’ve been able to invent whole new sectors of employment where the displaced or their children have found work. It seems we’ve got nothing to fear from the “rise of the robots.” Or do we?

Again setting aside the possibility that our mechanical servants will go all R.U.R. on us, anyone who takes seriously Ray Kurzweil’s timeline, that by the 2020s computers will match human intelligence and by 2045 exceed our intelligence a billionfold, has to come to the conclusion that most jobs as we know them are toast. The problem here, and one that economists mostly fail to take into account, is that past technological revolutions ended up replacing human brawn and allowing workers to upscale into cognitive tasks. Human workers had somewhere to go. But machines that did the same for tasks requiring intelligence, machines indeed billions of times smarter than us, would make human workers about as essential to the functioning of a company as the Leaper ornament is to the functioning of a Jaguar.

Then again, perhaps we shouldn’t take Kurzweil’s timeline all that seriously in the first place. Skepticism would seem to be in order because the Moore’s Law based exponential curve at the heart of Kurzweil’s predictions appears to have started to go all sigmoidal on us. That was the case made by John Markoff recently over at The Edge. In an interview about the future of Silicon Valley he said:

All the things that have been driving everything that I do, the kinds of technology that have emerged out of here that have changed the world, have ridden on the fact that the cost of computing doesn’t just fall, it falls at an accelerating rate. And guess what? In the last two years, the price of each transistor has stopped falling. That’s a profound moment.

Kurzweil argues that you have interlocked curves, so even after silicon tops out there’s going to be something else. Maybe he’s right, but right now that’s not what’s going on, so it unwinds a lot of the arguments about the future of computing and the impact of computing on society. If we are at a plateau, a lot of these things that we expect, and what’s become the ideology of Silicon Valley, doesn’t happen. It doesn’t happen the way we think it does. I see evidence of that slowdown everywhere. The belief system of Silicon Valley doesn’t take that into account.
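To picture what “going sigmoidal” means here (my own illustration, not Markoff’s or Kurzweil’s): exponential growth compounds without bound, while logistic, or sigmoid, growth looks exponential early on and then flattens toward a ceiling:

$$N_{\mathrm{exp}}(t) = N_0 \, e^{kt}, \qquad N_{\mathrm{sig}}(t) = \frac{K}{1 + e^{-k(t - t_0)}}$$

The two curves are nearly indistinguishable on the way up, which is why a plateau of the kind Markoff describes, with the price of a transistor ceasing to fall, only becomes unmistakable in hindsight.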

Although Markoff admits there has been great progress in pattern recognition, there has been nothing similar for the kinds of routine physical tasks found in much low-skilled, mobile work. As evidenced by the recent DARPA Robotics Challenge, if you want a job safe from robots, choose a line of work that requires mobility and the performance of a variety of tasks: plumber, home health aide, etc.

Markoff also sees job safety on the higher end of the pay scale in cognitive tasks computers seem far from being able to perform:

We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

The upshot of all this is that there’s less to be feared from technological unemployment than many think:

There is an argument that these machines are going to replace us, but I only think that’s relevant to you or me in the sense that it doesn’t matter if it doesn’t happen in our lifetime. The Kurzweil crowd argues this is happening faster and faster, and things are just running amok. In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

The problem, I think, with the case against technological unemployment made by many economists and by someone like Markoff is that they seem to be taking on a rather weak and caricatured version of the argument. That, at least, is the conclusion one comes to after taking into account what is perhaps the most reasoned and meticulous book to try to convince us that the boogeyman of robots stealing our jobs may have been a figment of our imagination before, but that this time it is real indeed.

I won’t so much review the book I am referencing, Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future, as lay out how he responds to the economists’ case that we’ve been here before and have nothing to worry about, and to Markoff’s observation that because Moore’s Law has hit a wall (has it?), we need no longer worry so much about the transformative implications of embedding intelligence in silicon.

I’ll take the second one first. As for the idea that the end of Moore’s Law will derail tech innovation, Ford makes a pretty good case that:

Even if the advance of computer hardware capability were to plateau, there would be a whole range of paths along which progress could continue. (71)

Continued progress in software, and in new (especially parallel) computer architectures, will be mined long after Moore’s Law has reached its apogee. Cloud computing also means that silicon needn’t compete with neurons on their own scale. You don’t have to fit the computational capacity of a human individual into a machine of roughly similar size; you could instead remotely tap a much, much larger and more energy intensive supercomputer that gives you human-level capacities. Sheer density has become less important.

What this means is that we should continue to see progress (and perhaps very rapid progress) in robotics and artificially intelligent agents. Given that the labor market is often thought of as three-dimensional, comprising agriculture, manufacturing, and the service sector, and that the first two are already largely mechanized and automated, the next wave will fall hardest on the service sector. The question becomes: will there be any place left for workers to go?

Ford makes a pretty good case that eventually we should be able to automate almost anything human beings do, for all automation means is breaking a task down into a limited number of steps. And the examples he comes up with, where we have already shown this is possible, are both surprising and sometimes scary.

Perhaps 70 percent of financial trading on Wall Street is now done by trading algorithms (which don’t seem any less inclined to panic). Algorithms now perform legal research, compose orchestral music, and independently discover scientific theories. And those are the fancy robots. Most are like ATMs, where it is the customer who now does part of the labor involved in some task. Ford thinks the fast food industry is ripe for innovation in this way, with the customer designing their own food that some system of small robots then “builds.” Think of your own job. If you can describe it to someone in a discrete number of steps, Ford thinks the robots are coming for you.

I was thankful that, though himself a technologist, Ford places technological unemployment in a broader context, seeing it as part and parcel of trends since 1970 such as a greater share of GDP moving from labor to capital, soaring inequality, stagnant wages for middle class workers, the decline of unions, and globalization. His solution to these problems is a guaranteed basic income, for which he makes a humane and non-ideological argument, reminding those on the right who might find the idea anathema that conservative heavyweights such as Milton Friedman have argued in its favor.

The problem, from my vantage point, is not that Ford has failed to make a good case against those on the economists’ side of the issue who would accuse him of committing the Luddite fallacy; it’s that his case is perhaps both premature and not radical enough. It is premature in the sense that while all the other trends, rising inequality, the decline of unions, etc., are readily apparent in the statistics, technological unemployment is not.

Perhaps, then, technological unemployment is only a small part of a much larger trend pushing us in the direction of the entrenchment and expansion of inequality and away from the type of middle class society established in the last century. The tech culture of Silicon Valley and the companies it has built are here a little like Burning Man: an example of late capitalist culture at its seemingly most radical and imaginative, one that temporarily escapes, rather than creates, a genuinely alternative and autonomous political and social space.

Perhaps the types of technological transformation already here and looming that Ford lays out could truly serve as the basis for a new form of political and economic order, an alternative to the inegalitarian turn, but he doesn’t explore them. Nor does he discuss the already present dark alternative to the kind of socialism through AI we find in the “Minds” of Iain Banks; namely, the surveillance capitalism we have allowed to be built around us, which now stands as a bulwark against our preserving our humanity and keeping ourselves from becoming robots.

The King of Weird Futures

Bosch, The Garden of Earthly Delights

Back in the late winter I wrote a review of the biologist Edward O. Wilson’s grandiloquently mistitled tract, The Meaning of Human Existence. As far as visions of the future go Wilson’s was a real snoozer, though for that very reason it left little to be nervous about. The hope he articulated in his book was that we somehow manage to keep humanity pretty much the same, genetically at least, “as a sacred trust”, in perpetuity. It’s a bio-conservatism that, on one level, I certainly understand, but one I also find incredibly unlikely given that the future consists of… well… an awfully long stretch of time (that is, as long as we’re wise enough or just plain lucky). How in the world can we expect, especially in light of current advances in fields like genetics, neuroscience, and artificial intelligence, that we can, or even should, keep humanity essentially unchanged not just now, but for 100 years, 1,000 years, 10,000 years, or even longer?

If Wilson is the 21st century’s prince of the dull future, the philosopher David Roden should perhaps be crowned its king of weird ones. Indeed, it may be that the primary point of his recent mind-bending book Posthuman Life: Philosophy at the Edge of the Human is to make the case for the strange and unexpected. The Speculative Posthumanism (SP) he helps launch with this book is a philosophy that grapples with the possibility that the future of our species and its descendants will be far weirder than we have so far allowed ourselves to imagine.

I suppose the best place to begin a proper discussion of Posthuman Life would be with explaining just what Roden means by Speculative Posthumanism, something that (as John Danaher has pointed out) Roden manages to uncover like a palimpsest, by providing some very useful clarifications of often philosophically confused and conflated areas of speculation regarding humanity’s place in nature and its future.

Essentially, Roden sees four domains of thought regarding humanism/posthumanism. There is Humanism of the old-fashioned type, which, even absent some kind of spiritual dimension, makes the claim that there is something special, cognitively, morally, etc., that marks human beings off from the rest of nature.

Interestingly, Roden sees Transhumanism as merely an updating of this humanism: the expansion of its toolkit for perfecting humankind to include not just things like training and education but physical, cognitive, and moral enhancements made available by advances in medicine, genetics, bio-electronics, and similar technologies.

Then there is Critical Posthumanism, by which Roden means a move in Western philosophy, apparent since the latter half of the 20th century, that seeks to challenge the anthropocentrism at the heart of Western thinking. The shining example of that anthropocentrism was the work of Descartes, which reduced animals to machines while treating the human intellect as mere “spirit”, about as embodied and tangible as a burnt offering to the gods. Critical Posthumanism, among whose adherents one can count a number of deconstructionist, feminist, multiculturalist, animal rights, and environmentalist philosophers from the last century, aims to challenge the centrality of the subject and the discourses surrounding the idea of an observer located at some Archimedean point outside of nature and society.

Lastly, there is the philosophy Roden himself hopes to help create, Speculative Posthumanism, the goal of which is to expand and explore the potential boundaries of what he calls the posthuman possibility space (PPS). It is a posthumanism that embraces the “weird” in the sense that it hopes, like Critical Posthumanism, to challenge the hold anthropocentrism has had on the way we think about possible manifestations of phenomenology, moral reasoning, and cognition. Yet unlike Critical Posthumanism, Speculative Posthumanism does not stop at scepticism but seeks to imagine, in so far as it is possible, what non-anthropocentric forms of phenomenology, moral reasoning, and cognition might actually look like. (21)

It is as a work of philosophical clarification that Posthuman Life succeeds best, though a close runner-up would be the way Roden manages to explain and synthesize many of the major movements within philosophy in the modern period in a way that clearly connects them to what many see as upcoming challenges to traditional philosophical categories posed by emerging technologies: machines that exhibit ever greater powers of reasoning, the disappearance of the boundary between the human, the animal, and the machine, or even the erosion of human subjectivity and individuality themselves.

Roden challenges the notion that any potential moral agents of the future who can trace their line of descent back to humanity will be something like Kantian moral agents, rather than agents possessing a moral orientation we simply cannot imagine. He also manages to point towards connections between the postmodern thrust of late 20th century philosophy, which challenged the role of the self/subject, and recent developments in neuroscience, including connections between philosophical phenomenology and the neuroscience of human perception that do something very similar to our conception of the self. Indeed, Posthuman Life eclipses similar efforts at synthesis, and Roden excels at bringing to light potentially pregnant connections between thinkers as diverse as Andy Clark and Heidegger, Donna Haraway, Deleuze, and Derrida, along with non-philosophical figures like the novelist Philip K. Dick.

It is as a consequence of this very success at philosophical clarification that Roden is led across what I, at least, felt was a bridge (philosophically) too far. As posthumanist philosophers are well aware, the very notion of the “human” suffers from a continuum problem. It is almost impossible to separate humanity from technology, broadly defined, and this is the case even if we go back to the very beginnings of the species, where the technologies in question are the atlatl or the baby sling. We are, in the words of Andy Clark, “natural born cyborgs”. Added to this is the fact that (like anything bound up with historical change) how a human being is defined is a moving target rather than a reflection of any unchanging essence.

How then can one declare any possible human future that emerges out of our continuing “technogenesis” to be “post” human, rather than just the latest iteration of what is in fact the very old story of the human “artificial ape”? And this status of mere continuation (rather than break with the past) would seem to hold in a philosophical sense even if whatever posthumans emerged bore no genetic, and only a techno-historical, relationship to biological humans. This somewhat different problem of philosophical clarification again emerges as the consequence of another continuum problem, namely the fact that human beings are inseparable from the techno-historical world around them, what Roden brilliantly calls “the Wide Human” (WH).

It is largely out of the effort to find clear boundaries within this confusing continuum that Roden is led to postulate what he calls the “disconnection thesis”. According to this thesis an entity can only properly be said to be posthuman if it is no longer contained within the Wide Human. A “Wide Human descendant is a posthuman if and only if:”

  1. It has ceased to belong to WH (the Wide Human) as a result of technical alteration.
  2. Or is a wide descendant of such a being (outside WH). (112)

Yet it isn’t clear, to me at least, why disconnection from the Wide Human is more likely to result in something different from humanity and our civilization as they currently exist than anything that could emerge out of, but still remain part of, the Wide Human itself. Roden turns to the idea of “assemblages” developed by Deleuze and Guattari in an attempt to conceptualize how such a disconnection might occur, but his idea is perhaps conceptually clearer if one comes at it from the perspective of the kind of evolutionary drift that occurs when some population of creatures becomes isolated from the rest of its kind, say on an island.

As Darwin realized on his journey to the Galapagos, isolation can lead quite rapidly to wide differences between the isolated variant and its parent species. The problem with applying such isolation analogies to technological development is that, unlike biological evolution (or technological development before the modern era), the evolution of technology is now globally distributed, rapid, and continuous.

Something truly disruptive seems much more likely to emerge from within the Wide Human than from some separate entity or enclave, even one located far out in space, at the very least because the Wide Human possesses the kind of leverage that could turn something disruptive into something transformative enough to be characterized as posthuman.

The kind of weird divergence from current humanity that Roden is contemplating, and, though he claims Speculative Posthumanism is not normative, is perhaps rooting for, is maybe less some sort of separation than something akin to a phase change, or to the rapid evolutionary changes seen in events like the Cambrian explosion, or to the opening up of whole new evolutionary theaters, as when life in the sea first moved onto the land. It would be something like the singularity predicted by Vernor Vinge, though it might just as likely come from a direction completely unanticipated and cause a transformation that would make the world, from our current perspective, unrecognizable, and indeed, weird.

Still, what real posthuman weirdness would seem to require is something clearly identified by Roden and not dependent, to my lights, on his disconnection thesis being true. The same reality that would make whatever follows humanity truly weird is the one that would allow alien intelligence to be truly weird; namely, that the kinds of cognition, logic, mathematics, and science found in our current civilization, and the kinds of biology and social organization we ourselves possess, are all contingent. What that would mean, in essence, is that there are a multitude of ways intelligence and technological civilization might manifest themselves, of which we are only a single type, and by no means the most interesting one. Life itself might be like that, with the earthly variety and its conditions just one example of what is possible; or it might not.

The existence of alien intelligence and technology very different from our own would mean we are not in the grip of any deterministic developmental process, and that alternative developmental paths are available. So far we have no evidence one way or the other, though unlike Kant, who used aliens as a trope to defend a certain version of what intelligence and morality mean, we might instead imagine both extraterrestrial and earthly alternatives to our own.

While I can certainly imagine what alternative, and from our view weird, forms of cognition might look like, for example the kinds of distributed intelligence found in a cephalopod or a eusocial insect colony, it is much more difficult for me to conceive what morality and ethics might look like if divorced from our own peculiar hybrid of social existence and individual consciousness (the very features Wilson, perhaps rightfully, hopes we will preserve). For me at least, one side of what Roden calls dark phenomenology is a much deeper shade of black.

What is especially difficult for me to imagine in this regard is how the kind of openness to alternative developmental paths that Roden, at the very least, wants us to refrain from preemptively aborting is compatible with a host of other projects surrounding our relationship to emerging technology which I find extremely important: projects such as subjecting technology to stricter, democratically established ethical constraints, including engineering moral philosophy into machines themselves as the basis for ethical decision making autonomous from human beings. Nor is it clear what guidance Roden’s Speculative Posthumanism provides when it comes to the question of how to regulate against existential risks, dangers which, should we fail to tackle them, will foreclose not only a human future but very likely the possibility of a posthuman future as well.

Roden seems to think that because there is no such thing as a human “essence” we should be free to engender whatever types of posthumans we want. As I see it, this kind of ahistoricism is akin to a parent who refuses to use the lessons learned from a difficult youth to inform his own parenting. Despite the pessimism of some, humanity has actually made great moral strides over the arc of its history and should certainly use those lessons to inform whatever posthumans we choose to create.

One would think the types of posthumans whose creation we permit should be constrained by our experience of a world ill designed by the God of Job. How much suffering is truly necessary? Certainly less than sapient creatures currently experience, and thus any posthumans we create should suffer less than we do. We must also be alert to, and take precautions against, the danger that posthuman weirdness will emerge from those areas of the Wide Human where the greatest resources are devoted, military and corporate competition, and for that reason be terrifying.

Yet the fact that Roden has left one with questions should not subtract from what he has accomplished; namely, he has provided us with a framework in which much of modern philosophy can be used to inform the unprecedented questions facing us as a result of emerging technologies. Roden has also managed to put a very important bug in the ear of all those who would move too quickly to prohibit technologies that have the potential to prove disruptive, or to close the door on the majority of the hopefully very long future in front of us and our descendants: that in too great an effort to preserve the contingent reality of what we currently are, we risk preventing the appearance of something infinitely more brilliant in our future.

John Gray and the Puppets of Gloom

Javanese shadow puppets

Lately I’ve been thinking a lot about puppets. I know that sounds way too paleo-tech, and weird, but hear me out. Puppets are an ancient technology which, for all the millennia before and up until very, very recently, was the primary way we experienced animated art. For the vast majority of human history, the way we watched projected figures playing out some imagined drama in front of us was in the form of shadows cast on walls.

In such shadows were the forerunners of movies and television, videogames and VR. And if you don’t think an artistry and brilliance similar to these newer media can be found in ancient marionettes, you should take a peek at the beautiful, bizarre world conjured up by the Javanese, who with their long tradition continue to do shadow theater best.

Puppets have also been the jumping off point for some very deep philosophical reflections. What, after all, was the inspiration for the analogy of Plato’s cave if not the world of the shadow play? Just a little over two centuries ago there was Heinrich von Kleist’s short story “On the Marionette Theatre”, which used the art of puppetry as a means of reflecting on human freedom and the differences between us, animals, and machines. Philosophers can do a lot with puppets, or at least try to.

Thus when I heard that the philosopher John Gray had written a recent book whose starting point was Kleist’s short story, The Soul of a Marionette, I felt compelled to pick it up. I was ready to kick myself for not having realized first that Kleist’s story was an excellent way to address contemporary questions such as the difference between human and artificial intelligence, or the challenges posed to common notions of freedom by recent neuroscience.

As I am not alone in seeing, rather than diminishing in importance as we have developed new and superior forms of entertainment, a grasp of the ancient art of puppetry might be a key to understanding our own confusing age. For it seems that we are entering a golden age of puppetry, in which humans are the puppeteers of all sorts of semi-autonomous machines from drones to artificial prostitutes, a fate that seems much more likely over the next few decades than the kind of looming full machine autonomy predicted (and feared) by many today.

The specter of the marionette can also be seen in the quite legitimate fear that some of the recent advances in neuroscience could be used to infringe on the autonomy not only of animals, but of human beings as well.

In other words, I had high hopes for The Soul of a Marionette given that its jumping off point for discussing the modern world was Kleist’s brilliant 1810 story and essay on the philosophy of puppetry. But it seems I didn’t deserve a kick after all, for these hopes were dashed when I discovered Gray was merely using Kleist’s tale (and his entire book) as a prop for his otherwise stale, endless argument with liberals and “utopians”. Allow me to do, in my own limited way, what Gray should have done but did not; for that, those unaware will first need to hear Kleist’s tale.

It’s impossible to capture the genius of Kleist’s bizarre yet brilliant short story, but I will try nonetheless. Ostensibly it is the story of a man who encounters a famed dancer/choreographer named Herr C attending a marionette show. This becomes the setting for what is really a philosophical discussion about how thought and free will often interfere with the ability of human individuals to act effectively, a theme Kleist also explored in his essay On the Gradual Production of Thoughts Whilst Speaking.

Any of us who have played a sport, given an impromptu speech, or even planted a kiss know precisely what Kleist is talking about. Consciousness, once one gets past the initial point of learning something, can actually trip us up. Herr C compares for the inquiring man the clumsiness of human dancers with the grace of marionettes, free of the limitations imposed by gravity and self-doubting minds. The inquirer himself recalls how with a mere joke he had inadvertently destroyed the unreflective confidence of a friend, which prompts Herr C to tell a story illustrating how much better the natural skills of a bear are than those of even the most well trained human fencer. After which the two men end their conversation.

Such a story would mean little, especially for us two centuries later, had Kleist not put into the mouth of his Herr C what amounts to philosophical and even religious speculation pregnant with connections, especially for today, and specifically in light of recent advances in artificial intelligence.

At one point in their discussion, the man inquiring of Herr C compares the marionettes to mere machines like a “hurdy gurdy”, much unlike real human dancers. Herr C does indeed believe “that this final trace of the intellect could eventually be removed from the marionettes, so that their dance could pass entirely over into the world of the mechanical and be operated by means of a handle”. Yet rather than reflecting a diminished judgment of the marionettes vis-à-vis human dancers, Herr C believes full artificiality and automatism to be their great virtues:

He smiled and replied that he dared to venture that a marionette constructed by a craftsman according to his requirements could perform a dance that neither he nor any other outstanding dancer of his time, not even Vestris himself, could equal. Have you, he asked while I gazed thoughtfully at the ground, ever heard of those mechanical legs that English craftsmen manufacture for unfortunate people who have lost their own limbs? I replied that I had never seen such artifacts. That’s a shame, he replied, for when I tell you that these unfortunate people are able to dance with the use of them, you most certainly will not believe me. What do I mean by using the word dance? The span of their movements is quite limited, but those movements of which they are capable are accomplished with a composure, lightness, and grace that would amaze any sensitive observer.

Here Kleist, at the very least, opens up not only the possibility that a machine constructed by a craftsman according to some specifications would be better than a human being, but also that human beings with mechanical parts would be superior to merely biological humans. In the story, when the interrogator of Herr C questions the assertion that machines could potentially be superior to human beings, the choreographer/philosopher responds:

….it would be almost impossible for a man to attain even an approximation of a mechanical being. In such a realm only a God could measure up to this matter, and this is the point where both ends of the circular world would join one another.

For Herr C, human beings were trapped between the infinite consciousness of God and the freedom from consciousness of machines. Getting free from this trap would entail eating again from the “tree of knowledge” and this would be “the last chapter of the history of the world.”

Now Kleist, of course, had no intention of addressing what we would consider questions regarding artificial intelligence, yet given developments in that field of late, one can’t help but be struck (at least if you’re not Gray) by the fact that “On the Marionette Theatre” seems to touch on current issues such as what Yuval Harari brilliantly characterized as the “decoupling of intelligence from consciousness”. Like the marionettes patiently observed by Herr C, at least in some formerly human endeavors, such as playing chess, machines with intelligence but no consciousness at all can outperform us. Indeed, this is the big surprise of recent gains in AI: we can get very close to smart and even superior behavior without any need for general intelligence, let alone consciousness.

There are many places where Gray might have leveraged Kleist’s strange tale, from addressing what such a decoupling means for the whole Western philosophical tradition (which began, after all, with the injunction “know thyself”) to wrestling with claims that AI as currently constructed manifests intelligence more akin to puppet show illusions like the old Mechanical Turk than to the intellect of a mind. Nor does Gray really extend Kleist’s analogy to interrogate how we, both voluntarily and involuntarily, seem hell-bent on turning ourselves into a version of automata through technologies of micro-surveillance for the purpose of self-control and efficiency, or how this connects to the project of much of philosophy itself.

Gray might also have discussed how the problem with the version of marionette freedom proposed by Herr C is that it appears blind to the dictatorship of the puppeteer who continues to exist behind the scenes. To recognize and take steps to counter this is the first step towards ensuring technology actually does enhance human freedom, especially as that technology becomes merged with the body and brain themselves and subject to outside control.

These problems with The Soul of a Marionette stem largely from the fact that the book is ultimately the right weapon aimed at the wrong target. Although on the surface it appears that Gray is out to philosophically grapple with our current technological trajectory in light of our ancient human condition, his real target is Steven Pinker and his exhausting band of optimists.

The Soul of a Marionette, I think rightly, makes the case that the philosophy behind much of modern technology is a modern form of Gnosticism. In this case Gnosticism means the belief that the world is somehow ill constructed and that through our knowledge and efforts we can fix it. But rather than make the case for this technological version of Gnosticism, à la Steve Fuller, or use such a recognition as the basis for a critique, as does Luciano Floridi, Gray sidesteps the issue to make a rather weak case against common notions of “progress”.

It is indeed true that those who insist upon perpetual human progress share the same intellectual roots as those claiming we are rapidly approaching a technological singularity. Most importantly, both emerge out of “the death of God” in the 19th century, which left human beings responsible for both their own knowledge and their own fate, a new responsibility we have been grappling with ever since.

Gray essentially adopts the old trope that while we have advanced technologically we have not advanced in our morality or our wisdom. At the same time, he accepts the destination predicted by singularitarians: that human beings will be supplanted by artificial intelligence. What distinguishes him from figures like Ray Kurzweil is that Gray wants to make it clear that the coming “spiritual machines” will carry forward the same moral flaws which, contrary to Pinker and his ilk, we human beings have retained.

The first problem here is that any suggestion that moral progress (or even technological progress) is or is not perpetual remains mere speculation; it’s not really an answerable question. The second and bigger problem for Gray’s case is that in failing to acknowledge singularitarian technological projections as a political project, Gray essentially severs our ability to influence how technological development unfolds, that is, to define its moral and ethical dimension. By failing to keep in view the still very real and relevant human beings (moral and immoral) behind our intelligent machines, he obscures the essential political and economic questions in his cloud of existential gloom.

Gray would like us to abandon whatever freedom we have left to join him in some stoic freedom “of the inward variety prized by the ancient world” (162). He is certainly premature in urging our retreat into the desert. Following him would only accelerate the very unraveling of our moral progress that he predicts. To step aside and let the very real political and moral gains we have made over the last few centuries disappear would not be forgiven by our descendants, unless, that is, they really have become soulless marionettes.


Freedom in the Age of Algorithms

Still from Charlie Chaplin’s Modern Times

Reflect for a moment on what for many of us has become the average day. You are awoken by your phone, whose clock is set via a wireless connection to a cell phone tower, connected to a satellite, all ultimately ending in the ultimate precision machine, a clock that will not lose even a second after 15 billion years of ticking. Upon waking you connect to the world through the “siren server” of your pleasure, which “decides” for you, based on an intimate profile built up over years, the very world you will see, ranging from Kazakhstan to Kardashian, and connects you to your intimates, those whom you “follow”, and, if you’re lucky enough, your streams of “followers”.

Perhaps you use a health app to count your breakfast calories or the fat you’ve burned on your morning run, or perhaps you’ve just spent the morning playing Bejeweled and will need to pay for your sins of omission later. On your mindless drive to work you make the mistake of answering a text from the office in front of a cop who, unbeknownst to you, has instantly run your license plate to find out if you are a weirdo or a terrorist. Thank heavens his algorithm confirms you’re neither.

When you pull into the Burger King drive-through to buy your morning coffee, you thoughtlessly end up buying yet another bacon, egg and cheese with a side of hash browns, in spite of your best self nagging you from inside your smartphone. Having done this one too many times this month, your fried food preference has now been sold to the highest bidders, two databanks. From the first you’ll now be receiving annoying coupons in the mail from all the fast food restaurants along the path of your morning commute; from the other, even more annoying and intrusive adware while you surf the web, with friendly, and sometimes frightening, suggestions that you ask your doctor about the new cholesterol drug evolocumab.

You did not, of course, pay for your meal with cash but with plastic, your money swirling somewhere out there in the ether in the form of ones and zeroes stored and exchanged on computers, magically re-coalescing to fill your stomach with potatoes and grease. Your purchases are correlated and crunched to define you for all the machines, and their cold souls of software, that gauge your value as you go about your squishy biological and soulless existence.

____________________________________________

The bizarre thing about this common scenario is that all of it happens before you arrive at the office, or store, or factory, or wherever it is you earn your life’s bread. Not only that, almost all of these constraints on how one views and interacts with the world have been self-imposed. The medium through which we experience much of the world, and through which we respond to it, is now apps and algorithms of one sort or another. It’s gotten to the point that we now need apps and algorithms to experience what it’s like to be lost, which seems to, well… misunderstand the definition of being lost.

I have no idea where future historians, whatever their minds are made of, will date the start of this trend of tracking and constraining ourselves so as to maintain “productivity” and “wellness”, perhaps with all those seven-habits-of-highly-effective books that started taking up shelf space in now ancient bookstores sometime in the 1980s, but it’s certainly gotten more personal and intimate with the rise of the smartphone. In a way we’ve brought the logic of the machine out of the factory and into our lives and even our bodies: the ideal of super-efficient man-machine merger invented by Frederick Taylor and never captured better than in Charlie Chaplin’s brilliant 1936 film Modern Times.

The film is for silent pictures what The Wizard of Oz was for color, bridging the two worlds: almost all of the spoken parts come through the medium of machines, including a giant flat screen that seemed entirely natural in a world that has been gone for eighty years. It portrays the Tramp asserting his humanity in the dehumanizing world of automation found in a factory where even eating lunch has been mechanized and made maximally efficient. Chaplin no doubt would have been pleasantly surprised at how well much of the world turned out, given the bleakness of economic depression and imminent world war he was facing, but I think he also would have been shocked at how much of the Tramp in us all we have given up without reason, and largely of our own volition.

Still, the fact of the matter is that this new rule of apps and algorithms, much of which comes packaged in the spiritualized wrapping of “mindfulness” and “happiness”, would be much less troubling did it not smack of a new form of Marx’s “opiate for the people”, diverting us away from trying to understand and challenge the structural inadequacies of society.

For there is nothing inherently wrong with measuring performance as a means to pursue excellence, or with attending to one’s health and mental tranquility. There’s a sort of postmodern cynicism that kicks in whenever some cultural trend becomes too popular, and while it protects us from groupthink, it also tends to lead to intellectual and cultural paralysis. It’s only when performance measures find their way into aspects of our lives that are trivialized by quantification, such as love or family life, that I think we should earnestly worry, along, perhaps, with worrying over the atrophy of our skills for engaging with the world absent these algorithmic tools.

My really deep concern lies with the way apps and algorithms now play the role of invisible instruments of power. Again, this is nothing new, to the extent that in the pre-digital age such instruments came in the form of bureaucracy and rule by decree rather than law, as Hannah Arendt laid out in her Origins of Totalitarianism back in the 1950s:

In governments by bureaucracy decrees appear in their naked purity as though they were no longer issued by powerful men, but were the incarnation of power itself and the administrator only its accidental agent. There are no general principles behind the decree, but ever changing circumstances which only an expert can know in detail. People ruled by decree never know what rules them because of the impossibility of understanding decrees in themselves and the carefully organized ignorance of specific circumstances and their practical significance in which all administrators keep their subjects.  (244)

It’s quite easy to read the rule of apps and algorithms in that quote, especially the parts about how “only an expert can know in detail” and “carefully organized ignorance”, a fact that became clear to me after I read what is perhaps the best book yet on our new algorithmically ruled lives, Frank Pasquale’s The Black Box Society: The Secret Algorithms That Control Money and Information.

I have often wondered what exactly was being financially gained by gathering up all this data on individuals, given how obvious and ineffective the so-called targeted advertisements that follow me around on the internet seem to be, and Pasquale manages to explain this clearly. What is being “traded” is my “digital reputation”, whether as a debtor, or insurance risk (medical or otherwise), or customer with a certain depth of pocket and identity, “father, 40s, etc.”, or even the degree to which I can be considered a “sucker” for scam and con artists of one sort or another.

This is a reputation matrix much different from earlier arrangements based on personal knowledge, or from later impersonal systems such as credit reporting (though both had their abuses) or health records under HIPAA, in the sense that the new digital form of reputation is largely invisible to me, its methodology inscrutable, its declarations of my “identity” immune to challenge and immutable. It is, as Pasquale so aptly terms it, a “black box” in the strongest sense of the word: unintelligible and opaque to the individual within it, like the rules Kafka’s characters suffer under in his novels about the absurdity of hyper-bureaucracy (and of course more), The Castle and The Trial.

Much more troubling, however, is how such corporate surveillance interacts with the blurring of the line between intelligence and police functions, the distinction between the foreign and domestic spheres, which has been one of the defining features of our constitutional democracy. As Pasquale reminds us:

Traditionally, a critical distinction has been made between intelligence and investigation. Once reserved primarily for overseas spy operations, “intelligence” work is anticipatory, it is the job of agencies like the CIA, which gather potentially useful information on external enemies that pose a threat to national security. “Investigation” is what police do once they have evidence of a crime. (47)

It isn’t only that such moves towards a model of “predictive policing” mean the undoing of constitutionally guaranteed protections and legal due process (the presumption of innocence, 5th Amendment protections); it is also that they have far too often turned the police into a political instrument which, as Pasquale documents, has monitored groups ranging from peaceful protesters to supporters of Ron Paul, all in the name of preventing a “terrorist act” by members of these groups. (48)

The kinds of illegal domestic spying performed by the NSA and its acronymic companions were built on the back of an already existing infrastructure of commercial surveillance. The same could be said for the blurring of the line between intelligence and investigation exemplified by the creation of “fusion centers” after 9/11, which repurposed espionage tools once confined to the intelligence services towards domestic targets and the control of crime.

Both domestic spying by federal intelligence agencies and new forms of invasive surveillance by state and local law enforcement have been enabled by the commercial surveillance architecture established by corporate behemoths such as Facebook and Google, to whom citizens surrendered their right to privacy seemingly willingly.

Given the degree to which these companies now hold near monopolies over the information citizens receive, Pasquale thinks it would be wise to revisit the breakup of the “trusts” in the early part of the last century. It’s not only that the power of these companies is already enormous; it’s that were they ever turned into overt political tools they would undermine or upend democracy itself, given that citizen action requires the free exchange of information to achieve anything at all.

The black box features of our current information environment have not just managed to colonize the worlds of security, crime, and advertisement; they have become the defining feature of late capitalism itself. A great deal of the 2008 financial crisis can be traced to the computerization of finance from the 1980s onward. Computers were an important feature of the pre-crisis argument that we had entered a period of “the Great Moderation”. We had become smart enough, and our markets sophisticated enough (so the argument went), that there would be no replay of something like the 1929 Wall Street crash and Great Depression. Unlike in the prior era, markets without debilitating crashes were to be achieved not through government regulation containing the madness of crowds and their bubbles and busts, but in part through new computer modeling that would exorcise from the markets the demon of “animal spirits” and allow human beings to do what they had always dreamed of doing: to know the future. Pasquale describes it this way:

As information technology improved, lobbyists could tell a seductive story: regulators were no longer necessary.  Sophisticated investors could vet their purchases.  Computer models could identify and mitigate risk. But the replacement of regulation by automation turned out to be as fanciful as flying cars or space colonization. (105)

Computerization gave rise to ever more sophisticated financial products, such as mortgage-backed securities, based on ever more sophisticated statistical models that, by bundling investments, gave the illusion of stability. Even had there been more prophets crying from the wilderness that the system was unstable, they would not have been able to prove it, for the models being used were “a black box, programmed in proprietary software with the details left to the quants and the computers”. (106)

It seems there is a strange dynamic at work throughout the digital economy, not just in finance but certainly exhibited in full force there, where the whole game is in essence a contest of asymmetric information. You either have the data someone else lacks to make a trade, you process that data faster, or both. Keeping your algorithms secret becomes a matter of survival, for as soon as they are out there they can be exploited by rivals or cracked by hackers; or at least this is the argument companies make. One might doubt it, though, once one sees how nearly ubiquitous this corporate secrecy and patent hoarding has become in areas radically different from software, such as pharmaceuticals, or among biotech corporations like Monsanto, which hold patents on life itself and whose logic leads to something like Paolo Bacigalupi’s dystopian novel The Windup Girl.

For Pasquale, complexity itself becomes a tool of obfuscation in which corruption and skimming can’t help but become commonplace. The contest of asymmetric information means companies are engaged in what amounts to an information war, where the goal is as much to obscure real value from rivals and clients as to profit from the resulting distortion. In such an atmosphere markets stop being able to perform the informative role Friedrich Hayek thought was their very purpose. Here’s Pasquale himself:

…financialization has created enormous uncertainty about the value of companies, homes, and even (thanks to the pressing need for bailouts) the once rock solid promises of governments themselves.

Finance thrives in this environment of radical uncertainty, taking commissions in cash as investors (or, more likely, their poorly monitored agents) race to speculate on or hedge against an ever less knowable future. (138)

Okay, if Pasquale has clearly laid out the problem, what is his solution? I could go through a list of his suggestions, but I will stick to the general principle. Pasquale’s goal, I think, is to restore our faith in our ability to publicly shape digital technology in ways that better reflect our democratic values: to show that the claim that software is unregulable is an assumption, not a truth, and that the tools and models for regulation and public input developed over the last century for the physical world are equally applicable to the digital one.

We have already developed a complex, effective system of privacy protections in the form of HIPAA; there are already examples of mandating fair, understandable contracts (as opposed to indecipherable “terms of service” agreements) in the form of various consumer protection provisions; and up until the 1980s we were capable of regulating the boom and bust cycles of markets without crashing the economy. Lastly, the world did not collapse when earlier corporations that had gotten so large they threatened not only the free competition of markets but, more importantly, democracy itself were broken up, and it would not collapse were the likes of Facebook, Google, or the big banks broken up either.

Above all, Pasquale urges us to seek out some way to make the algorithmization of the world intelligible and open to the political, social, and ethical influence of a much broader segment of society than the current group of programmers and their paymasters who have so far been the only ones running the show. For if we do not assert such influence, and algorithms continue to structure more and more of our relationship with the world and each other, then algorithmization and democracy would seem to be on a collision course. Or, as Taylor Owen pointed out in a recent issue of Foreign Affairs:

If algorithms represent a new ungoverned space, a hidden and potentially ever-evolving unknowable public good, then they are an affront to our democratic system, one that requires transparency and accountability in order to function. A node of power that exists outside of these bounds is a threat to the notion of collective governance itself. This, at its core, is a profoundly undemocratic notion—one that states will have to engage with seriously if they are going to remain relevant and legitimate to their digital citizenry who give them their power.

Pasquale has given us an excellent start to answering the question of how democracy, and freedom, can survive in the age of algorithms.


Auguries of Immortality, Malthus and the Verge

Hindu Goddess Tara

Sometimes, if you want to see something in the present clearly, it’s best to go back to its origins. This is especially true when dealing with some monumental historical change, a phase transition from one stage to the next. The reason I think this is helpful is that those lucky enough to live at the beginning of such events have no historical or cultural baggage to obscure their forward view. When you live in the middle, or at the end, of an era, you find yourself surrounded, sometimes suffocated, by all the good and bad that has come as a result. As a consequence, understanding the true contours of your surroundings or your ultimate destination is almost impossible; your nose is stuck to the glass.

The question is, are we ourselves at the beginning of such an era, in the middle, or at an end? How would we even know?

If I were to make the case that we find ourselves in either the middle or the end of an era, I know exactly where I would start. In 1793 the eccentric English writer William Godwin published his Enquiry Concerning Political Justice and its Influence on Morals and Happiness, a book which few people remember. What Godwin is remembered for instead is his famous daughter Mary Shelley, and her even more famous monster, though I should add that if you like thrillers you can thank Godwin for having invented them.

Godwin’s Enquiry, however, was a different kind of book. It grew out of the environment of a time which, in Godwin’s eyes at least, seemed pregnant with once unimaginable hope. The scientific revolution had brought about a fuller understanding of nature and her laws than anything achieved by the ancients, the superstitions of earlier eras had been abandoned for a new age of enlightenment, the American Revolution had brought into the world a whole new form of government based on Enlightenment principles, and, as Godwin wrote, an even more important revolution of the same kind had just overthrown the monarchy in France.

All this along with the first manifestations of what would become the industrial revolution led Godwin to speculate in the Enquiry that mankind had entered a new era of perpetual progress. Where then could such progress and mastery over nature ultimately lead? Jumping off of a comment by his friend Ben Franklin, Godwin wrote:

 Let us here return to the sublime conjecture of Franklin, that “mind will one day become omnipotent over matter.” If over all other matter, why not over the matter of our own bodies? If over matter at ever so great a distance, why not over matter which, however ignorant we may be of the tie that connects it with the thinking principle, we always carry about with us, and which is in all cases the medium of communication between that principle and the external universe? In a word, why may not man be one day immortal?

Here then we can find evidence for the recent claim of Yuval Harari that “The leading project of the Scientific Revolution is to give humankind eternal life.” (268) In later editions of the Enquiry, however, Godwin dropped the suggestion of immortality. It seems he did so not so much because of the criticism such comments attracted, or because he stopped believing in it, but because immortality seemed too much like a termination point for his notion of progress, which he now thought really would extend forever into the future. His key point was that the mind’s growing understanding would result in an ever increasing power over the material world, in a process that would literally never end.

Almost at the exact same time as Godwin was writing his Enquiry, another figure was making almost the exact same argument, including the idea that scientific progress would eventually result in indefinite human lifespans. The Marquis de Condorcet’s Sketch for a Historical Picture of the Progress of the Human Mind was a book written by a courageous man while on the run from a French Revolutionary government that wanted to cut off his head. Amazingly, even while hunted down during the Terror, Condorcet retained his long-term optimism regarding the ultimate fate of humankind.

A young English parson with a knack for the just emerging science of economics not only wasn’t buying it, he wanted to scientifically prove (though I am using the term loosely) exactly why such optimism should not be believed. This was Thomas Malthus, whose name, quite mistakenly, has spawned its own adjective, Malthusian, which has come to mean, essentially, environmental disaster caused by our own human hands and faults.

As is commonly known, Malthus’ argument was that historically there has been a mismatch between the growth of population and the production of food which sooner or later has led to famine and decline. It was Godwin’s and Condorcet’s claims regarding future human immortality that were, in part, responsible for Malthus stumbling upon his specific argument centered on population. For the obvious rejoinder to those claiming that the human lifespan would increase forever was: what would we do with all of these people?
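Before turning to how Godwin and Condorcet answered that rejoinder, it’s worth seeing how quickly Malthus’ famous ratios bite. A toy sketch in Python, with invented starting numbers purely for illustration, of the gap Malthus saw opening between geometrically growing population and arithmetically growing food production:

```python
# Toy illustration of Malthus' ratios: population grows geometrically
# (here, doubling each generation), food arithmetically (a fixed increment).
# The starting values and increments are invented for illustration only.
population = 1.0  # arbitrary units
food = 1.0        # units of food supporting 1.0 population unit

for generation in range(1, 9):
    population *= 2   # geometric: 2, 4, 8, ...
    food += 1         # arithmetic: 2, 3, 4, ...
    print(f"gen {generation}: population {population:>4.0f}, "
          f"food supports {food:.0f} -> shortfall x{population / food:.1f}")
```

After eight generations the population outruns the food supply more than twenty-eightfold; famine, on Malthus’ logic, is what closes the gap.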

Both Godwin and Condorcet thought they had answered this question by claiming that in the future the birth rate would decline to zero. Stunningly, this has actually proven to be correct: population growth rates have declined in parallel with increases in longevity. Yet rather than declining due to the victory of “reason” and the conquest of the “passions,” as both Godwin and Condorcet thought, birth rates declined because sex was, for the first time in human history, decoupled from reproduction through the creation of effective forms of birth control.

So far, at least, it seems Godwin and Condorcet have gotten the better side of the argument. Since the Enquiry we have experienced more than two centuries of nearly uninterrupted progress in which mind has gained increasing mastery over the material world. And though we are little closer to the optimists’ dream of “immortality,” their prescient guess that longevity would be coupled with a declining birth rate would seem to clear the goal of increased longevity from the charge of being self-defeating on Malthusian grounds.

This would not be, of course, the first time Malthus has been shown to be wrong. Yet his ideas, or a caricature of his ideas, have a long history of retaining their hold over our imagination. Exactly why this is the case is a question explored in detail by Robert J. Mayhew in his excellent Malthus: The Life and Legacies of an Untimely Prophet. Malthus’ argument in his famous if rarely actually read An Essay on the Principle of Population has become a sort of secular version of Armageddon, his views latched onto by figures both sinister and benign over the two centuries since his essay’s publication.

Malthus’ argument was used against laws to alleviate the burdens of poverty, which it was argued would only increase population growth and hasten an ultimate reckoning (and this view, at least, was close to that of Malthus himself). It was used by anti-immigrant and racist groups in the 19th and early 20th century. Hitler’s expansionist and genocidal policy in eastern Europe was justified on Malthusian grounds.

On the more benign side, Malthusian arguments were used as a spur to the Green Revolution in agriculture in the 1960’s (though Mayhew thinks the warnings of pending famine were political, arising from the Cold War, and overdone). Malthusianism was deployed in the 1970’s by Paul Ehrlich to warn of a “population bomb” that never came, slid during the Oil Crisis into fears over resource constraints, and can now be found in predictions about the coming “resource” and “water” wars. There is also a case where Malthus really may have his revenge, though more on that in a little bit.

And yet, we would be highly remiss were we not to take the question Malthus posed seriously. For what he was really inquiring about is whether or not there might be ultimate limits on the ability of the human mind to shape the world in which it found itself. What Malthus was looking for was the boundary or verge of our limits as established by the laws of nature as he understood them. Those who espoused the new human perfectionism, such as Godwin and Condorcet, faced what appeared to Malthus to be an insurmountable barrier to their theories being considered scientific, no matter how much they attached themselves to the language and symbols of the recent success of science. For what they were predicting had no empirical basis: it had never happened before. Granting that knowledge did indeed seem to increase through history, if merely as a consequence of having had the time to accumulate, the kind of radical and perpetual progress that Godwin and Condorcet predicted was still absent from human history. Malthus set out to provide a scientific argument for why.

In philosophical terms Malthus’ Essay is best read as a theodicy, an attempt, like that of Leibniz before him, to argue that even in light of the world’s suffering we live in the “best of all possible worlds.” As Newton had done for falling objects, Malthus sought the laws of nature, as designed by his God, that explained the development of human society. Technological and social progress had remained static in the past even as human knowledge regarding the world accumulated over generations. What caused us to most directly experience this gap between mind and matter, and caused progress to remain static? Malthus thought he had pinned down the source of the stasis: famine and population decline.

As is the case with any other physical system, for human societies the question boiled down to how much energy was available to do meaningful work. Given that the vast majority of work during Malthus’ day was done by things that required energy in the form of food, whether humans or animals, the limited amount of land that could be efficiently tilled presented an upper bound to the size, complexity, and progress of any human society.

What Malthus missed, of course, was the fact that the relationship between food and work was about to be severed. Or rather, the new machines did consume a form of processed “food”: organic material that had been chemically “constructed” and accumulated over the eons in the form of fossil fuels, which offered an easily accessible type of energy different in kind from anything that had come before it.

The sheer force of the age of machines Malthus had failed to foresee did indeed break with the flatline of human history he had identified in every prior age. That fact has perhaps never been shown more clearly than in Ian Morris’ simple graph below.

Ian Morris Great Divergence Graph

What made possible this break between all of past human history and the last few centuries is the very thing that could, tragically, prove Malthus right after all: fossil fuels. For any society before 1800 the majority of energy other than that derived from food came in the form of wood, whether as timber itself or charcoal. But as Lewis Dartnell pointed out in a recent piece in AEON, the world before fossil fuels posed a seemingly insurmountable (should I say Malthusian?) dilemma; namely:

The central problem is that woodland, even when it is well-managed, competes with other land uses, principally agriculture. The double-whammy of development is that, as a society’s population grows, it requires more farmland to provide enough food and also greater timber production for energy. The two needs compete for largely the same land areas.

Dartnell’s point is that we have been both extremely lucky and unlucky in how accessible and potent fossil fuels have been. On the one hand fossil fuels gave us a rather short path to technological society, on the other, not only will it be difficult to wean ourselves from them, it is hard to imagine how we could reboot as a civilization should we suffer collapse and find ourselves having already used up most of the world’s most easily accessed forms of energy.

It is a useful exercise, then, to continue to take Malthus’ argument seriously, for even if we escape the second Malthusian trap (fossil fuel induced climate change) brought on by the very escape route that freed us from the trap Malthus originally identified (our need to literally grow our energy), there are other predictable traps that likely lie in store.

One of these traps that interests me the most has to do with the “energy problem” that Malthus understood in terms of the production of food. As I’ve written about before, and as brought to my attention by the science writer Lee Billings in his book Five Billion Years of Solitude, there is a good and little discussed case from physics for thinking we might be closer to the end of an era that began with the industrial revolution rather than in the middle or even at the beginning.

This physics of civilizational limits comes from Tom Murphy of the University of California, San Diego, who writes the blog Do The Math. Murphy’s argument, as profiled by the BBC, makes some of the following points (a back-of-envelope check of the first follows the list):

  • Assuming rising energy use and economic growth remain coupled, as they have in the past, we are confronted with the absurdity of exponentials. At a 2.3 percent growth rate, within 2,500 years we would require all the energy of all the stars in the Milky Way galaxy to function.
  • At 3 percent growth, within four hundred years we will have boiled away the earth’s oceans, not because of global warming, but from the excess heat that is the normal by-product of energy production. (Even clean fusion leaves us boiling away the world’s oceans for the same reason.)
  • Renewables push out this reckoning, but not indefinitely. At a 3 percent growth rate, even if solar efficiency were 100 percent, we would need to capture all of the sunlight hitting the earth within three hundred years.
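The arithmetic behind the first point is easy to check. Here is a minimal sketch, taking today’s civilization to run on roughly 18 terawatts and the Milky Way’s total output to be roughly 4 × 10^37 watts; both are round figures I am assuming for illustration, not Murphy’s exact inputs:

```python
import math

# Back-of-envelope check of the "absurdity of exponentials" claim.
# Both constants below are rough, assumed round figures.
current_power = 18e12      # watts: approximate present human energy use
milky_way_output = 4e37    # watts: approximate luminosity of the galaxy
growth_rate = 0.023        # 2.3 percent per year

# Years until exponential growth at this rate needs the whole galaxy.
years = math.log(milky_way_output / current_power) / math.log(1 + growth_rate)
print(f"Years until we need every star in the Milky Way: {years:.0f}")  # ~2500
```

The exact constants barely matter; because the growth is exponential, being off by a factor of ten in either figure shifts the answer by only about a century.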

Will such limits prove to be correct and Malthus, in some sense, be shown to have been right all along? Who knows. The only way we’d have strong evidence to the contrary is if we came across evidence of civilizations with a much, much greater energy signature than our own. A recent project out of Penn State to do just that, which looked at 100,000 galaxies, found nothing, though this doesn’t mean the search is over.

Relating back to my last post: the universe may lean towards giving rise to complexity in the way the physicist Jeremy England suggests, but the landscape is littered with great canyons where evolution gets stuck for very long periods of time, which explains its blindness as perceived by someone like Henry Gee. The scary thing is that getting out of these canyons is a race against time: complex life could have been killed off by some disaster shortly after the Cambrian explosion; we could have remained hunter-gatherers and failed to develop agriculture before another ice age did us in; some historical contingency could have prevented industrialization, leaving us exposed to some global catastrophe that we are now advanced enough to respond to.

If Tom Murphy is right we are now in a race to secure exponentially growing sources of energy, and it is a race we are destined to lose. The reason we don’t see any advanced civilizations out there is because the kind of growth we’ve extrapolated from the narrow slice of the past few centuries is indeed a finite thing as the amount of energy such growth requires reaches either terrestrial or cosmic limits. We simply won’t be able to gain access to enough energy fast enough to keep technological progress going at its current rate.

Of course, even if we believe that progress has some limit out there, that does not necessarily entail we shouldn’t pursue it, in many of its forms, until we hit the verge itself. Taking seriously arguments that there might be limits to our technological progress is one thing; limits to our moral progress, our efforts to address suffering in the world, are quite another, for there accepting limits would mean accepting some level of suffering or injustice as just the “way things are.” That we should not accept this, Malthus himself nearly concluded:

 Evil exists in the world not to create despair but activity. We are not patiently to submit to it, but to exert ourselves to avoid it. It is not only the interest but the duty of every individual to use his utmost efforts to remove evil from himself and from as large a circle as he can influence, and the more he exercises himself in this duty, the more wisely he directs his efforts, and the more successful these efforts are, the more he will probably improve and exalt his own mind and the more completely does he appear to fulfil the will of his Creator. (124-125)

The problem lies with the justification of the suffering of individual human beings in any particular form as “natural.” The crime at the heart of many versions of Malthusianism is this kind of aggregation of human beings into some kind of destructive force, which leads to the denial of the only scale at which someone’s true humanity can be seen: the level of the individual. Such moral blindness, which sees only the crowd, can be found in the most famous piece of modern Malthusianism, Paul Ehrlich’s The Population Bomb, where he discusses his experience of Delhi:

The streets seemed alive with people. People eating, people washing, people sleeping. People visiting, arguing, and screaming. People thrusting their hands through the taxi window, begging. People defecating and urinating. People clinging to the buses. People herding animals. People, people, people, people. As we moved slowly through the mob, hand horn squawking, the  dust, noise, heat, and the cooking fires gave the scene a hellish aspect. Would we ever get to our hotel? All three of us were, frankly, frightened. (p. 1)

Stripped of this inability to see that the value of human beings can only be grasped at the level of the individual, and that suffering can only be assessed in a moral sense at this individual level, Malthus can help remind us that our minds themselves emerge out of their confrontation and interaction with a material world whose boundaries we constantly explore, overcome, and confront again. The world itself, he thought, was probably “a mighty process for awakening matter into mind,” and even the most ardent proponents of human perfectionism, modern day transhumanists or singularitarians, or just plain old humanists, would agree with that.

* Image: Tara (Devi): Hindu goddess of the unquenchable hunger that compels all life.

Truth and Prediction in the Dataclysm

The Deluge by Francis Danby. 1837-1839

Last time I looked at the state of online dating. Among the figures mentioned was Christian Rudder, one of the founders of the dating site OkCupid and the author of a book on big data called Dataclysm: Who We Are When We Think No One’s Looking that somehow manages to be both laugh-out-loud funny and deeply disturbing at the same time.

Rudder is famous, or infamous depending on your view of the matter, for having written a piece about his site with the provocative title We experiment on human beings! There he wrote:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

That statement might set the blood of some boiling, but my own negative reaction to it is somewhat tempered by the fact that Rudder’s willingness to run experiments on his site’s users originates, it seems, not in any conscious effort to be more successful at manipulating them, but as a way to quantify our ignorance. Or, as he puts it in the piece linked to above:

I’m the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.

Rudder eventually turned his experiments on the data of OkCupid’s users into his book Dataclysm, which displays the same kind of brutal honesty and acknowledgement of the limits of our knowledge. What he is trying to do is make sense of the deluge of data now inundating us. The only way we have found to do this is to create sophisticated algorithms that allow us to discern patterns in the flood. The problem with using algorithms to try to organize human interactions (which have themselves now become points of data) is that users are often reduced to whatever version of being human the algorithm’s programmers have embedded in it. Rudder is well aware of and completely upfront about these limitations, and refuses to make any special claims about algorithmic wisdom compared to the normal human sort. As he puts it in Dataclysm:

That said, all websites, and indeed all data scientists, objectify. Algorithms don’t work well with things that aren’t numbers, so when you want a computer to understand an idea, you have to convert as much of it as you can into digits. The challenge facing sites and apps is thus to chop and jam the continuum of human experience into little buckets 1, 2, 3, without anyone noticing: to divide some vast, ineffable process- for Facebook, friendship, for Reddit, community, for dating sites, love- into pieces a server can handle. (13)

At the same time, Rudder appears to see the data collected on sites such as OkCupid as a sort of mirror, reflecting back to us in ways never before available the real truth about ourselves, laid bare of the social conventions and politeness that tend to obscure the way we truly feel. And what Rudder finds in this data is not the reflection of humanity’s inner beauty one might hope for, but something more like the portrait out of The Picture of Dorian Gray.

As an example take what Rudder calls “Wooderson’s Law” after the character from Dazed and Confused who said in the film, “That’s what I love about these high school girls: I get older, they stay the same age.” What Rudder has found is that heterosexual male attraction to females peaks when those women are in their early 20’s and thereafter precipitously falls. On OkCupid at least, women in their 30’s and 40’s are effectively invisible when competing against women in their 20’s for male sexual attention. Fortunately for heterosexual men, women are more realistic in their expectations and tend to report the strongest attraction to men roughly their own age, until sometime in men’s 40’s, when male attractiveness also falls off a cliff… gulp.

Another finding from Rudder’s work is not just that looks rule, but just how absolutely they rule. In his aforementioned piece, Rudder lays out that the vast majority of users essentially equate personality with looks. A particularly stunning woman can find herself with a 99% personality rating even if she has not one word in her profile.

These are perhaps somewhat banal and even obvious discoveries about human nature that Rudder has been able to mine from OkCupid’s data, and to my mind at least they are less disturbing than the deep-seated racial bias he finds there as well. Again, at least among OkCupid’s users, dating preferences are heavily skewed against black men and women. Not just whites, it seems, but all other racial groups (Asians, Hispanics) would apparently prefer to date someone from a race other than African: disheartening for the 21st century.

Rudder looks at other dark manifestations of our collective self beyond those found in OkCupid’s data as well. Try using Google search as one would play the game Taboo. The search suggestions that pop up in the Google search bar, after all, are compiled on the basis of Google users’ most popular searches and thus provide a kind of gauge on what 1.17 billion human beings are thinking. Try these, some of which Rudder plays himself:

“why do women?”

“why do men?”

“why do white people?”

“why do black people?”

“why do Asians?”

“why do Muslims?”

The exercise gives a whole new meaning to Nietzsche’s observation that “When you stare into the abyss, the abyss stares back”.

Rudder also looks at the ability of social media to engender mobs. Take this case from Twitter in 2014. On New Year’s Eve of that year a young woman tweeted:

“This beautiful earth is now 2014 years old, amazing.”

Science obviously wasn’t her strength in school, but what should have led at most to collective giggles, or perhaps a polite correction regarding terrestrial chronology, ballooned into a storm of tweets like this:

“Kill yourself”

And:

“Kill yourself you stupid motherfucker”. (139)

As a recent study has pointed out, the emotion second most likely to go viral is rage; we can count ourselves very lucky that the emotion most likely to go viral is awe.

Then there’s the question of the structure of the whole thing. Like Jaron Lanier, Rudder is struck by the degree to which the seemingly democratized architecture of the Internet appears to consistently manifest the opposite and reveal itself as following Zipf’s Law, which Rudder concisely reduces to:

rank × number = constant (160)

Both the economy and the society of the Internet age are dominated by “superstars”: companies such as Google and Facebook that so far outstrip their rivals in search or social media that they might be called monopolies, along with celebrities, musical artists, and authors. Zipf’s Law also seems to apply to dating sites, where a few profiles dominate the class of those viewed by potential partners. In the environment of a networked society where invisibility is the common fate of almost all of us and success often hinges on increasing our own visibility, we are forced to turn ourselves towards “personal branding” and obsession over “Klout scores.” It’s not a new problem, but I wonder how much all this effort at garnering attention is stealing time from the actual work that makes such attention worthwhile and long-lasting.
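Zipf’s Law is easy to test for yourself on any ranked count data. A minimal sketch, here applied to word frequencies; the file name is just a stand-in for whatever text you have at hand:

```python
from collections import Counter

# Check Zipf's Law (rank x frequency ≈ constant) on any count data.
# "sample.txt" is a stand-in for whatever corpus you have available.
with open("sample.txt") as f:
    counts = Counter(f.read().lower().split())

for rank, (word, freq) in enumerate(counts.most_common(10), start=1):
    # If Zipf holds, this product stays roughly flat down the ranking.
    print(f"{rank:>2} {word:<15} {freq:>6}   rank*freq = {rank * freq}")
```

The same check works just as well on profile-view counts or follower counts: sort descending, multiply each count by its rank, and see whether the product stays roughly constant.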

Rudder is uncomfortable with all this algorithmization while at the same time accepting its inevitability. He writes of the project:

Reduction is inescapable. Algorithms are crude. Computers are machines. Data science is trying to make sense of an analog world. It’s a by-product of the basic physical nature of the micro-chip: a chip is just a sequence of tiny gates.

From that microscopic reality an absolutism propagates up through the whole enterprise, until at the highest level you have the definitions, data types and classes essential to programming languages like C and JavaScript.  (217-218)

Thing is, for all his humility about the effectiveness of big data so far, and his admittedly limited ability to draw solid conclusions from the data of OkCupid, he seems to place undue trust in the ability of large corporations and the security state to succeed at the same project. Much deeper data mining and superior analytics, he thinks, separate his efforts from those of the really big boys. Rudder writes:

Analytics has in many ways surpassed the information itself as the real lever to pry. Cookies in your web browser and guys hacking for your credit card numbers get most of the press and are certainly the most acutely annoying of the data collectors. But they’ve taken hold of a small fraction of your life and for that they’ve had to put in all kinds of work. (227)

He compares them to Mike Myers’ Dr. Evil holding the world hostage “for one million dollars”…

… while the billions fly to the real masterminds, like Acxiom. These corporate data marketers, with reach into bank and credit card records, retail histories, and government filings like tax accounts, know stuff about human behavior that no academic researcher searching for patterns on some website ever could. Meanwhile the resources and expertise the national security apparatus brings to bear make enterprise-level data mining look like Minesweeper. (227)

Yet do we really know this faith in big data isn’t an illusion? What discernible effects, clearly traceable to the juggernauts of big data such as Acxiom, can we actually point to in the overall economy or even in consumer behavior? For us to believe in the power of data, shouldn’t someone have to show us the data that it works, and not just the promise that it will transform the economy once it has achieved maximum penetration?

On that same score, what degree of faith should we put in the powers of big data when it comes to security? As far as I am aware no evidence has been produced that mass surveillance has prevented attacks; it didn’t stop the Charlie Hebdo killers. Just as importantly, it seemingly hasn’t prevented our public officials from being caught flat-footed and flabbergasted in the face of international events such as the revolution in Egypt or the war in Ukraine. And these latter big events would seem to be precisely the kinds of predictions big data should find relatively easy: monitoring broad public sentiment as expressed through social media and across telecommunications networks, and marrying that with inside knowledge of the machinations of the major political players at the storm center of events.

On this point of not yet having mastered the art of anticipating the future despite the mountains of data being collected, Anne Neuberger, Special Assistant to the NSA Director, gave a fascinating talk at the Long Now Foundation in August last year. During a sometimes intense Q&A she had this exchange with one of the moderators, Stanford professor Paul Saffo:

 Saffo: With big data, as a friend likes to say, “perhaps the data haystack that the intelligence community has created has grown too big to ever find the needle in.”

Neuberger : I think one of the reasons we talked about our desire to work with big data peers on analytics is because we certainly feel that we can glean far more value from the data that we have and potentially collect less data if we have a deeper understanding of how to better bring that together to develop more insights.

It’s a strange admission from a spokesperson for the nation’s premier cyber-intelligence agency that for their surveillance model to work they have to learn from the analytics of private-sector big data companies whose models themselves are far from having proven their effectiveness.

Perhaps, then, Rudder should have extended his skepticism beyond the world of dating websites. For me, I’ll only know big data in the security sphere works when our politicians, Noah-like, seem unusually well prepared for a major crisis that the rest of us data-poor chumps didn’t see coming a mile away.

 

Sex and Love in the Age of Algorithms

Eros and Psyche

How’s this for a 21st century Valentine’s Day tale: a group of religious fundamentalists wants to redefine human sexual and gender relationships based on a more than 2,000 year old religious text. Yet instead of doing this by aiming to seize hold of the cultural and political institutions of society, a task they find impossible, they create an algorithm which, once people enter its world, shapes their experience according to religiously derived assumptions users cannot see. People who enter this world have no control over their actions within it, and surrender their autonomy for the promise of finding their “soul mate.”

I’m not writing a science-fiction story- it’s a tale that’s essentially true.

One of the first places, perhaps the only place, where the desire to compress human behavior into algorithmically processable and rationalized “data” has run into a wall is the ever so irrational realm of sex and love. Perhaps I should have titled this piece “Cupid’s Revenge,” for the domain of sex and love has proved itself so unruly and non-computable that something now almost unbelievable has happened: real human beings have been brought back into the process of making actual decisions that affect their lives, rather than relying on silicon oracles to tell them what to do.

It’s a story not much known and therefore important to tell. The story begins with the exaggerated claims of what was one of the first and biggest online dating sites: eHarmony. Founded in 2000 by Neil Clark Warren, a clinical psychologist and former marriage counselor, eHarmony promoted itself as more than a mere dating site, claiming that it had the ability to help those using its service find their “soul mate.” As their senior research scientist, Gian C. Gonzaga, would put it:

 It is possible “to empirically derive a matchmaking algorithm that predicts the relationship of a couple before they ever meet.”

At the same time it made such claims, eHarmony was also very controlling in the way its customers were allowed to use its dating site. Members were not allowed to search for potential partners on their own, but were instead steered to “appropriate” matches determined by a 200-item questionnaire and the site’s algorithm, which remained opaque to its users. This model of what dating should be was doubtless driven by Warren’s religious background, for in addition to his psychological credentials, Warren was also a Christian theologian.

By 2011 eHarmony had garnered the attention of sceptical social psychologists, most notably Eli J. Finkel, who, along with his co-authors, wrote a critical piece for the American Psychological Association in 2011 on eHarmony and related online dating sites.

What Finkel wanted to know was whether claims such as eHarmony’s that it had discovered some ideal way to match individuals to long-term partners actually stood up to critical scrutiny. What he and his co-authors concluded was that while online dating had opened up a new frontier for romantic relationships, it had not solved the problem of how to actually find the love of one’s life. Or as he later put it in a recent article:

As almost a century of research on romantic relationships has taught us, predicting whether two people are romantically compatible requires the sort of information that comes to light only after they have actually met.

Faced with critical scrutiny, eHarmony felt compelled to do something that, to my knowledge, none of the programmers of the various algorithms that now mediate much of our relationship with the world have done; namely, to make the assumptions behind their algorithms explicit.

As Gonzaga explained it, eHarmony’s matching algorithm was based on six key characteristics of users that included things like “level of agreeableness” and “optimism.” Yet as another critic of eHarmony, Dr. Reis, told Gonzaga:

That agreeable person that you happen to be matching up with me would, in fact, get along famously with anyone in this room.
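eHarmony never published its algorithm, so any reconstruction is pure guesswork, but a trait-similarity matcher in its simplest imaginable form might look something like the sketch below; the trait names and scoring scheme are placeholders I have invented. Reis’s objection is visible right in the code: if a trait like agreeableness predicts getting along with nearly anyone, then scoring pairs by similarity on it tells you little about the pair:

```python
# A guessed-at, minimal trait-similarity matcher. eHarmony's actual
# algorithm was never published; these trait names are placeholders.
TRAITS = ["agreeableness", "optimism", "emotional_stability",
          "extraversion", "curiosity", "ambition"]

def compatibility(a: dict, b: dict) -> float:
    """Score two users by closeness on each trait (1.0 = identical)."""
    # Each trait is scored 0-10; the score is 1 minus normalized distance.
    distance = sum(abs(a[t] - b[t]) for t in TRAITS)
    return 1.0 - distance / (10 * len(TRAITS))

alice = {t: 8 for t in TRAITS}
bob = {t: 7 for t in TRAITS}
print(compatibility(alice, bob))  # 0.9 -- but does similarity predict love?
```

Note what such a scheme cannot capture: anything that, as Finkel put it, "comes to light only after they have actually met."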

Still, the major problem critics found with eHarmony wasn’t just that it made exaggerated claims for the effectiveness of its romantic algorithms that were at best a version of skimming, it’s that it asserted nearly complete control over the way its users defined what love actually was. As is the case with many algorithms, the one used by eHarmony was a way for its designers and owners to constrain those using it to impose, rightly or wrongly, their own value assumptions about the world.

And like many classic romantic tales, this one ended with the rebellion of messy human emotion over reason and paternalistic control. Social psychologists weren’t the only ones who found eHarmony’s model constraining, and they weren’t the first to notice its flaws. One of the founders of an alternative dating site, Christian Rudder of OkCupid, has noted that much of what his organization has done was done in reaction to the exaggerated claims for the efficacy of eHarmony’s algorithms and the top-down constraints its creators imposed. But it is another, much maligned dating site, Tinder, that proved to be the real rebel in this story.

Critics of Tinder, where users swipe through profile pictures to find potential dates, have labeled it a “hook-up” site that encourages shallowness. Yet Finkel concludes:

Yes, Tinder is superficial. It doesn’t let people browse profiles to find compatible partners, and it doesn’t claim to possess an algorithm that can find your soulmate. But this approach is at least honest and avoids the errors committed by more traditional approaches to online dating.

And appearance-driven sites are unlikely to be the last word in online dating, especially for older Romeos and Juliets who would like to go a little deeper than looks. Psychologist Robert Epstein, working at the MIT Media Lab, sees two up-and-coming trends that will likely further humanize the 21st century dating experience. The first is the rise of non-video-game-like virtual dating environments. As he describes it:

….so at some point you will be able to have, you know, something like a real date with someone, but do it virtually, which means the safety issue is taken care of and you’ll find out how you interact with someone in some semi-real setting or even a real setting; maybe you can go to some exotic place, maybe you can even go to the Champs-Elysées in Paris or maybe you can go down to the local fast-food joint with them, but do it virtually and interact with them.

The other, just as important but less tech-sexy, change Epstein sees coming is bringing friends and family back into the dating experience:

Right now, if you sign up with the eHarmony or match.com or any of the other big services, you’re alone—you’re completely alone. It’s like being at a huge bar, but going without your guy friends or your girl friends—you’re really alone. But in the real world, the community is very helpful in trying to determine whether someone is right for you, and some of the new services allow you to go online with friends and family and have, you know, your best friend with you searching for potential partners, checking people out. So, that’s the new community approach to online dating.

As has long been the case, sex and love have been among the first explorers moving out into previously uncharted realms of human possibility. Yet because of this, sex and love are also the proverbial canary in the coal mine, informing us of potential dangers. The experience of online dating suggests that we need to be sceptical of the exaggerated claims of the various algorithms that now mediate much of our lives, and that we should be privy to their underlying assumptions. To be successful, algorithms need to bring our humanity back into the loop rather than regulate it away as something messy, imperfect, irrational and unsystematic.

There is another lesson here as well, for the more something becomes disconnected from our human capacity to extend trust through person-to-person contact and through tapping into the wisdom of our own collective networks of trust, the more dependent we become on overseers who, in exchange for protecting us from deception, demand the kinds of intimate knowledge from us only friends and lovers deserve.

 

Big Data as statistical masturbation

Infinite Book Tunnel

It’s just possible that there is a looming crisis in yet another technological sector whose proponents have leaped too far ahead, and too soon, promising all kinds of things they are unable to deliver. It’s strange how we keep ramming our head into this same damned wall, but this next crisis is perhaps more important than the deflated hype of other times, say our over-optimism about the timeline for human space flight in the 1970’s, or the “AI winter” of the 1980’s, or the miracles that seemed just at our fingertips when we cracked the human genome while pulling riches out of the air during the dotcom boom, both of which brought us to a state of mania in the 1990’s and early 2000’s.

The thing that separates a potential new crisis in the area of so-called “Big Data” from these earlier ones is that, literally overnight, we have reconstructed much of our economy and national security infrastructure on its yet-to-be-proven premises, eroding our ancient right to privacy in the process. Now we are on the verge of changing not just the nature of the science upon which we all depend, but nearly every other field of human intellectual endeavor. And we’ve done, and are doing, this despite the fact that the most over-the-top promises of Big Data are about as epistemologically grounded as divining the future by looking at goat entrails.

Well, that might be a little unfair. Big Data is helpful, but the question is: helpful for what? A tool, as opposed to a supposedly magical talisman, has its limits, and understanding those limits should lead not to our jettisoning the tool of large-scale data-based analysis, but to figuring out what needs to be done to make these new capacities actually useful, rather than, like all forms of divination, comforting us with the idea that we can know the future and thus somehow exert control over it, when in reality both our foresight and our powers are much more limited.

Start with the issue of the digital economy. One model underlies most of the major Internet giants- Google, Facebook and to a lesser extent Apple and Amazon- along with a whole set of behemoths few of us can name but that underlie everything we do online, especially data aggregators such as Acxiom. That model is to gather up every last digital record we leave behind, many of them obtained in exchange for “free” services, and to use this living archive to target advertisements at us.

It’s not only that this model has provided the infrastructure for an unprecedented violation of privacy by the security state (more on which below); it’s that there’s no real evidence that it even works.

Just anecdotally reflect on your own personal experience. If companies can very reasonably be said to know you better than your mother, your wife, or even you know yourself, why are the ads coming your way so damn obvious, and frankly even oblivious? In my own case, if I shop online for something, a hammer, a car, a pair of pants, I end up getting ads for that very same type of product weeks or even months after I have actually bought a version of the item I was searching for.

In large measure, the Internet is a giant market in which we can find products or information. Targeted ads can only really work if they are able to refract the information I am searching for in their marketed product’s favor, that is, if they lead me to buy something I would not have purchased otherwise. Derek Thompson, in the piece linked to above, points out that this problem is called endogeneity, or more colloquially: “hell, I was going to buy it anyway.”
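The textbook remedy for endogeneity, at least in principle, is a randomized holdout: withhold the ad from a random control group and credit the ad only with the difference in purchase rates between the two groups. A minimal sketch with made-up numbers:

```python
# Measuring ad "lift" with a randomized holdout. The counts are invented;
# the point is that only the *difference* between groups is the ad's effect.
exposed_buyers, exposed_total = 1_050, 100_000   # randomly shown the ad
control_buyers, control_total = 1_000, 100_000   # randomly withheld

exposed_rate = exposed_buyers / exposed_total
control_rate = control_buyers / control_total
lift = exposed_rate - control_rate

print(f"Conversion lift: {lift:.4%} "
      f"(vs {control_rate:.2%} who were 'going to buy it anyway')")
```

In this invented example the ad moves purchases by a twentieth of a percentage point; the other one percent of buyers would have bought regardless, which is exactly the sort of accounting the industry rarely shows us.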

The problem with this economic model, though, goes even deeper than that. At least one-third of clicks on digital ads aren’t human beings at all but bots that represent a way of gaming advertising revenue like something right out of a William Gibson novel.

Okay, so we have this economic model based on what at its root is really just spyware, and despite all the billions poured into it, we have no idea if it actually affects consumer behavior. That might be merely an annoying feature of the present rather than something to fret about were it not for the fact that this surveillance architecture has apparently been captured by the security services of the state. Their model is essentially just a darker version of its commercial forebear. Here the NSA, GCHQ et al. hoover up as much of the Internet’s information as they can get their hands on. Ostensibly, they’re doing this so they can algorithmically sort through this data to identify threats.

In this case, we have just as many reasons to suspect that it doesn’t really work, and though the intelligence agencies claim it does, none of them will actually show us their supposed evidence. The reasons to suspect that mass surveillance might suffer flaws similar to those of mass “personalized” marketing were excellently summed up in a recent article in the Financial Times by Zeynep Tufekci, who wrote:

But the assertion that big data is “what it’s all about” when it comes to predicting rare events is not supported by what we know about how these methods work, and more importantly, don’t work. Analytics on massive datasets can be powerful in analysing and identifying broad patterns, or events that occur regularly and frequently, but are singularly unsuited to finding unpredictable, erratic, and rare needles in huge haystacks. In fact, the bigger the haystack — the more massive the scale and the wider the scope of the surveillance — the less suited these methods are to finding such exceptional events, and the more they may serve to direct resources and attention away from appropriate tools and methods.
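Tufekci’s point is, at bottom, base-rate arithmetic. A sketch with invented but plausibly scaled numbers shows why rare events drown the needle: even an implausibly accurate classifier flags vastly more innocents than plotters:

```python
# Why rare events defeat dragnet analytics: base-rate arithmetic.
# All numbers below are invented for illustration.
population = 300_000_000     # people under surveillance
true_threats = 3_000         # actual rare-event cases among them
sensitivity = 0.99           # fraction of real threats correctly flagged
false_positive_rate = 0.01   # fraction of innocents wrongly flagged

true_flags = true_threats * sensitivity                          # ~2,970
false_flags = (population - true_threats) * false_positive_rate  # ~3 million
precision = true_flags / (true_flags + false_flags)

print(f"{false_flags:,.0f} false leads for {true_flags:,.0f} real ones "
      f"-> only {precision:.2%} of flags are genuine")
```

A 99 percent accurate system, on these assumed figures, still buries every real lead under a thousand false ones; making the haystack bigger only makes the ratio worse.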

I’ll get to what’s epistemologically wrong with using Big Data in the way the NSA does, the use Tufekci rightly criticizes, in a moment; but on a personal rather than societal level, the biggest danger from getting the capabilities of Big Data wrong seems most likely to come through its potentially flawed use in medicine.

Here’s the kind of hype we’re in the midst of, as found in a recent article by Tim McDonnell in Nautilus:

We’re well on our way to a future where massive data processing will power not just medical research, but nearly every aspect of society. Viktor Mayer-Schönberger, a data scholar at the University of Oxford’s Oxford Internet Institute, says we are in the midst of a fundamental shift from a culture in which we make inferences about the world based on a small amount of information to one in which sweeping new insights are gleaned by steadily accumulating a virtually limitless amount of data on everything.

The value of collecting all the information, says Mayer-Schönberger, who published an exhaustive treatise entitled Big Data in March, is that “you don’t have to worry about biases or randomization. You don’t have to worry about having a hypothesis, a conclusion, beforehand.” If you look at everything, the landscape will become apparent and patterns will naturally emerge.

Here’s the problem with this line of reasoning, a problem that I think is the same as, and shares the same solution with, the issue of mass surveillance by the NSA and other security agencies. It begins with the idea that “the landscape will become apparent and patterns will naturally emerge.”

The flaw in this reasoning has to do with the way very large data sets work. One would think that sampling millions of people, as we’re now able to do via ubiquitous monitoring, would offer enormous gains over the population samples of only a few thousand we used to be confined to, yet this isn’t necessarily the case. The problem is that the bigger and richer your data set, the greater your chance of finding false correlations.

Previously I had thought that surely this was a problem statisticians had either solved or were on the verge of solving. They haven’t, at least according to the computer scientist Michael Jordan, who fears that we might be on the verge of a “Big Data winter” similar to the one AI went through in the 1980’s and 90’s. Let’s say you had an extremely large database with multiple forms of metrics:

Now, if I start allowing myself to look at all of the combinations of these features—if you live in Beijing, and you ride bike to work, and you work in a certain job, and are a certain age—what’s the probability you will have a certain disease or you will like my advertisement? Now I’m getting combinations of millions of attributes, and the number of such combinations is exponential; it gets to be the size of the number of atoms in the universe.

Those are the hypotheses that I’m willing to consider. And for any particular database, I will find some combination of columns that will predict perfectly any outcome, just by chance alone. If I just look at all the people who have a heart attack and compare them to all the people that don’t have a heart attack, and I’m looking for combinations of the columns that predict heart attacks, I will find all kinds of spurious combinations of columns, because there are huge numbers of them.
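Jordan’s point is easy to demonstrate for yourself: given enough columns of pure noise, some column will “predict” any outcome. A minimal simulation:

```python
import numpy as np

# With enough random columns, some will correlate with any outcome by
# chance alone -- Jordan's spurious-correlation problem in miniature.
rng = np.random.default_rng(0)
n_people, n_features = 1_000, 10_000

features = rng.normal(size=(n_people, n_features))  # pure noise
outcome = rng.normal(size=n_people)                 # also pure noise

# Pearson correlation of every feature column with the outcome.
f_centered = features - features.mean(axis=0)
o_centered = outcome - outcome.mean()
corrs = f_centered.T @ o_centered / (
    np.linalg.norm(f_centered, axis=0) * np.linalg.norm(o_centered))

print(f"Strongest 'predictor' among pure noise: r = {np.abs(corrs).max():.3f}")
```

With a thousand “people” and ten thousand noise columns, the best chance correlation comes out around 0.12 to 0.13, a value that would look wildly “significant” if you pretended you had tested only that one column.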

The actual mathematics of distinguishing spurious from potentially useful correlations is, in Jordan’s estimation, far from worked out:

We are just getting this engineering science assembled. We have many ideas that come from hundreds of years of statistics and computer science. And we’re working on putting them together, making them scalable. A lot of the ideas for controlling what are called familywise errors, where I have many hypotheses and want to know my error rate, have emerged over the last 30 years. But many of them haven’t been studied computationally. It’s hard mathematics and engineering to work all this out, and it will take time.

It’s not a year or two. It will take decades to get right. We are still learning how to do big data well.

Alright, now that’s a problem. As you’ll no doubt notice the danger of false correlation that Jordan identifies as a problem for science is almost exactly the same critique Tufekci  made against the mass surveillance of the NSA. That is, unless the NSA and its cohorts have actually solved the statistical/engineering problems Jordan identified and haven’t told us, all the biggest data haystack in the world is going to lead to is too many leads to follow, most of them false, and many of which will drain resources from actual public protection. Perhaps equally troubling: if security services have solved these statistical/engineering problems how much will be wasted in research funding and how many lives will be lost because medical scientists were kept from the tools that would have empowered their research?

At least part of the solution to this will be remembering why we developed statistical analysis in the first place. Herbert I. Weisberg with his recent book Willful Ignorance: The Mismeasure of Uncertainty has provided a wonderful, short primer on the subject.

Statistical evidence, according to Weisberg, was first introduced to medical research back in the 1950’s as a protection against exaggerated claims of efficacy and widespread quackery. Since then we have come to take the p value of .05 almost as the truth itself. Weisberg’s book is really a plea to clinicians to know their patients and not rely almost exclusively on statistical analyses of “average” patients to help those in their care make life-altering decisions about which medicines to take or procedures to undergo. Weisberg thinks that personalized medicine will over the long term solve these problems, and while I won’t go into my doubts about that here, I do think that in the experience of the physician he identifies the root of the solution to our Big Data problem.

Rather than think of Big Data as somehow providing us with a picture of reality that “naturally emerges,” as Mayer-Schönberger, quoted above, suggested, we should start to view it as a way to easily and cheaply gauge the potential validity of a hypothesis. And it’s not only this first step that should continue to be guided by old-fashioned science rather than computer-driven numerology, but the remaining steps as well: a positive signal followed up by actual scientists and other researchers exercising such now-rusting skills as running actual experiments and building theories to explain their results. Big Data, if done right, won’t end up making science a form of information processing, but will instead be used as a primary tool for keeping scientists from going down a cul-de-sac.

The same principle applied to mass surveillance means a return to old-school human intelligence, even if it now needs to be empowered by new digital tools. Rather than Big Data being used to hoover up and analyze all potential leads, espionage and counterterrorism should become more targeted and based on efforts to understand and penetrate threat groups themselves. The move back to human intelligence and towards more targeted surveillance, rather than the mass data grab symbolized by Bluffdale, may be a reality forced on the NSA et al. by events. In part due to the Snowden revelations, terrorist and criminal networks have already abandoned the non-secure public networks which the rest of us use. Mass surveillance has lost its raison d’être.

At least in terms of science and medicine, I recently saw a version of how Big Data done right might work. In an article for Quanta and Scientific American, Veronique Greenwood discussed two recent efforts by researchers to use Big Data to find new understandings of, and treatments for, disease.

The physicist (not biologist) Stefan Thurner has created a network model of comorbid diseases, trying to uncover the hidden relationships between different, seemingly unrelated medical conditions. What I find interesting about this is that it gives us a new way of understanding disease, breaking free of hermetically sealed categories that may blind us to underlying mechanisms shared between medical conditions. I find this especially pressing where it comes to mental health, where the kind of symptom listing found in the DSM, the Bible for mental health care professionals, has never resulted in a causative model of how conditions such as anxiety or depression actually work, and is based on an antiquated separation between the mind and the body, not to mention its neglect of the social and environmental factors that also give shape to mental health.

Even more interesting, from Greenwood’s piece, are the efforts by Joseph Loscalzo of Harvard Medical School to come up with a whole new model of disease, one that looks beyond genome associations to map out the molecular networks underlying disease, isolating the statistical correlation between a particular variant of such a map and a particular disease. This relationship between genes and proteins correlated with a disease is something Loscalzo calls a “disease module.”

Thurner described the underlying methodology behind his, and by implication Loscalzo’s, efforts to Greenwood this way:

“Once you draw a network, you are drawing hypotheses on a piece of paper,” Thurner said. “You are saying, ‘Wow, look, I didn’t know these two things were related. Why could they be? Or is it just that our statistical threshold did not kick it out?’” In network analysis, you first validate your analysis by checking that it recreates connections that people have already identified in whatever system you are studying. After that, Thurner said, “the ones that did not exist before, those are new hypotheses. Then the work really starts.”
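A toy version of the kind of network construction Thurner describes might start from co-occurrence counts and keep only the links that exceed what chance would predict; everything in the sketch below (the diagnoses, the counts, and the crude threshold standing in for a real statistical test) is invented for illustration:

```python
from itertools import combinations

# Toy comorbidity network: link two diagnoses when they co-occur more
# often than independence predicts. All data and thresholds are invented.
n_patients = 10_000
prevalence = {"diabetes": 800, "hypertension": 2_000, "gout": 300}
co_occurrence = {("diabetes", "hypertension"): 450,
                 ("diabetes", "gout"): 30,
                 ("hypertension", "gout"): 62}

edges = []  # each surviving edge is a *hypothesis*, not a conclusion
for a, b in combinations(prevalence, 2):
    observed = co_occurrence[(a, b)]
    expected = prevalence[a] * prevalence[b] / n_patients  # if independent
    if observed / expected > 2.0:  # crude stand-in for a real statistical test
        edges.append((a, b, round(observed / expected, 2)))

print(edges)  # here: diabetes-hypertension co-occur ~2.8x expectation
```

On these made-up numbers only the diabetes-hypertension link survives, and, exactly as Thurner says, that edge is where the work starts, not where it ends.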

It’s in the next steps, the testing of hypotheses and the development of a stable model, where the most important work really lies. Like any intellectual fad, Big Data has its element of truth. We can now much more easily distill large and sometimes previously invisible patterns from the deluge of information in which we are now drowning. This has potentially huge benefits for science, medicine, social policy, and law enforcement.

The problem comes from thinking that we are at the point where our data-crunching algorithms can do the work for us and are about to replace human beings and their skills at investigating problems deeply and in the real world. The danger there would be thinking that knowledge could work like self-gratification: a mere thing of the mind, without all the hard work, compromises, and conflict between expectations and reality that goes into a real relationship. Ironically, this was a truth perhaps discovered first not by scientists or intelligence agencies but by online dating services. To that strange story, next time….

2040’s America will be like 1840’s Britain, with robots?

Christopher Gibbs Steampunk

Looked at in a certain light, Adrian Hon’s A History of the Future in 100 Objects can be seen as giving us a window into a fictionalized version of an intermediate technological stage we may be entering. It is the period when the gains in artificial intelligence are clearly happening, but have yet to completely replace human intelligence. The question of whether AI ever will actually replace us is not of interest to me here. It certainly won’t happen tomorrow, and technological prediction beyond a certain limited horizon is a fool’s game.

Nevertheless, some features of the kind of hybrid stage we have entered are clearly apparent. Hon built an entire imagined world around them, with “amplified teams” (AI working side by side with groups of humans) as one of the major elements of 21st century work, sports, and much else besides.

The economist Tyler Cowen perhaps did Hon one better, for he based his very similar version of the future not only on things that are happening right now, but provided insight on what we should do as job holders and bread-winners in light of the rise of ubiquitous, if less than human level, artificial intelligence. One only wishes that his vision had room for more politics, for if Cowen is right, and absent us taking collective responsibility for the type of future we want to live in, 2040’s America might look like the Britain found in Dickens, only we’ll be surrounded by robots.

Cowen may seem a strange duck to take up the techno-optimism mantle, but he did so with gusto in his recent book Average is Over. The book is in essence a sequel to Cowen’s earlier best seller The Great Stagnation, in which he argued that developed economies, including the United States, had entered a period of secular stagnation beginning in the 1970’s. The reason for this stagnation was that advanced economies had essentially picked all the “low hanging fruit” of the industrial revolution.

Arguing that we are in a period of technological stagnation at first seems strange, but it seems less so when I reflect a moment on facts like these: we fly no faster than would have been common for my grandparents in the 1960’s; the kitchen in my family photos from the Carter days looks surprisingly like the kitchen I have right now, minus the paneling; and, saddest of all from the point of view of someone brought up on Star Trek, Star Wars and Star Blazers, with a comforter sporting Viking 2 and Pioneer, not only have we failed to send human visitors to Mars or beyond, we haven’t even been back to the moon. Hell, we don’t even have any human beings beyond low-earth orbit.

Of course, it would be silly to argue there has been no technological progress since Nixon. Information, communication and computer technology have progressed at an incredible speed, remaking much of the world in their wake, and have now seemingly been joined by revolutions in biotechnology and renewable energy.

And yet, despite how revolutionary these technologies have been, they have not been able to do the heavy lifting of prior forms of industrialization, for the simple reason that they haven’t been as qualitatively transformative as the industrial revolution. If I had a different job I could function just fine without the internet, and my life would be different only at the margins. Set the technological clock by which I live back to the days preceding industrialization, before electricity and the internal combustion engine, and I’d be living the life of my dawn-to-dusk Amish neighbors- a different life entirely.

Average is Over is a follow-up to Cowen’s earlier book in that in it he argues that the technological changes now taking place will have an impact that shakes us out of our stagnation, or at least shows how that stagnation is itself evolving into something quite different, with some able to escape its pull while others fall even further behind.

Like Hon, Cowen thinks intermediate-level AI is what we should be paying attention to, rather than Kurzweil- or Bostrom-like hopes and fears regarding superintelligence. Also like Hon, Cowen thinks the most important aspect of artificial intelligence in the near future is human-AI teams. This is the lesson Cowen takes from, among other things, freestyle chess.

For those who haven’t been paying attention to the world of competitive chess, freestyle chess is what emerged once anyone could buy, for a few dollars, a chess-playing program for their phone that could beat the best players in the world. One might have thought that would be the death knell for human chess, but something quite different has happened. Now, some of the most popular chess games are freestyle, meaning human-machine vs. human-machine.

The moral Cowen draws from freestyle chess is that the winners of these games- and, he extrapolates, of the economic “games” of the future- are those human beings who are most willing to defer to the decisions of the machine. I find this conclusion more than a little chilling given that we’re talking about real people here rather than knights or pawns, but Cowen seems to think it’s just common sense. (For the curious, the mechanics of this deferential play are easy to mock up; see the sketch below.)
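A minimal sketch of such a “centaur” loop, using the open-source python-chess library and a UCI engine such as Stockfish- both assumed installed, and nothing here drawn from Cowen’s book. The engine proposes candidate lines; the human’s whole job is to pick among them:

```python
# A toy "centaur" loop: the machine suggests, the human defers.
import chess
import chess.engine

board = chess.Board()
# Path to a UCI engine binary is assumed; adjust for your system.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")

while not board.is_game_over():
    # Ask the engine for its three best candidate lines (MultiPV).
    infos = engine.analyse(board, chess.engine.Limit(time=0.5), multipv=3)
    candidates = [info["pv"][0] for info in infos]
    for i, info in enumerate(infos):
        print(f"{i}: {board.san(candidates[i])}  (engine score: {info['score']})")
    # One human plays both sides here, purely for simplicity.
    choice = int(input(f"Pick a candidate (0-{len(candidates) - 1}): "))
    board.push(candidates[choice])

engine.quit()
print(board.result())
```

Real freestyle play is of course richer than this- choosing when to trust which engine, managing the clock, steering openings- but the basic shape of “machine proposes, human disposes” is just this loop.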

In its simplest form Cowen’s argument boils down to the prediction that an increasing amount of human work in the future will come in the form of these AI-human teams. Some of this, he admits, will amount to no workers at all, with the human part of the “team” reduced to an unpaid customer. I now almost always scan and bag my own goods at the grocery store, just as I can’t remember the last time I actually spoke to a bank teller who wasn’t my mom. Cowen also admits that the rise of AI might mean the world actually gets “dumber,” our interactions with our environment simplified to foster smooth integration with machines and compressed to meet their limits.

In his vision intelligent machines will revolutionize everything from medicine to education to business management and negotiation to love. The human beings who will best thrive in this new environment will be those whose work best complements that of intelligent machines, and this will be the case all the way from the factory floor to the classroom. Intelligent machines should improve human judgment in areas such as medical diagnostics, and would even replace judges in the courtroom if we are ever willing to take the constitutional plunge. Teachers will go from educators to “coaches” as intelligent machines allow individualized instruction, but education will still require a human touch when it comes to motivating students.

His message to those who don’t work well with intelligent machines is- good luck. He sees automation leading to an ever more competitive job market in which many will fail to develop the skills necessary to thrive. Those unfortunate ones will be left to fend for themselves in the face of an increasingly penny-pinching state. There is one area, however, where Cowen thinks you might find refuge if machines just aren’t your thing- marketing. Indeed, he sees marketing as one of the major growth areas in the otherwise increasingly post-human economy.

The reason for this is simple. In the future there are going to be fewer, not more, people with surplus cash to spend on all the goods built by a lot of robots and a handful of humans. One will have to find and persuade those with real incomes to part with some of their cash. Computers can do the finding, but it will take human actors to sell the dream represented by a product.

The world of work presented in Cowen’s Average is Over is almost exclusively that of the middle class and higher, who find their way with ease around the infosphere, or whatever we want to call this shell of information and knowledge we’ve built around ourselves. Either that, or those who thrive economically will be those able to successfully pitch whatever it is they’re selling to wealthy or well-off buyers, sometimes even with the help of AI able to read human emotions.

I wish Cowen had focused more on what it will be like to be poor in such a world. One thing is certain: it will not be fun. For one, he sees further contraction rather than expansion of the social safety net, and widespread conservatism, rather than any attempts at radically new ways of organizing our economy, society and politics. Himself a libertarian conservative, Cowen sees such conservatism baked into the demographic cake of our aging societies. The old do not lead revolutions, and given enough of them they can prevent the young from forcing any deep structural changes to society.

Cowen also has a thing for what bioethicists call “moral enhancement”, though he doesn’t use the term. Moral enhancement need not only come from conservative forces, as the extensive work on the subject by the progressive James Hughes shows, but in the hands of both Hon and Cowen, moral enhancement is a bulwark of conservative societies, where the world of middle class work and the social safety net no longer function, or even exist, in the ways they did in the 20th century.

Hon, with his neuroscience background, sees moral enhancement leveraging off of our increasing mastery over the brain, but manifesting itself in a revival of religious longings related to meaning- a meaning that was for a long time provided by work, callings and occupations that he projects will become less and less available as we roll through the 21st century and human workers are replaced by increasingly intelligent machines. Cowen, on the other hand, sees moral enhancement as the only way the poor will survive in an increasingly competitive and stingy environment, though his enhancement is to take place by more traditional means: the return of strict schools that inculcate Victorian-era morals such as self-control and, above all, conscientiousness in the young. Cowen is far from alone in thinking that in an era when machines are capable of much of the physical and intellectual labor once done by human beings, what will matter most to individual success are ancient virtues.

In Cowen’s world the rich with money to burn are chased down with a combination of AI, behavioral economics, targeted consumer surveillance, and old-fashioned, fleshy persuasion to part with their cash, but what will such a system be like for those chronically out of work? Even should mass government surveillance disappear tomorrow (fat chance), it seems the poor will still face a world where the forces behind their ever more complex society become increasingly opaque, responsible humans become harder to find, and they are constantly “nudged” by people who claim to know better. For the poor, surveillance technologies will likely be used not to sell them stuff they can’t afford, but as tools of the repo man, debt collector, parole officer, and cop- tools that will slowly chisel away whatever slim column still connects them to the former middle class world of their parents. It is a world more akin to the 1940’s or even the 1840’s than to anything we have taken to be normal since the middle of the 20th century.

I do not know if such a world is sustainable over the long haul, and pray that it is not. The pessimist in me remembers that the classical and medieval worlds existed for long periods of time with extreme levels of inequality in both wealth and power; the optimist chimes in that these were ages when the common people did not know how to read. In any case, it is not a society that must come about by some macabre logic of economic determinism. The mechanism by which Cowen sees any sustained response to such a future failing to materialize is our own political paralysis and generational tribalism. He seems to want this world more than he is offering us a warning of its arrival. Let’s decide to prove him wrong, for the technologies he puts so much hope in could be used in totally different ways, in the service of a more just form of society.

However critical I am of Cowen for accepting such a world as a fait accompli, the man still has some rather fascinating things to say. Take for instance his view of the future of science:

Once genius machines start coming up with new theories…. intelligibility will seem like a legacy from the very distant past. (p. 220)

For Cowen much of science in the 21st century will be driven by coming up with theories and correlations from the massive amounts of data we are collecting, a task more suited to a computer than to a man (or woman) in a lab coat. Eventually machine-derived theories will become so complex that no human being will be able to understand them. Progress in science will be given over to intelligent machines even as non-scientists find increasing opportunities to engage in “citizen science”.

Come to think of it, lack of intelligibility runs like a red thread throughout Average is Over, from “ugly” machine chess moves that human players scratch their heads at, to Cowen’s claim that those who succeed in the next century will be those who place their “faith” in the decisions of machines- choices of action they themselves do not fully understand. Let’s hope he’s wrong on that score as well, for lack of intelligibility in politics, economics, and science drives conspiracy theories, paranoia, superstition, and political immobility.

Cowen believes the time when secular persons are able to glean from science a general, intelligible picture of the world is coming to a close. This would be a disaster in the sense that science gives us the only picture of the world capable of being universally shared that is also able to accurately guide our response to both nature and the technological world. At least for the moment, perhaps the best science writer we have suggests something very different. To her new book, next time….

The Future As History


It is a risky business trying to predict the future, and although it makes some sense to try to get a handle on what the world might be like in one’s lifetime, one might wonder what is the point of all this prophecy that stretches out beyond the decades one is expected to live. The answer, I think, is that no one who engages in futurism is really trying to predict the future so much as shape it, or at the very least inspire Noah-like preparations for disaster. Those who imagine a dark future are trying to scare the bejesus out of us so that we do what is necessary not to end up in a world gone black, swept away by the flood waters. Problem is, extreme fear more often leads to paralysis than to reform or ark building- something that God, had he been a behavioral psychologist, would have known.

Those with a Pollyannaish story about tomorrow, on the other hand, are usually trying to convince us to buy into some set of current trends, and for that reason optimists often end up being the last thing they think they are: a support for conservative politics. Why change what’s going well or destined, in the long run, to end well? The problem here is that, as Keynes said, “in the long run we are all dead,” which should be an indication that if we see a problem out in front of us we should address it, rather than rest on faith and let some telos of history sort the whole thing out.

It’s hard to ride the thin line between optimism and pessimism regarding the future while still providing a view of it that is realistic and compelling and that encourages us toward action in the present. Science fiction, where it avoids the pull toward utopia or dystopia, and regardless of its flaws, does manage to present versions of the future that are gripping- a thousand times better than the dry futurist “reports” that go down like sawdust- but the genre suffers from having too many balls in the air.

There is not only the common complaint that, as with political novels, the human aspects of a story suffer from being tied too tightly to a social “purpose”- in this case, to offer plausible predictions of the future- but also the problem that the very idea of crafting a “plausible” future can serve as an anchor on the imagination. An author of fiction should be free to sail into any world that comes into his head- plausible destinations be damned.

Adrian Hon’s recent A History of the Future in 100 Objects overcomes this problem of using science fiction to craft plausible versions of the future by jettisoning fictional narrative and presenting the future in the form of a work of history. Hon was inspired to take this approach in part by an actual recent work of history: Neil MacGregor’s A History of the World in 100 Objects. In the same way objects from the human past can reveal deep insights not just into the particular cultures that made them, but also help us apprehend the trajectory that the whole of humankind has taken so far, 100 imagined “objects” from the century we have yet to see play out allow Hon to reveal the “culture” of the near future, which, when all is said and done, amounts to interrogating the path we are currently on.

Hon is perhaps uniquely positioned to give us a feel for where we are currently headed. Trained as a neuroscientist, he is able to see what the ongoing revolutionary breakthroughs in neuroscience might mean for society. He also has his finger on the pulse of the increasingly important world of online gaming as the CEO of the company Six-to-Start, which develops interactive real-world games such as Zombies, Run!

In what follows I’ll look at the nine of Hon’s objects of the future I found most intriguing. Here we go:

#8 Locked Simulation Interrogation – 2019

There’s a lot of discussion these days about the revival of virtual reality, especially with the quite revolutionary new Oculus Rift VR headset. We’ve also seen a surge of brain scanning that purports to see inside the human mind, revealing everything from when a person is lying to whether or not they are prone to mystical experiences. Hon imagines these technologies being combined, just a few years out, to form a brand new and disturbing form of interrogation.

In 2019, after a series of terrorist attacks in Charlotte, North Carolina, the FBI starts using so-called “locked-sims” to interrogate terrorist suspects. A suspect is run through a simulation in which his neurological responses are closely monitored, in the hope that they might help identify other suspects or unravel future plots.

The technique of locked-sims appears so successful that it soon becomes the rage in other areas of law enforcement involving far less existential risks to the public. Imagine murder suspects or even petty criminals run through a simulated version of the crime, their every internal and external reaction minutely monitored.

Whatever their promise, locked-sims prove full of errors and abuses, not the least of which is their tendency to leave the innocent people interrogated in them emotionally scarred. Ancient protections end up saving us from a nightmare technology: in 2033 the US Supreme Court deems locked-sims a form of “cruel and unusual punishment” and therefore constitutionally prohibited.

#20 Cross Ball – 2026

A good deal of A History of the Future deals with the way we might relate to advances in artificial intelligence, and one thing Hon tries to make clear is that, in this century at least, human beings won’t suddenly just exit the stage to make room for AI. For a good while the world will be hybrid.

“Cross Ball” is an imagined game that’s a little like the ancient Mesoamerican ballgame, only in Cross Ball human beings work in conjunction with bots. Hon sees a lot of AI combined with human teams in the future world of work, but in sports the reason for the amalgam has more to do with human psychology:

Bots on their own were boring; humans on their own were old-fashioned. But bots and humans together? That was something new.

This would be new for real-world games, but we already have it in freestyle chess, where old-fashioned humans can no longer beat machines and no one seems to want to watch matches between chess-playing programs, so that the games with the most interest have been those matching human beings working with programs against other human beings working with programs. In the real-world bot/human games of the future, I hope they have good helmets.

#23 Desir – 2026

Another area where I thought Hon was really onto something was when it came to puppets. Seriously. AI is indeed getting better all the time, even if Siri or customer service bots can be so frustrating, but it’s likely some time out before bots show anything like the full panoply of human interaction imagined in the film Her. But there’s a mid-point here, and that’s having human beings remotely control the bots- be their puppeteers.

Hon imagines this in the realm of prostitution. A company called Desir essentially uses very sophisticated forms of sex dolls as puppets controlled by experienced prostitutes. The weaknesses of AI give human beings something to do. As he quotes Desir’s imaginary founder:

Our agent AI is pretty good as it is, but like I said, there’s nothing that beats the intimate connection that only a real human can make. Our members are experts and they know what to say, how to move and how to act better than our own AI agents, so I think that any members who choose to get involved in puppeting will supplement their income pretty nicely

#26 Amplified Teams – 2027

One thing I really liked about A History of the Future is that it put flesh on the bones of an idea developed by the economist Tyler Cowen in his book Average is Over (review pending): that employment in the 21st century won’t eventually all be swallowed up by robots, but that the highest earners, or even just those able to sustain themselves economically, will work in teams connected to the increasing capacity of AI. Such are Hon’s “amplified teams”, which he states:

….usually have three to seven human members supported by highly customized software that allows them to communicate with one another- and with AI support systems- at an accelerated rate.

I’m crossing my fingers that somebody invents a bot for introverts- or is that a contradiction?

#39 Micromort Detector – 2032

Hon foresees our aging population becoming increasingly consumed with mortality and almost obsessive-compulsive about measurement as a means of combating our anxiety. Hence his idea of the “micromort detector”.

A micromort is a unit of risk representing a one-in-a-million chance of death.

Mutual Assurance is a company that tried to springboard off this anxiety with its product “Lifeline”, a device for measuring the mortality risk of any behavior, the hope being both to improve healthy living and, more important for the company, to accurately assess insurance premiums. Drink a cup of coffee- get a score; eat a doughnut- score.
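The arithmetic behind such a device is simple enough to make concrete. Since a micromort is a one-in-a-million chance of death, a Lifeline-style score is just an observed death rate rescaled- a minimal sketch in Python, with figures invented for illustration rather than taken from Hon’s book:

```python
# A micromort is a one-in-a-million chance of death, so an activity's
# micromort cost is just (deaths / exposures) scaled to that baseline.
MICROMORT = 1e-6

def micromorts(deaths: float, exposures: float) -> float:
    """Convert an observed death rate into micromorts per exposure."""
    return (deaths / exposures) / MICROMORT

# Invented example figures, in the spirit of the Lifeline:
# an activity causing 5 deaths per 10 million instances "costs"
# half a micromort each time you do it.
print(micromorts(deaths=5, exposures=10_000_000))  # 0.5
print(micromorts(deaths=1, exposures=100_000))     # 10.0
```

The fictional Lifeline’s real trouble, as Hon notes, is that population-level rates like these say little about any given individual.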

The problem with the Lifeline was that it wasn’t particularly accurate due to individual variation, and the idea that the road to everything was paved in the 1s and 0s of data became passé. The Lifeline did, however, sometimes cause people to pause and reflect on their own mortality:

And that’s perhaps the most useful thing that the Lifeline did. Those trying to guide their behavior were frequently stymied, but that very effort often prompted a fleeting understanding of mortality and caused more subtle, longer-lasting changes in outlook. It wasn’t a magical device that made people wiser- it was a memento mori.

#56 Shanghai Six – 2036

As part of the gaming world, Hon has some really fascinating speculations on the future of entertainment. With Shanghai Six he imagines a mashup of alternate reality games such as his own Zombies, Run!, the massive role-playing found in events such as historical reenactments, and aspects of reality television, all rolled up into the drama of film. Shanghai Six is a 10,000-person global drama with actors drawn from the real world. I’d hate to be the film crew’s gofer.

#63 Javelin – 2040

A History of the Future also has some rather interesting things to say about the future of human enhancement. The transition begins with the Paralympians, who by the 2020’s are able to outperform typical human athletes by a large measure.

The shift began in 2020, when the International Paralympic Committee (IPC) staged a technology demonstration….

The demonstration was a huge success. People had never before seen such a direct combination of technology and raw human will power outside of war, and the sponsors were delighted at the viewing figures. The interest, of course, lay in marketing their expensive medical and lifestyle devices to the all-important Gen-X and Millennial markets, who were beginning to worry about their mobility and independence as they grew older.

There is something of the Daytona 500 about this, sports becoming as much about how good the technology is as about the excellence of the athlete. And all sports do indeed seem to be headed this way. The barrier now is that technological and pharmaceutical assists for the athlete are seen not as a way to take human performance to its limits, but as a form of cheating. Yet once such technologies become commonplace, Hon imagines it unlikely that such distinctions will prove sustainable:

By the 40s and 50s, public attitudes towards mimic scripts, lenses, augments and neural laces had relaxed, and the notion that using these things would somehow constitute ‘cheating’ seemed outrageous. Baseline non-augmented humans were becoming the minority; the Paralympians were more representative of the real world, a world in which everyone was becoming enhanced in some small or large way.

It was a far cry from the Olympics. But then again, the enhanced were a far cry from the original humans.

#70 The Fourth Great Awakening – 2044

Hon has something like Nassim Taleb’s idea that one of the best ways we have of catching the shadow of the future isn’t to have a handle on what will be new, but rather a good idea of what will still likely be around. The best indication we have that something will exist in the future is how long it has existed in the past. Long life proves evolutionary robustness under a variety of circumstances. Families have been around since our beginnings and will therefore likely exist for a long time to come.

Things that exist for a long time aren’t unchanging but flexible in a way that allows them to find expression in new forms once the old ways of doing things cease working.

Hon sees our long-lived desire for communal eating surviving in his #25 The Halls (2027), where people gather and mix together in collectively shared kitchen/dining establishments.

Halls speak to our strong need for social interaction, and for the ages-old idea that people will always need to eat- and they’ll enjoy doing it together.

And he sees the survival of reading, in a world even more media- and distraction-saturated, in something like the dedicated, seclusionary #34 Reading Rooms (2030). He also sees the survival of one of the oldest of human institutions, religion, though religion will have become much more centered on worldliness and will leverage advances in neuroscience to foster, depending on your perspective, either virtue or brainwashing. Thus we have Hon’s imagined Fourth Great Awakening and the Christian Consummation Movement.

If I use the eyedrops, take the pills, and enroll in their induction course of targeted viruses and magstim- which I can assure you I am not about to do- then over the next few months, my personality and desires would gradually be transformed. My aggressive tendencies would be lowered. I’d readily form strong, trusting friendships with the people I met during this imprinting period- Consummators, usually. I would become generally more empathetic, more generous and “less desiring of fleeting, individual and mundane pleasures” according to the CCM.

It is social conditions that Hon sees driving the creation of something like the CCM, namely mass unemployment caused by globalization and especially automation. The idea, again, is very similar to Tyler Cowen’s in Average is Over, but whereas Cowen sees in the rise of neo-Victorianism a lifeboat for a middle class savaged by automation, Hon sees the similar CCM as a way human beings might try to reestablish the meaning they can no longer derive from work.

Hon’s imagined CCM combines some very old and very new technologies:

The CCM understood how Christianity itself spread during the Apostolic Age through hundreds of small gatherings, and accelerated that process by multiple orders of magnitude with the help of network technologies.

And all of that combined with the most advanced neuroscience.

#72 The Downvoted – 2045

Augmented reality devices such as Google Glass should let us see the world in new ways, but just as important might be what they allow us not to have to see. From this Hon derives his idea of “downvoting”: essentially the choice to redact from reality individuals whom the group has deemed worthless.

“They don’t see you,” he used to say. “You are completely invisible. I don’t know if it was better or worse before these awful glasses, when people just pretended you didn’t exist. Now I am told that there are people who literally put you out of their sight, so that I become this muddy black shadow drifting along the pavement. And you know what? People will still downvote a black shadow!”
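Part of what makes the idea so chilling is how little machinery it would take. A minimal sketch, assuming a hypothetical AR client that checks each recognized person against a shared tally of downvotes- every name and figure here is invented for illustration, not drawn from Hon’s book:

```python
# Toy sketch of "downvoting": an AR display that redacts anyone whose
# shared downvote tally crosses a threshold. Entirely hypothetical.
from dataclasses import dataclass

DOWNVOTE_THRESHOLD = 100  # votes needed before someone is redacted

@dataclass
class Person:
    person_id: str
    downvotes: int  # pulled from a shared, crowd-maintained tally

def render_labels(people: list[Person]) -> list[str]:
    """Return display labels, replacing the downvoted with a shadow."""
    return [
        "[shadow]" if p.downvotes >= DOWNVOTE_THRESHOLD else p.person_id
        for p in people
    ]

print(render_labels([Person("alice", 3), Person("bob", 250)]))
# ['alice', '[shadow]']
```

A few lines of filtering, in other words, are all that would separate an attention filter from a caste system.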

I’ll leave you off at Hon’s world circa 2045, but he has a lot else to say about everything from democracy, to space colonies, to the post-21st-century future of AI. Somehow Hon’s patchwork of imagined artifacts allowed him to sew together a quilt of the century before us in a very clear pattern. What is that pattern?

That out in front of us the implications of continued miniaturization, networking, algorithmization, AI, and advances in neuroscience and human enhancement will continue to play themselves out. This has bright sides and dark sides, and one of the darker is that the opportunities for gainful human employment will become ever more rare.

Trained as a neuroscientist, Hon sees both dangers and opportunities as advances in neuroscience make the human brain, once firmly secured in the black box of the skull, permeable. Here there will be opportunities for abuse by the state or by groups with nefarious intent, but there will also be opportunities for enriched human cooperation and even art.

All fascinating stuff, but it was what he had to say about the future of entertainment and the arts that I found most intriguing. As the CEO of the company Six-to-Start he has his finger on the pulse of entertainment in a way I do not. In the near future, Hon sees a blurring of the lines between gaming, role playing, and film and television, and he foresees extraordinary changes in the ways we watch and play sports.

As for the arts, here where I live in Pennsylvania we are constantly bombarded with messages that our children need to be trained in STEM (science, technology, engineering and mathematics). This is often to the detriment of programs in the “useless” liberal arts such as history, and most of all art programs, whose budgets have been consistently whittled away. Hon showed me a future in which artists and actors, or more precisely people who have had exposure through schooling to the arts, may be among the few groups that can avoid, at least for a time, the onset of AI-driven automation. Puppeteering of various sorts would seem to be a likely transitional phase between “dead” humanoid robots and fully human-like AI. This isn’t just a matter of the lurid future of prostitution, but also of remote nursing, health care, and psychotherapy. Engineers and scientists will bring us the tools of the future, but it’s those with humanistic “soft skills” who will be needed to keep that future livable, humane, and interesting.

We see this in another of A History of the Future’s underlying perspectives: that a lot of the struggle of the future will be about keeping it a place human beings are actually happy to live in, and that much of doing this will rely on tools of the past, or on finding protective bubbles through which the things we now treasure can survive in the new reality we are building. Hence Hon’s dining halls and reading rooms, and more generally his view that people will continue to search for meaning, sometimes turning to one of our most ancient technologies- religion- to do so.

Yet perhaps what Hon has most given us in A History of the Future is less a prediction than a kind of game we too can play, one that helps us see the outlines of the future- after all, game design is his thing. Perhaps I’ll try to play the game myself sometime soon…