Is the Anthropocene a Disaster or an Opportunity?

Recently the journal Nature published a paper arguing that the Anthropocene, the proposed geological epoch in which the collective actions of the human species began to trump other natural processes in terms of their impact, started in the year 1610 AD. If that year leaves you, as it did me, scratching your head and wondering what you missed while you dozed off in your 10th grade history class, don’t worry, because 1610 is a year in which nothing much happened at all. In fact, that’s why the authors chose it.

There are a whole host of plausible candidates for the beginning of the Anthropocene that are much better known than 1610, starting relatively recently with the explosion of the first atomic bomb in 1945 and stretching backwards in time through the beginning of the industrial revolution (1750s), to the discovery of the Americas and the bridging of the old and new worlds in the form of the Columbian Exchange (1492), and back even further to some hypothetical year in which humans first began farming, or further still to the extinction of megafauna such as woolly mammoths, probably at the hands of our very clever, and therefore dangerous, hunter-gatherer forebears.

The reason Simon L. Lewis and Mark A. Maslin (the authors of the Nature paper) chose 1610 is that in that year there is a huge, discernible drop in the amount of carbon dioxide in the atmosphere, a rapid decline in greenhouse gases which probably plunged the earth into a cold snap: global warming in reverse. The reason this happened goes to the heart of a debate plaguing the environmental community, a debate that someone like myself, merely listening from the sidelines but concerned about the ultimate result, should have been paying closer attention to.

It seems that while I was focusing on how paleontologists had been undermining the myth of the “peaceful savage”, they had also called into question another widely held idea regarding the past, one deeply ingrained in the American environmental movement itself: that Native Americans did not attempt the intensive engineering of the environment of the sort Europeans brought with them from the Old World, and therefore lived in a state of harmony with nature that we should pine for or emulate.

As paleontologists and historians have come to understand the past more completely, we’ve learned that it just isn’t so. Native Americans across North America practiced extensive agriculture and land management in ways that had a profound impact on the environment over the course of centuries, and not just in well-known population centers such as Tenochtitlan in Mexico, the lost Pueblo (Anasazi) civilizations in the deserts of the Southwest, or Cahokia in the Mississippi Valley, but also in the woodlands of the Northeast, in states such as my beloved Pennsylvania.

Their tool in all this was the planful use of fire. As the naturalist and historian Scott Weidensaul tells the story in his excellent account of the conflicts between Native Americans and the Europeans who settled the eastern seaboard, The First Frontier:

Indians had been burning Virginia and much of the rest of North America annually for fifteen thousand years and it was certainly not a desert. But they had clearly altered their world in ways we are only still beginning to appreciate. The grasslands of the Great Plains, the lupine-studded savannas of northwestern Ohio’s “oak openings”, the longleaf pine forests of the southern coastal plain, the pine barrens of the Mid-Atlantic and Long Island- all were ecosystems that depended on fire and were shaped over millennia of use by its frequent and predictable application.  (69)

The reason, I think, most of us have this vision in our heads of North America at the dawn of European settlement as an endless wilderness is not on account of some conspiracy by English and French settlers (though I wouldn’t put it past them). Rather, when the first settlers looked out into a continent sparsely settled by noble “savages”, what they were seeing was a mere remnant of the large Indian population that had existed there only a little over a century before; the native peoples of North America whom Europeans struggled so hard to subdue and expel were mere refugees from a fallen civilization. As the writer Charles Mann has put it: when settlers gazed out into a seemingly boundless American wilderness they were not seeing a land untouched by the hands of man, a virgin land given to them by God, but a graveyard.

Given the valiant, ingenious, and indeed sometimes brutal resistance Native Americans put up against European (mostly English) encroachment, it seems highly unlikely that the United States and Canada would exist today in anything like their current form had the Indian population not been continually subject to collapse. What so ravaged the Native American population and caused their civilization to go into reverse were the same infectious diseases, unwittingly brought by Europeans, that were the primary culprit in the destruction of the even larger Indian civilizations in Mexico and South America. It was a form of accidental biological warfare that targeted Native Americans through their genomes, their immune systems never having been shaped by contact with domesticated animals like pigs and cows, as those of the Europeans who were set on stealing their lands had been.

Well, it wasn’t always accidental. For though it would be some time before we understood how infectious diseases worked, Europeans were well aware of their disproportionate impact on the Indians. In 1763 William Trent, in charge of Fort Pitt, the seed that would become Pittsburgh, then besieged by Indians, engaged in the first documented case of biological warfare. Weidensaul quotes one of Trent’s letters:

Out of regard to the [Indian emissaries] we gave them two blankets and a handkerchief out of the Small Pox Hospital. I hope it will have the desired effect (381)

It’s from this devastating impact of infectious diseases, which decimated Native American populations and ended their management of the ecosystem through the systematic use of fire, that Lewis and Maslin get their 1610 start date for the beginning of the Anthropocene. With the collapse of Indian fire ecology, eastern forests bloomed to such an extent that vast amounts of carbon otherwise left in the atmosphere were sucked up by the trees, with the effect that the earth rapidly cooled.

Beyond the desire to uncover the truth and to enter into sympathy with those of the past that lies at the heart of the practice and study of history, this new view of the North American past has implications for the present. For the goal of returning, both the land itself and ourselves as individuals, to the purity of the wilderness, which has been at the heart of the environmentalist project since its inception in the 19th century, now encounters a problem with time. Where in the past does the supposedly pure natural state to which we are supposed to be returning actually exist?

Such an untrammeled state would seem to exist either before Native Americans’ development of fire ecology, or after their decimation by disease but before European settlement west of the Allegheny mountains. This, after all, was the period of America as endless wilderness that has left its mark on our imagination. That is, we should attempt to return as much of North America as is compatible with civilization to the state it was in after the collapse of the Native American population, but before the spread of Europeans into the American wilderness. The problem with such a goal is that this natural splendor may have hidden the fact that such an ecology was ultimately unsustainable in the wild state in which European settlers first encountered it.

As Tim Flannery pointed out in his book The Eternal Frontier: An Ecological History of North America and Its Peoples, the fire ecology adopted by Native Americans may have been filling a vital role, doing the job that would otherwise have been done by large herbivores, which North America, unlike other comparable continents, lacked, likely because of the effectiveness of the first wave of human hunters to enter it from Asia as the ultimate “invasive species”. Large herbivores cull the forests and keep them healthy, a role which in their absence can also be filled by fire.

Although projects are currently in the works to try to bring back some of these large foresters, such as the woolly mammoth, I am doubtful large populations will ever be restored, even in a world filled with self-driving cars. Living in Pennsylvania I’ve had the misfortune of running into a mere hundred-pound or so white-tailed deer (or her running into me). I don’t even want to think about what a mammoth would do to my compact. Besides, woolly mammoths don’t seem to be creatures made for the age we’re headed into, which is one of likely continued, and extreme, warming.

Losing our romanticized picture of nature’s past means we are now also unclear as to its future. Some are using this new understanding of the ecological history of North America to argue against traditional environmentalism, claiming in essence that without a clear past to return to, the future of the environment is ours to decide.

Perhaps what this New Environmentalism signals is merely the normalization of North Americans’ relationship with nature, a move away from the idea of nature as a sort of lost Eden and towards something like ideas found elsewhere, which need not be any less reverential or religious. In Japan especially nature is worshiped, but it is in the form of a natural world that has been deeply sculpted by human beings to serve their aesthetic and spiritual needs. This is the Anthropocene interpreted as opportunity rather than as unmitigated disaster.

Ultimately, I think it is far too soon to embrace without at least some small degree of skepticism the project of the New Environmentalism espoused by organizations such as the Breakthrough Institute or the chief scientist and firebrand of the Nature Conservancy, Peter Kareiva, or environmental journalists and scholars espousing those views, like Andrew Revkin, Emma Marris, or even the incomparable Diane Ackerman. For even if the New Environmentalism contains a degree of truth, it is inherently at risk of being captured by actors whose environmental concern is a mere smokescreen for economic interests, and, at the very least, it threatens to undermine the long-held support for setting aside areas free from human development, whose stillness and silence many of us in our neon-drenched and ever-beeping world hold dear.

It would be wonderful if objective scientists could adjudicate disputes between traditional environmentalists and New Environmentalists expressing non-orthodox views on everything from hydraulic fracturing, to genetic engineering, to invasive species, to geoengineering. The problem is that the very uncertainty at the heart of these types of issues probably makes definitive objectivity impossible. At the end of the day the way one decides comes down to values.

Yet what the New Environmentalists have done that I find most intriguing is to challenge the mythology of Edenic nature, and just as in any other undermining of a ruling mythology this has opened up both freedom and risk. On the basis of my own values, the way I would use that freedom would not be to hive off humanity from nature, to achieve the “de-ecologization of our material welfare”, as some New Environmentalists want to do, but for the artifact of human civilization itself to be made more natural.

Up until today humanity has gained its ascendancy over nature by assaulting it with greater and more concentrated physical force. We’ve dammed and redrawn the course of rivers, built massive levees against the sea, cleared the land of its natural inhabitants (nonhuman and human) and reshaped it to grow our food.

We’ve reached the point where we’re smarter than this now and understand much better how nature achieves ends close to our own without resorting to excessively destructive and wasteful force. Termites achieve climate control from the geometry of their mounds rather than our energy-intensive air conditioning. IBM’s Watson took a small city’s worth of electric power to beat its human opponents on Jeopardy!, opponents whose brains consumed no more power than a light bulb.

Although I am no great fan of the term “cyborg-ecology”, I am attracted to the idea as it is being developed by figures such as Bradley Cantrell, who propose that we use our new capabilities and understanding to create a flexible infrastructure that allows, for instance, a river to be a river rather than a mere “concrete channel”, which we have preferred to a natural river because it is predictable while providing for our needs. That is a suffocatingly narrow redefinition of nature, one that has come at great cost to the other species with which we share such habitats, and that ultimately causes large-scale problems requiring yet more brute-force engineering to control once nature breaks free from our chains.

I can at least imagine an Anthropocene that provides a much richer and more biodiverse landscape than our current one, a world where we are much more planful about providing for creatures other than just our fellow humans. It would be a world where we perhaps break from environmentalism’s shibboleths and do things such as prudently and judiciously use genetic engineering to make the natural world more robust, and where scientists who propose that we use the tools our knowledge has given us to address plagues, not just our own but those of our fellow creatures, do not engender scorn but careful consideration of their proposals.

It would be a much greener world, where the cold kingdom of machines hadn’t replaced the living but was merely a temporary way station towards the rebirth of the biological world after our long and almost terminal assault on it. If our strategies before have been divided between raping and subduing nature in light of our needs, or conversely, placing nature at such a remove from our civilization that she might be deemed safe from our impact, perhaps, having gained knowledge regarding nature’s secrets, we can now live not so much with her as a part of her, and with her as a part of us.

The Sofalarity is Near

Many readers here have no doubt spent at least some time thinking about the Singularity, whether in a spirit of hope or fear, or perhaps more reasonably some admixture of both. For my part, though, I am much less worried about a coming Singularity than I am about a Sofalarity in which our ability to create realistic illusions of achievement and adventure convinces the majority of humans that reality isn’t really worth all the trouble after all. Let me run through the evidence of an approaching Sofalarity. I hope you’re sitting down… well… actually I hope you’re not.

I would define a Sofalarity as a hypothetical point in human history when the majority of human beings spend most of their time engaged in activities that have little or no connection to actual life in the physical world. It’s not hard to see the outline of this today: on average, Americans already spend an enormous amount of time with their attention focused on worlds either wholly or partly imagined. The numbers aren’t very precise, and differ among age groups and sectors of the population, but they come out to somewhere around five hours watching television per day, three hours online, and another three hours playing video games. That means, collectively at least, we spend almost half of our day in dream worlds, not counting the old-fashioned kind such as those found in books or the ones we encounter when we’re actually sleeping.

There’s perhaps no better example of how the virtual is able to hijack our very real biology than pornography. Worldwide, the share of total internet traffic that is categorized as erotica ranges from a low of four percent to as high as thirty percent. When one combines that with recent figures claiming that up to 36 percent of internet traffic is generated not by human beings but by bots, it’s hard not to experience future shock.

Amidst all the complaining that the future hasn’t arrived yet and “where’s my jetpack?”, a 21st century showed up where upwards of 66 percent of internet traffic could be people looking for pornography, bots pretending to be human, or, weirdest of all, bots pretending to be human looking for humans to have sex with. Take that, Alvin Toffler.

Still, all of this remains simply our version of painting on the walls of a prehistoric cave. Any true Sofalarity would likely require more than just television shows and YouTube clips. It would need to have gained the keys to our emotional motivation and our senses.

As a species we were trying to open the doors of perception with drugs long before almost anything else. What makes our current situation more likely to be leading toward a Sofalarity is that this quest is now a global business and that we’ve become increasingly sophisticated when it comes to playing tricks on our neurochemistry.

The problem any society has with individuals screwing with their neurochemistry is twofold. The first part is making sure that enough sober people are available for the necessary work of keeping the society operating at a functional level, and the second is preventing any ill effects of the mind-altered from spilling over into the society at large.

The contemporary world has seemingly found a brilliant solution to this problem: contain the mind-altered in space and time, and make sure only the sober are running the show. The reason bars or dance clubs work is that only the customers are drunk or stoned, and such places exist in a state of controlled chaos, with the management carefully orchestrating the whole affair, making sure things remain lively enough that customers will return while ensuring that things don’t get so dangerous that patrons will stay away for the opposite reason.

The whole affair is contained in time because drunken binges last only through the weekend, with individuals returning to their straight-laced bourgeois jobs on Monday, propped up, perhaps, by a bit of stimulant to promote productivity.

Sometimes this controlled chaos is meant to last for longer stretches than the weekend, yet here again, it is contained in space and time. If you want to see controlled chaos perfected with technology thrown into the mix, you can’t get any better than Las Vegas, where seemingly endless opportunities for pleasure and for losing one’s wits abound, all the while one is being constantly monitored, both in the name of safety and in order that one not develop any existential doubts about the meaning of life under all that neon.

If you ever find yourself in Vegas losing your dopamine fix after one too many blows from lady luck behind a one-armed bandit, and suddenly find some friendly casino staff next to you offering free drinks or tickets to a local show, bless not the goddess of Fortune but the surveillance cameras that have informed the house they are about to lose an unlucky, and therefore lucrative, customer. Las Vegas is the surveillance capital of the United States, and it’s not just inside the casinos.

Ubiquitous monitoring seems to be the price of Las Vegas’ adoption of vice as a form of economy. Or as Megan McArdle put it in a recent article:

Is the friendly police state the price of the freedom to drink and gamble with abandon? Whatever your position on vice industries, they are heavily associated with crime, even where they are legal. Drinking makes people both violent and vulnerable; gambling presents an almost irresistible temptation to cheating and theft. Las Vegas has Disneyfied libertinism. But to do so, it employs armies of security guards and acres of surveillance cameras that are always and everywhere recording your every move.

Even the youngest of our young children now have a version of this: we call it Disney World. The home of Mickey Mouse has used current surveillance technology to its fullest, allowing it to give visitors to the “magic kingdom” both the experience of being free and one of reality seemingly bending itself to the shape of innocent fantasy and expectation. It’s a technology they work very hard to keep invisible. Disney’s MagicBand, which not only allows visitors to navigate seamlessly through its theme parks, but allows your dinner to be brought to you before you’ve ordered it, or the guy or gal in the Mickey suit to greet your children by name before they have introduced themselves, was described recently in a glowing article in Wired that quoted the company’s COO Tom Staggs this way:

 Staggs couches Disney’s goals for the MagicBand system in an old saw from Arthur C. Clarke. “Any sufficiently advanced technology is indistinguishable from magic,” he says. “That’s how we think of it. If we can get out of the way, our guests can create more memories.”

Nothing against the “magic of Disney” for children, but I do shudder a little thinking that so many parents don’t think twice about creating memories in a “world” that is not so much artificial as completely staged. And it’s not just for kids. An entire industry has actually been built around our ridiculousness here, especially in places like China, where people pay to have their photos taken in front of fake pyramids or the Eiffel Tower, or to vacation in places pretending to be someplace else.

Yet neither Las Vegas nor a Disney theme park resembles what a Sofalarity would look like in full flower. After all, the showgirls at Bally’s or the poor soul under the mouse suit in Orlando are real people. What a true Sofalarity would entail is nobody being there at all, the very people behind the pretense having ceased to be there.

We’re probably some way off from a point where the majority of human labor is superfluous, but if things keep going at the rate they are, we’re not talking centuries. The rejoinder to claims that human labor will be replaced to the extent that most of us no longer have anything to do is often that we’ll become the creators behind the scenes, the same way Apple’s American workers do the high-end work of designing its products while the work of actually putting them together is done by numb fingers over in China. In the utopian version of our automated future we’ll all be designers on the equivalent of Infinite Loop while the robots provide the fingers.

Yet, over the long run, I am not sure this distinction between humans as mental creators and machines as physical producers will hold. Our (quite dumb) computers already create visually stunning and unanticipated works of art, compose music that is indistinguishable from that created in human minds, and write things none of us realize are the product of clever programs. Who’s to say that decades hence, or over a longer stretch, they won’t be able to create richer fantasy worlds of every type that blow our minds and grip our attention far more than any crafted by our flesh-and-blood brethren?

Even should every human endeavor be taken over by machines, including our politics, we would still be short of a true Sofalarity, because we would be left with the things that make us most human: the relationships we have with our loved ones. No, to see the Sofalarity in full force we’d need to have become little more than piles of undulating mush, like the creatures in the original conception of the movie WALL-E, from which I ripped the term.

The only way we’d get to that point is if our created fantasies could reach deep under our skin and skulls and give us worlds and emotional experiences that atrophy, to the point of irrecoverability, what we now consider most essential to being a person. The signs are clear that we’re headed there. In Japan, for instance, there are perhaps 700,000 hikikomori, modern-day hermits, adults who have withdrawn from three-dimensional social relationships and live out their lives primarily online.

Don’t get me wrong, there is a lot of very cool stuff either here or shortly coming down the pike. There’s Oculus Rift, should it ever find developers and conquer the nausea problem, and there are wonders such as Magic Leap, a mixed reality platform that allows you to see 3D images by beaming them directly into your eyes. Add to these things like David Eagleman’s crazy haptic vest, or brain readers that sit in your ear, not to mention things a little further off in terms of public debut that seem to have jumped right off the pages of Nexus, like brain-to-brain communication, or magnetic nanoparticles that allow brain stimulation without wires, and it’s plain to see we’re on the cusp of a revolution in creating and experiencing purely imagined worlds, all of which makes it even more difficult to bust a poor hikikomori out of his 4’ x 4’ apartment.

It seems we might be on the verge of losing the distinction between the Enchantment of Fantasy and Magic that J.R.R. Tolkien brought us in his brilliant lecture On Fairy-Stories:

Enchantment produces a Secondary World into which both designer and spectator can enter, to the satisfaction of their senses while they are inside; but in its purity it is artistic in desire and purpose. Magic produces, or pretends to produce, an alteration in the Primary World. It does not matter by whom it is said to be practiced, fay or mortal, it remains distinct from the other two; it is not an art but a technique; its desire is power in this world, domination of things and wills.

Fantasy is a natural human activity. It certainly does not destroy or even insult Reason; and it does not either blunt the appetite for, nor obscure the perception of, scientific verity. On the contrary. The keener and the clearer is the reason, the better fantasy will it make. If men were ever in a state in which they did not want to know or could not perceive truth (facts or evidence), then Fantasy would languish until they were cured. If they ever get into that state (it would not seem at all impossible), Fantasy will perish, and become Morbid Delusion.

For creative Fantasy is founded upon the hard recognition that things are so in the world as it appears under the sun; on a recognition of fact, but not a slavery to it. So upon logic was founded the nonsense that displays itself in the tales and rhymes of Lewis Carroll. If men really could not distinguish between frogs and men, fairy-stories about frog-kings would not have arisen.

For Tolkien, the primary point of fantasy was to enrich our engagement with the real world, to allow us to see the ordinary anew. Though it might also be a place to hide and escape should the real world, whether for the individual or for society as a whole, become hellish, as Tolkien well knew, having fought in World War I, lived through the Great Depression, and stood on the eve of a second world war when he gave his lecture.

To those who believe we might already be living in a simulation perhaps all this is merely like having traveled around the world only to end up exactly where you started, as in Borges’ story The Circular Ruins, or in the idea of many of the world’s great religions that we are already living in a state of maya, or illusion, though we now make the case using much more scientific language. There is a very serious argument out there, such as that of Nick Bostrom, that we are already living in a simulation. The way one comes to this conclusion is merely by looking at the virtual worlds we’ve already created and extrapolating the trend outward for hundreds or thousands of years. In such a world the majority of sentient creatures would be “living” entities in virtual environments, and these would end up comprising the overwhelming number of sentient creatures that will ever exist. Statistical reasoning would seem to lead to the conclusion that you are more likely than not, right now, a merely virtual entity. There are even, supposedly, possible scientific experiments to test for evidence of this. Thankfully, in my view at least, the Higgs particle might prevent me from being a Boltzmann brain.
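The statistical step is nothing more exotic than comparing two populations. Here is a back-of-the-envelope sketch, with numbers invented purely for illustration (they are not Bostrom’s):

```python
# Toy version of the statistical step in the simulation argument.
# All three numbers below are made-up assumptions, chosen only to show the shape of the reasoning.
biological_minds = 1e11        # humans who ever live "in the flesh"
simulations_run = 1_000        # ancestor simulations a mature civilization might run
minds_per_simulation = 1e11    # each simulation as populous as real history

simulated_minds = simulations_run * minds_per_simulation
p_simulated = simulated_minds / (simulated_minds + biological_minds)

print(f"{p_simulated:.6f}")    # ~0.999001: if you can't tell which kind of mind you are,
                               # these assumptions say you are almost certainly a simulated one
```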

For my part, I have trouble believing I am living in a simulation. Running “ancestor simulations” seems like something a being with human intelligence might do, but it would probably bore the hell out of any superintelligence capable of actually creating the things; such simulations would not provide any essential information for its survival, and, given the historical and present suffering of our world, they would have to be run by a very amoral, indeed immoral, being.

That was part of the fear Descartes was tapping into when he proposed that the world, and himself in it, might be nothing more than the dream of an “evil demon”. Yet what he was really getting at, as was the case with other great skeptics of history such as Hume, wasn’t so much the plausibility of the Matrix, but the limits surrounding what we can ever be said to truly know.

Some might welcome the prospect of a coming Sofalarity for the same reasons they embrace the Singularity, and indeed, when it comes to many features, such as exponential technological advancement or automation, the two are hardly distinguishable. Yet the biggest hope that sofalaritarians and singularitarians would probably share is that technological trends point towards the possibility of uploading minds into computers.

Sadly, or thankfully, uploading is some ways off. The EU seems to have finally heeded the complaints of neuroscientists that Henry Markram’s Human Brain Project, which aimed to simulate an entire human brain, was scientifically premature, premature enough to be laughable were it not for the billion euros invested in it that might have been spent on much more pressing areas like mental illness or Alzheimer’s research. The project has not been halted, but a recent scathing official report is certainly a big blow.

Nick Bostrom has pondered that if we are not now living in a simulation, then there may be something that prevents civilizations such as ours from reaching the technological maturity to create such simulations. As I see it, perhaps the movement towards a Sofalarity ultimately contains the seeds of its own destruction. Just as I am convinced that hell, which exists in the circumscribed borders of torture chambers, death camps, or the human heart, can never be the basis for an entire society, let alone a world, it is quite possible that a heaven we could only reach by escaping the world as it exists is like that as well, and that any society built around the fantasy of permanent escape would not last long in its confrontation with reality. Fermi paradox, anyone?

Seeing the Future through the Deep Past

We study history not to know the future but to widen our horizons, to understand that our present system is neither natural nor inevitable, and that we consequently have many more possibilities before us than we imagine. (241)

The above is a quote from Yuval Harari’s book Sapiens: A Brief History of Humankind, which I reviewed last time. So that’s his view of history, but what of other fields specifically designed to give us a handle on the future, the kinds of “future studies” futurists claim to be experts in, fields like scenario planning, or even some versions of science fiction?

Harari probably wouldn’t put much credence in the ability of these fields to predict the future either, the reason being that there are very real epistemological limits to what we can know outside of a limited number of domains. The reason we are unable to make the same sort of accurate predictions for areas such as history, politics or economics as we do for physics is that all of the former are examples of Level II chaotic systems. A Level I chaotic system is one where small differences in conditions can result in huge differences in outcomes. Weather is the best example we have of Level I chaos, which is why nearly everyone has at some point in their life wanted to bludgeon the weatherman with his umbrella. Level II chaotic systems make the job of accurate prediction even harder, for, as Harari points out:

Level two chaos is chaos that reacts to predictions about it, and therefore can never be predicted accurately. (240)
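To see what Level I chaos means in practice, here is a minimal sketch, my own illustration rather than Harari’s, using the logistic map, a textbook chaotic system: two starting values that differ by one part in a million soon produce trajectories that bear no resemblance to one another.

```python
# Minimal illustration of Level I (first-order) chaos: sensitive dependence
# on initial conditions in the logistic map x -> r * x * (1 - x).
def logistic_trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)   # differs by one part in a million

for t in (0, 10, 25, 50):
    print(t, round(a[t], 4), round(b[t], 4), round(abs(a[t] - b[t]), 4))
# Within a few dozen steps the two runs have nothing to do with each other,
# so long-range prediction fails even though the rule is simple and fully known.
# Level II chaos is worse still: the system reacts to the predictions made about it,
# so there is no fixed rule to write down and leave alone in the first place.
```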

Level II chaos is probably the source of many of our philosophical paradoxes regarding what we can really know, along with free will and its evil deterministic twin, not to mention the kinds of paradoxes we encounter when contemplating time travel, issues we only seem able to resolve in a way that reconciles the underlying determinism of nature with our subjective experience of freedom by assuming that there are multiple or even infinitely many universes that together manifest every possibility, a solution that seems to violate every fiber of our common sense.

Be all that as it may, the one major disappointment of Harari’s Sapiens was that his versions of possible futures for humanity, though expressed with full acknowledgement that they were just well-reasoned guesses, really weren’t all that different from what we all know already. Maybe we’ll re-engineer our genes, even to the point of obtaining biological immortality. Maybe we’ll merge with our machines and become cyborgs. Maybe our AI children will replace us.

It’s not that these are unlikely futures, just commonly held ones, and thinkers who have arrived at them have as often as not done so without having looked deep into the human past, merely extrapolating from current technological trends. Harari might instead have used his uncovering of the deep past to go in a different direction, not so much to make specific predictions about how the human story will likely end, but to ascertain broad recurring patterns that might narrow down the list of things we should look for in our movement towards the future, and, above all, the things whose recurrence we would do best to guard ourselves against.

One of the recurring trends identified in Sapiens that we should be on the lookout for is the Sisyphean character of our actions. We gained numbers and civilization with the Agricultural Revolution, but lost in the process both our health and our freedom, and this Sisyphean aspect did not end with industrialization and the scientific revolution. The industrial revolution that ended our universal scarcity threatens to boil us all. Our automation of labor has led us to be crippled by our sedentary lifestyles, our conquest of famine has resulted in widespread obesity, and our vastly expanded longevity has resulted in an epidemic of dementia and Alzheimer’s disease.

It’s a weird “god” indeed that suffers the arrival of a new misfortune the minute an old one is conquered. This is not to argue that our solutions actually make our circumstances worse; rather, it is that our solutions become as much a part of the world that affects us as anything in nature itself, and naturally give rise to aspects we find problematic that were not intended in the solution to our original problem.

At least some of these problems we will be able to anticipate in advance, and because of this they call for neither the precautionary principle of environmentalists, nor the proactionary principle of Max More, but something I’ll call the Design Foresight Principle (DFP). If we used something like the DFP we would design to avoid the ethical and other problems that emerge from a technology or social policy before they arrive, or at least before the technology is widely adopted. Yet in many cases even a DFP wouldn’t help us, because the problem arising from a technology or policy wasn’t obvious until afterward, a classic case being DDT’s disastrous effect on bird populations.

This situation, where we create something or act in some way in order to solve one problem which in turn causes another, isn’t likely a bug, but a deep feature of the universe we inhabit. It’s not going to go away completely regardless of the extent of our powers. Areas where I’d look for this Sisyphean character of human life to rear its head over the next century include everything from biotechnology, to geoengineering, and even quite laudable attempts to connect the world’s remaining billions in one overarching network.

Another near universal of human history identified in Sapiens, at least since the Agricultural Revolution, has been the ubiquity of oppression. Humans, it seems, will almost instantly grasp the potential of any new technology or form of social organization to start oppressing other humans and animals. Although on the surface far-fetched, it’s quite easy, given the continued existence of slavery coupled with advances in both neuroscience and genetic engineering, to come up with nightmare scenarios where, for instance, alternative small-brained humans are biologically engineered and bred to serve as organ donors for humans of the old-fashioned sort. One need only combine the claims of technical feasibility by biologists such as George Church of bringing back other hominids such as the Neanderthals, the historical abuse of animals, and current research using human fetal tissue to grow human organs in pigs to see a nightmare right out of Dr. Moreau.

Though again far-fetched at the moment, other forms of oppression may be far less viscerally troubling but almost just as bad. As both neural monitoring and the ability to control subjects through optogenetics advance, we might see the rise of “part-time” slavery, in which one’s freedom is surrendered on the job, or in certain countries perhaps the revival of the institution itself.

The solution here, I think, is to start long before such possibilities manifest themselves, especially in countries with less robust protections for worker, human, and animal rights, and to push hard for research and social policy that addresses problems such as the dearth of organs for human beings in need of them in ways that have the least potentially negative impact on human rights.

Part of the answer also needs to be in the form of regulation to protect workers from the creep of now largely innocent efforts, such as the quantified self movement and its technologies, into areas that really would put individual autonomy at risk. The development of near-human-level AI would seem to be the ultimate solution for human-on-human oppression, though at some point in the intelligence of our machines we might again face the issue of one group of sentient beings oppressing another group for its exclusive benefit.

One seemingly broad trend that I think Harari misidentifies is what he sees as the inexorable move towards a global empire. He states it this way:

 Over millennia, small simple cultures gradually coalesce into bigger and more complex civilizations, so that the world contains fewer and fewer mega-cultures each of which is bigger and more complex. (166)

Since around 200 BC, most humans have lived in empires. It seems likely that in the future, too, most humans will live in one. But this time the empire will truly be global. The imperial vision of domination over the entire world could be imminent. (207)

Yet I am far from certain that a movement towards a world empire or state is what we see when we look deep into history. Since their beginnings, empires have waxed and waned. No empire with anything like their scale and depth of integration now stands where the Persian, Roman, or Inca empires once stood. Rather than an unstoppable march towards larger and larger political units, we have the rise and collapse of these units covering huge regions.

For the past 500 years the modern age of empires which gave us our globalized world has been a game of ever shifting musical chairs, with the Spanish and Portuguese followed by the British and French, followed by failed bids by the Germans and the Japanese, followed by the Russians and Americans.

For a quarter century now America has stood alone, but does anyone seriously believe this is much more than an interim until the likes of China and India join it? Indeed, the story of early 21st century geopolitics could easily be told as a tale of the limits of American empire.

Harari seems to think a global ruling class is about to emerge on the basis of stints at US universities and hobnobbing at Davos. This is to forget the power of artificial boundaries such as borders, and of imagined communities. As scholars of nationalism such as Benedict Anderson have long pointed out, you can get a strengthened national identity when you combine a lingua franca with a shared education, as long as personal opportunities for advancement are based upon citizenship.

Latin American creoles who were denied administrative positions, even after they had proven themselves superior to their European-Spanish classmates, and who were barred from becoming members of the elite in Spain itself, returned home with a heightened sense of national identity. And just as long as American universities remain the premier educational institutions on earth, which will not be forever, the elite children of other countries will come there for education. They will then have to choose whether to sever their ties with their home country and pursue membership in the American elite, or return home to join the elite of their home country with only tenuous ties to the values of the global power. They will not have the choice to choose both.

Indeed, the only way Harari’s global elite might emerge as a ruling class would be for states to fail almost everywhere. It wouldn’t be the victory of the trend towards “domination over the entire world” but a kind of global neo-feudal order. That is, the two trends we should be looking to history to illuminate when it comes to the future of political order are the much older trend of the rise and fall of empires, and the much younger 500-year trend of the rise and fall of great powers based on states.

One trend that might bolster the chances of either neo-feudalism or the continued dominance of rival states, depending upon how it plays out, is the emergence of new forms of money. The management of a monetary system, the enforcement of contracts, and the protection of property have always been among the state’s chief functions. As Harari showed us, money, along with writing and mathematics, was invented as a bureaucratic tool of accounting and management. Yet since the invention of money there has always been a tension between the need for money as a way to facilitate exchange, for which it has to be empty of any information except its value, and the need to record and enforce obligations denominated in that medium of exchange: loans and contracts.

This tension between forgetting and remembering when it comes to money is one way to see the tug of war between inflation and deflation in its management. States that inflate their currency are essentially voting for money as a means of facilitating exchange over the need for money to preserve its value so that past loans and contracts can be met in full.

Digital currencies, of which Bitcoin is only one example, and in which both states and traditional banks are (despite Bitcoin’s fall) showing increasing interest, treat money as data and so allow it to fully combine these two functions: a medium of exchange that can remember. This could either allow states to crush non-state actors, such as drug cartels, that live off the ability of money to forget its origins, or, conversely, strengthen those actors (mostly from the realm of business) who claim there is no need for a state because digital currency can make contracts self-enforcing. Imagine that rather than money you simply have a digital account from which you can only borrow to make a purchase once it connects itself to a payment system that will automatically withdraw increments until some set date in the future. And imagine that such digital wallets are the only form of money that is actually functional.
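To make that picture a little more concrete, here is a toy sketch of such a “remembering” wallet. It is not any real cryptocurrency or banking API, just an illustration of money that carries its own repayment schedule; all names and numbers are invented.

```python
from dataclasses import dataclass, field

# Toy sketch of "money that remembers": a digital wallet whose borrowed funds
# carry their own repayment schedule, deducted automatically until a set future date.
@dataclass
class RememberingWallet:
    balance: float = 0.0
    ledger: list = field(default_factory=list)      # every movement is recorded
    owed_per_period: float = 0.0
    periods_remaining: int = 0

    def borrow(self, amount: float, periods: int) -> None:
        self.balance += amount
        self.owed_per_period = amount / periods
        self.periods_remaining = periods
        self.ledger.append(("borrow", amount))

    def tick(self) -> None:
        # Called once per period by the payment system, not by the holder:
        # the increment is withdrawn whether or not the holder agrees.
        if self.periods_remaining > 0:
            self.balance -= self.owed_per_period
            self.periods_remaining -= 1
            self.ledger.append(("auto-repay", self.owed_per_period))

w = RememberingWallet(balance=100.0)
w.borrow(1200.0, periods=12)
for _ in range(12):
    w.tick()
print(round(w.balance, 2))   # back to 100.0: the contract enforced itself
print(len(w.ledger))         # 13 entries: the money never forgets its history
```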

There are other trends from deep history we should look to as well to get a sense of what the future may have in store. For instance, the growth of human power has been based not on individual intelligence but on collective organization. New forms of organization using technologies like brain-nets might become available at some future date, and based on the scalability of these technologies might prove truly revolutionary. This will be no less important than the kinds of collective myths that predominate in the future, which, religious or secular, will likely continue to determine how we live our lives as they have in the past.

Yet perhaps the most important trend from the deep past for the future will be the one Harari thinks might end the desire to make history at all. Where will we go with our mastery over the biochemical keys to our happiness, which we formerly sought in everything from drugs to art? It’s a question that deserves a post all to itself.

 

A Global History of Post-humans

One thing that certainly cannot be said of either the historian Yuval Harari or his new book Sapiens: A Brief History of Humankind is that they lack ambition. In Sapiens, Harari sets out to tell the story of humanity from our emergence on the plains of Africa until the era in which we are living right now, a period he thinks is the beginning of the end of our particular breed of primate. His book ends with some speculations on our post-human destiny, whether we achieve biological immortality or manage to biologically and technologically engineer ourselves into an entirely different species. If you want to talk about establishing a historical context for the issues confronted by transhumanism, you can’t get better than that.

In Sapiens, Harari organizes the story of humanity by casting it within the framework of three revolutions: the cognitive, the agricultural and the scientific. The Cognitive Revolution is what supposedly happened somewhere between 70,000 and 30,000 years ago, when a suddenly very creative and cooperative homo sapiens came roaring out of Africa, using their unprecedented social coordination and technological flexibility to invade every ecological niche on the planet.

Armed with an extremely cooperative form of culture, and the ability to make tools (especially clothing) to fit environments they had not evolved for, homo sapiens was able to move into more northern latitudes than their much more cold-adapted Neanderthal cousins, or to cast out on the seas to settle far-off islands and continents such as Australia, Polynesia, and perhaps even the Americas.

The speed with which homo sapiens invaded new territory was devastating for almost all other animals. Harari points out how the majority of large land animals outside of Africa (where animals had enough time to evolve wariness of this strange hairless ape) disappeared not long after human beings arrived. And the casualties included other human species as well.

One of the best things about Harari is his ability to overthrow previous conceptions, as he does here with the romantic notion held by some environmentalists that “primitive” cultures lived in harmony with nature. Long before even the adoption of agriculture, let alone the industrial revolution, the arrival of homo sapiens proved devastating for every other species, including other hominids.

Yet Harari also defies intellectual stereotypes. He might not think the era in which the only human beings on earth were  “noble savages” was a particularly good one for other species, but he does see it as having been a particularly good one for homo sapiens, at least compared to what came afterward, and up until quite recently.

Humans in the era before the Agricultural Revolution lived a healthier lifestyle than any since. They had a varied diet, worked on average only six hours a day, and were far more active than any of us in modern societies, chained to our cubicles and staring at computer screens.

Harari also throws doubt on the argument, made most recently by Steven Pinker, that the era before states was one of constant tribal warfare and violence, suggesting that it’s impossible to get an overall impression of levels of violence based on what ends up being a narrow range of human skeletal remains. The most likely scenario, he thinks, is that some human societies before agriculture were violent, and some were not, and that even the question of which societies were violent varied over time, rising and falling with circumstances.

From the beginning of the Cognitive Revolution up until the Agricultural Revolution, starting around 10,000 years ago, things were good for homo sapiens, but as Harari sees it, with the rise of farming things really went downhill for us as individuals, something he distinguishes from our status as a species.

Harari is adamant that while the Agricultural Revolution may have had the effect of increasing our numbers, and gave us all the wonders of civilization and the beauty of high culture, its price, on the bodies and minds of countless individuals, both human and animal, was enormous. Peasants were smaller, less healthy, and died younger than their hunter-gatherer ancestors. The high culture of the elites of ancient empires was bought at the price of the systematic oppression of the vast majority of human beings who lived in those societies. And, in the first instance I can think of in which this tale is told within the context of a global history of humanity, Harari tells the story of not just our oppression of each other, but of our domesticated animals as well.

He shows us how inhumane animal husbandry was long before our era of factory farming, which is even worse; it was these more “natural”, “organic” farmers who began practices such as penning animals in cages, separating mothers from their young, castrating males, and cutting off the noses or putting out the eyes of animals such as pigs so they could better serve their “divinely allotted” function of feeding human mouths and stomachs.

Yet this raises the question: if the Agricultural Revolution was so bad for the vast majority of human beings and animals, with the exception of a slim class at the top of the pyramid, why did it not only last, but spread, until only a tiny minority of homo sapiens in remote corners continued to be hunter-gatherers while the vast majority, up until quite recently, were farmers?

Harari doesn’t know. It was probably a very gradual process, but once human societies had crossed a certain threshold there was no going back: our numbers were simply too large to support a reversion to hunting and gathering. For one of the ironies of the Agricultural Revolution is that while it made human beings unhealthy, it also drove up birthrates. This probably happened through rational choice. A hunter-gatherer family would likely space its children, whereas a peasant family needed all the hands it could produce, something that merely drove the need for more children, and was only checked by the kinds of famines Malthus had pegged as the defining feature of agricultural societies and that we only escaped recently via the industrial revolution.

Perhaps, as Harari suggests, “We did not domesticate wheat. It domesticated us.” (81) At the only level evolution cares about, the propagation of our genes, wheat was a benefit to humanity, but at the cost of much human suffering and backbreaking labor in which we rid wheat of its rivals and spread the plant all over the globe. The wheat got a better deal, no matter how much we love our toast.

It was on the basis of wheat, and a handful of other staple crops (rice, maize, potatoes), that states were first formed. Harari emphasizes the state’s role as record keeper and enforcer of rules. The state began as a scorekeeper and referee for the new, complex, and crowded societies that grew up around farming.

All the greatest works of world literature, not to mention everything else we’ve ever read, can be traced back to this role of keeping accounts, of creating long-lasting records, which led the nascent states that grew up around agriculture to create writing. Shakespeare’s genius can trace its way back to the ancient Sumerian equivalent of an IRS office. Along with writing, the first states also created numbers and mathematics; bureaucrats have been bean counters ever since.

The quirk of human nature that for Harari made both the Cognitive and Agricultural Revolutions possible, and led to much else besides, was our ability to imagine things that do not exist. By this he means that almost everything we find ourselves surrounded by, not just religion but the state and its laws and everything in between, has been akin to a fantasy game. Indeed, Harari left me feeling that the whole of both premodern and modern societies was at root little but a game of pretend played by grown-ups, with adulthood perhaps nothing more than agreeing to play along with the same game everyone else is engaged in. He was especially compelling and thought-provoking when it came to that ultimate modern fantasy and talisman that all of us, from Richard Dawkins to Abu Bakr al-Baghdadi, believe in: namely, money.

For Harari the strange thing is that: “Money is the only trust system created by humans that can bridge almost any cultural gap, and does not discriminate on the basis of religion, gender, race, age or sexual orientation.” In this respect it is “the apogee of human tolerance” (186). Why then have so many of its critics down through the ages denounced money as the wellspring of human evil? He explains the contradiction this way:

For although money builds universal trust between strangers, this trust is invested not in humans, communities or sacred values, but in money itself and the impersonal systems that back it. We do not trust the stranger or the next-door neighbor; we trust the coins they hold. If they run out of coins, we run out of trust. (188)

We ourselves don’t live in the age of money so much as the age of Credit (or capital), and it is this Credit which Harari sees as one of the legs of the three-legged stool which he thinks defines our own period of history. For him we live in the age of the third revolution in human history, the age of the Scientific Revolution that has followed his other two. It is an age built out of an alliance of the forces of Capital, Science, and Empire.

What has made ours the age of Credit, and different from the material cultures that came before, where money certainly played a prominent role, is that we have adopted lending as the route to economic expansion. And what separates us from past eras that engaged in lending as well is that ours is based on a confidence that the future will be not just different but better than the past, a confidence that has, at least so far, panned out over the last two centuries, largely through continuous advances in science and technology.

The feature that really separated the scientific revolution from earlier systems of knowledge, in Harari’s view, grew out of the recognition in the 17th century of just how little we actually knew:

The Scientific Revolution has not been a revolution of knowledge. It has above all been a revolution of ignorance. The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions. (251)

Nothing perhaps better captures the early modern recognition of this ignorance than the discovery of the New World, which started us down our own unique historical path towards Empire. Harari sees both Credit and Science feeding and being fed by the intermediary institution of Empire; indeed, the history of capitalism, along with that of science and technology, cannot be understood without reference to the way the state and imperialism have shaped both.

The imperialism of the state has been necessary to enable the penetration of capitalism and secure its gains, and the power of advanced nations to impose these relationships has been largely the result of more developed science and technology, which the state itself has funded. Science was also sometimes used not merely to understand the geography and culture of subject peoples the better to exploit them, but, in the form of 19th and early 20th century scientific racism, as a justification for the exploitation itself. And the quest for globe-spanning Empire that began with the Age of Exploration in the 15th century is still ongoing.

Where does Harari think all this is headed?

 Over millennia, small simple cultures gradually coalesce into bigger and more complex civilizations, so that the world contains fewer and fewer mega-cultures each of which is bigger and more complex. (166)

Since around 200 BC, most humans have lived in empires. It seems likely that in the future, too, most humans will live in one. But this time the empire will truly be global. The imperial vision of domination over the entire world could be imminent. (207)

Yet Harari questions whether the Scientific Revolution, the new age of economic prosperity, and even the hunt for empire haven’t actually brought about a similar amount of misery, if not quite suffering, as the Agricultural Revolution that preceded them.

After all, the true quest of modern science is really not power, but immortality. In Harari’s view we are on the verge of fulfilling the goal of the “Gilgamesh Project”.

Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself. (267)

The quest after wealth, too, seems to be reaching a point of diminishing returns. If the objective of material abundance was human happiness, we might ask why so many of us are miserable. The problem, Harari thinks, might come down to biologically determined hedonic set-points that leave a modern office worker surrounded by food and comforts ultimately little happier than his peasant ancestor who toiled for a meager supper from sunrise to sunset. Yet perhaps the solution to this problem is at our fingertips as well:

 There is only one historical development that has real significance. Today when we realize that the keys to happiness are in the hands of our biochemical system, we can stop wasting our time on politics, social reforms, putsches, and ideologies and focus instead on the only thing that truly makes us happy: manipulating our biochemistry.  (389)

Still, even should the Gilgamesh Project succeed, or should we prove capable of mastering our biochemistry, Harari sees a way our Sisyphean nature may continue to have the last laugh. He writes:

Suppose that science comes up with cures for all diseases, effective anti-ageing therapies and regenerative treatments that keep people indefinitely young. In all likelihood, the immediate result will be an epidemic of anger and anxiety.

Those unable to afford the new treatments- the vast majority of people- will be beside themselves with rage. Throughout history, the poor and oppressed comforted themselves with the thought that at least death is even-handed- that the rich and powerful will also die. The poor will not be comfortable with the thought that they have to die, while the rich will remain young and beautiful.

But the tiny minority able to afford the new treatments will not be euphoric either. They will have much to be anxious about. Although the new therapies could extend life and youth they will not revive corpses. How dreadful to think that I and my loved ones can live forever, but only if we don’t get hit by a truck or blown to smithereens by a terrorist! Potentially a-mortal people are likely to grow averse to taking even the slightest risk, and the agony of losing a spouse, child or close friend will be unbearable. (384-385)

Along with this, Harari reminds us that it might not be biology that is most important for our happiness, but our sense of meaning. Given that he thinks all of our sources of meaning are at bottom socially constructed illusions, he concludes that perhaps the only philosophically defensible position might be some form of Buddhism- to stop all of our chasing after desire in the first place.

The real question is whether the future will show that all our grasping has ended with us reaching our object, or that it has led us further down a path of illusion and pain that we need to outgrow to achieve a different kind of transcendence.

 

Truth and Prediction in the Dataclysm

The Deluge by Francis Danby. 1837-1839

Last time I looked at the state of online dating. Among the figures mentioned was Christian Rudder, one of the founders of the dating site OkCupid and the author of a book on big data called Dataclysm: Who We Are When We Think No One’s Looking that somehow manages to be both laugh-out-loud funny and deeply disturbing at the same time.

Rudder is famous, or infamous depending on your view of the matter, for having written a piece about his site with the provocative title We experiment on human beings! There he wrote:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

That statement might set the blood of some boiling, but my own negative reaction to it is somewhat tempered by the fact that Rudder’s willingness to run his experiments on his site’s users originates, it seems, not in any conscious effort to be more successful at manipulating them, but as a way to quantify our ignorance. Or, as he puts it in the piece linked to above:

I’m the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.

Rudder eventually turned his experiments on the data of OkCupid’s users into his book Dataclysm, which displays the same kind of brutal honesty and acknowledgement of the limits of our knowledge. What he is trying to do is make sense of the deluge of data now inundating us. The only way we have found to do this is to create sophisticated algorithms that allow us to discern patterns in the flood. The problem with using algorithms to try to organize human interactions (which have themselves now become points of data) is that their users are often reduced to whatever version of what being a human being is has been embedded in the algorithm by its programmers. Rudder is well aware of, and completely upfront about, these limitations and refuses to make any special claims about algorithmic wisdom compared to the normal human sort. As he puts it in Dataclysm:

That said, all websites, and indeed all data scientists, objectify. Algorithms don’t work well with things that aren’t numbers, so when you want a computer to understand an idea, you have to convert as much of it as you can into digits. The challenge facing sites and apps is thus to chop and jam the continuum of human experience into little buckets 1, 2, 3, without anyone noticing: to divide some vast, ineffable process- for Facebook, friendship, for Reddit, community, for dating sites, love- into pieces a server can handle. (13)

At the same time, Rudder appears to see the data collected on sites such as OkCupid as a sort of mirror, reflecting back to us, in ways we have never had available before, the real truth about ourselves, laid bare of the social conventions and politeness that tend to obscure the way we truly feel. And what Rudder finds in this data is not the reflection of humanity’s inner beauty one might hope for, but something more like the portrait out of The Picture of Dorian Gray.

As an example, take what Rudder calls “Wooderson’s Law”, after the character from Dazed and Confused who said in the film, “That’s what I love about these high school girls: I get older while they stay the same age.” What Rudder has found is that heterosexual male attraction to women peaks when those women are in their early 20’s and thereafter precipitously falls. On OkCupid at least, women in their 30’s and 40’s are effectively invisible when competing against women in their 20’s for male sexual attention. Fortunately for heterosexual men, women are more realistic in their expectations and tend to report the strongest attraction to men roughly their own age, until sometime in men’s 40’s, when male attractiveness also falls off a cliff… gulp.

Another finding from Rudder’s work is not just that looks rule, but just how absolutely they rule. In his aforementioned piece, Rudder lays out that the vast majority of users essentially equate personality with looks. A particularly stunning woman can find herself with a 99% personality rating even if she has not one word in her profile.

These are perhaps somewhat banal, even obvious, discoveries about human nature that Rudder has been able to mine from OkCupid’s data, and to my mind at least they are less disturbing than the deep-seated racial bias he finds there as well. Again, at least among OkCupid’s users, dating preferences are heavily skewed against black men and women. Not just whites, it seems, but all other racial groups- Asians, Hispanics- would apparently prefer to date someone from a race other than African: a disheartening finding for the 21st century.

Rudder looks at other dark manifestations of our collective self than those found in OkCupid’s data as well. Try using Google search the way one would play the game Taboo. The search suggestions that pop up in the Google search bar, after all, are compiled on the basis of Google users’ most popular searches and thus provide a kind of gauge of what 1.17 billion human beings are thinking. Try these, some of which Rudder plays himself:

“why do women?”

“why do men?”

“why do white people?”

“why do black people?”

“why do Asians?”

“why do Muslims?”

The exercise gives a whole new meaning to Nietzsche’s observation that “When you stare into the abyss, the abyss stares back”.

Rudder also looks at the ability of social media to engender mobs. Take this case from Twitter in 2014. On New Year’s Eve of that year a young woman tweeted:

“This beautiful earth is now 2014 years old, amazing.”

Science obviously wasn’t her strength in school, but what should have led to nothing more than collective giggles, or perhaps a polite correction regarding terrestrial chronology, instead ballooned into a storm of tweets like this:

“Kill yourself”

And:

“Kill yourself you stupid motherfucker”. (139)

As a recent study has pointed out, the emotion second most likely to go viral is rage; we can count ourselves very lucky that the emotion most likely to go viral is awe.

Then there’s the question of the structure of the whole thing. Like Jaron Lanier, Rudder is struck by the degree to which the seemingly democratized architecture of the Internet appears to consistently manifest the opposite and reveal itself as following Zipf’s Law, which Rudder concisely reduces to:

rank x number = constant (160)

Both the economy and the society of the Internet age are dominated by “superstars”: companies such as Google and Facebook that so far outstrip their rivals in search or social media that they might be called monopolies, along with celebrities, musical artists, and authors. Zipf’s Law also seems to apply to dating sites, where a few profiles dominate the class of those viewed by potential partners. In the environment of a networked society, where invisibility is the common fate of almost all of us and success often hinges on increasing our own visibility, we are forced to turn ourselves towards “personal branding” and obsession over “Klout scores”. It’s not a new problem, but I wonder how much all this effort at garnering attention is stealing time from the actual work that makes that attention worthwhile and long lasting.
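Zipf’s claim is easy enough to check against any popularity data you can get your hands on. A minimal sketch in Python (the view counts are invented, and this is my illustration, not Rudder’s): on a log-log plot of rank against count, a Zipf-like distribution shows up as a line with slope near -1.

```python
# A minimal sketch (my illustration, not Rudder's): checking whether a
# popularity distribution looks Zipfian, i.e. rank * count ~ constant.
import numpy as np

# Invented view counts, sorted from most to least popular
counts = np.array([120000, 61000, 40500, 29800, 24100,
                   20300, 17200, 15100, 13400, 12000])
ranks = np.arange(1, len(counts) + 1)

# Under Zipf's law, rank * count should hover around the same constant
print(ranks * counts)

# Fit log(count) = a + b*log(rank); b near -1 indicates a Zipf-like tail
b, a = np.polyfit(np.log(ranks), np.log(counts), 1)
print(f"fitted exponent: {b:.2f}  (Zipf predicts about -1)")
```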

Rudder is uncomfortable with all this algorithmization while at the same time accepting its inevitability. He writes of the project:

Reduction is inescapable. Algorithms are crude. Computers are machines. Data science is trying to make sense of an analog world. It’s a by-product of the basic physical nature of the micro-chip: a chip is just a sequence of tiny gates.

From that microscopic reality an absolutism propagates up through the whole enterprise, until at the highest level you have the definitions, data types and classes essential to programming languages like C and JavaScript.  (217-218)

The thing is, for all his humility about the effectiveness of big data so far, and his admittedly limited ability to draw solid conclusions from OkCupid’s data, he seems to place undue trust in the ability of large corporations and the security state to succeed at the same project. Much deeper data mining and superior analytics, he thinks, separate his efforts from those of the really big boys. Rudder writes:

Analytics has in many ways surpassed the information itself as the real lever to pry. Cookies in your web browser and guys hacking for your credit card numbers get most of the press and are certainly the most acutely annoying of the data collectors. But they’ve taken hold of a small fraction of your life and for that they’ve had to put in all kinds of work. (227)

He compares them to Mike Myers’ Dr. Evil holding the world hostage “for one million dollars”:

… while the billions fly to the real masterminds, like Acxiom. These corporate data marketers, with reach into bank and credit card records, retail histories, and government filings like tax accounts, know stuff about human behavior that no academic researcher searching for patterns on some website ever could. Meanwhile the resources and expertise the national security apparatus brings to bear make enterprise-level data mining look like Minesweeper (227)

Yet do we really know this faith in big data isn’t an illusion? What discernible effects, clearly traceable to the juggernauts of big data such as Acxiom, can we actually point to in the overall economy or even in consumer behavior? For us to believe in the power of big data, shouldn’t someone have to show us the data that it works, and not just the promise that it will transform the economy once it has achieved maximum penetration?

On that same score, what degree of faith should we put in the powers of big data when it comes to security? As far as I am aware, no evidence has been produced that mass surveillance has prevented attacks- it didn’t stop the Charlie Hebdo killers. Just as importantly, it seemingly hasn’t prevented our public officials from being caught flat-footed and flabbergasted in the face of international events such as the revolution in Egypt or the war in Ukraine. And these latter big events would seem to be precisely the kinds of predictions big data should find relatively easy: monitoring broad public sentiment as expressed through social media and across telecommunications networks, and marrying that with inside knowledge of the machinations of the major political players at the storm center of events.

On this point of not yet having mastered the art of anticipating the future despite the mountains of data it was collecting, Anne Neuberger, Special Assistant to the NSA Director, gave a fascinating talk over at the Long Now Foundation in August last year. During a sometimes intense Q&A she had this exchange with one of the moderators, Stanford professor Paul Saffo:

Saffo: With big data, as a friend likes to say, “perhaps the data haystack that the intelligence community has created has grown too big to ever find the needle in.”

Neuberger : I think one of the reasons we talked about our desire to work with big data peers on analytics is because we certainly feel that we can glean far more value from the data that we have and potentially collect less data if we have a deeper understanding of how to better bring that together to develop more insights.

It’s a strange admission from a spokesperson from the nation’s premier cyber-intelligence agency that for their surveillance model to work they have to learn from the analytics of private sector big data companies whose models themselves are far from having proven their effectiveness.

Perhaps then, Rudder should have extended his skepticism beyond the world of dating websites. For me, I’ll only know that big data in the security sphere works when our politicians, Noah-like, seem unusually well prepared for a major crisis that the rest of us data-poor chumps didn’t also see coming a mile away.

 

Sex and Love in the Age of Algorithms

Eros and Psyche

How’s this for a 21st century Valentine’s Day tale: a group of religious fundamentalists want to redefine human sexual and gender relationships based on a more than 2,000 year old religious text. Yet instead of doing this by aiming to seize hold of the cultural and political institutions of society, a task they find impossible, they create an algorithm which once people enter their experience is based on religiously derived assumptions users cannot see. People who enter this world have no control over their actions within it, and surrender their autonomy for the promise of finding their “soul mate”.

I’m not writing a science-fiction story- it’s a tale that’s essentially true.

One of the first places, perhaps the only place, where the desire to compress human behavior into algorithmically processable and rationalized “data” has run into a wall was in the ever-so-irrational realms of sex and love. Perhaps I should have titled this piece “Cupid’s Revenge”, for the domain of sex and love has proved itself so unruly and non-computable that what is now almost unbelievable has happened: real human beings have been brought back into the process of making actual decisions that affect their lives, rather than relying on silicon oracles to tell them what to do.

It’s a story not much known and therefore important to tell. The story begins with the exaggerated claims of what was one of the first and biggest online dating sites- eHarmony. Founded in 2000 by Neil Clark Warren, a clinical psychologist and former marriage counselor, eHarmony promoted itself as more than just a mere dating site claiming that it had the ability to help those using its service find their “soul mate”. As their senior research scientist, Gian C. Gonzaga, would put it:

 It is possible “to empirically derive a matchmaking algorithm that predicts the relationship of a couple before they ever meet.”

At the same time it made such claims, eHarmony was also very controlling in the way its customers were allowed to use its dating site. Members were not allowed to search for potential partners on their own, but were directed to “appropriate” matches based on a 200-item questionnaire and the site’s algorithm, which remained opaque to its users. This model of what dating should be was doubtless driven by Warren’s religious background, for in addition to his psychological credentials, Warren was also a Christian theologian.

By 2011 eHarmony had garnered the attention of sceptical social psychologists, most notably Eli J. Finkel, who, along with his co-authors, wrote a critical piece for the American Psychological Association in 2011 on eHarmony and related online dating sites.

What Finkel wanted to know was whether claims such as eHarmony’s- that it had discovered some ideal way to match individuals to long-term partners- actually stood up to critical scrutiny. What he and his co-authors concluded was that while online dating had opened up a new frontier for romantic relationships, it had not solved the problem of how to actually find the love of one’s life. Or as he later put it in a recent article:

As almost a century of research on romantic relationships has taught us, predicting whether two people are romantically compatible requires the sort of information that comes to light only after they have actually met.

Faced with critical scrutiny, eHarmony felt compelled to do something, to my knowledge, none of the programmers of the various algorithms that now mediate much of our relationship with the world have done; namely, to make the assumptions behind their algorithms explicit.

As Gonzaga explained it, eHarmony’s matching algorithm was based on six key characteristics of users that included things like “level of agreeableness” and “optimism”. Yet as another critic of eHarmony, Dr. Reis, told Gonzaga:

That agreeable person that you happen to be matching up with me would, in fact, get along famously with anyone in this room.

Still, the major problem critics found with eHarmony wasn’t just that it made exaggerated claims for the effectiveness of its romantic algorithms, which were at best a version of skimming; it’s that it asserted nearly complete control over the way its users defined what love actually was. As is the case with many algorithms, the one used by eHarmony was a way for its designers and owners to constrain those using it and to impose, rightly or wrongly, their own value assumptions about the world.

And like many classic romantic tales, this one ended with the rebellion of messy human emotion over reason and paternalistic control. Social psychologists weren’t the only ones who found eHarmony’s model constraining, nor were they the first to notice its flaws. One of the founders of an alternative dating site, Christian Rudder of OkCupid, has noted that much of what his organization has done was done in reaction to the exaggerated claims for the efficacy of eHarmony’s algorithms and the top-down constraints imposed by its creators. But it is another, much-maligned dating site, Tinder, that proved to be the real rebel in this story.

Critics of Tinder, where users swipe through profile pictures to find potential dates, have labeled it a “hook-up” site that encourages shallowness. Yet Finkel concludes:

Yes, Tinder is superficial. It doesn’t let people browse profiles to find compatible partners, and it doesn’t claim to possess an algorithm that can find your soulmate. But this approach is at least honest and avoids the errors committed by more traditional approaches to online dating.

And appearance-driven sites are unlikely to be the last word in online dating, especially for older Romeos and Juliets who would like to go a little deeper than looks. Psychologist Robert Epstein, working at the MIT Media Lab, sees two up-and-coming trends that will likely further humanize the 21st century dating experience. The first is the rise of virtual dating environments that are less like video games. As he describes it:

….so at some point you will be able to have, you know, something like a real date with someone, but do it virtually, which means the safety issue is taken care of and you’ll find out how you interact with someone in some semi-real setting or even a real setting; maybe you can go to some exotic place, maybe you can even go to the Champs-Élysées in Paris or maybe you can go down to the local fast-food joint with them, but do it virtually and interact with them.

The other, just as important, but less tech-sexy change Epstein sees coming is bringing friends and family back into the dating experience:

Right now, if you sign up with the eHarmony or match.com or any of the other big services, you’re alone—you’re completely alone. It’s like being at a huge bar, but going without your guy friends or your girl friends—you’re really alone. But in the real world, the community is very helpful in trying to determine whether someone is right for you, and some of the new services allow you to go online with friends and family and have, you know, your best friend with you searching for potential partners, checking people out. So, that’s the new community approach to online dating.

As has long been the case, sex and love have been among the first explorers moving out into a previously uncharted realm of human possibility. Yet because of this, sex and love are also the proverbial canary in the coal mine, informing us of potential dangers. The experience of online dating suggests that we need to be sceptical of the exaggerated claims of the various algorithms that now mediate much of our lives and be privy to their underlying assumptions. To be successful, algorithms need to bring our humanity back into the loop rather than regulate it away as something messy, imperfect, irrational and unsystematic.

There is another lesson here as well: the more something becomes disconnected from our human capacity to extend trust through person-to-person contact, and through tapping into the wisdom of our own collective networks of trust, the more dependent we become on overseers who, in exchange for protecting us from deception, demand from us the kinds of intimate knowledge only friends and lovers deserve.

 

Big Data as statistical masturbation

Infinite Book Tunnel

It’s just possible that there is a looming crisis in yet another technological sector whose proponents have leaped too far ahead, and too soon, promising all kinds of things they are unable to deliver. It strange how we keep ramming our head into this same damned wall, but this next crisis is perhaps more important than deflated hype at other times, say our over optimism about the timeline for human space flight in the 1970’s, or the “AI winter” in the 1980’s, or the miracles that seemed just at our fingertips when we cracked the Human Genome while pulling riches out of the air during the dotcom boom- both of which brought us to a state of mania in the 1990’s and early 2000’s.

The thing that separates a potential new crisis in the area of so-called “Big Data” from these earlier ones is that, literally overnight, we have reconstructed much of our economy and national security infrastructure on its yet-to-be-proven premises, eroding our ancient right to privacy in the process. Now we are on the verge of changing not just the nature of the science upon which we all depend, but nearly every other field of human intellectual endeavor. And we’ve done, and are doing, this despite the fact that the most over-the-top promises of Big Data are about as epistemologically grounded as divining the future by looking at goat entrails.

Well, that might be a little unfair. Big Data is helpful, but the question is: helpful for what? A tool, as opposed to a supposedly magical talisman, has its limits, and understanding those limits should lead not to our jettisoning the tool of large-scale data analysis, but to asking what needs to be done to make these new capacities actually useful, rather than, like all forms of divination, merely comforting us with the idea that we can know the future and thus somehow exert control over it, when in reality both our foresight and our powers are much more limited.

Start with the issue of the digital economy. One model underlies most of the major Internet giants- Google, Facebook, and to a lesser extent Apple and Amazon- along with a whole set of behemoths few of us can name but that underlie everything we do online, especially data aggregators such as Acxiom. That model is essentially to gather up every last digital record we leave behind, many of them given up in exchange for “free” services, and to use this living archive to target advertisements at us.

It’s not only that this model has provided the infrastructure for an unprecedented violation of privacy by the security state (more on which below) it’s that there’s no real evidence that it even works.

Just anecdotally, reflect on your own personal experience. If companies can very reasonably be said to know you better than your mother, your wife, or even you yourself do, why are the ads coming your way so damn obvious, and frankly even oblivious? In my own case, if I shop online for something- a hammer, a car, a pair of pants- I end up getting ads for that very same type of product weeks or even months after I have actually bought a version of the item I was searching for.

In large measure, the Internet is a giant market in which we can find products or information. Targeted ads can only really work if they are able to refract the information I am searching for in their marketed product’s favor- if they lead me to buy something I would not otherwise have purchased. Derek Thompson, in the piece linked to above, points out that this problem is called endogeneity, or more colloquially: “hell, I was going to buy it anyway.”
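To see what getting around endogeneity would actually require, here is a toy sketch of the measurement advertisers would need: compare people shown the ad against a randomized holdout who never saw it, and credit the ad only with the difference. All of the numbers are invented; nothing here comes from Thompson’s piece.

```python
# Toy illustration of endogeneity in ad measurement (all numbers invented).
# Naive attribution credits the ad with everyone who saw it and bought;
# the honest measure is the lift over a randomized holdout group.

shown, bought_shown = 100_000, 2_100      # users shown the ad, purchases among them
holdout, bought_holdout = 100_000, 2_000  # randomized holdout, never shown the ad

naive_rate = bought_shown / shown
baseline = bought_holdout / holdout       # "I was going to buy it anyway"
lift = naive_rate - baseline

print(f"naive conversion rate: {naive_rate:.2%}")
print(f"baseline (holdout):    {baseline:.2%}")
print(f"incremental lift:      {lift:.2%}  <- the only part the ad can claim")
```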

The problem with this economic model, though, goes even deeper than that. At least one-third of clicks on digital ads aren’t made by human beings at all, but by bots that represent a way of gaming advertising revenue like something right out of a William Gibson novel.

Okay, so we have this economic model based on what at its root is really just spyware, and despite all the billions poured into it, we have no idea if it actually affects consumer behavior. That might be merely an annoying feature of the present rather than something to fret about were it not for the fact that this surveillance architecture has apparently been captured by the security services of the state. Their model is essentially just a darker version of its commercial forebear. Here the NSA, GCHQ et al hoover up as much of the Internet’s information as they can get their hands on. Ostensibly, they’re doing this so they can algorithmically sort through the data to identify threats.

In this case, we have just as many reasons to suspect that it doesn’t really work, and though they claim it does, none of these intelligence agencies will actually let us look at their supposed evidence. The reasons to suspect that mass surveillance might suffer flaws similar to those of mass “personalized” marketing were excellently summed up in a recent article in the Financial Times by Zeynep Tufekci, when she wrote:

But the assertion that big data is “what it’s all about” when it comes to predicting rare events is not supported by what we know about how these methods work, and more importantly, don’t work. Analytics on massive datasets can be powerful in analysing and identifying broad patterns, or events that occur regularly and frequently, but are singularly unsuited to finding unpredictable, erratic, and rare needles in huge haystacks. In fact, the bigger the haystack — the more massive the scale and the wider the scope of the surveillance — the less suited these methods are to finding such exceptional events, and the more they may serve to direct resources and attention away from appropriate tools and methods.
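Tufekci’s haystack point is, at bottom, the base-rate problem, and a few invented numbers make it concrete: even an implausibly accurate classifier, run over an enormous population in search of a rare event, buries the real needles under false alarms.

```python
# Back-of-the-envelope base-rate arithmetic (all numbers invented): even a
# very accurate classifier, applied to a huge haystack, drowns the rare
# needles in false positives.

population = 300_000_000      # people under surveillance
true_threats = 300            # the rare events we actually care about
sensitivity = 0.99            # fraction of real threats flagged
false_positive_rate = 0.001   # fraction of innocents flagged

flagged_true = true_threats * sensitivity
flagged_false = (population - true_threats) * false_positive_rate

precision = flagged_true / (flagged_true + flagged_false)
print(f"flags raised: {flagged_true + flagged_false:,.0f}")
print(f"chance a given flag is a real threat: {precision:.4%}")
# Roughly 300,000 flags, of which only about 300 are real: about 0.1 percent.
```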

I’ll get to what’s epistemologically wrong with using Big Data in the way used by the NSA that Tufekci rightly criticizes in a moment, but on a personal, not societal level, the biggest danger from getting the capabilities of Big Data wrong seems most likely to come through its potentially flawed use in medicine.

Here’s the kind of hype we’re in the midst of as found in a recent article by Tim Mcdonnell in Nautilus:

We’re well on our way to a future where massive data processing will power not just medical research, but nearly every aspect of society. Viktor Mayer-Schönberger, a data scholar at the University of Oxford’s Oxford Internet Institute, says we are in the midst of a fundamental shift from a culture in which we make inferences about the world based on a small amount of information to one in which sweeping new insights are gleaned by steadily accumulating a virtually limitless amount of data on everything.

The value of collecting all the information, says Mayer-Schönberger, who published an exhaustive treatise entitled Big Data in March, is that “you don’t have to worry about biases or randomization. You don’t have to worry about having a hypothesis, a conclusion, beforehand.” If you look at everything, the landscape will become apparent and patterns will naturally emerge.

Here’s the problem with this line of reasoning, a problem that I think is the same, and shares the same solution to the issue of mass surveillance by the NSA and other security agencies. It begins with this idea that “the landscape will become apparent and patterns will naturally emerge.”

The flaw in this reasoning has to do with the way very large data sets work. One would think that sampling millions of people, which we’re now able to do via ubiquitous monitoring, would offer enormous gains over the days when we were confined to population samples of only a few thousand, yet this isn’t necessarily the case. The problem is that the wider your data set- the more attributes you record and the more combinations of them you examine- the greater your chance of stumbling on false correlations.

Previously I had thought that surely this was a problem statisticians had either solved or were on the verge of solving. They haven’t, at least according to the computer scientist Michael Jordan, who fears that we might be on the verge of a “Big Data winter” similar to the one AI went through in the 1980’s and 90’s. Say you had an extremely large database with multiple forms of metrics:

Now, if I start allowing myself to look at all of the combinations of these features—if you live in Beijing, and you ride bike to work, and you work in a certain job, and are a certain age—what’s the probability you will have a certain disease or you will like my advertisement? Now I’m getting combinations of millions of attributes, and the number of such combinations is exponential; it gets to be the size of the number of atoms in the universe.

Those are the hypotheses that I’m willing to consider. And for any particular database, I will find some combination of columns that will predict perfectly any outcome, just by chance alone. If I just look at all the people who have a heart attack and compare them to all the people that don’t have a heart attack, and I’m looking for combinations of the columns that predict heart attacks, I will find all kinds of spurious combinations of columns, because there are huge numbers of them.
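Jordan’s warning is easy to reproduce. In the minimal simulation below (my illustration, not his), every “attribute” and the “outcome” are pure noise, yet with tens of thousands of attributes some of them will still correlate impressively with the outcome by chance alone.

```python
# Minimal illustration of Jordan's point: among enough random attributes,
# some will correlate strongly with the outcome by chance alone.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_attributes = 200, 20_000

X = rng.standard_normal((n_people, n_attributes))  # pure-noise "attributes"
y = rng.standard_normal(n_people)                  # pure-noise "outcome"

# Correlation of every attribute with the outcome
Xc = (X - X.mean(0)) / X.std(0)
yc = (y - y.mean()) / y.std()
corrs = Xc.T @ yc / n_people

print(f"strongest spurious correlation: {np.abs(corrs).max():.2f}")
# With 20,000 tries, correlations around 0.3 turn up even though nothing
# here has any real relationship to anything else.
```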

The actual mathematics of distinguishing spurious from potentially useful correlations is, in Jordan’s estimation, far from worked out:

We are just getting this engineering science assembled. We have many ideas that come from hundreds of years of statistics and computer science. And we’re working on putting them together, making them scalable. A lot of the ideas for controlling what are called familywise errors, where I have many hypotheses and want to know my error rate, have emerged over the last 30 years. But many of them haven’t been studied computationally. It’s hard mathematics and engineering to work all this out, and it will take time.

It’s not a year or two. It will take decades to get right. We are still learning how to do big data well.

Alright, now that’s a problem. As you’ll no doubt notice the danger of false correlation that Jordan identifies as a problem for science is almost exactly the same critique Tufekci  made against the mass surveillance of the NSA. That is, unless the NSA and its cohorts have actually solved the statistical/engineering problems Jordan identified and haven’t told us, all the biggest data haystack in the world is going to lead to is too many leads to follow, most of them false, and many of which will drain resources from actual public protection. Perhaps equally troubling: if security services have solved these statistical/engineering problems how much will be wasted in research funding and how many lives will be lost because medical scientists were kept from the tools that would have empowered their research?

At least part of the solution will be remembering why we developed statistical analysis in the first place. Herbert I. Weisberg, with his recent book Willful Ignorance: The Mismeasure of Uncertainty, has provided a wonderful, short primer on the subject.

Statistical evidence, according to Weisberg, was first introduced to medical research back in the 1950’s as a protection against exaggerated claims to efficacy and widespread quackery. Since then we have come to take a p value of .05 almost as the truth itself. Weisberg’s book is really a plea to clinicians to know their patients and not rely almost exclusively on statistical analyses of “average” patients when helping those in their care make life-altering decisions about what medicines to take or procedures to undergo. Weisberg thinks that personalized medicine will over the long term solve these problems, and while I won’t go into my doubts about that here, I do think that in the experience of the physician he identifies the root of the solution to our Big Data problem.

Rather than think of Big Data as somehow providing us with a picture of reality that “naturally emerges”, as Mayer-Schönberger, quoted above, suggested, we should start to view it as a way to easily and cheaply give us a metric for the potential validity of a hypothesis. And it’s not only the first step that needs to be guided by old-fashioned science rather than computer-driven numerology, but the remaining steps as well: a positive signal followed up by actual scientists and other researchers exercising such now-rusting skills as running real experiments and building theories to explain their results. Big Data, done right, won’t end up making science a form of information promising, but will instead be used as the primary tool for keeping scientists from going down cul-de-sacs.

The same principle applied to mass surveillance means a return to old-school human intelligence, even if it now needs to be empowered by new digital tools. Rather than Big Data being used to hoover up and analyze all potential leads, espionage and counterterrorism should become more targeted and based on efforts to understand and penetrate threat groups themselves. The move back to human intelligence and towards more targeted surveillance, rather than the mass data grab symbolized by Bluffdale, may be a reality forced on the NSA et al by events. In part due to the Snowden revelations, terrorist and criminal networks have already abandoned the non-secure public networks the rest of us use. Mass surveillance has lost its raison d’être.

At least in terms of science and medicine, I recently saw a version of how Big Data done right might work. In an article for Quanta and Scientific American, Veronique Greenwood discussed two recent efforts by researchers to use Big Data to find new understandings of, and treatments for, disease.

The physicist (not biologist) Stefan Thurner has created a network model of comorbid diseases, trying to uncover the hidden relationships between different, seemingly unrelated medical conditions. What I find interesting about this is that it gives us a new way of understanding disease, breaking free of hermetically sealed categories that may blind us to mechanisms shared between conditions. I find this especially pressing when it comes to mental health, where the kind of symptom listing found in the DSM- the Bible for mental health care professionals- has never resulted in a causative model of how conditions such as anxiety or depression actually work, and rests on an antiquated separation between the mind and the body, not to mention between both and the social and environmental factors that give shape to mental health.

Even more interesting, from Greenwood’s piece, are the efforts by Joseph Loscalzo of Harvard Medical School to come up with a whole new model of disease, one that looks beyond genome associations to map out the molecular networks underlying disease, isolating the statistical correlation between a particular variant of such a map and a disease. This relationship between genes and proteins correlated with a disease is something Loscalzo calls a “disease module”.

Thurner describes the underlying methodology behind his, and by implication Loscalzo’s,  efforts to Greenwood this way:

“Once you draw a network, you are drawing hypotheses on a piece of paper,” Thurner said. “You are saying, ‘Wow, look, I didn’t know these two things were related. Why could they be? Or is it just that our statistical threshold did not kick it out?’” In network analysis, you first validate your analysis by checking that it recreates connections that people have already identified in whatever system you are studying. After that, Thurner said, “the ones that did not exist before, those are new hypotheses. Then the work really starts.”
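As a very loose sketch of what “drawing hypotheses on a piece of paper” might look like in code (this is not Thurner’s actual model, and the patient records are invented), one can link conditions that co-occur more often than independence would predict, and treat each resulting edge as a hypothesis to investigate:

```python
# Loose sketch of a comorbidity network (invented data, not Thurner's model):
# connect two conditions when they co-occur in patients more often than
# independence would predict; unexpected edges become hypotheses to test.
import itertools
import networkx as nx

# Hypothetical patient records: each is a set of diagnoses
patients = [
    {"diabetes", "hypertension"},
    {"diabetes", "hypertension", "depression"},
    {"depression", "anxiety"},
    {"hypertension", "anxiety"},
    {"diabetes", "depression"},
]

n = len(patients)
conditions = sorted(set.union(*patients))
prevalence = {c: sum(c in p for p in patients) / n for c in conditions}

G = nx.Graph()
for a, b in itertools.combinations(conditions, 2):
    together = sum(a in p and b in p for p in patients) / n
    expected = prevalence[a] * prevalence[b]   # if the two were independent
    if together > expected:                    # crude over-representation check
        G.add_edge(a, b, lift=round(together / expected, 2))

# Each edge is a hypothesis: "these two conditions may share a mechanism"
for a, b, d in G.edges(data=True):
    print(f"{a} -- {b}  (lift {d['lift']})")
```

The point of the exercise is exactly the one Thurner makes: the network settles nothing on its own; it only tells you where the real work should start.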

It’s the next steps, the testing of hypotheses, the development of a stable model where the most important work really lies. Like any intellectual fad, Big Data has its element of truth. We can now much more easily distill large and sometimes previously invisible  patterns from the deluge of information in which we are now drowning. This has potentially huge benefits for science, medicine, social policy, and law enforcement.

The problem comes from thinking that we are at the point where our data-crunching algorithms can do the work for us, and are about to replace human beings and their skills at investigating problems deeply and in the real world. The danger there would be thinking that knowledge could work like self-gratification: a mere thing of the mind, without all the hard work, compromises, and conflicts between expectations and reality that go into a real relationship. Ironically, this was a truth perhaps discovered first not by scientists or intelligence agencies but by online dating services. To that strange story, next time….