Auguries of Immortality, Malthus and the Verge

Hindu Goddess Tara

Sometimes, if you want to see something in the present clearly, it’s best to go back to its origins. This is especially true when dealing with some monumental historical change, a phase transition from one stage to the next. The reason I think this is helpful is that those lucky enough to live at the beginning of such events have no historical or cultural baggage to obscure their forward view. When you live in the middle, or at the end, of an era, you find yourself surrounded, sometimes suffocated, by all the good and bad that has come as a result. As a consequence, understanding the true contours of your surroundings or ultimate destination is almost impossible; your nose is stuck to the glass.

The question is: are we ourselves at the beginning of such an era, in the middle, or at the end? How would we even know?

If I were to make the case that we find ourselves in either the middle or the end of an era, I know exactly where I would start. In 1793 the eccentric English writer William Godwin published his Enquiry Concerning Political Justice and its Influence on Morals and Happiness, a book which few people remember. What Godwin is remembered for instead is his famous daughter Mary Shelley, and her even more famous monster, though I should add that if you like thrillers you can thank Godwin for having invented them.

Godwin’s Enquiry, however, was a different kind of book. It grew out of a time which, in Godwin’s eyes at least, seemed pregnant with once unimaginable hope. The scientific revolution had brought about a fuller understanding of nature and her laws than anything achieved by the ancients; the superstitions of earlier eras had been abandoned for a new age of enlightenment; the American Revolution had brought into the world a whole new form of government based on Enlightenment principles; and, more important still as Godwin saw it, a similar revolution had overthrown the monarchy in France.

All this, along with the first manifestations of what would become the industrial revolution, led Godwin to speculate in the Enquiry that mankind had entered a new era of perpetual progress. Where could such progress and mastery over nature ultimately lead? Jumping off a comment by his friend Benjamin Franklin, Godwin wrote:

 Let us here return to the sublime conjecture of Franklin, that “mind will one day become omnipotent over matter.” If over all other matter, why not over the matter of our own bodies? If over matter at ever so great a distance, why not over matter which, however ignorant we may be of the tie that connects it with the thinking principle, we always carry about with us, and which is in all cases the medium of communication between that principle and the external universe? In a word, why may not man be one day immortal?

Here then we can find evidence for the recent claim of Yuval Harari that “The leading project of the Scientific Revolution is to give humankind eternal life.” (268) In later editions of the Enquiry, however, Godwin dropped the suggestion of immortality. It seems he did so not so much because of the criticism that followed from such comments, or because he stopped believing in it, but because immortality seemed too much like a termination point for his notion of progress, which he now thought really would extend forever into the future. His key point was that the mind’s growing understanding would result in an ever increasing power over the material world, a process that would literally never end.

At almost exactly the same time as Godwin was writing his Enquiry, another figure was making much the same argument, including the idea that scientific progress would eventually result in indefinite human lifespans. The Marquis de Condorcet’s Sketch for a Historical Picture of the Progress of the Human Spirit was written by a courageous man on the run from a French Revolutionary government that wanted to cut off his head. Amazingly, even while hunted down during the Terror, Condorcet retained his long-term optimism regarding the ultimate fate of humankind.

A young English parson with a knack for the just emerging science of economics not only wasn’t buying it, he wanted to scientifically prove (though I am using the term loosely) exactly why such optimism should not be believed. This was Thomas Malthus, whose name, quite mistakenly, has spawned its own adjective, Malthusian, which has come to mean, essentially, environmental disaster caused by our own human hands and faults.

As is commonly known, Malthus’ argument was that historically there has been a mismatch between the growth of population and the production of food, which sooner or later has led to famine and decline. It was Godwin’s and Condorcet’s claims regarding future human immortality that were, in part, responsible for Malthus stumbling upon his specific argument centered on population. For the obvious rejoinder to those claiming that the human lifespan would increase forever was: what would we do with all of these people?

Both Godwin and Condorcet thought they had answered this question by claiming that in the future the birth rate would decline to zero. Stunningly, this has largely proven correct: population growth rates have declined in parallel with increases in longevity. Though rather than declining due to the victory of “reason” and the conquest of the “passions”, as both Godwin and Condorcet thought, birth rates declined because sex was, for the first time in human history, decoupled from reproduction through the creation of effective forms of birth control.

So far, at least, it seems Godwin and Condorcet have gotten the better side of the argument. Since the Enquiry we have experienced more than two centuries of uninterrupted progress in which mind has gained increasing mastery over the material world. And though we are little closer to the optimists’ dream of “immortality”, their prescient guess that longevity would be coupled with a declining birth rate would seem to clear the goal of increased longevity from being self-defeating on Malthusian grounds.

This would not be, of course, the first time Malthus has been shown to be wrong. Yet his ideas, or a caricature of them, have a long history of retaining their hold over our imagination. Exactly why this is the case is a question explored in detail by Robert J. Mayhew in his excellent Malthus: The Life and Legacies of an Untimely Prophet. Malthus’ argument in his famous, if rarely actually read, An Essay on the Principle of Population has become a sort of secular version of Armageddon, his views latched onto by figures both sinister and benign over the two centuries since the essay’s publication.

Malthus’ argument was used against laws to alleviate the burdens of poverty, which, it was argued, would only increase population growth and hasten an ultimate reckoning (and this view, at least, was close to that of Malthus himself). His ideas were used by anti-immigrant and racist groups in the 19th and early 20th centuries. Hitler’s expansionist and genocidal policy in eastern Europe was justified on Malthusian grounds.

On the more benign side, Malthusian arguments were used as a spur to the Green Revolution in agriculture in the 1960s (though Mayhew thinks the warnings of pending famine were political, arising from the Cold War, and overdone). Malthusianism was invoked in the 1970s by Paul Ehrlich to warn of a “population bomb” that never came, slid during the Oil Crisis into fear over resource constraints, and can now be found in predictions about the coming “resource” and “water” wars. There is also a case where Malthus really may have his revenge, though more on that in a little bit.

And yet, we would be highly remiss were we not to take the question Malthus posed seriously. For what he was really inquiring about is whether or not there might be ultimate limits on the ability of the human mind to shape the world in which it found itself. What Malthus was looking for was the boundary or verge of our limits as established by the laws of nature as he understood them. Those who espoused the new human perfectionism, such as Godwin and Condorcet, faced what appeared to Malthus to be an insurmountable barrier to their theories being considered scientific, no matter how much they attached themselves to the language and symbols of the recent success of science. For what they were predicting had no empirical basis: it had never happened before. Even granting that knowledge did seem to increase through history, if merely as a consequence of having had the time to accumulate, the kind of radical and perpetual progress that Godwin and Condorcet predicted was absent from human history. Malthus set out to provide a scientific argument for why.

In philosophical terms Malthus’ Essay is best read as a theodicy, an attempt, like that of Leibniz before him, to argue that even in light of the world’s suffering we live in the “best of all possible worlds”. As Newton had done for falling objects, Malthus sought the laws of nature, as designed by his God, that explained the development of human society. Technological and social progress had remained static in the past even as human knowledge regarding the world accumulated over generations, and for Malthus the gap between mind and matter was part of what made us uniquely human. What caused us to most directly experience this gap, and caused progress to remain static? Malthus thought he had pinned the source of stasis: famine and population decline.

As with any other physical system, for human societies the question boiled down to how much energy was available to do meaningful work. Given that the vast majority of work in Malthus’ day was done by things that required energy in the form of food, whether humans or animals, the limited amount of land that could be efficiently tilled presented an upper bound to the size, complexity, and progress of any human society.

What Malthus missed, of course, was that the relationship between food and work was about to be severed. Or rather, the new machines did consume a form of processed “food”: organic material that had been chemically “constructed” and accumulated over the eons in the form of fossil fuels, which offered an easily accessible type of energy different in kind from anything that had come before it.

The sheer force of the age of machines Malthus had failed to foresee did indeed break with the flatline of human history he had identified in every prior age. A fact that has perhaps never been shown more clearly than in Ian Morris’ simple graph below.

Ian Morris Great Divergence Graph

What made this breaking of the pattern between all of past human history and the last few centuries possible is the very thing that could, tragically, prove Malthus right after all: fossil fuels. For any society before 1800 the majority of energy other than that derived from food came in the form of wood, whether as timber itself or charcoal. But as Lewis Dartnell pointed out in a recent piece in AEON, the world before fossil fuels posed a seemingly insurmountable (should I say Malthusian?) dilemma; namely:

The central problem is that woodland, even when it is well-managed, competes with other land uses, principally agriculture. The double-whammy of development is that, as a society’s population grows, it requires more farmland to provide enough food and also greater timber production for energy. The two needs compete for largely the same land areas.

Dartnell’s point is that we have been both extremely lucky and unlucky in how accessible and potent fossil fuels have been. On the one hand fossil fuels gave us a rather short path to technological society; on the other, not only will it be difficult to wean ourselves from them, it is hard to imagine how we could reboot as a civilization should we suffer collapse, having already used up most of the world’s most easily accessed forms of energy.

It is a useful exercise, then, to continue to take Malthus’ argument seriously. For even if we escape the second Malthusian trap (climate change induced by the very fossil fuels that allowed us to break free of the trap Malthus originally identified, our need to literally grow our energy), there are other predictable traps that likely lie in store.

One of the traps that interests me most has to do with the “energy problem” that Malthus understood in terms of the production of food. As I’ve written about before, and as brought to my attention by the science writer Lee Billings in his book Five Billion Years of Solitude, there is a good and little discussed case from physics for thinking we might be closer to the end of the era that began with the industrial revolution than to its middle, let alone its beginning.

This physics of civilizational limits comes from Tom Murphy of the University of California, San Diego, who writes the blog Do The Math. Murphy’s argument, as profiled by the BBC, includes the following points:

  • If rising energy use and economic growth remain coupled, as they have in the past, we are confronted with the absurdity of exponentials. At a 2.3 percent growth rate, within 2,500 years we would require all the energy from all the stars in the Milky Way galaxy to function (a back-of-the-envelope check follows this list).
  • At 3 percent growth, within four hundred years we will have boiled away the earth’s oceans, not because of global warming, but from the waste heat that is the normal product of energy production. (Even clean fusion leaves us boiling away the world’s oceans for the same reason.)
  • Renewables push out this reckoning, but not indefinitely. At a 3 percent growth rate, even if solar efficiency were 100 percent, we would need to capture all of the sunlight hitting the earth within three hundred years.
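Here is that back-of-the-envelope check, a minimal Python sketch. The constants (current world power use, the sun’s luminosity, the Milky Way’s star count) are rough, commonly cited values of my own choosing, not figures taken from Murphy’s posts:

```python
# Rough constants (my assumptions, not Murphy's figures)
P_NOW = 1.8e13              # current world power use, ~18 terawatts
L_SUN = 3.8e26              # luminosity of the sun, in watts
STARS_IN_MILKY_WAY = 2e11   # order-of-magnitude star count

GROWTH = 0.023              # 2.3 percent annual growth
YEARS = 2500

demand = P_NOW * (1 + GROWTH) ** YEARS   # exponential extrapolation
galaxy = L_SUN * STARS_IN_MILKY_WAY      # crude total galactic output

print(f"Demand after {YEARS} years: {demand:.2e} W")   # ~9e37 W
print(f"Rough galactic output:      {galaxy:.2e} W")   # ~8e37 W
```

The punchline is how little the specific constants matter: at a compounding 2.3 percent, being generous by a factor of ten on any of them buys only about a century more growth.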

Will such limits prove to be correct and Malthus, in some sense, be shown to have been right all along? Who knows. The only way we’d have strong evidence to the contrary is if we came across evidence of civilizations with a much, much greater energy signature than our own. A recent project out of Penn State to do just that, which looked at 100,000 galaxies, found nothing, though this doesn’t mean the search is over.

Relating back to my last post: the universe may lean towards giving rise to complexity in the way the physicist Jeremy England suggests, but the landscape is littered with great canyons where evolution gets stuck for very long periods of time, which explains its blindness as perceived by someone like Henry Gee. The scary thing is that getting out of these canyons is a race against time: complex life could have been killed off by some disaster shortly after the Cambrian explosion; we could have remained hunter gatherers and failed to develop agriculture before another ice age did us in; some historical contingency could have prevented industrialization before a global catastrophe we are now advanced enough to respond to wiped us out.

If Tom Murphy is right we are now in a race to secure exponentially growing sources of energy, and it is a race we are destined to lose. The reason we don’t see any advanced civilizations out there is because the kind of growth we’ve extrapolated from the narrow slice of the past few centuries is indeed a finite thing as the amount of energy such growth requires reaches either terrestrial or cosmic limits. We simply won’t be able to gain access to enough energy fast enough to keep technological progress going at its current rate.

Of course, even if we believe that progress has some limit out there, that does not necessarily entail we shouldn’t pursue it, in many of its forms, until we hit the verge itself. Taking seriously arguments that there might be limits to our technological progress is one thing; limits to our moral progress, our efforts to address suffering in the world, are quite another, for there accepting limits would mean accepting some level of suffering or injustice as just the “way things are”. That we should not accept this, Malthus himself nearly concluded:

 Evil exists in the world not to create despair but activity. We are not patiently to submit to it, but to exert ourselves to avoid it. It is not only the interest but the duty of every individual to use his utmost efforts to remove evil from himself and from as large a circle as he can influence, and the more he exercises himself in this duty, the more wisely he directs his efforts, and the more successful these efforts are, the more he will probably improve and exalt his own mind and the more completely does he appear to fulfil the will of his Creator. (124-125)

The problem lies with the justification of the suffering of individual human beings in any particular form as “natural”. The crime at the heart of many versions of Malthusianism is this kind of aggregation of human beings into some kind of destructive force, which leads to the denial of the only scale at which someone’s true humanity can be seen: the level of the individual. Such a moral blindness that sees only the crowd can be found in the most famous piece of modern Malthusianism, Paul Ehrlich’s The Population Bomb, where he discusses his experience of Delhi:

The streets seemed alive with people. People eating, people washing, people sleeping. People visiting, arguing, and screaming. People thrusting their hands through the taxi window, begging. People defecating and urinating. People clinging to the buses. People herding animals. People, people, people, people. As we moved slowly through the mob, hand horn squawking, the  dust, noise, heat, and the cooking fires gave the scene a hellish aspect. Would we ever get to our hotel? All three of us were, frankly, frightened. (p. 1)

Stripped of this inability to see that the value of human beings can only be grasped at the level of the individual, and that suffering can only be assessed in a moral sense at this individual level, Malthus can help remind us that our minds themselves emerge out of their confrontation and interaction with a material world whose boundaries we constantly explore, overcome, and confront again. The world itself, as he wrote, was probably “a mighty process for awakening matter into mind”, and even the most ardent proponents of human perfectionism, modern day transhumanists or singularitarians, or just plain old humanists, would agree with that.

* Image: Tara (Devi): Hindu goddess of the unquenchable hunger that compels all life.

Life: Inevitable or Accident?

The Tree of Life, Gustav Klimt

Here’s the question: does the existence of life in the universe reflect something deep and fundamental or is it merely an accident and epiphenomenon?

There’s an interesting new theory coming out of the field of biophysics that claims the cosmos is indeed built for life, and not merely in the sense found in the so-called “anthropic principle”, which states that just by being here we can assume that all of nature’s fundamental values must be friendly to complex organisms such as ourselves that are able to ask such questions. The new theory claims that life of ever growing complexity and intelligence is not just likely, but the inevitable result of the laws of nature.

The proponent of the new theory is a young physicist at MIT named Jeremy England. I can’t claim to grasp all the intricate details of England’s theory, though he does an excellent job of explaining it here, but perhaps the best way of capturing it succinctly is by thinking of the laws of physics as a landscape, and a leaning one at that.

The second law of thermodynamics leans in the direction of increased entropy: systems naturally move in the direction of losing rather than gaining order over time, which is why we break eggs to make omelettes and not the other way around. The second law would seem to be a bad thing for living organisms, but oddly enough it ends up being a blessing not just for life, but for any self-organizing system, so long as that system has a means of radiating this entropy away from itself.
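In textbook terms, this “radiating entropy away” is just the second law applied to the system and its environment together. A standard statement (not England’s own formalism) looks like:

```latex
\Delta S_{\text{system}} + \Delta S_{\text{environment}} \geq 0,
\qquad
\Delta S_{\text{environment}} = \frac{Q_{\text{dissipated}}}{T}
```

A system can become more ordered, with a negative change in its own entropy, provided it dumps enough heat Q into surroundings at temperature T that the total entropy still rises. That bookkeeping is what a sunlit pool of water, or a living cell, is doing.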

For England, the second law provides the environment and direction in which life evolves. In places where external energy is available and can be dissipated across some boundary, such as a pool of water, self-organizing systems naturally come to be dominated by those forms that are particularly good at absorbing energy from their surrounding environment and dissipating less organized energy, as heat (entropy), back into it.

This landscape in which life evolves, England postulates, may tilt as well in the direction of complexity and intelligence, because in a system whose energy frequently oscillates, those forms able to anticipate the direction of such oscillations can align themselves with them, and thus become able to accomplish even more work through resonance.

England is in no sense out to replace Darwin’s natural selection as the mechanism through which evolution is best understood, though, should he be proved right, he would end up greatly amending it. If it ultimately proves successful, and it is admittedly very early days, England’s theory will have answered one of the fundamental questions that has dogged evolution since its beginnings. For while Darwin’s theory provides us with all the explanation we need for how complex organisms such as ourselves could have emerged out of seemingly random processes, that is, through natural selection, it has never quite explained how you go from the inorganic to the organic and get evolution working in the first place. England’s work is blurring the line between the organic and the most complicated self-organizing forms of the inorganic, making the line separating cells from snowflakes and storms a little less distinct.

Whatever its ultimate fate, however, England’s theory faces major hurdles, not least because it posits a bias towards increasing complexity and, in its most radical form, points towards the inevitability that life will evolve in the direction of increased intelligence, ideas which many evolutionary thinkers vehemently disavow.

Some evolutionary theorists may see efforts such as England’s not as a paradigm shift waiting in the wings, but as an example of a misconception regarding the relationship between increasing complexity and evolution that now appears to have been adopted by actual scientists rather than a merely misguided public; a misconception that, couched in scientific language, will further muddy the minds of the public, leaving them with a conception of evolution that belongs much more to the 19th century than to the 21st. It is a misconception whose most vocal living opponent, after the death of the irreplaceable Stephen Jay Gould, has been the paleontologist, evolutionary biologist, and senior editor of the journal Nature, Henry Gee, who has set out to disabuse us of it in his book The Accidental Species.

Gee’s goal is to remind us of what he holds to be the fundamental truth behind the theory of evolution: evolution has one singular purpose, from which everything else follows in lockstep, and that is reproduction. His objective is to do away, once and for all, with what he feels is a common misconception that evolution is leading towards complexity and progress, and that the highest peak of this complexity and progress is us, human beings.

If improved prospects for reproduction can be bought through the increased complexity of an organism then that is what will happen, but it needn’t be the case. Gee points out that many species, notably some worms and many parasites, have achieved improved reproductive prospects by decreasing their complexity. Therefore the idea that complexity (as in an increase in the specialization and number of parts an organism has) is merely a matter of evolution plus time doesn’t hold up to close scrutiny. Judged through the eyes of evolution, losing features and becoming more simple is not necessarily a vice. All that counts is an organism’s ability to make more copies, or for animals that reproduce through sex, blended copies, of itself.

Evolution in this view isn’t beautiful but coldly functional and messy, a matter of mere reproductive success. Gee reminds us of Darwin’s idea of evolution’s product as a “tangled bank”, a weird menagerie of creatures each having its own particular historical evolutionary trajectory. The anal-retentive Victorian-era philosophers who tried to build upon his ideas couldn’t accept such a mess and:

…missed the essential metaphor of Darwin’s tangled bank, however, and saw natural selection as a sort of motor that would drive transformation from one preordained station on the ladder of life to the next one. (37)

Gee also sets out to show how deeply limited our abilities are when it comes to understanding the past through the fossil record. Very, very few of the species that have ever existed left evidence of their lives in the form of fossils, which are formed only under very special conditions, and where the process of fossilization greatly favors the preservation of some species over others. The past is thus incredibly opaque, making it impossible to impose an overarching narrative upon it, such as increasing complexity, as we move from the past towards the present.

Gee, though an ardent defender of evolution and opponent of creationist pseudoscience, finds the gaps in the fossil record so pronounced that he thinks we can create almost any story we want from it and end up projecting our contemporary biases onto the speechless stones. This is the case even when the remains we are dealing with are of much more recent origin and especially when their subject is the origin of us.

We’ve tended, for instance, to link tool use and intelligence, even in cases, such as Homo habilis, where the records and artifacts point to a different story. We’ve tended not to see other human species, such as the so-called Hobbit man, as ways we might have actually evolved had circumstances not played out in precisely the way they did. We have not, in Gee’s estimation, been working our way towards the inevitable goal of our current intelligence and planetary dominance, but have stumbled into it by accident.

Although Gee is in no sense writing in anticipation of a theory such as England’s, his line of thinking does seem to pose obstacles that the latter’s hypothesis will have to address. If it is indeed the case that, as England has stated it, complex life arises inevitably from the physics of the universe, so that in his estimation:

You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.

Then England will have to address why it took so incredibly long, 4 billion years of the earth’s 4.5-billion-year history, for actual plants to make their debut, not to mention similar spans for other complex eukarya such as animals like ourselves.

Whether something like England’s inevitable complexity or Gee’s not just blind, but drunk and random, evolutionary walk is ultimately the right way to understand evolution has implications far beyond evolutionary theory. Indeed, it might have deep implications for the status and distribution of life in the universe, and even inform the way we understand the current development of new forms of artificial intelligence.

What we have discovered over the last decade is that bodies of water appear to be much more widespread and to exist in environments far beyond those previously considered; hence NASA’s recent announcement that we are likely to find microbial life in the next 10 to 30 years, both in our solar system and beyond. What this means is that England’s heat baths are likely ubiquitous, and if he’s correct, life can likely be found anywhere there is water, meaning nearly everywhere. There may even be complex lifelike forms that did not evolve through what we would consider normal natural selection at all.

If Gee is right, the universe might be ripe for life, but the vast, vast majority of that life will be microbial, and no amount of time will change that fate on most life-inhabited worlds. If England in his minor key is correct, the universe should at least be filled with complex multicellular life forms such as ourselves. Yet it is the possibility that England is right in his major key, that consciousness, civilization, and computation might flow naturally from the need of organisms to resonate with their fluctuating environments, that, biased as we are, we likely find most exciting. Such a view leaves us with the prospect of many, many more forms of intelligence and technological civilizations like ourselves spread throughout the cosmos.

The fact that the universe so far has proven silent and devoid of any signs of technological civilization might give us pause when it comes to endorsing England’s optimism over Gee’s pessimism, unless, that is, there is some sort of limit or wall when it comes to our own perceived technological trajectory that can address the questions that emerge from the ideas of both. To that story, next time…

 

Is the Anthropocene a Disaster or an Opportunity?

Bradley Cantrell, Pod-Mod

Recently the journal Nature published a paper arguing that the Anthropocene, the proposed geological era in which the collective actions of the human species started to trump other natural processes in terms of their impact, began in the year 1610 AD. If that year leaves you, as it did me, scratching your head and wondering what you missed while you dozed off in your 10th grade history class, don’t worry, because 1610 is a year in which nothing much happened at all. In fact, that’s why the authors chose it.

There are a whole host of plausible candidates for the beginning of the Anthropocene that are much, much better known than 1610, starting relatively recently with the explosion of the first atomic bomb in 1945 and stretching backwards in time through the beginning of the industrial revolution (1750s) to the discovery of the Americas and the bridging of the old and new worlds in the form of the Columbian Exchange (1492), and back even further to some hypothetical year in which humans first began farming, or further still to the extinction of mega-fauna such as woolly mammoths, probably at the hands of our very clever, and therefore dangerous, hunter-gatherer forebears.

The reason Simon L. Lewis and Mark A. Maslin (the authors of the Nature paper) chose 1610 is that in that year there can be found a huge, discernible drop in the amount of carbon dioxide in the atmosphere, a rapid decline in greenhouse gases which probably plunged the earth into a cold snap: global warming in reverse. The reason this happened goes to the heart of a debate plaguing the environmental community, a debate that someone like myself, merely listening from the sidelines but concerned about the ultimate result, should have been paying closer attention to.

It seems that while I was focusing on how paleontologists had been undermining the myth of the “peaceful savage”, they had also called into question another widely held idea regarding the past, one deeply ingrained in the American environmental movement itself: that the Native Americans did not attempt the intensive engineering of the environment of the sort that Europeans brought with them from the Old World, and therefore lived in a state of harmony with nature we should pine for or emulate.

As paleontologists and historians have come to understand the past more completely, we’ve learned that it just isn’t so. Native Americans across North America, not just in well-known population centers such as Tenochtitlan in Mexico, the lost civilizations of the Pueblos (Anasazi) in the deserts of the southwest, or Cahokia in the Mississippi Valley, but in the woodlands of the northeast in states such as my beloved Pennsylvania, practiced extensive agriculture and land management in ways that had a profound impact on the environment over the course of centuries.

Their tool in all this was the planful use of fire. As the naturalist and historian Scott Weidensaul tells the story in his excellent account of the conflicts between Native Americans and the Europeans who settled the eastern seaboard, The First Frontier:

Indians had been burning Virginia and much of the rest of North America annually for fifteen thousand years and it was certainly not a desert. But they had clearly altered their world in ways we are only still beginning to appreciate. The grasslands of the Great Plains, the lupine-studded savannas of northwestern Ohio’s “oak openings”, the longleaf pine forests of the southern coastal plain, the pine barrens of the Mid-Atlantic and Long Island- all were ecosystems that depended on fire and were shaped over millennia of use by its frequent and predictable application.  (69)

The reason, I think, most of us have this vision in our heads of North America at the dawn of European settlement as an endless wilderness is not on account of some conspiracy by English and French settlers (though I wouldn’t put it past them), but rather that when the first settlers looked out onto a continent sparsely settled by noble “savages”, what they were seeing was a mere remnant of the large Indian population that had existed there only a little over a century before; the native peoples of North America whom Europeans struggled so hard to subdue and expel were mere refugees from a fallen civilization. As the historian Charles Mann has put it: when settlers gazed out into a seemingly boundless American wilderness they were not seeing a land untouched by the hands of man, a virgin land given to them by God, but a graveyard.

Given the valiant, ingenious, and indeed sometimes brutal resistance Native Americans put up against European (mostly English) encroachment, it seems highly unlikely that the United States and Canada would exist today in anything like their current form had the Indian population not been continually subject to collapse. What so ravaged the Native American population and caused their civilization to go into reverse were the same infectious diseases unwittingly brought by Europeans that were the primary culprit in the destruction of the even larger Indian civilizations in Mexico and South America. It was a form of accidental biological warfare that targeted Native Americans based upon their genomes, their immunity to diseases never having been shaped by contact with domesticated animals like pigs and cows, as had that of the Europeans who were set on stealing their lands.

Well, it wasn’t always accidental. For though it would be some time before we understood how infectious diseases worked, Europeans were well aware of their disproportionate impact on the Indians. In 1763 William Trent, in charge of the besieged Fort Pitt, the seed that would become Pittsburgh, engaged in the first documented case of biological warfare. Weidensaul quotes one of Trent’s letters:

Out of regard to the [Indian emissaries] we gave them two blankets and a handkerchief out of the Small Pox Hospital. I hope it will have the desired effect (381)

It’s from this devastating impact of infectious diseases, which decimated Native American populations and ended their management of the ecosystem through the systematic use of fire, that Lewis and Maslin get their 1610 start date for the beginning of the Anthropocene. With the collapse of Indian fire ecology, eastern forests bloomed to such an extent that vast amounts of carbon otherwise left in the atmosphere were sucked up by the trees, with the effect that the earth rapidly cooled.

Beyond the desire to uncover the truth and to enter into sympathy with those of the past, which lies at the heart of the practice and study of history, this new view of the North American past has implications for the present. For the goal of returning, both the land itself and ourselves as individuals, to the purity of the wilderness, a goal at the heart of the environmentalist project since its inception in the 19th century, now encounters a problem with time. Where in the past does the supposedly pure natural state to which we are supposed to be returning actually exist?

Such an untrammeled state would seem to exist either before Native Americans’ development of fire ecology, or after their decimation by disease but before European settlement west of the Allegheny mountains. This, after all, was the period of America as endless wilderness that has left its mark on our imagination. That is, we should attempt to return as much of North America as is compatible with civilization to the state it was in after the collapse of the Native American population, but before the spread of Europeans into the American wilderness. The problem with such a goal is that this natural splendor may have hidden the fact that such an ecology was ultimately unsustainable in the wild state in which European settlers first encountered it.

As Tim Flannery pointed out in his book The Eternal Frontier: An Ecological History of North America and Its Peoples, the fire ecology adopted by Native Americans may have been filling a vital role, doing the job that should have been done by large herbivores, which North America, unlike other similar continents, lacked, likely owing to the effectiveness of the first wave of human hunters to enter it, the ultimate “invasive species” from Asia. Large herbivores cull the forests and keep them healthy, a role which in their absence can also be filled by fire.

Although projects are currently in the works to try to bring back some of these large foresters, such as the woolly mammoth, I am doubtful large populations will ever be restored in a world filled even with self-driving cars. Living in Pennsylvania I’ve had the misfortune of running into a mere hundred-pound-or-so white-tailed deer (or her running into me). I don’t even want to think about what a mammoth would do to my compact. Besides, woolly mammoths don’t seem to be creatures made for the age we’re headed into, which is one of likely continued, and extreme, warming.

Losing our romanticized picture of nature’s past means we are now also unclear as to its future. Some are using this new understanding of the ecological history of North America to make an argument against traditional environmentalism, arguing in essence that without a clear past to return to, the future of the environment is ours to decide.

Perhaps what this New Environmentalism signals is merely the normalization of North Americans’ relationship with nature, a move away from the idea of nature as a sort of lost Eden and towards something like ideas found elsewhere, which need not be any less reverential or religious. In Japan especially, nature is worshiped, but it is in the form of a natural world that has been deeply sculpted by human beings to serve their aesthetic and spiritual needs. This is the Anthropocene interpreted as opportunity rather than as unmitigated disaster.

Ultimately, I think it is far too soon to embrace without some degree of skepticism the project of the New Environmentalism espoused by organizations such as the Breakthrough Institute, or the chief scientist and firebrand of the Nature Conservancy, Peter Kareiva, or environmental journalists and scholars espousing those views like Andrew Revkin, Emma Marris, or even the incomparable Diane Ackerman. For even if the New Environmentalism contains a degree of truth, it is inherently at risk of being captured by actors whose environmental concern is a mere smokescreen for economic interests, and it threatens, at the very least, to undermine the long held support for setting aside areas free from human development, whose stillness and silence many of us, in our drenched-in-neon and ever-beeping world, hold dear.

It would be wonderful if objective scientists could adjudicate disputes between traditional environmentalists and New Environmentalists expressing non-orthodox views on everything from hydraulic fracking, to genetic engineering, to invasive species, to geo-engineering. The problem is that the very uncertainty at the heart of these types of issues probably makes definitive objectivity impossible. At the end of the day the way one decides comes down to values.

Yet what the New Environmentalists have done that I find most intriguing is to challenge the mythology of Edenic nature, and just as in any other undermining of a ruling mythology this has opened up the potential for both freedom and risk. On the basis of my own values, the way I would use that freedom would not be to hive off humanity from nature, to achieve the “de-ecologization of our material welfare”, as some New Environmentalists want to do, but for the artifact of human civilization itself to be made more natural.

Up until today humanity has gained its ascendancy over nature by assaulting it with greater and more concentrated physical force. We’ve dammed and redrawn the course of rivers, built massive levees against the sea, cleared the land of its natural inhabitants (nonhuman and human) and reshaped it to grow our food.

We’ve reached the point where we’re smarter than this now and understand much better how nature achieves ends close to our own without resorting to excessively destructive and wasteful force. Termites achieve climate control through the geometry of their mounds rather than our energy intensive air conditioning. IBM’s Watson took a small city’s worth of electric power to beat its human opponents on Jeopardy!, opponents whose brains consumed no more power than a light bulb.

Although I am no great fan of the term “cyborg-ecology” I am attracted to the idea as it is being developed by figures such as Bradley Cantrell, who propose that we use our new capabilities and understanding to create a flexible infrastructure that allows, for instance, a river to be a river, rather than the mere “concrete channel” we’ve found superior to a natural river because it is predictable while providing for our needs. The concrete channel is a suffocatingly narrow redefinition of nature, one that has come at great cost to the other species with which we share such habitats, and that ultimately causes large scale problems requiring yet more brute force engineering once nature breaks free from our chains.

I can at least imagine an Anthropocene that provides a much richer and more biodiverse landscape than our current one, a world where we are much more planful about providing for creatures other than just our fellow humans. It would be a world where we perhaps break with environmentalism’s shibboleths and do things such as prudently and judiciously use genetic engineering to make the natural world more robust, and where scientists who propose that we use the tools our knowledge has given us to address plagues, not just our own but those of our fellow creatures, engender not scorn but careful consideration of their proposals.

It would be a much greener world, where the cold kingdom of machines hadn’t replaced the living but was merely a temporary way station towards rebirth after our long and almost terminal assault on the biological world. If our strategies before have been divided between raping and subduing nature in light of our needs, or conversely, placing nature at such a remove from our civilization that she might be deemed safe from our impact, perhaps, having gained knowledge regarding nature’s secrets, we can now live not so much with her as a part of her, and with her as a part of us.

The Sofalarity is Near

Mucha Goddess Maia

Many readers here have no doubt spent at least some time thinking about the Singularity, whether in a spirit of hope or fear, or perhaps more reasonably some admixture of both. For my part, though, I am much less worried about a coming Singularity than I am about a Sofalarity, in which our ability to create realistic illusions of achievement and adventure convinces the majority of humans that reality isn’t really worth all the trouble after all. Let me run through the evidence of an approaching Sofalarity. I hope you’re sitting down… well… actually I hope you’re not.

I would define a Sofalarity as a hypothetical point in human history when the majority of human beings spend most of their time engaged in activities that have little or no connection to actual life in the physical world. It’s not hard to see the outline of this today: on average, Americans already spend an enormous amount of time with their attention focused on worlds either wholly or partly imagined. The numbers aren’t very precise, and differ among age groups and sectors of the population, but they come out to somewhere around five hours watching television per day, three hours online, and another three hours playing video games. That means, collectively at least, we spend almost half of our day in dream worlds, not counting the old fashioned kind such as those found in books or the ones we encounter when we’re actually sleeping.
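The arithmetic behind “almost half”, taking those rough averages at face value:

```latex
5 + 3 + 3 = 11 \ \text{hours}, \qquad \frac{11}{24} \approx 46\% \ \text{of the day}
```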

There’s perhaps no better example of how the virtual is able to hijack our very real biology than pornography. Worldwide, the share of total internet traffic that is categorized as erotica ranges from a low of four percent to as high as thirty percent. When one combines that with recent figures claiming that up to 36 percent of internet traffic is generated not by human beings but by bots, it’s hard not to experience future shock.

Amidst all the complaining that the future hasn’t arrived yet and “where’s my jetpack?”, a 21st century showed up where upwards of 66 percent of internet traffic could be people looking for pornography, bots pretending to be human, or, weirdest of all, bots pretending to be human looking for humans to have sex with. Take that, Alvin Toffler.

Still, all of this remains simply our version of painting on the walls of a prehistoric cave. Any true Sofalarity would likely require more than just television shows and YouTube clips. It would need to have gained the keys to our emotional motivation and senses.

As a species we were trying to open the doors of perception with drugs long before almost anything else. What makes our current situation more likely to be leading toward a Sofalarity is that this quest is now a global business, and that we’ve become increasingly sophisticated when it comes to playing tricks on our neurochemistry.

The problem any society has with individuals screwing with their neurochemistry is twofold: first, to make sure that enough sober people are available for the necessary work of keeping society operating at a functional level, and second, to prevent any ill effects from the mind-altered from spilling over into the society at large.

The contemporary world has seemingly found a brilliant solution to this problem: contain the mind-altered in space and time, and make sure only the sober are running the show. The reason bars or dance clubs work is that only the customers are drunk or stoned, and such places exist in a state of controlled chaos, with the management carefully orchestrating the whole affair, making sure things remain lively enough that customers will return while ensuring that things don’t get so dangerous that patrons stay away for the opposite reason.

The whole affair is contained in time because drunken binges last only through the weekend, with individuals returning to their straight-laced bourgeois jobs on Monday, propped up, perhaps, by a bit of stimulant to promote productivity.

Sometimes this controlled chaos is meant to last for longer stretches than the weekend, yet here again it is contained in space and time. If you want to see controlled chaos perfected with technology thrown into the mix, you can’t do better than Las Vegas, where seemingly endless opportunities for pleasure and losing one’s wits abound, all the while one is being constantly monitored, both in the name of safety and in order that one not develop any existential doubts about the meaning of life under all that neon.

If you ever find yourself in Vegas losing your dopamine fix after one too many blows from Lady Luck behind a one-armed bandit, and suddenly find some friendly casino staff next to you offering free drinks or tickets to a local show, bless not the goddess of fortune but the surveillance cameras that have informed the house they are about to lose an unlucky, and therefore lucrative, customer. Las Vegas is the surveillance capital of the United States, and it’s not just inside the casinos.

Ubiquitous monitoring seems to be the price of Las Vegas’ adoption of vice as a form of economy. Or as Megan McArdle put it in a recent article:

 Is the friendly police state the price of the freedom to drink and gamble with abandon? Whatever your position on vice industries, they are heavily associated with crime, even where they are legal. Drinking makes people both violent and vulnerable; gambling presents an almost irresistible temptation to cheating and theft. Las Vegas has Disneyfied libertinism. But to do so, it employs armies of security guards and acres of surveillance cameras that are always and everywhere recording your every move.

Even the youngest of our children now have a version of this: we call it Disney World. The home of Mickey Mouse has used current surveillance technology to its fullest, allowing it to give visitors to the “magic kingdom” both the experience of being free and one of reality seemingly bending itself into the shape of innocent fantasy and expectations. It’s a technology they work very hard to keep invisible. Disney’s MagicBand, which not only allows visitors to navigate seamlessly through its theme parks, but allows your dinner to be brought to you before you’ve even ordered it, or the guy or gal in the Mickey suit to greet your children by name before they have introduced themselves, was described recently in a glowing article in Wired that quoted the company’s COO Tom Staggs this way:

 Staggs couches Disney’s goals for the MagicBand system in an old saw from Arthur C. Clarke. “Any sufficiently advanced technology is indistinguishable from magic,” he says. “That’s how we think of it. If we can get out of the way, our guests can create more memories.”

Nothing against the “magic of Disney” for children, but I do shudder a little thinking that so many parents don’t think twice about creating memories in a “world” that is not so much artificial as completely staged. And it’s not just for kids. An entire industry has been built around our ridiculousness here, especially in places like China, where people pay to have their photos taken in front of fake pyramids or a fake Eiffel Tower, or to vacation in places pretending to be someplace else.

Yet neither Las Vegas nor a Disney theme park resembles what a Sofalarity would look like in full flower. After all, the showgirls at Bally’s or the poor soul under the mouse suit in Orlando are real people. What a true Sofalarity would entail is nobody being there at all, the very people behind the pretense having ceased to be there.

We’re probably some way off from a point where the majority of human labor is superfluous, but if things keep going at the rate they are, we’re not talking centuries. The rejoinder to claims that human labor will be replaced to the extent that most of us no longer have anything to do is often that we’ll become the creators working behind the scenes, the same way Apple’s American workers do the high-end work of designing its products while the work of actually putting them together is done by numb fingers over in China. In the utopian version of our automated future we’ll all be designers on the equivalent of Infinite Loop Street while the robots provide the fingers.

Yet, over the long run, I am not sure this distinction between humans as mental creators and machines as physical producers will hold. Our (quite dumb) computers already create visually stunning and unanticipated works of art, compose music that is indistinguishable from that created in human minds, and write things none of us realize are the product of clever programs. Who’s to say that decades hence, or over a longer stretch, they won’t be able to create richer fantasy worlds of every type that blow our minds and grip our attention far more than any crafted by our flesh and blood brethren?

And still, even should every human endeavor be taken over by machines, including our politics, we would still be short of a true Sofalarity, because we would be left with the things that make us most human: the relationships we have with our loved ones. No, to see the Sofalarity in full force we’d need to have become little more than piles of undulating mush, like the creatures in the original conception of the movie WALL-E, from which I ripped the term.

The only way we’d get to that point is if our created fantasies could reach deep under our skin and skulls and give us worlds and emotional experiences that atrophied, to the point of irrecoverability, what we now consider most essential to being a person. The signs are clear that we’re headed there. In Japan, for instance, there are perhaps 700,000 hikikomori, modern-day hermits: adults who have withdrawn from three-dimensional social relationships and live out their lives primarily online.

Don’t get me wrong, there is a lot of very cool stuff either here or shortly coming down the pike. There’s Oculus Rift, should it ever find developers and conquer the nausea problem, and there are wonders such as Magic Leap, a platform that lets you see 3D images by beaming them directly into your eyes. Add to these things like David Eagleman’s crazy haptic vest, or brain readers that sit in your ear, not to mention things a little further off from public debut that seem to have jumped right off the pages of Nexus, like brain-to-brain communication, or magnetic nanoparticles that allow brain stimulation without wires, and it’s plain to see we’re on the cusp of a revolution in creating and experiencing purely imagined worlds. All of which makes it even more difficult to bust a poor hikikomori out of his 4’ x 4’ apartment.

It seems we might be on the verge of losing the distinction between the Enchantment of Fantasy and Magic that J.R.R. Tolkien drew in his brilliant lecture On Fairy-Stories:

Enchantment produces a Secondary World into which both designer and spectator can enter, to the satisfaction of their senses while they are inside; but in its purity it is artistic in desire and purpose. Magic produces, or pretends to produce, an alteration in the Primary World. It does not matter by whom it is said to be practiced, fay or mortal, it remains distinct from the other two; it is not an art but a technique; its desire is power in this world, domination of things and wills.

Fantasy is a natural human activity. It certainly does not destroy or even insult Reason; and it does not either blunt the appetite for, nor obscure the perception of, scientific verity. On the contrary. The keener and the clearer is the reason, the better fantasy will it make. If men were ever in a state in which they did not want to know or could not perceive truth (facts or evidence), then Fantasy would languish until they were cured. If they ever get into that state (it would not seem at all impossible), Fantasy will perish, and become Morbid Delusion.

For creative Fantasy is founded upon the hard recognition that things are so in the world as it appears under the sun; on a recognition of fact, but not a slavery to it. So upon logic was founded the nonsense that displays itself in the tales and rhymes of Lewis Carroll. If men really could not distinguish between frogs and men, fairy-stories about frog-kings would not have arisen.

For Tolkien, the primary point of fantasy was to enrich our engagement with the real world, to allow us to see the ordinary anew. Though it might also be a place to hide and escape should the real world, whether for the individual or society as a whole, become hellish, as Tolkien, who had fought in World War I, lived through the Great Depression, and stood on the eve of a second world war when he gave his lecture, well knew.

To those who believe we might already be living in a simulation, perhaps all this is merely like having traveled around the world only to end up exactly where you started, as in Borges’ story The Circular Ruins, or in the idea, found in many of the world’s great religions, that we are already living in a state of maya, or illusion, though we now make the case using much more scientific language. There’s a very serious argument out there, such as that of Nick Bostrom, that we are already living in a simulation. One comes to this conclusion merely by looking at the virtual worlds we’ve already created and extrapolating the trend outward for hundreds or thousands of years. In such a world the majority of sentient creatures would be “living” entities in virtual environments, and these would end up comprising the overwhelming number of sentient creatures ever to exist. Statistical reasoning would seem to lead to the conclusion that you are more likely than not, right now, a merely virtual entity. There are even, supposedly, possible scientific experiments to test for evidence of this. Thankfully, in my view at least, the Higgs particle might prevent me from being a Boltzmann brain.
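Bostrom’s statistical step can be put compactly; this is my paraphrase of his indifference reasoning, not a quotation:

```latex
P(\text{you are simulated}) \approx \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}}}
```

where N_sim is the number of simulated minds that will ever exist and N_real the number of non-simulated ones. If technologically mature civilizations run many ancestor simulations, N_sim swamps N_real and the fraction approaches one, which is the whole force of the argument.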

For my part, I have trouble believing I am living in a simulation. Running “ancestor simulations” seems like something a being with human intelligence might do, but it would probably bore the hell out of any superintelligence capable of actually creating such things; the simulations would provide no essential information for its survival, and, given the historical and present suffering of our world, would have to be run by a very amoral, indeed immoral, being.

That was part of the fear Descartes was tapping into when he proposed that the world, and himself in it, might be nothing more than the dream of an “evil demon”. Yet what he was really getting at, just as was the case with other great skeptics of history such as Hume, wasn’t so much the plausibility of the Matrix as the limits surrounding what we can ever be said to truly know.

Some might welcome the prospect of a coming Sofalarity for the same reasons they embrace the Singularity, and indeed, when it comes to many features, such as exponential technological advancement or automation, the two are hardly distinguishable. Yet the biggest hope that sofalarians and singularitarians would probably share is that technological trends point towards the possibility of uploading minds into computers.

Sadly, or thankfully, uploading is some ways off. The EU seems to have finally acknowledged the complaints of neuroscientists that Henry Markram’s Human Brain Project, which aimed to simulate an entire human brain, was scientifically premature, premature enough to be laughable were it not for the billion euros invested in it that might have been spent on much more pressing areas like mental illness or Alzheimer’s research. The project has not been halted, but a recent scathing official report is certainly a big blow.

Nick Bostrom has pondered that if we are not now living in a simulation, then there may be something that prevents civilizations such as ours from reaching the technological maturity needed to create such simulations. As I see it, perhaps the movement towards a Sofalarity ultimately contains the seeds of its own destruction. Just as I am convinced that hell, which exists in the circumscribed borders of torture chambers, death camps, or the human heart, can never be the basis for an entire society, let alone a world, it is quite possible that a heaven we could only reach by escaping the world as it exists is like that as well, and that any society built around the fantasy of permanent escape would not last long in its confrontation with reality. Fermi paradox, anyone?

Seeing the Future through the Deep Past

woman-gazing-into-a-crystal-ball

We study history not to know the future but to widen our horizons, to understand that our present system is neither natural nor inevitable, and that we consequently have many more possibilities before us than we imagine. (241)

The above is a quote from Yuval Harari’s book Sapiens: A Brief History of Humankind, which I reviewed last time. So that’s his view of history, but what of other fields specifically designed to give us a handle on the future, the kinds of “future studies” futurists claim to be experts in, fields like scenario planning, or even some versions of science fiction?

Harari probably wouldn’t put much credence in the ability of these fields to predict the future either, the reason being that there are very real epistemological limits to what we can know outside of a limited number of domains. The reason we are unable to make the same sort of accurate predictions for areas such as history, politics or economics as we do for physics is that all of the former are examples of Level II chaotic systems. A Level I chaotic system is one where small differences in conditions can result in huge differences in outcomes. Weather is the best example we have of Level I chaos, which is why nearly everyone has at some point in their life wanted to bludgeon the weatherman with his umbrella. Level II chaotic systems make the job of accurate prediction even harder, for, as Harari points out:

Level two chaos is chaos that reacts to predictions about it, and therefore can never be predicted accurately. (240)
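The distinction is easy to see in a toy model. Below is a minimal sketch, my own rather than Harari’s: the logistic map shows Level I sensitivity to initial conditions, while a deliberately simple self-defeating forecast shows the Level II feedback loop.

```python
# A toy contrast (my own sketch, not Harari's) between the two kinds of chaos.

def logistic(x, r=4.0):
    """One step of the logistic map, a textbook Level I chaotic system."""
    return r * x * (1 - x)

# Level I: two almost identical starting conditions diverge completely.
a, b = 0.300000, 0.300001
for _ in range(40):
    a, b = logistic(a), logistic(b)
print(f"after 40 steps: a={a:.4f}, b={b:.4f}")   # wildly different trajectories

# Level II: the system reacts to predictions about it. Here a published
# forecast of how many people will buy an asset changes how many actually
# do, so the act of forecasting undoes the forecast.
def market_response(forecast: float) -> float:
    return 1.0 - forecast   # contrarian crowd: high forecasts trigger selling

forecast = 0.9
actual = market_response(forecast)
print(f"forecast={forecast:.1f}, actual={actual:.1f}")   # prediction self-defeats
```

In the Level I case a better thermometer buys you a few more days of forecast; in the Level II case no instrument helps, because publishing the forecast is itself an input to the system.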

Level II chaos is probably the source of many of our philosophical paradoxes regarding what we can really know, along with free will and its evil deterministic twin, not to mention the kinds of paradoxes we encounter when contemplating time travel. These are issues we only seem able to resolve, in a way that reconciles the underlying determinism of nature with our subjective experience of freedom, by assuming there are multiple or even infinite universes that together manifest every possibility, a solution that seems to violate every fiber of our common sense.

Be all that as it may, the one major disappointment of Harari’s Sapiens was that his versions of possible futures for humanity, though expressed with full acknowledgement that they were just well reasoned guesses, really weren’t all that different from what we all know already. Maybe we’ll re-engineer our genes even to the point of obtaining biological immortality. Maybe we’ll merge with our machines and become cyborgs. Maybe our AI children will replace us.

It’s not that these are unlikely futures, just commonly held ones, and the thinkers who have arrived at them have as often as not done so without looking deep into the human past, merely extrapolating from current technological trends. Harari might instead have used his excavation of the deep past to go in a different direction: not so much to make specific predictions about how the human story will likely end, but to ascertain broad recurring patterns that might narrow down the list of things we should look for as we move towards the future, and, above all, things whose recurrence we would do best to guard ourselves against.

At least one of these recurring trends identified in Sapiens that we should be on the lookout for is the Sisyphean character of our actions. We gained numbers and civilization with the Agricultural Revolution, but lost in the process both our health and our freedom, and this Sisyphean aspect did not end with industrialization and the scientific revolution. The industrial revolution that ended our universal scarcity threatens to boil us all. Our automation of labor has left us crippled by our sedentary lifestyles, our conquest of famine has resulted in widespread obesity, and our vastly expanded longevity has resulted in an epidemic of dementia and Alzheimer’s disease.

It’s a weird “god” indeed that visits a new misfortune on us the minute an old one is conquered. This is not to argue that our solutions actually make our circumstances worse; rather, it is that our solutions become as much a part of the world that affects us as anything in nature itself, and naturally give rise to aspects we find problematic and that were never intended in the solution to our original problem.

At least some of these problems we will be able to anticipate in advance, and because of this they call for neither the precautionary principle of environmentalists, nor the proactionary principle of Max More, but something I’ll call the Design Foresight Principle (DFP). If we used something like the DFP we would design to avoid the ethical and other problems that emerge from a technology or social policy before they arrive, or at least before the technology is widely adopted. Yet in many cases even a DFP wouldn’t help us, because the problem arising from a technology or policy isn’t obvious until afterward- a classic case of which was DDT’s disastrous effect on bird populations.

This situation, where we create something or act in some way to solve one problem only to cause another, isn’t likely a bug but a deep feature of the universe we inhabit. It is not going to go away completely regardless of the extent of our powers. Areas where I’d look for this Sisyphean character of human life to rear its head over the next century include everything from biotechnology, to geoengineering, and even quite laudable attempts to connect the world’s remaining billions in one overarching network.

Another near universal of human history, at least since the Agricultural Revolution as Sapiens tells it, has been the ubiquity of oppression. Humans, it seems, will almost instantly grasp the potential of any new technology or form of social organization to start oppressing other humans and animals. Although on the surface far-fetched, it’s quite easy, given the continued existence of slavery, coupled with advances in both neuroscience and genetic engineering, to come up with nightmare scenarios where, for instance, alternative small-brained humans are biologically engineered and bred to serve as organ donors for humans of the old fashioned sort. One need only combine the claims by biologists such as George Church of the technical feasibility of bringing back other hominids such as the Neanderthals, the historical record of animal abuse, and current research using human fetal tissue to grow human organs in pigs to see a nightmare right out of Dr. Moreau.

Though again far-fetched at the moment, other forms of oppression may be far less viscerally troubling but almost as bad. As both neural monitoring and the ability to control subjects through optogenetics increase, we might see the rise of “part-time” slavery where one’s freedom is surrendered on the job, or in certain countries perhaps the revival of the institution itself.

The solution here, I think, is to start long before such possibilities manifest themselves, especially in countries with less robust protections for worker, human, and animal rights, and push hard for research and social policies that address problems such as the dearth of organs for human beings in need of them in ways that have the least potentially negative impact on human rights.

Part of the answer also needs to come in the form of regulation to protect workers from the creep of now largely innocent efforts, such as the quantified self movement and its technologies, into areas that really would put individual autonomy at risk. The development of near human-level AI would seem to be the ultimate solution for human-on-human oppression, though at some point in the growing intelligence of our machines we might again face the issue of one group of sentient beings oppressing another group for its exclusive benefit.

One seemingly broad trend that I think Harari misidentifies is what he sees as the inexorable move towards a global empire. He states it this way:

 Over millennia, small simple cultures gradually coalesce into bigger and more complex civilizations, so that the world contains fewer and fewer mega-cultures each of which is bigger and more complex. (166)

Since around 200 BC, most humans have lived in empires. It seems likely that in the future, too, most humans will live in one. But this time the empire will truly be global. The imperial vision of domination over the entire world could be imminent. (207)

Yet I am far from certain that a movement towards a world empire or state is what we see when we look deep into history. Since their beginnings, empires have waxed and waned. No empire with their scale and depth of integration now stands where the Persian, Roman, or Inca empires once stood. Rather than an unstoppable march towards larger and larger political units, we have the rise and collapse of units covering huge regions.

For the past 500 years the modern age of empires, which gave us our globalized world, has been a game of ever shifting musical chairs: the Spanish and Portuguese, followed by the British and French, followed by failed bids by the Germans and the Japanese, followed by the Russians and Americans.

For a quarter century now America has stood alone, but does anyone seriously believe this is much more than an interim until the likes of China and India join it? Indeed, the story of early 21st century geopolitics could easily be told as a tale of the limits of American empire.

Harari seems to think a global ruling class is about to emerge on the basis of stints at US universities and hobnobbing at Davos. This is to forget the power of artificial boundaries such as borders, and of imagined communities. As scholars of nationalism such as Benedict Anderson have long pointed out, a lingua franca combined with a shared education can actually strengthen national identity, so long as personal opportunities for advancement remain based upon citizenship.

Latin American creoles, denied administrative positions even after they had proven themselves superior to their European-Spanish classmates, were barred from becoming members of the elite in Spain itself and thus returned home with a heightened sense of national identity. Just as long as American universities remain the premier educational institutions on earth, which will not be forever, the elite children of other countries will come there for education. They will then have to choose whether to sever their ties with their home country and pursue membership in the American elite, or return home to join the elite of their home country with only tenuous ties to the values of the global power. They will not have the choice to choose both.

Indeed, the only way Harari’s global elite might emerge as a ruling class would be for states to fail almost everywhere. That wouldn’t be the victory of the trend towards “domination over the entire world” but a kind of global neo-feudal order. That is, the two trends we should be looking to history to illuminate when it comes to the future of political order are the much older trend of the rise and fall of empires, and the much younger, 500 year trend of the rise and fall of great powers based on states.

One trend that might bolster the chances of either neo-feudalism or the continued dominance of rival states, depending upon how it plays out, is the emergence of new forms of money. The management of a monetary system, the enforcement of contracts, and the protection of property have always been among the state’s chief functions. As Harari showed us, money, along with writing and mathematics, was invented as a bureaucratic tool of accounting and management. Yet since the invention of money there has always been a tension between the need for money as a way to facilitate exchange, for which it has to be empty of any information except its value, and the need to record and enforce obligations in that medium of exchange: loans and contracts.

This tension between forgetfulness and remembering when it comes to money is one way to see the tug of war between inflation and deflation in its management. States that inflate their currency are essentially voting for money as a means of facilitating exchange over the need for money to preserve its value so that past loans and contracts can be met in full.

Digital currencies, of which Bitcoin is only one example, and in which both states and traditional banks are (despite Bitcoin’s fall) showing increasing interest, treat money as data and so allow it to fully combine these two functions as a medium of exchange that can remember. This could either allow states to crush non-state actors, such as drug cartels, that live off the ability of money to forget its origins, or conversely, strengthen those actors (mostly from the realm of business) who claim there is no need for a state because digital currency can make contracts self-enforcing. Imagine that rather than money you simply have a digital account from which you can only borrow to make a purchase once it connects itself to a payment system that will automatically withdraw increments until some set date in the future. And imagine that such digital wallets are the only form of money that is actually functional.
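To make the thought experiment concrete, here is a minimal sketch of such a self-enforcing loan. Everything in it, the ProgrammableWallet, the RepaymentPlan, is a hypothetical construct of mine; no real currency, blockchain, or banking API is being described.

```python
# A minimal sketch (hypothetical, not any real system) of "money that
# remembers": a wallet where borrowed credit only becomes usable once a
# repayment schedule is attached, which the wallet then enforces itself.

from dataclasses import dataclass, field

@dataclass
class RepaymentPlan:
    creditor: str
    installment: int     # amount withdrawn each period
    periods_left: int    # remaining scheduled withdrawals

@dataclass
class ProgrammableWallet:
    owner: str
    balance: int = 0
    plans: list = field(default_factory=list)

    def borrow(self, creditor: str, amount: int, periods: int):
        """Credit arrives only with a self-enforcing plan attached."""
        self.plans.append(RepaymentPlan(creditor, amount // periods, periods))
        self.balance += amount

    def tick(self):
        """Each period the wallet withdraws installments automatically;
        the borrower cannot default short of abandoning the wallet."""
        for plan in self.plans:
            if plan.periods_left > 0 and self.balance >= plan.installment:
                self.balance -= plan.installment
                plan.periods_left -= 1

wallet = ProgrammableWallet("alice")
wallet.borrow("bank", amount=1200, periods=12)
for month in range(12):
    wallet.tick()
print(wallet.balance)   # 0: the loan repaid itself, no courts or collectors
```

The point of the sketch is that enforcement has moved from the state’s courts into the money itself, which is exactly why such designs could cut either way politically.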

There are other trends from deep history we should look to as well to get a sense of what the future may have in store. For instance, the growth of human power has been based not on individual intelligence, but on collective organization. New forms of organization using technologies like brain-nets might become available at some future date, and depending on the scalability of these technologies might prove truly revolutionary. No less important will be the kinds of collective myths that predominate in the future, which, religious or secular, will likely continue to determine how we live our lives as they have in the past.

Yet perhaps the most important trend from the deep past for the future will be the one Harari thinks might end the desire to make history at all. Where will we go with our mastery over the biochemical keys to our happiness, which we formerly sought in everything from drugs to art? It’s a question that deserves a post all to itself.

 

A Global History of Post-humans

Cave painting hand prints

One thing that can certainly not be said of either the historian Yuval Harari or his new book Sapiens: A Brief History of Humankind is that they lack ambition. In Sapiens, Harari sets out to tell the story of humanity from our emergence on the plains of Africa up to the era in which we are living right now, a period he thinks is the beginning of the end of our particular breed of primate. His book ends with some speculations on our post-human destiny, whether we achieve biological immortality or manage to biologically and technologically engineer ourselves into an entirely different species. If you want to talk about establishing a historical context for the issues confronted by transhumanism, you can’t get better than that.

In Sapiens, Harari organizes the story of humanity by casting it within the framework of three revolutions: the cognitive, the agricultural and the scientific. The Cognitive Revolution is what supposedly happened somewhere between 70,000 and 30,000 years ago, when a suddenly very creative and cooperative homo sapiens came roaring out of Africa, using their unprecedented social coordination and technological flexibility to invade every ecological niche on the planet.

Armed with an extremely cooperative form of culture, and the ability to make tools (especially clothing) to fit environments they had not evolved for, homo sapiens was able to move into more northern latitudes than their much more cold-adapted Neanderthal cousins, or to cast out on the seas to settle far-off islands and continents such as Australia, Polynesia, and perhaps even the Americas.

The speed with which homo sapiens invaded new territory was devastating for almost all other animals. Harari points out how the majority of large land animals outside of Africa (where animals had enough time to evolve wariness of this strange hairless ape) disappeared not long after human beings arrived. And the casualties included other human species as well.

One of the best things about Harari is his ability to overthrow previous conceptions- as he does here with the romantic notion held by some environmentalists that “primitive” cultures lived in harmony with nature. Long before even the adoption of agriculture, let alone the industrial revolution, the arrival of homo sapiens proved devastating for every other species, including other hominids.

Yet Harari also defies intellectual stereotypes. He might not think the era in which the only human beings on earth were  “noble savages” was a particularly good one for other species, but he does see it as having been a particularly good one for homo sapiens, at least compared to what came afterward, and up until quite recently.

Humans in the era before the Agricultural Revolution lived a healthier lifestyle than any since. They had a varied diet, worked on average only six hours a day, and were far more active than any of us in modern societies chained to our cubicles and staring at computer screens.

Harari also throws doubt on the argument made most recently by Steven Pinker that the era before states was one of constant tribal warfare and violence, suggesting that it’s impossible to get an overall impression of levels of violence from what ends up being a narrow range of human skeletal remains. The most likely scenario, he thinks, is that some human societies before agriculture were violent and some were not, and that even which societies were violent varied over time, rising and falling with circumstances.

From the beginning of the Cognitive Revolution up until the Agricultural Revolution, starting around 10,000 years ago, things were good for homo sapiens. But as Harari sees it, things really went downhill for us as individuals, as distinct from our status as a species, with the rise of farming.

Harari is adamant that while the Agricultural Revolution may have had the effect of increasing our numbers, and gave us all the wonders of civilization and the beauty of high culture, its price, on the bodies and minds of countless individuals, both human and animal, was enormous. Peasants were smaller, less healthy, and died younger than their hunter-gatherer ancestors. The high culture of the elites of ancient empires was bought at the price of the systematic oppression of the vast majority of human beings who lived in those societies. And, in the first instance I can think of where this tale is told within the context of a global history of humanity, Harari tells the story not just of our oppression of each other, but of our domesticated animals as well.

He shows us how inhumane animal husbandry was long before our era of factory farming, which is even worse; it was these earlier, more “natural”, “organic” farmers who began practices such as penning animals in cages, separating mothers from their young, castrating males, and cutting off the noses or cutting out the eyes of animals such as pigs so they could better serve their “divinely allotted” function of feeding human mouths and stomachs.

Yet this raises the question: if the Agricultural Revolution was so bad for the vast majority of human beings and animals, with the exception of a slim class at the top of the pyramid, why did it not only last, but spread, until only a tiny minority of homo sapiens in remote corners continued to be hunter gatherers while the vast majority, up until quite recently, were farmers?

Harari doesn’t know. It was probably a very gradual process, but once human societies had crossed a certain threshold there was no going back- our numbers were simply too large to support a reversion to hunting and gathering. For one of the ironies of the Agricultural Revolution is that while it made human beings unhealthy, it also drove up birthrates. This probably happened through rational choice: a hunter-gatherer family would likely space their children, whereas a peasant family needed all the hands it could produce, something that merely drove the need for more children, and was only checked by the kinds of famines Malthus had pegged as the defining feature of agricultural societies and that we only escaped recently via the industrial revolution.

Perhaps, as Harari suggests, “We did not domesticate wheat. It domesticated us.” (81) At the only level evolution cares about- the propagation of our genes- wheat was a benefit to humanity, but at the cost of much human suffering and backbreaking labor in which we rid wheat of its rivals and spread the plant all over the globe. The wheat got the better deal, no matter how much we love our toast.

It was on the basis of wheat, and a handful of other staple crops (rice, maize, potatoes), that states were first formed. Harari emphasizes the state’s role as record keeper and enforcer of rules. The state saw its beginning as a scorekeeper and referee for the new, complex, and crowded societies that grew up around farming.

All the greatest works of world literature, not to mention everything else we’ve ever read, can be traced back to this role of keeping accounts, of creating long-lasting records, which led the nascent states that grew up around agriculture to create writing. Shakespeare’s genius can trace its lineage back to the ancient equivalent of an IRS office. Along with writing, the first states also created numbers and mathematics; bureaucrats have been bean counters ever since.

The quirk of human nature that for Harari made both the Cognitive and Agricultural Revolutions possible, and led to much else besides, was our ability to imagine things that do not exist. By this he means almost everything we find ourselves surrounded by: not just religion, but the state and its laws, and everything in between has been akin to a fantasy game. Indeed, Harari left me feeling that the whole of both premodern and modern societies was at root little but a game of pretend played by grown-ups, with adulthood perhaps nothing more than agreeing to play along with the same game everyone else is engaged in. He was especially compelling and thought-provoking when it came to that ultimate modern fantasy and talisman that all of us, from Richard Dawkins to Abu Bakr al-Baghdadi, believe in; namely, money.

For Harari the strange thing is that: “Money is the only trust system created by humans that can bridge almost any cultural gap, and does not discriminate on the basis of religion, gender, race, age or sexual orientation.” In this respect it is “the apogee of human tolerance” (186). Why then have so many of its critics down through the ages denounced money as the wellspring of human evil? He explains the contradiction this way:

For although money builds universal trust between strangers, this trust is invested not in humans, communities or sacred values, but in money itself and the impersonal systems that back it. We do not trust the stranger or the next-door neighbor- we trust the coins they hold. If they run out of coins, we run out of trust. (188)

We ourselves don’t live in the age of money so much as the age of Credit (or capital), and it is this Credit which Harari sees as one leg of the three-legged stool which he thinks defines our own period of history. For him we live in the age of the third revolution in human history, the age of the Scientific Revolution that has followed his other two. It is an age built out of an alliance of the forces of Capital, Science, and Empire.

What has made ours the age of Credit, and different from the material cultures that came before, where money certainly played a prominent role, is that we have adopted lending as the route to economic expansion. And what separates us from past eras that engaged in lending as well is that ours is based on a confidence that the future will be not just different but better than the past, a confidence that has, at least so far, panned out over the last two centuries largely through continuous advances in science and technology.

The feature that really separated the scientific revolution from earlier systems of knowledge, in Harari’s view, grew out of the recognition in the 17th century of just how little we actually knew:

The Scientific Revolution has not been a revolution of knowledge. It has above all been a revolution of ignorance. The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions. (251)

Nothing perhaps better captures the early moderns’ recognition of this ignorance than their discovery of the New World, which set us down our own unique historical path towards Empire. Harari sees both Credit and Science feeding and being fed by the intermediary institution of Empire, and indeed argues that the history of capitalism, along with that of science and technology, cannot be understood without reference to the way the state and imperialism have shaped both.

The imperialism of the state has been necessary to enable the penetration of capitalism and secure its gains, and the power of advanced nations to impose these relationships has been largely the result of more developed science and technology which the state itself has funded. Science was also sometimes used not merely to understand the geography and culture of subject peoples in order to exploit them, but, in the form of 19th and early 20th century scientific racism, as a justification for that domination itself. And the quest for globe-spanning Empire that began with the Age of Exploration in the 15th century is still ongoing.

Where does Harari think all this is headed?

 Over millennia, small simple cultures gradually coalesce into bigger and more complex civilizations, so that the world contains fewer and fewer mega-cultures each of which is bigger and more complex. (166)

Since around 200 BC, most humans have lived in empires. It seems likely that in the future, too, most humans will live in one. But this time the empire will truly be global. The imperial vision of domination over the entire world could be imminent. (207)

Yet Harari questions whether the scientific revolution and the new age of economic prosperity, not to mention the hunt for empire, haven’t actually brought about a similar amount of misery, if not quite suffering, as the Agricultural Revolution that preceded them.

After all, the true quest of modern science is really not power, but immortality. In Harari’s view we are on the verge of fulfilling the goal of the “Gilgamesh Project”.

Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself. (267)

The quest after wealth, too, seems to be reaching a point of diminishing returns. If the objective of material abundance was human happiness, we might ask why so many of us are miserable. The problem, Harari thinks, might come down to biologically determined hedonic set-points that leave a modern office worker surrounded by food and comforts ultimately little happier than his peasant ancestor who toiled for a meager supper from sunrise to sunset. Yet perhaps the solution to this problem is at our fingertips as well:

 There is only one historical development that has real significance. Today when we realize that the keys to happiness are in the hands of our biochemical system, we can stop wasting our time on politics, social reforms, putsches, and ideologies and focus instead on the only thing that truly makes us happy: manipulating our biochemistry.  (389)

Still, even should the Gilgamesh Project succeed, or we prove capable of mastering our biochemistry, Harari sees a way the Sisyphean nature of things may continue to have the last laugh. He writes:

Suppose that science comes up with cures for all diseases, effective anti-ageing therapies and regenerative treatments that keep people indefinitely young. In all likelihood, the immediate result will be an epidemic of anger and anxiety.

Those unable to afford the new treatments- the vast majority of people- will be beside themselves with rage. Throughout history, the poor and oppressed comforted themselves with the thought that at least death is even-handed- that the rich and powerful will also die. The poor will not be comfortable with the thought that they have to die, while the rich will remain young and beautiful.

But the tiny minority able to afford the new treatments will not be euphoric either. They will have much to be anxious about. Although the new therapies could extend life and youth, they will not revive corpses. How dreadful to think that I and my loved ones can live forever, but only if we don’t get hit by a truck or blown to smithereens by a terrorist! Potentially a-mortal people are likely to grow averse to taking even the slightest risk, and the agony of losing a spouse, child or close friend will be unbearable. (384-385)

Along with this, Harari reminds us that it might not be biology that is most important for our happiness, but our sense of meaning. Given that he thinks all of our sources of meaning are at bottom socially constructed illusions, he concludes that perhaps the only philosophically defensible position might be some form of Buddhism- to stop all of our chasing after desire in the first place.

The real question is whether the future will show that all our grasping has ended in us reaching our object, or has led us further down a path of illusion and pain that we need to outgrow to achieve a different kind of transcendence.

 

Truth and Prediction in the Dataclysm

The Deluge by Francis Danby. 1837-1839

Last time I looked at the state of online dating. Among the figures mentioned was Christian Rudder, one of the founders of the dating site OkCupid and the author of a book on big data called Dataclysm: Who We Are When We Think No One’s Looking, which somehow manages to be both laugh-out-loud funny and deeply disturbing at the same time.

Rudder is famous, or infamous depending on your view of the matter, for having written a piece about his site with the provocative title We experiment on human beings! There he wrote:

We noticed recently that people didn’t like it when Facebook “experimented” with their news feed. Even the FTC is getting involved. But guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.

That statement might set the blood of some boiling, but my own negative reaction to it is somewhat tempered by the fact that Rudder’s willingness to run experiments on his site’s users originates, it seems, not in any conscious effort to be more successful at manipulating them, but as a way to quantify our ignorance. Or, as he puts it in the piece linked to above:

I’m the first to admit it: we might be popular, we might create a lot of great relationships, we might blah blah blah. But OkCupid doesn’t really know what it’s doing. Neither does any other website. It’s not like people have been building these things for very long, or you can go look up a blueprint or something. Most ideas are bad. Even good ideas could be better. Experiments are how you sort all this out.

Rudder eventually turned his experiments on the data of OkCupid’s users into his book Dataclysm, which displays the same kind of brutal honesty and acknowledgement of the limits of our knowledge. What he is trying to do is make sense of the deluge of data now inundating us. The only way we have found to do this is to create sophisticated algorithms that allow us to discern patterns in the flood. The problem with using algorithms to try to organize human interactions (which have themselves now become points of data) is that their users are often reduced to whatever version of what being a human being is has been embedded in the algorithm by its programmers. Rudder is well aware of and completely upfront about these limitations, and refuses to make any special claims about algorithmic wisdom compared to the normal human sort. As he puts it in Dataclysm:

That said, all websites, and indeed all data scientists, objectify. Algorithms don’t work well with things that aren’t numbers, so when you want a computer to understand an idea, you have to convert as much of it as you can into digits. The challenge facing sites and apps is thus to chop and jam the continuum of human experience into little buckets 1, 2, 3, without anyone noticing: to divide some vast, ineffable process- for Facebook, friendship, for Reddit, community, for dating sites, love- into pieces a server can handle. (13)
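A toy example of the kind of bucketing Rudder is describing, my own illustration rather than anything from OkCupid’s actual code, might look like this:

```python
# A toy illustration (mine, not OkCupid's) of "chopping the continuum
# into little buckets": a continuous, ineffable quality becomes the
# small integers a server can store, sort, and compare.

def bucket(value: float, buckets: int = 5) -> int:
    """Map a continuous score in [0, 1] to one of `buckets` integer bins."""
    return min(int(value * buckets) + 1, buckets)

# "How much did you like this person?" becomes a 1-5 star rating...
print(bucket(0.03))   # 1
print(bucket(0.62))   # 4
print(bucket(0.99))   # 5
# ...and everything between 0.60 and 0.79 is now the same number.
```

The loss happens in that single cast to an integer: whatever nuance lived between 0.60 and 0.79 is gone before the database ever sees it.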

At the same time, Rudder appears to see the data collected on sites such as OkCupid as a sort of mirror, reflecting back to us, in ways we have never had available before, the real truth about ourselves laid bare of the social conventions and politeness that tend to obscure the way we truly feel. And what Rudder finds in this data is not the reflection of the inner beauty of humanity one might hope for, but something more like the mirror out of The Picture of Dorian Gray.

As an example, take what Rudder calls “Wooderson’s Law”, after the character from Dazed and Confused who said in the film “That’s what I love about these high school girls, I get older while they stay the same age”. What Rudder has found is that heterosexual male attraction to females peaks when those women are in their early 20’s and thereafter precipitously falls. On OkCupid at least, women in their 30’s and 40’s are effectively invisible when competing against women in their 20’s for male sexual attraction. Fortunately for heterosexual men, women are more realistic in their expectations and tend to report the strongest attraction to men roughly their own age, until sometime in men’s 40’s, when male attractiveness also falls off a cliff… gulp.

Another finding from Rudder’s work is not just that looks rule, but just how absolutely they rule. In his aforementioned piece, Rudder lays out how the vast majority of users essentially equate personality with looks. A particularly stunning woman can find herself with a 99% personality rating even if she has not one word in her profile.

These are perhaps somewhat banal and even obvious discoveries about human nature that Rudder has been able to mine from OkCupid’s data, and to my mind at least they are less disturbing than the deep-seated racial bias he finds there as well. Again, at least among OkCupid’s users, dating preferences are heavily skewed against black men and women. Not just whites, it seems, but all other racial groups- Asians, Hispanics- would apparently prefer to date someone from a race other than African. Disheartening for the 21st century.

Rudder also looks at darker manifestations of our collective self than those found in OkCupid’s data. Try using Google search as one would play the game Taboo. The search suggestions that pop up in the Google search bar, after all, are compiled on the basis of users’ most popular searches and thus provide a kind of gauge on what 1.17 billion human beings are thinking. Try these, some of which Rudder plays with himself:

“why do women?”

“why do men?”

“why do white people?”

“why do black people?”

“why do Asians?”

“why do Muslims?”

The exercise gives a whole new meaning to Nietzsche’s observation that “When you stare into the abyss, the abyss stares back”.

Rudder also looks at the ability of social media to engender mobs. Take this case from Twitter in 2014. On New Year’s Eve of that year a young woman tweeted:

“This beautiful earth is now 2014 years old, amazing.”

Science obviously wasn’t her strength in school, but what should have led at most to collective giggles, or perhaps a polite correction regarding terrestrial chronology, ballooned into a storm of tweets like this:

“Kill yourself”

And:

“Kill yourself you stupid motherfucker”. (139)

As a recent study has pointed out, the emotion second most likely to go viral is rage; we can count ourselves very lucky that the emotion most likely to go viral is awe.

Then there’s the question of the structure of the whole thing. Like Jaron Lanier, Rudder is struck by the degree to which the seemingly democratized architecture of the Internet consistently manifests the opposite, revealing itself as following Zipf’s Law, which Rudder concisely reduces to:

rank x number = constant (160)

Both the economy and the society of the Internet age are dominated by “superstars”: companies such as Google and Facebook that so far outstrip their rivals in search or social media that they might be called monopolies, along with celebrities, musical artists, and authors. Zipf’s Law also seems to apply to dating sites, where a few profiles dominate the class of those viewed by potential partners. In the environment of a networked society, where invisibility is the common fate of almost all of us and success often hinges on increasing our own visibility, we are forced to turn ourselves towards “personal branding” and obsession over “Klout scores”. It’s not a new problem, but I wonder how much all this effort at garnering attention is stealing time from the actual work that makes that attention worthwhile and long-lasting.
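A quick sketch shows what Rudder’s formula means in practice; the view counts below are invented for illustration, not OkCupid data:

```python
# Zipf's Law as Rudder states it: rank x number ≈ constant.
# Synthetic "profile view" counts for a site's 10 most-viewed profiles.

views = [100000, 50200, 33100, 25300, 19800,
         16400, 14500, 12400, 11300, 9900]

for rank, n in enumerate(views, start=1):
    print(f"rank {rank:2d}: views={n:6d}, rank*views={rank * n}")

# rank*views hovers near 100,000 at every rank: a handful of "superstar"
# profiles soak up the attention while the long tail stays invisible.
```

The same hyperbolic curve fits word frequencies, city sizes, and web traffic, which is what makes its appearance on supposedly egalitarian platforms so telling.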

Rudder is uncomfortable with all this algorithmization while at the same time accepting its inevitability. He writes of the project:

Reduction is inescapable. Algorithms are crude. Computers are machines. Data science is trying to make sense of an analog world. It’s a by-product of the basic physical nature of the micro-chip: a chip is just a sequence of tiny gates.

From that microscopic reality an absolutism propagates up through the whole enterprise, until at the highest level you have the definitions, data types and classes essential to programming languages like C and JavaScript.  (217-218)

The thing is, for all his humility about the effectiveness of big data so far, and his admittedly limited ability to draw solid conclusions from the data of OkCupid, Rudder seems to place undue trust in the ability of large corporations and the security state to succeed at the same project. Much deeper data mining and superior analytics, he thinks, separate his efforts from those of the really big boys. Rudder writes:

Analytics has in many ways surpassed the information itself as the real lever to pry. Cookies in your web browser and guys hacking for your credit card numbers get most of the press and are certainly the most acutely annoying of the data collectors. But they’ve taken hold of a small fraction of your life and for that they’ve had to put in all kinds of work. (227)

He compares them to Mike Myers’ Dr. Evil holding the world hostage “for one million dollars”

… while the billions fly to the real masterminds, like Acxiom. These corporate data marketers, with reach into bank and credit card records, retail histories, and government filings like tax accounts, know stuff about human behavior that no academic researcher searching for patterns on some website ever could. Meanwhile the resources and expertise the national security apparatus brings to bear make enterprise-level data mining look like Minesweeper. (227)

Yet do we really know this faith in big data isn’t an illusion? What discernible effects, clearly traceable to the juggernauts of big data such as Acxiom, can we actually point to in the overall economy or even in consumer behavior? For us to believe in the power of data, shouldn’t someone have to show us the data that it works, and not just the promise that it will transform the economy once it has achieved maximum penetration?

On that same score, what degree of faith should we put in the powers of big data when it comes to security? As far as I am aware, no evidence has been produced that mass surveillance has prevented attacks- it didn’t stop the Charlie Hebdo killers. Just as importantly, it seemingly hasn’t prevented our public officials from being caught flat-footed and flabbergasted in the face of international events such as the revolution in Egypt or the war in Ukraine. And these latter big events would seem to be precisely the kinds of predictions big data should find relatively easy: monitoring broad public sentiment as expressed through social media and across telecommunications networks, and marrying that with inside knowledge of the machinations of the major political players at the storm center of events.

On this point of not yet mastering the art of anticipating the future despite the mountains of data being collected, Anne Neuberger, Special Assistant to the NSA Director, gave a fascinating talk over at the Long Now Foundation in August of last year. During a sometimes intense q&a she had this exchange with one of the moderators, the Stanford professor Paul Saffo:

 Saffo: With big data as a friend likes to say “perhaps the data haystack that the intelligence community has created has grown too big to ever find the needle in.”

Neuberger : I think one of the reasons we talked about our desire to work with big data peers on analytics is because we certainly feel that we can glean far more value from the data that we have and potentially collect less data if we have a deeper understanding of how to better bring that together to develop more insights.

It’s a strange admission from a spokesperson for the nation’s premier cyber-intelligence agency that for their surveillance model to work they have to learn from the analytics of private sector big data companies whose models themselves are far from having proven their effectiveness.

Perhaps, then, Rudder should have extended his skepticism beyond the world of dating websites. For me, I’ll only know big data in the security sphere works when our politicians, Noah-like, seem unusually well prepared for a major crisis that the rest of us data-poor chumps never saw coming.