The Flash Crash of Reality

“The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.”

H.P. Lovecraft, The Call of Cthulhu

“All stable processes we shall predict. All unstable processes we shall control.”

John von Neumann

For at least as long as there has been a form of written language to record such a thing, human beings have lusted after divination. The classical Greeks had their trippy Oracle at Delphi, while the Romans scanned entrails for hidden patterns, or more beautifully, sought out the shape of the future in the murmurations of birds. All ancient cultures, it seems, looked for the signs of fate in the movement of the heavens. The ancient Chinese script may have even originated in a praxis of prophecy, a search for meaning in the branching patterns of “oracle bones” and tortoise shells, signaling that perhaps written language itself originated not with accountants but with prophets seeking to overcome the temporal confines of the second law, in whose measure we are forever condemned.

The promise of computation was that this power of divination was at our fingertips at last. Computers would allow us to outrun time, and thus in seeing the future we’d finally be able to change it or deftly avoid its blows- the goal of all seers in the first place.

Indeed, the binary language underlying computation sprang from the fecund imagination of Gottfried Leibniz, who got the idea after he encountered the most famous form of Chinese divination, the I-Ching. The desire to create computing machines emerged with the Newtonian worldview and instantiated its premise; namely, that the world could be fully described in terms of equations whose outcome was preordained. What computers promised was the ability to calculate these equations, offering us a power born of asymmetric information- a kind of leverage via time travel.

Perhaps we should have known that time would not be so easily subdued. Outlining exactly how our recent efforts to know and therefore control the future have failed is the point of James Bridle’s wonderful book New Dark Age: Technology and the End of the Future.

With the kind of clarity demanded of an effective manifesto, Bridle neatly breaks his book up into ten “C’s”: Chasm, Computation, Climate, Calculation, Complexity, Cognition, Complicity, Conspiracy, Concurrency, and Cloud.

Chasm defines what Bridle believes to be our problem. Our language is no longer up to the task of navigating the world wrought by the complex systems on which we now depend. This failure of language, which amounts to a failure of thought, Bridle traces to the origins of computation itself.

Computation

To vastly oversimplify his argument, the problem with computation is that the detailed models it provides too often tempt us into confusing the map with the territory. Sometimes this leads us to mistrust our own judgement and defer to the “intelligence” of machines- a situation that in the most tragic of circumstances has resulted in what those in the National Park Service call “death by GPS”. In other cases, our confusion of the model with reality results in the surrender of power to the minority of individuals capable of creating such models, and to the corporations which own and run them.

Computation was invented under the aforementioned premise, born with the success of calculus, that everything, including history itself, could be contained in an equation. It was also seen as a solution to the growing complexity of society. Echoing Stanislaw Lem, Vannevar Bush, one of the founders of modern computing, foresaw something like the internet in the 1940s with his “memex”: the mechanization of knowledge as the only possible lifeboat in the deluge of information modernity had brought.

Climate

One of the first projected purposes of computers was not just to predict, but to actually control the weather. And while we’ve certainly gotten better at the former, the best we’ve gotten from the latter is Kurt Vonnegut’s humorous takedown of the premise in his novel Cat’s Cradle, which was actually based on his chemist brother’s work on weather control for General Electric. It is somewhat ironic, then, that the very fossil fuel based civilization upon which our computational infrastructure depends is not only making the weather less “controllable” and predictable, but is undermining the climatic stability of the Holocene, which facilitated the rise of a species capable of imagining and building something as sophisticated as computers in the first place.

Our new dark age is not just a product of our misplaced faith in computation, but also of the growing unpredictability of the world itself- a reality whose existential importance is most apparent in the case of climate change. Our rapidly morphing climate threatens the very communications infrastructure that allows us to see and respond to its challenges. Essential servers and power sources will likely be drowned under the rising seas, cooling-dependent processors taxed by increasing temperatures. Most disturbingly, rising CO2 levels are likely to make human beings dumber. As Bridle writes:

“At 1,000 ppm, human cognitive ability drops by 21 per cent. At higher atmospheric concentrations, CO2 stops us from thinking clearly. Outdoor CO2 already reaches 500 ppm”

An unstable climate undermines the bedrock of predictable natural cycles from which civilization itself emerged, that is, those of agriculture. In a way, our very success at controlling nature by making it predictable is destabilizing the regularity of nature that made its predictability possible in the first place.

It is here that computation reveals its double-edged nature, for while computation is the essential tool we need to see and respond to the “hyperobject” that is climate change, it is also one of the sources of this growing natural instability itself. Much of the energy of modern computation is directly traceable to fossil fuels, a fact the demon coal lobby has eagerly pointed out.

Calculation

What the explosion of computation has allowed, of course, is an exponential expansion of the power and range of calculation. One can quibble over whether Moore’s Law, the driving force behind the fact that everything is now software, has finally bent towards its asymptote and proved Ray Kurzweil and his ilk wrong; the fact remains that nothing else in our era has followed the semiconductor’s exponential curve. Indeed, as Bridle shows, precisely the opposite.

For all their astounding benefits, machine learning and big data have not, as Chris Anderson predicted, resulted in the “End of Theory”. Science still needs theory, experiment, and dare I say, humans to make progress, and what is clear is that in many areas outside ICT itself progress has not merely slowed but stalled.

Over the past sixty years, rather than experiencing Moore’s Law-type increases, the pharmaceutical industry has suffered the opposite: the so-called Eroom’s Law, whereby “The number of new drugs approved per billion US dollars spent on research and development has halved every nine years since 1950.”
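Taken at face value, Eroom’s Law is simple exponential decay: productivity halves every nine years. A minimal sketch of the arithmetic (the function name and the normalized starting rate are illustrative assumptions, not Scannell’s actual data):

```python
def approvals_per_billion(year, base_year=1950, base_rate=1.0):
    """Drug approvals per $1bn of R&D spending, normalized so that
    the rate equals base_rate at base_year, halving every nine years."""
    return base_rate * 0.5 ** ((year - base_year) / 9)

# Two halvings (eighteen years) leave a quarter of the 1950 productivity.
print(approvals_per_billion(1968))  # → 0.25
```

Run forward to the present and the same formula implies productivity well below a hundredth of the 1950 level, which is why the curve is often described as Moore’s Law in reverse.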

Part of this stems from the fact that the low-hanging fruit of discovery, not just in pharmaceuticals but elsewhere, has already been picked, along with the fact that the problems we’re dealing with are becoming exponentially harder to solve. Yet some portion of the slowdown in research progress is surely a consequence of technology itself, or at least of the ways in which computers are being relied upon and deployed. Ease of sharing, when combined with status hunting, inevitably leads to widespread gaming. Scientists are little different from the rest of us, seeking ways to top Google’s PageRank, YouTube recommendations, Instagram and Twitter feeds, or the sorting algorithms of Amazon, though for scientists the summit of recognition consists of prestigious journals where publication can make or break a career.

Data being easy to come by, while experimentation and theory remain difficult, has meant that “proof” is often conflated with what turn out to be spurious p-values, or “p-hacking”. The age of big data has also been the age of science’s “replication crisis”, where seemingly ever more findings disappear upon scrutiny.

What all this calculation has resulted in is an almost suffocating level of complexity, which is the source of much of our inegalitarian turn. Connectivity and transparency were supposed to level the playing field; instead, in areas such as financial markets, where the sheer amount of information to be processed has ended up barring new entrants, calculation has provided the ultimate asymmetric advantage to those with the highest capacity to identify and respond within nanoseconds to changing market conditions.

Asymmetries of information lie behind both our largest corporate successes and the rising inequality they have brought in their wake. Companies such as Walmart and Amazon are in essence logistics operations built on their unique ability to see and respond first, or most vigorously, to consumer needs. As Bridle points out, this rule of logistics has resulted in a bizarre scrambling of politics, the irony that:

“The complaint of the Right against communism – that we’d all have to buy our goods from a single state supplier – has been supplanted by the necessity of buying everything from Amazon.”

Yet unlike the corporate giants of old such as General Motors, our 21st century behemoths don’t actually employ all that many people, and in their lust for automation seem determined to employ even fewer. The workplace, and the larger society, these companies are building seem even more designed around the logic of machines than factories in the heyday of heavy industry. The “chaotic storage” deployed in Amazon’s warehouses is like something dreamed up by Kafka, but that’s because it emerges out of the alien “mind” of an algorithm, a real-world analog to Google’s Deep Dream.

The world in this way becomes less and less sensible, except to the tiny number of human engineers who, for the moment, retain control over its systems. This is a problem that is only going to get worse with the spread of the Internet of Things- an extension of computation driven not by necessity, but by a capitalism that in its current form seems hell-bent on turning all products into services so as to procure a permanent revenue stream. It’s not a good idea. As Bridle puts it:

“We are inserting opaque and poorly understood computation at the very bottom of Maslow’s hierarchy of needs – respiration, food, sleep, and homeostasis – at the precise point, that is, where we are most vulnerable.”

What we should know by now, if anything, is that the more connected things are, the more hackable they become, and the more susceptible to rapid and often unexplainable crashes. Turning reality into a type of computer simulation comes with the danger that the world at large might experience the kind of “flash crash” currently limited to the stock market. Bridle wonders if we’ve already experienced just such a flash crash of reality:

“Or perhaps the flash crash in reality looks exactly like everything we are experiencing right now: rising economic inequality, the breakdown of the nation-state and the militarisation of borders, totalising global surveillance and the curtailment of individual freedoms, the triumph of transnational corporations and neurocognitive capitalism, the rise of far-right groups and nativist ideologies, and the utter degradation of the natural environment. None of these are the direct result of novel technologies, but all of them are the product of a general inability to perceive the wider, networked effects of individual and corporate actions accelerated by opaque, technologically augmented complexity.”

Cognition

It’s perhaps somewhat startling that even as we place ourselves in greater and greater dependence on artificial intelligence we’re still not really certain how or even if machines can think. Of course, we’re far from understanding how human beings exhibit intelligence, but we’ve never been confronted with this issue of inscrutability when it comes to our machines. Indeed, almost the whole point of machines is to replace the “herding cats” element of the organic world with the deterministic reliability of classical physics. Machines are supposed to be precise, predictable, legible, and above all, under the complete control of the human beings who use them.

The question of legibility today hasn’t gone far beyond the traditional debate between the two schools of AI that have rivaled each other since the field’s birth: those who believe intelligence is merely the product of connections between different parts of the brain, and those who think intelligence has more to do with the mind’s capacity to manipulate symbols. In our era of deep learning the Connectionists hold sway, but it’s not clear that we are getting any closer to machines actually understanding anything. This lack of comprehension has given us the new comedy genre of silly or disturbing mistakes made by computers, but it also presents real-world dangers as we place more and more of our decision making upon machines that have no idea what any of the data they process actually means, even as these programs discover dangerous hacks, such as deception, that are as old as life itself.

Complicity

Of course, no techno-social system can exist unless it serves the interests of at least some group in a position of power. Bridle draws an analogy between society in ancient Egypt and our own. There, the power of the priests was premised on their ability to predict the rise and fall of the Nile. To the public this predictive power was shrouded in the religious caste’s language of communing with the gods, while all the while the priests were secretly using the much more prosaic technology of nilometers hidden underground.

Who are the priests of the current moment? Bridle makes a good case that it’s the “three letter agencies”, the NSA, MI5 and their ilk, that are the priests of our age. It’s in the interest of these agencies, born in the militarized atmosphere of the Cold War and the War on Terror, that the logic of radical transparency continues to unfold- where the goal is to see all and to know all.

Who knows how vulnerable these agencies have made our communications architecture in trying to see through it? Who can say, Bridle wonders, if the strongest encryption tools available haven’t already been made useless by some genius mathematician working for the security state? And here is the cruel irony of it all: the agencies whose quest is to see into everything are completely opaque to the publics they supposedly serve. There really is a “deep state”, though given our bizarro-land media landscape our grappling with it quickly gives way to conspiracy theories and lunatic cults like QAnon.

Conspiracy

The hunt for conspiracy stems from the understandable need of the human mind to simplify. It is the search for clear agency where all we can see are blurred lines. Ironically, believers in conspiracy hold more expansive ideas of power and freedom than those who explain the world in terms of “social forces” or other necessary abstractions. For the conspiracist the world is indeed controllable; it’s just that those doing the controlling happen to be terrifying. None of this makes conspiracy anything but an unhinged way of dealing with reality, just a likely one whenever a large number of individuals feel the world is spinning out of control.

The internet ends up being a double boon for conspiracist politics, because it both fragments the shared notion of reality that existed in the age of print and mass media and allows individuals who fall under some particular conspiracy’s spell to find one another and validate their own worldview. Yet it’s not just a matter of fragmented media and the rise of filter bubbles that plague us, but a kind of shattering of our sense of reality itself.

Concurrency

It is certainly a terrible thing that our current communications and media landscape has fractured into digital tribes with the gap of language and assumptions between us seemingly unbridgeable, and emotion-driven political frictions resulting in what some have called “a cold civil war.” It’s perhaps even more terrifying that this same landscape has spontaneously given way to a kind of disturbed collective unconscious that is amplified, and sometimes created, by AI into what amounts to the lucid dreams of a madman that millions of people, many of them children, experience at once.

YouTube isn’t so much a television channel as it is a portal to the twilight zone, where one can move from videos of strangers compulsively wrapping and unwrapping products to cartoons of Peppa Pig murdering her parents. Like its sister “tubes” in the porn industry, YouTube has seemingly found a way to jack straight into the human lizard brain. As is the case with slot machines, the system has been designed with addiction in mind, only the trick here is to hook into whatever tangle of weirdness or depravity exists in the individual human soul- and pull.

The even crazier thing about these sites is that the majority of viewers, and perhaps soon creators, are not humans but bots. As Bridle writes:

“It’s not just trolls, or just automation; it’s not just human actors playing out an algorithmic logic, or algorithms mindlessly responding to recommendation engines. It’s a vast and almost completely hidden matrix of interactions between desires and rewards, technologies and audiences, tropes and masks.”

Cloud

Bridle thinks one thing is certain, we will never again return to the feeling of being completely in control, and the very illusion that we can be, if we only had the right technical tweak, or the right political leader, is perhaps the greatest danger of our new dark age.

In a sense we’re stuck with complexity, and it’s this complex human/machine artifice, which emerged without anyone having deliberately built it, that is the source of all the ills he has detailed.

The historian George Dyson recently composed a very similar diagnosis. In his essay Childhood’s End, Dyson argued that we are leaving the age of the digital and returning to the era of the analog. He didn’t mean that we’d shortly be cancelling our subscriptions to Spotify and rediscovering the beauty of LPs (though plenty of us are doing that), but something much deeper. Rather than being a model of the world’s knowledge, in some sense Google now was the world’s knowledge. Rather than representing the world’s social graph, Facebook now was the world’s social graph.

The problem with analog systems when compared to digital ones is that they are hard to precisely control, and thus are full of surprises, some of which are far from good. Our quest to assert control over nature and society hasn’t worked out as planned. According to Dyson:

“Nature’s answer to those who sought to control nature through programmable machines is to allow us to build machines whose nature is beyond programmable control.”

Bridle’s answer to our situation is to urge us to think- precisely the kind of meditation on the present he has provided with his book. It’s not as wanting a solution as one might suppose, and for me it had clear echoes of the perspective put forward by Roy Scranton in his book Learning to Die in the Anthropocene, where he wrote:

“We must practice suspending stress-semantic chains of social exhaustion through critical thought, contemplation, philosophical debate, and posing impertinent questions…

We must inculcate ruminative frequencies in the human animal by teaching slowness, attention to detail, argumentative rigor, careful reading, and meditative reflection.”

I’m down with that. Yet the problem I had with Scranton is ultimately the same one I had with Bridle. Where is the politics? Where is human agency? For it is one thing to say that we live in a complex world roamed by “hyperobjects” we at best partly control, but it is quite another to discount our capacity for continuous intervention, especially our ability to “act in concert”, that is, politically, to steer the world towards desirable ends.

Perhaps what the arrival of a new dark age means is that we’re regaining a sense of humility. Starting about two centuries ago, human beings got it into their heads that they had finally gotten nature under their thumb. What we are learning in the 21st century is not only that this view was incorrect, but that the human-made world itself seems to have a mind of its own. What this means is that we’re likely barred forever from the promised land, condemned to a state of adaptation and response to nature’s cruelty and human injustice, which will only end with our personal death or the extinction of the species, and yet still contains all the joy and wonder of what it means to be a human being cast into a living world.

 


How Science and Technology Slammed into a Wall and What We Should Do About It

Captain Future, 1944

It might be said that some contemporary futurists tend to use technological innovation and scientific discovery in the same way God was said to use the whirlwind against defiant Job, or Donald Rumsfeld treated the poor citizens of Iraq a decade ago: it’s all about the “shock and awe”. One glance at something like KurzweilAI.Net leaves a reader with the impression that brand new discoveries are flying off the shelf by the nanosecond and that all of our deepest sci-fi dreams are about to come true. No similar effort is made, at least that I know of, to show all the scientific and technological paths that have led into cul-de-sacs, or to chart all the projects packed up and put away like our childhood chemistry sets to gather dust in the attic of the human might-have-been. In exact converse to the world of political news, in technological news it’s the jetpacks that do fly we read about, not the ones that never get off the ground.

Aside from the technologies themselves, future-oriented discussion of the potential of technologies or scientific discoveries tends to come in two stripes when it comes to political and ethical concerns: we’re either on the verge of paradise or about to make Frankenstein seem like an amiable dinner guest.

There are a number of problems with this approach to science and technology; I can name more, but here are three: 1) it distorts the reality of innovation and discovery; 2) it isn’t necessarily true; 3) the political and ethical questions, which are the most essential ones, are too often presented in a simplistic all-good or all-bad manner, when any adult knows that most of life is like ice cream: it tastes great and will make you fat.

Let’s start with distortion: a futurist forum like the aforementioned KurzweilAI.Net, by presenting every hint of innovation or discovery side-by-side, does not allow the reader to discriminate between the quality and the importance of such discoveries. Most tentative technological breakouts and discoveries are just that- tentative- and ultimately go nowhere. The first question a reader should ask is whether or not some technique, process, or prediction has been replicated. The second question is whether or not the technology or discovery being presented is actually all that important. Anyone who’s ever seen an infomercial knows people invent things every day that are just minor tweaks on what we already have. Ask anyone trapped like Houdini in a university lab- the majority of scientific discovery is not about revolutionary paradigm shifts à la Thomas Kuhn but merely about filling in the details. Most scientists aren’t Einsteins in waiting. They just edit his paperwork.

Then we have the issue of reality: anyone familiar with the literature or websites of contemporary futurists is left with the impression that we live in the most innovative and scientifically productive era in history. Yet things may not be as rosy as they appear when we only read the headlines. At least since 2009, there has been a steady chorus of well-respected technologists, scientists and academics telling us that innovation is not happening fast enough- that our rates of technological advancement are not merely failing to exceed those of the past, they are not even matching them. A common retort to this claim might be to club whoever said it over the head with Moore’s Law; surely, with computer speeds increasing exponentially, it must be pulling everything else along. But, to pull a quote from ol’ Gershwin, “it ain’t necessarily so”.

As Paul Allen, co-founder of Microsoft and the financial muscle behind the Allen Institute for Brain Science, pointed out in his 2011 article, The Singularity Isn’t Near, the problem of building ever smaller and faster computer chips is actually relatively simple, but many of the other problems we face, such as understanding how the human brain works and applying the lessons of that model- to the human brain or in the creation of AI- suffer from what Allen calls the “complexity brake”. The problems have become so complex that they are slowing down the pace of innovation itself. Perhaps it’s not so much a brake as a wall we’ve slammed into.

A good real-world example of the complexity brake in action is what is happening with innovation in the drug industry, where new discoveries have stalled. In the words of Matthew Herper at Forbes:

“But the drug industry has been very much a counter-example to Kurzweil’s proposed law of accelerating returns. The technologies used in the production of drugs, like DNA sequencing and various types of chemistry approaches, do tend to advance at a Moore’s Law-like clip. However, as Bernstein analyst Jack Scannell pointed out in Nature Reviews Drug Discovery, drug discovery itself has followed the opposite track, with costs increasing exponentially. This is like Moore’s law backwards, or, as Scannell put it, Eroom’s Law.”

It is only when we acknowledge that there is a barrier in front of our hopes for innovation and discovery that we can seek to find its source and try to remove it. If you don’t see a wall you run the risk of running into it and certainly won’t be able to do the smart things: swerve, scale, leap or prepare to bust through.

At least part of the problem stems from the fact that though we are collecting a simply enormous amount of scientific data, we are having trouble bringing this data together, either to solve problems or to aid in our actual understanding of what it is we are studying. Trying to solve this aspect of the innovation problem is a goal of the brilliant young technologist Jeff Hammerbacher, founder of Cloudera. Hammerbacher has embarked on a project with Mt. Sinai Hospital to apply the tools used for organizing and analyzing the overwhelming amounts of data gathered by companies like Google and Facebook to medical information, in the hopes of spurring new understanding and treatments of diseases. The problem Hammerbacher is trying to solve, as he acknowledged in a recent interview with Charlie Rose, is precisely the one identified by Herper in the quote above: innovation in treating diseases like mental illness is simply not moving fast enough.

Hammerbacher is our Spiderman. Having helped create the analytical tools that underlie Facebook, he began to wonder if it was worth it, quipping: “The best minds of my generation are thinking about how to make people click ads.” The real problem wasn’t the lack of commercial technology; it was the barriers to important scientific discoveries that would actually save people’s lives. Hammerbacher’s conscientious objection to what technological innovation was being applied for, versus what it wasn’t, is, I think, a perfect segway (excuse the pun) to my third point: the political and ethical dimension, or lack thereof, in much futurist writing today.

In my view, futurists too seldom acknowledge the political and ethical dimensions in which technology will be embedded. Technologies hoped for in futurist communities, such as brain-computer interfaces, radical life extension, cognitive enhancements or AI, are treated in the spirit of consumer goods: if they exist, we will find a way to pay for them. There is, perhaps, the assumption that such technologies will follow the decreasing cost curves found in consumer electronics: cell phones were once toys for the rich, and now the poorest people on earth have them.

Yet it isn’t clear to me that the technologies hoped for will actually follow decreasing cost curves; they may instead resemble health care costs rather than the plethora of cheap goods we’ve scored thanks to Moore’s Law. It’s also not clear to me that, should such technologies initially be affordable only to the rich, this gap will be at all acceptable to the mass of the middle class and the poor unless it is closed very, very fast. After all, some futurists are suggesting that not just life but some corporeal form of immortality will be at stake. There isn’t much reason to riot if your wealthy neighbor toots around in his jetpack while you’re stuck driving a Pinto. But the equation would surely change if what was at stake was a rich guy living to be a thousand while you’re left broke, jetpackless, driving a Pinto and kicking the bucket at 75.

The political question of equity will thus be important, as will much deeper ethical questions as to what we should do and how we should do it. The slower pace of innovation and discovery, if it holds, might ironically, for a time at least, be a good thing for society (though not for individuals who were banking on an earlier date for the arrival of technicolored miracles), for it will give us time to sort these political and ethical questions through.

There are three solutions I can think of that would improve the way science and technology is consumed and presented by futurists, help us get through the current barriers to invention and discovery, and build our capacity to deal with whatever is on the other side. The first problem, that of distortion, might be dealt with by better sorting of scientific news stories, so that the reader has some idea both of where the finding being presented lies along the path of scientific discovery or technological innovation and of how important a discovery is in the overall framework of a field. This would prevent things such as a preliminary finding regarding the creation of an artificial hippocampus in a rat brain being placed next to the discovery of the Higgs boson, at least without some color coding or other signification that these discoveries are both widely separated along the path of discovery and of grossly different import.

As to the barriers to innovation and discovery itself: more attempts such as Hammerbacher’s need to be tried. Walls need to be better identified, and perhaps whole projects bringing together government and venture capital resources used to scale over or even bust through these blocked paths. As Hammerbacher’s case seems to illustrate, a lot of technology is fluff; it’s about click-rates, cool gadgets, and keeping up with the Joneses. Yet technology is also vitally important as the road to some of our highest aspirations. Without technological innovation we cannot alleviate human suffering, extend the time we have here, or spread the ability to reach self-defined ends to every member of the human family.

Some technological breakthroughs would actually deserve the appellation. Quantum computing, if viable, and if it lives up to the hype, would be like Joshua’s trumpets against the walls of Jericho in terms of the barriers to innovation we face. This is because, theoretically at least, it would answer the biggest problem of the era, the same one identified by Hammerbacher: we are generating an enormous amount of real data but are having a great deal of trouble organizing this information into actionable units we actually understand. In effect, we are creating an exponentially increasing database that requires exponentially increasing effort to put the pieces of the puzzle together- running to a standstill. Quantum computing, again theoretically at least, would help solve this problem by making such databases searchable without the need to organize them beforehand.

Things in terms of innovation are not, of course, all gloom and doom. One fast-moving field that has recently come to public attention is that of molecular and synthetic biology, perhaps the only area where knowledge and capacity is not merely equalling but exceeding Moore’s Law.

To conclude, the very fact that innovation might be slower than we hope- though we should make every effort to get it moving- should not be taken as an unmitigated disaster but as an opportunity to figure out what exactly it is we want to do when many of the hoped-for wonders of science and technology actually arrive. At the end of his recent TED Talk on reviving extinct species, a possibility that itself grows out of the biological revolution of which synthetic biology is a part, Stewart Brand gives us an idea of what this might look like. When asked if it was ethical for scientists to “play God” in doing such a thing, he responded that he and his fellow pioneers were trying to answer the question of whether we could revive extinct species, not whether we should. The ability to successfully revive extinct species, if it worked, would take some time to master, and would be a multi-generational project whose end Brand, given his age, would not see. This would give us plenty of time to decide if de-extinction was a good idea, a decision he certainly hoped we would make. The only way we can justly do this is to set up democratic forums to discuss and debate the question.

The ex-hippie Brand has been around a long time, and he is great evidence to me that old age plus experience can still result in what we once called wisdom. He has been present long enough to see many of our technological dreams of space colonies and flying cars and artificial intelligence fail to come true despite our childlike enthusiasm. He seems blissfully unswept up in all the contemporary hoopla, still less in his own importance in the grand scheme of things, and has devoted his remaining years to generation-long projects, such as the Clock of the Long Now and the revival of extinct species, that will hopefully survive into the far future after he is gone.

Brand has also been around long enough to see all the Frankensteins that have not broken loose: the GMOs and China Syndromes and all those oh-so-frightening “test tube babies”. His attitude towards science and technology seems to be that it is neither savior nor Shiva; it’s just the cool stuff we can do when we try. Above all, he knows what the role of scientists and technologists is, and what is the role of the rest of us. The former show us what tricks we can play, but it is up to all of us as to how, or if, we should play them.