A Box of a Trillion Souls


“The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality.”                                                                          

Jaron Lanier

 

Stephen Wolfram may or may not have a justifiable reputation for intellectual egotism, but I like him anyway. I am pretty sure this is because whenever I listen to the man speak I walk away not so much with answers as with a whole new way to frame questions I had never seen before. Sometimes I'm simply left mesmerized, or perhaps bewildered, by an image he has managed to draw.

A while back, during a talk/demo at the SXSW festival, he managed to do this when he brought up the idea of "a box of a trillion souls". He didn't elaborate much, but left it there, after which I chewed on the metaphor for a few days and then returned to real life, which can be mesmerizing and bewildering enough.

A couple of days ago I finally came across an explanation of the idea in a speech by Wolfram over at John Brockman's Edge.org. There, Wolfram also opined on the near future of computation and the place of humanity in the universe. I'll cover those thoughts first before I get to his box full of souls.

One of the things I like about Wolfram is that, uncommonly for a technologist, he tends to approach explanations historically. In his speech he lays out a sort of history of information that begins with information being conveyed genetically with the emergence of life, moves to the interplay between individual and environment with the development of more complex life, and flowers in spoken language with the appearance of humans.

Spoken language eventually gave rise to the written word, though it took almost all of human history for writing to become nearly as common as speaking. For most of that time reading and writing were monopolized by elites. A good deal of mathematics, as well, has moved from being utilized by an intellectual minority to being part of the furniture of the everyday world, though more advanced mathematics continues to be understandable by specialists alone.

The next stage in Wolfram's history of information, the one we are living in, is the age of code. What distinguishes code from language is that it is "immediately executable", by which I understand him to mean that code is not just some set of instructions but, when run, is itself the thing those instructions describe.

Much like reading, writing and basic mathematics before the invention of printing and universal education, code is today largely understood by specialists only. Yet rather than enduring for millennia, as the clerisy's monopoly on writing did, Wolfram sees the age of non-universal code as ending almost as soon as it began.

Wolfram believes that specialized computer languages will soon give way to "natural language programming". A fully developed form of natural language programming would be readable by both computers and human beings far beyond the minority who now know how to code, because code would be written in ordinary human languages like English or Chinese. He is not just making idle predictions: he has created a free program that allows you to play around with his own version of natural language programming.

Wolfram makes some predictions as to what a world where natural language programming became ubiquitous- where just as many people could code as can now write- might look like. The gap between law and code would largely disappear. The vast majority of people, including schoolchildren, would have the ability to program computers to do interesting things, including perform original research. As computers become embedded in objects, the environment itself would be open to programming by everyone.

All this would seem very good for us humans, and would be even better given that Wolfram sees it as the prelude to the end of scarcity, including the scarcity of time that we now call death. But then comes the AI. Artificial intelligence will be both the necessary tool to explore the possibility space of the computational universe and the primary intelligence through which we interact with the entirety of the realm of human thought. Yet at some threshold AI might leave us with nothing to do, as it will have become the best and most efficient way to meet our goals.

What makes Wolfram nervous isn't human extinction at the hands of super-intelligence so much as what becomes of us after scarcity and death have been eliminated and AI can achieve any goal- artistic ones included- better than us. This is Wolfram's vision of the not too far off future, which, given the competition from even current reality, isn't nearly weird enough. It's only when he starts speculating on where this whole thing is ultimately headed that anything so strange as Boltzmann brains makes its appearance, yet something like them does appear, and no one should be surprised, given his ideas about the nature of computation.

One of Wolfram’s most intriguing, and controversial, ideas is something he calls computational equivalence. With this idea he claims not only that computation is ubiquitous across nature, but that the line between intelligence and merely complicated behavior that grows out of ubiquitous natural computation is exceedingly difficult to draw.

For Wolfram the colloquialism that "the weather has a mind of its own" isn't just a way of complaining that the rain has ruined your picnic, but, in an almost panpsychic or pantheistic way, captures a deeper truth: that natural phenomena are the enactment of a sort of algorithm, which, he would claim, is why we can successfully model their behavior with other algorithms we call computer "simulations." The word simulations needs quotes because, if I understand him, Wolfram is claiming that there would be no difference between a computer simulation of something at a certain level of description and the real thing.
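To give a concrete feel for the kind of thing Wolfram has in mind, here is a minimal sketch (my toy illustration, not Wolfram's code) of an elementary cellular automaton, one of the simple programs he studies, where a rule that fits in a single byte, iterated line by line, produces behavior intricate enough that running the program is effectively the only way to find out what it will do:

```python
# A minimal sketch of an elementary cellular automaton (here Wolfram's rule 30).
# A one-byte rule, applied repeatedly to a row of cells, generates patterns far
# more complex than the rule itself. My toy illustration, not Wolfram's code.

def step(cells, rule=30):
    """Apply one update: each cell's next state depends on itself and its two neighbors."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=79, steps=40, rule=30):
    cells = [0] * width
    cells[width // 2] = 1                    # start from a single "on" cell
    for _ in range(steps):
        print("".join("#" if c else " " for c in cells))
        cells = step(cells, rule)

if __name__ == "__main__":
    run()
```

Rule 30's output is irregular enough that Wolfram has used it as a random number generator, and rule 110 has been proven computationally universal; results of that kind are what motivate the principle of computational equivalence.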

It's this view of computation that leads Wolfram to his far future and his box of a trillion souls. For if there is no difference between a perfect simulation and reality, and if nothing will prevent us from creating perfect simulations at some point in the future, however far off, then it makes perfect sense to think that some digitized version of you, which as far as you are concerned will be you, could end up in a "box", along with billions or trillions of similar digitized persons, including perhaps millions or more copies of you.

I've tried to figure out where exactly this conclusion to an idea I otherwise find attractive, that is, computational equivalence, goes wrong, other than just in terms of my intuition or common sense. I think the problem might come down to the fact that while many complex phenomena in nature may have computer-like features, they are not universal Turing machines, i.e. general-purpose computers, but machines whose information processing is very limited and specific to that established by their makeup.

Natural systems, including animals like ourselves, are more like the Tic-Tac-Toe machine built by the young Danny Hillis and described in his excellent primer on computers, The Pattern on the Stone, which is still insightful decades after its publication. Of course, animals such as ourselves can show vastly more types of behavior and exhibit a form of freedom of a totally different order than a game tree built out of circuit boards and lightbulbs, but, much like such a specialized machine, the way in which we think isn't a form of generalized computation; it shows a definitive shape based on our evolutionary, cultural and personal history. In a way, Wolfram's overgeneralization of computational equivalence negates what I find to be his equally or more important idea of the central importance of particular pasts in defining who we are as a species, as peoples, and as individuals.
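To make the contrast concrete, here is a toy sketch (mine, not Hillis's actual design) of a tic-tac-toe "machine" of that general kind: a memoized game tree that maps every position it can encounter to a move. It plays its one game perfectly, and it can compute nothing else, which is exactly what separates such a special-purpose device from a universal, general-purpose computer.

```python
# A toy "special-purpose machine" for tic-tac-toe: a memoized game tree that,
# once filled in, simply maps each board it can meet to a move. It processes
# exactly one kind of information and nothing else. (An illustration of the
# distinction drawn above, not Hillis's actual design.)

from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def best_move(board, player):
    """Return (score, move) for the player to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:                      # the previous mover already won
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if not moves:
        return 0, None                     # draw
    opponent = "O" if player == "X" else "X"
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score = -best_move(child, opponent)[0]
        if score > best[0]:
            best = (score, m)
    return best

def machine(board, player="X"):
    """The 'machine' itself: board in, move out, nothing more."""
    return best_move(board, player)[1]

if __name__ == "__main__":
    print(machine("X.O......"))            # the machine (playing X) picks its reply
```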

Oddly enough, Wolfram falls into the exact same trap that the science-fiction writer Stanislaw Lem fell into after he had hit upon an equally intriguing, though in some ways quite opposite understanding of computation and information.

Lem believed that the whole system of computation and mathematics human beings use to describe the world was a kind of historical artifact, for which there must be much better alternatives buried in the way systems that had evolved over time process information. A key scientific task, he thought, would be to uncover this natural computation and find ways to use it in the way we now use math and computation.

Where this leads him is to precisely the same conclusion as Wolfram: the possibility of building an actual world in the form of a simulation. He imagines the future designers of just such simulated worlds:

“Imagine that our Designer now wants to turn his world into a habitat for intelligent beings. What would present the greatest difficulty here? Preventing them from dying right away? No, this condition is taken for granted. His main difficulty lies in ensuring that the creatures for whom the Universe will serve as a habitat do not find out about its “artificiality”. One is right to be concerned that the very suspicion that there may be something else beyond “everything” would immediately encourage them to seek exit from this “everything.” Considering themselves prisoners of the latter, they would storm their surroundings, looking for a way out- out of pure curiosity, if nothing else.

…We must not therefore cover up or barricade the exit. We must make its existence impossible to guess.” (291-292)

Yet it seems to me that moving from the idea that particular things in the world- a storm, the structure of a seashell, the way particular types of problems are solved- are algorithmic to the conclusion that the entirety of the world hangs together in one universal algorithm is a massive overgeneralization. Perhaps there is some sense in which the universe might be said to be weakly analogous, not to one program, but to a computer language (the laws of physics) upon which an infinite ensemble of other programs can be instantiated, but which is structured so as to make some programs more likely to be run while ruling others out as impossible. Nevertheless, which programs actually get executed is subject to some degree of contingency- all that happens in the universe is not determined from initial conditions. Our choices actually count.

Still, such a view continues to treat the question of corporeal structure as irrelevant, whereas structure itself may be primary.

The idea of the world as code, or of DNA as a sort of code, is incredibly attractive because it implies a kind of plasticity, which equals power. What gets lost, however, is something of the artifact-like nature of everything that is: the physical stuff that surrounds us, life, our cultural environment. All that exists is the product of a unique history in which every moment counts, and this history, as it were, is the anchor that determines what is real. Asserting that the world is or could be fully represented as a simulation either implies that such a simulation possesses the kinds of compression and abstraction, along with the ahistorical plasticity, that come with mathematics and code, or it doesn't- and if it doesn't, it's difficult to say how anything like a person, let alone trillions of persons or a universe, could actually, rather than merely symbolically, be contained in a box, even a beautiful one.

For the truly real can perhaps most often be identified by its refusal to be abstracted away or compressed and by its stubborn resistance to our desire to give it whatever shape we please.

 

Is AI a Myth?


A few weeks back the technologist Jaron Lanier gave a provocative talk over at The Edge in which he declared the ideas swirling around the current manifestation of AI to be a "myth", and a dangerous myth at that. Yet Lanier was only one of a set of prominent thinkers and technologists who have appeared over the last few months to challenge what they saw as a flawed narrative surrounding recent advances in artificial intelligence.

There was a piece in The New York Review of Books back in October by the most famous skeptic from the last peak in AI enthusiasm, back in the early 1980's: John Searle. (Relation to the author lost in the mists of time.) It was Searle who invented the well-known thought experiment of the "Chinese Room", which purports to show that a computer can be very clever without actually knowing anything at all. Searle was no less critical of the recent incarnation of AI, and questioned the assumptions behind both Luciano Floridi's The Fourth Revolution and Nick Bostrom's Superintelligence.

Also in October, Michael Jordan, the man who brought us neural and Bayesian networks (not the gentleman who gave us mind-bending slam dunks), sought to puncture what he sees as hype surrounding both AI and Big Data. And just the day before this Thanksgiving, Kurt Andersen gave us a very long piece in Vanity Fair in which he wondered which side of this now enjoined battle between AI believers and skeptics would ultimately be proven correct.

I think seeing clearly what this debate is and isn't about might give us a better handle on what is actually going on in AI right now, in the next few decades, and in reference to a farther-off future we have to start at least thinking about- even if there's not much to actually do regarding the latter question for a few decades at the least.

The first thing I think one needs to grasp is that none of the AI skeptics are making non-materialistic claims, or claims that human-level intelligence in machines is theoretically impossible. These aren't people arguing that there's some spiritual something that humans possess that we'll be unable to replicate in machines. What they are arguing against is what they see as a misinterpretation of what is happening in AI right now, what we are experiencing with our Siri(s) and self-driving cars and Watsons. This question of timing is important far beyond a singularitarian's fear that he won't be alive long enough for his upload; rather, it touches on questions of research sustainability, economic equality, and political power.

Just to get the time horizon straight, Nick Bostrom has stated that top AI researchers give us a 90% probability of having human-level machine intelligence between 2075 and 2090. If we just split the difference we're out to around 2083 by the time human-equivalent AI emerges. In the Kurt Andersen piece, even the AI skeptic Lanier thinks humanesque machines are likely by around 2100.

Yet we need to keep sight of the fact that this is roughly 69 years in the future we're talking about: a blink of an eye in the grand scheme of things, but quite a long stretch in the realm of human affairs. It should be plenty long enough for us to get a handle on what human-level intelligence means, how we want to control it (which, I think, echoing Bostrom, we will want to do), and even what it will actually look like when it arrives. The debate looks very likely to grow from here on out and will become a huge part of a larger argument- one that will include many issues in addition to AI- over the survival and future of our species, only some of whose questions we can answer at this historical and technological juncture.

Still, what the skeptics are saying really isn't about this larger debate regarding our survival and future; it's about what's happening with artificial intelligence right before our eyes. They want to challenge what they see as common false assumptions currently made regarding AI. It's hard not to be bedazzled by all the amazing manifestations around us, many of which have only appeared over the last decade. Yet as the philosopher Alva Noë recently pointed out, we're still not really seeing what we'd properly call "intelligence":

Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson. We used "it" the way we use clocks.

This is an old criticism, the same as the one made by John Searle, both in the 1980’s and more recently, and though old doesn’t necessarily mean wrong, there are more novel versions. Michael Jordan, for one, who did so much to bring sophisticated programming into AI, wants us to be more cautious in our use of neuroscience metaphors when talking about current AI. As Jordan states it:

I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

What this lack of deep understanding means is that brain-based metaphors of algorithmic processing such as "neural nets" are really just cartoons of what real brains do. Jordan is attempting to provide a word of caution for AI researchers, the media, and the general public. It's not a good idea to be trapped in anything- including our metaphors. AI researchers might fail to develop other good metaphors that help them understand what they are doing- "flows and pipelines" once provided good metaphors for computers. The media is at risk of mis-explaining what is actually going on in AI if all it has are mid-20th-century ideas about "electronic brains", and the public is at risk of anthropomorphizing its machines. Such anthropomorphizing might have ugly consequences- a person is liable to make some pretty egregious mistakes if he thinks his digital assistant is able to think or possesses the emotional depth to be his friend.

Lanier's critique of AI is actually deeper than Jordan's because he sees both technological and political risks in misunderstanding what AI is at the current technological juncture. The research risk is that we'll find ourselves in an "AI winter" similar to the one that occurred in the 1980's. Hype-cycles always risk deflation and despondency when they go bust. If progress slows and claims prove premature, what you often get is a flight of capital and even of public grants. Once your research area becomes the subject of public ridicule you're likely to lose the interest of the smartest minds and start to attract kooks- which only further drives away both private capital and public support.

The political risks Lanier sees, though, are far scarier. In his Edge talk Lanier points out how our urge to see AI as persons is happening in parallel with our defining corporations as persons. The big Silicon Valley companies- Google, FaceBook, Amazon- are essentially just algorithms. Some of the same people who have an economic interest in our seeing their algorithmic corporations as persons are also among the biggest promoters of a philosophy that declares the coming personhood of AI. Shouldn't this lead us to be highly skeptical of the claim that AI should be treated as persons?

What Lanier thinks we have with current AI is a Wizard of Oz scenario:

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

What you get with a digital assistant isn't so much another form of intelligence helping you to make better informed decisions as a very cleverly crafted marketing tool. In fact the intelligence of these systems isn't, as it is often presented, coming from silicon at all. Rather, it's leveraged human intelligence that has suddenly disappeared from the books. This is how search itself works, along with Google Translate and recommendation systems such as Spotify, Pandora, Amazon or Netflix: they aggregate and compress decisions made by actually intelligent human beings who are hidden from the user's view.

Lanier doesn't think this problem is a matter of consumer manipulation alone: by packaging these services as a form of artificial intelligence, tech companies can avoid paying the human beings who are the actual intelligence at the heart of these systems. Take technological unemployment, whose solution the otherwise laudable philanthropist Bill Gates thinks is "eliminating payroll and corporate income taxes while also scrapping the minimum wage so that businesses will feel comfortable employing people at dirt-cheap wages instead of outsourcing their jobs to an iPad"- a view based on the false premise that human intelligence is becoming superfluous, when what is actually happening is that human intelligence has been captured, hidden, and repackaged as AI.

The danger of the moment is that we will take this rhetoric regarding machine intelligence as reality. Lanier wants to warn us that the way AI is being positioned today looks eerily familiar in terms of human history:

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

As long as we avoid falling into another AI winter this century (a prospect that seems as likely to occur as not), then over the course of the next half-century we will experience the gradual improvement of AI to the point where perhaps the majority of human occupations can be performed by machines. We should not confuse ourselves as to what this means: it is impossible to say, with anything but an echo of lost religious myths, that we will be entering the "next stage" of human or "cosmic evolution". Indeed, what seems more likely is that the rise of AI is just one part of an overall trend eroding the prospects and power of the middle class and propelling the re-emergence of oligarchy as the dominant form of human society. Making sure we don't fall into this trap, by insisting that our machines continue to serve the broader human interest for which they were made, will be the necessary prelude to addressing the deeper existential dilemmas posed by truly intelligent artifacts, should they ever emerge from anything other than our nightmares and our dreams.

 

Why the Castles of Silicon Valley are Built out of Sand


If you get just old enough, one of the lessons living through history throws at you is that dreams take a long time to die. Depending on how you date it, communism took anywhere from 74 to 143 years to pass into the dustbin of history, though some might say it is still kicking. The Ptolemaic model of the universe lasted from around 100 AD into the 1600's. Perhaps more dreams than not simply refuse to die; they hang on like ghosts, or ghouls, zombies or vampires, or whatever freakish version of the undead suits your fancy. Naming them all would take up more room than I have here, and would no doubt start one too many arguments, all of our lists being different. Here, I just want to make an argument for the inclusion of one dream on our list of zombies, knowing full well the dream I'll declare dead will have its defenders.

The fact of the matter is, I am not even sure what to call the dream I'll be talking about. Perhaps digitopia is best. It was the dream, which emerged sometime in the 1980's and went mainstream in the heady 1990's, that this new thing we were creating called the "Internet" and the economic model it permitted were bound to lead to a better world of more sharing, more openness, more equity, if we just let their logic play itself out over a long enough period of time. Almost all the big-wigs in Silicon Valley- the Larry Pages and Mark Zuckerbergs, the Jeff Bezoses and Peter Diamandises- still believe this dream, and walk around like 21st century versions of Mary Magdalene claiming they can still see what more skeptical souls believe has passed.

By far the best Doubting Thomas of digitopia we have out there is Jaron Lanier. In part his authority in declaring the dream dead comes from the fact that he was there when the dream was born and was once a true believer. Like Kevin Bacon in Hollywood, take any intellectual heavy hitter of digital culture- say, Marvin Minsky- and you'll find Lanier has some connection. Lanier is no Luddite, so when he says there is something wrong with how we have deployed the technology he in part helped develop, it's right and good to take the man seriously.

The argument Lanier makes in his most recent book, Who Owns the Future?, against the economic model we have built around digital technology is, in a nutshell, this: what we have created is a machine that destroys middle class jobs and concentrates information, wealth and power. Say what? Haven't the Internet and mobile technology democratized knowledge? Don't average people have more power than ever before? The answer to both questions is no, and the reason why is that the Internet has been swallowed by its own logic of "sharing".

We need to remember that the Internet really got ramped up when it started to be used by scientists to exchange information with each other. It was built on the idea of openness and transparency, not to mention a set of shared values. When the Internet leapt out into public consciousness no one had any idea how to turn this sharing capacity and transparency into the basis for an economy. It took the aftermath of the dot-com bubble and bust for companies to come up with a model of how to monetize the Internet, and almost all of the major tech companies that dominate the Internet, at least in America- and there are only a handful: Google, FaceBook and Amazon- now follow some variant of this model.

The model is to aggregate all the sharing that the Internet seems to naturally produce and offer it, along with other "complements", for "free" in exchange for one thing: the ability to monitor, measure and manipulate through advertising whoever uses their services. Like silicon itself, it is a model that is ultimately built out of sand.

When you use a free service like Instagram there are three ways it's ultimately paid for. The first we all know about: the "data trail" we leave when using the site is sold to third-party advertisers, which generates income for the parent company, in this case FaceBook. The second and third ways the service is paid for I'll get to in a moment, but the first way itself opens up all sorts of observations and questions that need to be answered.

We had thought the information (and ownership) landscape of the Internet was going to be "flat". Instead, it's proven to be extremely "spiky". What we forgot in thinking it would turn out flat was that someone would have to gather and make useful the mountains of data we were about to create. The big Internet and telecom companies are these aggregators, able to make this data actionable because they possess the most powerful computers on the planet, which allow them not only to route and store this data but to mine it for value. Lanier has a great name for the biggest of these companies- he calls them Siren Servers.

One might think that which particular Siren Servers are at the head of the pack is a matter of which is the most innovative. Not really. Rather, the largest Siren Servers have become so rich they simply swallow any innovative company that comes along. FaceBook gobbled up Instagram because it offered a novel and increasingly popular way to share photos.

The second way a free service like Instagram is paid for, and this is one of the primary concerns of Lanier in his book, is that it essentially cannibalizes to the point of destruction the industry that used to provide the service, which in the “old economy” meant it also supported lots of middle class jobs.

Lanier states the problem bluntly:

 Here’s a current example of the challenge we face. At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography is Instagram. When Instagram was sold to FaceBook for a billion dollars in 2012, it employed only thirteen people.  (p.2)

Calling Thomas Piketty….

As Bill Davidow argued recently in The Atlantic, the size of this virtual economy, where people share and get free stuff in exchange for their private data, is now so big that it is giving us a distorted picture of GDP. We can no longer be sure how fast our economy is growing. He writes:

 There are no accurate numbers for the aggregate value of those services but a proxy for them would be the money advertisers spend to invade our privacy and capture our attention. Sales of digital ads are projected to be $114 billion in 2014, about twice what Americans spend on pets.

The forecasted GDP growth in 2014 is 2.8 percent and the annual historical growth rate of middle quintile incomes has averaged around 0.4 percent for the past 40 years. So if the government counted our virtual salaries based on the sale of our privacy and attention, it would have a big effect on the numbers.

Fans of Joseph Schumpeter might see all this churn as capitalism's natural creative destruction, and be unfazed by the government's inability to measure this "off the books" economy, because what the government cannot see it cannot tax.

The problem is that, unlike at other times in our history, technological change doesn't seem to be creating new middle class jobs as fast as it destroys old ones. Lanier was particularly sensitive to this development because he has always had his feet in two worlds- the world of digital technology and the world of music. Not the Katy Perry world of superstar music, but the world of people who made a living selling local albums, playing small gigs, and, even more importantly, providing the services that made this mid-level musical world possible. Lanier had seen how the digital technology he loved and helped create had essentially destroyed the middle class world of musicians he also loved and had grown up in. His message for us all was that the Siren Servers are coming for you.

The continued advance of Moore's Law, which, according to Charlie Stross, will play out for at least another decade or so, means not so much that we'll achieve AGI, but that machines will be just smart enough to automate some of the functions we had previously thought only human beings were capable of performing. I'll give an example of my own. For decades now the GED test, which people pursue to obtain a high school equivalency diploma, has had an essay section. Thousands of people were needed to score these essays by hand, and the majority were likely paid to do so. With the new, computerized GED test this essay scoring has been completely automated, human readers made superfluous.

This brings me to the third way these new digital capabilities are paid for. They cannibalize work human beings have already done in order to profit a company that presents and sells those services as a form of artificial intelligence. As Lanier writes of Google Translate:

It’s magic that you can upload a phrase in Spanish into the cloud services of a company like Google or Microsoft, and a workable, if imperfect, translation to English is returned. It’s as if there’s a polyglot artificial intelligence residing up there in that great cloud of server farms.

But that is not how cloud services work. Instead, a multitude of examples of translations made by real human translators are gathered over the Internet. These are correlated with the example you send for translation. It will almost always turn out that multiple previous translations by real human translators had to contend with similar passages, so a collage of those previous translations will yield a usable result.

A giant act of statistics is made virtually free because of Moore’s Law, but at core the act of translation is based on real work of people.

Alas, the human translators are anonymous and off the books. (19-20)
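Stripped of the engineering, the mechanism Lanier describes looks something like the following caricature of example-based translation (a deliberately crude sketch with invented toy data, nowhere near the scale or statistics of the real system): a memory of human-made translations is searched for the source phrases most similar to the input, and a "collage" of the corresponding human translations is returned.

```python
# A deliberately crude caricature of example-based machine translation: search a
# memory of human-made translations for the most similar source phrases and
# stitch their translations together. The "intelligence" lives entirely in the
# stored human work. Toy data and toy matching, nothing like a production system.

from difflib import SequenceMatcher

# Translation memory: pairs produced by (hypothetical) human translators.
MEMORY = [
    ("buenos días", "good morning"),
    ("¿cómo estás?", "how are you?"),
    ("muchas gracias por tu ayuda", "thank you very much for your help"),
    ("hasta mañana", "see you tomorrow"),
]

def translate(phrase: str) -> str:
    """Return a 'collage' built from the closest human translations on file."""
    pieces = []
    for chunk in phrase.lower().split(","):
        chunk = chunk.strip()
        # Find the human-translated source phrase most similar to this chunk.
        best = max(MEMORY, key=lambda pair: SequenceMatcher(None, chunk, pair[0]).ratio())
        pieces.append(best[1])
    return ", ".join(pieces)

if __name__ == "__main__":
    print(translate("Buenos días, muchas gracias por tu ayuda"))
    # -> "good morning, thank you very much for your help"
```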

The question all of us should be asking ourselves is not "could a machine be me?", with all of our complexity and skills, but "could a machine do my job?", the answer to which, in nine cases out of ten, is almost certainly "yes!"

Okay, so that's the problem; what is Lanier's solution? It is not that we pull a Ned Ludd and break the machines, or even try to slow down Moore's Law. Instead, what he wants us to do is start treating our personal data like property. If someone wants to know my buying habits they have to pay a fee to me, the owner of this information. If some company uses my behavior to refine their algorithm I need to be paid for this service, even if I was unaware I had helped in such a way. Lastly, anything I create and put on the Internet is my property. People are free to use it as they choose, but they need to pay me for it. In Lanier's vision each of us would be the recipient of a constant stream of micropayments from the Siren Servers that are using our data and our creations.
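Lanier stays at the level of principle rather than engineering, but the bookkeeping he envisions might look something like the toy sketch below, in which every name, rate, and event is hypothetical and invented purely for illustration: each use of a person's data is logged with its provenance, and micropayments accumulate to the people whose data made the service possible.

```python
# A toy sketch of Lanier-style provenance accounting: every time a Siren Server
# uses someone's data, the use is recorded and a micropayment accrues to that
# person. All names, rates, and events are hypothetical, for illustration only.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class DataUse:
    person: str      # whose data was used
    purpose: str     # e.g. "ad targeting" or "translation corpus"
    rate: float      # micropayment owed for this use, in dollars

class ProvenanceLedger:
    def __init__(self):
        self.events = []
        self.balances = defaultdict(float)

    def record(self, use: DataUse):
        """Log a use of someone's data and credit them the micropayment."""
        self.events.append(use)
        self.balances[use.person] += use.rate

    def statement(self, person: str) -> float:
        return self.balances[person]

if __name__ == "__main__":
    ledger = ProvenanceLedger()
    ledger.record(DataUse("alice", "ad targeting", 0.002))
    ledger.record(DataUse("alice", "translation corpus", 0.010))
    ledger.record(DataUse("bob", "recommendation model", 0.001))
    print(f"alice is owed ${ledger.statement('alice'):.3f}")   # $0.012
```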

Such a model is very interesting to me, especially in light of other fights over data ownership, namely the rights of indigenous people against bio-piracy, something I was turned on to by Paolo Bacigalupi’s bio-punk novel The Windup Girl, and what promises to be an increasing fight between pharmaceutical/biotech firms and individuals over the use of what is becoming mountains of genetic data. Nevertheless, I have my doubts as to Lanier’s alternative system and will lay them out in what follows.

For one, such a system seems likely to exacerbate rather than relieve the problem of rising inequality. Assuming most of the data people will receive micropayments for is banal and commercial in nature, people who are already big spenders are likely to get a much larger cut of the micropayments pie. If I could afford such things, it's no doubt worth a lot for some extra piece of information to tip the scales between my buying a Lexus or a Beemer; not so much if it's a question of Tide vs. Wisk.

This issue would be solved if Lanier had adopted the model of a shared public pool of funds into which micropayments would go, rather than routing them to the actual individual involved, but he couldn't do this out of commitment to the idea that personal data is a form of property. Don't let his dreadlocks fool you, Lanier is at bottom a conservative thinker. Such a pooled fee might also balance out the glaring problem that Siren Servers effectively pay zero taxes.

But by far the biggest hole in Lanier's micropayment system is that it ignores the international dimension of the Internet. Silicon Valley companies may be barreling ahead with their model, as can be seen in Amazon's recent foray into the smartphone market, which attempts to route everything through itself, but the model has crashed globally. Three events signal the crash: Google was essentially booted out of China, the Snowden revelations threw a pall of suspicion over the model in an already privacy-sensitive Europe, and the EU itself handed the model a major loss with the "right to be forgotten" case in Spain.

Lanier's system, which accepts mass surveillance as a fact, probably wouldn't fly in a privacy-conscious Europe, and how in the world would we force Chinese and other digital pirates to provide payments on any scale? And China and other authoritarian countries have their own plans for their Siren Servers, namely their use as tools of the state.

The fact of the matter is there is probably no truly global solution to continued automation and algorithmization, or to mass surveillance. Yet the much-feared "splinter-net", the shattering of the global Internet, may be better for freedom than many believe. This is because the Internet, and the Siren Servers that run it, once freed from their spectral existence in the global ether, become the responsibility of real, territorially bound people to govern. Each country will ultimately have to decide for itself both how the Internet is governed and how to respond to the coming wave of automation. There's bound to be diversity because countries are diverse; some might even leap over Lanier's conservatism and invent radically new, more equitable ways of running an economy- an outcome many of the original digitopians who set this train a-rollin' might actually be proud of.

 

Jumping Off The Technological Hype-Cycle and the AI Coup


What we know is that the very biggest tech companies have been pouring money into artificial intelligence over the last year. Back in January Google bought the UK artificial intelligence firm DeepMind for 400 million dollars. Only a month earlier, Google had bought the innovative robotics firm Boston Dynamics. FaceBook is in the game as well, having also in December 2013 created a massive lab devoted to artificial intelligence. And this new obsession with AI isn't only something latte pumped-up Americans are into. The Chinese internet giant Baidu, with its own AI lab, recently snagged the artificial intelligence researcher Andrew Ng, whose work for Google included the breakthrough of creating a program that could teach itself to recognize pictures of cats on the Internet- and the word "breakthrough" is not intended to be the punch line of a joke.

Obviously these firms see something that makes these big bets and the competition for talent seem worthwhile, the most obvious thing being advances in an approach to AI known as deep learning, which moves programming away from a logical set of instructions and towards the kind of bulking and pruning found in biological forms of intelligence. Will these investments prove worth it? We should know in just a few years, yet we simply don't right now.
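For readers who haven't seen it, the core of the approach is disarmingly small. What follows is a pedagogical cartoon, nothing like the production systems these companies actually run: a tiny neural network whose weighted connections are repeatedly nudged to reduce error, so that a function (here XOR, which no single linear rule can capture) is learned rather than explicitly programmed.

```python
# A cartoon of learning with neural networks: a tiny two-layer network trained by
# gradient descent to reproduce XOR. No rules are written down; weighted
# connections are just nudged, over and over, toward smaller error.
# A pedagogical sketch only, nothing like what Google or Baidu deploy.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # nudge every weight a little downhill
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # should approach [0, 1, 1, 0]
```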

No matter how it turns out we need to beware of becoming caught in the technological hype-cycle. A tech investor, or tech company for that matter, needs to be able to ride the hype-cycle like a surfer rides a wave- when it goes up, she goes up, and when it comes down she comes down- with the key being to position oneself in just the right place, neither too far ahead nor too far behind. The rest of us, however, and especially those charged with explaining science and technology to the general public, namely science journalists, have a very different job: to parse the rhetoric and figure out what is really going on.

A good example of what science journalism should look like is a recent conversation over at Bloggingheads between Freddie deBoer and Alexis Madrigal. As Madrigal points out, we need to be cognizant of what the recent spate of AI wonders we've seen actually are. Take the much over-hyped Google self-driving car. It seems much less impressive once you know that the only areas where these cars are functional are those that have been mapped beforehand in painstaking detail. The car guides itself not through "reality" but through a virtual world whose parameters can be upset by something out of order that the car is then pre-programmed to respond to in a limited set of ways. The car thus only functions in the context of a mindbogglingly precise map of the area in which it is driving- as if you were unable to make your way through a room unless you knew exactly where every piece of furniture was located. In other words, Google's self-driving car cannot drive in almost all of the situations that could be handled by a sixteen-year-old who just learned how to drive. "Intelligence" in a self-driving car is a question of gathering massive amounts of data up front. Indeed, the latest iteration of the Google self-driving car is more like a tiny trolley car where information is the "track" than an automobile driven by a human being and able to go anywhere, without the need of any foreknowledge of the terrain, so long, that is, as there is a road to drive upon.
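The point can almost be put tautologically in code. In the caricature below (my illustration of the argument, not a description of Google's software, with an invented three-intersection "map"), the planner can only move between road segments surveyed ahead of time; ask it to go anywhere unmapped and it simply has no answer.

```python
# A caricature of map-dependent autonomy: the "car" can only plan over road
# segments that were surveyed and stored ahead of time. Anything off the
# pre-built map is simply undriveable. (An illustration of the argument above,
# not a description of Google's actual system; the map data is invented.)

PREMAPPED_ROADS = {                    # hypothetical surveyed segments
    "depot": ["1st & Main"],
    "1st & Main": ["depot", "2nd & Main"],
    "2nd & Main": ["1st & Main"],
}

def plan_route(start, goal):
    """Breadth-first search, but only over the pre-surveyed map."""
    if start not in PREMAPPED_ROADS or goal not in PREMAPPED_ROADS:
        raise ValueError("not in the pre-built map: cannot drive there")
    frontier, seen = [[start]], {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in PREMAPPED_ROADS[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    raise ValueError("no surveyed route connects these points")

if __name__ == "__main__":
    print(plan_route("depot", "2nd & Main"))       # on the map: a route is found
    try:
        plan_route("depot", "the next town")       # off the map: no answer at all
    except ValueError as err:
        print("refused:", err)
```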

As Madrigal and deBoer also point out in another example, the excellent service of Google Translate isn’t really using machine intelligence to decode language at all. It’s merely aggregating the efforts of thousands of human translators to arrive at approximate results. Again, there is no real intelligence here, just an efficient way to sort through an incredibly huge amount of data.

Yet what if this tactic of approaching intelligence by "throwing more data at it" ultimately proves a dead end? There may come a point where such a strategy shows increasingly limited returns. The fact of the matter is that we know of only one fully sentient creature- ourselves- and the more-data strategy is nothing like how our own brains work. If we really want to achieve machine intelligence, and it's an open question whether this is a worthwhile goal, then we should be exploring at least some alternative paths to that end, such as those long espoused by Douglas Hofstadter, the author of the amazing Gödel, Escher, Bach and The Mind's I, among others.

Predictions about the future capacities of artificially intelligent agents are all predicated on the continued exponential rise in computer processing power. Yet these predictions are based on some less-than-solid assumptions, the first being that we are nowhere near hard limits to the continuation of Moore's Law. What this assumption ignores are the increasing rumblings that Moore's Law might be in hospice and destined for the morgue.

But even if no such hard limits are encountered in terms of Moore's Law, we still have the unproven assumption that greater processing power almost all by itself leads to intelligence, or is even guaranteed to bring incredible changes to society at large. The problem here is that sheer processing power doesn't tell you all that much. Processing power hasn't brought us machines that are intelligent so much as machines that are fast, nor are the increases in processing power themselves all that relevant to what the majority of us can actually do. As we are often reminded, all of us carry in our pockets or have sitting on our desktops computational capacity that exceeds all of NASA in the 1960's, yet clearly this doesn't mean that any of us are, by this power, capable of sending men to the moon.

AI may be in a technological hype-cycle- again, we won't really know for a few years- but the danger of any hype-cycle for an immature technology is that it gets crushed as the wave comes down. In a hype-cycle, initial progress in some field is followed by a private-sector investment surge and then a transformation of the grant-writing and academic publication landscape, as universities and researchers desperate for dwindling research funding try to match their research focus to a new and sexy field. Eventually progress comes to the attention of the general press and gives rise to fawning books and maybe even a dystopian Hollywood movie or two. Once the public is on to it, the game is almost up, for research runs into headwinds and progress fails to meet the expectations of a now profit-fix-addicted market and funders. In the crash many worthwhile research projects end up in the dustbin and funding flows to the new sexy idea.

AI itself went through a similar hype-cycle in the 1980's, back when Hofstadter was writing his Gödel, Escher, Bach, but we have had a spate of more recent candidates. Remember in the 1990's when seemingly every disease and every human behavior was being linked to a specific gene, promising targeted therapies? Well, as almost always, we found out that reality is more complicated than the current fashion. The danger here was that such a premature evaluation of our knowledge led to all kinds of crazy fantasies and nightmares. The fantasy that we could tailor-design human beings by selecting specific genes led to what amounted to some pretty egregious practices, namely the sale of selection services for unborn children based on spurious science- a sophisticated form of quackery. It also led to childlike nightmares, such as those found in the movie Gattaca or Francis Fukuyama's Our Posthuman Future, where we were frightened with the prospect of a dystopian future in which human beings were to be designed like products, a nightmare that was supposed to be just over the horizon.

We now have the field of epigenetics to show us what we should have known- that both genes and environment count and we have to therefore address both, and that the world is too complex for us to ever assume complete sovereignty over it. In many ways it is the complexity of nature itself that is our salvation protecting us from both our fantasies and our fears.

Some other examples? How about MOOCs, which were supposed to be as revolutionary as the invention of universal education or the university? Being involved in distance education for non-university-attending adults, I had always known that the most successful model for online learning was a "blended" one- some face-to-face, some online- and that "soft" study skills were as important to student success as academic ability. The MOOC model largely avoided these hard-won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players, UDACITY, losing its foothold. Andrew Ng, the AI researcher scooped up by Baidu whom I mentioned earlier, is just one of a number of high-level MOOC refugees, having helped found Coursera.

The so-called Internet of Things is probably another example of getting caught on the hype-cycle. The IoT is the idea that people are going to be clamoring to connect all of their things- their homes, refrigerators, cars, and even their own bodies- to the Internet in order to be able to constantly monitor those things. The holes in all this are not only that we are already drowning in a deluge of data, or that it's pretty easy to see how the automation of consumption only benefits those providing the service if we're either buying more stuff or the automators are capturing a "finder's fee"; it's above all that anything connected to the Internet is by that very fact hackable, and who in the world wants their home or their very body hacked? This isn't a paranoid fantasy of the future, as a recent skeptical piece on the IoT in The Economist pointed out:

Last year, for instance, the United States Federal Trade Commission filed a complaint against TrendNet, a Californian marketer of home-security cameras that can be controlled over the internet, for failing to implement reasonable security measures. The company pitched its product under the trade-name “SecureView”, with the promise of helping to protect owners’ property from crime. Yet, hackers had no difficulty breaching TrendNet’s security, bypassing the login credentials of some 700 private users registered on the company’s website, and accessing their live video feeds. Some of the compromised feeds found their way onto the internet, displaying private areas of users’ homes and allowing unauthorised surveillance of infants sleeping, children playing, and adults going about their personal lives. That the exposure increased the chances of the victims being the targets of thieves, stalkers or paedophiles only fuelled public outrage.

Personalized medicine might be considered a cousin of the IoT, and while it makes perfect sense to me for persons with certain medical conditions, or even just an interest in their own health, to monitor themselves or be monitored and connected to health care professionals, such systems will most likely be closed networks, if only to avoid the risk of some maleficent nerd turning off your pacemaker.

Still, personalized medicine itself might be yet another example of the magnetic power of hype. It is one thing to tailor a patient's treatment based on how others with similar genomic profiles reacted to some pharmaceutical and the like. What would be most dangerous, in terms of health care costs to both individuals and society, would be something like the "personalized" care for persons with chronic illnesses profiled in The New York Times this April, where, for instance, the:

… captive audience of Type 1 diabetics has spawned lines of high-priced gadgets and disposable accouterments, borrowing business models from technology companies like Apple: Each pump and monitor requires the separate purchase of an array of items that are often brand and model specific.

A steady stream of new models and updates often offer dubious improvement: colored pumps; talking, bilingual meters; sensors reporting minute-by-minute sugar readouts. Ms. Hayley’s new pump will cost $7,350 (she will pay $2,500 under the terms of her insurance). But she will also need to pay her part for supplies, including $100 monitor probes that must be replaced every week, disposable tubing that she must change every three days and 10 or so test strips every day.

The technological hype-cycle gets its rhetoric from the one technological transformation that actually deserves to be characterized as a revolution. I am talking, of course, about the industrial revolution, which certainly transformed human life almost beyond recognition from what came before. Every new technology seemingly ends up making its claim to be "revolutionary", as in absolutely transformative. Just in my lifetime we have had the IT, or digital, revolution, the genomics revolution, the mobile revolution, and the Big Data revolution, to name only a few. Yet the fact of the matter is that not only has no single one of these revolutions proven as transformative as the industrial revolution; arguably, all of them combined haven't matched it either.

This is the far too often misunderstood thesis of economists like Robert Gordon. Gordon’s argument, at least as far as I understand it, is not that current technological advancements aren’t a big deal, just that the sheer qualitative gains seen in the industrial revolution are incredibly difficult to sustain let alone surpass.

The enormity of the change from a world where it took years, as it took Magellan propelled by the winds, rather than days to circle the globe is hard to get our heads around; the gap between using a horse and using a car for daily travel is incredible. The average lifespan has doubled since the 1800's. One in five children born once died in childhood. There were no effective anesthetics before 1846. Millions would die from an outbreak of the flu or another infectious disease. Hunger and famine were common human experiences, however developed one's society, up until the 20th century, and indoor toilets were not common until then either. Widespread vaccination did not emerge until the 19th century.

Bill Gates has characterized views such as those of Gordon as “stupid”. Yet, he himself is a Gordonite as evidenced by this quote:

But asked whether giving the planet an internet connection is more important than finding a vaccination for malaria, the co-founder of Microsoft and world’s second-richest man does not hide his irritation: “As a priority? It’s a joke.”

Then, slipping back into the sarcasm that often breaks through when he is at his most engaged, he adds: “Take this malaria vaccine, [this] weird thing that I’m thinking of. Hmm, which is more important, connectivity or malaria vaccine? If you think connectivity is the key thing, that’s great. I don’t.”

And this is really all that I think Gordon is saying: that the "revolutions" of the past 50 years pale in comparison to the effects on human living of the period between 1850 and 1950, and that this is the case even if we accept that the pace of technological change is accelerating. It is as if we are running faster and faster at the same time that the hill in front of us gets steeper and steeper, so that truly qualitative change in the human condition has become more difficult even as our technological capabilities have vastly increased.

For almost two decades we've thought that the combined effects of three technologies in particular- robotics, genetics, and nanotech- were destined to bring qualitative change on the order of the industrial revolution. It's been fourteen years since Bill Joy warned us that these technologies threatened us with a future without human beings in it, but it's hard to see how even a positive manifestation of the transformations he predicted has come true. This is not to say that they will never bring such a scale of change, only that they haven't yet, and fourteen years isn't nothing after all.

So now, after that long and winding road, back to AI. Erik Brynjolfsson and Andrew McAfee, the authors of the most popular recent book on the advances in artificial intelligence over the past decade, The Second Machine Age, take aim directly at the technological pessimism of Gordon and others. They are firm believers in the AI revolution and its potential. For them innovation in the 21st century is no longer about brand new breakthrough ideas but, borrowing from biology, about the recombination of ideas that already exist. In their view, we are being "held back by our inability to process all the new ideas fast enough", and therefore one of the things we need is even bigger computers to test out new ideas and combinations of ideas. (82)

But there are other conclusions one might draw from the metaphor of innovation as "recombination". For one, recombination can be downright harmful for organisms that are actually working. Perhaps you do indeed get growth from Schumpeter's endless cycle of creation and destruction, but if all you've gotten as a consequence are minor efficiencies at the margins, at the price of massive dislocations for those in industries deemed antiquated, not to mention for society as a whole, then it's hard to see the game being worth the candle.

We've seen this pattern in financial services, in music, and in journalism, and it is now sought in education and healthcare. Here innovation is used not so much to make our lives appreciably better as to upend the traditional stakeholders in an industry so that those with the biggest computers- what Jaron Lanier calls "Siren Servers"- can swoop in and take control. A new elite stealing an old elite's thunder wouldn't matter all that much to the rest of us peasants were it not for the fact that this new elite's system of production has little room for us as workers and producers, only as consumers of its goods. It is this desire to destabilize, reorder and control the institutions of pre-digital capitalism, and to squeeze expensive human beings from the process of production, that is the real source of the push for intelligent machines, the real force behind the current "AI revolution"- but given its nature, we'd do better to call it a coup.

Correction:

The phrase above: "The MOOC model largely avoided these hard-won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players, UDACITY, losing its foothold."

Originally read: “The MOOC model largely avoided these hard won truths of the field of Adult Basic Education and appears to be starting to stumble, with one of its biggest players UDACITY  closing up shop, as a result.”

Defining Home

One would be hard-pressed to find two thinkers as distinct as Jane Jacobs and Jaron Lanier. Jacobs, who passed away in 2006, was a thinker concerned with the concrete reality of real-world communities and, most especially, with how to preserve them. Lanier is a pioneer in the field of virtual reality, having coined the term, with deep ties to the culture of Silicon Valley. This is why I found it so surprising, upon reading relatively recent books by both of these authors, that they provided an almost synergistic perspective, each appearing to inform the work of the other and resulting in a more comprehensive whole.

I’ll start with Jane Jacobs. The purpose of her last and by far most pessimistic book, Dark Age Ahead, published in 2004, was to identify what she saw as some major dystopian trends in the West that, if not checked, might result in the appearance of a new dark age. Jacobs gives what is perhaps one of the best descriptions of a dark age that I have ever seen: a state of “mass amnesia” in which not only have certain aspects of a culture been lost, but the fact that these aspects have been lost is forgotten as well.

In Dark Age Ahead, Jacobs identifies five dystopian trends which she thinks are leading us down the path of a new dark age: the decline of communities and the family, the decline of higher education, the decline of science, the failure of government, and the decay of culture. One of the things that makes Jacobs so interesting is that she defies ideological stereotypes. Looking at the world from the perspective of the community allows her to cast her net much wider in the search for explanations than what emerges from the “think tanks” of both the right and the left. Want a reason for the decline of the family? How about consumerism, the need for two incomes, and the automobile, rather than the right’s claim of declining moral standards. Want a reason for the failure of government? What about the loss of taxing authority by local governments to the national government, and the innate inability of national bureaucracies to craft effective policies based on local conditions, rather than, as some on the left would have it, the need for a more expansive federal government.

Jacobs’ unique perspective lent her prescience. Over three years before the housing bubble burst and felled the US economy she was able to see the train wreck coming (DA, p. 32). This perspective grows out of her disdain for ideology, which is one of her main targets in Dark Age Ahead. Something like ideology can be seen in what Jacobs understands to be the decline of science. Openness to feedback from the real world is the cornerstone of true science, but, in what Jacobs sees as an all too frequent occurrence, scientists, especially social scientists, ignore such feedback because it fails to conform to the reigning paradigm. Another danger is when fields of knowledge without any empirical base at all successfully pass themselves off as “science”.

But the negative effect of ideology is most apparent at the level of national government, where the “prefabricated answers” ideology provides become one-size-fits-all “solutions” that are likely to fail: first, because profound local differences are ignored, and second, because national imperatives and policies emerge from bureaucratic or corporate interests that promote or mandate answers to broad problems which end up embedding their own ideology and agenda rather than actually addressing the problem at hand.

Sometimes we are not even aware that policies from distant interests are being thrust upon us. Often what are in fact politically crafted policies reflecting some interest have the appearance of having arisen organically as the product of consumer choice. Jacobs illustrates this by showing how the automobile-centric culture of the US was largely the creation of the automobile industry, which pushed for the dismantling of much of the public transportation system of American cities. Of course, the federal government played a huge role in the expansion of the automobile as well, but it did not do so in order to address the question of what would be the best transportation system to adopt; it did so as a means of fostering national security and, less well known, of promoting the goal of national full employment, largely blind to whatever negative consequences might emerge from such a policy.

Jacobs’ ideas regarding feedback- whether as the basis of real science or as the foundation of effective government policies- have some echoes, I think, of the conservative economist Friedrich Hayek. Both Hayek and Jacobs favored feedback systems- the market, in Hayek’s case, or, for Jacobs, the community (which includes the economy but is broader)- over theories and policies crafted and imposed by distant experts.

A major distinction, I think, is that whereas Jacobs looked to provide boundaries to effective systems of feedback- her home city of Toronto was one such feedback system, rather than the economy of all of Canada, North America, or the world- Hayek, emerging from the philosophy of classical liberalism, focused his attention sharply on economics rather than broadening his view to include things such as the education system, institutions of culture and the arts, or local customs. Jacobs saw many markets limited in geographic scope; Hayek saw the MARKET, a system potentially global in scale that, given the adoption of free trade, would constitute a real, as opposed to a politically distorted, feedback system covering the whole earth. Jacobs is also much more attuned to areas that appear on the surface to be driven by market mechanisms- such as the idea that consumer choice led to the widespread adoption of the automobile in the US- but that on closer inspection are shown to be driven by influence upon, or decisions taken by, national economic and political elites.

Help from anyone deeply familiar with either Hayek or Jacobs in clarifying my thoughts here would be greatly appreciated, but now back to Lanier.

Just as Jacobs sees a naturally emergent complexity in human environments such as cities, a complexity that makes any de-contextualized form of social engineering likely to end in failure, Lanier, in his 2010 manifesto You Are Not A Gadget, applies an almost identical idea to the human person, and challenges the idea that any kind of engineered “human-like” artificial intelligence will manage to make machines like people. Instead, Lanier claims, by trying to make machines like people we will make people more like machines.

Lanier is not claiming that there is a sort of “ghost in the machine” that makes human beings distinct. His argument is instead evolutionary:

I believe humans are the result of billions of years of implicit, evolutionary study in the school of hard knocks. The cybernetic structure of a person has been refined by a very large, very long, and very deep encounter with physical reality. (157)

Both human communities and individuals, these authors seem to be suggesting, are the products of deep and largely non-replicable processes. Imagine what it would truly mean to replicate, as software, the city of Rome. It is easy enough to imagine that we could reproduce in amazing detail the architecture and landscape of the city, but how on earth would we replicate all the genealogical legacies that go into a city: its history, culture, language- not to mention the individuals who are the carriers of such legacies? The layers that have gone into making Rome what it is stretch deep back into human, biological, and physical time: beginning with the Big Bang, the formation of the Milky Way, our sun, the earth, life on earth through the eons up until human history, prehistoric settlements, the story of the Roman Republic and Empire, the Catholic Church, Renaissance city-states, national unification, Mussolini’s fascist dictatorship, down to our own day. Or, to quote Lanier: “What makes something fully real is that it is impossible to represent it to completion”. (134)

Lanier thinks the fact that everything is represented in bits has led to the confusion that everything is bits. The result of this type of idolatry is for representation and reality to begin to part company, a delusion which he thinks explains the onset of the economic crisis in 2008. (It’s easy to see why he might think this, given that the crisis was engendered by financial Frankensteins such as credit default swaps, which displaced traditional mortgages, where the borrower’s credit was a reality lenders were forced to confront when granting a loan.)

Lanier also thinks it is far beyond our current capacity to replicate human intelligence in the form of software, and that when it appears we have actually done so, what we have in fact achieved is a massive reduction in complexity, one which has likely stripped away the most human aspects of whatever quality or activity we are trying to replicate in machines. Take the case of chess, where the psychological aspects of the game are stripped away to create chess-playing machines and the game is reduced to the movement of pieces around a board. Of course, even in this case, it really isn’t the chess-playing machine that has won but the human engineers and programmers behind it, who have figured out how to make and program such a machine. Lanier doesn’t even think it is necessary to locate a human activity on a machine for that activity to be stripped of its human elements. He again uses the case of chess, only this time chess played against a grandmaster not by a machine but by a crowd, wherein individual choices are averaged out to choose the move of the crowd “player”. He wants us to ask whether the human layer of chess- the history of the players, their psychological understanding of their opponent- is still in evidence in the game-play of this “hive-mind”. He thinks not.
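To see how little survives that reduction, here is a minimal sketch, assuming a simple plurality vote, of how such a crowd “player” might be implemented. The function name and the sample moves are illustrative assumptions of mine, not anything drawn from Lanier’s text.

```python
from collections import Counter

def crowd_move(proposals):
    """Pick the crowd 'player's' move by simple plurality vote.

    `proposals` is a list of moves in algebraic notation, one per
    participant. Everything about each participant -- their history,
    their reading of the opponent -- is discarded; only the raw count
    of identical suggestions remains.
    """
    votes = Counter(proposals)
    move, count = votes.most_common(1)[0]
    return move, count / len(proposals)

# Hypothetical round of voting in reply to a grandmaster's 1. e4
suggestions = ["c5", "e5", "c5", "Nf6", "c5", "e5"]
move, share = crowd_move(suggestions)
print(f"Crowd plays {move} ({share:.0%} of the vote)")
```

Whatever psychological insight any individual voter brought to their suggestion is averaged away at the moment of aggregation, which is precisely the loss of the “human layer” Lanier is pointing to.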

Like Jacobs, with her example of the origins of the US transportation system in the machinations of the automotive industry and the influence of an American government promoting an economy built around the automobile for reasons that had nothing to do with transportation as such- namely national security and the desire for full employment- Lanier sees the current state of computer technology and software not as a determined outcome but as a conscious choice that has been imposed upon the broader society by technologists. What he sees as dangerous here is that any software architecture is built upon a certain number of assumptions that amount to a philosophy, something he calls “digital lock-in”. That philosophy then becomes the technological world in which we live, without there ever having been any broader discussion in society around the question of whether this is truly what we want.

Examples of such assumptions are the non-value of privacy and the idea that everything is a vehicle for advertising. Lanier thinks the current treatment of content producers as providers of a shell for advertisement is driving artists to the wall. The fact is, we all eventually become stuck with these models once they become universal. We all end up using Facebook and Google because we have to if we want to participate in the online world. But we should realize that the assumptions behind these architectures were a choice, and it did not have to be this way.

It is my hope that, where the Internet is concerned, the market and innovation will provide solutions to these problems, even the problem of how artists and writers are to find a viable means of living in conditions of ubiquitously copyable content. But markets are far from perfect and, as Jacobs’ example of the development of the US transportation system shows, are far too often distorted by political manipulation.

A great example of this is the monopolization of the world’s agriculture by a handful of mammoth agribusinesses, a phenomenon detailed by Giulio Caperchi of the blog The Genealogy of Consent. In his post, Food Sovereignty, Caperchi details how the world food system is dominated by a small number of global firms and international organizations. He also introduces the novel concept of epistemological sovereignty: “the right to define what systems of knowledge are best suited for particular contexts”. These are ideas that are desperately needed, for if Lanier is right, we are about to embark on an even more dangerous experiment by applying the assumptions of computer science to the natural world, and he cites an article by one of the patriarchs of 20th century physics- Freeman Dyson- to show us that this is so.

There must be something between me and Freeman Dyson, for this is the second time in a short period that I have run into the musings of the man, first in doing research for a post I wrote on the science-fiction novel Accelerando, and now here. In Our Biotech Future, Dyson lays out what he thinks will be the future not just of the biotech industry and the biological sciences but of life itself.

Citing an article by Carl Woese on “the golden age” of life before species had evolved, when gene transfer between organisms was essentially unbounded and occurred rapidly, Dyson writes:

But then, one evil day, a cell resembling a primitive bacterium happened to find itself one jump ahead of its neighbors in efficiency. That cell, anticipating Bill Gates by three billion years, separated itself from the community and refused to share. Its offspring became the first species of bacteria and the first species of any kind reserving their intellectual property for their own private use.

And now, as Homo sapiens domesticates the new biotechnology, we are reviving the ancient pre-Darwinian practice of horizontal gene transfer, moving genes easily from microbes to plants and animals, blurring the boundaries between species. We are moving rapidly into the post-Darwinian era, when species other than our own will no longer exist, and the rules of Open Source sharing will be extended from the exchange of software to the exchange of genes. Then the evolution of life will once again be communal, as it was in the good old days before separate species and intellectual property were invented.

Dyson looks forward to an age when:

Domesticated biotechnology, once it gets into the hands of housewives and children, will give us an explosion of diversity of new living creatures, rather than the monoculture crops that the big corporations prefer. New lineages will proliferate to replace those that monoculture farming and deforestation have destroyed. Designing genomes will be a personal thing, a new art form as creative as painting or sculpture.

Dyson, like Lanier and Jacobs, praises complexity: he thinks swapping genes is akin to cultural evolution, which is more complex than biological evolution, and that the new biological science, unlike much of the physical sciences, will need to reflect this complexity. What he misses, and what both Jacobs and Lanier understand, is that the complexity of life does not emerge just from combination, but from memory, which acts as a constraint and limits choices. Rome is Rome, a person is a person, a species is a species because choices were made which have closed off alternatives.

Dyson is also looking at life through the eyes of the same reductionist science he thinks has reached its limits: I want to make a kitten that glows in the dark, so I insert a firefly gene, and so on. In doing this he is almost oblivious to the fact that in complex systems the consequences are often difficult to predict beforehand, and some could be incredibly dangerous, both for wild animals and plants and the ecosystems they live in and for us human beings as well. Some of this danger will come from bio-terrorism- persons deliberately creating organisms to harm other people- and this applies as much to any reinvigorated effort to develop such weapons on behalf of states as to the evil intentions of any nihilistic group or individual. Still, a good deal of the danger from such a flippant attitude towards the re-engineering of life is likely to arise from the unintended consequences of our actions. One might counter that we have been doing this sort of re-engineering at least since we domesticated plants and animals, and we have, though not on anything like the scale Dyson is proposing. Such a counter also forgets that one of the unintended consequences of agriculture was to produce diseases that leapt from domesticated animals to humans and resulted in the premature deaths of millions.

Applying the ideas of computer science to biology creates the assumption that life is software. This is an idea no doubt pregnant with discoveries that could improve the human condition, but in the end it is only an assumption- the map, not the territory. Holding to it too closely results in us treating all of life as if it were our plaything, aggressively rather than cautiously applying the paradigm until, like Jacobs’ decaying cities, Lanier’s straitjacket computer technologies, or Caperchi’s industrialized farming, it becomes the reality we have trapped ourselves in without ever having had a conversation about whether we wanted to live there.