How to avoid drowning in the Library of Babel

Library of Babel

Between us and the future stands an almost impregnable wall. We cannot see over it, or under it, or through it, no matter how hard we try. Sometimes the best way to see the future is to use the same tools we use to understand the present, which is also, at least partly, hidden from direct view by the dilemma inherent in our use of language. To understand anything we need language and symbols, but neither is the thing we are actually trying to grasp. Even the brain itself, through the senses, is a kind of funnel, giving us a narrow stream of information that we conflate with the entire world.

We often use science to get us beyond the box of our heads, and even, to a limited extent, to see into the future. Less discussed is the fact that we also use good metaphors, fables, and stories to attack the wall of the unknown obliquely- to get the rough outlines of something like the essence of reality while still not being able to see it as it truly is.

It shouldn’t surprise us that some of the most skilled writers at this sideways form of wall scaling ended up becoming blind. To be blind, especially to have become blind after a lifetime of sight, is to realize that there is a form to the world which is shrouded in darkness, that one can come to know only in bare outlines, and only by indirect methods. Touch, which for the sighted is merely a way to learn what something feels like, becomes for the blind an oblique and sometimes more revealing way to learn what something actually is.

The blind John Milton was a genius at using symbolism and metaphor to uncover a deeper and hidden reality, and he probably grasped more deeply than anyone before or after him that our age would be defined not by the chase after reality, but by our symbolic representation of its “value”, especially in the form of our ever more virtual talisman of money. Another, and much more recent, writer skilled at such an approach, whose work was even stranger than Milton’s in its imagery, was Jorge Luis Borges, whose writing defies easy categorization.

What Milton did for our confusion of the sign with the signified in Paradise Lost, Borges would do for the culmination of this symbolization in the “information age” with perhaps his most famous short story- “The Library of Babel”- a story that begins with the strange line:

The universe (which others call the Library) is composed of an indefinite, perhaps infinite number of hexagonal galleries. (112)

The inhabitants of Borges’ imagined Library (and one needs to say inhabitants rather than occupants, for the Library is not just a structure but the entire world, the universe in which all human beings exist) “live” in smallish compartments attached to vestibules on hexagonal galleries lined with shelves of books. Mirrors are placed so that when one looks out of one’s individual hexagon one sees a repeating pattern of hexagons- the origin of arguments over whether the Library is infinite or just appears to be so.

All this would be little but an Escher drawing in the form of words were it not for other aspects that reflect Borges’ pure genius. The Library contains not just all books, but all possible books, and therein lies the dilemma of its inhabitants- for if all possible books exist, it is impossible to find any particular book.

Some book explaining the origin of the Library and its system of organization logically must exist, but that book is essentially undiscoverable, surrounded by a perhaps infinite number of other books, the vast majority of which are just nonsensical combinations of letters. How could the inhabitants solve such a puzzle? A sect called the Purifiers thinks it has the answer- destroy all the nonsensical books- but the futility of this project is soon recognized. Destruction barely puts a dent in the sheer number of nonsensical books in the Library.

The situation within the Library began with the hope that all knowledge, or at least the key to all knowledge, would someday be discovered, but it has led to despair, as the narrator reflects:

I am perhaps misled by old age and fear but I suspect that the human species- the only species- teeters at the verge of extinction, yet that the Library- enlightened, solitary, infinite, perfectly unmoving, armed with precious volumes, pointless, incorruptible and secret- will endure. (118)

“The Library of Babel”, published in 1941, before the “information age” was even an idea, seems to capture one of the paradoxes of our era. For the period in which we live is indeed experiencing an exponential rise in knowledge, in useful information, but this is not the whole of the story, and it does not fully reveal what is not just a wonder but a predicament.

The best exploration of this wonder and predicament is James Gleick’s recent brilliant history of the “information age”, The Information: A History, A Theory, A Flood. In that book Gleick gives us a history of information from primitive times to the present, but the idea that everything- from our biology to our society to our physics- can be understood as a form of information is a very recent one. Gleick himself uses Borges’ strange tale of the Library of Babel as a metaphor through which we can better grasp our situation.

The imaginative leap into the information age dates from seven years after Borges’ story was published, when, in 1948, Claude Shannon, a researcher at Bell Labs, published his essay “A Mathematical Theory of Communication”. Shannon was interested in how one could communicate messages over channels polluted with noise, but his seemingly narrow aim ended up being revolutionary for fields far beyond his purview. The concept of information was about to take over biology- in the form of DNA- and would conquer communications in the form of the Internet and exponentially increasing powers of computation- the power to compress and represent things as bits.
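
To give a flavor of Shannon’s measure (a minimal sketch of my own, not anything from Gleick’s book): information is counted in bits, and a source emitting symbols with probabilities p_i carries on average H = -Σ p_i log2(p_i) bits per symbol- roughly, the number of yes-or-no questions needed, on average, to pin down the next symbol.

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message: str) -> float:
    """Empirical Shannon entropy: average bits needed per symbol."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A repetitive message needs fewer bits per symbol than a varied one.
print(entropy_bits_per_symbol("aaaaaaab"))  # ~0.54 bits/symbol
print(entropy_bits_per_symbol("abcdefgh"))  # 3.0 bits/symbol
```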

The physicist John Wheeler would come to see the physical universe itself and its laws as a form of digital information- “It from bit”.

Otherwise put, every “it” — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. “It from bit” symbolizes the idea that every item of the physical world has at bottom — a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-or-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

Gleick finds this understanding of reality as information to be both true in a deep sense and in many ways troubling. We are, it seems, constantly told that we are producing more information on a yearly basis than all of human history combined, though what this actually means is another thing. For there is a wide and seemingly unbridgeable gap between information and what would pass for knowledge or meaning. Shannon himself acknowledged this gap, and deliberately ignored it, for the meaningfulness of the information transmitted was, he thought, “irrelevant to the engineering problem” involved in transmitting it.

The Internet operates in this way, which is why a digital maven like Nicholas Negroponte is critical of the tech-culture’s shibboleth of “net neutrality”. To humorize his example: a digital copy of a regular-sized novel like Fahrenheit 451 is the digital equivalent of one second of video from “What Does the Fox Say?”, which, however much it cracks me up, just doesn’t seem right. The information content of something says almost nothing about its meaningfulness, and this is a problem for anyone looking at claims that we are producing incredible amounts of “knowledge” compared to any age in the past.
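
The back-of-envelope arithmetic is roughly right (my own illustrative figures, not Negroponte’s): a 46,000-word novel at about six bytes per word comes to around 0.3 megabytes, while video compressed at a typical 2 to 4 megabits per second yields 0.25 to 0.5 megabytes for a single second. Counted in bits, Bradbury’s novel and one second of the dancing fox really are about even.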

The problem is perhaps even worse than it appears at first blush. Take these two sets of numbers:

9 2 9 5 5 3 4 7 7 10

And:

9 7 4 3 8 5 4 2 6 3

The first set of numbers is a truly random set of the numbers 1 through 10 created using a random number generator. The second set has a pattern. Can you see it?

In the second set every other number is a prime between 2 and 7. Knowing this rule vastly shrinks the set of possible sequences of ten numbers between 1 and 10 that fit. But here’s the important takeaway- the first set of numbers contains vastly more information than the second set. Randomness increases the information content of a message. (If anyone out there with a better grasp of information theory thinks this example is somehow misleading or incorrect, please advise.)
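
One rough way to check this intuition at home (a sketch under the usual caveats- exact byte counts vary by compressor and machine): a compressor like zlib exploits patterns, so a rule-governed sequence collapses to little more than the length of its rule, while a random one barely shrinks at all.

```python
import random
import zlib

random.seed(0)

# A long random digit sequence vs. a rule-governed one of equal length.
random_seq = "".join(str(random.randint(0, 9)) for _ in range(10_000))
patterned_seq = "9743854263" * 1_000  # a short rule, repeated

for name, seq in [("random", random_seq), ("patterned", patterned_seq)]:
    compressed = zlib.compress(seq.encode())
    print(f"{name}: {len(seq)} chars -> {len(compressed)} bytes compressed")

# The random sequence resists compression; the patterned one collapses.
# More randomness means more information in Shannon's sense.
```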

The new tools of the age have meant that a flood of bits now inundates us, the majority of it without deep meaning or value, or even any meaning at all. We are confronted with a burst dam of information pouring itself over us. Gleick captures our dilemma with a quote from the philosopher of “enlightened doomsaying” Jean-Pierre Dupuy:

I take “hell” in its theological sense, i.e. a place which is devoid of grace- the undeserved, unnecessary, surprising, unforeseen. A paradox is at work here, ours is a world about which we pretend to have more and more information but which seems to us increasingly devoid of meaning. (418)

It is not that useful and meaningful information is not increasing, but that our exponential increase in information of value- medical and genetic information and the like- has risen in tandem with an exponential increase in nonsense and noise. This is the flood of Gleick’s title, and the way to avoid drowning in it is to learn how to swim or get oneself a boat.

A key to not drowning is to know when to hold your breath. Pushing for more and more data collection can be a waste of resources and time when it leads one away from actionable knowledge. If the flood of useless information you feel compelled to process seems overwhelming now, just wait for the full flowering of the “Internet of Things”, when everything you own, from your refrigerator to your bathroom mirror, will be “talking” to you.

Don’t get me wrong, there are aspects of the Internet of Things that are absolutely great. Take, for example, the work of the company Indoo.rs, which has created a sort of digital map throughout the San Francisco International Airport that allows a blind person using Bluetooth to know: “the location of every gate, newsstand, wine bar and iPhone charging station throughout the 640,000-square-foot terminal.”

This is exactly the kind of purpose the Internet of Things should be used for- a modern form of miracle that would have brought a different form of sight to those blind like Milton and Borges. We could also take a cue from the powers of imagination found in these authors and use such technologies to make our experience of the world deeper, more multifaceted, more immersive. Something like that is a far cry from an “augmented reality” covered in advertisements, or work like that of neuroscientist David Eagleman (whose projects I otherwise like) on a haptic belt that would massage its wearer into knowing instantaneously the contents of the latest Twitter feed or stock market gyrations.

Where it fails to make our lives richer, our digital technology should be helping us automate- meaning to make automatic and without thought- all that is least meaningful in our lives; this is what machines are good at. In areas of life that are fundamentally empty we should be decreasing, not increasing, our cognitive load. The drive for connectivity seems to be pushing in the opposite direction, forcing us to think about things that were formerly done by our thoughtless machines, which lacked precision but also didn’t give us much to think about.

Yet even before the Internet of Things has fully bloomed we already have problems. As Gleick notes, in all past ages it was a given that most of our knowledge would be lost. Hence the logic of a wonderful social institution like the New York Public Library. Gleick writes with sharp irony:

Now expectations have inverted. Everything may be recorded and preserved…. Where does it end? Not with the Library of Congress.

Perhaps our answer is something like the Internet Archive, whose work capturing and recording the internet as it comes into being and disappears is both amazing and might someday prove essential to the survival of our civilization- the analog of the copies made of the famed collection of the lost Library of Alexandria. (Watch this film.)

As individuals, however, we are still faced with the monumental task of separating the wheat from the chaff in the flood of information, the gems from the twerks. Our answer to this has been services which allow the aggregation and organization of this information, where, as Gleick writes:

 Searching and filtering are all that stand between this world and the Library of Babel.  (410)

And here is the one problem I had with Gleick’s otherwise mind-blowing book: he doesn’t really deal with questions of power. A good deal of power in our information society is shifting to those who provide these kinds of aggregating and sifting capabilities and claim on that basis to be able to make sense of a world brimming with noise and nonsense. The NSA makes this claim and is precisely the kind of intelligence agency one would predict to emerge in an information age.

Anne Neuberger, the Special Assistant to NSA Director Michael Rogers and the Director of the NSA’s Commercial Solutions Center, recently gave a talk at the Long Now Foundation. She faced a hostile audience of Silicon Valley types who felt burned by the NSA’s undermining of digital encryption, the reputation of American tech companies, and civil liberties. But nothing captured the essence of our situation better than a question from Paul Saffo.

Paul Saffo: It seems like the intelligence community have always been data optimists- that if we just had more data- and we know that after every crisis it’s ‘well, if we’d just have had more data we’d connect the dots.’ And there’s a classic example- I think of 9-11- but this was said by the Church Committee in 1975: it’s become common to translate criticism of intelligence results into demands for enlarged data collection. Does more data really lead to more insight?

As a friend is fond of saying, ‘perhaps the data haystack that the intelligence community has created has become too big to ever find the needle in.’

Neuberger’s telling answer was that the NSA needed better analytics, something the agency could learn from the big data practices of business. Gleick might have compared her answer to the plea for a demon by a character in a story by Stanislaw Lem, whom he quotes:

We want the Demon, you see, to extract from the dance of atoms only information that is genuine, like mathematical formulas and fashion magazines. (The Information 425)

Perhaps the best possible future for the NSA’s internet-hoovering facility at Bluffdale would be for it to someday end up as a memory bank of our telecommunications in the 21st century, a more expansive version of the Internet Archive. What it is not, however, is a way to anticipate the future on either small or large scales, or at least that is what history tells us. More information or data does not equal more knowledge.

We are forced to find meaning in the current of the flood. It is a matter of learning what is important to pay attention to and what to ignore, what we should remember and what we can safely forget, what the models we use to structure this information can tell us and what they can’t, what we actually know along with the boundaries of what we don’t. It is not an easy job, or as Gleick says:

 Selecting the genuine takes work; then forgetting takes even more work.

And it’s a job, it seems, that just gets harder as our communications systems become ever more burdened by deliberate abuse (69.6 percent of email in 2014 was spam, up almost 25 percent from a decade earlier) and in an age when quite extraordinary advances in artificial intelligence are being used not to solve our myriad problems, but to construct an insidious architecture of government and corporate stalking via our personal “data trails”, which some confuse with our “self”- a Laplacian ambition that Dag Kittlaus unguardedly praises as “incremental revenue nirvana”. Pity the Buddha.

As individuals, institutions and societies we need to find ways to stay afloat and navigate through the flood of information and its abuse that has burst upon us, for as in Borges’ Library there is no final escape from the world it represents. As Gleick powerfully concludes The Information:

We are all patrons of the Library of Babel now and we are the librarians too.

The Library will endure; it is the universe. As for us, everything has not been written; we are not turning into phantoms. We walk the corridors looking for lines of meaning amid the leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors in which we may recognize creatures of the information.

 

Can Machines Be Moral Actors?

The Tin Man by Greg Hildebrandt

Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science fiction: is it possible to make moral machines- or, in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one’s shot at tenure at risk, but as the revolution in robotics has rolled forward it has become an issue necessary to grapple with, and now.

Perhaps the most surprising thing to emerge out of the effort to think through what moral machines might mean has been less what it has revealed about our machines or their potential than what it has brought into relief about our own very human moral behavior and reasoning. But I am getting ahead of myself; let me begin at the beginning.

Step back for a second and think about the largely automated systems that surround you right now- not in any hypothetical future with our version of Rosie from The Jetsons or R2D2, but in the world in which we live today. The car you drove in this morning and the computer you are reading this on were brought to you in part by manufacturing robots. You may have stumbled across this piece via the suggestion of a search algorithm programmed, but not really run, by human hands. The electrical system supplying the computer or tablet on which you are reading is largely automated, as are the actions of the algorithms that are now trying to grow your 401K or pension fund. And while we might not be able to say what this automation will look like ten or twenty years down the road, we can nearly guarantee that barring some catastrophe there will only be more of it, and given the lag between software and hardware development this will be the case even should Moore’s Law completely stall out on us.

The question ethicists are asking isn’t really whether we should build an ethical dimension into these types of automated systems, but whether we can, and what doing so would mean for humans’ status as moral agents. Would building AMAs mean a decline in human moral dignity? Personally, I can’t really say I have a dog in this fight, but hopefully that will help me to see the views of all sides with more clarity.

Of course, the most ethically fraught area in which AMAs are being debated is warfare. There seems to be an evolutionary pull to deploy machines in combat with greater and greater levels of autonomy. The reason is simple: advanced nations want to minimize risks to their own soldiers, and remote-controlled weapons are at risk of being jammed and cut off from their controllers. Thus, machines need to be able to act independently of human puppeteers. A good case can be made, though I will not try to make it here, that building machines that can make autonomous decisions to actually kill human beings would constitute the crossing of a threshold humanity would have preferred to remain on this side of.

Still, while it’s typically a bad idea to attach one’s ethical precepts to what is technically possible, here it may serve us well, at least for now. The best case to be made against fully autonomous weapons is that artificial intelligence is nowhere near being able to make ethical decisions regarding the use of force on the battlefield, especially on mixed battlefields with civilians- which is what most battlefields are today. As Guarini and Bello argue in “Robotic Warfare: some challenges in moving from non-civilian to civilian theaters”, current forms of AI are not up to the task of threat ascription, can’t do mental attribution, and are unable to tackle the problem of isotropy, a fancy word that just means “the relevance of anything to anything”.

This is a classic problem of error through correlation, something that seems inherent not just in these types of weapons systems, but in the kinds of specious correlations made by big-data algorithms as well. It is the difficulty of gauging a human being’s intentions from the pattern of his or her behavior. Is that child holding a weapon or a toy? Is that woman running towards me because she wants to attack, or is she merely frightened and wants to act as a shield between me and her kids? This difficulty in making accurate mental attributions doesn’t need to revolve around life and death situations. Did I click on that ad because I’m thinking about moving to light beer or because I thought the woman in the ad looked cute? Too strong a belief in correlation can start to look like belief in magic- thinking that drinking the lite beer will get me a reasonable facsimile of the girl smiling at me on the TV.

Algorithms and the machines they run are still pretty bad at this sort of attribution- bad enough, at least right now, that there’s no way they can be deployed ethically on the battlefield. Unfortunately, however, the debate about ethical machines often gets lost in the necessary, but much less comprehensive, fight over “killer robots”- this despite the fact that only a small part of the ethical remit of our new and much more intelligent machines will actually involve robots acting as weapons.

One might think that Asimov already figured all this out with his Three Laws of Robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The problem is that the types of “robots” we are talking about bear little resemblance to the ones dreamed up by the great science-fiction writer, even if Asimov merely intended to explore the possible tensions between the laws.

The problems robotics ethicists face today have more in common with the “Trolley Problem”, in which a machine has to make a decision between competing goods rather than contradictory imperatives. A self-driving car might be faced with the choice of hitting a pedestrian or hitting a school bus. Which should it choose? If your health care services are being managed by an algorithm, what are its criteria for allocating scarce resources?

An approach to solving these types of problems might be to consistently use one of the major moral philosophies- Kantian, utilitarian, virtue ethics and the like- as a guide for the types of decisions machines may make. Kant’s categorical imperative of “Acting only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction,” or aiming for utilitarianism’s “greatest good of the greatest number”, might seem like a good approach, but as Wendell Wallach and Colin Allen point out in their book Moral Machines, such seemingly logically straightforward rules are for all practical purposes incalculable.
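
To see why, consider a toy sketch of a “utilitarian” chooser (entirely hypothetical- the scenario, parties, probabilities and welfare numbers are mine, invented for illustration). With three actions and two affected parties the sum is trivial; the trouble is that a real agent would have to enumerate world-states, and the number of branches grows exponentially with the number of parties and time steps.

```python
# Toy utilitarian chooser: pick the action maximizing total expected welfare.
# All actions, parties, probabilities, and welfare numbers are hypothetical.

ACTIONS = {
    # action: {affected party: (probability of harm, welfare if harmed)}
    "brake":        {"pedestrian": (0.8, -10), "bus": (0.0, -40)},
    "swerve_left":  {"pedestrian": (0.1, -10), "bus": (0.7, -40)},
    "swerve_right": {"pedestrian": (0.2, -10), "bus": (0.1, -40)},
}

def expected_welfare(action: str) -> float:
    """Sum of probability-weighted welfare over every affected party."""
    return sum(p * w for p, w in ACTIONS[action].values())

best = max(ACTIONS, key=expected_welfare)
print(best, expected_welfare(best))  # trivial for this toy world...

# ...but with n parties, each in k possible states, over t time steps,
# a full outcome enumeration has on the order of k**(n*t) branches-
# the practical sense in which "greatest good" is incalculable.
```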

Perhaps if we ever develop real quantum computers capable of scanning through every possible universe we would have machines that can solve these problems, but for the moment we do not. Oddly enough, the moral philosophy most dependent on human context, virtue ethics, seems to have the best chance of being substantiated in machines, but the problem is that virtues are highly culture-dependent. We might get machines that make vastly different and opposed ethical decisions depending on the culture in which they are embedded.

And here we find perhaps the biggest question that has been raised by trying to think through the problem of moral machines. For if machines are deemed incapable of using any of the moral systems we have devised to come to ethical decisions, we might wonder whether human beings really can either. This is the case made by Anthony Beavers, who argues that the very attempt to make machines moral might be leading us towards a sort of ethical nihilism.

I’m not so sure. Or rather, as long as our machines, whatever their degree of autonomy, merely reflect the will of their human programmers, I think we’re safe declaring them moral actors but not moral agents, and this should save us from slipping down the slope of ethical nihilism Beavers is rightly concerned about. In other words, I think Wallach and Allen are wrong in asserting that intentionality doesn’t matter when judging an entity to be a moral agent or not. Or as they say:

Functional equivalence of behavior is all that can possibly matter for the practical issues of designing AMAs. (Emphasis theirs, p. 68)

Wallach and Allen want to repeat the same behaviorist mistake made by Alan Turing and his assertion that consciousness didn’t matter when it came to the question of intelligence, a blind alley they follow so far as to propose their own version of the infamous test- what they call a Moral Turing Test, or MTT. Yet it’s hard to see how we can grant full moral agency to any entity that has no idea what it is actually deciding- indeed, that would just as earnestly do the exact opposite, such as deliberately target women and children, if its program told it to do so.

Still, for the immediate future, and with the frightening and sad exception of robots on the battlefield, the role of moral machines seems less likely to involve making ethical decisions themselves than helping human beings make them. In Moral Machines Wallach and Allen discuss MedEthEx, an amazing algorithm that helps healthcare professionals with decision making. It’s an exciting time for moral philosophers, who will get to turn their ideas into all sorts of apps and algorithms that will hopefully aid moral decision making for individuals and groups.

Some might ask: is leaning on algorithms to help one make moral decisions a form of “cheating”? I’m not sure, but I don’t think so. A moral app reminds me of a story I heard once about George Washington that has stuck with me ever since. From the time he was a small boy into old age, Washington carried a little pocket virtue guide with him, full of all kinds of rules of thumb for how to act. Some might call that a crutch, but I just call it a tool, and human beings are the only animal that can’t thrive without its tools. The very moral systems we are taught as children might equally be thought of as tools of this sort. Washington only had a way of keeping them near at hand.

The trouble comes when one becomes too dependent on such “automation” and in the process loses the underlying capabilities once the tool is removed. Yet this is a problem inherent in all technology. The ability of people to do long division atrophies because they always have a calculator at hand. I have a heck of a time making a fire without matches or a lighter, and would have difficulty dealing with cold weather without clothes.

It’s certainly true that our capacity to be good people is more important to our humanity than our ability to do maths in our head or on scratch paper, or even to make fire or protect ourselves from the elements, so we’ll need to be especially conscious, as moral tools develop, to keep our natural ethical skills sharp.

This again applies more broadly than the issue of moral machines, but we also need to be able to better identify the assumptions that underlie the programs running in the instruments that surround us. It’s not so much “code or be coded” as the ability to unpack the beliefs which programs substantiate. Perhaps someday we’ll even have something akin to food labels on the programs we use that will inform us exactly what those assumptions are. In other words, we have a job in front of us, for whether they help us or hinder us in our goal to be better people, machines remain, for the moment at least, mere actors, and a reflection of our own capacity for good and evil, rather than moral agents in themselves.

 

City As Superintelligence

Medieval Paris

A movement is afoot to cover some of the largest and most populated cities in the world with a sophisticated array of interconnected sensors, cameras, and recording devices, able to track and respond to every crime or traffic jam, every crisis or pandemic, as if it were an artificial immune system spread out over hundreds of densely packed kilometers filled with millions of human beings. The movement goes by the name of smart-cities, or sometimes sentient cities, and the fate of the project is intimately tied to the fate of humanity in the 21st century and beyond, because the question of how the city is organized will define the world we live in from here forwards- the beginning of the era of urban mankind.

Here are just some of many possible examples of smart cities at work. There is the city of Songdo in South Korea, a kind of testing ground where companies such as Cisco can experiment with integrated technologies such as, to quote a recent article on the subject:

TelePresence system, an advanced videoconferencing technology that allows residents to access a wide range of services including remote health care, beauty consulting and remote learning, as well as touch screens that enable residents to control their unit’s energy use.

Another example would be IBM’s Smart City Initiative in Rio, which has covered that city with a dense network of sensors and cameras that allow centralized monitoring and control of vital city functions, and which was somewhat brazenly promoted by that city’s mayor during a TED Talk in 2012. New York has set up a similar system, but it is in the non-Western world where smart cities will live or die, because that is where almost all of the world’s increasingly rapid urbanization is taking place.

Thus India, which has yet to urbanize like its neighbor, and sometimes rival, China, has plans to build up to 100 smart cities, with $4.5 billion of the funding for such projects being provided by perhaps the most urbanized country on the planet- Japan.

China continues to urbanize at a historically unprecedented pace, with 250 million of its people- the equivalent of the entire population of the United States a generation ago- set to move to its cities in the next 12 years. (I didn’t forget a zero.) There you have a city few of us have even heard of- Chongqing- which The Guardian several years back characterized as “the fastest growing urban center on the planet”, with more people in it than the entire countries of Peru and Iraq. No doubt in response to urbanization pressure, and at least as of 2011, Cisco was helping that city with its so-called Peaceful Chongqing Project, an attempt to blanket the city in 500,000 video surveillance cameras- a collaboration that was possibly derailed by Edward Snowden’s allegations that the NSA had infiltrated or co-opted U.S. companies.

Yet there are other smart-city initiatives that go beyond monitoring technologies. Under this rubric should fall the renewed interest in arcologies- massive buildings that contain within them an entire city, and thus in principle allow a city to be managed in terms of its climate, flows, etc. in the same way the internal environment of a skyscraper can be managed. China had an arcology in the works at Dongtan, which appears to have been scrapped over corruption and cost-overrun concerns. Abu Dhabi has its green arcology in Masdar City, but it’s in Russia, in the grip of a 21st century version of czarism, of all places, where the mother of all arcologies is planned- architect Norman Foster’s Crystal Island, which, if actually built, would be the largest structure on the planet.

On the surface, there is actually much to like about smart-cities and their related arcologies. Smart-cities hold out the promise of greater efficiency for an energy-starved and warming world. They should allow city management to be more responsive to citizens. All things being equal, smart-cities should be better than “dumb” ones at responding to everything from common fires and traffic accidents to major man-made and natural disasters. If Wendy Orent is correct, as she wrote in a recent issue of AEON, that we have less to fear from pandemics emerging from the wilderness, such as Ebola, than from those that evolve in areas of extreme human density, smart-city applications should make the response to pandemics both quicker and more effective.

Especially in terms of arcologies, smart-cities represent something relatively new. We’ve had our two major models of the city since the early to mid-20th century: the skyscraper cities pioneered by New York and Chicago, and the cul-de-sac suburban sprawl of cities dominated by the automobile, like Phoenix. Cities going up now in the developing world certainly look more modern than American cities, much of whose infrastructure is in a state of decay, but the model is the same, with the marked exception of all those super-trains.

All that said, there are problems with smart-cities, and the thinker who has written most extensively on the subject, Anthony M. Townsend, lays them out excellently in his book Smart Cities: Big Data, Civic Hackers and the Quest for a New Utopia. Townsend sees three potential problems with smart-cities- they might prove, in his terms, “buggy, brittle, and bugged”.

Like all software, the kind that will be used to run smart-cities might exhibit unwanted surprises. We’ve seen this in some of the most sophisticated software we have running: financial trading algorithms whose “flash crashes” have felled billion-dollar companies.

The loss of money, even a great deal of money, is something any reasonably healthy society should be able to absorb, but what if buggy software made essential services go offline over an extended period? Cascades from services now coupled by smart-city software could take out electricity and communication in a heat wave or a re-run of last winter’s “polar vortex” and lead to loss of life. Perhaps having functions separated in silos and with a good deal of redundancy, even at the cost of inefficiency, is “smarter” than having them tightly coupled and under one system. That is, after all, how the human brain works.

Smart-cities might also be brittle. We might not be able to see that we had built a precarious architecture that could collapse in the face of some stressor or intentional effort to harm- ahem, Windows. Computers crash, and sometimes do so for reasons we are completely at a loss to identify. Or imagine someone blackmailing a city by threatening to shut it down after having hacked its management system. Old-school dumb-cities don’t really crash, even if they can sicken and die, and it’s hard to say they can be hacked.

Would we be in danger of decreasing a city’s resilience by compressing its complexity into an algorithm? If something like Stephen Wolfram’s principles of computational equivalence and computational irreducibility is correct, then the city is already a kind of computation, and no model we can create of it will ever be more effective than this natural computation itself.

Or, to make my meaning clearer, imagine a person who had suffered some horrible accident such that, to save him, you had to replace all of his body’s natural information processing with a computer program. Such a program would have to regulate everything from breathing to metabolism to muscle movement, along with the immune system and the exchanges between neurons, not to mention a dozen other things. My guess is that you’d have to go through many generations of such programs before they were anywhere near workable- that the first generations would miss important elements, be based on wrong assumptions about how things worked, and be loaded with perhaps catastrophic design errors that you couldn’t identify until the program was fully run in multiple iterations.

We are blissfully unaware that we are the product of billions of years of “engineering” in which “design” failures were weeded out by evolution. Cities have only a few thousand years of a similar type of evolution behind them, but trying to control a great number of their functions via algorithms run by “command centers” might pose risks similar to those in my body example. Reducing city functions to something we can compute in silicon might oversimplify the city in such a way as to reduce its resilience to stressors cities have naturally evolved to absorb. That is, there is, in all use of “Big Data”, a temptation to interpret reality only in light of the model that scaffolds this data, or to reframe problems in ways that can be mathematically modeled. We set ourselves up for crises when we confuse the map with the territory, or as Jaron Lanier said:

What makes something real is that it is impossible to represent it to completion.

Lastly, and as my initial examples of smart-cities should have indicated, smart-cities are by design bugged. They are bugged so as to surveil their citizens in an effort to prevent crime or terrorism, or even just to respond to accidents or disasters. Yet the promise of safety comes at the cost of one of the virtues of city living- the freedom granted by anonymity. But even if we care nothing for such things, I’ve got news- trading privacy for security doesn’t even work.

Chongqing may spend tens of millions of dollars installing CCTV cameras, but would-be hooligans or criminals, or just people who don’t like being watched, such as those in London, have a twenty-dollar answer to all these gizmos- it’s called a hoodie. Likewise, a simple pattern of dollar-store face paint, strategically applied, can short-circuit the most sophisticated facial recognition software. I will never cease to be amazed at human ingenuity.

We need to acknowledge that it is largely companies or individuals with extremely deep pockets, and even deeper political connections, that are promoting this model of the city. Townsend estimates it is potentially a $100 billion business. We need to exercise our historical memory and recall how it was automobile companies that lobbied for and ended up creating our world of sprawl. Before investing millions or even billions, cities need to have an idea of what kind of future they want and not be swayed by the latest technological trends.

This is especially the case when it comes to cities in the developing world, where conditions often resemble something more out of Dickens’ 19th century than even the 20th. When I enthusiastically asked a Chinese student about the arcology at Dongtan he responded with something like: “Fools! Who would want to live in such a thing! It’s a waste of money. We need clean air and water, not such craziness!” And he’s no doubt largely right. And perhaps we might be happy that the project ultimately unraveled and say with Thoreau:

As for your high towers and monuments, there was a crazy fellow once in this town who undertook to dig through to China, and he got so far that, as he said, he heard the Chinese pots and kettles rattle; but I think that I shall not go out of my way to admire the hole which he made. Many are concerned about the monuments of the West and the East — to know who built them. For my part, I should like to know who in those days did not build them — who were above such trifling.

The age of connectivity, if it’s done thoughtfully, could bring us cities that are cleaner, greener, and more able to deal with the shocks of disaster or respond to the spread of disease. Truly smart-cities should support a more active citizenry, a less tone-deaf bureaucracy, and a more socially and culturally rich, entertaining, and civil life- the very reasons human beings have chosen to live in cities in the first place.

If the age of connectivity is done wrong, cities will have poured scarce resources down a hole of corruption as deep as the one dug by Thoreau’s townsman, will have turned vibrant cultural and historical environments into corporate “flat-pack” versions of tomorrowland, and, most frighteningly of all, will have turned the potential democratic agora of the city into a massive panopticon of Orwellian monitoring and control. Cities, in the Wolfram rather than the Bostrom sense, are already a sort of super-intelligence, or better, a hive mind of interconnected yet free individuals, more vibrant and important than any human-built structure imaginable. Will we let them stay that way?