Can Machines Be Moral Actors?

The Tin Man by Greg Hildebrandt

Ethicists have been asking themselves a question over the last couple of years that seems to come right out of science-fiction. Is it possible to make moral machines, or in their lingo, autonomous moral agents (AMAs)? Asking the question might have seemed silly not so long ago, or so speculative as to put one’s chances at tenure at risk, but as the revolution in robotics has rolled forward it has become an issue we need to grapple with, and now.

Perhaps the most surprising thing to emerge from the effort to think through what moral machines might mean has been less what it has revealed about our machines or their potential than what it has thrown into relief about our own very human moral behavior and reasoning. But I am getting ahead of myself; let me begin at the beginning.

Step back for a second and think about the largely automated systems that surround you right now, not in any hypothetical future with our version of Rosie from the Jetsons or R2D2, but in the world in which we live today. The car you drove in this morning and the computer you are reading this on were brought to you in part by manufacturing robots. You may have stumbled across this piece via the suggestion of a search algorithm programmed, but not really run, by human hands. The electrical system supplying the computer or tablet on which you are reading is largely automated, as are the actions of the algorithms that are now trying to grow your 401K or pension fund. And while we might not be able to say what this automation will look like ten or twenty years down the road, we can nearly guarantee that, barring some catastrophe, there will only be more of it, and given the lag between software and hardware development this will be the case even should Moore’s Law completely stall out on us.

The question ethicists are asking isn’t really whether we should build an ethical dimension into these types of automated systems, but whether we can, and what doing so would mean for our status as moral agents. Would building AMAs mean a decline in human moral dignity? Personally, I can’t really say I have a dog in this fight, but hopefully that will help me see the views of all sides with more clarity.

Of course, the most ethically fraught area in which AMAs are being debated is warfare. There seems to be an evolutionary pull to deploy machines in combat with greater and greater levels of autonomy. The reason is simple: advanced nations want to minimize risks to their own soldiers, and remote-controlled weapons are at risk of being jammed and cut off from their controllers. Thus, machines need to be able to act independently of human puppeteers. A good case can be made, though I will not try to make it here, that building machines that can make autonomous decisions to actually kill human beings would constitute the crossing of a threshold humanity would have preferred to remain on this side of.

Still, while it’s typically a bad idea to attach one’s ethical precepts to what is technically possible, here it may serve us well, at least for now. The best case to be made against fully autonomous weapons is that artificial intelligence is nowhere near being able to make ethical decisions regarding the use of force on the battlefield, especially on mixed battlefields with civilians, which is what most battlefields are today. As argued by Guarini and Bello in “Robotic Warfare: some challenges in moving from non-civilian to civilian theaters”, current forms of AI are not up to the task of threat ascription, cannot make mental attributions, and are unable to tackle the problem of isotropy, a fancy word that just means “the relevance of anything to anything”.

This is a classic problem of error through correlation, something that seems to be inherent not just in these types of weapons systems, but in the kinds of specious correlations made by big-data algorithms as well. It is the difficulty of gauging a human being’s intentions from the pattern of his or her behavior. Is that child holding a weapon or a toy? Is that woman running towards me because she wants to attack, or is she merely frightened and wants to act as a shield between me and her kids? This difficulty in making accurate mental attributions doesn’t need to revolve around life and death situations. Did I click on that ad because I’m thinking about moving to light beer or because I thought the woman in the ad looked cute? Too strong a belief in correlation can start to look like belief in magic: thinking that drinking the light beer will get me a reasonable facsimile of the girl smiling at me on the TV.

Algorithms and the machines they run are still pretty bad at this sort of attribution. Bad enough, at least right now, that there’s no way they can be deployed ethically on the battlefield. Unfortunately, however, the debate about ethical machines often gets lost in the necessary, but much less comprehensive, fight over “killer robots”. This despite the fact that war will make up only a small part of the ethical remit of our new and much more intelligent machines.

One might think that Asimov already figured all this out with his Three Laws of Robotics: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The problem is that the types of “robots” we are talking about bear little resemblance to the ones dreamed up by the great science-fiction writer, and in any case Asimov largely intended his laws as a device for exploring the tensions between them.

The problems robotics ethicists face today have more in common with the “Trolley Problem”, where a machine has to make a decision between competing goods rather than contradictory imperatives. A self-driving car might be faced with a choice between hitting a pedestrian and hitting a school bus. Which should it choose? If your health care services are being managed by an algorithm, what are its criteria for allocating scarce resources?
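
To make the point concrete, here is a minimal sketch of how such a choice actually gets made inside a machine; the scenario, the harm estimates, and the weights are all invented for illustration, not drawn from any real vehicle’s code. The machine simply minimizes a cost function whose weights a human chose in advance.

```python
# Hypothetical sketch: a machine's "moral" choice reduced to a programmer-chosen cost function.
# The options, harm estimates, and weights below are all invented for illustration.

def choose_action(options, weights):
    """Pick the option with the lowest weighted expected harm."""
    def cost(option):
        return sum(weights[kind] * amount for kind, amount in option["harms"].items())
    return min(options, key=cost)

options = [
    {"name": "swerve toward pedestrian", "harms": {"pedestrian": 0.9, "passengers": 0.1}},
    {"name": "brake toward school bus",  "harms": {"bus_children": 0.4, "passengers": 0.3}},
]

# The entire "ethics" of the system lives in these numbers, which a human decided beforehand.
weights = {"pedestrian": 1.0, "bus_children": 1.0, "passengers": 1.0}

print(choose_action(options, weights)["name"])
```

Change the weights and the “moral” decision changes with them, which is precisely the sense in which the machine acts on an ethical judgment without ever having made one.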

One approach to solving these types of problems might be to consistently use one of the major moral philosophies, Kantian, Utilitarian, Virtue ethics and the like, as a guide for the types of decisions machines may make. Kant’s categorical imperative to “act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction,” or aiming for utilitarianism’s “greatest good for the greatest number,” might seem like a good approach, but as Wendell Wallach and Colin Allen point out in their book Moral Machines, such seemingly logically straightforward rules are for all practical purposes incalculable.
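
A toy calculation, under obviously artificial assumptions, shows why. Suppose a literal utilitarian machine tried to evaluate every combination of possible responses among the parties its action affects; the number of outcome combinations grows exponentially with the number of parties, which is the sense in which the rule is incalculable in practice.

```python
# Toy illustration of why a literal "greatest good for the greatest number" calculation explodes.
# If each of n affected parties has a possible reactions, the machine would have to
# evaluate a ** n outcome combinations. The numbers here are purely illustrative.

def outcomes_to_evaluate(reactions_per_party, parties):
    return reactions_per_party ** parties

for parties in (5, 20, 50, 100):
    print(f"{parties} affected parties -> {outcomes_to_evaluate(3, parties):.3e} combinations")
```

With a hundred affected parties and just three possible reactions apiece, the count is already around 5 x 10^47 combinations, far beyond anything that could actually be evaluated.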

Perhaps if we ever develop real quantum computers capable of scanning through every possible universe we would have machines that can solve these problems, but for the moment we do not. Oddly enough, the moral philosophy most dependent on human context, Virtue ethics, seems to have the best chance of being instantiated in machines, but the problem is that virtues are highly culture-dependent. We might get machines that make vastly different and opposed ethical decisions depending on the culture in which they are embedded.

And here we find perhaps the biggest question that has been raised by trying to think through the problem of moral machines. For if machines are deemed incapable of using any of the moral systems we have devised to come to ethical decisions, we might wonder whether human beings really can either. This is the case made by Anthony Beavers, who argues that the very attempt to make machines moral might be leading us towards a sort of ethical nihilism.

I’m not so sure. Or rather, as long as our machines, whatever their degree of autonomy, merely reflect the will of their human programmers, I think we’re safe declaring them moral actors but not moral agents, and this should save us from slipping down the slope of ethical nihilism Beavers is rightly concerned about. In other words, I think Wallach and Allen are wrong in asserting that intentionality doesn’t matter when judging whether an entity is a moral agent or not. Or as they say:

“Functional equivalence of behavior is all that can possibly matter for the practical issues of designing AMAs.” (Emphasis theirs, p. 68)

Wallach and Allen here repeat the same behaviorist mistake made by Alan Turing in his assertion that consciousness didn’t matter when it came to the question of intelligence, a blind alley they follow so far as to propose their own version of the infamous test, what they call a Moral Turing Test, or MTT. Yet it’s hard to see how we can grant full moral agency to any entity that has no idea what it is actually deciding, and that would just as earnestly do the exact opposite, such as deliberately targeting women and children, if its program told it to do so.

Still, for the immediate future, and with the frightening and sad exception of robots on the battlefield, the role of moral machines seems likely to have less to do with making ethical decisions themselves than with helping human beings make them. In Moral Machines Wallach and Allen discuss MedEthEx, an amazing algorithm that helps healthcare professionals with decision making. It’s an exciting time for moral philosophers, who will get to turn their ideas into all sorts of apps and algorithms that will hopefully aid moral decision making for individuals and groups.

Some might ask whether leaning on algorithms to help one make moral decisions is a form of “cheating”. I’m not sure, but I don’t think so. A moral app reminds me of a story I heard once about George Washington that has stuck with me ever since. Washington, from the time he was a small boy into old age, carried a little pocket virtue guide with him, full of all kinds of rules of thumb for how to act. Some might call that a crutch, but I just call it a tool, and human beings are the only animal that can’t thrive without its tools. The very moral systems we are taught as children might equally be thought of as tools of this sort. Washington simply had a way of keeping them near at hand.

The trouble comes when one becomes too dependent on such “automation” and in the process loses the underlying capability once the tool is removed. Yet this is a problem inherent in all technology. The ability of people to do long division atrophies because they always have a calculator at hand. I have a heck of a time making a fire without matches or a lighter and would have difficulty dealing with cold weather without clothes.

It’s certainly true that our capacity to be good people is more important to our humanity than our ability to do math in our heads or on scratch paper, or even to make fire or protect ourselves from the elements, so we’ll need to be especially conscious, as moral tools develop, to keep our natural ethical skills sharp.

This again applies more broadly than the issue of moral machines, but we also need to be able to better identify the assumptions that underlie the programs running in the instruments that surround us. It’s not so much “code or be coded” as the ability to unpack the beliefs those programs embody. Perhaps someday we’ll even have something akin to food labels on the programs we use that will inform us exactly what such assumptions are. In other words, we have a job in front of us, for whether they help us or hinder us in our goal to be better people, machines remain, for the moment at least, moral actors rather than moral agents, reflections of our own capacity for good and evil rather than sources of good and evil in themselves.

 

City As Superintelligence

Medieval Paris

A movement is afoot to cover some of the largest and most populated cities in the world with a sophisticated array of interconnected sensors, cameras, and recording devices, able to track and respond to every crime or traffic jam, every crisis or pandemic, as if it were an artificial immune system spread out over hundreds of densely packed square kilometers filled with millions of human beings. The movement goes by the name of smart-cities, or sometimes sentient cities, and the fate of the project is intimately tied to the fate of humanity in the 21st century and beyond, because the question of how the city is organized will define the world we live in from here forward, at the beginning of the era of urban humankind.

Here are just some of many possible examples of smart cities at work. There is the city of Songdo in South Korea, a kind of testing ground where companies such as Cisco can experiment with integrated technologies such as, to quote a recent article on the subject:

TelePresence system, an advanced videoconferencing technology that allows residents to access a wide range of services including remote health care, beauty consulting and remote learning, as well as touch screens that enable residents to control their unit’s energy use.

Another example would be IBM’s Smart City Initiative in Rio, which has covered that city with a dense network of sensors and cameras allowing centralized monitoring and control of vital city functions, and was somewhat brazenly promoted by that city’s mayor during a TED Talk in 2012. New York has set up a similar system, but it is in the non-Western world where smart cities will live or die, because that is where almost all of the world’s increasingly rapid urbanization is taking place.

Thus India, which has yet to urbanize like its neighbor, and sometimes rival, China, has plans to build up to 100 smart cities, with $4.5 billion of the funding for such projects being provided by perhaps the most urbanized country on the planet, Japan.

China continues to urbanize at a historically unprecedented pace, with 250 million of its people, the equivalent of the entire population of the United States a generation ago, set to move to its cities in the next 12 years. (I didn’t forget a zero.) There you have a city that few of us have even heard of, Chongqing, which The Guardian several years back characterized as “the fastest growing urban center on the planet”, with more people in it than the entire countries of Peru and Iraq. No doubt in response to urbanization pressure, and at least as of 2011, Cisco was helping that city with its so-called Peaceful Chongqing Project, an attempt to blanket the city in 500,000 video surveillance cameras, a collaboration that was possibly derailed by Edward Snowden’s allegations that the NSA had infiltrated or co-opted U.S. companies.

Yet there are other smart-city initiatives that go beyond monitoring technologies. Under this rubric should fall the renewed interest in arcologies, massive buildings that contain within them an entire city, and thus in principle allow a city to be managed, in terms of its climate, flows, and so on, in the same way the internal environment of a skyscraper can be managed. China had an arcology in the works in Dongtan, which appears to have been scrapped over corruption and cost-overrun concerns. Abu Dhabi has its green arcology in Masdar City, but it’s in Russia, in the grip of a 21st-century version of czarism, of all places, where the mother of all arcologies is planned: architect Norman Foster’s Crystal Island, which, if actually built, would be the largest structure on the planet.

On the surface, there is actually much to like about smart-cities and their related arcologies. Smart-cities hold out the promise of greater efficiency for an energy-starved and warming world. They should allow city management to be more responsive to citizens. All things being equal, smart-cities should be better than “dumb” ones at responding to everything from common fires and traffic accidents to major man-made and natural disasters. If Wendy Orent is correct, as she argued in a recent piece in AEON, that we have less to fear from pandemics emerging from the wilderness, such as Ebola, than from those that evolve in areas of extreme human density, smart-city applications should make the response to pandemics both quicker and more effective.

Especially in terms of arcologies, smart-cities represent something relatively new. We’ve had our two major models of the city since the early to mid-20th century: the skyscraper cities pioneered by New York and Chicago, and the cul-de-sac suburban sprawl of automobile-dominated cities like Phoenix. Cities going up now in the developing world certainly look more modern than American cities, much of whose infrastructure is in a state of decay, but the model is the same, with the marked exception of all those super-trains.

All that said, there are problems with smart-cities, and the thinker who has written most extensively on the subject, Anthony M. Townsend, lays them out excellently in his book Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia. Townsend sees three potential problems with smart-cities: they might prove, in his terms, “buggy, brittle, and bugged”.

Like all software, the kind that will be used to run smart-cities might exhibit unwanted surprises. We’ve seen this in some of the most sophisticated software we have running: financial trading algorithms, whose “flash crashes” have felled billion-dollar companies.

The loss of money, even a great deal of money, is something any reasonably healthy society should be able to absorb, but what if buggy software made essential services go offline for an extended period? Cascades from services now coupled by smart-city software could take out electricity and communication in a heat wave or a re-run of last winter’s “polar vortex” and lead to loss of life. Perhaps having functions separated in silos and with a good deal of redundancy, even at the cost of inefficiency, is “smarter” than having them tightly coupled and under one system. That is, after all, how the human brain works.
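
The intuition behind that last point can be shown with a toy simulation; the services, failure probabilities, and coupling model below are all invented for illustration and make no claim about any real city’s software.

```python
import random

# Toy model: compare a tightly coupled system, where one shared controller means any
# component failure takes every service down, with siloed services that each have a
# redundant backup. All probabilities are invented for illustration.

SERVICES = ["power", "water", "traffic", "comms"]
P_FAIL = 0.05          # chance a single component fails during a crisis
TRIALS = 100_000

def coupled_blackout():
    # Tightly coupled: if any one component fails, every service is lost.
    return any(random.random() < P_FAIL for _ in SERVICES)

def siloed_blackout():
    # Independent silos with redundancy: total blackout only if every service loses both copies.
    return all(random.random() < P_FAIL and random.random() < P_FAIL for _ in SERVICES)

print("coupled, chance of total blackout:", sum(coupled_blackout() for _ in range(TRIALS)) / TRIALS)
print("siloed,  chance of total blackout:", sum(siloed_blackout() for _ in range(TRIALS)) / TRIALS)
```

Run it and the coupled architecture loses everything in roughly one crisis in five, while the siloed, redundant one essentially never does, the price being duplicated, “inefficient” infrastructure.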

Smart-cities might also be brittle. We might not be able to see that we had built a precarious architecture that could collapse in the face of some stressor or deliberate effort to do harm, ahem, Windows. Computers crash, and sometimes do so for reasons we are completely at a loss to identify. Or imagine someone blackmailing a city by threatening to shut it down after having hacked its management system. Old-school dumb-cities don’t really crash, even if they can sicken and die, and it’s hard to say they can be hacked.

Would we be in danger of decreasing a city’s resilience by compressing its complexity into an algorithm? If something like Stephen Wolfram’s principles of computational equivalence and computational irreducibility is correct, then the city is already a kind of computation, and no model we can create of it will ever be more effective than this natural computation itself.
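
For readers unfamiliar with Wolfram’s idea, a minimal illustration may help: in the elementary cellular automaton known as Rule 30, the only general way anyone knows to learn what the pattern looks like after n steps is to compute all n steps; there is no known shortcut formula. The sketch below is just a toy demonstration of that kind of system, not a model of anything urban.

```python
# Rule 30, a standard example of a system believed to be computationally irreducible:
# to know what step n looks like, you have to run all n steps.

def step(cells):
    n = len(cells)
    return [
        # Rule 30 update: new cell = left XOR (center OR right)
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single "on" cell in the middle

for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

If a city really is more like Rule 30 than like a tidy spreadsheet, then any “command center” model of it is a simplification by definition, which is the worry about resilience.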

Or, to make my meaning clearer, imagine a person who had suffered some horrible accident such that, to save him, you had to replace all of his body’s natural information processing with a computer program. Such a program would have to regulate everything from breathing to metabolism to muscle movement, along with the immune system and the exchanges between neurons, not to mention a dozen other things. My guess is that you’d have to go through many generations of such programs before they were anywhere near workable: the first generations would miss important elements, be based on wrong assumptions about how things worked, and be loaded with perhaps catastrophic design errors that you couldn’t identify until the program had been fully run through multiple iterations.

We are blissfully unaware that we are the product of billions of years of “engineering” in which “design” failures were weeded out by evolution. Cities have only a few thousand years of a similar type of evolution behind them, but trying to control a great number of their functions via algorithms run by “command centers” might pose risks similar to those in my body example. Reducing city functions to something we can compute in silicon might oversimplify the city in such a way as to reduce its resilience to stressors cities have naturally evolved to absorb. That is, there is, in all use of “Big Data”, a temptation to interpret reality only in light of the model that scaffolds the data, or to reframe problems in ways that can be mathematically modeled. We set ourselves up for crises when we confuse the map with the territory, or as Jaron Lanier said:

What makes something real is that it is impossible to represent it to completion.

Lastly, and as my initial examples of smart-cities should have indicated, smart-cities are by design bugged. They are bugged so as to surveil their citizens in an effort to prevent crime or terrorism, or even just to respond to accidents and disasters. Yet the promise of safety comes at the cost of one of the virtues of city living: the freedom granted by anonymity. But even if we care nothing for such things, I’ve got news: trading privacy for security doesn’t even work.

Chongqing may spend tens of millions of dollars installing CCTV cameras, but would-be hooligans or criminals, or just people who don’t like being watched, such as those in London, have a twenty-dollar answer to all these gizmos: it’s called a hoodie. Likewise, a simple pattern of dollar-store facepaint, strategically applied, can short-circuit the most sophisticated facial recognition software. I will never cease to be amazed at human ingenuity.

We need to acknowledge that it is largely companies and individuals with extremely deep pockets, and even deeper political connections, that are promoting this model of the city. Townsend estimates it is potentially a 100 billion dollar business. We need to exercise our historical memory and recall how it was automobile companies that lobbied for, and ended up creating, our world of sprawl. Before investing millions or even billions, cities need to have an idea of what kind of future they want and not be swayed by the latest technological trends.

This is especially the case when it comes to cities in the developing world, where conditions often resemble something more out of Dickens’s 19th century than even the 20th. When I enthusiastically asked a Chinese student about the arcology at Dongtan he responded with something like “Fools! Who would want to live in such a thing! It’s a waste of money. We need clean air and water, not such craziness!” And he’s no doubt largely right. And perhaps we might be happy that the project ultimately unraveled and say with Thoreau:

As for your high towers and monuments, there was a crazy fellow once in this town who undertook to dig through to China, and he got so far that, as he said, he heard the Chinese pots and kettles rattle; but I think that I shall not go out of my way to admire the hole which he made. Many are concerned about the monuments of the West and the East — to know who built them. For my part, I should like to know who in those days did not build them — who were above such trifling.

The age of connectivity, if it’s done thoughtfully, could bring us cities that are cleaner, greener, more able to deal with the shocks of disaster or respond to the spread of disease. Truly smart-cities should support a more active citizenry, a less tone-deaf bureaucracy, and a more socially and culturally rich, entertaining and civil life, the very reasons human beings have chosen to live in cities in the first place.

If the age of connectivity is done wrong, cities will have poured scarce resources down a hole of corruption as deep as the one dug by Thoreau’s townsman, will have turned vibrant cultural and historical environments into corporate “flat-pack” versions of tomorrowland, and, most frighteningly of all, will have turned the potential democratic agora of the city into a massive panopticon of Orwellian monitoring and control. Cities, in the Wolfram rather than the Bostrom sense, are already a sort of superintelligence, or better, a hive mind of interconnected yet free individuals, more vibrant and important than any humanly built structure imaginable. Will we let them stay that way?

Why archaeologists make better futurists than science-fiction writers

Astrolabe vs iPhone

Human beings seem to have an innate need to predict the future. We’ve read the entrails of animals, thrown bones, and tried to use the regularity, or lack of it, in the night sky as a projection of the future and an omen of things to come, along with a thousand other kinds of divination few of us have ever heard of. This need to predict the future makes perfect sense for a creature whose knowledge bias is towards the present and the past. Survival means seeing far enough ahead to avoid dangers, so an animal that could successfully predict what was around the next corner could avoid being eaten or suffering famine.

It’s weird how many of the ancient techniques of divination have a probabilistic element to them, as if the best tool for discerning the future were a magic 8 ball, though perhaps it makes a great deal of sense. Probability and chance come into play wherever our knowledge comes up against limits, and these limits can consist merely of knowledge we are not privy to, like the game of “He loves me, he loves me not” little girls play with flower petals. As Henri Poincaré said: “Chance is only the measure of our ignorance.”

Poincaré was coming at the question with the assumption that the future is already determined, the view of many physicists who think our knowledge bias towards the present and past is in fact like a game of “he loves me not”, all in our heads, and that the future already exists in the same sense as the past and present, a set of facts we can only dimly see. It shouldn’t surprise us that they say this, physics being our most advanced form of divination, and just one among many modern forms, from meteorology to finance to genomics. Our methods of predicting the future are more sophisticated and effective than those of the past, but they still only barely see through the fog in front of us, which doesn’t stop us from being overconfident in our predictive prowess. We even have an entire profession, futurists, who claim, like old-school fortune tellers, to have a knowledge of the future no one can truly possess.

There’s another group that claims the ability to divine at least some rough features of the future: science-fiction writers. The idea of writing about a future substantially different from the past really didn’t emerge until the 19th century, when technology began not only making the present substantially different from the past but promised to keep doing so out into an infinite future. Geniuses like Jules Verne and H.G. Wells were able to look at the industrial revolution changing the world around them and project it forward in time with sometimes extraordinary prescience.

There is a good case to be made, and I’ve tried to make it before, that science-fiction is no longer very good at this prescience. The shape of the future is now occluded, and we’ve known the reasons for this for quite some time. Karl Popper pointed out as far back as the 1930s that the future is essentially unknowable, and part of this unknowability is that the course of scientific and technological advancement cannot be known in advance. Too tight a fit with wacky predictions of the future of science and technology is characteristic of the worst and campiest science-fiction.

Another good argument can be made that technologically dependent science-fiction isn’t so much predicting the future as inspiring it: the children raised on Star Trek’s communicator end up inventing the cell phone. Yet technological horizons can just as easily be atrophied as energized by science-fiction. This is the point Robinson Meyer makes in his blog post “Google Wants to Make ‘Science Fiction’ a Reality—and That’s Limiting Their Imagination”.

Meyer looks at the infamous Google X, a supposedly risky research arm of Google, one of whose criteria for embarking on a project is that it must utilize “a radical solution that has at least a component that resembles science fiction.”

The problem Meyer finds with this is:

…“science fiction” provides but a tiny porthole onto the vast strangeness of the future. When we imagine a “science fiction”-like future, I think we tend to picture completed worlds, flying cars, the shiny, floating towers of midcentury dreams.

We tend, in other words, to imagine future technological systems as readymade, holistic products that people will choose to adopt, rather than as the assembled work of countless different actors, which they’ve always really been.

Meyer mentions a recent talk by a futurist I hadn’t heard of before, Scott Smith, who calls these kinds of clean futuristic visions of total convenience, antiseptic worlds surrounded by smart glass where all the world’s knowledge is at our fingertips and everyone is productive and happy, “flat-pack futures”: the world of tomorrow ready to be brought home and assembled like a piece of furniture from IKEA.

I’ve always had trouble with these kinds of simplistic futures, which are really just corporate advertising devices, not real visions of a complex future that will inevitably have much ugliness in it, including the ugliness that emerges from the very technology that is supposed to make our lives a paradise. The major technological players are not even offering alternative versions of the future, just the same bland future with different corporate labels, as seen here, here, and here.

It’s not so much that these visions of the future are totally wrong, or even unattractive, as that people in the present, meaning us, have no real agency over which elements of these visions they want and which they don’t, with the exception of exercising consumer choice, the very thing these flat-pack visions of the future are trying to sell us.

Part of the reason Smith sees a mismatch between these anodyne versions of the future, which we’ve had since at least the early 20th century, and what actually happens is that these corporate versions of the future lack diversity and context, not to mention conflict. That shouldn’t really surprise us; they are advertisements, after all, geared to high-end consumers, not actual predictions of what the future will in fact look like for poor people or outcasts or non-conformists of one sort or another. One shouldn’t expect advertisers to show the bad that might happen should a technology fail, be hacked, or be used for dark purposes.

If you watch enough of them, the more disturbing assumption becomes clear: that by throwing a net of information over everything we will finally have control, and all the world will finally fall under the warm blanket of our comfort and convenience. Or as Alexis Madrigal put it in a thought-provoking recent piece, also at The Atlantic:

We’re creating a world that seamlessly, effortlessly shapes itself to human desire. It’s no longer cutting through a mountain to prove we dominate nature; now, it’s satisfying each impulse in the physical world with the ease and speed of digital tools. The shortest way to explain what Uber means: hit a button and something happens in the world (that makes life easier for you).

To return to Meyer, his point was that by looking almost exclusively to science-fiction to see what the future “should” look like, designers, technologists, and by implication some futurists, have reduced themselves to a very limited palette. How might this palette be expanded? I would throw my bet behind turning to archaeologists, anthropologists, and certain kinds of historians.

As Steve Jobs understood, in some ways the design of a technology is as important as the technology itself. But Jobs sought an almost formlessness, the clean, antiseptic, modernist zen you feel when you walk into an Apple Store. As someone whose attribution I can’t place once said, an Apple Store is designed like a temple. But it’s a temple representative of only one very particular form of religion.

All the flat-pack versions of the future I linked to earlier have one prominent feature in common: they are obsessed with glass. Everything in the world, it seems, will be a see-through touchscreen bubbling with the most relevant information for the individual at that moment: the weather, traffic alerts, box scores, your child’s homework assignment. I have a suspicion that this glass fetish is a sort of Freudian slip on the part of tech companies, for the thing about glass is that you can see both ways through it, and what these companies most want is the transparency of you, to be able to peer into the minutiae of your life so as to be able to sell you stuff.

Technology designers have largely swallowed the modernist glass-covered Kool-Aid, but one way to purge might be to turn to archaeology. For example, I’ve never found a tool more beautiful than the astrolabe, though it might be difficult to carry a smartphone in that form in your pocket. Instead of all that glass, why not a Japanese paper house on whose walls all our supposedly important information could appear? Whiteboards are better for writing anyway. Ransacking the past for the forms in which we could embed our technologies might be one way to escape the current design conformity.

There are reasons to hope that technology itself will help us break out of our futuristic design conformity. Ubiquitous 3D printing may allow us to design and create our own household art, tools, and customized technology devices. Websites my girls and I often look to for artistic inspiration and projects, such as the Met’s 82nd & Fifth, may, as the images get better, eventually becoming 3D and even providing CAD specifications, give us access to a great deal of the world’s objects from the past, providing design templates for embedding our technology in ways that reflect our distinct personal and cultural values, values that might have nothing to do with those whipped up by Silicon Valley.

Of course, we’ve been ransacking the past for ideas about how to live in the present and future since modernity began. Augustus Pugin’s 1836 book Contrasts, with its beautiful, if slightly dishonest, drawings, put the 19th century’s bland, utilitarian architecture up against the exquisite beauty of buildings from the 15th century. How cool would it be to have a book like Pugin’s contrasting the technology of today with that of the past?

The problem with Pugin’s and others’ efforts in this regard was that they didn’t merely look to the past for inspiration or forms but sought to replace the present and its assumptions with the revival of whole eras, assuming that a kind of cultural and historical rewind button or time machine was possible when in reality the past was forever gone.

Looking to the past, as opposed to trying to raise it from the dead, has other advantages when it comes to our ability to peer into and shape the future. It gives us different ways of seeing how our technology might be used as a means towards the things that have always mattered most to human beings: our relationships, art, spirituality, curiosity. Take, for instance, the way social media such as Skype and Facebook are changing how we relate to death. On the one hand, social media seems to break down the way in which we have traditionally dealt with death, as a community sharing a common space, while on the other, it seems to revive a kind of analog to practices regarding death that have long passed from the scene, practices that archaeologists and social historians know well, where the dead are brought into the home and tended to for an extended period of time. Now, though, instead of actual bodies, we see the online legacy of the dead, websites, Facebook pages and the like, being preserved and attended to by loved ones. A good question to ask oneself when thinking about how future technology will be used might be not what new thing it will make possible, but what old thing no longer done it might be used to revive.

In other words, the design of a technology is the reflection of a worldview, but only one very limited worldview competing with innumerable others. Without fail, for good and ill, technology will be hacked and used as a means by people whose worldviews have little if any relationship to those of technologists. What would be best is if the design itself, not the electronic innards but the shell and the organization of its use, were shaped by non-technologists in the first place.

The best of science-fiction actually understands this. It’s not the gadgets that are compelling, but the alternative world: how it fits together, where it is going, and what it says about our own world. That’s what books like Asimov’s Foundation trilogy did, or Frank Herbert’s Dune. Neither was a prediction of the future so much as a fictional archaeology and history. It’s not the humans who are interesting in a series like Star Trek, but the non-humans, who each live in their own very distinct version of a life-world.

As for those science-fiction writers not interested in creating alternative worlds, or in taking a completely imaginary shot in the dark at what life might be like many hundreds of years beyond our own, they might benefit from reflecting on what they are doing and on why they should lean heavily on knowledge of the past.

To the extent that science-fiction tells us anything about the future, it does so through a process of focused extrapolation. As the sci-fi author Paolo Bacigalupi has said (I’ve quoted him on this before), the role of science-fiction is to take some set of elements of the present and extrapolate them to see where they lead. Or, as William Gibson said, “The future is already here, it’s just not evenly distributed.” In his novel The Windup Girl, Bacigalupi extrapolated cutthroat corporate competition, the use of mercenaries, genetic engineering, the way biotechnology and agribusiness seem to be trying to take ownership of food, and our continued failure to tackle carbon emissions, and came up with a chilling dystopian world. Amazingly, Bacigalupi, an American, managed to embed this story in a Buddhist cultural and Thai historical context.

Although I have not had the opportunity to read it yet, the book now at the top of my end-of-summer reading list, The History of the Future in 100 Objects, by Adrian Hon, a former neuroscientist and founder of the gaming company Six to Start, seems to grasp precisely this. As Hon explained in an excellent talk over at the Long Now Foundation (I believe I’ve listened to it in its entirety three times!), he was inspired by Neil MacGregor’s A History of the World in 100 Objects (also on my reading list). What Hon realized was that the near future, at least, would still have many of the things we are familiar with, things like religion or fashion, and so what he did was extrapolate his understanding of technology trends, gained from both his neuroscience and gaming backgrounds, onto those things.

Science-fiction writers, and their kissing cousins the futurists, need to remind themselves when writing about the next few centuries that, as long as there are human beings (and otherwise what are they writing about?), there will still be much about us that we’ve had since time immemorial. Little doubt, there will still be death, but also religion. And not just any religion, but the ones we have now: Christianity and Islam, Hinduism and Buddhism, and so on. (Can anyone think of a science-fiction story where they still celebrate Christmas?) There will still be poverty, crime, and war. There will also be birth and laughter and love. Culture will still matter, and sub-culture just as much, as will geography. For at least the next few centuries, let us hope, we will still be human, so perhaps I shouldn’t have put archaeologists in my title but anthropologists or even psychologists, for those who can best understand the near future are those who best understand the mind and heart of mankind rather than the current, and inevitably unpredictable, trajectory of our science and technology.

 

Plato and the Physicist: A Multicosmic Love Story

Life as a Braid of Space Time

 

So I finally got around to reading Max Tegmark’s book Our Mathematical Universe, and while the book answered the question that had led me to read it, namely how one might reconcile Plato’s idea of eternal mathematical forms with the concept of multiple universes, it also threw up a whole host of new questions. This beautifully written and thought-provoking book made me wonder about the future of science and the scientific method, the limits to human knowledge, and the scientific, philosophical and moral meaning of various ideas of the multiverse.

I should start though with my initial question of how Tegmark manages to fit something very much like Plato’s Theory of the Forms into the seemingly chaotic landscape of multiverse theories. If you remember back to your college philosophy classes, you might recall something of Plato’s idea of forms, which in its very basics boils down to this: Plato thought there was a world of perfect, eternally existing ideas of which our own supposedly real world was little more than a shadow. The idea sounds out there until you realize that Plato was thinking like a mathematician. We should remember that over the walls of Plato’s Academy was written “Let no man ignorant of geometry enter here”, and for the Greeks geometry was the essence of mathematics. Plato aimed to create a school of philosophical mathematicians much more than he hoped to turn philosophers into a sect of moral geometers.

Probably almost all mathematicians and physicists hold to some version of platonism, which means that they think mathematical structures are something discovered rather than a form of language invented by human beings. Non-mathematicians, myself very much included, often have trouble understanding this, but a simple example from Plato himself might help clarify.

When the Greeks played around with shapes for long enough they discovered things. And here we really should say discover, because they had no idea shapes had these properties until they stumbled upon them through play. Plato’s dialogue Meno gave us the most famous demonstration of the discovery, rather than invention, of mathematical structures. Socrates asks a “slave boy” (we should take this to be the modern-day equivalent of the man off the street) to find a square with double the area of a square whose side is 2. The key, as Socrates leads the boy to see, is the original square’s diagonal: the square built on that diagonal turns out to be exactly the doubled square, allowing you to construct it and easily calculate its area. The slave boy’s measurement epiphany is explained as the “recovery of knowledge from a past life.”
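
In modern notation, and anachronistically, since the Greeks argued geometrically rather than with square roots, the boy’s discovery amounts to a line or two of algebra:

$$
\text{original area} = 2^2 = 4, \qquad
d = \sqrt{2^2 + 2^2} = 2\sqrt{2}, \qquad
d^2 = \left(2\sqrt{2}\right)^2 = 8 = 2 \times 4 .
$$

The square built on the diagonal $d$ has exactly twice the original area, whereas the boy’s first guess, doubling the side to 4, gives $4^2 = 16$, four times too much.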

The big gap between Plato and modern platonists is that the ancient philosopher thought the natural world was a flawed copy of the crystalline purity of the mathematics of thought. Contrast that with Newton, who saw the hand of God himself in nature’s calculable regularities. The deeper the scientists of the modern age probed with their new mathematical tools, the more nature appeared, as Galileo said, “a book written in the language of mathematics”. For the moderns, mathematical structures and natural structures became almost one and the same. The Spanish filmmaker and graphic designer Cristóbal Vila has a beautiful short over at AEON reflecting precisely this view.

It’s that “almost” that Tegmark has leapt over with his Mathematical Universe Hypothesis (MUH). The essence of the MUH is not only that mathematical structures have an independent existence, or that nature is a book written in mathematics, but that nature is a mathematical structure, and just as all mathematical structures exist independently of whether we have discovered them, all logically coherent universes exist whether or not we have discovered their structures. This is platonism with a capital P, the latter half explaining how the MUH intersects with the idea of the multiverse.

One of the beneficial things Tegmark does in his book is to provide a simple-to-understand set of levels for the different ideas that there is more than one universe.

Level I: Beyond our cosmological horizon

A Level I multiverse is the easiest for me to understand. It is within the lifetime of people still alive that our universe was held to be no bigger than our galaxy. Before that, people thought the entirety of what existed consisted of nothing but our solar system, so it is no wonder that people thought humanity was the center of creation’s story. As of right now the observable universe has a radius of around 46 billion light years, larger than the 13.8 billion light years one might naively expect from its age, because of the universe’s expansion. Yet why should we think this observable horizon constitutes everything, when such an assumption has never proved true in the past? The Level I multiverse holds that there are entire other universes outside the limit of what we can observe.

Level II: Universes with different physical constants

The Level II multiverse again makes intuitive sense to me. If one assumes that the Big Bang was not the first or the last of its kind, and if one assumes there are whole other universes, potentially an infinite number of them, why assume that ours is the only way a universe can be organized? Indeed, having a variety of physical constants to choose from would make the fine-tuning of our own universe make more sense.

Level III: Many-worlds interpretation of quantum mechanics

This is where I start to get lost, or at least this particular doppelganger of me starts to get lost. Here we find Hugh Everett’s interpretation of quantum unpredictability. Rather than Schrödinger’s cat being pushed out of a superposition of alive and dead when you open the box, exposing the feline causes the universe to split: in one universe you have a live cat, and in another a dead one. It makes me dizzy just thinking about it; just imagine the poor cat. Wait, I am the cat!

Level IV: Ultimate ensemble

Here we have Tegmark’s model itself, where every universe that can be represented as a logically consistent mathematical structure is said to actually exist. In such a multiverse, when you roll a six-sided die there end up being six universes, one corresponding to each outcome, but there is no universe where you have rolled “1 and not 1”, and so on. If a universe’s mathematical structure can be described, then that universe can be said to exist, there being, in Tegmark’s view, no difference between such a mathematical structure and a universe.

I had previously thought the idea of the multiverse was a way to give scale to the shadow of our ignorance and expand our horizon in space and time. As mentioned, we had once thought all that existed was only as big as our solar system and merely thousands of years old. By the 19th century the universe had expanded to the size of our galaxy and the past had grown to as much as 400 million years. By the end of the 20th century we knew there were at least 100 billion galaxies in the universe and that its age was 13.7 billion years. There is no reason to believe that we have grasped the full totality of existence, that the universe beyond our observable horizon isn’t even bigger, and the past deeper. There is “no sign on the Big Bang saying ‘this happened only once’”, as someone whose attribution I cannot find once cleverly said.

Ideas of the multiverse seemed to explain the odd fact that the universe appears fine-tuned to provide the conditions for life, Martin Rees’ “six numbers”, such as epsilon (ε), the strength of the force binding nucleons into nuclei. If you have a large enough sample of universes, then the fact that some universes are friendly to life starts to make more sense. The problem, I think, comes in when you realize just how large this sample size has to be to get you to fine-tuning, somewhere on the order of 10^200. What this means is that you’ve proposed the existence of a very, very large or even infinite number of universes, which as far as we know are unobservable, to explain essentially six values. If this is science, it is radically different from the science we’ve known since Galileo dropped cannon balls off the Leaning Tower of Pisa.
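
To spell out the arithmetic behind that worry, and using the 10^200 figure above purely for illustration: if the chance that a randomly chosen universe has life-permitting constants is about one in 10^200, then the expected number of life-friendly universes in an ensemble of $N$ universes only reaches order one when $N$ is comparable to 10^200 itself:

$$
p \sim 10^{-200}, \qquad
N \cdot p \gtrsim 1
\;\Longrightarrow\;
N \gtrsim \frac{1}{p} \sim 10^{200} .
$$

Which is why the proposed ensemble has to be so staggeringly large relative to the handful of numbers it is invoked to explain.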

For whatever reason, rather than solidify my belief in the possibility of the multiverse, or convert me to platonism, Tegmark’s book left me with a whole host of new questions, which is what good books do. The problem is my damned doppelgangers, who can be found not only at the crazy quantum Level III, but at the levels I thought were a preserve of Copernican mediocrity, Levels I and II. Or as Tegmark says:

The only difference between Level I and Level III is where your doppelgängers reside.

Yet, to my non-physicist eyes, the different levels of multiverse sure seem distinct. Level III seems to violate Copernican mediocrity, with observers and actors able to call into being whole new timelines with even the most minutiae-laden of their choices, whereas Levels I and II simply posit that a universe sufficiently large and sufficiently extended in time would allow for repeat performances down to the smallest detail. Perhaps the universe is just smaller than that, or less extended in time, or there is some sort of kink whereby, when what the late Stephen Jay Gould called the “life tape” is replayed, you can never get the same results twice.

Still, our intuitions about reality have often been proven wrong, so no theory can be discounted on the basis of intuitive doubts. There are other reasons, however, why we might use caution when it comes to multiverse theories, namely their potential risk to the scientific endeavor itself. The fact that we can never directly observe parts of the multiverse that are not our own means that we would have to step away from falsifiability as the criterion for scientific truth. The physicist Sean Carroll argues that falsifiability is a weak criterion; what makes a theory scientific is that it is “direct” (says something definite about how reality works) and “empirical”, by which he no longer means the Popperian notion of falsifiability, but its ability to explain the world. He writes:

Consider the multiverse.

If the universe we see around us is the only one there is, the vacuum energy is a unique constant of nature, and we are faced with the problem of explaining it. If, on the other hand, we live in a multiverse, the vacuum energy could be completely different in different regions, and an explanation suggests itself immediately: in regions where the vacuum energy is much larger, conditions are inhospitable to the existence of life. There is therefore a selection effect, and we should predict a small value of the vacuum energy. Indeed, using this precise reasoning, Steven Weinberg did predict the value of the vacuum energy, long before the acceleration of the universe was discovered.

We can’t (as far as we know) observe other parts of the multiverse directly. But their existence has a dramatic effect on how we account for the data in the part of the multiverse we do observe.

One could look at Tegmark’s MUH and Carroll’s comments as a broadening of our scientific and imaginative horizons and the continuation of our powers to explain into realms beyond what human beings will ever observe. The idea of a 22nd-century version of Plato’s Academy using amazingly powerful computers to explore all the potential universes à la Tegmark’s MUH is an attractive future to me. Yet, given how reliant we are on science and the technology that grows from it, and given the role of science in our society in establishing the consensus view of what our shared physical reality actually is, we need to be cognizant and careful of what such a changed understanding of science might actually mean.

The physicist George Ellis, for one, thinks the multiverse hypothesis, and not just Tegmark’s version of it, opens the door to all sorts of pseudoscience such as Intelligent Design. After all, the explanation that the laws and structure of our universe can be understood only by reference to something “outside” is the essence of explanations from design as well, and, just like the multiverse, cannot be falsified.

One might think that the multiverse is a victory of theorizing over real-world science, but I think Sean Carroll is essentially right when he defends the multiverse theory by saying:

 Science is not merely armchair theorizing; it’s about explaining the world we see, developing models that fit the data.

It’s the use of the word “model” here rather than “theory” that is telling. For a model is a type of representation of something, whereas a theory constitutes an attempt at a coherent, self-contained explanation. If the move from theories to models were only happening in physics, then we might say it had something to do merely with physics as a science rather than with science in general. But we see this move all over the place.

Among neuroscientists, for example, there is no widely agreed-upon theory of how SSRIs work, even though the drugs have been around for a generation. And there’s more. In a widely debated speech, Noam Chomsky argued that current statistical models in AI were bringing us no closer to the goal of AGI or to the understanding of human intelligence because they lacked any coherent theory of how intelligence works. As Yarden Katz wrote for The Atlantic:

Chomsky critiqued the field of AI for adopting an approach reminiscent of behaviorism, except in more modern, computationally sophisticated form. Chomsky argued that the field’s heavy use of statistical techniques to pick regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer. For Chomsky, the “new AI” — focused on using statistical learning techniques to better mine and predict data — is unlikely to yield general principles about the nature of intelligent beings or about cognition.

Likewise, the field of systems biology and especially genomic science is built not on theory but on our ability to scan enormous databases of genetic information looking for meaningful correlations. The new field of social physics is based on the idea that correlations of human behavior can be used as governance and management tools, and business already believes that statistical correlation is worth enough to spend billions on and build an economy around.

Will this work as well as the science we’ve had for the last five centuries? It’s too early to tell, but it certainly constitutes a big change for science and for the rest of us who depend upon it. This shouldn’t be taken as an unqualified defense of theory, for if theory were working then we wouldn’t be pursuing this new route of data correlation, whatever the powers of our computers. Yet those who are pushing this new model of science should be aware of its uncertain success, and its dangers.

The primary danger I can see from these new sorts of science, and this includes the MUH, is that they challenge the role of science in establishing the consensus reality we all must agree upon. Anyone who remembers their Thomas Kuhn can recall that what makes science distinct from almost any system of knowledge we’ve had before is that it both enforces a consensus view of physical reality, beyond which an individual’s view of the world can be considered “unreal”, and provides a mechanism by which this consensus reality can be challenged and, where the challenge is successful, overturned.

With multiverse theories we are approaching what David Eagleman calls Possibilianism: the exploration of the whole range of ways existence could be structured that are compatible with the findings of science and rationally coherent. I find this interesting as a philosophical and even spiritual project, but it isn’t science, at least as we’ve understood science since the beginning of the modern world. Declaring the project to be scientific blurs the line between science and speculation and might allow people to claim a kind of certainty amid uncertainty that makes politics and consensus decisions impossible, whether regarding acute needs of the present, such as global warming, or projected needs of the future.

Let me try to clarify this. I found it very important that in Our Mathematical Universe Tegmark tried to tackle the problem of existential risks facing the human future. He touches upon everything from climate change to asteroid impacts to pandemics to rogue AI. Yet the very idea that there are multiple versions of us out there, and that our own future is determined, seems to rob these issues of their urgency. In an “infinity” of predetermined worlds we destroy ourselves, just as in an “infinity” of predetermined worlds we do what needs to be done. There is no need to urge us forward because, puppet-like, we are destined to do one thing or the other on this particular timeline.

Morally and emotionally, how is what happens in this version of the universe in the future all that different from what happens in some other universe? Persons in those parallel universes, our children, parents, spouses, and even ourselves, are even closer to us than the people of the future on our own timeline. According to the deterministic models of the multiverse, the worlds of these others are outside our influence, and both the expansion and the contraction of our ethical horizon leave us in the same state of moral paralysis. Given this, I will hold off on believing in the multiverse, at least on the doppelganger scale of Levels I and II, and especially Levels III and IV, until it actually becomes established as a scientific fact, which it is not at the moment, and given our limitations perhaps never will be, even if it is ultimately true.

All that said, I greatly enjoyed Tegmark's book; it was nothing if not thought-provoking. Nor would I say it left me with little but despair, for in one section he imagined a Spinoza-like version of eternity that will last me a lifetime, or perhaps I should say beyond. I am aware that I will contradict myself here: the image of his that gripped me was of an individual life seen as a braid of space-time. For Tegmark, human beings have the most complex space-time braids we know of. The idea is vastly oversimplified by the image above.

About which Tegmark explains:

At both ends of your spacetime braid, corresponding to your birth and death, all the threads gradually separate, corresponding to all your particles joining, interacting and finally going their own separate ways. This makes the spacetime structure of your entire life resemble a tree: At the bottom, corresponding to early times, is an elaborate system of roots corresponding to the spacetime trajectories of many particles, which gradually merge into thicker strands and culminate in a single tube-like trunk corresponding to your current body (with a remarkable braid-like pattern inside as we described above). At the top, corresponding to late times, the trunk splits into ever finer branches, corresponding to your particles going their own separate ways once your life is over. In other words, the pattern of life has only a finite extent along the time dimension, with the braid coming apart into frizz at both ends.

Because mathematical structures always exist whether or not anyone has discovered them, our life braid can be said to have always existed and will always exist. I have never been able to wrap my head around the religious idea of eternity, but this eternity I understand. Someday I may even do a post on how the notion of time found in the MUH resembles the medieval idea of eternity as nunc stans, the standing-now, but for now I’ll use it to address more down to earth concerns.

My youngest daughter, philosopher that she is, has often asked me, "Where was I before I was born?" To which my lame response has been "you were an egg," which for a while made big breakfasts difficult. Now I can just tell her to get out her crayons to scribble, and we'll color our way to something profound.

 

How Should Humanity Steer the Future?

FQXi

Over the spring the Foundational Questions Institute (FQXi) sponsored an essay contest whose topic should be dear to this audience's heart: How Should Humanity Steer the Future? I thought I'd share some of the essays I found most interesting, but there are lots, lots more to check out if you're into thinking about the future or physics, which I am guessing you might be.

If there was one theme I found across the 140 or so essays entered in the contest, it was that the 21st century is make-it-or-break-it for humanity, so we need to get our act together, and fast. If you want a metaphor for this sentiment, you couldn't do much better than Nietzsche's idea that humanity is like an individual walking on a "rope over an abyss."

A Rope over an Abyss by Laurence Hitterdale

Hitterdale's idea is that for most of human history the qualitative aspects of human experience have been pretty much the same, but that is about to change. What we are facing, according to Hitterdale, is either the extinction of our species or the realization of our wildest perennial dreams: biological superlongevity and machine intelligence that seem to imply the end of drudgery and scarcity. As he points out, some very heavy-hitting thinkers seem to think we live in make-or-break times:

 John Leslie judged the probability of human extinction during the next five centuries as perhaps around thirty per cent at least. Martin Rees in 2003 stated, "I think the odds are no better than fifty-fifty that our present civilization on Earth will survive to the end of the present century." Less than ten years later Rees added a comment: "I have been surprised by how many of my colleagues thought a catastrophe was even more likely than I did, and so considered me an optimist."

In a nutshell, Hitterdale's solution is for us to concentrate more on preventing negative outcomes than on achieving positive ones in this century. This is because even positive outcomes like human superlongevity and greater-than-human AI could lead to negative ones if we don't sort out our problems or establish controls first.

How to avoid steering blindly: The case for a robust repository of human knowledge by Jens C. Niemeyer

This was probably my favorite essay overall because it touched on an issue dear to my heart: how we will preserve the past in light of the huge uncertainties of the future. Niemeyer makes the case that we need to establish a repository of human knowledge in the event we suffer some general disaster, and sketches how we might do this.

By one of those strange instances of serendipity, while thinking about Niemeyer's ideas and browsing the science section of my local bookstore, I came across a new book by Lewis Dartnell, The Knowledge: How to Rebuild Our World from Scratch, which covers the essential technologies human beings will need if they want to revive civilization after a collapse. Or maybe I shouldn't consider it so strange. Right next to The Knowledge was another new book, The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day, by David Hand, but I digress.

The digitization of knowledge and its dependence on the whole technological apparatus of society actually make us more vulnerable to the complete loss of information, both social and personal, and therefore demand that we back up our knowledge. Once, only things like a flood or a fire could have destroyed our lifetime visual records the way we used to store them, in photo albums; now all many of us would have to do is lose or break our phone. As Niemeyer says:

 Currently, no widespread efforts are being made to protect digital resources against global disasters and to establish the means and procedures for extracting safeguarded digital information without an existing technological infrastructure. Facilities like, for instance, the Barbarastollen underground archive for the preservation of Germany’s cultural heritage (or other national and international high-security archives) operate on the basis of microfilm stored at constant temperature and low humidity. New, digital information will most likely never exist in printed form and thus cannot be archived with these techniques even in principle. The repository must therefore not only be robust against man-made or natural disasters, it must also provide the means for accessing and copying digital data without computers, data connections, or even electricity.

Niemeyer imagines the creation of such a knowledge repository as a unifying project for humankind:

Ultimately, the protection and support of the repository may become one of humanity’s most unifying goals. After all, our collective memory of all things discovered or created by mankind, of our stories, songs and ideas, have a great part in defining what it means to be human. We must begin to protect this heritage and guarantee that future generations have access to the information they need to steer the future with open eyes.

Love it!

One Cannot Live in the Cradle Forever by Robert de Neufville

If Niemeyer is trying to goad us into preparing should the worst occur, Robert de Neufville, like Hitterdale, is working toward making sure these nightmares, especially the self-inflicted ones, don't come true in the first place. He does this as a journalist and writer and as an associate of the Global Catastrophic Risk Institute.

As de Neufville points out, and as I myself have argued before, the silence of the universe gives us reason to be pessimistic about the long-term survivability of technological civilization. Yet the difficulties that stand in the way of minimizing global catastrophic risks, things like developing an environmentally sustainable modern economy, protecting ourselves against global pandemics or meteor strikes of a scale that might bring civilization to its knees, or eliminating the threat of nuclear war, are more challenges of politics than of technology. He writes:

But the greatest challenges may be political. Overcoming the technical challenges may be easy in comparison to using our collective power as a species wisely. If humanity were a single person with all the knowledge and abilities of the entire human race, avoiding nuclear war, and environmental catastrophe would be relatively easy. But in fact we are billions of people with different experiences, different interests, and different visions for the future.

In a sense, the future is a collective action problem. Our species’ prospects are effectively what economists call a “common good”. Every person has a stake in our future. But no one person or country has the primary responsibility for the well-being of the human race. Most do not get much personal benefit from sacrificing to lower the risk of extinction. And all else being equal each would prefer that others bear the cost of action. Many powerful people and institutions in particular have a strong interest in keeping their investments from being stranded by social change. As Jason Matheny has said, “extinction risks are market failures”.

His essay makes an excellent case that it is time we mature as a species and live up to our global responsibilities, the most important of which is ensuring our continued existence.

The “I” and the Robot by Cristinel Stoica

Here Cristinel Stoica makes a great case for tolerance, intellectual humility and pluralism, a sentiment perhaps often expressed but rarely with such grace and passion.

As he writes:

The future is unpredictable and open, and we can make it better, for future us and for our children. We want them to live in peace and happiness. They can’t, if we want them to continue our fights and wars against others that are different, or to pay them back bills we inherited from our ancestors. The legacy we leave them should be a healthy planet, good relations with others, access to education, freedom, a healthy and critical way of thinking. We have to learn to be free, and to allow others to be free, because this is the only way our children will be happy and free. Then, they will be able to focus on any problems the future may reserve them.

Ends of History and Future Histories in the Longue Duree by Benjamin Pope

In his essay Benjamin Pope tries to peer into the human future over the long term by looking at the types of institutions that survive across centuries and even millennia: universities, "churches," economic systems such as capitalism, and potentially multi-millennial, species-wide projects, namely space colonization.

I liked Pope's essay a lot, but there are parts of it I disagreed with. For one, I wish he had included cities. These are the longest-lived of human institutions, and unlike Pope's other choices they are political, yet they manage to far outlive other political forms, namely states and empires. Rome far outlived the Roman Empire, and my guess is that many American cities, as long as they are not underwater, will outlive the United States.

Pope’s read on religion might be music to the ears of some at the IEET:

Even the very far future will have a history, and this future history may have strong, path-dependent consequences. Once we are at the threshold of a post-human society the pace of change is expected to slow down only in the event of collapse, and there is a danger that any locked-in system not able to adapt appropriately will prevent a full spectrum of human flourishing that might otherwise occur.

Pope seems to lean toward a negative take on religion's capacity to promote "a full spectrum of human flourishing," holding that it, "as a worst-case scenario, may lock out humanity from futures in which peace and freedom will be more achievable."

To the surprise of many in the secular West, which includes an increasingly secular United States, the story of religion will very much be the story of humanity over the next couple of centuries, and that goes especially for the religion that is dying in the West today, Christianity. I doubt, however, that religion has either the will or the capacity to stop or even significantly slow technological development, though it might change our understanding of it. It is also the case that, at the end of the day, religion only thrives to the extent it promotes human flourishing and survival, though religious fanatics might lead us to think otherwise. Nor am I the only one to doubt Pope's belief that "Once we are at the threshold of a posthuman society the pace of change is expected to slow down only in the event of collapse."

Still, I greatly enjoyed Pope's essay, and it was certainly thought-provoking.

Smooth seas do not make good sailors by Georgina Parry

If you're looking to break out of your dystopian gloom for a while, and I myself keep finding reasons to be gloomy, then you couldn't do much better than to take a peek at Georgina Parry's fictionalized glimpse of a possible utopian future. Like a good parent, Parry encourages our confidence, but not our hubris:

 The image mankind call ‘the present’ has been written in the light but the material future has not been built. Now it is the mission of people like Grace, and the human species, to build a future. Success will be measured by the contentment, health, altruism, high culture, and creativity of its people. As a species, Homo sapiens sapiens are hackers of nature’s solutions presented by the tree of life, that has evolved over millions of years.

The future is the past by Roger Schlafly

Schlafly's essay literally made my jaw drop, it was so morally absurd and even obscene.

Consider a mundane decision to walk along the top of a cliff. Conventional advice would be to be safe by staying away from the edge. But as Tegmark explains, that safety is only an illusion. What you perceive as a decision to stay safe is really the creation of a clone who jumps off the cliff. You may think that you are safe, but you are really jumping to your death in an alternate universe.

Armed with this knowledge, there is no reason to be safe. If you decide to jump off the cliff, then you really create a clone of yourself who stays on top of the cliff. Both scenarios are equally real, no matter what you decide. Your clone is indistinguishable from yourself, and will have the same feelings, except that one lives and the other dies. The surviving one can make more clones of himself just by making more decisions.

Schlafly rams the point home that under current views of the multiverse in physics nothing you do really amounts to a choice; we are stuck on an utterly deterministic wave-function on whose branches we play hero and villain, and there is no space for either praise or guilt. You can always act as a coward or a naif, sure that somewhere "out there" another version of "you" does the right thing. Saving humanity from itself in the ways proposed by Hitterdale and de Neufville, preparing for the worst as Niemeyer and Pope urge, or trying to build a better future as Parry and Stoica do makes no sense here. Like poor Schrodinger's cat, on some branches we end up surviving and on some we destroy ourselves, and it is not we who are in charge of which branch we are on.

The thought made me cringe, but then I realized Schlafly must be playing a Swiftian game. Applying quantum theory to the moral and political worlds we inhabit leads to absurdity. This might or might not call into question the fundamental  reality of the multiverse or the universal wave function, but it should not lead us to doubt or jettison our ideas regarding our own responsibility for the lives we live, which boil down to the decisions we have made.

Chinese Dream is Xuan Yuan’s Da Tong by KoGuan Leo

Those of us in the West probably can't help seeing the future of technology as nearly synonymous with the future of our own civilization, and a civilization, when boiled down to its essence, amounts to a set of questions a particular group of human beings keeps asking, and their answers to those questions. The questions in the West are things like: What is the right balance between social order and individual freedom? What is the relationship between the external and internal (mental/spiritual) worlds, including the question of the meaning of Truth? How might the most fragile thing in existence, and for us the most precious, the individual, survive across time? What is the relationship between the man-made world, including culture, vis-à-vis nature, and which is most important to the identity and authenticity of the individual?

The progress of science and technology intersects with all of these questions, but what we often forget is that we have sown the seeds of science and technology elsewhere, and the environment in which they grow can be very different, and hence their application and understanding will differ as well, based as they will be on a whole different set of questions and answers encountered by a distinct civilization.

Leo KoGuan's essay approaches the future of science and technology from the perspective of Chinese civilization. Frankly, I did not really understand his essay, which seemed to me a combination of singularitarianism and Chinese philosophy I just couldn't wrap my head around. What am I to make of this, from the founder and chairman of a $5.1 billion computer company:

 Using the KQID time-engine, earthlings will literally become Tianming Ren with God-like power to create and distribute objects of desire at will. Unchained, we are free at last!

Not much, other than that anyone interested in the future of transhumanism absolutely needs to be paying attention to what is happening in China, and to what and how people there are thinking.

Lastly, I myself had an essay in the contest. It was about how we are facing incredible hurdles in the near future, and how one of the ways we might succeed in facing them is by recovering the ability to imagine what an ideal society, Utopia, might look like. Go figure.

Why does the world exist, and other dangerous questions for insomniacs

William Blake Creation of the World

 

A few weeks back I wrote a post on how the recently reported discovery of primordial gravitational waves provided evidence for inflationary models of the Big Bang. These are cosmological models that imply some version of the multiverse, essentially the idea that ours is just one of a series of universes, a tiny bubble, or region, of a much, much larger universe where perhaps even the laws of physics, or the rationality of mathematics, differ from one region to another.

My earlier piece had taken some umbrage at the physicist Lawrence Krauss' New Atheist take on the discovery, in the New Yorker. Krauss is a "nothing theorist," one of a group of physicists who argue that the universe emerged from what was in effect nothing at all, although, unlike other nothing theorists such as Stephen Hawking, Krauss uses his science as a cudgel to beat up on contemporary religion. It was this attack on religion I was interested in, while the deeper issue, the issue of a universe arising from nothing, left me shrugging my shoulders as if there was, excuse the pun, nothing of much importance in the assertion.

Perhaps I missed the heart of the issue because I am a nothingist myself, or at the very least have never found the issue of nothingness something worth grappling with. It's hard to write this without sounding like a Zen koan or making my head hurt, but I didn't look into the physics or the metaphysics of Krauss' nothingist take on gravitational waves, inflation, or anything else; in fact, I don't think I had ever really reflected on the nature of nothing at all.

The problems I had with Krauss' overall view, as seen in his book on the same subject, A Universe from Nothing, had to do with his understanding of the future and the present, not the past. I felt the book read the future far too pessimistically, missing the fact that even if the universe will end in nothing, there is a lot of living to be done between now and the hundreds of billions of years before its heat death. As much as it was a work of popular science, Krauss' book was mostly an atheist weapon in what I called "The Great God Debate," which, to my lights, was about attacking, or for creationists defending, a version of God as cosmic engineer that was born no earlier than, and in conjunction with, modern science itself. I felt it was about time we got beyond this conception of God and moved to a newer, or even a more ancient, one.

Above all, A Universe from Nothing, as I saw it, was epistemologically hubristic, using science to make a non-scientific claim about the meaning of existence, namely that there wasn't any, which cut off so many other interesting avenues of thought before they even got off the ground. What I hadn't thought about was the issue of emergence from nothingness itself. Maybe the question of the past, the question of why our universe is here at all, was more important than I thought.

When thinking a question through, I always find it a helpful first step to turn to the history of ideas for some context. Like much else, the idea that the universe began from nothing is a relatively recent one. The ancients had little notion of nothingness, with their creation myths starting not with nothing but most often with an eternally existing chaos that some divinity or divinities sculpted into the ordered world we see. You start to get ideas of creation out of nothing, ex nihilo, really only with Augustine in the 5th century, but full credit for the idea of a world that began with nothing would have to wait until Leibniz in the 1600s, who, when he wasn't dreaming up new cosmologies, was off independently inventing calculus at the same time as Newton and designing computers three centuries before any of us had lost a year playing Farmville.

Even when it came to nothingness Leibniz was ahead of his time. About two centuries after he had imagined a universe created from nothing, the renegade Einstein was just reflecting universally held opinion when he made his biggest "mistake," tweaking his theory of general relativity with what he thought was a bogus cosmological constant so that he could get the universe that he and everyone else believed in: a universe that was eternal, unchanging, and uncreated. Not long after Einstein had cooked the books, Edwin Hubble discovered that the universe was changing with time, moving apart, and not long after that, evidence mounted that the universe had a beginning in the Big Bang.

With a creation event in the Big Bang, cosmologists, philosophers, and theologians were forced to confront the existence of a universe emerging from what was potentially nothing, running into questions that had lain dormant since Leibniz: How did the universe emerge from nothing? Why this particular universe? And, ultimately, why something rather than nothing at all? Krauss thinks we have solved the first and second questions and finds the third question, in his words, "stupid."

Strange as it sounds coming out of my mouth, I actually find myself agreeing with Krauss: explanations that the universe emerged from fluctuations in a primordial "quantum foam" (closer to the ancients' idea of chaos than to our version of nothing), along with the idea that we are just one of many universes that follow varied natural laws, some, like ours, capable of fostering intelligent life, seem sufficient to me. The third question, however, I find in no sense stupid, and if it's childlike, it is childlike in the best, wondrously curious kind of way. Indeed, the answers to the question "why is there something rather than nothing?" might result in some of the most thrilling ideas human beings have come up with yet.

The question of why there is something rather than nothing is brilliantly explored in a book by Jim Holt, Why Does the World Exist?: An Existential Detective Story. As Holt points out, the problem with nothingist theories like those of Krauss is that they fail to answer the question of why the quantum foam, or the multiple universes churning out their versions of existence, are there in the first place. The simplest explanation we have is that "God made it," and Holt does look at this answer as provided by the philosopher of religion Richard Swinburne, who answers the obvious question "who made God?" with the traditional answer "God is eternal and not made," which makes one wonder why we can't just stick with Krauss' self-generating universe in the first place.

Yet it's not only religious persons who think the why question addresses something fundamental, or that science reveals the question as important even if we are forever barred from completely answering it. As the physicist David Deutsch says in Why Does the World Exist?:

 … none of our laws of physics can possibly answer the question of why the multiverse is there…. Laws don’t do that kind of work.

Wheeler used to say, take all the best laws of physics and put those bits on a piece of paper on the floor. Then stand back and look at them and say, “Fly!” They won’t fly they just sit there. Quantum theory may explain why the Big Bang happened, but it can’t answer the question you’re interested in, the question of existence. The very concept of existence is a complex one that needs to be unpacked. And the question Why is there something rather than nothing is a layered one, I expect. Even if you succeeded in answering it at some level, you’d still have the next level to worry about.  (128)

Holt quotes Deutsch from his book The Fabric of Reality: "I do not believe that we are now, or shall ever be, close to understanding everything there is." (129)

Others, philosophers and physicists alike, are trying to answer the "why" question by composing solutions that combine ancient and modern elements. These are the Platonic multiverses of John Leslie and Max Tegmark, both of whom, though in different ways, believe in eternally existing "forms," goodness in the case of Leslie and mathematics in the case of Tegmark, which an infinity of universes express and realize. For the philosopher Leslie:

 … what the cosmos consists of is an infinite number of infinite minds, each of which knows absolutely everything which is worth knowing. (200)

Leslie borrows from Plato the idea that the world appeared out of the sheer ethical requirement for Goodness, that “the form of the Good bestows existence upon the world” (199).

If that leaves you scratching your scientifically skeptical head as much as it does mine, there are actual scientists, in this case the cosmologist Max Tegmark, who hold similar Platonic ideas. According to Holt, Tegmark believes that:

 … every consistently desirable mathematical structure exists in a genuine physical sense. Each of these structures constitute a parallel world, and together these parallel worlds make up a mathematical multiverse. (182)

Like Leslie, Tegmark looks to Plato’s Eternal Forms:

 The elements of this multiverse do not exist in the same space but exist outside space and time; they are "static sculptures" that represent the mathematical structure of the physical laws that govern them. (183)

If you like this line of reasoning, Tegmark has a whole book on the subject, Our Mathematical Universe. I am no Platonist and Tegmark is unlikely to convert me, but I am eager to read it. What I find most surprising about the ideas of both Leslie and Tegmark is that they combine two things I did not previously see as capable of being combined, indeed that I had considered outright rival models of the world: the idea of an eternal Platonic world behind existence and the prolific features of multiverse theory, in which there are many, perhaps infinitely many, varieties of universes.

The idea that the universe is mind-bogglingly prolific in its scale and diversity is the "fecundity" of the philosopher Robert Nozick, whom until Holt I had only associated with libertarian economics. Anyone who has a vision of a universe so prolific and diverse is okay in my book, though I do wish the late Nozick had been as open to the diversity of human socio-economic systems as he was to the diversity of universes.

Like the physicist Paul Davies, or, even better to my lights, the novelist John Updike, both discussed by Holt, I had previously thought the idea of the multiverse was a way to avoid the need for either a creator God or eternally existing laws, although, unlike Davies and Updike, and in the spirit of Ockham's Razor, I thought this a good thing. The one problem I had with multiverse theories was the idea of not just a very large or even infinite number of alternative universes but of parallel universes where there are other versions of me running around; Holt managed to clear that up for me.

The idea that the universe splits every time I choose to eat or not eat a chocolate bar or some such always struck me as silly and also somehow suffocating. Hints that we may live in a parallel universe of this sort are just one of the weird phenomena that emerge from quantum mechanics; you know, poor Schrodinger's Cat. Holt points out that this is quite different from, and not necessarily connected to, the idea of multiple universes that emerges from the cosmological theory of inflation; we simply don't know whether the two ideas have any connection. Whew! I can now let others wrestle with the bizarre world of the quantum and rest comforted that the minutiae of my every decision don't make me responsible for creating a whole other universe.

This return to Plato seen in Leslie and Tegmark, a philosopher who died, after all, 2,500 years ago, struck me as both weird and incredibly interesting. Stepping back, it seems to me that we're not so much in the middle of some Hegelian dialectic relentlessly moving forward through thesis-antithesis-synthesis as involved in a very long conversation that moves in no particular direction and every so often loops back upon itself, bringing up issues and perspectives we had long left behind. It's like a maze where you have to backtrack to the point where you made a wrong turn in order to go in the right direction. We can seemingly escape the cosmological dead end created by Christian theology and Leibniz's idea of creation ex nihilo only by going back to ideas found before we went down that path, to Plato. Though, for my money, I prefer yet another ancient philosopher: Lucretius.

Yet maybe Plato isn't far enough back. It was the pre-Socratics who invented the natural philosophy that eventually became science. There is a kind of playfulness to their ideas, all of which could exist side by side in dialogue and debate with one another with no clear way for any theory to win: Heraclitus' world as flux and fire, Pythagoras' world as number, Democritus' world as atoms.

My hope is that we recognize our contemporary versions of these theories for what they are: "just-so" stories that we tell about the darkness beyond the edge of scientific knowledge, and the darkness is vast. They are versions of a speculative theology, the possibilism of David Eagleman, which I have written about before, and they are harmful only when they become as rigid and inflexible as the old-school theology they are meant to replace, or when they falsely claim the kinds of proof from evidence that only science and its experimental verification can afford. We should be playful with them, in the way Plato himself was playful with such stories, in the knowledge that while we are in the "cave" we can only find the truth by looking through the darkness at varied angles.

Does Holt think there is a reason the world exists? What is really being asked here is what type of "selector," to use the philosopher Derek Parfit's term, brought existence into being. For Swinburne the selector was God, for Leslie Goodness, for Tegmark mathematics, for Nozick fullness, but Holt thinks the selector might have been simpler still, indeed that the selector was simplicity. All the other selectors Holt finds to be circular, ultimately ending up being used to explain themselves. But what if our world is merely the simplest one possible that is also full? Reasoning in this way, Holt adopts something like the midpoint between a universe that contains nothing and one that contains an infinite number of universes that are perfectly good, a mean he calls "infinite mediocrity."

I was not quite convinced by Holt's conclusion, and was more intrigued by the open-ended and ambiguous quality of his exploration of the question of why there is something rather than nothing than I was by his "proof" that our existence could be explained in such a way.

What has often struck me as deplorable when it comes to theists and atheists alike is their lack of awe at the mysterious majesty of it all. That "God made it" or "it just is" falls flat for me. Whenever I have the peace in a busy world to reflect, it is not nothingness that hits me but awe: that the world is here, that I am here, that you are here, a fact that is a statistical miracle of sorts, a web weaving itself. Holt gave me a whole new way to think about this wonder.

How wonderfully strange that our small and isolated minds leverage cosmic history and reality to reflect the universe back upon itself, that our universe might come into existence and disappear much like we do. On that score, of all the sections in Holt’s beautiful little book it was the most personal section on the death of his mother that taught me the most about nothingness. Reflecting on the memory of being at his mother’s bedside in hospice he writes:

My mother’s breathing was getting shallower. Her eyes remained closed. She still looked peaceful, although every once in a while she made a little gasping noise.

Then I was standing over her, still holding her hand, my mother’s eyes open wide, as if in alarm. It was the first time I had seen them that day. She seemed to be looking at me. She opened her mouth. I saw her tongue twitch two or three times. Was she trying to say something? Within a couple of seconds her breathing stopped.

I leaned down and told her I loved her. Then I went into the hall and said to the nurse, "I think she just died." (272-273)

The scene struck me as the exact opposite of the joyous experience I had at the birth of my own children and somehow reminded me of a scene from Diane Ackerman’s book Deep Play.

 The moment a new-born opens its eyes discovery begins. I learned this with a laugh one morning in New Mexico where I worked through the seasons of a large cattle ranch. One day, I delivered a calf. When it lifted up its fluffy head and looked at me its eyes held the absolute bewilderment of the newly born. A moment before it had enjoyed the even, black  nowhere of the womb and suddenly its world was full of color, movement and noise. I’ve never seen anything so shocked to be alive. (141-142)

At the end of the day, for the whole of existence, the question of why there is something rather than nothing may remain forever outside our reach, but we, the dead who have become the living and the living who will become the dead, are certainly intimates with the reality of being and nothingness.
 

Privacy Strikes Back, Dave Eggers’ The Circle and a Response to David Brin

I believe that we have turned a corner: we have finally attained Peak Indifference to Surveillance. We have reached the moment after which the number of people who give a damn about their privacy will only increase. The number of people who are so unaware of their privilege or blind to their risk that they think “nothing to hide/nothing to fear” is a viable way to run a civilization will only decline from here on in.  Cory Doctorow

If I were lucky enough to be teaching a media studies class right now I would assign two books to be read in tandem. The first of these books, Douglas Rushkoff's Present Shock, a book I have written about before, gives one the view of our communications landscape from 10,000 feet, asking how we can best understand what is going on, not just with Internet and mobile technologies, but with all forms of modern communication, including that precious antique, the narrative book or novel.

Perspectives from "above" have the strength of giving you a comprehensive view, but human meaning often requires another level, an on-the-ground emotional level, that good novels, perhaps still more than any other medium, succeed at brilliantly. Thus the second book I would assign in my imaginary media studies course would be Dave Eggers' novel The Circle, where one is taken into a world right on the verge of our own, which, because it seems so close and at the same time so creepy, makes us conscious of changes in human communication less through philosophy, although the book has plenty of that too, than through our almost inarticulable discomfort. Let me explain:

The Circle tells the story of a 20-something young woman, Mae Holland, who through a friend lands her dream job at the world's top corporation, named, you guessed it, the Circle. To picture the Circle, imagine some near future where Google has swallowed Facebook and Twitter and the resulting behemoth has gone on to feed on and absorb all the companies and algorithms that now structure our lives: the algorithm that suggests movies for you at Netflix, or books and products on Amazon, in addition to all the other Internet services you use, like online banking. This monster of a company is then able to integrate all of your online identities into one account, which they call "TruYou."

Having escaped a dead-end job at a small-town utility company in the middle of nowhere, Mae finds herself working at the most powerful, most innovative, most socially conscious and worker-friendly company on the planet. "Who else but utopians could make utopia." (30) she muses, but there are, of course, problems on the horizon.

The Circle is the creation of a group called the "3 Wise Men." One of these young wise men, Bailey, is the philosopher of the group. Here he is musing about the creation of the small, cheap, ubiquitous, high-definition video cameras that the company is placing anywhere and everywhere in a program called SeeChange:

Folks, we’re at the dawn of the Second Enlightenment. And I am not talking about a new building on campus. I am talking about an era where we don’t allow the vast majority of human thought and action and achievement to escape as if from a leaky bucket. We did that once before. It was called the Middle Ages, the Dark Ages. If not for the monks, everything the world had ever learned would have been lost. Well, we live in a similar time and we’re losing the vast majority of what we do and see and learn. But it doesn’t have to be that way. Not with these cameras, and not with the mission of the Circle. (67)

The philosophy of the company is summed up, in what the reader can only take as echoes of Orwell, in slogans such as "all that happens must be known," "privacy is theft," "secrets are lies," and "to heal we must know, to know we must share."

Our protagonist, Mae, has no difficulty with this philosophy. She is working for what she believes is the best company in the world, and it is certainly a company that, on the surface, all of us would likely want to work for: there are non-stop social events, which include bringing in world-class speakers, along with free cultural and sporting events and concerts. The company supports the relief of a whole host of social problems. Above all, there are amazing benefits, which include the company covering the healthcare costs of Mae's father, who is stricken with multiple sclerosis that would otherwise bankrupt the family.

What Eggers is excellent at is taking a person who is in complete agreement with the philosophy around her and letting her humanity show through in her unintended friction with it. It's really impossible to convey, without pasting in large parts of the book, just how effective Eggers is at presenting the claustrophobia that comes from too intense a use of social technology: the shame Mae is made to feel for missing out on a coworker's party; the endless rush to keep pace with everyone's updates, and the information overload and data exhaustion that result; the pressure of always being observed and "measured," both on the job and off; the need to always present oneself in the best light, to "connect" with others who share the same views, passions, and experiences; the anxiety that people will share something, such as intimate or embarrassing pictures, one would not like shared; the confusion of "liking" and "disliking" with actually doing something, with the consequence that not sharing one's opinion begins to feel like moral irresponsibility.

Mae desperately wants to fit into the culture of transparency found at the Circle, but her humanity keeps getting in the way. She has to make a herculean effort to keep up with the social world of the company, mistakenly misses key social events, throws herself into sexual experiences and relationships she would prefer not be shared, and keeps the pain she is experiencing over her father's illness private.

She also has a taste for solitude, real solitude, without any expectation that she bring something out of it, photos or writing to be shared. Mae has a habit of going on solo kayaking excursions, and it is in these that her real friction with the culture of the Circle begins. She relies on an old-fashioned brochure to identify wildlife and fails to document and share her experiences. As an HR representative who berates her for this "selfish" practice puts it:

You look at your paper brochure, and that’s where it ends. It ends with you. (186)

The Circle is a social company built on the philosophy of transparency, and anyone who fails to share, it is assumed, must not really buy into that worldview. The "wise man" Bailey, as part of the best argument against keeping secrets I have ever read, captures the ultimate goal of this philosophy:

A circle is the strongest shape in the universe. Nothing can beat it,  nothing can improve upon it.  And that’s what we want to be: perfect. So any information that eludes us, anything that’s not accessible, prevents us from being perfect.  (287)

The growing power of the Circle, the way it is swallowing everything and anything, does eventually come in for scrutiny by a small group of largely powerless politicians, but as was the case with the real-world Julian Assange, transparency, or the illusion of it, can be used as a weapon against the weak as much as against the strong. Suddenly all sorts of scandalous material becomes known to exist on these politicians' computers and their careers are destroyed. Under "encouragement" from the Circle, politicians "go transparent," their every move recorded so as to be free from the charge of corruption, as the Circle itself takes over the foundational mechanism of democracy: voting.

The transparency the Circle seeks is meant to encompass everyone, and Mae herself, after committing the "crime" of temporarily taking a kayak for one of her trips and being caught by a SeeChange camera, becomes, at the insistence of Bailey, one of only two private citizens to go transparent, with almost her every move tracked and recorded.

While Mae actually believes in the philosophy of transparency and feels the flaw is with her, despite near-epiphanies that might have freed her from its grip, there are voices of solid opposition. There is Mae's ex-boyfriend, Mercer, who snaps at her while at dinner with her parents and, had it been Eggers' intention, would have offered an excellent summation of Rushkoff's Present Shock.

Here, though, there are no oppressors. No one’s forcing you to do this. You willingly tie yourself to these leashes. And you willingly become utterly socially autistic. You no longer pick up on basic human communication cues. You’re at a table with three humans, all of whom you know and are trying to talk to you, and you’re staring at a screen searching for strangers in Dubai.  (260)

There is also another of the 3 Wise Men, Ty, who fears where the company he helped create is leading and plots to destroy it. He cries to Mae:

This is it. This is the moment where history pivots. Imagine you could have been there before Hitler became chancellor. Before Stalin annexed Eastern Europe. We’re on the verge of having another very hungry, very evil empire on our hands, Mae. Do you understand? (401)

Ty says of his co-creator Bailey:

This is the moment he has been waiting for, the moment when all souls are connected. This is his rapture, Mae! Don’t you see how extreme this view is? His idea is radical, and in another era would have been a fringe notion espoused by an eccentric adjunct professor somewhere: that all information, personal or not, should be shared by all.  (485)

If any quote defines what I mean by radical transparency it is the one immediately above. It is, no doubt, a caricature, and the individuals who adhere to something like it in the real world do so in shades, along a spectrum. One of the thinkers who does so, and whose thought might therefore shed light on what non-fictional proponents of transparency are like, is the science fiction author David Brin, who took some umbrage over at the IEET in response to my last post.

In that post itself I did not really have Brin in mind, partly because his views, like Julian Assange's, have always seemed to me more cognizant of the deep human need for personal privacy than those of the figures I mentioned there (Mark Zuckerberg, Kevin Kelly, and Jeff Stibel), and thus his ideas were not directly relevant to where I originally intended to go in the second part of my post, which was to focus on precisely this need. Given his sharp criticism, it now seems important that I address his views directly, and thus swerve for a moment away from my main narrative.

Way back in 1997, Brin wrote a quite prescient work, The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?, which accurately gauged the way technology and culture were moving on the questions of surveillance and transparency. In a vastly simplified form, Brin's argument was that given the pace of technological change, which makes surveillance increasingly easier and cheaper, our best protection against elite or any other form of surveillance is not to put limits on or stop that surveillance, but our capacity to look back, to watch the watchers, and to make as much as possible of what they do transparent.

Brin's view that transparency is the answer to surveillance leads him to be skeptical of traditional approaches, such as those of the ACLU, which hold that laws are the primary means to protect us, because technology, in Brin's view, will always outrun any legal limitations on the capacity to surveil.

While I respect Brin as a fellow progressive and admire his early prescience, I simply have never found his argument compelling, and in fact think that his sentiments, and similar ones held by early figures at the dawn of the Internet age, have led us down a cul-de-sac.

Given the speed at which technologies of surveillance have been and are being developed, it has always been the law, with its ability to set long-lasting boundaries between the permissible and the non-permissible, that is our primary protection against them. Indeed, the very fall in cost and rise in capacity of surveillance technologies, a reality which Brin believes makes legal constraints largely unworkable, in fact makes law, one of our oldest technologies, and one no man should be above or below, our best weapon in privacy's defense.

The same logic of the deterrent threat of legal penalties that we will need for, say, preventing a woman from being tracked and stalked by a jealous ex-boyfriend using digital technology will be necessary to restrain corporations and the state. It does not help a stalked woman just to know she is being stalked, to be able to "watch her watcher"; rather, she needs to be able to halt the stalking. Perhaps she can digitally hide, but above all she needs the protection of the force of law, which can impose limitations and penalties on anyone who uses technological capacities for socially unacceptable ends. In the same way, citizens are best protected not by being able to see into government prying, but by prohibiting that prying under penalty of law in the first place. We already do this effectively; the problem is that the law has lagged behind technology.

Admittedly, part of the issue is that technology has moved incredibly fast, but a great deal of the law's slowness has come from a culture that saw no problem with citizens being monitored and tracked 24/7, a culture which Brin helped create.

The law effectively prohibits authorities from searching your home without a warrant and probable cause, something authorities have been "technologically" able to do since we started living in shelters. Phone tapping, again without a warrant and probable cause, has been prohibited to authorities in the US since the late 1960s; authorities had been tapping phones since the 1890s, not long after the telephone was invented. Part of the problem today is that no warrant is required for the government to get your "metadata": who you called, or where you were as indicated by GPS. When your email exists in the "cloud" and not on your personal device, those emails can in some cases be read without any oversight from a judge. These gaps in Fourth Amendment protections exist because the bulk of US privacy law meant to deal with electronic communications was written before even email existed, indeed before most of us knew what the Internet was. The law can be slow, but it shouldn't be that slow; email, after all, is pretty damned old.

There's a pattern here: egregious government behavior or abuse of technological capacities (British abuses in the American colonies, American government and law enforcement's abuse of wiretapping and recording capacities in the 1960s) results in the passing of restrictions on the use of such techniques and technological capacities. Knowing about those abuses is only a necessary condition of restricting or putting a stop to them.

I find no reason to believe the pattern will not repeat itself, and that we will soon push for and achieve restrictions on the surveillance power of government and others, restrictions which will work until the powers-that-be find ways to get around them and new technology allows those who wish to surveil abusively to do so. Then we'll be back at this table again, in the endless cat-and-mouse game that we of necessity must play if we wish to retain our freedom.

Brin seems to think that the fact that "elites" always find ways to get around such restrictions is a reason for not having such restrictions in the first place, which is a little like asking why you should clean your house when it will just get dirty again. As I see it, our freedom is always in a state of oscillation between having been secured and being at risk. We preserve it by asserting our rights during times of danger, and, sadly, this is one of those times.

I agree with Brin that the development of surveillance technologies is such that it cannot directly be stopped, and that spying technologies that would once have been the envy of the CIA or KGB, such as remote-controlled drones with cameras, or personal tracking and bugging devices, are now available off the shelf to almost everyone, an area in which Brin was eerily prescient. Yet, as with another widespread technology that can be misused, the automobile, their use needs to be licensed, regulated, and, where necessary, prohibited. The development of even more sophisticated and intrusive surveillance technologies may not be preventable, but it can certainly be slowed, and steered in directions that better square with long-standing norms regarding privacy or even human nature itself.

Sharp regulatory and legal limits on the use of surveillance technologies would likely derail a good deal of innovation and investment in the technologies of surveillance, which is exactly the point. Right now billions of dollars are flowing in the direction of empowering what only a few decades ago we would have clearly labeled creeps, people watching other people in ways they shouldn't be, and these creeps can be found at the level of the state, the corporation, and the individual.

On the level of individuals, transparency is not a solution for creepiness because, let's face it, the social opprobrium of being known as a creep (because everyone is transparent) is unlikely to make creeps less creepy; it is their very insensitivity to such social rules that makes them creeps in the first place. All transparency would do is open the "shades" of the victim's "window." Two-way transparency is only really helpful, as opposed to merely inducing a state of fear in the watched, if the perception of intrusive watching allows the victim to immediately turn such watching off, whether by being able to make themselves "invisible" or, when the unwanted watching has gone too far, by bringing down the force of the law upon it.

Transparency is a step toward solving this problem, in that we need to somehow require tracking or surveillance apps in private hands to notify the person being tracked, but it is only a first step. Once a person knows they are being watched by another person, they need ways to protect themselves, to hide, and the backup of authorities to stop harassment.

In the economic sphere, the path out of the dead end we've driven ourselves into might lie in the recognition that the commodity for sale in the transaction between Internet companies and advertisers, the very reason they have pushed and continue to push to make us transparent and to surveil us in the first place, is us. We would do well to remember, as Hannah Arendt pointed out in her book The Human Condition, that the root of our conception of privacy lies in private property. The ownership of property, "one's own private place in the world," was once considered the minimum prerequisite for the possession of political rights.

Later, property in land was exchanged for the portable property of our labor, which stems from our own bodies, and of capital. We have in a sense surrendered control over the "property" of ourselves and become what Jaron Lanier calls digital peasants. Part of the struggle to maintain our freedoms will mean reasserting control over this property, which means our digital data and genetic information. How exactly this might be done is not clear to me, but I can see outlines. For example, it is at least possible to imagine something like digital "strikes" in which tracked consumers deliberately make themselves opaque to compel better terms.

In terms of political power, the use of law, as opposed to mere openness or transparency, to constrain the abuse of surveillance powers by elites would square better with our history. For the base of the Western democratic tradition (at least in its modern incarnation) is not primarily elites' openness to public scrutiny, or their competition with one another, as Brin argues is the case in The Transparent Society (though admittedly the first especially is very important), but the constraints on the power of the state, elites, the mob, or nefarious individuals provided by the rule of law, which sets clear limits on how power, technologically enabled or otherwise, can be used.

The argument that prohibition, or even just regulation, never works, along with comparisons to the failed drug war, I find too selective to be useful when discussing surveillance technologies. Society has prohibitions on all sorts of things that are extremely effective, if never universally so.

In the American system, as mentioned, police and government are severely constrained in how they are able to obtain evidence against suspects or targets. Past prohibitions against unreasonable searches and surveillance have actually worked. Consumer protection laws dissuade corporations from abusing customers, putting them at risk, or even just misusing their information. Environmental protection laws ban certain practices or place sharp boundaries on their use. Individuals are constrained in how they can engage with one another socially and in how they can use certain technologies, on pain of having their privilege to use them (e.g. driving) revoked.

Drug and alcohol prohibitions, both of which have to push against the force of highly addictive substances, are exceptions to the general rule that thoughtful prohibition and regulation work. The ethical argument is over what we should prohibit, what we should permit, and how. It is ultimately a debate over what kind of society we want to live in given our technological capacities, which should not be confused with a society determined by those capacities.

The fact that laws, regulations, and prohibitions are, under certain circumstances, well... boring shouldn’t be taken as an indication that they are also wrong. The Transparent Society was a product of its time, the 1990s, a prime example of the idea that as long as the playing field was leveled spontaneous order would emerge, and that government “interference” through regulation and law (and in a working democracy the government is us) would distort this natural balance. It was the same logic that got us into the financial crisis, a species of the eternal human illusion that this time is different. Sometimes the solution to a problem is merely a matter of knowing your history and applying common sense, and the solution to the problem of mass surveillance is to exert our power as citizens of a democracy to regulate and prohibit it where we see fit. Or, to sum it all up: we need updated surveillance laws.

It would be very unfair to Brin to say his views are as radical as those of the Circle’s philosopher Bailey, for, as mentioned, Brin is very cognizant and articulate regarding the human need for privacy at the level of individual intimacy. Eggers’ fictional entrepreneur-philosopher’s vision is a much more frightening version of radical transparency, entailing the complete loss of private life. Such total transparency is victorious over privacy at the conclusion of The Circle: despite Mae’s love for Ty, he is unable to convince her to help him destroy the company, and she betrays him.

We are left with the impression that the Circle, as a consequence of Mae’s allegiance to its transparency project, has been able, as Lee Billings said in a different context, “to sublime and compress our small isolated world into an even more infinitesimal, less substantial state”, and that our world is about to be enveloped in a dark sphere.

Yet it would be wrong to view even Bailey in the novel as somehow “evil”, something that might make the philosophy of the Circle in some sense even more disturbing. The leadership of the Circle (with the exception of the Judas Ty) doesn’t view what they are building as creepy or dangerous; they see it as a sort of paradise. In many ways they are actually helping people and want to help them. Mae, at the beginning of Eggers’ novel, is right: the builders of the Circle are utopians, as were those who thought, and continue to think, that radical transparency would prove the path to an inevitably better world.

Since drunks are known for speaking the truth, it is fitting that an inebriated Circler makes the connection between the aspirations of the Circle and those of religion:

….you’re gonna save all the souls. You’re gonna get everyone in the same place, you’re gonna teach them all the same things. There can be one morality, one set of rules. Imagine! (395)

The kind of religious longing lying behind the mission of the Circle is even better understood by comparison to that first utopian, Plato, and his character Glaucon’s myth of the Ring of Gyges in The Republic. The ring makes its possessor invisible, and the question it is used to explore is what human beings might do were there no chance they might get caught. The logic of the Circle is like a reverse Ring of Gyges, making everyone perfectly visible. Bailey thinks Mae stole the kayak because she thought she couldn’t be seen, couldn’t get caught:

All because you were being enabled by, what, some cloak of invisibility? (296)

If not being able to watch people would make them worse, being able to fully and completely watch them, so the logic goes, would inevitably make them better.

In making this utopian assumption, proponents of radical transparency, both fictional and real, needed to jettison some basic truths about the human condition that we are only now relearning, a pattern that has, sadly, repeated itself many times before.

Utopia does not feel like utopia if, upon crossing the border, you can’t go back home. And upon reaching utopia we almost always want to return home, because every utopia is built on a denial or oversimplification of our multidimensional and stubbornly imperfectible human nature, and this would be the case whether or not our utopia was free of truly bad actors, creeps or otherwise.

The problem one runs into in the transparency version of utopia, as in any other, is that, given that none of us is complete, or at least that we are less complete than we wish others to believe, the push for us to be complete in an absolute sense often leads to its opposite. On social networks, we end up showcasing not reality but a highly edited and redacted version of it: not the temper tantrums, but our kids at their cutest; not vacation disasters, but their picture-perfect moments.

Pushing us, imperfect creatures that we are, towards total transparency leads almost inevitably to hypocrisy and towards exhausting and ultimately futile efforts at image management. All this becomes even more apparent when asymmetries in power between the watched and the watcher are introduced. Employees are, with reason, less inclined to share that drunken binge over the weekend if they are “friends” with their boss on Facebook. I have had fellow bloggers tell me they are afraid to express their opinions because of risks to their employment prospects. No one any longer knows whether the image of a person found on a social network is the “real” one or a carefully crafted public facade.

These information wars, in which every side attempts to see as deeply as possible into the others while presenting an image of itself that best conforms to its own interests, are found up and down the line, from individuals to corporations and all the way up to states. The quest for transparency, even when those on the quest mean no harm, is less about making oneself known than about eliminating the uncertainty of others who are, despite all our efforts, not fully knowable. As Mae reflects:

It occurred to her, in a sudden moment of clarity, that what had always caused her anxiety, or stress, or worry, was not any one force, nothing independent and external- it wasn’t danger to herself or the calamity of other people and their problems. It was internal: it was subjective: it was not knowing. (194)

And again:

It was not knowing that was the seed of madness, loneliness, suspicion, fear. But there were ways to solve all this. Clarity had made her knowable to the world, and had made her better, had brought her close, she hoped, to perfection. Now the world would follow. Full transparency would bring full access and there would be no more not knowing. (465)

Yet this version of eliminating uncertainty is an illusion. In fact, the more information we collect the more uncertainty increases, a point made brilliantly by another young author, who is also a scientist, Pippa Goldschmidt, in her novel The Falling Sky. To a talk show host who defines science as the search for answers she replies: “That’s exactly what science isn’t about… it’s about quantifying uncertainty.”

Mae, at one point in the novel, is on the verge of understanding this:

That the volume of information, of data, of judgments of measurement was too much, and there were too many people, and too many desires of too many people, and too much pain of too many people, and having it all constantly collated, collected, added and aggregated, and presented to her as if it was tidy and manageable- it was too much.  (410)

Sometimes one can end up at the exact opposite destination from where one wants to go if one is confused about the direction to follow to get there. Many of the early advocates of radical transparency thought our openness would make Orwellian nightmares of intrusive and controlling states less likely. Yet, by being blissfully unconcerned about our fundamental right to privacy, and by promoting corporate monitoring and tracking of our every behavior, we have not created a society that is more open and humane, but have given spooks tools that democratic states would never have been able to construct openly, tools to spy upon us in ways that would have brought smiles to the faces of the totalitarian dictators and J. Edgar Hoovers of the 20th century. We have given criminals and creeps the capability to violate the intimate sphere of our lives, and provided real authoritarian dictatorships the template and technologies to make Orwell’s warnings a reality.

Eggers, whose novel was released shortly after the first Snowden revelations, was certainly capturing a change in public perception regarding the whole transparency project. It is the sense that we have been headed in the wrong direction, an unease that signals the revival of our internal sense of orientation: the course we are on does not feel right, and in fact somehow hints at grave dangers.

It was an unease captured equally well, and around the same time, by Vienna Teng’s hauntingly beautiful song Hymn of Acxiom (brought to my attention by reader Gregory Maus). Teng’s heavenly music and metallic voice, meant to be the voice of the world’s largest private database, make the threshold Eggers identifies, the one we are at risk of crossing, seem somehow beautiful yet ultimately terrifying.

Giving voice to this unease and questioning the ultimate destination of the radical transparency project has served us well and will likely continue to do so. At a minimum, as the quote from Cory Doctorow with which this post began indicates, a wall between citizens and even greater mass surveillance, at least by the state, may have been established by recent events.

Yet even if the historical pattern of our democracy repeats itself, and we are able to acknowledge and turn into law protections against a new mutation in the war of power against freedom, if privacy is indeed able to “strike back”, the proponents of radical transparency were certainly right about one thing: we can never put the genie fully back in the bottle, even if we are still free enough to restrain him with the chains of norms, law, regulation and market competition.

The technologies of transparency may not have effected a permanent change in the human condition in our relationship to the corporation and the state, to criminals and the mob and the just plain creepy, unless, that is, we continue to permit it, but they have likely permanently affected the social world much closer to our daily concerns: our relationship with our family and friends, our community and tribe. They have upended our relationship with one of the most precious of human ways of being, solitude, though not our experience of loneliness, a subject that will have to wait until another time…