Am I a machine?

A question I’ve been obsessing over lately goes something like this: Is life a form of computation, and if so, is this thought somehow dangerous?

Of course, drawing an analogy between life and the most powerful and complex tool human beings have so far invented wouldn’t be a historical first. There was a time when people compared living organisms to clocks, then to steam engines, and then, perhaps surprisingly, to telephone networks.

Thoughtful critics, such as the roboticist Rodney Brooks, think this computational metaphor has already exhausted its usefulness and that its overuse is proving detrimental to scientific understanding. In his view, scientists and philosophers should jettison the attempt to see everything in terms of computation and information processing, a habit that has hidden other, more productive metaphors behind the glare of a galloping Moore’s Law. Perhaps today it’s just the computer’s turn, and someday in the future we’ll have an even more advanced tool that will again seem the perfect model for the complexity of living beings. Or maybe not.

If instead we have indeed reached peak metaphor, it will be because with computation we really have discovered a tool that doesn’t just resemble life in terms of features, but reveals something deep about the living world- because it allows us for the first time to understand life as it actually is. It would be as if, after all these years, we had proven strangely right Giambattista Vico’s claim made near the start of the scientific revolution- “verum et factum convertuntur”, the true and the made are convertible. Humanity can never know the natural world, only what we ourselves have made. Maybe we will finally understand life and intelligence because we are now able to recreate a version of it in a different substrate (not to mention engineering it in the lab). We will know life because we are at last able to make it.

But let me start with the broader story…

Whether we’re blinded by the power of our latest uber-tool, or on the verge of a revolution in scientific understanding, however, might matter less than the unifying power of a universal metaphor. And I would argue that science needs just such a unifying metaphor. It needs one if it is to give us a vision of a rationally comprehensible world. It needs one for the purpose of education, along with public understanding and engagement. Above all, a unifying metaphor is needed today as ballast against over-specialization, which traps the practitioners of the various branches of science (including the human sciences) in silos, unable to communicate with one another and thereby reunify the picture of a world that science itself artificially divided into fiefdoms as an essential first step towards understanding it. And of all the metaphors we have imagined, computation really does appear uniquely fruitful and revelatory, not just in biology but across multiple and radically different domains. A possible skeleton key for problems that have frustrated scientific understanding for decades.

One place where the computation/information analogy has taken root over the past decades is fundamental physics, as an increasing number in the field have begun to borrow concepts from computer science in the hopes of bridging the gap between general relativity and quantum mechanics.

This informational turn in physics can perhaps be traced back to the Israeli physicist Jacob Bekenstein, who way back in 1972 proposed what became known as the “Bekenstein bound”: an upper limit on the entropy, the information, that can exist in a finite region of space. Pack information any tighter than roughly 10⁶⁹ bits per square meter and that region will collapse to form a black hole. Physics, it seems, puts hard limits on our potential computational abilities (limits that are a long way off), just as it places hard limits on our potential speed. What Bekenstein showed was that thinking of physics in terms of computation helped reveal something deeper not just about the physical world but about the nature of computation itself.
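To get a feel for that number, here’s a minimal sketch in Python (my own back-of-the-envelope, not Bekenstein’s derivation) that recovers the figure from standard physical constants, using the holographic form of the bound- roughly one bit per four Planck areas of enclosing surface:

```python
import math

# Standard physical constants (SI units)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

planck_area = hbar * G / c**3  # square of the Planck length, ~2.6e-70 m^2

# Holographic bound: at most one bit per 4*ln(2) Planck areas of surface
bits_per_square_meter = 1 / (4 * math.log(2) * planck_area)

print(f"max information density: {bits_per_square_meter:.1e} bits/m^2")
# prints roughly 1.4e+69 bits per square meter
```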

This recasting of physics in terms of computation, often called digital physics, really got a boost from an essay by the physicist John Wheeler in 1989 titled “It from Bit.” It’s probably safe to say that no one has ever really understood the enigmatic Wheeler, who, if one wasn’t aware that he was one of the true geniuses of 20th century physics, might be mistaken for a mystic like Fritjof Capra or, heaven forbid, a woo-woo spouting conman- here’s looking at you, Deepak Chopra. The key idea of It from Bit is captured in this quote from his aforementioned essay, a quote that also captures something of Wheeler’s koan-like style:

“It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.”

What distinguishes digital physics today from Wheeler’s late 20th century version is above all the fact that we live in a time when quantum computers have not only been given a solid theoretical basis, but have been practically demonstrated. A tool born from an almost offhand observation by the brilliant Richard Feynman, who in the 1980s declared what should have been obvious: “Nature isn’t classical . . . and if you want to make a simulation of Nature, you’d better make it quantum mechanical…”

What Feynman could not have guessed was that physics would make progress (to this point, at least) not from applying quantum computers to the problems of physics, so much as from applying the ideas of how quantum computation might work to the physical world itself. Leaps of imagination, such as seeing space itself as a kind of quantum error correcting code, reformulating space-time and entropy in terms of algorithmic complexity or compression, or explaining how, as with Schrödinger’s infamous cat, superpositions existing at the quantum level break down for macroscopic objects as a consequence of computational complexity- the Schrödinger equation remaining effectively unsolvable for large objects so long as P ≠ NP.

There’s something new here, but perhaps also something very old, an echo of Empedocles’ vision of a world formed out of the eternal conflict between Love and Strife. If one could claim that for the ancient Greek philosopher life could only exist at the mid-point between total union and complete disunion, then we might assert that life can only exist in a universe with room for entropy to grow, neither curled up too small nor too dispersed for any structures to congregate- a causal diamond. In other words, life can only exist in a universe whose structure is computable.

Of course, one shouldn’t make the leap from claiming that everything is somehow explainable from the point of view of computation to claiming that “a rock implements every finite state automaton”, an observation which, as David Chalmers pointed out, verges on the vacuous. We are not, as some in Silicon Valley would have it, living in a simulation, so much as in a world that emerges from a deep and continual computation of itself. In this view one needs some idea of a layered structure to nature’s computations, whereby simple computations performed at a lower level open up the space for more complex computations on a higher stage.

Digital physics, however, is mainly theoretical at this point and not without its vocal critics. In the end it may prove just another cul-de-sac rather than a viable road to the unification of physics. Only time will tell, but what may bolster it is the active development of quantum computers. Quantum theory and quantum technology, one can hope, might find themselves locked in an iterative process, each aiding the development of the other, in the same way that the development of steam engines propelled the development of thermodynamics, from which came yet more efficient steam engines in the 19th century. Above all, quantum computers may help physicists sort out the rival interpretations of what quantum mechanics actually means, with quantum mechanics ultimately understood as a theory of information. A point made brilliantly by the science writer Philip Ball in his book Beyond Weird.

There are thus good reasons for applying the computational metaphor to the realm of physics, but what about where it really matters for my opening question- that is, when it comes to life? To begin, what is the point of connection between computation in merely physical systems and in those we deem alive, and how do they differ?

By far the best answer to these questions that I’ve found is the case made by John E. Mayfield in his book The Engine of Complexity: Evolution as Computation. Mayfield views both physics and biology in terms of the computation of functions. What distinguishes historical processes such as life, evolution, culture and technology from mere physical systems is their ability to pull specific structure from the much more general space of the physically possible. A common screwdriver, for example, is at the end of the day just a particular arrangement of atoms. It’s a configuration that, while completely consistent with the laws of physics, is also highly improbable, or, as the physicist Paul Davies put it in a more recent book on the same topic:

“The key distinction is that life brings about processes that are not only unlikely but impossible in any other way.”

Yet the existence of that same screwdriver is trivially understandable when explained with reference to agency. And this is just what Mayfield argues evolution and intelligence do. They create improbable yet possible structures in light of their own needs.

Fans of Richard Dawkins or students of scientific history might recognize an echo here of William Paley’s famous argument for design. Stumble upon a rock and its structure calls out for little explanation; a pocket watch, with its intricate internal parts, gives clear evidence of design. Paley was nothing like today’s creationists- he had both a deep appreciation for and understanding of biology, and was an amazing writer to boot. The problem is he was wrong. What Darwin showed was that a designer wasn’t required for even the most complex of structures: random mutation plus selection gave us what look like miracles over sufficiently long periods of time.

Or at least evolution kind of does. The issue that most challenges the Theory of Natural Selection is the overwhelming size of the search space. When most random mutations are useless or even detrimental, how does evolution find not only structure, but the right structure, and even highly complex structure at that? It’s hard to see how monkeys banging away on typewriters for the entire age of the universe get you the first pages of Hamlet, let alone Shakespeare himself.
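One way to see how selection beats blind typing is Richard Dawkins’ famous “weasel” program, a toy sketch of which follows (the target phrase and mutation rate are the conventional illustrative choices, not anything from Mayfield). Random typing would need on the order of 27²⁸ tries to hit the 28-character target; cumulative selection gets there in a few hundred generations:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def mutate(parent, rate=0.05):
    # each character has a small chance of being randomly replaced
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in parent)

def fitness(candidate):
    # number of characters that already match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(population_size=100):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        offspring = [mutate(parent) for _ in range(population_size)]
        # selection: keep the best of the brood (or the parent, if better)
        parent = max(offspring + [parent], key=fitness)
        generations += 1
    return generations

print(evolve())  # typically converges in a few hundred generations
```

The toy is admittedly rigged- real evolution has no fixed target- but it makes the arithmetic point: selection acting on small variations collapses an astronomically large search space.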

The key, again it seems, is to apply the metaphor of computation. Evolution is a kind of search over the space of possible structures, but these are structures of a peculiar sort: useful structures are often computationally compressible- the code to express them is much shorter than the expression itself. As Jordana Cepelewicz put it in a piece for Quanta:

“Take the well-worn analogy of a monkey pressing keys on a computer at random. The chances of it typing out the first 15,000 digits of pi are absurdly slim — and those chances decrease exponentially as the desired number of digits grows.

But if the monkey’s keystrokes are instead interpreted as randomly written computer programs for generating pi, the odds of success, or “algorithmic probability,” improve dramatically. A code for generating the first 15,000 digits of pi in the programming language C, for instance, can be as short as 133 characters.”
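The compression claim is easy to check for yourself. Here’s a minimal sketch using Gibbons’ unbounded spigot algorithm (a standard published method, here in Python rather than the 133-character C program the article mentions): a program of a few hundred characters that will emit as many digits of pi as you care to wait for:

```python
def pi_digits():
    # Gibbons' unbounded spigot algorithm: yields decimal digits of pi forever
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(20)))  # 31415926535897932384
```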

In somewhat of an analogy to what Turing’s “primitives” are for machine-based computation, biological evolution requires only a limited number of actions to access a potentially infinite number of end states. Organisms need a way of exploring the search space; they need a way to record/remember/copy that information, along with the ability to compress this information so that it is easier to record, store and share. Ultimately, the same sort of computational complexity that stymies human-built systems may place limits on the types of solutions evolution itself is able to find.

The reconceptualization of evolution as a sort of computational search over the space of easily compressible possible structures is a case excellently made by Andreas Wagner in his book Arrival of the Fittest. Interestingly, Wagner draws a connection between this view of evolution and Plato’s idea of the forms. The irony, as Wagner points out, is that it was Plato’s idea of ideal types applied to biology that may have delayed the discovery of evolution in the first place. It’s not “perfect” types that evolution is searching over, for perfection isn’t something that exists in an ever-changing environment, but useful structures that are computationally discoverable. Molecular and morphological solutions to the problems encountered by an organism- analogs to codes for the first 15,000 digits of pi.

Does such evolutionary search always result in the same outcome? The late paleontologist and humanist Stephen Jay Gould would have said no. Replay the tape of evolution, as happens to George Bailey in “It’s a Wonderful Life”, and you’d get radically different outcomes. Evolution is built upon historical contingency, like Homer Simpson wishing he hadn’t killed that fish.

Yet our view of evolution has become more nuanced since Gould’s untimely death in 2002. In his book Improbable Destinies, Jonathan Losos lays out the current research on the issue. Natural Selection is much more a tinkerer than an engineer. Its solutions are, by necessity, kludgey- a result of whatever is closest at hand. Evolution is constrained by the need for survival to move from next-best solution to next-best solution across the search space of adaptation, like a cautious traveler walking from stone to stone across a stream.

Starting points are thus extremely important for what Natural Selection is able to discover over a finite amount of time. That said, and contra Gould, similar physics and environmental demands do often lead to morphological convergence- think bats and birds with flight. But even when species gravitate towards a particular phenotype there is often a stunning amount of underlying genetic variety or indeterminacy. Such genetic variety within the same species can be leveraged into more than one solution to a challenging environment, with one branch getting bigger and the other smaller in response to, say, predation. Sometimes evolution at the microscopic level can, by pure chance, skip over intermediate solutions entirely and land at something close to the optimal solution, thanks to what in probabilistic terms should have been a fatal mutation.

The problem encountered here might be quite different from the one mentioned above: not finding the needle of useful structure in the infinite haystack of possible ones, but avoiding finding the same small sample of needles over and over again. In other words, if evolution really is so good at convergence, why does nature appear so abundant with variety?

Maybe evolution is just plain creative- a cabinet of freaks and wonders. So long as a mutation isn’t detrimental to reproduction, even if it provides no clear gain, Natural Selection is free to give it a whirl. Why do some salamanders have only four toes? Nobody knows.

This idea that evolution needn’t be merely utilitarian, but can be creative, is an argument made by the renowned mathematician Gregory Chaitin, who sees deep connections between biology, computation, and the ultimately infinite space of mathematical possibility itself. Evolutionary creativity is also found in the work of the ornithologist Richard O. Prum, whose controversial The Evolution of Beauty argues that we need to see the perception of beauty by animals in the service of reproductive or consumptive (as in bees and flowers) choice as something beyond a mere proxy measure for genetic fitness. Evolutionary creativity, propelled by sexual selection, can sometimes run in the opposite direction from fitness.

In Prum’s view evolution can be driven not by the search for fitness but by the way in which certain displays resonate with the perceivers doing the selection. In this way organisms become agents of their own evolution and the explored space of possible creatures and behaviors is vastly expanded from what would pertain were alignment to environmental conditions the sole engine of change.

If resonance with perceiving mates or pollinators acts to expand the search space of evolution in multicellular organisms with the capacity for complex perception- brains- then unicellular organisms do them one better through a kind of shared, parallel search over that space.

Whereas multicellular organisms can rely on sex as a way to expand the evolutionary search space, bacteria have no such luxury. What they have instead is an even more powerful form of search- horizontal gene transfer. Constantly swapping genes among themselves gives bacteria a sort of plug-n-play feature.

As Ed Yong in his I Contain Multitudes brilliantly lays out, in possession of nature’s longest lived and most extensive tool kit, bacteria are often used by multicellular organisms to do things such as process foods or signal mates, which means evolution doesn’t have to reinvent the wheel. These bacteria aren’t “stupid”. In their clonal form as bio-films, bacteria not only exhibit specialized/cooperative structures, they also communicate with one another via chemical and electrical signalling– a slime internet, so to speak.

It’s not just in this area of evolution-as-search that the application of the computational metaphor to biology proves fruitful. Biological viruses, which use the cellular machinery of their hosts to self-replicate, bear a striking resemblance to computer viruses that do likewise in silicon. Plants, too, have characteristics that resemble human communications networks, as in forests whose trees communicate, coordinate responses to dangers, and share resources using vast fungal webs.

In terms of agents with limited individual intelligence whose collective behavior is clearly sophisticated, nothing trumps the eusocial insects, such as ants and termites. In such forms of swarm intelligence, computation is analogous to what we find in cellular automata, where a complex task is tackled by breaking a problem into numerous much smaller problems solved in parallel.
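The analogy is concrete: in an elementary cellular automaton every cell updates simultaneously using only its immediate neighborhood, yet rules as simple as Wolfram’s Rule 110 (which has been proven computationally universal) generate elaborate global structure. A minimal sketch:

```python
def step(cells, rule=110):
    # all cells update in parallel from their 3-cell neighborhoods
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 63 + [1]  # start with a single live cell at the right edge
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```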

Dennis Bray, in his superb book Wetware: A Computer in Every Living Cell, has provided an extensive reading of biological function through the lens of computation/information processing. Bray meticulously details how enzyme regulation is akin to switches, up- and down-regulation like the on/off functions of a transistor, with chains of enzymes acting like electrical circuits. He describes how proteins form signal networks within an organism that allow cells to perform logical operations- components linked not by wires, but by molecular diffusion within and between cells. Cellular structures such as methyl groups allow cells to measure the concentration of attractants in the environment, in effect acting as counters- performing calculus.
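A cartoon of that chemotactic “calculus” can be written in a few lines (this is my toy sketch, not Bray’s model): the cell keeps a short memory of recent attractant readings- the role receptor methylation plays in real bacteria- and compares past to present, in effect taking a crude time derivative to decide whether to keep swimming or to tumble and reorient:

```python
def decide(readings, window=4):
    """Toy E. coli-style chemotaxis: 'run' if attractant concentration
    is rising over the memory window, 'tumble' (reorient) otherwise."""
    if len(readings) < 2 * window:
        return "run"
    recent = sum(readings[-window:]) / window                 # current average
    remembered = sum(readings[-2 * window:-window]) / window  # older average
    return "run" if recent > remembered else "tumble"

print(decide([1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0]))  # climbing a gradient -> run
print(decide([2.0, 1.8, 1.6, 1.5, 1.3, 1.2, 1.1, 1.0]))  # losing the gradient -> tumble
```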

Bray thinks that it is these protein networks, rather than anything that goes on in the brain, that are the true analog for computer programs such as artificial neural nets. Just as neural nets are trained- the connections between nodes in a network sculpted by inputs to derive the desired output- protein networks are shaped by the environment to produce a needed biological product.

The history of artificial neural nets, the foundation of today’s deep learning, is itself a fascinating study in the iterative relationship between computer science and our understanding of the brain. When Alan Turing first imagined his Universal Computer, it was the human brain as then understood by the ascendant behavioral psychology that he used as its template.

It wasn’t until 1943 that the first computational model of a neuron was proposed by McCulloch and Pitts, who would expand upon their work with a landmark 1959 paper, “What the frog’s eye tells the frog’s brain”. The title might sound like a snoozer, but it’s certainly worth a read, as much for the philosophical leaps the authors make as for the science.

For what McCulloch and Pitts discovered in researching frog vision was that perception wasn’t passive but an active form of information processing, pulling distinct features out of the environment through circuits such as “bug detectors”- what they, leaning on Immanuel Kant of all people, called a physiological synthetic a priori.

It was a year earlier, in 1958, that Frank Rosenblatt superseded McCulloch and Pitts’s earlier and somewhat simplistic computational model of neurons with his perceptrons, whose development would shape the rollercoaster-like future of AI. Perceptrons were in essence single-layer artificial neural networks. What made them appear promising was the fact that they could “learn”. A single-layer perceptron could, for example, be trained to identify simple shapes, in a kind of analog for how the brain was discovered to be wired together by synaptic connections between neurons.

The early success of perceptrons, and it turned out the future history of AI, was driven into a ditch when in 1969 Marvin Minsky and Seymour Papert, in their landmark book Perceptrons, showed just how limited the potential of perceptrons as a way of mimicking cognition actually was. Perceptrons could handle only the simplest, linearly separable functions, such as AND and OR, and broke down when faced with functions, such as XOR, that lack such sharp boundaries. In the 1970s AI researchers turned away from modeling the brain (connectionist) and towards the symbolic nature of thought (symbolist). The first AI Winter had begun.
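The limitation is easy to reproduce. Below is a minimal sketch of Rosenblatt’s learning rule: trained on OR, the weights converge quickly; trained on XOR, which no single line through the plane can separate, the classifier is guaranteed to get at least one case wrong no matter how long it trains:

```python
def train(samples, epochs=100, lr=0.1):
    # single-layer perceptron: two weights, one bias, step activation
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            w1 += lr * (target - y) * x1
            w2 += lr * (target - y) * x2
            b += lr * (target - y)
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("OR", OR), ("XOR", XOR)]:
    classify = train(data)
    errors = sum(classify(*x) != t for x, t in data)
    print(name, "errors:", errors)  # OR: 0 errors; XOR: never reaches 0
```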

The key to getting around these limitations proved to be using multiple layers of perceptrons, what we now call deep learning. Though we’ve known this since the 1980s, it took until now, with exponential improvements in computer hardware and the accumulation of massive data sets, for the potential of perceptrons to come home.
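Why layers fix the problem can be shown in the smallest possible case: a hand-wired two-layer network (weights chosen by inspection, not learned) that computes XOR by composing two linearly separable pieces, OR and AND:

```python
def step(x):
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # hidden layer: two perceptron-style units
    h_or = step(x1 + x2 - 0.5)   # fires if either input is on
    h_and = step(x1 + x2 - 1.5)  # fires only if both inputs are on
    # output layer: OR but not AND, i.e. XOR
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

The hidden layer re-represents the inputs so that the final unit faces a linearly separable problem- the same trick deep networks perform, layer after layer, with learned rather than hand-picked weights.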

Yet it isn’t clear that most of the problems with perceptrons identified by Minsky and Papert have truly been solved. Much of what deep learning does can still be characterized as “line fitting”. What’s clear is that, whatever deep learning is, it bears little resemblance to how the brains of animals actually work, which was well described by the authors at the end of their book.

“We think the difference in abilities comes from the fact that a brain is not a single, uniformly structured network. Instead, each brain contains hundreds of different types of machines, interconnected in specific ways which predestine that brain to become a large, diverse society of specialized agencies.” (273)

The mind isn’t a unified computer but an emergent property of a multitude of different computers- connected, yes, but also kept opaque from one another as a product of evolution and as the price paid for computational efficiency. It is because of this modular nature that minds remain invisible even to their possessors, although these computational layers may be stacked, with higher layers building off the end products of the lower, possibly like the layered, unfolding nature of the universe itself. The origin of the mysteriously hierarchical structure of nature and human knowledge?

If Minsky and Papert were right about the nature of mind- if, in an updated version of their argument recently made by Kevin Kelly, or a similar one made by AI researcher François Chollet, there are innumerable versions of intelligence, many of which may be incapable of communicating with one another- then this has deep implications for the computational metaphor as applied to thought and life, and sets limits on how far we will be able to go in engineering, modeling, or simulating life with machines following principles laid down by Turing.

The quest of Paul Davies for a unified theory of information to explain life might be doomed from the start. Nature would be shown to utilize a multitude of computational models fit for their purposes- and not just multiple models between different organisms, but multiple models within the same organism. In the struggle between Turing universality and nature’s specialized machines, Feynman’s point that traditional computers just aren’t up to the task of simulating the quantum world may prove just as relevant for biology as it does for physics. To bridge the gap between our digital simulations and the complexity of the real we would need analog computers that more fully model what it is we seek to understand- though isn’t this what experiments are, a kind of custom-built analog computer? No amount of data or processing speed will eliminate the need for physical experiments. Bray himself says as much:

“A single leaf of grass is immeasurably more complicated than Wolfram’s entire opus. Consider the tens of millions of cells it is built from. Every cell contains billions of protein molecules. Each protein molecule in turn is a highly complex three-dimensional array of tens of thousands of atoms. And that is not the end of the story. For the number, location, and particular chemical state of each protein molecule is sensitive to its environment and recent history. By contrast, an image on a computer screen is simply a two-dimensional array of pixels generated by an iterative algorithm. Even if you allow that pixels can show multiple colors and that underlying software can embody hidden layers of processing, it is still an empty display. How could it be otherwise? To achieve complete realism, a computer representation of a leaf would require nothing less than a life-size model with details down to the atomic level and with a fully functioning set of environmental influences.” (103)

What this diversity of computational models means is not that the Church-Turing Thesis is false (Turing machines would still be capable of duplicating any other model of computation), but that its extended version is likely false. The Extended Church-Turing Thesis claims that any efficient computation can be efficiently performed by a Turing machine, yet our version of Turing machines might prove incapable of efficiently duplicating either the extended capabilities of quantum computers, gained through leverage of the deep underlying structure of reality, or the twisted structures utilized by biological computers “designed” by the multi-billion-year force of evolutionary contingency. Yet even if the Extended Church-Turing Thesis turns out to be false, the lessons derived from reflection on idealized Turing machines and their real-world derivatives will likely continue to be essential to understanding computation in both physics and biology.

Even if science gives us an indication that we are nothing but computers all the way down, from our atoms to our cells, to our very emergence as sentient entities, it’s not at all clear what this actually means. To conclude that we are, at bottom, the product of a nested system of computation is to claim that we are machines. A very special type of machine, to be sure, but machines nonetheless.

The word machine itself conjures up all kinds of ideas very far from our notion of what it means to be alive. Machines are hard, deterministic, inflexible and precise, whereas life is notably soft, stochastic, flexible and analog. If living beings are machines they are also supremely intricate machines, they are, to borrow from an older analogy for life, like clocks. But maybe we’ve been thinking about clocks all wrong as well.

As mentioned, the idea that life is like an intricate machine, and therefore is best understood by looking at the most intricate machines humans have made, namely clocks, has been with us since the very earliest days of the scientific revolution. Yet as the historian Jessica Riskin points out in her brilliant book The Restless Clock, there has been a debate ever since over what exactly was being captured in the analogy. As Riskin lays out, starting with Leibniz there was always a minority among the materialists arguing that life was clock-like precisely because to be a clock was to be “restless”, stochastic and undetermined. In the view of this school, sophisticated machines such as clocks or automatons gave us a window into what it meant to be alive, which, above all, meant to possess a sort of internally generated agency.

Life in an updated version of this view can be understood as computation, but it’s computation as performed by trillions upon trillions of interconnected, competing, cooperating, soft-machines- constructing the world through their senses and actions, each machine itself capable of some degree of freedom within an ever changing environment. A living world unlike a dead one is a world composed of such agents, and to the extent our machines have been made sophisticated enough to possess something like agency, perhaps we should consider them alive as well.

Barbara Ehrenreich makes something like this argument in her caustic critique of American culture’s denial of death, Natural Causes:

“Maybe then, our animist ancestors were on to something that we have lost sight of in the last few hundred years of rigid monotheism, science, and Enlightenment. And that is the insight that the natural world is not dead, but swarming with activity, sometimes even agency and intentionality. Even the place where you might expect to find solidity, the very heart of matter- the interior of a proton or a neutron- turns out to be flickering with ghostly quantum fluctuations. I would not suggest that the universe is “alive”, since that might invite misleading biological analogies. But it is restless, quivering, and juddering from its vast patches to its tiniest crevices.” (204)

If this is what it means to be a machine, then I am fine with being one. This does not, however, address the question of danger. For the temptation of such fully materialist accounts of living beings, especially humans, is that they provide a justification for treating individuals as nothing more than a collection of parts.

I think we can avoid this risk even while retaining the notion that what life consists of is deeply explained by reference to computation as long as we agree that it is only ethical when we treat a living being in light of the highest level at which it understands itself, computes itself. I may be a Matryoshka doll of molecular machines, but I understand myself as a father, son, brother, citizen, writer, and a human being. In other words we might all be properly called machines, but we are a very special kind of machine, one that should never be reduced to mere tools, living, breathing computers in possession of an emergent property that might once have been called a soul.


The Kingdom of Machines


For anyone thinking about the future relationship between nature, man, and machines, I’d like to make the case for the inclusion of an insightful piece of fiction in the canon. All of us have heard of H.G. Wells, Isaac Asimov or Arthur C. Clarke. And many, though perhaps fewer, of us have likely heard of fiction authors from the other side of the nature/technology fence, writers like Mary Shelley, or Ursula Le Guin, or nowadays, Paolo Bacigalupi, but certainly almost none of us have heard of Samuel Butler, or better, read his most famous novel Erewhon (pronounced with three short syllables: E-re-whon).

I should back up. Many of us who have heard of Butler and Erewhon have likely done so through George Dyson’s amazing little book Darwin Among the Machines, a book that itself deserves a top spot in the nature-man-machine canon and got its title from an essay of Butler’s that found its way into the fictional world of his Erewhon. Dyson’s 1997 book, written just as the Internet age was ramping up, tried to place the digital revolution within the longue durée of human and evolutionary history, but Butler had gotten there first. Indeed, Erewhon articulated challenges which took almost 150 years to unfold, issues being discussed today by a set of scientists and scholars concerned over both the future of machines and the ultimate fate of our species.

A few weeks back Giulio Prisco at the IEET pointed me in the direction of an editorial placed in both the Huffington Post and the Independent by a group of scientists including Stephen Hawking, Nick Bostrom and Max Tegmark, warning of the potential dangers emerging from technologies surrounding artificial intelligence that are now going through an explosive period of development. As the authors of the letter note:

Artificial-intelligence (AI) research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation.

Most seem to think we are at the beginning rather than the end of this AI revolution and see it likely unfolding into an unprecedented development: namely, machines that are just as smart, if not incredibly more so, than their human creators. Should the development of AI as intelligent or more intelligent than ourselves be a concern? Giulio himself doesn’t think so, believing that advanced AIs are in a sense both our children and our evolutionary destiny. The scientists and scholars behind the letter to HuffPost and The Independent, however, are very concerned. As they put it:

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

And again:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In a probing article about the existential risks posed by artificial intelligence, and more so the larger public’s indifference to them, James Hamblin writes of the theoretical physicist and Nobel laureate Frank Wilczek, another of the figures trying to promote serious public awareness of the risks posed by artificial intelligence. Wilczek thinks we will face existential risks from intelligent machines not over the long haul of human history but in a very short amount of time.

It’s not clear how big the storm will be, or how long it’s going to take to get here. I don’t know. It might be 10 years before there’s a real problem. It might be 20, it might be 30. It might be five. But it’s certainly not too early to think about it, because the issues to address are only going to get more complex as the systems get more self-willed.

To be honest, it’s quite hard to take these scientists and thinkers seriously, perhaps because I’ve been so desensitized by Hollywood dystopias starring killer robots. But when your Noahs are some of the smartest guys on the planet, it’s probably a good idea, if not to actually start building your boat, then at least to locate, and be ready to beat a path to, higher ground.

What is the scale of the risk we might be facing? Here’s physicist Max Tegmark from Hamblin’s piece:

Well, putting it in the day-to-day is easy. Imagine the planet 50 years from now with no people on it. I think most people wouldn’t be too psyched about that. And there’s nothing magic about the number 50. Some people think 10, some people think 200, but it’s a very concrete concern.

If there’s a potential monster somewhere on your path it’s always good to have some idea of its actual shape; otherwise you find yourself jumping and lunging at harmless shadows. How should we think about potentially humanity-threatening AIs? The first thing is that we need to be clear about what is happening. For all the hype, the AI sceptics are probably right- we are nowhere near replicating the full panoply of our biological intelligence in a machine. Yet this fact should probably increase rather than decrease our concern regarding the potential threat from “super-intelligent” AIs. However far they remain from the full complexity of human intelligence, on account of their much greater speed and memory such machines already largely run something as essential and potentially destructive as our financial markets, and the military is already discussing the deployment of autonomous weapons systems, with the domains over which the decisions of AIs hold sway only likely to increase over time. One need only imagine one or more such systems going rogue, doing highly destructive things its creators or programmers did not imagine or intend, to comprehend the risk. Such a rogue system would be more akin to a killer storm than Lex Luthor, and thus Nick Bostrom is probably on to something profound when he suggests that one of the worst things we could do would be to anthropomorphize the potential dangers, telling Ross Anderson over at Aeon:

You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent.

The crazy thing is that Erewhon, a novel published in 1872, clearly stated almost all of these dangers and perspectives. If these fears prove justified it will be Samuel Butler, rather than Friedrich Nietzsche, who will have been the great prophet to emerge from the 19th century.

Erewhon was released right on the eve of what’s called the second industrial revolution, which lasted from the 1870s to World War I. Here you get mass rail and steam ships, gigantic factories producing iron and steel, and the birth of electrification. It is the age romanticized and reimagined today by the cultural movement of steampunk.

The novel appeared 13 years after the publication of Charles Darwin’s The Origin of Species and, even more importantly, one year after Darwin’s even more revolutionary The Descent of Man. As we learn in Dyson’s book, Butler and Darwin were engaged in a long-lasting intellectual feud, though the issue wasn’t evolution itself, but Butler’s accusation that Charles Darwin had essentially ripped his theory from his grandfather Erasmus Darwin without giving him any of the credit.

Be that as it may, Erewhon was never intended, as many thought at the time, as a satire of Darwinism. It was Darwinian to its core, and tried to apply the lessons of the worldview made apparent by the Theory of Evolution to the innovatively exploding world of machines Butler saw all around him.

The narrator in Erewhon, a man named Higgs, is out exploring the undiscovered valleys of a thinly disguised New Zealand or some equivalent, looking for a large virgin territory to raise sheep. In the process he discovers an unknown community, the nation of Erewhon, hidden in its deep unexplored valleys. The people there are not primitive, but exist at roughly the level of a European village before the industrial revolution. Or as Higgs observes of them in a quip pregnant with meaning: “…savages do not make bridges.” What they find most interesting about the newcomer Higgs is, of all things, his pocket watch.

But by and by they came to my watch, which I had hidden away in the inmost pocket that I had, and had forgotten when they began their search. They seemed concerned and uneasy as soon as they got hold of it. They then made me open it and show the works; and when I had done so they gave signs of very grave displeasure, which disturbed me all the more because I could not conceive wherein it could have offended them. (58)

One needs to know a little of the history of the idea of evolution to realize how deliciously clever this narrative use of a watch by Butler is. He’s jumping off of William Paley’s argument for intelligent design found in Paley’s 1802 Natural Theology or Evidences of the Existence and Attributes of the Deity, about which it has been said:

It is a book I greatly admire for in its own time the book succeeded in doing what I am struggling to do now. He had a point to make, he passionately believed in it, and he spared no effort to ram it home clearly. He had a proper reverence for the complexity of the living world, and saw that it demands a very special kind of explanation. (4)


The quote above is Richard Dawkins in his The Blind Watchmaker, where he recovered and popularized Paley’s analogy of the stumbled-upon watch as evidence that the stunningly complex world of life around us must have been designed.

Paley begins his Natural Theology with an imagined scenario where a complex machine, a watch, hitherto unknown by its discoverer, leads to speculation about its origins.

In crossing a heath, suppose I pitched my foot against a stone, and were asked how the stone came to be there; I might possibly answer that, for anything I knew to the contrary, it had lain there for ever; nor would it perhaps be very easy to show the absurdity of this answer. But suppose I had found a watch upon the ground, and it should be inquired how the watch happened to be in that place; I should hardly think of the answer which I had before given, that for anything I knew the watch might have always been there. Yet why should not this answer serve for the watch as well as for the stone? Why is it not as admissible in the second case as in the first? (1)

After investigating the intricacies of the watch, how all of its pieces seem to fit together perfectly and have precise and definable functions, Paley concludes that the:

….inference….is inevitable, that the watch must have had a maker: that there must have existed, at some time, and at some place or other, an artificer or artificers, who formed it for the purpose which we find it actually to answer; who comprehended its construction, and designed its use. (3-4)

Paley thinks we should infer a designer even if we discover the watch is capable of making copies of itself:

Contrivance must have had a contriver; design, a designer; whether the machine immediately proceeded from another machine or not. (12)

This is creationism, but as even Dawkins admits, it is an eloquent and sophisticated creationism.

Darwin- whichever one got there first- overthrew this need for an engineer as the ultimate source of complexity by replacing a conscious designer with a simple process through which the most intricate of complex entities could emerge over time: in the younger Darwin’s case, Natural Selection.


The brilliance of Samuel Butler in Erewhon was to apply this evolutionary emergence of complexity not just to living things, but to the machines we believe ourselves to have engineered. Perhaps the better assumption, when we encounter anything of sufficient complexity, is that it must have evolved over time. Higgs says of the magistrate of Erewhon, obsessed with the narrator’s pocket watch, that he had:

….a look of horror and dismay… a look which conveyed to me the impression that he regarded my watch not as having been designed but rather as the designer of himself and of the universe or as at any rate one of the great first causes of all things  (58)

What Higgs soon discovers, however, is that:

….I had misinterpreted the expression on the magistrate’s face and that it was one not of fear but hatred.  (58)

Erewhon is a civilization where something like the Luddites of the early 19th century have won. The machines have been smashed, dismantled, turned into museum pieces.

Civilization has been reset back to the pre-industrial era, the knowledge of how to get out of this era and back to the age of machines has been erased, and education has been restructured to strangle scientific curiosity and advancement in the cradle.

All this happened in Erewhon because of a book. It is a book that looks precisely like an essay the real-world Butler had published not long before his novel, the essay Darwin among the Machines from which George Dyson got his title. In Erewhon it is called simply “The Book of the Machines”.

It is sheer hubris, we read in “The Book of the Machines”, to think that we are the creatures who have reached the pinnacle of an evolution that has played itself out over so many billions of years in the past and is likely to play itself out for even longer in the future. And why should we believe there could not be such a thing as “post-biological” forms of life:

…surely when we reflect upon the manifold phases of life and consciousness which have been evolved already it would be a rash thing to say that no others can be developed and that animal life is the end of all things. There was a time when fire was the end of all things another when rocks and water were so. (189)

In the “Book of the Machines” we see the same anxiety about the rapid progress of technology that we find in those warning of the existential dangers we might face within only the next several decades.

The more highly organised machines are creatures not so much of yesterday as of the last five minutes, so to speak, in comparison with past time. Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand. May not the world last twenty million years longer? If so, what will they not in the end become? (189-190)

Butler, even in the 1870s, was well aware of the amazing unconscious intelligence of evolution. The cleverest of species use other species for their own reproductive ends. The stars of this show are the flowers, a world on display in Louie Schwartzberg’s stunning documentary Wings of Life, which shows how much of the beauty of our world is the product of this bridging between species as a means of reproduction. Yet flowers are just the most visually prominent example.

Our machines might be said to be like flowers: unable to reproduce on their own, but with an extremely effective reproductive vehicle in human beings. This, at least, is what Butler speculated in Erewhon, again in the section “The Book of the Machines”:

No one expects that all the features of the now existing organisations will be absolutely repeated in an entirely new class of life. The reproductive system of animals differs widely from that of plants but both are reproductive systems. Has nature exhausted her phases of this power? Surely if a machine is able to reproduce another machine systematically we may say that it has a reproductive system. What is a reproductive system if it be not a system for reproduction? And how few of the machines are there which have not been produced systematically by other machines? But it is man that makes them do so. Yes, but is it not insects that make many of the plants reproductive and would not whole families of plants die out if their fertilisation were not effected by a class of agents utterly foreign to themselves? Does any one say that the red clover has no reproductive system because the humble bee and the humble bee only must aid and abet it before it can reproduce? (204)

Reproduction is only one way one species or kingdom can use another. There is also their use as a survival vehicle itself. Our increasing understanding of the human microbiome almost leads one to wonder whether our whole biology and everything that has grown up around it has all this time merely been serving as an efficient vehicle for the real show – the millions of microbes living in our guts. Or, as Butler says in what is the most quoted section of Erewhon:

Who shall say that a man does see or hear? He is such a hive and swarm of parasites that it is doubtful whether his body is not more theirs than his, and whether he is anything but another kind of ant-heap after all. Might not man himself become a sort of parasite upon the machines? An affectionate machine-tickling aphid. (196)

Yet, one might suggest that perhaps we shouldn’t separate ourselves from our machines. Perhaps we have always been transhuman in the sense that from our beginnings we have used our technology to extend our own reach. Butler well understood this argument that technology was an extension of the human self.

A machine is merely a supplementary limb; this is the be all and end all of machinery. We do not use our own limbs other than as machines; and a leg is only a much better wooden leg than any one can manufacture. Observe a man digging with a spade; his right forearm has become artificially lengthened, and his hand has become a joint.

In fact machines are to be regarded as the mode of development by which the human organism is now especially advancing, every past invention being an addition to the resources of the human body. (219)

Even the Luddites of Erewhon understood that to lose all of our technology would be to lose our humanity, so intimately woven were our two fates. The solution to the dangers of a new and rival kingdom of animated beings arising from machines was to deliberately wind back the clock of technological development to before the time fossil fuels had freed machines from the constraints of animal, human, and cyclical power to before machines had become animate in the way life itself was animate.

Still, Butler knew it would be almost impossible to make the solution of Erewhon our solution. Given our numbers, to retreat from the world of machines would unleash death and anarchy such as the world has never seen:

The misery is that man has been blind so long already. In his reliance upon the use of steam he has been betrayed into increasing and multiplying. To withdraw steam power suddenly will not have the effect of reducing us to the state in which we were before its introduction; there will be a general break-up and time of anarchy such as has never been known; it will be as though our population were suddenly doubled, with no additional means of feeding the increased number. The air we breathe is hardly more necessary for our animal life than the use of any machine, on the strength of which we have increased our numbers, is to our civilisation; it is the machines which act upon man and make him man, as much as man who has acted upon and made the machines; but we must choose between the alternative of undergoing much present suffering or seeing ourselves gradually superseded by our own creatures, till we rank no higher in comparison with them than the beasts of the field with ourselves. (215-216)

Therein lie the horns of our current dilemma. The one way we might save biological life from the risk of the new kingdom of machines would be to dismantle our creations and retreat from technological civilization. It is a choice we cannot make, both because of its human toll and for the sake of the long-term survival of the genealogy of earthly life. Butler understood the former, but did not grasp the latter. For the only way the legacy of life on earth- a legacy that has lasted for 3 billion years, and which everything living on our world shares- will survive the inevitable life cycle of our sun, which will boil away our planet’s life-sustaining water in less than a billion years and consume the earth itself a few billion years after that, is for technological civilization to survive long enough to provide earthly life with a new home.

I am not usually one for giving humankind a large role to play in cosmic history, but there is at least a chance that something like Peter Ward’s Medea Hypothesis is correct: that given the thoughtless nature of evolution’s imperative to reproduce at all costs, life ultimately destroys its own children, like the murderous mother of Euripides’ play.

As Ward points out, it is the bacteria that have almost destroyed life on earth, and more than once, by mindlessly transforming its atmosphere and poisoning its oceans. This is perhaps the risk behind us, the one Bostrom thinks might explain our cosmic loneliness: complex life never gets very far before bacteria kill it off. Still, perhaps we are caught in a sort of pincer of past and future, with our technological civilization, its capitalist economic system, and the AIs that might come to sit at the apex of this world proving ultimately as absent of thought as bacteria, and somehow the source of both biological life’s destruction and their own.

Our luck has been to avoid a Medean fate in our past. If we can keep our wits about us we might be able to avoid a Medean fate in our future as well, only this one brought to us by the kingdom of machines we have created. If most of life in the universe really has been trapped at the cellular level by something like the Medea Hypothesis, and we actually are able to survive over the long haul, then our cosmic task might be to purpose the kingdom of machines to spread and cultivate higher-order life as far and as deep as we can reach into space. Machines, intelligent or not, might be the interstellar pollinators of biological life. It is we the living who are the flowers.

The task we face over both the short and the long term is to somehow negotiate the rivalry not only of the two worlds that have dominated our existence so far- the natural world and the human world- but a third, an increasingly complex and distinct world of machines. Getting that task right is something that will require an enormous amount of wisdom on our part, but part of the trick might lie in treating machines in some way as we already, when approaching the question rightly, treat nature- containing its destructiveness, preserving its diversity and beauty, and harnessing its powers not just for the good of ourselves but for all of life and the future.

As I mentioned last time, the more sentient our machines become, the more we will need to look to our own world- the world of human rights and, for animals similar to ourselves, animal rights- for guidance on how to treat our machines, balancing these rights with the well-being of our fellow human beings.

The nightmare scenario is that instead of carefully cultivating the emergence of the kingdom of machines to serve as a shelter for both biological life and those things we human beings most value, it will instead continue to be tied to the dream of the endless accumulation of capital, or as Douglas Rushkoff recently stated it:

When you look at the marriage of Google and Kurzweil, what is that? It’s the marriage of digital technology’s infinite expansion with the idea that the market is somehow going to infinitely expand.

A point that was never better articulated, or more nakedly expressed, than by the imperialist businessman Cecil Rhodes in the 19th century:

To think of these stars that you see overhead at night, these vast worlds which we can never reach. I would annex the planets if I could.

If that happens, a kingdom of machines freed from any dependence on biological life or any reflection of human values might eat the world of the living until the earth is nothing but a valley of our bones.

_____________________________________________________________________

That a whole new order of animated beings might emerge from what was essentially human weakness vis-à-vis other animals is yet another of the universe’s wonders, but it is a wonder that needs to go right in order to become a miracle, or even to avoid becoming a nightmare. Butler invented almost all of our questions in this regard, and in speculating in Erewhon about how humans might unravel their dependence on machines he raised another interesting question as well: namely, whether the very free will that we use to distinguish the human world from the worlds of animals or machines actually exists. He also pointed to an alternative future where the key event would not be the emergence of the kingdom of machines but the full harnessing and control of the powers of biological life by human beings- questions I’ll look at sometime soon.


Boston, Islam and the Real Scientific Solution

Arab Astronomers

The tragic bombing on April 15 by the Tsarnaev brothers, Tamerlan and Dzhokhar, collided with and amplified contemporary debates- deep and ongoing disputes on subjects as diverse as the role of religion, and especially Islam, in inspiring political violence, and the role and use of surveillance technology to keep the public safe. It is important that we get a grip on these issues and find a way forward that reflects the reality of the situation rather than misplaced fear and hope, for the danger is that the lessons we take to be the meaning of the Boston bombing will be precisely the opposite of those we should, on more clear-headed reflection, actually draw- a path that might lead us, as it has in the recent past, to make egregious mistakes that both fail to properly understand and therefore address the challenges at issue while putting our freedom at risk in the name of security. What follows then is an attempt at clear-headedness.

The Boston bombing, admittedly inspired by a violent version of political Islam, seemed almost to fall like clockwork into a recent liberal pushback against perceived Islamophobia by the group of thinkers known as the New Atheists. In late March of this year, less than a month before the bombing, Nathan Lean at Salon published his essay Dawkins, Harris, Hitchens: New Atheists Flirt with Islamophobia, which documented a series of inflammatory statements about Islam by the New Atheists, including the recent statement of Richard Dawkins that "Islam was the greatest source of evil in the world today" or an older quote by Sam Harris that: "Islam, more than any other religion human beings have devised, has all the makings of a thoroughgoing cult of death." For someone such as myself, who does indeed find many of the statements about Islam made by the New Atheists to be, if not overtly racist, then at least so devoid of religious literacy and above all historical and political self-reflection that they seem about as accurate as pre-modern travelers' tales about the kingdom of the Cyclops, or the lands at the antipodes of the earth where people have feet atop their heads, the bombings could not have come at a worse cultural juncture.

If liberals such as Lean had hoped to dissuade the New Atheists from making derogatory comments about Muslims, at the very least before they made an effort to actually understand the beliefs of the people they were talking about- so that Dawkins, when asked after admitting he had never read the Koran, responded in his ever so culturally sensitive way: "Of course you can have an opinion about Islam without having read the Qur'an. You don't have to read Mein Kampf to have an opinion about Nazism"- the fact that the murderers in Boston ended up being two Muslim brothers of Chechen origin would appear to give the New Atheists all the evidence they need. The argument for a more tolerant discourse has lost all traction.

It wasn't only this aspect of the "God debate" with which the Boston bombing intersected. There is also the ongoing argument for and against the deployment of widespread surveillance technology, especially CCTV. The fact that the killers were caught on tape, there for all the world to see, seems to give weight to those arguing that, whatever the concerns of civil libertarians, the benefits of widespread CCTV would far outweigh its costs. A mere three days after the bombing Slate ran an article by Farhad Manjoo, We Need More Cameras and We Need Them Now. The title kinda says it all but here's a quote:

Cities under the threat of terrorist attack should install networks of cameras to monitor everything that happens at vulnerable urban installations. Yes, you don’t like to be watched. Neither do I. But of all the measures we might consider to improve security in an age of terrorism, installing surveillance cameras everywhere may be the best choice. They’re cheap, less intrusive than many physical security systems, and—as will hopefully be the case with the Boston bombing—they can be extremely effective at solving crimes.

Manjoo does not think the use of ubiquitous surveillance would be limited to deterring crime or terrorism or solving such acts once they occur, but that it might eventually give us a version of precrime that seems like something right out of Philip K. Dick's Minority Report:

The next step in surveillance technology involves artificial intelligence. Several companies are working on software that monitors security-camera images in an effort to spot criminal activity before it happens.

London is the queen of such surveillance technology, but in the US it is New York that has most strongly devoted itself to this technological path of preventing terrorism, spending upwards of 40 million dollars to develop its Domain Awareness System in partnership with Microsoft. New York has done this despite the concerns of those whom Mayor Bloomberg calls "special interests", that is, those who are concerned that ubiquitous surveillance represents a grave threat to our right to privacy.

Combining the recent disputes surrounding the New Atheists' treatment of Islam with the apparent success of surveillance technology in solving the Boston bombing, along with the hope that technology could prevent such events from occurring in the future, might give one a particular reading of contemporary events that goes something as follows. The early 21st century is an age of renewed religious fanaticism centered on the rejection of modernity in general and the findings of science in particular, with Islam being only the most representative of the desire to overturn modern society through the use of violence. The best defense of modern societies in such circumstances is to turn to the very features that have made these societies modern in the first place, that is, science and technology. Science and technology not only promise us a form of deterrence against acts of violence by religiously inspired fanatics, they should allow us to prevent such acts from occurring at all if, that is, they are applied with full force.

This might be one set of lessons to draw from the Boston bombings, but would it be the right one? Let's take the issue of surveillance technology first. The objection to surveillance technology, notably CCTV, was brilliantly laid out by the science-fiction writer and commentator Cory Doctorow in an article for The Guardian back in 2011.

Something like CCTV works on the assumption that people are acting rationally and therefore can be deterred. No one could argue that Tamerlan and his brother were acting rationally. Their goal seemed to be to kill as many people as possible before they were caught, and they certainly knew they would be caught. The deterrence factor of CCTV and related technologies comes into play even less where suicide bombers are concerned. According to Doctorow, we seem to be hoping that we can use surveillance technology as a stand-in for the social contract that should bind all of us together.

But the idea that we can all be made to behave if only we are watched closely enough all the time is bunkum. We behave ourselves because of our social contract, the collection of written and unwritten rules that bind us together by instilling us with internal surveillance in the form of conscience and aspiration. CCTVs everywhere are an invitation to walk away from the contract and our duty to one another, to become the lawlessness the CCTV is meant to prevent.

This is precisely the lesson we can draw from the foiled train bombing plot in Canada that occurred at almost the same moment bombs were going off in Boston. While the American city was reeling, Canada was making arrests of Muslim terrorists who were plotting to bomb trains headed for the US, an act that would have been far more deadly than the Boston attacks. The Royal Canadian Mounted Police was tipped off to this planned attack by members of the Muslim community in Canada, a fact that highlights the difference in the relationship between law enforcement and Muslim communities in Canada and the US. As reported in the Chicago Tribune:

Christian Leuprecht, an expert in terrorism at Queen’s University in Kingston, Ontario, said the tip-off reflected extensive efforts by the Royal Canadian Mounted Police to improve ties with Muslims.

‘One of the key things, and what makes us very different from the United States, is that the RCMP has always very explicitly separated building relationships with local communities from the intelligence gathering side of the house,’ he told Reuters.

A world covered in surveillance technology based on artificial intelligence that can "read the minds" of would-be terrorists is one solution to the threat of terrorism, but the seemingly low-tech approach of the RCMP is something far different and, at least for the moment, far more effective. In fact, when we look under the hood of what the Canadians are doing we find something much more high-tech than any application of AI-based sensors on the horizon.

We tend to confuse advanced technology with bells and whistles and therefore miss the fact that a solution we don't need to plug in or program can be just as, if not more, complex than anything we are currently capable of building. The religious figures who turned the Canadian plotters in to the authorities were far more advanced than any "camera" we can currently construct. They were able to gauge the threat of the plotters through networks of trust and communication using the most advanced machine at our disposal- the human brain. They were also able to largely avoid what will likely be the bane of the first generation of AI-based surveillance technologies hoped for by Manjoo- that is, false alarms. Human-based threat assessment is far more targeted than the kinds of ubiquitous surveillance offered by our silicon friends. We only need to pay attention to those who appear threatening rather than watch everybody and separate out potential bad actors through brute force calculation.
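
The false-alarm problem is not an engineering detail that better cameras will wash away; it is base-rate arithmetic. Here is a minimal sketch of that arithmetic- all the numbers are illustrative assumptions of mine, not figures about any real system:

```python
# Base-rate arithmetic behind the false-alarm problem: even a very
# accurate automated watcher drowns in false positives when the thing
# it is looking for is rare. All numbers are illustrative assumptions.

population = 8_000_000      # people scanned in a large city (assumed)
true_threats = 10           # actual plotters among them (assumed)
sensitivity = 0.99          # chance the system flags a real threat (assumed)
false_positive_rate = 0.01  # chance it flags an innocent person (assumed)

flagged_real = true_threats * sensitivity
flagged_innocent = (population - true_threats) * false_positive_rate

# Precision: of all the alarms raised, what fraction point at real threats?
precision = flagged_real / (flagged_real + flagged_innocent)
print(f"innocents flagged: {flagged_innocent:,.0f}")
print(f"chance a given alarm is real: {precision:.4%}")
```

Under these assumptions a system that is wrong only one time in a hundred still flags roughly 80,000 innocent people, and scarcely one alarm in eight thousand points at a real plotter. A community member who raises the alarm only about people she actually knows faces no such dilution.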

The aftereffects of the Boston bombing are likely to make companies that sell surveillance technologies very rich as American cities pour billions into covering themselves with a net of "smart" cameras in the name of safety. The high-profile role of such cameras in apprehending the suspects will likely result in an erosion of the kinds of civil libertarian viewpoints held by those such as Doctorow. Yet, in an age of limited resources, are these the kinds of investments cities should be making?

The Boston bombing capped off a series of massacres, in Colorado and Newtown, all of which might have been prevented by a greater investment in mental health services. It may seem counterintuitive to suggest that many of those drawn to terrorist activities are suffering from mental health conditions, but that is what some recent research suggests.

We need better ways to identify and help persons who have fallen into some very dark places, and whereas the atomistic nature of much of American social life might not give us inroads to provide these services for many, the very connectedness of immigrant Muslim communities should allow mental health issues to be more quickly identified and addressed. A good example of a seemingly anti-modern community that has embraced mental health services is my neighbors the Amish, among whom problems are perhaps more quickly identified and dealt with than in my own modern community where social ties are much more diffuse. The problem of underinvestment in mental health combined with an overreliance on security technologies isn't one confined to the US alone. Around the same time Russia was touting its superior intelligence gathering capabilities when it came to potential Chechen terrorists, including the Tsarnaev family, an ill-cared-for mental health facility in Moscow burned to the ground, killing 38 of its trapped residents.

Lastly, there is the issue of the New Atheists' accusations against Islam- that it is particularly anti-modern, anti-scientific and violent. As we should well know, any belief system can be used to inspire violence, especially when that violence is presented in terms of self-defense. Yet, Islam today does seem to be a greater vector of violence than other anti-modern creeds. We need to understand why this is the case.

People who would claim that there is something anti-scientific about Islam would do well to acquaint themselves with a little history. There is a sense in which the scientific revolution in the West would have been impossible without the contribution of Islamic civilization, a case made brilliantly in at least two recent books: Jonathan Lyons' The House of Wisdom and John Freely's Aladdin's Lamp.

It isn't merely that Islamic civilization preserved the science of the ancient Greeks during the European dark ages, but that it built upon their discoveries and managed to synthesize technology and knowledge from previously disconnected civilizations, from China (paper) to India (the number zero). While Western Europeans still thought diseases were caused by demons, Muslims were inventing what was the most effective medical science until the modern age. They gave us brand new forms of knowledge such as algebra, taught us how to count using our current system of Arabic numerals, and mapped the night sky, giving us the names of our stars. They showed us how to create accurate maps, taught us how to tell time and measure distance, and gave us the most advanced form of that most amazing of instruments, a pre-modern form of pocket computer- the astrolabe. Seeing is believing, and those who doubt just how incredible the astrolabe was should check out Tom Wujec's great presentation on the astrolabe at TED.

Compared to the Christian West, which was often busy with brutalities such as genocidal campaigns against religious dissidents, inquisitions, or the persecution, forced conversion, or expulsion of Muslims and Jews, the Islamic world was generally extremely tolerant of religious minorities, to the extent that both Christian and Jewish communities thrived there.

Part of the reason Islamic civilization, which was so far ahead of the West in terms of science and technology in the 1300s, would fall so far behind was a question of geography. It wasn't merely that the West was able to rip from the more advanced Muslims the navigational technology that would lead to the windfall of discovering the New World; it was that the way science and technology eventually developed through the 18th, 19th and 20th centuries demanded strong states, which Islamic civilization, on account of geography, lacked by tradition- in Muslim societies it was a diffuse network of religious jurists rather than a centralized Church in league with the state that controlled religion, and a highly internationalized network of traders rather than a tight corporate-state alliance that dominated the economy. Modernity in its pre-21st century manifestation required strong states to put down communication and transportation networks and to initiate and support high-impact policies such as economic standardization and universal education.

Yet, technology appears to have changed this reliance on the state and brought back into play the kinds of diffuse international networks at which Islamic societies continue to be extremely good. As opposed to earlier state-centric infrastructure, cell phone networks can be put up almost overnight. The global nature of trade puts a premium on international connections, which the communications revolution has put in the hands of everybody. The rapid decline in the cost of creating media, and the bewildering ease with which this media can be distributed globally, has overturned the centralized and highly localized way in which communication used to operate.

Islam's diffuse geography and deeply ingrained assumptions regarding power left it vulnerable both to its own pitifully weak states and to incursions from outside powers who had followed a more centralized developmental path. Many of these conflicts are now playing themselves out and the legacies of Western incursions unraveling, so that largely Muslim states that were created out of thin air by imperialist powers- such as Iraq and Syria- are imploding. Sadly, states where a largely secular elite was able to suppress traditional publics with the help of Western aid- most ominously Egypt- are slipping towards fundamentalism. We have helped create an equation where religious fundamentalism is confused with freedom.

Given the nature of modern international communications and the ease of travel, we are now in a situation where an alienated Muslim such as Tamerlan is not only plugged into a worldwide anti-modern discourse, he is able to "shop around" for conflicts in which to insert himself. His unhinged mother had apparently suggested he go to Palestine in search of jihad; instead he reportedly traveled to faraway Dagestan to make contact with like-minded lost souls.

Our only hope here is that these conflicts in the Muslim world will play themselves out as quickly and as peacefully as possible, and that Islam, which is in many ways poised to thrive in the new conditions of globalization, will remember its own globalist traditions. Not just its tradition of international trade- think of how successful diaspora peoples such as the Chinese and the Jewish people have been- but its tradition of philosophic and scientific brilliance as well. The internet allows easy access to jihadi discourses by troubled Muslims, but it also increasingly offers things such as Massive Open Online Courses, or MOOCs, that might, in the same way Islamic civilization did for the West, bring the lessons of modernity and science deep into the Islamic world, even into areas such as Afghanistan that now suffer under the isolating curse of geography.

International communication, over the long term, might be a way to bring Enlightenment norms regarding rational debate and toleration not so much to the Muslim world as back to it- characteristics which it in some ways passed to the West in the first place- providing a forerunner of what a global civilization tolerant of differences and committed to combining the best from all the world's cultures might look like.

Finding Our Way Through The Great God Debate

"The way that can be spoken of is not the constant way; the name that can be named is not the constant name."

Lao Tzu: Tao Te Ching

Over the last decade a heated debate broke onto the American scene. The intellectual conflict between the religious and those who came to be called "the New Atheists" did indeed, I think, bring something new into American society that had not existed before: open conflict between what, at least at the top tier of researchers, is a largely secular scientific establishment- or better, those who presume to speak for it- and the traditionally religious. What emerged on the publishing scene and bestseller lists was not the kind of "soft-core" atheism espoused by previous science popularizers such as the late and much beloved Carl Sagan, but harsh, in-your-face atheists such as Sam Harris, Richard Dawkins and another late figure, Christopher Hitchens.

On the one hand there was nothing "new" about the New Atheists. Science in the popular imagination had for some time been used as a bridge away from religion. I need only look at myself. What really undermined my belief in the Catholicism in which I was raised were books like Carl Sagan's The Dragons of Eden and Cosmos, or to a lesser extent Stephen Hawking's A Brief History of Time; even a theologically sounding book like Paul Davies' God and the New Physics did nothing to shore up my belief in a personal God or in the philosophically difficult or troubling ideas such as "virgin birth", "incarnation", "transubstantiation" or "papal infallibility" under which I was raised.

What I think makes the New Atheists of the 2000s truly new is that many science popularizers have lost the kind of cool indifference, or even openness to reaching out and explaining science in terms of religious concepts, that was found in someone like Carl Sagan. Indeed, their confrontational stance is part of their shtick and a sure way to sell books, in the same way titles like The Tao of Physics sold like hotcakes in the archaic Age of Disco.

For example, compare the soft, even if New Agey, style of Sagan to the kind of anti-religious screeds given by one of the New Atheist "four horsemen", Richard Dawkins, in which he asks fellow atheists to "mock" and "ridicule" people's religious beliefs. Sagan sees religion as existing along a continuum with science, an attempt to answer life's great questions. Science may be right, but religion stems from the same deep human instinct to ask questions and understand, whereas Dawkins seems to see only a dangerously pernicious stupidity.

It is impossible to tease out who fired the first shot in the new conflict between atheists and the religious, but shots were fired. In 2004, Sam Harris' The End of Faith hit the bookstores, a book I can still remember paging through at a Barnes & Noble, when there were such things, and thinking how shocking it was in tone. That was followed by Harris' Letter to a Christian Nation and Richard Dawkins' The God Delusion, along with his documentary about religion with the less than live-and-let-live title The Root of All Evil?, followed by Hitchens' book God is Not Great, among others.

What had happened between the easygoing atheism of the late Carl Sagan and the militant atheism of Harris' The End of Faith was the tragedy of 9/11, which acted as an accelerant on a trend that had been emerging at the very least since Dawkins' Viruses of the Mind, published way back in 1993. People who made the argument that religion was evil, or who singled out any specific religion as barbaric, would before 9/11 have been labeled intolerant or racist. Instead, in 2005 they were published in the liberal Huffington Post. Here is Harris:

It is time we admitted that we are not at war with “terrorism”; we are at war with precisely the vision of life that is prescribed to all Muslims in the Koran.

One of the most humanistic figures in recent memory, whose loss is thus even more to our detriment than the loss of either the poetic Sagan or the gadfly Hitchens, tried early on to prevent this conflict from ever breaking out. As far back as 1997 Stephen Jay Gould tried to minimize the friction between science and religion with his idea of Non-Overlapping Magisteria (NOMA). If you read one link I have ever posted, please read this humane and conciliatory essay. The argument Gould makes in his essay is that there is no natural conflict between science and religion. Both religion and science possess their own separate domains: science deals with what is- the nature of the natural world- whereas religion deals with the question of meaning.

Science does not deal with moral questions because the study of nature is unable to yield meaning or morality on a human scale:

As a moral position (and therefore not as a deduction from my knowledge of nature’s factuality), I prefer the “cold bath” theory that nature can be truly “cruel” and “indifferent”—in the utterly inappropriate terms of our ethical discourse—because nature was not constructed as our eventual abode, didn’t know we were coming (we are, after all, interlopers of the latest geological microsecond), and doesn’t give a damn about us (speaking metaphorically). I regard such a position as liberating, not depressing, because we then become free to conduct moral discourse—and nothing could be more important—in our own terms, spared from the delusion that we might read moral truth passively from nature’s factuality.

Gould later turned his NOMA essay into a book, Rocks of Ages, in which he made an extensive argument that the current debate between science and religion is really a false one that emerges largely from a great deal of misunderstanding on both sides.

The real crux of any dispute between science and religion, Gould thinks, is the question of what constitutes the intellectual domain of each endeavor. Science answers questions regarding the nature of the world, but should not attempt to provide answers to questions of meaning or ethics, whereas religion, if properly oriented, should concern itself with exactly these human-level concerns of meaning and ethics. The debate between science and religion was not only unnecessary; the total victory of one domain over the other would, for Gould, result in a diminishment of the depth and complexity that emerge as a product of our humanity.

The New Atheists did not heed Gould's prescient and humanistic advice. Indeed, he was already in conflict with some who would become its most vociferous figures- namely Dawkins and E.O. Wilson- even before the religion vs. science debate broke fully onto the scene. This was a consequence of Gould pushing back against what he saw as a dangerous trend towards reductionism among those applying genetics and evolutionary principles to human nature. The debate between Gould and Dawkins even inspired a book of its own, the 2001 Dawkins vs. Gould, published a year before Gould's death from cancer.

Yet, for how much I respect Gould, and am attracted to his idea of NOMA, I do not find his argument to be without deep flaws. One of the inadequacies of Gould's Rocks of Ages is that it makes the case that the conflict between science and religion is merely a dispute pitting the majority of us- the scientifically inclined along with the traditionally religious- against marginal groups such as creationists and their equally fanatical atheist antagonists. The problem, however, may be deeper than Gould lets on.

Rocks of Ages begins with the story of the Catholic Church's teaching on evolution as an example of NOMA in action. That story begins with Pius XII's 1950 encyclical, Humani Generis, which declared that the theory of evolution, if it was eventually proven definitively true by science, would not be in direct contradiction to Church teaching. This was followed up (almost 50 years later, at the Church's usual geriatric speed) by Pope John Paul II's 1996 declaration that the theory of evolution was now firmly established as a scientific fact, and that Church theology would have to adjust to this truth. Thus, in Gould's eyes, the Church had respected NOMA by eventually deferring to science on the question of evolution, whatever challenges evolution might pose to traditional Catholic theology.

Yet the same Pope Pius XII who grudgingly accepted that evolution might be true, one year later used the scientific discovery of the Big Bang to argue, not for a literal interpretation of the book of Genesis- which hadn't been held by the Church since Augustine- but for the more ephemeral concept of a creator as the underlying "truth" pointed to in Genesis:

Thus, with that concreteness which is characteristic of physical proofs, it [science] has confirmed the contingency of the universe and also the well-founded deduction as to the epoch when the cosmos came forth from the hands of the Creator.

Hence, creation took place in time.

Therefore, there is a Creator. Therefore, God exists!

Pius XII was certainly violating the spirit if not the letter of NOMA here: preserving the requisite scientific skepticism in regard to evolution until all the facts were in, largely because it posed challenges to Catholic theology, while jumping almost immediately on a scientific discovery that seemed to lend support to some of the core ideas of the Church- a creator God and creation ex nihilo.

This seeming inclination of the religious to ask science to keep its hands off when it comes to challenging their claims, while embracing science the minute it seems to offer proof of some religious conviction, is precisely the argument Richard Dawkins makes against NOMA, I think rightly, in his The God Delusion. However much I otherwise disagree with Dawkins and detest his abusive and condescending style, I think he is right in his claim that NOMA only comes into play for many of the religious when science is being used to challenge their beliefs, and not when science seems to lend plausibility to their stories and metaphysics. Dawkins and many of his fellow atheists, however, think this is a cultural problem, namely that the culture is being distorted by the "virus" of religion. I disagree. Instead, what we have is a serious theological problem on our hands, and some good questions to ask might be: what exactly is this problem? Was this always the case? And in light of this, where might the solution lie?

In 2009, Karen Armstrong- the ex-nun, popular religious scholar, and, what is less known, one-time vocal atheist- threw her hat into the ring in defense of religion. The Case for God, Armstrong's rejoinder to the New Atheists, is a remarkable book.

At least one explanation of the theological problem we have is that a book like the Bible, when taken literally, is clearly absurd in light of modern science. We know that the earth was not created in six days, that there was no primordial pair of first parents, that the world was not destroyed by a flood, and we have no proof of miracles, understood now as the suspension of the laws of nature through which God acts in the world.

The critique against religion by the New Atheists assumes a literal understanding either of scripture or of the divine figures behind its stories. Armstrong's The Case for God sets out to show that this literalism is a modern invention. Whatever the case with the laity, among Christian religious scholars before the modern era the Bible was never read as a statement of scientific or historical fact. This, no doubt, is part of the reason for its many contradictions and omissions, where one story of an event is directly at odds with a different version of the exact same event.

The way in which medieval Christian scholars were taught to read the Bible gives us an indication of what this meant. They were taught to read first for the literal meaning and, once they understood it, to move to the next level to discover the moral meaning of a text. Only once they had grasped this moral meaning would they attempt to grapple with the true, spiritual meaning of a text. Exegesis thus resembles the movement of the spirit away from the earth and the body, upward into the transcendent.

The Christian tradition in its pre-modern form, for Armstrong then, was not attempting to provide a proto-scientific version of the world that our own science has shown to be primitive and false, and the divine order to which it pointed was not some big bearded guy in the clouds, but was understood as the source and background of a mysterious cosmic order which the human mind was incapable of fully understanding.

The New Atheists often conflate religion with bad and primitive science we should have long outgrown: "Why was that man's barn destroyed by a lightning bolt?" "As punishment from God." "How can we save our crops from drought?" "Perform a rain dance for the spirits." It must therefore be incredibly frustrating to the New Atheists for people to cling to such an outdated science. But, according to Armstrong, religion has never primarily been about offering up explanations for the natural world.

Armstrong thinks the best way to understand religion is as akin to art or music or poetry, and not as proto-science. Religion is about practice, not about belief, and its truths are only accessible to those engaged in such practices. One learns what being a Christian is by imitating Jesus: forgiving those who hurt you, caring for the poor, the sick, the imprisoned, viewing suffering as redemptive or "carrying your cross". Similar statements can be made for the world's other religions as well, though the kinds of actions one will pursue will be very different. This is, as Stephen Prothero points out in his God is Not One, because the world's diverse faiths have defined the problem of the human condition quite differently and therefore have come up with quite distinct solutions. (More on that in part two.)

How then did we become so confused regarding what religion is really about? Armstrong traces how Biblical literalism arose in tandem with the scientific revolution, and when you think about it this makes a lot of sense. Descartes, who helped usher in modern mathematics, wrote a "proof" for God. The giant of them all, Newton, was a Biblical literalist and thought his theory of gravity proved the existence of the Christian God. Whereas the older theology had adopted a spirit of humility regarding human knowledge, the new science that emerged in the 16th century was bold in its claims that a new way had been discovered which would lead to a full understanding of the world- including the God who it was assumed had created it. It shouldn't be surprising that Christians, perhaps especially the new Protestants, who sought to return to the true meaning of the faith through a complete knowledge of scripture, would think that God and the Bible could be proved in the same way the new scientific truths were being proved.


God thus became a "fact" like other facts, and Westerners began the road to doubt and religious schizophrenia as the world science revealed showed no trace of the Christian God- or any other god(s)- amid a cold, indifferent and self-organizing natural order. To paraphrase Richard Dawkins' The Blind Watchmaker, if God was a hypothetical builder of clocks, there is no need for the hypothesis now that we know how a clock can build itself.

Armstrong thus provides us with a good grasp of why Gould's NOMA might have trouble gaining traction. Our understanding of God or the divine has become horribly mixed up with the history of science itself, with the idea that God could be "proven"- an assumption that helped launch the scientific revolution. Instead, what science has done in relation to this early modern idea of God as celestial mechanic and architect of the natural world is push this idea of God into an ever shrinking "gap" of knowledge where science (for the moment) lacked a reasonable natural explanation of events. The gap into which the modern concept of God got itself jammed, after Darwin's theory of natural selection proved sufficient to explain the complexity of life, eventually became the time before the Big Bang- a hole Pope Pius XII eagerly jumped into, given that it seemed to match up so closely with Church theology. Combining his 1950-51 encyclicals, he seemed to be saying: "We'll give you the origin of life, but we'll gladly take the creation of the Universe!"

This was never a good strategy.

Next time I'll try to show how this religious crisis might actually provide an opportunity for new ways of approaching age-old religious questions and open up new avenues for pursuing transcendence- a possibility that might prove particularly relevant for the community that goes under the label transhumanist.

What’s Wrong With the New Atheism?

Dawkins and Dennett at Oxford

The New Atheism is a movement that has emerged in the last two decades that seeks to challenge the hold of religion on the consciousness of human beings and the impact of religion on political, intellectual and social life.

In addition to being a philosophical movement, the New Atheism is a social phenomenon, reflecting a decline in the hold of traditional religion and a seeming growth in irreligiosity, especially in the United States, a place that had been an outlier of religiosity among advanced societies that have long since secularized.

New Atheists take a stance of critical honesty, openly professing their unbelief where previously they might have been unwilling to publicly admit their views regarding religion. New Atheists often take an openly confrontational stance towards religion, pushing back not only at the social conformity behind much of religious belief, but at what they see as threats to the scientific basis of truth posed by movements such as creationism.

The intellectuals at the heart of the New Atheism are often firebrands directly challenging what they see as the absurdities of religious belief, the late Christopher Hitchens and Richard Dawkins being the most famous examples of such polemicists.

In my view, there are plenty of things to like about the New Atheism. The movement has fostered the ability of people to speak openly about their personal beliefs or lack of them, and has spoken in defense of the principle of the separation of church and state. Especially in the realm of science education, the New Atheists promote a common understanding of reality- that evolution is a scientific truth and not some secular humanist conspiracy, that the universe really is billions of years old rather than, as a literal reading of the Bible suggests, mere thousands. This truthful view of the world which science has given us is the basis of our modern society, its technological prowess and the vastly better standard of living it has engendered compared to any civilization that came before. Productive conversations cannot be had unless the world shown to us by science is taken to be the closest version of the truth we have yet come up with- the assumption we need to share if we are not to become unmoored from reality itself.

Yet with all that said, the New Atheism has some problems. These problems are clearly on display in a talk last spring by two of the giants of the New Atheism, the sociobiologist Richard Dawkins and the philosopher Daniel Dennett, at Oxford University.

Richard Dawkins is perhaps most famous for his ideas regarding cultural evolution, namely his concept of a "meme". A meme is another name for an idea, style, or behavior that, in Dawkins' telling, is analogous to a gene in biology in that it is self-replicating and subject to selective pressures.

Daniel Dennett is a philosopher of science who is an advocate of the patient victory of reason over religion. He is both a prolific writer, with works such as Darwin's Dangerous Idea, and a secular-humanist activist- the brains behind The Clergy Project, a project that allows former and current religious clergy to securely and openly discuss their atheism with one another.

The conversation between Dawkins and Dennett at Oxford begins reasonably enough, with Dennett stating how scientifically pregnant he finds Dawkins' idea of the meme as a vector for cultural evolution. The example Dennett gives of the meme concept is an interesting one. Think of the question: who was the designer of the canoe?

You might think the designer(s) are the people who have built canoes, but Dennett thinks it would be better to see them as just one part of a selective process. The real environment the canoe is selected for is its ability to stay afloat and be steered in water. These canoe "memes" are bound to be the same all over the world- and, minus artistic additions, they are, as the toy simulation below suggests.
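
To make Dennett's point concrete, here is a minimal sketch of design-by-selection- my own illustration, not Dennett's or Dawkins' code. "Builders" copy existing canoes with small random errors, and the water, not the builders, does the real designing by discarding hulls that float and steer poorly (the "ideal" length-to-width ratio of 7 is an arbitrary stand-in for hydrodynamics):

```python
import random

def seaworthiness(hull):
    """Score a hull: the closer its length/width ratio is to an assumed
    ideal of 7.0, the better it floats and steers."""
    length, width = hull
    if length <= 0 or width <= 0:
        return 0.0
    return 1.0 / (1.0 + abs(length / width - 7.0))

def generation(fleet, copies=5, noise=0.2):
    """Each canoe is imitated with slight copying errors, then the
    'water' keeps only the most seaworthy designs."""
    offspring = [(l + random.gauss(0, noise), w + random.gauss(0, noise))
                 for (l, w) in fleet for _ in range(copies)]
    offspring.sort(key=seaworthiness, reverse=True)
    return offspring[:len(fleet)]

# Two isolated 'cultures' start from very different initial designs...
fleet_a = [(random.uniform(2, 4), random.uniform(1, 2)) for _ in range(10)]
fleet_b = [(random.uniform(8, 12), random.uniform(2, 4)) for _ in range(10)]

for _ in range(200):
    fleet_a = generation(fleet_a)
    fleet_b = generation(fleet_b)

print("culture A ratios:", [round(l / w, 2) for l, w in fleet_a[:3]])
print("culture B ratios:", [round(l / w, 2) for l, w in fleet_b[:3]])
```

Run it and the two "cultures", despite starting from very different designs, converge on nearly the same length-to-width ratio- just as Dennett says canoe memes do the world over. The environment, not the builder, is the designer.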

Yet, when the discussion turns to religion, neither Dennett nor Dawkins, for reasons they do not explain, thinks the idea of memes will do. Instead, religion is described using another biological metaphor, that of a parasite or virus which uses its host for its own ends. Dennett has a common sense explanation for why people are vulnerable to this parasite. It is a way for someone, such as the parent of a child lost to death, to deal with the tragedy of life. This is the first cognitive "immunological" vulnerability that religious viruses exploit.

The second vulnerability that the religious virus exploits is ignorance. People don't know their own religious beliefs, don't know that other religions hold to equally absurd and seemingly arbitrary beliefs, and don't understand how the world really works- which science tells us.

Dennett sympathizes with persons who succumb to religious explanations as a consequence of personal tragedy. He is much more interested in the hold of religion that is born of ignorance. The problem for religion, in the eyes of Dennett (and he is more than pleased that religion has this problem), is that this veil of ignorance is falling away, and therefore the necessary operating environment for religion is disappearing. Knowledge and science are a form of inoculation: people are now literate and can understand the absurdity of their own religious beliefs, they now know about other religions, and they know about science. With the growth of knowledge will come, polio-like, the slow eradication of religion.

The idea that religion should be seen as a sort of cognitive virus is one Dawkins laid out way back in 1993 in his essay Viruses of the Mind. There, Dawkins presented the case that religion was akin to a computer virus, seizing the cognitive architecture of its host to further its own ends, above all its own propagation. If I can take the testimony of fellow blogger Jonny Scaramanga of Leaving Fundamentalism, this essay has had the important impact of helping individuals free themselves from what is sometimes the iron grip of religious faith.

The problem, of course, is that religion isn't literally a virus or a parasite. We are dealing here merely with an analogy, so the question becomes: exactly how scientifically, philosophically or historically robust is this religion-as-virus analogy?

In terms of science, an objection to be raised is that considering religion as a virus does a great deal of damage to Dawkins' original theory of cultural evolution through "memes". Why is religion characterized as virus-like when no other set of cultural memes is understood in such a value-laden way? A meme is a meme whether I "like" it or not. If the meme theory of cultural evolution really does hold some validity, and I for one am not convinced, it does not seem to follow that memes can be clearly separated into "good" memes and "bad" memes; or, if one does accept such a categorization, one had better have some pretty solid criteria for segregating memes into positive and negative groups.

The two criteria Dawkins sets up for segregating "good" memes from "bad" memes are that bad, virus-like memes suppress a person's Darwinian reproductive drives to serve their own ends, and that they hold the individual in the spell of an imagined reality.

Yet, there are a host of other factors that suppress the individual's biological imperative to reproduce. If bad memes are those that negatively impact one's ability to reproduce, then any law, code of conduct, or requirement that leads to such a consequence would have to fall under the umbrella of being a bad meme. We might argue over whether a particular example truly constitutes a reduction of an individual's ability to reproduce- paying taxes for someone else's children to attend school, say, or serving in the military to protect the rights of non-relatives- but such suppression of an individual's reproductive needs is well-nigh universal, as Sigmund Freud long ago pointed out. Taken together, we even have a word for such suppression: we call it civilization.

What about Dawkins' claim that religion is a bad, virus-like meme in that it induces in the individual a false sense of reality? Again, I see no clear way of distinguishing memes with this feature from nearly all other "normal" memes. The fact of the matter is we are surrounded by such socially created and sustained fictions; I call the one I am living in the United States of America. Indeed, if I wanted a species-unique definition of humanity, it might be our ability to collectively believe in and sustain things that aren't actually there, things which would disappear the moment the group that believes in them stopped doing so.

If the idea that religion is a virus is suspect when looked at more closely, it is nevertheless a meme itself. That is, what we have now, for many atheists at least, is the Dawkins-created meme that "religion is a virus". What is the effect of this meme? For some, the idea that religion is a virus may, as mentioned, allow them to free themselves from the hold of their own native traditions- a good thing if they so wish- but how does the religion-is-a-virus meme orient its believers to those who, foolishly in their view, continue to cling to religion?

Perhaps the most troubling thing here is that Dawkins appears to be reformulating one of the most sinister and destructive ideas of religion- that of possession- and using it against the religious themselves. For Dawkins, there are no good reasons why a religious person believes what she does- she is in the grip of a demon.

The meme "religion is a virus" would also appear to blind its adherents to the mixed legacy of religion. By looking at religion as merely a negative form of meme- a virus or parasite- Dawkins and Dennett, and no doubt many of their followers, tend to completely overlook the fact that religion might play some socially productive role that could benefit the individual well beyond the question of dealing with personal tragedy that Dennett raised. The examples I can come up with are legion- say, the Muslim requirement of charity, which gives the individual a guaranteed safety net should he fall on hard times, or the use of religious belief to break free from addiction, as in AA, which seems to help the individual override destructive compulsions that originate in their own biology.

Even if we stuck strictly to Dawkins' religion-as-virus analogy, we would quickly see that biological viruses themselves are not wholly bad or good. While it is true that viruses have killed countless numbers of human beings, it is also true that viral sequences comprise some 8% of the human genome, and without the proteins some of them produce- such as syncytin, used to make the placenta that protects the fetus- none of us would be here.

The very fact that religion is universal across human societies, and that it has existed for so long, would seem to be a strong indication that religion plays some net positive evolutionary role. We can probably see something of this role in the first reason Dennett provided for the attraction of religion- that it allows persons to deal with extreme personal tragedy. Religion can provide the individual with the capacity for psychological resilience in the face of such events.

No recognition is made by either Dawkins or Dennett of how religion, for all its factionalism and the wars that have emerged from it, has been a potent force, perhaps the most potent force, behind the expansion of human beings' sphere of empathy- the argument Robert Wright makes in his The Evolution of God. Early Judaism united Canaan's twelve tribes, Pauline Christianity spread the gospels to Jews and gentiles alike, and Islam united warring Arab tribes and created a religiously tolerant multi-ethnic empire.

So if the idea that religion is a bad, virus-like form of meme seems somewhat arbitrary, and if, even when we stick to the analogy, we end up with what is a mixed, and perhaps even net positive, role for religion, what about the condition for these religious memes' transmission that Dennett lays out- the "immunological" vulnerability of ignorance?

Dennett appears to have what might be characterized as an 18th century atheist's view of religion. Religion is a form of superstition that will gradually be overcome by the forces of reason and enlightenment. Religion is an exploitative activity built on the asymmetries in knowledge between the clerisy and the common believers, with two primary components: first, the lay believers do not know what their supposed faith actually teaches and cling to it out of mere custom or intellectual laziness; secondly, the lay believers do not know what other religions actually believe, and if they did they would find these beliefs both absurd and yet so similar to their own faith that it would call their own beliefs into doubt.

How does the idea of the ignorance of the lay religious as a source of the power of the clerisy hold up? As history, not so well. Take the biggest and bloodiest religious conflict ever- the European Wars of Religion. Before the Reformation and the emergence of Protestant denominations, the great mass of the people were not doctrinally literate. They practiced the Christian faith, knew and revered the major characters of its stories, celebrated its feast days, respected its clergy. At the same time, even had they been able to get their hands on a very rare, and very expensive, copy of the scriptures they couldn't have read it, being overwhelmingly illiterate. Even their most common religious experience, that of the mass, was said in a language- Latin- that only a very educated minority understood. But with the appearance of the printing press all of that changed. There was a huge push among both Catholics and their new Protestant rivals to make sure the masses knew the "true" doctrines of the faith. The common catechism makes its appearance here, alongside all sorts of other tools for communicating, educating, and binding the people to a specific doctrine.

Religious minorities that previously had been ignored, if not understood- such as Jews, or persons who held onto some remnant of the pre-Christian past, witches- became the target of those possessed by the new religious consciousness and the knowledge of the rivals to one's own faith that came along with this new supercharged identity.

The spread of education, at least at first, seems to increase rather than diminish commitment to some particular religious identity on the part of the educated. Much more worrisome, the ability to articulate and spread some particular version of religious truth appears to increase, at least in the short term, the commitment to dogmatic versions of the faith and to increase friction and outright conflict between different groups of believers.

And perhaps that explains the rise of both fundamentalism and the more militant strands of atheism being circulated today. After all, both fundamentalism and the New Atheism rode atop our own version of Gutenberg's printing press- the internet. Each seems to create echo chambers in which sharp views are exchanged between believers, and each seems to address the other in a debate few of us are paying attention to, with religious fundamentalists raving about a secular humanist takeover and the New Atheists rallying in defense of the separation of church and state and openly ridiculing the views of their opponents. For both sides much of the conflict is understood in terms of a "war" between science and religion, and the "rise of secular humanism".

At least in terms of Dennett's explanation of the conflict between science and religion in his conversation with Dawkins, I think, once again, the quite narrow historical and geographic viewpoint Dennett uses when describing the relationship between these two forms of knowledge ends up producing a distorted picture rather than an accurate representation of the current state and probable future of religion.

Dennett takes the very modern and Western conflict between science and religion to be historically and culturally universal, forgetting that, except for a very brief period in ancient Greece and in the modern world, knowledge regarding nature was embedded in religious ideas. One simply couldn't be a serious thinker without speaking in terms of religion. This isn't the only place where Dennett's Eurocentrism comes into play. If religion is in decline, the Islamic world does not seem to have heard, nor have the myriad other places, such as China or the former Soviet Union, that are experiencing religious revivals.

Finally, there is Dennett's claim that another source of the religious virus' power is that people are ignorant of other religions, and that if they knew about the absurdities of other faiths they would conclude that their own religious traditions are equally absurd. It is simply false to see, as Dennett does, in the decline of religion the victory of scientifically based materialism. Rather, what we are witnessing, in the West at least, can better be described as the decline of institutionalized religion and the growth of "spirituality". At least some of this spirituality can be seen as the mixing and matching of different world religions as individuals become more aware of the diversity of religious traditions. Individuals who learn about other religions seem much less likely to conclude that all religions are equally ridiculous than to find, sometimes spurious, similarities between religions and to draw things from other religions into their own spiritual practice.

Fundamentalism, with its creation museums and Loch Ness Monsters, is an easy target for the New Atheism, but the much broader target of spirituality is a more slippery foe. The most notable proponent of the non-literalist view of religion is Karen Armstrong, whose views Dawkins attacks as "bizarre" and "nonsense". Armstrong, in her book The Case for God, had come to the defense of religion against the onslaught of the more militant proponents of the New Atheism, of which Dawkins is the prime example. Armstrong's point is that fundamentalists and New Atheists are in fact not all that different; they are indeed but two sides of the same limited viewpoint that emerged with modernity, which views God as a fact- a definable thing- provable or disprovable. Religious thinkers long ago confronted the issue of the divine's overwhelming scope and decided that the best thing to do in the face of such enormity was to remain humbly silent.


Before the age of text that began with Gutenberg's printing press, some of whose effects were discussed earlier, the predominant religious view, in the eyes of Armstrong, was non-literalist: it took a position of silence born of humility toward understanding the nature of God, saw religion less as belief in the modern sense than as a form of spiritual practice- more akin to something like dance, music, or painting than to the logos of philosophy and science- and as a consequence often viewed the scriptures in terms of metaphor and analogy rather than as scientific or historical truth.

What Armstrong thinks is needed today is a return to something like Socratic dialogue, which she sees as the mutual exchange of views to obtain a more comprehensive picture of reality, one that nevertheless remains conscious of, and profoundly humble in the face of, a fundamental ignorance all of us share.

For both Dawkins and Dennett religion has no future. But, it seems to me, we are not likely to get away from religion or spirituality as the primary way in which we find meaning in the world for the foreseeable future. In non-Western cultures the hold of spiritual practices- such as the Muslim religious pilgrimage, the Hajj, or the Shia pilgrimages to the holy sites in Iraq that have been opened up as a consequence of the overthrow of Saddam Hussein, or the Hindu bathing in the Ganges- seems unlikely to disappear anytime soon.

The question is what happens to religion in the West, where the gap between the scientific understanding of the world and the "truths" of religion is experienced as a kind of cognitive dissonance that seems to demand resolution. Rather than religion disappearing, science itself seems to be taking on features of religion. Much of the broad public interest in sciences such as physics likely stems from the fact that they appear "religious", that is, they seem to address religious themes of origins and ultimate destiny, and the popularizers of science are often precisely those able to couch science in religious terms. With something like Transhumanism and the Singularity Movement we actually see science and technology turning themselves into a religion, with the goal of human immortality and god-like powers. We have no idea how this fusion of religion and science will play out, but it does seem to offer not only the possibility of a brand new form of religious sensibility and practice, but also a threat to the religious heritage and practices of not just the West, but all of humankind.

Thankfully, Dennett ends his conversation with Dawkins on what I thought was a hopeful note. Not all questions can be answered by science, and for those that cannot, politics in the form of reasoned discourse is our best answer. This is the reasonable Dennett (for Dawkins I see no hope). I only wish Dennett had applied this desire for reasoned discourse to the very religious and philosophical questions- questions regarding meaning and purpose, or the lack of both- he falsely claims science can answer.

For my part, I hope that the New Atheism eventually moves away from the mocking condescension, the historical and cultural ignorance and Eurocentrism of figures like Dennett and especially Dawkins. That it, instead, leads to more open discussion between all of us about the rationality or irrationality of our beliefs, the nature of our world and our future within it. That believers, non-believers, and those in between can someday sit at the same table and discuss openly and without apprehension of judgement the question: “what does it all mean?”

Saving Alexandria

One of the most dangerous speeches given by a public intellectual in recent memory was that of Richard Dawkins at a recently held atheist rally in Washington, DC. Dawkins is a brilliant scientist and a member of what the philosopher and fellow atheist Daniel Dennett has termed “the brights”, a movement seeking to promote a naturalistic, as opposed to supernatural, view of the world. All this is to the good, and the brights were originally intended to be an inclusive movement, one that aimed to pull religious as well as non-religious people into a dialogue regarding some of the deeper questions of existence, insofar as those religious persons shared the same materialist assumptions and language as the secular and scientific mainstream of the movement.

This inclusiveness might have resulted in some very interesting public conversations within what the neuroscientist David Eagleman has called possibilianism: the space between what science definitively knows and what religion and philosophy imagine. Instead, we have Dawkins’ speech, in which he calls on atheists to challenge, “mock”, and “ridicule” the beliefs of religious people. Not only is this an invitation to incivility, with atheists encouraged to intellectually mug religious persons who probably have not asked to engage in such conversations; it threatens to inflame the very anti-scientific tendencies of modern religion that Dawkins rightly opposes and detests.

To challenge religion where it has an immoral, intolerant, or dangerous effect on the larger political society is a duty of all citizens, whatever their religious or non-religious persuasions. Persons of a secular bent, among whom I include myself, need to constantly remind overly zealous religious people that theirs is not the only view, and that the separation of church and state exists not merely for their protection but for all of ours.

Yet the last thing science needs is to get into a fist-fight with sincerely religious people about subjects that have no effect whatsoever on the health of the public sphere. When the crowds roared in support of Dawkins’ call to mock people who hold what he considers absurd beliefs, such as the Catholic doctrine of transubstantiation (an example he actually uses), one is left wondering whether the barbarians of the future might just as likely come from the secular as from the religious elements in society.

Continued in this vein, Dawkins would transform the otherwise laudable atheist movement into a lightning rod aimed right at the heart of science. No one should want a repeat of what Piss Christ did to public funding for the arts.

Up until now, the ire of religion toward science has remained remarkably focused: evolution, reproductive technology, and, to a limited but much more dangerous extent, global warming. The last thing we need is for it to be turned on physics, cosmology, neurology, or computer science.

Should the religious ever turn their attention to the singularity movement, which, after all, is a religion masking itself as science, they could stifle innovation and thus further exacerbate inequality. If the prophets of the singularity prove to be correct, they may find themselves in a state of war with traditional religion. A cynical minority of religious people may see the singularity as a major threat to their “business model”, but the majority might reasonably be drawn into dispute with singularitarians over the necessarily spiritual question of what it means to be human, something the religious hold, with justification, to be their own turf.

Here the religious may, ironically, actually hold the humanist higher ground. For it is difficult to see how the deep extension of the human lifespan and the creation of artificial forms of intelligence promised by the singularity movement are humanistic ends, given the divergence in mortality rates and educational levels between the developing and developed worlds. In other words, a humanist, as opposed to a transhumanist, version of the future would aim at increasing the life expectancy of countries such as Chad, where a person is not expected to live past 50, rather than trying to extend ever outward the lifespans of the wealthy in the developed world. It would also be less inclined to race toward creating a new species of intelligent beings than toward making the most of the intelligent beings who are already here, namely us, through old-fashioned methods such as education, especially for girls.

In the not-too-far-off future, class and religious struggles might merge in dangerous and surprising ways, and the explosive growth of religion in the developing world might be mobilized both in the name of traditional belief and in the humanist cause of protecting the species.

Even if none of this dystopian scenario comes to pass, religion is already full of anxiety with regard to science, and science is imperialistic in its drive to subject every aspect of reality, human and non-human alike, to its “models” of reality. This anxiety and this imperialism have been detrimental to religion and science both.

The confrontation between religion and science has resulted in religion becoming vulgar in its need to translate religious concepts into the “truths” of science: think of the Shroud of Turin or the Creation Museum.

At the same time, science turns its sights not so much on undermining the religious world-view as on the very nature of belief itself. It is equally vulgar for scientists to strap electrodes onto someone’s brain in the hopes of finding “the god spot” or some such nonsense, as if it means anything that religious belief is “proven” to have a basis in neuro-anatomy. What else could it be?

We have known since the ancient Greeks that there are better ways to describe the natural world than religion. Religion isn’t, or shouldn’t be, about that. It is about the mystery of being, about good and evil, and about the search for meaning on a human scale, a scale that science cannot provide.

Science may be extremely good at explaining a mental disease such as schizophrenia and at devising effective interventions. What it cannot do, and what religion does so well, is turn the devastating nature of such an illness into a sphere of meaning that can rescue purpose from the cold indifference of the universe. Without some variant of that sphere of meaning, we will freeze to death.