Yuval Harari Drinks the Kool Aid

Like everything else in life, a book’s publication can have good or bad timing. Good timing happens when a newly published book seems just a little ahead of the prevailing zeitgeist, when it seems to have anticipated events or realizations almost no one else was grappling with on the day of its publication, but which have now burst upon the public with a sudden, irresistible force.

In this way authors, to the extent they are still read, or even just talked about, play the role formerly occupied by prophets or oracles. Such authorial prophecy is a role rapidly disappearing, to be replaced, many predict, by artificial intelligence and big data. It probably won’t matter much: neither is very good at predicting the future anyway.

A prophetic book badly timed doesn’t mean its analysis is wrong; perhaps it is just premature. Yuval Harari’s Homo Deus: A Brief History of Tomorrow is either one or the other. It’s either badly timed and right because it’s premature, or badly timed and wrong because its analysis is deeply flawed.

For those who haven’t read the book, or as a reminder for those who have, Harari’s essential point in Homo Deus is that “Having secured unprecedented levels of prosperity, wealth and harmony, and given our past record and our current values, humanity’s next targets are likely to be immortality, happiness and divinity.” (21) Harari believes this even while he seems to doubt the wisdom of such goals, and even in light of the fact that he admits this same humanity is facing ecological catastrophe and a crisis of ever-mounting inequality between, if not within, societies.

That Harari could draw this conclusion regarding what humanity should do next stems from the fact that he sees liberal humanism as the only real game left in town. He sees the revanche de Dieu in the Middle East and elsewhere as little but a sideshow; the real future of religion is now being forged in Silicon Valley.

Liberal humanism he defines as a twofold belief: on the one side, in human sovereignty over nature; on the other, that the only truth, apart from the hard truths of science which such humanism also accepts, is the truth that emerges from within the individual herself.

It is this reliance upon the emotions welling up from the self that Harari believes will ultimately be undone by the application of science’s central discovery, which, as Harari holds, is that at rock bottom the individual is nothing but “algorithms”. Once artificial algorithms are perfected, they will be able to know the individual better than that individual knows herself. Liberal humanism will then give way to what Harari calls “Dataism”.

Harari’s timing proved to be horribly wrong because almost the moment he proclaimed the victory of liberal humanism, all of its supposedly dead rivals, on both the right (especially) and the left (which included a renewed prospect of nuclear war), seemed to spring zombie-like from the grave, as if to show that word of their demise had been greatly exaggerated. Of course, all of these rivals (to mix my undead metaphors) were merely mummified versions of early 20th century collective insanities, which meant they were also forms of humanism. Whether one chooses to call them illiberal humanisms or variants of in-humanism is a matter of taste; all continued to have the human as their starting point.

Yet at the same time nature herself seemed determined to put paid to the idea that any supposed transcendence of humanity over nature had occurred in the first place. The sheer insignificance of human societies in the face of storms, where an “average hurricane’s wind energy equals about half of the world’s electricity production in a year. The energy it releases as it forms clouds is 200 times the world’s annual electricity use,” and the heat energy of a fully formed hurricane is “equivalent to a 10-megaton nuclear bomb exploding every 20 minutes,” has recently been made all too clear. The idea that we’ve achieved the god-like status of reigning supreme over nature isn’t only a fantasy, it’s proving to be an increasingly dangerous one.

That said, Harari remains a compassionate thinker. He’s no Steven Pinker, brushing under the rug past and present human and animal suffering so he can make his case that things have never been better. Also, unlike Pinker and his fellow travelers convinced of the notion of liberal progress, Harari maintains his sense of the tragic. Sure, 21st century peoples will achieve the world humanists have dreamed of since the Renaissance, but such a victory, he predicts, will prove pyrrhic. Such individuals, freed from the fear of scarcity, emotional pain, and perhaps even death itself, will soon afterward find themselves reduced to puppets with artificial intelligence pulling the strings.

Harari has drunk the Silicon Valley Kool Aid. His cup may be half empty when compared to that of other prophets of big data, whose juice is pouring over the styrofoam edge, but it’s the same drink just the same.

Here’s Harari, manifesting all of his charm as a writer, on this coming Dataism in all its artificial saccharine glory:

“Many of us would be happy to transfer much of our decision making processes into the hands of such a system, or at least consult with it whenever we make important choices. Google will advise us which movie to see, where to go on holiday, what to study in college, which job offer to accept, and even whom to date and marry. ‘Listen Google’, I will say ‘both John and Paul are courting me. I like both of them, but in different ways, and it’s so hard for me to make up my mind. Given everything you know, what do you advise me to do?’

And Google will answer: ‘Well, I’ve known you since the day you were born. I have read all your emails, recorded all your phone calls, and know your favorite films, your DNA and the entire biometric history of your heart. I have exact data about each date you went on, and, if you want, I can show you second-by-second graphs of your heart rate, blood pressure and sugar levels whenever you went on a date with John or Paul. If necessary, I can even provide you with an accurate mathematical ranking of every sexual encounter you had with either of them. And naturally, I know them as well as I know you. Based on all this information, on my superb algorithms, and on decades’ worth of statistics about millions of relationships, I advise you to go with John, with an 87 percent probability that you will be more satisfied with him in the long run.’” (342)

Though at times in Homo Deus Harari seems distressed by his own predictions, in the quote above he might as well be writing an advertisement for Google. Here he merely echoes the hype for the company expressed by the Executive Chairman of Alphabet (Google’s parent company), Eric Schmidt. It was Schmidt who gave us such descriptions of Google’s ultimate aims as:

We don’t need you to type at all because we know where you are. We know where you’ve been. We can more or less guess what you’re thinking about.

And it was Schmidt who said the limit on how far into the lives of its customers the company would peer was “to get right up to the creepy line and not cross it”. In the pre-Snowden Silicon Valley salad days Schmidt had also dryly observed:

If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.

It’s not that Harari is wrong in suggesting that entities such as Google will continue to use technology to get right under their customers’ skin; it’s that he takes their claims to know us better than we know ourselves, or at least to be on the road to such knowledge, as something other than extremely clever PR.

My doubts about Google et al.’s potential to achieve the omnipotence of Laplace’s Demon don’t stem from any romantic commitment to human emotions but from the science of emotion itself. As the cognitive neuroscientist Lisa Feldman Barrett has been vocally trying to inform a public suffused with antiquated notions about how the brain actually works: physiologists have never been able to discover a direct correlation between a bodily state and a perceived emotion. Not only will a reported emotion, like anger, manifest itself in physiologically distinct ways in two different individuals; at different times anger can manifest itself differently in the same individual.

Barrett also draws our attention to the fact that there is little evidence that particular areas of the brain are responsible for specific emotions, implying, by my lights, that much of current fMRI scanning based on blood flows and the like may face the same fate as phrenology.

Thus the kinds of passive “biometric monitoring” Harari depicts seem unlikely to lead to an AI that can see into a person’s soul in the way he assumes, which doesn’t mean algorithm-centric corporations won’t do their damnedest to make us think they can do just that. And many individuals probably will flatten and distort the aspects of life that do not lend themselves to quantification in a quixotic quest for certainty, flattening their pocketbooks at the same time.

True believers in the “quantified self” will likely be fooled into obsessive self-measurement by the success of such methods in sports, along with the increasing application to them of such neo-Taylorist methods in the workplace. Yet, while perfecting one’s long-short technique, or improving at some routine task, is easily reducible to metrics, most of life, and almost all of the interesting parts of living, are not. A person who believed in his AI’s “87 percent probability” would likely think he was dealing with science when in reality he was confronting a 21st century version of the Oracle at Delphi, sadly minus the hallucinogens.

Even were we able to reach deep inside the brain to determine the wishes and needs of our “true selves”, we’d still be left with this conundrum: the decisions of an invasive AI that could override our emotions would either leave us feeling that we had surrendered our free will to become mere puppets, or would be indistinguishable from the biologically evolved emotional self we were trying to usurp. For the fact of the matter is that the emotions we so often confuse with the self are nothing but the unending wave of internal contentment and desire that has oscillated within us since the day we were born. As a good Buddhist, Harari should know this. Personhood consists not in this ebb and flow, but emerges as a consequence of our commitments and life projects, and they remain real commitments and legitimate projects only to the extent we are free to break or abandon them.

Harari’s central assumption in Homo Deus, that humanity is on the verge of obtaining god-like certainty and control, concerns what is, of course, a social property much more than civilization’s longed-for gift to individuals. The same kind of sovereignty he predicts individuals will gain over the contingencies of existence and their biology he believes they will collectively exercise over nature itself. Yet even collectively, and at the global scale, such control is an illusion.

The truth implied in the idea of the Anthropocene is not that humanity now lords over nature, but that we have reached such a scale that we have ourselves become one of nature’s forces. Everything we do at scale, whatever its intention, results in unforeseen consequences we are then forced to react to, and so on and so on, in a cycle that is now clearly inescapable. Our eternal incapacity to be self-sustaining is the surest sign that we are not God. As individuals we are inextricably entangled within societies, with both entangled by nature herself. This is not a position from which either omniscience or omnipotence is in the offing.

Harari may have made his claims as a warning, giving himself the role of ironic prophet preaching not from a Levantine hillside but a California TED stage. Yet he is likely warning us about the wrong things. As we increasingly struggle with the problems generated by our entanglement, as we buckle when nature reacts, sometimes violently, to the scale of our assaults and torque, and as we confront a world in which individuals and cultures are wound ever more tightly, and uncomfortably, together, we might become tempted to look for saviors. One might then read Homo Deus and falsely conclude that the entities of Dataism should fill such a role, not because of their benevolence, but on account of their purported knowledge and power.



Sex and Love in the Age of Algorithms

Eros and Psyche

How’s this for a 21st century Valentine’s Day tale: a group of religious fundamentalists wants to redefine human sexual and gender relationships based on a more than 2,000 year old religious text. Yet instead of doing this by aiming to seize hold of the cultural and political institutions of society, a task they find impossible, they create an algorithm, so that once people enter, their experience is shaped by religiously derived assumptions users cannot see. People who enter this world have no control over their actions within it, and surrender their autonomy for the promise of finding their “soul mate”.

I’m not writing a science fiction story: it’s a tale that’s essentially true.

One of the first places, perhaps the only place, where the desire to compress human behavior into algorithmically processable and rationalized “data” has run into a wall is the ever-so-irrational realm of sex and love. Perhaps I should have titled this piece “Cupid’s Revenge”, for the domain of sex and love has proved itself so unruly and non-computable that what is now almost unbelievable has happened: real human beings have been brought back into the process of making actual decisions that affect their lives, rather than relying on silicon oracles to tell them what to do.

It’s a story not much known and therefore important to tell. The story begins with the exaggerated claims of what was one of the first and biggest online dating sites: eHarmony. Founded in 2000 by Neil Clark Warren, a clinical psychologist and former marriage counselor, eHarmony promoted itself as more than just a mere dating site, claiming it had the ability to help those using its service find their “soul mate”. As its senior research scientist, Gian C. Gonzaga, would put it:

It is possible “to empirically derive a matchmaking algorithm that predicts the relationship of a couple before they ever meet.”

At the same time it made such claims, eHarmony was also very controlling in the way its customers were allowed to use its dating site. Members were not allowed to search for potential partners on their own, but were steered toward “appropriate” matches based on a 200-item questionnaire and the site’s algorithm, which remained opaque to its users. This model of what dating should be was doubtless driven by Warren’s religious background, for in addition to his psychological credentials, Warren was also a Christian theologian.

By 2011 eHarmony had garnered the attention of sceptical social psychologists, most notably Eli J. Finkel, who, along with his co-authors, wrote a critical piece on eHarmony and related online dating sites for the American Psychological Association.

What Finkel wanted to know was whether claims such as eHarmony’s, that it had discovered some ideal way to match individuals to long-term partners, actually stood up to critical scrutiny. What he and his co-authors concluded was that while online dating had opened up a new frontier for romantic relationships, it had not solved the problem of how to actually find the love of one’s life. Or as he put it in a more recent article:

As almost a century of research on romantic relationships has taught us, predicting whether two people are romantically compatible requires the sort of information that comes to light only after they have actually met.

Faced with critical scrutiny, eHarmony felt compelled to do something that, to my knowledge, none of the programmers of the various algorithms that now mediate much of our relationship with the world have done: namely, to make the assumptions behind their algorithms explicit.

As Gonzaga explained it, eHarmony’s matching algorithm was based on six key characteristics of users, which included things like “level of agreeableness” and “optimism”. Yet as another critic of eHarmony, Dr. Reis, told Gonzaga:

That agreeable person that you happen to be matching up with me would, in fact, get along famously with anyone in this room.

Still, the major problem critics found with eHarmony wasn’t just that it made exaggerated claims for the effectiveness of its romantic algorithms, which were at best a version of skimming; it’s that it asserted nearly complete control over the way its users defined what love actually was. As is the case with many algorithms, the one used by eHarmony was a way for its designers and owners to constrain those using it, to impose upon them, rightly or wrongly, their own value assumptions about the world.
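The weakness Dr. Reis points to is easy to see if one sketches how trait-based matching works. Here is a minimal caricature in Python; the trait names, scales, and scoring rule are my own illustration, since eHarmony’s actual algorithm was never published:

```python
# Toy trait-based matching: score = 1 - mean absolute difference
# across a handful of self-reported traits on a 0..1 scale.
# Purely illustrative; not eHarmony's proprietary algorithm.

def compatibility(a, b):
    traits = a.keys() & b.keys()
    return 1 - sum(abs(a[t] - b[t]) for t in traits) / len(traits)

alice = {"agreeableness": 0.9, "optimism": 0.9}
bob   = {"agreeableness": 0.8, "optimism": 0.7}
carol = {"agreeableness": 0.85, "optimism": 0.95}

# Dr. Reis's complaint in miniature: a highly agreeable, optimistic
# person scores well against nearly everyone, so a high "match"
# number says little about any particular pairing.
print(round(compatibility(alice, bob), 2))    # high
print(round(compatibility(alice, carol), 2))  # also high
```

The point of the toy is that a profile high on universally attractive traits matches well with almost anyone, which is exactly why a score built this way predicts so little about an actual relationship.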

And like many classic romantic tales, this one ended with the rebellion of messy human emotion over reason and paternalistic control. Social psychologists weren’t the only ones who found eHarmony’s model constraining, and they weren’t the first to notice its flaws. One of the founders of an alternative dating site, Christian Rudder of OkCupid, has noted that much of what his organization has done was in reaction to the exaggerated claims for the efficacy of their algorithms and the top-down constraints imposed by the creators of eHarmony. But it is another, much maligned dating site, Tinder, that proved to be the real rebel in this story.

Critics of Tinder, where users swipe through profile pictures to find potential dates, have labeled the site a “hook-up” app that encourages shallowness. Yet Finkel concludes:

Yes, Tinder is superficial. It doesn’t let people browse profiles to find compatible partners, and it doesn’t claim to possess an algorithm that can find your soulmate. But this approach is at least honest and avoids the errors committed by more traditional approaches to online dating.

And appearance-driven sites are unlikely to be the last word in online dating, especially for older Romeos and Juliets who would like to go a little deeper than looks. The psychologist Robert Epstein, working at the MIT Media Lab, sees two up-and-coming trends that will likely further humanize the 21st century dating experience. The first is the rise of virtual dating environments that are more than video games. As he describes it:

…so at some point you will be able to have, you know, something like a real date with someone, but do it virtually, which means the safety issue is taken care of and you’ll find out how you interact with someone in some semi-real setting or even a real setting; maybe you can go to some exotic place, maybe you can even go to the Champs-Élysées in Paris or maybe you can go down to the local fast-food joint with them, but do it virtually and interact with them.

The other, just as important but less tech-sexy, change Epstein sees coming is bringing friends and family back into the dating experience:

Right now, if you sign up with the eHarmony or match.com or any of the other big services, you’re alone—you’re completely alone. It’s like being at a huge bar, but going without your guy friends or your girl friends—you’re really alone. But in the real world, the community is very helpful in trying to determine whether someone is right for you, and some of the new services allow you to go online with friends and family and have, you know, your best friend with you searching for potential partners, checking people out. So, that’s the new community approach to online dating.

As has long been the case, sex and love have been among the first explorers moving out into a previously unexplored realm of human possibility. Yet because of this, sex and love are also the proverbial canary in the coal mine informing us of potential dangers. The experience of online dating suggests that we need to be sceptical of the exaggerated claims of the various algorithms that now mediate much of our lives, and need to be privy to their underlying assumptions. To be successful, algorithms need to bring our humanity back into the loop rather than regulate it away as something messy, imperfect, irrational and unsystematic.

There is another lesson here as well: the more something becomes disconnected from our human capacity to extend trust through person-to-person contact, and through tapping into the wisdom of our own collective networks of trust, the more dependent we become on overseers who, in exchange for protecting us from deception, demand from us the kinds of intimate knowledge only friends and lovers deserve.


The Algorithms Are Coming!

Attack of the Blob

It might not make a great B-movie from the late ’50s, but the rise of the algorithms over the last decade has been just as thrilling, spectacular, and yes, sometimes even scary.

I was first turned on to the rise of algorithms by the founder of the gaming company Area/Code, Kevin Slavin, and his fascinating 2011 talk on the subject at TED. I was quickly drawn to one of Slavin’s illustrations of the new power of algorithms in the world of finance. Algorithms now control more than 70% of US financial transactions, meaning that the majority of decisions regarding the buying and selling of assets are now made by machines. I initially took, indeed I still take, the rise of algorithms in finance to be a threat to democracy. It took me much longer to appreciate Slavin’s deeper point that algorithms have become so powerful that they represent a new third player on the stage of human experience: Nature-Humanity-Algorithms. First to finance.

The global financial system has been built around the electronic net we have thrown over the world. Assets are traded at the speed of light. The system rewards those best equipped to navigate this system granting stupendous profits to those with the largest processing capacity and the fastest speeds. Processing capacity means access to incredibly powerful supercomputers, but the question of speed is perhaps more interesting.

Slavin points out how the desire to shave off a few milliseconds of trading time has led to the hollowing out of whole skyscrapers in Manhattan. We tend to think of the Internet as something that is “everywhere”, but it actually has locations of sorts: core hubs such as its root servers and the carrier hotels through which its traffic flows. The desire to get close to these hubs, and therefore be able to move faster, has led not only to these internally re-configured skyscrapers, but to the transformation of the landscape itself.

By far the best example of the needs of algorithms shaping the world is the 825 mile fiber optic trench dug from Chicago to New York by the company Spread Networks. The trench for this cable was laid by cutting through my formidable native Alleghenies rather than following, as regular communications networks do, the old railway lines.

Slavin doesn’t point this out, but the roughly 13 milliseconds of round-trip time enjoyed by those using this cable is only partially explained by its direct route between Chicago and New York. The cable is also “dark fiber”, meaning its traffic does not need to compete with other messages zipping through it. It’s an exclusive line, the private jet of the web. Some alien archaeologist who stumbled across this cable would be able to read in it the economic inequality endemic to early 21st century life. The Egyptians had pyramids; we have a tube of glass hidden under one of the oldest mountain ranges on earth.
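A back-of-envelope calculation shows what that trench buys. Light in fiber travels at roughly two-thirds its vacuum speed, so route length translates directly into milliseconds. These are textbook approximations, not Spread Networks’ engineering figures:

```python
# Rough propagation delay over Spread Networks' Chicago-New York route.
# Light in glass fiber moves at about c / 1.5 (the refractive index).
C_VACUUM_KM_S = 299_792               # speed of light in vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S / 1.5    # ~200,000 km/s in fiber

route_km = 825 * 1.609344             # the 825-mile trench, in km

one_way_ms = route_km / C_FIBER_KM_S * 1000
print(f"one way:    {one_way_ms:.1f} ms")      # ~6.6 ms
print(f"round trip: {2 * one_way_ms:.1f} ms")  # ~13.3 ms
```

Every extra mile of meander costs about 8 microseconds each way, which is why a straighter trench through the Alleghenies was worth the digging.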

Perhaps the best writer on the intersection of digital technology and finance is the Wall Street Journal’s Scott Patterson, with books like his The Quants and, even more so, his Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market. In Dark Pools Patterson takes us right into the heart of the new algorithm-based markets, where a type of evolutionary struggle is raging that few of us are even aware of. There are algorithms that exist as a type of “predator”, using their speed to outmaneuver slow-moving “herbivores” such as the mutual funds and pension funds in which the majority of us little guys, if we have any investments at all, have our money parked. Predators, because they can make trades milliseconds faster than these “slow” funds, can see a change in market position, say the selling of a huge chunk of stock, and then pounce, taking an advantageous position relative to the sale, leaving the slow mover with much less than would have been gained, or much more than would have been lost, had these lightning-fast piranhas not been able to strike.

To protect themselves, the slow-moving funds have not only deployed things like “decoy” algorithms to throw the predators off their trail, but have shifted much of their trading into non-public markets, the “dark pools” of Patterson’s title. Yet even these pools have become infected with predator algos. No environment is safe; the evolutionary struggle goes on.
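The predator’s edge can be caricatured in a few lines of Python. The numbers and the linear “market impact” rule are invented for illustration; real market microstructure is far messier:

```python
# Toy model of latency "predation": a fast algorithm detects a big
# incoming buy order and trades ahead of it. Illustrative only.

book_price = 100.00               # current market price, dollars

def market_impact(shares):
    """Toy rule: big orders push the price against the buyer."""
    return 0.000001 * shares

fund_order = 500_000              # a pension fund's large buy order

predator_buy = book_price                           # fast algo buys first
new_price = book_price + market_impact(fund_order)  # fund's order moves price
profit_per_share = new_price - predator_buy         # predator sells into it

print(f"price moves to ${new_price:.2f}; "
      f"predator pockets ${profit_per_share:.2f}/share")
```

The fund pays the impact twice over: once from its own order’s footprint and once to the predator who saw it coming, which is exactly the incentive that drove the big funds into dark pools in the first place.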

However this ends, and it might end very badly, finance is not the only place where we have seen the rise of the algorithms. The books recommended for you by Amazon, or the movies Netflix informs you might make for a good movie night, are all based on sophisticated algorithmic models of who you are. The same kinds of algorithms that try to “understand” you are used by online dating services, and even shape your interaction with the person on the other end of the line at customer service.

Christopher Steiner, in his Automate This, points out that the little ditty at the beginning of every customer service call, “this call may be monitored…”, is used not so much, as we might be prone to think, to gauge the performance of the person who is supposed to help you with your problem as to add you to a database of personality types. Your call can then be routed to someone skilled in handling whatever personality type you have. Want no-nonsense answers? No problem! Want a shoulder to cry on? Ditto!
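The mechanics Steiner describes amount to a lookup: classify the caller from past recordings, then steer them to the matching agent pool. A sketch, with labels and rules invented for illustration rather than taken from any vendor’s actual system:

```python
# Sketch of personality-based call routing: callers profiled from
# earlier recorded calls are steered to agents who suit their style.

AGENT_POOLS = {
    "no_nonsense": ["agent_07", "agent_12"],  # fast, factual answers
    "empathetic":  ["agent_03", "agent_21"],  # a shoulder to cry on
}

# Profile database built up from "this call may be monitored" recordings.
CALLER_PROFILES = {"555-0142": "empathetic"}

def route_call(number):
    style = CALLER_PROFILES.get(number, "no_nonsense")  # default pool
    return AGENT_POOLS[style][0]

print(route_call("555-0142"))  # known caller, routed to the empathetic pool
print(route_call("555-9999"))  # unknown caller, routed to the default pool
```

The sorting itself is invisible to the caller, which is precisely Steiner’s point: the “monitoring” message reads as quality control but functions as data collection.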

The uber-dream of the big technology companies is to have their algorithms understand every element of our lives and “help” us to make decisions accordingly. Whether or not “help” should actually be in quotes is something for us as individuals, and even more so as a society, to decide. The main questions are how much of our privacy we are willing to give up in order to have such frictionless transactions, and whether this kind of guidance is a help or a hindrance to the self-actualization we all prize.

The company closest to achieving this algorithmic mastery over our lives is Google, as Steve Kovach points out in a recent article with the somewhat over-the-top title “Google’s plan to take over the world”. Most of us might think of Google as a mere search company that offers a lot of cool complements such as Google Earth. But, as its founders have repeatedly said, the ultimate goal of the company is to achieve true artificial intelligence, a global brain covering the earth.

Don’t think the state, which Nietzsche so brilliantly called “the coldest of all cold monsters”, hasn’t caught on to the new power and potential of algorithms. Just as with Wall Street firms and tech companies, the state has seized on the capabilities of advances in artificial intelligence and computing power which allow the scanning of enormous databases. Recent revelations regarding the actions of the NSA should have come as no surprise. Not conspiracy theorists, but reputable journalists such as the Washington Post’s Dana Priest had already informed us that the US government was sweeping up huge amounts of data about people all over the world, including American citizens, under a program with the Orwellian name of The Guardian. Reporting by James Bamford of Wired in March of last year had likewise told us that:

In the process—and for the first time since Watergate and the other scandals of the Nixon administration—the NSA has turned its surveillance apparatus on the US and its citizens. It has established listening posts throughout the nation to collect and sift through billions of email messages and phone calls, whether they originate within the country or overseas. It has created a supercomputer of almost unimaginable speed to look for patterns and unscramble codes. Finally, the agency has begun building a place to store all the trillions of words and thoughts and whispers captured in its electronic net.

The NSA scandals have the potential to shift the ground under US Internet companies, especially companies such as Google whose business model and philosophy are built around the idea of an open Internet. Countries now have even more reason to be energetic in pursuing “Internet sovereignty”, the idea that each country should have the right and power to decide how the Internet is used within its borders.

In many cases, such as in Europe, this might serve to protect citizens against the prying eyes of the US security state, but we should not be waving the flag of digitopia quite yet. There are likely to be many more instances of states using “Internet sovereignty” not to protect their people from US snoops, but to protect authoritarian regimes from the democratizing influences of the outside world. Algorithms, and the ecosystem of the Internet in which most of them exist, might be moving from being the vector of a new global era of human civilization to being just another set of tools in the arsenal of state power. Indeed, the use of algorithms as weapons, with the Internet as a means of delivery, is already well under way.

At this early date it’s impossible to know whether the revolution in algorithms will ultimately be for the benefit of tyranny or freedom. As of right now I’d unfortunately have to vote for the tyrants. The increased ability to gather and find information in huge pools of data has, as is well known, given authoritarian regimes such as China an ability to spy on their netizens that would make the more primitive totalitarians of the 20th century salivate. Authoritarians have even leveraged the capacities of commercial firms to increase their own power, a fact that goes unnoticed when people discuss the anti-authoritarian “Twitter Revolutions” and the like.

Such was the case in Tunisia during its revolution in 2011, where the state was able to leverage the power of a commercial company, Facebook, to spy on its citizens. Of course, resistance is fought through the Internet as well. As Parmy Olson points out in her We Are Anonymous, it was not the actions of the US government but those of one of the most politically motivated members of the hacktivist groups Anonymous and LulzSec, a man with the moniker “Sabu”, who later turned FBI informant, that launched a pushback against this authoritarian takeover of the Internet. Evidence, if there ever was any, that hacktivism, even when using Distributed Denial of Service (DDoS) attacks, can be a legitimate form of political speech.

Yet, unlike in the movies, even the rebels in this story aren’t fully human. Anonymous’s most potent weapon, the DDoS attack, relies on algorithmic bots that infect or inhabit host computers and then strike at some set moment, causing a system to crash under the surge in traffic. Still, it isn’t the hacktivism of groups like Anonymous and LulzSec that should worry any of us, but the weaponization of the Internet by states, corporations and criminals.

Perhaps the first well-known example of a weaponized algorithm was the Stuxnet worm deployed by the US, Israel, or both against the Iranian nuclear program. This was a very smart computer worm that could find and disable valuable pieces of Iran’s nuclear infrastructure, leaving one to wonder whether the algo wars on Wall Street are just a foretaste of a much bigger and more dangerous evolutionary struggle.

Hacktivist groups like Anonymous or LulzSec have made DDoS attacks famous. What I did not know, until I read Parmy Olson, is that companies are using botnets to attack other companies, as when Bollywood studios used the firm AiPlex to launch DDoS attacks against well-known copyright violators such as The Pirate Bay. What this in all likelihood means is that AiPlex infiltrated perhaps millions of computers (maybe your computer), unknown to their owners, to take down sites whose pirated materials you might never have viewed. Indeed, it seems the majority of DDoS attacks are little but apolitical thuggery: mobsters blackmailing gambling houses with takedowns on large betting days and that sort of Sopranos-esque thing.

Indeed, the “black hats” of criminal groups are embracing the algorithmic revolution with abandon. A lot of this is just annoying: it’s algorithms that keep sending you all those advertisements about penis enlargement or unclaimed lottery winnings, but it doesn’t stop there. One of the more disturbing things I took away from Mark Bowden’s Worm: The First Digital World War is that criminals who don’t know the first thing about programming can buy “kits”, crime algorithms they can customize to, say, find and steal your credit card information by hacking into your accounts. The criminal behind this need only press a few buttons and voilà! He’s got himself his very own cyber-burglar.

The most advanced of these criminal algorithms, though for all we know it might be the weapon of some state or terrorist group, is the Conficker worm, the subject of Bowden’s book. It not only infected millions of computers by exploiting a hole in Windows (can you believe it?!) but also created the mother of all botnets, an algorithm capable of taking down large parts of the Internet if it chose, yet which for whatever reason just sits there without doing a damned thing.

As for algorithms and less kinetic forms of conflict, the Obama campaign of 2012 combined the capacity to sort huge data sets with the ability to sort individuals by psychological and social profile, the same mix we see being used by tech companies and customer-service firms. Just like the telemarketers or CSRs, the Obama campaign was able to tailor its approach to the individual on the other end of its canvassing, amplifying its persuasive power. That such mobilizing prowess has not also led to an actual capacity to govern is another matter.
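To make the idea concrete, here is a deliberately crude sketch of profile-based tailoring. The profiles, rules and messages are all invented, standing in for the statistical models and message testing a real campaign would use:

```python
# Hypothetical sketch of profile-based message tailoring: sort voters into
# buckets, then hand the canvasser the appeal most likely to move that bucket.
# Every profile, rule and message here is invented for illustration.

MESSAGES = {
    "young_urban": "Talk about student debt and transit.",
    "suburban_parent": "Talk about schools and healthcare.",
    "retiree": "Talk about protecting Social Security.",
}

def profile(voter: dict) -> str:
    # Crude hand-written rules standing in for a learned statistical model.
    if voter["age"] >= 65:
        return "retiree"
    if voter["has_children"] and not voter["urban"]:
        return "suburban_parent"
    return "young_urban"

def script_for(voter: dict) -> str:
    """Pick the canvassing script matched to this voter's profile."""
    return MESSAGES[profile(voter)]

voter = {"age": 41, "has_children": True, "urban": False}
print(script_for(voter))   # Talk about schools and healthcare.
```

The amplification comes from doing this at scale: run the same sorting over a database of millions and every door-knock or phone call arrives pre-tuned to its listener.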

All this is dark, depressing stuff, I know. So I should end with a little light. As Steiner points out in his Automate This, our newfound power to play with huge data sets and, in what sounds like an oxymoron, to customize automation promises a whole host of amazing benefits. One of these might be a 24/7 personal AI “physician” that monitors our health and drastically reduces medical costs, a real boon for treating underserved patients, whether in rural Appalachia or the developing world.

Steiner is also optimistic when it comes to artists. Advanced algorithms, and they’ll only get better, now allow buyers to link with sellers in a way that has never been possible before. A movie company might be searching for a particular piece of music for its film. Now, through a service like Music-Xray, the otherwise invisible musician can be found.

Here I have to put my pessimist cap back on for just a minute, for the idea that algorithms can help artists be viable is, as of this writing, just that: a hope. Sadly, there is little evidence for it in reality. This is a point hit home by the recent Atlantic Online article “The Reality of the Music Business Today: 1 Million Plays = $16.89.” The algorithm used by the Internet music service Pandora may have helped a million people find musician David Lowery and his song “Low,” but its business model seems incapable of providing even successful musicians with meaningful income. The point that the economic model we have built around the “guy with the biggest computer” has been a bust for artists of all sorts is most strongly driven home by the virtual-reality pioneer and musician Jaron Lanier. Let’s hope Lanier is ultimately wrong and algorithms eventually provide a way of linking artists and their patrons, but we are far, far from there yet. At the very least they should provide artists and writers with powerful tools to create their works.

There are more grounds for optimism. The advance of algorithms is one of the few lit paths out of our current economic malaise. Their rise appears to signal that the deceleration in innovation, which emerged from the gap between the flood of information we could gather and the new discoveries we were making, on the one hand, and our ability to model those discoveries coherently, on the other, may be coming to an end almost as soon as it was identified. Advanced algorithms should allow us to make potent and amazing new models of the natural world. In the long run they may allow us to radically expand the artistic, philosophical and religious horizons of intelligence, creating visions of the world of which we can today barely dream.

On a more practical and immediate level, advanced algorithms that can handle huge moving pieces of information seem perfect for tasks such as responding to a natural disaster or managing the day-to-day flows of a major city, routing traffic or managing services, something IBM is pioneering with its Smart Cities projects in New York City and Rio de Janeiro.
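The routing half of that claim rests on classic graph algorithms. As a toy illustration, here is Dijkstra’s shortest-path algorithm, the textbook workhorse behind route planning, run on an invented four-intersection road network:

```python
import heapq

# Invented road network: travel times in minutes between intersections.
roads = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 1, "D": 5},
    "C": {"A": 2, "B": 1, "D": 8},
    "D": {"B": 5, "C": 8},
}

def fastest_route(start: str, goal: str):
    """Dijkstra's algorithm: always expand the cheapest route found so far."""
    queue = [(0, start, [start])]   # (total minutes, current node, path)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in roads[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return None  # no route exists

print(fastest_route("A", "D"))   # (8, ['A', 'C', 'B', 'D'])
```

Note that the fastest route from A to D is not the direct-looking A–B–D hop but a detour through C; a city-scale system does the same computation over millions of road segments, continuously refreshed with live traffic data.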

What algorithms are not good at, at least so far, and we can see this in everything from the political aftermath of the Obama campaign, to the war on terrorism, to the 300,000-person protests in Rio this past week despite how “smart” the city is, is expanding their horizon beyond the immediate present to give us solutions for the long-term political, economic and social challenges we confront. Instead they merely act as a globe-sized amplifier of such grievances, one that can bring down governments but cannot create a lasting political order. To truly solve our problems we still need the mother of all bots: collective human intelligence. I am still old-fashioned enough to call it democracy.